[00:11:52] 10Gerrit, 06Release-Engineering-Team, 06Operations, 10hardware-requests, 13Patch-For-Review: Requesting 1 spare misc box for Gerrit in codfw - https://phabricator.wikimedia.org/T148187#3120420 (10faidon) [00:17:54] If an extension doesn't have a jsduck.json config file, where is Jenkins getting the config for tests for that extension? From the same file in core? [01:08:12] 10Gerrit, 06Operations, 10Ops-Access-Requests: archiva-deploy password for Chad H. - https://phabricator.wikimedia.org/T161067#3120509 (10demon) [01:13:34] 10Gerrit, 06Release-Engineering-Team: Update gerrit to 2.13.6 - https://phabricator.wikimedia.org/T158946#3120525 (10Paladox) Looks like 2.13.7 will be released really soon with some fixes for ) https://gerrit-review.googlesource.com/#/c/100810/ [01:16:51] 06Release-Engineering-Team, 06Operations, 06Parsing-Team, 07HHVM, 07Wikimedia-Incident: API cluster failure / OOM - https://phabricator.wikimedia.org/T151702#3120540 (10Krinkle) [01:26:35] paladox: ^ ? [01:26:46] (my previous question from like 1 hr ago.... ;P) [01:36:21] 10Gerrit, 06Release-Engineering-Team: Update gerrit to 2.13.6 - https://phabricator.wikimedia.org/T158946#3120549 (10demon) The 2.13.4..2.13.7 diff looks good, we'll target it after I'm back from vacation. 
[05:46:38] Project mediawiki-core-code-coverage build #2649: 04STILL FAILING in 2 hr 46 min: https://integration.wikimedia.org/ci/job/mediawiki-core-code-coverage/2649/ [07:20:15] 06Release-Engineering-Team (Deployment-Blockers), 05Release: MW-1.29.0-wmf.16 deployment blockers - https://phabricator.wikimedia.org/T158997#3120834 (10mmodell) 05Open>03Resolved [07:20:18] Project beta-update-databases-eqiad build #15823: 04FAILURE in 18 sec: https://integration.wikimedia.org/ci/job/beta-update-databases-eqiad/15823/ [08:20:17] Project beta-update-databases-eqiad build #15824: 04STILL FAILING in 15 sec: https://integration.wikimedia.org/ci/job/beta-update-databases-eqiad/15824/ [08:43:12] !log deployment-ms-be01: swift-init reload object - T160990 [08:43:16] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [08:43:16] T160990: deployment-ms-be01.deployment-prep and deployment-ms-be02.deployment-prep have high load / system CPU - https://phabricator.wikimedia.org/T160990 [08:45:24] !log deployment-ms-be01: swift-init reload container - T160990 [08:45:27] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [08:48:00] !log deployment-ms-be01: swift-init reload all - T160990 [08:48:03] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [08:53:48] * hashar whistles having a coffee and breaking swift on beta [08:59:01] 10Beta-Cluster-Infrastructure, 10media-storage: deployment-ms-be01.deployment-prep and deployment-ms-be02.deployment-prep have high load / system CPU - https://phabricator.wikimedia.org/T160990#3120947 (10hashar) On deployment-ms-be01 I reload the `object` server with 30 workers that might have helped. There... 
[09:04:46] 10Beta-Cluster-Infrastructure, 06Labs, 10media-storage: Rebalance deployment-ms-be01 and deployment-ms-be02 so they run on different labvirt - https://phabricator.wikimedia.org/T161083#3120949 (10hashar) [09:15:41] 10Beta-Cluster-Infrastructure, 10media-storage: On beta enable swift statsd metric - https://phabricator.wikimedia.org/T161084#3120970 (10hashar) [09:15:58] 10Beta-Cluster-Infrastructure, 10media-storage: On beta enable swift statsd metric - https://phabricator.wikimedia.org/T161084#3120970 (10hashar) [09:20:18] Project beta-update-databases-eqiad build #15825: 04STILL FAILING in 17 sec: https://integration.wikimedia.org/ci/job/beta-update-databases-eqiad/15825/ [09:32:56] 10Beta-Cluster-Infrastructure, 10media-storage: deployment-ms-be01.deployment-prep and deployment-ms-be02.deployment-prep have high load / system CPU - https://phabricator.wikimedia.org/T160990#3121017 (10hashar) The best I can tell is: * lowering number of workers might help * most of the time is spent in the... 
[09:45:01] !log beta: purging all Linux kernel from Swift instances [09:45:04] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [09:53:57] 10Continuous-Integration-Infrastructure: Upgrade git package on zuul-merger instances contint1001 / contint2001 to benefit git-daemon - https://phabricator.wikimedia.org/T161086#3121040 (10hashar) [10:15:37] 10Gerrit, 06Release-Engineering-Team: Update gerrit to 2.13.7 - https://phabricator.wikimedia.org/T158946#3121207 (10Paladox) [10:20:16] Project beta-update-databases-eqiad build #15826: 04STILL FAILING in 16 sec: https://integration.wikimedia.org/ci/job/beta-update-databases-eqiad/15826/ [10:49:02] (03PS2) 10Hashar: SmashPig: trigger composer on both branches [integration/config] - 10https://gerrit.wikimedia.org/r/343862 [10:49:08] (03CR) 10Hashar: [C: 032] SmashPig: trigger composer on both branches [integration/config] - 10https://gerrit.wikimedia.org/r/343862 (owner: 10Hashar) [10:50:06] (03Merged) 10jenkins-bot: SmashPig: trigger composer on both branches [integration/config] - 10https://gerrit.wikimedia.org/r/343862 (owner: 10Hashar) [10:56:10] (03CR) 10Hashar: [C: 032] "The two jobs are triggered on each of master and deployment branches and they both pass :-}" [integration/config] - 10https://gerrit.wikimedia.org/r/343862 (owner: 10Hashar) [11:06:00] beta is broken. 
The revert patch https://gerrit.wikimedia.org/r/#/c/344106/ should fix it [11:08:04] PROBLEM - Puppet run on integration-c1 is CRITICAL: CRITICAL: 100.00% of data above the critical threshold [0.0] [11:16:56] hashar https://gerrit.wikimedia.org/r/#/c/344109/ [11:17:04] hasharLunch ^^ [11:20:06] Project beta-update-databases-eqiad build #15827: 04STILL FAILING in 5.5 sec: https://integration.wikimedia.org/ci/job/beta-update-databases-eqiad/15827/ [11:39:41] 10Gerrit, 06Release-Engineering-Team: Update gerrit to 2.13.7 - https://phabricator.wikimedia.org/T158946#3121485 (10Paladox) [11:42:59] 10Gerrit, 07LDAP, 07Upstream: underscores in usernames are not recognized - https://phabricator.wikimedia.org/T50774#3121503 (10Paladox) Hopefully i fixed it in https://gerrit-review.googlesource.com/#/c/100799/ :) [12:00:21] 10Browser-Tests-Infrastructure, 07JavaScript, 15User-zeljkofilipin: Write documentation on Selenium tests in Node.js - https://phabricator.wikimedia.org/T161103#3121559 (10zeljkofilipin) [12:01:51] 10Browser-Tests-Infrastructure, 05Continuous-Integration-Scaling, 13Patch-For-Review, 15User-zeljkofilipin: migrate mwext-mw-selenium to Nodepool instances - https://phabricator.wikimedia.org/T137112#3121588 (10zeljkofilipin) Is this resolved? 
[12:03:39] 10Browser-Tests-Infrastructure, 10MediaWiki-General-or-Unknown, 07JavaScript, 05MW-1.29-release (WMF-deploy-2017-03-21_(1.29.0-wmf.17)), and 4 others: Port Selenium tests from Ruby to Node.js - https://phabricator.wikimedia.org/T139740#3121596 (10zeljkofilipin) [12:06:50] 10Browser-Tests-Infrastructure, 07JavaScript, 15User-zeljkofilipin: Write documentation on Selenium tests in Node.js - https://phabricator.wikimedia.org/T161103#3121606 (10zeljkofilipin) [12:11:50] 10Gerrit, 07LDAP, 07Upstream: underscores in usernames are not recognized - https://phabricator.wikimedia.org/T50774#3121614 (10Paladox) It shows here https://github.com/gerrit-review/gerrit/blob/49df12cb7da00f9298d8a37e231d55ecc83fa0c5/gerrit-httpd/src/main/java/com/google/gerrit/httpd/auth/ldap/LdapLoginSe... [12:20:05] Project beta-update-databases-eqiad build #15828: 04STILL FAILING in 5.3 sec: https://integration.wikimedia.org/ci/job/beta-update-databases-eqiad/15828/ [12:38:23] hasharLunch: hey, when you're back from lunch please check https://gerrit.wikimedia.org/r/#/c/343738/ [13:13:44] (03CR) 10Hashar: [C: 04-1] "In short CI is CPU starved and lengthening the timeout would only makes the issue worse. Executors will be kept busy longer and prevent o" [integration/config] - 10https://gerrit.wikimedia.org/r/343738 (owner: 10Ladsgroup) [13:14:00] Amir1: replied [13:14:21] in short CI jobs have not enough CPU because they are ultimately all running on the same host that is more or less at 100% CPU already :( [13:14:28] RECOVERY - Long lived cherry-picks on puppetmaster on deployment-puppetmaster02 is OK: OK: Less than 100.00% above the threshold [0.0] [13:15:26] Thanks. Hmm. 
I hope we can find a way [13:15:34] let me read and see what's possible [13:20:06] Project beta-update-databases-eqiad build #15829: 04STILL FAILING in 6.2 sec: https://integration.wikimedia.org/ci/job/beta-update-databases-eqiad/15829/ [13:24:04] 10Continuous-Integration-Infrastructure (Little Steps Sprint), 10Wikidata: Revisit Jenkins jobs being triggered for Wikibase - https://phabricator.wikimedia.org/T160989#3117613 (10Ladsgroup) >>! In T160989#3118490, @Addshore wrote: >> Drop the Zend PHP 5.5 jobs entirely >> Wikimedia is on HHVM and we don't rea... [13:32:29] 10Continuous-Integration-Infrastructure (Little Steps Sprint), 10Wikidata: Revisit Jenkins jobs being triggered for Wikibase - https://phabricator.wikimedia.org/T160989#3121773 (10Addshore) >>! In T160989#3121754, @Ladsgroup wrote: > What about moving it to Travis? It would help because the number of triggered... [13:35:55] hashar There is a security update for the ssh slaves plugin. [13:37:03] twentyafterfour, RainbowSprinkles, I think phab is ailing :( [13:37:14] 'Unable to establish a connection to any database host (while trying "phabricator_policy"). All masters and replicas are completely unreachable.' [13:37:39] oh, maybe DNS and nothing to do with phab... [13:39:22] andrewbogott it's working again [13:39:29] Most likely it was the dns thing. 
[13:39:32] yep [13:39:46] PROBLEM - Puppet run on deployment-restbase01 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [13:40:04] PROBLEM - Puppet run on integration-publishing is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [13:40:30] PROBLEM - Puppet run on deployment-aqs03 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0] [13:42:20] PROBLEM - Puppet run on saucelabs-01 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [13:49:08] PROBLEM - Puppet run on buildlog is CRITICAL: CRITICAL: 100.00% of data above the critical threshold [0.0] [13:50:00] 06Release-Engineering-Team (Deployment-Blockers), 05Release: MW-1.29.0-wmf.17 deployment blockers - https://phabricator.wikimedia.org/T160549#3121890 (10hashar) [13:51:02] 06Release-Engineering-Team (Deployment-Blockers), 05Release: MW-1.29.0-wmf.17 deployment blockers - https://phabricator.wikimedia.org/T160549#3103115 (10hashar) HHVM 3.12 -> 3.18 has been done on canaries so there might be some additional log spam such as {T161095} [13:51:04] hashar i've started replacing trilead-ssh2 with apache sshd in https://github.com/jenkinsci/ssh-slaves-plugin/pull/47/files (fails) [13:57:04] (03CR) 10Hashar: [C: 04-1] "Looks fine. The CI CPU usage is starved though :( So I am holding adding more jobs until T161006 is solved. Should be soon ™" [integration/config] - 10https://gerrit.wikimedia.org/r/324719 (https://phabricator.wikimedia.org/T139740) (owner: 10Zfilipin) [14:00:03] RECOVERY - Puppet run on integration-publishing is OK: OK: Less than 1.00% above the threshold [0.0] [14:06:46] hashar: kinda surprised it doesn't ddos beta :P [14:07:04] Reedy: it ? [14:07:12] scanner00 [14:07:16] ah yeah [14:07:20] do you have access to it ? [14:07:30] looks like some process is faulty in a death loop [14:07:49] we had a task filed yesterday that showed some brute force of accounts that do not even exist. 
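For reference, the crontab entry Reedy pastes just below uses the standard five scheduling fields (minute, hour, day-of-month, month, day-of-week); the script path is the one found on scanner00. Annotated as a crontab fragment:

```shell
# min hour dom mon dow   command
# 30  4    *   *   1     = 04:30 every Monday
30 4 * * 1 ~/workspace/wikimedia-security-automated-scanning/bin/scan_from_cron.sh
```

Note that cron itself never checks whether a previous run is still alive, which is consistent with the "respawns unconditionally" behaviour described above.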
Maybe that is scanner00 :-} [14:09:11] heh [14:09:17] I just gave myself access to it [14:09:46] I think it's just a badly set up cronjob [14:09:54] That respawns unconditionally [14:10:05] Regardless of whether the original is still there [14:11:44] 30 4 * * 1 ~/workspace/wikimedia-security-automated-scanning/bin/scan_from_cron.sh [14:11:47] hashar: ^ [14:12:48] Though, the scanner user doesn't have a workspace folder in their home dir [14:13:27] Looks like ZAP hasn't been updated since 2015 either [14:17:22] RECOVERY - Puppet run on saucelabs-01 is OK: OK: Less than 1.00% above the threshold [0.0] [14:19:47] RECOVERY - Puppet run on deployment-restbase01 is OK: OK: Less than 1.00% above the threshold [0.0] [14:20:30] RECOVERY - Puppet run on deployment-aqs03 is OK: OK: Less than 1.00% above the threshold [0.0] [14:21:39] Yippee, build fixed! [14:21:39] Project beta-update-databases-eqiad build #15830: 09FIXED in 1 min 38 sec: https://integration.wikimedia.org/ci/job/beta-update-databases-eqiad/15830/ [14:24:51] Reedy: maybe it got set up by Chris Steipp a long time ago [14:25:13] Reedy: guess you can report your finding on the task and figure out with dapatrick whether that is still of any use ;) [14:31:06] I know he's evaluating stuff [14:31:14] It might be that people don't know it's still running [14:31:23] probably [14:31:35] there is still one process running right? [14:33:58] yeah [14:34:00] I left one [14:34:03] Could kill that [14:34:21] hashar: I dunno why, but no processes started since January [14:34:28] Dunno if that's because it's overloaded or what [14:47:30] hashar: Ugh [14:47:34] It's auto-installing kernel updates [14:47:35] /dev/vda1 20G 8.9G 9.2G 50% / [14:47:41] Let's see how much autoremove frees up [15:00:00] yeah kernels are not auto-purged :( [15:00:00] apt-get autoremove --purge [15:00:08] and the apt cache is never purged afaik, so: apt-get clean [15:00:52] hashar: use a cron job to purge? [15:01:05] maybe [15:01:28] What are you purging exactly? 
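The cleanup hashar outlines above, for an instance where unattended upgrades have left old kernels filling the root filesystem, can be sketched as the following root-only fragment; it simply ties the two commands together and is not a policy recommendation:

```shell
# Run as root on the affected instance (e.g. scanner00).
apt-get autoremove --purge   # remove no-longer-needed packages (old kernels) and purge their config files
apt-get clean                # delete cached .deb files from /var/cache/apt/archives
df -h /                      # confirm how much space was reclaimed on the root filesystem
```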
[15:02:30] PROBLEM - Puppet run on deployment-aqs02 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [15:03:42] Does shinken have a rename cmd? The underscore messes up my collapse-msg script I made [15:42:30] RECOVERY - Puppet run on deployment-aqs02 is OK: OK: Less than 1.00% above the threshold [0.0] [16:03:20] (03PS1) 10Umherirrender: Add non-voting unit tests [integration/config] - 10https://gerrit.wikimedia.org/r/344162 [16:04:53] (03CR) 10Paladox: [C: 04-1] Add non-voting unit tests (032 comments) [integration/config] - 10https://gerrit.wikimedia.org/r/344162 (owner: 10Umherirrender) [16:10:19] (03PS2) 10Umherirrender: Add non-voting unit tests [integration/config] - 10https://gerrit.wikimedia.org/r/344162 [16:10:55] (03CR) 10Paladox: [C: 031] "thanks." [integration/config] - 10https://gerrit.wikimedia.org/r/344162 (owner: 10Umherirrender) [16:11:09] (03CR) 10Umherirrender: "Patch Set 2: Fixed newlines" [integration/config] - 10https://gerrit.wikimedia.org/r/344162 (owner: 10Umherirrender) [16:18:50] 10Browser-Tests-Infrastructure, 10MediaWiki-General-or-Unknown, 07JavaScript, 05MW-1.29-release (WMF-deploy-2017-03-21_(1.29.0-wmf.17)), and 4 others: Port Selenium tests from Ruby to Node.js - https://phabricator.wikimedia.org/T139740#3122233 (10zeljkofilipin) [16:19:13] 10Browser-Tests-Infrastructure, 10MediaWiki-General-or-Unknown, 07JavaScript, 05MW-1.29-release (WMF-deploy-2017-03-21_(1.29.0-wmf.17)), and 4 others: Port Selenium tests from Ruby to Node.js - https://phabricator.wikimedia.org/T139740#2441243 (10zeljkofilipin) [16:26:02] 10Gerrit, 07LDAP, 07Upstream: underscores in usernames are not recognized - https://phabricator.wikimedia.org/T50774#3122266 (10demon) 05Open>03declined >>! In T50774#3121614, @Paladox wrote: > It shows here https://github.com/gerrit-review/gerrit/blob/49df12cb7da00f9298d8a37e231d55ecc83fa0c5/gerrit-http... 
[17:29:10] 10Browser-Tests-Infrastructure, 05Continuous-Integration-Scaling, 13Patch-For-Review, 15User-zeljkofilipin: migrate mwext-mw-selenium to Nodepool instances - https://phabricator.wikimedia.org/T137112#3122457 (10hashar) Pending: test: invoke rspec directly https://gerrit.wikimedia.org/r/330856 [18:00:16] 06Release-Engineering-Team (Deployment-Blockers), 05Release: MW-1.29.0-wmf.17 deployment blockers - https://phabricator.wikimedia.org/T160549#3122522 (10demon) [18:16:51] Project mediawiki-core-code-coverage build #2650: 04STILL FAILING in 3 hr 16 min: https://integration.wikimedia.org/ci/job/mediawiki-core-code-coverage/2650/ [19:04:44] Reedy: Yes, the automated scanning badly needs some love. [19:06:05] Reedy, Hashar Feel free to disable the cron job that runs the scans for now. I'll re-enable after I have time to focus on that project. [19:32:39] Does anyone know if the train is happening today? (thcipriani greg-g) [19:32:54] FYI the patch for the Flow SQL error was merged and I have a cherry-pick lined up for it: https://gerrit.wikimedia.org/r/#/c/344188/ [19:33:11] RoanKattouw: awesome, I'm merging some cherry-picks for other blockers now [19:33:14] Cool [19:33:18] Yeah I saw there were a number this week [19:33:23] indeed [19:34:33] +2'd thanks for the patch :) [19:35:59] RoanKattouw: The Flow one was just bad timing, but glad we got it fixed [19:36:01] Thx for that [19:36:21] Yeah I'm glad we finally found it [19:36:27] It's been a mystery for over a year [19:36:55] And with some very useful grepping from Reedy and debugging help from ebernhardson I managed to fix it [19:37:07] However, it's also depressing because the functionality that I fixed doesn't actually work [19:37:39] I fixed it in that it no longer throws SQL errors, but it doesn't actually do what it's supposed to do, and fixing that will be very complicated [19:39:09] a vignette that is a microcosm of my experience with software generally. 
[19:49:53] RainbowSprinkles https://groups.google.com/forum/#!topic/repo-discuss/81UIOL1ai6s (making cookbook plugin in gerrit readonly replacing it with the examples plugin) :) [20:04:50] 06Release-Engineering-Team (Deployment-Blockers), 05Release: MW-1.29.0-wmf.17 deployment blockers - https://phabricator.wikimedia.org/T160549#3123100 (10Krinkle) [20:41:09] 10Deployment-Systems, 10Scap, 07Puppet: Unify co-master sync - https://phabricator.wikimedia.org/T161156#3123212 (10demon) [20:41:20] 10Deployment-Systems, 10Scap, 07Puppet: Unify co-master sync - https://phabricator.wikimedia.org/T161156#3123224 (10demon) p:05Triage>03Low [20:41:43] 10Deployment-Systems, 10Scap: Unify co-master sync - https://phabricator.wikimedia.org/T161156#3123212 (10demon) [20:47:21] 06Release-Engineering-Team (Deployment-Blockers), 05Release: MW-1.29.0-wmf.17 deployment blockers - https://phabricator.wikimedia.org/T160549#3123234 (10thcipriani) Looks like most blockers got movement if not 100% resolution. I just sync'd out wmf.17 to group0. Monitoring now. Thanks all for the quick actions... [21:35:40] 10Gerrit, 06Operations, 10Ops-Access-Requests: archiva-deploy password for Chad H. - https://phabricator.wikimedia.org/T161067#3123353 (10Dzahn) Of course just sharing the password is easiest, but the perfect solution would be if we add another group to pwstore to handle this long-term. right, @Muehlenhoff ? [21:40:29] Hi relengineers! [21:40:38] if anyone has a chance to merge/deploy this one, I think it's ready to try again: [21:40:41] https://gerrit.wikimedia.org/r/336960 [22:15:33] hashar ^^ [22:23:33] twentyafterfour hi i get this error https://phabricator.wikimedia.org/P5114 when visiting https://phab-01.wmflabs.org/config/cluster/search/ [22:23:41] "This page raised PHP errors. Find them in DarkConsole or the error log." [22:23:49] hmm [22:27:57] paladox: try pulling wmf/stable and try again? 
[22:28:11] 55e49eb should fix it [22:28:25] Ok, could you press the update button on the phab repo please? [22:28:31] Should make the commit appear faster [22:28:49] ok git pulled now [22:28:54] the index issue has shown again [22:28:54] https://phab-01.wmflabs.org/config/issue/elastic.broken-index/ [22:28:59] twentyafterfour ^^ [22:29:28] also on https://phab-01.wmflabs.org/config/cluster/search/ it shows elasticsearch status as failed [22:29:34] but elasticsearch is working [22:29:48] oh let me delete the index [22:29:51] and try a reindex [22:30:51] ok [22:30:59] it shouldn't require reindexing but hmm [22:31:33] shows status: okay now [22:31:51] yep [22:32:21] Seems after deleting the index and recreating it. it works now. [22:32:37] ok... I wonder why it required reindex... [22:33:11] yep. [22:33:23] Does the problem happen on prod? [22:48:31] (03Abandoned) 10Ladsgroup: Increase timeout for Wikidata jobs from 30 minutes to 40 minutes [integration/config] - 10https://gerrit.wikimedia.org/r/343738 (owner: 10Ladsgroup) [22:55:02] paladox: I hope not [22:55:10] Ok [23:05:21] Who could review https://gerrit.wikimedia.org/r/#/c/306484/ ? I'm basically emptying an entire Git repo as that extension has moved its code to GitHub. And I'm clueless if I'm doing it right. Feedback / review very welcome... Thanks in advance! [23:06:22] andre__: LGTM [23:06:39] Reedy, thanks! Would you like to merge? :P [23:06:43] About the only comment... Is possibly adding a README or GONE_TO_GITHUB [23:06:54] why emptying the repo? [23:07:11] can't it simply be kept as a github mirror? [23:07:14] ala https://github.com/wikimedia/mediawiki-extensions-SemanticMediaWiki/blob/master/GONE_TO_GITHUB.txt [23:07:16] Platonides, so people know when pulling that the code isn't updated anymore [23:07:23] Platonides: We can't mirror from github AFAIK [23:07:32] :( [23:07:51] Is that extension deployed? 
[23:07:51] Reedy: whatever you want me to change (or not), feel free to add a comment in Gerrit (or approve, or not). [23:08:04] I just want that task off my list after all these months [23:08:07] PdfBook? No [23:08:40] (I think we require deployed ext's to be in Wikimedia Git/Gerrit anyway?) [23:08:51] I think so, too [23:09:10] We can't even deploy from Phab atm ;P [23:09:24] wouldn't just a daily cron work for mirroring the repo, anyway? [23:09:28] I don't think https://phabricator.wikimedia.org/diffusion/MREL/browse/master/make-wmf-branch/config.json or https://phabricator.wikimedia.org/source/mediawiki-config/browse/master/wmf-config/extension-list work well with external URLs [23:09:40] extension-list doesn't care [23:09:43] As that's what's in the branch [23:09:46] but config.json doesn't :) [23:10:03] I've never understood the details, obviously :P [23:17:06] Reedy: Thanks. Amended: https://gerrit.wikimedia.org/r/#/c/306484/ [23:17:23] hashar twentyafterfour im updating the arcanist module for homebrew in https://github.com/Homebrew/homebrew-php/pull/4056 :) [23:18:02] We don't want people using homebrew since it doesn't target our version of Phabricator exactly [23:18:12] (same reason we tell people not to use deb packages) [23:18:19] We provide a wmf/stable branch [23:18:22] https://github.com/OrganicDesign/extensions [23:18:22] wt [23:18:23] f [23:18:33] These people are using a git repo like we used the SVN repo [23:18:40] https://github.com/OrganicDesign/extensions/tree/master/MediaWiki/PdfBook [23:19:01] https://github.com/OrganicDesign/extensions/blame/master/MediaWiki/PdfBook/extension.json [23:19:05] Yes they do. [23:19:05] The mind boggles [23:19:20] * Reedy imagines RainbowSprinkles puking currently [23:19:35] Using git like that? Yes. But there's something to be said for the simplicity of it [23:19:40] I miss that about SVN, no lie [23:19:52] RainbowSprinkles it will soon target our version. 
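The "emptying a repo" change andre__ asks about above amounts to removing every tracked file and leaving a single tombstone file, as in the SemanticMediaWiki GONE_TO_GITHUB.txt example Reedy links. A self-contained sketch (the throwaway repo, file names, and commit message are illustrative; the real change of course went through Gerrit review rather than a direct push):

```shell
set -e
# Build a throwaway repo so the sketch is self-contained; in reality this
# would be a checkout of the extension repository.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "editor@example.org"
git config user.name "editor"
echo '<?php // old extension code' > PdfBook.php
git add . && git commit -qm "old state"

# The actual emptying, as discussed on the change:
git rm -rq .                          # stage removal of every tracked file
cat > GONE_TO_GITHUB.txt <<'EOF'
This repository is no longer updated.
Development of this extension has moved to GitHub.
EOF
git add GONE_TO_GITHUB.txt
git commit -qm "Empty repo; development moved to GitHub"
git show --stat --oneline HEAD
```

Anyone pulling afterwards sees only the tombstone, which is the point Platonides raises: the clone still works, it just tells you where the code went.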
[23:20:33] + url "https://github.com/wikimedia/arcanist/archive/release/2017-03-08/1.tar.gz" [23:20:33] + sha256 "8edb125944f9aa3dc5ea082f2b080543ae324dba3cb055476f1db40219f222e6" [23:20:33] + version "201703081" [23:21:00] Yes, but will we update homebrew every time we update wmf/stable? [23:21:04] paladox: why use homebrew to install arcanist? It's really just as simple as cloning the git repo, no install required [23:21:05] What if we have wmf/stable hacks we want? [23:21:36] twentyafterfour i don't use homebrew for arcanist. But apparently other users do who found that arcanist is broken because of the version in homebrew. [23:21:38] There's a reason I put those warnings on the docs. [23:21:39] +1 to wmf hacks... people who aren't using our phab won't want our hacks [23:22:04] twentyafterfour I build my own arcanist installer [23:22:06] https://github.com/paladox/Arcanist-installer-for-mac [23:22:10] paladox: I mean if people outside WMF use homebrew for arcanist that's on them [23:22:21] But for wmf, you absolutely should follow the docs I wrote [23:22:47] * bd808 refuses to follow directions and invents new problems just for kicks [23:22:48] oh [23:22:56] https://www.mediawiki.org/wiki/Phabricator/Arcanist [23:23:04] Warning Warning: Please install Arcanist using git (the first option below) as it will ensure API compatibility. Using pre-packaged alternatives (eg: Debian repositories, Homebrew or anything that's not on this page) will highly likely cause API compatibility issues. [23:23:13] RainbowSprinkles i built my own installer. It's as simple as doing git submodule update --init --recursive. 
then run a shell script [23:23:25] it then registers arcanist path [23:24:05] RainbowSprinkles i've done the same for windows https://github.com/paladox/Arcanist-installer-for-windows [23:24:06] Your own installer wrapper around the WMF branch version is fine [23:24:10] But not homebrew [23:24:14] yep [23:24:14] That follows upstream, not us [23:24:30] but homebrew won't accept upstreams [23:24:36] because they don't tag their releases [23:25:28] My point is, they're getting their source...however they get it...from Phacility, not WMF [23:25:34] We're not packaging wmf-arcanist at homebrew [23:25:45] yep [23:25:48] So we shouldn't tell people to use homebrew :) [23:26:49] yep [23:27:04] RainbowSprinkles it's not easy to register a path on windows for arcanist though. [23:27:22] So i have https://github.com/paladox/Arcanist-installer-for-windows [23:27:26] that does everything [23:27:34] I haven't developed on windows in years, for good reason. [23:27:35] including installing php. [23:27:38] Too many basic tools don't work [23:27:54] there's ubuntu now on windows. [23:29:14] In which case just use that environment and who cares if it's in your windows prompt ;-) [23:29:40] lol. I will laugh if someone does rm -rf / [23:30:36] I mean our best solution to anyone running windows is to get virtualbox and vagrant setup [23:30:49] It's bound to be easier than trying to figure out why basic things don't work for them [23:30:52] :) [23:30:54] That is not as easy as running the setup.exe i have. [23:31:05] :) [23:31:59] my favorite .exe is http://ftp.nl.debian.org/debian/tools/win32-loader/stable/win32-loader.exe [23:32:08] does that setup.exe also set up a MW development environment? No. :) [23:33:59] Someone could actually build an installer that does that. [23:34:13] It's easier building installers on windows than on linux.
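The "no install required" point twentyafterfour makes above comes down to cloning arcanist and putting its bin/ directory on PATH, which is also all "registering the arcanist path" means. A minimal, self-contained sketch, with a stub standing in for the real `git clone` of the wmf/stable branch of https://github.com/wikimedia/arcanist so that nothing here touches the network:

```shell
set -e
# Simulate the checkout layout; in reality you would run
#   git clone --branch wmf/stable https://github.com/wikimedia/arcanist
# here instead of creating a stub.
tools=$(mktemp -d)
mkdir -p "$tools/arcanist/bin"
printf '#!/bin/sh\necho "arcanist stub"\n' > "$tools/arcanist/bin/arc"
chmod +x "$tools/arcanist/bin/arc"

# "Registering the path" is the whole install:
export PATH="$tools/arcanist/bin:$PATH"
arc
```

Updating later is then just `git pull` in the checkout, which is why the wiki docs prefer this over pre-packaged alternatives that track Phacility's releases rather than the wmf/stable branch.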