[00:13:41] 6Release-Engineering-Team, 6Developer-Relations, 6Phabricator, 10Phabricator-Sprint-Extension: Let's all stay in the loop on the Projects V3 update - https://phabricator.wikimedia.org/T120276#1860893 (10ksmith) @greg: Thanks for creating the new task, and for the clarification about the veto. I hope your... [00:25:44] 6Release-Engineering-Team, 6Developer-Relations, 6Phabricator, 10Phabricator-Sprint-Extension: Let's all stay in the loop on the Projects V3 update - https://phabricator.wikimedia.org/T120276#1861004 (10greg) >>! In T120276#1860893, @ksmith wrote: > I don't expect us to be able to steer phacility's develop... [00:29:14] Ooh, a RelEng question: [00:29:28] What's our policy going to be for back-porting PHP7 compat patches? [00:29:35] lol [00:29:37] I mean, we're arguing about minimum supported versions. [00:29:57] But there's also maximum-supported versions, given that apparently core doesn't work right now with 7(?). [00:29:57] to production? [00:30:01] Backporting to what? [00:30:07] or to old releases? [00:30:09] Presumably, if it's to support PHP 7, but doesn't break PHP 5.X... It's fine [00:30:29] So once master works with PHP7, would we also do back-ports to REL1_23/5/6? What about REL1_24? 1_22? Etc. [00:30:46] ah [00:31:12] Depends when we want to make them [00:31:18] first opinion: upgrading already released mw's to support later php versions doesn't seem worth our time [00:31:19] 'Cos users talking about how they dread updating MW make me sad, but not as much as "hey my host upgraded us to PHP7 and now MW 1.12 doesn't work!" comments. [00:31:41] If it's not a supported version of MW, tough? [00:31:46] greg-g: If people have such old wikis they can't use a current update.php and their old one doesn't work with PHP7… [00:31:55] "Not supported" seems to be a very grey area. [00:31:59] Next Ubuntu release(s) and LTS should have both PHP5.X and PHP7 [00:32:10] That's in April, right? [00:32:13] James_F: how old doesn't hav eupdate.php? [00:32:15] Yeah, 16.04 [00:32:18] .php5 [00:32:24] lol [00:32:33] legoktm: presumaly, it's .php7 [00:32:34] greg-g: Not 'have', have a PHP7 compat version. [00:32:37] so we end up with a php5/php7/php thing like we did with PHP4 [00:32:40] oh [00:33:14] greg-g: (I'm assuming worst-case of an update.php incompatibility, which may be overkill.) [00:33:19] * greg-g nods [00:33:45] Also if we can back-port to REL1_23 it's very unlikely we couldn't do so for REL1_24 trivially… [00:34:00] James_F: given that PHP7 wasn't stable when those were released, I think there's no expectation it'll work. Like how 1.15 may have said 5.1+ (I'm making this up), but no one expects 5.6 to work with it. I think we should target full PHP7 compat for 1.27 [00:34:08] Maybe backport to 1.26 if its easy [00:34:14] but if it's a not supported releases? [00:34:16] I concur with lego [00:34:27] Will PHP7 support break hhvm stuff? [00:35:06] It shouldn't. [00:35:48] I guess we're not going to be filling MW full of php 7 features anyway, cause that'll break 5.X [00:37:12] Say I have (say) MW 1.22 right now running on PHP 5.3, and I hate MW upgrading and want to say on 1.22. [00:37:21] Then my host upgrades me to PHP 7.0 [00:37:26] We shall shun you from the community [00:37:51] Yell at your hosting provider [00:37:58] Yeah [00:38:01] why do I care about 1.22? [00:38:08] I can't see many forcing you to use PHP 7 [00:38:14] ie it's PHP7 or no PHP [00:38:18] Suddenly my wiki is broken. I decide to reluctantly upgrade to 1.27. 
But I can't do so, as the upgrade won't run. Or something. [00:38:33] we're gonna have the 4 style backcompat for years [00:38:39] Is that a real issue or hypothetical? [00:38:39] Well, if you don't care about 1.22, why are we even talking about the PHP 5.5 change for 1.27? Screw that. [00:38:54] Presumably, if you're upgrading to a supported release, we would try to help people [00:39:02] James_F: 1.27 should be PHP7 compatible. [00:39:08] So the upgrade would run. [00:39:16] legoktm: Right now we're apparently holding up real work on hypothetical complaints about versions we already don't support, so… [00:39:20] OK. [00:39:50] yay hypotheticals! [00:39:52] Reedy: {{cn}} on "we" and "help". :-) [00:39:58] * legoktm runs away for a bit [00:40:05] James_F: Someone comes on IRC and... [00:40:24] no one can hear them scream [00:40:41] * James_F grins. [00:41:05] Jeroen made some comment about writing PHP 5 being weird after writing 7 [00:41:32] https://www.mediawiki.org/wiki/Version_lifecycle tells me I don't care about 1.22, and if someone else tells me I should, then we need to change that page [00:41:41] (and thus our policy) [00:41:48] I don't think we do [00:41:53] And shouldn't [00:41:58] Cause we just open a can of worms [00:42:02] exactly [00:42:03] I think we should first decide the 'why' before we pick/change a policy. [00:42:09] "You support X version now, why not X-5 etc?" [00:42:11] why what? [00:42:28] why we limit the number of releases we support? easy, too much work [00:42:32] Why do 'we' 'support' 'users' (definitions) of MediaWiki. [00:42:44] well, that's a bigger can of worms :P [00:42:53] Yeah, that's the worm-can about which I'm asking. :-) [00:42:57] why have releases at all? [00:43:00] Indeed. [00:43:09] #mediawiki support policy: You're on your own, mate. You made your bed... [00:43:34] Why spend scant donor funds for Wikipedia[*] on helping multi-billion dollar companies[*] run software they can afford to write themselves? [00:43:50] (I'm not trolling, exactly, but pointed questions.) [00:44:00] "exactly" [00:44:04] just this morning I was telling a Canonical friend that we're basically a "HEAD or gtfo" dev shop :) [00:44:34] James_F: it's a legit question, and one that I have two opinions about [00:44:43] * Reedy takes that as greg-g saying fuck deployment schedules [00:44:48] my "what I want to spend time on during work hours" answer, and my "what I wish we could do" answer [00:44:55] "We're not running head on Wikimedia, can't help you, sorry" [00:45:00] Krenair: I was a third party user before I came to WMF. I ran a classified multi-hundred-thousand MediaWiki install for the UK Government. I definitely had budget to not care about WMF's support. I liaised with the CIA and so on other governments who also did. [00:45:06] Reedy: :P I'm down [00:45:19] greg-g: Yeah. [00:46:32] what were the [*]s for [00:47:10] James_F: Did you have lots of links to secure.wm.o and leak referrers all of the place from your private internal wikis? :P [00:47:53] I'm going to guess that computers able to connect to classified systems weren't able to connect to secure.wm.o, Reedy [00:48:09] Krenair: Because I'm aware that they're slightly unfair generalisations that aren't always true. Some (how many?) of our donors like to donate to us for MediaWiki. Some (how many?) of our third-party users are impoverished noble causes which deserve charity support. Etc. [00:48:14] Krenair: Aha. Umm. Actually… [00:48:23] Krenair: What's Northrop Grumman excuse? 
[00:48:27] Reedy: We scrubbed outbound refer[r]er links. [00:48:36] They're a cyber defense contractor, among other things [00:49:00] doesn't mean their internal site is classified Reedy [00:49:02] James_F, :/ [00:49:11] Krenair: I presume it is though [00:49:14] Very likely it is [00:49:21] https://phabricator.wikimedia.org/T119274 [00:50:16] Krenair: It's complicated. [00:51:20] heh, war, isp, and oil companie [00:51:23] ss [00:51:41] well, telecom provider [00:51:42] but yeah [00:51:55] I suggested we just break secure.wm.o redirects [00:51:58] Do it. [00:51:59] And wait for the OTRS ticket [00:52:14] It'll stop Google indexing them [00:52:22] AH. That reminds me why I opened the ticket [00:52:25] CoolURLs™ isn't something we do anyway. [00:52:30] right, provider [00:52:41] I've never been cool, why start now? [00:52:47] * James_F grins. [00:53:01] Krenair: Did you phile a task about the secure.wm.o links? [00:53:22] from google? [00:53:23] yeah [00:53:43] https://phabricator.wikimedia.org/T93531 [00:54:17] I knew there was a reason I filed the task [00:54:20] https://phabricator.wikimedia.org/T93531 [01:02:55] ok, homeward bound, later all [01:03:04] (I hope some of you have a very crappy movie in your head now) [01:10:33] James_F: https://gerrit.wikimedia.org/r/257510 [01:33:46] 10Deployment-Systems, 3Scap3: Investigate parallel-ssh library once paramiko supports hmac-256/hmac-512 - https://phabricator.wikimedia.org/T114110#1861155 (10mmodell) hmac-256 is now supported in paramiko, according to the [[ http://www.paramiko.org/changelog.html#1.16.0 | changelog. ]] [02:37:16] twentyafterfour: So I sliced & diced the gerrit.wm.o traffic that's using old gitweb redirects (which redirect to gitblit atm). It basically breaks down into 3 traffic categories. (1) Misbehaving bots (2) some [like 3? 4?] really outdated MW installs using LocalizationUpdate to fetch latest HEAD messages and (3) some guy or two who's relying on RSS feeds of [02:37:16] a couple of repos. [02:37:40] (1) can go die, who cares. (2) we should help upgrade. (3) made me chuckle but I'm not sure it's a big deal tbh [02:38:02] I'd rather just kill the old gitweb redirection rather than drag it along. [02:39:06] 6Release-Engineering-Team, 3releng-201516-q3, 10Wikimedia-Developer-Summit-2016: Code-review migration to Differential status/discussion - https://phabricator.wikimedia.org/T114320#1861249 (10mmodell) [02:40:58] ostriches: yeah I agree, tech debt begone [02:45:59] twentyafterfour: `grep gitweb access.log | grep "MediaWiki/1" | sort -uk1,1 | cut -d ' ' -f 1` gives me 5 IPs. Those are the oudated MW installs doing LU. [02:48:30] Aw, one of them is Audacity. We can help a fellow FLOSS project out! :) [02:49:22] They have 2 wikis, hmm [02:49:40] (legacy)?wiki.audacityteam.org [03:11:25] Project browsertests-MobileFrontend-en.m.wikipedia.beta.wmflabs.org-linux-firefox-sauce build #905: 04FAILURE in 29 min: https://integration.wikimedia.org/ci/job/browsertests-MobileFrontend-en.m.wikipedia.beta.wmflabs.org-linux-firefox-sauce/905/ [03:12:41] Ouch, they have 3 wikis. And one is 1.18.x [03:23:09] pwn their wiki and upgrade it for them? [03:23:14] lmao [03:26:26] Hmm, none of their wikis have LU installed. Wonder why they're scraping our i18n then... [03:27:18] Old LU cron? 
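As a rough sketch of how the per-IP survey above could be taken one step further — counting hits per outdated-MediaWiki client and reverse-resolving them — assuming the same access.log layout with the client IP in the first field (the loop and the dig lookup are illustrative, not commands taken from the log):

```
# Count requests per client still using the old gitweb URLs with an old MediaWiki
# user agent, then attempt a reverse lookup for each; assumes combined-format
# access.log with the client IP in field 1, as in the one-liner quoted above.
grep gitweb access.log | grep "MediaWiki/1" | cut -d ' ' -f 1 | sort | uniq -c | sort -rn
for ip in $(grep gitweb access.log | grep "MediaWiki/1" | sort -uk1,1 | cut -d ' ' -f 1); do
    name=$(dig +short -x "$ip")
    printf '%s -> %s\n' "$ip" "${name:-no PTR record}"
done
```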
[05:22:05] Project browsertests-Wikidata-WikidataTests-linux-firefox-sauce build #449: 15ABORTED in 4 hr 0 min: https://integration.wikimedia.org/ci/job/browsertests-Wikidata-WikidataTests-linux-firefox-sauce/449/ [05:29:03] Project browsertests-Wikidata-WikidataTests-linux-chrome-sauce build #233: 15ABORTED in 4 hr 0 min: https://integration.wikimedia.org/ci/job/browsertests-Wikidata-WikidataTests-linux-chrome-sauce/233/ [05:36:41] Project browsertests-MultimediaViewer-en.wikipedia.beta.wmflabs.org-os_x_10.9-chrome-sauce build #278: 04FAILURE in 20 min: https://integration.wikimedia.org/ci/job/browsertests-MultimediaViewer-en.wikipedia.beta.wmflabs.org-os_x_10.9-chrome-sauce/278/ [06:45:44] Project browsertests-Core-en.wikipedia.beta.wmflabs.org-linux-firefox-sauce build #829: 04FAILURE in 26 min: https://integration.wikimedia.org/ci/job/browsertests-Core-en.wikipedia.beta.wmflabs.org-linux-firefox-sauce/829/ [07:51:57] 10Beta-Cluster-Infrastructure, 5Patch-For-Review: `trebuchet_master` getting set incorrectly in deployment-prep - https://phabricator.wikimedia.org/T119988#1861463 (10akosiaris) 5Open>3Resolved a:3akosiaris Merged and now I see ``` akosiaris@deployment-restbase02:~$ sudo salt-call --local grains.get tre... [08:16:58] https://integration.wikimedia.org/ci/job/mwext-qunit/8766/artifact/log/mw-error.log wasn't this one happening little while ago already? now it is back it seems [08:28:31] Project browsertests-CirrusSearch-en.wikipedia.beta.wmflabs.org-linux-firefox-sauce build #790: 04FAILURE in 8 min 31 sec: https://integration.wikimedia.org/ci/job/browsertests-CirrusSearch-en.wikipedia.beta.wmflabs.org-linux-firefox-sauce/790/ [10:16:35] 10Continuous-Integration-Infrastructure, 7Monitoring, 5Patch-For-Review: Monitor Jenkins master listens on TCP port 8888 (ZeroMQ) - https://phabricator.wikimedia.org/T120669#1861672 (10hashar) a:3hashar [10:16:43] 10Continuous-Integration-Infrastructure, 7Monitoring, 5Patch-For-Review, 7WorkType-Maintenance: Monitor Jenkins master listens on TCP port 8888 (ZeroMQ) - https://phabricator.wikimedia.org/T120669#1858578 (10hashar) [10:23:39] !log beta: salt-key --delete=i-000005d2.eqiad.wmflabs [10:23:42] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL, Master [10:27:18] !log beta: fixing puppet.con on a bunch of hosts. The [agent] server = deployment-puppetmaster.eqiad.wmflabs is wrong, missing 'deployment-prep' sub domain [10:27:21] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL, Master [10:31:27] !log beta: puppet being fixed on memc04 sentry2 cache-upload04 cxserver03 db1 [10:31:30] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL, Master [10:32:36] !log beta: salt-key --delete=deployment-cache-upload04.eqiad.wmflabs (missing 'deployment-prep' subdomain) [10:32:40] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL, Master [10:34:49] holy shit [10:42:28] !log beta: fixing salt on bunch of hosts. There are duplicate process on a few of them. 
Fix up is: killall salt-minion && rm /var/run/salt-minion.pid && /etc/init.d/salt-minion start [10:42:32] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL, Master [10:44:33] !log beta: deployment-cache-text04 upgrading openssl libssl1.0.0 [10:44:36] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL, Master [10:47:33] 10Beta-Cluster-Infrastructure: deployment-cache-mobile04 puppet Could not find class role::cache::mobile - https://phabricator.wikimedia.org/T120811#1861735 (10hashar) 3NEW [10:50:27] 10Beta-Cluster-Infrastructure: deployment-tin fails because of ldap-yaml-enc.py - https://phabricator.wikimedia.org/T120812#1861748 (10hashar) [10:51:59] 10Beta-Cluster-Infrastructure, 10RESTBase: deployment-restbase01 Could not find class restbase::deploy - https://phabricator.wikimedia.org/T120813#1861750 (10hashar) 3NEW [10:58:17] !log dropped deployment-cache-text04 puppet SSL certificates [10:58:21] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL, Master [11:03:15] 10Beta-Cluster-Infrastructure: deployment-cache-mobile04 puppet Could not find class role::cache::mobile | deployment-cache-text04 Could not find class role::cache::text - https://phabricator.wikimedia.org/T120811#1861775 (10hashar) [11:04:20] mobrovac: are you around ? :-} [11:04:33] mobrovac: the beta cluster puppetmaster has a dirty file modules/restbase/templates/config.labs.yaml.erb [11:04:36] with a local change [11:04:49] hashar: hmmm [11:04:54] hashar: lemme take a look [11:05:03] might be a rebase that went wrong [11:05:15] ah yeah [11:05:15] You are currently rebasing branch 'production' on '4847369'. [11:05:17] :-D [11:05:45] i think we can just rebase --abort [11:05:52] and figure out the conflicting patch [11:06:22] ah this is indeed a rebase gone bad [11:06:50] lets abort it [11:06:59] hashar: https://gerrit.wikimedia.org/r/#/c/257408/ should be the good variant [11:07:02] some patch must have been merged in operations/puppet.git [11:07:08] which is merged in ops/puppet [11:07:12] yes [11:07:15] mobrovac: excellent [11:07:17] let me abort and rebase :-} [11:07:22] hashar: kk gr8, thnx [11:11:05] oh [11:11:08] FOR GOD SAKE [11:11:15] why does root default editor ends up being nanon [11:11:18] nano [11:13:36] oh [11:13:45] et bonjour mobrovac :-} [11:14:10] bonjour hashar à toi aussi :) [11:14:37] hashar: dunno, i've asked that more than once about nano, but every time that ended in a vi-vs-emacs flame war [11:14:38] :) [11:20:52] !log beta: rebased puppet.git on puppetmaster [11:20:55] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL, Master [11:21:31] mobrovac: oh that is easy to solve. vi is in POSIX [11:21:35] nano/emacs are not [11:21:36] :-D [11:22:43] hehehehehe [11:22:46] good one hashar! [11:22:55] that is why I learned vi in the first place [11:23:15] cause it is on all unix systems [11:23:45] !log puppet catching up a lot of changes on deployment-cache-mobile04 and deployment-cache-text04 [11:23:48] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL, Master [11:24:01] 10Beta-Cluster-Infrastructure, 10RESTBase: deployment-restbase01 Could not find class restbase::deploy - https://phabricator.wikimedia.org/T120813#1861816 (10mobrovac) Is https://gerrit.wikimedia.org/r/252887 still cherry-picked on `deployment-puppetmaster` ? 
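A minimal sketch of the puppetmaster recovery described in this stretch of the log — the checkout path is an assumption, the change number is the one identified in the discussion:

```
# Recovering from the stuck rebase on the beta puppetmaster.
# The repository path is an assumption; the conflicting cherry-pick (change 257408)
# had already been merged into operations/puppet upstream.
cd /var/lib/git/operations/puppet
git status                    # reports: rebasing branch 'production' on '4847369'
git rebase --abort            # drop the half-finished rebase and the dirty template
git fetch origin
git rebase origin/production  # replay the local cherry-picks on top of upstream;
                              # a pick already merged upstream comes up empty and
                              # can be passed over with `git rebase --skip`
```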
[11:24:03] 10Beta-Cluster-Infrastructure: deployment-cache-mobile04 puppet Could not find class role::cache::mobile | deployment-cache-text04 Could not find class role::cache::text - https://phabricator.wikimedia.org/T120811#1861817 (10hashar) p:5Triage>3High [11:24:11] 10Beta-Cluster-Infrastructure, 7WorkType-Maintenance: deployment-cache-mobile04 puppet Could not find class role::cache::mobile | deployment-cache-text04 Could not find class role::cache::text - https://phabricator.wikimedia.org/T120811#1861735 (10hashar) [11:25:22] 10Beta-Cluster-Infrastructure, 7WorkType-Maintenance: deployment-cache-mobile04 puppet Could not find class role::cache::mobile | deployment-cache-text04 Could not find class role::cache::text - https://phabricator.wikimedia.org/T120811#1861828 (10hashar) 5Open>3Resolved a:3hashar Had to rebase puppet.gi... [11:27:03] oh my [11:27:15] $ /usr/local/bin/ldap-yaml-enc.py deployment-tin.deployment-prep.eqiad.wmflabs [11:27:15] Traceback (most recent call last): [11:27:17] File "/usr/local/bin/ldap-yaml-enc.py", line 114, in [11:27:17] roles = host_info['puppetClass'] [11:27:19] KeyError: 'puppetClass' [11:27:22] that is the actual useful trace [11:27:26] when puppet returns: returned 1: [11:27:28] yeah!!!!!!! [11:36:27] 10Beta-Cluster-Infrastructure: deployment-tin fails because of ldap-yaml-enc.py - https://phabricator.wikimedia.org/T120812#1861845 (10hashar) And on the puppetmaster: ```hashar@deployment-puppetmaster:~$ /usr/local/bin/ldap-yaml-enc.py deployment-tin.deployment-prep.eqiad.wmflabs Traceback (most recent call la... [11:40:25] 10Beta-Cluster-Infrastructure: deployment-tin fails because of ldap-yaml-enc.py - https://phabricator.wikimedia.org/T120812#1861856 (10hashar) 5Open>3stalled Pending on {T120817} [11:40:48] 10Beta-Cluster-Infrastructure: deployment-tin fails because of ldap-yaml-enc.py - https://phabricator.wikimedia.org/T120812#1861861 (10hashar) p:5Triage>3High [11:42:14] !log running puppet on deployment-restbase01 . Catch up on lot of changes [11:42:16] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL, Master [11:43:41] 10Beta-Cluster-Infrastructure, 10RESTBase: deployment-restbase01 Could not find class restbase::deploy - https://phabricator.wikimedia.org/T120813#1861865 (10hashar) 5Open>3Resolved a:3hashar I have rebased operations/puppet on the puppetmaster (there was some rebase conflicts). That caused puppet to no... [11:43:46] mobrovac: restbase01 pass puppet again [11:43:58] mobrovac: the class not found was due to the aborted rebase [11:44:29] though I have [11:44:30] Notice: /Stage[main]/Cassandra::Metrics/Base::Service_unit[cassandra-metrics-collector]/Service[cassandra-metrics-collector]/ensure: ensure changed 'stopped' to 'running' [11:44:37] seems cassandra-metrics-collector does not start properly [11:45:46] 10Beta-Cluster-Infrastructure, 10RESTBase: deployment-restbase01 Could not find class restbase::deploy - https://phabricator.wikimedia.org/T120813#1861868 (10hashar) On a second puppet run on restbase01: Notice: /Stage[main]/Restbase/Package[restbase/deploy]/ensure: ensure changed 'purged' to 'present' Notic... 
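One way to confirm what the ENC script is (not) getting back from LDAP for the failing host — this is a diagnostic sketch, not the fix that was eventually applied, and the ldapsearch filter and attribute names are assumptions based on the traceback:

```
# Reproduce the ENC failure, then look at the host's LDAP entry directly to see
# whether the puppetClass attribute is simply absent. Filter/attributes assumed.
/usr/local/bin/ldap-yaml-enc.py deployment-tin.deployment-prep.eqiad.wmflabs; echo "exit: $?"
ldapsearch -x -LLL "(dc=deployment-tin.deployment-prep.eqiad.wmflabs)" puppetClass puppetVar
```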
[11:48:07] !log beta: salt-key --delete deployment-cxserver03.eqiad.wmflabs [11:48:10] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL, Master [11:50:55] 10Beta-Cluster-Infrastructure, 5Patch-For-Review: `trebuchet_master` getting set incorrectly in deployment-prep - https://phabricator.wikimedia.org/T119988#1861877 (10hashar) 5Resolved>3Open There are still some instances improperly set. I used: `root@deployment-salt:~ # salt --timeout 10 --show-timeout -... [11:51:03] 10Beta-Cluster-Infrastructure: `trebuchet_master` getting set incorrectly in deployment-prep - https://phabricator.wikimedia.org/T119988#1861879 (10hashar) [11:55:38] for god sake [12:00:23] 10Beta-Cluster-Infrastructure: `trebuchet_master` getting set incorrectly in deployment-prep - https://phabricator.wikimedia.org/T119988#1861890 (10hashar) In theory one could use salt compound: salt --timeout=10 --show-timeout --out=txt -C '* and not G@trebuchet_master:deployment-bastion.deployment-prep.eqia... [12:12:53] 10Beta-Cluster-Infrastructure: `trebuchet_master` getting set incorrectly in deployment-prep - https://phabricator.wikimedia.org/T119988#1861909 (10hashar) 5Open>3Resolved Had to run puppet on the other instances. In some cases puppet is broken so I went ahead and manually fixed /etc/salt/grains. [12:13:03] 10Beta-Cluster-Infrastructure, 7WorkType-Maintenance: `trebuchet_master` getting set incorrectly in deployment-prep - https://phabricator.wikimedia.org/T119988#1861911 (10hashar) [12:54:59] Yippee, build fixed! [12:54:59] Project browsertests-GettingStarted-en.wikipedia.beta.wmflabs.org-linux-firefox-sauce build #683: 09FIXED in 58 sec: https://integration.wikimedia.org/ci/job/browsertests-GettingStarted-en.wikipedia.beta.wmflabs.org-linux-firefox-sauce/683/ [12:59:26] hashar: what's that with cxserver03? [12:59:56] ie salt-key --delete deployment-cxserver03.eqiad.wmflabs [13:06:53] Yippee, build fixed! [13:06:54] Project browsertests-PageTriage-en.wikipedia.beta.wmflabs.org-linux-chrome-sauce build #753: 09FIXED in 53 sec: https://integration.wikimedia.org/ci/job/browsertests-PageTriage-en.wikipedia.beta.wmflabs.org-linux-chrome-sauce/753/ [13:23:45] kart_: wrong hostname [13:24:07] kart_: labs instance names now have a subdomain based on the project name: .deployment-prep.eqiad.wmflabs [13:24:14] so deployment-cxserver03.eqiad.wmflabs is obsolete [13:24:26] deployment-cxserver03.deployment-prep.eqiad.wmflabs is the proper one :-} [13:41:27] Thanks! [13:45:26] hashar, can you have a look at https://phabricator.wikimedia.org/T120792? Might be the answer is obvious [13:51:34] hashar: meanwhile, https://integration.wikimedia.org/ci/job/mwext-qunit/8775/console [13:51:41] some issues with Jenkins. [13:52:04] rm: cannot remove ‘/mnt/home/jenkins-deploy/tmpfs/jenkins-2/lessphp_c19h7h8ykkgkggc4cggksw0o0gsw0wk.lesscache’: Permission denied [13:52:07] etc [13:57:31] 10Continuous-Integration-Config, 10Math, 10MathSearch: Make Restbase availible to jenkins - https://phabricator.wikimedia.org/T120657#1862060 (10hashar) We can probably craft a job that setup a local RestBase and then run the Math integration tests against it. Maybe with `restbase-mod-table-sqlite`. Then,... [14:18:33] !log Removed integration-slave-trusty-1012:/mnt/home/jenkins-deploy/tmpfs/jenkins-2 which was left behind by a job. Caused other jobs to fail due to lack of permission to chmod/rm-rf this dir. 
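For reference, the manual cleanup behind that !log entry amounts to something like the following — the host and path come from the log, while the find invocation for spotting other stale per-executor directories is illustrative:

```
# On integration-slave-trusty-1012 (FQDN pattern as seen elsewhere in this log):
# list per-executor tmpfs dirs that look stale, then remove the one left behind
# by the crashed job so other builds stop failing on chmod/rm -rf.
sudo find /mnt/home/jenkins-deploy/tmpfs -maxdepth 1 -name 'jenkins-*' -mmin +120 -ls
sudo rm -rf /mnt/home/jenkins-deploy/tmpfs/jenkins-2
```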
[14:18:37] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL, Master [14:23:00] 10Continuous-Integration-Config, 10Math, 10MathSearch: Make Restbase available to Jenkins - https://phabricator.wikimedia.org/T120657#1862083 (10Nikerabbit) [14:30:10] 10Continuous-Integration-Infrastructure: Dozens of jobs failing on integration-slave-trusty-1012 because chmod fails for /tmp/jenkins-2 - https://phabricator.wikimedia.org/T120824#1862091 (10Krinkle) 3NEW a:3Krinkle [14:34:29] Project browsertests-MobileFrontend-SmokeTests-linux-chrome-sauce build #347: 04FAILURE in 6 min 28 sec: https://integration.wikimedia.org/ci/job/browsertests-MobileFrontend-SmokeTests-linux-chrome-sauce/347/ [14:36:01] andrewbogott: good morning :-} [14:36:22] andrewbogott: I am not sure why I had Nodepool to use the fqdn as a hostname https://gerrit.wikimedia.org/r/257597 drops the domain [14:36:29] I have no idea what will happen though [14:38:07] hashar: want to try it now, or wait for another day? [14:38:25] I think the reason I did it is that else the instances ends up missing the fqdn [14:38:44] might have caused them to fail when running firstboot.sh , but I use different images now [14:38:45] so yeah [14:38:47] lets do it [14:38:52] and see what happens :-} [14:39:32] $ cat /etc/hostname [14:39:32] ci-jessie-wikimedia-11406 [14:39:32] $ hostname --fqdn [14:39:33] ci-jessie-wikimedia-11406.contintcloud.eqiad.wmflabs.eqiad.wmflabs [14:39:38] that applies on scandium, right? [14:39:44] I have no clue how the hostname is set [14:39:47] labnodepool1001.Eqiad.wmnet [14:39:53] scandium is solely a zuul merger instance [14:40:12] 10MediaWiki-Releasing, 6Developer-Relations, 10Wikimedia-Blog-Content, 3DevRel-December-2015, 5MW-1.26-release: Write blog post announcing MW 1.26 - https://phabricator.wikimedia.org/T112842#1862142 (10Qgil) Thank you! Integrated. Alright, let's go out with the post we have. @jrbs, can your team review... [14:42:12] 10Continuous-Integration-Infrastructure: Dozens of jobs failing on integration-slave-trusty-1012 because chmod fails for /tmp/jenkins-2 - https://phabricator.wikimedia.org/T120824#1862148 (10Krinkle) 5Open>3Resolved Not sure what job left this behind, but I've just sudo rm -rf'ed it manually for now on integ... [14:42:46] andrewbogott: so I think Nodepool daemon figure out nodepool.yaml conf file has been changed and magically reload itself [14:44:03] got [14:44:03] 2015-12-08 14:43:51,969 INFO nodepool.NodeLauncher: Creating server with hostname ci-jessie-wikimedia-11418 in wmflabs-eqiad from image ci-jessie-wikimedia for node id: 11418 [14:44:25] that seems right... [14:44:38] openstack server list [14:44:44] shows it up properly [14:44:58] yeah, the entry on wikitech is right [14:45:02] success [14:45:08] so presuming the rest of the CI setup can now locate the instance, we’re good :) [14:47:05] ah cloud-init [14:48:24] 127.0.1.1 ci-jessie-wikimedia-1449584040.eqiad.wmflabs ci-jessie-wikimedia-1449584040 [14:48:25] 127.0.1.1 ci-jessie-wikimedia-11418.eqiad.wmflabs ci-jessie-wikimedia-11418 [14:48:27] that is /etc/hosts [14:48:35] it is missing contintcloud (the project name) [14:48:48] Do you have a custom script that’s setting the hostname? [14:48:57] or is that supposed to come from dhcp by magic? 
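A consolidated sketch of the checks being compared in this exchange, run on a freshly booted Nodepool instance; the instance names and values shown are the ones quoted in the log, and which of the two DNS names cloud-init copies into /etc/hosts is exactly the open question:

```
# On a fresh ci-jessie-wikimedia-* instance, compare the kernel hostname, the FQDN,
# the EC2-style metadata answer and what cloud-init wrote to /etc/hosts.
cat /etc/hostname                                         # ci-jessie-wikimedia-11406
hostname --fqdn                                           # was doubled: ...contintcloud.eqiad.wmflabs.eqiad.wmflabs
curl -s http://169.254.169.254/latest/meta-data/hostname  # metadata gives the project-less name
grep 127.0.1.1 /etc/hosts                                 # cloud-init's entry, also missing the project
```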
[14:49:44] I might have a script somewhere that relay on magic to happen [14:49:53] ^^^ my absolutely useless comment :D [14:50:04] I just checked /etc/hosts on a ‘normal’ labs instance [14:50:13] I think cloud-init rely on the Ec2 metadata service [14:50:13] and it doesn’t have its proper name at all [14:50:25] which apparently has never caused us any problems [14:50:59] integration-slave-trusty-1023.integration.eqiad.wmflabs has 127.0.1.1 ubuntu.openstack.eqiad.wmflabs ubuntu [14:51:00] :-D [14:51:04] hashar: if hostname -d is correct [14:51:11] then probably this is fine [14:51:35] yeah, it looks like nothing updates that on normal labs instances [14:51:40] also we leave off teh project part of teh hostname in some cases anyways and atm the non-project fqdn is still unique right? [14:51:46] I can make a bug about that but it must not matter to anyone [14:51:48] $ hostname -d [14:51:48] eqiad.wmflabs [14:53:09] oh... [14:53:14] I blame ec2: $ curl http://169.254.169.254/latest/meta-data/hostname [14:53:15] ci-jessie-wikimedia-11426.eqiad.wmflabs [14:53:21] well, that’s wrong but I think that comes from resolv.conf [14:53:30] not from /etc/hosts [14:53:43] btw https://phabricator.wikimedia.org/T120830 <- won’t help you with contint [14:55:01] yeah [14:55:14] but on Nodepool instance it works just fine :-) [14:55:33] cause they use cloud-init on the first boot, so /etc/hosts get updated "properly" [14:56:15] Yeah, since dns has entries for both .eqiad.wmflabs and .project.eqiad.wmflabs [14:56:23] it might be a coin-toss which one cloudinit uses for /etc/hosts [14:56:45] Are things actually broken with your current CI workflow? Or is this just a tidyness issue? [14:56:51] (Wondering if we need to revert) [14:56:52] tidy I think [14:57:34] ok :) [14:57:38] oh [14:57:48] the PTR is properly set to ci-jessie-wikimedia-11430.contintcloud.eqiad.wmflabs. [14:58:28] if something goes wrong, the revert is straightforward [14:59:24] true [14:59:29] hashar: are you attending the upcoming ldap debacle? [14:59:50] not aware of any ldap debacle [15:03:30] andrewbogott: thanks andrew to have noticed the dupe domain :-} [15:03:33] I closed the task [15:04:04] 10Continuous-Integration-Infrastructure, 5Continuous-Integration-Scaling, 6Labs, 7Nodepool, and 2 others: weird double-domained DNS entries for nodepool nodes - https://phabricator.wikimedia.org/T120792#1862211 (10hashar) [15:04:29] maybe we can prevent the wiki pages creations on Nova Resource: namespace [15:30:57] andrewbogott: ostriches: working on gerrit? [15:45:05] RECOVERY - Puppet failure on deployment-tin is OK: OK: Less than 1.00% above the threshold [0.0] [15:45:43] chasemp: oh you fixed puppet on deployment-tin apparently :-} [15:45:59] ¡magic! [15:46:05] well it's fixed until it gets replaced unless https://gerrit.wikimedia.org/r/#/c/257612/ [15:46:24] but yeah I ran puppet there successfully w/ that [15:48:28] again: Notice: Cannot find site jenkins_u0_mw in sites table [15:50:20] chasemp: I am not sure why only that one failed [15:50:28] Nikerabbit: it probably happens on all builds [15:50:40] hashar: it must be new? 
and had no explicit otherwise class [15:50:51] ahh [15:50:53] it literally had no lookup-able class [15:50:55] yeah it is classless [15:53:24] hashar: seems to at least happen on and off on contenttranslation patches [15:54:25] Nikerabbit: you will want to ask Wikidata folks [15:54:33] Nikerabbit: i have no clue how that site table works [15:55:20] ok [15:55:25] will defer to kart_ then [16:10:42] (03CR) 1020after4: [C: 032] Add Cards to list of extensions to branch [tools/release] - 10https://gerrit.wikimedia.org/r/257432 (https://phabricator.wikimedia.org/T116676) (owner: 10Jdlrobson) [16:12:23] (03Merged) 10jenkins-bot: Add Cards to list of extensions to branch [tools/release] - 10https://gerrit.wikimedia.org/r/257432 (https://phabricator.wikimedia.org/T116676) (owner: 10Jdlrobson) [16:21:17] jzerebecki: any more idea on regular failures of wikibase with ContentTranslation? [16:21:37] hashar: should it be discuss on #wikidata or here? [16:22:00] *qunit failures. [16:22:13] kart_: there are still failures? [16:31:26] jzerebecki: had 3 times today. [16:31:29] :/ [16:32:08] kart_: the failure in https://gerrit.wikimedia.org/r/#/c/257283/ is probably not from wikidata, as without the patch it passes [16:33:06] kart_: the 2nd time is the same failure in the patch depending on that. can you link the 3rd failure? [16:35:04] jzerebecki: ok. That two only. [16:35:20] kart_, Nikerabbit: btw. any idea why wikibase qunit tests fail when they run together with ULS and cldr?: https://phabricator.wikimedia.org/T117886#1862396 [16:36:10] jzerebecki: what's the test that fails? [16:38:03] Nikerabbit: qunit doesn't say so in the jenkins log :( so I don't know, but hear say has it that there are jquery animations from ULS still in progress when those tests are otherwise done [16:38:03] I just see a long page full of backtraces [16:38:38] jzerebecki: that would be weird... I don't think ULS does any animations by default [16:39:10] unless it happens to display "your language has changed" tooltip [16:39:51] jzerebecki: any documentation for local wikibase setup? [16:39:56] (quick :)) [16:40:02] I hardly use qunit so I don't know if the error reporting that can be improved, I think it is much easier to understand when seen in a browser [16:40:29] kart_: mediawiki vagrant wikidata role [16:59:49] 10Beta-Cluster-Infrastructure, 3Scap3, 6Analytics-Backlog, 6Services, and 2 others: Set up AQS in Beta - https://phabricator.wikimedia.org/T116206#1862556 (10Milimetric) [17:00:14] jzerebecki: better. thanks! [17:26:45] thcipriani: let me know if you run into trouble with making the branch [17:45:47] 10Continuous-Integration-Infrastructure, 10CirrusSearch, 6Discovery, 3Discovery-Cirrus-Sprint: ElasticSearch taking 17% of RAM on integration slaves - https://phabricator.wikimedia.org/T89083#1862661 (10EBernhardson) It doesn't look like we are using the phantomjs build. AFAIK it would be perfectly fine to... [17:53:45] 10MediaWiki-Releasing, 6Developer-Relations, 10Wikimedia-Blog-Content, 3DevRel-December-2015, 5MW-1.26-release: Write blog post announcing MW 1.26 - https://phabricator.wikimedia.org/T112842#1862678 (10Legoktm) Hold on a bit, that blog post sells us way short. There are way more than "two new features" i... [18:06:09] twentyafterfour: will do, going to start cutting in ~1hr [18:09:50] 10Continuous-Integration-Infrastructure, 10CirrusSearch, 6Discovery, 3Discovery-Cirrus-Sprint: ElasticSearch taking 17% of RAM on integration slaves - https://phabricator.wikimedia.org/T89083#1862762 (10Krinkle) >>! 
In T89083#1862661, @EBernhardson wrote: > It doesn't look like we are using the phantomjs b... [18:21:40] twentyafterfour: have you been using make-wmf-branch-climate? [18:25:54] looks like last commit was in October, so I guess not. [18:30:03] Project browsertests-CentralNotice-en.wikipedia.beta.wmflabs.org-linux-firefox-sauce build #526: 04FAILURE in 2.6 sec: https://integration.wikimedia.org/ci/job/browsertests-CentralNotice-en.wikipedia.beta.wmflabs.org-linux-firefox-sauce/526/ [18:41:29] git thcipriani yes [18:41:36] -git [18:42:05] thcipriani: you need to merge master into the branch though [18:42:17] or just use the old script [18:42:53] twentyafterfour: kk, I'll probably just use the script in master for the time being. [18:44:40] I finally figured out how to maintain the branch without even using make-wmf-branch, just by doing `git co wmf/1.27.0-wmf.7; git co -b wmf/1.27.0-wmf.8; git merge master -Xtheirs` [18:45:23] that results in a branch that only differs from master in the ways that it should (the submodules, the .gitreview file, etc) [18:46:18] I thought core master had no submodules? [18:47:11] right, but if you start from the previous wmf branch, and merge master into that, it keeps the submodules [18:47:46] the same thing would have to be repeated for all the submodules though to update their branches [18:47:53] ah, gotcha. [18:48:04] indeed. [18:48:40] the trick was using -Xtheirs to resolve conflicts by taking the version from master [18:49:26] seems that the critical path to cutting a branch will still be the pull/.gitreview/push dance for the 145ish extensions [18:50:01] yeah that's the shitty part [18:50:54] didn't you work on a solution with --reference clones at some point? More work the first time you deploy, but subsequent deploys should be fine. [18:51:33] roughly that ruby thing you posted a while back...git-fastclone? [18:51:35] that doesn't work on tin because of old git version [18:51:50] ah, I wasn't even going to try on tin. [18:51:59] yeah I don't do it on tin anymore either [18:53:05] I started setting up a labs instance with git 2.0 [18:54:27] between network proximity and reference clones it should be pretty fast [18:55:24] but I'm gonna rewrite make-wmf-branch from scratch because it's just not salvageable really [18:56:01] how were push permissions going to be handled on the labs instance? [18:56:40] either agent forwarding or put an ssh key on the instance [19:03:05] PROBLEM - Puppet failure on deployment-tin is CRITICAL: CRITICAL: 25.00% of data above the critical threshold [0.0] [19:25:42] twentyafterfour: so branch cutting died trying to push up an extension for some unknown reason. Best to just edit config.json and pick up where I left off? [19:27:12] thcipriani: yeah. that's the main thing I tried to fix in the climate branch :-/ [19:27:50] twentyafterfour: kk, doing: edit config.json, modify the checks in make-wmf-branch [19:29:15] !log beta: aborted rebase on puppetmaster. [19:29:20] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL, Master [19:32:35] !log LDAP got migrated. 
We might have mwdeploy local users that got created on beta cluster instances :( [19:32:39] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL, Master [19:38:06] RECOVERY - Puppet failure on deployment-tin is OK: OK: Less than 1.00% above the threshold [0.0] [19:50:15] PROBLEM - Puppet failure on deployment-kafka04 is CRITICAL: CRITICAL: 100.00% of data above the critical threshold [0.0] [19:51:47] Is there an ETA on the cut? [19:59:14] 10Continuous-Integration-Config, 10Math, 10MathSearch: Make Restbase available to Jenkins - https://phabricator.wikimedia.org/T120657#1863349 (10GWicke) We could add mathoid to https://github.com/gwicke/mediawiki-node-services, update the docker image & then spin up a docker container for tests. [20:00:55] James_F: branch cut is underway currently. [20:01:05] thcipriani: Thanks! [20:01:15] just to the point where it's branching core, FYI. [20:11:01] I am wondering whether we could cut the branch using Gerrit REST Api [20:11:23] so you would send a few hundreds http queries and {done} [20:15:32] maybe, never played with the restapi. All it's doing is modifying defaultbranch= in .gitreview for each extension and pushing to a new branch. [20:16:35] speaking of which, just realized I'm going to have to patch the new branch of core to add in all the other submodules up to Gather (since branching stumbled on the Gather extension) :( [20:25:15] RECOVERY - Puppet failure on deployment-kafka04 is OK: OK: Less than 1.00% above the threshold [0.0] [20:28:22] thcipriani: lame [20:28:42] thcipriani: sorry it went south on you [20:29:12] heh, not a big deal. Seems like just a bunch of `git add submodule`s that need doing. [20:29:19] yeah [20:29:27] (if the core cut ever gets finished) :P [20:29:30] branch cutting is extremely annoying when it goes bad [20:30:24] does the climate branch have a --continue-at=[extension] ? something like that? 
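A minimal sketch of that merge-based branch cut for core, assuming the previous wmf branch exists on the 'origin' remote and the operator can push the new branch directly; the `-Xtheirs` recipe is the one quoted above, the rest is the obvious surrounding plumbing:

```
# Cut wmf/1.27.0-wmf.8 from the previous deployment branch by merging master into it,
# resolving conflicts in master's favour so the new branch only differs from master
# in the intentional ways (submodules, .gitreview, etc.).
git fetch origin
git checkout wmf/1.27.0-wmf.7
git checkout -b wmf/1.27.0-wmf.8
git merge -Xtheirs master
git push origin wmf/1.27.0-wmf.8      # assumes direct branch-push rights in Gerrit
```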
[20:31:29] !log beta: rebased operations/puppet and locally fixed a conflict [20:31:33] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL, Master [20:31:56] !log beta cluster instances switching to new ldap configuration [20:31:59] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL, Master [20:33:20] 10Deployment-Systems, 10Architecture, 10Wikimedia-Developer-Summit-2016-Organization, 7Availability: WikiDev 16 working area: Software engineering - https://phabricator.wikimedia.org/T119032#1863578 (10RobLa-WMF) If {T118932} is still stalled, that might be a required topic for WikiDev [20:34:15] PROBLEM - Puppet failure on deployment-tin is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [20:34:34] 10Beta-Cluster-Infrastructure: deployment-tin fails because of ldap-yaml-enc.py - https://phabricator.wikimedia.org/T120812#1863582 (10chasemp) [20:35:08] 10Beta-Cluster-Infrastructure: deployment-tin fails because of ldap-yaml-enc.py - https://phabricator.wikimedia.org/T120812#1863586 (10chasemp) 5stalled>3Resolved a:3chasemp :) [20:35:28] RECOVERY - Puppet failure on wmfbranch is OK: OK: Less than 1.00% above the threshold [0.0] [20:39:19] thcipriani: it has a --resume option and it is supposed to save the checkpoint automatically [20:39:51] thcipriani: but I never got to test it extensively because the failures are only occasional and when it does fail it happens when I'm under time pressure to just get it done [20:40:36] twentyafterfour: I can understand that :P [20:44:06] RECOVERY - Puppet failure on deployment-tin is OK: OK: Less than 1.00% above the threshold [0.0] [20:56:22] twentyafterfour: here's a dumb question: pushing to this branch: push origin HEAD:refs/for/wmf/1.27.0-wmf.8 ? [20:59:21] (evidently that was correct) [21:16:20] Project browsertests-QuickSurveys-en.m.wikipedia.beta.wmflabs.org-linux-chrome-sauce build #94: 04FAILURE in 19 sec: https://integration.wikimedia.org/ci/job/browsertests-QuickSurveys-en.m.wikipedia.beta.wmflabs.org-linux-chrome-sauce/94/ [21:26:22] PROBLEM - Puppet failure on wmfbranch is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [21:26:45] 10Continuous-Integration-Infrastructure, 10CirrusSearch, 6Discovery, 3Discovery-Cirrus-Sprint: ElasticSearch taking 17% of RAM on integration slaves - https://phabricator.wikimedia.org/T89083#1863833 (10EBernhardson) the unit tests run via jenkins, but they do not talk to elasticsearch. They are strictly u... [21:34:55] thcipriani: yep [21:40:56] twentyafterfour: hmm, checkout didn't create the docroot/bit/static/1.27.0-wmf.8 dir. Alright to create manually? [21:41:09] weird [21:41:24] thcipriani: I guess so, though I can't think why it wouldn't do that? [21:42:14] twentyafterfour: script said it did it in the output: Created /w/static/1.27.0-wmf.8/skins symlink [21:42:43] extra strange [21:45:22] (03CR) 10Hashar: [C: 032] Add thumbor/base-engine [integration/config] - 10https://gerrit.wikimedia.org/r/257492 (owner: 10Gilles) [21:45:36] twentyafterfour: got a second for a hangout? 
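For the record, the two manual steps being worked out here look roughly like this — the refs/for push target is confirmed later in the log, while the docroot path and symlink target are assumptions rather than the real layout on tin:

```
# Send a follow-up commit on the new branch to Gerrit for review.
git push origin HEAD:refs/for/wmf/1.27.0-wmf.8
# If the checkout script claims to have created the per-version static dir but did not,
# recreate it by hand (paths and symlink target are assumptions):
mkdir -p docroot/bits/static/1.27.0-wmf.8
ln -s /srv/mediawiki-staging/php-1.27.0-wmf.8/skins docroot/bits/static/1.27.0-wmf.8/skins
```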
[21:46:18] (03Merged) 10jenkins-bot: Add thumbor/base-engine [integration/config] - 10https://gerrit.wikimedia.org/r/257492 (owner: 10Gilles) [21:46:55] (03PS2) 10Hashar: Add thumbor/video-engine [integration/config] - 10https://gerrit.wikimedia.org/r/257260 (owner: 10Gilles) [21:47:32] thcipriani: sure [21:47:38] (03CR) 10Hashar: [C: 032] "rebased" [integration/config] - 10https://gerrit.wikimedia.org/r/257260 (owner: 10Gilles) [21:48:36] (03Merged) 10jenkins-bot: Add thumbor/video-engine [integration/config] - 10https://gerrit.wikimedia.org/r/257260 (owner: 10Gilles) [21:56:14] 10MediaWiki-Releasing, 6Developer-Relations, 10Wikimedia-Blog-Content, 3DevRel-December-2015, 5MW-1.26-release: Write blog post announcing MW 1.26 - https://phabricator.wikimedia.org/T112842#1863897 (10jrbs) >>! In T112842#1862678, @Legoktm wrote: > Hold on a bit, that blog post sells us way short. There... [22:01:29] RECOVERY - Puppet failure on wmfbranch is OK: OK: Less than 1.00% above the threshold [0.0] [22:14:51] twentyafterfour: halp. I edited wikiversions.json to bump testwiki but activeWikiVersions only shows the 1 version :( [22:15:04] er activeMWVersions [22:15:19] thcipriani: I don't usually rely on activemwversions [22:15:31] ok, so the edit of the file should be enough? [22:15:54] yeah edit the file, sync, then check testwiki to make sure that special:version shows the new version number [22:16:03] kk, thanks [22:16:07] oh but first, merge the patch with the new extension [22:16:43] thcipriani: ^ you probably already did but if you forgot, then scap will fail [22:17:19] twentyafterfour: oh, the cards extension? [22:17:23] yeah [22:17:28] do you have that patch handy? [22:17:51] the https://gerrit.wikimedia.org/r/#/c/257434/ [22:18:07] jdlrobson: ^ fyi, merging [22:18:48] it's the extension-list change that is needed... [22:19:01] otherwise the mismatch breaks l10n somewhere along the line [22:19:43] I think we should have scripts to build extension-list dynamically [22:19:55] so that it always matches [22:20:11] might be nice. [22:20:39] twentyafterfour: thanks for the heads-up, likely would've missed that one. [22:23:03] thanks thcipriani [22:26:00] 10Continuous-Integration-Config, 10Fundraising-Backlog, 10Wikimedia-Fundraising-CiviCRM: Bad empty CI jobs on wikimedia/fundraising/crm deployment branch - https://phabricator.wikimedia.org/T120881#1863979 (10awight) 3NEW [22:27:41] twentyafterfour: blerg, complaining about missing extension in wmf.7 because it's in the extension list now :( [22:28:10] so...can i add the cards extension to wmf7 so the messages update? [22:28:44] thcipriani: weird. ok yeah do that I guess [22:29:54] twentyafterfour: sigh. Does that mean I should cut a new wmf.7 branch for cards? [22:30:26] thcipriani: no it probably just needs to be present in core [22:31:17] right, it's complaining that it needs an extension.json in extensions/Cards which probably means it needs to be there as a submodule, right? [22:32:48] 10Continuous-Integration-Config, 10Fundraising-Backlog, 10Wikimedia-Fundraising-CiviCRM: Bad empty CI jobs on wikimedia/fundraising/crm deployment branch - https://phabricator.wikimedia.org/T120881#1864010 (10JanZerebecki) Just adding the noop job to all pipelines of wikimedia/fundraising/crm in integration/... 
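A hedged sketch of the fix-up agreed on above — registering Cards as a submodule on the older branch so the l10n rebuild can find its extension.json, plus the wikiversions edit mentioned earlier. The repository URL follows the standard Gerrit layout and is an assumption here, and no separate wmf.7 branch of Cards is cut, matching the conversation:

```
# In the core wmf/1.27.0-wmf.7 checkout: add the Cards extension as a submodule and
# send the change for review. (Repo URL assumed; Cards stays on its default branch.)
git checkout wmf/1.27.0-wmf.7
git submodule add https://gerrit.wikimedia.org/r/mediawiki/extensions/Cards extensions/Cards
git commit -m "Add Cards extension to wmf/1.27.0-wmf.7"
git push origin HEAD:refs/for/wmf/1.27.0-wmf.7
# Separately, wikiversions.json gets testwiki pointed at the new version before syncing,
# roughly: "testwiki": "php-1.27.0-wmf.8"
```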
[23:02:07] thcipriani: here's my notes from my first train deploy -- https://github.com/bd808/wmf-kanban/issues/57 -- I know there are at least 2 things I noted there (~2 years ago) that still could use fixing [23:04:28] bd808: "cutting branch should be automatic" not a thing, certainly :) [23:08:51] man we just need to do that [23:22:25] thcipriani: congrats on the deploy ! [23:22:40] been too tired to assist / watch over your shoulders :( [23:22:44] hashar: not time for congratulations just yet :\ [23:22:47] greg-g: Or, at least, the "press button to make jenkins do it" [23:22:57] "What version do you want to branch?" [23:23:10] "press button" is still an extra step [23:23:44] Well, if we just do it "every week", we don't want it every week... [23:23:58] So you've still gotta set up better scheduling [23:24:01] Which isn't much better [23:24:05] One click branching is better [23:24:23] or you know [23:24:25] deploy from master [23:24:33] on patch merge :D [23:25:21] anyway will read Tyler notes [23:25:34] sleeeep [23:29:38] 10MediaWiki-Releasing, 6Developer-Relations, 10Wikimedia-Blog-Content, 3DevRel-December-2015, 5MW-1.26-release: Write blog post announcing MW 1.26 - https://phabricator.wikimedia.org/T112842#1864113 (10Qgil) >>! In T112842#1862678, @Legoktm wrote: > Hold on a bit, that blog post sells us way short. There... [23:39:04] The release is getting uncomfortably late. :-( [23:40:05] James_F: indeed. It's at 96% on testwiki [23:40:14] thcipriani: Yay. [23:40:27] Three-week-furlough makes for riskier deployment trains. [23:40:29] er, the cdb rebuild is at 96% complete across all servers for the push to testwiki [23:40:35] I guessed. ;-) [23:40:49] i18n being the bane of updates. [23:41:03] indeed. [23:58:56] en or gtfo [23:58:59] wait [23:59:06] gotta have en-gb [23:59:08] * Reedy grins [23:59:59] Reedy: I'm seriously thinking about adding en-us and making en international where possible.