[01:48:54] https://gerrit.wikimedia.org/r/#/c/437887/ fixes the 7.1 test failure
[06:08:22] legoktm: I wrote a bunch of things https://gerrit.wikimedia.org/r/c/437867/2/make-release/makerelease.py
[06:09:05] no_justification: did you take a look at PS3?
[06:10:18] Heh, exactly what I wanted from the patch, I'll remove the -1
[06:10:22] But yes, my comments still stand
[06:10:39] _That_ is where I was going w.r.t. releases
[06:11:34] +2 on ps3
[06:12:24] I think we could actually split out the GPG bit into a second script too
[06:12:31] So Jenkins could continually build the branches
[06:12:48] And then whoever's doing the releasing just issues new tags, and then signs the resulting release
[06:13:48] hmm
[06:14:13] that was the original plan a long time ago
[06:15:45] Yeah. But then we went down the rabbit hole of patch-tarball-then-the-git-stuff
[06:25:39] https://phabricator.wikimedia.org/T196602 :)
[06:32:26] Gerrit 2.15.2 is ready to release as well I think :)
[07:29:09] no_justification: :)
[07:29:27] Includes better ldap debugging I think :).
[09:27:18] Krinkle: sorry, was off. the partitioned topic is only refreshlinks, and we partition it by db shards so as to control the write amplitude of mariadb
[09:27:49] eqiad and codfw variants just tell you where the job originated (in which dc)
[09:27:54] so currently that's all eqiad
[09:51:12] mobrovac: right, but to count size of queues as either insertion or process rate (actual MW submissions), should we count both the partitioned and non-partitioned variant of that job?
[09:51:56] I'm not sure how it works, like if one feeds to the other or..
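The tag-then-sign release flow sketched at [06:12:48] (Jenkins builds branches continually; the releaser just issues a tag and signs the result) could look roughly like the following. This is a sketch, not makerelease's actual behavior: the repository, committer identity, and version number are hypothetical, and with a GPG key configured one would use `git tag -s` instead of `-a` to get a signed tag.

```shell
# Sketch of the "releaser issues a new tag" step; everything here is a
# hypothetical stand-in for the real mediawiki/core repository.
git init -q release-demo && cd release-demo
echo 'MediaWiki' > README
git add README
git -c user.email=release@example.org -c user.name='Releaser' commit -qm 'initial commit'
# An annotated tag marking the release; `-s` would GPG-sign it instead.
git -c user.email=release@example.org -c user.name='Releaser' tag -a 1.31.0 -m 'MediaWiki 1.31.0'
git describe --tags   # → 1.31.0
```

Signing the resulting tarball would then be a separate `gpg --detach-sign` step, which is what makes splitting the GPG bit into its own script straightforward.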
[09:52:01] ah no no, only the partitioned
[09:52:17] or only the other
[09:52:32] Only the normal form would be easier I guess
[09:52:34] basically we receive the event and then resend it back to the partitioned topic
[09:52:41] Cool
[09:52:52] so one should equal the other
[09:53:18] Hmm I don't see DB names or shard names in that partitioned topic name though
[09:53:22] for effective rates, the partitioned topics are more correct as the exec concurrency there is lower than the one sending them in
[09:53:33] Yeah
[09:53:48] For processing we may want to exclude it somehow, tricky..
[09:53:48] Krinkle: the mapping is partition_no = dbshard_no - 1
[09:54:06] Oh okay it's not at the topic level
[09:54:11] Thanks
[09:54:32] Made some changes last night but will do some more today
[09:54:58] Krinkle: the partitioning is done using this config - https://github.com/wikimedia/change-propagation/blob/master/config.jobqueue.wikimedia.yaml#L79-L129
[12:41:49] tgr: hey! the mcr-full.wmflabs.org farm seems to be offline
[12:44:29] the vagrant box was stopped
[12:45:12] DanielK_WMDE_: I think the cloud team did some restarts? in theory vagrant should be brought up on reboot, but maybe that failed
[12:45:21] seems fine apart from that
[12:55:13] ok, thanks
[13:38:51] AaronSchulz: https://phabricator.wikimedia.org/T196635
[13:39:00] redis being down causes a stack overflow of retries in beta...
[13:51:27] That.. sounds familiar
[13:51:35] Happened a few months back
[15:29:15] tgr: did you make any progress with fixing the selenium tests? can I help somehow?
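The shard-to-partition mapping mobrovac states at [09:53:48] is a simple off-by-one: Kafka partitions are 0-indexed while DB shard numbers start at 1. A minimal sketch (the shard number is a hypothetical example):

```shell
# mapping from the chat: partition_no = dbshard_no - 1
# e.g. refreshLinks jobs for a wiki on DB shard s3 land in Kafka partition 2
dbshard_no=3
partition_no=$((dbshard_no - 1))
echo "partition ${partition_no}"   # → partition 2
```

This is why the partitioned topic name carries no DB or shard names: the shard is recoverable from the partition number alone.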
[15:29:54] I'm mostly replying to comments on the SDC docs today
[15:31:45] DanielK_WMDE_: I can't get either video recording or X11 forwarding to work, and without that I have no idea how to figure out the reason for the failures, so I'll have to do manual testing instead
[15:32:23] the X11 forwarding error might be a problem with my local system, maybe you'll have more luck with it
[15:32:41] I documented the config in https://wikitech.wikimedia.org/wiki/Help:MediaWiki-Vagrant_in_Cloud_VPS#SSH_to_the_Vagrant_box
[15:38:31] tgr: addshore is doing manual testing, make sure you don't duplicate each other's effort
[15:38:54] i can poke at the X11 stuff. haven't done that in... oh, since my university days...
[15:39:04] o/ I haven't written down anywhere what I have done yet, I've mainly been doing deletions, undeletions and moves currently
[15:39:47] I might be able to get the selenium testing working for me locally and pointing at the test system
[15:40:43] that sounds like a good thing to try
[15:52:24] I'll get on that now then :)
[17:06:38] tgr: so I can run the core or extension tests against the test wiki from my local machine, all seems to work, just ran the Echo ones, but, well, there is only 1 test
[17:07:15] yeah MediaWiki is not big on browser tests :(
[17:07:42] If we can think of some cases that we want to test I'm happy to write them
[17:07:43] it's a bit strange that they work though
[17:07:48] tgr: is it?
[17:08:15] running them on the testwiki gives a big list of errors which seem not related to the test setup
[17:08:22] login failures and such
[17:08:39] I'd have expected the same result
[17:11:01] anomie: the test dbs are actually done now!
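What addshore describes at [17:06:38] (running the core/extension browser tests locally but pointed at the remote test wiki) is driven by environment variables. A sketch of the invocation, assuming the `tests/selenium` setup MediaWiki core had at the time; the target URL and credentials below are placeholders, not real values:

```shell
# Hypothetical invocation from the root of a mediawiki/core checkout:
# MW_SERVER / MW_SCRIPT_PATH select the target wiki, the MEDIAWIKI_* vars
# supply a test account. All four values here are placeholders.
MW_SERVER=https://mcr-full.wmflabs.org \
MW_SCRIPT_PATH=/w \
MEDIAWIKI_USER=SeleniumUser \
MEDIAWIKI_PASSWORD=example-password \
npm run selenium-test
```

The login failures mentioned at [17:08:22] would be consistent with the credential variables not matching an account on the remote wiki.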
[17:11:08] addshore: Cool
[17:11:11] anyway I want to grep for doEdit* calls and make a list of actions that cause extensions to invoke them, so we have a list of things to test (don't think it's worth automating)
[17:11:12] comment was added to the ticket during our meeting https://phabricator.wikimedia.org/T196172#4264758
[17:11:54] tgr: sounds like a good plan, I was going to look through the hooks used by extensions too and see which code paths we should check
[17:11:56] but not this week, I have to catch up on paperwork :/
[17:12:04] I didn't try running them on test wiki
[17:12:58] there are a bunch of ruby browser tests, not sure if it is worth the effort to set them up
[17:13:00] tgr, addshore: extensions calling doEdit is not as critical as extensions hooking into doEdit.
[17:13:28] what i worry about most is that i slightly changed when and how a hook is called, and that this may cause trouble
[17:13:50] DanielK_WMDE: regarding having it on group0 for longer than usual, it looks like we have approval https://phabricator.wikimedia.org/T196585#4264833
[17:14:29] yeah, that too, but doEditUpdates for example has lots of options that slightly affect how the call will play out
[17:14:53] true
[17:15:31] Where would be best to list the things we want to test? wiki page? etherpad? google sheet?
[17:15:38] addshore: so before we merge, we make an extra branch, and deploy that to test? and only merge into master later?
[17:15:45] I can work on this for most of tomorrow
[17:16:15] DanielK_WMDE: I guess that is one detail still to figure out, we probably wait for .x branch to be created, then cherry pick the chain over to it and merge them there
[17:16:32] *wait for .x, then create a .x+mcr and cherry pick them there
[17:16:44] yea, that sounds good
[17:17:04] as to the list... wiki page is probably best
[17:17:10] link it from the task, please
[17:18:11] *finds the task*
[17:18:41] on mediawiki.org?
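The "grep for doEdit* calls" idea from [17:11:11] is a one-liner against an extensions tree. A self-contained sketch; the `extensions/Demo` file below is a hypothetical stand-in for a real MediaWiki checkout, created only so the grep has something to match:

```shell
# Fabricate a tiny stand-in extension tree (hypothetical; a real run
# would point grep at the extensions/ dir of an actual checkout).
mkdir -p extensions/Demo
cat > extensions/Demo/Hooks.php <<'EOF'
<?php
// hypothetical extension code invoking the edit pipeline directly
$page->doEditContent( $content, 'summary' );
$page->doEditUpdates( $revision, $user );
EOF

# List every PHP call site of the doEdit* entry points, with file:line.
grep -rn --include='*.php' -E 'doEdit(Content|Updates)' extensions/
```

Each matched `file:line` pair then maps back to a user-facing action worth adding to the manual test list.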
I could make a subpage of something but I'm not really sure if we have a root MCR page yet?
[17:28:32] addshore: i just make subpages of my user page ;)
[17:28:46] e.g. https://www.mediawiki.org/wiki/User:Daniel_Kinzler_(WMDE)/MCR-SlotRoleHandler
[17:29:05] https://www.mediawiki.org/wiki/Requests_for_comment/Multi-Content_Revisions exists, too
[17:29:11] https://www.mediawiki.org/wiki/User:Daniel_Kinzler_(WMDE)/MCR-StorageLayerTesting
[17:32:00] I created some initial sections
[17:32:07] but I'm probably not working on anything much else today
[17:44:39] anomie: https://phabricator.wikimedia.org/T196598#4262872 mail was down yesterday (re: https://gerrit.wikimedia.org/r/#/c/436970/)
[17:45:13] Good to know
[20:00:15] no_justification: all of the remaining release blockers seem to be things that existed in previous releases, not regressions
[20:01:59] Any reports from rc.2?
[20:46:49] legoktm: Indeed.
[20:46:55] But, this happens every release cycle :)
[20:47:03] Also, heh. Found this! https://github.com/Authentise/git-release
[20:47:14] Funny replacement for makerelease? :P
[20:47:30] Nah, it's abandonware
[20:48:33] huh, apparently git 2.17 will now happily let you delete the master branch. it used to warn about it
[20:48:39] Now https://github.com/hartym/git-semver is pretty cool
[20:49:03] MatmaRex: It was a lame warning tbh. There's no reason you have to have a "master"
[20:49:18] well i liked it
[20:49:26] i just deleted my master and was very confused for a few minutes
[20:49:43] Be careful when deleting remote branches period :)
[20:49:44] it will also do that even if you have the branch checked out
[20:49:49] which is super confusing
[20:50:02] Yep, but that warns on next `git status`
[20:50:11] Something about "tracks origin/foobar that no longer exists"
[20:50:26] (i mean, my local master, not remote)
[20:51:12] no_justification: it doesn't.
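The confusing state MatmaRex describes ("On branch master" with no master branch) can be reproduced: HEAD is a symbolic ref pointing at `refs/heads/master`, and if that ref is deleted out from under it, git treats the branch as unborn, so `git status` reports "No commits yet" and shows every indexed file as new. A sketch (repo contents are hypothetical; `git branch -d` would refuse to delete the checked-out branch, so `update-ref -d` is used here to bypass that safety check and mimic the deletion happening out-of-band):

```shell
# Build a throwaway repo pinned to a branch named "master",
# regardless of the local init.defaultBranch setting.
git init -q status-demo && cd status-demo
git symbolic-ref HEAD refs/heads/master
echo 'content' > file.txt
git add file.txt
git -c user.email=dev@example.org -c user.name='Dev' commit -qm 'initial commit'

# Delete the branch ref HEAD points at, bypassing `git branch -d`'s
# "checked out" safety check.
git update-ref -d refs/heads/master

git status   # "On branch master", "No commits yet", file.txt staged as new
```

HEAD still names master, so status is arguably self-consistent: you are on an (unborn) master branch again, which is exactly the surprise reported in the chat.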
i get "On branch master" "No commits yet"
[20:51:23] and every single file in the repo shown as a new file to be committed
[20:51:54] i have no master branch and yet it still thinks i am on master branch. surely that's a bug
[20:52:20] Then it's not tracking your upstream branch....?
[20:52:26] I just saw this in practice like 20 mins ago
[20:52:57] well yes? but that's not my point. i don't care about the upstream branch
[20:53:01] anyway. nevermind
[20:53:40] legoktm: YES!
[20:53:41] https://github.com/Kentzo/git-archive-all
[20:53:43] BAM
[20:53:59] oooh
[20:54:25] So then, our exclusion lists go into .gitattributes (which allows bundled extensions to exclude extra things without having to maintain a big meta-list)
[20:55:04] And release becomes:
[20:55:04] 1) Clone recursively
[20:55:04] 2) `git-archive-all`
[20:55:04] 3) Sign the releases
[20:55:09] Is it really this easy?
[21:04:12] Unrelated, #til about rebase.autoStash
[21:04:54] If you set rebase.autoStash = true and pull.rebase = true.....`git pull` behaves sanely :)
[21:08:13] no_justification: Finding things to make your work easier just in time for it not to be you that benefits from the easier life? ;-)
[21:09:06] Hey, those git things are useful anywhere I go
[21:09:34] Are you going to be doing code releases via tarballs? :-)
[21:09:45] (Either way, lovely find.)
[21:11:23] I doubt it, nobody releases code like that anymore :p
[21:20:20] * greg-g just keeps copy/pasting chad ideas into tasks: https://phabricator.wikimedia.org/T196602
[21:28:18] patch for another release blocker: https://gerrit.wikimedia.org/r/#/c/306863/
[21:28:45] and if someone has hhvm set up and wants to test the fix for https://gerrit.wikimedia.org/r/#/c/438078/ ...
[22:20:51] legoktm it seems that "* (T182366) UploadBase::checkXMLEncodingMissmatch() now works on PHP 7.1+" is added 2 times in the release notes, is that expected?
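The exclusion mechanism behind the [20:54:25] idea is the `export-ignore` attribute in `.gitattributes`, which plain `git archive` already honors; git-archive-all's contribution is walking submodules too. A self-contained sketch using plain `git archive` (file names are hypothetical stand-ins for the dev-only files a release would strip):

```shell
# Throwaway repo: one shipped file, one dev-only file, and a
# .gitattributes that marks the dev-only file export-ignore.
git init -q archive-demo && cd archive-demo
echo '<?php // shipped code' > index.php
echo '<phpunit/>' > phpunit.xml.dist
printf 'phpunit.xml.dist export-ignore\n' > .gitattributes
git add -A
git -c user.email=dev@example.org -c user.name='Dev' commit -qm 'initial commit'

# Build the tarball; export-ignore'd paths are simply omitted.
git archive --format=tar -o release.tar HEAD
tar -tf release.tar   # lists index.php (and .gitattributes), not phpunit.xml.dist
```

Because each bundled extension carries its own `.gitattributes`, the exclusions live with the code they affect, which is exactly what removes the need for a central meta-list.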
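The two settings from the [21:04:54] tip, written to a throwaway config file so the sketch has no side effects (use `--global` instead of `--file` for a real setup); with both set, `git pull` rebases instead of merging and auto-stashes any dirty working tree around the rebase:

```shell
# Demonstrate the config keys against a scratch file, not ~/.gitconfig.
git config --file demo.gitconfig pull.rebase true
git config --file demo.gitconfig rebase.autoStash true

git config --file demo.gitconfig --get rebase.autoStash   # → true
```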
:)
[22:20:51] T182366: UploadBaseTest::testCheckXMLEncodingMissmatch failing on PHP 7.1 - https://phabricator.wikimedia.org/T182366
[22:20:55] https://github.com/miraheze/mediawiki/commit/30d58769ac54f8ef3f0c54558f43b9a73a68b37e
[22:23:14] paladox: yes, it's in the standard release notes plus the "changes since rc.2" section
[22:23:23] ah ok thanks :)
[22:23:41] Twice is better than 0 ;)
[22:23:55] heh
[23:25:53] TimStarling, one thing I forgot to mention ... we made the first parsoid lang variant deploy today ... serbian, kurdish, piglatin .... crh, zhwiki are close behind ... (and html -> html API for variant conversions for restbase) ... so, at some point, we'll have a RFC about whether this new setup should be adopted for core as well ... which will come as part of the port.
[23:26:37] good stuff