[00:21:40] AaronS: is this Thursday. It looks sold out, but we could probably finagle you a ticket. [00:24:51] wtf is "deliberately unstable systems" [00:25:04] I am browsing tech conference listings and apparently this is a thing now [00:26:31] http://velocityconf.com/web-mobile-business-conf-2015/public/schedule/topic/1310 [00:29:06] ori, (1) systems for testing your software's error resilience (2) demonstration that your software isn't actually the worst in the world [00:29:45] gartner says "Deliberately unstable processes refer to processes that are designed for change and can dynamically adjust according to customer needs." [00:30:07] So basically make a startup in your company? [00:30:33] pfft. basically, adaptive systems but they needed a new buzzword [00:30:48] shall we take bets on when this phrase first appears in management jargon? [00:30:58] http://www.gartner.com/newsroom/id/2866617 -- number 7 in the list there [00:31:29] Well gartner has said it so I'm sure it's a topic of discussion from $BOSS-1 [00:40:59] bd808: did you have time to look at the xhprof patch? [00:41:29] SMalyshev: I can look now. [00:41:41] bd808: great, thanks [00:43:58] SMalyshev: So to use this you would need to put xhprof_lib into $IP? [00:44:28] bd808: not sure what $IP is but yes, you'd have to copy or symlink xhprof_lib into the docroot [00:44:45] bd808: this can of course be changed if you have a better idea [00:44:53] but it needs xhprof_lib [00:45:00] SMalyshev: *nod* $IP == the root of the MediaWiki git repo [00:45:09] aha, yes [00:45:43] It means "install path" I think [00:45:59] ok, makes sense [00:46:01] yup; $IP = getenv( 'MW_INSTALL_PATH' ); [00:47:49] <^d> install or include path, I think [00:48:41] just to illustrate, this is what we can have from a dump: http://wikidata-wdq.testme.wmflabs.org/xhprof_html/?run=54f62e402dc66&source=wikidatawiki [00:48:51] which is much better than what text produces [00:49:45] yeah.
I'm searching around for some parts to make it easier to actually use. The xhprof php files are not packaged very nicely [00:50:35] bd808: true. I just did a git checkout of the xhprof repo and symlinked required parts. maybe there's a better way [00:50:49] e.g. if we have a private composer repo we could use that [00:50:54] <^d> that's basically what we do in vagrant [00:51:03] <^d> clone and link [00:51:20] ah, didn't think about vagrant; maybe I should make a role for it [00:51:27] There was once a composer package for xhprof but it was unofficial and Evan nuked the composer.json file [00:51:36] for installing xhproflib I mean [00:51:39] we have an xhprof role actually [00:51:50] bd808: does it install xhproflib? [00:52:01] I think so ... [00:52:14] <^d> xhprofgui role [00:52:24] <^d> xhprof itself is installed by default [00:52:33] ah. let me see what this does [00:53:49] maybe then I can make that code rely on the vagrant setup [00:54:56] Do you just need this for local testing or are you hoping to make it easy to use by anyone? [00:55:25] Because the role gives good instructions on hacking it in via StartProfiler.php [00:55:33] * bd808 forgot all about that work ^d did [00:55:59] <^d> profilinggggg [00:56:15] ^d: where are the instructions? [00:56:48] vagrant roles info xhprofgui [00:56:48] <^d> https://www.mediawiki.org/wiki/Manual:Profiling and StartProfiler.sample [00:56:53] <^d> And that [00:57:28] ^d: I have a profiling-related mystery. Calls to load.php are not being profiled for me [00:57:36] and I'm not sure why not [00:57:42] <^d> Hmm [00:57:54] ^d: I don't see any reference to the xhprof gui there [00:57:57] <^d> Does it call wfLogProfilingData()? [00:58:12] <^d> SMalyshev: The on-wiki stuff doesn't mention xhprofgui, no [00:58:20] ^d: yup. and it sources WebStart.php [00:58:45] ^d: so my patch is for enabling using xhprofgui with StartProfiler etc. that's what it does [00:59:05] b/c before that I don't think there was a way...
at least I couldn't find one [00:59:12] SMalyshev: Run -- vagrant roles info xhprofgui -- and read the howto there [00:59:50] <^d> We should add support for using the xhprof util save_run() as an output method [00:59:52] It does what you are doing but outside of the Profiler class hierarchy [00:59:58] bd808: aha, I didn't see that. Well, that's another way to do the same, though more hackish I guess [01:00:01] <^d> Shouldn't be hard [01:00:27] ^d: That's what SMalyshev is trying to do with https://gerrit.wikimedia.org/r/#/c/194221/ [01:00:30] but if you think it's enough we could maybe just put it on a wiki page and let it be that way [01:00:40] but it's a little clunky at the moment [01:01:20] Maybe we could just mimic what XHProfRuns_Default::save_run does [01:01:26] I think if we already have profiler infrastructure why not use it [01:01:32] without needing the classes [01:01:52] I just don't like the install method. The rest of it looks good [01:02:47] bd808: you mean eliminate the need for loading xhprof lib? probably can be done [01:02:58] https://github.com/phacility/xhprof/blob/master/xhprof_lib/utils/xhprof_runs.php#L124 [01:03:08] this is just serialization and filename generation basically [01:03:27] *nod* [01:04:11] that would be nicer I think. Then there's no strange setup needed to pull in their files which aren't really used for much [01:04:39] Most of the logic there is for using the dumps rather than making them [01:05:29] 6MediaWiki-Core-Team, 7HHVM: HHVM with FastCGI does not support streaming output - https://phabricator.wikimedia.org/T91468#1085239 (10tstarling) a:3tstarling [01:13:40] bd808: ok, now the dependency on xhprof is gone [01:16:14] * bd808 tries it out [01:19:12] now the only thing it needs is the outputDir config [01:29:37] SMalyshev: {{done}}. I fixed one little spacing problem and added a tiny bit of docs [01:29:49] bd808: cool, thanks [01:30:09] go forth and profile! [01:32:33] speaking of spaces...
https://gerrit.wikimedia.org/r/#/c/193766/ needs reviews :) [01:46:03] legoktm: have you run it against a wider set of files to see if there are weird false positives? [01:46:57] It looks good to me, but I remember making sniffs being a case of edge case whack-a-mole at $DAYJOB-1 [02:22:17] bd808: I ran it across all of core's includes/ and everything it caught was legit [02:24:37] legoktm: do you want me to schedule an AuthManager discussion in tomorrow's RFC IRC meeting slot? [02:24:57] TimStarling: yes please! [02:28:21] robla said you wanted it [02:28:28] I will send a wikitech-l post now [04:44:04] <^d> AaronS: Fixed up the profiling stuff [04:44:09] <^d> Now I get entries like: 15.30% 481.450 3 - MWHttpRequest::__construct-FindHooks::getHooksFromOnlineDocCategory [04:44:49] <^d> Actually, should probably do something nicer like the execute() method [04:44:54] <^d> Rather than the constructor as the constant [04:47:15] TimStarling: hey, I have a C question. Two questions, really. Both about this file: . (1) Why does it not terminate reliably on CTRL-C? (I have to kill -9 it.) Does it get stuck in recv()? (2) Is it a good practice to handle SIGINT like that and to free up memory "properly" or would it be better to just have while (1) { .. } and let the OS clean up? 
[04:48:55] 1) probably 2) no, just let it die [04:49:09] that's the worst reason I've ever seen for having a SIGINT handler [04:50:33] oh, and SIGTERM as well [04:50:57] brandon suggested adding SIGTERM [04:51:11] sorry, I should tone it down a bit, I didn't realise you wrote it [04:51:24] I thought you were complaining about someone else's code [04:51:25] no, I don't mind [04:51:32] it's very nice but you just need to change this one thing [04:51:34] I'd rather know [04:51:36] haha [04:51:58] there's no need to clean up memory on SIGINT/SIGTERM because the OS will do it for you [04:52:18] it will also close all filehandles and generally clean up every resource that belongs to a program [04:53:02] the only reason for handling SIGINT/SIGTERM is if you have things like temporary files to clean up or pseudo-atomic database operations to finish [04:53:03] right, so presumably one should only trap it if there is some additional clean-up that won't get done automatically [04:53:12] ^d: seems fine except one comment [04:53:19] <^d> Yeah, I just saw [04:53:20] <^d> Fixing [04:54:18] makes sense [04:54:32] maybe you could argue that processes should free memory on SIGTERM, but it is a general policy that they don't [04:54:39] <^d> AaronS: Amended [04:54:39] no library will free memory on SIGTERM, including libc [04:55:11] <^d> 29.24% 505.153 3 - CurlHttpRequest::execute [04:55:11] <^d> 28.99% 500.793 3 - CurlHttpRequest::execute-FindHooks::getHooksFromOnlineDocCategory [04:55:14] <^d> \o/ [04:55:15] <^d> Yay [04:58:34] there's even a question as to whether you should free memory on normal termination [04:59:09] you know some programs can take seconds to free every little allocation they made, following all sorts of data structures to free everything up [04:59:29] if the program just killed itself with SIGTERM it would be done a lot faster [04:59:54] the main argument for freeing things is that it makes it easier to find leaks with valgrind etc.
[05:00:00] yeah, i was about to say [05:01:01] but i think in this case i just wanted to be scrupulous and dot all the i's on instinct [05:01:13] maybe that's why some other people do it [05:01:53] probably [05:02:54] but if you have a program that's executed many times per second, like say nagios checks, maybe there is an argument for optimising shutdown [05:03:28] from the program's point of view, maybe it has done malloc() a million times and so needs to call free() a million times [05:03:51] i think it makes more sense to let the OS handle it barring an explicit reason why it couldn't [05:04:07] but from the kernel's point of view, a process has mmap()ed half a dozen segments, and it can free each with a few instructions [05:05:01] right [05:13:37] <^d> AaronS: Heh https://gerrit.wikimedia.org/r/#/c/194269/ [05:26:26] ori: I think (after some reflection) that if you did want to support a leak checker like valgrind, a SIGTERM handler would be a reasonable way to do it [05:26:28] just not SIGINT because that's annoying [05:27:28] in your sendto() loop you don't exit if EINTR is encountered [05:27:42] that could be the reason for the stall when you press ctrl-c [05:31:38] you could respond to EINTR there by checking the execute flag [05:32:57] maybe just check it unconditionally before each sendto() since the signal may happen at another time [05:37:07] i got rid of the handler already: . I like that better, it's less code. [05:37:20] It exits right away on ctrl-C now [05:38:21] i'm still not sure why it didn't before, it still doesn't do anything on EINTR except continue to loop [05:39:10] ^d: https://gerrit.wikimedia.org/r/#/c/190949/ [05:40:13] <^d> How did all that extra caching sneak in there? [05:49:42] TimStarling: does https://gerrit.wikimedia.org/r/#/c/190748/ look sane? 
[05:51:45] yes, looks fine [05:55:10] <^d> no callers about, +2d [06:00:16] ^d: https://gerrit.wikimedia.org/r/#/c/190955/ [06:03:18] that would leave https://gerrit.wikimedia.org/r/#/c/190781/ [06:08:37] <^d> 184005 would be nice [06:24:08] <^d> AaronS: Stupid https://gerrit.wikimedia.org/r/#/c/194275/ [06:25:00] <^d> And then https://gerrit.wikimedia.org/r/#/c/194277/, but I'm not sold on the approach using wfGetCaller() [07:53:13] 6MediaWiki-Core-Team, 10MediaWiki-Configuration, 5Patch-For-Review: doMaintenance.php creates ConfigFactory::getDefaultInstance() before Setup.php is run - https://phabricator.wikimedia.org/T90680#1085506 (10Legoktm) [07:58:36] <^d> AaronS: [07:58:38] <^d> vagrant@mediawiki-vagrant:/vagrant/mediawiki/maintenance/benchmarks$ php5 bench_wfGetCaller.php [07:58:38] <^d> 100 times: function BenchWfGetCaller->wfGetCaller() : [07:58:38] <^d> 4.70ms ( 0.05ms each) [07:58:40] <^d> vagrant@mediawiki-vagrant:/vagrant/mediawiki/maintenance/benchmarks$ php5 bench_wfGetCaller.php --count=10000 [07:58:42] <^d> 10000 times: function BenchWfGetCaller->wfGetCaller() : [07:58:44] <^d> 222.72ms ( 0.02ms each) [07:58:46] <^d> vagrant@mediawiki-vagrant:/vagrant/mediawiki/maintenance/benchmarks$ php5 bench_wfGetCaller.php --count=1000000 [07:58:48] <^d> 1000000 times: function BenchWfGetCaller->wfGetCaller() : [07:58:50] <^d> 23472.74ms ( 0.02ms each) [08:17:41] 6MediaWiki-Core-Team: CSS validator survey and design work - https://phabricator.wikimedia.org/T989#1085955 (10epriestley) 5Open>3Resolved [08:24:52] 6MediaWiki-Core-Team: Fix non-CentralAuth https preference handling - https://phabricator.wikimedia.org/T1279#22305 (10epriestley) [14:22:41] 6MediaWiki-Core-Team, 10MediaWiki-API: ApiQueryPrefixSearch returns limit+1 results in generator mode - https://phabricator.wikimedia.org/T91503#1088321 (10Anomie) 3NEW a:3Anomie [15:09:47] bd808: So I see the apifeatureusage indexes are existing in prod, but it looks like they're all empty for 2015.02.06 and 
later? At least, /_cat/indices?v reports 0 docs.count and size is only 345b. [15:20:03] anomie: Logstash logging got shut off when we had the power event in eqiad. [15:20:33] bd808: ... and no one noticed for a month? [15:20:39] https://gerrit.wikimedia.org/r/#/c/191259/ [15:21:20] The elasticsearch cluster behind it is in bad need of a hardware upgrade too [15:21:43] I was spending at least an hour a day thing to keep it alive when we had all the wiki traffic on it [15:21:53] *trying to [15:22:28] apifeatureusage isn't in that cluster though. But I suppose the main logstash cluster being flaky takes out logstash entirely? [15:23:03] it causes problems, yeah. Queues back up; packets get dropped [15:24:17] 6MediaWiki-Core-Team, 10Incident-20150205-SiteOutage, 10Wikimedia-Logstash, 6operations, 5Patch-For-Review: Decouple logging infrastructure failures from MediaWiki logging - https://phabricator.wikimedia.org/T88732#1088573 (10Anomie) [15:24:25] The hardware selection got stuck last week too -- https://phabricator.wikimedia.org/T84958 -- I'll poke r.obh about that [15:25:22] * anomie marks T88732 as a blocker for T1272 [15:54:54] <^d> manybubbles: I added a note about LVS to the "good" slide. I don't hear enough stories of people running Elastic behind a load balancer, worthwhile tip to share [15:55:05] +1 [16:05:03] <^d> manybubbles: re: rolling restarts. I saw there's a talk on "upgrading your cluster" [16:05:09] <^d> Hoping it's got some useful tips [16:05:10] <^d> :) [16:15:44] 6MediaWiki-Core-Team, 10CirrusSearch, 7Epic: Expose Cirrus' category intersections more prominently - https://phabricator.wikimedia.org/T84887#1088865 (10Bawolff) Very simple first idea that we could do (in the spirit of brainstorming) In the sidebar under "tools", add a link intersect this category. When u... [16:25:15] 6MediaWiki-Core-Team, 10CirrusSearch, 7Epic: Expose Cirrus' category intersections more prominently - https://phabricator.wikimedia.org/T84887#1088931 (10Chad) >>!
In T84887#1088865, @Bawolff wrote: > Very simple first idea that we could do (in the spirit of brainstorming) > > In the sidebar under "tools",... [16:40:17] 6MediaWiki-Core-Team, 10CirrusSearch, 10Wikimedia-Hackathon-2015, 7Epic: Expose Cirrus' category intersections more prominently - https://phabricator.wikimedia.org/T84887#1088967 (10Bawolff) [17:55:36] <^d> AaronS: I guess the thing I don't like about using wfGetCaller() to profile wfShellExec() is what about when calling it with a wrapper like wfShellExecWithStderr()? [17:56:02] <^d> Maybe add an optional 6th parameter to override it? [17:56:05] * ^d sighs [17:56:28] well it has $options [17:56:47] <^d> Oh that's what that array is for [17:58:13] <^d> That makes it easy, nvm [18:05:16] <^d> AaronS: Amended, adds a profileMethod key to $options [18:15:50] 6MediaWiki-Core-Team, 10MediaWiki-API: ApiQueryPrefixSearch returns limit+1 results in generator mode - https://phabricator.wikimedia.org/T91503#1089396 (10Ricordisamoa) 5Open>3Resolved [18:26:19] csteipp: Can you cover MW-Core in SoS today? [18:26:25] I'm in the hack talk hangout [18:27:01] bd808: Yeah, no prob [18:27:02] I dumped a list of highlights in the etherpad [18:46:36] 7Blocked-on-MediaWiki-Core, 10MediaWiki-ResourceLoader, 10MediaWiki-extensions-Sentry, 6Multimedia, 5Patch-For-Review: Add startup script to automatically wrap asynchronous functions in try..catch - https://phabricator.wikimedia.org/T85262#942722 (10MarkTraceur) [18:57:12] * anomie is officially sick of T73236 [19:04:44] it has something to do with configuration? [19:04:47] too much text [19:05:44] $wgOptionalTags[$root][$prefix][$lastname] wat [19:07:20] legoktm: "too much text" is a big part of it, actually. That and he wants a config variable to decide whether to "define" core-defined tags on Special:Tags, then allow on-wiki admins to turn on or off actually applying the tags. And he wants i18n for the internal tag names rather than just the display names.
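The mechanism anomie is describing — core defining its own change tags conditionally, through the same channel extensions use — could in principle be sketched like this. This is a speculative illustration of the T73236 proposal, not merged code: `$wgCoreTagsEnabled` and the tag names are invented for the example, though `ListDefinedTags` is the real core hook that extensions register tags through.

```php
<?php
// Speculative sketch of the T73236 idea: core registers its
// redirect-change tags through the same hook extensions use, gated by a
// config variable so LocalSettings.php can opt out of defining them.
// $wgCoreTagsEnabled and the tag names are illustrative, not real settings.
$wgCoreTagsEnabled = true;

$wgHooks['ListDefinedTags'][] = function ( array &$tags ) {
	global $wgCoreTagsEnabled;
	if ( $wgCoreTagsEnabled ) {
		// Tag edits that turn a page into, or out of, a redirect
		$tags[] = 'core-new-redirect';
		$tags[] = 'core-removed-redirect';
	}
	return true;
};
```

Whether the gating config is worth the complexity is exactly the point of contention in the discussion above.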
[19:09:43] 6MediaWiki-Core-Team, 10MediaWiki-API: ApiQueryPrefixSearch returns limit+1 results in generator mode - https://phabricator.wikimedia.org/T91503#1089551 (10Anomie) It looks like the branch hasn't been cut yet, so this should be deployed to WMF wikis with 1.25wmf20. See https://www.mediawiki.org/wiki/MediaWiki_... [19:20:01] fyi I poked about https://phabricator.wikimedia.org/T90704 in SoS. Does someone know whether a lock wait timeout would cause writes to the database to get rolled back? [19:21:18] Nikerabbit: AaronS should help you with that bug. Poke him mercilessly. :) [19:22:18] We talked briefly about it in our weekly meeting but really just to nominate AaronS to help [19:23:21] bd808: ookay... [19:23:53] it sounds like you forced him ;) [19:24:11] Product managers get to twist arms a bit [19:25:13] And it seems aligned with things he's been updating [19:26:09] anomie: if we allow people to create tags on-wiki, what's the point of $wgOptionalTags? [19:28:52] legoktm: The basic idea of T73236 is that people want core to tag a revision if someone changes it from a redirect to a non-redirect or vice versa. Which means somehow these core-defined tags need to act like they were defined by the 'ListDefinedTags' hook. $wgOptionalTags is an extremely overcomplicated way to do that in the one proposal, based on the idea that LocalSettings.php should be able to completely not-define these tags if that's wanted. [19:29:35] ahhhh, thank you for the tl;dr :) [19:30:55] anomie: was a use case provided for being able to undefine core tags? [19:31:16] legoktm: "Special:Tags might get too long" [19:31:39] lol [19:31:44] that's what pagers are for! [19:31:49] Partially since he sees VE almost getting to the point of adding tags for "added-letter-A", "added-letter-B", etc. [19:32:02] wait what [19:32:19] That's a slight exaggeration. But he did talk about adding images, adding templates, adding tables, etc.
[19:32:33] "Because VE sometimes has bugs with stuff" [19:35:16] good whatever part of the day [19:35:30] o/ [19:42:54] Query: REPLACE INTO `spoofuser` (su_name,su_normalized,su_legal,su_error) VALUES ('WikiGuy2517','v2:W1K1GUY2517','1',NULL) [19:42:54] Function: SpoofUser::batchRecord [19:42:54] Error: 1213 Deadlock found when trying to get lock; try restarting transaction (10.64.16.22) [19:42:58] legoktm: ^ is that fixed? [19:47:59] https://phabricator.wikimedia.org/T90967 [19:48:03] it is (on master [19:48:04] ) [19:48:15] Nikerabbit: lock waits only cancel the last statement...but if an exception bubbled up in MW it would cause rollback [19:50:51] AaronS: so you think that is what is happening here? Since the other option mentioned in the bug seems not be possible to me. [19:52:12] legoktm: is https://gerrit.wikimedia.org/r/#/c/193763/1/maintenance/fixStuckGlobalRename.php safe to use in production? [19:52:28] It looks sane at a glance, but I don't really have the time right now to test lcoally [19:55:02] Nikerabbit: do you catch DBError when it times out? [19:58:52] hoo: yes, I already used it in production. Last night I updated CA to master on both branches [19:59:26] AaronS: no the code is not expecting DBError there [19:59:50] not even the spanish inquisition? [20:00:11] bunnies [20:00:57] legoktm: Ok, will use it :D [20:01:00] Maybe merge it after [20:05:08] Nikerabbit: not sure you want to though, you'd have to check $db->wasLockTimeout() and even still the rollback behavior depends on the dbms [20:05:49] mysql used to rollback everything, then it switched to trying to just cancel the statement when possible (more standard like) and tweaked that as recently as 5.0 (e.g. 
lock wait timeouts) [20:06:08] * AaronS recalls lots of people complaining about mysql rollback behavior changes [20:06:35] the app needs a reliable way of knowing whether it was a statement or trx rollback [20:06:46] usually best to just avoid lock timeouts in the first place ;) [20:07:08] yeah [20:07:32] so there are too many writes/deletes to that table, right? [20:08:18] legoktm: :S Your script picked the wrong renamed [20:08:20] * renamer [20:08:40] uhh :/ [20:09:10] Not an issue in that case: No page move happened [20:09:12] but :S [20:09:12] did the query fail? [20:09:17] or? [20:09:22] No, it picked up an old rename [20:09:25] user got renamed before [20:09:54] ah, I didn't account for that [20:10:13] we could just order by timestamp? [20:10:34] yeah [20:10:43] but also it seemed to have queried by the old name [20:13:06] Commented [20:13:30] Not sure about the index for getArg [20:14:47] <^d> AaronS: https://gerrit.wikimedia.org/r/#/c/194277/ [20:27:23] legoktm, hoo: I just went through a series of strange events. Got an email while I was out earlier about a stalled rename for User:Gabriel2517. I checked GlobalRenameProgress on my phone, and it showed the rename as waiting on wikidata. I got home 20 minutes later and checked, and the logs have all changed and the rename was completed. [20:27:39] It all seems fixed now, it's just...ghosts in the rename queue [20:28:18] Keegan: I think that was hoo? [20:28:51] Oh yes [20:28:55] Sorry [20:28:59] should have logged that [20:29:09] thanks, hoo. It's still odd the way the progress tables were showing [20:29:35] While Gabriel->WikiGuy was stuck, it was showing the current rename. Now it shows the old rename from December [20:29:55] And GlobalRenameProgress/WikiGuywhateverthenumberis is the correct log [20:30:37] Keegan: Yeah :S It assumes the new name is given for logs [20:30:43] but the progress works for both new and old [20:31:02] Guess we can fix that now that we have the log relation [20:31:08] Interesting.
Thanks again [20:31:16] Back when we initially came up with that, we didn't have that [20:32:17] I hope this doesn't remain a daily thing :/ [20:32:58] The bug that caused this failure is fixed on master [20:33:49] Excellent [20:34:14] tbh, this makes me think that we should be *very* careful with global merge... [20:34:34] 6MediaWiki-Core-Team, 10MediaWiki-RfCs, 7HTTPS: RFC: MediaWiki HTTPS policy - https://phabricator.wikimedia.org/T75953#1089998 (10Spage) The 2014-11-26 RfC discussion was https://tools.wmflabs.org/meetbot/wikimedia-office/2014/wikimedia-office.2014-11-26-21.01.html [20:35:38] hoo: indeed. I just sent yet another request for engineering resources for that tool [20:37:13] hoo: WikiGuy2517 just emailed and said he's still getting the "rename in progress" when attempting to log in. Should he wait a bit? [20:37:46] eh, did the memcache entry not get cleared? [20:38:09] I thought the job cleared the cache when it's done? [20:38:16] Keegan: How long ago was that? [20:38:25] it should [20:38:55] 7 minutes [20:38:56] 6MediaWiki-Core-Team, 10MediaWiki-RfCs, 7HTTPS: RFC: MediaWiki HTTPS policy - https://phabricator.wikimedia.org/T75953#1090010 (10Spage) ArchiCom meeting: - @csteipp , can you update the RFC - @hashar status of your action items in the meetbot discussion [20:39:23] legoktm: It only clears the old name [20:39:40] Unless promotetoglobal is set [20:41:41] Keegan: Ok, tell them to try again; I manually cleared the cache [20:41:52] Great [20:44:10] ah fu... [20:44:16] from: wikidatawiki, to: Gabriel2517, renamer: DerHexer, movepages: 1, suppressredirects: [20:44:23] So it's totally screwed [20:44:42] The maint. script doesn't call the parent __construct [20:44:49] thus it got confused about arguments [20:45:09] MatmaRex: Know anything about mediawiki.jqueryMsg? Specifically, any sane way to have its parse() allow , , , and so on? [20:46:38] anomie: does adding it to the allowedHtmlElements array in /resources/src/mediawiki/mediawiki.jqueryMsg.js work?
if yes, there; if not, then this is a multi-day endeavor :) [20:47:01] legoktm: Any idea on how to fix that? [20:47:03] MatmaRex: That probably would [20:47:19] * anomie hopes such a patch would be merged, though [20:49:11] hoo: fix what? [20:49:34] The fact that I started a rename for user wikidatawiki to Gabriel2517 [20:49:54] The script took my --wiki argument as the old username [20:50:09] >.< [20:50:26] That's because of the missing parent constructor call [20:51:35] I called it like: [20:51:51] mwscript ../../../../../home/legoktm/fixStuckGlobalRename.php --wiki=ltwiki --logwiki=metawiki "Tsuchiya Hikaru" "Gray eyes" [20:51:51] foreachwikiindblist stuck_renames.dblist ../../../../../home/legoktm/fixStuckGlobalRename.php --logwiki=metawiki "Tsuchiya Hikaru" "Gray eyes" [20:52:18] probably foreachwiki passes --wiki as the last argument [20:52:27] hoo@terbium:~$ mwscript ../../../home/hoo/fixStuckGlobalRename.php --wiki wikidatawiki --logwiki metawiki Gabriel2517 WikiGuy2517 [20:52:29] I think we should just get rid of args and use explicit option parameters: --oldname="Foo" [20:52:34] Or that [20:52:51] what if you do --wiki=wikidatawiki --logwiki=metawiki instead of spaces? [20:53:34] Guess that would work [20:53:41] but it's only a workaround [20:56:02] MatmaRex: Oh. mw.jqueryMsg.getMessageFunction isn't deprecated, just accessing it via "gM" (whatever that is). [20:58:54] anomie: 19, SpecialUserlogin has 19 possible Hooks::run() calls [20:59:10] bd808: ugh [20:59:47] that is a sign from the past that this rewrite/rethink is long overdue [21:11:01] bd808: Do you have an answer for Daniel's question about PostAuth..Filter? I think you named it [21:15:08] I mumbled something... [21:18:53] AaronS: I had to revert the patch regarding the atomic transaction as it was breaking builds in extensions. I filed the (possibly related) remaining db error as https://phabricator.wikimedia.org/T91567 [21:29:11] sounds like an RL bug? why would it insert NULL for the module?
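The failure mode hoo hit above — positional arguments shifting because the script never ran `Maintenance`'s constructor — plus legoktm's suggestion of explicit named options would look roughly like this. A sketch only, not the actual fixStuckGlobalRename.php patch; the option names and execute() body are illustrative.

```php
<?php
// Sketch of the fix discussed above, not the actual patch. Without the
// parent::__construct() call, Maintenance never sets up its standard
// option handling, so things like --wiki can end up consumed as
// positional arguments.
require_once __DIR__ . '/Maintenance.php';

class FixStuckGlobalRename extends Maintenance {
	public function __construct() {
		parent::__construct(); // crucial: initializes standard options like --wiki
		$this->addOption( 'logwiki', 'Wiki where the rename was logged', true, true );
		// Named options instead of positional args, per the suggestion above
		$this->addOption( 'oldname', 'Old user name', true, true );
		$this->addOption( 'newname', 'New user name', true, true );
	}

	public function execute() {
		$oldName = $this->getOption( 'oldname' );
		$newName = $this->getOption( 'newname' );
		// ... locate the stuck rename for $oldName and re-queue its jobs ...
	}
}
```

With named options, the invocation is unambiguous no matter how foreachwiki appends --wiki: `mwscript fixStuckGlobalRename.php --wiki=ltwiki --logwiki=metawiki --oldname="Tsuchiya Hikaru" --newname="Gray eyes"`.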
[21:34:14] AaronS: Hm. yeah [21:34:19] Those are test sample data [21:34:22] I'll look into that one [21:34:34] AaronS: So the atomic transactions, what was that supposed to address? [21:34:50] I feel like there was an error that it was going to fix, but I don't see the error anymore [21:35:46] it's mostly for sanity *if* you turn off DBO_TRX, so both or neither writes happen [21:36:22] (the two writes to the two module tables) [21:37:13] AaronS: Right [21:37:44] in the case where startAtomic made a trx, it should just rollback if the section fails [21:38:24] in general, that's tricky, as other stuff may have done writes before then and some DB errors only cancel the last statement [21:38:42] bd808, csteipp, legoktm: So I suppose our next step is to start making a skeleton. Should we have a meeting tomorrow to figure out details on how we're going to coordinate that? [21:38:59] related are the "uncommitted trx" errors [21:39:10] anomie: Yup. I'll set one up [21:39:25] I guess SAVEPOINT would work, but that's a bit heavy [21:39:31] prolly not worth it [21:40:07] in theory it could let you rollback all the stuff from the last startAtomic() on the same nesting level [21:40:37] atm, it's hard to recover from DB errors in startAtomic()/endAtomic() sections [21:40:47] bd808: ok [21:40:58] so it just bubbles up and fails the request...which of course made qunit fail [21:41:03] legoktm, csteipp: would you guys need a room tomorrow? [21:41:13] I'll be at home [21:41:18] I'll be in the office [21:41:21] getting a room in SF is the hardest part of this [21:41:37] I can stay at my desk [21:42:08] 1PM SF; 4PM anomie work? [21:42:14] bd808: Works for me [21:43:08] wfm [21:43:49] {{done}} [21:44:08] +1 [21:51:55] 6MediaWiki-Core-Team, 5Patch-For-Review: Create a local job class that enqueues jobs to final location - https://phabricator.wikimedia.org/T89308#1090313 (10aaron) [22:16:25] csteipp: What does tgr need help with on that ticket? [22:16:50] bd808: I'm not really sure.
Mark just asked for someone from core to help. [22:17:12] Looks like Timo is giving him code review... [22:18:07] * bd808 will check with tgr [22:18:13] AaronS: Hm.. yeah, that's a bit beyond my understanding of the db code :) [22:18:35] AaronS: All I know is it was consistently happening on each build [22:18:45] But it wasn't triggered by mediawiki-core, though. [22:19:35] 6MediaWiki-Core-Team: CSS validator survey and design work - https://phabricator.wikimedia.org/T989#1090369 (10Aklapper) 5Resolved>3Open [22:29:13] One 90 minute quarterly review for MW core (incl. Multimedia)/Services (incl. Parsoid)/Ops/RelEng/ECT. Insanity! [22:30:31] Is that like 2.5 minutes per staff member per quarter [22:30:46] may as well just ask for a TPS report [22:47:12] ori, SMalyshev: I don't see your names in the hackathon travel spreadsheet. Today is the last normal day to apply to either/both the EU and Wikimania hackathons -- http://goo.gl/forms/kuX7mofzg0 [22:48:33] bd808: I don't think I'll be going - given there is limited space, and I have a bunch of work here and I have no specific ideas what to do there, which are required to register... [22:48:49] SMalyshev: :( ok [22:49:22] It would be good for you to go to at least one I think, but up to you obviously [22:49:50] meeting the dev community and getting nerd talk time is pretty valuable in my opinion [22:50:27] * bd808 found ori's name and thus un-pings him [22:53:36] AaronS, csteipp: how about you guys? Did you get applications in? [22:53:56] I'm on the fence... And I didn't find a buddy. [22:54:10] Don't worry about that part [22:54:19] bd808: it looks like it's more fit for people that have concrete things to propose and a team (or at least somebody) to work with, so I don't want to take a place while I have no idea what to do there [22:55:05] SMalyshev: Can you fix xml handling in php for me :) [22:55:34] csteipp: if you tell me what needs fixing I can try.
Don't have to go to Europe for that :) [22:55:48] 6MediaWiki-Core-Team, 10MediaWiki-extensions-OAuth, 7Epic: Support a nice sso experience with MediaWiki's OAuth - https://phabricator.wikimedia.org/T86869#1090550 (10Tgr) [22:57:45] bd808: I’ve been ‘turned down’ by 3 people, it’s all a bit strange [22:58:11] No love for the Panda? [22:58:22] mostly they aren’t coming to the hackathons :) [22:58:27] Tim L and Magnus... [22:58:45] ah. you need to go to both just to help with labs stuff [22:59:05] I'm sure I got in for mw-vagrant install parties [22:59:30] yeah, am proposing a workshop... [22:59:41] which reminds me, I should start pinging people about USB keys and such [22:59:45] it’s hard to find non-wmf buddies when we hire so many of them... [22:59:51] fhocutts has been hired! [22:59:58] yeah? cool [23:00:10] I think so? Has a @wm.o email now, so... [23:01:12] bd808: hey, thanks a lot for looking out for me and checking for my name [23:01:35] ori: :) yw. I know this is the sort of thing that slips by sometimes [23:03:08] bd808: do you have any opinions on https://phabricator.wikimedia.org/T91176, specifically on whether we should cherry-pick the patch or just update composer? https://gerrit.wikimedia.org/r/#/c/194364/ is what the update looks like and it is a lot of changes :/ [23:05:45] manybubbles: do you still have the VE window open? [23:05:54] no [23:06:03] was on ops with MatmaRex - I closed it:( [23:06:09] I tried to copy and paste the whole page [23:06:10] bummer...there is a way of rescuing the text [23:06:14] yeah [23:10:27] bd808: I started the form then cancelled...I mostly just want to go to Wikimania and meet people...not so much the hackathons [23:10:58] * AaronS should travel to France one day on his own... [23:11:44] AaronS: *nod* I think the story I heard was that all the Wikimania travel was lumped in with the hackathon this year. You might check with Quim or Rachel [23:12:00] yes, it is [23:12:08] ?
so that france hackathon - if you don't go there isn't wikimania? [23:13:00] no, not like that [23:13:26] they just put the wikimania hackathon and wikimania general together [23:14:36] I think in years past there have been different application cycles (but going to the wikimania hackathon always implied going to the rest of the conference) [23:15:07] legoktm: reading.... [23:16:50] legoktm: lgtm honestly. I'm not sure why we aren't just using the composer phar but that's a whole separate question [23:21:26] ah [23:22:15] bd808: also I think the buddy thing is fine, though it doesn't work as well for me [23:22:31] * autismcat is interested to see how that turns out [23:22:57] eh. It's an experiment [23:23:26] if they get 20% more interaction between staff and volunteers I bet it will be considered a success [23:25:27] hoo: was logging added to renameusersql? https://phabricator.wikimedia.org/T89681 [23:25:48] No, not yet [23:25:49] bd808: I think it better fits the idea of a hackathon [23:26:23] is that Special:log logging or wfDebug logging? [23:26:26] instead of a "let's talk to coworkers we don't see" venue [23:27:03] I guess that's just allstaff, would be nice to have more :) [23:27:52] bd808: heh: legoktm@integration-slave1005:~$ /srv/deployment/integration/composer/vendor/bin/composer --version [23:27:52] Composer version @package_branch_alias_version@ (@package_version@) @release_date@ [23:28:08] lol