[00:09:54] _joe_: I recall adding some comments to https://noc.wikimedia.org/conf/highlight.php?file=jobqueue-eqiad.php
[00:28:19] MediaWiki-API, MediaWiki-Core-Team: Clean up ApiResult and ApiFormatXml, create new formatversion - https://phabricator.wikimedia.org/T76728#1045274 (Mattflaschen)
[00:57:33] csteipp: How hard/impossible would it be to make a tool in toollabs that did bulk uploads like GWToolset does, using OAuth to commons to submit each image?
[00:57:44] VisualEditor, WikiEditor, VisualEditor-Performance, Analytics, MediaWiki-Core-Team: Apply Schema:Edit instrumentation to WikiEditor - https://phabricator.wikimedia.org/T88027#1045363 (Krenair) TODO: * I should figure out what we can do about action.abort.type * Deal with the action.saveFailure.type schema iss...
[00:58:05] I'm not remembering if GWT gets special upload size limits per image
[01:14:26] wikidata-query-service, MediaWiki-Core-Team: Investigate BigData for WDQ - https://phabricator.wikimedia.org/T88717#1045422 (Manybubbles) @Haasepeter - Stas and I are in Berlin now and should be able to talk pretty much any time during the day there. Next week I'm pretty free as well, just send me/us an invi...
[01:27:41] bd808: Pretty trivial
[01:28:36] I think, unless they've done something crazy since I looked at it last.
[01:28:40] csteipp: really? It seems like having GWT as an external thing would make many people's lives simpler
[01:28:57] bd808: Yeah, we recommended that early on
[01:29:09] >_<
[01:29:30] bd808: But the sponsoring organization specified it had to run on our cluster for "reliability"
[01:29:52] And we weren't going to use a misc server for it
[01:30:16] * csteipp_afk really goes to dinner this time
[01:30:22] o/
[01:35:08] http://wiki.wiki/Special:Version
[01:35:32] 1.23 and running in the root namespace.
tsk tsk
[02:35:02] TimStarling, _joe_: http://hhvm.com/blog/8405/coming-soon-in-hhvm
[02:35:20] most significant: "A restructuring of the FastCGI server, which fixed several memory leaks and reliability issues, especially under very high load."
[04:21:10] legoktm: Do you know if we have a bug filed to track adding vendor stuff to the 1.25 tarballs?
[04:21:20] * bd808 isn't finding an obvious one
[04:31:54] Librarization, MediaWiki-Documentation, MediaWiki-Core-Team: Document new library requirements for logging interface, cdb, xhprof etc. - https://phabricator.wikimedia.org/T74163#1045637 (bd808)
[05:29:00] MediaWiki-extensions-OAuth, MediaWiki-Core-Team: Unclear phrasing on OAuth help page - https://phabricator.wikimedia.org/T62131#664374 (bd808)
[05:41:57] MediaWiki-extensions-OAuth, MediaWiki-Core-Team: Support a nice sso experience with MediaWiki's OAuth - https://phabricator.wikimedia.org/T86869#1045743 (bd808) p: Triage→Normal
[05:48:36] MediaWiki-Core-Team, Wikimedia-Logstash, operations, Incident-20150205-SiteOutage: Decouple logging infrastructure failures from MediaWiki logging - https://phabricator.wikimedia.org/T88732#1019000 (bd808)
[10:45:25] Datasets-General-or-Unknown, Services, MediaWiki-Core-Team: Improve Wikimedia dumping infrastructure - https://phabricator.wikimedia.org/T88728#1046090 (ArielGlenn) I've added a first round of notes to that page. I'd like to invite the members of the xmldatadumps-l list to weigh in on user stories and on freq...
[11:52:12] TimStarling: Thx a lot for the sqlite debugging
[11:52:24] np
[11:52:38] TimStarling: I'm trying to narrow down when that code was introduced, or why it's there. Do you know why one couldn't wait for a reserved lock?
[11:53:02] or maybe the better question is why exFlag is 0?
[11:53:02] yeah, there was a reply on the sqlite-users mailing list
[11:53:07] Oh nice
[11:53:07] it's to avoid a deadlock
[11:54:02] TimStarling: So there's never a wait for write queries, only for select?
[11:54:34] Or are we doing a particular kind of statement that triggers this? I'm still not quite sure how the different locks map to SQL statements from the code perspective.
[11:55:27] it'll wait on the write query if the transaction starts with a write query
[11:55:57] if you begin a transaction and then do a select, then you are in danger of hitting this issue if you issue a write query later in the same transaction
[11:56:42] TimStarling: Ah, when the same PHP thread is doing both?
[11:56:49] It'd be waiting for the outer call from itself?
[11:57:27] yes, the same PHP thread does the read and the write
[11:58:22] deadlocks happen whenever you have two separate locks which are acquired in a different order
[11:58:38] Looking at the mailing thread now
[11:58:46] Yeah
[11:59:08] the BEGIN IMMEDIATE workaround would probably work for testing at least
[11:59:21] just make all transactions BEGIN IMMEDIATE instead of BEGIN
[11:59:49] it'll serialize the transactions instead of having concurrent readers, but that's OK for this test, right?
[12:01:25] TimStarling: I've redefined how far I'd expect to dig in this area of software. A minute :)
[12:02:49] I suppose fewer concurrent readers would affect production. Perhaps not within synchronous PHP, but this would affect not just the individual process but also different requests handled by the same server.
[12:04:05] It's not one query that is the issue though, I think (or have you identified a particular or small set of queries causing this?). So perhaps we'd do it for all sqlite dbs? Or a subclass of SqliteDatabase? Assuming we can transform to IMMEDIATE in a generic way without changing other code.
[12:04:57] DatabaseSqlite*
[12:07:54] Hm.. we're using beginTransaction() from PDO, not query('BEGIN') like in the base class such as for MySQL.
[12:09:10] https://bugs.php.net/42766
[12:09:28] Looks like Drupal is doing raw querying instead https://api.drupal.org/api/drupal/core!vendor!symfony!http-foundation!Symfony!Component!HttpFoundation!Session!Storage!Handler!PdoSessionHandler.php/function/PdoSessionHandler%3A%3AbeginTransaction/8
[12:11:44] https://www.drupal.org/node/1120020#comment-6271416
[12:12:03] That looks familiar :)
[12:12:26] Except there it hit a deadlock.
[13:42:41] is anomie able to get online?
[13:43:01] my house isn't
[13:50:44] MediaWiki-Core-Team, wikidata-query-service: Investigate BigData for WDQ - https://phabricator.wikimedia.org/T88717#1046235 (Beebs.systap) >>! In T88717#1045422, @Manybubbles wrote: > @Haasepeter - Stas and I are in Berlin now and should be able to talk pretty much any time during the day there. Next week I'...
[14:55:52] CirrusSearch, MediaWiki-Core-Team: CirrusSearch: Allow *ORs* of incategory to be sent via a post or get parameter - https://phabricator.wikimedia.org/T89823#1046311 (Manybubbles) NEW a: Manybubbles
[16:32:21] bd808: yeah, but I already fixed that
[16:33:13] bd808: https://phabricator.wikimedia.org/T74726
[16:33:23] legoktm: Cool. close https://phabricator.wikimedia.org/T89793 as a dup then?
[16:34:15] done
[16:34:23] thx!
[17:21:45] operations, MediaWiki-Core-Team: Review Graphite scaling options - https://phabricator.wikimedia.org/T1018#1046987 (fgiunchedi) Open→Invalid: no activity and SSD machines are provisioned, resolving
[17:48:18] CirrusSearch, MediaWiki-Core-Team: CirrusSearch: Ignore ( and ) in prefix search - https://phabricator.wikimedia.org/T89201#1029734 (Quiddity)
[18:08:34] SUL-Finalization, MediaWiki-Core-Team: Check for invalid usernames - https://phabricator.wikimedia.org/T89495#1047180 (Legoktm)
[18:09:12] SUL-Finalization, Wikimedia-General-or-Unknown, MediaWiki-Core-Team: Invalid usernames on Wikimedia web sites - https://phabricator.wikimedia.org/T5507#78065 (Legoktm)
[18:09:42] Services, MediaWiki-Core-Team, Datasets-General-or-Unknown: Improve Wikimedia dumping infrastructure - https://phabricator.wikimedia.org/T88728#1047186 (bd808) >>! In T88728#1046090, @ArielGlenn wrote: > I've added a first round of notes to that page. I'd like to invite the members of the xmldatadumps-l list...
[18:18:24] MediaWiki-extensions-OAuth, MediaWiki-Core-Team: Support a nice sso experience with MediaWiki's OAuth - https://phabricator.wikimedia.org/T86869#1047211 (csteipp)
[18:18:25] MediaWiki-extensions-OAuth, MediaWiki-Core-Team: Unclear phrasing on OAuth help page - https://phabricator.wikimedia.org/T62131#1047209 (csteipp) Open→Resolved lgtm
[18:28:10] legoktm: what SUL emails did you send last week? I want to get it in SoS notes
[18:33:25] bd808: they're asking people to confirm their email and merge their accounts. https://phabricator.wikimedia.org/T73241 is the bug
[18:34:00] legoktm: thx!
[19:07:45] ^d: how would you feel about some Cirrus review time to help out Flow? -- https://gerrit.wikimedia.org/r/#/q/status:open+project:mediawiki/extensions/CirrusSearch+owner:%22Matthias+Mullie+%253Cmmullie%2540wikimedia.org%253E%22,n,z
[19:08:20] <^d> It's a chain of commits.
I left a comment on the top parent on Feb 4th wanting some changes before I merged.
[19:08:48] * bd808 looks
[19:09:09] <^d> Here's the ones we've already merged for this: https://gerrit.wikimedia.org/r/#/q/owner:mmullie%2540wikimedia.org+Cirrus+is:merged,n,z :)
[19:09:43] ^d: *nod* I got poked as PM so I'm PM'ing :)
[19:16:35] ^d: counter poke delivered
[19:40:46] SUL-Finalization, Wikimedia-General-or-Unknown, MediaWiki-Core-Team: Invalid usernames on Wikimedia web sites - https://phabricator.wikimedia.org/T5507#1047501 (Keegan) a: Keegan
[20:16:56] <^d> AaronS: If you've got a few minutes, could use some input. https://phabricator.wikimedia.org/T54333
[20:31:11] odd
[20:31:17] SUL-Finalization, Wikimedia-General-or-Unknown, MediaWiki-Core-Team: Invalid usernames on Wikimedia web sites - https://phabricator.wikimedia.org/T5507#1047903 (Keegan) There are 218 local badusername accounts that have to be renamed for SUL finalization, so it's time to settle this and consider appropriate n...
[20:33:41] ^d: https://gerrit.wikimedia.org/r/#/c/191381/
[21:00:26] legoktm: Rachel said you needed a review of https://phabricator.wikimedia.org/T76774?
[21:00:36] You just want me to say yes, I'm ok with that?
[21:01:11] csteipp: uh, well I'll have a patch later tonight that will need review from either you or hoo
[21:01:54] legoktm: Cool.
Add me when you're done and I'll try to look at it
[21:02:10] will do
[21:32:03] ^d: https://gerrit.wikimedia.org/r/#/c/191229/ btw
[21:55:26] MediaWiki-API, MediaWiki-Core-Team: API blocks query module causes PHP undefined property notice if bkprop parameter does not include 'timestamp' - https://phabricator.wikimedia.org/T89893#1048201 (Anomie) a: Anomie
[21:56:11] MediaWiki-API, MediaWiki-Core-Team: API blocks query module causes PHP undefined property notice if bkprop parameter does not include 'timestamp' - https://phabricator.wikimedia.org/T89893#1048100 (Anomie) Note it only happens when continuation is needed, and in that case the continuation value is broken too.
[22:06:14] AaronS, https://gerrit.wikimedia.org/r/#/c/174059/ completely broke eval.php for me
[22:14:05] We did it! The Needs Review/Feedback column in the work board is finally overflowing (27/25). Time to do some reviews, folks
[22:15:00] * bd808 found 3 finished things to move to done
[22:15:40] https://phabricator.wikimedia.org/project/board/37/query/all/ -- review things for anomie if you can. An hour of your day would make all the difference
[22:20:13] MaxSem: that was a pretty crazy change
[22:20:31] fancy trying to detect the end of a statement by shelling out to detect parse errors
[22:20:47] maybe just revert that?
[22:20:58] we can probably do a better job with a few regexes
[22:21:02] sounds reasonable
[22:21:08] I wonder how php -a does it
[22:22:10] https://gerrit.wikimedia.org/r/#/c/191486/
[22:22:23] regexes won't work, better to just live without the feature
[22:25:43] it could be done with token_get_all() but I think that is not implemented in HHVM
[22:26:16] that would still be a lot of code, not worth it
[22:26:27] * AaronS already looked at that
[22:26:46] bd808: https://gerrit.wikimedia.org/r/#/q/owner:%22Aaron+Schulz%22+status:open,n,z ;)
[22:28:07] ok folks, give AaronS code reviews too
[22:28:57] You folks looking at how eval.php blows up under hhvm?
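The problem the reverted eval.php change was wrestling with above (deciding whether a typed line is a complete statement or needs more input before evaluating) is what token_get_all() would be used for in PHP. As an analogy only, not the eval.php code: Python ships this exact REPL heuristic in its standard library as codeop.compile_command(), and a sketch of how a line-based evaluator would use it looks like this (the helper name is invented):

```python
import codeop

def needs_more_input(source: str) -> bool:
    """True if `source` is valid so far but not yet a complete statement."""
    try:
        # compile_command returns None when the parser wants more lines,
        # a code object when the statement is complete, and raises
        # SyntaxError when the input is genuinely malformed.
        return codeop.compile_command(source) is None
    except SyntaxError:
        return False  # a hard parse error, not merely unfinished input

print(needs_more_input("def f():"))                  # True: block still open
print(needs_more_input("def f():\n    return 1\n"))  # False: complete
print(needs_more_input("1 +* 2"))                    # False: real syntax error
```

The three-way distinction (complete / incomplete / broken) is exactly what shelling out to a parse-error check cannot give you, since an incomplete statement and a malformed one both come back as "parse error".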
[22:29:18] well, by default $wgMaxShellMemory is too low for HHVM to start up
[22:29:31] I was trying to figure out how to unbreak that a couple days ago and just resorted to bypassing it
[22:29:33] hence my cryptic comment on the eval.php change
[22:29:52] ah that makes sense (low mem limit)
[22:30:03] bd808, I just reverted locally
[22:30:20] woooorkss grrret
[22:31:43] ok, I'm reviewing things
[22:32:00] thanks Tim
[22:49:23] I'm trying to refresh my memory on cascade protection since I seem to remember having some fairly intense discussions with werdna about why it was necessary to do cascade protection updates on view
[22:49:41] but it predates the start of my IRC logs
[22:49:45] https://www.mediawiki.org/wiki/Special:Code/MediaWiki/19095
[22:49:54] and it predates CodeReview
[22:52:43] it's coming back to me, anyway
[22:52:59] I think the theory was that main page vandalism is a really serious thing and you have to be extra careful to not allow it
[22:53:03] at all ever
[22:54:36] also there is the issue of date magic words used on the main page
[22:55:29] well the prioritized job should handle that (it can have its own runner per-box if needed)
[22:55:52] as for date stuff, that's quite exploitable as is without some workarounds
[22:56:33] the main problem is vandalizing just ahead of time before the new "page of the day" template is used... one could just noinclude the day+1 to lock it to avoid that
[22:57:11] I considered having the queue parse both now and 1 hour in the future and merge the links (or track the speculative ones elsewhere) and cascade on those links... but that seemed overkill
[22:57:37] that would also require delayed jobs
[23:04:04] maybe I'll put a note on [[Talk:Main Page]] about cascade protecting the future version
[23:05:28] there's https://en.wikipedia.org/wiki/Wikipedia:Main_Page/Tomorrow
[23:05:43] which is cascade protected
[23:06:30] probably the same trick
[23:07:13] yeah, ideally it would be plus a day minus a few minutes, instead of plus a day
[23:07:58] since /Tomorrow could unprotect the new version shortly before or after the real main page takes over
[23:09:59] so we should create https://en.wikipedia.org/wiki/Wikipedia:Main_Page/Tomorrow_minus_a_few_minutes ?
[23:12:59] I'm just going to edit /Tomorrow, I don't think anyone will mind
[23:21:08] hmm, but nobody will ever view /Tomorrow so triggerOpportunisticLinksUpdate() won't be called
[23:21:29] well we could just set up a cron job to curl the page?
[23:21:54] someone has a bot that purges the main page every 15min I think
[23:23:04] is /Tomorrow used on the main page itself in any way?
[23:24:13] I don't think so?
[23:24:43] there's no link to it from the main page
[23:30:38] you think the best idea is to just do {{#if:{{Wikipedia:Main_Page/Tomorrow}}||}}?
[23:31:37] any faster ways to make a templatelinks entry that you can think of?
[23:33:22] style="display: none;" ?
[23:33:58] that is not faster
[23:33:58] oh no, that would still uselessly include the HTML
[23:34:28] legoktm just casually proposing sending out an extra gigabit of HTML traffic
[23:34:35] :P
[23:35:13] putting things in a #if condition is a nice way to include a template without giving any output
[23:37:13] I checked the timing, it is fast enough
[23:50:38] swtaarrs: yt?
[23:51:08] idle for >100hrs, not likely.
[23:53:42] TimStarling: any chance you could look at mw1141? HHVM locked up on that host. I depooled it but did not restart HHVM to make this possible to investigate. This is an issue we've seen at least a dozen times in prod -- an app server locking up with lots of threads in HPHP::StatCache::refresh, happening right after a deploy which touches lots of files.
[23:53:45] the task is
[23:54:48] maybe it's just further evidence that we should do a rolling restart of the app servers whenever we push a new branch to production -- but there's still no particularly clean way to do that
[23:55:17] there's a backtrace attached to the task.
I tried to get a core dump but gdb segfaulted.
[23:56:19] sounds like a couple of hours of work...
[23:56:39] the rolling restart or chasing the StatCache thing?
[23:57:50] StatCache
[23:58:50] kk. It's not critical, because it happens only occasionally, and when it happens it only affects one or two app servers.
[23:58:56] I'll leave mw1141 depooled and let _joe_ know
[23:59:15] it would be fun, but I did just promise bd808 I would do lots of reviews ;)