[00:04:06] any API experts around?
[00:04:33] Brad should be offline by now I think
[00:04:46] yurikR on -dev
[00:45:25] bd808: I guess we should hold the bagostuff patch until we merge your MWLoggerFactory one
[00:46:15] I'm cool either way. The loggerfactory patch has a followup that will need to be merged concurrently to put a shim class back in
[00:47:07] Otherwise we will have to race in a config change for beta & prod since they both use MWLogger in their logging config
[00:47:29] but we should fix the usage in bagostuff as soon as the other lands
[00:47:43] I really only want to keep that shim around for a couple of weeks
[00:47:52] class_exists('MWLoggerFactory') ? 'MWLoggerFactory' : 'MWLogger' ?
[00:47:59] heh
[00:48:26] I guess that could be done but it's more awkward than the shim class
[00:49:01] In related news, prod is all monolog for logging now
[00:49:16] and it's killing logstash at the moment
[00:50:25] I hope only because we are double logging via log2udp relay packets and the redis queue
[00:54:09] legoktm: Do you want to make the config change that needs to go with the bagostuff patch or should I?
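A minimal sketch of the shim class discussed above, assuming MWLoggerFactory keeps the getInstance() entry point that MWLogger exposed; the actual follow-up patch may differ:

```php
<?php
// Hypothetical back-compat shim: keeps the old MWLogger name resolving
// while the beta/prod logging config migrates to MWLoggerFactory. Meant
// to be deleted after the couple-of-week transition window mentioned above.
class MWLogger {
	/**
	 * @param string $channel Logging channel name
	 * @return \Psr\Log\LoggerInterface
	 */
	public static function getInstance( $channel ) {
		return MWLoggerFactory::getInstance( $channel );
	}
}
```

Compared with the class_exists() ternary floated at 00:47:52, the shim keeps every call site unchanged, which is why it is the less awkward option.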
[01:21:04] MediaWiki-Search, MediaWiki-Core-Team: Special:Search on mw.org throwing an error - Advanced search checkboxes broken - https://phabricator.wikimedia.org/T78553#981001 (Liuxinyu970226) Can't work again on zhwiktionary and zhwikibooks
[01:29:04] bd808: I can do it
[01:29:16] kewl
[01:29:30] legoktm: ygpm (unrelated)
[02:10:47] <^demon|away> robla: You should send more e-mails like that.
[02:10:49] <^demon|away> Now I'm on the edge of my seat
[02:10:51] <^demon|away> :)
[02:11:19] heh :-)
[05:02:10] robla: my idea for mediawiki 2.0 was not to start with an empty repository..
[05:11:03] the model i think we should consider emulating is python 3. the major version change was not about jettisoning code and starting from scratch but about breaking backward-compatibility guarantees in the interest of righting a large backlog of wrong design decisions
[05:50:59] io.js hit 1.0: http://www.reddit.com/r/programming/comments/2sd86j/iojs_100/
[05:51:19] top comment, showing we are not the only ones with snarky feelings: "Now I can js my io while I am building async microservices"
[11:26:34] MediaWiki-Core-Team, Continuous-Integration, MediaWiki-Configuration: Update jenkins for extension registration changes - https://phabricator.wikimedia.org/T86359#981543 (hashar) The jobs that test multiple extensions together (mediawiki-extensions-{hhvm,zend}) clone a varying list of extensions to the wor...
[13:43:36] MediaWiki-Core-Team, MediaWiki-Search: Special:Search on mw.org throwing an error - Advanced search checkboxes broken - https://phabricator.wikimedia.org/T78553#981797 (Aklapper) @Liuxinyu970226: Can you provide a link / steps to reproduce please?
[13:47:32] MediaWiki-Core-Team, MediaWiki-Search: Special:Search on mw.org throwing an error - Advanced search checkboxes broken - https://phabricator.wikimedia.org/T78553#981802 (Liuxinyu970226) >>! In T78553#981797, @Aklapper wrote: > @Liuxinyu970226: Can you provide a link / steps to reproduce please? Try "全选 (all)"...
[14:30:02] MediaWiki-Core-Team, MediaWiki-Search: Special:Search on mw.org throwing an error - Advanced search checkboxes broken - https://phabricator.wikimedia.org/T78553#981872 (Manybubbles) Resolved→Open
[14:31:34] MediaWiki-Core-Team, MediaWiki-Search: Special:Search on mw.org throwing an error - Advanced search checkboxes broken - https://phabricator.wikimedia.org/T78553#848137 (Manybubbles) Reopened - it is indeed broken on zhwiktionary but not enwiktionary. No idea why yet.
[14:32:30] MediaWiki-Core-Team, MediaWiki-Search: Special:Search on mw.org throwing an error - Advanced search checkboxes broken - https://phabricator.wikimedia.org/T78553#981883 (Manybubbles) Also - it's clicking the buttons that is broken. Clicking on the check boxes still seems to work.
[14:35:43] MediaWiki-Core-Team, MediaWiki-Search: Special:Search on mw.org throwing an error - Advanced search checkboxes broken - https://phabricator.wikimedia.org/T78553#981895 (Manybubbles) a: rmoen→Manybubbles The all/none buttons still work on zhwikipedia but do not on zhwikibooks or zhwiktionary.....
[14:35:48] manybubbles: More pony requests, huh.
[14:36:01] ?
[14:36:12] T86982
[14:37:37] yeah - it's a pain.
[15:43:28] manybubbles: good morning
[15:43:34] morning!
[15:43:39] manybubbles: I have added maven verify to the jenkins job for wikidata/Gremlin
[15:43:39] thanks for that change!
[15:43:41] seems to work
[15:43:43] :D
[15:43:47] wonderful
[15:43:53] I'm happy to be able to make luis happy
[15:44:36] :D
[15:45:44] <^demon|away> jidanni is on Phab!
[15:45:52] * ^demon|away grabs some popcorn
[15:47:41] ^demon|away: \O/
[15:47:49] I am not missing triaging his bug reports
[15:48:23] <^demon|away> Heh, but you did secretly miss reading them :p
[15:48:41] yeah
[15:48:54] ganglieri (spelling?) had a bunch of RTL issues reported
[15:49:01] but they were very hard to understand as well :/
[15:49:16] <^demon|away> gangleri's bugs are impossible to decipher.
[15:49:49] I think he's a Hebrew speaker and uses some translation system to produce the bug reports
[15:55:22] week-end time, see you 1/1 on tuesday :]
[16:20:29] <^demon|away> manybubbles: I merged all but 4 of Matthias' patches
[16:20:36] cool
[16:21:15] no breaky?
[16:22:44] <^demon|away> Not as far as I can tell locally
[16:22:57] <^demon|away> I've run updateSearchIndexConfig with about every combination of parameters I can think of
[16:37:52] bd808: Any news on logging fatals?
[16:52:41] hoo: not really yet. Right now I'm trying to get logstash back to a live state. I crushed it with non-lossy log input from MW
[17:00:58] <^d> Hmm, I'm curious if we could get a little more performance out of the autoloader in production by stripping out classes we'll /never/ use.
[17:01:10] <^d> (installer stuff, alternative db backends, alternative search backends, etc)
[17:02:56] we also added a bunch of maint scripts that are never going to get autoloaded to the autoloader when we switched to the new script
[17:03:31] <^d> It's probably a micro-optimization, but it's just a thought :)
[17:03:42] meh. it's a php array that will be cached by the runtime
[17:03:49] AST cache, that is
[17:04:14] <^d> *nod* yeah
[17:04:30] Looking at slimming down the startup cost of each and every extension would probably go farther
[17:04:54] every request loads every extension entry point script
[17:05:28] which is kind of insane but needed for hook registration I guess
[17:05:53] <^d> Oh yeah I was going to uninstall another extension on prod.
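For context on the autoloader exchange above, a minimal sketch of classmap autoloading in the style of MediaWiki's $wgAutoloadClasses; the entries and paths are illustrative. Because the map is a plain PHP array held in the opcode/AST cache, pruning unused entries saves little per request, which is why it is called a micro-optimization:

```php
<?php
// Illustrative classmap autoloader: one array lookup per class load.
// Unused entries (installer, alternative backends, maintenance scripts)
// cost a little memory in the cached array but no per-request work
// unless the class is actually requested.
$wgAutoloadClasses = [
	'ApiMain'        => __DIR__ . '/includes/api/ApiMain.php',
	'DatabaseMysqli' => __DIR__ . '/includes/db/DatabaseMysqli.php',
	// ...thousands of generated entries...
];

spl_autoload_register( function ( $class ) use ( $wgAutoloadClasses ) {
	if ( isset( $wgAutoloadClasses[$class] ) ) {
		require_once $wgAutoloadClasses[$class];
	}
} );
```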
[17:14:36] E:MarkAsHelpful is only enabled on testwiki...
[17:19:26] what logging can we kill for prod? api-feature-usage.log is 23G; CirrusSearch-all.log is 8.6G; localhost.log is 2.8G; runJobs.log is 5.8G; xff.log is 13G; are any of those not useful in logstash?
[17:20:18] we could rig the log config to keep sending to udp2log but not to logstash for any or all of them
[17:20:36] <^d> I don't think we need CirrusSearch-all in logstash.
[17:21:41] I'll work on a config patch that lets us set logstash=>false to keep things from replicating to logstash
[17:24:05] <^d> legoktm: Removing
[17:25:05] woot
[17:25:07] bd808: https://gerrit.wikimedia.org/r/185457 :)
[17:25:51] :) And a full developer!
[17:25:58] * bd808 hugs legoktm
[17:27:08] :>
[17:27:28] <^d> bd808: Welcome to the big kid table :)
[17:34:55] bd808: I filed https://phabricator.wikimedia.org/T86990 last night... how hard would it be to compile a list of all enabled extensions and pass it to one wfLoadExtensions() call instead of individual include_once calls?
[17:35:33] hmmm...
[17:37:36] well actually it could be done pretty easily. Make a dir to hold 1 file per enabled extension; populate via puppet like we do for the php includes; change the localsettings loader to scan that dir to create the list and then load.
[17:37:46] <^d> We don't handle dependencies yet really. Some extensions are pretty stupid and require a specific order
[17:37:57] ugh that
[17:38:07] but it's true
[17:38:09] <^d> So I think we'd want to add the call and then start adding extensions once we're sure they're sane and don't fuck with global state
[17:38:39] or depend on classes from another extension to init right?
[17:39:02] that's one of the ways it happens (if $globalfoo then do something different)
[17:39:03] <^d> Yeah, if they call things from their init file then that's bad.
[17:39:13] <^d> As long as they delay to $wgExtensionFunctions or something we should be ok
[17:39:43] We should audit all of the prod extensions for that
[17:40:00] using wgExtensionFunctions callbacks
[17:40:01] Well I'm assuming that we'd fix those issues while creating the extension.json file
[17:40:21] I gotcha
[17:40:32] ExtensionFunctions are a hack, and we should be able to get rid of a bunch when switching to extension registration
[17:40:57] look, docs! https://www.mediawiki.org/wiki/Extension_registration#Attributes
[17:41:41] also, some extensions use ExtensionFunctions, others use a SetupAfterCache hook
[17:42:58] "When VisualEditor wants to access this attribute it uses:" so both sides have to change at once?
[17:43:26] bd808: We actually use a unison.
[17:43:47] *nod* back-compat
[17:43:55] <^d> Look, it's a wild James_F!
[17:43:58] * ^d throws a pokeball
[17:43:59] Yup. legoktm's idea, I'm just the grunt that did the typing. :-)
[17:44:05] ^d: Wild? I'm furious. ;-)
[17:44:44] bd808: See https://gerrit.wikimedia.org/r/#/c/183524/4/VisualEditor.hooks.php line 429+.
[17:44:57] bd808: or https://gerrit.wikimedia.org/r/#/c/180554/3/includes/specials/SpecialTrackingCategories.php,cm :P
[17:45:09] Yup.
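A hedged sketch of the directory-scan idea bd808 floats above for T86990. wfLoadExtensions() is the real extension-registration entry point; the directory path and one-marker-file-per-extension convention are hypothetical:

```php
<?php
// Hypothetical LocalSettings loader fragment: puppet drops one empty
// marker file per enabled extension into this directory, and the loader
// turns the listing into a single wfLoadExtensions() call instead of
// many include_once statements. The ordering/dependency problems noted
// in the discussion above are deliberately not handled here.
$enabledDir = '/srv/mediawiki/wmf-config/extensions-enabled'; // hypothetical path
$extensions = [];
foreach ( glob( "$enabledDir/*" ) ?: [] as $marker ) {
	$extensions[] = basename( $marker );
}
wfLoadExtensions( $extensions );
```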
[17:45:19] ^d, legoktm: Does https://gerrit.wikimedia.org/r/#/c/185462/1 look reasonable? Or should I flip it and make logstash opt-in rather than opt-out?
[17:45:47] opt-out seems a bit better to me but unsure
[17:46:07] <^d> I think I want one logging system to rule them all :p
[17:46:26] me too but we're not ready
[17:46:35] as the current meltdown shows
[17:47:34] * bd808 is concerned that the redis list on logstash1001 is neither shrinking nor growing
[17:47:40] I think opt-out is fine
[17:48:06] <^d> +1
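A sketch of what the opt-out knob agreed on above might look like, loosely patterned on the wmf-config logging setup; the channel names come from the discussion, but the exact array structure is an assumption, not the contents of change 185462:

```php
// Hypothetical per-channel logging config: udp2log keeps flowing for
// every channel, while 'logstash' => false stops the noisy ones from
// replicating into logstash (opt-out being the default just +1'd above).
$wmgMonologChannels = [
	'api-feature-usage' => [ 'udp2log' => 'info',  'logstash' => false ],
	'CirrusSearch'      => [ 'udp2log' => 'debug', 'logstash' => false ],
	'exception'         => [ 'udp2log' => 'debug', 'logstash' => true ],
];
```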
[18:33:12] legoktm: You're on the global renamer mailing list?
[18:33:25] A user brought up stuff there, just wanted to make sure you saw
[18:35:44] yes
[18:40:39] legoktm: https://gerrit.wikimedia.org/r/185477
[18:41:41] how is it confusing?
[18:41:47] the majority of accounts are unlocked
[18:42:03] Yeah, but people can no longer "find" that information now on e.g. spam bots
[18:42:11] so they're not sure whether they're locked or not
[18:42:15] if it's not locked, it's unlocked
[18:42:56] I can see how it'll be confusing if you're used to seeing Locked: No, but once you realize that it shows up for locked accounts it shouldn't be confusing, right...?
[18:43:07] Well, probably
[18:43:12] It's like we don't show a "This user is not blocked" red box on Special:Contribs if you're unblocked
[18:43:23] Right, but that is a red obvious box
[18:45:20] Special:Contribs now shows if accounts are locked too
[19:50:12] MediaWiki-Core-Team, MediaWiki-API: Clean up core API data formats for format=json2 - https://phabricator.wikimedia.org/T87053#982727 (Anomie) NEW a: Anomie
[19:50:21] MediaWiki-Core-Team, MediaWiki-API: Clean up ApiResult and ApiFormatXml, create format=json2 - https://phabricator.wikimedia.org/T76728#982738 (Anomie)
[19:50:24] MediaWiki-Core-Team, MediaWiki-API: Clean up core API data formats for format=json2 - https://phabricator.wikimedia.org/T87053#982737 (Anomie)
[20:10:17] ^d, legoktm, MaxSem: sanity check review plz? https://gerrit.wikimedia.org/r/#/c/185463/3
[20:11:39] bd808: it's the same as the beta change right?
[20:12:06] yes, but copy-pasta into the other logging file
[20:12:13] and with some config to use it
[20:12:32] +1'd
[20:12:46] sweet
[20:20:54] so I think there's a race condition in the globalrenamequeue that's causing emails to not be sent
[20:21:12] we start the rename, and then look up their email to notify them that their request was approved
[20:21:33] but the rename could have already wiped out their info, so there would be no email to send to.
[20:23:45] bah. really?
[20:24:31] but the bug report says that a rejected one didn't get notified either :/
[20:24:38] https://phabricator.wikimedia.org/T777#981886
[20:27:47] "maybe have an interface for users requesting renames to check up on their request" we have that right?
[20:27:58] * bd808 tries to remember what he helped build
[20:28:17] I think if you go to Special:GlobalRenameRequest it should tell you it's queued
[20:28:49] http://sulfinalization.wmflabs.org/wiki/Special:GlobalRenameRequest/status as LocalUserWithPendingRequest/wiki
[20:29:02] "Your username change request is currently in progress and awaiting approval by a steward. You will be notified by email when the request is processed."
[20:29:09] and the from and to names
[20:29:43] And the redirect to status is automatic as long as the request hasn't been processed
[20:30:05] once it is processed then you'd get the form again
[20:31:36] and the email send uses the same user object that was loaded before the processing started
[20:31:53] but is the email actually loaded out of the db?
[20:32:11] hmmm.... maybe not?
[20:32:34] User::newFromName() just creates a stub user object
[20:33:39] and I only call $oldUser->getName() in the non-global + approved case
[20:34:39] and when testing, probably the old user was always warm in cache :/
[20:34:58] stupid smart object stub crap
[20:35:28] doesn't User::getEmail() assume they have an account on meta?
[20:35:59] dunno. that's how you guys told me to send the mail as I recall
[20:37:50] hmmm
[20:42:23] User->sendMail turns around and calls UserMailer::send
[20:43:40] that function doesn't seem like it cares about the user (doesn't even have a handle to it)
[20:44:02] yeah, UserMailer just deals with MailAddress's
[20:44:53] and MailAddress::newFromUser just copies the fields over (email, name, real name)
[20:45:14] so, no, nothing to do with meta in there
[20:45:46] but if $oldUser wasn't unstubbed and the cache was cold I could see how that might blow shit up
[20:46:37] so SpecialGlobalRenameQueue::doResolveRequest should probably unstub $oldUser right away
[20:47:17] A line like: $oldUser->getEmail(); should be good enough
[20:47:43] *nod*
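A minimal sketch of the fix settled on above, assuming the rough shape of SpecialGlobalRenameQueue::doResolveRequest; $oldName, $subject, and $body are illustrative placeholders, not the method's real variables:

```php
// Hypothetical fragment of doResolveRequest(): force the lazily-loaded
// stub User to hit the database *before* the rename can wipe the row,
// so the approval/rejection email still has an address to go to.
// Assumes $oldName resolves to a valid existing account.
$oldUser = User::newFromName( $oldName );
$oldEmail = $oldUser->getEmail(); // unstubs: loads email (and name) now
// ... approve or reject the request, possibly renaming the account ...
if ( $oldEmail !== '' ) {
	// Safe even post-rename: the fields are already cached on the object.
	$oldUser->sendMail( $subject, $body );
}
```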
[20:58:23] Beta-Cluster, MediaWiki-Core-Team, operations: Create a terbium clone for the beta cluster - https://phabricator.wikimedia.org/T87036#982926 (hashar) Seems this should go to #operations, #hhvm and #mediawiki-core-team and be rephrased to: "convert work machine (tin, terbium) to Trusty and hhvm usage" + me...
[21:24:07] Beta-Cluster, MediaWiki-Core-Team, operations: Convert work machines (tin, terbium) to Trusty and hhvm usage - https://phabricator.wikimedia.org/T87036#982985 (greg)
[21:32:59] Beta-Cluster, MediaWiki-Core-Team, operations: Convert work machines (tin, terbium) to Trusty and hhvm usage - https://phabricator.wikimedia.org/T87036#983029 (hashar)
[21:34:21] Beta-Cluster, MediaWiki-Core-Team, operations: Convert work machines (tin, terbium) to Trusty and hhvm usage - https://phabricator.wikimedia.org/T87036#982188 (hashar)
[21:34:55] Beta-Cluster, MediaWiki-Core-Team, operations: Convert work machines (tin, terbium) to Trusty and hhvm usage - https://phabricator.wikimedia.org/T87036#982188 (hashar) Thanks Greg. I have added some steps to the task description. I could not find a project/task related to the Trusty migration :-/
[22:02:58] bd808: https://gerrit.wikimedia.org/r/185553
[22:04:03] +2
[22:05:14] Beta-Cluster, MediaWiki-Core-Team, operations: Convert work machines (tin, terbium) to Trusty and hhvm usage - https://phabricator.wikimedia.org/T87036#983124 (EBernhardson)
[22:05:17] thanks
[22:05:59] Beta-Cluster, MediaWiki-Core-Team, operations: Convert work machines (tin, terbium) to Trusty and hhvm usage - https://phabricator.wikimedia.org/T87036#982188 (EBernhardson) updated description again, to clarify that the scripts don't have any dependency on hhvm; it is being used for its gdb-like debug consol...
[22:30:44] legoktm: http://ganglia.wikimedia.org/latest/graph_all_periods.php?c=Logstash%20cluster%20eqiad&m=cpu_report&r=hour&s=by%20name&hc=4&mc=2&st=1421447393&g=mem_report&z=large
[22:30:54] the ram is all gone :(
[22:30:57] ......... :|
[22:31:05] http://downloadmoreram.com/
[22:31:18] redis is putting back pressure on the tiny amount of headspace we had
[22:31:38] which I certainly should have thought about
[22:32:58] I'm thinking about dumping the redis queues (~1.3M events in each one) but holding off until I think the world can live without it
[23:05:53] bd808: is there a single redis instance or three?
[23:07:11] 3
[23:07:19] one on each logstash node
[23:07:27] each is backed up roughly equally
[23:07:57] the wikis randomly pick one to send all messages for a given request cycle to
[23:08:28] at this point I think dropping the backlog won't really hurt much
[23:08:37] hopefully it will keep up going forward
[23:08:58] the total list sizes have been slowly dropping for several hours
[23:09:11] so it's consuming a bit more than is coming in
[23:09:29] but there is a pile of things from when the es servers were oom
[23:10:49] I'm thinking something like `LTRIM -50000 -1`
[23:11:44] Or LTRIM -50000 9999999
[23:14:02] ori: stupid idea or jfdi?
[23:45:44] bd808: is https://www.mediawiki.org/w/index.php?title=Manual%3ADeveloping_libraries&diff=1357547&oldid=1356615 ok with you?
[23:52:18] legoktm: yeah looks fine
[23:54:57] I love that "Chad will taunt you mercilessly if you do." has survived all of the edits so far :)
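For reference, a sketch of the trim proposed at 23:10 above; the key name is hypothetical, since the chat omits it. Negative LTRIM indexes count from the tail of the list, so this keeps only the newest 50000 entries and discards the older backlog:

```
# Hypothetical invocation, run once per logstash node; 'logstash:events'
# stands in for the real queue key. LTRIM key -50000 -1 keeps elements
# from the 50000th-from-last through the last, dropping everything older.
redis-cli -h logstash1001 LTRIM logstash:events -50000 -1
```

The "LTRIM -50000 9999999" variant at 23:11 is equivalent in effect here: a stop index past the end of the list is clamped to the last element.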