[00:01:06] well, http://graphite.wikimedia.org/render/?width=588&height=315&_salt=1409869403.785&title=Rate%20of%20MediaWiki%20exceptions%20and%20fatal%20errors%2C%20past%2024%20hours&vtitle=Errors%20per%20second&target=alias(mw.errors.fatal.rate%2C%22Fatal%20errors%22)&target=alias(mw.errors.exception.rate%2C%22Exceptions%22) is totally blank
[00:01:09] ori: i should probably give it a go on my windows box first
[00:01:14] and may have been since the vanadium decom
[00:01:20] so maybe it just needs to be removed
[00:01:32] "Patch aside this doesn't happen if I used the bat file instead of the sh file (in cygwin)."
[00:01:43] doh. that's why it didn't work
[00:01:56] damn you cygwin
[00:02:08] and damn you git shell vs cmd.exe
[00:02:25] * bd808 dams bill gates for good measure
[00:02:34] heh, and I was only using that to avoid the cryptic permission error (sometimes "file not found")
[00:02:43] oh well. got a slight improvement out of it and now i know a little more about win32ole
[00:02:48] I just randomly tried the bat file in admin mode and it worked
[00:02:53] not really sure how i feel about that though
[00:03:29] marxarelli: next you can learn PHP 2.0 -- http://casadebender.com/reference/other/phpfi2.html
[00:03:35] almost as useful
[00:03:36] maybe we could put a bail in setup.sh for cygwin
[00:04:20] that language is going places :P
[00:04:35] i can feel it
[00:05:12] marxarelli: does anything else not work in cygwin for setup.sh? I've set up vagrant several times on that (doing the cpu part manually)
[00:05:31] though it's still a pile of eggshells probably
[00:05:48] dancecat: hmm... not sure
[00:06:13] it should all work other than that call to a windows cmd.exe program
[00:06:17] dancecat: have you tried out the ps?
[00:06:18] https://gerrit.wikimedia.org/r/#/c/206332/
[00:09:07] gah too much i/o atm, I'll try soon
[00:11:15] dancecat: do multiple vagrant vms conflict for any silly reason?
[00:11:34] ori: why are we talking to ourselves?
[00:11:46] marxarelli that is
[00:11:53] I guess there are port rules
[00:12:12] That's all that should conflict
[00:12:18] ok
[00:12:20] and you can set different ports
[00:12:23] yep
[00:12:29] you might want to set a different static ip on the second
[00:12:31] I'm running 3-4 VMs at any given time
[00:13:44] bd808: http://www.newegg.com/Product/Product.aspx?Item=N82E16819117404&nm_mc=KNC-GoogleAdwords-PC&cm_mmc=KNC-GoogleAdwords-PC-_-pla-_-Processors+-+Desktops-_-N82E16819117404&gclid=CO6x7o7UjcUCFcVgfgodJp8AiA&gclsrc=aw.ds ;)
[00:15:00] I have a 2.3 GHz Intel Core i7 (4 physical/8 virtual)
[00:15:57] my personal laptop is a bit faster I think. I know it has 16G instead of the puny 8G the WMF sprung for
[00:16:04] * dancecat has a core i5 at 4GHz
[00:16:17] i7 is soo much better
[00:16:17] my manager laptop even has 8 gigs of ram
[00:16:39] greg-g: You need it for all those google docs ;)
[00:16:46] greg-g: ask how much JamesF has
[00:16:50] bd808: unfortunately true
[00:17:09] dancecat: how much does James_F have?
[00:17:20] bd808: i7 wasn't worth it for me, but things will get cheaper
[00:17:30] gotta run (massive headache right now) but, dancecat, let me know if that ps works for you so we can merge it
[00:17:33] and then I will flock to newegg
[00:17:43] marxarelli: sorry I gave that to
[00:17:45] ...you
[00:17:53] greg-g: 16GB :)
[00:18:11] dancecat: I guess PMs need more than "line managers"
[00:18:14] or was it 32...hrm, can't remember
[00:18:20] it's almost impressive. on the verge of hallucinating i think (on second thought, maybe i should keep staring at this screen)
[00:18:21] one of those, some stupidly huge number
[00:18:25] I have 16 too; it's only civilized
[00:18:36] OS X and Chrome know how to utilise it
[00:18:58] marxarelli: it sounds like fun at first, but then, the screen takes over
[00:19:18] greg-g: i'll stick to eating san pedro
[00:19:26] see ya! o/
[00:19:29] :)
[00:19:36] http://i.imgur.com/HP0qhB0.png
[00:40:41] dancecat: I need the RAM because of the ultra-memory-hungry need to run citoid, mathoid, parsoid, texvmc and a bunch of other things in Vagrant. :-(
[00:41:05] James_F: is it 16 or 32?
[00:41:27] James_F: haven't you heard that a $5 VPS should run all that? ;)
[00:42:30] dancecat: 32.
[00:42:42] bd808: It /can/ run that, but I am exceedingly impatient. :-)
[00:42:58] I can't even do 32, only two dimm slots
[00:43:27] Oh, the laptop is only 16, sorry.
[00:43:29] The desktop is 32.
[00:43:33] The other desktop is also 32.
[00:43:45] though ... http://www.anandtech.com/show/7742/im-intelligent-memory-to-release-16gb-unregistered-ddr3-modules
[02:45:56] 6MediaWiki-API-Team, 6Analytics-Engineering, 10MediaWiki-API, 10Wikipedia-Android-App, and 2 others: Add page_id and namespace to X-Analytics header in App / api requests - https://phabricator.wikimedia.org/T92875#1233108 (10Mattflaschen) >>! In T92875#1122371, @dr0ptp4kt wrote: > I prefer header enrichmen...
[03:14:06] 6MediaWiki-API-Team, 6Analytics-Engineering, 10MediaWiki-API, 10Wikipedia-Android-App, and 2 others: Add page_id and namespace to X-Analytics header in App / api requests - https://phabricator.wikimedia.org/T92875#1233139 (10Mattflaschen) I intended to mention: This caused {T97104}.
[05:19:28] Nemo_bis: if you're around, could you nuke https://commons.wikimedia.org/wiki/User:Niteshift/MVneu/2015_April_11-20 ?
[06:46:57] Mecklenburg-Vorpommern brings memories of primary school games
[16:22:37] <^d> dancecat: Trivial: https://gerrit.wikimedia.org/r/206403
[16:27:31] <^d> thx
[16:30:00] ^d: https://gerrit.wikimedia.org/r/#/c/206310/
[16:42:17] ^d: https://gerrit.wikimedia.org/r/#/c/206357/
[17:16:45] bd808: I tried to figure out from the api team board what the open tasks are and failed miserably
[17:17:24] so if there is more stuff in the oauth/authmanager projects I can help with, could you throw them my way?
[17:18:50] tgr: sure. I'll take a look today. I honestly don't know where we are at either
[17:19:08] I may send you some code reviews for things that have unmerged patches
[17:46:45] bd808: Yay not a Google Group!
[17:46:45] :)
[17:46:45] I one my first battle
[17:46:53] *won
[17:58:01] * hoo notices that people are moving more and more stuff to github
[17:58:04] not happy about that
[18:01:18] like what?
[18:01:57] iOS is the next one, probably
[18:02:43] lame
[18:02:56] like, why?
[18:02:56] I think the composer merge stuff is living there
[18:03:02] also just noticed jquery.uls is there
[18:03:08] yeah jquery.uls is known
[18:03:22] yuvipanda raged at them a lot, didn't help
[18:03:36] but with the ios apps, they're not going to get contributions of objective c code on github anyway
[18:03:37] That's extra bad given only WMF people have access to stuff there it seems
[18:03:37] s
[18:04:10] hoo: ?
[18:04:51] WMDE sorta pioneered the whole move-to-gh approach if we're pointing fingers :P
[18:05:24] ori: That wasn't our decision ...
[18:06:37] * my
[18:06:43] composer merge is on github and not gerrit. That's on me
[18:06:53] I was hoping for 3rd party contributions
[18:07:02] so far 1 patch with no tests :(
[18:09:11] bd808: it hasn't worked out quite as well with 3rd party contributors, in my opinion
[18:09:27] most of our volunteers are on gerrit
[18:09:33] yeah... hasn't worked well for us either
[18:09:36] * aude doesn't make decisions
[18:09:45] also it's a bit tedious to give volunteers access there
[18:09:46] for merge plugin I was hoping for the larger Composer community
[18:09:52] maybe...
[18:10:09] It could make sense in some rare cases
[18:10:12] * bd808 shrugs
[18:10:19] a small experiment
[18:10:19] but most of the time it doesn't work
[18:10:24] at least it didn't for us
[18:10:32] we barely got outside contributors
[18:10:39] people have to pay attention to pull requests
[18:10:41] and most of these were wikimedians also on gerrit anyway
[18:10:50] * aude has trouble keeping up with gerrit + github
[18:11:08] so, stuff gets forgotten on github :/
[18:11:18] Following up on github is easier for me, because I have a dashboard with changes I actually care about
[18:11:21] or should care about
[18:19:57] hoo: aude can you comment about your bad experiences on https://phabricator.wikimedia.org/T95749 , plz :)
[18:20:01] bd808: ^ you too :)
[18:21:12] speaking of pull requests: https://github.com/davedash/parental-leave/pull/17
[18:22:24] We're about to head off for Pizza, sorry
[18:22:32] will try to remember and comment on it some other time
[18:22:36] feel free to poke me
[18:24:45] greg-g: i was interviewing a candidate for the perf team from sweden
[18:24:56] and i asked him how he first got involved in perf engineering
[18:25:26] his answer was like, well, when my wife and i had the twins three years ago, i was on paternity leave for a year, so i had a lot of time to think etc..
[18:27:23] (•_•)
[18:27:53] * ori took 5 days because i was new and afraid of losing my job
[18:28:00] (pre-WMF)
[18:38:37] bd808, https://gerrit.wikimedia.org/r/#/c/206424/ - wrong ticket?
[18:39:00] oops yeah
[18:39:28] ori: https://gerrit.wikimedia.org/r/#/c/206409/
[18:43:04] ori: yeah, it's hard (paternity leave)
[19:07:43] ori: I'm gonna try and figure out https://www.mediawiki.org/wiki/User:Krinkle/1.25 / https://phabricator.wikimedia.org/T94810
[19:08:03] Kind of need a url though to see where it's coming from
[19:09:35] Krinkle: by the way, I doubt your friends in editing will ever let you escape, since you are too productive
[19:09:48] but there is _always_ an empty seat waiting for you in the perf team if you are ever interested
[19:10:43] I assure you my intentions surrounding this right now are resourceloader and ve-driven, but I'll take it into consideration.
[19:11:11] Thanks :)
[19:25:50] bd808: Hm.. what would you say is an appropriate way of getting the http url of a request logged alongside a debug message in MediaWiki? MWExceptionHandler inserts it, but none of our regular logger formats do it.
[19:26:50] Checking if you have an idea for this, otherwise I'll stick with wfDebugLog( group, text ) and append request url to text.
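[Editor's note: a minimal sketch of the stopgap Krinkle mentions above, appending the request URL to a wfDebugLog() message or passing it in the context array so a formatter can surface it as its own field. The channel name and message text are invented for illustration; this is not the actual patch being discussed.]

```php
<?php
// Editor's sketch, not the actual patch discussed above. The channel
// name 'resourceloader' and the message text are illustrative only.

$url = RequestContext::getMain()->getRequest()->getRequestURL();

// Option 1: the quick hack: append the URL to the message text.
wfDebugLog( 'resourceloader', "Stale module cache detected; url=$url" );

// Option 2: if the MediaWiki 1.25-era PSR-3 logging work is available,
// wfDebugLog() also accepts a context array, so the URL can be indexed
// as a separate field instead of being buried in the text.
wfDebugLog(
	'resourceloader',
	'Stale module cache detected',
	'all',
	[ 'url' => $url ]
);
```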
[19:27:41] Krinkle: the new logger config in prod could log it for everything easily but I didn't set it up like that for the text logs
[19:27:50] everything in logstash has the url
[19:28:09] Yeah, I do remember it having that in logstash
[19:28:12] Curious how it did that
[19:28:16] the text logs would add it if the url were stuck in the message context
[19:28:48] I'm creating a wmf branch patch today to add logging for a particular RL cache bug, and need the url to debug further.
[19:29:16] The WebProcessor adds the url to the leg event -- https://github.com/wikimedia/operations-mediawiki-config/blob/master/wmf-config/logging.php#L47
[19:29:23] *log event
[19:29:48] https://github.com/Seldaek/monolog/blob/master/src/Monolog/Processor/WebProcessor.php#L29-L35
[19:30:10] bd808: But we don't have logstash back yet for main debuglog groups, and this is not applied to logs sent to fluorine?
[19:30:21] right
[19:30:29] OK
[19:30:39] I expect logstash to be back for everything sometime next week
[19:30:48] * bd808 knocks wood
[19:31:22] knocks log
[19:31:22] We could easily add the url to everything on fluorine now if it's wanted
[19:31:58] Putting a %url% in this format would do that -- https://github.com/wikimedia/operations-mediawiki-config/blob/master/wmf-config/logging.php#L93
[19:32:11] %extra.url% I guess
[19:33:02] the %context% bit will include anything that is passed in the $context array to a LoggerInterface call
[19:34:22] bd808: Hm.. interesting
[19:34:36] bd808: I'm not comfortable changing that format though. Is nothing using this?
[19:35:13] All the logs on fluorine and beta cluster as of yesterday afternoon
[19:35:44] Right
[19:35:52] It's the same format as before, but now powered by the monolog formatter
[19:36:00] I mean, is nothing relying on the format of those logs
[19:36:36] it's a lot like the former format. As far as I know nothing critical is relying on the format
[19:36:55] I asked and there weren't many responses
[19:37:00] cool
[19:37:06] Well, having a url in there would be very very useful.
[19:37:18] Though it's a long value to add, might make it harder to scan
[19:37:36] my guess is that our logs are so ugly that mostly the things that will be broken are awk scripts people made to find things that they wanted
[19:37:45] yeah
[19:37:50] it's easier to use in logstash
[19:37:55] Yeah
[19:38:07] or if we switched to json all the time and made some little tools
[19:38:16] I'll stick to a hack for now with the view that we won't be using these text files for much longer
[19:38:25] I have something I use for scap logs. they are all json all the time
[19:38:32] *nod*
[19:38:33] Ah, right.
[19:38:43] json would work well
[19:38:56] Do we use json or that text format for debug group logs to logstash?
[19:39:06] text format right now
[19:39:13] for exceptions and errors we have a json group as well
[19:39:17] not sure which one goes to logstash
[19:39:17] oh logstash, that's json
[19:40:06] I wrote special rules to unwrap the exception-json and call it exception in logstash
[19:40:11] I want to kill that eventually
[19:40:16] I see. So WebProcessor of monolog also adds those fields as keys in the json package
[19:40:20] but taking it a bit slow
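[Editor's note: a simplified sketch of the Monolog mechanism discussed above: WebProcessor copies the request URL (plus ip, http_method, server and referrer) from $_SERVER into each record's "extra" array, and a LineFormatter whose format string contains %extra.url% then prints it. This only illustrates the moving parts; it is not the actual wmf-config/logging.php linked above, and the channel name, log path and format string are invented.]

```php
<?php
// Editor's sketch of the mechanism discussed above; not the real
// wmf-config/logging.php. Channel name, path and format are invented.
require 'vendor/autoload.php';

use Monolog\Formatter\LineFormatter;
use Monolog\Handler\StreamHandler;
use Monolog\Logger;
use Monolog\Processor\WebProcessor;

$logger = new Logger( 'resourceloader' );

// For web requests, WebProcessor copies url, ip, http_method, server
// and referrer from $_SERVER into each record's 'extra' array.
$logger->pushProcessor( new WebProcessor() );

// A LineFormatter format string can then reference %extra.url%
// (and %context% for anything passed to the LoggerInterface call),
// much like the fluorine text format discussed above.
$handler = new StreamHandler( '/tmp/resourceloader.log', Logger::DEBUG );
$handler->setFormatter( new LineFormatter(
	"%datetime% %channel% %level_name%: %message% url=%extra.url% %context%\n"
) );
$logger->pushHandler( $handler );

$logger->info( 'Stale module cache detected', [ 'module' => 'jquery.foo' ] );
```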
[19:40:36] my Lyon project is going to be to do lots of logging patches
[19:40:41] cool
[19:41:04] and hopefully find some helpers
[19:41:17] I think we can do all of core in the weekend honestly
[19:42:07] my goal will be to get rid of wfDebug() usage in core
[19:42:19] and hopefully wfDebugLog() too
[19:44:22] ^d: https://gerrit.wikimedia.org/r/#/c/206418/ I'd love to tweak the lock timeout
[19:44:27] bd808: In favour of what exactly?
[19:44:49] Direct use of LoggerFactory and the PSR-3 objects it makes
[19:44:59] bd808: Wouldn't that be quite verbose?
[19:45:12] so we can log with named channels (like wfDebugLog) and severity levels
[19:45:24] no more than what we have now
[19:45:49] bd808: sharing the instance through injection?
[19:46:03] LoggerFactory::getInstance( $logGroup )->warning( 'something bad happened' );
[19:46:18] I call that verbose
[19:46:25] meh
[19:46:26] Not an API I'm going to remember
[19:46:46] The logger can be injected too
[19:46:51] which is more ideal
[19:47:09] we're doing that in several places now in core like bagostuff
[19:47:36] The logger instances are stateless and sharable
[19:48:06] bd808: interesting. So the one in bagostuff would be given one that has a predefined group
[19:48:15] or should the class be able to override the group?
[19:48:34] group comes with the logger. it's a constructor arg
[19:48:41] I suppose injection would also help with making libraries since LoggerFactory is mediawiki-core
[19:48:47] Right
[19:48:58] Nice
[19:49:39] yeah. Nice libs will use Psr\Log\LoggerAwareInterface to indicate that they want a logger
[19:50:12] we've been following that pattern as we update things in core
[19:50:16] Yep
[19:50:20] and a couple of extensions
[19:50:57] in some magic future we may have a DI container of some sort that groks that and gives you one automagically
[19:51:02] someday :)
[19:51:29] https://github.com/search?utf8=%E2%9C%93&q=%22implements+LoggerAwareInterface%22+%40wikimedia&type=Code&ref=searchresults / https://github.com/search?type=Code&q=%22use+Psr%5CLog%5CLoggerAwareInterface%3B%22%20@wikimedia
[19:52:11] The first benefit of using psr3 directly is the log leves
[19:52:14] *levels
[19:52:53] when we are using levels semi-consistently we can change the prod config to log everything that is warning or higher regardless of channel
[19:53:07] Yeah
[19:53:38] There's a couple non-warning things that we monitor based on frequency though
[19:53:45] e.g. module invalidations and cache misses
[19:53:46] right
[19:53:53] and we could still do that
[19:53:57] I imagine those should not be warning level
[19:54:10] but the base logging config could be much simpler
[19:54:16] Yeah
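[Editor's note: a minimal sketch of the injection pattern bd808 describes above: a class declares Psr\Log\LoggerAwareInterface, receives a channel-bound logger (for example from MediaWiki's LoggerFactory), and logs with PSR-3 severity levels instead of wfDebug()/wfDebugLog(). The ModuleCache class, channel name and messages are hypothetical examples, not code from MediaWiki core.]

```php
<?php
// Editor's sketch of the pattern discussed above (psr/log 1.x style,
// matching the MediaWiki 1.25 era); the class, channel name and
// messages are hypothetical.

use MediaWiki\Logger\LoggerFactory;
use Psr\Log\LoggerAwareInterface;
use Psr\Log\LoggerInterface;
use Psr\Log\NullLogger;

class ModuleCache implements LoggerAwareInterface {
	/** @var LoggerInterface */
	private $logger;

	public function __construct() {
		// Safe default, so the class works without MediaWiki core.
		$this->logger = new NullLogger();
	}

	public function setLogger( LoggerInterface $logger ) {
		$this->logger = $logger;
	}

	public function get( $key ) {
		// Severity levels replace ad-hoc wfDebug()/wfDebugLog() calls.
		$this->logger->debug( 'Cache lookup', [ 'key' => $key ] );
		return false; // pretend it was a cache miss
	}
}

// Wiring, e.g. from core or an extension: the channel ("group")
// comes bound to the logger that LoggerFactory hands out.
$cache = new ModuleCache();
$cache->setLogger( LoggerFactory::getInstance( 'objectcache' ) );
$cache->get( 'resourceloader:module:jquery.foo' );
```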
[19:54:33] there will always be a need to log some things with more verbosity
[19:54:56] would we still hardcode channel filters somewhere to prevent bad code from producing garbage channels in es/logstash that are hard to purge?
[19:55:21] maybe
[19:55:38] logstash only lives for 30 days so might not be a big deal
[19:55:46] 31 days I guess
[19:55:49] Oh?
[19:55:50] Interesting
[19:56:03] Makes sense
[19:57:21] there is an Elasticsearch index for each day and a cron job that drops the now-30-day-old index each morning
[19:57:58] I haven't envisioned logstash as archival records, just operational for now
[19:58:28] Yeah, history is useful to know when something started but it shouldn't take 30+ days before you make such a query
[19:58:37] hopefully not :)
[19:58:40] Wikimedia-log-errors
[19:58:44] :/
[19:58:48] It's a start
[19:58:54] yup
[20:00:31] ori: Could you review https://gerrit.wikimedia.org/r/#/q/Ibedc31659ed91262bca115101136fe60df6c5134,n,z ? I'd like to push it out to collect some data about this error.
[21:27:18] 10MediaWiki-Core-Team, 7Performance: [draft] Performance Roadmap April - June 2015 (Q4 2014/2015) - https://phabricator.wikimedia.org/T93845#1235177 (10Aklapper)
[21:45:49] ori: any time to review https://gerrit.wikimedia.org/r/#/c/205825/9 ?
[21:56:03] dancecat: probably not, I am writing the postmortem for yesterday's outage
[21:56:11] is it urgent?
[21:56:40] ori: no, just getting antsy
[23:34:29] Krinkle: https://gerrit.wikimedia.org/r/#/c/74400/
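[Editor's note: to close the loop on the level-based configuration bd808 describes above ("log everything that is warning or higher regardless of channel"): with consistent PSR-3 levels, a single handler with a WARNING threshold can be shared by every channel. A hedged Monolog sketch under assumed names and paths; the real production config linked earlier is more involved than this.]

```php
<?php
// Editor's sketch of level-based routing; not the actual production
// configuration. Channel names and the log path are invented.
require 'vendor/autoload.php';

use Monolog\Handler\StreamHandler;
use Monolog\Logger;

// One shared handler with a WARNING threshold: warning/error/critical
// records from any channel are written, debug/info noise is dropped.
$warningsAndAbove = new StreamHandler( '/tmp/mediawiki-warnings.log', Logger::WARNING );

$resourceLoaderLog = new Logger( 'resourceloader', [ $warningsAndAbove ] );
$objectCacheLog = new Logger( 'objectcache', [ $warningsAndAbove ] );

$resourceLoaderLog->debug( 'Cache miss' );          // below threshold, dropped
$objectCacheLog->warning( 'Backend unreachable' );  // written to the shared log
```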