[01:16:04] Anyone know where we have documentation of deploying config changes to beta?
[01:16:46] csteipp: MW config changes?
[01:16:52] bd808: Yeah
[01:17:01] it's just het deploy
[01:17:22] So login and sync-file?
[01:17:46] ...pull and sync-file, that is
[01:17:50] no, merge, sync on tin, wait for jenkins
[01:18:09] Oh. Tin pushes to beta?
[01:18:16] jenkins does
[01:18:16] Wow, I'm way out of date...
[01:18:24] That's very cool
[01:18:47] * bd808 looks for the jenkins job docs
[01:19:11] https://wikitech.wikimedia.org/wiki/Nova_Resource:Deployment-prep/How_code_is_updated
[01:19:34] zuul fires beta-mediawiki-config-update-eqiad on merge
[01:20:01] that fetches to deployment-tin and then fires the beta-scap-eqiad job
[01:20:25] Oh, nice. Thanks for the pointer!
[01:21:04] before we had multi-master deploy servers in prod you just needed to git pull on tin, but now you need to sync-file too or it will set off an alert for the masters being out of sync
[15:45:44] Krinkle: It would be helpful to narrow it down further than just between "before $ps_session" and "after scopedProfileOut( $ps_session )". From the log messages in that paste, I'd probably start looking inside PHPSessionHandler::read(); the paste shows you at least got as far as the $session->persist() there, and getting much farther would put you at the scopedProfileOut().
[15:45:44] Also, BTW, you might be able to trigger the bug at will by messing with the redis (or other memcache) data for the session to change the value of $data["metadata"]["expires"] to a past timestamp. (I can't reproduce it myself, but doing that gives me the sequence of log messages like in your paste.)
[15:47:13] anomie: thanks, I'll try next time it happens
[15:50:55] Krinkle: Also, which version of PHP, in case it turns out to matter?
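(Editor's aside: the repro trick anomie suggests above — forcing the stored session's metadata `expires` into the past — can be sketched in isolation. A plain dict stands in for the decoded session blob here; in the real setup the value lives PHP-serialized in redis/memcached, so only the key names come from the conversation and everything else is an assumption.)

```python
import time

def expire_session_metadata(session_blob, now=None):
    """Return a copy of the decoded session data with its
    metadata 'expires' timestamp forced into the past, mimicking
    the tampering suggested to trigger the bug at will."""
    now = time.time() if now is None else now
    blob = dict(session_blob)
    metadata = dict(blob.get("metadata", {}))
    metadata["expires"] = int(now) - 3600  # one hour in the past
    blob["metadata"] = metadata
    return blob

# In practice: read the session value from the store, unserialize it,
# apply this, write it back, then hit the wiki with that session cookie.
stale = expire_session_metadata({"metadata": {"expires": 9999999999}, "data": {}})
```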
[15:51:11] anomie: Zend PHP 5.6.16
[15:51:15] (Homebrew, Mac)
[15:51:49] Not too different a version from what I have, then (5.6.17-3 from Debian)
[21:47:42] bd808: this showed up in a "new books on programming" newsletter I subscribe to: http://www.amazon.com/Learning-ELK-Stack-Saurabh-Chhajed/dp/1785887157
[21:47:52] in case you're interested
[21:51:27] ori: thx. I'm not a big learn-from-books guy for web stuff. It tends to move faster than the publishing process.
[21:51:49] I really should look at kibana4 again, though, if I'm going to still be the ELK guy
[21:54:09] I just thumbed through the pages Amazon's "preview" feature makes available -- the chapter called "ELK Stack in Production" talks about how several prominent ELK installations are configured (LinkedIn, SCA) and what choices they made with respect to data retention, etc.
[21:55:06] that's chapter 9 (of 10 total); 1-8 probably don't have anything you don't already know
[22:03:44] ebernhardson heard about some ELK stuff at Elasticon. I haven't debriefed with him on it, though
[22:04:13] one teaser he gave was that the cool kids are using app -> kafka -> logstash these days
[22:04:45] is kafka a quasi-official transport then?
[22:05:18] we have it running
[22:05:26] just to get EL error messages
[22:11:52] yeah, LinkedIn (unsurprisingly) uses Kafka to ship data to ELK
[22:12:06] go figure :)
[22:13:07] This is what the current ELK Stack implementation at LinkedIn looks like: 100-plus ELK clusters across 20-plus teams and six data centers. Some of the larger clusters have: greater than 32 billion docs (30+ TB); daily indices that average 3.0 billion docs (~3 TB)
[22:13:23] that should have had quotation marks around it; I'm quoting the book
[22:14:59] it's good to know that it can work well at that scale. Though that volume of data is kind of ridiculous.
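(Editor's aside: on the Logstash side, the app -> kafka -> logstash pipeline mentioned above hinges on Logstash's kafka input plugin. A minimal config sketch for the Logstash 2.x era discussed here; the topic name and ZooKeeper address are placeholders, and newer plugin versions use `bootstrap_servers`/`topics` instead.)

```
input {
  kafka {
    zk_connect => "zookeeper01.example:2181"  # placeholder ZooKeeper address
    topic_id   => "mediawiki-logs"            # placeholder topic name
  }
}
```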
[22:17:14] I think companies figure: 1) we have more money than we deserve or know what to do with; 2) we don't have a well-designed data architecture, so we can't tell useful data apart from garbage; 3) bragging about huge volumes of data is good PR
[22:17:36] so: keep all data forever
[22:19:48] http://www.confluent.io/blog/apache-kafka-hits-1.1-trillion-messages-per-day-joins-the-4-comma-club
[22:21:21] ~3 TB/day actually isn't "that" much on a log-all-the-things scale
[22:21:51] I'd love to not have to hunt for things to stop logging once every 6 months :/