[00:00:05] fiddling scheduled ! [00:18:14] I think I got it [00:38:49] Cyberpower678: \o/ [00:40:31] hi [00:40:42] Cyberpower678: Success [00:41:55] Cyberpower678: just expand the where clause to: WHERE `rev_user` ='226562' and `rev_timestamp` > '1' [00:43:51] speed up from ~20 sec => ~ 2sec [00:44:55] made the optimizer change his mind [00:45:04] I'll do it after finals. :-) [00:45:38] '1', 'SIMPLE', 'revision', 'range', 'PRIMARY,rev_timestamp,page_timestamp,user_timestamp', <-- possible keys 'user_timestamp' <-- used key [00:45:51] \o/ \o/ \o/ \o/ [00:46:15] Cyberpower678: but for archive table, you have to file a bug [00:46:51] Cyberpower678: something like: please provide index KEY `user_timestamp` (`rev_user`,`rev_timestamp`), [00:47:12] hedonil, bug is already filed. [00:47:20] Cyberpower678: good [00:47:38] hedonil, https://bugzilla.wikimedia.org/63777#c2 [00:48:02] Cyberpower678: I'll add some comment on this [00:53:50] 3Wikimedia Labs / 3(other): archive_userindex on replication server not indexed well. Takes 10s of seconds to execute a query. - 10https://bugzilla.wikimedia.org/63777#c3 (10metatron) Esisting Indeces on revision table: PRIMARY KEY (`rev_page`,`rev_id`), UNIQUE KEY `rev_id` (`rev_id`), KEY `rev_times... [00:57:20] 3Wikimedia Labs / 3(other): archive_userindex on replication server not indexed well. Takes 10s of seconds to execute a query. - 10https://bugzilla.wikimedia.org/63777#c4 (10metatron) s/Indeces/Indices/ [00:57:53] So the problem is that MySQL's planner doesn't recognize that it could use the (rev_user, rev_timestamp) index for a look-up for rev_user? [00:58:57] yeah, unless you give him both fields (all fields) of that composite index [01:00:18] a broad hint for that guy ;) [01:00:34] That hand-crafting queries feels like NoSQL :-). [01:02:39] it would be easier, if one could access the table directly (with index hints like USE INDEX, IGNORE INDEX or FORCE INDEX) [01:03:48] Or if it was replicated to PostgreSQL :-). 
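The composite-index behaviour hedonil describes above can be illustrated in miniature with SQLite (used here only as a stand-in, since the MariaDB labs replicas aren't reproducible in an example): an index on (rev_user, rev_timestamp) is usable for a lookup on rev_user alone, which is exactly the plan the MySQL optimizer was failing to pick on the enwiki_p.revision view. The table and rows below are invented for the demonstration.

```python
import sqlite3

# Toy stand-in for the revision table; schema and data are invented.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE revision (rev_user INTEGER, rev_timestamp TEXT, rev_comment TEXT)")
db.execute("CREATE INDEX user_timestamp ON revision (rev_user, rev_timestamp)")
db.executemany("INSERT INTO revision VALUES (?, ?, ?)",
               [(226562, "20140401000000", "edit"),
                (1, "20140402000000", "other edit")])

# A lookup on the leading column alone can still use the composite index:
plan = db.execute(
    "EXPLAIN QUERY PLAN SELECT COUNT(*) FROM revision WHERE rev_user = 226562"
).fetchall()
detail = plan[0][3]  # the human-readable plan description
print(detail)  # mentions the user_timestamp index
```

That a leading-column lookup can use a composite index is standard B-tree behaviour; the bug in the channel was that the planner on the *view* chose a different index until the WHERE clause named both columns.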
[01:06:47] Cyberpower678: pong [01:10:37] springle, yay [01:10:49] hedonil, springle is in charge of replication [01:11:17] Cyberpower678: just saw it. he's alive ! :P [01:13:51] 3Wikimedia Labs / 3tools: Tool Labs: Provide anonymized view of the user_properties table - 10https://bugzilla.wikimedia.org/58196#c39 (10Tim Landscheidt) (In reply to Krinkle from comment #38) > [...] > In theory this could still be exploited by being a patient person and > waiting for the moment one month... [01:14:36] springle, hi. We have a minor issue with the current indexing of the archive_userindex and revision_userindex tables. [01:14:48] springle, hedonil can explain it better than I can [01:15:02] ok [01:15:03] springle, but I was wondering if that can be taken care of. [01:15:35] hedonil, are you still there? [01:15:53] Cyberpower678: yep [01:16:05] 3Wikimedia Labs / 3(other): archive_userindex on replication server not indexed well. Takes 10s of seconds to execute a query. - 10https://bugzilla.wikimedia.org/63777#c5 (10metatron) Additional explanation on this (as Cyberpower678 pointed out) `rev_user_text` && `ar_user_text` may give wrong results, if... [01:16:05] hedonil, would you like to explain it to springle [01:16:20] Cyberpower678: ;) [01:17:23] springle, bug 63777 explains a great deal of it. [01:17:37] looking [01:18:04] springle: the issue with revision table seems to be fixed now, all we need now is an additional index on archive table for `ar_user` [01:18:37] hedonil, if the revision table could take hints a little better, that would be nice. [01:19:18] Cyberpower678: he, broad hints it is ;) [01:19:36] what exactly was the issue with revision? comment #2 in the bug -- slow at first, fast later? [01:19:41] sorry, comment #1 [01:20:29] springle: on revision, the optimizer chose KEY `usertext_timestamp` (`rev_user_text`,`rev_timestamp`,`rev_user`,`rev_deleted`,`rev_minor_edit`,`rev_text_id`,`rev_comment`) by default [01:21:35] for count(*) on rev_user?
seems odd [01:21:55] springle, yep. If that could be fixed, it would be nice. [01:22:29] springle: unless you expanded the WHERE clause to include both: `rev_user` AND `rev_timestamp` [01:22:31] springle, but the biggest issue right now is that ar_user isn't even indexed at all right now, making the queries take forever right now. [01:23:22] hedonil: this is on the view enwiki_p.revision? [01:23:29] * springle hates views [01:23:37] springle: yep [01:24:00] springle: unfortunately we can't use tables here ;) [01:24:09] hedonil, doesn't this misindex apply globally though? [01:24:35] Since each wiki replicated is setup to be the same. [01:24:55] Cyberpower678: hmm, don't know, optimizer is doing his job [01:25:05] Cyberpower678: which is in most cases ok [01:25:08] springle, can we place a greater priority on fixing archive's indexing? [01:25:25] but archive is the real problem now [01:25:30] yep [01:25:44] hedonil, if optimizer is working everywhere else, why not enwiki? /me scratching head. [01:26:12] ah the view is created with algorithm=undefined [01:26:20] should be algorithm=merge [01:26:51] springle: this sounds sane [01:27:16] springle, that's for the optimizer algorithm running on revision? [01:27:54] it's more the view materialization algorithm. the optimizer just works with whatever it's given [01:28:52] algorithm=merge means merge the query with the view's query and optimize the result. algorithm=undefined means it might decide to use a temp table [01:30:00] we can fix this but changing the maintain-replicas.pl script Coren wrote i guess. will have to check [01:30:04] s/but/by/ [01:32:12] as for an archive index user_timestamp... it's really quite impressive that production hasn't got one either. bet it was because archive was hard to alter online when it had no PK [01:32:42] commenting on the bug. we can probably fix both issues [01:34:14] oh, as for revision being slow, then fast, then slow again: that's cold data.
in production this was solved by partitioning revision [01:36:31] How big is the enwiki repdb? [01:37:04] a930913: https://tools.wmflabs.org/tools-info/?dblist=s1.labsdb [01:39:05] 1TB? O.O [01:39:33] a930913, quite the rounding, eh? [01:39:34] Isn't the full revision dump only a few hundred GB? [01:40:12] a930913: Indices are included here [01:41:14] hedonil: That's only a few bytes per row though, no? [01:42:07] a930913: I can look it up in depth - but in some schemas indices are 1/2 the total [01:43:04] hedonil: Even so, it's still an order of magnitude greater than I was expecting. [01:43:29] a930913: yeah, indices can blow up the whole thing [01:43:32] I mean if all the text can fit into that size when compressed... [01:46:36] 3Wikimedia Labs / 3(other): archive_userindex on replication server not indexed well. Takes 10s of seconds to execute a query. - 10https://bugzilla.wikimedia.org/63777#c6 (10Sean Pringle) Adding the index should be OK. [01:46:37] 3Wikimedia Labs / 3(other): archive_userindex on replication server not indexed well. Takes 10s of seconds to execute a query. - 10https://bugzilla.wikimedia.org/63777 (10Sean Pringle) a:3Sean Pringle [01:49:44] Cyberpower678: hedonil: was there another bug about slow revision? or was that just a related conversation [01:51:14] I think the misfiring optimizer on revision and the missing index on archive are the only issues of concern right now. [01:58:44] springle: afaik these have been all the (known) issues [01:59:08] a930913: I just made an overview on schema enwiki http://tools.wmflabs.org/tools-info/schema_enwiki.html [01:59:46] a930913: you can compare the columns data_length && index_length [02:00:33] springle, just a question of curiosity, when can this be fixed? [02:02:29] hedonil: Seems to be the various links taking up the most space.
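The two fixes springle agrees to above might look roughly like the following. This is a hypothetical sketch only: the real view definitions are generated by the maintain-replicas.pl script and carry column filtering and renaming that is elided here, and the actual index name and columns were settled on the bug.

```sql
-- 1. Hypothetical index on the archive table, mirroring revision's
--    user_timestamp index, so WHERE ar_user = ... can seek instead of scan:
ALTER TABLE archive
  ADD KEY `ar_user_timestamp` (`ar_user`, `ar_timestamp`);

-- 2. Recreate the replica view with an explicit ALGORITHM=MERGE so the outer
--    query is merged with the view's query and optimized as one, instead of
--    the server possibly materializing a temp table (ALGORITHM=UNDEFINED):
CREATE OR REPLACE ALGORITHM=MERGE VIEW revision_userindex AS
  SELECT * FROM revision;  -- placeholder SELECT; the real view filters columns
```

With MERGE, index choices made for the base table remain available to queries against the view, which is the behaviour the channel was missing.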
[02:02:56] a930913: and revision, of course [02:03:53] Cyberpower678: within this week [02:04:02] springle, :D [02:04:12] hedonil, ^^ [02:04:24] Cyberpower678: yeah [02:04:43] hedonil, problem should be fixed with this week springle predicts. [02:05:15] springle: good news. thanks for your action [02:05:29] Cyberpower678: moar power [02:05:29] yw [02:06:12] hedonil, you want more power? [02:06:39] Cyberpower678: Cyber Power Power :P [02:09:58] hedonil, :D [02:19:42] Cyberpowerpower: Cyberpower³ [02:20:16] Cyberpower� :Erroneous Nickname [02:20:25] hedonil, ^ [02:21:05] hehe invalid power [02:21:14] :P [02:21:52] level over 9000 [02:22:37] * Cyberpower^99999 is OP 10000 [02:22:43] aaand this one for all of us https://www.flickr.com/photos/110698835@N04/13869738913/ [02:24:20] I'm powerful. [02:25:19] hedonil, ^ [02:25:22] just Demiurge [02:26:04] one cannot top that !:P [02:27:01] ultimate instance of GOD [02:27:44] hedonil, my OP level 10,000 allows me to kill anything with a fraction of a bullet. >:D [02:28:06] hedonil, Borderlands 2 again, btw. [02:28:15] tsss [02:28:23] op = overpower [02:31:16] BD2_OP10000: btw. I set up a running php version with xdebug [02:31:32] of what? [02:31:33] no more print_r/echo debugging [02:32:08] hedonil, you just now installed a compiler? :D [02:32:31] just a debugger [02:32:31] I've had one since before Cyberbot II started its first trial. :p [02:32:50] but not on labs? [02:33:12] ! [02:33:12] hello :) [02:33:17] haha [02:33:22] I've had one on my computer for quite some time. [02:33:34] summoned wm-bot :-) [02:33:43] !say hi [02:33:43] hi [02:33:56] !woah [02:34:05] hm wm-bot, I have a question [02:34:10] @info [02:34:10] http://bots.wmflabs.org/~wm-bot/dump/%23wikimedia-labs.htm [02:35:07] wm-bot: What's the Answer to the Ultimate Question of Life, The Universe, and Everything? 
[02:35:07] Hi hedonil, there is some error, I am a stupid bot and I am not intelligent enough to hold a conversation with you :-) [02:35:35] hmm [02:35:44] hedonil, HelixFossil, is a partial AI bot [02:36:18] You must enable Javascript to view this page. [02:36:21] bleh [02:36:50] * hedonil operates in paranoid mode right now [02:36:56] hedonil, where are you? [02:36:57] so, no javascript [02:37:13] I'm here [02:37:30] hedonil, on the internet of course. What page is asking for JS? [02:37:41] http://askhelixfossil.com/ [02:38:14] hedonil, not that HelixFossil [02:38:25] It's an IRC bot on #wikipedia-en-accounts [02:38:30] ahh [02:38:33] You can chat with it. [02:38:45] * hedonil tries [02:38:51] HelixFossil, what is the answer to life, the universe, and everything [02:38:51] BD2_OP10000: The meaning of life is part of God's mysterious plans. [02:38:52] HelixFossil, what is the answer to life, the universe, and everything [02:38:52] BD2_OP10000: The meaning of life is part of God's mysterious plans. [02:39:45] (04:38:52) ChanServ: (notice) [#wikipedia-en-accounts-unreg] IMPORTANT MESSAGE #wikipedia-en-accounts is now a private channel, and you have been REDIRECTED to #wikipedia-en-accounts-unreg because you are not one or more of these: 1) Identified to NickServ, 2) Current, active member of ACC, 3) Identified to WMF. If you are here in error, please ping a voiced user in #wikipedia-en-accounts-unreg. [02:40:18] hedonil, stupid me forgot, private information is discussed in that channel. [02:40:33] That's why you're not allowed access to it right now. [02:40:40] yeah [02:41:29] I have access to data similar to CUs. :p [02:42:13] HelixFossil, what is the answer to life, the universe, and everything [02:42:13] BD2_OP10000: It is found in the Bible. [02:42:14] HelixFossil, what is the answer to life, the universe, and everything [02:42:14] BD2_OP10000: Actually, it's twenty-three, not forty-two. [02:42:41] BD2_OP10000: Aha, 23! 
[02:42:56] 23 & me [02:43:00] lol [02:46:23] yes 23 it is. [02:46:24] Principia Discordia, the sacred text of Discordianism, holds that 23 (along with the discordian prime 5) is one of the sacred numbers of Eris, goddess of discord. [02:57:50] 3Wikimedia Labs / 3tools: Tool Labs: Provide anonymized view of the user_properties table - 10https://bugzilla.wikimedia.org/58196#c40 (10Krinkle) (In reply to Tim Landscheidt from comment #39) > (In reply to Krinkle from comment #38) > > [...] > > > In theory this could still be exploited by being a patien... [04:03:36] 3Wikimedia Labs / 3tools: Tool Labs: Provide anonymized view of the user_properties table - 10https://bugzilla.wikimedia.org/58196#c41 (10Tisza Gergő) > To avoid strange statistical variance like that, I'd recommend making timestamp of the currently running month appear as last month's. I don't see how that... [07:31:21] 3Tool Labs tools / 3wikibugs IRC bot: wikibugs IRC bot should use a machine-readable format (no more parsing mailing list messages) - 10https://bugzilla.wikimedia.org/40970 (10Andre Klapper) a:3Merlijn van Deen [07:31:51] 3Tool Labs tools / 3wikibugs IRC bot: wikibugs duplicate reporting - 10https://bugzilla.wikimedia.org/64864 (10Andre Klapper) a:3Merlijn van Deen [07:43:02] !log integration rebase operations/puppet repo on puppetmaster [07:43:04] Logged the message, Master [07:43:35] 3Tool Labs tools / 3wikibugs IRC bot: wikibugs duplicate reporting - 10https://bugzilla.wikimedia.org/64864#c1 (10Merlijn van Deen) Created attachment 15290 --> https://bugzilla.wikimedia.org/attachment.cgi?id=15290&action=edit First e-mail [07:43:49] 3Tool Labs tools / 3wikibugs IRC bot: wikibugs duplicate reporting - 10https://bugzilla.wikimedia.org/64864#c2 (10Merlijn van Deen) Created attachment 15291 --> https://bugzilla.wikimedia.org/attachment.cgi?id=15291&action=edit Second e-mail [07:44:05] 3Tool Labs tools / 3wikibugs IRC bot: wikibugs duplicate reporting - 10https://bugzilla.wikimedia.org/64864#c3 
(10Merlijn van Deen) Two distinct e-mails were sent by Bugzilla (both attached). Diff: --- 001412.raw 2014-05-05 04:39:20.089465908 +0000 +++ 001413.raw 2014-05-05 04:39:20.317471864 +0000 @@ -1... [09:34:53] 3Wikimedia Labs / 3Infrastructure: Create users jenkins and zuul on the NFS server - 10https://bugzilla.wikimedia.org/64868 (10Antoine "hashar" Musso) 3NEW p:3Unprio s:3normal a:3None For the integration project, I would need user and groups 'jenkins' and 'zuul' to be created on the NFS server. The... [11:13:51] 3Tool Labs tools / 3wikibugs IRC bot: pywikibugs should not abbreviate "Assigned" to three characters. - 10https://bugzilla.wikimedia.org/64632 (10Andre Klapper) a:3Merlijn van Deen [11:21:21] 3Tool Labs tools / 3wikibugs IRC bot: wikibugs duplicate reporting - 10https://bugzilla.wikimedia.org/64864#c4 (10Andre Klapper) There is no bug in wikibugs here. This is a bug in Bugzilla's job queue having a hiccup when several changes happen quickly on a bug report. The two corresponding bugmail notifica... [11:30:08] 3Tool Labs tools / 3wikibugs IRC bot: wikibugs bot: add to wikivoyage channel and customize output - 10https://bugzilla.wikimedia.org/41142#c9 (10Merlijn van Deen) a:3Merlijn van Deen This is easy to implement in the new wikibugs, so please re-open the bug if you would still like bug reports for specific c... [11:35:06] 3Tool Labs tools / 3wikibugs IRC bot: wikibugs duplicate reporting - 10https://bugzilla.wikimedia.org/64864#c5 (10Merlijn van Deen) p:5Unprio>3Low Ah, but then probably only one of the comments got reported on IRC. That /is/ a bug! (and I think it was reported for the old wikibugs bot, too, but I can't f... [12:38:28] can I request a new instance for testing code of this project https://github.com/Daniel-Mietchen/OA-signalling [12:46:23] notconfusing: Do you need your own instance for that (which project?) or is that something you want to do as part of the Tools project (https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools)? 
[12:48:13] scfc_de: we are writing an import bot for wikisource, so we will write a tool as well that does the importing; we were thinking that rather than testing importing on live wikisource we would do it on an instance. would you recommend another way? [12:50:08] notconfusing: So you need the instance to run a MediaWiki instance to test against? [12:50:36] scfc_de: that's correct [12:50:57] specifically a wikisource instance [12:55:25] hi, I would like to know if it is possible to configure in $HOME/.lighttpd.conf the path of access.log and error.log. [12:57:23] rotpunkt: More or less. See https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help#Web_services under 'request logging' [12:57:37] notconfusing: Then I think you should probably request a new project for that. AFAIK, andrewbogott_afk or Coren are the gatekeepers for that. [12:58:45] scfc_de: thank you for pointing me in the right direction; andrewbogott_afk or Coren, can you help us make a new project called "oa-signalling" [12:59:55] rotpunkt: The error.log is a bit more complicated, though, as it's set in tool-lighttpd-starter (or a permutation of those words :-)). I wouldn't change it unless it's *really* important for you. [13:00:05] 3Tool Labs tools / 3wikibugs IRC bot: wikibugs duplicate reporting - 10https://bugzilla.wikimedia.org/64864#c6 (10MZMcBride) Huh, I missed that Bugzilla actually submitted my comment twice, as bug 44448 comment 36 and as bug 44448 comment 37. That seems like the real issue here. The bot mis-reported #c36 twi... [13:02:47] no problem, I had thought to move access.log and error.log into $HOME/logs/, but if it's not user-configurable, the default place is ok [13:29:45] notconfusing: I can make you a project, just a moment... [13:31:42] huh. anyone know what notconfusing's wiki name is?
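For rotpunkt's question, the access log path is the user-configurable part. A $HOME/.lighttpd.conf fragment might look like the following; the tool name and paths are made up for illustration, and per the advice above the error.log location is set by the wrapper script and is best left alone (check the linked Help page for the current mechanism).

```conf
# Hypothetical fragment for $HOME/.lighttpd.conf -- paths are examples only.
server.modules += ( "mod_accesslog" )
accesslog.filename = "/data/project/mytool/logs/access.log"
```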
[14:11:10] andrewbogott: Probably https://en.wikipedia.org/wiki/User:Daniel_Mietchen [14:37:31] definitely not [14:38:25] https://wikitech.wikimedia.org/wiki/User:Maximilianklein [14:42:12] Hm.. mediawiki nightly snapshots is still broken. Ever since the crontab change / grid enforcement. For some reason it won't run under grid [14:42:24] Nothing in .err or .out [14:42:37] (tools/snapshots) [14:43:02] The php script starts (I can tell from the write actions it does to disk), and then dies unexpectedly [14:43:26] Cron looks like 0 * * * * /usr/bin/jsub -N snapshots-updateSnaphots -mem 1280m -once -quiet ~/update.sh [14:43:33] update.sh: php /data/project/snapshots/src/mwSnapshots/scripts/updateSnaphots.php > /data/project/snapshots/src/mwSnapshots/logs/updateSnaphots.log 2>&1 [14:43:47] Job is killed with exit code 137, kernel. That's odd. [14:44:04] The memory is higher than it needs to be, just raised it in case that was the issue, but it isn't. [14:50:32] Does Dario show up on IRC? [15:13:16] Gah overslept! [15:15:08] * Coren catches up on backlog [15:16:08] Might've been something intermittent that stayed for 3 days but was suddenly fixed just now? The run of 16 minutes ago succeeded. [15:24:49] Krinkle: 137 = 128 + 9 = SIGKILL = usually out of memory. [15:29:27] https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help#Returning_the_status_of_a_particular_job [15:29:59] Krinkle: Nothing I did on my end. [15:34:53] Coren: this doesn't look good though: https://gist.github.com/Krinkle/393909d399e6b1f6fa4b [15:34:55] I guess that's php and git to blame [15:35:32] it varies depending on how many mediawiki core branches changed since the last run. [15:35:42] Eeew.
that shouldn't matter though since it ought to free up memory between each branch within the script [15:36:16] the script basically iterates over remote git branches, checks the hash against a static file and, if different, checks it out, makes a tarball and moves on [15:36:19] https://tools.wmflabs.org/snapshots/?action=updatelog [15:36:48] considering the 1.9G spike, I've changed the crontab to 2048 for now [15:39:25] Coren, could https://wikitech.wikimedia.org/wiki/User:Merlijn_van_Deen/common.js be added to https://wikitech.wikimedia.org/wiki/MediaWiki:Common.js to make the terminology on wikitech pages less confusing for TL users? [15:39:50] Krinkle: 1.9G. Hm. Can you correlate that with branch complexity or something? Making the tarball might actually map the whole thing in memory for some inane reason? [15:39:58] valhallasw: Lemme look at it [15:40:25] basically changes titles & labels on the project group admin pages we link to from https://tools.wmflabs.org [15:43:41] Coren: Use `git archive HEAD --format='tar' | gzip > filename.tar.gz` [15:44:16] (03PS1) 10Merlijn van Deen: Fix 'manage maintainers' link [labs/toollabs] - 10https://gerrit.wikimedia.org/r/131484 [15:44:56] (03PS2) 10Merlijn van Deen: (Bug 63741) Fix 'manage maintainers' link [labs/toollabs] - 10https://gerrit.wikimedia.org/r/131484 [15:47:52] Krinkle: Not a solution to the problem, but why on Tools and not as part of the Jenkins setup? [15:47:52] valhallasw: I'm not sure I like the idea; at best it trades one confusion for another. Do we actually know in practice that people /are/ getting confused despite the documentation? [15:48:45] scfc_de: No particular reason other than more freedom in iteration, user interface etc. [15:49:04] Could become part of integration.wm.o portal at some point [15:49:17] Krinkle: k [15:49:26] Also scalability, we don't want to do it on post-merge of every single branch. Doing it once per hour is enough.
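valhallasw's `git archive` tip above can be sketched end to end. The repository, committer identity, and file below are made up for the demonstration; the point is that `git archive` streams the tree straight into gzip, so the tarball never has to sit in memory the way an archive built inside a long-running PHP process might (cf. the 1.9G spike discussed above).

```shell
set -e
# Hypothetical demo repo -- name, identity and contents are invented.
workdir=$(mktemp -d)
cd "$workdir"
git init -q repo
cd repo
git config user.email "demo@example.org"
git config user.name "demo"
echo "hello" > README
git add README
git commit -q -m "initial commit"
# The one-liner from the channel, applied to HEAD: stream the tree as a tar
# archive and compress it on the fly, without a temporary checkout.
git archive HEAD --format='tar' | gzip > snapshot.tar.gz
tar -tzf snapshot.tar.gz
```

The same command works against any tree-ish (a branch name, a tag, a commit hash), which fits the per-branch loop the snapshots script runs.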
archiving takes longer than the average time between commits sometimes. [15:50:02] Of course we can have the jenkins job do it hourly as well. [15:50:50] I'm just aware that debs are built for some repos (for every commit IIRC), so it would feel "natural" to do the tar balls for other repos in the same way. [15:51:05] Krinkle: Doesn't Jenkins have facilities for the creation of artifacts like this? [15:51:50] No, not for clean git tar balls. But implementing that is trivial. But they'd be stored inside jenkins file storage and through its web interface. The purpose of this tool is to provide a clean entry point and simple interface. [15:52:51] If I do this in Jenkins, it would use nothing of jenkins itself. For one because we're getting rid of Jenkins (we already have in a way; all we're using Jenkins for is the storing of build outputs, it's not doing any git, build, shell, workspace, logic whatsoever) [15:53:06] it'd just be a shell script that does what this tool does now [15:58:40] Coren: I can also adapt it to have the wikitech names in ()'s? [15:59:58] Coren: in any case, the two parallel terminologies are bound to cause confusion one way or another [16:14:54] valhallasw: I'll talk to people in Zürich and see what the general feeling on the matter is; it seems really hackish to do this without a pressing need IMO. [16:17:13] Sure. [16:17:38] Now that you're here :-p Could you +2 or -1 https://gerrit.wikimedia.org/r/102721 ? [17:51:24] the gadgetusage table is replicated? [17:52:26] Steinsplitter: https://bugzilla.wikimedia.org/show_bug.cgi?id=58196 [17:53:11] thx [18:11:41] (03CR) 10coren: [C: 032] "This should work." [labs/toollabs] - 10https://gerrit.wikimedia.org/r/131484 (owner: 10Merlijn van Deen) [18:11:51] (03CR) 10coren: "This should work." [labs/toollabs] - 10https://gerrit.wikimedia.org/r/131484 (owner: 10Merlijn van Deen) [18:11:58] (03CR) 10coren: [V: 032] "This should work."
[labs/toollabs] - 10https://gerrit.wikimedia.org/r/131484 (owner: 10Merlijn van Deen) [20:57:43] !log deployment-prep deploying new plugin to Elasticsearch (swift) [20:57:45] Logged the message, Master [20:58:30] <^d> \o/ [21:24:36] !log deployment-prep removing pdf01 instance -- labs just uses production mwlib which works just fine. I'll recreate this when I make the OCG test instance [21:24:38] Logged the message, Master [21:29:16] !log deployment-prep ran puppetstoredconfigclean and revoked puppet and salt keys for i-00000339.eqiad.wmflabs (was pdf01) [21:29:18] Logged the message, Master [21:31:21] hashar, have you made any headway on https://bugzilla.wikimedia.org/show_bug.cgi?id=60684 (getting a ubuntu 14.04 image in labs)? [21:31:41] mwalker: not at all. That is an ops project :) [21:32:24] mwalker: as I understand it 14.04 comes with puppet 3.0; we have a few 14.04 test boxes that had puppet 2.7 backported onto them [21:32:36] yepyep [21:32:39] I run one :p [21:32:54] mwalker: and _joe_ has a script to compare puppet 2.7 and 3.0 versions [21:33:01] beside that, I have no clue :° [21:33:16] you might want to poke the ops list about it [21:33:53] having the image on labs probably requires a bunch of things to be fixed in our openstack puppet manifest as well [21:34:15] aye; production puppet seems to work just fine on the limited packages I use on 14.04 [21:34:24] but openstack is probably a whole new can of worms [21:35:37] I guess so [21:40:21] 3Wikimedia Labs / 3tools: "add / remove maintainers" links at http://tools.wmflabs.org/ should link directly to management page - 10https://bugzilla.wikimedia.org/63741 (10Tim Landscheidt) 5PAT>3ASS [21:48:03] Hi all, I'm wondering how you manage deployment of the wmf/1.NNwmfNN in relation to submodules [21:48:57] renoirb_: On Beta or in production? For the latter, #wikimedia-operations might be more fruitful.
[21:49:32] hashar: Do you know if there is a list of all errors Puppet 3.0 throws for the current operations/puppet repo? [21:49:40] Interesting question scfc_de`, how can I know which one is the one you use in production? [21:49:53] scfc_de`: yes there is one. No idea where it is though [21:50:08] scfc_de`: you want to ask ops in #wikimedia-operations :] [21:50:09] ... I heard from Ryan Lane that you are in CI mode. [21:50:15] ok then :] [21:50:29] will do. Thought this channel was also about operations [22:41:01] Hi [22:41:11] Labs dump for enwiki is lagging