[19:12:29] @commands
[19:12:41] update :)
[19:13:18] !wiki test | blah
[19:14:32] @flush
[19:14:41] :|
[19:14:46] !wiki test | blah
[19:14:51] at least it didn't crash
[19:15:38] Ryan_Lane: I have a question about bots
[19:15:52] some folks are asking on wiki if they could host bots there
[19:15:54] sure. what's up?
[19:16:00] I think there are 3 things we could do
[19:16:13] 1 tell them we are not ready for production run
[19:16:22] I'd really like to have a proper infrastructure up before we allow bots en masse
[19:16:25] 2 give them access to current test project
[19:16:45] 3 create some bots production project and start puppetizing stuff from test and deploy it there
[19:17:05] so number 1?
[19:17:11] yeah. for now
[19:17:13] ok
[19:17:32] we have a few bots to test with right now
[19:17:36] yes
[19:17:48] so, let's work with that, and see what we can do
[19:17:59] cluebot is connecting to toolserver just to grab data in sql
[19:18:07] it would be faster if we had sql
[19:18:11] yeah
[19:18:33] the thing is… bots is part of tool labs, and tool labs is the second part of this project
[19:18:47] hm... first part should be ready first?
[19:18:48] so, we weren't targeting this right now
[19:18:54] ok
[19:18:58] :)
[19:19:04] it's fine to work on, but some parts may be missing
[19:19:14] and other things need to happen before we can focus on those missing parts
[19:19:20] ok
[19:25:29] hmm. let's move the bastion host.
[19:25:48] ok
[19:25:53] good idea
[19:26:28] hyperon, drdee: I'm going to kill the bastion host soon, so you'll be disconnected. it'll come back up soon after
[19:26:37] I'll let you know right before I do so
[19:26:44] * Ryan_Lane brings up the replacement
[19:27:02] hyperon: actually you could use bots-apache1 instead for the moment
[19:27:19] for some reason it still allows outside connection to port
[19:27:51] hmm. need to populate the project too
[19:27:55] dear ldap
[19:30:09] 12/15/2011 - 19:30:08 - Creating a home directory for sara at /export/home/bastion/sara
[19:30:09] 12/15/2011 - 19:30:08 - Creating a home directory for demon at /export/home/bastion/demon
[19:30:09] 12/15/2011 - 19:30:08 - Creating a home directory for damian at /export/home/bastion/damian
[19:30:09] 12/15/2011 - 19:30:08 - Creating a home directory for nimishg at /export/home/bastion/nimishg
[19:30:09] 12/15/2011 - 19:30:08 - Creating a home directory for erik at /export/home/bastion/erik
[19:30:14] hahaha
[19:30:15] damn it
[19:30:27] how can I avoid that?
[19:30:33] ah
[19:30:37] sleep(1)
[19:30:47] after sending a message
[19:30:57] I can't tell freenode to let it spam? :)
[19:31:02] I don't know
[19:31:09] even system bots are getting killed
[19:31:17] except services
[19:31:24] thankfully my bot is set to rejoin when killed
[19:31:59] I think that if you put a sleep after it, that should solve it
[19:32:01] labs-home-wm: yay for good bots! :)
[19:32:10] like sleep(1s) after sending a message
[19:32:11] :)
[19:32:16] that's a pain :(
[19:32:29] hard to implement in python?
[19:32:32] I wonder if irclib has some anti-flood support
[19:32:38] no. it's just annoying
[19:32:40] ah
[19:32:47] anti-flood would work exactly the same
[19:33:07] it would have a queue with messages
[19:33:14] and send one every second or two
[19:33:19] probably
[19:34:55] yeah
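A minimal sketch of the queue-and-throttle idea discussed just above: enqueue outgoing messages and drain one per second or two. This is illustrative, not petan's actual bot; `raw_send` stands in for whatever really writes to the IRC connection (e.g. python-irclib's `privmsg`), and all names here are made up.

```python
import queue
import threading
import time

class ThrottledSender:
    """Queue outgoing IRC messages and send at most one per interval,
    so a burst of bot output can't trip freenode's flood protection."""

    def __init__(self, raw_send, interval=1.0):
        self.raw_send = raw_send    # callable(target, text) that really sends
        self.interval = interval    # seconds between messages (1-2s, per above)
        self.outbox = queue.Queue()
        threading.Thread(target=self._drain, daemon=True).start()

    def send(self, target, text):
        # Callers enqueue instead of writing to the socket directly.
        self.outbox.put((target, text))

    def _drain(self):
        while True:
            target, text = self.outbox.get()
            self.raw_send(target, text)
            time.sleep(self.interval)   # the "sleep(1) after sending" fix
```

Hooked up as something like `sender = ThrottledSender(connection.privmsg)`, the five home-directory announcements above would have gone out one second apart instead of all at once.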
[19:35:38] hmm why isn't bastion1 responding?
[19:36:09] did I forget the security group stuff?
[19:36:23] ding ding ding
[19:37:08] now I need to copy over the home dirs
[19:37:23] and the ssh keys
[19:37:27] it's called bastion1?
[19:37:28] why...
[19:37:40] I thought it would be bastion again
[19:37:50] meh. it doesn't actually matter
[19:37:55] it'll still be bastion.wmflabs.org
[19:37:58] ok
[19:38:14] the public address stays the same, it just gets moved to the new instance
[19:38:14] then it's ok :)
[19:42:05] ok. home dirs moved
[19:42:09] ssh key moved
[19:42:11] project members added
[19:42:17] now I just need to move the public IP
[19:42:44] !log testlabs disassociating IP address for bastion
[19:42:45] Logged the message, Master
[19:42:54] !log bastion associating IP address for bastion.wmflabs.org
[19:42:55] Logged the message, Master
[19:43:29] heh
[19:43:34] seems current connections still work
[19:43:45] that's pretty interesting
[19:44:07] hm. crap. I wonder if the IP will be the same
[19:44:09] It better be
[19:44:23] I can probably associate the IP directly to the project
[19:47:39] well, that was more of a PITA than it should have been
[19:47:53] maybe I shouldn't disallow unallocating IP addresses if a hostname is still added to it
[19:48:04] though, it's problematic if I do
[19:48:12] then there is a resource hanging around on it
[19:48:38] maybe I should allow it with a checkbox, telling people that it'll leave resources hanging
[19:48:58] ok. bastion host is back up and is in its own project
[19:49:57] I should delete the old instance
[19:50:08] !log testlabs deleting bastion instance
[19:50:09] Logged the message, Master
[19:50:52] hm... works
[19:51:28] don't forget to disable rebooting heh
[19:51:37] so that apt wouldn't reboot it
[19:51:48] I need to disable a bunch of things for global projects....
[19:52:32] actually, I need to change the extension to only allow sysadmins to add/remove project members
[19:52:48] I thought that netadmin is more?
[19:52:56] it's just different
[19:53:00] ah, ok
[19:53:46] now not everyone needs to be a project member of testlabs
[19:54:06] * Ryan_Lane goes to remove most people
[19:54:25] what are testlabs actually about? is it a first project?
[19:54:39] btw why is nrpe disabled on most hosts?
[19:54:44] it's supposed to be the cluster cloned
[19:54:48] I haven't fixed it yet
[19:54:52] k
[19:55:03] cluster clone? wow
[19:55:07] even db?
[19:55:08] new projects will work properly
[19:55:09] yes
[19:55:13] yay
[19:55:23] that's cool
[19:55:27] ok. off to meeting
[19:55:31] right
[19:55:34] will be online, but likely not terribly responsive
[19:56:11] old bastion is running! hm... :P
[19:56:48] I can even ssh to it
[20:03:15] PROBLEM Current Load is now: CRITICAL on bastion1 bastion1 output: CHECK_NRPE: Socket timeout after 10 seconds.
[20:03:29] New patchset: Ryan Lane; "Ensure no one has root on bastion instance" [operations/puppet] (test) - https://gerrit.wikimedia.org/r/1567
[20:03:40] New review: gerrit2; "Lint check passed." [operations/puppet] (test); V: 1 - https://gerrit.wikimedia.org/r/1567
[20:03:46] New review: Ryan Lane; "(no comment)" [operations/puppet] (test); V: 0 C: 2; - https://gerrit.wikimedia.org/r/1567
[20:03:46] Change merged: Ryan Lane; [operations/puppet] (test) - https://gerrit.wikimedia.org/r/1567
[20:04:03] let me just say. I fail.
[20:04:05] PROBLEM Disk Space is now: CRITICAL on bastion1 bastion1 output: CHECK_NRPE: Socket timeout after 10 seconds.
[20:04:09] heh
[20:05:03] hm... what's wrong?
[20:05:24] I forgot to put the bastion project into the list of projects that everyone doesn't get root on
[20:05:30] ah
[20:05:33] then realized it and fixed it
[20:05:34] hm, remake?
[20:05:38] ah ok
[20:05:43] bastion is still running now
[20:05:46] old one
[20:05:53] I didn't terminate it yet
[20:06:00] PROBLEM Total Processes is now: CRITICAL on bastion1 bastion1 output: CHECK_NRPE: Socket timeout after 10 seconds.
[20:06:08] you logged :P that you did
[20:06:20] heh
[20:06:50] now I did
[20:06:56] !monitor bastion1
[20:06:57] and since people are logged into it, it'll have issues
[20:07:10] PROBLEM dpkg-check is now: CRITICAL on bastion1 bastion1 output: CHECK_NRPE: Socket timeout after 10 seconds.
[20:07:10] ahh Ryan_Lane :) Hello !
[20:07:15] howdy
[20:07:21] almost fine :D
[20:07:43] hmm. distro check in my certs change is broken
[20:08:27] Ryan_Lane: I am wondering how I could have a port translation setup from a public IP port XX to my VM in 10.0.0.0/8 port 80
[20:08:46] !socks-proxy
[20:09:03] we don't do PAT
[20:09:05] Ryan_Lane: the idea is to give a public URL to other people so they can test the app
[20:09:08] ah
[20:09:11] which app?
[20:09:14] testswarm
[20:09:32] I can give it a public IP
[20:09:33] that would be temporary, a month at most I guess
[20:09:37] would be great :)
[20:11:14] hashar: ok, if you use "manage addresses" in labsconsole, you can allocate an address, and associate it with your instance
[20:11:44] Thanks Ryan
[20:11:58] do you drink beer or wine by any chance? :D
[20:12:01] then you can add a public dns entry
[20:12:02] I do :)
[20:12:06] good
[20:12:08] :-))))))))))))
[20:12:15] cause you are soooo helpful ! :D
[20:12:24] heh
[20:12:26] I try
[20:12:52] you deserve a round of alcoholic beverages (we French usually celebrate with such drinks)
[20:12:56] Q: do you drink wine or beer?
[20:13:01] sounds good to me :)
[20:13:01] Answer: I do!
[20:13:04] :D
[20:13:17] I like that
[20:13:47] Ryan_Lane: somehow, Nova allocated me an empty IP address.
[20:13:49] Allocated new public IP address:
[20:13:49] Back to address list
[20:13:50] ;)
[20:14:04] huh
[20:14:04] petan: we will drink both
[20:14:05] yeah
[20:14:10] :)
[20:14:11] that's a bug :)
[20:14:12] that is just the message
[20:14:29] bah, that address sucks, it has no 7 in it
[20:15:28] oh we can even manage DNS entries
[20:15:44] yeah
[20:15:50] use wmflabs, not the others :)
[20:16:10] the others are for other things. you *can* use the others, but it likely won't work the way you want :)
[20:16:15] * hashar updates security rules
[20:18:38] https://testswarm.wmflabs.org/testswarm/ !!!11!!BBQ
[20:18:43] Ryan_Lane: thanks a ton
[20:18:49] yw
[20:18:58] just need to polish up my code now
[20:19:51] in my previous jobs we had platforms for dev / tests / integration / validation / preproduction and finally production
[20:20:04] but mostly using real hardware
[20:20:14] that makes it a pain in the a**
[20:20:19] yeah. it does
[20:20:30] at least the software was fully packaged at the dev point
[20:21:00] with labs, a developer can do all of that up to preproduction :-D
[20:21:08] which is just … awesome :D
[20:23:27] assuming you know how to do it :(
[20:23:32] do we already have mw packaged?
[20:24:09] 12/15/2011 - 20:24:09 - Updating keys for overlordq
[20:24:09] 12/15/2011 - 20:24:09 - Updating keys for neilk
[20:24:09] 12/15/2011 - 20:24:09 - Updating keys for jpostlethwaite
[20:24:09] 12/15/2011 - 20:24:09 - Updating keys for siebrand
[20:24:09] 12/15/2011 - 20:24:09 - Updating keys for happy-melon
[20:24:18] Platonides: Debian has a package
[20:24:29] Platonides: I think it was debianized by Evan from wikitravel / status.net
[20:24:34] I don't want the debian 'mediawiki' package!
[20:24:50] I mean the manifests for application servers
[20:25:29] Platonides: we don't
[20:25:38] Platonides: we don't really want to use a package, either
[20:26:06] we want to manage it like we do on the cluster
[20:26:11] rsync?
[20:26:27] I thought you were going to have a 'profile' for mwX servers
[20:26:34] well, maybe not *exactly* the same :)
[20:26:42] xD
[20:27:04] we are also going to have an all-in-one instance
[20:27:10] I don't care that much about mediawiki files
[20:27:26] was thinking of the requisites
[20:27:39] the requisites are already packaged
[20:27:42] and managed via puppet
[20:30:57] last time you said something like "we need to package squids first"
[20:31:21] ^demon, what can be done with that repo?
[20:31:41] Platonides: squid is packaged, but the config is deployed like mediawiki is
[20:31:43] <^demon> Which repo? The git one?
[20:31:46] yes
[20:32:02] we are looking at having some defaults pushed via puppet
[20:32:04] <^demon> You should be able to pull. And if you're setup in LDAP like Ryan said on wikitech, should be able to push.
[20:32:19] well...
[20:32:22] and what will happen on push?
[20:32:24] you have to log into gerrit for it to work
[20:33:04] <^demon> Platonides: `git push mediawiki HEAD:refs/for/master` will push it to the merge queue, like with puppet.
[20:33:08] which means you need a labs account
[20:33:12] <^demon> I added a git-setup to automate that.
[20:33:39] I saw it
[20:34:39] <^demon> https://gerrit.wikimedia.org/r/#admin,project,test/mediawiki/core,access - current access for core.
[20:34:50] <^demon> We can still tweak that :)
[20:36:08] btw bastion is still rejecting nrpe
[20:36:19] yeah. lemme fix that really quick
[20:36:30] :)
[20:36:33] ok
[20:36:39] <^demon> Platonides: I'll add you to the mediawiki group so you can approve merges to master and play with it a bit more.
[20:36:45] did I produce anything?
[20:36:51] <^demon> https://gerrit.wikimedia.org/r/#change,1571
[20:36:54] fixed
[20:36:57] oh, yeah
[20:37:00] RECOVERY dpkg-check is now: OK on bastion1 bastion1 output: All packages OK
[20:37:04] :D
[20:37:36] the gerrit ids are global
[20:37:52] RECOVERY Current Load is now: OK on bastion1 bastion1 output: OK - load average: 0.00, 0.01, 0.00
[20:38:02] yes
[20:38:07] <^demon> Platonides: Also, if I haven't already please let me thank you so much for being a guinea pig and playing with our new git repo. Goes much easier when I have people actually exercising the results.
[20:38:14] I still think they should be bumped to e.g. 120000
[20:38:23] meh. why?
[20:38:36] <^demon> So we can keep counting where we left off with svn? ;-)
[20:38:40] yep
[20:38:51] you can still have revision counts inside of repos
[20:38:52] RECOVERY Disk Space is now: OK on bastion1 bastion1 output: DISK OK
[20:38:59] Ryan_Lane, how?
[20:39:46] <^demon> Platonides: Added you to 'mediawiki' group. You should see options for merging, reviewing, etc. now.
[20:39:53] thanks
[20:40:07] let's see if I manage to approve it
[20:40:08] well, look on ohloh, for instance: http://www.ohloh.net/p/wikimedia-puppet/commits
[20:40:24] <^demon> There's an ohlol for wmf puppet?
[20:40:27] <^demon> Heh.
[20:40:30] hehe: N Nova Resource:I-000000b9; 00:07 . . (+668) . . 213.186.127.14 (Talk)
[20:40:33] cool
[20:40:35] <^demon> ohloh, even.
[20:40:45] petan: awesome that that stuff is in the recent changes, eh? :)
[20:40:52] yes
[20:40:53] my favorite? http://www.ohloh.net/p/wikimedia-puppet/factoids
[20:41:00] what, I can't Code Review without verify
[20:41:07] "This is one of the largest open-source teams in the world, and is in the top 2% of all project teams on Ohloh." what the fuck?
[20:41:29] <^demon> we're just that awesome :)
[20:41:33] Application Error / Server Error / Requires verified
[20:41:33] hahahaha
[20:41:49] you can't submit without verified
[20:41:56] you should be able to review
[20:42:07] <^demon> He's got verify and review privs.
[20:42:08] submit is to submit a change to be merged
[20:42:18] yes, I was trying to make the verify as another comment
[20:42:37] the message could have been a bit less dramatic
[20:42:51] <^demon> Ryan_Lane: Can we tweak the lint tests for this repo btw? Puppet linting is rather useless, we need php linting.
[20:42:51] Application Error / Server error, for "you can't do that"?
[20:43:04] php -l ftw
[20:43:07] ^demon: yes
[20:43:07] <^demon> Platonides: Yeah, most every error condition in Gerrit is presented like a server error.
[20:43:22] ^demon: please add to the git hooks in puppet
[20:43:27] <^demon> "SERVER ERROR: YOU CLICKED THE WRONG BUTTON"
[20:43:44] and modify my hooks as much as you want
[20:44:40] <^demon> Ah, I see them.
[20:44:57] ^demon, you didn't remove the svn path from the commit message after all
[20:45:37] <^demon> I guess I can hack that in and rebuild svn-all-fast-export :)
[20:45:46] <^demon> There wasn't an option to do that automagically.
[20:47:04] it was a trivial source change
[20:47:21] let me get it
[20:48:01] wasn't the name something like svn2git-foobar?
[20:48:27] <^demon> https://gitorious.org/~marcguenther/svn2git/marcguenther-svn2git/
[20:48:57] yes, I'm looking for the clone I did (and patched)
[20:50:27] <^demon> I'm forking it too.
[20:50:57] sigh, it was probably in /tmp
[20:51:37] it was a couple of changes
[20:52:36] well, just one
[20:52:51] hm, we might as well link to CodeReview
[20:53:02] <^demon> That'd be super-useful!
[20:53:11] <^demon> It's just a matter of tweaking Repository::formatMetadataMessage(), right?
[20:54:18] http://pastebin.com/RUVGMK4H
[20:54:19] yep
[20:55:13] I haven't tested it
[20:55:17] but it compiles fine
[20:55:42] RECOVERY Current Load is now: OK on mediahandler-test mediahandler-test output: OK - load average: 0.00, 0.00, 0.00
[20:56:00] Ryan_Lane: do we have any logrotate class/template in puppet? Looks like there is a puppet module for that
[20:56:02] RECOVERY dpkg-check is now: OK on reportcard1 reportcard1 output: All packages OK
[20:56:17] ummm. dunno
[20:56:42] RECOVERY Disk Space is now: OK on mediahandler-test mediahandler-test output: DISK OK
[20:56:52] RECOVERY Total Processes is now: OK on mediahandler-test mediahandler-test output: PROCS OK: 76 processes
[20:56:55] I will just make a file and deploy it :D
[20:57:02] RECOVERY Disk Space is now: OK on membase1 membase1 output: DISK OK
[20:57:12] RECOVERY Current Load is now: OK on reportcard1 reportcard1 output: OK - load average: 0.28, 0.06, 0.02
[20:58:02] RECOVERY Total Processes is now: OK on membase4 membase4 output: PROCS OK: 77 processes
[20:58:32] RECOVERY Disk Space is now: OK on master master output: DISK OK
[20:58:32] RECOVERY Total Processes is now: OK on master master output: PROCS OK: 113 processes
[20:58:33] petan: nrpe fixed everywhere
[20:58:37] RECOVERY Disk Space is now: OK on reportcard1 reportcard1 output: DISK OK
[20:58:37] RECOVERY Disk Space is now: OK on test3 test3 output: DISK OK
[20:58:37] RECOVERY dpkg-check is now: OK on membase1 membase1 output: All packages OK
[20:58:37] RECOVERY dpkg-check is now: OK on pad1 pad1 output: All packages OK
[20:58:37] RECOVERY dpkg-check is now: OK on nova-production1 nova-production1 output: All packages OK
[20:58:38] RECOVERY Current Load is now: OK on puppet-lucid puppet-lucid output: OK - load average: 0.04, 0.08, 0.02
[20:58:38] RECOVERY Disk Space is now: OK on puppet-lucid puppet-lucid output: DISK OK
[20:58:42] RECOVERY Disk Space is now: OK on membase2 membase2 output: DISK OK
[20:58:44] hence the spam :)
[20:59:12] RECOVERY Current Load is now: OK on canonical-bridge canonical-bridge output: OK - load average: 0.02, 0.02, 0.00
[20:59:12] RECOVERY Total Processes is now: OK on wikistats-01 wikistats-01 output: PROCS OK: 83 processes
[20:59:22] RECOVERY dpkg-check is now: OK on wikistats-01 wikistats-01 output: All packages OK
[20:59:31] <^demon> Platonides: I was also looking at a way to discard svn:mergeinfo (optionally, something like --discard-mergeinfo). I got as far as digging into SvnRevision::fetchRevProps().
[20:59:32] RECOVERY Current Load is now: OK on membase4 membase4 output: OK - load average: 0.00, 0.00, 0.00
[20:59:32] RECOVERY Total Processes is now: OK on labs-ocg1 labs-ocg1 output: PROCS OK: 77 processes
[20:59:37] RECOVERY Disk Space is now: OK on wikistats-01 wikistats-01 output: DISK OK
[20:59:42] RECOVERY Total Processes is now: OK on canonical-bridge canonical-bridge output: PROCS OK: 77 processes
[20:59:47] RECOVERY dpkg-check is now: OK on mediahandler-test mediahandler-test output: All packages OK
[20:59:47] RECOVERY Total Processes is now: OK on membase2 membase2 output: PROCS OK: 80 processes
[21:00:01] <^demon> Although I might've been totally off-base there.
[21:00:02] RECOVERY dpkg-check is now: OK on membase4 membase4 output: All packages OK
[21:00:02] RECOVERY Current Load is now: OK on test3 test3 output: OK - load average: 0.00, 0.01, 0.05
[21:00:12] RECOVERY dpkg-check is now: OK on canonical-bridge canonical-bridge output: All packages OK
[21:00:32] RECOVERY Current Load is now: OK on nova-production1 nova-production1 output: OK - load average: 0.12, 0.12, 0.09
[21:00:32] RECOVERY Current Load is now: OK on membase1 membase1 output: OK - load average: 0.00, 0.04, 0.02
[21:00:32] RECOVERY Current Load is now: OK on membase3 membase3 output: OK - load average: 0.00, 0.02, 0.01
[21:00:42] RECOVERY Disk Space is now: OK on membase4 membase4 output: DISK OK
[21:00:42] (sorry for the spam, heh)
[21:00:42] RECOVERY Disk Space is now: OK on pad2 pad2 output: DISK OK
[21:00:42] RECOVERY Total Processes is now: OK on reportcard1 reportcard1 output: PROCS OK: 89 processes
[21:00:47] RECOVERY Total Processes is now: OK on test3 test3 output: PROCS OK: 77 processes
[21:00:52] RECOVERY dpkg-check is now: OK on pad2 pad2 output: All packages OK
[21:00:52] RECOVERY Total Processes is now: OK on puppet-lucid puppet-lucid output: PROCS OK: 92 processes
[21:00:54] I like history metadata
[21:01:07] RECOVERY Current Load is now: OK on vumi-gw1 vumi-gw1 output: OK - load average: 0.00, 0.00, 0.00
[21:01:07] RECOVERY Total Processes is now: OK on membase3 membase3 output: PROCS OK: 77 processes
[21:01:09] <^demon> I do too, but our revision history is kind of ugly right now.
[21:01:12] RECOVERY dpkg-check is now: OK on test3 test3 output: All packages OK
[21:01:12] RECOVERY Disk Space is now: OK on canonical-bridge canonical-bridge output: DISK OK
[21:01:12] RECOVERY Total Processes is now: OK on nova-production1 nova-production1 output: PROCS OK: 134 processes
[21:01:23] New patchset: Hashar; "logrotate: add "managed by puppet" headers" [operations/puppet] (test) - https://gerrit.wikimedia.org/r/1572
[21:01:27] RECOVERY Current Load is now: OK on membase2 membase2 output: OK - load average: 0.04, 0.07, 0.04
[21:01:36] I think we should make use of it, not dump it out the window
[21:01:37] RECOVERY dpkg-check is now: OK on membase3 membase3 output: All packages OK
[21:01:37] RECOVERY dpkg-check is now: OK on labs-ocg1 labs-ocg1 output: All packages OK
[21:01:37] RECOVERY dpkg-check is now: OK on membase2 membase2 output: All packages OK
[21:01:37] RECOVERY Current Load is now: OK on wikistats-01 wikistats-01 output: OK - load average: 0.00, 0.00, 0.00
[21:01:37] RECOVERY Current Load is now: OK on pad2 pad2 output: OK - load average: 0.01, 0.02, 0.00
[21:01:47] RECOVERY Current Load is now: OK on labs-ocg1 labs-ocg1 output: OK - load average: 0.00, 0.00, 0.00
[21:01:47] RECOVERY Disk Space is now: OK on pad1 pad1 output: DISK OK
[21:01:47] RECOVERY Total Processes is now: OK on membase1 membase1 output: PROCS OK: 77 processes
[21:01:52] RECOVERY dpkg-check is now: OK on vumi-gw1 vumi-gw1 output: All packages OK
[21:01:52] RECOVERY Disk Space is now: OK on membase3 membase3 output: DISK OK
[21:01:52] RECOVERY dpkg-check is now: OK on master master output: All packages OK
[21:01:52] RECOVERY Current Load is now: OK on master master output: OK - load average: 0.52, 0.22, 0.12
[21:01:57] RECOVERY Disk Space is now: OK on vumi-gw1 vumi-gw1 output: DISK OK
[21:02:07] RECOVERY Total Processes is now: OK on pad1 pad1 output: PROCS OK: 85 processes
[21:02:12] RECOVERY Current Load is now: OK on pad1 pad1 output: OK - load average: 0.13, 0.05, 0.01
[21:02:12] RECOVERY Disk Space is now: OK on labs-ocg1 labs-ocg1 output: DISK OK
[21:02:12] RECOVERY Total Processes is now: OK on vumi-gw1 vumi-gw1 output: PROCS OK: 78 processes
[21:02:17] <^demon> Platonides: How do you recommend fixing our merge metadata then? Right now it's kind of messy.
[21:02:57] RECOVERY Total Processes is now: OK on pad2 pad2 output: PROCS OK: 85 processes
[21:04:28] I'm not sure
[21:04:32] how were you looking at the revision graph?
[21:04:43] I haven't seen how it looks
[21:05:35] <^demon> This git application I'm using on OSX. Roan complained about something similar in gitk I believe.
[21:05:40] it looks linear in the last one
[21:05:54] I see just one line here
[21:06:19] same in core-repack-svn-info
[21:06:23] <^demon> That's master.
[21:06:29] <^demon> Master will look fine.
[21:06:32] <^demon> Checkout REL1_18
[21:06:32] when switching branches?
[21:07:00] is that done with 'git branch REL1_18'?
[21:07:07] seems I didn't do what I wanted to
[21:07:32] I think I created a branch
[21:07:47] <^demon> Delete that branch
[21:07:50] ok
[21:07:50] git checkout REL1_18
[21:07:50] Branch REL1_18 set up to track remote branch REL1_18 from origin.
[21:07:54] <^demon> Yep.
[21:08:26] <^demon> Lemme try gitk
[21:08:36] I see different lines now
[21:08:44] although I'm not sure if they're right or not
[21:09:26] I suppose we could look for messages like MFT rXXX and translate them into cherry-picks
[21:09:30] but it shouldn't matter
[21:09:49] git doesn't store merge metadata
[21:09:55] (or so I think)
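A sketch of the "look for messages like MFT rXXX" idea from just above: scan svn-converted commit messages for merge-from-trunk markers, which a conversion tool could then turn into cherry-pick metadata. The regex and function are illustrative, not anything that exists in svn2git:

```python
import re

# Matches svn-era merge markers like "MFT r106379" or "MFT r100, r101".
MFT_RE = re.compile(r'\bMFT\s+((?:r\d+[,\s]*)+)', re.IGNORECASE)

def merged_revisions(commit_message):
    """Return the svn revision numbers a commit message claims to merge."""
    revs = []
    for group in MFT_RE.findall(commit_message):
        revs.extend(int(n) for n in re.findall(r'r(\d+)', group))
    return revs

# merged_revisions("MFT r106379 to REL1_18") -> [106379]
```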
[21:14:18] <^demon> Actually, I think Roan scared me for no reason.
[21:14:24] <^demon> It's not looking as bad as I thought it did.
[21:15:40] Roan scared me when we were in brighton
[21:15:52] I was like: "how could one get from Paris to Sydney"
[21:16:29] and 2 seconds later he started giving me airplane routes to Sydney with company name, time of departure, flight number and how good the seats are
[21:16:59] then he told me how "business class" only flights are usually faster because there are fewer people on board :)
[21:17:04] * Platonides points Roan to the travelling salesman problem
[21:17:12] Roan is awesome
[21:17:21] well actually, you are all awesome
[21:17:28] you too, hashar :D
[21:18:07] I am not sure I deserve an awesome rating 8-)
[21:18:26] argh Error: 8 attempt to write a readonly database
[21:18:28] <^demon> We should get shirts for everyone.
[21:18:29] \o/
[21:18:34] <^demon> "MediaWiki Developers."
[21:18:37] <^demon> "We're awesome."
[21:18:40] ahah
[21:18:57] and have it collaboratively edited using an SVG editor
[21:19:03] *poke Brion* ^^^^
[21:19:04] Roan knows the sf transit schedules better than anyone who lives here :)
[21:19:21] <^demon> "MediaWiki developers: my mom is still proud of me"
[21:19:42] she always has!!! :))
[21:23:08] I was surprised when I read Kaldari's message today: "The internet was cool"
[21:23:40] !kaldari message
[21:23:43] damn bot
[21:24:06] Platonides: the internet changed a lot
[21:24:12] I used to connect back in 1995 or so
[21:24:18] it was really different at that time
[21:24:40] just look around
[21:24:42] I guess everything changed around 2003 when it became really mainstream and after the bubble crash
[21:25:00] now it seems all sites are going https
[21:25:10] not so a couple years ago
[21:25:18] the good news is that the old internet is still around :-P But it is like 0.000001% of the market share where it used to be 90%
[21:25:29] (the market share is obviously much bigger nowadays though)
[21:25:42] HTTPS is a good thing actually
[21:25:49] indeed
[21:26:07] and I am really happy it got enabled on WMF servers just before I started contracting
[21:26:10] *cough* wikitech cert *cough*
[21:26:14] cause I am usually working in public places
[21:26:19] hehe
[21:26:27] I thought you worked from home?
[21:26:42] well my wife used to work from home some years ago
[21:27:06] I can't recommend it to anyone when it is every day
[21:27:19] especially when you do not have a dedicated room to act as an office
[21:27:33] cause at the end of the day, you really want to close your "office" door and switch to real life
[21:27:46] I thought part of the point of going for WMF was so you could work from home and keep an eye on the baby
[21:27:53] hehe
[21:28:03] you can't work with a baby at home
[21:28:14] cause she needs care every hour or so
[21:28:46] so you can't work around
[21:28:56] if it's just every hour, and not also the content of the hour...
[21:29:08] anyway the WMF lets me work on an 80% basis, i.e. I do not work on Fridays
[21:29:18] so I have 3 days every week to take care of my child :)
[21:29:19] cool
[21:29:20] which is awesome
[21:29:43] anyway
[21:29:45] I was thinking of typing while she was sleeping
[21:29:49] I do work from home. Usually in the morning
[21:29:58] then I either eat at home if I am not lazy
[21:29:58] if you had to be holding her wanting her to sleep
[21:30:02] or get downtown and eat there.
[21:30:12] it'd be a hard interface that allowed you to code at the same time ;)
[21:30:18] then I move to a local hacker place with awesome people, great lights and music and electricity :)
[21:30:42] hehe
[21:30:51] she is sleeping for 1 hour, 1 hour and a half at most
[21:31:03] and you still have to clean stuff around for her
[21:31:23] so over a 3 hour period there are maybe 40 minutes left to actually work
[21:31:52] is she allowing you to sleep?
[21:31:57] and since I need a good 10 or 15 minutes to warm up and concentrate … over a 9 hour period that would only output maybe 1 hour of real work
[21:32:15] so you can't keep your baby at home while working :D
[21:32:23] as for the nights, yeah she is sleeping well
[21:32:26] usually at 9pm
[21:32:30] and wakes up at 7:30am
[21:43:45] gerrit doesn't show the revert comment?
[21:44:18] it does so in another commit :S
[21:44:55] it could at least provide a link
[21:46:08] <^demon> We were complaining about that earlier.
[21:53:09] 12/15/2011 - 21:53:08 - Updating keys for siebrand
[21:55:10] !log testswarm Now has a public address thanks to Ryan Lane. https://testswarm.wmflabs.org/testswarm/
[21:55:11] Logged the message, Master
[21:55:14] \o/
[21:55:22] Thank you Thank you Thank you
[21:55:29] AHHHHhhhh
[21:55:38] I have been waiting for you Timo
[21:55:38] k
[21:55:49] Krinkle: https://testswarm.wmflabs.org/testswarm/ is the labs testswarm
[21:55:51] Krinkle:
[21:55:55] Yep, I'm there alright
[21:56:02] sorry
[21:56:08] well it "mostly" works
[21:56:10] oh, it's in use?
[21:56:26] still have to fix an issue where Apache does not have write access to the sqlite database
[21:57:04] https://testswarm.wmflabs.org/checkouts/mw/trunk/r106379/
[21:57:04] the fetching script running in cron is a total mess but works
[21:57:10] error
[21:57:20] really need to make it work correctly and log what it does
[21:57:35] yeah that error is because apache is not allowed to access the sqlite database
[21:57:44] so currently the wikis are checked out and available, but not installed (i.e. anything mediawiki php related fails)
[21:57:53] which belongs to testswarm:testswarm -rw-r-----
[21:57:55] like api.php, load.php or Special:UnitTesting
[21:58:00] JavaScriptTest*
[21:58:00] they are installed
[21:58:25] I just need to find a way to change the rights on the db file.
[21:58:33] oh, okay :)
[21:58:47] so it is finally "mostly" working
[21:58:51] cool
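One plausible fix for the permission problem above: the database is testswarm:testswarm mode -rw-r-----, so Apache can neither read nor write it. A hedged sketch only — the Apache group (www-data) and the database path are assumptions, not from the log, and note that sqlite also needs write access to the containing directory so it can create its journal file:

```python
import os
import shutil
import stat

DB = "/var/lib/testswarm/testswarm.db"   # assumed path, not from the log

# Hand the file and its directory to Apache's group (assumed www-data),
# then add group read/write on top of the existing mode bits.
for path in (os.path.dirname(DB), DB):
    shutil.chown(path, group="www-data")
    mode = os.stat(path).st_mode
    os.chmod(path, mode | stat.S_IRGRP | stat.S_IWGRP)
```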
[21:58:59] the debian package is finished
[21:59:03] puppetization is done
[21:59:04] Is the latest version in wm svn?
[21:59:20] the
[21:59:46] I did my development directly in the puppet repository under the 'test' branch
[21:59:55] I think I have pushed that to SVN. Let me check
[22:00:43] [mediawiki]/trunk/tools/testswarm/scripts/
[22:00:55] https://www.mediawiki.org/wiki/Special:Code/MediaWiki?path=/trunk/debs/testswarm
[22:00:56] :)
[22:01:32] oh that is the Debian package
[22:02:02] there are so many paths and versioning systems that I am losing track
[22:02:26] I have a local puppet git repo
[22:02:37] a local ubuntu vm to bootstrap the debian package which is in svn
[22:02:45] then a vm to test puppet :)
[22:03:52] storm outside, will be back soon
[22:03:58] need to clean up the garden
[22:04:26] no issues with the bastion switch, eh?
[22:08:25] no
[22:09:50] great
[22:09:55] you changed the bastion host?
[22:09:57] nrpe is fixed, btw
[22:10:01] Platonides: yes.
[22:10:04] I moved the project
[22:10:05] err
[22:10:12] I made a new instance in its own project
[22:10:14] that's why I got 'Write failed: Broken pipe', then
[22:10:19] yeah
[22:10:29] I tried warning you guys ahead of tiem
[22:10:31] *time
[22:10:39] I deleted the old one too
[22:10:42] ?
[22:10:45] it's more secure this way
[22:11:14] where did you warn?
[22:11:18] the bastion host doesn't share home directories with any other instance, and we can manage its security groups separately
[22:11:19] in here
[22:11:24] for the people I saw logged in
[22:11:31] i had a console open but wasn't doing anything
[22:11:46] still, I thought there would have been a root global message or something
[22:11:50] ah, maybe you logged in in between the switch
[22:11:56] I probably should have done a wall, yeah
[22:12:19] at least I saved all the homedirs, and kept the ssh keys. heh
[22:13:13] I should probably move the home directory server to its own project at some point too
[22:13:30] I kind of want to replace it with gluster, though
[22:13:44] Ryan_Lane: works for me :)
[22:14:09] I think we have the project storage ordered now
[22:14:27] 120TB or so raw
[22:14:41] likely 40-50 un-raw
[22:48:07] andrewbogott: looking at your dns changes in gerrit. looks good!
[22:48:41] thanks. There are many different ways to implement an identical interface, so I'm expecting contradictory comments on the last patch :)
[22:48:47] heh
[22:48:56] 2401?
[22:49:08] @commands
[22:50:00] !os-change is https://review.openstack.org/$1
[22:50:06] !os-change 2401
[22:50:26] yeah, 2401. The guy I've been talking with on openstack-dev really wants me to use a hierarchical model, a list of zones, each of which contains a list of entries. Not sure if that makes sense...
[22:50:38] hmm
[22:51:40] i am here
[22:51:45] it works kind of like that in LDAP
[22:52:36] zone is: dn: dc=wmflabs,ou=hosts,dc=wikimedia,dc=org
[22:52:44] entry is: dn: dc=canonical-bridge,dc=wmflabs,ou=hosts,dc=wikimedia,dc=org
[22:53:02] all entries for a zone are contained within that zone
[22:53:35] it's often the case for DNS servers in files too
[22:53:52] in bind style, each file is a zone, and the entries are contained within that
[22:54:50] did we move our DNS config to gerrit yet?
[22:55:21] heh. we made a project, but didn't populate it yet
[22:55:27] I presume, though, that at the top level we'll just get a list of zones and iterate over the zones, rather than trying to get one big record with everything in it. So how the API handles zones barely matters.
[22:55:47] I guess the way I have it now is slightly inefficient.
[22:55:57] ah.
[22:56:52] Oh, and, just for reassurance: Every kind of dns record still involves just a zone, key and a value, right? Never key, value, value, value, value?
[22:57:19] I'm just taking for granted that I can shove any kind of value into my existing fields, since they're just strings...
[22:58:32] (oops, you are occupied irl : )
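A sketch of the hierarchical model under discussion: a list of zones, each owning its entries, with every record reduced to a zone plus a key and a single string value, mirroring both bind-style zone files and the LDAP layout above. The class and field names are illustrative, not the actual OpenStack DNS API:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Entry:
    """One DNS record: a key and one string value. Any record type's
    data still fits in a single string field."""
    key: str      # e.g. "canonical-bridge"
    value: str    # e.g. "10.0.0.12" for an A record (illustrative)

@dataclass
class Zone:
    """A zone owns its entries, like one bind zone file, or one LDAP
    subtree such as dc=wmflabs,ou=hosts,dc=wikimedia,dc=org."""
    name: str
    entries: List[Entry] = field(default_factory=list)

# At the top level, consumers iterate zone by zone rather than fetching
# one big record with everything in it:
zones = [Zone("wmflabs", [Entry("canonical-bridge", "10.0.0.12")])]
for zone in zones:
    for entry in zone.entries:
        print(zone.name, entry.key, entry.value)
```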
[23:08:53] New patchset: Pyoungmeister; "minor refactor of test branch for upcoming search work. mostly to see if I can actually properly use branches..." [operations/puppet] (testlabs/searchoverhaul) - https://gerrit.wikimedia.org/r/1578
[23:09:16] New review: gerrit2; "Lint check passed." [operations/puppet] (testlabs/searchoverhaul); V: 1 - https://gerrit.wikimedia.org/r/1578
[23:12:09] 12/15/2011 - 23:12:09 - Updating keys for sara
[23:13:02] 12/15/2011 - 23:13:01 - Updating keys for sara
[23:14:39] who's Sara?
[23:16:13] New patchset: Ryan Lane; "Testing lvs for labs." [operations/puppet] (test) - https://gerrit.wikimedia.org/r/1580
[23:16:36] Platonides: new ops team member, working on labs
[23:17:25] ssmollett: http://ryandlane.com/blog/2011/09/23/configuring-a-local-environment-for-dealing-with-git/
[23:17:34] Platonides: she's ssmollett on irc
[23:17:47] ops is getting quite big
[23:17:49] welcome, ssmollett
[23:18:49] New review: Ryan Lane; "(no comment)" [operations/puppet] (test); V: 0 C: 2; - https://gerrit.wikimedia.org/r/1580
[23:18:50] Change merged: Ryan Lane; [operations/puppet] (test) - https://gerrit.wikimedia.org/r/1580
[23:25:05] yeah. it's nice having enough people to actually do things :)
[23:40:04] New patchset: Sara; "Second attempt at testing lvs for labs." [operations/puppet] (test) - https://gerrit.wikimedia.org/r/1582
[23:40:18] New review: gerrit2; "Lint check passed." [operations/puppet] (test); V: 1 - https://gerrit.wikimedia.org/r/1582
[23:40:27] New review: Ryan Lane; "(no comment)" [operations/puppet] (test); V: 0 C: 2; - https://gerrit.wikimedia.org/r/1582
[23:40:28] Change merged: Ryan Lane; [operations/puppet] (test) - https://gerrit.wikimedia.org/r/1582
[23:41:09] 12/15/2011 - 23:41:09 - Updating keys for neilk
[23:45:12] New patchset: Sara; "Third attempt at lvs for labs." [operations/puppet] (test) - https://gerrit.wikimedia.org/r/1583
[23:45:30] gerrit-wm: lint test? -_-
[23:45:49] New review: Ryan Lane; "(no comment)" [operations/puppet] (test); V: 0 C: 2; - https://gerrit.wikimedia.org/r/1583
[23:45:49] Change merged: Ryan Lane; [operations/puppet] (test) - https://gerrit.wikimedia.org/r/1583
[23:51:02] New patchset: Sara; "lvs for labs #4." [operations/puppet] (test) - https://gerrit.wikimedia.org/r/1584
[23:51:14] New review: gerrit2; "Lint check passed." [operations/puppet] (test); V: 1 - https://gerrit.wikimedia.org/r/1584
[23:51:20] New review: Ryan Lane; "(no comment)" [operations/puppet] (test); V: 0 C: 2; - https://gerrit.wikimedia.org/r/1584
[23:51:20] Change merged: Ryan Lane; [operations/puppet] (test) - https://gerrit.wikimedia.org/r/1584