[00:45:43] 09/20/2012 - 00:45:43 - Deleting home directory for l10nupdate in project(s): deployment-prep
[00:50:43] 09/20/2012 - 00:50:43 - Deleting home directory for l10nupdate in project(s): deployment-prep
[00:55:45] 09/20/2012 - 00:55:45 - Deleting home directory for l10nupdate in project(s): deployment-prep
[01:45:46] 09/20/2012 - 01:45:45 - Deleting home directory for l10nupdate in project(s): deployment-prep
[01:46:02] labs-home-wm: must it be quite that frequent?
[01:47:14] * jeremyb supposes Ryan_Lane could answer if labs-home-wm refuses to
[01:47:17] ;)
[01:49:29] hm
[01:49:42] well, it's likely a bug
[01:49:49] well sure ;)
[01:50:02] all that crap will be going away soon
[01:50:08] we'll be switching to gluster home dirs
[01:50:13] with pam_mkhomedir
[01:50:34] ok... i saw a checklist for the migration somewhere but didn't totally understand it
[01:51:11] well, basically now we just need to move the data, make the old mount ro, and change the mount point in ldap
[01:51:20] and make sure gluster is working on all instances
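The checklist Ryan sketches above ("move the data, make the old mount ro, and change the mount point in ldap") would look roughly like the shell steps below. This is a minimal sketch, not the actual Labs procedure: the export path, the Gluster volume name, and the LDAP automount entry are all hypothetical stand-ins.

```bash
# Hypothetical home-dir migration sketch; all paths/names are placeholders.

# 1. Copy the existing homes onto the Gluster volume.
rsync -aHAX /export/home/ /mnt/gluster-home/

# 2. Remount the old export read-only so nothing writes to it mid-switch.
mount -o remount,ro /export/home

# 3. Repoint the automount entry in LDAP at the Gluster volume.
ldapmodify -x -D "cn=admin,dc=example,dc=org" -W <<'EOF'
dn: cn=/home,ou=auto.master,dc=example,dc=org
changetype: modify
replace: automountInformation
automountInformation: -fstype=glusterfs gluster-server:/home
EOF

# 4. pam_mkhomedir then recreates any missing home directory at first
#    login, so freshly created users need no manual setup.
```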
[03:05:46] 09/20/2012 - 03:05:46 - Deleting home directory for l10nupdate in project(s): deployment-prep
[04:10:44] 09/20/2012 - 04:10:44 - Deleting home directory for l10nupdate in project(s): deployment-prep
[04:12:02] i'm not sure
[04:12:21] but i think labs-home-wm is deleting the home directory for l10nupdate in project 'deployment-prep'
[04:12:36] just a funny feeling in my stomach, y'know? intuition, i guess.
[04:15:43] 09/20/2012 - 04:15:43 - Deleting home directory for l10nupdate in project(s): deployment-prep
[05:00:43] 09/20/2012 - 05:00:42 - Deleting home directory for l10nupdate in project(s): deployment-prep
[08:18:33] Ryan_Lane huh
[08:18:42] what's these users
[08:18:46] l10
[08:21:24] petan: in beta ?
[08:21:30] yes
[08:21:33] l10n usually means localisation
[08:21:39] where is it ?
[08:21:49] (05:20:47) 09/20/2012 - 03:20:47 - Deleting home directory for l10nupdate in project(s): deployment-prep
[08:21:50] (05:25:46) 09/20/2012 - 03:25:45 - Deleting home directory for l10nupdate in project(s): deployment-prep
[08:21:51] (05:30:43) 09/20/2012 - 03:30:43 - Deleting home directory for l10nupdate in project(s): deployment-prep
[08:21:52] (05:35:43) 09/20/2012 - 03:35:43 - Deleting home directory for l10nupdate in project(s): deployment-prep
[08:22:00] dunno
[08:22:17] I think some script creates it
[08:22:21] and bot deletes it :D
[08:36:48] yeah
[08:36:50] comes from puppet
[08:40:51] !log integration deleted psm-precise, was used to check our applicationserver classes against Precise
[08:43:29] petan: bot dead again :)
[08:43:34] yes
[08:43:36] it dies a lot
[08:43:48] basically it can't survive more than a few log messages :P
[08:44:18] btw hashar who was that guy who wanted to improve bugzilla feed?
[08:44:18] any idea what caused the issue ?
[08:44:21] was it you?
[08:44:26] someone talked with me about it
[08:44:26] oh yeah
[08:44:29] aha
[08:44:48] so the idea was to setup a bugzilla instance on labs
[08:44:54] hashar that bot is written in python and my knowledge of python is like my knowledge of plants
[08:45:04] and try out the supybot IRC bot which is used by Gnome : http://code.google.com/p/supybot-bugzilla/
[08:45:30] mh... I would rather implement it in the current bot rather than making anything python :D
[08:45:49] python bots don't work well
[08:45:52] IMHO
[08:45:54] :D
[08:45:59] like log
[08:46:47] so if you are interested, that would be a nice little project :-]
[08:47:00] eh, I would be if I knew what exactly that is about
[08:47:08] what is the goal of that project :)
[08:47:16] Goal: replace wikibugs :-]
[08:47:29] aah
[08:47:30] ok
[08:47:33] how to do it: install bugzilla on some labs instance, install supybot + the supybot-bugzilla plugin
[08:47:33] yes
[08:47:38] hack some configuration
[08:47:40] right
[08:47:44] report about it :-)
[08:47:51] mm
[08:48:16] @seen mutante
[08:48:16] petan: Last time I saw mutante they were joining the channel, they are still in the channel in #mediawiki at 9/19/2012 7:39:32 PM
[08:48:30] !log integration Running apt update + upgrade on integration-apache1
[08:48:35] oah aeo
[08:48:38] of course bot is dead
[08:48:39] ahha
[08:48:47] ok I need someone to make a project
[08:49:00] which project are you part of ?
[08:49:07] many :))
[08:49:18] so you could probably find one to host a new instance :-)
[08:49:29] like beta haha
[08:49:43] or the bots project
[08:50:07] these: openstack turnkey-mediawiki bots nagios bastion huggle search deployment-prep deployment-prepbackup upload-wizard hugglewa gareth configtest
[08:50:18] that is a lot
[08:50:58] bots is good for that
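For anyone picking the project up, hashar's how-to would start out roughly like this. A rough sketch, assuming an Ubuntu instance in the bots project; the checkout URL follows Google Code's usual svn layout and is not confirmed by this log, so verify it against the supybot-bugzilla project page.

```bash
# Hypothetical setup for the wikibugs-replacement experiment described above.
sudo apt-get install -y supybot

# Interactive wizard that writes the bot's .conf (network, nick, channels).
supybot-wizard

# Drop the GNOME Bugzilla plugin into the bot's plugin directory
# (URL assumed from Google Code's standard svn layout).
svn checkout http://supybot-bugzilla.googlecode.com/svn/trunk/ \
    ~/plugins/Bugzilla

# Start the bot; then, from IRC as the bot owner, load the plugin:
supybot ~/wikibugs.conf
#   @load Bugzilla
```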
[09:18:41] stupid jenkins won't start
[13:31:52] ori-l: where'd you get that idea??
[13:31:59] * jeremyb hugs l10nupdate
[13:34:17] Poor l10nupdate
[13:42:05] it is installing on beta for some reason
[13:42:13] need to track down the root cause ;--D
[14:34:57] Change merged: Andrew Bogott; [labs/private] (master) - https://gerrit.wikimedia.org/r/24312
[14:47:17] !beta running mw-update-l10n
[14:47:18] !log deployment-prep running mw-update-l10n
[14:48:32] !log deployment-prep fixing cache permissions: sudo chown -R mwdeploy:mwdeploy /home/wikipedia/common/php-master/cache/*
[14:48:56] bah does not work well
[14:50:14] !log deployment-prep more cache permissions issue chown -R l10nupdate:l10nupdate /home/wikipedia/common/php-master/cache/l10n
[14:50:18] petan: it is dead again :(
[14:50:21] wm-bot: ping
[14:50:21] Hi hashar, there is some error, I am a stupid bot and I am not intelligent enough to hold a conversation with you :-)
[14:51:08] here
[14:51:23] :)
[14:51:27] hashar
[14:51:28] :D
[14:51:31] !log
[14:51:37] !log blah
[14:51:37] Message missing. Nothing logged.
[14:51:45] !log deployment-prep fixing cache permissions: sudo chown -R mwdeploy:mwdeploy /home/wikipedia/common/php-master/cache/*
[14:51:50] lol
[14:51:59] !beta fixing cache permissions: sudo chown -R mwdeploy:mwdeploy /home/wikipedia/common/php-master/cache/*
[14:51:59] !log deployment-prep fixing cache permissions: sudo chown -R mwdeploy:mwdeploy /home/wikipedia/common/php-master/cache/*
[14:52:02] :D
[14:52:08] !beta deployment-prep more cache permissions issue chown -R l10nupdate:l10nupdate /home/wikipedia/common/php-master/cache/l10n
[14:52:08] !log deployment-prep deployment-prep more cache permissions issue chown -R l10nupdate:l10nupdate /home/wikipedia/common/php-master/cache/l10n
[14:52:11] stupid bot ;-D
[14:52:17] hashar I think we need a new bot, one which is written in C# or C++ at least
[14:52:20] python doesn't work
[14:53:14] language is rarely an issue -:]
[14:53:20] bad implementation / code are
[14:53:33] ok, but there is a lack of people that can fix python
[14:53:38] cuz no one understands it
[14:53:44] well, Damianz does
[14:53:48] but he doesn't fix it though
[14:53:50] :D
[14:54:15] Damianz is busy :P
[14:54:29] Python is easy to write, the bot's just written crappily so it doesn't reconnect properly
[14:55:23] * Damianz might find the source and stick it in git later
[14:55:36] Damianz why does it disconnect in 5 seconds?
[14:55:45] I don't like the idea that it would reconnect every minute
[14:56:05] Dunno, would have to look at its output.
[14:56:20] that's another problem, it doesn't make logs
[14:56:22] to file
[14:56:33] yeah that's easy to fix....
[14:56:36] wm-bot runs as a service and it has its own log file
[14:56:46] It needs a better init file too
[14:56:51] hmm
[14:56:53] yes
[14:56:53] So it doesn't just crash if the cache dir isn't there
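The complaints above (no log file, no reconnect, crashes on a missing cache dir) are all wrapper-level fixes rather than language problems, much as hashar says. A minimal sketch of such a wrapper; the bot path, cache dir, and log path are hypothetical, and this is not the actual wm-bot init script:

```bash
#!/bin/bash
# Hypothetical supervisor for an IRC bot; all paths are placeholders.
BOT=/usr/local/bin/ircbot.py
CACHE=/var/cache/ircbot
LOG=/var/log/ircbot.log

mkdir -p "$CACHE"   # so a missing cache dir can't crash the bot at start

while true; do
    python "$BOT" >>"$LOG" 2>&1   # keep output in a real log file
    echo "$(date) bot exited; restarting in 60s" >>"$LOG"
    sleep 60   # back off instead of hammering the IRC server
done
```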
[14:56:57] <^demon> Yay, we can upgrade gerrit machines to precise :)
[14:57:06] ^demon are you sure?
[14:57:08] :D
[14:57:14] <^demon> Yes, I tested :)
[14:57:16] :o
[14:57:25] also, how would community benefit from that :D
[14:57:26] It doesn't break java life in 35 different ways?
[14:57:34] I mean would it bring a better gerrit to us? :)
[14:58:04] how long is 10.04 going to be supported? I have one server too
[14:58:10] <^demon> petan: Well, no immediate benefits. Just general "nice to not have any gotchas when we want to ditch lucid"
[14:58:11] I don't plan any upgrade soon
[14:58:36] my other server is already precise afaik...
[14:58:46] but this one really can't be rebooted
[14:59:14] <^demon> Damianz: Oddly enough, openjdk-jre-headless *might* have this weird regression where it's installed somewhere different from before. In that case, we might have to set container.home in gerrit.config to the new path :p
[14:59:21] <^demon> But haven't replicated it since ~2 weeks ago
[14:59:53] Java is like ruby to me, you end up with 10 installs of versions that have minor differences yet stuff refuses to run on anything except their version
[15:01:00] * Damianz stabs drac for being fussy and not working on his mbp
[15:01:03] <^demon> Eh, java itself hasn't bitten us too bad with regards to gerrit gerrit.
[15:01:16] <^demon> Scratch one gerrit.
[15:01:46] <^demon> Well, other than leapseconds. But I maintain that those are created by über nerds to screw with the rest of us.
[15:02:10] Heh the leapsecond thing is just annoying, cpu cycles are insanely wasted
[15:02:18] rename page/{10 => 6}/index.html (84%) < Oh git I love you
[15:05:46] 09/20/2012 - 15:05:46 - User cmcmahon may have been modified in LDAP or locally, updating key in project(s): bastion,deployment-prep,quality-assurance
[15:05:56] 09/20/2012 - 15:05:56 - Updating keys for cmcmahon at /export/keys/cmcmahon
[15:29:07] l10n update is still running :(
[17:13:48] <^demon> Damianz: Actually, totally just replicated the jvm issue in labs. Silly precise :)
[17:13:54] heh
[17:14:00] Silly java :)
[17:15:13] <^demon> Indeed.
[17:15:21] <^demon> Probably too: silly package maintainers.
[17:16:25] Are we actually packaging gerrit from git tags then installing from the repo currently?
[17:17:05] <^demon> We're packaging the war and installing from apt.
[17:17:46] <^demon> But I want to come up with some kind of build system that lets us pick an arbitrary upstream tag, build it, and package that
[17:17:52] <^demon> All automagically with rainbows and unicorns.
[17:17:54] I thought a war was pretty much a package on its own
[17:18:20] <^demon> Kindof. But no way to get it on the host easily.
[17:18:24] <^demon> Not gonna stash it in puppet :p
[17:18:34] That could be kinda funny
[17:18:46] "Why is puppet so slow!?" "errr..."
[17:19:11] <^demon> Yeah, apt for a jar/war is kind of silly usually.
[17:19:18] <^demon> But hey, it's java, what are you gonna do?
[17:19:28] It needs more rainbows
[17:19:41] Though so does everything we package currently heh
[17:20:44] <^demon> Having some build process in jenkins would be kind of nice.
[17:20:51] <^demon> Haven't found a plugin that does it all automagically yet.
[17:21:13] <^demon> https://github.com/mika/jenkins-debian-glue is the closest I've found
[17:22:45] Hmm
[17:23:34] I was thinking it would be nice (based on a path regex or a plugin option checkbox thing) to auto-configure jenkins and build on merge from gerrit for real CI-ishness... but it depends on having everything structured the same in the repo heh
[17:28:13] <^demon> Damianz: Well for debian packages, it'd be nice to do that :)
[17:28:27] <^demon> Not impossible, if we do it properly how git-buildpackage wants you to.
[17:29:52] Has a few potential issues but it could be interesting... if we were going to do that we'd really have to pin all the packages first.
[17:30:13] Though imo using 'latest' is pretty silly to start with and prevents you from really testing stuff beforehand or staging rollouts of it.
[17:31:18] <^demon> I don't use latest in gerrit.pp :)
[17:31:21] <^demon> It's pinned.
[17:31:50] That's a start :)
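For anyone curious what "pinned" means in practice: either the puppet package resource names an exact version (ensure => '2.4.2-1' instead of ensure => latest), or apt itself is pinned. A sketch of the apt flavor; the version number here is made up, and the real gerrit.pp pin is not shown in this log:

```bash
# Hypothetical apt pin holding gerrit at one exact version.
cat <<'EOF' | sudo tee /etc/apt/preferences.d/gerrit
Package: gerrit
Pin: version 2.4.2-1
Pin-Priority: 1001
EOF

# Confirm which candidate version apt will now install.
apt-cache policy gerrit
```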
[17:38:27] <^demon> Ooh, I was just about to step out, but hashar's back :)
[17:38:46] not really
[17:38:47] finding out a pizza to order :-D
[17:39:06] <^demon> I can e-mail you a picture of the pizza I'm eating :p
[17:39:55] keep it for the remote staff mailing list :-)
[17:43:43] ^demon: will be back later this evening though, need to finish up some work on labs
[17:44:14] <^demon> Enjoy your pizza :)
[17:57:31] petan: hey, what's up
[18:28:27] Change on mediawiki a page Developer access was modified, changed by Rohit21agrawal link https://www.mediawiki.org/w/index.php?diff=585374 edit summary:
[18:31:18] Change on mediawiki a page Developer access was modified, changed by Sumanah link https://www.mediawiki.org/w/index.php?diff=585375 edit summary: /* User:Rohit21agrawal */
[19:08:27] andrewbogott: yt?
[19:09:15] ori-l: what's up?
[19:09:53] Oh, you wanted me to rearrange IP quotas, right? Take from which and give to which?
[19:10:19] edit-engagement -> visualeditor
[19:10:24] :)
[19:13:10] ori-l: Ok… done, I think.
[19:13:22] andrewbogott: thanks!
[19:26:45] hey, i can't write a single byte to my home dir, but i'm not sure why it's full
[19:26:57] df -h /home/ori --> labs-nfs1:/export/home/visualeditor/ori 18G 17G 0 100% /home/ori
[19:27:05] du -h /home/ori | tail -1 --> 16M /home/ori
[19:36:24] might be full again
[19:50:40] ori-l: please for the love of god don't use your home directory to store things
[19:50:46] there's /data/project and /mnt
[19:50:58] and clean up all your homedirs while you are at it
[19:51:01] 16 megabytes
[19:51:04] !
[19:51:04] There are multiple keys, refine your input: $realm, $site, *, :), access, account, account-questions, accountreq, addresses, afk, alert, amend, ask, b, bang, bastion, beta, blehlogging, blueprint-dns, bot, bots, broken, bug, bz, console, cookies, credentials, cs, damianz, damianz's-reset, db, del, demon, deployment-beta-docs-1, deployment-prep, docs, documentation, domain, epad, etherpad, extension, forwarding, gerrit, gerritsearch, gerrit-wm, ghsh, git, git-puppet, gitweb, google, group, hashar, help, hexmode, hyperon, info, initial-login, instance, instance-json, instancelist, instanceproject, keys, labs, labsconf, labsconsole, labsconsole.wiki, labs-home-wm, labs-morebots, labs-nagios-wm, labs-project, leslie's-reset, link, linux, load, load-all, logbot, logs, mac, magic, manage-projects, meh, monitor, morebots, nagios, nagios.wmflabs.org, nagios-fix, newgrp, new-labsuser, new-ldapuser, nova-resource, openstack-manager, origin/test, os-change, osm-bug, pageant, password, pastebin, pathconflict, petan, ping, pl, pong, port-forwarding, project-access, project-discuss, projects, puppet, puppetmaster::self, puppetmasterself, puppet-variables, putty, pxe, queue, quilt, report, requests, resource, revision, rights, rt, Ryan, ryanland, sal, SAL, say, search, security, security-groups, sexytime, socks-proxy, ssh, start, stucked, sudo, sudo-policies, sudo-policy, svn, terminology, test, Thehelpfulone, unicorn, whatIwant, whitespace, wiki, wikitech, wikiversity-sandbox, windows, wl, wm-bot,
[19:51:05] heh
[19:51:18] * Damianz kicks wm-bot onto ori-l
[19:51:43] Ryan_Lane: Buuut we all know ~/.porn is where you keep your code
[19:51:43] * ori-l thrashes and gasps under the metallic hulk
[19:52:04] ori-l: in editor-engagement you are eating about 700MB
[19:52:19] i pasted the output of du
[19:52:30] different project
[19:52:45] oh. don't change the subject! :P
[19:52:49] j/k. let me look. oops if so.
[19:52:49] 694M ./ori
[19:52:55] they all share the same space
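This df/du mismatch is the classic shared-export trap: df reports the whole volume backing the mount, while du only counts the one directory you point it at, so a neighbour's files can fill "your" home. A quick way to rank actual usage, with a hypothetical server-side export path:

```bash
df -h /home/ori    # size/used/free of the entire shared export
du -sh /home/ori   # just this directory: can be tiny while df reads 100%

# On the file server, rank every home dir on the export by size
# (path hypothetical) to find who is actually eating the space.
du -sh /export/home/*/* 2>/dev/null | sort -rh | head -20
```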
[19:53:01] On the bright side, LETS KILL NFS!
[19:53:07] * Damianz hides
[19:53:10] Damianz: obviously
[19:53:15] that is going to take some effort, though
[19:53:23] sadly yes
[19:53:32] <^demon> I'm using 138M :\
[19:53:41] One of the annoying things that falls down the back of the sofa while the ops are having a pillow fight on top
[19:53:45] seems I need to take the time to do it, though
[19:54:25] let's see which instances gluster is broken on
[19:56:15] instance 0000009b (nagios) is dead for good, right?
[19:56:30] it's corrupt I believe
[19:56:35] nagios-main is the active instance
[19:56:58] ok. I'm going to delete the old one
[19:57:16] !log nagios deleted nagios (0000009b) instance
[19:59:08] drdee: is 000000b2 (pageviews instance in statsgrokse) needed anymore?
[19:59:12] that's the one I broke, right?
[19:59:22] i think so :)
[19:59:26] IIRC it is borked
[19:59:45] code is on github and data is on dumps
[19:59:45] I really hate puppetmaster::self in so many ways, sucks the same way as the test branch did
[19:59:59] so yeah you can delete the instance
[20:01:59] Change abandoned: Dzahn; "boldly abandoning the very last change on the "test" branch. Bye bye "test". Please resubmit on prod..." [operations/puppet] (test) - https://gerrit.wikimedia.org/r/12164
[20:02:00] ok. cool
[20:02:01] thanks
[20:03:25] !log statsgrokse deleted pageviews instance
[20:05:46] 09/20/2012 - 20:05:46 - User laner may have been modified in LDAP or locally, updating key in project(s): statsgrokse
[20:09:46] drdee: analytics instance (000000e2) was broken too, right?
[20:09:57] in mobile-stats
[20:10:03] mmmmm
[20:10:10] not sure
[20:10:12] let me try
[20:10:19] well, it isn't up
[20:10:27] which is why I ask
[20:10:40] i thought you had an amazing memory :0
[20:10:44] 09/20/2012 - 20:10:44 - User laner may have been modified in LDAP or locally, updating key in project(s): globaleducation
[20:10:47] heh
[20:10:52] actually, I could check my email
[20:10:57] yeah kill it
[20:17:25] Hmmm
[20:17:34] Is Dzahn mutante?
[20:18:44] Yes
[20:19:20] Ah... damn people changing names :P
[20:20:17] <^demon> Ryan_Lane: I can't seem to delete i-000003f5. The nova resource page redlinked, but it still says active and won't disappear from my instance list.
[20:20:21] Ryan_Lane: I restarted nova-compute on the compute nodes, and now the scheduler on virt0 can't see them: "Failed to schedule_run_instance: No valid host was found. Is the appropriate service running?"
[20:20:28] <^demon> (Feel free to nuke from high orbit)
[20:20:31] ^demon: Probably my fault… give me a bit to sort this out.
[20:20:51] ^demon: give it a bit
[20:20:59] yeah
[20:21:08] if nova-compute isn't up, it'll queue or not happen
[20:21:15] Hmmm, I wonder if my openstack install is done yet now you've reminded me
[20:21:21] Ryan_Lane: Should I just restart the scheduler? It looks to me like the compute nodes are up...
[20:21:56] Scheduler keeps saying things like "Received compute service update from virt2"
[20:22:36] * ^demon waits patiently
[20:23:30] andrewbogott: no, that should do it
[20:23:52] virt6-8 are down, though
[20:24:01] for nova-compute
[20:24:20] the virt6 compute log is very busy… does it just take several minutes to come up?
[20:24:25] nova-manage service list
[20:24:27] oh
[20:24:28] yes
[20:24:28] it does
[20:24:29] I'm used to devstack where it takes ~1 second
[20:24:39] for compute nodes with tons of instances it takes a while
[20:24:53] Ah, ok. Should've done a rolling upgrade then.
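The rolling upgrade andrewbogott wishes he had done amounts to restarting nova-compute one host at a time and waiting for the service to report healthy before moving on. A sketch using the virt host names from this log; it leans on the fact that Essex-era `nova-manage service list` marks a live service with ':-)' and a dead one with 'XXX':

```bash
# Hypothetical rolling restart of nova-compute, one host at a time.
for host in virt6 virt7 virt8; do
    ssh "$host" sudo service nova-compute restart
    # Block until this host's compute service reports healthy again
    # before touching the next one.
    until nova-manage service list | grep nova-compute \
            | grep "$host" | grep -q ':-)'; do
        sleep 10
    done
done
```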
[20:24:56] going eat
[20:24:59] back in a little bit
[20:25:02] ok
[20:26:30] I miss the days where Ryan was food for a few hours a day :P
[20:30:30] yeahhhh
[20:31:13] !log deployment-prep fixed http://bits.beta.wmflabs.org/ . Varnish needed some more tweaks (see {{gerrit|13304}} deployed on deployment-cache-bits-02).
[20:31:19] chrismcmahon: bits.beta.wmflabs.org is fixed :-)
[20:31:23] until it crashes again
[20:31:26] bah bot dead
[20:31:27] grr
[20:31:49] ^demon: Try now
[20:34:07] <^demon> andrewbogott: Well it's gone :)
[20:38:33] howdy folks. first time on irc in, uh, 10 years? (and a special hello to StevenW)
[20:40:36] Hey tedder
[20:40:43] looking to get a labs account? :)
[20:40:47] heh, yeah.
[20:41:05] ^demon, jeremyb ^^
[20:41:15] I think they can help you out.
[20:41:17] * jeremyb looks up
[20:41:21] actually wondering what the m.o. is for getting the perl+java setup I need.
[20:42:12] tedder: welcome back
[20:42:24] tedder: not much changed in the IRC world :-]
[20:42:34] <^demon> tedder: Welcome. jeremyb can get you squared away with an account. As far as installing things on your instances -- we use puppet for this nowadays. So the proper way is to write a puppet manifest installing the things you need.
[20:42:36] which is why it's wonderful, hashar.
[20:42:38] <^demon> That way it's re-usable :)
[20:42:57] tedder: though we now have a memoserv bot to send private messages to people while they are offline
[20:42:59] okay, are there examples on github or somewhere?
[20:43:04] tedder: and undernet is mostly dead
[20:43:25] tedder: i'm not paying much attention here. say my name when you're ready for an account. (or fill out the form on the website)
[20:43:25] * tedder looks for #wikimedia-hottub
[20:43:39] <^demon> tedder: Not github, we use gerrit. And yes, all of the existing manifests are in git.
[20:43:56] I'll fill out the form, jeremyb, no worries. Basic questions right now.
[20:44:04] chad loves github
[20:44:07] <^demon> https://gerrit.wikimedia.org/r/gitweb?p=operations/puppet.git
[20:44:11] <^demon> Damianz: Shut your face :)
[20:44:16] :D
[20:44:32] thx ^demon, there were no links from mediawiki/Wikimedia_Labs (if only it was editable. oh wait!)
[21:05:12] tedder: what are you using perl+java for?
[21:05:32] I have two sets of bots written in those.
[21:05:41] want to get them off my home 'server'.
[21:07:29] you folks are using generic/shared arch? I mean, I'm creating/modifying puppet scripts that will get the standard machines to have my dependencies, not creating scripts that will only be used to create my own virtual server?
[21:13:27] tedder: errr, they're not scripts. they are manifests
[21:13:33] have you used puppet before?
[21:13:51] no. yeah, I know they aren't scripts, they are configs, basically.
[21:14:03] you should be creating classes that tell puppet how to install/set up your software
[21:14:19] and then you can use those classes on whichever machines you want to
[21:14:34] bbl
[21:15:00] and commit to manifests/tedder.pp or something similar?
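What Ryan describes for tedder would look roughly like the class below, committed as manifests/tedder.pp. A minimal sketch wrapped in a shell smoke test; the package names are assumptions about what perl+java bots might need, not anything confirmed in this log:

```bash
# Hypothetical manifests/tedder.pp, applied locally as a smoke test.
cat > tedder.pp <<'EOF'
# Reusable class installing the bots' dependencies (package names assumed).
class tedder::bots {
    package { ['perl', 'libwww-perl', 'openjdk-6-jre-headless']:
        ensure => present,
    }
}

# Any instance that includes the class gets the identical setup.
include tedder::bots
EOF

sudo puppet apply tedder.pp
```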
[22:02:16] Damianz: can you clean your bot's homedir in the bots project?
[22:02:18] well
[22:02:20] your homedir
[22:02:42] I don't think there's really anything in it, maybe a clone of the source
[22:02:49] cluebotbig.sql
[22:03:00] Ah I'll get rid of that, that's in project storage now anyway
[22:03:04] hm
[22:03:05] wait
[22:03:07] there's more too
[22:03:26] ./cluebot is eating 500+ MB
[22:03:42] cluebot/ cluebotbig.sql cbng-core can go
[22:03:47] they're all debugging shiz
[22:03:50] it's fine
[22:03:53] just clean up what you can
[22:04:45] seems that gluster for homedirs works on every instance that's currently up
[22:04:54] Should be more empty now
[22:05:03] thanks
[22:05:21] I'd moved most of the stuff like mysql dumps to project store already heh
[22:23:40] andrewbogott: can the instance project be added by the bot as well?
[22:24:15] Ryan_Lane: I'm pretty sure I need keystone access to get the project name. I can add the project ID easily, but I'm not sure that helps anyone.
[22:27:04] it can't be looked up in the database?
[22:27:29] I'll check. I'm not sure the name is stored in the db anymore.
[22:28:02] ah
[22:28:50] Hm, did I break the ganglia link or was it broken before?
[22:29:19] it's broken?
[22:29:33] the stats on the main page went away for some reason
[22:29:49] There's a 'stats' field on the bottom of each instance page
[22:30:11] Which seems to have some quoting problem or mismatched parens or something
[22:39:34] Yep, I broke it. Fixed now
[22:44:23] main page is still broken
[22:44:53] ah
[22:45:02] there's no properties on the page
[22:45:16] the new template needs SMW markup
[22:45:54] I can actually put the project in via the wiki
[22:46:03] there's a maintenance script to update all the pages, too, btw
[22:51:39] hm
[22:51:46] it seems nothing was added by doing that automated run
[22:52:45] andrewbogott: seems I broke things
[22:53:23] Ryan_Lane: It looks like the instance 'project_id' field in essex is actually just the project name.
[22:53:29] So I'm changing the code to use that now.
[22:53:30] ah
[22:53:30] cool
[22:53:37] so, I ran my maintenance script
[22:53:39] and it broke things
[22:53:43] but only kind of
[22:53:56] What does the maintenance script do?
[22:54:15] It rebuilds every instance page?
[22:54:16] it just edits the pages
[22:54:32] Using the same codepath as the one I've been monkeying with?
[22:54:35] yep
[22:55:05] it wiped out your changes
[22:56:24] Are you sure? Most pages didn't have auto-status anyway...
[22:56:36] Since they're only added in response to an event, and most instances haven't had events.
[22:56:46] oh
[22:56:48] nevermind
[22:57:02] Which I guess means that most instance pages are temporarily stupid :(
[22:57:08] it didn't update I-000003f5
[22:57:12] not sure why
[23:02:09] hm
[23:02:13] 000003f5 is deleted I think
[23:02:40] though that was added by the plugin
[23:04:03] There may be a race with page deletion. It's worked properly the last couple of times...
[23:04:18] well, I think it was deleted a while ago
[23:04:23] lemme look in the database
[23:05:04] I'm about ready to murder git today. There's a bug in the version I'm using where it keeps saying I have unstaged changes when I don't
[23:05:16] Hard to fix a problem that's in git's imagination
[23:05:42] Are you sure?
[23:05:46] I haven't seen git lie before
[23:05:59] seems i-000003f5 isn't in the database;
[23:06:25] hm
[23:06:33] maybe it's named differently
[23:06:48] ah
[23:06:49] yep
[23:06:51] it's deleted
[23:08:17] RoanKattouw: I would do a 'git diff' and it would return empty, and then rebase would immediately tell me to stage my changes.
[23:08:21] But of course it won't do it now.
[23:08:42] What do 'git status' and 'git diff --staged' say?
[23:08:57] If 'git diff' is empty that does NOT mean you have no staged changed
[23:08:58] *changes
[23:09:10] You may have git add-ed them already, or you may have deleted/added a file, or whatever
[23:09:27] Except I just did a 'git commit' immediately before that.
[23:09:37] Which means my staged changes should've been in a patch...
[23:09:43] Right
[23:09:57] Just sayin', when in doubt, look at git status and it'll tell you what's up
[23:10:02] * andrewbogott nods
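RoanKattouw's distinction is easy to demo in a throwaway repo: plain `git diff` compares the worktree against the index, so it goes empty the moment you `git add`, while `git diff --staged` compares the index against HEAD:

```bash
# Show that an empty `git diff` does not mean "nothing staged".
git init /tmp/diffdemo && cd /tmp/diffdemo
echo one > file && git add file && git commit -m initial

echo two > file
git add file        # stage the edit
git diff            # empty: worktree and index now match
git diff --staged   # shows the staged edit: index vs HEAD
git status          # and this always tells you what's up
```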
[23:11:53] Ryan_Lane: Are these your pending changes in mediawiki.pp?
[23:15:11] Well, I'm merging 'em :)
[23:16:48] mediawiki.pp?
[23:16:50] where?
[23:18:42] andrewbogott: can you add zone too?
[23:18:46] err
[23:18:47] region?
[23:18:57] hm
[23:19:02] need keystone for that, eh?
[23:19:18] I think so, but lemme check
[23:19:52] andrew@devstack:/opt/stack/nova/nova/db$ grep -ir region *
[23:19:52] andrew@devstack:/opt/stack/nova/nova/db$
[23:20:00] So, unless region has a secret second name...
[23:20:01] :(
[23:20:02] heh
[23:20:05] looks like it's not in the db
[23:20:10] I think it comes from keystone
[23:20:18] we should give everything pseudo names
[23:20:35] Damianz: No joke, OpenStack has four different IDs for an instance.
[23:20:37] actually...
[23:20:44] it's configured from the client side
[23:20:53] andrewbogott: can you make that a setting in the plugin's config?
[23:20:58] you'd need it for keystone anyway
[23:21:48] You mean so that it's set in nova.conf per host? Sure.
[23:22:57] I've updated the template to add SMW properties
[23:23:41] this is looking better now: https://labsconsole.wikimedia.org/wiki/Special:Browse/Nova_Resource:I-2D00000430
[23:24:18] holy shit that's ugly
[23:24:21] there's a "Browse properties" link in the toolbox in the sidebar to get to that page
[23:24:30] Damianz: that's the browse properties link
[23:24:42] this is the normal page: https://labsconsole.wikimedia.org/wiki/Nova_Resource:I-00000430
[23:24:48] still kind of ugly
[23:24:54] A little better :P
[23:25:15] We really do rape data, sometimes it seems it would just be easier to get it from nova and short cache it :P
[23:25:32] we should make one template open the table and the other close it
[23:25:35] we have lua now too
[23:25:40] we could implement this in lua. heh
[23:27:05] well, I did that edit wrong
[23:27:09] https://labsconsole.wikimedia.org/wiki/Nova_Resource:I-00000430
[23:27:16] ah
[23:27:17] crap
[23:27:18] I know why
[23:27:41] Oh you're squashing the templates together! I wanted that to happen but don't know how
[23:27:57] magic :)
[23:28:08] you can open a table in one place and close it in another
[23:28:43] fixed
[23:28:45] Just as soon as compute comes back up on virt8 there should be a project field… I hope.
[23:28:58] there we go
[23:29:55] is there any way to force the plugin to update all nodes?
[23:29:59] err
[23:30:01] all instances
[23:30:03] So, as to the region name… you want that hardcoded in the role as well, or does that need to be a global?
[23:30:18] in the role
[23:30:24] because we'll change that per sitew
[23:30:24] ok
[23:30:26] *site
[23:31:07] I've been trying to think of a way to prompt updates. The events monitored are...
[23:31:12] *clears throat*
[23:31:17] 'compute.instance.delete.start',
[23:31:17] 'compute.instance.create.start',
[23:31:19] 'compute.instance.create.end',
[23:31:20] 'compute.instance.rebuild.start',
[23:31:22] 'compute.instance.rebuild.end',
[23:31:23] 'compute.instance.resize.start',
[23:31:25] 'compute.instance.resize.end',
[23:31:26] 'compute.instance.suspend',
[23:31:28] 'compute.instance.resume',
[23:31:29] 'compute.instance.exists',
[23:31:30] 'compute.instance.reboot.start',
[23:31:31] 'compute.instance.reboot.end'
[23:31:32] So that looks like 'no' at the moment.
[23:31:36] bleh
[23:31:46] we should add a fake event somewhere
[23:31:47] There might be a heartbeat event, but that would be Too Many Updates.
[23:31:55] so that we can trigger it
[23:33:22] Yeah, I'm trying to think of how that would work… I guess there could just be something like 'nova nudge '
[23:33:54] maybe based on metadata setting?
[23:34:05] then we can update a meta-data field
[23:36:01] is there a trigger for that?
[23:36:15] nova meta puppet-abogott set update_wiki=true
[23:37:17] Dunno… I'll look at that once I get regions fixed up
[23:37:48] cool
[23:38:14] hm. doesn't help for deleted instances.
[23:39:08] the wiki takes care of that, but it's likely best if it doesn't need to
[23:39:21] Region should be 'pmtpa' for the moment?
[23:39:26] yep
[23:48:16] ok. I need to head home. back in like 40 minutes
[23:52:56] * Damianz imagines Ryan jammed in a tram with 100 other people grumbling and listening to music
[23:58:18] If only he were so lucky
[23:58:28] There's no trains going to where he lives, just buses