[00:03:46] * petan found a new storage to expand his /home to [00:03:54] 80tb? sounds good [00:04:01] I'm sure some bots could take up a few hundred gig a day in logs :P [00:04:25] I am sure that if I remotely mount that 80 tb to my desktop I will get a few bits too [00:10:07] Could make a mirror of fosdem videos and still have 79tb to spare! [00:11:01] I'm keeping home directories small [00:11:33] no megavideo by wikimedia :/ [00:11:54] wikivideo [00:12:16] petan: patch: **** Can't rename file includes/Article.php to includes/Article.php.orig : Permission denied [00:12:19] sigh [00:12:25] you haven't got your permissions sorted yet :) [00:13:52] actually I think you need to add me to the depops group [00:14:41] or I can add myself actually [00:14:47] Yeah you need to add yourself [00:15:54] Still hoping you don't the servers then nfs gets turned off :P [00:16:12] It would be terrible to accidentally the servers [00:17:17] ssmollett: if $instanceproject == "testing" { blah } else { blah } [00:17:43] thanks. [00:19:16] yw [00:19:46] Some of the puppet stuff is strange, like in the openstack class where it does stuff based on hostname :P [00:22:47] werdna: sec [00:23:03] werdna: you need to insert yourself to group depops in /etc/group [00:23:13] petan: I already did [00:23:15] thanks [00:23:16] :) [00:23:19] ok [00:24:00] petan: dbdump is really slow :/ [00:24:07] what's load? [00:24:14] storage is slow [00:24:21] instances are usually fast [00:25:03] just typing is slow [00:25:11] feels like a slow connection but everything else is fine [00:25:24] ah... it works fine to me [00:25:30] load is less than 1 [00:25:33] should be okay [00:25:33] hmm [00:25:36] Someone needs to invent a waterproof laptop, that way I can go for a shower. [00:25:36] maybe packet loss or sth [00:25:55] or just stop using shower that would have similar effect [00:26:24] I think if that is your main impediment to having a shower, you might have a problem ;-) [00:27:02] !log deployment-prep switched to HEAD [00:27:03] Logged the message, Master [00:27:10] hexmode requested it [00:27:19] as in SVN HEAD? [00:27:20] werdna: we are using HEAD now, so shouldn't be problem [00:27:22] yes [00:27:46] hexmode decided to switch to that version since more people are using it [00:27:46] hmm [00:28:01] did I really type switched to HEAD... [00:28:03] Tbf whenever I've used HEAD it's been pretty stable. [00:28:07] I meant switching [00:28:08] :P [00:28:08] werdna: problem? [00:28:11] it's running now [00:28:18] that kinda defeats the purpose of going to somewhere with the same setup as wmf production [00:28:24] petan: You should have logged "development-prep got HEAD" [00:28:34] :) [00:28:37] classy [00:28:52] ;) [00:28:55] werdna: being "beta" it was never supposed to be what WMF currently is [00:29:03] but what WMF aims to be [00:29:10] at least, that was my thought [00:29:21] then why bother with trying to replicate the rest of the setup? [00:29:21] Beta is where you run production stuff but kinda have an excuse when stuff breaks, right? [00:29:30] it's for testing with a wmf setup, right? [00:29:41] Damianz: :) [00:30:07] werdna: no, that is OPs job... we do try to simulate it somewhat [00:30:11] I have one project in production that's been in "Beta" for 5 years and will soon be in v3 "Beta" :D [00:30:38] I want a place to find bugs earlier [00:30:39] * Damianz thinks he might update the footer [00:30:48] so we need HEAD [00:31:05] I think petan just gave you HEAD.
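petan's one-liner above is the usual Puppet pattern for branching a manifest per labs project. A minimal runnable sketch of trying it out on an instance, assuming nothing beyond a stock puppet install; the manifest path is arbitrary and notice() stands in for whatever real per-project classes the labs manifests use:

    # Write a throwaway manifest around the $instanceproject variable quoted in
    # the chat, then compile it locally. Under 'puppet apply' the variable is
    # unset, so this prints the else branch; on a labs instance it would come
    # from the node's configuration.
    cat > /tmp/instanceproject.pp <<'EOF'
    if $instanceproject == "testing" {
        notice("testing project: apply the beta-specific config here")
    } else {
        notice("any other project: apply the default config here")
    }
    EOF
    puppet apply /tmp/instanceproject.pp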
[00:31:11] :P [00:31:16] working [00:31:23] storage is slow [00:31:24] buncha teenagers in here [00:31:27] "Dear HR..." [00:31:37] * hexmode stomps off like an old man [00:31:44] * Damianz sticks with his Master [00:31:50] I am not a teenager :) [00:31:57] my beard even has grey, so you can call me a greybeard [00:32:05] ok [00:32:11] hexmode: well, the point of the exercise of testing on the web rather than just on my local box is that if it will break on production it should break on the testing box as well and I can catch it early [00:32:16] Tbf I don't think your hair is related to your age, just working around IT :P [00:32:29] !hexmode is grey old man [00:32:29] Key was added! [00:33:05] werdna: ok but fwiw, TMH people wanted us to upgrade [00:33:11] and they were there first [00:33:24] fair enough [00:33:26] TMH? [00:33:34] Timed Media Handler [00:34:25] ah [00:34:31] It's a shame you can't run a haproxy box with a bunch of acls that based on a cookie will point you to different installs running different versions (stable, trunk, etc) of mw but I guess the setup and updates for one is enough work. [00:34:49] I don't believe you can write a code which doesn't break the testing site and break the production, testing site is already so broken... [00:35:10] someone should fix squid [00:35:14] yes [00:35:20] some RoanKattouw [00:35:42] I saw the problem with Gadget-defines, but dismissed it b/c of squid [00:35:51] he is asleep [00:35:57] or should be [00:36:03] just like me [00:36:09] because he's in same tz [00:36:23] 1:36 am :o [00:36:48] friday! :) [00:37:03] Rebecca Black had a song about that [00:37:12] don't know her [00:37:24] I am outsider from eu [00:37:28] see youtube in english [00:37:32] :) [00:37:45] all music I see is metal... [00:38:16] https://en.wikipedia.org/wiki/Friday_%28Rebecca_Black_song%29 [00:38:18] probably she is singing different style [00:39:32] how does it come the article about weird song is 20 times longer than article about unix filesystem [00:40:00] specialist area vs. article of general interest [00:40:01] shrug [00:40:37] yeah, you don't want k&r writing your encyclopedia [00:40:40] ;) [00:40:45] :o [00:42:04] It's friday, friday, friday... has been for 42min! [00:42:23] I got it almost 2 h :P [00:42:30] enjoying... [00:42:33] :D [00:43:20] I don't think removing 'artist' from that page would be a COI in regards to music :P [00:43:29] be bold [00:44:19] New patchset: Sara; "Test migration from libnss-ldap to libnss-ldapd." [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2632 [00:44:39] Still loving http://www.youtube.com/watch?v=SAIEamakLoY, google really know how to do some stuff right. [00:47:25] hexmode / petan: Reckon we could throw together a het deploy setup? [00:47:37] especially since we have that on WMF [00:47:42] it will help reflect wmf setup more [00:47:55] werdna: can you do that? I don't know anything about het deploy [00:47:55] *and* you can use the site for a few purposes at once [00:48:01] Do you want me to? [00:48:04] werdna: dbdump is fucked up, I will reboot it [00:48:05] I like it! [00:48:13] I just tried to ssh there [00:48:33] yeah, I told you D [00:48:35] :D* [00:48:38] though it was just rebooted [00:48:52] do you have access?
[00:48:58] yes he does [00:48:58] yeah, petan gave me access [00:49:43] I love the idea, go for it :) [00:50:09] werdna: reedy is also supposed to have access if you need help [00:50:21] but I think he is pretty busy with 1.19 [00:50:37] Roan Kattouw also have access but he's just busy :| [00:50:49] no squid [00:51:07] !log deployment-prep I broke it! [00:51:08] Logged the message, Master [00:51:11] :D [00:51:13] Roan shouldn't be blamed for squid [00:51:21] I should be blamed for squid ;) [00:51:22] Is the deployment site no longer up? [00:51:23] Ryan_Lane should be [00:51:25] yes [00:51:29] exactly [00:51:31] :) [00:51:33] Sid-G: hi [00:51:34] you guys want storage, or squid? [00:51:38] I just broke it :o [00:51:39] sorry [00:51:39] petan: hey [00:51:44] np [00:51:44] should be up in a min [00:51:45] :) [00:51:48] Ryan_Lane: everything! [00:51:52] we're switching repo [00:51:52] hexmode: :D [00:52:51] petan: Heh, I just logged in now after pestering you way back on the first day. Did nothing in between... (Just saw the deployment schedule... oops...) [00:55:38] petan: heh, the trusted XFF bug again right now :P [00:55:55] Sid-G: works [00:56:02] or maybe not [00:56:21] yes it does [00:56:22] HTTP 500 Internal Error [00:56:27] try to refresh [00:56:32] it's squid [00:56:43] open another page [00:56:48] ok it does [00:56:50] work [00:57:02] * Sid-G goes to test some stuff [00:59:49] !log deployment-prep if anything is broken, it was me [00:59:50] Logged the message, Master [00:59:56] night [01:00:00] lol [01:02:04] hey, will the deployment site go down after the 1.19 upgrade? Or is it a permanent testing ground? [01:03:37] permanent [01:03:40] Think it will pretty much be running nearish to trunk constantly, the idea being changes are tested before prod [01:03:43] in some form or another [01:06:00] :D [01:06:29] I just got a testing ground for twinkle changes before putting them in production :D [01:22:33] awww. only 36TB of space per node formatted [01:22:52] lol [01:22:52] ~72TB will be in the cluster [01:23:02] o.O O.o O.O 36TB!!! [01:23:24] we are planning this per-datacenter too [01:23:28] Sid-G: we're hoping to use it in other departments [01:23:29] right now just in pmtpa [01:23:36] werdna: this is just for labs [01:23:39] Only 700 blurays [01:23:52] * Sid-G is wishing he had 36TBs ;) [01:23:53] everyone else can keep their grubby hands off ;) [01:24:05] Sid-G: well, you do, for wikimedia related activities [01:24:10] I hope you bought SSDS :P [01:24:12] lol [01:24:20] SSDS? that's way too expensive [01:24:38] this is four servers with 24 2TB disks [01:25:05] with each server having two raid 6 arrays of 12 disks [01:25:09] Going for a raid5 styleish for project storage? [01:25:15] uh, gadgets don't work at deployment? [01:25:35] and this is gluster storage, so then raid-1'd [01:25:38] ? [01:26:55] testing twinkle at deployment gave the error message: No valid token. I think that's the edit token the API gives? [01:27:13] nvm [01:27:22] works now [01:31:20] wow [01:31:35] I just saw someone's CV... they had 16 pages! Talk about boring people to death [01:31:42] hahahaha [01:31:49] a 3 page CV I can understand [01:32:05] I struggle to get under 3 without leaving out all detail.. but 16!? [01:36:52] I just got this twice while trying to undo an edit (not rollback): "AbuseFilterHooks::onArticleSaveComplete" ????? ???? ??? ????????? ?? ?????? ?? "1054: Unknown column 'afl_rev_id' in 'field list' (deployment-sql)" the ???s are Hindi which my IRC client won't accept.
The undo does happen, but all I get is this message (not the original page) [01:38:27] it's either a column typo [01:38:32] or deployment isn't up to date [01:39:22] yup, a typo [01:39:51] or not [01:41:17] deployment-sql is missing a patch [01:41:24] for some reason this col isn't in the default db schema [01:45:17] bad werdna [01:47:11] hmm? [01:47:17] what I do? [01:47:39] you'd added a column in patches, but not seemingly added it to the core AF table file [01:47:56] well, presumably you [01:47:57] oopsies :) [01:47:59] ;) [01:48:00] which patch [01:48:08] see my last commit [01:48:36] Uh, did the channel get the bug that I just tried to tell about twice? Or did my connection beat me yet again? [01:49:10] afl_rev_id? [01:49:15] yeah [01:49:23] i've fixed in svn [01:49:30] :) [01:49:33] need to get someone to run a db patch on all wikis [01:49:51] ok [01:52:30] !log deployment-prep installing ack (source code search tool) on dbdump [01:52:38] Logged the message, junior [01:53:36] lolol [01:54:18] Ryan_Lane: can you run a sql patch on all the wikis? [01:54:27] in labs? [01:54:35] ya [01:54:43] I don't even have access to that project ;) [01:54:49] lolol [01:54:55] also, you're talking to the wrong guy when it comes to mysql [01:55:07] on the cluster it's foreachwiki sql.php file.sql [01:55:10] much easy [01:55:19] does this not exist in beta? [01:55:32] nfi [01:55:35] i've not even tried to login [01:55:37] :D [01:56:02] I'll do it [01:56:07] i know it's supposed to be like production, but i'm guessing it's not really yet [01:56:13] !log deployment-prep running afl_rev_id patch on all wikis [01:56:14] Logged the message, junior [01:56:24] probably not, yeah [01:58:00] werdna@deployment-dbdump:/usr/local/apache/common$ for wiki in ` pretty easy [01:59:24] heh [01:59:39] taking a while though :/ [02:01:15] I've closed my fenari shell [02:01:22] that was the second time I nearly ran something on the wrong box :p [02:02:06] Ryan_Lane: boxes aren't allowed to access the outside internet by default, right? [02:02:25] no, the opposite [02:03:47] hmm [02:05:44] whys that? [02:06:22] just seeing why I can't from deployment-dbdump [02:06:25] maybe that has a custom policy [02:07:15] what can't you reach? [02:07:23] you *can't* reach anything inside of the production cluster [02:07:26] that's on purpose [02:07:38] BAH damn you sql syntax [02:09:06] Ryan_Lane: ahhh [02:09:07] svn [02:09:12] ah. right [02:09:15] Checking out svn+ssh://svn.wikimedia.org/svnroot/mediawiki/branches/wmf/1.19wmf1 to /usr/local/apache/common/php-1.19... [02:09:18] svn: Network connection closed unexpectedly [02:09:19] we *may* want to open that back up [02:09:26] is that recent? [02:09:29] or just use git :) [02:09:31] I swear I checked something out just today [02:09:36] maybe via http [02:09:37] mm, old script [02:09:40] ahh that'll be it [02:09:40] svn is blocked [02:09:45] well [02:09:46] ssh is [02:10:01] I'll change the script [02:13:21] svn is *really* slow, even between servers in the same DC [02:15:15] yep [02:18:06] like, 20-minutes-to-checkout [02:18:10] wtf [02:20:04] I wonder if ^demon has another Git repack job going [02:20:23] memory and cpu usage on formey are fine [02:34:22] !log deployment-prep Moving /usr/local/apache/common/live to /usr/local/apache/common/live-hom and symlinking live to live-hom [02:34:23] Logged the message, junior [02:39:54] RECOVERY Free ram is now: OK on bots-sql3 bots-sql3 output: OK: 20% free memory [02:40:12] Svn is just slow generally. [02:40:18] it's like cvs with more junk. 
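For the record, werdna's truncated for-loop above amounts to the same thing as the production "foreachwiki sql.php file.sql" wrapper mentioned at [01:55:07]: iterate over the wiki database list and feed the patch to maintenance/sql.php once per wiki. A sketch under stated assumptions; the dblist filename, the maintenance-script layout, and the patch path are placeholders, not from the log:

    # Apply one schema patch to every wiki on the deployment host.
    cd /usr/local/apache/common
    for wiki in $(cat all.dblist); do
        php maintenance/sql.php --wiki="$wiki" patch-afl_rev_id.sql
    done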
[02:55:44] RECOVERY Disk Space is now: OK on firstinstance firstinstance output: DISK OK [02:55:44] RECOVERY Current Users is now: OK on firstinstance firstinstance output: USERS OK - 0 users currently logged in [02:56:14] RECOVERY Free ram is now: OK on firstinstance firstinstance output: OK: 88% free memory [02:57:34] RECOVERY Total Processes is now: OK on firstinstance firstinstance output: PROCS OK: 80 processes [02:58:14] RECOVERY dpkg-check is now: OK on firstinstance firstinstance output: All packages OK [02:59:24] RECOVERY Current Load is now: OK on firstinstance firstinstance output: OK - load average: 0.03, 0.09, 0.04 [03:07:54] PROBLEM Free ram is now: WARNING on bots-sql3 bots-sql3 output: Warning: 18% free memory [06:37:05] http://upload.beta.wmflabs.org/wikipedia/commons/e/ea/LetsShar1950.webm returns mime type text/plain is this coming from squid or some apache static file or where is the mimetype set? [06:46:17] Naturally the Apache config [06:47:00] Most servers need some proper coaxing for .webm files. It's not in that many deployed configs yet. [06:48:50] added it to /etc/mime.types but did not reload apache, looks ok now [06:48:56] except for the squid cache [14:57:15] hi, i'm trying to test http://ja.wikipedia.beta.wmflabs.org, and i want MediaWiki: pages imported from ja.wikipedia.org. [14:57:27] is there any way to automatize it? [16:04:40] hi all [18:14:26] New patchset: Sara; "Test migration from libnss-ldap to libnss-ldapd." [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2632 [18:17:14] New review: Ryan Lane; "(no comment)" [operations/puppet] (test); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2632 [18:17:39] ssmollett: looks good. I just marked +2 and didn't submit [18:18:12] I think that's our normal operation, now. that way we don't submit something someone else wasn't necessarily ready to submit :) [18:19:02] that makes sense. [18:19:46] you probably want to create a new instance before submitting, though [18:20:03] otherwise if the manifests fail, your instance won't finish building, and you won't be able to troubleshoot it [18:20:38] ldap configuration sucks to deal with :D [18:20:54] i was planning on just using testing-ldap. [18:21:03] it's already been configured, though [18:21:18] I usually take this approach: http://ryandlane.com/blog/2011/11/02/a-process-for-puppetization-of-a-service-using-nova/ [18:21:25] yes, but i might as well make sure i can revert its config. [18:21:45] true [18:26:33] ssmollett fixed the nss bug? [18:26:35] \o/ [18:27:43] libnss-ldapd (as opposed to libnss-ldap) is bug free. so we just have to migrate everything. [18:27:50] we'll see if it causes a new string of weird bugs :D [18:27:59] I don't think it will. I think this'll fix it [18:28:10] yes, by "bug free", i meant just this bug. there are always bugs. [18:28:16] heh [18:28:30] at least setuid/setgid will work [18:28:54] well that's a start :)) [18:54:24] RECOVERY Current Users is now: OK on testing-ldap testing-ldap output: USERS OK - 3 users currently logged in [18:55:44] RECOVERY Free ram is now: OK on testing-ldap testing-ldap output: OK: 89% free memory [18:56:47] i just ran puppet (before submitting my change) on testing-ldap. [18:57:00] (hi ssmollett) [18:57:04] RECOVERY Total Processes is now: OK on testing-ldap testing-ldap output: PROCS OK: 95 processes [18:57:04] ah. so that it would switch the libraries back?
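On the WebM question from earlier in the morning: mod_mime takes its defaults from /etc/mime.types (the fix applied at [06:48:50]), or the type can be declared directly in the Apache config. A sketch with Debian/Ubuntu default paths; which config file to touch depends on the instance in question:

    # Register the WebM type system-wide...
    echo "video/webm    webm" | sudo tee -a /etc/mime.types
    # ...or per-server in the relevant Apache config instead:
    #   AddType video/webm .webm
    # mime.types is read at startup, so reload if the type doesn't take effect:
    sudo apache2ctl graceful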
[18:57:09] RECOVERY Disk Space is now: OK on testing-ldap testing-ldap output: DISK OK [18:57:34] RECOVERY dpkg-check is now: OK on testing-ldap testing-ldap output: All packages OK [18:57:34] i tested switching the libraries back manually, and then with puppet. [18:57:40] * Ryan_Lane nods [18:57:45] (hi sumanah) [18:58:03] * sumanah won't distract you, just wanted to wave [18:58:54] RECOVERY Current Load is now: OK on testing-ldap testing-ldap output: OK - load average: 0.01, 0.06, 0.03 [19:09:24] how long does it take for a merge after a change is submitted? [19:09:32] one minute [19:11:37] gerrit shows the status of my change as "Submitted, Merge Pending" for about 10 minutes. is there something else i need to do? [19:12:02] umm [19:12:19] I don't see that [19:12:24] https://gerrit.wikimedia.org/r/#change,2632 [19:12:46] what do you see? [19:12:59] ah. I see where you say that [19:13:08] oh [19:13:11] it depends on another change [19:13:20] you need to rebase... [19:13:30] it's usually best to make a new branch for every change you are doing [19:13:42] then it's less likely one change will end up relying on another unrelated change [19:13:52] it depends on this change: https://gerrit.wikimedia.org/r/#change,2157 [19:14:25] so, rebase, and submit an amended commit [19:14:38] i fail at branching. how do i actually rebase? [19:15:00] heh. I fail at rebasing :) [19:15:06] it's git rebase, though [19:15:28] that said, this should likely work: https://labsconsole.wikimedia.org/wiki/Git#Fixing_a_path_conflict [19:30:23] New patchset: Sara; "Test migration from libnss-ldap to libnss-ldapd." [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2632 [19:30:54] no more dependency! :D [19:31:19] New review: Ryan Lane; "(no comment)" [operations/puppet] (test); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2632 [19:32:35] one day i will understand git. that day will not be today. [19:33:32] Change merged: Sara; [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2632 [19:37:52] New patchset: Sara; "Fix puppet manifest for testing migration to libnss-ldapd." [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2640 [19:38:12] New review: gerrit2; "Lint check passed." [operations/puppet] (test); V: 1 - https://gerrit.wikimedia.org/r/2640 [19:38:43] puppet didn't like having two notifies. [19:38:48] ssmollett: git people assume that if you understand how git stores data, you automatically know everything about it [19:39:43] :) i was figuring i'd learn it as i go. which does seem to be the case, but may not be ideal. [19:40:02] andrewbogott_afk: http://openstackgd.wordpress.com/2012/02/17/dns-for-openstack/ [19:41:56] New review: Sara; "(no comment)" [operations/puppet] (test); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2640 [19:41:56] I hate when people throw code over the wall [19:41:56] Change merged: Sara; [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2640 [19:52:02] I kind of want to stab the grid dynamics people [19:52:35] they didn't bother to comment on the DNS blueprint, and they also didn't bother to tell anyone they were working on a DNS project [19:54:48] oh well. ours is upstreamed. [19:55:03] they can take the time to merge or have it replaced. [20:03:35] New patchset: Sara; "Migrating to libnss-ldapd: remove libnss-ldap configs/init script." [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2643 [20:03:55] New review: gerrit2; "Lint check passed." 
[operations/puppet] (test); V: 1 - https://gerrit.wikimedia.org/r/2640 [20:04:29] New review: Ryan Lane; "(no comment)" [operations/puppet] (test); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2643 [20:04:48] Change merged: Sara; [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2643 [20:09:34] * aude waves [20:10:38] howdy [20:11:42] how's the storage? [20:11:54] os is installed on them [20:12:20] nice :) [20:12:25] need to get gluster running, clustered, then write some method of managing the volunes [20:12:27] *volumes [20:12:37] Ryan_Lane: I see you already responded :) [20:12:44] on their blog, yeah [20:12:56] when they mentioned they were going to release this, I posted then too [20:13:01] they haven't responded [20:13:08] they don't seem to sit in the irc channel [20:13:20] maybe they don't understand how the open source world works :D [20:14:18] Ryan_Lane: sounds good [20:15:09] thankfully andrew has started some work on a nova-volume driver for this [20:15:23] I'll probably put my hackish solution in place till it's ready. heh [20:15:27] good [20:15:42] andrewbogott: which issues did you run into with nova-volume? [20:16:59] I was developing with devstack, so it may be that the issue that stumped me was a red herring. Basically I could not get nova to allocate volumes at all. [20:17:03] * andrewbogott looks for bug entry [20:18:33] https://bugs.launchpad.net/nova/+bug/927924 <- in short, the volume driver was never called. [20:19:02] Ryan_Lane: Do you have a puppety way to set up a diablo installation, or did you set up labs by hand? [20:19:12] I use puppet [20:19:19] in our repo, see openstack.pp [20:20:53] OK. Hm.... I wonder if the volume driver model is the same in diablo. [20:23:00] !test [20:23:00] $1 [20:23:00] for labs projects, i first need to be added to bastion in order to access others (when i join them)? [20:23:36] !git [20:23:36] for more information about git on labs see https://labsconsole.wikimedia.org/wiki/Git [20:23:52] Ryan_Lane: I propose that sometime soon I take on the job of making puppet packages to install essex. We need that anyway, and it'll be good exercise for me anyway. (Unless you have not already done this) [20:24:21] well, we should add an essex image, and see how badly puppet breaks :) [20:24:34] I can add the image for you today, if you'd like to work on it soon [20:25:55] I don't think I know what you mean by 'add an essex image'. 'image' in the glance sense, or in some other sense? [20:26:04] in the glance sense [20:26:20] cause it's mostly just needing to make some tweaks to our existing puppet config [20:28:08] Ryan_Lane: as far as i can tell, the libnss-ldapd puppet configs work now. i verified a manual revert and then a puppet run on testing-ldap, and also confirmed that login/sudo still works after reboot. that's all for me today. you're welcome to do any further testing and/or update the manifest to push the change to the rest of labs. if you don't, i'll get to it sometime next week. [20:28:32] ...I must not understand what you use images for. Wouldn't an essex instance use the same image as diablo? I mean, puppet doesn't get in the game until an instance is created, does it? [20:28:53] ssmollett: great! thanks [20:29:09] andrewbogott: oh. wait. I'm totally confused [20:29:10] sorry :) [20:29:19] I was thinking ubuntu precise [20:29:24] Oh, good, I knew one of us was. [20:29:33] :p [20:30:36] Anyway... I will add puppet-essex to my list but not start on that for a bit.
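Earlier in the evening, ssmollett's change sat in Gerrit as "Submitted, Merge Pending" because it depended on an unrelated open change; the fix, as Ryan suggested at [19:14:25], was to rebase and resubmit. A generic sketch of that dance against the repo's test branch; the remote and branch names are assumptions, and it is the Change-Id footer in the commit message that keeps the resubmission attached to the same review:

    # Rebase the pending commit onto the current tip of its target branch,
    # then push it back to Gerrit's refs/for/ namespace as a new patchset.
    git fetch origin
    git rebase origin/test      # fix any conflicts, then: git rebase --continue
    git push origin HEAD:refs/for/test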
[20:30:47] And will move glusterization back to the head of the line. [20:33:47] PROBLEM Current Load is now: CRITICAL on diablo-n-gluster diablo-n-gluster output: CHECK_NRPE: Error - Could not complete SSL handshake. [20:34:27] PROBLEM Current Users is now: CRITICAL on diablo-n-gluster diablo-n-gluster output: CHECK_NRPE: Error - Could not complete SSL handshake. [20:35:07] PROBLEM Disk Space is now: CRITICAL on diablo-n-gluster diablo-n-gluster output: CHECK_NRPE: Error - Could not complete SSL handshake. [20:35:47] PROBLEM Free ram is now: CRITICAL on diablo-n-gluster diablo-n-gluster output: CHECK_NRPE: Error - Could not complete SSL handshake. [20:36:57] PROBLEM Total Processes is now: CRITICAL on diablo-n-gluster diablo-n-gluster output: CHECK_NRPE: Error - Could not complete SSL handshake. [20:37:37] PROBLEM dpkg-check is now: CRITICAL on diablo-n-gluster diablo-n-gluster output: CHECK_NRPE: Error - Could not complete SSL handshake. [20:45:52] Ryan_Lane: you rock!! [20:47:31] I've got git to work [20:48:20] great :) [20:50:07] I believe that mwdumper now builds a jar that includes its dependencies [20:50:58] * OrenOf wonders how to test that [20:54:42] Ryan_Lane: Tell me again how to force puppetd to refresh images from the master? [20:55:24] ("Just wait 30 minutes") [20:55:31] puppetd -tv [20:55:35] as root [20:56:20] thanks! [20:56:26] * andrewbogott writes that down, this time [20:58:45] RECOVERY Current Load is now: OK on diablo-n-gluster diablo-n-gluster output: OK - load average: 0.62, 0.35, 0.22 [20:59:25] RECOVERY Current Users is now: OK on diablo-n-gluster diablo-n-gluster output: USERS OK - 1 users currently logged in [21:00:45] RECOVERY Free ram is now: OK on diablo-n-gluster diablo-n-gluster output: OK: 84% free memory [21:01:25] RECOVERY Disk Space is now: OK on diablo-n-gluster diablo-n-gluster output: DISK OK [21:02:05] RECOVERY Total Processes is now: OK on diablo-n-gluster diablo-n-gluster output: PROCS OK: 90 processes [21:02:35] RECOVERY dpkg-check is now: OK on diablo-n-gluster diablo-n-gluster output: All packages OK [21:15:20] Ryan_Lane: OK, I now have nova installed on a labs box via puppet. Is there a ready-made rc file I can source to get my vars and passwords and such set up? [21:20:58] gimme a little bit. in a meeting with legal [21:34:54] !accountreq [21:34:55] in case you want to have an account on labs, please contact someone who is in charge of doing that: Ryan.Lane, m.utante or ssmolle.tt [21:35:52] Ryan_Lane, I wish to run a tool for simplewiki and a cvn bot on wikimedia labs [21:36:04] *simplewikt [21:37:15] andrewbogott: are you 'andrew' on svn [21:37:30] werdna: yep. [21:37:36] yay! I'm 'andrew' on the cluster :P [21:37:40] It was still available, strangely. [21:37:41] Oh, dang. [21:37:42] lol [21:37:55] heh [21:38:13] I can switch both to 'werdna', which is my ldap username :p [21:38:13] I'm not attached to my username, and haven't made any commits. [21:38:22] Well, that's easier for me :) [21:38:55] <^demon> werdna: I remember the day we were all sitting around the dinner table and I suddenly learned what yours and Liam's names meant. 
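Ryan's "puppetd -tv" answer above is worth unpacking: it forces a one-off foreground agent run against the puppetmaster instead of waiting out the roughly 30-minute schedule. The short flags expand as follows on the puppet 2.x of the era (newer versions spell the command "puppet agent"):

    # -t = --test (onetime, foreground, detailed exit codes), -v = --verbose
    sudo puppetd --test --verbose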
[21:39:19] well, I need to change my ssh config file either way I think [21:39:22] it doesn't matter to me [21:39:32] whatever's easiest for ^demon and Ryan_Lane / whoever manages cluster accounts [21:39:39] I think it'll be easier for me to rename your production account [21:39:48] <^demon> cluster accounts are done in puppet, which you can do :) [21:40:10] !account-questions | Jews [21:40:10] Jews: I need the following info from you: 1. Your preferred wiki user name. This will also be your git username, so if you'd prefer this to be your real name, then provide your real name. 2. Your SVN account name, or your preferred shell account name, if you do not have SVN access. 3. Your preferred email address. [21:40:17] well, kind og [21:40:19] *of [21:40:30] renames aren't handled well via puppet, I think [21:40:30] Ryan_Lane, PM [21:40:34] ok [21:40:53] <^demon> Ryan_Lane: Well presumably we wouldn't rename, we'd just make a new account and disable the old one? [21:41:17] could, but he probably has stuff in his homedir [21:41:27] <^demon> *nod* [21:50:08] cp :)\ [21:50:48] of course things get very confusing when andrewbogott gets a cluster account [21:50:55] because old references to the account name get confusing. [21:58:55] PROBLEM host: diablo-n-gluster is DOWN address: diablo-n-gluster CRITICAL - Host Unreachable (diablo-n-gluster) [22:01:03] <^demon> You know, I just thought of another awesome benefit to moving to git. We can get rid of that stupid codereview-proxy running on kaulen :p [22:01:35] RECOVERY host: diablo-n-gluster is UP address: diablo-n-gluster PING OK - Packet loss = 0%, RTA = 0.74 ms [22:01:52] and just have a cron to periodically update [22:02:33] <^demon> Huh? [22:03:19] My comment about "<^demon> You know, I just thought of another awesome benefit to moving to git. We can get rid of that stupid codereview-proxy running on kaulen :p" [22:03:29] meh, my 2c [22:04:07] <^demon> I don't understand what the cron is for. [22:06:33] !initial-login Jews [22:06:33] https://labsconsole.wikimedia.org/wiki/Access#Initial_log_in [22:06:48] !initial-login | Jews [22:06:48] Jews: https://labsconsole.wikimedia.org/wiki/Access#Initial_log_in [22:06:49] heh [22:15:08] now to make sure I don't crash all of labs with my gluster changes in puppet :D [22:15:52] !log gluster created four instances - 2 instance storage and 2 volume storage to test puppet changes [22:15:57] Logged the message, Master [22:17:01] it's getting close to the point of needing another compute node [22:17:21] Thanks Ryan_Lane [22:17:25] yw [22:19:12] 02/17/2012 - 22:19:11 - Creating a home directory for tuxed at /export/home/bastion/tuxed [22:19:42] Jews: for bots, talk to members of the bots project for access [22:19:44] !project bots [22:19:44] https://labsconsole.wikimedia.org/wiki/Nova_Resource:bots [22:20:12] 02/17/2012 - 22:20:11 - Updating keys for tuxed [22:21:20] bots bots bots [22:21:57] Always bots [22:24:02] PROBLEM Current Load is now: CRITICAL on production-instance1 production-instance1 output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:24:02] PROBLEM Disk Space is now: CRITICAL on production-volume1 production-volume1 output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:24:02] PROBLEM Current Users is now: CRITICAL on production-instance2 production-instance2 output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:24:02] PROBLEM Free ram is now: CRITICAL on production-volume2 production-volume2 output: CHECK_NRPE: Error - Could not complete SSL handshake. 
[22:24:37] PROBLEM Disk Space is now: CRITICAL on production-instance2 production-instance2 output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:24:37] PROBLEM Current Users is now: CRITICAL on production-instance1 production-instance1 output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:24:37] PROBLEM Free ram is now: CRITICAL on production-volume1 production-volume1 output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:24:37] PROBLEM Total Processes is now: CRITICAL on testing-ldap-build testing-ldap-build output: Connection refused by host [22:25:02] Ryan_Lane, irony: you're listed for bots [22:25:08] :D [22:25:17] PROBLEM Disk Space is now: CRITICAL on production-instance1 production-instance1 output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:25:17] PROBLEM dpkg-check is now: CRITICAL on testing-ldap-build testing-ldap-build output: Connection refused by host [22:25:17] PROBLEM Free ram is now: CRITICAL on production-instance2 production-instance2 output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:25:17] PROBLEM Total Processes is now: CRITICAL on production-volume2 production-volume2 output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:25:24] only because I occasionally log in to restart of one my bots [22:25:38] petan is likely the best to talk to, or Damianz [22:25:57] PROBLEM Free ram is now: CRITICAL on production-instance1 production-instance1 output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:25:57] PROBLEM Total Processes is now: CRITICAL on production-volume1 production-volume1 output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:26:02] PROBLEM dpkg-check is now: CRITICAL on production-volume2 production-volume2 output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:26:27] PROBLEM Total Processes is now: CRITICAL on production-instance2 production-instance2 output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:26:32] PROBLEM dpkg-check is now: CRITICAL on production-volume1 production-volume1 output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:26:35] kk [22:26:37] PROBLEM Current Load is now: CRITICAL on testing-ldap-build testing-ldap-build output: Connection refused by host [22:27:07] PROBLEM Current Load is now: CRITICAL on production-volume2 production-volume2 output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:27:07] PROBLEM dpkg-check is now: CRITICAL on production-instance2 production-instance2 output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:27:07] PROBLEM Total Processes is now: CRITICAL on production-instance1 production-instance1 output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:27:12] PROBLEM Current Users is now: CRITICAL on testing-ldap-build testing-ldap-build output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:27:17] Will talk to on-wiki [22:27:47] PROBLEM Current Users is now: CRITICAL on production-volume2 production-volume2 output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:27:47] PROBLEM Disk Space is now: CRITICAL on testing-ldap-build testing-ldap-build output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:27:47] PROBLEM dpkg-check is now: CRITICAL on production-instance1 production-instance1 output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:27:47] PROBLEM Current Load is now: CRITICAL on production-volume1 production-volume1 output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:27:54] Not it. 
[22:28:27] PROBLEM Disk Space is now: CRITICAL on production-volume2 production-volume2 output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:28:27] PROBLEM Current Load is now: CRITICAL on production-instance2 production-instance2 output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:28:27] PROBLEM Current Users is now: CRITICAL on production-volume1 production-volume1 output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:28:27] PROBLEM Free ram is now: CRITICAL on testing-ldap-build testing-ldap-build output: CHECK_NRPE: Error - Could not complete SSL handshake. [22:30:37] labs-nagios-wm: seriously? [22:30:50] labs-home-wm: shut it already ;) [22:31:57] Ryan_Lane, and for my simplewikt tool? [22:32:10] ummm [22:32:11] (It's not a bot, for your information.) [22:32:15] * Ryan_Lane nods [22:32:20] lemme make you a project [22:32:46] so, there's no easy-to-use mediawiki install [22:32:52] Is that nagios freakout related to me not being able to reach diablo-n-gluster anymore? [22:32:53] in fact, everything right now is very bare-bones [22:33:00] Should I keep my bot in my own project or use bots? [22:33:05] I just created a simplewiki project for you [22:33:08] k [22:33:10] use bots [22:33:12] for the bot [22:33:14] k [22:33:19] andrewbogott: nope [22:33:23] 02/17/2012 - 22:33:23 - Creating a project directory for simplewiki [22:33:23] 02/17/2012 - 22:33:23 - Creating a home directory for tuxed at /export/home/simplewiki/tuxed [22:33:25] I made some instances for testing puppet changes [22:33:26] dang [22:33:33] Okay [22:33:38] while they were building nagios complained [22:34:22] 02/17/2012 - 22:34:22 - Updating keys for tuxed [22:35:11] !log simplewiki created simplewikt feedback instance [22:35:11] simplewiki is not a valid project. [22:35:23] Hmm... [22:35:25] labs-morebots: what's that you say? [22:35:34] probably cached. [22:35:42] it takes a little bit for the cache to update [22:35:45] I see [22:36:40] instance creations are in the recent-changes [22:37:47] k [22:38:39] !log simplewiki test [22:38:40] simplewiki is not a valid project. [22:38:48] -_- [22:39:05] RECOVERY Current Users is now: OK on production-instance2 production-instance2 output: USERS OK - 1 users currently logged in [22:39:35] RECOVERY Disk Space is now: OK on production-instance2 production-instance2 output: DISK OK [22:39:42] Hmm... [22:39:43] Failed to fetch http://ubuntu.wikimedia.org/ubuntu/pool/main/r/ruby1.8/ruby1.8_1.8.7.249-2_amd64.deb Hash Sum mismatch [22:39:43] E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing? [22:39:54] what size instance did you try to create? [22:39:57] go with a small, for now [22:40:06] I think medium has a problem [22:40:15] RECOVERY Disk Space is now: OK on production-instance1 production-instance1 output: DISK OK [22:40:15] RECOVERY Free ram is now: OK on production-instance2 production-instance2 output: OK: 86% free memory [22:40:18] It's small [22:40:38] delete it and try again, then [22:40:49] it's occasionally buggy :( [22:40:55] RECOVERY Free ram is now: OK on production-instance1 production-instance1 output: OK: 79% free memory [22:41:08] "You can not complete the action requested as your user account is not in the project requested. " [22:41:24] o.O [22:41:31] this is for simplewiki? [22:41:35] RECOVERY Total Processes is now: OK on production-instance2 production-instance2 output: PROCS OK: 95 processes [22:41:42] Yes [22:41:49] log out of the wiki and log back in. 
that makes no sense [22:41:53] k [22:42:05] RECOVERY Total Processes is now: OK on production-instance1 production-instance1 output: PROCS OK: 87 processes [22:42:10] RECOVERY dpkg-check is now: OK on production-instance2 production-instance2 output: All packages OK [22:42:17] Same [22:42:36] hm [22:42:45] RECOVERY dpkg-check is now: OK on production-instance1 production-instance1 output: All packages OK [22:42:49] when trying to delete the instance? [22:43:25] RECOVERY Current Load is now: OK on production-instance2 production-instance2 output: OK - load average: 0.03, 0.10, 0.17 [22:43:26] yes [22:43:55] RECOVERY Current Load is now: OK on production-instance1 production-instance1 output: OK - load average: 0.07, 0.22, 0.24 [22:44:10] can you look at the console log? [22:44:20] 02/17/2012 - 22:44:20 - Creating a home directory for laner at /export/home/simplewiki/laner [22:44:25] PROBLEM Current Load is now: CRITICAL on feedbacksimplewikt feedbacksimplewikt output: Connection refused by host [22:44:40] RECOVERY Current Users is now: OK on production-instance1 production-instance1 output: USERS OK - 1 users currently logged in [22:45:05] PROBLEM Current Users is now: CRITICAL on feedbacksimplewikt feedbacksimplewikt output: Connection refused by host [22:45:21] 02/17/2012 - 22:45:21 - Updating keys for laner [22:45:45] PROBLEM Disk Space is now: CRITICAL on feedbacksimplewikt feedbacksimplewikt output: Connection refused by host [22:46:15] PROBLEM Free ram is now: CRITICAL on feedbacksimplewikt feedbacksimplewikt output: Connection refused by host [22:46:43] Of 146? [22:47:32] I'm sorry? [22:47:35] PROBLEM Total Processes is now: CRITICAL on feedbacksimplewikt feedbacksimplewikt output: Connection refused by host [22:47:37] what do you mean? [22:47:41] Of instance number 146? [22:47:46] yes [22:48:00] Last 2 lines are Failed to fetch http://ubuntu.wikimedia.org/ubuntu/pool/main/r/ruby1.8/ruby1.8_1.8.7.249-2_amd64.deb Hash Sum mismatch [22:48:00] E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing? [22:48:14] ok. it's weird you can see the log, but can't delete the instance [22:48:15] PROBLEM dpkg-check is now: CRITICAL on feedbacksimplewikt feedbacksimplewikt output: Connection refused by host [22:48:56] I was able to delete it [22:49:01] so, recreate it now [22:49:14] k [22:49:54] Created instance i-00000147 with image ami-0000001d and hostname i-00000147.pmtpa.wmflabs. [22:49:58] Hope it works. [22:50:40] Ryan_Lane: Are you distractable? [22:50:50] sure [22:51:40] On diablo-n-gluster... it looks like nova is running, but I can't get any of the euca tools to work. [22:51:46] Looks like it's on a roll now [22:51:51] You'd expect them to work wouldn't you? [22:51:54] Jews: great [22:52:10] andrewbogott: yeah [22:52:18] andrewbogott: what's not working? [22:52:24] did you export the credentials? [22:52:25] And is nova via puppet listening on the standard port or a customized one? [22:52:32] using nova-manage project zip [22:52:43] api is on standard port [22:52:56] configuration is set to use ldap, when installed via puppet [22:53:04] Ryan_Lane, okay, now for apache stuff [22:53:10] Hm... actually, I was thinking my issue was with the port but maybe I'm getting past that. [22:53:31] Jews: lemme know if you have issues sshing to the instance. I may have to clear dns cache [22:54:05] Tell me more about credentialling. It doesn't like 'nova-manage project zip andrew' [22:54:14] Is that because 'project' should be an actual project? 
[22:54:31] yeah, a real project [22:54:43] you have to create a project before you can use euca tools [22:55:16] RECOVERY Total Processes is now: OK on production-volume2 production-volume2 output: PROCS OK: 79 processes [22:55:30] that's reasonable [22:55:56] RECOVERY dpkg-check is now: OK on production-volume2 production-volume2 output: All packages OK [22:56:37] Yay! [22:57:06] RECOVERY Current Load is now: OK on production-volume2 production-volume2 output: OK - load average: 0.02, 0.05, 0.09 [22:57:28] Oh, I don't have ldap :( [22:57:46] RECOVERY Current Users is now: OK on production-volume2 production-volume2 output: USERS OK - 0 users currently logged in [22:57:50] heh [22:58:26] RECOVERY Disk Space is now: OK on production-volume2 production-volume2 output: DISK OK [22:58:27] Do I want openstack:ldap-server? [22:58:56] yep [22:59:06] RECOVERY Free ram is now: OK on production-volume2 production-volume2 output: OK: 83% free memory [22:59:39] Ryan_Lane, now to figure out how to get apache installed and running :P [22:59:45] And expose the service [23:02:18] star.wmflabs? [23:02:22] for certificate? [23:02:31] andrewbogott: yep [23:02:53] New patchset: Ryan Lane; "Add cluster info for gluster clusters" [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2651 [23:02:57] !log simplewiki created feedback instance and adding apache+php5 [23:02:59] Logged the message, Master [23:05:04] PROBLEM host: feedbacksimplewikt is DOWN address: feedbacksimplewikt CRITICAL - Host Unreachable (feedbacksimplewikt) [23:06:03] New patchset: Ryan Lane; "Add cluster info for gluster clusters" [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2651 [23:06:12] Ryan_Lane, I need to configure Apache and expose it, how do I do that [23:06:26] via apt, or through puppet configuration [23:06:39] And to expose it? [23:06:42] this is a bare bones virtual machine. it isn't set up for tool usage [23:06:55] when you are ready to demo something, I can give you a public IP address [23:06:55] I needed one for tool usage [23:07:01] Ok [23:07:08] labs isn't really ready for tool usage yet [23:07:15] It's simple to set up anyway [23:07:31] if you wish to do everything from scratch, it's fine, but there's no pre-made configuration yet [23:07:55] you can use a socks proxy to access your tool till you are ready for it to be public [23:08:02] !socks-proxy [23:08:02] ssh @bastion.wmflabs.org -D ; # [23:08:03] Okay [23:08:43] New review: Ryan Lane; "(no comment)" [operations/puppet] (test); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2651 [23:08:43] Change merged: Ryan Lane; [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2651 [23:12:07] New patchset: Ryan Lane; "Revert "Add cluster info for gluster clusters"" [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2653 [23:13:40] arrrggh, I hate puppet! [23:16:57] "err: Could not retrieve catalog from remote server: Error 400 on SERVER: Complex search on StoreConfigs resources is not supported on node i-00000140.pmtpa.wmflabs" <- ? [23:17:33] sorry. I probably just broke what you were doing [23:17:39] I thought I just reverted that, though [23:17:45] I'll try again [23:17:58] nope, same.
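The project requirement Ryan states above is the diablo-era credential dance, and it also answers andrewbogott's earlier question about a ready-made rc file to source. A sketch with placeholder project/user names and paths; the exact nova-manage subcommand spelling (the log says "project zip", some releases spelled it "zipfile") varied, so treat this as an outline rather than exact syntax:

    # Create a project with an admin user, export its credentials, and source
    # them so the euca-* tools know the endpoint and keys.
    sudo nova-manage project create testproj andrew
    sudo nova-manage project zipfile testproj andrew /tmp/creds.zip
    unzip /tmp/creds.zip -d ~/creds
    source ~/creds/novarc
    euca-describe-availability-zones verbose    # quick sanity check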
[23:19:54] hm [23:21:12] New patchset: Ryan Lane; "Revert "Add cluster info for gluster clusters"" [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2655 [23:21:27] it sure as hell isn't reverting it [23:21:30] that's annoying [23:23:42] New patchset: Ryan Lane; "Revert change 2651" [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2656 [23:24:14] New review: Ryan Lane; "(no comment)" [operations/puppet] (test); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2656 [23:24:14] Change merged: Ryan Lane; [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2656 [23:24:50] andrewbogott: it'll be fixed in like 20 secs :) [23:26:25] well, puppet totally fails at what I'm trying to do with gluster, so goodbye exported resources, I'll just manage cluster members by hand [23:26:33] fucking hate puppet sometimes [23:28:34] New patchset: Ryan Lane; "Remove gluster peering support" [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2657 [23:29:01] New review: Ryan Lane; "(no comment)" [operations/puppet] (test); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2657 [23:29:01] Change merged: Ryan Lane; [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2657 [23:33:45] Now I'm on to 'Could not find dependency File[/etc/ssl/private/star.wmflabs.key]' [23:34:11] o.O [23:34:12] really? [23:34:36] oh [23:34:43] maybe you are including the wrong cert [23:34:45] just include both [23:35:26] PROBLEM host: feedbacksimplewikt is DOWN address: feedbacksimplewikt CRITICAL - Host Unreachable (feedbacksimplewikt) [23:41:14] RECOVERY host: feedbacksimplewikt is UP address: feedbacksimplewikt PING OK - Packet loss = 0%, RTA = 0.65 ms [23:41:34] PROBLEM Current Load is now: CRITICAL on feedbacksimplewikt feedbacksimplewikt output: CHECK_NRPE: Error - Could not complete SSL handshake. [23:42:14] PROBLEM Current Users is now: CRITICAL on feedbacksimplewikt feedbacksimplewikt output: CHECK_NRPE: Error - Could not complete SSL handshake. [23:42:54] PROBLEM Disk Space is now: CRITICAL on feedbacksimplewikt feedbacksimplewikt output: CHECK_NRPE: Error - Could not complete SSL handshake. [23:43:24] PROBLEM Free ram is now: CRITICAL on feedbacksimplewikt feedbacksimplewikt output: CHECK_NRPE: Error - Could not complete SSL handshake. [23:44:44] PROBLEM Total Processes is now: CRITICAL on feedbacksimplewikt feedbacksimplewikt output: CHECK_NRPE: Error - Could not complete SSL handshake. [23:45:34] PROBLEM dpkg-check is now: CRITICAL on diablo-n-gluster diablo-n-gluster output: DPKG CRITICAL dpkg reports broken packages [23:48:31] Change abandoned: Ryan Lane; "I was confused :)" [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2653 [23:48:43] Change abandoned: Ryan Lane; "again..." [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2655 [23:50:35] PROBLEM dpkg-check is now: CRITICAL on feedbacksimplewikt feedbacksimplewikt output: CHECK_NRPE: Error - Could not complete SSL handshake. [23:52:14] Well, it looks ready [23:52:26] (my simplewikt feedback thing) [23:55:12] you'll need to modify your default security group and add port 80 to it [23:55:36] really you should likely have made a web security group before you created your instance [23:55:44] docs mention this, not sure if you read them :) [23:56:05] can't change security groups after an instance has been made [23:57:07] is /home shared? 
[23:57:24] between instances in the same project, yes [23:57:41] otherwise, no [23:57:48] great [23:57:59] but you shouldn't use /home for data [23:58:01] only environment [23:58:08] each instance has a filesystem mounted on /mnt [23:58:34] I'm not using it for data [23:58:40] * Ryan_Lane nods [23:58:44] and what is mounted on /mnt? [23:58:50] a filesystem [23:58:57] with more storage [23:59:00] k
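A closing note on the security-group advice at [23:55:12]: an instance's group membership is fixed at creation, but rules on an existing group can still be changed, which is why opening port 80 on the default group works for a running instance. A sketch with the euca2ools of the period; the 'web' group name is a placeholder:

    # Open HTTP on the default group for an already-running instance...
    euca-authorize -P tcp -p 80 -s 0.0.0.0/0 default
    # ...or set up a dedicated group to assign to future web instances:
    euca-add-group -d "web servers" web
    euca-authorize -P tcp -p 80 -s 0.0.0.0/0 web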