[03:10:52] * jeremyb guesses OrenBochman is asleep...
[06:12:42] +ayan
[06:12:57] lalalalalalalala'
[06:13:03] lalalalalalal
[06:13:10] xtreme typist.
[06:13:34] hip hop the floor
[06:13:41] on the floor
[06:13:47] dance the night away
[06:54:52] PROBLEM Disk Space is now: WARNING on nova-production1 nova-production1 output: DISK WARNING - free space: / 521 MB (5% inode=86%):
[09:24:52] PROBLEM Disk Space is now: CRITICAL on nova-production1 nova-production1 output: DISK CRITICAL - free space: / 285 MB (2% inode=86%):
[15:00:21] petan: are you available for help with labs
[15:05:55] anyone knows labs ?
[15:20:36] Kinda
[15:20:45] But if it's the same issue I don't know windows
[20:08:57] so, oren's not here and last i heard from him (above) was how do I get a wiki installation
[20:09:19] i think the windows issues may be real but in several cases have been a red herring
[20:10:04] e.g. he needed to find the right page on the wiki to use to create an instance. same page and same process there regardless of your OS or browser
[20:10:24] and now, installing a wiki is also not related to what OS you're ssh'ing from
[20:12:27] I told him a bunch of times that he needed to create an instance
[20:12:39] I think he assumes everything is already set up for him
[20:12:47] though I told him that he'd have to do everything himself
[20:13:15] Ryan_Lane: well i told him ~4x before it sunk in from me
[20:13:28] (about creating an instance)
[20:14:01] yeah
[20:14:13] i probably once knew but now i can't remember what his background is :(
[20:14:24] he doesn't know how to create an instance
[20:14:31] but there's documentation....
[20:14:37] people don't seem to like to read docs
[20:14:38] he does now (I hope) because he did it
[20:14:44] oh. did he?
[20:14:56] he said he could ssh to it
[20:15:23] at first it was just the console he was getting but he couldn't ssh? and then he could. i think
[20:15:33] and certainly nagios was talking about it
[20:16:15] well, it takes a bit for the instance to finish building
[20:16:17] is there a nova endpoint ppl can use directly or just via mediawiki?
[20:16:23] just mediawiki right now
[20:16:32] we're working on making the endpoint available as well
[20:16:45] i was just thinking e.g. if i wanted console on a shell instead of over the web
[20:16:46] labsconsole does some stuff nova doesn't support yet
[20:16:58] eh?
[20:17:00] what do you mean?
[20:17:25] there's no console access right now on instances
[20:17:30] there really isn't much reason to have consoles
[20:17:34] there's console history
[20:17:37] no one has the root password on the instances
[20:17:40] ooooohhhhh
[20:17:44] you mean the console log
[20:18:16] well, that's possible, but it would also give you access to create instances via the command line tools too, and that would be problematic
[20:18:34] i've gotten that via shell from amazon's ec2. i think in fact that's the only way to do it? or i guess everything's an API client and an API client can have any human interface you want
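On the endpoint question above: labsconsole (MediaWiki) is the only supported way in today, but if the EC2-compatible nova endpoint were opened up, driving it from a shell would look roughly like this with euca2ools. The URL and credentials are placeholders, not real labs values.

    # hypothetical direct use of the EC2-compatible nova API (not available yet)
    export EC2_URL=https://nova.pmtpa.wmflabs:8773/services/Cloud   # placeholder endpoint
    export EC2_ACCESS_KEY=<access-key>    # would come from labsconsole
    export EC2_SECRET_KEY=<secret-key>
    euca-describe-instances               # list instances in your project
    euca-get-console-output i-0000002d    # the "console history" mentioned above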
[20:19:02] why problematic? because no associated wiki page would be created?
[20:19:06] yeah, mediawiki is using the ec2 endpoint on behalf of you
[20:19:20] no DNS entry, no puppet configuration, no mediawiki page
[20:19:34] oh
[20:19:44] those are the three things nova doesn't support yet
[20:19:50] we are working on the first right now
[20:19:54] puppet stuff is next
[20:19:58] then mediawiki pages
[20:20:11] mediawiki pages are likely doable without nova directly supporting it
[20:20:33] I could run a cron that would check to see if an instance existed that doesn't have a page, and create the page
[20:20:57] but lack of puppet and dns are problematic
[20:21:54] anyway, notpeter is talking about (or already started?) some search puppetizing work and i'm also interested in search (both puppet side and also the stuff oren's doing)
[20:22:10] yeah. I told Oren to work with peter
[20:22:24] well they've also chatted directly with each other
[20:22:45] wow. bots has a ton of instances :D
[20:22:51] hehe
[20:22:59] that's my other question
[20:23:06] that sure sprouted up quick
[20:24:37] Ryan_Lane: i set up http://wiki.debian.org/MeetBot on the wikimediadc server for a meeting but then i was thinking it would be good to have in other places (#-office, #-dev, #-au, etc.) what do you think about putting that on labs?
[20:24:56] it would be fine
[20:25:00] work with petan
[20:25:16] he and hyperon have been doing the bots stuff, mostly
[20:25:29] can they add/remove ppl or only you?
[20:25:36] they can too
[20:25:46] oh, cool (mild surprise)
[20:26:13] yeah, I want projects to be fairly self-sufficient
[20:26:23] it sucks to have to wait on ops for that kind of stuff
[20:26:44] yeah, i just didn't know how far things had progressed
[20:26:59] that's always been a feature :)
[20:27:07] hehe
[20:27:09] it was in the original design
[20:27:35] the idea is that people need ops very, very little for most things
[20:27:49] we control access to labs via bastion
[20:27:54] so, for bots i'll talk to petan (i have to anyway, to learn how things are done there). for search, who do i talk to?
[20:28:15] oren and notpeter
[20:28:25] okey
[20:28:37] remember with the bot, that labs is still closed beta
[20:28:54] we have occasional hiccups that may make your bot die
[20:29:07] virt4? :)
[20:29:14] * jeremyb is still trying to wrap his head around `ssh labs-mw1.pmtpa.wmflabs`
[20:29:28] I'm going to change that documentation right now
[20:29:38] is that supposed to be executed *on* bastion?
[20:30:06] otherwise i don't get it. it's split view DNS right?
[20:30:22] at any rate wmflabs ain't on the root servers :)
[20:30:23] not really
[20:30:51] if you make the request directly to the server, it'll return
[20:31:10] but wmflabs isn't a proper TLD, so of course it isn't going to return ;)
[20:31:45] dig @virt1.wikimedia.org bastion1.pmtpa.wmflabs
[20:32:16] Btw why do instances need a puppet page? Do you mean the page where you manage the instance (as I don't have any nova/reboot/voodoo access)?
[20:32:19] we don't have zone transfers, but that's because the way we have DNS setup on the box it isn't possible
[20:32:19] heh
[20:32:41] Damianz: instances need puppet configuration to build properly
[20:32:52] Err I mean mediawiki not puppet
[20:32:57] oh
[20:33:00] * Damianz finds coffeee
[20:33:03] they don't actually *need* that
[20:33:06] but....
[20:33:16] it's how I track things properly
[20:33:33] for instance: https://labsconsole.wikimedia.org/wiki/Main_Page
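A rough sketch of the cron idea Ryan mentions above (create a page for any instance that lacks one). The instance-listing command, the Nova_Resource title scheme, and the skipped edit step are assumptions; a real job would also need a bot account and an edit token.

    #!/bin/sh
    # sketch only: report instances that have no wiki page yet
    API=https://labsconsole.wikimedia.org/w/api.php
    for host in $(euca-describe-instances | awk '/^INSTANCE/ {print $4}'); do
        title="Nova_Resource:${host}"
        if curl -s "${API}?action=query&titles=${title}&format=json" | grep -q '"missing"'; then
            echo "no page yet: ${title}"   # a real version would call action=edit here
        fi
    done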
[20:33:34] then there will be a redlink in semantic results? or no row at all?
[20:33:42] the box on the main page tells me how much storage is allocated
[20:33:52] no row at all
[20:33:59] Ah
[20:34:08] the SMW results are important for us
[20:34:11] I thought that was just polled on a schedule from the api
[20:34:14] it's the only decent statistics we have
[20:34:21] nope. it's done on creation
[20:34:24] and modification
[20:34:35] That's a chunk of ram :D
[20:34:38] that's why we want to move that functionality to nova
[20:34:46] yeah, RAM is a limiting factor
[20:34:50] so is storage
[20:34:58] CPU I'm not terribly concerned with
[20:35:20] we have more storage allocated than is available, already
[20:35:28] I find disk speed to be quite limiting with ram for vms, for local storage anyway. Not sure on the io throughput of gluster these days, I imagine quick as it can probably talk to multiple nodes.
[20:35:31] RAM is getting there
[20:35:46] every compute node is a gluster node
[20:35:59] and the compute nodes have raid10 for their storage
[20:36:03] Ryan_Lane: http://www.statusq.org/archives/2008/07/03/1916/
[20:36:04] it should be relatively quick
[20:36:05] So you're replicating every image to every node?
[20:36:16] nah, it's gluster raid 1
[20:36:25] Ah
[20:36:36] I was told not to stripe
[20:36:36] So it's just instead of having a separate storage cluster.
[20:36:48] yeah. we're going to have a separate storage cluster too
[20:36:52] but not for instance storage
[20:36:54] only for volume storage
[20:37:08] we'll be using less instance storage whenever we get volume storage anyway
[20:37:24] you're using gluster already?
[20:37:29] * jeremyb is confused
[20:37:31] jeremyb: yeah, it would be nice for us to use that
[20:37:38] the ssh proxycommand stuff
[20:37:41] yes, we have gluster
[20:37:47] Hmmm, have you considered using glusterfs' built in nfs stuff over the nfs vms? (If you're not already)
[20:37:49] each compute node is running gluster
[20:38:06] well, I'd like to move away from NFS completely
[20:38:14] That would be cool too :P
[20:38:19] there's no reason every node can't have the gluster client stuff installed
[20:38:25] since they all support it
[20:38:38] so I'll just use glusterfs with autofs, like I'm doing with NFS right now
[20:39:06] I really need to make a labs mailing list
[20:39:11] * jeremyb is still confused
[20:39:12] hah
[20:39:18] I guess I have everyone's email address, but I hate to spam people
[20:39:32] jeremyb: how so?
[20:39:35] make a list!
[20:39:46] yeah, will likely make a list
[20:39:49] You fancy contributing to mailman day?
[20:39:59] Damianz: MMLRD?
[20:40:01] Today *is* mailman day after all
[20:40:20] Ryan_Lane: where would gluster fit in on the client? what would be under gluster?
[20:41:00] jeremyb: ignoring our current use of gluster....
[20:41:20] which i also don't know anything about
[20:41:31] All VMs' hard disks are on gluster
[20:41:32] right now we have one instance, labs-nfs1, that shares home directories to all instances
[20:41:46] per-project, of course
[20:42:05] we are going to make a new glusterfs cluster soon
[20:42:08] whenever the hardware comes in
[20:43:02] so, we'll install the glusterfs client package on all instances
[20:43:28] and make the script I wrote make and share glusterfs volumes to the instances
[20:43:51] and mount the home directories via autofs like I'm doing with the NFS home directories right now
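To make the autofs plan above concrete, the maps might look something like this; the server names, volume name, and mount options are guesses rather than the actual labs configuration.

    # /etc/auto.master (sketch)
    /home    /etc/auto.home     --timeout=300
    /data    /etc/auto.gluster  --timeout=300

    # /etc/auto.home -- NFS home directories, roughly what labs-nfs1 serves today
    *        -fstype=nfs,rw,hard,intr    labs-nfs1:/export/home/bots/&

    # /etc/auto.gluster -- a per-project glusterfs volume, per the plan above
    # ("labs-gluster1" and the volume name are made up)
    project  -fstype=glusterfs            labs-gluster1:/bots-home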
[20:44:20] so, how will they access the underlying devices? what will the client do to execute a read or write? (e.g. is it over the network?)
[20:44:33] gluster mounts are just like nfs mounts
[20:44:37] except it's distributed
[20:44:52] maybe i'm thinking of a different filesystem
[20:44:54] so, if a gluster node goes down, the client requests go to the replica instead
[20:45:45] we are using gluster on the compute nodes right now just to store the instance's images
[20:45:48] Btw does gluster still have the need for you to stat everything in the volume if a node dies to get it back in sync? (which sucked for loads of small files or a few tb of them)
[20:45:52] is there leader election?
[20:45:58] Damianz: I think so, yes
[20:46:21] jeremyb: I don't think there are leaders
[20:46:37] they all share some kind of index
[20:46:42] and they are all writable
[20:46:47] ah hello, what's up?
[20:46:56] hyperon: whoops. sorry for the ping
[20:46:59] did someone need something done on bots?
[20:47:12] jeremyb wanted to run a bot in the bots project
[20:47:35] ah ok. has it been dealt with?
[20:47:40] nope
[20:47:42] no rush
[20:47:49] I wanted him to talk to you and pe tan
[20:47:51] was for http://wiki.debian.org/MeetBot
[20:48:52] so it's an irc bot?
[20:49:01] yes
[20:49:08] let me fix its rdns :)
[20:49:56] so i can set up an instance for it if you want
[20:50:18] idk how things work
[20:50:28] do ppl have multiple bots on the same instance?
[20:50:40] I'd like it to work that way at some point
[20:51:03] afaik it's one bot per instance so far
[20:51:03] I'd also like the database to be on real hardware
[20:51:04] anyway, i'm happy with however you want
[20:51:14] i expect it to be low resource use
[20:51:17] and the project storage to be in gluster volume storage, rather than NFS
[20:51:27] Ryan_Lane: Could we get real access to the wikipedia dbs too? :3
[20:51:30] but, we can work that way over time :)
[20:51:38] Damianz: that's the long term goal, yes
[20:51:52] slow and incremental steps are always best
[20:51:59] well we could create one instance for all the irc bots
[20:52:10] it's fine to have one per bot right now
[20:52:39] the bots project, right now, is really like the test/dev bots project
[20:52:48] at some point we'll want a "production" bots project
[20:52:56] where people don't have root
[20:53:10] changes happen in the test/dev one, then move to the "production" one
[20:53:21] isn't petan running a production enwiki bot on bots?
[20:53:25] yes
[20:53:25] I really should figure out deb control files and package cbng and move its management to upstart/systemd/w/e ubuntu is using over supervisor and make it more generic rather than me being the only one knowing how to setup/fix it but some of that's gonna be a rewrite :(
[20:53:31] hyperon: So am I :P
[20:53:33] a couple people are
[20:53:46] You know that feeling when you have so much crap to do but like only 366 days in the year
[20:53:53] Damianz: that would be cool
[20:53:59] I'd love to have all the bots debianized
[20:54:11] Damianz: what is "it"?
[20:54:22] ?
[20:54:36] If you mean the bot, cbng
[20:54:46] oh
[20:54:48] if all bots were debianized, we could install them on all of the bots nodes, then control where they run via puppet
[20:54:54] i thought that was some kind of typo
[20:55:07] we could monitor the load of the instances via ganglia to see if we need to move bots around
[20:55:12] ohhh, cluebot i guess
[20:55:14] or create new instances
[20:55:14] :D
[20:55:26] That would be cool
[20:55:38] it would be good for bot authors to have two accounts for their bots on wikis
[20:55:39] Ryan_Lane: SGE? :-)
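On the "debianize the bots" idea above, the core of it is a debian/control file (plus an init or upstart job, not shown). The package below is a made-up example for MeetBot, which is a supybot plugin; the field values are placeholders.

    # debian/control (sketch)
    Source: meetbot
    Section: net
    Priority: optional
    Maintainer: Bots project <placeholder@example.org>
    Build-Depends: debhelper (>= 8)
    Standards-Version: 3.9.2

    Package: meetbot
    Architecture: all
    Depends: ${misc:Depends}, supybot, python
    Description: MeetBot IRC meeting-minutes bot
     Supybot plugin for taking IRC meeting minutes, packaged so puppet
     can decide which bots node runs it.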
[20:55:47] and -testing
[20:55:51] or something like that
[20:56:07] I need to re-write certain parts so they don't screw up on restarting but everything can be packaged up apart from mysql. Actually using mysql makes it easy to move the production bot around but with that said...
[20:56:11] so they can test their bot and know which bot is doing what
[20:56:33] Ryan_Lane: You know what petan was talking about queue wise - that would make it a LOT easier to shift bots around as currently restarting most lose a lot of data.
[20:56:39] Damianz: just make the mysql part configurable :)
[20:56:49] yep
[20:56:50] agreed
[20:56:55] jeremyb: i'm going to go grab lunch and i'll see about creating instances for the MeetBot if you'd like
[20:57:10] well, if you guys add jeremyb to the project, he can create his own
[20:57:20] I guess it needs access to the NFS storage, though
[20:57:27] hyperon: sure. i'll be off and on today (no big plans but i'll be eating at some point etc)
[20:57:51] ah right, i'll add him to the project
[20:58:02] also add him as sysadmin and netadmin
[20:58:22] I should really add a dialog to add-project member that allows you to do that at the same time
[20:58:43] jeremyb: is your account called jeremyb on bots?
[20:58:55] oh. right
[20:59:00] hyperon: s/bots/labs/ but yes
[20:59:02] the web interface uses the wiki name
[20:59:19] Imo setting the wiki+ldap username to the same would save a lot of confusion :P
[20:59:39] yeah, but I was specifically asked to let people have different wiki names
[20:59:58] I guess I could make the dialogs accept either
[21:00:02] since they are both unique
[21:00:25] that's going to take some code refactoring, though :)
[21:00:53] The real confusing one is gerrit :( Which btw seems to only let me login if I'm logged into labsconsole.
[21:00:54] I should also display the shell names and wiki names in the dialogs too
[21:01:07] wiki name (shell name) <- like that
[21:01:07] 01/01/2012 - 21:01:06 - Creating a home directory for jeremyb at /export/home/bots/jeremyb
[21:01:25] * Damianz pats labs-morebots and offers her a cookie
[21:01:32] Damianz: eh? really? you should be able to log into gerrit without logging into labsconsole
[21:01:44] they aren't linked at all, except for the fact that they both use ldap
[21:02:05] hmm. going to enter some bugs :)
[21:02:07] 01/01/2012 - 21:02:07 - Updating keys for jeremyb
[21:02:34] soon you guys will be able to modify puppet configuration :)
[21:02:48] meaning the puppet config in the instance configuration dialog
[21:02:51] jeremyb: alright, account has been made for you on bots with netadmin and sysadmin permissions
[21:02:54] so that you can add your own variables and classes
[21:02:54] 01 20:29:14 * jeremyb is still trying to wrap his head around `ssh labs-mw1.pmtpa.wmflabs`
[21:02:57] 01 20:29:38 < jeremyb> is that supposed to be executed *on* bastion?
[21:03:06] yes
[21:03:10] jeremyb: look at the documentation now
[21:03:15] i saw your edits to [[access]] but it's still not clear
[21:03:19] bastion is a bastion host
[21:03:22] * jeremyb will change it
[21:03:26] why change it?
[21:03:29] Would be awesome if we had the wmflabs TLD ;)
[21:03:37] Reedy: heh
[21:03:38] indeed
[21:03:42] and ipv6
[21:03:45] Hmm actually it seems to work now, wasn't last week and someone else commented on it only working when logged into labs =/
[21:03:48] * Damianz shrugs
[21:03:53] it would be even better if we had the wmlabs tld. I'm not a big fan of wmflabs
[21:04:02] heh
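To spell out the two-hop pattern that keeps tripping people up above: the second ssh runs on bastion, not on the workstation. The shell name is an example; the prompts follow the "workstation $" convention from the [[access]] page.

    workstation$ ssh jsmith@bastion.wmflabs.org   # step 1: from your own machine
    bastion1$ ssh labs-mw1.pmtpa.wmflabs          # step 2: from bastion; this name only resolves inside labs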
[21:04:08] and wmnet?
[21:04:13] we got wmflabs because wmlabs.com and wmlabs.net were taken
[21:04:21] nah wmnet is internal
[21:04:29] and always should be :)
[21:05:45] Alien converts an RPM package file into a Debian package file or Alien can install an RPM file directly. < This is so going to save me working out debian/*
[21:06:29] Ryan_Lane: look at [[access]] now
[21:07:00] urgh
[21:07:16] !access
[21:07:16] https://labsconsole.wikimedia.org/wiki/Access#Accessing_public_and_private_instances
[21:07:22] You know people are going to copy workstation $ then complain it breaks
[21:07:29] ah ok
[21:07:35] yep
[21:07:39] Well they are if they couldn't work it out before
[21:07:50] Damianz: *i* didn't get it before
[21:08:06] we could color code different pieces
[21:08:21] and tell people which colors to type
[21:09:25] Ok I edited it again
[21:09:27] anyway, i'll set up my own proxycmd stuff and then put that on wiki too
[21:11:12] jeremyb wasn't the only person that was confused by the documentation
[21:11:14] this is clearer
[21:12:16] ...i really should finish puppetizing that other irc bot
[21:12:27] i finished but i never got git to work with gerrit
[21:12:59] I really should figure out how we are using nagios and delete/re-do my gerrit ticket
[21:15:02] so, i logged out and back in. https://labsconsole.wikimedia.org/wiki/Special:NovaInstance doesn't say anything about creating an instance
[21:15:12] hyperon: well, push it to the test repo, and I'll code review it
[21:15:25] hmm
[21:15:28] lemme check your groups
[21:17:19] jeremyb: ok, it'll show up now
[21:17:25] you needed to be added to the global groups
[21:18:19] to get into labs we still ssh into bastion, right?
[21:18:25] Ryan_Lane: oh, also bastion's pub key print should be published somewhere so i can verify it
[21:18:46] there's an open bug on that
[21:18:48] hyperon: yep
[21:19:42] hmm
[21:19:45] I'm more than happy to take patches for openstackmanager on the ssh fingerprint issue :)
[21:19:55] I can even give access to the testing instance. heh
[21:19:58] bastion.wmflabs.org, right?
[21:20:02] yep
[21:20:17] Ryan_Lane: bastion in particular should just be hardcoded on a protected page for now
[21:20:29] Ryan_Lane: that's the one that actually needs checking
[21:20:40] jeremyb: it can be on the "manage instances"
[21:20:59] I can also just namespace protect all of the Nova_Resource namespace
[21:21:28] hah
[21:23:07] 01/01/2012 - 21:23:07 - Updating keys for hyperon
[21:23:09] 01/01/2012 - 21:23:09 - Updating keys for hyperon
[21:23:11] Ryan_Lane: i've not really been able to understand exactly what glance is for but do we use it or are we considering?
[21:23:19] we use glance
[21:23:27] only because it's required to use nova
[21:23:29] does this labs-home-wm stuff get logged somewhere?
[21:23:34] oh, hah
[21:23:41] otherwise I could not give a crap about glance
[21:23:56] hyperon: need help?
[21:24:07] 01/01/2012 - 21:24:06 - Updating keys for hyperon
[21:24:08] you inserted someone to bots project?
[21:24:08] 01/01/2012 - 21:24:08 - Updating keys for hyperon
[21:24:19] jeremyb has been added to it
[21:24:29] jeremyb: meh. no need to log it
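On the bastion host-key question above, the check would go something like this once the fingerprint is published somewhere trusted (the fingerprint shown here is made up).

    # on bastion itself (or read off the future protected wiki page):
    ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key.pub
    # 2048 aa:bb:cc:...:ff /etc/ssh/ssh_host_rsa_key.pub (RSA)   <- example output
    # on the workstation, compare against what ssh shows on first connect:
    ssh bastion.wmflabs.org
    # RSA key fingerprint is aa:bb:cc:...:ff.   <- must match the published value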
[21:24:30] there is a script we need to run on nfs server
[21:24:33] I will do that
[21:24:51] jeremyb: it's just informational messages
[21:25:07] Ryan_Lane: sure but sometimes you're not in the channel
[21:25:08] jeremyb: the wiki's recentchanges page logs the important part
[21:25:16] wow i was doing dumbness
[21:25:29] not so much an issue for me because i'm always here :)
[21:25:44] what are these security groups?
[21:25:48] done
[21:26:02] i'll visit [[Special:NovaSecurityGroup]]
[21:26:09] jeremyb: https://labsconsole.wikimedia.org/w/index.php?title=Nova_Resource:Bots&curid=308&diff=1076&oldid=906&rcid=1242
[21:26:16] !log securitygroup
[21:26:16] Message missing. Nothing logged.
[21:26:22] !securitygroups
[21:26:26] !log bots added jeremyb to nfs
[21:26:27] Logged the message, Master
[21:26:28] Ryan_Lane: where is the running instance of morebots?
[21:26:28] !securitygroup
[21:26:37] hyperon: either bots-1 or bots-2
[21:26:43] !security
[21:26:43] https://labsconsole.wikimedia.org/wiki/Security_Groups
[21:26:46] heh
[21:27:05] jeremyb: they are firewalls
[21:27:12] yeah, i know
[21:27:21] i just didn't know what the ones i was offered were
[21:27:37] do i need to do something after i update my ssh keys?
[21:27:40] eww. I need to fix the project page's infobox
[21:27:58] it should put members into an unordered list
[21:28:04] hyperon: nah
[21:28:09] what's the deal with not allowing group changes after instance creation?
[21:28:15] hyperon: notice that the bot showed your keys changing
[21:28:25] jeremyb: it's an EC2 limitation
[21:28:44] hyperon@bastion1:~$ ssh hyperon@bots-1
[21:28:45] I need to see if the openstack api removes that limitation
[21:28:47] Permission denied (publickey).
[21:28:49] because it's *really* annoying
[21:28:56] hyperon: did you forward your key?
[21:29:02] err
[21:29:05] your agent
[21:29:38] not after i updated my keys, no
[21:29:55] Ryan_Lane: do these instances lose their data on boot? do they stop existing immediately on shutdown?
[21:31:38] hyperon: after you update your key, you need to add the new one to your agent
[21:31:51] jeremyb: no. they are persistent
[21:31:59] jeremyb: if you reboot it, it'll still have its data
[21:32:05] amazing!
[21:32:11] how un-ec2 like
[21:32:18] if the underlying hardware dies, it'll also still have its data
[21:32:22] ec2 can be persistent
[21:32:33] ec2 can only be persistent if you use EBS volumes
[21:32:37] Ryan_Lane just won't demand x $ per month
[21:32:48] Welll unless it's poker night anyway
[21:32:53] it's a PITA to use EBS volumes
[21:33:12] also EBS is unreliable
[21:33:20] I really hate how ec2 charges for ips and changes the ip on reboot
[21:33:46] ewww, they change it on boot?
[21:33:50] I want our infrastructure to be resilient, so volume storage is completely separate from instance storage
[21:33:56] that's kind of nasty
[21:34:00] ours don't change on reboot
[21:34:07] once an IP is assigned, it stays that way
[21:34:12] until the instance is terminated
[21:34:20] Ryan_Lane: have you seen the ec2 gov region?
[21:34:25] eh?
[21:34:41] what's that?
[21:35:21] http://aws.amazon.com/govcloud-us/
[21:37:30] o.0
[21:37:57] Who the fu** would give "sensitive" workloads to amazon, quick someone from wikileaks bribe a sysadmin.
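For reference, the fix Ryan describes for hyperon's "Permission denied" above, sketched out: load the new key into the local agent, then forward the agent through bastion. The key path is an example.

    ssh-add ~/.ssh/id_rsa                     # add the (new) key to the running agent
    ssh-add -l                                # confirm it is actually loaded
    ssh -A hyperon@bastion.wmflabs.org        # -A forwards the agent through bastion
    hyperon@bastion1:~$ ssh hyperon@bots-1    # the forwarded agent now answers the key challenge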
[21:38:14] I think Ryan changed jobs to get away from shit like that :p
[21:38:18] heh
[21:39:44] the instance's storage is likely encrypted
[21:40:00] they likely have audit daemons that log to central log servers
[21:40:08] where the central log servers log to each other
[21:41:31] is Ryan having fun?
[21:41:45] openstack is looking at adding support for trusted computing, where instances are signed and controlled via the TPM
[21:42:02] uhuh
[21:42:27] so, you could launch agency A's instances and they'd only run on hardware cluster A
[21:43:27] there's also some selinux integration with TPM, so you could run agency A's instances and agency B's instances on the same hardware, and selinux's MLS would run them in different security contexts
[21:43:39] the TPM chip would enforce the selinux contexts
[21:48:52] Ryan_Lane: any reason not to add proxycmd to the wiki now? i just tested it
[21:49:10] does it work for all internal instances?
[21:49:13] sure, go for it
[21:49:31] i just ran `ssh bots-irc1.pmtpa.wmflabs` locally and it worked
[21:49:33] idk about all
[21:49:55] it's only pmtpa for now. will need update for other DCs (none other exist atm, right?)
[21:50:04] none others exist right now
[21:59:36] Ryan_Lane: take a look
[22:00:11] i'm not certain about the placement of User because i only have a shell with the same name as labs shell name handy
[22:00:14] but i think it's right
[22:00:30] !access
[22:00:30] https://labsconsole.wikimedia.org/wiki/Access#Accessing_public_and_private_instances
[22:00:55] I think you need to use agent forwarding for this too
[22:01:00] i'm not
[22:01:09] and i don't see why you would
[22:11:49] jeremyb: InnoDB
[22:12:11] stupid internet connection
[22:12:23] jeremyb: anyway, you need to have an agent configured, at minimum
[22:12:46] I just made the documentation a little better :)
[22:12:57] Amgine: check *all* the tables...
[22:13:19] Ryan_Lane: well how do you explain me getting in without one?
[22:13:27] I dunno. I couldn't
[22:13:28] oh
[22:13:29] I know
[22:13:48] my keyname isn't a default keyname
[22:14:07] so, without an agent, I used -i
[22:14:11] and that wasn't passed through
[22:16:13] jeremyb: table hitcounter is MEMORY, else all are InnoDB
[22:17:03] jeremyb: ok. updated to note that you need an agent if you are using a non-default key
[22:17:21] Ryan_Lane: you can set the key path in the config
[22:17:37] true
[22:17:47] Amgine: this isn't really relevant for here... i was bringing you here to talk search (eventually, no one is here)
[22:21:30] setting identityfile doesn't seem to help
[22:21:51] I take that back
[22:21:57] it does
[22:23:49] i just did `SSH_AGENT_PID= SSH_AUTH_SOCK= ssh bots-irc1.pmtpa.wmflabs` and it worked. had to enter passphrase twice but it worked
[22:24:19] it's working fine
[22:24:19] the only issue i see is 'Killed by signal 1.' every time i quit
[22:24:37] even with `ssh host echo foo`
[22:25:21] I also see echo foo
[22:25:46] do you see Killed by signal 1. ?
[22:26:03] yes
[22:26:11] ooooo. sweet
[22:26:11] IdentitiesOnly yes
[22:27:06] ooh, i like it
[22:27:19] I can use a single agent for both labs and production with that option
[22:27:29] and it'll ensure I don't forward my agent
[22:27:31] wait
[22:27:32] maybe not
[22:27:37] I should test that
[22:27:54] agent forwarding is off by default
[22:28:03] I know
[22:28:16] I forward my agent, then use screen
[22:28:47] you could explicitly `ForwardAgent off`
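Putting the pieces above together, a minimal ~/.ssh/config using the nc ProxyCommand plus the IdentitiesOnly trick might look like this; the user name, key path, and one-hour timeout are examples.

    Host *.pmtpa.wmflabs
        User jsmith
        IdentityFile ~/.ssh/id_rsa_labs
        IdentitiesOnly yes
        ProxyCommand ssh -e none bastion.wmflabs.org exec nc -w 3600 %h %p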
[22:29:02] screen on bastion1?
[22:29:17] but I *want* forwarding in labs ;)
[22:29:27] I like to keep my screen sessions open on bastion1
[22:29:47] damn
[22:29:50] it forwards both
[22:30:25] don't want screen per host?
[22:31:27] `ssh -t ${host} screen -Ux -d -RR -e ^${newcontrolchar}a`
[22:31:56] hmm. that would work...
[22:32:06] wait
[22:32:06] no
[22:32:09] I don't want that :)
[22:32:15] that would be a pain
[22:32:19] I want one screen
[22:32:48] * jeremyb chuckles
[22:38:34] Ryan_Lane: so, do i have to convert to the nc method? :)
[22:38:51] eh? what do you mean?
[22:39:19] i guess it doesn't really matter with you using screen
[22:39:31] you can use either or
[22:39:37] I prefer agent forwarding
[22:40:06] I like to be able to reconnect to my sessions, if my connection dies
[22:40:27] without having a million screens open
[22:40:29] how is that related to ssh forwarding?
[22:40:30] err
[22:40:37] terminal windows or windows open
[22:40:52] err, agent forwarding*
[22:41:07] if my connection dies, and I use the proxycommand thing, with screens, then I have to have a tab open for each one
[22:41:25] to reconnect to them
[22:41:39] hmm.
[22:41:41] wait
[22:41:53] I wonder if screen would work without forwarding with the proxycommand way
[22:43:08] nope
[22:43:36] yeah. that's not gonna work then
[22:44:28] with proxycommand the only things running on bastion are nc and ssh to bastion. no ssh to the inside host. with screen you need ssh running from bastion to inside host
[22:44:38] yeah
[22:45:47] proxycommands are still useful for scp and such, though
[22:46:21] ooh, i wonder if that works
[22:46:26] it does
[22:46:28] I just tried it
[22:48:41] I just changed the config to work for pmtpa and eqiad, too :)
[22:48:54] Host *.*.wmflabs
[22:49:08] i'd change it back
[22:49:11] different bastion
[22:49:12] i bet
[22:49:28] ah. true
[22:49:29] unless we know the bastion hostname now :)
[22:49:50] i left it out intentionally
[22:51:23] added in one for eqiad
[22:51:32] we'll just name it bastion1.eqiad.wmflabs
[22:51:40] and its public name will be bastion2
[22:52:14] it'll be possible to access any instance from either of them
[22:52:25] it's just a secondary for the purposes of having a secondary
[22:52:37] ok, we're omniscient
[22:52:56] :)
[22:53:19] obviously it'll be faster using bastion.wmflabs.org for pmtpa and bastion2.wmflabs.org for eqiad
[22:53:25] but not *that* much faster
[22:54:10] wtf. i just opened my config and found this. whooops!
[22:54:11] > ProxyCommand ssh -e none bastion1.pmtpa.wmflabs exec nc -w 40 -w 40 -w 40 -w 40 -w 40 -w 40 -w 40 -w 40 -w 40 %h %p
[22:54:19] o.O
[22:54:27] wtf is that?
[22:54:58] i must have hit a digit before hitting 'i-w 40'
[22:55:04] heh
[22:55:10] I changed the docs to use 36000
[22:55:13] err
[22:55:14] 3600
[22:55:15] one hour
[22:55:20] yeah, i saw
[22:55:23] otherwise you are going to get logged out after 40 seconds
[22:55:35] i think it's timeout not total time
[22:56:00] right
[22:56:01] yeah
[22:56:11] I mean after 40 seconds of inactivity
[22:56:12] anyway, i can know to tweak that
[22:56:16] which is really short
[22:56:17] other ppl, who knows
[22:56:44] well in the top of my config i have
[22:56:44] Host *
[22:56:44] ServerAliveInterval 12
[22:56:44] ServerAliveCountMax 4
[22:57:04] so, something's wrong if there's no activity for that long
[22:57:43] (that's what you need to get ssh out of the NYPL iirc. kaldari probably still has it in his config :) )
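And per the scp note above, the same ProxyCommand config lets scp reach internal instances straight from the workstation (the file names and user are examples).

    scp meetbot.conf jsmith@bots-irc1.pmtpa.wmflabs:~/     # copy in
    scp jsmith@bots-irc1.pmtpa.wmflabs:~/meeting.log .     # copy out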
[22:58:03] ssh out of NYPL? eh?
[22:58:12] new york public library
[22:58:19] that's where glamcamp was
[22:58:20] ah
[22:58:53] i've tweaked it for a few different odd places
[23:01:56] Ryan_Lane: edited again
[23:02:24] ah. cool
[23:04:41] so is there any kind of per project central host?
[23:05:06] no
[23:05:24] that would use too many instances
[23:05:30] we have 32 projects right now
[23:05:45] i thought to do puppet stuff with or something
[23:05:51] where's the puppetmaster?
[23:05:58] virt1.wikimedia.org
[23:06:05] huh
[23:06:11] we are looking at changing how this works
[23:06:17] right now all instances only use one git branch
[23:06:18] test
[23:06:27] we want all projects to have their own branch
[23:08:18] maybe we'll have some manifests pulled from a centralized branch, so that we can still push code updates when we need to
[23:08:37] hmm
[23:08:40] nah. that could cause conflicts
[23:39:20] bbl
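One way the per-project-branch idea at the end could be wired up on the puppetmaster is dynamic environments; the paths below and the project-to-environment mapping are assumptions, not the current virt1 setup.

    # /etc/puppet/puppet.conf on the puppetmaster (sketch)
    [master]
        # each project's branch of the puppet repo checked out under its own directory,
        # e.g. /etc/puppet/environments/bots, /etc/puppet/environments/search
        manifest   = /etc/puppet/environments/$environment/manifests/site.pp
        modulepath = /etc/puppet/environments/$environment/modules

    # an instance in the bots project would then set:
    [agent]
        environment = bots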