[00:11:37] @channellist
[00:11:37] I am now in following channels: #huggle, #wikimedia-dev, #mediawiki-move, #wikimedia-tech, #wm-bot, #wikimedia-labs, #wikimedia-operations, ##matthewrbowker, ##matthewrbot, #wikipedia-zh-help, #wikimedia-toolserver, ##Alpha_Quadrant, #wikimedia-mobile, #mediawiki, #wikipedia-cs, #wikipedia-cs-rc, #wiki-hurricanes-zh, #wikinews-zh, #wikipedia-zh-helpers, #wikipedia-en-afc, ##thesecretlair,
[00:11:53] @add #USEagency
[00:11:53] Permission denied
[00:12:01] wtf
[00:35:09] @add irc://irc.freenode.net:6697/#USEagency
[00:35:09] Invalid name
[00:35:19] @add #USEagency
[00:49:12] hmm, having trouble connecting to my labs instance
[00:49:17] but it has worked before
[00:49:33] doing this:
[00:49:52] ssh -A bastion.wmflabs.org
[00:49:52] ssh -v analytics.pmtpa.wmflabs
[00:49:54] getting:
[00:51:00] debug1: Authentications that can continue: publickey
[00:51:01] debug1: Next authentication method: publickey
[00:51:01] debug1: Trying private key: /home/otto/.ssh/identity
[00:51:01] debug1: Trying private key: /home/otto/.ssh/id_rsa
[00:51:01] debug1: Trying private key: /home/otto/.ssh/id_dsa
[00:51:01] debug1: No more authentication methods to try.
[00:51:02] Permission denied (publickey).
[00:51:22] i don't have any public keys on bastion…should i?
[00:51:32] i shouldn't right? since I am forwarding my local one with -A
[00:52:56] oo
[00:52:56] nm
[00:53:06] read this a little closer
[00:53:07] https://labsconsole.wikimedia.org/wiki/Access#Accessing_public_and_private_instances
[00:53:09] sorry for the noise!
[01:22:04] RECOVERY Free ram is now: OK on nagios 127.0.0.1 output: OK: 20% free memory
[01:23:54] PROBLEM Current Load is now: CRITICAL on analyticstest analyticstest output: Connection refused by host
[01:24:34] PROBLEM Current Users is now: CRITICAL on analyticstest analyticstest output: Connection refused by host
[01:25:14] PROBLEM Disk Space is now: CRITICAL on analyticstest analyticstest output: Connection refused by host
[01:26:04] PROBLEM Free ram is now: CRITICAL on analyticstest analyticstest output: Connection refused by host
[01:26:22] hmmmmm
[01:26:25] critical eh?
[01:26:31] i just spawned that instance
[01:26:41] i think puppet is doing its initialization stuff there, not sure
[01:26:44] haven't been able to get in yet
[01:27:24] PROBLEM Total Processes is now: CRITICAL on analyticstest analyticstest output: CHECK_NRPE: Error - Could not complete SSL handshake.
[01:28:14] PROBLEM dpkg-check is now: CRITICAL on analyticstest analyticstest output: CHECK_NRPE: Error - Could not complete SSL handshake.
[01:32:24] RECOVERY Total Processes is now: OK on analyticstest analyticstest output: PROCS OK: 85 processes
[01:32:53] ottomata: Yeah, our monitoring is so good that it notices the VM is down before it gets a chance to be spun up
[01:33:14] RECOVERY dpkg-check is now: OK on analyticstest analyticstest output: All packages OK
[01:33:54] RECOVERY Current Load is now: OK on analyticstest analyticstest output: OK - load average: 0.11, 0.55, 0.59
[01:34:34] RECOVERY Current Users is now: OK on analyticstest analyticstest output: USERS OK - 1 users currently logged in
[01:35:14] RECOVERY Disk Space is now: OK on analyticstest analyticstest output: DISK OK
[01:36:04] RECOVERY Free ram is now: OK on analyticstest analyticstest output: OK: 79% free memory
[01:42:01] New patchset: Ottomata; "adding class to install pip for stat1 - dev use only." [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2818
[01:42:24] New review: gerrit2; "Change did not pass lint check. You will need to send an amended patchset for this (see: https://lab..." [operations/puppet] (test); V: -1 - https://gerrit.wikimedia.org/r/2818
[02:02:19] New patchset: Ottomata; "adding class to install pip for stat1 - dev use only." [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2818
[02:02:40] New review: gerrit2; "Lint check passed." [operations/puppet] (test); V: 1 - https://gerrit.wikimedia.org/r/2818
[02:05:39] New review: Bhartshorne; "(no comment)" [operations/puppet] (test); V: 0 C: 0; - https://gerrit.wikimedia.org/r/2818
[02:07:01] New patchset: Ottomata; "adding class to install pip for stat1 - dev use only." [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2818
[02:07:24] New review: gerrit2; "Lint check passed." [operations/puppet] (test); V: 1 - https://gerrit.wikimedia.org/r/2818
[02:08:26] New review: Bhartshorne; "(no comment)" [operations/puppet] (test); V: 1 C: 2; - https://gerrit.wikimedia.org/r/2818
[02:08:27] Change merged: Bhartshorne; [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2818
[02:42:15] PROBLEM Free ram is now: WARNING on puppet-lucid puppet-lucid output: Warning: 12% free memory
[03:07:15] PROBLEM Free ram is now: CRITICAL on puppet-lucid puppet-lucid output: Critical: 3% free memory
[03:27:15] RECOVERY Free ram is now: OK on puppet-lucid puppet-lucid output: OK: 20% free memory
[06:24:35] PROBLEM Free ram is now: WARNING on nagios 127.0.0.1 output: Warning: 19% free memory
[07:09:33] RECOVERY Free ram is now: OK on nagios 127.0.0.1 output: OK: 20% free memory
[15:31:29] New review: Mark Bergsma; "Is this change still relevant?" [operations/puppet] (test); V: 0 C: 0; - https://gerrit.wikimedia.org/r/1464
[15:36:30] New review: Mark Bergsma; "What's the status of this change now?" [operations/puppet] (test); V: 0 C: 0; - https://gerrit.wikimedia.org/r/2012
[16:14:35] PROBLEM Current Users is now: CRITICAL on testblog testblog output: Connection refused by host
[16:15:15] PROBLEM Disk Space is now: CRITICAL on testblog testblog output: Connection refused by host
[16:16:05] PROBLEM Free ram is now: CRITICAL on testblog testblog output: Connection refused by host
[16:17:25] PROBLEM Total Processes is now: CRITICAL on testblog testblog output: CHECK_NRPE: Error - Could not complete SSL handshake.
[16:18:15] PROBLEM dpkg-check is now: CRITICAL on testblog testblog output: CHECK_NRPE: Error - Could not complete SSL handshake.
[16:18:55] PROBLEM Current Load is now: CRITICAL on testblog testblog output: CHECK_NRPE: Error - Could not complete SSL handshake.
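The bastion problem ottomata worked through above ([00:49:12]–[00:53:09]) is the standard labs access pattern: the key stays only in the local agent and is forwarded through bastion.wmflabs.org, never copied onto the bastion itself. A minimal sketch of a one-hop setup, assuming the labs key is loaded with ssh-add and OpenSSH 5.4+ for -W; the ProxyCommand recipe is the generic OpenSSH pattern, not a quote from the labsconsole page linked above:

    cat >> ~/.ssh/config <<'EOF'
    Host bastion.wmflabs.org
        ForwardAgent yes
    Host *.pmtpa.wmflabs
        ProxyCommand ssh -W %h:%p bastion.wmflabs.org
    EOF
    ssh-add ~/.ssh/id_rsa         # make sure the local agent actually holds the labs key
    ssh analytics.pmtpa.wmflabs   # now tunnels through the bastion transparently

The lint-fail/amend cycle visible in the gerrit messages above (patchset, V: -1, new patchset on the same change) corresponds to re-pushing an amended commit to the same change; the remote name is a placeholder:

    # fix the lint problem, then amend and re-push to the same review branch
    git commit --amend
    git push origin HEAD:refs/for/test   # 'test' branch, as in the [operations/puppet] (test) messages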
[16:36:05] RECOVERY Free ram is now: OK on testblog testblog output: OK: 92% free memory
[16:37:25] RECOVERY Total Processes is now: OK on testblog testblog output: PROCS OK: 85 processes
[16:38:55] RECOVERY Current Load is now: OK on testblog testblog output: OK - load average: 2.29, 0.79, 0.41
[16:39:35] RECOVERY Current Users is now: OK on testblog testblog output: USERS OK - 1 users currently logged in
[16:40:14] RECOVERY Disk Space is now: OK on testblog testblog output: DISK OK
[16:43:14] RECOVERY dpkg-check is now: OK on testblog testblog output: All packages OK
[16:49:44] New patchset: RobH; "added in simply mysql server call for labs testing of blog" [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2828
[16:51:39] New review: RobH; "(no comment)" [operations/puppet] (test); V: 1 C: 2; - https://gerrit.wikimedia.org/r/2828
[16:51:39] Change merged: RobH; [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2828
[17:59:35] <^demon|away> Ryan_Lane: I wrote some bash scripts to wrap around my svn2git commands so people other than me can figure it out. (Bus casualty insurance)
[18:00:45] heh
[18:00:48] that's good
[18:04:40] <^demon|away> (dump|repack|push)-(core|wmf-ext) :)
[18:10:32] I'll see if I can move gerrit over today
[18:10:39] either way, it's good to have a backup ;)
[18:12:02] <^demon|away> I was busy on formey earlier, but I'm done for the day so feel free.
[18:15:55] <^demon> Ryan_Lane: One quick question before I run out. How feasible do you think it'd be to bump the AUTO_INCREMENT for gerrit ids? The suggestion was raised to bump it to avoid conflicting with svn ids.
[18:16:02] <^demon> But I'm kind of afraid it will Break Shit.
[18:16:17] is it even possible?
[18:16:34] <^demon> Assuming the gerrit change ids are assigned by an AUTO_INCREMENT, it's theoretically possible.
[18:16:52] I'm not sure I want to go mucking around with that
[18:16:52] <^demon> Alter table foo AUTO_INCREMENT = somereallybignumber;
[18:17:02] <^demon> Yeah, it makes me kind of leery.
[18:17:15] also, we'd be missing changes
[18:17:21] why?
[18:17:23] there'd be a giant gap
[18:17:29] and?
[18:17:48] let me reverse the question. why do we give so much of a shit that the change numbers are high?
[18:18:39] Platonides: if you care enough, feel free to investigate the matter, and ensure it won't break anything
[18:18:40] so that you could refer to a MediaWiki r123456 change?
[18:18:47] I'm going to put exactly 0 effort in.
[18:19:01] I gave the needed sql
[18:19:05] bumping the auto increment isn't going to help
[18:19:11] am I expected to break into the server to run it? ;)
[18:19:32] you won't be able to refer to r123456 because it won't point to something in existence
[18:19:33] isn't it provided by an autoincrement column?
[18:19:55] none of the old rev numbers will point to something real
[18:20:06] but you could go to the old system
[18:20:24] don't we want to kill the old system?
[18:20:30] not ideal, but better than having two r1234 changes
[18:20:35] we should try to map r numbers to sha-1 hashes
[18:20:40] <^demon> Ryan_Lane: Special:Code's going to sit around indefinitely for archive purposes, read-only.
[18:20:43] what about the CR discussions?
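For concreteness, the bump ^demon sketches would look like the following, with the caveat he states himself: it only works if Gerrit really assigns change ids from an AUTO_INCREMENT column, and the database and table names here are guesses, not verified parts of Gerrit's schema (the discussion below also concludes the bump doesn't actually help):

    # Hypothetical: push the change-id counter past the last SVN revision number.
    mysql reviewdb -e 'ALTER TABLE changes AUTO_INCREMENT = 1000000;'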
[18:21:05] and we need to keep the rXYZ mappings even in git
[18:21:17] otherwise, commit comments like "fix breakage of r1234" won't make sense
[18:21:18] the CR discussions have the old svn repo to look at
[18:21:26] don't use r1234
[18:21:29] use change 1234
[18:21:35] it isn't a revision, it's a branch
[18:21:45] <^demon> Platonides: You can't point r1234 at a changeset in gerrit anyway.
[18:21:46] Ryan_Lane, are you proposing to rewrite old commit messages?
[18:21:56] <^demon> You'd have to point it at gitweb
[18:21:59] <^demon> They're not in gerrit
[18:22:04] the old commit messages wouldn't *work anyway*!
[18:22:18] ^demon, I was thinking of https://gerrit.wikimedia.org/r/2768
[18:22:27] it's not exactly *one* change in gerrit
[18:22:31] r1234 won't be able to point to change 1234
[18:22:34] maybe several proposals
[18:22:40] because change 1234 is a puppet change
[18:22:45] <^demon> Right, those aren't revisions. r2768 doesn't mean the same thing
[18:22:58] Also, the imported changes won't even *be* in Gerrit
[18:23:06] <^demon> I said that :)
[18:23:06] Because presumably we're bypassing Gerrit for pushing those in
[18:23:10] <^demon> Yes
[18:23:10] r1234 is commit 5b296c9d9cca10d872e57d569c8f1f59399d7755
[18:23:22] (really it isn't, i'm just giving an example)
[18:23:22] Yup
[18:23:33] ^demon: You were experimenting with git notes earlier, what's up with that?
[18:23:47] <^demon> The notes field has a link to the page in Special:Code
[18:23:58] <^demon> So you've got rev number, and link to historical CR discussion
[18:24:06] Excellent
[18:24:16] I don't mind about the old changes not being in gerrit
[18:24:54] I think people care more about "we used to have over 100,000 commits. now we have 2000, and it makes us look less important"
[18:25:07] <^demon> RoanKattouw: http://p.defau.lt/?FiDwekMHlbaBXR_QR8tBjg
[18:25:09] ohloh will still properly count the commits in a project
[18:25:26] Dude
[18:25:31] We get to have all the milestones a second time!
[18:25:37] how do you propose to link with gerrit links?
[18:25:44] <^demon> Not with r123
[18:25:45] ^demon: you can get #100,000 this time!
[18:25:55] Also, because puppet and MediaWiki and everything else are sharing the same numbering namespace, the numbers will grow faster
[18:25:58] <^demon> Say "change 123" or whatever like Ryan suggested.
[18:26:08] g123 ?
[18:26:11] <^demon> Don't conflate the two terms and we avoid the issue entirely.
[18:26:12] we had http://www.mediawiki.org/wiki/Special:Code/MediaWiki/123456
[18:26:13] I've been using !g 123 for the bot
[18:26:26] continuing with higher numbers is the logical thing
[18:26:34] Platonides: I disagree
[18:26:37] plus, we will get a mix of old and new numbers after the switch
[18:26:38] I think it's illogical
[18:26:52] we aren't using svn
[18:26:57] svn doesn't use revisions
[18:27:02] xD
[18:27:06] err
[18:27:07] I agree with Ryan, and I disagree that there will be confusion
[18:27:08] *git
[18:27:24] If I say 3451 then obviously I'm not talking about SVN rev 3451, that was in like 2004
[18:27:42] <^demon> I think as long as we're not being silly and calling the changes in gerrit revisions (which they aren't), it's not really likely to confuse.
[18:28:18] !change is https://gerrit.wikimedia.org/r/$1
[18:28:19] Key exist!
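A sketch of the git-notes mapping ^demon describes, reusing the example hash from the conversation; the exact note format he settled on is in the p.defau.lt paste above and is not reproduced here:

    # Attach the old Special:Code URL to an imported commit, then read it back.
    git notes add -m 'http://www.mediawiki.org/wiki/Special:Code/MediaWiki/123456' 5b296c9d9cca10d872e57d569c8f1f59399d7755
    git log -1 --show-notes 5b296c9d9cca10d872e57d569c8f1f59399d7755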
[18:28:23] !change
[18:28:23] https://gerrit.wikimedia.org/r/$1
[18:28:24] heh
[18:28:58] !g
[18:28:58] https://gerrit.wikimedia.org/r/$1
[18:29:05] well, there's already an r there
[18:29:14] I don't know what it stands for
[18:29:18] review
[18:29:18] That /r is for all Gerrit URLs
[18:29:32] Ryan_Lane: lol, I'm about a month ahead of you there ;)
[18:29:34] well, then go to review 1234 :)
[18:29:42] problem fixed: you're not calling it a revision
[18:30:41] <^demon> Anyway, class time. I vote for a green bikeshed.
[18:30:58] Platonides: no. it's a change
[18:31:09] we should use the terminology that comes with the system
[18:31:15] it's a review system
[18:31:21] that has changes
[18:31:38] change the url to say /c/ ?
[18:31:48] * Ryan_Lane groans
[18:31:57] lol
[18:32:00] Platonides: The /r is used for the entire system, not just the /r/1234 shortcut
[18:32:41] also: https://gerrit.wikimedia.org/r/#change,2836
[18:32:47] well, I see gerrit urls as completely broken, but that's a different issue
[18:32:47] permalinks are short urls
[18:33:17] normal URLs have change, right there in the url
[18:33:48] I don't mind too much about what to call it, but I'd expect it to be consistent
[18:34:03] how is it inconsistent?
[18:34:05] or #q,1324,n,z
[18:34:13] or, the urls?
[18:34:42] yes, /r/1234 redirects to /r/#q,1324,n,z
[18:34:57] then to change
[18:35:19] when I click on a permalink, it brings me to #change
[18:35:22] not to #q
[18:36:02] I see a double redirect
[18:36:04] ah
[18:36:08] I do now as well
[18:36:08] maybe it's faster for you
[18:36:15] used a different browser
[18:36:21] Well if they're 301s, his browser will cache them
[18:36:25] yeah
[18:36:40] Platonides: feel free to send patches to gerrit :)
[18:36:48] or open bugs, and we'll star them
[18:37:17] we'd love for the system to be better. I don't think anyone loves it
[18:37:31] yet we want to move to it
[18:38:22] it's better than svn
[18:39:01] I sort of love it. But maybe only in contrast to what came before.
[18:40:34] Ryan_Lane: Have time to help me think about nova/gluster?
[18:40:45] yep
[18:43:39] Well, first, a check on terminology. I got into a lot of confusion in openstack-devel because I kept talking about 'gluster volumes' and the person I was talking with insisted that 'volume' only meant a block-level thingy, and that a file-level thingy should be called a 'filesystem' instead.
[18:44:06] Do you agree with that? should I stop saying 'gluster volume'?
[18:44:30] no, what we like is git and the review before merge
[18:44:38] not the gerrit system used to accomplish that
[18:44:57] andrewbogott: that's fine
[18:45:13] Platonides: propose an alternative that doesn't suck
[18:45:22] It's a minor point, obviously, just caused me deep confusion earlier :)
[18:45:49] someone suggested an alternative, and it was brushed off with "we'll check later"
[18:46:00] Anyway... I think there are three big technical issues: 1) find or create an API for nova to create/delete/manage gluster volumes
[18:46:05] Platonides: because we had some major issues with it
[18:46:18] 2) Get an agent onto instances that knows how to mount those volumes
[18:46:20] * Platonides checks the thread
[18:46:26] 3) Communication between 1 and 2
[18:46:34] I think we can avoid #2
[18:46:44] Really? How so?
[18:46:52] the way current volumes work, it makes the volume available to the instance
[18:46:58] but it doesn't automatically mount it
[18:47:00] If we can avoid 2 then we can also avoid 3 which is where the real problem is.
[18:47:24] as long as it makes it available to the instance, I think we're ok
[18:47:41] Which, in the case of gluster, 'making it available' is just a matter of setting permissions in gluster, right?
[18:47:46] yep
[18:47:49] Hm.
[18:48:10] I've been fixated on 2 and 3 as the interesting/useful parts :)
[18:48:17] I see, the big problem was lack of LDAP in phabricator
[18:48:30] But I'm happy to abandon them if you think that in general labs users are sophisticated enough to do all their connecting and mounting themselves.
[18:48:34] I was planning on using automount with LDAP for access to gluster
[18:48:35] then some vague mentions of scripts
[18:48:46] so that people just cd to a directory and it exists
[18:49:00] Yeah, the way that phabricator's backend is designed is kind of cringe-worthy when compared to Gerrit
[18:49:03] Platonides: phabricator is only a code-review system
[18:49:14] However, hacking in automatic merging shouldn't be too hard
[18:49:18] it handles patches for code review
[18:49:26] RoanKattouw: we'd still need to handle private repos
[18:49:30] Ryan_Lane: OK; I think I don't know how auto-mount works. It can respond to changes at runtime?
[18:49:36] In Gerrit, a revision is merged automatically as soon as it's approved. Phabricator doesn't do this (requires manual intervention) but could probably be hacked to
[18:49:38] and we'd need to handle fine grained permissions
[18:49:49] Ryan_Lane: How so? Do you guys do review at all for the private repo? Isn't that all direct pushes?
[18:49:50] we'd basically need to reimplement gerrit's backend in phabricator
[18:49:56] RoanKattouw: review
[18:49:58] Aha
[18:50:06] So there's all sorts of fine grained restrictions
[18:50:08] andrewbogott: well..
[18:50:19] andrewbogott: see how the home directories work on labs
[18:50:33] I actually wrote a blog post about this :)
[18:50:34] http://ryandlane.com/blog/2011/11/01/sharing-home-directories-to-instances-within-a-project-using-puppet-ldap-autofs-and-nova/
[18:50:58] we can either reimplement a backend, or we can modify a frontend
[18:51:15] and we can link up with the openstack community to help with the frontend
[18:51:32] we have no help with phabricator
[18:51:59] ok, I will read! When I looked before I only found the document describing how it might work, not the how-we-really-do-it doc.
[18:52:21] I also have the impression that you sort of hate the existing method...?
[18:52:34] <^demon> There's lots of hate here :)
[18:53:51] andrewbogott: I don't like the existing method
[18:53:59] it's inefficient and definitely doesn't scale
[18:53:59] Ryan_Lane: Anyway, in short: you having a design in mind for the instance-side of things is fantastic. There's some hope that a standard system for installing agents and passing metadata at runtime will be added to folsom... if that happens we can consider better openstack integration.
[18:54:07] also, we're using NFS in the current method, not gluster
[18:54:15] I was just feeling stumped by the fact that that system doesn't exist yet.
[18:54:21] * Ryan_Lane nods
[18:54:21] yeah
[18:54:32] a user-agent will solve #2 and #3
[18:54:40] err
[18:54:43] system-agent
[18:55:00] I think they're calling it 'guest configuration' http://wiki.openstack.org/guest-configuration
[18:55:08] * Ryan_Lane nods
[18:55:09] Oh, or maybe guest agent.
[18:55:20] that looks sane
[18:56:06] if we can just handle creation and modification of the share for now, we'll be ok
[18:56:23] Great. I think knowing that gets me unstuck.
[18:56:24] Thanks.
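The automount approach Ryan links to is what lets him avoid issue #2 above: autofs consults an LDAP-backed map, so a share is mounted the first time anyone touches its path, with no in-instance agent needed. A rough illustration of the moving parts; the map name and LDAP base are invented for the example, and the real labs configuration is in the blog post above:

    # /etc/auto.master would carry a line like (hypothetical map location):
    #   /data  ldap:ou=auto.data,dc=wikimedia,dc=org
    cd /data/myproject       # first access: autofs resolves the key in LDAP and mounts on demand
    mount | grep myproject   # the share now shows up; it exists only while in use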
[18:56:27] yw
[18:56:41] I started writing a script to do an interim solution for now
[18:56:47] and noticed an interesting issue
[18:56:55] all volume names need to be unique
[18:57:39] gluster enforces that, right?
[18:58:29] so: gluster volume create <volname> replica 2 transport tcp brick1:/<dir> brick2:/<dir>
[18:58:36] yeah
[18:58:49] that means we need to create the volumes in a unique way
[18:58:57] <project>-<volume> may work better
[18:59:04] sorry, I should start using filesystem.
[18:59:05] heh
[18:59:35] also, the directory listed in brick1:/<dir> must already exist
[18:59:39] on every brick
[19:00:22] and it needs to exist before the gluster volume is created
[19:00:25] Oh... hm.
[19:00:28] filesystem is awkward :)
[19:00:36] gluster calls them volumes. heh
[19:01:00] Yeah, because they are volumes :)
[19:01:21] it's likely possible to send a queue message to create a volume, that all of the bricks pick up
[19:01:32] How do we get all those directories created? The system running the gluster command doesn't necessarily have the ability to mkdir on every other system, does it?
[19:01:35] they can then create the directory, then send a queue message to create the volume
[19:01:50] Ah, so you're thinking that we need an agent running on every brick?
[19:02:04] I think it may be necessary no matter what
[19:02:17] because I think you can only run the gluster commands from one of the peers
[19:02:20] Dang. Gluster is already running on every brick; /it/ should be that agent.
[19:02:25] I want gluster to do more than it does.
[19:02:40] yeah. it unfortunately doesn't have crap for openstack integration ;)
[19:03:09] Well, regardless of openstack, I would think that every gluster user would want to be able to create a new volume without having to log into every damn brick.
[19:03:12] but, it's normal to run nova-volume in the location that's actually managing the storage
[19:03:47] so, nova-volume can be the agent
[19:04:10] it'll just need some logic to create the directories before it creates the volume
[19:04:29] I was thinking of handling this by creating a global share
[19:04:34] the global share should be the topdir
[19:04:55] each gluster node would mount it, and then any gluster node could create the directories for all others
[19:05:11] I'm not sure if you can share subdirectories of existing volumes, though
[19:05:28] though I'd imagine you can
[19:06:12] Um...
[19:06:39] so, volume global would be /a
[19:06:40] Either I don't follow, or that doesn't make sense. So, we create a global share, /gl/global
[19:06:55] OK, /a
[19:06:56] the gluster servers mount it
[19:07:07] then, a new project is created: /a/<project>
[19:07:07] So we create a subdir /a/volume1
[19:07:20] a new volume in that: /a/<project>/<volume>
[19:07:27] But /a/<project> isn't a directory on each brick. It's a directory that's... who knows where?
[19:07:47] well, it is a directory on each brick
[19:07:59] Is it?
[19:08:04] yep
[19:08:10] I mean, if I put a file in that directory, there's only one copy of that file.
[19:08:18] It's distributed among the bricks, but not duplicated.
[19:08:18] ah. hm. true
[19:08:34] So nesting gluster volumes will totally foil any attempt to stripe or duplicate.
[19:08:43] :(
[19:08:44] true
[19:08:47] Basically it makes only one big brick.
[19:08:47] ok, that approach won't work
[19:08:58] Which might be fine, if the one big brick is, itself, striped on a different level...
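Putting the constraints from this exchange together: volume names must be globally unique, and the brick directory has to exist on every brick before gluster volume create will run. A sketch with placeholder hosts and paths, using the <project>-<volume> naming floated above; pre-creating the directories over ssh is one option, and the discussion below settles on exactly that:

    # Pre-create the brick directory everywhere, then create and start the volume.
    for h in brick1 brick2; do
        ssh "$h" mkdir -p /a/myproject/myvolume
    done
    gluster volume create myproject-myvolume replica 2 transport tcp \
        brick1:/a/myproject/myvolume brick2:/a/myproject/myvolume
    gluster volume start myproject-myvolume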
[19:09:03] But this confuses me :)
[19:09:51] well, either way, somehow nova-volume must create the directory on each brick before it creates the volume
[19:09:52] I tried a few experiments with nesting shares last week, and mostly decided that it led to madness.
[19:10:04] yeah. it's likely a bad idea :)
[19:10:21] For example, unmounting and remounting a parent share caused the nested shares to unmount and vanish, so I had to do a recursive remount.
[19:10:51] * Ryan_Lane twitches
[19:11:00] yeah, let's avoid nested volumes, then :)
[19:12:15] Could we have a 'boss brick' that can automatically ssh to the other bricks to mkdir?
[19:12:25] hm
[19:12:27] that seems slightly simpler than installing a custom agent on each brick.
[19:12:33] we could configure nova-volume to ssh to all bricks
[19:12:39] and also run the gluster volume command via ssh
[19:12:56] then nova-volume wouldn't need to run on all of the bricks, either
[19:13:05] Yeah. As long as you don't mind setting up passphraseless ssh like that, that's probably an OK solution.
[19:13:09] yeah
[19:13:23] I don't think it's a problem
[19:13:50] ok.
[19:14:17] I'd imagine other drivers use ssh
[19:14:39] Because people are committed to the existing nova-volume API being for block storage, I'm going to propose an API extension. Draft is here: http://wiki.openstack.org/SharedFS
[19:14:57] oh. cool
[19:15:20] (no need for you to read that now, unless you feel like it.)
[19:16:52] looks good to me
[19:25:40] Does sharing gluster volumes between projects have the same priv-escalation danger that it has with NFS?
[19:25:53] Which is to say -- is that something that no one would ever want to do?
[19:27:06] Oh, I guess the issue is not NFSness but rather home-directoryness
[19:27:51] andrewbogott: yep
[19:27:54] it does
[19:28:07] security is enforced on the instance side, not the gluster side
[19:28:22] so cross-project sharing can be dangerous
[19:28:51] the only place I was considering allowing cross-project shares in labs was for read-only, open-to-everyone datasets
[19:28:54] like dumps
[19:34:39] it could be interesting for the openstack spec to allow open-to-everyone shares
[19:35:02] then it would be global/project/instance
[19:35:17] would be great to be able to list them as read/write and read-only too
[19:36:23] EC2, for instance, has public volumes
[19:36:46] one thing that's available there is wikipedia dumps, funny enough :)
[19:39:18] ok, I'll add globalness.
[19:39:33] cool. thanks
[19:55:28] hi :)
[19:55:36] hashar: howdy
[19:55:41] any admin around to link iAlex's svn account to labs? :-)
[19:55:45] Ryan_Lane: hi :-)
[19:55:55] !account-questions | ialex
[19:55:55] Ryan_Lane: I think I have sent you a few review requests through gerrit
[19:55:55] ialex: I need the following info from you: 1. Your preferred wiki user name. This will also be your git username, so if you'd prefer this to be your real name, then provide your real name. 2. Your preferred email address. 3. Your SVN account name, or your preferred shell account name, if you do not have SVN access.
[19:55:59] \o/
[19:56:18] ah. you did indeed
[19:56:24] ialex: as you can see, this channel is well equipped with bots / admins etc… :)
[19:56:41] hashar: I see :)
[19:57:21] doing review for you now
[19:58:38] Ryan_Lane: I would like IAlex as username, ialex.wiki@gmail.com as email address and I already have a SVN account "ialex"
[19:58:47] ok
[19:58:51] gimme a sec
[20:00:08] ialex: and you want full labs access, right? not just gerrit?
[20:00:18] Ryan_Lane: yes
[20:00:23] ok
[20:00:41] hashar: https://gerrit.wikimedia.org/r/#change,2514
[20:00:46] please fix :)
[20:01:06] yeahhh excellent exercise
[20:01:30] all very straightforward changes :)
[20:01:50] !initial-login | ialex
[20:01:50] ialex: https://labsconsole.wikimedia.org/wiki/Access#Initial_log_in
[20:02:00] CONFLICT (content): Merge conflict in .gitignore
[20:02:00] \o/
[20:02:12] 02/28/2012 - 20:02:12 - Creating a home directory for ialex at /export/home/bastion/ialex
[20:02:31] Ryan_Lane: thank you very much! :)
[20:03:12] 02/28/2012 - 20:03:12 - Updating keys for ialex
[20:03:44] yw
[20:05:11] 02/28/2012 - 20:05:11 - Updating keys for ialex
[20:12:12] Ryan_Lane: hi, Diederik van Liere sent me your way to see if I could get access to wikimedia labs to run wikistream.inkdroid.org and possibly some other wikipedia analytics tools at wikimedia
[20:59:59] edsu: sure
[21:00:05] !account-questions | edsu
[21:00:05] edsu: I need the following info from you: 1. Your preferred wiki user name. This will also be your git username, so if you'd prefer this to be your real name, then provide your real name. 2. Your preferred email address. 3. Your SVN account name, or your preferred shell account name, if you do not have SVN access.
[21:01:46] wm-bot: help
[21:01:57] :-)
[21:02:00] heh
[21:02:06] I need the info, not the bot ;)
[21:02:07] @helpo
[21:02:07] Type @commands for list of commands. This bot is running http://meta.wikimedia.org/wiki/WM-Bot version wikimedia bot v. 1.1.4 source code licensed under GPL and located in wikimedia svn
[21:02:08] edsu ; ehs@pobox.com ;edsu
[21:02:11] @help
[21:02:11] Type @commands for list of commands. This bot is running http://meta.wikimedia.org/wiki/WM-Bot version wikimedia bot v. 1.1.4 source code licensed under GPL and located in wikimedia svn
[21:02:33] no svn account, right?
[21:02:39] correct :)
[21:55:47] hi [Haekchen]
[21:55:59] !account-questions
[21:55:59] I need the following info from you: 1. Your preferred wiki user name. This will also be your git username, so if you'd prefer this to be your real name, then provide your real name. 2. Your preferred email address. 3. Your SVN account name, or your preferred shell account name, if you do not have SVN access.
[21:56:25] <[Haekchen]> hi
[21:57:01] <[Haekchen]> with wiki username you mean mediawiki.org?
[21:57:32] [Haekchen]: no, this is going to be your wiki username for labsconsole.wikimedia.org
[21:57:42] which is sort of the keystone of our new single sign-on infrastructure
[21:58:38] [Haekchen]: no, this is going to be your wiki username for labsconsole.wikimedia.org which is sort of the keystone of our new single sign-on infrastructure
[21:59:23] <[Haekchen]> oops, browser crash
[22:00:10] <[Haekchen]> OK, let it be "Bergi", with email address "a.d.bergi@web.de"!
[22:00:51] ok [Haekchen] just a moment
[22:01:41] [Haekchen]: the shell account must be lowercase. is bergi ok?
[22:02:12] <[Haekchen]> yeah, sure. I thought wiki usernames are uppercase automatically
[22:02:23] Wiki usernames are. Shell accounts are lowercase
[22:02:39] ok, [Haekchen] you should have something in your email in the next 5 min
[22:02:42] <[Haekchen]> so then "User:Bergi" and "~bergi" :-)
[22:03:14] [Haekchen]: yes! please follow the instructions in that email, log in to https://labsconsole.wikimedia.org and then go to https://www.mediawiki.org/wiki/Git/Workflow#Get_the_right_permissions and follow instructions from there
[22:03:53] <[Haekchen]> got it
[22:04:01] PROBLEM host: fundraising-civicrm is DOWN address: fundraising-civicrm CRITICAL - Host Unreachable (fundraising-civicrm)
[22:04:50] ok, great!
[22:11:24] after adding my ssh keys to labs and gerrit should i be able to ssh to edsu@bastion.wmflabs.org ?
[22:12:16] I think someone needs to add you to the bastion project for that
[22:13:51] I think I did...
[22:14:10] maybe not
[22:14:34] edsu: in a minute a bot will say it created your home directory
[22:14:42] after that you'll be able to
[22:14:51] edsu: you'll need drdee to add you to an analytics project
[22:14:57] or we'll need to create a new one for you
[22:15:12] 02/28/2012 - 22:15:12 - Creating a home directory for edsu at /export/home/bastion/edsu
[22:16:12] 02/28/2012 - 22:16:12 - Updating keys for edsu
[22:20:49] edsu: probably best to create your own project, i can do that for you if you want, what is preferred project name?
[22:21:24] drdee: I don't think you can create projects
[22:21:53] in labs? i think i can
[22:22:04] I'm pretty sure I took that privilege away because it opened a security hole
[22:22:13] well, it would in a system with open registration
[22:22:28] it allows people to bypass quotas too
[22:22:46] I'm still trying to think of a sane way to handle that :)
[22:22:56] we'll likely have a project-creators group
[22:23:00] ok
[22:23:07] drdee: can you try?
[22:23:16] it shouldn't even give you an interface to do so
[22:23:22] btw, in puppet.git, i am listed as an intern :(
[22:23:28] hahaha
[22:23:50] you should likely be moved to restricted
[22:24:19] i am in there as well
[22:24:37] don't think i can create a project anymore
[22:25:40] * Ryan_Lane nods
[22:25:43] thought so
[22:25:55] edsu: what will you be working on again?
[22:26:07] wikistream?
[22:28:01] RECOVERY host: fundraising-civicrm is UP address: fundraising-civicrm PING OK - Packet loss = 0%, RTA = 158.78 ms
[22:31:25] PROBLEM Total Processes is now: CRITICAL on fundraising-civicrm fundraising-civicrm output: CHECK_NRPE: Error - Could not complete SSL handshake.
[22:31:25] PROBLEM Current Load is now: CRITICAL on fundraising-civicrm fundraising-civicrm output: CHECK_NRPE: Error - Could not complete SSL handshake.
[22:31:45] PROBLEM Current Users is now: CRITICAL on fundraising-civicrm fundraising-civicrm output: CHECK_NRPE: Error - Could not complete SSL handshake.
[22:31:45] PROBLEM Disk Space is now: CRITICAL on fundraising-civicrm fundraising-civicrm output: CHECK_NRPE: Error - Could not complete SSL handshake.
[22:31:45] PROBLEM Free ram is now: CRITICAL on fundraising-civicrm fundraising-civicrm output: CHECK_NRPE: Error - Could not complete SSL handshake.
[22:32:08] Jeff_Green: please re-run puppet on the instance
[22:32:18] stupid nagios doesn't work on first run for some reason
[22:32:35] PROBLEM dpkg-check is now: CRITICAL on fundraising-civicrm fundraising-civicrm output: CHECK_NRPE: Error - Could not complete SSL handshake.
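Re-running puppet on a labs instance of this era meant logging in and forcing an agent run by hand; a sketch, assuming sudo rights on the instance:

    sudo puppetd --test --verbose         # puppet 2.6-era command name
    sudo puppet agent --test --verbose    # equivalent on puppet 2.7 and later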
[22:35:30] yeah i nuked it
[22:58:10] Ryan_Lane: yes, that was the plan: wikistream, at least at first
[23:00:31] Ryan_Lane: i guess it would be nice to have a sandboxed environment for wikistream if possible, rather than disturb an existing project, but i'm still new here, so let me know how you think it will work best
[23:00:42] a new project is fine
[23:00:53] Ryan_Lane: wikistream is a node app that listens on a bunch of irc channels basically
[23:00:54] I'll call it wikistream
[23:00:56] * Ryan_Lane nods
[23:00:59] Ryan_Lane: sweet
[23:01:49] please read the docs on instances and security groups, at minimum, before creating instances
[23:02:04] it'll make things a lot easier ;)
[23:02:27] 02/28/2012 - 23:02:26 - Creating a project directory for wikistream
[23:02:27] 02/28/2012 - 23:02:27 - Creating a home directory for laner at /export/home/wikistream/laner
[23:02:27] 02/28/2012 - 23:02:27 - Creating a home directory for edsu at /export/home/wikistream/edsu
[23:02:32] Ryan_Lane: ok, will do
[23:02:46] Ryan_Lane: and thanks!
[23:03:05] yw
[23:03:26] 02/28/2012 - 23:03:26 - Updating keys for edsu
[23:03:27] 02/28/2012 - 23:03:26 - Updating keys for laner
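The security-group docs Ryan points edsu at govern what traffic can reach an instance. Labs managed these rules through the labsconsole web interface, but the underlying EC2-style rule looks roughly like this euca2ools call; the group name and port are illustrative only:

    # Allow inbound TCP 8080 (e.g. for a node app) to instances in the 'default' group.
    euca-authorize -P tcp -p 8080 -s 0.0.0.0/0 default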