[00:45:17] Damianz: that's not very easy to implement
[00:45:33] It wouldn't be fun if it was easy ;)
[00:45:44] patches welcome ;)
[00:46:09] https://wiki.openstack.org/wiki/Main_Page?skin=strapping
[00:46:10] ;)
[00:46:27] wait
[00:46:27] https://wiki.openstack.org/wiki/Main_Page?useskin=strapping
[00:48:01] I need to update the CSS to make external links show differently
[00:48:35] navigation dropdown is slightly incorrect as well
[02:13:27] I seriously wonder what drugs you require to skin mediawiki sometimes
[02:43:54] Damianz: lots
[02:43:54] https://wiki.openstack.org/wiki/Main_Page
[02:43:58] now it's the default :)
[03:41:28] !log accounts-creation-assistance Getting strange puppet errors related to SSL on accounts-puppetmaster, going to try rebooting.
[03:41:28] accounts-creation-assistance is not a valid project.
[03:41:33] Derp
[03:41:35] !log account-creation-assistance Getting strange puppet errors related to SSL on accounts-puppetmaster, going to try rebooting.
[03:41:37] Logged the message, Master
[03:41:41] Danke, labs-morebots
[03:48:59] !log account-creation-assistance Scratch last log entry, error fixed by using `sudo puppetca --sign "i-000005c5.pmtpa.wmflabs"`
[03:49:00] Logged the message, Master
[04:49:54] Change on mediawiki a page Wikimedia Labs/TODO was modified, changed by MPelletier (WMF) link https://www.mediawiki.org/w/index.php?diff=647904 edit summary: [-250] wip wip wip
[19:27:06] Ryan just has no love for YP
[19:28:42] I have less than no love for it
[19:41:52] YP is teh satan. Worse, it's an obsolete satan. :-)
[19:43:11] indeed
[19:44:24] NIS+ was sorta okay-ish, but makes LDAP look like a simple and elegant design by comparison.
[19:44:36] yep
[19:44:45] NIS+ has been gone for a very, very long time, though
[19:45:21] I remember using it in ~2000 last, and even then only because it was a Solaris shop.
[19:46:13] But that's all moot anyways, there's no reason to not stuff those users in LDAP, just in a different OU.
(Hell, it'd probably be wise to make having a per-project OU a general pattern)
[19:46:35] it's best to keep things flat
[19:46:56] did you see my reply?
[19:47:48] Just did. Am I allowed to think you're insane still while I'm not yet officially working or is that a privilege reserved for coworkers? :-)
[19:48:27] :D
[19:48:56] In my experience, "Keep things in a flat namespace" is the wrong answer no matter what the question is. :-)
[19:49:01] how would normal user auth work without including both ous?
[19:49:09] then there's still clashes
[19:49:17] *and* hierarchy
[19:49:18] :)
[19:49:29] it's the worst of both worlds! :)
[19:49:41] That's one... strange way of looking at it. :-)
[19:50:20] how does hierarchy help here?
[19:50:32] (note that we do use hierarchy like this for sudo)
[19:50:49] each project has a sudo ou under its project cn
[19:50:58] I think you want to have a generic prefix for per-project {user,group}names, certainly, but you want the hierarchy to allow the same ID ranges and names to be used in different projects.
[19:51:16] So that one doesn't affect the others.
[19:51:29] I don't understand how that would work
[19:51:52] how does authentication work in that structure?
[19:52:15] Well, let's say we set aside 20000-29999 for uid and gid per-project.
[19:52:34] Wait, auth? Why auth? You can only auth against "real" users, not per-project UIDs.
[19:52:57] ah. ok. I see what you're getting at
[19:53:25] Oh, and I see what you thought I meant. :-)
[19:53:34] if we did that, I'd prefer it to be a completely separate ou
[19:53:38] not under ou=people
[19:53:47] Oh, yeah, that'd work too.
[19:53:57] then have a per-project user base
[19:54:05] we'd still need to prefix the account names
[19:54:43] Yeah, but it doesn't have to be complicated; we can even do something like a single character.
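The layout being agreed on here — per-project service users in their own OU outside ou=people, prefixed names, a reserved uid/gid range — might look roughly like the following LDIF. This is only a sketch: the dc=example suffix, the project name 'testproject', and the tool name 'mytool' are all invented for illustration, not taken from the real directory.

```ldif
# Hypothetical per-project subtree, deliberately outside ou=people.
dn: ou=local-users,cn=testproject,ou=projects,dc=example,dc=org
objectClass: organizationalUnit
ou: local-users

# A per-tool service user: name prefixed with 'local-', uid/gid drawn
# from the reserved 20000-29999 range, no login shell, no credentials
# (you can only authenticate against "real" users, not these).
dn: uid=local-mytool,ou=local-users,cn=testproject,ou=projects,dc=example,dc=org
objectClass: account
objectClass: posixAccount
uid: local-mytool
cn: local-mytool
uidNumber: 20001
gidNumber: 20001
homeDirectory: /data/project/mytool
loginShell: /bin/false
```

Because the same 20000-29999 range is reused per project, two projects can both have a uid 20001 without colliding, since lookups on a given instance only include that project's OU.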
[19:55:07] we use project- for project groups
[19:55:09] I'd rather avoid usernames like 'complicatedproject-longservicename'
[19:55:35] I would say 'local-' would work.
[19:55:40] does nslcd allow multiple separate ous for lookups?
[19:55:48] local- sounds good
[19:56:21] PAM does, even if nscd didn't; but all you need to do IIRC is just specify a list of base DNs rather than just the one.
[19:56:31] pam?
[19:56:38] pam uses nslcd
[19:56:44] so does nss
[19:57:04] nslcd != nscd :)
[19:57:11] Yeah, but with pam you could have more than one check successively with different parameters
[19:57:19] nslcd is the nss_ldap replacement
[19:57:37] pam is just used for authentication
[19:57:41] it's not involved in this at all
[19:57:46] only nss is
[19:57:55] using nslcd
[19:58:14] Wait, wait, my own brain is failing -- I was going back to your 'how does authentication work' again. :-)
[19:58:58] base [MAP] DN
[19:58:59] Specifies the base distinguished name (DN) to use as search base. This option may be supplied multiple times and all specified bases will be searched.
[19:59:18] So yeah, that'd work.
[19:59:32] yep
[19:59:49] we'd want to specify it for passwd and shadow
[20:00:11] maybe group as well, I guess
[20:01:02] we could allow projectadmin users to manage it as well
[20:01:05] Of course group as well; you almost certainly want individual tools to have their own groups to allow for sgid directories
[20:01:13] yeah
[20:01:59] I know I'd want those for labs, and allow sudoers to the tool-uid from members of the tool-gid.
[20:02:14] (so that services would be started as tool-uid)
[20:02:29] for tools*
[20:03:13] how are you going to handle sudo membership for this?
[20:04:12] From the group membership would be easiest. %local-group hosts = (local-user) xxx
[20:07:34] ah. that's a good idea
[20:07:51] so global users could be added into the local group
[20:08:01] Exactly.
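Pulling the two mechanisms above together: nslcd's `base` option really can be given once per map and repeated, and the rule Coren quotes is ordinary sudoers Runas syntax. A sketch — every DN, project name, and tool name below is invented for illustration:

```
# /etc/nslcd.conf (sketch): search the global tree and the per-project
# one; 'base [MAP] DN' may be supplied multiple times per map.
base passwd ou=people,dc=example,dc=org
base passwd ou=local-users,cn=testproject,ou=projects,dc=example,dc=org
base shadow ou=people,dc=example,dc=org
base shadow ou=local-users,cn=testproject,ou=projects,dc=example,dc=org
base group  ou=groups,dc=example,dc=org
base group  ou=local-groups,cn=testproject,ou=projects,dc=example,dc=org

# sudoers (sketch): members of the tool's group may run commands as the
# tool's service uid, so services end up started as that uid. Global
# users become eligible simply by being added to the local group.
%local-mytool ALL = (local-mytool) ALL
```

The same policy could equally live in the per-project sudo-ldap tree mentioned below rather than in flat sudoers files.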
[20:08:28] we have per-project sudo-ldap, btw
[20:08:34] afk (fetching tea) brb
[20:08:35] could manage the policies there
[20:22:04] back
[20:22:30] Huh. Didn't look into sudo implementation yet -- I had just presumed you did the "standard" /etc/sudoers from puppet template source.
[20:27:47] Even with a separate ou, still gonna get duplicates
[20:28:01] same reason project groups are prefixed currently
[20:30:21] Coren: sudo-ldap
[20:30:39] also CAKE
[20:31:05] Damianz, we agreed that 'local-' would suffice as prefix
[20:31:09] FastLizard4|zZzZ: Yes - I can make you a login then /login on the nagios page will give you access. ideally we'd use ldap but sec risk ssl wise... once oauth is in maybe
[20:32:45] forced openid would be nice
[20:32:47] Too much text to read - but that sorta makes sense
[20:33:24] Ryan_Lane: Well for nagios ideally project admins would have access to their instances etc so oauth would do nice permissions... openid+a bunch of ldap searching would do also though
[20:33:41] does nagios have oauth support?
[20:33:44] I kind of doubt it :)
[20:33:48] nope
[20:34:06] We can fix that though.... auth over the top - as long as you pass it a username it can handle perms
[20:34:17] The script needs updating to add usernames per project before that would work anyway
[20:34:20] * Ryan_Lane nods
[20:35:59] When's asm? may?
[20:36:31] asm?
[20:37:12] err.. flip s and m around... ams
[20:37:34] ah
[20:37:37] yeah, may
[20:37:47] shit. I forgot to submit a talk for the openstack summit
[20:37:51] 'ams'?
[20:38:08] Amsterdam hackathon
[20:38:10] amsterdamn
[20:38:43] Wonder if I can find the motivation to finish puppetizing bots stuff before then or just do it in May
[20:40:09] Hmmm portland for openstack... that's a long way, think I'm doing europython this year... next year I'll do the defcon/pycon/openstack cool confs
[20:54:00] Ryan_Lane: hi
[21:16:49] Coren: mind adding a bug for per-project user/groups?
[21:17:02] use the infrastructure component
[21:17:34] Will do shortly.
[21:18:08] I know you have a TODO somewhere for the tool labs stuff
[21:18:22] http://www.mediawiki.org/wiki/Wikimedia_Labs/TODO
[21:18:51] But it's a preliminary skeleton / guesstimation. None of what's on this is set in stone.
[21:18:53] I've been making "project" subpages
[21:18:56] like: http://www.mediawiki.org/wiki/Wikimedia_Labs/Stability_improvement_project
[21:19:16] and have been tracking the bugs in them
[21:19:58] can we rename TODO to a more specific project?
[21:20:26] Toolserver migration project, maybe?
[21:20:50] Perhaps the more generic "/Wikimedia Labs/Tools Lab"?
[21:21:07] that's fine too
[21:22:43] (Coren moved page Wikimedia Labs/TODO to Wikimedia Labs/Tools Lab: More generic, yet more precise. Crunchy, yet satisfying.)
[21:23:06] I'm about to start prepping my food. Will do the bug after?
[21:23:14] Change on mediawiki a page Wikimedia Labs/Stability improvement project was modified, changed by Ryan lane link https://www.mediawiki.org/w/index.php?diff=648127 edit summary: [+88] /* Gluster */
[21:23:18] yeah. no rush
[21:24:04] Politically saying "Toolserver migration" as focus is a poor fit; the point is to make a good Tools lab they'll *want* to migrate to. :-)
[21:24:11] indeed
[21:24:28] Change on mediawiki a page Wikimedia Labs/Stability improvement project was modified, changed by Ryan lane link https://www.mediawiki.org/w/index.php?diff=648128 edit summary: [-14] /* Monitoring */
[21:25:58] Change on mediawiki a page Wikimedia Labs/Stability improvement project was modified, changed by Ryan lane link https://www.mediawiki.org/w/index.php?diff=648129 edit summary: [+27] /* Gluster */
[21:26:38] Wikinaut: I haven't had time to make another wiki
[21:27:55] Ryan_Lane: I understand. My approach is this: to make a symlinked copy of the root files of /srv/mediawiki, and to (try to) adapt the apache2 server settings.
[21:28:05] let me know what your approach would be, pls
[21:28:23] I would honestly just create another instance
[21:30:02] can you then pls. simply make a clone openid-wiki2 from openid-wiki
[21:30:12] if you have time
[21:30:14] ty
[21:30:27] when I get a chance, yes
[21:30:29] pls. can you let me know by mail, if you don't forget this
[21:43:54] Ryan_Lane: Jelly! It would be more stable :D
[21:44:02] Damianz: :D
[21:44:11] I'd like to use our netapp for this
[21:44:16] Good luck stealing that
[21:44:32] and try a distributed filesystem again later, when they are usable for this use case
[21:44:42] we aren't using it for much
[21:45:49] I'm still really hoping Ceph comes through, but I've not seen enough real world usage off of ssds to justify trialing it in prod... and I don't really have a case for dfs right now, since our replication challenges are db not fs
[21:46:15] well, it doesn't fit our usecase
[21:46:31] ah, you mean where you work
[21:46:38] Yeah
[21:46:55] The only place it would fit here is enabling us to failover/migrate vms - since now we'd just lose a few hundred
[21:47:04] But then meh hardware, cheap and relitlvy quick to replace
[21:47:21] relatively* totally can't type today
[21:47:30] adding in a dfs means that io issues on one node turn into io issues on all nodes
[21:47:48] I'd prefer to be able to do live migration without a shared filesystem
[21:47:56] it works in xen, apparently
[21:48:03] yeah it does
[21:48:11] Can do it in kvm I believe... never tried
[21:48:26] Basically pauses io for a few seconds on migration to sync memory state AFAIK
[21:49:12] it was broken very badly in kvm last time I tried, remember?
:)
[21:49:16] It would be interesting to see how much io we cause just running puppet at the same time on everything without splay - bet it would cause a little laaaggggg
[21:49:24] and upstream keeps talking about removing the featuee
[21:49:28] *feature
[21:50:12] I've done that a couple times
[21:50:17] it's usually ok
[21:50:17] Weirdly though... I've used live cpu/ram resizes in kvm fine and it breaks for us under openstack :P Never sure where issues lie around here
[21:50:30] it does cause quite a jump in usage, though
[21:50:39] heh
[21:51:27] I remember the days when dumps ate the disks every few hours :D that was fun hah
[21:52:07] Though I'm not convinced beta is any better.... stuff doing git on the core repo is really ruddy slow, imagine it bottlenecks on io before you see wait though
[21:54:39] yeah
[21:54:53] when we switch deployment systems, we'll move away from using project data for that
[21:56:02] Ryan_Lane: Honestly, Ryan, I'd look at LVMs over MDs over some iSCSI
[21:56:24] Especially if you multipath
[21:57:35] urgh multipath...
[21:58:13] I had to configure an archaic fibre channel card for multipath the other day... wow was that not fun, stupid drivers
[21:58:14] Meh, it's ugly but if you want never-down stuff it's the way to go.
[21:58:48] In my experience, shared-disk filesystems end up being more reliable than distributed ones.
[21:58:57] yes
[21:58:58] they are
[22:01:49] I've seen a quite good vm implementation using NBD for storage - just point kvm straight at an ipv6 address that has an NBD server serving up disk space, and boom; if you feel the need then also drbd your backends (but drbd is evil).
[22:02:23] drbd is evil, yes
[22:02:30] ganeti uses it extensively
[22:02:35] Damianz: That's good for instance-local storage, but the current use case for gluster is the stuff that's /shared/
[22:02:45] they have a pretty interesting way of handling disk images
[22:02:48] I like the idea of ganeti but yeah...
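The NBD setup Damianz describes needs nothing beyond stock qemu. A command sketch — the image path, port, and loopback address are invented, and the exact nbd: address syntax (notably bracketed IPv6 literals) varies between qemu versions:

```sh
# Server side: export a disk image over NBD on an IPv6 address.
# (qemu-nbd ships with qemu; -b binds the listen address, -p the port.)
qemu-nbd -b ::1 -p 10809 /srv/images/vm1.img

# Client side: boot kvm straight off the export - no local disk at all.
kvm -m 1024 -drive file=nbd:[::1]:10809,if=virtio
```

This gives each instance its own remote block device rather than a shared filesystem, which is why Ryan notes it fits instance-local storage but not the /shared/ gluster use case.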
node to node drbd for failover made me cringe
[22:02:53] heh
[22:03:29] I'd suggest ocfs2 but oracle are bastards :D
[22:04:19] They are. CXFS is my fave, but it's not opensource.
[22:04:54] I nearly died when I saw how much our company pays oracle a year in licensing... multiples of millions =\
[22:05:54] what's-his-name isn't rich because of Oracle's upstanding open source contributions. :-)
[22:06:25] But isn't OCFS2 mainline in the kernel since before 3.0?
[22:06:35] * Coren needs to look into it.
[22:08:02] IIRC it was open and in kernel, then oracle pushed the new version and closed it up in licensing or something
[22:08:10] You know, like they do to mysql every few years
[22:11:12] The suse peeps seem to like it.
[22:14:29] hm. ocfs2 looks like an interesting solution
[22:15:07] Oh god, what have I done
[22:15:09] :P
[22:15:10] :D
[22:15:40] !log webtools Installed sqlite3 on webtools-login per scfc_de request
[22:15:41] Logged the message, Master
[22:15:48] You know we could just use nfs and shard projects across storage nodes for io purposes :D
[22:15:52] * Damianz hides
[22:16:18] I want to avoid a prolonged outage due to a hardware failure
[22:16:25] though at this point even that would be better
[22:20:27] meh for ocfs2
[22:20:55] !log webtools Installed libdbd-sqlite3-perl on webtools-apache1 per scfc_de request
[22:20:56] Logged the message, Master
[22:21:36] one of the biggest reasons I want to steal the netapp is because I don't want to have to manage a set of fileservers
[22:21:50] it's too time consuming.
[22:25:49] heh
[22:26:00] !log webtools Installed build-essential libtool autoconf on webtools-login; scfc_de wants libtool, and it already pulls most of the dependencies
[22:26:01] Logged the message, Master
[22:26:10] risk vs reward also...
[22:26:28] !log webtools Installed libdbd-sqlite3-perl on webtools-login per scfc_de request
[22:26:29] Logged the message, Master
[22:27:26] Damianz: risk vs reward?
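For reference, a minimal OCFS2 deployment with the o2cb cluster stack really is small, which is part of its appeal as a shared-disk option. A sketch — the cluster name, node names, and addresses are invented, and fields in cluster.conf are conventionally tab-indented:

```
# /etc/ocfs2/cluster.conf (sketch) - identical on every node.
cluster:
	node_count = 2
	name = labs

node:
	ip_port = 7777
	ip_address = 192.0.2.11
	number = 0
	name = fs1
	cluster = labs

node:
	ip_port = 7777
	ip_address = 192.0.2.12
	number = 1
	name = fs2
	cluster = labs
```

Once o2cb is online on both nodes, the shared block device is formatted once with slots for each node (`mkfs.ocfs2 -N 2 /dev/sdb`) and then mounted normally on every node with `mount -t ocfs2`; the cluster stack fences nodes to keep the shared disk consistent.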
[22:27:34] mhm
[22:27:50] risk of managing fileservers vs the reward you get from managing your own fileservers
[22:27:53] ah
[22:27:54] right
[22:27:55] indeed
[22:29:30] there's one downside. it eats up some political capital
[22:29:56] since we're saying open source only, then using something proprietary
[22:32:16] true, but using that is less likely to get us sued
[22:32:28] yep
[22:32:34] !log webtools Installed apt-file on webtools-login per scfc_de request
[22:32:35] Logged the message, Master
[22:33:56] If we really wanted to be picky we'd argue over the level of openness and use maria over mysql etc
[22:55:39] "Get us sued"?
[22:56:00] that was regarding the use of proprietary software
[22:56:01] * Coren tries to figure out how using a Netapp box is less likely to get us sued.
[22:56:07] and allowing end-users to use it
[22:56:12] Ah!
[22:56:39] Yeah, tbh though, I can see Damianz's point. There /is/ something to be said for just plain ol' reliable NFS
[22:57:34] indeed
[23:04:38] !log webtools Installed libhtml-parser-perl libwww-perl liburi-perl on webtools-login and webtools-apache-1 per scfc_de request
[23:04:39] Logged the message, Master
[23:59:20] Damianz: Ahh, I see. Well, if it's too much trouble, I'm fine waiting until the unified system is in place