[00:00:49] Ryan_Lane: some people believe that being a checkuser on any wmf site or having access to private data requires me to be identified to wmf
[00:00:52] is that true?
[00:01:09] hm
[00:01:15] generally, yes
[00:01:33] hm... does it mean I should identify then?
[00:01:40] PROBLEM Current Load is now: WARNING on bots-sql3 bots-sql3 output: WARNING - load average: 5.44, 9.86, 7.94
[00:01:40] I'm not sure if this is the same with prototype/beta
[00:01:50] I mean, in my case it's no problem but it would complicate account creation for others
[00:01:54] lemme write up an email asking about this stuff
[00:01:59] wmf people already have my data
[00:02:08] at least Maryana and steven
[00:02:10] PROBLEM Current Load is now: WARNING on incubator-nfs incubator-nfs output: WARNING - load average: 3.16, 7.82, 7.02
[00:11:40] RECOVERY Current Load is now: OK on bots-sql3 bots-sql3 output: OK - load average: 0.11, 1.62, 4.34
[00:17:10] RECOVERY Current Load is now: OK on incubator-nfs incubator-nfs output: OK - load average: 1.14, 2.34, 4.24
[01:06:04] 01/31/2012 - 01:06:04 - Creating a home directory for andrew at /export/home/gluster/andrew
[01:07:05] 01/31/2012 - 01:07:04 - Updating keys for andrew
[01:09:51] ryan: have you ever tried Diff All Unified in Gerrit?
[01:10:06] yeah
[01:10:08] you should try that on a patch with 20 or more files :D
[01:10:12] yep
[01:10:16] retarded
[01:10:16] it's ridiculous
[01:10:35] old skool window clicking
[01:10:43] i haven't seen that since '98
[01:10:48] this is one of the things Roan wants to fix
[01:10:54] yes! please!
[01:27:50] PROBLEM Current Load is now: CRITICAL on gluster-driver-dev gluster-driver-dev output: Connection refused by host
[01:28:20] PROBLEM dpkg-check is now: CRITICAL on gluster-driver-dev gluster-driver-dev output: Connection refused by host
[01:28:23] Currently, the test suite for my software uses test.wikipedia.org. But people keep being personally offended that testwiki is used for testing, so I'd like to have a wiki that's solely for this test suite. Is this something you guys can help with?
[01:29:10] PROBLEM Total Processes is now: CRITICAL on gluster-driver-dev gluster-driver-dev output: Connection refused by host
[01:29:15] PROBLEM Disk Space is now: CRITICAL on gluster-driver-dev gluster-driver-dev output: Connection refused by host
[01:30:30] PROBLEM Current Users is now: CRITICAL on gluster-driver-dev gluster-driver-dev output: Connection refused by host
[01:33:10] PROBLEM Free ram is now: CRITICAL on gluster-driver-dev gluster-driver-dev output: Connection refused by host
[01:41:09] alot_of_mike: what's the test suite do?
[01:42:09] For example, it reads a page with known content and checks that it was fetched properly. It edits a page to make sure it can. It fetches the contents of a category with known contents to make sure the list is correct when fetched. And so on.
[01:42:35] what are you testing?
[01:42:56] a bot? a library?
[01:43:28] A bot library. https://metacpan.org/module/MediaWiki::Bot
[01:43:46] ah. yeah
[01:43:58] you should be able to use any mediawiki install for this
[01:44:34] Yes, well, we want something configured in a WMF-ish way, and we also want one away from prying eyes that get offended by seeing random bot edits.
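For reference, a minimal sketch of the kind of round-trip test being described, written against the MediaWiki::Bot module linked above; the wiki host, page titles, and credentials are hypothetical placeholders, not the suite's actual fixtures:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Test::More tests => 3;
use MediaWiki::Bot;

# Hypothetical target wiki and credentials; the point of the request above
# is to have a dedicated wiki here instead of test.wikipedia.org.
my $bot = MediaWiki::Bot->new({ host => 'testwiki.example.org' });
$bot->login({ username => 'TestBot', password => 'secret' })
    or BAIL_OUT('login failed');

# Read a page with known content and check that it was fetched properly.
my $text = $bot->get_text('Test suite/Known content');
is($text, 'Known content.', 'fetched page text matches');

# Edit a page to make sure we can.
ok($bot->edit({
    page    => 'Test suite/Sandbox',
    text    => 'Edited by the test suite at ' . time(),
    summary => 'test edit',
}), 'edit succeeded');

# Fetch a category with known contents and check the listing.
my @members = $bot->get_pages_in_category('Category:Test suite pages');
is_deeply([sort @members],
          ['Test suite/Known content', 'Test suite/Sandbox'],
          'category listing matches');
```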
[01:44:43] So apparently test.wikipedia.org is no good 9_9
[01:51:34] well, having a wiki that is wmf-like is not terribly easy
[01:51:51] they are doing so in the deployment-prep project, but that's for testing deployment
[01:52:49] what the labs environment gives you is the ability to create virtual machines and work in a shared environment
[01:53:19] maybe the deployment prep people wouldn't mind you testing in a beta.wmflabs.org wiki
[01:53:33] it seems like an appropriate place for testing libraries
[01:53:41] !project deployment-prep
[01:53:41] https://labsconsole.wikimedia.org/wiki/Nova_Resource:deployment-prep
[01:54:02] talk to one of the members of that project for access
[01:54:13] you really only need wiki editing access for that
[01:54:57] if you wanted to also have access to automated testing infrastructure (like jenkins) and have your code hosted in our repos, that's also possible
[07:43:30] PROBLEM Free ram is now: CRITICAL on incubator-bots incubator-bots output: Critical: 5% free memory
[07:48:30] PROBLEM Free ram is now: WARNING on incubator-bots incubator-bots output: Warning: 6% free memory
[07:53:30] PROBLEM Free ram is now: CRITICAL on incubator-bots incubator-bots output: Critical: 5% free memory
[09:38:30] PROBLEM Free ram is now: CRITICAL on incubator-bots incubator-bots output: Critical: 5% free memory
[11:17:31] New patchset: Dzahn; "adding index.html for planet test site" [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2153
[11:17:50] New review: gerrit2; "Lint check passed." [operations/puppet] (test); V: 1 - https://gerrit.wikimedia.org/r/2153
[11:18:30] New review: Dzahn; "btw, i used "git review" for this. works fine so far:)" [operations/puppet] (test); V: 1 C: 2; - https://gerrit.wikimedia.org/r/2153
[11:18:30] Change merged: Dzahn; [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2153
[11:27:56] mutante: Yay git review!
[11:36:40] RoanKattouw: installed yesterday, today it told me to upgrade, works:)
[11:36:52] what is git-review?
[11:36:58] Yeah
[11:36:59] is that a frontend to gerrit to make stuff easier?
[11:37:11] You don't have to pull the Wikimedia fork any more, they put my changes in the new release
[11:37:20] hashar: https://labsconsole.wikimedia.org/wiki/Git-review
[11:38:19] oh pip
[11:39:40] ahh so git-review pushes the change to a ref named according to our local branch, right?
[11:39:46] that will make stuff easier to track down
[11:39:52] yea
[11:40:15] I used to create a local branch and then manually added a field like 'Local-Branch: my_local_branch_name'
[11:40:20] see the branch names for the last 2 merges in gerrit
[11:40:25] then deleted it manually once the change was merged
[11:40:53] ah topic
[11:40:54] perfect
[11:41:03] hashar, and the mobile CSS is online now
[11:41:23] yeah, thanks for that merge. I have notified tfinc about it
[11:41:32] that will make it easier to fetch the mobile app nightly builds
[11:41:35] thanks!
[11:42:19] yw, accessibility for people with large fingers :)
[11:43:41] will have a look at git-review later on
[11:43:52] will probably end up packaging it for Mac homebrew :)
[11:44:58] see you after lunch
[11:45:14] yep, cu later
[12:53:30] RECOVERY Free ram is now: OK on incubator-bots incubator-bots output: OK: 50% free memory
[12:58:55] let's package git-review
[12:59:23] <^demon> I remember hearing Roan or someone talking about it already going through the ubuntu packaging process.
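A short sketch of the git-review workflow being praised here, assuming a Gerrit-hosted clone; the project and branch names are made up for illustration:

```sh
# git-review is a Python tool, installable via pip ("oh pip", above)
pip install git-review

cd operations-puppet    # hypothetical local clone of a Gerrit project
git review -s           # one-time setup: adds the gerrit remote and the
                        # Change-Id commit hook

git checkout -b planet-index   # work on a topic branch, not master
# ...edit files...
git commit -a -m "adding index.html for planet test site"

git review              # pushes to refs/for/<target branch>; the local
                        # branch name becomes the Gerrit topic
```

This is what replaces the manual 'Local-Branch:' field hashar describes: the topic shown in Gerrit is taken from the local branch name automatically.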
[12:59:35] hi Chad
[12:59:43] we talked about it before lunch
[12:59:50] upstream merged Roan's change
[13:00:10] just going to package it for Mac OS X homebrew
[13:34:27] Upstream is building a .deb too
[13:34:35] They wanna put it in the Ubuntu 12.04 release
[14:02:14] Change on 12mediawiki a page Wikimedia Labs was modified, changed by Sumanah link https://www.mediawiki.org/w/index.php?diff=492619 edit summary: /* Communications */ IRC and mailing list
[14:46:09] !bug
[14:46:09] https://bugzilla.wikimedia.org/show_bug.cgi?id=$1
[14:55:50] Change on 12mediawiki a page Wikimedia Labs/status was modified, changed by Sumanah link https://www.mediawiki.org/w/index.php?diff=492643 edit summary: /* 2012-01-31 */ new section
[14:55:51] Change on 12mediawiki a page Wikimedia Labs/status was modified, changed by Sumanah link https://www.mediawiki.org/w/index.php?diff=492643 edit summary: /* 2012-01-31 */ new section
[15:31:38] RoanKattouw: git-review is a must have :)
[15:31:47] Yes it is :)
[15:31:56] that makes everything muuuuucchhhh easier
[15:32:03] my first use: https://review.openstack.org/#change,3577
[15:32:08] I learned about it from a presentation that the OpenStack people gave at LCA
[15:32:23] Only thing was I had to hack it a bit to get it to work with branches that aren't named 'master'
[16:02:06] Reedy: Hi. be.wikimedia.org is ready to be installed in the meantime
[16:02:20] I saw. Thanks :)
[16:03:21] yw, guess they waited a while
[16:04:06] heh
[16:04:07] it happens
[16:04:14] we don't have an SLA
[16:04:16] * Reedy grins
[16:05:02] :)
[17:10:24] Can someone grant me rights to the beta.wmflabs wikis please?
[17:10:33] onwiki that is
[18:34:26] ssmollett: morning!
[18:37:53] Ryan_Lane: Do I need a separate keypair for each project? I can reach nova-dev2 but not gluster-driver-dev.
[18:38:06] hm
[18:38:24] no. it's one shared between all
[18:38:38] It is safe to assume that I am doing something silly.
[18:38:49] nah. there may be some other problem
[18:39:13] damn. your instance failed to build
[18:39:25] our apt-mirror was dead at some point yesterday
[18:39:34] you should delete/recreate
[18:40:09] ok
[18:40:38] I need to turn off access logging on that apt box
[18:43:09] Can someone grant me rights to the beta.wmflabs wikis please?
[18:43:41] Reedy: that would be petan, hexmode, etc
[18:43:58] yeah
[18:44:22] how long till we can kill prototype? :)
[18:44:23] Reedy: I'll do it
[18:44:33] give me a couple min
[18:44:37] Ryan_Lane, AFAIK all the wikis are now in readonly mode
[18:44:44] on prototype?
[18:44:47] ya
[18:44:51] \o/
[18:44:54] getting closer then
[18:44:57] yup
[18:45:02] you guys just let me know
[18:45:09] it's one of the few things still running there
[18:45:23] closed.dblist lists 5 _labswikimedia wikis
[18:47:35] I want to kill *all* of the labs wikis
[18:47:38] the ones in the cluster
[18:47:53] I also want test.wikipedia.org dead
[18:47:59] in preference to test2
[18:48:10] beta steward for Reedy coming up
[18:48:43] we have too many wikis :D
[18:48:49] Ryan_Lane, isn't that all of them? Can't see any others listed
[18:49:12] for the labs ones I'd like them fully gone
[18:49:19] why keep them around even in read mode?
[18:49:24] they confuse people
[18:49:58] Reedy: should have it now
[18:50:01] cheers
[18:50:38] Deleting wikis is an undocumented process
[18:50:46] true
[18:50:49] They were getting spammed, so closing was the simplest intermediary
[18:50:53] * Ryan_Lane nods
[18:52:10] good that it has a closed banner now
[18:52:17] Ryan_Lane: Looks like it failed again. Same problem? https://labsconsole.wikimedia.org/w/index.php?title=Special:NovaInstance&action=consoleoutput&project=gluster&instanceid=i-00000123
[18:52:21] I get questions about those wikis occasionally
[18:52:30] crap
[18:52:37] is the stupid apt box dead again?
[18:52:41] Well, I used oneiric this time...
[18:52:44] oh
[18:52:45] hm
[18:52:47] still should work
[18:54:11] ugh. it's timing out :(
[18:55:05] ok. try now
[18:55:17] I had to restart squid and lighttpd on brewster
[18:55:46] ok. I'm going to fix its logging right now
[18:59:07] It seems happier this time
[18:59:11] cool
[19:00:58] Ryan_Lane: In other news... shall I go ahead and delete nova-dev1? I was the only one using it, right? (Now that I have devstack on dev2 I never visit dev1 any more. And I never call. And I never write.)
[19:01:23] oh. sure
[19:01:27] :D
[19:01:35] was it getting lonely?
[19:01:56] Reedy: hi
[19:02:01] ssmollett: how's ganglia coming along?
[19:02:10] (I'm pretty sure ssmollett doesn't have pings turned on :D )
[19:02:30] Would it be useful for labsconsole to keep records about who uses/cares about a given instance? Maybe just a per-instance wiki page?
[19:02:47] hm.
[19:03:01] well, we can see who created it, via its resource page
[19:03:16] Reedy: I gave you steward, I hope it's enough
[19:04:00] !instance i-00000124
[19:04:00] https://labsconsole.wikimedia.org/wiki/Help:Instances
[19:04:03] heh
[19:04:09] well, that's not what I meant
[19:04:20] andrewbogott: https://labsconsole.wikimedia.org/wiki/Nova_Resource:I-00000124
[19:04:41] I'm not too sure how to track usage otherwise
[19:05:01] maybe we can have something like a last log
[19:05:30] I was thinking of just a place where someone can write a note: "I'm using this, please email blah@blah.com before deleting"
[19:05:35] ah
[19:06:15] yeah, that would be useful
[19:06:25] I'll need to think about how to add that to the interface
[19:06:27] (indeed she doesn't.)
[19:06:30] :D
[19:06:52] I worry that once there are too many instances for you to keep track of them personally, we'll end up with a ton of relic instances that no one is using but no one feels confident deleting either.
[19:08:13] I can't think of any way to track the last time an instance was used. Short of sshd logs, and even that isn't a complete picture.
[19:08:22] (This is, clearly, not an urgent problem.)
[19:08:46] PROBLEM host: nova-dev1 is DOWN address: nova-dev1 CRITICAL - Host Unreachable (nova-dev1)
[19:09:26] PROBLEM host: gluster-driver-dev is DOWN address: gluster-driver-dev CRITICAL - Host Unreachable (gluster-driver-dev)
[19:09:42] i'm trying to figure out a reasonable way to modify the ganglia puppet configs for labs.
[19:10:26] ah. yeah
[19:10:36] remember that you can use $realm where necessary
[19:10:39] !realm
[19:10:39] $realm is a variable used in puppet to determine which cluster a system is in. See also $site.
[19:10:46] Ryan_Lane: OK, now we have a new problem: "can't create /var/lib/dhcp3/dhclient.eth0.leases: No such file or directory."
[19:10:54] o.O
[19:12:05] hm
[19:12:07] weird
[19:12:17] well, puppet ran
[19:12:31] lemme see
[19:12:44] it pings
[19:12:54] and I can log in
[19:12:56] but.....
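A hedged sketch of the $realm pattern just mentioned, applied to the ganglia task ssmollett describes; the class name and file paths are illustrative guesses, not the actual manifests:

```puppet
# Hypothetical manifest fragment: choose a ganglia gmond config per cluster.
# $realm distinguishes labs from production hosts in this infrastructure.
class ganglia::config {
    $gmond_source = $realm ? {
        'labs'  => 'puppet:///files/ganglia/gmond-labs.conf',
        default => 'puppet:///files/ganglia/gmond-production.conf',
    }

    file { '/etc/ganglia/gmond.conf':
        ensure => present,
        source => $gmond_source,
    }
}
```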
[19:13:04] the DNS name is screwed now. lemme fix that for you
[19:13:43] andrewbogott: ok. you should be able to ssh in now
[19:14:23] when you delete/recreate an instance with the same name, the old name has already been cached in both of the resolvers
[19:14:33] RECOVERY host: gluster-driver-dev is UP address: gluster-driver-dev PING OK - Packet loss = 0%, RTA = 0.75 ms
[19:14:34] Makes sense.
[19:14:42] so I have to purge it from both manually
[19:14:49] it takes about an hour otherwise
[19:15:07] I wonder if I should just change the ttl to like 5 minutes
[19:16:01] only for *.wmflabs domains, of course. the public ones don't actually have a problem :)
[19:16:08] should I have the gluster commandline tool on that instance? Maybe it's just not in my path...
[19:16:16] there's unfortunately no way to purge the cache automatically
[19:16:30] oh. for some reason I remember this not working
[19:16:53] I think the init script doesn't install properly via puppet
[19:17:48] I wonder if it's a matter of missing quotes. fucking puppet
[19:19:13] hm. no. has quotes
[19:19:20] ah. install
[19:19:21] doesn't
[19:19:51] andrewbogott: let's add the generic client class
[19:21:04] New patchset: Ryan Lane; "Wrap true in quotes to see if it fixes the upstart job." [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2156
[19:21:33] oh.
[19:21:39] seems it's the same package
[19:22:30] right now you are definitely missing the init script, though
[19:22:35] and it does some magical set of things
[19:22:51] New review: Ryan Lane; "(no comment)" [operations/puppet] (test); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2156
[19:22:52] Change merged: Ryan Lane; [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2156
[19:24:01] this is good, it's prompting me to fix this stuff. heh
[19:24:57] oh. this is oneri
[19:24:59] *oneiric
[19:25:16] yeah.
[19:25:28] I need to add the package to that repo
[19:25:41] we use the package from gluster's site
[19:25:44] RECOVERY Total Processes is now: OK on gluster-driver-dev gluster-driver-dev output: PROCS OK: 80 processes
[19:26:02] unless oneiric has the right version
[19:26:04] RECOVERY Disk Space is now: OK on gluster-driver-dev gluster-driver-dev output: DISK OK
[19:26:35] 3.2.1-1 in oneiric
[19:26:42] 3.2.4 in production
[19:26:49] lemme get the package :D
[19:26:54] RECOVERY Free ram is now: OK on gluster-driver-dev gluster-driver-dev output: OK: 92% free memory
[19:27:20] Is the labs apt server serving that package directly, not leaving it up to the ubuntu mirrors?
[19:27:36] it's a different package name in the ubuntu mirrors
[19:27:44] labs hits our apt mirror directly
[19:28:14] RECOVERY Current Load is now: OK on gluster-driver-dev gluster-driver-dev output: OK - load average: 0.02, 0.07, 0.11
[19:28:21] hm. seems ubuntu support sucks a little now that redhat took over
[19:28:30] why doesn't this surprise me?
[19:28:39] I think I don't quite understand why labs has its own apt repo. It's because you want to swap in some package versions that differ from the official ubuntu versions?
[19:29:54] RECOVERY Current Users is now: OK on gluster-driver-dev gluster-driver-dev output: USERS OK - 2 users currently logged in
[19:30:00] it doesn't have its own
[19:30:24] hi roan, quick git-review question
[19:30:31] Sure
[19:30:42] i get a working directory is dirty error
[19:30:47] ???
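In zone-file terms, the TTL change being floated would look something like this; the record below is a hypothetical example, not a real labs entry:

```
; Hypothetical *.wmflabs zone fragment. A 300-second (5-minute) TTL on
; instance records bounds how long resolvers keep serving a deleted
; instance's old address, instead of the roughly one hour described above.
gluster-driver-dev.pmtpa.wmflabs.  300  IN  A  10.4.0.42
```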
[19:31:01] You probably have uncommitted changes
[19:31:05] Check git status
[19:31:46] andrewbogott: our infrastructure as a whole has an apt mirror
[19:31:57] no, i don't: Your branch is ahead of 'origin/master' by 1 commit.
[19:31:59] andrewbogott: then we also have a custom apt-repo
[19:32:09] hmm
[19:32:24] Well, you committed to master, that's also not precisely by the book
[19:32:24] the custom apt-repo is either for stuff that isn't in the ubuntu repo, or for packages we want to override in ubuntu's repo
[19:32:34] Run git checkout -b myprettyfeaturebranch then try again
[19:32:37] Does that happen a lot? (overriding, I mean)
[19:32:43] yep
[19:32:48] well, not a ton
[19:32:49] but enough
[19:33:10] after checkout git-review?
[19:33:13] or commit?
[19:33:24] oh. seems this same package should work in newer versions
[19:33:32] lemme just copy it across
[19:33:41] diederik: Ideally, you should first start a new branch, then change files, then commit, then run git review
[19:33:41] in precise we can probably use the ubuntu version
[19:33:54] the lucid version was ancient
[19:33:59] But you can also create the branch later, you'll just have to update or roll back your master branch when you're done
[19:34:31] this keeps confusing me, i am the only one who is working on this repository
[19:34:39] i just wanted to commit the changes based on the feedback
[19:35:02] So you're the only one working on this and you don't use the review feature at all?
[19:35:03] it's a follow up to the previous 2 patches
[19:35:13] ??
[19:35:20] Ryan_Lane: So, rebuild my instance again? Or will puppet take care of updating?
[19:35:22] In that case you should probably just push directly, if you have the required permissions
[19:35:23] i am talking about the udp-filters repository
[19:35:29] Yes
[19:35:37] nah. only need to rebuild instance if it fails to build and you can't login
[19:35:40] You're saying you're the only one that commits? And no one reviews your changes?
[19:35:51] you and tim review my commits :D
[19:35:55] Right
[19:36:05] stupid ganglia in oneiric is still broken in puppet
[19:36:17] Well, then try what I said, create a new branch and submit from there
[19:36:17] so this is a new commit that addresses tim's comments
[19:36:36] Wait
[19:36:41] no doesn't work
[19:36:46] hm. upstart didn't install. I hate puppet
[19:36:47] OK, so a few questions
[19:36:50] still working tree is dirty
[19:36:57] after changing branch
[19:37:01] Then I swear you must have uncommitted changes
[19:37:05] git status will list them
[19:37:38] i have some untracked files but that's okay
[19:37:47] Yeah
[19:38:07] So are you changing one of the previous commits, or are you creating a new one
[19:38:24] it's a new commit
[19:38:33] but with the same change-id
[19:38:47] Oh
[19:38:50] i did git commit --amend
[19:38:50] Yeah well that won't work
[19:38:52] Oh, OK
[19:38:54] Yeah
[19:38:59] So you're changing the commit
[19:39:02] Did you use --amend -a ?
[19:39:03] sorry, yes
[19:39:09] no
[19:39:12] Then do that
[19:39:26] andrewbogott: ok, it's working now
[19:39:34] Yep, looks good. Thanks!
[19:39:55] I think it's possible to create gluster volumes with a single brick cluster
[19:40:01] thanks!
[19:40:18] There you go :)
[19:40:24] if not, we'll need to go through this again on the new node.
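The fix Roan is pointing at, as a short shell sketch; the edited file name is a hypothetical stand-in for whatever the review feedback touched:

```sh
# Addressing review comments on an existing Gerrit change:
# amend the previous commit (keeping its Change-Id line) and include
# the modified tracked files with -a, then resubmit.
vi src/udp-filter.c      # hypothetical file, edited per review feedback
git commit --amend -a    # -a stages the changed tracked files into the amend
git review               # same Change-Id => uploaded as a new patchset
                         # on the existing change, not a new change
```

Without -a, the amended commit doesn't pick up the edited files, so the working tree stays dirty and git-review refuses to push, which is exactly the error above.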
heh
[19:40:26] I'm surprised git status didn't show your changed files
[19:40:31] oneiric doesn't work properly at all
[19:40:54] thankfully precise will be out soon
[19:41:06] roan: so how can i reply to your and tim's comments in gerrit?
[19:41:33] To submit an inline comment, go to the diff and double-click on the line
[19:41:47] When you're done leaving inline comments, go back to the main revision page, and click Review
[19:41:54] RECOVERY dpkg-check is now: OK on gluster-driver-dev gluster-driver-dev output: All packages OK
[19:41:58] That'll allow you to submit a comment and a score (choose zero)
[19:42:09] but i want to reply to your comments, not add new comments
[19:42:18] You need to add new comments in order to do so, I think
[19:42:22] Well, not sure
[19:42:27] Maybe there's a reply button?
[19:42:44] but i can't find your comments :(
[19:42:52] lol, of course
[19:42:56] I commented on a different patchset
[19:43:47] got it
[19:43:56] (We're still getting used to this ourselves)
[19:52:18] roan: code committed, and replied to all comments
[19:52:24] yay
[19:52:49] regex support is not fully functional (yet) but i want to get this deployed asap
[19:52:59] then i'll continue working on regex support
[19:53:34] New patchset: Sara; "First iteration of adding ganglia for labs." [operations/puppet] (test) - https://gerrit.wikimedia.org/r/2157
[20:02:46] Ryan_Lane: Next question... the gluster docs are full of things like this "server1:/exp1" with no explanation about exp1. It refers to a location in the native filesystem, right? And the gluster stuff manifests as files there?
[20:04:00] andrewbogott: /exp1 is just a directory, yeah
[20:04:02] on the server
[20:04:17] right, and gluster just writes files there
[20:04:32] So the gluster driver will have to take a long list of server:location pairs as a flag.
[20:04:55] umm
[20:05:08] for creating volumes, yeah
[20:05:13] for mounting volumes, no
[20:06:06] It sort of surprises me that gluster doesn't abstract away that kind of specific info about individual servers.
[20:06:21] well, it does
[20:06:24] I mean, I guess it does once a volume is created.
[20:06:25] the peers list
[20:07:16] But you still have to specify a location when creating a volume, which means you have to know stuff about the filesystem on each peer.
[20:07:25] ah. right
[20:07:47] Why not just allow the peer to configure gluster with "put stuff here" and keep that knowledge local to the peer?
[20:08:27] what do you mean by "put stuff here"?
[20:08:44] e.g. exp1
[20:09:09] ah
[20:09:24] well, since the gluster peer will be running the nova-volume service, we can make it work however we'd like
[20:09:32] Nova will have to be notified or reconfigured any time a brick is added or removed, anytime the dir structure of a peer is rearranged, etc.
[20:09:57] I think we should control the directory structure
[20:10:04] Not hard to implement, just a pain for sysadmins.
[20:10:09] yeah
[20:10:09] true
[20:11:26] Do you have an intuition about whether that information can just be configured via flags or needs to be dynamically changeable via an API or something?
[20:11:40] Like, having to restart nova when you add storage -- onerous or not onerous?
[20:15:54] hm
[20:36:12] 01/31/2012 - 20:36:11 - Creating a home directory for asher at /export/home/mobile/asher
[20:37:10] 01/31/2012 - 20:37:10 - Updating keys for asher
[21:41:22] New review: Ryan Lane; "The apache config should be a template." [operations/puppet] (test); V: 0 C: 0; - https://gerrit.wikimedia.org/r/2157
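For reference, the brick notation under discussion, sketched with the standard gluster CLI; server names and brick paths are placeholders:

```sh
# Each brick is <server>:<directory on that server's local filesystem>,
# which is why creating a volume requires knowing each peer's layout.
gluster peer probe server2                    # add server2 to the trusted pool
gluster volume create testvol server1:/exp1 server2:/exp2
gluster volume start testvol

# A single-brick volume (as speculated above) also works:
gluster volume create vol0 server1:/exp1

# Mounting, by contrast, only needs the volume name, not the brick list:
mount -t glusterfs server1:/testvol /mnt/testvol
```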
[21:45:17] Ryan_Lane: Are you planning to respond to Vish regarding his "instance configuration should be in horizon" email? Or just letting that topic go for now?
[21:45:31] oh. let me respond to that
[21:45:53] I don't understand how it could be in horizon without being in nova.
[21:56:03] me either
[21:58:19] 01/31/2012 - 21:58:19 - Creating a project directory for outreach
[21:58:20] 01/31/2012 - 21:58:19 - Creating a home directory for nimishg at /export/home/outreach/nimishg
[21:58:20] 01/31/2012 - 21:58:19 - Creating a home directory for laner at /export/home/outreach/laner
[21:59:21] 01/31/2012 - 21:59:20 - Updating keys for laner
[21:59:21] 01/31/2012 - 21:59:20 - Updating keys for nimishg
[22:14:10] https://www.mediawiki.org/wiki/User_talk:Ryan_lane#Labs_11369 Ryan_Lane you wanna respond, or shall I?
[22:15:50] I can
[22:16:34] thanks Ryan_Lane
[22:17:11] yw
[22:22:44] PROBLEM Free ram is now: WARNING on incubator-bots incubator-bots output: Warning: 19% free memory
[23:29:18] preilly: https://bugzilla.wikimedia.org/show_bug.cgi?id=34105
[23:29:52] Ryan_Lane: have issues *cough* *cough*
[23:29:59] heh
[23:30:30] I have a few infrastructure related things that are higher priority than that right now
[23:30:54] Ryan_Lane: I mean getent shadow doesn't need to actually work does it?
[23:31:03] not really
[23:31:58] though it does seem to fix another issue I ran into
[23:32:09] heh
[23:32:13] fucking debian
[23:32:13] Ryan_Lane: what is that?
[23:32:21] an issue when editing files in home directories
[23:32:46] nice
[23:32:55] if you vi /home/user/.ssh/known_hosts, it won't let you write
[23:33:08] if you cd to /home/user, then edit the file from there, it works
[23:33:29] it likely makes a call when crossing filesystems
[23:33:41] Ryan_Lane: yeah
[23:35:44] so, I'll try to get to it at some point
[23:35:52] *some point soon
[23:42:30] Ryan_Lane: does this look right: /mnt/store/db/** rwk,?
[23:43:12] /a/sqldata/ r,
[23:43:12] /a/sqldata/** rwk,
[23:43:12] /a/tmp/ r,
[23:43:12] /a/tmp/** rwk,
[23:49:49] Ryan_Lane: thanks, that worked
[23:49:59] yw
[23:50:22] Ryan_Lane: I don't care what social media tells me about you — you're alright! ;)
[23:50:29] hahaha
[23:55:58] I *really* wish labs bugs showed up in this channel
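For reference, the AppArmor rules quoted above, assembled into a profile fragment; the profile file name is an assumption based on the stock Ubuntu MySQL profile:

```sh
# Hypothetical fragment for /etc/apparmor.d/usr.sbin.mysqld, granting
# mysqld access to a relocated datadir under /mnt/store/db, mirroring
# the stock /a/sqldata entries quoted in the conversation:
#
#   /mnt/store/db/ r,
#   /mnt/store/db/** rwk,
#
# Reload the profile after editing so the change takes effect:
sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld
```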