[00:11:06] Coren: you still have backups of user/tool pmtpa dot files? [00:19:27] gifti: Seems to work now :) [00:20:34] yay! :) [01:01:12] scfc_de: Poke? [01:12:22] a930913: Wazzup? [01:14:51] gifti: AFAIUI, the pmtpa storage is still available, but I cannot access that ("== THIS SERVER IS DECOMISSIONNED =="). With Sunday, I'd suggest mail or bug. [01:18:33] scfc_de: Nvm. It works now \o/ [01:21:00] a930913: Glad to have been able to help :-). [01:21:17] scfc_de: You want to file the bug thing? [01:21:33] a930913: What bug thing? [01:21:48] scfc_de: There was a bugzilla for it. [01:23:15] a930913: Eh, for what?! [01:24:47] scfc_de: https://bugzilla.wikimedia.org/show_bug.cgi?id=59721 [01:24:57] Though atm it's only doing en-wp. [01:34:02] a930913: You've coded a filter IRC -> Redis? a) That's cool! :-) b) There's already a Redis feed in WMF's MediaWiki, there just needs to be some piping installed to bring it to Labs. Could you document what you've done and others can use until then? [01:39:41] scfc_de: can that redis thing be used multiple times by a tool? [01:40:08] gifti: Which Redis thing? ATM, there's none. [01:40:22] the redis thing of the future ;) [01:43:48] gifti: Yeah, you just subscribe to the local feed. [01:45:37] so I can run two processes (or rather 3 to 4) that can read all events in parallel? [01:46:14] they wouldn't compete for them? [01:48:46] gifti: AFAIUI, all processes that subscribe to a channel see all messages. [02:11:13] scfc_de: Documentation = subscribe to redis key "#**.wikipedia" :p [03:21:18] Coren: WTF??!??!? [03:21:44] MAKE IT STOP [03:21:50] Now [03:23:10] What genius decided to have old mail get flooded to people's emails? [03:24:10] That really just pissed me off. [03:25:54] Now I get to clean up 100+ emails from my client, scattered throughout my inbox. [03:26:55] Oh look more coming in. >:( [03:27:12] Now at 198 emails. [03:27:31] I'm marking labs as spam.
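[The pub/sub behaviour gifti asks about above generalises: every subscriber to a Redis channel receives every message, so several processes can read the same feed in parallel without competing for events. A minimal sketch, assuming a third-party `redis` client, a hypothetical host name `tools-redis`, and a glob-style channel pattern like the "#**.wikipedia" key mentioned at 02:11:13 — none of these are a documented interface:]

```python
import json

def parse_event(payload):
    """Decode one pub/sub payload (assumed to be JSON) into a dict."""
    return json.loads(payload)

def listen(host="tools-redis", pattern="#*.wikipedia"):
    """Yield recent-changes events from a hypothetical Redis pub/sub feed.

    Every process calling this sees every message on matching channels;
    subscribers never compete for events.
    """
    import redis  # third-party client: pip install redis

    conn = redis.StrictRedis(host=host)
    sub = conn.pubsub()
    sub.psubscribe(pattern)  # glob-style pattern matches many wiki channels
    for message in sub.listen():
        if message["type"] == "pmessage":
            yield parse_event(message["data"])
```

[Two or more tool processes can each call `listen()` and independently receive the full stream.]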
[03:30:48] I really didn't need that, especially since I just fell asleep. [04:10:14] Lol, yeah, what was with that email stuff? [08:08:08] Hello all! I just created a LAMP instance as described at https://wikitech.wikimedia.org/wiki/Help:Lamp_Instance and PROBLEM: Lots of LDAP connection failures [08:08:26] Mar 24 08:05:46 map nslcd[1041]: [1b58ba] failed to bind to LDAP server ldap://virt1000.wikimedia.org:389: Can't contact LDAP server: Connection timed out [10:31:41] !log deployment-prep migrated deployment-solr to self puppet/salt masters [10:31:44] Logged the message, Master [14:17:11] bd808|BUFFER: "try it now" [14:28:01] I've a strange error on tool labs. I am not able anymore to edit the files of a tool created directly in eqiad server just before the migration end. [14:28:13] But I'm still in the maintainer list [14:32:25] Tpt: What's the name of the tool? [14:32:38] scfc_de: creatorlinks [14:34:47] Tpt: The directory /data/project/creatorlinks looks broken. Coren, could you take a look if this is related to migration or a FS failure? [14:35:02] "broken"? [14:35:40] The directory is owned by uid 51771 which doesn't exist. [14:35:52] Ah, hm. One of those. Not entirely sure why, there's a small fraction of tools that got confused about userids when they got copied. [14:36:33] So, chown -R is okay? [14:36:39] scfc_de: Yep. Just did it. [14:36:55] * Coren looks for more of those. [14:37:03] Tpt: Try again, please? [14:37:26] Not too bad; there seems to only be a couple. [14:37:30] * Coren fixes them. [14:38:24] scfc_de, Coren: it works now. Thanks a lot :-) [14:41:35] tools.supercalifragilisticexpialidocious [14:41:39] ... seriously? [14:45:43] petan: Is the 'huggle' project fully migrated? If not, do you need help with anything? [14:45:55] I will finish that today [14:46:32] cool. [14:46:44] scfc_de: same question… is 'math' finished? [14:48:38] manybubbles: Are you connected with the 'pubsubhubbub' project? 
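[The ownership repair described above (a migrated tool directory owned by a uid that no longer exists, fixed with a recursive chown) can be sketched as follows; the path and ids are illustrative only:]

```python
import os

def chown_recursive(root, uid, gid):
    """Set uid/gid on root and everything below it, like `chown -R`."""
    os.chown(root, uid, gid)
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            os.chown(os.path.join(dirpath, name), uid, gid)

# e.g. chown_recursive("/data/project/creatorlinks", correct_uid, correct_gid)
# would be the Python equivalent of the one-line `chown -R` fix applied here.
```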
[14:48:48] No one has offered to migrate it, and yet I can see that someone has started to do so :( [14:48:49] andrewbogott: not that i know of [14:48:57] manybubbles: ok [14:50:01] The admins of that project are 'Alexander.lehmann' and 'Nik' [14:51:21] manybubbles: 'Nik' is you, right? If so, do you happen to know how I can contact Alexander.lehmann? [14:51:31] Nik is me [14:51:38] I don't know Alexander.lehmann [14:52:07] weird, ok [14:52:17] !ping [14:52:17] hmm [14:52:17] !pong [14:55:36] andrewbogott: I will also release one public ip [14:55:41] I don't need it [14:55:49] cool [14:55:53] whee at public IPs being releaseddd [14:56:21] if there was Ipv6 at labs, every tool could have own ip :P [14:56:59] andrewbogott: The broken instance that I left up and running seems to have healed itself and finished a puppet run. \o/ [14:57:10] bd808: yep, mine too [14:57:24] Was it an ACL/firewall rule? [14:58:18] "The static route on the routers was for 10.68.16.0/24 still" [14:58:33] So, not exactly a firewall, but grepping for /24 found the problem. As you predicted. [14:59:03] Cool. Glad it was found [14:59:19] me too. 256 instances was not enough :) [15:00:30] bd808: is deployment-prep pretty close to done? Are you still waiting on springle for db work? [15:00:53] andrewbogott: yuvipanda: You're on the CC list for https://bugzilla.wikimedia.org/show_bug.cgi?id=59721 I've made a script that dumps JSON for all? wikis from the IRC RC. [15:01:01] looking [15:01:17] a930913: into redis/ [15:01:18] ? [15:01:21] that'll be really cool :) [15:01:21] yuvipanda: Yeah. [15:01:28] andrewbogott: I haven't checked my email yet to see if the db is ready. We have a few other bits to get working too. Mostly the jenkins integration to keep the code up to date. [15:01:31] a930913: are you dumping them as a list or as pubsub? [15:01:37] yuvipanda: Pubsub. [15:01:43] And these elasticsearch boxes for Nik [15:02:13] a930913: ah, nice. appropriate, I'd think. 
[15:02:26] a930913: where is it running? [15:02:54] yuvipanda: Cluestuff, recent_changes.py I think :p [15:03:08] a930913: is cluebot using it? :) [15:03:26] yuvipanda: Can you note on the bugzilla thing as whatever one does to those things? [15:03:35] a930913: link to surce? [15:03:36] *source [15:03:57] yuvipanda: Actually, cluebot makes its own one, but only for en.wp [15:04:07] oh. is it pushing into redis? [15:04:19] yuvipanda: Erm, /data/project/cluestuff/recent_changes.py [15:04:32] a930913: not on source control anywhere? :( [15:04:53] yuvipanda: The last time I used git, I lost a month of work. [15:05:08] I see [15:05:23] I've never really got to grips with it. [15:05:37] It's probably netbean's fault tbh. [15:05:40] :( it's quite amazing - I think my productivity and peace of mind has been increased by about 10x because of it [15:05:44] ah, I wouldn't be surprised. [15:06:04] if you were to try again and have the time to (I highly recommend it!) I would suggest sticking to the command line [15:06:15] no IDE I've tried gets git integration remotely close to right. [15:07:33] yuvipanda: For Vada, I do have revision control :p [15:07:42] Vada? [15:07:44] and svn? [15:08:32] yuvipanda: https://test.wikipedia.org/wiki/User:A930913/vada/vadaprocess.js?action=history :p [15:08:39] hahahah :P [15:08:51] yuvipanda: Vada is a mw framework I'm making. [15:09:01] oh. for gadgets? [15:09:24] yuvipanda: Kinda. [15:09:38] It has its own "app" store :p [15:10:48] a930913: heh :) [15:11:32] yuvipanda: Currently has apps that include huggle and STiki remakes. (Functionality for AWB is there, but needs more specific apps than just "AWB".) [15:11:51] a930913: nice :) Good luck! [15:11:57] a930913: and I also hope you try out git again some day :) [15:12:28] * a930913 tries to tempt yuvipanda into developing an app for Vada. [15:12:41] a930913: not when it isn't in proper version control :P [15:12:43] * a930913 wafts cookies in front of yuvipanda.
[15:12:58] * yuvipanda wafts git in front of a930913 [15:13:39] yuvipanda: Shouldn't that be behind me? I mean my last experience was like a stick, not a cookie :p [15:13:45] hehehe [15:14:27] a930913: either way, I recommend http://git-scm.com/book chapters 1 and 2 and staying the hell away from all ide integrations [15:21:11] andrewbogott: Re project math, why me?! (If you mean https://wikitech.wikimedia.org/wiki/Nova_Resource:Math.) [15:22:40] yuvipanda: Would I need to make a separate branch for each block of change I'm making, concurrently? [15:23:27] a930913: not at all. You just bundle them all up into one commit, as long as they are not *too* divergent. [15:54:02] !log deployment-prep Built deployment-elastic01.eqiad.wmflabs with local salt/puppet master, secondary disk on /var/lib/elasticsearch and role::elasticsearch::server [15:54:05] Logged the message, Master [15:57:50] andrewbogott: I switched to proxy and still I can't load the webpage, it seems to resolve to old IP [15:57:59] andrewbogott: until that is fixed, I can't finish the migration [15:58:37] petan: Did you come up with a duplex for wm-bot? [15:58:38] mhm [15:58:44] it mysteriously started working [15:58:49] a930913: duplex? [15:59:08] petan, sorry, back -- let me catch up [15:59:16] oh, wait, working now? [15:59:17] andrewbogott: nvm it works [15:59:17] petan: Sending commands from IRC - labs. [15:59:19] yes [15:59:27] a930913: ah that... not yet [15:59:30] but I will [15:59:46] petan: so maybe just a dns delay? Did you switch off the older proxy? [15:59:54] yes [16:00:02] otherwise I couldn't use the same dns [16:00:16] ok, cool [16:00:30] petan: It occurred to me that it could use the webservice. [16:01:00] Because then it just feeds parameters to be run with the tool's account. [16:01:51] scfc_de: sorry, I got confused about irc handles, never mind! [16:02:52] * andrewbogott tries to remember what Physikerwelt's irc handle is...
[16:02:54] if not Physikerwelt [16:08:07] !log deployment-prep Built deployment-elastic02.eqiad.wmflabs with local salt/puppet master, secondary disk on /var/lib/elasticsearch and role::elasticsearch::server [16:08:10] Logged the message, Master [16:19:27] !log deployment-prep Built deployment-elastic03.eqiad.wmflabs with local salt/puppet master, secondary disk on /var/lib/elasticsearch and role::elasticsearch::server [16:19:29] Logged the message, Master [16:21:18] andrewbogott: Has the deployment-prep project hit a limit for cpu, ram and/or number of instances? I'm trying to create deployment-elastic04 and getting "Failed to create instance." from [[Special:NovaInstance]] [16:22:15] bd808: It doesn't look like it… https://wikitech.wikimedia.org/w/index.php?title=Special:NovaProject&action=displayquotas&projectname=deployment-prep [16:22:24] I'll investigate in a few minutes, eating breakfast atm [16:23:11] andrewbogott: Thanks. Maybe "RAM: 130048/131072". I'm trying to make another m1.large instance [16:31:00] too much beta [16:31:49] bd808: try now? I increased the ram quota by a lot. [16:32:32] andrewbogott: That seems to have done the trick. Thanks. [16:40:01] so beta much ram [16:44:35] milimetric, addshore, legoktm: I'm about to mothball the 'stats' project. Would one of you like to speak up in its defense? [16:45:19] andrewbogott: might be more worthy of a delete, we never got around to doing much there :/ unless legoktm has anything there! [16:45:30] ... or forever hold your peace. :-) [16:45:51] addshore: cool, I'm schedule it for deletion. thanks. [16:45:55] *I'll [16:47:41] * bd808 is getting tired of running `sudo puppetd --test --verbose` [16:47:53] yeah andrewbogott, I think it was just a hopeful effort and doesn't need migration at this time [16:48:00] delete away [16:48:02] thanks for checking [16:56:53] aude: have you finished migrating 'scrumbugz'? If not, can I do anything to help?
[16:59:00] andrewbogott: chrisopher says it is all migrated [16:59:04] [= [16:59:13] kill it ;p [16:59:23] aude: great, I will mark it as finished [16:59:28] i can go through wikidata-dev and maps and delete stuff tomorrow, if that's ok [16:59:51] (need to ask chippy about maps) [17:00:13] aude: You don't need to clean up, necessarily, the pmtpa boxes will be wiped soon enough. [17:00:35] i don't want anyone to think we want them migrated [17:00:50] Ah, ok. [17:01:13] If you've already moved everything to eqiad that you need, just update https://wikitech.wikimedia.org/wiki/Labs_Eqiad_Migration/Progress and I'll ignore them. [17:01:40] ok [17:01:45] i'll do tomorow [17:01:50] tomorrow* [17:01:56] thanks [17:05:02] !log deployment-prep Built deployment-elastic04.eqiad.wmflabs with local salt/puppet master, secondary disk on /var/lib/elasticsearch and role::elasticsearch::server [17:05:05] Logged the message, Master [17:05:34] !log deployment-prep Changed rules in search security group to use CIDR 10.0.0.0/8. [17:05:37] Logged the message, Master [17:05:38] !log planet shutting down instance mars [17:05:40] Logged the message, Master [17:06:01] !log deployment-prep Changed rules in sql security group to use CIDR 10.0.0.0/8. [17:06:03] Logged the message, Master [17:14:41] !log planet - deleting i-000003a8.pmtpa.wmflabs [17:14:43] Logged the message, Master [17:16:24] !log planet - deleted instance venus [17:16:26] Logged the message, Master [17:17:00] !log planet - remove hostname planet.wmflabs from Tampa, release IP 208.80.153.223 [17:17:03] Logged the message, Master [17:21:41] Coren: you about? [17:22:18] Betacommand: Yep. What can I do to you? [17:25:18] Coren: See PM [17:25:28] !log deployment-prep Converted deployment-db1.eqiad.wmflabs to use local puppet & salt masters [17:25:30] Logged the message, Master [17:54:43] Krinkle: you've claimed two projects, Cvn and Integration. Are they finished? Or do you need any help from me? 
[18:00:37] Coren: Is there a bugzilla bug for the NFS server "Read-only file system" problem? I just had an instance (deployment-elastic04) change from working to non-working without a reboot. I have since rebooted twice and the problem persists. [18:12:38] bd808|LUNCH: Wait what? [18:13:47] Coren: It seems to be the classic "NFS server has bad ACLs", but this time it happened to a node where /home had been working. [18:14:38] As far as I know the instance was not restarted and /home was not remounted but it suddenly became read-only [18:15:25] I have since rebooted twice and not managed to win the ACL race condition lottery [18:20:49] Coren, I have a question? [18:21:05] In meeting. [18:21:11] Uhuh [18:22:16] Coren, then something you can memoserv the answer to me with. Whose smart idea was it to flood my inbox with 200+ emails dating back to March 4th? [18:28:35] You did. They got stuck in the outgoing queue, though. [18:33:34] https://www.youtube.com/watch?v=nOMX3deeW6Q#t=18 < I wonder if this is what the eqiad storage sounds like [18:35:26] Coren, not really. I never wanted mail. [18:35:52] I expect you got a noisy cron job? [18:36:30] I deleted my 1381 emails before Coren got a chance to spam me :D Well thanks to Tim anyway [18:40:59] yuvipanda: Are you involved with the Reportcard project? And/or can you tell me what Dan Andreescu's irc handle is? [18:41:20] andrewbogott: milimetric is him [18:41:56] hi andrewbogott [18:41:59] ah! [18:42:01] hi :) [18:42:09] it's migrated properly, I forgot to update the wiki page [18:42:10] sorry [18:42:12] How's reportcard doing? All finished? [18:42:14] yes [18:42:16] np, good news :) [18:42:19] i just wanted to leave it up for a bit [18:42:21] andrewbogott: Can you make me a new project? [18:42:28] to test that it doesn't crash randomly [18:42:38] but it's been stable for days, so feel free to kill the pmtpa instance [18:42:47] milimetric: great, thanks. [18:42:53] Damianz: yes, ask me again in 15 minutes?
[18:42:54] no sir, thank you [18:42:59] andrewbogott: Sure [18:43:08] * Damianz might actually migrate his instance tonight then YAY [18:43:49] Damianz, I got no warning. [18:44:20] I wonder if whoever it was I yelled at a lot a few weeks ago fixed the thing hmmm... [18:44:33] andrewbogott: No, migration is not finished on those. hashar has started on migrating integration. I haven't started on Cvn yet. [18:44:57] Krinkle: OK… do you need any help from me? [18:45:16] Nope, just time and finding out what I forgot to puppetise. [18:45:30] ok [18:45:46] Note that things will start to vanish on Friday. [18:45:47] Hm.. any beta labs admins here? I'm trying to run mwscript from cli, but it's indefinitely doing nothing (createAndPromote). [18:46:07] Oh, now it finally times out [18:46:08] DB connection error: Can't connect to MySQL server on '10.4.0.53' (110) (10.4.0.53) [18:46:30] I noticed all pmtpa instances from deployment-prep disappeared from the nova overview page [18:46:32] https://wikitech.wikimedia.org/wiki/Nova_Resource:Deployment-prep [18:46:33] That looks suspiciously like a prod ip [18:47:13] documentation still says pmtpa though, but I connected to deployment-eqiad.wmflabs. [18:47:33] Damianz: https://github.com/wikimedia/operations-mediawiki-config/blob/2f075e7e435077a3d98d33964e76e7f68e5995f7/wmf-config/db-labs.php#L47 [18:48:05] Looks like whoever migrated beta to eqiad didn't update IP references? [18:48:13] It's a pmtpa instance I think [18:48:16] or used to be rather [18:48:38] oh, pmtpa instances are still there [18:48:43] which one is being served? [18:48:52] * Damianz waits for wikitech [18:49:13] pmtpa is still primary for beta [18:49:26] OK, nvm then, I'll connect to the old deployment prep bastion in that case [18:49:35] Yeah [18:49:44] I believe the eqiad host is db1 rather than sql [18:50:30] Also beta has too many fricking instances [18:51:20] Damianz: project name?
[18:51:25] 'monitoring' [18:51:57] Hm, I just created an 'icinga' project, is this different? [18:53:19] Well currently we use the icinga instance in the nagios project, I was going to rebuild icinga to live under the monitoring project and make it generic [18:53:35] We might well have use for an icinga project as well [18:53:49] ok, so that sounds like a duplicate of what the icinga project was for… petan, can y'all coordinate? [18:54:02] Unless Damianz you can explain why we would want two... [18:54:23] Damianz: there is new project icinga for icinga [18:54:29] there is 1 instance called icinga [18:54:38] :o [18:54:47] which is where icinga is surprisingly going to be [18:54:55] I don't really mind either way, I just think 'icinga' under 'nagios' is stupid [18:54:56] but it's not yet configured so you can do that [18:55:08] Damianz: there is no more icinga under nagios [18:55:10] Damianz: the 'nagios' project is definitely getting killed. [18:55:13] there is icinga under icinga [18:55:40] I just recall saying I was going to do it a while ago then went into a rage at someone breaking ircecho and the project being named stupidly [18:55:43] *shrug* [18:55:50] I'll use icinga ;) as long as we get to murder nagios [18:56:17] IIRC there were other monitoring-related things under nagios at one point that leslie or such was working on, hence a suggestion of monitoring [18:56:23] * Damianz goes to get pizza out of the oven before FIRE [18:56:48] Damianz: idk, maybe she was testing things [18:57:08] Damianz: let me know if I should call FD [18:57:10] :P [18:57:38] Yay I didn't burn food [18:57:54] Fire exclamation mark [18:58:50] Hmm, I apparently have nothing to drink - this sucks [19:00:02] btw you have sysadmin on icinga project feel free to break it err. install that [19:10:28] Thanks, working on it [19:10:34] andrewbogott: Can't add security rules for eqiad? [19:10:55] Damianz: that interface is a bit weird, but it should work.
[19:11:00] What rule are you adding, to what project? [19:11:32] icinga. 80, 80, tcp, 0.0.0.0/0 or 10.0.0.0/8, default -> Failed to add [19:12:23] Oh, I think you need to leave that last setting ('source group') blank. I don't really know what that does even :( [19:12:47] That works [19:13:03] I should be able to have multiple groups though... maybe just default is borked [19:13:26] You can have multiple groups, though... [19:13:45] I think that setting is supposed to import a set of rules from a different group into the current one? I'm not sure at all. [19:14:01] Anyway, you can both 'add group' and 'add rule' to a group. [19:16:52] * Damianz mentally 's it [19:21:08] Cyberpower678: My threshold was about 200 mails as assembling the information and notifying the maintainer takes me about five minutes, following up another five, while deleting 200 mails takes the maintainer about three minutes, and it was them who ordered the mails, not me. [19:22:16] scfc_de, huh? It took me a good time to declutter my mailbox. Not 3 minutes. [19:23:44] Cyberpower678: I sorted by read. [19:24:06] jsub can't be used by users, only tools, right? [19:24:25] a930913, users can use it too I think. [19:24:47] a930913, I was tired. I wasn't thinking clearly. [19:25:17] * a930913 trouts Cyberpower678. [19:25:36] Cyberpower678: So why isn't it working? :/ [19:26:00] * CP678 eats it. [19:26:02] Stupid internet. [19:26:08] This connection is boolshit. [19:26:46] Hi every one... I have just joined this group... I'm newbie in this world... I need some guidance regarding how to connect with database... Can anyone give me an idea or reading reference? [19:27:07] asad_, terminal or via a script? [19:27:26] asad_: "sql enwiki_p" is a quick way. [19:27:32] through terminal(putty) [19:27:51] asad_: Are you SSHed into labs yet? [19:27:51] asad_: https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Shared_Resources/MySQL_queries [19:28:45] I am unable to connect through terminal... 
I do not know the exact server name [19:29:04] Cyberpower678: If users can use jsub, why isn't mine working? :/ [19:29:19] asad_: tools-login.wmflabs.org [19:29:23] asad_, if you have SSH'd into tool labs, you can use sql [19:29:47] a930913, dunno [19:30:19] Cyberpower678: Are you on? Could you just quickly check for me? :3 [19:30:55] Yes, I tried to login to "tools-login.wmflabs.org". but I get following error when i tried to login from my user name "asad".... If you are having access problems, please see: https://wikitech.wikimedia.org/wiki/Access#Accessing_public_and_private_instances [19:32:25] asad_: Do you have a key? [19:33:09] which key ? [19:33:23] a930913, successfully submitted as cyberpower678 [19:33:25] asad_: A public/private key pair. [19:33:58] a930913, I have generated public/private key and uploaded public key... [19:34:06] Cyberpower678: Hmm, so what's wrong with me? :/ [19:34:55] asad_: And you're using the private key with putty? [19:35:04] no, I'm not using the private key.. should i use private key ? [19:35:55] asad_: On the left menu, Connection>Auth [19:36:33] a930913, it won't let me see the command. [19:37:21] Cyberpower678: qstat returns nothing. [19:37:52] a930913, let me see the jsub command. [19:38:45] thanks a930913, I have just successfully logged in... [19:38:55] Cyberpower678: Found the error, it was all failing silently. [19:39:09] Could an op please merge https://gerrit.wikimedia.org/r/#/c/120581/ when they have a min [19:39:18] mount.nfs: mounting labstore.svc.eqiad.wmnet:/project/icinga/project failed, reason given by server: No such file or directory < Puppet complaining [19:40:13] asad_: Awesome, you should be able to "sql enwiki_p". (Assuming it's en.wikipedia you want the db for.) [19:40:24] a930913, umm, no [19:40:29] it's sql enwiki [19:41:12] asad_: What Cyberpower678 said. I don't use it :p [19:42:18] a930913, yes you are right. Just learning how to deal with enwiki_p... Hope I will do it...
:) [19:42:46] asad_: What are you trying to do/make? :) [19:44:49] I want to develop a model that will predict the impact of news on Wikipedia article searches... [19:44:55] Damianz: The complaint is annoying, but I haven't sat down to find a way to avoid it yet. The problem is that puppet doesn't /know/ that you haven't elected to use a shared /home in the project; so it still tries to mount it even though it's not there. It's harmless. [19:45:32] Coren: Np... probably should find a 'fix' even still though :) [19:46:07] Could an op also merge https://gerrit.wikimedia.org/r/120584 when they have a min :) [19:46:28] Thanks Dzahn for the last one [19:47:23] Cyberpower678: a930913: in fact it's the same, as /usr/bin/sql handles both variants :P [19:47:53] ... wait, why did Jenkins only +1 that? [19:48:10] hedonil, when I used it on labs when replication was first established on labs, it didn't work. :p [19:49:54] Cyberpower678: that's the magic progress & improvement :-) [19:50:39] :-) [19:52:17] (PS1) DamianZaremba: Ignore pmtpa hosts [labs/nagios-builder] - https://gerrit.wikimedia.org/r/120586 [19:52:29] (CR) DamianZaremba: [C: 2 V: 2] Ignore pmtpa hosts [labs/nagios-builder] - https://gerrit.wikimedia.org/r/120586 (owner: DamianZaremba) [19:52:33] Damianz: Yes; that'll require making a fact out of that, or adding a class, or something. The solution can't live in puppet alone, anyways. It's on my plate, but relatively low priority since it's harmless and most projects have shared homes anyways (and there is no overhead for extra 'volumes' like in gluster anymore) [19:53:02] hehe: Host 10.68.16.195 is not allowed to talk to us! [19:53:27] That will be fixed on the next run - change just got merged [19:53:47] petan: I'm ignoring all pmtpa hosts in the config generator since andrewbogott is going to turn them off anyway [19:54:26] sure [19:56:14] Krinkle|detached: FYI, there is now a submit host for cron jobs.
It's not the default yet, but you can reach it via 'xcrontab' [19:56:21] (from the bastions) [19:57:50] ... it also doesn't work quite yet. :-) (But will very shortly) [20:10:47] Ok... labs monitoring should start to clear as instances catch up in puppet runs. Just ircecho not working, but it should be meh [20:20:42] bd808: thanks for the salt/puppet help on beta cluster. It is really useful [20:20:54] bd808: still have to find a way to nicely list / cherry-pick the patches :]] [20:21:00] but that is good enough for now [20:22:06] hashar: I made my bash script work a little better today and changed it so that we are cherry-picking onto the local production branch rather than a local branch that's not tracking anything [20:22:57] try running ~bd808/git-sync-upstream and see if you like the output [20:23:32] The best thing about it is that it automatically runs sudo if you forget [20:24:31] cat: /home/bd808/git-sync-upstream: Permission denied [20:24:31] :D [20:25:17] `chmod g+rx bd808`; try now :) [20:25:55] and to make it easier [20:26:06] old user accounts have gid 550 which is the svn group [20:26:16] whereas you got GID 500 which is wikidev :-] [20:26:39] andrewbogott: any clue what should be the default gidNumber for LDAP accounts? [20:26:47] I still get 550 which is the 'svn' group [20:26:55] whereas new users have 500 which is 'wikidev' [20:27:03] I don't know. [20:27:28] $ ldaplist -l passwd |grep gidNumber|sort|uniq -c|sort -n|tail -n2 [20:27:28] 447 gidNumber: 550 [20:27:28] 2615 gidNumber: 500 [20:27:29] I think it depends, but I'm not sure what it depends on. [20:27:30] go figure :] [20:28:48] looking at the list of accounts it seems old accounts have gid 550 [20:29:32] * hashar feels a bug [20:29:43] can be dealt with later on [20:29:55] Accounts that were imported from the old svn credentials get 550 [20:30:29] LDAP users really should all be wikidev nowadays. I don't think we care how old the account is?
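[The `sql enwiki_p` shortcut recommended to asad_ earlier amounts to reading the tool's auto-generated credentials file and opening a MySQL connection to the replica database. A minimal sketch: the `~/replica.my.cnf` filename follows Tool Labs convention, while the replica host name in the comment is an assumption:]

```python
import configparser
import os

def read_replica_credentials(path=None):
    """Return (user, password) from a MySQL-style credentials file."""
    path = path or os.path.expanduser("~/replica.my.cnf")
    cfg = configparser.ConfigParser()
    cfg.read(path)
    return cfg["client"]["user"], cfg["client"]["password"]

# Connecting would then look roughly like this (requires a MySQL driver
# and Labs access; the host name is an assumption):
#   import pymysql
#   user, password = read_replica_credentials()
#   conn = pymysql.connect(host="enwiki.labsdb", user=user,
#                          password=password, database="enwiki_p")
```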
[20:31:57] old is merely the svn import [20:32:01] I filed https://bugzilla.wikimedia.org/show_bug.cgi?id=63028 about it [20:32:11] it is not very important, can wait post pmtpa migration [20:36:04] * andrewbogott running errands, back ~ an hour [20:38:36] Change on mediawiki a page OAuth/For Developers was modified, changed by Sage Ross (WMF) link https://www.mediawiki.org/w/index.php?diff=938668 edit summary: links [20:46:55] Coren: how do I ensure that a script doesn't go through jsub when using cron? [20:48:05] You don't. That's the point. Why would you want to anyways? If there's a use case, I can code something to support it. [20:48:42] Coren: I need the output emailed [20:48:51] jsub just eats it [20:49:27] I've got one script that's not done via jsub for that reason [20:50:06] Betacommand: Hm. Well, there's nothing that'd prevent the script from sending its output to mail, but I can see how that'd be a generally useful option to add to jsub anyways. [20:50:38] I'll have that running and tested before I do the switchover. [20:50:56] Coren: which means I've got to figure out another hack to make things work like they did on the toolserver [20:51:18] Betacommand: It'll be as trivial as adding something like '-mail' to jsub. :-) [20:52:39] [jq]sub already knows how to mail, I'll just add a 'gimme the output in a mail' option. Like I said, that seems like a generally useful thing for reports and such. [20:52:42] Coren: I've had an open request about SGE for ~6 months now that's gotten nowhere [20:53:05] Betacommand: That seems long indeed. Do you have the BZ handy while I've not my hands in it/ [20:53:06] ? [20:53:12] s/not/got/ [20:53:18] Coren: one sec [20:55:53] Coren: 54054 opened sept 12, 2013 [20:58:12] Betacommand: Ah, it never got prioritized so it fell at the bottom of my list. Sorry 'bout that. It's quite reasonable and shouldn't be too hard to implement.
I'm spending this week doing bug fixes and enhancements to bits of infrastructure, I should be able to squeeze that in. [20:58:43] Coren: it should be possible to set both the global output and -mail flags for all jsubs [20:59:11] Yeah, I'll probably make a .jsubrc in which you can just set default options for all your invocations. [20:59:25] thanks [21:00:39] these "easy" requests I report and get nowhere with, so pardon my frustration when things change [21:00:55] How could you email streamed output? [21:01:05] Content boundaries? [21:01:21] a930913: You can't, obviously. It'll simply email the output in toto when the script ends. [21:01:43] (I.e.: set the -o to a temporary file and email the result) [21:02:54] *Script dumps email output* *Script starts another instance of itself* *Original script kills itself* :p [21:03:35] a930913: It'd probably be simpler for said script to just pipe the output it wants to mail. :-) [21:04:46] Betacommand: You should see the length of my 'small requests' list. :-) Thankfully, with the migration finally over and the new tools being considerably more stable (Yeay ext4!) I'll finally be able to make headway down it. :-) [21:08:56] Coren: I just made a script that sends emails, and tbh, I'd probably have tried my method above, before doing it properly :p [21:10:47] Coren: Your new xcrontab massively screwed up when I tried it for tools.anomiebot [21:13:04] anomie: What'd it do? [21:14:40] Coren: It inserted "/usr/bin/jsub -N send-daily-report -once -quiet" before the sixth "word" on every single line. Even the comments, and the ones that already had calls to /usr/bin/jsub [21:14:52] err, different "-N" though [21:29:21] ... how odd. Can you show me what your original crontab was like? My parser might be broken for some syntaxes, I guess. [21:29:54] Oh, hah! At least one I see. Silly me. :-) [21:37:36] I'm adding eqiad instances to replace editor-engagement's pmtpa instances.
https://wikitech.wikimedia.org/wiki/Help:InstanceConfigMediawiki says "Enable the 'Web' security group", should I check both default and web-servers ? [21:42:05] scfc_de or Coren, i have a problem with connecting via ssh to bastion and yesterday I was suggested to ask you, do you have a few minutes? [21:44:26] i can connect to bastion-eqiad but not to mpaa@bastion.wmflabs.org [21:44:40] cannot understand why [21:45:29] Mpaa: what does `ssh -v -v mpaa@bastion.wmflabs.org` print out [21:46:23] after this: [21:46:27] debug1: Offering RSA public key: /home/squiddy/.ssh/id_rsa [21:46:38] it steps to the next auth method [21:47:23] in bastion-eqiad I get: [21:47:57] debug1: Server accepts key: pkalg ssh-rsa blen 279 [21:51:01] i tried to delete ssh and reupload. bastion-eqiad behaves ... no key, no access, a few min after reupload -> access again [21:53:19] Hi, I'm receiving e-mails about jobs started by cron, I have not received these mails before the migration to eqiad, I don't want to receive them, how can I disable the mail forwarding? [21:54:41] spagewmf, any suggestion? [21:57:49] hi guys. quick question. is it possible to protect files on tool labs by htpasswd? [21:57:57] Mpaa: I don't know. When I first connected to bastion-eqiad I was prompted for a password and I think it was different than the one I use for pmtpa sites [21:58:46] andrewbogott: I'm creating new editor-engagement instances, we'll see how I get on. [21:59:30] andrewbogott: editor-engagement quota at eqiad has no public IPs, should I file a bug to up the limit? [22:01:16] chrismcmahon, but also others, is there a good place to look at long-term load on the beta cluster? [22:02:42] danilo: You can't disable mail /forwarding/, but you can quiet cron itself though. What do you receive, just the "job number xxx was started" thing? [22:02:44] marktraceur: I suspect that beta cluster is not a reliable gauge of performance in any way. 
We've discussed hosting a performance-test env, but for now we don't have one. [22:03:21] I mean, I just want to see the difference since Friday [22:03:25] spagewmf: can you use a proxy instead? [22:03:34] marktraceur: you might do like "action x" takes longer than "action y" kind of comparison [22:03:39] Er...Thursday [22:03:40] marktraceur: ah. [22:03:45] onetimenickname: Not htpasswd, because we use lighttpd, but yes -- check out http://www.cyberciti.biz/tips/lighttpd-setup-a-password-protected-directory-directories.html [22:04:00] (with the difference that the configuration file to edit is your .lighttpd.conf and not the global one) [22:04:24] Coren : oh, cool. I will check it out. thanks. [22:05:59] andrewbogott: we could, there are references to ee-flow.wmflabs.org but maybe not critical [22:06:10] marktraceur: @deployment-bastion:/data/project/logs/archive$ might have some useful information for you [22:06:19] spagewmf: you can make a proxy named ee-flow.wmflabs.org [22:06:26] just look at the 'manage proxies' sidebar link [22:06:26] chrismcmahon: Sounds like raw request data? [22:06:39] um… 'manage web proxies' that is [22:07:04] marktraceur: yes. I can see e.g. apache2 logs back to Mar 1 [22:07:23] andrewbogott: OK, I thought you meant the http://ee-flow.instance-proxy.wmflabs.org URL that works "for free" [22:07:32] anomie|away: I've made a few fixes; xcronsub should be less dumb about some jsub invocation, and leave comments alone. :-) [22:07:43] s/xcronsub/xcrontab/ [22:08:09] chrismcmahon: I was hoping for load data or timing [22:09:19] Mpaa: Why do you need to reach the old bastion? I mean, I can help you debug but that server goes away for good Friday anyways. [22:10:57] Coren OK, I did not know ... can I reach piramido from bastion-eqiad? [22:12:59] Mpaa: Apparently not, because that instance's security group allows SSH only from pmtpa. Hm. Do you know what project this is part of?
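Coren's lighttpd advice above (per-tool .lighttpd.conf rather than htpasswd) can be sketched concretely. This is an illustrative example, not the exact Tool Labs setup: the protected path, realm, user, and password are placeholders, and the "plain" auth backend is the simplest of lighttpd's standard mod_auth backends.

```shell
# Hypothetical sketch of a per-tool lighttpd basic-auth setup: protect
# everything under the /private/ URL path. The directive names are standard
# lighttpd mod_auth configuration; paths and credentials are made up.
cat > .lighttpd.conf <<'EOF'
server.modules += ( "mod_auth" )
auth.backend = "plain"
auth.backend.plain.userfile = ".htpasswd-plain"
auth.require = ( "/private/" =>
  ( "method" => "basic", "realm" => "restricted", "require" => "valid-user" )
)
EOF

# The "plain" backend userfile is simply user:password, one per line
# (in a real deployment use an absolute path and restrictive permissions).
printf 'alice:s3cret\n' > .htpasswd-plain
```

After a webservice restart, requests under /private/ should then prompt for credentials.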
[22:13:28] Coren, no, not really [22:13:51] andrewbogott: must I set up a proxy for an eqiad web-server, is there an equivalent to http://INSTANCENAME.instance-proxy.wmflabs.org that we've been using? [22:14:28] The *.instance-proxy system is going to be shut down in favor of the dynamic proxy system. [22:16:24] Mpaa: That's editor-engagement. According to https://wikitech.wikimedia.org/wiki/Labs_Eqiad_Migration/Progress this should be in the process of migration by the growth team. [22:17:23] Mpaa: mattflaschen is the contact; you probably want to ask him about piramido's status. [22:17:40] Coren, OK thanks [22:17:46] chrismcmahon: Separately, we have issues with caching on betalabs - we merged something on Thursday that turns on Media Viewer for all users (even logged-out) but logged-out folk aren't seeing it because $caching [22:18:03] I'm not sure if that matters but figure you should be aware, I can file a bug if that's what's needed [22:18:16] marktraceur: file a bug for hashar please [22:18:27] marktraceur: you could Cc me if you would [22:18:47] Coren: Still broke my crontab entries that use the "@daily" syntax [22:19:10] Righto [22:19:32] anomie|away: Ooooo. @stuff syntax. Completely forgot about /that/. [22:20:14] marktraceur: idle question, have you found any oddness with sessions and cookies on beta labs, especially since the migration to eqiad? I have a really perplexing issue in some MobileFrontend tests with what looks like a server-side race condition. it's perplexing. [22:20:19] I'll add code to understand it but, mind you, I recommend against it as a rule. It schedules everything to pile up at the same times for everyone, so it causes bottlenecks. It's not fatal, mind you, but not all that beneficial.
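The xcrontab failure modes reported above (comments rewritten, lines already calling /usr/bin/jsub wrapped a second time, "@daily" entries mangled) all come down to classifying crontab lines before rewriting them. Here is a rough sketch of that classification, not Coren's actual xcrontab code; the job name "cronjob" is a placeholder, and the jsub flags are the ones quoted in the chat:

```shell
# Sketch: wrap only plain five-field crontab entries in a jsub call.
# Comments, blank lines, variable assignments (MAILTO=, PATH=, ...),
# already-wrapped lines, and @daily-style entries must be handled specially.
wrap_crontab() {
  awk '
    /^[[:space:]]*(#|$)/ { print; next }   # comments and blank lines: untouched
    /^[A-Za-z_]+=/       { print; next }   # variable assignments: untouched
    /jsub/               { print; next }   # already wrapped: untouched
    /^@/ {                                 # @daily etc.: command starts at field 2
      printf "%s /usr/bin/jsub -N cronjob -once -quiet %s\n", $1,
             substr($0, length($1) + 2)
      next
    }
    {                                      # normal entry: five time fields, then command
      for (i = 1; i <= 5; i++) printf "%s ", $i
      printf "/usr/bin/jsub -N cronjob -once -quiet "
      for (i = 6; i <= NF; i++) printf "%s%s", $i, (i < NF ? " " : "")
      print ""
    }
  '
}
```

For example, `crontab -l | wrap_crontab` would leave `# comment` and `@daily /usr/bin/jsub ...` lines alone while prefixing `0 2 * * * /bin/foo`.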
[22:20:32] andrewbogott: thx, mentioned that at https://wikitech.wikimedia.org/wiki/Labs_Eqiad_Migration/Howto [22:20:41] thanks [22:21:46] andrewbogott: I asked above should we have both 'default' and 'web-servers' security groups for new MW instances? [22:22:01] chrismcmahon: I haven't noticed anything, no [22:22:08] But I don't test all that frequently there [22:22:21] spagewmf: You can organize your security groups any way you like. It depends on what access you want for the instances. [22:23:41] chrismcmahon: https://bugzilla.wikimedia.org/show_bug.cgi?id=63034 [22:24:27] marktraceur: thanks. I think we used to handle this properly, but I bet the migration to eqiad broke it. [22:25:02] Maybe [22:25:12] anomie|away: Incoming patch. Thanks for testing the edge cases. :-) [22:28:46] Coren: Also, what will happen for people who use input or output redirection in their crontabs? [22:31:13] can anyone point me to examples of doing map-reduce statistics collecting over database dumps on Labs? [22:35:28] andrewbogott: ssh ee-flow.pmtpa.wmflabs from the new eqiad instance fails with "debug1: connect to address 10.4.0.118 port 22: Connection timed out". Do I need to ssh -A to new instance to make my keys available? [22:36:38] spagewmf: yes, probably. [22:36:44] You're trying to copy files from pmtpa to eqiad? [22:37:29] yes, some updates to /data/project. rsync failed so I'm just trying ssh. It times out even when I'm ssh -A'd into the eqiad machine. [22:38:51] Heads up, restarting varnish on betalabs at hashar's recommendation [22:39:03] if ssh works then rsync should also just work [22:39:19] !log Restarting betalabs varnish to workaround https://bugzilla.wikimedia.org/show_bug.cgi?id=63034 [22:39:20] could rsync from instances to local just fine [22:39:20] Restarting is not a valid project. [22:39:22] andrewbogott: I can copy to my laptop and then back out, but that won't work so well for a SQL DB. mutante ssh is timing out. 
[22:39:24] !log beta Restarting betalabs varnish to workaround https://bugzilla.wikimedia.org/show_bug.cgi?id=63034 [22:39:24] beta is not a valid project. [22:39:30] you can see what keys are forwarded by doing ssh-add -l [22:39:32] !log deployment-prep Restarting betalabs varnish to workaround https://bugzilla.wikimedia.org/show_bug.cgi?id=63034 [22:39:34] Logged the message, Master [22:39:54] if it was a key forwarding issue you'd get permission denied [22:39:57] instead of timeout [22:40:14] Oh, no I can't [22:40:15] spagewmf: probably the firewall is blocking access from eqiad to the pmtpa instance. [22:40:20] Have a look at your security groups. [22:40:36] Someone happily added me to the project but not the sysadmin group [22:40:41] hashar: ^^ :D [22:40:45] spagewmf: you should use mysqldump and copy the dump file [22:40:50] marktraceur: :-D [22:41:07] marktraceur: done [22:41:15] Grazie [22:41:16] andrewbogott: Hmm, OK. I'm following https://wikitech.wikimedia.org/wiki/Labs_Eqiad_Migration/Howto#Self-migration_of_shared_data which says I can access pmtpa from eqiad [22:41:38] spagewmf: mysqldump --all-database -u root -p > mybackup.sql ; gzip mybackup.sql [22:41:48] --all-databases [22:41:52] spagewmf: the security groups are set per-project. So you may need to set them differently depending on what you want to do. [22:41:55] marktraceur: and gave you sudo access [22:42:03] Also good [22:42:08] !log deployment-prep made marktraceur a project admin and granted sudo rights [22:42:09] hashar: It doesn't look like it's working though [22:42:10] Logged the message, Master [22:42:22] mutante: thanks for the tip, I'll add it. [22:42:32] http://en.wikipedia.beta.wmflabs.org/wiki/Lightbox_demo in a private window still links to the file pages for those images [22:42:38] marktraceur: deployment-cache-bits03.pmtpa.wmflabs ; sudo /etc/varnish restart [22:42:57] hashar: I thought betalabs was migrated to eqiad? [22:43:14] marktraceur: ah no. 
There is no database setup there yet :] [22:43:21] Aha [22:43:26] marktraceur: it is being rebuilt from scratch on eqiad [22:43:50] spagewmf: copying just files from one db host to another _might_ work, but only if everything else is 100% identical, dump and import is always safer [22:43:51] marktraceur: whenever we got something more or less working, I will post an announce to ask people to test it out using /etc/hosts hacks :] [22:44:37] marktraceur: I am not sure why the js/css hosted on bits ends up being stalled. To be honest, I have no idea how they get invalidated [22:44:52] marktraceur: also the script that syncs mediawiki/extensions.git might be stalled as well [22:45:04] * hashar looks at https://integration.wikimedia.org/dashboard/ [22:45:09] https://integration.wikimedia.org/ci/job/beta-code-update/ [22:45:13] !log deployment-prep attempted restart of varnish on betalabs; seems to have failed, trying again [22:45:15] Logged the message, Master [22:45:31] hashar: /etc/varnish is a directory [22:45:46] marktraceur: sorry /etc/init.d/varnish restart [22:45:50] And I'm getting VCC failures on running ^^ that [22:45:57] /usr/lib/x86_64-linux-gnu/varnish/vmods/libvmod_header.so: cannot open shared object file: No such file or directory [22:46:04] Which is why betalabs now looks like ass [22:46:05] impossible [22:46:08] not at 11pm! [22:46:11] Hah [22:46:25] ahah [22:46:29] I could bother someone else maybe [22:46:36] varnish got upgraded [22:46:43] so the puppet manifest got slightly enhanced [22:46:48] but the new varnish did not get installed :-( [22:46:51] Argh [22:47:01] !log deployment-prep apt-get upgrade varnish on deployment-cache-bits03 [22:47:03] Logged the message, Master [22:47:14] hashar: You're awesome [22:47:15] petan: anything left to do for project 'huggle' or can I mark it as done?
well [22:47:40] marktraceur: It would be awesome if varnish upgraded itself automatically or if I managed to get ops to test the new varnish version on beta :] [22:48:02] Still not working though :( [22:48:29] Maybe I need to touch some files in the deployment-prep apaches? [22:48:35] !log deployment-prep upgrading varnish on all pmtpa caches. [22:48:37] Logged the message, Master [22:49:01] 00:14:10.731 PHP Parse error: syntax error, unexpected ')', expecting ']' in /mnt/srv/scap/bin/refreshCdbJsonFiles on line 150 [22:49:01] 00:14:10.738 Update of MediaWiki localisation messages failed [22:49:07] at the bottom of https://integration.wikimedia.org/ci/job/beta-code-update/54335/consoleFull [22:49:28] bd808: got some PHP parse error in /mnt/srv/scap/bin/refreshCdbJsonFiles :-D [22:50:03] probably from the last guy who had a patch to it merged hashar [22:50:18] * 5d64e63 - (HEAD, origin/master, origin/HEAD, master) Option to increase verbosity of refreshCdbJsonFiles (4 days ago) [22:50:21] damn antoine [22:51:03] ganglia labs down? [22:53:05] beta labs slow? http://en.m.wikipedia.beta.wmflabs.org/w/index.php?title=Special:MobileOptions took a long time to return [22:53:20] bd808: how do we get the scap tools deployed? [22:53:29] chrismcmahon: Beta labs is being redone as eqiad, unstable at the moment [22:53:33] chrismcmahon: yeah restarted the caches for upgrade :( [22:53:41] Krinkle: well pmtpa is not touched :] [22:53:54] hashar: In beta or production? [22:54:01] beta [22:54:06] hashar: OK, thanks. [22:54:08] bd808: in beta I ran git pull . On production I have no clue how scap is updated [22:54:55] chrismcmahon: and the upload cache is stalled apparently :-( [22:55:17] * hashar hurries up, in 4 minutes my laptop turns back to a pumpkin [22:55:36] hashar: restart all the things! (maybe that will make my session/cookie issue for Mobile go away) [22:55:38] hashar: Updating prod now.
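The VCC failure above (libvmod_header.so missing because puppet updated the VCL before the varnish package was upgraded) is the kind of mismatch a pre-restart check can catch: verify that every vmod the VCL imports actually exists on disk before restarting. This is a hypothetical helper, not part of any deployed tooling; the directory layout mirrors the Debian path from the quoted error:

```shell
# Sketch: list vmods imported by a VCL file and check that each has a
# matching libvmod_<name>.so in the given vmod directory.
check_vmods() {  # usage: check_vmods <vcl-file> <vmod-dir>
  awk '/^import[ \t]/ { gsub(/;/, "", $2); print $2 }' "$1" |
  while read -r mod; do
    if [ ! -e "$2/libvmod_${mod}.so" ]; then
      echo "missing vmod: $mod"
      exit 1   # exits the pipeline subshell, so the function returns nonzero
    fi
  done
}
```

Used as, say, `check_vmods /etc/varnish/default.vcl /usr/lib/x86_64-linux-gnu/varnish/vmods || echo "do not restart yet"`, it would have flagged the missing libvmod_header.so before the restart took betalabs down.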
The process is `ssh -A tin; /srv/scap/bin/update-scap` [22:55:47] bd808: thanks :] [22:57:46] !log deployment-prep restarting deployment-cache-upload04 , apparently stalled [22:57:48] Logged the message, Master [22:58:19] marktraceur: is your js/css caching issue resolved? [22:58:38] Hm [22:58:49] * marktraceur looks [22:59:30] yupiieee mediawiki code updating again https://integration.wikimedia.org/ci/job/beta-code-update/ [22:59:37] Not fixed [22:59:57] chrismcmahon: the localization cache was broken for the last 18 hours or so [23:09:30] is wikihadoop or hadoop set up anywhere on labs? [23:09:47] !log deployment-prep upgraded all pmtpa varnishes, ran puppet on all of them. all set! [23:09:51] Logged the message, Master [23:10:15] !log deployment-prep l10n cache got broken due to a PHP fatal error I introduced. It is back up now. Found out via https://integration.wikimedia.org/dashboard/ [23:10:17] Logged the message, Master [23:11:04] marktraceur: so all four varnishes are now up to date (version + puppet) [23:11:16] * marktraceur tries once more [23:11:23] marktraceur: restarting bits again [23:11:57] as for the js/css caching, I have no clue how it works sorry :-( [23:12:07] maybe the resource loader does not detect the change? [23:12:15] Guess not [23:12:30] I might ask someone else tomorrow [23:13:08] marktraceur: try touching the js/css files ? [23:13:16] I guess so [23:13:24] But like...on the deploy bastion, or on the apaches? [23:13:28] I never had to do it before [23:13:37] they all use a shared NFS directory [23:13:43] I usually log on the deployment-bastion [23:14:05] then become mwdeploy : sudo su --login --shell /bin/bash mwdeploy [23:14:11] hashar: jenkins doesnt merge anymore [23:14:12] (I have set that as a bash alias) [23:14:19] i wouldnt have asked, but since you're here:) [23:14:25] mutante: change ? :-] [23:14:35] f.e. 
https://gerrit.wikimedia.org/r/#/c/120694/ [23:14:42] https://gerrit.wikimedia.org/r/#/c/120697/ [23:14:58] Submitted, Merge Pending [23:15:02] marktraceur: files are under /data/project/apache/common-local/ [23:15:07] mutante: sounds like a Gerrit issue [23:15:31] hashar: ok! [23:15:31] mutante: that is Gerrit JGit backend having trouble/delay merging a commit apparently [23:16:06] mutante: ah no [23:16:13] !log deployment-prep Touching all the MMV scripts because they're not getting invalidated or something [23:16:14] where can I read up on what appear to be handy nfs mount points in eqiad labs? [23:16:16] Logged the message, Master [23:16:17] mutante: I am tired sorry. There is a parent change which is not merged : https://gerrit.wikimedia.org/r/#/c/120689/1 :-] [23:16:27] mutante: whenever the parent is submitted, Gerrit will merge both [23:16:46] hashar: oh, i'm sorry for even asking :( got it [23:16:53] my bad adding dependency [23:16:56] I'd like to add a mount point for my project so /home and /data/project can mount.... [23:17:05] Arrrgggghhhhh [23:17:45] cajoel: new eqiad instances should have those automatically. [23:17:52] mutante: sleeping now :-] [23:17:54] Are you working on a migrated instance? [23:17:58] marktraceur: I am crashing to bed now sorry :-: [23:18:00] andrewbogott: err: /Stage[main]/Role::Labs::Instance/Mount[/home]: Could not evaluate: Execution of '/bin/mount -o rw,vers=4,bg,hard,intr,sec=sys,proto=tcp,port=0,noatime,nofsc /home' returned 32: mount.nfs: mounting labstore.svc.eqiad.wmnet:/project/netflow/home failed, reason given by server: [23:18:08] hashar: It's cool, we can figure it later [23:18:11] cajoel: what instance and project is that?
[23:18:15] andrewbogott: nope -- fresh instance [23:18:24] project == netflow [23:18:29] marktraceur: have a good afternoon :-] [23:18:37] flow-localpuppet [23:18:47] Righto [23:18:48] cajoel: Hm… some instances aren't able to mount the volume right away due to a race condition. If you restart then things will most likely work. [23:19:21] sounds bogus.. shouldn't the next puppet run mount it? :) but I'll restart [23:20:46] andrewbogott: same fail condition after restart [23:20:58] cajoel: ok, then it's probably a coren question :/ [23:21:06] only on /home and /data/project [23:21:39] any way to take a peek at the fileserver config? [23:21:39] It's not "unable to mount the volume", it's "mount it too fast and get it readonly" [23:22:05] cajoel: Have you checked the "shared home" and "shared project directory" options of the project config? [23:22:17] ah -- no [23:22:20] Because unless you did, you don't /have/ shared directories to mount. :-) [23:22:21] new stuff to me. [23:23:03] (It takes 2-3 minutes for the directories to be created) [23:23:14] cool [23:23:15] added [23:23:29] this will be a big timesaver -- cool new feature [23:23:53] seems the Mount puppet should check to see if those options are set? [23:25:28] all mounted -- ready to roll [23:25:29] thx [23:26:42] andrewbogott: for labs-vagrant to work, I think my home directory needs to be 755, but in eqiad, "chmod: changing permissions of `.': Read-only file system" [23:27:53] spagewmf: if your $home is read-only then you probably need a reboot. [23:28:10] spagewmf: but I think there may be some known issues with labs-vagrant. I don't really know whose project that is... [23:28:32] andrewbogott: actually, that's only true for /home on the ee-flow-extra instance I created 20 minutes ago, the 2-hour old instance is fine. 
thx, I'll reboot the former [23:40:36] andrewbogott: I rebooted and ee-flow-extra.eqiad.wmflabs still says my home directory is "Read-only file system" [23:40:53] same mount command as ee-flow.eqiad.wmflabs where I can make changes. Hmmm [23:41:27] just keep rebooting, it sounds bad.. but in this case.. [23:41:33] it's some race condition [23:42:39] had same thing [23:45:04] mutante: OK, will do. console output has a "mountall: mount /home [677] terminated with status 32", other than that everything thinks it's mounted rw. [23:57:50] Got edit->diff->email alert working \o/
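The read-only /home symptom above can be confirmed without rebooting blindly: /proc/mounts shows the options a filesystem actually got (as opposed to what fstab or puppet asked for), so a quick check tells you whether the NFS race left the mount ro. A small sketch; the mountpoint and sample input are illustrative:

```shell
# Report whether a mountpoint appears with the rw option in
# /proc/mounts-formatted input (device, mountpoint, fstype, options, ...).
mounted_rw() {  # usage: mounted_rw <mountpoint> < /proc/mounts
  awk -v mp="$1" '
    $2 == mp {
      n = split($4, opts, ",")
      for (i = 1; i <= n; i++) if (opts[i] == "rw") ok = 1
    }
    END { exit !ok }
  '
}
```

On an affected instance one would run `mounted_rw /home < /proc/mounts || echo "/home is read-only; remount or reboot"` to decide whether another reboot (or a `mount -o remount,rw /home`) is actually needed.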