[06:14:28] 6Labs, 6Collaboration-Team: Investigate and remove NFS from editor-engagement project - https://phabricator.wikimedia.org/T102663#1386449 (10Mattflaschen) >>! In T102663#1382941, @yuvipanda wrote: > Should get rid of /data/project too. Before you do so, please give me an opportunity to put the config git repo... [08:16:52] 6Labs, 10Incident-20150617-LabsNFSOutage: Recover files for project liangent-php - https://phabricator.wikimedia.org/T103268#1386559 (10liangent) [08:26:38] how to best recover the MediaWiki, salt and ElasticSearch instances of the "intense" project from the NFS crash? [08:27:33] I don't remember if they even use NFS, hopefully not because repository checkouts should be in /data/scratch and all the rest in local FS, IIRC. [09:06:42] 10Quarry, 7Easy: String "Your query is currently executing" should be "This query..." - https://phabricator.wikimedia.org/T103275#1386635 (10Aklapper) [09:45:56] Does anyone know how stuff at wikimedia.org/static/images/project-logos is controlled? See https://phabricator.wikimedia.org/T103296 [09:56:29] [13intuition] 15siebrand pushed 1 new commit to 06master: 02https://github.com/Krinkle/intuition/commit/ced3df1968853ec754d2bc71454448149067aae3 [09:56:30] 13intuition/06master 14ced3df1 15Siebrand Mazeland: Localisation updates from https://translatewiki.net.
[10:02:02] 6Labs, 10Beta-Cluster: deployment-bastion: Cannot create /home/l10nupdate/.ssh; parent directory /home/l10nupdate does not exist - https://phabricator.wikimedia.org/T103300#1386751 (10hashar) 3NEW [10:07:32] 6Labs, 10Beta-Cluster: Things broken by betacluster suddenly being moved off NFS - https://phabricator.wikimedia.org/T102953#1386763 (10hashar) [10:07:35] 6Labs, 10Beta-Cluster: deployment-bastion: Cannot create /home/l10nupdate/.ssh; parent directory /home/l10nupdate does not exist - https://phabricator.wikimedia.org/T103300#1386760 (10hashar) 5Open>3Resolved a:3hashar On deployment-bastion I created the dir: ``` # mkdir /home/l10nupdate # chown l10nupdat... [10:10:37] 6Labs, 3Labs-Sprint-102: Audit projects' use of NFS, and remove it where not necessary - https://phabricator.wikimedia.org/T102240#1386766 (10hashar) [10:10:40] 6Labs, 10Beta-Cluster: Disable NFS home directories on deployment-prep - https://phabricator.wikimedia.org/T102169#1386764 (10hashar) 5Resolved>3Open Reopening, some instances apparently still rely on NFS :( [10:22:35] YuviPanda: any news regarding the recovery of the files from the disk? [10:26:05] addshore: it's under operations/mediawiki-config [10:26:13] zhuyifei1999: cheers! [10:27:04] in w/static/images/project-logos/ [10:29:38] I guess some ops person needs to poke something somewhere :) [10:32:50] probably, logos used to be stored on uploads.wikimedia.org, but I just found https://git.wikimedia.org/commit/operations%2Fmediawiki-config/3463cd6e0499841ef40a2682fbd6f9dccb2d80e2 [10:39:05] addshore: Addshore removed a subscriber: zhuyifei1999. <= ??? [10:39:11] O_o [10:39:21] did I? [10:39:55] well, apparently phabricator doesn't handle conflicts very well....
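hashar's fix quoted above (recreating the missing parent directory so sshd can create `~/.ssh`) can be sketched without privileges. The fake root path below is a stand-in for `/`, and the owner/group for the final `chown` step is truncated in the log, so it is deliberately not guessed here:

```shell
# Sketch of the fix above: once /home is no longer on NFS, a missing
# per-user directory must be recreated before ssh can create ~/.ssh
# inside it. /tmp/fakeroot-l10n stands in for / so this runs
# unprivileged; on the real host the commands would target
# /home/l10nupdate and finish with a chown (owner/group omitted here,
# since the original log line is truncated).
FAKE_ROOT=/tmp/fakeroot-l10n
mkdir -p "$FAKE_ROOT/home/l10nupdate/.ssh"
chmod 755 "$FAKE_ROOT/home/l10nupdate"
chmod 700 "$FAKE_ROOT/home/l10nupdate/.ssh"
```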
[10:40:06] probably [10:41:07] idk why, but the images work for me https://www.wikimedia.org/static/images/project-logos/enwikiversity.png https://www.wikimedia.org/static/images/project-logos/wikidatawiki.png [11:08:35] Hi all [11:08:56] I've an issue with my instance [11:09:18] I can't log in with my public key, the server refuses my key [11:09:30] Is it about the recent NFS failure? [11:13:28] 6Labs, 10Continuous-Integration-Infrastructure: Continuous integration should not depend on labs NFS - https://phabricator.wikimedia.org/T90610#1386948 (10hashar) `salt '*' cmd.run 'grep labstore /etc/fstab'` yields: ``` integration-slave-jessie-1001.integration.eqiad.wmflabs: labstore.svc.eqiad.wmnet:/pro... [11:25:29] hei - [[da:Marc Fyhring Støvlbæk Nielsen]] is deleted on dawiki, but is still in dawiki.labsdb - test: select * from page where page_title='Marc_Fyhring_Støvlbæk_Nielsen' and page_namespace=0; [11:28:25] otherwise: [[da:Vandpileurt]] is not found on dawiki.labsdb [11:29:57] YuviPanda: hi I can't ssh to the huggle instance since the NFS outage [11:30:10] petrb@tools-bastion-02:~$ ssh huggle [11:30:11] Permission denied (publickey). [11:30:15] I can ssh anywhere else though [11:30:17] 6Labs, 10Continuous-Integration-Infrastructure: Cant ssh to integration-slave-jessie-1001.integration.eqiad.wmflabs - https://phabricator.wikimedia.org/T103312#1387030 (10hashar) 3NEW a:3hashar [11:30:25] it's just this instance; I think it doesn't want to reboot [11:36:11] 6Labs, 10Continuous-Integration-Infrastructure: Cant ssh to integration-slave-jessie-1001.integration.eqiad.wmflabs - https://phabricator.wikimedia.org/T103312#1387049 (10hashar) Remove the puppet class `role::ci::slave::labs` which prevents puppet from completing.
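The salt one-liner in hashar's comment above amounts to running a single check on every instance: grep `/etc/fstab` for labstore mounts. A minimal local sketch of that check (the sample fstab content below is invented purely for illustration):

```shell
# An instance depends on labstore NFS iff its /etc/fstab mounts
# something from labstore.svc.eqiad.wmnet. The fstab below is a
# made-up sample, not taken from a real instance.
cat > /tmp/sample-fstab <<'EOF'
/dev/vda1  /      ext4  defaults  0 1
labstore.svc.eqiad.wmnet:/project/example/home  /home  nfs  rw,vers=4  0 0
EOF
grep labstore /tmp/sample-fstab
```

Salt simply fans this grep out to every minion and collects the per-host output, which is what produces the per-instance listing quoted in the task.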
Under /home/ only /home/admin/ exists :( [11:44:40] 6Labs: Instance osmit.eqiad.wmflabs refuses my public ssh key - https://phabricator.wikimedia.org/T103310#1387063 (10Aklapper) [11:45:39] 6Labs: Instance osmit.eqiad.wmflabs refuses my public ssh key - https://phabricator.wikimedia.org/T103310#1386998 (10Aklapper) > The server refuses my key, while I can log in to bastion or other instances of the project. How exactly do you log in (command and parameters which are unproblematic in public are welcome)? [11:51:40] 6Labs: Instance osmit.eqiad.wmflabs refuses my public ssh key - https://phabricator.wikimedia.org/T103310#1387090 (10Sbiribizio) I log in through the bastion host. I use putty under Windows. Settings are unchanged. I use the bastion host through the command "plink.exe bastion.wmflabs.org -l sbiribizio -agent -nc %host:%p... [12:14:29] 6Labs: Instance osmit.eqiad.wmflabs refuses my public ssh key - https://phabricator.wikimedia.org/T103310#1387367 (10Aklapper) * I assume you are aware of the recent SSH key changes?: ** https://lists.wikimedia.org/pipermail/labs-announce/2015-June/000036.html ** https://lists.wikimedia.org/pipermail/wikitech-l/... [12:28:01] 6Labs: Instance osmit.eqiad.wmflabs refuses my public ssh key - https://phabricator.wikimedia.org/T103310#1387388 (10Sbiribizio) I'm aware of the latest key changes. After the key change I logged on bastion to confirm the fingerprint. - Also (and unrelated), does the "-agent" parameter in your command imply... [12:38:01] This user is now online in #wikimedia-labs. I'll let you know when they show some activity (talk, etc.) [12:38:01] @notify YuviPanda [13:04:30] test2 [13:16:52] *yell* [13:27:19] petan: still having trouble with huggle? I can look now. [13:27:26] yes [13:27:33] I can't ssh to the huggle instance since the labs outage [13:27:44] instance ‘huggle’ project ‘huggle’?
[13:27:46] I tried "reboot" using wikitech but not sure if it actually did something [13:27:47] yes [13:34:19] 6Labs: Instance osmit.eqiad.wmflabs refuses my public ssh key - https://phabricator.wikimedia.org/T103310#1387522 (10Krenair) I would try https://wikitech.wikimedia.org/wiki/Recover_instance_from_NFS [13:36:56] petan: it looks to me like puppet has been broken on that system for ages. [13:37:11] very likely [13:37:11] I can try to kickstart it via salt, we’ll see where we get. [13:37:45] andrewbogott: can you also please look into instance dwl, project dwl? same problem (can't ssh in) [13:37:53] In the worst case I can build a new instance, but I thought I just needed to restart it or something [13:37:54] gifti_: next :) [13:37:57] yay [13:38:34] petan: any time an instance has broken puppet it’s a good candidate for falling off the internet [13:38:52] ok, just let me know if I need to rebuild it [13:40:05] huh, actually it seems it just needed to run puppet and now it’s happy. puppet /wasn’t/ broken, although that doesn’t explain why it hadn’t fixed itself already [13:40:25] let me poke for one more minute, but it should be back shortly [13:41:28] petan: ok, should be all better [13:41:29] 6Labs: Instance osmit.eqiad.wmflabs refuses my public ssh key - https://phabricator.wikimedia.org/T103310#1387531 (10Sbiribizio) For now I don't want to try it. The server on the instance is running correctly, the web server is accessible and working, so all my data are OK, I suppose. I would like to know if th... [13:41:38] YuviPanda: any news regarding the recovery of the files from the disk? [13:42:12] andrewbogott: good job! it works now [13:42:19] cool [13:42:53] gifti_: I have a login on dwl, no problems. Can you try? [13:42:58] It may just have needed a few hours to catch up [13:43:27] still doesn't work [13:43:35] gifti_: did yuvi contact you about disabling nfs on this instance? [13:43:45] um, i think not [13:44:30] if i do that can i copy the home directories over?
[13:45:08] I’m really not sure what’s up with nfs regarding this project, we’ll have to catch up with yuvi or coren. It should be possible to keep NFS working. [13:45:22] hm [13:45:38] But — I can log in as a normal user. So i don’t think anything is wrong with the instance, loginwise. [13:45:44] hmmm [13:45:45] try logging in now? [13:45:53] nope [13:46:14] your login attempt isn’t even hitting that instance. [13:46:19] Are you able to log into other labs instances? [13:46:46] i don't think i have any other accessible instances [13:47:01] are you on windows? [13:47:12] no, linux [13:47:16] and using proxycommand? [13:47:19] yes [13:47:30] ok, one moment... [13:47:54] when were you last able to contact this instance? [13:48:05] before the nfs outage [13:48:24] i don't know the exact date and time [13:48:37] can you try to ssh to util-abogott.testlabs.eqiad.wmflabs? [13:48:41] I just added you to that project [13:48:43] Hello, is there someone whom I could ask to help me recover two log files from my tool's folder that were lost in the recent NFS outage? [13:48:48] i'll try [13:49:17] andrewbogott: Which project? [13:49:19] doesn't work [13:49:34] gifti_: and, again, you aren’t hitting the instance. [13:49:39] hm [13:49:40] andrewbogott: We're still in recovery mode and changes to instances, etc need to be done by hand still because no automation yet. [13:49:47] So this isn’t specific to your instance, it’s something broken with ssh setup [13:50:07] Coren: dwl [13:50:31] Coren: it’s keeping nfs and has been rebooted [13:50:32] marmick: Not yet. Recovery progresses, but it's very slow and it's impossible to estimate how long is left. [13:50:34] but the mounts still fail [13:50:41] * Coren checks. [13:51:00] gifti_: please ssh -vvv and paste the output someplace? [13:51:05] andrewbogott: I... don't see a dwl project? [13:51:16] andrewbogott: When was it created? [13:51:24] Coren: glups. 
does the recovery need to be done to check if a particular user-file can be obtained? [13:51:45] marmick: Yes, it's not possible to mount the filesystem until that is done. :-( [13:51:48] Coren: when you say you don’t see it… you mean you don’t see nfs files for it? [13:52:01] Coren: for all I know it never had nfs mounts. [13:52:10] there’s just one instance so gifti_ probably wouldn’t have noticed [13:52:16] andrewbogott: I don't see an exports file for it. If it's a new project I'll have to create it by hand. [13:52:44] (Not long or hard, but it needs doing) [13:53:18] andrewbogott: I'll try to get manage-nfs-volumes going today. [13:53:45] Coren: great. I don’t know anything about how this project should or shouldn’t work, I’m just seeing puppet complain about missing mounts. [13:53:53] If gifti_ ever reappears we may learn. [13:58:05] http://pastebin.com/Z0sitGnY [13:58:25] andrewbogott: I made the entries for dwl by hand, it should have NFS now. [13:58:38] Coren: but with empty volumes, right? [13:58:44] Or at least, within a few minutes. [13:58:48] Yes, empty volumes. [13:58:55] gifti_: we don’t see evidence that that project ever had nfs mounts for /home or /data/project [13:59:07] ok [13:59:10] Coren: ok, I don’t think that’s going to improve things, if gifti_ was used to seeing existing files… [13:59:51] if I can log in that's actually a big improvement [13:59:59] andrewbogott: I can tell you for sure that dwl didn't have nfs filesystems as of Jun 8th [14:00:40] Wait, that project was created on the 20th [14:00:47] So it never would have had NFS [14:01:06] gifti_: your home dir looks 100% empty to me. Is that what you’d expect? [14:01:17] no [14:01:19] gifti_: can you tell me what command you typed to produce that paste? [14:01:34] ssh -vvv dwl.dwl.eqiad.wmflabs [14:01:59] gifti_: you need to always specify your username when you do that. As far as I can tell your username on your local machine is not the same as your username on labs.
[14:02:18] unless your proxycommand specifies it [14:02:33] it's in the config file [14:03:23] can you humor me and try? I don’t see evidence from -vvv that the username is added. [14:03:35] Coren: so, how do we explain the newly empty homedir? [14:04:39] andrewbogott: The project was created on the 20th, two days ago. It never had NFS so any files in the home would have been on the instance-local storage. gifti_: did you ever log into that box? [14:05:10] http://pastebin.com/EpcvK0SM looks the same to me [14:05:20] Coren: i did [14:05:33] and it was not created on the 20th it's older [14:07:35] gifti_: can you ssh to bastion.eqiad.wmflabs? [14:07:50] yes [14:08:30] well, bastion.wmflabs.org [14:09:32] gifti_: ... I see you logged into that instance. [14:09:51] yeah, that's my phone [14:10:57] and it works with the eval `ssh-agent` method, but that's not what i want [14:11:51] -rw-r--r-- 1 root root 0 Jun 20 11:51 firstboot_done [14:12:00] That instance was definitely created Jun 20 [14:12:54] i agree with that [14:14:02] 6Labs, 10Labs-Infrastructure, 10Wikimedia-Apache-configuration, 6operations, 10wikitech.wikimedia.org: wikitech-static sync broken - https://phabricator.wikimedia.org/T101803#1387575 (10Andrew) Actually, I think the issue is that we store an entire new copy of wikitech on each import. I propose we chang... [14:14:31] Well, the project never had NFS storage, and the instance was created two days ago and never logged in - I don't know what you expected to find in its homes but "empty" would have been what I'd have expected. [14:14:52] hm, ok [14:23:42] Coren: interested in sorting out gifti_’s proxycommand issue? output is in the backscroll [14:23:50] If you’re swamped, I’ll return to it shortly. [14:23:59] I mean, you are definitely swamped, but might enjoy a simple distraction :) [14:24:20] Heh. I'll see if I can sort it out quickly. 
[14:28:22] gifti_: Your proxycommand *also* must include your username - the user stanza in the config file applies only to the final destination, not the proxycommand. [14:28:33] hm [14:29:01] So something like 'ProxyCommand ssh -a -W %h:%p gifti@bastion[...]' [14:29:35] wow, that was easy :) [14:29:47] why doesn't it say so on the wiki page? [14:30:22] gifti_: It probably should; I expect whoever wrote that documentation usually has the same username everywhere and so would never have noticed. :-) [14:30:38] thank you for the hint [14:31:26] is there a possibility to restore the local files of the deleted instance? [14:31:50] gifti_: I'm pretty sure not, but if it is then andrewbogott is the one that can tell. [14:32:32] gifti_: no, sorry :( [14:32:38] * andrewbogott updates the docs [14:32:40] ah, well [14:34:11] Is this the only part that was wrong? https://wikitech.wikimedia.org/w/index.php?title=Help%3AAccess&type=revision&diff=167108&oldid=163241 [14:35:35] seems so [14:35:39] great [14:35:44] sorry the docs were wrong :( [14:43:43] Hallo? My plea above did not get noticed, so I have waited until the ongoing discussion ends. Can I now please ask an admin for help with restoring two log files that have disappeared because of the recent outage? [14:44:54] Blahma: I think that we’re still waiting on file copies before restores are available. But Coren can correct me... [14:45:22] 6Labs: Unable to ssh to dwl - https://phabricator.wikimedia.org/T103245#1387663 (10Giftpflanze) 5Open>3Resolved [14:46:15] Blahma: andrewbogott has the right of it; we're still waiting for the fsck to finish. If you want to make sure I get to your restorations as soon as it's done, the best way is to create a ticket -> https://phabricator.wikimedia.org/maniphest/task/create/?parent=103265 [14:47:06] OK, thank you, Coren, it's not a hurry and I was actually simply trying not to get forgotten. I'll create a ticket. 
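Coren's point above, that the `User` stanza applies only to the final destination and the bastion hop inside `ProxyCommand` therefore needs its own explicit username, can be captured in `~/.ssh/config`. A sketch using the example username `gifti` from the channel (replace it with your own shell account name):

```
# ~/.ssh/config sketch. The User line below applies only to the final
# destination host, NOT to the ProxyCommand, so the username must be
# repeated explicitly in the bastion hop.
Host *.eqiad.wmflabs
    User gifti
    ProxyCommand ssh -a -W %h:%p gifti@bastion.wmflabs.org
```

With this in place, `ssh dwl.dwl.eqiad.wmflabs` authenticates to the bastion and the target instance as the same labs user, regardless of the local machine's username.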
[14:50:33] Actually, I think I don't really need the latest version to be recovered from the failed NFS; I'll make do with copies from the last backup. Those were .log files that were skipped in the process that has brought Tools back to functioning. Would that not be possible right now? [14:51:24] Blahma: Sorry, we didn't skip the log files in the restore, we skipped them in the /backup/. They aren't there to be restored. [14:51:33] I see. [14:51:43] Then I'll file the ticket and wait. [14:53:37] 6Labs, 10Incident-20150617-LabsNFSOutage: Recover cssk tool's log files - https://phabricator.wikimedia.org/T103350#1387682 (10Blahma) 3NEW [14:53:48] ^^ that's mine [14:58:44] 6Labs, 6operations, 10ops-codfw: rack and connect labstore-array4-codfw in codfw - https://phabricator.wikimedia.org/T93215#1387698 (10coren) This ticket has been open for some time; @papaul, can you confirm that this was done and that the current codfw setup mirrors the eqiad setup? Starting from a well-un... [14:59:46] 6Labs, 10Incident-20150331-LabsNFS-Overload, 3ToolLabs-Goals-Q4: Reinstall labstore1001 with Jessie - https://phabricator.wikimedia.org/T94609#1387705 (10coren) [14:59:48] 6Labs, 3ToolLabs-Goals-Q4: Allow labstores to hot or warm swap in case of failure - https://phabricator.wikimedia.org/T93589#1387706 (10coren) [14:59:50] 6Labs, 10Incident-20150331-LabsNFS-Overload, 3ToolLabs-Goals-Q4: Test labstore switchover - https://phabricator.wikimedia.org/T94607#1387701 (10coren) 5Open>3declined a:3coren This has been made moot by the filesystem crash and the forcible switch.
[15:01:54] 6Labs, 10Tool-Labs: Recover homedir of "ci" tool - https://phabricator.wikimedia.org/T103205#1387728 (10coren) [15:01:55] 6Labs, 10Incident-20150617-LabsNFSOutage: Labs: Salvage, then remove volumes on labstores' raid6 - https://phabricator.wikimedia.org/T103265#1387727 (10coren) [15:02:00] 6Labs, 10Labs-Infrastructure, 10Wikimedia-Apache-configuration, 6operations, and 2 others: wikitech-static sync broken - https://phabricator.wikimedia.org/T101803#1387730 (10jcrespo) The main issue with MySQL was that due to the `innodb_file_per_table | OFF` configuration (default in 5.5), every time a... [15:07:55] Coren, am I understanding correctly that /data/project/upload7/private/captcha on deployment-prep would've been down with NFS? [15:08:26] Krenair: Yes, if it lives in /data/project it depends on NFS [15:08:30] captchas were completely broken, but seem fine now? [15:08:44] lived* [15:08:47] The NFS mount available now is from a few days ago right? [15:09:03] Late Friday. [15:09:13] 6Labs, 10Catgraph, 10Labs-Infrastructure, 6TCB-Team: Can't login to catgraph instance - https://phabricator.wikimedia.org/T103354#1387777 (10jkroll) 3NEW [15:09:33] 6Labs, 6operations, 10ops-eqiad: Labs: Disconnect labstore1001 from the shelves - https://phabricator.wikimedia.org/T103355#1387786 (10coren) 3NEW [15:13:53] 6Labs, 10Incident-20150617-LabsNFSOutage: Labs: Make a new backup of the Labs storage to codfw - https://phabricator.wikimedia.org/T103356#1387805 (10coren) 3NEW [15:15:09] 6Labs, 10Incident-20150617-LabsNFSOutage: Labs: Make a new backup of the Labs storage to codfw - https://phabricator.wikimedia.org/T103356#1387815 (10coren) [15:16:01] 6Labs: decouple role::labs::instance puppet runs from the rest of puppet - https://phabricator.wikimedia.org/T103357#1387818 (10Andrew) 3NEW a:3Andrew [15:16:35] 6Labs, 6operations, 10ops-codfw: Labs: Install the new RAID controller in labstore2002 and test - https://phabricator.wikimedia.org/T103267#1387827 (10coren) [15:17:56] 
6Labs, 6operations, 10ops-codfw: Labs: Install the new RAID controller in labstore2002 and test - https://phabricator.wikimedia.org/T103267#1385952 (10coren) This can be done safely with labstore2002 provided that it is first disconnected from the shelves. [15:18:22] 6Labs, 10Catgraph, 10Labs-Infrastructure, 6TCB-Team: Can't login to catgraph instance - https://phabricator.wikimedia.org/T103354#1387838 (10Andrew) 5Open>3Resolved a:3Andrew Logins should be fixed now. Part of this issue was caused by a broken puppet run -- it's very important that you keep your in... [15:18:24] 6Labs, 6operations, 10ops-eqiad: Labs: Disconnect labstore1001 from the shelves - https://phabricator.wikimedia.org/T103355#1387841 (10coren) p:5High>3Unbreak! [15:20:17] 6Labs, 3Labs-Sprint-101, 3Labs-Sprint-102: Kill off virt1000 - https://phabricator.wikimedia.org/T102005#1387849 (10Andrew) [15:20:18] 6Labs, 3Labs-Sprint-101, 3Labs-Sprint-102: Sort out remaining virt1000 salt minions - https://phabricator.wikimedia.org/T103010#1387848 (10Andrew) 5Open>3Resolved [15:20:34] 6Labs, 10Incident-20150617-LabsNFSOutage: Labs: increase size of the volume for the maps project and restore - https://phabricator.wikimedia.org/T103358#1387852 (10coren) 3NEW [15:21:00] 6Labs, 10Incident-20150617-LabsNFSOutage: Labs: Salvage, then remove volumes on labstores' raid6 - https://phabricator.wikimedia.org/T103265#1387863 (10coren) [15:21:01] 6Labs, 10Incident-20150617-LabsNFSOutage: Labs: increase size of the volume for the maps project and restore - https://phabricator.wikimedia.org/T103358#1387862 (10coren) [15:26:54] is http://botbot.wmflabs.org/freenode/15/ affected by the recent downtime?
i'm getting a 502 [15:27:14] 6Labs, 10Incident-20150617-LabsNFSOutage: Labs: increase size of the volume for the maps project and restore - https://phabricator.wikimedia.org/T103358#1387876 (10scfc) [15:27:17] 6Labs, 6Discovery, 10Maps: Replacements for a.toolserver.org, b.toolserver.org, c.toolserver.org not available - https://phabricator.wikimedia.org/T103272#1387875 (10scfc) [15:27:44] 6Labs, 6Discovery, 10Maps: Replacements for a.toolserver.org, b.toolserver.org, c.toolserver.org not available - https://phabricator.wikimedia.org/T103272#1386072 (10scfc) (Assuming that the WMF tile server indeed is part of the #Maps project.) [15:28:52] 6Labs, 10Catgraph, 10Labs-Infrastructure, 6TCB-Team: Can't login to catgraph instance - https://phabricator.wikimedia.org/T103354#1387886 (10jkroll) 5Resolved>3Open Thanks for the quick response, but I still can't ssh there - same output as before :/ [15:50:45] 6Labs, 6operations, 10ops-codfw: rack and connect labstore-array4-codfw in codfw - https://phabricator.wikimedia.org/T93215#1387954 (10Papaul) Still waiting on Chris to give me the layout of labstore2001-array4 [15:56:04] 6Labs, 10Catgraph, 10Labs-Infrastructure, 6TCB-Team: Can't login to catgraph instance - https://phabricator.wikimedia.org/T103354#1387968 (10Andrew) 5Open>3Resolved I've removed role::labsnfs::client and webserver::apache and catgraph_hostmap from the puppet config of that instance in order to allow pu... [16:01:53] 6Labs: Garbage collect unmaintained/unused instances - https://phabricator.wikimedia.org/T102409#1388026 (10Andrew) cscott, were you unaware that puppet was broken on towtruck? Would some sort of email alert system have encouraged you to maintain it properly?
[16:04:04] 6Labs, 3ToolLabs-Goals-Q4: Rename virt1000 to labcontrol1002, move to same subnet as labcontrol1001 - https://phabricator.wikimedia.org/T102646#1388037 (10Andrew) [16:04:26] 6Labs, 6operations, 3ToolLabs-Goals-Q4: Rename virt1000 to labcontrol1002, move to same subnet as labcontrol1001 - https://phabricator.wikimedia.org/T102646#1370417 (10Andrew) [16:14:52] 6Labs, 6operations, 10ops-eqiad, 5Patch-For-Review, 3ToolLabs-Goals-Q4: Rename virt1000 to labcontrol1002, move to same subnet as labcontrol1001 - https://phabricator.wikimedia.org/T102646#1388119 (10Andrew) [16:33:23] Coren: quick question, I've seen in ToolLabs there's python 2.7.6. Is there a way to use 3? Maybe with virtualenv? [16:34:25] marmick: 3.4 is available as well. You can use a virtualenv with 'virtualenv --python python3 .' [16:37:07] oh, thanks. now installing... [16:44:27] YuviPanda: stupid ubuntu broke python3 -m venv :{ [16:50:33] should I use python3 every time I want to execute it? [16:52:43] Coren: I think my tool lost its replica.my.cnf during the outage (it didn't exist at the time of the backup you restored from). Would you be able to run that script again? [17:09:11] valhallasw: did they fix it in later versions at least? [17:10:05] YuviPanda: it was broken on TL the last time I tried :/ [17:10:15] TL?
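Coren's virtualenv advice above, sketched as commands. The path `/tmp/py3env` is hypothetical; per valhallasw, the stdlib `python3 -m venv` was broken on the Tool Labs Ubuntu of the time, which is why `virtualenv --python python3` was the recommended route. The stdlib call is included as a fallback for systems where it does work:

```shell
# Create a Python 3 environment, as suggested in-channel:
virtualenv --python python3 /tmp/py3env 2>/dev/null \
  || python3 -m venv /tmp/py3env   # stdlib fallback where venv works
# Use the environment's interpreter directly (or `source .../activate`):
/tmp/py3env/bin/python --version
```

This also answers marmick's follow-up question: once the environment is activated (or its `bin/python` is invoked directly), plain `python` resolves to the environment's Python 3, so there is no need to type `python3` every time.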
[17:12:09] 6Labs, 10Incident-20150617-LabsNFSOutage, 3Labs-Sprint-103: Labs: Make a new backup of the Labs storage to codfw - https://phabricator.wikimedia.org/T103356#1388385 (10coren) [17:12:25] 6Labs, 10Incident-20150617-LabsNFSOutage, 3Labs-Sprint-103: Labs: Salvage, then remove volumes on labstores' raid6 - https://phabricator.wikimedia.org/T103265#1388386 (10coren) [17:13:32] 6Labs: Labs: Reinstall labstore1001 with Jessie - https://phabricator.wikimedia.org/T103266#1388389 (10coren) [17:13:33] 6Labs, 10Incident-20150331-LabsNFS-Overload, 3ToolLabs-Goals-Q4: Reinstall labstore1001 with Jessie - https://phabricator.wikimedia.org/T94609#1388390 (10coren) [17:14:32] 6Labs, 6operations, 10ops-eqiad: Labs: Disconnect labstore1001 from the shelves - https://phabricator.wikimedia.org/T103355#1388399 (10coren) [17:14:34] 6Labs, 10Incident-20150331-LabsNFS-Overload, 3ToolLabs-Goals-Q4: Reinstall labstore1001 with Jessie - https://phabricator.wikimedia.org/T94609#1168026 (10coren) [17:14:36] 6Labs, 6operations, 10ops-codfw: Labs: Install the new RAID controller in labstore2002 and test - https://phabricator.wikimedia.org/T103267#1388400 (10coren) [17:14:44] 6Labs, 10Incident-20150331-LabsNFS-Overload, 3Labs-Sprint-103, 3ToolLabs-Goals-Q4: Reinstall labstore1001 with Jessie - https://phabricator.wikimedia.org/T94609#1168026 (10coren) [17:15:30] 6Labs, 10Incident-20150617-LabsNFSOutage, 3Labs-Sprint-103: Labs: increase size of the volume for the maps project and restore - https://phabricator.wikimedia.org/T103358#1388402 (10coren) [17:16:18] 6Labs, 10Incident-20150617-LabsNFSOutage, 3Labs-Sprint-103: Labs: Salvage, then remove volumes on labstores' raid6 - https://phabricator.wikimedia.org/T103265#1388408 (10coren) p:5Triage>3High [17:17:45] is this just me? tools.wmflabs.org needs very long to load [17:18:45] It's not just you. [17:19:04] Hm. NFS is nice and snappy afaict. [17:19:37] The webproxy, not so much. [17:19:41] * Coren looks into it. 
[17:21:25] hm, tools-bastion is not affected, i think [17:26:38] Hm. And now it's fast again but I have done nothing. [17:27:20] 6Labs, 3Labs-Sprint-103: decouple role::labs::instance puppet runs from the rest of puppet - https://phabricator.wikimedia.org/T103357#1388459 (10Andrew) [17:27:30] * Krinkle is wondering why he's getting HTTP 400 Bad Request on post requests [17:27:35] https://tools.wmflabs.org/orphantalk/ [17:27:44] picking a wiki/namespace and doing POST results in 400 every time [17:27:51] Worked fine before the outage. Wondering what changed. [17:28:03] I spent 2 days looking into it over the weekend, no luck. [17:28:27] 6Labs, 10Labs-Other-Projects, 3Labs-Sprint-103: investigate/clean up 'servermon' project - https://phabricator.wikimedia.org/T103149#1388464 (10Andrew) [17:28:50] Krinkle: That's a fairly meagre error message to go on. Perhaps it's related to the mandatory https switch rather, if that thing uses the API? [17:29:14] Coren: The server itself returns 400 when submitting the form [17:29:22] It's not hitting php-cgi from what I can see [17:29:37] I tried all kinds of debugger hooks and what not. It's never getting into my code [17:30:21] 400 is about as generic an error as possible. Lemme go take a peek at the logs in case they are less opaque. [17:30:24] 6Labs, 3Labs-Sprint-103: In openstack upstream, add project_id to instance metadata - https://phabricator.wikimedia.org/T103384#1388484 (10Andrew) 3NEW a:3Andrew [17:30:33] Krinkle: Right after my meeting.
(Sorry) [17:31:29] I know it's not much, but it's all I've been given :( [17:35:51] 6Labs, 3Labs-Sprint-102, 3Labs-Sprint-103: Audit projects' use of NFS, and remove it where not necessary - https://phabricator.wikimedia.org/T102240#1388536 (10faidon) [17:36:59] 6Labs, 3Labs-Sprint-102, 3Labs-Sprint-103, 5Patch-For-Review: Replace puppetsigner with a script to clean certificates, puppet's autosign and salt's auto accept - https://phabricator.wikimedia.org/T102504#1388540 (10faidon) a:5yuvipanda>3Andrew [17:37:33] 6Labs, 10Labs-Infrastructure, 6operations, 10ops-eqiad, 3Labs-Sprint-102: Locate and assign some MD1200 shelves for proper testing of labstore1002 - https://phabricator.wikimedia.org/T101741#1388543 (10faidon) [17:37:36] 6Labs, 10Labs-Infrastructure, 6operations, 3Labs-Sprint-100, and 2 others: Migrate Labs NFS storage from RAID6 to RAID10 - https://phabricator.wikimedia.org/T96063#1388542 (10faidon) 5stalled>3Resolved [17:38:23] 6Labs, 3Labs-Sprint-102, 3Labs-Sprint-103: Labs: rewrite remaining labstore* scripts - https://phabricator.wikimedia.org/T102520#1388548 (10faidon) [17:39:12] 6Labs, 6operations, 3Labs-Sprint-102, 3Labs-Sprint-103, 5Patch-For-Review: labstore has multiple unpuppetized files/scripts/configs - https://phabricator.wikimedia.org/T102478#1388553 (10faidon) [17:41:16] 6Labs, 6operations, 3Labs-Sprint-102, 3Labs-Sprint-103, 5Patch-For-Review: Backport sshd with AuthorizedKeysCommand support to Ubuntu precise - https://phabricator.wikimedia.org/T102401#1388562 (10faidon) [17:41:45] 6Labs, 6operations, 10ops-eqiad, 5Patch-For-Review, 3ToolLabs-Goals-Q4: Rename virt1000 to labcontrol1002, move to same subnet as labcontrol1001 - https://phabricator.wikimedia.org/T102646#1388565 (10faidon) [17:41:47] 6Labs, 3Labs-Sprint-101, 3Labs-Sprint-102: Kill off virt1000 - https://phabricator.wikimedia.org/T102005#1388563 (10faidon) 5Open>3Resolved [17:43:01] 6Labs, 3Labs-Sprint-102: Ganglia broken for labstore1001 (again) -
https://phabricator.wikimedia.org/T92618#1388572 (10faidon) 5Open>3Resolved Let's resolve this for now; reopen if it comes up again... [17:43:41] 6Labs, 3Labs-Sprint-103, 3ToolLabs-Goals-Q4: virt1000 SPOF - https://phabricator.wikimedia.org/T90625#1388575 (10Andrew) [17:43:56] 6Labs, 3Labs-Sprint-102, 3Labs-Sprint-103, 5Patch-For-Review: Disable NFS by default for new projects - https://phabricator.wikimedia.org/T102403#1388577 (10faidon) [17:44:23] 6Labs, 3Labs-Sprint-101, 3Labs-Sprint-102: Kill off virt1000 - https://phabricator.wikimedia.org/T102005#1388581 (10Dzahn) There are still remnants in the puppet repo re: virt1000. (hiera, dhcp, openstack module) and dns. [17:47:22] 6Labs, 3Labs-Sprint-103, 3ToolLabs-Goals-Q4: virt1000 SPOF - https://phabricator.wikimedia.org/T90625#1388592 (10Andrew) [17:48:07] YuviPanda: tool labs [17:49:13] 6Labs, 3Labs-Sprint-103: Limit available images on horizon - https://phabricator.wikimedia.org/T91782#1388595 (10Andrew) [17:49:47] 6Labs, 10Incident-20150331-LabsNFS-Overload, 3Labs-Sprint-103, 3ToolLabs-Goals-Q4: Reinstall labstore1001 with Jessie - https://phabricator.wikimedia.org/T94609#1388599 (10faidon) a:3coren [17:49:57] 6Labs, 10Incident-20150617-LabsNFSOutage, 3Labs-Sprint-103: Labs: Salvage, then remove volumes on labstores' raid6 - https://phabricator.wikimedia.org/T103265#1388604 (10faidon) a:3coren [17:50:04] 6Labs, 10Incident-20150617-LabsNFSOutage, 3Labs-Sprint-103: Labs: Make a new backup of the Labs storage to codfw - https://phabricator.wikimedia.org/T103356#1388606 (10faidon) a:3coren [17:50:14] 6Labs, 10Incident-20150617-LabsNFSOutage, 3Labs-Sprint-103: Labs: increase size of the volume for the maps project and restore - https://phabricator.wikimedia.org/T103358#1388608 (10faidon) a:3coren [17:52:28] 6Labs, 10Tool-Labs, 3Labs-Sprint-103: Labs: Move tools-shadow off the same host as tool-master - https://phabricator.wikimedia.org/T103390#1388625 (10coren) 3NEW a:3coren [17:55:43] 6Labs, 10Labs-Infrastructure,
3Labs-Sprint-103: Instances without shared NFS storage suffer from a 3-minute boot delay - https://phabricator.wikimedia.org/T102544#1388643 (10faidon) [18:02:58] 6Labs, 10Tool-Labs, 3Labs-Sprint-101, 3ToolLabs-Goals-Q4: Move tools-shadow away from labvirt1004 - https://phabricator.wikimedia.org/T101636#1388665 (10scfc) [18:02:59] 6Labs, 10Tool-Labs, 3Labs-Sprint-103: Labs: Move tools-shadow off the same host as tool-master - https://phabricator.wikimedia.org/T103390#1388666 (10scfc) [18:42:55] 6Labs, 6operations, 10ops-codfw: Labs: Install the new RAID controller in labstore2002 and test - https://phabricator.wikimedia.org/T103267#1388829 (10Papaul) All shelves are disconnected from labstore2002. New controller card in place. [19:20:08] 6Labs, 6operations, 10ops-eqiad: Labs: Disconnect labstore1001 from the shelves - https://phabricator.wikimedia.org/T103355#1388999 (10Cmjohnson) disconnected everything from labstore1001. [19:23:04] 6Labs: Labs: Reinstall labstore1001 with Jessie - https://phabricator.wikimedia.org/T103266#1389017 (10Cmjohnson) [19:23:07] 6Labs, 10Incident-20150331-LabsNFS-Overload, 3Labs-Sprint-103, 3ToolLabs-Goals-Q4: Reinstall labstore1001 with Jessie - https://phabricator.wikimedia.org/T94609#1389018 (10Cmjohnson) [19:23:09] 6Labs, 6operations, 10ops-eqiad: Labs: Disconnect labstore1001 from the shelves - https://phabricator.wikimedia.org/T103355#1389014 (10Cmjohnson) 5Open>3Resolved a:3Cmjohnson done [19:23:25] 6Labs, 10Labs-Infrastructure, 6operations, 10ops-eqiad: labstore1002 issues while trying to reboot - https://phabricator.wikimedia.org/T98183#1389023 (10Cmjohnson) [19:23:28] 6Labs, 10Labs-Infrastructure, 6operations, 10ops-eqiad: Locate spare H800 PERC in case it is necessary to switch labstore1002's - https://phabricator.wikimedia.org/T101743#1389021 (10Cmjohnson) 5Open>3Resolved We have a spare card on-site but we ordered different cards. 
[19:25:48] 6Labs, 10Incident-20150331-LabsNFS-Overload, 3Labs-Sprint-103, 3ToolLabs-Goals-Q4: Reinstall labstore1001 with Jessie - https://phabricator.wikimedia.org/T94609#1389033 (10coren) Given that we have only one operational system right now and we'd be in serious trouble if we don't have a recovery host, I'm re... [19:25:57] 6Labs, 10Incident-20150331-LabsNFS-Overload, 3Labs-Sprint-103, 3ToolLabs-Goals-Q4: Reinstall labstore1001 with Jessie - https://phabricator.wikimedia.org/T94609#1389034 (10coren) [19:25:59] 6Labs, 6operations, 10ops-codfw: Labs: Install the new RAID controller in labstore2002 and test - https://phabricator.wikimedia.org/T103267#1389035 (10coren) [19:35:47] 6Labs, 10Beta-Cluster: Things broken by betacluster suddenly being moved off NFS - https://phabricator.wikimedia.org/T102953#1389055 (10thcipriani) [19:35:50] 6Labs, 10Beta-Cluster: Beta Cluster uploads (new and viewing existing files/thumbnails, including captchas) broken due to WMF Labs NFS outage - https://phabricator.wikimedia.org/T102963#1389052 (10thcipriani) 5Open>3Resolved a:3thcipriani All tests passing now [19:40:41] 6Labs, 10Beta-Cluster: Things broken by betacluster suddenly being moved off NFS - https://phabricator.wikimedia.org/T102953#1389060 (10hashar) 5Open>3Resolved a:3hashar Seems all NFS related breakages have been fixed now. [19:42:47] 6Labs, 10Tool-Labs: Geohack should be mobile friendly - https://phabricator.wikimedia.org/T103409#1389066 (10Jdlrobson) 3NEW [19:42:55] 6Labs, 10Tool-Labs: Geohack should be mobile friendly - https://phabricator.wikimedia.org/T103409#1389075 (10Jdlrobson) [19:43:01] 6Labs, 10Beta-Cluster: Things broken by betacluster suddenly being moved off NFS - https://phabricator.wikimedia.org/T102953#1389076 (10Krenair) By NFS being up, sure... Shouldn't we be trying to make it not depend on NFS? 
[19:43:11] Coren, YuviPanda|brb: ^ [19:43:46] 6Labs, 10Beta-Cluster, 6operations, 7Monitoring: Setup (simple) catchpoint monitoring for betacluster - https://phabricator.wikimedia.org/T97865#1389081 (10hashar) @yuvipanda can you handle replicating one of the catchpoint probes to hit en.wikipedia.beta.wmflabs.org ? Whatever is done for the production e... [19:43:49] Krenair: That's a bigger project (swift, etc) [19:44:16] 6Labs, 10Beta-Cluster, 6operations, 7Monitoring: Setup (simple) catchpoint monitoring for enwiki betacluster just like production - https://phabricator.wikimedia.org/T97865#1389084 (10hashar) [19:45:20] Coren, should that be the goal of the task or not? [19:46:06] the logstash part was only resolved once it was properly moved off of NFS [19:46:23] Krenair: I suppose it should; deployment-prep depending on NFS is definitely a divergence from prod and thus undesirable even in principle. [19:47:09] 6Labs, 10Beta-Cluster: Things broken by betacluster suddenly being moved off NFS - https://phabricator.wikimedia.org/T102953#1389089 (10Krenair) 5Resolved>3Open This is not resolved unless nothing in beta is expected to break next time NFS does. [19:48:48] Krenair: there is already a bug for it somewhere [19:49:01] I'm on a phone and can't find that atm [19:51:10] 6Labs, 10Incident-20150331-LabsNFS-Overload, 3Labs-Sprint-103, 3ToolLabs-Goals-Q4: Reinstall labstore1001 with Jessie - https://phabricator.wikimedia.org/T94609#1168026 (10coren) In progress. [19:52:42] 6Labs, 10Beta-Cluster: Things broken by betacluster suddenly being moved off NFS - https://phabricator.wikimedia.org/T102953#1389115 (10hashar) Thanks @krenair. Well NFS is gone from deployment-prep. Its only use now is uploads/thumbnails. We need to migrate to Swift, which is T64835, blocking {T84950}. 
[19:53:32] 6Labs, 10Beta-Cluster: Things broken by betacluster suddenly being moved off NFS - https://phabricator.wikimedia.org/T102953#1389119 (10hashar) [19:53:50] 6Labs, 10Beta-Cluster: Beta Cluster uploads (new and viewing existing files/thumbnails, including captchas) broken due to WMF Labs NFS outage - https://phabricator.wikimedia.org/T102963#1389122 (10Krenair) By NFS being up, yes... This task should probably remain open until it no longer depends on NFS or is clo... [19:56:44] 6Labs, 10Beta-Cluster: Things broken by betacluster suddenly being moved off NFS - https://phabricator.wikimedia.org/T102953#1389137 (10hashar) a:5hashar>3None [20:05:50] (03PS1) 10Legoktm: Fix wmf_number [labs/tools/forrestbot] - 10https://gerrit.wikimedia.org/r/219962 [20:06:25] (03CR) 10Merlijn van Deen: [C: 032] Fix wmf_number [labs/tools/forrestbot] - 10https://gerrit.wikimedia.org/r/219962 (owner: 10Legoktm) [20:06:31] legoktm: so much derp :p [20:06:56] (03CR) 10Legoktm: [V: 032] Fix wmf_number [labs/tools/forrestbot] - 10https://gerrit.wikimedia.org/r/219962 (owner: 10Legoktm) [20:07:00] shhh [20:08:34] eh [20:08:35] wtf [20:08:36] ValueError: invalid literal for int() with base 10: '3-back' [20:09:49] some extension has a bad branch? [20:12:22] (03PS1) 10Legoktm: Don't keep creating GerritREST instances [labs/tools/forrestbot] - 10https://gerrit.wikimedia.org/r/219966 [20:12:49] legoktm: get_master_branches is only called once =p [20:12:57] or rather [20:12:59] it's lru_cached [20:13:02] no, it's called once per repo [20:13:09] oh [20:13:09] wait [20:13:15] yes, you're right [20:13:17] I'm blind [20:13:38] I'm also live debugging as well [20:14:48] WARNING:root:mediawiki/extensions/Gather 1.26wmf3-back [20:14:49] sigh. [20:15:32] okay, I have an evil idea [20:17:01] (03PS1) 10Legoktm: Don't die on branch names like '1.26wmf3-back' [labs/tools/forrestbot] - 10https://gerrit.wikimedia.org/r/219967 [20:17:34] valhallasw: how does ^ look? 
[20:17:58] Has anyone had success connecting to the Tool Labs databases using node-mysql? [20:22:42] (03CR) 10Legoktm: [C: 032 V: 032] Don't keep creating GerritREST instances [labs/tools/forrestbot] - 10https://gerrit.wikimedia.org/r/219966 (owner: 10Legoktm) [20:22:55] (03CR) 10Legoktm: [C: 032 V: 032] Don't die on branch names like '1.26wmf3-back' [labs/tools/forrestbot] - 10https://gerrit.wikimedia.org/r/219967 (owner: 10Legoktm) [20:24:25] INFO:requests.packages.urllib3.connectionpool:Resetting dropped connection: gerrit.wikimedia.org [20:24:25] INFO:forrestbot:https://gerrit.wikimedia.org/r/186006: merged in branch master, Task T87247, needs slugs ['mw1.26wmf11'] [20:24:26] INFO:requests.packages.urllib3.connectionpool:Resetting dropped connection: gerrit.wikimedia.org [20:24:26] eh [20:25:24] okay [20:25:26] shits broken [20:25:44] because [20:25:51] we need to sorty by major and minor [20:26:05] "1263" should work [20:30:50] (03PS1) 10Legoktm: In wmf_number, sort by major and minor version [labs/tools/forrestbot] - 10https://gerrit.wikimedia.org/r/219971 [20:31:08] (03CR) 10Legoktm: [C: 032 V: 032] In wmf_number, sort by major and minor version [labs/tools/forrestbot] - 10https://gerrit.wikimedia.org/r/219971 (owner: 10Legoktm) [20:33:04] legoktm: y u no use tuple =p [20:33:11] (major,minor) [20:33:15] oh does that work? [20:33:18] yeah [20:33:22] File "/data/project/forrestbot/forrestbot/forrestbot.py", line 52, in get_master_branches [20:33:22] key=wmf_number)[-1] [20:33:22] IndexError: list index out of range [20:33:30] you can even do (major,minor,wmf) [20:35:17] legoktm: must be the return False :/ [20:35:28] but... [20:35:34] I don't see how? [20:35:51] all the things in the list should be sortable [20:36:09] legoktm: the format is refs/heads/wmf/1.25wmf6 [20:36:12] not 1.25wmf6 [20:36:26] oh [20:36:27] otherwise marker would never be in the branchname [20:36:28] so it should be [20:36:34] wmf_number(b.split(marker)[1]) ? 
[20:36:46] yes [20:37:09] (03PS1) 10Legoktm: Probably fix wmf_number filtering thing [labs/tools/forrestbot] - 10https://gerrit.wikimedia.org/r/219976 [20:37:15] I think so, at least [20:37:25] (03CR) 10Merlijn van Deen: [C: 032] Probably fix wmf_number filtering thing [labs/tools/forrestbot] - 10https://gerrit.wikimedia.org/r/219976 (owner: 10Legoktm) [20:37:35] (03CR) 10Legoktm: [V: 032] Probably fix wmf_number filtering thing [labs/tools/forrestbot] - 10https://gerrit.wikimedia.org/r/219976 (owner: 10Legoktm) [20:37:39] I should set up jenkins for that. [20:39:15] IT WORKS [20:39:47] majick [20:39:48] valhallasw: so I think gerrit is dropping our connections after a few requests [20:39:56] bah [20:42:13] !log deployment-prep updated OCG to version b482144f5bd8b427bcc64a3dd287247195aa1951 [20:42:17] Logged the message, Master [21:04:15] valhallasw: https://secure.phabricator.com/D13098 is deployed in our phab [21:04:21] valhallasw: so we should be able to auto-create projects now [21:04:24] James_F: ^ [21:04:38] Oooh. [21:06:25] legoktm: majick! [21:06:54] legoktm: just think about all the mischief this makes possible! [21:08:03] valhallasw: James_F just pointed out to me that we only support one Bug: header [21:08:09] and some patches close multiple bugs [21:08:19] Yeah. [21:08:23] legoktm: that sounds fixable [21:08:45] we just need to make 'Bug' a list [21:12:03] 6Labs, 10Incident-20150331-LabsNFS-Overload, 3Labs-Sprint-103, 3ToolLabs-Goals-Q4: Reinstall labstore1001 with Jessie - https://phabricator.wikimedia.org/T94609#1389549 (10coren) 5Open>3Resolved Back up with Jessie. [21:25:54] 6Labs, 10Labs-Infrastructure, 10Wikimedia-Apache-configuration, 6operations, and 2 others: wikitech-static sync broken - https://phabricator.wikimedia.org/T101803#1389616 (10Andrew) 5Open>3Resolved a:3Andrew OK -- all of the above is done. I've restarted with empty tables, and now we're only syncing... 
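[Editor's sketch] The forrestbot fixes worked out in the thread above (don't crash on branch names like `1.26wmf3-back`, sort deployment branches by a `(major, minor, wmf)` tuple instead of as strings, and strip the `refs/heads/wmf/` prefix with `b.split(marker)[1]`) can be sketched roughly as follows. The actual forrestbot source isn't quoted in the log, so the function names, the `MARKER` constant, and the regex here are assumptions for illustration only:

```python
import re

# Assumed branch-ref layout, per the chat: refs/heads/wmf/1.26wmf11
MARKER = "wmf/"

def wmf_number(name):
    """Parse '1.26wmf11' into a sortable (major, minor, wmf) tuple.

    Returns None for names that don't match (e.g. '1.26wmf3-back'),
    so callers can filter them out instead of crashing on int().
    """
    m = re.fullmatch(r"(\d+)\.(\d+)wmf(\d+)", name)
    if not m:
        return None
    return tuple(int(g) for g in m.groups())

def latest_wmf_branch(branches):
    """Pick the newest deployment branch from a list of refs."""
    candidates = [b for b in branches if MARKER in b]
    # Key each ref on its parsed tuple; a plain string sort would put
    # '1.26wmf3' after '1.26wmf11', which is the bug discussed above.
    keyed = [(wmf_number(b.split(MARKER)[1]), b) for b in candidates]
    keyed = [kb for kb in keyed if kb[0] is not None]
    return max(keyed)[1]

refs = [
    "refs/heads/wmf/1.26wmf3",
    "refs/heads/wmf/1.26wmf11",
    "refs/heads/wmf/1.26wmf3-back",  # the branch that broke int()
]
print(latest_wmf_branch(refs))  # refs/heads/wmf/1.26wmf11
```

This is also why valhallasw's tuple suggestion matters: Python compares tuples element by element, so `(1, 26, 11) > (1, 26, 3)` holds even though `"1.26wmf11" < "1.26wmf3"` as strings.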
[21:32:44] 6Labs, 6Release-Engineering, 6operations, 10wikitech.wikimedia.org, 5Patch-For-Review: silver / scap - Could not get latest version: 403 Forbidden - https://phabricator.wikimedia.org/T103138#1389685 (10Andrew) 5Open>3Resolved a:3Andrew Fixed by attached patch. [21:35:37] 6Labs, 10Wikimedia-Site-requests, 10wikitech.wikimedia.org: Find missing wikitech apple touch icon (or remove reference) - https://phabricator.wikimedia.org/T102699#1389723 (10Krenair) [21:58:00] !log deployment-prep re-enabling puppet on deployment-videoscaler01 because no reason was given for disabling [21:58:04] Logged the message, dummy [22:11:07] 6Labs, 10LabsDB-Auditor, 10MediaWiki-extensions-OpenStackManager, 10Tool-Labs, and 8 others: Labs' Phabricator tags overhaul - https://phabricator.wikimedia.org/T89270#1389928 (10Aklapper) >>! In T89270#1374717, @Aklapper wrote: > * This will **NOT** retroactively add existing tickets. Anyone with #Triager... [22:30:27] 6Labs, 6Discovery, 10Maps, 6Scrum-of-Scrums, and 2 others: Upgrade postgres on labsdb1004 / 1005 to 9.4, and PostGis 2.1 - https://phabricator.wikimedia.org/T101233#1390024 (10RobH) 5Open>3declined a:3RobH I'm going to go ahead and decline this outright (rather than resolve as @Yurik suggests), since... [22:45:52] 6Labs, 7Tracking: New Labs project requests (Tracking) - https://phabricator.wikimedia.org/T76375#1390071 (10TempleM) Hello, I have recently created a WikiProject on Wikipedia called "WikiProject National Basketball League of Canada." Can you please verify this project and add it to your database? Thank you. 
[22:47:28] 6Labs, 7Tracking: Wikipedia WikiProject Request - https://phabricator.wikimedia.org/T76375#1390073 (10TempleM) [23:33:47] 6Labs, 6Discovery, 10Maps, 6Scrum-of-Scrums, and 2 others: Upgrade postgres on labsdb1004 / 1005 to 9.4, and PostGis 2.1 - https://phabricator.wikimedia.org/T101233#1390198 (10yuvipanda) 5declined>3Open It should still happen at some point - yurik isn't the only user of the machine and others will appr... [23:56:55] 6Labs, 7Tracking: Wikipedia WikiProject Request - https://phabricator.wikimedia.org/T76375#1390302 (10Legoktm) >>! In T76375#1390071, @TempleM wrote: > Hello, > > I have recently created a WikiProject on Wikipedia called "WikiProject National Basketball League of Canada." Can you please verify this project an... [23:57:46] 6Labs, 7Tracking: New Labs project requests (Tracking) - https://phabricator.wikimedia.org/T76375#1390305 (10Legoktm)