[00:00:22] indexes are important if you expect some performance from it [00:00:37] yep, especially when things get big :) [00:00:47] i.e. what has just happened to mine xD [00:01:06] it went from a few thousand to the above :/ [00:01:11] heh [00:01:18] but it shouldnt ever get bigger than this ;p [00:01:21] petan|wk: If you expect performance - CACHE EVERYTHING and don't touch the db [00:01:29] lol [00:01:33] text file and ssd? ;p [00:01:40] reminds me of idea of PC running from ramdisk [00:01:54] cbng had that problem until we lost the db and started again - hundreds of thousands of rows of revert data... slooow [00:02:16] buy 60gb of ram, create 40gb ramdisk - put all stuff on that including OS during boot [00:02:27] browse port safely [00:02:31] * porn [00:02:36] HAHA [00:03:00] ooooh... getting up to a million rows again now =D [00:06:34] petan|wk: http://ganglia.wmflabs.org/latest/graph.php?r=hour&z=xlarge&c=bots&h=bots-bsql01&v=20.3&m=part_max_used&jr=&js=&vl=%25&ti=Maximum+Disk+Space+Used [00:06:41] Yay :) [00:06:52] what is that [00:06:57] storage per project? [00:06:58] disk on bsql01 [00:06:59] xD [00:07:03] ah [00:07:11] AND ram on bsql1 http://ganglia.wmflabs.org/latest/graph.php?r=hour&z=xlarge&h=bots-bsql01&m=load_one&s=by+name&mc=2&g=mem_report&c=bots [00:07:21] cache cache cache ;p [00:07:39] addshore: you can't believe that [00:07:43] it takes all disks together [00:07:51] if u type df you will see 1% [00:09:36] 1GB used >.< [00:09:43] dam slow import [01:12:44] petan|wk: import done :D [01:12:53] * addshore is happy* [01:14:17] * Damianz gives addshore a unicorn [01:16:08] * addshore runs a speedy count [01:17:18] * addshore is a unicorn [01:34:16] now for the massive alter which broke last time :P [01:35:37] heh [01:35:40] gluck [01:39:00] legoktm: hu done yet? :P [01:39:14] i dont think i got approved on hu? [01:39:53] wait, I cant remmeber which it was, do I mean he? 
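Editor's aside: the opening point — that indexes are what keep lookups fast once a table grows past a few thousand rows — can be seen directly in a query plan. A minimal sketch using SQLite (the table and column names here are invented for the example, not addbot's real schema):

```python
import sqlite3

# Toy table standing in for a large interwiki-links table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE iwlinks (lang TEXT, article TEXT)")
conn.executemany(
    "INSERT INTO iwlinks VALUES (?, ?)",
    [("en", f"Page_{i}") for i in range(10000)],
)

# Without an index, a lookup has to scan every row.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM iwlinks WHERE article = 'Page_42'"
).fetchone()[-1]
print(plan)  # e.g. "SCAN iwlinks" -- a full-table scan

# With an index on the looked-up column, it becomes a b-tree search.
conn.execute("CREATE INDEX idx_article ON iwlinks (article)")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM iwlinks WHERE article = 'Page_42'"
).fetchone()[-1]
print(plan)  # e.g. "SEARCH iwlinks USING INDEX idx_article (article=?)"
```

On a million-row table (the size being imported here) the difference between the two plans is the difference between milliseconds and seconds per query.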
[01:40:02] the one you were talking about earlier :P [01:40:14] yeah he [01:40:18] my bot is still running [01:40:30] also petan|wk is there some sort of limit for transfer rate per instance to the network storage? [01:40:48] or infact on the network as a whole [01:41:33] the instances all currently have 100Mb network cards [01:41:44] we're hoping to change that to 1Gb cards next week [01:41:49] hmm [01:42:43] the ganglia graph for the net on bsql01 seems to have a rather low maximum usage, and as far as I can tell that would be the only thing slowing my current query down :/ [01:44:00] * addshore might get petan to fiddle with sqld tomorow :) [01:51:05] [bz] (8RESOLVED - created by: 2Addshore, priority: 4Unprioritized - 6normal) [Bug 45654] bots-sql3 needs more disk space - https://bugzilla.wikimedia.org/show_bug.cgi?id=45654 [01:52:55] erm [01:52:57] oh [01:53:05] those colors >.< [01:57:10] all looks good on my client except maybe yellow :P [01:57:32] its the yellow [01:57:40] i cant read it [02:05:14] legoktm: this is how I feel right now.. http://thebest404pageever.com/swf/processing.swf [02:15:59] heh [06:03:28] hashar: get some better sleep tonight! [06:03:38] I do need [06:03:52] some wild girls have been partying in the room next to me for the last two nights [06:04:09] I am just waiting for my wife to wake up for the daily conf call :_D [06:04:46] hah [06:10:25] * jeremyb_ sleeps [06:54:02] legoktm: your irc client suck [06:54:10] wat. 
[06:54:16] it just uses a white background [06:54:40] just try reading http://cl.ly/image/1U2F00263i27 [06:54:42] meh [06:55:15] you can add a background to the colors [06:55:20] that would be nicer [08:51:10] @notify addshore [08:51:10] I will notify you, when I see addshore around here [08:53:23] * legoktm wonders what addshore is in trouble for now [09:36:15] addshore hey let me know if there are still some [09:36:28] data on these boxes you were copying from [09:36:34] so that I can delete them once it's done [09:37:01] I suppose it finished [09:49:40] legoktm anyway, regarding irc clients, no matter of background it should display the colors so that you can see them [09:49:53] yes but [09:49:58] the color code stands for yellow, but it could be some more dark yellow :P [09:50:05] that [09:50:06] the bot should display colors for all users :P [09:50:16] colors usable for all users* [09:50:17] legoktm you know you can change the colors of bot? [09:50:18] :P [09:50:25] oh really? [09:50:26] how? [09:50:27] yes [09:50:34] I need to reed docs sec [09:50:35] Type @commands for list of commands. This bot is running http://meta.wikimedia.org/wiki/WM-Bot version wikimedia bot v. 1.10.6.8 source code licensed under GPL and located at https://github.com/benapetr/wikimedia-bot [09:50:38] :P [09:50:44] good, i'm going to change it to black. [09:50:55] or better, no colors at all! 
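Editor's aside: the "no colors at all" option legoktm asks for can also be done client-side by stripping mIRC formatting codes before display. A sketch (the control-code list is standard mIRC formatting; the sample line is made up in the style of the bot's bugzilla feed):

```python
import re

# mIRC formatting: \x03 introduces a colour as fg[,bg]; \x02 is bold,
# \x16 reverse, \x1d italic, \x1f underline, \x0f reset-all.
MIRC_FORMATTING = re.compile(r"\x03(?:\d{1,2}(?:,\d{1,2})?)?|[\x02\x0f\x16\x1d\x1f]")

def strip_colors(line: str) -> str:
    """Return the line with all mIRC colour/formatting codes removed."""
    return MIRC_FORMATTING.sub("", line)

print(strip_colors("\x0308,01[bz]\x0f (NEW) Bug 45654"))  # [bz] (NEW) Bug 45654
```

This sidesteps the readability problem entirely: no matter what foreground colour the bot picks, the text renders in the client's default style.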
[09:50:57] style-rss string message template for rss item (variables: $name $author $title $link $description) [09:51:03] http://bots.wmflabs.org/~wm-bot/dump/%23wikimedia-labs.htm [09:51:17] meh [09:51:25] no way to retrieve current value :/ [09:51:39] * legoktm is lazy [09:52:11] ill just try and ignore new bugs :P [09:52:32] [bz] (8$bugzilla_status - created by: 2$author, priority: 4$bugzilla_priority - 6$bugzilla_severity) $title - $link [09:52:35] damn [09:52:42] it's html encoded version :D [09:53:29] @rss-setstyle [09:53:32] @rss-setstyle fdsga [09:53:32] I don't have this item in a db [09:53:39] @rss-setstyle bz test [09:53:39] I don't have this item in a db [09:53:44] @rss-setstyle bugzilla [09:53:45] Item now has a different style you can restore the default style by removing this value [09:54:36] @rss-setstyle bugzilla [bz] ($bugzilla_status; - created by: $author, priority: $bugzilla_priority - $bugzilla_severity) $title - $link [09:54:36] Item now has a different style you can restore the default style by removing this value [09:54:44] @rss-setstyle bugzilla [bz] ($bugzilla_status - created by: $author, priority: $bugzilla_priority - $bugzilla_severity) $title - $link [09:54:45] Item now has a different style you can restore the default style by removing this value [09:54:47] no colors :P [09:56:41] :))) [09:56:50] * legoktm goes to file a bug about wm-bot not having colors :PPPP [11:05:39] !log wikidata-dev wikidata-testrepo: Last week's attempt of puppet to downgrade php5-mysql (that Andrew fixed manually) was back today. Fixed i manually again. Hm... [11:05:41] Logged the message, Master [11:33:59] Hi there! Did you see my e-mail "how to puppetize solarium" on labs-l? Any opinions on that? [11:35:10] * saper does not subscribe, even. 
[11:35:52] tststs [11:36:51] petan: there probably is still data on them [11:36:55] I only removed my db [11:40:15] Silke_WMDE: is it one of those nicely worded, well-researched, repectful and polite emails that provide all the information and actually require some actual work to be done other than hitting the R button, that never get a reply? [11:41:45] What? I get replies to most of my e-mails on this list. [11:41:59] http://lists.wikimedia.org/pipermail/labs-l/2013-March/000932.html [11:43:04] :-) bad joke, sorry [11:43:17] It's a puppet "layout" question, don't know if it requires extra work. For people who know what the layout is supposed to look like, it's probably hitting R. [11:45:19] frankly it might be just a timezone issue [12:31:34] andrewbogott_afk: we toook the liberty of assuming that your IRC nick/cloak/etc. were spelled correctly. https://commons.wikimedia.org/w/index.php?title=File:Andrew_Bogott_staff_photo.jpg&diff=91912785&oldid=73770124 [12:31:41] legoktm: ^ [12:32:03] :) [12:32:34] legoktm: i think he's actually in your TZ [12:32:39] oh cool [12:33:00] I should probably be sleeping... [12:37:11] addshore ok, so can I safely remove all addbot db's now? [12:44:20] Silke_WMDE: for solarium where is the git repo? [12:46:47] ok, so it's not clear to me how cloning the repo is different from using composer [12:47:12] but probably if you want to use it with puppet then you need to: [12:47:31] make a mirror of the github repo on gerrit and maintain that mirror manually as needed [12:47:36] and have puppet clone from gerrit [12:47:41] .... [12:48:01] legoktm: ? 
[12:48:19] nvm i read that wrong [13:23:09] petan yes "addbot" and "addshore_dump" [13:23:59] addshored_dump is not copied [13:24:01] :/ [13:24:05] its empty ;p [13:24:08] ah [13:24:10] ok [13:24:13] hadnt started using it yet :P [13:24:27] been able to run all of my alters on the table now :) [13:24:30] * addshore is happy [13:25:31] you have superuser on mysql there so I suppose you can do that [13:25:34] :D [13:25:49] what kind of alters it were [13:28:44] adding another collum and altering an index [13:28:51] jeremyb_: composer downloads _and_ installs solarium [13:29:10] the collum took a few hours and the index took 15 mins :P [13:29:37] Silke_WMDE: what does install mean? [13:29:59] I think it tells php about the new library [13:33:27] huh [13:33:45] well i think i don't even want to know what that means in php [13:33:59] i kinda know the equivalent for java, perl and python [13:34:04] ?? [13:34:23] ??? [13:34:26] :) [13:34:30] :) [13:34:37] you are sceptic [13:34:56] i think you should figure out how to make it work without composer [13:35:11] I don't know how to tell php about it when I clone the git repo just like that. [13:35:27] well you can run composer and then figure out what it did [13:35:29] and copy that [13:36:17] ok [14:05:46] Damianz can you move cluebot data to bots-bsql01? [14:06:10] I would like to remove all sqlN instances [14:06:37] :D [14:11:04] addshore is your bot working now? 
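Editor's aside: the composer-vs-git-clone question above comes down to this — cloning only fetches the source, while composer also resolves dependencies and writes a `vendor/autoload.php` that "tells PHP about the library" (the install step Silke_WMDE is describing). A sketch of the `composer.json` involved; the version constraint is an assumption, not taken from the actual labs setup:

```json
{
    "require": {
        "solarium/solarium": "3.*"
    }
}
```

After `composer install`, the application includes `vendor/autoload.php` once and the library's classes resolve automatically — which is the part jeremyb_'s "run composer and then figure out what it did, and copy that" would need to reproduce for a puppet-driven git clone.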
[14:12:20] the db is working [14:12:28] gonna add some more stuff to it though ;p [14:13:02] 2 more collums whihc might take a while, so Im htinking carefull before I make any changes ;p [14:17:54] addshore I don't think adding columns take time [14:18:13] it's just a ddl command [14:18:21] won't really change any data [14:18:30] true, just last time I was adding data too ;p [14:21:58] I think a collum with the date the row was last edited and then a count of the number of links left after each check and potentialy a collum with the number of times it has been checked :/ [14:37:47] addshore: I admire your consistency, but I believe it's spelt "column" :-). [14:38:09] haha! indeed :) [14:41:28] lol :d [14:47:53] legoktm could you run the db parse on en for me? ;p [14:48:01] ok..... [14:48:11] which instance should i run it from? [14:48:18] bnr1 :) [14:48:44] legoktm@bastion1:~$ ssh legoktm@bots-bnr1 [14:48:44] If you are having access problems, please see: https://wikitech.wikimedia.org/wiki/Access#Accessing_public_and_private_instances [14:48:44] Permission denied (publickey). [14:48:48] petan: :( ^ [14:48:52] oh :/ [14:49:00] tell me what to type and where and I'll run it? ;p [14:49:06] legoktm o.O [14:49:14] @labs-user legoktm [14:49:15] That user is not a member of any project [14:49:20] @labs-user Legoktm [14:49:20] Legoktm is member of 4 projects: Bastion, Bots, Editor-engagement, Tools, [14:49:31] let me check [14:49:37] @labs-user addshore [14:49:37] That user is not a member of any project [14:49:40] >.< [14:49:42] @labs-user Addshore [14:49:42] Addshore is member of 6 projects: Bastion, Bots, Huggle, Hugglewa, Proposals, Tools, [14:49:57] * legoktm would like to see someone finish hugglewa [14:50:12] haha, we cant [14:50:23] until the whole openid login e.t.c is done [14:51:13] why not? [14:51:25] just use jquery [14:51:38] legoktm: first of all you shouldn't need to ssh there [14:51:47] oh? 
[14:51:48] you can launch mysql from any bots instance (application one) [14:51:54] BUT [14:51:54] nono [14:52:00] you should be able to do that [14:52:06] im running a script that outputs to a text file [14:52:10] legoktm what exactly you want to do? [14:52:18] log into bots-bnr1 :) [14:52:26] ok such a script can run on any instance? [14:52:29] yes [14:52:36] oh wait lol [14:52:42] but i can run faster when it has more resources [14:52:44] it* [14:52:48] I thought you are talking about bsql1 [14:52:50] nvm me [14:52:52] xD [14:53:15] * legoktm gives petan some coffee [14:54:44] legoktm: can you tell me the reason why you can't ssh [14:54:55] ssh -vvvv bots-bnr1 [14:55:32] there is a public key uploaded in your keys folder [14:55:38] http://dpaste.de/8hAwx/raw/ [14:55:39] so I don't really see a reason :/ [14:58:02] legoktm and you are trying it from bastion right? [14:58:04] other instance work? [14:58:07] legoktm: Eh, do you really store your private key on bastion? [14:58:17] scfc_de I do same [14:58:33] imho it's secure :> [14:58:50] as long as your private key is for labs only [14:58:55] hi yalls, anyone know much about connecting to http on labs instances? i've done it before but am currently having trouble [14:58:55] i do? [14:58:57] hmmm [14:59:18] legoktmfrom logs it looks like your ssh client actually doesn't see your keys [14:59:21] it can't read them [14:59:26] erm [14:59:27] ottomata hi [14:59:30] hiya [14:59:41] i'm just using a normal terminal window [14:59:47] @search proxy [14:59:47] Results (Found 1): socks-proxy, [14:59:50] scfc_de: er…where is it stored? [14:59:51] yeah [14:59:58] i've gotten that to work before [14:59:58] ottomata you mean connecting using proxy? 
[14:59:59] it just isn't right now [15:00:00] !socks [15:00:01] ssh @bastion.wmflabs.org -D ; # [15:00:02] so, simplest example, i have a reportcard.pmtpa.wmflabs [15:00:03] i should probably remove it [15:00:23] i can http request from the instance [15:00:24] but [15:00:26] legoktm remove what [15:00:29] even from a bastion [15:00:32] it times out [15:00:39] otto@bastion-restricted1:~$ curl -v http://reportcard.pmtpa.wmflabs:80 [15:00:39] * About to connect() to reportcard.pmtpa.wmflabs port 80 (#0) [15:00:39] * Trying 10.4.1.55... Connection timed out [15:00:44] petan: my private key [15:01:00] if you remove your private key then no wonder you won't auth... [15:01:05] oh [15:01:07] wait [15:01:11] * legoktm is confused now [15:01:18] so why cant i connect? [15:01:19] are you forwaring your keys? [15:01:26] i can get into normal instances just fine [15:01:27] yes [15:01:40] from my laptop terminal i type in [15:01:44] if you are forwarding it you can remove it from storage [15:01:46] on labs [15:01:47] ssh legoktm@bots-3.pmtpa.wmflabs [15:01:48] otto@reportcard:/etc/apache2/sites-enabled$ curl -v http://reportcard.pmtpa.wmflabs [15:01:48] * About to connect() to reportcard.pmtpa.wmflabs port 80 (#0) [15:01:48] * Trying 10.4.1.55... connected [15:01:51] that gets me in just fine [15:01:57] but not for bots-bnr1 [15:02:09] petan: legoktm: I just use ProxyCommand. No keys stored, no double logins, etc. [15:02:15] right, thats what i'm using too [15:02:21] scfc_de so? [15:02:53] scfc_de that still may be insecure at some point :P [15:03:09] petan: That means I can't help with legoktm's bastion login problem :-). [15:03:31] well i cant login from any instance to bots-bnr1 [15:03:39] i can get from bastion --> bots-3 for example [15:03:45] i can go from laptop --> bots-3 [15:03:51] just not to bnr1. 
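Editor's aside: the ProxyCommand setup scfc_de mentions — no key stored on bastion, no double login — lives in the local `~/.ssh/config`. A sketch; the host pattern and username are taken from the conversation, but the exact config is an assumption, not the documented labs recipe:

```
# ~/.ssh/config on the laptop: hop through bastion transparently.
# The private key never leaves the local machine.
Host *.pmtpa.wmflabs
    User legoktm
    ProxyCommand ssh -W %h:%p bastion.wmflabs.org
```

With this in place, `ssh bots-bnr1.pmtpa.wmflabs` from the laptop tunnels through bastion automatically, and there is nothing sensitive to delete from bastion's homedir afterwards.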
[15:03:52] legoktm did you try using bastion3 or bastion2 [15:04:09] no, il ldo that now [15:04:40] legoktm I will check logs further but I don't see a reason why it doesn't work unless there is a brainsplit again and it show different pubkey on bnr than on other instances [15:04:47] the pubkey is readable on bnr1 [15:04:49] no on bastion2 [15:04:56] legoktm? [15:05:15] i tried from bastion2 and it didnt work [15:05:26] ok, did you change your key recently? [15:05:30] and didnt work on bastion3 [15:05:31] nope [15:05:39] same key since i signed up [15:05:40] mhm [15:05:56] whats wrong with this? O_o "ALTER TABLE iwlinked ADD links SMALLINT, ADD checked DATETIME DEFAULT ON UPDATE CURRENT_TIMESTAMP;" [15:05:57] ^demon did update my key in svn.wikimedia.org though [15:06:07] ok let me compare them [15:06:12] but that update is to the same key [15:06:55] addshore ADD links SMALLINT, checked... [15:06:56] ? [15:07:46] nop [15:08:04] somehting wrong with DATETIME DEFAULT ON UPDATE CURRENT_TIMESTAMP;" [15:09:17] legoktm can you retry now? [15:09:52] yay :) [15:09:54] i'm in! [15:10:24] legoktm I didn't do anything [15:10:29] :o [15:10:32] I just was tailing the log so that I see error [15:10:40] there was no error :/ [15:10:54] lolwut [15:11:06] Accepted publickey for legoktm from 10.4.0.54 port 44967 [15:11:15] can you try several more times? [15:11:19] heheh sure [15:11:20] just to make sure it works [15:11:50] well i just logged in + out 5 times [15:11:58] I see [15:11:59] ahh /me has to specificy default and onupdate [15:12:01] * legoktm thanks petan's magic :) [15:12:08] I think it was automount [15:12:16] how I tried to read your public key [15:12:22] maybe it somehow mounted what wasn't before [15:12:29] interesting [15:12:43] ok so where should i delete my private key from if i'm going to be using proxycommand? [15:12:55] from bastion [15:13:22] where in bastion? [15:13:26] its not in my .ssh [15:13:40] they you don't have the private key there... 
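Editor's aside: the ALTER that addshore asks about fails because `DEFAULT` must be followed by a value before `ON UPDATE` can appear — exactly the "has to specify default and onupdate" realisation above. A form that parses:

```sql
-- Failing: ... ADD checked DATETIME DEFAULT ON UPDATE CURRENT_TIMESTAMP
-- Working (DEFAULT gets an explicit value):
ALTER TABLE iwlinked
    ADD links SMALLINT,
    ADD checked TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;
```

One extra wrinkle for the MySQL versions of that era: before 5.6.5, `CURRENT_TIMESTAMP` defaults and `ON UPDATE CURRENT_TIMESTAMP` were only allowed on `TIMESTAMP` columns, not `DATETIME` — hence the switch of column type in the sketch above.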
[15:13:45] oh [15:13:54] scfc_de: then what were you referring to? [15:14:27] petan, adding collums does take a long time :( [15:14:30] 1.64% [15:14:39] of what [15:14:53] im adding 2 collums, and it goes rather slowly :p [15:15:07] it depends if these columns are keys / indexed / have some default value other than NULL etc [15:15:08] legoktm: Hmmm. It looked to me as if you wanted to store the key on bastion, but according to your log you didn't. So everything seems to be fine. [15:15:34] ok :) [15:15:36] Traceback (most recent call last): [15:15:36] File "xmlscanner.py", line 27, in [15:15:36] import os [15:15:36] ImportError: No module named os [15:16:04] :/ [15:16:23] maybe virtualenv's dont carry accross instances? [15:16:38] virtual env? [15:16:39] what is that [15:16:50] you mean environment? it doesn't [15:16:58] virtualenv is a python thing [15:17:01] er its a python thing [15:17:03] oh [15:17:05] and surely they don't travel across instances [15:17:08] right? [15:17:12] why not? [15:17:26] the directory structure is all from the same place [15:17:28] i mean, it probably stores the env in your homedir…and homedirs are available across instnaces [15:17:35] right, but don't you have to initialize the virtualenv? [15:17:39] i did that [15:17:39] (i've only used it a few times) [15:17:40] ah [15:17:42] nm thne [15:17:44] i will hush [15:17:48] and mine is stored in /data/project/legoktm/py2 [15:17:51] :P [15:17:59] lemme see what happens if i create a new one [15:18:02] aye [15:18:10] petan, any insight on my curl issue? [15:18:12] lol [15:18:25] i can curl from localhost on my instance, but not from a bastion [15:18:26] ottomata mhm I forgot what it was :D [15:18:28] petan: can you install "python-virtualenv" on bots-bnr1? [15:18:35] ottomata of course [15:18:38] ottomata firewall [15:18:43] oh it comes installed? [15:18:47] did you open port 80? 
[15:18:55] yes it is on your project definition [15:19:02] !security [15:19:02] https://labsconsole.wikimedia.org/wiki/Help:Security_Groups [15:19:03] hmmmMMMMM [15:19:12] reading [15:21:14] petan these alter queries are really odd, the instance isnt at maxram or maxcpu or max anything but it still goes really slowly O_o [15:21:31] mhm [15:21:46] http://stackoverflow.com/questions/5677932/optimize-mysql-for-faster-alter-table-add-column [15:22:23] petan, if a group is in a project, all instances in that project are then in that group [15:22:25] correct? [15:22:30] (security group*) [15:22:43] no [15:22:50] you select group when you create instance [15:23:02] can you edit after it is created? [15:23:06] it wasn't possible to be changed later, dunno if it still isn't [15:23:14] AGHHHH [15:23:19] poop scoops [15:23:24] :P [15:23:31] that is mentioned in help [15:23:41] * legoktm nudges petan for "python-virtualenv" on bots-bnr1 ;) [15:23:54] aye i see it [15:23:58] ratso [15:24:00] hm. [15:27:12] legoktm sec [15:27:19] ty [15:27:36] ja, thanks petan :) [15:28:23] legoktm done [15:28:31] thanks [15:31:04] addshore: its running now [15:31:12] [= ty [15:31:15] legoktm are you using some sql? [15:31:23] nope [15:31:25] ok [15:31:29] just scanning a dump [15:31:39] that dump is where? [15:31:41] on which sql [15:31:46] everywhere? [15:31:48] public/datassets [15:31:50] /public/datasets [15:31:50] oh [15:31:52] ok [15:31:57] gluster then [15:32:05] gluster run lag time ;p [15:32:09] *fun [15:36:05] hmm, running this insertinto seems to have the same low cpu and mem usage as the ALTER :/ [16:34:04] [bz] (NEW - created by: Antoine "hashar" Musso, priority: Unprioritized - enhancement) [Bug 45706] shell wrapper to connect to databases - https://bugzilla.wikimedia.org/show_bug.cgi?id=45706 [16:43:44] petan: Is this channel logged somewhere so I can look up something? [16:43:52] (I don't mean the server log.) 
[16:43:55] !logs [16:43:55] logs http://bots.wmflabs.org/~wm-bot/logs/%23wikimedia-labs [16:44:13] petan: Thanks! [16:44:17] yw [17:26:53] oh Gluster! why are you sooo slow [17:29:15] Coren: do you already have ip address range for tools labs (and bot labs), so that we can add them to autoblock whitelist? [17:32:28] Merlissimo: Wouldn't it be okay to whitelist the whole Labs net? I assume other projects may run bots as well. [17:33:01] maybe [17:43:28] !log deployment-prep set a dummy value for wmgTranslationNotificationUserPassword [17:43:31] Logged the message, Master [17:47:58] !log deployment-prep removing all 'aft%' tables to make sure ArticleFeedbackv5 database schema is valid {{bug|45318}} [17:47:59] Logged the message, Master [17:53:43] addshore everything ok so far? [17:55:06] yep :) [17:55:14] doing some more large alteres on my table [17:55:21] 92% through this one [17:55:36] just altered my code to work with the new collums I have added [17:55:45] then just got to add a few million more records [17:56:10] and then I think im ready to run! :D [17:57:48] Merlissimo: That's actually a good question. Because of the NAT involved, I'm pretty sure that every project will look the same, so I think the whole labs needs to be whitelisted. [17:58:32] I think I'm going to remove my primary key and set PRIMARY_KEY(lang, article) [17:59:21] Merlissimo: It may be worthwhile to investigate whether it would be a good idea to have the tool labs get a distinct egress IP, but those are in small supply. [18:00:49] maybe ipv6 only for wmf wiki connections? then ips should not be a problem [18:03:21] tbh, I don't even know what the network layout looks like. It might even be possible for the labs to speak to the squids without ever using public IPs if the rules allow it. [18:08:51] hah petan took 2 hours to run this last alter I did :[p [18:47:35] Coren: can we discuss this interface you are creating for managing resources? 
[18:48:11] Ryan_Lane: Sure, although atm I am at the "Hm. How am I going to go about it phase." :-) [18:48:30] is it going to be a web interface? [18:49:18] Ryan_Lane: Almost certainly. [18:49:22] if so, it should be a part of wiktiech [18:49:26] Ryan_Lane: Unless you had some other idea? [18:49:38] which means it should be a mediawiki extension, or code added to OpenStackManager [18:50:31] what will this interface do? [18:50:42] Hm. That's a nice approach, but then to speak to LDAP exclusively? Seems brittle. [18:51:10] well, it can also write into mediawiki's database [18:51:28] and mediawiki has apis [18:51:59] I'd prefer to avoid custom web apps [18:52:07] Well, it allows managing of the tools users and groups (LDAP-friendly), also creating the directories, setting permissions, putting skeletons in tool homes and webroot. Also fiddles with the sudoers, but that's also LDAPable [18:52:37] users and groups should absolutely be managed by wikitech [18:52:41] I was also thinking of putting basic queue control there for the tool maintainers (start/stop bot) [18:53:18] creation of home directories and skels can be handled by pam_mkhomdir [18:53:21] *homedir [18:53:42] Yeah, sounds like it's entirely wikitechable. [18:53:46] * Ryan_Lane nods [18:53:56] let's work together on a project plan for this [18:54:05] we can likely sprint this and finish it in a week or so [18:54:48] let me start a wikitech page for this [18:54:51] Ryan_Lane: Sounds good. Got a meeting with CT in a few, but I'll ping you right after? [18:54:57] sure [19:06:43] Coren: edit at will: https://wikitech.wikimedia.org/wiki/Projects#Tool_management_interface [19:36:45] hashar: ping [19:36:51] Krinkle: yup [19:37:17] hashar: I'm preparing qunit now as I speak [19:37:27] Krinkle: while you are around, I slightly tweaked the URL for the jslint jobs. Will points directly to the check style result :-] [19:38:16] Krinkle: ah nice :-] [19:38:32] hashar: thanks, I was going to ask you about that. 
I couldn't find where to do it. [19:38:38] Krinkle: did you get phantom.js etc installed on gallium? [19:38:42] hashar: also, the -merge message is still useless as ever. [19:38:51] I can't find in git where that is [19:39:06] it might be hardcoded in Zuul source code :-] [19:39:14] hm.. okay [19:39:18] hashar: So about qunit [19:39:41] that is directly reported by Zuul after it attempts to locally merge the patchset against latest master. We can probably add a Zuul setting to tweak the message. [19:39:44] hashar: We need to install with sqlite, publish in /srv/localhost/qunit, run grunt qunit --qunit-url='http://' and that's pretty much it [19:39:52] good! [19:40:01] hashar: phantomjs has been 'on' gallium since Nov 2012 [19:40:08] It is in grunt-contrib-qunit [19:40:26] hashar: I'm thinking of simply re-using the install-sqlite macros [19:40:34] and instead of copying, just putting a symlink in /srv [19:40:40] e.g. /srv/localhost/qunit/w [19:40:51] so everything is in place [19:41:08] except I'm not sure how to pass the parameters to the install ant thing [19:41:12] does it support that? [19:41:21] probably :-] [19:41:23] e.g. wgServer and wgScriptPath [19:41:48] ahhh [19:41:56] the rest is fine [19:41:57] let me have a look at it [19:42:12] oh, and wgEnableJavaScriptTest of course, but that can be echo >> easily. [19:42:15] and you want to copy the files not a symlink [19:42:20] so can the rest, but that may be less clean. 
[19:42:26] cause the workspace is whipped / changed on each build [19:42:33] hashar: no [19:42:35] hashar: yes [19:42:39] hashar: but it doesn't matter [19:42:42] that's what we wanrt [19:42:44] want* [19:42:49] this is synchronous [19:42:56] it is just like any other build step [19:43:00] there is no point in copying it elsewhere [19:43:02] okkkk [19:43:22] ahh indeed, there will be only one and exactly one of qunit job running [19:43:26] yeah so symlink is fine [19:43:27] :-] [19:43:28] sorry [19:43:28] The reason I don't make document root point to jenkins workspace is to make sure the dirname isn't hardcoded. [19:43:39] there can be multiple in theory, that's fine [19:43:46] the symlink name is unique [19:43:51] http://etherpad.wmflabs.org/pad/p/IntegrationQUnit [19:43:56] has to be, to avoid caching issues [19:44:53] hashar: In there I just run install.php directly [19:45:06] is there any advantage to calling ant? does it do something special we need? [19:45:33] ant is just a wrapper [19:45:33] somethign we might want to abstract [19:45:41] since it already has most of what we need we can use that [19:45:52] but you can well write a grunt task wrapper around install.php [19:45:59] and thus drop ant :-] [19:46:19] ant installdb-sqlite just runs install.php I see, so nothing special [19:46:24] hashar: There is one thing I thought of though [19:46:27] extensions [19:46:40] hashar: It appears you already have this figured out for extensions [19:46:48] yeah via a hack [19:46:49] (install mw core, put extension inside etc.) [19:46:50] correct? [19:46:55] is that re-usable? [19:47:04] that is done in jenkins-job-builder jobs [19:47:23] there is a ugly hack that pass to the extension job the list of extension dependencies [19:47:41] that is in turn passed to some shell script that clones each extension under $WORKSPACE/extensions [19:47:54] extensions-loader.php hm.. 
[19:48:06] then there is another php file which is injected in LocalSettings.php that will include() each of the extension found under $WORKSPACE/extensions [19:48:11] that is not pretty [19:48:33] so potentially we could use the same trick for extensions on qunit [19:48:42] I can handle that part :-] [19:48:49] okay, I'll do it for mediawiki core first [19:48:49] I guess as a first step, focus on mediawiki/core [19:48:54] then do extensions later on [19:48:57] exactly [19:48:59] :-] [19:49:11] you are going to have a HUGE karma boost whenever that is done [19:49:21] random() people have been asking about it over the last few days :-] [19:50:26] hashar: I'm trying to figure out where to put these 3 bash commands (php install.php, symlink, wmfgrunt qunit --qunit-url='';) [19:50:42] oh and remove symlink [19:50:48] Krinkle: either directly in the jenkins job builder template [19:50:50] There's macros, templates, jobs [19:50:58] or in a shell script under /tools or /bin ? [19:51:50] I'd like to keep them related to the context [19:52:02] should be just like we run jshint, right? [19:52:17] I guess :-] [19:52:20] and you'd adapt it later to use the extension hack [19:52:32] if it's in a separate script, it can't call back to other macros and stuff [19:52:47] Hm.. [19:52:49] I'll see [19:53:00] so you could just create a new job-template in mediawiki.yaml [19:53:06] something like '{name}-qunit' [19:53:10] and put the shell commands there [19:53:27] or put the shell commands in one or more macros [19:53:34] and add the macros to the job-template [19:54:00] hashar: I'm looking at the phpunit code, although that still uses ant, the principle is similar: install first, then run it. 
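Editor's aside: of the three bash commands being discussed (`php install.php`, the symlink, `grunt qunit --qunit-url=...`), the symlink is the piece that keeps the document root from hardcoding a Jenkins workspace path — the build publishes itself by repointing a uniquely named link at the current workspace. A self-contained sketch of just that step (both paths are placeholders for `$WORKSPACE` and `/srv/localhost/qunit`):

```python
import os
import tempfile

# Stand-ins for the Jenkins workspace and the qunit web root.
workspace = tempfile.mkdtemp(prefix="jenkins-ws-")
webroot = tempfile.mkdtemp(prefix="qunit-root-")

# Atomically (re)point webroot/w at the current build's workspace, so
# http://localhost/qunit/w always serves the checkout under test.
link = os.path.join(webroot, "w")
if os.path.islink(link):
    os.remove(link)
os.symlink(workspace, link)

print(os.readlink(link))
```

Because the link name is fixed and unique per job, reruns simply repoint it — which is why the workspace being "whipped/changed on each build" is fine here.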
[19:54:35] I can include the installation as just another bash command within -qunit, or should I declare that step as dependency [19:54:41] install itself isn't a separate job [19:55:17] I'll just get it to work, test on test/mediawiki, and have you look at it before I deploy [19:57:56] petan: I could imagine you wanting to read this -- http://www.aosabook.org/en/puppet.html [19:58:58] petan! final alter to my db! just another 2 hours to wait ;p [20:01:56] Krinkle: sounds like a good plan [20:02:16] Krinkle: might need to add a qunit job in zuul layout for test/mediawiki [20:02:21] yeah [20:02:30] I'll add all the ones we have for mwcore [20:04:43] Ryan_Lane: All yours. [20:05:53] Oh, Coren I wanted to ask some questions about Tools... But I'll wait if Ryan_Lane is around. [20:06:41] Darkdadaah: Well, actually, if you want all of my attention it's even better if you use the time while Ryan doesn't yet have it. :-) [20:07:10] Right :P Then: is it possible to join the Tools project? [20:07:36] Yes it is. :-) What's your wikitech username? [20:07:43] Darkdadaah [20:08:16] Is it possible yet to test webtools? [20:08:46] I may not even need a database for some of them. [20:09:17] Darkdadaah: Yes, as http://tools.wmflabs.org/wikilint/cgi-bin/wikilint proves :-). [20:09:35] Oooh [20:10:02] Be careful, though: "Webtools" is the name of another Labs project. Lots of possible misunderstandings :-). [20:10:19] Yes, I'll be careful. [20:11:20] (Spent quite some time recently wondering why my files on webtools-login and tools-login were different :-).) [20:11:38] Héhé [20:12:51] Darkdadaah: You're in. [20:13:06] Thank you :) [20:13:11] Darkdadaah: I need to create the actual tools manually atm; pls to give me names? [20:13:46] "anagrimes_web" would be good [20:14:12] That's a mouthful. Is there an anagrimes_somethingelse too? [20:14:31] I.e. 'anagrimes' work? [20:15:03] Anagrimes is already a set of scripts, used to generate some of the data that anagrimes_web offers. 
[20:15:21] Maybe I should find another name entirely. [20:15:48] Wouldn't it make sense to see it as two 'parts' of the same tool then? [20:16:06] Or could they eventually get different maintainers? [20:18:05] (The reason anagrimes_web is a little long is that the username would then be local-anagrimes_web which is a pain) [20:18:11] Hm let's go with "anagrimes" then. [20:19:38] Yes, it makes sense to have one single "tool", even though there are different parts. [20:19:45] Also, otherwise the URL would include _web which feels repetitive. [20:19:49] Darkdadaah: All done. [20:20:04] Damianz: Please read the not-documentation there: http://www.mediawiki.org/wiki/Wikimedia_Labs/Tool_Labs/Help [20:20:20] Darkdadaah:: Please read the not-documentation there: http://www.mediawiki.org/wiki/Wikimedia_Labs/Tool_Labs/Help [20:20:27] &#^ autocompletion [20:21:07] Darkdadaah: And yes, writing read documentation is on my todo for the week. :-) [20:21:21] Darkdadaah: In the meantime, don't hesitate to ask. [20:21:27] Coren: Want to buy me pizza? ;) [20:21:32] Okay, I'm in :) [20:22:23] Damianz: I'll buy you a slice next time you're within physical reach. :-) [20:22:49] So summah owes me beer, coren pizza and ryan a t-shirt... I'm totally sorted for ams [20:26:51] Coren: no database for now? [20:27:32] Darkdadaah: Not yet. It's not immediately clear whether there is also going to be a local database in addition to the replica one, and mysql sucks hairy slugs through a paper straw on virtual boxen. [20:27:51] Darkdadaah: So either way, a physical box would be needed. [20:29:28] But I'll talk with Ryan today and see if we can arange for a local mysql instance as a transition measure. Its performance would be teh suck, but it would be a reasonable stopgap. [20:30:20] performance? 
this is labs [20:30:50] That would be much appreciated :) [20:44:43] Change on 12mediawiki a page Wikimedia Labs/Tool Labs/TODO was modified, changed by MPelletier (WMF) link https://www.mediawiki.org/w/index.php?diff=655218 edit summary: [-19] wip [20:55:04] Coren: I have several small but independent tools (usually just one html page + js, no db), do I need to create a project for each one, or can I have a single "toolbox" project? [20:56:06] Darkdadaah: There's no hard and fast rule; I should say that the dividing line should be "what is likely to be self-contained enough that it could get different maintainer(s)" [20:56:26] Darkdadaah: There is no "cost" to a tool, really. [20:56:56] Coren: create a local database? [20:56:58] that does what? [20:57:08] holds user databases? [20:57:20] Ryan_Lane: Storage for the tools that don't otherwise need to interact with the replicas. [20:58:03] My database would be in this case. [20:58:19] Coren: yeah [20:58:25] Ryan_Lane: Yeah, "user databases" in the toolserver sense. [20:58:28] may be sane to have an instance for that now [20:58:44] * Coren doesn't like that term since it sounds like a database /of/ users. [20:58:53] we'll likely miss our target date for that [20:59:05] Boooo! [20:59:08] I hate missing target dates [20:59:17] hey, we've only missed a couple in the roadmap ;) [20:59:25] since project start. heh [21:00:00] Darkdadaah: And here is our answer. I'll create a DB instance and make it available to the tools then. [21:00:24] Coren: how are you planning on handling authentication for it? [21:00:25] That's good to hear :) [21:00:44] we're going to need to manage auth for the replicated databases, too [21:01:50] Ryan_Lane: ident [21:02:10] Coren: ident? [21:02:56] Use identd. The usernames are trusted on the project, so they should suffice. If you can sudo to that user, you can access its databases. [21:03:35] heh. 
identd is totally spoofable [21:04:32] also, I'm pretty sure that won't actually work for databses [21:04:34] *databases [21:04:41] iff you don't trust your roots. [21:04:46] JFTR: Toolserver uses password auth. Don't know if this has some deeper reasoning behind it. [21:04:54] password auth is easiest [21:05:11] Coren: also, users won't have access to the database servers themselves [21:05:20] just via mysql client [21:05:50] Ryan_Lane: If they had access to the database servers, then you wouldn't need identd. You could just use native auth. :-) [21:06:01] I'm not seeing how identd could work [21:06:10] does mysql even support that? [21:06:15] Ryan_Lane: We could do password, but I dislike adding yet another set of credentials to manage. [21:06:42] Ryan_Lane: IIRC, there was an auth module for identd floating around; but it's trivial enough that I could write one in an hour at need. [21:06:46] we could do ldap auth, but I don't like the idea of people sticking their ldap password into a file [21:06:57] s/module/plugin/ [21:07:06] Couldn't you add another PW field to LDAP? [21:07:26] not really [21:08:16] Personally, I'd always use kerberos. :-) [21:08:33] kerberos is a pain in the ass [21:08:35] On Toolserver, MySQL and LDAP passwords (the latter used for ... account renewal?) are different. [21:08:44] yeah [21:08:56] my thought was to automatically generate credential for users [21:09:08] *credentials [21:09:33] Ryan_Lane: But then you still have the credentials-in-a-file problem. [21:09:40] we'll have that anywayt [21:09:42] *anyway [21:09:52] Why so? [21:10:13] do you really think people are going to type their passwords in? [21:10:37] tools will have the password embeded in the config [21:10:39] bots will too [21:10:46] users will put it into their .my.cnf [21:11:30] Ah, you mean bot credentials for the projects. [21:11:36] I meant credentials to access the database. 
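(An aside on the identd idea being debated above: an RFC 1413 lookup is just a plaintext question sent to TCP port 113 of the connecting host, which is exactly why the answer is only as trustworthy as that host's root. A minimal sketch follows; the hostnames and ports are purely illustrative, not anything from the actual labs setup.)

```python
import socket

def parse_ident_reply(line: str):
    """Parse an RFC 1413 reply such as
    '3306, 54321 : USERID : UNIX : local-anagrimes'.
    Returns the claimed username, or None on an ERROR reply."""
    parts = [p.strip() for p in line.split(":", 3)]
    if len(parts) == 4 and parts[1] == "USERID":
        return parts[3]  # whatever the remote identd claims -- unverified
    return None          # ERROR : NO-USER, HIDDEN-USER, etc.

def ident_lookup(host, server_port, client_port, timeout=5.0):
    """Ask the remote identd who owns the connection coming from
    client_port to our server_port. Any host that runs its own
    (or a lying) identd can answer whatever it likes."""
    with socket.create_connection((host, 113), timeout=timeout) as s:
        s.sendall(f"{client_port}, {server_port}\r\n".encode("ascii"))
        return parse_ident_reply(s.recv(1024).decode("ascii", "replace"))
```

(The spoofability objection is visible in the code itself: the username is just a string the remote end sends back.)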
[21:11:38] and for users, too [21:11:43] that's also what I mean [21:11:56] people are going to stick those credentials into files [21:12:17] Like on toolserver then? [21:12:19] Well, yeah, but not if we trust the userid -- which is the point. :-) [21:12:25] we can't [21:12:38] Why not? [21:12:38] you assume the replicated databases are just for the tools project [21:12:45] they aren't [21:12:47] Oooooh. Wait! [21:13:00] You mean for the /replicated/ DB [21:13:06] I meant for the project-local DB! [21:13:25] my goal is to have per-project database creation [21:13:29] Though I can see the point of having the same mechanism for both now that I think about it. [21:13:40] for "user" databases [21:14:17] Ah! Using the same infrastructure rather than project-local resources. [21:14:22] yep [21:14:23] Okay, that makes sense then. [21:14:35] identd is completely worthless in that case. [21:14:59] And you guys don't have a KDC, so password auth is the only reasonable way left. [21:15:10] (also, we don't want a KDC ;) ) [21:15:21] Why wouldn't you want a KDC? [21:15:21] mysql doesn't really support kerberos auth anyway [21:15:31] kerberos is a pain in the ass to deal with [21:15:32] mysql speaks PAM. PAM speaks krb5. :-) [21:15:43] Kerberos heals all and will turn water into wine. [21:15:50] well, mariadb speaks krb5 [21:15:57] err [21:16:00] Also walks on water pre and post wine-conversion. :-) [21:16:02] mariadb support pam [21:16:20] and maintaining kerberos sucks ;) [21:16:58] A well kerberized infrastructure is secure, and emits puppies and rays of kitteny love. 
[21:17:29] kerberized infrastructure /can/ be more secure [21:17:39] in our case I don't think it adds any security [21:17:44] Note the discerning use of 'well-' :-) [21:17:54] it's not a matter of well [21:18:10] in our environment, there's no need to type in a password on any instances [21:18:28] that changes if we use kerberos [21:18:49] and it would need to be the same password used for your labs account [21:18:51] What? No it doesn't. That's what tickets are /for/ :-) [21:18:56] which is shared with a bunch of other services [21:19:00] And service tickets. :-) [21:19:01] how do you get the tickets? [21:19:39] Normally, you'd get the ticket on your workstation. The same way you have to decrypt your ssh key with a passphrase. [21:19:50] heh. that'll never work for us ;) [21:19:55] it would need to happen at the bastion [21:19:58] Oh? [21:20:21] it's hard enough teaching folks how to use ssh keys [21:20:21] How come? [21:20:36] Hah. Human problems. [21:20:39] yes [21:20:45] the hardest of all problems :D [21:21:15] we'd also need to have our kerberos servers open to the world, then too [21:21:19] * Ryan_Lane shudders [21:21:30] hm. weird comma, placement [21:21:51] anyway, we need to come up with some method of auth :) [21:21:56] So we do. [21:22:01] generated password credentials are easiest [21:22:17] and likely generated per user and project [21:22:26] Hm. [21:22:37] And what about per-project-users? [21:22:49] that would also need to get done, yes [21:23:15] we're going to have a little bit of network downtime in labs [21:23:18] soon [21:23:19] Ryan_Lane: Central repo, then. LDAP seems the logical place for it regardless. [21:23:37] Coren: we could actually store it in a database on the database server [21:23:57] storing it in ldap isn't very easy [21:23:59] Ryan_Lane: That seems oddly circular. :-) [21:24:05] Coren: right? :) [21:24:26] I can't think of a simple way to do this in ldap [21:24:45] Ryan_Lane: Couldn't we store the salted hash there?
[21:26:21] not really [21:26:29] we need to be able to give users their passwords [21:27:01] Or not. Whichever tool generates it can be accessible to the user. It gives them the plaintext and stores the hash. [21:27:42] That means they can't recover it, but they can regenerate it. [21:27:46] if we wanted to do that, then we wouldn't need to store it anywhere [21:27:54] we could just add the grant and give the password [21:28:09] Heh. Annoyingly good point. Stop being right. [21:28:21] we could also allow ssh just for that command, as well [21:28:34] :) [21:29:05] this table alter is taking longer than I thought, 2 hours and it is 7% of the way through >.< [21:29:38] Coren: that means we'll need to have authorized keys and ssh keys for the tool users, though [21:29:40] That seems to be the simplest solution; do the grant on database creation. [21:29:43] which is kind of annoying [21:30:04] once we get salt-api working, we can avoid this [21:30:22] Ryan_Lane: But those are already available on a shared filesystem anyways, aren't they? [21:30:31] not for tool users [21:30:42] Ah. [21:30:46] * Coren ponders. [21:30:59] we can do that, though [21:31:10] I think your user database of user databases might be the simplest solution. [21:31:23] I'd like to think about it a little more [21:31:50] because we'd need some way of updating that database [21:31:52] KK. I'll hold off on the local database then, because I'd rather use whatever scheme you decide upon. [21:31:55] and giving the passwords to users [21:32:06] which brings us back to the same set of problems [21:32:14] Ryan_Lane: Presumably, the same process that /creates/ the user database would update that table. [21:32:25] but then how does the user get the password? :) [21:33:04] hm [21:33:13] !logs [21:33:14] logs http://bots.wmflabs.org/~wm-bot/logs/%23wikimedia-labs [21:33:14] Autogenerated .mysqlrc?
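(The "just add the grant and give the password" flow above, combined with the autogenerated .mysqlrc idea, could look roughly like the sketch below. Every concrete name in it — the local- prefix, the __p database suffix, the password policy — is invented for illustration, not the actual Tool Labs scheme.)

```python
import secrets
import string

def generate_db_credentials(tool_name: str, length: int = 16):
    """Generate a random password and the GRANT to run at database
    creation time, as proposed above. The 'local-' username prefix is
    taken from the discussion; the '__p' database suffix is a guess."""
    alphabet = string.ascii_letters + string.digits
    password = "".join(secrets.choice(alphabet) for _ in range(length))
    user = f"local-{tool_name}"
    dbname = f"{user.replace('-', '_')}__p"
    grant = (f"GRANT ALL PRIVILEGES ON `{dbname}`.* "
             f"TO '{user}'@'%' IDENTIFIED BY '{password}';")
    return user, password, grant

def my_cnf(user: str, password: str) -> str:
    """Render the autogenerated client config, so the mysql command-line
    client picks the credentials up without prompting."""
    return f"[client]\nuser={user}\npassword={password}\n"
```

(Since the grant embeds the plaintext once at creation time, nothing needs to be stored server-side beyond MySQL's own hashed credentials — which is the "we wouldn't need to store it anywhere" point made above.)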
[21:33:27] well, if the web interface was managing it, then we could show it to the user through the interface [21:33:54] that's not a great approach, though [21:34:09] Ah! True. You don't want to create the database for a new tool without a positive action from the user anyways; that'd work. [21:34:11] bleh. why's this have to be so hard? :D [21:34:43] Of course, with kerberos, we wouldn't be having this conundrum. "Just use your TGT". :-) [21:35:17] * Coren promises that, henceforth, he'll only bring up how Kerberos solves every problem once weekly at most. :-) [21:35:24] would service users need a service ticket, then? [21:35:34] Ryan_Lane: That'd be SOP [21:35:40] * Ryan_Lane nods [21:36:13] Though there is nothing that prevents service users from having 'normal' principals. [21:36:22] we're possibly about to lose network [21:36:42] Leslie is doing the network bonds [21:38:33] AFAICT, the easiest way -- if not perfect -- is to create/manage databases through the wikitech interface and have /it/ give the credentials to the user. But that maps 1:1 with the Tools models, maybe not so much with others. [21:39:15] On the user/tool management page have a "I need a db and what are my db creds" section. [21:43:29] Coren: my only concern there is, how do we only give the password away to the correct users? [21:43:48] must be in a tools' group to get it? [21:44:51] Ryan_Lane: I'd say so. By definition, the group members are the maintainers and have access to the files (where that password will end up) anyways. [21:45:00] * Ryan_Lane nods [21:46:07] Hm. I just realized that the wikitech interface to tool management has "project-local users" as a prereq. [21:46:50] Right now I'm doing it with a script. You know the kind. for host in...;do ssh sudo adduser;done. :-) [21:47:25] * Ryan_Lane nods [21:47:39] yeah, all of this assumes we're doing project user management [21:47:45] that's why I added the script ;) [21:47:46] err [21:47:48] the sprint [21:48:41] hello!
[21:48:47] whom do I poke to get a public address? [21:49:03] Ryan_Lane: ^ [21:49:17] or rather, what do I need to do to get mobile-reportcard.wmflabs.org? [21:49:25] you need a public ip :) [21:49:33] right. [21:49:40] * YuviPanda pokes Ryan_Lane for a public ip [21:49:51] can i haz? [21:50:38] we're making some network changes right now [21:50:42] they may bring things down [21:50:48] give us a bit to finish this up [21:50:52] ah, oaky [21:50:54] but will make it stronger in the end! [21:50:58] we can rebuild it... [21:50:58] indeed ;) [21:51:07] * YuviPanda has the technology [21:51:12] Ryan_Lane: I'll poke you in ~30 mins? [21:51:15] "we can make it faster, stronger, we have the technology" [21:51:18] or should it be more like '3 hours'? [21:51:26] "…. we just don't want to spend the money" [21:51:35] :) [21:52:00] YuviPanda: 30 mins, hopefully [21:52:09] Ryan_Lane: oh, nevermind. ottomata had perms and created it [21:52:20] sorry for the bother. [21:59:45] ah. cool [21:59:52] no worries [22:02:17] Ryan_Lane: ssh mobile-reportcard.pmtpa.wmflabs gives me [22:02:20] If you are having access problems, please see: https://wikitech.wikimedia.org/wiki/Access#Accessing_public_and_private_instances [22:02:20] nc: getaddrinfo: Name or service not known [22:02:28] don't ssh directly into it [22:02:43] wait [22:02:46] is that an instance? [22:02:48] I do have the ssh config setup. -v tells me I am sshing through bastion [22:02:54] is that actually the instance name? [22:03:11] that's what milimetric said he created [22:03:25] did he create a public hostname? 
[22:03:29] if so, it won't end in wmflabs [22:03:34] http://mobile-reportcard.wmflabs.org/ [22:03:34] it'll end in wmflabs.org [22:03:42] notice you're using private dns [22:03:49] also, don't ssh directly into the public ip [22:03:56] ssh into the private hostname [22:04:02] which may not match the public hostname [22:04:05] Ryan, can you point me at the git repo where the current wikitech stuff is so that I can see how it's done? [22:04:21] Coren: mediawiki/extensions/OpenStackManager [22:04:24] in gerrit [22:04:30] kk [22:04:34] Ryan_Lane: yeah, poking him to see the private one. (He is currently in a meeting) [22:04:37] and LdapAuthentication extension [22:04:52] YuviPanda: what project is this [22:04:53] ? [22:05:05] kripke [22:05:09] (I'm not a member yet) [22:05:21] that's analytics, I think [22:05:30] yeah [22:05:32] it is [22:05:46] well, if you aren't a member, you can't ssh in [22:06:34] so 1. private hostname is different 2. I needed to be added to member [22:06:44] yep [22:06:46] (I dunno why i didn't catch 2) [22:06:50] you may be able to ssh into the public hostname [22:06:53] but it's bad practice [22:07:05] not all instances will let you do that [22:07:21] (I'd really prefer that none did ;) ) [22:08:19] I'm having them add me to the project, which should solve both [22:08:25] * Ryan_Lane nods [22:10:00] Ryan_Lane: all done. Thanks for the help :) [22:11:48] YuviPanda: great. yw [22:15:06] !log integration Upgrading npm from 1.1.39 to 1.2.13 on integration-apache2 (sudo npm install -g npm) [22:15:08] Logged the message, Master [22:39:38] Ryan_Lane: seems we have yet another split brain on labs /home :( [22:39:44] [2013-03-04 22:37:58.428825] W [afr-open.c:213:afr_open] 0-deployment-prep-home-replicate-1: failed to open as split brain seen, returning EIO [22:39:45] [2013-03-04 22:37:58.428863] W [fuse-bridge.c:1948:fuse_readv_cbk] 0-glusterfs-fuse: 7009: READ => -1 (Input/output error) [22:40:00] hashar: which directory? 
[22:40:07] Ryan_Lane: /home/l10nupdate/.ssh/authorized_keys [22:40:15] Ryan_Lane: and possibly /home/l10nupdate/.ssh [22:40:30] well, neither one of those would actually be used for ssh [22:40:39] also, they are dynamically generated [22:40:41] via puppet [22:40:47] so, just rm -Rf .ssh [22:40:51] and re-run puppet [22:40:52] okk [22:41:02] it'll be easier than me fixing the split brain [22:41:07] :-] [22:41:17] was not sure what to do with it hehe [22:44:10] Ryan_Lane: my original bug report has been made a dupe of https://bugzilla.wikimedia.org/show_bug.cgi?id=45609 which lists some other file having an issue: /home/ajentzsch/wikidata/core/.git/index [22:44:22] -_- [22:44:33] that's not a dupe [22:44:34] imo [22:44:54] no idea :-] [22:44:56] [bz] (NEW - created by: silke.meyer, priority: High - normal) [Bug 45609] Input/Output errors in a /home directory - https://bugzilla.wikimedia.org/show_bug.cgi?id=45609 [22:45:08] at least puppet is happy now! [22:45:08] heh [22:46:49] hashar: are you still in San Francisco? [22:47:00] chrismcmahon: yes sir [22:47:09] chrismcmahon: for the rest of the week. Flying back on saturday 9th [22:49:00] hashar: great, I think there's another opportunity to use beta, greg-g might be speaking with you soon. [22:49:33] chrismcmahon: we'll talk to each other next week, need to formalize a bit I guess [22:50:16] * sumanah listens happily [22:52:05] hashar: it was regarding E3's use of betalabs as a replacement for their current server, which they treat as a sort-of production-like environment.
[22:52:29] greg-g: I briefly talked about it with them this morning [22:52:38] guess we need some kind of workflow specification [22:52:45] "how my idea lands up in production" [22:53:14] that would be something like: write -> gerrit -> deploy on an instance -> test it out -> land it in master (disabled) -> enable on testwiki -> generalize [22:53:31] with 'beta' in between 'land it in master' and 'enable on testwiki' [22:53:46] greg-g: hashar: chrismcmahon - incidentally, what's the current status re Fundraising using beta cluster for some of their exploratory and automated browser testing? [22:57:05] sumanah: I think you stumped us all, I'm not aware of interest in beta from Fundraising, or if I am it's way off my radar for right now at least. [22:57:25] They're interested, chrismcmahon - I talked with Jeff Green on Friday and they wanna talk to you [22:57:39] awesome, the more the better [22:59:12] chrismcmahon: so, my (unsolicited!) advice - set up a chat with them [22:59:41] noted, on the TODO list [23:00:10] cool [23:09:42] LeslieCarr: http://www.stgraber.org/2012/01/04/networking-in-ubuntu-12-04-lts/ [23:10:39] warning, virt2 network going down for a short while [23:11:02] this will cause a short network outage for all public traffic [23:11:09] internal traffic should continue operating [23:15:25] Unable to parse the feed from https://bugzilla.wikimedia.org/buglist.cgi?chfieldfrom=-4h&chfieldto=Now&list_id=151044&product=Wikimedia%20Labs&query_format=advanced&title=Bug%20List&ctype=atom this url is probably not a valid rss, the feed will be disabled, until you re-enable it by typing @rss+ bugzilla
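(For reference, the "ssh through bastion" setup used in the 22:02 exchange above is typically an OpenSSH client config along these lines. The hostnames match the ones mentioned in the log, but the username is a placeholder and the exact stanza is a sketch, not the documented labs configuration; `-W` needs OpenSSH 5.4 or later.)

```
# ~/.ssh/config -- illustrative; substitute your own wikitech shell username
Host bastion.wmflabs.org
    User yourname

# Reach private labs instances by tunnelling through the bastion;
# note you ssh to the *private* hostname, not the public one.
Host *.pmtpa.wmflabs
    User yourname
    ProxyCommand ssh -W %h:%p bastion.wmflabs.org
```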