[00:06:09] Ryan_Lane: are instances supposed to take a minute or more to reboot?
[00:06:26] Ryan_Lane: specifically, if I add the nfs role to a machine, and reboot it, will it take a while to turn up?
[00:06:29] or is the machine fucked?
[00:06:49] you could check console output via wiki
[00:07:08] mutante: it's at the login screen
[00:07:10] forever now
[00:07:17] Ubuntu 12.04.2 LTS multimedia-alpha ttyS0
[00:07:20] multimedia-alpha login:
[00:07:36] but it's frozen? ehm.. doesn't sound right then
[00:07:41] by forever I mean about 2 minutes
[00:07:56] this is https://wikitech.wikimedia.org/wiki/Nova_Resource:I-00000810 if that's any help
[00:08:19] Coren: ^
[00:08:46] nevermind, seems to have come up :)
[00:08:58] The initial puppet run on boot delays the start of SSH, and can take several minutes.
[00:09:18] YuviPanda: it generally takes a while. yeah
[00:09:25] yeah, it's on now.
[00:09:25] because of what Coren says :)
[00:13:52] Ryan_Lane: Coren I added the appropriate role to the instance (role::labsnfs::client)
[00:13:59] Ryan_Lane: still mount tells me
[00:13:59] projectstorage.pmtpa.wmnet:/multimedia-home on /home type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
[00:14:07] I restarted it and waited for the puppet run.
[00:14:11] should I restart again?
[00:14:31] YuviPanda: You had one restart too many. You could have just run puppet then rebooted. :-)
[00:14:42] Coren: augh, right.
[00:14:47] Coren: so now I need to restart *again*
[00:14:54] But yeah, you need to reboot once after the puppet change takes; autofs doesn't like the change.
[00:14:55] Coren: I'm an idiot.
[00:21:21] Coren: Ryan_Lane ah, different troubles
[00:21:23] puppet fails
[00:21:28] err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find class generic::packages::git-core for i-00000810.pmtpa.wmflabs on node i-00000810.pmtpa.
[00:21:32] err: Could not retrieve catalog; skipping run
[00:21:48] I've seen this occur intermittently before. Just try again.
[00:22:08] eww
[00:22:32] well
[00:22:33] now err: Could not retrieve catalog from remote server: Error 400 on SERVER: Duplicate definition: Apache_site[000-default] is already defined in file /etc/puppet/manifests/webserver.pp at line 100; cannot redefine at /etc/puppet/manifests/webserver.pp:54 on node i-00000810.pmtpa.wmflabs
[00:22:39] (I got rid of that definition)
[00:23:03] Clearly not. :-)
[00:23:16] I got rid of generic::packages::git-core
[00:24:31] fuckin imbeciles.
[00:24:34] the power went out again
[00:24:35] gaaah
[00:24:54] Coren: ah, works now. I had to get rid of the apache, php, php-mysql things that have been there by themselves
[00:26:26] Coren: I guess puppet has never actually run on that instance before
[00:26:35] Anyone know what's up with deployment-search01?
[00:43:40] Ryan_Lane: andrewbogott_afk is role::mediawiki-install::labs something I should use?
[00:43:48] or will it be deprecated for code from vagrant?
[00:43:51] um
[00:43:54] no idea :)
[00:44:35] extradite tells me it is mostly andrewbogott_afk's
[00:45:23] i would have expected it to be /modules/mediawiki_singlenode?
[00:45:29] meanwhile
[00:46:16] mutante: it is in modules/mediawiki_singlenode
[00:46:26] mutante: just that the role is elsewhere
[00:46:26] /nich ryuch_
[00:47:02] ah, i don't see it in ./roles/ though
[00:47:16] eh, manifests//role/
[00:47:30] mutante: manifests/role/labsmediawiki.pp
[00:47:51] got it, yea, class role::mediawiki-install-latest::labs sounds like it
[00:48:03] sounds like the one i'd try
[00:48:14] mutante: yeah, I'm using it right now :)
[00:48:27] mutante: question is, there's a very nicely done mediawiki-in-puppet thing in mediawiki-vagrant
[00:48:36] mutante: and I'm wondering if this will go away in favor of that at some point
[00:48:56] for vagrant you'd have to ask ori afaik
[00:49:18] true, true.
more like 'will vagrant's puppet stuff replace *this*' than the other way around
[00:49:31] since I guess this predates that, but that is more fleshed out now
[00:50:13] i dont know, my guess is they will coexist for a while and you'll have all the choices..
[00:50:25] hmm, right :D
[00:50:33] not as many as you have with setting up Apache :)
[00:50:40] hehe :P
[00:50:50] I do hope that the vagrant stuff replaces this, though :)
[00:54:41] http://tools.wmflabs.org/mzmcbride/
[00:54:48] That doesn't correspond to public_html?
[00:55:37] Elsie: should
[00:55:41] Elsie: do you have an index.something?
[00:55:56] Elsie: no directory listing by default
[00:56:00] mzmcbride@tools-login:~/public_html$ pwd; ls
[00:56:00] /home/mzmcbride/public_html
[00:56:00] index.html
[00:56:15] Elsie: aah, is that your username?
[00:56:22] Yes.
[00:56:30] Elsie: only tools have web access. You need to create a tool first
[00:56:35] and then put it in the public_html of that tool
[00:56:38] mzmcbride@tools-login:~/public_html$ whoami
[00:56:38] mzmcbride
[00:56:46] tools.wmflabs.org//
[00:56:56] I can't have a personal home?
[00:57:08] serving to the web? I don't think so.
[00:57:10] How do I make a tool?
[00:57:18] moment
[00:57:23] Just one!
[00:57:25] Elsie: http://tools.wmflabs.org/
[00:57:32] 'create new tool' to the right of 'hosted tools'
[00:57:44] Elsie: will take you to wikitech, assuming you're logged in :)
[00:57:55] I wasn't.
[00:58:11] "Add service group"?
[00:58:13] yeah
[00:58:19] service group == tool
[00:58:35] !toolsdoc
[00:58:35] https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help
[00:58:40] there's also that, Elsie
[00:59:01] http://tools.wmflabs.org/mzmcbride/
[00:59:03] Easy enough.
[01:00:33] I made my home dir 755.
[01:00:35] I think that's fine.
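The 755 mode Elsie mentions is what lets the webserver, running as a different user, traverse the directory and read the files in it. A self-contained illustration using a temporary directory (nothing here touches Tool Labs):

```shell
# 755 = rwxr-xr-x: the owner can write; everyone else (including the
# webserver user) can enter the directory and read files inside it.
d="$(mktemp -d)/public_html"
mkdir -p "$d"
chmod 755 "$d"
stat -c '%a' "$d"   # prints 755 (GNU stat)
```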
[01:00:49] Elsie: The difference being that nobody will ever be given credentials to your own account, whereas service groups can be moved from user to user as needed; hence all public stuff needs to be in one of them.
[01:01:18] It's a free host. Surely I should consider everything public...
[01:01:32] Yeah, the default is just protective-paranoid for people who don't really get Unix file permissions and might not realize.
[01:01:43] * Elsie nods.
[01:01:53] ssh -q is a beautiful thing.
[01:02:21] Elsie: Different expectations; your personal account is yours and we avoid playing in it.
[01:02:37] Why you no like my ascii artz?!
[01:02:48] I like a quiet login.
[01:04:40] <^d> -q is a must for git.
[01:04:58] <^d> Especially for batch operations :)
[01:06:11] And annoying messages that don't respect .hushlogin.
[01:06:15] <^d> Elsie: I made you elasticsearch redirects :)
[01:06:52] I saw. Thanks!
[01:07:03] I read the wikitech page.
[01:07:04] It was amusing.
[01:13:31] Coren: access.log records 127.0.0.1 instead of an external IP?
[01:14:59] Elsie: Right; this allows the privacy policy to work on Tool Labs.
[01:15:57] Which privacy policy?
[01:16:15] We'll expose developer IP addresses, but not user IP addresses?
[01:16:26] The general Wikimedia one; in particular it means that linking to a tool from a project doesn't need a disclaimer interstitial.
[01:16:53] I thought there was a separate privacy policy for Labs.
[01:16:57] Elsie: That's because, as a dev, you are bound by the labs TOU -- something which random end users aren't.
[01:17:08] Hmmm.
[01:17:09] Confusing.
[01:17:20] Elsie: For labs in general there is, but tool labs doesn't so that tools can be used directly from projects.
[01:17:40] It's strange that the access log uses a home IP address.
[01:17:43] https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Rules <-- rules for devs.
[01:17:44] Rather than simply omitting the data.
[01:18:03] Elsie: To avoid breaking tools that expect the log to be in standard common apache format.
[01:18:13] > not your personally account.
[01:18:44] fix't
[01:19:38] Those rules were written less than a week ago.
[01:19:40] Such that they are.
[01:20:15] Well, there were drafts up and about for quite some time; this is the "real" version from legal.
[01:20:28] "Such that they are"?
[01:20:57] I like 'em. They're not overly complicated and fairly liberal.
[01:22:45] And they got enough flexibility in them that we can make exceptions when needed.
[01:26:17] I'm glad you're happy.
[01:26:32] o_O Do you have issues with them?
[01:35:21] Coren: ping
[01:35:31] CP678|iPhone: Pong.
[01:35:42] How are deleted edits coming?
[01:35:46] Did you ever get my /msg about why xtools was broken? :-)
[01:36:08] CP678|iPhone: I dunno, I need to bug Luis about it; thanks for reminding me to remind him. :-)
[01:36:19] s/Luis/Asher/
[01:36:38] Can't blame Luis now. :-)
[01:37:20] Coren: pfft. you can *always* blame the lawyers
[01:37:37] !ping
[01:37:37] !pong
[01:37:41] okay, my network works
[01:38:10] Coren: no I have not.
[01:39:15] CP678|iPhone: Wasn't your fault; some stray query had managed to lock the enwiki.page table for days without anyone noticing; obviously the edit counter didn't fare well.
[01:39:32] CP678|iPhone: The errors remain in the logs, but they're obviously not fatal otherwise.
[01:39:36] ?????
[01:39:47] You mean the counter hung up?
[01:40:11] * Coren nods
[01:40:18] Which made it spawn a new one at every request until the webserver was crushed under the memory strain.
[01:40:39] * Coren is working on a nagios alert for stuck table locks.
[01:41:24] Weird. Everything is falling apart.
[01:41:38] What is?
[01:42:08] Cyberbot, xtools, labs, etc...
[01:42:46] Everything works for me that I can see.
[01:43:02] Could you do me a favor?
[01:43:08] Sure?
[01:43:32] You can initiate tasks under Cyberbot, right?
[01:43:44] Yeah.
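Coren's point at 01:18:03 is that tools parse the Apache common log format positionally, so the first field must hold *some* IP-shaped value, even if it is only 127.0.0.1. A self-contained illustration (the log line itself is made up):

```shell
# A made-up line in Apache common log format; field 1 is the client IP,
# which Tool Labs replaces with the loopback address.
line='127.0.0.1 - - [01/Sep/2013:01:13:31 +0000] "GET /mzmcbride/ HTTP/1.1" 200 1234'
echo "$line" | awk '{print $1}'   # prints 127.0.0.1
```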
[01:44:08] Can you count how many continuous tasks are running in the queue?
[01:44:51] I see 12 continuous, and a single task
[01:45:16] What's going on with RfX tally?
[01:45:24] It seems to be stuck.
[01:45:45] It seems to be running. Want me to go look at what its process is doing?
[01:45:55] Yes please.
[01:49:13] CP678|iPhone: It's spinning in userspace. Lemme check its logs.
[01:50:07] Ah, there are no timestamps. The last thing I see in the logs is the GET for Getting page info for Wikipedia:Requests for adminship/Procrastinator16..
[01:50:59] Want me to give it a kick in the diodes?
[01:51:48] What would you do?
[01:52:09] kill the php running the bot; since it's a continuous task it will automatically be restarted.
[01:52:50] Do it.
[01:56:50] Ryan_Lane: Coren is it possible to have a labs instance that has no puppetmaster?
[01:57:00] Ryan_Lane: Coren *not* the same as Puppetmaster::self
[01:57:13] I'm trying to put the vagrant puppet files into a labs instance and see what happens
[01:57:35] I don't think "without", no.
[01:57:44] at all?
[01:57:58] At least not without some manual trickery.
[01:58:19] And I'm pretty sure that nagios will then complain loudly.
[01:58:37] i'm mucking around, let's see what happens :)
[01:58:50] (this is multimedia-dragons, in case it freaks anyone out)
[02:00:24] goddamnit
[02:01:17] CP678|iPhone: I see new stuff ended up in the rfx-tally.out file
[02:01:32] Like what?
[02:01:57] * YuviPanda tries again
[02:01:58] CP678|iPhone: It's getting page info and history for several RfA that I can see
[02:02:40] The .err file has... Peachy errors? Something about it trying to curl to an IPv6 address.
[02:03:39] Also POST: http://en.wikipedia.org/w/api.php
[02:03:45] Coren: okay, so uninstalling puppet, and then rm -rfing /etc/puppet seems to do it
[02:03:50] So I'm guessing it's updating tallies.
[02:03:50] '
[02:04:48] YuviPanda: well, it's kind of possible, but please don't
[02:04:50] why would you do so?
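What YuviPanda describes at 02:03:45, spelled out as a sketch. This is destructive and exactly what Ryan_Lane warns against above, since a detached instance never receives global puppet changes; command forms are the standard apt ones, not copied from the log:

```
# Detach a labs instance from its puppetmaster the blunt way:
sudo apt-get remove --purge puppet
sudo rm -rf /etc/puppet
sudo apt-get install puppet   # fresh config, no labs puppetmaster attached
```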
[02:05:02] Ryan_Lane: I'm putting the vagrant puppet files in there and seeing what happens.
[02:05:08] Ryan_Lane: I can delete the instance if things go wrong
[02:05:13] it means that we'll never be able to apply global changes to it
[02:05:27] Ryan_Lane: sure sure, I don't expect this instance to last more than a day or two anyway.
[02:05:44] * Ryan_Lane nods
[02:05:58] YuviPanda: then all you need to do is modify the puppet config
[02:06:05] powercut again. gah
[02:06:05] and you're done
[02:06:10] YuviPanda: then all you need to do is modify the puppet config
[02:06:11] and you're done
[02:06:51] Ryan_Lane: right. but I did something else - killed the puppet package, rm -rf /etc/puppet, then install puppet again :)
[02:06:55] horrible, I know
[02:07:03] but now I've the vagrant stuff running!
[02:12:46] !ping
[02:12:47] !pong
[02:12:48] ok
[02:13:00] !pang
[02:13:08] ?
[02:13:16] !ping
[02:13:16] !pong
[02:13:23] !pong
[02:13:23] Don't mess with me.
[02:25:08] Coren: it works fine too, btw ;) http://multimedia-dragons.instance-proxy.wmflabs.org/wiki/Main_Page
[02:25:26] !ping
[02:25:26] !pong
[02:25:29] okay
[06:05:06] !log math adding jiabao to work on math support for visualeditor
[06:05:08] Logged the message, Master
[06:05:29] Ryan_Lane: hello
[06:05:38] jiabao: ok, so I added you as projectadmin so that you can create instances in the math project
[06:05:47] there's other people's instances in there too
[06:06:00] ok cool thanks a lot!
[06:06:05] XD
[06:06:16] you shouldn't really mess with them, unless you are working on something with them
[06:06:46] ok how can I find out how to create my own instance?
[06:06:59] jiabao: https://wikitech.wikimedia.org/wiki/Help:Getting_Started
[06:08:02] that looks awesome =D
[08:48:46] Hi guys. On the toolserver, there is a database table (toolserver.wiki) with all wikiprojects replicated on the toolserver.
(dbname, lang, family, domain, size, is_meta, is_closed, is_multilang, is_sensitive, root_category, server, script_path).
[08:48:47] Is there something similar on labs?
[09:07:27] or I have to take it from /etc/hosts ...what a pity
[09:17:43] [bz] (NEW - created by: Pietrodn, priority: Unprioritized - normal) [Bug 52370] Replicated DB fawiki_p is missing revision table - https://bugzilla.wikimedia.org/show_bug.cgi?id=52370
[09:21:05] oh wikidatawiki.sites contains the wikis.
[09:30:28] Coren|Away: Hi. Asking again, my project "guc" doesn't get a replica.my.cnf file. bug?
[09:52:21] hello Luxo, glad you're working on the tools :)
[09:53:54] Hello Steinsplitter, yes, if you're going to take the TS away from me ;-)
[09:56:39] *ping* Petan
[10:40:17] petan: ping
[12:02:20] Hmm. No IPC::Run on tools-exec-*. Can I get libipc-run-perl?
[12:50:55] hi there
[12:51:50] http://tools.wmflabs.org/render-tests/limes-git/db/pois.py?getfocusyearforobject=Kastell_Saalburg&range=verified throws an error, but the script works when i start it from the shell. Ryan_Lane (or any other admin), could you check the error log for anything related?
[12:53:28] btw: i remember that it should become possible for users to check the error log themselves at some point. is there any news on that?
[12:58:04] [bz] (NEW - created by: Brad Jorsch, priority: Unprioritized - enhancement) [Bug 53532] Package request: libipc-run-perl - https://bugzilla.wikimedia.org/show_bug.cgi?id=53532
[13:06:28] um... apparently lots of scripts/tools don't work. random selection: http://tools.wmflabs.org/not-in-the-other-language/ http://tools.wmflabs.org/oauth-hello-world/ http://tools.wmflabs.org/ocounter/pcount/ http://tools.wmflabs.org/orwell01/ all give me 500 errors.
[13:06:36] anybody got any idea what's wrong?
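On the question at 08:48: the labs replicas did gain an equivalent of toolserver.wiki, a meta_p.wiki table on the replica database servers. A sketch of the lookup; the table name is real but the column names here are from memory and may differ, so verify against the live schema:

```sql
-- Rough equivalent of "SELECT ... FROM toolserver.wiki" on labs replicas;
-- column names are assumptions, check "DESCRIBE meta_p.wiki" first.
SELECT dbname, lang, family, url
FROM meta_p.wiki
WHERE is_closed = 0;
```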
[13:15:57] [bz] (RESOLVED - created by: Magnus Manske, priority: Unprioritized - normal) [Bug 52744] cawiki sql bug - https://bugzilla.wikimedia.org/show_bug.cgi?id=52744
[13:16:39] Coren|Away: ping
[13:17:08] petan: ping
[13:17:23] anyone: ping
[13:19:00] !newlabs
[13:19:00] This is labs. It's another version of toolserver. Because people wanted it just like toolserver, an effort was made to create an almost identical environment. Now users can enjoy replication, similar commands, and bear the burden of instabilities, just like Toolserver.
[13:19:15] lol
[13:19:32] Well it's true.
[13:19:39] Labs is broken again.
[13:20:25] indeed all the cgi scripts i tested just give a 500. is that what you mean?
[13:20:43] Oh bah! OOM again.
[13:20:56] oh.
[13:21:13] Coren: told you 2gb is too small :p
[13:21:15] I need to start putting a little bit more limits on the webserver, I fear.
[13:21:58] JohannesK_WMDE: Yeah, clearly. I'm going to finish puppetizing the webservers and reinstall them bigger.
[13:22:13] * Coren grumbles about silly openstack and broken resize
[13:22:16] good thing.
[13:23:54] Coren: what kind of limits?
[13:24:32] CP678|Webchat: Probably reduce the number of children to n so that n*php overhead < total memory.
[13:25:07] You mean restrict memory allotment even more?
[13:25:33] No, I mean reducing the number of simultaneous scripts that can run at the same time.
[13:25:42] Ah.
[13:26:05] More memory is high on my list, of course, but even then.
[13:26:06] I've already been getting complaints about X!'s tools not working across the projects again.
[13:26:44] Editcounter works atm.
[13:27:03] The problem is that the webservers aren't robust against one tool taking all the slots.
[13:27:14] Now it does. It hasn't been for the last 12 hours.
[13:27:33] Seems it happened right after I went away.
[13:27:58] And yeah, it works now because I just removed all the stalled children.
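The sizing rule Coren states at 13:24:32, worked through with hypothetical numbers (the 2 GiB figure comes from the channel; the per-child overhead is an assumption for illustration):

```shell
# Pick the child count n so that n * per-child PHP overhead < total memory.
# Assumed: 2 GiB of RAM on the webserver, ~64 MiB resident per PHP child.
total_mib=2048
per_child_mib=64
echo $(( total_mib / per_child_mib ))   # prints 32 -> cap MaxClients around there
```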
[13:28:09] Coren: i normally don't say things like that, but can't you just increase the memory of the vm? instead of restricting the number of simultaneous scripts?
[13:28:50] JohannesK_WMDE: I can, and I will, but that only puts the limit higher; it doesn't prevent the issue from occurring again.
[13:29:20] Coren: what's causing the stalls?
[13:29:41] CP678|Webchat: Looking into it now...
[13:29:47] And I think I already found it.
[13:29:52] I hope not XTools
[13:30:04] Looks like something is spidering catfood and catscan2
[13:30:24] Coren: I'm not familiar with that terminology.
[13:30:31] Starting hundreds and hundreds of simultaneous queries.
[13:30:56] catfood and catscan2 are tools for category intersection. Rather heavy. *Really* shouldn't be spidered.
[13:31:20] Then it's not my fault. :-) Phew
[13:33:09] Coren: has Asher responded to the ETA?
[13:33:30] CP678|Webchat: He's at Burning Man until Monday. :-)
[13:33:41] Burning Man?
[13:34:06] [[Burning Man]]
[13:35:26] Yeah, looks like some distributed spambot/scraper is trying to follow every link on some tools and bringing lots of pain.
[13:36:11] Who's running it?
[13:36:48] CP678|Webchat: "Bad Guys". Mostly IPs in mainland China, plus a couple of ranges with VPSes.
[13:37:11] :O
[13:38:58] Most of those tools can't really cope with many requests per second.
[13:39:57] Coren: Maybe it would be advisable to give xtools its own web server? :)
[13:41:20] * Coren installs mod_evasive
[13:43:07] webservers are down again.
[13:43:46] Coren: ^
[13:44:19] And it's up again.
[13:44:53] Yeah, it's not intended as a DDOS, just evil spammers, but the net effect is the same.
[13:45:20] robots.txt ?
[13:45:52] phe: Sadly, scrapers don't obey robots.txt
[13:52:19] The limit doesn't help all that much. The webserver no longer OOMs, but the illicit crawlers are keeping all the children busy.
[13:52:24] Still DOSes
[13:52:46] * Coren needs to blackhole more IP ranges. Damn.
[13:56:16] Coren: I'll leave you to your spiders.
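The "blackhole more IP ranges" step would look roughly like this. The approach (whois the sources, block at the network level, keep the list in ~/reject.iptables) is described in the channel a little later; the CIDR here is a documentation-range placeholder, not one of the ranges actually blocked:

```
# Drop traffic from an abusive range at the proxy (placeholder CIDR):
iptables -I INPUT -s 192.0.2.0/24 -j DROP
# Persist the current ruleset so it can be reloaded after a reboot:
iptables-save > ~/reject.iptables
```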
[13:56:22] See ya
[13:56:28] * Coren waves.
[13:59:41] dafu? Why is google ignoring robots.txt?
[14:00:20] How hard is Disallow: /
[14:15:26] !ping
[14:15:26] !pong
[14:32:13] Coren: we might create a script that would automatically insert IPs into a blacklist for all webservers so that we can easily get rid of problematic robots
[14:32:37] petan: to a point.
[14:32:44] or at least document how you are doing it now :P
[14:33:03] It's not always immediately clear what is a bot and what isn't -- they generally lie about their UA after all.
[14:33:22] ok but we could make like temporary bans for excessive IPs
[14:33:23] What I'm doing now is simple: look at the logs, whois the sources, and block them at the network level. :-)
[14:33:31] and create a cron job that would clear them after a day or so
[14:33:32] Coren, my bot's UA is Peachy. :D
[14:33:55] Coren: network like iptables or lower level? like router
[14:34:05] petan: iptables
[14:34:19] ok that will be lost after reboot, and you do that on all webservers?
[14:35:08] like, if you see some IP you ban it on all webservers or proxy? or just 1 webserver in question?
[14:41:19] It's on the proxy, and the iptables config is in ~/reject.iptables
[14:43:04] I also added X-Robots-Tag "noindex,nofollow" so that the more legitimate bots we don't know about are, at least, encouraged away.
[14:50:17] I seem to have stemmed the worst.
[14:50:45] Meaning?
[14:51:18] The rate of requests to the actual webservers is now down to quite reasonable levels, and nothing seems to be spidering them successfully anymore.
[14:51:39] Cool.
[14:52:04] andrewbogott: hey! any luck on the packaging?
[14:52:39] Building something now, we'll see if it does what you want. Should I just stash it in /data/project in tools?
[14:52:56] andrewbogott: no, there's project proxy-project
[14:53:00] andrewbogott: do it there?
[14:53:03] ah, ok.
[14:53:06] I no haz root on tools :)
[14:53:13] Also, giving 403 on obviously broken URIs (/~.*) helps.
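The polite-bot countermeasures mentioned above, as config fragments. The robots.txt is the blanket "Disallow: /"; the Apache lines correspond to the X-Robots-Tag header and the 403-for-/~ trick Coren describes (exact directives are a sketch; `Header` needs mod_headers, `RedirectMatch` needs mod_alias):

```
# robots.txt -- "How hard is Disallow: /"
User-agent: *
Disallow: /

# Apache config: discourage indexing even when robots.txt is ignored,
# and 403 the obviously broken /~... URIs.
Header set X-Robots-Tag "noindex,nofollow"
RedirectMatch 403 ^/~
```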
[15:06:00] YuviPanda, sorry for the holdup, turns out building on a gluster volume is… slower.
[15:13:45] andrewbogott: hah!
[15:13:54] andrewbogott: oh, I thought proxy-project was on NFS?
[15:15:36] Probably, but I'm building elsewhere… just grabbed the first precise machine I could think of.
[15:15:39] Chose poorly.
[15:15:44] hehe :)
[15:20:44] [bz] (RESOLVED - created by: Pietrodn, priority: Unprioritized - normal) [Bug 52370] Several replicated DB are missing tables and content - https://bugzilla.wikimedia.org/show_bug.cgi?id=52370
[15:26:04] Hi, thanks for the database fix Coren :)
[15:26:30] Darkdadaah: No worries. Sorry for the brief blackout.
[15:26:57] No problem. That was still fast.
[15:27:54] last chance to give feedback on the article list generator! anybody who wants to fill out the questionnaire please do it now. :) link here: http://tools.wmflabs.org/render/stools/alg thanks!
[15:33:24] YuviPanda, does the -extras package in /data/project/nginx have what you want?
[15:33:33] andrewbogott: looking
[15:33:37] (I know that it's marked as oneiric, I can fix that if I'm on the right track.)
[15:33:46] * YuviPanda ssh's
[15:35:56] andrewbogott: how do I find out?
[15:36:08] find out if it works, you mean?
[15:36:16] andrewbogott: ah, no - find out the version of the lua package in it?
[15:36:20] andrewbogott: nginx version is fine
[15:36:29] andrewbogott: although Ryan_Lane preferred 1.4 rather than 1.5
[15:37:33] hm, do you remember why?
[15:38:45] andrewbogott: 1.4 was stable, 1.5 was the development version
[15:38:57] andrewbogott: nginx does the 'even versions stable, odd versions dev' versioning scheme
[15:41:00] andrewbogott: okay, I've no idea how to figure out the version of the lua stuff in there.
[15:41:03] andrewbogott: I can't install it either
[15:41:15] can't install because you don't have root?
[15:41:18] dependency problems - leaving unconfigured
[15:41:18] Or because it doesn't work?
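One way to answer "how do I figure out the version of the lua stuff" from a built nginx: the binary prints its configure arguments, including any --add-module paths, and (as the channel later finds) the module source carries an "ngx_lua v..." version string. A sketch, with the paths taken from the log:

```
# Show compile-time configuration, including bundled third-party modules:
/usr/sbin/nginx -V 2>&1 | tr ' ' '\n' | grep add-module
# Look for the version string inside the module source tree:
grep -r 'ngx_lua v' /data/project/nginx/nginx-lua/
```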
[15:41:22] damn
[15:43:02] andrewbogott: I'm going to go have food now.
[15:43:08] 'k
[15:43:35] andrewbogott: btw, /home/scfc/packaging seems to have work from him
[15:44:07] andrewbogott: no idea if it is useful tho
[15:46:33] andrewbogott: hmm, I see --add-module=/andrewtmp/nginx-1.5.0/debian/modules/nginx-lua
[15:46:43] andrewbogott: in the version of nginx in /data/project/nginx
[15:46:47] andrewbogott: can you tell me what version that is?
[15:50:03] hm… the files seem to disagree with each other. One of them says ngx_lua v0.5.0rc24
[15:50:33] andrewbogott: that might in fact be recent enough, but I'll feel better if it is a stable version (0.8.x)
[15:51:03] okay, really going to have food now
[15:51:32] I don't really understand how the modules interact with nginx, but… I'd prefer you to use that version if you can since it came straight from the nginx devs.
[17:24:07] * YuviPanda pokes andrewbogott.
[17:24:08] any luck?
[17:25:43] YuviPanda, I'm running a fever, not good for much atm. But I will try to make you a new build on precise in case that helps.
[17:25:48] andrewbogott: ow!
[17:26:01] andrewbogott: 'tis okay, no hurry. take care of your health. post-travel stresses are horrible
[17:26:08] Yeah, thought I was just jetlagged but must have some actual illness.
[17:35:09] YuviPanda, do the precise packages in /data/project/nginx work any better? Or same config failure?
[17:35:26] andrewbogott: moment, looking
[17:38:35] andrewbogott: works if i first install common, then extras
[17:38:47] cool.
[17:39:07] I'm not sure how we should handle those packages… you only need them on a single instance?
[17:39:32] andrewbogott: yeah
[17:39:38] andrewbogott: and what's the lua version for these?
[17:39:48] OK, so maybe it's sufficient to just document where they came from...
[17:40:21] andrewbogott: Ryan_Lane mentioned something about putting the debs in a local repository.
he also gave a link to scfc_de from wikitech, but I don't have it :(
[17:41:05] I put the source in /data/project/nginx/nginx-lua/ -- that's all I know about versioning.
[17:41:11] andrewbogott: looking
[17:43:22] andrewbogott: that looks okay.
[17:43:27] andrewbogott: is that sourced from git?
[17:43:47] No, it's from this ppa: https://launchpad.net/~nginx/+archive/development
[17:44:16] A bit dodgy...
[17:44:25] grrr, it has no indication of what version any of those are
[17:44:37] Yeah.
[17:45:06] andrewbogott: the documentation says that it supports the stuff I wanted.
[17:45:10] so... it might be good enough.
[17:45:12] still a bit dodg
[17:45:12] y
[17:47:11] Yeah -- I'm too addled to think through security/maintainability issues, but ryan_lane probably has opinions.
[17:48:18] andrewbogott: you should probably get some rest :)
[17:48:31] andrewbogott: also, is the mediawiki_singlenode module in ops/puppet your work?
[17:49:36] yeah, mostly mine.
[17:51:25] andrewbogott: so I was going to set up multimedia-dragons yesterday, and instead of using mediawiki_singlenode, I set it up to use mediawiki-vagrant's puppet stuff. works like a charm, and is more complete than mediawiki_singlenode. I want to see how we can integrate mediawiki-vagrant's puppet stuff into labs.
[17:51:36] andrewbogott: will talk about it when you're not sick :)
[17:51:54] ok :(
[17:53:04] andrewbogott: thanks for the package!
[18:09:58] Coren: I'm wondering if we should use docker for a MW project
[18:10:23] Coren: it's a new hypervisor in the havana release of OpenStack
[18:10:34] YuviPanda: ^^
[18:10:38] They're nice pants. Oh. Docker, not DockerS. :-)
[18:10:45] * Coren reads up on it.
[18:11:06] Ryan_Lane: did you see multimedia-dragons?
I've the vagrant stuff working there perfectly fine :D
[18:11:13] eh
[18:11:14] err
[18:11:15] heh
[18:11:15] yeah
[18:11:22] Ryan_Lane: me and ori were thinking of adding a wikitech provider to vagrant
[18:11:35] Ryan_Lane: so wikitechprovider+docker -> cheap+fast mediawiki playgrounds
[18:11:40] "* Please note Docker is currently under heavy development. It should not be used in production (yet)."
[18:11:55] Ryan_Lane: did they add *docker* support or LXC support?
[18:12:01] YuviPanda: docker
[18:12:05] If I understand properly, that's intended to be an easily deployable app-in-a-vm?
[18:12:14] LXC support was already there
[18:12:21] Coren: in a container, but yes
[18:12:26] Coren: kernel-level containers, not VMs.
[18:12:32] lot less heavyweight than VMs.
[18:12:42] Ryan_Lane: how soon do you think we can upgrade?
[18:12:54] we're still on folsom
[18:13:02] and havana isn't till November
[18:13:10] folsom is... one release behind?
[18:13:13] we can start it off using VMs
[18:13:19] and switch to docker later
[18:13:32] we'd need to use separate hardware for it, though
[18:14:02] It's worth playing around with, I'd say.
[18:14:04] it should lessen the amount of VMs used by a decent amount
[18:18:43] for now I think making a VM per MW install is sane
[18:18:49] it's what we're doing anyway
[18:18:59] later we can try docker and use a container per install
[18:19:02] true true
[18:19:13] Ryan_Lane: I'm going to take a stab at a vagrant provider for wikitech in some time.
[18:19:19] Ryan_Lane: once I'm done with the other stuff I'm doing.
[18:19:19] why vagrant?
[18:19:27] I don't understand the point
[18:19:41] all vagrant does is run puppet
[18:20:03] Ryan_Lane: so it is either integrate the puppet stuff in mediawiki-vagrant into wikitech, or the other way around
[18:20:19] the mediawiki-vagrant stuff already exists
[18:20:20] Ryan_Lane: the idea being, if I have a vagrant plugin, I can just say 'vagrant sync' and then my local install and the labs instance will have the exact same setup
[18:20:40] and it uses basically the same code
[18:20:50] already exists on labs?
[18:20:51] where?
[18:20:54] mediawiki_singlenode?
[18:20:59] https://wikitech.wikimedia.org/wiki/Help:Single_Node_MediaWiki
[18:21:01] yes
[18:21:07] those two should be merged more
[18:21:34] yeah, but I like mediawiki-vagrant's roles better.
[18:21:47] to set up UploadWizard, for example, I just add the uploadwizard role
[18:21:54] those should be added to regular puppet
[18:21:56] that sets up imagemagick, php ini stuff, dependent extensions, etc.
[18:22:09] Ryan_Lane: to operations/puppet.git?
[18:22:11] yes
[18:22:19] merging those will get a lot harder :P
[18:22:31] the vagrant classes only exist because our ops repo sucks
[18:22:39] I tend to agree.
[18:22:53] diverging the puppet repos further only makes things worse
[18:22:57] I wanted to use the Redis class from our ops repo in mediawiki-vagrant last week. Had to copy-paste, and then rip out the ganglia stuff
[18:23:09] felt so dirty.
[18:23:14] the ganglia stuff should be added via a role
[18:23:14] but I don't have a better solution
[18:23:17] not the class itself
[18:23:27] yeah, but it is added in the class itself in ops/puppet :)
[18:23:53] split it into separate classes in the module and include it in the role
[18:24:22] in ops/puppet?
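The refactor Ryan_Lane is describing (monitoring out of the reusable class, opted into by the role) would look roughly like this in puppet; the class names are hypothetical, not taken from operations/puppet.git:

```
# Reusable module class: no site-specific monitoring baked in.
class redis {
  package { 'redis-server': ensure => present }
}

# Monitoring lives in its own class within the module...
class redis::ganglia {
  # ganglia metric collection for redis goes here
}

# ...and the production role opts in to it; mediawiki-vagrant could
# then include plain 'redis' without ripping anything out.
class role::redis {
  include redis
  include redis::ganglia
}
```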
[18:24:24] yes
[18:24:32] I'll do that in the 2 extra weeks of free time I have :P
[18:24:36] heh
[18:24:59] it's not easier to split it into a separate repo and maintain it there
[18:25:06] it adds technical debt
[18:25:18] leave things cleaner than when you started ;)
[18:25:55] Ryan_Lane: if we start having our modules in different repos and then submodule them, it would at least make it easier to move to a place where they aren't so interdependent
[18:26:36] we can't do that with every module, though
[18:26:45] sure sure.
[18:26:49] but the redis module, for example.
[18:27:04] we can say that for most modules ;)
[18:27:11] hehe :P
[18:27:24] don't have a concrete solution, Ryan_Lane :)
[18:27:34] honestly it would be good if vagrant checked out the entire puppet repo
[18:27:46] it doesn't use a lot of the stuff there, no?
[18:27:48] we want to encourage people to make infrastructure changes
[18:27:51] in fact it probably *can't* use a lot of the stuff there
[18:28:06] the more we move to modules the more will be usable
[18:28:27] true
[18:28:33] that's the goal
[18:28:37] *actually* modularize things, rather than just plop them into modules :P
[18:28:53] as it's needed, yes
[18:29:09] true, true
[18:29:22] Ryan_Lane: also, btw, andrewbogott seems to have produced a deb that *should* work
[18:29:47] Ryan_Lane: do you have the link to the 'local repo' stuff?
[18:30:04] https://wikitech.wikimedia.org/wiki/Help:Using_debs_in_labs
[18:30:11] andrewbogott: ^
[18:31:02] Ryan_Lane: I'll test it out in about an hour or so, and then hopefully you can merge my patches :)
[18:34:28] cool :)
[18:44:26] Coren, ping
[18:44:45] Cyberpower678: Yes?
[18:45:22] Didn't you say that labs has the ability to restore an image of the way an account was from a few hours ago?
[18:46:05] Coren, ^
[18:46:33] Cyberpower678: I did, but it's not currently available while the primary controller of the NFS server is offlined.
[18:46:53] Crap
[18:46:59] I kind of need it now.
[18:47:09] As in I corrupted xtools.
[18:47:18] is that on git?
[18:47:22] No.
[18:47:30] Migration is still in progress.
[18:47:32] svn?
[18:47:35] No.
[18:47:38] Ewww
[18:47:50] Yeah, sorry, there haven't been snapshots in two weeks and the old ones are not accessible.
[18:48:09] Hence the exhortation in the documentation to put everything you have in source control.
[18:48:16] (And the notice I've sent to labs-l)
[18:48:39] When will they be accessible.
[18:48:48] An old one will do.
[18:52:13] Nevermind. A simple problem.
[18:52:19] Fixed it now.
[19:43:42] Coren: If you have time at some point yet today, https://bugzilla.wikimedia.org/show_bug.cgi?id=53532 please?
[19:44:10] anomie: Should be simple enough. Gimme a minute and I'll do that.
[19:44:17] Thanks!
[19:51:29] [bz] (NEW - created by: Antoine "hashar" Musso, priority: Normal - enhancement) [Bug 53458] adapt the MariaDB puppet manifests for beta - https://bugzilla.wikimedia.org/show_bug.cgi?id=53458
[19:51:45] [bz] (NEW - created by: Antoine "hashar" Musso, priority: Normal - enhancement) [Bug 53457] setup a DB backed parser cache - https://bugzilla.wikimedia.org/show_bug.cgi?id=53457
[19:51:58] [bz] (NEW - created by: Antoine "hashar" Musso, priority: Normal - enhancement) [Bug 53339] migrate beta databases to MariaDB - https://bugzilla.wikimedia.org/show_bug.cgi?id=53339
[19:53:22] [bz] (NEW - created by: Brad Jorsch, priority: Unprioritized - enhancement) [Bug 53532] Package request: libipc-run-perl - https://bugzilla.wikimedia.org/show_bug.cgi?id=53532
[20:02:38] [bz] (NEW - created by: Pleclown, priority: Normal - enhancement) [Bug 53117] Switch to using the mysqlnd driver in PHP - https://bugzilla.wikimedia.org/show_bug.cgi?id=53117
[20:04:06] [bz] (RESOLVED - created by: Peter Bena, priority: Normal - enhancement) [Bug 49104] display used vmem in stats - https://bugzilla.wikimedia.org/show_bug.cgi?id=49104
[20:08:10] [bz] (RESOLVED - created by: Krinkle, priority: Low - enhancement) [Bug 34606] Create bot-control panel for bot-operators. - https://bugzilla.wikimedia.org/show_bug.cgi?id=34606
[20:10:07] [bz] (RESOLVED - created by: Aude, priority: Unprioritized - enhancement) [Bug 51885] provide postgresql and postgis on toollabs - https://bugzilla.wikimedia.org/show_bug.cgi?id=51885
[20:11:41] [bz] (NEW - created by: m.p.roppelt, priority: Unprioritized - normal) [Bug 51310] create dedicated instance for exturl checking - https://bugzilla.wikimedia.org/show_bug.cgi?id=51310
[20:11:53] awwwww
[20:12:00] * aude got excited to see resolved
[20:12:07] resolved duplicate :/
[20:12:08] aude: me too!
[20:12:09] sigh
[20:12:26] [bz] (ASSIGNED - created by: m.p.roppelt, priority: Unprioritized - normal) [Bug 51129] install tdbc and tdbc::mysql - https://bugzilla.wikimedia.org/show_bug.cgi?id=51129
[20:13:24] [bz] (RESOLVED - created by: Peter Bena, priority: Lowest - minor) [Bug 51937] execution hosts are randomly unresponsive - https://bugzilla.wikimedia.org/show_bug.cgi?id=51937
[20:15:19] [bz] (NEW - created by: Yuvi Panda, priority: Low - enhancement) [Bug 52452] Have public, readonly, up-to-date git repositories available for all tools to use - https://bugzilla.wikimedia.org/show_bug.cgi?id=52452
[20:16:03] [bz] (ASSIGNED - created by: silke.meyer, priority: Low - enhancement) [Bug 48785] UI improvement suggestion for "create new tool" link - https://bugzilla.wikimedia.org/show_bug.cgi?id=48785
[21:59:19] Hi guys! I'm trying to get connection to a bastion, but I cannot...
[22:00:23] I think it's because my username contains spaces
[22:00:43] So when I try to connect via SSH I get permission denied
[22:00:48] Any idea?
[22:13:36] elgranscott: did you read https://wikitech.wikimedia.org/wiki/Help:Access ?
[22:15:16] Darkdadaah: yes, now I know my username
[22:15:33] and I've gotten access to a bastion
[22:16:15] with ssh -A <username>@bastion.wmflabs.org
[22:17:21] hey, EE team needs help with an old labs instance
[22:17:50] but, how can I know what's my instance to do ssh <instance>.pmtpa.wmflabs
[22:18:16] ee-prototype has an old PHP version, yet it shows roles apache webserver::php5 and webserver::php5-mysql, so not sure why it isn't getting updated
[22:18:50] elgranscott: you need to be a member of a project first (e.g. tools). Then you will be able to connect to one of the instances of the project (e.g. tools-login).
[22:21:34] tried adding the mediawiki-install role to ee-prototype, and got err: Could not retrieve catalog from remote server: Error 400 on SERVER: Duplicate definition: Package[libapache2-mod-php5] is already defined in file /etc/puppet/manifests/webserver.pp at line 41; cannot redefine at /etc/puppet/modules/apache/manifests/mod.pp:31 on node i-0000013d.pmtpa.wmflabs
[22:21:45] should we not have the apache roles?
[22:21:54] spagewmf: did you try apt-get upgrade yet? they are unlikely auto-upgrading
[22:22:04] elgranscott: For what you want to do, you should use the tools project: see http://tools.wmflabs.org and "request access"
[22:22:26] mutante: no... I assumed that would be part of the puppet magic
[22:22:55] I'm removing the webserver roles
[22:23:00] spagewmf: well, sometimes you don't want it to happen at a random moment and surprise you, so ensure => latest broke stuff before
[22:23:12] elgranscott: more help here: https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help
[22:23:16] it's ensure => latest vs. ensure => present
[22:23:51] with an apt-get update it still reports 5.3.2-2wm1
[22:23:57] Darkdadaah: thanks, I understand it now
[22:24:03] (as the newest available version in apt-cache show php5)
[22:24:53] it's sourcing from deb http://apt.wikimedia.org/wikimedia lucid-wikimedia main universe
[22:25:03] perhaps lucid never got the update?
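[Editor's note: the manual release switch discussed in the log is, mechanically, just a string substitution over the apt sources. A sketch on a scratch copy of the file — the real files are /etc/apt/sources.list and /etc/apt/sources.list.d/*, edited as root:]

```shell
# Build a fake sources.list containing the line quoted above.
cat > /tmp/sources.list <<'EOF'
deb http://apt.wikimedia.org/wikimedia lucid-wikimedia main universe
EOF

# Rewrite the release name, keeping the original around for comparison.
sed 's/lucid/precise/g' /tmp/sources.list > /tmp/sources.list.precise
cat /tmp/sources.list.precise
# -> deb http://apt.wikimedia.org/wikimedia precise-wikimedia main universe
```

After editing the real files, `apt-get update && apt-get dist-upgrade` pulls in the new release's packages — the hand-rolled approach (with the caveats) the channel goes on to debate.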
[22:25:16] np
[22:25:51] <^d> Lucid's sooooo old :)
[22:26:07] so i can dist-upgrade to precise?
[22:26:18] spagewmf: one of the 2 roles must remove that definition to pull in libapache2-mod-php5, they are probably doing too much in a single role instead of just mediawiki or just ee (and not Apache)
[22:26:51] <^d> ebernhardson: No, that'd be a do-release-upgrade. Which I've been told is a Bad Idea and you should just build a new instance using precise.
[22:26:57] <^d> Long as it's puppetized, should be easy :)
[22:27:53] i've always used dist-upgrade, what extra does do-release-upgrade do?
[22:28:35] ebernhardson: do-release-upgrade is an Ubuntu thing to go from like lucid -> precise
[22:28:56] oh, i guess nothing is ever good enough for ubuntu :P
[22:28:58] apt-get dist-upgrade is upgrading all packages and the kernel but stays within the same release
[22:29:15] well, i would obviously update sources.list and sources.list.d/* by hand :P
[22:29:18] otherwise would be pointless
[22:29:27] i think spage is trying to get a new machine built
[22:29:33] i dont think you really want to do release upgrades, rather take a fresh instance with the image you want
[22:31:16] http://manpages.ubuntu.com/manpages/precise/man8/do-release-upgrade.8.html
[22:31:42] each time i tried there were issues, it's not as good as it sounds :)
[22:31:49] i suppose i'm just finding it odd dist-upgrade worked for years, and ubuntu had to invent some random new thing :P
[22:31:57] actually i went back to the manual editing of sources.list
[22:32:04] like normal Debian
[22:32:17] it has equal or better chances of working :p
[22:32:35] ebernhardson: agree
[22:34:20] https://wikitech.wikimedia.org/wiki/Distribution_upgrades
[22:38:30] eh, but it's still not pointless to run apt-get upgrade without editing your sources.list and switching releases
[22:42:05] Darkdadaah: now I see a notification in my profile, it says "Coren added you to project Nova Resource:Tools"...
so how can I know what instance I can use?
[22:42:40] elgranscott: All the information on where to connect can be found at !docs
[22:42:43] !docs
[22:42:44] https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help
[22:42:47] ^^ there
[22:42:59] But the tl;dr: you probably want tools-login.wmflabs.org
[22:44:22] !ping
[22:44:23] !pong
[22:47:49] Coren: yes, that's what I need
[22:48:37] I would like access to the mysql database to do some queries, but I don't know the username and password I have to use
[22:50:01] They should be in a file named replica.my.cnf in your home.
[22:54:04] Coren: but I'm getting this error: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
[22:54:08] why?
[22:55:22] Probably because you're not connecting to the replicas at all (you need to specify a host with the -h option to mysql). Alternately, there is a simpler wrapper 'sql' you can use; just do things like 'sql enwiki' and it'll connect to the right place.
[22:56:26] Jasper_Deng_away: busy?
[22:56:39] yeah
[22:56:53] :)
[23:00:38] Coren: I've used the "sql enwiki" wrapper and I can connect to the database. But if I wanted to connect using mysql -h, how can I know the host I have to specify?
[23:01:06] The basic pattern is <dbname>.labsdb
[23:01:37] E.g. enwiki.labsdb
[23:27:47] Thanks for your help guys!!!
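[Editor's note: the connection recipe Coren describes can be sketched in shell. The credentials below are fabricated — on Tool Labs the real per-user file is ~/replica.my.cnf — and the host follows the <dbname>.labsdb pattern from the log:]

```shell
# Fabricated stand-in for ~/replica.my.cnf (real values are per-user).
cat > /tmp/replica.my.cnf <<'EOF'
[client]
user = u1234
password = notarealpassword
EOF

# Pull the username out of the ini-style file.
DBUSER=$(awk -F' *= *' '$1 == "user" {print $2}' /tmp/replica.my.cnf)
echo "$DBUSER"
# -> u1234

# With the real file, connecting to the enwiki replica looks like:
#   mysql --defaults-file="$HOME/replica.my.cnf" -h enwiki.labsdb
# or, simpler, the wrapper mentioned in the log:
#   sql enwiki
```

Pointing mysql at the credentials file via `--defaults-file` avoids ever typing the password, which is why the wrapper can be a one-liner.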