[04:18:27] Hi there. [04:18:51] I am an admin of the project math and I would like to add a new member. Can someone tell me how to do this? [04:21:30] Howie: on wikitech, in the sidebar, under 'Labs Projectadmins', click 'Manage projects' [04:21:48] then, under 'Project filter', type 'Math', and click 'Set filter' [04:21:52] thanks ori...in fact, i just figured it out just after I asked. :) [04:22:00] no problem [04:25:32] have a good one. [10:22:48] Tool Labs tools / Quentinv57's tools: editcount tool gives a significantly different/wrong number of edits - https://bugzilla.wikimedia.org/65741 (Andre Klapper) [10:33:03] Wikimedia Labs / wikidata: install AbuseFilter and SpamBlacklist on wikidata jenkins - https://bugzilla.wikimedia.org/65727 (Lydia Pintscher) p:Unprio>Normal [12:44:07] hey liangent [12:44:37] YuviPanda: hey [12:45:08] liangent: heya! do you have a page that produces a django 500? [12:46:18] liangent: ? [12:46:21] YuviPanda: tools.wmflabs.org/liangent-django/right_timelines/ [12:46:27] http://tools.wmflabs.org/liangent-django/right_timelines/ [12:46:48] liangent: http://tools.wmflabs.org/liangent-django/right_timelines/ [12:46:50] liangent: see now [12:47:06] liangent: this is just me live testing, will revert in a few mins automatically. [12:47:18] liangent: did you add any special lighty config, or is this default lighty behavior? [12:49:18] Wikimedia Labs / tools: Install pastebinit - https://bugzilla.wikimedia.org/50935#c3 (Liangent) RES/FIX>REO It's missing again. [12:49:21] YuviPanda: http://pastebin.com/tXdZ32mC [12:49:36] I guess this contains nothing special [12:49:43] liangent: hmm, alright. yeah, doesn't look like it [12:51:35] liangent: didn't you find an nginx variable that tells me the body length of the proxied response? [12:51:41] liangent: can't find it in my logs or googling or reference atm...
[12:55:51] YuviPanda: last time I found http://stackoverflow.com/questions/12431496/nginx-read-custom-header-from-upstream-server [12:56:00] but it doesn't appear to be working for me in my local test... [12:56:38] liangent: oh, were you just attempting to read Content-Length? [12:57:54] YuviPanda: yeah [12:58:04] hmm, right. let me try that out [12:58:28] liangent: can you find another non-django URL that 500s, but doesn't have a custom 500 page? one for which the current behavior is appropriate? [13:00:54] YuviPanda: http://tools.wmflabs.org/liangent-misc/make_status.php?status=500&length=0 [13:01:07] liangent: hah! sweet [13:01:13] YuviPanda: source http://pastebin.com/n7JeFiCs [13:01:31] liangent: http://tools.wmflabs.org/liangent-django/right_timelines/ doesn't 500 anymore! :( [13:01:41] liangent: can you make it 500 for a little more time? :) [13:01:47] YuviPanda: I fixed it just now... [13:02:05] YuviPanda: you can http://tools.wmflabs.org/liangent-misc/make_status.php?status=500&length=100 now ... [13:02:15] liangent: aaah, forgot about the length param [13:02:17] liangent: sweet, thanks [13:02:38] liangent: the 500 handler will revert when puppet next runs on tools-webproxy, btw [13:03:58] YuviPanda: ok though I have a few more pages to fix [13:05:52] YuviPanda: hm my django site now returns "400 - Bad Request". don't know why [13:06:03] I'm not even sure who sent it [13:06:15] nginx? lighttpd? django (unlikely)? [13:06:56] I should really make the nginx logs available at some point, I think. [13:07:00] YuviPanda: try on http://tools.wmflabs.org/liangent-django/right_timelines/ : wiki = zh.wikipedia.org , group = sysop , output = rendered [13:07:13] liangent: playing with the error pages now, give me a bit [13:08:45] liangent: nope, the if ($sent_header_content_length) doesn't work because error_page can't be there. [13:09:12] YuviPanda: so your nginx doesn't start? [13:09:28] liangent: yeah. I did a test and it bailed.
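For reference, the approach being tested looks roughly like the sketch below; the upstream name and error-page URI are illustrative, not the actual tools-webproxy config. A plausible reason the condition never matches: nginx's `if` belongs to the rewrite module and is evaluated while the request is being routed, before the upstream has responded, so `$sent_http_content_length` (a response-header variable) is still empty at that point.

```nginx
# Hedged sketch of the conditional-error_page idea discussed above.
# "tools-webgrid" and "/admin/?500" are hypothetical placeholders.
location / {
    proxy_pass http://tools-webgrid;   # hypothetical upstream
    proxy_intercept_errors on;         # only intercepts codes that have an error_page

    # error_page is syntactically valid in "if in location", but this
    # condition runs in the rewrite phase, before the upstream replies,
    # so the response-header variable can never hold the real length here.
    if ($sent_http_content_length = '0') {
        error_page 500 /admin/?500;
    }
}
```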
[13:09:49] liangent: hmm, actually, maybe not [13:10:34] YuviPanda: http://paste.debian.net/101858/ [13:10:45] my nginx can start with this config file [13:10:51] though I'm always redirected to baidu [13:10:51] yeah, if in location is allowed, [13:10:56] let me try that [13:13:33] liangent: proxy is currently running that code, and it isn't quite working, since your generator doesn't actually send a content-length [13:13:45] liangent: maybe we could force lighty to always send a content-length, but that's bad for performance /streaming [13:14:26] liangent: perhaps we should move the 500 handling to lighty, instead of nginx? [13:14:43] liangent: then people can customize it all they want [13:14:48] liangent: and we can add a default one [13:15:25] YuviPanda: in my local test using that php on apache, Content-Length is sent [13:15:39] not sure if it's because of the difference between apache and lighty [13:15:44] liangent: ' curl -I 'http://tools.wmflabs.org/liangent-misc/make_status.php?status=500&length=100' [13:15:50] no content-length [13:16:14] YuviPanda: http://pastebin.com/Zm4keEXE .. [13:17:01] hmm, https://dpaste.de/YWjx [13:17:02] YuviPanda: http://tools.wmflabs.org/liangent-misc/make_status.php?status=500&length=100' is now intercepted by nginx [13:17:24] liangent: as is http://tools.wmflabs.org/liangent-misc/make_status.php?status=500&length=0 [13:17:37] puppet ran again, I presume :) [13:18:12] YuviPanda: so revert it again? [13:20:07] liangent: yeah, done. [13:20:39] liangent: neither are intercepted now [13:21:18] hm I guess it's really the diff between apache & lighty [13:21:23] probably [13:21:38] I'm going to see if I can make it intercept if it is missing or 0 [13:21:51] liangent: can you make your django app 500 for a moment? [13:22:17] I added an explicit one to php [13:22:25] liangent: content-length? [13:22:29] YuviPanda: yeah [13:23:08] not intercepting now either [13:23:53] you mean "not intercepting when length=0"? 
[13:25:06] liangent: yup [13:26:41] YuviPanda: btw changing python/django webapp code is more annoying than changing php files. the former requires a restart of lighty, or the old version is still used [13:27:25] YuviPanda: can you identify the source of that http 400 by the way? [13:27:35] liangent: are you still getting it? [13:27:52] liangent: I'm inclined to give up for the moment on fixing 500s this way. I don't think the 'if' works the way we think it works. [13:28:20] liangent: an easier solution is to shift the 500 handler to lighty, and then you can override it [13:28:35] liangent: do you have the bug number handy? I can comment there and then look at lighty [13:28:37] YuviPanda: it's now a 500, from where that 400 was sent [13:28:56] YuviPanda: https://bugzilla.wikimedia.org/show_bug.cgi?id=64393 [13:31:51] liangent: hmm, proxy_intercept_errors isn't allowed in an 'if' so we can't really do much. [13:32:28] YuviPanda: iirc proxy_intercept_errors doesn't intercept error codes without error_page defined anyway [13:32:45] liangent: yeah, but if we could somehow selectively turn it off... [13:33:19] doesn't not defining any error_page effectively turn it off? [13:33:39] it does, but we want to conditionally define them [13:33:43] ie.
only define error_pages when there is some response [13:34:01] (if "there is some response" can be detected correctly) [13:34:02] liangent: indeed, that's what the current config is, and doesn't seem to work [13:34:23] if ($sent_http_content_length = '0') { [13:34:23] 2 error_page 500 /admin/?500; [13:34:23] 3 } [13:34:36] (ignore the 2 and 3, line numbers, terrible copying) [13:34:42] YuviPanda: in my local test the problem is that I can't get $sent_http_content_length correctly [13:35:27] YuviPanda: that is, $sent_http_content_length is always empty [13:37:04] YuviPanda: so with my config file if ( $sent_http_content_length = '' ) { return 301 http://www.baidu.com/; } redirects everything [13:37:28] right [13:37:36] that might also be what's happening here [13:38:28] YuviPanda: so http://stackoverflow.com/questions/12431496/nginx-read-custom-header-from-upstream-server is wrong? [13:38:37] or did we miss anything? [13:38:51] liangent: it might be wrong? or this might be a bug. [13:39:09] liangent: I'm trying to understand the semantics of $sent_ [13:39:14] and why it is called sent_ [13:40:58] YuviPanda: no idea.. [13:41:10] I'm investigating other options [13:42:28] YuviPanda: because there're $http_user_agent variables? [13:42:34] for request headers... [13:42:52] another means sending error pages from lighty? [13:43:07] that, or using lua in this somehow :) [13:43:52] in a fast way [13:44:59] liangent: putting it in lighty seems the best option atm, with nginx not really co-operating [13:45:09] filing a bug in nginx also seems appropriate [13:45:15] am looking at how to do it in lighty now [14:04:11] ping https://wikitech.wikimedia.org/wiki/New_Project_Request/Tools_for_mass_migration_of_legacy_translated_wiki_content [14:05:29] Nemo_bis: why can't it be on tools? [14:05:33] Nemo_bis: what is it being developed in? [14:05:39] Nemo_bis: is it a mediawiki extension? or a separate bot / tool? 
[14:05:42] MediaWiki extension [14:05:49] ah [14:05:52] makes sense then [14:06:00] Nemo_bis: you need andrewbogott_afk or Coren though. [14:06:09] I know I know [14:06:37] today is a US holiday, so I dunno if they'll be up [14:06:39] Seems legit. [14:06:45] ah, he is :) [14:07:03] Nemo_bis: Watcha want for project name? [14:07:43] Pick a good one otherwise I'll default to tfmmoltwc :-) [14:07:44] Coren: "pagemigration" or something would be good [14:08:54] Nemo_bis: Created with that name. [14:10:43] thanks [14:14:09] Hey folks. Do we have a *.wmflabs.org SSL cert? [14:19:27] An error occurred during a connection to tools.wmflabs.org. SSL received a record that exceeded the maximum permissible length. [14:19:37] (Error code: ssl_error_rx_record_too_long) [14:21:51] I was getting to my instance with an SSL handshake for a while, but now I'm getting a 502 bad handshake. [14:22:59] RapidSSL CA [14:23:35] looks like it's back now [14:24:19] halfak: we do. why? [14:24:31] I'd like to use it. :) [14:25:00] halfak: https://wikitech.wikimedia.org/wiki/Special:NovaProxy :) [14:25:07] Hi, https://wikitech.wikimedia.org/wiki/New_Project_Request/Tools_for_mass_migration_of_legacy_translated_wiki_content [14:25:14] has been approved by an admin [14:25:23] Thanks YuviPanda [14:25:30] halfak: yw :) [14:25:31] do I get any credentials? [14:26:05] YuviPanda, it seems that I have the proxy set up, but I'm working on SSL now. [14:26:25] halfak: you don't need to do any work. the proxy is also an ssl terminator [14:26:42] halfak: your code just needs to listen on http. https (and SPDY!) automatically handled for you [14:26:50] Oh! Cool. [14:27:09] * halfak trims the https part of the config. [14:27:11] halfak: yeah :) First bit of ops-y code I wrote from a few months ago :D [14:30:26] Coren: pingg [14:30:33] YuviPanda, do I still need to handle connections via 443? [14:30:39] halfak: nope.
[14:30:41] Coren: only 80 [14:30:57] gah [14:30:59] halfak: only 80 [14:31:14] * halfak is troubleshooting a "bad gateway" issue. [14:31:18] Coren: I want to setup tools-proxy-test.wmflabs.org as a full duplicate. the redis instances are already in sync. [14:31:29] BPositive: You're project admin on the 'pagemigration' project. [14:31:32] halfak: check firewall (security groups) in wikitech, make sure your 80 is open [14:31:57] YuviPanda: ... okay? [14:31:59] Coren: only thing needed is to copy the ssl certs. ok for me to do that? [14:32:37] Must be something in my apache config. I can get the "It works" page with the default config. [14:32:49] YuviPanda: Well, ostensibly yes but I'm not hot on making copies of key material like that. Do you really /need/ to have the real cert rather than a self-signed one if it's just testing? [14:33:07] * halfak is just troubleshooting out loud -- no need to focus on helping if you're busy :) [14:33:07] YuviPanda: If it's important that you do, then sure. [14:33:19] Coren: it is, yeah. [14:33:49] YuviPanda: So yeah then; just apply the usual care with key material. [14:33:50] Coren: and regular users can't ssh into infrastructure vms, right? [14:33:58] YuviPanda: Right. [14:34:07] Coren: ok! just double checking :) [14:34:47] Coren, https://wikitech.wikimedia.org/wiki/Labs-vagrant#Setting_up_your_instance_with_labs-vagrant I am referring to this [14:39:37] Coren, not understanding where do I create the instance itself? [14:40:03] * halfak is no longer getting "It works" with default config. [14:40:16] Coren: PM? [14:41:02] BPositive: on Wikitech, from the "manage instances" page (check the sidebar) [14:41:06] and a reboot got it [14:41:14] wat [14:41:17] lol :D [14:42:17] YuviPanda, is there some sort of caching in place on the proxy? [14:42:34] halfak: nope. [14:42:39] it does gzip, but that's about it [14:42:52] I did think of adding it but definitely too much trouble [14:43:13] Gotcha. 
I just wanted to make sure that couldn't explain this behavior. [14:44:57] Coren, this seems to be a better tutorial https://www.mediawiki.org/wiki/User:BDavis_%28WMF%29/Notes/Labs-vagrant [14:45:35] While creating a new instance, how does the image type matter? I see 12.04 and 14.04 over there [14:46:52] BPositive: Well, 14.04 is new and shiny, but I don't know how well vagrant supports Trusty atm. [14:48:30] BPositive: use 12.04 [14:48:35] vagrant not fully tested with trusty yet [14:49:39] YuviPanda, okies [14:51:16] YuviPanda, should I expect to have any issues with virtualhosts -- e.g. requests come directly to the internal IP or something? [14:52:20] halfak: shouldn't. it sets Host: header according to what it receives [14:52:51] Interesting. I can't seem to catch the request with anything but a wildcard "*". Silly apache2. [14:53:40] halfak: yeah, just confirmed that it does set Host: to the domain name you picked [14:54:01] OK. Will keep testing. :) [14:55:53] Coren: I just had to copy the private .key file, right? [14:57:26] Coren: 2014/05/26 14:55:33 [emerg] 18537#0: SSL_CTX_use_PrivateKey_file("/etc/ssl/private/star.wmflabs.org.key") failed (SSL: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch) [14:57:48] Coren: should I copy the pems too? [14:58:20] no, they have the exact same size on either host [14:58:21] hmm [15:03:01] Coren: do poke when around [15:04:29] Coren: nevermind, got it to work! :) [15:04:31] YuviPanda: Yes, you need the matching certs. :-) [15:05:05] Coren: the certs were already installed by puppet, but it generated them with the wrong key since they were previously installed. I rm-d them and ran puppet again, all good now [15:14:03] Wikimedia Labs / deployment-prep (beta): beta labs mysteriously goes read-only overnight - https://bugzilla.wikimedia.org/65486#c3 (Andre Klapper) Who could investigate this?
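halfak's wildcard-only matching above is the classic name-based virtual host symptom. Since the proxy forwards `Host:` as the public domain picked in Special:NovaProxy, a vhost keyed on that name should match; the sketch below uses illustrative names, and on Apache 2.2 (precise) a `NameVirtualHost *:80` line is also required.

```apache
# Illustrative name-based vhost for an instance behind the labs proxy.
# ServerName must be the public proxy domain, not the instance hostname.
# Apache 2.2 (precise) additionally needs: NameVirtualHost *:80
<VirtualHost *:80>
    ServerName pagemigration.wmflabs.org
    DocumentRoot /var/www
</VirtualHost>
```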
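The "X509_check_private_key:key values mismatch" error above means the installed certificate and private key are not a pair. One way to check is to compare the RSA modulus of each; the helper below is a convenience wrapper (the certificate path is an assumption, since only the key path appears in the error message).

```shell
# modulus_digest FILE TYPE: print the md5 of the RSA modulus of a
# certificate (TYPE "x509") or private key (TYPE "rsa"). Identical
# digests mean the cert and key match; differing digests reproduce
# the nginx "key values mismatch" failure seen above.
modulus_digest() {
    openssl "$2" -noout -modulus -in "$1" | openssl md5
}

# Against the paths from the log (cert path is an assumption):
#   modulus_digest /etc/ssl/certs/star.wmflabs.org.pem x509
#   modulus_digest /etc/ssl/private/star.wmflabs.org.key rsa
```

Running both and comparing the digests tells you immediately whether puppet re-generated the cert against the wrong key, as turned out to be the case here.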
[15:19:43] Coren, YuviPanda : Do those steps need to be done from any particular directory or so? [15:21:27] My instance name is 'special-pm-instance' and IP address is '10.68.17.122' [15:21:30] not able to ssh to it [15:21:38] ping to 10.68.17.122 does not work either [15:27:15] BPositive: Are you trying that from a bastion host or from the InterNet? [15:29:47] from my machine [15:30:11] what exactly do you mean by from the internet? [15:30:29] scfc_de, ^ [15:32:14] BPositive: Your machine :-). 10.* is a private network, you cannot reach it from your machine without first logging into a bastion host and then connecting from there. Take a look at https://wikitech.wikimedia.org/wiki/Help:Access. [15:32:42] YuviPanda: any update? [15:33:33] Wikimedia Labs / tools: Install pastebinit - https://bugzilla.wikimedia.org/50935#c4 (Tim Landscheidt) It was probably only installed manually. I'll submit a patch. [15:41:02] Coren, can you please approve this? https://wikitech.wikimedia.org/wiki/Shell_Request/BPositive [15:41:08] or any of the admins? [15:42:10] BPositive: Oh, ah! Sorry I hadn't noticed you didn't already have shell. I had presumed you did since you made a project request. :-P [15:42:43] Coren, nope I am new to this, Nemo_bis asked me to file a new request and that's all I did :) [15:55:15] Coren, Thanks for approving. what would be my ? [15:55:32] BPositive: You picked it when you registered. :-) [15:56:04] is it the username? [15:56:16] and case-sensitive? [16:00:09] Yes, and yes. [16:00:25] No, I mean it's the *shell* username you picked, not your Wikitech username. [16:00:37] Go look on your preferences page, it's listed there. [16:00:54] I didn't pick any actually, in that request form, I just had to give the justification [16:03:54] Coren, got it under User Profile. It's bpositive [16:04:01] all lowercased [16:18:45] a930913: wooo! :) [16:18:58] Coren: does tools have any form of icinga monitoring at all?
[16:35:11] YuviPanda: It's on our collective todo [16:35:39] Coren: hmm, ok. I was just looking for other things to work on now that upgrading the proxy is a lost cause (for a while!). will think up something! :) [16:36:08] might end up building some tool that's mongo based, and build out mongo support. our puppet repo has a nice module for it already [16:37:25] I could ssh now :) What roles do I need for running code under Extension:Translate? [16:41:24] BPositive: MLEB should install them all, I think. [16:42:19] okay [16:42:41] YuviPanda, so I should now set up mediawiki and Translate like I did on my machine? [16:42:51] BPositive: labs-vagrant enable-role mleb [16:42:54] BPositive: labs-vagrant provision [16:53:31] YuviPanda, I did that and now I see an empty mediawiki directory in /vagrant [16:56:34] a930913: (topic) not true, saw a certificate error in da logs, some hour ago :P [16:57:33] YuviPanda, from the console output I see that git clone mediawiki failed [16:57:47] BPositive: do chmod +x ~ [16:57:51] BPositive: and then try again? [17:09:26] YuviPanda, yeah thanks. That worked well :) [17:09:36] BPositive: :) [17:09:57] YuviPanda, possibly the last question [17:10:05] BPositive: sure! [17:10:05] As per the tutorial, it says "Edit /vagrant/LocalSettings.php to set $wgServer to the URL you plan on using to access the instance." [17:10:22] and then "View your new wiki at http://my-new-instance.instance-proxy.wmflabs.org/" [17:10:53] BPositive: ah, that probably needs some editing :) [17:10:53] in the file, $wgServer = '//' . $_SERVER['HTTP_HOST']; [17:11:05] BPositive: go to https://wikitech.wikimedia.org/wiki/Special:NovaProxy, and add a domain for your host [17:11:09] so not understanding what the syntax is [17:11:21] BPositive: and see if that just works? [17:11:28] BPositive: that bit of doc might be out of date :) [17:12:14] so there is no need of appending . $_SERVER['HTTP_HOST']; in that variable? 
[17:12:25] BPositive: unsure :D let's find out [17:12:31] BPositive: add a proxy, and see if it works? [17:13:41] :D [17:13:51] I have DNS hostname as pagemigration.wmflabs.org [17:16:09] and instance as "http://special-pm-instance.eqiad.wmflabs:80" [17:16:23] BPositive: cool. [17:17:12] BPositive: is that URL working? [17:18:22] nope [17:18:49] do I need to run anything after saving that file? [17:18:51] I guess no [17:18:52] BPositive: can you add me to the project and give me projectadmin so I can check it out? [17:19:06] I trust you so I will :D [17:19:19] Gaaah! Trust Yuvi? [17:19:24] :-P [17:19:28] can only end in disaster! :) [17:19:47] Coren: I'm creating a VM called mongo-testing, and making it a self-puppetmaster to test out the mongo role. [17:20:27] Coren: considering using trusty. thoughts? should I stick to precise? [17:20:29] I don't need trusty [17:20:39] YuviPanda, done [17:21:41] YuviPanda: Well, we're *eventually* going to start defaulting to Trusty in tools; but make sure that whatever you do will work with Precise /clients/ because we'll have those for years to come. [17:21:52] Coren: of course. [17:22:13] Coren: I guess at some point in the future we'll migrate all the current exec nodes to trusty [17:23:49] Dallas will have Trusty nodes, but I won't migrate any tool forcibly; foreseeable future plans leave Precise alongside Trusty in eqiad for quite some time. [17:24:13] I'll probably just adjust the balance as tools are tested to work in Trusty. [17:25:41] hedonil: Major incident that affected tool devs or users of the tools? [17:26:05] Coren: cool [17:26:09] BPositive: works now [17:26:20] Also, let's pretend that I ran "rm -r /usr/share/" on my computer. What can I do to fix it? [17:26:20] Coren: no need to edit localsettings file :) can you update the documentation to remove that step? [17:26:54] err [17:26:56] I meant BPositive, not Coren [17:27:55] a930913: ... short of reinstalling the OS? [17:28:09] restore from backup?
:) [17:28:10] YuviPanda, http://special-pm-instance.eqiad.wmflabs/ works? [17:28:24] BPositive: nope, that's just the hostname, not the http url [17:28:30] BPositive: https://pagemigration.wmflabs.org/wiki/Main_Page is your HTTP endpoint [17:28:39] BPositive: special-pm-instance.eqiad.wmflabs is what you use to ssh in [17:28:47] ohho! [17:29:10] so what did you do? [17:29:13] Coren: Is that what I have to do? :( (Assuming I /did/ run that.) [17:29:24] BPositive: removed your localsettings.php customization :) not needed anymore [17:29:48] a930913: Pretty much, yeah; /usr/share contains a lot of stuff that'd be required for even apt-get to repair the install. [17:29:54] a930913: Hmm, importance: lowest/minor : "ssl error" for ~20 sec or so [17:30:14] hedonil: a930913 that was my fault, I think. [17:30:38] YuviPanda, okay [17:30:45] just kidding [17:31:52] * a930913 goes afk for an unspecified reason. [17:32:29] hehe [17:38:19] YuviPanda, https://www.mediawiki.org/w/index.php?title=User%3ABDavis_%28WMF%29%2FNotes%2FLabs-vagrant&diff=1015712&oldid=926180 [17:38:21] :) [17:38:31] BPositive: woot! thanks :) [17:38:41] see if that's right [17:39:59] BPositive: yeah, much more accurate [17:42:47] uhmm, what are the default login credentials? I could not find that in the docs too [17:43:44] BPositive: admin / vagrant [17:43:49] BPositive: they're on the mediawiki vagrant page [17:44:45] oh my bad [17:49:55] Coren: is there a way to add a dns service alias in labs, mapped to an internal ip? [17:50:23] ori: There is no interface for it, but I can add it to LDAP. [17:51:26] Coren: ooh. Can I ask for 'deployment-prep-syslog.eqiad.wmflabs' to point to deployment-bastion.eqiad.wmflabs? (that's I-0000010b / 10.68.16.58) [17:52:43] ... sure, but that seems a bit odd to me. [17:53:34] Coren: where's the code that generates replica.my.cnf files for new users? [17:53:38] Coren: is it in puppet? 
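The replica.my.cnf generation YuviPanda asks about here is, per Coren later in the log, a script that enumerates all users and ensures each has a credentials file, creating the credentials along the way. A rough sketch of that idea, with an assumed helper name and file layout (the real maintainer is a Perl script and also creates the matching database-side grants):

```shell
# Hypothetical sketch of "enumerate users, ensure each has credentials".
# ensure_replica_cnf and the uid>=1000 cutoff are assumptions, not the
# actual script; the real one also creates the DB-side grant.
ensure_replica_cnf() {
    user=$1 home=$2
    cnf="$home/replica.my.cnf"
    [ -e "$cnf" ] && return 0                  # already provisioned
    pw=$(head -c 18 /dev/urandom | base64)     # random password
    umask 077                                  # readable by the owner only
    printf '[client]\nuser=%s\npassword=%s\n' "$user" "$pw" > "$cnf"
}

# The real script would walk something like:
#   getent passwd | awk -F: '$3 >= 1000 {print $1, $6}'
# Demonstrate against a scratch "home directory" instead:
demo_home=$(mktemp -d)
ensure_replica_cnf alice "$demo_home"
cat "$demo_home/replica.my.cnf"
```

Re-running the walk is idempotent: existing files are left alone, so only newly created users get fresh credentials.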
[17:53:42] Coren: basically i want to be able to say that remote syslog hosts should be derived from the project name (specifically, "${::instanceproject}-syslog.${::site}.wmflabs"). but the syslog role for deployment-prep is already deployment-bastion, and i don't want to provision a new instance [17:55:06] ori: Seems legit, if still a bit odd. But isn't it easier to make the host configurable instead and just /default/ to the derived name? [17:55:41] YuviPanda: You know, I don't recall if I put it in there yet. Lemme check. [17:55:56] Coren: :D do put it in there if it isn't already [17:56:21] Coren: hmm. i don't like that because there's no reason for the hostname to be 'deployment-bastion'; the default config *should* work, and the fact that it doesn't reflects kludginess. but i suppose piling another kludge on top won't fix it. so i'll take your advice. [17:57:26] ori: IMO, it's good practice to have things like this overridable explicitly. I can think of other scenarios where you'd want - say - different project to log to the same spot, etc. [17:57:43] * ori nods [18:02:02] YuviPanda: Huh, I never did. [18:02:05] * Coren fixes that. [18:02:08] Coren: :) [18:02:18] * YuviPanda trouts Coren a little bit [18:08:03] Hey folks. I created a m1.medium VM that's supposed to have a 40GB root partition, but I only have a 8GB root partition. What did I do wrong? [18:21:09] I just created a new m1.medium VM and I'm in the same situation. [18:21:35] I think you've to partition them into /mnt or something? [18:21:36] Coren: ^ [18:23:59] halfak: Look into role::labs::lvm::srv which probably does what you'd want by default. [18:24:20] Also, ignore Yuvi. /mnt shouldn't ever be used for persistent mounts. [18:24:37] What do I "want" in this case, Coren? [18:24:59] halfak: Have the space available for whichever application you have in mind. :-) [18:25:05] oh, in that case, ignore me :) [18:25:18] Great. So the dropdown on instance creation should not be believed? 
[18:25:47] Also I can't find role::labs::lvm::srv [18:25:53] I can find role::labs::lvm::mnt [18:26:09] That's evil. Lemme fix that. [18:26:30] ::mnt should only ever be used for old migrated instances that were already wrong. :-) [18:27:17] k. I'm looking in the puppet config for the instance. Should I find ...lvm::srv elsewhere? [18:28:11] Oh wait! I see it for the new instance, but not the one I built about a month ago. [18:28:14] but I was right! :D [18:28:29] halfak: I just added it to the default list. [18:28:55] Got it. thanks [18:29:08] * halfak is stoked he doesn't have to start from a new instance. [18:29:19] halfak: this is why you should puppetize things! :D [18:29:32] Yeah, just running puppet will allocate the storage and mount it at /srv [18:29:56] YuviPanda, if only I could just use puppet. [18:30:06] And the time investment to pick it up were negligible. [18:30:22] yeah, have felt the pain [18:30:37] When I'm working on "volunteer time", there isn't much time to go around :\ [18:31:05] halfak: yeah, same here. [18:32:56] Coren, I've enabled "role::labs::lvm::srv" on wikitech. Anything I need to do on the instance or should "/srv" just show up? [18:33:20] It'll show up next puppet run, though you can force that with 'sudo puppetd -tv' [18:33:35] Thanks [18:34:00] Coren: oh, so the script just enumerates all users and ensures they have a file? [18:34:12] or am I reading the perl all wrong? [18:34:22] No, that's basically it. [18:34:37] Coren: cool! [18:37:02] Possibly creating the credentials themselves along the way. [18:38:06] right [18:38:21] I might do a similar thing for mongodb and piggyback on [18:38:49] Is there actually much call for mongodb? [18:40:37] I use it. It's nice if you want indexes and denormalization. [18:40:40] :) [18:40:43] I want to use it, plus the same with redis I think - once we have it people will find uses for it.
[18:40:51] yeah, plus flexible schemas are nice [18:41:02] and for a lot of tools full on sql is too much anyway. [18:41:15] yay halfak :) [18:41:29] In my case, SQL isn't a nice option. [18:41:48] Otherwise I would have done that. [18:42:02] yeah, lots of things that are complicated in SQL become easier with a document store [18:42:21] redis, on the other hand, lends itself nicely to my use case. I'd like to make that switch at some point. [18:43:21] halfak: yeah, but problem with redis is that you aren't guaranteed non-data-loss. At worst you can lose your last transaction [18:43:32] it solves a *lot* of problems really well though [18:43:47] Same with mongodb. [18:43:59] yup [18:51:27] Hey guys. New tips needed. I'm trying to run a command as another user (in this case, "mongodb"), but it doesn't look like I have the rights to do so. [18:51:42] (on an instance in my own project) [18:52:10] halfak: are you projectadmin? [18:52:12] I assume so [18:52:15] but good to check [18:52:32] yup. [18:53:10] I can sudo what I like without challenge, but when I try to run a command as another user, I get a password challenge. [18:56:44] should I just set my own password? [18:57:00] I'm not 100% clear on whether I should change the sudoers file. [18:58:36] YuviPanda, ^ [18:58:36] halfak: might be an ldap vs local user issue [18:58:38] Coren: ^ [18:59:47] halfak: The default sudoers allows you to sudo to root only. [18:59:47] happy fun workaround: sudo su - otheruser [19:01:07] Hmm... So I did that. "sudo su - mongodb" and whoami returns "halfak" [19:01:30] I usually do sudo -s and then su? [19:01:35] I did that for vagrant, I think [19:02:19] I just tried touching a new file and it's created with "halfak" as the owner. [19:03:54] Coren, ^ [19:03:57] no workie :( [19:05:57] Works for me. Odd. [19:06:03] I can't repair my DB as myself or root. Maybe I should just try it as root and then chown all the files again. 
[19:06:22] halfak: No need; from root you can still su to any user. [19:06:26] sudo -i [19:06:33] su - mongodb [19:06:53] Oh wait, mongodb might not have a valid shell/home [19:07:01] sudo wouldn't work either in that case [19:07:07] heh. [19:07:08] Remove the ' - ' [19:07:38] still no go. [19:07:56] No error, "sudo su mongodb" [19:08:52] Yeah, I'm thinking "no valid shell" [19:08:52] http://pastebin.com/7VMhv5Gc [19:09:13] * halfak shrugs [19:11:06] So, looks like I'm just going to try running the repair as root. [19:14:05] I'm just going to declare defeat on the repair and start from scratch. This has already eaten two hours. [19:16:42] * halfak is restoring the backup again. [19:44:36] https://snuggle-en.wmflabs.org! [19:55:06] Wikimedia Labs / tools: Limit number of jobs users can execute in parallel - https://bugzilla.wikimedia.org/65777 (Tim Landscheidt) NEW p:Unprio s:normal a:Marc A. Pelletier In pmtpa, we limited the number of jobs that could be executed in parallel per queue to 16 IIRC. During migration to... [20:23:48] Hm.. I'm trying to set up a tunnel so I can iterate my tool locally while connecting to labsdb replication [20:24:02] This should be possible with ssh tools-login.wmflabs.org -L something, right? [20:24:16] I've been trying different things but can't get it to work [20:25:47] Krinkle: yeah, that should work [20:26:00] ssh -L 3500:enwiki.labsdb:3306 tools-login.wmflabs.org [20:26:01] something like that? [20:26:10] looks about right [20:26:30] although I would go for the option 'run the tool on tools-login and connect to it remotely' (if it's a web tool, anyway) [20:27:17] What do you mean? [20:27:40] e.g. flask allows you to run a local development web server [20:27:59] if you run that on tools-login, you could connect to that web server through an ssh tunnel [20:28:39] in any case, your command seems to work -- I can then telnet localhost:3500 and get a connection [20:28:48] I'm confused. I have a working local environment.
The reason I want to proxy the db is so I can debug it and edit it in my GUI editor from my local file system. [20:29:03] OK, sure. [20:29:20] Alternatively I'd need a way to mount the directory in tool-labs locally and edit that. [20:29:31] but I haven't found a way to do that that works with sublime. [20:29:37] Sublime Text [20:29:38] sshfs doesn't work? [20:30:11] anyway, to get back to the tunnel; does telnet localhost 3500 work? [20:30:17] if not, try 127.0.0.1 3500 instead [20:30:18] haven't tried that in years, didn't work for me a few years ago on Mac. [20:30:21] I'll give it another shot [20:30:35] ssh -L 3500:enwiki.labsdb:3306 tools-login.wmflabs.org [20:30:36] worked for me [20:30:44] but I don't have mysql installed locally, so I can't test more than just telnet [20:31:06] Yeah, works through telnet, but mysql hangs up [20:31:27] I prefer editing the live file though, so I'll give sshfs another try [20:31:39] oh! there's something with mysql trying to connect through a pipe by default [20:31:39] petan: https://tools.wmflabs.org/paste/view/662c89d4 [20:31:48] Krinkle: try 127.0.0.1 with mysql [20:31:48] It's easier to just create a new branch there in git, check it out in a temp directory and work there in the real env. [20:31:49] as hots [20:31:52] as host* [20:32:35] see, e.g. http://serverfault.com/questions/337818/how-to-force-mysql-to-connect-by-tcp-instead-of-a-unix-socket [20:32:43] liangent: what's that? [20:32:50] valhallasw: Hm.. I did that already [20:33:03] valhallasw: Oh, right, mysql doesn't support :3500, need to pass --port 3500 [20:33:25] perfect! [20:33:42] I'll still go for sshfs though, else I'll need a dozen tunnels for different dbs [20:33:52] and a switch in the code for local/labs hostnames and ports [20:33:57] petan: hm see mail [20:34:16] pastebinit is more flexible than you think [20:34:23] * Krinkle runs brew install sshfs [20:34:34] Yay, brew has a package for it now.
It didn't when I last tried [20:36:40] liangent: why would you want to use a generic tool if you can also use Petan's Awesome Tool(TM)? ;-p [20:40:28] liangent: it doesn't work [20:41:02] petan: well the package is missing on toollab so I can't test it there [20:41:24] valhallasw: why would you want to use tool which doesn't work when you can use some that just does? hm... that's complicated answer :P [20:41:56] petan: then it would make more sense to fix pastebinit than to write a complete replacement tool :-p [20:42:33] valhallasw: that would take me 200 times longer... [20:43:07] because petan works with c# ? [20:43:50] 3Wikimedia Labs / 3deployment-prep (beta): beta labs mysteriously goes read-only overnight - 10https://bugzilla.wikimedia.org/65486#c4 (10Antoine "hashar" Musso) On one SauceLab failure, it was POSTing to "http://en.wikipedia.beta.wmflabs.org/wiki/User:Selenium_user/firefox?vehidebetadialog=true&veaction=ed... [20:44:07] valhallasw: pastebinit takes -b or something as an argument to point it to something [20:44:30] yeah, liangent posted a config file [20:45:05] petan: btw in your tool, the prefix "Successfully pastebined to: " makes it more difficult to pipe its output to something else, if necessary [20:45:29] ah :) [20:45:52] liangent: have you tested it? I can puppetize it if you want [20:46:12] liangent: you can use parameter -s to suppress it [20:46:14] although I personally do not like using tools own paste, especially if tools goes down or whatever... [20:46:25] liangent: it's written in c++ not c# [20:47:01] YuviPanda: parameter -b doesn't work, because it's designed for pastebin.com not stikked based pastebins and even liangent's config doesn't work... [20:47:24] let me check [20:47:29] hhaaa valhallasw !! [20:47:49] valhallasw: did you get an opportunity to add Sphinx to pywikibot ? [20:48:07] hashar: I've played with it, but I'm not completely happy with it [20:48:35] petan: I tested locally .. 
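For reference, the `-b` flag being argued about selects the pastebin backend per invocation; a usage sketch, assuming the tools paste service URL from later in the log:

```shell
# The default backend comes from the distro/user config; -b overrides it:
echo "hello" | pastebinit -b http://tools.wmflabs.org/paste

# -f sets the syntax-highlighting format, -a the author name:
pastebinit -b http://tools.wmflabs.org/paste -f python -a liangent < script.py
```

pastebinit prints only the resulting URL on stdout, which is what makes petan's "Successfully pastebined to: " prefix awkward for piping by comparison.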
and as I said I can't test it on toollab now due to missing package [20:48:39] petan: valhallasw liangent pastebinit by itself works fine for me [20:48:42] http://paste.ubuntu.com/7524280/ [20:48:46] valhallasw: you should raise your concerns on the list probably (I am subscribed to it). Will be happy to help if possible. [20:48:53] liangent: pastebinit is on tools-login now. I'm going to make a puppet change in a moment [20:49:10] YuviPanda: I wanted something to use for http://tools.wmflabs.org/paste [20:49:22] hashar: https://www.mediawiki.org/wiki/Help:Pywikibot/Documentation_RFC [20:49:23] valhallasw: pinged you because I am refactoring the way we publish doc and played a bit with the sphinx based doc we have. Example is https://doc.wikimedia.org/mw-tools-scap/ :) [20:49:48] hashar: do you hate ReST yet? ;-) [20:49:50] valhallasw: RFC ++ Maybe I will manage to track the edits made on that page [20:50:00] valhallasw: I am ok with ReST :] [20:50:24] hashar: anyway; take a look at the 'hybrid' scenarios, I think those might be the most worthwhile [20:52:10] hashar: as for sphinx, I made an initial attempt at https://gerrit.wikimedia.org/r/#/c/133939/ [20:52:25] ah, you responded there :-) [20:53:49] petan: the config file appears to work correctly, except that I need to use tools-webproxy.eqiad.wmflabs instead of tools.wmflabs.org [20:54:42] liangent: ok, let me puppetize. paste modified config? [20:55:37] valhallasw: I -1 ed it just because "You would also need to add Sphinx to the test-requirements.txt file as well as any sphinx extension defined in conf.py" [20:55:48] liangent: Actually, lemme add an internal alias to this first -- it's been annoying everyone and that'd mean that the URLs would be annoying. [20:56:29] Coren: didn't scfc_de add a patch for that ages ago? :) [20:57:00] YuviPanda: Not that I can see offhand.
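pastebinit discovers backends from XML snippets in /etc/pastebin.d/ (or a per-user ~/.pastebin.d/). Liangent's actual config was posted to a paste service that is long gone, so the fragment below is only an illustrative reconstruction of a Stikked-style entry; the element names are approximate, and the stock files shipped in /etc/pastebin.d are the authoritative schema reference:

```xml
<!-- /etc/pastebin.d/tools-paste.conf - hypothetical backend definition -->
<pastebin>
    <name>tools.wmflabs.org/paste</name>
    <!-- per the log, the internal hostname had to be used instead
         of tools.wmflabs.org until the proxy alias was added -->
    <basename>tools-webproxy.eqiad.wmflabs</basename>
    <format>
        <content>data</content>
        <title>title</title>
        <name>name</name>
        <language>lang</language>
    </format>
</pastebin>
```

A backend definition alone does not change the default site; that is the separate ~/.pastebinit.xml issue that comes up later in the log.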
[20:57:06] Coren: let me find it [20:57:26] YuviPanda: nvm, found it [20:57:32] Coren: https://gerrit.wikimedia.org/r/#/c/123149/ [20:57:33] Coren: ah :) [20:57:52] rebased cleanly! [20:58:20] hashar: Yes, I got that :-) I just had not mentally registered the fact you had already seen the commit :-) [20:58:32] Hm. Not clear that using a host resource is the best way because maintain-replicas regenerates the hosts file. [20:59:59] Coren: just realized this fixes internal https access too [21:00:18] liangent: hmm, where is the config supposed to go? [21:00:23] liangent: For different values of "fixes" [21:00:44] ah, /etc/pastebin.d [21:00:45] Coren: https://tools-webproxy.eqiad.wmflabs/ is not serving a certificate of tools-webproxy.eqiad.wmflabs [21:02:04] YuviPanda: there seem to be a few more params to add: private and expire [21:04:03] YuviPanda: maybe this? https://tools.wmflabs.org/paste/view/09661cd8 [21:04:17] Added. [21:05:27] https://tools.wmflabs.org/paste/view/0f88e57b looks better :p [21:05:37] * Coren wonders what {dev,exec,login}-android-test were supposed to be for. [21:08:04] https://tools.wmflabs.org/paste/view/44d19b66 - one more? not sure whether I'm using that one correctly [21:10:51] Coren: Tools job queue seems to have a problem; a job was queued > 3 hours [21:11:35] russblau: The per-user run limit was not moved at migration, and someone is currently running a shitload of jobs. On the plus side, they should be over soon. :-/ [21:13:06] Seems like it would be a good idea to set a limit somewhere less than a shitload :-) [21:35:02] russblau: https://bugzilla.wikimedia.org/65777 [21:36:53] Coren: Are {dev,exec,login}-android-test still around? IIRC I set them up to test hashar's puppetization of the Android SDK, but I'm pretty sure I deleted them afterwards. [21:37:10] scfc_de: No, I was just wondering because I saw the host keys.
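On the run-limit discussion (later filed as bug 65777): in (Open)Grid Engine a per-user cap can be expressed either through the global `max_u_jobs` parameter or through a resource quota set. A sketch of both, hedged since the exact pmtpa configuration is not shown in the log; the quota-set name is illustrative and 16 is the figure Tim recalls:

```shell
# Show the current global per-user job cap:
qconf -sconf | grep max_u_jobs

# Or add a resource quota set limiting each user to 16 running slots:
cat <<'EOF' > /tmp/user_job_limit.rqs
{
   name         user_job_limit
   description  "cap concurrent jobs per user"
   enabled      TRUE
   limit        users {*} to slots=16
}
EOF
qconf -Arqs /tmp/user_job_limit.rqs
```

A resource quota set is the finer-grained option, since it can be scoped per queue or per host rather than globally.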
[21:37:49] k [21:39:21] 3Wikimedia Labs / 3tools: Install pastebinit - 10https://bugzilla.wikimedia.org/50935 (10Tim Landscheidt) a:5Marc A. Pelletier>3Yuvi Panda [22:12:49] 3Wikimedia Labs: Implement ability to search wikitext of current Wikimedia wiki pages with regular expressions (regex) - 10https://bugzilla.wikimedia.org/43652 (10MZMcBride) [22:39:10] SGE = problems? [23:03:09] johang: Just saturated at the moment and probably for some time to come. [23:06:14] liangent0: the pastebinit config you gave me doesn't fully seem to work? [23:07:07] YuviPanda: What's not working? [23:07:21] liangent0: I pipe things into pastebinit and it still pastes to ubuntu [23:08:01] YuviPanda: -b ? [23:08:32] liangent0: hmm, that works. no way to make that the default? [23:08:37] That config file only adds support for a new pastebin backend [23:09:05] .pastebinit.xml ? [23:10:46] reading about that now [23:11:25] liangent0: bah, it is only at the user level! [23:11:29] liangent0: this is stupid [23:13:18] grr [23:30:08] scfc_de: Coren this still needs us to setup an alias in /etc/bash.bashrc however, since by default it doesn't actually go to tools paste. they have a slightly stupid configuration thingy, where the default paste site can only be determined per-user [23:30:16] scfc_de: Coren I'll do that change later. [23:35:02] Now if petan'd focus his attention on adding a /etc/pastebinit.xml to that ... :-) [23:37:48] 3Wikimedia Labs / 3tools: Install pastebinit - 10https://bugzilla.wikimedia.org/50935#c7 (10Tim Landscheidt) 5PAT>3RES/FIX While the default goes to http://paste.ubuntu.com/ and not http://tools.wmflabs.org/paste/, this bug as such has been fixed. [23:38:19] scfc_de: should be an easy 3 line fix, I'd reckon. I read through the code a bit [23:38:25] anyway, back to setting up mongodb [23:38:40] scfc_de: I've been trying to setup tools-mongo :) Cleaning up the mongo module in ops/puppet [23:40:53] Removing dead code is always good.
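The per-user default YuviPanda is grumbling about lives in ~/.pastebinit.xml: pastebinit reads the default site and author from there, and there is no system-wide equivalent, hence the proposed /etc/bash.bashrc alias. A sketch (the author value is illustrative):

```xml
<!-- ~/.pastebinit.xml: make the tools paste service the default backend -->
<pastebinit>
    <pastebin>http://tools.wmflabs.org/paste</pastebin>
    <author>yuvipanda</author>
</pastebinit>
```

The system-wide workaround mentioned in the log would then be a line like `alias pastebinit='pastebinit -b http://tools.wmflabs.org/paste'` in /etc/bash.bashrc, applied to every login shell.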
[23:52:44] Coren: scfc_de any idea how /data/project/.system/debs gets added to the deb repo? I tried including misc::labsdebrepo but that doesn't seem to do anything [23:53:08] and grepping for 'system/debs' gives me nothing [23:54:19] Look at toollabs/*.pp, $sysdir/debs IIRC. [23:54:36] $sysdir = /data/project/.system [23:56:09] ah, hmm [23:57:16] scfc_de: W: Failed to fetch file:/data/project/repo/Packages File not found [23:57:44] oh [23:57:46] that's from labsdebrepo [23:58:18] nevermind
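For context on that "Failed to fetch file:/data/project/repo/Packages" error: a flat file:// APT repository needs a Packages index generated next to the .deb files. The paths below mirror the ones in the conversation, but the actual puppet wiring in `misc::labsdebrepo` and toollabs/*.pp is not shown here, so this is only the generic mechanism:

```shell
# Generate the index a flat repo needs (run in the repo directory).
# The second argument is an optional override file; /dev/null means none.
cd /data/project/repo
dpkg-scanpackages . /dev/null > Packages

# Matching sources.list entry for a flat, unsigned local repo:
# deb [trusted=yes] file:/data/project/repo ./
```

With no Packages file present, `apt-get update` fails exactly as in the log, even if the .debs themselves are in place.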