[03:12:47] !proxy [03:12:47] did you know we have a proxy ? :-] https://wikitech.wikimedia.org/wiki/Help:Proxy [03:12:51] !webproxy [04:11:20] Coren: :~$ echo -e "GET /cluestuff/vis/stream HTTP/1.1\nHost: tools.wmflabs.org\n\n" | nc tools.wmflabs.org 80 -q 10 [04:11:59] Coren: Any idea where those extra few hex characters are coming from at the start of each event? [04:18:11] a930913: What hex characters do you mean? [04:20:30] scfc_de: Before the lines starting "data:" [04:21:57] a930913: Don't see that: "...\n\ndata: {something}\nevent: mainspace\n\ndata..." [04:23:21] scfc_de: Hmm, I wonder if it's the netcat... [04:24:05] scfc_de: http://pastebin.com/gyRBcJwN [04:25:05] a930913: I used "wget http://tools.wmflabs.org/cluestuff/vis/stream" (+ ^C after some seconds). [04:26:36] scfc_de: Mmm, so it must be an artifact of netcat. [04:26:38] a930913: With nc, I get that as well. Are you perhaps examining User-Agent ? [04:27:18] scfc_de: Which begs the question, if the stream is working correctly, why doesn't it work in the browser? :/ [04:27:42] scfc_de: I'm not using agents. [04:31:04] a930913: Do you get the same behaviour when you bypass tools-webproxy by directly connecting to your lighttpd? [04:32:19] scfc_de: How? [04:35:15] cluestuff's lighttpd is running on tools-webgrid-02, /var/run/lighttpd/cluestuff.conf says "server.port = 4106"; so tools-webgrid-02:4106. [04:38:19] Now if I could remember how the URLs look ... "curl http://tools-webgrid-02/cluestuff/vis/stream" gives a 404. [04:39:39] scfc_de: Port's not there? [04:39:46] a930913: In fact, there's no ~tools.cluestuff/public_html/vis? [04:40:22] Oh, you have a custom FCGI handler. [04:42:00] Hmmm. But .lighttpd.conf suggests that the URL for curl is correct. Anyway, gotta go! [05:14:19] andrewbogott_afk: can you give a look to https://wikitech.wikimedia.org/wiki/New_Project_Request/osmit_or_osmit-cruncher when you're back? [06:17:34] legoktm: shall I add you to the wikibugs group? 
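For later readers: the "extra hex characters" before each `data:` line are almost certainly HTTP/1.1 chunked transfer-encoding chunk-size markers. A raw `nc` dump shows the wire format verbatim, while `wget` and `curl` decode the chunk framing transparently, which is why they never saw them. A minimal sketch of that framing (the sample body here is made up, not the actual cluestuff output):

```python
def dechunk(raw: bytes) -> bytes:
    """Decode an HTTP/1.1 chunked-encoded body into its payload."""
    out = []
    while raw:
        # Each chunk starts with a hex size on its own line -- these are
        # the "extra hex characters" seen in the nc output.
        size_line, _, raw = raw.partition(b"\r\n")
        size = int(size_line.split(b";")[0], 16)
        if size == 0:  # terminating zero-length chunk
            break
        out.append(raw[:size])
        raw = raw[size + 2:]  # skip chunk data plus its trailing CRLF
    return b"".join(out)

# A chunked body as nc would show it: hex size, CRLF, data, CRLF, "0", CRLF CRLF.
sample = b'10\r\ndata: {"x": 1}\n\n\r\n0\r\n\r\n'
payload = dechunk(sample)
```

Here `dechunk(sample)` yields just the SSE payload, with the `10` (16 bytes) marker stripped, matching what wget showed versus what nc showed.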
[06:17:44] morning valhallasw [06:17:44] !log local-wikibugs wikibugs stopped reporting; investigating [06:17:45] Logged the message, Master [06:18:02] sure [06:19:13] did it stop reading new emails? [06:19:32] HUH. [06:21:45] !log local-wikibugs NameError: name '_wsp_splitter' is not defined in /data/project/wikibugs/src/pywikibugs/get_unstructured.py. Apparently the line 'from email._header_value_parser import _wsp_splitter, _validate_xtext' had not made it into the git repo, and was cleared by accident on deployment [06:21:46] Logged the message, Master [06:23:20] !log local-wikibugs Merging and deploying b7bbf92d7d2ceef993afc1113f515e01d79e1248 [06:23:21] Logged the message, Master [06:23:27] heh :P [06:24:05] should be OK now... [06:42:05] !log local-wikibugs deployed 2.0-1-gb7f4290 [06:42:07] Logged the message, Master [06:52:40] legoktm: ok, you're a wikibugs admin now :-) [06:52:55] if anything is broken, check ~/pywikibugs.err as that contains all output [06:53:07] ok [06:53:20] and I just use start.bash to reboot? [06:53:30] yep [06:56:44] oh, let me also add you as github admin [07:41:33] Hi. Toolserver is nearly dead now and I still haven't moved my project yet. It's been a long time since I tried to deal with it, but as far as I remember I didn't get answers at that time to all of my questions, so I'm going to ask again now. Please excuse me, if my questions are answered somewhere already. [07:43:18] I'm maintainer of the LALM project on toolserver, which currently is not under active development. Therefore I would like to start with a minimal-solution question: Is it possible to export the issues from JIRA to a readable form (without migration to another bug tracker)? [07:45:05] For a possible (and preferred) migration to labs: Am I right that it's possible to use java/servlets and an openstreetmap API/snapshot DB (osm database schema beyond the mapnik rendering database scheme)?
[07:45:58] jongleur: you can export JIRA issues to XML, or you can save searches to CSV and get into Excel [07:46:01] https://confluence.atlassian.com/display/JIRA/Exporting+Search+Results+to+Microsoft+Excel f.e. [07:46:21] s/Excel/Calc [07:47:40] mutante: thanks [07:48:08] jongleur: there is "labs" and there is "toollabs". while i'm not 100% sure if you can use servlets within toollabs.. that all sounds doable in just "labs" [07:48:35] and probably also in toollabs [07:50:14] jongleur: there is work on OSM tile servers at wmf ..if that is related to your tool [07:50:24] might be [07:50:47] I last looked into it over a year ago. at that time osm was not available at labs at all. [07:50:57] the first question to ask is probably "should i do this in toollabs, or should i have my own project" [07:51:10] there might be pros and cons [07:51:12] sure. [07:51:39] but you can always ask for a project in labs, then you can create virtual machines [07:51:44] ;) that's why parts of my questions are dedicated to export the stuff from toolserver without migrating it ;) [07:52:36] i thought there was some general approach already [07:52:42] to import all the JIRA issues into Bugzilla ? [07:53:28] jongleur: but by "own project" i also meant in WMF labs, just as a separate labs project [07:53:44] yes ;) (but that's the point - this is about migration already, while I'm still not sure I'm able to run my project on labs at all) [07:54:08] hmm. i think it would be good if you can write a short summary of your tool and mail the lest [07:54:12] list [07:54:26] the "problem" is that for a real run (beyond the current test installations) I would need access to an OSM database installation which is far beyond the capacity of a single project, I think [07:54:56] jongleur: but WMF is also working on running those OSM servers.. so it should be worth asking [07:55:11] wikitech-l is also a good place to get some clues there [07:55:49] thanks.
as I'm currently working at my master thesis it might even be worth to do that in some months instead, I'll see [07:57:57] jongleur: just a matter of getting the right people to read it , i think.. do it:) good luck [07:58:06] thanks for your help [09:22:53] 3Wikimedia Labs / 3tools: Add support for Java Servlets on Tool Labs - 10https://bugzilla.wikimedia.org/54845#c9 (10Silke Meyer (WMDE)) Is the manual work needed currently documented? Or do you have an E.T.A. for your script? [09:29:18] !log integration rebased operations/puppet [09:29:20] Logged the message, Master [09:29:36] !log integration deploying phantomjs from integration/phantomjs.git {{gerrit|130049}} [09:29:37] Logged the message, Master [10:03:53] 3Wikimedia Labs / 3tools: Provide namespace IDs and names in the databases similar to toolserver.namespace - 10https://bugzilla.wikimedia.org/48625#c33 (10Silke Meyer (WMDE)) Any comments on Marlen's work so far? Did anyone test it? [10:38:14] valhallasw: you are too fast ! [11:00:05] hashar: what did he do! [11:10:32] YuviPanda: thank you :] [11:10:46] bah [11:10:47] done = yield from asyncio.wait_for(fixup_future, timeout=30) [11:10:51] gives me invalid syntax :/ [11:11:01] I need a different python version :/ [11:11:40] hashar: 3.4 [11:11:41] :D [11:11:46] yup [11:11:56] I am using vim syntastic [11:12:06] not sure how to teach him that the .py file uses a different version [11:12:17] hashar: yeah, same here. I have syntastic off for these files [11:13:44] hashar: too fast? never! 
:p [11:14:07] bah [11:14:07] tok, *remainder = _wsp_splitter(value, 1) [11:14:09] and yes, it's 3.4+, both for asyncio as well as for the email library [11:14:16] I love python 3.x syntax huuh [11:14:18] :D [11:17:30] I just need to hit myself every time I write 'except SomeException, e:' [11:21:36] :D [11:21:46] valhallasw: I'm going to refactor the channels.py some more [11:21:54] valhallasw: "#huggle": product("huggle") [11:22:10] valhallasw: "#huggle": product("huggle", component("something")) [11:22:40] valhallasw: YuviPanda: what about using YAML to define the rules ? :D [11:22:48] hashar: yeah, I also want to do that [11:23:02] maybe I should just do that instead [11:23:15] hashar: similar to grrrit-wm [11:23:47] you could even write some DSL to manage both grrrit-wm and wikibugs [11:25:00] and then put the config into mysql [11:25:06] (joking!) [11:25:08] :-] [11:28:27] valhallasw: I think #-dev shouldn't have all. Just anything that doesn't match anywhere else. -feed can have all [11:34:53] hashar: that would work for simple filters, but not for more complicated filters [11:35:20] hashar: basically, the DSL will then become complicated itself, and you might also have just used python. [11:35:34] yup [11:36:16] YuviPanda: I'd suggest x.product("x", "y", "z") => x["X_BUGZ_PRODUCT"] in ["x", "y", "z"] [11:36:30] then one can just do x.product("x") and x.component("a", "b", "c") [11:36:56] (x needs to be objectified first, then, but IIRC you were working on that) [11:36:59] or just leave it as-is [11:37:03] it's not as if the config changes a lot :p [11:37:56] and I don't really care what gets sent to #-dev tbh. Make sure to discuss with the powers that send emails ;-D [11:42:22] valhallasw: I'm writing a simple YAML file to see how it is.
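A footnote on the two snippets quoted above, since both bit the bot: `_wsp_splitter` is a private helper inside CPython's email package (the missing import behind the earlier NameError, and therefore subject to change between Python versions), and `except SomeException, e:` is Python 2-only syntax. A small illustration:

```python
# The restored import from the !log message (a private CPython API --
# it may change or disappear between versions):
from email._header_value_parser import _wsp_splitter

# _wsp_splitter is a compiled-regex .split that keeps the whitespace
# separator as its own list element, so maxsplit=1 yields [token, wsp, rest]:
tok, *remainder = _wsp_splitter("first rest of value", 1)

# Python 3 spells "except SomeException, e:" (a SyntaxError in 3.x)
# with the "as" form instead:
try:
    1 / 0
except ZeroDivisionError as e:
    caught = type(e).__name__
```

The starred unpacking (`tok, *remainder = ...`) is itself Python 3-only, which is part of why the file needed 3.4+ along with asyncio.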
if it gets complex we can keep it in py [11:44:03] k [11:47:06] valhallasw: hashar https://dpaste.de/PtCK [11:47:14] that should completely capture the same thing we are doing right now [11:47:28] i hate yaml [11:47:55] that component: should be indented [11:48:03] but iirc yaml says otherwise [11:48:24] valhallasw: https://dpaste.de/BX3g is valid YAML, the previous one didn't pass lint [11:48:39] valhallasw: ah, but think of each item as a filter. In that case they are all top level elements no [11:48:46] valhallasw: we can later on have things like 'resolution': 'fixed' [11:48:49] and stuff, if needed [11:50:17] valhallasw: I'll go ahead and write a patch unless you've a strong -2 [11:50:18] hashar: ^ [11:51:31] !log pywikibugs Deployed bf1be7b55a19457469f311ae54e1cf6409eb4a0b [11:51:32] pywikibugs is not a valid project. [11:51:38] -:-) [11:51:38] !log tools pywikibugs Deployed bf1be7b55a19457469f311ae54e1cf6409eb4a0b [11:51:40] Logged the message, Master [11:52:26] YuviPanda: sounds good [11:52:31] YuviPanda: dont forget to write tests :-D [11:52:36] hashar: hah! [11:52:37] true [11:52:49] a lint test to validate the YAML [11:52:51] YuviPanda: local-pywikibugs [11:52:57] and one to validate the layout match the schema [11:52:59] now it's in the tools SAL instead of the pywikibugs SAL ;-D [11:53:04] Garnig: aaah, gah. right [11:53:07] i mean, valhallasw [11:53:09] sorry Garnig [11:53:18] (I submitted a patch for morebots, but it hasn't been deployed yet it seems [11:53:23] https://pypi.python.org/pypi/voluptuous/ can let you validate a YAML layout [11:53:24] valhallasw: not tools.? 
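The dpaste links above have since expired, so the exact YAML layout being discussed is lost. As a sketch of the kind of schema check being proposed (hand-rolled here rather than using voluptuous, with a hypothetical channel-to-filter mapping; none of this is the actual wikibugs config):

```python
def validate_channels(conf):
    """Check a {channel: {field: value-or-list}} rules mapping.

    Raises ValueError on the first malformed entry; returns None if fine.
    """
    allowed_fields = {"product", "component", "resolution"}  # hypothetical
    for channel, rules in conf.items():
        if not channel.startswith("#"):
            raise ValueError("channel name must start with '#': %r" % channel)
        for field, value in rules.items():
            if field not in allowed_fields:
                raise ValueError("unknown field %r for %s" % (field, channel))
            if not isinstance(value, (str, list)):
                raise ValueError(
                    "value for %s.%s must be str or list" % (channel, field))

# Mirrors the "#huggle": product("huggle", component("something")) idea above:
example = {
    "#huggle": {"product": "huggle", "component": ["something"]},
}
validate_channels(example)  # well-formed: no exception
```

A lint test like this run in CI would catch a malformed rules file before deploy, which is the "validate the layout against a schema" idea from the conversation.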
[11:53:26] also it's wikibugs [11:53:36] no, the LDAP group is still local-xxx [11:53:53] YuviPanda: integration/zuul.git comes from OpenStack and uses Voluptuous to validate the Zuul config file [11:54:04] valhallasw: aah [11:54:08] anyway [11:54:15] !log local-wikibugs deployed bf1be7b55a19457469f311ae54e1cf6409eb4a0b [11:54:16] Logged the message, Master [11:54:19] someone should deploy a new version of morebots [11:54:28] then you can just !log wikibugs yakyakyak [11:56:09] yeah [12:02:16] !add-labs-user [12:03:30] "If you currently have SVN access, then you have an account, but need to have it linked to Labs (how-to for admins: !add-labs-user)" [12:03:36] quote from https://wikitech.wikimedia.org/wiki/Help:Access#Admins [12:03:48] what was it? i forgot [12:04:00] things to do for "existing svn user" [12:04:15] valhallasw: I'm setting up pyenv for wikibugs [12:04:38] and is !add-labs-user gone ? [12:22:19] a930913: My guess now why curl didn't work: lighttpd expects a different hostname than tools-webgrid-02. Will test that later. [12:23:49] hi mutante [13:14:15] I am getting port 25 closed in my labs instance. I added the rule to open port 25 for 0.0.0.0/0 in Special:NovaSecurityGroup. Yet, telnet localhost 25 shows telnet: Unable to connect to remote host: Connection refused [13:15:19] tonythomas: so apparently there is no service running on port 25? [13:15:42] valhallasw: I have made my exim4 run on port 25 [13:15:44] tonythomas: localhost may not be covered by 0.0.0.0 [13:15:48] its not showing up though [13:16:20] mutante: oh ! I tried to give the box IP there, but it showed failed to add rule [13:17:25] tonythomas: can you see your exim process in "netstat -tulpen" on the box? [13:17:31] eh, the instance [13:18:42] mutante: nope. its not coming up there [13:20:56] mutante: I will pastebin my update-exim4.conf.conf [13:21:07] In general, I think the security groups are "outside" the instances (and in fact only between projects and/or the Internet).
So from the instance to the instance or another instance in the project shouldn't depend on any security group settings. [13:21:08] mutante: http://pastebin.com/ndCJNfY3 [13:21:38] scfc_de: so the problem should be with the iptables configs in the instance ? [13:22:32] tonythomas: Didn't you just say to mutante that the process isn't running? So the problem would be that no exim is started? [13:23:04] tonythomas: eh, any errors in exim log when you restart the service? [13:23:12] like "can't bind to port" or something [13:23:44] mutante: looks like something like 2014-04-28 12:55:11 exim 4.76 daemon started: pid=28645, -q10m, not listening for SMTP [13:23:46] here [13:24:13] "not listening" there you go [13:24:30] now just gotta figure out why that is [13:24:49] Who's the ops guy for the WMF mail server? [13:24:53] did you let puppet install exim .. or? [13:24:54] mutante: some configs in puppet are causing the trouble I think [13:25:00] mutante: of course [13:25:02] :( [13:25:31] so what are you trying to copy? [13:25:43] the setup of mchenry? i dont even know what the goal is [13:26:25] 26 # options for daemon listening on port 25 [13:26:26] 27 SMTPLISTENEROPTIONS='' [13:26:42] there, that appears in templates/exim/exim4.default.erb [13:27:01] mutante: oh ! let me edit that up [13:27:01] tonythomas: which role name? [13:27:13] mutante: role name ? [13:27:24] you said you had puppet install it [13:27:34] so you picked a class or role [13:27:39] to apply to an instance [13:27:47] which one did you pick [13:27:52] mutante: oh ! I did the ediawiki-install::labs [13:27:55] *mediawiki-install::labs [13:28:10] eh, ok, i don't see how that is related to the actual mailserver setup [13:28:26] mutante: I have the entire exim4 configs puppetised here though [13:28:26] then that's a different story [13:28:44] let me try the earlier one [13:28:56] tonythomas: Just to make sure: You're running a self-hosted puppetmaster? [13:29:01] mediawiki-install tries to setup an exim?
[13:29:03] scfc_de: yeah ! [13:29:07] is that right? [13:29:09] Nemo_bis: Regarding osmit-cruncher -- the project would be for largely internal use, right? Not linked to from the IT wiki? [13:29:27] mutante: of course. it tries to setup one, relaying all mails to the mchenry server [13:29:37] mutante: I think that's the basic exim setup that all hosts have. [13:29:40] scfc_de: I had made the one self puppetmaster [13:30:42] during gerrit-review should we commit all the files(including fonts/css/image files) or just the core code parts.. [for my gsoc project.] [13:30:42] scfc_de: let's say it is, does that work on other labs hosts then? [13:31:03] well, i think it does, or i wouldnt get all the labs cron spam [13:31:47] rohit-dua: That depends whether they form part of the project. Who's your mentor? [13:31:57] rohit-dua: all of it should go somewhere.. the question is just where [13:31:59] mutante: after editing the template/exim4/file I need to run puppetd -tv right ? [13:32:13] scfc_de: andrew zanni, tpt, yann [13:32:33] tonythomas: if you are on puppetmaster::self and if you edited that file in the local puppet repo, then yes [13:32:44] mutante: of course I'm [13:33:09] still more configs in the exim.conf - driver = smtp [13:33:09] remote_smtp: [13:33:09] driver = smtp [13:33:09] hosts_avoid_tls = <; 0.0.0.0/0 ; 0::0/0 [13:33:16] can cause problems right / [13:33:22] rohit-dua: Have you asked them? It's much easier for you because then you don't get bombarded with "Do it this way! No this way! No that way!", but have some guidance you can build upon (at least until the end of GSoC :-)). [13:33:35] Hm.. which server should I connect to to read meta_p? I guess it's present on all hosts, but is there a particular one that people conventionally use as the default? [13:34:27] scdfc_de: yes thank you. do you know if andrew zanni or yann come on irc? 
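Stepping back to the port-25 debugging above: the telnet check can be reproduced from code, which is handy when poking at several instances (host and port here are whatever you are debugging; this is a generic sketch, not anything from the puppet setup):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds, else False."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # ConnectionRefusedError, timeouts, unresolvable host
        return False
```

In this thread `port_open('localhost', 25)` returning False, together with the empty `netstat -tulpen` output, points at the daemon ("not listening for SMTP" in the exim log) rather than at the security groups, which only sit between projects and the outside.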
[13:34:46] Nemo_bis: I think you are on the OSM bugs, but the effect of osmit will not be that we have /two/ "OSM toolservers" in Labs? :-) [13:35:27] tonythomas: what you can do is this: take one test-box without puppet, run manually dpkg-reconfigure exim4-config , follow the "wizard" for "satellite system", take the config it generates, compare that to what the puppet class installs.. look at the diff [13:36:10] valhallasw: setup python 3.4.0 :D [13:36:11] valhallasw: with pyenv [13:36:15] mutante: I have one running locally though. [13:36:22] tonythomas: but if this is really the default exim class used on everything.. then i'd be surprised why this one should be different [13:36:27] rohit-dua: No, I don't know that. You should ask them directly (GSoC has a "getting to know" time, hasn't it?). [13:36:46] scfc_de: yes. i'll mail them [13:37:00] mutante: yeah. I will try to do the dpkg-reconfigure exim4-config again [13:39:43] Hm.. tools-db seems like a nice default, but it doesn't have meta_p. [13:41:02] andrewbogott: Is there a convention for what the go-to dbhost is for reading meta_p? I'm thinking s7.labsdb (the default db-slice). Ideal would be tools-db I think, except that one doesn't have meta_p [13:41:24] Krinkle: I have no idea, sorry :( [13:41:27] Krinkle: You can use any s[1-7]. I would just go for s1. [13:41:45] (Because of it being first and me being lazy :-).) [13:44:05] YuviPanda: cool! [13:44:20] YuviPanda: iirc I couldn't get pip to work in the pyvenv [13:44:23] need to modify start.bash now [13:44:41] valhallasw: ah, hmm. let me see how it goes [13:44:46] hashbang in pywikibugs.py [13:44:48] valhallasw: just moved the code into a module. [13:45:03] kk [14:20:10] Coren: scfc_de: Got it! It /was/ the proxy. When I set the X-Accel headers (correctly) it works. [14:22:37] But why would it only affect browsers and not inspection tools?
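The proxy fix mentioned just above amounts to sending nginx's `X-Accel-Buffering` header so the reverse proxy does not buffer the event stream (buffering is invisible to one-shot tools like nc or wget but stalls a browser's EventSource). A hypothetical sketch of such an endpoint as a plain WSGI app, not cluestuff's actual handler:

```python
def stream_app(environ, start_response):
    """Server-sent-events endpoint that asks the reverse proxy not to buffer."""
    headers = [
        ("Content-Type", "text/event-stream"),
        ("Cache-Control", "no-cache"),
        # nginx-specific: disable proxy buffering for this response.
        ("X-Accel-Buffering", "no"),
    ]
    start_response("200 OK", headers)
    # One illustrative SSE event, shaped like the stream in the log above:
    return [b"data: {}\n\nevent: mainspace\n\n"]
```

With buffering on, the proxy holds small writes until its buffer fills, so a browser waiting on live events sees nothing and eventually the gateway times out, exactly the "works in curl, not in the browser" symptom.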
[14:23:19] a930913: The interaction between browsers, compression, keepalives, and caching is sometimes difficult to analyze. [14:24:07] Heisenbug! [14:27:28] Coren: Is there any reason why packet inspection and manipulation is carried out by the proxy? [14:28:03] Because that's how a reverse proxy /works/ :-) Also, because we want/need to strip some headers. [14:28:39] Coren, are you personally running CorenSearchBot? [14:28:49] Coren: Reading the host header you mean? [14:29:06] Beetstra: Unsurprisingly. Why? [14:29:17] It is MASSIVELY hitting the spam blacklist [14:29:23] a930913: To strip XFF [14:29:44] About 250 hits in a couple of minutes [14:29:59] Beetstra: That wouldn't be all that surprising, most of the stuff it edits are copyvios and full of spammy [14:30:08] wait, what? It can't possibly edit that fast. [14:30:27] I would suggest that if you hit the spam-blacklist the first time, you strip the 'http://' from the url and try again [14:31:14] faster? :-D [14:31:39] 15:46-15:51 - about 350-360 hits [14:32:36] https://en.wikipedia.org/w/index.php?title=Special:Log/spamblacklist&limit=400&type=spamblacklist&user= <- screen of 400 hits, you have by far most of them [14:33:00] Wait, what timezone is this? [14:33:01] That list is already quite useless - this does not help [14:33:24] GMT+3, no DST [14:33:59] * Coren is hella confused. [14:34:22] Oh, ah, all of those are trying to do a /single/ edit! [14:34:32] over and over and over and .. over [14:34:57] Yeah, I don't think I have any code that could distinguish a blacklist block from a regular e/c [14:35:02] Don't worry, you are bot operator 4 who oversees this and hammers the list [14:35:48] In the case of CSBot, the solution needs must be different; I need to have it immune to spam blacklisting because it must not alter the actual article text. [14:36:22] But I don't think that can reasonably be done, can it. [14:36:25] * Coren grumbles.
[14:36:40] You can't - spam-blacklist can't be excepted (except if you would whitelist your link, which needs admin rights) [14:36:56] Can you no-wiki the links in the text? [14:37:02] * Beetstra has to go [14:37:29] I probably could, but I'd have to do it unconditionally, which would be destructive. [14:39:57] Well .. the alternative is to just skip it .. now no-one knows [14:41:00] * Coren ponders. [14:41:15] And, for that matter, I might be *adding* a blacklisted link. [14:41:51] I don't get to pick what the article is a copy of, and both the reference and the duplication detector link would quote the source of the copy. [14:42:14] * Coren ponders. [14:46:40] Also in es. [14:51:53] Annoyingly nontrivial. As it is, the only option I have is to not report copyvio of blacklisted sites. [14:53:55] Coren: s/https?:\/\///g ? [14:54:12] Coren: or dump it to some file on tools, and link to that? [14:54:26] valhallasw: Not an option; that will break most tools. [14:55:30] And copyright patrol is already a significant drain of time and effort; if I start requiring patrollers to copypaste URLs they're going to abandon ship. [14:56:39] just for the ones that are blacklisted, obviously [15:02:44] How frequently does the labs puppetmaster update? [15:03:32] Reedy: every 30 mins I believe [15:04:56] * Reedy tries to update deployment-salt [15:12:14] Reedy: Did you figure out how to update deployment-salt? [15:12:53] git pull? [15:12:53] :D [15:13:30] I think I did get that setup to work correctly there. I do `git fetch; git rebase origin/production` [15:13:52] Docs at https://wikitech.wikimedia.org/wiki/Nova_Resource:Deployment-prep/How_code_is_updated#Puppet_and_Salt [15:14:17] it looked like it autorebased to bring the local patches on top [15:37:59] a930913: Could you add what you needed to get the streaming working to /Help? [15:43:20] !log deployment-prep Created empty /srv/scap-stage-dir/wmf-config/mwblocker.log file to stop missing file warnings in beta.
[15:43:22] Logged the message, Master [16:12:12] !log deployment-prep upgrading highlighter plugin in Elasticsearch [16:12:14] Logged the message, Master [16:20:46] scfc_de: /Help is? [16:28:56] a930913: https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help [16:47:44] scfc_de: No wiki account. Not SUL? [16:48:44] a930913: wikitech has a separate set of credentials, yes. [16:49:07] scfc_de: Meatsock? [16:49:26] a930913: ? [16:51:18] scfc_de: I'll write it, you commit it. [16:58:58] a930913: Way too much work :-). [16:59:27] a930913: Your wikitech account is the same as your shell account except title-cased. [16:59:48] Oh, /that/ one. [18:00:21] !channels [18:00:21] | #wikimedia-labs-nagios #wikimedia-labs-offtopic #wikimedia-labs-requests [18:32:29] hi everybody [18:32:37] any ops in chat? [18:35:07] mik82: we're all in a meeting, but I can catch up shortly. What's up? [18:35:55] no worries, just one piece of info I wasn't able to find on the mediawiki website, [18:36:09] I will leave you a pvt msg if you dont mind [18:36:22] you reply when you can [18:37:55] well.. there is no private message option on webchat [18:38:56] here is probably fine :) [18:39:28] the question is: is it possible to include a paypal donation button/link on the extension page once created on mediawiki? [18:39:44] I mean is it against some rule/policy? [18:40:13] It may be against policy, since it would entail gathering identifiable user data. [18:40:36] If it's important I'd advise sending an email to labs-l for discussion. [18:41:40] ok, if it is against policy that's ok, [18:41:53] There might also be issues about what the money is used for and how that relates to the generally centralized WMF fundraising scheme. I don't know much about that though. [18:42:20] I will include the donate link on the github repo [18:42:21] thanks.
[18:42:45] and thanks for your work guys, [18:43:25] That seems better (and at least easier for me :) ) thanks [19:13:47] We should all have a donation pot linked from our tools that gets distributed to us all :p [19:21:28] a930913: I want my own money! :-p [19:23:08] valhallasw: Yeah but a central pot looks more official and does away with the privacy concerns and whatnot ;) [19:27:04] a930913: gah, privacy. Money is much more important! [/facebook mode] [19:42:15] Hm... getting gateway timeout in labs for a new web proxy [19:42:15] http://integration-slave1002.wmflabs.org/krinkle-jsduck [19:42:23] a new web proxy should work right away, right? [19:42:27] Or is there an expected delay? [19:42:37] Locally via 'curl' it works, the server is responding [19:42:52] and security group has port 80 in it [19:43:11] Coren: andrewbogott: [19:43:35] ( hashar: ) [19:43:49] Krinkle: Maybe the security rules are bad for the backend server? [19:43:59] * andrewbogott looks [19:44:07] um… what project is this? [19:44:08] I used the same rules as for cvn-apache, which does work. [19:44:14] Krinkle: they are not meant to host content [19:44:24] hashar: I know that, I'm testing something. will disable later. [19:44:50] andrewbogott: project 'integration' [19:44:51] Krinkle: let me find out the rules :] [19:45:02] I know them already, 80 0.0.0.0/0 is in there [19:45:11] yeah blocked by ferm :] [19:45:15] was already in there, I didn't have to add it. [19:45:20] ? [19:45:41] what was already there ? [19:45:43] that rule enables it, not disable. [19:45:53] The default is not to allow any port. [19:46:13] the default policy is to drop any connection [19:46:29] Various other projects that don't allow port 80 by default, have a separate "web" security group with just the '80 80 tcp 0.0.0.0/0' rule in it.
[19:46:33] but we have the default ferm rules applied (which allow ssh from bastion, monitoring etc) [19:46:35] So I know that that rule is what allows it [19:46:40] ahh [19:46:43] and the integration default group has that rule in it [19:46:54] I didn't add it, it's odd for it to be in default, but that's good [19:46:57] but the instances have their own iptables [19:47:01] set up by ferm via puppet [19:47:06] because it would have to go in there, since we can't add new groups to existing instances (why not?, oh well..) [19:47:11] ferm? [19:47:14] so the project let port 80 flow to the instance [19:47:20] What is that and why? [19:47:23] but the instance reject it because it has its own firewall [19:47:33] ferm is a script that let you describe iptables rules [19:47:48] Is that the default for labs projects to have this "ferm" ? [19:48:09] not at all [19:48:15] I never heard of it. I've set up web servers in a dozen different labs projects. I always just ensure this rule is in the security group and then stuff works. [19:48:27] but the slaves in labs are more or less reusing the manifests from production [19:48:36] OK [19:48:42] and I needed to do some iptables magic rewriting to let the jenkins slaves access the beta cluster [19:48:51] can't be done via openstack interface (which I find very confusing) [19:49:01] so went with ferm more or less without really expecting it :] [19:49:07] what did you want to do? [19:49:12] I just need to view a directory on there and see how it looks and behaves in the browser. [19:49:29] I know a dozen different ways to do it, this seemed the most straightforward. [19:49:34] well you can copy it to your computer via scp :] [19:49:40] Anyway, ignore this. I'll find another way. Don't have time for this mess.
[19:49:52] Yes, but I'll need to adjust the parameters and regenerate a dozen times probably [19:50:00] syncing is inefficient, and changes the paths [19:50:07] which means I'll need to find/replace everything again [19:50:21] what is your aim ? [19:50:43] To test jsduck works properly... [19:51:45] I posted on the ops list last week to figure out how we could copy doc from the labs to gallium doc.wikimedia.org [19:51:54] since I would like the doc generation jobs to be runnable on labs [19:52:03] That's not going to happen soon. And perhaps shouldn't have to. [19:52:17] anyway, this isn't related to that [19:52:22] bryan proposed to add an instance that would be receiving doc and then have gallium rsync from it [19:52:47] which also means the jobs running on check could well push their draft doc to there and be browseable. That will help review doc changes. [19:55:03] maybe, I don't think that preview is very valuable though. To test code in general, it is very normal to check out a change locally and run mediawiki. It's our business. Doing that for doc changes as well isn't much effort. [19:55:09] Preview is nice, but also complicated [19:55:15] Security of HTML/JS execution etc [19:55:49] We can securely run the test server side, but we wouldn't want to give urls out [20:00:44] hashar: (ended up tar.gz'ing /docs/js/, and scp'ing to localhost) [20:01:20] Krinkle: you can also scp -C -r [20:01:30] that compresses on the server side (-C) and copies recursively (-r) [20:02:47] k [20:10:38] hashar: we have a replicated db slave for beta now! [20:11:35] !log local-wikibugs restarted wikibugs, seems to have died [20:11:36] Logged the message, Master [20:17:17] Reedy: whoauuuuu [20:17:31] Reedy: how did you get the slave setup? :] [20:18:08] YuviPanda: anything in the log file? [20:18:27] valhallasw: log seems fine. Attempting to revert to a known good state [20:18:32] YuviPanda: OH. did you install the correct irc3 version (i.e. from github?)
hashar: mostly following springle's instructions [20:18:49] valhallasw: I didn't switch fully to pyenv. I just set it up, didn't do anything [20:19:03] ah, ok [20:19:19] Traceback (most recent call last): [20:19:19] File "/data/project/wikibugs/src/pywikibugs/pywikibugs.py", line 16, in [20:19:22] from config import irc_password [20:19:25] ImportError: No module named 'config' [20:19:30] wth. [20:20:25] well, let's see how long before it dies again =p [20:20:37] valhallasw: that's old, that was me testing the module code. reverted it fairly quickly? [20:20:44] ah ok [20:20:50] yeah, I was just reading through .err [20:21:26] valhallasw: still down :( [20:21:39] YuviPanda: hmm [20:21:47] and I've to go to sleep now. [20:22:11] yeah [20:22:12] sorry couldn't be more useful! [20:22:15] ./toredis.py is dead [20:22:18] oh [20:22:24] apparently the redis module is gone?! [20:22:29] what [20:22:32] 'what the flying fuck' [20:22:39] oh shit [20:22:40] ImportError: No module named 'redis' [20:22:42] that's pyenv [20:22:56] why? ./toredis runs from the system python interpreter [20:23:05] or have you changed paths? [20:23:12] ohhhh. [20:23:32] valhallasw: is back now [20:23:36] valhallasw: I just did a pip install redis [20:23:38] I AM AN IDIOT [20:23:51] YuviPanda: oh, so ./toredis is py3 compatible. Good to know :D [20:23:55] 3Wikimedia Labs / 3deployment-prep (beta): implement master-slave DB for beta labs - 10https://bugzilla.wikimedia.org/60058#c4 (10Sam Reed (reedy)) 5NEW>3RES/FIX We has a slave! [20:24:01] valhallasw: yeah yeah, I checked when I did my pep8 stuff :D [20:24:21] cool [20:24:21] valhallasw: so does whatever calls toredis call .profile before? [20:24:35] jmail. I guess, it's SGE so no-one really knows =p [20:24:40] hehe [20:24:42] apparently it does [20:37:56] Cyberpower678: Hi CP, things are running well ;) [20:38:49] putting /pages into new cloths right now [20:40:15] :-) [20:40:18] dr0ptp4kt: shall I rename you right now?
It just requires you to stand clear of production for an hour or so. [20:40:33] hedonil, new cloths? [20:40:38] andrewbogott: yes, please [20:41:09] Cyberpower678: from old code to "Peachy-style" with Smarty [20:41:21] Cyberpower678: like the others [20:41:23] Ah [20:41:56] Cyberpower678: this is an insane routine there... [20:42:27] hedonil, what is? [20:54:00] !ping [20:54:00] !pong [20:59:22] * Beetstra hides from Cyberpower678 .. [21:00:09] * Cyberpower678 finds Beetstra and blows an air horn from behind. [21:02:06] Beetstra, the bot will have to wait until I'm done with finals though. [21:02:27] even longer [21:02:31] * Beetstra sighs [21:02:42] :-p [21:28:10] 3Wikimedia Labs / 3deployment-prep (beta): implement master-slave DB for beta labs - 10https://bugzilla.wikimedia.org/60058#c5 (10Antoine "hashar" Musso) Thank you Sam! [22:22:39] 3Tool Labs tools / 3[other]: Intuition: message rendering should accept necessary html markup - 10https://bugzilla.wikimedia.org/62855#c4 (10Tim Landscheidt) 5UNC>3RES/INV (In reply to Krinkle from comment #3) > Intuition is not maintained in Gerrit or BugZilla. > [...] Okay, in this case let's close t...
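The vanished-redis scare earlier in the log is the usual multiple-interpreters trap: once pyenv shims are on PATH, `pip install` and the running script can silently target different Pythons. A small diagnostic that shows which interpreter is executing and where a module would load from (module names here are illustrative):

```python
import importlib.util
import sys

def where(module_name):
    """Return (interpreter path, module origin); origin is None if absent."""
    spec = importlib.util.find_spec(module_name)
    return sys.executable, (spec.origin if spec else None)

# "json" is stdlib and present in every interpreter, so origin is a real path:
exe, origin = where("json")
```

Running this from both the pyenv python and the system python (and comparing `exe`) would have shown immediately that `./toredis.py` and the `pip install redis` were not talking to the same interpreter.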