[00:00:58] yes. according to https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help#Contact
[00:02:20] I am struggling to figure out why my (newly recreated) crontab is not working. I don't seem to get any error messages left in files in the tool's directory, but the tool also doesn't seem to run
[00:07:10] is it really the case that the server that jsub causes jobs to run on is incapable of submitting qsub jobs?
[00:08:16] cbm-wikipedia: It is configured to be, yes.
[00:08:31] If your cron job is small, you can use "jlocal".
[00:11:27] 'man jlocal' doesn't work
[00:12:36] but I will try it ...
[00:13:49] cbm-wikipedia: https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help#Creating_a_crontab
[00:14:04] Basically, it just executes its argument.
[00:18:05] jlocal does work. It would be good to add it to the default crontab, which says that jsub and jstart are the only legal commands
[00:23:02] jlocal should only be used exceptionally and by users who have understood its implications. So I don't think it should be displayed as part of the "normal" toolset.
[00:24:19] Just to double-check then: I have shell scripts that do nothing but invoke qsub with appropriate arguments. I historically ran these from cron. Is it now OK to run them from cron via jlocal?
[00:30:27] cbm-wikipedia: That's how I understand it to be.
[00:30:29] Without having looked at them and trusting your expertise: Yes, if they just submit jobs, maybe on the condition of some very lightweight tests, that's generally allowed.
[00:33:38] OK, that works then. If things break, I can always change the setup again.
[00:34:06] !log deployment-prep Updated EventLogging to I89819bd
[00:34:08] Logged the message, Master
[01:48:34] I bumped https://bugzilla.wikimedia.org/show_bug.cgi?id=57617
[01:48:37] Wikimedia Labs / tools: watchlist table not available on labs - https://bugzilla.wikimedia.org/57617#c7 (MZMcBride) (In reply to Luis Villa (WMF Legal) from comment #6) > Am I correct in understanding that the sanitization here is (1) remove > wl_user and (2) the sanitized user_touch? (But that as with...
[09:41:19] (PS1) Alexandros Kosiaris: Add passwords::mysql::dump [labs/private] - https://gerrit.wikimedia.org/r/133687
[09:42:11] (CR) Alexandros Kosiaris: [C: 2] Add passwords::mysql::dump [labs/private] - https://gerrit.wikimedia.org/r/133687 (owner: Alexandros Kosiaris)
[09:42:17] (CR) Alexandros Kosiaris: [V: 2] Add passwords::mysql::dump [labs/private] - https://gerrit.wikimedia.org/r/133687 (owner: Alexandros Kosiaris)
[11:39:37] Wikimedia Labs / tools: watchlist table not available on labs - https://bugzilla.wikimedia.org/57617#c8 (Tim Landscheidt) (In reply to MZMcBride from comment #7) > [...] > If we split out "ts_wl_user_touched_cropped" to a separate ticket, there > shouldn't be any issue with exposing only wl_namespace...
[16:34:01] Hello
[16:35:22] when I restart my webservice I get: "a default document-root has to be set". Any ideas?
[16:38:52] Guest703: is there anything in your .lighttpd.conf? the error suggests something went wrong merging your config file
[16:43:49] the problem was an empty line in .lighttpd.conf, thank you.
[16:55:35] after some restarts I get: "/usr/local/bin/lighttpd-starter: line 45: cannot create temp file for here-document: No space left on device". Is there a problem on one webserver?
[16:56:13] scfc_de: ^
[16:56:22] are we running out of space again?
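Going back to the jsub/jlocal crontab discussion at the top of this log, here is a minimal sketch of the pattern that was agreed on. The tool name "mytool", the script paths and the schedule are made up for illustration; only the jsub/jlocal usage itself and the general Tools layout come from the conversation and the linked help page, and the jsub flags are the ones documented on wikitech at the time.

    # m h dom mon dow  command
    # Submit a script to the grid directly (the documented crontab pattern);
    # full paths to jsub/jlocal may be needed depending on cron's PATH:
    0 3 * * *  jsub -once -N mytool-nightly /data/project/mytool/bin/nightly.sh
    # A wrapper that itself only calls qsub, run on the cron host via jlocal,
    # as discussed above (jlocal simply executes its argument):
    15 3 * * * jlocal /data/project/mytool/bin/submit-jobs.sh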
[16:59:24] YuviPanda: One moment, please.
[16:59:33] okay
[17:00:20] On tools-webgrid-02, / is full, but /var has enough space.
[17:00:47] /tmp is full.
[17:01:57] !log tools tools-webgrid-02: rm -f /tmp/core (tools.misc2svg, May 13 06:10, 3861106688)
[17:01:59] Logged the message, Master
[17:03:43] Is there now enough space again?
[17:04:22] Guest703: Yes, it should work now.
[17:04:31] fine, thank you.
[17:04:48] Ah! /var/run is a symlink to /run, that's why the full / affected it.
[17:05:04] Guest703: You're mars?
[17:06:30] yes
[17:09:12] Then the core dump was produced by something you did (perhaps the lighttpd misconfig?). I deleted it without looking into it, but if the error occurs again and I'm not around, please ask an admin to find the cause of it.
[17:11:05] okay
[17:12:47] my .lighttpd.conf contains only this:
[17:12:56] server.modules += ("mod_status")
[17:13:05] status.status-url = "/misc2svg/server-status"
[17:13:15] status.statistics-url = "/misc2svg/server-statistics"
[17:13:46] and my .user.ini:
[17:13:57] upload_max_filesize = 6M
[17:14:03] that's all.
[17:17:12] Damianz: Can you sort out CBIII? There's 40KB to be archived on my talk :p
[17:18:23] FDMS: o/
[17:20:40] Well, I would like to "request" something (a tool) … :)
[17:21:32] And so far I have found no on-wiki place to do so.
[17:21:48] Yuvipanda: Do you have any idea why my website (misc2svg) wrote a core dump?
[17:22:03] Guest703: unsure. might just have been a random lighty crash that dumped core
[17:22:13] Guest703: I'd say it's ok, and see if it happens again
[17:22:16] FDMS: I'm listening.
[17:22:28] okay, thank you.
[17:22:34] Or at least logging, and reading occasionally :p
[17:23:47] a930913: Wonderful. I'd like to transfer Commons-compatible files from local wikis to Commons, but only the useful ones in my field of interest. Therefore I would like to have *something* to help me find files that are used in an article of a given Wikipedia category.
[17:24:08] (and do not exist on Commons yet)
[17:25:35] Yuvipanda: misc2svg ran fine for about 4 years with Apache and Ubuntu 8.04, hmm.... Time will tell if it recurs.
[17:25:48] Guest703: :) lighty, 12.04... :)
[17:26:27] FDMS: Hmm, I'm not too knowledgeable on files. That sounds like it would involve checking each article in the category.
[17:26:39] Yuvipanda: Yes, it is another combination.
[17:27:19] :( So do you think there is an on-wiki place where the "request" could stay open longer?
[17:28:16] FDMS: Well, Vada might work.
[17:28:36] FDMS: you can just create a tool online.
[17:28:51] FDMS: http://tools.wmflabs.org/ has a 'create a new tool' link
[17:29:25] FDMS: you need to create a labs account, add a public key, and request access to tools first. I can grant access, so that should be speedy :)
[17:29:25] a930913: Do you have a link? YuviPanda: I would if I was knowledgeable enough on these topics …
[17:29:26] aaah :)
[17:29:45] FDMS: [[en:WP:Vada]]
[17:31:03] a930913: Thank you, that sounds very interesting! YuviPanda: Thanks for the offer :) !
[17:32:55] FDMS: It can list the articles in a category, and then an app could go through each article, scanning for files.
[17:33:14] FDMS: How does one tell if a file exists on Commons too?
[17:35:13] I will try to find out (currently reading some documentation, then installing)
[17:35:40] You can also just query those in SQL; you don't have to check each article separately.
[17:36:28] FDMS: So you're looking for all articles on enwp in category X that have local images?
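Earlier in this hour's exchange, a full / on tools-webgrid-02 (a 3.8 GB core file in /tmp) broke webservice restarts with "No space left on device". The following is only a rough sketch of the kind of check an admin runs in that situation, on the affected host; the size threshold is arbitrary, and whether anything gets deleted is a case-by-case call as in the log above.

    # How full are the relevant filesystems on the web node?
    df -h / /tmp /var
    # List the largest items directly under /tmp, biggest first, staying on
    # this filesystem:
    du -xsh /tmp/* 2>/dev/null | sort -hr | head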
[17:37:24] scfc_de: Exactly (and other Wikipedias)
[17:38:00] FDMS: Let me see if I can assemble a query.
[17:38:34] great
[17:43:01] FDMS: (If it is possible, scfc_de's way will be better.)
[17:45:37] FDMS: "SELECT img_name, page_namespace, page_title FROM image JOIN imagelinks ON img_name = il_to JOIN page ON il_from = page_id JOIN categorylinks ON page_id = cl_from WHERE cl_to = '1815_in_Austria';" seems to work. What's the category you're looking for?
[17:46:20] no particular one …
[17:46:46] And if one excludes https://en.wikipedia.org/wiki/Category:Wikipedia_files_on_Wikimedia_Commons_for_which_a_local_copy_has_been_requested_to_be_kept from that, it should be even more actionable.
[17:49:04] Thank you for your help; I will look into it based on your instructions.
[17:50:08] FDMS: I would make this into a tool for you, but I am about to go off for a day. When are you next around, after tomorrow?
[17:51:44] a930913: "SELECT img_name, page_namespace, page_title FROM image JOIN imagelinks ON img_name = il_to JOIN page ON il_from = page_id JOIN categorylinks ON page_id = cl_from WHERE cl_to = '1815_in_Austria' AND img_name NOT IN (SELECT page_title FROM page JOIN categorylinks ON page_id = cl_from AND cl_to = 'Wikipedia_files_on_Wikimedia_Commons_for_which_a_local_copy_has_been_requested_to_be_kept');" should be what's needed.
[17:51:54] a930913: That would be really great, what about … email?
[17:58:48] FDMS: Erm, a930913@tools.wmflabs.org should find its way to me.
[18:00:52] So you will receive an email from me on Sunday. Thank you again.
[18:03:18] FDMS: No problem.
[18:46:22] hmm, getting a 404 on one tool account while the other still works?
[18:46:27] anything going on?
[18:54:03] akoopal: ??
[19:02:31] Betacommand: https://tools.wmflabs.org/erwin85/ gives me a 404
[19:05:58] akoopal: is that the tool name or username?
[19:06:09] toolname
[19:10:20] restarting the webservice didn't help
[19:10:50] That's apparently occurring for all (http://tools.wmflabs.org/).
[19:11:06] Tools, that is.
[19:11:17] All tools.
[19:11:29] permissions look ok
[19:11:40] https://tools.wmflabs.org/locator/coordinates.php
[19:11:42] nope
[19:12:07] YuviPanda: Around before I just restart Redis for good measure? :-)
[19:12:24] hmm, although that is not behaving as it should
[19:12:35] worked half an hour ago
[19:15:00] (PS1) Incola: Add .gitignore [labs/tools/maintgraph] - https://gerrit.wikimedia.org/r/133770
[19:16:42] Okay, I'll take that back. Perhaps it's only erwin85.
[19:19:36] http://gyazo.com/38cf84006ae2114b8b17694cb0331a3a
[19:19:55] wrong link
[19:20:20] http://gyazo.com/8204aa30bc9101fa7d777618a16c9089 <- what does this mean?
[19:21:26] SPF|Cloud: it means you have not configured PuTTY to use private key-based authentication
[19:21:47] oh okay
[19:22:33] still connecting to https://tools.wmflabs.org/erwin85/ for two minutes or so now; pretty long if there should be/was a 404
[19:24:52] The webserver itself at tools-webgrid-02:4062 seems to be slow.
[19:25:42] O.o on ganglia: There was an error collecting ganglia data (127.0.0.1:8654): fsockopen error: Connection refused
[19:26:25] se4598: I observed that as well, should have mentioned it; in the end the 404 is there
[19:27:09] scfc_de: do you know what's up with ganglia?
[19:28:51] se4598: It's not fully set up yet and IIRC doesn't have enough resources at the moment. hashar, who's handling it, talked about using separate instances for the aggregator and the web interface.
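For readability, here is the second query scfc_de posted in the file-transfer discussion above, reformatted and wrapped the way such a query would typically have been run from a Tools account at the time. The replica.my.cnf path and the enwiki.labsdb/enwiki_p naming are the usual conventions of that era rather than something stated in the log, and the category names are just the examples used above.

    mysql --defaults-file="$HOME/replica.my.cnf" -h enwiki.labsdb enwiki_p <<'SQL'
    SELECT img_name, page_namespace, page_title
    FROM image
    JOIN imagelinks    ON img_name = il_to
    JOIN page          ON il_from = page_id
    JOIN categorylinks ON page_id = cl_from
    WHERE cl_to = '1815_in_Austria'
      AND img_name NOT IN (
        SELECT page_title
        FROM page
        JOIN categorylinks
          ON page_id = cl_from
         AND cl_to = 'Wikipedia_files_on_Wikimedia_Commons_for_which_a_local_copy_has_been_requested_to_be_kept'
      );
    SQL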
[19:29:50] Okay, gonna restart erwin85's webservice, because "curl -H 'Host: tools.wmflabs.org' http://localhost:4062/erwin85/" has now been running for quite a while without any result.
[19:30:11] so it had been shut down again? it semi-worked even if it was really slow, and I could see graphs
[19:31:16] se4598: I think it's just a function of time: It starts with not much data and after a few days (?) it runs out of memory or so. A reboot will start the cycle again.
[19:31:46] scfc_de: I already did a webservice restart myself
[19:32:15] scfc_de: yeah, on and off for the next few mins
[19:33:14] YuviPanda: Redis wasn't the culprit this time :-).
[19:34:58] akoopal: http://tools.wmflabs.org/erwin85/hotcat.js returns immediately, so the problem seems to lie with index.php.
[19:35:02] (CR) Incola: [C: 2 V: 2] Add .gitignore [labs/tools/maintgraph] - https://gerrit.wikimedia.org/r/133770 (owner: Incola)
[19:35:26] scfc_de: hmm
[19:36:15] akoopal: The includes look like they try to connect to Toolserver databases. Has this page ever worked before?
[19:36:28] yes, it has
[19:36:47] Oh, I should have scrolled down.
[19:37:32] And "php index.php" returns immediately.
[19:39:12] *argl* There are old php-cgis on tools-webgrid-02. Why didn't my script kill them?
[19:39:39] akoopal: Okay, try again, please.
[19:39:56] much better
[19:40:08] what was the culprit?
[19:40:59] didn't the "webservice stop" kill all processes?
[19:41:47] https://bugzilla.wikimedia.org/show_bug.cgi?id=61102 / https://bugzilla.wikimedia.org/show_bug.cgi?id=63878
[19:42:45] scfc_de: oh, 503s again?
[19:42:47] Basically, (at least in the past) "webservice stop" left php-cgi processes lying around. lighttpd couldn't connect to them, but on the other hand didn't spawn any new ones because they were already there.
[19:43:32] ahh
[19:43:37] YuviPanda: No? It was just an error on one tool that I initially misdiagnosed as being an error on all tools. Everything's fine again.
[19:45:46] btw, the remark about locator not working as it should: an http/https mix in the JavaScript caused the scripts not to load; that is fixed now
[19:46:26] I thought I fixed that before, but maybe only on the Toolserver and not on Labs or something
[19:48:59] Ha! Running my script from a crontab is no longer possible, and as a continuous job it doesn't work either because HBA is only enabled between bastion and exec nodes, but not from exec node to webgrid. So I'll set it up on my box and confine it to my schedule.
[19:50:22] scfc_de: ah :)
[19:50:24] scfc_de: cool
[19:50:32] scfc_de: yeah, things should be better off after that connection pooling patch
[19:50:57] scfc_de: just run ssh-keygen and add that public key to your accepted key list on wikitech
[19:51:27] valhallasw: Way too much work compared to "crontab -e" :-).
[19:51:32] fine :-p
[20:50:39] !log integration Disabled beta-code-update-eqiad job to test a fix for TimedMediaHandler
[20:50:41] Logged the message, Master
[20:53:35] !log integration Enabled beta-code-update-eqiad job
[20:53:36] Logged the message, Master
[21:16:48] scfc_de: does this also have zombie php processes? http://tools.wmflabs.org/zoomviewer/index.php
[21:17:27] keeps connecting for me
[21:20:27] !log deployment-prep restarting elasticsearch in beta to update some plugins
[21:20:29] Logged the message, Master
[21:35:37] se4598: No, zoomviewer seems to use a custom FCGI handler. You need to contact dschwen directly if there are any problems with that.
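The erwin85 outage above came down to "webservice stop" leaving old php-cgi processes behind (bugs 61102 / 63878), which lighttpd could neither reach nor replace. The following is only a sketch of roughly the check scfc_de describes doing; note that the php-cgi processes live on the grid node actually running the webservice (tools-webgrid-02 in the case above), not on the login host, so this only works where you can see those processes.

    # Run as the tool account, on the host where the webservice runs.
    webservice stop
    sleep 5
    # Any php-cgi still owned by the tool at this point is a leftover that
    # lighttpd can no longer talk to:
    pgrep -u "$(whoami)" -l php-cgi
    # Ending the leftovers lets the next start spawn fresh FastCGI handlers:
    pkill -u "$(whoami)" php-cgi
    webservice start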
[22:03:40] Wikimedia Labs / deployment-prep (beta): Reenable $wgMWOAuthSecureTokenTransfer=true; - https://bugzilla.wikimedia.org/65421 (Chris Steipp) NEW p:Unprio s:normal a:None Once SSL is working in beta, we should reset all of the consumer secrets and require secure token transfer. The spec requi...
[22:06:22] Wikimedia Labs / deployment-prep (beta): Get OAuth working in beta - https://bugzilla.wikimedia.org/59141#c9 (Chris Steipp) PAT>RES/FIX Reminder to reenable secure token transfer is bug 65421. In the meantime, OAuth is working on labs (I'm testing phabricator against it).