[01:04:17] Wikimedia Labs / tools: Install "phantomjs" on tool labs - https://bugzilla.wikimedia.org/66928 (Kunal Mehta (Legoktm)) NEW p:Unprio s:normal a:Marc A. Pelletier The program 'phantomjs' is currently not installed. To run 'phantomjs' please ask your administrator to install the package 'pha...
[01:26:00] Wikimedia Labs / tools: Install "phantomjs" on tool labs - https://bugzilla.wikimedia.org/66928#c1 (Kunal Mehta (Legoktm)) p:Unprio>Low s:normal>enhanc I downloaded a pre-built binary for now.
[01:48:59] Wikimedia Labs / tools: Install "phantomjs" on tool labs - https://bugzilla.wikimedia.org/66928#c2 (Kunal Mehta (Legoktm)) p:Low>Normal And that binary now seems to be segfaulting. >.>
[09:49:45] Wikimedia Labs / Infrastructure: latest dump not available again - https://bugzilla.wikimedia.org/66362#c1 (Lydia Pintscher) There is a newer dump now on the link you provided. Is the issue that they are not available on tool labs?
[11:02:14] Wikimedia Labs / Infrastructure: latest dump not available again - https://bugzilla.wikimedia.org/66362#c2 (Gerard Meijssen) When the dumps become available on labs, the statistics are automagically created. Thanks, GerardM
[11:21:00] Wikimedia Labs / Infrastructure: latest dump not available again - https://bugzilla.wikimedia.org/66362#c3 (Tim Landscheidt) What did you expect to happen? What happened instead?
[11:21:16] Wikimedia Labs / tools: Install "phantomjs" on tool labs - https://bugzilla.wikimedia.org/66928 (Tim Landscheidt) a:Marc A. Pelletier>Yuvi Panda
[11:35:59] Wikimedia Labs / Infrastructure: latest dump not available again - https://bugzilla.wikimedia.org/66362#c4 (Gerard Meijssen) Check out https://tools.wmflabs.org/wikidata-todo/stats.php - it parses each dump when it becomes available. The data for two dumps is missing in the statistics and consequently th...
[12:05:29] Wikimedia Labs / Infrastructure: latest dump not available again - https://bugzilla.wikimedia.org/66362#c5 (Tim Landscheidt) You filed a bug against the Wikimedia Labs Infrastructure. What did you expect to happen there? What happened instead?
[12:24:42] is it possible to get read-only database access to betadewiki?
[12:36:49] gifti: You mean the beta cluster? You probably need to contact one of the admins at https://wikitech.wikimedia.org/wiki/Nova_Resource:Deployment-prep (hashar?) for that. And, Sunday, ...
[13:39:29] Wikimedia Labs / tools: lighttpd redirects URLs of directories without a trailing slash from https to http - https://bugzilla.wikimedia.org/64627#c10 (Tim Landscheidt) I haven't received any answer to http://serverfault.com/questions/602386/how-to-make-lighttpd-respect-x-forwarded-proto-when-constructin...
[15:53:11] Is there anything required to show Korean without the words being broken? I get tools.dynamicbot cron daemon email like this: [[분류:2014ë…„ 5ì›” 22ì ¼]]
[15:53:22] it should be [[분류:2014년 5월 22일]] :p
[15:56:02] Revi: tell your mail client to interpret mail as utf-8 instead of latin-1?
[15:56:25] hmm, I use Gmail :p
[15:56:47] and IIRC Gmail should have an option to interpret mail as UTF-8
[15:56:49] use the 'message text garbled' option
[15:57:08] alternatively, try to get cron to add a Content-Type header
[15:57:12] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=410057 seems to do just that
[15:57:51] so maybe just setting LANG=en_US.utf-8 will fix it, too.
[15:58:59] hmm. I told dynamicwork to do so :p
[15:59:11] (I am on my phone now, cannot do shell work)
[15:59:19] Revi: look into your console encoding
[16:03:00] Wikimedia Labs / tools: Install "phantomjs" on tool labs - https://bugzilla.wikimedia.org/66928 (Tim Landscheidt) PATC>RESO/FIX
[17:14:26] is Tim Landscheidt in this channel ?
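[Editor's sketch] The LANG suggestion from this thread could look like the following crontab fragment. This is a hedged illustration, not a tested fix: the script path is made up, and whether cron's mailer actually derives the Content-Type from the job's locale depends on the cron implementation (the Debian bug linked above discusses exactly that).

```crontab
# Hypothetical tool crontab; the script path below is invented for illustration.
# Setting LANG to a UTF-8 locale makes the job emit UTF-8 output, and a cron
# that honors the locale (per Debian bug #410057) may then label the mail
# with a matching Content-Type, so text like [[분류:...]] arrives unmangled.
LANG=en_US.UTF-8
*/10 * * * * /data/project/dynamicbot/bin/run-bot.sh
```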
[17:26:15] yes
[18:26:28] GerardM-: Tim Landscheidt = scfc_de
[18:26:50] Thanks valhallasw
[19:34:24] * YuviPanda|zzz waves at scfc_de
[19:34:32] and others
[19:34:39] andrewbogott_afk: thanks for the merge
[19:35:31] I want to run a webservice on labs that hooks into a python bot that is constantly running. From what I've read on wikitech, it seems lighttpd won't work because it spins up a new python interpreter for each request. Is there a common way of dealing with this?
[19:50:21] And notconfusing is gone. You can run something that does a bot task and in parallel offers an HTTP server, and connect that HTTP server to the nginx proxy, but all that is very fiddly at the moment, and whether HTTP and bot in parallel is easy in Python, I don't know either. Of course, you can also connect your bot and standard CGIs/web apps with shared files, DBs, etc.
[19:57:27] scfc_de: Hit and run is bad, but doing it in two channels at the same time is even worse ;-)
[20:03:02] forwarded him the answer
[20:04:28] scfc_de: I think the question fits the standard pattern of 'bot does work, needs to interact with web', which can be handled via redis or mysql
[20:04:40] * YuviPanda does a hit and run answer
[20:04:56] we should write up these things at some point, I guess
[20:08:32] Redis falls into "DBs" for me :-). multichill, what was the other channel?
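[Editor's sketch] The 'bot does work, needs to interact with web' pattern mentioned here can be sketched with a redis list used as a job queue: the web frontend pushes jobs, and the long-running bot blocks until one arrives. The queue name and job fields are invented for illustration, and the third-party `redis` package is assumed; this is one possible shape, not the channel's canonical recipe.

```python
# Minimal sketch: web frontend and bot communicate through a redis list.
# The key name and job format are hypothetical.
import json

QUEUE = "tools.mybot:jobs"  # made-up redis key


def encode_job(page, reason):
    """Serialize a job so the web frontend can push it onto the queue."""
    return json.dumps({"page": page, "reason": reason})


def decode_job(raw):
    """Deserialize a job the bot popped off the queue."""
    return json.loads(raw)


def submit_job(page, reason="manual"):
    """Web/CGI side: enqueue a job for the bot."""
    import redis  # third-party; imported lazily so the helpers above stay standalone
    redis.Redis().rpush(QUEUE, encode_job(page, reason))


def bot_loop():
    """Bot side: block until a job arrives, then process it."""
    import redis
    conn = redis.Redis()
    while True:
        _key, raw = conn.blpop(QUEUE)  # blocks; no polling loop needed
        job = decode_job(raw)
        print("processing", job["page"])
```

Because `blpop` blocks, the bot process sits idle between jobs instead of polling, which is why this works as the "jump the queue" trigger discussed later in the log.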
[20:14:16] scfc_de: #pywikibot
[20:14:18] I tried redis; it worked poorly, mostly because the redis python module is not designed for IPC. It's complicated to implement sending a command and receiving its result through redis. Daemonizing the python bot on an exec node and using a socket from the web interface to the bot works a lot better.
[20:16:18] https://github.com/phil-el/phetools/tree/master/dummy_robot and https://github.com/phil-el/phetools/tree/master/common front-end.php + tool_connect.py implement that
[20:21:43] the drawback of daemonizing: even if nobody uses the bot, it'll use a few hundred megs of virt mem on an exec node ;(
[21:29:53] I'm a bit confused about the labs shell name for the ssh config file. I set it to my wikitech user name, but that doesn't work.
[21:31:45] phe: why does the bot have to be running in the background?
[21:32:02] phe: the typical time to do an edit is larger than the time to spin up a worker process, I'd think
[21:33:31] valhallasw, not all bots do edits; I've got bots where the work takes 10 ms but the pywikibot start time is around 2000 ms, far longer
[21:33:32] scfc_de: sorry for leaving
[21:33:50] Nemo_bis: thanks for sending me the scrollback of scfc_de's comment
[21:35:40] scfc_de: my bot has two inputs: one is that it scans recent changes for changes in {{Cite journal}}, and then I also want a "jump the queue" feature where you can manually trigger the bot by entering it into a web form
[21:36:49] so I need a thread that monitors the CGI, but as I understand it, all calls to the CGI for python start a new thread. I don't know how to just constantly listen
[21:37:23] well, unless I spin up my own python webserver, like the "tornado" webserver
[21:38:11] notconfusing: then the obvious option is just a bot that waits for a redis message (pub/sub)
[21:38:42] valhallasw: can you point to an example of this?
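[Editor's sketch] phe's daemon-plus-socket approach can be illustrated with a Unix domain socket: the bot runs as a long-lived daemon listening on the socket, and the web frontend connects, sends a command, and reads the result. The socket path and one-line-in/one-line-out protocol here are invented; phetools' front-end.php and tool_connect.py use their own protocol.

```python
# Sketch of a daemonized bot answering commands over a Unix domain socket.
# The socket path and wire protocol are hypothetical.
import os
import socket
import tempfile
import threading

SOCK_PATH = os.path.join(tempfile.mkdtemp(), "bot.sock")  # made-up path


def bot_daemon(server_sock, handle):
    """Long-running bot: accept one connection at a time, answer commands."""
    while True:
        conn, _addr = server_sock.accept()
        with conn:
            command = conn.makefile("r").readline().strip()
            if command == "quit":  # shutdown hook, mainly for testing
                return
            conn.sendall((handle(command) + "\n").encode())


def start_daemon(handle):
    """Bind the socket and run the bot loop in a background thread."""
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(SOCK_PATH)
    server.listen(1)
    t = threading.Thread(target=bot_daemon, args=(server, handle), daemon=True)
    t.start()
    return t


def send_command(command):
    """Web-frontend side: one request, one response, over the socket."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as c:
        c.connect(SOCK_PATH)
        c.sendall((command + "\n").encode())
        return c.makefile("r").readline().strip()
```

This avoids paying the ~2000 ms interpreter/pywikibot startup cost per request, at the price phe mentions: the daemon keeps its memory allocated even when idle.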
[21:38:48] or using an fcgi process
[21:39:12] https://github.com/valhallasw/pywikibugs/blob/master/pywikibugs.py
[21:40:07] valhallasw: this is running on labs?
[21:45:23] yes
[21:46:17] I think I might actually write a redis page generator -- sounds like a useful concept.
[21:47:45] valhallasw: it looks like you're listening to IRC with this script. I imagine I can also listen to web requests?
[21:48:53] oh nevermind, you're saying that with redis, the HTTP server doesn't have to be in the same thread as the bot
[21:48:55] no, it's just writing to irc
[21:48:58] right
[21:49:06] this is the redis->irc interface
[21:49:17] there's another script, toredis.py, which is the mail->redis interface
[21:52:09] so redis is the central dispatcher here
[21:52:15] ok cool
[21:52:20] I'll read up
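[Editor's sketch] The 'redis page generator' idea valhallasw floats here could look like the following: a generator that blocks on a redis pub/sub channel and yields page titles as they are published, so a bot can iterate over it like any other pywikibot page generator. The channel name is made up, the third-party `redis` package is assumed, and this is not what pywikibugs.py actually does (that script is the redis->irc interface).

```python
# Sketch of a pub/sub-driven page generator. Channel name is hypothetical.

CHANNEL = "tools.mybot:pages"  # made-up pub/sub channel


def title_from_message(message):
    """Extract a page title from a redis pub/sub message dict, or None for
    non-'message' events (redis also delivers 'subscribe' confirmations)."""
    if message.get("type") != "message":
        return None
    data = message["data"]
    return data.decode("utf-8") if isinstance(data, bytes) else data


def redis_page_generator():
    """Yield page titles published on CHANNEL, blocking between messages."""
    import redis  # third-party; imported lazily so the helper stays standalone
    pubsub = redis.Redis().pubsub()
    pubsub.subscribe(CHANNEL)
    for message in pubsub.listen():
        title = title_from_message(message)
        if title is not None:
            yield title
```

With this shape, the web frontend only needs `redis.Redis().publish(CHANNEL, title)` to "jump the queue", and the bot's main loop is just `for title in redis_page_generator(): ...`.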