[00:09:49] Espreon: https://meta.wikimedia.org/wiki/User_talk:EmausBot [00:10:23] https://meta.wikimedia.org/w/index.php?title=Wiktionary/Table&action=history [00:14:08] giftpflanze: So, just ask there, right? [06:40:33] Espreon: right [06:40:45] OK. [06:40:47] Thanks. [06:40:56] np :) [06:41:19] Say, are you German? [06:41:26] i am [06:41:41] Which part of Germany are you from? [06:41:46] hesse [06:42:43] Ah, I see. [06:43:00] Too bad. The quest continues. [06:43:27] * Espreon is searching for a Low German speaker [06:43:35] ... on teh Freenode [07:57:08] anyone online who has shell access to http://commons.wikimedia.beta.wmflabs.org? [07:57:09] or could grant me shell access to that project? [07:57:35] morning [07:58:07] anyone around who can say why glusterfs doesn't seem to be working? [08:15:17] anyone online who has shell access to http://commons.wikimedia.beta.wmflabs.org? [08:15:18] or could grant me shell access to that project? [08:23:57] hoi .. I think the synchronisation of Wikidata and Labs is broken again [08:30:16] hey hashar, good morning. when you have a moment i need help sorting out if the cronjob on http://commons.wikimedia.beta.wmflabs.org can run maintenance/runJobs.php as a user that can read/write to a filebackend path "/mnt/upload7/private/gwtoolset/$site/$lang" [08:30:54] dan-nl: in call, ping me again in half an hour :-) [08:31:16] k, i'll have to be a bit later, have an appointment at 10, but will do so [08:34:45] dan-nl: will look at it whenever my call is finished :-] [08:35:02] I guess the path hasn't been created [08:37:54] cool, basically, gwtoolset is able to store metadata files there as expected and can retrieve them as long as the extension is being run in the browser. the extension then creates a background job that will run when the cronjob, or whatever other service runs maintenance/runJobs.php. the problem seems to be that when maintenance/runJobs.php is run, the user it is run as has no access to /mnt/upload7/private/gwtoolset/$site/$lang. locally i run it as user [08:42:58] maybe it's better to just use the path /mnt/upload7/gwtoolset/$site/$lang instead … will need to sort something out for production though as the idea is to keep that directory private via server config ... [08:43:38] need to leave for that appointment … will try and make an irc connection at that location [09:19:11] dan-nl: filled a bug with my findings at https://bugzilla.wikimedia.org/show_bug.cgi?id=58202 [09:19:14] seems like a path issue [09:19:23] Could not retrieve (8/c/6/8c6f87aac4b9377b1492af22184b875f.xml) from the FileBackend. [09:19:36] the file on the disk is /mnt/upload7/private/gwtoolset/wikipedia/commons/commonswiki-gwtoolset-metadata/Dan-nl/8/c/6/8c6f87aac4b9377b1492af22184b875f.xml [09:20:10] right was thinking it might be a permissions issue … what service runs the maintenance/runJobs.php script? [09:20:42] we have a shell script that does a while loop [09:20:51] and execute nextJobs.php to find jobs that need to be run [09:20:56] then run jobs maintenance script [09:21:06] what user owns the file /mnt/upload7/private/gwtoolset/wikipedia/commons/commonswiki-gwtoolset-metadata/Dan-nl/8/c/6/8c6f87aac4b9377b1492af22184b875f.xml [09:21:07] can't remember the user , let me look it up [09:21:51] hmm [09:22:41] looking on the jobrunner instance, it doesn't have /mnt/upload7 :-D [09:22:54] ahh [09:23:29] so it's not looking in that dir? 
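The layout hashar quotes above (8/c/6/8c6f87aac4b9377b1492af22184b875f.xml) is the three-level hash-sharded structure a FileBackend container typically produces. A minimal sketch of how such a path is put together, assuming the shard directories are simply the first three characters of the md5-based file name; the base path uses the corrected /data/project/upload7 prefix:

    import os

    def sharded_path(base, filename):
        """Rebuild the three-level sharded path seen in the log.

        Assumes each shard directory is one of the first three characters
        of the stored file name (an md5 hex digest plus '.xml').
        """
        return os.path.join(base, filename[0], filename[1], filename[2], filename)

    base = ('/data/project/upload7/private/gwtoolset/wikipedia/commons/'
            'commonswiki-gwtoolset-metadata/Dan-nl')
    print(sharded_path(base, '8c6f87aac4b9377b1492af22184b875f.xml'))
    # .../Dan-nl/8/c/6/8c6f87aac4b9377b1492af22184b875f.xml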
is that easy to fix [09:25:41] yeah [09:25:52] something got broken when production changed the layout of the files [09:25:55] will fix it up [09:26:21] hello [09:26:24] I need help to program a cron of my java tool on tools.wmflabs.org [09:26:54] does anybody already run a java tool ? [09:26:54] thanks hashar [09:27:27] Hercule: you probably want to ask on the mailing list labs-l [09:40:46] dan-nl: I have renamed in mediawiki config the /mnt/upload7 to /data/project/upload7 [09:40:55] I have no clue how to retry the job though :D [09:41:18] can you modify the table entry? [09:41:26] if not i can submit another job [09:41:42] since the job failed, I guess it is out of the job queue [09:42:49] right, if you can edit the table entry you can set job_attempts = 0, job_token = '' and job_token_timestamp = null then the job will be picked up again [09:43:04] but if the dir has changed that won't help [09:43:28] i'll submit another job … one moment ... [09:44:47] !log deployment-prep deleted old jobs from commonswiki job queue (up to timestamp 20130315031930) [09:44:52] Logged the message, Master [09:45:00] dan-nl: the job failed so it got removed from the queue :( you will get to send another one [09:46:19] hashar: just created it [09:46:46] 2013-12-09 09:46:27 deployment-jobrunner08 commonswiki: gwtoolsetUploadMediafileJob User:Dan-nl/GWToolset/Mediafile_Batch_Job/52a5916898e83 options=array(3) whitelisted-post=array(37) user-name=Dan-nl user-options=array(19) STARTING [09:46:50] hey job starting :-D [09:46:56] it worked! :) [09:46:58] status good [09:47:02] http://commons.wikimedia.beta.wmflabs.org/wiki/Category:GWToolset_Batch_Upload [09:47:06] 7114 miliseconds [09:47:17] awesome!! [09:47:24] nice [09:47:26] so sorry the path ended up being wrong :-(((( [09:48:05] seems like it's just a bit of trial and error … unfortunately … is that path private based on server config? [09:48:10] dan-nl: I noticed that jobs have been triggered for CirrusSearch, so those images should be available in beta search engine [09:48:25] nice [09:48:26] http://commons.wikimedia.beta.wmflabs.org/w/index.php?search=Bianhu&button=&title=Special%3ASearch :D [09:48:52] excellent! [09:48:57] I am so happy to somehow have helped on GLAM front [09:49:04] :) [09:49:18] did you get an access on the beta cluster ? [09:49:38] just need to sort out the fb on production now … i have no idea how to set-up a swift fb … hopefully we can get the path correct for that config ;) [09:49:44] would be helpful to look at the mediawiki log files i guess [09:49:47] not shell access [09:49:52] yes definitely [09:50:01] account on labs is dan-nl [09:51:50] do i just need shell access to the beat cluster? [09:52:06] !log deployment-prep added dan-nl so he can look at MediaWiki log files when playing with glamtoolset [09:52:10] Logged the message, Master [09:52:44] thanks. do i ssh into bastion? or somewhere else? [09:52:47] dan-nl: should be good njow. You would have to ssh to deployment-bastion.pmtpa.wmflabs Log files are in /data/project/logs [09:52:57] ahh [09:53:07] k, will try now [09:53:54] the log files there are written by udp2log, they correspond to the log buckets defined via wfDebugLogGroup( 'some bucket', 'message here' ) ; [09:54:17] web.log and cli.log are the equivalent of $wgDebugLogFile so you have full debug :D [09:54:51] which also mean you have the users cookies / ip address. So be careful when copy pasting on the internet/publicly [09:57:26] cool, got to the log dir thanks [09:58:59] everything looks good hasher, thanks! 
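A sketch of the retry trick dan-nl describes (clearing job_attempts, job_token and job_token_timestamp so the runner reclaims the row), using the MySQLdb module mentioned later in this log. The connection details and job_id are placeholders, and as hashar points out it only helps while the failed row is still in the job table:

    # -*- coding: utf-8 -*-
    import MySQLdb

    # Placeholder connection details; on the beta cluster the host, user,
    # password and wiki database will differ.
    conn = MySQLdb.connect(host='localhost', user='wikiadmin',
                           passwd='secret', db='commonswiki')
    cur = conn.cursor()

    # Clear the claim on a failed job so the job runner picks it up again.
    # job_cmd/job_id identify the row; 12345 is a made-up id.
    cur.execute(
        """UPDATE job
           SET job_attempts = 0, job_token = '', job_token_timestamp = NULL
           WHERE job_cmd = %s AND job_id = %s""",
        ('gwtoolsetUploadMediafileJob', 12345))
    conn.commit()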
[09:59:56] Nikerabbit: regarding your GlusterFS shared directory being dead. Have you try rebooting the instance ? [10:00:04] Nikerabbit: and is language a new labs project? [10:00:07] dan-nl: :-] [10:00:21] Nikerabbit: seems like the volume did not created on GlusterFS :-( [10:05:31] rebooting doesn't help in those cases [10:07:03] Nemo_bis: if the volume did not get created, indeed :( [10:07:08] I can't fix it up myself [10:10:07] hashar: all you had to change was the basePath in filebackend-labs.php from /mnt/upload7/private/gwtoolset/$site/$lang to /data/project/upload7/private/gwtoolset/$site/$lang ? [10:11:17] dan-nl: exactly [10:11:29] dan-nl: we used to have /mnt/upload7 configured using puppet [10:11:36] some refactoring during the summer broke it [10:11:46] aka the path is no more configured [10:11:51] or maybe it never has been :( [10:11:56] k, do you happen to know which dir might work as a basePath for production swift? [10:12:08] for swift I have no clue [10:12:15] i'll try and check-in with aaron later tonight [10:12:17] it doesn't use real file path iirc [10:12:30] You might need a swift volume to be created [10:12:33] k, thanks for this though, we can now move forward with further testings [10:12:37] k [10:12:48] paravoid and apergos might be able to help as well [10:12:56] if i want to make any further config changes, do i add you to the gerrit commit? [10:13:06] they are members of the ops team and they surely knows about Swift. Both are in Greece so are available during our days :] [10:13:18] feel free to add me to Gerrit commits :-D [10:13:38] even if I ignore the change I might comment anyway [10:13:53] oh cool, so just join #wikimedia-ops [10:14:23] yup [10:14:37] i just have a few more domains to whitelist and push that config to the beta cluster … not sure who would be able to merge that commit when i make it … would you or could i do it myself? [10:14:41] and hope they got some available I/O :} [10:15:33] :) [10:16:47] dan-nl: for change in operations/mediawiki-config.git , they have to be reviewed by people with Wikimedia cluster access [10:17:06] usually people from the mwcore team (aka Reedy, bd808, manybubbles, me ..) [10:17:12] or folks from the feature teams [10:17:30] well basically, wmf employee / long term contractors that use to deploy mediawiki config changes in production [10:17:38] k, will try and gather a few domains for the whitelist before i make the next commit [10:17:52] if made today, I can surely review/merge it for you [10:18:05] and by the way, congratulations on figuring out the $wgFileBackend config :D [10:19:11] :) [10:20:04] dan-nl: do you have any WMF people to assist with GWToolset ? [10:20:21] you will probably want someone internally to handle for you the Swift container creation [10:20:33] or get someone from wmf ops assigned to help you [10:20:43] i'll speak with aaron and bryan about it [10:20:56] greg will help out if needed [10:26:53] hashar: just committed https://gerrit.wikimedia.org/r/#/c/100356/1 [10:29:31] hashar: gerrit doesn't seem to allow me to add you to the reviewers for that commit … [10:29:49] i have been added [10:30:05] k, guess the ui just doesn't show it for me yet [10:31:41] I have added some trailing commas and merging the change [10:33:13] dan-nl: should be enabled on beta now [10:34:05] thanks hashar. i see that it merged successfully [10:39:22] yeah all automatically \O/ [11:34:24] toolserver wants me to renew my account... 
[11:59:04] AzaToth, just type burnbridges;) [11:59:26] heh [12:00:03] but first nuke your email from ldap [12:00:08] -bash: burnbridges: command not found :( [12:00:58] when was ts now planned to shut down again? [12:01:26] it's already half-broken [12:01:35] I meant for real [12:02:39] aha, burnbridges is nightshade only [12:02:59] oh [12:03:47] If you're all set, then please type uppercase yes: uppercase yes [12:03:48] [0:0][azatoth@nightshade ~]$ [12:03:50] ... [12:16:08] hashar: yes I ahve rebotted [12:16:24] -typos [13:40:47] Coren: I didn't meet springle before I went to bed, I think it's hard for me to meet him due to the different time zones. But replication works again and flaggedpages table looks good, too. But there are still some revisions missing in the revision table. Could you point him to https://bugzilla.wikimedia.org/show_bug.cgi?id=57642#c14 when you see him? There are some sample revisions missing in the database. [13:42:32] so who can fix "volume not created" issues? [13:43:36] abogott [13:46:12] I don't see him around [13:46:34] should be a matter of a couple hours now [13:48:38] Nemo_bis: but but [13:51:10] tub tub [13:51:38] Nemo_bis: do you hav a toolserver account? [13:52:57] Steinsplitter: whats up? [13:53:29] can toolserver user wiev the content of other users folders? (like on labs) [13:54:17] Steinsplitter: yes/no [13:54:41] Steinsplitter: its really bad etiquette to do that [13:56:51] Steinsplitter: what do you want? [13:57:20] Nikerabbit: An alternative, and better in the long term, is to switch your instances to NFS instead. [13:57:52] Betacommand: only asked :) [13:58:10] Steinsplitter: Also, most users on Tool Labs do not share their homes; only tools are o=rx by default. [13:59:21] apper: I pointed him at it. [13:59:36] thanks [13:59:47] Coren: thanks [14:09:39] Coren: is that documented anywhere? [14:13:03] Nikerabbit: Actually, now that I think of it, probably not (because it's meant to just be the default after the move). It's trivial to do though: check to add the 'role::labsnfs::client' role and reboot. [14:27:14] Coren: oh I see [14:27:21] though I wanted to keep the changes minimal [14:30:28] Coren: did you have a chance to bring up a raring machine? [14:31:11] Nikerabbit: Well, it's not a very big change since that'll be the default within two months or so. :-) [14:32:09] matanya: No; I need to take a day this week to build a raring image but most of my time is spent creating the infrastructure in eqiad. [14:33:55] sure thanks Coren [14:39:55] Coren: How to disable crontab emailing? [14:41:05] zhuyifei1999: Well, the easiest way is to make sure that whatever is invoked with crontab doesn't output. You can also set MAILTO="" but that applies to the whole crontab [14:42:24] Coren: Append 'MAILTO=""' to the crontab? [14:42:41] prepend, but yeah. [14:44:36] Anyway, {{done}} [14:45:20] Thanks [14:53:30] Coren: Did you see my message yesterday about fixing how the status page shows "48M/M" instead of the actual limit? I'd be happy to put the diff in bugzilla (or gerrit if it's in there) if you don't want to mess with it now. [15:02:18] anomie: It's in gerrit; labs/toollabs (check the www subdirectory) [15:02:27] ok [15:30:12] Hi [15:30:31] what are people's feelings about running Google Analytics on a website hosted in Labs? [15:30:54] WMF grantees would like this ability [15:31:18] or something similar (so like PiWik would work and it's Open Source, but we might have to host it somewhere centrally) [15:31:40] Coren: ^ / anyone? 
[15:32:46] milimetric: It's actually very much forbidden by the rules unless any use of analytics is /preceeded/ by a disclaimer explaining its use. (I.e., whatever landing page with that disclaimer needs must be excluded from it) [15:33:07] ok, cool [15:33:24] so simple landing page -> analytics enabled site is ok [15:33:46] is that the case for any analytics solution (like PiWik) or just Google Analytics and other Evil (TM) solutions? [15:34:22] milimetric: Well, third party is "worse"; but any collection of data needs to be disclaimed before it occurs. [15:34:27] k, cool [15:34:47] https://wikitech.wikimedia.org/wiki/Wikitech:Labs_Terms_of_use#If_my_tools_collect_Private_Information... [15:35:02] (IPs are, by default, considered private) [15:36:12] https://wikitech.wikimedia.org/wiki/Wikitech:Labs_Terms_of_use#What_can_and_can.E2.80.99t_be_done_with_user_information.3F is more to the point, actually. [15:36:15] cool, thank you so much Coren, that's super useful [17:53:34] trying to install a python package, the documentation for it recommends using virtualenv. I've gotten it all installed, but there's one more point of confusion: How do I get the python script to run in the virtualenv when it is executed via web request? [17:53:48] Scottywong: wsgi [17:54:47] ok [17:54:57] I'm not all that familiar with wsgi either :( [17:55:30] is that a few lines of code I'll need to add at the top of my py script to define a path or something? [17:56:51] any docs for wsgi that I could read? [17:58:07] derp, got booted [17:58:26] are there any docs available for configuring WSGI on tool labs? [18:00:49] or perhaps there is an easier way to install a simple python module to a local tool user, without messing with virtualenv? [18:06:21] Scottywong_: I don't actually know. I just know the answer is wsgi :D [18:07:26] haha ok [18:27:47] Scottywong_: what's wrong with virtualenv? [18:28:02] (the answer is 'no', because venv /is/ the simple way) [18:31:32] valhallasw: can't figure out how to get my pythonscript to run in the venv when invoked via web request [18:31:45] i entered a jira request to get the module installed globally [18:31:51] that'll probably be the easiest way for me... ;) [18:32:18] … [18:33:01] *bugzilla, not jira [18:34:01] thank god [18:34:26] Scottywong_, how do you execute your script? lighttpd + fastcgi? [18:35:53] no idea :) i put them in ~/public_html/myscript.py and then I direct my browser to http://tools.wmflabs.org/myscript.py [18:36:17] or, more correctly, tools.wmflabs.org/mytool/myscript.py [18:36:57] Scottywong_, okay. anyway, you should be able to specify the Python interpreter to use in the myscript.py shebang [18:37:11] #! /path/to/your/venv/bin/python [18:37:11] ahh [18:37:21] good call [18:37:29] that makes sense, i'll try that [18:37:49] anyway, depending on the framework you use, you should consider using fastcgi :-) [18:45:15] Scottywong_: use #!/path/to/venv/bin/python [18:45:25] oh, ireas already mentioned that [19:17:05] valhallasw: ireas: i tried setting the shebang to this: [19:17:25] #!/data/project/mytoolname/venv/bin python [19:17:32] but it's still not able to load the module [19:17:48] Scottywong_, it seems you forgot a slash before python [19:18:20] python is in the /bin directory [19:18:38] Scottywong_, #!/data/project/mytoolname/venv/bin/python [19:19:12] no such directory [19:19:25] ? 
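For the virtualenv-over-CGI question above, the shebang is the whole answer: point it at the venv's python binary and the web-invoked script runs inside that venv. A minimal sketch with placeholder tool and venv names:

    #!/data/project/mytool/venv/bin/python
    # -*- coding: utf-8 -*-
    # Minimal CGI script that runs under the tool's virtualenv.  The shebang
    # must name the venv's python *binary*, not its bin/ directory.
    import sys

    print("Content-Type: text/plain")
    print("")
    print("running under: %s" % sys.executable)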
[19:19:48] i think in toolserver, I had to put the "python" at the end for some reason [19:19:53] might not be needed in this environment [19:20:08] you have to specify the full path of the python binary [19:20:15] Scottywong_: What's your actual tool name? [19:20:28] usersearch [19:21:10] Scottywong_: Did you name your virtualenv directory "babel" instead of "venv"? [19:21:49] yeah [19:21:59] So try #!/data/project/usersearch/babel/bin/python [19:23:19] Scottywong_: I think you're confused by the '/usr/bin/env python' that should be used if you want the *system* python [19:24:54] tried /python at the end, but it still seems to be returning an error [19:25:20] if I try to navigate in the shell to /data/project/usersearch/babel/bin/python, it doesn't work becuase there is no python directory [19:25:38] Scottywong_: Do you have a python /file/ in that directory though? [19:25:45] yeah [19:25:45] Scottywong_: And is it executable? [19:26:11] hmm [19:26:13] no, it doesn't appear to be [19:26:13] hold on [19:26:18] I.e. if you type '/data/project/usersearch/babel/bin/python' do you get a python prompt? [19:26:33] oh [19:26:34] yes, i do [19:26:48] oh, i see what the problem is [19:27:09] when I'm running python in that environment, I can load the module that I installed, but I can't load any other modules that I normally can load, like MySQLdb [19:27:20] do I need to link the venv to the main python installation somehow? [19:28:28] Scottywong_: yes. You can either install modules in the venv (to seperate them from the system install), or you can use virtualenv --system-site-packages [19:28:45] oh [19:28:50] so in your case virtualenv --system-site-packages /data/project/usersearch/babel [19:29:01] do I need to recreate the venv, or just type that command? [19:29:15] just type in that command [19:29:28] ok sweet, thanks for the help [19:29:33] it will update the virtualenv with the new settings [19:30:24] things are looking promising....... [19:32:22] ok, so now I can execute the script without errors in the shell [19:32:28] which is a big step forward [19:32:37] but it still returns an error from a web request [19:33:15] nm [19:33:16] i got it [19:33:23] didn't "take" ownership of the files [19:33:33] completely working now, thanks to everyone! [19:33:43] Yeay success!@ [20:13:32] Hi. Any idea why file.write(final.encode( "utf-8" )) wouldn't work in my pywikibot script? [20:13:53] to be exact, it's file = open('%spublic_html/porzucone.html' % (config.path['home']), 'w') [20:13:55] file.write(final.encode( "utf-8" )) [20:14:15] I tried the same using python console and it worked [20:14:28] so it looks like a difference between login and exec servers [20:14:29] (?) [20:15:04] Does "config.path['home']" end with a '/'? [20:18:38] bd808, it does [20:19:00] as I was saying, the copy-paste of these commands to python console on login server works [20:19:06] Darn. That would have been an easy fix. :) [20:19:25] let me double check it on console [20:24:37] hang on, it does work now. Never mind, thanks bd808 for the effort (; [20:27:23] hi ^d, hope you are doing well [20:27:41] <^d> I am thanks. And yourself? [20:28:12] I'm all right! I'm stymied by a Labs thing but I will fix it. [20:33:03] Pray tell, what labs thing stymies you so, brainwane? [20:33:04] :-) [20:34:31] Coren, are the pagecounts mounted correctly on the exec servers now? (a few weeks ago you mounted them temporarily in /mnt) [20:35:02] alkamid: Hm, that's actually a very good question since some of them were restarted. 
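A small diagnostic in the same vein as the troubleshooting above: check which interpreter a web script actually runs under, and whether both the venv-local module and a system package are importable once the venv has been recreated with --system-site-packages. The module name babel is only a guess based on the venv directory's name:

    #!/data/project/usersearch/babel/bin/python
    # Prints the interpreter path and whether the venv-local and system
    # modules can be imported; useful when a venv sees one but not the other.
    import sys

    print("interpreter: %s" % sys.executable)
    for name in ("babel", "MySQLdb"):
        try:
            __import__(name)
            print("%s: OK" % name)
        except ImportError as err:
            print("%s: MISSING (%s)" % (name, err))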
Lemme make sure everything is allright on that front. [20:35:18] Coren, that would be great! thanks [20:35:39] Hi Coren. I'm trying to figure out how to get my Flask app in http://tools.wmflabs.org/missing-from-wikipedia/ running - ireas & Maarten were helpful a few days ago, but I had trouble finding my error logs [20:35:40] my current hypothesis is that FCGI is eating my stack trace or something. Coren, where should I be looking for the real error logs? I look in ~/.error_log and just see a few lines about application restarting from like 30 min ago, nothing recent and none of the 500s [20:36:15] brainwane, you are looking for Python errors? [20:36:23] brainwane: Are you using the new webservice system? [20:36:52] ah ok, interesting, NOW there are errorlogs - there were nearly none on the 3rd [20:37:04] Coren: yes, I did webservice start [20:37:09] ireas: I believe I am [20:37:13] brainwane, if you are looking for python errors, you have to manually enable logging in your FCGI file, see this example: https://github.com/valhallasw/gerrit-patch-uploader/blob/master/app.fcgi [20:37:33] brainwane, lines 7 to 12 [20:37:42] anyone know why http://commons.wikimedia.beta.wmflabs.org may be giving Error: 503, Service Unavailable at Mon, 09 Dec 2013 20:20:02 GMT messages? [20:37:43] brainwane: Then your "true" error log is in ~/error.log, provided Flask reports stack traces to stderr by default [20:38:17] If you report this error to the Wikimedia System Administrators, please include the details below. Request: POST http://commons.wikimedia.beta.wmflabs.org/wiki/Special:GWToolset, from 83.163.0.31 via deployment-cache-text1 frontend ([10.4.1.133]:80), Varnish XID 2142135504 Forwarded for: 83.163.0.31 [20:38:27] thanks ireas, investigating [20:41:33] Coren: re https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help/NewWeb - it's called "new" - I assume it started around 10 Dec, when the page started? is there a proposed month or time period when it's gonna be the new default and will replace something else? [20:42:06] brainwane: Probably coincident with the move to eqiad (~Feb) [20:42:27] brainwane: But I'm not going to disable the apaches then; only change the default. [20:45:47] ireas: so, right now, my application.py (the file with Flask @app stuff in it) is public_html/missingfromwikipedia/webapp/application.py . So this fgci config script - I should put it in that same directory and call it app.fcgi ? [20:46:04] can anyone give me the rights to add templates to http://commons.wikimedia.beta.wmflabs.org? [20:46:41] brainwane, do you have a ~/.lighttpd.conf file? [20:48:02] I do, ireas! it's one line: debug.log-request-handling = "enable" [20:48:08] brainwane: yes, that would work. Then you refer to that from your ~/.lighttpd.conf [20:48:18] brainwane, okay, so you are *not* using FCGI. [20:48:23] (at the moment) [20:48:32] (see the config at https://github.com/valhallasw/gerrit-patch-uploader/blob/master/lighttpd.conf ) [20:49:05] in the patch uploaders' case, the app.fcgi is in ~/src/gerrit-patch-uploader (not even in public_html) [20:49:26] * Coren mumbles evil things about automount [20:49:48] * valhallasw gives Coren a hug [20:52:15] ireas: I'm not? but I did webservice start! should that have deleted the .lighttpd.conf file? [20:52:50] Erm, only php uses FCGI by default with the lighttpd config; python needs to add a stanza for it. 
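The app.fcgi valhallasw links above comes down to two things: send Python tracebacks to a log file you can actually find, and hand the Flask app to lighttpd through flup's FCGI server. A self-contained sketch under those assumptions (the venv path, log location and the application module name are illustrative):

    #!/data/project/missing-from-wikipedia/venv/bin/python
    # -*- coding: utf-8 -*-
    import logging
    from flup.server.fcgi import WSGIServer
    from application import app   # the module that defines the Flask app

    # Write exceptions and tracebacks somewhere findable instead of letting
    # FCGI swallow them.
    handler = logging.FileHandler('/data/project/missing-from-wikipedia/flask.log')
    handler.setLevel(logging.DEBUG)
    app.logger.addHandler(handler)

    if __name__ == '__main__':
        WSGIServer(app).run()

The matching ~/.lighttpd.conf then needs a fastcgi.server stanza pointing at this file (as in the linked lighttpd.conf example), the file must be executable, and the Flask app needs a handler for / if the tool's root URL should serve anything.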
[20:52:55] brainwane: you are using newweb/lighttpd, but it's not running your script via FCGI yet [20:53:07] brainwane: it's still being called as normal CGI script [20:53:21] oh hi valhallasw! thank you for your past and present help (sorry I misremembered you as Maarten) [20:54:44] brainwane: I have no problems with inserting happy memories of Maarten into people's brains ;-) [20:54:50] awwww [20:55:46] alkamid: Something is still broken with the automounter. Lemme try to beat some sense into it. [20:56:02] Coren, actually /mnt/pagecounts looks suspicious to me [20:56:16] on exec servers [20:56:19] alkamid: It's beyond suspicious, it downright broken. [20:56:39] huh [20:56:59] it seems I'm the only one using it -- it's been broken for over a week [20:57:20] At least you're the first to notice. :-) [20:59:54] brainwane: http://nbviewer.ipython.org/gist/valhallasw/7723637 :-) [21:00:33] hmm, maybe it would be nice to have more of my stuff in ~/src or similar instead of public_html [21:01:12] I'm getting kicked out of tools-login.wmflabs.org, but can login to bastion.wmflabs.org [21:01:41] valhallasw: wow! [21:03:07] cool! I should learn to use matplotlib! [21:04:19] It worked quite well, although I noticed matplotlib and numpy are really aimed at more structured data :-) [21:04:33] nonetheless fun to do something less structured with it [21:05:07] yeah! [21:05:21] ok, so now toollabs /missing-from-wikipedia gives a FORBIDDEN error. [21:05:24] * brainwane checks privs [21:05:35] https://tools.wmflabs.org/missing-from-wikipedia/ [21:06:21] brainwane: Forbidden is 90% likely to be you don't have an index.html to display by default, and directory listings are not allowed unless you did it. [21:06:44] yeah, but before I changed my lighttpd settings, it was a 404 [21:06:48] I think I understand better now [21:06:56] brainwane: check error.log. there's an error with starting the fcgi handler [21:07:35] brainwane, sorry, I was afk getting some food ;) [21:07:49] brainwane: main trick to solve errors with the fcgi handler is running the handler directly (i.e. /data/project/missing-from-wikipedia/public_html/missing-from-wikipedia/webapp/app.fcgi ) [21:08:18] brainwane, and ensure that your app.fcgi is executable (was a problem for me ^^) [21:08:19] Why can't I log in to tools-login.wmflabs.org?, but bastion.wmflabs.org works [21:08:35] Dispenser: What error do you get exactly? [21:09:13] PuTTY Fatal Error: Server unexpectedly closed network connection [21:09:16] brainwane: and the reason you're getting a 403 instead of a 404 is that apache is now handling the request -- lighttpd crashed because it couldn't start fcgi [21:10:12] Dispenser: You are not a member of the tools project. [21:13:25] thanks ireas - I have now made app.fcgi executable. [21:14:07] Coren: was max_user_connections decreased recently? [21:14:12] valhallasw: I did try running it and got this error http://pastebin.ca/2495229 [21:14:15] on the replicas [21:14:33] MrZ-man2: Not that I am aware of, but springle would know for sure. [21:14:35] Coren: Ok thanks. "Making a powerful edit counter" [21:15:00] brainwane, did you define a route for /? [21:15:02] Dispenser: I'm in a meeting atm. [21:15:19] brainwane: looks like it's working, apart from the routing issue ireas pointed out [21:15:26] brainwane: now you just have to restart lighttpd :-) [21:16:22] valhallasw: yay! thank you! [21:16:24] brainwane: https://tools.wmflabs.org/missing-from-wikipedia/index ! 
:D [21:16:38] I forgot to do webservice restart after, you know, every step :) [21:16:47] thanks valhallasw Coren ireas [21:17:40] brainwane: you seem to be missing from enwiki! ;-) [21:17:52] HAHAH Makes sense :) [21:20:42] brainwane: fatal error : Negligé [21:21:11] hedonil: I gotta deal with Unicode better [21:21:18] I think I have a local branch that fixes some of that [21:21:25] brainwane: ;-) [21:23:19] brainwane: if you run into any issues, don't hesitate to ask me -- I've had my fair share of unicode dealings in pywikipedia ;-) [21:23:46] (bots that start editwarring depending on the python version, for instance....) [21:24:07] valhallasw: omg that sounds terrible and hilarious [21:24:24] hedonil, tsss, these Germans with their umlauts!!! ;-) [21:24:50] brainwane: valhallasw: there are two major threads to the IT universe: IPv6 and Unicode [21:25:06] ireas: it's french :) [21:25:17] hedonil, yes, but German is worse :) [21:25:30] ireas: :P [21:25:38] blame Heavy Metal https://en.wikipedia.org/wiki/Metal_umlaut [21:25:54] !somethingstorelax [21:26:05] hey. [21:26:17] hedonil: Unicode is OK as long as you just work on the level of code points. Once you start dealing with normalization and collations, things become ugly real fast. [21:26:39] valhallasw: wise words [21:27:24] valhallasw: not to mention the d** rtl thing [21:28:04] !somethingtorelax [21:28:04] http://www.flickr.com/photos/110698835@N04/ [21:28:08] hedonil: oh, yeah, that too. [21:28:09] ahh [21:31:16] http://fr.wikipedia.org/w/index.php?title=Mark_Zuckerberg&offset=2010100400000&limit=25&action=history&tagfilter= < that's what a normalization bug does to your edit history [21:31:38] bot 1: "it's hi:मार्क ज़ुकेरबर्ग)!", bot 2: "it's hi:माऱ्क जुकेरबर्ग)!" [21:32:22] user: "why are two bots fighting over changing the page name from something to the same thing?!" [21:32:30] ☺ [21:32:42] so does it work it your irc client?:) [21:33:24] that shows up as a perfectly fine smiley face [21:33:39] there we go, replace all the :) [21:35:00] will Gerrit let me say "yes, merge this changeset *into the specified branch but not into master*"? [21:35:50] yes, but you need to tell it during the push (to refs/for/thisbranch instead of refs/for/master) [21:36:19] oh. will git-review let me do that? [21:36:22] * brainwane investigates [21:36:29] yes... but I'm not sure how [21:36:37] I think just 'git review thisbranch' [21:52:35] brainwane: works now :) [21:52:56] hedonil: yeah! [21:53:29] I'm trying to figure out a puzzle -- sometimes, if I put .encode('utf-8') in a particular spot, it breaks because it wants .decode instead. And sometimes vice versa! [21:53:49] but right now I am gonna put that aside and just merge the db access stuff into master and then work on the Wikidata stuff [21:54:00] thanks for your help, all [21:54:00] brainwane: depends on the format of the string [21:54:04] * hedonil poners about evil words [21:55:31] brainwane: the only way to deal with unicode is by keeping track of what you have where -- a unicode string (which is what flask gives you based on user input), a bytestring (which is what you get if you .encode('utf-8') that, or a urlencoded representation (the '%3E%2A' stuff) [21:56:07] brainwane: found another: (no error, but not found) d'Artagnan and Three Musketeers [21:56:25] hedonil: what was your target language? 
[21:57:23] brainwane: now I was just looking for some word with single quotes [21:57:43] brainwane: en wiki [21:57:46] oh you are a good tester, hedonil :) [21:57:58] (darn it, I will need to fix that) [21:58:19] brainwane: gone through all that bitter tears, too [21:58:53] Coren: what's the problem with s1 replication today? [22:01:49] russblau_: just what I was about to say, enwiki is busted [22:03:08] brainwane: the main confusion in python stems from 'ä'.encode('utf-8'), which gives a Unicode*De*codeError because it first tries to make 'ä' into a unicode string (decoding!) before encoding it [22:03:19] brainwane: don't know about python, but in php I use for that - this will proper quote and escape any input and mitigate sql injection [22:03:45] Coren: poke [22:03:59] valhallasw: aha, THAT's why that is happening?!?! [22:04:02] omg [22:04:09] I need to go get up and get some fresh air, but thank you! [22:04:17] * brainwane accumulates TODO items [22:05:18] russblau: I see nothing wrong with it? [22:05:36] anyone online that can allow me to import templates into http://commons.wikimedia.beta.wmflabs.org/ [22:06:03] Coren: My tools are telling me the replication lag is >12 hours 21 minutes [22:06:24] Coren: select max(rc_timestamp) from recentchanges; [22:06:25] Coren: on enwiki_p, that is [22:06:55] select max(rc_timestamp) from recentchanges; [22:06:57] +-------------------+ [22:06:58] | max(rc_timestamp) | [22:07:00] +-------------------+ [22:07:02] | 20131209093400 | [22:07:03] +-------------------+ [22:07:05] 1 row in set (0.04 sec) [22:07:17] ^d: would you be able to allow me to import templates into http://commons.wikimedia.beta.wmflabs.org/ [22:09:31] russblau: Slave status report no lag; interesting. [22:09:35] * Coren digs in deeped. [22:11:32] Ah. I see why. Someone is holding a lock. [22:12:28] Catscan2. They have /got/ to start doing queries that are less heavy on the locks. [22:14:10] over 12 hours? That's ... what's the right word? ... unbelievable! [22:14:25] No, it [22:14:36] it's believable, just also quite inapropriate. [22:15:13] And updates into replication have started up again; shouldn't be too long before it catches up. [22:15:19] thanks! [22:15:43] * Coren mumbles something about queries holding locks for >42000 s [22:29:58] Seconds_Behind_Master: 0 [22:36:56] Coren, any news on pagecounts? [22:37:55] there is a lot of lag on ssh tools-login [23:02:29] lbenedix1: There are a lot of users doing a lot of stuff. :-) The eqiad tools-login is going to be considerably beefier. [23:03:19] alkamid: The problem is more complicated to solve than first appears, thanks to the automounter's complete brain damage; I can only reliably mount it on boxes I reboot. [23:03:57] sure, my scripts will just wait until then [23:04:20] should I set them to search in /mnt/pagecounts or /public/datasets? [23:05:34] Actually, /public/pagecounts; this is where they are intended to go. [23:08:46] Cofen|Dinner: What woul be the best way to get this installed? php Internationalization extension (php_intl.dll) http://pecl.php.net/package/intl [23:10:21] hedonil: It's supported, so just open a bugzilla for it and I should be able to get to it tomorrow. [23:13:14] Coren: k.
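To close out the encode/decode puzzle valhallasw explains above: in Python 2 a byte string and a unicode string are different types, and calling .encode() on bytes first triggers an implicit ASCII decode, which is where the surprising UnicodeDecodeError comes from. A minimal Python 2 sketch:

    # -*- coding: utf-8 -*-
    # Python 2 only: illustrates the implicit-decode trap described above.
    byte_string = 'ä'       # raw UTF-8 bytes, e.g. read from a file
    unicode_string = u'ä'   # what Flask hands you for user input

    try:
        byte_string.encode('utf-8')   # tries an implicit ascii decode first
    except UnicodeDecodeError as err:
        print('implicit ascii decode failed: %s' % err)

    # The working round trip: decode bytes to unicode, encode unicode to bytes.
    text = byte_string.decode('utf-8')
    print(text.encode('utf-8') == byte_string)            # True
    print(unicode_string.encode('utf-8') == byte_string)  # True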