[02:35:56] Coren|Away: Is it possible to have jsub overwrite the output file rather than just appending to it?
[05:49:33] hi i want to make a new project for runing bot on wikipedia
[05:49:38] what should i do?
[07:13:56] @seen reza
[07:13:56] petan: Last time I saw reza they were quitting the network with reason: Quit: Page closed N/A at 9/9/2013 6:25:16 AM (48m39s ago)
[07:26:24] [bz] (NEW - created by: jeremyb, priority: Unprioritized - normal) [Bug 53935] install ExpandTemplates mediawiki extension @ wikitech - https://bugzilla.wikimedia.org/show_bug.cgi?id=53935
[07:30:25] [bz] (NEW - created by: MZMcBride, priority: Unprioritized - normal) [Bug 52170] Provide dumps of wikitech.wikimedia.org - https://bugzilla.wikimedia.org/show_bug.cgi?id=52170
[08:35:10] Hey there! I tried testing my extension on the labs, http://tools.wmflabs.org/lifeweb/core/index.php?title=Special:LifeWeb/editor&debug=true -- but the server (too) often responds with 403 forbidden, even on index.php. What can cause such behaviour?
[08:44:03] Granjow: 1) you should run on your own labs instance probably. tools isn't for developing mediawiki
[08:44:44] 2) re tool labs in particular, idk if that's a recurring issue or what
[08:47:56] jeremyb: I disagree... tool labs can be used for this purpose as well
[08:48:12] it's a waste of resources to spawn bunch of instances for every single wiki
[08:48:28] only problem of tool labs in this moment is that we have no memcache... :/
[08:48:45] wtf
[08:48:54] * jeremyb ignores petan and runs away
[08:49:17] hint: don't bring up memcache anymore unless there's a substantive point to be made
[08:49:41] maybe the point is that many people use it on production so it would be nice for developers to have environment with memcache to work on?
[08:50:05] just that /you/ don't use memcache doesn't mean others don't
[08:50:37] memcache is still used in production?
i thought they switched it all out for redis
[08:50:55] I am talking about other productions not wmf
[08:51:04] there are other people using mediawiki as well
[08:51:15] well, duh :P
[08:52:24] APC isn't there either :'(
[08:53:09] jeremyb: I'm not working on mediawiki itself, I'm testing an extension for it. (Actually, an extension for Wikibase.)
[08:53:27] you don't need APC for that test...
[08:55:04] For my extension I do need APC because otherwise I have to wait around 30 s for the page to be loaded. And for each click, actually.
[09:05:58] Granjow: that is what can be easily solved by redis I guess
[09:06:30] mediawiki is slow mastodon, without memory cache it's nearly unusable
[09:09:00] redis?
[09:10:45] I have another problem that I need to scan the whole database for each request, which is why memcaching is not enough. Currently I'm caching the results, later I might switch to Wikibase QueryEngine.
[09:10:47] that is something like memcache
[09:11:06] APC wouldn't solve this problem
[09:12:49] petan: What is like memcache? APC works here, I'm using its object cache and update the objects when needed.
[09:13:04] redis is
[09:14:00] ah.
[09:15:38] memcache and redis are both key->value data stores
[09:16:23] https://www.mediawiki.org/wiki/Manual:Cache
[09:17:01] Granjow: you can probably ask Coren|Away or another root to install APC for you
[09:20:15] that would be useful!
[09:21:48] Just read the example about memcached now, could use them too. I think I decided for APC because it is recommended on the above page.
[09:29:09] Granjow: but memcache is not available on tools for hard-to-understand-reasons so you must use redis
[09:29:29] @search memcache
[09:29:29] No results were found, remember, the bot is searching through content of keys and their names
[09:29:58] !redis is There is no memcache on tools project, only redis. If you want explanation, talk to YuviPanda
[09:29:58] Key was added
[09:34:18] petan: Can I see somewhere what Wikidata is using?
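[Editor's note] The APC/memcached/redis exchange above all revolves around the same cache-aside pattern: check a key->value store before doing the expensive work (a page render, a database scan). A minimal pure-Python sketch of that pattern, with a dict standing in for a real memcached/redis client (all names here are illustrative, not taken from the log):

```python
import time


class KeyValueCache:
    """Tiny stand-in for a memcached/redis-style key->value store,
    with per-key expiry like memcached's TTL."""

    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            # Entry expired; drop it and report a miss.
            del self._store[key]
            return None
        return value

    def set(self, key, value, ttl=300):
        self._store[key] = (value, time.monotonic() + ttl)


def get_page(cache, title, render):
    """Cache-aside: try the cache first, fall back to the expensive
    render() call, and store the result for next time."""
    cached = cache.get(title)
    if cached is not None:
        return cached
    value = render(title)
    cache.set(title, value)
    return value
```

With a real backend you would replace `KeyValueCache` with a `redis.Redis` or memcached client exposing the same get/set shape; the calling code stays identical, which is why the channel treats the two stores as interchangeable here.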
[09:34:35] yes
[09:34:41] redis
[09:34:57] you can see that in public configuration which IMHO is somewhere in gerrit
[09:35:22] @searchrx gerrit.*config
[09:35:22] No results were found, remember, the bot is searching through content of keys and their names
[09:39:04] Ok! Then I'll add redis support.
[09:43:05] @notify YuviPanda
[09:43:05] I'll let you know when I see YuviPanda around here
[09:43:08] :>
[09:44:47] anyone else around that has +2 on labs/tools/grrrit ? :>
[12:58:10] hi Ryan_Lane
[12:58:20] :/
[13:34:10] Coren|Away, ping
[13:35:51] addshore, ping
[14:34:41] Hi anyoen here
[14:34:43] ?
[14:34:53] .
[14:34:55] Want to help with something incredily boring but needed
[14:35:03] what u need
[14:35:16] https://en.wikipedia.org/wiki/Wikipedia:Database_reports/Unused_file_redirects - All the 0 link entries should be G6
[14:35:25] This is something tsreport used to cope with
[14:35:57] What I essentialy need is a dynamic version of the report, with a TAG this as G6 button, that does the tagging authomatically...
[14:36:20] adding an edit note G6/R4: Retitled file with no significant incoming links
[14:36:41] If it can be further autmoated to do massacre tagging even better
[14:36:55] (i.e do the whole batch in one go)
[14:36:56] Qcoder00: I can take care of it.
[14:37:01] legoktm: Thanks
[14:38:53] For a large part what i do in terms of edits, most of it can be automated...
[14:39:00] like - Tagging oirphan fiar use
[14:39:09] like - Identfying images for Commons etc...
[15:07:34] Qcoder00: er, whats the rationale for deleting redirects? i thought redirects are cheap
[15:07:52] File redirects are untidy
[15:08:22] It also so Commons images are de-clipsed
[15:08:30] ah right.
[15:09:02] and so that duplicate filenames can be removed...
[16:19:19] * Coren waves from the SF office.
[16:26:55] Coren, ping
[16:27:26] SF Office?
[16:27:48] Qcoder00, ?
[16:28:32] OH right
[16:29:00] Foundation Office..
complete with Corporate jacuazi and plasma TV's ;)
[16:30:53] Qcoder00: And a healthy champagne budget
[16:35:54] Where do I apply? :p
[16:36:27] Stupid xchat
[16:36:45] Where do I apply? :p
[16:36:50] Heh. You mean "complete with open plan work areas" and "poor coffee"
[16:37:15] Coren, look at the cyberbot node.
[16:37:52] Coren: Use the espresso machine, it's t3h better
[16:49:59] Cyberpower678: What's with your node? I don't yet have a workstation here and my ipad is okay for comm and emergency work but I can't really keep an eye on things.
[16:50:45] Coren, nothing important. Just letting you know that 70% of the node is in use and the CPU seems to be at 100% all the time.
[16:51:49] That seems to be reasonable; 100% CPU use means no wasted resources.
[16:52:13] Coren, actually it's slowed my bot down a bit.
[16:52:35] Instead of 20 hour scan times, they are 27 hours. :p
[16:52:56] I'm thinking you've got a lot of room for optimization though. Since most things are using the same code base, any win is a big win.
[16:53:51] Most of the code is pretty well optimized. Optimization would need to be done at the PHP level. Python too.
[16:54:08] Ugh. These Python scripts are consuming a lot of resources.
[16:54:17] Cyberpower678: Have you looked into using an opcode cache for php?
[16:54:24] Python is a hog
[16:54:36] Coren, I've never heard of something like that.
[17:01:24] [bz] (NEW - created by: Chris McMahon, priority: High - enhancement) [Bug 53061] support Flow on beta cluster - https://bugzilla.wikimedia.org/show_bug.cgi?id=53061
[18:34:25] hey addshore.
[18:34:29] addshore: will merge your patch :)
[18:35:30] * YuviPanda pokes Ryan_Lane
[18:35:38] Ryan_Lane: I finished out the API on the flight.
[18:35:47] :D
[18:35:50] so quick
[18:35:50] https://github.com/yuvipanda/invisible-unicorn
[18:35:59] i'm gonna try that Yuvi :)
[18:36:01] yeah, very quick.
only a few months :P
[18:36:09] mutante: :D
[18:36:13] it wasn't a few months ;)
[18:36:15] Ryan_Lane: do I just clone this and run it manually on the instance?
[18:36:29] for testing, yes
[18:36:35] we'll package it, though
[18:36:36] * YuviPanda pokes andrewbogott
[18:36:36] it's a daemon, right?
[18:36:48] Ryan_Lane: it's just a flask app. We can run it with uwsgi
[18:36:51] or whatever.
[18:36:54] * Ryan_Lane nods
[18:37:06] we really need git-deploy for labd
[18:37:07] *labs
[18:37:35] I should add authentication to the deployment system's web server
[18:37:52] hmm, I've not added the auth stuff
[18:38:00] but that was mostly because I didn't have access to flask docs
[18:38:04] well, I'm talking about something different ;)
[18:38:11] yeah, i know
[18:38:24] I should also move this to Gerrit
[18:38:24] I want to make git deploy multi-tenant
[18:38:58] Ryan_Lane: it's using sqlite for the db now. I guess that's good enough.
[18:39:08] don't want to put mysql on that box
[18:39:16] that's good enough
[18:41:10] Ryan_Lane: alright.
[18:41:12] * YuviPanda pokes andrewbogott
[18:41:19] Ryan_Lane: do you know why i can't ssh into wikidata-dev-9 anymore?
[18:41:24] No route to host
[18:41:27] oh?
[18:41:32] maybe it failed a reboot in virt11
[18:41:35] and i can't access console log for that
[18:41:45] could be
[18:42:12] hm. no
[18:42:14] no problem with our other instances
[18:42:15] it's on virt8
[18:42:18] YuviPanda, where should I start? Do you just want me to read and comment or are there more specific action items here?
[18:42:50] andrewbogott: so end result is to have a 'manage routes' section on wikitech, where you can add mappings of domains to instance/ports.
[18:43:03] sure
[18:43:16] andrewbogott: let me find a link to the page with the docs
[18:43:27] andrewbogott: https://wikitech.wikimedia.org/wiki/User:Yuvipanda/Dynamic_http_routing/API
[18:43:31] aude: have you tried rebooting it?
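[Editor's note] The "sqlite for the db" choice above is the standard lightweight option for a small Flask service: Python's stdlib ships `sqlite3`, so nothing extra goes on the box. A hedged sketch of what a domain->instance/port routing table like the one discussed here might look like (the table and column names are invented for illustration; they are not taken from invisible-unicorn):

```python
import sqlite3


def open_db(path=":memory:"):
    # ":memory:" keeps the example self-contained; a real service
    # would pass a file path so the data survives restarts.
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS routes (
               domain   TEXT PRIMARY KEY,
               instance TEXT NOT NULL,
               port     INTEGER NOT NULL
           )"""
    )
    return conn


def add_route(conn, domain, instance, port):
    # INSERT OR REPLACE makes re-adding a domain update its mapping.
    conn.execute(
        "INSERT OR REPLACE INTO routes (domain, instance, port) VALUES (?, ?, ?)",
        (domain, instance, port),
    )
    conn.commit()


def lookup(conn, domain):
    # Returns (instance, port), or None if the domain is unmapped.
    return conn.execute(
        "SELECT instance, port FROM routes WHERE domain = ?", (domain,)
    ).fetchone()
```

For a single-writer deployment like the one described (one Flask app behind uwsgi), sqlite's file-level locking is usually sufficient, which is the trade-off behind "don't want to put mysql on that box".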
[18:43:38] yes
[18:43:40] ok
[18:43:49] maybe it's stuck at grub
[18:43:51] one sec
[18:43:53] ok
[18:44:11] andrewbogott: it is pretty accurate, though it was written before the code was written.
[18:44:32] aude: yep
[18:44:34] it's booting now
[18:44:40] yay!
[18:45:11] !ping
[18:45:11] !pong
[18:45:13] ok
[18:47:35] Ryan_Lane: hmmm, An error occurred while mounting /mnt/mysql
[18:47:47] don't know if i setup it wrong
[18:47:57] it's fine to skip and troubleshoot the mounting
[18:53:40] andrewbogott: Ryan_Lane alright, API running on 5000 at proxy-dammit
[20:00:06] aude: ah. andrewbogott ^^
[20:00:21] I think andrewbogott fixed that in places, but maybe there's an issue on your instance
[20:00:33] * andrewbogott catches up
[20:02:13] we're talking about wikidata-dev-9?
[20:03:20] should I be able to log in?
[20:05:36] hm
[20:05:42] it booted. I wonder why it says no route to host
[20:05:58] oh. I wonder if it was rebooted again
[20:06:01] it has an issue with grub
[20:06:35] it's booting again
[20:06:44] we need to disable that stupid grub feature
[20:06:46] Cheers yuvipanda :)
[20:06:58] An error occurred while mounting /mnt/mysql.
[20:06:58] Press S to skip mounting or M for manual recovery
[20:07:00] * Ryan_Lane sighs
[20:07:09] :9
[20:07:10] that's the problem
[20:07:21] Sounds like fun without a keyboard Ryan ;p
[20:07:33] I have a keyboard ;)
[20:07:58] Now that's just cheating ;p
[20:08:30] * Qcoder00 ponders
[20:08:34] Erm people...
[20:08:48] Is a data-bunker for WMF feasible?
[20:08:51] Ryan_Lane, this doesn't ring a bell as something that I've fixed… am I forgetting something?
[20:08:52] hm. maybe I need to connect to the console
[20:08:58] andrewbogott: it's different. sorry
[20:09:03] ok, np
[20:09:28] hm. connecting to serial1 doesn't help either
[20:12:20] ok. I need to stop the instance and mount its disk
[20:13:59] Ryan_Lane: Ciscos?
[20:14:44] think you can still try the web based console applet thing if you proxy through bastion host
[20:17:48] what about ciscos?
[20:17:50] oh
[20:17:50] no
[20:17:54] it's an instance
[20:20:34] I commented out /dev/vdb and the mysql bind mount and am rebooting the instance
[20:20:44] then I'll fsck it and uncomment it
[20:24:35] Oh, fsck it!
[20:28:08] this grub issue is incredibly annoying
[20:29:45] Ah, the recordfail thing?
[20:29:49] yes
[20:30:17] seems it's in ubuntu's default for cloud images
[20:33:20] so...
[20:33:21] /var/lib/mysql /mnt/mysql none bind
[20:33:32] that bind mount is what is breaking the boot of wikidata-dev-9
[20:33:56] andrewbogott: is that bind mount what we're doing by default for stuff now?
[20:34:44] Not that I know of… is that present on other systems or just this one?
[20:34:49] Why not just a symlink anyways?
[20:34:56] I have no clue
[20:35:01] I think that dev-9 isn't puppetized at all, btw, so only aude knows how it got the way it got.
[20:35:04] this is how things are set up on this system
[20:35:07] aude: ^^ ?
[20:38:56] back!
[20:40:26] andrewbogott: the api is running on port 5000 in proxy-dammit now (in a screen)
[20:48:43] "proxy-dammit"?
[20:49:06] nice name for an instance, isn't it?
[20:49:48] Sounds like you've been trying for a while and are beginning to be frustrated. "Work, dammit!"
[20:50:02] Coren: yeah, this was when I was trying to get labsdebrepo to work
[20:56:32] YuviPanda: did you see my note on Github's respone?
[20:56:36] response*
[20:56:38] valhallasw: oh?
[20:56:39] no
[20:56:50] i lost some scrollback
[20:56:57] so I saw you said something but have no idea what it was
[20:57:27] YuviPanda: I emailed them about not being able to push to my own fork from changes on the original repository
[20:57:54] YuviPanda: they replied they are working on implementing forking and pull requests, and thought being able to push to a fork also made sense - so it's on their to-do list.
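[Editor's note] The boot hang being debugged above ("Press S to skip mounting or M for manual recovery") is what Ubuntu does when an /etc/fstab entry cannot be mounted at boot. One common mitigation, assuming the machine can come up without those mounts, is the `nofail` mount option so boot proceeds instead of stopping at the prompt. A sketch only, not the actual wikidata-dev-9 fstab (device names and mount points here are illustrative):

```
# /etc/fstab fragment — nofail lets boot continue if the device
# or bind source is missing instead of dropping to the recovery prompt.
/dev/vdb        /srv        ext4  defaults,nofail  0  2
/var/lib/mysql  /mnt/mysql  none  bind,nofail      0  0
```

This addresses only the "boot hangs on the failed mount" symptom; the separate grub `recordfail` behavior mentioned in the log still needs its own fix.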
[20:58:18] valhallasw: what do you mean by 'not push to my fork from changes on original'?
[20:59:28] YuviPanda: clone wikimedia/pywikibot-core, create branch, commit, push to valhallasw/pywikibot-core
[20:59:40] why not?
[20:59:43] i do that all the time
[20:59:47] in github for windows?
[20:59:51] ahaa
[20:59:59] no I didn't realize you were using that
[21:00:04] Just git
[21:00:12] now everything makes sense :D
[21:00:22] I'm still trying to find a way that makes it easier for windows users to contribute
[21:00:33] valhallasw: yeah, I understand. I was just confused because I wasn't thinking of it
[21:00:59] valhallasw: it is called 'triangular' workflow, because there's an official repo, your fork, and then the local one. And you need to sync changes across the three.
[21:01:04] and I guess they're slowly adding support to that
[21:01:18] Right.
[21:01:26] Hm, SourceTree might be better for this.
[21:02:23] valhallasw: yeah, do look at alternatives to github on windows too.
[21:02:37] valhallasw: their android app has been stagnant, and I'm not sure how much effort they're putting into it on windows
[21:03:06] YuviPanda: basically, the Eclipse-based Mylyn solution would be best, but Mylyn is not exactly putting much effort into merging Gerrit 2.7 support
[21:03:42] git-review is sucky on windows, to say the least
[21:03:50] yeah, true
[21:03:57] i've managed to avoid it
[21:05:11] I have actually considered building a basic svn-like git interface with one of the pure-python git implementations
[21:41:58] [bz] (PATCH_TO_REVIEW - created by: Chris McMahon, priority: High - enhancement) [Bug 53061] Install Flow extension on beta cluster - https://bugzilla.wikimedia.org/show_bug.cgi?id=53061
[21:41:58] [bz] (NEW - created by: Michelle Grover, priority: Unprioritized - major) [Bug 53962] When I try to logout after logging in on mobile or desktop I receive a 500 Server Error - https://bugzilla.wikimedia.org/show_bug.cgi?id=53962
[21:43:21] Hi!
I'd like to know how I can export a query into a CSV file. I try to use the statement "select * from page into outfile..." but I get the error 1045: Access denied for user XXX
[21:45:16] I think it can happen because the user I use to connect to the tools-wmflabs.org via SSH and the user I use to connect to the mysql server is different
[21:45:52] I cannot write into the /tmop driectory neither
[21:50:38] elgranscott: i think you'll need more mysql grants instead of file system.. GRANT FILE ON ..
[21:51:21] because it says Access denied and not Permission denied
[21:51:38] and FILE is a separate privilege
[21:57:47] mutant: I try to use "grant file on *.* to 'user'@'%';" but I get ERROR 1045 (28000): Access denied for user XXX when I run that statement
[22:02:14] elgranscott: you'll need to do that as mysql root .. actually i just talked to Coren briefly, we can't enable that for security reasons unfortunately
[22:02:28] elgranscott: so your work-around is going through command line
[22:02:59] how?
[22:03:15] let's say you have your queries in queries.sql
[22:03:43] mysql ..
< queries.sql > out.csv
[22:03:56] and then a little formatting on it, like:
[22:04:11] sed 's/\t/,/g'
[22:04:18] to replace the tabs with commas
[22:04:49] or, heh, here's another one:
[22:04:52] mysql --user=wibble --password wobble -B -e "select * from vehicle_categories;" | sed "s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g" > vehicle_categories.csv
[22:05:09] http://stackoverflow.com/questions/356578/how-to-output-mysql-query-results-in-csv-format (except those that need FILE privilege)
[22:09:23] mutante: I tried this before, but when I run "mysql -u user -ppassword enwiki < test.sql > test.csv" I get ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
[22:10:58] elgranscott: try mysql --defaults-file=~/replica.my.cnf enwiki.labsdb < teset.sql > test.csv
[22:11:06] s/teset/test
[22:11:36] your DB login credentials are in that defaults file, no need to specify username/pw on the command line
[22:14:48] Nettrom: I get the same error: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
[22:16:59] elgranscott: you can try -h localhost to use TCP instead of a socket
[22:17:55] yeah, I forgot the '-h ' in front of 'enwiki.labsdb' to specify the hostname, sorry
[22:17:56] elgranscott: eh,, wait , -h 127.0.0.1 or the actual local IP it listens on
[22:18:11] * Nettrom has brain fried from programming
[22:23:07] andrewbogott: poke me when you want to start on the proxy stuff, or even if you just want to discuss it
[22:23:39] YuviPanda: OK. I'm roughing in a bit of a GUI but it'll be a while before it looks like anything.
[22:23:44] alright
[22:24:19] I think it'll look a fair bit like https://wikitech.wikimedia.org/wiki/Special:NovaDomain
[22:24:28] (Which, it occurs to me, you can probably not view :) )
[22:24:40] yeah :D
[22:28:06] hi J-Mo|away
[22:28:42] mutante & Nettrom: I got it! Thanks a lot!!!
[22:29:47] elgranscott: cool:)