[01:15:12] 3Wikimedia-Labs-Infrastructure, Labs-Team: Debian Jessie image for Labs - https://phabricator.wikimedia.org/T75592#988060 (10Andrew) OK, we've abandoned our partition scheme and agreed to make just one 20g / by default. There's now an image live on labs, debian-8.0-jessie (testing). If it stands the test of ti... [01:55:26] 3Wikimedia-Labs-Infrastructure, Continuous-Integration: Figure out how to dedicate baremetal to a specific labs project - https://phabricator.wikimedia.org/T84989#988157 (10Krinkle) [01:59:36] PROBLEM - Puppet failure on tools-webproxy-jessie is CRITICAL: CRITICAL: 100.00% of data above the critical threshold [0.0] [03:33:51] PROBLEM - Puppet failure on tools-master is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [03:58:52] RECOVERY - Puppet failure on tools-master is OK: OK: Less than 1.00% above the threshold [0.0] [04:19:03] 3Tool-Labs: Set up a tileserver for OSM in Labs - https://phabricator.wikimedia.org/T62819#988323 (10mxn) The old Toolserver tile server worked for all the Wikimedia languages, not just English, German, and Russian. Are there plans to expand the selection? I had to switch the Vietnamese Wikipedia’s OSM gadget to... 
[05:53:02] 3Wikimedia-Labs-Infrastructure, Continuous-Integration: Create labs project for CI disposables instances - https://phabricator.wikimedia.org/T86167#988443 (10hashar) [05:53:51] 3Continuous-Integration, Wikimedia-Labs-Infrastructure, Labs-Team: OpenStack API account to control `contintcloud` labs project - https://phabricator.wikimedia.org/T86170#988446 (10hashar) [05:54:31] 3Wikimedia-Labs-Infrastructure, Continuous-Integration: Figure out how to dedicate baremetal to a specific labs project - https://phabricator.wikimedia.org/T84989#988449 (10hashar) [05:54:42] 3Wikimedia-Labs-Infrastructure, Continuous-Integration: Create labs project for CI disposables instances + OpenStack API credentials - https://phabricator.wikimedia.org/T84988#988450 (10hashar) [05:55:10] 3Wikimedia-Labs-General, Continuous-Integration: Create labs project for continuous integration nodepool - https://phabricator.wikimedia.org/T55978#988454 (10hashar) [06:34:54] PROBLEM - Puppet failure on tools-master is CRITICAL: CRITICAL: 60.00% of data above the critical threshold [0.0] [06:59:56] RECOVERY - Puppet failure on tools-master is OK: OK: Less than 1.00% above the threshold [0.0] [08:14:57] PROBLEM - Puppet failure on tools-exec-13 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [08:44:59] RECOVERY - Puppet failure on tools-exec-13 is OK: OK: Less than 1.00% above the threshold [0.0] [08:51:42] 3Wikimedia-Labs-Infrastructure, Labs-Team: No init script for idmapd on labs jessie instances - https://phabricator.wikimedia.org/T87309#988597 (10yuvipanda) 3NEW [08:52:18] 3Wikimedia-Labs-Infrastructure, Labs-Team: LVM failures on Debian Jessie labs VMs - https://phabricator.wikimedia.org/T87310#988604 (10yuvipanda) 3NEW [08:58:12] 3Tool-Labs-tools-Commons-Delinker: Warning: InnoDB @ cdh - https://phabricator.wikimedia.org/T75353#988611 (10Steinsplitter) 5Open>3Invalid a:3Steinsplitter [08:59:40] 3Tool-Labs-tools-Commons-Delinker: CommonsDelinker: Proposal of 
deletion WITH REPLACEMENT - https://phabricator.wikimedia.org/T84882#988618 (10Steinsplitter) 5Open>3declined a:3Steinsplitter The bot is to remove deleted files, not to replace it. [13:55:39] 3Wikibugs: wikibugs has too many channels, causing join flood - https://phabricator.wikimedia.org/T86758#988907 (10adrianheine) 5Open>3Resolved a:3adrianheine [14:06:39] PROBLEM - Free space - all mounts on tools-login is CRITICAL: CRITICAL: tools.tools-login.diskspace.root.byte_percentfree.value (<20.00%) [14:16:38] RECOVERY - Free space - all mounts on tools-login is OK: OK: All targets OK [18:06:49] [13tsreports] 15valhallasw pushed 1 new commit to 06master: 02http://git.io/LhmJXw [18:06:49] 13tsreports/06master 14e5da90c 15Merlijn van Deen: dbname does not have _p anymore [18:08:19] 3Tool-Labs-tools-tsreports: oldpagesvswikidata: add page creation date - https://phabricator.wikimedia.org/T87340#989210 (10valhallasw) 3NEW [18:10:13] 3Tool-Labs-tools-tsreports: oldpagesvswikidata: add page creation date - https://phabricator.wikimedia.org/T87340#989219 (10valhallasw) a:3valhallasw [18:17:45] is there any log rotation for the tools logs? [18:25:12] PROBLEM - Puppet failure on tools-exec-10 is CRITICAL: CRITICAL: 100.00% of data above the critical threshold [0.0] [18:51:17] why do I always get "error: [Errno 101] Network is unreachable"? is there any firewall to avoid flooding? it's just a wikidata script, it has to check all projects. [19:03:17] Mjbmr: when you do what? [19:03:47] Mjbmr: also, no, there's no log rotation by default, but you can set it up yourself, I think. Not sure how, though. [19:05:07] valhallasw`cloud: I used grid, but I checked log, it's all "error: [Errno 101] Network is unreachable", I never this problem tho. [19:05:55] *had [19:06:45] .... [19:07:01] again, when you do *what* [19:07:29] is this when you run jsub? is it the output of whatever you try to run on the grid? [19:07:29] dude, I run a pywikibot script. [19:07:37] no, it's always. 
[19:08:33] .... [19:08:42] nerver mind [19:08:51] you're not providing *any* context [19:08:59] the pywikibot traceback is not from my script tho, I can't debug it. [19:10:16] hello [19:10:16] i have a doubt on tool labs database use. [19:10:16] i am trying to submit a query which i don't want it to be case sensitive [19:10:18] i am using "lower()" to set both variables to lower characters but it doesn't seem to work [19:10:20] besides, i don't know what might happen if i run this same code on a chinese or hebrew alphabet, for isntance [19:10:23] anyone knows? [19:10:36] valhallasw`cloud: I know what I'm running, it's a network issue. [19:10:49] but never mind. [19:10:58] Mjbmr: in that case, maybe you should provide some info on /what/ you're trying to connect to? [19:11:23] 3Tool-Labs: Set versions and dates in man pages from debian/changelog - https://phabricator.wikimedia.org/T87344#989258 (10scfc) 3NEW a:3scfc [19:11:29] I don't post codes here, I connect to any WMF projects using api. [19:12:04] moccard: lower() will not work in general, as the database is not actually using the correct locale. [19:12:25] moccard: (and, apart from that, something like upper() and lower() is locale-dependent) [19:12:25] valhallasw`cloud: what can I do then? i want to do case insensitive [19:12:34] searches [19:13:51] moccard: I don't think you can, in general, as the performance penalty would be too high, but maybe it's possible in your specific case. What are you trying the match? [19:14:14] article and category titles [19:14:25] pastebin a part of your code, you having problem. [19:15:09] for example, in some cases they write "English writers" while in other they do it as "countries where english is spoken"... [19:15:26] i would like to do a LIKE "%english%" to catch them all [19:16:02] moccard: that's an impossible query in performance terms, even if you'd do it case-sensitively [19:16:27] why impossible? 
I did that with dumps and a java api [19:16:35] with a local mysql database [19:17:05] i don't see why it is that hard. my query implies: English, eNglish, english,... [19:17:10] moccard: hm, okay? that surprises me, because it requres a full table scan [19:17:51] what you can do is do something like page_title COLLATE utf8_general_ci LIKE '%english%' [19:17:53] when you use LIKE %s% you are doing an open query too, right? [19:18:26] SELECT LOWER(title) AS title2 WHERE title2 LIKE '%english%' [19:18:52] I tried that Mjbmr but it doesn't work in Tool Labs :( [19:19:05] Mjbmr: that's not going to work for anything non-ascii [19:19:25] ok, sorry. [19:19:54] COLLATE utf8_bin ? [19:20:19] no. [19:20:27] no, utf8_general_ci (to make the comparison case-insensitive), or a more specific one [19:21:43] where do i put the collation? [19:21:47] moccard: which wiki? [19:22:00] end of it, of course. [19:22:02] i am trying with italian, catalan,... [19:22:06] but how? [19:22:16] collate utf8_general_ci? [19:22:20] end of it. [19:22:34] in each query? [19:22:40] yeah. [19:23:25] but you can check pages, one by one, if it didn't work, tho. [19:25:59] collation utf8_general_ci [19:26:08] i added that on the query and it gives a syntax error [19:26:21] hmm [19:27:40] SLECET * FROM tb WHERE .... COLLATE utf8_general_ci [19:27:52] moccard: SELECT ... WHERE page_title COLLATE utf8_generate_ci LIKE '%english%' AND x=y LIMIT 10; [19:28:43] https://dev.mysql.com/doc/refman/5.0/en/case-sensitivity.html (first hit on googling 'mysql case insensitive comparison') [19:29:33] this gives me a syntax error: SELECT page_id, page_title FROM page WHERE page_namespace=0 AND page_title COLLATE utf8_generate_ci LIKE '%catala%' ORDER BY [19:30:29] er, general_ci [19:30:53] and then it complains COLLATION 'utf8_general_ci' is not valid for CHARACTER SET 'binary', so you need some more casting magic [19:31:27] probably I was wrong. find an alternative way. 
[19:31:48] valhallasw`cloud: it gives the same syntax error even i fixed general [19:31:51] but again, https://dev.mysql.com/doc/refman/5.0/en/case-sensitivity.html to the rescue. http://quarry.wmflabs.org/query/1661 [19:32:26] I also do full database search for checking both arabic and persian characters. [19:32:41] moccard: well, if that's your entire query, then you should probably have something behind 'ORDER BY' :-p [19:33:48] mmm...that's my entire query, and when I call it with ; it gives the syntax error [19:34:18] moccard: works for me: http://quarry.wmflabs.org/query/1661 [19:34:51] I see the CONVERT [19:35:11] it works! [19:35:14] :D [19:35:34] however, when I use the query in chinese, what will it happen? [19:35:49] arabic, persian, hebrew,... [19:39:52] moccard: whatever those languages see as 'case-insensitive' [19:39:59] or, well, not languages [19:40:12] utf8_general_ci defines which character should be seen as 'the same' [19:40:45] ok. I see. [19:41:09] and I think it's following a unicode standard with that, but you'd have to check the mysql docs to be sure [19:41:23] also, you might need utf8mb4 instead of utf8, but I'm not sure if our mysql has that [19:41:51] let me rephrase: if it does have it, you should use that ( and use utf8mb4_general_ci) [19:42:26] thanks a lot! [19:45:42] valhallasw`cloud: another problem I am encountering while i do these queries is the difficulty of escaping [19:46:41] characters [19:47:44] I am using Python with MysqlDB which theoretically escapes automaticly when you pass the strings as paramethers [19:49:45] cursor [19:50:12] cursor? [19:52:56] db = MySQLdb.connect(...); cur = db.cursor(); cur.execute("SELECT * FROM tb WHERE key = ?",(value,)) [19:54:07] for row in cur: do something ...; cur.close(); db.close() [19:54:26] yup, right. 
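The fix that finally worked in the Quarry query linked above combines CONVERT (to escape the binary character set the replicas store titles in) with a case-insensitive utf8 collation. The exact Quarry query is not reproduced in the log; a sketch of its likely shape, assuming standard `page` table columns:

```sql
-- Sketch only: replica columns are VARBINARY, so COLLATE utf8_general_ci
-- alone fails with "COLLATION ... is not valid for CHARACTER SET 'binary'".
-- Converting to utf8 first makes the case-insensitive comparison legal.
SELECT page_id, page_title
FROM page
WHERE page_namespace = 0
  AND CONVERT(page_title USING utf8) COLLATE utf8_general_ci LIKE '%catala%'
LIMIT 10;
```

Since utf8_general_ci is the default collation for the utf8 character set, the explicit COLLATE clause is redundant here, but it documents the intent and makes it easy to swap in a more language-specific collation.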
even though i put them inside execute it fails [19:54:35] when in the title there are "" [19:54:57] for instance, in italian wikipedia: [19:54:58] Persone_legate_all\'Università_degli_Studi_"Guglielmo_Marconi" [19:56:09] cur.execute("SELECT * FROM tb WHERE title = ?",(u'Persone_legate_all\'Università_degli_Studi_"Guglielmo_Marconi"',)) [19:59:09] i don't use this u' [19:59:17] my equivalent would be: [20:00:08] cur.execute("SELECT * FROM tb WHERE title = %s”,’Persone_legate_all\'Università_degli_Studi_"Guglielmo_Marconi”’) [20:00:19] although the title comes in a variable [20:00:43] moccard: that's very strange; it should escape that [20:01:04] moccard: do you actually get a syntax error, or something else? [20:01:22] mysql_cur.execute(sql) [20:01:22] File "/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 174, in execute [20:01:22] self.errorhandler(self, exc, value) [20:01:24] File "/usr/lib/python2.7/dist-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler [20:01:26] raise errorclass, errorvalue [20:01:28] _mysql_exceptions.ProgrammingError: (1064, 'You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near \'San_Marco"" AND p.page_namespace=0\' at line 1') [20:01:58] moccard: whaaaaaaaaaaaaaaaat [20:02:08] sorry :$ [20:02:17] is that MySQLdb module? [20:02:23] yup [20:02:27] this appears in my terminal [20:02:39] \'San_Marco"" [20:02:43] is something longer...cut [20:02:52] moccard: are you sure you're using excute(query, params) and not execute(query % params) ? [20:03:28] cur.execute("?, ?, ?", (var1, var2, var3)) not cur.execute("%s %s %s" % (var1, var2, var3)) [20:03:40] yeah [20:04:10] actually [20:04:11] mysql_cur.execute(sql,k) [20:04:17] where sql is the query [20:04:22] with %s [20:05:15] uhm, that's weird. [20:05:16] how do you exactly build that query? 
[20:05:25] sql ='SELECT page_id, page_title FROM page p, categorylinks cl WHERE cl.cl_from = p.page_id AND cl.cl_to="%s" AND p.page_namespace=0;' [20:05:26] mysql_cur.execute(sql,k) [20:05:43] it's werid because it doesn't work [20:05:55] cl.cl_to=? [20:06:00] sql ='SELECT page_id, page_title FROM page p, categorylinks cl WHERE cl.cl_from = p.page_id AND cl.cl_to="' + k + '" AND p.page_namespace=0;' [20:06:00] mysql_cur.execute(sql) [20:06:07] or cl.cl_to=%s [20:06:18] while this second version it works...unlesss you come across titles like that I posted before [20:06:47] Mjbmr: is there anything wrong with %s? [20:06:54] mysql_cur.execute('SELECT page_id, page_title FROM page p, categorylinks cl WHERE cl.cl_from = p.page_id AND cl.cl_to=? AND p.page_namespace=0',(k,)) [20:07:32] mysql_cur.execute('SELECT page_id, page_title FROM page p, categorylinks cl WHERE cl.cl_from = p.page_id AND cl.cl_to=%s AND p.page_namespace=0',(k,)) [20:09:53] don't use quotation in the sql string it self. [20:10:08] cur.execute(sql,params) where params must be a tuple or list. [20:10:31] don't ever use like + k + [20:10:57] it seemed logical though [20:11:09] i don't understand (k,) [20:11:32] (k,) is a tuple since is (k) is equal to k. [20:12:30] params must be like (var1,) or (var1,var2) ... [20:13:07] or [var1] or [var1,var2] but these are lists not tuples. [20:13:13] yes but, why do u need to send an array? [20:13:18] is a tuple an array? [20:13:25] yeah, sure. [20:13:29] sorry but I come from java...i started with python yesterday :) [20:13:49] maybe i should read sth before coding in python [20:14:01] still... [20:14:26] why do you need to send an array? [20:14:26] a tuple. if it is just one parameter [20:15:06] they're being replaced in sql string where ever a question mark exist or a %s exist. [20:15:17] must be a tuple of parameters. 
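The parameter-passing advice above can be condensed into a runnable sketch. The stdlib sqlite3 module stands in for MySQLdb so the snippet is self-contained; note that MySQLdb's placeholder is actually `%s` (its DB-API paramstyle is "format"), not the `?` some lines above suggest, but the principle — pass values as a tuple and never splice them into the SQL string — is identical.

```python
# Parameterized queries: let the driver do the escaping.
# sqlite3 is used here so the example runs anywhere; with MySQLdb the
# placeholder would be %s instead of ?.
import sqlite3

db = sqlite3.connect(":memory:")
cur = db.cursor()
cur.execute("CREATE TABLE page (page_title TEXT)")

# A title containing both ' and " -- the case that broke string concatenation.
title = 'Persone_legate_all\'Università_degli_Studi_"Guglielmo_Marconi"'
cur.execute("INSERT INTO page VALUES (?)", (title,))  # params as a 1-tuple: (k,)

# Never build SQL with + k + or "%s" % k -- the driver escapes the value:
cur.execute("SELECT page_title FROM page WHERE page_title = ?", (title,))
row = cur.fetchone()
print(row[0] == title)  # True
db.close()
```

The `(k,)` trailing comma matters: `(k)` is just `k` with parentheses, while `(k,)` is a one-element tuple, which is what `execute()` expects for its parameters.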
[20:15:24] aha, i thought a single variable would make [20:17:33] level = level + 1 [20:17:33] File "/usr/lib/python2.7/dist-packages/MySQLdb/connections.py", line 249, in __exit__ [20:17:33] self.rollback() [20:17:35] _mysql_exceptions.ProgrammingError: (2014, "Commands out of sync; you can't run this command now") [20:17:37] Exception _mysql_exceptions.ProgrammingError: (2014, "Commands out of sync; you can't run this command now") in > ignored [20:17:56] do u know where is this coming from? [20:20:23] moccard: google! ;-) [20:20:33] moccard: I think the issue is you haven't read the last result set [20:20:52] which, in this case, is the 'result set' from the error [20:23:34] great. it gives no problem now. [20:23:48] thank you guys! [20:24:04] you really helped me solve these two annoying issues [20:25:47] you're welcome. [20:27:10] dinner time. take care! [20:27:11] bye! [21:49:29] YuviPanda, who can I talk to, to have a new instance with my own resources be granted to me? [21:50:25] T13|mobile, MusikAnimal ^^ [21:59:33] seconded [22:10:10] +2 [22:12:40] MusikAnimal, should we also move the Pageviews tool to it as well? [22:13:00] T13|mobile, ^ [22:13:35] Umm. [22:14:03] Umm what [22:14:10] Anything that's been sucked up by xtools should be moved to it. [22:14:39] Well, then I need access to the tool Hedonil created and then became inactive. [22:14:49] I have yet to hear from Coren about that. [22:14:58] you mean the gadget? [22:15:37] We've been pleaing with Coren for that for awhile now. Wasn't he going to look into that and get back to you? [22:15:54] Yea. About 20 days ago. [22:18:26] It's not very helpful that Coren hasn't been very active lately. [22:18:39] Or petan, or Time [22:18:40] *tim [22:19:01] @seen petan [22:19:01] Cyberpower678: Last time I saw petan they were talking in the channel, they are still in the channel #wm-bot at 1/21/2015 11:05:20 AM (11h13m40s ago) [22:19:09] Hmm... 
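The "Commands out of sync" (MySQL/MariaDB error 2014) diagnosed earlier in the log usually means a previous result set was never fully read before the next command was sent on the same connection. A minimal sketch of the DB-API hygiene that avoids it, again using sqlite3 as a stand-in for MySQLdb so the example is self-contained:

```python
# Drain (or explicitly close) each result set before reusing the connection.
import sqlite3

db = sqlite3.connect(":memory:")
cur = db.cursor()
cur.execute("CREATE TABLE t (n INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

cur.execute("SELECT n FROM t")
rows = cur.fetchall()                  # read the whole result set first...
cur.execute("SELECT COUNT(*) FROM t")  # ...then it is safe to issue the next query
count = cur.fetchone()[0]

print(len(rows), count)  # 3 3
cur.close()
db.close()
```

With MySQLdb specifically, fetching all pending rows (or calling `cursor.close()`) before the next `execute()` is what clears the condition; swallowing an exception without reading its result state can leave the connection in exactly the out-of-sync state shown in the traceback above.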
[22:19:16] @seen Coren [22:19:16] Cyberpower678: Coren is in here, right now [22:19:35] * [Coren] is away (Auto away at Tue Jan 20 20:52:59 2015) [22:19:46] Was stwalkerster going to help with setting up our own instance for this if we needed him? [22:19:57] T13|supper, dunno [22:20:20] Krenair, are you able to create an instances? [22:20:42] in projects I'm an admin of, I think [22:20:48] what project is this? [22:21:10] oh, tools? [22:21:10] We want to create an instance to dedicate xTools to. [22:21:42] So we want to move tools.wmflabs.org/xtools to xtools.wmflabs.org [22:22:29] Krenair, ^ [22:23:12] I'm not a tools admin and I don't think I can help move things like that [22:23:51] We would move it, but we need an admin/root/whatever it takes to get the new instance created. [22:24:26] I don't know if you'd be moving to a whole new project or just a separate instance in the tools projecrt [22:24:28] project* [22:24:40] New project. [22:25:00] Which I thought was called an instance of labs. Not a new instance of toollabs [22:25:13] wat [22:25:58] Forget it. Can you create a new project with the sub-domain xtools on labs. [22:26:30] one of these guys I think: https://wikitech.wikimedia.org/wiki/Special:Ask/-5B-5BResource-20Type::instance-5D-5D/-3FInstance-20Name/-3FInstance-20Type/-3FProject/-3FImage-20Id/-3FFQDN/-3FLaunch-20Time/-3FPuppet-20Class/-3FModification-20date/-3FInstance-20Host/-3FNumber-20of-20CPUs/-3FRAM-20Size/-3FAmount-20of-20Storage/searchlabel%3Dinstances/offset%3D0 [22:27:43] MusikAnimal, actually no. [22:28:00] Krenair, project is what I was looking for, not instance. [22:28:48] those instances list the type, cpu, ram etc, we want a big one [22:28:54] m1.xlarge [22:29:27] MusikAnimal, projects have their own instances. I had them mixed up. [22:29:38] Cyberpower678: I think we should put this request on hold until we know exactly what we need. [22:29:48] We want our own project with large instances. 
[22:30:44] MusikAnimal, T13|supper ^ [22:30:48] Coren: My mail to the lab's list isn't coming through. [22:31:00] a930913, he's not here. [22:31:23] How large of an instance do we really need though? Part of our problem is our problem. I don't feel good about asking for more resources when we're not managing the resources we have very well yet. [22:31:38] very true [22:32:06] Brute force and sheer size isn't going to fix the issue. [22:32:10] Indeed. [22:32:25] Cyberpower678: Where is he then? :p [22:32:32] well it'd sure run faster [22:32:56] I think the foundation would be much more willing to entertain our request once our shit is together. [22:33:29] Which is as much my problem as both of you. [22:33:49] Anyways... nom nom nom... Chinese supper.. [22:34:06] Can you officially link from wikis to non-tools? [22:35:18] Tools was set up with the security and privacy required to link. [22:35:23] T13|supper, yes, but regardless, we are getting a lot of hits, which is hitting web service limits, causing the pages to never load. OOM results in the web service crashing, but excessive requests can cause it to hang up. [22:36:23] That will be an excellent argument when we know what we need. [22:36:32] Cyberpower678: Can you forward a mail from me to the list? [22:36:43] yeah more CPU power is needed regardless. Memory management is another thing [22:38:30] What we need is to rewrite xTools' scripts and an SQL logger to see what SQL requests are being made, including memory consumption during and after the request. [22:38:40] *add an [22:38:40] MusikAnimal: Cyberpower678 all of ops is tied up in random meetings and shit today and tomorrow [22:38:50] CPU is needed? What for? Isn't it DB and RAM? [22:38:51] YuviPanda, thanks for the info. [22:39:01] I want to fix tools so the problems don't exist [22:39:12] And am willing to put effort and time into it [22:39:20] To fixing xTools? [22:39:22] So telling me what's the issues would help [22:39:26] That's great. 
[22:39:28] Sure why not [22:39:48] what's the exact issue with xtools? [22:39:55] It runs out of memory? [22:40:10] We have a memory leak somewhere. [22:40:32] Apparently a faulty SQL command is executed and the connection remains open as a result/. [22:40:37] Hmm [22:40:56] what are your thoughts on splitting xtools into multiple tools? [22:41:01] It is a lot of tools [22:41:08] I had the same thought [22:41:23] So separating them means problems with one won't affect others [22:41:32] Yes. It's opening requests which fail and cause those requests to not be properly closed as best as I can tell. [22:41:35] you're still going to get certain tools such as the edit counter getting lots and lots of requests [22:41:39] And makes it more of a hassle to manage. They run on a unified interface. [22:41:55] But then again, a static directory fixes that. [22:42:29] YuviPanda, ^ [22:42:35] Unified interface as in ui? [22:42:49] I'd rather not split. It makes it harder for me personally to remember which tool is in what. [22:43:09] YuviPanda, yes, as well as Intuition. [22:43:29] Right but we can easily set up permissions to share files and code [22:43:35] Cyberpower678: Can't you have xtools-*? [22:43:52] ugh [22:43:58] I think the important bit is running them as different web services [22:43:58] I'd prefer not. [22:44:17] Why [22:44:40] It will improve stability a lot [22:45:02] * Cyberpower678 thinks. [22:45:07] xtools-core holding the unification. [22:45:10] I think even if the edit counter were on it's own things would run much better [22:45:26] Yup [22:45:34] you have a couple requests coming in, running the ec on users with hundreds of thousands of edits, things are going to lock up [22:45:34] Agreed, [22:45:43] Then the tools running in their own namespace with their own webservice. [22:45:52] I like the idea Cyberpower678. If we go that road, we should just make an admin web interface to monitor each tool. [22:46:30] * a930913 fetches the hacksaw. 
[22:46:33] that might help isolate memory issues too [22:46:38] I'll work on creating xTools branched tonight. I'll also invite YuviPanda, if everyone is okay with it. [22:46:46] Would make it easier to see which tools are struggling. [22:47:02] +1 [22:47:28] Yeah splitting out just the other edit counter first [22:47:28] Is a good idea [22:47:49] And then see how stable everything else is [22:48:21] the page analytic tools could be pretty expensive too, but from what I can tell the ec is doing the most thinking out of all the tools [22:48:25] YuviPanda: You can't admin the making list, can you? [22:48:27] I'll move the edit counter to its own branch tonight and replace the existing file with a redirect script. And I'll add YuviPanda [22:49:05] Does the ec cache results? [22:49:13] yes [22:49:17] Not to my knowledge. [22:49:19] it must [22:49:38] at least for the same end user [22:49:52] if you run it twice on the same editor the second time is lightning fast [22:50:10] Oh it should [22:50:35] MusikAnimal: If the same end user hit refresh, the scripts runs from the beginning Nd recompiled the data. [22:51:04] We need to fix the log in feature too. [22:51:22] It forgets too quickly. [22:51:59] doing some caching tests right now [22:52:29] T13|supper: You mean oauth? [22:52:58] It forgets too quickly. [22:53:39] I know. I have an OAuth module on Supercount that should work better. [22:55:54] Shit. If you Oauth from any branches tool, it takes you back to XTOOLS itself. [22:56:19] I may have a solution for that. [22:58:55] T13|supper: ^ [23:22:07] anyone knows how to install mediawiki-utilities in tool labs? [23:38:19] marcmiquel: virtualenv? [23:56:21] yeah, use virtualenv
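The closing virtualenv suggestion can be sketched as a short shell session. The environment name is arbitrary (hypothetical here), and 2015-era Tool Labs hosts ran Python 2, where the `virtualenv` command plays the role that `python3 -m venv` does on current systems:

```shell
# Create an isolated environment in the tool's home directory (name is hypothetical).
python3 -m venv "$HOME/mwenv"    # on Python 2 hosts: virtualenv ~/mwenv
. "$HOME/mwenv/bin/activate"
# Installs into the venv only, not system site-packages (needs network access).
pip install mediawiki-utilities
deactivate
```

Jobs submitted to the grid would then need to run the interpreter from inside the venv (e.g. `~/mwenv/bin/python script.py`) so the installed library is on the path.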