[00:01:56] Warning: There is 1 user waiting for shell: Richregel (waiting 162 minutes)
[00:15:26] Warning: There is 1 user waiting for access to tools project: Sven Manguard (waiting 8 minutes)
[00:16:29] legoktm: It contains all blocks, or just active ones?
[00:16:34] just active ones
[00:16:39] if you want block log, use the logging table
[00:19:50] legoktm: I'll probably put this into the getwarnlevel.py :)
[00:19:58] ok
[00:20:29] Though I want some efficient way of stalking who's reported.
[00:20:48] reported to where?
[00:20:55] legoktm: Any chance lee* can do that?
[00:20:59] legoktm: AIV/
[00:21:03] it does
[00:21:11] s/\///
[00:21:32] also, the memcache key "aiv" is a pickled python list of all users currently on AIV
[00:23:57] legoktm: How to I memcache in python?
[00:24:05] s/to/do/
[00:24:10] I should sleep...
[00:24:31] install the python-memcached library from pip
[00:24:36] https://github.com/legoktm/mtirc/blob/master/mtirc/cache.py should give you a good idea
[00:26:11] legoktm: Can't you just file it? :)
[00:26:19] file it?
[00:26:21] is that library not installed via the system?
[00:26:23] why use pip?
[00:27:05] Ryan_Lane: idk. i use virtualenv's to ensure that my libraries are kept in sync across multiple machines+systems
[00:27:12] heh
[00:27:33] legoktm: f=open("file.txt","w"); f.write(json.dumps(warnings));
[00:28:42] a930913: memcached is better, plus you dont have issues with threading+multiple scripts. also i think pickling is faster than the json implementation
[00:28:55] Warning: There is 1 user waiting for access to tools project: Sven Manguard (waiting 21 minutes)
[00:29:14] Ryan_Lane: want to add Sven to the tools project? :)
[00:29:19] legoktm: Pickling is only python though.
[00:29:20] one sec
[00:30:35] a930913: Yeah, but I use python everywhere so....
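legoktm describes the memcache key "aiv" as a pickled Python list of the users currently reported at AIV, accessed through the python-memcached library. A minimal sketch of that round trip; the usernames are made up, and the actual memcached calls (which need the library installed and a server running) are left as comments:

```python
import pickle

# Hypothetical value for the "aiv" key: a plain Python list of usernames.
reported = ["ExampleVandal", "203.0.113.7"]

# This is the round trip python-memcached performs for non-string values:
# it pickles them on set() and unpickles them on get().
blob = pickle.dumps(reported)
assert pickle.loads(blob) == reported

# With the python-memcached library and a reachable server it would be:
#   import memcache
#   mc = memcache.Client(["127.0.0.1:11211"])
#   mc.set("aiv", reported)
#   users = mc.get("aiv")   # the same list back, or None on a miss
```

This also illustrates a930913's caveat: a pickled value is only readable from Python, whereas the JSON API mentioned below the exchange is language-neutral.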
[00:30:49] a930913: I was working on a web API that exposed it as json
[00:31:04] oh here, http://tools.wmflabs.org/editcountitis/cgi-bin/avi/av/api.py/aiv/
[00:31:20] he's added
[00:31:24] Ryan_Lane: thanks :D
[00:31:27] yw
[00:32:03] https://www.ohloh.net/languages/bfpp
[00:32:52] legoktm: :)
[00:33:27] That's much easier to consume as it's batteries included :)
[00:33:40] * a930913 -> afk. (Battery)
[00:55:50] Warning: There are 2 users waiting for shell, displaying last 2: Richregel (waiting 216 minutes) MrFredH (waiting 11 minutes)
[00:59:45] er, how do you look up a users shell name?
[00:59:50] Sven Manguard isn't sure what his is
[01:01:27] I'm baaaaaaaaack, and hopeless as ever <3
[01:02:09] ohai
[01:02:17] maybe someone in here is more familiar with windows....
[01:03:01] how familiar?
[01:03:04] so I stick tools-login.wmflabs.org in at putty
[01:03:09] then it says login as:
[01:03:14] and I stick in svenmanguard
[01:03:24] and it gives me a message Disconnected: No supported authentication methods available (server sent: publickey)
[01:04:36] gry: using putty to connect to labs.
[01:05:09] i have used putty before but only with passworded logins; i have no clue how to add pubkeys into it
[01:07:07] hm
[01:07:41] Sven_Manguard: try setting the file location at http://www.ualberta.ca/CNS/RESEARCH/LinuxClusters/images/pka/figure7.png
[01:09:44] I'm in
[01:09:50] yayayay
[01:10:01] ok now its simple
[01:10:43] type in "become svenbot"
[01:11:07] then "source python/bin/active"
[01:11:14] erp
[01:11:16] activate*
[01:11:19] sorry, a password is required to run sodu
[01:11:24] er
[01:11:40] maybe wait a minute since i just added you?
[01:14:36] svenmanguard@tools-login:~$ become svenbot
[01:14:37] local-svenbot@tools-login:~$
[01:15:23] yay
[01:15:34] now "source python/bin/activate"
[01:16:21] (python)local-svenbot@tools-login:~$
[01:17:02] "cd rewrite/scripts"
[01:17:29] "python login.py"
[01:17:35] which should prompt you for a password
[01:18:05] the fuck?
[01:18:14] Password for user Svenbot on wikidata:wikidata:
[01:18:15] Logging in to wikidata:wikidata as Svenbot
[01:18:17] en.wikidata is not a valid site, please remove it from your config
[01:18:18] (python)local-svenbot@tools-login:~/rewrite/scripts$
[01:18:52] what's with the third line?
[01:20:25] erm
[01:21:32] just ignore it
[01:21:36] k
[01:22:05] (python)local-svenbot@tools-login:~/rewrite/scripts$
[01:22:06] anyways, now just follow the instructions at https://www.mediawiki.org/wiki/Manual:Pywikipediabot/claimit.py
[01:22:21] oh wait
[01:22:24] lemme fix one thing
[01:22:54] wait, do I need python?
[01:23:01] its already set up
[01:23:04] so just run
[01:23:21] "python claimit.py -cat:"Women physicists" P1 Q12345
[01:23:26] or whatever
[01:23:54] oh
[01:25:17] wait, so how does it work?
[01:25:47] gimme a sec
[01:26:52] P# <- property you want to add, and Q# <-- the item the property should link to
[01:28:21] What's a good place to paste a long error message? is there a good pastebin website
[01:29:10] dpaste.de
[01:29:41] http://pastebin.com/t0mwi8vb
[01:30:02] oh right
[01:30:20] that was my fault, fixed.
[01:31:38] what just happened?
[01:32:29] i had to change your default site to enwiki but forgot to set your username for it.
[01:34:23] http://pastebin.com/Zivr8WMw
[01:35:33] ermmmm
[01:35:44] did you try copypasting or something?
[01:35:49] yeah
[01:36:07] because no edits happened
[01:36:25] ok, you missed the :
[01:36:28] it should be -cat:""
[01:36:33] so
[01:36:34] python claimit.py -cat:"Seminaries and theological colleges in Massachusetts" P31 Q233324
[01:37:18] yeah, tried tha
[01:37:20] t
[01:38:01] try it again?
[01:38:06] it just worked for me
[01:39:10] why is it sleeping for 8.9 or 9.3 seconds?
[01:40:19] oh, edit rate
[01:41:51] yeah, you can easily change that by adding the argument "-pt:#" where # is how many seconds to sleep
[01:42:23] what's a good rate?
[01:46:29] legoktm: https://dpaste.de/vSTuK/
[01:46:35] is that because it's not flagged?
[01:46:47] heheheh
[01:46:48] yeah
[01:47:07] well i always run my bots at -pt:0
[01:47:31] so what, 300 a minute?
[01:48:29] no, usually not that fast
[01:48:39] most take processing time in between
[01:50:23] anyways, you should be set now :D
[01:51:11] legoktm: you're a crat, so tell me, how long do I need to wait before {{Wikidata:Requests for permissions/Bot/Svenbot}} is approved?
[01:51:37] like 30 more seconds
[01:52:57] done
[01:55:39] cool thx
[02:07:27] legoktm: you built frowny faces into your bot?
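The sleep messages Sven asks about come from the bot framework's put throttle, which the "-pt:#" argument overrides. The underlying idea — keep successive writes at least a fixed number of seconds apart — can be sketched stand-alone (this is an illustration of the mechanism, not the framework's actual code; the save call is hypothetical):

```python
import time

class PutThrottle:
    """Sketch of an edit-rate throttle: block so that successive
    wait() calls are at least `delay` seconds apart."""

    def __init__(self, delay):
        self.delay = delay      # seconds between writes; -pt:0 disables it
        self.last = None

    def wait(self):
        if self.last is not None:
            remaining = self.delay - (time.monotonic() - self.last)
            if remaining > 0:
                time.sleep(remaining)   # the "sleeping for 9.3 seconds" pause
        self.last = time.monotonic()

# throttle = PutThrottle(9.3)
# for page in pages:
#     throttle.wait()
#     save(page)              # hypothetical save call for one edit
```

With a delay of zero (legoktm's -pt:0), wait() returns immediately; flagged bots are exempt from the conservative default rate, which is why the error Sven pasted went away once the bot flag was granted.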
[02:07:30] Processing [[en:International Baptist College]]
[02:07:31] [[en:International Baptist College]] doesn't have a wikidata item :(
[02:35:29] wm-bot: quiet
[02:35:29] Hi Sven_Manguard, there is some error, I am a stupid bot and I am not intelligent enough to hold a conversation with you :-)
[02:35:43] wm-bot: !quiet
[02:36:01] did that actually work?
[02:49:03] Sven_Manguard: heheh :D
[02:53:27] legoktm: do you see the error in this command: (python)local-svenbot@tools-login:~/rewrite/scripts$ python claimit.py -cat:"Seminaries and theological colleges in South Tennessee" P31 Q233324 -pt:0
[02:53:47] no….
[02:53:52] whats the traceback?
[02:54:00] There is no such state as South Tennessee
[02:54:23] oh
[02:54:26] There is a South Dakota, which comes right before it alphabetically
[02:54:39] I forgot to remove the South when I replaced Dakota with Tennessee
[02:54:42] heheh
[02:56:58] Warning: There are 3 users waiting for shell, displaying last 3: Richregel (waiting 337 minutes) MrFredH (waiting 132 minutes) Paulboal (waiting 7 minutes)
[03:34:47] hey Ryan_Lane
[03:34:52] thanks for the assist earlier
[03:35:02] yw
[03:35:25] * Sven_Manguard almost, almost had to ask what wy was before getting it :S
[03:35:31] yw*
[03:35:40] heh
[03:35:42] I CAN TYEP!
[03:43:25] legoktm: I'm getting bad conflicts using the script. I'm adding P31 (instance of) with the item Seminary, but for some reason if there's already a P31 with the value University, it won't go.
Some things are both seminaries and universities
[03:49:47] Yeah, the script is conservative in that manner
[07:48:23] !rq Richregel
[07:48:23] https://wikitech.wikimedia.org/wiki/Shell_Request/Richregel?action=edit https://wikitech.wikimedia.org/wiki/User_talk:Richregel?action=edit&section=new&preload=Template:ShellGranted https://wikitech.wikimedia.org/wiki/Special:UserRights/Richregel
[07:48:28] !rq Paulboal
[07:48:28] https://wikitech.wikimedia.org/wiki/Shell_Request/Paulboal?action=edit https://wikitech.wikimedia.org/wiki/User_talk:Paulboal?action=edit&section=new&preload=Template:ShellGranted https://wikitech.wikimedia.org/wiki/Special:UserRights/Paulboal
[07:48:31] !rq MrFredH
[07:48:32] https://wikitech.wikimedia.org/wiki/Shell_Request/MrFredH?action=edit https://wikitech.wikimedia.org/wiki/User_talk:MrFredH?action=edit&section=new&preload=Template:ShellGranted https://wikitech.wikimedia.org/wiki/Special:UserRights/MrFredH
[08:04:05] !log
deployment-prep Creating deployment-cache-upload04 using a Precise image. The aim is to replace deployment-cache-upload03 which runs Lucid (see also {{bug|49470}}
[08:04:08] Logged the message, Master
[10:18:35] Coren: can you install the zend framework on tool labs
[10:31:04] petan: ^^
[10:36:13] what is tools-dev for
[10:36:19] and how do I use it ?
[10:40:26] you ssh to it from tools-login
[10:40:38] ok
[10:40:50] and it should be used for 'development', building stuff, running tests etc. :)
[10:41:00] does it have a different url to access tools on it
[10:41:14] no, everything on the cluster is accessed through the same url
[10:41:48] hmm
[10:41:59] what are you trying to do? :)
[10:42:21] I need to use the Zend framework in my tool
[10:42:39] afaik it is only installed on tools-dev
[10:43:56] once http://framework.zend.com/ ?
[10:44:29] yes
[10:44:39] there isnt generally anything to install with php frameworks is there? It is just a collection of files?
[10:45:35] I don't know
[10:46:30] as far as I know the framework will be a collection of php files and folders, you should just need to put this in /data/project/toolname/public_html and it will be accessible on all instances :)
[10:46:48] I'll try that
[10:46:51] thanks
[10:47:00] np, if you have any problems just ping me :)
[11:02:01] BTW, tools-dev.wmflabs.org is directly accessible as well.
[11:07:04] indeed!
[11:07:26] hehe, scfc_de I just tested huggle out on wikidata, think we might need a bit of work there before it is usable ;p
[11:18:18] addshore: is tools-dev new, or have I been ignorant?
[11:18:29] its been there for a long while :)
[11:18:51] * AzaToth blames Coren for not informing him
[11:18:54] its where you should run stuff you're testing instead of -login :)
[11:19:12] addshore: okai
[11:20:08] addshore: I see it has the same home at least
[11:20:23] yup, same /home and /data :)
[11:22:42] AzaToth: I think it's intended only for testing stuff with a heavy CPU/memory footprint (baking a kernel, etc.), so that the "normal" interactive use on tools-login isn't disrupted more than necessary.
[11:23:15] really, nothing should be run on -login :)
[11:24:20] hmmm I guess I need the old framework
[11:25:45] what's the simplest way to access the API from a php tool ?
[11:27:41] Oren_Bochman: https://www.mediawiki.org/wiki/API:Client_code#PHP
[11:28:08] !log huggle addshore: adding wikidata whitelist
[11:28:09] Logged the message, Master
[11:30:09] hmm nothing simple there
[11:31:11] Oren_Bochman: with a framework? :D
[11:31:25] >> https://github.com/addshore/addwiki/blob/master/classes/botclasses.php
[12:00:28] Oren_Bochman I already installed zend framework...
[12:00:30] it's on tools-dev
[12:00:39] where else do you need it?
[12:24:20] I needed it on tools-login
[12:24:20] Is there a way to link to an article by the article ID instead of the title? eg http://en.wikipeidia?article=1234567
[12:24:28] I got a local copy
[12:51:02] Coren: would it be possible to create an aggregating view (alg=merge) over revision and revision_userindex, and would that allow the index to work?
[12:55:44] FutureTense: not that I know of
[12:55:57] if there were it would be something like http://en.wikipedia.org/w/index.php?pageid=30092150
[12:59:27] can someone tell me how to "fix" this query?
[12:59:28] select page_title from page where page_id=458122;
[12:59:43] it returns the page_title ok, but it is garbled
[13:00:02] I guess it is encoded somehow..
never understood this utf crap
[13:02:52] FutureTense: Try "SELECT CONVERT(page_title USING utf8) FROM page WHERE page_id=458122;".
[13:03:23] still doesnt work
[13:03:51] FutureTense: There are several levels of encoding; which programming language do you use?
[13:03:59] python
[13:04:16] however, I'm using mysql at the terminal right now
[13:04:33] (BTW, http://en.wikipedia.org/w/index.php?curid=30092150 works. Doc: http://www.mediawiki.org/wiki/Manual:Parameters_to_index.php)
[13:04:35] so I should be able to get the "proper" results from that, no?
[13:04:44] FutureTense: No, there are differences.
[13:05:24] Cf. https://github.com/mzmcbride/database-reports/blob/master/dbreps for an example how to connect with Python.
[13:06:29] Hello, Labs.
[13:07:20] so are you telling me the way I connect to the database affects the output I get in my queries?
[13:07:33] AzaToth: It could, but it wouldn't.
[13:07:49] ok
[13:07:51] I'm still perplexed why I can't get the query to return the proper results from MySQL
[13:08:42] Coren: http://stackoverflow.com/questions/17002809/handling-two-almost-identical-tables-as-one-model was a reply there, that's why
[13:09:22] FutureTense: UTF-8 is a way to encode text. Mysql does exactly the right thing: report what's in the database to you. If you show it on screen but your console doesn't display it right, it's not mysql's fault but your console's. :-)
[13:09:56] Coren, FutureTense: It's a bit more complicated. WMF stores UTF8-encoded text as LATIN1 in the database, so you have to work around that.
[13:10:08] scfc_de: they do?
[13:10:14] that sounds wrong
[13:10:25] that can't be true
[13:10:30] you must be lying
[13:10:38] no one would be so stupid to do such a thing
[13:10:53] ok, so how can I get it to convert properly? this is hair pulling mad
[13:11:09] FutureTense: recode or iconv
[13:11:59] scfc_de: No we don't. If that were the case, no title not in latin-1 could be storable.
[13:12:02] FutureTense: What I said above: Use UTF8 as client encoding and "CONVERT(column USING utf8)".
[13:12:35] FutureTense: which db?
[13:12:37] Coren: Sorry, you're right: VARBINARY.
[13:13:12] AzaToth: enwiki
[13:13:12] I ran this from MySQL
[13:13:12] scfc_de: Right, so that there is no transcoding going on at all. UTF-8 goes in, UTF-8 goes out. :-)
[13:13:12] select convert(page_title using utf8) from page where page_id=458122;
[13:13:24] same results
[13:13:49] FutureTense: You're not understanding what I'm telling you. The page_title is in UTF8. Mysql just displays this to you. Unless your terminal speaks UTF-8, it'll look like garbage to you.
[13:13:52] FutureTense: http://paste.debian.net/9912/
[13:14:22] Gay-straight alliance?
[13:14:26] yeah
[13:14:32] AzaToth: it's a dash.
[13:14:40] ah
[13:15:00] FutureTense: Are you using putty?
[13:15:04] yes
[13:15:27] FutureTense: http://i.imgur.com/2bpPMEt.png
[13:15:30] FutureTense: Tell putty to use UTF-8 then. You have to do this anyways, the VMs are also set to UTF-8. :-)
[13:15:32] Coren: ↑
[13:16:12] Coren: garbage in, garbage out as they say
[13:16:16] Well, I really don't care about what putty tells me, but I DO care about what python "gets" when it runs the same query
[13:16:20] Coren: Problem is most clients don't like VARBINARY and barf, so you have to either decode the data in the client or make MySQL mark it correctly as UTF8.
[13:16:26] and its having the same issue
[13:16:31] FutureTense: Where's your Python source?
[13:16:33] FutureTense: Python gets UTF-8 too. EVERYTHING gets UTF-8. :-)
[13:16:50] scfc_de: I've never had any problem with perl or C.
[13:16:54] Coren: only if you tell python to understand that utf8 is relevant
[13:16:55] scfc_de: please clarify?
[13:17:11] FutureTense: How do you know it's having the same issue?
[13:17:13] scfc_de: The problem with having it stored in the database as a TEXT type is that MySQL's UTF-8 support has historically not been so great.
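What Coren and scfc_de are telling FutureTense: the columns are VARBINARY holding UTF-8 bytes, so the client just needs to decode them (or ask MySQL to re-mark them with CONVERT). A sketch of the client-side half — the title is the one from the log, the byte literal shows exactly what SELECT hands back:

```python
# page_title comes back from MySQL as raw bytes (VARBINARY).
# The bytes are valid UTF-8, so decoding them is all that's needed.
raw = b"Gay\xe2\x80\x93straight alliance"   # what SELECT page_title returns
title = raw.decode("utf-8")
assert "\u2013" in title    # U+2013 EN DASH, the "dash" AzaToth asked about

# The server-side alternative scfc_de suggested, which makes MySQL
# mark the column as UTF-8 so clients need no manual decode:
#   SELECT CONVERT(page_title USING utf8) FROM page WHERE page_id = 458122;
```

This is also why FutureTense's terminal showed garbage: the bytes were fine, but PuTTY was not set to interpret them as UTF-8.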
[13:18:10] anomie: I am well aware of WMF's commitment to make pigs fly :-).
[13:18:25] FutureTense: Where's the source of your script to do the query so we can look at it?
[13:18:50] scfc_de: Made http://commons.wikimedia.org/wiki/File:Sus_scrofa_avionica.png a long time ago ヅ
[13:18:58] Im logging my errors
[13:19:00] http://tools.wmflabs.org/common-interests/traceback
[13:19:07] do a view source on that to make it readable
[13:19:32] UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 3: ordinal not in range(128)
[13:19:42] FutureTense: welcome to the club
[13:19:55] python is alias for UnicodeDecodeErrir
[13:19:58] Error*
[13:20:03] ?
[13:20:09] i dont like this club
[13:20:10] :(
[13:20:34] you always get UnicodeDecodeErrors with python, even if you do everything correctly
[13:21:05] FutureTense: issue can be you are trying to decode unicode twice
[13:21:13] but you probably don't have any power over it
[13:21:48] Coren, PING
[13:22:07] Cyberpower678: ?
[13:22:20] Fatal error: Allowed memory size of 67108864 bytes exhausted (tried to allocate 32 bytes) in /data/project/xtools/Peachy/Plugins/database/MySQL.php on line 119
[13:22:39] ini_set("display_errors", 1);
[13:22:39] ini_set("memory_limit", '512M');
[13:22:44] Why?
[13:23:56] Cyberpower678: Clearly, that memory_limit isn't the only one in force; likely it gets overridden after you set it.
[13:24:09] FutureTense: instead of "str()" try using "unicode()"
[13:24:12] FutureTense: You have at least two more SELECTs on text columns: "distinct(rev_user_text)" in get_common_editors() and "distinct(page_title)" in get_article_names().
[13:24:14] But also, "512M"? Really?
[13:24:27] Coren, what do you mean? What's overriding it?
[13:24:40] Coren, the article info tool pulls a lot of information.
[13:24:58] scfc_de: but that's not causing this coding problem?
[13:25:52] Cyberpower678: I'd need to read through and analyze your entire code to figure it out.
You might want to do a phpinfo() earlier in your code to see what, exactly, is being taken into account.
[13:26:43] Coren, I'm just scanning all scripts for ini_set("memory_limit"
[13:27:04] FutureTense: You're passing both variables articles and editors to render_template, so I assume they contain text whose charset's not properly declared.
[13:27:16] I think I found it.
[13:27:20] Coren: So to confirm, text in the database is stored as UTF8
[13:27:42] FutureTense: Yes.
[13:28:07] scfc_de: thats correct. And I've isolated the problem variable and value
[13:49:56] Coren, Fatal error: Out of memory (allocated 149422080) (tried to allocate 32 bytes) in /data/project/xtools/public_html/articleinfo/base.php on line 65
[13:50:08] Nothing else seems to be overriding it.
[13:50:30] Cyberpower678: Well your limit is now clearly much higher.
[13:50:44] Nowhere near 512 though.
[13:51:08] Cyberpower678: (I'm still boggling at the amount of ram this thing needs, though)
[13:51:33] articleinfo has always been a resource hog.
[13:51:43] It eventually got shut off on toolserver.
[13:51:58] It's a webtool, so the ram is merely momentary.
[13:52:29] Cyberpower678: It's still ridiculously high and may run into trouble for the same reason. What does it /do/?
[13:52:58] http://tools.wmflabs.org/xtools/articleinfo
[13:53:14] It's very popular
[13:53:58] How in blazes can it consume tens of megs of ram when examining a single article? I don't think there /is/ that much data about an article!
[13:54:36] The person who reported it was querying Wikipedia:Requests for page protection
[13:54:47] That explains a lot. :p
[13:55:31] Coren, ^
[13:56:11] That doesn't explain why all that data needs to be in ram at once; it was obviously written with no consideration for memory consumption. You're hitting the system-wide memory_limit atm, and I'm pretty sure I don't want to raise /that/ much higher if we want the webservers to be reasonably stable.
[13:57:03] Coren, it's working now.
It caches the data afterwards though.
[13:57:17] "caches"? In a DB?
[13:57:31] no. As files.
[13:57:45] Meh. Flatfiles are a DB too. :-)
[13:58:04] True.
[13:58:56] Yeah, I'm not going to raise the global memory_limit, it's already overly generous as it is. If there is a strong enough demand for that tool that requires it, we'll make a webserver just for it.
[13:59:27] Coren, will it still link to xtools?
[13:59:55] Cyberpower678: cache into memcached?
[14:00:01] Cyberpower678: It'd have to be separated into its own tool (though nothing would prevent you from redirecting to it from /xtools/)
[14:00:23] Coren, that's good enough for me.
[14:00:48] Coren, it is definitely in demand.
[14:01:21] If you go to a page's history, you'll see a link that says Revision history statistics
[14:02:10] Cyberpower678: Let me be clearer; if it's in (a) significant demand, (b) a significant fraction of uses break because of the memory limit and (c) there is no workaround (like limiting the time interval). And even then, only if (d) the code can't be fixed to be less of a glutton.
[14:02:41] Coren, got it.
[14:02:45] Cyberpower678: At the very least, the tool should first fail gracefully if there is too much data to handle. :-)
[14:38:10] Coren, what's the news on Researcher?
[14:40:32] Cyberpower678: There is no news. You'll have to be patient; like I said earlier this will take weeks at best.
[14:41:02] Cyberpower678: This needs considered thought by legal, they are very busy, and this is very low priority.
[14:42:26] Coren, I know. I am being patient. I just like regular updates.
[14:42:30] :-)
[15:05:36] Warning: There is 1 user waiting for shell: Pcodeaxonos (waiting 0 minutes)
[15:19:31] what creates /usr/local/apache/commons-local? its missing here with a fresh install of the videoscaler
[15:24:37] what
[15:24:42] is going on with jsub?
[15:25:27] Theopolisme: Nothing /should/ be going on.
[15:25:32] http://pastebin.com/a9uMrTYa
[15:26:17] Works for me without those errors. How odd.
[15:26:46] Hmm...that's running through crontab; let me try it directly
[15:27:10] Theopolisme, it doesn't like you.
[15:27:12] :p
[15:27:14] If directly works, can you paste your crontab line?
[15:29:52] Okay, so it's a problem with my crontab.
[15:29:54] 0 */1 * * * jsub python $HOME/cgi-bin/other_scripts/latest_commit_to_enwiki.py > /dev/null
[15:31:10] Theopolisme, where are you seeing these errors?
[15:31:45] in the .out file in my home directory
[15:32:20] Which home directory?
[15:32:47] the home directory of my bot
[15:32:54] ok.
[15:32:56] theoslittlebot project
[15:34:23] My crontab is working fine for me.
[15:34:32] No issues.
[15:34:47] Other than minor bug messages originating from my bot.
[15:36:52] I think your command script is wrong.
[15:37:00] Theopolisme, give me a sec
[15:37:34] Great, any help is appreciated
[15:39:37] 0 */1 * * * cd $HOME/cgi-bin/other_scripts && jsub -mem 512m -cwd -once -N latest_commit_to_enwiki -o /dev/null -e /dev/null python latest_commit_to_enwiki.py
[15:39:41] Try that
[15:40:04] Anyways. Gotta go.
[15:41:22] Thanks! I'll give it a shot now
[16:08:55] !rq Pcodeaxonos
[16:08:55] https://wikitech.wikimedia.org/wiki/Shell_Request/Pcodeaxonos?action=edit https://wikitech.wikimedia.org/wiki/User_talk:Pcodeaxonos?action=edit&section=new&preload=Template:ShellGranted https://wikitech.wikimedia.org/wiki/Special:UserRights/Pcodeaxonos
[16:11:09] Coren: The compilation errors should have been fixed by https://gerrit.wikimedia.org/r/#/c/67643/2. Theopolisme: How old is this output?
[16:11:42] Let's see
[16:11:45] Recent
[16:12:26] Date: Tue, 11 Jun 2013 00:00:05 +0000
[16:15:18] scfc_de: ^^
[16:16:16] scfc_de: That's been merged in 1.0.2, but even then it wouldn't explain most, not $vfmem and $new_vmem
[16:16:16] Hmmm, hmmm, hmmm. /usr/bin/jsub (on tools-login) is from Jun 11 00:03 :-). Have you received any errors since then?
[16:16:53] Why not?
[16:17:45] Oh, no, you're probably right; I hadn't noticed that the version of memparse_kb you originally wrote didn't scope the vars.
[16:18:01] (I noticed it was moved in the source, not that you added the my $foos)
[16:18:43] Theopolisme: So yeah, fixed in 1.0.2 which is the version currently installed.
[16:19:21] Well that's fabulous
[16:20:53] Coren: I didn't write that :-), 0543a5d is by Merlijn, and indeed jsub from that commit produces errors on compilation.
[16:21:14] scfc_de: Oh, I thought that was yours. Nevertheless, it's gone now. :-)
[16:27:16] AzaToth: Do you know off the top of your head how to add pre-release tests to Debian packages? I.e. the equivalent of "make check" in a tarball, but rather simply run "perl -cw" on some files and stop if it doesn't succeed. I saw something somewhere, but I don't even remember enough of it to google.
[16:27:38] scfc_de: so uh, how do i stop the spam from mzm not having his mail thing set up?
[16:28:00] legoktm: You nag mzm to login to Tools :-).
[16:28:39] scfc_de: depends if you are using dh7 or not, but dh_auto_test runs tests and can be overridden by override_dh_auto_test: blablabla in the rules
[16:28:42] legoktm: Alternative: We set up ~local-dbreps/.forward => legoktm, scfc. Let's see if it works.
[16:29:28] scfc_de: well what would be really nice is if i only got mail when the reports i wrote (need some way to specify that…) fail
[16:30:05] because on TS i never got any mail from dbreps, and logged in once a week or so to check for any error logs
[16:30:36] AzaToth: Uh, what's installed on Tools?
Is dh_auto_test a part of the standard packaging workflow, or do you have to add some option to run that?
[16:31:12] legoktm: On Tools, it shouldn't be any different except for mzm's login. I'll create ~local-dbreps/.forward and test it. Moment.
[16:31:26] scfc_de: ok, thanks
[16:35:48] is there a fast way to find a page_id based upon a case insensitive title?
[16:40:59] FutureTense: No more than yesterday; the best you can do is make sure that you restrict the search to a single letter also (since you know the first letter of a username is necessarily uppercase)
[16:41:45] ok, i see that articles can have multiple cases, so that means they can have separate id's
[16:41:58] FutureTense: They can.
[16:42:11] furry: (Most often, they'd just be redirects on almost every project)
[16:42:13] that was a dumb idea
[16:42:26] :D
[16:42:28] furry: Misping. Sorry.
[16:42:35] Coren: Could you take a look at the size of the mail queue and whether some mails from/to local-dbreps are in there (last 15 minutes)?
[16:42:38] FutureTense: I also think it was.
[16:43:23] scfc_de: None that I can see.
[16:43:39] No mail was processed at all?
[16:44:13] scfc_de: I only checked the queues. Lemme see the log.
[16:45:28] 2013-06-12 16:32:25 1UmnyD-0000J8-7R is to local-dbreps and was delivered to legoktm.wikipedia@gmail.com
[16:45:51] FYI: .forward needs to be a single, comma delimited list
[16:46:25] I.e.: legoktm,scfc
[16:46:33] scfc_de: ^^
[16:46:39] http://www.exim.org/exim-html-current/doc/html/spec_html/filter_ch-forwarding_and_filtering_in_exim.html said "The contents of traditional .forward files are not described here. They normally contain just a list of addresses, file names, or pipe commands, separated by commas or newlines, but other types of item are also available." -- *grrr*
[16:47:06] How... well-documented. :-)
[16:47:44] AFAIK, newlines don't generally work though I expect some MTA may accept them.
[16:48:00] Well, they didn't claim that separation by newlines would *work*, just that they normally contain them. [16:48:01] (I've never seen non-commas personally) [16:49:42] legoktm: So you received four mails, I one, and neither of us was pestered with mzm's non-existence? [16:50:29] Coren: (I do remember correctly that we use exim?) [16:50:44] yup :D [16:50:45] scfc_de: You do. [16:51:00] Ok guys.. my tool is done and ready for the masses to start testing. [16:51:07] Alternately, you could just have told MZM to stop not existing. :-) [16:51:23] Coren.. this one is yours. http://tools.wmflabs.org/common-interests/cgi.py/findbyeditor?editor=Coren&maxArticles=5&database=enwiki&ns=0 [16:51:40] Coren: Done that :-). [16:52:18] FutureTense: Makes it clear that I mostly do vandal whacking when I get AV bots :-) [16:52:37] ive already caught an old sock with this [16:53:01] FutureTense: You might want to exclude the queried editor themselves from the list though. It's not really informative to tell me that I got a lot in common with myself. :-) [16:53:09] yeah, i thought of that [16:53:57] My top 5 is also amusing; they are all vandal magnets I had to revert a *lot* on. [16:54:07] (Well, E-commerce is mostly a /spam/ target, but same idea) [16:54:37] hopefully this will be useful for the unrepentant sockers [16:55:18] FutureTense: I should hope it's more useful to the people tracking the sockers than to the sockers themselves. :-) [16:55:32] grr [16:57:27] > These editors share common interests with Legoktm : [16:57:28] Legoktm [16:57:51] <-- sock of no one :D [16:57:58] Out of curiosity I looked myself up and got http://tools.wmflabs.org/common-interests/cgi.py/findbyeditor?editor=Wolfgang42&maxArticles=5&database=enwiki&ns=0 -- not particularly helpful... 
[16:58:37] wolfgang42: if it is using straight up CGI, I think using cgitb might help [16:58:51] or if it is using something saner like flask, you can set debug=True [16:58:51] YuviPanda: actually it just started exceptioning [16:59:04] a second before it worked just fine [16:59:58] i'm just preaching the wonder of beautiful stack traces [17:00:01] that we saw that day :) [17:00:09] YuviPanda: What? I just got a random list of articles... [17:00:19] hmm? I got a 500 [17:01:18] No such problem here. [17:19:22] Coren: fixed [17:28:22] fp=urllib2.urlopen("http://localhost/editcountitis/cgi-bin/avi/av/api.py/aiv/") URLError: <urlopen error [Errno 111] Connection refused> [17:29:03] I could fetch lab hosted pages through localhost until just now; has something changed? [17:31:43] scfc_de: sorry was away [17:32:00] scfc_de: dh_auto_test is part of debhelper, pretty much standard [17:56:48] a930913: Nothing on purpose. [18:16:42] Hmm. "These editors share common interests with Anomie: Addbot, AnomieBOT, ClueBot, ClueBot NG, SmackBot, Yobot" [18:17:34] Anomie is a bot!!! [18:17:35] I knew it! [18:20:17] anomie: Yeah, I got mostly the same set. Vandal fighters FTW! :-) [18:20:48] Coren: I'm not a vandal fighter though. [18:21:22] You might also be a vandal. :-) [18:21:45] I should say that excluding users with the bot group would be a++ [18:26:01] anomie: where did that list come from? :P [18:27:24] addshore: http://tools.wmflabs.org/common-interests/cgi.py/findbyeditor?editor=Anomie&maxArticles=5&database=enwiki&ns=0 [18:27:52] hashar, petan, do either of you know anything about the deployment instance uploadtest07? [18:30:00] anomie, HTTP 500 Internal Server Error [18:30:47] got it now. "These editors share common interests with Krenair : Addbot, ClueBot NG" [18:31:06] Krenair: Yeah, seems to be unstable. Or else FutureTense is actively breaking things. [18:31:27] andrewbogott: I was out travelling, but I'll look at the puppet stuff in a bit. 
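YuviPanda's cgitb suggestion, as a minimal sketch. Note that cgitb is in the standard library only through Python 3.12 (it was removed in 3.13); the helper function below is invented for illustration:

```python
# At the top of a plain-CGI script: once enabled, uncaught exceptions
# render as an HTML traceback page instead of a bare 500.
import cgitb
import sys

cgitb.enable()

def html_traceback():
    """Hypothetical helper showing cgitb formatting an exception on
    demand; the returned string is the same HTML cgitb would serve."""
    try:
        1 / 0
    except ZeroDivisionError:
        return cgitb.html(sys.exc_info())
```

With Flask, the rough equivalent is `app.run(debug=True)`, as noted in the chat.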
[18:32:03] I might be able to delete and recreate maps-tile3 [18:32:17] or consolidate it with maps-tiles{1,2} [18:33:26] we really badly need to make everything in the puppet repo modules so that we can kill off puppetmaster::self [18:33:42] it may be worth the labs team sprinting on this [18:34:05] how would puppetmaster::self work then? [18:34:12] it wouldn't be needed [18:34:19] theo|cloud: did it work? [18:34:32] you could have remote branches in gerrit, then use puppet environments [18:35:00] to switch to another branch, you'd just change your config to use a different environment [18:35:23] do you have to go through review in the remote branches? [18:36:03] nope. we'd set it up such that branch would allow direct push from members of [18:36:22] or at minimum you'd be able to self-review [18:36:45] how would this be different than puppetmaster::self? [18:37:14] apmon: Because then puppet runs would actually keep stuff updated. [18:37:24] rather than needing to update git repos on large numbers of instances, we'd be able to merge into the remote branches [18:38:39] also, it would mean a bunch of instances wouldn't need to maintain repos individually [18:38:44] but would pull from a single source [18:39:00] Yes, I can see that is an advantage. [18:39:21] But each project would still have to deal with merge conflicts when updating the puppet remote branch from master [18:39:32] also, it would mean the instances wouldn't need to run a puppet master, which is less memory usage [18:39:41] apmon: yep. that's unavoidable, though [18:39:53] at least they'd need to only deal with it once, rather than on each instance :) [18:40:11] Does role::puppet::self not already do that? 
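The environment switch Ryan describes ("change your config to use a different environment") might look like this on an instance; a sketch only, using the standard Puppet agent setting, with a made-up branch name:

```ini
; /etc/puppet/puppet.conf on the instance (sketch; the environment
; name would match a remote branch in the gerrit puppet repo)
[agent]
environment = my_feature_branch
```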
and ideally remote branches would be used to get a feature up to snuff, and we'd merge it into master [18:40:34] At least the help suggests, one can point all instances of your project to a single puppetmaster::self [18:40:36] apmon: nope [18:40:39] oh [18:40:40] right [18:40:41] that's new [18:46:15] apmon: Great, thanks [18:46:57] Ryan_Lane, do you have a salt query you can run to see which labs instances are still nonresponsive? [18:48:30] well, only kind of [18:49:08] salt-run manage.up [18:49:15] ^^ that'll show you the instances that are working [18:49:48] which is currently 305 instances [18:50:55] out of 396… that doesn't seem right :( [18:52:01] let me do a salt-restart via dsh [18:53:34] I have a dsh I wrote in python that pulls instance info from wikitech [18:53:41] via SMW [18:54:50] heh. now only 300 report? [18:54:51] damn it [18:55:25] Is that 396 current, or does it update on an infrequent batch? [18:55:29] 305 reporting again [18:55:36] Because lots of instances were deleted in the last day. [18:55:41] lemme see [18:55:42] well, lots = ~10 [18:55:57] it's accurate [18:56:04] I purged the main page [18:56:17] unless deletions aren't updating the MW pages [18:56:20] I have about a dozen instances which I can't access, and two which I haven't updated yet but all the rest should be running recent puppet [18:56:34] they were last I checked [18:56:47] any specific instance for me to check? [18:59:21] nevermind I have a list that are showing the old salt master_finger [18:59:22] paravoid you are a DD, right? [18:59:26] I'll check them out [18:59:35] back in a bit. lunch [18:59:36] paravoid I wanted to know something, for a long time [19:00:25] paravoid: is there anything, some place which someone who eventually wants to help out / participate in the Debian project should check out to find out how? joining the Debian community has always seemed quite hard to me [19:47:47] andrewbogott_afk: ah. 
for a number of them it's because puppet fails to run [19:48:04] for instance, on asher-m1: err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find class generic::packages::git-core for i-0000066c.pmtpa.wmflabs on node i-0000066c.pmtpa.wmflabs [20:26:35] Ryan_Lane, want to email me the list of saltless instances? [20:26:40] or, I guess, old-salt instances? [20:30:57] andrewbogott: well, I'm writing a maintenance script to modify puppet classes and variables [20:31:12] for most of these it's likely that the class they are using simply doesn't exist anymore [20:31:27] so just removing it will let puppet run [20:31:53] so your script will purge obsolete classes from the node definitions? [20:34:15] well, not really [20:34:26] because there's no way of knowing whether it is or not [20:34:38] it'll display a list of classes and variables and let you delete them [20:34:45] well, delete specified ones [20:37:12] I guess that won't be /too/ tedious [20:43:54] yeah [20:44:17] it would be nice if we could automatically determine which classes/variables were available [20:49:04] Hm… the doc generator must know that information in order to set up the site properly. [20:49:24] Hacky, but doing some sort of find + sed command would probably get us the list of classes [20:56:17] I can't ssh to bots-4 from bastion... in fact I'm getting permission denied publickey on almost everything. I'm using PuTTY and I enabled agent forwarding. What is the issue? [21:13:29] sdamashek: I can connect to bots-4, so most likely something is off on your end [21:15:23] andrewbogott: any ideas? [21:15:35] I put the private key in [21:15:42] public key* [21:15:42] I don't use windows so don't have a lot of insight about putty. [21:16:01] Have you looked at our docs already? [21:16:07] how often does puppet update keys? Could that be it? 
And yes [21:16:26] Puppet runs every 30 [21:16:57] well that's not it then, I updated my key maybe 3 weeks ago [21:17:43] I take it you were never able to get access beyond bastion? [21:17:49] yeah [21:18:51] figured it out [21:18:56] I didn't have pageant installed [21:19:59] cool [21:43:50] Coren: how to login on tools.wmflabs.org ? [21:44:18] Coren: eh, i mean shell :) [21:44:39] You can't. You probably want tools-login.wmflabs.org instead. :-) [21:44:46] tools. is the web server. [21:44:59] i would like to make a change in the docroot [21:45:02] and replace the favicon [21:45:05] if you dont mind [21:45:16] unless it's puppetized [21:45:55] The webservers aren't yet. I'll have to put it in place myself (or petan). If you put it somewhere in your home, though, I'll do it. [21:45:59] mv https://gerrit.wikimedia.org/r/#/c/68291/1/docroot/bits/favicon/toollabs.ico http://tools.wmflabs.org/favicon.ico :) [21:46:08] <-- this is all we want:) [21:46:32] i told odder to put the files into docroot/bits/favicon/ for now, so they are somewhere [21:46:41] and it has favicons for all the other projects [21:46:52] Sounds reasonable. :-) [21:46:58] i also replaced https://wikitech.wikimedia.org/favicon.ico [21:48:04] it fixes https://bugzilla.wikimedia.org/show_bug.cgi?id=49351 [21:50:44] Coren: So wikidatawiki_p is currently replag'd by 11 hours…. [21:50:53] Is there a tool somewhere tracking replag? [21:52:09] legoktm: Not as far as I know; but 11h really surprises me. [21:52:25] mutante: {{done}}. I don't see a noticeable difference with Chrome though. [21:53:05] Ah, forced reload did it. [22:04:00] Coren: thanks, odder will love it [22:13:44] andrewbogott_afk: oh, openstack in folsom has a feature you'd like [22:13:59] andrewbogott_afk: it's an api call to rebuild an instance [22:15:12] I need to test the feature first, of course [22:15:20] but adding it to the interface should be easy [22:19:25] andrewbogott_afk: ok. 
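There was no replag tracker at the time, but one common estimate is the age of the replica's newest recentchanges row. A hedged sketch, not any existing tool: the SQL in the docstring is an assumption about how you'd fetch the value, and only the timestamp arithmetic is shown here.

```python
from datetime import datetime

def replag_seconds(max_rc_timestamp, now=None):
    """Estimate replication lag from the replica's newest
    recentchanges row. max_rc_timestamp is MediaWiki's YYYYMMDDHHMMSS
    (UTC) format; on a real replica you might fetch it with something
    like SELECT MAX(rc_timestamp) FROM recentchanges."""
    newest = datetime.strptime(max_rc_timestamp, "%Y%m%d%H%M%S")
    if now is None:
        now = datetime.utcnow()
    return (now - newest).total_seconds()
```

For the 11-hour lag legoktm mentions, a newest timestamp of 12:00 against a clock reading 23:00 yields 39600 seconds.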
I added a maintenance script on virt0 [22:19:34] I need to push it into gerrit, but it's working there now [22:19:47] cd /srv/org/wikimedia/controller/wikis/slot0/extensions/OpenStackManager/maintenance [22:20:53] php puppetValues.php --instance='i-0000066c' [22:21:06] php puppetValues.php --instance='i-0000066c' —delete-class=0 [22:22:17] php puppetValues.php --instance='i-0000066c' --delete-var=ssh_x11_forwarding [22:38:26] Cyberpower678: or TParis around? [22:48:43] Ryan_Lane, mutante: my labs permissions (DarTar) apparently are still broken [22:48:55] in which way? [22:49:11] drdee is trying to add me to the list of users with access to limn0 [22:49:18] and he can't [22:49:25] mutant had the same problem a while ago [22:49:34] >mutante [22:49:44] he's trying to add you to the project? [22:49:48] what's your wikitech username? [22:49:49] yep [22:49:54] DarTar [22:51:10] you're already in the analytics project [22:51:46] I am, but if I try to ssh into limn0 I get a Permission denied [22:52:50] DarTar: can you try? I'm tailing the logs [22:56:55] paravoid: https://wikitech.wikimedia.org/wiki/Nova_Resource:I-000001c8 do you need this instance? [22:57:15] yes [22:57:24] can you get puppet working on it? [22:57:44] basic commands seem to hang for me on it [22:57:46] on second thought [22:57:50] it's lucid [22:57:55] ps -ef hangs [22:58:25] deleted [22:58:34] thanks :) [22:58:50] do-release-upgrade via salt ?:) [22:59:00] not in labs [22:59:12] that'll just make sure you can never boot the instance again [22:59:19] ok, heh [22:59:26] i remember [22:59:37] (it also eats up shitloads of disk space, so I never plan on fixing that) [23:00:11] yeah no point either [23:00:31] this is (was) staging for swift, and the current production swift nodes were pristine installs [23:01:34] heh [23:39:32] Coren: Could you add Bgwhite to the Tools project? [23:40:37] Ryan_Lane: Could you add Bgwhite to the Tools project? [23:41:03] sure. 
one sec [23:41:53] done [23:42:47] Warning: There is 1 user waiting for access to tools project: Bgwhite (waiting 0 minutes) [23:43:15] Ryan_Lane: Thanks. [23:43:39] yw