[06:42:55] cgi-bin in tools-lab doesn't seem to work. I added a simple Python CGI script as index.cgi with chmod +x, but it does not work.
[06:43:26] what I receive is
[06:43:27] Four hundred and four!
[06:44:19] rohit-dua: cgi-bin folder is a "legacy" folder from the old apache setup
[06:44:50] hedonil: ok, how do I run Python CGI?
[06:45:12] put everything in public_html or subfolders
[06:45:38] will .cgi work inside public_html?
[06:46:09] if you name a file foo.py it will work out of the box
[06:48:48] rohit-dua: if you want to execute CGI files with other extensions, you have to configure them
[06:48:49] https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help#Web_services
[06:50:43] hedonil: thank you, :-)
[06:51:00] rohit-dua: my pleasure ;)
[13:30:21] Wikimedia Labs / tools: Tool Labs: Provide anonymized view of the user_properties table - https://bugzilla.wikimedia.org/58196#c36 (Liangent) (In reply to Luis Villa (WMF Legal) from comment #34) > Sorry for the slow response, Liangent - email slipped through the cracks for > some reason. Q: what's the...
[13:44:34] hello Coren
[13:49:00] the crontab fixer is stupid. I use a wrapper script for a jstart call and was using the wrapper script directly in crontab, but it now prefixes it with /usr/bin/jsub :(
[13:49:55] liangent: that shouldn't be a huge issue, right? It just takes slightly longer to start.
[13:52:01] valhallasw: yeah, and there's another thing: arguments need to be escaped (again) to work around https://bugzilla.wikimedia.org/show_bug.cgi?id=48811
[13:52:54] huh? your wrapper script takes arguments?
[13:58:06] valhallasw: not the same problem, but this can be an issue. and in my case there were https://bugzilla.wikimedia.org/show_bug.cgi?id=48811 hits and I didn't use SGE for then yet. wrapper script is used in some other lines
[13:58:14] *for them
[13:58:44] liangent: right, so build a wrapper script for those?
[13:59:01] that's what the crontab fixer should do, actually, IMO
[14:00:11] crontab fixer should create wrapper scripts based on crontab lines, and replace those lines with "jsub wrapper-script-1.sh"?
[14:01:06] yeah
[14:11:26] actually .. ssh tools-submit /usr/bin/crontab my.crontab.file bypasses the fixer
[14:11:53] yes, and then Coren will find you and hit you with a stick :-p
[14:35:32] curl http://tools.wmflabs.org/xxx doesn't work in Tool Labs
[14:35:52] curl http://tools-webproxy.eqiad.wmflabs/xxx works, but I'm not sure whether this will keep working in the future
[14:37:11] liangent: https://bugzilla.wikimedia.org/show_bug.cgi?id=54052
[14:37:53] valhallasw: thanks
[16:10:53] Wikimedia Labs: [Regression] wikitech.wikimedia.org is sending empty Echo notification emails - https://bugzilla.wikimedia.org/53778#c3 (Maarten Dammers) This is fairly annoying. It should at least contain the message you see when you sign in to Wikitech.
[16:38:18] Since I don't know where else to ask either: in which Bugzilla category would a bug in "Javascript-enhanced contributions lookup 0.2" fall?
[16:58:52] Hi. When I run "crontab -e" I get this warning: "You (tools.ralgisbot) are not allowed to use this program (crontab)".
[16:59:22] I've never had this problem before.
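A sketch of the wrapper-script approach floated at 14:00 above: the file name wrapper-script-1.sh comes from that message, while the tool name, paths and bot arguments are invented placeholders, and the crontab line assumes the usual Tool Labs setup of a crontab on tools-submit with jobs submitted through /usr/bin/jsub.

    #!/bin/bash
    # wrapper-script-1.sh -- keeps the real command line (and its quoting/escaping)
    # in one place, so the crontab entry stays a single, argument-free call
    exec php /data/project/mytool/bot/update.php --lang=en --page 'Foo (bar)'

    # matching crontab entry on tools-submit (the fixer, or you, prefixes it with jsub):
    0 * * * * /usr/bin/jsub /data/project/mytool/wrapper-script-1.sh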
[16:59:51] the crontab system was recently changed
[16:59:58] but a tool should be able to edit crontabs
[17:00:08] (users are no longer allowed to have a crontab)
[17:01:28] siglar: For some reason, your PATH is hitting /usr/bin before /usr/local/bin
[17:01:44] siglar: Use /usr/local/bin/crontab explicitly, or just 'xcrontab'
[17:02:36] Beetstra: What's the URL?
[17:02:38] valhallasw: And no, I'd only use the stick on people who circumvent the edit script to add non-grid jobs. It's meant to provide a quick fix to newbies and/or users who don't read documentation.
[17:02:54] Coren: ah :-p
[17:02:59] Thank you, Coren. xcrontab worked :)
[17:08:51] Wikimedia Labs / tools: Tool Labs: Provide anonymized view of the user_properties table - https://bugzilla.wikimedia.org/58196#c37 (Nemo) Small/medium/big wikis have 10/100/1000 thousands total pages or more. You can see the lists at https://noc.wikimedia.org/conf/ The lists were originally made as rou...
[17:23:43] scfc_de: which URL?
[17:24:33] scfc_de: do you mean the wiki URL: https://en.wikipedia.org/wiki/Special:Contributions/198.24.31.121/16 <- does not work correctly (make sure you have the JavaScript lookup enabled)
[17:30:16] Beetstra: I don't see an error on that page, and I can't find "Javascript-enhanced contributions lookup" on https://en.wikipedia.org/wiki/Special:Preferences#mw-prefsection-gadgets. Is that "Allow /16, /24 and /27 – /32 CIDR ranges on Special:Contributions forms"?
[17:30:51] The error is that there are 2 IPs in that range who were active in 2014 .. but you don't see them ..
[17:31:16] scfc_de: yes, I think that that is the gadget thingy
[17:32:51] Beetstra: Ah, okay. Well: don't know if that is an error :-). Preferences says "Please report any issues here." with here = https://en.wikipedia.org/wiki/MediaWiki_talk:Gadget-contribsrange.js#Update_.28October_2008.29
[17:33:25] Yeah, I did not see that until now .. (filed the bug already) ..
[17:33:34] I think line 49 needs a '+1'
[17:34:15] scfc_de: https://en.wikipedia.org/wiki/MediaWiki_talk:Gadget-contribsrange.js#IP_wildcards_and_CIDR_ignoring_an_address
[17:34:21] but not properly fixed, apparently
[17:45:51] Coren: new proxy is in place. How about da logs? I wanna eat 'em up ;)
[18:18:48] hedonil: https://wikitech.wikimedia.org/wiki/User_talk:Petrb#m:toollabs:awstats
[18:19:04] (But I think there's still a bug of yours open?)
[18:20:16] scfc_de: yeah. I want to assimilate them => one entity :-)
[18:22:21] https://bugzilla.wikimedia.org/show_bug.cgi?id=59222
[18:30:36] Wikimedia Labs: Request to access redacted webproxy logfiles of (Tool) Labs - https://bugzilla.wikimedia.org/59222#c4 (metatron) Now that the new YuviProxy is in place, I just need access to logdumps (IPs stripped off). sed & awk will do the rest of the job.
[18:54:41] @replag
[18:54:41] Replication lag is approximately 00:00:00.5231610
[18:55:01] atomic precision
[18:55:12] Approximately, with 7 digits after the decimal point. Awesome
[18:55:18] hehe
[18:56:05] Wikimedia Labs / (other): Database dewiki_p on dewiki.labsdb: views, hashes and links broken - https://bugzilla.wikimedia.org/55708#c4 (Tim Landscheidt) (In reply to Andre Klapper from comment #3) > metatron: Is this still a problem or can this ticket be closed nowadays? Still around: | MariaDB [dewik...
[18:59:02] Reminds me of when German news media replay reports from across the pond: "The cyclist flew over 100 feet (30.48 meters) through the air." :-)
[19:14:24] Hi.
Could someone re-label this form: https://wikitech.wikimedia.org/w/index.php?title=Special:NovaServiceGroup&action=addservicegroup&projectname=tools
[19:14:41] It should say add tool, not group.
[19:14:55] Greetings, can someone help me with a bot on one of our channels? Thanks.
[19:15:01] It's confusing otherwise.
[19:15:06] Eccenux: except the labs terminology is 'group' and not 'tool'.
[19:15:16] ('service group', specifically)
[19:16:19] Yeah, but the group is "tool." and it says "Create New Tool" in the link
[19:17:06] Only on tool labs -- it's actually called .. But even then, I think it makes sense to have a function to use TL terminology instead.
[19:19:19] +1. Needs to be added to Extension:OpenStackManager, though, so no trivial change.
[19:19:53] Ah, I see where this came from. Still, it adds more than just a group. At least in terms of Tool Labs.
[19:20:53] scfc_de: or just using site JavaScript
[19:27:49] now who is a sysop at wikitech...
[19:28:39] Coren: could the contents of https://wikitech.wikimedia.org/wiki/User:Merlijn_van_Deen/common.js be added to the central common.js?
[19:29:58] scfc_de ^ :-)
[19:45:15] valhallasw: No can do, I'm just a contentadmin :-).
[21:14:49] oh, Coren, https://gerrit.wikimedia.org/r/#/c/102721/ ?
[21:28:41] Is there replication lag or something? There seems to be more missing revision data than usual when checking edits
[21:30:54] Damianz: Hmm, some minor wikis have a certain lag, according to Betacommand's tool https://tools.wmflabs.org/betacommand-dev/cgi-bin/replag
[21:31:09] enwiki looks alright though
[21:37:00] hi, I have some problem with "webservice start"; it always worked before today
[21:37:44] "webservice status" says: "Your webservice is scheduled: queue instance "continuous@tools-exec-06.eqiad.wmflabs" dropped because it is temporarily not available"
[21:38:13] what can I do?
[21:39:03] rotpunkt: grid engine seems to be confused ;) wrong queue
[21:39:11] rotpunkt: try again
[21:39:34] ok thanks!
[21:42:49] Hmm... definitely seem to be missing things that should be there *digs into the database*
[21:49:16] * Damianz wonders if Draft: is a namespace on enwiki... which should explain this one
[21:55:52] Wikimedia Labs: [Regression] wikitech.wikimedia.org is sending empty Echo notification emails - https://bugzilla.wikimedia.org/53778 (Tim Landscheidt) a: Ryan Lane>None
[21:56:05] Wikimedia Labs: [Regression] wikitech.wikimedia.org is sending empty Echo notification emails - https://bugzilla.wikimedia.org/53778#c4 (Tim Landscheidt) With Ryan being elsewhere, unassigning for the moment.
[21:56:20] Wikimedia Labs: [Regression] wikitech.wikimedia.org is sending empty Echo notification emails - https://bugzilla.wikimedia.org/53778 (Tim Landscheidt) ASS>NEW
[22:05:33] springle, ping
[22:05:41] Coren, ping
[22:06:21] * Cyberpower678 is getting impatient.
[22:17:21] Wikimedia Labs / (other): archive_userindex on replication server not indexed well. Takes 10s of seconds to execute a query. - https://bugzilla.wikimedia.org/63777#c1 (Cyberpower678) p: Unprio>High s: normal>major BUMP. I opened this almost a month ago with no initial comments. Revision...
[22:18:50] Wikimedia Labs / (other): archive_userindex on replication server not indexed well. Takes 10s of seconds to execute a query. - https://bugzilla.wikimedia.org/63777 (Cyberpower678) s: major>normal
[22:52:04] hedonil: you there?
[22:52:16] Betacommand, he is.
[22:52:21] Betacommand: yep
[22:52:49] hedonil: my tool just shows the time since the last recorded edit on that wiki to calculate lag
[22:53:13] i.e. the most recent item in the rc table
[22:53:25] is this a bad thing? ;)
[22:53:29] not 100% accurate for small wikis
[22:53:34] hehe
[22:53:44] it's a nice tool
[22:54:01] some graphics would make it even more wonderful ;)
[22:54:05] that's why it may show high values for some of the smaller wikis while it's not an issue
[22:54:15] ahh ok
[22:54:35] Wikimedia Labs / tools: Tool Labs: Provide anonymized view of the user_properties table - https://bugzilla.wikimedia.org/58196#c38 (Krinkle) Changing the timestamp in this table based on wiki size would be a mistake in my opinion. It would make the data too arbitrary and hard to maintain. It also proba...
[22:55:23] e.g. jvwiktionary has sparse edits and shows an inaccurately high value
[22:56:44] hedonil: I'm not big into graphics; if you use the non-HTTPS version you can sort the table
[22:57:04] loading via HTTPS breaks that JS
[22:58:16] * Betacommand reminds self to poke coren about protocol-relative programming
[23:00:45] Betacommand: maybe if you omit the protocol (http:) this will fix it? https://tools.wmflabs.org/paste/view/6a6b533e
[23:02:09] Cyberpower678: that's the output from the optimizer
[23:02:12] https://tools.wmflabs.org/paste/view/b698b560
[23:02:32] a little bit screwed
[23:03:46] Hmm...
[23:03:57] but anyway... the problem is, the MySQL optimizer uses index `usertext_timestamp` as the preferred index
[23:05:58] Re replag, there should be graphics at ganglia.wmflabs.org (though the reporting script was probably disabled by the crontab change).
[23:08:43] hedonil: There are different tables in the replicas and I'm not sure if you can assume the same results for them (though I don't have enough MySQL knowledge to understand why you need different tables for differently indexed queries in the first place; I would assume that you just use one table with two indexes).
[23:09:31] scfc_de: MySQL only chooses /one/ index per query (as a rule of thumb)
[23:09:37] https://tools.wmflabs.org/paste/view/e9906b7a
[23:10:19] scfc_de, and the query we use uses the wrong index. Hence rev_user and ar_user being slow on revision_userindex and archive_userindex.
[23:11:01] hedonil: I've made a few tweaks, not sure when I'll have time to push the changes live
[23:11:09] scfc_de, if we could tell the DB what index to use when we query, it would solve our problems, but it won't let us.
[23:11:12] hedonil: Sure; but we have the views/tables revision and revision_userindex. I would assume that you just have one table revision with different indexes and MySQL figures out on its own which index to use depending on the clauses you provide.
[23:12:50] Cyberpower678: can you specify the indexed field early in the query? It should force it to use that index
[23:13:13] Betacommand, apparently that doesn't work on views like the replication DBs.
[23:13:38] It's really slowing down the edit counter.
[23:13:44] Cyberpower678: it should, I know it worked like that on the TS
[23:13:59] MBisanz's stats loaded in 114 seconds. That is excessively high.
[23:14:13] * Cyberpower678 is beginning to miss the Toolserver.
[23:14:31] darn
[23:14:46] hedonil1, need scrollback?
[23:15:10] Hmm, don't think so
[23:15:25] What was the last thing you got?
[23:15:41] there are ways (my last words)
[23:15:44] heh
[23:16:50] scfc_de, I see you added yourself to the CC list of the bugzilla I filed.
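One way to reproduce the rc-table lag estimate Betacommand describes at 22:52 (a reconstruction, not his actual code): take the age of the newest recentchanges row on the replica. The wiki name is only an example, and, as noted above, the number overstates the lag on wikis with few recent edits.

    -- seconds since the last edit recorded on the replica
    SELECT TIMESTAMPDIFF(SECOND,
                         MAX(STR_TO_DATE(rc_timestamp, '%Y%m%d%H%i%s')),
                         UTC_TIMESTAMP()) AS seconds_since_last_edit
    FROM jvwiktionary_p.recentchanges;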
[23:16:58] 1st thing was to make sure the underlying index is accessible in the view revision_userindex
[23:17:02] https://tools.wmflabs.org/paste/view/ac820c40
[23:17:11] which is ok
[23:17:49] 2nd thing is to force the MySQL optimizer to /not/ use this index by default
[23:18:10] which can be accomplished in two ways
[23:18:43] 1. access the table directly and use a USE INDEX(foo) statement
[23:18:55] -> can't be done
[23:19:52] 2. rewrite the query (add a join, leave something out), so the MySQL optimizer changes its mind
[23:20:57] I'll try something on my optimizer replica database "u3710__enwiki_optimizer_p" on enwiki.labsdb
[23:21:12] hedonil1, rewrite the query how?
[23:22:00] that's the point of the matter ;-)
[23:22:31] as said, add a random join on something additional
[23:22:46] or something like that
[23:22:49] Wikimedia Labs / Infrastructure: install ExpandTemplates mediawiki extension @ wikitech - https://bugzilla.wikimedia.org/53935#c1 (Derk-Jan Hartman) NEW>RES/WOR ExpandTemplates is in core now, so this can be closed.
[23:23:23] * Cyberpower678 wants to know whose idea it was to design the replication DBs like this.
[23:23:36] !newlabs
[23:23:36] The tools project in labs is another version of the Toolserver.  Because people wanted it just like the Toolserver, an effort was made to create an almost identical environment.  Now users can enjoy replication, similar commands, and bear the burden of instabilities, just like the Toolserver.
[23:24:01] That should be rewritten.
[23:25:06] well, it's just... there are many indices on the revision table
[23:25:33] and our ol' friend OPTIMIZER is choosing one for us :-)
[23:26:10] To hell with the optimizer. Things in this case would seem to work better without it.
[23:26:40] if one could use the table directly ....
[23:26:52] For the most part, the queries in the bug report return for me in <= 0.01 s.
[23:27:14] scfc_de, which table?
[23:27:19] scfc_de: in our case it's like 20 seconds
[23:27:24] Or more
[23:28:07] SELECT COUNT(*) AS count FROM revision_userindex WHERE `rev_user_text` = 'Tim.landscheidt';
[23:28:15] SELECT COUNT(*) AS count FROM archive_userindex WHERE `ar_user_text` = 'Hedonil';
[23:28:19] enwiki_p
[23:28:39] /not/ rev_user_text !
[23:28:44] scfc_de, ok. That's the fast one. It's also the one we /don't/ want to use.
[23:28:44] that's the point
[23:28:55] We're trying to use rev_user
[23:28:59] and ar_user
[23:29:27] Didn't you write in the bug report that you specifically did *not* want to use rev_user?
[23:29:45] Ah, okay, read again.
[23:29:51] `rev_user_text` == KEY `usertext_timestamp` (`rev_user_text`,`rev_timestamp`,`rev_user`,`rev_deleted`,`rev_minor_edit`,`rev_text_id`,`rev_comment`)
[23:30:09] that's what we are talking about
[23:30:50] Wikimedia Labs / (other): archive_userindex on replication server not indexed well. Takes 10s of seconds to execute a query. - https://bugzilla.wikimedia.org/63777#c2 (Cyberpower678) For clarity: I'm not trying to use rev_user_text or ar_user_text.
[23:31:20] we want to get the optimizer to use this index: KEY `user_timestamp` (`rev_user`,`rev_timestamp`)
[23:31:22] scfc_de, I wrote a clarification on the bug report. I could've written the initial statement better.
[23:32:13] Cyberpower678: The clarification was in the initial statement; I had just assumed that the two queries were the same and only repeated to show the effects of caching.
[23:32:21] hedonil1, can you inspect archive's indexing keys?
[23:32:42] hedonil1: Are you sure that index /actually/ exists on the replicas?
[23:32:44] scfc_de, ok. :-)
[23:33:13] Cyberpower678: I can check the view structure (as I did before)
[23:33:19] scfc_de: yep
[23:33:25] hedonil1, please do.
[23:33:40] https://tools.wmflabs.org/paste/view/e9906b7a <-- that's the running schema
[23:34:32] hedonil1: How do you get that?
[23:34:43] mysqldump
[23:35:13] hedonil1, can you dump archive as well?
[23:35:48] Cyberpower678: https://tools.wmflabs.org/tools-info/schemas.php?schema=enwiki
[23:37:01] hedonil1, PRIMARY KEY (`ar_id`),
[23:37:01] KEY `name_title_timestamp` (`ar_namespace`,`ar_title`,`ar_timestamp`),
[23:37:01] KEY `usertext_timestamp` (`ar_user_text`,`ar_timestamp`),
[23:37:01] KEY `ar_revid` (`ar_rev_id`)
[23:37:30] hedonil1, that explains why archive is total shit with ar_user queries.
[23:37:48] yeah
[23:38:03] no index for that :/
[23:38:27] hedonil1: "mysqldump: Got error: 1044: "Access denied for user 'u2267'@'%' to database 'enwiki_p'" when using LOCK TABLES"
[23:38:40] not even a preferably /not/ chosen one ;)
[23:38:56] scfc_de: ignore errors
[23:39:43] scfc_de: https://tools.wmflabs.org/tools-info/misc/optimizer.sh.txt
[23:39:44] hedonil1: Ah!
[23:40:07] dump=$(mysqldump --defaults-file=replica.my.cnf -h $host --no-data \ --lock-tables=false --ignore-table="${source_db}._counters" --force $source_db 2>/dev/null | \ sed -e "s/AUTO_INCREMENT[=0-9]*//g" -e "/rc_source\|rc_moved_to_ns\|rc_moved_to_title/d" )
[23:41:18] ... and no locks
[23:41:49] brb
[23:43:54] hedonil1: But where are the indexes? I don't see any in "mysqldump -f -h enwiki.labsdb --no-data enwiki_p | less", and https://tools.wmflabs.org/tools-info/schemas.php?schema=enwiki doesn't mention revision_userindex?
[23:44:30] Indices are shown as KEY ...
[23:45:19] On schemas.php, yes. But where does that come from?
[23:45:46] scfc_de: hmm, lemme look
[23:46:19] maybe I added an option for the schema views on tools-info...
[23:46:29] (The replicas are a black box for me, and thus I don't trust them a bit :-).)
[23:47:30] hedonil1: Could it be that you added the indexes yourself after MediaWiki's tables.sql master?
[23:47:46] :-) no
[23:49:39] scfc_de: but for the quick look
[23:49:51] 1. mysql --defaults-file=replica.my.cnf -h enwiki.labsdb
[23:49:59] 2. use enwiki
[23:50:17] (not the _p version !)
[23:50:35] 3. show create table revision;
[23:52:19] which gives you something like this: https://tools.wmflabs.org/paste/view/70a83f78
[23:53:12] Yep. I also ran optimizer.sh, and it dumped the KEY statements to *.sql, so they are in the DB.
[23:53:38] (And revision_userindex doesn't exist in enwiki, but only in enwiki_p, I assume.)
[23:53:56] scfc_de: yeah, it's just a view
[23:55:49] the only thing that matters in the views is that whenever there's a conditional (IF foo), you can't access the index in the underlying table
[23:55:50] https://tools.wmflabs.org/paste/view/ac820c40
[23:56:01] but that is all ok
[23:58:43] Yeah, had forgotten that it's defined at http://git.wikimedia.org/blob/operations%2Fsoftware.git/master/maintain-replicas%2Fmaintain-replicas.pl
[23:59:21] the job is to rewrite the query to convince the optimizer to not use the one index, but the other
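To make that closing point concrete: index hints (option 1 above) are not available through the _p views, so the only lever left is the shape of the query itself (option 2). Below is a rough, untested sketch for the revision case with a placeholder user id; whether a particular rewrite actually flips the optimizer over to user_timestamp has to be verified with EXPLAIN on the replica, as in the paste links above.

    -- check which index the optimizer picks for the plain query
    EXPLAIN SELECT COUNT(*) AS count
    FROM revision_userindex
    WHERE rev_user = 12345;

    -- one possible rewrite: also constrain rev_timestamp, the second column of the
    -- desired user_timestamp (rev_user, rev_timestamp) index; the lower bound is
    -- chosen so low that it does not change the result
    SELECT COUNT(*) AS count
    FROM revision_userindex
    WHERE rev_user = 12345
      AND rev_timestamp >= '19700101000000';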