[00:10:53] ToAruShiroiNeko
[00:29:59] Admin?
[00:31:33] Admin for what?
[00:33:03] to change a file's owner
[00:35:50] change owner from ua31 to local-elvisor on the files in the elvisor resource
[00:36:09] did you try "take"?
[00:36:44] if you "become elvisor" you should be able to "take filename"
[00:37:00] Yes, but it only takes some files
[00:37:40] And the files are in a subdirectory
[00:56:14] Technical_13
[00:56:50] ?
[00:57:30] Can you change owner from ua31 to local-elvisor on the files in the elvisor resource?
[00:57:46] Me? No.
[00:58:36] Not yet, anyway.
[01:00:01] Any admin online?
[01:04:47] UA31_: chown -R local-elvisor:local-elvisor /data/project/elvisor/
[01:07:38] Operation not permitted
[01:10:23] UA31_: why doesn't `take` work?
[01:11:56] it only takes some files
[01:12:01] And the files are in a subdirectory
[01:12:14] then just take the appropriate files/directories.
[01:14:14] it only takes some files
[01:14:57] I really don't understand what you're saying.
[01:16:03] Are you an admin?
[01:16:29] not on Labs...
[01:24:19] .......
[01:33:51] Coren|Away, ping
[01:34:10] Yes?
[01:35:41] Something weird is going on. Every time Cyberbot attempts to query the externallinks table of enwiki, it gets an error, crashes, and restarts. What's causing it? I haven't modified any code that could cause this; the problem came up on its own.
[01:35:52] Coren|Away, ^
[01:36:04] What's the error?
[01:36:32] PHP Notice: Undefined offset: 1 in /data/project/cyberbot/bots/cyberbot-ii/externallinks.php on line 23
[01:36:32] PHP Warning: mysql_connect(): Can't connect to MySQL server on 'enwiki.labsdb' (4) in /data/project/cyberbot/Peachy/Plugins/database/MySQL.php on line 65
[01:36:32] PHP Warning: mysql_query() expects parameter 2 to be resource, boolean given in /data/project/cyberbot/Peachy/Plugins/database/MySQL.php on line 32
[01:36:32] PHP Fatal error: Uncaught exception 'DBError' with message 'Database Error: (code 0) SELECT COUNT(*) AS count FROM externallinks ' in /data/project/cyberbot/Peachy/Plugins/database.php:155
[01:36:35] Stack trace:
[01:36:36] #0 /data/project/cyberbot/Peachy/Plugins/database.php(297): DatabaseBase->query(' SELECT COUNT(*...')
[01:36:39] #1 [internal function]: DatabaseBase->select('externallinks', 'COUNT(*) AS cou...')
[01:36:43] #2 /data/project/cyberbot/Peachy/Plugins/database.php(807): call_user_func_array(Array, Array)
[01:36:47] #3 /data/project/cyberbot/bots/cyberbot-ii/externallinks.php(143): Database->__call('select', Array)
[01:36:50] #4 /data/project/cyberbot/bots/cyberbot-ii/externallinks.php(143): Database->select('externallinks', 'COUNT(*) AS cou...')
[01:36:53] #5 {main}
[01:36:55] thrown in /data/project/cyberbot/Peachy/Plugins/database.php on line 155
[01:37:25] Please use pastebin for large stack traces.
[01:37:48] Hm.
[01:39:15] Apparently, your bot is having trouble connecting to the databases. Have you tried stopping it entirely and restarting it? If it was a transient error, it may not be clearing state completely between attempts.
[01:40:05] Well, when it crashes, since it's set to run continuously, it completely restarts, does it not?
[01:40:40] ... it should; the only things preserved would be the environment and the parent process.
[01:41:19] So would stopping it and restarting it make a difference?
[01:41:34] That only gets restarted if the compute node is disabled, or if you stop it manually -- but I don't know what could be in the environment that could cause networking issues.
[01:41:39] What is the job number?
[01:42:12] 889719
[01:43:07] Hm.
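The "Can't connect to MySQL server on 'enwiki.labsdb' (4)" warning above is a connection timeout, i.e. the kind of transient failure Coren suspects. Cyberbot itself is PHP, but as a minimal sketch of riding out such timeouts instead of crashing, here is what a retry loop could look like in Python; the pymysql module, the retry parameters, and the credentials path are illustrative assumptions:

    import os.path
    import time
    import pymysql  # assumed client library; any MySQL driver with a similar API would do

    def connect_with_retry(host, attempts=5, delay=2):
        # Retry transient connection timeouts like the one in the log above,
        # rather than letting the whole job crash and restart.
        for attempt in range(1, attempts + 1):
            try:
                return pymysql.connect(
                    host=host,
                    read_default_file=os.path.expanduser("~/replica.my.cnf"),
                )
            except pymysql.err.OperationalError:
                if attempt == attempts:
                    raise  # still failing after several tries: treat as a real outage
                time.sleep(delay * attempt)  # back off a little longer each time

    conn = connect_with_retry("enwiki.labsdb")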
You don't seem to be tight on memory either, so that wouldn't be why the bot fails to connect to the database.
[01:43:21] * Coren|Away thinks.
[01:43:42] Well, I took memory-conservation measures for efficient processing.
[01:44:28] Yeah, I was looking for perhaps the actual connection to the database trying to allocate memory and failing, but you've got >100M of elbow room, so that's very unlikely.
[01:45:27] Should I try to restart the program?
[01:45:58] Probably more instructive would be to figure out /why/ it fails at all. Hm. Give me a minute, I'll strace the actual process.
[01:49:19] That's odd. They are actually getting timeouts trying to connect to the databases.
[01:49:46] Try restarting it entirely? Beyond something in the environment, I admit I'm a little confused.
[01:51:22] Program restarted. It was able to connect to the local database, though.
[01:53:08] Is the database maybe going away?
[01:53:42] Coren|Away, ^
[01:54:07] No, I see lots of active connections churning away without trouble.
[01:54:42] It'll take a few hours before we know if it works.
[02:02:52] Coren|Away: got a moment?
[02:03:11] Earwig: A little one. What's up?
[02:05:05] Coren|Away: https://dpaste.de/L4ww2/raw/ - basically, Python's oursql module seems to be doing queries 30-75x slower than the TS
[02:05:09] any ideas?
[02:07:07] Given that the labs DB is known to be several orders of magnitude faster than the TS's, that's a little surprising; perhaps you're drowning the actual query time in transmission between datacenters, though. Have you tried making a query that doesn't involve transmission of a large result set, to see where the bottleneck is?
[02:07:51] well, yes - it definitely seems to be in the transmission of results, not the query's complexity itself
[02:11:32] Right now, labs runs in Tampa while the database is in Ashburn; while the latency is very good, the bandwidth isn't infinite. I would expect that large result sets might be slower than on the TS (whereas long queries with smaller result sets would be faster). Labs is eventually going to move to Ashburn as well, at which point this should not be so noticeable.
[02:12:47] also, the slowness you report seems to be at the tail end of the bell curve; perhaps oursql is being inefficient with roundtrips when fetching results with a cursor?
[02:13:25] (I.e. I'm guessing you are actually fetching rows one at a time, rather than getting the whole result and having the local library dole them out to you one by one.) In that case, latency would be what kills you.
[02:13:42] I don't think we can do more than guesswork without actually looking at what oursql, specifically, is doing.
[02:14:15] Lemme do a quick test.
[02:15:34] Ah, that's probably what's happening; fetching the result of "select page_title from page limit 1000" as a whole set takes negligible time.
[02:15:47] you're right; the operation's slower either way, but it's a lot more noticeable when I fetch lazily instead of buffering the entire result set in memory first
[02:16:56] Either way, that will go away in a couple of months (last I heard, the move to eqiad was expected around Q4 this year).
[02:17:24] In which case the replicas will be on the same 10G network as the nodes.
[02:18:22] Uh huh. Guess I'll use fetchall() in the meantime.
[02:18:32] Thanks for your help.
[02:19:15] FYI, the roundtrip between the nodes and the replicas seems to be fairly stable at ~26ms; that'd add up quickly when fetching lazily.
[02:19:38] right
[02:19:53] * Coren|Away makes a note to document this.
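The arithmetic behind this checks out: at ~26ms per roundtrip, fetching 1,000 rows one at a time costs roughly 26 seconds, while one buffered transfer is near-instant. A rough timing harness along those lines, assuming oursql accepts a read_default_file option and supports cursor iteration (both standard driver behavior, but unverified here):

    import os.path
    import time
    import oursql  # the module under discussion

    conn = oursql.connect(host="enwiki.labsdb",
                          read_default_file=os.path.expanduser("~/replica.my.cnf"),
                          db="enwiki_p")
    QUERY = "SELECT page_title FROM page LIMIT 1000"
    curs = conn.cursor()

    # Lazy: iterate row by row; each row can cost a full cross-datacenter
    # roundtrip (~26ms), so expect on the order of 26 seconds.
    start = time.time()
    curs.execute(QUERY)
    for row in curs:
        pass
    print("lazy: %.2fs" % (time.time() - start))

    # Buffered: one bulk transfer, after which rows are doled out locally.
    start = time.time()
    curs.execute(QUERY)
    rows = curs.fetchall()
    print("buffered: %.2fs" % (time.time() - start))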
[02:20:01] well, there you go, 26s for 1,000 rows, just like I had observed
[02:23:32] I'm not familiar with oursql, but you might have some tunable to fetch rows in groups rather than one by one and get the best of both worlds?
[02:24:24] yeah, there's a fetchmany() method which I hadn't considered
[02:26:10] [bz] (NEW - created by: MZMcBride, priority: Unprioritized - normal) [Bug 53640] "links" database view is broken - https://bugzilla.wikimedia.org/show_bug.cgi?id=53640
[02:27:16] oursql can't overcome MySQL's limitations, much as it tries. :-)
[02:27:34] heh
[02:29:36] MariaDB's *
[02:29:38] (o;
[11:08:51] !windows
[11:08:51] shiny, but fragile and expensive
[11:10:08] _O_O_
[11:12:30] hi
[11:15:04] hi T13!
[11:15:27] look, my app now says something rather than nothing: http://tools.wmflabs.org/wlm-nl-table-gen/install/
[11:15:57] I'm working on debugging this thing with a guy I know
[11:16:36] Cool. :)
[11:17:16] Looks like progress to me. You'll have it running in no time. :)
[11:17:36] got to stay positive
[11:18:12] I'm assuming I'll have to take the password from the replica.my.cnf in the project directory
[11:18:20] I'm having some trouble downloading that
[11:18:47] in WinSCP I don't see how I "become" my tool name, and without that, access gets denied
[11:24:15] Watch for Coren|Away to get back. He's the smartie-pants in here with the access to help you with permissions. :)
[11:24:48] ok
[12:29:33] Coren|Away, ping
[12:55:13] petan, ping
[13:29:05] Cyberpower678: could you also add a file-move statistic to your admin count?
[13:30:02] ??? That isn't an admin statistic, though.
[13:33:05] If I'm not mistaken, these look like continuous tasks.
[13:44:35] Cyberpower678: it would be handy, though :P if it could be added in a section
[13:48:41] I'll think about it. I have other priorities at the moment, e.g. a total electronics failure in my car.
[14:01:24] anyone around able to help me figure out why labs is fucking up my code?
[14:10:43] Betacommand, what's it doing?
[14:11:08] http://tools.wmflabs.org/betacommand-dev/cgi-bin/img_removal.py?title=Sabur%C5%8D_Kurusu
[14:11:12] Cyberpower678: ^
[14:12:06] Does the file belong to betacommand-dev?
[14:12:09] facepalm
[14:12:19] let me fix something
[14:13:28] Cyberpower678: yes
[14:13:51] Cyberpower678: this code should work; it's been running on the toolserver for years
[14:13:53] Are the file permissions set to be readable by everyone?
[14:13:59] 775
[14:14:06] Lemme look.
[14:15:01] I can't find the file the browser is calling.
[14:15:09] It doesn't exist.
[14:15:59] Betacommand, ^
[14:16:06] Cyberpower678: where are you looking?
[14:16:15] public_html
[14:16:35] that's not where cgi-bin is located on labs
[14:16:43] it's under the root
[14:17:13] Cyberpower678: http://tools.wmflabs.org/betacommand-dev/cgi-bin/img_redmoval.py?title=Sabur%C5%8D_Kurusu is a 404
[14:17:24] * Betacommand added a D
[14:17:45] That won't work. All websites are stored in public_html. Anything outside of that folder isn't accessible from a browser.
[14:18:15] Try dumping all of that into public_html and then running it again.
[14:18:33] Cyberpower678: yes it is
[14:18:43] I spoke with coren about that
[14:19:02] That's the first time I've heard something like that.
[14:19:04] Cyberpower678: it's seeing the script and trying to run it
[14:19:14] but it isn't able to, for some reason
[14:19:14] How do you know?
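Back on the oursql thread from earlier: the fetchmany() method Earwig mentions is the usual middle ground between row-at-a-time fetching and buffering everything, since it pulls rows in batches. A sketch continuing the connection from the timing example above; the batch size is arbitrary, and whether oursql actually coalesces each batch into a single roundtrip is exactly the "tunable" Coren speculates about:

    # Continuing with the conn/curs from the timing sketch above.
    curs.execute("SELECT page_title FROM page")
    while True:
        batch = curs.fetchmany(500)  # ask for 500 rows per call instead of one
        if not batch:
            break                    # an empty batch means the result set is exhausted
        for (page_title,) in batch:
            pass  # process each row here; memory stays bounded at ~500 rows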
[14:19:43] Cyberpower678: see the first and second links I gave you
[14:19:54] one's an "unable to run" message, the other is a 404
[14:20:01] AKA file not found
[14:20:29] Aha.
[14:20:34] Let me look again.
[14:22:24] Hell, even my SIL doesn't work
[14:23:56] is the cgi-bin supposed to redirect? That could be causing the error.
[14:24:13] But I don't know.
[14:24:21] Cyberpower678: it shouldn't cause any issues; it's the same setup that I have on the TS
[14:24:27] Everything else looks ok.
[14:24:58] But Apache is set up differently on labs than on toolserver. Don't forget that.
[14:25:41] Cyberpower678: are X's tools open source?
[14:25:49] Technical_13, somewhat.
[14:25:53] Why?
[14:26:24] Cyberpower678: that's why I'm trying to un-fuck the labs setup
[14:26:41] Betacommand, good luck
[14:26:45] !newlabs
[14:26:46] This is labs. It's another version of toolserver. Because people wanted it just like toolserver, an effort was made to create an almost identical environment. Now users can enjoy replication, similar commands, and bear the burden of instabilities, just like Toolserver.
[14:27:10] Because I saw a post on meta from someone who is going to try to make a clone without the opt-in.
[14:27:15] Cyberpower678: I've been fooling around with it for several months
[14:27:40] Technical_13, good luck with that.
[14:28:11] The edit counter has a complicated setup. Probably to drive off potential copycats.
[14:28:11] It's not me.
[14:28:51] The edit counter alone depends on dozens of files for its operation.
[14:29:03] Technical_13, point me to the person.
[14:29:27] Cyberpower678: the kicker is these files run fine when run via the command line
[14:29:42] You were pinged in a discussion on meta.
[14:30:06] I'm mobile, but I can get it for you later if need be.
[14:37:05] Technical_13, it doesn't look like someone is going to clone it.
[14:37:57] https://meta.wikimedia.org/wiki/Talk:Requests_for_comment/X!%27s_Edit_Counter#Few_questions
[14:43:31] Betacommand, do me a favor?
[14:43:40] Cyberpower678: what?
[14:43:52] try to access the xtools folder
[14:44:30] denied
[14:44:38] Good.
[14:44:43] Files secured.
[14:45:56] <{{Guy}}> :)
[14:46:45] I should probably shut down source.php as well, but toolserver's version still exists, if he wants that broken copy.
[14:48:53] Now that you've locked it down, would you add me to the project?
[14:49:17] xD
[14:49:51] Technical_13, when migration is complete. Are you experienced with PHP?
[14:49:56] Or I could make my suggestions for improvement on a talk page someplace
[14:52:51] Cyberpower678: is that even legal?
[14:52:57] under the license?
[14:53:11] PiRSquared, I changed my wording.
[14:53:17] And yes.
[14:53:30] is the toolserver code the same?
[14:53:34] No
[14:53:49] It's broken and unreliable.
[14:54:05] then under the GPL you *must* report any changes to the code
[14:54:07] IIRC
[14:54:47] It was originally under the GPL, and thus still is.
[14:55:10] "But if you release the modified version to the public in some way, the GPL requires you to make the modified source code available to the program's users, under the GPL."
[14:55:11] I am. Fresh out of class.
[14:55:13] http://www.gnu.org/licenses/gpl-faq.html#GPLRequireSourcePostedPublic
[14:56:43] PiRSquared, I'm concerned about the fact that someone is trying to defy consensus here.
[14:57:15] I'm not making this permanent.
[14:57:18] consensus isn't a legal matter; licensing is. I believe you're violating the GPL.
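On the cgi-bin problem above: an "unable to run" response (as opposed to the 404 for the deliberately misspelled URL) suggests the webserver finds Betacommand's script but can't execute it. A quick sanity check for the usual suspects; the path is inferred from the tool URL plus the note that cgi-bin lives under the tool's root rather than public_html, so treat it as an assumption:

    import os
    import stat

    # Inferred location of the failing script (see the URL quoted above).
    path = "/data/project/betacommand-dev/cgi-bin/img_removal.py"

    st = os.stat(path)
    print("world-readable:  ", bool(st.st_mode & stat.S_IROTH))
    print("world-executable:", bool(st.st_mode & stat.S_IXOTH))  # mode 775 should give both

    # A CGI script is exec'd directly, so it also needs a valid interpreter line.
    with open(path) as f:
        print("shebang:", f.readline().rstrip())  # expect something like #!/usr/bin/python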
[14:57:30] I may be wrong, however
[14:58:20] Ricordisamoa is right about the ToU as well
[14:58:38] PiRSquared might be right. However, you could license a subsequent version under a different license. Say, X's tools v2.
[14:59:39] PiRSquared does have a point.
[14:59:42] if it's based on the same code, it would need to be GPL-compatible, or you'd need to get approval from all authors of the code
[14:59:47] (IANAL)
[15:00:56] Believe me, I don't think they should violate consensus. But I think that freedom is more important to Wikimedia's mission, especially in this case, than consensus in an RfC is.
[15:01:17] I'll reopen the source files for the tool then.
[15:01:59] it's up to you, as long as you don't break the rules (which you might have done)
[15:02:08] But a hacked X!'s edit counter will be met with apprehension not just from me but from other people too, per the global consensus.
[15:02:18] I agree.
[15:02:28] <{{Guy}}> Cyberpower678: what if you made the code.... pm
[15:02:34] And it's not official :p
[15:03:04] Maybe the WMF needs to make a rule about privacy on Labs.
[15:03:17] {{Guy}}, but I didn't; I just acquired ownership of it.
[15:03:21] that would be a better step in the right direction than what we have now
[15:03:38] PiRSquared, agreed.
[15:03:55] That would mean all tools would need consensus otherwise.
[15:04:12] if you do indeed get permission from X! and all other authors, I think you can change the license (or release it under another)
[15:05:12] maybe not
[15:05:17] not sure
[15:08:45] Source re-enabled.
[15:22:00] sigh.
[15:22:29] I wish people respected the community's wishes, though. :|
[15:25:52] me too
[15:49:32] Hello. I'm migrating my tools to Tools Labs, and noticed a performance difference with crosswiki tools that connect to every wiki DB. I open one connection to each DB slice, then do a "use `dbname`" to switch to each wiki for the relevant queries.
[15:49:32] This is very fast on the Toolserver (0.3 seconds), but very slow on Tools Labs (a whopping 23 seconds). Does the Tools Labs infrastructure make "use `dbname`" significantly slower? Is there a more efficient way to do this?
[15:49:32] I created a simple script that simulates this on the Toolserver (running: https://toolserver.org/~pathoschild/accounteligibility/test.php source: http://pastebin.com/ctUQsXZm ) and Tools Labs (running: http://tools.wmflabs.org/pathoschild-contrib/accounteligibility/test.php source: http://pastebin.com/Xi3abGFP ).
[17:47:39] Pathoschild: I've responded to your thread on labs-l
[17:52:26] Coren|Away: I saw, thanks. I'll respond in a bit.
[17:55:26] {{Guy}}: you like to change your nickname ;-) -{{}}
[17:57:49] <{{Guy}}> Connection issues riding down the road.
[19:43:36] is there a reason that a popen /whois query wouldn't work from the webservers?
[21:03:54] Coren|Away: ... afaik the replicas are in the same datacenter as labs
[21:06:58] hm. they are...
[21:07:14] I thought we had hardware allocated in pmtpa for this?
[21:27:41] Ryan_Lane: Hardware? Dude, the cloud
[21:27:49] :D
[22:13:10] A few databases seem to have a ghost ipblocks table: it shows up with `show tables`, but I get "Table 'ipblocks' doesn't exist" if I try to query it. This happens for testwikidatawiki, tyvwiki, viwikivoyage, and votewiki. Is this expected?
[22:14:05] don't think so, Pathoschild
[22:14:07] Coren|Away: ^
[22:14:12] file a bug, perhaps?
[22:15:48] Will do.
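Pathoschild's crosswiki pattern is worth making concrete: one connection per database slice, then a USE statement to hop between the wikis replicated there, with the open question being why each USE costs so much more on Tools Labs. A minimal sketch, again assuming oursql; the slice hostnames and the wiki-to-slice grouping are illustrative guesses, not an authoritative mapping:

    import os.path
    import oursql

    # Hypothetical grouping of replica databases by the slice hosting them.
    slices = {
        "s1.labsdb": ["enwiki_p"],
        "s3.labsdb": ["tyvwiki_p", "votewiki_p"],
    }

    for host, wikis in slices.items():
        conn = oursql.connect(host=host,
                              read_default_file=os.path.expanduser("~/replica.my.cnf"))
        curs = conn.cursor()
        for dbname in wikis:
            curs.execute("USE `%s`" % dbname)  # the statement whose cost is at issue
            curs.execute("SELECT COUNT(*) FROM page")
            print(dbname, curs.fetchall()[0][0])
        conn.close()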
[22:39:26] [bz] (NEW - created by: Jesse PW (Pathoschild), priority: Unprioritized - normal) [Bug 53668] Some replicated databases are missing tables - https://bugzilla.wikimedia.org/show_bug.cgi?id=53668
[22:53:14] Pathoschild: Interestingly, except for votewiki, none of those projects have any blocks yet.
[22:59:59] Hm.
[23:01:18] I think that's just a coincidence, though.
[23:01:35] Mainly because they're new…except for votewiki :P
[23:06:03] Silly votewiki, hopping on the bandwagon. ;)
[23:21:10] Traceback (most recent call last):
[23:21:10]   File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 128, in apport_excepthook
[23:21:10]     os.O_WRONLY|os.O_CREAT|os.O_EXCL, 0o640), 'w')
[23:21:10] OSError: [Errno 2] No such file or directory: '/var/crash/_var_spool_gridengine_execd_tools-exec-06_job_scripts_908718.40004.crash'
[23:21:23] Coren|Away: ^
[23:21:59] sudo shutdown -r now
[23:22:00] duh
[23:26:27] YuviPanda: huh?
[23:26:43] oh, just making a 'turn it off and back on' joke :)
[23:26:53] oh hahaha
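As for the ghost-table symptom behind bug 53668 (a table that SHOW TABLES lists but that errors on SELECT), it's easy to scan for mechanically. A sketch, again assuming oursql; the per-wiki host alias follows the enwiki.labsdb naming seen earlier, and tyvwiki is one of the wikis Pathoschild reported:

    import os.path
    import oursql

    conn = oursql.connect(host="tyvwiki.labsdb",  # assumed per-wiki alias, as with enwiki.labsdb
                          read_default_file=os.path.expanduser("~/replica.my.cnf"),
                          db="tyvwiki_p")
    curs = conn.cursor()
    curs.execute("SHOW TABLES")
    for (table,) in curs.fetchall():
        try:
            # A "ghost" view raises "Table ... doesn't exist" here even
            # though SHOW TABLES just listed it.
            curs.execute("SELECT 1 FROM `%s` LIMIT 1" % table)
            curs.fetchall()
        except oursql.ProgrammingError as err:
            print("ghost table: %s (%s)" % (table, err))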