[01:03:39] petan, I need an expert in PHP
[01:17:36] ptarjan, addshore or petan might be able to do it
[01:17:50] Krenair: thanks
[01:18:03] Krenair: I just added an ssh key to my account and gave that to him
[01:18:07] it is an ok stopgap
[01:18:49] If you don't mind someone else having access to your account, I guess it's okay...
[01:19:02] i have nothing on there, and we work together :)
[01:19:15] I have no idea what WMF thinks though
[01:19:26] then i'd love for them to add him to his own account
[01:20:28] some volunteers such as those I mentioned earlier can grant shell access to his account
[01:22:39] paravoid and RobH (not in this channel) could also do it
[01:28:54] or notpeter (also not in this channel)
[06:52:52] is magnus around?
[06:54:11] Fatal error: Call to undefined function db_get_image_data() in /data/project/commonshelper/public_html/index.php on line 1279
[06:55:42] magnus rarely comes on IRC
[07:22:45] tools is being really slow right now....
[08:23:45] * fale 's tools have not arrived yet, so he is not to be blamed, legoktm :D
[08:23:54] :P
[08:24:54] hi
[08:24:58] !logs
[08:24:59] http://bots.wmflabs.org/~wm-bot/logs/%23wikimedia-labs/
[08:25:09] seems to be fine though now
[08:25:18] Cyberpower678: what do you need?
[08:36:45] !log deployment-prep Fixing up the abuse filter central DB to point to 'labswiki' instead of the non-existent 'metawiki' {{gerrit|69461}}. Suggested by Steinsplitter :)
[08:36:49] Logged the message, Master
[08:46:05] petan / Coren , /tmp on tools-exec hosts seems to be as slow as the home directory (~10 MB/s). Is that supposed to be that way? I had hoped to have better performance by using /tmp :-)
[08:46:41] oh, I'm a moron
[08:47:52] ? :>
[08:47:53] I was using /dev/urandom instead of /dev/zero, and /dev/urandom is a bit... slower
[08:48:01] as dd source
[08:48:45] now I'm getting ~100MB/s for /home and 600MB-1.2GB/s for /tmp
[08:49:11] valhallasw on all exec hosts?
[08:49:20] I tried tools-exec-04
[08:49:28] valhallasw /tmp is local storage, it should be faster
[08:49:32] but the problem was /dev/urandom, not /tmp :-)
[08:49:38] ok
[08:50:00] /dev/zero will be ultimately fast on exec-04
[08:50:08] because it uses btrfs for /tmp
[08:50:31] so it's not written until some point later? or...?
[08:50:37] it uses sparse files
[08:50:56] it will only write (this file should contain 40gb of zeroes)
[08:51:06] instead of 40gb of zeroes :P
[08:51:14] oh, that's also cheating :p
[08:51:18] yes
[08:53:03] using a more reasonable file, it's 50MB/s for /data/project->/home, 100MB/s for /data/project->/tmp
[08:53:21] 133MB/s, even
[08:54:55] I'll play around with it later tonight.
[09:37:26] hey Coren. we are hitting the max connection limit with the render article list generator tool. is there a way to increase the limit?
[09:38:42] petan I want to get a page within the server (using w3m on the command line), what is the address
[09:38:45] http://tools.wmflabs.org/wikitest-rtl/w/api.php
[09:39:00] I checked http://localhost/wikitest-rtl/w/api.php but it didn't work
[09:39:53] can somebody help me with this? ^
[09:42:59] Amir1: you have to use the internal address of the server
[09:43:09] try http://tools-webserver-01/wikitest-rtl/w/api.php
[09:43:45] JohannesK_WMDE: let me check
[09:47:54] JohannesK_WMDE: thanks, it's working
[09:48:06] :)
[10:06:49] Coren, petan: any idea about the connection limit?
[11:38:00] Coren, petan: what number is the connection limit and where is it documented? i can't look it up via sql, erm, because of the connection limit
[11:39:13] and when did it change? because we didn't hit the limit a few days ago, and were probably using the same amount of connections
[11:51:58] JohannesK_WMDE what are you talking about?
[11:52:02] connection limit?
[11:52:13] to what?
[12:14:55] petan: sql connection limit, max_user_connections
[12:15:10] yes there is quite a strict limit
[12:15:17] you mean production or tools-db?
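(Picking up petan's "sparse files" explanation above: a short sketch of why writing zeroes can look absurdly fast. This is an illustration, not the tools-exec setup — it uses a throwaway temp file, the sparse-file effect exists on most Linux filesystems, and the sizes in comments are approximate.)

```shell
#!/bin/sh
# A file with a 40 MB "hole": no data blocks are allocated,
# only the apparent size is recorded.
f=$(mktemp)
truncate -s 40M "$f"

ls -l "$f"   # apparent size: ~41943040 bytes
du -k "$f"   # actual allocation: 0 (or near-0) KB

# Forcing real writes with non-zero data (the same reason
# valhallasw's /dev/urandom dd runs were slower than /dev/zero):
dd if=/dev/urandom of="$f" bs=1M count=4 conv=notrunc 2>&1

du -k "$f"   # now real blocks are allocated
rm -f "$f"
```

A dd benchmark that only feeds zeroes into a filesystem clever enough to store them as holes (or compress them) therefore measures bookkeeping, not disk throughput.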
[12:15:24] tools-db
[12:15:29] yes there is..
[12:15:31] I don't know why
[12:15:42] I can increase it, but AFAIK it requires the server to be restarted...
[12:15:50] o.O
[12:16:01] can't it be increased per-user?
[12:16:45] i mean, '10' is quite limited anyway so maybe you want to increase the global limit too at some point... but for now, per-user would do. i think that can be done without restarting the server.
[12:16:50] ok I don't need to restart it
[12:17:06] !log tools petrb: increasing limit on mysql connections
[12:17:09] Logged the message, Master
[12:17:52] btw current limit is 12
[12:17:54] * 512
[12:19:06] and currently there are like 8 connections
[12:19:12] I increased it to 4096
[12:19:22] which according to the kernel configuration is the current maximum
[12:19:28] but I can tweak the kernel if we needed more
[12:19:45] not going to do that until I see we are hitting at least 2k simultaneous connections :P
[12:20:09] petan: on render-tests, i still can't connect because of the limit... on the render tool account, it still says the limit is 10
[12:20:26] MariaDB [dewiki_p]> show global variables like 'max_user_connections';
[12:20:29] +----------------------+-------+
[12:20:29] | Variable_name        | Value |
[12:20:29] +----------------------+-------+
[12:20:29] | max_user_connections | 10    |
[12:20:29] +----------------------+-------+
[12:20:32] ah
[12:20:38] hold on
[12:20:56] no need to tweak the kernel btw... 4096 should be plenty ;)
[12:21:55] try now
[12:22:03] I increased the limit of all users to 80
[12:22:07] I can give you even more if needed
[12:26:07] no change, petan. render-tests can't connect, max_user_connections still says 10
[12:26:13] o.O
[12:26:28] what user name is it
[12:26:38] render and render-tests
[12:27:34] are you sure it is tools-db and not the replica?
[12:27:51] because max_user_connections returns 0 on tools-db
[12:28:10] and if I select it from mysql.user I see 80 everywhere
[12:29:20] also users render and render-tests do not exist
[12:31:48] JohannesK_WMDE let me try 1 more thing...
[12:32:35] JohannesK_WMDE try now
[12:34:51] fale, ?
[12:35:13] Cyberpower678: you were looking for a PHP guy :)
[12:36:11] I'm trying to make a progress bar that shows how far the script has loaded, since some of X!'s tools take some time to compile the data.
[12:36:28] I've done a lot of internet research and none of it works.
[12:36:51] Cyberpower678: the progress bar should increase while the user is looking at it, right?
[12:37:04] Clearly I am doing something wrong and need an expert to help me.
[12:37:04] Yes.
[12:37:15] Cyberpower678: it's more a js thing than a php thing ;)
[12:37:38] JS on HTML or js as an actual script?
[12:38:02] js on the html page that the user sees while the tool is loading
[12:38:11] Ok.
[12:38:25] still 10, petan
[12:38:29] Can you help me develop a PHP script that actually works
[12:38:30] ?
[12:38:45] Or JS script
[12:38:57] "show global" and "show local" both 10
[12:38:59] I'm not very strong with js :(
[12:39:17] Cyberpower678: but I can try :)
[12:39:17] I've tried flush, and ob_flush, but they don't seem to flush.
[12:39:32] do you have the code somewhere?
[12:40:25] I've tried code samples from the internet, but they didn't work as they should.
[12:40:31] petan: maybe GRANT works? http://dev.mysql.com/doc/refman/5.0/en/grant.html
[12:40:39] fale, so I don't really have any code anywhere.
[12:41:02] JohannesK_WMDE sec
[12:42:21] JohannesK_WMDE updated for render
[12:42:25] try there
[12:42:34] GRANT USAGE ON *.* TO 'francis'@'localhost'
[12:42:35] -> WITH ...
[12:42:38] I used that example
[12:42:40] from mysql
[12:42:41] fale, any thoughts?
[12:49:03] addaway!!!!!!!!!!!!!!
[12:49:09] addaway you blocked me on wikitech!!
[12:49:12] :D
[12:49:18] petan: ... no luck
[12:49:21] still 10
[12:49:34] JohannesK_WMDE: in that case grant doesn't solve it either :/
[12:49:47] bah...
[12:49:49] I have no idea where this limit is set
[12:52:04] OMG
[12:52:14] paravoid ping
[12:52:18] I need an unblock on wikitech
[12:53:10] petan: oh...
this is for the replica too, of course, so the sql user names would be p50380g50613 and p50380g50454
[12:53:29] maybe you changed only the tools-db stuff
[12:53:46] Cyberpower678: I'll look into it and ping you back :)
[12:53:59] fale thank you.
[12:57:51] Petan... really???
[12:57:55] How.....
[12:57:55] YES
[12:57:59] it's bugged
[12:58:08] last time I blocked a user it blocked someone else as well
[12:58:10] I was told not to use it
[12:58:13] Mwahahahhaa
[12:58:28] Do you need an unblock?
[12:58:29] the funny part is you won't be able to unblock either by name
[12:58:31] yes I do
[12:58:36] remove block #2
[12:58:38] * #1
[12:59:04] OK, I'm on my mobile so it might take a bit longer than usual ;p
[12:59:26] * Cyberpower678 places an inhibitor on addaway
[12:59:29] Mwahahahaha
[12:59:54] uh... petan?
[13:00:00] hm?
[13:00:23] (02:53:11 PM) JohannesK_WMDE: petan: oh... this is for the replica too, of course, so the sql user names would be p50380g50613 and p50380g50454
[13:00:49] yes I know, but I have no access to the replica db's, + if it doesn't work on -db it probably won't work on the replica either :/
[13:02:36] addaway...
[13:02:45] UNBLOCK ME FOR **** SAKE
[13:02:51] something seems to have worked... on tools-db i see max_user_connections 80 now, did you make that change petan?
[13:03:01] JohannesK_WMDE yes
[13:03:06] JohannesK_WMDE under which user do you see it?
[13:03:12] JohannesK_WMDE and since when?
[13:03:30] local-render-tests@tools-login:~$ mysql -h tools-db
[13:03:38] since now, petan
[13:04:10] user is 'rendertests'
[13:04:26] JohannesK_WMDE that is pretty weird
[13:04:27] apparently the '-' from the unix account name got removed for sql
[13:04:31] JohannesK_WMDE because I did the grant on render
[13:04:33] not render-tests
[13:04:36] lol
[13:04:41] did you grant 80?
[13:04:46] maybe you just needed to relog?
[13:04:59] no, I did grant usage on bla blab labal bla with max_user_connections 80;
[13:05:54] lol the render user doesn't even have a .my.cnf
[13:06:38] that doesn't really matter at some point
[13:06:48] then how do i connect?
[13:06:51] JohannesK_WMDE: you need to make a tool after you get a user
[13:06:54] the sql tool doesn't require it, and your scripts can load credentials from replica.my.cnf
[13:07:21] yes, but the replica credentials are different than the tools-db credentials, no?
[13:07:27] no
[13:07:29] they are the same
[13:07:37] newly
[13:07:37] not here, no
[13:07:42] ahhhh lol
[13:07:42] in the past they were different
[13:07:47] so old accounts have different ones
[13:07:47] 'newly'... i see
[13:07:51] new accounts have the same
[13:07:57] the render account was created recently
[13:08:01] ok
[13:08:18] addaway
[13:08:26] what is the progress of the unblock
[13:08:57] * Cyberpower678 stabs Coren
[13:09:03] petan: on tools-db, the limit is 0 now, for user p50380g50613 (that is, render)
[13:09:03] * petan stabs addaway
[13:09:10] o.O
[13:09:11] Wait what?
[13:09:19] Coren can u unblock me on wikitech :P
[13:09:30] the thing is, we need to have the stuff running until, like, yesterday
[13:09:31] JohannesK_WMDE that is truly ultimately weird
[13:09:37] petan: cool
[13:09:44] Coren, I need to contact legal. Can you give me their address
[13:09:45] ?
[13:09:47] petan: Bah, the block bug is still there?
[13:09:55] yes
[13:09:56] https://bugzilla.wikimedia.org/show_bug.cgi?id=49811
[13:09:58] Cyberpower678: Unsurprisingly, legal@wikimedia.org
[13:10:02] i like weird, but i like weird+working even more
[13:10:12] Coren, I knew that. :/
[13:10:26] hey, Coren! you're awake!
[13:10:48] JohannesK_WMDE, he's probably in pain now that I stabbed him.
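(For readers following the connection-limit thread: the statement petan paraphrases as "grant usage on bla bla with max_user_connections 80" maps onto MySQL/MariaDB's GRANT resource option. A minimal sketch, using the account name from the log and an assumed '%' host pattern — the actual host pattern on tools-db may differ:)

```sql
-- Raise one account's simultaneous-connection cap to 80 without a
-- server restart; MAX_USER_CONNECTIONS here is a per-account limit.
GRANT USAGE ON *.* TO 'p50380g50613'@'%'
    WITH MAX_USER_CONNECTIONS 80;

-- Check what is actually stored for the account (this is where petan
-- "saw 80 everywhere"):
SELECT User, Host, max_user_connections
FROM mysql.user
WHERE User = 'p50380g50613';
```

Note that a per-account value of 0 means "fall back to the global max_user_connections variable", which would explain the confusing "the limit is 0 now" reading later in the thread; the session variable JohannesK_WMDE queried reflects that global default, not the per-account grant.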
[13:11:01] Cyberpower678 I suggest you type "child porn" in the subject, that will make them read your e-mail asap
[13:11:12] otherwise they will never read it
[13:11:20] petan, XDXDXD
[13:11:39] Coren: can you please increase the sql connection limit for p50380g50613 and p50380g50454
[13:11:47] Or it'll cause some other issues and they'll hate me forever.
[13:11:50] for the replica...
[13:11:55] Pet an try now
[13:12:12] good
[13:12:24] JohannesK_WMDE: No I can't, but I can ask Asher though. :-)
[13:12:25] Did it work? :)
[13:12:34] petan: Someone just removed it before I could.
[13:12:38] Asher... ok.
[13:12:48] JohannesK_WMDE: He's the DB meister.
[13:12:51] Sorry my 3G is super slow ATM, bloody o2, *blames pet an*
[13:13:24] noted, Coren. he's not here though. can you contact him another way?
[13:13:42] Ta ta for now
[13:13:45] addmobile I handle only o2 in Germany
[13:13:50] Cyberpower678: Don't listen to petan. :-) If you put that in the subject line they'll read the email fast and get annoyed at you faster. :-)
[13:14:00] JohannesK_WMDE: He's on IRC every day. binasher
[13:14:07] Well hopefully when I go to Germany it will improve ;p
[13:14:07] Coren, exactly.
[13:15:00] Coren: where?
[13:15:26] JohannesK_WMDE: Often here, but always in #wikimedia-operations
[13:16:29] oooookay...
[13:17:50] he's not there either. do you have another way to reach him Coren?
[13:18:12] He's at GMT-7 so he's probably still asleep for 2-3 more hours.
[13:19:47] right. ok. guess we can wait till then, Coren. btw, a default connection limit higher than 10 would be useful, i think. on TS there was no limit for MMPs. i'll ask him if he can increase it globally...
[13:20:10] JohannesK_WMDE: Type "@notify binasher".
[13:20:44] @notify binasher
[13:20:44] You've already asked me to watch this user
[13:20:55] :-)
[13:20:55] fail
[13:21:12] ok. works independently of channels, obviously.
the other time was in -operations
[13:21:22] yes you can even request it in PM
[13:21:29] it works network-wide
[13:21:36] OK
[13:21:58] but if it didn't work in channels, nobody would ever notice this feature exists :P
[13:22:08] * Cyberpower678 replaces the t in petan with a c and butters it.
[13:22:21] :\
[13:22:31] * Cyberpower678 is hungry.
[13:22:56] * petan replaces Cyber in Cyberpower678 with flower and laughs
[13:23:08] * Cyberpower678 is eating buttered petan
[13:23:21] * petan is laughing at Flowerpower678
[13:23:42] * Cyberpower678 thinks Flowerpower678 is one of the coolest nicks out there.
[13:39:29] !git mediawiki/php/wmerrors
[13:39:29] For more information about git on labs see https://labsconsole.wikimedia.org/wiki/Help:Git
[13:39:33] !gerrit mediawiki/php/wmerrors
[13:39:34] https://gerrit.wikimedia.org/
[13:39:36] !gitrepo mediawiki/php/wmerrors
[13:39:38] !gitweb mediawiki/php/wmerrors
[13:39:38] https://gerrit.wikimedia.org/r/gitweb?p=mediawiki/php/wmerrors.git
[13:39:42] !gitblit mediawiki/php/wmerrors
[13:43:20] are the mediawiki.org tables accessible from tool labs?
[13:47:49] lbenedix: They haven't been marked as finished yet at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help#Production_replicas, but you can access them at mediawikiwiki_p.
[13:49:51] how do I connect to that db?
[13:49:58] mysql --defaults-file="${HOME}"/replica.my.cnf -h mediawiki.labsdb mediawiki_p is not working
[13:50:20] it's mediawikiwiki
[13:50:24] lbenedix: "mediawikiwiki.labsdb" (mediawiki + wiki).
[13:51:15] and you are importing the tables right now?
[13:51:45] or, what is the reason that mediawikiwiki is not in the list of available replicas?
[13:52:48] lbenedix: *I* do nothing :-). I don't know if something still needs a final touch, or if Coren just hasn't got around to updating the list.
[13:53:38] AFAIK, Asher was doing the dump yesterday.
[13:53:46] with "you" I was talking about "them"...
I have no idea who, but I think the person is here
[13:55:01] MAX(rc_timestamp) is a minute old, so something's working.
[13:55:07] Why does the "labs" in http://meta.wikimedia.beta.wmflabs.org/ mean http://deployment.wikimedia.beta.wmflabs.org/, not https://wikitech.wikimedia.org/? And what's *.wikimedia.beta.wmflabs.org anyway?
[13:56:31] zhuyifei1999: meta.wikimedia.beta is the beta of metawiki
[13:57:19] deployment.wikimedia.beta is the central site of all of the beta wikis (as meta.wikimedia.org is the central site for all production wikimedia project sites)
[13:58:50] *.wikimedia.beta is the beta deployment cluster. software is tested on the beta cluster there before it is put into production on Wikimedia sites
[13:59:16] Don't we already have http://test.wikipedia.org/ ?
[13:59:24] wikitech is for internal technical documentation for the Wikimedia Foundation (where magical things happen)
[14:00:24] This probably isn't the right channel, but I'm sure someone here knows the answer
[14:00:46] Is there a way to discover the ip addresses associated with a specific domain?
[14:00:49] zhuyifei1999: the beta sites are used by Selenium testing software and generally shouldn't be used for test edits
[14:01:20] the test sites are for users to test extensions, scripts, bugs and such
[14:01:37] FutureTense: a wikimedia domain?
[14:01:50] no, any domain
[14:02:08] ah.. i found a networking chan
[14:02:15] So how's your peachy-powered script coming?
[14:02:50] addshore, ^
[14:04:01] zhuyifei1999, test.wikipedia.org is actually in production
[14:04:52] Cyberpower678: not had any time :(
[14:05:13] addshore: Thanks. FutureTense: you can ping. $ ping www.google.com and you'll get "PING www.google.com (173.194.72.147) 56(84) bytes of data." where 173.194.72.147 is the ip
[14:05:27] nslookup
[14:05:35] ping usually gives you the www site
[14:05:35] addshore, that's not what you said yesterday.
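(On FutureTense's question about mapping a domain to its addresses: a small sketch of asking the system resolver. The helper name is made up for illustration; getent is glibc-specific, and localhost is used only so the sketch runs without network access — substitute any real domain.)

```shell
#!/bin/sh
# resolve NAME -- print the IP addresses the system resolver returns.
# getent walks the same lookup path ping uses; for the full CNAME
# chain, `nslookup NAME` or `dig +short A NAME` (from dnsutils)
# print more detail.
resolve() {
    getent hosts "$1" | awk '{ print $1 }' | sort -u
}

resolve localhost
```

A load-balanced name can return several A records in a single answer (round-robin DNS) — consistent with FutureTense getting six addresses for google while www.wikipedia.org resolved, via two CNAMEs, to a single load-balancer IP.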
[14:11:16] nslookup might be a better idea, but $ nslookup www.wikipedia.org gives: Server: 127.0.0.1 Address: 127.0.0.1#53 Non-authoritative answer: www.wikipedia.org canonical name = wikipedia-lb.wikimedia.org. wikipedia-lb.wikimedia.org canonical name = wikipedia-lb.eqiad.wikimedia.org. Name: wikipedia-lb.eqiad.wikimedia.org Address: 208.80.154.225
[14:11:40] still only one ip
[14:12:47] though google is different, where I got six
[15:12:57] addshore new version of pidgeon released o/
[15:12:59] :D
[15:13:02] not that you were using it
[15:13:06] but you might consider it
[15:13:21] petan How are you today?
[15:13:36] sweating like a pig
[15:13:47] lol I left you a note in #huggle
[15:13:51] ok
[16:16:45] and Coren, any chance i can log in to tools-webserver-01 and kill some python processes which are probably uselessly waiting for sql connections?
[16:18:01] Coren, one of the HHVM guys was asking for https://wikitech.wikimedia.org/wiki/Shell_Request/Oyamauchi to be approved
[16:48:49] JohannesK_WMDE try it
[16:49:51] Krenair: done
[16:50:04] does he realize what shell access is for?
[16:50:14] lol again?
[16:50:19] I... think he knows
[16:50:34] Krenair ok, just that shell access itself doesn't give you access anywhere
[16:50:41] that guy needs to be in a project as well
[16:50:49] JohannesK_WMDE I mean login to the apache server
[16:51:07] Yeah, but you can't add people to a project unless they have shell access
[16:51:09] ah, ok
[16:51:14] you wanted to kill some processes
[16:51:17] yes
[16:51:29] try if you can ssh, and if not I will have a look at what we can do about it
[16:51:33] jkroll@tools-login:~$ ssh tools-webserver-01
[16:51:41] Permission denied (publickey).
[16:51:43] I think Facebook's HHVM guys are using the 'performance' project
[16:51:45] :(
[16:52:07] Krenair ok he has shell access now :)
[16:52:22] gotta go now anyway...
but thanks petan
[16:52:26] JohannesK_WMDE ok I can either kill it or I can give you access there for a while
[16:52:51] in fact I have no idea why it is restricted
[16:53:41] if you find something like tlgwsgi, or tlg*, having sql connections open, doing nothing, kill it :) nothing else important should be running atm
[16:53:49] ok
[16:53:57] having access to the webserver would be great for things like this
[16:54:17] hm Coren did restrict it...
[16:54:17] maybe ask him
[16:54:23] i will, tomorrow
[17:59:42] hm. I wonder if I can disable the use of $wgAuth for LdapAuthentication
[17:59:59] I don't think we really need it for authentication
[18:00:00] mostly just as a library
[18:01:21] any progress with OAuth & OpenID support?
[18:05:28] OrenBochman: both will be added to the production sites this quarter AFAIK
[18:09:17] Coren, petan: Could one of you please "for JOBID in 382724 390298; do qacct -j $JOBID | mail -s $JOBID scfc; done" on tools-master?
[18:11:43] scfc_de: Yes, I'm pretty sure we could both do that.
[18:12:22] (done)
[18:13:56] Coren: Thanks. BTW, just googling: http://arc.liv.ac.uk/SGE/howto/nfsreduce.html suggests we could rsync (or something) the accounting file to all hosts, and thus allow qacct without NFS? Does accounting contain private information?
[18:15:51] I.e. "rsync tools-master:/path/to/accounting /var/lib/gridengine/default/common" as a cron job every five minutes or so.
[18:16:03] scfc_de: Ostensibly, no; there is the question of how often we'd want to rsync but it's not a bad idea.
[18:28:23] ^demon: ok, it seems to be working mostly!
[18:28:42] except I'm getting this for the 'compat' repository (I think the biggest of the four): remote: error: internal error while processing changes (timeout 60ms, cancelled)
[18:30:10] ^demon: should I try to push in parts or something like that?
[18:30:21] Coren: have you received my email? :)
[18:30:54] fale: Actually, no. What was its subject?
[18:31:06] Coren: bugzilla and gerrit :)
[18:32:18] Coren: Added the thought to .
[18:33:06] fale: I'm not seeing it. You did send it to mpelletier@wikimedia.org right?
[18:33:12] yes
[18:33:31] maybe it's because I sent it from my gmail account?
[18:34:29] Hm. Try again to marc@uberbox.org?
[18:35:21] Coren: sent :
[18:40:49] ... o_O I'm still not seeing it, fale. What is the email address you are sending it from? I'll do a full search of my mailboxes.
[18:41:03] Coren: fabiolocati@gmail.com
[18:41:42] No dice.
[18:44:14] Coren: maybe it's a gmail problem... strange but possible
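(As an appendix to scfc_de's qacct idea earlier in the log, 18:13–18:16: one way the rsync suggestion could look as a cron fragment. The file name is hypothetical, "/path/to/accounting" is kept as the placeholder from the log, and the user, schedule, and rsync flags are assumptions — a sketch, not a deployed config.)

```cron
# Hypothetical /etc/cron.d/gridengine-accounting
# Copy the grid engine master's accounting file to this host every
# five minutes so `qacct` can run locally without NFS access to it.
# -a preserves mode/times; --partial avoids leaving a half-copied
# file visible if a transfer is interrupted.
*/5 * * * * root rsync -a --partial tools-master:/path/to/accounting /var/lib/gridengine/default/common/accounting
```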