[04:52:20] I'm getting 500 hphp_invoke errors (I guess they're errors) with labs-vagrant, where's the log file?
[04:56:15] errors are in /var/log/hhvm/error.log
[05:10:24] YuviPanda|zzz: ping me if not zzz.
[05:10:47] What is the location of the betalabs mediawiki installation?
[05:14:17] hello
[05:14:47] I have some access issue in wiki labs
[05:15:10] I could get inside the proxy but cannot access the host
[05:15:16] the error is as below
[05:15:19] sayantan-13@bastion2:~$ ssh sayantan@eqiad.wmflabs ssh: Could not resolve hostname eqiad.wmflabs: Name or service not known
[05:15:34] can anyone please help
[05:22:31] hello
[05:22:38] I have some access issue in wiki labs
[05:22:48] I could get inside the proxy but cannot access the host
[05:22:54] the error is as below
[05:23:04] sayantan-13@bastion2:~$ ssh sayantan@eqiad.wmflabs ssh: Could not resolve hostname eqiad.wmflabs: Name or service not known
[05:23:12] can anyone please help
[05:23:48] @rillke can you please help
[05:26:29] self note: /data/scratch/mediawiki/core is what I was looking for.. :)
[05:26:43] tantan, I don't know exactly what you are doing but are you probably missing your instance name? ..eqiad.wmflabs
[05:26:57] * .eqiad.wmflabs
[05:31:21] my instance name is the same as my shell id, right?
[05:31:54] tantan, are you a project admin or are you using tool labs?
[05:32:12] just trying to use it
[05:32:17] i am not an admin
[05:32:29] my shell id is sayantan-13
[05:32:35] and what are you trying to use?
[05:33:01] I just tried this and got the same error sayantan-13@bastion2:~$ ssh sayantan-13.eqiad.wmflabs ssh: Could not resolve hostname sayantan-13.eqiad.wmflabs: Name or service not known
[05:33:32] ....because there is no instance with that name running, I guess
[05:34:15] how to create an instance then?
[05:34:49] i could get into the bastion with ssh sayantan-13@bastion2.wmflabs.org
[05:35:15] First, you probably want to know what you need. Do you need something like a virtual server or a shared hosting environment?
[05:37:02] i need to access the wikipedia user database for a research project.
[05:37:19] as far as i understand i need to access the tool labs for that
[05:37:26] then tool labs is sufficient
[05:37:47] yeah. that's what i am trying to access
[05:38:00] but i am getting the error there
[05:38:03] https://wikitech.wikimedia.org/wiki/Special:FormEdit/Tools_Access_Request
[05:38:15] tantan ^^ did you fill this in?
[05:38:21] yes
[05:38:29] i got access confirmation too
[05:39:13] okay, then simply do
[05:39:14] Tim Landscheidt sent me the approval mail
[05:39:32] ssh -a sayantan-13@tools-dev.wmflabs.org
[05:39:59] or, if you have mosh
[05:40:00] mosh -a sayantan-13@tools-dev.wmflabs.org
[05:40:22] ah. I want to import on deployment-salt using importDump.php and it says php is not installed?
[05:40:32] YuviPanda|zzz: ^ or anyone?
[05:41:06] tantan, could you follow?
[05:41:18] i think i have got into the wiki labs
[05:41:26] let me check, gimme a min
[05:43:25] tantan, then you want to follow https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help#Database_access
[05:44:03] thank you, I am checking, rillke
[06:13:06] rillke thank you so much
[06:13:18] I got what I was looking for
[06:13:28] thank you so much again
[06:13:53] n.p.
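[Editor's note: the failures at 05:15-05:33 come from ssh'ing to a hostname with no instance behind it; instance names only resolve from inside the Labs network, i.e. via a bastion. A minimal ~/.ssh/config sketch for hopping through the bastion seen above; the username and bastion host are taken from the log, and the pattern assumes an instance that actually exists:]

    # Sketch: reach *.eqiad.wmflabs instances by proxying through the bastion.
    # "sayantan-13" and "bastion2.wmflabs.org" are the names used in the log.
    Host *.eqiad.wmflabs
        User sayantan-13
        ProxyCommand ssh -a -W %h:%p sayantan-13@bastion2.wmflabs.org

[For Tool Labs itself, the direct "ssh -a sayantan-13@tools-dev.wmflabs.org" given above is enough; no bastion hop is needed.]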
[06:14:09] that's what we are here for :)
[07:13:44] I'm importing articles using: php maintenance/importDump.php --conf /usr/local/apache/common/wmf-config/CommonSettings-labs.php /home/kartik/es/Wikipedia-20140715130243.xml eswiki
[07:14:20] Getting: DB connection error: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) (localhost)
[07:14:27] what can be wrong?
[07:14:32] matanya: ^ :)
[09:14:34] Wikimedia Labs / tools: Please install pdf2svg - https://bugzilla.wikimedia.org/68092 (Rainer Rillke @commons.wikimedia) NEW p:Unprio s:normal a:Marc A. Pelletier I already have a trial running of what I have in mind at http://mol.wmflabs.org/services/ but I'd like to move that to tool labs....
[09:21:56] Anyone with deployment-bastion can help me to fix the import issue here?
[09:22:07] *access
[10:37:32] Wikimedia Labs / tools: Please install pdf2svg - https://bugzilla.wikimedia.org/68092#c1 (Tim Landscheidt) NEW>ASSI s:normal>enhanc a:Marc A. Pelletier>Tim Landscheidt The package and its dependency only need 346 kBytes of disk space.
[11:59:06] Wikimedia Labs / deployment-prep (beta): cannot create account on beta wikidata - https://bugzilla.wikimedia.org/68031#c2 (Aude) don't know if it is a duplicate but perhaps related. anyway, the captcha works today. I don't know why or if someone fixed it?
[12:50:56] Looks like YuviPanda is here.. :)
[12:56:22] kart_: hey! moment, having 2 other conversations at the same time :)
[12:57:23] :)
[13:00:49] YuviPanda: left you a msg with details when you're free.
[13:01:32] I need to leave here in a few minutes. detaching tmux :)
[13:36:14] scfc_de: 100M for just that one page load seems like an awful lot… do you think I should just raise the memory limit, or is it likely that there's something absurdly inefficient in that code?
[13:36:24] I don't know much/anything about memory management with php
[13:36:51] Reedy: same question
[13:37:12] page load of what
[13:37:13] ?
[13:37:35] of what?
[13:37:35] Ah, sorry, context! That's for the sudo policy page for the tools project
[13:37:35] andrewbogott: Currently we allow 245MB per Wikimedia wiki page load...
[13:37:38] And that's not enough
[13:37:43] It's erroring out, hitting the 100M limit
[13:37:47] yay PHP :D
[13:37:49] Aha
[13:38:09] If the answer is just "php is a hog, push up the limit" then I'll do that :)
[13:38:29] The obvious thing is possibly it's reading a large dataset into memory
[13:38:43] so maybe an inefficient query
[13:39:03] Reedy: It's inefficient, but it's not /that/ inefficient. It's just text, and there aren't 10,000 different sudo policies
[13:39:08] Right
[13:39:10] I'd probably be tempted to just increase the limit though
[13:39:39] ok, lemme see if I can do that in a local-to-wikitech way
[13:40:28] You should just be able to set $wgMemoryLimit in LocalSettings.php
[13:40:49] Oh, not just in php.ini? That should be easy then...
[13:40:53] Uh
[13:41:19] ?
[13:41:26] Yeah, carry on
[13:41:32] I got slightly confused with it in WMF config
[13:41:37] Which does ini_set( 'memory_limit', $wmgMemoryLimit );
[13:41:49] but in MW core we have a function that does "Set PHP's memory limit to the larger of php.ini or $wgMemoryLimit;"
[13:41:55] So, yeah, set $wgMemoryLimit in LS
[13:41:59] ok
[13:42:21] $wgMemoryLimit = "50M";
[13:42:47] I guess try 150M
[13:43:59] Yep, that's all it took. Thanks.
[13:46:54] Does php dealloc memory when it goes out of scope? Or does it just save up absolutely everything until exit?
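[Editor's note: the fix discussed at 13:40-13:43, written out as a minimal LocalSettings.php sketch; 150M is the value tried in the conversation:]

    <?php
    // Per the exchange above: MediaWiki core sets PHP's memory limit to the
    // larger of php.ini's value and $wgMemoryLimit, so this only ever
    // raises the limit, never lowers it.
    $wgMemoryLimit = "150M";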
[13:48:12] andrewbogott: Ostensibly, PHP does garbage collection. In practice, it rarely actually reduces its footprint.
[13:48:41] IIRC it does refcounting, and simply returns orphans to the heap.
[13:49:29] Are there explicit things that a good coder should do to free things?
[13:50:36] Not that I know of. PHP is not generally known for its brilliant memory management. :-)
[13:51:25] or brilliant anything :D
[13:55:20] Wikimedia Labs / tools: Please install pdf2svg - https://bugzilla.wikimedia.org/68092 (Tim Landscheidt) PATC>RESO/FIX
[13:58:38] andrewbogott: If upping the limit solves this for the moment, great; though it will probably only last so long :-). Extension:OpenStackManager however is very complicated, so I don't know immediately how that can be optimized.
[14:00:30] andrewbogott: Also, I think if we remove the old sudoers rules (local-*, with a very careful look first), the list should shrink considerably.
[14:00:57] andrewbogott: do you have any idea when the evaluation of 'Horizon' is slated to be?
[14:01:06] scfc_de: oh, that's a good idea
[14:01:19] YuviPanda: Later in the year. But even if we switch it'll be a long transition
[14:01:40] andrewbogott: true
[14:13:54] (PS1) Zfilipin: Updated list of repositories that ping #wikimedia-qa [labs/tools/grrrit] - https://gerrit.wikimedia.org/r/146774
[14:16:26] (CR) AzaToth: [C: 1] "looks technically ok, but I can't judge if the config is actually what yee want" [labs/tools/grrrit] - https://gerrit.wikimedia.org/r/146774 (owner: Zfilipin)
[14:20:01] Wikimedia Labs / deployment-prep (beta): Beta should not use production's interwiki.cdb - https://bugzilla.wikimedia.org/67931#c1 (Marius Hoch) Alternatively we could provide extra interwiki prefixes for labs sites (if those don't exist already)... like beta-de (or so). That would also allow importing f...
[14:34:47] Wikimedia Labs / tools: Please install pdf2svg - https://bugzilla.wikimedia.org/68092#c4 (Rainer Rillke @commons.wikimedia) Thanks! http://tools.wmflabs.org/convert/
[14:45:33] (CR) Zfilipin: "I want notifications from qa and child repositories disabled, and notifications from mediawiki/ruby and child repositories enabled." [labs/tools/grrrit] - https://gerrit.wikimedia.org/r/146774 (owner: Zfilipin)
[15:14:11] !log tools Restarted toolhistory with 350 MBytes; OOMed June 1st
[15:14:13] Logged the message, Master
[15:14:46] (CR) Jforrester: [C: 1] "Will let Yuvi or Lego merge and deploy." [labs/tools/grrrit] - https://gerrit.wikimedia.org/r/146774 (owner: Zfilipin)
[15:15:37] (CR) Yuvipanda: [C: 2] Updated list of repositories that ping #wikimedia-qa [labs/tools/grrrit] - https://gerrit.wikimedia.org/r/146774 (owner: Zfilipin)
[15:15:40] (Merged) jenkins-bot: Updated list of repositories that ping #wikimedia-qa [labs/tools/grrrit] - https://gerrit.wikimedia.org/r/146774 (owner: Zfilipin)
[15:16:06] YuviPanda: Yay. :-)
[15:16:13] James_F: :D deploying now
[15:17:19] !log tools.lolrrit-wm Restarted to pick up new QA config change
[15:17:21] Logged the message, Master
[15:17:50] James_F: that should do it :)
[15:18:40] !log tools replagstats OOMed four hours after start on May 6th; with ganglia.wmflabs.org down, not restarting
[15:18:42] Logged the message, Master
[15:18:54] Yay.
[15:20:37] James_F: a lot of people already have +2 and deploy access, do let me know if you want to add more people :)
[15:21:42] YuviPanda: I have +2 but not deploy. :-)
[15:38:24] James_F: what's your labs shell name?
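[Editor's note: a small, hypothetical PHP snippet illustrating the refcounting point made at 13:48; this is not code from the log. unset() on the last reference returns memory to PHP's own allocator immediately (though the process footprint may not shrink); gc_collect_cycles() exists too, but only helps with reference cycles:]

    <?php
    // Illustration: watch reported usage fall after the last reference drops.
    $before = memory_get_usage();
    $big = str_repeat('x', 50 * 1024 * 1024); // ~50 MB string
    $during = memory_get_usage();
    unset($big);                              // refcount hits 0, memory returned
    $after = memory_get_usage();
    printf("before=%d during=%d after=%d\n", $before, $during, $after);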
[15:38:46] YuviPanda: … no idea. Never used Labs.
[15:38:56] James_F: ah, hmm. create an account? :D
[15:39:10] YuviPanda: Well, I have cluster access, so the accounts have the same name, right?
[15:39:22] James_F: hmm, theoretically, yeah. what's your shell there?
[15:39:27] * James_F isn't particularly asking for deploy rights. :-)
[15:39:30] jforrester
[15:39:33] yeah, but I like handing 'em out
[15:39:45] how do I sftp to my tool account? I can just do ssh:// in my browser to get to my labs account, but I'm tired of doing everything within the tool itself via the command line.
[15:40:30] ragesoss: you can't sftp to your tool account, but you can just sftp *as* your user account, and go to /data/project/
[15:40:34] ragesoss: Your user account should have write access to the directories of the tools it is a member of.
[15:40:50] awesome, thanks!
[15:41:22] ragesoss: What they said. Mostly because tool accounts don't have credentials to authenticate against.
[15:41:24] James_F: bah, you don't seem to have a labs account I can find :( I'm going to let it be...
[15:45:21] YuviPanda: I don't appear in the list of completed shell access requests (it claims I would have got it manually when my account was created), so have applied.
[15:47:23] James_F: added and added :) docs at https://wikitech.wikimedia.org/wiki/Grrrit-wm
[15:48:37] YuviPanda: Ta. :-)
[16:59:04] Wikimedia Labs / deployment-prep (beta): Beta should not use production's interwiki.cdb - https://bugzilla.wikimedia.org/67931 (Greg Grossmeier) p:Unprio>Normal
[17:00:51] Wikimedia Labs / deployment-prep (beta): cannot create account on beta wikidata - https://bugzilla.wikimedia.org/68031#c3 (Greg Grossmeier) No, probably not a dupe of the hhvm-specific bug. I can't reproduce this right now, maybe it was an intermittent issue?
[17:02:41] Coren: I filed https://wikitech.wikimedia.org/wiki/New_Project_Request/extdist
[17:03:34] Wikimedia Labs / deployment-prep (beta): Beta should not use production's interwiki.cdb - https://bugzilla.wikimedia.org/67931#c2 (Greg Grossmeier) Antoine: would this break any of the auto-fancy stuff like where we fetch an image from prod commons if we need it?
[17:19:17] legoktm: created
[17:19:24] woo, thanks :D
[17:21:52] andrewbogott: I'm on https://wikitech.wikimedia.org/wiki/Special:NovaInstance and set the project filter to "extdist", but I don't see a "create instance" button, do I need to wait a bit or anything?
[17:24:03] hm, maybe it didn't make you an admin
[17:24:39] legoktm: looks wrong for me too
[17:24:53] I restarted memcached earlier, I bet logging out and in will fix it
[17:24:56] I'm trying, you should too :)
[17:26:34] Jah, that fixed it for me at least
[17:26:50] Seems broken, it should notice that the cache is cold
[17:27:38] yup, worked
[17:43:50] YuviPanda: Does https://gerrit.wikimedia.org/r/#/c/106907/3 look right to you? (If you have to look in the nginx docs to answer, then I can just look myself :) )
[17:44:00] * YuviPanda clicks
[17:44:43] andrewbogott: looks right, but I'd need to check docs :D
[17:44:53] ok, I'll read
[17:45:41] "If the “X-Forwarded-For” field is not present in the client request header, the $proxy_add_x_forwarded_for variable is equal to the $remote_addr variable"
[17:45:48] So, that should work
[17:46:19] Hoi, I rely on https://tools.wmflabs.org/wikidata-todo/stats.php; according to Magnus the files on labs are in the wrong place and he expects them to be there.
[17:46:22] yeah
[17:46:50] what does it take to get this resolved ...
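[Editor's note: the sftp answer at 15:39-15:41 as concrete commands. A sketch only; the host is the tools login machine mentioned earlier in the log, and "mytool" is a placeholder for a real tool name:]

    # Connect as your *user* account (tool accounts have no credentials to
    # authenticate against), then work inside the tool's shared directory.
    sftp ragesoss@tools-dev.wmflabs.org
    sftp> cd /data/project/mytool
    sftp> put somefile.py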
I do rely on these statistics because I am looking for the effects of things we do
[17:47:04] a more accurate description of the problem would help :)
[17:47:06] and adapt based on the numbers
[17:47:15] there are two bugs for it already
[17:47:20] link?
[17:49:41] YuviPanda: https://bugzilla.wikimedia.org/show_bug.cgi?id=66362
[17:49:41] GerardM-: no there is only one bug report if you don't consider yourself two persons :)
[17:50:25] https://bugzilla.wikimedia.org/show_bug.cgi?id=66362
[17:50:35] duplicates ?
[17:50:55] one for the last dump, a new one for a new dump ?
[17:52:14] scfc_de: GerardM- I suspect this has to do with the lack of disk space on the host serving the dumps? IIRC Coren was working on it
[17:52:20] Coren: Have dumps already been moved to the new host?
[17:52:29] And, if so, do they need to be referenced at a different path?
[17:52:37] scfc_de: same question, if you know the answers...
[17:53:14] andrewbogott: 'new host' => tracked in RT 7578
[17:53:31] andrewbogott: So no; but once they are it's a simple puppet change to mount the new filesystem
[17:54:33] GerardM-: in short: we're out of space for dumps, but there's a new host that is going to be provisioned for them with plenty of space. It's in the provisioning queue now so it shouldn't be long.
[17:54:33] Coren: At which point dumps will be mounted at the same mount point as before, right?
[17:54:39] andrewbogott: Right.
[17:54:45] andrewbogott: Just from a different source.
[17:54:53] ok ... any due date ?
[17:55:05] ok. So, GerardM- (to reiterate) that means you don't need to do anything but wait :)
[17:55:28] how long ?
[17:55:57] GerardM-: I expect RobH is the one who can tell you this.
[17:56:01] days weeks months ?
[17:56:05] Wikimedia Labs / Infrastructure: Wikidata dump not available - out of space for dump - https://bugzilla.wikimedia.org/66362 (Andre Klapper)
[17:56:10] days
[17:56:20] Wikimedia Labs / tools: Include pagecounts dumps in datasets - https://bugzilla.wikimedia.org/48894#c24 (Tim Landscheidt) New server tracked in RT #7578.
[17:56:21] ah so I can be a more again with the next dump
[17:56:28] bore
[17:56:47] fourteen days give or take a few
[18:19:21] andrewbogott: so https://wikitech.wikimedia.org/wiki/Nova_Resource:I-000004a0.eqiad.wmflabs is still "building"...how long should it take? I haven't used trusty in labs before, precise normally took under 10 minutes
[18:20:11] legoktm: looks like something is going wrong, let me look
[18:20:38] thanks
[18:22:11] legoktm: Hm, for one thing the security group doesn't allow ssh. I thought I fixed that :/
[18:22:32] I just used "default" I think
[18:23:13] yeah, not your fault, the whole idea of 'default' is that it permits ssh from bastion
[18:24:14] is there a limit for java applications?
[18:24:26] Steinsplitter: like, a memory limit?
[18:24:37] Could not reserve enough space for object heap
[18:24:47] increase how much memory you're giving it
[18:24:53] yes, i have given +2 GB ram for a small tool. i can't believe it eats more
[18:24:58] I remember that java is particularly hungry, it might need a few GB
[18:25:08] keep increasing it until it works?
[18:25:29] legoktm: I'm going to reboot that instance, then I think it will be OK
[18:25:37] ok
[18:25:52] Steinsplitter: The -mem parameter limits total virtual memory, which can be significantly larger than the actual footprint. Java, with the default VM, uses nearly 2G just to get started.
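[Editor's note: making the -mem point above concrete. A sketch only, assuming the tool is started with jsub; the flag values and jar path are illustrative, not taken from the log:]

    # -mem caps the grid job's total *virtual* memory; -Xmx caps the JVM heap.
    # Java reserves roughly 2G of address space just to start, so the grid
    # limit needs headroom well above the heap size.
    jsub -N mytool -mem 4g java -Xmx2g -jar /data/project/mytool/mytool.jar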
[18:26:04] legoktm: have tried (9 is not enough \O/), is there a grid command to assign mem dynamically?
[18:26:54] Coren: is there a way to change javamem on the grid?
[18:27:07] Java needs at least 4G min I guess?
[18:27:24] 2 i think?
[18:27:41] Steinsplitter: https://wikitech.wikimedia.org/wiki/Nova_Resource_Talk:Tools/Help#300-350M.3F_Really.3F
[18:28:09] legoktm: that instance should be fine now, and future instances should work as well.
[18:28:11] Steinsplitter: Look below the table, there is a discussion there about Java memory usage
[18:28:23] And, you didn't do anything wrong; I just need to tune up the code for new projects
[18:29:01] ok, thanks. I was able to ssh in :)
[18:29:22] andrewbogott: https://wikitech.wikimedia.org/wiki/Nova_Resource:I-000004a0.eqiad.wmflabs still says building though?
[18:29:44] legoktm: I'd encourage you to ignore that for now
[18:29:49] ok :P
[18:30:32] 20:28:21 - andrewbogott: I just need to tune up the code for new projects
[18:30:32] O_O
[18:31:29] Steinsplitter: well, I'll still need to create them by hand, but it should at least allow ssh in new projects :/
[18:32:28] andrewbogott: okay, if i understand correctly you can change the mem for a tool labs vm?
[18:33:11] Not for an existing instance.
[18:34:03] so we need to create a new one? :/
[18:35:36] Steinsplitter: No, I don't think you're running out of RAM on the whole instance, I think it's process-specific.
[18:36:00] But I don't know much about your problem, you should consider your conversation with Coren and legoktm
[18:36:32] it runs on ssh, only problems with grind - jsubmit'ing it.
[18:36:40] *grid
[18:36:43] Steinsplitter: Need moar -mem
[18:37:24] Steinsplitter: Or if it's the actual JVM that runs out of heap, then you need to make /its/ heap bigger with -Mxm
[18:37:28] Err, -Xmx
[18:39:00] Coren: jepp. works. Thank you! :):)
[18:48:03] YuviPanda: so http://extdist.wmflabs.org/ is timing out with a 504, but if I # curl http://127.0.0.1 it works fine
[18:48:17] legoktm: open port 80 with security groups?
[18:48:30] * legoktm checks
[18:50:25] "Failed to add rule. "
[18:50:30] andrewbogott: ^
[18:50:45] oh I probably need to set a range
[18:50:46] legoktm: leave 'source group' empty but specify 'tcp'
[18:50:51] Yeah, that'll help
[18:50:59] 10.0.0.0/8
[18:51:02] is a good default
[18:51:19] woo
[18:51:21] thanks :D
[19:14:59] !log deployment-prep updated puppet about 20 minutes ago for new ocg variables (now officially in production puppet instead of just cherry picked)
[19:15:02] Logged the message, Master
[19:27:27] legoktm: can you add me as projectadmin?
[19:27:41] * legoktm does
[19:28:29] YuviPanda: done
[19:28:36] legoktm: sweet
[19:28:41] legoktm: you should !log
[19:28:42] :D
[19:28:53] legoktm: also why isn't this just on tools?
[19:28:59] !log extdist added YuviPanda
[19:29:00] Logged the message, Master
[19:30:49] YuviPanda: I talked with Coren and he said for efficiency we don't need NFS since we can just recreate the tarballs
[19:31:00] 'need' or 'want'?
[19:31:24] YuviPanda: Both. Also, given the possibly very high number of tarballs to make, tools may suffer a bit.
[19:31:29] ah
[19:31:30] hmm
[19:32:08] legoktm: You almost certainly want to create said tarballs in /srv after having allocated your space there btw.
[19:32:14] indeed
[19:32:19] oh ok
[19:32:23] Coren: he's doing it by hand now, I'm going to help him puppetize it after
[19:32:25] how do I allocate space?
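[Editor's note: the rule added at 18:50 through the wikitech web form corresponds roughly to this classic nova CLI call of that era; an illustration of what the rule expresses, not what was actually run:]

    # Allow TCP port 80 into the "default" security group from the Labs
    # internal network, matching the 'tcp' + 10.0.0.0/8 advice above.
    nova secgroup-add-rule default tcp 80 80 10.0.0.0/8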
[19:32:39] legoktm: there's a 'srv' role in the 'configure instance' space
[19:33:06] role::labs::lvm::srv [?]
[19:33:09] that one?
[19:33:19] Coren: yeah
[19:33:21] err
[19:33:22] legoktm: yeah
[19:34:12] !log extdist enabled role::labs::lvm::srv role, running puppet now
[19:34:18] Logged the message, Master
[19:34:50] ok, it's mounted
[19:34:58] how do I configure how much space it gets?
[19:35:07] legoktm: it gets as much as it can, I think
[19:35:21] that...works :D
[19:35:47] legoktm: :D
[19:39:22] ok, everything is pointing to /srv now
[19:43:28] Yeah, role::labs::lvm::srv grabs all the available storage. You can fine-tune with labs_lvm::volume but that requires having an actual manifest.
[19:44:18] it has 28GB right now, so I don't think it'll run out of space anytime soon
[20:16:11] andrewbogott: i plan to test jitsi-meet in labs, any objection? can you support this? https://github.com/jitsi/jitsi-meet
[20:18:28] matanya: Everyone at WMF badly needs a better videoconf solution, so in theory I support it...
[20:18:39] …if you are testing it for vaguely WMF-related purposes
[20:19:07] I'm not sure if Labs will throttle bandwidth too much for it to work. I wouldn't think so.
[20:19:11] I do, FDC stuff, mostly to test replacing the use of skype and hangout
[20:19:14] Do you need a project and a public IP and such?
[20:19:22] yes
[20:19:34] 'jitsi'?
[20:19:46] yes, sounds reasonable
[20:21:12] matanya: ok, should be set up
[20:21:18] thanks a lot
[20:22:33] Let me know how it goes -- Ops has been butting up against the number-of-users limit in Hangouts for a while now
[20:24:36] I will. i guess i'll have a hard time with nginx at first
[20:24:53] but i hope it will go smoothly
[20:32:40] !ping
[20:32:40] !pong
[20:40:02] andrewbogott: so, legoktm is building out a project for ExtensionDistribution tarballs, I'm going to help him puppetize it. I wonder where I should put the role?
[20:40:14] I don't think it should be a module
[20:40:30] ExtensionDistribution has to do with packaging composer runs?
[20:40:43] no, it's just tarballs of mw extensions
[20:40:46] nope, it's for tarballs of mw extensions
[20:40:50] built every day, I think
[20:40:52] YuviPanda: if it's really just a role, then you can just drop it in 'manifests/roles' with the other ones :)
[20:41:02] alright
[20:41:09] it's just going to glue together nginx + uwsgi + a cron
[20:41:32] yeah, that's fine. There's not much of a scheme for organizing roles yet.
[20:41:56] right
[20:42:17] legoktm: let me know when your manual setup is complete (and make sure the etherpad is up to date)
[20:42:24] YuviPanda: it's all done
[20:42:30] legoktm: oh, works?
[20:42:35] yeah :D
[20:42:43] legoktm: etherpad link again?
[20:43:12] https://etherpad.wikimedia.org/p/extdist
[20:44:34] legoktm: hmm, so /srv/extensions would just clone mediawiki/extensions.git and do git submodule update right before the cron to do the tarballs?
[20:44:47] no
[20:44:49] andrewbogott: hmm, I'm wondering if I should make this a module? It would have multiple files + templates.
[20:44:57] legoktm: oh? then?
[20:45:05] I have /srv/src/extensions, which is a bunch of individual checkouts
[20:45:09] Yeah, should be a module then
[20:45:11] I don't want to rely on the mega-repo
[20:45:22] legoktm: it won't be a mega repo, it'll be individual repos
[20:45:37] "clone mediawiki/extensions.git" is the mega-repo
[20:45:49] legoktm: mediawiki/extensions.git is a 'meta repo' that just checks out the individual ones as submodules.
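[Editor's note: re the fine-tuning remark at 19:43 above, a hypothetical manifest sketch of labs_lvm::volume. The resource title, size, and parameter names are assumptions and should be checked against the module in operations/puppet:]

    # Instead of role::labs::lvm::srv (which grabs all free space), carve out
    # an explicitly sized volume mounted at /srv.
    labs_lvm::volume { 'extdist-disk':
        mountat => '/srv',
        size    => '20GB',
    }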
so in /srv/extensions/MobileFrontend would just be the regular MF repo
[20:46:11] meta, mega, same thing :P
[20:46:14] I just don't want to use it
[20:46:23] It relies on gerrit magic working, which doesn't always work
[20:46:59] like for VisualEditor
[20:47:20] hmm, right
[20:47:42] https://dpaste.de/q0jf/raw that's the main script
[20:48:01] it uses https://gerrit.wikimedia.org/mediawiki-extensions.txt which is what ExtensionDistributor uses
[20:48:12] aaah
[20:48:17] I didn't know that there was a .txt file
[20:48:51] legoktm: can you modify it to use the logging module instead of print?
[20:49:03] yeah sure
[20:49:31] legoktm: and add moar comments!
[20:51:06] ok :P
[20:58:59] legoktm: hmm, why does it need uwsgi at all? isn't it just 1. run cron to generate tarballs, 2. serve them
[20:59:28] YuviPanda: it mimics github's API: https://github.com/legoktm/extdist/blob/master/api.py
[20:59:35] aaah
[20:59:35] right
[21:00:49] legoktm: right, so I'll just 1. use git for deploy (so you deploy by sshing in and doing a git pull), 2. use nginx for serving, with uwsgi for API and direct for tarballs
[21:01:05] sounds good
[21:01:14] I requested a repo on gerrit, the github one is just temporary
[21:01:33] yeah, figured
[21:02:15] !log extdist created extdist-test to test new puppet module
[21:02:17] Logged the message, Master
[21:22:59] [enwiki_p]> SELECT COUNT(*) FROM page WHERE page_namespace=0\G COUNT(*): 11,097,385
[21:23:14] Should that be closer to 5 million?
[21:24:34] on redirects...
[21:24:37] oh*
[21:26:09] Yeah, if you AND is_redirect=0 you should get the right number.
[21:27:08] Huh, amusing how dewiki has comparatively so very few redirects.
[21:29:22] IIRC enwiki redirects wrong spellings, while dewiki uses -- if at all -- pages to the effect of: "We have nothing for tpyo. Did you mean typo?" and otherwise relies on the search engine. So that may account for some of the difference.
[21:31:16] (PS1) Ricordisamoa: add .gitreview [labs/tools/maintgraph] - https://gerrit.wikimedia.org/r/146939
[21:40:33] (PS1) Ricordisamoa: load the latest minified version of d3 from toollabs:static [labs/tools/maintgraph] - https://gerrit.wikimedia.org/r/146941
[21:53:02] Coren: do you know what would be causing https://dpaste.de/WQSF/raw ? Earwig is having some issues
[21:55:03] legoktm: I'm not seeing an obvious reason.
[21:55:14] Unless it's some odd behaviour of that library.
[21:55:34] Earwig said that other urls work just fine, it's just that specific one
[21:56:09] the entire www.asc-csa.gc.ca domain seems to be failing, but another one I picked randomly (www.cic.gc.ca) works fine
[21:57:29] If I run that on tools-exec-03, it works for me.
[21:57:45] scfc_de: and webgrid?
[21:59:18] legoktm: On -webgrid-02, http://www.asc-csa.gc.ca/eng/astronauts/biosaintjacques.asp times out, but also with curl.
[21:59:22] note that it seems to be failing on all the webgrids, so it's probably something related to its configuration
[21:59:23] yeah
[22:01:59] On the network level, we don't do much configuration. Does the site auto-throttle?
[22:03:07] I wouldn't think so?
[22:13:52] !log extdist Made extdist-test self-hosted puppetmaster
[22:13:53] Logged the message, Master
[22:15:24] Running "pdsh -f 1 -g tools 'curl http://www.asc-csa.gc.ca/eng/astronauts/biosaintjacques.asp > /dev/null'" at the moment, and -exec-{01..08} are fine, but -{09..11} time out. IIRC these use NAT IPs, so that may be the problem.
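[Editor's note: the corrected count from 21:22-21:26 spelled out; the actual column in the MediaWiki page table is page_is_redirect (the "is_redirect" above is shorthand):]

    -- Articles in the content namespace, excluding redirects, on the replicas.
    SELECT COUNT(*)
    FROM page
    WHERE page_namespace = 0
      AND page_is_redirect = 0;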
[22:16:17] !paste
[22:16:17] http://tools.wmflabs.org/paste/
[22:17:23] Earwig: But as "curl http://www.yahoo.de/" works on those hosts without any problems, I'm pretty confident that it is an auto-throttle at www.asc-csa.gc.ca. Does tools.copyvios make many subsequent requests to that server?
[22:18:00] scfc_de: no
[22:18:28] I don't understand why auto-throttling would cause a problem? it failed on the first request that was made
[22:18:31] unless I'm missing something here
[22:19:40] Earwig: first request ever?
[22:20:04] as far as I can tell
[22:20:50] Earwig: Perhaps there are other bots hitting that server?
[22:21:28] hmm, I suppose that's possible, but I wish there was a way to confirm it
[22:22:15] The symptoms are just clear: On hosts that do not have their own fixed external IP, connects to that specific webserver fail, while requests to other webservers succeed. You could contact the webmaster of asc-csa.gc.ca to confirm that.
[22:23:27] alright
[22:23:29] well, thanks for your help
[22:23:45] np
[22:23:59] * YuviPanda puts 'faster status page' on his todo list for toollabs
[22:29:30] YuviPanda: I thought about replacing it with a Python/Perl web app (Dancer/Flask/whatever). That way you could cache stuff "naturally".
[22:30:55] scfc_de: indeed, same here. I wonder if the delay is in getting output from qstat or elsewhere
[22:32:13] YuviPanda: Pretty certain that that's the cause (what else is there?).
[22:32:39] scfc_de: right. So I guess we'll cache it pretty low (once every 30s?) with rolling replacements
[22:32:47] so it isn't getting stats from all the machines for each cache miss
[22:34:32] YuviPanda: Some of the frameworks even have higher-level caching integrated (I'm pretty sure Dancer has), so in essence we could get away with just saying "cache ?status for 30s", "?list for 60s", etc.
[22:34:54] scfc_de: yeah, but I don't want a cache *miss* to take as much time as it takes now
[22:35:19] scfc_de: so what to do instead is to have individual API endpoints for each of the machines, and cache those for 30s, etc
[22:35:28] so the page doesn't block on having all of 'em loaded to actually load
[22:38:14] YuviPanda: Yes, that's a point. But also there is a lot of potential for optimization: For example, at the moment AFAICS for each job a separate qstat call is issued. I assume you could also read once "qstat -u \* -xml" and then parse the XML for all users.
[22:38:18] (Untested.)
[22:38:35] aaha
[22:38:35] yes
[22:41:01] !log tools reloaded nginx on tools-webproxy to pick up https://gerrit.wikimedia.org/r/#/c/146466/3
[22:41:03] Logged the message, Master
[22:41:13] !log project-proxy reloaded nginx on dynamicproxy-gateway to pick up https://gerrit.wikimedia.org/r/#/c/146466/3
[22:41:14] Logged the message, Master
[22:58:30] andrewbogott_afk: use chrome to access http://jitsi.wmflabs.org
[23:02:14] scfc_de: Nope; there's a qstat of -u '*' done, not per-tool.
[23:03:17] scfc_de: Oh, wait, for the details you mean. Yeah, there's one per tool.
[23:03:28] scfc_de: But wildcards won't allow you to get the details.
[23:03:41] Coren: if you want to help out with that to - be my guest :)
[23:03:59] i.e. jitsi
[23:04:15] matanya: I was just curious. :-)
[23:04:20] *too. too many typos, time to sleep
[23:04:59] but basically the idea is to host a normal conf call / video / group chat app
[23:06:00] I have issues with video and audio at the moment Coren but they are solvable in reasonable hours.
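[Editor's note: a minimal sketch of the "cache ?status for 30s" idea from 22:32-22:34, assuming Flask (one of the frameworks named above); gather_grid_status() is a hypothetical stand-in for the expensive qstat collection step:]

    import time
    from functools import wraps
    from flask import Flask

    app = Flask(__name__)
    _cache = {}  # endpoint name -> (expiry timestamp, cached body)

    def cached(seconds):
        """Serve a stored response until it is `seconds` old."""
        def decorator(fn):
            @wraps(fn)
            def wrapper(*args, **kwargs):
                hit = _cache.get(fn.__name__)
                if hit and hit[0] > time.time():
                    return hit[1]
                body = fn(*args, **kwargs)
                _cache[fn.__name__] = (time.time() + seconds, body)
                return body
            return wrapper
        return decorator

    def gather_grid_status():
        return "..."  # placeholder for the slow per-host qstat walk

    @app.route('/status')
    @cached(30)
    def status():
        return gather_grid_status()

[Per-host endpoints cached the same way would keep a single cold host from blocking the whole page, as discussed at 22:35.]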
[23:24:52] Coren: "qstat -j \* -xml" (not "-u \*" as I proposed above) seems to have all the data of "qstat -j $JOBNR -xml", but I'll need to dig deeper.
[23:25:21] scfc_de: Interesting. That might help a lot, actually.
[23:38:05] YuviPanda|zz: Re -webproxy, with my recent patch Puppet should have detected the change and reloaded nginx.
[23:38:14] scfc_de: ah, right.
[23:39:01] (Not that "being sure" is wrong in security issues :-).)
[23:40:43] YuviPanda|zz: ... because I was wrong: The change occurred in /etc/nginx/sites-available/proxy, and thus Puppet scheduled no reload, and you were right to reload. Clearly: Good night! :-)
[23:40:49] :D
[23:43:42] Ah! The patch changed redis.conf, not nginx. Reading is such an underappreciated skill. I'm off to bed.
[23:43:47] right
[23:43:54] good night, scfc_de
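[Editor's note: closing the loop on the one-qstat-call idea from 22:38 and 23:24, a Python sketch that parses a single "qstat -u \* -xml" run instead of issuing one qstat per job. The element names (job_list, JB_job_number, ...) follow typical gridengine XML output and should be verified against the real tree; "-j \*" as discussed above would add the per-job detail:]

    import subprocess
    import xml.etree.ElementTree as ET

    # One qstat invocation for every user's jobs, parsed once.
    out = subprocess.check_output(['qstat', '-u', '*', '-xml'])
    root = ET.fromstring(out)
    for job in root.iter('job_list'):
        print(job.findtext('JB_job_number'),
              job.findtext('JB_owner'),
              job.findtext('JB_name'),
              job.findtext('state'))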