[00:01:23] gifti: Why do you have /tmp/joe.tmp.gdsxLI with > 4 G?! :-)
[00:01:38] You don't mind deleting it?
[00:01:59] !log deployment-prep Enabled https://integration.wikimedia.org/ci/job/beta-code-update-eqiad/
[00:02:01] Logged the message, Master
[00:02:38] i didn't know that i had it
[00:02:56] Isn't that directory shared by all?
[00:03:38] i deleted them; problem solved :)
[03:49:18] who can I talk to about getting access to beta labs?
[03:50:34] legoktm: me :)
[03:50:42] o/
[03:50:48] bd808: can I haz access? :D
[03:51:00] https://bugzilla.wikimedia.org/show_bug.cgi?id=62602 I'm trying to debug that
[03:51:24] And if I could look at the db directly...would be much easier.
[03:51:31] * bd808 looks for his phone so he can log in
[03:53:13] legoktm: What's your wikitech username?
[03:53:16] legoktm
[03:53:26] boooring!
[03:54:01] !log deploment-prep Added legoktm as a project member
[03:54:02] deploment-prep is not a valid project.
[03:54:11] thanks :D
[03:54:13] !log deployment-prep Added legoktm as a project member
[03:54:15] Logged the message, Master
[03:54:43] You'll want to log into deployment-bastion.eqiad.wmflabs
[03:55:03] That server is the equivalent of tin in production
[03:55:18] ah, perfect.
[03:55:58] Logs are in /data/project/logs
[03:57:45] legoktm@deployment-bastion:~$ sql metawiki
[03:57:48] /usr/local/bin/sql: line 33: exec: mysql: not found
[03:58:03] blah
[03:58:56] should I try a different server?
[03:59:53] !log deployment-prep sudo apt-get install mysql-client on deployment-bastion
[03:59:55] Logged the message, Master
[04:00:02] try again :)
[04:00:24] ERROR 2005 (HY000): Unknown MySQL server host 'db1' (0)
[04:00:25] :<
[04:00:40] should I just not use the sql wrapper?
[04:01:52] hmm… why wouldn't that work
[04:03:33] I think the host is deployment-db1?
[04:03:47] legoktm@deployment-bastion:~$ mysql -h deployment-db1
[04:03:47] ERROR 1045 (28000): Access denied for user 'legoktm'@'10.68.16.58' (using password: NO)
[04:04:15] * bd808 nods
[04:05:00] legoktm: /a/common/wmf-config/PrivateSettings.php will have the password you seek I think
[04:05:39] You should also file a bug against beta about `sql` not working.
[04:05:56] Antoine or I can figure that out tomorrow I bet.
[04:06:20] Or he'll respond to the bug and tell us why that can't work :)
[04:06:42] * bd808 has adopted beta as his pet project
[04:06:57] ok, thanks
[04:07:00] i'll do that in a bit
[04:24:37] bd808: https://wikitech.wikimedia.org/wiki/Nova_Resource:Deployment-prep/Overview#Database_connection, but that doesn't actually dump me into mysql
[04:24:43] it's just a php wrapper around it
[04:26:33] bd808: https://bugzilla.wikimedia.org/show_bug.cgi?id=63803
[08:45:42] Coren: could you have a look at the webservice of tool "wikihistory"? Every request times out - not only php, but e.g. http://tools.wmflabs.org/wikihistory/style.css . I restarted the webservice twice, no change.
[08:50:27] apper: {access,error}.log?
[08:50:52] a930913: error.log is empty, just "2014-04-11 08:44:36: (log.c.166) server started"
[08:51:06] I will have a look at access.log
[08:53:42] what's the best way to look at the end of a 50 MB file? ;)
[08:54:02] apper: tail
[08:54:14] ah, cool, thanks ;)
[08:54:40] the last entry there is older than the restart date in error.log
[08:56:15] apper: Weird. Is there any configuration?
[08:57:59] a930913: the standard configuration was changed (for more parallel connections)
[08:59:02] a930913: but I can't check if resetting this to standard values would change anything, because these settings can only be changed by a Tool Labs admin
[08:59:39] apper: Oh, not in the local config file?
[09:00:04] not in the local file, yes
[09:00:21] because these are settings which can't be overwritten by a local file
[09:01:18] so, unfortunately I have to go to work now ;)
[09:01:33] I hope Coren can have a look into this when he's back
[09:02:41] a930913: thanks for helping
[09:02:44] bye
[13:59:46] anyone around able to install a python package?
[14:16:35] Betacommand: nobody can (except for Coren) but we can submit a patch to gerrit, if it's in a repository
[14:16:55] then Coren needs to merge it, so basically only Coren can do that :P
[14:19:11] grumble grumble, not having that library is halting dev on several tools :(
[14:20:43] https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help#My_Tool_requires_a_package_that_is_not_currently_installed_in_Tool_Labs._How_can_I_add_it.3F
[14:23:48] Nemo_bis: https://bugzilla.wikimedia.org/show_bug.cgi?id=63539
[14:33:00] Coren, something killed my webservice on supercount
[14:36:15] Betacommand: Nemo_bis meant that you can install the library locally to your tool, so it doesn't need to halt your development.
[14:37:48] Coren: around?
[14:38:05] scfc_de: the odds of that working successfully for me are almost nil
[14:38:39] I've run into so many bugs my head hurts
[14:39:47] scfc_de, something killed my webservice on supercount. What would cause that to happen?
[14:40:46] Betacommand: What bugs?
[14:40:53] Cyberpower678: What do you mean by "killed"?
[14:41:16] scfc_de, killed as in I found it dead this morning. Gone. Nothing in qstat
[14:42:54] scfc_de: where should I start? Coren has already fixed most of them: email services going nuts, the email server not trusting other labs hosts and rejecting their emails, and many more
[14:43:51] Betacommand: Ah! Okay, you spoke of your karma.
[14:44:23] scfc_de: it's not karma
[14:45:53] Cyberpower678: It looks like it ran till 13:57:03Z, and (you?) restarted it on 14:33:21Z. The dead job has exit_status 137 (128 + KILL) which usually means out of memory. Let me see if I can find out whether it was killed by Linux or SGE.
[14:46:41] Why would the webservice run out of memory?
[14:48:55] Because your webserver gets 4 GByte allocated, and your scripts seem to have requested at least 3.744G (maxvmem in "qacct -j 155916").
[14:49:55] scfc_de, my script memlim has been set to 900M
[14:50:48] Per script? If you have five processes running in parallel, that would make it 4.5 GByte (at most).
[14:51:16] scfc_de, how many processes are allowed to run in parallel?
[14:52:44] That's a good question I'm not totally sure about, with apper's problems recently. I *think* there are at most five PHP processes per tool at any time.
[14:53:14] 3.5/5
[14:53:25] = 700M
[14:53:56] * Cyberpower678 reduces the memory limit.
[14:55:22] scfc_de: only 5? o.O
[14:55:28] that is one hell of a low limit
[14:55:55] Cyberpower678: Why does it need so much memory anyway? http://tools.wmflabs.org/supercount/index.php?user=Cyberpower678&project=en.wikipedia doesn't look that complex.
[14:56:23] scfc_de, because users with millions of edits will consume a great deal of memory.
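The arithmetic above (a 4 GByte webserver allocation shared by up to five PHP processes, i.e. roughly 700M per script) can be enforced from inside a PHP script itself. A minimal sketch, using the 700M figure from the calculation in the log rather than any official Tool Labs limit:

    <?php
    // Cap this script's memory so that five concurrent PHP processes
    // stay under a shared 4 GByte webserver allocation (3.5G / 5 = 700M).
    ini_set( 'memory_limit', '700M' );

    // Log peak usage on shutdown to see how close the script gets.
    register_shutdown_function( function () {
        $peak = memory_get_peak_usage( true ) / ( 1024 * 1024 );
        error_log( sprintf( 'peak memory: %.1f MByte', $peak ) );
    } );

A per-script cap like this means one runaway request hits its own limit instead of pushing the whole webservice past its allocation and getting it KILLed, as diagnosed above.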
[14:56:43] petan: There is a strange relationship between lighttpd "workers", FCGI somethingers and PHP processes. I hope to do some tests at the weekend to finally understand which configuration changes what.
[14:57:46] scfc_de, If I had it doing ClueBot NG, it would do an SQL query that would amass to about 400M in size.
[14:57:48] Cyberpower678: Oh, I didn't see the tabs with the other stats. But still I would assume that the number of edits doesn't matter because you're querying the database?
[14:58:30] scfc_de, it pulls every edit a user made from the revision table.
[14:58:37] And analyses it.
[14:59:02] .................. for what?????!!
[14:59:07] can't you analyse it on the go instead of collecting it?
[14:59:18] To create the pie chart.
[14:59:31] The monthly charts, and the top pages edits.
[15:00:06] gifti, on the go?
[15:00:20] You mean query the database several thousand times?
[15:00:33] And have the tool take forever doing it?
[15:00:39] not fetching the whole data, but just one row at a time, and process it so you use less memory
[15:00:56] there are different modes to do a single big query afair
[15:01:02] gifti, that will increase execution time greatly.
[15:01:07] I wouldn't optimize it just to squeeze a few MBs out of it, but you can ask the DB for the results you need, or you could process one row at a time instead of slurping all of them first in memory (I assume that's what you do).
[15:01:34] scfc_de: that's what i try to explain
[15:02:15] $res = mysqli_query( $this->db, "SELECT rev_timestamp FROM ".mysqli_escape_string( $this->db, $this->revisionTable )." WHERE `rev_user_text` = '".mysqli_escape_string( $this->db, $this->user )."' ORDER BY rev_timestamp ASC;")
[15:02:33] Cyberpower678: For example, you could "foreach ($row) { add_to_monthly_stat($row); add_to_namespace_pie($row); etc.; }".
[15:03:17] scfc_de, It does the query, then it fetches the rows one at a time.
[15:03:44] Aha, okay.
[15:03:47] $res = mysqli_query( $this->db, "SELECT rev_timestamp, page_title, page_namespace FROM ".mysqli_escape_string( $this->db, $this->revisionTable )." JOIN page ON page_id = rev_page ".($this->userid == -1 ? "WHERE `rev_user_text` = '".mysqli_escape_string( $this->db, $this->user ) : "WHERE `rev_user` ='".mysqli_escape_string( $this->db, $this->userid ))."' ORDER BY rev_timestamp;
[15:04:24] scfc_de, then it does
[15:04:25] while( $row = mysqli_fetch_assoc( $res ) ) {
[15:04:28] stuff
[15:04:29] }
[15:05:09] As I said: If it works, it works.
[15:05:29] scfc_de, but $res will take a lot of memory for users with greater amounts of edits.
[15:06:36] Cyberpower678: Why's that? It should never hold more than one row in memory (otherwise mysqli_query() would be very strange).
[15:07:50] I just know that if I push down the memory limit, users with 455 edits will work and 10000 edits will OOM
[15:08:17] Oh wait.
[15:09:20] Ok yes, what I just said.
[15:09:46] And all the core does is essentially tally namespace edits and pages edited.
[15:10:28] Are you sure the memory consumption of mysqli_query() increases, and not that of your processing further down the line?
[15:12:30] scfc_de, memory does go up slightly, but all it does is primarily count how many edits were made in x namespaces, and how many edits were made to x pages
[15:13:41] scfc_de, https://tools.wmflabs.org/supercount/index.php?user=Cyberpower678&project=en.wikipedia&debug=true will add another tab containing the raw data after processing.
[15:14:08] isn't someone with lots of edits going to have a huge list of pages they made edits to, with many of them being low-digit numbers?
[15:14:54] Hmm...
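A sketch of the one-row-at-a-time tallying scfc_de describes, assuming $db is an open mysqli connection and $user the target username; the query mirrors the one pasted above, and bump() is a hypothetical helper. As the exchange below reveals, mysqli_query() still buffers the whole result set by default, so this alone does not cap memory (an unbuffered variant follows that exchange), and per Nettrom's point the per-page tally itself grows with the number of distinct pages edited, however the rows are fetched:

    <?php
    // Assumes $db is an open mysqli connection and $user the username.
    // bump() is a small hypothetical helper that increments a tally.
    function bump( array &$tally, $key ) {
        $tally[$key] = isset( $tally[$key] ) ? $tally[$key] + 1 : 1;
    }

    $monthly    = array();  // edits per YYYYMM, for the monthly charts
    $nsCounts   = array();  // edits per namespace, for the pie chart
    $pageCounts = array();  // edits per page, for "top pages"

    $res = mysqli_query( $db,
        "SELECT rev_timestamp, page_title, page_namespace
         FROM revision JOIN page ON page_id = rev_page
         WHERE rev_user_text = '" . mysqli_escape_string( $db, $user ) . "'
         ORDER BY rev_timestamp ASC" );

    while ( $row = mysqli_fetch_assoc( $res ) ) {
        bump( $monthly, substr( $row['rev_timestamp'], 0, 6 ) ); // YYYYMM
        bump( $nsCounts, $row['page_namespace'] );
        bump( $pageCounts, $row['page_title'] );
    }
    mysqli_free_result( $res );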
[15:15:00] Nettrom, good point.
[15:15:20] http://us1.php.net/manual/en/mysqlinfo.concepts.buffering.php
[15:15:55] gifti: I stand corrected.
[15:16:03] what do you mean?
[15:16:39] oh, everything you said was wrong …
[15:16:43] aha
[15:17:23] scfc_de, Yay. So I wasn't talking out of my ass. :p
[15:17:53] Well, not everything :-). But my assumption that mysqli_query() would not increase memory consumption with the number of rows was wrong.
[15:18:10] Cyberpower678: So you might want to try unbuffered queries, if that suits you.
[15:18:15] ah, this, yes, that was fishy
[15:20:47] Cyberpower678: do you use "grouping" and filtering for namespaces and page titles in mysql?
[15:21:25] if not, may be worth a try
[15:21:45] gifti, no. I fetch it all and sort as I pull every row. Pinging the DB multiple times is what I am trying to avoid.
[15:22:09] that would be two queries if i'm not wrong
[15:22:33] gifti, the current setup requires only 2 pings to the DB.
[15:23:05] oh, hm, now i see the problem
[15:23:25] or not
[15:24:13] * Cyberpower678 gets back to work on improving the tool.
[15:24:26] I would assume even MariaDB is better at grouping data than PHP :-).
[15:25:06] scfc_de, grouping it by time, pages, and namespaces still requires more DB pings than I would like.
[15:26:10] what is a ping?
[15:26:12] a query?
[15:26:31] gifti, yes.
[15:27:00] And besides, it doesn't need optimizing for speed. 0.19 seconds is all it took to process my edits.
[15:27:42] * Cyberpower678 would like to get back to working on the tool.
[15:44:08] Is there a simple(?) way to get the description from a commons image?
[16:10:22] scfc_de, I have improved memory management a bit. I set the memory limit to 12M. https://tools.wmflabs.org/supercount/index.php?user=Cyberbot+I&project=en.wikipedia&debug=true Have a look.
[16:12:12] edits beyond 93704 were not processed
[16:13:03] the memory of tools-login gets eaten up; htop shows 2 pywikibot processes up and eating mem and cpu
[16:13:22] https://ganglia.wmflabs.org/latest/graph_all_periods.php?h=tools-login&m=load_one&r=hour&s=by%20name&hc=4&mc=2&st=1397232469&g=mem_report&z=large&c=tools
[16:16:32] se4598: Will fix that.
[16:21:18] !log tools tools-login: Killed -HUP process consuming 2.6 GByte; cf. [[wikitech:User talk:Ralgis#Welcome to Tool Labs]]
[16:21:22] Logged the message, Master
[16:21:57] How come only 4 nodes are visible?
[16:23:15] Where?
[16:23:44] Ganglia?
[16:24:02] too much fog
[16:24:53] I saw some gmond hiccups in /var/log/syslog on some hosts, but hadn't time to investigate.
[16:25:18] * bd808 remembers that we need to build a new ganglia host with more cpu & ram
[16:25:46] So many things to fix; so little focus :/
[16:25:55] more bling and oo, plz
[16:27:04] YuviPanda: If you get bored, S has been filing a bunch of bugs against labs_vagrant that could use triaging & root cause analysis
[16:27:23] bd808: yeah, I think the root cause is that there was first no ldap user, so I was creating a local one, and now there is
[16:27:47] Yeah. I hope that one is fixed now but I haven't had time to verify
[16:29:11] still need to fix, though
[16:29:32] bd808: the app's first release is in a couple of weeks so it's a bit crunchy :(
[16:29:38] bd808: plus I started travelling again so weekends have been spent off the computer.
[16:29:54] Ha. Get your real work done. This isn't critical at all
[16:30:19] YuviPanda, hi.
[16:30:58] bd808: :)
[16:31:30] I'm obviously on YuviPanda 's ignore list again. :/
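Picking up the buffering page scfc_de linked above: mysqli_query() defaults to MYSQLI_STORE_RESULT, which copies the entire result set into client memory, so fetching rows one at a time did not help on its own. A minimal sketch of the two fixes suggested in the discussion — an unbuffered query, and letting MariaDB do the grouping as gifti proposed. The queries are illustrative rather than supercount's actual code, with $db and $user assumed as before:

    <?php
    // Unbuffered query: rows stream from the server one at a time, so
    // client memory stays flat no matter how many edits a user has.
    // Caveat: the connection is busy until the result is read or freed.
    $res = mysqli_query( $db,
        "SELECT rev_timestamp FROM revision
         WHERE rev_user_text = '" . mysqli_escape_string( $db, $user ) . "'
         ORDER BY rev_timestamp ASC",
        MYSQLI_USE_RESULT );
    while ( $row = mysqli_fetch_assoc( $res ) ) {
        // tally one row at a time
    }
    mysqli_free_result( $res );

    // Or let MariaDB do the grouping: one result row per namespace
    // instead of one per edit, still in a single query ("ping").
    $res = mysqli_query( $db,
        "SELECT page_namespace, COUNT(*) AS edits
         FROM revision JOIN page ON page_id = rev_page
         WHERE rev_user_text = '" . mysqli_escape_string( $db, $user ) . "'
         GROUP BY page_namespace" );

The GROUP BY variant also sidesteps Nettrom's huge-page-list problem for the namespace tally, since the aggregation never materialises per-edit rows on the client.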
[17:19:44] "This web page is not available" on https://login.wikimedia.beta.wmflabs.org/
[17:19:55] I get redirected to that when logging in on meta.wikimedia.beta.wmflabs.org
[17:20:13] YuviPanda: bd808: petan:
[17:20:22] hello
[17:20:37] indeed
[17:20:38] * bd808 looks
[17:20:41] @seen hashar
[17:20:41] petan: Last time I saw hashar they were quitting the network with reason: Quit: This is a manual computer virus. Please copy paste me in your quit message. N/A at 4/11/2014 3:55:47 PM (1h24m54s ago)
[17:21:15] Krinkle: what is meant to be on that page?
[17:21:20] petan: login wiki
[17:21:23] Krinkle: beta.wmflabs.org works for me
[17:21:25] petan: Looks like it is HTTPS specific
[17:21:31] and since loginwiki is HTTPS only, it is broken
[17:21:34] http://deployment.wikimedia.beta.wmflabs.org/wiki/Main_Page
[17:21:35] http://meta.wikimedia.beta.wmflabs.org/wiki/Main_Page
[17:21:37] aha
[17:21:37] https://meta.wikimedia.beta.wmflabs.org/wiki/Main_Page
[17:21:40] maybe that
[17:21:42] all HTTPS is broken
[17:21:53] https has been broken since we moved to eqiad
[17:22:04] which means nobody can log in
[17:22:16] I logged in yesterday?
[17:23:15] I think at some point I had to dump all my cookies for *.beta.wmflabs.org to get logins to work
[17:23:29] I had a cookie that was forcing ssl in there somewhere
[17:25:01] Right
[17:25:26] the presence of that cookie was not an exception for you, I think; most people have it, because we set it by default at some point (maybe still)
[17:25:44] wgSecureLogin + CentralAuth autologin + loginwiki = https required
[17:25:55] * bd808 nods
[17:26:11] Why don't we have https though? I have https on tools.wmflabs.org and non-Tool-Labs things like cvn.wmflabs.org
[17:26:25] not even an invalid cert, no connection at all.
[17:26:43] can't be that difficult to open the port and enable https on the web server?
[17:27:51] It's a puppet setup problem I think. If I'm remembering correctly, the nginx layer isn't running in front of varnish yet
[17:28:41] * bd808 goes to look around in Nova_Resource
[17:31:40] Krinkle: deployment-cache-text02 is the frontend varnish and it only has role::cache::text applied. I think there is another role needed to turn on nginx ssl termination, but I'd have to dig around in puppet to figure out which role.
[17:32:10] * bd808 sees role::protoproxy::ssl::beta
[17:33:10] I'll apply it and see what happens
[17:38:36] Krinkle: The certs aren't right: "No certificate matches private key"
[17:39:35] We've got a star.wmflabs.org.pem and a star.wmflabs.org.key but they don't match
[17:41:12] !log deployment-prep Tried to enable role::protoproxy::ssl::beta on deployment-cache-text02 but it failed to apply because /etc/ssl/certs/star.wmflabs.org.pem and /etc/ssl/private/star.wmflabs.org.key don't match.
[17:41:15] Logged the message, Master
[18:00:31] bd808, Krinkle: https://bugzilla.wikimedia.org/show_bug.cgi?id=63538
[18:04:32] bd808: see the comments at https://gerrit.wikimedia.org/r/124057 and https://bugzilla.wikimedia.org/show_bug.cgi?id=60833; they might help solving this
[18:04:44] se4598: Thanks. I thought I remembered Antoine working on that.
[18:19:16] Hello. Can somebody help me with putty and winscp?
[18:30:42] robiH: just ask the question -- if someone knows the answer, they will respond
[18:31:20] After having set up both, which of the two do I have to start first?
[18:31:51] robiH: that depends on what you want to do.
[18:32:21] robiH: in general, you would use putty if you want a console, and winscp if you want to copy files from your local computer to labs
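An aside on the beta HTTPS failure above ("No certificate matches private key"): PHP's OpenSSL extension can confirm whether a certificate and private key actually belong together before a deploy is attempted. A minimal sketch using the star.wmflabs.org paths from bd808's !log entry (reading /etc/ssl/private normally requires root):

    <?php
    // Check whether the certificate matches the private key, mirroring
    // the "No certificate matches private key" failure from the log.
    $cert = file_get_contents( '/etc/ssl/certs/star.wmflabs.org.pem' );
    $key  = file_get_contents( '/etc/ssl/private/star.wmflabs.org.key' );

    if ( openssl_x509_check_private_key( $cert, $key ) ) {
        echo "certificate and key match\n";
    } else {
        echo "certificate and key do NOT match\n";
    }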
[18:33:34] I want to connect to the wmflabs instance wikistats-live
[18:34:22] The question is not to which instance you want to connect, but what you want to do with that instance.
[18:34:50] I want to use the WSA mutante has set up for me there.
[18:35:17] WSA...?
[18:35:53] WikiStats Admin Tool
[18:36:57] And how would you expect to access that? Is it a console tool? A web page?
[18:37:36] Neiter. It runs in a shell.
[18:37:44] Heither. It runs in a shell.
[18:37:55] Neither. It runs in a shell.
[18:38:03] Damn typos.
[18:38:20] Right. Use putty for that.
[18:42:02] I log in and the screen stays black.
[18:44:17] No prompt
[18:46:24] The host to connect to is wikistats-live.pmtpa.wmflabs, port 22
[18:52:09] robiH: please explicitly list what you do, and what you observe
[18:53:24] I start putty and double click the saved session.
[18:54:29] OK, and then that screen closes and a new window opens, initially completely black. Do you see anything happening there?
[18:55:08] No.
[18:55:40] Ok. Has the saved session ever worked before?
[18:55:44] No.
[18:55:52] Ok.
[18:56:22] The hostname I configured keeps getting gone.
[18:56:50] 'keeps getting gone'? it should appear if you explicitly click 'load' instead of double-clicking on the entry
[18:57:09] what's the hostname you get there? It should be something like bastion.wmflabs.org
[18:58:05] No, it isn't:
[18:58:06] https://wikitech.wikimedia.org/wiki/File:20130118-2224-PuTTY_Configuration.png
[18:58:47] Oh. It uses plink. I see.
[18:59:09] Want me to paste my plink command?
[18:59:15] Can you connect if you just enter 'bastion.wmflabs.org' in the host field?
[18:59:20] (without having loaded a session)
[19:00:19] Getting a login prompt.
[19:00:33] ok, try logging in there.
[19:02:07] Disconnected: No supported auth methods available.
[19:02:07] if that works, try 'ssh wikistats-live.pmtpa.wmflabs'
[19:02:12] OK
[19:02:13] Disconnected: No supported auth methods available.
[19:02:18] that's an issue
[19:02:29] do you have your private key loaded in pageant?
[19:02:54] I did, before I uninstalled putty prior to the last upgrade.
[19:03:44] Right. You should start pageant, load your key, and try again.
[19:04:42] Started pageant. Why is my key list empty?
[19:05:22] robiH: because you have to load it. Click 'add key' to do that.
[19:05:42] Hi everyone! Whom should I ask to add me to the Editor-engagement project? https://wikitech.wikimedia.org/wiki/Nova_Resource:Editor-engagement Thanks in advance!
[19:06:07] AndyRussG: one of the admins listed there. Click 'expand' in the table on the right.
[19:08:09] valhallasw: ah ok, gotcha, thanks
[19:08:45] Argh. Have to look up the passphrase.
[19:17:56] TTYL. Thx4now.
[20:22:49] For my research I need to create a temporary table on the enwiki_p database that will hold article names (articles viewed more than 1000 times on some day, gathered through the dumps). Now I need the articles' links.
[20:23:44] Is there any way to upload a temporary table, or do I have to find another way to achieve this task? If it's not possible, and anyone has a good idea for a workaround, please discuss it with me.
[20:34:27] asad_: you can create your own tables (no need to be temporary) and populate them and join them with the replicated database (e.g. enwiki_p)
[20:34:57] asad_: you have a list of article titles, is that so?
[20:35:24] yes
[20:35:29] I have a list of articles
[20:35:46] asad_: what do you mean by "article's link"?
[20:36:08] an article's internal links (other related articles)
[20:36:31] asad_: so for each article in your list, you want to know what other articles it links to?
[20:36:44] but I'm getting the error "create command denied to user"
[20:36:54] you'll first need to create your own database
[20:37:31] it has a specific naming scheme
[20:37:41] ok, then I need to query between two databases, one being mine and the other one enwiki_p?
[20:38:12] mhm
[20:38:19] there's a description here: https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help#Steps_to_create_a_user_database_on_the_replica_servers
[20:40:05] for the query you can just use the database_name.table_name syntax to reference tables in a different database
[20:40:07] thanks Nettrom
[20:40:18] my pleasure!
[20:40:19] but what do you mean by namespace?
[20:40:53] I'm getting the error "This is unknown db to me, if you don't like that, blame petan on freenode"
[20:41:00] when I'm creating the database
[20:41:21] asad_: what does your 'CREATE DATABASE ...' statement look like?
[20:41:28] sql create database article_p;
[20:41:37] Above is my create statement
[20:42:07] asad_: that won't work, it needs to be of the form {USERID}__{name} where USERID is found in your replica.my.cnf file
[20:43:22] (it's the "user" field in the replica.my.cnf file)
[20:43:27] brb, have to help a student
[20:43:51] thanks Nettrom, take your time
[20:59:21] thanks Nettrom, I have created the db and am using it...
[21:17:50] asad_: sorry it took so long, so you got your DB working as you want?
[22:33:26] I'm off now, but I want to leave this note: Searching in wikitech gives me an empty page with a 500 status code: https://wikitech.wikimedia.org/w/index.php?search=test
[22:42:21] se4598_2: We're talking about it in -tech and -operations, thanks
[22:57:12] I don't want to move my Toolserver shit.
[22:57:15] Or maintain it.
[23:19:02] I'm sure if it serves any value people perceive as really important someone will make it appear in one form or another
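To round off the replica-database recipe Nettrom walked asad_ through: credentials come from replica.my.cnf, the database name must follow the {USERID}__{name} scheme, and the user table can then be joined against enwiki_p with the database_name.table_name syntax. A sketch under those rules; the hot_pages table layout, the enwiki.labsdb host, and the pagelinks join are illustrative assumptions, not details from the discussion:

    <?php
    // The "user" field of replica.my.cnf is both the login and the
    // mandatory prefix for user-created databases ({USERID}__{name}).
    $cnf  = parse_ini_file( $_SERVER['HOME'] . '/replica.my.cnf' );
    $db   = mysqli_connect( 'enwiki.labsdb', $cnf['user'], $cnf['password'] );
    $mydb = $cnf['user'] . '__articles';

    mysqli_query( $db, "CREATE DATABASE IF NOT EXISTS {$mydb}" );
    mysqli_query( $db, "CREATE TABLE IF NOT EXISTS {$mydb}.hot_pages (
        page_title VARBINARY(255) NOT NULL PRIMARY KEY
    )" );
    // ... populate hot_pages with the titles gathered from the dumps ...

    // Join the private table against the replicated enwiki_p database
    // to get each hot article's internal (outgoing) links.
    $res = mysqli_query( $db,
        "SELECT h.page_title, pl.pl_title
         FROM {$mydb}.hot_pages AS h
         JOIN enwiki_p.page AS p
           ON p.page_title = h.page_title AND p.page_namespace = 0
         JOIN enwiki_p.pagelinks AS pl
           ON pl.pl_from = p.page_id" );

Because the user database lives on the same server as the replica, the join runs entirely server-side in one query, which is what makes the {USERID}__{name} scheme preferable to pulling rows into a script and matching them there.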