[00:11:59] hi
[03:59:33] [bz] (RESOLVED - created by: Maarten Dammers, priority: Immediate - critical) [Bug 54847] Data leakage user table "new" databases like wikidatawiki_p and the wikivoyage databases - https://bugzilla.wikimedia.org/show_bug.cgi?id=54847
[04:49:25] [bz] (UNCONFIRMED - created by: metatron, priority: Unprioritized - major) [Bug 54953] Grid error executing console php script - https://bugzilla.wikimedia.org/show_bug.cgi?id=54953
[08:49:32] [bz] (NEW - created by: Liangent, priority: Unprioritized - normal) [Bug 54959] Table 'wikidatawiki_p.user' doesn't exist - https://bugzilla.wikimedia.org/show_bug.cgi?id=54959
[09:35:20] anyone know how to overcome the issue where in a wiki on a labs instance, making an api call from that wiki produces a timeout so you have to hard code the call to 'http://127.0.0.1'
[09:42:19] dan-nl: https://bugzilla.wikimedia.org/show_bug.cgi?id=45868
[09:42:20] dan-nl: a possible workaround is to add the public DNS entry in /etc/hosts and point it to 127.0.0.1
[09:42:32] One question... From the English wikipedia db is there any documentation about what each table represents? Thank you
[09:43:26] panos_: there is
[09:43:32] on mediawiki.org look for database schema
[09:43:43] panos_: https://www.mediawiki.org/wiki/Manual:Database_layout
[09:44:09] hashar: thank u
[09:46:51] [bz] (UNCONFIRMED - created by: Beta16, priority: Unprioritized - normal) [Bug 54962] Missing or wrong information in meta_p.wiki table - https://bugzilla.wikimedia.org/show_bug.cgi?id=54962
[09:52:37] hashar: thanks, so for me i would enter at the command prompt -> iptables -t nat -I OUTPUT --dest 208.80.153.219 -j DNAT --to-dest my.instance.ip ?
[10:07:46] hashar: that worked, thanks for the link to the bug
[10:08:00] dan-nl: use an entry in /etc/hosts
[10:08:15] dan-nl: the iptables workaround is not that great, it will disappear whenever the instance is rebooted
[10:08:31] dan-nl: the proper solution is either fixing OpenStack network stack (might not be possible)
[10:08:33] or
[10:08:53] implementing DNS split horizon so that the DNS server gives a different record based on the client IP
[10:09:16] (i.e. if the DNS query comes from labs, serve the target instance IP, not the public IP)
[10:09:55] hashar: ok, that's getting beyond what i currently know …
[10:10:41] dan-nl: feel free to raise your use case on the labs-l mailing list
[10:10:43] hashar: for now the iptables workaround seems to work, but i'll add it to the hosts file. thanks
[10:10:44] that might interest other people
[10:10:59] hashar: i will do that thanks
[11:46:10] [bz] (RESOLVED - created by: metatron, priority: Unprioritized - major) [Bug 54953] Grid error executing console php script - https://bugzilla.wikimedia.org/show_bug.cgi?id=54953
[12:02:41] @op
[12:02:46] :o
[12:02:47] !ping
[12:02:48] !pong
[12:36:40] OK who screwed the webservers up?
[12:38:10] Coren: poke
[12:38:36] * Coren pokes back! Poink!
[12:39:30] Coren: also, i can't ssh into toolsbeta-hadooptest (hangs at ssh). not urgent.
[12:39:38] Coren: webservers are throwing 500's
[12:40:02] Betacommand: Works for me; which tool gives you the 500s?
[12:40:23] Coren: http://tools.wmflabs.org/betacommand-dev/cgi-bin/rationale_check.py?title=File:Rothhorse2.jpg
[12:40:23] YuviPanda: I'm not here today. :-)
[12:40:35] Coren: worked fine for the last month or so
[12:40:46] * YuviPanda hears "This is not the Coren you are looking for..."
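(A minimal sketch of the two labs-instance API-call workarounds discussed above between 09:42 and 10:10. The wiki hostname and "my.instance.ip" are placeholders, not values confirmed in the channel; 208.80.153.219 is the public IP quoted at 09:52, and only the /etc/hosts entry survives a reboot.)

    # Workaround 1 (hashar, 09:42 / 10:08): point the wiki's public hostname
    # back at the instance itself so API calls never leave the box.
    # "mywiki.example.wmflabs.org" is a placeholder hostname.
    echo "127.0.0.1   mywiki.example.wmflabs.org" | sudo tee -a /etc/hosts

    # Workaround 2 (dan-nl, 09:52): DNAT the public IP to the instance's own
    # IP. Works, but the rule disappears whenever the instance is rebooted.
    sudo iptables -t nat -I OUTPUT --dest 208.80.153.219 -j DNAT --to-dest my.instance.ip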
[12:40:56] This is not the Coren I'm looking for
[12:40:57] ok
[12:41:21] Betacommand: Lemme see what's up.
[12:43:09] Ah. It's OOM
[12:44:04] * Coren looks at usage.
[12:45:17] Coren: OOM?
[12:45:25] Out Of Memory
[12:49:00] * Coren rebalances the load a little.
[12:49:10] I'll need to add a fourth webserver.
[12:49:38] But this should ease things until then.
[13:02:43] [bz] (ASSIGNED - created by: Liangent, priority: Unprioritized - normal) [Bug 54959] Table 'wikidatawiki_p.user' doesn't exist - https://bugzilla.wikimedia.org/show_bug.cgi?id=54959
[13:54:31] hey Coren, do you know what the average write speed is to the nfs volume on labs?
[13:54:50] OVER 9000!!! :P
[13:55:00] bytes/s?
[13:55:12] no, no idea, sorry. was just being useless.
[13:55:19] drdee: That's a vague metric to guess at. When I did the initial testing, it was about 2G/s but that's completely unloaded.
[13:55:29] i was trolling as well ;)
[13:55:56] NFS adds latency, but probably less than the disk can.
[13:56:02] is there a tool that i can use to do some quick tests?
[13:59:20] drdee: There are many, but all of them work by hammering the storage, causing the system administrator to beat you up.
[13:59:50] I.e.: not on my watch, buddy. :-)
[14:00:27] * YuviPanda takes a hammer to labstore3
[14:01:48] well i wasn't proposing a 24 hour stress test :D
[14:07:51] drdee: I saw the conversation in -analytics. From my experience, you don't want to use NFS for that :)
[14:08:22] drdee: you can probably get larger storage, though. don't think 160g is a hard cap
[14:08:41] good to know!
[14:08:42] ty
[15:07:45] Coren: https://bugzilla.wikimedia.org/show_bug.cgi?id=54934 ?
[15:10:18] liangent: There's a lot of DDL going on in the DB as the user tables return; I wouldn't worry about replag for an hour or two still.
[15:11:46] Coren: that's one day and no more rows get replicated at all
[15:13:18] liangent: Hm. Didn't count right then. I'll ask the DBA to take a looksie.
[15:14:41] Slave_IO_State: Waiting for master to send event
[15:15:29] Looks like the replication is actually stopped; I've poked the right people.
[15:16:28] Coren: btw "every account that was affected was sent an email." is it really true?
[15:16:54] I got only one email, then I thought it's only my account that was affected
[15:16:56] liangent: Well, except for those accounts who /didn't/ have email set, obviously.
[15:17:42] Coren: no, my bot account has email set
[15:17:50] liangent: Only one email was sent per address though, regardless of how many accounts use it.
[15:18:04] liangent: If you tell me the email, I can look it up though and tell you.
[15:18:48] Coren: liangent at gmail
[15:19:28] Liangent enwikivoyage,loginwiki,wikidatawiki,wikidatawiki,wikidatawiki,wikimania2013wiki,wikimania2014wiki
[15:19:45] So that's only you. What's the name of your bot account?
[15:19:57] Coren: liangent-bot
[15:20:22] liangent: Wasn't affected.
[15:20:38] Coren: hm but I found that my bot stopped
[15:20:50] and when I log in to it manually a password change is asked
[15:22:07] liangent: ... really? Either it doesn't actually have an email set on the affected wikis, or there is a bug in the force-new-password code because it's definitely not in the list
[15:22:27] (Or it's completely unrelated, but that'd be odd)
[15:22:31] Ohwait...
[15:23:03] Nope. Definitely not. What wikis does your bot edit?
[15:23:41] Coren: zhwiki arwiki wikidatawiki
[15:23:50] * Coren ponders.
[15:23:54] Lemme go check prod.
[15:23:56] in the past, zhwikisource
[15:27:53] liangent: ah, yes, your bot uses the same email address as your primary account; it /was/ affected but "merged" for the notice.
[15:28:32] In retrospect, that email should have said "your account(s)" to make that clear.
[15:28:56] Coren: and more importantly, list all affected user names
[15:29:02] Or "account(s) associated with that email address"
[15:29:13] liangent: There was no time to make the outgoing email customised. :-(
[15:29:51] But the basic point remains: if you didn't receive an email, your account(s) were not affected.
[15:30:00] Coren: anyway, is it possible to tell me now, that do I have other accounts that are affected?
[15:30:40] are / were
[15:31:26] anyone know how to translate an iptables statement to a host file? i tried public.ip internal.ip but that doesn't seem to work
[15:31:39] Coren, btw, I never thanked you for getting arwiki back up and running. So Thank You! :)
[15:31:49] (this is like forever ago, I just realized)
[15:32:21] liangent: Liangent-test
[15:32:30] liangent: Nothing else has that email address.
[15:35:54] Coren: hrm this account is not that important, and I feel I want to test the password reset system using it
[15:36:17] It would seem to be... a good account to do tests with. :-)
[15:38:04] login via api: { "login": { "result": "WrongPass" } }
[15:40:00] fyi figured it out, just needed a 127 entry to the domain
[15:49:33] hey Coren; I am trying to create an xlarge instance but i get "Failed to create instance. " message
[15:50:24] drdee: You're probably out of quota. I'm not at home atm so I can't fix you up with more of that, but Ryan or andrewbogott_afk can probably help you when they get around.
[15:51:00] can i check that myself?
[15:52:31] drdee: "Display Quotas" on the project management page. :-)
[15:54:34] Cores: 30/80
[15:54:34] RAM: 60416/71680
[15:54:36] Floating IPs: 2/3
[15:54:37] Instances: 13/20
[15:54:37] Security Groups: 2/10
[15:54:44] ohh probably the RAM
[15:56:29] Need moar raums!
[15:56:35] aight
[16:01:25] andrewbogott_afk: can you add some more RAM to the analytics project on labs?
[16:06:38] morning Ryan_Lane
[16:06:50] was just looking for you :)
[16:06:56] can you add some more RAM to the analytics project on labs?
[16:07:10] i am trying to create an xlarge instance
[16:07:44] Ryan_Lane: I also created toolsbeta-hadooptest.pmtpa.wmflabs, and am unable to ssh in
[16:08:15] YuviPanda: i think we already have a running hadoop setup in labs
[16:08:41] drdee: I know, didn't want to mess with analytics' testing infrastructure :)
[16:08:52] drdee: I was going to try importing the logging dump and see how it goes
[16:09:01] why not?
[16:09:09] isn't that the purpose of labs?
[16:09:30] I thought you guys were using it as a testing ground, before moving things to production?
[16:09:50] configuration stuff and testing failover scenarios
[16:10:04] but if you just need to import some data and stuff like that
[16:10:09] can't see how that would hurt us
[16:10:16] ah, that's good to know, drdee
[16:10:21] email ottoman to be sure
[16:10:25] right
[16:10:34] but 99.9% sure no problem
[16:10:41] and saves you the work of setting it up
[16:10:45] my ultimate aim is to have a publicly accessible (with some gating!) hadoop cluster that can be part of toollabs or something
[16:10:45] it's quite some work
[16:10:52] drdee: yeah, I figured.
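(Working through the quota numbers pasted at 15:54, which appear to be in MB: 71680 − 60416 = 11264 MB, roughly 11 GB free. If an xlarge flavor is taken to need about 16 GB of RAM -- an assumption, not a figure given in the channel -- the project is some 5-6 GB short, which matches drdee's later request for "at the very least 6Gb" of extra quota.)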
[16:11:00] drdee: it's something I should learn at some point anyway
[16:11:03] maybe not now
[16:11:05] why would you replicate the work of the analytics team :D
[16:11:06] ?
[16:18:19] Ryan_Lane; got a sec?
[16:18:32] can you add some more RAM to the analytics project on labs?
[18:07:16] andrewbogott_afk: ping?
[18:07:21] oh, still afk
[18:40:38] Coren, can you give me toollabs access when you have a chance?
[18:42:19] Eloquence: I could too, I think.
[18:42:54] cool
[18:42:59] Eloquence: done
[18:43:09] Eloquence: assuming your wikitech username was also Eloquence
[18:43:13] yep
[18:43:20] and I got a shiny echo notification :P
[18:43:24] :)
[18:43:55] love the ascii art unicorn
[18:44:58] Eloquence: there's a different unicorn in the labs-vagrant / vagrant one, courtesy bd808|LUNCH
[18:45:26] Eloquence: oh, and https://wikitech.wikimedia.org/wiki/Labs-vagrant if you haven't seen it before. Makes setting up demo stuff on labs as easy as using vagrant
[18:47:27] hmm, time for me to mail that to wikitech-l
[18:47:40] I'm mostly poking at the labsdb setup to see if there's anything else scary going on :P
[18:48:43] yeah, figured :)
[18:56:38] YuviPanda: we shouldn't recommend that people use tools for mw dev ;)
[18:56:50] Ryan_Lane: i linked to labs-vagrant!
[18:56:54] it won't even run on tools :P
[18:56:59] ah. ok
[18:57:10] YuviPanda: does the labs-vagrant stuff do all the stuff legal wanted?
[18:57:17] Ryan_Lane: not yet, I was just looking into it
[18:57:19] like add the privacy policy to the wiki?
[18:57:20] Ryan_Lane: seems trivial.
[18:57:21] ok
[18:57:23] to port
[18:57:28] sounds good
[18:57:38] Ryan_Lane: that's why I haven't really spammed mailing lists, etc, with it yet
[18:57:48] * Ryan_Lane nods
[18:58:12] Ryan_Lane: my right hand has massively started hurting the last few days, so reducing computer time until that gets fixed.
[18:58:30] Ryan_Lane: in good news, I'm coming to SF again end of the month :)
[18:59:01] anyone remembers where the logs are on beta?
[18:59:09] YuviPanda: :D I'll be in nola
[18:59:14] ah!
[18:59:27] Ryan_Lane: fine, I'll steal your chair :P
[18:59:28] or something
[18:59:44] Ryan_Lane: https://bugzilla.wikimedia.org/show_bug.cgi?id=33890 seems like a good candidate for dynamicproxy :)
[18:59:53] I'll do it once I figure out how to modify the build andrewbogott_afk made
[19:01:41] Ryan_Lane; could you please add some more RAM to the analytics project, maybe 24Gb? (i am trying to build an xlarge instance)
[19:02:18] goddamn powercuts again :|
[19:02:20] !logs
[19:02:21] raw text: http://bots.wmflabs.org/~wm-bot/logs/%23wikimedia-labs/ cute html: http://tools.wmflabs.org/wm-bot/logs/index.php?display=%23wikimedia-labs
[19:10:14] !ping
[19:10:14] !pong
[19:16:08] Ryan_Lane, has an echo update been done recently?
[19:16:14] I just got a broken email...
[19:17:59] Krenair: yeah
[19:18:01] (on wikitech)
[19:18:07] and OSM needs to be adjusted for it
[19:18:12] I haven't had the time to look into it
[19:18:53] I can probably fix OSM, but you might want to consider putting Echo back to its previous version
[19:19:02] (until I've fixed osm)
[19:19:27] Krenair: I use the wmf branches
[19:19:30] the only broken thing is the email
[19:19:32] oh.
[19:19:48] otherwise it works fine :D
[19:21:45] Ryan_Lane, what version did you update from?
[19:22:04] I see we're now on 1.22wmf18
[19:25:45] Krenair: um. let me see
[19:26:05] is there a guide for porting your bot to run on labs?
[19:26:54] notconfusing: yes
[19:27:09] Ryan_Lane, can you point me to it
[19:27:10] ?
[19:28:45] nvm https://wikitech.wikimedia.org/wiki/Help:Move_your_bot_to_Labs
[19:30:23] drdee: sure
[19:30:29] how much again?
[19:31:22] i think we have about 10Gb available (RAM: 60416/71680)
[19:31:43] so at the very least 6Gb so I can make an xlarge instance
[19:31:52] but some spare Gb's wouldn't hurt i guess
[19:32:35] !log deployment-prep Created table bug_54847_password_resets on all wikis
[19:32:41] Logged the message, Master
[19:33:00] dr0ptp4kt: you're now at 128000
[19:33:25] Ryan_Lane, huh?
[19:34:13] whoops
[19:34:20] dr0ptp4kt: sorry, wrong person ;)
[19:34:24] drdee: ^^
[19:34:38] ty Ryan_Lane!
[19:34:46] Ryan_Lane, thx
[19:35:18] yw
[19:42:08] anyone around that happens to be logged into wikitech and part of the bots project?
[19:42:15] * Damianz_ hopes not to have to walk back to work
[19:42:48] Damianz: why, what's up?
[19:43:06] Need to reboot bots-cb as it seems to have omnomnomnom'd all its memory again
[19:43:13] heh
[19:43:14] What am I supposed to enter into the token field when logging in?
[19:43:14] one sec
[19:43:19] notconfusing: nothing
[19:43:26] unless you have two factor auth enabled
[19:43:36] eventually we'll move token to a challenge screen
[19:43:41] MW needs to support it, though
[19:44:21] Btw is there a simple way to move 2fa between devices? Or is it a case of disabling then re-enabling? I really should put the app on my new phone rather than my lovely, half broken iphone
[19:45:35] Ryan_Lane, thanks, I'm in, needed a password reset.
[19:54:05] Ryan_Lane, did you figure it out? If you're too busy please tell me
[19:58:52] anyone online know how to checkout a change set someone else created, work on it and submit those changes back to the change set as a new patchset?
[19:59:15] Yep
[19:59:22] Do you have git-review installed dan-nl?
[19:59:28] Krenair: yes
[19:59:46] Krenair: i have a clean dir with a fresh copy of core
[19:59:50] okay, in the URL should be a number. e.g. in https://gerrit.wikimedia.org/r/#/c/70112/ there's 70112
[20:00:09] go to your local clone of the relevant repo and run 'git review ' and the number
[20:00:20] ^ ignore that
[20:00:23] go to your local clone of the relevant repo and run 'git review -d ' and the number
[20:01:15] that will fetch and checkout the change they made. now all you have to do is amend their commit and run 'git review' to submit a new patch set
[20:01:47] (make sure you actually amend it rather than making a new commit)
[20:03:26] Krenair: cool, so the -d will also switch me to the corresponding branch?
[20:03:38] yes
[20:04:10] I have my shell set up to show me what branch I'm on when I'm in a git directory
[20:04:10] Krenair: great, then it's straight forward … what about the other person working on it. how do they pull down my patch set to their local copy?
[20:04:26] they do the same thing
[20:05:14] Krenair: so even if they are already on the branch they need to run git review -d number? which will pull down any other patch sets made to the changeset?
[20:05:28] it should only pull the latest patch set iirc
[20:05:48] cool
[20:06:32] Krenair: thanks. i'll give this a try
[20:09:16] maximilianklein@tools-login:~$ become viafbot
[20:09:16] sudo: sorry, a password is required to run sudo
[20:09:23] should i passwd?
[20:12:58] notawaffle: No; that means you are not in that tool's group -- did you just create it?
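(The Gerrit exchange above, 19:58-20:06, condensed into a runnable sequence. The change number 12345 is a stand-in for whatever number appears in the change's URL, and "core" for the local clone of the relevant repository.)

    cd core                 # local clone of the repository the change targets
    git review -d 12345     # fetch change 12345 and check it out on a local branch
    # ... edit files ...
    git add -A              # stage the follow-up work
    git commit --amend      # amend the existing commit rather than adding a new one
    git review              # upload the amended commit as a new patch set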
[20:13:04] notconfusing: ^^
[20:13:09] (@^#%$ autocomplete)
[20:13:38] Coren, i did just create it
[20:13:40] don't know about kvirc but xchat has an option to autocomplete to the last matching speaker in the channel rather than alphabetically
[20:13:46] notconfusing: If you were logged in when you created the tool, you won't be in its group until you log off then back on: unix group membership is only checked at login time
[20:14:10] Krenair: That would be grand; I'll go hunt for something similar.
[20:14:30] Coren, thanks it worked
[20:15:17] It's Settings -> Preferences -> Interface/Input box -> Nick completion sorted: Last-spoke order (in XChat obviously)
[20:22:48] @replag
[20:22:49] Replication lag is approximately 12:33:02.0862530
[20:23:11] is that at 12 hours of lag?
[20:40:22] OMG.................... https://meta.wikimedia.org/w/index.php?title=Talk:October_2013_private_data_security_issue&diff=5939348&oldid=5939310
[20:41:07] kosher is a ********
[20:44:15] examiner is equally untrustworthy as its authors.
[20:46:44] Steinsplitter: What else is new?
[20:47:35] nothing, only some users are making drama. :/
[20:48:15] Coren: but not notable.
[20:48:17] imhop
[20:48:19] -p
[20:49:27] Betacommand: select rev_timestamp from revision order by rev_timestamp desc limit 1 -> 20131004074947 seems darn up to date to me. How does wm-bot calculate lag?
[21:59:33] should i wrap this in a shell script, since it doesn't seem to work
[21:59:35] local-viafbot@tools-login:~/harvest_infobox_book$ jsub python harvest_books.py
[22:00:23] why jsub and not psub?
[22:01:01] yes you would have to wrap that
[22:04:13] Gryllida, that's what it says here https://wikitech.wikimedia.org/wiki/Help:Move_your_bot_to_Labs#Submitting_simple_one-off_jobs_using_.27jsub.27
[22:04:31] links from the pywikipediabot setup guide
[22:08:21] Gryllida, do the paths have to be absolute?
[22:15:07] it does require absolute paths
[22:24:22] $PBS_O_WORKDIR may be available as the directory you submitted your job from
[22:24:37] so adding a 'cd $PBS_O_WORKDIR' line at the top of your script may have the desired effect
[22:25:02] (that is not advice specific to wmflabs; just how the job submitter may work)
[22:25:10] excellent, also which python is run, because it is not seeing local libraries?
[22:26:52] how local are they?
[22:27:01] in my home directory
[22:55:00] I suspect that the script would run on a different node than the head node and you might have to request installation of these libraries on all nodes
[22:55:57] I can be missing something obvious however and having someone else peek at your question would be useful
[22:56:33] Gryllida: what's your screen resolution?
[22:57:19] echan
[23:04:31] Gryllida: Actually, one's home is visible from every node so if one's script sets up all the paths right, it'll Just Work™
[23:30:03] nifty
[23:39:47] Coren yes i'm trying to use virtualenv now, but getting logged in is difficult
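(A minimal wrapper of the kind suggested in the jsub exchange above, 21:59-22:25. It reuses the directory and script names pasted in the channel; the wrapper file name is made up for illustration, and whether extra libraries also need to be available on the execution nodes, as discussed at 22:55, is left open.)

    #!/bin/bash
    # wrapper.sh -- the bare "jsub python harvest_books.py" at 21:59 did not
    # work, so wrap the command in a script and use absolute paths. $HOME
    # expands to an absolute path and, per 23:04, the home directory is
    # visible from every execution node.
    cd "$HOME/harvest_infobox_book"
    python harvest_books.py

    # then, from tools-login after becoming the tool:
    #   chmod +x "$HOME/harvest_infobox_book/wrapper.sh"
    #   jsub "$HOME/harvest_infobox_book/wrapper.sh"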