[00:32:41] Hi, I used to be able to login. But now when I try to ssh into bastion, I get an error message: The RSA host key for bastion.wmflabs.org has changed, [00:33:05] I tried regenerating the public key and uploading it again, but I keep getting the same error message [00:38:08] Has anybody seen anything like this before? [00:39:04] Howie: The bastions moved to another server. Are you subscribed to labs-l? Basically, you need to remove the lines pertaining to bastion.wmflabs.org from ~/.ssh/known_hosts; then login should work. [00:40:11] ah! :) thank you very much [00:40:29] Am I subscribed to labs-l? What is labs-l? [00:40:40] (BTW, I was now able to login) [00:41:08] https://lists.wikimedia.org/mailman/listinfo/labs-l [00:42:49] thank u. I just joined [00:51:08] Hi. I want to scp a file from one of my instances. Do you know what the syntax for the domain name is? [01:20:19] Howie: If you use the ProxyCommand setup from https://wikitech.wikimedia.org/wiki/Help:Access, you should be able to just use "scp INSTANCE.eqiad.wmflabs:/path/to/remote/file /path/to/local/file". [01:30:19] scfc_de: thanks, that works great! [02:24:51] a930913: I don't think you understand the context. How is wm-bot relevant to not using a connection to freenode.net? [03:00:24] Coren: thx :) I'll continue briefly here [03:00:38] So what do I do to get round this. How do I set up identd and get that IP? [03:00:59] I don't particularly like having a public IP on those nodes. [03:01:12] You don't need to open anything but identd. [03:01:19] OK [03:01:49] tcp/113 [03:02:38] And identd is just a daemon. It's the deb package "identd". Just make sure you have ensure => installed for the package and ensure => running for the service in puppet and you're done. [03:02:51] These instances aren't puppetised unfortunately [03:03:03] Then 'apt-get install identd' [03:03:07] They're pretty basic linux boxes, most of it is running straight out of git and bash [03:03:08] But also, puppetize them. :-) [03:03:23] managed by a single entry in crontab that does everything else [03:03:47] All identd does is allow a server on the remote end of a connection to request the username of a socket on your end. [03:04:12] Hm.. package not found [03:04:16] Look at wm-bot's whois and notice the conspicuous lack of ~ in front of its username. :-) [03:04:43] Right [03:04:55] oidentd ? [03:05:08] Oh, sorry, 'pidentd' [03:05:12] OK [03:05:43] Hm.. they're not running as separate users though [03:06:11] Does it differentiate users by the same name on different instances? [03:06:31] I guess it uses something more unique than local username or local userid when coming from behind NAT [03:06:40] Krinkle: If they have public IPs, yes. foo@208.80.155.141 is distinct from foo@208.80.155.142 say [03:07:22] The point is, without identd freenode doesn't care what the client claims its local username is. [03:08:53] Coren: just noted that the parameterized interwiki links [[toollabs:/foo/bar]] are broken now. put a memo on labs-l [03:08:54] Hmpf, this is getting out of proportion. I can't justify setting up a public IP and/or creating loads of user accounts just for a bot. I'm like behind a dozen layers of legacy crap and abstraction layers. This is insane... [03:09:34] Sry for the brusqueness, but it really hurts [03:09:55] hey guys [03:10:12] i want to help out with wikimedia on the sysadmin/devops side [03:10:21] Freenode is offering an I-line, which we accepted for Toolserver in the past and that was great.
Why wouldn't we do the same for Wikimedia's IP range? I'm pretty sure there was an RT ticket for that last year. A shame we didn't take it. [03:10:27] also, are you guys transitioning more from php to python? [03:10:52] I create bots using an abstraction layer, they run as the same user. Having that system also create a local user is going to be hard to manage properly. [03:11:18] And if I'd run this within tool labs, I'd run into the same problem, as they'd all be tools.cvn [03:11:41] maybe shared between a couple exec nodes, but that's it. [03:11:49] spread* [03:12:29] I'm more than a little confused why you need to have many bots in the first place if they're running the same code; or why they can't run as distinct users if they are not. I guess I'm just not getting a clear picture of your use case. [03:13:32] pancakes9: That depends; we're not "moving away" from PHP; web-facing stuff tends to be PHP by default, internal tools python by default -- but neither are hard rules. [03:14:27] The CVNBot software is what bridges between irc.wikimedia.org recentchanges feed and the patrollers of WMF wikis. One bot monitors one or more wikis (e.g. we have one for commons, one for nlwiki, one for all simple.* wikis, one for all fr.* wikis, and one for 100s of smaller wikis, and so on) [03:15:12] The bot is relatively lightweight, and is modelled after creating more instances; it doesn't have an internal ability to manage more channels. Each bot has a home channel. [03:15:39] Sure the software could be changed, but that's like 3 layers down the chain. I'm just trying to keep the lights on. And today when restarting one of the bots, it got killed by the connection limit. [03:15:58] So now, whenever one of the bots needs a restart, I'm playing Russian roulette with potentially never getting yet another back online. [03:15:59] Ah, the Mongolian horde approach. :-) [03:16:06] * Coren ponders. [03:16:59] I'm not defending a single line of CVNBot code, in fact, nobody currently active in the entire wikimedia community is; the maintainers have left us, unfortunately. But there's only so many big problems we can fix in one day... [03:17:00] Krinkle: if you could spare a few moments (after/while fixing your bots) to merge my pull request on Intuition, I'd be happy. [03:17:32] hedonil: I saw the update earlier this morning, it's been less than a day. I'll get to it tomorrow (or as siebrand, he has access) [03:17:40] ask siebrand * ) [03:17:53] Krinkle: fine. will do. [03:18:00] He should wake up 3 hours from now. I'm going to sleep very soon. [03:18:00] That's really not a model I had intended to support at all -- it's far too easy for one runaway instance to eat up all the connections and break things for everyone that way. The easiest way to have it work then really is to run it on one instance and not bother with identd (but have a public IP) -- this will give your bot horde its own per-IP limit and we can see about asking freenode to give [03:18:35] just that IP a higher limit. [03:19:08] Coren: Well, in a way we're relying on that already. I can create local usernames all I want right now, right? Even without a public IP and just identd that's a lot of a-z{2,16} combinations. I'm still baffled by the idea that local username matters over their TCP limit. I guess that's a general convention for limiting connections, not freenode specific? [03:19:16] It's far easier to ask for your instance to be bumped to, say, 40 clients than asking for the general NAT address to be allowed an arbitrarily large limit.
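A minimal sketch of the identd setup described above, for an un-puppetised instance. The package name (pidentd) and port (tcp/113) come from the conversation; the Puppet resource titles in the comments are assumptions.

    # one-off install on an un-puppetised instance
    sudo apt-get install pidentd
    # confirm something is answering on the ident port (tcp/113)
    sudo netstat -tlnp | grep ':113'
    # a successful setup shows up on freenode as the missing ~ in front of the
    # username in the bot's whois, as noted above
    #
    # the rough Puppet equivalent mentioned above (resource titles are guesses):
    #   package { 'pidentd': ensure => installed }
    #   service { 'identd':  ensure => running, require => Package['pidentd'] }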
[03:20:20] It's general, yes, and they support it exactly for the reason of allowing shared hosting. But you'd still need the public IP because identd would fail through a NAT. [03:20:21] Coren: it being too easy for one runaway instance to... you mean that about how it is now, or how it would be? [03:21:22] I mean both; right now you're suffering the junior version of this which is why there is a mechanism in place to do per-user. The worst that can happen now is that we run out of connections on the NAT IP; we can't actually flood freenode with bots. [03:21:33] Coren: Maybe we can ask Freenode to raise the limit for the general catch-all IP that is the endpoint of all natted instances. [03:22:00] That way if that one goes crazy, they'll let us know and we can find the offender. And major projects that want their own IP and limit (like Tool Labs) can do so. [03:22:32] and I suppose we could get Tool Labs IP raised as well, though it seems that's not needed for the moment, I think it will come up at some point. [03:22:42] That's what I /don't/ want to do; the barrier to entry to the labs is low enough that it's saner for the limit to be low for the general case and allow exceptions for specific projects. [03:23:10] Considering the amount of maintenance I find myself under, I'd very much like to get rid of all cvn infrastructure and use tool labs instead. But that'll require some weeding down the stream and stack. [03:23:23] Coren: OK, I agree. [03:23:45] Coren: But the reason I proposed it is because that way we control it, instead of having to escalate to Freenode for each labs project. [03:24:11] I don't even know where to begin to get my public IP exempted there. Who's going to do that on whose behalf, and who would monitor it if IPs change or data centers migrate, etc. [03:24:16] To be fair, the number of projects that can legitimately ask for an exception to running ~20 IRC bots is going to always be very low. [03:24:24] True [03:24:31] Anyway, what do you recommend knowing all this now. [03:24:48] The immediate problem is that my bots are dying one by one. [03:24:58] Coren: are there wikimedia employees working near Ashburn, VA? [03:25:17] Getting a public IP will immediately give you a ~20 client limit that isn't shared with anyone. That should tide you over? [03:25:20] and keeping live patrollers from getting edits in front of them, which likely will never get a second pair of eyes on them. [03:25:26] pancakes9: We have just the one, as far as I know. [03:25:33] Yeah, right now I only have 7 bots running. [03:25:38] OK [03:26:04] I expect once our migration is complete it'll be about 12 or maybe 15. [03:26:28] https://meta.wikimedia.org/wiki/Countervandalism_Network/Bots (most are down or using privately run backups) [03:26:49] So you'd still be okay for a while, and we'd be in a good position to either start thinking about another mechanism or requesting a freenode I-line. [03:30:20] Coren: How do I go about that btw? [03:30:27] https://wikitech.wikimedia.org/w/index.php?title=Special:NovaAddress&action=allocate&project=cvn&region=eqiad [03:30:29] I think? [03:30:55] Failed to allocate. [03:31:01] I guess I need some credits :) [03:31:21] I'd need 2 (cvn-app4 and cvn-app5) [03:31:34] I don't expect to have more application servers anytime soon. [03:32:42] Ah; lemme up your quota. [03:32:47] What's the project name? [03:32:50] cvn [03:33:28] You can haz IP.
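Circling back to the two ssh questions at the top of this log, a hedged sketch in shell form; the Host pattern and the YOUR-SHELL-NAME placeholder are illustrative, and the canonical ProxyCommand setup is the one on Help:Access.

    # clear the stale bastion host key (equivalent to deleting the bastion.wmflabs.org
    # lines from ~/.ssh/known_hosts by hand)
    ssh-keygen -R bastion.wmflabs.org

    # example ~/.ssh/config stanza for the ProxyCommand setup; with this in place,
    # "scp INSTANCE.eqiad.wmflabs:/path/to/remote/file /path/to/local/file" works directly
    cat >> ~/.ssh/config <<'EOF'
    Host *.eqiad.wmflabs
        User YOUR-SHELL-NAME
        ProxyCommand ssh -W %h:%p YOUR-SHELL-NAME@bastion.wmflabs.org
    EOF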
[03:34:07] Once you've allocated them, you just need to assign them to the actual instances (same place, under 'manage addresses') [03:34:48] Yep, done [03:35:39] Do I need to restart? [03:38:37] Woo, one of the bots trying to reconnect just connected. [03:49:23] !log cvn Installed pidentd package on cvn-app servers. [03:49:36] !log cvn CVNBot13 (migrated from KyluBot) now runs on cvn-app4 for #cvn-wp-da [03:50:10] Logged the message, Master [03:50:11] !log cvn CVNBot12 (migrated from KrinkleBot7) now runs on cvn-app4 for #cvn-wp-nl [03:50:11] Logged the message, Master [03:50:12] Logged the message, Master [03:54:10] nice [04:12:05] !log cvn Renamed KrinkleBot2 to CVNBot2 (assigned NickServ account CVN-Bots, shard0) [04:12:14] Logged the message, Master [05:16:41] help on gerrit please... http://www.mediawiki.org/wiki/Gerrit/Tutorial#Set_your_username_and_email 'says you can use "@gerrit.wikimedia.org", substituting your gerrit username.', but doing so I get an error that it does not match my user account. [05:40:58] 3Wikimedia Labs / 3tools: Tool Labs project URLs don't work without a trailing slash - 10https://bugzilla.wikimedia.org/64274 (10MZMcBride) 3NEW p:3Unprio s:3normal a:3Marc A. Pelletier $ curl -I "https://tools.wmflabs.org/gerrit-patch-uploader/" HTTP/1.1 200 OK Server: nginx/1.5.0 ^ This is fine. N... [06:20:13] 3Wikimedia Labs / 3tools: Tool Labs project URLs don't work without a trailing slash - 10https://bugzilla.wikimedia.org/64274#c1 (10Ori Livneh) a:5Marc A. Pelletier>3Yuvi Panda I *think* it's : lo... [08:12:41] Hi all, how to check edits of a particular day? E.g. is that possible if I want to see all the edits of a particular day (older than a month)? [08:26:31] is there anyone to give user "xqt" (svn committer and pywikibot developer) shell access? [08:27:18] hello by the way and thank you :P [08:33:32] anyone there? [08:33:41] petan: Coren ^ [08:34:30] subha: almost, well, you have =days and =from, so what you can do is stuff like https://wikitech.wikimedia.org/w/index.php?title=Special:RecentChanges&from=20140423000000&days=0 [08:34:56] Amir1: https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools#Getting_access [08:36:10] subha: from= is a timestamp, i just used the date and set time to 000000, so if you set that and then the number of days to show.. try it and you'll see [08:37:11] mutante: Thanks, but it doesn't give any historical data (older than 30 days) [08:37:29] tried and failed already :/ [08:38:15] valhallasw: hi, thanks https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Access_Request/xqt [08:38:27] I hope someone gets this fast [08:38:49] subha: hmmm.. i see ... [08:40:37] mutante: I appreciate your support :) [08:40:52] Could anyone help me with total bytes added by a user on any wiki? [08:40:59] @replag [08:41:00] Replication lag is approximately 00:00:01.7754010 [08:44:46] Krinkle|detached: I was asking why you couldn't pack bots together (i.e. use wm-bot). One connection/bot can serve many applications. [08:45:35] subha: https://wikitech-static.wikimedia.org/w/index.php?title=Special:RecentChanges&limit=500&namespace=1&days=120 [08:46:46] subha: it's the limit= combined with days= and depends on number of edits in namespace, not a hard 30-day thing [08:47:23] and there is static wikitech [08:47:30] let me check [08:47:46] thanks again mutante: [08:48:24] you're welcome [08:48:33] can i add the time duration? [08:49:23] that should be days= ?
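The Special:RecentChanges trick above is capped by limit= and by how far back recent changes are kept; the api.php route that comes up just below (rcstart/rcend) looks roughly like this. The rcprop/rclimit values are only illustrative.

    # all changes on one particular day (rcstart is the newer timestamp, rcend the older one)
    curl 'https://wikitech.wikimedia.org/w/api.php?action=query&list=recentchanges&rcstart=2014-03-02T00:00:00Z&rcend=2014-03-01T00:00:00Z&rcprop=title|user|timestamp|sizes&rclimit=500&format=json'
    # note: like the web UI, this only reaches back as far as recent changes are retained;
    # for a single user's edits and their per-edit size deltas, list=usercontribs with
    # ucprop=sizediff is the analogous query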
[08:50:33] but you also can't go over limit= for the number of changes [08:51:26] hmm, maybe ask Roan too [08:53:04] subha: try using the API, it has these things: [08:53:13] rcstart - The timestamp to start enumerating from; rcend - The timestamp to end enumerating [08:53:24] that should get you what you want [08:53:37] and how do i use the API? [08:53:49] from https://en.wikipedia.org/wiki/Special:ApiSandbox? [08:53:49] -> https://wikitech.wikimedia.org/w/api.php search for "recentchanges" in there [08:53:57] oh, ok [08:54:14] and how to check the amount of bytes added by a user? [08:55:19] total amount of bytes added in "Article" namespace* [09:00:01] there might be a tool for that, but dunno [09:48:42] a930913: Anything is possible. The situation is that there is a 10-year-old infrastructure in place with loads of dependencies and complex factors. Changing it at the application level is not a sensible solution in this case. [09:49:13] also, changing the apps isn't very rewarding since this is the top of the stack with the lowest level of the stack already being phased out this year, so it's all going away and would require massive changes. [09:49:24] Krinkle: What does CVN actually do these days? [09:49:32] All I've seen is tumbleweed. [09:49:39] Ask me tomorrow. [09:49:42] a930913: http://meta.wikimedia.org/wiki/SWMT [09:49:43] o/ [09:53:47] a930913: @ 'merging bots': different bots have different maintainers and are written in different languages. I'm also not sure what the advantage would be -- multiple connections are not really an issue for freenode. [09:55:01] valhallasw: I used my own bot before labs. Now I pump it all through wm-bot. [09:55:41] eh.. why does this change look like i added 3 users, when i really just added 1 [09:55:50] https://wikitech.wikimedia.org/w/index.php?title=Nova_Resource%3ABastion&diff=110518&oldid=110464 [09:56:45] used the forms only.. would be nice if that could be multiple lines [09:57:11] a930913: another advantage is that it's easier to selectively ignore bots if they have different names :-) [09:59:01] valhallasw: Use a prefix? [09:59:26] I have no clue how to filter by message content in irssi [09:59:31] ignoring a username is trivial [10:00:11] valhallasw: I think it's trivial because that's what you know. [10:01:04] valhallasw: I know neither, but know I can highlight certain strings/regexes, and thus probably block like that too. [10:06:22] apparently I can pass -regex to /ignore, indeed. [10:16:22] !log deployment-prep stopping udp2log and starting udp2log-mw instead (known old bug that prevents logging) [10:16:24] addshore: ^^^ [10:16:40] :P [10:16:48] 27860 ?
Ss 0:00 /usr/bin/udp2log --config-file=/etc/udp2log/mw --daemon --pid-file /var/run/udp2log-mw.pid -p 8420 --recv-queue=524288 [10:16:58] addshore: on start up, the udp2log service is launched [10:17:01] all appearing now, cheers hashar [10:17:04] then udp2log-mw which uses the same port [10:17:06] so it bails out :( [10:17:12] gotta fix it somewhere, maybe in puppet [10:17:12] hehe :P [10:17:21] I filed a bug about that a while ago but never took the time to fix :-( [10:21:25] addshore: I haven't verified but it should be good now [10:21:38] yup, looks it from my side [10:21:41] [= [10:24:27] \O/ [10:39:28] 3Wikimedia Labs / 3wikidata: Fatal error: Class 'LuceneSearch' not found at /srv/common-local/php-master/includes/search/SearchEngine.php on line 463 - 10https://bugzilla.wikimedia.org/64283 (10Addshore) 3NEW p:3Unprio s:3normal a:3Wikidata bugs Happens when making a new Item using the special page... [10:39:41] 3Wikimedia Labs / 3wikidata: Fatal error: Class 'LuceneSearch' not found at /srv/common-local/php-master/includes/search/SearchEngine.php on line 463 - 10https://bugzilla.wikimedia.org/64283 (10Addshore) p:5Unprio>3Highes s:5normal>3critic [10:45:42] 3Wikimedia Labs / 3wikidata: Fatal error: Class 'LuceneSearch' not found at /srv/common-local/php-master/includes/search/SearchEngine.php on line 463 - 10https://bugzilla.wikimedia.org/64283 (10Lydia Pintscher) [11:25:16] halfak: ping [14:48:56] 3Wikimedia Labs / 3deployment-prep (beta): Fatal error: Class 'LuceneSearch' not found at /srv/common-local/php-master/includes/search/SearchEngine.php on line 463 - 10https://bugzilla.wikimedia.org/64283#c5 (10se4598) 5PAT>3RES/FIX I suppose it's fixed now, closing and moving to labs/beta-component [15:10:03] halfak: ping [15:14:56] 3Wikimedia Labs / 3tools: Tool Labs project URLs don't work without a trailing slash - 10https://bugzilla.wikimedia.org/64274#c2 (10Tim Landscheidt) 5NEW>3RES/DUP *** This bug has been marked as a duplicate of bug 64058 *** [15:14:56] 3Wikimedia Labs / 3tools: http://tools.wmflabs.org/$TOOL doesn't redirect to http://tools.wmflabs.org/$TOOL/, but gives error instead - 10https://bugzilla.wikimedia.org/64058#c3 (10Tim Landscheidt) *** Bug 64274 has been marked as a duplicate of this bug. *** [15:18:56] YuviPanda: I understand your fix for https://bugzilla.wikimedia.org/64058 ("http://tools.wmflabs.org/$TOOL doesn't redirect to http://tools.wmflabs.org/$TOOL/, but gives error instead") is "finished"? http://tools.wmflabs.org/hay for example seems to work as expected. [15:19:28] scfc_de: yeah but it is a hack, since it just passes things back to lighty, and *lighty* does the redirect [15:20:33] YuviPanda: Good enough for me :-). Thanks. [15:20:43] 3Wikimedia Labs / 3tools: http://tools.wmflabs.org/$TOOL doesn't redirect to http://tools.wmflabs.org/$TOOL/, but gives error instead - 10https://bugzilla.wikimedia.org/64058 (10Tim Landscheidt) 5NEW>3RES/FIX a:5Marc A. Pelletier>3Yuvi Panda [15:24:24] YuviPanda: scfc_de: thanks for taking care of the issue. but problem: with this hack https is redirected to http [15:24:33] oh wat [15:24:39] (!§)"!=/"§!(/$!/$/!("§/! [15:24:43] Wait, that shouldn't be possible. [15:24:58] yeah, that's what's happening [15:25:13] I guess lighty is getting things via http so doesn't know of the https proxy in front of it [15:25:18] Oh, duh. Yeah, that's a side effect of lighttpd doing the redirect. /it/ doesn't know that the original was https. [15:25:29] yeah [15:25:45] any headers to query?
[15:25:48] But that would also be the issue for tools.wmflabs.org/TOOL/A/B/C, so we should revisit it there. [15:26:24] scfc_de: Indeed. Yes, there is a header that tells the server it was HTTPS, but the scripts would get it too late. [15:30:26] 3Wikimedia Labs / 3tools: lighttpd redirects fail without a trailing slash - 10https://bugzilla.wikimedia.org/59926#c6 (10Tim Landscheidt) 5RES/FIX>3REO Unfortunately, this redirects from https to http, i. e.: | [tim@passepartout ~]$ lynx -mime_header -dump https://tools.wmflabs.org/matthewrbowker/cnrd... [15:31:56] 3Wikimedia Labs / 3tools: http://tools.wmflabs.org/$TOOL doesn't redirect to http://tools.wmflabs.org/$TOOL/, but gives error instead - 10https://bugzilla.wikimedia.org/64058#c4 (10Tim Landscheidt) The issue of https://tools.wmflabs.org/hay being redirected to http://tools.wmflabs.org/hay/ (https -> http) is... [15:52:55] Hi everyone, a question, Tools' help says snapshots would be re-enabled after eqiad migration, is this being worked on? Or is some other kind of backup available? [15:53:21] Coren: ^ [15:54:19] jem-: Not for some time; migration was a prerequisite but right now I'm hesitant to reintroduce possible instability. I'm going to wait until the new DC is ready so I can test the setup there before bringing it back. [15:54:34] wait, another new DC? [15:54:54] YuviPanda: eqiad isn't new; it's our primary DC. We're setting up a new secondary one. [15:55:01] aaah! [15:55:02] that one [15:55:02] right [15:55:05] Ok, so the page sholud be updated [15:55:57] As the message is signed by you, Coren, I don't know if it would be correct to update it myself... [15:56:12] Agh, should* [15:58:50] jem-: I'll tweak it when I get a minute. [15:59:32] Ok then [15:59:51] Well, another question now that I have your attention :) [16:00:07] Should the .htaccess explanations be removed? [16:00:43] I've read that "other web servers can be used", but... [16:02:01] Probably; it's really not all that relevant anymore. [16:02:46] Ok [16:05:13] Well, it doesn't seem very clear how to properly do it, so I prefer not to mess [16:32:15] hello. i just changed my public ssh key (local and https://wikitech.wikimedia.org) and i'm not able to ssh @tools-login.wmflabs.org.. [16:32:19] help please [16:39:26] 3Wikimedia Labs / 3tools: lighttpd redirects fail without a trailing slash - 10https://bugzilla.wikimedia.org/59926#c7 (10Liangent) Not the same issue... [17:15:07] Hi again, I guess it is known that doing: ssh jem@tools-login.wmflabs.org mysql -h ... doesn't work from a remote host, so, could it be done in another way? [17:16:46] YuviPanda ? Coren ? (sorry to bother) [17:17:12] jem-: is the remote host inside labs? [17:17:28] I don't see why that wouldn't work. What issue are you experiencing? [17:17:44] No, YuviPanda [17:18:12] Well, Coren, I'm trying the same command inside and outside with ssh but only the first works [17:18:38] I just get the help of mysql [17:21:14] That should actually be the other way around; tools-login.wmflabs.org is a public IP and should only be reachable from outside labs. So I'm a bit confused. Where exactly are you doing this from and what error message(s) do you get? [17:22:28] <^d> Anyone about who knows how we can add a wiki to beta? [17:22:29] Well, the point is that I don't get any error message [17:22:47] Just the mysql command line help [17:22:57] I copy the lines...
[17:23:17] jem@tools-login:~$ mysql -h eswiki.labsdb eswiki_p -e "select * from user limit 3" [17:23:28] (works without problem) [17:24:02] ^d: I thought I'd seen a page on wikitech about how to do that but I'm not finding it now [17:24:11] <^d> I know how to do it for prod. [17:24:12] ssh jem@tools-login.wmflabs.org mysql -h eswiki.labsdb eswiki_p -e "select * from user limit 3" [17:24:15] <^d> [[Add a wiki]] [17:24:17] ^d: Found it -- https://wikitech.wikimedia.org/wiki/Nova_Resource:Deployment-prep/Add_a_wiki [17:24:26] <^d> Ah! [17:24:29] And I get: If you are having access problems, please see: https://wikitech.wikimedia.org/wiki/Access#Accessing_public_and_private_instances [17:24:32] mysql Ver 15.1 Distrib 5.5.37-MariaDB, for debian-linux-gnu (x86_64) using readline 5.1 [17:24:37] ... Usage, etc. [17:24:58] Oh, wait [17:25:24] <^d> bd808: That's exactly what I need, thanks. [17:25:26] It seems escaping is needed :) [17:25:34] <^d> I want to add a bunch of weird languages for search testing. [17:25:41] That sounds like fun [17:26:24] I was distracted because with a simple "ls -l" the quotes aren't needed [17:27:05] (I mean, I need to quote 'mysql... "') [17:37:34] jem-: Actually, you could just (double-)quote "select * from user limit 3". ssh's behaviour is *very* unorthodox (IMHO broken) in that regard, so if you have a cascade of "ssh host1 ssh host2 ssh host3 command arg1 'ar g2'" you need to know how many invocations there will be to properly quote "ar g2". [17:41:41] 3Wikimedia Labs / 3tools: Killed Mysql queries still running - 10https://bugzilla.wikimedia.org/64140#c7 (10kolossos) Process of enwiki is still there: MariaDB [(none)]> show processlist; +----------+--------+-------------------+-------------------------+---------+---------+-----------+----------------------... [17:42:50] I see, scfc_de, thanks [17:54:06] Ok, everything is working fine now [17:54:14] Thanks everyone for the help [17:54:26] 3Wikimedia Labs / 3tools: Puppet is stuck on exec nodes due to ttf-dejavu-core being already defined - 10https://bugzilla.wikimedia.org/64156#c2 (10Gerrit Notification Bot) Change 127476 abandoned by Tim Landscheidt: Tools: Remove already defined ttf-dejavu-core from exec_environ Reason: Done in Ie815547ab9... [17:55:12] 3Wikimedia Labs / 3tools: Puppet is stuck on exec nodes due to ttf-dejavu-core being already defined - 10https://bugzilla.wikimedia.org/64156#c3 (10Tim Landscheidt) 5PAT>3RES/FIX a:5Tim Landscheidt>3Marc A. Pelletier Fixed by Gerrit change #129206. [18:01:00] what's the difference in code review models available on wikimedia labs [18:03:31] rohit-dua: What do you mean by that? [18:04:08] i mean diff bw gated-trunk/push-for-review and straight push model [18:04:25] https://www.mediawiki.org/wiki/Gerrit/Project_ownership#Other_MediaWiki_extensions [18:06:08] Wikimedia Labs is more about servers provided for development. You mean Gerrit. Doesn't the wiki page explain the two models for you? [18:12:21] <^d> Krinkle, bd808|LUNCH: beta update is stuck waiting for an obviously-open executor again. [18:15:12] scfc_de: why lynx -dump vs curl -I [18:17:29] YuviPanda: Old habits :-). I know this lynx line, but don't use curl much. Now that I know I just have to type seven characters ... [18:17:42] scfc_de: :D [18:25:06] <^d> beta jobs *really* backed up. [18:32:56] ^d: crap. Yesterday the "fix" was random. 
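The quoting issue scfc_de describes above, spelled out with the same query: each ssh invocation strips one layer of quoting, so the remote command needs its own.

    # the local shell removes the double quotes, so the remote shell sees the query words
    # as separate arguments and mysql just prints its usage text
    ssh jem@tools-login.wmflabs.org mysql -h eswiki.labsdb eswiki_p -e "select * from user limit 3"

    # wrapping the whole remote command in single quotes preserves the inner quotes
    # for the remote shell, so mysql gets the query as one -e argument
    ssh jem@tools-login.wmflabs.org 'mysql -h eswiki.labsdb eswiki_p -e "select * from user limit 3"'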
[18:33:26] I'll try what hashar said he did and see what happens [18:35:49] ^d: I killed the slave process on deployment-bastion, but that didn't seem to fix the problem this time. [18:36:04] <^d> Blargh [18:36:12] It restarted but the master still thinks that it's free and busy at the same time [18:36:57] * ^d stabs jenkins, softly. [18:37:05] It seems like that matrix database update job is involved each time this locks up [18:56:59] what is LDAP username (for gerrit)?? [19:00:12] It's your "instance shell account name" at https://wikitech.wikimedia.org/wiki/Special:Preferences#mw-prefsection-personal [19:00:24] <^d> No it's not your shell name. [19:00:30] <^d> It's your wiki name, the cn from ldap. [19:01:36] ^d: you mean the username ?? [19:01:54] <^d> Use the same username for gerrit that you use to login to wikitech. [19:02:16] ^d: thank you [19:02:21] <^d> you're welcome [19:06:42] Ah, for the web interface. Sorry for the confusion. [19:15:46] is it possible to read the bz2 dumpfiles from enwiki with mediawiki-utilities? [http://pythonhosted.org/mediawiki-utilities/core/xml_dump.html] [19:16:28] 3Wikimedia Labs / 3deployment-prep (beta): Not getting any result for VisualEditor media search in Betalabs - 10https://bugzilla.wikimedia.org/64253#c3 (10James Forrester) a:3None (In reply to Alex Monk from comment #2) > Hmm... Reminds me of bug 63989 No, this was a deployment issue with the Beta Labs mo... [19:17:31] halfak: ^ [19:24:07] <^demon|lunch> bd808: Doesn't help that beta's also force-redirecting me to https and I get connection timeouts on that. [19:24:13] <^demon|lunch> :\ [19:24:23] <^demon|lunch> ERR_CONNECTION_REFUSED [19:24:27] <^demon|lunch> Actually, not timeout. [19:24:58] ^demon|lunch: Look in your cookies for an https cookie (can't remember the name). [19:25:16] Ssl is all messed up in beta still I think [19:26:19] I was locked out when we first switched to eqiad until I found the cookie that was making varnish/nginx redirect me to the broken ssl instances [19:27:52] ^demon|lunch: Also (/me crosses fingers) I may have fixed the deadlock that was breaking the beta update jobs [19:29:12] <^demon|lunch> Ah, there we go [19:29:12] It's all just manual in Jenkins UI at the moment, but I changed the down stream trigger for the scap job from a job step in the upstreams to a post-build step. [19:29:18] <^demon|lunch> cookie worked [19:29:34] Cool. Someday we'll get the ssl working again [19:30:04] There's a bug that hashar was working on; I think it's blocked on ops in some way [19:30:18] * bd808 may be lying [19:30:54] :-D [19:32:27] <^demon|lunch> !log deployment-prep: created zhwiki, ukwiki, ruwiki, kowiki, hiwiki, jawiki for testing [19:32:28] deployment-prep: is not a valid project. [19:32:46] how much time does Git/New repositories/Requests take ??? [19:32:54] to be accepted [19:34:04] <^demon|lunch> rohit-dua: Not long. Either qchris or I usually get to it within 24-48h. [19:34:08] ^demon|lunch: Try it without the colon after deployment-prep [19:34:17] <^demon|lunch> !log deployment-prep created zhwiki, ukwiki, ruwiki, kowiki, hiwiki, jawiki for testing [19:34:19] Logged the message, Master [19:34:24] <^demon|lunch> Ah [19:38:22] <^demon|lunch> hashar: Where can I find the root mysql password for deployment-db1? [19:39:34] ^demon|lunch: /root/secret I guess ? [19:39:50] its: ****************** [19:39:52] do we have sqlite in tool-labs? [19:40:02] <^demon|lunch> hashar: No luck :( [19:40:18] hmm [19:40:54] rohit-dua: Yes. Something not working for you? 
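A quick way to confirm which account name gerrit knows you by (per the exchange above: the wikitech login name, not the shell name). Port 29418 is gerrit's standard ssh port; the username is a placeholder.

    # prints a short greeting and exits when the key and username match your gerrit account
    ssh -p 29418 WIKITECH-USERNAME@gerrit.wikimedia.org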
[19:40:54] ^demon|lunch: maybe we forgot to copy the password file to db1 when migrating :] [19:41:43] <^demon|lunch> Could reset it maybe... [19:41:47] scfc_de: nothing so. just choosing b/w sqlite and mysql for gsoc project.. [19:43:11] ^demon|lunch: it was on the pmtpa homedir for the root user which was on GlusterFS [19:43:18] ^demon|lunch: so I am pretty sure it is gone now [19:43:45] ^demon|lunch: sorry :-( [19:43:55] <^demon|lunch> I can easily reset. [19:44:01] <^demon|lunch> Wondering if it'll break $something [19:44:02] do! [20:04:20] !log integration switching integration-dev to use the project puppetmaster instance [20:04:22] Logged the message, Master [20:06:40] !log integration Updated puppetmaster local operations/puppet.git clone [20:06:42] Logged the message, Master [21:50:21] what are the hosts for bits.beta.wmflabs.org ? I need to touch a .js file on them so that icons show up. [21:53:11] 3Wikimedia Labs / 3tools: tools.wmflabs.org inaccessible via labs instances - 10https://bugzilla.wikimedia.org/54052 (10DrTrigon) s:5major>3blocke [21:53:33] alternatively, does sync-file on deployment-bastion only sync to labs hosts? [21:53:41] 3Wikimedia Labs / 3tools: tools.wmflabs.org inaccessible via labs instances - 10https://bugzilla.wikimedia.org/54052#c10 (10DrTrigon) To me and my bot this is a blocker - so please continue and merge. [21:56:14] spagewmf: sync-file would only go to deployment-* hosts, but it's slightly tricky to use [21:56:26] Let me see if I can figure out what host bits is on [21:57:18] spagewmf: The bits host is deployment-cache-bits01.eqiad.wmflabs [21:57:38] I found that via https://wikitech.wikimedia.org/wiki/Special:NovaAddress where the public IPs are mapped [21:58:07] * bd808 also realizes that host isn't getting updated with scap in beta [22:25:58] spagewmf: It turns out that deployment-apache0[12] are the bits backend hosts. The other host I mentioned is just the varnish cache in front. [22:27:12] You can touch the files that need to be updated on deployment-bastion and then either wait for the next automatic scap (every ~10 minutes) or run /usr/local/bin/wmf-beta-scap to scap yourself [22:27:47] I don't have wrappers there for setting up the right ssh-agent forwarding to make using sync-* easy. [22:28:14] But if you look at the contents of wmf-beta-scap you can see how to do it manually. [22:46:04] thanks bd808. I went to deployment-apache0[12] and after much inappropriate touching, `sudo -u mwdeploy touch /usr/local/apache/common/php-master/extensions/Flow/modules/discussion/styles/*.less` did the trick [22:46:18] cool [23:04:41] 3Wikimedia Labs / 3tools: tools.wmflabs.org inaccessible via labs instances - 10https://bugzilla.wikimedia.org/54052#c11 (10Tim Landscheidt) You can easily work around that by using tools-webproxy as Coren wrote in comment #1.
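For reference, the beta update flow bd808 outlines above, condensed into shell form; the touched path is hypothetical, and wmf-beta-scap is the script named in the log.

    # on deployment-bastion: touch the file(s) that need to reach the web hosts ...
    touch /path/to/changed/file                # hypothetical path in the beta MediaWiki tree
    # ... then either wait ~10 minutes for the automatic scap, or run it by hand
    /usr/local/bin/wmf-beta-scap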