[00:05:34] bd808: I'm so confused I can't find labs_vagrant/manifests/init.pp anywhere on flow-tests
[00:07:02] spagewmf: It lives on the labs puppet master and is sent down as a "compile manifest" when the agent runs.
[00:08:02] spagewmf: Also, puppet agent is eating 90%+ cpu on my instance right now as the agent runs
[00:08:05] this may be "normal"
[00:10:26] bd808: yes but for 13 minutes? and to stat for git::clone.rb in 18 locations literally thousands of times? That don't seem right. There's a puppet running right now on flow-tests if you care to debug.
[00:10:31] spagewmf: and strace shows it doing a ton of stat("/usr/lib/ruby/1.9.1/puppet/type/git::clone.rb", 0x7fffb6e34660)
[00:10:59] bd808: what makes it stop? There really, definitely isn't a file with that name? :)
[00:12:24] I have a hunch that there is some bad puppet manifest in the repo. I'll see if I can narrow it down.
[00:16:34] spagewmf: It's looking for "puppet/type/git::clone.rb" everywhere that ruby would be keeping gems. So puppet has decided that it wants to find a custom puppet plugin rather than using the modules/git/manifests/clone.pp file for some git::clone operation.
[00:17:14] So the question is what got into our puppet manifests that makes it want a custom type
[00:17:17] bd808: dumb guess, should it be Git::Clone['mediawiki/vagrant'] rather than just 'vagrant' ?
[00:17:31] * spagewmf falling over, must eat
[00:18:19] spagewmf: Good instinct, but not in this case -- https://github.com/wikimedia/operations-puppet/blob/production/modules/labs_vagrant/manifests/init.pp#L45-L48
[00:18:25] and go eat!
[00:34:14] spagewmf: FWIW the puppet process that was running on my labs instance finally finished
[01:04:30] my instance is shown as "puppet status: failed" in wikitech and the reason is a whole bunch of things that aren't specific to it
[01:04:41] actually, here is general epic puppet fail
[01:05:03] basically everything related to package installs
[01:05:33] E: Problem with MergeList /var/lib/apt/lists/_data_project_repo_Packages
[01:05:48] E: The package lists or status file could not be parsed or opened.
[01:06:13] labsdebrepo broken
[01:13:05] 3Wikimedia Labs / 3Infrastructure: puppet agent on labs-vagrant instance using 99% of CPU, looking for git::clone.rb - 10https://bugzilla.wikimedia.org/70971#c2 (10Bryan Davis) I'm pretty sure that "Debug: Executing '/usr/bin/test -d /mnt/vagrant'" is not what's really running while the system goes nuts. I n...
[01:15:36] 3Wikimedia Labs / 3Infrastructure: puppet agent on labs-vagrant instance using 99% of CPU, looking for git::clone.rb - 10https://bugzilla.wikimedia.org/70971#c3 (10Bryan Davis) Created attachment 16509 --> https://bugzilla.wikimedia.org/attachment.cgi?id=16509&action=edit puppet agent --debug output Outpu...
[01:16:55] 3Wikimedia Labs / 3Infrastructure: The package lists or status file could not be parsed or opened. - 10https://bugzilla.wikimedia.org/70981 (10Daniel Zahn) 3NEW p:3Unprio s:3normal a:3None on instance wikistats-petcow, major puppet fail related to package installs, without having changed it and with...
[01:20:28] E: Problem with MergeList /var/lib/apt/lists/_data_project_repo_Packages
[01:20:34] bash: cd: /var/list/apt/lists: No such file or directory
[01:21:05] eh, /var/lib of course ! duh.. but still , there is no "_date_project_repo_Packages"
[01:21:32] no, there is, but it can't read it.. wtf
[01:23:36] 3Wikimedia Labs / 3Infrastructure: The package lists or status file could not be parsed or opened. - 10https://bugzilla.wikimedia.org/70981#c1 (10Daniel Zahn) doesn't go away after de-selecting "labsdebrepo" role either: root@wikistats-petcow:/root# file _data_project_repo_Packages _data_project_repo_Packa...
[01:25:36] 3Wikimedia Labs / 3Infrastructure: The package lists or status file could not be parsed or opened. - 10https://bugzilla.wikimedia.org/70981#c2 (10Daniel Zahn) (In reply to Daniel Zahn from comment #1) > doesn't go away after de-selecting "labsdebrepo" role either: > > root@wikistats-petcow:/root# file _data...
[01:38:22] 3Wikimedia Labs / 3Infrastructure: The package lists or status file could not be parsed or opened. - 10https://bugzilla.wikimedia.org/70981#c3 (10Daniel Zahn) deleting /var/lib/apt/lists/_data_project_repo_Packages doesn't fix this either and i don't see any change that uses "labsdeb" in the commit message...
[01:50:02] PROBLEM - ToolLabs: Low disk space on /var on labmon1001 is CRITICAL: CRITICAL: tools.tools.diskspace._var.byte_avail.value (10.00%)
[01:58:11] RECOVERY - ToolLabs: Low disk space on /var on labmon1001 is OK: OK: All targets OK
[02:23:37] 3Wikimedia Labs / 3Infrastructure: puppet agent on labs-vagrant instance using 99% of CPU, looking for git::clone.rb - 10https://bugzilla.wikimedia.org/70971#c4 (10Bryan Davis) Possibly related to upstream puppet bug
[03:58:01] PROBLEM - ToolLabs: Low disk space on /var on labmon1001 is CRITICAL: CRITICAL: tools.tools.diskspace._var.byte_avail.value (10.00%)
[04:08:10] RECOVERY - ToolLabs: Low disk space on /var on labmon1001 is OK: OK: All targets OK
[04:09:51] 3Wikimedia Labs / 3Infrastructure: puppet agent on labs-vagrant instance using 99% of CPU, looking for git::clone.rb - 10https://bugzilla.wikimedia.org/70971#c5 (10Bryan Davis) I ran another test with this command: $ TZ=UTC strace /usr/bin/ruby /usr/bin/puppet agent --onetime --verbose --no-daemonize --no-s...
[05:27:36] 3Wikimedia Labs / 3Infrastructure: puppet agent on labs-vagrant instance using 99% of CPU, looking for git::clone.rb - 10https://bugzilla.wikimedia.org/70971#c6 (10Bryan Davis) Another debug run done with: $ TZ=UTC /usr/bin/ruby /usr/bin/puppet agent --onetime --verbose --no-daemonize --no-splay --debug --t...
[07:18:38] 3Wikimedia Labs / 3deployment-prep (beta): Install and configure pool counter - 10https://bugzilla.wikimedia.org/70940#c1 (10Antoine "hashar" Musso) We earlier dismissed installing Poolcounter: bug 36891 on the basis there was barely any traffic / contention on beta cluster. I guess we can do it now :-D
[07:18:38] 3Wikimedia Labs / 3deployment-prep (beta): setup poolcounter daemon - 10https://bugzilla.wikimedia.org/36891#c5 (10Antoine "hashar" Musso) Reopened as Bug 70940 - Install and configure pool counter
[10:31:45] Silke_WMDE: Hello, are you there?
[11:11:28] hi jem
[11:25:42] Silke_WMDE: I wanted to ask if you have received a petition to save Platonides' data on Toolserver, probably from Ecemaml
[11:25:56] jem: yes, I did.
[11:26:42] I forwarded the request to the toolserver admins. if it's okay with Platonides' license, there's no problem.
[11:27:01] Ah, Ok
[11:27:11] nosy will check this
[11:27:52] Good, then I'll contact her... it's important because of a broken tool and a needed bot, both related to WLM in Spain
[11:28:00] Thanks :)
[11:28:37] jem: So you and ecemaml are going to migrate it to Tool Labs? That's cool!
[11:29:05] I think you aren't the only WLM folks who use his stuff.
[11:29:32] In fact there have been a lot of users worried and it has been commented on in the eswiki mailing list
[11:30:03] I don't know if I would do the migration personally, but currently I'm afraid I'm the main candidate :)
[11:30:17] Did Platonides leave a message or did he disappear?
[11:30:33] In fact he disappeared
[11:30:37] :(
[11:30:58] He wasn't very active in IRC before, but he hasn't connected or answered since Aug 5
[11:31:30] Last year he was still very active on the toolserver
[11:31:35] Yes
[11:32:02] And we thought he would be taking care of his tools, but...
[11:34:42] As secretary of WM-ES I have his physical address... but he was always discreet about that, and we have no other members in that city, so I don't want to use it
[11:35:58] Yeah, I wouldn't use it either.
[11:37:04] It can always happen that volunteers run out of time or don't want to continue for other reasons. Toolserver shutdown was one of the moments where you really see who isn't active.
[11:37:33] Yes, no doubt about that
[11:37:45] (Though some people also wrote back that they will migrate their tools later when they have more time.)
[11:37:55] I can imagine
[11:38:20] In fact it would have been Ok if he had just left the tools to someone else
[11:40:11] Ok, I'll write to nosy in a few hours and we'll keep commenting on eswiki and/or the WM-ES list
[11:40:17] Thanks again :)
[11:40:30] np
[11:41:02] * YuviPanda encourages people to add as many maintainers as possible to their tools on toollabs to prevent similar situations :)
[11:41:04] hi Silke_WMDE
[11:41:09] jem: I heard that they are using Platonides' tools in the Netherlands too, for WLM
[11:41:11] have you seen tools.wmflabs.org now
[11:41:16] hi yuvipanda
[11:41:20] it has a better tool listing now
[11:43:06] yuvipanda has it changed? Apart from the length of the list?
[11:43:20] Silke_WMDE: it has! look at the entry for magnustools, for example
[11:43:31] oh there!
[11:43:35] yes, cool
[11:44:02] :)
[11:44:04] i also like the direct links to source code
[11:44:08] Silke_WMDE: yup!
[11:44:09] Silke_WMDE: Do you mean right now? I guess it's the upload bot, because the web tool was related to Spanish monuments only, I think
[11:44:18] people can specify this with a json file in their tools directory
[11:44:49] jem: No, they poked us the day after toolserver shutdown to revive Platonides' tool for their jury meeting. :) So it was a few months back.
[11:45:22] Ah, Ok
[11:45:38] We'll contact them, then
[11:46:09] jem: yes, maybe they even know someone who can support you with the migration - who knows...
[12:18:59] Hi all ! I'd like some advice about how to query Wikipedia from the labs
[12:19:27] I'm writing a bot that keeps track of active and inactive members of a WikiProject (and updates the members list accordingly)
[12:20:03] to do so, I need to fetch for each user the date of their last contribution to a page in a category
[12:20:25] I've implemented it with Pywikibot
[12:20:51] but it's damn slow and I wonder whether I could make a wise SQL query instead
[12:21:13] ok
[12:21:24] Do you need help with the MW database structure?
[12:21:30] exactly.
[12:21:40] Or do you need help getting to the DB replicas?
[12:21:43] how do you get the list of page ids belonging to a category ?
[12:21:54] it's easy to get the list of page names
[12:22:14] Also, English Wikipedia I guess?
[12:22:22] yes, to keep it simple :-)
[12:22:44] pintoch, so... labs DB replicas, or MW database structure?
[12:23:01] MW database structure
[12:23:16] https://www.mediawiki.org/wiki/Manual:Database_layout lists all the core tables
[12:23:30] and their history. each table has a page documenting all the fields
[12:23:32] yep, that's what I am looking at
[12:23:55] but I don't see how namespaces are handled in this scheme
[12:24:15] category table?
[12:24:16] is the correspondence between namespace ids and namespace names hard coded ?
[12:24:22] Yes.
[12:24:48] ok, so basically there is no way to write in SQL the query I am looking for
[12:24:55] ummm
[12:25:00] I'd be surprised if that were true
[12:25:16] I'd be happy if you show me I'm wrong :-)
[12:25:57] basically, my problem is that the table "categorylinks" contains page names, not page ids
[12:26:14] so then you have to decompose the page name into (namespace,title)
[12:26:22] cl_from -> Stores the page.page_id of the article where the link was placed.
[12:26:41] cl_to -> Stores the name (excluding namespace prefix) of the desired category. Spaces are replaced by underscores (_)
[12:26:41] isn't it the id of the category ?
[12:26:52] oh, I got it wrong then
[12:26:54] great !
[12:27:58] thanks a lot Krenair :-)
[12:28:01] You'll want to look in the 'revision' table for page history, see rev_page to link it with the page table
[12:28:07] and rev_user to link it to the user table
[12:31:16] pintoch, let me know if you need anything else
[13:21:51] 3Wikimedia Labs / 3Infrastructure: The package lists or status file could not be parsed or opened. - 10https://bugzilla.wikimedia.org/70981#c4 (10Marc A. Pelletier) I had a similar issue on tools yesterday; something seems to have changed in or around apt-get such that if the repo's list is compressed (Packa...
[15:41:36] 3Wikimedia Labs / 3deployment-prep (beta): Setup puppet exported resources to collect ssh host keys for beta - 10https://bugzilla.wikimedia.org/70792#c2 (10Prateek Saxena) 5PATC>3NEW Sorry! Typed the wrong bug number.
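[Editor's note] The join Krenair describes above (categorylinks.cl_from → page.page_id → revision.rev_page) can be sketched as follows. This runs against a tiny in-memory sqlite mock of the three tables, with made-up pages, users, and a made-up 'Some_WikiProject' category; on Tool Labs the same SELECT would be run against the enwiki replica, whose real schema has many more columns (see Manual:Database_layout).

```python
import sqlite3

# Minimal mock of the three MediaWiki tables discussed above. Column names
# match the manual; everything else (rows, category name) is invented.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE page (page_id INTEGER, page_namespace INTEGER, page_title TEXT);
CREATE TABLE categorylinks (cl_from INTEGER, cl_to TEXT);
CREATE TABLE revision (rev_page INTEGER, rev_user_text TEXT, rev_timestamp TEXT);
INSERT INTO page VALUES (1, 0, 'Foo'), (2, 0, 'Bar');
INSERT INTO categorylinks VALUES (1, 'Some_WikiProject'), (2, 'Other');
INSERT INTO revision VALUES
  (1, 'Alice', '20140901120000'),
  (1, 'Alice', '20140910120000'),
  (2, 'Alice', '20140915120000'),
  (1, 'Bob',   '20140801000000');
""")

# For each user, the timestamp of their latest edit to a page in the category.
query = """
SELECT rev_user_text, MAX(rev_timestamp) AS last_edit
FROM revision
JOIN page ON rev_page = page_id
JOIN categorylinks ON cl_from = page_id
WHERE cl_to = 'Some_WikiProject'
GROUP BY rev_user_text
ORDER BY rev_user_text
"""
for user, last in db.execute(query):
    print(user, last)   # Alice's edit to 'Bar' is excluded: wrong category
```

Note the page-2 edit does not count toward Alice because 'Bar' is not in the category; that is exactly the filtering the Pywikibot version was doing slowly, one API call at a time.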
[16:32:34] !log hadoop-logging moving hadooplogstash04 and hadooplogstash06 to virt1003
[16:32:36] Logged the message, dummy
[16:49:50] 3Wikimedia Labs / 3deployment-prep (beta): Beta Cluster isn't redirecting en.wikipedia.beta.wmflabs.org correctly - 10https://bugzilla.wikimedia.org/70948 (10Greg Grossmeier)
[17:42:58] jeremyb: please ping when you are available
[18:03:51] 3Wikimedia Labs / 3Infrastructure: puppet agent on labs-vagrant instance using 99% of CPU, looking for git::clone.rb - 10https://bugzilla.wikimedia.org/70971#c9 (10Bryan Davis) (In reply to Bryan Davis from comment #4) > Possibly related to upstream puppet bug
andrewbogott: wanna merge https://gerrit.wikimedia.org/r/#/c/160626/
[18:11:42] andrewbogott: we're going to run shinken for labs specific monitoring, and run it from a labs project itself
[18:11:44] rather than from labmon
[18:11:55] keeps separate things separate
[18:19:34] Am I really the only one whose firefox won't open links from irc anymore?
[18:20:03] andrewbogott: works fine for me...
[18:20:09] andrewbogott: maybe your IRC client is weirded out?
[18:20:19] For me, FF takes focus but then doesn't open a new tab.
[18:20:29] The same from thunderbird. Focus but no open link
[18:20:44] If my rightmost tab in firefox is empty then it works
[18:20:47] Clearly a FF bug
[18:20:52] oh wow
[18:20:53] heh
[18:20:56] * YuviPanda is on nightly
[18:21:26] is on old debian iceweasel :p
[18:21:55] YuviPanda: the thing I'm missing about ldap is… labs instances can already read ldap, right? It's used for every single user login on every single instance.
[18:22:04] So what is it about ldap that we can't read from labs that we need?
[18:22:16] andrewbogott: do we pass ldap password through labs at any point?
[18:22:17] Or is this a totally different ldap db we're talking about?
[18:22:19] people login via ssh
[18:22:27] the password isn't used anywhere
[18:22:40] Hm… I'm still pretty sure it's available on labs. Let me try something...
[18:22:44] ok
[18:24:21] andrewbogott: it'll definitely simplify my life a lot if we can use it :)
[18:25:53] YuviPanda: so… on a labs instance I run
[18:25:55] ldapsearch -LLL -x -D 'uid=andrew,ou=people,dc=wikimedia,dc=org' -W -b 'ou=groups,dc=wikimedia,dc=org'
[18:26:06] And it prompts me for a password, and I enter one (which is the same as my wikitech login password)
[18:26:07] and it works
[18:26:13] andrewbogott: right, so it works.
[18:26:17] andrewbogott: but, I think the concern is...
[18:26:28] that if I pass my ldap password through a labs instance
[18:26:42] that password could be stored/hijacked and now my account is hijacked
[18:26:50] Ah, yeah, that's definitely a danger.
[18:27:08] So I guess I misunderstood the problem. It's not that you lack access, it's that you lack secure access :)
[18:27:11] considering that the connection from the labsproxy to the backend machine is plain http
[18:27:13] indeed
[18:28:37] andrewbogott: so I've to figure out an alternative auth mechanism...
[18:28:45] andrewbogott: would keystone fit in?
[18:28:55] YuviPanda oauth?
[18:28:57] * YuviPanda is very unsure about what exactly keystone does
[18:29:07] Betacommand: that's the other option, but then I'll have to write a shinken plugin supporting oauth :)
[18:30:15] YuviPanda: you could use keystone, it should know what ldap knows. Hm...
[18:30:34] But you don't want to have users type their password for keystone either, right? So you'd want to use the token that they got from wikitech...
[18:30:39] well, that doesn't help, that's just oauth
[18:30:56] yeah
[18:31:07] andrewbogott: so seems I'll have to implement a shinken oauth plugin...
[18:31:43] andrewbogott: but in the meantime, I'm just going to create a simple guest/guest account with not many privileges
[18:32:21] How would it work if you used oauth? Wouldn't that /still/ involve passwords? Or is there some cookie magic involved?
[18:32:45] andrewbogott: oauth doesn't involve passwords, no
[18:33:01] andrewbogott: you'd click 'login', it would redirect you to wikitech, where you login if necessary, and then grant access by pressing a button
[18:33:13] ah, sure.
[18:33:18] andrewbogott: and the application gets back a token that can only perform specific actions (in our case, read your groups)
[18:33:21] and then you use that
[18:33:36] That seems like what you should do, then. Unless I'm missing an obvious use of keystone.
[18:33:57] andrewbogott: yeah
[18:34:02] andrewbogott: it's just a bunch of work
[18:34:19] andrewbogott: thankfully it's python, so there is halfak's nice library to implement it, and wikitech already has oauth enabled
[18:34:28] I take it shinken will know secure things?
[18:34:33] andrewbogott: nope
[18:34:39] * halfak pops in
[18:34:48] andrewbogott: password only required for acknowledging things and personal dashboards and stuff
[18:34:55] Ah, sure
[18:35:05] andrewbogott: so starting with guest/guest thing is ok
[18:35:13] halfak: your oauth library might end up in production!
[18:35:15] (in some form)
[18:36:05] Woo. It's a good thing that the repo is under "wikimedia/" then.
[18:36:40] halfak: that's actually bad(ish?) since the only other things there are mirrors from github
[18:36:48] halfak: and if it's going to be on prod it has to be in gerrit anyway
[18:37:32] Bah. Damn gerrit.
[18:38:02] I suppose it is stable enough that we're not benefiting from being primarily github anymore.
[18:38:26] I think it is about time to walk through the code and release a 1.0.0
[18:39:03] halfak: +1
[18:39:25] YuviPanda: unrelated, I'm running a fresh install of labs-vagrant, and it's failing due to a lack of /vagrant/puppet/manifests/manifests.d/
[18:39:39] uh oh
[18:39:40] My guess is that directory is supposed to be installed by the vagrant .deb, and isn't anymore?
[18:39:46] there's no vagrant deb
[18:39:48] IIRC
[18:39:57] ok, then where does /vagrant/whatever come from?
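[Editor's note] The redirect-and-token flow YuviPanda describes above ([18:32:45]–[18:33:21]) can be reduced to a toy sketch. This is not the real mwoauth/wikitech API, just the shape of the handoff: the consumer (shinken) never sees the user's password, only a scoped token minted by the provider (wikitech) after the user presses the "grant access" button; the user names, password, and scope strings below are all invented for illustration.

```python
import secrets

class Provider:
    """Stands in for wikitech: knows passwords, mints scoped tokens."""
    def __init__(self):
        self._users = {"yuvi": "hunter2"}   # made-up credentials
        self._tokens = {}                    # token -> (user, scopes)

    def grant(self, user, password, scopes):
        # The user authenticates HERE, on the provider, never on the consumer.
        assert self._users[user] == password
        token = secrets.token_hex(16)
        self._tokens[token] = (user, frozenset(scopes))
        return token

    def identify(self, token, scope):
        # The consumer presents only the token; it is refused for any
        # action outside the scopes the user granted.
        user, scopes = self._tokens[token]
        if scope not in scopes:
            raise PermissionError(scope)
        return user

provider = Provider()
# The browser redirect + consent screen happen out of band; afterwards the
# consumer holds a token scoped to reading group membership only:
token = provider.grant("yuvi", "hunter2", scopes={"read-groups"})
print(provider.identify(token, "read-groups"))   # yuvi
try:
    provider.identify(token, "write-pages")      # not granted
except PermissionError:
    print("denied")
```

The point of the design, as discussed in the channel, is that a hijacked labs instance can at worst leak a narrowly scoped token, not the wikitech/ldap password itself.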
[18:40:00] * andrewbogott reads more puppet
[18:40:14] andrewbogott: is /vagrant git clean?
[18:40:21] andrewbogott: I see a manifests.d folder in my local checkout
[18:40:29] andrewbogott: /vagrant/whatever comes from a git clone
[18:40:43] Oh, it's linked to /srv/vagrant!
[18:40:46] * andrewbogott understands slightly more now
[18:41:21] YuviPanda: root@labs-vagrant-freshtest:/vagrant/puppet/manifests# ls
[18:41:22] site.pp
[18:41:27] andrewbogott: git status?
[18:41:41] root@labs-vagrant-freshtest:/vagrant/puppet/manifests# git status
[18:41:42] On branch master
[18:41:42] Your branch is up-to-date with 'origin/master'.
[18:41:51] This is a totally fresh install on a new instance
[18:41:52] wat
[18:42:07] andrewbogott: git log?
[18:43:17] andrewbogott: afaik, mw-vagrant is installed by role::labs::vagrant. do you have that enabled?
[18:43:29] marxarelli: yes, that's what I'm testing.
[18:43:54] YuviPanda: I'm going to do a fresh checkout of that repo on a local machine. https://gerrit.wikimedia.org/r/mediawiki/vagrant right?
[18:44:01] andrewbogott: ya
[18:44:16] same thing, just site.pp
[18:44:25] So, maybe you want to do the same checkout, and dig into the history huh?
[18:44:36] andrewbogott: which instance is this?
[18:44:39] andrewbogott: can you add me to the project?
[18:44:48] testlabs
[18:45:26] andrewbogott: strange. i just spun up an instance yesterday with vagrant and it's working ok
[18:46:16] YuviPanda: I take it that dir is supposed to be checked into git?
[18:46:33] andrewbogott: yeah
[18:46:44] * andrewbogott digs
[18:47:00] I'll bisect
[18:47:53] andrewbogott: bah, wikitech is doing that thing again where everything is empty, and I can't relogin because my phone is dead :(
[18:48:11] andrewbogott: can you give me the name of the instance? there doesn't seem to be one named testlabs, and if there is I guess I'm not part of the project?
[18:48:19] YuviPanda: https://gerrit.wikimedia.org/r/#/c/153420/
[18:48:26] project testlabs
[18:48:31] instance labs-vagrant-freshtest
[18:48:38] but, that gerrit link knows all
[18:48:42] andrewbogott: gah
[18:48:50] andrewbogott: but that was ages ago...
[18:48:54] yep!
[18:49:01] Git wouldn't remove the old files though
[18:49:13] hmm, but marxarelli set one up yesterday, I suppose
[18:49:23] that I cannot explain
[18:49:50] andrewbogott: what labs-vagrant command is failing?
[18:49:51] except that labs vagrant was 100% broken yesterday? Due to the issue I'm trying to test...
[18:50:04] namely, this one: https://gerrit.wikimedia.org/r/#/c/161276/
[18:50:09] marxarelli: puppet fails to install it in the first place
[18:51:09] andrewbogott: ah, right. i saw chatter about that
[18:51:34] hmm
[18:51:34] yeah, very strange, considering i was setting up my instance around the same time
[18:51:49] marxarelli: trusty or precise?
[18:51:51] I suspect changes to the labs_vagrant code in ops/puppet to deal with hiera are needed
[18:52:17] trusty
[18:52:21] YuviPanda: We can wait for bd808|LUNCH to get back from |LUNCH and dump this all in his lap :)
[18:52:25] marxarelli: huh, same as me
[18:52:29] andrewbogott: sounds like a good start :)
[18:52:45] i don't know why the ops puppet would fail on account of manifests.d missing
[18:52:52] Lately dumping things in bd808's lap is what I do best
[18:52:59] marxarelli: because it tries to install a file there
[18:53:02] that was mainly used for managing mw-vagrant's enabled role state
[18:53:02] marxarelli: it might try to write things there to enable the default role
[18:53:18] '/vagrant/puppet/manifests/manifests.d/vagrant-managed.pp'
[18:53:40] oh, roger that
[18:54:54] rummaging through my puppet logs, it is failing for me. :) labs-vagrant still works though
[18:55:05] back in 10...
[18:56:15] i'll take a look at the ops manifest since i'm familiar w/ the changes bd808|LUNCH made
[18:56:24] marxarelli: \o/
[18:56:33] marxarelli: the labs-vagrant command might also need modification (unsure)
[18:56:53] YuviPanda: right-o. i'll double check
[18:56:58] cool
[19:06:36] andrewbogott, marxarelli, YuviPanda: I need to make a follow-up patch to remove old junk from the labs-vagrant role. We take care of adding the labs content role with hiera now and the old template thing is no longer needed.
[19:06:48] cool
[19:06:53] yay hiera
[19:06:57] although I haven't looked at it much yet
[19:07:11] * YuviPanda has been away from any MW development and consequently vagrant work for quite a while now
[19:07:45] YuviPanda: I think you can get an idea of some of the things hiera can take care of from looking at the current mwv puppet repo
[19:07:54] yeah
[19:07:57] will do!
[19:08:12] bd808: an almost working shinken install is here!!!!1 :)
[19:09:35] cool. is it nicer to work with than icinga?
[19:09:43] bd808: yeah, much
[19:09:49] bd808: shinken.wmflabs.org, user/pass: guest/guest
[19:09:54] * bd808 admits to not having ever migrated his brain from nagios
[19:09:57] bd808: http://shinken.wmflabs.org/all
[19:10:40] problem #1: it expects my browser window to be wider
[19:10:44] bd808: cool, let me know if you need me to take anything off your plate
[19:10:51] bd808: heh :)
[19:11:31] so much bootstrap css
[19:11:43] bd808: heh, indeed
[19:11:49] still, I'll take bootstrap over icinga's thing
[19:11:53] marxarelli: Just you know make all the problems go away
[19:12:42] bd808: oh, i'm on that
[19:13:09] * marxarelli tries to think of the sketchiest drug dealer he knows
[19:14:12] marxarelli: that would be bd808? :)
[19:14:34] I'm not sketchy
[19:14:47] YuviPanda: he really does do it all!
[19:15:35] * YuviPanda sketchily gets M&Ms from bd808
[19:15:41] bd808: one bag is empty now
[19:15:54] You are really rationing those
[19:16:03] I expected them to be gone in the first week
[19:16:03] bd808: I've frosties here!
[19:16:18] bd808: plus I'm staying with someone who is worried about my sugar intake
[19:22:04] YuviPanda: Re auth inside labs, Ryan thought the fix would be to implement an OpenID provider in wikitech. See https://bugzilla.wikimedia.org/show_bug.cgi?id=61754
[19:22:35] But he went and found another job before making that happen :(
[19:22:40] yeah
[19:22:41] :(
[19:22:44] bad Ryan!
[19:28:27] andrewbogott: can you merge https://gerrit.wikimedia.org/r/#/c/161289/?
[19:28:50] * andrewbogott clicks on link, shakes fist at ff
[19:29:59] * YuviPanda pats andrewbogott
[19:31:09] andrewbogott: ty
[19:31:46] andrewbogott: hmm, I think I'm blocked now until I write the OpenStack API thing :(
[19:31:58] andrewbogott: shinken is going to live on the labs instance, so can't use the OS api directly
[19:32:15] andrewbogott: any idea if there are other ways I can get 'list of instances in a project' without having to rely on scraping results from SMW?
[19:32:20] Routing things through wikitech shouldn't be hard… there's plenty of code in OSM that already gathers all the info you need
[19:32:28] andrewbogott: yeah
[19:32:32] but I've to write PHP :(
[19:32:37] ldap?
[19:32:37] True :)
[19:32:49] hm, good point.
[19:33:17] bd808: security issues with exposing user ldap passwords to labs instances
[19:33:23] bd808: I continue to think that openid is the correct solution.
[19:33:43] I guess ldap doesn't know about instances, only about projects.
[19:33:51] Well, no, it does know. For DNS
[19:33:53] YuviPanda: I thought you were looking for data on instances in a project now?
[19:34:04] bd808: yeah, I am
[19:34:07] But it probably doesn't know the relationship of projects to instances
[19:34:12] bd808: ldap doesn't have that info
[19:34:26] yeah, what andrewbogott said
[19:34:36] at least that's my understanding, would be awesome to be proven wrong :)
[19:34:39] Coren: Yes. openid is the right auth thing to build. Easy (ish) to use with lots of stuff and keeps secrets outside of labs
[19:35:21] YuviPanda: hence my question mark. :) I haven't poked deeply at what we stuff in ldap and what we don't
[19:35:27] right
[19:35:43] bd808: the alternative is to specify hostgroups manually, but that seems... less than ideal
[19:36:10] Also writing php is good for the soul. It reminds you of why you started to prefer other languages.
[19:36:28] heh
[19:36:52] or I could go the resource collection route
[19:37:12] but that would probably require quite a bit of effort
[19:37:25] nasty puppet magic in my opinion
[19:37:36] and it won't help you with beta
[19:37:41] ah, right.
[19:37:48] it's totally useless for anything running its own puppetmaster
[19:37:55] * bd808 nods
[19:38:00] which makes it even more useless
[19:38:15] You could do something with salt maybe...
[19:38:26] you'd need to make beta's salt master talk to the labs one
[19:38:37] true, but this needs to run on shinken itself...
[19:38:46] and any other (probably tiny) set of hosts running their own salt master
[19:38:47] I think the 'cleanest' solution is probably the wikitech API
[19:52:33] andrewbogott: if I add 'shinken' as 'source group' from deployment-prep, would that mean that all instances from project shinken can access all instances from project deployment-prep as if they were the same?
[19:53:55] mmmaybe? I think all those other instances would also need 'shinken' in their source group
[19:54:05] oh, I guess that's what you're saying...
[19:54:17] I think it's just that everything with the same (arbitrary) source group string can talk to each other.
[19:54:19] i mean, just add shinken as source group to the default security group
[19:54:21] ah
[19:54:22] I see
[19:54:28] Docs at the bottom of this page: http://docs.openstack.org/openstack-ops/content/security_groups.html
[19:54:33] I may be misreading though
[19:54:39] we'll find out soon
[19:54:47] * YuviPanda goes to turn his phone on so he can relogin to wikitech
[19:57:29] andrewbogott: bah, I can't actually do what I wanted :(
[19:57:39] why not?
[19:57:45] andrewbogott: what I wanted was for all instances in project shinken to have access to all instances in project deployment-prep
[19:58:19] Can you add 'shinkentalkstobeta' as a source group in the default policy of both projects?
[19:58:29] andrewbogott: groups are local to a project, apparently
[19:58:32] oh
[19:58:35] yeah
[19:58:36] then that definitely won't work!
[19:58:41] ya
[20:11:23] mutante: ok if I move wikistats-live (project wikistats) now?
[20:21:36] 3Wikimedia Labs / 3Infrastructure: puppet agent on labs-vagrant instance using 99% of CPU, looking for git::clone.rb - 10https://bugzilla.wikimedia.org/70971#c10 (10Bryan Davis) 5PATC>3RESO/FIX After this patch landed, /var/log/puppet.log says: Notice: Finished catalog run in 60.99 seconds Before tha...
[20:44:52] Coren: have a sec to troubleshoot an idiot error?
[20:45:07] Betacommand: What's up?
[20:45:25] I'm in $HOME/tspywiki/cgi
[20:45:34] trying to run ln -s fix_refs.html $HOME/public_html/fix_refs.html
[20:46:11] and I'm getting a file not found in winscp when viewing the link
[20:46:35] Lemme go see.
[20:48:57] Oh! You're doing the symlink the wrong way round. The first argument is, essentially, a literal that will go into the symlink. So at the destination you have a link named 'fix_refs.html' that points to... 'fix_refs.html'.
[20:49:30] Thanks, knew it was an IDIOT error
[20:49:35] If you do 'ln -s $HOME/tspywiki/cgi/fix_refs.html fix_refs.html' from public_html it'll do what you want.
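[Editor's note] The `ln -s` gotcha Coren explains above (the first argument is stored in the link as a literal, not resolved against your current directory) is easy to demonstrate. Below is a sketch in a scratch directory, using os.symlink (which takes its arguments in the same order as `ln -s`); the tspywiki/public_html layout is copied from the conversation, the file contents are made up.

```python
import os
import tempfile

# Recreate Betacommand's layout under a temporary directory.
base = tempfile.mkdtemp()
cgi = os.path.join(base, "tspywiki", "cgi")
pub = os.path.join(base, "public_html")
os.makedirs(cgi)
os.makedirs(pub)
with open(os.path.join(cgi, "fix_refs.html"), "w") as f:
    f.write("hello")

link = os.path.join(pub, "fix_refs.html")

# Wrong way round, i.e. `ln -s fix_refs.html $HOME/public_html/fix_refs.html`
# run from the cgi directory: the link's stored target text is just
# "fix_refs.html", so the link in public_html points at itself.
os.symlink("fix_refs.html", link)
print(os.path.exists(link))   # False: the link is a self-referencing loop

# Right way round: store the full path of the real file in the link,
# i.e. `ln -s $HOME/tspywiki/cgi/fix_refs.html fix_refs.html` in public_html.
os.remove(link)
os.symlink(os.path.join(cgi, "fix_refs.html"), link)
print(open(link).read())      # hello
```

The same rule explains why winscp reported "file not found": it followed the dangling self-reference, not the file in tspywiki/cgi.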
[20:50:02] andrewbogott: move to other datacenter you mean? yes
[20:50:15] mutante: just moving from virt1006 to virt100somethingelse
[20:50:37] andrewbogott: go ahead
[20:57:14] Coren: see PM
[21:03:10] mutante: what about phab-01?
[21:03:27] (I'm planning to move all these in a lump over the weekend, figuring it's nicer if I do them while you're watching...)
[21:04:36] andrewbogott: it might have useful data (phab-01)
[21:04:44] andrewbogott: I suggest punting until later?
[21:04:51] andrewbogott: it's being used to help with the phab migration
[21:04:52] andrewbogott: technically i'm not on that project, i just made it :)
[21:05:02] Yuvi can decide :)
[21:05:04] YuviPanda: it's going to happen. Today while you watch, or this weekend while you don't.
[21:05:21] YuviPanda: So far your box is the only one that has died on the table :/
[21:05:24] andrewbogott: can it happen tomorrow when I watch? I don't want to migrate another instance this late at night :)
[21:05:32] would be good if you tell Quim
[21:05:33] YuviPanda: surely. Just remind me
[21:05:43] andrewbogott: will do
[21:39:37] andrewbogott: You forgot -03
[21:40:10] But I saved it :)
[21:40:36] That's resuscitation. It still died. :-) Now it's a frankenstance. :-)
[21:43:23] true
[21:45:43] lazarustance
[21:50:19] Coren: megachron-three? That you?
[21:50:38] andrewbogott: It's me-ish. Safe to move.
[21:50:44] thx
[21:51:13] megachron-three sounds a /lot/ like a Transformer
[21:51:30] decepticon, obv.
[22:07:20] http://icinga.wmflabs.org/cgi-bin/icinga/status.cgi?hostgroup=all&style=hostdetail&hoststatustypes=4&hostprops=2097162&nostatusheader
[22:09:30] andrewbogott: Coren I just emailed ops@ about organizing code sharing between icinga/shinken, do respond if you've opinions
[22:53:21] 3Wikimedia Labs / 3wikitech-interface: Cleanup and enable UserFunctions extension on wikitech - 10https://bugzilla.wikimedia.org/45455#c2 (10Alex Monk) Ryan: Are you still intending to do this?
[23:00:43] andrewbogott: btw, I looked at the code for Special:NovaInstance, building the API shouldn't be too hard :D
[23:01:31] andrewbogott: oh, damn. looks like I need to have an authenticated ldap user to be able to access the OS apis?
[23:01:50] grrr
[23:09:11] this is going to be messy
[23:09:12] :'(
[23:11:00] !log wikistats - package install problem due to bug 70981
[23:11:02] Logged the message, Master
[23:16:21] 3Wikimedia Labs / 3Infrastructure: The package lists or status file could not be parsed or opened. - 10https://bugzilla.wikimedia.org/70981#c5 (10Daniel Zahn) (In reply to Marc A. Pelletier from comment #4) > It was fixed on tools by /not/ compressing the package list (that is, the > output of dpkg-scanpacka...
[23:57:21] 3Wikimedia Labs / 3Infrastructure: The package lists or status file could not be parsed or opened. - 10https://bugzilla.wikimedia.org/70981#c6 (10Daniel Zahn) (In reply to Marc A. Pelletier from comment #4) > It was fixed on tools by /not/ compressing the package list (that is, the > output of dpkg-scanpacka...