[06:57:24] Coren: ping [07:32:40] andrewbogott: Service groups in the Tools project get a sudo rule "(root) NOPASSWD: chown -R local-$SG:local-$SG /data/project/$SG" (that doesn't work; cf. https://bugzilla.wikimedia.org/show_bug.cgi?id=48105). Do you know where this is added? OpenStackManager and Puppet seem clean. [07:33:13] I'd expect it to be in OSM [07:34:04] scfc_de: I'm looking... [07:34:22] andrewbogott: You're right and I'm blind. [07:34:30] ok :) [07:35:57] You don't happen to know *why* it fails? Does chmod need to be fully pathed for sudo? (If you don't know, I can test that myself.) [07:42:15] heh, I read that as OpenStreetMaps :P [07:42:35] I don't know why it fails, although qualifying paths is a good place to start. [07:42:53] yuvipanda: yeah, that's a source of constant confusion [07:42:59] heh [07:43:43] yuvipanda: When I first encountered those many those discussions about "OSM", I thought, wow, the migration from Toolserver is on speed :-). [07:43:56] :D [07:55:26] "local-test ALL = (root) NOPASSWD: /bin/chown" works, "local-test ALL = (root) NOPASSWD: /bin/chown -R" gives a syntax error in visudo -c. Hmmm. [07:58:57] "local-test ALL = (root) NOPASSWD: /bin/chown test" gives a syntax error as well. Hmmm. Hmmm. [08:06:13] On my Fedora 19 box, "local-test ALL = (root) NOPASSWD: /bin/chown test" works without problems. But now "local-test ALL = (root) NOPASSWD: /bin/chown" which worked flawlessly 10 minutes ago, fails on Ubuntu. *Argl* [08:11:01] It looks like the ":" in "local-test:local-test" needs to be escaped to "\:". [08:14:35] andrewbogott: For a test, could you hot-patch nova/OpenStackNovaServiceGroup.php on wikitech, "$groupName . ':' . $groupName" => "$groupName . '\:' . $groupName" (add one backslash)? The line occurs twice and needs be patched in both. [08:15:07] that won't help with existing groups, will it? [08:15:10] But, sure, one second... [08:15:31] andrewbogott: No, we would have to update the existing LDAP records for that. [08:15:39] (I think :-).) [08:16:28] ok, patched [08:16:44] I'll add a service group on Toolsbeta and see if it works. [09:38:28] andrewbogott: ping [09:38:33] andrewbogott: create a repo for me? :) [09:38:38] andrewbogott: mediawiki/extensions/Popups [09:38:59] yuvipanda: lemme see if I remember how... [09:41:19] yuvipanda: are you going to do an import, or do you want an empty project with an initial commit that you can build on? [09:41:26] andrewbogott: empty [09:41:58] description? [09:42:36] andrewbogott: 'Extension to display popups when you hover over article links' [09:43:46] yuvipanda: ok… git clone https://gerrit.wikimedia.org/r/mediawiki/extensions/Popups [09:43:53] Not sure about permissions, give it a try [09:43:54] andrewbogott: woo, thanks! [09:44:03] andrewbogott: hmm, it says 404 [09:45:28] nevermind [09:47:20] working now? [09:47:40] andrewbogott: yeah [09:47:45] andrewbogott: doesn't have permission to +2 though [09:48:03] andrewbogott: can you just give the default wmf group +2? [09:48:08] yeah, trying to remember how to do that… what's an example project where you can +2? [09:48:17] scfc_de: what did you conclude about that sudo change? [09:48:21] andrewbogott: every other project :D [09:48:24] other than ops/* [09:48:50] mediawiki/* is the right group [09:53:38] yuvipanda: um… ok, better? [09:54:12] legoktm: try? [09:54:38] andrewbogott: I have CR+2 but not V+2 [09:54:54] hm [09:55:00] qchris: around? ^ [09:55:10] Yes. 
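A minimal sketch of testing the rule form this exchange converges on, using the same throwaway values as above (later in the log the bare "chown" is also switched to "/bin/chown"); the file path is hypothetical, and visudo -c -f only checks syntax, not whether the rule does what you want:

    # put the candidate rule in its own file and syntax-check it before installing
    cat > /tmp/sudoers-candidate <<'EOF'
    local-test ALL = (root) NOPASSWD: /bin/chown -R local-test\:local-test /data/project/test
    EOF
    visudo -c -f /tmp/sudoers-candidate    # reports "parsed OK" if the syntax is accepted

The escaped "\:" is the change hot-patched into nova/OpenStackNovaServiceGroup.php at 08:14:35.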
[09:55:16] * qchris reads backscroll [09:55:31] yuvipanda: ok, fixed [09:55:32] I think [09:55:37] legoktm: try? [09:55:59] seems to work, andrewbogott! [09:56:03] I can CR+2 and V+2 but not submit :/ [09:56:06] andrewbogott: ^ [09:56:12] so close! [09:56:18] lol [09:56:19] ok [09:57:02] Mind if I clean the repos permissions up? [09:58:01] qchris: talking about popups? [09:58:05] please do [09:58:06] Yes. [09:58:23] My mistake was not setting the inheritence correctly, I don't think you can change it after the fact [09:58:33] andrewbogott: I guess you'll be the project owner? [09:58:49] yuvi please [09:59:01] qchris: Prtksxna should be the owner [09:59:03] ok. yuvi'll be the repo owner. [09:59:06] and probably yuvipanda too [09:59:15] ok prtksxna and yuvipanda :-) [09:59:18] ok [09:59:44] It would be easiest if I just nuke the repo and recreate. [09:59:51] Any objections against it? [10:00:02] nope [10:00:09] ok. [10:00:16] Gimme a few minutes. [10:01:34] qchris: can you do that via the gerrit interface or does that require alternative means? [10:01:47] andrewbogott: Re sudo change, the ":" is not the only blocker. Could you replace "chown" with "/bin/chown" in nova/OpenStackNovaServiceGroup.php as well? [10:02:05] andrewbogott: I installed the deleteproject plugin this morning that allows you to delete a repo [10:02:21] oh! That would've made this easier :) [10:02:22] andrewbogott: The version we use still allows to do that only through ssh [10:02:25] Well, is making [10:03:40] scfc_de: done [10:03:43] yuvipanda, prtksxna: Could you try whether it's working for you now. (You need to clone afresh) [10:03:56] andrewbogott: Thanks. Testing ... [10:04:01] oh, hmm [10:04:01] ok [10:05:09] qchris: thanks! [10:05:23] qchris: \o/ [10:05:42] So it's working? Great :-D [10:07:01] qchris: trying :) [10:07:04] andrewbogott: Bingo, it works. I'll submit a patch for extension/openstackmanager. Thanks! [10:07:30] ok -- I'm going to revert the changes for now [10:07:37] scfc_de: want to package new nginx? :D [10:07:47] andrewbogott: ^ [10:08:52] yuvipanda: Weren't you the one with better karma in the wikitech UI? :-) [10:09:04] i've never touched it, have I? :) [12:01:36] alias git="HOME=/home/$SUDO_USER git" has to be my new favorite alias [12:02:50] now if I could only get that to be read by become -_-' [12:23:12] valhallasw: Put it in - hmmm - .bashrc or .profile of the tool account? [12:23:26] .profile worked after all [12:23:29] .bashrc is ignored [13:00:41] valhallasw: The "canonical" thing to do is to invoke .bashrc explicitly in one's .profile so that it is sourced regardless of whether one has a login shell or not. [13:03:28] Coren, if you have a few minutes to spare today, could you poke at virt1000 and see why it can't serve up the wiki in /srv? I'm not blocked on that currently but we need it and I'm currently stumped. [13:04:13] andrewbogott: I'm heading for the airport in ~1h [13:04:24] ok, well… someday :) [13:04:41] That's BTW the default for "normal" Ubuntu users; cf. ~/.profile. [13:04:54] I think my fight to SFO has Wifi. If so, I'll take a look. [13:11:01] Coren: do you still have second? Can you tell me the current size of the database you moved from the replica to another storage? [14:43:39] !log deployment-prep restarting udp2log-mw on deployment-bastion. Logstash.wmflabs.org no more receiving fatals logs since Jan 31st [14:43:41] Logged the message, Master [15:12:20] (03PS2) 10JanZerebecki: Add labs ssl key for puppet role::planet. 
[labs/private] - 10https://gerrit.wikimedia.org/r/109480 [15:14:04] (03CR) 10JanZerebecki: "Added comment at the top of the key file. openssl cli doesn't complain and I hope neither will apache." [labs/private] - 10https://gerrit.wikimedia.org/r/109480 (owner: 10JanZerebecki) [15:52:09] Coren: are you about? [17:01:51] Hi all! [17:02:21] What is the recommended php opcode caching and shared memory storage solution? [17:02:26] APC? [17:02:45] I'm a bit reluctant to install this, with opcache being in php 5.5 [17:03:00] but the standard labs instance config pins wikimedia packages [17:03:21] and I don't know what mess I'll get myself into if I install outside packages. [17:10:39] also in the new ubuntu precise image on labs there is a redundancy in /etc/apt/preferences.d/wikimedia* [17:10:52] both files do the same thing [17:28:36] crickets [17:29:56] dschwen: it'll be a long time before we get 5.5 :P [17:29:59] so go with APC [17:30:23] well, I guess I'm too impatient [17:30:34] already installed 5.5 with APCu [17:30:48] heh :) [17:30:49] opcache+ > APC [17:30:57] dschwen: then that kinda answers your question, doesn't it? :0 [17:30:58] ) [17:31:02] yup [19:23:51] hello there [19:24:03] any help with mysql workbench? [19:26:47] or can you tell me if we can have a PHP application that can execute SQL queries on labs DB? [19:29:43] hello? [19:32:07] Superyetkin: what's the problem you're encountering? [19:32:52] Superyetkin: Hi. Did you read this https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help#Configuring_MySQL_Workbench [19:33:28] yes, I have read that [19:33:47] Superyetkin: So ? [19:33:48] cannot establish connection [19:35:16] the error code is 10061 [19:36:04] Superyetkin: can you ssh to tools-login.wmflabs.org via console? [19:36:14] what am I supposed to enter in "default schema"? [19:36:22] yes, I can ssh via putty [19:36:57] and use the username and password in replica.my.cnf file but that does not work on mysql workbench [19:37:59] Superyetkin: If you specify ie. enwiki.labsdb as hostname, try enwki_p as default schema [19:38:29] my mysql hostname is trwiki.labsdb [19:38:53] Superyetkin: so default schema would be trwiki_p [19:39:12] ok, but still not working [19:39:58] Superyetkin: did you notice the issue with MySQL Workbench and PuttyGen Keys? [19:40:42] what is the issue? [19:41:10] Superyetkin: it seem that putty keys don't work with Mysql Workbench [19:41:39] oh [19:41:47] is there any alternative? [19:42:02] Superyetkin: you have either to generate a new pair of keys or use the putty pagent [19:42:13] like connecting to wikimedia labs through a PHP application? [19:42:46] hedonil: you can convert keys using puttygen [19:43:23] valhallasw: I think you can convert from v1 to v2 [19:43:32] connecting to wikimedia labs through a PHP application ---- possible? [19:43:47] hedonil: v1/v2? it's putty vs openssh that's the issue [19:44:56] valhallasw: maybe, the same incompatibility openssl /openssh as in oauth [19:45:51] connecting to wikimedia labs through a PHP application ---- possible? [19:46:28] Superyetkin: no [19:47:03] so, what is the best way to create a tool? [19:47:12] ... [19:47:22] for example, how to set up an edit counter script? [19:47:24] I don't understand your question. [19:47:40] Ehm. [19:48:02] I have been using Mediawiki API for long but that provides limited capability [19:48:30] There is a webserver on Tool Labs. You can use PHP there, but I wouldn't call that 'connecting to wikimedia labs'. 
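Since PuTTY-format keys reportedly don't work with MySQL Workbench's built-in SSH tunnelling, one workaround is to open the tunnel yourself and point Workbench (or the mysql client) at the local end. A sketch under a few assumptions: the key has been converted to OpenSSH format, port forwarding is allowed on tools-login, and the user/password are the ones from ~/replica.my.cnf there (<shell_user> and <replica_user> are placeholders):

    # forward local port 3307 to the trwiki replica through tools-login
    ssh -N -L 3307:trwiki.labsdb:3306 <shell_user>@tools-login.wmflabs.org &
    # connect to the local end; default schema is trwiki_p, as discussed above
    mysql -h 127.0.0.1 -P 3307 -u <replica_user> -p trwiki_p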
[19:48:31] I need to run some maintenance scripts and generate reports using the replica database [19:49:46] can I connect to the replica database from within the webserver? [19:50:13] yes [19:50:19] how? [19:51:21] https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help#Connecting_to_the_database_replicas [19:52:11] I meant from within a PHP application [19:53:08] The same way you would do that on any host. [19:53:28] hmm [19:53:36] this is really confusing [19:53:45] Cyberpower678: Hi. how is it going? What are the plans for X editcounter? Migrate from php to python? [20:00:12] hedonil, god no [20:00:29] Cyberpower678: glad to hear this ;) [20:01:39] :p [20:03:22] Cyberpower678: yeah old and busted - but successful. It needs a facelift though :P [20:04:50] hedonil, it's on x-Tools/xtools [20:05:19] valhallasw: link or it didn't happen ; [20:05:39] äh Cyberpower678 [20:06:39] hedonil, how do you mess that up? :p [20:08:32] valhallasw: btw. If I talk about an easy key conversion, I see /one/ button or an oneliner to do this. not export->import->export ->3rd party tool etc... [20:10:26] Cyberpower678: I'm a bit confused and very pissed [20:11:01] Cyberpower678: so/sth fucked up my database again - this has to be inverstigated [20:12:44] hedonil, who? [20:13:10] Cyberpower678: I don't know yet [20:13:47] hedonil, https://github.com/x-Tools/xtools [20:15:09] hedonil: ?? [20:15:20] * hedonil is looking into it [20:15:28] hedonil: PuTTYgen is what these people used initially to create their keypair [20:15:32] so they already *have* it [20:19:05] Cyberpower678: When will the new XCounter be online? [20:19:21] hedonil, dunno. [20:23:57] Hi! I'm trying to run my script on grid but I'm getting the following error "libgcc_s.so.1 must be installed for pthread_cancel to work". Any hints? [20:24:19] DixonD_: You win! [20:24:55] DixonD_: It's in the docs.. You need moar memory -mem 500m [20:25:37] Thanks, I really need to finish reading docs, I guess:) [20:26:16] DixonD_: yw. it seems to be a rite for all fresh php guys on labs ;) [20:26:42] Damianz: pioung :o [20:26:47] we need to fix icinga [20:26:59] there is still 400+ puppet checks waiting [20:27:23] I assume it doesn't have the stupid hook trap thing setup [20:27:33] Or hosts havn't updated recently [20:27:47] yes that's it [20:27:52] the stupid thing isn't there [20:28:07] @notify mutante [20:28:07] This user is now online in #wikimedia-tech. I'll let you know when they show some activity (talk, etc.) [20:28:13] I can look at it after I send some emails... think I remember how it was setup [20:28:27] fuck [20:28:36] wm-bot: fix yourself dud [20:28:36] Hi petan, there is some error, I am a stupid bot and I am not intelligent enough to hold a conversation with you :-) [20:41:39] petan you about? [20:42:27] almost [20:42:28] :P [20:44:10] Betacommand . [20:44:30] petan: user_daily_contribs isnt available on labs [20:44:49] aha that I can't fix I am not allowed to touch sql [20:45:00] you need Coren :o [20:48:01] has anybody ever used .lighthttpd.conf on labs? [20:48:41] it doesn't seem to honor url.rewrite_once at all. nor do i see any effect from "debug.log-request-handling = "enable"". 
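The "libgcc_s.so.1 must be installed for pthread_cancel to work" message at 20:23:57 is the usual symptom of a grid job hitting the default per-job memory limit, and the fix suggested at 20:24:55 is to request more memory at submission time. A sketch with a placeholder job name and script path:

    # ask the grid for 500 MB instead of the default allocation
    jsub -mem 500m -N daily-report php $HOME/scripts/daily-report.php
    # the same -mem flag should also apply when using jstart for continuous jobs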
[20:49:01] (03PS1) 10Yuvipanda: Ignore messages from the VE sync bot [labs/tools/grrrit] - 10https://gerrit.wikimedia.org/r/111889 [20:49:51] (03PS2) 10Jforrester: Suppress updates to mediawiki/extensions [labs/tools/grrrit] - 10https://gerrit.wikimedia.org/r/110830 [20:50:12] (03Abandoned) 10Yuvipanda: Suppress updates to mediawiki/extensions [labs/tools/grrrit] - 10https://gerrit.wikimedia.org/r/110830 (owner: 10Jforrester) [20:52:04] petan I have an idea/thought that I want to bounce off you [20:53:55] petan: snmp shizzle is up, but names are wrong [20:54:07] ? [20:54:11] which names [20:54:48] Passive check result was received for service 'Puppet freshness' on host 'boogswolibs', but the host could not be found! [20:54:51] need to add the domain [20:55:00] I thought I did this ages ago... maybe it never got merged [20:55:08] was one of the outstanding before move things [20:55:17] no idea why is that [20:55:22] That's all puppet/instance end though, box is sorted [20:55:24] oh maybe I know [20:55:31] we are using fqdn for hostst [20:55:38] yeah, which is correct [20:56:03] This was the problem ages ago... when I 'fixed' it for duplicate names on the second cluster [20:56:03] but nagios needs to know which one it is, maybe some alias would fix that? [20:56:15] The instance knows where it is... well puppet does [20:56:18] So we can just change the trap [20:56:22] And that should fix it [20:56:25] In about 6 months [20:56:27] When its merged [20:56:40] yes but nagios needs to associate this with the definition it has [20:56:51] mhm [20:57:02] how would you change the trap? [20:57:28] Actually 1min [20:57:32] :o [20:57:33] Maybe I can fix this right now [20:57:36] * Damianz opens a man page [20:57:43] !man [20:57:43] https://labsconsole.wikimedia.org/wiki/Special:NovaProject [20:57:47] :o [20:57:48] lol [20:58:15] !petan [20:58:15] Petr Bena - http://enwp.org/User:Petrb (hates python) :D [21:00:10] Yeah I fixed it [21:00:14] Because I'm just awesome like that [21:00:40] how [21:00:54] update https://wikitech.wikimedia.org/wiki/Icinga/Labs :o [21:00:59] we need docs [21:00:59] Passing the full FQDN of the remote the trap came from, rather than stripping the domain [21:01:02] Can't [21:01:05] why [21:01:07] I broke my phone so can't login [21:01:10] you banned? :D [21:01:11] https://gist.github.com/DamianZaremba/8852325 is how to set it up [21:01:12] ah lol [21:01:21] Need to find my codes/bribe Coren [21:01:24] petan what do you think of creating a static symlink to the most current version of the database dumps? [21:01:27] is there a way to verify whether lighthttpd reads the config at all? where is output from "debug.log-request-handling" supposed to show up? it's not in ~/access.log. [21:01:49] * Damianz waits to see if things come green... commands are getting accepted and not erroring [21:01:57] Betacommand: that is something I would think we already have if you didn't tell me we don't :D [21:02:08] petan: we dont [21:02:14] mhm [21:02:20] well we should have it [21:02:36] but problem is I don't even know how / who maintains these dumps [21:02:43] I think apergos is not the person [21:02:48] petan it would make maintaining my dump scanners far easoer [21:03:05] not only that I guess :P [21:03:40] Its annoying to have to look up the new path every time there is a new dump :( [21:06:29] JohannesK_WMDE: the error output is in error.log, with a some delay [21:07:55] hedonil: i know that's where it *should* be. did you ever successfully create rewrite rule with lighthttpd? 
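The stable-path idea raised at 21:01:24 could be served by a cron job that keeps a "latest" symlink pointing at the newest dump directory per wiki. A hypothetical sketch; the layout under /public/dumps/public is illustrative, not necessarily the real one:

    #!/bin/bash
    # refresh a "latest" symlink for each wiki after new dumps land
    base=/public/dumps/public
    for wiki in "$base"/*/; do
        newest=$(ls -d "$wiki"20[0-9][0-9][0-9][0-9][0-9][0-9] 2>/dev/null | sort | tail -n 1)
        [ -n "$newest" ] && ln -sfn "$newest" "${wiki}latest"
    done

With something like that in place, dump scanners could hard-code .../enwiki/latest/ instead of looking up the new date every run.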
[21:08:29] JohannesK_WMDE: yes. [21:08:43] hedonil: on labs? [21:09:08] * hedonil looks for the docs [21:10:03] !newweb [21:10:03] https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help/NewWeb [21:10:24] JohannesK_WMDE: the rewrite examples have been tested [21:12:00] JohannesK_WMDE: .. and use += rather than only = [21:16:09] hedonil: ok, .lighttpd.conf != .lighthttpd.conf [21:16:33] wrong filename, that was the problem... [21:16:33] JohannesK_WMDE: yeah ;) [21:18:08] * Damianz wonders if andrewbogott_afk is around [21:26:56] Damianz u sure it's fixed? [21:27:00] :/ [22:01:57] (03CR) 10Dzahn: [C: 032] Add labs ssl key for puppet role::planet. [labs/private] - 10https://gerrit.wikimedia.org/r/109480 (owner: 10JanZerebecki) [22:02:05] (03CR) 10Dzahn: [V: 032] Add labs ssl key for puppet role::planet. [labs/private] - 10https://gerrit.wikimedia.org/r/109480 (owner: 10JanZerebecki) [22:06:15] I just logged into my wikimedia-labs bastion account and I have some questions. can someone help me? [22:07:13] I logged into my account using ssh -A hcohl@bastion.wmflabs.org [22:07:19] I get in fine [22:07:34] but now I don't know my password!? [22:07:52] I want to change my shell [22:12:58] there is no tcsh on bastion [22:17:22] hi, eh, just to make sure because it's been a while i needed this [22:17:25] mutante: can u help us with nagios [22:17:29] after merging stuff in labs/private [22:17:34] is there another step to do [22:17:50] nothing like puppet-merge, right [22:17:55] petan: depends? [22:18:03] icinga or real nagios [22:18:11] icinga [22:18:14] what is the issue [22:18:19] we need to fix that puppet check [22:18:30] it sends the data to host but idk how to make it work [22:18:34] is this a new icinga setup? [22:18:39] or the same one [22:18:42] there is some trap thing [22:18:45] the old check [22:18:49] yea, didnt we do that before [22:18:51] is there some new check? [22:18:51] like twice [22:18:57] yes we did [22:19:01] and I forgot :D [22:19:05] so i guess it's still the same thing [22:19:09] this is new installation on brand new box [22:19:11] that there is no script [22:19:16] that starts it on boot [22:19:22] hmm [22:19:22] and each time the instance gets restarted [22:19:24] it breaks again [22:19:33] well it just doesn't work [22:19:35] oh, new installation? [22:19:38] so it's puppet? [22:19:39] yes [22:19:43] aha! nice [22:19:49] new installation, not using puppet tbh [22:19:49] !log beta Manually ran changePassword.php to help someone (password reminder emails don't get sent) [22:19:49] beta is not a valid project. [22:19:54] there is no icinga class so far [22:20:04] Krinkle: deployment-prep [22:20:28] !log deployment-prep Manually ran changePassword.php to help someone (password reminder emails don't get sent) [22:20:29] Logged the message, Master [22:20:33] we still haven't renamed that? [22:20:34] ugh, [22:20:48] petan: the icinga class from production? 
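For reference, the filename mix-up resolved at 21:16:09 and the += advice at 21:12:00 combine into a rule like the following; the tool name and URL pattern are hypothetical, and the directive is spelled url.rewrite-once (with a hyphen):

    # ~/.lighttpd.conf  -- note: .lighttpd.conf, not .lighthttpd.conf
    url.rewrite-once += (
        "^/mytool/api/(.*)$" => "/mytool/api.php?path=$1"
    )
    # debug.log-request-handling = "enable" output shows up in the tool's error.log, with some delay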
[22:20:56] arg, doc.wm broken [22:21:01] hold on, be right back , reporting that [22:22:28] mutante: I am not going to just apply some production class for service which is completely different on labs than on production :P [22:22:42] that would break everything horribly [22:23:11] petan: the point has been to finally be able to test production changes [22:23:20] but maybe that's 2 different kinds of icinga you want [22:23:20] well, yes [22:23:32] but I would need to talk to someone who made that class [22:23:32] sorry, but i have been saying this since before install 1 [22:23:36] and we keep fixing stuff [22:23:38] that is manual [22:23:39] or at least test it first [22:23:41] so it's kind of hard [22:23:43] on another instance [22:23:43] to help [22:23:59] of course test it first [22:24:02] but that's what labs is for [22:24:18] it would be a lot easier for me to help [22:24:25] if you say here's this puppet issue [22:24:28] with the prod class [22:24:33] and i could submit gerrit possibly [22:25:03] but back to the current problem, you need to make sure snmptrapd is running [22:25:08] if it's the same issue as before [22:25:21] it's in a mail thread.. hold on [22:26:06] yes it is running [22:27:21] http://lists.wikimedia.org/pipermail/labs-l/2012-March/000128.html [22:27:48] i can get you the example from neon [22:28:09] /usr/sbin/snmpd -Lsd -Lf /dev/null -u snmp -g snmp -I -smux -p /var/run/snmpd.pid [22:28:15] /usr/bin/perl /usr/sbin/snmptt --daemon [22:28:24] /usr/sbin/snmptrapd -On -Lsd -p /var/run/snmptrapd.pid [22:28:41] you should see both, snmptt and snmptrapd [22:32:21] The problem in labs is the naming is not consistent... it should be working fully now (I've made it do a reverse lookup that matches what's calculated as the hostname from puppet data for monitoring) [22:32:38] Though snmptt/snmptrapd is just generally a load of bollocks and should diaf [22:34:14] Btw is there any reasons down hosts are not ignored and up hosts (are sending snmp traps) are ignored? [22:34:17] This seems illogical [22:35:01] * Damianz notes wrong button [22:38:03] i'm not sure i get what you mean and what the actual issue is [22:38:12] hosts who complete puppet runs send a packet out [22:38:23] as long as that packet is recevied by snmtrapd [22:38:32] it passes it to snmptt [22:38:48] and if the hostname matches the hostname in nagios/icinga, (yea, that has to match somehow) [22:38:54] then that passive check stays OK [22:39:02] once those packets stop coming in for too long [22:39:04] it turns critical [22:39:23] that's actually cool, you dont need to actively check things over and over [22:39:34] you just sit there and the monitored hosts keep saying hi [22:39:37] unless they dont [22:39:45] And then on labs it fails, because the script snmptt calls gets the i-xx rather than proj-xx (which is used in icinga, from puppet data)... so now it looks up the correct name based on ip, which is the same logic that is used to generate the configs... so it all works. 
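Putting the process list quoted at 22:28 together with the external-command mechanism described later, the chain looks roughly like this. The OID, handler name and paths are placeholders, not the production values; the actual labs setup is in the gist linked elsewhere in this log:

    # /etc/snmp/snmptrapd.conf -- hand every incoming trap to snmptt
    disableAuthorization yes
    traphandle default /usr/sbin/snmptthandler

    # /etc/snmp/snmptt.conf -- map the "puppet ran" trap to a handler script
    EVENT puppet-run .1.3.6.1.4.1.99999.1.1 "Status Events" Normal
    EXEC /usr/local/bin/puppet-freshness-handler --ip=$aA

    # the handler then appends a passive result to Icinga's external command file, e.g.
    # [1391900000] PROCESS_SERVICE_CHECK_RESULT;proj-foo.pmtpa.wmflabs;Puppet freshness;0;puppet ran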
[22:39:46] which could be for multiple reasons, but you catch them all [22:40:00] yea, exactly what i explained in that mail in 2012 [22:40:04] sorry but i did [22:40:13] and there was another thread about it [22:40:18] the instance naming [22:40:29] then people decided to set it up manually [22:40:34] and get the instance names from labs [22:40:35] which is cool [22:40:46] but now it's broken again and has nothing to do with the prod setup [22:41:06] and i'm being asked about it every couple months again [22:41:25] so either you can script it again [22:41:31] or just live with "ugly" instance names in Icinga [22:41:42] which i don't think is such a big deal [22:42:24] the point being that you had those problems because you setup it up manually, and now it's being used as a reason why [22:42:35] we supposedly can't use the existing stuff [22:42:39] It's working with nice names right now ;) (Or will be once everything calls home... few hundred done so already).. this is an issue I've see go around in loops a few times and I kinda don't care because it could be done so much better. [22:42:58] i have been saying this since March 2012, kthx [22:43:07] and i have explained how it works [22:43:10] in great detail [22:43:22] even brought up the EC2 tool to get the instance names [22:43:25] see list thread [22:44:46] if actual hostnames wouldn't be hidden for some style reason there wouldnt be an issue [22:45:31] maybe one way is to use both [22:45:34] hostname and alias [22:45:37] in icinga config [22:46:49] i think we should be able to test changes in prod in labs, but this way we never will be [22:47:20] and you cant even suggest how to replace snmptrapd with a better system if you think it's so bad [22:48:03] because nothing is in puppet or gerrit [22:48:40] but people say it's just more complicated.. here you go when it's 2 years later [22:48:54] (03CR) 10Legoktm: [C: 032] Ignore messages from the VE sync bot [labs/tools/grrrit] - 10https://gerrit.wikimedia.org/r/111889 (owner: 10Yuvipanda) [22:48:57] (03CR) 10Jforrester: [C: 032] Ignore messages from the VE sync bot [labs/tools/grrrit] - 10https://gerrit.wikimedia.org/r/111889 (owner: 10Yuvipanda) [22:49:06] and it's like starting from scratch [22:50:11] !log tools restarting grrrit-wm https://gerrit.wikimedia.org/r/111889 [22:50:12] Logged the message, Master [22:52:11] Damianz: any chance that you can document that :o [22:52:20] See previous link [22:52:31] which one the thread? [22:52:45] https://gist.github.com/DamianZaremba/8852325 [22:54:22] mhm [22:54:30] Krinkle: re bug 54710: Do you want to set it up on a other proxy? Seems already working except that it is blocked by 60865 [22:54:41] Damianz: I don't get it [22:55:00] how does snmpblabla contact icinga? [22:55:04] how does it do that? [22:55:15] through some pipe or what [22:55:19] Yeah [22:55:24] See the first line [22:55:40] When that is set the path in the config becomes a place you can write to [22:55:56] se4598: Well, anything as long as it works, but I don't know what it is on right now. I imagine it might be useful to use Yuvi's HTTPS proxy for all these projects that don't need their own web servers / https setup per se. [22:56:01] What is it on now? [22:56:19] Damianz: ok but how snmpthingies know where to write and what [22:56:26] And why is it giving an ssl error? Would that be solved by switching to Yuvi's proxy? Or is it caused by something in the ganglia project itself? 
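"ssl_error_rx_record_too_long" / ERR_SSL_PROTOCOL_ERROR usually means the server answers port 443 with something that is not TLS at all (typically plain HTTP), rather than a bad certificate. Two quick probes using the hostnames from the discussion:

    # see whether anything TLS-shaped answers on 443
    openssl s_client -connect ganglia.wmflabs.org:443 -servername ganglia.wmflabs.org < /dev/null
    # compare against a name known to work behind the other proxy
    curl -vI https://cvn.wmflabs.org/api.php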
[22:56:33] in which format and so on [22:56:51] I get the part on icinga side I don't get the side on snmpthingie side [22:57:05] maybe I should just go sleep :D [22:57:06] Krinkle: https://ganglia.wmflabs.org/ [22:57:18] snmptrapd calls snmptt, snmptt calls my script with --ip=$theipitgotthemessagefrom, my script then pulls the hosts from ldap and finds the ip (I can't search the aRecord attribute as it's not indexed) and then writes out the correct thing to the external commands file [22:57:29] Krinkle: the same message as for https://ee-dashboard.wmflabs.org/ [22:57:30] se4598: Serves an error by the browser (Error code: ERR_SSL_PROTOCOL_ERROR) [22:57:43] aha I see that line 98 [22:57:44] Yeah, but https://cvn.wmflabs.org/api.php works fine [22:57:50] which uses Yuvi's proxy [22:57:50] that is what I was missing all the time [22:58:00] and various others as well [22:58:45] Damianz: aha, so you did write a script to fix, very nice [22:58:49] does icinga uses the proxy too? [22:58:55] se4598: yes [22:58:55] so we can just add that to puppet [22:59:04] and install it if $realm is labs [22:59:09] and not otherwise and should be it [22:59:18] i can help with that later if you put the script somewhere [22:59:22] bbl [22:59:36] Puppet needs a few tweaks I think, but yeah - we should also add the builder stuff that's currently a clone and crontab'd [23:01:17] Krinkle: ok, then both over the proxy.... as long as nobody knows/investigates the reason for the "ssl_error_rx_record_too_long" [23:04:10] Krinkle: feel free to to add the ee-dashboard to the bug/setup and close the two others in favor this one [23:05:14] se4598: which others? [23:05:27] https://bugzilla.wikimedia.org/show_bug.cgi?id=60865 [23:05:30] https://bugzilla.wikimedia.org/show_bug.cgi?id=57371
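A condensed sketch of the handler flow described above (snmptt passes --ip=..., the script maps the IP back to the Icinga host name via LDAP and writes a passive check result). The LDAP base DN, the associatedDomain attribute and the command-file path are assumptions for illustration; the real script is the one in the gist:

    #!/bin/bash
    # invoked by snmptt as: puppet-freshness-handler --ip=10.4.0.220
    ip="${1#--ip=}"
    base="ou=hosts,dc=wikimedia,dc=org"        # assumed base DN
    cmdfile="/var/lib/icinga/rw/icinga.cmd"    # assumed external command file

    # aRecord is not indexed, so fetch all host entries and match the IP locally
    host=$(ldapsearch -x -LLL -b "$base" dc aRecord associatedDomain |
           awk -v RS='' -v ip="$ip" 'index($0, "aRecord: " ip) {
               for (i = 1; i <= NF; i++)
                   if ($i == "associatedDomain:") { print $(i + 1); exit }
           }')

    [ -n "$host" ] || exit 0    # unknown host: drop the trap rather than spam Icinga
    printf '[%s] PROCESS_SERVICE_CHECK_RESULT;%s;Puppet freshness;0;puppet ran\n' \
           "$(date +%s)" "$host" >> "$cmdfile"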