[00:06:02] New patchset: coren; "Bump version for packaging previous fixes" [labs/toollabs] (master) - https://gerrit.wikimedia.org/r/67930 [00:06:49] New review: coren; "Version bump" [labs/toollabs] (master); V: 2 C: 2; - https://gerrit.wikimedia.org/r/67930 [00:06:49] Change merged: coren; [labs/toollabs] (master) - https://gerrit.wikimedia.org/r/67930 [01:22:18] Coren or petan: Can I get either libdigest-crc-perl or libstring-crc32-perl installed? [02:05:27] petan: Also, it looks like sharp-memcached is not binary safe. The following command fails when it should succeed: echo -en 'set test 0 0 4\r\nf\xc3\xb3o\r\nget test\r\nquit' | nc tools-mc 11212 [02:05:40] petan: And this succeeds when it should fail: echo -en 'set test 0 0 3\r\nf\xc3\xb3o\r\nget test\r\nquit' | nc tools-mc 11212 [02:11:11] paravoid: http://www.mediawiki.org/w/index.php?title=Git%2FNew_repositories%2FRequests%2FEntries&diff=708815&oldid=708814 [02:11:13] :-P [03:05:41] why am i suddenly getting jsub errors? [03:05:51] Before I could just use jsub python script.py [03:06:12] Now I'm getting http://pastebin.com/eW3pqYBa [03:08:11] petan: ^^ ? [05:35:59] theo|cloud tools or bots [07:06:23] petan: Kk. :) [07:06:34] here [07:16:23] a930913 NotImplementedError: no ability to handle more than one release [07:16:33] pypi can't convert lxml :/ [07:32:17] @notify anomie [07:32:18] I'll let you know when I see anomie around here [07:36:01] !log tools petrb: installed libdigest-crc-perl [07:36:03] Logged the message, Master [07:49:07] petan: :/ [07:49:34] petan: I could do it locally, right? As in virtual environment. [07:49:44] a930913 make a package from it and I will install it [07:50:17] petan: The most I know about packaging is apt-get ;) [07:53:09] the most I know about python is that it's a snake [07:54:02] petan: It's a group of comedians... [07:54:09] * a930913 sighs. :p [07:55:17] http://3.bp.blogspot.com/-Z-587LkJj_0/TqY-rXZCrkI/AAAAAAAAAN8/UxD_JmFRXOg/s1600/python_snake.gif comedian? [07:55:22] !python is http://3.bp.blogspot.com/-Z-587LkJj_0/TqY-rXZCrkI/AAAAAAAAAN8/UxD_JmFRXOg/s1600/python_snake.gif [07:55:23] This key already exist - remove it, if you want to change it [07:55:25] petan: hey there! Have you solved your vimrc configuration on labs ? :D [07:55:30] !python [07:55:30] Damianz [07:55:32] aha [07:55:35] hashar not really [07:55:53] petan: let me know if you need any help / tip [07:55:55] hashar: but I found out it's not in puppet, so you can safely sudo rm the wmf version :3 [07:56:16] which makes it possible for you to use own .vimrc [07:56:16] what is your issue with vim in labs ? 
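A quick illustration of the sharp-memcached report above: the test value is three characters but four bytes once UTF-8 encoded, so only the "set test 0 0 4" form is a valid store, and a binary-safe server has to count bytes rather than characters. A minimal Python sketch of the same arithmetic (nothing here touches the server; it only shows why the two nc tests should behave the way anomie describes):

    # -*- coding: utf-8 -*-
    # "fóo" is 3 characters but 4 bytes in UTF-8 (ó encodes as 0xc3 0xb3),
    # which is why "set test 0 0 4" is the form that should succeed.
    value = u"f\u00f3o".encode("utf-8")
    print(len(value))  # -> 4

    # The raw protocol lines the nc test pipes in; the declared length must be
    # the byte length of the value, not its character count.
    good = b"set test 0 0 " + str(len(value)).encode("ascii") + b"\r\n" + value + b"\r\n"
    bad = b"set test 0 0 3\r\n" + value + b"\r\n"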
[07:56:24] I don't like the colors [07:56:27] it break terminal [07:56:36] and indentation rules suck [07:56:46] I have to switch to paste mode in order to paste :/ [07:56:51] with my own config I do not [07:57:08] kkk [07:57:30] so when you paste, vim will apply the indentation/autoformatting rules to the text being pasted [07:57:36] yes [07:57:39] which totally break it [07:57:41] if you want to disable that, you can: set paste [07:57:49] I know, that is how to switch to paste mode [07:57:53] but I am lazy :D [07:57:58] one problem solved :D [07:57:58] my own config doesn't require that [07:58:08] idk why [07:58:11] for the colors misbehaving in your terminal, I am not sure what you mean [07:58:18] when I turn off vim [07:58:20] a screenshot would be nice [07:58:22] terminal is all green [07:58:39] when I open other application which change colors it get fixed [07:58:42] like htop [07:58:43] even when simply opening and closing it ? [07:58:46] or mcedit [07:58:48] yes [07:58:54] weird [07:58:55] when I open and close vim - terminal is broken [07:59:28] :q -> htop -> q == annoying [07:59:32] maybe something in your environment. You can try: bash --norc --noprofile [07:59:43] but it happens everywhere on labs :/ [07:59:45] that will start a new shell without loading any bashrc / bash profile .profile .. [07:59:47] and only vim does it [08:00:19] so that is most probably your terminal :( [08:01:43] !pythonguy is this guy master python more than you: http://lh5.ggpht.com/-gjDgXLXmWTQ/TsuuOwSKWHI/AAAAAAAAk4w/XJOKxaGti-c/boy%252520python%252520bff%252520snake%2525206_thumb.jpg [08:01:43] Key was added [08:02:44] hashar it happens in lxterm as well as gnome-terminal :/ [08:02:50] I think in putty as well [08:02:50] petan: it must be some escape sequence that your terminal does not recognize or mis interpret [08:04:09] !pythonwalkthrough is http://stream1.gifsoup.com/view2/1903280/ministry-of-silly-walks-o.gif [08:04:10] Key was added [08:10:36] !tooldocs [08:10:36] https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help [08:28:23] petan: What's the channel to @join wm-bot again? [08:28:33] #wm-bot [08:28:38] legoktm: Like my walkthrough? :) [08:28:39] it's @add [08:28:42] not @join [08:28:52] a930913: busy sorry [08:28:57] LOL [08:29:26] petan: Ah, that's why :D [08:30:48] petan: It was @relay on wasn't it? [08:31:03] @relay-on [08:31:03] Relay is already enabled [08:31:05] with - [08:31:07] :o [08:31:09] Ah. [08:35:00] petan: I was wondering if it might be worth adding some sort of token authentication to prevent this? [08:35:13] it's already logging it... [08:35:22] but yes that is possible [08:35:23] Logging what? [08:35:26] @verbosity++ [08:35:26] Verbosity: 1 [08:35:28] @verbosity++ [08:35:28] Verbosity: 2 [08:35:34] idk which level it's on [08:35:45] @verbosity+-0 [08:35:46] but on one of the levels it display the source of message in system log [08:35:58] Test? [08:36:32] LOG [6/11/2013 8:35:58 AM]: DEBUG: Relaying message from 10.4.0.220:41103 to #wikimedia-labs:Test? [08:36:48] petan: Source as in user? Surely just IP, in which case... yeah, how does that identify me? [08:36:55] As in, it's from the labs. [08:37:00] That's all you know. 
[08:37:03] hmm [08:37:22] yes I will implement the tokesn then [08:37:40] @verbosity-- [08:37:40] Verbosity: 1 [08:37:41] @verbosity-- [08:37:41] Verbosity: 0 [08:37:53] the debug log is really huge on verbosity about 8 [08:37:54] :P [08:38:04] petan: :p [08:38:07] like 2000 lines per minute [08:38:10] petan: I know when debuglog gets too big [08:38:11] :o [08:38:15] ツ [08:38:36] AzaToth: You made Twinkle, right? [08:39:07] a930913: aye [08:39:45] petan: a bot I ran on ts before had log eating up quota [08:39:54] heh [08:40:06] AzaToth: Any chance you could help me with WAVE? [08:40:09] gry: so, you said you needed user rights across wikis? [08:40:12] yes I think beetstra or someone was eating whole labs storage in past [08:40:17] yes [08:40:39] gry: user rights to delete stuff and blow up the wikimedia project? :P [08:41:01] gry: I dont know if there is a tool active yet/atm/in planning for displaying such data [08:41:03] let me think... mhm, ok: {{granted}} [08:41:20] petan: thanks, do you know what api query I can use on a username to retrieve their user rights across wikimedia wikis? want to do that to a bunch of users (~40) once only, and I can try to script that [08:41:33] petan: not in the 'grant me al the things' sense, I'm just trying to get a list of such rights for another user's username [08:41:44] that's some nice humour there though, I like it :) [08:41:46] I think there is [08:41:54] I don't think there is api query for that, which is a shame :/ on other hand you can get this from db [08:41:55] but I cant remember which [08:42:03] I think there is a service [08:42:11] yes service [08:42:13] dunno if it's on ts, labs, or wherever [08:42:15] but no query using mediawiki [08:42:36] gry: is it "api" you need, or just a lookup once [08:42:52] just lookup once, it's not a repeat task [08:43:06] what's the username then? :P [08:43:12] * Beetstra munches on the last couple of bytes .. burp [08:43:22] http://toolserver.org/~quentinv57/sulinfo [08:43:27] gry ^ [08:43:28] I have a list of fourty members, http://toolserver.org/~quentinv57/sulinfo/ gives it for one username but not for all [08:43:38] ah ok [08:44:44] Io assume a simple SQL could solve it quicly petan ? [08:44:49] ±spelling [08:44:57] * a930913 nybbles on Beetstra. [08:45:12] a930913: word bro' [08:45:47] AzaToth: ? :/ [08:46:21] OMG [08:46:32] AzaToth you are that guy who moved jsub from /usr/local/bin ? [08:46:48] next time pls create a symlink just to keep backward compatibility [08:47:04] petan: I didn't move them, Coren did the actual work [08:47:09] mhm [08:47:16] and Coren said Coren was going to keep symlinks around [08:47:23] * a930913 sharpens his pitchfork. [08:47:31] he didn't [08:47:32] :P [08:47:40] well, that's his ass thenn [08:48:00] Coren: Coren Coren Coren [08:48:00] which is why all my bots and tools just broke since midnight [08:48:04] hehehe [08:48:06] I fixed it, no problem [08:48:36] AzaToth: Can you help me with WAVE? [08:48:46] a930913: sorry, forgot you asked [08:48:52] whats WAVE? [08:48:54] AzaToth: It's like a huggle/STiki in the browser. [08:49:09] never heard about it [08:49:15] huggle in browser != possible [08:49:40] we were working on it in past, wmf shut it down [08:49:52] haven't used huggle [08:49:55] At the moment, it just hacks into twinkle a bit in order to revert and popup warn. [08:50:04] petan: Why did WMF shut it down? :/ [08:50:09] a930913: linky? [08:50:10] authentication [08:50:33] petan: In terms of it must need rollback? 
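On gry's one-off lookup above (group memberships for ~40 accounts across wikis): with no single API query available at the time, looping over the database replicas is one way to script it. A rough Python sketch; the user and user_groups tables are standard MediaWiki schema, but the replica host/database naming (<wiki>.labsdb, <wiki>_p), the ~/.my.cnf credentials, and the user/wiki lists are assumptions to adjust:

    import os
    import MySQLdb

    users = ["Example user 1", "Example user 2"]   # hypothetical list of ~40 names
    wikis = ["enwiki", "dewiki", "commonswiki"]    # hypothetical list of wikis

    for wiki in wikis:
        conn = MySQLdb.connect(host=wiki + ".labsdb", db=wiki + "_p",
                               read_default_file=os.path.expanduser("~/.my.cnf"),
                               charset="utf8")
        cur = conn.cursor()
        for name in users:
            # user_groups holds one row per (user, group); join back to user
            # to match on the name.
            cur.execute("SELECT ug_group FROM user_groups"
                        " JOIN user ON ug_user = user_id"
                        " WHERE user_name = %s", (name,))
            groups = [str(row[0]) for row in cur.fetchall()]
            if groups:
                print("%s @ %s: %s" % (name, wiki, ", ".join(groups)))
        conn.close()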
[08:50:41] Or in terms of asking for users passwords? [08:50:48] they said if we were asking user for a SUL password in a browser they would block the access of any such a service to cluster [08:50:58] I'm not asking for passwords. [08:51:05] and, on other hand they provide no other way beside asking for a password [08:51:07] It uses the currently logged in guy. [08:51:09] petan: well oauth is coming too….. [08:51:09] petan: why would you ever ask for password? [08:51:12] which is a deadlock a bit [08:51:19] legoktm this was like years ago... [08:51:25] petan: :D [08:51:27] and oauth is still coming since then [08:51:35] hehe [08:51:46] AzaToth: https://en.wikipedia.org/wiki/User:A930913/wave [08:51:57] AzaToth I would ever ask for a password because oauth is coming for years... [08:52:04] * AzaToth still waits for bug 1 to be fixed [08:52:05] and until it come I will have to [08:53:06] a930913: sorry, need to run, bbl [08:53:49] AzaToth: Kk, ping me. [09:02:34] petan: If they are in their browser, why do you need to authenticate them? ;) [09:13:07] a930913 whart [09:13:17] ah [09:13:22] I will explain to you later [09:13:27] :O [09:15:47] !ping [09:15:47] pong [09:15:52] !log bots petrb: test [09:15:54] Logged the message, Master [09:17:27] Is spawning a number of processes for each enwiki edit a bad idea? [09:22:21] a930913: http://meta.wikimedia.org/wiki/Wm-bot#.40token-on [09:22:37] @token-on [09:22:37] New token was generated for this channel, and it was sent to you in a private message [09:22:49] I don't like that one [09:22:50] @token-on [09:22:50] New token was generated for this channel, and it was sent to you in a private message [09:23:07] petan: tokens are? [09:23:12] oh [09:23:14] authentication [09:23:26] yes [09:23:51] petan: Can you recollect token? (Assume has to be chanop?) [09:24:15] yes [09:24:18] @token-remind [09:25:08] petan: \o/ [09:25:25] !log bots petrb: test [09:25:27] Logged the message, Master [09:29:41] !log bots petrb: this is a test ignore it [09:29:43] Logged the message, Master [09:31:05] a930913: so what was the issue? [09:31:40] AzaToth: I don't know the mw js too well. [09:32:18] a930913: they are (should be at least) documented at http://www.mediawiki.org/wiki/ResourceLoader/Default_modules [09:33:17] sadly some methods there have been created with a single target in mind, thus can be a bit limiting [09:33:50] AzaToth: I want to automate certain events, such as "revert current page with edit summary 'foo', load the user's page and template it with {{subst:bar}} under the l2 heading." [09:34:37] you can do such things either async or sync [09:35:13] i.e. 
on a mw.Api.post after another mw.Api.post or in the done callback [09:35:43] !log bots petrb: blabla [09:35:45] Logged the message, Master [09:36:03] in twinkle, we'll ignore good practice and just pump the requests to the server [09:36:09] AzaToth: Yeah I know, but your twinkle stuff does most of it already :) [09:36:16] especially on the batch modules [09:36:22] generally we do it async [09:36:39] only when we need the reply, we do it sync [09:37:30] Though in my example, you could do it all sync, because then you wouldn't template without reverting, and it would all be done in the background anyway :) [09:37:35] in twinkle we dont use mw.Api for edits as we made a own lib before mw folks decided to implement mw.Api [09:38:02] yea [09:38:17] reverting is something that can be aborted [09:38:51] for example here I'm making 3 get requests async: https://github.com/azatoth/twinkle/blob/master/modules/twinklearv.js#L335 [09:39:14] because they doesn't depend on eachother [09:41:30] How can you test, without committing a wiki edit? Is that possible? [09:42:40] a930913: you can test watching and unwatching a page or something. [09:42:46] or a purge [09:44:20] legoktm: ? [09:44:49] [02:41:30 AM] How can you test, without committing a wiki edit? Is that possible? <-- what are you trying to test? login session? [09:45:55] legoktm: Something like a revert and warn, for example. [09:46:20] what are you trying to test? your JS that would revert and warn? [09:47:03] legoktm: I mean, let's say you miss a semicolon. Of course you could catch that other ways, but testing it on the wiki, how could you catch it without committing it? [09:47:32] Erm, I don't follow. [09:47:43] Where is this semicolon missing? [09:47:47] In your .js code? [09:47:51] Run a JS linter.... [09:49:00] legoktm: Yeah, but let's say I put "msw" instead of "mw". [09:49:12] load it from your JS console [09:49:13] idk [09:49:47] legoktm: That's why I was asking AzaToth :p [09:50:14] :P [09:57:05] Warning: There is 1 user waiting for shell: Telmar (waiting 0 minutes) [10:02:57] a930913: either test on a test page or on a test wiki [10:08:16] Mmm, thought as much. [10:09:47] Should I make my wave wiki code simply "eval(msg)" and send all the code from the wave UI? [10:10:35] Warning: There is 1 user waiting for shell: Telmar (waiting 13 minutes) [10:12:38] never use eval [10:20:07] AzaToth: Is usually the kneejerk answer ;) [10:21:56] * a930913 -> afk. [10:24:02] Warning: There is 1 user waiting for shell: Telmar (waiting 27 minutes) [10:37:32] Warning: There is 1 user waiting for shell: Telmar (waiting 40 minutes) [10:41:31] !rq Telmar [10:41:31] https://wikitech.wikimedia.org/wiki/Shell_Request/Telmar?action=edit https://wikitech.wikimedia.org/wiki/User_talk:Telmar?action=edit§ion=new&preload=Template:ShellGranted https://wikitech.wikimedia.org/wiki/Special:UserRights/Telmar [11:10:07] legoktm: Are the permissions on your ~/.forward correct? (I. e., writeable only by user.) [11:10:19] * legoktm checks [11:11:09] -rw-rw-r-- 1 legoktm wikidev 28 Jun 4 06:53 .forward [11:11:11] so no [11:11:14] is that an issue? [11:12:19] legoktm: Yes: "legoktm@tools.wmflabs.org (generated from local-dbreps@tools.wmflabs.org) retry timeout exceeded" :-). "chmod 644 ~/.forward" should do. [11:12:54] {{done}}. sorry about that [11:13:16] No problem, you just missed the jsub spam from dbreps :-). I'll change that to "jsub -quiet" later. 
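The async/sync point AzaToth makes above — fire the follow-up request only from the "done" path of the request it depends on — is the part worth copying whatever the client. This is not the Twinkle or mw.Api code; it is just that control flow restated in Python, with hypothetical stand-ins for the actual API calls:

    # Hypothetical helpers standing in for real API requests; each returns a
    # dict shaped roughly like an action=edit reply.
    def revert_edit(page, summary):
        # ... perform the revert here ...
        return {"result": "Success"}

    def warn_user(user, template):
        # ... post {{subst:template}} under a level-2 heading on the talk page ...
        return {"result": "Success"}

    def revert_and_warn(page, user):
        # Chain the calls: the warning is only posted once the revert it
        # depends on has actually succeeded (the "done callback" pattern).
        if revert_edit(page, summary="foo").get("result") != "Success":
            return False
        warn_user(user, template="bar")
        return True

    revert_and_warn("Example page", "Example user")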
[11:57:44] !toolsdoc [11:57:48] !tooldocs [11:57:48] https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help [12:45:09] petan: ping [12:45:53] hey [12:46:09] anomie: I saw the report, will fix it soon [12:46:26] petan: ok, that's what I was pinging about [12:46:39] Also, thanks for installing the crc package [12:46:50] you know you can always use normal memcache on 11211 which is stable :> [12:47:18] That's what I am doing for now [13:07:18] Moo! [13:09:41] Don't scare off the unicorn! [13:19:46] scfc_de I think that Coren, pretending he is a cow, is just trying to attract the unicorn in order to perform something pervert later... [13:20:03] :P [13:20:15] poor unicorn [13:20:17] !unicorn [13:20:18] http://www.ascii-art.de/ascii/uvw/unicorn.txt [13:21:27] !unicorn del [13:21:27] Successfully removed unicorn [13:23:01] !uncorn is what you think that labs are like: http://static.giantbomb.com/uploads/original/1/17172/1419618-unicorn2.jpg what labs are actually like: http://img3.etsystatic.com/000/0/5177778/il_fullxfull.306517207.jpg [13:23:01] Key was added [13:23:43] !unicorn is what you think that labs are like: http://static.giantbomb.com/uploads/original/1/17172/1419618-unicorn2.jpg what labs are actually like: http://img3.etsystatic.com/000/0/5177778/il_fullxfull.306517207.jpg [13:23:44] Key was added [13:23:47] !uncorn del [13:23:48] Successfully removed uncorn [13:48:51] petan: More like http://i.ebayimg.com/t/Steampunk-Fantasy-Mechanical-Robot-Unicorn-Damask-Dressage-Art-Print-8-5x11-/00/$(KGrHqZ,!g4E2eQ7SdtNBN1pzD3epg~~_1.JPG [14:24:47] heh [14:42:39] Coren: rehi - ssh issues look at http://pastebin.com/DRjwXUxP [14:43:30] Oren_Bochman: That looks like normal behavior for connecting to an instance part of a project you are not a member of. [14:43:45] ok [14:46:50] but I am a member of moodle project which has that instance [14:47:24] also projectadmin [14:52:17] Coren: [14:58:07] Oren_Bochman, I can't log in as root on that system either… seems broken. [14:59:00] oh, wait... [15:00:06] it is online and working [15:00:07] just soemthing wrong with ssh [15:01:54] I made a new instance he-moodle-25 yesterday and I can't access that either [15:01:55] can you log into any of the instances in that project? [15:02:01] no [15:04:24] Do you mind if I reboot he-moodle-25? [15:04:32] no [15:04:44] It has never been logged into [15:04:47] Hm… maybe I don't need to. [15:04:50] Try access now? [15:05:14] wow [15:05:16] ls [15:05:48] It looked like the processes in charge of /home were hanging… I killed and restarted everything. [15:05:49] Better? [15:06:08] yes [15:06:58] I was having a similar issue moving from toollabs-login to toollbas-dev [15:07:15] Similar how so? [15:07:40] sec [15:09:12] I could not ssh from tool-login into tools-dev [15:09:17] but now it works [15:10:40] ok then :) [15:12:47] does anyone know if there are any best practices regarding the uid and gid exposed in user database names? [15:12:55] just wondering before I push source code to bitbucket with them in it [15:13:00] if I shouldn't [15:18:30] Nettrom: I think they are fixed, so you can rely on them not changing. (I don't even know where you could get "50380" from -- OpenStack API?) [15:19:35] scfc_de: me neither, just wondering if there's any security concerns about exposing them [15:19:59] because I have no idea [15:22:36] Nettrom: I don't think that knowing the uid/gid could give an attacker any advantage, especially on an open system like Tools where accounts are given out freely. 
[15:24:52] AzaToth: I am interested in adding twinkle like CSD looging to my user scripts [15:24:58] AzaToth: how do I get strarted ? [15:25:40] well, most hardcore nerds would say to read the source ツ [15:26:19] scfc_de: that's what I thought too, couldn't really think of it as critical info... thanks! [15:26:53] Oren_Bochman: https://github.com/azatoth/twinkle/blob/master/modules/twinklespeedy.js#L1174-L1232 [15:27:03] that's the log code for CSD [15:27:36] Nettrom: The usernames are arbitrary; they're generated from uidgid, but they're set in stone. Even if we change the scheme eventually, only new accounts would be affected. [15:28:33] Nettrom: They're just a convenient method of generating known unique names. [15:28:40] AzaToth: thanks I'll read it and ask if I get stucj [15:28:51] you will [15:30:49] Cyberpower678: you there? [15:31:03] Yes. But not for long. [15:31:09] gonna pm you [15:31:15] ok [15:36:06] Coren: /data/project/anomiebot/labs-completion [15:37:36] ugh. *personally* I think the default bash is already annoyingly overloaded with completions, but I see nothing wrong with the concept. :-) [15:40:27] I just got sick of typing "become ano" and having it not work. Then I did "sql en" and got annoyed enough to make a completion file. But yeah, when the completion doesn't work right (e.g. they left something off the list of extensions a command can handle) it's annoying. [15:44:02] It confuses me more when bash tries to be smart and for example ignores files, because the parameter only takes directories as arguments. But this discussion got me to read the man page: "complete -r" :-). [15:45:11] Coren: Also, I'm surprised all AnomieBOT's processes use only 0M, after a startup peak of ~200M ;) [15:45:40] anomie: That... would also confuse me. How odd. [15:45:50] anomie: And they actually work? [15:46:10] Coren: Yeah. I guess that something or other about how I run them confuses the stats collector [15:46:42] anomie: Clearly. I'd love to know how. Clearly, you're still using VM and the ulimit will still work but it's a pain if we can't see how much they are really using. [15:51:44] Coren: If it helps, the command I use to submit each job is /usr/bin/qsub -q continuous -notify -j y -o "$home/botlogs/$jname.job" -cwd -N "$jname" -hard -l "h_vmem=$memlimit" -b y ./bot-instance.pl "$botnum" "$dir" [15:53:01] Yeah, nothing odd there. Does bot-instance fork and set a pgid? [15:53:30] Nope, I took out the forking when I split it into multiple jobs [15:54:12] Unless something internal in perl forks, I suppose [15:54:37] Meh. It's a cosmetic annoyance at worse, so I'll not fret over it for now. [15:56:55] Wow, load on tools-exec-04 is 132.5% [15:59:04] anomie: Small burst, from what I can see. Not that it's super panicking; 6 working processes over 4 CPUs isn't all that bad. [15:59:49] Coren: I just idly wonder what that bot is doing that it's using -04 all to itself [16:01:42] anomie: Actually, it's just fairly IO expensive from what I can see. [16:02:24] And whatever it was doing was relatively short-lived, the load is climbing back down. [16:04:52] Actually, it's going up and down like a seesaw, but the actual process isn't doing all that much. How interesting. [16:05:33] I think he was using MediaWiki underneath. [16:07:22] Coren: I figured out what's causing the 0M thing: Job "test1" is perl -e '$0="changed \$0"; sleep(600);', and "test2" is just perl -e 'sleep(600);'. test2 shows 31M, while test1 says 0M. [16:21:08] anomie: That doesn't seem to make sense to me. 
[16:22:08] I guess something is trying to match things up using the process name, so changing the process name makes it not find the process. [16:49:44] Coren: pokie [16:49:55] got following error: http://paste.debian.net/9711/ [16:52:04] AzaToth: In meeting atm, will look at it in a bit [16:55:43] AzaToth: IIRC, this is not Tools-specific. [16:55:52] ok [17:01:37] Coren: Turns out to be simpler, actually. At one point gridengine does a sscanf on the contents of /proc//stat to load utime, stime, and vsize. And having spaces in $0 makes it no longer match the format (%*s only matches non-whitespace). So I suppose it would also break if someone's bot script were named "foo bar.pl". [17:02:48] AzaToth: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=706284 seems to be the case here. [17:03:22] I see [17:03:32] I've not noticed it on my local debian [17:04:03] Do you have an entry with multiple options in /etc/*/sources.list.d? [17:42:51] petan: soooooo.... [17:42:54] Coren: ^^ [17:42:59] hi [17:43:01] a log daemon written in mono? :) [17:43:14] why not :P [17:43:22] Ryan_Lane: Ohai. ^^ to what? [17:43:28] petan: Because mono sucks? :-) [17:43:31] not that [17:43:39] because petan is the only one who knows mono [17:43:41] and that's a bad idea [17:43:49] wm-bot being mono is ok [17:43:50] that is not idea that is fact :P [17:43:59] addshore knows c# as well [17:43:59] because it's not a /critical/ tool [17:44:00] :P [17:44:18] it would very much suck is wm-bot went away, of course [17:44:30] idk... it seems to me that wm-bot is more critical than log daemon which nobody uses :P [17:44:36] but: 1. we shouldn't write our own log daemon. there's tons of them [17:44:50] and: 2. it definitely shouldn't be in mono :) [17:45:00] yes YuviPanda is working on implementation of rsyslog there [17:45:04] we should really use logstash [17:45:12] I am curious how much successful is he going to be [17:45:12] logstash? [17:45:13] * YuviPanda looks [17:45:17] petan: Smartass. :-) But I am also wondering a little why you took the time to write a log daemon; when you talked about making a central logging vm, I expected you'd be using something a little more traditional. :-) [17:45:41] we've looked at it in the past [17:45:48] Coren because I couldnt find anything that is suiting what we need [17:45:48] and the openstack folks are now using it as well: http://logstash.openstack.org/# [17:45:48] hmm, *I* know C# too... [17:45:51] I like splunk myself, but it's not FOSS. :-) [17:46:01] Coren: yeah [17:46:22] YuviPanda: no one on the labs team does, and we're ultimately responsible :) [17:46:39] sure [17:46:52] I don't have anything against mono [17:47:07] * Coren knows enough C# to know he'd rather it be in Python. Anyone who knows him can understand the magnitude of that statement. 
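anomie's diagnosis above — gridengine sscanf()s /proc/<pid>/stat, and %*s stops at whitespace, so a space in $0 (or in a script name like "foo bar.pl") shifts every later field — is easy to avoid when parsing that file yourself. A small Python check that splits on the last ')' so the command name can contain anything:

    import os

    def stat_fields(pid):
        # Field 2 of /proc/<pid>/stat is the command name in parentheses and may
        # contain spaces or even ')'; splitting on the *last* ')' keeps the
        # remaining fields aligned, unlike naive whitespace splitting.
        with open("/proc/%d/stat" % pid) as f:
            data = f.read()
        rest = data.rpartition(")")[2].split()
        # Per proc(5), counting pid and comm as fields 1 and 2:
        # utime is field 14, stime is 15, vsize is 23.
        return {"utime": int(rest[11]), "stime": int(rest[12]),
                "vsize": int(rest[20])}

    print(stat_fields(os.getpid()))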
:-P [17:47:07] logstash uses elasticsearch [17:47:08] logstash look cool why it didnt jump as a result in google search [17:47:17] and can filter at many levels [17:47:25] and has tagging, and hinting [17:47:32] the idea I had in mind was that a tool could send stuff to local syslog, which will forward someplace, and be aggregated tool-wise somewhere searchable [17:47:35] Coren: :D [17:47:45] YuviPanda: that works with logstash too [17:47:49] yeah, I saw [17:47:55] I hadn't heard of logstash before [17:48:00] and the kibana frontend doesn't *suck* [17:48:06] though it isn't wonderful either [17:48:09] the only thing I want is to make sure that the tool isn't something we build ourself :) [17:48:15] +1000 [17:48:26] logstash is written in java? :D [17:48:39] no [17:48:42] worse [17:48:42] or rather, where we can fix bugs by just telling someone else rather than doing it ourselves [17:48:43] :D [17:48:44] I'm looking for a query to get me a user_id given the username, but case insenstive [17:48:51] but that is somewhat fast [17:48:51] petan: jruby! :D [17:48:56] o.o [17:49:06] so... that is something what people in labs master right? :P [17:49:17] that said, we shouldn't need to modify logstash or kibana [17:49:45] YuviPanda: yep. that. [17:49:51] ... jruby? [17:49:53] worst case we write some ruby [17:49:58] well, can't be as bad as Rainerscript [17:50:04] and you guys realize the magnitude of me saying that [17:50:12] * YuviPanda does [17:50:22] Ryan_Lane: have you seen the memcached C# implementation? :D [17:50:32] FutureTense: Mysql isn't your friend for this, but there are shortcuts. [17:50:34] similarly with me willing to install something ruby ;) [17:50:37] FutureTense: For one, you know the first character is uppercased, so you can restrict the query this way. [17:50:42] YuviPanda: hahaha. really? [17:50:46] YuviPanda: why would someone do that? [17:50:50] petan: ^ [17:50:53] you mean the actual daemon? [17:50:56] yes [17:51:05] that's horrifying [17:51:05] it's currently running :D [17:51:09] on tools-mc, I think [17:51:11] Coren: is there an api call that would work? [17:51:12] o.O [17:51:20] along with normal memcached [17:51:25] you guys are scaring me now [17:51:27] for some definition of 'normal' [17:51:43] have I told you about the C# supervisord replacement? [17:51:51] lol [17:52:02] if you guys are patching memcache, you should make your changes configurable and upstream them [17:52:07] hey c# memcached is actually something that people will love in future [17:52:29] YuviPanda: I think supervisord is useless anyway [17:52:30] Ryan_Lane: agreed, which is why IIRC there's no pointers to it. It's just me, petan and legoktm using it atm. [17:52:31] that version is far more configurable than normal memcahe [17:52:55] anomie did some testing as well [17:52:56] just use upstart/systemd/launchd [17:52:59] Ryan_Lane: I doubt memcached will accept patches for this. [17:53:00] and found a bug :) [17:53:09] YuviPanda: why's that? what did you change? [17:53:13] Ryan_Lane: besides, memcached has been untouched for years now [17:53:14] you just disabled STAT, right? [17:53:18] Ryan_Lane memcached comes with absolutely no usable authentication [17:53:19] YuviPanda: that's not true [17:53:20] Ryan_Lane: yeah, pretty much. [17:53:24] oh? 
[17:53:32] YuviPanda: I don't see why they wouldn't accept a patch for that [17:53:33] we disabled it in normal version and restricted in c# version [17:53:41] so it works but isnt dangerous [17:53:53] anyway, c# version comes with separate memory pools per user [17:54:04] doing this in c would mean... months of programming [17:54:08] Ryan_Lane: hmm, last commit 5 months ago [17:54:09] heh [17:54:19] * Ryan_Lane shrugs [17:54:26] that mean a multiple hashtables for each user [17:54:32] Ryan_Lane: I'll try to do that, putting it behind a config param shouldn't be too hard [17:54:34] is the mono version packaged? [17:54:39] petan: ^ [17:54:44] no, it isnt even finished yet [17:54:49] it is like pre-alpha [17:54:52] hahahaha [17:54:52] FutureTense: Not as far as I know. I know the query action demands case matched usernames. [17:54:56] petan: also the Mono version's authentication isn't even proven to work with *anything* :P [17:55:03] indeed [17:55:07] it is a new feature in protocol [17:55:08] why don't we just use something other than memcache? [17:55:14] Ryan_Lane: suggestions? [17:55:17] there is something else? [17:55:19] :P [17:55:23] there's lots of things [17:55:23] Ryan_Lane: Redis doesn't do per user type things either [17:55:27] it's just a key/value [17:55:38] but [17:55:41] any key/value with a memcache compliant protocol will work [17:55:42] memcache is about performance [17:55:51] it needs to be horribly fast [17:56:00] which apparently c# version is, which surprised me [17:56:03] in our use? probably not as much as you'd think [17:56:06] it is even faster than c on really big data [17:56:36] petan: the tests run on it so far might not have triggered a GC at all yet [17:56:42] YuviPanda: memcache doesn't do per-use either [17:56:45] *user [17:56:47] Ryan_Lane: indeed. [17:56:52] a pre-alpha version written in C# does [17:56:54] Ryan_Lane: hence we took out stat [17:57:09] Ryan_Lane: hmm, so I suppose anything that doesn't have an equivalent of stat should be 'good enough' [17:57:14] YuviPanda it does [17:57:24] the other alternative (my preferred one!) is to tell people 'do not put sensitive data there!' [17:57:26] YuviPanda GC run only when cpu isnt needed or when system is getting out of memory [17:57:34] YuviPanda: what are you using as a key for users? [17:57:54] me? I'm just using it to cache edit counts. [17:57:54] users are expected to prepend all of their keys with something? [17:57:54] legoktm was using it for something similar [17:57:58] we've no guidelines yet. [17:58:07] agian, this is why this is just running silently at one place [17:58:10] heh [17:58:20] rather than with a link at Tools/Help [17:58:21] anyway - there is still FLUSH_ALL [17:58:24] let's try to find a key/value that has tenancy [17:58:31] which effectively removes whole cache for everyone [17:58:32] mongodb? [17:58:34] that is IMHO problem as well [17:58:37] (it does have tenancy) [17:58:47] does it have a memcache interface? 
[17:58:56] petan: yeah [17:59:10] nope [17:59:12] I guess a memcache interface isn't totally necessary [17:59:19] Ryan_Lane: anything with tenancy can't have memcached interface [17:59:23] right [17:59:23] (or API) [17:59:27] since the API doesn't support it [17:59:30] indeed [17:59:33] MongoDB is packaged, etc [17:59:35] yep [17:59:39] and we also use it in production in our cluster [17:59:42] mongo may be an acceptable optuion [17:59:45] *option [17:59:46] +1 to Mongo [17:59:51] plus there are enough bindings in everything [17:59:51] YuviPanda: I wouldn't say it's in production :) [17:59:54] and it's fast enough for a cache [18:00:10] it's being used by the analytics folks on a single box [18:00:11] and we also use it in 'production' in our cluster [18:00:13] better? :) [18:00:17] heh [18:00:35] I'd say let's make a proposal and send it out to the list [18:00:40] and get ideas back from people [18:00:59] what would *really* be nice would be a solution that could work for all projects [18:01:28] anything that has tenancy would probably work for all projects [18:01:50] yeah [18:02:03] Ryan_Lane: ooh, also - I can't seem to subscribe to labs-l! [18:02:09] no? [18:02:14] what issue do you get? [18:02:19] I've tried twice over the last 4 days, and it neither sent me a confirmation link nor did it send me new emails [18:02:28] is it set to moderator approval? [18:02:39] nope [18:02:42] let me just subscribe you [18:02:47] send me your email in a pm [18:02:59] it may be hitting your spam folder? [18:03:01] anyway - mediawiki doesnt run with mongodb and without memcache it is horribly slow [18:03:07] Ryan_Lane: PM'd. sent. [18:03:12] people who want to run mediawiki on tools will need memcache [18:03:14] petan: writing a mongo driver likely isn't hard [18:03:16] petan: again, I think that is a feature :) [18:03:23] not a bug [18:03:23] (mediawiki on tools) [18:03:26] eh? [18:03:26] (or lack thereof) [18:03:42] as in, 'people not being able to run mediawiki on tools' [18:03:47] we have redis, memcache, and a number of other implementations [18:03:57] YuviPanda but they are supposed to be able [18:04:15] and they are - it will just incredibly eat resources without memcache [18:06:02] Ryan_Lane: i'm watching the logstash video. Hilarious so far [18:07:34] petan: perhaps we could start another similar thing that lets people run medaiwiki... [18:07:40] on labs [18:07:43] without having to do things [18:08:02] maybe yes, but... why? [18:08:12] what is wrong on tools? [18:09:32] it isn't meant to run large, heavy projects... such as mediawiki? [18:09:42] I was thinking it is? [18:09:45] IMO, that is. Coren or Ryan_Lane might have other opinions. [18:09:47] FutureTense: "select user_id, user_name from user where user_name like 'C%' and strcmp(user_name, 'coren' collate utf8_general_ci) =0" is /somwhat/ efficient [18:09:58] petan: even with memcache it'll eat a ton of resources [18:10:01] petan: because of apc [18:10:04] oh mediawiki? oh yeah. anything that can't run without an opcode cache and memcached is 'too heavy' [18:10:10] and if you aren't using apc it'll be slow [18:10:17] YuviPanda: Fundamentally, there's nothing wrong with running mediawiki on tools; I'd rather allow it for the simple cases if possible. 
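Coren's query for FutureTense above leans on the trick sketched here: MediaWiki usernames always start with an uppercase letter and the column compares case-sensitively, so an index-friendly LIKE on the first character narrows the scan before the case-insensitive comparison. A rough Python version; the connection details are assumptions, and LOWER(CONVERT(... USING utf8)) is just one way to spell the case-insensitive compare:

    import os
    import MySQLdb

    def find_user(cur, name):
        prefix = name[0].upper() + "%"   # the first letter is always uppercased
        cur.execute("SELECT user_id, user_name FROM user"
                    " WHERE user_name LIKE %s"
                    "   AND LOWER(CONVERT(user_name USING utf8)) = LOWER(%s)",
                    (prefix, name))
        return cur.fetchall()

    # Assumed replica naming and ~/.my.cnf credentials, as in the earlier sketch.
    conn = MySQLdb.connect(host="enwiki.labsdb", db="enwiki_p",
                           read_default_file=os.path.expanduser("~/.my.cnf"))
    print(find_user(conn.cursor(), "coren"))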
[18:10:22] no matter what it'll eat a shitload of memory on the systems [18:10:29] Coren: agreed [18:10:40] I'd prefer it not be the normal MW development environment, though [18:11:03] we need to make this a project, really [18:11:19] Ryan_Lane: My project B is a unified "deploy-a-mediawiki" self-serve thing for the more advanced devs who want to, say, fiddle with extensions and stuff. [18:11:20] (yup) [18:11:27] Coren: \o/ [18:11:34] Coren: that will be awesome [18:11:42] Coren: you should check out the vagrant stuff ori-l did [18:11:48] I haven't had time to do tools or mw. I'd be super happy about it [18:11:53] I am looking at mongo... are you sure it is comparable with memcache? it is a database... that means it is very likely much slower? does it use memory hashtables as memcache? [18:11:55] YuviPanda: I fully intended to base myself on it. :-) [18:11:55] YuviPanda: we have a puppet class that it's partially based on [18:12:02] nice! [18:12:22] we've had a puppet class for this for a while, written by andrewbogott [18:12:24] petan: it has tenancy, and is a good / fast key value store [18:12:27] Ryan_Lane: Something along the lines of service group -> wiki. :-) [18:12:33] Coren: yep. that would be nice [18:12:47] maybe making a service group could trigger an instance creation? [18:12:51] * Coren nods. [18:13:05] this is one of the reasons I'm glad we have our own interface :D [18:13:14] But why an instance? I should expect any one instance could host at least a couple wikis. [18:13:22] well, it depends... [18:13:34] shared memcache is an issue [18:13:40] apc eats a ton of memory with MW [18:14:20] Coren: we should get a gerrit replica on the NFS servers soon [18:14:20] I suppose; though IMO memcache is more likely to be a hinderance than help in a dev environment. [18:14:24] Coren: you want memcache [18:14:38] because otherwise you won't notice cache invalidation issues [18:14:53] which are incredibly common [18:15:59] you know… in selinux you could have tagged ports and daemons [18:16:06] I wonder if apparmor can do the same [18:16:30] bleh. it all runs under the same apache [18:16:33] nevermind [18:16:34] Docker? [18:16:45] I'd prefer to use salt than docket [18:16:49] *docker [18:17:09] salt? [18:17:14] that's... such an ungooglable name [18:17:14] saltstack [18:17:23] ah better [18:17:27] yep :) [18:18:24] Ryan_Lane: I'd still set things up halfway between tools and pure labs; no root and standardized environment. If you need root, get a project and be your own sysadmin. :-) [18:18:25] YuviPanda: subscribed you [18:18:31] thanks, Ryan_Lane [18:18:33] Coren: agreed [18:18:54] YuviPanda: yw [18:20:33] petan: any reason icinga is down? [18:20:43] is it? [18:20:45] :o [18:20:59] nagios bot is here [18:20:59] https://bugzilla.wikimedia.org/show_bug.cgi?id=49414 [18:21:15] nagios bot is ircecho, isn't it? [18:21:24] also, if it *is* ircecho it may need to be upgraded :) [18:21:34] netsplit issues in the bot are solved [18:21:59] ok I will restart the daemon, let see [18:23:01] !log nagios petrb: restarting nagios [18:23:03] Logged the message, Master [18:24:27] petan: why'd you mark fix the vim config as WONTFIX? 
[18:24:41] funny enough, the majority of the ops team hates our vim config too [18:24:47] I thought you arent going to fix it [18:24:50] aha [18:25:03] I brought the importance level down ;) [18:25:11] ok [18:25:19] the importance levels apparently mean something [18:25:32] I don't remember where they are defined [18:25:54] petan: http://www.mediawiki.org/wiki/Bugzilla/Fields#Importance [18:25:54] Ryan_Lane: if we setup logstash, should we put apache logs to it too? [18:26:07] YuviPanda: we could, yeah [18:26:11] assuming they are filtered first [18:26:20] for IP anonymization? [18:26:30] in the way coren is currently filtering for tools [18:26:31] YuviPanda not just that [18:26:43] YuviPanda check /data/project/.system/logs/public_apache2 [18:26:50] these are filtered [18:27:06] also I suppose even for tools - logs would need to be private [18:27:08] (using filtering tool written in c#) :DD [18:27:08] (sometimes?) [18:27:44] petan: Yeah, I could anonymize with logstash, but error logs remain problematic. [18:27:50] but if you manage to write a regex or awk script for this feel free to replace it [18:28:04] Coren: why exactly? [18:28:35] YuviPanda: Because error logs dump stderr, there is no way to know *what* gets in there, or indeed which tools it comes from with certainty. [18:28:40] right [18:28:48] did toolserver solve this? [18:29:09] YuviPanda: Toolserver had... lax privacy standards we cannot emulate (nor would we want to) [18:29:17] ah! :) [18:29:37] yeah, this would be a problem for custom tools too [18:29:42] Remember one of our objectives is to allow tools to use, ultimately, the same privacy policy the projects do. [18:30:27] Coren: actually, the error_logs might not be problematic in that case [18:30:47] Ryan_Lane: How do you figure? [18:30:47] Coren: because you could do the filtering at the logstash level, before people can see it [18:30:58] or at the elasticsearch level [18:31:16] so that it isn't even indexed [18:31:25] Ryan_Lane: But error logs are completely freeform. For all we know, the tool might stack trace with user credentials or worse. [18:31:31] ugh. right [18:31:50] Ryan_Lane: I mean, sure, /.*/d would work... :-) [18:31:54] Coren: if we can figure out which tool is producing which we can direct it to their project directory... [18:31:54] heh [18:32:07] (except I suppose that isn't that easy) [18:32:17] YuviPanda: Apache 2.2 doesn't even guarantee the error logs aren't interleaved. [18:32:19] if tool writers really want to collect private info like that, they can [18:32:30] YuviPanda: Apache 2.4 gives ErrorLogFormat which would work. [18:32:43] let's move to 2.4! [18:32:45] Coren: we're looking at 2.4 for ssl anyway [18:32:46] (in X years?) [18:33:01] it has a distributed ssl cache [18:33:10] YuviPanda I already downloaded 2.4 packages [18:33:12] Ryan_Lane: Hm. I've taken a look at the dependency graph, and it's non trivial. [18:33:17] it is already in our repository [18:33:19] on tools [18:33:31] which would let us use weighted round robin rather than source hash for the lvs scheduler [18:33:32] Ryan_Lane: But doable. I'd need at least a few weeks to make it reliably work. Is it worth the effort? [18:33:59] probably. I think 2.4 is a pretty good option for ssl [18:34:11] and we want to move ssl to all of the frontends [18:34:55] thanks to things like PRISM and the China issues I think I'd like to move forward with ssl again soon [18:34:56] Ryan_Lane: I need to also bump PHP for this, and ~ 50 packages all told to backport to precise. 
I was unwilling to do this unless it had wider applicability than Tool Labs, but if it turns out to be useful for prod that shifts the balance. [18:35:02] ugh [18:35:14] bump php to whay? [18:35:15] what? [18:35:18] 5.4? [18:35:33] that's a much more major change, if so [18:36:53] It's 5.4 by default but, IIRC, I can go as "low" as 5.3.9 [18:37:52] Has to be rebuilt against the new apache libs though. [18:41:44] Ryan_Lane: That said, 5.4 is known to run Mediawiki right so it might be worthwhile to "use" the apache upgrade to bump up PHP. [18:42:07] heh. well, that means a bump in production, too [18:42:23] you only really need 2.4 for the proxies, right? [18:43:19] oh, right. need it for error logs [18:43:21] * Ryan_Lane grumbles [18:44:29] Probably the best thing at this junction is to set up a project for testing the setup with a backport. [18:45:02] there's want to move to HHVM, I'm not sure if dev is going to want to risk 5.4 right now [18:45:02] Where every prod package is synced up and mutually compatible. [18:45:29] and 5.4 doesn't have apc support yet, right? [18:45:31] Hm. Can do both; it's no harder. Whether it's 5.3 or 5.4 they have to be rebuilt anyways. [18:46:16] "While many people are experiencing no problems at all with the current SVN releases, there is still the odd report of edge cases from people under certain configurations, or under heavy load." [18:46:50] "APC is at the point now for 5.4 where I don't think there are any more edge cases than we have in 5.3. Neither is perfect, but it is close enough for the majority of sites." -- Rasmus Lerdorf [18:47:56] heh [18:48:14] 5.5 uses Zend opcache, so *that* is going to be a major upaveal. [18:50:41] andrewbogott: maybe you want to add those robots.txt rules to proxy configurations in puppet [18:59:38] liangent: yeah, good idea. [19:00:19] Ryan_Lane: catgraph needs to be accessible from the outside via tcp. can you allocate a public ip for the sylvester host please? [19:01:12] or maybe a subdomain is enough, if it allows any tcp ports to be forwarded, not just port 80 [19:01:21] one sec [19:01:26] the proxy only forwards 80 [19:01:26] ok :) [19:02:06] ideally in the future we'll have something that just forwards ports in a self-service way, but we don't right now :) [19:02:20] i need 6666 and 8090 by default, but it would be best if all ports were forwarded, or at least a range, so i can set up instances with testing code on other ports [19:02:47] did you add those to a security group, and create the instance with that security group? [19:02:58] JohannesK_WMDE: With a public IP, you can just adapt the security group to open the ports. [19:03:03] if not you'll either need to rebuild your instance, or you'll need to add it to the default security groups [19:03:16] that really needs to be fixed in nova. it's so annoying [19:03:19] andrewbogott: btw I can't ssh to the proxy instance now - don't know why [19:03:40] to which, the old one or the one I just built? [19:04:23] JohannesK_WMDE: I've upped your floating ip quota to 1 [19:04:32] JohannesK_WMDE: you can allocate an IP via "Manage addresses" [19:04:33] the old one [19:04:35] Ryan_Lane: thanks [19:04:40] then associate it with the instance [19:04:43] and add a dns name to it [19:04:52] stuck at debug1: Offering RSA public key: /home/liangent/.ssh/id_rsa_wmflabs [19:04:56] liangent: OK, I will look in a moment. 
[19:05:05] from other labs instances you'll need to use the internal IP address, not the public one [19:05:15] I tried to reboot it but no change [19:07:06] JohannesK_WMDE: you'll need to modify the security groups too [19:07:19] is something amiss on the servers? [19:07:19] (they are firewall rules) [19:07:28] FutureTense: please clarify :) [19:07:52] a query that was relatively fast a few hours ago is slow now [19:12:49] liangent, do you know where robots.txt should go for the nginx proxy? [19:13:08] Or if that's even possible? [19:13:30] …now that I think about it, how would a spider read a file on the proxy? [19:15:16] Ryan_Lane: everything works. cool. [19:15:17] andrewbogott: you'd serve robots.txt from the proxy, rather than proxying it [19:15:21] JohannesK_WMDE: great [19:15:36] Ah, so I need a rule for that particular file… [19:16:35] yep [19:16:45] and it needs to exist on the proxy's filesystem somewhere [19:17:07] * anomie reads scrollback [19:19:57] petan: I found a bug in memcached too, although not as interesting as the one in sharp-memcached. Start memcached, suspend laptop, wake laptop. Memcached is now out of sync with real time by however long the laptop was suspended, so absolute expiry times don't work right anymore. [19:27:03] Eew. Logstash is in ruby? [19:29:50] liangent, I'm testing this now. Look right to you? [19:35:22] Coren: it's actually jruby, I think? :P [19:36:38] Eew! Eeew! [19:36:57] andrewbogott: it now works [19:37:02] what was wrong? [19:37:57] Coren: if only it were in... Groovy! [19:38:01] or better yet, Grails! [19:38:08] liangent: Just a gluster problem; /home wasn't working [19:38:24] Let's ask Ironholds to rewerite it in R [19:38:49] Coren: definitely. And then refactor it down to half its size in 6 months :) [19:39:01] YuviPanda, Coren, I'm pretty sure that subbu is one of the core jruby folks. Maybe he can convince you that it's a good idea [19:39:13] andrewbogott: what's the normal way to fix it then [19:39:32] liangent, you have to ask my or ryan. And nag us to migrate away from gluster. [19:40:35] and for robots.txt, I've already implemented it on instance-proxy.pmtpa.wmflabs [19:40:51] puppetized? [19:40:54] andrewbogott: oh, I'm pretty happy with jruby / java / ruby tools :) I just like 'eew!'ing out coren :) [19:42:02] andrewbogott: but super-cool to know that subbu is one of the core folks there! [19:42:13] andrewbogott: not puppetized and I'm not quite familiar with puppet [19:42:47] liangent: OK. I'm trying to build a new pmtpa-proxy directly from puppet. I think it just about works [20:00:18] Coren: hahaha. i mentioned it was jruby earlier :) [20:00:23] and kibana is ruby as well [20:00:49] andrewbogott: I started work on migrating the homedirs on friday [20:01:07] great! Let me know what I can do to help [20:01:23] Ryan_Lane: I don't mind different languages as long as there's an upstream to forward bugs to. :-) [20:01:31] andrewbogott: I'm just doing syncs right now [20:01:56] andrewbogott: we'll need to do another sync, then set everything read-only, then do another sync and switch to nfs [20:02:11] And then reboot everything? [20:02:21] or restart autofs on them [20:02:37] andrewbogott: a shit-ton of systems are running puppetmaster::self and aren't updating their repos [20:02:46] so, salt isn't working on about 80 of them [20:02:56] it would be nice if we could get their repos up to date [20:03:00] <^demon> mod_rewrite sucks. 
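A note on anomie's memcached observation above (the daemon's clock stops advancing during a suspend, so absolute expiry timestamps outlive their intended lifetime): the protocol treats expiry values up to 30 days as seconds relative to the server's own clock, so keeping expiries relative at least makes them consistent with that clock rather than with wall time. A tiny sketch using the python-memcached client; the tools-mc endpoint is the one mentioned earlier in the log and the key is made up:

    import memcache

    mc = memcache.Client(["tools-mc:11211"])

    # Relative expiry (<= 30 days): counted on the server's clock, so it stays
    # self-consistent even if that clock has drifted after a suspend.
    mc.set("editcount:Example", 12345, time=300)

    # Passing a large value is taken as an absolute unix timestamp, which is the
    # case anomie describes misbehaving once the server's clock lags real time.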
[20:03:02] Hm… I wonder how many of those are mine :) [20:03:13] back in a bit [20:43:57] https://bugzilla.wikimedia.org/reports.cgi?product=Wikimedia+Labs&datasets=NEW&datasets=RESOLVED [20:43:58] \o/ [20:46:15] https://bugzilla.wikimedia.org/chart.cgi?category=Wikimedia+Labs&subcategory=Infrastructure&name=1289&label0=All+Closed&line0=1289&label1=All+Open&line1=1288>=1&labelgt=Grand+Total&datefrom=&dateto=&action-wrap=Chart+This+List [20:46:19] I guess that's a better chart [21:14:43] * anomie does not like the output of qstat. [21:14:43] * anomie writes /data/project/anomiebot/qstat. [21:19:36] Warning: There is 1 user waiting for shell: Richregel (waiting 0 minutes) [21:24:19] ryan_lane, I'm in the process of emailing everyone who has a puppetmaster::self instance. I'll clean things up after they've had a day or two to respond. [21:30:58] andrewbogott: awesome. thanks! [21:31:08] I'm about to push in an interesting changeset :) [21:31:17] (resize support) [21:31:46] <^demon> Ryan_Lane: I have 2 2-line puppet changes. [21:31:57] ^demon: git is too hard. I can't help you [21:32:29] <^demon> :( [21:32:40] add me as a reviewer ;) [21:33:09] Warning: There is 1 user waiting for shell: Richregel (waiting 13 minutes) [21:33:41] <^demon> Bah, don't merge. [21:33:42] <^demon> Mistake. [21:34:19] ok [21:35:34] <^demon> Ok, fixed. [21:46:35] Warning: There is 1 user waiting for shell: Richregel (waiting 27 minutes) [22:00:10] Warning: There is 1 user waiting for shell: Richregel (waiting 40 minutes) [22:13:40] Warning: There is 1 user waiting for shell: Richregel (waiting 54 minutes) [22:27:19] Warning: There is 1 user waiting for shell: Richregel (waiting 68 minutes) [22:40:48] Warning: There is 1 user waiting for shell: Richregel (waiting 81 minutes) [22:54:23] Warning: There is 1 user waiting for shell: Richregel (waiting 95 minutes) [23:07:53] Warning: There is 1 user waiting for shell: Richregel (waiting 108 minutes) [23:20:10] andrewbogott: regarding your email about the puppetmaster thing, if you just "updating the puppet rep in /etc/puppet so that things get synchronized", nothing should break right? [23:20:41] shouldn't. If you have local changes I'll merge them or stow them in a branch. [23:20:55] ok, sounds good then :) [23:21:23] Warning: There is 1 user waiting for shell: Richregel (waiting 122 minutes) [23:34:52] Warning: There is 1 user waiting for shell: Richregel (waiting 135 minutes) [23:45:33] Can you get blocks from the rep? [23:48:22] Warning: There is 1 user waiting for shell: Richregel (waiting 149 minutes) [23:56:04] a930913: rep? [23:57:27] legoktm: Replication database. [23:57:30] yes [23:57:33] query the ipblocks table [23:58:06] legoktm: And userblocks? [23:58:13] same table [23:58:17] Kk. [23:58:20] Ta. [23:58:37] just poorly named :P [23:59:04] :p
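Following up legoktm's pointer above: the ipblocks table on the replicas covers both IP blocks and blocks of registered accounts (ipb_user is 0 for IP blocks, otherwise the blocked user's id). A short Python sketch; the table and column names are standard MediaWiki schema, while the replica host/database and ~/.my.cnf credentials are assumptions as in the earlier snippets:

    import os
    import MySQLdb

    conn = MySQLdb.connect(host="enwiki.labsdb", db="enwiki_p",
                           read_default_file=os.path.expanduser("~/.my.cnf"))
    cur = conn.cursor()
    # Look up current blocks for an IP or for a named account in one query.
    cur.execute("SELECT ipb_address, ipb_user, ipb_by, ipb_expiry, ipb_reason"
                " FROM ipblocks"
                " WHERE ipb_address = %s"
                "    OR ipb_user = (SELECT user_id FROM user WHERE user_name = %s)",
                ("192.0.2.1", "Example user"))
    for row in cur.fetchall():
        print(row)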