[00:08:37] legoktm: Around? Some local-afch-updater grid jobs are failing because "can't open output file "/afch-updater.out": Permission denied". Perhaps $HMOE/afch-updater.out instead of $HOME/...? [00:09:22] yeah i noticed that last night, busy right now, ill fix it in an hour or so [00:20:24] legoktm: No problem, it just prevents your jobs from running :-). [00:55:03] What's up with tools-login [00:55:05] Connection to 208.80.153.224 timed out while waiting to read [00:55:11] can't ssh into it [00:55:21] Coren: Ryan_Lane [00:56:00] ssh'ing into non-toollabs instances via bastion works fine [01:56:55] Krinkle: I can't log into tools-login.wmflabs.org either, but in a session on tools-dev from before, I can access the filesystem, so NFS seems to be fine. [01:57:56] scfc_de: seems to be stuck at the keys phase. [02:00:04] YuviPanda: Are the directories beneath /public/keys/$USER supposed to be non-empty? [02:00:17] I *think* not, but I'm not sure [02:03:02] Then what's their purpose? :-) [02:03:50] Ah, okay, now I see some files. [02:04:17] /public/keys/scfc/.ssh/authorized_keys, /public/keys/scfc/.bash_logout, /public/keys/scfc/.bashrc, /public/keys/scfc/.profile [02:04:28] Still can't log into tools-login, though. [02:04:30] yeah [02:04:36] still stuck at that part [02:06:36] I can log into tools-webproxy.pmtpa.wmflabs (as member of local-admin). [02:07:13] Krinkle: And tools-dev.wmflabs.org works perfectly. [02:07:34] ➜ ~ ssh -v tools-login.pmtpa.wmflabs [02:07:36] doesn't work [02:08:27] http://ganglia.wmflabs.org/latest/?r=hour&cs=&ce=&m=load_one&s=by+name&c=tools&h=tools-login&host_regex=&max_graphs=0&tab=m&vn=&sh=1&z=small&hc=4 looks like tools-login is down. [02:09:18] petan's terminator daemon killed gmond last time, perhaps this time something more vital? [02:09:39] Anybody having a session to tools-login? Otherwise I would reboot it. [02:10:06] hehe, that's possible [02:10:08] restart [02:11:43] !log tools Reboot tools-login, was not responsive [02:11:45] Logged the message, Master [02:12:20] scfc_de: should be down [02:14:36] Login to tools-login works again. [02:14:59] :) [02:16:26] Jul 29 00:55:16 tools-login terminatord: System is out of memory, only 104464384 bytes remaining, killing random process [02:17:06] "killing random process" -- I really think we should reconsider using standard, proven Linux OOM killer :-). [02:18:07] Oh, apparently the Linux one killed another process even before. [02:18:38] 'random' process?! [02:18:54] x.x why is it even running with that much permissions there.... [02:18:54] gah [02:19:13] Jul 29 00:53:29 tools-login kernel: [888046.683106] puppet invoked oom-killer: gfp_mask=0x200da, order=0, oom_adj=0, oom_score_adj=0 [02:19:24] (That's Linux.) [02:19:50] how much was uptime, btw? [02:20:58] Jul 29 00:53:51 tools-login kernel: [888046.697824] Out of memory: Kill process 13686 (perl) score 106 or sacrifice child [02:21:09] Jul 29 00:53:51 tools-login kernel: [888046.698539] Killed process 16148 (sh) total-vm:4400kB, anon-rss:96kB, file-rss:508kB [02:21:20] Very descriptive: "perl", "sh". [02:21:28] Uptime, let's see ... [02:22:39] syslog.10.gz:Jul 18 18:13:00 tools-login kernel: imklog 5.8.6, log source = /proc/kmsg started. [02:22:57] So -- 11 days. [02:23:11] not bad [02:23:14] Or rather 10.something. [02:23:19] or bad, depending on how you look at it :) [02:23:31] also I just got an 'add two numbers' program done on INTERCAL! 
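For context on the exchange above about terminatord picking a "random process" versus the kernel's scoring: the Linux OOM killer ranks processes by /proc/<pid>/oom_score, and that ranking can be inspected directly. A minimal sketch, purely illustrative and not how terminatord itself works:

```python
# List the processes the kernel OOM killer currently considers most killable,
# by reading /proc/<pid>/oom_score (higher score = more likely to be chosen).
import os

def oom_candidates(top=10):
    scores = []
    for pid in os.listdir('/proc'):
        if not pid.isdigit():
            continue
        try:
            with open('/proc/%s/oom_score' % pid) as f:
                score = int(f.read())
            with open('/proc/%s/comm' % pid) as f:
                name = f.read().strip()
        except (IOError, OSError):  # process exited or is unreadable
            continue
        scores.append((score, int(pid), name))
    return sorted(scores, reverse=True)[:top]

if __name__ == '__main__':
    for score, pid, name in oom_candidates():
        print('%6d  %6d  %s' % (score, pid, name))
```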
[02:23:32] \o/ [02:24:28] terminatord killed gmond before: "Jul 29 00:51:07 tools-login terminatord: Most preferred process has score 29 : 633 gmond with usage of memory: 6776KB with priority 19 owned by user (id: 999) killing now" [02:25:49] YuviPanda: Let's rewrite memcached in INTERCAL! :-) [02:26:05] scfc_de: :D sure why not... [02:26:24] scfc_de: let's also rewrite ssh while we're at it. It's clearly old and hence shitty and useless and not flexible enough [02:27:00] scfc_de: though to be fair I wish there was a mosh deamon of sorts so we could run it in a more efficient way [02:27:54] Waiting Jobs: 14 Running Jobs: 89 [02:28:58] YuviPanda: Isn't a mosh daemon installed? I remember Coren doing that. [02:29:07] there's a mosh deamon? [02:29:10] I wasn't aware of that [02:29:14] if so then yah, that's good [02:29:39] But I bet it's not in INTERCAL :-). [02:29:51] yet [02:30:04] terminatord is very polite, though: "Your job gmond (633 using 999 bytes of memory) on server tools-login was killed, because the system didn't have enough of free operating memory (only 6776 bytes of memory was remaining in moment) and Your process was one of most resource expensive jobs. This action was done automatically to prevent whole server from dying, and is no way Your fault. System administrators were notified about this [02:30:04] problem and will resolve the issue soon. I am sorry and I wish you a pleasant day. Your terminator daemon" [02:30:36] user friendly, yeah. [02:30:44] scfc_de: https://github.com/yuvipanda/fuckin-angry-intercal [02:30:59] it's... not angry enough yet, mostly because apparently I suck at trying to be angry [02:33:22] YuviPanda: Ts, it's not even part of gcc? :-) [02:33:36] gcc is also old and hence shitty [02:36:53] And tools-login is stuck again?! WTF?! [02:37:17] Ah, okay, just a simple pause. *Argl*, failures cause anxiety :-). [02:40:22] !log tools Restarted toolwatcher on tools-login. [02:40:24] Logged the message, Master [02:48:53] Utilization heatmap is red [03:10:32] (03PS1) 10Yuvipanda: Use argparse to parse parameters for registering stream clients [labs/tools/gerrit-to-redis] - 10https://gerrit.wikimedia.org/r/76461 [03:10:33] (03PS1) 10Yuvipanda: Add method to list subscriptions [labs/tools/gerrit-to-redis] - 10https://gerrit.wikimedia.org/r/76462 [03:10:52] (03CR) 10Yuvipanda: [C: 032 V: 032] Use argparse to parse parameters for registering stream clients [labs/tools/gerrit-to-redis] - 10https://gerrit.wikimedia.org/r/76461 (owner: 10Yuvipanda) [03:11:27] (03CR) 10Yuvipanda: [C: 032 V: 032] Add method to list subscriptions [labs/tools/gerrit-to-redis] - 10https://gerrit.wikimedia.org/r/76462 (owner: 10Yuvipanda) [03:19:22] (03PS1) 10Yuvipanda: Rename to subscriptions.py, which is more accurate [labs/tools/gerrit-to-redis] - 10https://gerrit.wikimedia.org/r/76464 [03:26:45] (Just before going to bed after a nice weekend of not being around): I need to rebuild tools-login with actual resources rather than the currently skimpy values. 
[03:27:17] +1 [03:27:24] good night, Coren [03:33:33] (03PS1) 10Yuvipanda: Add delete command to be clear and delete subscriptions [labs/tools/gerrit-to-redis] - 10https://gerrit.wikimedia.org/r/76465 [03:33:34] (03PS1) 10Yuvipanda: Perform saving when adding new subscriptions atomically [labs/tools/gerrit-to-redis] - 10https://gerrit.wikimedia.org/r/76466 [03:34:01] (03CR) 10Yuvipanda: [C: 032 V: 032] Add delete command to be clear and delete subscriptions [labs/tools/gerrit-to-redis] - 10https://gerrit.wikimedia.org/r/76465 (owner: 10Yuvipanda) [03:35:49] (03PS2) 10Yuvipanda: Rename to subscriptions.py, which is more accurate [labs/tools/gerrit-to-redis] - 10https://gerrit.wikimedia.org/r/76464 [03:36:15] (03PS3) 10Yuvipanda: Rename to subscriptions.py, which is more accurate [labs/tools/gerrit-to-redis] - 10https://gerrit.wikimedia.org/r/76464 [03:36:22] (03CR) 10Yuvipanda: [C: 032 V: 032] Rename to subscriptions.py, which is more accurate [labs/tools/gerrit-to-redis] - 10https://gerrit.wikimedia.org/r/76464 (owner: 10Yuvipanda) [03:36:37] (03PS2) 10Yuvipanda: Add delete command to be clear and delete subscriptions [labs/tools/gerrit-to-redis] - 10https://gerrit.wikimedia.org/r/76465 [03:36:46] (03CR) 10Yuvipanda: [C: 032 V: 032] Add delete command to be clear and delete subscriptions [labs/tools/gerrit-to-redis] - 10https://gerrit.wikimedia.org/r/76465 (owner: 10Yuvipanda) [03:37:00] (03PS2) 10Yuvipanda: Perform saving when adding new subscriptions atomically [labs/tools/gerrit-to-redis] - 10https://gerrit.wikimedia.org/r/76466 [03:37:08] (03CR) 10Yuvipanda: [C: 032 V: 032] Perform saving when adding new subscriptions atomically [labs/tools/gerrit-to-redis] - 10https://gerrit.wikimedia.org/r/76466 (owner: 10Yuvipanda) [03:37:14] woooo spam :) [03:39:10] YuviPanda: first time I saw grrrit-wm speak [03:39:17] :) [03:39:25] it's active in other channels a lot [03:39:30] very few repositories under labs/* for now [07:40:15] Coren: huh? [07:43:48] petan: what's the difference between tools-login and tools-dev? [08:03:43] zhuyifei1999 -login is for bot operation, -dev is for building etc [08:03:48] -dev has more libraries and tools [08:06:18] kjshddkjasjdk [08:06:23] screen -S sessionname -X eval 'stuff "dein befehl mit parametern"\015' [08:06:28] why dos not work [08:06:29] :O [08:06:39] screen -S sessionname -X eval 'stuff "/join #test"\015' [08:06:39] :O [08:07:56] ups. wrong channel. sorrry. [08:25:46] YuviPanda ping [08:25:59] YuviPanda how I retrieve stuff from redis from terminal? [10:39:12] !screenfix [10:39:13] script /dev/null [10:58:20] petan: you can't from our current setup. you'd need to run redis-cli, which is installed only on tools-mc [11:04:50] YuviPanda: isn't there some other terminal client? [11:05:07] i just do python followed by import redis, then connect and do stuff [11:05:18] mhm [11:05:42] what python library [11:05:46] redis? [11:05:53] https://redis-py.readthedocs.org/en/latest/ [11:05:54] ? 
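A minimal sketch of the "import redis, then connect and do stuff" approach mentioned above, using redis-py from an interactive Python shell; the host name tools-mc and the example key are assumptions:

```python
# Poke at the shared Redis from a Python shell, since redis-cli is not
# installed on the login hosts. Host name and key are assumptions.
import redis

r = redis.Redis(host='tools-mc', port=6379)

print(r.llen('example:queue'))           # how many items are waiting
print(r.lrange('example:queue', 0, 4))   # peek at the first five without removing them
```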
[11:06:09] yeah, it's already installed clusterwide [11:07:18] YuviPanda: did you see dispatcher-cli (in /data/project/dispatcher/bin) it already works, sort of [11:07:23] you can create new queues using it [11:07:31] and insert /some/ definitions [11:07:34] no, haven't yet [11:07:45] spent some time adding a proper client to my gerrit-to-redis thing [11:07:49] https://wikitech.wikimedia.org/wiki/Bot_Dispatcher#Example_usage [11:08:07] you still haven't told me how you'll handle security for the subscriber [11:08:27] Host: /data/project/dispatcher/hostname and Port /data/project/dispatcher/port [11:08:30] will have to be world readable [11:08:42] if everyone is supposed to be able to execute Script is @ /data/project/dispatcher/bin/dispatcher-cli [11:09:59] petan: so *anyone* can communicate with the deamon. [11:10:04] and do whatever [11:10:33] this is why the gerrit-to-redis subscribe.py is not publicly available to everyone [11:11:48] YuviPanda: loha [11:11:52] hello! [11:12:00] AzaToth: I'm going to 'kill' the 'testing' queue soon [11:12:03] is that okay? [11:12:09] fine for me atm [11:12:17] alright, let me know if you want it back [11:12:20] did you look at the two patchsets left? [11:12:29] there are patchsets left? :| [11:12:31] the draft and the jshint one [11:12:32] not in my dashboard. let me look [11:12:44] looking now [11:12:52] ah, I never added you are reviewer [11:12:54] soz [11:13:44] (03CR) 10Yuvipanda: [C: 032 V: 032] "Thank you!" [labs/tools/grrrit] - 10https://gerrit.wikimedia.org/r/76194 (owner: 10AzaToth) [11:14:16] (03PS2) 10Yuvipanda: adding published draft hook [labs/tools/grrrit] - 10https://gerrit.wikimedia.org/r/76193 (owner: 10AzaToth) [11:14:42] AzaToth: looks good to me, but did you test it? [11:15:09] the draft one, nope [11:15:20] AzaToth: meh, let's just do it :) [11:15:26] heh [11:15:46] it's pretty much the same as patchset [11:15:47] (03CR) 10Yuvipanda: [C: 032 V: 032] "Let's test it live!" [labs/tools/grrrit] - 10https://gerrit.wikimedia.org/r/76193 (owner: 10AzaToth) [11:16:00] heh [11:16:02] (03PS2) 10Yuvipanda: Fixing jshint hints [labs/tools/grrrit] - 10https://gerrit.wikimedia.org/r/76194 (owner: 10AzaToth) [11:16:15] (03CR) 10Yuvipanda: [C: 032 V: 032] Fixing jshint hints [labs/tools/grrrit] - 10https://gerrit.wikimedia.org/r/76194 (owner: 10AzaToth) [11:16:21] AzaToth: done. want to restart it? [11:16:25] k [11:16:40] hashar: heya! Are there docs on how I can enable jenkins-bot for a tool? [11:16:47] specifically I'd want jshint, and want it to be voting :) [11:17:15] YuviPanda: it's in integration-jenkins-bot-config [11:17:25] integration/jenkins-bot-config? [11:17:26] let me clone [11:17:57] yup [11:18:31] https://git.wikimedia.org/project/integration [11:18:32] no such thing [11:19:20] integration/jenkins-job-builder-config [11:19:59] YuviPanda: sure we get some doc [11:20:42] (03PD1) 10AzaToth: test [labs/tools/grrrit] - 10https://gerrit.wikimedia.org/r/76487 [11:20:47] YuviPanda: works [11:20:55] AzaToth: \o/ thank you ;) [11:21:24] as you notice, it didn't post when I uploaded the draft [11:21:24] AzaToth: I'm thinking that the only thing left to do is, uh, to have a 'ping' mechanism [11:21:34] only when I clicked on the publish draft button [11:21:37] AzaToth: drafts are supposed to be private, no? [11:21:38] so makes sense [11:21:39] !ping [11:21:39] *POOF* "Wadda need?" *POOF* "Wadda need?" *POOF* "Wadda need?" 
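A tiny sketch of the liveness check proposed above ("grrrit-wm: ping" should answer "Working..."); the real bot lives in labs/tools/grrrit and is JavaScript, so this Python version only shows the idea, and the say() callback is hypothetical:

```python
BOT_NICK = 'grrrit-wm'

def handle_message(channel, text, say):
    # Reply only to "<botnick>: ping", so the bot stays a simple relay
    # rather than a general-purpose command bot.
    if text.strip().lower() == ('%s: ping' % BOT_NICK).lower():
        say(channel, 'Working...')
```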
[11:21:45] !ping del [11:21:45] YuviPanda: I wrote two tutorials at https://www.mediawiki.org/wiki/Continuous_integration/Tutorial [11:21:45] Successfully removed ping [11:21:48] probably need one more [11:21:48] !ping is pong [11:21:49] Key was added [11:22:09] hashar: ah, looking [11:22:21] should probably write one to add jslint [11:22:40] hashar: yup! [11:23:16] AzaToth: doing something like 'grrrit-wm: ping' should respond with something like 'Working...' [11:23:19] so we know the bot is up [11:23:35] nothing more, we don't want to become yet another massive trawling can-do-everything bot [11:23:48] hashar: regarding jenkins, I saw you made some work on the zuul last week [11:24:01] I see [11:24:13] AzaToth: yup got a bunch of patch to get merged in ops repo [11:24:24] (03Abandoned) 10AzaToth: test [labs/tools/grrrit] - 10https://gerrit.wikimedia.org/r/76487 (owner: 10AzaToth) [11:24:33] I am not going to enable them this week though since I am heading in vacation by the end of the week :( [11:24:34] nice [11:24:44] not nice [11:24:55] well, nice for you [11:24:59] YuviPanda: on which project do you want to add jslint? And will you be there for the next couple hours? [11:25:20] AzaToth: it is a rather disrupting change so I would prefer being there whenever it screw up heeh [11:25:25] hehe [11:25:36] hashar: labs/tools/grrrit [11:25:36] hashar: I will be sortof around, just woke up so need to brush, eat, etc [11:25:42] merge it friday evening [11:26:01] YuviPanda: I read it as "I just woke up in a bush" [11:26:06] then I will get phone calls! [11:26:17] turn phone off [11:26:18] hehe [11:27:24] petan: your tool mediawiki-mc seem to be broken with "[41dddf8e] 2013-07-29 11:24:54: Fatal exception of type MWException" [11:27:49] zhuyifei1999 / [11:27:52] what you talk about [11:28:02] http://tools.wmflabs.org/mediawiki-mc/w/ [11:28:03] what is mediawiki-mc tool [11:28:05] ah that [11:28:09] ignore that [11:28:15] it's just a test wiki for me [11:28:21] purposefully broken [11:29:44] petan: did you see my comments about authentication? [11:29:53] where [11:31:42] YuviPanda: going out to grab a snack, will be back in less than half an hour [11:31:49] hashar: alright! [11:32:02] 16:38 YuviPanda: you still haven't told me how you'll handle security for the subscriber [11:32:05] 16:38 YuviPanda: Host: /data/project/dispatcher/hostname and Port /data/project/dispatcher/port [11:32:10] 16:38 YuviPanda: will have to be world readable [11:32:13] 16:38 YuviPanda: will have to be world readable [11:32:17] 16:38 YuviPanda: if everyone is supposed to be able to execute Script is @ /data/project/dispatcher/bin/dispatcher-cli [11:32:21] 16:39 YuviPanda: petan: so *anyone* can communicate with the deamon. [11:32:25] 16:40 YuviPanda: and do whatever [11:32:28] 16:40 YuviPanda: this is why the gerrit-to-redis subscribe.py is not publicly available to everyone [11:32:31] petan: ^ [11:37:51] * YuviPanda considers writing soemthing in Go [11:39:56] YuviPanda, hashar I created a picture for easy understanding: https://wikitech.wikimedia.org/wiki/Bot_Dispatcher [11:39:59] :O [11:40:18] petan: I've read that, that doesn't mention the security issue I asked about [11:40:36] ok the picture was added 1 minute ago :P [11:40:45] anyway YuviPanda, did you read the example usage? 
[11:40:58] when you create a subscription, you will be granted a security token [11:41:08] you need to provide this token in order to modify the subscription [11:41:27] yes anyone can communicate with the daemon [11:41:40] but it will not allow anyone to do anything without proper permissions [11:41:47] in future we can replace this with keystone [11:41:52] but for that it needs to exist [11:41:59] petan: hmm, that actually makes sense. I had thought the token was for something else. [11:42:05] nope [11:42:13] you can try it yourself it already works [11:42:29] just ln -s /data/project/dispatcher/bin/dispatcher-cli ~/bin/dispatcher-cli [11:43:02] will do in a bit, petan [11:43:03] when you type dispatcher-cli --subscribe yuvipanda it will create a redis queue yuvipanda and give you secret token to manipulate it [11:43:15] wait, the redis queue name is going to be 'yuvipanda'? [11:43:27] it's going to be whatever you call it [11:43:39] you can call it mysupersecretqueuenooneeverfind [11:43:39] hmm, right. [11:43:57] re [11:46:39] hashar: lol [11:46:48] hashar: I didn't notice, but you probably didn't notice either [11:47:04] Antoine Musso 1:04 PM (42 minutes ago) [11:47:05] to me [11:47:17] did you send it to me only? because I clicked reply to all [11:47:23] and there was your e-mail only [11:51:29] petan: I understood your architecture [11:51:45] from picture? :) [11:51:46] but I disagree on using the IRC feed [11:51:49] no from before [11:51:56] I disagree as well [11:52:04] but unfortunatelly there is only 1 option: use IRC feed [11:52:17] then you are on your own [11:52:23] and nobody is going to support that [11:52:38] but... don't you see that? [11:52:39] using IRC Feed is dumb [11:52:41] really [11:52:44] it's not that I like irc feed [11:52:48] hashar: i think that's okay, really. Nobody has supported it for a long time and there exists no alternatives. [11:52:50] it's only one option that WMF provides [11:52:57] WMF doesn't give us any alternative [11:52:59] hashar: and anyone attempting to write an alternative is going to be bikeshedded to death [11:53:03] what am I supposed to use hashar [11:53:19] hashar: and hence there is no interest. See two large wikitech-l threads about this over the last 2 years. [11:53:23] what I told you earlier, a json feed [11:53:31] there is NO json feed [11:53:34] where would I get it? [11:53:37] work with Vitaly and Krenair to get MediaWiki core to support JSON feeding of Rcent Change [11:53:41] that is better investment of your time [11:53:47] hashar: once there is JSON feed I will happily use it [11:53:56] but in this moment there is only irc [11:53:59] so I have to use that [11:54:07] you are talking about distant future [11:54:10] this way you will more more have to care about parsing horrible IRC formatting [11:54:11] hashar: also, you should no better than to try to change petan's mind :) [11:54:18] *know [11:54:42] but you guys make no sense, you tell me not to use A but you don't tell me what I should use instead, because JSON feed is sci-fi today [11:54:42] hashar, unfortunately the current JSON lines are too long to go on to IRC [11:54:42] petan: and luckily we have a patch that does the JSON feed [11:54:53] is it merged/ [11:54:56] live on production? [11:55:00] how do I attach to it [11:55:19] should I copy paste from our discussion earlier ? 
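An illustrative sketch (not the actual dispatcher code) of the scheme described above: creating a subscription hands back a secret token, and that token must be presented to modify or delete the subscription later. The key names and host are made up:

```python
import hashlib
import os

import redis

r = redis.Redis(host='tools-mc', port=6379)  # host is an assumption

def subscribe(queue):
    token = hashlib.sha1(os.urandom(32)).hexdigest()
    # Record which token owns this queue; refuse if it already exists.
    if not r.setnx('dispatcher:token:' + queue, token):
        raise ValueError('queue already exists: ' + queue)
    return token

def unsubscribe(queue, token):
    stored = r.get('dispatcher:token:' + queue)
    if stored is None or stored.decode() != token:
        raise ValueError('bad token for queue ' + queue)
    r.delete('dispatcher:token:' + queue, queue)
```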
[11:55:27] no [11:55:34] it's not merged [11:55:41] I will use it as soon as it's live [11:55:49] MediaWiki -- (json) ---> ZeroMq --- (subscriber) ---> Redis [11:55:50] done [11:55:52] now I /have/ (not I prefer to) use irc feed [11:55:58] the json part is being handled in core as we speak [11:56:14] the subscriber is a few lines of code just like your dispatcher [11:56:48] though it will receiver the properly formatted message which will make it way more easier for you to maintain [11:56:50] ok I like the idea, in picture as you can see is written "dispatcher parses from irc or anything better source if it exist" [11:56:51] so there is a bit of investement [11:56:57] but that is well worth it in the long run [11:57:04] it doesn't exist right now, but as soon as it's deployed and working I will happily switch to it [11:57:33] so you are going to invest time on an horrible interface instead of helping get the new json feed setup? [11:57:48] that sounds to me like a vast of your time [11:57:48] how could I help? I don't work for wmf... I have no access here [11:58:24] you could help Krenair and Victor which have a patch for MW core that generate JSON [11:58:30] you can try it out report issues [11:58:39] and help them test the code [11:58:43] that would be a nice first thing [11:58:47] I can only work on wmflabs [11:58:56] we can create a project there to test it [11:59:04] or maybe we could enable that on beta? [11:59:06] the faster the code is merged in the faster it will be deployed in production and usable [11:59:16] yeah exactly [11:59:20] hmm [11:59:26] and have it send the UDP json message to MaxSem event logging instance we have in beta [11:59:31] ok so what is needed to install it? is there some documentation? [11:59:38] from there you can work with them to write a Zero MQueue subscriber [11:59:57] just like you would write your own dispatcher, though you will have to use a Zero MQueue interface instead [12:00:25] whenever you land with something worthwhile in labs, we can copy paste the code in prod and blam. [12:00:27] generalized [12:00:30] I still think that this ZMQ thingie is not able to do what dispatcher does [12:00:33] supported by the whole staff 24/24 7/7 [12:00:43] and you no more have to take care about the json feed. It would just be granted [12:00:54] how does the ZMQ filter the RC items from JSON feed? [12:01:01] and since EventLogging / ZMq is well supported by WMF staff, you end up with a better service than the irc feed. [12:01:40] petan: it would not filter them, just like the IRC feed is unfiltered. [12:01:40] ah ok [12:01:43] so your subscriber (or "dispatcher") still have to handle the filtering [12:01:48] ok [12:01:52] it is just that you are using a much more reliable source [12:02:01] and which is trivial to parse (json !! :D ) [12:02:03] sorry [12:02:08] I like the whole concept [12:02:09] well, so let's deploy it... to beta [12:02:11] it is nice [12:02:21] or maybe we should create another picture lol [12:02:25] I just disagree on the source of the events which is probably going to cause you headache in the long term. [12:02:28] because I am started being confused [12:02:47] btw this is cool thing: https://www.gliffy.com/ is there any free alternative? [12:02:56] I created the picture in that XD [12:06:08] Krenair: so what is current status of JSON support in MediaWiki, also how is it going to send the stuff to ZMQ? Is that an extension or core stuff? 
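A rough sketch of the pipeline laid out above (MediaWiki → JSON over ZeroMQ → subscriber → Redis). The JSON recent-changes feed was not deployed at the time, so the ZeroMQ endpoint, queue name, and Redis host here are all assumptions:

```python
import json

import redis
import zmq

ZMQ_ENDPOINT = 'tcp://rc-feed.example.wmflabs:8422'  # hypothetical endpoint
QUEUE = 'rc:enwiki'

r = redis.Redis(host='tools-mc', port=6379)

ctx = zmq.Context()
sock = ctx.socket(zmq.SUB)
sock.connect(ZMQ_ENDPOINT)
sock.setsockopt(zmq.SUBSCRIBE, b'')  # subscribe to every message

while True:
    event = json.loads(sock.recv().decode('utf-8'))
    # Filtering/dispatching would happen here; this just queues every change.
    r.lpush(QUEUE, json.dumps(event))
```

The subscriber stays a few lines of code because any filtering lives in whatever pops items off the queue.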
[12:06:24] PATCH_TO_REVIEW [12:06:43] I have no idea about ZMQ I don't think I'd heard of it until today [12:07:02] It could be core, but it'll likely be extension or even just in the wiki's config [12:08:11] I think extension is best idea [12:08:24] core should only contains most necessary stuff for it to work [12:08:55] also keep in mind 3rd parties use mediawiki and they don't need this [12:09:15] Krenair: can you link me to that "patch_to_review" [12:10:10] https://gerrit.wikimedia.org/r/#/c/52922/10 [12:14:55] petan: I don't know anything better than gliffy.com besides Microsoft Visio :D [12:15:12] maybe it's worth that $4.90 [12:15:14] :P [12:43:07] [02dispatcher-labs] 07benapetr pushed 031 commit to 03master [+0/-0/±4] 13http://git.io/OtJt2w [12:43:08] [02dispatcher-labs] 07benapetr 039f061c2 - lot of clean up and some comments too [12:43:12] :0 [12:43:14] it works [12:45:07] Not-002? [12:45:16] that's some github bot [12:45:20] not my work I just use it :P [12:46:24] It was Not-001 on tm-irc [12:54:53] https://www.youtube.com/watch?v=RQkdB49hBTc LOLZ [13:05:05] (03PS1) 10Yuvipanda: Add requirements.txt [labs/tools/gerrit-to-redis] - 10https://gerrit.wikimedia.org/r/76495 [13:05:06] (03PS1) 10Yuvipanda: Ensure that queues do not get too long [labs/tools/gerrit-to-redis] - 10https://gerrit.wikimedia.org/r/76496 [13:06:18] (03CR) 10Yuvipanda: [C: 032 V: 032] Add requirements.txt [labs/tools/gerrit-to-redis] - 10https://gerrit.wikimedia.org/r/76495 (owner: 10Yuvipanda) [13:06:20] (03CR) 10Yuvipanda: [C: 032 V: 032] Ensure that queues do not get too long [labs/tools/gerrit-to-redis] - 10https://gerrit.wikimedia.org/r/76496 (owner: 10Yuvipanda) [13:07:28] [02dispatcher-labs] 07benapetr pushed 031 commit to 03master [+1/-0/±5] 13http://git.io/xr7JsA [13:07:29] [02dispatcher-labs] 07benapetr 031715925 - lot more cleanup and file I forgot to push [13:09:26] NFS is stuck again [13:09:27] meh [13:09:52] Coren: ^ [13:09:53] o.o [13:09:56] &trustadd test test [13:09:56] Unknown user level! [13:10:01] &trustadd test trusted [13:10:03] indeed [13:10:05] it is [13:10:09] * Coren grumbles. [13:10:11] bota is stuck too :P [13:10:15] Successfuly added test [13:10:20] it's back o/ [13:10:28] I need to schedule a downtime this week to switch controllers. [13:10:35] Coren: anything from Dell support? [13:11:09] YuviPanda: Not yet. Their linux support is teh suck because it's their own engineers being volunteers for the most part. [13:11:23] sigh [13:11:23] ok [13:11:31] They don't support anything but RHEL officially. [13:11:35] * Coren ponders. [13:11:41] no wonder [13:11:46] Maybe it would help if like, Erik gave them a call. [13:11:57] But then again, ops have a history of odd controller behaviour. [13:12:03] that'll probably help, definitely, yeah. [13:12:11] (for some definition of definitely, at least) [13:12:16] or maybe you could tell them if they won't fix it [[Dell]] will not be very good reference anymore [13:12:51] fix it or we delete you from wikipedia! [13:13:32] ... right. Like a former arbitrator is going to say that, even if that wasn't being an ass. :-) [13:15:22] Coren: any news on uwsgi? [13:15:22] I have an idea for bot extension... mwahahawahuhaha [13:15:36] It's on my plate today. 
[13:15:43] * petan laughs madly [13:16:02] Coren: sweet [13:16:10] dammit [13:16:18] "your computer is low on memory" freaking windows [13:16:38] maybe I should turn off few of my vboxes [13:16:45] I'm a little weary of trying to juggle hardware on the NFS server right before I leave for over a Week to Wikimania. [13:17:41] What I /am/ hoping to do this week is uwsgi and cron-replacement. [13:17:49] (03PS1) 10Yuvipanda: Fix off-by-one error in trimming queues [labs/tools/gerrit-to-redis] - 10https://gerrit.wikimedia.org/r/76501 [13:18:06] (03CR) 10Yuvipanda: [C: 032 V: 032] Fix off-by-one error in trimming queues [labs/tools/gerrit-to-redis] - 10https://gerrit.wikimedia.org/r/76501 (owner: 10Yuvipanda) [13:23:52] Coren: have you already started writing the cron replacement? [13:24:11] YuviPanda: No, still in the brain design mode atm. [13:24:34] aka considering the approach. :-) [13:24:34] Coren: do write it up somewhere, I want to help :) [13:24:40] Coren: are you planning on writing it in python? [13:25:15] I'm not going to write a system daemon in an interpreted language, no. [13:25:21] ah, right [13:25:28] forgot its going to run as a deamon [13:25:47] If nothing else, because that introduces too many dependencies. [13:26:06] you mentioned using redis, so that's at least one. [13:26:18] Also, most interpreted languages have real issues with fork() setuid() [13:26:39] I'm considering redis, but I'm increasingly uncertain that it'd be the right solution. [13:26:55] hmm, having never used either, I'll take your word for it (fork/setuid on interperted languages) [13:27:01] Coren: what exactly were you considering using it for? [13:27:45] Storing work queues so that I could distribute the dispatch; but upon consideration I realized that I don't need an extra layer of redundancy if I simply systematically send to the grid. [13:27:58] indeed, [13:28:03] using Redis there seems redundant [13:28:09] when you have SGE already [13:29:07] !log deployment-prep rebuilding l10n cache, has been broken for a while [13:29:10] I doubt I can be of much help in this case :) [13:29:10] Logged the message, Master [13:29:46] What I'm almost certainly going to end up doing is an upstart-like thing; like ~/.gron containing *.conf files, one per job. (Yeah, since it's a cron-thing for the grid I already decided I'm going to call it 'gron') [13:30:25] pronounced 'groan'? :) [13:31:26] Coren: can kick nfs [13:31:27] :P [13:31:32] I just wait 2 minutes for mkdir [13:32:00] btw did you know mkdir is atomic XD [13:32:10] I even heard it's a proper way to implement lock in a thread [13:32:17] * shell script [13:32:41] Pronounced /ɡɔrn/ [13:33:23] :-) [13:33:31] Duh /ɡrɔn/ [13:34:43] you and your fancy IPA [13:36:06] beer? where? :) [13:36:25] sumanah: took me a while to understand that. well played :) [13:37:13] Does that make me a nerd that I have an IPA keymap amongst my normal input methods? :-) [13:38:13] Coren: if we're talking nerd points, I just 3D printed a ST:TNG badge :) [13:38:24] You win. [13:38:38] :D need to post pictures on blog. [13:38:54] I'm printing more tomorrow! [13:38:55] Wow! 
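On the "Ensure that queues do not get too long" and "Fix off-by-one error in trimming queues" changes above: LTRIM takes inclusive indices, which is exactly where the off-by-one creeps in. A sketch, with assumed key and host names:

```python
import redis

MAX_QUEUE_LENGTH = 1000
r = redis.Redis(host='tools-mc', port=6379)

def push_capped(queue, item):
    r.lpush(queue, item)
    # Keeping MAX_QUEUE_LENGTH items means keeping indices 0 .. MAX-1,
    # because both LTRIM indices are inclusive.
    r.ltrim(queue, 0, MAX_QUEUE_LENGTH - 1)
```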
[13:39:34] (Unless you printed the "future" version from Future Imperfect) :-) [13:39:56] nope, traditional one [13:40:00] not the future one, nor the 'reboot' versions [13:40:00] (That still makes you a nerd, only then the cool overshadows the nerd) :-) [13:40:17] heh [13:40:24] sumanah: it's been something of a long term life goal, to build it with *working* parts [13:40:29] The future on is my fave, with rank bars on the badge rather than pips. :-) [13:40:29] I hope to do that within the next ten years. [13:40:40] oooh what materials YuviPanda? [13:40:40] Coren: of all the future/alt episodes, ST:ENT's alt one was probably the best. [13:40:53] [bz] (8NEW - created by: 2Antoine "hashar" Musso, priority: 4Unprioritized - 6normal) [Bug 52217] puppet server broken on virt0 - https://bugzilla.wikimedia.org/show_bug.cgi?id=52217 [13:41:00] @ping [13:41:00] poor puppet died on virt0 :(( https://bugzilla.wikimedia.org/show_bug.cgi?id=52217 [13:41:07] :( [13:41:07] sumanah: regular PLA. https://en.wikipedia.org/wiki/Polylactic_acid [13:41:45] &ping [13:41:46] Pinging all local filesystems, hold on [13:41:46] Written and deleted 4 bytes on /tmp in 00:00:00.0027330 [13:41:47] Written and deleted 4 bytes on /data/project in 00:00:00.0083950 [13:41:51] :) [13:42:02] here we go [13:42:42] when you are not sure if nfs is fucked, &ping is your friend [13:50:41] NFS isn't dead, it's just resting. [13:53:32] lol I am wondering who can read json easier than xml http://json.org/example.html except for machines... [13:53:38] it hurts my eyes [13:53:58] like... reading a binary code is easier [13:57:23] that's because of the excessive indentation [13:57:45] the second example is much better [13:58:11] the third one is just java/xml/enterprise hell [14:09:07] Coren: have you looked at puppet master failing on virt0 ? :) [14:09:30] it gives me a 500 internal server error "Please contact the server administrator" ;D [14:09:48] hashar: Sorry, I hadn't noticed your message. Lemme go look. [14:10:11] sorry should have poked you [14:10:26] I know Andrew did some puppetmaster changes last week [14:13:32] hashar: I'm doing a manual run atm and it /loooks/ like it's working. [14:14:04] There seems to be 17 umptillion iptables changes though. [14:14:16] notice: Finished catalog run in 182.23 seconds [14:14:28] hashar: Puppet no failz. [14:15:52] Coren: same 500 error on my instance :( [14:16:15] [02dispatcher-labs] 07benapetr pushed 031 commit to 03master [+3/-0/±3] 13http://git.io/AfBc4g [14:16:16] [02dispatcher-labs] 07benapetr 03ca270f2 - JSON support (for input / output) [14:17:14] my instance is i-00000390.pmtpa.wmflabs / deployment-bastion.pmtpa.wmflabs [14:17:15] Oh! [14:17:32] valhallasw: are you trying to convince petan that JSON actually is nice? good luck :) [14:17:35] You didn't mean that puppet failed /on/ vitr0, you mean that puppet /from/ virt0 fails! [14:17:43] YuviPanda|Away: no I was asking why it [14:17:49] s so hard to read :D [14:18:00] json looks like a bracket hell to me [14:18:08] considering you haven't listened to anyone who has given you reasons over a long period of time, I don't think I should bother again :) [14:18:17] * YuviPanda goes to do something productive instead [14:19:42] hashar: It's Apache that broke, seemingly. Something about mod_passenger since 7:28 utC [14:20:20] "yeay ruby" [14:20:31] :-] [14:20:33] Coren: remind me, can jobs be submitted from grid nodes? [14:21:05] addshore: It's turned off atm; but I could turn it on if needed. Do you have a good use case for this? 
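A minimal sketch of roughly what the &ping output above suggests the bot does: write and delete a few bytes on each local filesystem and time it, which is a quick way to see whether NFS (/data/project) is responding. Paths and the temp file name are illustrative:

```python
import os
import time

def ping_fs(directory, payload=b'ping'):
    path = os.path.join(directory, '.fs-ping-%d' % os.getpid())
    start = time.time()
    with open(path, 'wb') as f:
        f.write(payload)
    os.remove(path)
    return time.time() - start

for d in ('/tmp', '/data/project'):
    print('%s: %.4f s' % (d, ping_fs(d)))
```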
[14:21:36] heh, it might be the best / easiest way I can think of multithreading in php [14:22:09] especially in an environment where splitting across several nodes would be good [14:22:51] hashar: Apparently fixed with a swift kick to the apache. :-) [14:23:19] addshore: Actually, you might want to parallel SGE job instead (there's built-in support for that) [14:23:43] hmm, never really looked into them before, got a link to some blurb i can read? :P [14:23:59] Probably. Gimme a sec. [14:25:13] petan: wm-bot has gone nutty in #wm-bot [14:28:01] addshore: Or maybe not. I don't think PHP has MPI support. [14:28:28] :< [14:28:32] Coren: that worked. Thank you! [14:28:53] addshore: So yeah, that'd need that exec hosts also be submit hosts. There's nothing that prevents this technically, but there's the possibility of a job bomb I'm a little concerned about. [14:28:57] oh well, I can think of another way of doing it, although being able to submit from the grid would be nice, or else I will have to have a scrip running in a screen on -dev or -login :P [14:30:39] Coren: nothing preventing job bombs now, since webhosts are submit hosts [14:30:39] Coren: and you can easily have a script on a web host that just relays commands through [14:32:14] I need one thread reading from my db, and say 6 more using the data they are passed from the db and then updating the db [14:32:14] still cant think of the perfect way to do it [14:32:30] addshore: Redis! [14:32:42] addshore: 1 job reads from DB, puts on redis [14:32:48] addshore: other jobs just pop from the redis queue [14:33:07] addshore: you can just start as many 'consumer' jobs as you want, and they'll all just equally read from the same redis queue [14:33:21] addshore: and no need to deal with threading, or submit recursive jobs or whatever [14:33:25] andrewbogott: thank you for the puppetmaster restart on virt0 [14:33:25] i guess thats what I will have to look into now :P [14:33:32] YuviPanda: should I lpush or rpush [14:33:33] somehow I keep confusing andrewboggot and Coren :/ [14:33:50] petan: lpush [14:33:54] petan: and rbpop [14:33:58] b for 'blocking' [14:34:12] petan: also don't forget ltrim, or you will exhaust memory :) [14:34:19] YuviPanda: ever used redis with php? [14:34:31] YuviPanda: LOL that library I use support rpush only [14:34:33] addshore: no, but it should be the same [14:34:43] i was a reallllyl simple class for it ;p [14:34:44] petan: use a better library then [14:34:49] addshore: there's already one installed [14:34:51] let me look [14:34:52] I didn't find a better yet [14:34:55] just a bunch of non-working [14:34:59] YuviPanda: even better [14:35:09] Coren: andrewbogott: could one of you blindly merge a beta shell script update at https://gerrit.wikimedia.org/r/#/c/76504/ please ? :-] it used to be an upstart service, I am now triggering it via jenkins. [14:36:35] addshore: we have https://github.com/nicolasff/phpredis#usage installed everywhere, I believe [14:36:54] addshore: I'll be happy to help in any way possible, with redis :) [14:36:56] petan: C#? [14:37:02] yes [14:37:15] hehe, switch to a more used language then :) [14:37:18] what about using rpush and lbpop [14:37:23] no way [14:37:45] YuviPanda or maybe switch from redis to a more used solution ;) [14:38:24] not like I can convince you about anything, so sure, implement your own thing! [14:38:58] hashar: blindly merging! 
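A sketch of the producer/consumer split suggested above: one job reads from the database and LPUSHes work items, and any number of identical workers block on BRPOP ("rbpop" above is presumably BRPOP). Names are illustrative, and process() stands in for the real per-item work:

```python
import json

import redis

r = redis.Redis(host='tools-mc', port=6379)
QUEUE = 'example:work'

def producer(rows):
    for row in rows:
        r.lpush(QUEUE, json.dumps(row))

def worker():
    while True:
        _key, raw = r.brpop(QUEUE)   # blocks until an item is available
        process(json.loads(raw))

def process(row):
    pass  # placeholder for the real work
```

Scaling up is then just starting more worker jobs on the grid; there is no threading, locking, or race condition to coordinate.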
[14:39:11] andrewbogott: :-] [14:39:29] 'cause I'm about to go to breakfast [14:39:35] that would be fine [14:39:47] that removes a huge piece of obfuscated oddity from beta :-] [14:39:51] you will be praised! [14:42:16] is something going on with DB access from tools-login? [14:43:11] Nettrom: Not that I know of. What is the issue you are experiencing? [14:43:28] Coren: 'sql enwiki_p' doesn't appear to connect, just hangs [14:44:37] Ah, yes. I see why. [14:44:57] Try this. [14:46:20] yes? [14:50:00] YuviPanda: lol I can be easily convinced, but if you ever try to convince to switch from such a beatiful language like to c# to such a horrific crap like python, you will likely fail... [14:50:50] :) [14:51:23] meh I will have to write own library :/ [14:51:43] almost all other libraries from redis.io don't even build [14:53:24] Coren: seems to be working again now [14:53:49] I wish so much git had a proper -v switch [14:54:03] it has that but it just isn't really much verbose [14:54:19] * petan hates to watch cursor blinking and waiting for something [14:56:45] YuviPanda: care to join ##addshore? :> [15:05:52] Coren: Is there an ETA for the DB serversDNS [15:05:52] entries? [15:06:24] scfc_de: This week, I expect. I've got about a half dozen things mostly ready to deploy. [15:06:41] Coren: is uwsgi one of those 'ready to deploy' things? [15:06:48] [bz] (8NEW - created by: 2Antoine "hashar" Musso, priority: 4Low - 6major) [Bug 48501] [OPS] beta: get SSL certificates - https://bugzilla.wikimedia.org/show_bug.cgi?id=48501 [15:07:04] YuviPanda: No, that's one of the "I hope I'll be done in time" things. [15:07:09] Coren: Nice. Otherwise we need to document the IP table stuff, so I can safely reboot tools-login :-). [15:08:19] Coren: On tools-webserver-01, the package popularity-contest is installed (manually) and only on this instance. Can it be removed? [15:08:44] scfc_de: If it's not in puppet, it should either be put there or removed from the node. [15:10:57] !log tools Purged popularity-contest from tools-webserver-01. [15:10:59] Logged the message, Master [15:12:32] Coren: And another (daily) thing: "exim paniclog on tools-webserver-01.pmtpa.wmflabs has non-zero size". Does that file need to be deleted/truncated manually? [15:13:07] scfc_de: It probably does, if only to shut the damn thing up. :-) [15:13:22] (It's complaining about OOM.) [15:13:35] [bz] (8NEW - created by: 2Amir E. Aharoni, priority: 4Unprioritized - 6normal) [Bug 52222] fill http://he.wikipedia.beta.wmflabs.org/ with some useful data from he.wikipedia.org - https://bugzilla.wikimedia.org/show_bug.cgi?id=52222 [15:13:45] (It *has* complained about OOM in the past, to be precise.) [15:15:12] !log tools tools-webserver-01: rm /var/log/exim4/paniclog [15:15:13] Logged the message, Master [15:25:53] hi, can I get a reboot of bots-salebot? I'm stuck ssh'ing in [15:32:00] gribeco: sure, but can you consider moving to tools? :P [15:32:12] I was looking for you [15:32:17] bots project may need to close soon :/ [15:32:24] petan: I would love to; is there a page that describes what to do? [15:32:34] !toolsdocs [15:32:35] https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help [15:32:50] thanks [15:33:44] gribeco what's your wikitech username? [15:33:51] I might have removed you from acl [15:33:55] ... 
[15:33:59] petan: gribeco [15:34:02] ok [15:34:18] nope, I didn't [15:34:22] ssh is not responding [15:34:22] ok let's reboot it then [15:34:30] ok [15:34:45] I /think/ I rebooted it [15:34:52] andrewbogott: that labsconsole reboot really need a confirmation [15:34:54] we'll see =) [15:35:02] or one day someone accidentaly reboot something important [15:35:08] petan, it has one now I think... [15:35:15] not that I know of :P [15:35:25] Yes, it has. [15:35:36] I see a spinning wheel and then... it's rebooted [15:35:41] it ask me only if I want to delete instance [15:35:47] * andrewbogott tries it [15:35:50] but when I click reboot it just happens [15:36:02] I just tried [15:36:03] you're right. [15:36:09] Is there a bug for this already? [15:36:15] not from me [15:37:01] From https://wikitech.wikimedia.org/wiki/Nova_Resource:I-000005f9, I got a confirmation page for rebooting. [15:38:43] petan: sorry, ssh is still getting stuck [15:38:48] gribeco@bastion1:~$ ssh bots-salebot [15:38:48] [15:38:48] If you are having access problems, please see: https://wikitech.wikimedia.org/wi [15:38:48] ki/Access#Accessing_public_and_private_instances [15:38:58] hold on [15:38:58] (and then nothing) [15:39:03] I think it didn't really reboot yet [15:39:06] ah [15:39:13] that restart button is binded to power switch [15:39:17] so instance only get a signal [15:39:32] there is no force-shutdown :( [15:44:46] [02dispatcher-labs] 07benapetr pushed 031 commit to 03master [+0/-0/±1] 13http://git.io/l0B1wg [15:44:47] [02dispatcher-labs] 07benapetr 03f5bf198 - new library that supports LPUSH [15:45:32] YuviPanda: lpush resolved [15:45:40] (I think) [15:45:53] shouldn't we let people specify if they want lpush or rpush? [15:47:36] Is Not-002 running on Labs or is that the GitHub bot? [15:47:50] petan: no [15:48:03] scfc_de github bot [15:48:07] k [15:48:12] YuviPanda: poor bot devs :( [15:48:15] you make their life hard [15:55:28] ding dong YuviPanda :> [15:55:34] :D [15:55:41] addshore: is this tool also in labs/tools/? [15:55:42] YuviPanda: and to count the elements still in a list? [15:55:50] YuviPanda: yup [15:56:01] addshore: llen [15:56:27] addshore: https://github.com/nicolasff/phpredis#llen-lsize [15:56:37] and does querying mc use many resourses? ie, if I want to keep checking how long the list is how much should I sleep between each? [15:57:10] addshore: why do you want to check length? [15:57:24] so the master doesnt just fill mc with 1000000 rows [15:57:25] :> [15:57:35] [02dispatcher-labs] 07benapetr pushed 031 commit to 03master [+0/-0/±5] 13http://git.io/n-FNag [15:57:36] [02dispatcher-labs] 07benapetr 0362cd878 - several changes * Changed redis behaviour * Implemented regex checks * Invalid regex items are disabled to prevent system from having too big * load [15:57:41] addshore: http://redis.io/commands/llen says it is O(1), so should be very fast :) [15:57:55] i was thinking something like, push 100 to mc, wait for it to drop to ~5 and add soem more [15:58:05] addshore: should be fine [15:58:05] YuviPanda: is a list unique / can it be made unique? [15:58:13] addshore: you can do that via http://redis.io/commands#set [15:58:19] addshore: but those are sets, not lists. [15:58:30] mhhm, can you pop off a set? [15:58:37] addshore: it will return a random element [15:58:40] SPOP [15:58:44] mhhhm [15:58:48] and it won't 'block' if things are empty [16:00:53] YuviPanda: https://github.com/addshore/addwiki/compare/0053f4722650...db6297eb8f22 [16:01:16] heh, weldone github for not recognising a file move... 
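A sketch of the throttling addshore describes above (push a batch, then wait for the workers to drain the queue before fetching more rows from MySQL); the batch source, threshold, and names are illustrative:

```python
import time

import redis

r = redis.Redis(host='tools-mc', port=6379)
QUEUE = 'example:iwlink'

def feed(fetch_batch):
    while True:
        batch = fetch_batch()        # hypothetical: next ~100 rows from MySQL
        if not batch:
            break
        for item in batch:
            r.lpush(QUEUE, item)
        while r.llen(QUEUE) > 5:     # "wait for it to drop to ~5"
            time.sleep(1)            # LLEN is O(1), so polling is cheap
```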
[16:02:19] up to line 56 in .slave [16:02:25] basically all of .master [16:02:47] addshore: you want brpop, not blpop [16:02:56] addshore: you are pushing on the left, you want to pop on the right [16:03:06] so it is a queue. Right now you have a stack :D [16:04:37] indeed ;p [16:04:54] although as I am waining for it to be empty before adding more it wouldnt have made much difference ;p [16:05:05] addshore: true, but still :) [16:05:31] addshore: also, you should set a prefix that is not public [16:05:32] !tools-doc [16:05:32] !tooldocs [16:05:32] jesus man [16:05:32] https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help [16:05:34] YuviPanda: is there a way to get an array of everything in the list? [16:05:51] addshore: https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help#Security [16:06:03] heh, there is an api to add things to my db ;p [16:06:23] addshore: you can use LRANGE, which takes a 'start' and 'end' [16:06:25] maybe I will change the key soon ;p [16:07:22] lets pull it to tools and see if it works [16:07:28] :) [16:07:36] &ping [16:07:36] Pinging all local filesystems, hold on [16:07:37] Written and deleted 4 bytes on /tmp in 00:00:00.0002700 [16:07:38] Written and deleted 4 bytes on /data/project in 00:00:00.0068050 [16:07:58] :o [16:08:03] nfs is so lovely stable [16:09:54] PHP Warning: unserialize() expects parameter 1 to be string, array given in /data/project/addbot/src/addwiki/scripts/wdimport.slave.php on line 56 [16:09:56] hehe [16:10:19] serialization is nice but fragile [16:10:26] oh wait [16:10:28] I added $redis->setOption(Redis::OPT_SERIALIZER, Redis::SERIALIZER_PHP); [16:10:35] :o [16:10:45] so its going to return an array? O_o? [16:10:51] addshore: yeah, so automatically serialized and unserialized [16:11:00] so remove your unserialize [16:11:00] so i can pass it an array too ? ;p [16:11:02] you can just pass things directly [16:11:03] yeah [16:12:23] oh wait [16:12:30] it passes it as the second arg! :> [16:13:10] hmm? [16:13:21] Array [16:13:21] ( [16:13:21] [0] => addbotiw:iwlink [16:13:21] [1] => a:7:{s:4:"site";s:4:"wiki";s:4:"lang";s:2:"af";s:9:"namespace";s:1:"0";s:5:"title";s:2:"36";s:5:"links";s:1:"1";s:7:"updated";s:19:"2013-07-25 07:45:12";s:3:"log";N;} [16:13:21] ) [16:14:29] addshore: ah, right. returns the key as the first element [16:14:30] ok [16:14:37] yup :> [16:14:49] gonna use json encode instead of seir [16:15:03] :) [16:15:31] hmm, how to purge the list? :> [16:15:55] addshore: DEL will clear it entirely [16:16:03] addshore: LTRIM lets you reduce it to a known length [16:18:06] so $redis->lTrim('iwlink',0,0); start at 0 stop at 0 leaves me with nothing? [16:18:57] addshore: you can just do $redis->del('iwlink') [16:18:57] if you want to delete all of it [16:18:57] addshore: $redis->delete [16:18:57] rather [16:19:44] addshore: http://redis.io/commands# has list of all commands and explanation. you can also filter by data type on top [16:20:47] lsize returns void O_o [16:21:01] 'void'? [16:21:20] addshore: i think it'll return null if the list doesn't exist, which will be the case with delete [16:21:24] oh, my phpdoc is just wrong :> [16:24:24] bingo, working [16:24:56] addshore: \o/ [16:25:12] :> [16:33:03] hmmm [16:33:11] doesnt seem to be working quite the same on the grid :/ [16:33:29] addshore: what's happening? [16:34:01] nothing :P [16:34:17] the queue appears to always have the origional 50 items in it [16:34:26] and the slave doesnt take any :< [16:34:27] are you popping from the same queue? 
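One way to follow the "set a prefix that is not public" advice above: since the shared Redis has no per-user authentication, an unguessable key prefix is what keeps other tools away from your data. A sketch; the file path is an assumption:

```python
import binascii
import os

PREFIX_FILE = os.path.expanduser('~/redis-prefix.txt')

def get_prefix():
    if not os.path.exists(PREFIX_FILE):
        prefix = binascii.hexlify(os.urandom(16)).decode()
        with open(PREFIX_FILE, 'w') as f:
            f.write(prefix)
        os.chmod(PREFIX_FILE, 0o600)  # readable by the tool account only
        return prefix
    with open(PREFIX_FILE) as f:
        return f.read().strip()

# Keys then become '<prefix>:iwlink' rather than the guessable 'iwlink'.
```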
[16:34:35] yup, its the same code :P [16:34:53] it worked when i ran thw two on interactive clients :/ [16:35:12] addshore: are they running as continous jobs? [16:35:15] any errors? [16:35:17] yup [16:35:22] heh *checks* [16:35:45] hehe [16:35:46] var/spool/gridengine/execd/tools-exec-02/job_scripts/717615: line 4: 24237 Aborted /usr/b$ [16:35:47] libgcc_s.so.1 must be installed for pthread_cancel to work [16:37:05] addshore: hehe, not enough memory :) [16:37:13] yup, silly php [16:37:32] cant remember what I decided it needed [16:40:16] 768 it is ;p [16:40:55] :) [16:41:40] 10 jobs running [16:41:49] *goes to his channel to see what happens* [16:43:28] addshore: works/ [16:43:29] ? [16:43:45] yup [16:44:11] \o/ [16:45:02] this is perfect xD [16:45:04] mwhahahaa [16:45:31] addshore: Redis is nice, no? :) [16:45:59] yus :) [16:46:12] addshore: you should write about this :) [16:47:27] hey ^demon: i think we got provisional approval from hashar for our mediawiki-config change to get CirrusSearch into beta. I also set up the elasticsearch machines so they were online. [16:47:48] <^demon> I saw. I'm totally cool with going live with it today on beta. [16:53:33] YuviPanda: UW on beta commons seems to be working again fwiw [16:53:57] chrismcmahon: hmm, okay. I wanted to run local tests before merging something, but since it seems to work when I manually do it, will just merge and see if things fail on betalabs [16:54:26] YuviPanda: not sure what changed, but login just now started to dtrt [16:54:35] dtrt? [16:55:48] <^demon> manybubbles: So, I was reviewing your prefix search change. Was there a particular reason you went with 100 characters? Page titles can be anywhere up to 255 bytes. [16:56:07] <^demon> Granted most people aren't going to prefix search for something that long, just wondering tho. [16:56:34] do/es the right thing YuviPanda [16:56:41] ah, okay :) [17:07:43] ^demon: It was reasonably random. 100 is really a ton of ngrams to make any way but I can switch it to 255 and we'll have all our bases covered. [17:08:50] <^demon> Reasonable excuse :) I left comments on the change anyway. [17:33:50] <^demon> manybubbles: Ok, everything merged other than that submodule change. How we gonna do this? :) [17:34:20] ^demon: well I think we need to get our mediawiki-config change merged [17:34:32] and after that I think we're ready to run our maintenance scripts [17:34:39] <^demon> Ah yes, I'll have a look at that one [17:34:49] then manually verify, then I can run my automated tests [17:35:16] ^demon: the mediawiki-config one we need to be super careful with, I think. [17:35:41] <^demon> Very careful as I can break production with it too. Lemme make sure I'm not stepping on anyone's toes before we do. [17:37:03] ^demon: I sent an email to QA this morning letting them know we planned on doing this. [17:37:14] I _think_ mobile might like to know as well [17:38:14] manybubbles: we're only running one browser test for search right now, mostly because it tests for a condition that some people think is a bug, I just wanted to be sure to know if that condition changed [17:39:10] chrismcmahon: can you point me to the test? I have the browsertests repository sitting around if it is in there I can check up on it. [17:39:56] manybubbles: https://github.com/wikimedia/qa-browsertests/blob/master/features/search.feature [17:40:04] <^demon> manybubbles: So, there's a MW general deployment in ~20m. 
Considering our conf changes could also break production if we're not careful, we might want to hold off a tiny bit. [17:40:10] <^demon> (As in, til that deploy is over) [17:40:16] hold! [17:40:25] I can wait [17:41:09] chrismcmahon: which of those is a bug right now? [17:41:19] because we still support all of those. [17:43:39] &ping [17:43:39] Pinging all local filesystems, hold on [17:43:40] Written and deleted 4 bytes on /tmp in 00:00:00.0006530 [17:43:41] Written and deleted 4 bytes on /data/project in 00:00:00.0089070 [17:45:03] manybubbles: bug is https://bugzilla.wikimedia.org/show_bug.cgi?id=44238 [17:45:57] manybubbles: it's kind of subtle :) [17:51:44] chrismcmahon: got it - we've got a pretty standard solution to problems like that: index the source both with and without the accent. [17:53:13] chrismcmahon: it can get a bit more complicated than that, but that is the high level explanation for what to do. [17:54:43] well, now that I think about it we don't actually index both with and without - we squash both the search query and the input document to without - but we do it carefully. [18:11:17] (03PS1) 10Andrew Bogott: Disable root password on labs [labs/private] - 10https://gerrit.wikimedia.org/r/76548 [18:16:51] [bz] (8NEW - created by: 2Tim Landscheidt, priority: 4Unprioritized - 6normal) [Bug 52236] rotate_logs.sh throws exceptions - https://bugzilla.wikimedia.org/show_bug.cgi?id=52236 [18:23:06] Ryan_Lane and YuviPanda, could i get a hand logging into zero-test.wmflabs.org (zero-test.pmtpa.wmflabs.org) ? seems like i can't route to either of them from my wired office ethernet connection or bastion.wmflabs.org. have a Wikipedia Zero health script setup there that i'm tuning, but can't seem to access it. [18:25:07] dr0ptp4kt: do you have ssh setup appropriately? [18:25:24] dr0ptp4kt: I've https://dpaste.de/XB3UP/ [18:25:27] in my .ssh/config [18:25:40] so I'd just ssh zero-test.pmtpa.wmflabs [18:25:43] and it'll go through [18:26:05] dr0ptp4kt: it works for me.... [18:26:21] on bastion: ssh-add -l [18:26:25] is there a reason that this does not work on tools-login: http://tools.wmflabs.org/veblenbot/CategoryListTS [18:26:32] (03CR) 10Andrew Bogott: [C: 032 V: 032] "Will this break everything? Only one way to find out!" [labs/private] - 10https://gerrit.wikimedia.org/r/76548 (owner: 10Andrew Bogott) [18:26:32] it seems to hang [18:26:44] YuviPanda, Ryan_Lane: strange, was working on friday. will update my config file and retry [18:27:06] well, now that I think about it we don't actually index both with and without - we squash both the search query and the input document to without - but we do it carefully. [18:27:07] cbm: works for me [18:27:15] YuviPanda, Ryan_Lane, magically it just works now. didn't have to update my config file. [18:27:17] ignore that last one - shell problems [18:27:17] gremlins. [18:27:20] hehe [18:27:22] heh [18:27:29] hmm. What I mean to say is that wget hangs when I run 'wget http://tools.wmflabs.org/veblenbot/CategoryListTS' on tools-login [18:27:29] let's blame glusterfs [18:28:01] or even if I just try to wget tools.wmflabs.org [18:28:04] YuviPanda, that's always fun. Ryan_Lane, YuviPanda, thanks for doing or not doing whatever it is you did or did not do :) i'll copy that ssh config, btw [18:28:15] YuviPanda: not gluster ;) [18:28:18] or I'd need to fix it [18:28:24] Ryan_Lane: sure, but doesn't mean we can't blame it! :P [18:28:30] :D [18:30:28] YuviPanda: is there any sort of command line redis client? 
;p [18:30:30] <^demon> I can't ssh to bastion :( [18:30:37] addshore: there is, but it isn't installed :( [18:30:44] why not? :( [18:30:47] addshore: redis-cli exists, but we don't have it installed [18:30:55] why not? :( [18:30:58] addshore: since it looks like there is no way to install that without installing the redis server too [18:31:07] why not? :( [18:31:12] <^demon> I can ssh to other instances with bastion as the proxy, but not to bastion directly :\ [18:31:24] YuviPanda: install the server and just dont use it? :> [18:31:40] addshore: yeah, probably. but requires puppet trickery [18:31:48] ahh true [18:31:49] addshore: I just do 'import python; import redis' :P [18:31:50] and stuff [18:31:54] you can probably do the same for php [18:32:11] but that involves me writing something instead of just doing something ;p [18:33:08] addshore: :P [18:37:08] Ryan_Lane: so wget does work for you on tools-login to fetch pages from tools.wmflabs.org ? [18:37:15] no [18:37:32] ok, it's not just me then [18:37:37] you can't access Labs public IPs inside of labs [18:37:50] it's a known problem [18:38:41] you should use the private IP [18:40:12] at that point I may as well just use the file name directly :) The main point of using the URI was to abstract away the local details. But as long as it's a known problem, that's fine [18:40:21] cbm: it's tools-webproxy [18:40:30] tools-webproxy.pmtpa.wmflabs [18:42:52] YuviPanda: with redis this is now so perfect :> [18:42:59] addshore: inorite! [18:43:05] it just feels so right and unhacky :D [18:43:11] YuviPanda: check out the feed in my chan again ;p [18:43:13] addshore: and you can speed things up or slow them down by just starting more bots :D [18:43:45] addshore: it's a distributed bot now! :D [18:43:56] with so much potential ;p [18:44:00] can run however many times you want, trivially [18:44:15] also massivly reduces mysql overheads and waits [18:44:47] realistically I might make another process just to write back to mysql [18:45:15] heh, tbh the master could do that O_o while it waits to add more to the redis list [18:45:34] and then the master could control how fast the whole process goes depending on the speed of the mysql db [18:45:35] addshore: true [18:45:47] addshore: indeed [18:45:47] addshore: and you don't even need to do threads [18:46:01] no concurrency, no locking, no race conditons that we have to deal with :P [18:46:08] and if we could submit jobs from the grid I could simply run the master and it could work all of this out itself ;p [18:46:12] * addshore points Coren to the above line ;p [18:46:32] Yeah, there's been sufficient demand for it. [18:46:59] addshore: so master will start jobs depending on speed of mysql writebacks/ [18:47:41] run master, it starts a single slave, if it finds it spends over 50% of the time waiting rather than reading or writing to the db it starts another slave [18:47:50] addshore: right [18:48:00] ofc, have an upper job limit ;p [18:48:04] addshore: you need to be careful with heuristics, since it is easy to start a 'runaway' process thing [18:48:33] * addshore goes to jot something down [18:49:26] addshore: :) [18:50:23] hmm, actually that wouldnt work quite right as currently the master gets the 500 rows that were changed the longest time ago, meaning it is the slaves responsibility to make sure they are updated and not added back to redis next time the master queries the db [18:50:59] addshore: you can just let slaves wait doing nothing, y'know. 
it doesnt' cost much IO or CPU [18:51:00] just memory [18:51:09] indeed [18:51:25] grrrit-wm, and SuchABot both just 'wait', doing nothing for days on end :) [18:51:28] I think that's fine [18:51:43] since we're just going to be waiting on IO, and not blocking the CPU [18:52:00] might even make sense for my framework to shove some of its cached stuff thats the same between slaves in redis [18:52:20] +1 [18:52:24] don't forget the secure prefix :P [18:52:29] yup ;p [18:52:48] addshore: redis also has support for hashtables inside redis, so you can even put everything in one key :P [18:53:04] addshore: but remember redis isn't a database :P if it runs out of memory it'll start evicting older keys. [18:53:10] now we just need to make sure it doesn't run out of memory :P [18:53:33] meh, *works out how much memory he uses [18:53:46] Coren: can I borrow some of your time at some point to setup a proper tools-redis? I think now that I've managed to convert more people, we'll cross 1G in a month or so [18:53:50] addshore: doubt you'll use that much :P [18:53:52] 500 rows, encoded each is probably on average 255 chars [18:53:55] :> [18:53:58] that's not much :P [18:54:15] 124kb? ;p [18:54:21] YuviPanda: We could do that in HKG [18:54:28] Coren: super [18:54:29] addshore: :P [18:55:10] i would only use about 250mb if I shoved my whole db in there :O [18:55:14] addshore: :P [18:55:20] shame I want to make sure I dont loose it ;p [18:56:24] :P [18:56:36] addshore: it's okay, at some point I'll find a use for mongodb and get it installed... :P [18:58:52] [bz] (8NEW - created by: 2Chris McMahon, priority: 4Unprioritized - 6normal) [Bug 52237] URL confusion: commons.wikimedia vs commons.wikipedia - https://bugzilla.wikimedia.org/show_bug.cgi?id=52237 [19:21:13] ^demon: left me know when you are ready to do more beta work - I think we're down to merging the config and running the maintenance scripts [19:21:36] <^demon> I'm having lunch. Reedy told me the deploy's about done, so when I'm done eating we'll go forward. [19:22:09] yum [19:23:57] lunch [19:24:01] seriously [19:24:12] * hashar sends ^demon back to east coast [19:24:28] <^demon> But it's hot on the east coast! [19:25:21] except in Canada :D [19:25:22] wait till you come to Hong Kong [19:28:16] <^demon> YuviPanda: Won't be in HK. [19:28:23] ... really? damn [19:28:25] <^demon> Been to China before in July, so yes, I know it's hot there. [19:28:36] <^demon> (July many years ago, that is) [19:28:38] was looking forward to making fun of Gerrit in person [19:32:56] Hey folks. I'm writing a script designed to hit Wikipedia's API. I'd like to test it on another (similar) MediaWiki first. Is there something in labs that would be suitable? [19:33:16] halfak: http://en.wikipedia.beta.wmflabs.org/ perhaps? [19:33:32] Perfect. Thanks so much. :) [19:33:50] :) [19:33:51] petan: sorry, bots-salebot is still inaccessible [19:34:17] (03CR) 10Andrew Bogott: [C: 032 V: 032] Move everything into modules. [labs/private] - 10https://gerrit.wikimedia.org/r/76127 (owner: 10Andrew Bogott) [19:34:20] Ryan_Lane can you have a look [19:34:25] at? [19:34:28] there is instance that doesnt respond nor works [19:34:29] (03PS2) 10Andrew Bogott: Move everything into modules. [labs/private] - 10https://gerrit.wikimedia.org/r/76127 [19:34:31] I cant reboot it [19:34:36] bots-salebot [19:34:42] hold on [19:34:49] oik [19:35:01] thanks =) [19:35:13] (03CR) 10Andrew Bogott: [C: 032 V: 032] Move everything into modules. 
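On halfak's question above: the beta cluster speaks the same MediaWiki API as production, so a script usually only needs its base URL swapped to be tested there. A minimal sketch, assuming the standard /w/api.php path and an arbitrary siteinfo query:

    import requests

    # same API shape as production; only the base URL differs
    API_URL = 'http://en.wikipedia.beta.wmflabs.org/w/api.php'

    params = {
        'action': 'query',
        'meta': 'siteinfo',
        'format': 'json',
    }
    resp = requests.get(API_URL, params=params, timeout=30)
    print(resp.json()['query']['general']['sitename'])

Pointing the same script at en.wikipedia.org/w/api.php afterwards is the only change needed for the real run.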
[labs/private] - 10https://gerrit.wikimedia.org/r/76127 (owner: 10Andrew Bogott) [19:46:35] Coren: could you take a moment with me and update https://www.mediawiki.org/wiki/Wikimedia_Labs#Roadmap ? Right now it still says "Enable database replication - Asher hopes to get this done by January or February 2013" [19:47:03] Oy! That roadmap is... a bit out of date. :-P [19:47:20] yeah [19:47:42] You probably know stuff and can update it with me to save Ryan a bit of work, just getting rid of the really clearly wrong old stuff [19:49:07] Change on 12mediawiki a page Wikimedia Labs was modified, changed by MPelletier (WMF) link https://www.mediawiki.org/w/index.php?diff=750507 edit summary: [WMF] /* Open tasks */ Some updates, much of this is done already [19:51:05] Change on 12mediawiki a page Wikimedia Labs/status was modified, changed by MPelletier (WMF) link https://www.mediawiki.org/w/index.php?diff=750508 edit summary:  [20:07:02] YuviPanda: https://github.com/addshore/addwiki/issues/84 ;p [20:55:13] gribeco: sale is on strike? ;) [20:55:26] Alchimista: the instance is stuck [20:55:47] Ryan_Lane: you may have forgotten about it ;) [20:58:42] (I'm off to a meeting) [21:15:20] Hi! I'm having trouble ssh'ing into bastion. [21:17:49] After uploading my public key to the wiki, I get "Connection closed by 208.80.153.207" when I try to connect. [21:20:15] ragesoss: Coren or petan or Damianz might be able to help you [21:20:20] * sumanah pings 'em [21:20:25] hi [21:20:29] ih [21:20:37] :) [21:20:50] Try ssh with -vv and pastebin the debug output [21:20:52] ragesoss try bastion3, bastion2 is known to be broken [21:21:15] and bastion1 who knows :P [21:21:30] hey there, does someone have admin on "http://en.wikipedia.beta.wmflabs.org/" and would you be willing to autoconfirm an account for me so I can run some tests? [21:21:50] ok [21:21:56] equal name? [21:22:13] equal never put links in quotes or brackets [21:22:23] it makes them hard to click [21:22:32] chrismcmahon: can you help equal? [21:22:37] I can [21:22:39] My client doesn't include the quotes in the link [21:22:49] * Damianz goes back to kicking a server [21:22:50] ah sorry. [21:22:54] Damianz: http://pastebin.com/L67wmDd8 [21:23:13] well, then you would find it hard to open links that have quotes in them, which no link actually should have, but some websites just suck [21:23:28] same result with bastion3.wmflabs.org instead of bastion.wmflabs.org [21:23:45] Hmm [21:23:52] You've never logged in before, right [21:23:58] petan equal I can do that if it needs doing [21:23:59] right [21:24:16] hm... I can do it too, but if u want :P [21:25:03] ok you do it chrismcmahon my password doesnt work :P [21:25:25] petan: afcbot question... [21:25:27] chrismcmahon the account name is just qwerty [21:25:33] Technical_13? [21:25:48] I hope you dont want to see a source code [21:25:59] ragesoss: Could I check - did you upload a key on the wiki? [21:26:00] because last time it took me 2 months to find it [21:26:13] There doesn't seem to be a keydir for you, which normally means you didn't or the script is borked again [21:26:20] Damianz if ur on bastion, check if FS is writable [21:26:26] Can you have afcbot null edit all submissions with the /declined template that haven't been edited in 6 months? [21:26:26] petan: Yeah it is [21:26:31] ok [21:26:35] He doesn't exist in /mnt/public/keys/ [21:26:40] Thanks [21:26:41] Not the normal tmp iissue [21:26:44] Damianz: I did just a bit ago... I had another key already, but it wasn't on this computer. 
[21:26:58] Technical_13 not without consensus, that is how wikipedia works [21:27:06] before I uploaded the new key, I was getting pubkey denied instead of connection closed. [21:27:16] you need to propose something, people need to discuss that and decide if yes or no [21:27:24] Ah I see - you are there, I typo'd your username heh [21:27:35] once there is a consensus to do this I will create a bot for that [21:27:50] Kk ill make a brfa [21:27:56] hold on [21:28:03] in fact there is already brfa for this :P [21:28:08] I think [21:28:14] If joe Decker doesn't get back to me. [21:28:31] Technical_13 http://en.wikipedia.org/wiki/User:ArticlesForCreationBot there is 1 expired brfa [21:28:32] that would be it [21:28:47] I asked him to add it to his null bot a couple days ago. [21:28:51] I dont remember why it was expired, IMHO the task wasnt needed anymore, the person who proposed it decided not to do that [21:29:21] Technical_13 I dont want to reinvent wheel, if they have a bot for that and they want to do that, let them do that [21:29:47] dont give same task to multiple programmers just to wait who will finish it faster, that is waste of their time XD [21:29:48] It's needed to populate Category:G13_eligible_AfC_submissions [21:30:53] ragesoss: It's possible nfs is playing up, so it can't create your homedir... but I can't view the logs so it's a bit tricky. The main things look right though. [21:30:56] chrismcmahon: Just let me know if you need other information. I'm not sure what's needed to autoconfirm someone on the beta server, if anything. Just don't want to wait four days to test my script :-P [21:30:58] equal user qwerty is now confirmed on beta enwiki [21:31:03] ok. 1) decide who you want to give this task to 2) talk to them 3) talk to people on wiki 4) if people on wiki want this task, ask the prefered bot operator to implement it 5) go to a pub [21:31:05] * Damianz tickles Coren to check the security log on bastion [21:31:08] chrismcmahon: awesome :) thanks! [21:31:24] equal: 'confirmed' should have same rights as 'autoconfirmed' I seem to recall we did that some time ago [21:31:38] I'm doing it backwards... starting at the pub.. [21:31:50] ok [21:31:53] that works too [21:31:59] ;) [21:32:08] Damianz: for rageoss? [21:32:13] Coren: Please [21:32:42] nfs? [21:32:43] &ping [21:32:44] Pinging all local filesystems, hold on [21:32:45] Written and deleted 4 bytes on /tmp in 00:00:00.0008080 [21:32:46] Written and deleted 4 bytes on /data/project in 00:00:00.0104720 [21:32:51] seems to work :0 [21:32:59] &ping [21:32:59] Pinging all local filesystems, hold on [21:33:00] Written and deleted 4 bytes on /tmp in 00:00:00.0003290 [21:33:01] Written and deleted 4 bytes on /data/project in 00:00:00.3581720 [21:33:18] petan: o.0 bota? [21:33:26] yup [21:33:27] ragesoss, Damianz: Simple. He doesn't have shell access. :-) [21:33:34] bota is wm-bot instance running on tools project [21:33:45] Coren: Hmm... I just checked that, he's in the group [21:34:11] uid=2467(ragesoss) gid=500(wikidev) groups=1003(wmf),50156(project-globaleducation),500(wikidev) [21:34:11] ... no project-bastion. [21:34:12] petan: I would suggest you to register wm-bota and grrrit-wm [21:34:19] Oh! His account might be legacy. [21:34:28] Unless we have someone called Ragesoss on the wiki and someone else as ragesoss on shell rofl [21:34:31] That would be confusing [21:34:32] AzaToth: grrrit-wm is not my bot :o and bota is just testing [21:34:41] Hmm yeah - if ex svn etc, might need doing by hand I guess. 
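For context on the null-edit request above: categories driven by time-based wikitext, such as Category:G13_eligible_AfC_submissions, only repopulate when a page is re-parsed, which a null edit (or an API purge with forcelinkupdate) forces. A minimal sketch of the purge variant, with a made-up title and assuming the bot already holds an authenticated session; this is not afcbot's actual code:

    import requests

    API_URL = 'https://en.wikipedia.org/w/api.php'
    session = requests.Session()  # assumes the bot has already logged in on this session

    def refresh(titles):
        # action=purge with forcelinkupdate re-parses the pages so that
        # time-based category membership is recalculated; it serves the
        # same purpose as doing a null edit by hand
        resp = session.post(API_URL, data={
            'action': 'purge',
            'forcelinkupdate': 1,
            'titles': '|'.join(titles),
            'format': 'json',
        })
        return resp.json()

    # hypothetical title; a real bot would first page through the stale /declined submissions
    refresh(['Wikipedia talk:Articles for creation/Example draft'])

As petan says, whether any bot should actually do this is a BRFA question, not a technical one.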
[21:34:42] true ツ [21:34:47] * Damianz forgot those people exist [21:34:57] ragesoss: What is your username on wikitech? [21:34:58] wm-bot is registered [21:35:09] Coren: ragesoss [21:35:26] impossibl [21:35:31] it must be Ragesoss [21:35:37] -.- [21:35:37] it is case sensitive :p [21:35:37] petan: is wm-bot2 grouped in that registration? [21:35:38] Hm. I don't have my 2auth token with me. [21:35:45] petan: yes, Ragesoss. [21:36:00] Technical_13: not really, it just authed as wm-bot [21:36:16] but yes i should probably group it at some point but I dont know its password [21:36:24] I ask because I tried giving it flags in a channel and couldn't because it isn't registered. [21:36:29] wm-bot: you darn piece of brackets, gimme ur password [21:36:29] Hi petan, there is some error, I am a stupid bot and I am not intelligent enough to hold a conversation with you :-) [21:36:43] petan: objdump it's brain [21:36:45] wm-bot: why not? [21:37:11] Damianz: I can't add him to the project atm, I don't have my token with me. [21:37:20] You don't need its password actually. [21:37:26] Coren: Ok - let me grab my phone and add him :) [21:37:30] in fact I downloaded some AI library that would make wm-bot respond but I think people would misuse it [21:37:33] AzaToth ^ [21:37:41] it would be fun though :D [21:37:58] heh [21:38:04] * sumanah would kind of welcome it if someone other than me would play Eliza ;-) [21:38:15] what is that [21:38:28] https://en.wikipedia.org/wiki/ELIZA [21:38:29] Technical_13: wom is "owning" wm-bot? [21:39:03] I would say [21:39:04] ? AzaToth [21:39:04] whom* [21:39:05] community [21:39:22] petan: I would assume someone is the registrant [21:39:25] You must be a member of the projectadmin role in project bastion to perform this action. < seriously [21:39:36] Since when can a member not add a member to a project -.- [21:39:44] been a while now [21:39:55] which user? [21:40:06] imma askin on #wikimedia-ops but no reply yet [21:40:18] AzaToth that would be me [21:40:30] k [21:40:32] the password is stored in plain text actually I am just lazy to ssh :D [21:40:37] Ryan_Lane: Ragesoss [21:40:38] hehe [21:40:43] I will group it tommorow I guess [21:40:51] YuviPanda: ping [21:40:55] pong? [21:41:10] It's been too long since I've tried to add anyone to anywhere, clearly [21:41:14] YuviPanda: grrrit-wm needs to get regged [21:41:24] yeah [21:41:27] want to do it? :P you've access to the config files and stuff [21:41:36] can try [21:42:24] YuviPanda: Miguel merged my commit :D [21:42:28] YuviPanda you know him? [21:42:39] Damianz: added [21:42:47] he invented mono and founded xamarin... :0 [21:42:48] Ryan_Lane: Thanks [21:42:53] ragesoss: Try now! [21:43:21] dammit that is one small commit I can be proud of XD [21:43:41] I feel like if torvalds merged some kernel patch of me :D [21:44:06] petan: :) nice! he's someone I have a lot of respect for [21:44:40] woohoo! thanks Damianz, Ryan_Lane, Coren, petan, sumanah! [21:44:45] yw [21:44:48] :) glad to help [21:45:35] YuviPanda: I will have to shut the bot down for a while during this [21:45:43] AzaToth: oh, then let's not do it now [21:45:47] wait for early tomorrow? [21:45:55] early? [21:45:57] where? 
[21:46:04] * addshore goes to find the requirements for getting a framework on gerrit and bugzilla [21:46:09] it's always early [21:46:11] it's always late [21:46:14] it's always mid day [21:47:07] YuviPanda: it is weird, he merged my small patch and now I feel like I want to make a bigger one :D I guess I will comment the code or something or do some general cleanup, his redis library in fact is best, just people dont see it [21:47:14] :) [21:47:27] it is only one that really works :D [21:47:43] I dont understand why redis.io recommends other libs [21:47:59] it makes no sense, the 2 most favorite libraries dont even compile without warnings [21:48:57] AzaToth you remind me I should go sleep some time too :D [21:49:18] heh [21:49:24] I dont think oversleeping 2 days in a line is best idea to improve my career [21:50:21] funny is that in fact most of my time in office I spend working for wikimedia projects anyway XD fortunately my boss doesnt know [21:57:51] Coren: I'm not sure who runs this: http://tools.wmflabs.org/geohack/geohack.php?pagename=Handball-Bundesliga&language=de&params=49.463611111111_N_8.5175_E_region:DE-BW_type:landmark [21:57:58] but it's hotlinking the labs logo [21:58:00] from virt0 [21:58:06] and it's spamming the shit out of the logs [22:00:05] (03PS1) 10AzaToth: adding password [labs/tools/grrrit] - 10https://gerrit.wikimedia.org/r/76634 [22:03:44] YuviPanda: the password is unsaved atm in config.yaml [22:04:17] I don't know where to put things like that [22:04:35] AzaToth: config.yaml sounds okay. just put a dummy in config.yaml.sample [22:05:00] yea, but I assume the password should be placed safe somewhere [22:05:16] AzaToth: yeah, make it o-rw? [22:05:18] good enough [22:05:18] no? [22:05:35] ... [22:05:50] I meant safe from losing it [22:06:31] petan Ryan_Lane: sorry again, bots-salebot was restarted 15 min ago, but I still can't ssh into it from bastion1 [22:06:47] I can't figure out what's wrong with it [22:06:57] I can't even ssh in directly as root [22:07:35] ouch [22:08:00] AzaToth: ah, that :P [22:08:01] (03PS2) 10AzaToth: adding password [labs/tools/grrrit] - 10https://gerrit.wikimedia.org/r/76634 [22:08:40] AzaToth: does that really work? I thought password was if the IRC server needed password, not for nickserv [22:08:55] gribeco: maybe your bot is killing it? [22:08:56] seems grrrit-wm is logged in [22:08:57] how would I disable the bot? [22:09:07] how do we check, AzaToth? [22:09:14] Ryan_Lane: I restart it by hand, I don't have any init scripts of my own [22:09:26] the instance should be clean until I log into it [22:09:31] I see [22:11:26] YuviPanda: /ns info grrrit-wm or /whois grrrit-wm [22:15:25] [bz] (8NEW - created by: 2Amir E. Aharoni, priority: 4Unprioritized - 6normal) [Bug 52249] localization messages loading issues in http://en.wikipedia.beta.wmflabs.org - https://bugzilla.wikimedia.org/show_bug.cgi?id=52249 [22:25:03] Ryan_Lane: Perhaps it's sitting at the grub screen? [22:25:10] Coren: nope [22:25:12] it comes up [22:25:20] and boots all the way to the login prompt [22:25:56] Ryan_Lane: (All the same, recordfail shouldn't be on for VM images in labs) [22:26:08] recordfail? [22:26:13] Ryan_Lane: Means you can log in through console, then? [22:26:29] as what? root? :) [22:26:32] root has no password [22:26:33] oh [22:26:35] you know what... [22:26:42] puppet is set to run on startup isn't it? [22:26:50] Ryan_Lane: grub option, it records boot failure and marks it so that grub won't autostart next time.
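On the PASS question above: the classic IRC handshake sends PASS before NICK and USER as a server password, and freenode's services have historically accepted that value as the NickServ password for the connecting nick, which would explain grrrit-wm showing up as logged in. A minimal sketch of that handshake, with the server, nick and config layout assumed for illustration:

    import socket

    import yaml  # the secret lives in config.yaml, mirroring the grrrit setup

    config = yaml.safe_load(open('config.yaml'))  # e.g. {'nick': 'grrrit-wm', 'password': '...'}

    sock = socket.create_connection(('chat.freenode.net', 6667))

    def send(line):
        sock.sendall((line + '\r\n').encode('utf-8'))

    # PASS has to go out before NICK/USER; if services accept the server
    # password for the account, no separate "/msg NickServ identify" is needed
    send('PASS ' + config['password'])
    send('NICK ' + config['nick'])
    send('USER {0} 0 * :{0}'.format(config['nick']))

Keeping the real value only in a non-world-readable config.yaml, with a dummy committed in config.yaml.sample, is the arrangement AzaToth and YuviPanda settle on above.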
[22:26:52] I wonder if that blocks other services from running [22:27:08] Coren: ah [22:27:11] yes, that should be off [22:27:25] I need to puppetize the image creation stuff [22:28:41] so, I can access it via salt ;) [22:29:31] AzaToth: interesting. password isn't documented, but looking at the source it sends it via PASS [22:29:37] I guess Nickserv accepts that too? [22:29:43] nothing weird in dmesg [22:29:47] df hangs [22:29:56] (03PS1) 10Andrew Bogott: Move init.pp into manifests/init.pp where it belongs. [labs/private] - 10https://gerrit.wikimedia.org/r/76641 [22:30:01] I wonder if there's some weird issue with gluster [22:30:10] hehe, blame gluster! [22:30:13] (03CR) 10Andrew Bogott: [C: 032 V: 032] Move init.pp into manifests/init.pp where it belongs. [labs/private] - 10https://gerrit.wikimedia.org/r/76641 (owner: 10Andrew Bogott) [22:38:36] yep [22:38:38] gluster [23:06:09] stupid glusterd processes wasn't accepting mount operations properly [23:06:18] can't wait to get rid of this filesystem [23:06:22] Coren: how's NFS? :) [23:44:09] !log deployment-prep fixed up timeline on beta, it never worked there. Thanks ^demon ! [23:44:12] Logged the message, Master