[00:00:30] That form asks you for your preferred shell name. Are you entering one? [00:00:45] yes, same as the username. [00:05:31] Guest40473, well, on the plus side it looks like we don't already have a cipher. [00:05:44] Please try yet again, using your first choice for both account name and shell name? [00:05:55] ok. [00:07:15] just tried 2 times, and failed. [00:08:16] try again with all-lowercase shell name [00:08:24] ok [00:08:32] And don't do it twice, please, that confuses me :) [00:09:19] oh, that worked! all lower cases for shell name. [00:10:06] yep. I'll see if I can make the caption on that form a bit clearer [00:10:18] Thanks a lot Andrew :) [00:10:29] that would probably be a good idea. [07:36:43] !log stats setup a virtualenv and cloned the github repo to ~/python-cube-api. also installed libyaml-dev [07:36:44] Logged the message, Master [08:18:05] legoktm: what is the labs project? [09:16:16] YuviPanda some idea why ?status doesn't work on beta [09:16:52] hmm, not sure [09:16:57] dunno how it runs in tools itself [09:17:11] it runs same [09:17:15] both projects are identical [09:17:27] difference is that it works on one :P [09:17:30] heh :) [09:17:59] petan: the Redis role is +2'd in puppet :) [09:18:25] the dependent commit needs +2ing from someone, then we can add it to tools-mc, get rid of our homemade instance, and document it [09:18:25] cool, where is it on -beta? [09:18:32] no [09:18:42] petan: it's on tools-mc [09:18:43] we first need to set it up on beta [09:18:43] err [09:18:45] toolsbeta-mc [09:18:47] petan: it is setup on beta [09:18:49] right now [09:18:52] let's do things properly [09:18:53] ok [09:18:58] petan: so I tested it on toolsbeta-mc [09:19:01] with andrewbogott_afk's help [09:19:01] is it using that puppet class? [09:19:03] before it got merged [09:19:04] petan: yup [09:19:08] good [09:19:31] :D [09:19:48] petan: next would be to puppetize our 'restricted' verison of memcached... somehow [09:20:30] petan: is everything on tools puppetized? [09:20:36] like, installation of packages for people, etc? [09:21:27] not everything [09:21:31] packages should be [09:21:50] webservers for example are not [09:21:55] hmm [09:21:59] https://gerrit.wikimedia.org/r/#/c/70212/ is the patch, btw [09:22:03] ok [09:22:13] https://gerrit.wikimedia.org/r/#/c/70212/ isn't +2'd yet, since that'll affect production too [09:22:21] but will be [09:22:21] how many people use reddis now? do I need to announce a downtime? [09:22:31] petan: so far it's just me and addsleep [09:22:43] petan: we can't add it yet, the second patch needs +2ing [09:22:47] ok, he should wake up soon :D [09:22:49] we can do that hopefully tomorrow [09:22:50] k [09:23:00] petan: yeah, and even for him I patched his code to add support for redis ;) [09:23:06] petan: not many people know we had redis [09:23:06] so [09:23:25] we should keep that patched memcache somewhere [09:23:37] right now it's only on tools-mc:/mnt/share [09:23:40] petan: do we have icinga monitoring for stuff on toollabs? [09:23:45] no [09:23:55] ah, ok [09:23:55] depends [09:23:58] what you mean by "stuff" [09:24:01] servers are in icinga [09:24:05] tools not of coures [09:24:08] * course [09:24:10] no no, I mean [09:24:12] the grid engine [09:24:16] nope [09:24:16] or the apaches [09:24:22] there is bug for it [09:24:24] are the apaches on icinga? [09:24:24] ah [09:24:25] ok [09:24:27] apaches could be [09:24:28] dunno [09:25:01] okay, redis isn't [09:25:54] petan: don't think we'll ever use memcachedsasl. Agreed? 
[09:38:04] idk [09:38:14] I think memcache is quite a cool thing, it just needs some improvements [09:38:18] oh sure [09:38:26] i'm talking about memcachedsasl in particular [09:38:28] and right now lot of people use it and need it [09:38:30] sasl doesn't give us any advantages [09:38:33] sasl version not [09:38:40] yeah, so i was tlaking specifically baout that [09:38:41] it's not even supported much [09:38:44] we definitely need memcaceed [09:38:53] k [12:28:04] phpunit seems to be broken [12:28:32] on tools-login [12:32:55] Coren|DayOff, petan: is possible to have php-mcrypt installed? [12:33:07] hi [12:33:10] sure [12:33:19] you need it on webserver or execution nodes? [12:33:30] petan: On webserver :) [12:33:33] ok [12:33:52] petan: thanks a lot :) [12:33:53] at some point if cli uses it as well, we should probably put it to exec nodes as wel [12:34:13] petan: it may be a good point [12:34:52] !log tools petrb: installing php5-mcrypt on exec and web [12:34:55] Logged the message, Master [12:35:23] ew [12:35:34] php5-mcrypt : Depends: phpapi-20100525 [12:35:35] Depends: php5-common (= 5.4.15-1) but 5.3.10-1ubuntu3.6+wmf1 is to be installed [12:35:57] :\ [12:36:19] petan: weird. Ubuntu (only stock repo) has mcrypt compiled with the right php version [12:36:45] indeed that works [12:37:13] are you using a custom php version? [12:37:25] I don't know what Coren installed :) [12:37:29] :D [12:41:00] installed [12:41:41] paravoid, mutante: how do I tell puppet to install some version of package? [12:41:46] like apt-get package=version [12:41:51] * install [12:41:59] is it = in puppet too? [12:42:26] can I take a look at BracketBot's code? [12:42:42] petan: use the package class? [12:42:48] and setup dependencies appropriately? [12:42:55] YuviPanda what you mean [12:42:56] does A930913 have a public repo? [12:44:03] YuviPanda I have: package { [ [12:44:08] 'a', [12:44:10] 'b', [12:44:12] etc [12:44:16] yeah, that's how you do it [12:44:18] how I specifiy version there [12:44:24] 'a=version', [12:44:25] ? [12:45:15] hmm, reading docs at http://docs.puppetlabs.com/references/latest/type.html#package [12:46:09] I think you've to set it as the nesure property [12:46:10] *ensure [12:46:21] though it is not reccomended [12:54:03] no I didn't, Coren did :P [12:54:08] I don't even know what ensure is for [13:00:32] thanks petan :) [13:02:44] New review: coren; "It seems clear to me that the model is logically chown and 'chown foo *' changes the owner of direct..." [labs/toollabs] (master) - https://gerrit.wikimedia.org/r/70059 [13:02:55] 'nick coren [13:14:48] Coren: It turns out blocking any parts of 10.0.0.0/8 is not such a good idea, because it also blocks if such IPs show up anywhere in the X-Forwarded-For headers. So some random ISP's proxy includes XFF for their internal use of 10.x.x.x, and boom. [13:15:16] * YuviPanda nudges Coren with https://gerrit.wikimedia.org/r/#/c/70064/ [13:16:19] anomie: Ohcrap. We've been bugging the devs for XFF blocking in forever, and now that we got it it circles back and bites us. :-) [13:18:13] YuviPanda: Will merge as soon as Jenkins gets around to giving it its +2 [13:18:23] that doesn't have a +2? [13:18:24] * YuviPanda looks [13:18:43] Coren: err, why did you give it a -2? [13:19:01] Coren: there is a +2 from Jenkins [13:19:13] Because I'm a moron is why. :-) [13:19:27] "Case of the Mondays, eh?" :P [13:19:55] w00t [13:20:07] Yep. Plus a [bleep] sunburn. 
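For reference, the version pinning petan asks about above goes through the package resource's ensure property — a minimal sketch with a made-up package name and version string (and, as noted in the conversation, pinning exact versions in Puppet is generally discouraged):

    # present/latest keep the usual behaviour; a version string pins the package
    package { 'some-package':
        ensure => '1.2.3-4ubuntu1',   # illustrative version, not a real one
    }
    # packages that only need to be installed keep a symbolic ensure
    package { [ 'a', 'b' ]:
        ensure => present,
    }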
I should not have let my family take be outside; there's a danegrously close star out there bathing the place in radiation. [13:20:21] heh. 'Use protection' [13:20:33] does https://gerrit.wikimedia.org/r/#/c/70212/ get automatically merged now? [13:20:34] has a +2 [13:20:35] .. [13:20:36] I use very good protection. It's called "roof" [13:20:53] Coren: If we want to be able to make these blocks, I guess the thing to do is add another block option so individual blocks can be excluded from the XFF check. [13:21:23] Coren: <3 wonderful :) [13:21:34] anomie: Or restrict the blocks to the individual IPs we know bots are likely to run on. [13:21:37] thanks for the merge [13:21:51] YuviPanda: np [13:22:07] Coren: is all of tools labs puppetized? [13:23:07] Coren: also can merge https://gerrit.wikimedia.org/r/#/c/70103/? [13:23:17] we aren't going to use sasl for memcached ever, me and petan both agree on that [13:23:27] we'll need to puppetize the memcached instance, but it's not sasl [13:23:27] so [13:23:30] YuviPanda: Most of it except the webservers. [13:23:50] Coren: ah, okay. will they be puppetized at some point? [13:24:25] It's #4 on my todo [13:24:52] heh :) [13:27:05] anomie: Arguably anothing in RFC1918 should lead to XFF block. [13:39:23] hi Coren :) [13:39:35] Hello, fale [13:39:42] I can see an access.log file, but not an error.log. Where can I find it? [13:41:03] Coren: hi, a question. how can we create a wiki (or something so much better moving a wiki from tools) in beta cluster? [13:41:51] Amir1: I... don't know! [13:42:06] fale: Is your code PHP? [13:42:19] Coren: yep, and returning 500 [13:42:32] Coren: who know? [13:42:34] fale: it's probably permissions [13:42:39] petan: ^ [13:42:58] petan: have to be world-readable? [13:43:23] addshore, ping [13:43:23] fale needs to be a) owned by your tool account, b) must not be world writable c) must be readable by www-data [13:43:33] fale: ... You /should/ have a php_error.log with errors starting up PHP [13:43:45] Coren: that doesn't display permissions errors :/ [13:44:01] fale: If you don't, then it never got as far as trying to start it, so it means your script isn't exectuable and/or isn't owned by your tool. [13:44:04] Coren, S7 is still broen. [13:44:05] Coren: in /data/project/lists there is not [13:44:13] Cyberpower678: What's broken about it? [13:44:22] Amir1 why would you ever move a wiki from tools to betacluster o.O [13:44:28] that is far worse place to host a wiki [13:44:29] petan: thanks, going to try the permission fix :) [13:44:30] Coren, everything. [13:45:09] Coren, [13:45:10] cyberpower678@tools-login:~$ sql metawiki [13:45:10] ERROR 2003 (HY000): Can't connect to MySQL server on 'metawiki.labsdb' (110) [13:45:10] Make sure to ask for a db in format of _p [13:45:34] Hm. Was that alias forgotten? Lemme check. [13:46:45] petan: we (me and another Amir) made a universal wiki for testing mediawiki features for RTL langs [13:46:47] http://blog.wikimedia.org/2013/05/30/test-features-in-a-right-to-left-language-environment/ [13:46:59] Cyberpower678 metawiki works to me (alias) db is down I guess [13:47:07] but now We can't run Parsoid on it [13:47:08] ah right [13:47:13] I see [13:47:20] petan: 775 seemed good to me, but the 500 is still there. And I'm not allowed to change owner to make it local-lists:www-data [13:47:25] Amir1 why not? [13:47:28] Amir1: i think best bet would be to create a new project on labs and use it there. 
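A rough sketch of the checklist petan and Coren just gave fale, run from the tool account's home (modes are illustrative, not an official recipe): everything owned by the tool account, nothing world-writable, static files readable by www-data, and — per Coren's note — the PHP entry points executable:

    chmod 755 ~/public_html                                # traversable by the webserver, not world-writable
    chmod 755 ~/public_html/*.php                          # readable and executable scripts
    chmod 644 ~/public_html/*.css ~/public_html/*.html     # static files only need to be readable

Files that ended up owned by your own user rather than the tool can be handed back with the "take" helper petan mentions next.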
[13:47:36] fale you can use take utility [13:47:40] Amir1: that way you get an instance + root, so you can do whate you want :) [13:47:46] parsoid should be simple to run there [13:48:02] fale you don't need to set group to www-data it just needs to be readable by www-data [13:48:09] Coren: Ah, it seems it was. Give me a minute. [13:48:10] fale: if it's world readable then it's ok [13:48:20] * YuviPanda douses Coren in ice cream [13:48:22] petan: no man for take :( [13:48:25] talking to yourself again, are you [13:48:36] fale: not on production, because my package is waiting for approval... [13:48:36] Coren, you just pinged yourself. :p [13:48:40] petan: because whenever i ran it in tools, VE and Parsoid can't connect because both must be ran in a same host which is not possible [13:48:51] fale: just type take "filename" [13:48:55] and pray [13:49:01] hey Cyberpower678 ! [13:49:44] Amir1 ok I think you should ask for a new project on labs maybe... I can't really help here, but beta is not a place for it either [13:49:48] YuviPanda: I think I'm not authorized to do this, [13:49:52] petan: no error returned, but not fixed either [13:49:59] fale ok sec [13:50:04] petan: I requested a billion years ago [13:50:05] Amir1: yes, but you can ask someone who is to create it for you. [13:50:12] Amir1: usually Ryan_Lane creates it for me [13:50:19] but he's not around [13:50:20] Amir1 billion years ago, there was no wikimedia :P [13:50:20] addshore, did you look at the updates? [13:50:22] http://deployment.wikimedia.beta.wmflabs.org/wiki/Global_Requests#fa.wikipedia_and_he.wikipedia [13:50:39] Amir1: betalabs is the wrong place for that, I think. [13:50:39] Coren any chance my user-friendly take get reviewed? [13:50:41] labs != betalabs [13:50:59] YuviPanda I already told him [13:51:27] fale: what is name of tool [13:51:34] petan: lists [13:51:42] petan: o_O It has been. A lot. By at least four people? Or do you mean transcribe those comments to the actual gerrit pacth? If the latter, I can do so sometime early this afternoon. [13:51:50] oh ok [13:52:00] fale: option MultiViews not allowed here [13:52:06] fale: that is error I see in log [13:52:11] Amir1: so I suggest waiting for Ryan_Lane to turn up, ask him for a project, and he'll be able to create it for you :) [13:52:22] petan: I see [13:52:22] I am sure other people can too, but I do not know who they are :( [13:52:25] And I'm not one of them [13:52:43] Coren: ok and what is a result of that review? did you find any issue? [13:53:03] so I've two options, run the wiki on an instance (as YuviPanda said) or request for new project in labs, [13:53:10] all the reported issues so far were already fixed, many days ago [13:53:13] they're both the same, Amir1 :) [13:53:13] petan: There are plenty. It's still completely unsecure by default at the last version I checked. [13:53:24] Amir1: so you 1. get a project, 2. create an instance in the project and run it inside :) [13:53:24] Coren: what makes it unsecure? [13:53:34] Amir1: I think there's a request queue for projects on wikitech. [13:53:38] ok [13:53:49] petan: You're still using pathnames, and have dozens of race condition exploits. [13:54:18] Coren where? I am using pathname on 2 places, in debug log and in function which opens the filedescriptor, no race conditions I could see anywhere [13:54:34] scfc_de: how many wikis are in this queue? [13:54:37] petan: Perhaps you've updated it since, I'll have to check. 
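On the "Option MultiViews not allowed here" alert fale hit above: that is Apache's message when the directory's AllowOverride does not include "Options", so no Options line in .htaccess will be accepted — wrapping it in a conditional doesn't help, the line has to come out. A hypothetical before/after of the offending fragment:

    # before (rejected, triggers the 500):
    Options +MultiViews
    # after: delete the line entirely; content negotiation has to be handled
    # another way (for example in PHP), since Options can't be overridden here.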
[13:54:44] Amir1: https://wikitech.wikimedia.org/wiki/Special:FormEdit/New_Project_Request, though nagging Ryan_Lane is probably faster :-). [13:54:45] Coren: that's possible [13:54:56] Coren, what's the status of S7 [13:55:11] Amir1: https://wikitech.wikimedia.org/wiki/Special:Ask/-5B-5BCategory:New-20Project-20Requests-5D-5D-5B-5BIs-20Completed::No-5D-5D/-3FProject-20Name/-3FProject-20Justification/-3FModification-20date/format%3Dbroadtable/sort%3DModification-20date/order%3Dasc/headers%3Dshow/searchlabel%3DOutstanding-20Requests/default%3D(No-20outstanding-20requests)/offset%3D0 [13:55:11] !s7 is dead [13:55:11] Key was added [13:55:15] Cyberpower678: I'm trying to figure out why it work directly, but not through SQL [13:55:17] Amir1: plus https://wikitech.wikimedia.org/wiki/Help:Getting_Started has information on projects vs instances, etc [13:55:18] !s7 | Cyberpower678 [13:55:19] Cyberpower678: dead [13:55:23] :P [13:55:48] !s7 is S7 is dead [13:55:48] This key already exist - remove it, if you want to change it [13:55:59] !remove s7 [13:56:05] !doc [13:56:06] petan: where are the log file you looked at? [13:56:06] There are multiple keys, refine your input: docs, documentation, [13:56:15] !remove is if you want to delete a key, try !key del [13:56:15] Key was added [13:56:30] !key del s7 [13:56:30] Unable to find the specified key in db [13:56:35] !s7 del [13:56:36] Successfully removed s7 [13:56:36] fale: on webserver, not readable :/ [13:56:44] a question: is there a way to move database of this wiki (old wiki on tools) to the instance? [13:56:50] Aha [13:56:53] I don't want waste edits of people [13:56:57] Cyberpower678: they look good :) [13:56:58] Amir1 yes [13:56:59] Amir1: you should be able to dump the sql and load it up [13:57:04] I still havn't had much of a chance to play yet :/ [13:57:08] Amir1 you can use mysqldump [13:57:11] petan: :D. I should have fixed the multiview stuff, but sill 500 [13:57:27] [Tue Jun 25 13:55:31 2013] [alert] [client 10.4.1.89] /data/project/lists/public_html/.htaccess: Option MultiViews not allowed here [13:57:34] very good [13:57:35] addshore, you should download the new updates from GitHub. Then you won't ever have to dowload Peachy again. [13:57:46] and URL will be same? [13:58:06] petan: weird, I just added a conditional statement on that line [13:58:23] what about just removing that entirely [13:58:26] Amir1: URL will be different, but with a little PHP you can setup redirects from current tool to your new instance [13:58:27] (simply we can redirect it, never mind, stupid question) [13:58:31] petan: yep, going to [13:59:29] petan: 500 went away... a good 404 is waiting on the line :D. Thanks :) [13:59:49] hmm [13:59:50] :o [14:00:38] petan: I think I'll try some simple php script before trying again with a framework [14:00:41] Cyberpower678: Should be fixed from -login now. :-) [14:00:53] Cyberpower678: Not allowing -login in was... a bit silly of me. :-) [14:02:29] fale: Does it work for you now? [14:03:00] So i wait here until Ryan_lane shows up [14:03:28] Coren, so how do I connect to centralauth? [14:04:15] Cyberpower678: Ah, centralauth needs its views still (the standard wiki ones clearly won't work). Sometime this afternoon for that one. [14:04:43] Coren: nope, is returning a 404 error from the main tools website (http://tools.wmflabs.org/lists/) [14:05:49] the problem is my .htaccess. Going to fix it [14:05:52] fale: I see "Welcome to Lists Project homepage!" 
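The dump-and-reload petan and YuviPanda suggest to Amir1, as a hedged sketch — the host, database and file names here are placeholders rather than the real wikitest-rtl values, and credentials come from whatever .my.cnf the account already has:

    # on tool labs: dump the existing wiki's database
    mysqldump -h tools-db old_wiki_db > wikitest-rtl-dump.sql
    # copy the file to the new instance, then load it there
    mysql -u root -p new_wiki_db < wikitest-rtl-dump.sql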
[14:06:11] Coren: yep, just removed the routing lines from htaccess :D [14:08:48] Coren, any news from legal? It's been weeks since you said it's going to be weeks. :p [14:11:06] Coren, petan: I really messed up the htaccess -.-. Fied now :) thanks to you both [14:14:03] YuviPanda: In modules/toollabs/manifests/redis.pp, the top line says Class: toollabs::*execnode* :-). [14:14:14] copypaste faile [14:14:45] let me fix [14:16:58] scfc_de: Coren https://gerrit.wikimedia.org/r/70422 [14:20:37] addshore, https://tools.wmflabs.org/xtools/pcount/index.php?name=Cyberpower678&lang=meta&wiki=wikimedia :p [14:21:00] To think all of these error messages were being suppressed on the toolserver edit counter. :p [14:21:01] :/ [14:21:07] HAHA! [14:21:12] I've got some work to do. :/ [14:21:24] at least they are only notices ;p [14:21:40] But they define the namespaces. [14:21:54] Namespaces the edit counter will not recognize. [14:34:29] hashar: Are you involved with the Gerrit -> Bugzilla notifications? [14:39:00] scfc_de: I am not [14:39:19] scfc_de: qchris and ^demon are. You can open bugs agains Wikimedia > git/gerrit (IIRC) [14:39:45] hashar: Merci. [14:40:59] hashar: Someone was faster: https://bugzilla.wikimedia.org/show_bug.cgi?id=46997 :-). [14:44:52] petan: around? :> [14:53:04] addshore, want to work on the Peachy wiki? [14:53:15] Provide valuable documentatio. [15:03:02] but I havn't started using it yet :D [15:05:39] addshore, busy, busy addshore [15:05:43] :) [15:06:23] addshore, I almost know Peachy inside and out. If you need help, let me know. [15:06:40] Hm. The gridengine accounting file will also need to be logrotated or something. [15:07:04] Coren, you haven't answered my question yet. ;) [15:07:18] Cyberpower678: What question? [15:07:36] Coren, any news from legal? It's been weeks since you said it's going to be weeks. :p [15:08:23] Cyberpower678: News from legal? No. And not for a while, I expect, given their current focus on the privacy policy (which they also probably consider to be a dependency on the answer - whether something is okay to share given the privacy policy rather depends on what the privacy policy /is/) [15:09:07] Ok. [15:09:22] Coren, can you MS me once there's news? [15:09:27] It /does/ make sense when you think about it. [15:09:33] Cyberpower678: You'll be the first to know. :-) [15:09:45] MS=MemoServ for clarification. [15:11:36] Coren, thanks. [15:12:43] Coren: On Toolserver, the first entry in /sge/GE/default/common/accounting seems to be from May 21st, and the size is 107 MByte. Don't know if it's rotated by standard logrotate or something inside SGE, though. [15:13:23] * Coren will look it up. [15:13:26] Ah: http://arc.liv.ac.uk/SGE/howto/rotatelogs.html [15:14:12] Ah. "Best to use logrotate". [15:17:54] Well, that's probably something to test on the actual master and not to try to reproduce on a puppet mockup :-). [15:42:54] Coren: is getting root on toollabs essentially 'keep doing infrastructure work, and if you have to ask for root you do not need it'? :) [15:43:31] YuviPanda: I suppose it is. :-) There are no hard rules. [15:43:53] well, okay then :) [15:44:12] * YuviPanda puts 'get root on toollabs' in 'goals' list [15:44:46] YuviPanda: I do intend to keep the total number relatively low, though. It doesn't take that many sysadmins to make coordination problems overtake the advantage of numbers and better coverage. [15:44:58] Coren: sure, sure. 
common sense, etc [15:45:24] hell, if there are enough admins then I don't need to be one :) [15:46:55] Coren: can you add role::labs::tools::redis to tools-mc? [15:47:10] wtf [15:47:11] I joined and half of freenode died? [15:47:16] Coren: I tested it on toolsbeta-mc earlier [15:47:18] petan: ^ [15:47:25] petan: Stop breaking freedone. [15:47:27] :-) [15:47:30] I'll add docs later today, and email out to labs-l :) [15:48:32] !ping [15:48:32] pong [15:48:33] hmm, or maybe we need to uninstall the hand-installed version of redis first? [15:53:26] !ping [15:53:26] pong [16:45:08] Centralauth now up. [16:45:20] * Coren remains amused at his low gu_id [16:55:16] Coren, do you happen to know if there's an apt package that installs Moose? I'm trying to run some third-party code, getting Can't locate Moose.pm in @INC [16:55:22] And want to avoid cpan... [16:56:04] andrewbogott: I expect there is given the presence of libany-moose-perl [16:56:26] hm, installed that already to no avail [16:56:52] Yeah, but that uses 'moose or mouse' whichever is available; I expect there is a way to make both available then [16:57:17] p libmoosex-app-cmd-perl - Perl module combining App::Cmd and MooseX::Getopt [16:57:28] That should include Moose in its dependencies. [16:57:50] Watching what it wants to install might be informative. [16:58:28] libmoose-perl [16:58:40] That was too simple. :-) [16:58:44] Coren, yep, that did it, thanks. [17:45:33] Coren did you find the issues in code? [17:46:07] petan: I haven't gotten around to review it yet. It's still early afternoon. :-) I'm going some ops stuff atm, will do so soon. [17:46:25] btw I found issues in your code [17:47:08] it cant handle multiple folder arguments but it is fixed in my version... [17:58:59] petan: Test case? [18:00:56] scfc_de? [18:01:13] I thought you already saw that bug [18:01:42] But Coren fixed that already? [18:02:12] * anomie reads backscroll [18:04:47] andrewbogott_afk, Coren: It's nice that pretty much any Debian-packaged CPAN module is named as "replace '::' with '-', lowercase it, and add 'lib' and '-perl'". [18:05:25] anomie: For the most part. It's the odd cases out that'll getcha. [18:05:50] Coren: Even then, apt-cache search Foo | grep -e -perl usually works. [18:09:23] New review: coren; "Is good." [labs/toollabs] (master); V: 2 C: 2; - https://gerrit.wikimedia.org/r/70170 [18:09:24] Change merged: coren; [labs/toollabs] (master) - https://gerrit.wikimedia.org/r/70170 [19:40:09] I've gone crazy for a couple hours because it seemed to not save properly the time the system was using to complete my query (that was usng ~10' on TS). After I understood that I had to measure it in less than seconds, because toollabs is freaking fast. Thanks Coren, the other admins and WMF [19:40:27] fale: Thanks. :-) [19:40:54] fale: Send some of that thanks towards binasher, the DB guru. :-) [19:41:49] Coren: not registered on wikitech :( [19:43:28] fale: http://www.mediawiki.org/wiki/User:Afeldman [19:43:30] I think 'Asher' on wikitech. [19:44:01] Coren: thanks :) [19:51:59] Coren: I think you'll not have a lot of work optimizing the query I have... they run so fast, that it may have not really sense to optimize them even further (otherwise I should move from milliseconds to microseconds :D) [20:22:42] Ryan_Lane: Hi, are you there? [20:32:11] any one using phpunit on tool-labs ? I get an error [20:33:22] OrenBochman1: What error do you get? [20:44:50] Amir1: ? 
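The Debian naming rule anomie describes above (the [18:04] exchange), spelled out as commands — Moose is the module from andrewbogott's question, and both package names are the ones that actually came up in the conversation:

    apt-cache search Moose | grep -e -perl        # anomie's search, verbatim
    sudo apt-get install libmoose-perl            # Moose            -> libmoose-perl
    sudo apt-get install libmoosex-app-cmd-perl   # MooseX::App::Cmd -> libmoosex-app-cmd-perl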
[20:44:58] Hi [20:45:02] I've a request [20:45:13] I came earlier but I'm told to talk to you [20:45:46] can i have an instance in beta cluster for running a global test wiki for RTL langs [20:46:05] scfc_de: PHP Warning: require_once(PHP/CodeCoverage/Filter.php): failed to open stream: No such file or directory in /usr/bin/phpunit on line 38 [20:46:06] PHP Fatal error: require_once(): Failed opening required 'PHP/CodeCoverage/Filter.php' (include_path='.:/usr/share/php:/usr/share/pear') in /usr/bin/php [20:46:06] unit on line 38 [20:46:15] Amir1: you're asking the wrong person [20:46:19] hashar: ^^ [20:46:20] me and another Amir are working on this wiki [20:46:33] ;-) [20:46:44] which amir are you ? [20:46:51] User:Ladsgroup [20:47:00] and amir@pywikipedia [20:47:24] OrenBochman1: seems you are running some code coverage but are missing the PHPUnit extension that handles codecoverage [20:47:24] Amir1: are none of the current wikis in beta RTL? [20:47:44] Ryan_Lane: no, none [20:47:56] hashar: I'm not requesting code coverage at all [20:47:58] and it's better to have a global wiki because RTL devs can work together [20:48:01] hashar: ^^ :) [20:48:16] beta isn't a development environment [20:48:18] I get it typing just phpunit [20:48:19] it's a testing environment [20:48:24] not separately on their own languages [20:48:34] pear list -c pear.phpunit.de [20:48:43] it's for testing code before it hits production [20:48:44] or phpunit --verbose StackTest.php [20:48:46] Ryan_Lane: Amir1 we have hewiki iirc [20:48:58] hashar: no we don't [20:49:03] let me check [20:49:13] you're expected to do development in vagrant, or shared development in a labs project [20:49:13] Ryan_Lane: Amir1 and we have a bug report to ask to enable an Arabic wiki. There is already an old arwiki database, so it is just about reenabling it in mediawiki-config.git [20:49:15] deployment.wikimedia.beta.wmflabs.org/wiki/Global_Requests#fa.wikipedia_and_he.wikipedia [20:49:23] then, when it's ready to be deployed, it would hit beta [20:49:33] OrenBochman1: pear list -c pear.phpunit.de that should list you the pear packages [20:49:51] http://blog.wikimedia.org/2013/05/30/test-features-in-a-right-to-left-language-environment/ [20:49:54] read this [20:50:08] Ryan_Lane: Do we want that self-serve make-a-mediawiki-dev-environment setup? It's nontrivial, but I'm seeing incerasing cases where it would be the Right Thing. [20:50:11] I mean we have so many other reasons to have an RTL test wiki [20:50:12] OrenBochman1: http://paste.openstack.org/show/39227/ :-] [20:50:26] I get Channel "pear.phpunit.de" does not exist [20:50:26] Amir1: heh. 
well, it's your terminology that was confusing me [20:50:31] you said dev, not test [20:50:48] Ryan_Lane: I want to move http://tools.wmflabs.org/wikitest-rtl/w/ to that [20:50:48] because we can't run Parsoid [20:50:51] and test VE in RTL wikis [20:50:51] Amir1: http://he.wikipedia.beta.wmflabs.org/ [20:50:59] Amir1: I kept an hebrew wiki just for you :-] [20:51:13] hashar: I'm just getting started with php unit [20:51:29] Amir1: I'm not Hebrew (I wish I was though) I'm Persian [20:51:33] OrenBochman1: make sure your PHP include_path contains the pear root path [20:51:45] I'm pinging myself god help me [20:51:52] :D [20:52:07] hashar [20:52:26] OrenBochman1: $ pear config-show |grep php_dir [20:52:27] OrenBochman1: PEAR directory php_dir /opt/local/share/pear [20:52:35] Amir1: be a sport and learn some hebrew ;-) [20:52:53] why the hell everyone is either named Andrew, Ryan or Amir [20:52:58] http://tools.wmflabs.org/wikitest-rtl/w/ in this wiki users of four major RTL are coming and making test edits for new features [20:52:59] :) [20:53:29] but we can't make VE ready for them [20:53:34] and that sucks [20:53:39] Amir1: so which language are you interested in ? [20:53:46] we have VisualEditor on the beta cluster [20:53:57] Persian [20:54:05] but my point is something else [20:54:12] and the dreded ZWJ [20:54:24] I want to people of RTL wikis work together [20:54:43] not separately [20:56:00] (what I was a bit successful in making RTL testers together) now in that wiki we have some users of very different wikis working together [20:56:07] hashar: ^ [20:56:36] but If you want to make a beta wiki just for Persian (fa) I don't mind but I prefer a global RTL test wiki [20:57:39] hashar: Amir is very common name in middle east [20:57:47] BTW [20:58:03] http://en.wiktionary.org/wiki/amir [20:58:41] It mean "who orders" [20:59:50] of course there is also the hebrew biblical meaning - "treetop" [21:00:00] so [21:00:17] http://he.wiktionary.org/wiki/%D7%90%D7%9E%D7%99%D7%A8 [21:00:22] Amir1: you want various people to be able to test a RTL wiki ? [21:00:30] hashar: yes [21:00:47] Amir1: on the beta cluster we have a wiki per language just like the normal/production wiki [21:01:06] hashar: but you have other sites too [21:01:18] one min [21:01:18] yeah the idea is to reproduce production [21:01:31] but maybe we can create a rtlwiki that will use english as a default language and let user try out rtl [21:01:48] or you can just do that on the en wikipedia in beta : http://en.wikipedia.beta.wmflabs.org/ [21:02:00] note that the content there can disappear at anytime though [21:02:32] btw why can't you guys run parsoid ? [21:03:22] hashar: http://en.wikipedia.beta.wmflabs.org/wiki/Special:SiteMatrix [21:03:30] other sites [21:03:33] you can add one [21:04:00] OrenBochman1: on beta ? [21:04:28] OrenBochman1: It's a little complicated, VE can't connect to Parsoid because they must run in a same host and when I asked in here people said It's not possible to do that [21:04:41] Amir1: beta is a playground area that seems to fit your need [21:04:46] hashar: I thin Oren means tools [21:05:05] hashar: Of course and that's why I want to migrate there [21:05:12] awesome! 
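Following up on OrenBochman1's phpunit failure and hashar's pointers: the idea is to find where PEAR actually installs packages, compare that with the include_path shown in the error, and either add the missing directory for the run or get the missing PHP_CodeCoverage package installed. A hedged sketch — the paths printed on tools-login may differ:

    pear config-show | grep php_dir           # where PEAR installs packages
    php -r 'echo get_include_path(), "\n";'   # what the CLI searches (matches the error message)
    # if php_dir is not on the include_path, prepend it just for this run:
    php -d include_path=".:$(pear config-get php_dir)" /usr/bin/phpunit --verbose StackTest.php
    # if PHP/CodeCoverage/Filter.php exists in neither place, the PHP_CodeCoverage
    # package itself is missing and needs to be installed by an admin.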
[21:05:42] so the beta enwiki is very close to the production enwiki [21:05:45] hashar: http://en.wikipedia.beta.wmflabs.org/wiki/Special:SiteMatrix#Other projects of Wikimedia [21:05:53] we only have arwiki has a RTL wiki but we can add more RTL wikis if needed [21:05:58] make a wiki like this named for example "rtltest" [21:06:18] I thought VE required Parsoid to work [21:06:33] Amir1: why would you need a dedicated wiki ? [21:06:44] OrenBochman1: so yes, It's not working [21:07:18] hashar: because as i said before our testers and developers need to work together [21:07:47] If they work separately they will never find out bugs of RTL langs [21:07:59] for example there is an small rtl lang named ckb [21:08:04] Amir1: so you can work together on the english beta wiki [21:08:15] and have the user use ULS to use a different language [21:08:24] admin of that is working in our wiki because he can have a test wiki [21:08:36] Coren: https://en.wikipedia.org/wiki/Wikipedia:VPT#Fix_the_Toolserver <- I wonder what policy he is referring to [21:08:57] but If we want to make a test wiki for ckb, 1- nobody will notice 2-nobody will work [21:09:13] anomie: I have no idea. [21:09:30] Amir1: isn't it enough to have user with the ckb wiki ? [21:09:36] hashar: using ULS is what we've done in our wiki to make it a global wiki for rtl testers [21:10:01] hashar: of course we don't have enough because there is just one active user [21:10:20] and we can't make a wiki just for one person [21:10:30] but we can do this for all of rtl langs, [21:10:39] do what ? one wiki per language ? [21:10:51] isn't it easier to have each user set the language they want to test? [21:11:03] one test wiki for ckb lang because just one tester [21:11:13] that doesn't know how to run a wiki [21:11:32] (I know how to run a wiki, I'm doing it right now on wikitest-rtl) [21:12:13] so the ckb testing user could connect on the english beta wiki and set its preferred language to ckb [21:12:20] hashar: yes, the user comes to our wiki and set his language to ckb and test our environment in ckb and report bugs to us [21:12:34] anomie: Commented there. [21:13:01] english is RTL, content won't be shown RTLly(!) [21:13:11] *is not [21:13:15] oh the content right [21:15:37] Amir1: so you would need a rtlwiki that would have the content set to be RTL [21:15:46] at first for wikitest-rtl me and Amir set default langs en but we had so many problems that we made default lang a very little lang (dv) and I changed some of l10n of dv lang in my wiki and make look like English :D I changed namespces and stuff like that [21:15:53] yes [21:16:02] daughter crying brb [21:16:03] hashar: we did that already [21:16:12] Amir1: please send me your email by private message [21:16:15] will follow up after [21:16:16] brb [21:16:35] ok [21:37:18] Amir1: back :) [21:37:57] hashar: good :) [21:38:26] is it ok to publish your email to some other wikimedia people? [21:38:41] i would like to write some an email and add you in cc [21:38:41] yeah [21:39:03] or maybe you can follow up directly with chrismcmahon :-] [21:39:11] thank you and you can publish my email [21:39:46] I think if we work as a group would be better but you're the boss here :D [21:40:07] not at all [21:40:09] you are the boss [21:40:20] I am just a tool [21:40:49] we all are [21:40:59] and I'm very happy to be [21:42:19] " I am just a tool" needs to be written down somewhere. 
:-) [21:42:28] hi Amir1 I'm not sure what we're talking about, but do Cc me on that email :) [21:42:53] chrismcmahon: ok [21:42:56] Coren: bugzilla quips :) [21:43:06] chrismcmahon: yeah you are on cc [21:43:13] by the way hi [21:43:48] :P [21:43:56] Amir1: chrismcmahon is the QA leader @ wikimedia :) [21:44:41] Ryan_Lane: Do /you/ have any idea what Nathan is going on about? Did you get news from Luis in re the TOS? https://en.wikipedia.org/wiki/Wikipedia:Village_pump_(technical)#Fix_the_Toolserver [21:45:11] Coren: no clue [21:45:47] Allright, so I'm not crazy [21:48:08] nope [21:50:20] Amir1: mailed chris :-) [21:50:36] Coren: https://bugzilla.wikimedia.org/quips.cgi?action=show :-] [21:52:25] doh [21:52:28] people quoted me there [21:52:53] added it [21:53:23] Ryan_Lane: I'm still up for a nice newbie-friendly setup of mediawiki-in-a-can to live alongside tool labs. [21:53:39] Coren: well, I'd really like to have one mediawiki = one instance [21:53:56] with an all-in-one install of mediawiki [21:53:58] Ryan_Lane: That's entirely doable. [21:54:27] Ryan_Lane: But, IMO, rather the waste of resources. I think we want a "baby's first mediawiki" that doesn't give root. The real pros can always start a project. [21:54:37] yeah, this would be without root too [21:54:51] but mediawiki eats a shitton of memory and other resources [21:55:08] I suppose, but a new VM does have overhead. [21:55:24] I.e.: n mediawiki on a big enough vm << n VMs having one mediawiki [21:55:38] Coren: not much overhead [21:55:59] KSM is pretty good since almost all of our images are basically the same [21:56:07] Hm. [21:56:09] and we do upgrades across all nodes, etc [21:56:14] Amir1: i am off sorry [21:56:21] Either way works, really, and the one VM per mw is simpler to implement. [21:56:21] it does affect IO, though [21:56:25] yeah [21:56:29] that was my thought :) [21:56:30] Coren: Just not /that/ crazy* [21:56:42] one good way to start this is for us to replicate the gerrit repos to the nfs server [21:56:49] Damianz: Or, more precisely, crazy in /some other, unspecified way/ :-) [21:56:52] so that clones of MW are simple [21:57:01] and faster [21:57:25] it would also mean clones of the ops repo would be faster too ;) [21:57:37] What's the current development/deployment model we use? Commit then flee! < my fav [21:57:57] Coren: on another note… and this might be a pain [21:58:10] Coren: we were discussing git in tool labs [21:58:19] and specifically the ability to easily use gerrit [21:58:39] with github integration [21:59:12] YuviPanda has a bot that will take github pull requests and push them into gerrit as changesets [21:59:15] Is a mirror really that much faster given our infrastructure? I fear "more moving parts" [21:59:34] Coren: it would be a read-only filesystem copy [21:59:50] and yes, gerrit is super slow [21:59:55] Yeah, I suppose it is. [22:00:14] Ooo. Close integration w/ github == more contributors. [22:00:21] so…. ^demon and I discussed an automated way to create gerrit repos for tools [22:00:28] (Also == smaller s/n, but I think one is worth the other) [22:00:53] gerrit has a rest api. it also has an ssh interface [22:00:57] Coren: the github bot is running on tool-labs, btw :) And that's also why I wanted redis running properly :) [22:01:03] ah. 
right [22:01:07] I was going to merge that today [22:01:19] Coren: so… the issue is giving folks access to repos [22:01:22] Redis: memcache after cocaine [22:01:51] Coren: gerrit has ldap support for groups, but service groups are per-project and non-unique [22:02:07] hashar: ok [22:02:22] someone had the idea of renaming "local-" to "-" [22:02:26] thank you [22:02:33] brb [22:02:40] Ryan_Lane: Ow. Make them globally unique? [22:02:45] then we could change the base dn for group searches in gerrit [22:02:53] Wait, if the repo is read-only, what does it change? [22:03:07] this would be creation of read/write repos for tools [22:03:30] Aaah. [22:03:31] with synchronization to github and from github [22:03:35] Hm. [22:03:50] where we could give ownership of the repos to the service group [22:03:56] Right now, I've given gerrit repos for tools who requested it. (I.e.: just the one atm) :-) [22:04:00] You could just create them on github :D [22:04:05] and an interface to configure things [22:04:25] Damianz: not everyone wants to use github [22:04:35] Really sucky point is you still can't delete projects via the interface (unless they fixed that recently), so we'd either be opt in or omg repos [22:04:40] Ryan_Lane: Only you! [22:04:41] :P [22:04:48] and I want to avoid people putting code into github, gerrit, bitbucket, source forge, google code, etc. [22:05:05] +1 to Ryan_Lane [22:05:17] if we give people an easy means of using gerrit and github, I think that would appease most people [22:05:21] worst of all, putting code just inside their /data/project/ and using ssh to access [22:05:31] I totally put code in gerrit, github and replicate it to bitbucket... but I am a bit weird [22:05:33] YuviPanda: yes. that's the worst [22:05:40] Damianz: :) [22:05:58] For most labs stuff github is up to date and gerrit is 'stable' rofl [22:06:13] Coren: of course, that means we'd need to rename the service groups [22:06:41] Damianz: :) [22:06:48] Yeah, and rejigger the NFS stuff, and the DB permissions. That's going to be !fun. [22:06:53] yeah [22:07:06] but the nice thing is, we'll be able to use the service groups for things [22:07:13] Yeah. [22:07:21] so, it's beneficial just past gerrit [22:07:23] I suppose we were being too gingerly with the gids. [22:07:41] heh [22:07:54] (Mind you, I would want to move them to 100000+ instead of stuck uncomfortably in the middle of the 16 bit range) [22:08:15] well, the gids are unique [22:08:22] so, it's not a major issue [22:08:42] ranges kind of suck [22:08:54] it puts awkward limitations into things [22:09:35] well, hell, I hope the gids are unique. I'm pretty sure they are :) [22:11:17] Hm. They should be, atm. Only the names may not be. [22:11:23] right [22:11:24] * Coren examines tools. [22:11:47] we need to implement the atomic method you've mentioned for adding uids and gids [22:11:55] AFAICT, the rename should have not all that much impact on the project itself; few people ever refer to the group names directly. [22:12:03] yeah [22:12:26] so, I'm going to write a proposal of how this'll work on labs-l and wikitech-l [22:12:39] NFS will need more love... but then again it'll also simplify a number of things I had to work around because of the non-unique names. [22:12:46] in addition to providing tool git repos, this MW extension will also allow people to create their own MW extension repos [22:13:10] Ryan_Lane: is that okay, considering the fact that repos can't be deleted? 
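One concrete shape the "automated way to create gerrit repos for tools" could take, purely as an illustration of the ssh interface Ryan mentions — the project path, group name and flags below are invented for the example and are not the actual plan:

    ssh -p 29418 gerrit.wikimedia.org gerrit create-project \
        labs/tools/example-tool --owner example-service-group --empty-commit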
[22:13:23] They can if you delete them from disk and clear the cache [22:13:27] But it sucks ass [22:13:33] Heh. Hindsight is 20-20. In retrospect, non-unique local groups was a mistake. [22:13:37] YuviPanda: ^demon had wanted the mw extension stuff done :) [22:13:41] Coren: heh [22:13:47] Coren: well, it's nice that they are local to the projects [22:13:56] but it's less nice that they aren't unique [22:14:17] Also be sexy with oauth - only allow specific groups [22:14:31] heh. that's going to be harder ;) [22:14:42] well mw vs ldap.... no fun [22:14:49] we're not doing MW<->LDAP group sync right now [22:14:52] Ryan_Lane: Will we want to keep them visible only on their projects? [22:14:59] Coren: yeah [22:15:16] Makes sense; otherwise the group list will become nasty. [22:15:17] otherwise creating a service group in one project would create it in another [22:15:23] and that's leaky [22:16:28] I hate to think what this is going to look like if we ever get rid of osm [22:16:48] Damianz: we can't ever get rid of the authn/z parts of it [22:16:57] or we'd have to extend horizon for it [22:17:14] horizon doesn't have any real authn/z handling except for public clouds [22:17:23] Speaking of, how does the security review of OpenID? Need help? [22:17:26] and it sure as hell doesn't have any LDAP functionality [22:17:27] I'm kinda of the opinion that stuff living in horizon is the right place [22:17:34] Even if it would suck majorly to get it in there [22:17:36] Coren: csteipp is going to do that [22:17:42] Coren: there's one missing feature, though [22:17:48] sreg responses aren't working [22:18:09] Damianz: meh. adding features to OSM is fairly simple [22:18:38] Forces some things into osm though - rather than being abstract [22:19:11] it would be forced through horizon otherwise [22:19:20] the right goal is to move things into the services [22:19:23] "In retrospect, non-unique local groups was a mistake." <- referring to service group ids? [22:19:31] andrewbogott: names. [22:19:32] andrewbogott: local- [22:19:38] ah [22:19:44] rather than - [22:19:59] hm, yeah. [22:20:09] Mind you, doesn't that mean we should disallow dashes in usernames if they aren't already? [22:20:12] Damianz: we're going to move DNS to designate (which will likely be OpenStack DNS) [22:20:15] * Damianz notes it would be rather funny if like labs ddosed wikipedia due to a mediawiki vuln just because of the irony [22:20:22] then we just need to handle puppet somehow [22:20:37] Damianz: wikitech doesn't run on the cluster :) [22:20:44] I'm partly surprised noone has solved the puppetissue already [22:20:46] Ryan_Lane: Yeah, we got usernames with dashes already. [22:20:55] Coren: ? [22:20:58] Ryan_Lane: But if I spun up 500 machines in 100 projects I'd have omg bandwidth [22:21:03] happy-melon:x:1203:550:Happy-melon:/home/happy-melon:/bin/bash [22:21:05] Well no... about 4gb, but meh [22:21:19] It was more funny in theory :( damn life [22:21:20] ugh. right. that's going to be problematic [22:21:40] Coren: this is just groups, though [22:21:46] oh. right [22:21:48] Do you actually stop people having local- in their username now? [22:21:49] :P [22:21:51] damn. still needs to happen for users too [22:21:55] Ryan_Lane: Yep. [22:21:55] Damianz: I think so, yes [22:22:26] Ryan_Lane: There aren't that many of them. The probability of a clash is low enough, IMO, that we could granfather them in. [22:22:50] 47 all told. [22:22:57] so, we'll need to disable - in usernames? [22:23:02] Ah, not even, I also caught dashes in realnames. 
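A sketch of how the dash counts Coren quotes could be reproduced, assuming user enumeration works against LDAP on the host it is run on (the happy-melon line above shows the passwd format being matched; field 1 is the username, field 5 the GCOS/"realname"):

    getent passwd | awk -F: '$1 ~ /-/ || $5 ~ /-/ { print $1, $5 }'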
[22:23:08] realnames? [22:23:13] GCOS field. [22:23:14] ah [22:23:15] right [22:23:20] how many total right now? [22:23:42] 37 [22:23:44] it's just a matter of removing the character from the regex :) [22:23:46] Though if you do remove - from usernames, the stupid validation failure message needs fixing as it's useless iirc [22:23:57] so, old names would work, but new names would be disallowed [22:24:03] * Coren nods. [22:24:33] and we'd need to also put in a check for projects being created as users that might catch that [22:24:43] so, "happy" can't be a project [22:24:47] Why? [22:24:54] create happy project [22:24:55] Oh, you mean just in case? [22:25:02] create happy-melon service group [22:25:04] conflict [22:25:24] The chance of a conflict like this is vanishingly low. Just have the service group creation error out. [22:25:31] true [22:25:37] that's easier [22:25:47] we should really be putting this stuff into an etherpad [22:25:51] * Ryan_Lane starts one [22:26:14] I'z hungry. I need food. [22:26:27] We should just make everyone use uid 0 and be happy [22:26:29] * Damianz troll [22:26:30] * Coren goes to fetch some. AFK for a bit. [22:26:30] http://etherpad.wikimedia.org/UniqueServiceGroups [22:26:33] ok [22:29:02] chrismcmahon, hashar: check mail plz [22:57:37] Coren: ah, you merged the redis changes. thanks [22:57:57] did you pay attention to production when you did so? it's possible this caused a restart of redis in production [23:00:24] * Coren is back, fed. [23:03:04] Ryan_Lane: ... how? From what I could tell, that change was a strict noop to the config unless rename_commands{} isn't empty. [23:03:25] I have a good feeling that added a blank line into the config file [23:03:33] or two [23:03:37] Oh? They used -%> [23:03:43] So that eats the newlines. [23:03:48] ah [23:03:58] maybe it didn't change anything, then [23:04:28] I don't think it should have. I haven't run the actual template to check, but my reading is this should have done nothing to the config file. [23:05:16] Oh... maybe the one extra newline at the end. Darn. [23:05:35] I didn't notice they added an extra newline after the <%%>s [23:06:01] * Coren goes to check the etherpad. [23:07:06] Coren: no feelings on this? https://meta.wikimedia.org/wiki/Requests_for_comment/X!%27s_Edit_Counter#Remove_opt-in_completely [23:07:30] That discussion is silly [23:07:35] agreed [23:07:46] I didn't expect it woul have been apropriate for me to comment? As far as I am concerned, Tool Labs has no opinion on the matter, it's a decision for the maintainer alone. [23:07:48] * Damianz hates the 'lets restrict public data because everyone hates decent tools' [23:08:01] Coren: I'm commenting as myself [23:08:38] I was wondering if you had a personal opinion as a tool author :) [23:08:58] I do, mostly along your own. [23:09:12] also, this is just a step away from the projects grasping control of what tool authors can do [23:09:22] and how they can display research, etc. [23:09:54] Tools should have the wikileak approach - display all teh things [23:09:57] Ryan_Lane: OTOH, it's perfectly reasonable for the projects to expect some control over what tools can do /with their project/. (Note enwp overwhelmingly rejected opt-in) [23:10:04] Unless it was obtained via priveleged access and not for public [23:10:24] But yeah, for collecting and displaying information? Nonsense. [23:10:30] Coren: it's perfectly fine for them to control what bots are doing editing the projects [23:10:38] displaying public data? 
no [23:10:48] Damianz: yep [23:11:33] if I write an edit counter that uses public data, the hell if anyone is going to tell me I can't [23:11:36] Damianz: The point of the complicated labs DB setup is to make sure that is exactly the case. The objective is "nothing visible from the DB that isn't already available on-wiki" [23:12:08] Which is already more than a lot of paraniod sites (ips etc) [23:12:09] Ryan_Lane: I don't think cyberpower is worried about "allowed to" so much as "will the projects hate me". [23:12:13] heh [23:12:15] good point [23:12:38] Tbh I think people's password hashes would be very useful info to have in labs - from a security research point of view [23:12:43] Some people are evil though [23:12:47] hahaha [23:13:05] Yes, of course. "Research". [23:13:22] We all know Ryan's password is 'I<3PinkUn1corns' [23:13:49] how did you know that!? [23:13:53] hacker! [23:14:17] * YuviPanda grants Damianz asylum [23:14:25] Coren: Properly salted password hashes is probably a better idea than letting people login over http :P [23:14:31] That just came up as '***************' on my screen :p [23:15:12] oh. maybe it only showed up to me since it's my password. maybe you should type yours. it'll probably show up as that too [23:15:15] hmm, 'docker joins the linux foundation' [23:15:26] oreally [23:15:27] Ryan_Lane: would perf totally suck if someone does something with docker on top of one of our instances? [23:15:34] Sure, one broken legs is MUCH better than two broken legs. Doesn't mean I want you to break one. :-) [23:15:51] YuviPanda: probably not [23:15:55] hmmmm, nice [23:16:06] Ryan_Lane: Does 'hunter2' show up as asterisks? [23:16:08] docker is adding a driver to nova [23:16:11] a930913: yes! [23:16:12] :) [23:16:12] there's work going on elsewhere of making accessible sandboxed ipython notebooks [23:16:17] Ryan_Lane: :) [23:16:33] a930913: yeah, if I copy and paste it still shows as 'hunter2' [23:16:44] The openstack docker stuff looks interesting.... tempted to play with it for the build stuff on our new research project at work [23:16:56] Ryan_Lane: oh, so with that the docker containers can directly run on the cluster itself? [23:17:03] YuviPanda: yep [23:17:03] without needing an instance to run ont op of? [23:17:05] nice! [23:17:06] yep [23:17:18] no clue when it'll be showing up [23:17:24] will we have it if it does show up? [23:17:30] we'd want to separate that onto different hardware if we wanted to use it [23:17:32] i'm not sure what use we'll have for *that* tho [23:17:33] yeah [23:17:39] I don't trust containers on the same hardware as the vms [23:17:43] true [23:17:56] well, containers on VMs are good enough for ipython notebook stuff, I guess [23:18:00] I don't trust containers full stop that much [23:18:00] * Ryan_Lane nods [23:18:11] Coren: the labsdb's are accessible to all of labs, right? [23:18:13] not just ools [23:18:14] *tools [23:18:31] Damianz: they aren't trustable with root access, but are fairly trustable when things are run as non-root [23:18:33] YuviPanda: Ostensibly, yes. In practice, there is some dark magic that complicates things. [23:18:53] Coren: so, 'currently no'? [23:18:54] mhm [23:19:11] Copy your .replica file from tools and it works [23:19:12] YuviPanda: 'currently you need to add iptables and a hosts file' [23:19:16] Until petan breaks your passwords [23:19:26] Coren: iptables on the... client? [23:19:28] or the server? [23:19:39] client. [23:19:43] it's a pain currently [23:19:44] YuviPanda: On the client. 
It's crap, we need to fix that for real soon. [23:19:49] hmmm, okay [23:19:59] Ryan_Lane: We need to fix that for real soon. [23:20:06] yeah [23:20:07] this stuff is addictive, I must go back to getting mobile stuff done :| [23:20:08] Ryan_Lane: Because it's crap. :-) [23:20:13] YuviPanda: :) [23:20:22] Ryan_Lane: Coren would you be coming to WIkimania? [23:20:26] Coren: well, easy enough to add iptables on the servers [23:20:32] YuviPanda: yes [23:20:33] YuviPanda: I'll be there, including DevCamp. [23:20:38] nice! [23:20:49] Is it this year mania is in jp? [23:20:52] To spread the Gospel of the Laps. [23:20:57] Damianz: Hong Kong. [23:20:58] YuviPanda: and yes, devops is addictive :) [23:21:03] Ah yeah [23:21:13] * Damianz wonders if he needs a trip to the dc in HK around that time [23:21:29] Ryan_Lane: indeed. [23:21:34] Ryan_Lane: We don't actually need iptables on the server, just bind new IPs and have the mysql listen to them [23:21:50] (In addition to their current ones) [23:22:21] for DNS we just need to add the DNS names to LDAP [23:22:47] Sure, but what process do we use to maintain those. Snarf out of dblist? [23:22:51] so tools-mc already has a 'hand-rolled' Redis instance [23:22:56] Coren: yeah [23:22:57] and now we have a redis class setup on puppet [23:23:06] if we apply the role::labs::tools::redis to tools-mc [23:23:10] what'll happen to the hand-rolled redis? [23:23:14] Ryan_Lane: Simple enough to do. We want to add ndots=1 to the resolve.conf on servers though. [23:23:25] YuviPanda: it'll be reconfigured and restarted, if anything needs to change [23:23:28] so that foo.labsdb uses search [23:23:51] Ryan_Lane: ah, okay. we did change one thing (snapshot -> aof), so I guess that'll happen [23:23:53] but that's ok [23:24:06] Coren: why would foo.labsdb not search? [23:24:14] Coren: oh. you mean on the labsdb servers? [23:24:33] is it necessary for them to resolve the foo.labsdb names? [23:25:35] No, I mean on the instances. By default, ndots=0 which means that foo.labsdb look only for foo.labsdb. I'd want to put them under foo.labsdb.pmtpa.wmflabs and foo.labsdb.eqiad.wmflabs [23:25:47] it should search right now [23:26:00] Not if there is a . in the name [23:26:04] ah. ugh. right [23:26:10] unless ndots=1 [23:26:12] Coren is just a second class citizen - db.wmflabs ftw [23:26:15] (which is a good thing anyways) [23:26:29] Damianz: Still wouldn't search. [23:26:41] Coren: default is 1 [23:26:47] foo.labdsb.wmflabs would be okay, but still need ndots. [23:26:54] It is? When did they change this? [23:27:00] Ah, then no worries. [23:27:02] :-) [23:27:16] I thought that foo.x worked :) [23:27:54] Ryan_Lane: I'll do the DNS insertion then. Where do you want it running from? [23:28:04] Coren: it's an ldap insertion [23:28:08] Oh, we need IPs to resolve to. [23:28:10] you'll need to add a SOA [23:28:17] and the entries [23:28:21] do it on virt0 [23:28:25] kk [23:28:37] do an ldapsearch with a base of ou=hosts [23:28:50] well, ou=hosts,dc=wikimedia,dc=org, of course [23:29:00] I take it openstack isn't going to trample over those? [23:29:03] it'll have examples [23:29:09] openstack doesn't handle this right now [23:29:14] mediawiki does [23:29:19] Ah, okay. [23:29:24] hm. it may actually show up in the domain list after you do this [23:29:58] the dns code really needs to be replaced by designate
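A hedged sketch of the lookup Ryan just described for virt0 — list what already lives under ou=hosts so the new *.labsdb entries (and their SOA) can copy the same attribute layout; bind and authentication options are omitted here and depend on how that server is set up:

    ldapsearch -x -LLL -b 'ou=hosts,dc=wikimedia,dc=org' '(objectClass=*)' | less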