[01:18:07] What's the password for a tool account? [01:18:16] (for sudo) [01:18:24] PiRSquared: no password [01:18:25] use [01:18:28] become [01:18:31] not sudo [01:18:32] if [01:18:41] I'm trying to start cron [01:18:50] if become is asking for a password, you need to log out and back in [01:19:54] PiRSquared: also see https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help#Scheduling_jobs_at_regular_intervals_with_cron [01:20:02] that entire page is recommended reading [01:26:28] YuviPanda: oh, I was reading https://wikitech.wikimedia.org/wiki/Help:Cron [01:26:48] yeah wrong page :D [01:26:55] labs != toollabs [01:27:01] and that causes lots of confusion [01:27:10] the page you were looking at is for labs in general [01:27:14] not applicable to toollabs [01:27:27] ah, sorry [02:05:31] !ping [02:05:31] !pong [02:05:34] ok [02:11:36] YuviPanda: is this correct usage: jstart -once -continuous program args [02:11:54] How long does it take for it to start? [02:11:59] for a program that runs continuously yes [02:12:02] a couple of seconds [02:12:10] check tools.wmflabs.org?statys [02:12:13] grr [02:12:14] status [02:12:24] for an easy way to check if it has started [02:12:32] Thanks! [02:12:48] hmm also it might be jsub instead of jstart but it is possible both will work [02:13:04] PiRSquared: you should also find .out and .err files in your tool home folder [02:31:48] YuviPanda: how can I kill a jstart? [02:33:42] oh. jstop, duh [05:54:12] Coren: https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Privacy doesn't exist... [05:54:22] Coren: yet it is linked from https://tools.wmflabs.org/ [05:54:42] Is WIP [05:54:52] Destination probably should say that. [05:54:55] is empty! :/ [05:54:57] sure [05:55:02] hey Coren [05:55:10] should i just ask you then? wondering about a use case [05:55:13] Not really here. [05:55:14] Coren: TParis requested a project for UTRS, think you can approve it? [05:55:16] ah, okay :D [05:55:26] * YuviPanda does not see a Coren [05:55:31] jeremyb: If it's really quick to explain. [05:55:40] Otherwise, bedtime soon. :-) [05:55:46] Coren: pretty quick. [05:56:07] Coren: newish tool (existed before but just deployed to tool labs). https://tools.wmflabs.org/citeimage/ [05:56:31] Go on? [05:56:39] Coren: fetches from 3rd party client side with JS [05:56:53] in this case from www.loc.gov [05:57:08] Ah. Simple, really, need to plop the disclaimer on before any fetching takes place. [05:57:57] and we can store disclaimer seen state in a cookie? [05:58:07] example of source code for someone that already does this? [05:59:49] jeremyb: I don't think I have examples handy; but yeah, storing a 'has seen this' cookie is okay. [06:00:20] The idea is just to make sure that the enduser understands that they'll connect to somewhere that doesn't use our privacy policy before they do so. [06:03:20] ok. maybe it's better to cache at the tool and serve the JSON up locally and parse as JSON instead of JS (jsonp) [06:03:27] +1 jeremyb [06:03:32] is usually faster that way too [06:04:03] YuviPanda: how should i cache? is there some easy way with lighttpd or some other labs service? [06:04:14] YuviPanda: currently this tool is entirely static (HTML/JS) [06:04:23] you sadly have to do your own thing, but is easy-enough to write [06:04:35] sure. but why don't we make it reusable [06:04:46] jeremyb: indeed, if I do write it I'll make it reusable [06:04:59] also, there's caching built in to nginx and apache. not lighttpd?
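For reference, the job-submission pattern discussed above looks roughly like this on the grid. This is only a sketch: "mytool", the job names and the script paths are placeholders, and the exact flag spellings should be checked against the Tools/Help page linked earlier.

    become mytool                      # switch to the tool account; no sudo or password involved

    # a continuous bot/daemon: jstart (roughly jsub -once -continuous) keeps it running
    jstart -N mybot /data/project/mytool/bot.sh
    qstat                              # is the job running on the grid?
    jstop mybot                        # stop it again; output lands in ~/mybot.out and ~/mybot.err

    # one-off or scheduled work is wrapped in jsub inside the tool's crontab, e.g.:
    #   0 * * * * jsub -once -N hourly /data/project/mytool/hourly.sh
    crontab -e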
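The "cache at the tool and serve the JSON locally" idea being discussed here could be sketched as a small CGI like the one below. Everything in it is illustrative, not an existing tool: the cache directory, the TTL, the loc.gov URL shape and the missing query validation are all assumptions, and the web server would need CGI enabled for it to run.

    #!/bin/bash
    # remotefetcher.cgi -- fetch third-party JSON server-side, cache it, serve it locally,
    # so the user's browser never contacts the third party directly.
    set -u
    CACHE=/data/project/citeimage/cache
    TTL=3600                                  # seconds before a cached copy is refreshed
    mkdir -p "$CACHE"

    # NOTE: real code must whitelist/escape the query; this sketch skips that entirely.
    QUERY=${QUERY_STRING:-}
    URL="https://www.loc.gov/pictures/search/?fo=json&q=${QUERY}"
    KEY=$(printf '%s' "$URL" | md5sum | cut -d' ' -f1)
    FILE="$CACHE/$KEY.json"

    # refresh the cached copy if it is missing, empty or stale
    if [ ! -s "$FILE" ] || [ $(( $(date +%s) - $(stat -c %Y "$FILE") )) -gt "$TTL" ]; then
        curl -fsS --max-time 10 "$URL" -o "$FILE.tmp" && mv "$FILE.tmp" "$FILE"
    fi

    printf 'Content-Type: application/json\r\n\r\n'
    cat "$FILE"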
[06:05:00] jeremyb: so you can just do tools.wmflabs.org/remotefetcher/ [06:05:02] * jeremyb goes searching [06:05:02] and it'll get it [06:05:07] lighthttpd also prolly has it [06:05:13] hrmmmm, interesting scheme :) [06:05:39] jeremyb: and then you can fetch remote things, get back whatever directly, and don't have to display a disclaimer [06:05:50] right, that was the point :) [06:06:14] yeah [06:06:20] the next thing i thought of was maybe the content fetched could be evil. but as long as it's JSON instead of JSONP that should be fine [06:06:30] jeremyb: heh, yeah [06:06:58] jeremyb: file a bug and assign to me? [06:07:55] > This module is a 3rd party module and is not included in the official distribution. You can download the patch from here: [06:07:59] http://redmine.lighttpd.net/projects/1/wiki/Docs_ModCache [06:09:08] heh, yeah, not happening [06:09:17] YuviPanda: is there a varnish in tool labs already? [06:09:22] nope [06:09:45] i've a dynamic proxy solution that I need to finish up and deploy. then we can easily get varnish in front of it [06:10:14] it's going live labswide soon, but not on tools yet [06:10:34] yuviproxy? :) [06:10:39] i'm using it already [06:10:41] oh? [06:10:49] for what? [06:10:52] I didn't realize :D [06:11:10] i have one wiki at ~3 different hostnames [06:11:14] ah [06:11:20] i think 2 are yuviproxy [06:11:21] https://crisiswiki.wmflabs.org/wiki/Main_Page [06:11:28] if it has https then yes, it is :D [06:11:46] jeremyb: I'm considering working on getting all tools their own domain name too [06:11:54] so it'll be .tools.wmflabs.org :D [06:12:10] will make a lot of things super easy [06:12:51] YuviPanda: i was thinking we should think about getting labs in http://publicsuffix.org/ [06:13:20] oh [06:13:25] hmm [06:13:37] jeremyb: moving to .tools.wmflabs.org would definitely help with cookie privacy [06:14:42] YuviPanda: well i decided more people should discuss than just me thinking about it... [06:14:48] jeremyb: hehe [06:14:51] email labs-l? [06:15:19] nah, i'm way too busy this week [06:15:23] i should be asleep in fact [06:15:30] jeremyb: Go to sleep! [06:15:37] I... have also been too busy for labs work :( [06:15:41] have a patch in on wikibase tho [06:23:57] YuviPanda: so, if i wanted to work on integrating somehow with varnish or yuviproxy or something... how? [06:24:10] YuviPanda: and generally do you need help with yuviproxy? [06:24:31] jeremyb: currently, it needs performance testing [06:24:38] I don't know how many req/s it can do [06:24:51] I need to do that and tune that before I can considering putting it in front of tools [06:25:11] YuviPanda: ok... and how to monitor metrics to see how it's doing? [06:25:21] jeremyb: you don't need anything on the server [06:26:02] jeremyb: you need to setup a server on a host that's 1. fast and 2. is serving just one tiny thing [06:26:16] and then use something like httperf or ab to hit that, and then see how many reqs/s it can do [06:26:31] and then hit your host *directly*, and see how many reqs/s *that* does [06:26:37] and then you can calculate the overhead easily [06:26:53] YuviPanda: i meant e.g. to see load avg, CPU use, etc. [06:26:59] YuviPanda: i am NDA'd fwiwi [06:27:12] jeremyb: i bet that's on ganglia [06:27:29] jeremyb: if not, you could help add it! :D [06:27:41] ok. 
well maybe let's talk in a week :) [06:27:54] for now i'm disabling the tool i think, maybe will add a warning tomorrow [06:27:59] jeremyb: http://ganglia.wmflabs.org/latest/?c=project-proxy&h=dynamicproxy&m=load_one&r=hour&s=by%20name&hc=4&mc=2 [06:28:13] jeremyb: okay! Thanks for the offer! [06:32:44] *click* [06:33:04] ok :) [13:47:26] !log deployment-prep upgrading varnish on all caches. [13:47:32] Logged the message, Master [14:13:06] !log deployment-prep changing Parsoid from 4 months old cdbfdbb to 986c1e7 [14:13:11] Logged the message, Master [14:14:25] !log deployment-prep deleting and reinstalling Parsoid node modules dependencies [14:14:31] Logged the message, Master [14:18:20] !log deployment-prep Flow was no more functional due to some backtrace in Parsoid daemon ({{bug|56781}}). Solved by upgrading Parsoid, reinstalling its dependencies and restarting it. Test page is http://en.wikipedia.beta.wmflabs.org/wiki/Talk:Flow_QA [14:18:26] Logged the message, Master [16:17:54] YuviPanda: I took another look at whether or not Utrs is a project or not, and it seems to be listed on the projects page [19:07:32] Is there any root available who can help me with my homedir which still belongs to root so I cannot access it? [19:07:48] you should specify which machine you're talking about [19:08:05] tools-login [19:08:25] !log wikimania-support Updating scholarship-alpha to latest sprint work [19:08:25] krd@tools-login:/$ ls -l /home | grep krd [19:08:26] drwx------ 2 root root 20 Oct 13 14:44 krd [19:08:27] Logged the message, Master [19:08:51] bd808: hah, so funny to see that. never was a WMF thing before [19:09:19] krd: yes, i see the same thing [19:09:43] maybe Coren could help? [19:10:04] jeremyb: Yeah. We're going to be hosting it this year. I've been doing quite a bit of code cleanup on the historic app to get ready for a security review. [19:10:15] Yeah, that's broken. I can fix, gimme a sec [19:10:23] Ryan_Lane / Coren: are either of you available to approve a new project [19:10:25] ? [19:10:40] TParis: link? [19:10:57] TParis: this is for a bot/tool, right? [19:11:07] why can the tools project not handle your need? [19:11:08] unblock ticket request syste [19:11:16] I need to be able to see the HOST_ADDR [19:11:29] for what purpose? [19:11:29] YuviPanda told me that the toolslab cant do that [19:11:46] !log wikimania-support Updated scholarship-alpha to 73cddcd [19:11:47] Logged the message, Master [19:11:48] Because the project handles English Wikipedia unblock requests. Checkusers need to be able to see the requesting address. [19:11:59] krd: Fixed [19:11:59] TParis: you got your plurals backwards [19:12:42] Coren: Thank you! [19:12:47] Coren: https://wikitech.wikimedia.org/wiki/New_Project_Request/Unblock_Ticket_Request_System [19:13:02] TParis: You realize that you'll need to put the disclaimers before the user submits a request, right? [19:13:55] I currently have the disclaimer in the privacy policy, but I can add it to the appeal page as well if needed [19:13:56] !log wikimania-support Created twig cache directory [19:13:58] Logged the message, Master [19:14:03] please create a debconfwiki project [19:14:10] https://wikitech.wikimedia.org/wiki/Wikitech:Labs_Terms_of_use#If_my_tools_collect_Private_Information... [19:14:13] TParis: ^^ [19:14:16] essentially the same use case as crisiswiki was [19:14:20] Legal has specific verbiage. [19:14:32] TParis: Who are the projectadmins? 
[19:14:34] Coren: oh, cool, nice to have prewritten [19:14:44] I copied it verbaitam to my privacy policy, but I can also put it on the appeal page [19:14:51] DeltaQuad, CrazyComputers, and I [19:17:12] ... there already /is/ an UTRS project. [19:17:43] With you, THO, CSteipp and DQ [19:17:51] * jeremyb has edited the terms of use now :) [19:18:03] !log wikimania-support Dropped and recreated database tables [19:18:04] Logged the message, Master [19:18:11] Perhaps you should coordinate with whichever of them has already started the job? :-) [19:18:22] (it wasn't me, I didn't even know I had access.. :P ) [19:18:54] Coren: Interesting. I was under the understanding that we were an instance on toolslab [19:18:56] There are no instances in the project though; so I guess it's just preemptive project creation. :-P [19:19:05] Okay, now I'm confused [19:22:19] "When we say a “Labs Project”, we mean the virtual environment that your account on Wikimedia Labs allows you to use." seems very um wrong? [19:26:27] A labs project is, basically, a set of virtual machine administered by the project admins. [19:26:41] Tool labs is one of those labs projects. [19:27:10] Think of it as a "customer" of sorts. [19:27:43] TParis: So yeah, you already have an utrs project, and you're already admin of it. So you didn't even need me. :-) [19:28:12] Okay, Ill see if I can get it working [19:28:49] Coren: Is there any possibility to get "cvs" installed at tools-login? [19:29:07] ... cvs? Really? [19:29:13] ow [19:29:38] * Coren checks to see if that's still even maintained. [19:29:42] Yes, it works and I'm using it. [19:30:07] Yeah, it's still maintained (wow!). I'll install it. [19:30:12] Thx [19:34:40] krd: Puppet push in progress; should arrive presently. [19:34:49] thank you. [19:35:09] All done. [19:35:42] * Coren half-expects someone to requests RCS, now. :-) [19:36:26] 208.80.152.0/22 is the only prefix tools-login uses? [19:36:59] any ipv6? [19:50:46] Coren: we were using cvs in production until maybe 6 months ago [19:51:07] ... I am *so* glad I never saw that. :-) [19:51:24] krd: No IPv6 yet [19:52:09] he left [19:52:19] cvs was used for maybe squid or something? [19:59:28] Ryan_Lane/Coren: I still seem to be having trouble. I am on bastion but I cant seem to connect to the UTRS project [19:59:42] is there a specific instance you are trying to connect with? [19:59:43] i see two instances on wikitech.wikimedia.org: utrsdb and utrsweb [19:59:46] utrsweb [19:59:58] says Permission denied(publickey) [20:02:42] I got in just fine [20:02:44] !log wikimania-support Updated scholarship-alpha to 673923c [20:02:45] Logged the message, Master [20:03:04] TParis: can you try now? [20:03:18] you're going through bastion.wmflabs.org? [20:03:32] are you using key forwarding or proxy command? [20:06:03] Umm, neither? 
[20:06:08] I just ssh'd [20:06:15] that's not going to work ;) [20:06:25] you need to go through the bastion [20:06:37] Heh, thought thats what I was doing [20:06:37] which means you either need to use proxy command or agent forwarding [20:06:39] !access [20:06:39] https://wikitech.wikimedia.org/wiki/Access#Accessing_public_and_private_instances [20:06:47] I'm ssh'd to bastion [20:06:55] I just cant get to utrsweb from there [20:07:06] right, but to ssh past the bastion you need to forward your agent to the bastion [20:07:23] or you need to connect directly to the utrs systems using proxy command [20:11:52] yeah, that help page is definitely not written for beginners [20:11:58] okay, lets see if I can figure this out [20:12:30] I recommend ProxyCommand, BTW. [20:12:48] Does putty support proxycommand? [20:13:14] putty does agent forwarding [20:13:18] That I don't know. I use openssh in Linux. [20:13:48] then you can give a command to execute on connection [20:14:07] * anomie googles for "putty proxycommand" and finds some promising results [20:20:57] TParis: maybe useful, i can't vouch for it: https://wikitech.wikimedia.org/wiki/Help:Access_to_instances_with_PuTTY_and_WinSCP#How_to_set_up_PuTTY_for_proxying_through_bastion.wmflabs.org_to_your_instance [20:21:05] TParis: please stop using windows. kthx [20:21:51] haha, I actually have like 6 or 7 different OS's at home [20:22:02] so use one that's not windows :) [20:22:41] I have Windows 7, Mac OSX, Ubuntu 12, Windows Vista, Windows XP, Windows 98, and Android OS [20:23:01] Most of them are vitual machines, mind you, but I have em :) [20:54:11] TParis: can you add me to the UTRS project? [20:54:28] Ryan_Lane: Coren UTRS probably needs a public IP, since they want to use the user's IP [20:54:36] can you give 'em one? [20:58:01] andrewbogott: ^ [20:59:15] YuviPanda: You want me to add you too, or just allocate the IP? [20:59:27] I've added him [20:59:27] andrewbogott: alllocate an IP? [20:59:27] TParis: can you also add me as admin? You can de-admin me afterwards [20:59:49] YuviPanda: ok, quota=1 [20:59:53] ty and [20:59:55] ty andrewbogott [21:00:18] np [21:00:51] done [21:00:54] thanks AndorraLaVella [21:00:56] andrewbogott* [21:02:04] * YuviPanda refreshes [21:02:19] With that public IP, can I ssh to it directly now? [21:02:39] TParis: once we use it, yeah. I'm going to allocate it to utrsweb now [21:02:59] I see okay [21:03:20] sorry, just trying to learn while you do it ;) [21:03:45] TParis: sure! [21:03:58] TParis: you can see IP / host name associations in 'Manage Addresses' [21:04:01] TParis: in the sidebar [21:04:19] Found that part, didnt realize I had to associate it to an instance after it got assigned [21:04:23] allocated* [21:04:43] yeah, so IPs are 'quota'd per project, and the project admins can assign them to whichever instance [21:04:48] and can reassign whenever they want too [21:04:55] they are called 'floating' IPs for that reason [21:05:20] YuviPanda: I take it this can't be done by proxy? [21:05:24] I see [21:05:29] andrewbogott: nope, needs access to user's IP [21:05:42] Not to track or record it I hope [21:05:48] Yes [21:06:06] andrewbogott: this is the long-used UTRS project, so I'm going to assume that TParis knows what he's doing :D [21:06:11] It's the English Wikipedia unblock project. Checkusers need access to the information used to create an appeal. [21:06:24] Coren instructed me to use the privacy message on the appeal submission page. 
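For the OpenSSH side of the ProxyCommand setup recommended above (PuTTY users have the wikitech page linked earlier), a minimal ~/.ssh/config could look like the sketch below; the username, key path and the .pmtpa.wmflabs internal suffix are assumptions to adapt to whatever "Manage Instances" shows.

    # ~/.ssh/config
    Host bastion.wmflabs.org
        User tparis
        IdentityFile ~/.ssh/id_rsa

    # everything inside labs hops through the bastion
    Host *.pmtpa.wmflabs
        User tparis
        IdentityFile ~/.ssh/id_rsa
        ProxyCommand ssh -W %h:%p bastion.wmflabs.org
        # on older OpenSSH without -W: ProxyCommand ssh bastion.wmflabs.org nc %h %p

    # then simply:  ssh utrs-primary.pmtpa.wmflabs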
[21:06:43] Ah, ok, that's what I was about to say… 'make sure you have a privacy disclaimer' [21:07:03] andrewbogott: can you login to utrsweb with the root key? [21:07:07] just trying to check if it is dead [21:07:16] TParis: do you know who created these instances? [21:07:50] No, I don't. I wasn't even aware they existed until today. [21:07:53] YuviPanda: Not dead, gluster seems to be working OK. [21:08:26] andrewbogott: hmm, icmp ping doesn't go through. blocked by default, I guess [21:08:43] Are you using the public IP or internal IP? [21:08:56] tried public first, now internal [21:09:02] okay, internal works [21:09:03] yay [21:10:29] TParis: I'm going to create a new instance and move the public IP to that. should make things easier. utrsweb seems to be on an older image too. [21:11:16] I can't tell who created utrsweb. utrsdb seems to've been created by 'DeltaQuad' [21:11:38] I do not know who that is, but can provide email contact for them offline if you want it. [21:12:08] nah, I'm going to leave it there and just create a new instance [21:12:33] that seems better anyway :) [21:12:43] yup yup [21:12:45] YuviPanda: Sounds good [21:12:46] Although I guess it would be nice to free them up if they're unused... [21:12:48] Sorry, I was afk [21:12:56] Had to grab a beer, it is veterans day after all [21:13:01] * andrewbogott is right now trying to hire an intern to address the 'free up if unused' problem [21:13:03] andrewbogott: yeah, I can do that once we've been in touch with the others [21:13:34] andrewbogott: I know DQ pretty well [21:13:36] what does count as unused? [21:13:41] Meet him personally at the Berlin Hack-a-thon [21:15:07] andrewbogott: ugh, access to all is 0.0.0.0/32? [21:15:10] right? [21:16:29] YuviPanda: I think it's the other way 'round -- /32 limits to the last bit [21:16:32] So you want /0 [21:16:34] gah [21:16:35] right [21:16:52] giftpflanze: That's part of the question :) [21:17:04] ah :) [21:17:06] I mean, for sure if the person who created it says "you can delete that" then it is unused. [21:17:17] But other than that, it's a hard problem. [21:17:20] heh [21:17:32] Maybe 'no incoming network traffic for X months' or something like that? [21:18:02] andrewbogott: is there a class I need to apply for letting ssh via public IP work? [21:18:06] i added the security group? [21:18:24] Hm… I think ssh is open by default. [21:18:25] Not working? [21:19:00] TParis: 'security groups' are labs way of doing firewalling. If you look at 'Manage Addresses' link in the sidebar, it lists the different groups that have been created for the project. in the 'manage instances' link, each instance lists what security groups it is assigned to [21:19:09] andrewbogott: nah, just created instance, waiting for it to boot. just asking :D [21:19:36] YuviPanda: I was wondering about that. It seemed like it should relate to user groups but it appeared to deal with IPs and ports [21:19:50] TParis: yup. terribly named, IMO :D [21:19:55] lol [21:20:19] TParis: also, you can't add / remove secuity groups to an instance once it is created, but you can edit the rules themselves. another terrible thing :P [21:21:35] 'security groups' is what OpenStack calls 'em. They should just be called 'firewall rules' imo [21:21:50] TParis: try 'ssh utrs.wmflabs.org' now? [21:23:08] TParis: I'm going to give myself sudo now. 
You can check that out in the 'Manage Sudo Policies' link [21:23:11] same error as before "Permission denied (publickey)" [21:23:28] I checked to make sure my public key was the same as the one on Wikitech [21:23:33] And I checked my passphrase [21:23:35] It doesnt make sense [21:23:37] sense* [21:23:51] TParis: ugh [21:23:54] TParis: ssh -v? [21:24:11] TParis: so we can be sure it is attempting the correct public key? [21:24:48] I can copy it again to be sure [21:25:06] okay! [21:25:31] andrewbogott: do i need to restart machines for sudo policy to take effect? [21:25:37] okay im in :D [21:25:45] Hm… I don't know. Won't hurt :) [21:25:49] I deleted the key I had, copied over my backup [21:25:51] TParis: wooot! [21:26:23] TParis: okay, so first thing I'm going to do is to switch storage from current default GlusterFS to NFS [21:26:32] NFS isn't the default yet, but GlusterFS is rather unreliable [21:26:40] I trust you, but do you mind telling me what that does? [21:26:45] Just out of curiosity [21:26:51] what does unreliable mean? [21:27:03] TParis: sure! [21:27:04] Nah, but what they are. Are they filesystems? [21:27:09] I like how YuviPanda complains about NFS quite a bit, but also the first thing he does with any new instance is... [21:27:12] TParis: this is 'project shared storage' file systems [21:27:32] TParis: it is a shared file system that is shared across the different instances in the project [21:27:39] But, yeah, NFS is probably better... [21:27:49] I haven't gotten gluster complaints lately, but maybe that's because no one uses it anymore [21:27:50] oic...like a SAN? [21:27:58] andrewbogott: hehe, see - if gluster gets stuck, someone needs to come here and poke you guys. if NFS gets stuck, the angry toollabs folks will needle you all :D [21:28:07] TParis: sortof, but a lot.. lowertech? [21:28:11] okay [21:28:19] shared drive? [21:28:26] TParis: yeah, a much closer analogy [21:28:42] TParis: Pretty much nothing important should be stored 'on' your instance. Stuff in /data/project or /home is mapped to redundant systems and backed up and such. [21:28:43] got it [21:28:47] but better performing than Windows' implementation, and better integrated into linux :) [21:28:52] Whereas instance storage itself is… mostly just there as scratch pad for the running instance. [21:29:02] And less easily recovered in case of disaster. [21:29:13] Okay, I get that [21:29:13] TParis: this also means that your home directory is shared across all the instances [21:29:20] okay [21:29:24] which is quite useful [21:29:48] So the instance is just, essentially, a virtualized configured service. [21:30:02] TParis: it is meant to be, yeah. [21:30:05] Got it [21:30:16] TParis: theoretically, you should be able to kill an instance, create a new identically configured one and lose nothing [21:30:28] since all data is on Shared Storage. [21:30:48] I see, so I already have UTRS loaded on a toollabs account somewhere, so that same data will appear here? [21:31:07] TParis: nope, it's from a *different* project. so won't appear [21:31:13] TParis: yw [21:31:25] andrewbogott: is that an accurate understanding? shared storage is 'scoped' per project [21:31:29] andrewbogott: I saw your name in the userlist - almost thought I created an account for myself somehow [21:31:37] AndorraLaVella: Haha, sorry for the ping [21:31:37] YuviPanda: yep, correct.
[21:32:00] YuviPanda: So same storage, different part [21:32:09] yup [21:32:11] andrewbogott: Not a big deal, we just share a name ;) [21:32:16] different access controls [21:32:28] TParis: i'm rebooting the machine now. [21:32:41] Man, I used to always have my name to myself. But then the 80's came along... [21:32:57] * YuviPanda still has his name [21:32:57] hehe '86 [21:33:31] * TParis hears My Little Pony downstairs [21:33:40] woot, NFS! [21:34:00] TParis: so, I configured the instance to use NFS [21:34:08] TParis: this can be done by applying 'classes' to each instance [21:34:16] TParis: you can check that out by going to 'Manage Instances' [21:34:25] TParis: and clicking 'configure' next to the instance we are interested in [21:34:29] (in this case, utrs-primary) [21:34:57] TParis: each checkbox adds a 'puppet class' to that particular instance, and configures it in a particular way [21:35:28] TParis: the definitions for these puppet classes live in the operations/puppet.git repository, and we can make changes to them via gerrit if need be [21:35:45] TParis: does that make sense? [21:35:48] That's way above my head now [21:35:52] heh [21:35:53] okay [21:36:12] So, you just clicked that role::labsnfs:client and it writes linux config files to use nfs? [21:36:19] TParis: yup! [21:36:22] it's magical! [21:36:23] Did you uncheck the other one somewhere? [21:36:34] TParis: nope, glusterfs is there by 'default', so it isn't needed [21:37:00] TParis: all this magic happens via http://puppetlabs.com/ (https://en.wikipedia.org/wiki/Puppet_%28software%29) [21:37:50] TParis: so I'll just find what class to use for PHP+Apache+MySQL, tick it, and we'll have that running :) [21:38:52] So part of the puppet class for role::labsnfs:client is to also unwrite the config for glusterfs? [21:39:30] andrewbogott: ^ [21:39:40] TParis: from my understanding, it just overwrites it [21:39:48] Okay [21:40:04] I'm going to apply the ' role::lamp::labs' class now [21:40:34] linux apache mysql php? [21:40:46] TParis, I've never paid much attention to what it does but, yeah, it turns off Gluster access for the instance. [21:40:48] Do I need the utrsdb if we put mysql on this box? [21:41:01] TParis: there's a cron running every 30 mins or so, checking what all roles have been applied in the wikitech UI and actually applying them to the instance itself [21:41:11] But any new instance will default to gluster, so it can get confusing, having two different storage systems within the one project. [21:41:13] you can 'force' it to run instantly by doing 'sudo puppetd -tv' on the instance [21:41:18] TParis: nope, we don't need utrsdb. [21:41:22] Someday (in the year 2000) we'll have project-wide puppet configs. [21:41:47] TParis: I intend on killing both of those other instances once we hear from the folks who created them [21:41:54] andrewbogott: Someday skynet will take over and just do it for us. [21:42:05] PUPPET IS SKYNTE! [21:42:10] awkward typo [21:42:41] YuviPanda: DeltaQuad has taken a back seat to this project, he's too busy in school. As far as I know, he may have created these instances but he never used them. [21:42:46] right [21:42:58] I can get the whole UTRS system working with the help you're giving me [21:43:01] TParis: still, would be nice to get a confirmation before I completely kill it. it isn't recoverable [21:43:04] once we kill it [21:43:08] Okay, let me see if I can grab him [21:43:16] and it's trivial enough to delete [21:43:26] andrewbogott: are there per-project instance related quotas?
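After ticking classes in the "configure" screen as described above, a quick way to apply them immediately and verify the result on the instance is sketched below. The class names come from the conversation; the service checks at the end are just examples and assume stock package names.

    sudo puppetd -tv                    # force a puppet run instead of waiting for the ~30 min cron

    # confirm shared storage is now NFS (role::labsnfs::client) rather than Gluster
    mount | grep -E '/data/project|/home'
    df -h /data/project /home

    # confirm the LAMP role (role::lamp::labs) brought up the pieces
    apache2ctl -S 2>/dev/null | head    # virtual hosts Apache knows about
    mysql --version
    php -v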
[21:43:35] i guess they are based on CPU cores or something [21:43:49] Yep, a CPU quota [21:44:00] Um… should be visible on the quota page [21:44:02] for a projectadmin [21:44:17] andrewbogott: i've... never seen the quota page :P [21:44:36] See it now? [21:44:42] * andrewbogott wonders if it still works [21:44:55] * YuviPanda waves at DeltaQuad [21:45:04] DQ: YuviPanda and andrewbogott are helping me set up a instance on utrs project for UTRS. YuviPanda needs to know if you did anything with the other two intances you created or if he can delete them [21:45:06] hey YuviPanda [21:45:15] hey DeltaQuad! [21:45:18] utrsweb and utrsdb [21:45:22] TParis: http://utrs.wmflabs.org/ has a working apache instance! [21:45:39] I created those? O_o [21:45:40] Yay, thank you so much [21:45:48] DeltaQuad: Yes, yes you did. [21:46:13] oh ya 'cause I think I was trying to mirror what simon was doing [21:46:56] andrewbogott: my plan for this instance is to setup a backup script that rsyncs code and mysql data to NFS from instance storage regularly. thoughts? [21:47:51] TParis: it shouldn't hurt anything to delete them, but they may have been for data seperation purposes. I would have to have a long chat with Simon about it again [21:48:29] TParis: I guess you can start setting it up from here? [21:48:35] mew, i use mutexes in my threaded script but still the output gets mangled (or the input i haven't checked) :( [21:48:39] Yuvi, I can if I wont be in your way [21:48:49] TParis: hah, nope :) [21:49:09] You've really been a godsend YuviPanda, thank you so much for the help [21:49:27] TParis: :D [21:49:42] TParis: I'll still need to setup a backup cron of sorts, will do that soon [21:50:20] TParis: barnstars appreciated :P [21:50:59] Haha, well yes, absolutely deserved [21:52:08] TParis: a couple of notes [21:52:23] TParis: remember that instances are supposed to be able to go away at any time. Everything important should be on NFS [21:52:44] TParis: oh, and NFS is mounted on /data/project, so you can put things there [21:53:01] it's slower than instance storage tho, so putting things like mysql data files, etc, there are a bad idea [21:53:22] TParis: but if you are storing files directly, please make sure they are under /data/project [21:53:30] Okay, so the php files should go there? [21:54:10] Or just a copy of the php files? [21:54:14] TParis: I am hoping the PHP files are on git somewhere :D [21:54:18] if not, they should be [21:55:01] They were until Crazycomputers complicated the shit out of it. I've been making changes directly to the files since than and I havent uploaded to git in months [21:55:20] ugh [21:55:32] But he's a git master, once he returns he'll get it all worked out [21:55:33] I'd *highly* reccomend putting them back on git somewhere [21:55:49] okay, I'll see about updating them [21:57:10] TParis: okay! [21:57:23] * TParis smiles at DeltaQuad [21:57:30] TParis: I just wanted to re-iterate the idea that anything that's not in /data/project should be considered ephemeral [21:57:32] Your a bit of a git-master yourself DQ [21:57:34] just avoiding future surprises :D [21:58:29] I just sent CrazyComputers an email, I'll try to get him on tonight to sort it out [21:58:30] TParis: let me know if you need any further help :D I'll slink away now [21:58:36] Thanks YuviPanda [21:58:40] yw, TParis! 
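The "rsync code and a database dump to NFS" plan mentioned above could look something like the sketch below. It is not the change linked from gerrit 94149; the paths, database name and credential handling are assumptions.

    #!/bin/bash
    # utrs-backup.sh -- copy ephemeral instance-local data onto shared storage (/data/project is NFS)
    set -e
    DEST=/data/project/utrs/backup
    mkdir -p "$DEST"/code "$DEST"/db

    # application code/config living on instance-local storage
    rsync -a --delete /srv/utrs/ "$DEST/code/"

    # a dump is safer to copy than live InnoDB files (assumes credentials in ~/.my.cnf)
    mysqldump --single-transaction utrs | gzip > "$DEST/db/utrs-$(date +%F).sql.gz"

    # keep two weeks of dumps
    find "$DEST/db" -name 'utrs-*.sql.gz' -mtime +14 -delete

    # root's crontab could then carry something like:
    #   17 */6 * * * /usr/local/sbin/utrs-backup.sh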
[22:00:02] TParis: wtf, no, Chris is [22:00:20] I barely, barely understand it [22:00:24] enough to operate it [22:01:14] Well, I emailed him [22:01:45] On another note, DQ, I just installed a new feature into UTRS yesterday. It uses the API to detect a person's on-wiki permissions to verify if they have the correct access [22:01:49] Caught two CUs who arnt CUs anymore [22:11:24] andrewbogott: does chowning /data/project cause issues? [22:11:29] i guess it's a bad idea in general [22:11:40] YuviPanda: rsyncing seems fine. sample code: https://gerrit.wikimedia.org/r/#/c/94149/ [22:11:50] YuviPanda: Yeah, I wouldn't chown the whole thing, just a subdir. [22:12:15] TParis: chowning the whole thing can cause random unexpected issues, I think. Would suggest chowning a subdir [22:12:46] even if I leave the group as root? [22:14:00] TParis: it's not particularly determinable. It might not cause problems at all, or it might silently corrupt data. we don't know, and hence I suggest against doing that [22:16:33] okay [22:33:45] Hi I just want to run few SQL queries, I logged into bastion, what should i do next [22:36:07] Arjunaraoc: heya! [22:36:21] hi YuviPanda [22:36:23] Arjunaraoc: if you want to run queries against the wiki databases [22:36:27] you shouldn't be on bastion [22:36:40] Arjunaraoc: read https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help [22:36:44] you need to use toollabs [22:40:49] Ok. I requested a tool account https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Access_Request/Arjunaraoc [22:41:21] Arjunaraoc: let me grant [22:42:12] Arjunaraoc: added! [22:42:46] thanks YuviPanda [22:42:52] yw Arjunaraoc [22:47:53] I got the query working, thx Yuvi [22:48:09] Arjunaraoc: yw! [22:48:24] Arjunaraoc: in the future, I'm planning on writing a simple web interface that people can use to run queries [22:48:30] no need to create an account, etc [22:48:36] Arjunaraoc: do you think that'll be useful for you? [22:52:25] YuviPanda: I guess most tools basically translate the request to a specific query. I think you are suggesting a general query. As some knowledge of database queries is required, I am doubtful about its utility. [22:53:14] It might help, as generally available terminals do not support the indic display well so far. [22:53:21] hmm, alright [22:53:26] ah, yeah, that's a pain [22:53:38] anyway, I'll let you know if I do that :D [22:54:00] There is some talk of a new terminal in KDE that does manage indic. I have not tried. [22:54:40] I see [23:02:28] andrewbogott: does http://utrs.wmflabs.org/index.php satisfy the terms of use? [23:03:19] andrewbogott: Do we also need to require our checkusers to agree to the Wikimedia Labs terms of service since they will have access to that data? [23:03:27] Our normal users (Wikipedia admins) dont have access to it [23:10:24] Ryan_Lane: can i get a public ip on crisiswiki? temporarily, have to receive a few files from a non-labs person [23:10:39] you can scp without a public IP [23:10:50] they don't have a labs account [23:10:59] i could just use my own personal VM i guess [23:12:32] i was setting up an sftp only (local) system user [23:12:41] how would they upload the files otherwise? [23:12:42] eww [23:12:50] upload them using your account on their behalf [23:12:57] right, but how to get from them [23:12:58] or make them get accounts [23:13:06] can't they upload to the wiki? [23:13:09] i wish they just put it up on http somewhere [23:13:20] i did think of that! 
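On the chown question a few lines up, the safer pattern is to leave /data/project itself untouched and hand the application its own subdirectory; the directory name, owner and group below are placeholders.

    sudo mkdir -p /data/project/utrs
    sudo chown -R www-data:www-data /data/project/utrs
    sudo chmod 2775 /data/project/utrs      # setgid, so new files keep the directory's group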
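For anyone following Arjunaraoc's path above: once the tool account exists, running queries against the replicas looked roughly like this at the time. The `sql` wrapper, the *.labsdb hostname and the sample query are illustrative; the Tools/Help page linked earlier is the authoritative reference.

    ssh tools-login.wmflabs.org
    become mytool                          # placeholder tool name

    sql enwiki_p                           # convenience wrapper that picks the right replica host

    # or explicitly, using the credentials file in the tool's home directory:
    mysql --defaults-file=$HOME/replica.my.cnf -h enwiki.labsdb enwiki_p \
          -e 'SELECT page_title FROM page WHERE page_namespace = 0 LIMIT 5;'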
[23:13:35] you can also set up webdav [23:13:40] hrmmmm [23:14:09] webdav through the proxy? [23:23:09] jeremyb: yeah, why not/ [23:23:14] just don't go crazy ;) [23:23:27] i don't know much about webdav [23:23:46] i think i've given up and started creating a new node somewhere else [23:24:43] (don't have time to spare right now, want to get the files from them now because they're willing now and are actually answering mails [23:24:46] ) [23:25:00] if i wait maybe they'll stop responding :) [23:28:20] why so silence is this room [23:28:29] should we speak every 5 secs? [23:28:39] ur not speaking ur typing [23:28:39] but sure [23:28:47] Ryan_Lane: btw, idk if you saw the request before. can you make a debconfwiki too? in that case current hosting is fine but similar use case of staging/dev/evaluating new extensions [23:29:07] hm. we really need to make a mediawiki project [23:29:15] a project for each of these is seriously overkill [23:29:19] do you have any ideas what that would look like? [23:29:23] yes [23:29:28] both of these have substantial spam issues and so the first step is to address those [23:29:30] did I write a doc on this? [23:29:33] idk [23:29:45] what project [23:29:49] https://wikitech.wikimedia.org/wiki/Projects/mediawiki_Labs_project [23:30:29] what I'm proposing is basically a PaaS using trebuchet (the salt stack deployment system I wrote) [23:30:37] I'd so get on that if I weren't on apps. [23:30:38] grrr [23:30:41] seem lame [23:31:01] YuviPanda: we obviously just need to switch you to ops [23:31:02] Ryan_Lane: you've seen docker i guess? [23:31:12] i've been thinking about docker for tool labs [23:31:19] jeremyb: yes, I think I mentioned docker in that doc [23:31:26] hm, maybe I didn't [23:31:42] Ryan_Lane: heh, yeah. maybe once the current app is done [23:31:43] lets just melt it [23:31:43] anyway, I think docker/containers are the way to go for this [23:31:44] and start over [23:31:53] :P [23:32:01] * YuviPanda has yet to start playing with docker/similar things [23:32:24] lets communicate with ET's [23:32:25] :) [23:32:55] lets use a satelite :P [23:33:59] how productive :P [23:34:38] I think docker isn't really necessary for what we want, though [23:34:59] Ryan_Lane: yeah, that page doesn't mention docker or containers [23:35:16] Ryan_Lane: and I think we had a discussion with paravoid in the office about this the last time I as there [23:35:24] I think just autoscaling instances would work fine for this [23:35:51] if load on the instances is too high, add a new instance and move things around in the yuvi proxy [23:35:55] yup [23:36:06] Ryan_Lane: I need to load-test the proxy. [23:36:07] by having git reference clones of everything we can have all the repos on every system [23:36:11] any ideas on how to do that accurately? [23:36:22] I bet there are plenty of places to optimize [23:37:02] also, since trebuchet uses pillars for configuration, we can load the repo config dynamically using external pillars [23:37:33] which means we can create repos on the fly :) [23:37:52] hm. 
I don't actually have a good way of testing that [23:38:00] basically you'd want to see if you can saturate the link [23:38:31] I doubt you can before https uses the CPU [23:38:42] memory likely won't be an issue [23:38:56] also, it's possible that the redis lookups will be the bottleneck [23:39:11] YuviPanda: I'd set up 2-3 backend instances and make a proxy for each [23:39:23] hmm [23:39:24] yeah [23:39:24] then I'd generate a large number of requests to them [23:39:33] i bet redis lookups will be the bottleneck [23:39:35] ideally using static resources on the backends [23:39:57] right [23:40:02] Ryan_Lane: i could do it from inside labs itself [23:40:10] this is one of the reasons using varnish with VCLs would be an interesting solution for this problem [23:40:14] Ryan_Lane: directly hit the hosts, then hit them through proxy [23:40:21] because you don't need to look them up every time [23:40:24] and see how much of a difference it makes [23:40:33] Ryan_Lane: yeah, but I'd be interested in seeing how much performance this gives us [23:40:38] yep [23:40:41] Ryan_Lane: since it might be more than good enough [23:41:00] well, also, if you add a driver model to the api, you can support both nginx and varnish for this [23:41:10] and people can choose which one they want to use [23:41:24] true, once writes the varnish backend :D [23:41:25] so it's not a matter of rewriting :) [23:41:51] well, the idea would be that people could work on a number of solutions in parallel without it being an issue [23:42:01] sure! [23:42:09] as long as the driver API doesn't change [23:42:09] i think the APi would make that rather easy too [23:42:19] well, you'd need two APIs [23:42:27] the client API and a driver API [23:42:41] where the client API is REST and the driver API is python [23:42:45] invisible-unicorn currently has a RedisStore [23:42:50] which is used to store the backends [23:43:03] no reason it can't have whateverVCLWillUseStore [23:43:16] right, so you'd abstract that into a driver API, and RedisStore would be a client of that API [23:43:45] the abstraction is good so that you can version the driver api [23:43:52] right [23:44:49] funny enough, with enough effort this project will probably be nicer to use than openstack's load balancer service [23:45:15] which looks like an over-engineered nightmare [23:45:25] haha! [23:45:39] although if we add URL routing, we can't really have load balancing [23:45:47] why not? [23:45:58] it works perfectly fine in varnish and nginx as far as I know [23:46:38] in both nginx and varnish you set up backends [23:46:50] then reference the backends from the url routing [23:46:50] Ryan_Lane: it's a redis limitation, I'd think [23:46:55] oh? [23:47:20] Ryan_Lane: can't think of a way to do both url routing and load balancing without increasing the number of redis calls [23:47:35] we currently do only 1 redis call per req. if we do both, that'll go up to at least two [23:48:00] can you combine both calls into a single one? [23:48:14] I thought redis has batched queries [23:48:30] or do you need the data from the first call to make the second? 
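A crude version of the overhead measurement described above: hit a small static file on a backend directly, then through the proxy, and compare. Hostnames, paths and request counts are placeholders; httperf works the same way.

    # 1. the backend directly, from another instance inside labs to keep the network constant
    ab -n 20000 -c 50 http://backend-1.pmtpa.wmflabs/static/hello.txt

    # 2. the same resource via the dynamic proxy
    ab -n 20000 -c 50 http://mytool.wmflabs.org/static/hello.txt

    # compare the "Requests per second" lines; the difference is the proxy's overhead.
    # roughly equivalent with httperf:
    #   httperf --server backend-1.pmtpa.wmflabs --uri /static/hello.txt --num-conns 20000 --rate 500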
[23:50:23] Ryan_Lane: hmm, you can batch them yeah [23:50:27] Ryan_Lane: maybe i'm overthinking it [23:50:35] Ryan_Lane: you don't need first call to make the second, no [23:50:41] yeah, batch then ;) [23:50:47] that should be fast enough [23:50:47] Ryan_Lane: this is also why I was hoping to get perf data :D [23:50:51] yeah [23:51:21] making two calls would indeed be a little slow [23:51:37] it's still O(1), just 2x the constant [23:51:45] Ryan_Lane: at least the connections are pooled [23:51:55] oh, that's not so bad then [23:51:59] but there's still a bit of latency [23:52:13] yup [23:52:17] and if this system were to scale you'd split redis apart from the web servers [23:52:26] Ryan_Lane: why so? [23:52:30] which increases the latency and the cost of individual queries [23:52:47] Ryan_Lane: I can have redis live on the same server, but have them replicate amongst each other [23:52:47] because you'd add multiple proxies and load balance between them [23:52:49] easy enough [23:52:56] ah, right [23:53:19] :D [23:53:23] it scales as a redis/nginx unit [23:53:28] indeed [23:53:51] well, then even multiple calls shouldn't be terrible [23:53:57] yeah [23:54:22] this is one case where varnish VCLs wouldn't be as nice ;) [23:54:29] because you'd need to replicate them to the servers somehow [23:54:37] Ryan_Lane: when do you think we can roll out the current version more broadly? [23:54:39] Ryan_Lane: heh, yeah! [23:54:48] I think it's probably good to go [23:54:50] Ryan_Lane: in fact, if you look at the guys who did hipache, they were replicating nginx config files [23:54:57] similar to how we'd have to replicate vcls [23:55:03] and then they had to graceful the nginxes [23:55:15] and as the number of rules got higher, their config meta-management system got worse [23:55:17] you don't need to graceful varnish [23:55:23] yeah, so that's an advantage [23:55:28] you just need to load the config, and the configs can be versioned [23:55:34] right [23:55:34] also, you could combine it with trebuchet [23:55:42] keep the vcl in a centralized git repo [23:55:50] do a two phase deployment [23:56:00] hell, you don't even need two-phase [23:56:13] hehe :D [23:56:14] yeah [23:56:21] just replicate the git repo, then send a varnish command to load the new repo with the tag as the version [23:56:27] yup [23:56:30] if you revert, it reverts to the older tag [23:56:58] I should add a one-stage command to trebuchet [23:57:04] not everything needs two-phase [23:57:08] s/stage/phase/ [23:57:15] I should check out salt properly soon [23:57:33] Ryan_Lane: I've been engulfed in the new app, so haven't had much time for labs stuff :( [23:57:39] yeah [23:57:42] it's no worry :) [23:57:51] you've already done one awesome project in it [23:58:02] more than most people can say ;) [23:58:42] Ryan_Lane: heh [23:58:52] two if you count redis on toollabs :P [23:59:05] ah, right [23:59:57] Ryan_Lane: one of the things I want to hack on is ipython notebooks on toollabs, and a open-to-all-simple way to run sql queries on toollabs
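Since the two lookups are independent, they can indeed be batched: a client library can pipeline them, or a tiny server-side Lua script via EVAL collapses them into a single round trip. The key names and the hash/set layout below are invented for illustration and are not the proxy's actual schema.

    redis-cli EVAL "
      local route   = redis.call('HGET', KEYS[1], ARGV[1])    -- url-routing table entry
      local backend = redis.call('SRANDMEMBER', KEYS[2])      -- one backend from the pool
      return {route, backend}
    " 2 frontend:routes frontend:backends:mytool mytool.wmflabs.org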
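And the "send a varnish command to load the new repo with the tag as the version" idea maps onto varnishadm roughly as below; the repo path, VCL file and tag names are placeholders.

    cd /srv/proxy-vcl && git fetch && git checkout v42

    varnishadm vcl.load proxy_v42 /srv/proxy-vcl/proxy.vcl   # compile and load under a name, no restart
    varnishadm vcl.use  proxy_v42                            # switch traffic to it atomically

    varnishadm vcl.list                                       # see what's loaded
    # rolling back is just:  varnishadm vcl.use proxy_v41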