[01:04:09] is login working? password submit form hangs indefinitely [01:04:36] [[Special:UserLogin]] that is [01:04:46] on wikitech.wikimedia.org [01:10:11] yurik: confirmed [01:10:12] looking [01:10:45] ah [01:10:55] backups [01:11:06] who needs them? [01:11:29] yurik , it finally completed for me, after 53 seconds [01:11:35] it's back to normal now [01:11:38] the backup finished [01:11:41] works for me now [01:11:44] oh yeah, it just finished too [01:12:02] we need to backup and purge all of the server admin log crap [01:13:20] Ryan_Lane, thx, another thing - could you add dr0ptp4kt to loginviashell? [01:13:48] i tried adding him to mediawiki-api, but it required that [01:14:45] one sec [01:16:02] yurik: done [01:16:30] Ryan_Lane, thanks! [01:16:33] yw [08:17:33] !log bots root: killing swap on bsql01 and using 10gb of that space for db files [08:17:36] Logged the message, Master [09:36:21] hashar, does beta still use git-deploy? [09:36:35] MaxSem: nop [09:36:38] MaxSem: we had a sprint before EQIAD migration [09:36:44] maxsem@deployment-bastion:~$ sync-dir wmf-config [09:36:44] No syntax errors detected in /home/wikipedia/common/wmf-config [09:36:44] no dsh on this host, aborting [09:36:48] to test out git-deploy on beta, but that has been abandoned [09:36:55] MaxSem: hooo sorry [09:37:06] MaxSem: so yeah that is slightly different. 
One simply has to sudo as mwdeploy [09:37:08] and git pull [09:37:14] lol [09:37:23] /home/wikipedia/common is linked to /data/project [09:37:27] which is shared by all instances [09:37:42] hmm, I sudoed as root - did I break anything?:P [09:37:45] the apache boxes symlink it as /usr/local/apache/common-local or something [09:37:51] yeah you broke everything :-] [09:38:08] yay [09:38:15] alias mwdeploy='sudo su --login --shell /bin/bash mwdeploy' [09:38:18] works well :-] [09:38:45] just chowned everything back to mwdeploy:mwdeploy [09:38:54] will eventually migrate that dir to scap / sync-dir [09:39:01] but that needs an update of all those scripts [09:39:21] okay, the changes have been deployed [09:39:40] mobile login works now, but did I break anything else? [09:39:52] just other users will have to sudo root [09:39:55] I will fix the perms [09:40:04] I should get the change automatically deployed [09:40:35] we have a jenkins slave on deployment-bastion now so that should not be too hard [09:41:18] fixing perms right now [10:51:50] Hi! My shell request is fulfilled here: https://wikitech.wikimedia.org/wiki/Shell_Request/DixonD But I can't ssh to bastion.wmflabs.org [10:52:13] Could anybody help me? [10:59:26] DixonD1 hi [11:00:06] petan: Hi [11:00:32] seems you already have the permissions [11:00:35] !access [11:00:35] https://labsconsole.wikimedia.org/wiki/Access#Accessing_public_and_private_instances [11:00:39] did you read this? [11:00:46] Actually I don't see my name in the bastion project [11:00:57] mhm... 
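The beta deploy flow described above boils down to the alias hashar pastes plus a git pull; here it is as a sketch (the alias is verbatim from the chat, the commented steps assume you have sudo rights on deployment-bastion):

```shell
# Become the mwdeploy user to update the shared MediaWiki config on beta.
alias mwdeploy='sudo su --login --shell /bin/bash mwdeploy'
alias mwdeploy   # show the definition we just set
# then, inside the mwdeploy login shell:
#   cd /home/wikipedia/common   # symlinked to /data/project, shared by all instances
#   git pull
```

Pulling as root instead works, but leaves root-owned files that break the next deployer, which is exactly the perms problem fixed later in the log.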
let me check [11:01:18] @labs-user DixonD [11:01:19] That user is not a member of any project [11:01:22] indeed [11:01:38] !log bastion +DixonD [11:01:41] Logged the message, Master [11:01:46] DixonD1 ur there [11:02:00] @labs-user DixonD [11:02:00] That user is not a member of any project [11:02:10] it takes some time for the bot to update its db [11:02:20] but you should be able to ssh to bastion now [11:02:24] @labs-user Beetstra [11:02:24] Beetstra is member of 2 projects: Bastion, Bots, [11:02:32] :-) [11:02:34] did not know [11:02:44] :o [11:06:32] well, I still have the same problem: "Server unexpectedly closed network connection." I will try to reupload the ssh key [11:06:43] DixonD1 try bastion2 [11:06:46] !bastion [11:06:46] http://en.wikipedia.org/wiki/Bastion_host; lab's specific bastion host is: bastion.wmflabs.org (208.80.153.194) bastion2.wmflabs.org (208.80.153.202) see !access [11:07:27] the same with bastion2 [11:08:15] ok, can you try using ssh with -vvvv option [11:08:34] then pastebin it somewhere with 10 minutes expiration or so (it may contain some IP's and such) [11:09:37] @notify Ryan_Lane [11:09:39] This user is now online in #wikimedia-dev so I will let you know when they show some activity (talk etc) [11:13:07] DixonD1 sorry [11:13:13] I made a mistake :o [11:13:38] DixonD1 I gave you access to bots instead of bastion [11:13:40] fixed now [11:16:03] hooray, I've got to bastion2) [11:16:24] actually, I need access to bots as well [11:16:39] that was my initial intent [11:18:34] ok you already got it :D [11:19:12] thanks a lot! [11:20:28] !botsdocs [11:20:28] https://wikitech.wikimedia.org/wiki/Nova_Resource:Bots/Documentation [13:06:43] petan: one more question. I don't have sufficient rights to create a directory in /data/project on bots-{3,4} [13:08:08] DixonD1 first of all, you shouldn't use bots-1, -3 or -4 [13:08:19] you can create that folder in /data/project/userdata [13:08:37] which instance should I use then? 
[13:08:51] !botsdocs [13:08:51] https://wikitech.wikimedia.org/wiki/Nova_Resource:Bots/Documentation [13:08:55] it is recommended to use grid [13:09:24] that means, you will store your bot on a network storage and use qsub or similar tool in order to submit your job [13:09:34] it will then run on a random execution node based on load [13:11:51] in case you needed a help setting it up let me know... or in worst case you could just manually run it on some execution node (bnrX) but that is a bad idea [13:15:53] let's start from the beginning. After I connected to bastion, to which instance should I connect? I'm just following https://wikitech.wikimedia.org/wiki/Help:Move_your_bot_to_Labs, I'm not really experienced with linux [13:16:30] bots-gs [13:18:00] Ah, sorry, I see it now from !botsdocs [13:18:31] np [13:18:40] just tell me what you need to do and I will tell you how [13:29:52] well, right now I just want to clone code from the github repository and schedule it to run once per day, for instance [13:30:21] okay, you can git clone on bots-gs to some shared folder like your home [13:30:27] or /data/project/userdata/somefolder [13:32:13] then, you should once start your thing to check it works - by doing "qsub -q main.q [13:32:31] for example qsub -q main.q blah.csh [13:35:34] Change on 12mediawiki a page Wikimedia Labs/Tool Labs/Help was modified, changed by MPelletier (WMF) link https://www.mediawiki.org/w/index.php?diff=661799 edit summary: [+726] /* How */ wip [13:37:58] !log bots petrb: installing some basic and useful packages on master server [13:38:03] Logged the message, Master [13:42:17] DixonD1 once you find out it works you can insert this to cron on -gs [13:42:43] Change on 12mediawiki a page Wikimedia Labs/Tool Labs/Help was modified, changed by MPelletier (WMF) link https://www.mediawiki.org/w/index.php?diff=661802 edit summary: [+731] /* How */ mo' [13:51:07] Change on 12mediawiki a page Wikimedia Labs/Tool Labs/Help was modified, changed by 
MPelletier (WMF) link https://www.mediawiki.org/w/index.php?diff=661804 edit summary: [+845] /* Simple, one-off job */ doc [13:57:49] Change on 12mediawiki a page Wikimedia Labs/Tool Labs/Help was modified, changed by MPelletier (WMF) link https://www.mediawiki.org/w/index.php?diff=661805 edit summary: [+426] /* Continuous tasks (such as bots) */ documentation is good [14:12:55] Change on 12mediawiki a page Wikimedia Labs/Tool Labs/Help was modified, changed by MPelletier (WMF) link https://www.mediawiki.org/w/index.php?diff=661808 edit summary: [+886] /* Web services */ mo' doc [14:14:37] @notify Ryan_Lane [14:14:37] This user is now online in #wikimedia-dev so I will let you know when they show some activity (talk etc) [14:24:57] Coren: Does https://www.mediawiki.org/w/index.php?diff=661804 mean, "jsub -once" will croak when the job already exists? qcronsub is completely silent if no errors occur, and I find that much saner :-). [14:25:39] if you are going to make it silent, then implement a -v option that is not silent [14:25:44] because I hate silent commands [14:26:08] btw I tried your jsub - it was silent and nothing happened [14:26:22] so I fell back to the working qsub :P [14:27:27] scfc_de: It will croak, with a message in your .err file by default so you have a trace of "oops, the previous job lasted too long" [14:29:30] Coren: Okay, if continuous jobs aren't started by cron, that may be feasible. [14:30:02] You normally don't want to start a continuous job with cron. eeew. [14:30:10] Coren: Tradition :-). [14:30:11] Use jstart [14:30:12] :-) [14:30:32] jstart foobot [14:30:36] jstop foobot [14:30:39] Yeay simple! [14:31:22] Coren: Will jsub have no option to have different files for stdout/stderr per invocation à la qsub's $PREFIX$JOBID.out (or something like that)? [14:32:28] scfc_de: I'd rather not, that's actually a major point of user confusion. If the user is "advanced" enough that they want jobids in the output file, I'm sure they can work with qsub. 
:_) [14:33:07] Personally, if you care about output files, you probably want to use tempfiles you name yourself (foo.err.$$ or whatever) [14:33:28] That's what I'd do if I wanted to use the output immediately after a -sync y for instance. [14:33:43] Coren: Yep, but otherwise you have race conditions: The user truncates/deletes the file, and a parallel invocation's output goes to /dev/null. [14:34:01] I'm not so much concerned that *I* may not be able to cope with that :-). [14:34:20] I would expect that this is a much /less/ frequent scenario from newbie users. [14:41:10] Coren: Well, I associate newbies with "being very fast at deleting files and clicking on buttons they don't understand" :-). Bot operators must then just note that they need to stop and start their bot if the log gets too large. Otherwise, your setup looks really nice. [14:42:45] scfc_de: Truncating should be fine, actually, given that sge opens the output files with O_APPEND [14:44:57] I've never tested what happens when process A is appending at position X, and process B truncates the file. Does A continue at 0 or at X + 1 with probably sparse in front? [14:45:55] Unix filesystem semantics say that if you have O_APPEND then any write() will write at the current file length (atomically, natch) [14:46:58] What gluster might do.... :-) [14:47:04] Gluster is going away anyways. [14:47:42] Ah: http://pubs.opengroup.org/onlinepubs/7908799/xsh/open.html. Okay, that solves that :-). [14:50:52] petan: thanks for the help, finally I've managed to configure and schedule my bot [14:51:00] DixonD1 yay [14:51:06] did you use qsub? [14:51:12] yep [14:51:22] okay cool, don't forget to cron it :P [14:51:30] is your bot using sql? [14:53:13] this one isn't [14:53:36] ok [14:54:02] is there any hope to have database replication set up anytime soon? 
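The two mechanics debated above (per-invocation log names with $$, and what O_APPEND does when someone truncates your log) can both be checked in a few lines of shell. File names here are hypothetical; '>>' opens with the same O_APPEND flag the POSIX open() page settles the question with:

```shell
# 1) Per-invocation output files: each process has its own $$ (PID),
#    so two parallel invocations never write the same foo.err.$$ file.
a=$(sh -c 'echo $$'); b=$(sh -c 'echo $$')
echo "two invocations, two suffixes: foo.err.$a vs foo.err.$b"

# 2) O_APPEND semantics: a writer holding an append-mode fd keeps writing
#    at the *current* end of file, even after another process truncates it.
f=$(mktemp)
exec 3>>"$f"           # fd 3 opened with O_APPEND
printf 'AAAA' >&3      # file is now 4 bytes
: > "$f"               # simulate another process truncating the file
printf 'BB' >&3        # lands at the new EOF (offset 0), no sparse hole
size=$(wc -c < "$f")
echo "size after truncate+append: $size"   # 2 bytes, not 6
exec 3>&-
rm -f "$f"
```

So the user truncating a bot's log while SGE is appending to it is harmless, exactly as Coren concludes.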
[14:54:18] that's a question for Ryan_Lane [14:54:30] ok [14:55:35] of course personal sql databases are available [15:04:16] Change on 12mediawiki a page Wikimedia Labs/Tool Labs was modified, changed by MPelletier (WMF) link https://www.mediawiki.org/w/index.php?diff=661814 edit summary: [-450] /* Getting started */ point to new doc instead [15:21:49] DixonD1: Being worked on I believe [15:45:12] Coren, we still don't have an actual schema for service groups, do we? [15:51:16] andrewbogott: Not that I know of. We'll just leap on Ryan the moment he comes on. :-) [15:51:37] * Coren ponders. [15:52:02] andrewbogott: Do we actually need schema changes? [15:52:47] i don't know. I thought that 'We'll be adding two OUs' implied schema changes, but maybe not. [16:01:42] I don't think it does; AFAIK, adding an OU just requires normal write permission to LDAP to, well, just add the OUs. :-) [16:02:36] Hm. Though you'd want /OpenStack/ to do that automagically when you create the project; unless you do so when trying to add the group (OU missing? Add OU) [16:20:36] paravoid: Can I ask you a question about ceph/rados? [16:20:42] oh hi [16:21:14] hi :-) [16:21:27] of course, shoot :) [16:21:40] currently mod_tile does three round trips through rados to serve a single tile [16:22:06] one for stating the tile, one to retrieve the header of the meta tile, and one for the actual tile data [16:22:15] ok [16:22:45] given the latency, I presume it is probably best to try and do some local caching [16:22:59] the network latency? [16:23:07] it's all going to be on the local DC [16:23:17] we're not going to do ceph over wan [16:23:38] but I guess my actual question is how best to store some of the meta data [16:23:55] like what? [16:24:05] currently I store file access time and extra meta data as a header in the actual data blob, rather than using rados attributes [16:24:29] Is there a performance difference to doing it either way? 
[16:24:44] or other reasons why one would be preferable to another? [16:25:40] the rados attributes should be faster to look up [16:26:02] and it looks cleaner to me [16:26:08] but it doesn't actually seem that important [16:26:20] from a quick glance, I'm no ceph performance expert :) [16:27:09] OK, I'll need to look into it some more to see why I am only getting about 1000 tiles/s out of the rados backend [16:27:37] I saw your mail [16:27:43] we use bobtail, not argonaut [16:27:47] (0.56.x, not 0.48.x) [16:28:08] it shouldn't be a huge difference [16:28:42] installing 0.48.x was as simple as aptitude install ceph.... [16:28:58] same with 0.56 :) [16:29:04] the ceph people provide an apt repo [16:29:05] but if it makes a big difference, I'll need to try and install 0.56 locally [16:29:10] for all debian + ubuntu LTS [16:29:35] 0.56 has cephx (authentication) which makes it a bit more complicated, but you can easily disable that [16:29:39] and basically be exactly like 0.48 [16:30:26] also, 1k vs. 15k is a problem, but if the gap is smaller and ceph is still slower, it's still going to be a benefit [16:30:38] due to being able to read from hundreds of spindles with ceph [16:30:42] Oh, I just noticed after updating to ubuntu raring a few days ago, I actually now have 0.56 [16:30:48] haha [16:30:49] so I need to redo the performance tests [16:31:45] Yeah, that was all from ram, so disk performance didn't matter. [16:41:47] A wild Ryan_Lane appears! [16:42:07] no schema changes needed [16:42:19] posixuser for users [16:42:27] posixgroup + groupofnames for groups [16:42:46] I think inetorgperson for users is unnecessary in this case [16:43:55] Ryan_Lane: Do you want to create the OUs themselves upon project creation or check-and-create as the first service group is added? [16:44:11] with ceph 0.56 I get about 1600 req/s. 
So definitely better, but still not great [16:44:49] It looks like it spends 200% CPU time in ceph-osd and about 150% CPU in apache plus some in ab [16:45:04] Coren: nah, we should create the OUs on project creation [16:45:11] and should make an ldif for the others [16:45:23] So I guess the thing is CPU limited in ceph-osd. Which if you split it across multiple servers might not matter as much [16:47:03] Change on 12mediawiki a page Wikimedia Labs/Tool Labs/Help was modified, changed by MPelletier (WMF) link https://www.mediawiki.org/w/index.php?diff=661840 edit summary: [+750] db [17:00:12] how are SSH keys handled within Labs? are they in LDAP? [17:01:02] stwalkerster: They live in ldap, but instances get the keys via a read-only shared volume which is periodically updated from ldap [17:01:31] ah, cool. thanks :) [17:18:44] Coren: Where is the config of SGE saved? I'm interested in looking into it [17:19:53] Jan_Luca: It's all reachable from qconf, usually through the options starting '-s*'. There's a lot of them, and they are a little baroque, but 'man qconf' gives a complete (if opaque) overview. [17:20:49] Coren: Oh, I can use qconf as a normal user, I didn't know that :-) [17:21:08] Jan_Luca: You just normally cannot change most things. [17:21:18] OK [17:22:51] iirc you can even use qmon [17:31:06] Coren: Yes, qmon works, but X via two SSH tunnels and half way around the world isn't necessarily a pleasure :-). The splash screen stayed on for several minutes :-). [17:31:38] scfc_de: Splash screens are teh evulz. Unnecessary pixmaps. [17:32:52] What are "tickets", BTW? [17:34:44] scfc_de: They're another way to allocate resources, when you want to distribute them unevenly. (Say, dept X has the right to use 60% of the cluster, and dept Z only 40%) [17:35:08] scfc_de: And you can give ticket overrides for important jobs that need more resources, etc. [17:36:35] Coren: Okay. 
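The read-only qconf inspection Coren points Jan_Luca at looks like this on a grid node; these are standard Grid Engine '-s*' (show) options, all safe for a normal user since, as noted above, only managers can change things:

```
qconf -sql       # show the list of cluster queues
qconf -sconf     # show the global cluster configuration (mailer, etc.)
qconf -shgrpl    # show the list of host groups
man qconf        # the complete (if opaque) overview
```

Modifying anything (e.g. `qconf -mconf`) fails for non-managers with the same 'must be manager for this operation' error scfc_de runs into later via qmon.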
[17:38:56] Coren: The leftmost tab of the main qmon window is obscured by the "GRID ENGINE" icon for me, but I apparently have the possibility to "Add" there (whereas this button is grayed on "Queue Instances" and "Hosts"). Should that be this way? [17:39:54] Leftmost tab is 'pending jobs'. Yes, you can add jobs. :-) [17:41:28] Sure? It looks like http://gridengine.info/files/qmon-hosts_default.png which would suggest "Cluster Queues" :-). [17:45:02] Coren: http://toolserver.org/~timl/qmon-1.png [17:45:30] Huh. It'd let you add a queue? How.... pointless. [17:46:11] I don't know. Could it be that the real magic is in "Queue Instances", and the "Queues" are just blueprints? [17:46:34] * Coren will check once the meeting is over. [17:47:02] Oh, wait, that might just be "add a job on the selected queue" [17:48:35] I'm tempted to try "Modify" :-), but I'll wait till your meeting's over. [17:59:47] Wikinaut: please don't send us emails about random things [17:59:51] Wikinaut: use the wikitech-l list [18:00:25] Wikinaut: we are likely not the right people to answer your question [18:03:13] od -x -N 16 /dev/random | mailx -s "Random thing" labs-l@lists.wikimedia.org [18:05:29] Ryan_Lane: OpenID question: can we map the UID back to a user? [18:06:30] uid? [18:06:49] not sure what you mean [18:07:06] you mean the id we're using for the delegation? [18:07:08] Right, the current scheme uses the user id of the user as part of the identity (and not the username). Can we map that back to a username at need? [18:07:15] yeah [18:07:21] I believe so [18:07:29] for sure inside of mediawiki [18:07:44] Because I'm thinking there are some tools that will want to be able to query 'Is that identified user a checkuser on enwp' for instance. 
[18:07:47] it's the user's id in the dayabase [18:07:56] oh [18:08:13] my typing is really bad lately :( [18:08:28] well, it would really be nice to give back a list of groups [18:08:35] but that's not going to work so well [18:08:54] since we can really only give back global groups [18:10:20] I'd have to look through special pages and the api to know for sure about your question, though [18:10:49] I'm looking through the API now. [18:11:11] list=users could work, if we could give a global userid rather than just ususers [18:11:27] https://www.mediawiki.org/wiki/API:Users [18:11:38] ususers wants usernames [18:12:19] yeah [18:12:34] most things want a username [18:12:57] Shouldn't be too hard to allow usgid also [18:13:16] And that'd give the local username too which would be perfect for use with everything else. [18:13:19] well, openid comes back with info, including the username [18:13:53] Ryan_Lane: Hm. Does it? [18:13:53] the user is asked if they are willing to share that info when they log in [18:14:03] * Coren needs to test that. [18:29:43] scfc_de: Let's see about qmon [18:38:59] Coren: Okay. Shall I start qmon again? [18:47:25] scfc_de: Yeah, tell me what it looks like it's letting you change. [18:47:53] scfc_de: I'm betting that it doesn't actually check preemptively whether you're allowed to change something and will just fail, but it's worth checking. :-) [18:48:58] Coren: Okay, starting now, will take a few minutes :-). [18:53:54] Coren: qmon is up, now looking at "Cluster Configuration". Should I try setting, let's say, "mailer" from "/usr/bin/mail" to "/bin/false"? (I think mail isn't working anyhow ATM :-).) [18:56:37] scfc_de: That seems safe enough [18:57:19] Coren: And you're right: 'denied: "scfc" must be manager for this operation' :-) So everything's fine. [18:57:32] It's effing crappy UI design, though. [18:59:35] Yep. [19:00:10] (Especially, since on "Queue Instances" some buttons are grayed out, so it's the inconsistency that sucks.) 
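The by-name lookup that already works in the list=users API discussed above would cover the 'is this user a checkuser on enwp' case, once the id-to-name mapping exists; a hypothetical query (username 'Example' is a placeholder):

```
https://en.wikipedia.org/w/api.php?action=query&list=users&ususers=Example&usprop=groups&format=json
```

The missing piece in the conversation is only the input side: ususers takes names, not the numeric ids used in the OpenID identity.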
[19:04:16] Ryan_Lane, for entries like ou=groups,,ou=projects,dc=wikimedia,dc=org, is the cn "groups" or "groups,"? [19:14:20] andrewbogott: it needs to be a valid dn ;) [19:14:40] it's cn=groups,cn=testlabs,ou=projects,dc=wikimedia,dc=org [19:14:44] for instance [19:17:52] Do you mean ou=groups,cn=testlabs,ou=projects,dc=wikimedia,dc=org? [19:18:03] * andrewbogott is ever more confused [19:20:13] Oh, wait... [19:22:18] Well, ok, my actual question is: what do I use as my pattern to match the groups entry for a given project? '(&(&(cn=groups)(cn=))(objectclass=groupofnames))' ? [19:24:34] andrewbogott: ooohhh [19:25:08] better would be: objectalsss=groupofnames [19:25:21] with a base of "cn=groups,cn=,ou=projects,dc=wikimedia,dc=org" [19:25:29] *objectclass [19:26:07] Oh, ok! That's like what I tried originally but it wasn't working… maybe just a typo or something. [19:26:24] Well, not exactly what I tried. Hm. [19:30:10] Ah, much better [19:31:26] (setf (get 'actual-usefulness 'lisp) nil) [19:31:43] Erm. [19:32:24] * Coren will never get used to ^W closing windows instead of WERASE [19:32:52] I mean, of course (setf (get 'lisp 'actual-usefulness) nil) [19:38:37] andrewbogott: I probably just broke nova-precise2 [19:38:42] it'll be momentary [19:40:23] Ryan_Lane are you planning on ability to hot plug more storage to an instance? [19:40:34] like /dev/vdc [19:40:44] etc [19:40:44] not currently, why? [19:40:51] it would be cool for some things :> [19:41:04] like if we ran out of space on sql we could online extend it etc [19:41:41] it would likely come from slower storage, if we did [19:41:53] ah [19:42:11] you'd really want to put it fully on that storage, rather than using local storage, if we did that [19:42:38] but that wouldn't give me ability to create own fs [19:42:54] mhm [19:43:01] own fs? [19:43:06] why wouldn't it? [19:43:09] it would be block storage [19:43:13] if I used shared storage? 
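Ryan's answer above (a base of the project's groups container plus an objectclass filter, rather than a compound cn filter) translates to an ldapsearch along these lines; the server URI and bind options are hypothetical, the base DN is his 'testlabs' example:

```
ldapsearch -x -H ldap://ldap.example.org \
    -b 'cn=groups,cn=testlabs,ou=projects,dc=wikimedia,dc=org' \
    '(objectclass=groupofnames)'
```

Scoping with -b does the work the '(&(cn=groups)...)' filter was trying to do, which is why the filter alone wasn't matching.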
[19:43:16] like gluster [19:43:27] that wasn't what you were asking about ;) [19:43:33] ah [19:43:37] then I didn't understand you [19:43:39] and that wasn't the answer I gave [19:43:50] Ryan_Lane: ATM, documentation I'm writing for the tool labs are on mw.org. Better on wikitech, you think? [19:44:07] Coren: until all project documentation is moved, let's keep it on mediawiki.org [19:44:15] Coren: guillom wants to move the project docs [19:44:15] Ryan_Lane: kk. [19:44:58] Ryan_Lane is there any progress on setting up that dedicated sql server? do you think it could be possible for me to get access there in future given that I would sign these papers you talked about? I am really interested in doing dba stuff :P [19:45:36] hm. I'm not sure how we're segregating that from production [19:45:37] petan: I think that was me telling you about said papers. :-) [19:45:50] Coren ryan actually did some time ago as well :P [19:46:16] and for dba related work, you need to talk to notpeter and binasher [19:46:23] well I meant dba work on that labs dedicated sql [19:46:38] I thought these 2 are working only on production things? [19:46:48] they are our dbas [19:47:18] ah, so it's not going to be set up by you? [19:49:16] probably not [19:49:23] not the dbs themselves [19:49:27] access to them, yes [20:10:44] Ryan_Lane, so I checked the 'redis' class, but it doesn't work since the default directory is /a/redis. [20:10:58] Is there a way I can override that through the web interface, or do I have go full self-hosted? [20:11:27] I see the variables section at https://wikitech.wikimedia.org/wiki/Special:NovaPuppetGroup, but I don't know if it's relevant. [20:13:33] superm401: you can use /dev/vdb and mount it on /a/ [20:14:13] hashar, I guess... Why is the default that anyway? 
[20:14:24] superm401: I did that in manifests/role/lucene.pp [20:14:31] look for 'vdb' [20:14:54] the relevant snippet http://dpaste.com/1029329/ [20:15:40] superm401: the puppet class are usually written for production [20:15:49] it is up to us to adapt them for other uses :-] [20:16:24] ahh redis has a $dir = '/a/redis' [20:17:03] superm401: so you could add a new role class for labs in manifests/role/redisdb.pp [20:17:34] that would set the $dir to something else ( like /mnt/redis ) or simply mount vdb on /a and leave $dir [20:18:24] labs does not let you pass parameter to a parameterized class, so you need a role class [20:19:18] hashar, what if I add a labsredis that subclasses the main one with sane defaults? [20:19:40] There is already labsmediawiki, etc. [20:20:02] ah or that yeah [20:20:05] maybe :-] [20:21:02] ah I see manifests/role/labsmediawiki.pp [20:21:22] sounds like the same need to be done for manifests/role/redisdb.pp [20:21:36] Yep, I'm gonna go for it. [20:21:41] something like role::db::redis::labs [21:17:31] <^demon> Ryan_Lane: Got a couple easy puppet changes for gerrit: https://gerrit.wikimedia.org/r/#/c/54678/, https://gerrit.wikimedia.org/r/#/c/54677/, https://gerrit.wikimedia.org/r/#/c/54514/ [21:18:10] <^demon> Oh, and https://gerrit.wikimedia.org/r/#/c/53173/ [21:26:16] ^demon: that's way more than a couple ;) [21:26:24] <^demon> Well, < 5 ;-) [21:27:33] Change on 12mediawiki a page Wikimedia Labs/Account creation improvement project was modified, changed by Ryan lane link https://www.mediawiki.org/w/index.php?diff=661913 edit summary: [+39] /* Current account creation process */ [21:38:37] ^demon: you're asking me to merge something you say needs an upstream change [21:38:44] <^demon> That's been merged. [21:38:47] ok [21:51:13] ^demon: what happened to gitblit? [21:51:33] <^demon> It's slow. [21:51:33] <^demon> On big repos. [21:51:46] slower than gitweb? [21:51:55] <^demon> On mediawiki/core, yes. Most other repos, no. 
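The role-class pattern hashar describes for the redis case above might look like the following sketch; the class name and mount point are hypothetical, and 'dir' must actually be a declared parameter of the redis class on your puppetmaster (the 'Invalid parameter dir' error superm401 hits usually means it is not):

```puppet
# manifests/role/labsredis.pp (hypothetical)
class role::db::redis::labs {
    # Override the production default of /a/redis with a path that
    # exists on a labs instance (or mount /dev/vdb on /a instead).
    class { '::redis':
        dir => '/mnt/redis',
    }
}
```

Since labs cannot pass parameters to a parameterized class directly, the role class is the indirection layer that supplies them.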
[21:52:06] <^demon> Plus it doesn't like uppercase letters in repo names. Which is stupid. [21:52:11] :( [21:52:34] I don't think anything will like mediawiki/core it's huge [21:52:40] I hate gitweb [21:52:53] <^demon> I need to repack mediawiki/core. [21:52:58] <^demon> And probably operations/puppet. [21:53:03] <^demon> Most other repos are fine. [22:04:17] hi ceradon I wanted to ask you something but I forgot what it was :P [22:04:26] always this bot remind me :D [22:05:31] petan, Well I asked you if you could give me access to the db a couple days ago but you were idle. [22:05:43] I can't create databases. [22:05:52] ah that [22:05:56] you should be able to do that [22:06:04] what does it say when you try to make one? [22:06:14] did you use that system.create_db function [22:06:28] call system.create_db("ceradon_data"); [22:06:29] etc [22:06:41] everyone can use this to create new db's [22:07:05] i could of course give you all grant to create new db's but that wouldn't sort out grants [22:07:07] this function does [22:07:27] uh. I killed gerrit ! https://gerrit.wikimedia.org/r/#/c/54986/ [22:07:45] ceradon does it work now? [22:07:55] Service Temporarily Unavailable The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later. [22:08:01] ah. Back again [22:08:24] Wikinaut how many times we need to tell you don't store ur porn in gerrit :P [22:08:26] it's too small [22:08:49] you can't commit 2gb files :P [22:08:49] Anyone have time to look at a 5-line puppet manifest that almost works? https://gerrit.wikimedia.org/r/#/c/54970/ [22:09:06] superm401 I like "almost" part [22:09:07] :P [22:09:21] I get: "Invalid parameter dir at /etc/puppet/manifests/role/labsredis.pp:4 on node puppet1.pmtpa.wmflabs" [22:09:38] Of course, puppet and I have a difference of opinion. [22:09:50] There is a dir parameter at modules/redis/manifests/init.pp in the operations/puppet repo so I don't know why it fails. 
[22:10:00] petan: http://git-annex.branchable.com/ ?:) [22:10:20] Wikinaut: the server restarted because we pushed a config change to it [22:12:32] superm401: hm. that should work [22:12:56] I'm running it self-hosted on puppet1 on labs. [22:13:01] And I get the above error. [23:16:06] * Coren could use more ce and clarity critique of https://www.mediawiki.org/wiki/Wikimedia_Labs/Tool_Labs/Help [23:24:17] Coren, what would happen if you wanted to throttle users (as you can't get the ip)? [23:25:02] Not sure what you mean by "throttle users"; you mean users that are not otherwise logged into some application that are misbehaving? [23:25:31] (Because if they are otherwise identified to the application, then the application can obviously use that identity) [23:25:33] usually tools don't log in users [23:25:50] Right. [23:26:08] * Coren ponders. [23:26:15] or you may want to be able to checkuser [23:26:27] I stored ips for the answers of a survey [23:26:39] just in case someone sent many answers [23:27:05] (I didn't used them in the end, although we detected people clearly lying in their answers...) [23:27:50] did you write that jsub script? [23:28:02] instead of -sync, I would have called that -wait [23:28:05] Platonides: That's the very thing this is meant to prevent without a clear Ok from legal, actually. They really want to make certain that the privacy policy can't be circumvented by tool maintainers since we want to keep the entry requirements near nothing. [23:28:39] "-sync y" is a qsub option simply passed transparently by the script. I could have a "-wait" alias. [23:28:46] s/have/add/ [23:29:07] Yes, I wrote the j* scripts. [23:29:45] seems very well organised [23:30:04] About IPs, I'd need to consult with legal, and to figure out some method. I'm pretty sure we can never give geolocatable information, but I might be able to arrange some form of hash to stand in for addresses instead. 
[23:30:38] I would add a note about using something like #!/usr/bin/env python, instead of making them do a which [23:30:57] I don't think so, IPv4 space is very small [23:31:27] Oh, no, using env is /way/ too dangerous. Never ever rely on paths for user-invokable scripts. :-) [23:32:10] dangerous? [23:32:22] well, I guess they could have modified their own PATH [23:32:47] The danger here is peeps who put . in their paths. [23:33:39] It's always better to shebang with an absolute path for something which can be invoked externally. [23:33:47] but I don't think the program spawned from the web server would contain it [23:33:55] I think it would inherit the PATH from the webserver [23:34:00] If someone knows enough about things to use env, they will know the risks. :_) [23:34:32] well, putting . in PATH is dumb [23:34:33] Coren: quick read as a total noob, I would like to see a few more concrete examples more complex than the simple case you have for "$ jsub program-or-script". Correct output from qstat, qdel or jstop might be nice to have for example. [23:35:04] night [23:35:10] chrismcmahon: I was worried that this would make it too much of an infodump... perhaps collapsed boxes might be nice. [23:35:44] * Coren takes notes. :-) [23:35:49] Coren: could be. I always like "you should see this" sorts of things in these sorts of pages. [23:36:29] That's sensible. [23:36:40] Lemme try something, see if you like it. [23:45:43] Change on 12mediawiki a page Wikimedia Labs/Tool Labs/Help was modified, changed by MPelletier (WMF) link https://www.mediawiki.org/w/index.php?diff=661939 edit summary: [+801] /* Continuous tasks (such as bots) */ [23:45:53] chrismcmahon: Like that? [23:46:25] Coren: looking... [23:46:40] Change on 12mediawiki a page Wikimedia Labs/Tool Labs/Help was modified, changed by MPelletier (WMF) link https://www.mediawiki.org/w/index.php?diff=661940 edit summary: [-9] /* Continuous tasks (such as bots) */ ce [23:47:29] Coren: nice! 
elegant, too, I like the expanded example, and it has all the good feedback information [23:50:51] I'll try to provide something along those lines for all the important points, then. [23:51:40] Change on 12mediawiki a page Wikimedia Labs/Tool Labs/Help was modified, changed by MPelletier (WMF) link https://www.mediawiki.org/w/index.php?diff=661941 edit summary: [+12] /* Continuous tasks (such as bots) */ more ce [23:55:30] Change on 12mediawiki a page Wikimedia Labs/Tool Labs/Help was modified, changed by MPelletier (WMF) link https://www.mediawiki.org/w/index.php?diff=661943 edit summary: [-22] /* Continuous tasks (such as bots) */ rm unnecessary detail
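As a footnote to the earlier shebang debate ([23:31:27] onward): Coren's PATH risk with #!/usr/bin/env can be shown in miniature. Everything here is a throwaway fixture in a temp directory; env resolves 'python' from PATH, so whoever controls an earlier PATH entry controls what runs:

```shell
# Plant a fake 'python' early in PATH, then resolve via env - the fake wins.
d=$(mktemp -d)
printf '#!/bin/sh\necho INTERCEPTED\n' > "$d/python"
chmod +x "$d/python"
got=$(PATH="$d:$PATH" /usr/bin/env python -c 'print("hello")')
echo "$got"    # INTERCEPTED - the planted interpreter ran, not the real one
rm -rf "$d"
```

An absolute shebang path (e.g. #!/usr/bin/python) sidesteps this entirely, which is why it is the safer default for externally invokable scripts.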