[00:00:20] [bz] (ASSIGNED - created by: Maarten Dammers, priority: Normal - normal) [Bug 48851] Database inserts are slow - https://bugzilla.wikimedia.org/show_bug.cgi?id=48851 [00:05:26] hrmmm [00:48:49] anyone seen cobi recently there are reports of Cluebot not archiving [00:59:27] Apparently the continuous job on tools wasn't so continuous [00:59:39] * Damianz thinks it should be editing again now [01:00:14] Betacommand: ^ (Since I'm like 10min late) [01:00:32] Damianz: are you making sure that it always exits with an error? :) [01:01:20] Nope, it's a bit clungy so could exit cleanly in theory... probably need to tweak it, but not right now got build documentation to look at [01:02:55] right [01:51:03] Damianz: looks like its still not working right [01:52:44] What's up? It seems to be doing stuff in the logs, even if it is a little slow [01:55:17] Damianz: 1 edit in the last 30 minutes...... [01:55:46] Let me try something [01:56:54] thats not just being kinda slow, that's glacier slow [02:00:43] Hmm tools-login seems to have broken, wonderful [02:02:16] Coren: *Jab* [02:02:22] tools-login has a load of like 70 [02:02:32] The NFS server is currently overloaded. [02:02:36] :( [02:02:54] From what I can tell, it's going back down now; the instances should catch up shortly. [02:03:11] Vim just gave me my terminal back after quiting 2min ago heh [02:03:35] (Too many damn cron jobs on -login makes it pile up horribly at the least provocation) [02:03:40] Not breaking my bot though - it's doing the same on the other instance... think it's just behind hmm [02:04:08] Coren: Make a random cron job to sed crontabs for on the hour to a random number past the hour... bet less than 10 people complain [02:04:30] It shouldn't break anything, just slow things down. Filesystem was molassey for a while, but other than that nothing gets impacted.
[02:04:59] Makes me sad I can't screen from a service account [02:05:08] The problem isn't a bunch of crontabs at the same time every hour, it's too many people running things every minute or every other minute, etc. [02:05:25] Damianz: Just screen from your own account before you become the service account. [02:05:33] I had a rule at my last job - if you made a min crontab, it got deleted lol [02:05:35] Same end result. [02:05:52] Coren: Yeah - but I was already in the directory I needed to be in... so I had to type an extra line [02:05:58] Yeah, I'm going to start cracking down on people mistreating cron. [02:06:02] * Damianz adds 'RSI inducing' to tools features [02:08:43] Damianz: Please to not run cluebot on -login! [02:08:59] Coren: Testing something, unless you have a better host for that [02:09:05] -dev! :-) [02:09:12] there is dev? [02:09:14] * Damianz goes there [02:09:19] Danke. :-) [02:09:50] * Damianz stabs screen for dropping his forwarded key [02:11:16] screen doesn't hold keys! That'd be horribly insecure. [02:11:30] It would be nice for the lazy though [02:11:37] legoktm: Why are you running stuff on -login? [02:11:51] legoktm: /data/project/legobot//dispatchstats.py [02:12:34] Oh, ffs. I see at least three heavy php and python jobs in cron. I need to start editing crontabs. [02:13:03] Coren: Oh. [02:13:06] Sorry [02:13:06] echo "no"; exit 1; wq! chmod 00? [02:13:25] I totally forgot about that [02:15:00] Coren: commented out [02:15:15] tsk, tsk. :-) Send it off to the grid. :-) [02:15:46] Well I should figure out what those scripts actually do.... [02:16:01] Coren: Could you start calling it the cloud, so I can make fun of you? [02:16:02] :P [02:16:20] Damianz: Nope. [02:17:21] * Coren kills the worst offenders. 
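Damianz's crontab-staggering suggestion above can be sketched in shell: pick a random minute once, and put the hourly job there instead of on the hour, so tools stop hitting NFS at the same instant. The job command (`php $HOME/mybot/run.php`) is a made-up placeholder, not a real tool.

```shell
#!/bin/sh
# Stagger an hourly cron job to a random minute instead of minute 0.
# The command being scheduled here is a hypothetical example.
minute=$(awk 'BEGIN { srand(); print int(rand() * 60) }')
line="$minute * * * * php \$HOME/mybot/run.php"
echo "$line"
# Installing it would look like:  (crontab -l; echo "$line") | crontab -
```

This only spreads the load; it does not help with the other problem Coren names, people running jobs every minute.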
[05:58:48] New patchset: Ori.livneh; "Correct typo in passwords::mysql::eventlogging" [labs/private] (master) - https://gerrit.wikimedia.org/r/74113 [08:22:18] !log deployment-prep beta jenkins jobs statuses are now listed on the CI main page at https://integration.wikimedia.org [08:22:21] Logged the message, Master [08:22:48] hashar: hey, I have jenkins newbie questions [08:23:03] paravoid: I will be more than happy to help :-] [08:23:46] so we're soon going to have a repo for DNS [08:23:52] where people will submit zone changes [08:24:10] so I want jenkins to check those and set V+1/- [08:24:24] I'll prepare the black magic to do so [08:24:25] but [08:24:49] what's the interface going to be? [08:24:53] jenkins spawning a binary? [08:25:12] is it going to clone the gerrit changeset somewhere or is my script going to do that? [08:25:22] iirc named as a way to lint a set of config [08:25:24] oh heh, just realized it's the wrong channel [08:25:39] we can follow up in -ops or -dev if you want [08:25:45] nah, it's fine [08:25:57] so you were saying [08:26:20] so when you send a patchset in Gerrit, it sends a JSON notification over ssh to Zuul [08:26:39] Zuul receive the JSON notification, fetch the patchset from Gerrit and merge it with the tip of the repository branch [08:27:21] Zuul (if properly configured) then trigger a Jenkins job passing the git project name, the merge commit (your patchset merged on tip of branch) and various informations [08:27:36] Jenkins start the jobs with the parameters which are being used by a git Jenkins plugin [08:27:47] the plugin would then clone the repository and checkout your patchset [08:28:12] from there we can add build steps such as "running a shell script" [08:28:38] which might be something like: named --lint $WORKSPACE/files/named/* [08:28:38] where would it clone it? 
[08:28:46] it is cloned on the Jenkins worker [08:29:09] we need to put it in a special directory structure for gdnsd to work [08:29:17] somewhere hidden like /srv/ssd/jenkins/workspaces//workspace , but that is exposed as the env variable WORKSPACE [08:30:02] what kind of tests do you want to achieve ? [08:30:17] I guess a basic step would be to validate whether the zone files are valid (aka linting) [08:30:25] yes [08:30:30] so, we're going to run gdnsd [08:30:45] a more complicated one would be to boot a gdnsd server and run a script that does DNS queries on it and assert results (aka integration test) [08:30:52] gdnsd has a "gdnsd checkconf" argument (which actually checks zones too), that would be the first thing to run [08:31:03] but [08:31:18] to run this, either of these two things need to happen [08:31:32] a) have zones at /etc/gdnsd/zones and config at /etc/gdnsd/config [08:31:56] b) have them under a random dir with a specific structure, e.g. /tmp/tmp.XXXXX/etc/{zones,config} [08:32:08] and then run gdnsd with a -d that /tmp dir, which is the "chroot" mode [08:32:32] so my script can do all that [08:32:56] set up a temp dir, copy from the workspace, prepare the zones, run gdnsd checkconf and then cleanup the dir [08:33:19] you could run gdnsd checkconf directly in the workspace I guess [08:33:29] gdnsd --checkconf $WORKSPACE/path/to/config/files [08:33:53] no [08:33:55] not possible [08:34:05] for multiple reasons :) [08:34:09] the pass are hardcoded ? [08:34:14] grr [08:34:19] the paths are hardcoded ? 
[08:34:22] a) if you give gdnsd a directory it needs to be under 'etc' and our git tree doesn't have that [08:34:41] so if you say gdnsd -d /tmp/foo checkconf, it'll look for the zones under /tmp/foo/etc/zones [08:34:52] and our git tree won't have etc [08:34:55] that's (a) [08:35:32] b) we'll use templates for zones, so the repo we need to run some other stuff to compile zones anyway :) [08:36:06] so you want to figure out a way to expand all the zone files from templates [08:36:07] ok, so, I'll prepare a script that takes a directory with a checked out tree as input and does all things necessary [08:36:14] oh I have that, that's easy [08:36:40] the expanded zone files would be written under $WORKSPACE/etc/zones [08:36:48] and then you can gdnsd -d $WORKSPACE [08:37:01] yeah, I can create an arbitrary structure under $WORKSPACE [08:37:44] and we can ask Jenkins to wipe all the files in the workspace before starting a build [08:37:57] this way you are sure to always run the tests starting with a clean start [08:38:07] oh so workspaces are permanent? [08:38:12] by default? [08:38:17] yup [08:38:30] the git plugin as an option to wipe them before fetching the code [08:38:37] and there is also a plugin that let one wipe it [08:38:57] for most repository, we do wipe the workspace entirely and start everything fresh (including doing the git clone again) [08:39:16] but since the workspace dir and the repositories are on the same SSD device, git creates hardlinks and seems to be very fast at cloning [08:39:36] I had to many issues earlier in keeping the workspace and simply run git clean -xqdf (remove everything un tracked) [08:39:43] that sometime left oddities in the workspace :/ [08:40:01] * hashar reads gdnsd man [08:41:58] don't worry about that :) [08:42:03] paravoid: yeah that should be easy [08:42:05] ok so my script can basically do [08:42:16] mkdir $WORKSPACE/chroot [08:42:43] do you need a chroot to run checkconf ? 
[08:42:44] mkdir $WORKSPACE/chroot/zones; authdns-gen-zones $WORKSPACE/templates $WORKSPACE/chroot/zones [08:43:00] no, but the alternative is using /etc/gdnsd :) [08:43:43] hehe [08:44:04] so yeah in the end it is pretty simple [08:44:13] the longest / hardest part is figuring out what commands to run [08:44:21] and make sure they will run fine under Jenkins constraints [08:44:37] (and getting ops to merge puppet change) [08:45:06] which repo are the zone files in ? is that ops/puppet ? [08:45:18] I noticed a git repo operations/dns/ [08:47:56] the chroot is easier and a bit more safe [08:47:56] if we also use a random chroot dir name we can run multiple tests at the same time too [08:47:56] not that we'll have so many updates [08:49:56] jenkins supports running the same job in parralel [08:50:03] there's an operations/dns? [08:50:06] it simply execute in different workspaces [08:50:07] I haven't pushed the repo yet [08:50:14] put that was my intended repo [08:50:22] something like workspace , workspace@2 , workspace@2 [08:51:34] ah it's empty [08:51:47] but it has two commits that I need to get rid of somehow [08:52:39] create an orphan branch [08:52:50] and cherry pick the changes you need on that orphan branch [08:52:57] or you can do a git rebase --interactive [08:52:59] which let you remove commits [08:53:10] you might want to try on a copy [08:53:38] 11:00 wohho. [08:53:41] oh I have a history already [08:53:57] I just don't think gerrit will allow me to override master completely [08:54:00] drop those two commits [08:54:10] I have a tree that I converted from SVN [08:54:11] :) [08:55:22] you can force push in gerrit if needed [08:55:46] I can? with git push -f ? 
[08:55:57] so operations/dns currently has two commits [08:56:01] one is an empty one [08:56:06] and the other is .gitreview [08:56:09] yes [08:56:12] master is pointing there [08:56:22] you can push your content and override them entirely [08:56:34] with something like: git push -f origin HEAD:master [08:56:47] you will need to grant yourself force push right in Gerrit interface [08:57:05] ah [08:57:15] I knew git push -f but last time I tried it Gerrit rejected me [08:57:22] but it was the right thing probably [08:57:26] and I can do that anytime? [08:57:27] that is disabled on all repos iirc [08:57:56] paravoid: https://gerrit.wikimedia.org/r/#/c/74122/ :-) [08:58:04] that is the configuration to let you force push [08:58:09] it is done on https://gerrit.wikimedia.org/r/#/admin/projects/operations/dns,access [08:59:13] merged [08:59:18] oh hah [08:59:21] you rock hashar :) [08:59:27] my job :] [08:59:36] been playing with Gerrit for the last 18 months or so hehe [09:01:13] pushed [09:01:22] it's not the final one, I'll force push a few more times until we deploy [09:01:25] (svn is still read-write) [09:03:04] full history [09:03:22] all the way back to 2011 [09:05:15] paravoid: you can also push your current work under a temp branch [09:05:26] and we can setup a jenkins job to pointing to it [09:06:10] nah, the CI part can wait [09:06:16] paravoid: I have created a branch named "jenkins" :D [09:06:20] until we deploy [09:06:25] we have no checks now at all [09:06:34] well i am in vacation in 7 business days [09:06:52] so if you want CI before september, we need to have it done soon :] [09:07:10] for how long are you leaving? [09:07:21] all of Aug? 
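The history rewrite discussed above (drop the two bootstrap commits in operations/dns and make master the converted SVN history) can be sketched end-to-end in a throwaway repository. File names and commit messages are invented stand-ins; the final push to Gerrit is shown as a comment because it needs the force-push right hashar granted.

```shell
#!/bin/sh
# Sketch: replace a repo's master (two bootstrap commits) with an
# unrelated imported history, as discussed for operations/dns.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email example@example.org
git config user.name  "Example User"

# Stand-ins for the two commits Gerrit created (empty one + .gitreview).
git commit -q --allow-empty -m "Initial empty repository"
echo "[gerrit]" > .gitreview
git add .gitreview
git commit -qm "Add .gitreview"
git branch -M master

# Put the real (SVN-converted) history on an orphan branch: no ancestry.
git checkout -q --orphan imported
git rm -q -rf .
echo "; example zone data" > example.org.zone
git add example.org.zone
git commit -qm "Import history converted from SVN"

# Point master at the imported history, discarding the bootstrap commits.
git branch -f master imported
git rev-list --count master        # prints 1
# Publishing it then needs force-push rights in Gerrit:
#   git push -f origin HEAD:master
```

The alternative mentioned in the channel, `git rebase --interactive`, gets the same result when the histories are related; the orphan-branch route works even when they share nothing.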
[09:07:40] for 3 weeks [09:07:56] first week i will be barely reachable, second week I can handle the critical issues / catch up [09:08:01] third week out again [09:08:04] that is a french thing :-] [09:08:07] okay [09:08:11] yeah I know :) [09:08:40] anyway other people are able to assist such as Timo and marktraceur [09:08:47] and other random folks too [09:11:11] don't worry, I'll figure it out :) [09:17:18] mkdir -p "$WORKSPACE"/chroot/zones [09:17:19] authdns-gen-zones "$WORKSPACE"/templates "$WORKSPACE"/chroot/zones [09:17:20] gdnsd -d "$WORKSPACE" [09:17:29] that would need the gdnsd package installed on the jenkins slave though [09:17:42] yeah I'll create an authdns::ci class [09:17:48] it also needs authdns::scripts [09:20:27] that will need to be included on all the Jenkins slave [09:20:36] I don't think I have a proper role class to do so yet [09:21:13] so the jenkins template is at https://gerrit.wikimedia.org/r/74124 [09:21:15] that is some yaml [09:21:36] but you can skip that and just edit the job manually in the jenkins interface while logged in with your labs account https://integration.wikimedia.org/ci/job/operations-dns-lint/configure [09:21:52] scrolling at the bottom is a 'Execute shell' which is where you can put your stuff [09:25:49] and zuul triggers https://gerrit.wikimedia.org/r/74125 [10:52:32] paravoid: and I wrote a lame tutorial about what I did https://www.mediawiki.org/wiki/Continuous_integration/Tutorial/Adding_basic_checks :) [10:52:39] \o/ [10:59:36] hashar: how is zuul triggering this job? 
[11:03:02] matanya: it uses Jenkins REST api [11:03:11] ah, thanks [11:03:22] so basically some HTTP query like /job/somejobname/buildNow?param=value&param2=something [11:03:33] then Jenkins notify back Zuul with the result [11:04:01] one day I will have to write a technical documentation explaining all the interactions [11:06:35] paravoid: of course the job fails terribly since operations/dns is empty and gdnsd is not installed :-] https://integration.wikimedia.org/ci/job/operations-dns-lint/2/console [11:06:54] I'll fix the script, no worries [11:07:02] (operations/dns isn't empty anymore though) [11:07:16] https://integration.wikimedia.org/ci/job/operations-dns-lint/ws/ :-) [11:07:19] that show the workspace [11:19:40] paravoid: looking at operations/dns all commits belong to root [11:19:45] was that the case in svn? [11:20:00] yes... [11:20:13] :( [11:20:17] yeah [11:20:36] at one point I commited something with my username [11:20:50] then noone else used --user or whatever the svn option is [11:21:03] so it ended up having a bunch of commits that had my name but weren't mine [11:21:14] so I actually rewrote those to root now [11:21:53] make sense [11:22:16] but, history with root is better than no history [11:22:32] I am really happy to have the DNS zone made public [11:22:41] that is going to let volunteers fix up entries :-] [11:24:22] yep [11:24:25] that's the plan [11:24:56] I remember using some dns checker [11:25:03] it was doing an AXFR and making sure the PTR entries were properly set [11:25:11] or report SOA errors / double CNAME etc [11:25:26] but maybe I wrote it, I can't remember [11:25:44] had it to run as a nagios check and report an issue for the NOC to fix them up :-] [11:28:34] http://support.microsoft.com/kb/321045 :-) [11:28:37] by microsoft [11:29:40] @notify Coren [11:29:40] This user is now online in #wikimedia-labs. I'll let you know when they show some activity (talk, etc.)
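Putting together the pieces of the operations-dns-lint discussion, the Jenkins "Execute shell" step would look roughly like the sketch below. `authdns-gen-zones` and `gdnsd` are taken from the conversation but assumed rather than verified, so the sketch skips each gracefully when not installed; the `etc/` layout matches paravoid's description of what `gdnsd -d <dir>` expects.

```shell
#!/bin/sh
# Sketch of the Jenkins "Execute shell" build step for linting DNS zones.
# WORKSPACE is normally exported by Jenkins; a temp dir stands in here.
set -e
: "${WORKSPACE:=$(mktemp -d)}"

# gdnsd -d <dir> looks for <dir>/etc/{config,zones}, so build that layout.
chroot_dir="$WORKSPACE/chroot"
mkdir -p "$chroot_dir/etc/zones"

# Expand zone templates into place (helper name from the conversation).
if command -v authdns-gen-zones >/dev/null 2>&1; then
    authdns-gen-zones "$WORKSPACE/templates" "$chroot_dir/etc/zones"
fi

# Lint config and zones; per the discussion, checkconf checks zones too.
if command -v gdnsd >/dev/null 2>&1; then
    gdnsd -d "$chroot_dir" checkconf
else
    echo "gdnsd not installed; skipping checkconf" >&2
fi
echo "layout ready under $chroot_dir"
```

With Jenkins configured to wipe the workspace before each build, every run starts clean; using a distinct chroot directory per run is what allows parallel builds, as noted above.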
[11:47:10] !paste [11:47:10] http://pastebin.com [11:47:13] !paste del [11:47:13] Unable to find the specified key in db [11:47:21] @search paste [11:47:21] Results (Found 1): pastebin, [11:47:24] !pastebin del [11:47:25] Successfully removed pastebin [11:47:34] !pastebin is http://tools.wmflabs.org/paste/ [11:47:34] Key was added [11:48:17] http://tools.wmflabs.org/paste/view/ac45194d \o [11:48:23] finally we have own pastebin :D [11:56:51] * Coren pokes Krinkle. [11:57:54] Coren: Hi, I was wondering about what the plans are to get other tables known at toolserver to Tool Labs (or WMF Labs in general). I know the production wiki contents are now replicated, but meta databases such as toolserver.namespaces and toolserver.wikis are a blocker for some of tools I was about to migrate. [11:58:41] https://wiki.toolserver.org/view/Toolserver_database [11:59:13] Krinkle: Well, it's on my list, but there's like 27 different things ahead of that. That said, if someone makes that data available it'd only take me minutes to make it globally available. [11:59:35] Coren: It needs to be generated periodically based on wmf-config [11:59:52] There's even a bz tracking that, iirc. [11:59:54] e.g. namespaces changes, wikis get added [12:00:03] Got a link for me? [12:01:42] https://bugzilla.wikimedia.org/show_bug.cgi?id=48626 [12:02:05] Part of my concern is that this needs to be maintainable. [12:03:23] It is manually maintained at toolserver, however I think it can be done automatically nowadays. All the data is "somewhere", just not in a database. [12:04:54] Right. "Manualy maintained" == "Not maintainable" :-) [12:04:58] However this is case where some data is better than no data. For maintainability of tools, fortunately many toolservers found out the hardway that this way you'll never need to update things like adding support a particular wiki, namespace or language. You can just use those tables are the source for the selectors in the UI. 
[12:05:16] At which point maintaining it in one place is still more maintainable than requiring people to do it manually [12:05:24] because at this point in time it will mean they won't do it [12:05:52] I mean, I'm not going to do all this for each tool separately. [12:06:14] Oh, I agree. [12:06:45] Hence, if the data exists, I will make it available. [12:07:07] Coren: Anyway, just saying, I've been asked to migrate tools before November (all wmf staff has, to give a good example). But I can't do that without this information. Even a potentially outdated copy of that toolserver has now would be good enough. Even if the data is missing 2 or 3 recently added wikis and some name spaces chnages, the majority is there. [12:07:14] I can do a dump for you right now. [12:07:19] it is public on the toolserver [12:07:46] Probably easier for you to do it, I don't now the mysqldump commands very well and I'd have to send it around etc. [12:07:52] *sigh* I'm not a fan of temporary hacks; they have the bad tendency of becoming permanent hacks. [12:08:17] But yeah, if it's a blocker I'll see what I can do when I find a bit today. [12:08:22] Yes indeed. But unless you can prioritise writing an automated script for generating this data into MySQL within a month or two.. [12:08:37] Let me know if you need anything. [12:09:46] did Toolserver ever have memcached? [12:10:07] Krinkle: ^ do you know? [12:12:01] YuviPanda: just checked, they don't have php5-memcached installed. They might have memcached itself running, but I doubt it. Too much potential for key conflicts. You'd have to run it yourself, which might be possible if you can run it from your /home/*/bin [12:12:11] Krinkle: ah, okay. Just checking [12:12:25] we've Redis, and memcached that we want to kill, so 'hard to port toolserver tools' is not an argument at all, then [12:12:38] I think I did that once actually. They do allow listening on ports (e.g. 
to run a ZNC irc bouncer) [12:12:59] Yeah, that's one less problem ) [12:13:45] indeed [12:24:32] @infobot-share-trust+ #wikimedia-labs-offtopic [12:24:32] You inserted channel #wikimedia-labs-offtopic to shared db [13:36:05] Cyberpower678: Oh, wow. https://meta.wikimedia.org/w/index.php?diff=5656451&oldid=5656420 shows a complete lack of knowledge about how things ever worked. [13:43:03] @jb bla [13:43:07] @unjb bla [14:07:31] that entire RFC just seems to be an invitation for someone to setup the tool in another instance and make it all public [14:07:48] meh [14:14:28] Coren: Re "most scripting language require around 300-350M of virtual memory to load completely" on the help page, I decided to do some experimenting. I submitted jobs that just did the equivalent of "php -r 'sleep(60);'" for various languages: the Perl version reported a peak VMem of 17.1M (and ran fine with h_vmem=20M), Python reported 28.7M (and ran fine with h_vmem=30M), PHP reported 257.1M (but failed with h_vmem=300M, it needed h_vmem=350M) [14:14:28] , and nodejs reported 640.0M (but failed with h_vmem=700M, it needed h_vmem=750M). [14:15:06] And for fun, I uploaded the (dynamically linked) lua binary from my system. It reported 13.7M, and ran fine with h_vmem=14M. [14:15:17] anomie: Yeay data! [14:16:13] All those numbers are with 1M = 1024*1024 bytes, BTW. [14:16:23] yes, M vs m [14:16:48] I keep forgetting that... idiosyncracy of sge. [14:17:21] Coren: did we ever push default from 256 to 512? [14:17:22] I don't think M vs m is any sort of real standard, is it? But gridengine uses that distinction. [14:17:47] BTW, reported vs needed is expected; sge only records usage at interval, whereas the startup code from linux-ld.so does release some stuff after initial setup so it peaks a bit higher. [14:17:48] Oh, I misread "sge" as "age". Nevermind [14:18:44] Nobody sane ever uses powers of ten to count memory. Nobody honest uses them to count disk space. 
:-) [14:19:49] And only someone insane would ever arrive at 1.44M when measuring 3.5" hd floppies (since it requires defining a meg as 1000*1024 bytes) :-) [14:20:39] I was *just about* to mention 1.44M floppies, after I verified that I was remembering correctly. [14:20:50] oh wow, I never knew that. [14:22:37] YuviPanda: Are you young enough that you never really had to deal with floppies? [14:22:54] anomie: I had to deal with them for 2-3 years, from about 11-14, I think [14:22:59] but didn't know of the 1024*1000 thing [14:23:31] Bought a rather expensive 256MB flash drive at 14 or 15, I think :) [14:23:49] I still have boxes at home [14:23:56] well, a box. it probably has 4-5. [14:24:09] anomie: waay too young for non 3.5" though :) [14:25:48] * anomie missed 8", but thanks to having a 10-year-old computer at age 10 has dealt with both 5.25" disks and a cassette tape recorder. [14:29:48] * Coren has had fun with 8" floppies. Lots of nostalgia there. [14:30:03] First of, they had over a *meg* of space. [14:30:24] (when 5" floppies were lucky to head for 200K) [14:30:36] YuviPanda ping [14:30:37] And the noise... still has me nostalgic. [14:30:42] petan: pong [14:30:46] YuviPanda: you want tools-redis? :o [14:30:50] Coren: I don't remember modem noise, ever :) [14:30:57] dialup modem noise that is [14:31:05] (You see, "head load" on 8" drives doesn't mean "start the motor", it means "load the head". Ka-klunk!) [14:31:59] And they had big seek stepper motors. I can still remember the sequence of clunks and crunches that meant FLEX was booting on my S09 (my first real computer) [14:32:03] petan: yes, if we have an instance that runs redis, what is the point of calling that tools-'mc'? [14:32:28] A rose, by any other name, smells just as sweet. 
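The 1.44M floppy oddity mentioned above is quick to verify: a 3.5" HD floppy holds 2 sides × 80 tracks × 18 sectors × 512 bytes, and that total is "1.44M" only if a meg is defined as 1000*1024 bytes, exactly as Coren says.

```shell
#!/bin/sh
# The 1.44M floppy arithmetic: consistent only with a 1000*1024-byte "meg".
bytes=$(( 2 * 80 * 18 * 512 ))
echo "$bytes bytes"                      # 1474560
echo "$(( bytes / 1024 )) KiB"           # 1440
awk -v b="$bytes" 'BEGIN { printf "%.2f \"M\" at 1000*1024 bytes each\n", b / 1024000 }'
awk -v b="$bytes" 'BEGIN { printf "%.2f MiB at 1024*1024 bytes each\n",  b / 1048576 }'
```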
[14:32:43] Coren: my memorable sounds with a computer early on were just playing with sound() and nosound() with sleep patterns, from dos.h [14:33:03] Coren: I should have you review code where all temperoary variables are named after characters from movies you never see [14:33:08] YuviPanda: there is no point just a reason [14:33:09] Ironholds does that, it's infuriating. [14:33:25] historical reason. [14:33:29] it was originally memcached, redis is just alien :P [14:33:41] That way lies my nostalgia: http://oldcomputers.net/swtpc-s09.html [14:34:05] *shrug*. I don't get the redis hate. [14:34:19] Back in 1981. :-) [14:34:35] ogod. That's over 32 years ago. [14:34:44] * YuviPanda calls Coren old man [14:37:35] * Coren realizes he got his first computer before most of his colleagues were born. [14:39:39] Coren: clearly, we need to hire Stallman [14:39:47] and make him a... community liason? :) [14:42:12] ... no. [14:43:20] I can think of worse ideas, but they all revolve along the lines of involving Kim Jong Un in diplomatic roles. :-) [14:44:05] hmm, Kim Jong Un as part of arbcom? [14:44:14] we *hire* him and then put him on arbcom [14:44:19] and create a new flag. [14:44:32] and pull a piratebay and announce we are moving our servers to NK [14:44:49] too much work. [14:55:04] [bz] (NEW - created by: Chris McMahon, priority: Unprioritized - major) [Bug 50622] Special:NewPagesFeed intermittently fails on beta cluster; causes test failure - https://bugzilla.wikimedia.org/show_bug.cgi?id=50622 [14:56:38] [bz] (REOPENED - created by: Chris McMahon, priority: Unprioritized - major) [Bug 50623] Entering AFTv5 feedback causes error - https://bugzilla.wikimedia.org/show_bug.cgi?id=50623 [15:00:57] I just did memcached in my bot a month ago, and now I have to change to redis? Sigh.
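The memcached-to-redis move being sighed about here mostly comes down to swapping the client library and, because the Redis instance is shared, namespacing every key behind an unguessable prefix; the channel's recommendation is `openssl rand -base64 64`. A minimal sketch of generating such a prefix follows; the tool name `anomiebot` is a placeholder, not AnomieBOT's real configuration, and a /dev/urandom fallback is included in case openssl is absent.

```shell
#!/bin/sh
# Generate an unguessable key prefix for a tool on a shared Redis.
# "anomiebot" is a hypothetical tool name used for illustration.
set -e
secret=$( { openssl rand -base64 64 2>/dev/null \
            || head -c 64 /dev/urandom | base64; } | tr -d '=+/\n')
prefix="anomiebot:${secret}"
echo "prefix: $prefix"
# Every key then carries the prefix, e.g. (shown, not executed here):
#   redis-cli -h tools-redis SET "${prefix}:lastrun" "$(date +%s)"
```

A secret prefix does not stop a determined writer from clobbering a key it has seen, as the conversation notes, but with key listing disabled it keeps keys from being discovered in the first place.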
[15:02:14] anomie: it is a rather very quick change, and you might find that other things might also be sped up by using redis [15:02:27] anomie: and remember to use a large-enough prefix to avoid clashes! :) [15:04:53] YuviPanda: Define "large enough". With memcached, I'm using "AnomieBOT:". [15:05:06] !tools-help [15:05:06] https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help [15:05:07] moment [15:05:20] anomie: https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help#Security [15:05:27] anomie: 'openssl rand -base64 64' is my preference [15:06:04] YuviPanda: Also, my memcached interface code will automatically encrypt (most of) the values being stored ;) [15:06:14] anomie: doesn't prevent me from *overwriting* it :) [15:06:38] we've disabled all(hopefully!) ways of listing keys from Redis [15:06:41] but still [15:08:48] hi Coren [15:09:05] liangent: Hey there. [15:09:08] YuviPanda: BTW, please install libredis-perl for me. [15:09:20] anomie: ah, okay. patch coming. I thought I had.. [15:09:29] Coren: that bug? [15:09:32] see mail [15:09:50] YuviPanda: Looks like you have php and python already, but not perl. [15:09:59] Oh, yeah. Can you give me ~20 mins then I'm all yours. [15:10:01] anomie: okay, sending puppet patch now. give me 5 [15:10:06] ok [15:10:47] YuviPanda: No rush, I won't start coding for at least 6 hours, and I may not get to it until next week. [15:11:30] anomie: okay :) [15:14:40] Coren: two trivial patches in labs puppet for you to +2. [15:16:58] yurik: Which is the other? [15:17:16] Coren: https://gerrit.wikimedia.org/r/74161 [15:17:21] and autocomplete strikes again :) [15:18:20] Coren: ty [15:18:33] anomie: perl redis library should be installed on next puppet run, in ~30 mins I guess [15:21:47] YuviPanda: how complicated is it to migrate data from 1 redis to other one? [15:22:07] petan: rather simple. 
we can just copy the AOF files, point them and it should work [15:22:14] right [15:22:18] we should schedule a switch then [15:22:34] OR we could just start the new redis and tell all people to move their data and bots to it? [15:23:31] petan: hmm, perhaps simultaneously with the memcached removal? [15:23:52] ok switch or both and then kill one? [15:24:03] these are 2 different approaches :P [15:24:13] both and then kill one is more developer friendly [15:24:14] kill tools-mc [15:24:16] setup tools-redis [15:24:19] switch is sysadmin friendly [15:24:26] hmm? [15:24:34] we can create tools-redis right now [15:24:37] tell people to switch to that [15:24:39] and then kill tools-mc [15:24:41] all by itself [15:24:47] Coren: thoughts? ^ [15:24:51] no need to do data migration :) [15:25:26] Yeah, keep both for a while, give people time to switch on their own schedule. [15:25:31] I can either a) "move data from tools-mc, kill it and start tools-redis in 1 moment" or b) "start tools-redis right now and let developers move their data and update their bots to new hostname until there is dead line when tools-mc get killed" [15:25:36] (b) [15:25:37] Coren: yes that's what I mean [15:25:48] also /var/ isn't on NFS, right? :) [15:25:59] for fuck sake not :P [15:26:01] :) [15:26:12] well [15:26:16] at some point, parts of it are [15:26:20] like /var/mail [15:26:21] :D [15:26:59] petan: create tools-redis then? there's a puppet class that you can apply and that is all that is needed [15:27:05] okok [15:27:12] petan: ty [15:27:23] I will try your class right now... [15:27:37] I am wondering how's gluster on tools... [15:27:44] because new instances seems laggy [15:38:10] Coren: what's up with tools-redis? glusterfuck? or nfs? [15:38:18] are new instances still gluster? [15:38:42] petan: You shouldn't even be asking the question. Nothing tools should go anywhere near gluster [15:39:18] Coren: so new instances are using gluster or not? 
[15:39:38] I am wondering why I can't ssh [15:39:47] because nfs seems to work on other boxes [15:39:54] if gluster does I have no idea [15:40:01] petan: Nothing uses nfs by default; you need to give the instance the proper class in puppet [15:40:14] ok, I would be happy to do that, but for that instance first needs to work [15:40:36] and my problem is that after like 30 minutes I built it, it still doesn't respond much [15:41:10] I mean, I wasn't asking if I should use gluster or nfs, I was asking why the instance is broken :P [15:41:59] I'm just guessing here, but maybe you need to turn on NFS first, in order to get homedirs, in order to log in [15:42:23] petan: From Wikitech, dude. Give it one of the tools roles that include NFS. :-) [15:42:27] so does it mean that when I create a new instance it will not work until I apply some puppet class that uses nfs? [15:42:37] petan: Listen to andrewbogott, for he speatheth sooth. [15:42:49] I didn't understand what he meant. this used to work in past [15:43:01] the past was broken, in that case :) [15:43:06] I created instance, it was on gluster, I applied the class, ran sudo puppetd -tv twice, then rebooted [15:43:09] I guess without the NFS role, there is no homedir, so no sshing [15:43:31] YuviPanda no, it was like what I said [15:43:56] andrewbogott: what exactly changed that it doesn't work this way anymore? did you completely shut down gluster for tools project? [15:44:05] petan, I can give gluster a nudge and see if that's an issue… but I'm pretty sure that creating an instance with the nfs class selected ahead of time is the 'right' way to do this. [15:44:36] I was told by Ryan in past that creating an instance with any class preselected is evil and that I would eventually burn in hell if I do that [15:44:46] now you are telling me the opposite... 
[15:45:03] past == 1/2 years ago :P [15:45:08] clearly, there is no way to not burn in hell [15:45:44] ok, forced a restart of gluster tools-home volume (which I'm surprised exists at all) [15:45:56] andrewbogott: can't it be killed? [15:46:15] petan: As with most moral codes, that is correct in general but not so much in all specific cases :( [15:46:29] YuviPanda why should it be? gluster was always part of new instance creation process, and until default ubuntu images works with nfs right away it should work [15:46:40] YuviPanda, not if we want to support petan's workflow :) [15:46:41] I see [15:46:59] Someday soon we hope to support default project-wide puppet settings. [15:47:13] Where 'hope' means it's my job to implement but I'm not working on it currently. [15:47:20] like you can't propose a default way of a) "create a broken instance nobody would be able to ssh to" b) apply some class c) pray [15:47:42] I'd prefer to have some control over the instance creation :P [15:47:49] petan, you're right, the proper solution is to have different default puppet classes apply to things in that project. [15:48:13] well, I think that all new instances should use nfs by default, if you really want to get rid of gluster soon [15:48:18] petan: Anyway… try it now? [15:48:22] ok [15:48:25] ah, right. so in the glorious future we will have 'create an instance, it already has working NFS, boot into it' [15:48:38] we should clearly replace gluster with SMB :) [15:48:42] hm I always get stuck with debug2: we sent a publickey packet, wait for reply [15:48:44] petan: Yeah, as soon as we're sure the 14-day-failure is resolved that will probably happen. [15:48:59] YuviPanda: samba? :D are you serious [15:49:07] have I ever been not serious? 
[15:49:12] you know it uses a protocol invented by microsoft lol [15:49:18] so is C# [15:49:22] not really [15:49:34] mono is awesome, but C# was invented by mIcrosoft :P [15:49:37] *Mi [15:49:46] anyway the SMB protocol is a pretty undocumented thing that follows basically no standards because microsoft never made any [15:49:50] Does smb have a case-sensitive mode? Otherwise that would be very confusing! [15:50:00] it's a hacked formerly-proprietary protocol [15:50:01] we should also replace all our SSDs with USB3 hard drives. I heard it is much faster than USB [15:50:01] 2 [15:50:40] CIFS is now actually pretty well documented. Doesn't suck any less for it. [15:50:56] YuviPanda c# was heavily financed by microsoft, but the people who invented it / were hired by microsoft did not really have any other connection to it in the past, AFAIK the creator of C# is a former designer of the PASCAL language [15:50:58] USB3 is standardized, and hence clearly does not have these issues [15:51:28] There's a server in our cluster called 'tridge' which I always assumed was an smb server based on the name. [15:51:32] But I have no other evidence. [15:51:34] petan: Niklaus Wirth committed C#? Isn't he dead? [15:51:38] like microsoft put a lot of money into it, but it was kind of "made properly from the beginning" while that protocol which samba uses was "bleh from the beginning" [15:52:01] petan: "Made properly?" C# [15:52:04] Coren: let me find some reliable information, more reliable than my brain :D [15:52:14] * Coren chuckles. [15:52:16] Coren: he means Anders, who wrote Turbo Pascal. [15:52:21] not Pascal. [15:52:24] ah probably that guy [15:52:45] Coren: C# is a fairly nice language, since the main alternative for it is Java. [15:52:45] C# manages to be worse than C++ from a language design pov. That's one hell of a feat. [15:53:05] C++ and C# don't really compete. And Lambdas and LINQ <3 [15:53:08] without those it is just... meh.
[15:53:22] Coren: every language that doesn't produce binaries which are able to run natively is going to be worse at some point :P but also better at some other point [15:53:28] also we should replace gluster with smb over USB3 :) [15:53:48] and NFS with FUSE+HTTP [15:53:56] c++ and c# are about as similar as php and delphi :P [15:53:59] There is no reason why C# can't be target-compiled, even if jit. I'm not talking about implementation but about design. [15:54:11] aude, any objection to me doing things like this all over the wikidata classes? https://gerrit.wikimedia.org/r/#/c/74099/ [15:54:17] Coren: it can be. AOT exists, and runs a lot of stuff on iOS [15:54:29] without JIT [15:54:53] btw andrewbogott the answer was "no" [15:54:56] assuming we both have the same definition of 'target-compiled' :) [15:55:03] C# is horribly, horribly designed to make a chimera out of the worst of C, C++ and Java with some clear inspiration from Objective C. Python is better. [15:55:05] andrewbogott: it doesn't work :( maybe I will try the "prayer's way" [15:55:19] Coren: now you hurt my feelings [15:55:20] petan, you mean still not working? [15:55:23] :P [15:55:29] python is creepy, c# <3 [15:55:39] petan, what's the instance name? I'll see if my root key works. [15:55:41] creepy & crappy [15:55:42] Coren: agree that Python is better, but clearly it is better than Java. C or C++ I do not know enough to say. And C# 2 != C# 3 [15:55:47] LOL [15:55:53] also how did we get to language wars again? [15:55:56] andrewbogott: tools-redis [15:56:10] YuviPanda, because holy wars are more fun that coding! [15:56:13] *than [15:56:29] andrewbogott: true. but this isn't even a war, really. it's mostly me agreeing with everyone except petan.
[15:56:31] well [15:56:37] except petan (about python) [15:56:39] * petan sends hordes of lemmings over YuviPanda [15:56:43] that's my army [15:56:56] since we both agree C# is nice, but for different reasons (since he doesn't seem to care for the nice functional aspects of the language) [15:56:59] I just hate python :o [15:57:09] we all tend to agree that C# 1.0 as it was released first was a horrible POS [15:57:16] I don't think that Coren ever agreed c# is nice :P [15:57:16] I'm a little hurt that objC isn't even participating in the conflict. [15:57:25] Like Switzerland, it stands to the side and quietly prospers [15:57:36] hehe, 'prospers' :) [15:57:55] it's more like this tiny government propped up by a crazy dictator... (apple) :) [15:58:44] Coren: btw, C# now has full compile-to-targeted-no-jit-native-code for both ARM and x64. Nothing for x86 but that's fine by me. [15:58:53] hey I need to go... I hope my army will finish YuviPanda without me [15:58:57] * petan pats his lemmings [15:59:33] petan, well, I've logged in and confirmed that /home won't mount. [15:59:41] Not sure that helps us so much [15:59:56] andrewbogott: all right I will apply some class and pray then [16:00:20] but this way of instance building is just wrong [16:00:42] Yeah, that's probably the next thing to try. [16:01:22] 32-bit x86 should just die; it doesn't need to exist [16:01:52] brion: indeed, and most of it seems to be gone now. [16:02:03] brion: most x86 now is just running on nice x64 hardware anyway [16:02:07] i remember being disappointed when apple went intel for macs that they started with 32-bit instead of going straight to x86-64 [16:02:09] brion, 4g of memory should be enough for anyone [16:02:26] 32-bit is gone with the latest versions of OS X at least [16:02:39] but i still have to download fat binaries cause of people on old systems. grr! [16:02:52] I GOT 64-BIT PROBLEMS [16:03:00] brion: blame Apple [16:03:17] you got 63 problems but a bit ain't one?
:) [16:03:31] heya, Ryan_Lane: are the ldap host names the same as they are in production? [16:03:31] at least we don't have 68k+PPC+i386+x86-64 fat binaries [16:04:11] heh [16:04:12] i remember the first power macs…. the OS had to include a 68k emulator because not all of the OS was recompiled for PPC [16:04:22] never mind apps :) [16:05:41] okay food time [16:05:42] brb [16:15:48] andrewbogott: that's instead of git? [16:16:06] aude: It's just a different way to manage the git modules. [16:16:20] that's fine, though.... [16:16:26] Right now we're using a hand-made git::clone class, I want to replace it with an official upstream module [16:16:31] well, 'official' [16:16:45] we'll keep maintaining the single node for now but are also in need of a "multi wiki" test setup [16:16:55] The only thing the new lib doesn't support is specifying a timeout. I think that's moot now that Gerrit is faster. [16:17:04] so i'd like to make a module that uses $wgConf and such to achieve it [16:17:14] sure. [16:17:16] andrewbogott: no preference about git or vcs [16:17:21] 'k [16:17:45] we want, for example, a test (en)wikivoyage and test (en)wiki client [16:18:14] i'm thinking how to do it.... i've done this in my own dev setup [16:18:53] and not have to do things like drop and recreate all the databases and complete reset every 2 weeks :) [17:00:48] kma500: hi [17:01:05] hi [17:01:14] kma500: I am in the videochat. [17:01:22] Are you having any trouble joining? [17:01:31] yes. [17:07:11] kma500: when Coren said "The rules for running bots on projects are decided by the project themselves and are not part of the Labs rules (although we will certainly have a provision that you should abide the rules of every project you interact with). " when he wrote "project" he meant "WMF wiki" [17:08:02] Yes. Two overloaded meanings of "project". [17:08:13] Gets ambiguous when the context includes both. :-) [17:09:52] anomie, I know.
I was going to respond to that statement, but wasn't sure what to say. It left me speechless. :/ [17:10:50] Coren: This is why I always prepend and clarify - Labs Project, Gerrit Project, WMF wiki. [17:11:29] Bye everyone [17:12:11] kma500: https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help [17:12:42] kma500: https://www.mediawiki.org/wiki/Wikimedia_Labs [17:13:25] kma500: https://www.mediawiki.org/wiki/Developer_access [17:14:00] kma500: this is not something you would improve yourself but it is good for you to know about as a resource: https://meta.wikimedia.org/wiki/Mailing_lists/Overview#MediaWiki_and_technical [17:15:08] kma500: http://lists.wikimedia.org/pipermail/toolserver-l/2012-September/005332.html [17:16:25] sumanah: Do you need me to join the call or is IRC enough? (Just saw the email) [17:16:44] kma500: I think IRC is enough right now, but right now I am in the "spam kma500 with a zillion links" portion [17:16:48] Coren: ^ [17:17:00] Coren: I may change my mind in the next 90 min [17:17:02] kma500: http://lists.wikimedia.org/pipermail/toolserver-l/2013-February/005645.html [17:17:04] kk [17:17:08] kma500: http://lists.wikimedia.org/pipermail/toolserver-l/2013-February/005656.html [17:17:13] kma500: http://www.mediawiki.org/wiki/Wikimedia_Labs/Toolserver_features_needed_in_Tool_Labs [17:17:26] kma500: http://lists.wikimedia.org/pipermail/toolserver-l/2013-February/005713.html and https://www.mediawiki.org/wiki/Wikimedia_Labs/TODO [17:17:52] kma500: http://www.mediawiki.org/wiki/Toolserver/List_of_Tools and the thread starting at http://lists.wikimedia.org/pipermail/toolserver-l/2013-February/005715.html [17:18:23] kma500: http://lists.wikimedia.org/pipermail/toolserver-l/2013-February/005752.html and http://www.mediawiki.org/wiki/Toolserver/List_of_Tools and http://www.mediawiki.org/wiki/Wikimedia_Labs/Tool_Labs/Design [17:19:06] kma500: http://lists.wikimedia.org/pipermail/toolserver-l/2013-March/subject.html#start allows you to look at, for 
instance, just Coren's updates [17:20:02] kma500: as you read these, keep in mind that sometimes they are obsolete, but sometimes they give useful information re what the expectations of some community members are, or what the overall vision of Tool Labs is [17:20:12] kma500: http://lists.wikimedia.org/pipermail/toolserver-l/2013-April/005952.html from April: "Tool Labs: Summary of Feedback on the Draft for the Roadmap" [17:21:23] kma500: and -- since it's reasonably recent -- it would make sense for you to at least skim the Toolserver lists from all of May & June: http://lists.wikimedia.org/pipermail/toolserver-l/2013-May/thread.html and http://lists.wikimedia.org/pipermail/toolserver-l/2013-June/thread.html [17:21:59] again, that's the Toolserver list, but it includes a bunch of the people whom we would like to support in their efforts, and who are currently on TS ("TS" is short for Toolserver) [17:23:59] kma500: https://wikitech.wikimedia.org/wiki/Help:Getting_Started and https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help and https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools are key pages on wikitech.wikimedia.org that need better clarity and may need to be expanded; for each of those, also check the Talk (discuss) page [17:24:22] kma500: https://wikitech.wikimedia.org/wiki/Help:Contents may also be useful [17:26:26] kma500: within MediaWiki, there's a way to see all the subpages of a specific page. So https://www.mediawiki.org/wiki/Special:PrefixIndex/Wikimedia_Labs/ has about 44 pages and you should read all of them :) [17:27:04] note that some of them are super obsolete. For instance, Coren do you think it's ok to basically delete https://www.mediawiki.org/wiki/Wikimedia_Labs/Create_a_bot_running_infrastructure which is from late 2011? [17:27:26] kma500: https://www.mediawiki.org/wiki/Wikimedia_Labs/Tool_Labs/Needed_Toolserver_features probably needs a lot of updating [17:27:28] sumanah: Yes, that's completely outdated. 
[17:28:48] kma500: I think it might make sense for https://www.mediawiki.org/wiki/Wikimedia_Labs/Tool_Labs/Migration_of_Toolserver_tools to be something we work on during the sprint next week, but that's just a guess [17:29:06] Coren: is https://www.mediawiki.org/wiki/Wikimedia_Labs/Tool_Labs/TODO something you update? I see it hasn't been updated since mid-May [17:29:21] whereas https://www.mediawiki.org/wiki/Wikimedia_Labs/Tool_Labs/Roadmap_en has been updated in early July [17:30:03] kma500: https://www.mediawiki.org/wiki/Wikimedia_Labs/Tool_Labs/List_of_Toolserver_Tools gives you a bunch of tools. Some of them have source code available so you can try running one on Tool Labs to see what impediments you face [17:30:06] sumanah: For the most part, all of that stuff is complete. I think there is one missing {{done}} all told. [17:31:18] Change on mediawiki a page Wikimedia Labs/Create a bot running infrastructure was modified, changed by Sharihareswara (WMF) link https://www.mediawiki.org/w/index.php?diff=738009 edit summary: [-1336] this is now Tool Labs [17:31:43] Change on mediawiki a page Wikimedia Labs/Tool Labs/TODO was modified, changed by MPelletier (WMF) link https://www.mediawiki.org/w/index.php?diff=738012 edit summary: [-102] near-final update [17:32:22] Coren: what does "render" mean on that TODO? is it https://meta.wikimedia.org/wiki/RENDER ? [17:33:01] sumanah: Yes. I expect Silke will direct her efforts in that direction upon her return. [17:33:15] Coren: got it! Makes sense.
[17:33:32] kma500: http://lists.wikimedia.org/pipermail/labs-l/ -- honestly it might make sense for you to just skim the entire archives [17:33:42] it goes back to Jan 2012 [17:34:49] kma500: and with that, I am now reasonably sure I have given you links to everything you could use to learn about Labs with an emphasis on Tool Labs [17:35:49] thanks! [17:43:03] kma500: http://tools.wmflabs.org/ [17:44:28] That reminds me to poke Luis about the TOS. [17:48:19] [bz] (RESOLVED - created by: Johannes Kroll (WMDE), priority: Unprioritized - normal) [Bug 51359] tools-mail doesn't deliver mails - https://bugzilla.wikimedia.org/show_bug.cgi?id=51359 [18:08:05] kma500: yes, if you run into pages that are full of obsolete and misleading information, it's ok to just copy down the few useful nuggets for your own use and then REDIRECT to a more up-to-date page :) [18:08:49] and yes, anyone can revert if they disagree :) [18:09:06] kma500: at the bottom of any MediaWiki page, on mediawiki.org or wikitech.wm.o or whatever, you will see the last-updated date [18:21:05] https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta [18:26:16] Coren: kma500 and I just thought "gosh it would be nice if searching for toolsbeta on wikitech.wikimedia.org got me to https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta " :) I should check for an open bug [18:27:03] I think that search on wikitech, atm, is hopelessly broken. At least, that's what I think I've heard Ryan_Lane say [18:27:28] broken? in what way? [18:27:34] ah [18:27:50] so.
I removed Nova_Resource from the search results [18:27:56] you need to use the "Everything" search [18:28:25] we should move the project pages into a different namespace [18:28:42] I'd say "project", but that's a kind of weird quasi-special namespace [18:30:59] kma500: https://meta.wikimedia.org/wiki/Bots [18:31:17] kma500: specifically https://meta.wikimedia.org/wiki/Bot_policy [18:31:21] <^d> Ryan_Lane: No, it's just the canonical namespace for NS_PROJECT :) [18:31:31] <^d> Same reason you can't use Category: or File: :) [18:32:01] so, we can use Labs_Project: [18:32:13] <^d> Certainly [18:32:31] Coren: "The rules for running a bot on the Labs are fairly liberal (while still in draft, they sum up as "don't break anything")." - where's the draft? [18:33:09] for the namespace, I'd love "Labs_Project" [18:35:35] sumanah: The TOS draft exists, afaik, only in a google document. [18:41:37] let me add the new namespace [18:41:43] I'll need to modify openstackmanager [18:41:51] then rename all the project pages [18:42:06] easy enough, overall [18:42:43] hi. I was wondering if the only way to create a tools account was via the tool labs web interface? [18:43:39] kma500: yes [18:43:44] (I think kma500 is asking whether there is also some additional way to create a developer account via email or shell or something. I believe the answer to that is no) [18:43:46] thanks! [18:44:23] it's technically possible for an ops person to make an account on behalf of someone else, but there's no reason for doing so [18:45:06] Coren: blocking seems to have been working, right? no one complaining about being blocked incorrectly? [18:45:33] Ryan_Lane: None to date. [18:45:40] great [18:46:45] whoops, I misunderstood - she literally did mean "tool account" and not "user account a person uses to log into Labs" [18:47:25] (the latter being your developer access account, your user account) [18:47:33] (your Labs account) [18:49:43] Ah. No, the web interface is also the only way. 
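As an aside: sumanah pointed earlier at Special:PrefixIndex for enumerating subpages, and Ryan_Lane's planned rename of all the project pages would start with exactly such an enumeration. The same listing is available programmatically through the MediaWiki API (`list=allpages` with `apprefix`). A minimal sketch in Python; the helper name is mine, and only the mediawiki.org endpoint and the "Wikimedia Labs/" prefix come from the discussion above:

```python
from urllib.parse import urlencode

# API endpoint for mediawiki.org (the wiki discussed above).
API = "https://www.mediawiki.org/w/api.php"

def subpage_query_url(prefix, namespace=0, limit=500):
    """Build an API URL that lists all pages under a title prefix,
    the programmatic equivalent of Special:PrefixIndex/<prefix>."""
    params = {
        "action": "query",
        "list": "allpages",
        "apprefix": prefix,       # e.g. "Wikimedia Labs/"
        "apnamespace": namespace,  # 0 = main namespace
        "aplimit": limit,
        "format": "json",
    }
    return API + "?" + urlencode(params)

# Same listing as Special:PrefixIndex/Wikimedia_Labs/ (~44 pages per the log).
url = subpage_query_url("Wikimedia Labs/")
print(url)
```

Fetching that URL (with any HTTP client) returns the page titles as JSON, ready to feed into a rename or audit script.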
[18:50:15] kma500: I know I have mentioned at least one page that *links* to https://wikitech.wikimedia.org/wiki/Help:Terminology . Let me make sure you see that. :) [18:51:55] Coren: I remember that someone set up a kind of sample/example/test to show people sort of a minimal skeleton of a Tools Labs account -- does this sound familiar to you? I don't see it in the tools.wmflabs.org list [18:52:13] I want to show it to Kirsten [18:53:25] well, I'm adding API actions [18:53:39] when we have OAuth support in mediawiki we can probably make that available through the command line [19:01:16] kma500: so, Etherpad is pretty great [19:01:31] thanks for showing me [19:01:38] kma500: if you go to http://etherpad.wmflabs.org/ you can create a new pad [19:01:55] kma500: if you just click "new pad" then it gives you a new pad with a randomly generated "name" (URL) [19:02:01] or you can tell it to create one with a specific title [19:02:21] like Labs Will Take Over The World or whatever [19:02:27] : ) [19:02:36] and then you have a link that you can give out to people on IRC, email, whatever [19:02:46] okay [19:03:22] since Etherpads are ephemeral, the best practice is to have a writing sprint or a meeting or whatever where you all collaborate in Etherpad, and then at the end of the sprint or meeting, move the notes to a wiki page for archiving and future collaboration, and say in the Etherpad "moved to [url]" [19:04:08] okay [19:06:32] kma500: so I'd love for you to own the plan for the doc sprint -- after you review all the stuff I've given you links to (and feel free to search around on wikitech.wikimedia.org and mediawiki.org and meta.wikimedia.org for more stuff too!) 
you will have some ideas of what the info-rich pages are [19:07:03] so you could put up an Etherpad with "here's an idea of the page I want to make or links to the pages that need a revamp, and here are the links to pages with info to draw from AND THEN DELETE/REDIRECT" [19:07:26] If you think there oughta be one giant getting-started page, cool. If you want to have it be a bit more federated, also cool [19:09:13] if people on labs-l get a link to that Etherpad by the end of Monday then they can start helping out early Tues [19:09:26] okay. I'll go through the pages you sent and see what seems to make most sense to keep, what to redirect, and what to use. Right now, I'm thinking that consolidating most info in one place/page would make it easiest to see the whole picture and access. [19:10:10] kma500: yes, makes sense [19:11:26] Thanks for all your help today, sumanah! [19:11:44] :) [19:24:16] Coren: https://gerrit.wikimedia.org/r/74211 [19:24:47] that + defining the namespace in LocalSettings + doing a rename (plus subpages) for all project pages should work [19:25:12] anything pointing to the current pages will get a redirect and we can modify those when we find them [19:25:22] Hi everyone. [19:26:09] Ryan_Lane: can you magically give me some Internet? [19:26:16] CP678|iPhone: eh? [19:27:14] More seriously though can you go to the Cyberbot project folder, open CyberbotII and paste the 2 lines found in spambot.err [19:28:22] Ryan_Lane: ^ [19:29:02] I have absolutely no idea what you are talking about [19:29:19] What do you not understand? [19:29:29] I don't modify other people's stuff [19:30:01] Open /data/project/cyberbot/CyberbotII/spambot.err [19:30:16] is there no other maintainer for this bot? [19:30:28] And paste the two lines you see in that file here. [19:30:51] Ryan_Lane: do you know who I am? [19:30:58] I do not [19:30:58] no [19:31:38] oh. you want me to paste the output into the channel? 
[19:31:45] I thought you wanted me to modify the bot :D [19:31:48] PHP Warning: Invalid argument supplied for foreach() in /data/project/cyberbot/bots/cyberbot-ii/externallinks.php on line 55 [19:31:48] PHP Catchable fatal error: Object of class ResultWrapper could not be converted to string in /data/project/cyberbot/bots/cyberbot-ii/externallinks.php on line 83 [19:32:05] I am Cyberpower678 on an iPhone. I am the maintainer. [19:32:20] cases like this are why you should have a second maintainer ;) [19:32:30] I have no Internet right now so I can't do it myself. I'm trying to debug the script. :p [19:32:34] * Ryan_Lane can't verify people through IRC [19:32:58] You can't look at the cloak? [19:33:14] ah. indeed I can [19:33:22] :p [19:33:23] my irc client isn't really an irc client [19:33:41] it's a chat client that has had IRC badly beaten into it [19:34:07] I don't trust having a second maintainer at the moment. Besides, it really doesn't need any more maintainers. The bot is functioning as it should. [19:34:24] heh [19:36:19] Well I was able to fix the second problem. :p [19:36:24] heh [19:36:43] About giving me magic Internet... [19:36:48] Ryan_Lane: ^ [19:37:00] can't help you there ;) [19:37:17] I kind of need the database on labs now. :p [19:38:34] CP678|iPhone: sounds like you should think about "bus factor" [19:38:44] what happens if you have to go away for a week? [19:39:26] Ryan_Lane: /data/project/cyberbot/bots/cyberbot-ii/externallinks.php [19:39:43] It's my newest script. I've just finished it. [19:39:52] sumanah: ?? [19:39:54] https://en.wikipedia.org/wiki/Bus_factor [19:41:54] sumanah: that's not going to happen. When it does come time that I will go away, I will find someone to replace me. [19:42:33] CP678|iPhone: but what if it happens unexpectedly? [19:42:41] How? [19:42:48] CP678|iPhone: Your iPhone gets stolen. [19:43:11] scfc_de: that's going to make the project fail? [19:43:38] CP678|iPhone: Then you can't find someone to fix stuff.
[19:44:14] scfc_de: if something needed fixing, I would take it off the grid first. [19:44:32] I've never needed anyone to fix anything over the phone. [19:44:46] CP678|iPhone: If you don't have Internet, and your iPhone is gone, you can't do anything. [19:45:02] scfc_de: and? [19:45:26] If the bot goes haywire, that's what the run pages are for. [19:46:04] The code is publicly accessible so anyone can take over the bot if needed. [19:46:59] And the code itself is very stable. It can continue to run on its own for some time before it fails. [19:48:26] CP678|iPhone: "When it does come time that I will go away, I will find someone to replace me.": You are not always in a situation where you are able (or just want) to do that. [19:49:07] True. It will be noticed and the bot will be taken over. [19:49:40] do... you really not trust *anyone* else, at all? [19:49:50] to add them via, I think, 4 clicks, on wikitech? [19:50:18] YuviPanda: it would let them access my configuration files. [19:50:26] I don't want that. [19:50:28] hence the word 'trust'. [19:50:28] :) [19:50:32] oh well. [19:51:18] CP678|iPhone: Didn't you want to create a new bot user on ... beta (?) so that the bot no longer has your personal credentials? [19:51:41] scfc_de: no. [19:52:54] * CP678|iPhone sees what's causing the first bug. [19:53:47] Coren: did we move manybubbles off NFS onto raw metal? [19:54:11] YuviPanda: I've just stopped trying at this point, actually [19:54:28] Bubbles? Did someone say bubbles?!? [19:54:32] :D [19:54:38] ... to get off NFS? I thought he had access to raw disk where you can get dbs setup? [19:55:22] YuviPanda: I'm waiting for a DB-aware ops to +2 my puppet. [19:55:27] ah, okay [19:59:30] manybubbles: Just to make things fun, Asher is OOO. I'll try to rope someone into looking at this. :-) [19:59:42] Coren: thanks! [20:00:27] Well I'm going to let my phone charge. Bye everyone and thanks Ryan_Lane. [20:00:45] I'm about to board a plane for Germany.
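For readers following along: the two PHP errors pasted earlier share one root cause, namely using a raw query result as if it were plain data (`foreach()` over something that may not be an array when the query fails, and a ResultWrapper row object implicitly converted to a string). The same defensive pattern applies in any database API; here is a sketch in Python with the stdlib sqlite3 module. The table, column, and data are invented stand-ins for illustration, not Cyberbot's actual schema:

```python
import sqlite3

# Toy database standing in for the real one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE externallinks (el_to TEXT)")
conn.execute("INSERT INTO externallinks VALUES ('http://example.org/')")

def fetch_links(conn):
    cur = conn.execute("SELECT el_to FROM externallinks")
    rows = cur.fetchall()
    # Guard 1: confirm we actually got a result set before looping.
    # (In PHP, a failed query can hand foreach() something that is not
    # an array at all, producing the "Invalid argument" warning.)
    if rows is None:
        return []
    # Guard 2: a row object is not a string; extract the column value
    # explicitly rather than letting the row be stringified implicitly
    # (the ResultWrapper-to-string fatal).
    return [row[0] for row in rows]

print(fetch_links(conn))  # → ['http://example.org/']
```

The point is simply to unwrap the result at a known place and fail gracefully there, rather than letting a bad result propagate into a loop or a string context.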
[20:22:55] [bz] (NEW - created by: Chris McMahon, priority: Unprioritized - major) [Bug 50622] Special:NewPagesFeed intermittently fails on beta cluster; causes test failure - https://bugzilla.wikimedia.org/show_bug.cgi?id=50622 [21:22:28] Change merged: Ryan Lane; [labs/private] (master) - https://gerrit.wikimedia.org/r/74113 [21:23:12] Damianz: this can be abandoned, right? https://gerrit.wikimedia.org/r/#/c/26441/ [21:23:59] yes - it's mostly pointless now [21:52:37] beta labs down with a fatal from WikibaseLib.php [22:00:20] Coren: tools.wmflabs.org is unresponsive [22:00:57] wmf? [22:01:11] wfm* even [22:01:26] hmm, back now. was down for ~30s-1m [22:01:26] coren wasup man ? [22:01:41] Coren: is there a limit of 'how many minutes should it be down before I ping Coren'? number? [22:01:58] YuviPanda: Yeah, the NFS server still occasionally stalls, but it quickly recovers. Still tracking down the issue. [22:02:08] ah, okay. [22:02:16] YuviPanda: I should say anything below 2min shouldn't be something to worry about. [22:02:22] Coren: got it [22:02:52] OrenBochman: Dinner is up shortly. :-) What can I do for you? [22:02:57] Ryan_Lane: I run a query on the spanish wikipedia database via php but I get an error, so i log in and do the same and I get a nicer error " Table 'eswiki_p.revision_userindex' doesn't exist" [22:03:07] should this be the case [22:03:14] Coren: is that the root cause for the outage at http://en.wikipedia.beta.wmflabs.org/ [22:03:21] Hm, no, the _userindex tables should exist. [22:03:43] my thinking as well [22:04:08] guessing no [22:04:25] chrismcmahon: No; the symptom of that problem is that file access pauses for a little bit, it causes no visible errors except a short delay. [22:05:42] chrismcmahon: In fact, unless the files being read weren't read recently, they'd be cached and only write accesses would be impacted.
[22:06:59] Coren: bon appétit [22:07:53] [bz] (NEW - created by: Chris McMahon, priority: Unprioritized - critical) [Bug 51578] beta down with fatal from WikibaseLib.php - https://bugzilla.wikimedia.org/show_bug.cgi?id=51578 [22:09:16] hey all, has anyone had trouble sshing into all labs instances as of about a week ago? [22:09:29] I'm puzzled because things used to work, and my key still works on other machines [22:09:55] Coren: any OpenSSH configuration changes on bastion? [22:09:57] erosen: which instances specifically? [22:10:09] limn0 and wikimetrics are the two that I've tried [22:10:11] and when you say "other machines" which other machines do you mean? [22:10:19] stat1 [22:10:27] can you ssh directly into bastion? [22:10:31] nope [22:10:49] i've been looking over the ssh -vvv output for a bit and can't see any obvious clues [22:11:01] try to ssh into bastion now [22:11:44] erosen: ^^ [22:11:55] Ryan_Lane: tried: ssh erosen@bastion.pmtpa.wmflabs to no avila [22:11:57] avail [22:12:09] is that the correct bastion / hostname? [22:12:11] no [22:12:16] bastion.wmflabs.org [22:12:31] that worked [22:12:48] that probably solves things; sorry for the trouble and thanks for the quick response [22:12:52] yw [22:12:57] I guess I must have mangled my config at some point [22:15:52] OrenBochman: In re eswiki, I'm not seeing a revision table. Lemme check something, I have a suspicion. [22:24:03] OrenBochman: Fixed. It is as I thought; we had *two* eswiki databases, one of them old and broken and a new one that's working. I switched so we now connect to the correct one. :-) [22:28:42] [bz] (NEW - created by: spage, priority: Unprioritized - normal) [Bug 51580] configure beta labs for SUL2 - https://bugzilla.wikimedia.org/show_bug.cgi?id=51580 [22:35:01] [bz] (NEW - created by: Ryan Lane, priority: Unprioritized - normal) [Bug 51581] Deployment-prep deploys from master and uses a submodule with submodules - https://bugzilla.wikimedia.org/show_bug.cgi?id=51581
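For the record, erosen's ssh mix-up above (`bastion.pmtpa.wmflabs` vs. `bastion.wmflabs.org`) is exactly the kind of thing a `~/.ssh/config` entry prevents. A sketch, assuming an OpenSSH client with `-W` support; only the `bastion.wmflabs.org` hostname and the `.pmtpa.wmflabs` internal suffix come from the log, and `your-shell-username` is a placeholder:

```
# Public bastion, reachable directly from the internet
Host bastion bastion.wmflabs.org
    HostName bastion.wmflabs.org
    User your-shell-username

# Internal labs instances (e.g. limn0, wikimetrics), reached by
# hopping through the bastion
Host *.pmtpa.wmflabs
    User your-shell-username
    ProxyCommand ssh -W %h:%p bastion.wmflabs.org
```

With this in place, `ssh limn0.pmtpa.wmflabs` transparently tunnels through the bastion, and there is no hostname to misremember.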