[00:06:08] Cyberpower678: I figured out the db issue [00:06:37] Betacommand, send the details to me in memoserv. My work break is over now. [00:06:46] Gotta go. [00:07:04] Cyberpower678: sendt [08:03:43] Coren: ping [08:09:42] Coren: ping [09:16:40] Coren: ping [09:27:03] Coren: Where are you? [09:31:38] zhuyifei1999: i hav eating ^^ [10:18:47] zhuyifei1999: given timezones probably sleeping [10:19:20] Betacommand: ok [11:26:28] Coren: ping [11:39:13] petan: when is Coren going to wake up? [12:01:47] petan: [[en:WT:WikiProject Articles for creation#Category:AfC submissions with missing AfC template]] is looking for you with an AFCbot question. [12:01:54] @link [12:01:54] http://enwp.org/WT:WikiProject_Articles_for_creation#Category:AfC_submissions_with_missing_AfC_template [12:05:37] T13|needsCoffee: what is needed from me [12:06:12] zhuyifei1999: unfortunatelly I don't share room with Coren, so I have no idea when he wakes up :P [12:07:10] hasteur was paging you because apparently AFCbot hasn't done anything since 28 Aug 2013. [12:15:22] petan: That's correct [12:17:34] aha [12:17:40] might have died? [12:18:01] or got blocked :/ [12:19:22] LOL [12:19:37] someone enforced https on all wikimedia sites without some kind of preparation I guess [12:19:55] well afcbot doesn't support https [12:20:05] so it can't work in this moment [12:20:12] Whell POO! [12:20:40] that's trully funny to enforce https on wikipedia [12:20:45] what a silly idea [12:21:08] Don't blame the sysops, blame the herd mentality [12:21:31] I don't really blame anyone I just think it's silly [12:22:15] especially given that 90% of traffic comes from people who don't even use any account and don't give a fuck about security [12:24:46] petan: How do you populate en:User:Petrb/Wierd pages? I might slap together a quick script to traverse the list and finish out the current set. Do you have any objection as a temporary workaround? [12:25:08] that is NOT a current set [12:25:13] it's a very obsolete set in fact [12:25:18] current set is on labs [12:27:51] Hrm... I'll see if I can get with you in ~8 hrs to see if I can put together a temporary script to populate up the list. [12:37:22] @link [[en:User:Petrb/Wierd pages]] [12:37:22] http://enwp.org/User:Petrb/Wierd_pages [12:48:27] Coren: ping [13:12:02] !ping [13:12:02] !pong [13:12:35] !ping [13:12:35] !pong [13:19:35] Coren, ping [13:22:09] Coren: ping [13:22:25] poor C oren [13:22:40] petan: :P [13:23:10] you guys know I can help you if it's not DB related? [13:23:35] petan: I know [13:23:47] but it's DB related [13:26:04] petan: see PM [13:29:15] petan, the cyberbot node doesn't seem to be working right. [13:29:48] node? [13:29:51] what do u mean [13:30:01] don't tell me cyberbot has own node XD [13:30:22] Coren created a new node for me with a lot more available job slots. [13:32:22] aha [13:32:24] name? [13:33:20] I got it [13:33:38] Coren: ping [13:33:58] petan, do you know what's worng? [13:34:01] zhuyifei1999 maybe create a ticket for that instead of pinging him? [13:34:08] Cyberpower678: you didn't describe the problem [13:34:37] I submit jobs to it and it deosn't show up in qstat. Those that do can't seem to access the scripts. [13:35:03] petan: I'd rather ping [13:35:30] hmm [13:36:00] Cyberpower678 maybe because you have many jobs currently running on other nodes? [13:36:12] makes no sense. [13:36:18] how you start them? [13:36:36] same as I start normal jubs but with -q cyberbot [13:37:01] does it say anything as error? [13:37:15] Can't access source file. 
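The stuck-submission exchange above is easier to follow with the command written out. A rough sketch, with hypothetical script and job names; -q and -mem are the flags mentioned in the conversation, and jsub is described a little later in the log as a thin wrapper around qsub:

    # submit a one-shot PHP task to the dedicated queue (paths and names are made up)
    jsub -once -N spamscan -q cyberbot -mem 512m php /data/project/cyberbot/spamscan.php

    # if nothing shows up in qstat, ask the scheduler directly
    qstat                  # everything queued or running for this account
    qstat -j <job_id>      # scheduler's detail view for one job

    # jsub normally writes the job's stdout/stderr into the tool's home directory
    # (named after the job); that is the first place to look when a job disappears
    cat ~/spamscan.out ~/spamscan.err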
[13:37:21] aha [13:37:23] that would be it [13:37:37] is that error from qsub or jsub? [13:37:43] jsub [13:39:16] :/ [13:39:33] maybe try with qsub then? [13:39:41] jsub is just a wrapper for qsub [14:04:11] [bz] (8ASSIGNED - created by: 2DrTrigon, priority: 4Unprioritized - 6enhancement) [Bug 53625] Install python module opencv (v1 and v2) - https://bugzilla.wikimedia.org/show_bug.cgi?id=53625 [14:07:59] Coren: ping [14:10:26] zhuyifei1999: what do you need? [14:11:11] Betacommand: the question is: where's the table "text" [14:11:44] zhuyifei1999: not sure its available [14:12:16] so I have to ask coren [14:13:10] zhuyifei1999: Im actually fairly sure that due to how we use external text storage we dont have access to the text field [14:13:31] Betacommand: ??? [14:14:31] Coren, ping [14:14:57] /grep Coren.*ping for this channel looks funny [14:15:18] zhuyifei1999: the WMF doesnt use the text table, they store it in a external format [14:15:46] Betacommand: so only api can access? [14:16:11] zhuyifei1999: that or using a dump [14:16:26] let me grab the docs on text storagd [14:16:44] Betacommand, thanks. I'm such an idiot. [14:17:02] I mixed up the db connection settings. [14:17:21] I attempted to connect to enwiki_p via tools-db [14:17:24] :/ [14:18:15] zhuyifei1999: https://wikitech.wikimedia.org/wiki/External_storage [14:20:38] petan, the cyberbot node doesn't seem to have access to the files. [14:20:43] zhuyifei1999: so yeah, you need to use the API [14:23:07] grabbing ~70,000 pages from the api? [14:24:51] if you need to grab 70,000 pages, use the dump then [14:26:14] Cyberpower678 what files? [14:27:17] petan, brb [14:34:43] YuviPanda, I'm running into trouble generating gpg keys because the VM is entropy-bound. That said, I think that things should be mostly working now… can you install the package without complaint? [14:34:58] testing now [14:35:17] also haha, I just was reading about entropy sources and the problems they can cause to VMs today, and here it is... :D [14:35:19] * YuviPanda sshs [14:36:39] YuviPanda: is it possible to read within a script? [14:36:53] I mean the dump [14:37:10] yeah, but they're kinda big and you'll have to write your code to take that into account [14:37:21] andrewbogott: no luck, still 'cannot be authenticated!' failures on puppet run [14:37:27] crap [14:37:36] andrewbogott: hmm, perhaps I need to run puppet twice [14:37:43] andrewbogott: once for labsdebrepo to do its thing [14:37:48] and then for it to take effect [14:37:52] let me try that [14:38:25] YuviPanda: use which one of http://dumps.wikimedia.org/commonswiki/20130827/ ? [14:38:36] I tried with a normal apt-get install; does that not complain about unsigned repos? [14:38:37] zhuyifei1999: they're already there in toollabs [14:38:42] andrewbogott: that also did [14:39:01] for all category, currrent version only? [14:39:11] YuviPanda: where? [14:39:21] OK, short of duplicating your puppet run, how can I test this? [14:39:37] zhuyifei1999: https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help#Public_dataset_dumps [14:39:49] andrewbogott: ssh to proxy-dammit and run puppet? [14:39:50] :D [14:40:01] andrewbogott: it seems 'stuck' on labsdebrepo [14:40:05] andrewbogott: on the puppet run [14:40:09] andrewbogott: probably looking for entropy? [14:40:24] Um… wait, which problem are you getting? 
A minute ago you said 'cannot be authenticated' [14:40:52] andrewbogott: so I ran puppetd -tv [14:40:57] andrewbogott: and it complained 'cannot be authenticated' [14:41:01] andrewbogott: and failed installing [14:41:06] andrewbogott: however, puppet itself continued to run [14:41:18] andrewbogott: and it is currently at the labsdebrepo part [14:41:20] let me do a paste [14:41:30] I'm running it too, and I get different behavior... [14:41:37] everything runs except for the timeout [14:41:38] andrewbogott: https://dpaste.de/XYytG/ [14:42:06] what the heck? Totally different from what I see on the same VM [14:42:06] andrewbogott: aaha, and now I get a timeout [14:42:08] for keygen [14:42:15] err: /Stage[main]/Misc::Labsdebrepo/Exec[keygen]/returns: change from notrun to 0 failed: Command exceeded timeout at /etc/puppet/manifests/misc/labsdebrepo.pp:49 [14:42:18] notice: Finished catalog run in 315.99 seconds [14:42:19] Anyway… is there a standalone command that will test for the status of that package? [14:42:34] YuviPanda: which to use for all category, currrent version only? [14:42:44] andrewbogott: sudo apt-get install nginx-extras? [14:42:50] zhuyifei1999: I think it is current version only? [14:43:17] YuviPanda: revision [14:43:45] zhuyifei1999: I think so, yeah. Look at the files? [14:43:54] zhuyifei1999: addshore worked on them a while ago, he might have more details [14:44:08] @notify addshore [14:44:08] This user is now online in #huggle. I'll let you know when they show some activity (talk, etc.) [14:45:02] YuviPanda, are you in another puppet run? I can't apt at all, it's locked. [14:45:10] andrewbogott: nope [14:45:11] YuviPanda: thanks [14:45:24] andrewbogott: ah, try again? [14:45:33] yep, better [14:45:38] andrewbogott: I had tried an apt-get install nginx-extras and let it hang. Sorry about that [14:45:41] killed it now [14:45:46] should be unlocked [14:47:22] petan, I'm back. [14:47:30] zhuyifei1999: if you are looking for category based information, you should also look at https://wikitech.wikimedia.org/wiki/Nova_Resource:Catgraph [14:47:39] So I execute the same commnd and add -q cyberbot [14:47:59] The cyberbot node can't seem to access the scripts in the cyberbot project. [14:48:16] Cyberpower678: I couldn't send you with memoserv: I don't know your freenode registration username. :-) [14:48:39] Cyberpower678: Your problem was that your commandline had -mem 8m. 8m isn't even enough to start the shell that will start your command. :-) [14:49:16] Oh? [14:49:30] Coren, err. My username is Cyberpower678. :p [14:49:57] Huh. I tried that and it said it didn't know you. Maybe I had a typo. [14:50:06] Coren, how much memory do I need? [14:50:28] It depends on what you are trying to run. As a rule, PHP is happy with ~350m [14:51:00] That means, I will eat up my 2gb in no time. [14:51:17] Cyberpower678: No, no, don't confuse vmem and actual RSS [14:51:30] dammit, commandline apt-get doesn't care about repo signing. So all of my testing to this point was bogus [14:52:02] If you are running all PHP scripts, you can probably fire dozens of those before you actually run afoul of 2G of ram. [14:52:04] andrewbogott: oh? it does to me [14:52:13] WARNING: The following packages cannot be authenticated! [14:52:13] nginx-common nginx-extras [14:52:14] Install these packages without verification [y/N]? [14:52:59] Mine just says [14:52:59] After this operation, 1,688 kB of additional disk space will be used. [14:53:00] Do you want to continue [Y/n]? 
[14:53:05] andrewbogott: say Y [14:53:10] andrewbogott: it will give you the warning after that [14:53:15] Oh, dang. OK. [14:55:50] andrewbogott: There is a workaround. [14:56:44] andrewbogott: Add [ trusted=yes ] to the *.list pointing at the repo. [14:57:19] Which, in a case of a repo we control, is true. :-) [14:58:31] okay, brb [14:59:31] Well, dammit, that sounds a lot easier [15:00:07] Coren, why does it need that much memory to start? [15:00:28] Also can anybody access the cyberbot queue? [15:00:45] Cyberpower678: That's the sum of php, the memory it allocates, and all the dso its linked to. [15:00:52] Cyberpower678: No, you alone. [15:01:49] Tasks successfully started. [15:07:52] Coren, I've transferred all, but 5 tasks to the cyberbot node. [15:08:06] Why not the other five? [15:08:07] !logs [15:08:08] http://bots.wmflabs.org/~wm-bot/logs/%23wikimedia-labs/ [15:08:09] !logs del [15:08:10] Successfully removed logs [15:08:23] !logs is raw text: http://bots.wmflabs.org/~wm-bot/logs/%23wikimedia-labs/ cute html: http://tools.wmflabs.org/wm-bot/logs/index.php?display=%23wikimedia-labs [15:08:24] Key was added [15:08:37] Coren, 2 of them are in the middle of doing something, and the other 3 are python scripts. [15:09:02] Coren, what happens when I hit the memory limit? [15:09:06] Cyberpower678: Python also works on your queue; my comment about "all PHP" was re the actual footprint. [15:09:39] Python, eats a lot of memory. [15:10:00] Cyberpower678: The box will get fairly slow as it starts trashing to swap. If you go *way* over, the OOM killer will start murdering your bots. But you're using, like, 12% of your memory atm so it's not panicky. :-) [15:14:20] Coren, according to Ganglia, where my node is also visible, I'm using 50% [15:18:16] Cyberpower678: You're not looking at the right numbers; your current usage is (in MB) 247 / 2003. When you look at the ganglia stats; you have to ignore the green regions (buffers, cache). [15:20:13] Cyberpower678: You get your own node, but only if you stop using the general ones. :-) [15:23:27] I will move them in due time. [15:24:35] Well, the time is due now really -- but I don't mind leaving the general queues open for you over the weekend so that you can do so when you have time to yourself. [15:25:21] the spamscan task won't finish until 9/13 and the spambot taks won't finish until tomorrow. [15:26:51] Oh, no worries, I'll not kill any of your jobs. [15:27:22] Just make sure their next runs, at least, or on the right queue. [15:27:43] Coren, trusted=yes on the source line? "deb file:///data/project/repo/ / trusted=yes" doesn't parse [15:28:09] andrewbogott: "deb [ trusted=yes ] file:///data/project/repo/ /" [15:28:18] ah, ok, thank you. [15:29:50] Coren, I like how my own exec node shows up with all the other exec nodes. :D [15:30:36] ... [15:30:48] did we really need whole exec node? [15:36:50] petan, it was Coren's alternative to raising the job slot's limit. [15:36:54] I accepted. [15:39:30] petan: It's a more efficient use of resources for edge cases like Cyberpower. [15:40:17] I just don't think that cyber bot is edge case :P IMHO it's just another bot that does a lot of tasks... 
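For reference, the [ trusted=yes ] workaround quoted above slots into a normal apt run roughly as follows; the sources.list.d filename is invented for the sketch, and the flag only makes sense for a repository you control:

    # point apt at the local project repo and mark it trusted
    echo 'deb [ trusted=yes ] file:///data/project/repo/ /' | \
        sudo tee /etc/apt/sources.list.d/labsdebrepo.list
    sudo apt-get update
    # the "cannot be authenticated" prompt should no longer appear
    sudo apt-get install nginx-extras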
[15:40:37] 16 processes may eat less resources than 1 heavy process [15:40:50] he could as well create 1 task which runs in 16 threads [15:41:25] I mean, he does run lot of processes, but that doesn't really mean it's resource expensive bot [15:42:10] petan: He could have, but this we he specifically gets 1 cpu and 2G of ram to manage as he sees fit, and can neither consume slots nor resources from the other tools. Win-win. [15:42:20] ok [15:42:20] s/this we/this way/ [15:43:07] * Cyberpower678 is extremely happy, so another win. [15:44:36] drugs can accomplish that too [15:44:44] -.- [15:44:57] That wouldn't solve my bot problem. [15:45:03] who knows :P [15:49:30] YuviPanda|brb: I think it's happy now. [15:56:14] andrewbogott: swell! checking [15:56:31] * YuviPanda puppets [15:57:25] !logs [15:57:25] raw text: http://bots.wmflabs.org/~wm-bot/logs/%23wikimedia-labs/ cute html: http://tools.wmflabs.org/wm-bot/logs/index.php?display=%23wikimedia-labs [16:00:53] andrewbogott: works! [16:00:54] yay, ty! [16:08:56] Coren, I've updated the crontab. The only scripts that are still running in the normal nodes are those 5 I mention earlier. [16:10:01] Cyberpower678: Cool beans. [16:16:36] Coren: btw, IIRC we don't have snapshots anymore, so perhaps update https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help#Time_travel? [16:17:10] They're coming back soon, almost certainly during the two weeks I'll be in SF. [16:30:37] Coren, if the grid fails, does that mean my node will keep running on it's own? [16:30:54] Leaving my bot operational? [16:31:15] That depends what you mean by "grid fails". If you mean the scheduler, then yes. If you mean your node specifically, no -- but the jobs will get restarted when it comes back up. [16:42:55] * valhallasw wonders why Eclipse always makes him want to smack his head into a wall [16:43:25] valhallasw: why are you using Eclipse? [16:43:54] YuviPanda: I'm trying to see if it makes using Gerrit from windows less painful [16:44:05] That was before I remembered it's Eclipse. [16:44:07] hahahahahahaha [16:45:01] Internal error validating repository [16:45:02] java.lang.reflect.InvocationTargetException [16:45:03] \o/ [16:45:30] \o/ [16:46:36] basically, the screenshots always look really nice: oooh, integration with both gerrit and bugzilla! let me try that! [16:46:49] but it is Eclipse. Never Forget! [16:47:46] ok, so it doesn't want to login to gerrit [16:48:41] and now it's in an infinite loop! [16:49:28] and of course, clicking the 'cancel' button does nothing except graying out the 'cancel' button. [16:50:38] * valhallasw cries in a corner [16:50:59] * YuviPanda pats valhallasw [16:51:01] there there [16:51:08] this is why you don't use eclipse [16:53:47] ok, let's try the nightly build of the connector [16:54:43] Gerrit 2.5 is supported. Support for Gerrit 2.6 is still work in progress [16:54:47] Well, that explains. [16:55:51] I'd almost consider paying for pycharm, which is also supposed to have gerrit integration [16:56:08] they have open source licenses, after all... [16:56:58] http://i.imgur.com/TS77f.jpg :D [16:57:08] valhallasw: Writing a stupid-simple plugin for our stripped down workflow might not be too hard [16:58:35] I'd almost consider learning COM programming to interface with TortoiseGit ;-) [16:59:04] Oh, right, Windows. Never mind. [17:12:49] heh [17:12:58] * YuviPanda 's workflow doesn't involve git review anymore [17:13:47] Coren: you around? [17:20:30] how do you take ownership of all files and sub directories of a folder? 
[17:25:51] Betacommand: use 'take'? [17:25:59] Betacommand: 'take '? [17:26:31] YuviPanda: that will work for sub files too? [17:26:49] Betacommand: yes, it is recursive by default, IIRC [17:27:47] YuviPanda: what do you do instead of git review? [17:28:04] chrismcmahon: git push gerrit HEAD:refs/for/master [17:28:55] YuviPanda: huh. istr at one point that didn't work. guess it changed. [17:29:08] chrismcmahon: really? that's always been the official way to get changesets into gerrit [17:29:20] git review isn't really endoresed by the gerrit folks [17:29:30] I might be wrong then [17:29:54] <^d> At its core that's what git-review does. [17:29:59] <^d> Plus some other rebasing black magic. [17:30:15] yeah, and I guess now that I prefer doing the black magic myself, if needed... [17:30:19] <^d> YuviPanda: And aliases, man. `git push-review` [17:31:01] ^d: hmm, for some reason I never have gotten big into aliases. [17:31:37] * Betacommand is finally finishing up his labs move [17:32:22] <^d> [alias] [17:32:22] <^d> repack-everything = repack -a -d -f --depth=250 --window=250 [17:32:23] <^d> push-review = push origin HEAD:refs/for/master [17:32:24] <^d> up = pull --ff-only [17:32:26] <^d> log-graph = log --graph --oneline --branches --all [17:32:28] <^d> rollback = reset --hard HEAD~1 [17:32:30] <^d> amend = commit -a --amend [17:32:32] <^d> crap = reset --hard origin/master [17:32:34] <^d> sub-up = submodule update --init [17:32:36] <^d> YuviPanda: ^ [17:32:41] haha@ crap [17:33:05] <^d> rollback is nice too. [17:33:07] <^d> I do that one alot. [17:33:56] you repack a lot? [17:36:22] ^d are other people besides me getting the "unpacker error" when trying to push? [17:37:28] chrismcmahon: I got it y'day, a couple of other people did too, IIRC [17:37:51] YuviPanda: just lucky I guess [17:45:48] <^d> chrismcmahon: Yes, it's known, across multiple repos. [17:57:35] OK, Eclipse, it was nice seeing you again. Farewell. [17:57:40] rm -rf eclipse [17:58:03] :D [17:58:14] valhallasw: try IntelliJ and friends, if you have to [17:58:25] YuviPanda: yeah, that's pycharm [17:58:35] right [18:09:07] ^d: fwiw, my push just went through. not sure what you did, but thanks! [18:12:01] Betacommand: Am now (just done with lunch). What can I do to you? [18:12:28] Coren: fixed it already [18:12:31] Heh. Should have been "ohcrap = reset --hard origin/master" [18:13:06] Coren: do you know how to get git-review working in windows? [18:15:20] Betacommand: Except for my gaming box, I haven't used Windows since well before git existed. I know we have a number of Windows users on labs-l though, so asking there will probably find at least one who has some git-fu. [18:15:32] Betacommand: I'm not sure anyone uses git review on windows [18:15:48] Betacommand: ask valhallasw, or Reedy, perhaps? [18:15:58] I don't [18:16:04] well, valhallasw then [18:16:11] I do all git stuff (well, gerrit based) stuff in a linux VM [18:16:15] YuviPanda: Ive been talking with valhallasw and no luck [18:16:19] I think siebrand does git-review from Windows [18:16:33] IIRC, petan also does some Windows; and I know he does git reviews, so perhaps he does them from there. [18:16:43] * Betacommand wonders why the hell this has to be such a pain in the ass [18:17:08] Betacommand: Your question mentionned "Windows"; that in itself is the answer. :-) [18:17:09] Does it play ball under cygwin? [18:17:24] Reedy: I'd expect it would. Cygwin is posixy enough for most things. [18:17:25] Coren: svn works like a charm [18:17:27] well... 
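Spelled out, the 'take' answer from the start of this stretch looks roughly like the lines below; the tool name and path are hypothetical, and the recursion into subdirectories is the default behaviour described above rather than an extra flag:

    become mytool                        # switch to the (hypothetical) tool account
    take /data/project/mytool/somedir    # re-own the directory and everything beneath it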
[18:17:32] Betacommand: xqt had mixed results with tortoisegit, but it's not great [18:17:41] Betacommand: you don't need to use git review for gerrit [18:18:01] Ryan_Lane: then how does one submit patches? [18:18:03] also, if you install python for windows, installing git review should be relatively straightforward [18:18:16] Ryan_Lane: ive installed it fine [18:18:17] Ryan_Lane: no, it's not, because it needs both command line git and command line ssh [18:18:19] git push origin HEAD:refs/for/master [18:19:00] you also need to add the git hook for gerrit [18:19:06] Betacommand: one more reason to have github integration, really - github has a nice windows client. [18:19:23] valhallasw: or just say fuck git and use svn [18:19:28] :P [18:19:33] From my experience today, I can say pycharm's gerrit plugin and eclipses gerrit plugin are both crap. [18:19:39] Betacommand: you can do that with github. [18:23:38] trust me. use git for a while and you'll never understood why you liked svn [18:24:56] Ryan_Lane: Ive been trying and just cursing out git the whole time [18:25:02] Ryan_Lane: compare tortoisesvn and tortoisegit, and you'll understand. [18:25:14] I use tortoisesvn at work [18:25:15] Ryan_Lane: try running gerrit integration on windows, and you'll understand [18:25:27] easy. don't use windows [18:25:35] in 10 years windows will probably not exist anymore [18:26:15] windows is a piss-poor environment for OSS devs [18:26:45] that said, git-review is just there to make things easier [18:26:49] it's not needed at all [18:26:54] ^d, for instance, doesn't use it [18:27:21] <^d> s/doesn't use it/refuses to use it/ [18:27:21] as long as you have the necessary git hook installed, and you push origin HEAD:refs/for/master you're fine [18:29:15] I really don't get the purpose of tortoisesvn/git they just hide things from you [18:29:24] Ryan_Lane: the point is there are people who would like to contribute who are also running windows. We need to help these people instead of sending them away with a 'well, better switch to linux!' [18:29:40] yeah, we do [18:29:42] Not everyone is a power-user. Some people just have a patch they would like to submit. [18:29:46] agreed [18:29:53] valhallasw: well, you were missing when I said you don't need git-review [18:29:58] as long as you have the necessary git hook installed, and you push origin HEAD:refs/for/master you're fine [18:30:40] I have git on windows :\ [18:31:09] valhallasw: it's not a matter of switching to linux. installing a linux vm is simple [18:31:38] For most people, it's not. [18:31:44] I understand where you're coming from, but Windows is a relatively hostile environment for OSS devs [18:32:06] I'd rather call it an 'impedance mismatch'. [18:32:35] the windows way of doing things - with an explorer plugin et cetera - does not match well with a command-line based workflow [18:32:45] microsoft encourages its own platform, and does absolutely nothing to enable OSS dev [18:34:00] * Ryan_Lane shrugs [18:34:19] there's CoApp [18:34:29] either way, it's just OS wars. the problem is that most devs don't use it, and it isn't supported well. [18:34:46] Ryan_Lane: if you want the counterexamples to how MS actually does try to encourage OSS dev (albeit not through Windows changes), I can give 'em [18:35:27] sumanah did you see http://tools.wmflabs.org/wm-bot/beta/ ? I think you might like it... [18:35:38] thank you petan! [18:35:41] once I implement some search etc [18:35:45] I had not taken a look yet! 
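The git-review-free workflow mentioned above comes down to two steps: install Gerrit's commit-msg hook once per clone (so each commit gets a Change-Id), then push straight into the review namespace. A sketch, assuming the clone came from gerrit.wikimedia.org so that 'origin' already points at Gerrit:

    GERRIT_USER=yourshellname   # placeholder: your Gerrit account name
    scp -p -P 29418 "$GERRIT_USER@gerrit.wikimedia.org:hooks/commit-msg" .git/hooks/
    chmod +x .git/hooks/commit-msg

    # submit whatever HEAD points at for review on master
    git push origin HEAD:refs/for/master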
[18:35:55] it even display part / joins if you request it [18:35:55] actually I do quite a bit of open source python programming in windows that I port to other OSes without any issues [18:36:09] cool! [18:36:36] Ive had quite a few pywiki patches over the years [18:36:44] I've had ... 1 :) [18:36:51] I started working on pywikipedia in windows, and I know at least two pywikipedia developers who like to develop from windows. [18:36:53] basically fixing the copyright year on some files to include 2013 [18:37:19] valhallasw: Im a third [18:38:02] valhallasw: i'll see if I can get g2g back up at the tech days [18:38:07] valhallasw: or I can give you access to the tool account... [18:38:16] yeah, Windows support is good. YuviPanda I didn't know GithubToGerrit was down! D: [18:38:18] Ive got a gerrit account and was trying to submit patches per the documentation and getting pissed off [18:39:01] sumanah: it's been for a week, yeah. I rewrote it to make it a lot more robust, and then haven't had time to swtich it on [18:39:05] nod [18:39:14] I ended up saying fuck it, and making valhallasw commit the, [18:39:17] *them [18:39:28] Betacommand: in the meanwhile, feel free to mail me patches at any time. [18:39:41] at least Git makes it easy to distinguish author & committer [18:39:43] I might take a few days to actually submit, though :-) [18:39:55] valhallasw: Ill look at my code base and see what I should push upstream [18:39:59] valhallasw: in the meantime, also feel free to just send pull requests. I can manually sync fast enough :) [18:40:27] * Betacommand has a shitton of custom code [18:41:28] petan: I found a bug. http://tools.wmflabs.org/wm-bot/beta/index.php?start=07%2F01%2F2013&end=09%2F06%2F2013&display=%23wikimedia-office is only showing me stuff from Sept 5 [18:41:44] YuviPanda: not sure what I can send requests on, though, as I don't know what does what ;-) [18:42:19] sumanah: yes I didn't convert text logs to mysql, so older data are available in txt logs only [18:42:26] valhallasw: :) i get notified on any pull requests to github.com/wikimedia/* [18:42:29] oh, sorry for calling it a bug petan [18:42:38] no problem :D [18:42:41] YuviPanda: oh! I see :-) [18:42:44] Betacommand: ^ [18:42:53] valhallasw: does https://www.mediawiki.org/wiki/Developers/Maintainers help at all? :) [18:42:56] oh, *what* does what [18:43:16] Betacommand: I'll try to see how well the github windows client works. [18:44:07] Oooooh. This makes me a happy valhallasw [18:44:18] it clones subrepositories automagically! [18:44:44] See also git clone --recursive [18:44:51] * Ryan_Lane sighs [18:44:53] WTFPL [18:45:03] you guys realize that isn't an OSI approved license, right? [18:46:01] Ryan_Lane: well, according to the OSI: [18:46:04] Comments: It's no different from dedication to the public domain. [18:46:17] which screws people in lots of countries [18:46:25] valhallasw: at this point I've the commands to convert a pull req into a gerrit patchset almost in muscle memory, so... [18:46:37] YuviPanda: I'm just going to try some things, then. [18:46:45] YuviPanda: ha! [18:46:50] ty [18:46:53] Apache and BSD licenses are clearer and do the exact same thing [18:47:00] same with MIT [18:48:18] Ryan_Lane: It's technically a free software license according to the FSF, but they recommend the X11 license (for shorter programs) and Apache 2.0 (for longer ones) instead [18:48:44] the labs ToU technically require OSI approved licenses... 
[18:49:16] I guess we're going to have to get an exception for WTFPL since people don't seem to understand it's a horrible license :( [18:49:57] it is rather clearly worded... [18:50:19] clearly worded and well defined are not the same thing [18:50:38] and it's especially not the same as good from a legal perspective [18:50:49] why? [18:50:54] 'do whatever' [18:50:57] is all the license says [18:50:57] WTFPL = public domain, which offers no legal protection in a number of countries [18:51:31] oh wait, am I getting into a license discussion on IRC?! [18:51:36] * YuviPanda smacks self on the head [18:51:41] well, you use WTFPL on labs [18:51:51] this is a necessary discussion [18:51:57] since we don't allow it in the ToU [18:52:06] Ryan_Lane: AFAICT it's not public domain, you retain copyright [18:52:24] I think that's following the letter a bit too strictly. [18:52:33] if an exception is what is needed, can WTFPL get one? [18:52:46] that's what I was just saying [18:53:09] it's lame that people just don't use apache, mit, bsd, x11 or any of the other permissive licenses instead, though [18:53:15] Ryan_Lane: I don't see why "OSI approved open source license, or a free software license as defined by the Free Software Foundation" wouldn't be an appropriate way to phrase this [18:53:39] the FSF is OK with the WTFPL, so problem solved. [18:53:46] Exactly [18:53:47] Ryan_Lane: what license is better than wtfpl [18:53:51] any license [18:53:58] just tell me one [18:54:10] apache, mit or bsd [18:54:14] I understand how having code out there without any attached license is bad [18:54:17] if you want the same style of license [18:54:30] WTFPL is a license. and as marktraceur it says it is even okay by the FSF! [18:54:41] changed to bsd [18:54:50] petan: thanks :) [18:54:55] yw [18:55:17] YuviPanda: it lacks legal protection in numerous countries, hence my problem with it [18:55:31] I kind of like wtfpl because it is compliant with my philosophy regarding law [18:55:33] I'm fine adding an exception for it, if legal says ok [18:56:05] I like the wording marktraceur suggested. broader, and still in scope. [18:56:22] I used to like GPL until I found it is a bit restrictive [18:56:28] it's just silly to add an exception when there's sane equivalents that offer legal protection for people in countries that don't have public domain [18:56:32] especially when it comes to comerical environment [18:56:43] petan: indeed. the less restrictive licenses are bsd/apache/mit [18:56:44] Ryan_Lane: WTFPL _isn't_ public domain [18:56:50] (I still license my libraries and (other stuff) as apache) [18:56:54] marktraceur: see the OSI's rejection decision [18:57:01] I saw some nice xkd or something regarding these [18:57:09] * xkcd [18:57:32] I seem to have found a bug in github for windows [18:57:58] Ryan_Lane: The authour retains copyright [18:58:14] valhallasw: github for windows?? I thought it's a site [18:58:20] "Comments: It's no different from dedication to the public domain. Author has submitted license approval request -- author is free to make public domain dedication. Although he agrees with the recommendation, Mr. Michlmayr notes that public domain doesn't exist in Europe. Recommend: Reject" [18:58:22] petan: they have a windows git client [18:58:33] I'm not well-versed in this part of the law, but I'd be more comfortable hearing from someone who is than assuming the OSI know what they're doing and the FSF have no clue [18:58:38] hold on, is git affiliated to github? [18:58:43] petan: Noooo. 
[18:58:58] so they have own git client? [18:59:00] yes [18:59:01] mhm [18:59:02] Meh, I use the ISC license as "absolute simplest" [18:59:13] I would just stick with official git client, it works nice [18:59:16] I use it in windows too [18:59:41] @link [[ISC]] [18:59:41] https://wikitech.wikimedia.org/wiki/ISC [18:59:47] @link [[w:ISC]] [18:59:47] https://en.wikipedia.org/wiki/ISC [19:00:00] seems Linus Torvalds wrote like 3 things, Linux, git, and .. a program to log your diving activity [19:00:14] @link [[w:ISC license]] [19:00:14] https://en.wikipedia.org/wiki/ISC_license [19:00:14] I'm not going to start a 'not everyone is a long-time experienced command line user' discussion for the second time this evening, sorry. [19:00:35] hahaha [19:00:36] valhallasw: do they have a way for you to publish bug reports for their windows client? [19:00:36] looks good [19:00:40] http://www.gnu.org/licenses/license-list.html [19:00:54] WTFPL version 2: "We do not recommend this license." [19:00:55] YuviPanda: contact / support mail address [19:01:54] Ryan_Lane: But it's listed under GPL-compatible free software licenses. [19:02:29] sure. public domain is also compatible with gpl [19:02:44] so is bsd and mit and apache2 [19:03:55] http://en.wikipedia.org/wiki/Hacktivismo_Enhanced-Source_Software_License_Agreement heh [19:04:37] mutante: heh, 'no evil!' [19:04:43] heh. that's liike that horrible no evil license [19:04:43] yeah [19:05:09] Who was it that got a specific exemption to to evil with it again? :-) [19:05:10] Whoa wait [19:05:20] From that OSI minutes document - "Comments: Matt Flaschen reviewed v1.1..." [19:05:27] Our superm401? [19:05:39] Coren: nobody, I'd think? [19:05:49] you know we have (or had) a bunch of OSI people, right? [19:05:52] Oh goodness, we have all manner of people [19:05:58] mutante: SURPRISE! [19:05:58] I didn't know that [19:06:11] err [19:06:13] alolita, danese cooper, etc. etc. [19:06:15] not mutante, but marktraceur. [19:06:26] YuviPanda: It was even more surprising to mutante! [19:06:36] heh [19:07:03] ok. food. I guess I'll put in for a freaking exception for public domain and WTFPL [19:07:12] Heh, lvillaWMF is on the board now [19:07:18] maybe I should always just fork and relicense WTFPL when I see it [19:07:23] and keep up with all changes ;) [19:07:25] Oh, no, that was the JSON license! :-) [19:07:32] Coren: apparently IBM - Because of this restriction, according to Crockford, IBM asked Crockford for a license to do evil, such that their customers could use it.[6][7] [19:07:51] Ryan_Lane: That's usually what I do [19:07:59] I mean, obviously I can do whatever the fuck I want, so I'll fork and relicense in such a way that forbids only the author from using it [19:08:02] YuviPanda: haha:) [19:08:07] (http://dev.hasenj.org/post/3272592502/ibm-and-its-minions) [19:08:23] :) [19:08:32] Ryan_Lane: The "Go Fuck Yourself Public License" [19:08:38] oh my god yes. [19:08:47] in fact. I may write a bot for this [19:08:53] for github projects [19:08:57] haha! [19:08:58] I'm so excited [19:09:07] where I fork any WTFPL github project [19:09:16] and relicense it as GFYPL [19:09:34] Best. Project. Ever. [19:09:36] do it! 
:D [19:09:36] :D [19:09:45] I mean, just for the lulz [19:10:00] at the same time, people might actually start using my forks and that would be horrible [19:10:12] WMF: Furthering free knowledge with excessive profanity since 2013 [19:10:17] Coren: from the linked article, '“I give permission for IBM, its customers, partners, and minions, to use JSLint for evil.”' [19:10:41] Ryan_Lane: then again, you have to watch out which WTFPL you find. [19:10:48] https://github.com/endeav0r/darm/blob/master/LICENSE < that one would give you trouble [19:10:58] oh wait [19:11:08] that's the WTFPL, no? [19:11:11] YuviPanda: The acknowledgement of minions is especially touching. [19:11:13] that one actually only permits to copy the license, not the rest of the project [19:11:27] Coren: indeed! equal opportunity, all that. [19:11:40] ok. food time :) [19:11:57] valhallasw: that's copyright for the License *itself* [19:11:59] YuviPanda: hm, you're right. [19:12:07] 50 mins till reboot of virt node [19:12:14] valhallasw: the terms of the license itself are mentioned in the document body under the line item 0 [19:12:25] which states clearly, 'You just DO WHAT THE FUCK YOU WANT TO.' [19:13:06] <^d> Can we use the DWTFYWWI license? [19:13:43] <^d> Which is itself licensed under DWTFYWWI [19:14:05] haha [19:14:08] different from WTFPL [19:14:19] It would be great if they were licensed under each other [19:14:26] ^d only if it's compatible with the other licenses [19:14:27] :P [19:14:39] but something tell me it's compatible with everything [19:15:01] ^d: I like that license thank you very much. [19:15:11] <^d> I do too :) [19:15:29] ^d: I'd use it but I'm already commited to WTFPL... [19:16:24] ^d: btw, git.wikimedia.org is down (you probably already know) [19:17:23] <^d> Again? I just restarted it like 45m ago [19:19:05] <^d> Hrm, it's running :\ [19:19:08] <^d> Wonder wtf is up [19:22:27] I never knew microsoft released all source codes for .net O.O [19:32:21] I hear that there is a process for creating binary packages for labs? [19:32:50] php really needs partial classes [19:33:12] awight: I think AzaToth might be the guy to talk with [19:33:19] he is packaging guru :P [19:34:25] petan: rad, thanks. I also discovered https://wikitech.wikimedia.org/wiki/Git-buildpackage [19:34:36] interesting [19:36:27] awight: http://www.debian.org/doc/manuals/maint-guide/build.en.html#debuild [19:38:40] mutante: yes, thanks for the link. The big question is, what process I need to follow to make the binary pkg palatable to WMF labs and security buffs. [19:39:13] well, when i did it, i made it so that debuild -us -uc would give you a .deb after cloning [19:39:34] but i didn't actually have to compile, my package just copies files [19:39:50] and then opsen run the debuild command to their own satisfaction? [19:39:54] awight: yep [19:40:16] Ryan_Lane: ok that might not make me go crazy ;) [19:40:18] Ryan_Lane: should those all be in operations/debs? [19:40:24] ideally, yes [19:41:06] awight: request a new gerrit project for it first, then send gerrit change, add reviewers [19:41:18] first step is via wiki [19:41:38] got it! [19:42:17] Ryan_Lane: hmm, operations/puppet.git doesn't have a license file? [19:53:21] Ryan_Lane: thoughts on https://gerrit.wikimedia.org/r/83127? [19:53:54] Ryan_Lane: one of the things is that there will be different lua files for toollabs and general labs proxying. I am guessing these should be specified in the role, but then I'll have to put the files in /files? [19:54:16] why different lua? 
[19:54:44] Ryan_Lane: toollabs would be from tools.wmflabs.org// [19:54:52] general one would be just [19:55:08] hmm, I could just have them be 'dynamicurlrouter' and 'dynamichostrouter' [19:55:11] I really wish we'd use virtualhosts for tools rather than /toolname/ [19:55:22] toolname.tools.wmflabs.org? [19:55:26] yeah [19:55:29] me too [19:55:30] more flexible [19:55:32] why don't we? [19:55:35] Coren: ^^ [19:55:36] :) [19:56:20] Because it greatly complicates presenting valid SSL certificates, for one. [19:56:31] no it doesn't [19:56:36] *.tools.wmflabs.org [19:57:02] SAN: tools.wmflabs.org [19:57:53] http://ganglia.wikimedia.org/latest/?c=Virtualization%20cluster%20pmtpa&h=virt12.pmtpa.wmnet&m=cpu_report&r=hour&s=by%20name&hc=4&mc=2 <-- did you see we have a lovely new virt node? [19:59:34] Hm. That /could/ work; U alaways found subdomains to be inane though, and virtual hosts to be more of a pain to manage than proper rewrite rules -- but that's probably apachisms. Can you add and remove virtual hosts dynamically with nginx without changing the actual config and thus possibly breaking the server? [19:59:54] Coren: indeed, that's the entire thing I'm doing :) [20:00:40] * Coren grumbles. [20:00:43] heh [20:00:58] changing the urls is easy with rewrites, too [20:01:44] Ryan_Lane: also, the lua would still have to be different - tools can have /help running something else, and /somethingelse running something else. [20:01:51] tools.wmflabs.org/(.*)/(.*) -> $1.tools.wmflabs.org/$2 [20:01:57] It's still teh suck. I dislike virtual hosts for the same reason I dislike NAT; it's an ugly hack that's not actually useful but circumvents flaws in the design of the deamon. [20:02:10] whaaaaa? no way. it's way better :) [20:02:24] Ryan_Lane: each url prefix would need to be routed to a different host/port configuration. so... differnet lua [20:02:29] it also allows the better apache 2.4 feature with each virtual host running as a different user [20:02:47] YuviPanda: ah. right [20:03:02] actually, that may not be a bad feature for labs as a whole, either [20:03:10] maybe you want different url paths going to different backends? [20:03:52] shouldn't that be at the instance level? [20:03:52] and also would that even be useful for labs as a whole? [20:03:59] that's a relatively normal feature [20:04:26] instance level? [20:04:29] hmm, so match first on host, then on url prefix [20:04:40] instance level as in, you have an apache or nginx on your instance that does that particular bit of routing [20:04:41] you may want /help going to host1 and /dieinafire going to host2 [20:04:58] which is what tools is doing, right? [20:05:10] right [20:05:21] or maybe you want both going to the same place, but same same [20:05:28] or even you want /help going to port 9343 on host1 and /dieinafire going to port 9999 on host1 [20:05:31] yeah [20:05:49] I think it's likely doable to keep the lua the same [20:05:59] hmm, yes. [20:06:03] that makes sense. [20:06:30] Ryan_Lane: hmm, but can we still rename the module to dynamicproxy? [20:06:33] shit. I'm late on the reboot [20:06:37] Ryan_Lane: That apache can only do container-like things on hostnames and IP is a design flaw, not a justification for misusing DNS to indicate tenants. [20:06:45] Ryan_Lane: Wait! [20:06:50] ? [20:06:56] I bet it's generic enough for me to extract it out into its own module [20:07:02] Ryan_Lane: Forgot that was now, gimme a sec to shutdown the mysql of tools-db neatly. [20:07:05] I didn't do it yet. whats u[? 
[20:07:06] ah [20:07:07] ok [20:07:21] The rest is okay, but that's better shut down neatly. :-) [20:07:26] that's fine [20:07:29] have at it [20:07:31] okay, let me finish up the API [20:07:34] we really need to move that to hardware :) [20:08:25] Ryan_Lane: Yes. Was waing on Asher to help me with the puppet class but... well. [20:08:31] heh [20:08:40] Ryan_Lane: All set. Have at it! [20:08:44] ok [20:09:18] this'll take a while [20:09:23] stupid ciscos take ages on boot [20:10:03] * Betacommand grumbs about not getting a reminder on terminal about the reboot [20:10:15] well, that's not easy [20:10:16] Ryan_Lane: By happenstance, tool labs impact will be fairly well contained. We lose both bastions but none of the grid. [20:10:19] I'm rebooting at the host level [20:10:24] it doesn't really inform the instances [20:10:34] Coren: heh. cool [20:10:37] so.... [20:10:47] there's a feature in openstack we could use to make your life easier [20:11:08] you can ensure instances boot on different nodes [20:11:14] Ryan_Lane: Im used to the toolserver where we got reminders [20:11:33] I sent a notice to the list :) [20:11:46] Ryan_Lane: timezone's threw me off [20:12:00] that's why I used UTC. heh [20:12:22] Ryan_Lane: I still thought I had two hours [20:13:02] * Ryan_Lane grumbles [20:13:09] I'm going to need to powercyle [20:13:14] it's hanging on shutdown [20:13:42] I kind of expected that [20:14:41] testing memory [20:14:57] stupid cisco is so slow to boot :( [20:19:32] Coren: around? [20:19:43] wtf. why is mysql running on this node? [20:19:55] or somebody else of the higher ups on labs [20:20:05] QueenOfFrance: Yep. Idling while I wait for Ryan to do his thang. [20:20:08] Need help in tracking down if this is a labs bot and if so, which one it is https://en.wikipedia.org/wiki/Special:Contributions/10.4.1.125 [20:20:40] ok. looking to reboot the nodes now [20:20:52] QueenOfFrance: It is. It's a bot running on node 6 of the grid. [20:21:09] Coren: any possibility of telling which one it is from your end? [20:21:29] Coren: prime suspects is legobot I'm told [20:21:48] QueenOfFrance: If it has a distinct UA I can checkuser it, but it's not obvious otherwise. Lemme go do it now. [20:23:45] Coren: its one of two operators [20:24:40] Legoktm or cyberpower678 [20:24:48] Hm. "php wikibot classes" isn't all that illuminating for an UA, but I see lego does use that UA. [20:24:53] god damn it. what's the flag to tell mysql to print regular text output? [20:25:09] Ryan_Lane: vs what? [20:25:24] vs that table [20:25:56] from the cli [20:26:11] Yeah, definitely one of legoktm [20:26:53] QueenOfFrance: Incidentally, keeping that IP softblocked on enwp should be okay; enwiki bot rules says "no editing logged off" [20:27:10] ah. got it [20:27:40] QueenOfFrance: Yep. {{confirmed}} to be legobot [20:30:37] Coren: did you want to reboot your own tools instances? [20:30:43] if so I'll remove them from the reboot list [20:30:48] otherwise I'm rebooting slowly [20:30:52] so it may take a while [20:31:31] I'll split them into another file [20:31:38] Ryan_Lane: Does it hinder you at all if I reboot them myself? [20:31:41] no [20:31:46] I recommend it, in fact [20:31:49] Heh. [20:31:52] Ryan_Lane: rebooting what> [20:32:05] Directly on virt11 or through wikitech? [20:32:05] the tools instances [20:32:10] through wikitech [20:32:13] why? [20:32:16] I go do so now then. [20:32:19] because I rebooted virt11? [20:32:27] nfs? [20:32:45] the schedules maintenance? [20:33:32] virt11 is node or nfs? 
are you rebooting only tools instances or some other too? [20:34:26] meh I can't find that mail [20:34:55] all instances on virt11 [20:34:59] virt11 is a virt node [20:35:00] petan: virt11 had to be rebooted; some of the tools instances were on it. I'm now bringing them back up. [20:35:21] In two phases; infrastructure, then bastions. [20:35:34] ah ok [20:35:41] I found that list of instances... [20:36:22] I just don't get this... if virt node is rebooted, the instances don't get up automagically? you need to start them using reboot? [20:36:29] I should really write a script for reboots [20:36:34] petan: Need to start them manually. [20:36:39] petan: I don't have the flag set to automatically boot them [20:36:42] using reboot? [20:36:52] Ryan_Lane: wikitech needs stop start buttons too :P [20:37:02] sometimes shutting down an instance could be useful [20:37:04] stop/start? [20:37:06] yes [20:37:09] like shutdown [20:37:09] I'd prefer instances always run [20:37:16] why? [20:37:22] doesn't it eat more resources [20:37:23] otherwise they aren't getting security updates and puppet runs [20:37:31] hmm [20:37:39] if they aren't being used they should get deleted [20:37:54] well maybe some instances don't need to be used temporarily [20:38:00] deleting and recreating takes a lot of time [20:38:01] shutdown systems are vulnerable sytems [20:38:05] shutdown / start takes few minutes [20:38:34] I disagree systems that are down are in fact most resistant from hackers :P you can't hack a machine with no power [20:38:43] Looks like Tool Labs is back to full joy. Grit status: "There was maintenance?" [20:38:46] but well I get the point [20:38:48] but you can easily hack one that's just coming up [20:38:53] sure [20:39:10] Coren: where you see that? :P [20:39:16] I really hope this waitio is just because of the boots :) [20:39:39] otherwise I'm going to be annoyed [20:40:40] running jobs 93? [20:40:41] server restarted? [20:40:44] how did they all start? [20:40:51] Danny_B: yes [20:40:56] sigh [20:41:08] why can't notices be sent ahead? [20:41:23] Danny_B: there was a notice [20:41:27] Danny_B: there was some e-mail, but in fact I would prefer some motd notices, Coren :o [20:41:27] three days ago [20:41:44] are you subscribed to labs-l? [20:41:44] yes I noticed it 3 minutes ago :P [20:42:21] since gmail start putting wikitech emails to "social" tab I don't read them so much [20:42:24] What waitio? [20:42:38] petan: It was also on labs-l [20:42:48] Coren: yes that one is in social tab as well :P [20:43:01] I should probably fix my gmail [20:43:02] petan: It really shouldn't. :-) [20:43:26] Danny_B: If you had jobs sent to the grid, they should not have been affected by the maintenance. [20:43:43] well, dab & nosy always set up some notification which was saying first every hour then like 30 mins, 10 mins, each min "the server is going to be rebooted" so everybody connected knew [20:43:55] good wm-bot always survive every outage XD [20:44:12] Coren: is there any howto for jobs in grid? didn't find any. no manual - no grid... 
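For the grid question just asked, the everyday commands fit on a few lines; a hedged sketch with made-up script and job names (the help page linked next covers the details):

    jsub -once -N myjob python $HOME/myscript.py   # run a script once on the grid
    jstart -N mybot php $HOME/mybot.php            # continuous job, restarted if it exits
    qstat                                          # what is queued or running right now
    jstop myjob                                    # stop a job by its name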
[20:44:36] !toolsdoc [20:44:37] https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help [20:44:41] Danny_B: ^^ [20:44:44] Coren: http://ganglia.wikimedia.org/latest/?c=Virtualization%20cluster%20pmtpa&h=virt11.pmtpa.wmnet&m=cpu_report&r=hour&s=by%20name&hc=4&mc=2 [20:44:53] it's still doing reboots, of course [20:45:09] so I don't find that abnormal [20:45:15] Danny_B: I complained the same thing about no notice [20:45:17] Danny_B: In particular, section 8 "Submitting, managing and scheduling jobs on the grid" is what you want. :-) [20:45:45] again, are you guys subscribed to the list? [20:45:55] Ryan_Lane: Doesn't look all that worrisome to me given the rush to disk for all those vms starting. [20:45:59] yep [20:46:27] Ryan_Lane: yes, but with timezone issues, its still nice to get a notice when using terminal [20:46:30] I gave 3 days notice and listed the time in UTC ;) [20:46:35] please please please, next reboot be sure you set up the notice the way we are used to from toolserver [20:46:43] Ryan_Lane: people forget or confuse timezones [20:46:49] I have no clue how the TS does this [20:46:59] Betacommand, Danny_B: I can't do a shutdown with noisy notices, but I'll make sure there is a very visible MOTD on the bastions for next time. [20:47:02] Ryan_Lane: they just broadcast a message to all users [20:47:19] what's wrong on notices? [20:47:20] Ryan_Lane: It's physical hardware; Beta is referring to the normal shudtown wall [20:47:35] I have mixed feelings on wall [20:47:40] but that's doable on the bastions [20:47:43] Betacommand obviously knows the proper terminology [20:47:45] Danny_B: Because it wasn't the instances being shutdown, but the actual host. [20:48:02] Coren: it should still be do-able [20:48:22] meh. I can salt it to all bastions [20:48:29] it might not be done in the exact same method, but the principle is sound [20:48:33] * Danny_B is not techie, just enduser who wishes more user comfort in this, please [20:48:33] Betacommand: It should. I suppose we can use salt for this. Is are there grains for is-on-virtXX Ryan_Lane? [20:48:46] we should have a bastion grain [20:49:16] Danny_B: I know your feeling, I thought I had two hours before the reboot [20:49:26] Coren: https://wikitech.wikimedia.org/wiki/Salt#Grains [20:49:36] you can add them via puppet [20:49:45] we should have a bastion role anyway [20:50:17] just a side note: i can imagine this could be one of the reasons why people hesitate to migrate from ts to tl [20:50:29] that the behavior in crucial things is not the same [20:50:32] because we've had two scheduled outages? [20:50:42] or have we had three? [20:50:50] I can only remember two. [20:50:58] two in what? 5 months? [20:51:07] i remember more restarts than two including the today's one [20:51:15] both were relatively non-impactful [20:51:30] Danny_B: Yes, I'll work on being noisier about scheduled maintenance windows. But you should really subscribe to labs-l if you aren't. :-) [20:51:31] but no, it's not about doing a restart, but about no online warnings [20:51:57] online warnings are often useless [20:52:01] but we'll do them in the future [20:52:05] Coren: many thanks on behalf all those who will appreciate it [20:52:15] Ryan_Lane: thank you as well [20:52:15] Ryan_Lane: Often, but clearly we have endusers here that would have appreciated them. :-) [20:52:26] indeed. hence why we'll do them in the future :) [20:52:44] RESOLVED FIXED [20:52:45] ok. 
all instances should be rebooted [20:52:46] ;-) [20:52:46] Ryan_Lane: its nice to have a few minutes notice when working in the terminal [20:52:53] Betacommand: indeed [20:53:06] <^d> What's a terminal? Do they have that in windows? [20:53:07] thats the main perk to these notices [20:53:15] hm. 43 virts up [20:53:16] ^d: yes they do [20:53:18] that's 2 short [20:53:25] * Ryan_Lane wonders which 2 didn't reboot [20:53:26] ^d: i am on windows now, but on toolserver ;-) [20:53:36] <^d> What's a tool server? [20:53:41] :-P [20:53:53] ^d: this solaris based thing [20:53:53] Danny_B: DFT [20:53:59] :) [20:54:06] dft? [20:54:13] <^d> Ryan_Lane: What's solaris? Is that a kind of windows? [20:54:18] Dont Feed the Trolls [20:54:28] ^d: it's kind of like windows. it's closed source [20:54:36] Ryan_Lane: ^ [20:54:38] Ryan_Lane: Any words on migration/resize btw? It'd be nice if I didn't have both of Tool Labs' bastions on the same virt box. [20:54:40] ;) [20:54:49] (Also they could use being embiggened) [20:54:56] Coren: I can cold-migrate them now if you'd like [20:54:58] <^d> Betacommand: If people didn't feed me I'd starve ;-) [20:55:07] Coren: is text available via db ? [20:55:15] onto virt12, which is relatively empty [20:55:24] resize is waiting on a review [20:55:27] ^d: and that would be a bad thing™ [20:55:28] if you'd like to do it [20:55:39] https://gerrit.wikimedia.org/r/#/c/68126/ [20:55:49] <^d> Betacommand: If I starved, Ryan would have to go back to managing Gerrit for everyone. [20:55:52] <^d> I don't think Ryan wants that. [20:55:53] Betacommand: ^d is not a troll, he's just one of those crazy devs who obviously hasn't seen a daylight for a while, so has different way of thinking... ;-) [20:55:58] no. i definitely do not [20:56:13] Coren: thanks! [20:56:16] Danny_B: no, he's definitely a troll [20:56:18] Danny_B: I know I was joking, Im very similar to them [20:56:18] kind of like me [20:56:25] Betacommand: No; the clustering we have doesn't actually *have* text tables. It'd actually be faster to use the API to fetch a revision text than through a replica of the external store -- even if we had the resources for it. [20:56:43] what is closed source? is that compatible with WTFPL? [20:56:47] YuviPanda: :) [20:56:48] Coren: thats what I thought [20:57:13] io is back to normal [20:57:20] ^d: what is gerrit anyway? [20:57:26] Ryan_Lane: There are already people on both now, so maybe next time. Or I'll use -dev to test resizing once it's full of happies. :-) [20:57:28] I still need to fix virt11's disk space issue [20:57:32] Danny_B: a pain in the ass [20:57:39] +1 [20:57:52] <^d> This silly tool that Ryan picked because all the other tools were worse ;-) [20:57:58] yep [20:58:06] and ^d was kind enough to make better [20:58:28] * Betacommand grumbles about lack of support in non *nix OS's [20:58:38] Gerrit. It Sucks Less™ [20:58:56] Betacommand: Most IDEs do have git support. [20:59:06] speaking about gerrit, i never made it work properly :-/ i always have to clone instead of pull otherwise it wants me to submit bunch of changes since the last review submit [20:59:15] Coren: git yes, gerrit no [21:00:06] Coren, I agree wholeheartedly that Gerrit sucks less than libxml2 - but that's it;) [21:00:46] <^d> Danny_B: That's mainly because git-review is pretty braindead. [21:00:58] <^d> Gerrit itself shouldn't freak out on that. 
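When git-review threatens to push a pile of other people's commits, the usual culprit is a stale local view of the remote branch. A hedged sketch of the recovery, assuming git-review's default remote name 'gerrit':

    git fetch gerrit        # refresh the remote-tracking branch git-review compares against
    git review              # re-run; it should now offer only your own commit(s)
    # if strangers' commits are still listed, rebasing onto the tip usually clears it
    git rebase gerrit/master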
[21:01:31] i always rather cancel it and do the clone and my edits again [21:01:49] <^d> I hope you're at least cloning on disk rather than wasting the network io :) [21:01:49] rather than submitting crazy unwanted stuff [21:02:25] ^d: i clone just operations/mediawiki-config recently ;-) [21:03:47] Ryan_Lane: https://github.com/yuvipanda/invisible-unicorn the API for the labsprox [21:03:48] y [21:03:53] unfortunately none of the tutorials on mw.o deals with this issue [21:04:26] Danny_B: do a 'git fetch gerrit' whenever that happens. always fixes it for me. [21:04:29] and yes, git review sucks [21:05:03] YuviPanda: in which step? after commit before review? [21:05:16] (mind to put it in mw.o tutorial, pls?) [21:05:29] i know i am not the only one who was dealing with it [21:05:42] couple people mentioned it to me in ams hackathon [21:05:43] can you try it and let me know if it works? [21:06:05] whenever git review complains about 'omg so many commits, are you sure?', just quit it and do a 'git fetch gerrit' and then a git review again [21:06:14] no patch to submit atm, but i'll try to remember [21:06:36] thanks for the tip [21:06:37] Danny_B: If that makes you feel any better, I've been doing this for nearly 30 years and git/gerrit still throws me in for a loop now and then (although I did manage to internalize the typical workflow after a month or two) [21:07:08] haveing a better tool would make me feel better ;-) [21:07:18] but good try though ;-) [21:13:00] Hi everybody! Just a little quastion: I have a shell access to tools-login.wmflabs.org and I would like to export some queries into external files (CSV, for example). What's the best way to do that? [21:31:07] elgranscott: you mean you have a mysql shell ? [21:31:40] elgranscott: one way is like ... SELECT .. INTO OUTFILE '/tmp/foo.csv' FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n' [21:31:41] Coren: I think it's RFC bot [21:32:30] KittyReaper: The IP editing bot? Definitely legobot; it has a UA no other bot has. [21:33:32] hmm [21:33:46] (del/undel) 06:35, 6 September 2013 (diff | hist) . . (+14)‎ . . Help talk:Citation Style 1 ‎ (Adding RFC ID.) [21:33:55] Coulda sworn RFC bot did that... [21:34:39] mutante: thanks, but I meant if there is any tool installed to do that "easier" [21:35:35] https://en.wikipedia.org/w/index.php?title=Special%3AContributions&target=RFC+bot&namespace=1 [21:37:29] elgranscott: i don't know what's installed specifically. another option would be to put your queries in a file and mysql -X < file.sql . the -X gives you xml output [21:43:48] Are labs "vd*" volumes something like local storage? As in, are they appropriate for things like compilation? Glusterfs turns out to be too slow for that sorta thing. [21:44:13] awight: yes. use vdb [21:44:19] it's local storage [21:44:19] OK thank you [21:44:57] yw [21:45:15] awight: you might also want to use NFS for your instances. glusterfs is slow [21:45:47] hehe, slower than NFS is quite a slur [21:46:07] :D [21:46:22] I remember those days-- don't hard mount or you'll be sorry! [21:51:41] awight: No, actually, hard mounts is what makes it stable; soft mounts'll get cha. :-) [21:52:08] networking gear isn't what it used to be; you don't get significant packet loss on local nets anymore. 
:-) [21:52:21] hard mounts used to be a nice trick to freeze the OS [21:52:39] I'd rather get a failure than have a timeout that never returns [21:53:58] * awight chuckles at having reignited a 15-yr-dead flame war ;) [21:54:19] this war, this vendetta, this Sicilian thing must end! [21:57:20] KittyReaper: Yes I took over RFC bot. [21:57:26] Ah. [21:57:41] I even posted that on WP:BOTR.... [21:57:50] > Hrmph. Special:Contribs/10.4.1.125. I fixed it so that shouldn't happen again in the future. Legoktm (talk) 06:49, 6 September 2013 (UTC) [21:59:07] <^d> Ryan_Lane: Can I get +bcrat on wikitech? [22:02:15] <^d> Or Coren, maybe? [22:03:13] I can, but I'd rather Ryan okayed it first; there are side effects to most bits on wikitech. [22:04:05] <^d> I guess I could use it to give myself more permissions ;-) [22:04:14] <^d> But I'm really needing it for renaming users. [22:06:22] [bz] (8NEW - created by: 2DrTrigon, priority: 4Unprioritized - 6normal) [Bug 53867] Install C++ header boost-python - https://bugzilla.wikimedia.org/show_bug.cgi?id=53867 [22:11:13] [bz] (8NEW - created by: 2DrTrigon, priority: 4Unprioritized - 6normal) [Bug 53868] Install binary exiftool from package 'libimage-exiftool-perl' - https://bugzilla.wikimedia.org/show_bug.cgi?id=53868 [22:11:13] [bz] (8NEW - created by: 2DrTrigon, priority: 4Unprioritized - 6normal) [Bug 53867] Install C header libdmtx - https://bugzilla.wikimedia.org/show_bug.cgi?id=53867 [22:13:56] [bz] (8NEW - created by: 2DrTrigon, priority: 4Unprioritized - 6normal) [Bug 53869] Install binaries pdftotext and pdfimages from package 'libimage-exiftool-perl' - https://bugzilla.wikimedia.org/show_bug.cgi?id=53869 [22:15:24] [bz] (8NEW - created by: 2DrTrigon, priority: 4Unprioritized - 6normal) [Bug 53870] Install binary ffprobe from package 'libav-tools' - https://bugzilla.wikimedia.org/show_bug.cgi?id=53870 [22:17:15] [bz] (8NEW - created by: 2DrTrigon, priority: 4Unprioritized - 6normal) [Bug 53869] Install binaries pdftotext and pdfimages from package 'poppler-utils' - https://bugzilla.wikimedia.org/show_bug.cgi?id=53869 [22:17:16] [bz] (8NEW - created by: 2Peter Bena, priority: 4Lowest - 6trivial) [Bug 53704] Packages to be added to gerrit - https://bugzilla.wikimedia.org/show_bug.cgi?id=53704 [22:33:03] Coren, ping [22:56:38] Cyberpower678: What's up? [22:57:14] Is it typical for the database to go away after 5 minutes max of nothingness? [22:58:06] Coren, ^ [22:58:37] Cyberpower678: I'm not sure I understand your question. Do you mean loosing the connection when idle? [22:58:43] My bot scans the external links table, which is over 60000000 links big. [22:59:14] It processes that table in batches of 15000 and if it finds a blacklisted link, it gets added to the local database [22:59:27] Every new iteration my bot executes $dblocal = new Database( 'tools-db', $toolserver_username2, $toolserver_password2, 'cyberbot' ); [22:59:39] The database shouldn't disconnect you during a query, but your library might have a timeout? [22:59:50] to ensure the connection to the database remains active. [23:00:29] But my long script just crashed with: [23:01:04] PHP Fatal error: Uncaught exception 'DBError' with message 'Database Error: MySQL server has gone away (code 2006) INSERT INTO blacklisted_links (`url`,`page`) VALUES ('','')' in /data/project/cyberbot/Peachy/Plugins/database.php:155 [23:03:53] Cyberpower678: Err... this happened once, like about 3 hours ago? [23:04:08] I think. 
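The "MySQL server has gone away (code 2006)" failure being discussed here came from the announced tools-db restart, but long-running bots see the same error whenever a connection idles past the server's timeout. Inside the script the usual cure is to reconnect (or ping and reconnect) before each batch of queries; at the shell level, a blunt safety net for a checkpointing scan might look like this sketch, with hypothetical script names:

    #!/bin/bash
    # keep a checkpointing scan alive across database restarts and dropped connections
    while true; do
        php /data/project/cyberbot/linkscan.php && break   # clean exit: the scan finished
        echo "scan died (exit $?), restarting in 60s" >&2
        sleep 60   # back off, then resume from the script's last checkpoint
    done

On the grid, submitting the task with jstart (as in the quick sketch earlier) gives much the same restart behaviour without a wrapper.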
[23:04:41] Error thrown at 4:13 PM EDT [23:04:59] http://lists.wikimedia.org/pipermail/labs-l/2013-September/001598.html\ [23:05:02] http://lists.wikimedia.org/pipermail/labs-l/2013-September/001598.html [23:05:07] Cyberpower678: ^^ [23:05:32] This was a planned outage, announced 3 days ago. [23:05:59] I'm not subscribed. [23:06:30] Cyberpower678: You really, really should be. All outages, maintenance and changes are announced there. [23:07:10] It's a good thing, my script is designed to recover from a crash. [23:07:24] Yep. All the good ones are. [23:19:58] Coren, archive status? [23:20:08] Coren: https://en.wikipedia.org/wiki/Special:Contributions/10.4.1.125 looks like a labs bot running logged out? Do we have a good way to track the person down? [23:20:29] Jamesofur: Known to be legobot, and he is aware of the issue. [23:20:34] great thanks [23:20:43] Hi [23:20:45] Yes. [23:20:54] thanks much legoktm :) [23:21:00] np [23:21:06] Jamesofur: Tracking down the actual bot requires a bit of CU as well, it's not quite possible to know just from the server end which of several. [23:21:36] Or, you could look at the relevant bot pages where the operator stated that it was their bot :P [23:23:09] that was my standard plan if no one knew lego ;) clearly "whichever bot does MfD's " ;) [23:23:26] Coren: you can ignore Pb's chat ping :) [23:29:06] I think there's a tool out there that searches for identical edit summaries. [23:42:15] Coren, only 2 more scripts are running on the regular nodes.
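Returning to the CSV question asked at 21:13: both answers given in-channel work, and the client-side route looks roughly like the sketch below. The replica host name and the sample query are assumptions; credentials come from the auto-generated replica.my.cnf in the tool's home directory, and --defaults-file has to be the first option on the line:

    # batch mode prints plain tab-separated rows; redirect them to get a TSV file
    mysql --defaults-file="$HOME/replica.my.cnf" -h enwiki.labsdb enwiki_p --batch \
        -e "SELECT page_title, page_len FROM page WHERE page_namespace = 0 LIMIT 100" > pages.tsv

    # quick CSV conversion, fine as long as the fields contain no tabs or commas
    tr '\t' ',' < pages.tsv > pages.csv

    # or ask the client for XML output instead, as suggested above
    mysql --defaults-file="$HOME/replica.my.cnf" -h enwiki.labsdb enwiki_p -X \
        -e "SELECT page_title FROM page WHERE page_namespace = 0 LIMIT 100" > pages.xml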