[00:05:28] Platonides, is there even a certificate for that domain? [00:08:16] i have no idea. the only thing is that it's bothering me that some tools are only returning http links and not https, since i'm not logged in when i go to http links [00:08:44] to me it doesn't matter if it has a certificate, i just want my https links :P [00:19:31] Krenair, I think there's a certificate for *.wmflabs.org, but only on a few machines [00:19:39] Jhs, why would you be logged in there? [00:19:51] wmflabs is not in SUL [00:20:00] Platonides, not there. [00:20:06] We use a *.*.wmflabs self-signed cert for wikitech-test.wmflabs.org [00:20:21] but some tools return links to wikipedia pages, and when i click them i am not logged in [00:20:37] e.g. http://bots.wmflabs.org/~bene/items_by_cat.php?lang=no&cat=Kirker+i+S%C3%B8r-Tr%C3%B8ndelag&missing=on [00:20:39] It's invalid, but works if you really just want HTTPS without checking the identity of the host [00:20:45] there's https everywhere [00:21:05] Platonides, the browser addon? [00:21:08] yes [00:21:14] I can't suggest a better fix... [00:21:55] well, I guess mediawiki could give you an http cookie to make it redirect you to https [00:22:40] thanks, that works beautifully [00:22:44] but that wouldn't really give security [00:22:49] i knew of its existence before, but never actually tried it [00:22:50] oh, fine :) [00:23:04] guess i forgot to try it out. but i just did, and it works great :D [00:23:39] I'm glad it was so easy to 'fix' your problem :) [00:23:57] :) [00:24:53] good night! [01:48:18] Ryan_Lane: Krenair: wow, i never would have imagined someone was thinking of password hashes. :( [01:48:33] he couldn't possibly have read Ryan_Lane's earlier mail on that thread and thought that, right? [01:49:11] I can see how you would easily misunderstand from the word 'salt' alone [01:49:20] But the rest of the context... [01:50:17] i mean he's even subscribed to wikitech-l afaik [01:50:33] but who knows how much he reads. i certainly don't read everything on that list [10:12:21] Ryan_Lane: good morning, master [10:15:03] openid: _instead_ of insanely ignoring verification failures, we could think of supplying the curl option CURLOPT_CAINFO [10:16:16] and CURLOPT_CAPATH. see CURLOPT_CAINFO: "The name of a file holding one or more certificates to verify the peer with. This only makes sense when used in combination with CURLOPT_SSL_VERIFYPEER. Requires absolute path." [10:16:17] CURLOPT_CAPATH: "A directory that holds multiple CA certificates. Use this option alongside CURLOPT_SSL_VERIFYPEER." [10:16:30] http://www.php.net/manual/en/function.curl-setopt.php [10:17:07] as an optional key in the $wgOpenIDForceProviders array. [11:30:37] @notify addsleep [11:30:37] This user is now online in #huggle so I will let you know when they show some activity (talk etc) [16:17:25] addshore [16:17:41] bnr1 is down o.O [16:18:22] no it's not...
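[Editor's note: the CURLOPT_CAINFO/CURLOPT_CAPATH suggestion above would replace blanket ignoring of verification failures with explicit trust anchors. A minimal PHP sketch of the idea; the endpoint URL and CA paths are hypothetical, and the log only names the curl options, not how the OpenID extension would actually wire them in:]

```php
<?php
// Sketch: verify the OpenID provider's certificate against a known CA
// instead of disabling verification. Paths and URL are placeholders.
$ch = curl_init( 'https://openid.example.org/endpoint' );

curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
// Verification must stay enabled for CAINFO/CAPATH to have any effect.
curl_setopt( $ch, CURLOPT_SSL_VERIFYPEER, true );
curl_setopt( $ch, CURLOPT_SSL_VERIFYHOST, 2 );
// A file holding one or more CA certificates (absolute path required)...
curl_setopt( $ch, CURLOPT_CAINFO, '/etc/ssl/certs/labs-ca.pem' );
// ...or a directory of CA certificates prepared with c_rehash.
curl_setopt( $ch, CURLOPT_CAPATH, '/etc/ssl/certs' );

$response = curl_exec( $ch );
if ( $response === false ) {
    // A verification failure surfaces here instead of being ignored.
    error_log( 'OpenID request failed: ' . curl_error( $ch ) );
}
curl_close( $ch );
```

[Either option alone suffices: CURLOPT_CAINFO points at a single bundle file, CURLOPT_CAPATH at a hashed directory, and both are only consulted while CURLOPT_SSL_VERIFYPEER is on.]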
[16:21:18] @notify ceradon [16:21:18] I will notify you, when I see ceradon around here [16:21:23] @seen ceradon [16:21:23] petan: Last time I saw ceradon they were quitting the network with reason: Quit: http://www.kiwiirc.com/ - A hand crafted IRC client N/A at 3/17/2013 3:03:57 PM (01:17:25.5907590 ago) [16:24:35] !log bots petrb: killing hung addshore processes from 1 [16:24:39] Logged the message, Master [16:26:17] Coren is there a way to temporarily disable the queue for a certain host so that new jobs can't be submitted there but existing ones can continue running [16:30:08] Merlissimo: ^ [17:05:24] !log bots petrb: qdeploying postfix [17:05:29] Logged the message, Master [17:15:25] @notify ceradon [17:15:25] I will notify you, when I see ceradon around here [17:19:14] !log bots replaced exim4 and set up local delivery so that mail now works on the local system [17:19:16] Logged the message, Master [17:19:26] !log this will be very useful when debugging problems with SGE [17:19:26] this is not a valid project. [17:19:32] !log bots this will be very useful when debugging problems with SGE [17:19:34] Logged the message, Master [17:21:43] petan|wk: it's very likely that puppet is going to break what you did [17:21:56] also, direct mail almost always gets marked as spam [17:21:57] arrrgh [17:22:06] Ryan_Lane: but this is localhost mail [17:22:17] legal also doesn't want us storing mail [17:22:17] so that when the system sends a mail to root I can type # mail [17:22:19] to read it [17:22:41] what? how is reading system emails connected to legal? [17:22:54] these are mails from cron and daemons, not people [17:23:07] * Ryan_Lane nods [17:23:47] is it possible to override this puppet config? [17:23:53] not really [17:23:59] it's in our base classes [17:24:01] or replace exim4 with postfix globally - it's 20 times better [17:24:06] hahahahaha [17:24:09] dude. come on now [17:24:11] it's email [17:24:21] postfix is seriously better [17:24:25] no. it's not. [17:24:29] than exim? [17:24:31] they are MTAs [17:24:36] yes, both are [17:24:39] it doesn't matter what's being used [17:24:48] we could use sendmail and it wouldn't matter [17:25:07] ok so how am I supposed to read system mails when that's the only way of reading logs from SGE? [17:25:08] anyway, what's needed is a proper relay [17:25:20] sge produces only emails, nothing else [17:25:33] if it had a nice log I would be happy [17:25:48] and mark and paravoid have both mentioned they'd make one, when they got time [17:26:01] ok first step could be to disable that creepy alias of root [17:26:14] I hate when people from ops come here blaming us for spamming your inbox [17:26:18] it should never have been set up like that [17:26:36] almost no one on labs even knows it's redirected to your box [17:26:56] the issue is that it goes to our normal relay [17:27:02] yes, indeed [17:27:17] so, like I mentioned, the solution is to make a proper relay [17:27:18] * Ryan_Lane shrugs [17:27:33] I think the best solution would be to keep local inboxes - just as default ubuntu has so that 0 inboxes would get spammed [17:27:44] I don't want these emails to go to any real email [17:27:49] I want to keep them on that box [17:28:09] so that anyone with sudo can read them [17:29:24] getting 100000 mails from SGE is what no one wants [17:29:25] I'd prefer that as well, as otherwise you always have some lag between when the mail was sent and when it can be read. [17:30:01] (But any kind of relay is better than /dev/null.)
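[Editor's note: petan's question about temporarily disabling a host has a standard grid engine answer: disabling the queue instance on that host stops new jobs from being scheduled there while jobs already running continue. A sketch; "main.q" and "bnr1" are example names, not the actual labs configuration:]

```sh
# Disable the queue instance on one host: nothing new is scheduled
# there, but jobs already running continue untouched.
qmod -d 'main.q@bnr1'

# The state column shows a "d" flag while the instance is disabled.
qstat -f -q 'main.q@bnr1'

# Re-enable once the box has been sorted out.
qmod -e 'main.q@bnr1'
```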
[17:37:57] also, Merlissimo is there a way to limit total memory usage across all jobs - not per task [17:38:22] in SGE there is a way to limit memory per task but I need to drop the queue when memory usage on 1 box exceeds some value [17:38:38] total memory usage not of 1 task but of all tasks [17:39:51] or Coren [17:40:31] exactly what I was predicting happened - one box got OOM, I would like to sort it out gracefully instead of Coren's way (killing) [17:43:15] by somehow flagging that box as unusable for job submitting so that no more jobs get submitted there [17:43:27] until this problem is solved [17:45:54] petan|wk: Do you mean hard limits or the limits supplied by users? [17:46:32] I don't know the difference [17:46:50] what I need is that if X mb of ram is used the box is no longer used to launch new tasks [17:47:02] of total ram, not of ram per task [17:47:12] for example, one node has 8gb of ram [17:47:23] so that when 7.8 is used no more jobs go there [17:47:32] that's what I would like to set [17:47:49] but the only limits I found are per task [17:48:03] I need per node [17:49:56] I thought that SGE does this by itself. Toolserver has the resource "virtual_free", but I don't know how the admin setup is done (cf. https://wiki.toolserver.org/view/SGE#Mandatory_resources). [17:50:18] it apparently doesn't do that itself :/ [17:50:25] it only watches load [17:50:29] it doesn't care about ram [17:52:07] A quick google suggests that you need to define a consumable resource, and then specify its consumption per job. [17:52:27] !log bots petrb: somehow apache was installed on ibnr1 and no one logged it - next time log it and discuss it beforehand. deleting it [17:52:32] Logged the message, Master [17:53:17] petan|wk: local inboxes are a great way for us to run out of disk space [17:53:37] reading mail on a system sucks [17:53:50] almost no one actually does it [17:53:54] no one cleans it up [17:54:21] petan|wk: http://jeetworks.org/node/93 [17:54:47] Ryan_Lane: that's a problem for the sysadmins of these boxes - they should configure their system properly [17:54:53] hahaha [17:54:59] if there are no problems they will never have more than 1mb of mails per year [17:55:04] I don't expect people to do things "right" [17:55:08] I know better [17:55:28] scfc_de: thanks [17:55:33] and no. it's not a problem for them [17:55:35] it's a problem for me [17:55:37] petan|wk: apache was on ibnr1? O_o [17:55:44] addsleep: yes [17:55:53] if their instances all eat all of their disk space, we're fucked [17:56:55] local mail is a poor workaround for the problem [17:57:09] a proper relay. that's the correct solution [17:57:14] and petan|wk I tried to use complex values to restrict the spread of jobs on OGE but they didn't seem to do anything when I tried. [17:57:27] aha :/ [17:57:54] Ryan_Lane: but where do you want to relay that mail? [17:58:04] I don't want to receive millions of mails to my personal box [17:58:05] to projectadmins [17:58:11] then stop using SGE? [17:58:15] eh [17:58:18] tell that to coren :> [17:58:38] or make SGE not send you emails for everything? [17:59:26] SGE doesn't do that, BTW. [17:59:28] I sure as hell don't want millions of emails writing to the local disks [18:00:06] that would be such a huge waste of IO [18:01:12] I give SGE explicit flags (-m be) to send me mail [18:01:25] scfc_de: whatever we are using then, it does that. Instead of writing log files it sends an email for every log entry [18:01:58] No manual entry for qsub.
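[Editor's note: scfc_de's pointer is the right mechanism here. SGE schedules by load unless told otherwise, but a consumable host resource gives exactly the per-node budget petan is asking for. A sketch of that setup, assuming SGE's stock virtual_free attribute; the hostname and sizes are examples, not the actual labs configuration:]

```sh
# 1. Make virtual_free consumable in the complex configuration
#    (qconf -mc opens it in $EDITOR); the relevant line should read
#    roughly:  virtual_free  vf  MEMORY  <=  YES  YES  0  0
qconf -mc

# 2. Give each exec host a fixed budget, e.g. 7.8g of an 8g node, so
#    the scheduler stops placing jobs there once it is used up
#    ("bnr1" is an example hostname); inside the editor set:
#      complex_values  virtual_free=7.8g
qconf -me bnr1

# 3. Jobs then declare how much of the budget they consume; -m be
#    additionally mails the owner at begin and end, as mentioned above.
qsub -l virtual_free=500m -m be -M owner@example.org job.sh
```

[Because the resource is consumable, SGE subtracts each job's request from the host's complex_values figure and skips the host once the remainder is too small, independent of actual load.]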
[18:02:00] perfect [18:02:47] scfc_de: remember you asked me about local mail delivery - whether it's still broken [18:03:29] addsleep: how come your jobs were running on -1 [18:03:35] when there was no entry in qstat [18:03:53] looks like your jobs went out of control somehow [18:04:02] maybe a bug in sge? [18:04:03] :o [18:04:04] petan|wk: no idea... [18:04:21] saper: On Toolserver? [18:04:24] my only cron is the cron on gs, so unless you messed something up when you edited it ;p [18:04:25] + if you do ps -ef on bnr1 you will see a dozen jobs that aren't supposed to be running [18:04:34] scfc_de: yes... [18:04:34] addsleep: nope lol [18:04:41] addsleep: I was just commenting / uncommenting etc [18:04:55] Which host? DaBPunkt needs to restart aliasd then probably. [18:05:42] scfc_de: no, I am not sure it's broken at all.. didn't test [18:06:14] saper: Ah, you mean some JIRA issue with River? [18:07:25] saper: https://jira.toolserver.org/browse/TS-955 ? [18:10:03] addsleep: almost no task is on bnr1 and it's overloaded :/ [18:10:14] * addsleep goes to check [18:11:58] 109 processes... [18:12:37] oh wait [18:12:41] i was on bastion xD [18:32:03] scfc_de: updated with https://jira.toolserver.org/browse/TS-955?focusedCommentId=22948#comment-22948 [18:53:06] addsleep: did you fix it? log things [19:04:38] addsleep: ur too sleepy [19:30:27] !ping [19:30:27] pong [19:49:47] Ryan_Lane: I think I have found a small problem with your OpenID patch from two weeks ago. Can we talk? [22:36:48] Hi! [22:37:50] Can anybody help me find out how to start working with wikilabs? I guess the Bots project is the most appropriate for me [22:39:38] Hi DixonD [22:40:04] you can create an account at https://wikitech.wikimedia.org/wiki/Main_Page [22:40:28] you then need to be added to the bots project [22:40:35] and also generic access to labs [22:40:36] I have an account already [22:41:01] can you connect to bastion? [22:41:20] I have no idea( [22:41:28] that's probably a no :P [22:41:46] :) [22:41:52] what's your username? [22:42:02] DixonD [22:43:02] I think you need to make a shell request [22:44:26] !account [22:44:26] in order to get access to labs, please type !account-questions and ask Ryan, or someone who is in charge of creating accounts on labs [22:44:31] !account-questions [22:44:31] I need the following info from you: 1. Your preferred wiki user name. This will also be your git username, so if you'd prefer this to be your real name, then provide your real name. 2. Your preferred email address. 3. Your SVN account name, or your preferred shell account name, if you do not have SVN access. [22:44:42] this is the old system... [22:45:08] I see a ?Labslogbot creating requests, but it didn't make one for you [22:45:14] and I don't know how to force it [22:45:26] or find a document for it [22:45:41] you could create a page like those at https://wikitech.wikimedia.org/wiki/Category:Shell_Access_Requests manually [22:46:11] I guess outstanding requests will be checked by Ryan at some point [22:46:20] I'm afraid I don't have the karma to give you shell [22:46:49] Ok, I'll do it that way [22:46:52] thanks [22:47:39] this may be the form: https://wikitech.wikimedia.org/wiki/Special:FormEdit/Shell_Access_Request [22:48:17] the manual seemed to be https://wikitech.wikimedia.org/wiki/Help:Getting_Started [23:04:50] petan|wk: nope, had to go back to work :P
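[Editor's note: the runaway-job puzzle above - processes alive on a node with no matching qstat entry - can be narrowed down by comparing the scheduler's view with the process table. A sketch, assuming it is run on the affected node and that addshore's jobs are the suspects:]

```sh
# What SGE believes is currently running, for all users:
qstat -u '*' -s r

# What is actually running on this node for one user:
ps -fu addshore

# Anything in the second list with no corresponding job in the first
# has escaped the scheduler and must be cleaned up by hand, e.g.
# kill <pid> after double-checking the process is really orphaned.
```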