[00:00:59] [sfero] [/mnt] $ sudo -s [00:00:59] sudo: ldap_start_tls_s(): Can't contact LDAP server [00:00:59] sudo: unable to resolve host sfero [00:01:20] what instance is this? [00:02:27] Ryan_Lane: i-000004b3 [00:03:02] hm. dns is failing [00:03:04] * Ryan_Lane sighs [00:05:27] weird. dns isn't working at all on this instance [00:05:53] i assigned it an IP and a hostname earlier today (after you allocated one) [00:05:54] resolv.conf is empty [00:06:03] hah [00:06:10] / is full [00:06:46] ugh [00:06:47] tiny [00:06:53] you need to rebuild this instance [00:06:56] don't use tiny [00:07:05] I wonder how I can hide that flavor type [00:07:20] if I just delete it I think bad things will occur [00:09:46] *clears throat, intones presidentially* We shall rebuild. [00:11:54] hm. maybe it just marks it as deleted [00:12:33] did you delete it yet? [00:14:04] not yet, sorry [00:14:08] i'll do that now [00:14:12] wait a sec [00:14:19] k [00:14:44] well, seems I broke it [00:15:09] yep [00:15:11] sure did [00:16:40] ok. fixed [00:16:44] damn it [00:18:12] Ryan_Lane: dunno if you're in the office but apparently internet is "going down for the night"(!) [00:18:18] yeah [00:18:21] so i may disconnect at any moment [00:18:25] It needs its beauty sleep. [00:18:26] so, I'm going to stop fucking with this right now :) [00:18:34] would be lame to cause an outage then lose internet [00:18:36] it ran out of 0s and 1s [00:18:46] yeah fair enough, thanks for looking [00:19:02] your problem is directly related to / being full [00:19:14] resolv.conf was overwritten by dhclient [00:19:21] but it couldn't actually write to the file [00:19:25] so, no dns settings [00:19:34] which means you can't contact the dns server [00:19:37] or ldap [00:19:41] * RogueMadman smiles winningly at Ryan_Lane since he's here. [00:19:51] RogueMadman: howdy [00:20:06] with the internet going down i don't think i need dns anyway [00:20:18] Howdy. Would it be possible to create the cvresearch project for westand and myself, with 1 public IP? >.> [00:23:12] PROBLEM Free ram is now: WARNING on ipv6test1 i-00000282.pmtpa.wmflabs output: Warning: 17% free memory [00:25:32] RECOVERY Current Users is now: OK on aggregator2 i-000002c0.pmtpa.wmflabs output: USERS OK - 0 users currently logged in [00:26:02] RECOVERY Disk Space is now: OK on aggregator2 i-000002c0.pmtpa.wmflabs output: DISK OK [00:26:22] RECOVERY Current Load is now: OK on aggregator2 i-000002c0.pmtpa.wmflabs output: OK - load average: 0.27, 0.41, 0.47 [00:26:35] RogueMadman: I know Damianz recently wrote https://labsconsole.wikimedia.org/wiki/Help:Contents#Requesting_A_New_Project about that [00:26:42] PROBLEM Free ram is now: WARNING on aggregator2 i-000002c0.pmtpa.wmflabs output: Warning: 7% free memory [00:26:48] RogueMadman: try that out? [00:26:53] I did. [00:27:02] RECOVERY dpkg-check is now: OK on aggregator2 i-000002c0.pmtpa.wmflabs output: All packages OK [00:28:02] RECOVERY SSH is now: OK on aggregator2 i-000002c0.pmtpa.wmflabs output: SSH OK - OpenSSH_5.9p1 Debian-5ubuntu1 (protocol 2.0) [00:28:14] (If you click Outstanding Requests you should see it.) [00:28:23] ah -- sorry, didn't check that, just wandered in. [00:28:32] RECOVERY Total processes is now: OK on aggregator2 i-000002c0.pmtpa.wmflabs output: PROCS OK: 218 processes [00:28:43] that's negligent of me [00:28:53] Not at all. :) [00:30:03] RogueMadman: I think Damianz and petan and Ryan_Lane are probably busy right now fixing things but I could be wrong [00:30:21] Sure thing. Was just checking. 
:) [00:30:38] I'm happily occupied beating my Kerberos server with a stick in the meantime. [00:30:51] ha [00:30:53] RogueMadman: ah. I totally forgot about the project requests [00:30:56] sorry [00:31:27] No problem; it's new. :) [00:31:42] RECOVERY Free ram is now: OK on aggregator2 i-000002c0.pmtpa.wmflabs output: OK: 93% free memory [00:34:49] you have a request somewhere? [00:35:03] was it on the email list? [00:35:16] indeed it was [00:35:38] RogueMadman: can you tell me a little about this? [00:35:49] you say you're moving stuff from another service [00:35:50] Yeah, I sent the e-mail (because it was a holiday weekend) then I think Damianz decided there should be a better way to make requests than IRC/e-mail. [00:35:52] is it from toolserver? [00:35:54] Ryan_Lane: Yeah. [00:36:06] Ryan_Lane: I was being a little snarky. :p [00:36:08] you know we don't have replicated databases yet, right? :) [00:36:13] It doesn't need them. [00:36:16] ok. cool [00:36:25] It's just proving impossible to get Perl packages or anything installed over there. [00:36:36] on TS? [00:36:40] Indeed. [00:36:49] what's your user name in labs? [00:36:55] and the username of the other person? [00:36:59] Labs's architecture (sudo access, etc.) is going to be much more maintainable for us in the long run. [00:37:02] madman / westand [00:37:08] ah. as in the email :) [00:37:22] ^^ [00:38:04] I'm not sure if westand already has bastion access but I'm pretty sure he's uploaded his public key and everything. [00:40:31] I just added westand [00:40:37] he didn't have shell access [00:40:57] project is created [00:40:58] 10/11/2012 - 00:40:58 - Creating a project directory for cvresearch [00:40:58] 10/11/2012 - 00:40:58 - Created a home directory for laner in project(s): cvresearch [00:41:16] remember to avoid using the home directories :) [00:41:42] also, don't add your public IP to the instance until you are ready for the public to access it [00:42:00] Change on 12mediawiki a page Developer access was modified, changed by Sharihareswara (WMF) link https://www.mediawiki.org/w/index.php?diff=592404 edit summary: /* User:Arcane21 */ [00:42:16] Ryan_Lane: Will do. Many thanks. :D [00:43:16] Change on 12mediawiki a page Developer access was modified, changed by Sharihareswara (WMF) link https://www.mediawiki.org/w/index.php?diff=592405 edit summary: /* User:Wtmitchell */ [00:44:24] ah. this project request form needs to be modified [00:44:47] it creates pages based on the requestor name [00:44:55] it should be based on the project name [00:44:57] Yeah, I noticed that. xD [00:45:57] 10/11/2012 - 00:45:57 - User laner may have been modified in LDAP or locally, updating key in project(s): cvresearch [00:45:57] oh [00:46:01] it's already been fixed [00:46:26] Change on 12mediawiki a page Developer access was modified, changed by Sharihareswara (WMF) link https://www.mediawiki.org/w/index.php?diff=592406 edit summary: moved 2 that are done [00:48:19] I don't seem to be a member or admin of the project; propagation thing? 
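
Ryan's diagnosis earlier in the log — a full root filesystem stops dhclient from rewriting /etc/resolv.conf, so the instance loses DNS and with it LDAP — can be sanity-checked with a few lines of Python. This is a minimal sketch for illustration only, not a script that ships on Labs instances; the paths are the standard Linux ones and the 50 MB threshold is an arbitrary assumption.

#!/usr/bin/env python
# Quick check mirroring the diagnosis above: a full / keeps dhclient from
# writing /etc/resolv.conf, which then breaks DNS and LDAP lookups.
import os

def root_disk_usage(path='/'):
    st = os.statvfs(path)
    total = st.f_blocks * st.f_frsize
    free = st.f_bavail * st.f_frsize
    return total, free

def main():
    total, free = root_disk_usage('/')
    print('/ free: %d MB of %d MB' % (free // 2**20, total // 2**20))
    if free < 50 * 2**20:
        print('WARNING: / is nearly full; dhclient may fail to rewrite /etc/resolv.conf')
    try:
        with open('/etc/resolv.conf') as f:
            contents = f.read().strip()
    except IOError:
        contents = ''
    if 'nameserver' not in contents:
        print('WARNING: /etc/resolv.conf has no nameserver entries; DNS (and LDAP) will fail')

if __name__ == '__main__':
    main()
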
[01:06:52] PROBLEM Total processes is now: WARNING on bots-salebot i-00000457.pmtpa.wmflabs output: PROCS WARNING: 173 processes [01:11:52] RECOVERY Total processes is now: OK on bots-salebot i-00000457.pmtpa.wmflabs output: PROCS OK: 94 processes [01:26:12] PROBLEM Disk Space is now: CRITICAL on hume i-000003cc.pmtpa.wmflabs output: DISK CRITICAL - free space: / 37 MB (2% inode=45%): [01:31:13] PROBLEM Disk Space is now: WARNING on hume i-000003cc.pmtpa.wmflabs output: DISK WARNING - free space: / 50 MB (3% inode=45%): [02:38:43] RECOVERY Free ram is now: OK on bots-sql2 i-000000af.pmtpa.wmflabs output: OK: 22% free memory [03:01:42] PROBLEM Free ram is now: WARNING on bots-sql2 i-000000af.pmtpa.wmflabs output: Warning: 15% free memory [03:48:52] PROBLEM Current Load is now: CRITICAL on build2 i-000004b7.pmtpa.wmflabs output: Connection refused by host [03:49:32] PROBLEM Current Users is now: CRITICAL on build2 i-000004b7.pmtpa.wmflabs output: Connection refused by host [03:53:53] RECOVERY Current Load is now: OK on build2 i-000004b7.pmtpa.wmflabs output: OK - load average: 0.14, 0.56, 0.41 [03:54:33] RECOVERY Current Users is now: OK on build2 i-000004b7.pmtpa.wmflabs output: USERS OK - 0 users currently logged in [04:10:12] PROBLEM Free ram is now: WARNING on orgcharts-dev i-0000018f.pmtpa.wmflabs output: Warning: 14% free memory [04:30:12] PROBLEM Free ram is now: CRITICAL on orgcharts-dev i-0000018f.pmtpa.wmflabs output: Critical: 4% free memory [04:33:32] PROBLEM Total processes is now: CRITICAL on aggregator-test1 i-000002bf.pmtpa.wmflabs output: PROCS CRITICAL: 232 processes [04:34:42] RECOVERY Free ram is now: OK on aggregator-test1 i-000002bf.pmtpa.wmflabs output: OK: 92% free memory [04:40:12] RECOVERY Free ram is now: OK on orgcharts-dev i-0000018f.pmtpa.wmflabs output: OK: 95% free memory [04:49:13] PROBLEM Free ram is now: WARNING on ipv6test1 i-00000282.pmtpa.wmflabs output: Warning: 18% free memory [05:26:46] I'm not able to ssh into labs right now [05:26:48] ssh: Could not resolve hostname bastion.wmflabs.org: nodename nor servname provided, or not known [05:27:03] Though I can access bots.wmflabs.org from my browser [05:29:24] And I just got in [05:29:26] Yay :D [05:31:30] heh, I got that error one hour ago [05:59:13] RECOVERY Free ram is now: OK on ipv6test1 i-00000282.pmtpa.wmflabs output: OK: 21% free memory [06:07:13] PROBLEM Free ram is now: WARNING on ipv6test1 i-00000282.pmtpa.wmflabs output: Warning: 18% free memory [06:33:52] PROBLEM dpkg-check is now: CRITICAL on sube i-000003d0.pmtpa.wmflabs output: DPKG CRITICAL dpkg reports broken packages [06:37:53] PROBLEM Disk Space is now: WARNING on echo-xmpp i-00000351.pmtpa.wmflabs output: DISK WARNING - free space: / 556 MB (5% inode=91%): [06:41:12] PROBLEM Disk Space is now: WARNING on testing-arky i-0000033b.pmtpa.wmflabs output: DISK WARNING - free space: / 74 MB (5% inode=51%): [06:42:12] RECOVERY Free ram is now: OK on ipv6test1 i-00000282.pmtpa.wmflabs output: OK: 23% free memory [06:47:43] PROBLEM Disk Space is now: CRITICAL on mw1-21beta-lucid i-00000416.pmtpa.wmflabs output: DISK CRITICAL - free space: / 6 MB (0% inode=51%): [06:47:53] RECOVERY Disk Space is now: OK on echo-xmpp i-00000351.pmtpa.wmflabs output: DISK OK [06:56:13] PROBLEM Disk Space is now: CRITICAL on hume i-000003cc.pmtpa.wmflabs output: DISK CRITICAL - free space: / 0 MB (0% inode=44%): [06:56:13] PROBLEM Disk Space is now: WARNING on conventionextension-trial i-000003bf.pmtpa.wmflabs output: DISK WARNING - free space: / 73 MB (5% 
inode=51%): [07:31:26] aeraeraerazr [07:31:32] so labs is dead for me thanks to DNS :-] [07:33:06] labs-ns1 apparently does not have all the DNS entries :/ [07:50:12] PROBLEM Free ram is now: WARNING on ipv6test1 i-00000282.pmtpa.wmflabs output: Warning: 18% free memory [07:55:13] RECOVERY Free ram is now: OK on ipv6test1 i-00000282.pmtpa.wmflabs output: OK: 22% free memory [09:11:51] grrr [09:14:08] Yeah dns just blipped again :( [09:17:05] [bugzilla]: 3[Bug 40825] Labs DNS configuration only points at a single LDAP server (4Ryan Lane) https://bugzilla.wikimedia.org/show_bug.cgi?id=40825 [09:22:51] [bugzilla]: 3[Bug 32163] Please list the fingerprint(s) of the server (4T. Gries) https://bugzilla.wikimedia.org/show_bug.cgi?id=32163 [09:23:52] [bugzilla]: 3[Bug 34685] Enable irc feed for labsconsole.wikimedia.org site (4Peter Bena) https://bugzilla.wikimedia.org/show_bug.cgi?id=34685 [09:26:53] [bugzilla]: 3[Bug 36422] easily reload all apaches (4Antoine "hashar" Musso) https://bugzilla.wikimedia.org/show_bug.cgi?id=36422 [09:31:19] [bugzilla]: 3[Bug 36511] e-mail sending from labs (4Peter Bena) https://bugzilla.wikimedia.org/show_bug.cgi?id=36511 [09:35:16] http://bots.wmflabs.org doesn't resolve [09:35:24] Ryan_Lane ^ [09:35:24] dns crapness again [09:35:30] since when [09:36:12] it's working again [09:36:16] :o [09:36:16] beh, it happens occasionally only [09:36:29] it doesn't work for me stil [09:36:31] still [09:36:34] central EU [09:36:45] it's a random on/off. [09:36:51] o.o [09:36:53] meh [09:36:55] Basically the authoritative servers will randomly stop responding [09:37:00] Because they lose connection to ldap [09:37:29] More annoying because they don't go offline so the recursive servers cache the result as no entry [09:37:29] wait a moment, I thought labs got its own DNS server, or not? [09:38:05] so is the problem with the subdomain or wmflabs.org? [09:38:22] wmflabs.org [09:38:23] they do [09:38:26] wmflabs.org resolves for me [09:38:28] labs-ns1.wikimedia.org and labs-ns0.wikimedia.org [09:38:32] bots.wmflabs.org doesn't [09:38:46] What dns resolver are you using? [09:38:47] so it's a problem in our dns, not a 3rd-party service [09:39:01] you mean command or primary server? [09:39:10] what your system is using for dns [09:39:10] probably google's atm [09:39:32] google's responds for me [09:39:44] meh it works now [09:39:45] Basically when the query fails (i.e. bots in A) recursors cache it as no A record because it's not a servfail as far as they know [09:39:47] that sucks [09:39:54] So you get weird cache issues every time it stops responding [09:40:04] likely [09:51:56] [bugzilla]: 3[Bug 37807] nscd negative cache is way too long (4Ryan Lane) https://bugzilla.wikimedia.org/show_bug.cgi?id=37807 [09:54:19] [bugzilla]: 3[Bug 39781] Add a new openstack network in pmtpa (4Ryan Lane) https://bugzilla.wikimedia.org/show_bug.cgi?id=39781 [09:54:49] petan: looks fun ;-D [09:54:57] [bugzilla]: 3[Bug 39781] Add a new openstack network in pmtpa (4Ryan Lane) https://bugzilla.wikimedia.org/show_bug.cgi?id=39781 [09:58:53] [bugzilla]: 3[Bug 40943] Fix the instance types (4Damian Z) https://bugzilla.wikimedia.org/show_bug.cgi?id=40943 [10:01:01] Damian Z: for bug 40943, the flavours on Special:NovaInstance are wrong [10:01:15] (I meant Damianz) [10:01:33] but the final instance is correct with 4/8 gig ram [10:01:55] [bugzilla]: 3[Bug 40945] Sudo policies don't work for new instances (4Damian Z) https://bugzilla.wikimedia.org/show_bug.cgi?id=40945 [10:01:57] What's the disk size on that?
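
The failure mode Damianz describes above — an authoritative server that stays reachable but answers nothing while its LDAP backend is gone, so recursive resolvers cache the empty result — can be spotted by querying both authoritative servers directly and comparing with the default resolver. A rough sketch, assuming dig is installed; the server and record names are the ones mentioned in the conversation, and this is not a check Labs actually runs.

#!/usr/bin/env python
# Compare the local resolver's answer with each authoritative server's.
import socket
import subprocess

NAME = 'bots.wmflabs.org'
AUTH_SERVERS = ['labs-ns0.wikimedia.org', 'labs-ns1.wikimedia.org']

# What the local (possibly caching) resolver thinks.
try:
    print('default resolver: %s' % socket.gethostbyname(NAME))
except socket.gaierror as err:
    print('default resolver: %s' % err)

# What each authoritative server says right now.
for server in AUTH_SERVERS:
    cmd = ['dig', '@' + server, NAME, 'A', '+short', '+time=3', '+tries=1']
    try:
        answer = subprocess.check_output(cmd).decode().strip()
    except (subprocess.CalledProcessError, OSError):
        answer = ''
    print('%s: %s' % (server, answer or '(no answer)'))
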
[10:02:15] its also different [10:02:21] awesome [10:02:24] like m1.large is 80G on /mnt [10:02:38] Hmm so just the labels are wrong then [10:02:39] (just recall the old formats, I think its a bug with OpenStackManager itself) [10:02:46] yeah [10:03:47] Cool, I can sort bots when I get home then [10:03:56] * Damianz gives up spamming people via bz for the min [10:03:56] [bugzilla]: 3[Bug 40943] Fix the instance types (4Damian Z) https://bugzilla.wikimedia.org/show_bug.cgi?id=40943 [10:05:32] Hmm, there is an entry on SkipSkins for default => 'wikimania', why? [10:06:12] Maybe you'd like to be skinless! [10:06:22] On another note I should go checkout of the hotel before they yell at me [10:06:45] lol [10:07:35] Might go work in the bar for a while before heading into town for lunch on a slow trip towards the airport [10:19:32] PROBLEM Current Users is now: CRITICAL on lynwood i-000004b8.pmtpa.wmflabs output: Connection refused by host [10:20:14] PROBLEM Disk Space is now: CRITICAL on lynwood i-000004b8.pmtpa.wmflabs output: Connection refused by host [10:20:54] PROBLEM Current Load is now: CRITICAL on lynwood i-000004b8.pmtpa.wmflabs output: Connection refused by host [10:20:54] PROBLEM Free ram is now: CRITICAL on lynwood i-000004b8.pmtpa.wmflabs output: Connection refused by host [10:22:24] PROBLEM Total processes is now: CRITICAL on lynwood i-000004b8.pmtpa.wmflabs output: Connection refused by host [10:22:54] PROBLEM dpkg-check is now: CRITICAL on lynwood i-000004b8.pmtpa.wmflabs output: Connection refused by host [10:35:58] 10/11/2012 - 10:35:57 - User dereckson may have been modified in LDAP or locally, updating key in project(s): commons-dev [10:36:13] 10/11/2012 - 10:36:13 - Updating keys for dereckson at /export/keys/dereckson [10:38:53] PROBLEM Current Load is now: CRITICAL on lynwood i-000004b9.pmtpa.wmflabs output: Connection refused by host [10:39:33] PROBLEM Current Users is now: CRITICAL on lynwood i-000004b9.pmtpa.wmflabs output: Connection refused by host [10:40:12] PROBLEM Disk Space is now: CRITICAL on lynwood i-000004b9.pmtpa.wmflabs output: Connection refused by host [10:40:52] PROBLEM Free ram is now: CRITICAL on lynwood i-000004b9.pmtpa.wmflabs output: Connection refused by host [10:42:22] PROBLEM Total processes is now: CRITICAL on lynwood i-000004b9.pmtpa.wmflabs output: Connection refused by host [10:43:09] Hello [10:43:11] status: dns problems <-- oh okay [10:43:12] PROBLEM dpkg-check is now: CRITICAL on lynwood i-000004b9.pmtpa.wmflabs output: Connection refused by host [10:45:52] RECOVERY Free ram is now: OK on lynwood i-000004b9.pmtpa.wmflabs output: OK: 848% free memory [10:47:22] RECOVERY Total processes is now: OK on lynwood i-000004b9.pmtpa.wmflabs output: PROCS OK: 84 processes [10:48:12] RECOVERY dpkg-check is now: OK on lynwood i-000004b9.pmtpa.wmflabs output: All packages OK [10:48:52] RECOVERY Current Load is now: OK on lynwood i-000004b9.pmtpa.wmflabs output: OK - load average: 0.09, 0.51, 0.41 [10:49:32] RECOVERY Current Users is now: OK on lynwood i-000004b9.pmtpa.wmflabs output: USERS OK - 1 users currently logged in [10:50:13] RECOVERY Disk Space is now: OK on lynwood i-000004b9.pmtpa.wmflabs output: DISK OK [11:35:07] @seen Ryan_Lane [11:35:07] petan: Ryan_Lane is in here, right now [11:57:15] petan: any step-by-step guide for setting up a mediawiki on labs and publish it to the public? 
[12:25:56] liangent no [12:26:03] liangent you are welcome to create one :D [12:26:04] !help [12:26:04] !documentation for labs !wm-bot for bot [12:26:09] !documentation [12:26:09] https://labsconsole.wikimedia.org/wiki/Help:Contents [12:26:15] !docs [12:26:15] View complete documentation at https://labsconsole.wikimedia.org/wiki/Help:Contents [12:26:18] :) [12:26:25] !do [12:26:25] There are multiple keys, refine your input: docs, documentation, domain, [12:31:36] hi, i can't seem to login to bastion now [12:32:29] it works now [12:35:57] aude: if can't login = timeout, please consider to add in your /etc/hosts file a temporary entry with 208.80.153.207 bastion.wmflabs.org [12:37:46] Dereckson: thanks [12:38:57] You're welcome. [12:39:14] !log wikidata-dev add repoBase and repoApi test client settings [12:39:16] Logged the message, Master [12:39:17] !bastion [12:39:17] http://en.wikipedia.org/wiki/Bastion_host; lab's specific bastion host is: bastion.wmflabs.org which should resolve to 208.80.153.194; see !access [12:39:20] there is IP [12:39:21] :P [12:39:27] good :) [12:39:34] I think [12:40:00] there was an issue with the DNS server this morning [12:40:03] probably still the cases [12:40:04] case [12:40:08] hey hashar [12:40:14] you had some comments to my feed? :D [12:40:18] IIRC labs-ns1.wikimedia.org is missing some (all?) entries [12:40:42] no idea if we have a bug opened for it yet [12:41:19] petan: no idea for now. I lack time to properly look at it. Is that a C# / written from scratch bot ? [12:41:40] plus I would avoid polling an RSS feed to get bug notification [12:41:56] hashar: good to know [12:42:00] well, it's in c# dunno what you mean by from scratch [12:42:14] it's just another module I wrote for wm-bot [12:42:21] ohh [12:42:29] so at least it reuses some code from wm-bot ;] [12:42:37] it's wm-bot itself :D [12:42:43] it doesn't use some code, it uses all code [12:43:04] how does it matter [12:46:23] well that means you are using code that works :-D [12:47:02] what is "polling RSS feed" [12:47:28] you mean poluting? :P [12:52:00] err [12:52:02] polling [12:52:03] to poll [12:52:13] aka getting something repeatly [12:52:20] repeatedly [12:52:22] it gets is every 20 seconds [12:52:37] yeah that it is what polling is ;-] [12:52:46] it retrieves only list of bugs modified in last 1 hour, so it's pretty quick [12:52:48] query [12:55:16] Putty says: bastion.wmflabs.org .. host does not exist [12:55:54] Beetstra: yeah there is a DNS server issue ongoing [12:56:03] OK, so it is not me [12:56:20] you can workaround it by adding an entry in your /etc/hosts file : 208.80.153.207 bastion.wmflabs.org [12:56:32] windows has a similar file hidden somewhere under %WINDIR% [12:56:37] (I'm in a strange country - some things don't go through the countries firewall .. but that is not it then) [12:56:59] should be : %SystemRoot%\system32\drivers\etc\hosts[5] [12:57:04] ERR: %SystemRoot%\system32\drivers\etc\hosts [12:58:26] or just put an alternative bastion profile in putty - using the IP :-) [12:58:30] thanks! [13:02:12] so I still don't understand what the "Configure instance" interface mean [13:02:31] what will happen if I select a "software package"? [13:02:38] it'll be installed? 
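
The workaround Dereckson suggests above is just a temporary hosts-file entry while the Labs name servers are flaky. A small sketch of the check plus the line to add; the 208.80.153.207 address is the one quoted in channel (the !bastion keyword above says 208.80.153.194, so confirm the current address before pinning anything), and nothing here edits any file for you.

#!/usr/bin/env python
# Check whether bastion resolves; if not, print the temporary hosts entry.
import socket

HOST = 'bastion.wmflabs.org'
FALLBACK_IP = '208.80.153.207'  # the address quoted in channel; verify first

try:
    print('%s resolves to %s' % (HOST, socket.gethostbyname(HOST)))
except socket.gaierror:
    print('%s does not resolve; add this line to /etc/hosts' % HOST)
    print('(or %SystemRoot%\\system32\\drivers\\etc\\hosts on Windows):')
    print('%s %s' % (FALLBACK_IP, HOST))
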
[13:03:25] [bugzilla]: 3[Bug 40947] labs-ns1.wikimedia.org NS server (4Antoine "hashar" Musso) https://bugzilla.wikimedia.org/show_bug.cgi?id=40947 [13:03:41] good bot :-] [13:04:04] updated topic [13:07:33] PROBLEM Total processes is now: WARNING on wikistats-01 i-00000042.pmtpa.wmflabs output: PROCS WARNING: 190 processes [13:12:32] RECOVERY Total processes is now: OK on wikistats-01 i-00000042.pmtpa.wmflabs output: PROCS OK: 97 processes [13:16:40] [bugzilla]: 3[Bug 40947] labs-ns1.wikimedia.org NS server (4Antoine "hashar" Musso) https://bugzilla.wikimedia.org/show_bug.cgi?id=40947 [13:17:36] petan: it probably doesn't need to print [bugzilla] when that's obvious from the content, and I don't know how it builds the link, but it could be made shorter (...org/1234) [13:17:55] Danny_B|backup RSS? [13:17:59] lol fail bot [13:18:08] Hydriz who's fail [13:18:20] no, did you see what just happened? [13:18:34] the user is wrong [13:18:41] it's not [13:18:49] it's $author [13:18:50] bug 40947 [13:18:56] the person who created ticket... [13:18:58] but why author? [13:19:04] because RSS gives that [13:19:16] zzz [13:19:18] check the RSS feed from bugzilla, that is what sucks, not my bot [13:19:28] don't use the feed [13:19:29] it just re-posts what it gets [13:19:32] use the email [13:19:38] Danny_B|backup lol [13:19:40] as the bot on #mediawiki does [13:19:48] how do I tell the email to give me only bugs for wikimedia-labs? [13:20:01] I would need to create an email account for every feed I would like to use [13:20:06] isn't that in the mail? [13:20:32] in #mediawiki the bot spams the channel with 100% of all bugs, no matter if they have anything in common with mediawiki [13:20:43] that sucks way more [13:21:08] in this feed you can specify anything you want directly in the search form [13:21:10] I got changes pending for that ;-D [13:21:13] still pending review though [13:21:27] heh [13:21:31] though the perl script parses the bugzilla notification email headers [13:21:34] that's why I prefer to bypass gerrit [13:21:36] which is not really user friendly hehe [13:22:28] err: Could not retrieve catalog from remote server: Error 400 on SERVER: Duplicate definition: Package[apache2] is already defined in file /etc/puppet/manifests/webserver.pp at line 91; cannot redefine at /etc/puppet/manifests/webserver.pp:42 on node i-000001cc.pmtpa.wmflabs [13:22:31] what does this mean? [13:22:45] liangent that means you configured puppet wrong [13:22:47] you selected some apache config twice [13:23:01] (which you shouldn't use in puppet) [13:23:48] Danny_B|backup regarding [bugzilla] it can be basically anything, I couldn't think of anything better right now [13:23:58] it's the name of the RSS feed [13:24:15] @rss+ news http://feeds.bbci.co.uk/news/video_and_audio/world/rss.xml [13:24:15] Item was inserted to feed [13:24:16] :P [13:24:19] Unable to parse the feed from http://feeds.bbci.co.uk/news/video_and_audio/world/rss.xml this url is probably not a valid rss, the feed will be disabled, until you re-enable it by typing @rss+ news [13:24:22] ok let me try to check them one by one [13:24:22] damn [13:25:14] so  webserver::apache2  is already included in webserver::php5 ?
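
For context on the feed discussion above: wm-bot simply polls the Bugzilla feed on a short interval and re-posts entries it has not announced yet. wm-bot itself is C#; the sketch below is a Python illustration using the third-party feedparser package, and the feed URL and query parameters are made up for the example rather than taken from the real configuration.

#!/usr/bin/env python
# Poll a Bugzilla feed of recently changed bugs and announce new changes,
# roughly matching the 20-second interval and 1-hour window described above.
import time
import feedparser

FEED_URL = ('https://bugzilla.wikimedia.org/buglist.cgi?'
            'product=Wikimedia%20Labs&chfieldfrom=-1h&ctype=atom')  # illustrative query
POLL_INTERVAL = 20  # seconds

seen = {}
while True:
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        key = entry.get('id', entry.get('link'))
        stamp = entry.get('updated', '')
        if seen.get(key) != stamp:  # new bug, or a bug changed since last poll
            seen[key] = stamp
            print('[bugzilla] %s - %s' % (entry.get('title', ''), entry.get('link', '')))
    time.sleep(POLL_INTERVAL)
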
[13:25:53] something like that, I forgot which one duplicates [13:27:11] just configure bugzilla to return fields you need in the feed [13:27:29] atm it does not have a sense to return the username as it is always the same [13:28:22] I don't think it can be configured or I don't know how [13:29:30] templates [13:29:34] afaik [13:35:04] is there a list of 'packages' with description text [13:35:27] or I would simply use apt. that seems clearer [13:36:52] PROBLEM Free ram is now: WARNING on dumps-bot3 i-000003ef.pmtpa.wmflabs output: Warning: 19% free memory [13:39:41] is there any proxy available now? [13:39:52] based on discussion of [Labs-l] saving up some public IP by using a common proxy? [14:24:27] @configure style-rss=[bz] (8$bugzilla_status - created by: 2$author, priority: 4$bugzilla_priority - 6$bugzilla_severity) $title - $link [14:24:28] Value [bz] (8$bugzilla_status - created by: 2$author, priority: 4$bugzilla_priority - 6$bugzilla_severity) $title - $link was stored into style-rss to config [14:24:43] :o [14:27:07] i don't see a reason to display the bug reporter since it's always the same person thus its informational value is nearby zero [14:27:21] unless it's a new bug? :P [14:27:35] $title is also always same as well as $link [14:29:13] but they both tell you about what the post is, reporter name does not [14:29:29] but it tell you who reported the bug [14:31:11] which is in fact not important for monitoring the bugs [14:31:16] and their updates [14:31:34] besides it will often hilight such users [14:36:35] !log wikidata-dev wikidata-dev-2: switched from Wikidata branch to master branch, updated public demo repo, English and Hebrew clients [14:36:37] Logged the message, Master [14:46:30] !log wikidata-dev wikidata-dev3: forgot to log yesterday: also switched branch from Wikidata to master. [14:46:32] Logged the message, Master [14:47:23] !log wikidata-dev wikidata-dev-3: added html validation to crontab [14:47:24] Logged the message, Master [15:05:33] PROBLEM Total processes is now: WARNING on wikistats-01 i-00000042.pmtpa.wmflabs output: PROCS WARNING: 190 processes [15:09:13] PROBLEM Free ram is now: WARNING on ipv6test1 i-00000282.pmtpa.wmflabs output: Warning: 18% free memory [15:14:12] RECOVERY Free ram is now: OK on ipv6test1 i-00000282.pmtpa.wmflabs output: OK: 26% free memory [15:17:50] [bz] (8 - created by: 2, priority: 4 - 6) VIDEO: Belgium's Ghent Altarpiece restored - http://www.bbc.co.uk/news/world-europe-19906745#sa-ns_mchannel=rss&ns_source=PublicRSS20-sa [15:25:32] RECOVERY Total processes is now: OK on wikistats-01 i-00000042.pmtpa.wmflabs output: PROCS OK: 97 processes [15:32:08] lol [15:32:16] @rss- news [15:32:16] Item was removed from db [15:32:47] parsing bbc as bugzilla isn't fun [16:13:24] hm… is deployment broken or did it move? [16:13:57] oh! Apparently asking that question was enough to make it start working again. [16:18:43] !log wikidata-dev wikidata-dev-3: allowed wikidata system user to log to a file [16:18:47] Logged the message, Master [16:19:22] labs-morebots: Can't you just call me "Mistress"? ;) [16:24:26] The bot calls Leslie 'Mistress' so it must be possible to configure. 
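
The @configure style-rss line at the start of this exchange is just a per-feed output template with $-placeholders (the stray digits after the brackets are IRC colour codes whose control bytes were stripped from this log). A minimal Python sketch of the substitution, filled with values copied from one of the [bz] lines later in the log; wm-bot's real implementation is C# and may differ.

#!/usr/bin/env python
# Fill the per-feed style template with fields from one feed entry.
from string import Template

STYLE = ('[bz] (8$bugzilla_status - created by: 2$author, '
         'priority: 4$bugzilla_priority - 6$bugzilla_severity) $title - $link')

entry = {
    'bugzilla_status': 'NEW',
    'author': 'Damian Z',
    'bugzilla_priority': 'Unprioritized',
    'bugzilla_severity': 'normal',
    'title': '[Bug 40943] Fix the instance types',
    'link': 'https://bugzilla.wikimedia.org/show_bug.cgi?id=40943',
}

print(Template(STYLE).safe_substitute(entry))
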
[17:08:52] PROBLEM Current Load is now: CRITICAL on vumi-metrics i-000004ba.pmtpa.wmflabs output: Connection refused by host [17:09:32] PROBLEM Current Users is now: CRITICAL on vumi-metrics i-000004ba.pmtpa.wmflabs output: Connection refused by host [17:10:12] PROBLEM Disk Space is now: CRITICAL on vumi-metrics i-000004ba.pmtpa.wmflabs output: Connection refused by host [17:11:02] PROBLEM Free ram is now: CRITICAL on vumi-metrics i-000004ba.pmtpa.wmflabs output: Connection refused by host [17:12:12] PROBLEM Free ram is now: WARNING on ipv6test1 i-00000282.pmtpa.wmflabs output: Warning: 18% free memory [17:12:22] PROBLEM Total processes is now: CRITICAL on vumi-metrics i-000004ba.pmtpa.wmflabs output: Connection refused by host [17:13:52] RECOVERY Current Load is now: OK on vumi-metrics i-000004ba.pmtpa.wmflabs output: OK - load average: 0.65, 0.90, 0.53 [17:14:32] RECOVERY Current Users is now: OK on vumi-metrics i-000004ba.pmtpa.wmflabs output: USERS OK - 0 users currently logged in [17:15:12] RECOVERY Disk Space is now: OK on vumi-metrics i-000004ba.pmtpa.wmflabs output: DISK OK [17:16:02] RECOVERY Free ram is now: OK on vumi-metrics i-000004ba.pmtpa.wmflabs output: OK: 4950% free memory [17:17:22] RECOVERY Total processes is now: OK on vumi-metrics i-000004ba.pmtpa.wmflabs output: PROCS OK: 120 processes [17:36:53] PROBLEM Free ram is now: WARNING on dumps-bot1 i-000003ed.pmtpa.wmflabs output: Warning: 19% free memory [17:59:22] PROBLEM Disk Space is now: CRITICAL on labs-nfs1 i-0000005d.pmtpa.wmflabs output: DISK CRITICAL - free space: /export 249 MB (1% inode=58%): /home/SAVE 249 MB (1% inode=58%): [18:02:13] RECOVERY Free ram is now: OK on ipv6test1 i-00000282.pmtpa.wmflabs output: OK: 22% free memory [18:08:52] PROBLEM dpkg-check is now: CRITICAL on build2 i-000004b7.pmtpa.wmflabs output: DPKG CRITICAL dpkg reports broken packages [18:25:56] 10/11/2012 - 18:25:56 - User spetrea may have been modified in LDAP or locally, updating key in project(s): analytics [18:51:56] andrewbogott: when is the dns session at the summit? [18:52:51] I see that it's 'scheduled' but not for when. Looking... [18:53:32] Wednesday, 1:50 [18:53:39] cool [18:53:40] thanks [18:53:52] there's a bunch of #openstack-infra people interested [18:54:05] <^demon> Ryan_Lane: http://noc.wikimedia.org/~demon/gerrit-2.4.2-1-ga076b99.war is the final build we're going to use tomorrow. [18:54:27] <^demon> It's a build of t [18:54:29] 2.4.2? [18:54:32] <^demon> https://gerrit.wikimedia.org/r/gitweb?p=operations%2Fgerrit.git;a=shortlog;h=refs%2Fheads%2Fstable-2.4-wmf [18:54:33] <^demon> Yeah [18:54:42] we aren't going to 2.5? [18:59:17] <^demon> Can't until Shawn fixes LDAP group inheritance. [18:59:33] why upgrade then? [18:59:44] <^demon> hashar needs a specific patch I've pulled in. [18:59:46] did we backport some stuff? [18:59:48] ah [18:59:48] ok [18:59:51] <^demon> Yeah, it's for his Zuul work. [18:59:56] * Ryan_Lane nods [18:59:59] sounds good [19:04:32] PROBLEM Total processes is now: WARNING on wikistats-01 i-00000042.pmtpa.wmflabs output: PROCS WARNING: 190 processes [19:04:39] ^demon: we are also probably going to upgrade gallium to Precise :-] [19:06:03] <^demon> I'd prefer to just go 2.5, but I don't know when LDAP's going to be fixed. If it happened like...tonight...I'd go straight for 2.5 :p [19:07:14] hashar: upgrade in place? [19:35:17] mutante: hopefully not :-] [19:35:35] mutante: err which upgrade are you talking about? 
[19:39:47] [bz] (8NEW - created by: 2Damian Z, priority: 4Unprioritized - 6normal) [Bug 40943] Fix the instance types - https://bugzilla.wikimedia.org/show_bug.cgi?id=40943 [19:51:31] * andrewbogott => niece's birthday party, on and off unpredictably during the evening [19:58:58] i can't resolve bastion.wmflabs.org -- presumably this is the dns problem alluded to in the title. anyone have the public IP handy? [20:03:24] nevermind, found it: 208.80.153.207 [20:04:32] PROBLEM Total processes is now: CRITICAL on wikistats-01 i-00000042.pmtpa.wmflabs output: PROCS CRITICAL: 283 processes [20:04:47] hashar: gallium [20:05:01] mutante: not upgraded [20:05:07] mutante: I have replied to faidon on ops list [20:05:22] mutante: we want to schedule that during the morning or devs will complain :-] [20:05:35] there is less activity during European morning [20:06:07] oki [20:06:21] and not going to do that on a friday anyway :-] [20:06:28] [bz] (8RESOLVED - created by: 2Ryan Lane, priority: 4Unprioritized - 6enhancement) [Bug 39781] Add a new openstack network in pmtpa - https://bugzilla.wikimedia.org/show_bug.cgi?id=39781 [20:09:54] [bz] (8RESOLVED - created by: 2Ryan Lane, priority: 4Unprioritized - 6normal) [Bug 37807] nscd negative cache is way too long - https://bugzilla.wikimedia.org/show_bug.cgi?id=37807 [20:10:43] [bz] (8NEW - created by: 2Peter Bena, priority: 4Normal - 6normal) [Bug 36511] e-mail sending from labs - https://bugzilla.wikimedia.org/show_bug.cgi?id=36511 [20:12:03] hmm. is there an issue with DNS? [20:12:43] PROBLEM Disk Space is now: UNKNOWN on mw1-21beta-lucid i-00000416.pmtpa.wmflabs output: Invalid host name i-00000416.pmtpa.wmflabs [20:13:03] PROBLEM SSH is now: UNKNOWN on ganglia-test2 i-00000250.pmtpa.wmflabs output: Usage:check_ssh [-46] [-t timeout] [-r remote version] [-p port] host [20:13:20] [bz] (8NEW - created by: 2T. Gries, priority: 4High - 6normal) [Bug 32163] Please list the fingerprint(s) of the server - https://bugzilla.wikimedia.org/show_bug.cgi?id=32163 [20:14:01] colors :) [20:14:11] Yellow is unreadable on grey... [20:14:19] fine on black :) [20:15:12] PROBLEM host: i-0000049b.pmtpa.wmflabs is DOWN address: i-0000049b.pmtpa.wmflabs check_ping: Invalid hostname/address - i-0000049b.pmtpa.wmflabs [20:15:12] PROBLEM host: i-000004ab.pmtpa.wmflabs is DOWN address: i-000004ab.pmtpa.wmflabs check_ping: Invalid hostname/address - i-000004ab.pmtpa.wmflabs [20:15:12] PROBLEM host: i-000004ae.pmtpa.wmflabs is DOWN address: i-000004ae.pmtpa.wmflabs check_ping: Invalid hostname/address - i-000004ae.pmtpa.wmflabs [20:15:12] PROBLEM host: i-000004a1.pmtpa.wmflabs is DOWN address: i-000004a1.pmtpa.wmflabs check_ping: Invalid hostname/address - i-000004a1.pmtpa.wmflabs [20:15:47] JasonDC: What's up? 
[20:16:51] http://www.isup.me/wlm.wmflabs.org but the IP is fine http://www.isup.me/http://208.80.153.140/ [20:17:13] seems to have started last night [20:18:22] RECOVERY host: i-000004a1.pmtpa.wmflabs is UP address: i-000004a1.pmtpa.wmflabs PING OK - Packet loss = 0%, RTA = 0.64 ms [20:18:32] RECOVERY host: i-0000049b.pmtpa.wmflabs is UP address: i-0000049b.pmtpa.wmflabs PING OK - Packet loss = 0%, RTA = 4.30 ms [20:18:42] RECOVERY host: i-000004ae.pmtpa.wmflabs is UP address: i-000004ae.pmtpa.wmflabs PING OK - Packet loss = 0%, RTA = 0.71 ms [20:18:42] PROBLEM host: i-0000041b.pmtpa.wmflabs is DOWN address: i-0000041b.pmtpa.wmflabs check_ping: Invalid hostname/address - i-0000041b.pmtpa.wmflabs [20:18:42] PROBLEM host: i-000003f4.pmtpa.wmflabs is DOWN address: i-000003f4.pmtpa.wmflabs check_ping: Invalid hostname/address - i-000003f4.pmtpa.wmflabs [20:18:42] PROBLEM host: i-000003f3.pmtpa.wmflabs is DOWN address: i-000003f3.pmtpa.wmflabs check_ping: Invalid hostname/address - i-000003f3.pmtpa.wmflabs [20:18:42] PROBLEM host: i-0000049a.pmtpa.wmflabs is DOWN address: i-0000049a.pmtpa.wmflabs check_ping: Invalid hostname/address - i-0000049a.pmtpa.wmflabs [20:18:43] PROBLEM host: i-00000415.pmtpa.wmflabs is DOWN address: i-00000415.pmtpa.wmflabs check_ping: Invalid hostname/address - i-00000415.pmtpa.wmflabs [20:18:46] [bz] (8RESOLVED - created by: 2Ryan Lane, priority: 4Unprioritized - 6normal) [Bug 40825] Labs DNS configuration only points at a single LDAP server - https://bugzilla.wikimedia.org/show_bug.cgi?id=40825 [20:18:52] JasonDC: RECOVERY - Auth DNS on labs-ns1.wikimedia.org is OK: DNS OK: 0.096 seconds response time. nagiostest.beta.wmflabs.org returns 208.80.153.219 [20:18:54] Try again? [20:19:02] RECOVERY host: i-000004ab.pmtpa.wmflabs is UP address: i-000004ab.pmtpa.wmflabs PING OK - Packet loss = 0%, RTA = 2.31 ms [20:19:12] RECOVERY host: i-0000049a.pmtpa.wmflabs is UP address: i-0000049a.pmtpa.wmflabs PING OK - Packet loss = 0%, RTA = 0.85 ms [20:19:42] seems to work now :) [20:19:52] RECOVERY host: i-000003f4.pmtpa.wmflabs is UP address: i-000003f4.pmtpa.wmflabs PING OK - Packet loss = 0%, RTA = 0.69 ms [20:20:09] And another... [20:20:22] RECOVERY host: i-000003f3.pmtpa.wmflabs is UP address: i-000003f3.pmtpa.wmflabs PING OK - Packet loss = 0%, RTA = 1.14 ms [20:20:22] RECOVERY host: i-00000415.pmtpa.wmflabs is UP address: i-00000415.pmtpa.wmflabs PING OK - Packet loss = 0%, RTA = 0.44 ms [20:20:22] RECOVERY host: i-0000041b.pmtpa.wmflabs is UP address: i-0000041b.pmtpa.wmflabs PING OK - Packet loss = 0%, RTA = 0.53 ms [20:20:32] PROBLEM Current Users is now: CRITICAL on ganglia-test2 i-00000250.pmtpa.wmflabs output: CHECK_NRPE: Socket timeout after 10 seconds. [20:20:42] PROBLEM dpkg-check is now: CRITICAL on ganglia-test2 i-00000250.pmtpa.wmflabs output: CHECK_NRPE: Socket timeout after 10 seconds. [20:20:52] PROBLEM Total processes is now: WARNING on dumps-bot2 i-000003f4.pmtpa.wmflabs output: PROCS WARNING: 155 processes [20:21:02] PROBLEM Current Load is now: CRITICAL on ganglia-test2 i-00000250.pmtpa.wmflabs output: CHECK_NRPE: Socket timeout after 10 seconds. [20:21:12] PROBLEM Disk Space is now: CRITICAL on ganglia-test2 i-00000250.pmtpa.wmflabs output: CHECK_NRPE: Socket timeout after 10 seconds. [20:21:52] PROBLEM Free ram is now: CRITICAL on ganglia-test2 i-00000250.pmtpa.wmflabs output: CHECK_NRPE: Socket timeout after 10 seconds. 
[20:21:52] PROBLEM Free ram is now: WARNING on dumps-bot1 i-000003ed.pmtpa.wmflabs output: Warning: 17% free memory [20:21:52] PROBLEM Free ram is now: WARNING on dumps-bot3 i-000003ef.pmtpa.wmflabs output: Warning: 15% free memory [20:22:02] PROBLEM Disk Space is now: WARNING on ve-roundtrip2 i-0000040d.pmtpa.wmflabs output: DISK WARNING - free space: /run 522 MB (5% inode=99%): [20:22:42] PROBLEM Disk Space is now: CRITICAL on mw1-21beta-lucid i-00000416.pmtpa.wmflabs output: DISK CRITICAL - free space: / 4 MB (0% inode=51%): [20:23:02] PROBLEM SSH is now: CRITICAL on ganglia-test2 i-00000250.pmtpa.wmflabs output: CRITICAL - Socket timeout after 10 seconds [20:23:16] [bz] (8RESOLVED - created by: 2Antoine "hashar" Musso, priority: 4Unprioritized - 6major) [Bug 40947] labs-ns1.wikimedia.org NS server - https://bugzilla.wikimedia.org/show_bug.cgi?id=40947 [20:23:42] PROBLEM Total processes is now: CRITICAL on ganglia-test2 i-00000250.pmtpa.wmflabs output: CHECK_NRPE: Socket timeout after 10 seconds. [20:24:32] PROBLEM Total processes is now: WARNING on wikistats-01 i-00000042.pmtpa.wmflabs output: PROCS WARNING: 189 processes [20:27:22] !log bastion DNS issue fixed by adding a LDAP failover on the NS servers. https://bugzilla.wikimedia.org/show_bug.cgi?id=40825 [20:31:34] hashar: !log does not support URLs with http(s):// [20:33:04] sierously ? [20:33:10] that is a serious bug ehhe [20:33:22] !log bastion DNS issue fixed by adding a LDAP failover on the NS servers. http://bugzilla.wikimedia.org/show_bug.cgi?id=40825 [20:33:45] it does not like me hehe [20:33:57] !log bastion DNS issue fixed by adding a LDAP failover on the NS servers {{bug|40825}} [20:33:58] Logged the message, Master [20:34:01] ... [20:34:02] It does like you URL ;-) [20:34:03] seriously [20:34:11] crazzzyyyy [20:41:36] I need to fix that bot :( [20:41:52] there's seriously way too much work for a single ops person to do [20:45:09] we need more volunteer help. you guys have been fixing most things lately, after all ;) [20:46:42] Ryan_Lane: Is the code public? Maybe I can help you [20:46:51] yep [20:46:56] everything is [20:47:07] where? [20:47:21] for the bot? [20:47:29] Yes [20:47:31] lemme find it [20:47:49] i hope I actually pushed in my last set of chanegs [20:48:38] in gerrit: operations/debs/adminbot [20:48:52] <^demon> Ryan_Lane: I could do more things myself if I had sudo rights to the gerrit box :\ [20:48:58] let me see if my latest changes are there [20:49:04] ^demon: heh [20:49:08] ^demon: put an rt ticket in [20:49:33] I'll +1 it [20:51:22] Jan_Luca: let me push in my latest changes [20:51:56] <^demon> #3698 [20:56:07] gerrit is down? [20:56:24] <^demon> Up for me [20:56:40] Ryan, it's up again [20:57:05] ah ok [20:58:00] <^demon> Ryan_Lane: I am going to need your help tomorrow updating the deb (I know zilch there). I scheduled us for 1pm PDT. [20:58:08] that's cool [20:59:15] <^demon> I've gotten pretty much everything done I needed to for today. It's 5. Later. [20:59:26] see ya [21:05:18] ah. only a single change needed [21:05:22] for the bot [21:08:21] Do you found the bug? 
[21:12:36] * Damianz eyes Ryan spam [21:12:41] * Damianz yays at fixing shit [21:12:49] Jan_Luca: no [21:13:00] I'm seeing what I need to update in the version in gerrit [21:13:08] to match the one on bots-labs [21:13:10] < Ryan_Lane> there's seriously way too much work for a single ops person to do < *points at andrewbogott_afk and paravoid* [21:13:31] Damianz: paravoid is doing swift [21:13:50] andrewbogott_afk has been doing lots of nice ops work lately, thankfully :) [21:13:56] yeah, I'm kind of busy these days [21:13:56] I thought swift was 'kinda stable' now [21:14:05] Yay for swift though [21:14:26] hopefully some of our object storage work can find its way back to labs [21:14:45] but I don't think I'll be doing much on labs besides that for the short term [21:16:24] Hopefully swift will get less breaky hardware and live in peace and happiness sometime [21:17:33] Change on 12mediawiki a page OAuth was modified, changed by Sharihareswara (WMF) link https://www.mediawiki.org/w/index.php?diff=592746 edit summary: 1 person on a team = lead? [21:18:18] Hmm I kinda want to know how much I broke with the nscd change since I just copied the default file and changed the cache values to not-very-long-at-all. Be interesting to see if opendj complains at a random spike in work. [21:18:41] Damianz: I think it'll be fine [21:19:12] Jan_Luca: ok. the version is gerrit is now totally up to date [21:19:52] Sharihareswara: Tbf labs has Ryan as management with no team and no start date according to mediawiki.org :P [21:20:09] Damianz: that sounds about right [21:20:10] :D [21:20:16] andrewbogott_afk is on the team, of course :) [21:20:27] paravoid used to be, but he likes swift more [21:20:45] riiiiight [21:20:46] !log testing test [21:20:47] Logged the message, Master [21:20:57] I think andrewbogott_afk secretly just likes openstack and doesn't really want to be associated outside of code pushed into the backend :D [21:21:00] ok. I'm running the bot interactively [21:21:05] so I can see how it crashes when it crashes [21:21:09] !log testing test [21:21:10] Logged the message, Master [21:21:21] You know it never breaks when you are running it so you can see the errors :P [21:21:28] heh [21:21:31] we'll see :) [21:21:40] I'm running it in screen [21:21:42] it'll break eventually [21:22:12] !log testing A B C D E F G H I J K L M N O P Q R S T U V W X Y Z a b c d e f g h i j k l m n o p q r s t u v w x y z A B C D E F G H I J K L M N O P Q R S T U V W X Y Z a b c d e f g h i j k l m n o p q r s t u v w x y z A B C D E F G H I J K L M N O P Q R S T U V W X Y Z a b c d e f g h i j k l m n o p q r s t u v w x y z A B C D E F G H I J K L M N O P Q R S T U V W X Y Z a b c d e f g h i j k l [21:22:13] !log testing test [21:22:13] Logged the message, Master [21:22:14] Logged the message, Master [21:22:19] heh [21:22:24] See it usually hates long stuff [21:22:32] !log testing http://google.com [21:22:34] It never likes urls [21:22:46] ah ha! [21:22:48] captcha [21:22:51] * Damianz swears half the time it's labsconsole being slow that breaks the editpage stuff [21:23:10] captcha who? [21:23:21] I need to add the user as a bot, then mark bots as skipping captcha [21:23:28] captcha on beta is like a chocolate frying pan :D [21:23:45] Hmm [21:23:47] it's using fancy catpcha, right? 
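
Background on the nscd tweak Damianz mentions above (bug 37807, "nscd negative cache is way too long"): nscd caches negative lookups too, so a long negative TTL keeps "no such host" answers around well after DNS recovers. The sketch below only reports what the local /etc/nscd.conf says for the hosts table; the path and directive names are the stock nscd ones, and none of this reflects the exact values Labs ended up using.

#!/usr/bin/env python
# Report positive/negative cache TTLs for the hosts table in nscd.conf.
import re

def nscd_ttls(path='/etc/nscd.conf', table='hosts'):
    ttls = {}
    with open(path) as f:
        for line in f:
            line = line.split('#', 1)[0].strip()
            m = re.match(r'(positive-time-to-live|negative-time-to-live)\s+(\S+)\s+(\d+)', line)
            if m and m.group(2) == table:
                ttls[m.group(1)] = int(m.group(3))
    return ttls

if __name__ == '__main__':
    for key, value in sorted(nscd_ttls().items()):
        print('%s (hosts): %d seconds' % (key, value))
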
[21:23:54] think it is now [21:23:56] dunno [21:24:07] * Damianz wonders if he reboots some instances and causes andrewbogott_afk's code to run if it wants to break more :D [21:24:24] Still want a way to trigger info updates without being distructive :( [21:24:41] !log testing http://google.com [21:24:42] Logged the message, Master [21:24:45] ;) [21:25:27] I wonder if that was the problem all along [21:25:38] well it does seem to hate netsplits [21:25:42] yes [21:25:45] I wonder if freenode will let us break their servers to test that [21:25:45] that it does for sure [21:25:50] hahaha [21:26:53] Damianz: just lock at the freenode update announces and connect to one server that will be upgrated soon [21:29:46] we need to add a handler for nick in use [21:33:14] nomomomnom [21:39:51] nicknameinuse, nickcollision, unavailresource [21:42:19] omfgtheworldimplodeddiaf [21:55:30] so, let's see how this works... [21:55:53] bleh [21:56:36] ah [21:56:42] I forgot to register the quit event [22:04:32] PROBLEM Total processes is now: CRITICAL on wikistats-01 i-00000042.pmtpa.wmflabs output: PROCS CRITICAL: 289 processes [22:05:38] there we go [22:05:55] that should fix that [22:08:43] Damianz: ok. bot should survive netsplits now :) [22:08:54] https://gerrit.wikimedia.org/r/#/c/27634/ [22:09:02] we'll soon find out [22:09:32] PROBLEM Total processes is now: WARNING on wikistats-01 i-00000042.pmtpa.wmflabs output: PROCS WARNING: 195 processes [22:10:05] argh [22:10:07] screw off gerrit [22:10:14] eh? error? [22:10:21] if I double click on something I want to copy it not freaking login to leave a comment [22:10:48] ah [22:10:57] yay for iframes [22:10:58] not [22:34:02] !log testing test [22:34:33] RECOVERY Total processes is now: OK on wikistats-01 i-00000042.pmtpa.wmflabs output: PROCS OK: 102 processes [22:35:05] !log testing test [22:35:07] Logged the message, Master [22:38:14] !log testing test [22:38:15] Logged the message, Master [23:10:30] hey guys, let's update this one [23:10:32] https://meta.wikimedia.org/wiki/Wikilabs [23:10:43] i just ran across a German discussion regardin toolserver and labs [23:10:52] and they try to lookup info there for some reason [23:43:14] I'm sure this edit is going to piss *someone* off: https://meta.wikimedia.org/w/index.php?title=Wikilabs&diff=4240940&oldid=4240915
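
The handler work discussed at the end of the log (nick-in-use, quit/disconnect events, surviving netsplits) looks roughly like this in Python. This is an illustrative sketch assuming the third-party irc package (irc.bot.SingleServerIRCBot, which already handles reconnection), not the actual adminbot change in https://gerrit.wikimedia.org/r/#/c/27634/; the server, nick and channel are placeholders.

#!/usr/bin/env python
# Minimal bot showing the event handlers discussed above.
import irc.bot

class LogBot(irc.bot.SingleServerIRCBot):
    def __init__(self):
        irc.bot.SingleServerIRCBot.__init__(
            self, [('irc.freenode.net', 6667)], 'labs-morebots', 'adminbot')

    def on_welcome(self, connection, event):
        connection.join('#wikimedia-labs')

    def on_nicknameinuse(self, connection, event):
        # 433: someone already holds our nick -- fall back to an alternate.
        connection.nick(connection.get_nickname() + '_')

    def on_disconnect(self, connection, event):
        # Netsplits and server quits land here; SingleServerIRCBot's own
        # reconnection logic takes over, so just note what happened.
        print('disconnected, waiting for the built-in reconnect')

if __name__ == '__main__':
    LogBot().start()
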