[01:05:53] PROBLEM Total processes is now: WARNING on bots-salebot.pmtpa.wmflabs 10.4.0.163 output: PROCS WARNING: 177 processes
[01:10:52] RECOVERY Total processes is now: OK on bots-salebot.pmtpa.wmflabs 10.4.0.163 output: PROCS OK: 100 processes
[01:31:47] petan, about?
[02:38:53] RECOVERY Free ram is now: OK on swift-be2.pmtpa.wmflabs 10.4.0.112 output: OK: 20% free memory
[02:39:23] RECOVERY Free ram is now: OK on bots-sql2.pmtpa.wmflabs 10.4.0.41 output: OK: 21% free memory
[02:52:22] PROBLEM Free ram is now: WARNING on bots-sql2.pmtpa.wmflabs 10.4.0.41 output: Warning: 14% free memory
[02:56:52] PROBLEM Free ram is now: WARNING on swift-be2.pmtpa.wmflabs 10.4.0.112 output: Warning: 18% free memory
[06:31:52] PROBLEM Total processes is now: WARNING on parsoid-roundtrip4-8core.pmtpa.wmflabs 10.4.0.39 output: PROCS WARNING: 151 processes
[06:46:52] RECOVERY Total processes is now: OK on parsoid-roundtrip4-8core.pmtpa.wmflabs 10.4.0.39 output: PROCS OK: 147 processes
[09:55:25] yuvipanda so where is this person? :p
[09:55:39] probably sobering :D
[09:56:32] i'd guess he's probably on vacation till new year's
[10:07:15] but I thought 21 dec was apocalypse :/
[11:56:32] 12/26/2012 - 11:56:31 - Updating keys for tb at /export/keys/tb
[12:17:13] ToAruShiroiNeko: We're all zombies and /will/ eat your brains
[13:26:33] PROBLEM Free ram is now: WARNING on dumps-bot3.pmtpa.wmflabs 10.4.0.118 output: Warning: 19% free memory
[14:56:55] hi, the pywikipediabot team is planning an SVN->Git migration, and I had a question - is it possible to migrate without using gerrit? Everyone seems to be a bit uneasy about extra complexity which seems unneeded for this project.
[15:13:39] You can move without gerrit; the only git hosting we have is via gerrit though
[15:13:48] (You can just use ssh and git if you really want though)
[15:46:11] 12/26/2012 - 15:46:10 - Creating a home directory for olaph at /export/keys/olaph
[15:50:58] Damianz: not sure what you mean by ssh -- if the pywiki project relocates from MW SVN to Git, can people commit to pywiki master?
[15:51:19] using ssh links with public keys of course
[15:51:21] 12/26/2012 - 15:51:21 - Updating keys for olaph at /export/keys/olaph
[17:03:27] Damianz oh?
[17:03:29] brainz?
[17:03:38] why eat the dog?
[17:40:03] Would someone be able to add me to the bastion project so I can ssh in? My shell account application was processed, but I don't appear to be a member and can't ssh in.
[17:50:08] franny: can you try now?
[18:01:00] Yurik: yes. people will still be able to commit to master, with review (or self-review, if your project prefers that)
[18:01:37] Yurik: oh. without gerrit? sure, you can use github if you want
[18:01:53] realistically, gerrit isn't nearly as difficult as people make it out to be
[18:01:55] Ryan_Lane: i would prefer pywiki to stay under the MW umbrella
[18:02:23] 99.99% of the time, you do this: ; git commit; git review
[18:02:30] will people be able to just do git push to master, without using git-review?
[18:02:43] git review isn't required
[18:02:45] true, except for one thing - it took me half a day to set up git-review :(
[18:02:47] but it makes things much easier
[18:02:53] are you on windows?
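(Much of the git-review setup Yurik describes reduces to a `.gitreview` file at the repository root, which `git review` reads to find the gerrit remote. A hypothetical one for a Wikimedia-hosted pywikipedia repo is sketched below; the `project` path is an assumption, not the real repository name, though the host and port are gerrit's usual ones:)

```ini
[gerrit]
host=gerrit.wikimedia.org
port=29418
; hypothetical project path -- substitute the real one after migration
project=pywikipedia/core.git
defaultbranch=master
```

With this file committed, a new contributor only needs `pip install git-review` and `git review -s` to get the remote and commit hook configured.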
[18:02:58] yep
[18:03:00] ah
[18:03:20] and judging by statistics of things, it is a more common scenario :)
[18:03:28] doubtful
[18:03:45] no no no, no platform wars pls :)
[18:03:52] I'm not
[18:04:00] it sucks that windows isn't easier to use for git
[18:04:18] our stats definitely show people favoring os x and linux
[18:04:51] though I'm sure the fact that it's a pain to use in windows skews that some
[18:05:05] i am thinking of installing linux just because of this pain :)))
[18:05:15] agree re windows :( I would also like windows to resolve a number of other pet peeves of mine: "/" instead of "\", "\n" instead of "\r\n"
[18:05:19] but, we usually hear about the people using windows, because we tend to need to walk them through some steps
[18:05:27] and utf8 in console
[18:05:30] yeah
[18:05:48] well, / vs \ will never change ;)
[18:05:59] * Yurik shoots himself
[18:06:24] they kinda tried with NTFS & Win3.5
[18:06:45] heh
[18:07:09] using a linux vm will help with this
[18:07:16] ok, so ryan, i will post to the list that we can migrate to MW Git and allow direct git commit + git push, correct?
[18:07:21] * Ryan_Lane uses a linux vm for local mediawiki dev
[18:07:31] direct push?
[18:07:35] * Ryan_Lane twitches
[18:07:43] wa?
[18:07:49] let's talk to ^demon about that
[18:07:58] it may not be a problem
[18:08:18] what is so special that git-review does?
[18:08:20] we disable it by default
[18:08:44] it automates a bunch of stuff in git and gerrit
[18:10:07] Yurik: you guys don't want to do code review at all?
[18:10:17] it's one of the nice things gained by using gerrit
[18:12:30] hm. I guess I should pack
[18:13:28] I would love to have it, but if that requires significant extra hassle for a new dev to join...
i don't know really
[18:14:06] I don't think it adds much extra hassle
[18:15:15] people had a pretty bad upfront reaction to gerrit because it was a combination of switching from svn to git and because it isn't github
[18:15:21] we don't really hear many complaints now
[18:16:02] that's exactly what i hear on the list
[18:16:15] some advocate github, but i feel it's a trap
[18:17:41] all together, we had about 60 devs committing, which is not that high i think. I don't know honestly, what would be better - it did take me a bit of time to set up, and i'm still learning the ropes, but i know it will be better... eventually
[18:19:44] yes, it's getting better and better every release
[18:20:06] I'm really looking forward to the next one. gitblit rather than gitweb!
[18:26:24] giftpflanze, it works now. Thanks. :)
[18:38:18] Hmm...
[18:38:20] fran@bastion1:~$ ssh editor-engagement.pmtpa.wmflabs
[18:38:21] ssh: Could not resolve hostname editor-engagement.pmtpa.wmflabs: Name or service not known
[18:39:29] Am I doing something wrong?
[18:40:04] that instance doesn't exist
[18:40:45] Oh. :| I thought that was the instance I was supposed to be helping with.
[18:41:33] I guess I need to talk to Ori.
[18:42:01] it's a project
[18:44:42] Ah, he gave me the right hostname. Thanks for the help.
[18:44:52] np :)
[18:51:16] 12/26/2012 - 18:51:16 - Updating keys for spetrea at /export/keys/spetrea
[18:56:07] is it just me, or is labs slow like hell today?
[18:58:13] Yurik: If you have a repo on a server you can 'push' to it just over ssh+git; it gives no restrictions on the repo though, nor forced code review. It's a cheap hosting setup
[18:59:15] Damianz: you mean we should create another repo on a labs instance?
[19:00:39] git clone ssh://damian@bastion.wmflabs.org/repos/myrepo.git would work fine; it's lame and not redundant... personally I use git/bitbucket for opensource, gerrit for labs and that for private stuff I've yet to os.
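(Damianz's point above, that a bare repository reachable over ssh is all the "hosting" plain git needs, can be sketched end to end. The ssh URL form is shown only in a comment because the host and path are hypothetical; the runnable commands use throwaway local paths, which exercise exactly the same no-review push flow:)

```shell
# A bare repository is all a git "server" is. Over ssh you would clone it as
# ssh://you@bastion.wmflabs.org/repos/myrepo.git (host/path hypothetical);
# here a plain local path stands in for the remote.
hub=$(mktemp -d)/myrepo.git
git init --bare "$hub"

work=$(mktemp -d)
git clone "$hub" "$work/myrepo"   # warns that the repo is empty; that's fine
cd "$work/myrepo"
git config user.email you@example.com
git config user.name "You"
git commit --allow-empty -m "first commit"
git push origin HEAD:refs/heads/master   # straight to master -- no gerrit, no review
```

The trade-off is the one mentioned in the channel: no redundancy, no access control beyond ssh accounts, and no code review unless you bolt on hooks yourself.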
[19:03:12] MaxSem: When you say labs is slow, do you mean the web interface? Or your instance?
[19:03:22] instance
[19:03:23] labsconsole is always slow
[19:03:39] It's extra slow when memcached crashes, which is why I was asking.
[19:03:45] A slow instance I can't do much about :(
[19:04:04] typically, an empty puppet run is 30 seconds, but I observe up to a minute
[19:06:36] That's probably the puppet server being slow… might be it's just having a busy day
[19:06:59] If it times out let me know and I'll have a look.
[19:07:42] it doesn't so far, thanks
[19:13:42] ugh. if puppet in labs starts getting as slow as in production it's going to suck
[19:20:05] Ryan_Lane, how long does it take in prod?
[19:25:44] MaxSem: you don't want to know
[19:25:50] hehe
[19:25:58] let's just say 1 minute is incredibly fast
[19:26:19] but it was twice as fast, aaaaa!
[19:27:24] maybe something you were applying takes a while?
[19:27:39] I need to turn off exported resources in labs
[19:27:43] that should speed it up some
[19:27:50] nope, I measured for an empty (= no changes) run
[19:28:31] I wish you could collect data internally without using exported resources :(
[19:34:45] Ryan_Lane: Pretty much every day when I first click on 'manage instances' the instance list shows every project as empty until I log out and in again. Is that another symptom of one of our known keystone problems?
[19:35:02] yes. it's due to memcache going away
[19:35:15] Oh, so anytime one of us restarts memcache that happens?
[19:35:16] I know where the bug is, but I haven't gotten a chance to fix it
[19:35:36] MaxSem: ah, ok. I'll look at the run times to see what's up
[19:35:53] andrewbogott: yeah.
[19:37:00] why do you need to restart memcache?
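(MaxSem's "empty run" numbers above come from timing a run that applies no changes. A minimal way to reproduce that measurement is sketched below; it is guarded because puppet may not be installed wherever you paste it, and `agent --test` may additionally need root and a reachable puppetmaster:)

```shell
# Time an "empty" puppet run: --noop applies no changes, so the wall-clock
# time is mostly catalog compilation plus round-trips to the puppetmaster.
if command -v puppet >/dev/null 2>&1; then
    time puppet agent --test --noop
    ran=attempted
else
    echo "puppet not installed; nothing to time"
    ran=skipped
fi
```

Comparing the `real` figure across runs (and across instances) is what distinguishes "my manifest is slow" from "the puppetmaster is slow", which is the question being debated in the channel.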
[19:37:46] giftpflanze: it's segfaulting
[19:37:51] I need to launch it in gdb
[19:37:56] to track down the bug
[19:38:07] oh, ok
[19:38:26] andrewbogott: the bug is in getCredentials in OpenStackNovaUser.php
[19:38:46] andrewbogott: if a token comes back empty, I load the generic token from the database
[19:38:53] that's fine… for the generic token
[19:38:57] but not so much for project tokens
[19:39:46] hm. I may know the proper fix
[19:39:50] let me try and test it
[19:41:50] nope. that didn't do it
[19:41:52] heh
[19:42:55] ok. have to head to the airport
[19:42:59] back in a while
[19:43:53] PROBLEM Current Load is now: CRITICAL on mwreview-abogott-dev.pmtpa.wmflabs 10.4.0.205 output: Connection refused by host
[19:44:33] PROBLEM Current Users is now: CRITICAL on mwreview-abogott-dev.pmtpa.wmflabs 10.4.0.205 output: Connection refused by host
[19:45:12] PROBLEM Disk Space is now: CRITICAL on mwreview-abogott-dev.pmtpa.wmflabs 10.4.0.205 output: Connection refused by host
[19:45:52] PROBLEM Free ram is now: CRITICAL on mwreview-abogott-dev.pmtpa.wmflabs 10.4.0.205 output: Connection refused by host
[19:47:22] PROBLEM Total processes is now: CRITICAL on mwreview-abogott-dev.pmtpa.wmflabs 10.4.0.205 output: Connection refused by host
[19:48:12] PROBLEM dpkg-check is now: CRITICAL on mwreview-abogott-dev.pmtpa.wmflabs 10.4.0.205 output: Connection refused by host
[19:52:22] RECOVERY Total processes is now: OK on mwreview-abogott-dev.pmtpa.wmflabs 10.4.0.205 output: PROCS OK: 84 processes
[19:53:12] RECOVERY dpkg-check is now: OK on mwreview-abogott-dev.pmtpa.wmflabs 10.4.0.205 output: All packages OK
[19:53:52] RECOVERY Current Load is now: OK on mwreview-abogott-dev.pmtpa.wmflabs 10.4.0.205 output: OK - load average: 0.11, 0.72, 0.58
[19:54:32] RECOVERY Current Users is now: OK on mwreview-abogott-dev.pmtpa.wmflabs 10.4.0.205 output: USERS OK - 0 users currently logged in
[19:55:12] RECOVERY Disk Space is now: OK on mwreview-abogott-dev.pmtpa.wmflabs 10.4.0.205
output: DISK OK
[19:55:52] RECOVERY Free ram is now: OK on mwreview-abogott-dev.pmtpa.wmflabs 10.4.0.205 output: OK: 694% free memory
[20:50:52] PROBLEM Current Load is now: CRITICAL on openstack-wiki-instance.pmtpa.wmflabs 10.4.1.49 output: Connection refused by host
[20:51:02] PROBLEM Free ram is now: CRITICAL on openstack-wiki-instance.pmtpa.wmflabs 10.4.1.49 output: Connection refused by host
[20:51:32] PROBLEM Current Users is now: CRITICAL on openstack-wiki-instance.pmtpa.wmflabs 10.4.1.49 output: Connection refused by host
[20:52:12] PROBLEM Disk Space is now: CRITICAL on openstack-wiki-instance.pmtpa.wmflabs 10.4.1.49 output: Connection refused by host
[20:52:22] PROBLEM Total processes is now: CRITICAL on openstack-wiki-instance.pmtpa.wmflabs 10.4.1.49 output: Connection refused by host
[20:52:52] PROBLEM dpkg-check is now: CRITICAL on openstack-wiki-instance.pmtpa.wmflabs 10.4.1.49 output: Connection refused by host
[21:00:52] RECOVERY Current Load is now: OK on openstack-wiki-instance.pmtpa.wmflabs 10.4.1.49 output: OK - load average: 1.12, 1.22, 0.77
[21:01:02] RECOVERY Free ram is now: OK on openstack-wiki-instance.pmtpa.wmflabs 10.4.1.49 output: OK: 1044% free memory
[21:01:33] RECOVERY Current Users is now: OK on openstack-wiki-instance.pmtpa.wmflabs 10.4.1.49 output: USERS OK - 0 users currently logged in
[21:02:13] RECOVERY Disk Space is now: OK on openstack-wiki-instance.pmtpa.wmflabs 10.4.1.49 output: DISK OK
[21:02:22] RECOVERY Total processes is now: OK on openstack-wiki-instance.pmtpa.wmflabs 10.4.1.49 output: PROCS OK: 83 processes
[21:02:52] RECOVERY dpkg-check is now: OK on openstack-wiki-instance.pmtpa.wmflabs 10.4.1.49 output: All packages OK
[21:28:42] hi there, I’m looking for a member of the Bots project. someone online?
[21:53:02] !log DrTrigonBot installed python-numpy on bots-4
[21:53:03] DrTrigonBot is not a valid project.
[21:54:12] !log bots installed python-numpy on bots-4 (for DrTrigonBot)
[21:54:12] Logged the message, Master
[22:02:02] !log bots removed python-numpy on bots-4 again - needs to be installed on bots-apache01 (for cgi)
[22:02:05] Logged the message, Master
[22:05:53] we are experiencing problems accessing some instances of the bots project (bots-nr1 and bots-apache01 do not work, bots-4 works). can someone confirm or resolve this problem?
[22:07:18] +1
[22:08:16] Works for me
[22:08:23] Could do with rebooting for home dirs though
[22:08:54] Shouldn't need to access bots-apache01 ever, as a side note
[22:09:12] what if some software has to be installed there?
[22:09:52] Should be done via puppet
[22:10:05] docs?
[22:10:18] nope
[22:10:44] So... could you explain? (and then back to the bots-1 problem ;)
[22:11:16] Damianz, when I try to login with SSH, I get a long welcome message, containing the line 'System restart required' – could this be the reason?
[22:11:26] The classes aren't actually used now, but apache/mysql/nr instances should be managed via puppet - you submit a patch to the class, it gets merged, the change gets made.
[22:11:56] ireas: Shouldn't be, does it just hang after that?
[22:12:17] As a side note, bots-1 is a crappy instance that you probably don't want to use :)
[22:13:04] Damianz, same problem with bots-nr1. just pastebin-ing the error message …
[22:13:22] Damianz, http://pastebin.com/qdB4fAPZ line 25
[22:13:54] I get just a "Permission denied (publickey)." which is somehow strange...
[22:13:58] That's just because updates have been applied
[22:14:05] I'll reboot it since the mounts haven't been changed yet
[22:21:01] !log bots restarted bots-1 and bots-nr1 to switch over mounts + maybe fix auth for some people
[22:21:14] Logged the message, Master
[22:21:14] great, bots-nr1 works again!
[22:21:44] (not for me...)
[22:21:57] Hmm, I might make a new page on osm for restarting services on bots-nr servers, could be interesting...
though I should work on getting my puppet stuff merged first
[22:22:07] andrewbogott: any idea why?
[22:22:15] pastebin ssh -vvvv
[22:22:15] and ssh-agent
[22:22:16] PROBLEM Current Load is now: CRITICAL on mwreview-abogott-dev2.pmtpa.wmflabs 10.4.1.58 output: Connection refused by host
[22:22:16] on bastion
[22:22:19] DrTrigon: Anything that I might understand would be project-wide. I can't think of why you'd be able to get into one instance in bots and not another.
[22:22:19] The keys, the homedir -- all shared project-wide.
[22:22:38] PROBLEM Current Users is now: CRITICAL on mwreview-abogott-dev2.pmtpa.wmflabs 10.4.1.58 output: Connection refused by host
[22:22:42] DrTrigon, you aren't getting a notice from ssh about a conflict in known_hosts, are you?
[22:22:43] RECOVERY Free ram is now: OK on swift-be2.pmtpa.wmflabs 10.4.0.112 output: OK: 21% free memory
[22:22:43] I think that would be obvious
[22:22:54] PROBLEM Disk Space is now: CRITICAL on mwreview-abogott-dev2.pmtpa.wmflabs 10.4.1.58 output: Connection refused by host
[22:22:55] andrewbogott, I had the same problem, but the reboot fixed it for me
[22:23:02] PROBLEM Free ram is now: CRITICAL on mwreview-abogott-dev2.pmtpa.wmflabs 10.4.1.58 output: Connection refused by host
[22:23:12] PROBLEM dpkg-check is now: CRITICAL on mwreview-abogott-dev2.pmtpa.wmflabs 10.4.1.58 output: Connection refused by host
[22:23:37] Hm. If your labs account was created since the homedir migration (last Monday) then, yeah, a reboot might be required.
[22:23:49] which server?
[22:24:09] andrewbogott: I get this http://pastebin.com/sc1ME79U
[22:24:22] PROBLEM Total processes is now: CRITICAL on mwreview-abogott-dev2.pmtpa.wmflabs 10.4.1.58 output: Connection refused by host
[22:24:30] Or wait, this is more true: If you never accessed a given instance before the migration then probably you can't access it until it reboots.
[22:24:39] DrTrigon: Does that ring true?
[22:25:03] DrTrigon, if you go back to bastion and then login via ssh again?
[22:25:06] andrewbogott: I never used others than 'bastion' and 'bots-4', so I do not know if it worked at some point in the past...
[22:25:58] ok -- Damianz, do you know if bots-1/2/3 are due for a reboot?
[22:26:38] 1 just got reboobed
[22:26:53] 3 got reboobed 9 days ago
[22:26:56] 2 has been up a while
[22:27:05] Can probably do them without too much hassle
[22:27:21] DrTrigon, maybe you can access bots-1 now?
[22:27:45] i thought it was a bad idea to use bots-1 …
[22:28:02] it is
[22:28:16] giftpflanze: I'm just interested in fixing DrTrigon's access… have no opinion about what happens after that.
[22:28:26] andrewbogott: ok, that's interesting; it worked from bastion... I thought I could also go to bots-1 from e.g. bots-4, but that is wrong! thanks for the hint!! :)
[22:28:32] Hmm, I'm having problems sshing into anything other than bastion. I'm getting the error "Unable to create and initialize directory": http://pastebin.com/SkVpELni
[22:28:49] is legoktm lego|away?
[22:28:50] lol
[22:28:58] DrTrigon: OK, cool.
[22:29:00] franny: now that is due to the dir swap
[22:29:06] thanks!!
[22:29:41] franny: That means the instance you're trying to access is due for a reboot. Instances that haven't been rebooted since the 18th are in kind of a fossilized state and can't accept new users.
[22:29:47] Oh. Hmm.
[22:30:12] Now I have one more question: How to install e.g. python-numpy on bots-apache01 by puppet?
[22:31:09] franny: If it's in your own project you can reboot via the web interface. Otherwise you'll need to corral a sysadmin to do it.
[22:32:16] andrewbogott: is -3 broken?
[22:32:17] why reboot? topic says remounting would be sufficient
[22:32:24] there's only beet on there
[22:32:28] giftpflanze: Pointless
[22:32:46] Need to stop the processes with fh's on the mount, so might as well reboot and do updates
[22:33:26] so maybe we should say that
[22:33:34] Damianz: No idea about bots-3.
I was listing off the instances that DrTrigon was thwarted by, but that turns out to be unrelated.
[22:33:59] ok
[22:34:22] RECOVERY Total processes is now: OK on mwreview-abogott-dev2.pmtpa.wmflabs 10.4.1.58 output: PROCS OK: 84 processes
[22:34:24] Well, technically remounting will 'fix' it; rebooting is easier though
[22:35:34] giftpflanze: In fact, if a magical moment arrives when not a single file in /home is in use, the system will remount automatically. As far as I know that has never happened in real life though.
[22:35:50] So, we presume that a reboot is the only time when the stars align properly.
[22:35:53] RECOVERY Current Load is now: OK on mwreview-abogott-dev2.pmtpa.wmflabs 10.4.1.58 output: OK - load average: 0.48, 1.08, 0.82
[22:36:11] Well, that's not really true
[22:36:25] If everything drops its fhs on the mount for like 15m so it times out, next access will remount it
[22:36:29] I think I was given admin access, but I don't really trust myself to do it yet. I'll wait. :)
[22:36:33] RECOVERY Current Users is now: OK on mwreview-abogott-dev2.pmtpa.wmflabs 10.4.1.58 output: USERS OK - 0 users currently logged in
[22:36:46] Damianz: I suppose that is subtly different from what I said :)
[22:37:04] Larger scope for the stars needing to align
[22:37:13] * andrewbogott nods
[22:37:13] RECOVERY Disk Space is now: OK on mwreview-abogott-dev2.pmtpa.wmflabs 10.4.1.58 output: DISK OK
[22:37:54] Also Ryan must be home :D
[22:38:03] RECOVERY Free ram is now: OK on mwreview-abogott-dev2.pmtpa.wmflabs 10.4.1.58 output: OK: 850% free memory
[22:38:14] RECOVERY dpkg-check is now: OK on mwreview-abogott-dev2.pmtpa.wmflabs 10.4.1.58 output: All packages OK
[22:38:45] franny: waiting won't necessarily accomplish anything unless you've emailed or asked someone to reboot it… quite possible that you're the only one who can't access the instance.
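(The manual alternative to rebooting that Damianz and andrewbogott describe above, stopping everything that holds a file handle on /home and then remounting in place, looks roughly like the sketch below. It is deliberately wrapped in a function so nothing destructive runs if pasted; call it only on a machine where you actually mean it:)

```shell
# Sketch of remounting /home without a reboot, per the discussion above.
remount_home() {
    # list the processes that still have open files on the mount
    # (fuser ships in the psmisc package; `lsof /home` works too)
    fuser -vm /home
    # after stopping/killing those processes, remount in place:
    umount /home && mount /home
}
# Invoke remount_home by hand. On labs a reboot achieves the same thing
# and also picks up pending package updates, which is why it is preferred.
```

If even one process keeps a file handle open, the `umount` fails with "target is busy", which is exactly the stars-aligning problem the channel is joking about.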
[22:40:16] * andrewbogott makes a mental note to look to the left of the $ next time someone posts a shell transcript
[22:40:48] Ori said I could do whatever I wanted to the test server, but I can't find anything to reboot it on labsconsole.
[22:42:07] franny: What project and instance?
[22:42:58] kubo, in editor-engagement.
[22:43:23] Oh, he's rebooting it.
[22:43:31] For future reference, where is the interface to reboot and the like?
[22:43:52] franny: It's here: https://labsconsole.wikimedia.org/wiki/Special:NovaInstance
[22:44:00] But the options available depend on your rights in the project.
[22:45:34] RECOVERY Disk Space is now: OK on kubo.pmtpa.wmflabs 10.4.0.19 output: DISK OK
[22:46:09] https://labsconsole.wikimedia.org/wiki/Special:NovaProject lists me as a sysadmin.
[22:46:50] But I don't see anything about rebooting at .
[22:47:05] Is there an 'actions' column with links in it?
[22:47:14] For each instance?
[22:47:14] Nope.
[22:47:23] Is there at least a list of instances?
[22:47:34] Yes.
[22:47:41] Hm.
[22:47:50] Instance Name Instance Type Project Image Id FQDN Public IP Launch Time Puppet Class Modification date Number of CPUs RAM Size Amount of Storage
[22:48:20] wait
[22:48:28] that doesn't sound right
[22:48:30] I feel like you're describing the page for a specific instance
[22:48:38] Which you get to from clicking a link on the page I linked you to.
[22:48:47] there's a page under the project which lists the instances; there's a manage instances tab on the left under sysadmin
[22:48:50] Whereas I am talking about the actual page at the link that i… linked.
[22:48:50] (if you have sysadmin rights)
[22:49:13] Nothing about "actions" in there, either.
[22:49:52] https://labsconsole.wikimedia.org/wiki/Nova_Resource:I-000003dd
[22:49:53] franny: screenshot?
[22:50:14] I am more and more confused
[22:51:28] hey andrewbogott: I rebooted kubo.pmtpa.wmflabs, now I can't login either :)
[22:51:43] $ ssh kubo.labs
[22:51:43] Creating directory '/home/ori'.
[22:51:43] Unable to create and initialize directory '/home/ori'.
[22:52:05] franny: When I click on https://labsconsole.wikimedia.org/wiki/Special:NovaInstance I see a bunch of tables, one for each project. the table has columns instance name, instance id, instance state, instance IP address, instance floating ip address, security groups, image id, launch time, actions
[22:52:06] andrewbogott, https://dl.dropbox.com/u/11458013/Screenshot%20from%202012-12-26%2017%3A51%3A07.png
[22:52:41] franny: You see /that/ when you click on the link I just pasted?
[22:52:55] ori-l: damn
[22:52:59] andrewbogott, no, that's when I click the particular instance.
[22:53:18] When I click that exact link you pasted, I get...
[22:53:26]
[22:53:32] PROBLEM Disk Space is now: WARNING on kubo.pmtpa.wmflabs 10.4.0.19 output: DISK WARNING - free space: /mnt 595 MB (3% inode=95%):
[22:53:42] https://dl.dropbox.com/u/11458013/Screenshot%20from%202012-12-26%2017%3A53%3A14.png
[22:53:56] "Toggle" doesn't do anything.
[22:53:59] press toggle
[22:53:59] ahah! so there is /not/ a list of instances
[22:54:04] The toggles! They do nothing!
[22:54:08] which is why I asked you before if you say a list of instances :)
[22:54:13] lame...
[22:54:17] *saw
[22:54:25] you have sysadmin rights for the project... does it have any instances?
[22:54:35] franny: This is a known bug having to do with memcached crashes. Log out and in and things will be better.
[22:55:00] andrewbogott, oh. I was clicking "editor-engagement" and getting a list there.
[22:55:00] Basically any time memcached crashes (which happens about once a day) that page becomes instantly useless. Happens to me too.
[22:55:29] Oh, OK, now I see a list. :)
[22:55:37] excellent.
[22:55:43] ori-l: I am looking...
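(ori-l's `$ ssh kubo.labs` above implies a client-side ssh alias, since the real hostname is kubo.pmtpa.wmflabs. A hypothetical `~/.ssh/config` in the same spirit is sketched below; the user name and the exact alias scheme are assumptions, and the bastion hop matches how labs instances are reached. `ssh -W` requires OpenSSH 5.4 or newer:)

```
# ~/.ssh/config -- hypothetical sketch of a labs setup
# Short alias: "ssh kubo.labs" resolves to the real instance name.
Host kubo.labs
    HostName kubo.pmtpa.wmflabs

# All labs instances are reached by tunnelling through the bastion host.
Host *.pmtpa.wmflabs *.labs
    User ori
    ProxyCommand ssh -W %h:%p bastion.wmflabs.org
```

Because `%h` expands to the resolved HostName, the alias and the proxy rule compose: one `ssh kubo.labs` opens the bastion hop and connects to the instance behind it.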
[22:57:12] * andrewbogott makes another mental note to just always ask people to log out and in again before trying to diagnose anything at all
[22:59:13] andrewbogott: thanks!
[23:00:10] Still no luck with kubo.
[23:00:35] I logged into another instance (piramido) successfully.
[23:06:45] 12/26/2012 - 23:06:44 - Updating keys for mwang at /export/keys/mwang
[23:10:03] * Damianz thinks andrewbogott should just create a cronjob to clear the sessions table :P
[23:38:48] franny, ori-l: I am running out of ideas for fixing kubo. I'll revisit later on...
[23:39:01] meanwhile maybe franny can just create a fresh instance?
[23:39:53] PROBLEM Free ram is now: WARNING on swift-be2.pmtpa.wmflabs 10.4.0.112 output: Warning: 19% free memory
[23:40:03] andrewbogott: sure, if you're willing to allocate an additional public IP :)