[09:02:43] Hi, there [09:03:34] As I understand, only PHP/Python over cgi web apps can be hosted on tool labs? [09:08:28] Ilya_: PHP does not run cgi on tool labs ;) [09:16:03] I want to run Scala on PlayFramework (JVM/Netty based). I think toollabs does not allow any open port to serve http from the application. [09:25:40] Ilya_: the right person to ask is Coren|RnR, but maybe YuviPanda has some knowledge of this [09:25:57] hey Ilya_ [09:26:09] so no, you can't really run Netty on labs [09:26:14] well [09:26:15] on tool labs [09:26:17] yet [09:26:42] YuviPanda: how's it going with hadoop? [09:26:50] fale: oh, I'm not doing that right now [09:26:53] i'm playing with iPython [09:26:59] yet? are there any plans to allow it? [09:27:00] notebook [09:27:17] YuviPanda: I see :D [09:27:28] Ilya_: it's not a policy issue, but just one of 'we haven't had anyone work on it yet' [09:27:30] :D [09:27:59] so if you can figure a way out to make netty play nice with apache httpd, it shouldn't be *that* hard to get it running [09:28:10] provided it doesn't do too many other bad things (like eat resources when it is *not* running) [09:30:10] !ping [09:30:10] pong [09:30:14] ah, I'm still connected [09:38:35] it is easy to configure httpd to be a proxy to another port https://community.jboss.org/wiki/UsingApacheHTTPDAsReverseProxyAndLoadBalancerForCustomNettyHTTPEngine?_sscc=t [10:04:07] Ilya_: file a bug? [10:04:30] not sure how soon Coren|RnR can get to it, tho. Everything needs to be vetted by him before it goes live... [10:17:36] Change on mediawiki a page Wikimedia Labs/Tool Labs was modified, changed by Nemo bis link https://www.mediawiki.org/w/index.php?diff=720321 edit summary: [+64] all subpages [10:23:15] is there some problem on the apache server? I receive a Bad Request error... 
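The "make netty play nice with apache httpd" idea above boils down to an mod_proxy reverse proxy. A minimal sketch follows; the backend port (9000) and the config filename are assumptions for illustration only, and the privileged steps are left as comments since they would differ on tool labs:

```shell
# Sketch: front a Netty/Play app with Apache httpd via mod_proxy.
# Port 9000 and the config path are hypothetical, not from the log.
cat > /tmp/netty-proxy.conf <<'EOF'
# Forward all requests to a Netty app listening on localhost:9000
ProxyPreserveHost On
ProxyPass        / http://127.0.0.1:9000/
ProxyPassReverse / http://127.0.0.1:9000/
EOF

# On a real host you would then, as root:
#   a2enmod proxy proxy_http
#   cp /tmp/netty-proxy.conf /etc/apache2/conf.d/   # or into the vhost
#   service apache2 reload
grep ProxyPass /tmp/netty-proxy.conf
```

ProxyPassReverse matters here: it rewrites Location headers in redirects coming back from the backend, so clients never see the internal port.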
is my tool that's broken or is apache itself? [11:08:56] legoktm: your pywikibot has something to do with pywikipedia? [11:24:04] :-( [11:24:25] * Beetstra should move his bots .. again .. but has a problem installing perl modules on the new box [11:25:37] trying another install first [11:39:47] :-( [11:41:12] Can someone help me to install POE for perl .. that module fails installing, all the others seem to work [12:01:37] fale: atm in use for the git conversion, and it will probably host nightly generation [12:05:25] valhallasw: are you talking about legoktm's pywikibot? [12:05:39] about the pywikibot project on tool labs [12:05:47] I guessed that was what you were referring to [12:06:39] valhallasw: I was referring to https://git.wikimedia.org/summary/?r=pywikibot/core.git [12:07:02] that's the git-converted repository for rewrite [12:07:11] pywikibot/compat is old-trunk [12:07:20] see https://www.mediawiki.org/wiki/Git/Conversion/pywikipedia [12:08:29] valhallasw: I see. Is there any progress report? Is the code already stable and usable? [12:08:51] fale: huh? it's the same code [12:09:07] or do you mean core vs compat? [12:09:20] valhallasw: exactly core vs compat [12:09:34] there's a reason we call it 'core' now ;-) [12:09:55] yes, it's stable and usable [12:11:03] valhallasw: thanks :). 
So I'll use it to base the pywiki part of the lists project [12:11:23] note that the git conversion is *not* stable yet, though [12:11:33] so there could be some complete history rewrites [12:11:34] valhallasw: oh :( [12:11:41] you can just checkout from svn [12:11:52] https://svn.wikimedia.org/svnroot/pywikipedia/branches/rewrite/ [12:12:18] or use git, but that might need some git reset --hard origin/master - like commands [12:12:33] valhallasw: yep, but having the project in git I would have preferred to add a git submodule :D [12:12:56] I guess that should be OK, because you then just refer to a specific commit [12:13:36] valhallasw: yes, and when your git is stable I can rebase on the specific commit of that moment and so on :) [12:13:58] right [12:14:06] valhallasw: thanks :) [12:20:50] zz_YuviPanda: I think this page needs some updates: https://www.mediawiki.org/wiki/User:Yuvipanda/G2G [12:58:31] * Beetstra pings Coren or petan .. I'm having trouble installing POE (for myself) for perl. Other modules seem to go OK [13:15:35] fale: ? what would you like me to add? [13:15:40] {{done}}s are pretty accurate [13:22:47] Change on mediawiki a page Wikimedia Labs/Tool Labs/List of Toolserver Tools was modified, changed by Risker link https://www.mediawiki.org/w/index.php?diff=720383 edit summary: [+0] sp [13:28:08] valhallasw: so I realized that this particular problem (run ipython notebook and proxy through) is just a specific instance of a more general problem (run a non-php / CGI thing, proxy it through) [13:28:12] need to look through that [13:28:18] okay, leaving brb [13:46:20] Beetstra: Heyas. [13:46:23] "POS"? [13:46:39] POS? [13:46:43] POE [13:46:45] POE* [13:46:46] ? [13:46:50] :-) [13:47:11] Hm. [13:47:17] The module [13:47:19] in perl [13:47:21] What issues are you running into? [13:47:44] it complains that POE::Test::Loops is not installed [13:47:51] But if I install that .. it is already installed [13:47:55] o_O [13:48:01] How... evil. 
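The submodule-pinning approach fale and valhallasw agreed on above — refer to a specific commit so upstream history rewrites don't bite you — can be sketched like this. The repo names and paths are throwaway stand-ins for pywikibot/core, not real URLs:

```shell
set -e
# Demo of pinning a git submodule to a specific commit. A local throwaway
# repo ("upstream") stands in for the real pywikibot/core remote.
rm -rf /tmp/subdemo && mkdir -p /tmp/subdemo && cd /tmp/subdemo
git init -q upstream
git -C upstream -c user.email=t@example.org -c user.name=t \
    commit -q --allow-empty -m 'first'
PIN=$(git -C upstream rev-parse HEAD)       # the commit we want to pin to
git -C upstream -c user.email=t@example.org -c user.name=t \
    commit -q --allow-empty -m 'second'     # upstream moves on

git init -q project && cd project
# protocol.file.allow is only needed for local-path submodules in newer git
git -c protocol.file.allow=always submodule add -q ../upstream lib
git -C lib checkout -q "$PIN"               # pin the submodule to $PIN
git add .gitmodules lib
git -c user.email=t@example.org -c user.name=t commit -q -m 'pin lib'

echo "$PIN" > /tmp/subdemo/pin
git -C lib rev-parse HEAD > /tmp/subdemo/actual
```

After an upstream history rewrite you would re-run the checkout with a new commit hash and commit the updated gitlink — exactly the "rebase on the specific commit of that moment" workflow described above.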
[13:48:22] well .. yeah [13:48:57] t/00_info.t ....................................... 1/1 # Testing POE 1.354, POE::Test::Loops doesn't seem to be installed, Perl 5.014002, /usr/bin/perl on linux [13:49:10] Hm. What's probably going on is that POE has support directly from Ubuntu, and you may well be running into "part-system-part-local" issues. [13:49:23] maybe [13:49:27] Perhaps it'd be simpler if I simply installed the system POE. [13:51:30] * Beetstra puts on a 'pretty please' face [13:51:31] Bit of a sidenote to that .. the MyConfig.pm needs to be tweaked before one can install modules locally .. but maybe that is to prevent complete n00bs from installing things [13:53:23] Beetstra: puppet is running the install now. [13:53:36] Beetstra: Not on purpose, no. [13:53:37] OK, cheers! [14:00:29] Coren .. I think it is something else .. path-wise to local installs [14:02:01] Hm. Haven't run into any issues, but then again the only local perl module my bot uses isn't even CPAN and I am explicit about its path. [14:09:20] I'll have a better look tomorrow .. time to go home (workday is over). Thanks so far, Coren [14:09:52] Beetstra: No worries. Ping me if you need anything else. [14:10:56] Thanks .. I hope that is not going to be a long list of modules to be installed :-D [14:18:04] hi my monkeys... er. friends :D [14:18:33] AzaToth: how do I make a .deb from your version [14:18:44] I had a script ./debian for it, you removed it XD [14:33:41] petan: git buildpackage -uc -us [14:34:04] "-uc -us" is just to not sign them [14:34:55] petan: some things might need to be fixed though. 
for example, I never updated copyright [14:36:11] petan: and need to either add terminatord/config.log to .gitignore or make sure it's removed on clean [14:36:36] petan: and the changelog needs to be updated (wrong version for example) [14:36:44] you update the changelog using dch [14:37:16] petan: after you've built the package, you can type "debc" to see the produced package, and "sudo debi" to install it [14:37:59] petan: in this package I set build-dep to libprocps1-dev (>= 3.3.6) [14:38:12] dunno if ubuntu has it as libprocps0-dev still [14:39:00] petan: also defined Section as utils, as the one you used is not a valid one [14:39:34] petan: here are all sections: http://packages.debian.org/stable/ [14:41:08] petan: here is the lintian for my current package: http://paste.debian.net/13438/ [14:42:25] petan: copy? [15:47:28] can I use redis with mediawiki? [15:47:51] just realized there's a redis on labs [15:51:30] or petan's sharp memcached? [16:16:46] petan: sdgfdghegegerg [16:17:17] * AzaToth slaps petan with a big slimy lobster [16:24:01] hey petan [16:24:12] can you install python-matplotlib and python-zmq on tools-login? [16:25:02] there's a puppet patch too, but I need to confirm that my tool works before pushing it out [16:25:50] Coren: ^ maybe [16:26:11] YuviPanda: the queue starts there → [16:26:24] the queue looks empty to me :P [16:27:00] I'm wanting to smack some knowledge into petan [16:27:09] about debs? [16:27:12] aye [16:27:20] about packaging or installing? :D [16:27:25] packaging [16:27:35] ah [16:27:40] petan is the first one I've seen to have made debs manually [16:27:45] I guess he's not around [16:27:56] well, he does like doing things himself [16:28:00] he was when he asked, but then... [16:28:03] see also: rewriting memcached and rsyslog [16:28:10] yea, but when he complains it doesn't work... [16:28:27] you weren't around for the take saga were you? :D [16:28:34] no [16:28:40] take? [16:28:42] he's very nice, however. 
toollabs wouldn't be going as quickly if not for petan being around [16:29:00] ok [16:29:13] don't know what "take saga" means [16:29:15] scfc_de: around? [16:29:23] AzaToth: so toollabs has a tool called 'take' [16:29:28] ok [16:29:36] written in straightforward C, which petan attempted to rewrite with STL [16:29:52] YuviPanda: scfc_de is busy masking his phone from NSA [16:30:02] hehe [16:30:46] there were also security issues with it (race conditions with file perm/ownership checks), and it went 'round and 'round and 'round [16:30:47] if you are going that way, then... [16:31:15] -fPIE -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security --std=c++0x -D_FORTIFY_SOURCE=2 [16:31:39] so the 'saga' was him asking for code review, a bunch of us doing code review and then him telling us that we're not really doing code review [16:31:40] (that's the default debian build flags for hardened stuff) [16:31:45] ah [16:31:50] wait, c++0x? [16:31:54] not that one [16:31:57] but the others [16:32:22] ah [16:33:28] made an update to petan's "terminatord" at https://github.com/benapetr/terminator/pull/2 [16:33:45] I'm not sure if terminatord is another manifestation of the NIH syndrome [16:33:47] and now it won't compile because hardening bails out ツ [16:33:51] ah, scons [16:33:58] I HATE AUTOHELL!!!! 
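AzaToth's packaging walkthrough above (dch to bump the changelog, git buildpackage -uc -us to build unsigned, debc to inspect, sudo debi to install, lintian to check) can be sketched as follows. Only the changelog creation and version parsing actually run here; the build commands need a real source tree and the devscripts/git-buildpackage tools, so they stay as comments. The package name "terminatord" is from the log, but the version and maintainer are illustrative:

```shell
set -e
# Minimal debian/changelog, in the format dch maintains. The version and
# maintainer entry below are made up for the demo.
rm -rf /tmp/pkgdemo && mkdir -p /tmp/pkgdemo/debian && cd /tmp/pkgdemo
cat > debian/changelog <<'EOF'
terminatord (0.1.1-1) unstable; urgency=low

  * Fix changelog version; update copyright.

 -- Example Maintainer <maint@example.org>  Mon, 01 Jul 2013 12:00:00 +0000
EOF

# Extract the version the way the tooling does: first line, in parentheses.
sed -n '1s/.*(\(.*\)).*/\1/p' debian/changelog > version

# The real workflow (not run here):
#   dch -i                      # bump version, edit changelog entry
#   git buildpackage -uc -us    # build; -uc -us skips signing
#   debc                        # show the produced package's contents
#   sudo debi                   # install the freshly built package
#   lintian                     # check against Debian policy
cat version
```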
[16:34:07] hehe [16:34:17] I've used autohell too much [16:34:18] It's bad, but better than a hand rolled shell script [16:34:23] true [16:34:40] YuviPanda: but when you start to understand m4, then you are lost [16:34:44] haha [16:34:45] yeha [16:34:48] *yeah [16:35:00] most people don't really write m4 tho [16:35:06] especially when you start to redefine quotes [16:35:12] 'oh this other project had this thing that mostly works so I took it and made a few tweaks' [16:35:43] the only time I've used autotools was when I was doing GNOME stuff, and it's easy enough to pick up another project's autotools setup and adapt it to yours [16:35:54] not really [16:36:21] inside GNOME at least [16:36:31] you have autoconf 2.61 installed and the configure.ac requires autoconf 2.60 [16:36:50] and config.guess is 4 minutes too old [16:37:07] hehe [16:37:17] and that po/Makefile.in.in [16:37:44] and autoreconf doesn't work because you have an m4 dir not specified in Makefile.am [16:38:12] and CONFIG_AUX_DIR points to a dir with old random crap [16:38:40] you've clearly had too much autotools to drink [16:38:50] should you install libintl or should you link to it... [16:39:04] I hate it [16:39:08] :D [16:52:13] zz_YuviPanda: terminatord smells a lot like slayerd on toolserver [16:53:37] which is also smarter because it does per-user limits instead of per-process limits [16:54:07] valhallasw: tell that to petan [17:02:59] valhallasw: If I understand things correctly, the goal is slightly different. slayerd looks like it's to enforce per-user memory usage limits, while terminatord is (as far as I understand) just supposed to kick in before Linux's OOM killer and be "smarter" about what's to be killed (i.e. kill user processes, not system daemons) [17:07:42] anomie: true, except there are also per-process memory limits, which doesn't really help much in most cases [17:09:05] YuviPanda: That was a short nap :-). What can I do for you? 
[17:09:41] anomie: valhallasw isn't there something already built that we could use? [17:09:43] scfc_de: was on a bus when talking that time, so closed laptop and came back out ;) [17:09:45] scfc_de: install python-matplotlib and python-zmq on -login? [17:10:38] Depends what your goals are. If you want to automatically prevent users from using more than their "share" of memory, per-user limits are needed. If you just want to stop a poorly-coded process from using more memory than expected (and use social mechanisms to deal with users using too much), per-process limits are fine. [17:11:17] question is more like 'is there not another per-process limit killing daemon'? [17:11:59] YuviPanda: echo -1000 > /proc/<pid>/oom_score_adj will solve the "killing system daemons" issue. Which Ubuntu's upstart has config file settings to automatically handle. [17:12:15] so why write another daemon? to add 'intelligence'? [17:12:42] YuviPanda: First time to install a package on Tools, and it seems to have worked. [17:12:48] scfc_de: \o/ [17:12:50] :D [17:12:53] let me check ;) [17:13:18] !log tools Installed python-matplotlib and python-zmq on tools-login for YuviPanda [17:13:20] Logged the message, Master [17:17:47] scfc_de: it works :) [17:17:55] valhallasw: we've got numpy and matplotlib now ;) [17:18:01] and setup is a lot faster [17:20:23] * anomie does a little statistical investigation: As of a few minutes ago, 38 processes were specifying their memory limit in multiples of 1e3 bytes (probably from jsub/jstart), 10 in multiples of 1e6 bytes (9 probably old jstart), 5 in multiples of 1024**2 bytes, 1 in multiples of 1e9 bytes, and 9 in bytes (those are mine). [17:21:29] hmm, is terminatord supposed to kill things on SGE [17:21:32] or on -tools? [17:21:37] gah [17:21:38] -login [17:35:39] YuviPanda: I'd think tools and login, because we have SGE to kill stuff on -exec nodes [17:38:34] + tools-dev. [17:39:49] what exactly is tools-dev to be used for? 
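anomie's point above — bias the kernel's OOM killer per process via /proc/<pid>/oom_score_adj (-1000 means never kill, +1000 means kill first) — can be demonstrated without root. Lowering a score requires privileges, so this sketch instead *raises* the current shell's score, marking it as a preferred OOM victim; the value 500 is arbitrary:

```shell
# Mark this shell as a preferred OOM-killer victim (raising the score
# needs no privileges; lowering it would need root).
echo 500 > /proc/$$/oom_score_adj
cat /proc/$$/oom_score_adj > /tmp/oomdemo.txt

# A system daemon would instead be protected with, as root:
#   echo -1000 > /proc/<pid>/oom_score_adj
# which, as noted above, upstart can also arrange from its job config.
cat /tmp/oomdemo.txt
```

This is exactly the mechanism a terminatord-style daemon would compete with: tagging expendable user processes high and daemons low achieves most of the "kill user processes, not system daemons" goal with no extra daemon at all.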
[17:40:01] compiling stuff [17:40:19] and probably everything you have been using -login for until now :p [17:41:28] That's not well defined ATM. Biggest difference: Crontabs should be installed on -login, so I think breaking/slowing down/etc. -dev will be met with much more understanding than the same on -login :-). [17:42:22] I think it's basically "if you're going to use a lot of CPU, do it on -dev instead of -login if for some reason you can't submit it as a job" [17:44:38] Yep. Though of course then you have to decide, "what's a lot of CPU", and using tools-dev may be a good default for /all/ development work. [17:45:34] then what's the point of having a -login? [17:45:47] there's only one -dev [17:46:01] * valhallasw suggests changing -dev to -login and -login to -submit [17:46:27] then again, I think we will have some cron variety in the future that will Just Work(TM) [17:46:39] For example for crontabs. At some point we should use a cluster-wide cron where users don't have to think about that. [17:46:43] and start the 'start SGE job'-job on some random host [17:46:46] valhallasw: You were faster :-). [17:47:31] Unfortunately cronie (which has a cluster-type option) doesn't seem to have been ported to Ubuntu yet. [17:48:00] https://blueprints.launchpad.net/ubuntu/+spec/security-karmic-replace-cron [17:49:11] * anomie has a cronjob that needs to run on -login, and one that needs to run on -dev. Or else things will break. [17:50:08] anomie: Is that "only" the package comparison? [17:51:11] scfc_de: Yeah, that's the cronjobs that do "dpkg --get-selections" for my packages.html page. The one on -dev also does the work to actually generate the page. [17:53:24] anomie: For this special case, we could find other arrangements (for example, have Puppet cron jobs for all hosts that dump their package lists to /data/project/.system/some-dir). [17:54:58] scfc_de: True. BTW, why is php5-redis and python-redis on all -exec nodes except -exec-06? [17:57:55] anomie: Moment. 
[17:58:27] !log tools Set ssh_hba to yes on tools-exec-06 [17:58:29] Logged the message, Master [17:59:29] anomie: Need to wait (at most) 30 minutes until I can log into tools-exec-06. [18:11:30] valhallasw: hmm, so I guess I should actually run ipython notebook on -dev :D [18:11:37] valhallasw: for now, at least. Eventually they should go on SGE [18:14:37] valhallasw: though that'll make the tunneling... more complicated than i'd want [18:20:59] valhallasw: I want to have a -dev [18:21:20] it sounds better [18:21:50] have no idea what "submit" would mean, and "login" is a bit a mystery [18:25:49] -dev isn't accessible publicly, methinks [18:25:58] so you ssh to tools-login and then ssh from there to wherever [18:27:09] so... ssh. If I ssh somewhere and run a command, would that get a sighup when the ssh connection terminates? [18:29:25] YuviPanda: ssh tools-dev.wmflabs.org seems to work fine [18:29:30] ...wat [18:29:48] meh, then why do we have a -login? [18:31:28] I think the point of -login is to manage your jobs (submit, kill, schedule crontabs to start them, etc), ideally without things being slow because someone else is running some huge compile. [18:40:17] hmm [18:40:30] should... not be called -login then, I guess? [18:54:17] AzaToth: the same as on the toolserver: the host from where you submit jobs [18:55:33] valhallasw: I never submitted any jobs on toolserver [18:55:46] I didn't even know TS had SGE :| [18:55:52] (but then again, I didn't do much on TS) [18:56:41] all I know of TS is that it's solaris and replag [18:57:42] hehe [19:00:47] hello [19:00:49] anybody there? [19:05:15] mabbe [19:05:18] !ask [19:05:18] Hi, how can we help you? Just ask your question. [19:05:20] :D [19:05:50] hey! I'm trying to install the LAMP stack on my labs instance [19:05:54] and I've installed apache [19:06:11] now to check it, the instructions ask me to visit http:// [19:06:19] I tried that, but to no avail [19:06:21] Halp? [19:06:28] internal ip or external ip? 
[19:06:48] did you get an external IP for your labs instance? [19:11:26] hmm not sure how to do that SadPanda [19:11:38] pragunbhutani: you need to ask one of the labs people.... none of whom are here now, I think [19:12:12] ah is that the public IP I'm being told about? [19:12:19] yeah [19:12:59] then I guess I'll have to wait till tomorrow to run into one of the labs people [19:22:01] anomie: I still can't log into tools-exec-06. Coren or petan need to look into this. [19:22:55] pragunbhutani: Instructions are at https://wikitech.wikimedia.org/wiki/Help:Addresses, FYI. [19:25:12] thanks anomie [19:34:14] I can't login to instance-proxy now [19:34:23] gluster failure again? [19:34:39] stuck after debug2: we sent a publickey packet, wait for reply [20:12:11] hi ! when I try to use screen on tools-login I get "Cannot open your terminal '/dev/pts/104' - please check." [20:12:37] I believe it's a matter of rights [20:12:40] ? [20:14:40] !screenfix [20:14:40] script /dev/null [20:14:41] Toto_Azero: screen in your user account instead of tool account [20:14:49] ^ or that [20:15:46] ok both work well, thank you :) [20:23:32] Coren, petan: Re not being able to log into exec-06, it's configured for the Puppet class ldap::role::client::labs, while the other exec nodes are ldap::client::wmf-test-cluster. Looking at modules/ldap/manifests/role/client.pp, they seem to be equivalent. Can this still be related to that? [21:30:11] !doc [21:30:11] There are multiple keys, refine your input: docs, documentation, [21:30:15] !docs [21:30:15] View complete documentation at https://labsconsole.wikimedia.org/wiki/Help:Contents [21:30:41] !documentations [21:30:46] !documentation [21:30:46] https://labsconsole.wikimedia.org/wiki/Help:Contents [21:31:05] petan, where's the bot doc? 
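The !screenfix recipe above ("script /dev/null") works because screen fails when the pty you inherited is owned by the account you logged in as rather than the tool account you switched to; script(1) allocates a fresh pty owned by the current user, and screen is happy inside it. Demonstrated non-interactively here — interactively you would just run `script /dev/null` and start screen from the shell it spawns:

```shell
# Non-interactive stand-in for the interactive fix `script /dev/null`:
# run a command under script(1), which allocates a new pty we own.
# (-q suppresses the start/done banners; the typescript goes to the log.)
script -q -c 'echo pty-ok' /tmp/screenfix-demo.log
grep pty-ok /tmp/screenfix-demo.log
```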
[21:31:31] https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help [21:31:37] !doc del [21:31:38] Unable to find the specified key in db [21:31:42] !docs del [21:31:42] Successfully removed docs [21:31:47] !documentation del [21:31:47] Successfully removed documentation [21:31:50] !docs is https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help [21:31:50] Key was added [21:31:58] Cyberpower678: that? [21:32:00] ^ [21:32:09] o [21:32:10] NO [21:32:16] wm-bot doc [21:40:41] oh [21:40:42] sorry [21:40:53] AzaToth: Steinsplitter might have perms to import on betalabs [21:40:54] not sure [21:41:17] import upload or crosswiki? [21:41:58] Steinsplitter: would want [[en:Barack Obama]] + all templates to http://en.wikipedia.beta.wmflabs.org/wiki/Barack_Obama [21:42:25] want to be able to test VE in a master env and the lagout issue [21:43:43] nope. i cannot import from en:wiki [21:43:53] *ping* Thehelpfulone [21:44:42] Was thinking about using Special:Export on enwiki, which I can, and then Special:Import on betawiki, which I can't [21:45:04] yep, importupload is not enabled [21:45:17] ok [21:45:20] but i can import from dewiki [21:46:04] meta, es, fr, it, pl [21:46:15] and nost [21:46:21] heh [21:46:48] Steinsplitter: I need a page that clearly fucks VE up [21:46:54] But maybe Thehelpfulone can grant you the rights [21:47:26] possibly [21:48:32] seems I got Reedy on the hook ツ [21:48:58] AzaToth, Barack Obama from dewp? [21:49:20] :D [21:49:29] Oh no! In English, please! [21:50:16] Reedy: if that doesn't work :P sorry [21:51:20] Steinsplitter: I think the only import targets are other beta wikis [21:51:46] but I have now tested importing a page from de and it works o_O [21:51:58] on commons I can upload a page from enwiki [21:52:00] @docs [21:52:08] @doc [21:52:08] @documentation [21:52:16] HELP!! 
:/ [21:52:18] AzaToth: on commonswiki I have importupload [21:52:40] but you need it on enwiki :( [21:53:10] I've gotten importer on betawiki [21:53:11] Can someone please provide me with the documentation to wm-bot? [21:53:38] AzaToth: good :-) [21:53:46] !help [21:53:46] !documentation for labs !wm-bot for bot [21:53:53] !help | Cyberpower678 [21:53:53] Cyberpower678: !documentation for labs !wm-bot for bot [21:54:03] !wm-bot [21:54:04] http://meta.wikimedia.org/wiki/WM-Bot [21:54:11] _O_O [21:54:27] Thank you. [21:57:15] @RC+ meta_wikimedia Requests_for_comment/X!'s_Edit_Counter [21:57:15] Inserted new item to feed of changes [21:59:58] are lua templates capable of retrieving a list of pages a user created? [22:00:12] Err... [22:00:21] Added that to the wrong channel. [22:00:30] who, me? [22:00:30] @RC- meta_wikimedia Requests_for_comment/X!'s_Edit_Counter [22:00:30] Deleted item from feed [22:00:42] ok [22:00:44] gry, no not you. [22:00:46] Steinsplitter: done ツ [22:00:52] http://en.wikipedia.beta.wmflabs.org/wiki/Barack_Obama [22:00:54] ツ :-) [22:01:35] I want wm-bot to follow an RfC on meta. It is very active, and adding it here would only annoy everyone. [22:01:47] gry: no, IIRC they aren't. [22:01:59] but that's just IIRC [22:11:32] petan: back yet? [22:13:55] Steinsplitter, I'm back now - still need me? [22:14:23] AzaToth, if you need any rights I'm happy to grant them :) [22:21:39] Thehelpfulone: I think he got his from Reedy [22:25:08] YuviPanda: Sshhhhhh [22:25:10] Not in public! [22:25:17] sorry, master! [22:32:03] Thehelpfulone: too late ツ but thanks anyway [22:32:34] Thehelpfulone: managed to fuck up parsoid on betalabswiki [22:32:42] so it's all good [22:32:49] heh no problem [22:33:01] I think petan has access to the deployment-prep projectg [22:33:04] project* too [22:33:34] Parsoid on betawiki, is it a lot more sub-par performance-wise than the "real" one? 
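The Export/Import route AzaToth and Steinsplitter settled on above — Special:Export on enwiki (anyone can), then Special:Import on the target (needs the importupload right) — has a scriptable side. No network is used in this sketch; it only builds the export URL, and the query parameters shown in the comment (templates, history) are the standard Special:Export ones:

```shell
# Build a Special:Export URL for a page title (spaces become underscores).
PAGE='Barack Obama'
URL="https://en.wikipedia.org/wiki/Special:Export/$(echo "$PAGE" | sed 's/ /_/g')"
echo "$URL" > /tmp/export-url.txt

# With network access you would then fetch the XML dump, e.g.:
#   curl -o page.xml "$URL?templates=1&history=0"
# and upload page.xml through Special:Import on the destination wiki.
cat /tmp/export-url.txt
```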
[22:35:08] Thehelpfulone: https://bugzilla.wikimedia.org/show_bug.cgi?id=50480 [22:35:43] would that be only because the parsoid is running on an old 486 standing dusty in a corner? [22:54:24] petan, ? [23:25:18] Has anybody else just lost their ability to SSH to tools-login.wmflabs.org? [23:25:55] I get the access problems notice, and then it just hangs. [23:27:22] yes, 'shell request failed on channel 0' [23:27:26] HTTP to tools.wmflabs.org also hangs. [23:27:45] gry: It's not just me, then? [23:27:48] nope [23:28:46] * wolfgang42 mutters quietly to himself. [23:28:51] Thanks, gry. [23:31:13] hmm, my mosh connection still works [23:31:26] wolfgang42: wfm? [23:31:56] a bit slow tho [23:33:03] YuviPanda: My mosh connection reports "mosh: Last contact 765 seconds ago." [23:33:16] YuviPanda: Which is why I noticed to begin with. [23:33:36] weird [23:33:40] maybe it's out of memory? [23:33:52] indeed it is [23:34:20] What, the server? [23:34:34] yeah [23:34:39] bash: fork: Cannot allocate memory [23:34:43] when I try to do anything [23:34:50] Oh, I just noticed: my other mosh session is accepting keystrokes, but not responding at all [23:35:30] do they have underlines? [23:35:41] probably just mosh's local keystroke echo behavior? [23:35:47] No, no underlines. [23:35:57] hmm [23:36:03] wolfgang42: you can still login to tools-dev.wmflabs.org btw [23:36:05] And I was at a MariaDB prompt, and when I backspaced over the prompt it came back [23:36:17] YuviPanda: What's that for? [23:36:36] i'm told it is a 'development' server [23:36:40] not too clear what that means yet :P [23:36:56] Also, it's showing the same behaviour for me. [23:37:05] hmm, it worked for me just a little bit ago [23:37:08] is out now [23:37:15] whelp. is out [23:37:31] I think we've an outage, folks :) [23:37:41] don't think any of the admins are around [23:37:45] so... not sure what to do :( [23:39:37] YuviPanda: Go outside and enjoy the fresh air? 
:p [23:39:46] it's 5 AM :D [23:40:19] i should sleep instead [23:40:21] i guess [23:40:28] Oh. [23:40:30] BTW, isn [23:40:45] *isn't Linux supposed to kill a task when it runs out of memory? [23:40:57] sortof [23:41:03] iirc it's not fully configured here yet [23:48:20] PROBLEM Free ram is now: UNKNOWN on tools-login.pmtpa.wmflabs 10.4.0.220 output: Unknown [23:50:40] where is that reporting? [23:51:20] YuviPanda: #wikimedia-labs-nagios [23:51:28] ah [23:51:36] still, there's little we can do about it :( [23:51:46] (See the topic of this channel) [23:51:48] I suppose it is possible that it might page Coren [23:51:49] or not [23:56:03] When I log into tools-master.pmtpa.wmflabs, the connection hangs after "No mail". Something NFS? [23:58:15] rest of us can't login to anywhere, scfc_de ;) [23:58:19] -login is out of memory [23:58:21] -dev too maybe [23:58:32] From bastion1, "ssh tools-login" (without agent forwarding) gives "Connection closed by 10.4.0.220", "ssh tools-dev" "Permission denied (publickey)." So tools-login seems to be FUBAR. [23:59:06] I just ssh'd to the limn0 instance and it just worked, so the rest of labs seems ok [23:59:08] tools-dev hangs as well after "Last login: Sun Jun 30 14:28:22 ..." [23:59:12] yeah [23:59:43] when I had a working mosh connection, i go [23:59:44] t [23:59:44] bash: fork: Cannot allocate memory [23:59:51] so there's that :)
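The "bash: fork: Cannot allocate memory" symptom that ended the night usually means the host is out of memory (or, less often, out of PIDs). When a surviving shell still responds, the fastest first check is /proc/meminfo; the dmesg grep in the comment then tells you whether the kernel OOM killer has been firing, which ties back to the oom_score_adj discussion earlier:

```shell
# Quick triage on a wedged host: how much memory is actually left?
# (MemAvailable needs kernel >= 3.14; MemFree/MemTotal always exist.)
awk '/^MemTotal:|^MemFree:|^MemAvailable:/ {print $1, $2, $3}' \
    /proc/meminfo > /tmp/memcheck.txt

# Then check whether the OOM killer has already been killing things:
#   dmesg | grep -i 'out of memory'
cat /tmp/memcheck.txt
```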