[00:58:58] It's been about 2 weeks and my access log is already 150 MB
[01:34:40] Aw, fuck fcgi! Labs has enough power to do everything with resource-intensive vanilla cgi
[01:51:56] Dispenser: That'd take some doing given that fcgi is the default.
[01:52:32] Ah, hm, for php. Which may not be what you're doing. But setting up an fcgi responder is a three-line stanza of config.
[01:52:34] Wouldn't that be a security with perl and python
[01:52:43] security risk*
[01:53:24] I don't see why, unless you use some global state in your code rather than per-request.
[01:53:43] (and php fcgi doesn't keep state between requests)
[01:55:42] Python's startup delay is about 100-200 ms, which matters if you get lots of requests (e.g. GeoHack) or make something that would request a new cgi image 50 times per page load, and you're looking to cut down load times to under 1 second
[01:56:13] idk, let's see if it'll bring down this server
[02:13:28] Coren: do you know what this "unlink failed" in ~tools.giftbot/error.log is?
[04:30:02] Wikimedia Labs / wikitech-interface: Interwiki map broken on wikitech - https://bugzilla.wikimedia.org/41786#c10 (This, that and the other) (In reply to Nemo from comment #9) > No as in "it doesn't imply not anymore". The bug is current and easily > reproducible, see URL. How so? What do you expect to...
[06:51:32] Wikimedia Labs / wikitech-interface: Interwiki map broken on wikitech - https://bugzilla.wikimedia.org/41786#c11 (Nemo) (In reply to This, that and the other from comment #10) > How so? What do you expect to see? The [[m:IWM]] interwikis. It seems they re-appeared since April. On the other hand, "loca...
[13:08:09] YuviPanda: Merlissimo pointed out a much superior solution for the automatic webservice problem: An SGE epilog script is executed even if the job itself OOMed; if that epilog script exits with return code 99, the job is rescheduled.
So we would only have to modify "webservice start" to create a "lock file" $HOME/.webservice-should-run, "webservice stop" to remove the lock file before qdel, and then in the epilog script just check whether the lock file is there or not. If it is, the job stopping was unintended and "exit 99" will restart it. If the file doesn't exist, the epilog script just returns 0 and the job is properly "closed". I'll test this in the next couple of days.
[13:11:14] !log tools tools-exec-12: Moved /var/log around, reboot & iptables-restore
[13:11:17] Logged the message, Master
[13:13:55] !log tools tools-exec-13: Moved /var/log around, reboot, iptables-restore & reenabled queues
[13:13:58] Logged the message, Master
[13:14:14] scfc_de: just check for return code 137, which is returned if the script is killed because of the hard limit
[13:14:37] no, probably no lock file needed
[13:16:44] Merlissimo: On OOM, SGE still uses the normal kill chain, so often lighttpd is killed with -TERM.
[13:17:11] (At least it looks this way.)
[13:17:27] do you have a jobname of an OOM-killed webserver job?
[13:18:31] Merlissimo: Moment, please.
[13:20:05] qacct -j lighty-catscan2 | grep -P "(failed|exit_status)" shows exit code 143
[13:21:07] = -TERM
[13:21:53] 127 +15, yes
[13:22:02] 128 +15
[13:22:23] So I think we'll need that "lock" file.
[13:22:35] (But still a much cleaner solution than watchdogs & Co.)
[13:25:31] 128 + signal number = exit code:
[13:25:31] An exit code above 128 means that a job died via a signal. The signal number is the conventional UNIX signal number
[13:26:09] https://java.net/downloads/oc-doctor/User%20Documentation/Grid%20Engine%206.2%20U6%20User%20Docs/gridengine62u6-ErrorMessages.pdf
[13:36:43] Hoi, Autolist2 does not have enough memory ... Getting WDQ data... Fatal error: Allowed memory size of 2621440000 bytes exhausted (tried to allocate 71 bytes) in /data/project/wikidata-todo/public_html/autolist2.php on line 134
[15:22:35] GerardM-: ...
a php script that is using 2.5G of data? The problem isn't that it doesn't have enough memory, but that it is crazy inefficient!
[15:29:27] Coren: see scfc_de's email in that 'webservice restart' thread. most of these scripts *are* using 2.5G of mem per request
[15:31:02] That's completely beyond Tool Labs' capacity to provide reliably; either this needs to be made more tractable or it'll need its own project - or at the very least its own nodes.
[17:19:19] hi
[17:19:38] erwin85's tools are unresponsive again; if I look in the error log, I again see:
[17:19:41] 2014-07-13 14:16:28: (server.c.1398) [note] sockets disabled, connection limit reached
[17:20:09] I know restarting the webservice helps, but what can be the root cause?
[17:23:36] restarted the webservice now
[18:36:17] .
[18:36:18] !channels
[18:36:18] | #wikimedia-labs-nagios #wikimedia-labs-offtopic #wikimedia-labs-requests
[18:37:15] hi YuviPanda
[18:37:25] finally I have some internet :o
[18:37:39] just wanted to tell you that if you are going to move /tmp off sda1, use LVM
[18:38:10] hi petan, i read that you wrote in bugzilla that the irc feeds will be discontinued? where and when was that announced?
[18:38:24] gifti: on wikitech-l
[18:38:35] it wasn't officially announced but predicted by hashar
[18:38:48] it quite makes sense, as they are going to replace it with websockets
[18:39:44] "IRC will be phased out entirely and related code in MediaWiki entirely removed if it is not of any other use."
[18:40:15] I don't believe that is going to happen sooner than in a few months, so no rush
[18:40:49] I don't have any problem with this besides the fact that websocket.IO is incredibly hard to implement in anything but python
[18:40:58] it's much harder than creating an IRC parser
[18:41:39] is there already documentation for that?
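The epilog-script scheme discussed earlier in the log ([13:08:09] onward) is easy to sketch. The following is an assumption about how it could look, not the actual Tool Labs script: if the $HOME/.webservice-should-run lock file still exists when the job ends, the stop was unintended, so the epilog exits 99 to make SGE reschedule the job; otherwise it exits 0. It also decodes the 128-plus-signal exit statuses mentioned around the qacct output.

```python
# Sketch (an assumption, not the actual Tool Labs script) of the SGE
# epilog logic discussed above: if the lock file still exists when the
# job ends, the stop was unintended, so return 99 to make SGE reschedule
# the job; otherwise return 0 and let the job close normally.
import os

LOCK_FILE = os.path.expanduser("~/.webservice-should-run")

def epilog_exit_code(lock_exists):
    """99 asks SGE to reschedule the job; 0 lets it end normally."""
    return 99 if lock_exists else 0

def describe_exit_status(status):
    """Decode a job exit status: values above 128 mean 128 + signal
    number, e.g. 143 = 128 + 15 = SIGTERM (as seen in the qacct output)."""
    if status > 128:
        return ("signal", status - 128)
    return ("exit", status)

# The real epilog would end with something like:
#   sys.exit(epilog_exit_code(os.path.exists(LOCK_FILE)))
```

The lock file is what distinguishes "webservice stop removed me on purpose" from "I was OOM-killed", which a bare exit-status check cannot do, since SGE TERMs the job either way.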
[18:41:43] yes
[18:42:03] https://wikitech.wikimedia.org/wiki/RCStream
[18:42:13] thx
[18:42:18] TBH I haven't gotten it to work yet
[18:42:35] not even in python, nor in more hardcore languages
[18:42:52] the example on the webpage throws a syntax error
[18:42:59] if I fix it then it complains about missing python modules
[18:43:07] pip is telling me that there is no version for my OS
[18:43:27] so it's quite unusable right now
[18:43:48] what is ori's username
[18:44:01] hm, will see
[18:44:15] @seenrx ori.*
[18:44:15] petan: Last time I saw ori they were talking in the channel, they are still in the channel. It was in #wikimedia-dev at 7/12/2014 5:08:10 PM (1d1h36m5s ago) (multiple results were found: Falcorian, Gloria, GorillaWarfare, fabriceflorin, fabriceflorin_ and 95 more results)
[18:44:16] meh
[18:48:15] ah, nice
[18:49:36] petan: wtf? pip install socketio-client
[18:50:16] that works even on windows
[18:52:00] and how do you get a syntax error? Are you trying to run it under python 3?
[18:52:52] valhallasw`cloud: it doesn't work here on ubuntu 14.04
[18:53:04] I can install socketio but then it tells me there is no urlparse
[18:53:06] ImportError: No module named 'urlparse'
[18:53:24] pip install urlparse: No distributions at all found for urlparse
[18:53:29] err
[18:53:31] yes
[18:53:35] so you're running it under python 3
[18:53:45] well, that is the only python available atm for me
[18:53:59] is it not possible to run it in a newer python?
[18:54:11] in fact I need to get it working in C
[18:54:18] which is going to be much harder I guess
[18:54:34] then you'd better wait until the server is actually functioning :-p
[18:54:41] to begin with I wanted to try the simplest example, which doesn't work either
[18:55:11] well, they already plan to phase out the IRC provider and they haven't yet gotten this thing working, meh
[18:55:32] this isn't the right way to do this
[18:55:40] they should first let people give them some feedback
[18:55:54] I still think that websockets are a bad idea here
[18:55:54] and only then clarify the intent is to kill IRC?
[18:56:21] that's just a retarded idea. Then people will complain 'Oh, no, but you never told us that this new thing was supposed to take over the IRC channel!!!1111oneoneone'
[18:56:34] what do you mean
[18:56:41] which idea is retarded
[18:57:04] the idea of only announcing 'we will kill IRC' at some other point in the future
[18:57:55] it's pretty lame to tell people "we are going to kill IRC, get ready for the new system... oh wait, we don't know what the new system is actually going to be..."
[18:58:07] except we do know what that system is going to be
[18:58:10] you just don't like it
[18:58:17] everyone knows that the IRC feed will not be supported forever
[18:58:39] officially it's still just one of the proposed solutions
[18:59:02] which indeed I disagree with for several reasons
[19:00:21] Ubuntu 14.04 = Trusty, isn't it? And that doesn't provide Python 2?
[19:00:34] oh, http://codepen.io/Krinkle/full/laucI/ seems to be working now
[19:03:20] BTW, valhallasw`cloud: I'm looking for some thin wrapper around Gerrit's "API": Something like getChange ("Iabcdef")->files (), ->reviewers (), etc. Do you know if there's such a thing for Python?
[19:04:08] (I looked for Perl, but that one's *too* thin :-), basically just putting the JSON answer in a hash.)
[19:05:41] scfc_de: so first they tell me python 2 is dead and now I have to get it...
https://github.com/huggle/huggle3-qt-lx/issues/31 interesting world of pythonists
[19:05:49] scfc_de: let me see....
[19:06:37] petan: well, python 2 the *language* is dead, python 2 the *ecosystem* not so much
[19:06:59] valhallasw`cloud: ok so why are we preferring it over python 3 here?
[19:07:15] petan: because socketio-client clearly does not support python 3
[19:07:21] e.g. why is this new RC feed not working in python 3?
[19:07:29] aha right
[19:07:49] which it should, but the developers are probably working on py2 themselves
[19:08:16] they do care about it, though, as they use six
[19:08:26] I just feel s(c)eptical about replacing IRC, which works everywhere, with a python-only solution that works in python2 only
[19:08:39] scfc_de: https://github.com/valhallasw/gerrit-reviewer-bot/blob/master/gerrit_rest.py
[19:08:48] scfc_de: so it didn't exist back then
[19:09:00] petan: wtf is this 'python-only' nonsense
[19:09:15] it's a protocol that comes from a /javascript/ world
[19:09:18] valhallasw`cloud: Then I'll probably stick to Perl. Thanks!
[19:09:24] and there are implementations listed for different languages
[19:10:04] I suppose it can also be implemented in Python 3, just nobody got to it yet :-).
[19:13:06] anyway, petan, if you want to test
[19:13:13] the initial connection fails 4 out of 5 times
[19:13:18] so retry that, then it should work
[19:15:10] valhallasw`cloud: I don't see any working cross-platform c++ solution
[19:16:13] hm, i wonder how workable this is when i only have a websocket library
[19:16:26] I have that too
[19:16:33] but I guess it needs some more work
[19:16:53] https://github.com/dhbaird/easywsclient ?
[19:16:56] I couldn't find any specs or RFC of that websocket.IO thing
[19:16:59] yeah and i'm on a phone atm
[19:17:00] oh, right
[19:17:30] I think socketio is just websockets on steroids, with a fallback on other techniques
[19:17:41] you *might* be able to use long-polling, actually...
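The "initial connection fails 4 out of 5 times, so retry" advice above can be captured in a small helper. This is a generic sketch, not rcstream-specific: `connect` is any callable that raises on failure, for instance a wrapper around a socketio-client connection attempt.

```python
# Sketch of the "just retry" advice above: attempt a connect callable a
# few times before giving up. Nothing rcstream-specific is assumed;
# `connect` is any function that raises on failure.
import time

def connect_with_retries(connect, attempts=5, delay=0.0):
    """Call connect() up to `attempts` times, returning its result;
    re-raise the last error if every attempt fails."""
    last_error = None
    for _ in range(attempts):
        try:
            return connect()
        except Exception as err:
            last_error = err
            time.sleep(delay)  # back off between attempts
    raise last_error
```

In practice you would add a non-zero `delay` (or exponential backoff) so a flapping server is not hammered.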
[19:18:52] scfc_de: valhallasw`cloud it's also a standard http://tools.ietf.org/html/rfc6455
[19:18:57] I will just create a relay on labs for conversion from websockets to some decent, easy-to-parse and portable stream...
[19:18:58] petan: ^ specs
[19:19:08] yeah, IRC with colors is just so decent and easy to parse
[19:19:17] much easier than this :P
[19:19:35] at least if you are not in python :P
[19:19:44] YuviPanda: that's *websockets*, not *socket.io*
[19:19:53] but YuviPanda you are most welcome to write a C++ handler in Qt for huggle :o
[19:19:56] valhallasw`cloud: indeed, and rcstream is pushing things out through websockets, not socket.io
[19:20:14] oh, right.
[19:20:15] valhallasw`cloud: socket.io is just the library being used, which supports websockets along with a couple of other older protocols
[19:20:31] I would like to see you write a C++ handler for websocket.IO that would be simpler than an IRC parser
[19:20:38] petan: it's built into Qt, http://qt-project.org/doc/qt-5/qtwebsockets-examples.html
[19:20:44] http://qt-project.org/doc/qt-5/qtwebsockets-index.html
[19:20:47] YuviPanda: yes I use that
[19:20:52] but that is not websocket.IO
[19:20:55] as I said
[19:20:57] that is just a raw websockets handler
[19:20:59] websockets.IO is not a protocol
[19:21:02] yeah, IRC with colors is just so decent and easy to parse
[19:21:03] it's what is used on the server side
[19:21:04] IRC with JSON?
[19:21:06] Oh wait no lol
[19:21:20] there's no 'protocol' called websockets.io
[19:21:21] lul
[19:21:35] YuviPanda: ok but it's something I guess
[19:21:42] err? like a chair?
[19:21:45] which is also something
[19:21:47] yes :)
[19:21:51] :)
[19:22:01] Krenair: :D
[19:22:17] so we need some parser for this chair thing that uses websockets
[19:22:24] yes, it is called 'JSON'
[19:22:41] json?
[19:23:34] yes? the data is formatted in JSON
[19:23:49] it's JSON over websocket, rather than color codes over IRC
[19:24:24] uargh...
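The "it's JSON over websocket" point above is worth making concrete: once the payload is JSON, any language's stock parser does the work, with no custom colour-code stripping. The event shape below is a made-up example for illustration, not the actual rcstream schema.

```python
# Illustration of "JSON over websocket, rather than color codes over IRC":
# decoding a change event is one library call. The field names here
# ("type", "wiki", "title") are a hypothetical example, not the real
# rcstream schema.
import json

def parse_rc_event(payload):
    """Decode one JSON-encoded change event into a dict."""
    return json.loads(payload)
```

Contrast that with the IRC feed, where the consumer has to strip mIRC colour codes and split a positional line format by hand.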
[19:24:29] now get me JSON for Qt :P
[19:24:50] keep in mind huggle needs to work in Qt4 as ubuntu doesn't support Qt5
[19:24:55] http://qt-project.org/doc/qt-5/json.html
[19:25:04] oh.
[19:25:04] that is qt-5
[19:25:25] I think I will stick with the IRC parser for some time :P
[19:25:26] Can't you use normal JSON?
[19:25:39] http://qjson.sourceforge.net/ then?
[19:25:43] like, second google hit.
[19:25:43] scfc_de: you mean like another 3rd-party library?
[19:26:05] I think I will just create a relay on labs, that sounds like 1000000x simpler
[19:26:23] I once added 1 extra 3rd-party library to huggle
[19:26:33] And it'll scale nicely, for sure.
[19:26:34] then I received like 200 e-mails from people who stopped being able to compile it
[19:26:38] you can also write your own JSON parser
[19:26:42] it'll be quite a fun exercise
[19:26:51] I don't think so
[19:26:54] why?
[19:27:03] I had some fun writing that IRC parser and it works fine :P
[19:27:14] good luck, petan
[19:27:23] You can always link the library statically if you need to. Google Chrome does that to the extreme.
[19:27:37] yes, but people still want to be able to compile it
[19:27:46] distributing pre-built packages is no problem for me
[19:27:50] but people want source code
[19:27:50] add ALL the sources
[19:27:51] :-p
[19:27:58] What valhallasw`cloud said.
[19:27:59] and they want to be able to compile it
[19:28:07] while they don't know shit about programming
[19:28:08] that is the problem
[19:28:23] this socket.io thing looks horrific
[19:28:31] gifti: I love you :)
[19:28:37] ^^
[19:29:03] petan: Then don't make it your problem and live a happy life :-).
[19:29:10] what about simple sockets with json or so?
;)
[19:30:12] gifti, gl trying to open that from a web browser
[19:30:25] ah, that's the problem …
[19:30:35] people with browsers don't need RC feeds they can open
[19:30:37] wikipedia :P
[19:30:46] machines need RC feeds
[19:30:53] bots written in ASM and so on :P
[19:31:06] btw where is some websocket library for ASM hm?
[19:31:42] there is one for tcl, that's sufficient :p ;)
[19:31:53] where is some magic IRC bizarre format parser for ASM hm?
[19:31:55] tcl is good :P
[19:32:01] i use it :)
[19:32:08] yes for eggdrop
[19:32:16] for bots
[19:32:17] I used that before I invented wm-bot
[19:32:37] this irc format is super simple
[19:32:43] but limited
[19:33:10] and lately fucked up; doesn't make any sense when i think about it
[19:40:36] YuviPanda: it's not 'just' websockets, though, because Upgrade: Websockets on a GET /rc/ does not work
[19:40:53] valhallasw`cloud: I don't think rc is fully functional yet
[19:41:08] valhallasw`cloud: but it should be when it is fully done, since otherwise you can't call directly from a client
[19:41:39] YuviPanda: it does, but throws 502 while connecting 9/10 times. It works with socketio-python, but a C websockets library throws up over the connection
[19:41:53] and not because of the 502, but because of an unexpected response on /rc
[19:42:03] valhallasw`cloud: yeah, as I said, not in production yet :)
[19:42:05] (namely 404 instead of websockets crap, I guess?)
[19:42:12] so?
[19:42:17] valhallasw`cloud: so things are broken?
[19:42:21] it's not expected to work
[19:42:47] *sigh*. I'll pop in another bug report, then.
[19:42:58] yeah, although at this point it is well known
[19:43:18] *what* is well-known? the 502's? the weird responses as to what a WS client expects?
[19:43:19] valhallasw`cloud: I think there's a betalabs one that sends fake data but works.
[19:43:32] no, that one is down
[19:43:34] valhallasw`cloud: uh, 'it is broken, we do not expect it to work, if it works that is just a fluke?'
[19:43:35] there's stream.wikimedia.org
[19:43:36] ah
[19:43:44] that works if you're persistent enough
[19:44:23] right
[19:44:59] valhallasw`cloud: saw the bug, ty
[19:47:28] valhallasw`cloud: IIRC there's a pywikibot generator for it
[19:47:40] YuviPanda: I know. I wrote it.
[19:47:41] :-p
[19:47:50] :D I was thinking that just after I typed it...
[19:48:50] but basically, as far as I understand it at the moment, socket.io is some protocol *on top of* websockets (or xmlhttpreq, or some other things)
[19:50:52] valhallasw`cloud: right, in the sense that socket.io also offers *other* protocols, other than websockets
[19:50:55] (like long polling, etc)
[19:51:08] http://stackoverflow.com/a/10112562
[19:51:27] > probably more importantly it provides failovers to other protocols in the event that WebSockets are not supported on the browser or server.
[19:52:20] yes, but it also *brokers* that protocol
[19:52:30] so you cannot connect to a socket.io server with just a websockets client
[19:52:35] you should be able to
[19:54:10] oh, petan, https://github.com/billroy/socket.io-arduino-client :-p
[19:54:18] it even runs on arduinos ;-)
[19:55:06] shussh, IRC color codes are even better
[20:12:46] when running a job on -11, -13, LANG is not exported at all; on other nodes LANG=en_US.UTF-8. I can work around that, but I doubt it's expected
[20:13:06] scfc_de: ^
[20:13:16] more unpuppetized things? or something I fucked up when adding them?
[20:13:39] phe: thanks for reporting it!
[20:13:57] Huh.
[20:14:17] YuviPanda, I hate you :D spent a couple of hours understanding it ;(
[20:14:25] sorry :(
[20:14:58] phe: hmm, weird. I just ssh'd to tools-exec-11 and did an 'echo $LANG' and it gave me the proper string
[20:15:21] YuviPanda: ssh != sge!
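The LANG problem just reported (and the explicit-LANG work-around suggested later in the log) can be sketched in a few lines: force a UTF-8 locale before any filename handling so behaviour does not depend on whether the exec host, cron, or SGE exported LANG. The helper name and the fallback value `en_US.UTF-8` (the value from /etc/default/locale on the exec nodes) are illustrative assumptions.

```python
# Sketch of the work-around discussed in the log: if no UTF-8 locale is
# set (e.g. under cron or on an exec node that drops LANG), fall back to
# en_US.UTF-8 so utf-8 filename handling behaves the same everywhere.
# Must run before any locale-sensitive code (e.g. gtk) initialises.
import os

def ensure_utf8_lang(environ=None):
    """Set LANG to en_US.UTF-8 unless a UTF-8 locale is already set;
    return the resulting LANG value."""
    env = os.environ if environ is None else environ
    lang = env.get("LANG", "").lower()
    if "utf-8" not in lang and "utf8" not in lang:
        env["LANG"] = "en_US.UTF-8"
    return env["LANG"]
```

The `-v LANG=xxx` flag to SGE mentioned below is the cleaner fix (the scheduler exports it for you); this is just the in-application fallback.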
[20:15:26] the idiocy is mostly in applications that don't use filenames as-is but try to be clever with their encoding :(
[20:15:32] ssh passes your current LANG
[20:15:36] aah right
[20:15:46] in addition, phe is probably submitting from cron
[20:16:18] phe: I just ran (as SGE jobs) env on -06, -11, -13 and all showed LANG being set to the one in my .profile.
[20:16:27] hmm
[20:16:49] but does .profile get parsed if you run from cron?
[20:18:07] valhallasw`cloud: Don't remember. phe, how does your program get executed? As an SGE job?
[20:18:37] ok, when you log in you get LANG=en_US.UTF-8 but anyway you need it in your .profile too ?
[20:18:46] no, when you log in, you get whatever you have locally by default
[20:19:01] scfc_de, I tested it with jsub env
[20:19:53] (There's a bug with some discussion on Bugzilla, but in case that's a different problem ...)
[20:20:07] phe: So what exactly are you trying to do, and how does it fail?
[20:20:58] Okay: New data point: Without .profile setting LANG, I get LANG=en_US.UTF-8 on 01..10, but nothing on 11 and 13.
[20:21:24] scfc_de, I use a gtk app that needs a LANG=xxx.utf-8 to work properly
[20:23:52] On both -03 and -11, /etc/default/locale says LANG="en_US.UTF-8".
[20:25:09] phe: As a work-around, can't you set LANG explicitly in your application to en_US.UTF-8?
[20:26:21] yeps, I'll do that, but a different env depending on the host breaks all gtk-based applications when using utf8 filenames
[20:26:42] yeah, that's basically the same issue as the python 3 issue
[20:27:07] valhallasw`cloud: Was that the bug I was thinking about?
[20:27:12] I think so
[20:27:25] scfc_de, I'll try first to pass -v LANG=xxx to sge
[20:27:29] basically, Python 3 (and py2, too, but slightly differently) assumes everything is the locale encoding
[20:27:52] so a utf-8 filename will work fine when running directly, but will fail using cron, because then locale=C
[20:28:11] valhallasw`cloud: btw, we should have trusty hosts soon.
native py3.4 :)
[20:28:35] the return of the -arch flag on SGE ;-)
[20:29:33] valhallasw`cloud: ah, is that a toolserverism for solaris vs linux? :)
[20:29:37] or something like that?
[20:29:45] valhallasw`cloud: eventually all the precise hosts will go away, though. one by one...
[20:29:54] yeah
[20:29:56] after prod switches to trusty, I'd guess. Months away
[20:30:02] * YuviPanda had almost no toolserver experience
[20:32:36] the toolserver was this thing where your web application would magically stay alive as long as you logged in every 6 months or so :-p
[20:32:53] heh
[20:33:04] how did they deal with OOM'ing requests?
[20:33:15] i guess Zeus would just kill that process and start again...
[20:34:15] Dunno. I just know I had to do nothing :-p
[20:34:25] heh
[20:34:35] hopefully we'll fix that too... at some point in the very near future
[21:22:12] YuviPanda: overcommit
[21:22:34] most of the applications don't really run out of physical memory, it's just virtual memory that gets exhausted
[21:22:45] on tool labs overcommit is disabled so they are killed
[21:23:11] that's why toolserver worked fine, just as the old bots cluster did
[21:23:42] also, toolserver was using apaches instead of lighttpd, so if your php script ran out of memory, it didn't kill the whole server
[21:38:19] 'vm.overcommit_memory' => 2,
[21:38:19] 'vm.overcommit_ratio' => 95,
[21:38:21] petan: ^
[21:38:31] from exec_environ.pp
[21:38:49] that doesn't matter
[21:38:54] SGE config is what matters
[21:39:12] it's SGE which kills the lighttpd, not the kernel OOM killer
[21:39:47] also lighttpd is one job for all; on toolserver there was one process per session
[21:39:57] so in the worst case only 1 php run got killed
[21:40:04] instead of the whole server like on labs
[21:56:34] scfc_de: hmm, for people doing lots of dump operations, I'm wondering if a couple of grid nodes where the latest dumps for 'the most popular projects' are on /srv
[21:56:38] no historical data, only current
[21:56:44] should speed things up a
fair bit
[21:57:35] YuviPanda: maybe just have 'dump' nodes that cache recently/often-accessed files from the dumps nfs mount?
[21:57:46] like the hybrid hdd-ssd drives
[21:57:51] not sure how doable that is, though.
[21:57:56] hmm, hybrid nfs-vfs drives :D
[21:58:06] valhallasw`cloud: yeah, I'd guess that'll be more complicated
[21:58:07] as well
[21:58:12] GERMANY WON :DDDDDDD
[21:58:46] valhallasw`cloud: I suppose just compressed 'all pages, current versions' dumps for enwiki, dewiki, wikidata would fit comfortably in /srv
[21:58:58] probably, yes
[21:59:07] and /srv is currently underused
[21:59:45] in fact, we could just get it done on *all* nodes. Only problem would be the 'atomic copying' of the new dumps, since I don't think we'll have enough filespace to have two of them at a time
[22:00:15] scfc_de, did you watch the game?
[22:00:21] would copying to tmp per job achieve the same?
[22:00:36] Wikimedia Labs / deployment-prep (beta): Setup rcstream for beta.wmflabs.org wikis - https://bugzilla.wikimedia.org/67888#c2 (Peter Bena) so stream.wmflabs.org is the rc feed for the beta cluster?
[22:04:13] gifti: /tmp is nowhere near big enough, though.
[22:04:55] even for individual files?
[22:05:19] Wikimedia Labs / deployment-prep (beta): Setup rcstream for beta.wmflabs.org wikis - https://bugzilla.wikimedia.org/67888 (John Mark Vandenberg)
[22:05:47] gifti: it would be ok for individual files, but on the whole not much of an improvement from the current situation (read from NFS)
[22:05:56] since you'll have to keep copying files to and from NFS
[22:06:12] in fact, it'll be a net negative, since you are first reading from NFS (to copy), and then reading from the local system again
[22:06:32] ok, nvm ;)
[22:22:42] YuviPanda: Interesting idea. But I think that it would introduce a lot of complexity (which dump is where, are they up-to-date, are they complete, etc.) and load. So as a big fan of KISS ...
:-)
[22:23:00] scfc_de: indeed, we've a lot of other issues to fix before that :)
[22:23:04] but... someday :)
[22:23:13] I think atomic copies would be the biggest problem there
[22:25:09] YuviPanda: just temporarily replace with a symlink while copying? [/lazy option]
[22:25:13] rsync does that by default, but if in doubt copies the whole file, so you need free space = biggest file.
[22:25:19] right
[22:25:26] exactly, and I dunno if our 160 would be good enough
[22:25:29] *rsync does atomic, not symlink
[22:25:39] valhallasw`cloud: good point, but what about tools currently holding read handles?
[22:26:11] Still sounds more like a solution looking for a problem to me :-).
[22:28:32] Before I forget: sge_execd's environment on tools-exec-03 has "LANG=en_US.UTF-8", on -11 it doesn't. So -03 seems to set it somewhere in the start-up of sge_execd. I just rebooted -11 yesterday, so it can't be due to recent changes.
[22:29:03] (Environment = /proc/$PIDOFEXECD/environ)
[22:30:43] petan: stream.wmflabs.org is working immediately here, so if you want to test, that's a better choice
[22:30:49] (stream for beta)
[22:32:14] socket.io compared to websocket uses some magic with custom message types, but i have no fucking clue how it does it
[22:32:43] yeah, me neither
[22:32:49] at least it doesn't 502 :p
[22:33:29] gifti: https://stackoverflow.com/questions/6692908/formatting-messages-to-send-to-socket-io-node-js-server-from-python-client
[22:33:54] i wish i could intercept the communication somehow
[22:34:37] wireshark :-p
[22:34:46] but that SO link shows you what the protocol looks like
[22:35:29] interesting :o thx
[22:36:07] the initial handshake is a bit retarded, though...
[22:37:02] gifti: go to http://stream.wmflabs.org/socket.io/1/, copy the first integer, then connect to ws://stream.wmflabs.org/socket.io/1/websocket/
[22:37:31] just the difference between socket.io and websocket would be enough, this is socket.io/http if i'm right
[22:37:42] wtf
[22:38:18] my mind is blown
[22:39:07] ?
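The rsync-style atomic replacement mentioned above ("*rsync does atomic, not symlink") boils down to: write the new version to a temporary name in the same directory, then rename it over the old name. On POSIX the rename is atomic, and a reader that already holds the old file open keeps reading the old data, which answers the read-handles question. A minimal sketch (the helper name is ours):

```python
# Sketch of rsync-style atomic file replacement: write to a temp file in
# the same directory, fsync, then rename over the target. Readers never
# see a half-written file; ones holding the old file open keep its old
# contents. Needs free space for one extra copy (free space = biggest
# file, as noted above).
import os
import tempfile

def atomic_write(path, data):
    """Atomically replace `path` with `data` (bytes)."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname, prefix=".tmp-")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # data on disk before the rename
        os.rename(tmp, path)  # atomic on POSIX within one filesystem
    except BaseException:
        os.unlink(tmp)
        raise
```

The temp file must live on the same filesystem as the target, otherwise `os.rename` degrades to a copy and the atomicity is lost.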
[22:40:08] hmmm
[22:41:03] and then there is also https://github.com/Automattic/engine.io-protocol which is supposed to be the protocol
[22:41:05] except it's not
[22:41:29] so, yeah, socket.io sucks
[22:42:46] :o
[22:45:33] WHOO
[22:55:34] gifti, YuviPanda: https://bugzilla.wikimedia.org/show_bug.cgi?id=67955
[22:56:11] ugh
[22:56:17] that... should be fixed, I'd think
[22:56:35] sorry about making assumptions about things being sane before, should've checked first
[22:58:54] hm, stream.wmflabs.org from within labs, how do i access it again?
[23:02:19] YuviPanda: I think it's just what everybody had hoped :-)
[23:02:28] heh
[23:02:55] gifti: if it doesn't work, try the IP, 208.80.155.138
[23:03:07] gifti: I can ping stream.wmflabs.org from inside labs, it resolves to an internal IP
[23:03:14] gifti: I think that's deployment-stream (10.68.17.106).
[23:03:19] 10.68.16.52
[23:03:20] yeah
[23:03:32] scfc_de: huh? I was getting 64 bytes from deployment-eventlogging02.eqiad.wmflabs (10.68.16.52): icmp_req=1 ttl=64 time=1.55 ms
[23:03:42] sigh, thx :)
[23:04:07] YuviPanda: Dunno. Yes, the Beta people recently added internal DNS entries for the external names with the internal IPs. I wanted to do the same for tools.wmflabs.org.
[23:04:21] scfc_de: yeaaah, we should.
[23:04:54] YuviPanda: Commit aee21e6fb54beca23a53d7abc184800a06ae1147 in operations/puppet.
[23:04:55] scfc_de: wasn't it puppetized as natfix or something?
[23:05:36] woohoo
[23:06:32] YuviPanda: No, they manipulate the dnsmasq that runs on virt*.
[23:06:43] ah
[23:07:26] So the alias works in all of Labs.
[23:09:23] valhallasw`cloud: Re https://bugzilla.wikimedia.org/show_bug.cgi?id=67955, is this now a real bug or were just your expectations skewed?
[23:10:32] I think the story was that 'websocket works'
[23:11:08] "RCStream is a simple tool to broadcasting activity from MediaWiki wikis using the WebSocket protocol.
I"
[23:11:15] I think that was my assumption too
[23:11:34] so either it's a doc bug, or it's a bug where everyone assumed that because socket.io does WS, you can also connect with just WS
[23:18:52] is the 5:1:: part of 5:1::{'name':'newimg', 'args':'bla'} also json? what does it mean then?
[23:19:14] hmm, that looks suspiciously like an IPv6 address
[23:20:10] gifti: no, that's the socket.io protocol
[23:20:16] raaah
[23:20:39] which means something like 'json message':'no f--ing clue what':'namespace':'message'
[23:22:21] ok
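The `5:1::{...}` framing being puzzled over above is the socket.io 0.9 packet format: `type:id:endpoint:data`, where type 5 is an "event" packet whose data is a JSON object with "name" and "args" keys. A hedged sketch of a decoder, based on the socket.io 0.9 protocol docs and not verified against the rcstream server:

```python
# Sketch of a socket.io 0.9 packet decoder for frames like
# 5:1::{"name": "newimg", "args": ["bla"]} -- format is
# "type:id:endpoint:data" per the socket.io 0.9 protocol description.
import json

PACKET_TYPES = {
    "0": "disconnect", "1": "connect", "2": "heartbeat",
    "3": "message", "4": "json", "5": "event",
    "6": "ack", "7": "error", "8": "noop",
}

def parse_packet(raw):
    """Split a socket.io 0.9 packet into its four fields, decoding the
    JSON payload for "json" and "event" packets."""
    parts = raw.split(":", 3)
    while len(parts) < 4:  # heartbeats etc. may omit trailing fields
        parts.append("")
    ptype, pid, endpoint, data = parts
    packet = {
        "type": PACKET_TYPES.get(ptype, "unknown"),
        "id": pid,
        "endpoint": endpoint,
        "data": data,
    }
    if packet["type"] in ("json", "event") and data:
        packet["data"] = json.loads(data)
    return packet
```

So the guess in the log was close: the four colon-separated fields are packet type, message id, namespace (endpoint), and the JSON message itself.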