[00:24:26] YuviPanda: update, I don't know anything about gpg and this is driving me a little bit crazy
[00:24:31] haha!
[00:24:41] andrewbogott: isn't it getting late, and you should go sleep?
[00:24:53] we can possibly poke ryan or paravoid tomorrow again.
[00:24:55] Eh, not that late yet. But I will probably not finish tonight.
[00:25:30] hmm, okay
[00:26:07] I actually have some step-by-step instructions in front of me, but adapting them for puppet is obnoxious
[00:26:16] ah, of course.
[02:44:00] Hello.
[02:44:02] I'm here.
[02:44:09] Why can't I start a screen session as dbreps?
[02:44:22] local-dbreps@tools-login:~$ screen -S h
[02:44:22] Cannot open your terminal '/dev/pts/50' - please check.
[02:44:36] Same with just `screen`.
[02:44:37] Hmmm.
[02:44:48] * Elsie loads labs.wikimedia.org.
[02:46:20] https://wikitech.wikimedia.org/wiki/Screen
[02:46:21] I found that.
[02:46:44] !screenfix
[02:46:44] script /dev/null
[02:46:47] Elsie: ^
[02:47:19] What?
[02:47:28] type that in
[02:47:35] I don't know what that does.
[02:47:36] then screen will work
[02:47:43] it fixes screen :P
[02:48:07] A typescript of everything printed to my terminal.
[02:48:30] wtf
[02:48:35] Why do I have to do that?
[02:48:40] no clue.
[02:49:05] Okay.
[02:49:11] This is why I get nothing done, BTW.
[02:49:18] Because I go to do something and now I'm going to look up bugs about Labs.
[02:49:29] And that wasn't at all what I set out to do.
[02:54:52] https://bugzilla.wikimedia.org/show_bug.cgi?id=50248
[02:54:53] Deep sigh.
[02:54:55] [bz] (REOPENED - created by: Roan Kattouw, priority: Unprioritized - normal) [Bug 50248] screen doesn't work from within 'become' - https://bugzilla.wikimedia.org/show_bug.cgi?id=50248
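The bug above explains the symptom but not why the workaround works, so, as a sketch: `script /dev/null` starts a subshell attached to a fresh pseudo-terminal owned by the current user. The assumption here (not stated in the log) is that after `become`/sudo the old /dev/pts device still belongs to the original user, which is why screen cannot open it:

    local-dbreps@tools-login:~$ script /dev/null   # new pty owned by this user; the typescript is discarded
    local-dbreps@tools-login:~$ screen -S h        # screen now has a terminal it is allowed to open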
[02:57:36] Okay, back to what I was doing.
[03:00:49] I'm getting a Python error.
[03:00:54] About not having perms to the database.
[03:08:12] Okay, new question: why does Labs have two .my.cnf files?
[03:08:34] Actually, three.
[03:42:03] three?
[03:42:16] well there's a .my.cnf (for your tool), because your tool has its own mysql database
[03:42:17] Maybe two.
[03:42:26] then there is replica.my.cnf which is for access to the replicas
[03:43:10] It doesn't help that I'm using a shared account.
[03:43:14] So I have .my.cnf~.
[03:43:46] And someone has copied the contents of replica.my.cnf to .my.cnf and commented out the original lines.
[04:55:07] Elsie: I commented on the bz; I found how they do it and... it's not pretty.
[05:04:47] Coren|Away: Well, River is/was a grade-A sorceress.
[07:41:24] !ping
[07:41:25] !pong
[08:24:27] [bz] (NEW - created by: Dereckson, priority: Unprioritized - normal) [Bug 53793] Users with a former SVN account not migrated can't create an account - https://bugzilla.wikimedia.org/show_bug.cgi?id=53793
[10:58:58] !admin ERROR 1146 (42S02): Table 'commonswiki_p.text' doesn't exist
[13:18:28] Coren|Away, hihi
[13:30:05] [bz] (REOPENED - created by: Roan Kattouw, priority: Unprioritized - normal) [Bug 50248] screen doesn't work from within 'become' - https://bugzilla.wikimedia.org/show_bug.cgi?id=50248
[13:32:45] CP678: Heyo.
[13:33:18] Hold on... I'm about to start a C++ quiz.
[13:48:48] That was an easy quiz.
[13:48:58] C++ is actually really easy.
[13:49:57] Coren, can you temporarily add 2 extra slots to Cyberbot while you create the special exec node?
[13:50:15] That way the queue can go work itself away.
[13:50:24] Cyberpower678: I'm going to be done with your node in an hour or two.
[13:50:32] Oh cool.
[14:34:51] ... why is mysql unable to start on oauth-sql01? Logs show something about apparmor.
[14:51:29] Coren: poke
[14:51:40] * Coren pokes back! Poink!
[14:51:58] Do we have docs for connecting to the databases?
[14:52:22] https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help#Database_access
[14:55:00] Coren: how much trouble would it be to write up something like https://wiki.toolserver.org/view/Database_access#Program_access
[14:56:43] Not very, I suppose, and since that doc is applicable basically as-is and is CC-BY-SA, we could simply copypasta. /Help is beginning to be a little on the heavy side, though, so splitting it up is increasingly needed.
[15:06:03] When gpg says 'gpg: no valid OpenPGP data found' is it complaining about the data stream I'm signing, or complaining about my key itself?
[15:13:53] ok, WTF? Why is puppet replacing "/run/mysqld/mysqld.sock" with "/mysqld/mysqld.sock" in my.cnf?
[15:15:32] anomie, can you tell me more? e.g. what class is doing it?
[15:16:29] andrewbogott: Instance is oauth-sql01. oauth-sql02 isn't doing it, oddly enough. How do I tell which class?
[15:18:37] Coren: any reason I'm getting:
[15:18:39] _mysql_exceptions.OperationalError: (1044, "Access denied for user 'p50380g50402'@'%' to database 'enwiki.labsdb'")
[15:18:46] anomie, is the issue that you are hand-editing that file and then puppet is reverting it?
[15:19:44] andrewbogott: No. I hadn't touched the server for a long time, until this morning mysql was suddenly down. After a few false starts I hand-edited the file to change it back to /run/mysqld/mysqld.sock, then ran sudo puppetd -tv and saw puppet changed it back.
[15:20:01] Betacommand: Hm. Shouldn't be. Lemme look into it
[15:20:53] anomie… so you started that sentence with 'no' but then described exactly what I asked :) hand-editing, puppet is reverting...
[15:21:13] andrewbogott: So why did puppet change it in the first place that made things break this morning, hmm?
[15:21:34] what project
[15:21:35] ?
[15:21:39] andrewbogott: oauth
[15:22:14] Betacommand: As far as I can tell, the credentials in ~local-betacommand-dev/replica.my.cnf work fine. Can I see the source that gives you this error?
[15:22:17] andrewbogott: (I got "/run/mysqld/mysqld.sock" from comparing my.cnf on oauth-sql01 with oauth-sql02, FYI)
[15:24:03] Coren: http://pastebin.com/Xp8ad9US
[15:26:26] Betacommand: Oh; your DSN is pointing at the wrong place. You want: db="enwiki_p", host="enwiki.labsdb". The db names are the same as the toolserver's; only the host changes.
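Putting Coren's two answers together (the ~/replica.my.cnf credentials plus the db/host naming), a minimal sketch of program access from Python might look like this; the query is a placeholder, and read_default_file just lets MySQLdb read the user and password out of the credentials file:

    import os
    import MySQLdb  # the module behind the _mysql_exceptions error above

    conn = MySQLdb.connect(
        host='enwiki.labsdb',   # the host selects the wiki...
        db='enwiki_p',          # ...db names match the toolserver's
        read_default_file=os.path.expanduser('~/replica.my.cnf'),
    )
    cur = conn.cursor()
    cur.execute('SELECT page_title FROM page WHERE page_namespace = 0 LIMIT 1')
    print cur.fetchone()
    conn.close()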
[15:27:01] anomie: It looks like something interesting is happening! That patch is defined in the 'base' module, and somehow the mysql class is getting parsed before base is loaded.
[15:27:49] andrewbogott: Wow, weird. How do I fix that?
[15:28:26] puppet load order -> Hell on Earth.
[15:28:52] I'm not sure. It's sort of broken that we care about ordering at all
[15:30:09] Betacommand: Did that work okay?
[15:33:34] Coren, so, the right solution for this is to use something function-like rather than a global to determine the run_directory so that it's calculated on demand. Any idea if there's a way to do something like that?
[15:33:58] * andrewbogott sort of can't believe he's asking
[15:34:04] * anomie creates this "/mysqld" directory so he can get work done while waiting for others who know more about puppet to figure out how to fix it
[15:34:05] andrewbogott: Only way I can think of is to create that file from an erb template
[15:34:25] Ah, no, I need to fix the problem with this global /everywhere/.
[15:34:28] Not just in this one reference.
[15:34:35] Oh.
[15:34:54] It has to do with /var/run having changed to /run depending on ubuntu version
[15:35:37] Right now lots of our puppet repo depends on resolving that 'at the beginning before anything else happens' as though that's a thing
[15:35:37] Well, if you define it only in one place, the definition can be conditional on something from facter -- iirc the version is available there for comparison.
[15:36:23] And no, anything that depends on order in puppet is doomed to fail unless you do stages. You don't want to do stages.
[15:36:32] I agree.
[15:36:47] But, I don't understand what you mean about 'only in one place.' I think that's what we're doing now.
[15:36:52] I guess I can move it into a fact.
[15:37:07] that really /will/ happen first.
[15:37:12] * Coren nods.
[15:37:22] facts are idempotent.
[15:40:58] can… a fact refer to other facts? Because I need to know my distro
[15:41:09] * andrewbogott has never written a fact
[15:41:58] Well, you don't need to put the path /in a fact/, you only need to make the variable definition dependent on one. Where is it defined atm so I can look at it?
[15:42:11] ok, I am curious, is a "fact" some kind of filetype/infotype in Puppet?
[15:42:37] sumanah: closer to an 'environment variable' of sorts.
[15:42:40] sumanah: It's a key/value pair that comes from the host where the puppet config is to be applied.
[15:42:55] the puppetslave?
[15:43:10] sumanah: Facts are gathered from the target environment.
[15:43:16] sumanah: Yes, the slave.
[15:43:21] you sound like a general Coren
[15:43:34] "like a general"?
[15:43:36] "we gather facts from the target environment. Then: GROUND CAMPAIGN"
[15:43:45] Heh
[15:43:52] * sumanah quiets down. carry on
[15:43:56] Coren, modules/base/manifests/init.pp, first few lines at the top
[15:44:12] andrewbogott: spagewmf also had the same problem anomie is having, IIRC
[15:45:33] andrewbogott: Hmmm. Yeah, that's dreadfully order dependent. Damn.
[15:45:59] One of the numerous reasons why puppet globals are teh ev1lz.
[15:46:08] I think it should just be defined in a new fact. I just don't know enough ruby to do it.
[15:46:09] * Coren ponders.
[15:46:20] * andrewbogott wonders if he can learn ruby before his appointing in 15 minutes
[15:46:26] *appointment
[15:46:35] The problem is that it's a global; if that was a class parameter there wouldn't be an issue.
[15:46:45] yep.
[15:46:56] But yeah, making it a fact would neatly avoid the issue.
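For the record, the fact being contemplated here would only be a few lines of Ruby. A sketch, assuming a hypothetical file such as modules/base/lib/facter/run_directory.rb (and yes, facts can consult other facts via Facter.value, answering the question above, though a directory test is simpler than comparing distro versions):

    # Hypothetical custom fact: resolved on the client at fact-gathering
    # time, i.e. before any manifest is parsed, so no ordering problem.
    Facter.add(:run_directory) do
      setcode do
        # newer Ubuntu releases moved /var/run to /run
        File.directory?('/run') ? '/run' : '/var/run'
      end
    end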
[15:47:15] Ah, so it could be defined in the base class and then referred to as base::run_directory elsewhere?
[15:47:49] Coren: thanks
[15:50:03] Coren, can I really refer to class parameters from elsewhere? As ^ ?
[15:50:44] andrewbogott: Yes, but that makes it... order dependent.
[15:51:11] Won't referring to base::run_directory just force the loading of base beforehand?
[15:53:21] Coren, e.g. https://gerrit.wikimedia.org/r/#/c/82845/
[15:53:47] No idea if that syntax is correct, but, for the sake of argument...
[15:54:34] Oh man, run_directory is only used in that one place. So this is all moot
[15:55:03] Well, it's certainly easier to just fix it /there/ then. :-)
[15:57:23] OK, Coren, https://gerrit.wikimedia.org/r/#/c/82845/
[15:59:03] Coren, did you do the sockpuppet merge as well?
[15:59:15] andrewbogott: Just did.
[15:59:24] cool. OK, anomie, try it now?
[15:59:49] * anomie 'sudo puppetd -tv's
[16:00:51] andrewbogott: It changed the path back to /run/mysqld/mysqld.pid. So either it worked or the ordering randomly got it right this time. ;)
[16:01:05] cool.
[16:01:26] Thanks for the heads-up. This was indirectly my fault since I was tinkering with base yesterday.
[16:01:36] (and by 'indirectly' I mean 'entirely')
[16:02:51] YuviPanda, I haven't forgotten you, making creepingly slow progress on the repo-signing issue.
[16:19:51] andrewbogott: :) ty
[16:37:57] Coren, I'm looking at two different guides, one about how to automatically create gpg keys, one about how to sign repos with gpg. And both guides are fine, except the signing guide wants to know the 'name' of my key, and the docs about creating keys don't indicate that keys have names.
[16:38:00] any idea what that's about?
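A guess at the mismatch between the two guides: in gpg's unattended mode (gpg --batch --gen-key params), the key's user ID is assembled from the Name-Real/Name-Email parameters, and that user ID is usually what repo-signing tools mean by the key's "name". All values below are placeholders:

    %echo Generating a repo signing key
    Key-Type: RSA
    Key-Length: 2048
    Name-Real: Labs repo signing key
    Name-Email: root@labs.example.org
    Expire-Date: 0
    %commit

    # afterwards that "name" selects the key, e.g.:
    # gpg --armor --export 'Labs repo signing key' > repo.pub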
[17:27:18] Coren: we can delete and recreate it before it becomes 'real', and I won't have root then.
[17:28:38] YuviPanda: Better yet, since you are staff I'll just put you in projectadmins for now.
[17:28:45] wah
[17:28:46] okay
[18:14:46] [bz] (NEW - created by: Marc A. Pelletier, priority: Unprioritized - minor) [Bug 53816] Hostnames assigned to floating IP persist when deallocated - https://bugzilla.wikimedia.org/show_bug.cgi?id=53816
[18:15:53] YuviPanda|brb: Should be up and full of happies. It has a public IP and name 'tools-proxy'
[18:16:05] just in time, boohoo!
[18:16:22] as in, I was back just in time
[18:22:02] YuviPanda: Your class is missing a few prerequisites for proper integration into the tool labs infrastructure. Changeset incoming.
[18:22:17] oh?
[18:22:21] waiting, then :)
[18:23:02] Coren: also, you should perhaps mail out labs-l about adding me as projectadmin?
[18:23:42] YuviPanda: Unless you actually want to do adminy stuff in general, I was intending this to be a temporary adjustment while you worked on the proxy.
[18:24:09] good enough for me, although I could help people out with 'admin'y stuff if needed.
[18:25:47] Coren: tools::webproxy didn't have infrastructure?
[18:26:28] toollabs::webproxy includes it transitively
[18:26:51] aaah
[18:26:59] Coren: makes sense.
[18:28:12] ::infrastructure is mostly just access control atm, but it's also the class where 'generic' maintenance will go.
[18:28:18] right
[18:28:20] makes sense
[18:29:07] Coren, so my cyberbot bash history seems to have commands in it that I don't recognize.
[18:29:15] Why is this?
[18:29:49] Cyberpower678: Because I needed to test your queue. It's working now, btw. You can (and should) restart your jobs with '-q cyberbot' now.
[18:30:45] YuviPanda: I'm going to do an upgrade of packages on your box and reboot.
[18:30:57] Coren, ah. Where do I execute -q cyberbot?
[18:30:59] Coren: it'll need packages added to the local deb repo
[18:31:16] Cyberpower678: It's an argument understood by jsub, jstart and qsub.
[18:31:34] Coren: but I guess I can add it myself?
[18:32:16] YuviPanda: Indeed you can. Look at /data/project/.system/deb
[18:32:45] Coren, I learn better visually. Can you modify the following statement to make it access my new exec node?
[18:32:46] cd $HOME/compat && jsub -mem 1g -cwd -continuous -N RfPPBot -o $HOME/CyberbotI/RfPPBot.out -e $HOME/CyberbotI/RfPPBot.err python rfppbot.py
[18:33:06] cd $HOME/compat && jsub -q cyberbot -mem 1g -cwd -continuous -N RfPPBot -o $HOME/CyberbotI/RfPPBot.out -e $HOME/CyberbotI/RfPPBot.err python rfppbot.py
[18:33:21] 1 gig of ram?
[18:33:43] That's just my default size.
[18:33:46] KittyReaper: python can be ugly like that
[18:34:16] * Cyberpower678 should convert those to PHP.
[18:34:46] And I thought Java ate memory?
[18:34:53] If only I had the time.
[18:35:00] Cyberpower678: Oh, wait. I note that jsub will actually override that with -continuous. Lemme make a fix for this.
[18:37:04] So I am going to dedicate the files using less memory to my node, and the experimental ones to the regular ones.
[18:37:26] That would make almost all of my scripts.
[18:39:11] (PS1) coren: Make jsub obey -q even when -continuous specified [labs/toollabs] - https://gerrit.wikimedia.org/r/82881
[18:41:28] (CR) coren: [C: 2] "LGM" [labs/toollabs] - https://gerrit.wikimedia.org/r/82881 (owner: coren)
[18:41:35] (CR) coren: [V: 2] "LGM" [labs/toollabs] - https://gerrit.wikimedia.org/r/82881 (owner: coren)
[18:41:38] can someone give me an estimation of how much space the enwiki database tables take?
[18:42:27] lbenedix: You mean the size on-disk?
[18:42:27] diskspace
[18:42:30] yes
[18:42:41] including the text tables
[18:43:27] I would guess several dozen megs at least
[18:44:10] the german takes ~2.6 TB in our mysql-database
[18:44:21] lbenedix: Lemme check, actually.
[18:44:50] It's ~800G before text.
[18:44:58] O.o
[18:45:01] Damn
[18:45:26] Coren, what about with text?
[18:45:35] a couple of TB
[18:45:36] I'd guess
[18:45:40] I'm looking for that now.
[18:46:22] "a couple" is not good enough to buy harddisks ;)
[18:46:52] hey, it was a guess! :P
[18:47:22] and definitely not wrong
[18:50:09] It's actually more complicated than it first appears to calculate, because of the way we store text on external storage, with compression.
[18:50:38] lbenedix: I guess the first question I should ask is "are you trying to buy disks to replicate our setup or to have a 'normal' mediawiki install?"
[18:50:59] we want a local snapshot for queries and stuff
[18:52:54] can you give a rough estimate? 10TB? 20? 100?
[18:52:55] Coren, is the fix in place?
[18:53:04] 100?
[18:53:14] I doubt it's that much.
[18:54:12] Cyberpower678: Not yet, I'm building and deploying soon.
[18:54:19] Ok.
[18:58:48] lbenedix: Currently, compressed text on the external store takes ~38T
[18:59:10] thanks a lot
[18:59:18] any idea about the compression rate?
[19:00:24] lbenedix: It's fairly high, IIRC, but the way it's done you can't actually do SQL queries against the text; it's in blobs and needs some application logic to actually fetch a revision (i.e.: that doesn't give you a consistent enwiki.text table as you'd expect for a standalone mediawiki install)
[19:02:04] lbenedix: If you wanted a working text table, I think you need to count on at least 500T of space.
[19:02:49] I have serious doubts it would be 500T
[19:02:57] just look at the dumps
[19:03:02] the full dumps
[19:03:07] that's the compressed size
[19:03:25] uncompressed it'll be much larger
[19:03:40] Ryan_Lane: I'm estimating based on the sum of revision lengths, I'm probably missing something.
[19:03:55] if it's more than 50G I'd be surprised
[19:04:00] maybe 100
[19:04:08] ... what?
[19:04:15] for full revision enwiki for text?
[19:04:40] err. sorry, it's about that large for no revisions, right?
[19:04:44] let's look at the dump sizes
[19:04:55] Yeah, I was about to say! :-)
[19:05:11] 100?
[19:05:45] G without text
[19:05:51] current versions only is 10G
[19:06:09] we want the full history in our db
[19:06:23] * Coren is actually running the query.
[19:06:41] sigh. actually figuring this out from looking at the dumps is hard
[19:07:50] yeah. it's relatively small
[19:07:54] 50-100G
[19:08:38] but the compressed dumps are not a good measure of the diskspace needed for the mysql-database
[19:08:53] uncompressed is
[19:09:25] the german with complete revision history takes ~2.6TB
[19:12:36] Ryan_Lane, lbenedix: Lemme put it to you this way; I just checked how much actual diskspace is needed in mysql for 'George_W._Bush' with history.
[19:12:41] 3.9G
[19:12:46] For just that one page.
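Coren doesn't show how he measured; one plausible way to get a comparable (if rough) per-page figure from the standard MediaWiki schema is to sum the stored revision lengths for one title — it ignores index and blob overhead, but matches the "sum of revision lengths" estimate he mentions above:

    SELECT SUM(rev_len) AS bytes_of_text
    FROM revision
    JOIN page ON rev_page = page_id
    WHERE page_namespace = 0
      AND page_title = 'George_W._Bush';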
[19:13:10] I guess we do store full text per revision
[19:13:18] We do.
[19:13:40] It's actually easy to estimate. select sum(rev_len) from revision;
[19:13:52] That query would probably take a while though. :-)
[19:14:58] Ah, and 2.2G for Talk:George_W._Bush
[19:15:10] Admittedly, /that/ page is pretty much a worst case. :-)
[19:15:20] Coren: pfft. Try Evolution :P
[19:15:27] * Coren tries it.
[19:15:40] I guess we need to buy a lot of harddisks ;)
[19:16:53] YuviPanda: It's actually not so bad: 1.2G for Evolution and 3G for Talk:Evolution. Although none of those numbers count archives moved to subpages. :-)
[19:17:05] that probably explains it :P
[19:17:17] hmm, and 1.2G for evolution is probably because of heavy protection
[19:18:28] Jesus: 2.4G, Talk:Jesus: 2.5G
[19:20:05] lol
[19:20:24] Only Wikipedia could churn out five gigabytes of text on Jesus
[19:20:44] Even the Bible is what... several megabytes?
[19:21:22] KittyReaper: In all fairness, that's the sum of the whole text of all revisions; if you add 1 character to a 2mb page, you still use 2mb for the revision.
[19:21:38] ah, true
[19:21:47] I thought it stored deltas
[19:22:05] * lbenedix is impressed that the wikipedia scales so well
[19:22:14] KittyReaper: It'd be efficient space-wise, but it means that reading an old revision would need to "replay" the history backwards.
[19:22:26] lbenedix: it doesn't really. take away all the frontend caches and boom!
[19:22:26] over 9000 kudos to the ops team
[19:22:39] oh yes, *wikipedia* scales really well
[19:22:42] mediawiki, on the other hand
[19:22:57] lbenedix: Much of this is thanks to the external store -- if we used the text table like a vanilla mw install, it'd die faster than you can say "crap". :-)
[19:23:03] I guess I'm thinking in regards to networked games, where sending server frames and packet deltas makes sense to cut bandwidth.
[19:23:06] craaaaaap
[19:23:18] Coren, let me know when I can start transferring to the new node.
[19:23:41] Cyberpower678: Sorry, got distracted by the shiny. Lemme finish this up real quick for you. :-)
[19:36:00] Cyberpower678: Should work now.
[19:36:20] Ok. Thanks.
[19:38:12] I'm going to start dumping some tasks to the new node.
[19:40:39] KittyReaper: It makes sense for an online game, because time goes only in one direction.
[19:41:12] yeah
[19:41:37] And if packets are lost (UDP stream), it really doesn't matter for most things
[19:43:52] Coren, how do I access my node?
[19:44:01] It depends on whether you were smartly predictive or not... but then we're drifting topic away from job towards hobby. :-)
[19:44:15] hm. I wonder if that new compute node is properly working
[19:44:31] I just submitted two tasks to it.
[19:44:38] I can't see it in qstat.
[19:45:02] hm. no br103
[19:45:11] Cyberpower678: Do you have error messages in your log files?
[19:45:20] lemme check.
[19:45:22] eth1 and eth1.103 are up. that's a good sign
[19:45:27] * Ryan_Lane creates a vm
[19:47:29] oh give me a break. it scheduled it onto virt8?
[19:48:29] and now virt10?
[19:48:30] * Ryan_Lane sighs
[19:48:39] Coren, I've got a half million log files. Which one?
[19:48:42] :p
[19:48:46] * Ryan_Lane pokes nova-scheduler
[19:48:48] stupid scheduler
[19:48:57] Cyberpower678: Heh. What is the exact bot you tried to start, and how?
[19:49:11] cd $HOME/bots && jsub -q cyberbot -mem 8m -cwd -continuous -N status -o $HOME/CyberbotI/status.out -e $HOME/CyberbotI/status.err php status.php
[19:49:11] cd $HOME/bots && jsub -q cyberbot -mem 8m -cwd -continuous -N taskchecker -o $HOME/CyberbotI/taskchecker.out -e $HOME/CyberbotI/taskchecker.err php taskchecker.php
[19:49:28] Cyberpower678: Lemme see.
[19:49:58] RAWR. schedule my fucking instance onto the new node
[19:50:24] Coren, I gotta leave now. Memoserv me.
[19:50:32] kk
[19:51:19] ooohhhhh
[19:51:21] I know why
[19:51:39] /dev/md1 1.1T 34M 1.1T 1% /srv
[19:51:43] no. no. that's not right
[19:52:30] hooray for ajax delete!
[19:52:41] now I really want ajax add instance
[19:55:04] Ryan_Lane, lbenedix: select sum(rev_len) from revision returns 10137024812315; so that places a lower bound of space for text at around 10T, probably at least twice that given mysql handling of blobs.
[19:55:21] * Ryan_Lane nods
[19:55:31] ok nova, you're now pissing me off
[19:55:35] That only includes undeleted revisions, though.
[19:56:10] (Which, I suppose, is what lbenedix would need to count against)
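To fold in Coren's caveat: in the standard MediaWiki schema, deleted revisions live in the archive table, so a fuller lower bound (still ignoring MySQL index and blob overhead) would add the two sums. Whether the setup being discussed exposes archive this way is not established in the log:

    SELECT SUM(rev_len) FROM revision;  -- live revisions: the 10137024812315 above
    SELECT SUM(ar_len)  FROM archive;   -- deleted revisions, per the caveat above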
[20:20:06] -_-
[20:20:15] the scheduler we're using doesn't actually do anything anymore
[20:29:01] and fixed
[20:33:04] YuviPanda: hey! have you had any time to work on G2G? :-)
[20:33:14] sadly, no.
[20:33:26] will hopefully be able to do it over the weekend
[20:33:26] or so
[20:33:48] Don't feel obliged :-)
[20:40:26] valhallasw: i do feel guilty
[20:40:32] You shouldn't.
[20:40:34] valhallasw: my spare cycles have been spent on the labsproxy stuff
[20:46:35] YuviPanda: because you're awesome
[20:46:38] what's G2G?
[20:46:58] Ryan_Lane: Github to Gerrit
[20:47:07] ah
[20:51:42] YuviPanda: is there anything I could do to help you?
[21:01:03] valhallasw yes!
[21:01:14] valhallasw: it actually all works, it is just that I've not had time to test it and set it back up
[21:01:41] valhallasw: I rewrote it from a CGI script that spawns jobs on the grid to a proper queue system, with a CGI script that puts items into a redis queue and a continuous job that takes items out of the queue
[21:08:44] Ryan_Lane: seen http://www.theguardian.com/world/2013/sep/05/nsa-gchq-encryption-codes-security yet?
[21:08:54] yes
[21:09:22] it doesn't say what key strength they are using
[21:09:40] hmm, that's possible. maybe they've enough hardware to break the low strength ones
[21:10:10] > The three organisations removed some specific facts
[21:10:16] the "specific facts" were probably the key strength
[21:27:29] YuviPanda: ah, OK.
[21:28:42] valhallasw: it is mostly set up too, just needs testing and poking
[21:30:06] YuviPanda: is it all in the gerrit2redis project?
[21:30:16] valhallasw: so it is split across the gerrit2redis project
[21:30:20] and the suchabot project
[21:30:23] I see
[21:30:23] suchaserver
[21:30:31] gerrit2redis takes the notifications from github
[21:30:33] puts them in redis
[21:30:39] suchabot reads them
[21:30:41] and syncs them
[21:31:24] hrm.
[21:32:37] ok, yes, this makes sense
[21:43:03] is there a stall on labs right now or anything?
[21:44:07] sometime over the last few hours my snitch instances stopped responding
[21:44:28] i can quit them and add rules but they don't display stuff from irc.wikimedia
[21:46:16] time for bed
[21:46:51] night valhallasw
[21:48:03] thanks sumanah :-)
[22:49:00] Coren: weren't you going to memoserv me?
[23:56:55] Coren, ping
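The log ends here, but the gerrit2redis/suchabot split YuviPanda describes above (a CGI producer pushing GitHub notifications into redis, a continuous grid job consuming them) is a standard queue pattern. A minimal sketch with redis-py; the queue key and function names are invented for illustration, not taken from either project:

    import json
    import redis

    QUEUE = 'github2gerrit:events'  # hypothetical key name
    r = redis.StrictRedis()

    def enqueue(event):
        """Producer side (the CGI script): push one notification."""
        r.lpush(QUEUE, json.dumps(event))

    def consume():
        """Consumer side (the continuous job): block until work arrives."""
        while True:
            _key, payload = r.brpop(QUEUE)
            sync_to_gerrit(json.loads(payload))

    def sync_to_gerrit(event):
        pass  # stands in for suchabot's actual sync logic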