[00:07:12] i am surprised how it's even controversial [00:07:27] if the task is "replace favicon" and it's not replaced, how could that be resolved [00:11:45] mutante: well yeah. I see what you mean [00:12:12] but it happens elsewhere. that's all I'm saying [00:15:22] that doesn't surprise me :) [00:15:32] nothing surprises me anymore [00:17:22] :) [00:19:45] mutante: Negative24 fwiw, MW things are marked with resolved when merged. [00:22:36] YuviPanda: well, bugs are resolved when someone verifies the fix...most people just verify the fix on master :) [00:22:46] ah fair enough. [00:22:54] the person verifying it might be the person who CR’d it [00:22:58] or the person who wrote the patch [00:23:10] so most workflow is ‘someone (including the person who wrote the fix) verified it’ I guess [00:23:57] "verified" doesn't even exist anymore [00:24:01] it used to in Bugzilla [00:24:16] but i already gave up on the difference between "resolved" and "verified" [00:24:27] resolved because it's planned is one step further [00:26:10] For MediaWiki+extensions/skins there are obvious reasons why we resolve when it goes into master [00:26:28] For site-specific stuff, we should probably only resolve upon deployment to that site [00:41:27] what is the obvious reason? [00:48:55] mutante, ... [00:49:02] mutante, there are a LOT of sites running mediawiki [00:49:28] If we only marked stuff as resolved when everyone had updated, we wouldn't have resolved anything since MW 1.0 [00:50:09] ok, i assumed we are talking about extensions and skins we are using in production [00:52:09] Even if you completely ignore every single non-Wikimedia user of MW, you still have different deployment dates for different wikis [00:56:46] fair, i was thinking more classic bug like "XY broken on bla wiki" which can be verified. if you just work on an extension and add new features it's different. i guess in BZ lingo these would not have been bugs but enhancements [01:02:46] sorry. 
I was afk [01:03:36] mutante: I'll follow that in the future but it seems that this wasn't as clear as I thought it was since it started this discussion [01:04:20] it never is:) yes [01:04:28] thanks [02:41:34] YuviPanda? [02:41:41] hmm [02:42:55] did I miss a nick or quit? [02:57:06] [intuition] Krinkle force-pushed json from 8aa8d0b to 9b0058b: https://github.com/Krinkle/intuition/commits/json [02:57:07] intuition/json 9b0058b Timo Tijhof: [WIP] Add JSON support... [03:01:13] Negative24: (2 hours ago) YuviPanda is now known as AngryPanda [03:01:21] AngryPanda is now known as AngryPanda|visa [03:01:30] ? [03:01:53] That's all we know. [03:02:29] I glaze over nick messages [03:02:43] and my logs don't keep them [03:10:31] Krinkle: just looking at your dotfiles project. how do I create a new tab? or am I really misunderstanding the whole thing? [03:11:03] Tabs are part of the terminal, not the shell. [03:11:08] Depends on your terminal software [03:11:14] I use Mac Terminal.app [03:11:20] which has built-in tabs [03:11:42] top menu Shell -> New Tab [03:11:46] (or cmd+T) [03:12:06] oh [03:12:37] and is this a project primarily for your use only? it has a dir in hosts named KrinkleMac [03:12:42] My dotfiles provide the colored prompt and data (e.g. full directory path, hostname, user name, git branch, git status) [03:12:58] Yeah, it's how I synchronise my dotfiles across different servers [03:13:12] But it's open sourced for anyone to take inspiration from [03:13:27] Or if you dare execute unknown code on your shell, you're free to use the read-only version [03:13:32] well it is really nice [03:13:39] and yes I am that daring [03:13:41] which will run everything except KrinkleMac specific stuff [03:13:55] and how would I go about uninstalling... 
[03:14:18] It keeps everything modular [03:14:33] you just edit your .bashrc and .bash_profile and remove the krinkle dotfiles index.bash inclusion [03:14:39] 1 line [03:14:45] that is really cool [03:14:56] I'm going to fork and mess around a bit :) [03:15:01] in fact, not even two files, just one [03:15:09] Only need to edit bashrc [03:15:14] source ~/.krinkle.dotfiles/index.bash [03:15:30] In case you had stuff in there before, the init script will have moved it to .dotfiles.backup [03:16:28] I'm afraid I don't customize my shell that much. It didn't have anything. But I will keep it installed until I use my own ;) [03:17:11] that really is quite nifty [03:20:31] Negative24: Yeah, I couldn't live without it anymore [03:20:52] My precious 'l' command [03:21:08] oh [03:21:22] yes I can see that :) [03:21:25] https://github.com/Krinkle/dotfiles/blob/master/modules/aliases.sh [03:21:39] or .. and ... [03:21:47] to quickly navigate up directories [03:22:24] Can we set vim to accept Wq and Q! as wq and q!? ;-) [03:22:33] I make those errors a lot [03:22:38] ? [03:22:45] Don't do that :P [03:22:55] do what? [03:23:02] make those errors [03:23:14] Having said that, this is in KrinkleMac modules: https://github.com/Krinkle/dotfiles/blob/master/hosts/KrinkleMac/modules/aliases.sh#L11-L16 [03:23:26] it just fails saying that they aren't editor commands [03:23:42] computers are never wrong [03:24:45] When I use tool labs or other labs projects, I run the basic version of my dotfiles there too [03:24:50] so without KrinkleMac [03:25:02] that is to say, I do use the base version myself a lot as well [03:28:01] yeah. I will adopt a similar strategy. I am now distracted from everything that I was doing :) [03:28:39] Great. My mission accomplished :P [03:28:47] Anyhow, I'm glad it's useful to you. [03:28:52] :) [03:30:36] Negative24: hey [03:32:18] hey [03:32:46] why are you angry? :-) [03:33:06] Visas suck, mostly. [03:33:14] let’s not go there [03:33:15] ‘sup? 
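For readers following along, the modular dotfiles setup described above boils down to a single `source` line plus small per-module files. A minimal sketch (file layout and alias choices here are illustrative, not the actual contents of Krinkle's repo):

```shell
# One line in ~/.bashrc pulls in the whole dotfiles tree:
#   source ~/.krinkle.dotfiles/index.bash
# Removing that one line is the entire "uninstall" step.
#
# An aliases module along the lines of modules/aliases.sh might contain:
shopt -s expand_aliases   # aliases are off by default in non-interactive bash

alias l='ls -lhA'         # the 'l' command: long listing incl. dotfiles
alias ..='cd ..'          # up one directory
alias ...='cd ../..'      # up two directories

alias l                   # show what 'l' expands to
```

Keeping each concern in its own module file is what makes removal a one-line edit rather than untangling a monolithic `.bashrc`.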
[03:34:49] I have to leave soon fyi but I wanted to ask if there were any differences between using a project proxy and associating an ip. I'm trying to debug phab-02 (again) [03:37:03] Associating an identical hostname with an ip instead of the proxy causes it to not serve webpages [03:38:01] Negative24: oh, that sounds like a bug maybe. [03:38:11] they’re served by different subsystems maybe there wasn’t a proper handoff? [03:38:54] I once ran a whois and it was pointing to a different ip but I reassociated and it then pointed to the right one [03:39:53] Negative24: what was the hostname? [03:40:15] Negative24: oh wait. [03:40:20] Negative24: you’ve to open up security groups... [03:40:36] Negative24: you might have port 80 open to 10.0.0.0/8 need to open it to 0.0.0.0/0 maybe? [03:41:27] nope [03:41:43] Negative24: ‘nope’ for? [03:41:57] 80 is for 0.0.0.0/0 [03:42:36] only 22 and 5556 is labs only [03:43:16] YuviPanda: well I'll have to follow up with you tomorrow [03:43:27] Negative24: cool, file a bug I might not be around... [03:43:40] sounds good [05:38:12] Change on wikitech.wikimedia.org a page Nova Resource:Tools/Access Request/Tpalo was created, changed by Tpalo link https://wikitech.wikimedia.org/wiki/Nova+Resource%3aTools%2fAccess+Request%2fTpalo edit summary: Created page with "{{Tools Access Request |Justification=For a Stanford project (in the class CS341) in which we try to extract relationships and information from wikipedia corpus thanks to Deep..." [08:26:06] [intuition] Nikerabbit pushed 1 new commit to json-i18n2: https://github.com/Krinkle/intuition/commit/d01d24b6a219c8e03bd15add89748d3c96f98163 [08:26:07] intuition/json-i18n2 d01d24b Niklas Laxström: Convert intuition core [08:59:29] Negative24 / mutante: I thought that was what the 'Stalled' status was for... [09:00:14] but that status sometimes confuses people even more [11:07:02] Hi! On tools.taxonbot a crontab is running. 
Anyhow, crontab -l returns: no crontab for tools.taxonbot // What can I do to stop this obviously running crontab? I already tried crontab -r [11:11:24] crontab -r returns: no crontab for tools.taxonbot [11:11:50] but what else is starting the jobs [12:08:13] doctaxon: on which host / how do you know there's a crontab running? [12:08:52] there are starting jobs each 10 minutes [12:09:12] as put into crontab [12:09:43] my host is tools.taxonbot [12:10:15] again, how do you know there's a crontab running? what do you see? do you get e-mails? [12:11:10] qstat returns the job [12:11:45] after deleting by qdel it is starting next 10 minutes [12:11:55] to see on c-uncat.out [12:15:15] that's.... odd [12:16:57] yes it is [12:17:18] doctaxon: qstat -j 65163 shows sge_o_host: tools-trusty, so it was submitted from there; [12:17:44] okay [12:17:55] so let's see if there's a crontab there... [12:18:22] tools-trusty, no crontab [12:18:44] let's have a look at tools-login [12:18:48] well, it says no crontab, but that's actually the tools-submit crontab. There might be a local one by accident [12:19:19] yep, that's it. doctaxon, try running /usr/bin/crontab -l on tools-submit [12:19:38] tools-login? [12:19:46] uh, tools-trusty [12:20:00] note the path to crontab, you need that to get the local one [12:21:11] tools-login says no crontab for tools.taxonbot [12:21:36] tools-trusty says no crontab for tools.taxonbot, too [12:21:55] doctaxon: use /usr/bin/crontab [12:21:57] okay, wait [12:22:01] I'll dump it to a file and disable it [12:22:22] yes please disable the crontab [12:23:19] doctaxon: cat ~/old_local_crontab_on_tools_trusty for the old crontab [12:23:53] i don't need it any more [12:24:40] but it will be very helpful, if you disable the crontab [12:24:49] I disabled it. [12:26:22] thank you, i'll check it [12:35:10] YuviPanda: you around? 
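The confusion in this exchange comes from Tool Labs shadowing the real `crontab` binary with a wrapper earlier in `$PATH` (the wrapper forwards crontabs to tools-submit, so a stray host-local crontab stays invisible unless you call `/usr/bin/crontab` by full path). A minimal sketch of that shadowing mechanism, using a made-up stand-in command rather than the real wrapper:

```shell
# Create a directory with a fake wrapper and put it first in PATH,
# mirroring how the Tool Labs crontab wrapper shadows /usr/bin/crontab.
wrapper_dir=$(mktemp -d)
cat > "$wrapper_dir/faketab" <<'EOF'
#!/bin/sh
echo "wrapper: forwarding to the shared host"
EOF
chmod +x "$wrapper_dir/faketab"
PATH="$wrapper_dir:$PATH"

command -v faketab   # resolves to the wrapper in $wrapper_dir
faketab -l           # prints the wrapper's output, not a local crontab
# By analogy: a bare `crontab -l` hits the wrapper, while
# `/usr/bin/crontab -l` bypasses $PATH and queries the genuinely local crontab.
```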
[13:09:16] hello [13:10:54] hi [13:11:59] i have a project named WikiSpy that is soon going to be production ready and allows the user to query Wikipedia anonymous edits by a given rDNS (the database is available under creative commons). i'm looking for some hosting - i'd need at least 200GiB disk space for a PostgreSQL database - about 25 GiB if you'd give me a compressed partition. is there a way Labs could host it? [15:17:30] Change on wikitech.wikimedia.org a page Nova Resource:Tools/Access Request/Tpalo was modified, changed by Tim Landscheidt link https://wikitech.wikimedia.org/w/index.php?diff=154447 edit summary: [15:43:10] d33tah: technically that should be doable (although I don't get why you would need that much space?). There might be privacy issues, though, but as it's contribution data, it's already public [16:23:14] valhallasw`cloud: basically, it's about privacy issues. my project was created in order to allow people to for example look for propaganda coming from government IPs [16:23:24] valhallasw`cloud: the disk space is for rDNS table [16:26:52] ....propaganda and privacy are very different things [16:26:53] but ok [16:27:17] I don't think disk space would be a huge issue, although I'm still unsure why a rDNS db would need 200GB [16:31:50] it's rDNS for the whole internet [16:31:59] uncompressed, it's 82 GiB [16:32:19] +25 GiB of ip -> rdns index [16:32:32] and +64 GiB of index for rdns itself [16:33:03] the rest i reserved for entries about anonymous edits from major wikipedias (eventually maybe all of them) [16:33:30] it's one billion records. 
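As a sanity check on the sizes quoted in this exchange (treating GiB as 2^30 bytes; the per-record figure is a back-of-the-envelope estimate, not from the project itself):

```shell
# 82 GiB of uncompressed rDNS data spread over ~1 billion records:
bytes_per_record=$(awk 'BEGIN { printf "%d", 82 * 2^30 / 1e9 }')
echo "~${bytes_per_record} bytes/record"   # plausible for a hostname plus row overhead

# 82 GiB data + 25 GiB ip->rdns index + 64 GiB rdns index:
total_gib=$((82 + 25 + 64))
echo "${total_gib} GiB"   # 171 GiB, leaving ~29 GiB of the 200 GiB ask for edit data
```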
[16:33:37] (for the rDNS) [16:33:51] 18:26:52 valhallasw`cloud $ ....propaganda and privacy are very different things [16:34:07] yeah, but by looking for propaganda done from government ips, one could argue that you violate their privacy :p [16:35:07] d33tah: oh, right, it's 4GB for the IP addresses alone, and then you need the actual hostname [16:36:33] and some other storage-related data that PostgreSQL likes to keep [16:43:14] d33tah: ok, let's see. I don't think there's a standard instance where you'd have enough local (i.e. non-nfs) drive space [16:43:50] d33tah: so I think it's best if you create a new labs project request (make a subtask at https://phabricator.wikimedia.org/T76375 ) [16:43:55] and then discuss what you need there [16:47:13] valhallasw`cloud: will do! i'll just finish my experiments with English wikipedia. [16:47:18] are our home folders protected? [16:47:30] i only tested with Polish so far. [16:48:12] Superyetkin: 'protected'? [16:48:25] Superyetkin: they are world-readable, but not world-writable by default [16:51:05] umm, world-readable is not good [16:51:13] I cannot post my bot's password there? [16:52:08] you can probably chmod o-r the file with it [16:52:29] it's good to have two user accounts to test this kind of thing [16:52:49] is there anyone here hosting bot files? [16:53:27] most people have files with private information [16:53:47] and indeed the typical solution is to make sure those files are not world-readable [16:53:50] how to protect it? [16:54:29] I am a bot operator on trwiki and want to use tool labs for hosting (I have a JustHost account for now) [16:54:30] chmod o-r [16:54:45] Superyetkin: you should probably take a look at https://en.wikibooks.org/wiki/A_Quick_Introduction_to_Unix/Permissions that explains that chmod command [16:54:58] o = 'other' (i.e. the rest of the world), '-r' = 'remove read rights' [17:06:38] chmod 400 enough? 
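The permission modes being discussed, shown concretely (a temp file stands in for the bot password/config file):

```shell
# 'o-r' removes read access for "other" (everyone outside owner and group).
secret=$(mktemp)
echo "bot-password-placeholder" > "$secret"

chmod 600 "$secret"      # owner read/write, nothing for group or world
stat -c '%a' "$secret"   # prints: 600

chmod o-r "$secret"      # no-op here: world read is already gone
chmod 640 "$secret"      # owner read/write, group read -- lets tool co-maintainers read it
stat -c '%a' "$secret"   # prints: 640
```

`stat -c '%a'` (GNU coreutils) prints the octal mode, which is a quick way to verify what a symbolic `chmod` actually did.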
[17:08:40] Superyetkin: yes, but that means you can't write to that file anymore, chmod 600 is better and only gives read & write access to you [17:09:20] can I read the file from another PHP script? [17:10:50] yes [17:14:08] shall I give "group" read access to files? [17:16:10] Superyetkin: for files in a tool home directory, yes [17:16:41] but others in the group can view it [17:16:50] I mean, other tool developers [17:17:05] yes, which is probably what you want; otherwise you always need to 'become ' to view the contents [17:17:16] only members of that specific tool can view it [17:17:46] can you view "config.txt" under my public_html folder? [17:18:13] which tool? [17:18:19] superyetkin [17:18:55] Superyetkin: no; those permissions are set correctly [17:19:20] it is 640 now [17:19:33] -rw-r----- 1 superyetkin tools.superyetkin 10 Apr 18 16:04 config.txt [17:19:33] ^ user 'superyetkin' can read/write, anyone in the 'tools.superyetkin' group can read [17:20:09] yes, how to join "tools.superyetkin" group? [17:20:37] Superyetkin: https://wikitech.wikimedia.org/w/index.php?title=Special:NovaServiceGroup&action=managemembers&projectname=tools&servicegroupname=tools.superyetkin [17:20:52] everyone you add there has access to it [17:21:25] Superyetkin: you created the group, so you're already in it [17:21:39] yes, I want to make sure my files are secure [17:21:43] thanks for the info [17:22:00] I would like to migrate my bot from JustHost to ToolLabs [17:23:07] is my roadmap correct so far? [17:25:29] I think so, yes [17:25:39] ok, thanks [17:42:10] valhallasw`cloud: I think the stalled status is good. There is no way that we can make it absolutely clear (though we have added it as a blocking task). It really shouldn't matter how it is represented [17:59:48] Krinkle: what is the red number next to the prompt $? 
It starts with 1 and then disappears and comes back, time to time [18:01:38] Negative24: The error code of the last command you ran [18:01:54] oh [18:01:54] Negative24: In unix systems, programs have an exit code. 0=success. [18:01:56] 1=error [18:02:03] There are 100s of other error codes as well [18:02:11] usually this is quite hidden but I prefer seeing it. [18:02:15] I know. I just don't know how useful it is for my use case :) [18:02:27] Feel free to take it out of the ps1 function [18:02:40] I've scripted puppet. I know how fun error codes can be [18:02:43] :P [18:02:58] Cool [18:03:01] I assume you know how to take it out [18:03:17] https://github.com/Krinkle/dotfiles/blob/master/modules/functions.sh#L41 [18:03:27] Remove the '$(_dotfiles-ps1-exit_code $ec)' part [18:03:29] yea but know I think its cool :) [18:03:34] *now [18:03:35] Haha [18:03:37] Nice [18:03:41] I didn't come up with it though [18:03:47] I improved it but stole it from elsewhere [18:03:58] https://github.com/Krinkle/dotfiles#thanks [18:04:05] I don't know which , but probably one of them [19:35:51] !log tools.wikibugs wikibugs has broken down again. Trying to figure out why. [19:36:01] Logged the message, Master [19:36:44] !log tools.wikibugs last message in redis2irc.log was 2015-04-18 02:10:26,157 [19:36:46] Logged the message, Master [19:38:20] !log tools.wikibugs that is, the last message to irc. The bot is still running and doing ping/pongs. However, wikibugs.log is completely silent after that time. wb2-phab.err does have errors, but without timestamps, so it's basically useless. Restarting wb2-phab to see if that helps [19:38:22] Logged the message, Master [19:40:58] !log now wb2-phab is functioning again, but wb2-irc is not reporting?! Restarting that as well [19:40:59] now is not a valid project. [19:41:13] !log tools.wikibugs now wb2-phab is functioning again, but wb2-irc is not reporting?! 
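The red number under discussion is `$?`, the exit status of the previous command, surfaced in the prompt. A stripped-down sketch of how such a PS1 hook works (function and variable names here are illustrative, not the actual dotfiles code):

```shell
# Print the previous command's exit status, but only when it is non-zero.
_ps1_exit_code() {
    if [ "$1" -ne 0 ]; then
        printf '[%s]' "$1"
    fi
}

# Interactive wiring would look roughly like:
#   PROMPT_COMMAND='_last_ec=$?'          # capture $? before anything else runs
#   PS1='$(_ps1_exit_code $_last_ec)\$ '  # red coloring omitted for brevity

ec=0; sh -c 'exit 3' || ec=$?   # simulate a failing command
_ps1_exit_code "$ec"            # prints: [3]
echo
_ps1_exit_code 0                # prints nothing for success
```

Capturing `$?` in `PROMPT_COMMAND` matters: anything else the prompt code runs first would overwrite the status you want to display.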
Restarting that as well [19:41:15] Logged the message, Master [19:41:49] valhallasw`cloud: I think tools-redis is not working [19:42:12] at least it doesn't react to my commands via telnet [19:42:32] :/ [19:43:15] !log tools.wikibugs tools-redis doesn't respond to commands, which could explain why wb2-phab was hanging. But why is tools-redis completely broken? [19:43:17] Logged the message, Master [19:44:17] ssh also doesn't respond [19:44:18] YuviPanda? [19:45:30] YuviPanda: tools-redis does listen on some ports, but doesn't actually respond with anything. SSH is also borked. [19:49:49] valhallasw`cloud: I just saw the bug [19:49:58] OOM it seems [19:50:19] aargh, I thought setting maxmem to 12G would’ve prevented that from happening [19:50:37] valhallasw`cloud: well, let’s restart it and evict everything, I guess [19:50:38] https://tools.wmflabs.org/nagf/?project=tools#h_tools-redis wtf?? [19:50:43] YuviPanda: no, wait [19:50:45] look at nagf [19:50:56] * YuviPanda looks [19:51:01] that's not OOM [19:51:13] unless it's a very sudden spike? [19:51:23] I see no data [19:51:27] and I can’t ssh in [19:51:27] right [19:51:29] let me try root key [19:51:48] y u no warn us, shinken-wm [19:51:48] so it's fine and then suddenly dies [19:51:59] because ping still works [19:52:05] there’s an ssh test too [19:52:08] oh [19:52:35] well, it’s unresponsive [19:52:39] valhallasw`cloud: wanna reboot it? :) [19:52:42] sure [19:52:56] !log tools tools-redis unresponsive (T96485); rebooting [19:52:58] I wonder if we should experiment with redis cluster [19:52:59] Logged the message, Master [19:56:45] valhallasw`cloud: is it back up? [19:57:04] * valhallasw`cloud waits for wikibugs to tell us [19:57:05] mm. [19:57:16] I can connect again, yes [19:57:24] not ssh because GRRR SSH KEYS [20:05:10] YuviPanda: except still broken, I think. [20:05:35] it can't store the dump because of too little memory [20:05:35] ugh [20:05:46] valhallasw`cloud: I see it’s working for me? 
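The failure mode here is the classic Redis fork-for-BGSAVE problem: the child shares pages copy-on-write, but with the kernel's default heuristic overcommit the fork can still be refused when the dataset is large. A hedged sketch of the standard mitigations (values are illustrative, not the actual tools-redis configuration):

```shell
# Let fork() succeed even when free memory looks insufficient;
# copy-on-write means the BGSAVE child rarely needs the full amount.
sysctl vm.overcommit_memory=1
echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf   # persist across reboots

# redis.conf: cap usage below host RAM so the OOM killer stays out of it
#   maxmemory 12gb
#   maxmemory-policy allkeys-lru    # evict LRU keys instead of failing writes
```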
[20:05:54] 127.0.0.1:6379> get 'hi' [20:05:58] "hi" [20:06:03] YuviPanda: yeah, it seems to. the log file has lots of complaints of not being able to run BGSAVE though [20:06:17] aargh that’s lack of memory overcommit maybe [20:06:34] the server memory is completely full [20:08:12] oh it's the slave resync that's failing [20:09:04] valhallasw`cloud: fixed it [20:09:14] ? [20:09:16] !log tools sysctl vm.overcommit_memory=1 on tools-redis to allow it to bgsave again [20:09:18] Logged the message, Master [20:09:53] yeah, but that's not the underlying issue [20:10:17] overcommit? yeah it is [20:10:21] well [20:10:22] kindof [20:10:38] (https://gerrit.wikimedia.org/r/#/c/194095/) [20:12:01] my DNS randomly died :| [20:12:01] valhallasw`cloud: I think the underlying issue is that we have too much use and too little memory [20:12:24] YuviPanda: still, how does that explain the entire server locking up? it should just kill redis-server and be done with it [20:12:34] ah no, it doesn't [20:12:54] "[3275840.851752] Out of memory: Kill process 27882 (redis-server) score 932 or sacrifice child" [20:13:28] it tries to kill it, it seems, but it immediately restarts? don't get it. [20:19:52] YuviPanda: eeeehhhh NFS dying? [20:20:00] hmm? [20:20:10] wikibugs2' git repo is corrupt, as well as files in the dir [20:20:25] channels.yaml is all \x00s [20:21:24] git fscking now [20:21:37] ... [20:24:14] !log tools.wikibugs file system corruption?? channels.yaml is all \x00s and .git/objects/* is corrupt. 
Cleared .git/objects, git fetch --all'd and git checkout channels.yaml seems to bring wikibugs back to life [20:24:17] Logged the message, Master [20:26:31] waaaat [20:31:02] Tool-Labs: NFS file corruption - https://phabricator.wikimedia.org/T96488#1218173 (valhallasw) NEW [21:05:21] Tool-Labs: tools-redis broken - https://phabricator.wikimedia.org/T96485#1218194 (valhallasw) There seem to be *two* running redis processes every now and then, both using ~12GB of RAM, even though the host doesn't have that. This might be during a BGDUMP, as that fork()s the process... ps auxf confirms this... [21:13:24] Tool-Labs: tools-redis broken - https://phabricator.wikimedia.org/T96485#1218195 (valhallasw) @Yuvipanda just reminded me that linux uses copy-on-write for fork(), so that should be OK. The free memory does drop from 3.1G to ~2.4G, but that leaves more than enough breathing room -- so I'm not sure anymore th... [21:14:56] YuviPanda: I made that memory .csv file but I can't find it anymore >_< [21:15:06] wheeeere did it gooooooooo [21:15:47] oh, find to the rescue [21:28:26] YuviPanda: ok, parsing the .csv is pretty fast, ~1 min [21:28:50] that’s not so bad :) [21:28:55] I hope people had prefixes [21:30:38] it seems to be a job queue that's the worst offender [21:30:44] not everyone has [21:30:45] :( [21:31:40] ah [21:31:42] which job queue? [21:31:45] gerrit-to-redis?! [21:32:00] rq:job: 11243203718 [21:32:12] that's our 12GB [21:32:50] Tool-Labs: Audit redis usage on toollabs - https://phabricator.wikimedia.org/T91979#1218208 (valhallasw) I now have a memory.csv in the following format: ``` database,type,key,size_in_bytes,encoding,num_elements,len_largest_element 0,hash,"rq:job:b3b0810c-42c8-448b-9663-9a419388fbdf",2202,hashtable,7,597 ```... 
[21:33:08] yeah, it's just a python job queue [21:33:18] ah but that means I can just get an entry and read it [21:40:17] Tool-Labs: Audit redis usage on toollabs - https://phabricator.wikimedia.org/T91979#1218211 (valhallasw) Based on the contents of the jobs, I'm guessing it's https://github.com/notconfusing/cocytus that's pushing all these elements in the queue. @notconfusing, do you have any idea why the jobs are not cleared... [22:33:41] Tool-Labs: Audit redis usage on toollabs - https://phabricator.wikimedia.org/T91979#1218247 (valhallasw) Some more results (prefix, number of keys with that prefix, total size of all keys with that prefix): ``` AnomieBOT 504 213.9KB bub_ 167.5K 59.5MB pir^2_iw:http: 129 265.2KB rq: 5.2M 11.5... [22:40:08] YuviPanda: I'm done with redis :P [22:40:26] valhallasw`cloud: saw that :D so that’s basically one tool fucking everyone up :) [22:40:42] YuviPanda: I'm wondering whether we can build trivial accounting [22:40:58] YuviPanda: even if it's just on the amount of data over the network per user [22:41:01] hmm [22:41:07] problem is figuring out where data is coming from [22:41:13] identd! [22:41:41] valhallasw`cloud: we could enable ‘CLIENTS’? [22:42:56] valhallasw`cloud: I wonder if we can also enforce prefixes in some way [22:43:32] YuviPanda: not sure if CLIENTS allows us to also measure traffic [22:50:55] maybe ntop [22:51:17] hmm [22:51:22] iftop + identd? 
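The per-prefix accounting that produced those numbers can be reproduced over the memory.csv that redis-rdb-tools emits (`database,type,key,size_in_bytes,...`). A small awk sketch, grouping on the text before the first `:` (the sample rows below are made up, following the format quoted above):

```shell
csv=$(mktemp)
cat > "$csv" <<'EOF'
database,type,key,size_in_bytes,encoding,num_elements,len_largest_element
0,hash,"rq:job:b3b0810c-42c8-448b-9663-9a419388fbdf",2202,hashtable,7,597
0,hash,"rq:job:d41f7a22-1111-4a2b-9c3d-0e5f6a7b8c9d",1800,hashtable,7,597
0,string,"bub_example-key",512,raw,1,512
EOF

# For each prefix: number of keys and total bytes.
summary=$(awk -F',' 'NR > 1 {
    key = $3
    gsub(/"/, "", key)          # strip the CSV quoting around the key
    split(key, parts, ":")
    prefix = parts[1]           # "rq" for "rq:job:...", whole key if no colon
    count[prefix]++
    bytes[prefix] += $4
}
END {
    for (p in count) printf "%s %d %d\n", p, count[p], bytes[p]
}' "$csv" | sort)

echo "$summary"
# bub_example-key 1 512
# rq 2 4002
```

This naive field split works here because the sample keys contain no commas; a real run over arbitrary key names would need a proper CSV parser.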
[22:51:28] the only other thing I can think of is having a proxy-like system where the traffic goes through a different program before going to redis [22:51:41] nutcracker [22:51:43] we use that in prod [22:51:54] that program then selects a database (so we don't need a prefix anymore) and dumps the user name somewhere in that db [22:52:46] but that would be horrible to debug when it breaks :P [22:53:37] yup :) [22:53:47] I think the ‘real’ solution is user education and redis-cluster [22:53:58] I'm not sure how redis-cluster would solve this [22:54:25] user education would work, but for that we need easier monitoring, and per-user monitoring if possible [22:55:26] redis cluster won’t actually solve this, but just make the problem less apparent [22:55:29] yeah, need easier monitoring [22:55:45] YuviPanda: note that the actual usage outside of that massive queue is just a gigabyte or so [22:55:49] yup [22:55:56] so hmm you’re right [22:56:01] redis cluster isn’t really going to hlep us here [22:56:25] hmm, how about tracking network usage on a per-user basis somehow? [22:56:35] whatever iftop uses + identd [22:56:42] yeah, I think that's something that should be possible [22:56:50] and it's something that might come in handy in other situations as well [22:59:04] yeah [22:59:13] valhallasw`cloud: I wonder if we can do that on the client side [22:59:19] have it run on all the exec / web hosts [22:59:31] but that can't differentiate if the keys get deleted very quickly or that they agglomerate and never get deleted [23:00:11] yeah but it’s an approximation, I guess... [23:00:16] in some form [23:00:23] or maybe not. [23:00:48] maybe some voluntary agreement between devs that each one only uses one specific redis database and then do some analysis over each db? 
(Almost everyone seems to use 0 currently) [23:01:01] well, I think ‘only’ way to get super accurate measurements is to proxy I guess [23:01:06] sitic: that's what we basically do with prefixes now [23:01:18] yeah, the ‘voluntary’ agreements aren’t going so well.. [23:02:01] anyway, I'm going to sleep [23:02:02] true, but also the redis prefixes are a problem when some middle library like that rq job queue comes in [23:02:27] sitic: and that might present a problem with choosing a db as well ;-) [23:02:43] yep :-/ [23:03:14] Step 1 might just be to puppetize the script that you wrote, valhallasw`cloud [23:03:20] make it super easy to see at least prefixes and what not [23:03:39] and what would puppet do? :P [23:03:45] it takes an hour or so to run in total [23:03:46] oh nothing [23:03:55] I meant puppetize as in just put it in /usr/local/bin [23:03:57] so anyone can run it [23:04:01] and see what’s happening [23:04:08] except you can't because you need root access on tools-redis [23:04:19] because you need access to the actual dump files [23:05:06] so I'm not sure if puppet is helpful :-p but I should at least document what I did [23:05:29] anyway, bed time [23:05:31] well [23:05:36] anyone as in ‘tools admins' [23:05:47] valhallasw`cloud: basically, to not have it just be in your homedir :) [23:05:50] and in a repo somewhere :) [23:07:07] YuviPanda: *nod* [23:07:15] most of it is just redis-rdb-tools [23:07:19] right [23:07:27] valhallasw`cloud: thank you for investigating this all :) [23:07:33] yw [23:07:42] let me dump the script on that bug [23:08:45] +1 [23:09:06] * sitic thanks valhallasw`cloud as well and is happy that redis stops evicting my keys soon :-) [23:10:43] :D [23:13:38] Tool-Labs: Audit redis usage on toollabs - https://phabricator.wikimedia.org/T91979#1218249 (valhallasw) [23:14:41] bed! 
[23:14:42] :w [23:30:42] Tool-Labs: Epilog scripts for web services fail with exit code 255 - https://phabricator.wikimedia.org/T96491#1218251 (scfc) NEW [23:31:28] Tool-Labs: Epilog scripts for web services fail with exit codes 1 and 255 - https://phabricator.wikimedia.org/T96491#1218259 (scfc) [23:38:24] Tool-Labs: Epilog scripts for web services fail with exit codes 1 and 255 - https://phabricator.wikimedia.org/T96491#1218263 (yuvipanda) Also curious is the few of them which belong to users rather than tools.