[00:32:21] (PS1) JGonera: Update MobileWebEditing schema revision [analytics/limn-mobile-data] - https://gerrit.wikimedia.org/r/141874
[04:27:57] (CR) Milimetric: Show recurrent reports regardless of age (1 comment) [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/141715 (https://bugzilla.wikimedia.org/67030) (owner: Milimetric)
[06:50:24] springle: about the analytics slaves -> dbstore migration ... we've now migrated all the slaves that we said we'd migrate.
[06:50:27] But given recent mails from the analytics list, is it ok if I append s5 to the schedule,
[06:50:31] and migrate it next week?
[06:53:51] qchris: yep sounds good
[06:54:05] Ok. Then I'll keep the bug open and do so. Thanks.
[10:42:47] qchris, the timestamps on the EL db are GMT, right? (UTC)
[10:43:21] qchris: Timestamps around wmf are typically utc.
[10:43:30] Which column are you referring to specifically?
[10:58:49] the timestamp column
[11:08:05] qchris: one question if you have time
[11:09:09] Sure. Shoot.
[11:10:35] (Which timestamp column? :-) For example log.NavigationTiming_8365252.timestamp is UTC on dbstore1002)
[11:13:21] i was looking at the raw logs
[11:13:34] and for today
[11:13:48] the all-events.log-20140625.gz log has
[11:14:09] nuria@vanadium:~$ more all-events.log-20140625 | grep MediaViewer | wc -l
[11:14:09] 1024858
[11:14:28] 1 million events
[11:14:30] right?
[11:15:25] but
[11:15:25] select count(*) from MediaViewer_8935662 where timestamp like '20140625%';
[11:15:36] returns 40,000 results....
[11:15:43] must be missing something here...
[11:15:59] The log file does not contain all events for a given day.
[11:16:17] (log rotation does not happen at 00:00)
[11:16:47] If you want all of a day's events from the log files, you have to combine two log files.
[11:16:55] ya, i get that
[11:17:02] but look at the order of magnitude
[11:17:25] let me give you numbers for june 24 or 23rd
[11:18:37] if i look at the raw logs
[11:18:41] Well ... not sure about the input file, but your grep does not limit to the schema.
[11:19:06] i was simplifying
[11:19:27] but it's the schema number
[11:20:04] #!/usr/bin/python
[11:20:04] import itertools
[11:20:04] import json
[11:20:04] import sys
[11:20:04] filename = sys.argv[1]  # e.g. all-events.log-20140623
[11:20:04] f = open(filename)
[11:20:04] # peek at the first 1000 events and print each schema name
[11:20:05] for line in itertools.islice(f, 1000):
[11:20:05]     event = json.loads(line)
[11:20:06]     print event['schema']
[11:22:16] nuria: Sorry, I do not get your point.
[11:22:36] argh sorry
[11:23:34] So you compared the log lines for a schema from the log files and those from the db and they do not add up to the same number?
[11:25:09] they do not for mediaviewer
[11:25:24] And you also accounted for the log rotation.
[11:25:52] Did you also check against the raw events?
[11:26:19] (It might be that most of them are not valid, and it seems you focused on the all-events file)
[11:27:01] log rotation, yes
[11:27:33] but if you look at the graphite data
[11:27:43] the "raw" and "valid" counts match
[11:28:38] but what graphite shows does not match the queries you posted above, so I am trying to rule things out.
[11:28:41] Anyways ...
[11:29:02] If they do not align, and you checked the raw vs. valid ... then there seems to be an issue $somewhere.
[11:29:19] If it is just limited to some schema, you could
[11:29:26] check a few events for that schema.
[11:29:43] If the drop rate is high, try to track a few events from the raw logs to the database.
[11:29:55] And see what the dropped ones have in common.
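(For reference, the cross-check discussed above boils down to something like the sketch below: combine the two rotated files that cover one UTC day and count only events for one schema revision, instead of a bare grep. The 'schema' field is visible in the paste above; the 'revision' and 'timestamp' field names and the timestamp format are assumptions about the event capsule and may need adapting to the real logs.)

    #!/usr/bin/env python
    # Sketch: count events for one schema revision on one UTC day. Because
    # log rotation is not at 00:00, the two rotated files covering that day
    # are passed together on the command line, e.g.
    #   python count_events.py all-events.log-20140625.gz all-events.log-20140626.gz
    import gzip
    import json
    import sys

    DAY = '20140625'                      # UTC day prefix, MediaWiki-style
    SCHEMA, REVISION = 'MediaViewer', 8935662

    count = 0
    for filename in sys.argv[1:]:
        opener = gzip.open if filename.endswith('.gz') else open
        f = opener(filename)
        for line in f:
            try:
                event = json.loads(line)
            except ValueError:
                continue                  # skip truncated or garbled lines
            # 'schema' appears in the paste above; 'revision' and the
            # timestamp format are assumptions about the capsule.
            if (event.get('schema') == SCHEMA
                    and event.get('revision') == REVISION
                    and str(event.get('timestamp', '')).startswith(DAY)):
                count += 1
        f.close()
    print(count)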
[11:30:14] You could also check if the extension that produces the events had some changes recently.
[11:30:28] Look it up on SAL to see if it shows anything related.
[11:35:29] The drop rate sure seems huge, i will take a little time to see if i can find out what is going on
[11:39:49] Are you sure that your filtering for schema revision before comparing numbers was effective ... MediaViewer_8572637 (old schema) has ~350K lines like '20140625%'
[11:40:05] That would explain where a good part of the 1 million events went.
[12:51:27] nuria: I had a short look at the logs, and to me it looks like it's not MediaViewer but UniversalLanguageSelector-tofu (revision 7629564) spiking between ~2:36 and ~3:13.
[12:52:05] There, this schema jumped from ~4K requests per 100 seconds to somewhere close to 30K-40K per 100 seconds.
[12:53:29] That nicely matches the graphs graphite is showing.
[12:54:07] What do you think?
[13:15:58] nuria: ^
[13:35:11] nuria: ^
[13:39:52] Back, sorry
[13:40:18] ^qchris
[13:40:31] Here :-)
[13:40:55] So what do you think about the above MediaViewer vs. UniversalLanguageSelector-tofu?
[13:41:26] i was also looking at that one as a top contender, let me ssh in and get the reports i made
[13:44:10] looking in the logs "overall" (i.e. not the specific period)
[13:45:28] the number of events for the tofu schema is in the same ballpark
[13:46:41] At the time the spike was happening, I think the top event is actually MobileWebClickTracking
[13:47:35] I think I was mistaken about MediaViewer though, they switched schemas rather than logging with a higher sample rate
[13:48:16] Are you really looking at the time of the spike?
[13:48:22] Like that very half hour?
[13:48:44] let me recheck the timestamp
[13:48:47] Or at the whole file?
[13:49:13] Even in the time between 0:00 and 6:00, UniversalLanguageSelector-tofu is more than twice
[13:49:17] MobileWebClickTracking for me.
[13:50:49] wait, maybe i got the times wrong
[13:51:32] I looked around: 1403664300 (Wed, 25 Jun 2014 02:45:00 GMT)
[13:52:11] hi, can I bother you guys with a puppet question?
[13:53:15] every time we change the wikimetrics puppet module, we have to make two changes to update the operations and vagrant pointers to the new commit. Can we just make those point to master and have it always be latest? That would save so much paperwork...
[13:53:18] ottomata: ^
[13:53:37] but the module points to a sha
[13:53:41] not a branch
[13:53:44] right?
[13:53:52] i thought it pointed to a "git thing"
[13:53:56] shows what I know though
[13:54:06] qchris would know best
[13:54:22] But i believe it is a sha
[13:54:33] milimetric: I only know how to use it to point to a sha.
[13:54:52] :( #gitfail??
[13:54:56] Gerrit can be abused a bit to do what you suggested.
[13:55:08] But I would not do that around puppet.
[13:55:33] Puppet gets deployed to machines directly, so inconsistencies might cause problems.
[13:55:46] yeah, it's a sha
[13:55:58] nuria: And around that timestamp you see more MobileWebClickTracking than UniversalLanguageSelector-tofu?
[13:55:59] those extra commits you make update the parent repo's sha
[13:56:00] they say
[13:56:06] make the wikimetrics submodule now point at this sha
[13:56:13] qchris: yes
[13:56:34] nuria@vanadium:~$ tail events.14036643.sorted
[13:56:34] 127144 PageCreation
[13:56:34] 171342 PersonalBar
[13:56:34] 358813 SignupExpAccountCreationImpression
[13:56:34] 400575 TrackedPageContentSaveComplete
[13:56:35] nuria: I guess that's something to discuss after standup then.
[13:56:35] 408094 NavigationTiming
[13:56:35] 531692 MultimediaViewerNetworkPerformance
[13:56:36] 1057462 PageContentSaveComplete
[13:56:36] 2608531 MediaViewer
[13:56:37] 2981271 UniversalLanguageSelector-tofu
[13:56:37] 3630350 MobileWebClickTracking
[13:56:37] right, I know ottomata, that's what my beef is with :)
[13:57:33] qchris: yes, maybe my interval is too small
[13:57:47] afaik, no way to make that more streamlined,
[13:57:56] in ops/puppet, when we 'pull' on puppet masters
[13:57:59] we use a wrapper script
[13:58:07] that automatically runs git submodule update
[13:58:20] but, it still requires a commit to actually change the sha pointer
[13:58:33] ottomata, milimetric, the git docs say "You can’t record a submodule at master or some other symbolic reference."
[13:58:48] milimetric: you could make yourself a little local wrapper command that does all the commits for you automatically :)
[13:59:18] doesn't this say that this got added to git in 1.8.2?
[13:59:18] http://stackoverflow.com/a/18799234/180664
[14:00:44] * marktraceur wanders in
[14:00:53] nuria: Still looking like our events are the problem?
[14:02:48] no, marktraceur, my bad
[14:02:53] Oh, K
[14:02:56] I can send the followup
[14:03:45] that is cool!
[14:03:48] the git submodule branch thing
[14:03:59] the servers only have 1.7.9.5 though :/
[14:11:17] (CR) Nuria: [C: 2] Show recurrent reports regardless of age [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/141715 (https://bugzilla.wikimedia.org/67030) (owner: Milimetric)
[14:11:24] (Merged) jenkins-bot: Show recurrent reports regardless of age [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/141715 (https://bugzilla.wikimedia.org/67030) (owner: Milimetric)
[14:48:16] okay, cluster still borked :(
[14:48:22] I'm pretty sure it's a scaling problem though
[14:48:30] it objects when I add more columns/more rows to the LIMIT
[14:50:09] I thought you were all driving across the country or something
[14:50:17] Ironholds, I noticed one very interesting thing yesterday
[14:50:39] ottomata, ooh?
[14:50:41] and yeah, I am!
[14:50:46] 15 hour drives with no internet!
[14:50:47] might be related, not sure
[14:50:54] so I was hoping that some of my queries might complete on the way ;p
[14:50:58] * Ironholds ducks
[14:51:01] but, apparently hadoop uses a local directory on one of the data disks for each job
[14:51:03] to keep track of some stuff
[14:51:04] (haha)
[14:51:05] ahaaaa
[14:51:19] it'd be interesting to see if there's some kind of disk space problem
[14:51:20] and, i've occasionally been getting icinga alerts about disks filling up
[14:51:23] but usually, only a disk at a time
[14:51:34] i looked into one yesterday where a disk was getting really close to full
[14:51:39] like, less than 200 M remaining
[14:51:43] * Ironholds winces
[14:51:46] and...?
[14:51:53] some job you were running was taking up lots of space on that disk
[14:51:54] not sure why
[14:51:59] I can think of one
[14:52:00] it isn't data
[14:52:03] job metadata?
[14:52:05] not sure
[14:52:17] hmm. okay, if it's not the data I can't explain it.
[14:52:19] logs?
[14:52:30] think it's logs
[14:52:31] job logs
[14:52:38] could you do me a favour and get me the timestamps of when it ran out of memory?
[14:52:51] well, afaik it didn't run out of space
[14:52:52] it'd be interesting to see if I could link those up to the Reduce tasks resetting
[14:52:53] just got close
[14:52:55] ahh
[14:52:59] also, just curious
[14:53:07] what data are you running across
[14:53:07] what dates?
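(The "little local wrapper" ottomata suggests above could look roughly like the sketch below: bump a submodule checkout to the tip of origin/master, then commit the new sha in the parent repo. This avoids the git 1.8.2 branch-tracking feature from the stackoverflow answer, so it would also work on the servers' 1.7.9.5. The submodule path is a hypothetical example, not the real layout.)

    #!/usr/bin/env python
    # Sketch of a submodule-bump wrapper: move the submodule checkout to
    # the tip of origin/master, then commit the updated sha pointer in the
    # parent repo. Run it from the parent repo's root.
    import subprocess
    import sys

    def run(args, cwd=None):
        print('+ ' + ' '.join(args))
        subprocess.check_call(args, cwd=cwd)

    path = sys.argv[1]  # e.g. modules/wikimetrics (hypothetical path)

    run(['git', 'fetch', 'origin'], cwd=path)
    run(['git', 'checkout', 'origin/master'], cwd=path)  # detached HEAD at tip
    run(['git', 'add', path])                            # stages the new sha
    run(['git', 'commit', '-m',
         'Bump %s submodule to latest master' % path])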
[14:53:32] 'all of June'
[14:53:36] I'm going to try limiting the query
[14:53:40] well, more
[14:54:34] (remind me, do we do 1 or 01 for days?)
[14:54:52] 01, but it shouldn't matter i think, pretty sure they are integers
[14:55:04] so, hm, i really do wonder what happens if data disappears from under your query
[14:55:45] okay, query restarted with a more limited range
[14:55:48] let's see what it does!
[15:22:21] (CR) Nuria: [C: 2] "Puf, we really need to do some aggregation for you so you do not need to "UNION ad infinitum"" [analytics/limn-mobile-data] - https://gerrit.wikimedia.org/r/141874 (owner: JGonera)
[15:23:14] yeah, no kidding
[15:39:50] * YuviPanda waves at milimetric
[15:39:51] around?
[15:39:57] hi YuviPanda
[15:40:00] of course, what's up
[15:40:19] milimetric: :D so the app is going out today, and the EL tables are going to fill up a bit
[15:40:27] which means next week or the one after that I'll start working on dashboards
[15:40:31] hopefully without touching limn :)
[15:41:02] milimetric: thinking of a similar process to what we have now, which is 1. python scripts generate raw data on stat1003 and 2. something else runs on labs, parses data into charts, potentially as vega.js
[15:41:03] things
[15:41:04] cool
[15:41:09] milimetric: no objections to that, right?
[15:41:24] I'll make everything as re-usable as possible and make sure to keep you apprised.
[15:41:29] our dashboard work is not done yet, so definitely no objections from my end
[15:41:38] and i'll make sure to follow your work closely
[15:41:39] milimetric: cool! any idea which quarter it is planned for?
[15:41:44] feel free to add me as reviewer if you want
[15:41:57] yeah, it's our very next priority
[15:42:12] we should start on the new site this sprint or next sprint
[15:42:14] milimetric: sure! Thinking of a python backend and something more modern on the frontend (reactJS/angular).
[15:42:20] we're tasking it tomorrow
[15:42:34] well, if you want to talk it over, we're tasking it tomorrow
[15:42:36] I can invite you
[15:42:53] it's actually not crazy to think we could work on it together
[15:42:59] milimetric: indeed, I'd love that.
[15:43:08] milimetric: and we've wanted to do more collaboration anyway
[15:43:28] milimetric: do invite me!
[15:43:44] YuviPanda: ok, but we were going to cover some hadoop stuff which you probably don't care about
[15:43:57] so I'll invite you and ping you on IRC when we're talking dashboards
[15:44:02] milimetric: 'tis ok, I'm really good at ignoring things I don't understand in meetings :D
[15:44:04] milimetric: that's ok too!
[15:44:12] milimetric: do you guys have any preliminary thoughts?
[15:44:19] milimetric: and any ideas who would be working on it?
[15:44:29] ok, invitation sent YuviPanda
[15:44:34] milimetric: <3 ty
[15:44:41] I don't really have too much bias, but nuria and I will most likely work on it
[15:44:51] I think I'm leaning towards node on the backend
[15:44:56] milimetric: cool!
[15:44:58] just because things like socket.io are easier with node
[15:45:05] milimetric: hmm, why socket.io?
[15:45:11] like, in case we do real time
[15:45:14] are we going to be doing realtime updates?
[15:45:15] can I come too?
[15:45:30] halfak: :) sure...
[15:45:31] I don't need to say anything. I just want to see what's going on with dashboards.
[15:45:55] halfak: invited
[15:46:00] ty!
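(On the 1-vs-01 aside above: if the table's partitions really are integer year/month/day columns, which is an assumption about this particular table, zero-padding is purely cosmetic. A minimal illustration:)

    # Build a partition-pruning predicate for a query over "all of June",
    # assuming integer year/month/day partition columns (an assumption).
    def partition_predicate(year, month, first_day, last_day):
        # Integers compare by value, so day = 1 and day = 01 are the same.
        return ('year = %d AND month = %d AND day BETWEEN %d AND %d'
                % (year, month, first_day, last_day))

    print(partition_predicate(2014, 6, 1, 30))
    # -> year = 2014 AND month = 6 AND day BETWEEN 1 AND 30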
[15:46:09] milimetric: I'd prefer python, but node's ok too :)
[15:46:11] ok, so it sounds to me like this should fork into a normal tasking followed by dashboard design
[15:46:14] i'll talk to the team
[15:46:14] milimetric: I'm more interested in what we use clientside.
[15:46:30] milimetric: would really like to use something more than just plain jQuery.
[15:46:42] ok, cool, I'll do some more research tonight YuviPanda, and hopefully there's some magic sauce out there I don't know about :)
[15:46:49] definitely not jQuery
[15:46:51] :)
[15:47:07] milimetric: :D check out reactJS, which I quite like. Angular I'm biased against.
[15:47:14] you know I love knockout, but I'll abstain from it unless absolutely necessary
[15:47:32] milimetric: right. reactJS is more barebones than knockout, IIRC.
[15:47:40] i don't like angular either, too bulky
[15:47:46] milimetric: I agree.
[15:47:47] knockout's like 13K or something, pretty small
[15:47:59] (reading up on reactjs now)
[15:48:26] milimetric: I'm ok with knockout/backbone too, fwiw. Just not plain jQuery.
[15:48:56] jquery is like... barbaric
[15:49:10] milimetric: yeah.
[15:49:23] milimetric: do you know that the only JS I've been writing for the last 6 months is pure clientside JS? no jQuery even :)
[15:49:53] hey ... so many people at tasking tomorrow ... does that mean I can skip it? *scnr*
[15:50:08] qchris: :P
[15:50:46] qchris: I was going to send out an email but it's cool if you're here
[15:51:03] I am always reading along :-)
[15:51:07] so nuria, qchris, a couple of people are interested in dashboard design
[15:51:11] qchris|bigbrother
[15:51:19] and since we said we'd talk about that tomorrow at tasking, I was thinking we'd fork the meeting
[15:51:39] so we'd discuss hadoop stuff in the beginning and then dashboards
[15:52:24] does that sound like a good idea or do you guys want me to protect tasking and kick out this rabble?
[15:52:26] :)
[15:52:30] I do not like having too many people in it. More people, more discussion, less work getting done. It's kevinator's job to get people's opinions and prepare them.
[15:52:44] well, this is different
[15:53:00] so YuviPanda is going to be building a dashboard, and one possibility is we work on it together
[15:53:24] so less work for us if we don't differ too much on how we see the problem
[15:53:42] And I want to know status in a deeper way than estimates provide.
[15:53:48] I'd still argue it's kevinator's job to gather and discuss these kinds of things.
[15:53:59] I will be fine with you guys telling me that this is the wrong meeting to do that.
[15:54:08] qchris: I don't follow that, Yuvi wouldn't be a stakeholder in this case, but a team-member
[15:54:14] like Teresa for example
[15:54:32] halfak: hm, for what it sounds like you'd want, this may not be useful
[15:54:38] we'll just be talking design and direction
[15:54:45] That's useful to me.
[15:54:51] I'm more worried about being a distraction.
[15:55:11] milimetric: :-D People already got invited, so let's have fun in a bigger group tomorrow. But we should let kevinator do his work.
[15:55:49] definitely, kevinator can overrule me of course
[15:56:10] halfak: I don't think you'd be a distraction, let's try it out and worst case we lose an hour or so
[15:56:45] I've got some overlap with other things so I'll be in there for the briefest time possible.
[15:56:56] YuviPanda: points against react:
[15:56:58] this.refs.text.getDOMNode().value.trim();
[15:57:03] hehe
[15:57:04] HTML in JS
[15:57:16] and this jsx compilation business
[15:57:26] hmm, we can rule that out if you want
[15:57:28] otherwise it looks good, I think knockout's a bit more elegant
[15:57:34] and does roughly the same thing
[15:57:39] milimetric: Y U NO BACKBONE? :)
[15:57:55] are you asking why I don't have any backbone or why I don't like backbone? :)
[15:58:12] backbone's cool, but a bit verbose and does only 1/2 the job
[15:58:39] so much to read
[15:58:49] don't worry YuviPanda
[15:58:50] milimetric: :D the latter
[15:58:56] jquery is not gonna happen
[15:59:01] :D
[15:59:15] it started all nice but now it's the mother of all trouble
[15:59:39] knockout does round-trip binding which is really useful in interactive UI
[16:00:00] heh, yea
[16:00:05] i think knockout will do us good cause dan already knows it pretty well
[16:00:08] backbone's cool, but a bit verbose and does only 1/2 the job
[16:00:10] knockout does round-trip binding which is really useful in interactive UI
[16:00:23] the only thing that worries me is memory
[16:00:34] but that is no concern for our first impl
[16:00:44] well, it's a concern for the overall architecture
[16:01:07] but from the sound of it, we're going in the direction of doing data computation client side. So memory will be bottlenecked by the data we're processing more than the UI, I think
[16:01:14] milimetric: we should also abandon CSVs for JSON for representing the data
[16:01:33] we're getting JSON from wikimetrics, so that's fine
[16:02:09] i guess... nuria / YuviPanda, do you want to talk this over hangouts now? I have 1.5 hrs. before the scrum of scrums and I'm relatively free
[16:02:30] in a meeting, will answer back in 5 mins
[16:02:36] milimetric: not right now, we're doing the app release as we speak
[16:02:46] ok, no prob, maybe later
[16:03:53] milimetric: yeah :)
[16:04:01] milimetric: also check out browserify for modules?
[16:04:17] milimetric: I've been using it for the last 6 months and highly recommend it. would also mean backend/frontend code reuse if we use node
[16:09:22] milimetric: the data generator should stick to python tho
[16:11:49] yeah, limn uses browserify, i'm not impressed with any of those solutions, they all bit me in the butt one way or another - but there's simply nothing perfect so that's cool
[16:12:09] milimetric: yeah. async.js is a PITA so we should stay away from that
[16:22:13] qchris: the tofu table
[16:22:19] has 200 million rows
[16:22:26] they just meant to run a survey
[16:23:02] so they only need a couple weeks of data (in a smaller dataset it will be easier to analyze the spike)
[16:23:27] Ok.
[16:26:47] we can go ahead and delete the data, right?
[16:27:01] Ahm ... I would not do that.
[16:27:11] Do we know whether other people need that data?
[16:27:43] Like ... some other person stumbled across this data and built a dashboard around it.
[16:28:11] hi
[16:28:23] Also ... I am not sure we have infrastructure in place to automatically clean up after $X days.
[16:28:26] Hi aharoni.
[16:28:55] so, I'll stop logging to that table very soon,
[16:29:01] but I do need some of the data
[16:29:02] Cleaning up tables is something we should discuss with halfak and DarTar.
[16:29:21] even when that data is logged by mistake?
[16:29:36] as nuria says, if we delete all rows with timestamp before june, that's ok
[16:30:00] I am a bit torn about deleting data ... is it sensitive data?
[16:30:22] If not, I'd check on the mailing list and ask the community if anyone is using that data.
[16:30:34] If no one cares, then delete away :-)
[16:30:42] it's just me
[16:30:59] and yes, there is some data that must remain private
[16:31:11] qchris, we cannot delete the data ourselves, right?
[16:31:26] we need ops to do it
[16:31:57] nuria: vanadium has the credentials required to do it. But I lack access to vanadium. So /I/ need Ops. That might be different for you ;-)
[16:32:34] i am going to pass on that ... haha
[16:34:19] I'd ask on the mailing lists. I do not expect that anyone else uses/used the data. But I'd just double check to avoid
[16:34:26] messages like "analytics deleted our data!"
[16:34:43] dan, yuvipanda, want to talk in the batcave?
[16:34:55] milimetric, ops
[16:35:14] aharoni: If we do not trim the data right away, does it get in the way of your analysis?
[16:35:45] nuria: sure, in about 2-3 min?
[16:35:55] i've got another hour, so anytime
[16:36:01] oh, ok
[16:36:03] I think the problem is that he actually has too much data to query effectively, maybe indexes will help, but there are 200 million rows
[16:36:24] You can copy the needed data to a separate table.
[16:36:25] i'll be in the batcave then: http://goo.gl/1pm5JI
[16:36:32] That should address the "too much data" issue.
[16:36:37] yuvipanda, milimetric: 40 minutes after the hour ok?
[16:37:01] Then you need to query the huge lump of data only once.
[16:38:07] that is 12:40 pm
[16:38:11] milimetric time
[16:38:34] I'm in the batcave
[16:38:48] eating potato chips
[16:38:55] and thinking about frameworks
[16:39:22] milimetric: not allowed to join
[16:39:40] invited you
[16:40:52] nuria: Yuvi and I are in the batcave
[16:54:17] milimetric: so you’ve played with Phabricator, do you know if the plan is to allow each team to design custom workflows / card states
[16:54:19] ?
[17:14:53] sorry DarTar
[17:14:57] I don't know of any plans
[17:15:21] but from what I've seen the card states are standard and not customizable per project
[17:16:02] But reach out to Quim and the people coordinating the migration effort, and make sure they know about the issues you hold dear
[17:18:12] the Phabricator discussion is happening in three places:
[17:18:13] http://fab.wmflabs.org/project/view/31/
[17:18:19] http://fab.wmflabs.org/project/view/38/
[17:18:24] http://fab.wmflabs.org/project/view/21/
[17:18:41] so issues should be added to one of those three projects
[17:19:52] qchris: I understand from nuria that there's nothing immediately urgent about this,
[17:20:06] and it's pretty late for me here,
[17:20:18] I'll discuss it with my team tomorrow and I'll email the analytics mailing list.
[17:20:20] is that OK?
[17:20:24] aharoni: yup.
[17:20:27] thanks
[17:20:29] aharoni: Thanks.
[17:28:30] (PS1) Milimetric: Remove limit on recurrent, add throttling [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/142007 (https://bugzilla.wikimedia.org/66841)
[18:17:40] milimetric: thanks, will do
[19:08:17] DarTar: can I show you a spreadsheet?
[19:09:05] awight: all your spreadsheets are belong to us
[19:09:17] I wish... this one has an NDA on it
[19:09:34] DarTar, http://pastebin.com/ZGqziNxA <-- impression counts of all banners run from may 3 to may 15
[19:09:38] hm I should be covered by the same NDA as you guys
[19:09:59] mwalker: I have a more useful thing popped open...
[19:10:06] DarTar, yeah, we need fundraising data
[19:10:06] for win and profit!
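(qchris's suggestion above, copying just the needed slice of the 200-million-row tofu table into a side table so the big table only has to be scanned once, would look roughly like the sketch below. The side-table name, the cutoff, and the exact source-table name are illustrative assumptions.)

    #!/usr/bin/env python
    # Sketch: materialize a small working copy of the huge EventLogging
    # table so the 200M-row original only needs to be scanned once.
    import os
    import MySQLdb

    conn = MySQLdb.connect(read_default_file=os.path.expanduser('~/.my.cnf'),
                           db='log')
    cur = conn.cursor()
    # Table/column names mirror the discussion above but are assumptions;
    # the backticks are needed if the schema name really keeps its hyphen.
    cur.execute("""
        CREATE TABLE uls_tofu_recent AS
        SELECT *
        FROM `UniversalLanguageSelector-tofu_7629564`
        WHERE timestamp >= '20140611000000'
    """)
    conn.commit()
    conn.close()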
[19:10:07] or I'll just evade it
[19:10:07] "can't give me banner counts? Lol, got them from the raw request logs"
[19:10:35] mwalker: looking
[19:12:05] there’s a large number of impressions for the period I was looking at coming from wle_* campaigns
[19:25:40] (PS1) Terrrydactyl: [WIP] Add ability to delete wiki users [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/142045
[19:26:07] (CR) jenkins-bot: [V: -1] [WIP] Add ability to delete wiki users [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/142045 (owner: Terrrydactyl)
[19:27:41] (Abandoned) Terrrydactyl: Added delete wiki user functionality. [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/124878 (owner: Terrrydactyl)
[20:51:17] i'm very confused by the stream of comments, I hope you guys know what you're doing :)
[20:55:35] milimetric: you mean about the redis stuff?
[20:56:54] yea
[20:57:04] ... ottomata is teaching me puppet ... and I am really glad he is that patient with me :-)
[20:57:09] k :)
[20:58:22] I wanted to roll back the switch to the custom template fully, and he told me that those commits contained other parts too that he'd like to keep around.
[20:59:39] So the redis setup in wikimetrics is ending up cleaner than it was before the switch to the custom template.
[21:00:12] clean is cool :)
[21:00:56] yeah, basically, it makes the queue and whatever not work without redis installed
[21:01:07] but it doesn't specify how redis should be installed (e.g. via our custom redis module)
[22:10:23] (PS2) Milimetric: [WIP] Add ability to delete wiki users [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/142045 (owner: Terrrydactyl)
[22:10:27] (CR) jenkins-bot: [V: -1] [WIP] Add ability to delete wiki users [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/142045 (owner: Terrrydactyl)
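(And mwalker's quip above, counting banner impressions straight from the raw request logs, amounts to a tally like the sketch below. Everything about the log format here, a URL field carrying a banner= query parameter, is an assumption rather than a documented interface.)

    #!/usr/bin/env python
    # Sketch: tally banner impressions from raw/sampled request logs on
    # stdin, e.g.  zcat sampled-1000.tsv.gz | python banner_counts.py
    # The banner= query parameter is an assumption about the URLs.
    import re
    import sys
    from collections import Counter

    banner_re = re.compile(r'[?&]banner=([^&\s]+)')

    counts = Counter()
    for line in sys.stdin:
        m = banner_re.search(line)
        if m:
            counts[m.group(1)] += 1

    for banner, n in counts.most_common():
        print('%10d %s' % (n, banner))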