[01:42:16] Analytics-Backlog, Analytics-EventLogging: Move Eventlogging Kafka writer to use pykafka's Producer instead of python-kafka - https://phabricator.wikimedia.org/T109244#1543821 (madhuvishy) NEW a:madhuvishy
[01:43:25] Analytics-Backlog, Analytics-EventLogging: Move Eventlogging Kafka writer to use pykafka's Producer instead of python-kafka - https://phabricator.wikimedia.org/T109244#1543829 (madhuvishy)
[08:43:40] Analytics-Kanban: Remove 'outreach' domain from pageview definition. - https://phabricator.wikimedia.org/T109256#1544123 (JAllemandou) NEW a:JAllemandou
[08:45:47] Analytics-Backlog: Update pageview documentation to reflect current status. - https://phabricator.wikimedia.org/T109257#1544133 (JAllemandou) NEW
[09:23:04] Analytics-Tech-community-metrics, ECT-August-2015: Remove deprecated repositories from korma.wmflabs.org code review metrics - https://phabricator.wikimedia.org/T101777#1544264 (Aklapper) >>! In T101777#1539853, @Dicortazar wrote: > if we do not want to show info from deprecated repos, we should remove th...
[13:09:38] joal mornin
[14:04:40] (PS1) Milimetric: Improve number formatting [analytics/dashiki] - https://gerrit.wikimedia.org/r/232038
[14:05:05] (CR) Milimetric: [C: 2 V: 2] "self merging simple format change" [analytics/dashiki] - https://gerrit.wikimedia.org/r/232038 (owner: Milimetric)
[15:04:59] milimetric: hello! sorry I missed your pings on friday, had to go sign lease for new place
[15:05:20] ottomata: you were asking about last access uniques? I am the one working on them
[15:06:44] uhhhhhh
[15:06:48] yes i remember>>.>..>
[15:06:50] but not really
[15:08:25] no problem madhu
[15:08:38] we can talk after tasking or later today
[15:08:49] milimetric: cool
[15:09:07] ottomata: okay ping if you remember :)
[15:09:42] i do not remember, i think someone was asking about something, but i can't remember where
[15:10:26] joal: you around? quick sync up before standup?
[15:11:07] ottomata: hmmm okay. I had a question for you - https://github.com/wikimedia/mediawiki-extensions-EventLogging/blob/master/server/eventlogging/handlers.py#L186 what is this trying to achieve? Do we not want to auto create topics?
[15:13:21] madhuvishy: pykafka may be different, but i was seeing exceptions during the first message produced to a new topic that doesn't exist yet
[15:13:32] the second time around, things were fine.
[15:13:34] aaah okay
[15:13:43] ottomata: pykafka autocreates
[15:13:56] Analytics-Tech-community-metrics, ECT-August-2015: Jenkins-mwext-sync appears in "Who contributes code" - https://phabricator.wikimedia.org/T105983#1545602 (Dicortazar) It seems to be fixed by now. Closing this task. Thanks for the pointer!
[15:14:00] so, this explicitly made a call to ensure the topic exists if python-kafka didn't already know about it
[15:14:37] ottomata: http://pykafka.readthedocs.org/en/stable/ if you look here, it first needs a topic object before it can get a producer instance for it
[15:15:09] Analytics-Tech-community-metrics, ECT-August-2015: Tech community KPIs for the WMF metrics meeting - https://phabricator.wikimedia.org/T107562#1545605 (Dicortazar)
[15:15:10] Analytics-Tech-community-metrics, ECT-August-2015: Jenkins-mwext-sync appears in "Who contributes code" - https://phabricator.wikimedia.org/T105983#1545603 (Dicortazar) Open>Resolved
[15:15:31] ottomata: i think it auto creates if the topic doesn't exist at that point. so i wanted to make sure it was not something we wanted to prevent
[15:15:43] i'll test it out anyway
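
A minimal sketch of the pykafka flow discussed above. This is not the EventLogging handler code: the broker address and topic name are placeholders, and it assumes the brokers allow topic auto-creation.

    from pykafka import KafkaClient

    # Placeholder broker list -- not the production EventLogging brokers.
    client = KafkaClient(hosts="localhost:9092")

    # Looking up a topic returns a Topic object; if the brokers have
    # auto.create.topics.enable=true, a topic that does not exist yet is created
    # on first use, so no explicit ensure_topic_exists() call (python-kafka style)
    # should be needed.
    topic = client.topics[b"eventlogging_example"]

    # pykafka wants the Topic object first, then hands out a Producer for it.
    producer = topic.get_producer()
    producer.produce(b"hello from pykafka")  # pykafka >= 2.0: one message per call
    producer.stop()
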
[15:16:36] Analytics-Backlog: Set up auto-purging after 90 days {tick} - https://phabricator.wikimedia.org/T108850#1545607 (mforns) No worries :]
[15:31:34] joal, hi
[15:33:28] Analytics-Cluster, WikiGrok, Database: Purge MobileWebWikiGrok_* and MobileWebWikiGrokError_* rows older than 90 days - https://phabricator.wikimedia.org/T77918#831658 (Krenair) Is this still wanted? WikiGrok was recently undeployed.
[15:39:38] ottomata: quick sync after standup ?
[15:40:01] yes
[15:40:07] k, thx :)
[15:40:16] milimetric: join too if you like :)
[16:17:18] leila: in office today? :)
[16:19:45] madhuvishy: working from home. I'll start looking at the data at 10am
[16:19:54] want to chat about something else madhuvishy
[16:20:26] leila: no, i wanted to sync up about uniques. let's chat when you are done then :) I'll get to office around lunch after tasking in the morning
[16:21:11] sounds good, madhuvishy. :-)
[16:31:01] Analytics-Backlog, Research-and-Data: Analyze referrer traffic to determine report format {hawk} [8 pts] - https://phabricator.wikimedia.org/T108886#1546112 (kevinator)
[16:32:27] hi milimetric. so you think the issue is that Labs is down?
[16:32:43] just got your email milimetric.
[16:32:45] thanks!
[16:32:48] np
[16:32:57] Analytics-Backlog, Analytics-EventLogging: Update Schema Talk pages {tick} [8 pts] - https://phabricator.wikimedia.org/T103133#1546122 (kevinator)
[16:35:13] leila: the instance got rebooted and it was running in a very precarious way :)
[16:35:20] I'll change the config
[16:35:38] I had already made the URL recommend.wmflabs.org, do you prefer the -api thing?
[16:35:50] no api, if possible
[16:35:52] :-)
[16:35:52] I'd normally put the api at recommend.wmflabs.org/api
[16:35:58] k, i'll do it that way
[16:36:02] makes sense, milimetric. let's go with that.
[16:36:48] Analytics-Backlog: Write scripts to track cycle time of tasked tickets and velocity [8 pts] - https://phabricator.wikimedia.org/T108209#1546137 (kevinator)
[16:37:18] leila, it'll take a bit 'cause i gotta find the right apache configs and stuff, I can't do it while in this meeting
[16:37:20] is that ok?
[16:37:42] sure, np, milimetric. Let me know when it's done and I'll start messaging a few people for testing.
[16:37:50] k, for now recommend.wmflabs.org is just the fake data, i'll put it up sometime in the next three hours
[16:38:35] sounds good.
[16:39:24] Analytics-Backlog, Analytics-Cluster: Update UA parser for better spider traffic classification {hawk} [ pts] - https://phabricator.wikimedia.org/T106134#1546150 (kevinator)
[16:46:43] Hi everyone! really nitpicky question here: for EventLogging schema, is it best practice to register timestamps as a number or a string (I see schemas that do both)? Thx!!!!!
[16:47:27] Analytics-EventLogging, Analytics-Kanban: Load test parallel eventlogging-processor {stag} [3 pts] - https://phabricator.wikimedia.org/T104229#1546244 (kevinator)
[16:48:04] milimetric: ^ ? (sorry for the bother!)
[16:48:25] Analytics-Backlog, Analytics-EventLogging: Puppetize parallel eventlogging-processor {stag} - https://phabricator.wikimedia.org/T104228#1410894 (kevinator)
[16:48:51] AndyRussG: the timestamp when the event hits the server is automatically added. Just to verify, you need a separate timestamp, right (for you it makes sense, you're sending in batch)
[16:49:22] if so, then there's no convention, the people crunching the numbers should decide on whatever format they like
[16:49:35] milimetric: ah K cool, makes sense, thx!
[16:49:49] I'd guess the standard MW string of YYYYMMDDHHIISS is the most "common"
[16:51:57] milimetric: ah in terms of actual content we're already using unixy things, I think number of seconds since whenever
[16:52:14] I guess I'll call it a number :)
[16:53:07] Analytics-Backlog, Analytics-EventLogging: Deploy EventLogging on Kafka to eventlog1001 (aka production!) {stag} [3 pts] - https://phabricator.wikimedia.org/T106260#1546342 (kevinator)
[16:53:10] Or integer
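
For illustration only (no convention is settled on above): the two timestamp shapes mentioned, the MediaWiki-style string and plain Unix epoch seconds.

    import time
    from datetime import datetime

    mw_ts = datetime.utcnow().strftime("%Y%m%d%H%M%S")  # MediaWiki-style string, e.g. "20150817165300"
    epoch_s = int(time.time())                           # integer seconds since the Unix epoch
    print(mw_ts, epoch_s)
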
[16:53:45] Analytics-Backlog, Analytics-EventLogging: Puppetize parallel eventlogging-processor {stag} [8 pts] - https://phabricator.wikimedia.org/T104228#1546353 (kevinator)
[16:57:10] Analytics-EventLogging, Analytics-Kanban: {stag} EventLogging on Kafka - https://phabricator.wikimedia.org/T102225#1546395 (kevinator)
[16:57:11] Analytics-Backlog, Analytics-EventLogging: Move Eventlogging Kafka writer to use pykafka's Producer instead of python-kafka - https://phabricator.wikimedia.org/T109244#1546396 (kevinator)
[16:57:34] Analytics-Backlog, Analytics-EventLogging: Move Eventlogging Kafka writer to use pykafka's Producer instead of python-kafka {stag} - https://phabricator.wikimedia.org/T109244#1546402 (kevinator)
[16:58:53] Analytics-Backlog, Analytics-EventLogging: Move Eventlogging Kafka writer to use pykafka's Producer instead of python-kafka {stag} - https://phabricator.wikimedia.org/T109244#1543821 (kevinator)
[17:02:15] Analytics-Backlog, Analytics-EventLogging: Move Eventlogging Kafka writer to use pykafka's Producer instead of python-kafka {stag} [8 pts] - https://phabricator.wikimedia.org/T109244#1546442 (kevinator) p:Triage>High
[17:05:23] Analytics-Kanban: Analyze webrequest data issue on August 3/4 and 10/11 [?pts] {hawk} - https://phabricator.wikimedia.org/T107893#1546455 (kevinator)
[17:05:47] Analytics-Kanban: Analyze webrequest data issue on August 3/4 and 10/11 [?pts] {hawk} - https://phabricator.wikimedia.org/T107893#1546457 (kevinator) a:Ottomata
[17:13:58] Analytics-Backlog: Add better regexp to agent_type bot filtering - https://phabricator.wikimedia.org/T108343#1546483 (kevinator) p:Triage>Normal
[17:14:24] Analytics-Backlog: Update pageview documentation to reflect current status. - https://phabricator.wikimedia.org/T109257#1546487 (kevinator) p:Triage>High
[17:15:07] ottomata, pokey?
[17:15:28] Analytics-Kanban: Update pageview documentation to reflect current status {hawk} [3 pts] - https://phabricator.wikimedia.org/T109257#1546489 (kevinator) a:JAllemandou
[17:25:09] Analytics-Kanban, Research-and-Data: Bug in pageview title extraction: change spaces to underscores after percent_decode (not only plus signs) - https://phabricator.wikimedia.org/T108866#1546523 (DarTar) @Jallemandou will historical data be fixed and reaggregated too, where appropriate?
[17:26:25] Analytics-Backlog, Analytics-EventLogging: Make EventLogging alerts based on Kafka metrics {stag} [8 pts] - https://phabricator.wikimedia.org/T106254#1546531 (kevinator)
[17:28:19] Analytics-Backlog: Stats for en.wikinews.org not working - https://phabricator.wikimedia.org/T109146#1546541 (Milimetric) p:Unbreak!>High
[17:28:42] Analytics-Backlog: Stats for en.wikinews.org not working - https://phabricator.wikimedia.org/T109146#1541481 (Milimetric) I changed to High as no site is actually down right now. cc-ing Erik as well.
[17:29:02] Analytics-Backlog: Update maxmind database on Hadoop {hawk} - https://phabricator.wikimedia.org/T109039#1546546 (kevinator)
[17:34:49] Analytics-Backlog: Update maxmind database on Hadoop {hawk} - https://phabricator.wikimedia.org/T109039#1546571 (kevinator) cron job commits updates to git weekly: /home/milimetric/GeoIP-toolbox/update_data_files.sh however, this is not pushed into HDFS
[17:36:17] Analytics-Backlog: Update maxmind database on Hadoop {hawk} - https://phabricator.wikimedia.org/T109039#1546574 (kevinator) p:Triage>Normal
[17:38:00] Analytics-Backlog: Add better regexp to agent_type bot filtering - https://phabricator.wikimedia.org/T108343#1546580 (kevinator) pageviews with a matching regexp should get flagged with agent_type = "bot" This will help us filter out bots when counting visitors based on Last Access.
[17:39:07] Analytics-EventLogging, Analytics-Kanban: Load test parallel eventlogging-processor {stag} [5 pts] - https://phabricator.wikimedia.org/T104229#1546583 (kevinator)
[17:46:21] madhuvishy: I'm going to set up ellery's Flask site on labs
[17:46:35] when you were going to do that, did you get an apache config or other stuff that worked?
[17:46:51] just trying not to duplicate work if you've already done it
[17:48:24] Ironholds: Hi Sir, you there ?
[17:48:36] joal, I am, but heading into a presentation in 10
[17:48:54] really quick: what is the host of wiki-donate PVs ?
[17:50:28] Ironholds: --^
[17:50:59] milimetric: where is the task you are using to capture prioritized metrics to build when replacing wikistats?
[17:51:33] joal, donate.wikipedia
[17:51:47] this is likely to be them futzing with MIME types or paths since those WERE getting excluded
[17:51:51] Ironholds: Shall I prevent this one from occurring ?
[17:51:55] yes
[17:52:09] Ironholds: it was not part of a regexp (first time I see it)
[17:52:17] Ironholds: seems like a bug as well
[17:52:26] it is, but again, I suspect it's a MIME/path change
[17:52:30] that's how we were excluding them before
[17:52:33] but more importantly, I want the answer to the question "which specific person in Analytics or Readership is tasked with maintaining and consistently vetting this org-wide, primary KPI"
[17:52:39] and if the answer is "nobody" then that's a massive problem.
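
An illustrative host check for the donate.wikipedia exclusion discussed above. The real pageview definition lives in analytics/refinery/source (Java); this Python sketch only mirrors the idea, and the host string is taken from the exchange above, not from the refinery code.

    import re

    # Hosts that should never count as pageviews; pattern is illustrative only.
    EXCLUDED_HOSTS = re.compile(r"^donate\.wikipedia\.org$", re.IGNORECASE)

    def counts_as_pageview_host(uri_host):
        return EXCLUDED_HOSTS.match(uri_host.strip()) is None

    assert not counts_as_pageview_host("donate.wikipedia.org")
    assert counts_as_pageview_host("en.wikipedia.org")
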
[17:52:50] Ironholds: I'll update the existing ticket
[17:53:10] thanks
[17:53:12] Ironholds: I don't think the responsibility is on one person
[17:53:19] then that's a massive problem
[17:53:56] Ironholds: I acknowledge your concern :)
[17:53:56] kevinator: just the {lama} epic, nothing else
[17:54:23] joal, and I acknowledge it is not your fault, and I don't expect you to fix it on your own :)
[17:54:59] Ironholds: I am the one currently updating code on pageviews, so I'll do it this time, but it could be any AnEng
[17:55:30] joal, sure, but my concern is it is nobody's job to do *this*
[17:55:41] that is, go "oh, I looked at today's data and there's stuff we don't expect, someone has done something"
[17:55:42] Ironholds: If you have ideas on ways to check data cleansyness (is that english?), let's organise a meeting
[17:55:53] joal, absolutely!
[17:56:23] Ironholds: I know you are a busy person, I'll let you invite me :-P
[17:56:55] thanks milimetric
[17:57:00] joal, data-based unit tests are a familiar thing, yep
[17:57:27] Ironholds: joking, I'll talk about that with the team at standup tomorrow
[17:57:36] kk
[17:57:42] happy to help out
[17:57:56] Ironholds: data-based unit-tests are unfortunately not enough for data cleansiness I think, but it's a start
[17:58:41] Ironholds: confirmation please: donate.wikipedia or donate.wiki* ?
[17:59:09] .wikipedia
[17:59:14] ok thx
[17:59:17] milimetric: I haven't worked on the recommendations stuff yet - I was talking to Yuvi about puppetizing and he said it was not necessary at this stage. I can do that if it makes sense now
[17:59:29] Analytics-Tech-community-metrics: Closed tickets in Bugzilla migrated without closing event? - https://phabricator.wikimedia.org/T107254#1546658 (Aklapper)
[17:59:36] Oh by the way Ironholds, I have another question for you on UAParser
[17:59:46] But it'll wait until tomorrow
[17:59:50] madhuvishy: I just wanted to get it running, and am hitting all kinds of python problems
[17:59:52] Thx Ironholds :)
[17:59:56] it's ok, i'll muddle out
[18:00:24] milimetric: hmmm, for all of aaron's stuff, we are using uwsgi+nginx
[18:00:45] ok, I guess i'll have to learn how to do that
[18:00:45] joal, I mean you can ask now and I'll answer in half an hour ;p
[18:00:46] https://github.com/wiki-ai/wikilabels-wikimedia-config is an example
[18:01:13] joal: so ja batcave?
[18:01:22] ottomata: OMW
[18:01:36] milimetric: this is the flask app https://github.com/wiki-ai/wikilabels
[18:02:13] oh... hm... I think ellery had some stuff configured on this box and it got wiped by the labs resets or something
[18:02:15] grrrr
[18:14:54] leila: I tried but something got really messed up with whatever ellery configured there. I have to talk to him to go forward, because I think he didn't document exactly what dependencies he was installing, and I can't guess
[18:14:58] I'll reply to the thread
[18:15:12] thanks milimetric
[18:37:04] milimetric: I worked on this - https://github.com/madhuvishy/restbase/blob/test_projectview/specs/mediawiki/v1/analytics.yaml. It's still WIP and I have a few questions - can we chat about this and the puppet stuff in an hour ish? I'll be in office by then
[18:45:40] ottomata: can push https://gerrit.wikimedia.org/r/#/c/230113/ whenever you like too :) (stat1002 rsync)
[18:55:42] ottomata: Ready !
[18:55:45] Batcave ?
[18:56:10] k
[19:00:25] ottomata, belay that ping
[19:00:37] I was asking ebernhardson's question
[19:04:13] madhuvishy: so let's say 12:30 your time then?
[19:06:36] milimetric: just got on the train. let's do 1?
[19:07:18] cool, 1
[19:11:47] milimetric, did you talk to Ellery?
[19:12:24] leila: no
[19:12:35] hmm. lemme see if I can find him then. :-)
[19:17:36] (PS1) Joal: Correct bug in pageview definition [analytics/refinery/source] - https://gerrit.wikimedia.org/r/232177 (https://phabricator.wikimedia.org/T109256)
[19:18:15] Ironholds: --^
[19:18:17] :)
[19:19:02] joal, looks good, will leave comments
[19:19:08] Thanks
[19:19:31] (CR) OliverKeyes: [C: 2] "Should have a changelog entry, but otherwise LGTM" [analytics/refinery/source] - https://gerrit.wikimedia.org/r/232177 (https://phabricator.wikimedia.org/T109256) (owner: Joal)
[19:19:50] joal, have +2d. Should I verify too?
[19:20:50] Ironholds: As you wish, if you don't I'll do it (or ottomata will) :)
[19:21:09] I'll update the changelog before deploying (probably tomorrow or wednesday)
[19:21:15] Ironholds: --^
[19:21:39] Ironholds: I'll also catch up with you tomorrow about docs, to ensure it is clear where it should be
[19:21:59] joal, cool!
[19:22:08] don't make me Princess Bride you
[19:22:18] (CR) OliverKeyes: [V: 2] Correct bug in pageview definition [analytics/refinery/source] - https://gerrit.wikimedia.org/r/232177 (https://phabricator.wikimedia.org/T109256) (owner: Joal)
[19:22:50] Finally Ironholds, are those errors linked to the discrepancy between sampled/unsampled ?
[19:23:19] not to my knowledge, they were introduced later
[19:23:35] FWIW I had to do an analysis of sampled/unsampled for Lila's work today and found almost total agreement
[19:23:36] hm, really ?
[19:24:32] Can you have a look at that then: https://phabricator.wikimedia.org/T108925
[19:24:39] Ironholds: --^
[19:25:48] I've given the feedback I have via email already; I don't have anything additional to add, I'm afraid
[19:25:54] if you mean "can you work out the answer" absolutely
[19:26:02] :)
[19:26:08] my salary ask is 90,000 US dollars a year with 401(k) matching up to 2%
[19:26:17] :D
[19:26:18] :P
[19:26:37] I'll check with Tilman
[19:26:43] Thanks anyway
[19:27:01] no problem! If you have specific questions of "could it be X? Oliver, did you do X?" I am happy to answer those
[19:45:24] Analytics-Backlog: Update maxmind database on Hadoop {hawk} - https://phabricator.wikimedia.org/T109039#1547104 (JAllemandou) Open>declined
[19:47:08] Analytics-Backlog: Update maxmind database on Hadoop {hawk} - https://phabricator.wikimedia.org/T109039#1538539 (JAllemandou) @Ottomata confirmed we update our maxmind databases used for hadoop every week (not in hdfs, but on every cluster node in /usr/ folder).
[19:56:56] leila: did you get a chance to look at the data?
[19:58:30] milimetric: I think andrew and joseph are using batcave. I'll call you on hangouts directly?
[19:59:21] sounds good madhuvishy
[19:59:28] Yes, I looked at it
[19:59:51] leila: what sounds good :)
[20:00:01] you calling me on Hangout madhuvishy
[20:00:10] :-)
[20:00:11] aah that was for Dan
[20:00:15] doh! :D
[20:00:20] ooki, then let me explain here
[20:00:35] we can do hangouts too, after i'm done talking to Dan
[20:01:00] I looked at the data, I sampled 1M counts, and then 10M counts. It seems somewhere between 400-500 counts/day is the second mode of the distribution
[20:01:29] * madhuvishy pretends to understand
[20:01:52] Ironholds: for the pageview definition, did you consider more than 500 views per day to be activity from a bot? (or I'm just making this up. ;-)
[20:01:59] is that the peak of the second highest curve?
[20:02:37] yes, madhuvishy. which means another way to remove bot-like activity is to remove requests from accounts that have 450 views or more per day.
[20:02:55] I'm pinging Ironholds here to see if he has learned something different in the past.
[20:03:07] leila, no
[20:03:17] context? 500 per page? user?
[20:03:18] I don't think the pageview definition does anything like that
[20:03:33] 500 in a day Ironholds.
[20:03:40] 500 pageviews per day Ironholds.
[20:04:03] milimetric: ping when you're around :)
[20:04:07] madhuvishy: i'm here
[20:04:18] was making sure I don't interrupt
[20:04:25] leila, per article?
[20:04:26] madhuvishy: https://plus.google.com/hangouts/_/wikimedia.org/a-batcave-1
[20:04:27] or per user?
[20:04:39] because otherwise all traffic is bots; pretty sure we have >500 pageviews a day total
[20:04:44] Ironholds: per user
[20:04:46] madhuvishy: how about the way we identify bots? does it consider any threshold on pageviews by a /user/?
[20:04:49] no, I never looked at that
[20:04:57] unique user identification is too inaccurate to make it worthwhile
[20:05:03] and there are some ISPs that totally break it
[20:05:09] leila: no, we don't do anything like that
[20:05:13] okay, got it Ironholds. Thanks for confirming.
[20:05:18] thanks, Ironholds.
[20:06:41] madhuvishy: so if you want, you can exclude user ids (the same way you defined the pseudo-id) that have more than 450 views per day and see if the results change a lot from what you observed last week. If it doesn't, that means we're pretty much capturing all the bots that we could capture using the regex and we can close this part.
[20:07:06] leila: okay. do you want to do this per day?
[20:07:15] yes, madhuvishy.
[20:09:49] leila: okay i'll try that
[20:10:22] thanks madhuvishy. please ping when you have the numbers and we can look at them together.
[20:12:03] leila: cool
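
A sketch of the per-day threshold check leila proposes above, assuming (pseudo_id, day, pageview_count) rows have already been aggregated elsewhere (e.g. in Hive). The 450/day cutoff comes from this conversation and is not part of the pageview definition.

    DAILY_THRESHOLD = 450  # views/day; taken from the discussion above, not an official cutoff

    def likely_bot_ids(daily_counts, threshold=DAILY_THRESHOLD):
        """Return the pseudo-ids whose pageview count meets the per-day threshold."""
        return {pseudo_id
                for pseudo_id, day, count in daily_counts
                if count >= threshold}

    # Toy rows: (pseudo_id, day, pageview_count)
    rows = [("id-1", "2015-08-17", 12),
            ("id-2", "2015-08-17", 900),
            ("id-1", "2015-08-18", 30)]
    assert likely_bot_ids(rows) == {"id-2"}
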
[21:49:49] milimetric: also, can i move to gerrit for the restbase stuff? i see the original repo exists on gerrit
[22:04:38] madhuvishy: the restbase folks said github
[22:04:52] milimetric: aah
[22:04:54] okayy
[22:04:59] yeah, it's weird
[22:05:05] but i'm sure they have their reasons
[22:05:43] okay. i'm having trouble querying using the analytics/v1/pageviews.yaml
[22:05:51] not sure what the query should be
[22:16:51] madhuvishy: I think the only thing that should be different is the domain. The path of the pageviews.yaml file shouldn't affect the query path
[22:42:25] milimetric, yt? do you know which program is generating limn-language reports??
[22:42:36] yes
[22:42:47] :]
[22:42:49] it's that weird cron job in my folder on stat100.....2?
[22:42:55] hold on
[22:43:00] wait, why? are you trying to fix it?
[22:43:13] you need my permissions anyway. I have to do that task, it's my mess
[22:43:28] I'm trying to do https://phabricator.wikimedia.org/T107504
[22:43:35] oh
[22:43:56] yeah, I was trying to finish up the stuff in next-up first so I can get to it
[22:43:58] but I was going to grab it
[22:44:19] are there a lot of differences between the code in limn-language-data and your home folder?
[22:44:54] I'll take a look
[22:45:51] milimetric, mmm I don't want to make you stop with your task :(, I will do other stuff, really
[22:46:05] ok, so here's what my cron's doing, insanely:
[22:46:15] https://www.irccloud.com/pastebin/3QXhPD85/
[22:46:32] so it's copying their SQL
[22:46:54] and then here's run_manually:
[22:46:57] https://www.irccloud.com/pastebin/qeZG7GPx/
[22:47:02] aha
[22:47:27] I think the SQL was just too crazy to execute
[22:47:30] so I had to run it directly
[22:47:35] ok
[22:47:42] ok thanks!!
[22:47:44] :]
[22:47:46] so the solution there is probably to figure out a way to display their data first
[22:47:53] because the current graph is 100% meaningless
[22:48:05] mmm I see
[22:48:12] and then figure out a way to get the data, hopefully with reportupdater
[22:48:29] I could easily put up a new type of viz with dashiki or something, if you find a way to do this well
[22:48:42] hm... maybe metrics by project works actually
[22:48:45] aha
[22:48:49] because it's the same thing - basically every wiki, one metric
[22:48:59] yes, no language though
[22:49:25] no, they have languages, it's just all wikipedia
[22:49:34] oh yea
[22:49:49] so the config for available-languages would be different, we'd have to add that to the wiki page that configures the metrics-by-project layout
[22:50:15] i think that'd work splendidly. Then we can ask them what languages they want by default and leave the rest to be added after the first load
[22:50:37] one question: this makes no difference to tick, right? I thought there were things blocking tick in this task, but there aren't; the queries hit the wiki databases, not EL
[22:50:49] yes, makes sense!
[22:51:00] yes, true, tick is separate
[22:51:10] you should label it frog
[22:51:16] ok, will change
[22:51:36] do you think it is still doable right now, or should we reprioritize?
[22:51:54] this is still broken:
[22:51:56] https://www.irccloud.com/pastebin/5PCg8AG5/
[22:52:04] and I've been meaning to get to it but can't
[22:52:14] and they're losing data every day that can never be recovered, so I feel bad
[22:52:35] if you are on it anyway, you might want to fix just the content_translation_beta_manual.sql script
[22:52:58] but if your eyes bleed and sql comes out instead of blood, move on after a few minutes
[22:53:10] xDDD
[22:53:27] ok, I think it makes sense to work on that :]
[23:52:08] milimetric: I think I made progress :)
[23:52:28] yay :)
[23:52:36] https://github.com/madhuvishy/restbase/commit/6f6299846a89b648a823c65cdf775332748b4940
[23:54:06] cool, I'll take a look tomorrow madhu
[23:54:13] milimetric: alright :)
[23:54:21] i'm gonna move the tasks to code review
[23:54:24] but if it works, I say we'll just go over it with jo and send it
[23:54:30] yea, great
[23:54:33] maybe they'll be there for a while
[23:54:36] ya cool
[23:54:40] btw, you all asked me to let you know when the recommender thing was online
[23:54:48] http://recommend.wmflabs.org/
[23:54:59] milimetric: yay :)
[23:55:00] oh marcel's not around
[23:55:15] I learned nginx and uwsgi as a result, so that was fun
[23:55:28] k, nite all, i'm tired :)
[23:56:17] milimetric: nice! take rest, good night :)
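
The log only mentions the nginx + uwsgi setup in passing, so here is a generic minimal sketch of that pattern for a Flask app. It is not ellery's actual recommend app or its real config; the file names, socket path, and /api prefix (following the recommend.wmflabs.org/api decision earlier in the day) are all assumptions.

    # app.py -- a stand-in Flask app, not the real recommendation service.
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/api/ping")
    def ping():
        return jsonify(status="ok")

    if __name__ == "__main__":
        app.run()  # dev server only; in production uwsgi serves `app` instead

    # Typical (generic) deployment, roughly what "uwsgi + nginx" means above:
    #   uwsgi --socket /run/recommend.sock --module app --callable app --master --processes 2
    # and an nginx server block that proxies to the socket:
    #   location / { include uwsgi_params; uwsgi_pass unix:/run/recommend.sock; }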