[03:33:08] Analytics / EventLogging: Provide a robust way of logging events without blocking until network request completes; use sendBeacon with client-side storage fallback - https://bugzilla.wikimedia.org/42815#c12 (Matthew Flaschen) (In reply to Krinkle from comment #11) > Firefox landed it recently in stable...
[07:39:22] Analytics / Visualization: Story: EEVSUser selects Target Site breakdown - https://bugzilla.wikimedia.org/68473#c2 (nuria) Just a brief note here that we are not reporting metrics per site on our 1st release; metrics will only be reported per project.
[07:41:52] Analytics / Wikimetrics: Story: WikimetricsUser generates report with Target Site breakdown - https://bugzilla.wikimedia.org/68475#c3 (nuria) This looks to be a duplicate of #68473. Adding the same note: we will not be reporting metrics per site on our 1st release. Metrics will only be reported per pro...
[07:57:22] Analytics / Wikimetrics: Wikimetrics can't run a lot of recurrent reports at the same time - https://bugzilla.wikimedia.org/68840#c3 (nuria) > Celery chains are not documented very well. > Apparently, if one of their children raises an error, the chain can stop: This makes total sense given that chains...
[08:19:07] Analytics / Wikimetrics: session management - https://bugzilla.wikimedia.org/68833#c2 (nuria) It is funny that this list is not much bigger if we were running queries for all projects (if that is not the case, please disregard this comment)... could it be that open sessions are resulting from timeouts...
[08:22:37] Analytics / Visualization: Story: EEVSUser selects time range - https://bugzilla.wikimedia.org/68470#c2 (nuria) > Performance is secondary. > If the performance is bad, we can add another card to improve performance. Sorry, but if filtering is done client side (the only way it can be done given our cu...
[08:40:42] \o/ qchris hellooo
[08:40:54] Hi nuria! \o/
[08:41:03] Great to see you in the channel again :-D
[08:41:41] Thank you. I am trying to read all bugs and reports filed in my absence. I see
[08:42:05] that you changed the credentials for the wikimetrics instances (sounds great)
[08:42:11] was that hard to do?
[08:42:42] No, such things are pretty easy.
[08:42:47] Just create a new tool,
[08:42:52] and you get new credentials.
[08:48:13] but how do you "create a new tool"?
[08:48:21] is it akin to a dns domain
[08:48:25] as in "endpoint"
[08:49:52] No, a tool is something like a bot or a service.
[08:50:05] It's on Tool Labs.
[08:50:13] Let me find a url for it...
[08:51:09] ok
[08:51:11] https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Help#Creating_a_new_Tool_account
[08:51:24] ^ nuria: That page explains how to do it.
[08:51:32] ahhhh
[08:53:08] ok, i see. it was great you took care of that. i will keep on reading bugs and reports, and we can sync up on what work to do next after standup today? sounds good?
[08:53:44] Sure, anytime you like.
[08:57:39] I think i need 3 more hours of reading bugs to catch up on everything you guys have done in the last 10 days.
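(A minimal sketch of the Celery chain behavior mentioned in bug 68840 above: when a task inside a chain raises, the tasks queued after it are not run, matching the "the chain can stop" observation. The Redis broker URL and the toy task names are illustrative assumptions, not actual wikimetrics code.)

```python
# Toy demonstration that a failing child task halts a Celery chain.
# Assumes a local Redis broker/backend; not wikimetrics code.
from celery import Celery, chain

app = Celery('demo',
             broker='redis://localhost:6379/0',
             backend='redis://localhost:6379/0')

@app.task
def load(x):
    return x

@app.task
def explode(x):
    raise ValueError('simulated child failure')

@app.task
def report(x):
    # Never reached when explode() raises: the chain stops there
    # instead of continuing with the remaining tasks.
    return x

# chain() links the tasks so each feeds its result to the next.
result = chain(load.s(1), explode.s(), report.s()).apply_async()
```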
[09:27:35] qchris, regarding the dnat rules: https://bugzilla.wikimedia.org/show_bug.cgi?id=69042
[09:28:02] i do not think dan did that the 1st time he set up the dev instance, nor did otto the 1st time he set up staging
[09:28:25] so i do not think that is a bug on our end, but rather in the labs setup
[09:31:13] From my point of view, it's nothing that needs fixing right on the spot, as we
[09:31:23] Analytics / Wikimetrics: Labs instances rely on unpuppetized firewall setup to connect to databases - https://bugzilla.wikimedia.org/69042#c1 (nuria) I do not remember us having to do this when we set up either dev or staging when we first set them up, which indicates that something might have changed...
[09:31:38] do not reboot our machines that often. But we need to have it solved eventually.
[09:32:00] I do not care much if it's us that solve it, or if it gets solved by tim or someone else.
[09:35:52] so qchris i tried ssh-ing to wikimetrics-staging1.eqiad.wmflabs
[09:35:58] but it no longer works?
[09:36:35] Mhmm. Same here.
[09:36:42] Can you take care of it?
[09:37:03] you did not change anything there then
[09:37:09] no I didn't.
[09:37:14] Can you take care of it?
[09:37:28] also this does not render: https://metrics-staging.wmflabs.org
[09:37:44] nuria: Can you take care of it?
[09:38:30] sure qchris, but besides rebooting the machine is there anything else we can do?
[09:39:11] nuria: You can for example try getting to the logs and look there.
[09:39:12] let me know and i will try to resuscitate it
[09:39:25] and where are the logs?
[09:39:34] :-D
[09:39:53] Let me find that again for you :-)
[09:40:06] "again"?
[09:40:08] https://wikitech.wikimedia.org/wiki/Special:NovaInstance
[09:40:13] ^ has a list of instances.
[09:40:24] For each instance, there is a link "Get console output".
[09:40:36] Might be something is in there.
[09:41:00] It is full of OOM.
[09:41:16] Can you take it from there?
[09:43:09] give me a sec, that page you sent me does not work in chrome
[09:43:48] (You might need to filter for "analytics" if you do not see the list of analytics instances)
[09:44:35] the logs are the "get console output"?
[09:45:07] "Get console output" gives you the console output of the machine.
[09:45:16] So some parts of dmesg might end up there.
[09:45:31] In our case, it did, because we see the OOM output.
[09:45:35] so where do you see the logs?
[09:46:29] When I click "Get console output", a window pops up for me.
[09:46:41] I scroll to the bottom of it, and there it reads
[09:46:56] [12989755.287774] Out of memory: Kill process 7299 (wikimetrics) score 16 or sacrifice child
[09:47:00] [12989755.288698] Killed process 14581 (wikimetrics) total-vm:295336kB, anon-rss:161368kB, file-rss:164kB
[09:47:14] Those are good log lines.
[09:47:41] It tells us that the machine ran out of memory and killed a process to account for it.
[09:47:55] In our case, it killed the process "wikimetrics".
[09:48:58] don't they have mar 8 as date?
[09:49:30] No, dmesg does not think in months and days.
[09:51:00] ok, and the remedy is to 'reboot'?
[09:51:20] I'd say so. Yes. At that point, a reboot looks good.
[09:51:39] ok, after reboot do we need to run your iptables script?
[09:51:51] Yes.
[09:52:12] Are you glad now that I filed a bug and pasted the script, although it is not a bug on our end? ;-)
[09:52:21] of course,
[09:52:29] SCNR.
[09:53:07] i pointed out that it should not be a bug on our end, as i do not remember having done this when we first set up staging; thus it must affect all labs instances like ours
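(Stepping back to the OOM-killer lines quoted above: they are easy to pull out of a saved console dump programmatically. A minimal sketch, assuming the "Get console output" text was saved to a file; the file path is a placeholder.)

```python
# Scan a saved console dump (or /var/log/kern.log) for OOM-killer victims.
import re

OOM_RE = re.compile(r'Out of memory: Kill process (\d+) \((\S+)\)')

def oom_kills(path):
    """Yield (pid, process_name) for every OOM kill recorded in path."""
    with open(path, errors='replace') as log:
        for line in log:
            match = OOM_RE.search(line)
            if match:
                yield int(match.group(1)), match.group(2)

for pid, name in oom_kills('console-output.txt'):  # placeholder path
    print(f'OOM killer targeted pid {pid} ({name})')
```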
[09:53:28] Yes, it does. Did you read the linked bug?
[09:54:26] Tim ran into the same problems.
[09:54:33] ah, no, i just did
[09:55:13] that sounds right. my point was that the people responsible for the labs infrastructure should get to know about it, as it must affect all users
[09:55:57] the novainstance ui is not the best; looks like clicking reboot did not do anything. is there somewhere to follow up on the "reboot" progress?
[09:56:22] I guess the "Get console output"
[09:56:28] Yes.
[09:57:38] Analytics / Wikimetrics: Labs instances rely on unpuppetized firewall setup to connect to databases - https://bugzilla.wikimedia.org/69042#c2 (christian) (In reply to nuria from comment #1) > I do not remember us having to do this when we set up either dev or staging > when we first set them up, [...]...
[10:00:31] qchris, the wikimetrics queue is going crazy
[10:00:39] on staging
[10:01:05] I guess that's more or less expected. Recall that milimetric is using it for backfilling dashiki.
[10:01:52] but ... is it working? i thought using celery chains was killing the reports
[10:02:16] nuria: I have no clue about wikimetrics. You have been working on it for some months :-)
[10:02:56] sure, but i do not know what dan might have deployed to staging; when i left, the backfilling was NOT working.
[10:03:30] :-D
[10:03:54] Let's have a look then. What is the problem you meant by "is it working?"
[10:04:30] let me run your script, verify connectivity, check that the UI is working ... etc etc
[10:04:39] UI is working.
[10:04:40] and after that we'll see about the reports
[10:04:57] db connections to the local database are working.
[10:05:11] it 'seems' to work, but i just killed the queue so it does not really work
[10:05:34] I installed the DNAT rules.
[10:05:43] already?
[10:05:47] Just now.
[10:05:52] is the script on the instance?
[10:05:54] I am logging what I am doing.
[10:06:05] No.
[10:06:12] The script is not on the instance.
[10:06:27] so.. how did you run it?
[10:06:48] But just run "iptables -L -v -t nat" as root and you see that I injected the DNAT rules.
[10:07:05] I brought the script there, ran it.
[10:07:56] Restarted apache2
[10:08:18] why does apache need a restart?
[10:08:33] Received a "[warn] NameVirtualHost *:80 has no VirtualHosts" warning for the apache2 restart, but the webserver is up and serving pages.
[10:08:51] I do not know what you turned on/off, so I am restarting the services.
[10:09:08] Restarted wikimetrics-queue.
[10:09:35] queue produced ~100 processes.
[10:11:02] UI is responsive (not sure about working)
[10:11:06] Cohort validation works.
[10:12:58] Running reports works.
[10:20:38] Restarted queue.
[10:20:58] wait, let's not both do things at the same time
[10:21:46] So nuria, you feel comfortable taking it again?
[10:22:10] i thought i had already taken it; that is why i killed the queue and changed the scheduler
[10:22:29] Ok. I'll stop then again.
[10:22:37] staging is yours.
[10:23:07] i am not sure why it needs to backfill all the way when backfilling is not working though
[10:23:35] i think to show the prototype we do not need to show data from the very beginning.
[10:25:12] :-) That's nothing we decide. For example, look at https://metrics-staging.wmflabs.org/static/public/datafiles/NewlyRegistered/enwiki.json
[10:25:27] The first data point there is 2012-03-08.
[10:25:56] ya, but that is because the "backfilling" started then
[10:26:06] it might as well have started 2013-01-01
[10:26:10] Yes.
[10:26:19] But that's not our call, is it?
[10:26:52] is it not?
[10:27:00] for the prototype?
[10:27:24] It is certainly not my call, so I cannot provide further insight.
[10:27:51] there are 45,000 pending reports which i doubt will get done, so why schedule them?
[10:28:24] He. I am not trying to argue with you :-D
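(A quick, schema-agnostic way to eyeball the datafile linked above, e.g. to confirm the 2012-03-08 first data point. The exact JSON layout of the dashiki datafiles is deliberately not assumed here; the sketch just peeks at whatever shape comes back.)

```python
# Fetch one of the prototype's datafiles and peek at its structure.
import json
import urllib.request

URL = ('https://metrics-staging.wmflabs.org/static/public/'
       'datafiles/NewlyRegistered/enwiki.json')

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

if isinstance(data, dict):
    print('top-level keys:', sorted(data)[:10])
elif isinstance(data, list):
    print('first entry:', data[0])
```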
[10:28:42] besides the "console" ui, is there somewhere else to look at the logs of the machine?
[10:29:12] You'll find the logs in /var/log
[10:29:29] but those are for the apps on the machine
[10:29:35] not the machine itself
[10:29:51] right?
[10:30:08] is there a log in /var/log that has the OOM we were seeing earlier?
[10:31:06] no, right? cause that log is from the host machine, i believe
[10:31:11] nuria: "You taking care of it" and "Asking qchris for each and every step in an environment that qchris has no clue about, but nuria worked several months in" does not mix too well.
[10:31:52] OOM should be underneath /var/log.
[10:31:55] Let me check that.
[10:32:01] i can manage the wikimetrics part no problem, qchris; my question has to do with the host env of the vm
[10:32:28] nuria: You have been working on those machines for some months. I haven't.
[10:33:34] ok, nevermind. i thought you might be more familiar with the labs setup of hosts and vms, but i will ask on the labs channel. no worries.
[10:34:17] nuria: Found the OOM errors. Check /var/log/kern.log
[10:35:14] It seems /var/log/apache2/access.wikimetrics.log holds the app access logs
[10:35:34] And /var/log/apache2/error.metrics.log the corresponding apache errors.
[10:35:48] right, those i have looked at
[10:36:29] And /var/log/upstart/wikimetrics-*.log has the wikimetrics services' logs.
[10:36:48] Any other logs you need to have hunted down?
[10:37:49] i was just wondering about the OOMs; the other ones i had already looked at. i did not think the OOM ones would be on the vm itself
[10:38:28] Just so it does not get lost in scrollback ... I guess you saw my pointer to /var/log/kern.log.
[10:38:41] That one contains OOMs
[10:38:43] yes, i did, thank you.
[10:39:06] But OOM logs typically do not help too much.
[10:39:22] Just seeing them is sufficient evidence that things are in a bad state.
[10:39:58] but it is not access that kills the instance; i believe it is the run of the scheduler
[10:41:00] Yes, that's very plausible.
[11:03:49] qchris, i have stopped the reports on staging. the prototype works with files, so it should work with the current data
[11:06:19] nuria: Mhmmm. Ok. Your decision. Be sure to communicate it to the others.
[11:06:39] Yes, writing the e-mail right now.
[11:06:46] Coolio. Thanks.
[11:07:10] The prototype pulls from files, so all the data we have in files is "showcasable".
[11:08:04] I think it is better to ensure the prototype is up than anything else.
[11:22:44] qchris, e-mail sent. we have a lot of data for the prototype, so hopefully we are ok.
[12:25:53] Analytics / General/Unknown: Drop by ~80% in zero=250-99 tagged lines with "/wiki/" in URL in zero logs - https://bugzilla.wikimedia.org/69112#c7 (christian) PATC>RESO/FIX (In reply to christian from comment #6) > I'll do more checking, when more data has been fed into the pipeline. Things are b...
[12:26:24] Analytics / EventLogging: Cleaning up of some (?) EventLogging schemata for Growth - https://bugzilla.wikimedia.org/68931 (christian) a:christian
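(For reference, the log locations qchris hunted down above, bundled into one quick look-around. A rough sketch: the five-line tail is arbitrary, and the paths are copied verbatim from the conversation; run it on the instance itself.)

```python
# Print the tail of each wikimetrics-related log mentioned above.
import glob
from pathlib import Path

LOGS = ['/var/log/kern.log',                        # OOM-killer entries
        '/var/log/apache2/access.wikimetrics.log',  # app access log
        '/var/log/apache2/error.metrics.log']       # apache errors
LOGS += glob.glob('/var/log/upstart/wikimetrics-*.log')  # service logs

for path in LOGS:
    print(f'==> {path} <==')
    lines = Path(path).read_text(errors='replace').splitlines()
    print('\n'.join(lines[-5:]))
```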
[13:53:05] hi. could someone please invite me to the standup or share the hangout url?
[13:53:39] Analytics / Wikimetrics: Need to create a permanent and vetted version of the "editor_day" table - https://bugzilla.wikimedia.org/69145 (nuria) NEW p:Unprio s:normal a:None We need to create a permanent and vetted version of the "editor_day" table from the analytics-store box in labsdb. Then, cha...
[13:54:35] hello jgage
[13:54:45] i will try to invite you right now
[13:54:50] good morning nuria, thank you
[13:54:56] i mean good $localtime
[13:55:52] jgage: haha
[13:55:58] i just sent you the invite
[13:56:28] got it, thank you!
[14:30:12] do we use the same hangout for the staff meeting? the calendar event doesn't have a hangout url.
[14:31:30] yes jgage, same batcave
[14:31:46] ottomata: wanna join us too?
[14:31:53] hm, it says i'm the only one there
[14:32:10] jgage: http://goo.gl/1pm5JI
[14:32:17] thanks
[14:45:15] qchris_meeting, when you have a moment - i think we might have to take over graph generation until hadoop is in place. might need to pick your brain on where to run our script and where to dump results
[15:43:52] Analytics / Wikimetrics: Labs instances rely on unpuppetized firewall setup to connect to databases - https://bugzilla.wikimedia.org/69042#c3 (Tim Landscheidt) AFAIK, the replica DB servers were never accessible under the enwiki.labsdb:$STANDARDPORT scheme without additions to /etc/hosts and iptables o...
[15:55:25] jgage: let's do this!
[15:55:25] https://gerrit.wikimedia.org/r/#/c/151615
[15:55:31] fix alex's comments, and let's merge and proceed
[16:00:59] (Abandoned) Ottomata: Add select_missing_sequence_runs.hql [analytics/refinery] - https://gerrit.wikimedia.org/r/150569 (owner: Ottomata)
[16:14:04] yurik: Hey. That's an interesting twist.
[16:14:19] it will make it much easier for you :)
[16:14:20] yurik: From my point of view, it would make things easier for both of our teams.
[16:14:25] yep
[16:14:36] Did you discuss it with kevinator?
[16:14:51] Because they should know about it.
[16:14:53] question is - would it be enough to analyze the zero.tsv files?
[16:15:05] i haven't discussed it with anyone; at this point i'm researching feasibility
[16:15:18] zero.tsv and (mobile-sampled-100 or sampled-1000)
[16:15:31] Currently, we use zero.tsv and sampled-1000.
[16:15:48] why do you need both? doesn't zero.tsv contain everything in sampled?
[16:16:33] I sent you a url in private.
[16:16:45] "Daily Mobile WP Views By Country, PK" comes from sampled-1000
[16:16:55] That's all mobile traffic for that country.
[16:17:04] So non-zero-rated + zero-rated.
[16:17:24] You cannot get this line from zero.tsv alone.
[16:18:16] But both sampled-1000 and mobile-sampled-100 are only of medium size. One can easily process them on the command line.
[16:18:16] gotcha
[16:18:36] are we sending all non-zero traffic to /dev/null?
[16:18:40] except for sampled?
[16:19:03] No. udp2log contains everything.
[16:19:16] sampled consumes everything and samples 1:1000
[16:19:25] so in theory, if i analyze the udp2log files, i will be able to generate everything?
[16:19:34] zero consumes everything and greps for zero=[0-9]...
[16:19:56] mobile-sampled-100 consumes everything, filters to the mobile frontend varnishes, and samples 1:100.
[16:20:02] Yes.
[16:20:14] We also only analyze udp2log generated files.
[16:20:19] (Currently :-) )
[16:20:54] And since you can easily skip all the cruft we allotted, you can probably get things implemented quite fast.
[16:21:19] qchris, but hold on, does udp2log dump the whole log anywhere?
[16:21:31] Unsampled? No.
[16:21:52] would be nice )
[16:22:03] Yes, would be nice ... but huge :-D
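(To make the filter descriptions above concrete: a rough sketch of the three udp2log consumers qchris lists, applied to a stream of log lines. The tab-separated layout, the first field being the varnish host name, and the host names themselves are assumptions; the real filters are separate pipeline commands configured in udp2log.)

```python
# Rough model of the udp2log fan-out: every line goes past each filter.
import re
import sys

ZERO_RE = re.compile(r'zero=[0-9]')

def consume(lines, mobile_hosts=frozenset({'cp1046', 'cp1047'})):
    """Yield (filter_name, line) pairs, mimicking the three consumers."""
    total = mobile = 0
    for line in lines:
        total += 1
        if ZERO_RE.search(line):            # zero: grep for zero=[0-9]
            yield 'zero.tsv', line
        if total % 1000 == 0:               # sampled: 1:1000 of everything
            yield 'sampled-1000.tsv', line
        if line.split('\t', 1)[0] in mobile_hosts:
            mobile += 1
            if mobile % 100 == 0:           # mobile: 1:100 of mobile traffic
                yield 'mobile-sampled-100.tsv', line

for name, line in consume(sys.stdin):
    sys.stdout.write(f'{name}\t{line}')
```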
[16:22:14] hadoop
[16:22:22] The future that is :-)
[16:22:43] ok, so all this is on stat1002?
[16:22:57] Yes. /a/squid/archive is your friend.
[16:23:28] what's the best way to do it - where should i run the script, and how can it easily copy results to the gp. server?
[16:24:05] I'd run the scripts on stat1002. From there, push to some repo on gerrit. And from gp.wmflabs.org, pull the data in from gerrit.
[16:24:28] i don't want to expose the result data
[16:24:37] it should be private
[16:24:56] You're improving on every aspect of the pipeline. Awesome!
[16:25:05] (PS1) Yuvipanda: Track 'started' state of tasks as well [analytics/quarry/web] - https://gerrit.wikimedia.org/r/151873
[16:25:15] I am not too sure about prod vs. labs. I guess ottomata would know better
[16:25:23] how to get private prod data from stat1002 onto
[16:25:24] stat1002 is prod?
[16:25:25] (CR) Yuvipanda: [C: 2] Track 'started' state of tasks as well [analytics/quarry/web] - https://gerrit.wikimedia.org/r/151873 (owner: Yuvipanda)
[16:25:27] a labs instance.
[16:25:30] (Merged) jenkins-bot: Track 'started' state of tasks as well [analytics/quarry/web] - https://gerrit.wikimedia.org/r/151873 (owner: Yuvipanda)
[16:25:42] stat1002 is prod
[16:25:46] qchris, i would much rather host it as part of zero.wikimedia.org
[16:25:49] which is prod
[16:25:50] private data shouldn't go to labs, right?
[16:25:54] private data from prod to labs is kinda.... hard.
[16:25:56] it shouldn't go, yeah
[16:26:20] ideally, i want the analytics data to reside as part of zero.wikimedia.org
[16:26:40] i could host limn on zero
[16:26:46] which will plot it
[16:27:04] the data does not (at this point) need to be behind a login, just as long as it is not discoverable
[16:27:20] you should talk to milimetric as well. Limn is on the road to being replaced, and the new dashboard aims to be 'unified' - one dashboard location for everything
[16:27:20] e.g. behind some magic URL -- /bighexnumber/data
[16:27:41] yurikR1: tch tch :P security by obscurity ;)
[16:28:06] YuviPanda, nah, that bighexnumber IS the security :-P
[16:28:16] tch tch :P
[16:28:22] just because the password is in the URL doesn't mean it's not there :)
[16:28:43] but yes, we might want to put it behind a login as well
[16:28:57] so i will need a good prod location to store the data
[16:29:04] on a private instance
[16:29:11] that only my extension will access
[16:29:37] Analytics / General/Unknown: Turn off default vhost on stat1001.wikimedia.org - https://bugzilla.wikimedia.org/68150#c2 (Andrew Otto) NEW>RESO/FIX https://gerrit.wikimedia.org/r/#/c/147226/ and https://gerrit.wikimedia.org/r/#/c/151871/
[16:29:43] We have some private data behind htpasswd on stat1001.
[16:29:53] Not sure how much of an option that would be for you.
[16:30:06] nah, the login would be part of the zero wiki instance
[16:30:24] i will control the access rights
[16:30:40] Sure. ... just getting options out there :-)
[16:33:47] ottomata, might need to pick your brain re: copying result data from stat1002 to the zerowiki prod servers, or accessing stat1002 data directly from the zerowiki extension code
[16:34:00] also will really need to figure out which graphing library to use
[16:34:10] i don't want to spend too much time picking a lib
[16:34:17] YuviPanda might know a good source
[16:34:51] basic limn replacement with very little work, that can be easily hosted inside zerowiki
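(The "magic URL" idea from a few minutes earlier, made concrete: an unguessable path component makes the data non-discoverable, though as YuviPanda notes it is still security by obscurity; whoever holds the URL holds the "password". The path layout below is hypothetical.)

```python
# Generate an unguessable URL token for non-discoverable data.
import secrets

token = secrets.token_hex(32)   # 64 hex chars, 256 bits of randomness
print(f'https://zero.wikimedia.org/{token}/data')  # hypothetical layout
```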
[16:37:35] yurikR1: milimetric did work on this, moment [16:37:37] we copy files between stat* servers using restricted rsync modules [16:37:37] ottomata, i meant zero.wikimedia.org - a private wiki instance [16:37:44] is it a dedicated host? [16:37:51] yurikR1: https://github.com/wikimedia/mediawiki-extensions-Limn [16:37:55] what node does that run on? [16:38:03] ottomata, no, a regular "private" wiki [16:38:13] probably runs on all hosts [16:38:33] i don't think i know how private (misc?) wikis work [16:38:53] i suspect they are no different from public, except that their uploads dir is in a different location [16:39:01] what node do the files need to be on? [16:39:09] that is the question we need to answer :) [16:39:33] that's the real question :) I want zerowiki extension to be able to read that data [16:39:54] in order to passthrough it to the client [16:41:13] YuviPanda, is that a replacement for limn? [16:43:23] Analytics / General/Unknown: Turn off default vhost on stat1001.wikimedia.org - https://bugzilla.wikimedia.org/68150#c3 (christian) RESO/FIX>REOP Thanks! However, it seems urls like https://stat1001.wikimedia.org/limn-public-data/mobile/datafiles/alltime-numbers.csv are broken now, but are g... [16:46:47] qchris: did the https link work before? i'm not sure how it would have [16:46:56] I am not sure either. [16:47:01] I just saw it used. [16:47:10] i don't see anything in the old configs where https would have worked... [16:47:22] But the fix caught me by surprise ... so I could not check beforehand :-) [16:47:57] Neither did I see anything. There is a https redirect to metrics.wmflabs.org for me though. [16:48:10] So I am not sure with all the apache module reshuffling of late. [16:48:37] well, much of this hasn't changed due to apache module, this is just a vhost change [16:48:56] https://gerrit.wikimedia.org/r/#/c/147226/6/files/apache/sites/stat1001.wikimedia.org [16:48:57] was replaced with [16:48:57] https://gerrit.wikimedia.org/r/#/c/147226/6/files/apache/sites/datasets.wikimedia.org [16:49:16] Sure. But a few weeks back, puppet's apache part was reworked in general, wasn't it. [16:49:31] There, default vhosts handling changed too. [16:49:42] So maybe it worked before that. [16:49:57] I have no clue. [16:50:10] If you say the url didn't work before, I totally trust you. [16:50:47] haha, i do not claim that! but I don't see how it could have [16:50:58] Hahaha :-) [16:51:24] So regardless ... then ... should we make that url work? [16:52:56] After all ... people want protocol relative urls and expect https to work [16:53:02] E.g.: https://gerrit.wikimedia.org/r/#/c/147876/ [17:28:24] um, i don't think we should bother making https work for public-datasets, should we? [17:28:29] that woudl be a feature request! [17:28:39] i'm not trying to sign up for more work here! :p [17:29:59] Fair enough :-) [17:30:01] btw, i edited the url on that page [17:30:07] Cheater! [17:30:07] removed the s [17:30:32] But let's keep the bug open nonetheless. [17:30:42] We really should reach out to the list and announce the change. [17:30:57] So we can remove the redirect from stat1001->datasets at some point. [17:33:52] Analytics / General/Unknown: Turn off default vhost on stat1001.wikimedia.org - https://bugzilla.wikimedia.org/68150#c4 (christian) (In reply to christian from comment #3) > https://stat1001.wikimedia.org/limn-public-data/mobile/datafiles/alltime- > numbers.csv > [...] > Not sure though ... did they wo... 
[17:39:36] qchris: hm, ok
[17:43:01] (PS1) Nuria: Adding wikilytics_in_pageviews.csv wikilytics_in_wikistats_core_metrics.csv [analytics/reportcard/data] - https://gerrit.wikimedia.org/r/151885
[17:59:39] qchris, i assume the retrospective was cancelled, right?
[18:06:30] nuria: Yes. I think so too.
[18:20:39] (CR) MarkTraceur: [C: 1] "That's not a problem - users always survive the Thanks step, because it's the last step in the process!" [analytics/multimedia] - https://gerrit.wikimedia.org/r/150749 (owner: Gergő Tisza)
[18:22:59] (CR) Gilles: [C: 2] Query UploadWizard funnel data [analytics/multimedia] - https://gerrit.wikimedia.org/r/150749 (owner: Gergő Tisza)
[18:23:04] (Merged) jenkins-bot: Query UploadWizard funnel data [analytics/multimedia] - https://gerrit.wikimedia.org/r/150749 (owner: Gergő Tisza)
[18:25:59] (CR) MarkTraceur: [C: 2] "This seems fine. Thanks, tgr." [analytics/multimedia/config] - https://gerrit.wikimedia.org/r/150750 (owner: Gergő Tisza)
[18:28:31] (CR) MarkTraceur: [V: 2] Query UploadWizard funnel data [analytics/multimedia/config] - https://gerrit.wikimedia.org/r/150750 (owner: Gergő Tisza)
[18:44:44] (CR) QChris: [C: 2] "Date format changed from %Y/%m/%d to %Y-%m for some files," [analytics/reportcard/data] - https://gerrit.wikimedia.org/r/151885 (owner: Nuria)
[18:45:33] (CR) QChris: [V: 2] Adding wikilytics_in_pageviews.csv wikilytics_in_wikistats_core_metrics.csv [analytics/reportcard/data] - https://gerrit.wikimedia.org/r/151885 (owner: Nuria)
[18:50:47] (CR) Terrrydactyl: [C: -1] [WIP] Add ability to global query a user's wikis (1 comment) [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/129858 (owner: Terrrydactyl)
[20:08:27] (PS1) Ottomata: Add shell actions for triggering an icinga passive check via send_nsca [analytics/refinery] - https://gerrit.wikimedia.org/r/151957
[20:36:36] (PS2) Ottomata: Add shell actions for triggering an icinga passive check via send_nsca [analytics/refinery] - https://gerrit.wikimedia.org/r/151957
[20:39:37] (CR) Ottomata: "Getting this working is dependent on:" [analytics/refinery] - https://gerrit.wikimedia.org/r/151957 (owner: Ottomata)
[21:18:24] Analytics / General/Unknown: Update reportcard for 2014-07 - https://bugzilla.wikimedia.org/69156 (christian) NEW p:Unprio s:normal a:None Reportcard needs an update for the August meeting.
[21:20:23] Analytics / General/Unknown: Update reportcard for 2014-07 - https://bugzilla.wikimedia.org/69156#c1 (christian) NEW>RESO/FIX Done in Ib842824802dfc0d7346c6a349ac472330c779a7d and If6024d41f815137a478595c1d5a3aa309ef9544a.
[21:24:23] Analytics / General/Unknown: Update reportcard for 2014-08 - https://bugzilla.wikimedia.org/69156 (christian)
[21:31:25] (CR) QChris: [C: -1] "I haven't checked whether the current change is working or" [analytics/refinery] - https://gerrit.wikimedia.org/r/151957 (owner: Ottomata)
[22:26:30] (CR) QChris: "Sorry." [analytics/refinery] - https://gerrit.wikimedia.org/r/151957 (owner: Ottomata)