[00:32:02] (CR) Kaldari: Fix Thanks graph on mobile reportcard (1 comment) [analytics/limn-mobile-data] - https://gerrit.wikimedia.org/r/162779 (owner: Kaldari)
[00:33:28] (CR) Kaldari: Fix Thanks graph on mobile reportcard (1 comment) [analytics/limn-mobile-data] - https://gerrit.wikimedia.org/r/162779 (owner: Kaldari)
[11:05:05] (CR) QChris: Add Oozie setup to generate webrequest tsvs (1 comment) [analytics/refinery] - https://gerrit.wikimedia.org/r/162589 (owner: QChris)
[11:59:24] Analytics / General/Unknown: kafkatee not consuming for some partitions - https://bugzilla.wikimedia.org/71056 (christian)
[11:59:26] Analytics / Refinery: Kafkatee generated files in /a/log/webrequest not updating since 2014-09-18 - https://bugzilla.wikimedia.org/71290 (christian) NEW p:Unprio s:normal a:None Nothing that needs a fix, as the files have not been productionized, but just so we have a place to track: kafka...
[12:08:11] morning milimetric :)
[12:08:46] hi Ironholds
[12:09:20] brb i gotta let the chickens out
[12:13:44] have fun :)
[12:13:52] * Ironholds has to frantically upgrade ALL THE THINGS. Fuckin' BASH.
[12:15:12] Ironholds: can I help?
[12:15:26] milimetric, not unless I give you root on my various machines ;p
[12:15:51] in that case /me sends moral support
[12:15:51] did you not see the bash bug?
[12:16:00] no :(
[12:16:10] omg, just looked it up
[12:16:10] ugh
[12:16:36] TL;DR turns out assigning a function to a variable executes code trailing the function definition
[12:16:37] ...whoops.
[12:16:54] so, 12.04 and 14.04 LTS are both compromised. I've now patched all of mine! :D
[12:17:26] but upgrading via terminal in a way that requires restarts is always interesting
[12:19:30] * YuviPanda continues refactoring ops/puppet
[12:19:34] ubuntu claims a normal software update will fix it, i'm running that now
[12:20:48] totally
[12:20:49] it just uh.
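The "bash bug" being frantically patched here is Shellshock (CVE-2014-6271), disclosed the day before this log: a function definition passed through the environment caused bash to execute any commands trailing the function body. The widely circulated one-line probe (not from the chat itself) demonstrates it:

```shell
# Export a function definition with a trailing command through the
# environment. An unpatched bash executes the trailing `echo vulnerable`
# while parsing the environment; a patched bash only runs the inner echo.
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
```

On a patched bash this prints only `this is a test`; on a vulnerable one, `vulnerable` appears first. This is why the 12.04 and 14.04 boxes needed the bash package update mentioned above.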
[12:20:53] * Ironholds scratches
[12:21:04] well, turns out I kinda..forgot to upgrade from 12.04 on one of my remote machines
[12:21:08] so..that's fixed now.
[12:21:18] YuviPanda, you're refactoring ops?!
[12:21:26] heh
[12:21:28] if I find ottomata's head on guiseppe's body I'm going to be very confused
[12:21:29] and worried
[12:21:30] parts of the repository
[12:21:32] hehe
[12:21:34] because: where did you put the rest of ottomata.
[12:21:47] you can find out via git blame and git log
[12:21:57] git bisect sounds like
[12:22:05] * Ironholds rimshots
[12:22:07] hahaha
[12:22:08] :)
[12:22:08] permission to stick this on -bash?
[12:27:01] oh you mean bash.org? heh, I don't mind
[12:27:10] milimetric, naw, the office.wikimedia bash page
[12:27:12] YuviPanda?
[12:27:25] there'
[12:27:29] s a wikimedia bash page?
[12:27:32] it's a publicly logged channel, so I expect everything I say here to be public :)
[12:27:53] milimetric, thereis! on office wiki
[12:29:59] Ironholds: omg greatest thing
[12:30:06] for those unaware: https://office.wikimedia.org/wiki/Bash
[12:30:22] yup
[12:30:39] WikipediaApp/v2.0 (Android 4.4; tablet)
[12:30:39] WikipediaApp/v2.0 (Android 4.4; Tablet)
[12:30:39] hah!
[12:30:39] hah!
[12:30:39] ...
[12:30:39] user agents: reducing confidence in the theory that we are different people since 1999
[12:30:54] hehehe :D
[12:31:21] ahh, memories
[12:32:03] * YuviPanda pats Ironholds
[12:32:09] it was a simpler time
[12:37:23] Analytics / General/Unknown: Packetloss_Average alarm on udp2log machines on 2014-09-20 - https://bugzilla.wikimedia.org/71116#c2 (christian) NEW>RESO/FIX (In reply to christian from comment #1) > and Ops said to > have a proper incident report today (2014-09-22). Since I still could not find a...
[13:07:12] morning ottomata :)
[13:08:17] morning!
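The "where did you put the rest of ottomata" question is answerable exactly as suggested: `git log --follow` traces a file's history across a rename, and `git blame` attributes each surviving line. A self-contained sketch with a throwaway repo (all names and file contents here are invented, not from ops/puppet):

```shell
# Build a throwaway repo, move a file, then trace the content across the rename.
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.name demo && git config user.email demo@example.org
echo 'role::analytics' > roles.pp
git add roles.pp && git commit -qm 'add role'
git mv roles.pp refactored.pp && git commit -qm 'refactor: move role'
# --follow keeps tracking the content through the rename, so both commits show:
git log --oneline --follow -- refactored.pp
# `git blame refactored.pp` would attribute each line to the original commit,
# and `git bisect` binary-searches history for the commit that broke something.
```

`git bisect` is, as the rimshot implies, for finding *when* something broke rather than *where* it went.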
[13:16:22] (CR) Ottomata: [C: 2 V: 2] Require mark_directory_done_workflow_file in partition adding bundle [analytics/refinery] - https://gerrit.wikimedia.org/r/162530 (owner: QChris)
[13:16:54] (PS3) Ottomata: Tag property descriptions as 'description' in worklow files [analytics/refinery] - https://gerrit.wikimedia.org/r/162531 (owner: QChris)
[13:17:21] (CR) Ottomata: [C: 2 V: 2] Tag property descriptions as 'description' in worklow files [analytics/refinery] - https://gerrit.wikimedia.org/r/162531 (owner: QChris)
[13:17:26] Ironholds: waazzzuuup? :)
[13:17:43] ottomata, saying hi?
[13:17:50] running hive queries, having them not break on me
[13:17:54] admiring the responsiveness of the system
[13:18:11] awesome!
[13:18:29] one question, actually
[13:18:38] what's the limit on the number of values in a ()?
[13:18:47] so, WHERE foo IN('bar','baz','qux'....)
[13:19:47] good Q
[13:19:48] i dunno!
[13:19:54] i am googling but not finding any answer
[13:20:08] maybe ask here?
[13:20:09] https://hive.apache.org/mailing_lists.html
[13:20:15] makes sense. I'm already on help
[13:20:18] :)
[13:20:24] plan B was "construct the statement I wanna test and see if it runs" ;)
[13:20:44] * YuviPanda should play with hive more at some point
[13:20:51] YuviPanda, damn right boy. It's great.
[13:20:57] It's like if SQL was strict about syntax
[13:21:00] oh, and contained data you wanted.
[13:21:06] heh
[13:21:16] hive is my favourite thing since...damn. hmn.
[13:21:18] * Ironholds thinks
[13:21:19] hip-hop.
[13:21:21] haha
[13:21:22] :D
[13:21:22] * Ironholds nods firmly
[13:21:24] HHVM?
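As far as I know, Hive documents no hard cap on the number of values in an `IN()` list (practical limits come from query-string size and planner memory), so the "plan B" above of generating the statement and seeing whether it runs is a reasonable test. A sketch of generating such a clause from a file of values; the `requests` table and `foo` column are invented for illustration:

```shell
# One value per line; single-quote each value (\047 is ') and comma-join
# them into an IN(...) list.
terms=$(mktemp)
printf '%s\n' bar baz qux > "$terms"
in_list=$(awk '{ printf "%s\047%s\047", (NR > 1 ? "," : ""), $0 }' "$terms")
echo "SELECT * FROM requests WHERE foo IN(${in_list});"
# One could then feed the generated statement to `hive -e` and scale up
# the value list until (or unless) something breaks.
```

This produces `SELECT * FROM requests WHERE foo IN('bar','baz','qux');`.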
[13:21:27] :)
[13:21:28] no, the genre
[13:21:55] Ironholds: i'm excited about this, once it happens:
[13:21:55] http://blog.cloudera.com/blog/2014/07/apache-hive-on-apache-spark-motivations-and-design-principles/
[13:21:59] that will be so coooooOOOl
[13:22:09] basically, that is an in-memory hive engine
[13:22:31] should be able to run queries faster, especially more complicated ones with many job phases
[13:22:40] huh; interesting
[13:22:48] there's actually a fairly strong Spark community in Boston
[13:23:57] * YuviPanda wants to setup a hadoop/spark/hive thing on labs with dumps imported
[13:27:30] what dumps?
[13:27:41] if you wanna be really really fun, I'd love to see someone prototype storing the actual pagedumps
[13:32:15] dump content?
[13:32:46] like wikipage xmldumps?
[13:32:55] yeah, i'd really like to play with that tooOOooOo
[13:32:57] yup
[13:33:00] seeeee?
[13:33:30] YuviPanda, I ask you, Honourary Opsen to Honourary NumbersMonkey, to play with inserting the XML dumps into a hadoop instance and using spark as a hive engine over the top.
[13:33:38] * Ironholds kneels
[13:34:03] Ironholds: yeah, I meant xmldumps
[13:34:25] geeeewd
[13:34:27] Ironholds: also, perhaps, wikidata JSON dumps
[13:34:35] oh god
[13:34:41] but then we'd actually be able to /access wikidata data/
[13:34:45] it's not designed for *that*
[13:34:47] in fact, wikidata json dumps -> mysql
[13:34:53] would be a nice side project
[13:37:13] Ironholds: magnus already has such a thing, btw (dumps -> sql)
[13:37:17] I should just co-ordinate with him
[13:38:11] halfak: you around too?
[13:39:18] ottomata: he might be on the way to a plane station
[13:52:53] Analytics / General/Unknown: X-Analytics header is "php=zend;php=zend" instead of "php=zend" on bits for some requests - https://bugzilla.wikimedia.org/70463#c4 (Giuseppe Lavagetto) RESO/FIX>REOP Hi, while in some cases this would be expected, this needs to be figured out better; also, I guess w...
[13:58:22] Analytics / General/Unknown: X-Analytics header is "php=zend;php=zend" instead of "php=zend" on bits for some requests - https://bugzilla.wikimedia.org/70463#c5 (Andrew Otto) Are you all aware of this from Ori? https://gerrit.wikimedia.org/r/#/c/157841/
[13:59:47] (CR) Ottomata: "Looking good." (2 comments) [analytics/refinery] - https://gerrit.wikimedia.org/r/162589 (owner: QChris)
[14:13:23] Analytics / General/Unknown: X-Analytics header is "php=zend;php=zend" instead of "php=zend" on bits for some requests - https://bugzilla.wikimedia.org/70463#c6 (christian) (In reply to Andrew Otto from comment #5) > Are you all aware of this from Ori? > > https://gerrit.wikimedia.org/r/#/c/157841/ M...
[14:33:38] Analytics / Refinery: Kafkatee generated files in /a/log/webrequest not updating since 2014-09-18 - https://bugzilla.wikimedia.org/71290#c1 (Andrew Otto) This is known. kafkatee is off, as it is not working at all right now. Should I turn it back on, even though we know the output is incomplete?
[14:46:27] Hey ottomata. I saw your ping as soon as I logged back in.
[14:46:30] What's up?
[14:47:25] oh, just was chatting with oliver and folks about analytics stuff
[14:47:28] and Fabian in another chat
[14:47:36] was going to call him in here and get you guys to talk about things he could work on
[14:48:06] I'm a bit distracted ATM. In an airport terminal. But I'd still love to have that conversation.
[14:50:02] cool, np
[14:50:09] btw, qchris_meeting: https://github.com/declerambaul/WikiScalding/tree/master/src/main/scala/org/wikimedia/scalding
[14:50:09] :p
[14:50:11] Woops. need to change spots. brb
[14:51:15] ottomata: :-D Cool
[15:04:19] (PS3) Milimetric: Add Rolling Recurring Old Active Editor [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/161521 (https://bugzilla.wikimedia.org/69569)
[15:05:14] nuria_: new patch should be faster, but the optimization wasn't as easy as I thought
[18:01:54] qchris_meeting: fyi, i'm playing with fuse more in labs, i'm going to see what happens if namenode is offline and a reboot a node that has hdfs fuse mount in fstab
[18:02:16] Awesome!
[18:02:26] The qchris_ cluster is all yours :-)
[18:03:33] hah, nice, whne namenode is offline
[18:03:34] d????????? ? ? ? ? ? hdfs
[18:03:37] heh
[18:03:41] but that's fine
[18:05:53] Analytics / Refinery: Kafkatee generated files in /a/log/webrequest not updating since 2014-09-18 - https://bugzilla.wikimedia.org/71290#c2 (christian) (In reply to Andrew Otto from comment #1) > Should I turn it back on, even though we know the output is incomplete? No, there's no need to. This bug...
[18:06:07] k cool :()
[18:06:08] :)
[18:06:22] Thinking about it ... the production cluster uses HA. The qchris cluster doesn't
[18:06:38] Hadoop acts a bit different on HA than on non-HA.
[18:06:48] Should we boost the qchris cluster to HA?
[18:07:15] qchris, you should do anything that leads to you ending more sentences with HA
[18:07:29] because for someone who doesn't know where it stands for it just looks like you find everything hilarious, and that's amusing to read.
[18:07:29] What do you mean, ha?
[18:07:31] *what it stands for
[18:07:40] :-P
[18:10:07] haha
[18:10:35] hm, qchris, that is curious, i think in our case it won't really matter for this experiemnt, but perhaps the qchris cluster should be boosted to HA
[18:10:46] it is more annoying to make the cluster HA after it already exists
[18:10:49] but is possible
[18:11:04] and, instructions!
[18:11:04] https://github.com/wikimedia/puppet-cdh#adding-high-availability-to-a-running-cluster
[18:11:15] see, one link and you ruin my blissful ignorance.
[18:14:37] For the work I do, non-HA is fine ... but if you want to test fuse against HA please feel free to upgrade.
[18:36:57] ottomata: I went through the filters in filters.{oxygen,erbium}.erb, and we /could/ implement all of them using Hive.
[18:36:59] only two are not totally trivial:
[18:37:02] * packet-loss. But if I am not mistaken, we don't need them any
[18:37:06] longer, if we turn off those udp2log instances, as we're monitory
[18:37:08] kafka, hadoop, ... differently.
[18:37:12] * glam_nara files, as they geolocate. Do they need to geolocate?
[18:37:15] But even if they need to geolocate, I checked out the udp-filters,
[18:37:18] and it does not consider the X-Forwarded-For, so it's a simple UDF.
[18:37:21] Are there other fundraising uses?
[18:38:33] we do not need packet-loss at all
[18:38:44] i believe that glam_nara needs to geolocate
[18:38:57] packet-loss.log is for udp2log monitoring only
[18:39:09] I thought so about the packet-loss monitoring.
[18:39:19] The UDF for nara would be simple enough.
[18:39:23] aye
[18:39:39] i talked to magnus briefly today, he hasn't had time to look at our bug
[18:39:41] So no other fundraising use of udp2log that we need to take care of?
[18:39:43] he said hopefully next week
[18:39:46] k.
[18:39:54] fundraising yes, on erbium, ja?
[18:40:03] Yes, but they are simple filters.
[18:40:18] The'd be easy to migrate.
[18:40:29] yessssss, but they go to a different directory and are rotated in a special way by some fundraising scripts
[18:41:06] Yes. But we could make the Hive generated scripts look like the udp2log generated ones.
[18:41:49] they also do something wierd with an nfs mount? netapp? something
[18:41:53] but yes, what you say is possible :)
[18:42:10] erbium even does not see nginx, ... so nothing we could not reproduce.
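The geolocation point above hinges on which address gets geolocated: udp-filters ignored X-Forwarded-For, so proxied requests were located at the proxy rather than the client. The UDF being proposed would prefer the leftmost X-Forwarded-For entry (the original client) when the header is present. A minimal shell sketch of that selection logic, with invented addresses (a real UDF would also have to validate the entries, since the header is client-controlled):

```shell
# Pick the address to geolocate: leftmost X-Forwarded-For entry if the
# header is present, otherwise the connecting IP.
remote_ip="198.51.100.7"                        # example connecting address
xff="203.0.113.5, 198.51.100.7, 192.0.2.1"      # example header value
if [ -n "$xff" ]; then
    client_ip=${xff%%,*}    # strip everything from the first comma onward
else
    client_ip=$remote_ip
fi
echo "$client_ip"
```

Here the chosen address is `203.0.113.5`, the first hop in the chain.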
[18:42:10] btw, see rotate_fundraising_logs script in puppet
[18:42:28] Yup.
[18:42:34] I do not aim to change those :-)
[18:44:35] Ok. I think that finally paves a way forward without kafkatee ... if we have to :-/
[18:51:18] (PS1) Milimetric: Fix json syntax error [analytics/limn-mobile-data] - https://gerrit.wikimedia.org/r/162945
[18:51:34] (CR) Milimetric: [C: 2] Fix json syntax error [analytics/limn-mobile-data] - https://gerrit.wikimedia.org/r/162945 (owner: Milimetric)
[18:52:05] qchris, i think you are doing the right thing but it makes me kinda sad!
[18:52:13] i like kafkatee a lot and i think its really good! if only it worked 100%!
[18:53:24] I'd also like to see kafkatee come back to live for us.
[18:53:31] brb, gotta pick up my car
[18:53:36] s/live/life/
[19:02:55] Analytics / General/Unknown: Make kafka write to graphite (instead of / as well as) ganglia - https://bugzilla.wikimedia.org/71322 (Yuvi Panda) NEW p:Unprio s:normal a:None To work towards removing check_ganglia from our infrastructure!
[19:35:18] (CR) Jdlrobson: [C: 2] "whatever" [analytics/limn-mobile-data] - https://gerrit.wikimedia.org/r/162779 (owner: Kaldari)
[19:35:23] (Merged) jenkins-bot: Fix Thanks graph on mobile reportcard [analytics/limn-mobile-data] - https://gerrit.wikimedia.org/r/162779 (owner: Kaldari)
[19:47:09] Analytics / General/Unknown: Make kafka write to graphite (instead of / as well as) ganglia - https://bugzilla.wikimedia.org/71322#c1 (Andrew Otto) If you are very interested in making this happen, here are the relevant places! https://github.com/wikimedia/operations-puppet/blob/production/manifests/r...
[20:07:52] ottomata: could you delete from line 114 to the end of /a/limn-public-data/mobile/datafiles/thanks-daily.csv on stat1003? (I don't have root)
[20:08:00] basically, all the entries that have 0s in them
[20:08:09] (but not the first few, those don't matter)
[20:09:21] to the end?
[20:09:26] the most recent lines don't have 0s
[20:09:27] milimetric: ^
[20:09:38] only throuhg 2014-08-30
[20:09:45] ottomata: one second,
[20:10:00] ottomata: never mind!
[20:10:04] ok!
[20:10:09] it's backfilling by itself
[20:10:14] i have no idea how that's happening :)
[20:28:58] (PS1) Yurik: opera value in graphs [analytics/zero-sms] - https://gerrit.wikimedia.org/r/162970
[20:29:11] (CR) Yurik: [C: 2] opera value in graphs [analytics/zero-sms] - https://gerrit.wikimedia.org/r/162970 (owner: Yurik)
[20:29:26] (CR) Yurik: [V: 2] opera value in graphs [analytics/zero-sms] - https://gerrit.wikimedia.org/r/162970 (owner: Yurik)
[20:36:25] Analytics / Dashiki: Cannot add project to dashboard - https://bugzilla.wikimedia.org/71333 (Kevin Leduc) NEW p:Unprio s:normal a:None Not sure exactly how to reproduce... It doesn't seem to happen every time. 1- Load or refresh dash 2- type 'armenian' in search box 3- click 'Armenian' 4-...
[20:50:23] Analytics / Dashiki: Cannot add project to dashboard - https://bugzilla.wikimedia.org/71333#c1 (Kevin Leduc) This one time it was happening, I tried over and over to add the metric and it didn't work. Even reloading the page doesn't work every time. I removed all metrics and then it worked. I might...
[20:50:34] (CR) Nuria: [C: 2] Use archive_userindex for speed [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/161365 (owner: Milimetric)
[20:50:44] (Merged) jenkins-bot: Use archive_userindex for speed [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/161365 (owner: Milimetric)
[20:51:11] (CR) Nuria: [C: 2] Make apply timeseries more flexible [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/161366 (owner: Milimetric)
[20:51:25] (Merged) jenkins-bot: Make apply timeseries more flexible [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/161366 (owner: Milimetric)
[21:09:04] qchris: stat1002
[21:09:06] ls /mnt/hdfs
[21:09:06] :)
[21:09:17] I saw the commit in gerrit :-D
[21:09:30] Yay!
[21:09:31] Nice!
[21:09:35] :-D
[21:09:55] oh boy
[21:09:56] ls /mnt/hdfs/wmf/data/archive/webstats/2014/2014-09
[21:10:09] Haha.
[21:10:55] That is really great \o/
[21:10:58] whoaaaa i'm comparing to webstats collector
[21:11:04] just visually
[21:11:28] pretty close, mostly exact!
[21:11:35] Yup.
[21:11:45] some things missing maybe?
[21:11:57] Missing?
[21:12:15] i'm only just looking at the top of projectcounts-20140925-190000
[21:12:29] ah no, i think yours has mroe
[21:12:30] sorry
[21:12:47] hive has more. right. zero-dot and m-dot.
[21:12:56] ah m.
[21:12:56] yeah
[21:13:00] that's what i'm seeing
[21:13:26] And the files grew a bit. But that is expected :-/
[21:13:38] this looks awesooome
[21:13:47] so um, should we rsync this somewhere?
[21:13:49] dumps probalby, eh?
[21:14:04] We should! But we should also wait for a firm "yes" from legal.
[21:14:12] haha, maybe so
[21:14:15] are we asking them?
[21:14:30] why would they care in this case? there is no PII, and it is the same dataset that is already out there
[21:14:31] Right. Who cares about legal. Let's sync them over already.
[21:14:32] just with some bonuses
[21:14:34] hahah
[21:14:51] Zero is pretty sensitive about seeing their data used.
[21:15:02] So I reached out to them and legal.
[21:15:12] We have three "it's probably ok" up to now.
[21:15:21] oh zero
[21:15:22] ok.
[21:15:44] naw let's wait.
[21:17:05] But there is nothing that stops us from preparing (not merging) the rsync jobs.
[21:17:15] Because, if they say that we're not allowed
[21:17:27] we have to fix the hive query, nothing else.
[21:17:53] hm ok
[21:17:56] so where to then?
[21:18:08] I'd prefer dumps, because
[21:18:12] dumps.wikmedia.org/other/webstats/{page,project}counts/?
[21:18:19] pagecounts-raw and pagecounts-ez are already there.
[21:18:22] yes
[21:18:24] i think I agree
[21:18:32] http://dumps.wikimedia.org/other/pagecounts-hive/
[21:18:48] http://dumps.wikimedia.org/other/pagecounts-kafka/
[21:18:51] http://dumps.wikimedia.org/other/pagecounts-hadoop/
[21:18:58] JAAaaaaa. I thought about that too, it sticks what is ther enow
[21:19:04] but i'd rather not put the tool in the name...>>...hm
[21:19:10] http://dumps.wikimedia.org/other/pagecounts-raw-cluster/
[21:19:20] http://dumps.wikimedia.org/other/pagecounts-raw-no-frigging-udp2log/
[21:19:23] hahah
[21:19:48] pagecounts-qchris!
[21:19:48] haha
[21:19:56] Haahaha
[21:20:07] we already got -ez, why not eh? :p
[21:20:34] http://dumps.wikimedia.org/other/pagecounts-{raw-,}hive
[21:20:44] would be nice, but I aggree to your "no tool name" argument.
[21:21:01] What's the new thing about the files?
[21:21:07] They are coming from the /cluster/.
[21:21:14] Ha!
[21:21:22] http://dumps.wikimedia.org/other/pagecounts-all
[21:21:27] http://dumps.wikimedia.org/other/pagecounts-all-sites
[21:21:35] http://dumps.wikimedia.org/other/pagecounts-raw-with-mobile
[21:22:00] http://dumps.wikimedia.org/other/pagecounts-full
[21:22:16] They are less prone to packet loss
[21:22:26] They contain all sites (see above)
[21:22:46] The are the second iteration of the webstats concept
[21:24:06] They do not change the meaning of the lines in the old files.
[21:24:56] http://dumps.wikimedia.org/other/pagecounts-progeny
[21:24:59] http://dumps.wikimedia.org/other/pagecounts-offspring
[21:26:12] http://dumps.wikimedia.org/other/pagecounts-refinery
[21:26:27] Meh ... that was again a "tool" name.
[21:26:37] at least in some sens.
[21:27:04] http://dumps.wikimedia.org/other/pagecounts-1.5 (since it's no longer 1.0, but also not 2.0)
[21:27:52] Oh ... I see. Mount problems. Let's shelf the discussion for now.
[21:47:04] qchris: w00t
[21:47:48] :-)
[21:48:16] But it sounds better than it is.
[21:48:37] It's only the webstatscollector pageview definition extended to mobile and zero sites.
[21:49:08] qchris: yes, I was just poking kevinator to have more context
[21:49:32] qchris: I think we should be mindful about how we announce this data, it’s awesome but it might create some confusion:
[21:49:43] for some time there will be 3 different PV dumps
[21:49:47] WSC-legacy
[21:49:49] WSC-new
[21:49:56] new-new
[21:50:07] Yup.
[21:50:28] DarTar, then we get so-old-it's-new-and-cool-again
[21:50:34] and the dumps do a revival tour around the world
[21:50:44] breaking up in Europe when their drummer spontaneously combusts
[21:51:31] the two WSC-* are a fantastic way for people to measure discrepancies caused by operational issues (like the underreporting in the legacy system)
[22:12:06] hmm, i'm more ok with -refinery..maybe, but like your ideas about describing
[22:12:12] how about
[22:12:13] plus?
[22:12:14] heheh
[22:12:17] pagecounts-plus?
[22:12:17] :p
[22:12:18] :D
[22:12:27] haha
[22:12:29] -pro
[22:12:32] -platinum
[22:12:47] btw, Ironholds,
[22:12:56] ls /mnt/hdfs/user/ironholds
[22:12:59] on stat1002 :)
[22:13:29] qchris: does the HDFS mount mean we can trivially write Hive queries that are sync'd and available to the 'outside' world? :)
[22:13:35] ottomata: ^
[22:13:43] also
[22:13:55] ls /mnt/hdfs/var/log/hadoop-yarn/apps/ironholds/logs
[22:14:05] that means you can cat /grep your application logs pretty easy now!
[22:14:07] we have logs? OH SWEET BABY JESUS
[22:14:18] * Ironholds cries tears of joy
[22:14:26] i mean, they have always been there
[22:14:29] but you ahd to hdfs dfs -cat
[22:14:32] finally, the 300 line Java exceptions can be replaced by 900 line Java exceptions that actually tell me what broke
[22:14:33] now you can just cat
[22:14:40] they will also soon be in logstash :)
[22:14:49] oooh, logstash. nice
[22:15:28] ottomata: I am not much fan of "pagecounts-plus", because what would we call the pagecount that follow the new fancy shiny definition then? There is no better than plus (except for double plus ... maybe).
[22:16:27] ++
[22:16:27] haa
[22:16:37] Actually "pageviews-double-plus-good" would be nice. Reflects Big Brother quite nicely :-D
[22:16:41] uhm, we will not call them pagecounts-anything
[22:16:47] they will have a new name altogether
[22:16:52] pageviews maybe?
[22:16:52] dunno
[22:16:55] articleviews?
[22:17:42] pagecounts++
[22:17:49] and then we can replace it with Qagecounts
[22:17:57] which will actually be seven different projects, by seven different people
[22:18:05] all of which will exist concurrently to Pagecounts#
[22:18:07] Not sure ... actually ... I am nontheless not too much fan of "pagecounts-plus".
[22:18:25] then Pagecounts++ 11 comes out and we automatically generate pageviews, thus undermining the entire point of the system
[22:18:33] * Ironholds is not annoyed by the automatic type detection, honest.
[22:19:07] Ken we also get "K&R pagecounts" then?
[22:19:15] yup
[22:19:40] Then pagecounts+ it is.
[22:19:53] wait, straight K&R or Stroustrup's modified version?
[22:21:32] Back to naming :-)
[22:21:40] haha
[22:21:48] ottomata: you like pagecounts-plus?
[22:21:59] I think it makes them sound better than they are.
[22:22:12] pagecounts-complete :p
[22:22:22] After all ... the webstatscollector pageview definition does no longer match our access paths.
[22:22:25] no change in the def, but no loss and more inclusive
[22:22:30] kafcounts
[22:22:34] or kafkounts
[22:22:39] kounts
[22:22:45] Krusty the Kounts
[22:22:53] Count Logula
[22:23:04] pagecounts-complete would be nice. But the webstatscollector pageview definition ignores the api :-/
[22:23:24] we’ll have -more-complete for the 3rd version :)
[22:23:28] -completest
[22:23:33] --completissimo
[22:24:22] Can we please go with Count Logula? For something?
[22:24:29] I mean, ideally something related to counting logs.
[22:26:12] -new?
[22:26:16] hah
[22:26:42] Ironholds: was that a Herbie Hancock reference?
[22:27:01] no, it was a Vlad Dracul reference
[22:27:05] milimetric's most famous countryman!
[22:27:11] Ironholds: sad face
[22:27:23] "-new" is also not working out, once we have the "new-new"
[22:27:28] ottomata: I think we can’t go with -new for the reasons mentioned above
[22:27:31] ditto
[22:27:38] okay. so..2?
[22:27:42] I mean, we can increment indefinitely.
[22:28:04] why not, after all version numbers were invented for a reason :)
[22:28:15] ah, but then we have to follow semantic versioning
[22:28:22] which I think makes this version 1.5.0
[22:28:27] or 1.1.0?
[22:28:55] Mhmmmm
[22:29:03] I guess I am too tired for naming.
[22:29:22] how do you feel about cache invalidation?
[22:29:26] we can solve this tomorrow.
[22:29:38] miss/404
[22:30:07] heh
[22:32:06] Good night everyone!
[22:32:33] goodnight!
[22:32:34] haha
[22:32:50] me too, i'm out
[22:32:51] latesr!
[23:01:03] (CR) Nuria: "Performance on patch #4 is worst than patch #3." [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/161521 (https://bugzilla.wikimedia.org/69569) (owner: Milimetric)
[23:05:41] Analytics / Wikimetrics: 1st run of recurrent report has empty resultset - https://bugzilla.wikimedia.org/71338 (nuria) NEW p:Unprio s:normal a:None 1st run of recurrent report has empty resultset cause it runs not for teh day scheduled but for the day after.
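Earlier in the log (around 22:13) the new /mnt/hdfs FUSE mount was celebrated because YARN application logs become ordinary files: plain `cat`/`grep` replaces `hdfs dfs -cat`. A runnable sketch of that workflow with a mocked-up log directory, since the real path (something like /mnt/hdfs/var/log/hadoop-yarn/apps/<user>/logs on stat1002) only exists on the cluster:

```shell
# Mock a YARN application log directory so the commands run anywhere;
# on stat1002 one would point LOGDIR at the FUSE-mounted path instead.
LOGDIR=$(mktemp -d)
mkdir -p "$LOGDIR/application_0001"
printf 'INFO all fine\nERROR java.io.IOException: split failed\n' \
    > "$LOGDIR/application_0001/stderr"
# Before the mount:  hdfs dfs -cat /var/log/hadoop-yarn/apps/<user>/logs/...
# With the mount, ordinary tools find the offending log directly:
grep -rl 'Exception' "$LOGDIR"
```

The same idea applies to any HDFS path under the mount point, which is also what makes the "rsync this somewhere" discussion above possible at all.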
[23:13:41] (CR) Nuria: [C: 2] Update NamespaceEdits metric [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/161470 (https://bugzilla.wikimedia.org/71008) (owner: Milimetric)
[23:13:58] (Merged) jenkins-bot: Update NamespaceEdits metric [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/161470 (https://bugzilla.wikimedia.org/71008) (owner: Milimetric)
[23:24:52] (CR) Nuria: [C: 2] Update PagesCreated metric [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/161472 (https://bugzilla.wikimedia.org/71009) (owner: Milimetric)
[23:24:59] (Merged) jenkins-bot: Update PagesCreated metric [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/161472 (https://bugzilla.wikimedia.org/71009) (owner: Milimetric)