[00:00:31] (PS1) Yuvipanda: Make sure that db connections are alive before being used [analytics/quarry/web] - https://gerrit.wikimedia.org/r/151552
[00:01:02] (CR) Legoktm: [C: 2] Make sure that db connections are alive before being used [analytics/quarry/web] - https://gerrit.wikimedia.org/r/151552 (owner: Yuvipanda)
[00:01:44] (CR) Legoktm: [C: 2] Don't display superseded runs in query runs [analytics/quarry/web] - https://gerrit.wikimedia.org/r/151542 (owner: Yuvipanda)
[00:01:49] (Merged) jenkins-bot: Don't display superseded runs in query runs [analytics/quarry/web] - https://gerrit.wikimedia.org/r/151542 (owner: Yuvipanda)
[00:01:53] (Merged) jenkins-bot: Make sure that db connections are alive before being used [analytics/quarry/web] - https://gerrit.wikimedia.org/r/151552 (owner: Yuvipanda)
[00:09:03] (PS1) Yuvipanda: Add fabfile [analytics/quarry/web] - https://gerrit.wikimedia.org/r/151554
[00:10:12] (CR) Legoktm: [C: 2] Add fabfile [analytics/quarry/web] - https://gerrit.wikimedia.org/r/151554 (owner: Yuvipanda)
[00:10:17] (Merged) jenkins-bot: Add fabfile [analytics/quarry/web] - https://gerrit.wikimedia.org/r/151554 (owner: Yuvipanda)
[10:51:16] (CR) Gilles: [C: -1] "The query has a problem for the relative value of the last step (thanks):" [analytics/multimedia] - https://gerrit.wikimedia.org/r/150749 (owner: Gergő Tisza)
[11:34:23] Analytics / Wikimetrics: Labs instances rely on unpuppetized firewall setup to connect to databases - https://bugzilla.wikimedia.org/69042 (christian)
[13:27:07] Analytics / Wikimetrics: Use separate credentials for production instance to connect to labsdb - https://bugzilla.wikimedia.org/69043#c3 (christian) NEW>RESO/FIX I created the Tool Labs tools tools.wikimetrics-production tools.wikimetrics-staging tools.wikimetrics-development , granted ac...
[13:49:48] (CR) Ottomata: [C: 2 V: 2] "Looks awesome. Sorry, I didn't mean I needed an explanation to merge this patchset, but was hoping to just chat with you about it before " [analytics/refinery] - https://gerrit.wikimedia.org/r/150844 (owner: QChris)
[13:58:38] Analytics / General/Unknown: Use zero.wikimedia.org's API to get carrier configuration - https://bugzilla.wikimedia.org/68519 (christian) NEW>RESO/FIX
[14:39:08] Analytics / Wikimetrics: Make wikimetrics use standard Redis configuration again - https://bugzilla.wikimedia.org/66911 (christian) PATC>RESO/FIX
[14:56:15] qchris_away: quick brain bounce requested
[14:56:21] about user accounts on analytics nodes
[15:44:14] Mhmmm ... all ottomatas left the building :-/
[15:55:00] there is ottomata again.
[15:55:07] hallo
[15:55:11] About the group thing.
[15:55:14] I read the scrollback in ops.
[15:55:22] Looks fine to me to have two groups.
[15:55:46] Also ... hadoop can handle acls.
[15:56:10] I haven't tried them on hadoop, but the "default permission" things in plain fs get easier with facls.
[15:56:24] So I suppose that might be useful in hdfs too.
[15:58:02] qchris, ideally this would be done via ldap anyway :)
[15:58:22] Ok.
[15:58:47] I probably misunderstood the question then :-)
[15:59:04] naw, i'm sure you didn't
[15:59:10] we don't do ACLs anywhere else, afaik though
[15:59:54] Sure, no need to use them.
[16:12:03] qchris: in your review with the exit_* functions
[16:12:07] why local $message?
[16:12:11] instead of just using $1?
[16:12:24] You can use "$1"
[16:12:42] But I just grew to like limiting the use of $1 etc.
[16:12:48] ?
[16:12:48] They are not very descriptive.
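[Editor's aside: a minimal sketch of the local-vs-"$1" point made just above. The helper name, body, and message are hypothetical; only the pattern of binding "$1" to a named local is from the conversation.]

```bash
# A hypothetical exit_* helper in the style of the review: binding "$1"
# to a named local documents what the argument means, and "local" keeps
# $message from leaking into the caller's scope.
exit_with_error() {
    local message="$1"
    echo "Error: ${message}" >&2
    exit 1
}

exit_with_error "unknown option: --frob"
```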
[16:12:52] aye, I agree
[16:13:02] ok, so just for readability, i'll keep it
[16:13:02] i like it
[16:13:11] Yes, just for readability.
[16:13:17] i think local is something I should use more often, btw, i never think about it
[16:14:38] Ja. It improves readability a lot from my point of view, because you know that the variable is not used for other purposes.
[16:14:48] But I am sloppy myself :-(
[16:14:58] And for short scripts ... ;-)
[16:21:36] ah qchris
[16:21:37] with set -e
[16:21:43] if $# == 0
[16:21:47] shift causes the script to exit
[16:22:04] ?
[16:22:31] No brackets around the condition?
[16:22:39] shift with no cli args left
[16:22:42] no i mean
[16:22:51] i'm changing the while condition to -z $hdfs_path
[16:22:55] and, if there are no cli args
[16:23:01] shift will get called at the end of that loop
[16:23:06] which exits when set -e is set
[16:23:33] guess i could set +e around the shift too..
[16:23:43] No. There is something else wrong then.
[16:23:56] The shift is in the "while [ $# != 0 ]" loop
[16:24:02] So shift should work.
[16:24:09] yes, but you were suggesting to change the condition, no?
[16:24:26] oh, no you weren't...
[16:24:33] No. The "if [ $# -eq 0 ]; then" from line 44.
[16:24:36] Right.
[16:24:45] oh, yes you were
[16:24:48] were you ?
[16:24:55] $# -eq 0
[16:24:55] should become
[16:24:55] -z "$hdfs_path"
[16:24:59] 1 sec.
[16:25:05] OH, in the check
[16:25:05] hm
[17:38:39] Analytics / General/Unknown: Drop by ~80% in zero=250-99 tagged lines with "/wiki/" in URL in zero logs - https://bugzilla.wikimedia.org/69112 (christian) NEW p:Unprio s:normal a:None Created attachment 16135 --> https://bugzilla.wikimedia.org/attachment.cgi?id=16135&action=edit /wiki/ an...
[17:39:53] Analytics / General/Unknown: Drop by ~80% in zero=250-99 tagged lines with "/wiki/" in URL in zero logs - https://bugzilla.wikimedia.org/69112#c1 (christian) The graph and above numbers were computed from the /a/squid/archive/zero/zero.tsv.log-* files on stat1002.
[17:42:08] Analytics / General/Unknown: Drop by ~80% in zero=250-99 tagged lines with "/wiki/" in URL in zero logs - https://bugzilla.wikimedia.org/69112 (christian)
[17:56:37] Analytics / General/Unknown: Drop by ~80% in zero=250-99 tagged lines with "/wiki/" in URL in zero logs - https://bugzilla.wikimedia.org/69112#c2 (Yuri Astrakhan) This is per https://www.mediawiki.org/wiki/Requests_for_comment/Unified_ZERO_design#Analytics The new requests should still have X-Analytic...
[18:03:21] Analytics / General/Unknown: Drop by ~80% in zero=250-99 tagged lines with "/wiki/" in URL in zero logs - https://bugzilla.wikimedia.org/69112#c3 (Toby Negrin) Hi Yuri -- how was this change communicated? It seems like we were taken by surprise. thanks, -Toby
[18:11:31] qchris, i'm trying to figure out why the logs have no zero=..., the logic in zero.vcl seems ok
[18:12:03] yurikR1: My team made me stop looking at vcl files :-(
[18:12:26] why's that? :)
[18:12:30] Let me take a look nonetheless.
[18:12:36] detrimental to your mental stability?
[18:12:45] (i don't blame them )
[18:13:01] yurikR1: Hahaha. No, because they say I should focus on analytics things not wikipedia-zero things.
[18:13:20] well, x-analytics is the analytics thing :)
[18:13:46] But the agreement is that your team sets it, and analytics consumes it.
[18:13:55] true )
[18:14:10] except that it is actually ops who deal with varnish...
[18:14:18] Hahaha. True.
[18:14:20] the ultimate entanglement ...
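[Editor's aside: picking up the shift discussion above, a runnable sketch of why the trailing shift is safe inside "while [ $# != 0 ]" but would abort a set -e script if it ever ran with no arguments left. Only the --hdfs-path name and the condition change are from the conversation; the rest is assumed.]

```bash
#!/bin/bash
set -e

hdfs_path=
while [ $# != 0 ]; do
    case "$1" in
        --hdfs-path)
            # Guard so the inner shift cannot leave $# at 0 before the
            # trailing shift below runs.
            [ $# -ge 2 ] || { echo "--hdfs-path needs a value" >&2; exit 1; }
            hdfs_path="$2"
            shift  # consume the value; the shift below consumes the flag
            ;;
    esac
    # Safe here: the while condition plus the guard above guarantee
    # $# >= 1. A bare "shift" with $# == 0 returns non-zero, and with
    # "set -e" that exits the whole script.
    shift
done

# Per the review: check the parsed value instead of "$# -eq 0".
if [ -z "$hdfs_path" ]; then
    echo "no hdfs path given" >&2
    exit 1
fi
echo "hdfs_path=$hdfs_path"
```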
[18:15:43] I noted that, for the zero=250-99 tags, the relative rate of different aspects also changed a bit. Let me re-generate an example.
[18:17:52] qchris, i noticed that kafkatee.pp has hardcoded regex - destination => "/bin/grep -P 'zero=\\d{3}-\\d{2}' >> ${webrequest_log_directory}/zero.tsv.log",
[18:18:05] yup.
[18:18:17] so do the udp2log filters.
[18:18:36] not sure we should hardcode things like that - remember we said it is best to keep the ID opaque
[18:19:11] Yes, sure. For future things.
[18:19:28] But that's where we currently are. And I doubt that this gets in the way around 250-99.
[18:20:20] probably not. Do we have a full log of everything?
[18:20:45] just to double check that 250-99 doesn't get filtered out somehow
[18:21:57] we have sampled-1000 of everything.
[18:22:14] We have mobile-sampled-100 (1:100 sampled from mobile frontend caches)
[18:22:21] And we have hadoop.
[18:22:43] But seriously ... it started in the very hour the switch to the unified design took place.
[18:22:54] Our filters have not been touched in ages.
[18:23:20] It would be strange if our filters all of a sudden break in the right hour, and selectively filter one carrier.
[18:23:32] http://dpaste.com/19WVXT1
[18:23:47] ^ is a breakdown of zero tags per file for 250-99
[18:29:20] qchris, make sure you exclude ZeroRatedMobileAccess with zcmd=...
[18:29:31] In counting?
[18:29:37] yes
[18:29:44] The bug is about the logs.
[18:30:01] ZeroRatedMobileAccess with zcmd is coming in through index.php
[18:30:09] We do not count those anyways.
[18:30:13] So they are not counted.
[18:32:13] yurikR1: I double checked with two sampled-1000 files, and additionally with Hadoop. The filters do not kill 250-99 requests.
[18:32:50] qchris, so you think it is varnish logic that's not re-adding it somehow?
[18:33:03] yes.
[18:33:15] It started the very hour the varnish logic was messed with.
[18:33:26] And also with the carrier that was messed with.
[18:33:53] qchris, i just checked - the X-Analytics is being correctly sent to the clients
[18:33:53] this is beyond strange
[18:34:13] if you give me your IP, i will add you to the http://en.m.wikipedia.beta.wmflabs.org/
[18:34:29] and you will see all the response headers
[18:35:28] qchris, lets try it this way - i will remove all the filtering we do for analytics since now you get all the data from us
[18:35:31] via api
[18:35:43] yurikR1: wait.
[18:35:52] It's not an api issue for us.
[18:36:13] i understand, i suspect there might be an error in how we filter X-CS in varnish
[18:36:13] We're not seeing them in the zero logs. That's before /we/ use the zero api.
[18:36:24] Ah. Ok.
[18:36:28] submitting a patch, one sec
[18:42:01] https://git.wikimedia.org/blob/operations%2Fpuppet.git/3581f7814f1129b2484e385cf1282cb0503d7d5e/templates%2Fvarnish%2Fzero.inc.vcl.erb#L252
[18:42:31] yurikR1: ^ here you set X-CS to "ON" when the url is not "(action=zeroconfig|:ZeroRatedMobileAccess)($|&|\?)"
[18:42:38] That kills 250-99.
[18:43:10] qchris, if filtering was done right, it should have become 250-99 again in case it was a zero request
[18:43:26] Seeing that, I now checked for zero=ON. The zero=ON count increased.
[18:43:53] yurikR1 ??? By "filtering" you mean "filtering on analytics part"?
[18:44:38] sec
[18:44:58] yurikR1: The agreement is that Wikipedia Zero keeps the proper zero tags in the X-Analytics header.
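[Editor's aside: to make the diagnosis above concrete - the hardcoded kafkatee/udp2log filter only passes NNN-NN carrier ids, so requests re-tagged zero=ON never reach zero.tsv.log. A quick demo; the sample log fragments are made up.]

```bash
# Only the first fabricated line matches the hardcoded pattern from
# kafkatee.pp; the mistagged zero=ON line is silently dropped.
printf '%s\n' 'https=1;zero=250-99' 'https=1;zero=ON' \
    | grep -P 'zero=\d{3}-\d{2}'
# prints only: https=1;zero=250-99
```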
[18:45:05] yes
[18:45:13] x-CS=on shouldn't have been there
[18:45:26] for now, you can treat all x-cs=on as x-cs=250-99
[18:45:38] you shouldn't see them after my new patch
[18:45:41] Ok.
[18:48:27] qchris, https://gerrit.wikimedia.org/r/151693
[18:48:37] Thanks!
[18:50:52] qchris, i might have made a mistake in filtering that thing - it should have set x-cs=250-99 when that request was either EN or RU, but maybe there is a logic mistake
[18:56:43] qchris, you should now be able to browse beta labs as a "TESTON" carrier
[18:57:01] Ok. Thanks.
[18:57:02] qchris, btw, i would suggest removing the \d{2} test
[18:57:30] we never guarantee the \d{3}-\d{2} pattern
[18:57:34] There are many parts where we could improve Wikipedia Zero handling on our end :-(
[18:58:29] qchris, also, the patch is in, should be in production within the next 20 min
[18:58:42] Yes, I saw it getting merged. Thanks again.
[18:58:43] could you check if it works? i will be afk for the next half an hour
[18:58:54] thx for spotting it!
[18:59:11] I want to ... but dr0ptp4kt is running udp2log ... have to work around it ;-)
[18:59:36] qchris: lemme go kill that process
[18:59:47] No worries. udp2log has a port parameter.
[18:59:55] I'll just grab another one.
[19:04:44] qchris, any way to retroactively fix logs?
[19:05:41] yurikR1: Not really :-/
[19:06:08] If we really, really, really, really have to, we can make it work.
[19:06:15] But that will probably take me a few days.
[19:06:30] So I'd much rather mark those few days as "bad dates"
[19:06:43] meh, guess we will just have to live with a dip in traffic... although explaining it might be a pain
[19:07:05] The dip will go away if we treat those dates as bad dates.
[19:17:53] yurikR1: 250-99 is increasing strongly in the tsv. So the fix looks effective and good for now. But I'd need to do a more thorough check once more data is in on the zero stream.
[19:18:10] qchris, thx
[19:18:34] qchris, once you are ready, i will switch over the rest of the carriers. ( dr0ptp4kt fyi)
[19:19:21] You mean using "zero=ON" for all carriers?
[19:20:00] qchris, correct
[19:20:07] except if coming via opera
[19:20:09] still broken
[19:20:37] I guess it's best to coordinate with kevinator. I guess that part is at least months away.
[19:21:30] qchris, why would we need to coordinate? it shouldn't affect anything, you would still be getting zero=NNN
[19:21:58] and since you now support proper filtering, you will be able to determine what is zero and what is not
[19:22:23] Sorry. I probably misunderstood. I thought we talked about having "zero=ON" in the tsvs?
[19:22:47] We do not support proper filtering. We heavily rely on the zero=250-99 tag.
[19:22:59] With zero=ON, we cannot do much.
[19:27:55] qchris, zero=ON was a mistake, which is fixed
[19:28:10] what i meant was that zero=NNN-NN will stay
[19:28:24] but you will use API results to determine if the request was actually free or not
[19:28:50] for example, 250-99 whitelists EN & RU, which means that if you see zero=250-99 for a request that goes to pl., you will not count it
[19:28:58] are my assumptions correct?
[19:29:36] We currently only have partial filtering on our end, not fully complete filtering.
[19:29:52] So for example we filter languages.
[19:29:58] We do not filter https.
[19:30:28] But we get "which languages to filter" through the API.
[19:33:44] qchris, you don't evaluate https or proxy???
[19:33:49] i thought you were already doing that
[19:33:58] No, we don't.
[19:34:06] We never said so.
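[Editor's aside: following the suggestion above to drop the \d{2} test and keep the carrier ID opaque, a more tag-agnostic variant of the same filter destination might look like this. It is a sketch; the ";"-separated key=value layout of the X-Analytics column is an assumption.]

```bash
# Accept any non-empty zero= value instead of hardcoding \d{3}-\d{2};
# ";" is assumed to separate X-Analytics key=value pairs.
grep -P 'zero=[^;\s]+' >> "${webrequest_log_directory}/zero.tsv.log"
```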
[19:34:31] It has never been prioritized for us.
[19:34:39] Please escalate with kevinator
[19:34:45] qchris, i was under the impression that you were, otherwise it makes no sense to even look at the API or meta
[19:35:21] I'd love to filter those. But again ... please escalate with kevinator.
[19:35:25] qchris, is that a python script? I could try to expedite it by helping you with the script
[19:35:33] kevinator, we need this :)
[19:36:15] yurikR1: See my email from today where I replied to your request for code.
[19:36:36] Get kevinator to prioritize us putting wikipedia zero in normal working mode.
[19:38:29] I know it sucks big time. It has ever since. But I lack spare cycles to do it outside of the sprint.
[19:39:10] qchris, but that's my point - let me try to help you directly with the code
[19:39:23] we don't need to get it fully deployable/automated via git
[19:39:45] btw, in any case, we have a massive problem when something like this runs on someone's computer :)
[19:39:53] Sure.
[19:39:54] Full ACK.
[19:39:56] trust me, i know - i'm still running SMS analytics locally
[19:40:02] :D
[19:40:11] But there is little I can do.
[19:40:18] Please escalate with kevinator
[19:48:22] Analytics / General/Unknown: Drop by ~80% in zero=250-99 tagged lines with "/wiki/" in URL in zero logs - https://bugzilla.wikimedia.org/69112#c6 (christian) (In reply to Yuri Astrakhan from comment #2) > The new requests should still have X-Analytics=X-CS=250-99, even though the > X-CS header will be...
[20:08:51] qchris, so you filter by language, but not HTTPS. What about subdomain and proxy?
[20:09:22] subdomain being sites (m vs zero)? We filter by that.
[20:09:28] We do not filter by proxy.
[20:10:06] ok, so you filter by the URL, but not protocol (https) or proxy. Got it, thx
[20:10:15] how much work would it be to add them?
[20:11:04] We have no X-Forwarded-For handling, and both of them would require it. That's more expensive.
[20:12:00] qchris, why do you need xff handling? proxy is already part of X-Analytics
[20:12:13] same with https
[20:13:05] look at http://dpaste.com/19WVXT1 -- https=1 or proxy=Opera
[20:13:29] Some lines above you asked something along the lines of "Can we fix the data for this bug?". I want to be able to respond with yes. Otherwise, filtering on our end is worth nothing.
[20:13:55] But if we rely on the tags in X-Analytics, the answer will most of the time be no.
[20:14:10] And we need X-Forwarded-For handling for other things too.
[20:14:27] So if we implement a detour for Wikipedia-Zero, it's wasted effort.
[20:14:44] And it would make maintenance on our end not exactly easier.
[20:18:12] But talk to kevinator. If he says I get time for implementing filtering, I'd certainly love to do it.
[20:20:13] qchris, not sure i understand - you currently rely on zero= in x-analytics to determine which zero partner to match against. The https=1 is always set regardless of being part of zero or not. We could also make it that proxies are always set regardless of zero
[20:20:46] so the question is - could you in addition to using zero= use https= and proxy= to match against the zero api results
[20:21:06] it should be 1 or 2 lines at most
[20:21:10] in my understanding
[20:21:12] Sure, we could.
[20:21:15] It would be easy.
[20:21:21] But it would buy anything.
[20:21:23] exactly
[20:21:28] ?
[20:21:56] without that filtering, i will have to bring back that varnish patch
[20:22:23] Sorry. s/would buy/wouldn't buy/ .
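[Editor's aside: what the proposed "1 or 2 lines" might look like as a post-filter over the tsv logs, purely as a sketch. The https= and proxy= names come from the dpaste breakdown above; the allow_* flags stand in for a hypothetical per-carrier lookup against the zero API, and the file names are assumptions.]

```bash
# Keep only carrier-tagged lines; drop https/proxy traffic unless the
# carrier's (API-provided) config allows it.
awk -v allow_https=0 -v allow_proxy=0 '
    /zero=[0-9][0-9][0-9]-[0-9][0-9]/ &&
    (allow_https || $0 !~ /https=1/) &&
    (allow_proxy || $0 !~ /proxy=/)
' zero.tsv.log > zero.filtered.tsv.log
```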
[20:22:33] yurikR1: I do not oppose it.
[20:22:50] It's just that from my point of view, there are better ways to spend our time.
[20:22:59] qchris & yurikR1: I am watching this thread and looking at the proposed work
[20:23:10] excellente :)
[20:23:40] so my quick and dirty proposal - filter requests by https & proxy based on x-analytics header
[20:23:46] :)
[20:23:56] Get kevinator to prioritize it :-)
[20:24:40] From my point of view this shifts responsibilities from your end to our end. We do not have the capacity to deal with it.
[20:24:47] Overall, that would be a loss.
[20:25:04] Just look at how long it takes us to prioritize small stuff.
[20:25:07] kevinator, i was under the impression that all 4 things were filtered - language, subdomain, https, and proxy. It turns out that only language and subdomain were, not the last 2.
[20:25:29] even though all of them are present in the X-Analytics: http://dpaste.com/19WVXT1
[20:25:35] I don’t know what is filtered. I’m new to this :-)
[20:25:48] scroll to the bottom
[20:26:51] in short, when a request comes in, it gets identified as belonging to a zero partner. But being from a zero partner does not mean it is zero rated - they could have only whitelisted a few languages, or they might not support HTTPS
[20:27:16] X-Analytics contains a description of the request - partner ID (zero=...), proxy, https
[20:28:03] currently, qchris' script that he runs locally on his machine (!) filters the requests based on whether the wiki language is whitelisted, according to our API
[20:28:35] what it should do is also check https & proxy values
[20:28:43] otherwise we are overcounting
[20:29:17] in my mind, it should be a very simple one-liner if statement
[20:29:40] that's why i was hoping i could help by looking at the code and providing a patch
[20:30:17] yurikR1: Wait ... why would we be overcounting? The agreement is that
[20:30:38] zero=250-99 gets only set in varnish if proxy and https are ok.
[20:31:07] So we currently should not be overcounting, should we?
[20:31:12] because i thought you weren't filtering at all, and once you started using the API, you would be filtering based on all params
[20:31:23] that's why i removed the post-filtering in varnish
[20:31:28] now i have to bring it back "partially"
[20:31:56] Sorry to say so yurikR1, but our filtering has not changed in the last year.
[20:32:13] And the agreement around that only changed where we pull data from.
[20:32:18] Not what we filter.
[20:32:24] qchris, i understand that now, but because we never saw the code, we couldn't see what filtering was being done, and we kept all the language filters in varnish too
[20:32:25] So please do bring them back.
[20:34:01] qchris, kevinator, could you introduce a quick and dirty one-liner in there to also include proxy & https? This way both ops and zero teams will be very very happy :)
[20:34:27] otherwise currently it is massive work to maintain the configs in varnish
[20:34:43] i am willing to do all the scripting work if you check it in somewhere
[20:34:46] even privately
[20:35:30] I do not like it. But kevinator is to decide.
[20:35:36] I’m waiting to hear what qchris has to say. 1 line sounds like a small ask, but what are the implications and followup work?
[20:36:19] qchris, i'm not sure i understand your concerns. Anything i could do to help?
[20:36:20] kevinator: In short, we do not want to use X-Analytics altogether. Using more values of it just buys us tech debt.
[20:36:55] yurikR1: Stuff breaks too often already. If we add more "quick and dirty", that does not help.
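[Editor's aside: for reference, a per-file tag breakdown like the dpaste linked above can be produced with a one-liner over the archived tsv logs. The path is from the bug comment earlier in the log; grep -c just counts matching lines, so no column layout is assumed.]

```bash
# Count zero=250-99 tagged lines in each archived zero log on stat1002.
for f in /a/squid/archive/zero/zero.tsv.log-*; do
    printf '%s\t%d\n' "$f" "$(grep -cP 'zero=250-99' "$f")"
done
```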
If we add more "quick and dirty", that does not help. [20:37:13] qchris, but if you are already using it, why does it matter if you are using one value from it or 3? Once you switch to XFF parsing, you will get all 3 values from there [20:38:01] but in the mean time, since varnish does the parsing, just use the 3 values. Also, what are the potential breaking points? [20:38:01] yurikR1: Our teams have a hard time on agreements. Currently, our agreement is simple. Wikipedia Zero sets "zero" tag in X-Analytics header only [20:38:26] if it is a zero request. I'd really like to keep the agreement as simple as that until we have a robust solution on our end. [20:38:55] yurikR1: Today we had a potential breaking point. bug 69112. [20:38:56] qchris, but that means that both zero & ops have to maintain a VERY error-prone and unstable system [20:39:09] yurikR1: also that you silently removed filtering is another breaking point. [20:39:31] yurikR1: I'd much rather have you maintain it then us, because we never signed up for it. [20:39:39] You did. [20:40:05] We are overworked already, moving more tech debt onto us does not help. [20:40:53] yurikR1: Do not get me wrong. I'd really love to implement the real thing. I really do. But half-implemented [20:40:57] qchris, 69112 was caused by our attempt to keep the existing system in place :) -- by providing the extra filtering in varnish. And as I already explained - i was under the impression (due to not being able to see the code), that you did not do ANY filtering at all [20:41:00] things only cause more issues on our end. [20:41:43] hence - i removed it. Currently the system is as weird as it comes - some of the values are filtered, some are not [20:42:09] hence i proposed a very simplistic solution - we give you everything via api [20:42:33] yurikR1: The agreement is that Wikipedia Zero is doing /all/ of the filtering. We only do part-filtering, as it's part of our legacy system. [20:42:58] But our part-filtering is not part of the expectations. [20:43:08] qchris, but in that case why do you need api or meta configs??? [20:43:14] i'm getting soo confused :( [20:43:35] Because our legacy system does a partial filtering. [20:43:47] We can rip that out if you prefer. [20:43:51] But that's more time. [20:44:08] ok, so can i help with fixing it a bit until the new allmighty system be put in place? [20:44:39] i would prefer a very simple solution of course :) [20:44:39] Get kevinator to prioritize streaming all the fire-fighting-stuff back to gerrit. [20:44:50] Then whole-heartedly: Yes! [20:45:05] But before that, it'll get difficult. [20:45:55] ok, how about this. I will write an email outlining current system from our perspective, and you will write how long it will take to solve it in a quick (less desirable) way, and how long for a good solution [20:46:09] kevinator will decide on the timelines [20:46:10] qchirs: is getting it to gerrrit sufficient? the code will still be running on your machine. [20:47:02] yurikR1: We went through this a few times already. We always came to the conclusion to wait for Hadoop. I'd prefer we not play the same game once again. [20:48:45] kevinator: Sorry. You're right "Getting it into gerrit". Is not enough. The real chain of events is a bit more complicated. But by "getting it into gerrit" I mean to properly bring all the fire-figthing things into good shape gain. [20:48:54] s/gain/again/ [20:49:14] There are currently many manual tasks. [20:49:20] Many cron jobs on my machine. 
[20:49:27] Some cron jobs on stat1002
[20:49:27] qchris, we clearly had a disconnect since i didn't even realize you were doing language+subdomain filtering
[20:49:53] A custom glue repo around wp-zero, wp-zero/data, the kraken repo
[20:49:54] qchris, hence we need to somewhat restart the discussion to get a better understanding of expectations
[20:50:19] Custom patches on top of kraken, and wp-zero.
[20:50:25] Local pig setup.
[20:50:44] Hooks to be able to use older GeoIP databases.
[20:50:50] It's nasty.
[20:51:01] i'm only looking for the piece of code that pulls configs via API and applies them to each request to filter it
[20:51:25] but yes, sounds horrid :)
[20:51:53] The API gets pulled on my machine. Translated into code. Those code snippets are put onto stat1002. There the real filtering happens.
[20:53:06] ok, so just send me two pieces of code - the code that pulls & translates the api, and the snippets dump
[20:53:17] you don't need to put it into gerrit
[20:53:23] email is fine
[20:53:38] Ok. But it's terrible code.
[20:53:51] its ok :) not getting it done hurts much more :D
[20:54:46] i have to jump off the keyboard, be back later
[20:54:59] will email a short summary just to be on the same page
[20:55:21] yes, please summarize. I think I’m getting the alternatives/options jumbled
[20:58:39] yurikR1: I sent you the code.
[20:58:57] yurikR2: ^ So many yuriks :-D
[20:59:06] I'll have to grab something to eat.
[20:59:12] Starving too long already.
[20:59:18] I'll be back later.
[22:05:10] kevinator: You wanted to chat.
[22:05:20] If you have time now, I'm in the batcave.
[22:05:34] ok, on my way… give me a minute
[22:53:11] (PS7) Terrrydactyl: Add ability to delete wiki users [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/142045
[22:53:19] (CR) jenkins-bot: [V: -1] Add ability to delete wiki users [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/142045 (owner: Terrrydactyl)
[22:54:26] (PS8) Terrrydactyl: Add ability to delete wiki users [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/142045
[23:05:20] qchris, i restored the code to filter out X-CS (checks only proxy & https) - https://gerrit.wikimedia.org/r/#/c/151795/2/templates/varnish/zero.inc.vcl.erb
[23:05:46] let me know if you get a huge drop in zero=... in half an hour
[23:06:10] yurikR: Thanks!
[23:06:17] But as I should have been in bed long ago,
[23:06:24] I won't be around in half an hour.
[23:06:31] tomorrow is ok :)
[23:06:47] if we had a week-long outage, we should be fine i think with an extra day )
[23:06:59] ;-)
[23:07:08] Also ... do not rely on us filtering for site and languages.
[23:07:17] qchris ???
[23:07:21] i thought you said you already do that
[23:07:33] We do, but do not rely on it.
[23:07:46] The agreement is that we get
[23:07:55] the zero tag set for zero requests.
[23:08:19] qchris, the ONLY code that knows for sure if something is zero or not is PHP
[23:08:27] varnish does not have that
[23:08:33] so we manually copy things there
[23:08:53] and most of the time, it lags and it's incorrect
[23:09:14] we have to rely on you, otherwise we will never have anything manageable
[23:09:27] yurikR: I really understand your concerns. And I understand the maintenance burden that comes with it.
[23:09:37] But I do not want to load more maintenance on us.
[23:09:54] If you feel we should revisit our current agreement, please escalate with kevinator.
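[Editor's aside: the manual pipeline described above ("API gets pulled on my machine, translated into code, snippets put onto stat1002") might look roughly like this. The action=zeroconfig name appears in the VCL regex earlier in the log, but the exact endpoint parameters, JSON shape, jq filter, and file names are all guesses; the real script was only shared by email.]

```bash
# Hypothetical: fetch the carrier config for 250-99 from the Zero API
# and turn its language whitelist into a grep-able filter snippet.
conf=$(curl -s 'https://zero.wikimedia.org/w/api.php?action=zeroconfig&agent=250-99&format=json')
langs=$(printf '%s' "$conf" | jq -r '.whitelistedLangs | join("|")')
printf 'grep -P "^(%s)\\."\n' "$langs" > filter-250-99.sh
# Ship the generated snippet to where the real filtering happens.
scp filter-250-99.sh stat1002.eqiad.wmnet:filters/
```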
[23:09:56] qchris, if at this point you already handle filtering for lang & subdomain, lets keep that
[23:10:14] otherwise why do you even need to get our configs??
[23:10:47] yurikR: I understand you. Completely.
[23:11:02] However, I do not want to buy new requirements for our team.
[23:11:09] If you rely on us filtering that,
[23:11:27] that works ok now. But it might change tomorrow.
[23:11:28] qchris, it's an old req - i have been explaining what is zero and what is not for a very, very long time - about a year.
[23:11:48] it's just that we have been getting mixed confirmations
[23:12:01] you were using something from meta.wikimedia.org/ZERO:
[23:12:06] we weren't sure what
[23:12:15] now you are saying you weren't using anything? or not to rely on it?
[23:12:55] lets keep the status quo for now, and in that case minimize total code and maintenance cost - by removing all lang & subdomain filtering, but keeping proxy & https filtering in varnish
[23:13:01] yurikR: We agreed before on where we pull what and for what purpose.
[23:13:08] I do not want to revisit those agreements.
[23:13:13] If you want, that's fair.
[23:13:20] But please do so with kevinator.
[23:13:40] I do not want to buy extra requirements for our team.
[23:13:44] qchris, i will try to find those agreements as it seems we were thinking of different things :)
[23:14:10] regardless, i really don't want to keep you awake in the wee hours :)))
[23:14:19] :-P
[23:14:34] Ok.
[23:14:47] Good night then.
[23:15:00] qchris, good night, and send me those agreements just so that we are on the same page
[23:15:03] tomorrow!
[23:15:05] not today