[21:01:00] #startmeeting RFC meeting
[21:01:00] Meeting started Wed Sep 26 21:01:00 2018 UTC and is due to finish in 60 minutes. The chair is kchapman. Information about MeetBot at http://wiki.debian.org/MeetBot.
[21:01:00] Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
[21:01:00] The meeting name has been set to 'rfc_meeting'
[21:01:10] #topic RFC: Modern Event Platform: Scalable Event Intake Service https://phabricator.wikimedia.org/T201963
[21:01:24] #link https://phabricator.wikimedia.org/T201963
[21:01:50] hi all, who is here for the RFC discussion?
[21:01:58] <_joe_> I am :)
[21:02:00] o/
[21:02:03] o/
[21:02:55] o/
[21:03:02] ottomata do you want to kick things off?
[21:03:36] sure
[21:04:31] as part of Modern Event Platform, we want to unify the event intake techs we use for both analytics and production uses, so we don't have to maintain multiple systems
[21:05:07] <_joe_> you mean multiple software stacks, right?
[21:05:07] this RFC is about deciding if we want to refactor our existing homegrown codebase that currently does this (EventLogging)
[21:05:14] yes
[21:05:16] Hi
[21:05:19] <_joe_> ok, sorry
[21:05:23] <_joe_> go on :)
[21:05:40] ...or do something different
[21:05:51] the different thing could be either Kafka REST Proxy, or to write something new entirely
[21:06:13] I'm not 100% sure if Kafka REST Proxy would be suitable, and there are some downsides (e.g. possibly having to maintain a fork)
[21:06:32] ottomata: one question: no matter what we choose here, eventlogging (the original platform excluding eventbus) will need to be adapted either way, correct?
[21:06:40] no
[21:06:49] well, yes? depends on what you mean by 'eventlogging' there
[21:06:55] if you mean: the analytics events, then yes
[21:07:06] i mean everything that is there in the repo besides eventbus :P
[21:07:12] yes that
[21:07:32] by having clients send events in a new format (using schemas in the new schema registry) to the new service
[21:07:42] <_joe_> ottomata: my question would be: say we want to unify the two interfaces in a more general service; would a lot of code from EL/EB be reusable in a rewrite?
[21:07:50] it seems to me that these changes are more or less needed and the same, regardless of the option we choose
[21:07:57] what happens after the event gets validated and into Kafka we can deal with separately
[21:08:15] agreed
[21:08:55] mobrovac: it's true also that much of the existing python eventlogging code would not be needed in a new system (like piping events through a validator and on to mysql, since validation would be unified and mysql would not be supported in the same way)
[21:09:10] hm
[21:09:22] <_joe_> frankly, from the information I have from the tickets, I would suggest to base your new work on the operational and development experience we have with those platforms, and reuse the code that's already been battle-tested
[21:09:27] <_joe_> as much as possible
[21:09:27] milimetric: the consumer side of eventlogging is not really relevant here
[21:09:42] we'd like to stop using that eventually too, but that isn't relevant to this question i think
[21:09:47] right, true
[21:09:47] they are decoupled
[21:10:41] will respond to joe's ^ comment in a sec, but i'm not sure i understand the previous questions about analytics events and reuse
[21:10:54] assuming we stuck with eventlogging/eventbus
[21:11:23] for analytics, we'd set up some new (public?) eventbus instances and allow remote clients to POST events
[21:11:25] to it
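A minimal sketch of what "allow remote clients to POST events to it" could look like (validate a posted JSON event against a JSON Schema, then produce it to Kafka), assuming Tornado, jsonschema and kafka-python. This is not EventBus's actual code: the route, topic naming, schema lookup and the `$schema` field convention below are illustrative placeholders.

```python
# Sketch of a validate-then-produce POST intake, in the spirit of EventBus
# (options 1/2). NOT the real EventBus code: the route, topic naming and
# schema lookup below are hypothetical placeholders.
import json

import jsonschema                     # pip install jsonschema
import tornado.ioloop
import tornado.web
from kafka import KafkaProducer       # pip install kafka-python

producer = KafkaProducer(bootstrap_servers='localhost:9092')

# Stand-in for the schema registry discussed in the RFC: schema URI -> JSON Schema.
SCHEMAS = {
    'test/event/1': {
        'type': 'object',
        'properties': {'meta': {'type': 'object'}, 'action': {'type': 'string'}},
        'required': ['meta', 'action'],
    },
}


class EventHandler(tornado.web.RequestHandler):
    def post(self):
        try:
            event = json.loads(self.request.body)
        except ValueError:
            self.set_status(400)
            self.write({'error': 'request body is not valid JSON'})
            return
        schema_uri = event.get('$schema')          # assumed field convention
        schema = SCHEMAS.get(schema_uri)
        if schema is None:
            self.set_status(400)
            self.write({'error': 'unknown schema %s' % schema_uri})
            return
        try:
            jsonschema.validate(event, schema)
        except jsonschema.ValidationError as e:
            self.set_status(400)
            self.write({'error': e.message})
            return
        # Topic naming is a placeholder; the real service would derive it differently.
        topic = schema_uri.replace('/', '.')
        producer.send(topic, json.dumps(event).encode('utf-8'))
        self.set_status(201)


if __name__ == '__main__':
    app = tornado.web.Application([(r'/v1/events', EventHandler)])
    app.listen(8085)
    tornado.ioloop.IOLoop.current().start()
```

The same handler could back both a blocking route that returns validation errors to the client and a fire-and-forget route that replies 202 before validating, an idea that comes up later in the meeting.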
[21:12:33] ah, maybe that is the question about reusability? yes, we'd strip the EventLogging codebase of any mediawiki / event capsule specific stuff
[21:12:34] i presume that means until such time as we unify the backends for both systems and have one big kafka cluster
[21:12:40] refactor the tornado server in the eventbus service
[21:13:01] that would be a big win imho (stripping EL/EB)
[21:13:16] a simplification of an existing system is a good way to go for this
[21:13:29] <_joe_> you will have one big kafka cluster over my cold dead body, but it's OT for today's discussion :P
[21:13:35] haha :) indeed
[21:13:39] and we already have the experience of both EL and ES when dealing with validation
[21:13:58] ES?
[21:14:04] <_joe_> EB I guess
[21:14:06] s/ES/EB/
[21:14:06] aye
[21:14:36] so, tbh, i don't like my eventbus implementation, so it'd be a rewrite for sure... who knows, maybe we wouldn't use tornado? it sounds like _joe_ has opinions there
[21:14:50] <_joe_> so
[21:14:57] my personal preference would be to use upstream kafka rest proxy, adapting it for our needs, trying to upstream our patches and maintaining a fork in the meantime
[21:15:17] #info if option 1 is chosen, EventLogging codebase would be stripped of any mediawiki / event capsule specific stuff
[21:15:17] Unifying on EventBus for EventLogging, what would that mean with regards to the public client-side API, e.g. do we replay varnishlog query strings into POST somewhere?
[21:15:18] using out-of-the-house software is so much better
[21:15:23] <_joe_> Pchelolo: that would be my next suggestion, yes
[21:16:11] Pchelolo: I think that's wise given our requirements are so similar to REST proxy's, but do you think it's reasonable to expect them to upstream json as an equal alternative to avro?
[21:16:12] <_joe_> ottomata: you want to create a REST bridge to kafka with some validation
[21:16:23] <_joe_> with high concurrency I guess
[21:16:27] Krinkle: there would be 2 ways to get events in from outside. 1 is like we do now, with GET query strings, fire and forget style. Those events would need to be processed/validated by something else. The other way is HTTP POST directly to the endpoint
[21:16:33] like eventbus does now internally
[21:16:38] milimetric: do you think we're the only people using json?
[21:16:51] s/using/wanting to use/
[21:16:56] <_joe_> the concurrency model offered currently by php-fpm is IMHO superior to what node/python offer whenever you need to do significant computing in the application
[21:16:56] Pchelolo: no, but we looked around and couldn't find any obvious other forks or activity
[21:17:09] _joe_: yes.
[21:17:21] Pchelolo: we are not the only people who'd want this feature
[21:17:26] <_joe_> my point was that option #2 should not be "nodejs by default because it's a non mediawiki service"
[21:17:30] _joe_: validating isn't really cpu-intensive though
[21:17:34] _joe_: that's fair
[21:17:37] event validating
[21:18:11] <_joe_> mobrovac: ok, I assumed parsing/validating could become intensive, but I don't know actual numbers
[21:18:27] well most events are very small (under 1k)
[21:18:27] To your point of using something that is 'battle tested', i'd rather write the service in something we already deploy and use often, like node (btw, do we have other public-facing python services? just a q.)
[21:18:30] but that doesn't mean it would have to be
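For comparison with ottomata's preferred option 3 above (upstream Kafka REST Proxy): producing through the Confluent REST Proxy is a plain HTTP call against its v2 API, sketched below with an assumed local proxy URL and a made-up topic name. Upstream only validates Avro against Confluent's Schema Registry, so JSON Schema validation of the kind EventBus does is exactly the patch that would have to be upstreamed or carried in a fork.

```python
# Sketch of a producer client against upstream Confluent Kafka REST Proxy
# (option 3), using its v2 API. Proxy URL and topic name are made up;
# upstream does NOT do JSON Schema validation -- adding that is the
# fork/upstreaming question debated in this meeting.
import requests  # pip install requests

REST_PROXY = 'http://localhost:8082'
TOPIC = 'test.event'    # hypothetical topic

event = {'meta': {'domain': 'en.wikipedia.org'}, 'action': 'edit'}

resp = requests.post(
    '%s/topics/%s' % (REST_PROXY, TOPIC),
    headers={'Content-Type': 'application/vnd.kafka.json.v2+json'},
    json={'records': [{'value': event}]},
)
resp.raise_for_status()
print(resp.json())      # per-record partition/offset (or error) info
```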
[21:18:47] i can modify the RFC to remove the node as the option 2 solution
[21:18:51] <_joe_> ottomata: we have a fairly solid experience deploying php applications
[21:18:53] and make it just say 'write something new'
[21:18:56] ottomata: Right. I think those are different types of clients. One could migrate to the other, but the principle is imho not based on the protocol, but something else. E.g. there could be something important in our apps that needs validation and would use the public POST. But, on the other hand, even if we migrate the bulk of mw JS EventLogging to use POST, those should ideally remain non-blocking.
[21:19:03] <_joe_> that's why I didn't suggest go or rust :P
[21:19:22] * mobrovac mentions ruby
[21:19:22] Krinkle: agree
[21:19:34] <_joe_> Krinkle: ++
[21:19:35] shucks, I feel like I missed my chance to pitch C#
[21:19:41] lol
[21:19:42] both would be available, and most analytics usage would be fire-and-forget
[21:19:50] <_joe_> ahah sorry I didn't want to derail the discussion
[21:20:01] In that case, I'm missing what public proxy is going to handle the /event traffic that varnish is handling for us now.
[21:20:10] #action i can modify the RFC to remove the node as the option 2 solution and make it just say 'write something new'
[21:20:23] _joe_: i can maybe dispel the PHP choice: there are no good kafka clients :/
[21:20:24] <_joe_> Krinkle: I guess still varnish sending it to the new platform
[21:20:25] given varnishlog doesn't store post bodies.
[21:20:43] we actually want to remove the one discovery is using in MW now.
[21:20:55] it's keeping us from migrating it off of the old analytics cluster (version problems)
[21:20:55] <_joe_> ottomata: alright, it was an example, fair enough
[21:21:11] #info for option 2, there is no consensus on whether it's worth it nor on which technology it should be written in
[21:21:29] Krinkle: i'd want either service to be able to handle it
[21:21:39] we do about 2.5K events / second in eventlogging analytics now
[21:21:43] which isn't very much imo
[21:22:00] <_joe_> Krinkle: well varnish could proxy requests to the new platform, that was what I meant, sorry
[21:22:13] actually, if we did write something new, it might be possible to use the very same service with different endpoints for both fire-and-forget and responsed
[21:22:19] <_joe_> # info < ottomata> we do about 2.5K events / second in eventlogging analytics now
[21:22:20] so the plan is to keep varnish and varnishlog/varnishkafka as the edge, but where it currently gets consumed/validated by EventLogging, it would instead be turned into POST requests to EventBus.
[21:22:20] which would be a + for unifying
[21:22:45] Are we thinking about godog's kafka logging document here as well?
[21:23:06] Krinkle: i think we have to support POST bodies, as well as GET query strings like we do now, if we also want to still support non-JS clients (beacon.gif style)
[21:23:09] <_joe_> Pchelolo: that's a topic I wouldn't unify tbh
[21:23:38] ottomata: I assume for the POST body version, there'd have to be a different endpoint that varnish miss-pass'es to the proxy directly.
[21:23:49] yeah, there does seem to be some overlap with the sentry thing that some folks want? I'm not familiar with the state of that convo
[21:23:54] ottomata: +1, for small, fire-and-forget reqs GET is way faster and less cumbersome
[21:24:25] <_joe_> sorry, the whole sync/async paradigm is not the topic we wanted to discuss today
[21:24:30] <_joe_> let's get back to it
[21:24:39] I mean, if we could get away with not using GET, i would, but i think we need to support non-JS clients still
[21:24:43] <_joe_> can I try to summarize what we said about option #1?
[21:24:49] #info ottomata: actually, if we did write something new, it might be possible to use the very same service with different endpoints for both fire-and-forget and responsed
[21:25:10] (u like how I made up a verb there ^ :p)
[21:25:25] :P
[21:25:33] <_joe_> 1 - we could add the tornado server to EB
[21:25:33] _joe_: plz do
[21:25:45] We'll definitely need compat with GET query strings for clients without sendBeacon support. Although I will say that making a POST request is also possible without sendBeacon, XHR/Ajax can do it just fine.
[21:25:55] <_joe_> 2 - EL is not much reusable and the difference with a full rewrite is not that big
[21:26:05] So we could converge on POST body, no problem. all clients can do it.
[21:26:10] <_joe_> ottomata: is my understanding correct?
[21:26:13] Krinkle: even non-JS ones?
[21:26:21] ottomata: Define eventlogging from non-JS.
[21:26:36] _joe_: (2) is true regardless, as EL code will need to change with the new way we do things, afaict
[21:26:36] _joe_: i believe so, 1 - isn't worded quite right... (eventbus uses tornado)
[21:27:24] Krinkle: maybe this doesn't happen ever, but iirc we used to support embedding a 1x1 invisible pixel img with src=/beacon.gif?event=...
[21:27:43] seems like that would duplicate page view / web requests.
[21:27:53] I don't see when/where/why we would do that.
[21:27:56] <_joe_> ottomata: so if the amount of things we can reuse is small, and we can't just adapt eventbus to our new use, #1 might make no sense
[21:27:56] ya, but it would send specific events
[21:28:07] on page load
[21:28:11] maybe we don't at all Krinkle!
[21:28:17] if we don't need that, then we don't need GET at all
[21:28:18] Maybe on the old blog, but we use piwik/matomo for that now.
[21:29:15] how do you add an action item? i want to add one to investigate if we need to support GET
[21:29:27] oh i see it
[21:29:28] > #action
[21:29:42] #action investigate if we need to support GET event intake anymore
[21:29:42] or > #info
[21:30:02] GETs are good for small payloads with high freq
[21:30:03] the first parameter to #action is ideally a nickname, then meetbot sorts the action items by assignee
[21:30:16] oh
[21:30:19] with the GET requirement, option 3 is off the table, it's too much to change upstream
[21:30:39] Pchelolo: yeah i wouldn't build GET into that, but keep using probably varnishkafka like we do now
[21:30:41] :)
[21:30:44] haha
[21:30:49] mobrovac: does it really better GET/POST if both are handled by varnish and/or the service?
[21:30:57] matter*
[21:31:22] Note that 99% of our current "GET" events from clients are actually POST with a query string in the url.
[21:31:34] ah ok, that's a different story
[21:31:40] because sendBeacon always performs POST, we just send no data and use the query string.
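To make Krinkle's point concrete: most of today's "GET" analytics events are really POSTs whose payload rides in the query string, because sendBeacon sends no body. Below is a tiny sketch of decoding that shape back into a JSON event; the `?<urlencoded JSON>;` encoding assumed here mirrors the current EventLogging beacon convention as described in the meeting, and is an assumption rather than a spec.

```python
# Sketch: turning a beacon-style payload (event urlencoded into the query
# string, empty request body) back into a JSON event. The "?<urlencoded
# JSON>;" shape is an assumption about the current client encoding.
import json
from urllib.parse import unquote


def event_from_query_string(query_string):
    """Decode a '%7B...%7D;'-style beacon query string into a dict."""
    return json.loads(unquote(query_string.rstrip(';')))


# What a sendBeacon('/beacon/event?...') call might arrive as at the edge:
raw = '%7B%22schema%22%3A%22Test%22%2C%22event%22%3A%7B%22action%22%3A%22click%22%7D%7D;'
print(event_from_query_string(raw))
# -> {'schema': 'Test', 'event': {'action': 'click'}}
```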
[21:31:44] then no, in that case it doesn't matter
[21:31:50] i see
[21:31:55] it's not cacheable either way
[21:32:08] I might be confused but I thought varnish just didn't have access to the POST body so that's why we sent the query string
[21:32:28] milimetric: i believe that's true
[21:32:34] varnishlog doesn't have it
[21:32:36] Indeed. I thought so too.
[21:33:10] I assume that given we wanted to scale EventBus, that means we can handle the traffic directly, or do you depend on varnish to debounce it into kafka before "really" doing it.
[21:33:30] the public endpoint should handle the traffic
[21:33:36] ottomata: re sentry, yeah it needs a transport system and it would be great to use nextgen eventlogging for that. The difference from analytics events is that 1) we can't really control the volume, 2) ideally when the volume is too high events would be discarded in an intelligent, deduplication-like fashion
[21:33:41] directly
[21:33:53] OK
[21:34:08] tgr would you want those events to be validated?
[21:34:11] <_joe_> Krinkle: when i said "through varnish" i meant edge traffic will flow through varnish and go to the application directly
[21:34:34] (yes, 'directly' through varnish in that sense)
[21:34:56] So the new ingestion point is HTTP based, can handle the /event traffic publicly (without the indirection of varnishlog/kafka), and could be POST-only or support GET depending on whether we want to migrate clients first or after.
[21:35:19] I don't think validation is needed, if it's easily available then it would be useful though as a security hardening measure
[21:35:23] <_joe_> what Pchelolo said seems relevant
[21:35:27] correct Krinkle (if we don't need GET, i wouldn't include a GET endpoint)
[21:35:50] My question in that case, aside from the difference in where schemas are fetched from, could this proxy be EventBus mostly as-is? (except that depending on the outcome of the schema registry RFC, it may need a secondary way of validating)
[21:35:53] <_joe_> if we need GET, option #3 is not viable, did I get that right?
[21:36:03] i think that's true
[21:36:06] well no.
[21:36:06] _joe_: I think it's not
[21:36:07] sorry
[21:36:17] it just means that we wouldn't use the same service to handle the GETs
[21:36:24] we'd keep using likely varnishkafka for that
[21:36:32] <_joe_> ugh
[21:36:35] which is a red flag
[21:36:38] <_joe_> yes
[21:36:42] Krinkle: yes, i would tend to think so
[21:36:43] <_joe_> I agree with Pchelolo
[21:36:52] agree, it seems cleaner if the same service handled it all.
[21:36:53] we are trying to unify things, not make more things
[21:36:55] <_joe_> also we're moving away from varnish
[21:36:59] true
[21:37:09] <_joe_> I'd avoid relying on specific techs on that front
[21:37:11] but we'll need a kafka logger for ats too
[21:37:15] for webrequest
[21:37:15] <_joe_> yes
[21:37:30] yeah, seems better to have the endpoint support both, rather than for the endpoint to have an HTTP port, but also a kafka consumer for varnishkafka that would programmatically invoke the same code
[21:37:33] _joe_: hehe I was typing "let's not discuss varnish, it's the past and we're here for the future"
[21:38:02] Krinkle: i agree. but, for a moment, let's assume we don't need GET
[21:38:10] ottomata: only mildly trolling, but, did you check out Apache Pulsar?
[21:38:23] <_joe_> Krinkle: so edge GET => edge layer => kafka => $new_service => kafka ?
[21:38:41] more like edge GET => VLC 400
[21:38:47] VCL HTTP 400
[21:39:03] so check out https://phabricator.wikimedia.org/T185233
[21:39:42] currently in that, i have the GET events from varnishkafka validated and processed and re-produced to Kafka by a Stream Processing service (this is something that the EventLogging codebase is handling right now)
[21:40:01] _joe_: but yeah, I would think during the days/weeks while we transition, either both services will produce to the kafka queue of 'validated events' or the old GET/varnishkafka/eventlogging service will forward POST to the new POST/EventBus.
[21:40:10] either way, not really part of the architecture, just temporary
[21:40:17] <_joe_> oh ok
[21:40:36] consuming and then POSTing to the service is not a bad idea.
[21:40:38] #link https://phabricator.wikimedia.org/T185233
[21:40:56] oh sorry, i thought that was linking to the image
[21:41:11] anyway, scroll down to the bottom of the description
[21:41:56] so ok, 20 minutes left. hm
[21:42:03] for a minute, let's assume we don't need GET
[21:42:11] <_joe_> oh ok so here /beacon requests would go to kafka not via the platform
[21:42:14] <_joe_> I see
[21:42:21] then my fav is still option 3
[21:42:31] <_joe_> I agree with Pchelolo
[21:42:39] unless ottomata has research that says otherwise
[21:42:50] ottomata: interesting, so the POST entry in that image shows it going to kafka and then being validated afterwards, similar to what we do now with GET /beacon. Does that mean support for validation to clients is going to be removed (which EventBus currently supports, right?)
[21:42:51] what if we can't upstream the fork?
[21:43:04] in my mind option #1 is still the most viable
[21:43:07] no, sorry, validation is done by REST Proxy for POST
[21:43:13] I know someone said it was out of scope, but it seems in scope if we rely on kafka to validate post-receive.
[21:43:19] OK
[21:44:03] for the RFC, we don't need to choose between option 2 and 3, but I'd like to get some movement on at least whether they are worth prototyping/trying
[21:44:17] if option one was strongly preferred by everybody here, then we wouldn't bother
[21:44:17] <_joe_> ottomata: it's clear we would need to put effort into working with upstream; AIUI json schema validation is needed by others?
[21:44:18] mobrovac: most viable vs most desired are different questions. I agree with you 1 is the easiest but 3 is so much better long term if we succeed
[21:44:28] I agree about EventBus being quite viable. My only concern would be that we'd probably have separate clusters of the same service to isolate faults between authenticated producers that are part of our infra in some way, and analytics/web clients.
[21:44:41] _joe_: yes, but i doubt they'd upstream it unless we made jsonschema work with their Schema Registry
[21:44:46] and we aren't suggesting to do that
[21:44:57] the non-blocking aspect could either be opt-in on both, or be set alongside the same separation. Probably the latter would suffice.
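On "consuming and then POSTing to the service" as a transitional bridge: below is a rough sketch that reads the raw beacon topic produced by varnishkafka and re-submits each decoded event to the new intake endpoint. The topic name, intake URL and record layout are assumptions for illustration only; the point is just that the old GET edge and the new POST endpoint can coexist while clients migrate.

```python
# Rough sketch of the transitional bridge floated above: consume raw beacon
# events (as varnishkafka produces them today) and forward each one as a POST
# to the new intake service. Topic, URL and record layout are hypothetical.
import json
from urllib.parse import unquote

import requests                     # pip install requests
from kafka import KafkaConsumer     # pip install kafka-python

INTAKE_URL = 'http://localhost:8085/v1/events'   # the new service (assumed)

consumer = KafkaConsumer(
    'eventlogging-client-side',     # assumed name of the raw beacon topic
    bootstrap_servers='localhost:9092',
    group_id='beacon-bridge',
)

for message in consumer:
    raw = message.value.decode('utf-8', errors='replace')
    try:
        # Assumed: the record carries the urlencoded-JSON beacon query string.
        event = json.loads(unquote(raw.rstrip(';')))
    except ValueError:
        continue                    # skip records we cannot decode
    # Fire-and-forget: the intake does the real validation and Kafka produce.
    requests.post(INTAKE_URL, json=event, timeout=2)
```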
[21:45:02] <_joe_> ottomata: in a void, I'd pick option #1 all the time, but it seems you really don't like it
[21:45:13] one consideration is that Kafka REST Proxy is JVM, and for some reason nobody likes the JVM here
[21:45:28] <_joe_> Krinkle: the infra would need to be separated for a series of reasons
[21:45:29] Krinkle: there will likely be different deployments/endpoints of any of these services
[21:45:35] cool
[21:45:42] <_joe_> s/likely/surely/ :P
[21:45:48] picking option #1 still allows us to switch to option #3 if later it turns out that confluent starts or wants to support jsonschema as well
[21:45:58] <_joe_> mobrovac: right
[21:46:00] it seems to have the better cost/benefit ratio here
[21:46:24] I think between picking 2 and 3, it would be nice if as much as possible we can go for 3 to build on upstream. But unlike what I suspected, I now hear that option 1 (EventBus) is quite ready for it as-is, which makes me somewhat suspicious as to why that wasn't obvious.
[21:46:24] <_joe_> #info < mobrovac> picking option #1 still allows us to switch to option #3 if later it turns out that confluent starts or wants to support jsonschema as well. it seems to have the better cost/benefit ratio here
[21:46:38] except throwing away all of the code if we switch to 3 eventually
[21:46:48] btw 15 minutes left :)
[21:47:02] <_joe_> so if upstreaming is hard, option #3 is less viable
[21:47:16] i think upstreaming will be hard
[21:47:23] <_joe_> and then I'd pick #1 too.
[21:47:26] with #1, how would large-volume, large-size events be handled?
[21:47:29] i don't know that for sure, but i think it'd be safe to assume that our changes to #3 wouldn't be upstreamed
[21:47:32] Pchelolo: right, but option #3 locks us in no matter what, option #1 leaves us wiggle room to reconsider if circumstances change in the future
[21:47:43] GET is size-limited, the POST endpoint AIUI doesn't really scale
[21:48:02] tgr: it's 1 character difference. What scale concern?
[21:48:05] all 3 of these options have POST endpoints
[21:48:05] <_joe_> tgr: same as with #2, we would have to solve scalability issues that are not discussed on the ticket at all
[21:48:14] (in size)
[21:48:34] tgr: EventBus is taking in 4 megabytes per event for the job queue now
[21:48:47] <_joe_> frankly I don't think a sentry-like service would make a lot of use of this vs connecting directly to kafka
[21:48:47] q: do we have other (highish volume) python public services?
[21:49:04] Pchelolo: how many events though?
[21:49:16] <_joe_> ottomata: no, we only have high volume php services though
[21:49:22] <_joe_> :)
[21:49:24] https://grafana.wikimedia.org/dashboard/db/eventbus?refresh=1m&orgId=1
[21:49:30] _joe_: not really "public" though, for php.
[21:49:33] yayay but i'm almost certain we can't use PHP for this (no kafka client)
[21:49:47] tgr: about 1K / second in eventbus it looks like
[21:50:01] <_joe_> ottomata: my point being, there is no good reason I know to pick node over python/tornado for scalability of intake
[21:50:02] and some of them are really big
[21:50:17] is there a reason to pick python over node?
[21:50:28] <_joe_> Krinkle: well, you access it via varnish pretty much like you'd access this service
[21:50:29] ottomata: Do we have any stats on processing speed of current eventlogging (the one that consumes from varnish /beacon)?
[21:50:43] <_joe_> ottomata: that we already have the code?
[21:50:50] _joe_: except that our php layer definitely cannot handle the same load, given 99% cache hit?
[21:50:51] the only reason for python would be we have some code to copy-paste
[21:50:56] wrt client-side error logging, if something goes wrong you have one event per pageview, which I think at peak time is several thousands / sec?
[21:51:24] <_joe_> Krinkle: sorry, let me rephrase: all the nodejs services don't make up a fraction of the requests we serve from php
[21:51:49] fwiw, some of the functionality of the eventbus code I wrote is also in service-template-node
[21:51:49] just 9 minutes left
[21:52:22] tgr: yeah, like 7k per second
[21:52:24] ottomata any last minute questions/clarifications?
[21:52:34] phew, yeah i think tons? not sure how much closer we are
[21:52:47] <_joe_> ottomata: I really don't see a full rewrite as justified based on the information I have; if you feel it's really needed, I would need more arguments in that direction to be convinced
[21:52:50] tgr: we never tried to kill it
[21:52:53] _joe_: re directly connecting to kafka: sentry for javascript would send error events from the browser, that can't go directly into kafka I assume
[21:52:57] it is always possible to continue the discussion on the ticket and then possibly have another meeting if needed
[21:53:01] we currently have no public python services. we do have EventBus take in quite a lot of traffic internally. _joe_: I agree Node.js/Python is not an argument in scale or perf. But I'd assume with Restbase/parsoid/changeprop/*oids we do serve more requests there than on app servers, which is only cache miss and logged-in views.
[21:53:15] <_joe_> tgr: right sorry I was thinking internal applications
[21:53:32] ok here's a clarification maybe we can settle:
[21:53:39] assuming we cannot upstream REST proxy changes
[21:53:44] <_joe_> Krinkle: I think that if you include async traffic in both, you don't
[21:53:47] should we not pursue REST Proxy?
[21:53:51] I'd be surprised if we have more php req/s on app servers than on all the *oids and restbase/CP combined. Either way, I'd support using Python for this.
[21:54:04] _joe_: hm.. if you include jobrunners php reqs, then yeah, maybe.
[21:54:20] <_joe_> if you don't include async traffic, I'm definitely sure the php traffic is way higher
[21:54:23] I was thinking web+api app servers vs restbase/*oid/changeprop.
[21:54:51] <_joe_> but we're off-topic, sorry :P
[21:55:14] Krinkle: restbase is also behind varnish, and *oids/CP scale with edits, not views
[21:55:44] tgr: views are a big portion now, with page previews and math
[21:55:46] opinions about my REST Proxy Q ^^^^^^ ?
[21:56:08] Sure sure. It's a moot conversation, but regardless of current request stats, we can agree that our php servers will respond slowly and scale significantly worse (given the same hardware) than any of our node.js or python services.
[21:56:13] ottomata: i would say that option #3 is not off the table, but its timing is not the best
[21:56:34] Which is not due to PHP but due to MW, and is a fixable problem.
[21:56:44] as in, we should reconsider #3 in the future
[21:56:57] ok, in the interest of time, I'll focus more on eventbus refactor vs. node (or something else) now then
[21:57:05] i'll have a go at what a real refactor would look like
[21:57:11] (of eventbus)
[21:57:20] <_joe_> ottomata: yeah, if upstreaming our changes seems impossible, and even restructuring upstream so that our changes can live as an extension is too, it should not be considered right now
[21:57:22] ottomata: Yeah, once we've unified on EventBus, its implementation could be swapped I suppose. That seems like a nice follow-up if interesting to your team.
[21:57:26] I love #3 as it's the best eventually, but #1 is so much easier
[21:57:28] and if it becomes particularly nasty, i'll report back and argue for the new thing
[21:57:41] just 3 minutes left to wrap it up. Anything that needs to be added to the minutes with #info?
[21:57:53] I'm not sure if #3 is actually easier than writing a new node based service.
[21:57:57] We should use Docker containers.
[21:57:59] <_joe_> ottomata: sounds good
[21:58:11] And kill Hitler.
[21:58:13] <_joe_> ahah
[21:58:30] it's definitely easier to swap implementations out if we start with eventbus. we could keep the POST API mostly the same for MEP
[21:58:31] Maybe with a Docker container?
[21:58:32] <_joe_> I was about to ask what the catchphrase was
[21:59:12] obligatory https://www.tor.com/2011/08/31/wikihistory/
[21:59:37] #action ottomata to research refactoring eventbus first before other options
[21:59:58] and on that note, please continue the conversation on the phab ticket.
[22:00:11] #endmeeting
[22:00:11] Meeting ended Wed Sep 26 22:00:11 2018 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
[22:00:11] Minutes: https://tools.wmflabs.org/meetbot/wikimedia-office/2018/wikimedia-office.2018-09-26-21.01.html
[22:00:12] Minutes (text): https://tools.wmflabs.org/meetbot/wikimedia-office/2018/wikimedia-office.2018-09-26-21.01.txt
[22:00:12] Minutes (wiki): https://tools.wmflabs.org/meetbot/wikimedia-office/2018/wikimedia-office.2018-09-26-21.01.wiki
[22:00:12] Log: https://tools.wmflabs.org/meetbot/wikimedia-office/2018/wikimedia-office.2018-09-26-21.01.log.html
[22:00:50] milimetric: Thanks
[22:01:05] cheers!
[22:01:18] thanks all!