[21:03:16] Krinkle is going to start the meeting at some point, right?
[21:03:43] hope so. otherwise, i'll do so
[21:03:53] #startmeeting RFC meeting
[21:03:53] Meeting started Wed Oct 25 21:03:53 2017 UTC and is due to finish in 60 minutes. The chair is Krinkle. Information about MeetBot at http://wiki.debian.org/MeetBot.
[21:03:53] Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
[21:03:53] The meeting name has been set to 'rfc_meeting'
[21:03:57] Time passes quickly
[21:04:01] :)
[21:04:30] Krinkle: can you also do #topic and #link?
[21:04:38] do we have MaxSem and anomie?
[21:04:43] * anomie is here
[21:04:45] #topic T169266 Clarify recommendations around using FauxRequest
[21:04:46] T169266: Clarify recommendations around using FauxRequest - https://phabricator.wikimedia.org/T169266
[21:04:47] * MaxSem too
[21:04:52] yay!
[21:05:16] RFC author is jdlrobson
[21:05:16] no yuri... jdlrobson maybe?
[21:06:07] Ok. So the main question today is: if and when is it ok to use FauxRequest to make an internal call to the web API in production code.
[21:07:03] oh, for the record: jdlrobson wrote the rfc to get clarification of earlier discussions. i pushed for it to be discussed as an rfc here
[21:07:26] i don't think anyone disagrees with the high-level concept that a better interface to mediawiki functionality than FauxRequest is desirable. I think the problem is generally that it mostly doesn't exist
[21:07:30] if and when the alternatives are even worse
[21:07:43] there is lots of functionality you can only get from the api unless you want to tack an extra 200 lines of code onto your feature
[21:08:16] Participants from phab task: anomie, ebernhardson, MaxSem
[21:08:23] ebernhardson: interestingly, jdlrobson argued the opposite: many things cannot be done via the web API, and they should be possible via the web API, so let everything use the web API.
[21:08:37] has anyone looked specifically at the watchlist continuation issue which apparently inspired most of the comments?
[21:08:37] Sorry i missed the meeting start, can i have the phab link :/
[21:08:50] Zppix: https://phabricator.wikimedia.org/T169266
[21:08:55] Thanks daniel
[21:09:23] everyone talks about the backends that you could use instead, but T111074 is about interfacing with the wrapper bit of the API
[21:09:23] T111074: Watchlist query continuation handling - https://phabricator.wikimedia.org/T111074
[21:09:29] DanielK_WMDE: it's generally the same thing but in a different order: there are *also* lots of features that are only implemented in special pages or whatnot but not available via the api. Basically there is a huge feature disparity between the two
[21:09:39] TimStarling: i have not, but i ran into a similar problem with listing language links a while ago.
[21:09:42] Running into a situation where it seems like a FauxRequest into the API is a good idea is usually really a situation where you have an opportunity to refactor things to reduce tech debt.
[21:10:01] addshore: you around? want to talk about paging for the watchlist API?
[21:10:06] #info main question today is: if and when is it ok to use FauxRequest to make an internal call to the web API in production code.
[21:10:22] #info Running into a situation where it seems like a FauxRequest into the API is a good idea is usually really a situation where you have an opportunity to refactor things to reduce tech debt.
[21:10:48] so do we need to refactor the API to provide a better internal interface for continue parameters?
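[Aside: for readers unfamiliar with the pattern under discussion, this is roughly what an internal API call via FauxRequest looks like — a minimal sketch using the classic FauxRequest/ApiMain pattern; the specific query parameters are illustrative only.]

```php
// A fake web request carrying API parameters (all values are strings).
$fauxRequest = new FauxRequest( [
	'action'  => 'query',
	'list'    => 'watchlist',
	'wllimit' => '50',
] );

// Run the API machinery in-process and read the result back as a nested
// array, much as a remote client would have received it.
$api = new ApiMain( $fauxRequest );
$api->execute();
$data = $api->getResult()->getResultData();
```

The discussion that follows is about whether production code should ever do this, or instead call a plain PHP service class directly.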
[21:10:56] #info there is lots of functionality you can only get from the api unless you want to tack an extra 200 lines of code onto your feature
[21:11:15] T111074 is specifically about code currently doing a FauxRequest *and* hacking up the wlcontinue parameter manually, which they want to change to stop hacking up wlcontinue. But a better idea would probably be to use WatchedItemQueryService now that it exists.
[21:11:42] #info do we need to refactor the API to provide a better internal interface for continue parameters?
[21:12:03] TimStarling: i would like to see the paging infrastructure, and also the query/enrich pipeline of ApiQuery, decoupled from the API, so it can be used by Special pages as well as API modules
[21:12:12] but do they also have frontend code that consumes the API?
[21:12:24] there was talk of infinite scrolling
[21:12:33] i'm fully in favor of having the api and special pages all be a bit thinner, with some sort of shared implementation that does the majority of the work other than parameter parsing/output formatting. But it seems more like a wish than a suggestion for how to write code today
[21:12:59] TimStarling: yea - infinite scrolling is nice, but I did not get how that relates to php code on the backend.
[21:12:59] +1 ^
[21:13:02] Do we actually want PHP code in MediaWiki to use continue/paging, e.g. making multiple requests? Or would it be preferable to use underlying interfaces to make a larger batch request directly? Also relates to the possibility of caching.
[21:13:26] ebernhardson: it's my strong suggestion for how to write code today.
[21:13:35] Krinkle: ideally we would want whatever is less performance intensive
[21:13:52] ebernhardson: it does mean investing time into refactoring, though. so it's not the quickest way to close a ticket.
[21:14:33] Krinkle: my use case for paging in php is across requests, not in a single request.
[21:14:41] I think making multiple requests through FauxRequest is quite bad, but I think the use case here isn't about using 'continue' to make multiple requests, but rather to expose paging to the front-end via a special page.
[21:14:45] DanielK_WMDE: Right
[21:14:50] Krinkle: think of special pages with next/previous links
[21:14:53] Yeah
[21:15:30] Those aren't typically something you'd find in a PHP class interface. I don't think we have examples of php interfaces that take intuitive offsets beyond very basic numerical offsets, which aren't enough for complex things like whatlinkshere.
[21:15:41] I think a situation where API calls contain business logic is not good. It makes it harder to reuse the logic (I had to refactor several places because that logic was not reusable, and clearly the FauxRequest antipattern is another reaction to the same problem)
[21:16:04] ebernhardson: if we could use FauxRequest as a stepping stone towards properly isolated application logic, then i'd agree. but using FauxRequest seems more like a step in the opposite direction. It doesn't get us closer, and it makes refactoring harder.
[21:16:12] internal use of page links just queries them all at once, but for user-facing things like a special page, we do need the continuation, and if we don't want to use FauxRequest/ApiQuery for those cases, what would be an alternative?
[21:16:28] that said, we'll probably not rewrite the whole API soon, so practically...
[21:16:40] Conceptually, many of the API query modules and many of the Special pages do basically the same thing. But API query modules aren't abstracted, and Pager is extremely UI-oriented. So unifying the two would be a rather big design project, and a bigger amount of work converting things.
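[Aside: one possible shape for the kind of decoupled, continuation-aware query layer being asked for above — a hypothetical sketch only; the interface and method names are invented for illustration and do not correspond to an existing MediaWiki class.]

```php
use MediaWiki\Linker\LinkTarget;

/**
 * Hypothetical query service usable from both an API module and a special
 * page, without going through ApiBase or the UI-oriented Pager classes.
 */
interface WhatLinksHereQueryService {
	/**
	 * @param LinkTarget $target Page whose backlinks are wanted
	 * @param int $limit Maximum number of rows to return
	 * @param array|null $continue Opaque offset returned by a previous call,
	 *        e.g. [ <namespace>, <title> ]; null for the first page
	 * @return array [ 'rows' => LinkTarget[], 'continue' => array|null ]
	 */
	public function getBacklinks( LinkTarget $target, $limit, array $continue = null );
}
```

Both the API module and the special page would treat the 'continue' value as opaque: the API module would serialize it into a wlcontinue-style request parameter, the special page into next/previous links.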
[21:17:05] anomie: but what's the alternative?
[21:17:13] A lot of the long-term goal of using PHP interfaces internally is already becoming a reality with MediaWikiServices. Much further along than a few years ago, when PrefixIndex was basically the only properly abstracted PHP service.
[21:17:15] #info Conceptually, many of the API query modules and many of the Special pages do basically the same thing. But API query modules aren't abstracted, and Pager is extremely UI-oriented. So unifying the two would be a rather big design project, and a bigger amount of work converting things.
[21:17:27] DanielK_WMDE: Alternative to what, a bunch of work to do things right?
[21:17:29] However, it doesn't currently address intelligent offsets for query modules.
[21:18:10] anomie: i mean if we don't refactor all those api modules and special pages, what other way is there to share more code?
[21:18:15] well, FauxRequest, I guess...
[21:18:28] DanielK_WMDE: Bad hacky code would be the alternative, I suppose.
[21:18:33] hehe
[21:18:42] so you propose to invest the time to do it right (tm)?
[21:18:47] i.e. FauxRequest calling into the API
[21:19:00] with infinite scrolling, is the goal to have PHP deliver the first page of results, and then JS will construct subsequent pages using API calls?
[21:19:21] I think investing the time to do it right would be the best long-term result. The problem, as always, is finding the time to invest.
[21:19:23] and wouldn't that make it difficult to keep the two in sync, visually?
[21:20:05] TimStarling: Flow does exactly that, but it's done by basically what is suggested here: the api is a thin wrapper over some other code, and the api and regular page render all call the same stuff and have access to the same resulting data
[21:20:10] TimStarling: my guess is that this was the idea, and the desire is to use the same templates for rendering, and the same api requests for the data, to keep it in sync.
[21:20:16] TimStarling: The visual sync is probably why some are pushing for redoing frontend code in server-side js. :/
[21:20:27] But that's offtopic.
[21:20:38] TimStarling: but i wonder - if we load most of the content with JS anyway, why not also load the first page with JS? That would make things so much simpler.
[21:20:40] #info with infinite scrolling, is the goal to have PHP deliver the first page of results, and then JS will construct subsequent pages using API calls?
[21:21:14] For this use case we'd have to make sure the offsets translate between special page query parameters and API modules.
[21:21:36] Which seems reasonable, if we make it an abstract concept not specific to either, but specific to the PHP class that provides that query.
[21:21:39] Krinkle: if they use the same pager, that would Just Work :)
[21:21:44] Yeah
[21:21:51] Krinkle: Which is easily enough solved by having the backend "service" return the offsets that the API just passes through.
[21:21:55] different Pager subclasses or traits, but ultimately the same base class.
[21:22:19] We did something similar with user tokens. They used to be API-specific, but are no longer afaik.
[21:22:40] Which helped a lot with the ajaxification of watch, patrol, and (eventually) rollback
[21:23:00] as well as ajax editing/VE
[21:23:13] I'd like to look at this from the Code Monkey perspective. Given the task to write a SpecialPage that provides the same functionality as an ApiModule, what should Code Monkey do, and how much effort would it be?
[21:23:21] I think I preferred the old approach of having thin frontends, with private ajax APIs delivering HTML fragments instead of structured data
[21:23:29] I note WatchedItemQueryService handles continuation by passing a 2-item array, which ApiQueryWatchlist implodes/explodes for sending to and reading from the client.
[21:24:37] anomie: that sounds like a good model. The serialisation presumably would mean that ApiModule and SpecialPage have to do it the same way for client-side JS to be able to use them interchangeably. Ideally that string format would be maintained as part of the class itself or a dedicated class.
[21:24:44] TimStarling: the world seems to be moving in the opposite direction :) But I also see advantages in that approach.
[21:24:50] TimStarling: The drawback there is that it doesn't provide a usable API for any other client than the thin frontend.
[21:25:48] First render should always happen server-side. SPA (Single-Page App) went a long way with client-side JS, but the industry is moving back. "Server-side rendering", as funny as it sounds at first, is a big thing and a "recent trend" for front-end devs nowadays.
[21:26:25] Krinkle: ...in JS
[21:26:47] Sure, be it isomorphic JS that is largely shared between Node/browser, or even entirely shared (node-serviceworker).
[21:27:03] yeah, in JS, we can't really recommend that our devs write everything twice as a development policy
[21:27:16] But the abstraction layer can also be JSON and HTML templates; the JS/PHP can be mostly stateless and wouldn't require as much duplication.
[21:27:19] But, we drift off-topic.
[21:27:39] we started in that direction with mustache templating, which used the same api response on both ends to render, but it seems to have been generally disliked
[21:27:51] well, if we started from scratch tomorrow, the architecture could be for PHP to emit JSON only, and for rendering to happen at the edge using some JS template framework.
[21:27:59] that's what gabriel was aiming for, i think
[21:28:00] Whether or not we also have an API module that renders an HTML fragment doesn't change that we'd need both PHP/SpecialPage and PHP/ApiModule to respond with compatible offsets.
[21:28:48] I'd propose to keep the format of the API module response (HTML vs data) orthogonal to this RFC.
[21:29:19] Ok. So in practical terms, if i want to write a SpecialPage, and have an API module, I have two choices: use FauxRequest and deal with the encoding/decoding/rendering, or refactor the API module (and still deal with rendering, but based on an idiomatic php data model)
[21:29:22] DanielK_WMDE: Ideally, Code Monkey would take the existing logic in the API module and turn it into a class that takes input data and produces output. Then the API module would parse its parameters into the format needed by the class (possibly constructing Title objects, etc), instantiate and call that class, then process the results (e.g. turning Title objects into standard data structures for output). The special page would do basically the same thing, except its "standard data structure" would be HTML rather than a PHP data structure to be serialized to JSON/XML/etc.
[21:30:01] anomie: so, in terms of effort, how does that compare to "use FauxRequest"?
[21:30:35] DanielK_WMDE: A bunch more work, obviously. It's the old dilemma between fast and good.
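[Aside: a rough sketch of the refactoring pattern described above — the API module parses its parameters, delegates to a plain service class, and serializes the result. The class, parameter, and service names here are hypothetical; only the implode/explode of an opaque continuation array mirrors what was said about ApiQueryWatchlist, and getAllowedParams() is omitted for brevity.]

```php
use MediaWiki\MediaWikiServices;

class ApiQueryBacklinks extends ApiQueryBase {
	public function execute() {
		$params = $this->extractRequestParams();

		// Parse protocol-level input into typed values for the service
		// (validation omitted for brevity).
		$target = Title::newFromText( $params['title'] );
		$continue = $params['continue'] !== null
			? explode( '|', $params['continue'], 2 )
			: null;

		// Hypothetical service; the API module holds no SQL or business logic.
		$service = MediaWikiServices::getInstance()->getService( 'BacklinksQueryService' );
		$result = $service->getBacklinks( $target, $params['limit'], $continue );

		// Turn typed results back into plain data for the output formatter.
		foreach ( $result['rows'] as $row ) {
			$this->getResult()->addValue( [ 'query', 'backlinks' ], null,
				[ 'ns' => $row->getNamespace(), 'title' => $row->getText() ] );
		}

		// Pass the opaque continuation through to the client as one string.
		if ( $result['continue'] !== null ) {
			$this->setContinueEnumParameter( 'continue', implode( '|', $result['continue'] ) );
		}
	}
}
```

A SpecialPage would call the same getBacklinks() method, render the rows into HTML, and turn the 'continue' array into its next-page link, so FauxRequest never enters the picture.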
[21:31:02] using fauxrequest also requires you to already have an api module ;) if you're starting from scratch on a new feature, is there a different calculus?
[21:31:02] #info Ideally, Code Monkey would take the existing logic in the API module and turn it into a class that takes input data and produces output. The API module would parse its parameters into the format needed by the class (possibly constructing Title objects, etc) [..]. The special page would do basically the same thing, except its "standard data structure" would be HTML
[21:31:08] well, actually, i don't think it's even *that* much more work. it needs a bit more thinking, but...
[21:31:20] i think we already have complaints that writing api modules is harder than it should be
[21:31:32] the api module already has the right structure. it already takes well-defined input and produces a data structure as the output
[21:31:37] (due in part to the xml-ish intermediate data structures)
[21:31:41] Good point. Would we ever encourage putting SQL logic in a (new) API module and recommend FauxRequest for the special page rendering?
[21:31:48] it should be simple enough to refactor this into something that is decoupled from the api framework
[21:32:18] Let's challenge our ideas and see if they hold without the cost of refactoring.
[21:32:49] #info using fauxrequest also requires you to already have an api module ;) if you're starting from scratch on a new feature, is there a different calculus?
[21:33:31] Krinkle: i would not encourage that, since it violates separation of concerns. app logic should not be bound to protocol stuff.
[21:33:35] ideally, it should also not be bound to the storage details, but in the case of listings based on complex queries, that point is a bit off
[21:34:10] I don't like the idea of SQL logic in the API. That way, if you need the same logic somewhere else (and eventually you do), it's very hard to reuse it
[21:34:29] no, it doesn't make sense for new code
[21:34:41] Krinkle: to me, the API framework is a presentation layer (result serialization) and request handling (input validation, etc). Structurally, it's very much like processing a form from a special page. It's an alternative framework to plug your app logic into
[21:34:57] but maybe we can talk about VirtualRESTService etc.?
[21:34:57] #info I don't like the idea of SQL logic in the API. That way, if you need the same logic somewhere else (and eventually you do), it's very hard to reuse it
[21:34:58] SMalyshev: I don't think anyone is arguing for SQL logic in the API. But we do have a lot of tech debt along those lines.
[21:35:02] It sounds like we all agree FauxRequest/ApiModule should not be the "ideal" interface for PHP code needing to query a backend store. Is that right?
[21:35:33] Krinkle: i would call it "option of last resort"
[21:36:06] suppose you want to have a service written in PHP, optionally accessed over the network
[21:36:27] then FauxRequest starts to make more sense, except that it's too generic
[21:36:41] anomie: well, yes, debt is debt, that is clear. But Krinkle was talking about new modules, and i'd say it's a no-no
[21:36:43] ideally you would want to avoid the serialization cost in the case of internal routing
[21:37:39] the old way to do this is with an RPC framework; again, way too old-fashioned for us, so we must find a harder way to do it
[21:37:44] I think prior RFCs and coding conventions already dictate that SQL interaction should be abstracted via a PHP class. However, the query services we are talking about do more than querying the database. Presumably we could still be in this situation even if WhatLinksHere had a Storage backend class, with a large amount of code still in the API module that one could use via FauxRequest in the special page. How do we feel about that?
[21:38:15] TimStarling: yes - the app logic should not know about that. it should see a service (php) interface. which can of course be backed by a standalone service.
[21:38:26] Serialization cost, and possibly also data re-validation cost after being deserialized on the receiving end. e.g. Title -> string -> Title with all the secureAndSplit sort of stuff in the second arrow.
[21:38:41] TimStarling: Would MediaWikiServices (and previously VirtualRESTService) be an adequate answer to that? The backend provides a PHP interface, but could query a database, or a file, or an HTTP service etc, or shell out..
[21:38:52] TimStarling: i'm good with an RPC implementation of service interfaces :)
[21:39:34] Using HTTP (or VirtualRest) has a similar characteristic to FauxRequest, in that all parameters must be strings. Do we want that?
[21:40:34] i'd imagine most of the overhead of string interfaces is if you actually have to encode/decode (split/parse/etc)
[21:40:45] (~20 minutes remaining)
[21:40:47] if you're passing structured arrays of strings, perhaps not as awful
[21:41:21] if you have remote service implementations using MediaWikiServices, then that is more of an RPC model, as opposed to VirtualRESTService which is more like HTTP emulation
[21:41:26] it's not just serialization cost. it's also readability, type safety, static analysis, tooling (IDEs etc)
[21:41:34] that's a pretty big factor, imho
[21:41:46] *nod*
[21:41:51] true!
[21:42:10] Krinkle: i'm opposed to VirtualRest for the same reason I'm opposed to FauxRequest.
[21:42:29] TimStarling: Just to confirm, are you advocating for RPC/MediaWikiServices or HTTP emulation?
[21:42:54] brion: http://wiki.c2.com/?StringlyTyped
[21:43:10] I think RPC would be better
[21:43:14] :)
[21:43:41] #info i'm opposed to VirtualRest for the same reason I'm opposed to FauxRequest. Using VirtualRest has a similar characteristic to FauxRequest, in that all parameters must be strings.
[21:43:46] so, what about acceptable uses of FauxRequest? Unit tests are one, I think.
[21:44:02] prototypes?
[21:44:56] DanielK_WMDE: If the API module is powered mostly by another class, we'd see API unit tests that are actually unit tests. that'd be cool.
[21:45:08] when you need shit done but there's no clean PHP interface to do what you need and it can't be produced in a reasonable amount of time
[21:45:12] E.g. mock the request and mock the internal response. Test the processing of input/output.
[21:45:21] I was thinking that if we had purely presentational php code that *only* uses the API and doesn't bind to MediaWiki directly, and we want to be able to run it inside MediaWiki or standalone - that would be an acceptable use of FauxRequest. Though that should still be wrapped in proper service interfaces.
[21:45:30] MaxSem: What deadlines do WMF have that would pass that criteria?
[21:45:36] Isn't it all arbitrary and debatable?
[21:46:07] Krinkle: in Wikibase you can find some like this. Though most API modules in wikibase are far from perfect, and some of the "unit" tests are positively scary.
[21:46:32] Krinkle: the problem is injecting any services, API modules resist DI. But we digress.
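[Aside: the "acceptable in tests" case mentioned above, sketched under the assumption of a PHPUnit-style MediaWiki integration test; the queried module and the assertions are illustrative. MediaWiki's own ApiTestCase wraps essentially this pattern in its doApiRequest() helper.]

```php
// Exercise an API module end-to-end without HTTP: build a fake request,
// run ApiMain, and assert on the structured result.
$request = new FauxRequest( [
	'action' => 'query',
	'meta'   => 'siteinfo',
] );

$api = new ApiMain( $request );
$api->execute();
$data = $api->getResult()->getResultData();

$this->assertArrayHasKey( 'query', $data );
$this->assertArrayHasKey( 'general', $data['query'] );
```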
[21:47:27] MaxSem: in the Wikimedia case, it's never external pressure, just personal impatience, I think...
[21:48:05] DanielK_WMDE: i think there are a number of things where if it takes a day, it gets prioritized. If it takes a week it gets punted into the pile of tickets to do "someday"
[21:48:13] Krinkle: if e.g. refactoring an API takes an order of magnitude longer than using FauxRequest, lots of people would ask if it's worth it
[21:48:25] (where day and week are arbitrary numbers i chose)
[21:48:39] Apologies for the poor phrasing there, I didn't mean to suggest our features aren't important. What I mean is, our features serve mission-critical purposes and long-term goals that we need to meet. But I'm questioning the short-term time difference of adding, say, another week or two of delay, given no external pressure. Not questioning a go/no-go.
[21:48:44] MaxSem's statement reminds me of a parable bd808 told me about once: "pioneers" try to prove something is possible in a quick-and-dirty way, "settlers" make it work, and "city planners" make it work well and efficiently. Unfortunately we have a tendency for the pioneers to do something and that something gets put into production with minimal cleanup, leaving the tech debt for someone else to eventually have to deal with.
[21:49:55] that describes pretty well the output of most software engineering efforts
[21:50:09] I think technical debt can be acceptable, and by definition FauxRequest is technical debt if today's discussion yields that it is an anti-pattern. Like all responsible debt, the team in question would agree to address it once the feature has been shipped to a certain extent.
[21:50:43] In the ideal universe, an org has all three kinds of people and they move through projects in succession
[21:50:43] However, if we consider it generally acceptable, it isn't strictly considered debt, and then anomie's phrase applies - in that it isn't paid off, and basically not considered debt (another way of saying the same thing)
[21:50:57] MaxSem: i don't think it's ever an order of magnitude. at least not if you take into account the overhead that FauxRequest causes
[21:51:24] but refactoring api modules is scary. calling them is more accessible to new(ish) devs.
[21:51:26] #info DanielK_WMDE> so, what about acceptable uses of FauxRequest? when you need shit done but there's no clean PHP interface to do what you need and it can't be produced in a reasonable amount of time
[21:51:42] maybe there could be some offer of help with the refactoring?
[21:51:55] that may make this a lot more appealing...
[21:52:19] help from whom?
[21:52:33] you
[21:52:37] me,
[21:52:42] brad, tim...
[21:52:43] I ran a team that was going to tackle this for ... 9 days. :)
[21:52:44] wmf mentoring services ™
[21:53:00] then the reorg of Doom™ happened
[21:53:12] bd808: whoever owns api code and tech debt
[21:53:21] that would be the platform team, i guess
[21:53:29] team tech debt
[21:53:35] uhu
[21:53:47] ok, since we are approaching the 5-minute mark, i want to formulate some proposals.
[21:53:52] I'm pretty sure that's not what TimStarling and anomie thought they were signing up for
[21:53:54] great to have one team responsible for paying down tech debt, and all the others responsible for accruing it
[21:54:03] I'm sure that will work out
[21:54:05] #info What deadlines do WMF have that would pass that criteria? in the Wikimedia case, it's never external pressure, just personal impatience, I think...
[21:54:09] Thanks, All (TimStarling: will be in touch re WUaS in new WUaS Miraheze Mediawiki and Wikidata). Cheers
[21:54:27] A team that owns all the tech debt would need much more than the number of people MW Platform has.
[21:54:33] 1) the use of FauxRequest in production code is discouraged and considered tech debt. If that debt is deemed acceptable, the team creating such code must commit to resolving it in a timely manner.
[21:54:44] it'd need to be half of engineering ;)
[21:54:48] Krinkle: there's always team goals too ;)
[21:54:53] 2) use of FauxRequest is acceptable (and even encouraged) in unit tests (and integration tests?)
[21:54:59] anomie: 2.5 engineers isn't enough?! slackers ;)
[21:55:24] #info if e.g. refactoring an API takes an order of magnitude longer than using FauxRequest, lots of people would ask if it's worth it. i think there are a number of things where if it takes a day, it gets prioritized. If it takes a week it gets punted into the pile of tickets to do "someday"
[21:55:32] TimStarling: i was thinking of api ownership... and of helping people fix it, not doing it all yourself.
[21:55:45] the idea is to make people less scared to fix old code.
[21:56:08] I think anomie always helps with API changes in gerrit
[21:56:10] i think one reason we have so much debt is that new devs often don't want to refactor old code, they would rather work around it
[21:56:40] I missed the whole bash-FauxRequest party here, but it's a dirty hack and everyone should see it as such. It is at best a testing harness for poorly factored code.
[21:56:53] TimStarling: that's good to hear. i was aiming at something like: instead of saying "no you can't do that", say "no you can't do that, we'll help you to do the right thing".
[21:56:54] I always help with API changes in gerrit. Too often I get pushback that doing it right would be too time-consuming. :(
[21:57:18] #info it's a dirty hack and everyone should see it as such. It is at best a testing harness for poorly factored code.
[21:57:20] Responsibly dealing with technical debt is related, but perhaps best further discussed another day. Do we agree use of FauxRequest in production code is technical debt?
[21:57:35] yes
[21:57:43] Yes
[21:57:49] yes
[21:57:57] anomie: doing it wrong is more time-consuming, but for other people. so code doing it wrong should not be merged, or should get reverted.
[21:58:06] Krinkle: +1
[21:58:19] where is yurik to yell NO?
[21:58:30] bd808: he actually agreed on the ticket :)
[21:58:58] Do we have a recommendation for how to avoid such technical debt? E.g. MediaWikiServices? Is WatchedItemQueryService the shining example?
[21:59:03] (2 minutes left)
[21:59:27] bd808: yurik said: "I think we shouldn't use FauxRequest objects at all (for the reasons outlined elsewhere, such as no type safety, etc)" ... "So ideally we should partition existing API into the internal API and an extremely thin, no logic layer to convert Request into it."
[22:00:14] Krinkle: yes, a stateless php service class, managed via MWServices.
[22:00:19] I personally don't like the internal structure of WatchedItemQueryService, but in general it's probably one of the best current examples of a large thing having the SQL in a backend.
[22:00:19] +1. Business objects and marshaling interfaces
[22:00:39] DanielK_WMDE: one word: AjaxDispatcher. action=ajax
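[Aside: the "stateless php service class, managed via MWServices" recommendation, sketched with MediaWiki's service wiring mechanism; the service name and class are hypothetical, and the constructor dependencies are only an example.]

```php
// ServiceWiring.php (or an extension's equivalent): construct the service
// once, with its dependencies, so callers never touch SQL or the API layer.
use MediaWiki\MediaWikiServices;

return [
	'BacklinksQueryService' => function ( MediaWikiServices $services ) {
		return new BacklinksQueryService(
			$services->getDBLoadBalancer()
		);
	},
];
```

Callers — the API module, the special page, a maintenance script — would then fetch it with MediaWikiServices::getInstance()->getService( 'BacklinksQueryService' ) and get a typed PHP interface instead of stringly-typed request parameters.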
[22:00:45] brion introduced FauxRequest in 2003, which, I have to say, was an excellent time to be introducing tech debt
[22:00:46] bd808: it really seems jdlrobson is the only one who likes the idea. with ebernhardson considering it ok-ish.
[22:00:58] if ever there was a justification for rushing out features, it was in those early years
[22:01:21] Krinkle: what about it?
[22:01:23] got to hurry to the A/B test before the enwiki community votes the project off the wiki
[22:01:24] #agreed Use of FauxRequest in production code is considered technical debt.
[22:01:49] TimStarling: i think we actually need the class. for testing. and yea, the bad old days...
[22:01:53] DanielK_WMDE: AjaxDispatcher, as deprecated as it is, matches "partition existing API into the internal API and an extremely thin, no logic layer to convert Request into it."
[22:02:03] it literally invokes a PHP function with said parameters from the request URL
[22:02:17] One of the best APIs we had (only mildly trolling here)
[22:02:20] #endmeeting
[22:02:20] Meeting ended Wed Oct 25 22:02:20 2017 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
[22:02:20] Minutes: https://tools.wmflabs.org/meetbot/wikimedia-office/2017/wikimedia-office.2017-10-25-21.03.html
[22:02:20] Minutes (text): https://tools.wmflabs.org/meetbot/wikimedia-office/2017/wikimedia-office.2017-10-25-21.03.txt
[22:02:20] Minutes (wiki): https://tools.wmflabs.org/meetbot/wikimedia-office/2017/wikimedia-office.2017-10-25-21.03.wiki
[22:02:20] Log: https://tools.wmflabs.org/meetbot/wikimedia-office/2017/wikimedia-office.2017-10-25-21.03.log.html
[22:02:40] Krinkle: you know who (re)wrote that, right?...
[22:03:11] thanks for chairing, Krinkle!
[22:03:26] https://github.com/wikimedia/mediawiki/commit/97666d062ddb817a7a0783480a4592c0ffb9fd62#diff-828e0013b8f3bc1bb22b4f57172b019d
[22:03:42] I didn't. But logs mention someone called Jens Frank
[22:04:07] jeluf, yes. and me, when writing CategoryTree
[22:04:15] because we didn't have a web api back then
[22:04:18] Right
[22:04:29] And it was better than query.php (or was that already gone?)
[22:04:32] i think i generalized it a bit
[22:04:56] it may even have been before query.php, or at the same time
[22:05:00] I think action=ajax was the first, BotQuery was an extension
[22:05:14] The PHP side of it only died out a few years ago. The client side took much longer because sajax_ became the go-to way to do cross-browser XHR before jQuery.
[22:05:18] also by yurik, iirc
[22:05:22] Only really removed about a year ago.
[22:05:33] Almost as long a deprecation phase as wikibits (which was removed this year)
[22:05:41] hehe
[22:12:13] DanielK_WMDE, Krinkle: https://www.mediawiki.org/w/index.php?title=API:Calling_internally&diff=2600160&oldid=2479123
[22:12:41] please amend if needed :)
[22:12:54] Thx
[22:13:01] https://phabricator.wikimedia.org/T169266#3711464
[22:15:23] DanielK_WMDE: Little bit of German slipped in here - https://phabricator.wikimedia.org/T172165#3699559
[22:15:31] Auto-correct :)