[22:00:58] o/ hi everyone! [22:01:09] #startmeeting RFC meeting [22:01:10] Meeting started Wed Nov 18 22:01:09 2015 UTC and is due to finish in 60 minutes. The chair is TimStarling. Information about MeetBot at http://wiki.debian.org/MeetBot. [22:01:10] Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. [22:01:10] The meeting name has been set to 'rfc_meeting' [22:01:22] #topic API-driven web front-end | RFC meeting | Wikimedia meetings channel | Please note: Channel is logged and publicly posted (DO NOT REMOVE THIS NOTE) | Logs: http://bots.wmflabs.org/~wm-bot/logs/%23wikimedia-office/ [22:01:44] hi all [22:02:04] lets get started? [22:02:46] #link https://phabricator.wikimedia.org/T111588 [22:03:23] o/ [22:03:53] for me it would be valuable to hear about your thoughts and concerns about this direction [22:04:43] "Authenticated views aren't currently cacheable" <-- Parser cache? [22:05:04] Varnish (front end) cache [22:05:04] Marybelle: as in CDN cacheable [22:05:27] whereas you propose assembling the page on the client side? [22:05:30] You want a CDN to cache authenticated views? [22:06:03] TimStarling: client side or edge (CDN), yes [22:07:01] api.php can currently serve parsed page text. [22:07:05] What's being proposed here? 
[22:07:39] Marybelle: the task describes several detailed steps [22:07:42] * aude waves [22:08:04] I've said this in another meeting, but for me the largest single concern is that we don't somehow make having service worker browser support and/or nodejs compositing services a strict requirement for any use of MediaWiki as a wiki platform [22:08:29] As long as we are talking about what amounts to a very fancy skin I'm ok with the concept [22:08:58] yeah, at this point it's about enhancement [22:09:17] I am a bit concerned about good support for no-script type clients but I think you are trying to cover that [22:09:32] bd808: Is that what we're talking about? The task uses a lot of words to just say "very fancy skin." [22:09:52] first load and no-script is an interesting question we are currently thinking about [22:10:14] reading is focusing on a two-phase load process, with a lead section and a separate remainder [22:10:15] Marybelle: I think in general you can think about this proposal as something similar to wikiwand or the mobile frontend [22:10:45] MobileFrontend is an abomination. [22:11:01] So hopefully we won't be re-making that monster. [22:11:02] the main point being separating the authenticated user differences from the anon content for as long as possible [22:11:13] another option I am leaning towards is leveraging streaming HTML parsing in browsers for a similar effect, without the complexity of the two-stage load process [22:11:15] I thought we already did that with parser cache. [22:11:25] Maybe I'm just confused. [22:12:29] first benchmarks on 2G show that just deferring the loading of images already helps a lot [22:12:35] for first paint time [22:13:05] so would we do some kind of bandwidth detection? 
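[Editor's note: the two-phase load mentioned above (a lead section first, the remainder deferred) amounts to splitting the rendered article HTML at the first section boundary. A rough JavaScript sketch of that idea; the function name and the heuristic of splitting at the first h2 are illustrative assumptions, not actual Reading-team code.]

```javascript
// Sketch: split rendered article HTML into a lead section and a
// deferred remainder at the first section heading. Illustrative only;
// a real implementation would work from parsed section metadata.
function splitForTwoPhaseLoad(html) {
  const firstSection = html.indexOf('<h2');
  if (firstSection === -1) {
    return { lead: html, remainder: '' };
  }
  return {
    lead: html.slice(0, firstSection),
    remainder: html.slice(firstSection),
  };
}

// The first response ships `lead`; `remainder` is fetched lazily after
// first paint (or streamed, in the streaming-HTML variant gwicke mentions).
const page = splitForTwoPhaseLoad(
  '<p>Lead paragraph.</p><h2>History</h2><p>Details follow.</p>'
);
```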
[22:13:40] a lot of mobile browsing (at least in the developed world) happens over wifi with megabits available [22:13:48] yeah, browsers are growing apis for that, and another option would be to time the load of some early resources [22:13:59] gwicke: reading is open to looking at the different loading strategies. i think we'll have to try two-step load and full html with deferred image load. in any event, reduction of html size makes a big impact [22:14:31] also, deferred until when? [22:15:06] dr0ptp4kt: agreed on HTML size reduction, esp. navboxes could be optional in low bandwidth conditions [22:15:10] dr0ptp4kt: Reduction of HTML size meaning size of all assets or size of the HTML DOM itself? [22:15:40] TimStarling: yeah, deferred until when is a good question :) i think we'll have to examine a few approaches [22:15:46] MobileFrontend is already doing some of that navbox hiding/stripping. It's horrible. [22:16:07] Marybelle: the html itself, although of course all assets loaded have some level of impact on load time and bandwidth [22:16:09] TimStarling: there are different options; a common one is to prioritize above-fold images & only load those further down if the user scrolls there; another is to drastically change the quality of the images [22:16:20] High-level user interfaces shouldn't be impinging on editorial control. [22:16:24] so it sounds like this is not really a plan for approval yet, we are at a pretty early design stage [22:16:57] this is also pretty vague: TimStarling: client side or edge (CDN), yes [22:17:07] I mean, you eventually have to pick one to implement first [22:17:34] Marybelle: yeah, the editorial control vs optimizing for different form factor / connection stuff is complicated, to be sure [22:17:36] TimStarling: currently, I am running the same serviceworker either in the client or on the server [22:17:39] There are concrete steps that could be taken to improve api.php and the parser, it looks like. 
Like making Special page data more easily available or changing how we mark stub/red links. [22:17:40] i don't think "client side only" is feasible at this time. [22:18:34] yes, client side only is not realistic [22:18:38] I'd like to see existing end-points improved before creating new interfaces and layers, personally. [22:18:59] so in the non-serviceworker case, node would assemble all logged-in requests? [22:19:14] currently, the server side environment has some advantages like streaming support for composite responses [22:19:55] TimStarling: yes, for clients without serviceworker, as you say [22:20:02] Marybelle: I think the nicest solution for preserving editorial control and allowing better support for multiple device form factors will be through page components (T105845) and template styles (T483) [22:20:16] Template styles would be great. [22:20:35] Firefox plans to enable SW in December, which will push up support [22:20:37] I started auditing just display:none; uses recently on the English Wikipedia. It's a nasty mess. [22:21:02] Marybelle: that was the one you emailed about, right? [22:21:07] Yeah. [22:21:31] a year from now, we'll probably be somewhere in the 60-70% of clients with solid ServiceWorker support [22:21:44] Mobile clients or all clients? [22:21:57] it's fairly uniform [22:22:07] It seems like this task is targeting really basic devices, but also presuming ServiceWorker support among them? [22:22:10] Which seems weird. [22:22:26] Jon got the numbers for our mobile site specifically, and they are within a couple % of the web at large [22:22:59] so do you need to somehow detect SW support in the initial page view request? [22:23:15] gwicke: are you saying the serviceworker support is close to parity between mobile and desktop UAs? 
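[Editor's note: the deferred image loading discussed above is typically done by rewriting src attributes so the browser skips the fetch on first paint; a small script (or IntersectionObserver) later restores src for images near the viewport. A minimal illustrative sketch, not MobileFrontend's actual implementation.]

```javascript
// Sketch: defer image loading by renaming src to data-src, so no image
// bytes are fetched during the initial render. A follow-up script can
// restore src for above-the-fold images first, then the rest on scroll.
function deferImages(html) {
  return html.replace(/<img\b([^>]*?)\ssrc=/g, '<img$1 data-src=');
}
```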
[22:23:25] no, initial page view for a first-time visit will always be without serviceworker [22:23:48] the installation is asynchronous [22:24:06] dr0ptp4kt: that's my recollection from my conversation with Jon on this, yes [22:24:29] IIRC he got something around 45% [22:25:01] gwicke: k. there *is* an el schema that i think tries to log the level of support for the feature, although it would by definition include compression proxied browsers and the like. that said, we can use some napkin math around that. i haven't actually queried that table myself. [22:25:05] so the initial page view will still vary on login cookies? [22:25:13] anyhow, even for jquery capable devices, that's promising [22:25:33] TimStarling: yes, but it can be assembled from cached fragments [22:25:47] There's also sidebar cache, right? [22:25:50] ironically if we have better load time, we may necessarily shift those numbers (even if only slightly) the other way. anyhow, it seems the trend is at least 3 of the 5 major UAs will have the support [22:25:51] And messaging cache. [22:26:10] I'm still unclear what's really being proposed here. We already cache a lot of things for most users... [22:26:48] TimStarling: I did look some into how cheap we can make this assembly, and have some promising results for element matching & replacement [22:27:11] replacing all
<img>s on 1.4mb Obama can be done in around 2ms, for example [22:27:22] Marybelle: you're referring to origin side caching, varnish, or both? [22:27:24] not bad [22:28:12] Marybelle: what is proposed is to make more use of the varnish cache [22:28:41] late to the party, but: [22:28:43] so would we do some kind of bandwidth detection? a lot of mobile browsing (at least in the developed world) happens over wifi with megabits available [22:28:47] My understanding is that we've typically ignored that as the vast majority of requests are anonymous. [22:28:52] we need to do that (and plan to do that) anyway [22:28:54] and it's feasible [22:28:59] Marybelle: another way of looking at it, is to completely eliminate the repeated reload of non-article content on the page. [22:29:08] sidebar, ui, logo, all that stuff. [22:29:20] ori: indeed; also, the reverse is also true, with desktop browsers on slow connections [22:29:23] cscott: We could do that now with a bit of JavaScript, no? [22:29:34] (i can talk about it more, but it'll take us off-topic, so just making a note for now) [22:29:37] cscott: Like just use api.php to fetch the parsed page contents? [22:29:46] And leave the UI chrome alone. [22:29:47] Marybelle: yes, and that's more or less what gwicke is proposing, except with a bunch of other cool stuff. [22:30:02] I guess we disagree on the "cool stuff" part. [22:30:05] Marybelle: i think the fundamental question is *how* to best do this. [22:30:22] These discussions always seem to be "let's rewrite everything in JavaScript." [22:30:26] Marybelle: quite possible. i'm not 100% sure i agree with gwicke on the details either, but i definitely agree on the fundamental idea. [22:30:27] Marybelle: the goal is to make as many parts cacheable as possible by unbundling things that are independent of the user from those that are [22:30:47] *a goal, I should say [22:30:48] gwicke: My point has been that lots of pieces are already unbundled, right? [22:31:06] The sidebar doesn't vary. 
The message interface doesn't vary. The parsed page content mostly doesn't vary. [22:31:12] i'd say, "not as many as you'd think" [22:31:16] it helps to split this up into the different ua classes, speed, and connection metering. [22:31:21] afaik, sidebar is cached in memcached [22:31:24] our CSS is still entangles between article and sidebar/UI, for instance. [22:31:28] *entangled [22:31:29] Marybelle: currently, none of those are cached in Varnish once you are logged in [22:31:30] it's completely different layer vs. varnish [22:31:51] there are a bunch of user preferences which affect article appearance, like thumbnail size, stub size limits, language variant support, even user language. [22:32:03] gwicke: I guess a more direct question might be: why is caching in Varnish so important? Like how much does the cache location matter? [22:32:08] personally, i'd rather for UAs that lack jquery compatibility keep it very simple most of the time (even if there's server compositing via a node.js endpoint or if we figured out a way to do inside of php) [22:32:23] Marybelle: varnish bypasses mediawiki + php [22:32:24] the performance difference for logged-in vs. logged-out browsing used to be a lot bigger than it now is, largely thanks to work that the performance team has been doing [22:32:28] but it's still significant [22:32:30] this is mostly compression proxy browsers at this point for these types of UAs. [22:32:44] I've talked with gwicke about this in the past and I think it's a good direction to pursue. It is risky (in that there are not many precedents for this, so I don't think we can estimate the amount of work that this would entail with any accuracy, and there is the possibility of failure). I am not sold on this being a priority over some other things, but on the whole this has a cautious +1 from me. [22:32:48] memcached is application layer (e.g. php) [22:32:59] i think the cute thing with "service workers" is that the basic URL for the page doesn't break. 
all the javascript magic is done behind the covers when you navigate using the bog-standard URL mechanisms. [22:33:09] it's like a look-aside cache, client-side. [22:33:28] yeah, it's nice that we can carry on serving a sensible document for the benefit of search engine crawlers etc. [22:33:35] Marybelle: latency differs depending on geographical location [22:33:40] Sure. [22:33:45] gwicke: did you paste the magic url which lets you actually see which service workers are currently running in your browser? [22:33:56] gwicke: i was somewhat surprised to see i already had service workers for magic websites. [22:33:59] I think reducing the user preferences impact on parsed page contents would be great. [22:34:05] we certainly wouldn't be the first [22:34:27] gwicke, So first page load is always slow old-school but thereafter supported browsers use this new approach for every click. [22:34:29] when I last gathered my thoughts about this, here was my way of justifying this work: [22:34:43] "I think that the fact that so much of our architecture is oriented around full pages as the basic unit of content is going to increasingly limit the relevance of Wikipedia content, and that in the absence of a forward-looking idea on how to tackle that, what we are likely to continue seeing is tremendous redundancy in our network traffic and a lot of unnecessary CPU cycles spent on prying apart content that should have been [22:34:43] kept separate to begin with." [22:34:50] cscott: not sure if you meant this, but in chrome, you can see current serviceworkers at chrome://serviceworker-internals/ [22:35:09] gwicke: yeah, i think that's what i meant [22:35:24] every page you get from medium.com uses service workers. and chrome's own "new tab" page uses it, too. [22:35:41] i'm in a clean-ish browser session right now, so those are the only two that show up for me. [22:35:58] but i don't know that anyone has complained about medium using this. 
i think it's been completely transparent for them. [22:36:06] ori: +1 [22:36:12] ori: Splitting up page content (or the idea of a single page object) seems pretty distinct from what's being discussed here. [22:36:37] ori: agreed on it being risky, which is why we are moving cautiously [22:36:42] ori: this is a cute hack to continue to serve "full pages as the basic unit of content" at the URL level, and at the fallback level, while fundamentally changing this for users of recent-enough browsers. [22:36:57] cscott: yeah [22:37:01] cscott: I very much doubt Medium.com has anywhere near the level or diversity of traffic that Wikimedia wikis have. [22:37:15] Marybelle: there is also a site called google.com using it [22:37:23] i would say that the biggest danger is that we fragment our implementation, and end up with subtle differences (or bugs) between the various modes of serving content. [22:37:29] but i think gwicke has some clever ideas about that as well [22:37:38] Marybelle: splitting things up would be a required step on the way to this final product. This proposal is basically gwicke showing us where he'd like to end up and we need to work backwards a bit to figure out how to get there [22:37:41] I would expect gmail to be a heavy user fairly soon [22:37:54] "limit the relevance of Wikipedia content" -- seems oversold as usual [22:38:17] (basically always using this mechanism to serve content, but running the same page-fragment reassembly code server-side for old browsers. so both old and new browsers are using the exact same code paths, they are just done server side for old and client side for new.) [22:38:32] we are talking about making things a bit faster for the small (possibly even shrinking) proportion of users on slow network connections [22:38:44] gwicke: maybe not, gmail doesn't really use different urls. it's already a single-page website, basically. [22:39:03] TimStarling: where did you get that quote from? 
[22:39:04] TimStarling: limit the relevance inasmuch as there are many contexts in which wikipedia content could be somehow interpolated if we had better ability to address content on a more granular level than a full page [22:39:13] gwicke: from ori above [22:39:13] gwicke it seems a lot of work to make the subsequent page content match a logged-in user's prefs. How much work is it for anon, seems just get the HTML from RESTBase and stick it in the content area. [22:39:18] oh, nm- ori's comment [22:39:34] TimStarling: we are not going to overcome the speed of light issues any time soon so getting more content into edge caches will actually help a lot of the world population [22:39:38] re: google.com / medium.com -- Google is pushing serviceworkers very aggressively at the moment, because they think they help narrow the gap between native app capabilities [22:39:48] from my perspective, there's two parts to the work. one is the particular client-side implementation, which i could quibble with, but i think could also be easily replaced if something better comes along. [22:39:53] but it's not a done deal; other browser vendors are still cautious about it [22:40:02] i think it has a fair chance of getting adopted but it's not certain [22:40:14] the other part of the work is actually refactoring core to allow separately serving these bits of content, and disentangling user preferences from article content. i think that refactoring is very healthy in the long term. [22:40:18] Why not focus on parser cache fragmentation to start? [22:40:18] between native app and web apps, I meant [22:40:30] And eliminate/address most of the ways we currently fragment parser cache? [22:40:32] FF are pretty far now & will un-flag it in the next release [22:40:40] gwicke: is that definite now? 
[22:40:52] i'm hearing they're close as well [22:40:57] yes, according to one of their engineers & their public roadmap [22:41:01] i'm hearing servo is close too [22:41:05] hehe [22:41:08] ok, so nothing reliable [22:41:11] ha [22:41:12] they do hire for servo ;) [22:41:13] i mean, i am optimistic, don't get me wrong [22:41:22] https://phabricator.wikimedia.org/T114057#1683608 would be a pretty easy win for less parser cache fragmentation [22:41:22] bd808: you mean for satellite users? people on the moon [22:41:32] here I am in australia and it is totally fine already [22:41:33] again, from my perspective, we could do this in a "traditional" way with client-side javascript as well, if service workers don't get adopted. or something else. the fundamental refactoring is worthwhile regardless. [22:41:34] Or https://phabricator.wikimedia.org/T39902 maybe. [22:41:55] #info again, from my perspective, we could do this in a "traditional" way with client-side javascript as well, if service workers don't get adopted. or something else. the fundamental refactoring is worthwhile regardless. [22:41:56] TimStarling: I was thinking more pacific rim, but yeah moon and mars will be important "soon" ;) [22:42:04] there are a number of "client side wikipedia apps". i wrote one in the tutorial session at last wikimedia, for instance. ;) [22:42:04] (cscott: yes, +1) [22:42:24] Is there a list of ways in which we fragment parser cache currently? [22:42:40] i have a very modest suggestion about how to introduce service workers [22:42:45] https://phabricator.wikimedia.org/T30424 I guess. 
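[Editor's note: for readers unfamiliar with the parser cache fragmentation being discussed: every user preference folded into the cache key multiplies the number of stored renderings of the same wikitext. An illustrative JavaScript sketch of the idea; this is not MediaWiki's real ParserOptions::optionsHash() logic, and all names are assumptions.]

```javascript
// Sketch: build a parser cache key from page identity plus the user
// options that affect rendering. Each option in the key fragments the
// cache; removing one (e.g. via T114057-style changes) merges entries.
function parserCacheKey(pageId, touched, opts) {
  const fragmenting = ['thumbsize', 'stubthreshold', 'userlang'];
  const hash = fragmenting
    .filter((k) => opts[k] !== undefined)
    .map((k) => `${k}=${opts[k]}`)
    .join('!');
  return `pcache:idhash:${pageId}-${touched}!${hash}`;
}

// Two users with different thumbnail sizes get two cache entries for
// the same page; fewer fragmenting options means fewer re-parses.
```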
[22:43:20] my experience in Fiji was not awesome when logged in and most of the lag traced out to the interconnect from Fiji to our US datacenters (more lag than the cellular bridge to the fiber ring) [22:43:30] instead of showing the default "page cannot be loaded", let's use service workers to install a neat interface that shows you the wikipedia logo, tells you that you are offline, and gives you a few "did you know?" facts, as a fun easter egg [22:43:34] btw, if you'd like to try browsing wikipedia with a serviceworker & trust me, you can copy https://en.wikipedia.org/wiki/User:GWicke/vector.js to your own [22:44:03] ori: You're talking about in a Wikipedia app? [22:44:04] that won't impact any existing functionality, shouldn't take more than a day to implement, and would allow us to start accumulating experience with web workers [22:44:10] and then navigate to /w/iki/Foobar [22:44:26] it's a hack because the script is served from /w/ [22:44:27] anyway, whether it will completely change the world or make things a bit faster for some small proportion of mobile users, I think I am fine with it [22:44:53] ori: yeah, similar to what the guardian has done [22:45:01] oh did they? haven't seen that! [22:45:07] https://www.theguardian.com/info/developer-blog/2015/nov/04/building-an-offline-page-for-theguardiancom [22:45:15] I have no idea what you're talking about. Someone offline will get the Wikipedia logo and fun facts... how? [22:45:36] Actually, I have seen that, and that is where I got the idea. Not sure how I forgot that. Probably it suited my ego better to think it was an original idea. [22:45:45] Marybelle: serviceworkers enable some offline bits like long-term caching [22:45:49] i think the offline stuff is more around not roundtripping unnecessarily with the if-modified-since piece [22:45:57] Oh, the Guardian thing is for an app. [22:46:07] are there open questions? resourcing issues? what are the next steps? 
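[Editor's note: the offline easter-egg suggestion above boils down to a fetch handler that tries the network and falls back to a pre-cached "you are offline" page. A sketch with the ServiceWorker plumbing factored out (a real worker would wrap this in self.addEventListener('fetch', ...)); the injected fetch/cache arguments and names are illustrative assumptions.]

```javascript
// Sketch: offline fallback routing, written as a pure-ish function so
// the logic is testable outside a browser. `netFetch` stands in for
// the real fetch(); `cache` for a pre-populated Cache of offline assets.
async function respond(request, netFetch, cache) {
  try {
    // Normal case: go to the network.
    return await netFetch(request);
  } catch (e) {
    // Network failed: serve the pre-cached offline page if we have
    // one, otherwise propagate the failure.
    const offline = cache.get('/offline.html');
    if (offline !== undefined) return offline;
    throw e;
  }
}
```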
[22:46:10] Yeah, if you install an app, you can include a bunch of Easter eggs and the logo. Who cares? [22:46:17] there is a dearth of #info / #action items [22:46:43] yeah [22:47:04] Marybelle: it's a low-risk, low-effort way to introduce a new web technology to the stack. [22:47:06] #info Service workers and/or nodejs should not be required to use MediaWiki in general [22:47:11] this is just about MF, right? the desktop site won't be initially migrated? [22:47:18] bd808: ok grampa, glad you got that in :P [22:47:38] we'd like to discuss the r&d outcomes from reading at the summit [22:47:42] bd808: sure. the real question is whether the service worker code might be required to run server side, at some point in the future. [22:47:44] ori: Good, we were running low on new web technologies. [22:47:45] TimStarling: don't say the 'm' word if you're hoping to close the conversation [22:47:50] TimStarling: yes, and talk of 'migration' is a bit early anyway [22:48:04] bd808: i actually think that would be a good plan, long-enough in the future, since I don't want to fragment our code base into "old" and "new" browser paths. [22:48:07] cscott your excellent demo doesn't have to update a skin's tabs, sidebar Special links, etc. for the new article. That's doable per skin, I'm not sure if it's what gwicke proposes [22:48:22] there are several bits in there that can be phased in independently, and without replacing everything at once [22:48:27] cscott: it should not be required in general. It can be required for Wikipedia but not for MediaWiki in my opinion. [22:48:31] spage1: well, it was only 40 lines of code. you can totally do more if you like. ;) [22:48:32] the reading effort is very much focused on this [22:48:52] #info there are several bits in [this proposal] that can be phased in independently, and without replacing everything at once [22:48:55] bd808: i think that's a conversation worth having in the future. 
probably not before gwicke does PoC of this and we get some deployment experience, of course. [22:49:15] bd808: but we should think at some point down the line about how we return to having a single code path for serving pages [22:49:17] gwicke: what are the next steps? [22:49:44] reading & myself will continue iterating on this until the dev summit [22:49:47] cscott: A single URL structure would also be nice. [22:49:54] wikipedia is data, not code [22:50:03] the code is incidental and will die one way or another [22:50:06] spage1: i think updating the tabs, special links, etc is all part of what i say when i say "refactor core". we should get to a point where we have a solid idea of "what else" needs to happen to the skin when we move to a new URL. [22:50:07] Kill the "m." nonsense. Kill MobileFrontend. [22:50:18] spage1: right now the code is all muddled together. [22:50:30] & summarize our experience for a discussion there [22:50:42] time for #action items imo [22:50:56] Marybelle: yeah, most of the guys who have been doing mobilefrontend want to, um, gently replace it with something better [22:50:59] the exact proposals beyond the dev summit depend a bit on our results & the general priorities [22:51:05] dr0ptp4kt: so i keep hearing ;) [22:51:12] ori: sure, but code rots. i want all browsers to continue to have non-broken articles. the best way to do that is to ensure that all folks continue to run the same code, as much as possible. [22:51:30] we already have issues w/ (for example, because it's my bugbear) LanguageConverter, because it's not run by enwiki or dewiki. [22:51:53] #action prototyping work in reading and services will continue until the dev summit [22:51:53] by the way, MF was way better than the mobile site which preceded it [22:52:05] we need space to fork code for experiments, but also longer-term plans to merge the forks for maintenance. that's my big picture philosophy at least. 
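[Editor's note: cscott's point about running the exact same page-fragment reassembly code server-side for old browsers and client-side for new ones can be pictured as one pure compositing function shared by both environments. A hedged sketch; the {{slot}} template convention and all names are assumptions, not the actual design.]

```javascript
// Sketch: assemble a full page from independently cached fragments.
// The same function could run in a ServiceWorker (client side) or in
// a Node compositing service (server side, for non-SW browsers),
// keeping a single code path for both.
function assemblePage(chromeTemplate, fragments) {
  return chromeTemplate.replace(/\{\{(\w+)\}\}/g, (m, slot) =>
    fragments[slot] !== undefined ? fragments[slot] : m
  );
}

// Chrome (sidebar, header, logo) is cacheable for everyone; only the
// small user-specific fragments vary per request.
const html = assemblePage(
  '<nav>{{sidebar}}</nav><main>{{content}}</main><div>{{userbar}}</div>',
  { sidebar: 'links', content: 'article html', userbar: 'Hi, Alice' }
);
```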
[22:52:12] which was implemented in ruby and had its own wikitext parser [22:52:27] yes, and the DDR was better than Nazi Germany [22:52:31] * ori is JOKING [22:52:36] bad_ori [22:52:43] ;) [22:53:03] * robla is asking gwicke IRL about possibility of publishing some of the early benchmarks [22:53:11] dr0ptp4kt: Minus all the people who contributed to that pile of technical debt, I guess. [22:53:31] Though maybe some of them have turned as well. [22:54:05] the problem with MF today is that we don't have a way to allow editorial control over rendering of things that don't by default smash into a small viewport. That's a core MediaWiki problem. [22:54:21] That's one problem, sure. [22:54:29] a lot of people really like MF, and it does a lot of things right [22:54:35] my comment above was really just a stupid joke [22:54:38] The extension has also been used as a trojan horse to sneak in a lot of unrelated features. [22:54:42] let's not turn this into an MF bitch session [22:54:46] #info One question we are currently looking into is first page load, with main contenders being sectioned loading vs. HTML streaming & deferred image loading. [22:54:58] IIRC there's a technology which is meant to be able to format content for multiple output media [22:55:06] right, we're trying to componentize stuff generally, and actually trying to ship on desktop and mobile. we're learning there. [22:55:10] oh yeah, CSS, that's what it's called [22:55:17] Oh, I was going to guess PHP. [22:55:18] i've heard of that css thing [22:55:27] maybe we should get into that [22:55:42] bd808: yes, https://phabricator.wikimedia.org/T90914 is part of that discussion. [22:55:49] That points back to https://phabricator.wikimedia.org/T483 [22:55:51] TimStarling: you mean, with a skin that's content-first & heavily inspired by csszengarden? [22:55:59] Or what cscott linked. [22:56:10] bd808: we need to get back to semantic markup of media in order to allow it to be restyled for different viewports. 
[22:56:10] gwicke: something like that [22:56:27] cscott: agreed. I like this proposal because it seemed intended to build on many of these fundamental components [22:56:31] cscott: Improving/enhancing file inclusion syntax is another good idea, sure. [22:56:34] gwicke: he's right that CSS provides some capabilities for selecting and arranging content based on the medium [22:56:34] there are probably non-media aspects as well, of course. [22:56:42] But none of these ideas require ServiceWorkers. [22:56:42] but media is a big part [22:56:42] but yeah, there is more to it than that [22:56:44] ori: you tell me ;) [22:56:49] Or additional technologies in the "stack." [22:56:58] you know, I wrote a skin based on those ideas a while ago [22:57:04] i don't think from a practical rollout perspective we can have one css approach to rule them all. obviously we should do rwd where appropriate [22:57:09] i shouldn't say obviously [22:57:13] I think we're done with the RFC discussion so I'm going to end the notes [22:57:24] cool [22:57:27] #endmeeting [22:57:27] Meeting ended Wed Nov 18 22:57:27 2015 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) [22:57:27] Minutes: https://tools.wmflabs.org/meetbot/wikimedia-office/2015/wikimedia-office.2015-11-18-22.01.html [22:57:27] Minutes (text): https://tools.wmflabs.org/meetbot/wikimedia-office/2015/wikimedia-office.2015-11-18-22.01.txt [22:57:27] Minutes (wiki): https://tools.wmflabs.org/meetbot/wikimedia-office/2015/wikimedia-office.2015-11-18-22.01.wiki [22:57:28] Log: https://tools.wmflabs.org/meetbot/wikimedia-office/2015/wikimedia-office.2015-11-18-22.01.log.html [22:57:33] the "shadow namespaces" proposal is also relevant, because if we can centralize some of the templates which are used to style content, then we have a leg up on *restyling* in a cross-wiki way as well. [22:57:38] but that's mostly offtopic here. [22:57:39] thanks for the discussion, everybody! 
[22:57:52] gwicke: good luck with the implementation! [22:57:55] thanks [22:57:57] next week we have a discussion about PHP versions and updating our coding standards [22:58:04] PHP 7!@#!@ [22:58:11] * cscott is mostly joking [22:58:17] shoot...missed my oppty to get this in the notes. Next week: https://phabricator.wikimedia.org/E91 [22:58:32] cscott: I might try to nag you into helping ;) [22:58:41] get in line. ;) [22:58:56] er....next week: https://phabricator.wikimedia.org/E92 [22:58:57] but you can bribe me with offline support. [22:59:05] ( https://phabricator.wikimedia.org/T118932 ) [22:59:13] cscott: I pushed for discussing
<figure>s soon [23:00:07] possibly two weeks from now [23:01:03] link to meeting two weeks from now: https://phabricator.wikimedia.org/E66/11 [23:01:28] U.S. holiday next week, of course. [23:01:50] cscott: gwicke : please, by all means put the <figure>
thing in the comments of E66/11 [23:02:24] gwicke, robla: T118517 and T118520 are somewhat related. [23:03:05] * robla moves his IRC attention to #wikimedia-tech [23:03:43] * cscott has to run pick up his kid, but he commented on https://phabricator.wikimedia.org/E66/11 [23:05:19] cool, thanks cscott! note that the act of commenting automatically created a new event Phab ticket: https://phabricator.wikimedia.org/E93 [23:05:26] ...which is kinda cool [23:05:42] E66/11 redirects there now