[11:06:38] 3MediaWiki-Core-Team, Multimedia, MediaWiki-extensions-GWToolset: Upload stopps regularly - https://phabricator.wikimedia.org/T76450#954579 (10Nicolas_Rueck_WMDE) 5Open>3Resolved a:3Nicolas_Rueck_WMDE After about 10 days the file transfer just went on. Afterwards i had no more problems with further uploads... [11:34:43] 3Librarization, MediaWiki-Core-Team, Continuous-Integration: Publish MediaWiki codesniffer config on Packagist - https://phabricator.wikimedia.org/T85631#954621 (10hashar) 5Open>3Resolved The repository now has a composer.json with the package name `mediawiki/mediawiki-codesniffer`. I have added it to packa... [13:45:42] 3MediaWiki-Core-Team: HttpError should not be logged as an unhandled exception. - https://phabricator.wikimedia.org/T85795#954941 (10daniel) 3NEW [14:04:07] 3MediaWiki-Core-Team: HttpError should not be logged as an unhandled exception. - https://phabricator.wikimedia.org/T85795#954970 (10hoo) IMO we should just have an UnloggedHttpError subclass without any distinction on status codes or anything. [15:57:40] <_joe_> manybubbles: I won't be able to be at the wikidata query service meeting (I am not feeling well at all), do you need my assistance with something about it? [15:58:29] _joe_: we'll be fine - the goal today is to settle on a backend and officially stop foundation funded work on the other option. [15:58:50] I _think_ you are pretty squarely in the titan/cassandra camp [15:59:11] <_joe_> manybubbles: well, it seemed the least insane option [15:59:35] :) [15:59:41] <_joe_> but I'm talking about architecture, not implementation [16:00:00] <_joe_> I still have to get a hold on it [16:03:42] 3MediaWiki-Core-Team: HttpError should not be logged as an unhandled exception. - https://phabricator.wikimedia.org/T85795#955240 (10JanZerebecki) Often 404 errors where this is used are of no interest to ops or devs, so I prefer the different log stream, at least for 404. [16:03:56] 3MediaWiki-Core-Team, MediaWiki-General-or-Unknown: HttpError should not be logged as an unhandled exception. - https://phabricator.wikimedia.org/T85795#955242 (10Aklapper) [16:04:24] 3MediaWiki-Core-Team, MediaWiki-General-or-Unknown: HttpError should not be logged as an unhandled exception. - https://phabricator.wikimedia.org/T85795#955244 (10JanZerebecki) This came up during https://gerrit.wikimedia.org/r/#/c/182750/ T76458. [16:12:35] 3Librarization, MediaWiki-Core-Team, Wiki-Release-Team: Add release notes for change "Expose installed external libraries on Special:Version" - https://phabricator.wikimedia.org/T85726#955278 (10bd808) [16:24:26] _joe_: this is really just deciding which of the two implementations we have now to focus on. We'll certainly have time to change pretty much everything else later. Hell, we've been known to admit we were wrong and change things later on if we need to. [16:25:02] <_joe_> yeah I think that makes sense :) [16:25:28] <_joe_> manybubbles: I won't be around in mwcore this evening either, FYI [16:26:02] oh yeah - I'll let them know why. sleep and get better [16:40:18] <^d> manybubbles: tracking down people using lsearchd is time consuming :\ [16:40:26] pita [16:40:27] <^d> i'm tempted to jfdi and say "i warned you" [16:40:35] maybe in a month? [16:41:04] <^d> drag out the decom another month? :( [18:01:49] Guest61026 aka csteipp: Ping [18:02:13] Hm.. there's a phpunit failure in a wmf branch (wmf/1.25wmf13). Looks like the mess around Tidy hasn't been resolved. [18:02:29] Who knows more about that? ori? 
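A minimal sketch of the UnloggedHttpError idea hoo raises at 14:04 above. This is not merged core code; the handler function and log channel below are hypothetical, only the HttpError base class is real:

```php
<?php
// Hypothetical marker subclass: an HttpError that the exception handler
// should skip when writing its "unhandled exception" log (see T85795).
class UnloggedHttpError extends HttpError {
}

// Illustrative logging decision; the real hook point in core is not shown
// in this log and may differ.
function maybeLogUnhandledException( Exception $e ) {
	if ( $e instanceof UnloggedHttpError ) {
		return; // deliberately dropped from the exception log
	}
	wfDebugLog( 'exception', get_class( $e ) . ': ' . $e->getMessage() );
}
```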
[18:02:50] csteipp: ping [18:02:58] hoo: What's up? [18:03:20] Krinkle: I thought I backported the tidy unit test fix to wmf13... [18:04:12] csteipp: Would it be a huge problem if we had https only links to Wikimedia sites within the Wikidata UI? [18:04:42] We have problems now that ruwikinews is all https, so we went all https... but now our links are, you guessed it, https [18:05:04] hoo: That would be an issue for China/Iran, in that they won't be able to follow the links [18:06:14] Is that a huge portion? If not, are the few who are around aware of that and can correct the URLs, if they hit them? [18:06:42] AJAX stuff will still be protocol relative [18:06:43] I have no idea. Is this wikidata users who are then going to a wikipedia page? [18:06:49] Yeah [18:06:59] not links within the Wikipedias, they would stay relative [18:07:01] I've checked that [18:07:34] Cool. Would probably be worth checking with... I forget the name of the guy who does all the bot work from Iran [18:07:41] Amir? [18:07:46] Yeah, I think so [18:08:40] Just to see what the wikidata usage from Iran is like. Only some isp's in China are now blocked. [18:08:48] So Iran is the only place that has 0 https [18:09:39] Mh [18:09:53] Maybe we should do that now and prepare a fix this week that makes stuff relative again [18:10:04] so that for now at least ruwikinews can be linked again [18:10:21] not sure what the bigger trade-off is: Not being able to interact with ruwikinews or the links [18:10:41] Does ruwikinews not redirect http->https? [18:10:51] I would have thought we would do that [18:10:52] It does [18:11:01] But MediaWiki doesn't follow that [18:11:10] Oh, is this server side? [18:11:13] we ourselves do api requests in the cluster [18:11:14] yes [18:12:45] So why aren't you guys just forcing https from the api call? [18:13:15] Because there's no sane way to do that [18:13:30] without hacking that assumption into MW core [18:13:42] which right now assumes proto-relative => http [18:14:40] that smells like a bug [18:14:43] We could make it follow redirects, but that would make it slower and could be unsafe, if we ever have 3rd party sites in there [18:14:54] So the other option is to be intelligent about the redirect [18:15:07] bd808: Not really... the real fix is to make the sites table have the right protocols for the sites [18:15:32] Yeah, slower, but you can ensure you're going to the same url, just different proto [18:15:58] hoo: Is the api coming from a cron or a user edit? [18:16:18] s/api/"api call" [18:17:02] csteipp: Mostly it's hit while adding sitelinks (we need to verify/normalize them) [18:24:05] bd808: so for some reason https://gerrit.wikimedia.org/r/#/c/182379/ didn't get pushed to github... https://github.com/wikimedia/mediawiki-tools-codesniffer/commits/master ? [18:24:43] ori: have you seen any changes in number of edits (or any other non-performance metrics) since the switch to hhvm? [18:24:51] legoktm: weird [18:25:07] and I may have asked this before, but is there any way to have http://grafana.wikimedia.org/#/dashboard/db/Edit%20performance start the y axis at 0? [18:25:25] bd808: I think Jeroen made his commits on github and not gerrit :/ [18:25:58] legoktm: Ugh. [18:26:11] But you people actually started having stuff only in github [18:26:22] So, I can see that there is confusion [18:26:22] so we need to pull them in via gerrit and then force push I suppose [18:28:35] "you people" hoo? 
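To make the proto-relative problem above concrete, here is a minimal sketch using core's wfExpandUrl(); the URL and the call site are illustrative, not the actual Wikibase code path:

```php
<?php
// Sitelink targets from the sites table are stored protocol-relative:
$url = '//ru.wikinews.org/w/api.php';

// In a web request PROTO_CURRENT inherits the client's scheme. A server-side
// API call (job or maintenance context) has no client scheme, and per the
// discussion above MediaWiki currently assumes proto-relative => http there,
// which ruwikinews now only answers with a redirect that is not followed.
$current = wfExpandUrl( $url, PROTO_HTTP );  // "http://ru.wikinews.org/w/api.php"
$wanted  = wfExpandUrl( $url, PROTO_HTTPS ); // "https://ru.wikinews.org/w/api.php"

// The fix sketched by hoo is to record the correct protocol per site in the
// sites table, so no guess has to be made at expansion time.
```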
:) [18:28:58] bd808: You and legoktm I think [18:29:02] [10:28:31] (PS1) Legoktm: Fix formatting [tools/codesniffer] - https://gerrit.wikimedia.org/r/182855 [18:29:02] [10:28:33] (PS1) Legoktm: Update README.md [tools/codesniffer] - https://gerrit.wikimedia.org/r/182856 [18:29:02] some gerrit stuff [18:29:19] * composer [18:29:21] doh [18:29:27] well, [18:30:07] at the top of the repo, it says "Github mirror of MediaWiki tool codesniffer - our actual code is hosted with Gerrit (please see https://www.mediawiki.org/wiki/Developer_access for contributing)" [18:30:36] legoktm: merged [18:31:02] ty [18:31:59] https://github.com/wikimedia/mediawiki-tools-codesniffer/commits/master [18:32:04] yay for force pushing :D [18:32:23] I bet he made those changes using the edit file button on github. I wonder if that can be turned off somehow [18:33:22] I don't see anything in the repo settings. [18:34:40] just force push and override them [18:34:40] They'll soon learn when they disappear :P [18:35:24] gerrit has broken my brain. I keep force pushing pull request updates to github for other projects [18:37:23] ori: did you touch/do something to /var/lock/scap on tin? [18:37:54] reedy@tin:~$ ls -al /var/lock/scap [18:37:54] -rw-rw-r-- 1 ori wikidev 0 Jan 5 17:43 /var/lock/scap [18:38:36] I think it has had that ownership for a long time. [18:39:21] the timestamp changes on very scap/sync [18:39:26] *every [18:40:14] bd808: do you know what https://www.mediawiki.org/wiki/Composer/Future_work is about? [18:41:03] nope. looks like some older ideas from Jeroen [18:42:30] I'm not too excited about running our own satis service [18:51:13] bd808: also, I have an draft email to send to the mediawiki-distributors list, could you review it before I send it? [18:54:34] <^d> csteipp: Actually it's already on-wiki. https://www.mediawiki.org/wiki/Wikimedia_Engineering/2014-15_Goals/Q3#Proposed_top_priorities [18:56:17] legoktm: Sure, just give me your gmail password and I'll look at the draft. ;) [18:56:56] <^d> Easiest way to share :) [18:58:10] bd808: heh. http://fpaste.org/165923/48424214/ [18:58:15] boring [18:58:43] thunderbird encrypts my gmail drafts so giving you my password wouldn't help at all :P [19:01:39] legoktm: 'MediaWiki expects that there will be an autoloader at "vendor/autoload.php" which will load these libraries.' -- I suppose there really isn't any way other than that at the moment is there? That's the only extra autoloader script we look for. [19:02:15] I think that's fine actually, but wanted to make sure there wasn't another way that might be nicer for some packager [19:02:59] there are other ways you could technically do it, but they would all be hacky [19:03:19] More hacky than packaging MW? o_0 [19:03:21] :P [19:04:20] I played around with packaged php libraries on fedora, and they don't come with an autoloader. you have to require_once "Monolog/Foo/Bar.php" manually >.< [19:04:37] * bd808 has flashbacks to rpm managed perl libs [19:10:55] bd808: Hmn.. any particular reason for the Yoda conditional at https://gerrit.wikimedia.org/r/#/c/180849/1/tests/phpunit/includes/api/format/ApiFormatWddxTest.php ? [19:11:22] bd808: so email looks fine? I expect they'll let us know if the autoload location is an issue [19:11:36] tests like that Bryan writes [19:11:36] <^d> manybubbles: Is T75673 fixed? I think so because T75532 is fixed. 
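For packagers reading the draft email discussion above, the "autoloader at vendor/autoload.php" expectation boils down to roughly the following check during MediaWiki startup (a sketch; the exact file and wording in core may differ):

```php
<?php
// $IP is MediaWiki's install path. If a Composer-generated (or
// distro-provided) autoloader exists, core loads it and expects it to make
// the external libraries from composer.json available.
if ( is_readable( "$IP/vendor/autoload.php" ) ) {
	require_once "$IP/vendor/autoload.php";
}
// A distribution that ships the libraries elsewhere only needs to drop a
// small vendor/autoload.php shim at that path which registers an equivalent
// autoloader (e.g. spl_autoload_register() over a system-wide class map).
```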
[19:11:36] legoktm: yeah email looks good [19:11:48] * manybubbles reading them [19:12:14] it's fixed [19:12:56] <^d> {{done}} [19:12:56] <^d> thx [19:12:56] <^d> Thought so [19:13:11] sent! [19:15:15] 3wikidata-query-service, Wikidata, MediaWiki-Core-Team: Evaluate Titan as graph storage/query engine for Wikidata Query service - https://phabricator.wikimedia.org/T76373#955849 (10Smalyshev) [19:15:15] 3wikidata-query-service, Wikidata, MediaWiki-Core-Team: Restoring Titan functionality after Cassandra disconnect - https://phabricator.wikimedia.org/T85513#955847 (10Smalyshev) 5Open>3Resolved Looks like setting backend to "cassandra" solves it: https://github.com/thinkaurelius/titan/issues/908 [19:15:49] 3MediaWiki-Core-Team, Librarization: Update [[mw:Composer]] to include information about usage with libraries - https://phabricator.wikimedia.org/T85172#955851 (10Legoktm) https://www.mediawiki.org/wiki/Composer is now much smaller, and I copied the old content to https://www.mediawiki.org/wiki/Composer/For_exte... [19:29:47] Reedy: i just removed the lock file, so it should be recreated with the proper ownership next time someone runs scap. [19:29:56] Krinkle: what was your question re: tidy? [19:30:06] Krinkle: also: "the mess around Tidy" ... I see what you did there :P [19:30:21] swtaarrs: no, we don't have those numbers yet. [19:30:41] ori: There was some clean up recently that broke unit tests. I think you and bd808|LUNCH were at it and then holidays happened. [19:31:01] Krenair: unit tests on wmf13 are consistently failing. [19:31:10] https://gerrit.wikimedia.org/r/#/c/180987/ , you mean? [19:31:28] ori: e.g. https://gerrit.wikimedia.org/r/#/c/182843/ [19:32:09] ori: I mean https://gerrit.wikimedia.org/r/#/c/176862/ and other related commits with a unit test [19:32:15] or whatever is causing the failure. I lost track. [19:32:22] hi [19:32:47] Krenair: I'm pinging myself accidentally and in doing so pinged you. Sorry :) [19:32:51] :D [19:34:05] <^d> AaronS: Heh. https://gerrit.wikimedia.org/r/#/c/182865/ [19:34:16] is this what happened on https://gerrit.wikimedia.org/r/#/c/182811/ ? [19:37:53] * ori looks [19:39:30] Hm.. looking in the ExtensionProcessor stuff commits. Interesting. [19:39:52] This is the first time I've seen code use newAccelerator() though. Is there a rule of thumb of when to use APC and when to use Redis/Memcached? [19:40:57] APC is local to each server. [19:40:58] o/ [19:41:40] If you need data structures richer than byte strings (set / list / hash), then Redis is appropriate [19:42:22] IMO we should dump memcached in favor of redis for the sake of simplifying our stack. Redis does everything memcached does. (Though it is nominally slower, IIRC.) [19:42:43] Krinkle: TimStarling and I just talked about using APC, I never tested it with redis/memcached, but my guess is that unless you're loading an extremely large number of extensions it'll be faster to re-scan than use memcached. [19:43:24] 3ContentTranslation-Deployments, MediaWiki-extensions-ContentTranslation, MediaWiki-Core-Team: Content Translation Beta Feature security review - https://phabricator.wikimedia.org/T85686#952269 (10csteipp) [19:44:51] legoktm: OK. I get that. But then why is APC faster? [19:46:02] Or is APC in the same process and not shared? [19:46:03] * legoktm looks for IRC logs [19:48:45] I know APC's opcode caching. But not sure how its arbitrary blob store works. 
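A rough sketch of the APC-backed caching Krinkle is asking about, in the style of the ExtensionProcessor work mentioned above; the cache key, TTL and the surrounding variables are illustrative, while ObjectCache::newAccelerator() and the BagOStuff get/set calls are real core API:

```php
<?php
// APC (what newAccelerator() returns when available) is shared memory local
// to one server: no network round trip, but every web server warms its own
// copy. Redis/memcached are cluster-wide but cost a network hop per lookup,
// which is why an APC hit comes back faster than a redis hit for the same data.
$cache = ObjectCache::newAccelerator( array() );

$key = wfMemcKey( 'registration', md5( json_encode( $extensionJsonFiles ) ) );
$info = $cache->get( $key );
if ( $info === false ) {
	// Cache miss: re-scan the extension.json files and store the result.
	$info = $processor->getExtractedInfo(); // illustrative call
	$cache->set( $key, $info, 60 * 60 * 24 );
}
```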
I guess it's not like memcached and just for the localhost and presumably faster access as its not a separate process with communication protocol. [19:51:40] So, I'm not sure how the apc user store works, just that Tim told me it was extremely fast under HHVM. [19:53:20] <^d> Krinkle: it's just shared memory between the apache processes. [20:05:01] AaronS: can you add a link explaining your rating of orientdb under "node changes without cluster restart"? I want to read more:) [20:06:32] I never rated that [20:13:26] legoktm: hey :) are you part of the mediawiki/core team nowadays ? [20:13:32] hashar: yes :) [20:13:55] I should attend your meetings from time to time but the hour is killing me now that I have a 2nd kid :/ [20:14:07] legoktm: kudos on the cdb / composer stuff. That is exciting! [20:14:08] Krinkle: on my vm, an APC cache hit is 2.10% 10.856, and a redis cache hit is 4.30% 26.394 for the same set of fake extensions [20:14:52] :D [20:14:53] legoktm: nice [20:15:37] bd808|LUNCH: Reedy: Feel like moving https://gerrit.wikimedia.org/r/177095 / https://phabricator.wikimedia.org/T45086 along? [20:19:15] Krinkle: that's going to be a very spammy log isn't it? [20:19:42] bd808: nothing we aren't already logging in apache2/error.log and hhvm's equiv. [20:19:47] just with stracktraces now [20:20:31] yeah which is likely to e the spammy part [20:20:39] *to be [20:21:34] "up 680M in under a minute" [20:21:51] that's a lot of log traffic [20:22:28] Krinkle: by the way, just a brainstormy idea: the flame graphs on are generated by , which takes as input a collection of stack traces, and generates graphs based on how often a call path was on-CPU. If you fed it exception / fatal logs instead of trace profiles generated by Xenon, the resultant graph would show which code-paths are most error prone. [20:22:40] it'd be very interesting to do, IMO. [20:32:12] bd808: my commits aren't enabling anything though. The log is already in core, and these commits iterate on them. They've been disabled at the moment. [20:32:49] They have the potential to generate a lot of data if we regress again in our errors. But the same applies to errors and exceptions with stacktraces. [20:32:49] A good reason to cut down some of those errors :) [20:36:29] <^d> Would redis be a reasonable place to store search stats data in a queue before flushing it to storage? [20:37:47] <^d> Basically, I want something like BagOStuff's incr() I can do in memcached, which I'll occasionally poll and write to a permanent place. [20:38:08] ^d: Are these the kind of stats you could just outsource to statsd + graphite? [20:38:19] <^d> Ah, RedisBagOStuff has an incr() too. [20:39:57] <^d> bd808: I guess it's possible...might make re-exposing the data back to the wikis harder though. [20:41:09] ah. Yeah I don't know what our answer for that is here. At Kount I was trying to figure out how to setup a separate graphite cluster for customer facing stats [20:41:37] but I never got it done [20:42:05] <^d> Yeah I'm looking at tracking stuff like "most often searched pages" etc. [20:42:15] <^d> All of which I'd like to expose via a dashboard or somesuch back to the wikis. [20:42:51] <^d> So I figured I'd stash the data in ES, but I don't want to write to ES on every search and make things sad :p [20:43:02] you want search analytics, that should really be done via some type of log analysis [20:43:05] <^d> Hence using redis or memcached or something as a queue to hold it in-between. 
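One way to sketch the buffer-then-flush idea ^d describes above, using the incr() that BagOStuff (and RedisBagOStuff) already exposes; the key names, $trackedKeys and the flush target are hypothetical:

```php
<?php
// Hot path: bump a counter per search result, never write to Elasticsearch here.
// (A lost increment in the incr/add race below is tolerable for stats.)
$cache = ObjectCache::getInstance( CACHE_ANYTHING );
$key = wfMemcKey( 'cirrus-stats', 'top-result', $pageId );
if ( $cache->incr( $key ) === false ) {
	// incr() fails when the key does not exist yet, so seed it.
	$cache->add( $key, 1 );
}

// Periodic job: fold the buffered counters into permanent storage and reset.
foreach ( $trackedKeys as $key ) {
	$count = $cache->get( $key );
	if ( $count ) {
		$statsStore->record( $key, $count ); // hypothetical ES/dashboard write
		$cache->delete( $key );
	}
}
```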
[20:44:05] by which I mostly mean that the search app shouldn't care about such things but it should be possible to derive the data from the data the app does care about [20:44:23] <^d> Logs are a little less useful here I think. I'm less interested in what people are typing. [20:44:25] <^d> I'm more interested in what results Cirrus is spitting back to them. [20:45:13] <^d> ie: I don't care if people have 800 spellings for "barack obama" but I do care that people are searching for and ending up at [[Barack Obama]] [20:45:20] *nod* [20:45:53] is that really click tracking outbound from the search results page? [20:46:23] <^d> I wasn't going to track the clicks. I was going to do a reverse scoring based on the top-10 results. [20:46:38] <^d> But I might just be making stats up now :p [20:47:02] 3MediaWiki-API, MediaWiki-Core-Team: API initializes user preferences on every request - https://phabricator.wikimedia.org/T85635#956084 (10Anomie) a:3Anomie [20:47:13] "most found articles" [20:47:16] <^d> I don't know what I'm after really. Just that it would make pretty charts :p [20:49:05] ^d: there's a wfIncrStats() [20:49:26] <^d> Which is just a wrapper for $wgMemc->incr(), right? [20:49:42] <^d> StatCounter::singleton()->incr( $key, $count ); [20:49:49] that's the statsd pipeline I thought [20:51:42] manybubbles: https://github.com/orientechnologies/orientdb/issues/2305 [20:52:08] hot changes will be in the OS stuff, the enterprise stuff is for the GUI [20:52:09] not sure when that will actually land [20:52:19] 3MediaWiki-extensions-Scribunto, MediaWiki-Internationalization, MediaWiki-Core-Team: Scribunto unit tests failing due to MediaWiki core changes - https://phabricator.wikimedia.org/T85854#956127 (10Anomie) 3NEW a:3Anomie [20:53:13] 3MediaWiki-extensions-Scribunto, MediaWiki-Internationalization, MediaWiki-Core-Team: Scribunto unit tests failing due to MediaWiki core changes - https://phabricator.wikimedia.org/T85854#956127 (10Anomie) 5Open>3stalled Stalled on the question of whether the change to Language::listToText() should be revert... [20:53:17] AaronSchulz: does distributed configuration mean adding new nodes and remove dead ones? [20:53:35] 3MediaWiki-extensions-Scribunto, MediaWiki-Internationalization, MediaWiki-Core-Team: Scribunto unit tests failing due to MediaWiki core changes - https://phabricator.wikimedia.org/T85854#956147 (10Anomie) [21:01:20] manybubbles: I'll ping luca to clarify on the docs [21:01:45] <^d> $max_int_length = strlen((string) PHP_INT_MAX) - 1; [21:01:45] <^d> heh [21:07:45] manybubbles: did any of that pool counter stuff ever get deployed? [21:08:47] AaronSchulz: not yet! I'm going to ask some opsent tomorrow in our search meeting [21:09:25] lets see if I filed a phab task [21:10:05] looks like you did on december 20th [21:10:22] 3operations, MediaWiki-Core-Team: Deploy multi-lock PoolCounter change - https://phabricator.wikimedia.org/T85071#956177 (10Manybubbles) [21:10:48] 3operations, MediaWiki-Core-Team: Deploy multi-lock PoolCounter change - https://phabricator.wikimedia.org/T85071#938038 (10Manybubbles) Adding operations as the deb needs to be rebuild for poolcounter and it needs to be redeployed. [21:32:29] legoktm: The php LOC graph on this page -- https://www.openhub.net/p/mediawiki/analyses/latest/languages_summary -- what do you think that drop from March to April 2014 was? skins? [21:33:17] bd808: i18n? [21:33:32] oh. 
I bet that was it [21:33:34] I don't remember the date [21:33:56] and they don't count json files [21:35:32] <^d> Gotta be [21:36:24] skins didn't come out until Wikimania and they wouldn't have been that big [21:36:37] <^d> 2014-04-01: "Implementation JSON based localisation format for MediaWiki nearly completed" [21:38:18] yeah, i18n. [22:04:55] 3MediaWiki-Core-Team, MediaWiki-extensions-TitleBlacklist: Title blacklist intermittently failing, allowing users to edit things they shouldn't be able to - https://phabricator.wikimedia.org/T85428#956369 (10csteipp) a:3csteipp [22:06:26] TimStarling: When you get a chance, I'd appreciate feedback on T759 (xhprof for MW) [22:06:40] I think it's done but not sure if you agree [22:06:52] 3MediaWiki-Core-Team: Parsoid performance analysis - https://phabricator.wikimedia.org/T85870#956387 (10tstarling) 3NEW [22:07:59] 3MediaWiki-Core-Team: Parsoid performance analysis - https://phabricator.wikimedia.org/T85870#956387 (10tstarling) p:5Triage>3Normal a:3tstarling [22:08:15] bd808: ok [22:10:48] I think however that hhvm got shutdown on the cluster by _joe_ (with help from ^d and myself) because of what _joe_ thought was a linux kernel bug [22:10:57] s/hhvm/xhprof/ [22:12:33] At least I don't remember hearing that hhvm.stats.enable_hot_profiler got turned back on in puppet [22:15:18] <^d> legoktm: It was tracking edit counts as of $somePointInTime [22:15:29] <^d> I have an entry for my user_id, but it's about 3k edits off. [22:16:12] ok [22:16:51] thanks [22:16:53] <^d> yw [22:19:50] 3MediaWiki-Core-Team, MediaWiki-extensions-TitleBlacklist: Title blacklist intermittently failing, allowing users to edit things they shouldn't be able to - https://phabricator.wikimedia.org/T85428#956453 (10tstarling) Could be some sort of cache poisoning. If so, a cache miss log showing number of TB entries sa... [22:20:16] 3MediaWiki-Core-Team, Performance-Metrics-Dashboard: Parsoid performance analysis - https://phabricator.wikimedia.org/T85870#956455 (10Qgil) Related? #Performance-Metrics-Dashboard [22:29:36] bd808: i am looking at the wikidata role in vagrant and it appears (and remember setting it that way) that it uses --prefer-source [22:29:47] do you remember where in the puppet this is done? [22:32:08] <^d> Hah. "Smart belt loosens buckle when you've eaten too much" [22:32:17] <^d> Just what I want from my wearables...something to remind me I'm fat. [22:32:29] eh? [22:32:40] <^d> [22:33:01] _I'm_ fat, you're way behind :P [22:33:33] as long as it also tightens when I've been up 12 hours coding and forgotten to eat anything [22:33:50] <^d> Haha [22:34:15] bd808: nevermind, think i figured it out :/ [22:35:57] aude: If puppet is doing it in mw-v it would be in -- https://github.com/wikimedia/mediawiki-vagrant/blob/master/puppet/modules/php/manifests/composer/install.pp -- and I don't see that there [22:36:29] we get source for wikibase, because it's dev stability [22:36:37] our libraries (e.g. data model are dist) [22:36:46] *nod* [22:37:15] * aude would prefer them all source, since in vagrant, the point is (usually) to develop [22:37:41] don't know what/how others do? [22:37:46] sure. we could probably figure out how to make that happen [22:37:57] if it's desired and makes sense [22:39:04] lol anomie, logFeatureUsage also causes a user load, and it gets called on almost every query action :P [22:40:34] MaxSem: Not much we can do there if we want to log user names. 
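For context on the T759 (xhprof for MediaWiki) review request earlier in this block: the underlying PHP/HHVM extension API has roughly this shape. A hedged sketch, not the patch under review; under HHVM the hot profiler additionally has to be enabled via hhvm.stats.enable_hot_profiler as discussed:

```php
<?php
// Start collecting per-function call counts, wall time, CPU and memory usage.
xhprof_enable( XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY );

// ... the MediaWiki request being profiled runs here ...

// Stop profiling. Keys look like "caller==>callee"; values include
// 'ct' (call count) and 'wt' (inclusive wall time in microseconds).
$profile = xhprof_disable();
foreach ( $profile as $edge => $stats ) {
	printf( "%-60s ct=%6d wt=%8d\n", $edge, $stats['ct'], $stats['wt'] );
}
```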
[22:40:56] (which will probably be useful if we have to go after people) [22:41:06] now about not spitting that continue warning? XD [22:41:32] "Why didn't you tell me it was changing?". Meh. [22:45:26] ergh, and ApiPageSet::initFromTitles() [22:45:33] hopeless [22:48:54] initFromTitles? Where? [22:50:33] aaaaand ApiMain->logRequest [22:50:57] holy censored censored censored [22:51:01] Again, user name. [23:06:53] 3MediaWiki-API, MediaWiki-Core-Team: API initializes user preferences on every request - https://phabricator.wikimedia.org/T85635#956552 (10MaxSem) 5Open>3Resolved [23:30:35] bd808: I started filling in https://www.mediawiki.org/wiki/MediaWiki_1.25#External_libraries and added a section where you probably want to write up something for structured logging? [23:31:32] legoktm: Awesome. I have a task for that somewhere in the never ending backlog. [23:39:15] The backlog ends when you die. [23:42:17] or when phabricator dies. [23:44:54] Someone just asked me why this channel exists. [23:44:58] And I'm struggling to answer. [23:45:38] Fiona: Internal team communications for the MediaWiki-Core team [23:46:02] #mediawiki is pretty quiet and friendly these days. As is #wikimedia-tech. [23:46:06] But shrug. [23:46:19] I'll relay that answer, thanks. [23:48:00] I think most (all?) of us are in those channels too. I think of this channel as the place to turn around and poke the folks I'd be sitting near if we were all in one physical office space [23:51:35] the virtual/proverbial team water cooler [23:55:54] my google fu is failing me. Anybody got links or titles for good essays about smaller code bases being easier to develop and maintain? Looking for citations to go with the blog post.