[01:19:56] Do we still need to set custom config variables for extensions in the LocalSettings? Or can that be done via extension.json now?
[01:23:49] jfolv: LocalSettings.php is not obsolete by any means, no; and no significant progress has been made towards the config DB either, sadly
[01:25:22] Hm. So does that mean that the "config" parameter of the extension.json isn't currently functional? The manual states that I should be able to set variables with "config": { "var": { "value": "val" } }
[01:25:58] Unless I'm confusing the intended functionality there
[01:28:35] Specifically, I'm talking about this page: https://www.mediawiki.org/wiki/Manual:Developing_extensions#Making_your_extension_user_configurable
[01:31:57] No, that works fine
[01:32:17] And LocalSettings will override
[01:32:27] (though for complex arrays it's a bit iffy)
[01:32:47] Ah, okay. I'm just trying to avoid adding more garbage to our LocalSettings. It's already a jumbled mess and I've been trying to clean it up and separate things logically
[01:33:08] Though, for years you could set config vars in the PHP entry point
[01:33:17] Just in LocalSettings if you need to override
[01:34:41] Good to know. And as long as they're in the extension.json, I can just access them via global $var right? No special loading to consider?
[01:35:16] Indeed
[01:35:36] It's not generally "best practice" these days
[01:35:45] But it will work fine, and isn't going to be deprecated any time soon AFAIK
[01:36:16] Would it be better to just create a file to include in the LocalSettings?
[01:36:27] Why?
[01:36:48] Idk, you said it isn't exactly best practice, and I want to separate the configs by where they're used
[01:37:34] I mean `global $foo;` isn't best practice
[01:37:43] Oh, okay.
[01:37:45] Getting it from a context is better
[01:38:12] for example, in various MW subclasses you can do stuff like `$this->getConfig()->get( 'foo' );`
[01:38:18] But it all depends on what you're doing where
[01:38:23] Ahhhhh, now I see what you mean.
[01:38:59] The latter helps if we ever get to storing config, say, in the database etc
[01:38:59] That would mean I would need to extend a class with the extension then, wouldn't it?
[01:39:13] Maybe. Or inject the right services...
[01:39:20] Or...
[01:40:17] You can do `RequestContext::getMain()->getConfig( 'foo' )`
[01:40:33] Oh, that's perfect. I already use the context to get the current user.
[01:40:41] So it'd just be another call.
[01:41:05] Yeah, it's basically the same as no more $wgUser (or suggesting not to use it)
[01:41:27] Gotcha, that's nice and easy. Thanks!
[02:21:04] @Reedy: the getConfig method seems to just be returning the prefix "wg" with no other options. In addition, I'm also getting warnings from my IDE that getConfig() does not accept arguments. Am I using the wrong function?
[02:21:41] you'll need to do getConfig()->get( 'namewithouttheprefix' );
[02:26:55] That didn't work, both in RequestContext::getMain()->getConfig('name') and even the $this->getConfig() from the class extending SpecialPage
[02:27:27] Relevant extension.json: https://pastebin.com/58UE5B0B
[02:27:48] In a special page `$this->getConfig()->get( 'CheckUserMaximumRowCount' );`
[02:28:09] Also, ReadOnlyFile is a core one..
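A minimal sketch of the config pattern being untangled above: declare the variable in extension.json, then read it without the "wg" prefix via getConfig(). The extension and variable names here (`MyExtensionRowLimit`) are made up for illustration:

```php
// extension.json fragment (manifest_version 2), shown as a comment so the
// whole sketch stays in one place -- the variable name is hypothetical:
//
//     "config": {
//         "MyExtensionRowLimit": { "value": 500, "description": "Max rows to show" }
//     }

// Inside a SpecialPage (or any other ContextSource subclass), drop the "wg" prefix:
$limit = $this->getConfig()->get( 'MyExtensionRowLimit' );

// From code that has no context object handy:
$limit = RequestContext::getMain()->getConfig()->get( 'MyExtensionRowLimit' );

// The legacy global still works, with the prefix added back:
global $wgMyExtensionRowLimit;
```

Note that getConfig() itself takes no arguments; the variable name goes to ->get(), which is the mistake being worked out in the exchange above.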
[02:28:22] So that probably won't have the desired effect
[02:28:27] I thought it was just ReadOnly
[02:28:39] !wg ReadOnlyFile
[02:28:39] https://www.mediawiki.org/wiki/Manual:%24wgReadOnlyFile
[02:28:48] Introduced in version: pre 1.1.0
[02:28:48] Removed in version: still in use
[02:29:02] What the hell was he doing with this extension then
[02:29:28] Setting $wgReadOnlyFile in the extension PHP entry point... probably worked
[02:30:22] It's supposed to create a special page that allows admins to shut off editing for non-approved groups. My site uses it when there's major new content for the subject because it's impossible for the admin team to keep up with the millions of people trying to edit
[02:31:00] I think he has it creating a file containing the reason for the edit lock, and then it just removes all edit-related permissions for non-approved users
[02:31:28] But considering the rights manipulation was disabled, maybe that's actually what's going on. He just sets the variable and writes the file.
[02:33:34] This'll take more investigating, then. I keep finding half-finished code and I'm starting to think it doesn't actually work as intended.
[02:35:55] Or maybe I've got this wrong... Maybe he actually reinstates the rights for anyone exempt from the editlock...
[02:36:10] That would explain why the rights REMOVAL was unused...
[02:36:45] Setting $wgReadOnlyFile shouldn't actually disable database writing, right? It just strips permissions?
[02:39:00] That's the file it will write to in Special:Lockdb
[02:47:06] I see what's going on here. He built the extension to use the same file as the Lockdb special page. Probably not the best of ideas.
[02:48:50] I'll just rewrite it to use a custom one.
[03:36:06] Are there any hooks that get called when a user tries to edit with the lockfile set? If I can use those to override the lock for anyone with the right permission, I can cut this whole extension down to a single hook and a permission.
[05:41:14] Hi everyone. I have a website which is based on WordPress but MediaWiki is installed alongside. When WordPress backs up its database, does it back up the database of MediaWiki as well?
[06:13:18] Unless you have them both in the same database (which is unlikely), no.
[06:13:42] Depends on how you installed it though
[06:41:44] thanks jfolv
[08:16:55] bd808: lack of h2c is definitely not a bug, but non-plaintext h2 requires you to spend CPU time on TLS when, within your internal infrastructure, you do not really need encryption that much
[08:17:52] and Varnish developers stated multiple times that there will be no SSL/TLS in Varnish, ever
[08:18:17] It's becoming more and more common practice to use encryption on links even within a datacenter
[08:18:17] (as an example)
[08:18:36] bawolff: I mean localhost connexions
[08:18:53] would you run TLS over UNIX sockets?
[08:19:19] Depends, have I wrapped my laptop in tin foil to prevent the aliens from looking :P
[08:19:41] but yeah, within a local connection seems excessively paranoid
[08:20:40] considering for localhost you would often use self-signed certificates or have to deal with real weird stuff…
[08:20:58] Sometimes people use TLS in an internal network to use client certificates, as a method of doing authentication. I suppose in that context, on localhost - it could be seen as a method to prevent SSRF, for people who are super paranoid
[08:21:14] but yeah, it doesn't really make sense
[08:21:48] uh
[08:22:06] At the same time, the efficiency gains are probably low enough that it totally doesn't matter
[08:22:13] Remilia: that statement is a bit outdated, today varnish has tls support :)
[08:22:52] wait did PHK really cave
[08:22:56] yes
[08:23:00] believe it or not
[08:23:24] they need to add h2 properly then
[08:23:33] right now it is a right mess because you cannot use both
[08:26:23] tykling: judging by what I see at a glance Varnish still uses Hitch?
[08:29:24] also it only does h2c for inbound connexions, there is no support for outbound which means no HTTP Push pipeline
[08:29:46] honestly I wish haproxy people had their own take on RAM caching
[08:30:59] my MediaWiki install is haproxy → Varnish → Apache → PHP-FPM and it works quite well but I would really love h2 multiplexing and push support throughout
[08:35:29] Remilia: I thought http/2 push was dead
[08:35:38] As in, chrome killed support for it
[08:35:49] hmm
[08:36:02] oh well
[08:38:16] bawolff: it really annoys me when Google can single-handedly decide the direction and future for protocols
[08:38:36] nevertheless
[08:38:40] multiplexing is a must
[08:38:47] What Google giveth, Google taketh away :P
[08:39:34] They're not wrong though that HTTP/2 push adds a lot of complexity, and it's very difficult to deploy in a way that actually results in better performance in the face of unknown cache state
[08:40:21] Even if they didn't essentially have a monopoly in the browser space, they could probably still remove features that effectively nobody uses, without too much fuss
[08:41:52] They're a lot more benevolent overlords of the web than Microsoft was back in the day when MS had the IE monopoly
[08:48:03] whatever the case, multiplexing would be nice for CdnCacheUpdate purposes but without a full h2 pipeline it is essentially impossible :\
[08:53:30] hmm, I wouldn't expect HTTP/2 multiplexing to be that much better in this case than HTTP/1.1 pipelining
[08:53:54] after all, you're never waiting on the result body of the PURGE request
[08:54:45] which is the case that HTTP/2 makes better
[08:55:07] I have no idea if varnish supports HTTP/1.1 pipelining or not
[08:55:17] it does, but the problem is not with that
[08:55:39] bawolff: https://daniel.haxx.se/blog/2019/04/06/curl-says-bye-bye-to-pipelining/
[08:55:44] note the date
[08:56:06] MediaWiki has pipelining hardcoded into CdnCacheUpdate
[08:56:35] and there is no real replacement for pipelining in HTTP/1
[08:57:02] Hmm, I didn't realize it was so poorly supported
[08:57:34] But if varnish supports it, and MediaWiki supports it, I'm not sure what the benefit of HTTP/2 over it would be. Unless varnish is planning to drop it
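For context, the "pipelining hardcoded into CdnCacheUpdate" mentioned above boils down to MediaWiki firing fire-and-forget HTTP PURGE requests at the cache layer. A rough sketch of that pattern using core's MultiHttpClient, not the actual CdnCacheUpdate code; the hostname and page URLs are placeholders, and exact options differ between MediaWiki versions:

```php
// Rough sketch only: batch HTTP PURGE requests in the style of
// CdnCacheUpdate, using MediaWiki's bundled MultiHttpClient.
// "varnish.internal" and the page URLs are made-up placeholders.
$urls = [
	'http://varnish.internal/wiki/Main_Page',
	'http://varnish.internal/wiki/Some_Article',
];

$reqs = [];
foreach ( $urls as $url ) {
	// The response body of a PURGE is irrelevant, so nothing waits on it.
	$reqs[] = [ 'method' => 'PURGE', 'url' => $url ];
}

$client = new MultiHttpClient( [] );
// Requests go out concurrently over parallel curl handles rather than a
// single pipelined connection, now that curl's HTTP/1.1 pipelining is gone.
$client->runMulti( $reqs );
```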
[08:58:21] I know Wikimedia used to use multicast UDP (and now it uses Kafka or something), which ultimately seems like it is much better than any version of HTTP would be at providing purge requests
[09:00:29] bawolff: MediaWiki uses cURL
[09:00:45] cURL no longer supports pipelining
[09:01:49] bawolff: "PHP Warning: curl_multi_setopt(): CURLPIPE_HTTP1 is no longer supported at w/includes/libs/http/MultiHttpClient.php:451"
[09:02:03] Oh
[09:02:56] Guess we could make our own implementation, but that's kind of crazy
[09:03:05] that would be absolutely insane
[09:03:18] pipelining is one of the worst things I saw
[09:03:29] implementing it in php would be hell
[09:04:01] SquidPurgeClient uses a home-grown keep-alive implementation which is already a pain
[09:07:50] bawolff: if we were thinking of nice things to have, a QUIC aka HTTP/3 PURGE would be real cool
[09:12:42] One of the first comments from that blog states:
[09:12:44] > Even with multiple connections sent from the client, all operations are still processed synchronously on the server. The first connection to the server is always processed first and blocks other connections (head-of-line blocking) until that's done.
[09:13:17] Vulpix: right, but that wouldn't matter in an HTTP PURGE
[09:13:45] That matters if there is some response that's ready, and you're waiting for it, but the web server is doing something you care less about first
[09:14:35] But with an HTTP PURGE, you discard the response without even looking at it, so you're never waiting on it
[09:14:44] Are you sure Varnish will process PURGE requests in parallel? There may be some locking on Varnish to ensure consistency
[09:15:22] hmm, maybe, although that's just as likely to apply to the HTTP/2 case
[09:16:09] I am thinking of, in regards to my own installation, writing a new clientpool class implementing the CloudFlare API and then implementing a CF API proxy in Rust, for MW to dial instead of Varnish for purging
[09:16:35] the CloudFlare API uses HTTP POST with JSON arrays of URLs
[09:17:11] not sure if overengineering, but generally having CF API support might be nice
[09:17:18] (I will never use CF)
[09:18:09] I do think if you want really efficient purging, Kafka like WMF does is the way to go. It will scale much better than a hypothetical HTTP/2 implementation would
[09:20:49] https://wikitech.wikimedia.org/wiki/Kafka does not give much information on how purging is done, time to dive deep
[09:22:07] https://github.com/wikimedia/varnishkafka
[09:22:21] But also, it's probably much more effort to set up
[09:22:43] For WMF, the primary performance issue is not purging a lot of pages, it's purging all the pages on a lot of different varnish servers
[09:23:08] whoops
[09:23:13] varnishkafka is not the right package
[09:23:17] that's the other thing
[09:25:09] yeah I am looking at https://wikitech.wikimedia.org/wiki/Kafka_HTTP_purging
[09:25:23] but Kafka feels like massive over-engineering
[09:25:44] for standalone wikis, that is
[09:26:16] I think CloudFlare API support might actually be something worth having because a lot of people use CloudFlare these days
[09:28:52] That's true
[09:29:17] bawolff: yeah the Kafka set-up does not seem really feasible for anything small-scale, you need a whole ton of services running
[09:29:27] eventgate, Kafka itself, etc. etc.
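Since the CloudFlare API "HTTP POST with JSON arrays of URLs" comes up here, a hedged sketch of what such a purge call could look like from PHP using Guzzle (bundled with MediaWiki). The zone ID, token source, and URLs are all placeholders, error handling is omitted, and this assumes Cloudflare's standard v4 purge_cache endpoint rather than anything MediaWiki ships:

```php
// Hedged sketch of a Cloudflare cache purge: zone ID, API token, and URLs
// are placeholders; not an official MediaWiki client.
use GuzzleHttp\Client;

$zoneId = '0123456789abcdef';        // placeholder zone ID
$token  = getenv( 'CF_API_TOKEN' );  // placeholder credential source

$client = new Client( [ 'base_uri' => 'https://api.cloudflare.com/client/v4/' ] );
$client->post( "zones/$zoneId/purge_cache", [
	'headers' => [ 'Authorization' => "Bearer $token" ],
	// One JSON array of URLs per request, so touched pages can be batched.
	'json'    => [
		'files' => [
			'https://example.org/wiki/Main_Page',
			'https://example.org/wiki/Some_Article',
		],
	],
] );
```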
[09:30:37] going to seriously look into adding CF API support, probably with Guzzle since it is bundled with MW anyway
[09:36:45] bawolff: oh, I could actually… implement the CF API in VCL lol
[09:38:08] purge support is usually implemented in Varnish by adding code that handles HTTP PURGE requests, so why not create a fake API endpoint lol
[12:51:14] y0 and big thanks for the awesomest wiki engine in the Laniakea Supercluster
[12:52:17] I need to get the navigation issues sorted for a few wikis. And find out if that one ticket I created is only about a MediaWiki upgraded across several versions that I probably broke in the process
[12:53:27] Free suggestion: Having previews and having diffs is great, but the awesomest thing for formatting bug hunting would be a "Show preview and changes" button
[12:54:03] I mean you already have the means to provide both preview and diff, so why not package the views into one? Should not be much work
[12:59:28] another ridiculously simple feature that would be great when doing content sorting / reorganizing is if one could see the plus/minus byte count already in the preview
[13:00:30] So if I'm changing a chronology to reverse order by hand, with this feature I could see that if it stays within +/- 0-1 I have not made any content-losing mistakes
[16:01:59] Should I make 2 Phabricator tickets? #1 See plus/minus bytes in each preview and each diff view when editing, instead of only after the "publish" button is pressed #2 Please make a "Show preview and changes" button happen ?
[16:02:54] I'm sure people would enjoy both features and very few people will be upset
[22:04:17] Hello, can anybody tell me what "comment" plugin this particular page uses? https://www.mediawiki.org/wiki/Manual_talk:Upgrading
[22:04:53] flow
[22:05:09] cloudcell_: v
[22:05:12] https://www.mediawiki.org/wiki/Extension:StructuredDiscussions
[22:05:14] though it's now called StructuredDiscussions
[22:05:16] https://www.mediawiki.org/wiki/Extension:StructuredDiscussions
[22:05:44] thank you DannyS712 and p858snake
[23:18:17] DannyS712: is it possible to enable StructuredDiscussions (Flow) on pages that had preexisting conversations as markdown text ?
[23:23:59] is it possible to enable StructuredDiscussions (Flow) on pages that had preexisting conversations as markdown text ?
[23:24:27] Yes
[23:24:42] probably, but I would advise against it - in my experience normal wikitext discussions are much easier to use (also I don't think you can switch back)
[23:28:37] I want to try that on a test page, how can I do that?
[23:33:07] Someone shared this with me today, about improving talk pages: https://commons.wikimedia.org/wiki/User:Jack_who_built_the_house/Convenient_Discussions
[23:46:12] DannyS712: what is the best practice for using normal wikitext discussions? (I'm really new to this)
[23:48:23] do I just create topic and watch for changes for every topic?
[23:48:45] *create topics