[04:48:03] @apergos Makes perfect sense, thank you
[04:49:44] is it possible to handle chunked uploads differently in a custom file backend?
[04:49:54] @ashley and thank you as well
[04:50:59] I'm implementing an S3-backed file backend and want to take advantage of the native multipart upload feature of S3 when using chunked data
[04:51:27] that way the S3 bucket doesn't need to do all the temp file shenanigans
[05:27:24] hello everyone. I am setting up a website and need to make sure that both the WordPress part and the MediaWiki part have a tag in the header for Google Analytics. So my question is: "does the WordPress header.php get picked up by MediaWiki, or do I have to insert the Google tag into MediaWiki's PHP as well? (or use some plugin to edit the header)"
[05:28:17] it looks like I found something: https://www.mediawiki.org/wiki/Extension:Google_Analytics_Integration
[05:36:50] hmm, looks like I need help after all
[05:37:20] I followed the instructions from the page above, but my Google Analytics account is not picking up visits to the wiki
[05:38:30] I updated LocalSettings.php with my UA-xxxxxxxxxx-x number but nothing seems to work :(
[05:43:25] I do not understand why this part is there:
[05:43:42] https://www.irccloud.com/pastebin/5002uq85/
[06:55:51] problem solved, never mind
[10:40:48] Hi, I am using MediaWiki to document my IT infrastructure (what services it offers, taking notes on incidents, what the network/hardware/software consists of...). Now there is some kind of audit/assessment coming up which will require me to provide documentation on the topics I just mentioned. What are your recommendations for providing access to *parts* of my wiki, but not all of it?
[10:41:42] I have looked into exporting pages/categories to PDF. The result will most likely not pass formal requirements. Also, I only want/need to export some parts of the pages, not all of them.
[10:42:48] My best bet right now is to make a new namespace, give the auditor access to that namespace, and transclude the relevant parts into that namespace.
[10:44:19] I was also considering using pandoc to convert the MediaWiki markup into something more presentable, but I think it's not worth it (I'd need to introduce an "export toolchain with pandoc", suitable templates, etc.)
[10:44:32] And it will also not solve access to the files referenced in the wiki.
[13:44:45] Hi all, I was facing issues with logging in to Gerrit.
[13:45:13] I had forgotten my password, so how do I go about resetting it?
[13:46:36] IIRC, it's linked to the Wikitech account, but when I reset my password there I get no reset mail; probably I don't have access to the email that was used then. Anyway, I created a new account on Wikitech, yet I am unable to log in with the new account. Can anyone help me out?
[14:50:16] do I need to include the Vector repository in my 1.35 installation or not? I thought it was part of the core, but the Installation page says to download the repository and add it to skins/
[14:57:25] It's not part of core. It's bundled with the tarball
[14:57:32] If you don't want the Vector skin, you don't need to download it
[14:59:36] thanks Reedy. I've been using it, but since upgrading to 1.35 I am having problems with what looks like asset paths being in the wrong place
[15:07:15] Did you update it?
[15:07:35] it should be on branch REL1_35
[15:07:55] the Special:Version page shows `--` as the version number though
[15:08:09] so, maybe?
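For reference on the skin question above: a minimal sketch of the LocalSettings.php lines that typically enable the bundled Vector skin on a 1.35 install. It assumes skins/Vector is present and checked out on the REL1_35 branch; wfLoadSkin and $wgDefaultSkin are standard core entry points, but the rest of your settings file will of course differ.

```php
# LocalSettings.php (sketch) -- assumes skins/Vector exists and is on the
# REL1_35 branch matching the core checkout.
wfLoadSkin( 'Vector' );      // registers the bundled skin via its skin.json
$wgDefaultSkin = 'vector';   // default skin for readers who haven't picked one
```

A dash in place of a version number on Special:Version may simply mean the skin's skin.json does not declare a version field, so on its own it does not necessarily indicate a wrong branch.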
[15:11:54] looking at the last commit on the REL1_35 branch of the Vector repository on GitHub makes me think that it is probably deployed at the correct version
[15:16:44] actually, the issue appears when forcing other skins
[19:20:46] so after what I would say is a herculean effort, I finally figured out a way to do chunked uploads to S3 via multipart uploads, without having to store file chunks on disk
[19:21:43] but I basically needed to create custom classes for ApiUpload, UploadFromChunks, UploadStash, UploadStashFile, and LocalRepo, not to mention the storage backend
[20:02:39] hey, is this the right place to ask questions about MediaWiki development/code?
[20:03:48] specifically I'm wondering why this integration test is failing: https://integration.wikimedia.org/ci/job/mediawiki-fresnel-patch-docker/31052/consoleFull
[20:04:03] I noticed this error: PHP Notice: "" is not a valid magic word for "transliterate" [Called from Language::getMagic in /workspace/src/languages/Language.php
[20:04:24] it's related to the patchset, but I'm not entirely sure what's causing it or how to fix it
[20:09:40] ningu: this is the right place! I'm not experienced enough with that part of the code to actually answer your question, but if you hang around someone else may stop by and help out
[20:10:20] ok, cool
[20:10:29] I may also reply with a comment on the patchset
[20:10:30] it may help if you link the patchset it's related to
[20:10:45] https://gerrit.wikimedia.org/r/c/mediawiki/core/+/627938
[20:11:19] it looks like the page load cost error is not fatal -- but I did notice that warning message, which suggests some sort of minor issue in my patch
[20:11:46] I created a new parser function #transliterate and I needed to add "transliterate" to the list of magic words
[20:11:51] so, it's definitely something I did or didn't do
[20:12:29] the magic word is in MessagesEn.php
[20:13:48] I dunno if it's just because it hasn't been translated to other locales yet or what
[20:14:57] it doesn't seem to matter though. not all strings are translated in every messages file anyway
[20:15:09] shouldn't do, everything falls back to en
[20:15:13] right
[20:15:37] yeah, so I dunno how I've done anything different for the transliterate magic word than for others, or why just that one produces a warning
[20:16:57] it thinks that the entry isn't an array for some reason
[20:18:42] hmm, could be cache related
[20:18:55] does the CI environment run in a new environment each time, or can it use cache entries from previous runs?
[20:19:28] yeah I dunno
[20:19:36] the whole CI setup is a bit unclear to me
[20:19:51] cscott fixed an earlier issue with it
[20:19:55] I mean with the tests
[20:20:21] that notice is consistent with the localisation cache not containing an entry for transliterate
[20:20:31] well, that was to do with the test needing to properly init each time and not carry over settings from the previous test. this is more to do with the cache, as you said
[20:21:11] I also don't know why this CI test is timing out, but it's not clear it's anything I did
[20:21:33] also, why is Language::getMagic being called in the first place?
[20:21:58] because you have {{#transliterate:}} in a parser test?
[20:22:08] so it's running parser tests?
[20:22:16] this seems to be a live test loading pages and such
[20:22:22] the parser tests are run elsewhere, I thought?
[20:23:02] hmm yeah, doesn't seem to be the case
[20:23:16] this is under "This patch might be adding a page load cost"
[20:23:34] are you able to repro locally?
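For reference on the magic word discussion above: a sketch of the two pieces involved when a magic word backs a core parser function like #transliterate. The array shape in MessagesEn.php is what the localisation cache expects (the notice quoted earlier is raised when Language::getMagic finds something that is not an array); the handler class and method names below are hypothetical placeholders, not the actual patch.

```php
// MessagesEn.php (sketch) -- the magic word entry must be an array:
// first element 0/1 for case-insensitive/case-sensitive, then the aliases.
$magicWords = [
	// ...
	'transliterate' => [ 0, 'transliterate' ],
];

// Registration, done once when the parser is set up (core does the
// equivalent in CoreParserFunctions::register); $parser is the Parser
// instance, and the callback below is a made-up placeholder.
$parser->setFunctionHook(
	'transliterate',
	[ SomeTransliterateHandler::class, 'render' ],
	Parser::SFH_OBJECT_ARGS
);
```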
[20:23:38] (guessing not)
[20:23:40] I think parser tests are under composertest
[20:23:51] well, no, because I'm not even sure how this test is being run, but I could try to figure that out
[20:24:00] https://github.com/wikimedia/fresnel/blob/a75e792ccb093d6fbfa10b5f62455965b407de4d/src/conductor.js#L109
[20:24:08] it looks like it's purposely running a few requests first
[20:24:49] yes, and { scenario: 'Read a page', run: 0 }
[20:25:22] oh
[20:25:31] 12:15:26 + git checkout -q HEAD~1
[20:25:34] that's before the failed test
[20:25:57] aka checking out the commit prior to yours; seemingly to get the difference in timings between your code and the previous code?
[20:26:09] hrm... maybe?
[20:26:30] the prior commit would not have the transliterate stuff at all
[20:26:39] right
[20:26:55] since it was around the timeout, I thought that warning might be related
[20:26:58] but maybe it's just noise
[20:29:07] I still don't understand why it's looking for the magic word in the first place
[20:29:30] some cache is my best guess
[20:29:38] yeah I guess so
[20:29:58] ok, well, if folks on the patchset thread don't care about it I'll just leave it
[20:30:06] it caches the fact that the magic word exists since it runs your code first, then reverts to before-your-code, but the cache may still be hanging around saying the magic word exists
[20:30:11] this just happened after a rebase, no code changes
[20:30:14] (again, that's just a guess)
[20:30:43] it may have even happened before and I didn't notice. I noticed this time because the whole CI test timed out and was marked failed
[20:31:05] yeah, not sure how or why a timeout would happen because of that
[21:45:01] Hi, I am using MediaWiki to document my IT infrastructure (what services it offers, taking notes on incidents, what the network/hardware/software consists of...). Now there is some kind of audit/assessment coming up which will require me to provide documentation on the topics I just mentioned. What are your recommendations for providing access to *parts* of my wiki, but not all of it?
[21:47:23] !cms
[21:47:23] Wikis are designed for openness, to be readable and editable by all. If you want a forum, a blog, a web authoring toolkit or a corporate content management system, perhaps don't use wiki software. There is a nice overview of free tools available at including the possibility to try each system. For ways to restrict access in MediaWiki, see !access.
[21:47:35] !access
[21:47:35] For information on customizing user access, see . For common examples of restricting access using both rights and extensions, see .
[21:59:57] Thank you
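For reference on the recurring auditor question: a hedged LocalSettings.php sketch of the "new namespace plus transclusion" approach described above, using core read restrictions together with Extension:Lockdown. The namespace ID (3000), the namespace names and the group names ('staff', 'auditor') are made-up examples, not anything prescribed by MediaWiki.

```php
# LocalSettings.php (sketch, not a drop-in config) -- illustrates giving an
# auditor read access to a single custom namespace and nothing else.
define( 'NS_AUDIT', 3000 );
define( 'NS_AUDIT_TALK', 3001 );
$wgExtraNamespaces[NS_AUDIT] = 'Audit';
$wgExtraNamespaces[NS_AUDIT_TALK] = 'Audit_talk';

# Require login to read anything at all.
$wgGroupPermissions['*']['read'] = false;
$wgGroupPermissions['user']['read'] = true;

wfLoadExtension( 'Lockdown' );

# Ordinary content stays readable only by regular staff accounts...
$wgNamespacePermissionLockdown[NS_MAIN]['read'] = [ 'staff' ];
$wgNamespacePermissionLockdown[NS_FILE]['read'] = [ 'staff' ];
# (...repeat for any other namespace the auditor must not see...)

# ...while the Audit namespace, holding transclusions of the relevant
# sections, is readable by the auditor group as well.
$wgNamespacePermissionLockdown[NS_AUDIT]['read'] = [ 'staff', 'auditor' ];
```

Group membership would then be handed out via Special:UserRights, and the documented Lockdown caveats still apply: transclusion, the API and search can leak restricted content, and uploaded files are not protected by namespace rules, so this is access separation for a cooperative auditor rather than a hard security boundary.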