[02:36:29] So my Log In link works on this page: https://wiki.zoneminder.com/Special:SpecialPages but not from the home page (/).
[02:36:39] Anyone seen that - know how to fix it?
[02:38:10] does not work for me on either, just leaves me on the same page, i think your webserver is misconfigured
[02:39:03] gry: On the main page, right? The SpecialPages link should work
[02:39:23] it does not, sorry
[02:39:36] Strange
[02:39:45] Anyone have an nginx config for MW they could share?
[02:39:52] if it works for you, try in another browser or clear cache
[02:40:27] i think for short url config with nginx, start here: https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Short_URL/Nginx
[02:40:37] if it is fubar, check corresponding logs
[02:44:01] Hm. Something is definitely wonky.
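For reference, a minimal sketch of the kind of short-URL setup that manual describes. Assumptions: MediaWiki installed at the web root, PHP-FPM on the socket shown; the hostname and paths are placeholders.

```nginx
# Sketch of a MediaWiki short-URL config (cf. Manual:Short_URL/Nginx).
# Hostname, root path, and the PHP-FPM socket are assumptions.
server {
    listen 80;
    server_name wiki.example.org;
    root /var/www/mediawiki;
    index index.php;

    location / {
        try_files $uri $uri/ @rewrite;
    }

    # /Page_Title -> /index.php?title=Page_Title
    location @rewrite {
        rewrite ^/(.*)$ /index.php?title=$1&$args last;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;  # adjust for your PHP-FPM
    }
}
```

The linked manual also covers the common split layout (scripts under /w, articles under /wiki) if you prefer that.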
[10:15:04] hi guys
[10:20:00] hey Combined2857, what's up?
[11:20:55] hello
[11:21:14] Hiya, does anyone know of an extension that works like this one but would work with the latest MediaWiki version? https://www.mediawiki.org/w/index.php?oldid=945560
[11:21:48] i'm sorry, i'm a complete novice, no clue
[11:25:52] Yeah, me too, no worries :-)
[12:14:26] is anyone able to help me? i got mediawiki through my host (softaculous) and i'm trying to upgrade it to 1.30 so i can get VisualEditor working
[12:18:13] please?
[12:31:48] Can anyone recommend how to use Percona with MediaWiki? Percona-XtraDB-Cluster prohibits use of DML commands on a table (wikiitdb.archive) without an explicit primary key with pxc_strict_mode = ENFORCING or MASTER
[15:07:33] Hi! I'm refactoring a MediaWiki deployment into Docker, and am currently working on the cron part for the job queue. If I'm trying to trigger a job queue run via HTTP, is it correct that I should call SpecialRunJobs.php?
[15:15:42] JosefAssad: no
[15:16:08] why are you trying to trigger the run via http? Is there something that is preventing you from running cli scripts?
[15:16:26] if you can run a php script via cli, the maintenance/runJobs.php script is what you're after
[15:19:15] but, you cannot directly call anything in includes/ or maintenance/ from http
[15:19:20] roger
[15:19:44] the reason I was looking into that is, I'm trying to keep it to 1 process per container
[15:20:14] I am currently triggering those cronjobs from the host's cron, but I wanted to move it into a little alpine based docker image (cleaner)
[15:20:40] maybe I'll install an ssh daemon on the mediawiki container and use ssh from alpine to trigger them
[15:20:58] the runJobs maintenance script would be the preferred method, as it runs every job in the job queue (or you can specify an option to only run a certain # of jobs)
[15:21:28] iirc, the Special:RunJobs page honors $wgJobRunRate, meaning each request would only run a single job (by default)
[15:21:37] aha
[15:21:43] ok so that isn't an option either
[15:21:57] good, I think the ssh thing might be the way forward, though it feels hackish
[15:22:19] so you'd need some logic in LocalSettings.php to check for a certain parameter and adjust $wgJobRunRate dynamically if you really want to go that route
[15:22:45] the docker image can't have its own cron?
[15:22:53] well, it can. And it used to have one
[15:22:55] granted, that IS another process
[15:23:03] but I am refactoring it into "1 process per container"
[15:23:23] my first version had mysql and apache and cron running in the same container actually
[15:23:30] it was stupid but it worked
[15:24:11] it's not a super critical refactoring, but I have a bit of time to invest in cleaning up the deployment architecture so I thought why not
[15:24:42] you could spin up another container to run the jobs
[15:24:53] and that container would just have the cron process to run the maintenance script
[15:24:55] I did a bit of (light and unwilling) drupal work years ago, they trigger cron via http. but I remember how that created a lot of extra work securing the endpoint from DDoSers
[15:25:18] you'd need to ensure configs remain in sync between that and the web container somehow, but there are lots of solutions for that
[15:25:23] yep, that's precisely what I have done. Only thing is, the containers talk to each other over an internal network. :)
[15:25:37] which is why I was looking for an easy http solution
[15:25:40] since the runJobs.php script only needs to talk to the database, it won't need to talk to the live wiki instance
[15:25:59] oh
[15:26:03] I see what you mean
[15:26:12] haha that is a little bit crazy in a cool way
[15:27:01] it's a possibility if you're dead set on one process per container
[15:27:01] it would probably work but I think a container running cron ssh exec'ing inside the mediawiki container is a bit easier :)
[15:27:22] yeah no you're right, I think your solution would work. bit crazy but not wrong :)
[15:27:28] it's more management on the config side though; you need to ensure that the mediawiki files remain in sync between both instances (including LocalSettings and any extensions)
[15:27:55] yeah that's not hard, I already mount the configs from a host-based file
[15:28:20] but I still think the ssh route is a tad easier :) Mad props for the out of the box thinking though
[15:28:32] yep, go with what works best for you
[15:38:24] JosefAssad: looking at the code, it seems I was a bit wrong above; you specify the # of jobs to run when querying Special:RunJobs as a POST parameter
[15:38:40] so that may open that option back up, but I'd still recommend the maintenance script route
[15:39:25] if you do the special page route, you would POST to index.php?title=Special:RunJobs and pass in all the correct parameters, including the signature and sigexpiry parameters
[15:39:48] the signature is based on a hash of the rest of the parameters using $wgSecretKey
[15:41:20] Skizzerz: yeah I agree with you
[15:41:51] I feel that it's more reliable to call the script directly, so I'm setting up the alpine-based container with the openssh client to execute that "remotely"
[15:42:54] sh > http :)
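A minimal sketch of the LocalSettings.php idea from 15:22:19. The trigger parameter name here is invented for illustration, and anything like this should be guarded (e.g. with a shared secret) before being exposed.

```php
<?php
// LocalSettings.php -- sketch of dynamically adjusting $wgJobRunRate.
// 'runjobs' is a hypothetical trigger parameter; protect it before use.
$wgJobRunRate = 1; // default: one job per ordinary page request

if ( isset( $_GET['runjobs'] ) ) {
    // A dedicated trigger request works through a bigger batch of jobs.
    $wgJobRunRate = 50;
}
```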
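And a sketch of the signed POST route described at 15:38-15:39. The exact parameter set and signature serialization must match the SpecialRunJobs.php shipped with your MediaWiki version, so treat this only as the shape of the request: sign the other parameters with $wgSecretKey, include sigexpiry, then POST.

```php
<?php
// Sketch of POSTing to Special:RunJobs (per the chat's description).
$secretKey = '...same value as $wgSecretKey...'; // assumed shared with the wiki
$params = [
    'title'     => 'Special:RunJobs',
    'maxjobs'   => 10,
    'sigexpiry' => time() + 5, // signature valid for a few seconds
];
ksort( $params );
// HMAC over the remaining parameters, keyed on the wiki's secret key.
$params['signature'] = hash_hmac( 'sha1', http_build_query( $params ), $secretKey );

$ch = curl_init( 'https://wiki.example.org/index.php' ); // assumed wiki URL
curl_setopt_array( $ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => http_build_query( $params ),
    CURLOPT_RETURNTRANSFER => true,
] );
curl_exec( $ch );
curl_close( $ch );
```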
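Finally, the cron-plus-ssh sidecar JosefAssad settles on might look like the crontab below, with `crond -f` as the sidecar's only foreground process. The container hostname, user, and paths are assumptions, and key-based ssh auth is presumed to be set up already.

```
# /etc/crontabs/root in the alpine sidecar (BusyBox crond, run as `crond -f`).
# 'mediawiki' is the assumed container hostname on the internal network.
*/5 * * * * ssh www-data@mediawiki 'php /var/www/mediawiki/maintenance/runJobs.php --maxjobs 100'
```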
[16:11:30] Could anyone please help me figure out why uploads are not working? I either get "The file you uploaded seems to be empty", a 503 error, or a 504 error
[16:40:00] Reception123, what steps do you perform, what exactly happens / is shown at which stage, what does the log file of your webserver software say, which MW version?
[16:40:53] https://www.mediawiki.org/wiki/Manual:Configuring_file_uploads for general info
[16:42:36] andre__: I try to upload any image using Special:Upload, most of the time I get a "503 Backend fetch failed" error. The logs don't say more than the error. MediaWiki version 1.28.2.
[16:47:54] Reception123, which exact log did you check?
[16:52:10] I can't remember, I checked all of them and only found something that mostly repeated the 503 backend fetch error
[16:53:10] which ones are "all of them" exactly?
[16:56:29] php error, access logs, exception logs
[16:58:40] Now I can't seem to find anything in the logs. Any suggestions for where I should search?
[17:01:19] Along with these file uploads, intermittent 503 errors are also happening by themselves
[19:20:16] Hey all, I would like to create a wikitable which is fixed at the top right of an article. The behaviour should be like an infobox. What's the simplest way to do this?
[19:20:37] Aspecially the text should not start at the bottom of the table
[22:17:19] Heh, I read that mw-bot.
[22:17:24] Heh, I read that as mw-bot.
[22:17:43] I was like "who taught mw-bot the word aspecially?"
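On the floating-table question from 19:20: floating the table right makes the following text wrap beside it rather than start below it, infobox-style. A minimal wikitext sketch (widths and margins are arbitrary):

```
{| class="wikitable" style="float:right; clear:right; width:22em; margin:0 0 1em 1em;"
|+ Caption
! Field !! Value
|-
| Name || Example
|-
| Type || Demo
|}
Article text placed after the table wraps to its left,
just like it does beside an infobox.
```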