[00:04:35] modifying other people's changesets is a pretty common thing to do when reviewing and giving +2. Folks do it a lot when there are just some small, uncontroversial things that need to be fixed before merging, like a manual rebase, spelling errors, etc.
[01:49:28] Hi, I remember that there is some site/wiki for testing skins, but it seems that I can't find it right now. Does someone remember where it is?
[01:50:19] Ah, I found it actually. It's skins.toolforge.org.
[02:16:44] Bd808 trustedcontribs in Territory should give the rights to do it, it was just from when there was some gerrit spam iirc
[03:13:44] I'm completely new to MediaWiki. Downloaded 1.43.0.
[03:13:58] Where do I find the app that I actually use to build a site?
[10:59:29] Hi! I just wanted to update our MediaWiki installation, including several extensions. However, the extension downloader (https://www.mediawiki.org/wiki/Special:ExtensionDistributor/) seems to be broken. It just says "Unable to fetch extension list!"
[11:01:42] which extension are you trying to download?
[11:01:45] I tried ShortUrl and it worked for me
[11:03:48] or does it show the error on that page, before you can select an extension?
[11:05:14] I see a handful of errors in logstash (one from GerritExtDistProvider, more from graphite)
[11:09:43] looking at logstash over the past 30 days, it looks like these errors might have become *slightly* more frequent, but they've always been happening at some level (100-ish per day)… I'd say try again and hope it works then?
[12:04:05] Seems it has been fixed in the meantime.
[12:04:37] When I posted this, it was impossible to download *any* extension
[12:15:05] from memory the tool can hit rate limits on Toolforge and/or GitHub sometimes
[14:31:11] my attempt at upgrading 1.29 to 1.35 (with sqlite) is still stuck at "Beginning migration of revision.rev_comment to revision_comment_temp.revcomment_comment_id" (counted up to "... 1626").
[14:31:23] the sqlite database has grown by ~50%
[14:31:26] and CPU is still at 100%
[14:32:45] any idea if it makes sense to wait or to cancel? the update.php output said "Migrating comments to the 'comments' table, printing progress markers. For large databases, you may want to hit Ctrl-C and do this manually with maintenance/migrateComments.php." just before this
[14:32:59] it's a wiki with just 1000 articles
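For reference, the manual route that the update.php message points to looks roughly like this; a minimal sketch, assuming the wiki root is /var/www/wiki and the php CLI binary is on the PATH (both are placeholders):

    # Stop the stuck updater with Ctrl-C, then run the comment migration on its
    # own so its progress is easier to watch (paths are placeholders).
    cd /var/www/wiki
    php maintenance/migrateComments.php

    # Re-run the updater afterwards to finish the remaining schema changes.
    php maintenance/update.php --quick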
[14:43:45] I upgraded 8/9 wikis successfully from 1.39.11 to 1.43, but the last one maybe has a non-empty title in a non-main namespace. When update.php is "Running migrateLinksTable.php on pagelinks..." and doing "Populating the pl_target_id column" I get this error:
[14:43:47] "Wikimedia\Assert\ParameterAssertionException from line 72 of /var/www/consumerium.org/develop/mediawiki/vendor/wikimedia/assert/src/Assert.php: Bad value for parameter $title: should not be empty unless namespace is main"
[14:45:23] Sorry, I mean an _empty_ $title in a non-main namespace. I have irccloud and will wait for hours happily, cheers
[14:58:13] yuker: did the page work in the old MediaWiki version?
[14:58:22] (I would've thought it would result in some kind of error even on 1.39)
[14:58:59] Lucas_WMDE: I did not notice any broken pages in 1.39.10 or 1.39.11
[14:59:21] so you used to access the page with /wiki/Namespace: or similar?
[15:00:08] I do not know which page it is that, through some mishap, has an empty title in a non-main namespace. I should probably head over to MariaDB?
[15:00:17] ah ok
[15:01:05] yeah try something like SELECT * FROM page WHERE page_title = '' (it'll be slow because no index is good for that query… if your db is large, maybe add AND page_namespace IN (/* insert list of all ns IDs here */))
[15:01:22] I reverted the last upgrade with the hosting provider's snapshot utility to the last known working state, but naturally kept a copy of the error message
[15:01:50] ok, will do. a few minutes. thank you Lucas_WMDE
[15:02:12] it wasn't clear to me if you were describing a "known" page or not ^^
[15:02:23] depending on the contents, it might be best to drop the row from the page table…
[15:02:28] or update its page_title to be non-empty
[15:02:44] (or maybe the error message actually means something else… I guess we'll see whether the query finds a row or not)
[15:03:05] Lucas_WMDE: returned an empty set in a blink
[15:03:09] hm
[15:04:16] how about SELECT * FROM pagelinks WHERE pl_title = '' LIMIT 10
[15:04:25] ok, running
[15:05:09] the query returned two rows
[15:06:06] Lucas_WMDE: should I pastebin the rows?
[15:06:25] I don't think I can do much with them
[15:06:35] but you could check which pages their pl_from IDs correspond to
[15:06:50] (SELECT page_namespace, page_title FROM page WHERE page_id IN (/* IDs here */))
[15:07:07] and then see if you can figure out what the links mean, maybe
[15:07:38] tbh it's probably relatively safe to just delete those two rows from the pagelinks table, and then null-edit the two pages later so that their pagelinks are rebuilt from scratch
[15:07:56] (I think there's also a maint script to rebuild pagelinks entirely, but if it's only two rows that's probably not necessary)
[15:10:03] The corresponding articles are https://develop.consumerium.org/wiki/Namespaces (an ok article) and https://develop.consumerium.org/wiki/Consumerium:Policies_guidelines_and_instructions (a deleted article)
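The two-step lookup above (find empty pl_title rows, then look up their pl_from IDs in the page table) can also be done in one query; a sketch, assuming a MySQL/MariaDB backend, with placeholder database name and user:

    # List empty-title pagelinks rows together with the page each one sits on
    # (database name and user are placeholders).
    mysql -u wikiuser -p wikidb -e "
      SELECT p.page_id, p.page_namespace, p.page_title, pl.pl_namespace
      FROM pagelinks pl
      JOIN page p ON p.page_id = pl.pl_from
      WHERE pl.pl_title = ''
      LIMIT 10;"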
[15:11:37] I don't see anything strange in the source wikitext of the former
[15:11:41] Lucas_WMDE: this may have been just the first internal consistency problem thrown by update.php, but let us hope this is a one-off freak inconsistency
[15:11:52] maybe try a null edit and see if the row just goes away (before even upgrading anything)
[15:12:06] ok, that I can try now too
[15:12:54] null edit == edit wikitext and hit save, without changing anything? I've noticed this helps with one weird bug
[15:14:38] yup, exactly
[15:14:51] (https://en.wikipedia.org/wiki/Help:Purge#Null_edit)
[15:15:29] doing a null edit to [[Namespaces]] didn't change a thing in the result of 'SELECT * FROM pagelinks WHERE pl_title = '' LIMIT 10;'
[15:15:37] hm, odd
[15:15:52] what's the pl_namespace of that row?
[15:15:57] This wiki has been running since March 2003, so there may be some odd things
[15:16:02] ... by now
[15:16:21] pl_namespace of both rows is 4
[15:16:50] that's NS_PROJECT
[15:17:37] pl_from_namespace values are 0 and 1.
[15:17:59] I guess it must be the [[Consumerium:namespace]] link, but I don't know why the pl_title would be empty then
[15:18:54] But it is a normal page in the main namespace called Namespaces.
[15:19:07] pl_namespace is what the link points to
[15:19:28] pl_from_namespace is where it comes from (so it's 0 because [[Namespaces]] is in the main namespace)
[15:19:46] ah, ok, I get it now
[15:20:04] normally [[Consumerium:namespace]] should become a pagelinks row with pl_namespace 4 (project/Consumerium namespace) and pl_title namespace (or Namespace probably – initial uppercase)
[15:20:20] I think it's probably safe to just delete those two rows
[15:20:30] write down their values first in case you need to manually recreate them later ^^
[15:20:39] good advice, thanks
[15:20:48] I don't know where the rows would come from, but it might just be old data weirdness
[15:21:39] Lucas_WMDE: my SQL is a bit rusty ... and I would need to know what pl_from is. is it unique?
[15:22:00] or can there be many instances of the same pl_from?
[15:22:08] yes, pl_from is the page ID of the page that the link is on
[15:22:43] so a page with multiple outgoing links will have multiple pagelinks rows with the same pl_from
[15:22:56] can I ask what pl_title actually should be? the name of the article that is linked to or linked from?
[15:23:14] (and in case you're wondering, yes pl_from_namespace is redundant with page.namespace, it's just there for more efficient sorting/filtering)
[15:23:21] the name of the article that is linked *to*
[15:23:30] without the namespace
[15:26:00] ah, so there is an outgoing link recorded for them, but some accident / mishap / bug / intruder has caused these two pagelinks rows to contain a malformed empty value where the name of the actual linked page should be? And therefore it is safe to remove them?
[15:27:12] yes, exactly
[15:27:14] If I just modify the SELECT statement to be a DELETE statement, the wiki should be fine after that
[15:27:55] I think so
[15:28:19] (btw the maintenance script to rebuild the pagelinks table is refreshLinks.php – though I don't know if it would delete this row or not)
[15:28:20] I have the backups of 1.39.11, but I'm going to image the system disk at my provider before I proceed
[15:28:27] 👍
[15:31:47] Thank you for your help, Lucas_WMDE :)
[15:32:26] you're welcome ;)
[15:55:16] OATHAuth isn't applying its database changes when upgrading from 1.39.11 to 1.43, btw. I was banging my head on it yesterday for a while, until I found this workaround by some nice Korean bloke: https://www.mediawiki.org/wiki/Topic:Yakailc03f0u43c0
[15:56:17] The workaround is to manually run extensions/OATHAuth/sql/mysql/tables-generated.sql, or in the Korean bloke's case the equivalent SQLite file
[16:08:09] I have successfully upgraded all 9 wikis to 1.43. Thank you to everyone who made this possible.
[16:10:12] I should probably look for a Phabricator ticket about OATHAuth not applying its database schema changes when upgrading to 1.43, as that topic I linked probably will not reach the right people very quickly. I did see people having the same types of issues with 1.42.1
[16:13:33] \o/
[16:19:12] The talk topic on OATHAuth claims to be tracked in https://phabricator.wikimedia.org/T371849 but that bug report is vastly different from the situation the Korean bloke and I worked our way around. Then again, it may be that the updater works correctly and our wikis had just skipped some step in an earlier upgrade, which caused this upgrade to fail with errors about the updater not finding the right schema for OATHAuth
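A sketch of the workaround described above, using MediaWiki's sql.php wrapper so the file is applied to whichever database backend the wiki is configured for; the wiki root path is a placeholder, and on SQLite the file under extensions/OATHAuth/sql/sqlite/ would be used instead:

    # Apply OATHAuth's table definitions manually, then re-run the updater
    # (wiki root is a placeholder; pick the sql/ subdirectory matching the
    # database backend in use).
    cd /var/www/wiki
    php maintenance/sql.php extensions/OATHAuth/sql/mysql/tables-generated.sql
    php maintenance/update.php --quick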
[16:24:49] uh i think this update might be in a bad loop. i dumped the sqlite database at two points during the still-running update, and it seems to be populating the "comment" table with many, many identical rows.
[16:24:59] i got ~600000 rows of the same three entries: https://dpaste.org/nYBVX
[16:59:49] hannes_: this indeed looks wrong. The idea of the comment table is to store distinct comments. You shouldn't have duplicate values there
[17:10:51] yikes =(
[17:15:03] i assume it has migrated all valid comment rows already; what would happen if I Ctrl-C now?
[17:15:31] there is still stuff to do for update.php, the wiki currently says "General error: 1 no such table: actor"
[17:24:47] I moved everything from images/thumb to another location, as I was told by Reedy that this is safe. After that, _some_ of the thumbnails are being recreated (new directories show up), but not for all images, and this isn't specific to images from Instant Commons or locally uploaded images: both kinds either show up in the page or don't. Clicking on a broken thumbnail takes me to the correct image page, and the image there is ok
[17:26:12] I have $wgGenerateThumbnailOnParse = true; in my LocalSettings.php
[17:27:36] Here are some example pages with missing and properly displayed thumbnails: https://develop.consumerium.org/wiki/Technologies and my userpage https://develop.consumerium.org/wiki/User:Jukeboksi. The Main Page is loading all thumbnails ok.
[17:30:28] The apiThumbCacheExpiry variable is set to 86400 seconds. If anyone has any pointers as to how to fix the situation, it would be much appreciated. The images/thumb folder had accumulated 5.5 GB, so I needed to get rid of it (or at least mv it out of the way to another location). Cheers.
[17:31:54] doing a null edit doesn't fix it
[17:59:53] yuker: those thumbnail URLs are redirecting to the main page. Did you add a rule on your server that redirects to the main page on a 404 error?
[18:04:19] Vulpix: Not as far as I remember. What do you mean by redirecting? If I click one of the "broken" thumbnails, it takes me to the image page where the image is displayed correctly
[18:08:35] yuker: right-click on the broken thumbnail and select "open image in a new tab" or similar
[18:10:00] they are not redirecting for me. I am logged in, could this be what is causing the different behavior?
[18:10:15] let me fire up another browser to check
[18:11:26] nope. I do not see any redirecting on right-click + open in new tab. it just takes me to the image page as expected
[18:12:03] you need to open the image URL in the browser, not the link
[18:12:13] ah ok
[18:12:37] ah yes, now I see
[18:13:13] apparently, it fails for images from Commons
[18:15:39] not only for images from Commons, and not in all cases. could the filetype have something to do with this?
[18:20:35] editing a page and doing a preview should regenerate those, or at least that's what I'd expect
[18:20:57] forcing a thumbnail with thumb.php works, at least (for local images, not Commons ones)
[18:21:31] I will try to edit and preview
[18:22:57] edit + preview fixed exactly one thumbnail on the page https://develop.consumerium.org/wiki/Technologies
[18:23:07] maybe you'll need to clear the cache entirely, in case MW saved "this thumbnail already exists, don't try to generate it". Depending on your configuration, it may be memcached, Redis, or the database (objectcache table)
[18:23:35] very good point. I'll restart memcached
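A sketch of that cache-clearing step, plus a direct thumb.php request to confirm thumbnail generation works again afterwards; the service name, script path, file name and width are all placeholders:

    # Restart memcached so any stale 'this thumbnail already exists' entries
    # are dropped (service name depends on the distribution).
    sudo systemctl restart memcached

    # Ask thumb.php for one thumbnail directly to check that generation works
    # again (script path, file name and width are placeholders).
    curl -I 'https://example.org/w/thumb.php?f=Example.jpg&width=300'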
[18:23:58] should I change the cache epoch? that was useful some other time
[18:24:27] restarting memcached should suffice, I think
[18:25:08] although with $wgGenerateThumbnailOnParse = true; you'll need to manually edit every page for it to generate the missing thumbs
[18:25:28] but it should at least try to generate them
[18:28:43] Yes. You solved it, Vulpix. I had forgotten to restart memcached, as it did not occur to me that memcached could be the reason why the thumbnail regeneration didn't work. Thank you once again!
[18:32:15] yw!
[19:04:07] is there an official way to migrate a wiki from sqlite to postgresql without fully starting from scratch?
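There does not seem to be a supported in-place conversion; the route usually suggested is an XML dump from the old wiki imported into a fresh install configured for PostgreSQL. A rough sketch with placeholder paths; note that user accounts, preferences and logs are not carried over by an XML dump:

    # Export all revisions from the old (SQLite-backed) wiki.
    cd /var/www/oldwiki
    php maintenance/dumpBackup.php --full > /tmp/wiki-full.xml

    # Import into a new wiki that was installed against PostgreSQL.
    cd /var/www/newwiki
    php maintenance/importDump.php /tmp/wiki-full.xml

    # Re-import uploaded files from the old images directory, then rebuild
    # recent changes so the imported revisions show up.
    php maintenance/importImages.php --search-recursively /var/www/oldwiki/images
    php maintenance/rebuildrecentchanges.php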