[07:47:28] Hello, I can't figure out how to open a pull request on Gerrit, any Quick Start guide?
[07:47:40] I cloned the repo and committed locally
[07:51:44] Ani: basically do
[07:51:53] git push origin HEAD:refs/for/master
[07:51:58] I did, I got unauthorized
[07:52:11] fatal: Authentication failed for 'https://gerrit.wikimedia.org/r/mediawiki/extensions/ReplaceText/'
[07:52:13] remote: Unauthorized
[07:52:25] Did it ask you for your password?
[07:52:35] Yes, I assume it's the same as the account I used to log into Gerrit
[07:52:55] It's actually not (to be super confusing)
[07:53:11] In Gerrit, go to your preferences, go to HTTP credentials, and click generate a password
[07:54:40] remote: You need 'Create Change' rights to upload code review requests. ! [remote rejected] HEAD -> refs/for/HEAD (prohibited by Gerrit: not permitted: create change on HEAD)
[07:54:52] That worked for authentication, but now I got this
[07:56:51] you need to do refs/for/master, not refs/for/HEAD
[07:57:22] (It will probably also error on wanting you to install the commit-msg hook to add a Change-Id, but that's really easy to install)
[07:58:08] Yeah, got that one now
[07:58:18] Installing
[08:07:42] https://gerrit.wikimedia.org/r/c/mediawiki/extensions/ReplaceText/+/631706
[08:07:43] Got it
[08:07:59] I've tested the bug and the fix on my installation
[08:14:04] Ani: Hmm, which version of MySQL are you using? I think INTEGER is valid in MariaDB
[08:14:09] MySQL 5.6
[08:14:40] I copied the query and executed it manually
[08:14:45] with INTEGER: `1064 - You have an error in your SQL syntax;`
[08:14:52] with SIGNED it works fine
[08:15:05] And the extension is now working again on my MediaWiki install
[08:17:05] Ah, that explains it, I think I had a newer version when I was testing
[08:17:27] After I fixed it, a colleague pointed me to https://www.mediawiki.org/wiki/Extension_talk:Replace_Text , someone else is having that too
[08:17:40] I'm having other issues with MediaWiki now, everything has slowed down to a crawl
[08:18:12] There's also a CAST to INTEGER function in the main wiki repo, but I couldn't find any use case for it when testing; I've replaced it with SIGNED as well on my install
[08:18:27] I'll submit a change for that too either way
[08:18:55] Maybe the slowness is due to the job queue, try running the runJobs.php script
[08:19:51] My only concern with the patch is what if it breaks Postgres. I don't know why the Postgres docs are so hard to search
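(For reference, a rough sketch of the syntax difference being discussed — the literal values are placeholders, not the actual ReplaceText query:)

```sql
-- Fails on MySQL 5.6 with error 1064: INTEGER is not an accepted CAST target there
SELECT CAST('10' AS INTEGER);

-- Works on MySQL 5.6 (and MariaDB): SIGNED is the portable spelling on that side
SELECT CAST('10' AS SIGNED);

-- The Postgres concern raised above: Postgres accepts INTEGER but has no SIGNED
-- cast type, so a blanket swap could break there.
```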
[08:20:25] I wouldn't know myself, but I hope the person with Postgres who made the breaking commit can share some input on that
[08:20:45] I can't run stuff through the CLI right now; I ran the normal mw-config, tried MediaWiki with all extensions disabled too, some JS parts are not loading, not sure what is going on
[08:21:22] It seems like something is stuck, the error log and trying error_reporting don't show anything. I only found the ReplaceText query error when testing, and I already patched that one, but it's still problematic
[08:21:41] Well, at least the extension works
[08:21:51] Don't worry, Gerrit will run the CI tests for you
[08:21:58] Running CI locally is a PITA
[08:22:13] https://wiki.rpcs3.net/index.php?title=Help:Game_Patches/Main
[08:22:16] for example, here
[08:22:21] stuff was expandable
[08:22:54] it seems JS is not working somehow; we've been upgrading through many versions without an issue, and I can't wrap my head around what's wrong as I can't get an actual error, it just seems to deadlock the script until it reaches the PHP max execution time
[08:24:23] Oh, that is super weird
[08:24:33] I've tried everything I could think of
[08:25:00] I fear something else may have broken with MySQL 5.6, but I can't get any actual output
[08:25:07] As it just seems to deadlock
[08:25:10] Need to get a profiler going
[08:25:28] Ani: if you set $wgJobRunRate = 0; does it still happen? (Probably not a setting you want generally, just something to narrow down causes)
[08:26:09] Yeah, still happening
[08:26:49] Is there any built-in profiling code in MediaWiki? We don't have the recommended PHP 7 profiler installed on our server yet, have to wait on that one I guess
[08:26:57] Would be helpful to see where the code is stalling
[08:27:19] Unfortunately you need to have the PHP profiler thing installed
[08:27:28] there used to be one a long time ago, but it all got removed
[08:27:47] The debug log though might give hints as to what the code is doing, just by looking at what the last debug log entry was
[08:28:44] The reason JS is broken might be that it's waiting for the document to fully load before executing (which basically never happens)
[08:28:51] I'm not sure if I tried the debug log correctly, I added the path line in LocalSettings but no file was created, perhaps I need to create it manually
[08:29:10] Using $wgDebugLogFile
[08:29:12] ?
[08:29:24] It should be created automatically, provided that MW has sufficient rights
[08:29:47] Often PHP is set up so that it doesn't have access to edit files in its own directory, so that might prevent it from creating the log
[08:30:01] I believe I may need to use a relative path here, I don't think the root of my account is the same as the root of the whole server
[08:30:40] you can do $wgDebugLogFile = __DIR__ . '/debug.log'; to put it in the same directory as your LocalSettings.php
[08:31:34] I wasn't sure what __DIR__ would be, makes sense; made it go to my root through a relative path now, let's see
[08:32:13] It's also weird, in that stuff like api.php is not hanging (even if you're accessing the help page) but Special:BlankPage is
[08:32:25] __DIR__ will always be the directory that the current file is located in
[08:32:49] $IP will also be set to the install path of MediaWiki.
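(A minimal sketch of the debug setup being described, as lines added to the end of LocalSettings.php — the log file name is just an example:)

```php
// __DIR__ resolves to the directory LocalSettings.php lives in, so the log file
// is created next to it; PHP must be able to write to that directory.
$wgDebugLogFile = __DIR__ . '/debug.log';

// Mentioned above as a way to rule out the job queue while debugging;
// not a setting you normally want to keep.
$wgJobRunRate = 0;
```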
[08:37:40] Hmm, nothing specific catching my attention in debug.log
[08:37:53] not sure what to look for
[08:38:07] I see requests ending with connection closure
[08:38:32] weird
[08:40:03] Also all calls show with [0s]
[08:41:50] Hmm, when I try to download your site with curl I get
[08:41:52] curl: (92) HTTP/2 stream 0 was not closed cleanly: INTERNAL_ERROR (err 2)
[08:42:00] I wonder if maybe the issue isn't in the MediaWiki layer
[08:42:27] That may be some Cloudflare protection
[08:43:41] curl worked fine here
[08:44:12] `curl -JLO https://wiki.rpcs3.net/index.php`
[08:45:31] Do you get the hanging behaviour when not using Cloudflare?
[08:45:54] I disabled the Rocket Loader and also the cache and it didn't change anything
[08:47:38] I tried disabling all extensions in the settings, tried repairing/optimizing the DB tables
[08:49:18] well, I see a lot of queries just for the main page, not sure if that's normal. It may just be MySQL 5.6 being really slow for some type of query that was added recently
[08:49:45] I'll wait and see if someone finds anything, in the meantime I'll try to get it updated to a more recent version
[08:58:16] I really think it's a webserver- or Cloudflare-level issue. It looks like the response from the server is violating the HTTP spec, which is something I don't think MediaWiki can even do, as that sort of thing is handled by the webserver
[09:00:25] From the browser debugger I can see the reply being HTTP 200, it just takes dozens of seconds to load
[09:01:10] Swapping to desktop IRC
[09:05:20] I disabled every single Cloudflare setting
[09:05:27] It seems to have fixed this
[09:05:30] Damn
[09:05:42] You were correct, this was so weird
[09:06:13] I remember one issue with Cloudflare a long time ago when their Rocket Loader broke our main website; that's why I tested disabling it earlier, but it didn't do anything
[09:06:21] I went ahead and disabled every single thing on the Cloudflare panel
[09:07:54] Well, both the ReplaceText and the weird wiki issues are fixed now, thanks for the help
[09:08:05] I will keep an eye on the commit for comments, need to see regarding Postgres
[09:08:18] I am sure it's correct as far as MySQL goes though
[09:10:43] Not sure what the appropriate talk page to add this to is, but if someone has this infinite load crap: under "Speed" and "Caching" in Cloudflare, disable everything and test (test with development mode on as well)
[09:12:06] Hmm, curl is still reporting that the connection was not closed cleanly, but the actual website is loading properly now
[09:12:17] * bawolff *shrugs* that's a really weird issue
[09:25:36] Bisected, the offending settings seem to be "Brotli" for the infinite load, and "Rocket Loader" for JS not loading
[09:26:17] Weird that it only triggered this immediately after I upgraded to 1.35 (and remained even after a few days), CF settings really are a mystery
[09:33:10] https://support.cloudflare.com/hc/en-us/articles/200168056 - Rocket Loader: If you have a Content Security Policy, you will need to update it to allow Rocket Loader to run
[09:33:40] this makes total sense, since Rocket Loader is mangling the HTML and JS generated by MediaWiki
[09:48:17] Weirdly enough, after many version upgrades, this is the first time it caused the issue on our wiki
[09:48:30] But I will keep it in mind now
[09:57:18] CSP stuff has had some work going into 1.35
[09:57:25] So not really a complete surprise
[10:18:26] You would probably get errors in the browser console if CSP was the issue
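(As an aside, one quick way to see whether a CSP header is actually being served — just an illustrative curl invocation, not something that was run in the channel:)

```sh
# Fetch only the response headers and look for a Content-Security-Policy header;
# if one is present, Rocket Loader's injected script would need to be allowed by it.
curl -sI https://wiki.rpcs3.net/index.php | grep -i content-security-policy
```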
[12:15:41] Reedy: I guess I will have to add some more debug logging to includes/libs/http/MultiHttpClient.php to get full tracebacks, like maybe adding a check for CURLPIPE_HTTP1
[12:15:51] cannot catch it properly yet
[17:51:54] So I'm trying to update from 1.29.1 to 1.35 here, and I've run into bug after bug with my database. The latest one involves this error: https://pastebin.com/bqLyss6D which I was able to find a bug report page for here: https://www.mediawiki.org/wiki/Topic:Vdfrw1pgsuw01icy
[17:52:24] The solution the author used, of inserting dummy data into the text table, did not fix my problem. The updater continues to report a failure and will not continue
[17:54:01] If anyone has any ideas I'm all ears. I've been stuck on this for days.
[17:55:37] Did the error change when you inserted dummy data?
[17:57:40] jfolv: Have you tried running the findBadBlobs.php maintenance script?
[17:58:51] No change after the dummy data insertion, and no, I haven't tried that script. I'll give it a shot.
[18:06:31] That's giving me a lot of "Internal error: couldn't find slots for rev #"
[18:07:10] large blocks of those, actually
[18:08:17] despite all that, it tells me it found 0 bad revisions
[18:11:16] That script is new in 1.35... So it's possible it doesn't catch all edge cases
[18:11:23] would be good to get that a bit better documented in a bug report
[18:13:29] So you think it's finding them but not fixing them?
[18:14:37] could be a multitude of things
[18:15:20] The person who wrote it isn't about, and is probably gone for the weekend now
[18:16:15] In the meantime, is there any possible way to work around this?
[18:16:45] Well, that is kinda what the script is supposed to do...
[18:17:36] https://phabricator.wikimedia.org/T205936
[18:19:53] It seems odd that doing the DB update like mentioned in that thread you linked didn't help
[18:20:57] if there are references to blobs with id=0, things were already screwed from before the upgrade. The upgrade script is just halting because it now needs to do some modifications and fails when processing those
[18:23:01] How can I check for that?
[18:23:08] rev_id = 0?
[18:24:53] according to https://phabricator.wikimedia.org/T205936#4642641, rows where old_text ends in /0
[18:25:12] Alright, let me run a query
[18:26:54] Looking at your error message, that particular case (tt:399632) should be old_id = 399632
[18:29:03] Well, looking for that one (select * from text where old_id = 399632;) shows an empty set.
[18:33:28] Maybe I should try inserting the dummy data there?
[18:42:08] hmm, I wonder where the tt: comes from, then... I have seen those identifiers before
[18:43:01] @Vulpix Thank you for clarifying that was the old_id
[18:43:09] I inserted dummy data there and the updater is continuing now
[18:43:17] I'd been trying to insert the data into the wrong place
[18:43:33] Text Table == tt
[18:44:09] While we're at it, is there a known bug with cleanupUsersWithNoId just not working?
[18:44:21] I got a few of the errors telling me to run it, but it didn't fix them.
[18:44:34] It doesn't halt the script though, so I wasn't terribly worried about it
[18:44:36] Ahh, it should be referenced by the content table: https://www.mediawiki.org/wiki/Manual:Content_table#content_address
[18:46:09] This table doesn't exist in 1.29. That means it got populated during the upgrade, inserting wrong ids from the revision table
[18:46:22] Yikes, that's a bit worrying
[18:46:46] What are the implications of that? Corruption down the line?
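(A rough sketch of the checks and the dummy-row workaround described above — 399632 is the id from this particular error message, the flags/content values are placeholders, and this assumes text is stored locally rather than in external storage:)

```sql
-- Does the text row referenced by tt:399632 exist at all?
SELECT old_id, old_flags FROM text WHERE old_id = 399632;

-- The check mentioned from T205936#4642641: rows whose old_text ends in /0
SELECT old_id FROM text WHERE old_text LIKE '%/0';

-- The "dummy data" workaround: create a placeholder row so the updater can continue
INSERT INTO text (old_id, old_text, old_flags) VALUES (399632, '', 'utf-8');
```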
[18:46:48] But they should already have been screwed up from before the update
[18:47:19] That wouldn't surprise me, considering how long this wiki's been around
[18:47:26] This means some revisions had a wrong rev_text_id, and were already inaccessible before the upgrade
[18:48:00] you may query the revision table for rows with rev_text_id = 399632
[18:48:22] Assuming the upgrade hasn't deleted that column
[19:00:12] Well, it does show a revision when I run that query
[19:23:56] that revision was apparently broken before the upgrade
[20:53:41] We don't, usually
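(For reference, a sketch of the revision-side lookup suggested above — again with 399632 as the example id, and assuming the pre-1.35 rev_text_id column is still present:)

```sql
-- Which revision(s) pointed at the missing text row before the upgrade?
SELECT rev_id, rev_page, rev_timestamp
FROM revision
WHERE rev_text_id = 399632;
```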