[00:06:14] thanks Reedy, I'm still getting https://dpaste.org/ktkc
[00:07:32] >in /home/coronaun/myuk/workflow/policy/LocalSettings.php on line 179
[00:07:34] What is line 179?
[00:07:43] it sounds like you've left require_once ''; or something
[00:08:18] 178 and 179 are, respectively: require_once
[00:08:19] wfLoadExtension( 'FlaggedRevs' );
[00:08:54] you don't need the require_once at all
[00:08:57] just wfLoadExtension( 'FlaggedRevs' );
[00:09:32] oh fair enough - should it be in a separate area from the ones it detects by default? :) That's what the script suggests anyway :)
[00:09:43] I just took the require_once from those old instructions!
[00:10:40] can a user delete their mediawiki.org account if they've not made any edits? is there a policy? they've signed up by mistake
[00:10:55] accounts cannot be deleted
[00:11:20] right. is there a written page about it that I could pass on?
[00:11:36] They can request the account be renamed VanishedUser XXXXX if desired
[00:12:07] gry: https://en.wikipedia.org/wiki/Wikipedia:Username_policy#Deleting_an_account
[00:12:39] they've remarked they made zero edits
[00:13:47] There are still log entries and such that the creation is linked to; we can't do deletions, as mentioned
[00:13:47] ahh Reedy - I've obviously messed up somewhere here: FlaggedRevs is not compatible with the current MediaWiki core (version 1.35.1), it requires: >= 1.36.0.
[00:14:36] p858snake: ok, thanks
[00:14:37] https://www.mediawiki.org/wiki/Special:ExtensionDistributor/FlaggedRevs
[00:17:01] Reedy: how do you know what version of MediaWiki you are running, as well, please? Also, what is your suggested route - I presume I'm going to need to somehow remove the existing one - BUT, that said, that error seems to suggest that the latest MediaWiki core that I downloaded at the same time is not up to date enough, which is quite confusing. Thanks
[00:18:53] tpanarch1st_: https://www.mediawiki.org/wiki/Extension:Approved_Revs might also be a simpler extension, depending on the features you want from FlaggedRevs
[00:19:32] FlaggedRevs is used at a few Wikimedia wikis, and is currently supported.
[00:19:43] It's being stripped of "features" as we speak
[00:19:49] sure p858snake, I'm definitely open to exploring that. Essentially, we are p** poor and were looking for something as close to GitHub as possible for document version control
[00:19:49] Special:Version will tell you. But so does that error message
[00:20:05] 1.36 isn't released yet, so 1.35 is what you want
[00:20:13] You need to pick the version of the extension that matches your version of MW
[00:20:18] Reedy, link to progress?
[00:20:30] Reedy, of the strippage of features, that is
[00:20:33] https://github.com/wikimedia/mediawiki-extensions-FlaggedRevs/commits/master
[00:20:38] pretty much every commit by ladsgroup on there :P
[00:20:58] gry: https://phabricator.wikimedia.org/T185664
[00:20:59] https://phabricator.wikimedia.org/T277883
[00:23:10] p858snake: which would you say we should role with? we are looking to be able to have recommended edits that we subsequently check and "yay or nay" - bells and whistles would allow the older versions to still be stored
[00:23:19] roll not role, too much acl!
[00:25:47] tpanarch1st_: I haven't really used either
[00:26:54] ah :) I mean, it might be a little unorthodox to ask, but would either fit in your opinion, or is using a wiki a bit like overkill?
[00:28:45] Reedy: sorry, not ignoring you either, and I appreciate your help too!
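(For anyone finding this later: a minimal LocalSettings.php sketch of what Reedy is describing above. The commented-out path is the legacy style from the old instructions, shown only for contrast.)

```php
// LocalSettings.php — modern extension registration.
// wfLoadExtension() reads the extension's extension.json itself,
// so no require_once line is needed alongside it.
wfLoadExtension( 'FlaggedRevs' );

// The old-style loading that legacy instructions described; don't
// keep both this and wfLoadExtension() for the same extension:
// require_once "$IP/extensions/FlaggedRevs/FlaggedRevs.php";
```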
If you have any ideas, please feel free to throw your hat in the ring!
[13:41:41] Hey, I think I broke something. I get a load of weird Unicode question marks when I try to log in to my clean install of MediaWiki?
[13:43:09] https://imgur.com/a/alHGCj0
[14:21:23] Benji79: This seems to be another instance of this bug: T235554
[14:21:23] T235554: MediaWiki::outputResponsePayload seemingly causes net::ERR_HTTP2_PROTOCOL_ERROR 200 and compression issues in 1.35 - https://phabricator.wikimedia.org/T235554
[14:22:01] The only solution so far is to actually hack MediaWiki core, commenting out one line, as explained in the red box
[15:27:21] Hi - for the OOUI DateInputWidget (currently used in UploadWizard and maybe elsewhere), is there a way to change the first day of the week shown in the calendar input from Sunday to other days (since in many countries it's Monday or even Saturday)?
[15:29:00] Krinkle: could you chime in on https://gerrit.wikimedia.org/r/c/mediawiki/core/+/673534/ ? It's going in the opposite direction from what you suggested, but the analysis on T277795 seems to indicate that it's what we need.
[15:29:01] T277795: User not found by actor ID: [id] - https://phabricator.wikimedia.org/T277795
[15:31:03] Krinkle: I'm not quite sure I understand the failure mode. Is it possible for one job to see a stale repeatable-read snapshot from a previous job?... If not, how can the ID be missing from master?
[17:13:25] Generally speaking, is there a better place than here to ask OOUI-related questions?
[18:50:29] Yaron: #wikimedia-editing perhaps
[18:51:11] duesen: Hm. Is this about the insert code path for finding by user id/name, or the find code path for a specific actor ID?
[18:51:46] I think it's the latter, and I don't see why a master query could ever be needed there, since if you got the actor ID somehow, that means the row should be visible using the same means (e.g. replica query).
[18:52:34] Krinkle: a subsequent POST request (API or runSingleJob) tries to render an RC entry (because Special:RecentChanges is being transcluded), and fails to find the actor name for the actor ID in the recentchanges row.
[18:52:46] ...even after falling back to master, apparently.
[18:53:09] which seems impossible. Curiously, it only happens when transcluding RC, not when viewing it.
[18:53:51] By now, I'm pretty sure that patch won't help... but it's the only thing I could think of.
[18:54:00] duesen: if it queries a recentchanges row from a replica, and finds an actor ID, then it's going to find the actor row pointed to as well on that same replica, no? They're in the same transaction
[18:54:38] it should. but it doesn't.
[18:56:43] and we have examples where it was able to find it upon checking the master in such cases? Or was it just a bad insert altogether?
[18:56:57] No, it doesn't find it on master either
[18:57:07] But when investigating, it's there
[18:57:10] right, then I'd be looking for a bad insert.
[18:57:11] the insert didn't fail.
[18:57:25] e.g. not in the same transaction, part of it rolled back or cancelled.
[18:57:58] ok, if it didn't fail, then that still means they weren't in the same transaction.
[18:58:05] and thus maybe replicated later?
[18:58:21] but we are falling back to master. we should find it then, right?
[18:58:25] given we allow lazy creation of actors on GET requests right now, it wouldn't surprise me if that logic used a deferred commit or something
[18:59:09] we don't typically do lazy creation of actors on GET requests.
It's not entirely impossible, but I can't think of a case where it would happen
[18:59:11] maybe, but at that point it's two or three failures away from the source; for all I know, that's not the same actor row anymore, but a re-creation
[18:59:27] yes, I was starting to wonder that...
[18:59:44] if the insert DID fail, the same auto-increment ID would be re-used for a later insert, right?
[19:00:26] Oh... I'm starting to remember that we hit a similar case with revision IDs... they got written into a cache already, then something failed, the ID was re-used, and things got really confusing.
[19:00:38] I think so, yes; if a transaction was never committed, it seems legal for the DB to allocate that differently at some point. I don't know if MariaDB does or doesn't.
[19:01:30] I can start to see a scenario where e.g. something fatals, but then a deferred update maybe keeps interacting with the same process cache.
[19:02:08] I don't know if we have a general/established way of dealing with that
[19:02:14] That would mean we are trying to write an RC record for something that actually failed to happen
[19:02:28] That points to a bigger problem.
[19:02:42] But still... why then only during RC transclusion?!
[19:02:42] Would deleting my "import" logs be possible?
[19:02:56] but what comes to mind is that perhaps DB callbacks can be used to clear caches accordingly.
[19:03:08] I think we use the same logic for the RC entry insert.
[19:03:32] hm... but if something failed, why does the RC entry itself get written?
[19:03:42] duesen: maybe look if any part of the RC code uses something other than a replica connection if we're in a POST request with pending writes, e.g. when parsing wikitext.
[19:03:54] Krinkle: thanks for the pointer.
[19:04:07] I'm starting to get a clearer picture now though.
[19:04:19] Krinkle: why would action=parse have pending writes?
[19:05:09] But that was only one occurrence. Most are in RefreshLinks. That does write, but I'd expect that to happen later.
[19:05:14] duesen: I don't know about that one per se, but POST requests that parse wikitext are usually edits or jobs, which generally do have pending writes.
[19:05:32] duesen: RC insert uses a callable deferred with dbw cancellation hooked up
[19:05:37] so that scenario should not happen afaik
[19:05:40] But yes, if we keep using the actor ID after the insert was rolled back, that would result in confusion.
[19:06:23] I don't quite understand the scenario yet, but we should definitely clean up the cache after a rollback. I will work on that. Perhaps that will fix the error.
[19:06:52] We may have some misattributed RC entries... though I have the suspicion that these would be RC entries for things that actually failed to happen.
[19:06:57] duesen: dbw->onTransactionResolution, trigger_rollback -> clear process cache
[19:07:20] Krinkle: yea, thanks.
[19:09:09] duesen: another scenario might be that, if the RC queries sometimes use a master conn (they shouldn't), then maybe it's possible it is showing edits that haven't been committed yet, e.g. the current edit, or a recent edit from the same PHP proc if it was a batch job or something. That would be fixable by making sure SpecialRC never uses the master DB, regardless of HTTP method or hasWrites etc.
[19:09:32] but there are places where we automagically query (lb->hasWrites ? master : replica) which we'd want to avoid if it somehow got in there
[19:12:14] That scenario would be fixed by having ActorStore fall back to master, no?
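(A minimal sketch of the onTransactionResolution pattern Krinkle outlines at 19:06: hook a cleanup callback to the write connection so the process cache forgets the mapping if the transaction rolls back. $cache and $name here are illustrative, not actual ActorStore code.)

```php
// Assumes $dbw is a Wikimedia\Rdbms\IDatabase write handle and $cache
// is a MapCacheLRU-style process cache mapping actor name => actor ID.
$dbw->onTransactionResolution(
	function ( $trigger ) use ( $cache, $name ) {
		if ( $trigger === IDatabase::TRIGGER_ROLLBACK ) {
			// The insert never became visible: drop the cached mapping so
			// a later lookup doesn't hand out a dead (or reused) actor ID.
			$cache->clear( $name );
		}
	},
	__METHOD__
);
```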
[19:13:13] Krinkle: --^
[19:15:40] duesen: if we use a master connection for RC data while parsing wikitext, then it should be fixed by not using a master. Using a master would expose uncommitted data in the parser output, which would be wrong.
[19:16:04] such automatic DB selection would not be used for things like that
[19:16:27] falling back in actor store would mask the problem and expose more master queries instead of fewer.
[19:17:05] afaik there should be no need for any currently supported use case to query master from actor store; the revision store use case is quite different because rev IDs are public.
[19:18:17] ...we'd still need to fall back to master when looking up by name, no?
[19:18:23] because names are public
[19:19:39] hm... no. we should just accept the false negative. it does no harm. we'd notice as soon as we try to insert another actor ID for the same name.
[19:23:21] I've just been staring at the command 'php update/maintenance.php' and wondering why it doesn't work for ... longer than I care to admit
[19:25:24] duesen: yeah, if it's absent on replica we can do a single master query in the form of a lazy insert, I guess. That'd be enough, rather than asking the master multiple times by default, which optimises for the wrong thing. This of course requires a process cache to be put in place first, since the most common case of absence will be when we're in a POST request for doing the insert, in which subsequent calls in that same request should indeed get
[19:25:25] it from the cache.
[19:25:35] I think you were proposing two separate process caches or something
[19:25:42] that sounded good to me :)
[19:26:17] Petr has a patch up for that. I didn't look at it yet.
[19:28:02] Krinkle: sorry to interrupt the other discussion, but I just tried to post to #wikimedia-editing, and I don't have permission to post there, just so you know.
[19:30:53] I guess you need a NickServ login
[19:30:56] any login is fine.
[19:31:04] guests were disabled in some channels due to spam
[19:31:45] the team is also in #mediawiki-visualeditor and on the Wikitech-l mailing list
[19:35:40] Krinkle: okay, thanks for those suggestions. I just posted my question at https://www.mediawiki.org/wiki/Talk:OOUI - if nobody responds there, I'll try one of the other places you suggested.
[20:46:50] Krinkle: This made me think that we should perhaps log/warn when a DeferredUpdate is enqueued from within a DB transaction. If that deferred update relies on the outcome of the DB update, and gets enqueued before the transaction is committed, it will operate on bad assumptions about the database state.
[22:24:28] Yaron: you around?
[22:30:00] RhinosF1: yes.
[22:31:05] Yaron: re Cargo, I see no reason not to restrict it fully if it's problematic. It just needs to be handled more cleanly in that case, and creation etc. blocked. Maybe also a script to detect / fix broken ones?
[22:32:24] Are you talking about single quotes in table names?
[22:33:02] Yaron: ye
[22:34:28] Alright. A script seems like unnecessary work... I doubt many people have a single or double quote in any table/field names.
[22:34:56] If they did, they probably would have run into the same problems you did.
[22:35:02] True
[22:35:32] But handling of it should be consistent, even if that's just ensuring they can't be created
[22:40:23] Sure. Disallowing single and double quotes does seem like the best solution...
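(A sketch of the validation RhinosF1 and Yaron settle on at the end: reject single and double quotes in Cargo table/field names at creation time. The function name is illustrative, not actual Cargo code.)

```php
// Hypothetical validator: disallow single and double quotes outright,
// since they break the quoting of generated SQL identifiers.
function isValidCargoIdentifier( string $name ): bool {
	return strpos( $name, "'" ) === false
		&& strpos( $name, '"' ) === false;
}
```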
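(And a rough sketch of duesen's 20:46 idea — warn when a deferred update is enqueued while the primary connection still has an open transaction. This is not how MediaWiki actually implements it; the wrapper name is made up.)

```php
use MediaWiki\MediaWikiServices;

/**
 * Hypothetical wrapper: log a warning when a deferred update is enqueued
 * inside an uncommitted transaction, since the update may act on state
 * that is later rolled back.
 */
function enqueueDeferredUpdateChecked( callable $callable ) {
	$dbw = MediaWikiServices::getInstance()
		->getDBLoadBalancer()
		->getConnection( DB_PRIMARY ); // DB_MASTER on older branches
	if ( $dbw->trxLevel() > 0 ) {
		wfLogWarning( 'DeferredUpdate enqueued inside an open DB transaction' );
	}
	DeferredUpdates::addCallableUpdate( $callable );
}
```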
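(Finally, the lookup order sketched at 19:25 — process cache, then replica, then one lazy-insert round-trip to the primary — might look roughly like this. Method and property names are invented for illustration; this is not the real ActorStore API.)

```php
// Illustrative only — assumes $this->cache is a process cache, and
// findActorIdByName()/insertActor() are hypothetical helpers.
private function acquireActorId( string $name ): int {
	// 1. Process cache: covers repeat calls within the same POST request,
	//    including right after this request inserted the actor itself.
	$id = $this->cache->get( $name );
	if ( $id !== null ) {
		return $id;
	}
	// 2. Replica lookup: correct for anything committed before our snapshot.
	$id = $this->findActorIdByName( $this->dbr, $name );
	if ( $id === null ) {
		// 3. One primary round-trip as an insert-or-fetch, rather than
		//    falling back to the primary on every miss.
		$id = $this->insertActor( $this->dbw, $name );
	}
	$this->cache->set( $name, $id );
	return $id;
}
```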