[04:11:11] <… ...> Peace be upon you, and the mercy and blessings of God
[04:25:31] Petscan is down, with a "This web service cannot be reached. Please contact a maintainer of this project." error. Is there an easy way of finding out what the problem is? Not being the maintainer, I mean.
[04:29:06] It's back up, but my question remains :) (re @ederporto: Petscan is down, with a "This web service cannot be reached. Please contact a maintainer of this project." error. Is there an ea...)
[09:05:22] I hope it's not related to this question
[09:05:23] https://wikitech.wikimedia.org/wiki/Talk:News/Cloud_VPS_2024_Purge (re @ederporto: Petscan is down, with a "This web service cannot be reached. Please contact a maintainer of this project." error. Is there an ea...)
[09:05:34] https://wikitech.wikimedia.org/wiki/News/Cloud_VPS_2024_Purge
[11:40:06] I am currently trying "webservice --backend=kubernetes python3.9 restart" on Toolforge but get an error
[11:40:55] Hold on. I found my error. I logged out one step too many
[11:41:21] I got it restarted!
[15:15:17] been a while but our favorite bot went down yesterday bd808 :)
[15:15:41] would you mind giving it a bit of a jump start ..hehe
[15:33:47] stemoc: you are going to have to remind me again. It's a Commons bot, right?
[15:36:16] yes FlickreviewR_2
[15:36:34] yeah my memory is as bad as yours..
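[Editor's note] The 04:25 question ("is there an easy way of finding out what the problem is, not being the maintainer?") can be partly answered by probing the tool's public URL. A minimal sketch, assuming Toolforge tools are served at `https://<tool>.toolforge.org/` and that a stopped webservice returns a page containing the error text quoted above; the function names are illustrative, not an existing API:

```python
# Sketch: distinguish "webservice stopped" from other failures by
# fetching a Toolforge tool's front page. Assumption: a stopped
# webservice serves an error page containing the marker text below.
from urllib import error, request

DOWN_MARKER = "This web service cannot be reached"

def classify(status: int, body: str) -> str:
    """Rough classification of a Toolforge tool's HTTP response."""
    if DOWN_MARKER in body:
        return "webservice down"
    if status >= 500:
        return "server error"
    return "up"

def probe(tool: str) -> str:
    """Fetch https://<tool>.toolforge.org/ and classify the result."""
    url = f"https://{tool}.toolforge.org/"
    try:
        with request.urlopen(url, timeout=10) as resp:
            return classify(resp.status, resp.read().decode("utf-8", "replace"))
    except error.HTTPError as e:  # non-2xx still has a body to inspect
        return classify(e.code, e.read().decode("utf-8", "replace"))
    except OSError:
        return "unreachable"
```

For example, `probe("petscan")` during the outage above would have reported "webservice down" rather than a generic connection error.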
[15:37:41] https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.yifeibot/SAL
[15:40:42] !log bd808@tools-bastion-12 tools.yifeibot `kubectl delete pod flr-6d74b958d9-bgkdw` after reports of FlickreviewR 2 not working on IRC
[15:40:44] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.yifeibot/SAL
[15:41:08] stemoc: {{done}} 56 days of uptime this time around, so not the worst bot at all :)
[15:41:57] yeah hopefully triple digit days next time :)
[15:44:27] I think that beat iNaturalistReviewer, which I also had to restart yesterday
[15:44:52] probably should get around to putting that on Toolforge jobs and writing a health check script
[19:38:04] The query
[19:38:05] SELECT
[19:38:06] COUNT(*) AS count
[19:38:08] FROM
[19:38:09] flaggedpage_pending
[19:38:11] has been displaying outdated results for nearly a day. How to fix that?
[19:42:01] @bd808: Can you look at this?
[19:49:04] I think this affects all wikis using the FlaggedRevs extension. Must be fixed ASAP
[19:55:45] @yetkin: https://replag.toolforge.org/ isn't showing any lag, so whatever it is that you think is out of sync is something other than database lag.
[20:00:43] @bd808: I have already checked the replag. What I meant is not the cause, but the effect :-/
[20:01:25] How to go about determining the root cause? Any WMF developers available to look at it?
[20:01:34] You have not given any data or even named which wiki at this point.
[20:02:06] I have already stated that it is on trwiki (re @wmtelegram_bot: You have not given any data or even named which wiki at this point.)
[20:02:53] You just now did, but not before as far as I can see on the IRC side of this channel.
[20:04:16] If you believe that the flaggedpage_pending table for trwiki has invalid/incomplete data, I think you should start with a Phabricator task explaining what data you are looking at and how you believe that the data shows inconsistency that needs to be addressed.
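[Editor's note] The health check wished for at 15:44 can be as simple as "has the bot edited recently?". A sketch under stated assumptions: the bot's activity is visible as contributions on Commons, the username and idle threshold below are illustrative, and the actual restart would be done however the tool is deployed (e.g. the `kubectl delete pod` from the !log entry above, which Kubernetes answers by re-creating the pod):

```python
# Sketch of a staleness check for a bot, using the MediaWiki action API
# (list=usercontribs) to find its most recent edit timestamp.
import json
import urllib.parse
import urllib.request
from datetime import datetime, timedelta, timezone

MAX_IDLE = timedelta(hours=6)  # illustrative threshold; tune per bot

def is_stale(last_edit: datetime, now: datetime,
             max_idle: timedelta = MAX_IDLE) -> bool:
    """True if the bot has not edited within max_idle."""
    return now - last_edit > max_idle

def last_edit_utc(username: str) -> datetime:
    """Timestamp of username's most recent contribution on Commons."""
    params = urllib.parse.urlencode({
        "action": "query", "list": "usercontribs", "ucuser": username,
        "uclimit": 1, "format": "json",
    })
    url = "https://commons.wikimedia.org/w/api.php?" + params
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    ts = data["query"]["usercontribs"][0]["timestamp"]  # e.g. "...T15:40:42Z"
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))
```

Run from a scheduled Toolforge job, `is_stale(last_edit_utc("FlickreviewR 2"), datetime.now(timezone.utc))` going true would be the trigger to alert or restart, instead of waiting for someone to notice on IRC after 56 days.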
[20:05:22] See this: https://t.me/wmcloudirc/69197 (re @wmtelegram_bot: You just now did, but not before as far as I can see on the IRC side of this channel.)
[20:05:23] flaggedrevs is always a hot potato for developers. Nobody likes touching it. A.mir1 has done some work to try and make flaggedrevs less horrible on the database servers in the past months though
[20:06:03] ah, the "on trwiki" at the end never made it to IRC
[20:07:20] The data I want to see there is what is on the wiki (https://tr.wikipedia.org/w/index.php?title=%C3%96zel:BekleyenDe%C4%9Fi%C5%9Fiklikler&namespace=&size=1&offset=20240506125218&limit=100) (re @wmtelegram_bot: If you believe that the flaggedpage_pending table for trwiki has invalid/incomplete data I think you should start with a...)
[20:08:19] there is an inconsistency between the live wiki and the replica database, which causes our tools to malfunction
[20:10:09] replication is working, so it seems more likely that there is a logic bug of some sort causing the difference in counts
[20:10:55] I suppose it could be that the trwiki replica that is feeding the wiki replicas its data is somehow detached from the rest of the cluster's state
[20:13:41] it used to work fine. It stopped approx. 22-23 hours ago and the db has displayed the same number (6031) since then
[20:14:01] there have been no changes (increase/decrease) at all
[20:19:08] T365568 says that that table is now dead code @Yetkin
[20:19:09] T365568: Drop flaggedpage_pending from production - https://phabricator.wikimedia.org/T365568
[20:19:31] See also https://gerrit.wikimedia.org/r/c/mediawiki/extensions/FlaggedRevs/+/1025821 and T277883
[20:19:33] T277883: Drop all low-use and unused features of FlaggedRevs to make it more maintainable - https://phabricator.wikimedia.org/T277883
[20:21:01] I would guess that means that your query stopped working when the new code hit trwiki yesterday
[20:31:23] @bd808: Thanks for the info.
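[Editor's note] The symptom at 20:13, the same count (6031) for 22-23 hours, is easy to detect automatically if the tool records each value it reads from the replica. A small sketch of that bookkeeping; the sampling itself is assumed to happen elsewhere (the tool already runs the query 100+ times a day), and only the "how long has this value been frozen?" logic is shown:

```python
# Sketch: given (timestamp, count) samples in chronological order,
# report how long the most recent value has stayed unchanged. An
# alert would fire when this exceeds some expected update interval.
from datetime import datetime, timedelta

def unchanged_for(samples: list) -> timedelta:
    """Duration the latest sampled value has been frozen."""
    last_t, last_v = samples[-1]
    frozen_since = last_t
    for t, v in reversed(samples[:-1]):
        if v != last_v:
            break  # value changed here; freeze started at the next sample
        frozen_since = t
    return last_t - frozen_since
```

Fed the trwiki history above, this would have flagged the frozen 6031 long before anyone had to ask on IRC, and a frozen count with zero replication lag points at exactly the kind of upstream change (the table going dead) that turned out to be the cause.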
I think there should be announcements before such changes take place, for future reference
[20:33:36] I don't disagree, but I have no control over such things happening
[20:35:31] I doubt that A.mir1 was deliberately keeping this information from the technical community. I think it is more likely that he didn't know that anyone was using that table as part of a workflow outside of MediaWiki.
[20:36:57] a lot of things change in any given week. it is sometimes hard to guess who needs to be told about what. and so many things change that it is really tricky to think about telling everyone about everything
[20:54:46] As for MediaWiki's workflow, I would prioritize information sharing, as one can easily see whether a specific table is being used or not. I have a tool that queries that table 100+ times a day 😀 (re @wmtelegram_bot: a lot of things change in any given week. it is sometimes hard to guess who needs to be told about what. and so many thi...)
[21:43:00] We don't keep any analytics about what queries flow through the wiki replicas. It would be technically possible I guess, but generally not useful for the amount of storage it would consume.
[21:45:33] Tools that work from the wiki replicas are always going to be fragile. We don't try to break things there purposefully, but MediaWiki is going to evolve, and users of a redacted copy of a copy of the live MediaWiki data are not likely to be able to hold back that change.
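[Editor's note] One defensive habit follows from that closing point: a replica tool can ask `information_schema` whether a table still exists before querying it, turning a schema drop like T365568 into a clear alert instead of a silently frozen count. A sketch; `cursor` is any DB-API cursor connected to the replicas, and the schema/table names in the usage example are the ones from this conversation:

```python
# Sketch: check table existence on a wiki replica via information_schema,
# using %s-style DB-API parameter placeholders (as in e.g. pymysql).
EXISTS_SQL = (
    "SELECT 1 FROM information_schema.tables "
    "WHERE table_schema = %s AND table_name = %s"
)

def table_exists(cursor, schema: str, table: str) -> bool:
    """True if the table is still present on the replica."""
    cursor.execute(EXISTS_SQL, (schema, table))
    return cursor.fetchone() is not None
```

For example, `table_exists(cur, "trwiki_p", "flaggedpage_pending")` before running the `COUNT(*)` query would have distinguished "the table is gone" from "the count has not moved".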