[00:20:23] [1/2] @cosmicalpha
[00:20:23] [2/2] whats this in the config settings of CW of marking a wiki as experimental?
[00:21:17] That is something that should be removed from CW and moved solely to a mw-config implementation; it is not used in CW at all.
[00:22:06] Is it used anywhere at all lol
[00:22:23] Formerly in mw-config for checking $cwExperimental
[00:22:30] for enabling certain things
[00:22:42] But probably should remove
[00:22:50] If we still need it we can move to hooks
[00:22:58] Hooks support everything now lol
[00:25:08] basically the Phorge philosophy
[03:14:49] @cosmicalpha system is still trying to reboot, what RAM value should I set it at?
[03:16:05] Give it another 20GB for now I guess
[03:16:36] db181 is still hung...
[03:16:47] ugh
[03:16:49] systemd keeps saying it received a SIGINT but doesn't restart
[03:16:57] and mysql complains writing to cache takes over 300 seconds
[03:17:38] hmm
[03:17:39] I don't want to hard restart as mysql still seems to be running queries
[03:18:06] Yeah don't do that
[03:18:13] Even if we have to wait for a while
[03:18:27] I would rather stay down for a while longer than risk data corruption
[03:18:34] yep
[03:20:28] load goes down and then shoots right back up
[03:20:48] Can you kill the backup script somehow?
[03:21:04] ssh is ded
[03:21:10] maybe through qm
[03:21:13] what about from cloud
[03:22:35] [1/2] ```[root@cloud18.wikitide.net:~]# qm guest exec 102 "killall wikitide-backup"
[03:22:35] [2/2] No QEMU guest agent configured```
[03:22:47] were the VMs not installed with qemu-agent enabled?
[03:23:00] I have used it before so should...
[03:23:14] on db181?
[03:23:35] no
[03:27:03] finally restarted
[03:27:11] We should set limitcpu or something so mysql kills itself if it hits a certain percentage
[03:28:46] I agree
[03:32:08] @agentisai how are we on RAM usage on cloud*
[03:33:50] isnt MySQL killing itself the *issue* here?
[03:34:05] No the issue is that it wasn't killing period
[03:34:09] Not even rebooting
[03:34:32] [1/4] 75% on cloud15
[03:34:32] [2/4] 65% on cloud16
[03:34:32] [3/4] 60% on cloud17
[03:34:32] [4/4] 50% on cloud18
[03:34:53] While yes, theoretically it may go down sometimes if it kills at a certain CPU usage, it would be a lot less downtime and easier to resolve without things hanging, I think.
[03:34:56] heh, at least it wasn't db171 that was acting up
[03:35:10] because then the whole farm would have gone down
[03:35:14] Cool. Planning for more RAM next year.
[03:35:18] inevitably when mem* once again fills up, cloud16 and cloud17 will be at around 85% to 90%
[03:35:26] here herehere here
[03:35:35] mem is empty as it crashed recently
[03:35:40] silly text box
[03:36:05] cosmicalpha: just download more
[03:36:13] we need more RAM soon
[03:36:32] or figure out how to reclaim some RAM
[03:36:45] How are we on disk
[03:37:33] most servers are on 70%
[03:37:39] but cloud18 is at 80% usage
[03:37:46] hm
[03:38:00] How fast is that going up
[03:38:34] 200GBs per month it seems
[03:38:37] per server
[03:38:48] uhhh
[03:38:52] that's unfeasible...
[03:39:31] across all servers?
[03:39:39] yes
[03:39:42] WHAT ON EARTH
[03:39:49] thats
[03:39:56] 800GBs per month are added to our servers across all 4 clouds
[03:39:56] THATS NEARLY A CLOUD SERVER A MONTH
[03:39:59] I don't know how
[03:40:01] WHAT
[03:40:05] HOW
[03:40:20] wait thats ram
[03:40:21] I think we will need to reconsider the Dormancy policy changes if that is real
[03:40:39] Hm
[03:40:49] Alpha, do we delete files of deleted wikis?
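The `No QEMU guest agent configured` error above means Proxmox has the agent device disabled for that VM (or the agent isn't running inside the guest). A minimal sketch of fixing it, assuming VM ID 102 and a Debian guest; the backup-script name is taken from the transcript:

```
# On the Proxmox host: enable the guest agent device for VM 102
# (takes effect on the next full stop/start of the VM)
qm set 102 --agent enabled=1

# Inside the guest (Debian assumed): install and start the agent
apt-get install -y qemu-guest-agent
systemctl enable --now qemu-guest-agent

# After that, host-side exec works even when SSH is dead:
qm guest exec 102 -- killall wikitide-backup
```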
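On the "limitcpu" idea: systemd can cap a unit's resources directly. Note that `CPUQuota=` throttles rather than kills, while `MemoryMax=` has the kernel kill the unit's processes once the limit is exceeded. A sketch only; the unit name `mariadb.service` and the numbers are assumptions:

```
# Throttle MySQL to 6 cores' worth of CPU and hard-cap its memory.
# set-property persists the change as a drop-in; add --runtime to make it temporary.
systemctl set-property mariadb.service CPUQuota=600% MemoryMax=48G

# Equivalent drop-in file, /etc/systemd/system/mariadb.service.d/limits.conf:
#   [Service]
#   CPUQuota=600%
#   MemoryMax=48G
```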
[03:40:54] Yes
[03:40:56] https://cdn.discordapp.com/attachments/1006789349498699827/1296316995012136961/image.png?ex=6711d8c8&is=67108748&hm=17e5e4346d469943466423e5e5c4217aa5ca0cdad32db350c9de4f00f46679f3&
[03:40:57] https://cdn.discordapp.com/attachments/1006789349498699827/1296317000573915206/image.png?ex=6711d8c9&is=67108749&hm=449dd96cfd1ac1d538ecee0c499452e1f889808dd2d3ae495e90e36cb9d8e90f&
[03:41:12] What in avernus
[03:41:32] That is quite impossible to use so much per month
[03:41:41] Some wikis do have a lot of larger files
[03:41:53] I suspect much of the usage comes from swift
[03:42:00] could be
[03:42:14] If we hit a certain number there will have to be some actions taken to limit that.
[03:42:18] my plan for infrastructure accounted for issues like this
[03:42:21] one avenue is more disks
[03:42:26] another is moving to R2
[03:42:27] and yeah we stopped maintaining a resource usage list ala https://issue-tracker.miraheze.org/P386
[03:42:32] We need to be more proactive IMO
[03:42:45] What wikis are using the most file space
[03:43:11] I will update the resource usage limits on Tuesday probably.
[03:43:26] I know i recently imported a 13.6 GB image dump
[03:43:34] there may be larger ones though
[03:43:51] If any are file host wikis that violate ConPol, we can purge them
[03:44:55] If there are large ones like that which are closed, let me know so we can immediately purge
[03:44:55] and 13.6 GB for the record is three times the size of MH commons which is 4 GB
[03:45:19] rest in peace miraheze commons
[03:45:27] you won't really be missed
[03:45:38] I have seen wikis use 200GB of images
[03:45:46] AVID?
[03:45:51] Bo
[03:45:52] no
[03:46:08] WMF commons for comparison is 564 TB
[03:46:14] Ask Stewards for any closes of file hosts they remember
[03:46:36] our biggest wiki was that one feet wiki with like 700GBs of pictures (including thumbnails)
[03:46:38] we could also make a script to dig through deletion logs
[03:46:51] this may be what I was thinking.
[03:46:54] soft deleted a while ago
[03:47:00] was it hard nuked?
[03:47:05] yes
[03:47:14] good
[03:47:14] I wonder if there is a way to query the sizes of swift containers
[03:47:16] the file host provision of the CP targeted them
[03:47:41] I need to make a script to compare swift containers with DBs and find containers that have no DB
[03:47:52] That would help
[03:47:54] Just in case there are some that can be purged
[03:48:23] I'll talk to stewards in the morning for as many file host deleted wikis as they recall
[03:48:39] and forward to infra for expedited hard deletion
[03:49:12] I guess now that my RW stuff is basically done I'll move on to something that may be just a tiny bit more important lol and start looking into what on earth is using so much darn storage.
[03:49:22] oh yeah gotta say
[03:49:24] it looks
[03:49:25] awesome
[03:49:32] actually using it
[03:50:38] still technically more to come lol
[03:50:41] hmm so I now have more parameters for what to formulate for the fundraiser + future expansion plans as per the plan of wikitide
[03:50:51] I'll try and finish the implementation for fast track deletion
[03:51:05] thanks for that
[03:51:07] Is there a way to more often delete wikis that were fast tracked?
[03:51:14] Since there is quite literally nothing lost
[03:51:32] though granted also the least space
[03:51:47] how much space does one blank brand new wiki take up
[03:52:39] shit almost 12 am
[03:53:33] each swift object server is at about 88% disk usage with 1.5 TB total storage each
[03:54:03] how?
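A sketch of the "compare swift containers with DBs" script mentioned above, assuming Miraheze's convention of one `miraheze-<dbname>-<suffix>` container set per wiki, dbnames ending in `wiki`, and a mysql host that can list every wiki database; all names and paths here are illustrative:

```
#!/bin/bash
# Reduce container names to their dbname component (naming convention assumed)
swift list | sed -n 's/^miraheze-\([0-9a-z]*wiki\)-.*/\1/p' | sort -u > /tmp/swift-dbs.txt

# List live wiki databases (assumes wiki DBs end in "wiki")
mysql -N -e "SHOW DATABASES LIKE '%wiki'" | sort -u > /tmp/live-dbs.txt

# Containers whose wiki no longer exists -> candidates for purging
comm -23 /tmp/swift-dbs.txt /tmp/live-dbs.txt
```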
[03:54:11] Okay so this is more pressing than I previously thought on disk space.
[03:54:13] ejorvdlvn;zlvgmshermesor;gcs
[03:54:36] welp time to declare a state of wiki emergency
[03:54:50] wiki creators have an excuse to up standards again
[03:54:58] at least according to grafana
[03:55:07] I will have to discuss with the board if we need more this year. My plan was to expand next year but at this rate it seems like we need to move that up...
[03:55:58] is actor_id a climbing int starting at one in the database
[03:56:15] I think it is AUTO_INCREMENT
[03:56:31] so starts at 1? (or 0)
[03:56:46] Yep `actor_id BIGINT UNSIGNED AUTO_INCREMENT NOT NULL,`
[03:57:05] I don't think you can have a 0 actor
[03:57:11] so what is envisioned in the plan was for purchases to be done in November and December when it is most favorable and hold a fundraiser during that time
[03:57:17] I do not want to deal with joins so my thinking is since every wiki has the MediaWiki default edit first I can assume that actor will always be actor_id one
[03:57:24] IPs are 0 actors but not really stored as such
[03:57:36] more WD Greens?
[03:57:43] I think this is what we will have to do at this point
[03:57:49] of course
[03:57:52] actually, HDDs
[03:57:56] 3200 rpm hdds
[03:58:00] of course
[03:58:07] SSDs are for woke people
[03:59:03] anyways
[03:59:07]   $row = $dbr->selectRow(
[03:59:07]    'recentchanges',
[03:59:08]    'rc_timestamp',
[03:59:08]    [
[03:59:09]     "rc_log_action != 'renameuser'",
[03:59:09]     "rc_log_action != 'newusers'",
[03:59:10]     "rc_actor != 1"
[03:59:10]    ],
[03:59:11]    __METHOD__,
[03:59:11]    [
[03:59:12]     'ORDER BY' => 'rc_timestamp DESC'
[03:59:12]    ]
[03:59:13]   );
[03:59:18] this will probably work right?
[04:00:16] I think it already checks for no result `return $row ? $row->rc_timestamp : 0;`
[04:00:48] I would make that null but eh
[04:03:12] so each new wiki is ~1.5 Gb?
[04:04:21] but text content size is minuscule, real growth goes w/ files?
[04:05:08] No I don't think a new database would be that much
[04:05:35] This is still an issue though
[04:06:09] I'll leave the 0 in case I break something
[04:06:22] its midnight!
[04:06:27] time for resty
[04:06:49] gn
[04:07:01] night
[04:07:09] you much better at sleeping than me lol
[04:07:27] you act like I have a choice
[04:07:59] high school is already annoying sleeping at 10:30 or smt
[04:08:28] I'll try and finish a first draft of the implementation tmr then make a PR to run it against CI
[04:08:42] sounds good
[04:08:49] around 4MBs
[04:08:59] that's the size of a recently created wiki
[04:09:13] checking for rc_actor != 1 should be fine to ignore the MediaWiki default, ja?
[04:09:30] And we create what 500 wikis a month?
[04:09:48] time for a fun query
[04:10:07] I literally wrote a script to graph this
[04:10:20] although I still need to figure out how to plot the slope
[04:12:23] seems about right
[04:12:29] 400 wikis created so far
[04:12:35] 410 technically
[04:13:02] 7122 this year??
[04:13:19] Did you not see the graph I sent lol
[04:13:25] its insane
[04:13:35] I thought it was more
[04:13:43] I thought we got to 40k in 24
[04:14:10] we did
[04:14:16] agentisai: wrong number
[04:14:20] its at least 10k
[04:14:28] #40000 was Jan 17
[04:15:03] A 5th of the farm's requests over the last 9 and a half years have been in the last 10 months
[04:15:05] I ran a query showing the number of wikis approved this year
[04:15:10] Oh
[04:15:12] approved
[04:15:19] important bit yea
[04:15:32] you see cosmic?
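Two quick checks on the assumptions in the snippet above, as plain SQL through the mysql client; the wiki DB name is illustrative. The first verifies that actor_id 1 really is the "MediaWiki default" system user; the second is roughly the query the selectRow call generates (selectRow implies LIMIT 1):

```
# Sanity check: is actor_id 1 the 'MediaWiki default' actor? (dbname illustrative)
mysql examplewiki -e "SELECT actor_name FROM actor WHERE actor_id = 1"

# Roughly the SQL the selectRow call above produces
mysql examplewiki -e "
  SELECT rc_timestamp FROM recentchanges
  WHERE rc_log_action != 'renameuser'
    AND rc_log_action != 'newusers'
    AND rc_actor != 1
  ORDER BY rc_timestamp DESC
  LIMIT 1"
```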
[04:15:36] this is why you sleep
[04:15:54] I could see number created as well
[04:16:04] night night
[04:16:20] wiki work will continue tmr to combat the wiki crisis
[04:16:59] how much is upgrading storage and RAM gonna cost though yeesh
[04:17:15] not stonks
[04:17:50] yeah, 7149 wikis created
[04:17:54] vs 7122 approved
[04:18:34] 12049 wikis requested this year
[04:18:55] up from 10384 requested all last year
[04:20:10] only 6767 requested in 2022
[04:20:45] 6214 in 2021, 5610 in 2020 and 3450 in 2019
[05:21:42] damn
[05:48:21] Turns out that a video calling you the post-Fandom harbor of next resort makes waves. 😦
[05:48:48] Big thanks to all our wiki creators past and present for their work handling this unprecedented surge in interest.
[07:11:41] <_changezi_> Miraheze on google searches
[07:16:59] mossbag, or something new?
[07:17:06] blame pizza tower wiki
[07:44:23] hey I just had to watch that video. I knew about it but only watched it for the first time now lol
[07:46:39] huh, surprised you haven't watched it until now
[07:47:47] another video was made later but it didn't get that much traction
[07:47:56] Me too lol
[07:48:00] mossbag is over 3 million views now
[07:48:08] > [17/10/2024 18:47] another video was made later but it didn't get that much traction
[07:48:09] linkie?
[07:48:17] fandom started shitting itself a week after it emerged
[07:49:34] I almost forgot about that McDonald's thing at Fandom like WTF were they thinking just 💰 🤑 💸 💲 🪙 💶?
[07:49:56] They could not have thought that would go over well.
[07:50:06] companies gotta company
[07:50:31] til github has an endpoint for your gpg key
[07:50:33] https://github.com/BlankEclair.gpg
[07:50:42] well yeah but still lol
[07:51:01] wait how'd I get in IRC lol
[07:51:17] you are merging with the irc consciousness
[07:51:21] I must have clicked my ping notif from my Discord message sending a ping to IRC
[07:51:21] please do not resist
[07:51:26] lol
[07:51:35] reverse of the irc xkcd lol
[07:54:52]
[10:05:48] [1/3] another wiki failed at creation, per community portal
[10:05:48] [2/3]
[10:05:49] [3/3]
[11:26:51] how often does the manageInactiveWikis run
[11:28:14] PixDeVl: check puppet
[11:29:42] aaaaaaaaaaaa not puuuupppeeeetttt
[11:29:49] he's scary
[11:31:22] reception123: https://issue-tracker.miraheze.org/P526 uuuh I thought we deleted a few of these. also, why the hell is AVID here
[11:32:57] PixDeVl: I can look at puppet in half an hour if you want
[11:33:06] Well like 25 minutes
[11:33:13] eh i can look
[11:33:24] its on diffusion
[11:34:07] [1/6] 📢 From Wikipedia/Wikimedia Tech News: 📢
[11:34:08] [2/6] The Structured Discussion extension (also known as Flow) is starting to be removed. This extension is unmaintained and causes issues. It will be replaced by DiscussionTools, which is used on any regular talk page. A first set of wikis are being contacted. These wikis are invited to stop using Flow, and to move all Flow boards to sub-pages, as archives. At these wikis, a script will
[11:34:08] [3/6] move all Flow pages that aren't a sub-page to a sub-page automatically, starting on 22 October 2024. On 28 October 2024, all Flow boards at these wikis will be set in read-only mode.
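A sketch of the kind of count query behind those request numbers, assuming CreateWiki's `cw_requests` table (with `cw_status` and `cw_timestamp` columns, per CreateWiki's schema) on the central database; the DB name is illustrative:

```
# Wiki requests this year, broken down by status (approved, declined, inreview, ...)
# cw_timestamp assumed to be a 14-digit MediaWiki timestamp
mysql mhglobal -e "
  SELECT cw_status, COUNT(*)
  FROM cw_requests
  WHERE cw_timestamp >= '20240101000000'
  GROUP BY cw_status"
```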
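The GitHub endpoint mentioned above serves a user's public GPG keys as plain text at `https://github.com/<username>.gpg` (the SSH analogue is `.keys`), so importing one is a one-liner:

```
# Fetch and import a user's public GPG keys from GitHub
curl -fsSL https://github.com/BlankEclair.gpg | gpg --import
```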
[11:34:08] [4/6] https://www.mediawiki.org/wiki/Special:MyLanguage/Help:DiscussionTools
[11:34:08] [5/6] https://www.mediawiki.org/wiki/Structured_Discussions/Deprecation
[11:34:09] [6/6] https://phabricator.wikimedia.org/T370722
[11:34:37] Yup
[11:34:43] Will be coming to Miraheze soon
[11:35:02] Next summer cause branch cut for next already happened
[11:35:21] Well probably in late Q1 / early Q2 to prep for a mid year upgrade
[11:35:25] @rodejong
[11:35:59] Sounds good
[11:36:02] hi Robert how ya doin
[11:36:25] Better. We had a rough time in our family so
[11:36:45] hence my absence
[11:36:52] It's already restricted so no new installations
[11:37:10] Oh, I'm sorry to hear that man, Hope everything is okay :heart:
[11:37:49] @cosmicalpha's theory is because the process was killed during deletion
[11:37:54] so we'll have to try to re-delete them from swift
[11:38:28] Still lingering after the passing of my father in law. My wife and he were very close, so had to be there for her
[11:39:21] Yea, I know that feeling
[11:39:30] [1/2] https://www.mediawiki.org/wiki/MediaWiki_Product_Insights/Reports/September_2024
[11:39:30] [2/2] https://www.mediawiki.org/wiki/Technical_Community_Newsletter/2024/October
[11:44:09] RhinosF1 found it
[11:44:11]             minute => '5',
[11:44:11]             hour => '12'
[11:44:14] now tf this means
[11:44:57] PixDeVl: starts at 12:05 am
[11:45:12] why
[11:45:22] but everyday ig so yay
[11:46:06] dont really matter I'll probably have to just change some of the logic so the bit in the else can also run. or just have each state be one day after the previous
[11:46:06] good to know that animatedmusclewomenwiki files are still kicking around
[11:46:11] and so are drawnfeetwiki apparently
[11:46:16] Not for much longer
[11:46:26] true
[11:46:46] anyone wanna hit me up for my backup of animatedmusclewomenwiki? file hosting is gonna be a bit of a nightmare though xD
[11:46:48] boys (and girls if so inclined of course), burn it down!
[11:46:56] don't forget enbies
[11:47:04] and non binary
[11:47:05] and yes, i'd love to engage in some arson <3
[11:47:15] i mean, enby is basically a shorthand of non-binary
[11:47:27] comes from Non-Binary
[11:47:45] I still don't get where the terms enby and ace came from (ace I can see coming from asexual but still)
[11:47:49] ...
[11:47:51] oooooooooooooh
[11:47:57] that's silly
[11:47:58] ace is asexual yeah
[11:47:59] Why we run that daily I've no idea
[11:48:00] so it must be right
[11:48:06] aro is aromantic
[11:48:12] The more you know
[11:48:15] We do deletions like every 6 months
[11:48:18] aroace is a portmanteau of aro and ace
[11:48:31] and by the way, that's the first time i've had to use the word portmanteau this year
[11:48:45] remind me what aromantic means again
[11:48:56] you don't experience romantic attraction
[11:49:15] then what's asexual
[11:49:30] you don't experience sexual attraction
[11:49:40] yeah there's a difference
[11:50:06] oh
[11:51:48] (do you need me to explain the difference between sexual and romantic attraction?)
[11:52:22] nah, I'm aware (prob)
[11:52:29] oki cool
[11:52:31] less effort for me
[11:53:15] may Google later but hey I'm already late
[11:53:28] should probably go to #offtopic
[11:53:31] what do we run daily?
[11:53:38] anyone can deal w/ failed wiki creation mentioned earlier?
[11:53:47] manageInactiveWikis
[11:53:50] which one? It just needs a manual Special:CreateWiki
[11:54:06] uh... why does wiki creation fail?
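For the puppet snippet above: `minute => '5', hour => '12'` on a Puppet cron resource maps straight onto a crontab entry, i.e. the job fires once a day at 12:05 in the server's timezone (that's noon, not 12:05 am). The equivalent crontab line, with the script path and `--write` flag as assumptions about how it is invoked:

```
# m   h   dom mon dow   command          -- daily at 12:05 server time
  5   12   *   *   *    /usr/bin/php /srv/mediawiki/w/extensions/CreateWiki/maintenance/manageInactiveWikis.php --write
```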
[11:54:08] ah, I mean I guess we could run it less often but there's not really a need to
[11:54:14] ^
[11:54:14] pixl disconnected btw
[11:54:41] It failed due to CA's rewrite at the beginning and some wikis weren't created
[11:54:55] oh darn, we're still feeling the effects from that?
[11:55:17] yeah that was on 12 october
[11:55:21] also wow, first look at the new design
[12:07:41] Why not
[12:07:52] Discord ✨
[12:07:52] touche
[12:58:49] Where?
[13:07:38] RWQ
[14:37:13] going back to the conversation from last night about disk space i've managed to determine that the largest swift container is 240 GB
[14:37:37] woah, which
[14:39:04] it's a private wiki, not 100% sure if I can publicly reveal the name, since Special:MediaStatistics would not be visible to most
[14:39:46] oh
[14:39:55] hopefully it complies with the content policy then
[14:40:08] Was gonna ask, is it in compliance
[14:40:09] Or active
[14:41:03] i mean 190k jpg type files certainly will do it
[14:41:27] Could be worse, could be png
[14:41:28] eepygirlwiki
[14:41:31] ahem
[14:41:33] .gif
[14:41:46] Touché
[14:43:45] So not a file host specific wiki
[14:46:29] the largest container for a public wiki is for stardustlabswiki
[14:46:58] while the actual files aren't that big, they have 126GB worth of thumbnails
[14:47:26] thumbnails go into a separate container
[14:47:29] MacFan4000: how much for files and how much thumbs?
[14:48:13] https://stardustlabs.miraheze.org/wiki/Special:MediaStatistics
[14:48:19] Files should be proxied through thumbor or something
[14:48:23] That would make it better
[14:48:27] the normal file container is 7 GB
[14:49:35] Wdym thumbnails
[14:49:43] Hm?
[14:51:20] btw the way i'm getting these stats was by piping the output of swift list -l into a txt file, and then putting together a python script that finds the largest size value
[14:51:25] MacFan4000: what about thumbs
[14:51:35] as I said 126GB
[14:52:47] MacFan4000: 18 thumbs to a file sounds wrong
[14:54:21] the wiki has mostly PNGs going by MediaStatistics
[14:55:07] ugh
[14:55:16] db161 is exploding
[14:55:28] MacFan4000: that's still a lot
[14:55:32] should be like 4 or 5
[14:55:34] I managed to SSH in time so I can see the load is at 170
[14:55:41] 180 now
[14:58:08] [1/2] ```[agent@db161:~]$ sudo reboot
[14:58:09] [2/2] Call to Reboot failed: Connection timed out```
[14:58:24] a true catch-22
[15:03:17] Nah thats about right
[15:03:49] You can arbitrarily force a different dimension and MediaWiki will generate a thumb for it
[15:03:57] yes you can
[15:04:20] but it could be being abused
[15:04:29] That's MediaWiki 🤷‍♀️
[15:04:33] Even worse if its in a gallery
[15:04:47] I doubt it's intentional but should probably look at that
[15:06:50] 18 sounds too high
[15:06:55] for normal use
[16:29:22] It really doesn't
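A sketch of the container-size ranking described above as a single pipeline rather than a separate Python script; it assumes account-level `swift list -l` output with the byte count in the second column, which may vary between swiftclient versions:

```
# Rank swift containers by total bytes (column layout assumed: count, bytes, ..., name)
swift list -l | sort -k2 -rn | head -20 \
  | awk '{ printf "%8.1f GB  %s\n", $2 / 1024 / 1024 / 1024, $NF }'
```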
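On the `Call to Reboot failed` catch-22: when PID 1 is too wedged to answer over D-Bus, systemctl can bypass the manager entirely. Last resort only, since it skips clean service shutdown and unmounts and so risks exactly the corruption discussed earlier in the log:

```
# One -f asks systemd for an immediate reboot; -ff reboots directly,
# without contacting the manager at all
systemctl reboot -ff

# Harder still: magic SysRq, sync then force an instant reboot from the kernel
# (may first require: echo 1 > /proc/sys/kernel/sysrq)
echo s > /proc/sysrq-trigger
echo b > /proc/sysrq-trigger
```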