[00:01:09] [statichelp] WikiTideBot pushed 1 new commit to main https://github.com/miraheze/statichelp/commit/96306f57e8e10541fd63ccccf7bb7b945a42859f
[00:01:09] statichelp/main WikiTideBot 96306f5 Bot: Auto-update Tech namespace pages 2025-06-20 00:01:07
[00:05:06] RECOVERY - cp171 Disk Space on cp171 is OK: DISK OK - free space: / 56750MiB (12% inode=99%);
[00:05:52] RECOVERY - cp201 Disk Space on cp201 is OK: DISK OK - free space: / 59683MiB (13% inode=99%);
[00:05:53] [Grafana] RESOLVED: MediaWiki JobQueue is stalled https://grafana.wikitide.net/d/GtxbP1Xnk?from=1750374320000&orgId=1&to=1750377920000
[00:06:29] RECOVERY - cp191 Disk Space on cp191 is OK: DISK OK - free space: / 57133MiB (12% inode=99%);
[00:51:19] [statichelp] WikiTideBot pushed 1 new commit to main https://github.com/miraheze/statichelp/commit/dba0ec853bff18f8b7d747fc6b05de57ca782cda
[00:51:20] statichelp/main WikiTideBot dba0ec8 Bot: Auto-update Tech namespace pages 2025-06-20 00:51:17
[01:12:53] PROBLEM - mwtask181 Disk Space on mwtask181 is WARNING: DISK WARNING - free space: / 23635MiB (10% inode=95%);
[01:14:53] [puppet] Universal-Omega pushed 1 new commit to main https://github.com/miraheze/puppet/commit/6b80f88c6495de776a346f1e6e9bc04f18760ed2
[01:14:53] puppet/main CosmicAlpha 6b80f88 techdocs: fix
[01:16:12] miraheze/puppet - Universal-Omega the build passed.
[01:16:53] RECOVERY - mwtask181 Disk Space on mwtask181 is OK: DISK OK - free space: / 27807MiB (12% inode=95%);
[01:21:44] [statichelp] WikiTideBot pushed 1 new commit to main https://github.com/miraheze/statichelp/commit/7c5cdd5179fd94b818cb3fa2b11e77cc92fe7536
[01:21:44] statichelp/main WikiTideBot 7c5cdd5 Bot: Auto-update Tech namespace pages 2025-06-20 01:21:42
[02:17:22] [puppet] Universal-Omega created phd-monitor (+1 new commit) https://github.com/miraheze/puppet/commit/84451548d5d0
[02:17:22] puppet/phd-monitor CosmicAlpha 8445154 phorge: use systemd::monitor for monitoring phd
[02:17:27] [puppet] Universal-Omega opened pull request #4415: phorge: use systemd::monitor for monitoring phd (main...phd-monitor) https://github.com/miraheze/puppet/pull/4415
[02:17:31] [puppet] coderabbitai[bot] commented on pull request #4415: […] https://github.com/miraheze/puppet/pull/4415#issuecomment-2989612728
[03:00:27] RECOVERY - db171 Check unit status of db-backups on db171 is OK: OK: Status of the systemd unit db-backups
[04:22:53] PROBLEM - mwtask181 Disk Space on mwtask181 is WARNING: DISK WARNING - free space: / 23089MiB (10% inode=95%);
[04:24:53] RECOVERY - mwtask181 Disk Space on mwtask181 is OK: DISK OK - free space: / 28075MiB (12% inode=95%);
[05:06:33] PROBLEM - mwtask181 Disk Space on mwtask181 is WARNING: DISK WARNING - free space: / 22679MiB (10% inode=95%);
[05:14:27] RECOVERY - mwtask181 Disk Space on mwtask181 is OK: DISK OK - free space: / 24545MiB (11% inode=95%);
[05:18:53] [Grafana] FIRING: The estimated time for the MediaWiki JobQueue to clear is excessively high (8 hours) for an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1750393100000&orgId=1&to=1750396733353
[05:28:15] PROBLEM - mwtask181 Disk Space on mwtask181 is WARNING: DISK WARNING - free space: / 23220MiB (10% inode=95%);
[05:28:53] [Grafana] RESOLVED: MediaWiki JobQueue is stalled https://grafana.wikitide.net/d/GtxbP1Xnk?from=1750393640000&orgId=1&to=1750397240000
[05:30:13] RECOVERY - mwtask181 Disk Space on mwtask181 is OK: DISK OK - free space: / 28287MiB (12% inode=95%);
[06:41:33] [mw-config] songnguxyz opened pull request #5994: Change settings for cgwiki and lhmnwiki (miraheze:main...songnguxyz:multi) https://github.com/miraheze/mw-config/pull/5994
[06:42:27] miraheze/mw-config - songnguxyz the build passed.
[06:42:44] miraheze/mw-config - songnguxyz the build passed.
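The mwtask181 disk alerts above flap right at the 10%-free boundary. As a rough sketch of how a Nagios-style disk check maps free space to OK/WARNING/CRITICAL status text like the lines above (the 10%/5% thresholds are assumptions, not the actual Icinga configuration):

```python
import shutil

def classify_disk(free: int, total: int,
                  warn_pct: float = 10.0, crit_pct: float = 5.0) -> str:
    """Map a free-space percentage to a Nagios-style status string."""
    pct_free = 100.0 * free / total
    if pct_free <= crit_pct:
        return "CRITICAL"
    if pct_free <= warn_pct:
        return "WARNING"
    return "OK"

def check_disk(path: str = "/") -> str:
    """Format output like 'DISK WARNING - free space: / 23635MiB (10%)'."""
    usage = shutil.disk_usage(path)
    status = classify_disk(usage.free, usage.total)
    return (f"DISK {status} - free space: {path} "
            f"{usage.free // 2**20}MiB ({100 * usage.free // usage.total}%)")
```

With these thresholds, 10% free lands exactly on WARNING and 12% on OK, matching the PROBLEM/RECOVERY pairs in the log.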
[06:51:53] [Grafana] FIRING: The estimated time for the MediaWiki JobQueue to clear is excessively high (8 hours) for an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1750398680000&orgId=1&to=1750402313357
[07:06:53] [Grafana] RESOLVED: MediaWiki JobQueue is stalled https://grafana.wikitide.net/d/GtxbP1Xnk?from=1750398680000&orgId=1&to=1750403180000
[08:06:31] PROBLEM - db171 Current Load on db171 is CRITICAL: LOAD CRITICAL - total load average: 309.48, 124.55, 47.47
[08:06:33] PROBLEM - mw151 MediaWiki Rendering on mw151 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.011 second response time
[08:06:34] PROBLEM - mw171 MediaWiki Rendering on mw171 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.011 second response time
[08:06:39] PROBLEM - mw163 HTTPS on mw163 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[08:06:41] PROBLEM - cp171 HTTPS on cp171 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 503
[08:06:41] PROBLEM - mw161 HTTPS on mw161 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[08:06:44] PROBLEM - mw152 HTTPS on mw152 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[08:06:50] PROBLEM - mw202 HTTPS on mw202 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[08:06:52] PROBLEM - mw162 HTTPS on mw162 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[08:06:52] PROBLEM - mw171 HTTPS on mw171 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[08:06:53] PROBLEM - mwtask151 MediaWiki Rendering on mwtask151 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:06:55] PROBLEM - mw201 MediaWiki Rendering on mw201 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.012 second response time
[08:06:56] PROBLEM - mwtask161 MediaWiki Rendering on mwtask161 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:06:56] PROBLEM - mw181 HTTPS on mw181 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[08:06:58] PROBLEM - mwtask171 MediaWiki Rendering on mwtask171 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:07:02] PROBLEM - mwtask181 MediaWiki Rendering on mwtask181 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:07:07] PROBLEM - cp161 HTTPS on cp161 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 503
[08:07:13] PROBLEM - mw193 HTTPS on mw193 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[08:07:17] PROBLEM - mw162 MediaWiki Rendering on mw162 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.012 second response time
[08:07:17] PROBLEM - mw172 MediaWiki Rendering on mw172 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.012 second response time
[08:07:19] PROBLEM - mw183 HTTPS on mw183 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[08:07:23] PROBLEM - mw192 HTTPS on mw192 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[08:07:25] PROBLEM - mw183 MediaWiki Rendering on mw183 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.017 second response time
[08:07:40] PROBLEM - mw182 MediaWiki Rendering on mw182 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.018 second response time
[08:07:40] PROBLEM - cp161 Varnish Backends on cp161 is CRITICAL: 19 backends are down. mw151 mw152 mw161 mw162 mw171 mw172 mw181 mw182 mw153 mw163 mw173 mw183 mw191 mw192 mw193 mw201 mw202 mw203 mediawiki
[08:07:42] PROBLEM - mw181 MediaWiki Rendering on mw181 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.011 second response time
[08:07:44] PROBLEM - mw193 MediaWiki Rendering on mw193 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.011 second response time
[08:07:47] PROBLEM - cp201 HTTPS on cp201 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 503
[08:07:52] PROBLEM - mw202 MediaWiki Rendering on mw202 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:07:52] PROBLEM - mw152 MediaWiki Rendering on mw152 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:07:54] PROBLEM - mw163 MediaWiki Rendering on mw163 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:07:59] PROBLEM - mw151 HTTPS on mw151 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10001 milliseconds with 0 bytes received
[08:08:05] PROBLEM - mw173 MediaWiki Rendering on mw173 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:08:08] PROBLEM - cp161 HTTP 4xx/5xx ERROR Rate on cp161 is CRITICAL: CRITICAL - NGINX Error Rate is 76%
[08:08:09] PROBLEM - mw203 HTTPS on mw203 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10002 milliseconds with 0 bytes received
[08:08:11] PROBLEM - cp201 Varnish Backends on cp201 is CRITICAL: 19 backends are down. mw151 mw152 mw161 mw162 mw171 mw172 mw181 mw182 mw153 mw163 mw173 mw183 mw191 mw192 mw193 mw201 mw202 mw203 mediawiki
[08:08:12] PROBLEM - cp191 HTTPS on cp191 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 503
[08:08:12] PROBLEM - mw172 HTTPS on mw172 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10003 milliseconds with 0 bytes received
[08:08:13] PROBLEM - mw192 MediaWiki Rendering on mw192 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:08:18] PROBLEM - mw173 HTTPS on mw173 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received
[08:08:18] PROBLEM - mw191 HTTPS on mw191 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10003 milliseconds with 0 bytes received
[08:08:19] PROBLEM - mw203 MediaWiki Rendering on mw203 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:08:22] PROBLEM - mw153 HTTPS on mw153 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10003 milliseconds with 0 bytes received
[08:08:25] PROBLEM - mw201 HTTPS on mw201 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received
[08:08:28] PROBLEM - mw161 MediaWiki Rendering on mw161 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:08:28] PROBLEM - mw153 MediaWiki Rendering on mw153 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:08:28] PROBLEM - mw191 MediaWiki Rendering on mw191 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:08:30] PROBLEM - cp171 Varnish Backends on cp171 is CRITICAL: 19 backends are down. mw151 mw152 mw161 mw162 mw171 mw172 mw181 mw182 mw153 mw163 mw173 mw183 mw191 mw192 mw193 mw201 mw202 mw203 mediawiki
[08:08:32] PROBLEM - mw182 HTTPS on mw182 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10003 milliseconds with 0 bytes received
[08:08:34] PROBLEM - cp191 Varnish Backends on cp191 is CRITICAL: 19 backends are down. mw151 mw152 mw161 mw162 mw171 mw172 mw181 mw182 mw153 mw163 mw173 mw183 mw191 mw192 mw193 mw201 mw202 mw203 mediawiki
[08:09:26] PROBLEM - db171 APT on db171 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[08:09:39] PROBLEM - db171 Puppet on db171 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[08:11:22] PROBLEM - db171 MariaDB Connections on db171 is UNKNOWN: PHP Fatal error: Uncaught mysqli_sql_exception: Too many connections in /usr/lib/nagios/plugins/check_mysql_connections.php:66Stack trace:#0 /usr/lib/nagios/plugins/check_mysql_connections.php(66): mysqli_real_connect(Object(mysqli), 'db171.fsslc.wtn...', 'icinga', Object(SensitiveParameterValue), NULL, NULL, NULL, true)#1 {main} thrown in /usr/lib/nagios/plugins/check_mysql_conne
[08:11:22] on line 66Fatal error: Uncaught mysqli_sql_exception: Too many connections in /usr/lib/nagios/plugins/check_mysql_connections.php:66Stack trace:#0 /usr/lib/nagios/plugins/check_mysql_connections.php(66): mysqli_real_connect(Object(mysqli), 'db171.fsslc.wtn...', 'icinga', Object(SensitiveParameterValue), NULL, NULL, NULL, true)#1 {main} thrown in /usr/lib/nagios/plugins/check_mysql_connections.php on line 66
RECOVERY - mw151 HTTPS on mw151 is OK: HTTP OK: HTTP/2 200 - Status line output matched "HTTP/2 200" - 271 bytes in 0.017 second response time
[08:12:06] RECOVERY - mw203 HTTPS on mw203 is OK: HTTP OK: HTTP/2 200 - Status line output matched "HTTP/2 200" - 271 bytes in 0.017 second response time
[08:12:08] RECOVERY - mw173 HTTPS on mw173 is OK: HTTP OK: HTTP/2 200 - Status line output matched "HTTP/2 200" - 271 bytes in 0.018 second response time
[08:12:09] RECOVERY - mw191 HTTPS on mw191 is OK: HTTP OK: HTTP/2 200 - Status line output matched "HTTP/2 200" - 271 bytes in 0.018 second response time
[08:12:12] RECOVERY - mw153 HTTPS on mw153 is OK: HTTP OK: HTTP/2 200 - Status line output matched "HTTP/2 200" - 271 bytes in 0.018 second response time
[08:12:15] RECOVERY - mw172 HTTPS on mw172 is OK: HTTP OK: HTTP/2 200 - Status line output matched "HTTP/2 200" - 271 bytes in 0.019 second response time
[08:12:18] RECOVERY - mw201 HTTPS on mw201 is OK: HTTP OK: HTTP/2 200 - Status line output matched "HTTP/2 200" - 271 bytes in 0.016 second response time
[08:13:25] RECOVERY - mw193 HTTPS on mw193 is OK: HTTP OK: HTTP/2 200 - Status line output matched "HTTP/2 200" - 271 bytes in 2.445 second response time
[08:13:29] RECOVERY - mw183 HTTPS on mw183 is OK: HTTP OK: HTTP/2 200 - Status line output matched "HTTP/2 200" - 271 bytes in 0.019 second response time
[08:13:40] RECOVERY - mw192 HTTPS on mw192 is OK: HTTP OK: HTTP/2 200 - Status line output matched "HTTP/2 200" - 271 bytes in 0.014 second response time
[08:13:47] RECOVERY - cp201 HTTPS on cp201 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 4038 bytes in 0.063 second response time
[08:14:11] RECOVERY - cp201 Varnish Backends on cp201 is OK: All 31 backends are healthy
[08:14:12] RECOVERY - cp191 HTTPS on cp191 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 4038 bytes in 0.059 second response time
[08:14:30] RECOVERY - cp171 Varnish Backends on cp171 is OK: All 31 backends are healthy
[08:14:32] RECOVERY - db171 APT on db171 is OK: APT OK: 54 packages available for upgrade (0 critical updates).
[08:14:34] RECOVERY - cp191 Varnish Backends on cp191 is OK: All 31 backends are healthy
[08:14:37] RECOVERY - mw182 HTTPS on mw182 is OK: HTTP OK: HTTP/2 200 - Status line output matched "HTTP/2 200" - 271 bytes in 0.018 second response time
[08:14:44] RECOVERY - cp171 HTTPS on cp171 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 4038 bytes in 0.059 second response time
[08:14:46] RECOVERY - mw152 HTTPS on mw152 is OK: HTTP OK: HTTP/2 200 - Status line output matched "HTTP/2 200" - 271 bytes in 0.017 second response time
[08:14:47] RECOVERY - mw161 HTTPS on mw161 is OK: HTTP OK: HTTP/2 200 - Status line output matched "HTTP/2 200" - 271 bytes in 0.021 second response time
[08:14:49] RECOVERY - mw163 HTTPS on mw163 is OK: HTTP OK: HTTP/2 200 - Status line output matched "HTTP/2 200" - 271 bytes in 0.018 second response time
[08:14:50] PROBLEM - db171 MariaDB on db171 is CRITICAL: Can't connect to server on 'db171.fsslc.wtnet' (115)
[08:14:58] RECOVERY - mw181 HTTPS on mw181 is OK: HTTP OK: HTTP/2 200 - Status line output matched "HTTP/2 200" - 271 bytes in 0.020 second response time
[08:14:59] RECOVERY - mw202 HTTPS on mw202 is OK: HTTP OK: HTTP/2 200 - Status line output matched "HTTP/2 200" - 271 bytes in 0.021 second response time
[08:15:04] RECOVERY - mw162 HTTPS on mw162 is OK: HTTP OK: HTTP/2 200 - Status line output matched "HTTP/2 200" - 271 bytes in 0.018 second response time
[08:15:04] RECOVERY - mw171 HTTPS on mw171 is OK: HTTP OK: HTTP/2 200 - Status line output matched "HTTP/2 200" - 271 bytes in 0.024 second response time
[08:15:13] RECOVERY - cp161 HTTPS on cp161 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 4093 bytes in 0.066 second response time
[08:15:40] RECOVERY - cp161 Varnish Backends on cp161 is OK: All 31 backends are healthy
[08:16:02] PROBLEM - mwtask181 Check unit status of mediawiki_job_update-wikibase-sites-table on mwtask181 is CRITICAL: CRITICAL: Status of the systemd unit mediawiki_job_update-wikibase-sites-table
[08:17:32] !log [blankeclair@mwtask181] starting deploy of {'config': True} to all
[08:17:42] !log [blankeclair@mwtask181] DEPLOY ABORTED: Canary check failed for publictestwiki.com@mw181
[08:18:17] !log [blankeclair@mwtask181] starting deploy of {'config': True, 'force': True} to all
[08:18:32] !log putting c3 in maintenance mode since db171's cpu is through the roof
[08:18:36] oh wait
[08:18:39] !log [blankeclair@mwtask181] finished deploy of {'config': True, 'force': True} to all - SUCCESS in 21s
[08:19:19] eh, whatevs
[08:19:28] --force is because i got grafana'd for the canary check
[08:21:26] PROBLEM - db171 APT on db171 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[08:23:53] [Grafana] FIRING: The estimated time for the MediaWiki JobQueue to clear is excessively high (8 hours) for an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1750404200000&orgId=1&to=1750407833354
[08:27:27] RECOVERY - db171 Puppet on db171 is OK: OK: Puppet is currently enabled, last run 39 minutes ago with 0 failures
[08:29:15] RECOVERY - db171 APT on db171 is OK: APT OK: 54 packages available for upgrade (0 critical updates).
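The db171 MariaDB Connections check above went UNKNOWN because the plugin itself crashed: an uncaught mysqli_sql_exception ("Too many connections") escaped and the stack trace became the check output. Nagios-style plugins are expected to catch their own errors and exit 3 (UNKNOWN) with a one-line message. A minimal sketch of that convention in Python; the `fetch` callable standing in for the real database query is hypothetical:

```python
from typing import Callable, Tuple

# Standard Nagios plugin exit codes
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def check_connections(fetch: Callable[[], Tuple[int, int]],
                      warn_pct: float = 80.0,
                      crit_pct: float = 90.0) -> Tuple[int, str]:
    """fetch() returns (current_connections, max_connections).

    Any failure while querying (e.g. 'Too many connections' during
    connect) is caught and reported as UNKNOWN instead of crashing
    the plugin with a stack trace.
    """
    try:
        current, limit = fetch()
    except Exception as exc:
        return UNKNOWN, f"UNKNOWN: could not query server: {exc}"
    pct = 100.0 * current / limit
    msg = f"connection usage: {pct:.1f}% Current connections: {current}"
    if pct >= crit_pct:
        return CRITICAL, "CRITICAL " + msg
    if pct >= warn_pct:
        return WARNING, "WARNING " + msg
    return OK, "OK " + msg
```

The later recovery line ("OK connection usage: 2.3% Current connections: 23") is the shape this check produces on the happy path.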
[08:30:30] RECOVERY - db171 Current Load on db171 is OK: LOAD OK - total load average: 1.38, 0.44, 0.16
[08:30:50] RECOVERY - db171 MariaDB on db171 is OK: Uptime: 100 Threads: 22 Questions: 108963 Slow queries: 0 Opens: 619 Open tables: 613 Queries per second avg: 1089.630
[08:31:01] !log v
[08:31:03] ope
[08:31:04] !log sudo -u www-data kill 3780355 3791051
[08:31:18] RECOVERY - db171 MariaDB Connections on db171 is OK: OK connection usage: 2.3% Current connections: 23
[08:31:40] !log [blankeclair@mwtask181] starting deploy of {'config': True} to all
[08:31:55] RECOVERY - mw173 MediaWiki Rendering on mw173 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.232 second response time
[08:32:01] !log [blankeclair@mwtask181] finished deploy of {'config': True} to all - SUCCESS in 21s
[08:32:03] RECOVERY - mw192 MediaWiki Rendering on mw192 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.186 second response time
[08:32:04] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[08:32:08] RECOVERY - cp161 HTTP 4xx/5xx ERROR Rate on cp161 is OK: OK - NGINX Error Rate is 38%
[08:32:09] RECOVERY - mw203 MediaWiki Rendering on mw203 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.201 second response time
[08:32:16] !log repool c3
[08:32:18] RECOVERY - mw161 MediaWiki Rendering on mw161 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.189 second response time
[08:32:18] RECOVERY - mw153 MediaWiki Rendering on mw153 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.188 second response time
[08:32:19] RECOVERY - mw191 MediaWiki Rendering on mw191 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.188 second response time
[08:32:19] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[08:32:26] RECOVERY - mw151 MediaWiki Rendering on mw151 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.180 second response time
[08:32:26] RECOVERY - mwtask151 MediaWiki Rendering on mwtask151 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.248 second response time
[08:32:30] RECOVERY - mw171 MediaWiki Rendering on mw171 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.174 second response time
[08:32:31] RECOVERY - mwtask161 MediaWiki Rendering on mwtask161 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.213 second response time
[08:32:37] RECOVERY - mwtask181 MediaWiki Rendering on mwtask181 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.237 second response time
[08:32:56] RECOVERY - mw201 MediaWiki Rendering on mw201 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.188 second response time
[08:33:12] RECOVERY - mwtask171 MediaWiki Rendering on mwtask171 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.197 second response time
[08:33:20] RECOVERY - mw172 MediaWiki Rendering on mw172 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.164 second response time
[08:33:20] RECOVERY - mw162 MediaWiki Rendering on mw162 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.167 second response time
[08:33:25] RECOVERY - mw183 MediaWiki Rendering on mw183 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.203 second response time
[08:33:40] RECOVERY - mw182 MediaWiki Rendering on mw182 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.202 second response time
[08:33:42] RECOVERY - mw152 MediaWiki Rendering on mw152 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.198 second response time
[08:33:42] RECOVERY - mw181 MediaWiki Rendering on mw181 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.203 second response time
[08:33:44] RECOVERY - mw193 MediaWiki Rendering on mw193 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.206 second response time
[08:33:44] RECOVERY - mw163 MediaWiki Rendering on mw163 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.195 second response time
[08:33:56] RECOVERY - mw202 MediaWiki Rendering on mw202 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.193 second response time
[08:43:53] [Grafana] RESOLVED: MediaWiki JobQueue is stalled https://grafana.wikitide.net/d/GtxbP1Xnk?from=1750405280000&orgId=1&to=1750408880000
[09:47:06] PROBLEM - cp171 Disk Space on cp171 is WARNING: DISK WARNING - free space: / 49874MiB (10% inode=99%);
[10:34:29] PROBLEM - cp191 Disk Space on cp191 is WARNING: DISK WARNING - free space: / 49944MiB (10% inode=99%);
[10:46:53] [Grafana] FIRING: The estimated time for the MediaWiki JobQueue to clear is excessively high (8 hours) for an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1750412780000&orgId=1&to=1750416413356
[10:51:53] [Grafana] RESOLVED: MediaWiki JobQueue is stalled https://grafana.wikitide.net/d/GtxbP1Xnk?from=1750412900000&orgId=1&to=1750416500000
[12:10:53] [Grafana] FIRING: The estimated time for the MediaWiki JobQueue to clear is excessively high (8 hours) for an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1750417820000&orgId=1&to=1750421453355
[12:25:53] [Grafana] RESOLVED: MediaWiki JobQueue is stalled https://grafana.wikitide.net/d/GtxbP1Xnk?from=1750418480000&orgId=1&to=1750422080000
[13:13:53] [Grafana] FIRING: The estimated time for the MediaWiki JobQueue to clear is excessively high (8 hours) for an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1750421600000&orgId=1&to=1750425233354
[13:18:53] [Grafana] RESOLVED: MediaWiki JobQueue is stalled https://grafana.wikitide.net/d/GtxbP1Xnk?from=1750421840000&orgId=1&to=1750425440000
[13:52:53] [Grafana] FIRING: The estimated time for the MediaWiki JobQueue to clear is excessively high (8 hours) for an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1750423940000&orgId=1&to=1750427573351
[14:22:53] [Grafana] RESOLVED: MediaWiki JobQueue is stalled https://grafana.wikitide.net/d/GtxbP1Xnk?from=1750425680000&orgId=1&to=1750429280000
[14:28:58] [ssl] asko1 opened pull request #861: T13876: Redirect trollpasta.miraheze.org to trollpasta.com (miraheze:main...asko1:T13876) https://github.com/miraheze/ssl/pull/861
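The JobQueue alert that keeps firing and resolving above estimates time-to-clear, and a later !log entry at [20:21:57] mentions fixing the stalled alert "by preventing /0 on grafana". The arithmetic being guarded is presumably backlog divided by dequeue rate, where a stalled queue (zero rate) makes the ETA undefined. The exact Grafana expression is not shown in this log, so the following is only an assumed sketch of that guard:

```python
from typing import Optional

def eta_hours(backlog_jobs: int, dequeue_rate_per_s: float) -> Optional[float]:
    """Estimated hours for the job queue to clear.

    Returns None when the dequeue rate is zero or negative, so a
    stalled queue yields "no estimate" instead of a division by zero.
    """
    if dequeue_rate_per_s <= 0:
        return None
    return backlog_jobs / dequeue_rate_per_s / 3600.0
```

With a guard like this, the "stalled" condition can alert on the rate itself rather than on a divide-by-zero artifact in the ETA panel.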
[14:33:39] [ssl] coderabbitai[bot] commented on pull request #861: A new redirect entry named `trollpastawiki` was added to the `redirects.yaml` configuration. This entry redirects requests from `trollpasta.miraheze.org` to `trollpasta.com` and includes SSL and HSTS settings specific to Cloudflare. […] https://github.com/miraheze/ssl/pull/861#issuecomment-2991850979
[15:15:52] PROBLEM - cp201 Disk Space on cp201 is WARNING: DISK WARNING - free space: / 49835MiB (10% inode=99%);
[16:53:53] [Grafana] FIRING: The estimated time for the MediaWiki JobQueue to clear is excessively high (8 hours) for an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1750434800000&orgId=1&to=1750438433352
[17:10:54] RECOVERY - mwtask181 Check unit status of mediawiki_job_update-wikibase-sites-table on mwtask181 is OK: OK: Status of the systemd unit mediawiki_job_update-wikibase-sites-table
[17:43:53] [Grafana] RESOLVED: MediaWiki JobQueue is stalled https://grafana.wikitide.net/d/GtxbP1Xnk?from=1750434800000&orgId=1&to=1750441400000
[18:18:40] !log fixed corrupted JSON on nynthidbwiki, see T13865#277956 for the command
[18:18:43] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[18:29:33] !log fixed corrupted JSON on gfiwiki (T13872#277964) and kelevarwiki (T13874#277967)
[18:29:36] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[18:41:31] [puppet] Universal-Omega created mydumper (+1 new commit) https://github.com/miraheze/puppet/commit/2e9df47e716b
[18:41:31] puppet/mydumper CosmicAlpha 2e9df47 wikitide-backup: improve sql dumps to use mydumper and cleanup
[18:41:36] [puppet] Universal-Omega opened pull request #4416: wikitide-backup: improve sql dumps to use mydumper and cleanup (main...mydumper) https://github.com/miraheze/puppet/pull/4416
[18:41:42] [puppet] coderabbitai[bot] commented on pull request #4416: […] https://github.com/miraheze/puppet/pull/4416#issuecomment-2992510813
[18:43:30] [puppet] Universal-Omega pushed 1 new commit to mydumper https://github.com/miraheze/puppet/commit/28e239e12a67071f5e5cc6dfd1d19c374d8a5098
[18:43:30] puppet/mydumper CosmicAlpha 28e239e Fix typo
[18:51:53] [Grafana] FIRING: The estimated time for the MediaWiki JobQueue to clear is excessively high (8 hours) for an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1750441880000&orgId=1&to=1750445513353
[19:01:53] [Grafana] RESOLVED: MediaWiki JobQueue is stalled https://grafana.wikitide.net/d/GtxbP1Xnk?from=1750441880000&orgId=1&to=1750446080000
[19:17:18] PROBLEM - farthestfrontier.wiki - Cloudflare on sslhost is WARNING: WARNING - Certificate 'farthestfrontier.wiki' expires in 28 day(s) (Sat 19 Jul 2025 06:50:21 PM GMT +0000).
[19:42:53] [Grafana] FIRING: The estimated time for the MediaWiki JobQueue to clear is excessively high (8 hours) for an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1750444940000&orgId=1&to=1750448573356
[19:43:32] [ssl] Universal-Omega merged asko1's pull request #861: T13876: Redirect trollpasta.miraheze.org to trollpasta.com (miraheze:main...asko1:T13876) https://github.com/miraheze/ssl/pull/861
[19:43:32] [ssl] Universal-Omega pushed 1 new commit to main https://github.com/miraheze/ssl/commit/9b9c415198db9ac71c40b72c3df327cfb9c59cde
[19:43:32] ssl/main Asko 9b9c415 T13876: Redirect trollpasta.miraheze.org to trollpasta.com (#861)…
[19:47:53] [Grafana] RESOLVED: MediaWiki JobQueue is stalled https://grafana.wikitide.net/d/GtxbP1Xnk?from=1750444940000&orgId=1&to=1750448770009
[19:52:37] [CreateWiki] Universal-Omega pushed 1 new commit to main https://github.com/miraheze/CreateWiki/commit/f337df2ea0477a4aa57627889123ac26bf31d04c
[19:52:38] CreateWiki/main CosmicAlpha f337df2 Add extra safety check
[19:52:54] !log [universalomega@mwtask181] starting deploy of {'versions': '1.43', 'upgrade_extensions': 'CreateWiki'} to all
[19:52:57] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[19:53:18] !log [universalomega@mwtask181] finished deploy of {'versions': '1.43', 'upgrade_extensions': 'CreateWiki'} to all - SUCCESS in 24s
[19:53:21] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[20:04:21] [ManageWiki] Universal-Omega created json from main (+0 new commit) https://github.com/miraheze/ManageWiki/compare/json
[20:05:46] [ManageWiki] Universal-Omega pushed 1 new commit to json https://github.com/miraheze/ManageWiki/commit/3578b3fbf99a3979ecf6933a7068777e71c753b4
[20:05:46] ManageWiki/json CosmicAlpha 3578b3f Use JSON datatype for JSON fields.…
[20:05:58] [ManageWiki] Universal-Omega opened pull request #709: Use JSON datatype for JSON fields. (main...json) https://github.com/miraheze/ManageWiki/pull/709
[20:06:04] [ManageWiki] coderabbitai[bot] commented on pull request #709: […] https://github.com/miraheze/ManageWiki/pull/709#issuecomment-2992666328
[20:11:13] miraheze/CreateWiki - Universal-Omega the build passed.
[20:11:37] miraheze/ManageWiki - Universal-Omega the build passed.
[20:21:34] !log deleted a few GUP related kafka topics because it was stuck on spam, it did something eventually good - cursed jobqueue
[20:21:37] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[20:21:57] !log also attempted to fix the stalled alert by preventing /0 on grafana (take 2)
[20:22:00] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[20:59:22] PROBLEM - db172 Puppet on db172 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[apt_update]
[21:01:22] RECOVERY - db172 Puppet on db172 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[21:11:24] [puppet] Universal-Omega created mariadb-apt (+1 new commit) https://github.com/miraheze/puppet/commit/6468c51fa08e
[21:11:24] puppet/mariadb-apt CosmicAlpha 6468c51 mariadb: fix apt package
[21:11:28] [puppet] Universal-Omega opened pull request #4417: mariadb: fix apt package (main...mariadb-apt) https://github.com/miraheze/puppet/pull/4417
[21:11:34] [puppet] coderabbitai[bot] commented on pull request #4417: […] https://github.com/miraheze/puppet/pull/4417#issuecomment-2992917377
[21:12:28] [puppet] Universal-Omega pushed 1 new commit to mariadb-apt https://github.com/miraheze/puppet/commit/4079ca09ee3128182b9b15a2fda649bf6024123a
[21:12:28] puppet/mariadb-apt CosmicAlpha 4079ca0 Add
[21:30:08] !log [somerandomdeveloper@mwtask181] starting deploy of {'versions': '1.43', 'upgrade_extensions': 'ThemeToggle'} to all
[21:30:09] !log [somerandomdeveloper@mwtask181] finished deploy of {'versions': '1.43', 'upgrade_extensions': 'ThemeToggle'} to all - SUCCESS in 1s
[21:30:11] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[21:30:14] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[21:31:13] !log [somerandomdeveloper@mwtask181] starting deploy of {'force_upgrade': True, 'versions': '1.43', 'upgrade_extensions': 'ThemeToggle'} to all
[21:31:16] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[21:31:36] !log [somerandomdeveloper@mwtask181] finished deploy of {'force_upgrade': True, 'versions': '1.43', 'upgrade_extensions': 'ThemeToggle'} to all - SUCCESS in 23s
[21:31:39] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[22:32:19] PROBLEM - db151 Puppet on db151 is WARNING: WARNING: Puppet is currently disabled, message: Universal Omega, last run 1 minute ago with 0 failures
[22:32:34] PROBLEM - db181 Puppet on db181 is WARNING: WARNING: Puppet is currently disabled, message: Universal Omega, last run 4 minutes ago with 0 failures
[22:33:15] PROBLEM - db161 Puppet on db161 is WARNING: WARNING: Puppet is currently disabled, message: Universal Omega, last run 18 minutes ago with 0 failures
[22:33:38] [puppet] Universal-Omega pushed 1 new commit to mariadb-apt https://github.com/miraheze/puppet/commit/0a087d008e20582ffda3cc660c77b05b2fa7afdd
[22:33:39] puppet/mariadb-apt CosmicAlpha 0a087d0 Update
[22:33:56] PROBLEM - db182 Puppet on db182 is WARNING: WARNING: Puppet is currently disabled, message: Universal Omega, last run 13 minutes ago with 0 failures
[22:34:06] PROBLEM - db171 Puppet on db171 is WARNING: WARNING: Puppet is currently disabled, message: Universal Omega, last run 15 minutes ago with 0 failures
[22:34:10] [puppet] github-actions[bot] pushed 1 new commit to mariadb-apt https://github.com/miraheze/puppet/commit/e2376b61a3abf3ca127ae235c9c095f7f13b25d3
[22:34:11] puppet/mariadb-apt github-actions e2376b6 CI: lint puppet code to standards…
[22:34:55] [puppet] Universal-Omega pushed 1 new commit to mariadb-apt https://github.com/miraheze/puppet/commit/978e9c70849552b1bf713681f60f1ca2044011d2
[22:34:55] puppet/mariadb-apt CosmicAlpha 978e9c7 Fix
[22:35:39] PROBLEM - db172 Puppet on db172 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[22:37:33] [puppet] Universal-Omega pushed 1 new commit to mariadb-apt https://github.com/miraheze/puppet/commit/6843bc92342d5ac778f5a750ed5f85e69db454ad
[22:37:34] puppet/mariadb-apt CosmicAlpha 6843bc9 Add spaces
[22:37:39] RECOVERY - db172 Puppet on db172 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[22:52:07] [puppet] Universal-Omega pushed 1 new commit to mariadb-apt https://github.com/miraheze/puppet/commit/e562bc0fc8a89b142cb874687d0634689ed4838e
[22:52:07] puppet/mariadb-apt CosmicAlpha e562bc0 Remove 10.5 and add 11.8 support
[23:01:53] [puppet] Universal-Omega pushed 1 new commit to mariadb-apt https://github.com/miraheze/puppet/commit/01fe890b8e1482e060d221e815d9630e3f3a9e64
[23:01:53] puppet/mariadb-apt CosmicAlpha 01fe890 -
[23:04:02] [puppet] Universal-Omega retitled PR #4417: "mariadb: fix apt package" ➜ "mariadb: fix apt package and add support for 11.8" https://github.com/miraheze/puppet/pull/4417
[23:05:44] [puppet] Universal-Omega pushed 1 new commit to mariadb-apt https://github.com/miraheze/puppet/commit/733ddc188a15877bc835e90c8cb4328a9b1af646
[23:05:45] puppet/mariadb-apt CosmicAlpha 733ddc1 Use mariadb-dump
[23:09:00] [puppet] Universal-Omega pushed 1 new commit to mariadb-apt https://github.com/miraheze/puppet/commit/53ad0a0b6ac1316a5d88f029b8256597d35ef46a
[23:09:00] puppet/mariadb-apt CosmicAlpha 53ad0a0 Remove mysql alias
[23:19:09] [puppet] Universal-Omega pushed 1 new commit to mariadb-apt https://github.com/miraheze/puppet/commit/a6dfefcd312a84c218bace742341e00f09946794
[23:19:09] puppet/mariadb-apt CosmicAlpha a6dfefc Remove mysql.service
[23:32:51] [puppet] Universal-Omega pushed 1 new commit to mariadb-apt https://github.com/miraheze/puppet/commit/bd97b74b58d15236724813d64f51fd902639a7e8
[23:32:51] puppet/mariadb-apt CosmicAlpha bd97b74 Update options
[23:58:41] [python-functions] dependabot[bot] created dependabot/pip/dot-github/flake8-7.3.0 (+1 new commit) https://github.com/miraheze/python-functions/commit/b7a3e1e7d382
[23:58:41] python-functions/dependabot/pip/dot-github/flake8-7.3.0 dependabot[bot] b7a3e1e build(deps): bump flake8 from 7.2.0 to 7.3.0 in /.github…
[23:58:43] [python-functions] dependabot[bot] added the label 'dependencies' to pull request #153 (build(deps): bump flake8 from 7.2.0 to 7.3.0 in /.github) https://github.com/miraheze/python-functions/pull/153
[23:58:45] [python-functions] dependabot[bot] added the label 'python' to pull request #153 (build(deps): bump flake8 from 7.2.0 to 7.3.0 in /.github) https://github.com/miraheze/python-functions/pull/153
[23:58:47] [python-functions] dependabot[bot] opened pull request #153: build(deps): bump flake8 from 7.2.0 to 7.3.0 in /.github (main...dependabot/pip/dot-github/flake8-7.3.0) https://github.com/miraheze/python-functions/pull/153
[23:58:49] [python-functions] dependabot[bot] added the label 'dependencies' to pull request #153
(build(deps): bump flake8 from 7.2.0 to 7.3.0 in /.github) 13https://github.com/miraheze/python-functions/pull/153 [23:58:51] [02python-functions] 07dependabot[bot] added the label 'python' to pull request #153 (build(deps): bump flake8 from 7.2.0 to 7.3.0 in /.github) 13https://github.com/miraheze/python-functions/pull/153 [23:58:53] [02python-functions] 07coderabbitai[bot] commented on pull request #153: --- […] 13https://github.com/miraheze/python-functions/pull/153#issuecomment-2993167782