[00:01:28] [02statichelp] 07WikiTideBot pushed 1 new commit to 03main 13https://github.com/miraheze/statichelp/commit/11a0cea9e32d044bd7b08e0deb23bdad4dc99441 [00:01:28] 02statichelp/03main 07WikiTideBot 0311a0cea Bot: Auto-update Tech namespace pages 2025-09-22 00:01:25 [00:11:31] RECOVERY - cp171 Disk Space on cp171 is OK: DISK OK - free space: / 53161MiB (11% inode=99%); [00:11:53] RECOVERY - cp201 Disk Space on cp201 is OK: DISK OK - free space: / 53031MiB (11% inode=99%); [00:12:52] RECOVERY - cp191 Disk Space on cp191 is OK: DISK OK - free space: / 53710MiB (11% inode=99%); [01:25:22] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/maintenance/importDump.php --wiki=haplessdatabasewiki dump.xml --no-updates (START) [01:25:23] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/maintenance/importDump.php --wiki=haplessdatabasewiki dump.xml --no-updates (END - exit=0) [01:25:24] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/maintenance/rebuildall.php --wiki=haplessdatabasewiki (START) [01:25:25] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/maintenance/rebuildall.php --wiki=haplessdatabasewiki (END - exit=0) [01:25:26] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [01:25:27] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/maintenance/initSiteStats.php --wiki=haplessdatabasewiki --update (END - exit=0) [01:25:29] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [01:25:32] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [01:25:36] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [01:25:39] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [01:27:55] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/maintenance/importDump.php --wiki=skiesofarcadiawiki dump.xml --no-updates (START) [01:27:59] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [01:30:24] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/maintenance/importDump.php --wiki=skiesofarcadiawiki dump.xml --no-updates (END - exit=256) [01:30:25] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/maintenance/rebuildall.php --wiki=skiesofarcadiawiki (START) [01:30:27] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [01:30:30] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [01:32:27] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/maintenance/rebuildall.php --wiki=skiesofarcadiawiki (END - exit=256) [01:32:28] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/maintenance/initSiteStats.php --wiki=skiesofarcadiawiki --update (END - exit=0) [01:32:31] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [01:32:34] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:51:53] PROBLEM - cp201 Disk Space on cp201 is WARNING: DISK WARNING - free space: / 49805MiB (10% inode=99%); [02:57:31] PROBLEM - cp171 Disk Space on cp171 is WARNING: 
DISK WARNING - free space: / 49833MiB (10% inode=99%); [03:22:52] PROBLEM - cp191 Disk Space on cp191 is WARNING: DISK WARNING - free space: / 49897MiB (10% inode=99%); [03:29:42] PROBLEM - matomo151 Check unit status of matomo-archiver-2 on matomo151 is CRITICAL: CRITICAL: Status of the systemd unit matomo-archiver-2 [06:12:34] [02mw-config] 07c-oreills commented on pull request #6113: I tried in incognito for both Firefox and Chrome on my mobile, and also using Chrome developer tools on my laptop with mobile mode enabled. […] 13https://github.com/miraheze/mw-config/pull/6113#issuecomment-3317091448 [07:55:39] PROBLEM - mwtask181 Current Load on mwtask181 is CRITICAL: LOAD CRITICAL - total load average: 24.66, 17.13, 12.06 [08:01:42] RECOVERY - matomo151 Check unit status of matomo-archiver-2 on matomo151 is OK: OK: Status of the systemd unit matomo-archiver-2 [09:07:39] PROBLEM - mwtask181 Current Load on mwtask181 is WARNING: LOAD WARNING - total load average: 17.64, 20.90, 23.73 [09:15:39] RECOVERY - mwtask181 Current Load on mwtask181 is OK: LOAD OK - total load average: 9.13, 13.66, 19.35 [10:13:40] PROBLEM - db171 Current Load on db171 is WARNING: LOAD WARNING - total load average: 5.63, 10.41, 6.25 [10:15:40] RECOVERY - db171 Current Load on db171 is OK: LOAD OK - total load average: 2.01, 7.45, 5.67 [11:18:33] !log [somerandomdeveloper@test151] starting deploy of {'versions': ['1.44', '1.45'], 'upgrade_extensions': 'SemanticScribunto'} to test151 [11:18:35] !log [somerandomdeveloper@test151] finished deploy of {'versions': ['1.44', '1.45'], 'upgrade_extensions': 'SemanticScribunto'} to test151 - SUCCESS in 1s [11:18:36] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [11:18:40] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [11:20:14] !log [somerandomdeveloper@mwtask181] starting deploy of {'folders': '1.44/extensions/SemanticScribunto'} to all [11:20:17] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [11:20:38] !log [somerandomdeveloper@mwtask181] finished deploy of {'folders': '1.44/extensions/SemanticScribunto'} to all - SUCCESS in 24s [11:20:42] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [11:23:19] !log [somerandomdeveloper@test151] starting deploy of {'versions': ['1.44', '1.45'], 'upgrade_skins': 'Citizen'} to test151 [11:23:22] !log [somerandomdeveloper@test151] finished deploy of {'versions': ['1.44', '1.45'], 'upgrade_skins': 'Citizen'} to test151 - SUCCESS in 2s [11:23:23] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [11:23:26] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [11:37:58] !log [somerandomdeveloper@mwtask181] starting deploy of {'versions': '1.44', 'upgrade_skins': 'Citizen'} to all [11:38:02] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [11:38:26] !log [somerandomdeveloper@mwtask181] finished deploy of {'versions': '1.44', 'upgrade_skins': 'Citizen'} to all - SUCCESS in 27s [11:38:30] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [12:01:00] !log [skye@mwtask171] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php CirrusSearch:UpdateSearchIndexConfig --wiki=superstarracerswiki --startOver (END - exit=0) [12:01:04] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [12:01:19] !log [skye@mwtask171] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php CirrusSearch:ForceSearchIndex 
--wiki=superstarracerswiki (END - exit=0) [12:01:23] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [12:05:01] !log [somerandomdeveloper@test151] starting deploy of {'versions': ['1.44', '1.45'], 'upgrade_extensions': 'DarkMode'} to test151 [12:05:02] !log [somerandomdeveloper@test151] finished deploy of {'versions': ['1.44', '1.45'], 'upgrade_extensions': 'DarkMode'} to test151 - SUCCESS in 1s [12:05:04] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [12:05:07] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [12:06:01] !log [somerandomdeveloper@mwtask181] starting deploy of {'versions': '1.44', 'upgrade_extensions': 'DarkMode'} to all [12:06:05] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [12:06:27] !log [somerandomdeveloper@mwtask181] finished deploy of {'versions': '1.44', 'upgrade_extensions': 'DarkMode'} to all - SUCCESS in 26s [12:06:31] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [12:15:22] [02DiscordNotifications] 07translatewiki pushed 1 new commit to 03main 13https://github.com/miraheze/DiscordNotifications/commit/0bef0e2db3dfbee0dc8c8bf19d55e6f60f5c97af [12:15:22] 02DiscordNotifications/03main 07translatewiki.net 030bef0e2 Localisation updates from https://translatewiki.net. [12:15:23] [02ErrorPages] 07translatewiki pushed 1 new commit to 03main 13https://github.com/miraheze/ErrorPages/commit/066527b70ca535d5174c625f7b5e8640b83cb640 [12:15:23] 02ErrorPages/03main 07translatewiki.net 03066527b Localisation updates from https://translatewiki.net. [12:15:28] [02ManageWiki] 07translatewiki pushed 1 new commit to 03main 13https://github.com/miraheze/ManageWiki/commit/3486360c4aa7614ec56774364719735230fd63c1 [12:15:28] 02ManageWiki/03main 07translatewiki.net 033486360 Localisation updates from https://translatewiki.net. [12:15:28] [02DataDump] 07translatewiki pushed 1 new commit to 03main 13https://github.com/miraheze/DataDump/commit/fb211b79d03caa3568476bc5450ba9d544c4b11f [12:15:29] 02DataDump/03main 07translatewiki.net 03fb211b7 Localisation updates from https://translatewiki.net. [12:15:30] [02MatomoAnalytics] 07translatewiki pushed 1 new commit to 03main 13https://github.com/miraheze/MatomoAnalytics/commit/56a2a0aa8f22cb41cf42373b7a33f2cc1f88ad4c [12:15:32] [02landing] 07translatewiki pushed 1 new commit to 03main 13https://github.com/miraheze/landing/commit/23d20ffc18dda83d8926f1807cefe4dfa420bf62 [12:15:34] 02MatomoAnalytics/03main 07translatewiki.net 0356a2a0a Localisation updates from https://translatewiki.net. [12:15:36] [02RequestCustomDomain] 07translatewiki pushed 1 new commit to 03main 13https://github.com/miraheze/RequestCustomDomain/commit/a759ee264d17a5890f9af80a6776de9b7a652fd5 [12:15:38] [02MirahezeMagic] 07translatewiki pushed 1 new commit to 03main 13https://github.com/miraheze/MirahezeMagic/commit/daccf7a84eef521dc8cde7f86302916304c8256f [12:15:40] 02landing/03main 07translatewiki.net 0323d20ff Localisation updates from https://translatewiki.net. [12:15:41] 02RequestCustomDomain/03main 07translatewiki.net 03a759ee2 Localisation updates from https://translatewiki.net. [12:15:43] 02MirahezeMagic/03main 07translatewiki.net 03daccf7a Localisation updates from https://translatewiki.net. 
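For reference, the overnight importDump runs above (haplessdatabasewiki at 01:25, and the skiesofarcadiawiki attempt at 01:27 that exited 256 and was retried successfully at 14:38) follow the usual MediaWiki dump-import sequence: import with deferred updates disabled, rebuild the derived tables, then refresh the site statistics. A minimal sketch of that sequence using the same entry points as the log; the wiki ID and dump path are placeholders:

    # 1. Import the XML dump; --no-updates skips link-table updates during the import itself
    sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php \
        /srv/mediawiki/1.44/maintenance/importDump.php --wiki=examplewiki dump.xml --no-updates
    # 2. Rebuild links and other derived data that --no-updates skipped
    sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php \
        /srv/mediawiki/1.44/maintenance/rebuildall.php --wiki=examplewiki
    # 3. Recount pages, edits and users so Special:Statistics reflects the import
    sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php \
        /srv/mediawiki/1.44/maintenance/initSiteStats.php --wiki=examplewiki --update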
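The 12:01 CirrusSearch commands above are the standard two-step rebuild of a wiki's search index: UpdateSearchIndexConfig with --startOver recreates the index configuration from scratch, discarding the existing index, and ForceSearchIndex then repopulates it by reindexing every page. A sketch with the wiki ID as a placeholder:

    # Recreate the search index, dropping the old one
    sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php \
        CirrusSearch:UpdateSearchIndexConfig --wiki=examplewiki --startOver
    # Reindex all pages into the freshly created index
    sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php \
        CirrusSearch:ForceSearchIndex --wiki=examplewiki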
[12:17:27] !log [somerandomdeveloper@test151] starting deploy of {'config': True} to test151 [12:17:28] !log [somerandomdeveloper@test151] finished deploy of {'config': True} to test151 - SUCCESS in 0s [12:17:30] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [12:17:34] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [12:20:18] miraheze/ErrorPages - translatewiki the build passed. [12:20:27] miraheze/landing - translatewiki the build passed. [12:20:29] miraheze/DataDump - translatewiki the build passed. [12:20:36] miraheze/MatomoAnalytics - translatewiki the build passed. [12:21:01] !log [somerandomdeveloper@test151] starting deploy of {'config': True} to test151 [12:21:02] !log [somerandomdeveloper@test151] finished deploy of {'config': True} to test151 - SUCCESS in 0s [12:21:04] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [12:21:08] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [12:21:10] miraheze/ManageWiki - translatewiki the build passed. [12:29:05] !log [somerandomdeveloper@test151] starting deploy of {'config': True} to test151 [12:29:06] !log [somerandomdeveloper@test151] finished deploy of {'config': True} to test151 - SUCCESS in 0s [12:29:09] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [12:29:12] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [12:30:16] miraheze/MirahezeMagic - translatewiki the build passed. [12:38:48] miraheze/DiscordNotifications - translatewiki the build passed. [12:39:04] miraheze/RequestCustomDomain - translatewiki the build passed. [13:05:57] [02puppet] 07SomeMWDev opened pull request #4535: Allow adding security overrides to PrivateSettings.php (07miraheze:03main...07SomeMWDev:03mw-security-settings) 13https://github.com/miraheze/puppet/pull/4535 [13:10:43] [02puppet] 07SomeMWDev opened pull request #4536: Fix typo in variable name (07miraheze:03main...07SomeMWDev:03fix-typo) 13https://github.com/miraheze/puppet/pull/4536 [13:21:27] !log [somerandomdeveloper@mwtask181] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php cleanupTitles --wiki=mtrwikiwiki --dry-run (END - exit=0) [13:21:30] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [13:22:09] !log [somerandomdeveloper@mwtask181] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php cleanupTitles --wiki=mtrwikiwiki (END - exit=0) [13:22:12] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [13:28:47] [02puppet] 07paladox merged 07SomeMWDev's pull request #4536: Fix typo in variable name (07miraheze:03main...07SomeMWDev:03fix-typo) 13https://github.com/miraheze/puppet/pull/4536 [13:28:48] [02puppet] 07paladox pushed 1 new commit to 03main 13https://github.com/miraheze/puppet/commit/795dc64c53937763082018647da46f117e1e9773 [13:28:49] 02puppet/03main 07SomeRandomDeveloper 03795dc64 Fix typo in variable name (#4536)… [14:02:27] [02mw-config] 07dependabot[bot] created 03dependabot/composer/phpunit/phpunit-12.3.12 (+1 new commit) 13https://github.com/miraheze/mw-config/commit/f83d57a73f4a [14:02:27] 02mw-config/03dependabot/composer/phpunit/phpunit-12.3.12 07dependabot[bot] 03f83d57a Update phpunit/phpunit requirement from 12.3.11 to 12.3.12… [14:02:29] [02mw-config] 07dependabot[bot] added the label 'php' to pull request #6122 (Update phpunit/phpunit requirement from 12.3.11 to 12.3.12) 13https://github.com/miraheze/mw-config/pull/6122 [14:02:29] [02mw-config] 
07dependabot[bot] opened pull request #6122: Update phpunit/phpunit requirement from 12.3.11 to 12.3.12 (03main...03dependabot/composer/phpunit/phpunit-12.3.12) 13https://github.com/miraheze/mw-config/pull/6122 [14:02:31] [02mw-config] 07dependabot[bot] added the label 'dependencies' to pull request #6122 (Update phpunit/phpunit requirement from 12.3.11 to 12.3.12) 13https://github.com/miraheze/mw-config/pull/6122 [14:02:33] [02mw-config] 07dependabot[bot] added the label 'php' to pull request #6122 (Update phpunit/phpunit requirement from 12.3.11 to 12.3.12) 13https://github.com/miraheze/mw-config/pull/6122 [14:02:35] [02mw-config] 07dependabot[bot] added the label 'dependencies' to pull request #6122 (Update phpunit/phpunit requirement from 12.3.11 to 12.3.12) 13https://github.com/miraheze/mw-config/pull/6122 [14:03:29] miraheze/mw-config - dependabot[bot] the build passed. [14:38:11] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/maintenance/importDump.php --wiki=skiesofarcadiawiki dump.xml --no-updates (START) [14:38:14] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [14:38:38] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/maintenance/importDump.php --wiki=skiesofarcadiawiki dump.xml --no-updates (END - exit=0) [14:38:39] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/maintenance/rebuildall.php --wiki=skiesofarcadiawiki (START) [14:38:42] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [14:38:46] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [14:41:33] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/maintenance/rebuildall.php --wiki=skiesofarcadiawiki (END - exit=0) [14:41:35] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/maintenance/initSiteStats.php --wiki=skiesofarcadiawiki --update (END - exit=0) [14:41:37] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [14:41:40] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [14:58:03] [02mw-config] 07AgentIsai pushed 1 new commit to 03main 13https://github.com/miraheze/mw-config/commit/d2a7d1081265fe65c75b4ddb35d6f0cfc02e9abe [14:58:03] 02mw-config/03main 07Agent Isai 03d2a7d10 Update sitenotice for maintenance [14:58:48] PROBLEM - matomo151 Disk Space on matomo151 is WARNING: DISK WARNING - free space: / 1954MiB (10% inode=89%); [14:59:02] miraheze/mw-config - AgentIsai the build passed. 
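The 13:21 cleanupTitles run above shows the usual pattern for potentially destructive maintenance scripts: a --dry-run pass first to see what would be touched, then the real run once the output looks sane. A sketch, with the wiki ID as a placeholder:

    # Report pages with invalid titles without changing anything
    sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php cleanupTitles --wiki=examplewiki --dry-run
    # Apply the fixes for real
    sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php cleanupTitles --wiki=examplewiki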
[14:59:52] !log [agent@mwtask181] starting deploy of {'pull': 'config', 'config': True} to all [14:59:55] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [15:00:17] !log [agent@mwtask181] finished deploy of {'pull': 'config', 'config': True} to all - SUCCESS in 25s [15:00:21] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [15:07:00] PROBLEM - cp191 Disk Space on cp191 is CRITICAL: connect to address 10.0.19.146 port 5666: No route to hostconnect to host 10.0.19.146 port 5666: No route to host [15:07:25] PROBLEM - ping on os191 is CRITICAL: CRITICAL - Host Unreachable (10.0.19.152) [15:07:28] PROBLEM - mw193 SSH on mw193 is CRITICAL: connect to address 10.0.19.164 and port 22: No route to host [15:07:31] PROBLEM - mem191 memcached on mem191 is CRITICAL: connect to address 10.0.19.154 and port 11211: No route to host [15:07:33] PROBLEM - mw192 Disk Space on mw192 is CRITICAL: connect to address 10.0.19.161 port 5666: No route to hostconnect to host 10.0.19.161 port 5666: No route to host [15:07:33] PROBLEM - mw192 Puppet on mw192 is CRITICAL: connect to address 10.0.19.161 port 5666: No route to hostconnect to host 10.0.19.161 port 5666: No route to host [15:07:33] PROBLEM - mw192 APT on mw192 is CRITICAL: connect to address 10.0.19.161 port 5666: No route to hostconnect to host 10.0.19.161 port 5666: No route to host [15:07:37] PROBLEM - os191 SSH on os191 is CRITICAL: connect to address 10.0.19.152 and port 22: No route to host [15:07:39] PROBLEM - mw191 php-fpm on mw191 is CRITICAL: connect to address 10.0.19.160 port 5666: No route to hostconnect to host 10.0.19.160 port 5666: No route to host [15:07:39] PROBLEM - mw192 php-fpm on mw192 is CRITICAL: connect to address 10.0.19.161 port 5666: No route to hostconnect to host 10.0.19.161 port 5666: No route to host [15:07:39] PROBLEM - mw192 ferm_active on mw192 is CRITICAL: connect to address 10.0.19.161 port 5666: No route to hostconnect to host 10.0.19.161 port 5666: No route to host [15:07:42] PROBLEM - mw191 APT on mw191 is CRITICAL: connect to address 10.0.19.160 port 5666: No route to hostconnect to host 10.0.19.160 port 5666: No route to host [15:07:45] PROBLEM - ping on mw191 is CRITICAL: CRITICAL - Host Unreachable (10.0.19.160) [15:07:45] PROBLEM - mw191 Puppet on mw191 is CRITICAL: connect to address 10.0.19.160 port 5666: No route to hostconnect to host 10.0.19.160 port 5666: No route to host [15:07:45] PROBLEM - mw193 MediaWiki Rendering on mw193 is CRITICAL: connect to address 10.0.19.164 and port 443: No route to hostHTTP CRITICAL - Unable to open TCP socket [15:07:46] PROBLEM - mw193 conntrack_table_size on mw193 is CRITICAL: connect to address 10.0.19.164 port 5666: No route to hostconnect to host 10.0.19.164 port 5666: No route to host [15:07:46] PROBLEM - mem191 PowerDNS Recursor on mem191 is CRITICAL: connect to address 10.0.19.154 port 5666: No route to hostconnect to host 10.0.19.154 port 5666: No route to host [15:07:46] PROBLEM - cp191 Haproxy TLS backend for mw152 on cp191 is CRITICAL: connect to address 10.0.19.146 port 5666: No route to hostconnect to host 10.0.19.146 port 5666: No route to host [15:07:51] PROBLEM - mw192 SSH on mw192 is CRITICAL: connect to address 10.0.19.161 and port 22: No route to host [15:07:52] PROBLEM - cp191 Nginx Backend for mw181 on cp191 is CRITICAL: connect to address 10.0.19.146 port 5666: No route to hostconnect to host 10.0.19.146 port 5666: No route to host 
[15:07:52] PROBLEM - cp191 Nginx Backend for phorge171 on cp191 is CRITICAL: connect to address 10.0.19.146 port 5666: No route to hostconnect to host 10.0.19.146 port 5666: No route to host [15:07:52] PROBLEM - cp191 Nginx Backend for mw203 on cp191 is CRITICAL: connect to address 10.0.19.146 port 5666: No route to hostconnect to host 10.0.19.146 port 5666: No route to host [15:07:52] PROBLEM - cp191 Haproxy TLS backend for reports171 on cp191 is CRITICAL: connect to address 10.0.19.146 port 5666: No route to hostconnect to host 10.0.19.146 port 5666: No route to host [15:07:52] PROBLEM - cp191 Nginx Backend for mw201 on cp191 is CRITICAL: connect to address 10.0.19.146 port 5666: No route to hostconnect to host 10.0.19.146 port 5666: No route to host [15:07:52] PROBLEM - cp191 Haproxy TLS backend for mw171 on cp191 is CRITICAL: connect to address 10.0.19.146 port 5666: No route to hostconnect to host 10.0.19.146 port 5666: No route to host [15:07:53] PROBLEM - cp191 Haproxy TLS backend for mw193 on cp191 is CRITICAL: connect to address 10.0.19.146 port 5666: No route to hostconnect to host 10.0.19.146 port 5666: No route to host [15:07:53] PROBLEM - mw201 MediaWiki Rendering on mw201 is CRITICAL: connect to address 10.0.20.162 and port 443: No route to hostHTTP CRITICAL - Unable to open TCP socket [15:07:54] PROBLEM - mw193 APT on mw193 is CRITICAL: connect to address 10.0.19.164 port 5666: No route to hostconnect to host 10.0.19.164 port 5666: No route to host [15:07:54] PROBLEM - mw193 Current Load on mw193 is CRITICAL: connect to address 10.0.19.164 port 5666: No route to hostconnect to host 10.0.19.164 port 5666: No route to host [15:07:55] PROBLEM - swiftobject191 ferm_active on swiftobject191 is CRITICAL: connect to address 10.0.19.120 port 5666: No route to hostconnect to host 10.0.19.120 port 5666: No route to host [15:07:55] PROBLEM - cp191 Haproxy TLS backend for mw183 on cp191 is CRITICAL: connect to address 10.0.19.146 port 5666: No route to hostconnect to host 10.0.19.146 port 5666: No route to host [15:08:14] PROBLEM - Host swiftobject191 is DOWN: CRITICAL - Host Unreachable (10.0.19.120) [15:08:16] PROBLEM - ping on mw193 is CRITICAL: CRITICAL - Host Unreachable (10.0.19.164) [15:08:18] PROBLEM - cp171 Varnish Backends on cp171 is CRITICAL: 9 backends are down. 
mw151 mw181 mw182 mw191 mw192 mw193 mw201 mw202 mw203 [15:08:19] PROBLEM - mw192 MediaWiki Rendering on mw192 is CRITICAL: connect to address 10.0.19.161 and port 443: No route to hostHTTP CRITICAL - Unable to open TCP socket [15:08:19] PROBLEM - mw192 HTTPS on mw192 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to mw192.fsslc.wtnet port 443 after 1898 ms: Couldn't connect to server [15:08:20] PROBLEM - cp201 HTTPS on cp201 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to cp201.wikitide.net port 443 after 2972 ms: Couldn't connect to server [15:08:22] PROBLEM - Host mw192 is DOWN: CRITICAL - Host Unreachable (10.0.19.161) [15:08:25] PROBLEM - mw193 PowerDNS Recursor on mw193 is CRITICAL: connect to address 10.0.19.164 port 5666: No route to hostconnect to host 10.0.19.164 port 5666: No route to host [15:08:25] PROBLEM - mw193 Disk Space on mw193 is CRITICAL: connect to address 10.0.19.164 port 5666: No route to hostconnect to host 10.0.19.164 port 5666: No route to host [15:08:28] PROBLEM - mw193 Puppet on mw193 is CRITICAL: connect to address 10.0.19.164 port 5666: No route to hostconnect to host 10.0.19.164 port 5666: No route to host [15:08:30] PROBLEM - cp201 Varnish Backends on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:08:30] PROBLEM - mw181 MediaWiki Rendering on mw181 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [15:08:32] PROBLEM - mw201 HTTPS on mw201 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to mw201.fsslc.wtnet port 443 after 820 ms: Couldn't connect to server [15:08:34] PROBLEM - mw172 MediaWiki Rendering on mw172 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [15:08:37] PROBLEM - mw173 MediaWiki Rendering on mw173 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [15:08:42] PROBLEM - mw171 MediaWiki Rendering on mw171 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.011 second response time [15:08:45] PROBLEM - mw203 MediaWiki Rendering on mw203 is CRITICAL: connect to address 10.0.20.165 and port 443: No route to hostHTTP CRITICAL - Unable to open TCP socket [15:08:46] PROBLEM - mw193 php-fpm on mw193 is CRITICAL: connect to address 10.0.19.164 port 5666: No route to hostconnect to host 10.0.19.164 port 5666: No route to host [15:08:46] PROBLEM - cp161 Varnish Backends on cp161 is CRITICAL: 15 backends are down. 
mw151 mw152 mw161 mw162 mw171 mw172 mw163 mw173 mw183 mw191 mw192 mw193 mw201 mw202 mw203 [15:08:46] PROBLEM - cp161 HTTPS on cp161 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502 [15:08:48] PROBLEM - mw151 MediaWiki Rendering on mw151 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [15:08:53] PROBLEM - mw193 HTTPS on mw193 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to mw193.fsslc.wtnet port 443 after 3076 ms: Couldn't connect to server [15:08:55] PROBLEM - mw203 NTP time on mw203 is CRITICAL: connect to address 10.0.20.165 port 5666: No route to hostconnect to host 10.0.20.165 port 5666: No route to host [15:08:55] PROBLEM - mw203 Puppet on mw203 is CRITICAL: connect to address 10.0.20.165 port 5666: No route to hostconnect to host 10.0.20.165 port 5666: No route to host [15:08:57] PROBLEM - mw161 MediaWiki Rendering on mw161 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [15:08:58] PROBLEM - ping on mw202 is CRITICAL: CRITICAL - Host Unreachable (10.0.20.163) [15:09:02] PROBLEM - cp171 HTTPS on cp171 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 503 [15:09:03] PROBLEM - ping6 on cloud19 is CRITICAL: PING CRITICAL - Packet loss = 100% [15:09:03] PROBLEM - cp201 Nginx Backend for mw191 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:09:04] PROBLEM - Host mw203 is DOWN: CRITICAL - Host Unreachable (10.0.20.165) [15:09:04] PROBLEM - mw202 Puppet on mw202 is CRITICAL: connect to address 10.0.20.163 port 5666: No route to hostconnect to host 10.0.20.163 port 5666: No route to host [15:09:04] PROBLEM - swiftobject201 Disk Space on swiftobject201 is CRITICAL: connect to address 10.0.20.145 port 5666: No route to hostconnect to host 10.0.20.145 port 5666: No route to host [15:09:04] PROBLEM - swiftobject201 Current Load on swiftobject201 is CRITICAL: connect to address 10.0.20.145 port 5666: No route to hostconnect to host 10.0.20.145 port 5666: No route to host [15:09:05] PROBLEM - mw151 HTTPS on mw151 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10002 milliseconds with 0 bytes received [15:09:06] PROBLEM - cp201 Haproxy TLS backend for mw192 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:09:06] PROBLEM - cp201 Haproxy TLS backend for mw191 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:09:06] PROBLEM - cp201 Haproxy TLS backend for mw183 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:09:06] PROBLEM - Host mw193 is DOWN: CRITICAL - Host Unreachable (10.0.19.164) [15:09:07] PROBLEM - mem201 Current Load on mem201 is CRITICAL: connect to address 10.0.20.148 port 5666: No route to hostconnect to host 10.0.20.148 port 5666: No route to host [15:09:07] PROBLEM - mw202 MediaWiki Rendering on mw202 is CRITICAL: connect to address 10.0.20.163 and port 443: No route to hostHTTP CRITICAL - Unable to open TCP socket [15:09:18] PROBLEM - ping on swiftobject201 is CRITICAL: CRITICAL - Host Unreachable (10.0.20.145) [15:09:18] PROBLEM - os201 conntrack_table_size on os201 is CRITICAL: 
connect to address 10.0.20.156 port 5666: No route to hostconnect to host 10.0.20.156 port 5666: No route to host [15:09:19] PROBLEM - os201 PowerDNS Recursor on os201 is CRITICAL: connect to address 10.0.20.156 port 5666: No route to hostconnect to host 10.0.20.156 port 5666: No route to host [15:09:19] PROBLEM - cloud19 SSH on cloud19 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [15:09:20] PROBLEM - cp201 Nginx Backend for swiftproxy171 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:09:20] PROBLEM - cp201 Haproxy TLS backend for mw151 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:09:21] PROBLEM - cloud19 NTP time on cloud19 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [15:09:21] PROBLEM - cp201 Nginx Backend for mw153 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:09:22] PROBLEM - changeprop201 changeprop on changeprop201 is CRITICAL: connect to address 10.0.20.149 and port 7200: No route to host [15:09:22] PROBLEM - changeprop201 conntrack_table_size on changeprop201 is CRITICAL: connect to address 10.0.20.149 port 5666: No route to hostconnect to host 10.0.20.149 port 5666: No route to host [15:09:23] PROBLEM - os201 Current Load on os201 is CRITICAL: connect to address 10.0.20.156 port 5666: No route to hostconnect to host 10.0.20.156 port 5666: No route to host [15:09:23] PROBLEM - cp201 Check unit status of haproxy_stek_job on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:09:24] PROBLEM - cp201 Haproxy TLS backend for mwtask171 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:09:24] PROBLEM - cp201 Nginx Backend for mw182 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:09:25] PROBLEM - os201 NTP time on os201 is CRITICAL: connect to address 10.0.20.156 port 5666: No route to hostconnect to host 10.0.20.156 port 5666: No route to host [15:09:25] PROBLEM - os202 Current Load on os202 is CRITICAL: connect to address 10.0.20.167 port 5666: No route to hostconnect to host 10.0.20.167 port 5666: No route to host [15:09:26] PROBLEM - cp201 Haproxy TLS backend for mw181 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:09:26] PROBLEM - cp201 Nginx Backend for mwtask161 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:09:38] PROBLEM - mw201 Disk Space on mw201 is CRITICAL: connect to address 10.0.20.162 port 5666: No route to hostconnect to host 10.0.20.162 port 5666: No route to host [15:09:38] PROBLEM - changeprop201 Haproxy backend for localhost:9007 on changeprop201 is CRITICAL: connect to address 10.0.20.149 port 5666: No route to hostconnect to host 10.0.20.149 port 5666: No route to host [15:09:39] PROBLEM - cp201 Nginx Backend for swiftproxy161 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:09:39] PROBLEM - os201 APT on os201 is 
CRITICAL: connect to address 10.0.20.156 port 5666: No route to hostconnect to host 10.0.20.156 port 5666: No route to host [15:09:40] PROBLEM - Host changeprop201 is DOWN: CRITICAL - Host Unreachable (10.0.20.149) [15:09:40] PROBLEM - cp201 Nginx Backend for test151 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:09:41] PROBLEM - cp201 Haproxy TLS backend for phorge171 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host PROBLEM - cp201 ferm_active on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:09:41] PROBLEM - cp201 Haproxy TLS backend for reports171 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:09:42] PROBLEM - cp201 Nginx Backend for mw162 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:09:42] PROBLEM - puppet181 Check unit status of listdomains_github_push on puppet181 is CRITICAL: CRITICAL: Status of the systemd unit listdomains_github_push [15:09:43] PROBLEM - cp201 Nginx Backend for mw163 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:09:43] PROBLEM - cp201 Haproxy TLS backend for swiftproxy161 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:09:44] PROBLEM - Host cloud19 is DOWN: PING CRITICAL - Packet loss = 100% [15:09:44] PROBLEM - mw172 HTTPS on mw172 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10003 milliseconds with 0 bytes received [15:09:45] PROBLEM - swiftobject201 Puppet on swiftobject201 is CRITICAL: connect to address 10.0.20.145 port 5666: No route to hostconnect to host 10.0.20.145 port 5666: No route to host [15:09:45] PROBLEM - ping on cp201 is CRITICAL: CRITICAL - Host Unreachable (10.0.20.166) [15:09:46] PROBLEM - cp201 Haproxy TLS backend for matomo151 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:09:46] PROBLEM - cp201 Nginx Backend for mwtask181 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:10:00] PROBLEM - cp201 Haproxy TLS backend for mw172 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:10:00] PROBLEM - cp201 Haproxy TLS backend for mwtask151 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:10:00] PROBLEM - cp201 Haproxy TLS backend for mwtask161 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:10:03] PROBLEM - cp201 Haproxy TLS backend for swiftproxy171 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:10:05] PROBLEM - Host swiftobject201 is DOWN: CRITICAL - Host Unreachable (10.0.20.145) [15:10:05] PROBLEM 
- ping on os201 is CRITICAL: CRITICAL - Host Unreachable (10.0.20.156) [15:10:06] PROBLEM - cp201 Puppet on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [15:10:06] PROBLEM - Host cp201 is DOWN: CRITICAL - Host Unreachable (10.0.20.166) [15:10:07] [02puppet] 07paladox created 03paladox-patch-6 (+1 new commit) 13https://github.com/miraheze/puppet/commit/c57402fd1366 [15:10:07] 02puppet/03paladox-patch-6 07paladox 03c57402f mediawiki: Increase apcu and opcache size [15:10:08] PROBLEM - Host os202 is DOWN: CRITICAL - Host Unreachable (10.0.20.167) [15:10:11] [02puppet] 07paladox opened pull request #4537: mediawiki: Increase apcu and opcache size (03main...03paladox-patch-6) 13https://github.com/miraheze/puppet/pull/4537 [15:10:18] PROBLEM - ping6 on cloud20 is CRITICAL: PING CRITICAL - Packet loss = 100% [15:10:19] PROBLEM - Host os201 is DOWN: CRITICAL - Host Unreachable (10.0.20.156) [15:10:20] [02puppet] 07coderabbitai[bot] commented on pull request #4537:
[…] 13https://github.com/miraheze/puppet/pull/4537#issuecomment-3319642112 [15:10:33] PROBLEM - cloud20 conntrack_table_size on cloud20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [15:10:36] RECOVERY - mw172 MediaWiki Rendering on mw172 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 4.832 second response time [15:10:36] RECOVERY - mw173 MediaWiki Rendering on mw173 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 4.965 second response time [15:10:37] RECOVERY - mw181 MediaWiki Rendering on mw181 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 8.303 second response time [15:10:37] PROBLEM - cloud20 Puppet on cloud20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [15:10:38] PROBLEM - Host cloud20 is DOWN: PING CRITICAL - Packet loss = 100% [15:10:46] RECOVERY - mw171 MediaWiki Rendering on mw171 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 5.008 second response time [15:10:51] RECOVERY - mw151 MediaWiki Rendering on mw151 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 4.644 second response time [15:10:57] RECOVERY - mw161 MediaWiki Rendering on mw161 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 5.127 second response time [15:10:59] RECOVERY - mw151 HTTPS on mw151 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.179 second response time [15:11:18] PROBLEM - mw192.fsslc.wtnet SSL Check on sslhost is CRITICAL: connect to address mw192.fsslc.wtnet and port 443: No route to hostHTTP CRITICAL - Unable to open TCP socket [15:11:40] PROBLEM - mw183 HTTPS on mw183 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received [15:11:51] PROBLEM - mw163 MediaWiki Rendering on mw163 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [15:12:03] PROBLEM - mw173 HTTPS on mw173 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10003 milliseconds with 0 bytes received [15:12:35] PROBLEM - mw182 MediaWiki Rendering on mw182 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [15:13:39] RECOVERY - mw172 HTTPS on mw172 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.186 second response time [15:14:11] PROBLEM - mw162 MediaWiki Rendering on mw162 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [15:14:17] PROBLEM - mw162 HTTPS on mw162 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received [15:14:46] PROBLEM - mw153 HTTPS on mw153 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10003 milliseconds with 0 bytes received [15:14:47] PROBLEM - mw173 MediaWiki Rendering on mw173 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [15:15:02] PROBLEM - mw163 HTTPS on mw163 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10003 milliseconds with 0 bytes received [15:15:07] PROBLEM - mw171 HTTPS on mw171 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10003 milliseconds with 0 bytes received [15:15:10] PROBLEM - mw182 HTTPS on mw182 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - 
Operation timed out after 10002 milliseconds with 0 bytes received [15:15:13] PROBLEM - mw171 MediaWiki Rendering on mw171 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [15:15:15] PROBLEM - mw151 MediaWiki Rendering on mw151 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [15:15:15] PROBLEM - mw151 HTTPS on mw151 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10000 milliseconds with 0 bytes received [15:15:17] PROBLEM - mw181 HTTPS on mw181 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10000 milliseconds with 0 bytes received [15:15:18] PROBLEM - mw161 MediaWiki Rendering on mw161 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [15:15:20] PROBLEM - mw161 HTTPS on mw161 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received [15:15:42] RECOVERY - mw153 MediaWiki Rendering on mw153 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 4.990 second response time [15:15:58] RECOVERY - mw163 MediaWiki Rendering on mw163 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 5.184 second response time [15:16:39] RECOVERY - mw182 MediaWiki Rendering on mw182 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 5.044 second response time [15:17:41] RECOVERY - mw183 HTTPS on mw183 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.208 second response time [15:17:48] PROBLEM - mw172 HTTPS on mw172 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received [15:17:58] PROBLEM - mw191.fsslc.wtnet SSL Check on sslhost is CRITICAL: connect to address mw191.fsslc.wtnet and port 443: No route to hostHTTP CRITICAL - Unable to open TCP socket [15:18:11] RECOVERY - mw173 HTTPS on mw173 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.061 second response time [15:18:11] PROBLEM - mw172 MediaWiki Rendering on mw172 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [15:18:13] PROBLEM - mw181 MediaWiki Rendering on mw181 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [15:18:45] RECOVERY - mw153 HTTPS on mw153 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.216 second response time [15:19:14] RECOVERY - mw182 HTTPS on mw182 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.193 second response time [15:19:21] PROBLEM - mw183 MediaWiki Rendering on mw183 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [15:19:23] RECOVERY - mw161 MediaWiki Rendering on mw161 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 4.553 second response time [15:20:10] RECOVERY - mw162 MediaWiki Rendering on mw162 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 4.517 second response time [15:20:20] RECOVERY - mw162 HTTPS on mw162 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.188 second response time [15:20:21] PROBLEM - mw163 MediaWiki Rendering on mw163 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [15:20:37] PROBLEM - mw152 MediaWiki Rendering on mw152 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [15:20:43] PROBLEM - mw152 HTTPS on mw152 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL 
returned 28 - Operation timed out after 10001 milliseconds with 0 bytes received [15:20:49] PROBLEM - mw182 MediaWiki Rendering on mw182 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [15:21:00] PROBLEM - mw203.fsslc.wtnet SSL Check on sslhost is CRITICAL: connect to address mw203.fsslc.wtnet and port 443: No route to hostHTTP CRITICAL - Unable to open TCP socket [15:21:11] RECOVERY - mw163 HTTPS on mw163 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.183 second response time [15:21:14] RECOVERY - mw171 HTTPS on mw171 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.185 second response time [15:21:14] RECOVERY - mw181 HTTPS on mw181 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.347 second response time [15:21:26] RECOVERY - mw171 MediaWiki Rendering on mw171 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 4.753 second response time [15:21:37] RECOVERY - mw161 HTTPS on mw161 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.192 second response time [15:21:47] RECOVERY - mw172 HTTPS on mw172 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.200 second response time [15:22:54] PROBLEM - mw153 HTTPS on mw153 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10000 milliseconds with 0 bytes received [15:23:04] [02puppet] 07AgentIsai pushed 1 new commit to 03main 13https://github.com/miraheze/puppet/commit/d77e70db944528412f46ba48241797f9293a70ee [15:23:04] 02puppet/03main 07Agent Isai 03d77e70d Remove shard03 and shard04 [15:23:15] PROBLEM - mw153 MediaWiki Rendering on mw153 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [15:23:24] PROBLEM - mw182 HTTPS on mw182 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received [15:23:31] RECOVERY - mw183 MediaWiki Rendering on mw183 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 4.764 second response time [15:23:43] PROBLEM - mw161 MediaWiki Rendering on mw161 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [15:24:30] RECOVERY - mw172 MediaWiki Rendering on mw172 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 4.898 second response time [15:24:33] PROBLEM - mw162 HTTPS on mw162 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10001 milliseconds with 0 bytes received [15:24:45] RECOVERY - mw152 MediaWiki Rendering on mw152 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 4.673 second response time [15:24:49] RECOVERY - mw152 HTTPS on mw152 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.057 second response time [15:25:03] RECOVERY - Host cloud20 is UP: PING OK - Packet loss = 70%, RTA = 0.27 ms [15:25:07] RECOVERY - ping6 on cloud20 is OK: PING OK - Packet loss = 0%, RTA = 0.28 ms [15:25:26] PROBLEM - mw163 HTTPS on mw163 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received [15:25:39] RECOVERY - mw161 MediaWiki Rendering on mw161 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.174 second response time [15:25:41] RECOVERY - mw151 MediaWiki Rendering on mw151 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 5.993 second 
response time [15:26:19] RECOVERY - mw163 MediaWiki Rendering on mw163 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.190 second response time [15:26:31] RECOVERY - mw162 HTTPS on mw162 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.061 second response time [15:26:36] RECOVERY - mw181 MediaWiki Rendering on mw181 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.202 second response time [15:26:39] PROBLEM - cloud20 IPMI Sensors on cloud20 is CRITICAL: IPMI Status: Critical [2 system event log (SEL) entries present] [15:26:39] RECOVERY - cloud20 SSH on cloud20 is OK: SSH OK - OpenSSH_9.2p1 Debian-2+deb12u7 (protocol 2.0) [15:26:44] RECOVERY - cp161 HTTPS on cp161 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4216 bytes in 0.061 second response time [15:26:53] RECOVERY - mw182 MediaWiki Rendering on mw182 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.200 second response time [15:26:53] RECOVERY - mw153 HTTPS on mw153 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.057 second response time [15:26:57] RECOVERY - cp171 HTTPS on cp171 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4161 bytes in 0.060 second response time [15:26:58] RECOVERY - mw173 MediaWiki Rendering on mw173 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.209 second response time [15:27:07] RECOVERY - mw151 HTTPS on mw151 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.062 second response time [15:27:13] RECOVERY - cloud20 Puppet on cloud20 is OK: OK: Puppet is currently enabled, last run 25 minutes ago with 0 failures [15:27:16] RECOVERY - cloud20 conntrack_table_size on cloud20 is OK: OK: nf_conntrack is 0 % full [15:27:21] RECOVERY - mw153 MediaWiki Rendering on mw153 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.166 second response time [15:27:22] RECOVERY - mw163 HTTPS on mw163 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.057 second response time [15:27:27] RECOVERY - mw182 HTTPS on mw182 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.067 second response time [15:27:45] RECOVERY - cp161 HTTP 4xx/5xx ERROR Rate on cp161 is OK: OK - NGINX Error Rate is 12% [15:27:55] PROBLEM - mw193.fsslc.wtnet SSL Check on sslhost is CRITICAL: connect to address mw193.fsslc.wtnet and port 443: No route to hostHTTP CRITICAL - Unable to open TCP socket [15:30:15] RECOVERY - Host swiftobject201 is UP: PING OK - Packet loss = 0%, RTA = 0.22 ms [15:30:43] RECOVERY - swiftobject201 Disk Space on swiftobject201 is OK: DISK OK - free space: / 1360815MiB (50% inode=92%); [15:30:54] RECOVERY - Host mem201 is UP: PING OK - Packet loss = 0%, RTA = 0.29 ms [15:30:55] RECOVERY - swiftobject201 Current Load on swiftobject201 is OK: LOAD OK - total load average: 0.70, 0.23, 0.08 [15:31:04] RECOVERY - ping on swiftobject201 is OK: PING OK - Packet loss = 0%, RTA = 0.23 ms [15:31:06] RECOVERY - mem201 Puppet on mem201 is OK: OK: Puppet is currently enabled, last run 24 minutes ago with 0 failures [15:31:19] RECOVERY - swiftobject201 Check unit status of disable-rsync on swiftobject201 is OK: OK: Status of the systemd unit disable-rsync [15:31:26] RECOVERY - swiftobject201 NTP time on swiftobject201 is OK: NTP OK: Offset 0.05542480946 secs [15:31:30] RECOVERY - Host mw202 is UP: PING OK - Packet loss = 0%, RTA = 0.27 ms [15:31:35] RECOVERY - swiftobject201 Puppet on swiftobject201 is OK: OK: Puppet is 
currently enabled, last run 50 minutes ago with 0 failures [15:31:37] RECOVERY - Host changeprop201 is UP: PING OK - Packet loss = 0%, RTA = 0.22 ms [15:31:42] RECOVERY - changeprop201 Haproxy backend for localhost:9007 on changeprop201 is OK: TCP OK - 0.000 second response time on localhost port 9007 [15:31:50] RECOVERY - swiftobject201 ferm_active on swiftobject201 is OK: OK ferm input default policy is set [15:31:50] RECOVERY - swiftobject201 APT on swiftobject201 is OK: APT OK: 76 packages available for upgrade (0 critical updates). [15:31:53] RECOVERY - Host mw201 is UP: PING OK - Packet loss = 0%, RTA = 0.28 ms [15:32:06] RECOVERY - Host os202 is UP: PING OK - Packet loss = 0%, RTA = 0.26 ms [15:32:13] RECOVERY - Host cp201 is UP: PING OK - Packet loss = 0%, RTA = 0.23 ms [15:32:20] RECOVERY - cp201 HTTPS on cp201 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4161 bytes in 0.068 second response time [15:32:23] RECOVERY - Host os201 is UP: PING OK - Packet loss = 0%, RTA = 0.25 ms [15:32:36] RECOVERY - mw202 ferm_active on mw202 is OK: OK ferm input default policy is set [15:32:43] RECOVERY - cp201 Nginx Backend for mw191 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8129 [15:32:45] RECOVERY - cp201 Haproxy TLS backend for mw162 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8116 [15:32:46] RECOVERY - cp201 Haproxy TLS backend for mw153 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8121 [15:32:46] RECOVERY - cp201 Current Load on cp201 is OK: LOAD OK - total load average: 1.94, 0.57, 0.20 [15:32:47] RECOVERY - Host mw203 is UP: PING OK - Packet loss = 0%, RTA = 0.44 ms [15:32:48] RECOVERY - cp201 Haproxy TLS backend for mw192 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8130 [15:32:49] RECOVERY - cp201 Haproxy TLS backend for mw191 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8129 [15:32:50] RECOVERY - mw203 Puppet on mw203 is OK: OK: Puppet is currently enabled, last run 51 minutes ago with 0 failures [15:32:51] RECOVERY - cp201 Haproxy TLS backend for mon181 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8201 [15:32:52] RECOVERY - ping on mw202 is OK: PING OK - Packet loss = 0%, RTA = 0.65 ms [15:32:56] RECOVERY - cp201 Nginx Backend for matomo151 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8203 [15:32:57] RECOVERY - cp201 Nginx Backend for mwtask171 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8161 [15:32:58] RECOVERY - cp201 Haproxy TLS backend for mw183 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8127 [15:32:59] RECOVERY - changeprop201 changeprop on changeprop201 is OK: TCP OK - 0.000 second response time on 10.0.20.149 port 7200 [15:32:59] RECOVERY - mem201 Current Load on mem201 is OK: LOAD OK - total load average: 0.29, 0.14, 0.05 [15:33:00] RECOVERY - cp201 Nginx Backend for mw161 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8115 [15:33:00] RECOVERY - cp201 conntrack_table_size on cp201 is OK: OK: nf_conntrack is 1 % full [15:33:00] RECOVERY - os202 conntrack_table_size on os202 is OK: OK: nf_conntrack is 0 % full [15:33:03] RECOVERY - cp201 Nginx Backend for puppet181 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8204 [15:33:04] RECOVERY - cp201 Nginx Backend for mon181 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8201 [15:33:04] RECOVERY - changeprop201 
conntrack_table_size on changeprop201 is OK: OK: nf_conntrack is 0 % full [15:33:04] PROBLEM - mw203 HTTPS on mw203 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to mw203.fsslc.wtnet port 443 after 0 ms: Couldn't connect to server [15:33:04] RECOVERY - mw202 Puppet on mw202 is OK: OK: Puppet is currently enabled, last run 54 minutes ago with 0 failures [15:33:04] RECOVERY - cp201 Nginx Backend for mw203 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8134 [15:33:05] RECOVERY - os202 ferm_active on os202 is OK: OK ferm input default policy is set [15:33:05] RECOVERY - os202 Disk Space on os202 is OK: DISK OK - free space: / 402829MiB (90% inode=99%); [15:33:06] RECOVERY - cp201 Haproxy TLS backend for mw152 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8114 [15:33:07] RECOVERY - cp201 Nginx Backend for swiftproxy171 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8207 [15:33:07] RECOVERY - os202 Current Load on os202 is OK: LOAD OK - total load average: 1.16, 0.54, 0.21 [15:33:08] RECOVERY - cp201 Nginx Backend for mwtask161 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8163 [15:33:09] RECOVERY - os201 NTP time on os201 is OK: NTP OK: Offset -0.0001370608807 secs [15:33:09] RECOVERY - os201 conntrack_table_size on os201 is OK: OK: nf_conntrack is 0 % full [15:33:09] RECOVERY - os201 PowerDNS Recursor on os201 is OK: DNS OK: 0.042 seconds response time. os201.fsslc.wtnet returns 10.0.20.156 [15:33:14] RECOVERY - os201 Current Load on os201 is OK: LOAD OK - total load average: 0.67, 0.43, 0.18 [15:33:14] PROBLEM - mw202 HTTPS on mw202 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to mw202.fsslc.wtnet port 443 after 0 ms: Couldn't connect to server [15:33:14] RECOVERY - cp201 Haproxy TLS backend for mw151 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8113 [15:33:14] RECOVERY - cp201 Nginx Backend for mw182 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8120 [15:33:19] RECOVERY - mw201 conntrack_table_size on mw201 is OK: OK: nf_conntrack is 0 % full [15:33:24] RECOVERY - cp201 Haproxy TLS backend for mw161 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8115 [15:33:24] RECOVERY - cp201 Nginx Backend for mw151 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8113 [15:33:24] RECOVERY - cp201 Haproxy TLS backend for mw181 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8119 [15:33:24] RECOVERY - cp201 SSH on cp201 is OK: SSH OK - OpenSSH_9.2p1 Debian-2+deb12u7 (protocol 2.0) [15:33:24] RECOVERY - cp201 Nginx Backend for mw173 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8125 [15:33:24] RECOVERY - mw201 NTP time on mw201 is OK: NTP OK: Offset -0.0001024603844 secs [15:33:24] RECOVERY - cp201 Nginx Backend for mw193 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8131 [15:33:25] RECOVERY - mw201 APT on mw201 is OK: APT OK: 132 packages available for upgrade (0 critical updates). [15:33:29] RECOVERY - os202 APT on os202 is OK: APT OK: 75 packages available for upgrade (0 critical updates). 
[15:33:29] RECOVERY - cp201 Check unit status of haproxy_stek_job on cp201 is OK: OK: Status of the systemd unit haproxy_stek_job [15:33:29] RECOVERY - cp201 Haproxy TLS backend for mwtask171 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8161 [15:33:29] RECOVERY - cp201 Nginx Backend for mw153 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8121 [15:33:29] RECOVERY - os201 APT on os201 is OK: APT OK: 84 packages available for upgrade (0 critical updates). [15:33:29] RECOVERY - cp201 Nginx Backend for mw171 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8117 [15:33:29] RECOVERY - cp201 Nginx Backend for mw202 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8133 [15:33:44] RECOVERY - os201 Disk Space on os201 is OK: DISK OK - free space: / 216608MiB (48% inode=99%); [15:33:44] RECOVERY - cp201 Haproxy TLS backend for mwtask181 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8160 [15:33:44] RECOVERY - cp201 Nginx Backend for mwtask181 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8160 [15:33:44] RECOVERY - cp201 Nginx Backend for test151 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8181 [15:33:44] RECOVERY - cp201 Haproxy TLS backend for matomo151 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8203 [15:33:44] RECOVERY - cp201 Haproxy TLS backend for phorge171 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8202 [15:33:44] RECOVERY - mw201 php-fpm on mw201 is OK: PROCS OK: 25 processes with command name 'php-fpm8.2' [15:33:49] RECOVERY - cp201 Haproxy TLS backend for mw202 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8133 [15:33:49] RECOVERY - cp201 Haproxy TLS backend for mw201 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8132 [15:33:49] RECOVERY - os201 Puppet on os201 is OK: OK: Puppet is currently enabled, last run 38 seconds ago with 0 failures [15:33:49] RECOVERY - cp201 APT on cp201 is OK: APT OK: 83 packages available for upgrade (0 critical updates). 
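
The "Check unit status of ..." and Puppet checks recovering here map directly onto systemd and the Puppet agent. Rough manual equivalents on the affected host (only the unit name is taken from the log; the commands are illustrative):

# Confirm the systemd unit behind "Check unit status of haproxy_stek_job" is healthy.
systemctl is-failed haproxy_stek_job
journalctl -u haproxy_stek_job -n 20
# Confirm Puppet runs cleanly without applying anything.
sudo puppet agent --test --noop
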
[15:33:49] RECOVERY - ping on cp201 is OK: PING OK - Packet loss = 0%, RTA = 0.25 ms [15:33:59] RECOVERY - ping on os202 is OK: PING OK - Packet loss = 0%, RTA = 0.25 ms [15:33:59] RECOVERY - cp201 Nginx Backend for mwtask151 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8162 [15:33:59] RECOVERY - cp201 Haproxy TLS backend for mwtask161 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8163 [15:33:59] RECOVERY - cp201 Nginx Backend for mw172 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8118 [15:33:59] RECOVERY - cp201 Haproxy TLS backend for mwtask151 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8162 [15:33:59] PROBLEM - cp201 Disk Space on cp201 is WARNING: DISK WARNING - free space: / 34931MiB (7% inode=99%); [15:33:59] RECOVERY - os201 ferm_active on os201 is OK: OK ferm input default policy is set [15:34:00] RECOVERY - cp201 Nginx Backend for reports171 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8205 [15:34:04] RECOVERY - cp201 Haproxy TLS backend for mw172 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8118 [15:34:04] RECOVERY - cp201 Haproxy TLS backend for swiftproxy171 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8207 [15:34:09] RECOVERY - cp201 Puppet on cp201 is OK: OK: Puppet is currently enabled, last run 21 seconds ago with 0 failures [15:34:09] RECOVERY - ping on os201 is OK: PING OK - Packet loss = 0%, RTA = 0.44 ms [15:34:39] RECOVERY - mw201 HTTPS on mw201 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.056 second response time [15:34:45] RECOVERY - mw203 MediaWiki Rendering on mw203 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 3.146 second response time [15:34:49] RECOVERY - mw203 NTP time on mw203 is OK: NTP OK: Offset -0.0001525580883 secs [15:35:00] RECOVERY - mw202 MediaWiki Rendering on mw202 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.220 second response time [15:35:02] RECOVERY - mw203 HTTPS on mw203 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.096 second response time [15:35:09] RECOVERY - mw202 HTTPS on mw202 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.065 second response time [15:35:40] RECOVERY - puppet181 Check unit status of listdomains_github_push on puppet181 is OK: OK: Status of the systemd unit listdomains_github_push [15:38:26] RECOVERY - mw201.fsslc.wtnet SSL Check on sslhost is OK: OK - Certificate 'CloudFlare Origin Certificate' will expire on Tue 03 Apr 2040 12:14:00 PM GMT +0000. [15:50:30] RECOVERY - mw203.fsslc.wtnet SSL Check on sslhost is OK: OK - Certificate 'CloudFlare Origin Certificate' will expire on Tue 03 Apr 2040 12:14:00 PM GMT +0000. 
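
cp201's disk-space warning fires again within seconds of the host recovering, and cp171 goes critical at 5% free soon after, so the space pressure on / is independent of the outage itself. A generic first pass at finding what is filling the root filesystem (commands are illustrative, not specific to this incident):

# Illustrative disk-space triage on a cp host.
df -h /
du -xh --max-depth=2 / 2>/dev/null | sort -rh | head -n 20
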
[15:51:31] PROBLEM - cp171 Disk Space on cp171 is CRITICAL: DISK CRITICAL - free space: / 27193MiB (5% inode=99%); [16:02:56] [Grafana] FIRING: The mediawiki JobQueue backlog is increasing by more than 100 jobs a minute over an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1758553340000&orgId=1&to=1758556976681 [16:15:27] [02ssl] 07WikiTideBot pushed 1 new commit to 03main 13https://github.com/miraheze/ssl/commit/cb9375b362ffe291f9fb637be6491ec4e0662e58 [16:15:27] 02ssl/03main 07WikiTideBot 03cb9375b Bot: Auto-update domain lists [16:32:56] [Grafana] RESOLVED: MediaWiki Jobs are rapidly increasing https://grafana.wikitide.net/d/GtxbP1Xnk?from=1758555020000&orgId=1&to=1758558620000 [16:46:22] RECOVERY - Host cloud19 is UP: PING OK - Packet loss = 44%, RTA = 0.29 ms [16:46:22] RECOVERY - cloud19 NTP time on cloud19 is OK: NTP OK: Offset 0.09274747968 secs [16:46:46] PROBLEM - cloud19 Puppet on cloud19 is WARNING: WARNING: Puppet last ran 1 hour ago [16:46:51] RECOVERY - ping6 on cloud19 is OK: PING OK - Packet loss = 0%, RTA = 1.14 ms [16:46:57] RECOVERY - cloud19 Current Load on cloud19 is OK: LOAD OK - total load average: 0.77, 0.26, 0.09 [16:46:59] PROBLEM - cloud19 IPMI Sensors on cloud19 is CRITICAL: IPMI Status: Critical [4 system event log (SEL) entries present] [16:47:50] RECOVERY - cloud19 SSH on cloud19 is OK: SSH OK - OpenSSH_9.2p1 Debian-2+deb12u7 (protocol 2.0) [16:47:59] RECOVERY - Host cp191 is UP: PING OK - Packet loss = 0%, RTA = 0.32 ms [16:47:59] RECOVERY - cp191 Check unit status of haproxy_stek_job on cp191 is OK: OK: Status of the systemd unit haproxy_stek_job [16:48:03] RECOVERY - Host os191 is UP: PING OK - Packet loss = 0%, RTA = 0.22 ms [16:48:04] RECOVERY - Host mw191 is UP: PING OK - Packet loss = 0%, RTA = 0.25 ms [16:48:05] RECOVERY - os191 SSH on os191 is OK: SSH OK - OpenSSH_9.2p1 Debian-2+deb12u7 (protocol 2.0) [16:48:10] RECOVERY - Host swiftobject191 is UP: PING OK - Packet loss = 0%, RTA = 0.22 ms [16:48:11] RECOVERY - swiftobject191 APT on swiftobject191 is OK: APT OK: 76 packages available for upgrade (0 critical updates). [16:48:14] RECOVERY - Host mem191 is UP: PING OK - Packet loss = 0%, RTA = 0.37 ms [16:48:19] PROBLEM - mw191 MediaWiki Rendering on mw191 is CRITICAL: connect to address 10.0.19.160 and port 443: Connection refusedHTTP CRITICAL - Unable to open TCP socket [16:48:26] RECOVERY - Host mw192 is UP: PING OK - Packet loss = 0%, RTA = 0.27 ms [16:48:44] PROBLEM - cp191 Puppet on cp191 is WARNING: WARNING: Puppet last ran 1 hour ago [16:48:44] PROBLEM - mw191 HTTPS on mw191 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to mw191.fsslc.wtnet port 443 after 0 ms: Couldn't connect to server [16:48:47] RECOVERY - cloud19 Puppet on cloud19 is OK: OK: Puppet is currently enabled, last run 39 seconds ago with 0 failures [16:48:49] PROBLEM - os191 Puppet on os191 is WARNING: WARNING: Puppet last ran 1 hour ago [16:48:52] PROBLEM - cp191 Disk Space on cp191 is WARNING: DISK WARNING - free space: / 35949MiB (7% inode=99%); [16:48:59] RECOVERY - Host mw193 is UP: PING OK - Packet loss = 0%, RTA = 0.26 ms [16:49:07] RECOVERY - ping on os191 is OK: PING OK - Packet loss = 0%, RTA = 0.30 ms [16:49:09] PROBLEM - cp191 Varnish Backends on cp191 is CRITICAL: 3 backends are down. 
mw191 mw192 mw193 [16:49:09] PROBLEM - swiftobject191 Puppet on swiftobject191 is WARNING: WARNING: Puppet last ran 2 hours ago [16:49:20] RECOVERY - mw192 php-fpm on mw192 is OK: PROCS OK: 25 processes with command name 'php-fpm8.2' [16:49:21] PROBLEM - mw192 Puppet on mw192 is WARNING: WARNING: Puppet last ran 1 hour ago [16:49:23] RECOVERY - mw191 APT on mw191 is OK: APT OK: 132 packages available for upgrade (0 critical updates). [16:49:24] RECOVERY - mem191 memcached on mem191 is OK: TCP OK - 0.000 second response time on 10.0.19.154 port 11211 [16:49:24] RECOVERY - cp191 Nginx Backend for mwtask161 on cp191 is OK: TCP OK - 0.001 second response time on localhost port 8163 [16:49:24] RECOVERY - cp191 Nginx Backend for matomo151 on cp191 is OK: TCP OK - 0.000 second response time on localhost port 8203 [16:49:24] PROBLEM - os191 Current Load on os191 is WARNING: LOAD WARNING - total load average: 3.44, 1.44, 0.54 [16:49:25] RECOVERY - mw192 Disk Space on mw192 is OK: DISK OK - free space: / 36594MiB (68% inode=90%); [16:49:26] RECOVERY - mw191 php-fpm on mw191 is OK: PROCS OK: 25 processes with command name 'php-fpm8.2' [16:49:31] RECOVERY - mw193 conntrack_table_size on mw193 is OK: OK: nf_conntrack is 0 % full [16:49:32] RECOVERY - mw193 SSH on mw193 is OK: SSH OK - OpenSSH_9.2p1 Debian-2+deb12u7 (protocol 2.0) [16:49:34] RECOVERY - mw192 APT on mw192 is OK: APT OK: 132 packages available for upgrade (0 critical updates). [16:49:34] RECOVERY - cp191 Haproxy TLS backend for mw193 on cp191 is OK: TCP OK - 0.000 second response time on localhost port 8131 [16:49:34] RECOVERY - swiftobject191 ferm_active on swiftobject191 is OK: OK ferm input default policy is set [16:49:34] RECOVERY - cp191 Haproxy TLS backend for mw163 on cp191 is OK: TCP OK - 0.000 second response time on localhost port 8123 [16:49:34] RECOVERY - cp191 Nginx Backend for mwtask151 on cp191 is OK: TCP OK - 0.000 second response time on localhost port 8162 [16:49:38] RECOVERY - mw193 Current Load on mw193 is OK: LOAD OK - total load average: 0.10, 0.05, 0.01 [16:49:39] RECOVERY - cp191 Nginx Backend for phorge171 on cp191 is OK: TCP OK - 0.000 second response time on localhost port 8202 [16:49:39] RECOVERY - cp191 Nginx Backend for mw201 on cp191 is OK: TCP OK - 0.000 second response time on localhost port 8132 [16:49:44] RECOVERY - cp191 Haproxy TLS backend for mw183 on cp191 is OK: TCP OK - 0.000 second response time on localhost port 8127 [16:49:44] RECOVERY - cp191 Haproxy TLS backend for mw152 on cp191 is OK: TCP OK - 0.000 second response time on localhost port 8114 [16:49:44] PROBLEM - mw191 Puppet on mw191 is WARNING: WARNING: Puppet last ran 1 hour ago [16:49:49] RECOVERY - cp191 Haproxy TLS backend for mw172 on cp191 is OK: TCP OK - 0.000 second response time on localhost port 8118 [16:49:49] RECOVERY - ping on mw191 is OK: PING OK - Packet loss = 0%, RTA = 0.22 ms [16:49:49] RECOVERY - cp191 Nginx Backend for mw203 on cp191 is OK: TCP OK - 0.000 second response time on localhost port 8134 [16:49:54] RECOVERY - cp191 Haproxy TLS backend for reports171 on cp191 is OK: TCP OK - 0.000 second response time on localhost port 8205 [16:49:54] RECOVERY - cp191 Nginx Backend for mw163 on cp191 is OK: TCP OK - 0.000 second response time on localhost port 8123 [16:49:54] RECOVERY - cp191 Nginx Backend for mw181 on cp191 is OK: TCP OK - 0.000 second response time on localhost port 8119 [16:49:54] RECOVERY - cp191 Current Load on cp191 is OK: LOAD OK - total load average: 1.70, 0.72, 0.27 [16:49:55] RECOVERY - mw193 NTP 
time on mw193 is OK: NTP OK: Offset 0.0001010596752 secs [16:49:58] RECOVERY - mw193 APT on mw193 is OK: APT OK: 132 packages available for upgrade (0 critical updates). [16:49:59] RECOVERY - mem191 PowerDNS Recursor on mem191 is OK: DNS OK: 0.180 seconds response time. mem191.fsslc.wtnet returns 10.0.19.154 [16:49:59] RECOVERY - cp191 ferm_active on cp191 is OK: OK ferm input default policy is set [16:49:59] RECOVERY - cp191 Haproxy TLS backend for mw171 on cp191 is OK: TCP OK - 0.000 second response time on localhost port 8117 [16:49:59] RECOVERY - cp191 Nginx Backend for mw171 on cp191 is OK: TCP OK - 0.000 second response time on localhost port 8117 [16:50:16] RECOVERY - mw191 MediaWiki Rendering on mw191 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 2.968 second response time [16:50:19] RECOVERY - mw193 Disk Space on mw193 is OK: DISK OK - free space: / 36603MiB (68% inode=90%); [16:50:19] RECOVERY - mw192 ferm_active on mw192 is OK: OK ferm input default policy is set [16:50:24] RECOVERY - mw192 SSH on mw192 is OK: SSH OK - OpenSSH_9.2p1 Debian-2+deb12u7 (protocol 2.0) [16:50:29] PROBLEM - mw193 Puppet on mw193 is WARNING: WARNING: Puppet last ran 2 hours ago [16:50:29] RECOVERY - ping on mw193 is OK: PING OK - Packet loss = 0%, RTA = 0.26 ms [16:50:44] RECOVERY - cp191 Puppet on cp191 is OK: OK: Puppet is currently enabled, last run 59 seconds ago with 0 failures [16:50:44] RECOVERY - mw191 HTTPS on mw191 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.073 second response time [16:50:44] RECOVERY - mw193 PowerDNS Recursor on mw193 is OK: DNS OK: 0.036 seconds response time. mw193.fsslc.wtnet returns 10.0.19.164 [16:50:44] RECOVERY - mw193 php-fpm on mw193 is OK: PROCS OK: 25 processes with command name 'php-fpm8.2' [16:51:04] RECOVERY - swiftobject191 Puppet on swiftobject191 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [16:51:21] RECOVERY - os191 Current Load on os191 is OK: LOAD OK - total load average: 3.30, 2.17, 0.93 [16:51:21] RECOVERY - mw192 Puppet on mw192 is OK: OK: Puppet is currently enabled, last run 28 seconds ago with 0 failures [16:51:35] RECOVERY - mw193 MediaWiki Rendering on mw193 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 2.968 second response time [16:51:44] RECOVERY - mw191 Puppet on mw191 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [16:52:10] RECOVERY - mw192 MediaWiki Rendering on mw192 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.215 second response time [16:52:11] RECOVERY - cp171 Varnish Backends on cp171 is OK: All 31 backends are healthy [16:52:15] RECOVERY - mw192 HTTPS on mw192 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.061 second response time [16:52:24] RECOVERY - cp201 Varnish Backends on cp201 is OK: All 31 backends are healthy [16:52:26] RECOVERY - mw193 Puppet on mw193 is OK: OK: Puppet is currently enabled, last run 45 seconds ago with 0 failures [16:52:45] RECOVERY - os191 Puppet on os191 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [16:52:46] RECOVERY - cp161 Varnish Backends on cp161 is OK: All 31 backends are healthy [16:52:49] RECOVERY - mw193 HTTPS on mw193 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.061 second response time [16:53:09] RECOVERY - cp191 Varnish Backends on cp191 is OK: All 31 backends are healthy [16:53:13] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php 
/srv/mediawiki/1.44/maintenance/importDump.php --wiki=rubaldisbasicswiki dump.xml --no-updates (START) [16:53:17] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [16:55:28] [02ssl] 07WikiTideBot pushed 1 new commit to 03main 13https://github.com/miraheze/ssl/commit/ed74959f8ca30cc8fb50672b033a9fc621da2595 [16:55:28] 02ssl/03main 07WikiTideBot 03ed74959 Bot: Auto-update domain lists [16:57:50] RECOVERY - mw193.fsslc.wtnet SSL Check on sslhost is OK: OK - Certificate 'CloudFlare Origin Certificate' will expire on Tue 03 Apr 2040 12:14:00 PM GMT +0000. [17:01:56] [Grafana] FIRING: The mediawiki JobQueue backlog is increasing by more than 100 jobs a minute over an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1758556880000&orgId=1&to=1758560516677 [17:04:46] !log [skye@mwtask171] Starting import for loonathewikiwiki (XML: ./loona.xml; Images: None) (START) [17:04:47] !log [skye@mwtask171] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php importDump --wiki=loonathewikiwiki --no-updates --username-prefix=fandom:loonatheworld -- ./loona.xml (START) [17:04:50] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [17:04:54] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [17:06:56] [Grafana] RESOLVED: MediaWiki Jobs are rapidly increasing https://grafana.wikitide.net/d/GtxbP1Xnk?from=1758556880000&orgId=1&to=1758560780000 [17:09:15] RECOVERY - mw192.fsslc.wtnet SSL Check on sslhost is OK: OK - Certificate 'CloudFlare Origin Certificate' will expire on Tue 03 Apr 2040 12:14:00 PM GMT +0000. [17:15:55] RECOVERY - mw191.fsslc.wtnet SSL Check on sslhost is OK: OK - Certificate 'CloudFlare Origin Certificate' will expire on Tue 03 Apr 2040 12:14:00 PM GMT +0000. 
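
The imports running here (rubaldisbasicswiki and loonathewikiwiki) follow the same three-step sequence visible throughout this log: importDump.php with --no-updates so link and other secondary-table updates are skipped during the import itself, rebuildall.php to rebuild those tables afterwards, and initSiteStats.php --update to refresh the site statistics. Condensed, with the wiki name and dump path as placeholders:

# Condensed form of the import sequence as logged above; <wiki> and dump.xml are placeholders.
sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/maintenance/importDump.php --wiki=<wiki> dump.xml --no-updates
sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/maintenance/rebuildall.php --wiki=<wiki>
sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/maintenance/initSiteStats.php --wiki=<wiki> --update
# Imports sourced from another farm add --username-prefix, as in the loonathewikiwiki run
# above, so imported revisions are attributed to prefixed usernames.
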
[17:29:03] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/maintenance/importDump.php --wiki=rubaldisbasicswiki dump.xml --no-updates (END - exit=0) [17:29:04] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/maintenance/rebuildall.php --wiki=rubaldisbasicswiki (START) [17:29:07] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [17:29:10] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [17:34:48] PROBLEM - matomo151 Disk Space on matomo151 is CRITICAL: DISK CRITICAL - free space: / 1070MiB (5% inode=89%); [17:40:30] [02puppet] 07paladox pushed 1 new commit to 03paladox-patch-6 13https://github.com/miraheze/puppet/commit/e2474c1967c5f3108c05bdb7a60e2058d6823dc6 [17:40:30] 02puppet/03paladox-patch-6 07paladox 03e2474c1 Update mediawiki.yaml [17:50:16] [02puppet] 07AgentIsai pushed 1 new commit to 03main 13https://github.com/miraheze/puppet/commit/9dfc115f29f04ce3463b59d7b15a21dccac432fa [17:50:16] 02puppet/03main 07Agent Isai 039dfc115 Readd shards 3 and 4 [17:58:34] !log depooling and repooling mw one by one and increasing their ram to 20gb (also doing the same with mwtask) [17:58:38] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [17:58:50] [02mw-config] 07AgentIsai pushed 1 new commit to 03main 13https://github.com/miraheze/mw-config/commit/e3623410695457f383544fc04874e4af96faeca5 [17:58:50] 02mw-config/03main 07Agent Isai 03e362341 Disable sitenotice banner [17:59:14] !log [agent@mwtask181] starting deploy of {'pull': 'config', 'config': True} to all [17:59:17] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [17:59:38] !log [agent@mwtask181] finished deploy of {'pull': 'config', 'config': True} to all - SUCCESS in 24s [17:59:42] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [17:59:45] miraheze/mw-config - AgentIsai the build passed. [18:01:46] PROBLEM - ping on mwtask151 is CRITICAL: CRITICAL - Host Unreachable (10.0.15.150) [18:02:11] PROBLEM - cp171 Varnish Backends on cp171 is CRITICAL: 2 backends are down. mw151 mw152 [18:02:24] PROBLEM - cp201 Varnish Backends on cp201 is CRITICAL: 2 backends are down. mw151 mw152 [18:02:24] PROBLEM - mw151 MediaWiki Rendering on mw151 is CRITICAL: connect to address 10.0.15.114 and port 443: Connection refusedHTTP CRITICAL - Unable to open TCP socket [18:02:42] PROBLEM - mw151 HTTPS on mw151 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to mw151.fsslc.wtnet port 443 after 0 ms: Couldn't connect to server [18:02:44] PROBLEM - mwtask151 MediaWiki Rendering on mwtask151 is CRITICAL: connect to address 10.0.15.150 and port 443: Connection refusedHTTP CRITICAL - Unable to open TCP socket [18:02:46] PROBLEM - cp161 Varnish Backends on cp161 is CRITICAL: 2 backends are down. 
mw151 mw152 [18:02:48] RECOVERY - matomo151 Disk Space on matomo151 is OK: DISK OK - free space: / 6010MiB (33% inode=89%); [18:02:52] PROBLEM - mwtask151 HTTPS on mwtask151 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to mwtask151.fsslc.wtnet port 443 after 0 ms: Couldn't connect to server [18:02:56] PROBLEM - mw152 MediaWiki Rendering on mw152 is CRITICAL: connect to address 10.0.15.115 and port 443: Connection refusedHTTP CRITICAL - Unable to open TCP socket [18:03:09] PROBLEM - cp191 Varnish Backends on cp191 is CRITICAL: 3 backends are down. mw151 mw152 mw153 [18:03:40] PROBLEM - mw152 HTTPS on mw152 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to mw152.fsslc.wtnet port 443 after 0 ms: Couldn't connect to server [18:03:48] RECOVERY - ping on mwtask151 is OK: PING OK - Packet loss = 0%, RTA = 0.23 ms [18:04:24] RECOVERY - mw151 MediaWiki Rendering on mw151 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 1.982 second response time [18:04:42] RECOVERY - mw151 HTTPS on mw151 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.068 second response time [18:04:43] RECOVERY - mwtask151 MediaWiki Rendering on mwtask151 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.385 second response time [18:04:52] RECOVERY - mwtask151 HTTPS on mwtask151 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4116 bytes in 0.054 second response time [18:04:55] RECOVERY - mw152 MediaWiki Rendering on mw152 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 2.274 second response time [18:04:58] PROBLEM - mw153 MediaWiki Rendering on mw153 is CRITICAL: connect to address 10.0.15.140 and port 443: Connection refusedHTTP CRITICAL - Unable to open TCP socket [18:05:40] RECOVERY - mw152 HTTPS on mw152 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4112 bytes in 0.055 second response time [18:07:00] RECOVERY - mw153 MediaWiki Rendering on mw153 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 2.548 second response time [18:07:24] PROBLEM - mwtask161 ferm_active on mwtask161 is CRITICAL: connect to address 10.0.16.157 port 5666: No route to hostconnect to host 10.0.16.157 port 5666: No route to host [18:07:45] PROBLEM - mwtask161 MediaWiki Rendering on mwtask161 is CRITICAL: connect to address 10.0.16.157 and port 443: No route to hostHTTP CRITICAL - Unable to open TCP socket [18:07:45] PROBLEM - mwtask161 mathoid on mwtask161 is CRITICAL: connect to address 10.0.16.157 and port 10044: No route to host [18:07:45] PROBLEM - mwtask161 SSH on mwtask161 is CRITICAL: connect to address 10.0.16.157 and port 22: No route to host [18:07:45] PROBLEM - mwtask161 HTTPS on mwtask161 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to mwtask161.fsslc.wtnet port 443 after 1292 ms: Couldn't connect to server [18:07:51] PROBLEM - mwtask161 Puppet on mwtask161 is CRITICAL: connect to address 10.0.16.157 port 5666: No route to hostconnect to host 10.0.16.157 port 5666: No route to host [18:07:51] PROBLEM - mwtask161 conntrack_table_size on mwtask161 is CRITICAL: connect to address 10.0.16.157 port 5666: No route to hostconnect to host 10.0.16.157 port 5666: No route to host [18:08:02] PROBLEM - mwtask161 jobrunner.svc.fsslc.wtnet HTTP on mwtask161 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 9005: cURL returned 7 - 
Failed to connect to 10.0.16.157 port 9005 after 881 ms: Couldn't connect to server [18:08:05] PROBLEM - mwtask161 PowerDNS Recursor on mwtask161 is CRITICAL: connect to address 10.0.16.157 port 5666: No route to hostconnect to host 10.0.16.157 port 5666: No route to host [18:08:20] PROBLEM - Host mwtask161 is DOWN: CRITICAL - Host Unreachable (10.0.16.157) [18:11:11] oh forgot to downtime mwtask.. [18:12:20] RECOVERY - Host mwtask161 is UP: PING OK - Packet loss = 0%, RTA = 0.27 ms [18:13:25] RECOVERY - mwtask161 ferm_active on mwtask161 is OK: OK ferm input default policy is set [18:13:40] RECOVERY - mwtask161 mathoid on mwtask161 is OK: TCP OK - 0.000 second response time on 10.0.16.157 port 10044 [18:13:43] RECOVERY - mwtask161 HTTPS on mwtask161 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4116 bytes in 0.063 second response time [18:13:43] RECOVERY - mwtask161 MediaWiki Rendering on mwtask161 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.402 second response time [18:13:45] RECOVERY - mwtask161 SSH on mwtask161 is OK: SSH OK - OpenSSH_9.2p1 Debian-2+deb12u7 (protocol 2.0) [18:13:45] RECOVERY - mwtask161 Puppet on mwtask161 is OK: OK: Puppet is currently enabled, last run 18 seconds ago with 0 failures [18:13:50] RECOVERY - mwtask161 conntrack_table_size on mwtask161 is OK: OK: nf_conntrack is 1 % full [18:14:05] RECOVERY - mwtask161 jobrunner.svc.fsslc.wtnet HTTP on mwtask161 is OK: HTTP OK: HTTP/1.1 204 No Content - 167 bytes in 0.002 second response time [18:14:10] RECOVERY - mwtask161 PowerDNS Recursor on mwtask161 is OK: DNS OK: 0.038 seconds response time. mwtask161.fsslc.wtnet returns 10.0.16.157 [18:14:11] RECOVERY - cp171 Varnish Backends on cp171 is OK: All 31 backends are healthy [18:14:24] RECOVERY - cp201 Varnish Backends on cp201 is OK: All 31 backends are healthy [18:14:46] RECOVERY - cp161 Varnish Backends on cp161 is OK: All 31 backends are healthy [18:15:08] RECOVERY - cp191 Varnish Backends on cp191 is OK: All 31 backends are healthy [18:21:18] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/maintenance/rebuildall.php --wiki=rubaldisbasicswiki (END - exit=0) [18:21:20] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/maintenance/initSiteStats.php --wiki=rubaldisbasicswiki --update (END - exit=0) [18:21:22] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:21:25] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:24:11] PROBLEM - cp171 Varnish Backends on cp171 is CRITICAL: 2 backends are down. mw171 mw172 [18:24:24] PROBLEM - cp201 Varnish Backends on cp201 is CRITICAL: 3 backends are down. mw171 mw172 mw173 [18:24:46] PROBLEM - cp161 Varnish Backends on cp161 is CRITICAL: 3 backends are down. mw171 mw172 mw173 [18:25:08] PROBLEM - cp191 Varnish Backends on cp191 is CRITICAL: 3 backends are down. 
mw171 mw172 mw173 [18:38:00] PROBLEM - mw202.fsslc.wtnet SSL Check on sslhost is CRITICAL: connect to address mw202.fsslc.wtnet and port 443: Connection refusedHTTP CRITICAL - Unable to open TCP socket [18:38:46] [02puppet] 07paladox pushed 1 new commit to 03paladox-patch-6 13https://github.com/miraheze/puppet/commit/48ebdae0361bae71f06002b1b1df28cffa9b9450 [18:38:46] 02puppet/03paladox-patch-6 07paladox 0348ebdae Update php.pp [18:39:20] [02puppet] 07paladox pushed 1 new commit to 03paladox-patch-6 13https://github.com/miraheze/puppet/commit/0c63375600db404e65a8cfc0584fdc71e59cf72c [18:39:20] 02puppet/03paladox-patch-6 07paladox 030c63375 Update mediawiki.yaml [18:39:49] [02puppet] 07paladox pushed 1 new commit to 03paladox-patch-6 13https://github.com/miraheze/puppet/commit/ffbcd7e8fa7407b6322d5f627c297ba3e73407df [18:39:49] 02puppet/03paladox-patch-6 07paladox 03ffbcd7e Update mediawiki_task.yaml [18:40:11] RECOVERY - cp171 Varnish Backends on cp171 is OK: All 31 backends are healthy [18:40:24] RECOVERY - cp201 Varnish Backends on cp201 is OK: All 31 backends are healthy [18:40:27] [02puppet] 07github-actions[bot] pushed 1 new commit to 03paladox-patch-6 13https://github.com/miraheze/puppet/commit/3cf86b8c3c5e8ffee192a3888ab7f2f33b3d2fa1 [18:40:27] 02puppet/03paladox-patch-6 07github-actions 033cf86b8 CI: lint puppet code to standards… [18:40:46] RECOVERY - cp161 Varnish Backends on cp161 is OK: All 31 backends are healthy [18:41:02] [02puppet] 07paladox merged pull request #4537: mediawiki: Increase apcu and opcache size (03main...03paladox-patch-6) 13https://github.com/miraheze/puppet/pull/4537 [18:41:02] [02puppet] 07paladox pushed 1 new commit to 03main 13https://github.com/miraheze/puppet/commit/cc70390ade861d9085b48262371e870d464bcb81 [18:41:03] 02puppet/03main 07paladox 03cc70390 mediawiki: Increase apcu and opcache size (#4537)… [18:41:05] [02puppet] 07paladox 04deleted 03paladox-patch-6 at 033cf86b8 13https://api.github.com/repos/miraheze/puppet/commit/3cf86b8 [18:41:09] RECOVERY - cp191 Varnish Backends on cp191 is OK: All 31 backends are healthy [18:42:44] RECOVERY - mw202.fsslc.wtnet SSL Check on sslhost is OK: OK - Certificate 'CloudFlare Origin Certificate' will expire on Tue 03 Apr 2040 12:14:00 PM GMT +0000. 
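
Puppet PR #4537 merged above raises the APCu and opcache allocations for MediaWiki; the new values themselves are not shown in this log. Once Puppet has applied the change on an mw host, the effective settings can be read back with something like the following (the grep targets are standard PHP ini directives, not values taken from the PR):

# Read the APCu / opcache sizes currently in effect for the CLI SAPI.
php -i | grep -E 'apc\.shm_size|opcache\.memory_consumption'
# php-fpm reads its own php.ini (on Debian typically /etc/php/8.2/fpm/php.ini),
# so the pool serving MediaWiki should be checked separately from the CLI value.
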
[18:43:46] RECOVERY - cloud19 IPMI Sensors on cloud19 is OK: IPMI Status: OK [18:44:17] RECOVERY - cloud20 IPMI Sensors on cloud20 is OK: IPMI Status: OK [18:45:53] PROBLEM - cp201 Disk Space on cp201 is CRITICAL: DISK CRITICAL - free space: / 27168MiB (5% inode=99%); [18:46:24] [02mw-config] 07paladox created 03paladox-patch-4 (+1 new commit) 13https://github.com/miraheze/mw-config/commit/4823814ca913 [18:46:24] 02mw-config/03paladox-patch-4 07paladox 034823814 Add db161 (c2) to wgCreateWikiDatabaseClustersInactive… [18:46:27] [02mw-config] 07paladox opened pull request #6123: Add db161 (c2) to wgCreateWikiDatabaseClustersInactive (03main...03paladox-patch-4) 13https://github.com/miraheze/mw-config/pull/6123 [18:47:00] [02mw-config] 07paladox merged pull request #6123: Add db161 (c2) to wgCreateWikiDatabaseClustersInactive (03main...03paladox-patch-4) 13https://github.com/miraheze/mw-config/pull/6123 [18:47:02] [02mw-config] 07paladox pushed 1 new commit to 03main 13https://github.com/miraheze/mw-config/commit/0954c662960d0fda0761e197b125e11a35fb0444 [18:47:04] [02mw-config] 07paladox 04deleted 03paladox-patch-4 at 034823814 13https://api.github.com/repos/miraheze/mw-config/commit/4823814 [18:47:06] 02mw-config/03main 07paladox 030954c66 Add db161 (c2) to wgCreateWikiDatabaseClustersInactive (#6123)… [18:47:26] miraheze/mw-config - paladox the build passed. [18:47:26] !log [paladox@mwtask181] starting deploy of {'pull': 'config', 'config': True} to all [18:47:30] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:47:49] !log [paladox@mwtask181] finished deploy of {'pull': 'config', 'config': True} to all - SUCCESS in 22s [18:47:53] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:48:02] miraheze/mw-config - paladox the build passed. [19:05:50] PROBLEM - test151 MediaWiki Rendering on test151 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:14:07] RECOVERY - test151 MediaWiki Rendering on test151 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.575 second response time [21:05:16] PROBLEM - swiftobject151 APT on swiftobject151 is CRITICAL: APT CRITICAL: 76 packages available for upgrade (1 critical updates). [21:05:16] PROBLEM - mw152 APT on mw152 is CRITICAL: APT CRITICAL: 132 packages available for upgrade (1 critical updates). [21:05:26] PROBLEM - mw201 APT on mw201 is CRITICAL: APT CRITICAL: 132 packages available for upgrade (1 critical updates). [21:05:38] PROBLEM - ldap171 APT on ldap171 is CRITICAL: APT CRITICAL: 112 packages available for upgrade (1 critical updates). [21:05:42] PROBLEM - mon181 APT on mon181 is CRITICAL: APT CRITICAL: 1 packages available for upgrade (1 critical updates). [21:05:46] PROBLEM - mw182 APT on mw182 is CRITICAL: APT CRITICAL: 132 packages available for upgrade (1 critical updates). [21:05:50] PROBLEM - bast181 APT on bast181 is CRITICAL: APT CRITICAL: 112 packages available for upgrade (1 critical updates). [21:05:58] PROBLEM - mw183 APT on mw183 is CRITICAL: APT CRITICAL: 132 packages available for upgrade (1 critical updates). [21:06:52] PROBLEM - prometheus151 APT on prometheus151 is CRITICAL: APT CRITICAL: 120 packages available for upgrade (1 critical updates).
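
The burst of APT CRITICAL alerts at 21:05, each reporting exactly one critical update, most likely reflects a single security update becoming available to every host at the same time rather than anything host-specific. One way to identify the package on any affected host (illustrative commands, not taken from this log):

# List pending upgrades and pick out the one coming from the security repository.
apt list --upgradable 2>/dev/null
apt-get -s dist-upgrade | grep -i security
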