[00:01:00] [statichelp] WikiTideBot pushed 1 new commit to main https://github.com/miraheze/statichelp/commit/7ca4231853362508eb1ee4c8d5e0ce3358b70b50
[00:01:00] statichelp/main WikiTideBot 7ca4231 Bot: Auto-update Tech namespace pages 2026-02-28 00:00:53
[00:02:14] [ssl] pskyechology requested changes on pull request #901: Apologies for the delay, it fell between the cracks. Been a while since SSL PRs were made, with myself being the last significant such contributor. https://github.com/miraheze/ssl/pull/901#pullrequestreview-3869623096
[00:02:16] [ssl] pskyechology left a file comment in pull request #901 00bb9c1: I understand that they were in earlier use by you, but there has been a significant gap in usage, to the point of being unnecessary. I'm not too keen on approving and maintaining redirects which will likely receive no use. https://github.com/miraheze/ssl/pull/901#discussion_r2866688325
[00:02:42] [ssl] pskyechology self-assigned pull request #901 (Add redirects for recaptimesquadwiki (alongside old miraheze.org subdomains)) https://github.com/miraheze/ssl/pull/901
[00:07:01] PROBLEM - cp201 Varnish Backends on cp201 is CRITICAL: 1 backends are down. mw183
[00:07:27] PROBLEM - mw183 HTTPS on mw183 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[00:07:27] PROBLEM - mw183 PowerDNS Recursor on mw183 is CRITICAL: CRITICAL - Plugin timed out while executing system call
[00:07:33] PROBLEM - cp161 Varnish Backends on cp161 is CRITICAL: 1 backends are down. mw183
[00:07:47] PROBLEM - cp171 Varnish Backends on cp171 is CRITICAL: 1 backends are down. mw183
[00:07:52] PROBLEM - cp191 Varnish Backends on cp191 is CRITICAL: 1 backends are down. mw183
[00:08:05] PROBLEM - mw183 Current Load on mw183 is CRITICAL: LOAD CRITICAL - total load average: 36.23, 27.91, 16.31
[00:09:01] RECOVERY - cp201 Varnish Backends on cp201 is OK: All 31 backends are healthy
[00:09:21] RECOVERY - mw183 HTTPS on mw183 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4308 bytes in 0.081 second response time
[00:09:22] RECOVERY - mw183 PowerDNS Recursor on mw183 is OK: DNS OK: 0.175 seconds response time. mw183.fsslc.wtnet returns 10.0.18.155
[00:09:32] RECOVERY - cp161 Varnish Backends on cp161 is OK: All 31 backends are healthy
[00:09:44] RECOVERY - cp171 Varnish Backends on cp171 is OK: All 31 backends are healthy
[00:09:52] RECOVERY - cp191 Varnish Backends on cp191 is OK: All 31 backends are healthy
[00:10:03] RECOVERY - mw183 Current Load on mw183 is OK: LOAD OK - total load average: 7.69, 19.82, 14.75
[01:01:37] [Grafana] FIRING: A MediaWiki pool is sick according to CloudFlare https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772236860000&orgId=1&to=1772240497416
[01:01:37] [Grafana] FIRING: The mediawiki JobQueue backlog is increasing by more than 100 jobs a minute over an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772236880000&orgId=1&to=1772240497416
[01:06:17] RECOVERY - mwtask181 Check unit status of mediawiki_job_generate-sitemap-index on mwtask181 is OK: OK: Status of the systemd unit mediawiki_job_generate-sitemap-index
[01:06:37] [Grafana] FIRING: A MediaWiki pool is sick according to CloudFlare https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772236860000&orgId=1&to=1772240797416
[01:06:37] [Grafana] RESOLVED: DatasourceError https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772237090000&orgId=1&to=1772240780000
[01:11:37] [Grafana] FIRING: A MediaWiki pool is sick according to CloudFlare https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772236860000&orgId=1&to=1772241097417
[01:11:37] [Grafana] FIRING: The mediawiki JobQueue backlog is increasing by more than 100 jobs a minute over an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772237480000&orgId=1&to=1772241097417
[01:16:37] [Grafana] FIRING: A MediaWiki pool is sick according to CloudFlare https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772236860000&orgId=1&to=1772241397417
[01:16:37] [Grafana] FIRING: The mediawiki JobQueue backlog is increasing by more than 100 jobs a minute over an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772237660000&orgId=1&to=1772241397417
[01:16:37] [Grafana] FIRING: The mediawiki JobQueue backlog is increasing by more than 100 jobs a minute over an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772237750000&orgId=1&to=1772241397417
[01:21:37] [Grafana] FIRING: An unusually high number of threats are being reported by CloudFlare! https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772238060000&orgId=1&to=1772241697418
[01:21:37] [Grafana] FIRING: A MediaWiki pool is sick according to CloudFlare https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772236860000&orgId=1&to=1772241697418
[01:21:37] [Grafana] FIRING: The mediawiki JobQueue backlog is increasing by more than 100 jobs a minute over an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772237840000&orgId=1&to=1772241697418
[01:21:37] [Grafana] RESOLVED: DatasourceNoData https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772237960000&orgId=1&to=1772241590000
[01:26:37] [Grafana] FIRING: An unusually high number of threats are being reported by CloudFlare! https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772238060000&orgId=1&to=1772241997418
[01:26:37] [Grafana] FIRING: A MediaWiki pool is sick according to CloudFlare https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772236860000&orgId=1&to=1772241997418
[01:26:37] [Grafana] RESOLVED: DatasourceError https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772238230000&orgId=1&to=1772241950
[01:26:37] [Grafana] RESOLVED: DatasourceNoData https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772238140000&orgId=1&to=1772241770000
[01:31:37] [Grafana] FIRING: An unusually high number of threats are being reported by CloudFlare! https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772238060000&orgId=1&to=1772242297419
[01:31:37] [Grafana] FIRING: A MediaWiki pool is sick according to CloudFlare https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772236860000&orgId=1&to=1772242297419
[01:31:37] [Grafana] FIRING: The mediawiki JobQueue backlog is increasing by more than 100 jobs a minute over an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772238650000&orgId=1&to=1772242297419
[01:31:37] [Grafana] RESOLVED: DatasourceNoData https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772238560000&orgId=1&to=1772242190000
[01:36:37] [Grafana] FIRING: An unusually high number of threats are being reported by CloudFlare! https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772238060000&orgId=1&to=1772242597420
[01:36:37] [Grafana] FIRING: A MediaWiki pool is sick according to CloudFlare https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772236860000&orgId=1&to=1772242597420
[01:36:37] [Grafana] FIRING: The mediawiki JobQueue backlog is increasing by more than 100 jobs a minute over an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772238860000&orgId=1&to=1772242597420
[01:36:37] [Grafana] FIRING: The mediawiki JobQueue backlog is increasing by more than 100 jobs a minute over an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772238980000&orgId=1&to=1772242597420
[01:41:37] [Grafana] FIRING: An unusually high number of threats are being reported by CloudFlare! https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772238060000&orgId=1&to=1772242897420
[01:41:37] [Grafana] FIRING: A MediaWiki pool is sick according to CloudFlare https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772236860000&orgId=1&to=1772242897420
[01:41:37] [Grafana] FIRING: The mediawiki JobQueue backlog is increasing by more than 100 jobs a minute over an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772239250000&orgId=1&to=1772242897420
[01:41:37] [Grafana] RESOLVED: DatasourceNoData https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772239160000&orgId=1&to=1772242790000
[01:46:37] [Grafana] FIRING: An unusually high number of threats are being reported by CloudFlare! https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772238060000&orgId=1&to=1772243197421
[01:46:37] [Grafana] FIRING: A MediaWiki pool is sick according to CloudFlare https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772236860000&orgId=1&to=1772243197421
[01:46:37] [Grafana] FIRING: The mediawiki JobQueue backlog is increasing by more than 100 jobs a minute over an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772239460000&orgId=1&to=1772243197421
[01:46:37] [Grafana] FIRING: The mediawiki JobQueue backlog is increasing by more than 100 jobs a minute over an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772239580000&orgId=1&to=1772243197421
[01:50:49] [puppet] Universal-Omega pushed 1 new commit to main https://github.com/miraheze/puppet/commit/126e25123156fe3e4c48d265ee97b9b6904f2c6f
[01:50:50] puppet/main CosmicAlpha 126e251 prometheus: also drop wiki label in statsd_exporter
[01:51:37] [Grafana] FIRING: An unusually high number of threats are being reported by CloudFlare! https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772238060000&orgId=1&to=1772243497421
[01:51:37] [Grafana] FIRING: A MediaWiki pool is sick according to CloudFlare https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772236860000&orgId=1&to=1772243497421
[01:51:37] [Grafana] FIRING: The mediawiki JobQueue backlog is increasing by more than 100 jobs a minute over an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772239880000&orgId=1&to=1772243497421
[01:51:37] [Grafana] RESOLVED: DatasourceNoData https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772239580000&orgId=1&to=1772243210000
[02:01:37] [Grafana] FIRING: An unusually high number of threats are being reported by CloudFlare! https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772238060000&orgId=1&to=1772244097423
[02:01:37] [Grafana] FIRING: A MediaWiki pool is sick according to CloudFlare https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772236860000&orgId=1&to=1772244097423
[02:01:37] [Grafana] RESOLVED: DatasourceError https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772240330000&orgId=1&to=1772244050
[02:01:37] [Grafana] RESOLVED: DatasourceNoData https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772240240000&orgId=1&to=1772243870000
[02:06:37] [Grafana] FIRING: An unusually high number of threats are being reported by CloudFlare! https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772238060000&orgId=1&to=1772244397423
[02:06:37] [Grafana] FIRING: A MediaWiki pool is sick according to CloudFlare https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772236860000&orgId=1&to=1772244397423
[02:06:37] [Grafana] FIRING: The mediawiki JobQueue backlog is increasing by more than 100 jobs a minute over an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772240750000&orgId=1&to=1772244397423
[02:06:37] [Grafana] RESOLVED: DatasourceNoData https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772240660000&orgId=1&to=1772244290000
[02:11:37] [Grafana] FIRING: An unusually high number of threats are being reported by CloudFlare! https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772238060000&orgId=1&to=1772244697424
[02:11:37] [Grafana] FIRING: A MediaWiki pool is sick according to CloudFlare https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772236860000&orgId=1&to=1772244697424
[02:11:37] [Grafana] FIRING: The mediawiki JobQueue backlog is increasing by more than 100 jobs a minute over an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772240960000&orgId=1&to=1772244697424
[02:11:37] [Grafana] FIRING: The mediawiki JobQueue backlog is increasing by more than 100 jobs a minute over an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772241080000&orgId=1&to=1772244697424
[02:16:37] [Grafana] FIRING: An unusually high number of threats are being reported by CloudFlare! https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772238060000&orgId=1&to=1772244997424
[02:16:37] [Grafana] FIRING: A MediaWiki pool is sick according to CloudFlare https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772236860000&orgId=1&to=1772244997424
[02:16:37] [Grafana] FIRING: The mediawiki JobQueue backlog is increasing by more than 100 jobs a minute over an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772241380000&orgId=1&to=1772244997424
[02:16:37] [Grafana] RESOLVED: DatasourceNoData https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772241080000&orgId=1&to=1772244710000
[02:26:24] [mw-config] Universal-Omega commented on pull request #6310: That is correct. We could potentially raise this to 1.39 instead of to 1.43 first I guess. https://github.com/miraheze/mw-config/pull/6310#issuecomment-3976147357
[02:27:02] [mw-config] Universal-Omega closed pull request #6282: Add new ManageWiki state variables (main...managewiki-vars) https://github.com/miraheze/mw-config/pull/6282
[02:27:06] [mw-config] Universal-Omega deleted managewiki-vars at 91ad773 https://api.github.com/repos/miraheze/mw-config/commit/91ad773
[02:36:37] [Grafana] FIRING: An unusually high number of threats are being reported by CloudFlare! https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772238060000&orgId=1&to=1772246197427
[02:36:37] [Grafana] FIRING: A MediaWiki pool is sick according to CloudFlare https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772241660000&orgId=1&to=1772246197427
[02:36:37] [Grafana] RESOLVED: DatasourceError https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772242430000&orgId=1&to=1772246150
[02:36:37] [Grafana] RESOLVED: DatasourceNoData https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772242340000&orgId=1&to=1772245970000
[02:41:37] [ManageWiki] dependabot[bot] created dependabot/npm_and_yarn/multi-770cfcd984 (+1 new commit) https://github.com/miraheze/ManageWiki/commit/961bc06f2949
[02:41:37] ManageWiki/dependabot/npm_and_yarn/multi-770cfcd984 dependabot[bot] 961bc06 Bump minimatch…
[02:41:37] [Grafana] FIRING: An unusually high number of threats are being reported by CloudFlare! https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772238060000&orgId=1&to=1772246497428
[02:41:37] [Grafana] FIRING: A MediaWiki pool is sick according to CloudFlare https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772241660000&orgId=1&to=1772246497428
[02:41:37] [Grafana] FIRING: The mediawiki JobQueue backlog is increasing by more than 100 jobs a minute over an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772242850000&orgId=1&to=1772246497428
[02:41:37] [Grafana] RESOLVED: DatasourceNoData https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772242760000&orgId=1&to=1772246390000
[02:41:39] [ManageWiki] dependabot[bot] added the label 'dependencies' to pull request #775 (Bump minimatch) https://github.com/miraheze/ManageWiki/pull/775
[02:41:41] [ManageWiki] dependabot[bot] opened pull request #775: Bump minimatch (main...dependabot/npm_and_yarn/multi-770cfcd984) https://github.com/miraheze/ManageWiki/pull/775
[02:41:43] [ManageWiki] dependabot[bot] added the label 'javascript' to pull request #775 (Bump minimatch) https://github.com/miraheze/ManageWiki/pull/775
[02:41:45] [ManageWiki] dependabot[bot] added the label 'dependencies' to pull request #775 (Bump minimatch) https://github.com/miraheze/ManageWiki/pull/775
[02:41:47] [ManageWiki] dependabot[bot] added the label 'javascript' to pull request #775 (Bump minimatch) https://github.com/miraheze/ManageWiki/pull/775
[02:44:44] [ManageWiki] Universal-Omega merged dependabot[bot]'s pull request #775: Bump minimatch (main...dependabot/npm_and_yarn/multi-770cfcd984) https://github.com/miraheze/ManageWiki/pull/775
[02:44:45] [ManageWiki] Universal-Omega pushed 1 new commit to main https://github.com/miraheze/ManageWiki/commit/6a1bc2fc15d6ac4a4b1f39cd4e011210a69f4df6
[02:44:45] ManageWiki/main dependabot[bot] 6a1bc2f Bump minimatch (#775)…
[02:44:45] [ManageWiki] Universal-Omega deleted dependabot/npm_and_yarn/multi-770cfcd984 at 961bc06 https://api.github.com/repos/miraheze/ManageWiki/commit/961bc06
[02:46:37] [Grafana] FIRING: An unusually high number of threats are being reported by CloudFlare! https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772238060000&orgId=1&to=1772246797428
[02:46:37] [Grafana] FIRING: A MediaWiki pool is sick according to CloudFlare https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772241660000&orgId=1&to=1772246797428
[02:46:37] [Grafana] FIRING: The mediawiki JobQueue backlog is increasing by more than 100 jobs a minute over an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772243060000&orgId=1&to=1772246797428
[02:46:37] [Grafana] FIRING: The mediawiki JobQueue backlog is increasing by more than 100 jobs a minute over an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772243180000&orgId=1&to=1772246797428
[02:47:44] miraheze/ManageWiki - dependabot[bot] the build passed.
[02:50:26] miraheze/ManageWiki - Universal-Omega the build passed.
[02:51:37] [Grafana] FIRING: An unusually high number of threats are being reported by CloudFlare! https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772238060000&orgId=1&to=1772247097429
[02:51:37] [Grafana] FIRING: A MediaWiki pool is sick according to CloudFlare https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772241660000&orgId=1&to=1772247097429
[02:51:37] [Grafana] RESOLVED: DatasourceError https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772243300000&orgId=1&to=1772247020
[02:51:37] [Grafana] RESOLVED: DatasourceNoData https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772243180000&orgId=1&to=1772246810000
[02:56:37] [Grafana] FIRING: An unusually high number of threats are being reported by CloudFlare! https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772238060000&orgId=1&to=1772247397429
[02:56:37] [Grafana] FIRING: A MediaWiki pool is sick according to CloudFlare https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772241660000&orgId=1&to=1772247397429
[02:56:37] [Grafana] FIRING: The mediawiki JobQueue backlog is increasing by more than 100 jobs a minute over an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772243720000&orgId=1&to=1772247397429
[02:56:37] [Grafana] RESOLVED: DatasourceNoData https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772243630000&orgId=1&to=1772247290000
[03:01:37] [Grafana] FIRING: An unusually high number of threats are being reported by CloudFlare! https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772238060000&orgId=1&to=1772247697430
[03:01:37] [Grafana] FIRING: A MediaWiki pool is sick according to CloudFlare https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772241660000&orgId=1&to=1772247697430
[03:01:37] [Grafana] FIRING: The mediawiki JobQueue backlog is increasing by more than 100 jobs a minute over an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772243720000&orgId=1&to=1772247697430
[03:01:37] [Grafana] FIRING: The mediawiki JobQueue backlog is increasing by more than 100 jobs a minute over an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772244050000&orgId=1&to=1772247697430
[03:06:37] [Grafana] FIRING: An unusually high number of threats are being reported by CloudFlare! https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772238060000&orgId=1&to=1772247997431
[03:06:37] [Grafana] FIRING: A MediaWiki pool is sick according to CloudFlare https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772241660000&orgId=1&to=1772247997431
[03:06:37] [Grafana] RESOLVED: DatasourceError https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772244140000&orgId=1&to=1772247950
[03:06:37] [Grafana] RESOLVED: DatasourceNoData https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772244260000&orgId=1&to=1772247890000
[03:21:37] [Grafana] RESOLVED: DatasourceError https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772238060000&orgId=1&to=1772248860000
[03:21:37] [Grafana] RESOLVED: DatasourceError https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772241660000&orgId=1&to=1772248860000
[03:43:50] [MatomoAnalytics] dependabot[bot] created dependabot/npm_and_yarn/multi-770cfcd984 (+1 new commit) https://github.com/miraheze/MatomoAnalytics/commit/bd65fef224c4
[03:43:50] MatomoAnalytics/dependabot/npm_and_yarn/multi-770cfcd984 dependabot[bot] bd65fef Bump minimatch…
[03:43:51] [MatomoAnalytics] dependabot[bot] added the label 'dependencies' to pull request #185 (Bump minimatch) https://github.com/miraheze/MatomoAnalytics/pull/185
[03:43:51] [MatomoAnalytics] dependabot[bot] added the label 'javascript' to pull request #185 (Bump minimatch) https://github.com/miraheze/MatomoAnalytics/pull/185
[03:43:53] [MatomoAnalytics] dependabot[bot] opened pull request #185: Bump minimatch (main...dependabot/npm_and_yarn/multi-770cfcd984) https://github.com/miraheze/MatomoAnalytics/pull/185
[03:43:55] [MatomoAnalytics] dependabot[bot] added the label 'dependencies' to pull request #185 (Bump minimatch) https://github.com/miraheze/MatomoAnalytics/pull/185
[03:43:57] [MatomoAnalytics] dependabot[bot] added the label 'javascript' to pull request #185 (Bump minimatch) https://github.com/miraheze/MatomoAnalytics/pull/185
[03:48:40] miraheze/MatomoAnalytics - dependabot[bot] the build passed.
[03:55:10] [MatomoAnalytics] Universal-Omega merged dependabot[bot]'s pull request #185: Bump minimatch (main...dependabot/npm_and_yarn/multi-770cfcd984) https://github.com/miraheze/MatomoAnalytics/pull/185
[03:55:10] [MatomoAnalytics] Universal-Omega pushed 1 new commit to main https://github.com/miraheze/MatomoAnalytics/commit/c83e39c8d0c35153e96ccfca5a32c89444ddb053
[03:55:11] MatomoAnalytics/main dependabot[bot] c83e39c Bump minimatch (#185)…
[03:55:11] [MatomoAnalytics] Universal-Omega deleted dependabot/npm_and_yarn/multi-770cfcd984 at bd65fef https://api.github.com/repos/miraheze/MatomoAnalytics/commit/bd65fef
[03:59:53] miraheze/MatomoAnalytics - Universal-Omega the build passed.
[04:25:17] [CreateWiki] dependabot[bot] created dependabot/npm_and_yarn/multi-770cfcd984 (+1 new commit) https://github.com/miraheze/CreateWiki/commit/a56b03dd640c
[04:25:17] CreateWiki/dependabot/npm_and_yarn/multi-770cfcd984 dependabot[bot] a56b03d Bump minimatch…
[04:25:18] [CreateWiki] dependabot[bot] added the label 'javascript' to pull request #809 (Bump minimatch) https://github.com/miraheze/CreateWiki/pull/809
[04:25:18] [CreateWiki] dependabot[bot] added the label 'dependencies' to pull request #809 (Bump minimatch) https://github.com/miraheze/CreateWiki/pull/809
[04:25:20] [CreateWiki] dependabot[bot] opened pull request #809: Bump minimatch (main...dependabot/npm_and_yarn/multi-770cfcd984) https://github.com/miraheze/CreateWiki/pull/809
[04:25:22] [CreateWiki] dependabot[bot] added the label 'dependencies' to pull request #809 (Bump minimatch) https://github.com/miraheze/CreateWiki/pull/809
[04:25:24] [CreateWiki] dependabot[bot] added the label 'javascript' to pull request #809 (Bump minimatch) https://github.com/miraheze/CreateWiki/pull/809
[04:34:00] miraheze/CreateWiki - dependabot[bot] the build passed.
[04:45:45] [CreateWiki] Universal-Omega merged dependabot[bot]'s pull request #809: Bump minimatch (main...dependabot/npm_and_yarn/multi-770cfcd984) https://github.com/miraheze/CreateWiki/pull/809
[04:45:46] [CreateWiki] Universal-Omega pushed 1 new commit to main https://github.com/miraheze/CreateWiki/commit/64e1b929a03fa9ffc5e44ada448492427a4ab7d1
[04:45:46] CreateWiki/main dependabot[bot] 64e1b92 Bump minimatch (#809)…
[04:45:47] [CreateWiki] Universal-Omega deleted dependabot/npm_and_yarn/multi-770cfcd984 at a56b03d https://api.github.com/repos/miraheze/CreateWiki/commit/a56b03d
[04:54:08] miraheze/CreateWiki - Universal-Omega the build passed.
[04:54:16] PROBLEM - db161 Current Load on db161 is WARNING: LOAD WARNING - total load average: 7.37, 11.14, 5.60
[04:56:16] RECOVERY - db161 Current Load on db161 is OK: LOAD OK - total load average: 1.49, 7.64, 4.99
[06:42:41] [ManageWiki] Universal-Omega approved pull request #774 https://github.com/miraheze/ManageWiki/pull/774#pullrequestreview-3870158588
[07:05:14] !log [petramagna@test151] starting deploy of {'folders': '1.45/extensions/ManageWiki'} to test151
[07:05:15] !log [petramagna@test151] finished deploy of {'folders': '1.45/extensions/ManageWiki'} to test151 - SUCCESS in 0s
[07:05:16] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[07:05:18] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[07:09:44] [ManageWiki] lihaohong6 merged pull request #774: T14616: Index the canonical name of extensions (miraheze:main...lihaohong6:index) https://github.com/miraheze/ManageWiki/pull/774
[07:09:44] [ManageWiki] lihaohong6 pushed 1 new commit to main https://github.com/miraheze/ManageWiki/commit/db9f19593d1c82dda9f586978d69e271405c3e3a
[07:09:44] ManageWiki/main Peter Li db9f195 T14616: Index the canonical name of extensions (#774)…
[07:10:41] !log [petramagna@mwtask181] starting deploy of {'versions': '1.45', 'upgrade_extensions': 'ManageWiki'} to all
[07:10:43] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[07:11:06] !log [petramagna@mwtask181] finished deploy of {'versions': '1.45', 'upgrade_extensions': 'ManageWiki'} to all - SUCCESS in 25s
[07:11:09] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[07:15:46] miraheze/ManageWiki - lihaohong6 the build passed.
[09:02:49] [TSPortal] renovate[bot] created renovate/all-deps (+1 new commit) https://github.com/miraheze/TSPortal/commit/18a3d4133d73
[09:02:49] TSPortal/renovate/all-deps renovate[bot] 18a3d41 Update shivammathur/setup-php digest to a8ca9e3
[09:30:26] [ssl] WikiTideBot pushed 1 new commit to main https://github.com/miraheze/ssl/commit/a64026be3543f51cffaf3e1093c87459cd205e6c
[09:30:26] ssl/main WikiTideBot a64026b Bot: Auto-update domain lists
[10:28:39] [ssl] BlankEclair opened pull request #902: T15034: Redirect marathon.miraheze.org to marathongame.wiki (miraheze:main...BlankEclair:T15034) https://github.com/miraheze/ssl/pull/902
[10:29:05] [ssl] BlankEclair merged pull request #902: T15034: Redirect marathon.miraheze.org to marathongame.wiki (miraheze:main...BlankEclair:T15034) https://github.com/miraheze/ssl/pull/902
[10:29:06] [ssl] BlankEclair pushed 1 new commit to main https://github.com/miraheze/ssl/commit/f8c435a86aca2d1eab2adc2c38d5da5ee80d42e3
[10:29:06] ssl/main Claire Elaina f8c435a T15034: Redirect marathon.miraheze.org to marathongame.wiki (#902)…
[10:30:27] [ssl] coderabbitai[bot] commented on pull request #902: […] https://github.com/miraheze/ssl/pull/902#issuecomment-3976896209
[13:03:37] [ssl] whostacking opened pull request #903: change all CONECORP wikis to use new domain (miraheze:main...whostacking:conecorpcc) https://github.com/miraheze/ssl/pull/903
[13:04:21] PROBLEM - puppet181 Check unit status of listdomains_github_push on puppet181 is CRITICAL: CRITICAL: Status of the systemd unit listdomains_github_push
[13:04:49] [ssl] coderabbitai[bot] commented on pull request #903: No actionable comments were generated in the recent review. 🎉 […] https://github.com/miraheze/ssl/pull/903#issuecomment-3977141914
[13:30:21] RECOVERY - puppet181 Check unit status of listdomains_github_push on puppet181 is OK: OK: Status of the systemd unit listdomains_github_push
[13:30:29] [ssl] WikiTideBot pushed 1 new commit to main https://github.com/miraheze/ssl/commit/81ab3928b54b6f2facc461eebb00ce09e6110547
[13:30:30] ssl/main WikiTideBot 81ab392 Bot: Auto-update domain lists
[13:43:15] [TSPortal] renovate[bot] pushed 1 new commit to main https://github.com/miraheze/TSPortal/commit/18a3d4133d73a1350cbdbd71eae3de799bd766ca
[13:43:15] TSPortal/main renovate[bot] 18a3d41 Update shivammathur/setup-php digest to a8ca9e3
[13:43:15] [TSPortal] renovate[bot] deleted renovate/all-deps at 18a3d41 https://api.github.com/repos/miraheze/TSPortal/commit/18a3d41
[13:46:16] [landing] dependabot[bot] created dependabot/npm_and_yarn/minimatch-10.2.4 (+1 new commit) https://github.com/miraheze/landing/commit/c0bfedd3ae0c
[13:46:18] landing/dependabot/npm_and_yarn/minimatch-10.2.4 dependabot[bot] c0bfedd Bump minimatch from 10.2.2 to 10.2.4…
[13:46:20] [landing] dependabot[bot] added the label 'dependencies' to pull request #191 (Bump minimatch from 10.2.2 to 10.2.4) https://github.com/miraheze/landing/pull/191
[13:46:22] [landing] dependabot[bot] opened pull request #191: Bump minimatch from 10.2.2 to 10.2.4 (main...dependabot/npm_and_yarn/minimatch-10.2.4) https://github.com/miraheze/landing/pull/191
[13:46:24] [landing] dependabot[bot] added the label 'javascript' to pull request #191 (Bump minimatch from 10.2.2 to 10.2.4) https://github.com/miraheze/landing/pull/191
[13:47:02] miraheze/landing - dependabot[bot] the build passed.
[15:22:45] [puppet] MacFan4000 opened pull request #4803: users: remove my alt key (miraheze:main...MacFan4000:patch-74) https://github.com/miraheze/puppet/pull/4803
[15:48:59] [Grafana] FIRING: The mediawiki JobQueue backlog is increasing by more than 100 jobs a minute over an extended time period https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772290100000&orgId=1&to=1772293739251
[15:53:59] [Grafana] FIRING: An unusually high number of threats are being reported by CloudFlare! https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772290260000&orgId=1&to=1772294039251
[15:53:59] [Grafana] FIRING: A MediaWiki pool is sick according to CloudFlare https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772290260000&orgId=1&to=1772294039251
[15:53:59] [Grafana] RESOLVED: DatasourceError https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772290250000&orgId=1&to=1772293910
[16:00:52] [ssl] pskyechology requested changes on pull request #903 https://github.com/miraheze/ssl/pull/903#pullrequestreview-3870542872
[16:00:52] [ssl] pskyechology left a file comment in pull request #903 ae99ba9: Do not edit this file yourself. https://github.com/miraheze/ssl/pull/903#discussion_r2867597658
[16:29:36] !log increase prometheus151 ram to 16gb
[16:29:38] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[16:30:30] [ssl] WikiTideBot pushed 1 new commit to main https://github.com/miraheze/ssl/commit/a583545f7a7592d5d459677defedeb9528371f48
[16:30:31] ssl/main WikiTideBot a583545 Bot: Auto-update domain lists
[16:31:21] [puppet] Universal-Omega merged MacFan4000's pull request #4803: users: remove my alt key (miraheze:main...MacFan4000:patch-74) https://github.com/miraheze/puppet/pull/4803
[16:31:21] [puppet] Universal-Omega pushed 1 new commit to main https://github.com/miraheze/puppet/commit/7d3c9f757c5ea57d2268fbea196e0491702b48b8
[16:31:21] puppet/main MacFan4000 7d3c9f7 users: remove my alt key (#4803)…
[16:33:44] PROBLEM - mw151 MediaWiki Rendering on mw151 is CRITICAL: connect to address 10.0.15.114 and port 443: No route to host; HTTP CRITICAL - Unable to open TCP socket
[16:33:51] PROBLEM - ping on mw152 is CRITICAL: CRITICAL - Host Unreachable (10.0.15.115)
[16:33:57] PROBLEM - Host mw153 is DOWN: CRITICAL - Host Unreachable (10.0.15.140)
[16:33:57] PROBLEM - mw153 SSH on mw153 is CRITICAL: connect to address 10.0.15.140 and port 22: No route to host
[16:33:57] PROBLEM - mw153 Disk Space on mw153 is CRITICAL: connect to address 10.0.15.140 port 5666: No route to host; connect to host 10.0.15.140 port 5666: No route to host
[16:33:57] PROBLEM - mw153 Puppet on mw153 is CRITICAL: connect to address 10.0.15.140 port 5666: No route to host; connect to host 10.0.15.140 port 5666: No route to host
[16:34:13] PROBLEM - mw151 HTTPS on mw151 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to mw151.fsslc.wtnet port 443 after 3077 ms: Could not connect to server
[16:34:17] PROBLEM - Host mw152 is DOWN: CRITICAL - Host Unreachable (10.0.15.115)
[16:34:34] PROBLEM - mw151 NTP time on mw151 is CRITICAL: connect to address 10.0.15.114 port 5666: No route to host; connect to host 10.0.15.114 port 5666: No route to host
[16:34:39] PROBLEM - mw151 Current Load on mw151 is CRITICAL: connect to address 10.0.15.114 port 5666: No route to host; connect to host 10.0.15.114 port 5666: No route to host
[16:34:46] PROBLEM - ping on mw151 is CRITICAL: CRITICAL - Host Unreachable (10.0.15.114)
[16:34:51] PROBLEM - mw151 conntrack_table_size on mw151 is CRITICAL: connect to address 10.0.15.114 port 5666: No route to host; connect to host 10.0.15.114 port 5666: No route to host
[16:34:51] PROBLEM - mw151 Check unit status of php8.4-fpm_check_restart on mw151 is CRITICAL: connect to address 10.0.15.114 port 5666: No route to host; connect to host 10.0.15.114 port 5666: No route to host
[16:34:57] PROBLEM - mw151 PowerDNS Recursor on mw151 is CRITICAL: connect to address 10.0.15.114 port 5666: No route to host; connect to host 10.0.15.114 port 5666: No route to host
[16:34:57] PROBLEM - mw151 ferm_active on mw151 is CRITICAL: connect to address 10.0.15.114 port 5666: No route to host; connect to host 10.0.15.114 port 5666: No route to host
[16:34:57] PROBLEM - mw151 Puppet on mw151 is CRITICAL: connect to address 10.0.15.114 port 5666: No route to host; connect to host 10.0.15.114 port 5666: No route to host
[16:34:57] PROBLEM - Host mw151 is DOWN: CRITICAL - Host Unreachable (10.0.15.114)
[16:35:03] PROBLEM - cp191 Varnish Backends on cp191 is CRITICAL: 6 backends are down. mw151 mw152 mw161 mw162 mw153 mw163
[16:35:07] PROBLEM - cp201 Varnish Backends on cp201 is CRITICAL: 6 backends are down. mw151 mw152 mw161 mw162 mw153 mw163
[16:35:09] PROBLEM - ping on mw161 is CRITICAL: CRITICAL - Host Unreachable (10.0.16.132)
[16:35:12] uh
[16:35:16] whats going on
[16:35:22] PROBLEM - cp161 Varnish Backends on cp161 is CRITICAL: 6 backends are down. mw151 mw152 mw161 mw162 mw153 mw163
[16:35:26] PROBLEM - cp171 Varnish Backends on cp171 is CRITICAL: 6 backends are down. mw151 mw152 mw161 mw162 mw153 mw163
[16:35:34] PROBLEM - mw162 HTTPS on mw162 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to mw162.fsslc.wtnet port 443 after 3058 ms: Could not connect to server
[16:35:50] PROBLEM - mw161 APT on mw161 is CRITICAL: connect to address 10.0.16.132 port 5666: No route to host; connect to host 10.0.16.132 port 5666: No route to host
[16:35:51] PROBLEM - mw161 Disk Space on mw161 is CRITICAL: connect to address 10.0.16.132 port 5666: No route to host; connect to host 10.0.16.132 port 5666: No route to host
[16:35:51] PROBLEM - mw161 Current Load on mw161 is CRITICAL: connect to address 10.0.16.132 port 5666: No route to host; connect to host 10.0.16.132 port 5666: No route to host
[16:35:51] PROBLEM - mw161 conntrack_table_size on mw161 is CRITICAL: connect to address 10.0.16.132 port 5666: No route to host; connect to host 10.0.16.132 port 5666: No route to host
[16:35:56] PROBLEM - mw162 Check unit status of php8.4-fpm_check_restart on mw162 is CRITICAL: connect to address 10.0.16.133 port 5666: No route to host; connect to host 10.0.16.133 port 5666: No route to host
[16:35:57] PROBLEM - mw161 Check unit status of php8.4-fpm_check_restart on mw161 is CRITICAL: connect to address 10.0.16.132 port 5666: No route to host; connect to host 10.0.16.132 port 5666: No route to host
[16:35:57] PROBLEM - mw161 PowerDNS Recursor on mw161 is CRITICAL: connect to address 10.0.16.132 port 5666: No route to host; connect to host 10.0.16.132 port 5666: No route to host
[16:36:00] PROBLEM - mw161 php-fpm on mw161 is CRITICAL: connect to address 10.0.16.132 port 5666: No route to host; connect to host 10.0.16.132 port 5666: No route to host
[16:36:00] PROBLEM - mw161 NTP time on mw161 is CRITICAL: connect to address 10.0.16.132 port 5666: No route to host; connect to host 10.0.16.132 port 5666: No route to host
[16:36:01] PROBLEM - Host mw162 is DOWN: CRITICAL - Host Unreachable (10.0.16.133)
[16:36:10] PROBLEM - mw161 ferm_active on mw161 is CRITICAL: connect to address 10.0.16.132 port 5666: No route to host; connect to host 10.0.16.132 port 5666: No route to host
[16:36:12] @paladox
[16:36:19] PROBLEM - mw161 HTTPS on mw161 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to mw161.fsslc.wtnet port 443 after 3061 ms: Could not connect to server
[16:36:21] PROBLEM - Host mw163 is DOWN: CRITICAL - Host Unreachable (10.0.16.151)
[16:36:37] PROBLEM - mw161 SSH on mw161 is CRITICAL: connect to address 10.0.16.132 and port 22: No route to host
[16:36:40] PROBLEM - Host mw161 is DOWN: CRITICAL - Host Unreachable (10.0.16.132)
[16:36:44] @cosmicalpha
[16:38:08] @paladox around?
[16:38:19] PROBLEM - Host mw201 is DOWN: CRITICAL - Host Unreachable (10.0.20.162)
[16:38:20] yeh hmm
[16:38:25] PROBLEM - mw202 SSH on mw202 is CRITICAL: connect to address 10.0.20.163 and port 22: No route to host
[16:38:25] PROBLEM - Host mw202 is DOWN: CRITICAL - Host Unreachable (10.0.20.163)
[16:38:33] Ray ID `9d5159be6a63dd9d`
[16:38:37] dunno what happened?
[16:38:38] PROBLEM - ping on cp201 is CRITICAL: CRITICAL - Host Unreachable (10.0.20.166)
[16:38:40] PROBLEM - Host mw203 is DOWN: CRITICAL - Host Unreachable (10.0.20.165)
[16:38:52] PROBLEM - cp201 HTTPS on cp201 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to cp201.wikitide.net port 443 after 2628 ms: Could not connect to server
[16:38:58] PROBLEM - mw201.fsslc.wtnet SSL Check on sslhost is CRITICAL: connect to address mw201.fsslc.wtnet and port 443: No route to host; HTTP CRITICAL - Unable to open TCP socket
[16:39:00] the hosts are up?
[16:39:00] PROBLEM - cp201 health.wikitide.net HTTPS on cp201 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to health.wikitide.net port 443 after 531 ms: Could not connect to server [16:39:03] PROBLEM - mw191 PowerDNS Recursor on mw191 is CRITICAL: connect to address 10.0.19.160 port 5666: No route to hostconnect to host 10.0.19.160 port 5666: No route to host [16:39:09] PROBLEM - cp201 Haproxy TLS backend for reports171 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [16:39:12] PROBLEM - cp201 Nginx Backend for mw161 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [16:39:13] PROBLEM - cp201 Haproxy TLS backend for mw181 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [16:39:13] PROBLEM - Host cp201 is DOWN: CRITICAL - Host Unreachable (10.0.20.166) [16:39:13] PROBLEM - cp201 Haproxy TLS backend for swiftproxy161 on cp201 is CRITICAL: connect to address 10.0.20.166 port 5666: No route to hostconnect to host 10.0.20.166 port 5666: No route to host [16:39:21] nope, still dead [16:39:21] PROBLEM - mw191 SSH on mw191 is CRITICAL: connect to address 10.0.19.160 and port 22: No route to host [16:39:28] huh i can't ping the ips? 
but they show as up [16:39:36] PROBLEM - ping on mw192 is CRITICAL: CRITICAL - Host Unreachable (10.0.19.161) [16:39:38] its not everything though [16:39:39] PROBLEM - ping on cp191 is CRITICAL: CRITICAL - Host Unreachable (10.0.19.146) [16:39:41] PROBLEM - mw191 HTTPS on mw191 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to mw191.fsslc.wtnet port 443 after 2919 ms: Could not connect to server [16:39:43] PROBLEM - cp191 Haproxy TLS backend for mw181 on cp191 is CRITICAL: connect to address 10.0.19.146 port 5666: No route to hostconnect to host 10.0.19.146 port 5666: No route to host [16:39:44] PROBLEM - cp191 NTP time on cp191 is CRITICAL: connect to address 10.0.19.146 port 5666: No route to hostconnect to host 10.0.19.146 port 5666: No route to host [16:39:44] PROBLEM - cp191 Nginx Backend for mw162 on cp191 is CRITICAL: connect to address 10.0.19.146 port 5666: No route to hostconnect to host 10.0.19.146 port 5666: No route to host [16:39:44] PROBLEM - cp191 Nginx Backend for mw172 on cp191 is CRITICAL: connect to address 10.0.19.146 port 5666: No route to hostconnect to host 10.0.19.146 port 5666: No route to host [16:39:44] PROBLEM - cp191 Haproxy TLS backend for mw183 on cp191 is CRITICAL: connect to address 10.0.19.146 port 5666: No route to hostconnect to host 10.0.19.146 port 5666: No route to host [16:39:44] PROBLEM - Host cp191 is DOWN: CRITICAL - Host Unreachable (10.0.19.146) [16:39:45] i can still access phorge [16:39:46] PROBLEM - mw192 SSH on mw192 is CRITICAL: connect to address 10.0.19.161 and port 22: No route to host [16:39:54] PROBLEM - ping on mw191 is CRITICAL: CRITICAL - Host Unreachable (10.0.19.160) [16:39:56] PROBLEM - bast161 SSH on bast161 is CRITICAL: connect to address 10.0.16.127 and port 22: No route to host [16:40:00] PROBLEM - mw191 conntrack_table_size on mw191 is CRITICAL: connect to address 10.0.19.160 port 5666: No route to hostconnect to host 
10.0.19.160 port 5666: No route to host [16:40:00] PROBLEM - mw191 ferm_active on mw191 is CRITICAL: connect to address 10.0.19.160 port 5666: No route to hostconnect to host 10.0.19.160 port 5666: No route to host [16:40:09] PROBLEM - Host mw191 is DOWN: CRITICAL - Host Unreachable (10.0.19.160) [16:40:10] PROBLEM - Host mw192 is DOWN: CRITICAL - Host Unreachable (10.0.19.161) [16:40:10] PROBLEM - mw192 ferm_active on mw192 is CRITICAL: connect to address 10.0.19.161 port 5666: No route to hostconnect to host 10.0.19.161 port 5666: No route to host [16:40:13] PROBLEM - ping on mw193 is CRITICAL: CRITICAL - Host Unreachable (10.0.19.164) [16:40:15] PROBLEM - Host bast161 is DOWN: CRITICAL - Host Unreachable (10.0.16.127) [16:40:21] PROBLEM - mw193 APT on mw193 is CRITICAL: connect to address 10.0.19.164 port 5666: No route to hostconnect to host 10.0.19.164 port 5666: No route to host [16:40:25] PROBLEM - mw193 SSH on mw193 is CRITICAL: connect to address 10.0.19.164 and port 22: No route to host [16:40:25] PROBLEM - mw193 php-fpm on mw193 is CRITICAL: connect to address 10.0.19.164 port 5666: No route to hostconnect to host 10.0.19.164 port 5666: No route to host [16:40:27] the proxies are gone igg [16:40:28] PROBLEM - mw193 PowerDNS Recursor on mw193 is CRITICAL: connect to address 10.0.19.164 port 5666: No route to hostconnect to host 10.0.19.164 port 5666: No route to host [16:40:28] PROBLEM - mw193 Puppet on mw193 is CRITICAL: connect to address 10.0.19.164 port 5666: No route to hostconnect to host 10.0.19.164 port 5666: No route to host [16:40:31] PROBLEM - mw193.fsslc.wtnet SSL Check on sslhost is CRITICAL: connect to address mw193.fsslc.wtnet and port 443: No route to hostHTTP CRITICAL - Unable to open TCP socket [16:40:31] PROBLEM - mw193 Disk Space on mw193 is CRITICAL: connect to address 10.0.19.164 port 5666: No route to hostconnect to host 10.0.19.164 port 5666: No route to host [16:40:39] restarting bast161 [16:40:39] PROBLEM - mw193 ferm_active on 
mw193 is CRITICAL: connect to address 10.0.19.164 port 5666: No route to hostconnect to host 10.0.19.164 port 5666: No route to host [16:40:39] PROBLEM - mw193 NTP time on mw193 is CRITICAL: connect to address 10.0.19.164 port 5666: No route to hostconnect to host 10.0.19.164 port 5666: No route to host [16:40:43] PROBLEM - mw193 Check unit status of php8.4-fpm_check_restart on mw193 is CRITICAL: connect to address 10.0.19.164 port 5666: No route to hostconnect to host 10.0.19.164 port 5666: No route to host [16:40:43] PROBLEM - mw193 HTTPS on mw193 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to mw193.fsslc.wtnet port 443 after 2178 ms: Could not connect to server [16:41:08] PROBLEM - mw193 MediaWiki Rendering on mw193 is CRITICAL: connect to address 10.0.19.164 and port 443: No route to hostHTTP CRITICAL - Unable to open TCP socket [16:41:12] PROBLEM - mw193 Current Load on mw193 is CRITICAL: connect to address 10.0.19.164 port 5666: No route to hostconnect to host 10.0.19.164 port 5666: No route to host [16:41:19] PROBLEM - mw193 conntrack_table_size on mw193 is CRITICAL: connect to address 10.0.19.164 port 5666: No route to hostconnect to host 10.0.19.164 port 5666: No route to host [16:41:26] PROBLEM - Host mw193 is DOWN: CRITICAL - Host Unreachable (10.0.19.164) [16:41:27] PROBLEM - mw191.fsslc.wtnet SSL Check on sslhost is CRITICAL: connect to address mw191.fsslc.wtnet and port 443: No route to hostHTTP CRITICAL - Unable to open TCP socket [16:41:52] [02ssl] 07whostacking closed pull request #903: change all CONECORP wikis to use new domain (07miraheze:03main...07whostacking:03conecorpcc) 13https://github.com/miraheze/ssl/pull/903 [16:41:54] Did networking fail? [16:42:07] private network is failing? 
[16:42:14] RECOVERY - Host bast161 is UP: PING OK - Packet loss = 0%, RTA = 0.70 ms [16:42:20] [1/5] > [root@bast161:/home/paladox]# ping mw151 [16:42:20] [2/5] > PING mw151.fsslc.wtnet (10.0.15.114) 56(84) bytes of data. [16:42:20] [3/5] > ^C [16:42:20] [4/5] > --- mw151.fsslc.wtnet ping statistics --- [16:42:21] [5/5] > 3 packets transmitted, 0 received, 100% packet loss, time 2054ms [16:42:23] PROBLEM - bast161 Puppet on bast161 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [16:42:26] [02ssl] 07whostacking opened pull request #904: T15035: change all CONECORP wikis to use new domain (07miraheze:03main...07whostacking:03conecorpwikis) 13https://github.com/miraheze/ssl/pull/904 [16:42:43] but i can ping cloud15 though [16:42:44] [1/8] > [root@bast161:/home/paladox]# ping cloud15 [16:42:44] [2/8] > PING cloud15.fsslc.wtnet (10.0.15.1) 56(84) bytes of data. [16:42:45] [3/8] > 64 bytes from cloud15.fsslc.wtnet (10.0.15.1): icmp_seq=1 ttl=64 time=0.283 ms [16:42:45] [4/8] > 64 bytes from cloud15.fsslc.wtnet (10.0.15.1): icmp_seq=2 ttl=64 time=0.110 ms [16:42:45] [5/8] > ^C [16:42:46] [6/8] > --- cloud15.fsslc.wtnet ping statistics --- [16:42:46] [7/8] > 2 packets transmitted, 2 received, 0% packet loss, time 1001ms [16:42:46] [8/8] > rtt min/avg/max/mdev = 0.110/0.196/0.283/0.086 ms [16:42:58] RECOVERY - bast161 Puppet on bast161 is OK: OK: Puppet is currently enabled, last run 3 seconds ago with 0 failures [16:43:52] hmm [16:43:55] RECOVERY - Host mw162 is UP: PING OK - Packet loss = 0%, RTA = 0.32 ms [16:43:58] RECOVERY - bast161 SSH on bast161 is OK: SSH OK - OpenSSH_10.0p2 (protocol 2.0) [16:43:58] [02ssl] 07coderabbitai[bot] commented on pull request #904: No actionable comments were generated in the recent review. 
🎉 […] 13https://github.com/miraheze/ssl/pull/904#issuecomment-3977402275 [16:44:14] RECOVERY - Host mw163 is UP: PING OK - Packet loss = 0%, RTA = 0.36 ms [16:44:38] PROBLEM - mw162 Puppet on mw162 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [16:44:42] RECOVERY - Host mw161 is UP: PING OK - Packet loss = 0%, RTA = 0.27 ms [16:44:46] i guess all vms need rebooting [16:44:50] mw151 saying network down [16:45:13] PROBLEM - mw192.fsslc.wtnet SSL Check on sslhost is CRITICAL: connect to address mw192.fsslc.wtnet and port 443: No route to hostHTTP CRITICAL - Unable to open TCP socket [16:45:18] PROBLEM - mw163 HTTPS on mw163 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502 [16:45:21] PROBLEM - mw153.fsslc.wtnet SSL Check on sslhost is CRITICAL: connect to address mw153.fsslc.wtnet and port 443: No route to hostHTTP CRITICAL - Unable to open TCP socket [16:45:27] RECOVERY - ping on mw161 is OK: PING OK - Packet loss = 0%, RTA = 0.21 ms [16:45:30] RECOVERY - mw162 HTTPS on mw162 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4308 bytes in 0.630 second response time [16:45:33] RECOVERY - mw163 HTTPS on mw163 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4308 bytes in 4.846 second response time [16:45:48] RECOVERY - mw161 Current Load on mw161 is OK: LOAD OK - total load average: 14.19, 4.54, 1.61 [16:45:48] RECOVERY - mw161 Disk Space on mw161 is OK: DISK OK - free space: / 29046MiB (54.5% inode=89%); [16:45:48] RECOVERY - mw161 APT on mw161 is OK: APT OK: 0 packages available for upgrade (0 critical updates). 
[16:45:53] RECOVERY - mw161 Check unit status of php8.4-fpm_check_restart on mw161 is OK: OK: Status of the systemd unit php8.4-fpm_check_restart [16:45:53] RECOVERY - mw162 Check unit status of php8.4-fpm_check_restart on mw162 is OK: OK: Status of the systemd unit php8.4-fpm_check_restart [16:45:53] RECOVERY - mw161 conntrack_table_size on mw161 is OK: OK: nf_conntrack is 2 % full [16:45:53] RECOVERY - Host mw153 is UP: PING OK - Packet loss = 0%, RTA = 0.37 ms [16:45:58] RECOVERY - mw153 SSH on mw153 is OK: SSH OK - OpenSSH_10.0p2 (protocol 2.0) [16:45:58] RECOVERY - mw161 NTP time on mw161 is OK: NTP OK: Offset -3.346800804e-05 secs [16:45:58] RECOVERY - mw161 PowerDNS Recursor on mw161 is OK: DNS OK: 0.034 seconds response time. mw161.fsslc.wtnet returns 10.0.16.132 [16:45:59] can confirm we are coming back up [16:46:03] RECOVERY - mw161 php-fpm on mw161 is OK: PROCS OK: 25 processes with command name 'php-fpm8.4' [16:46:08] RECOVERY - mw161 ferm_active on mw161 is OK: OK ferm input default policy is set [16:46:20] RECOVERY - Host mw152 is UP: PING OK - Packet loss = 0%, RTA = 0.25 ms [16:46:23] RECOVERY - mw161 HTTPS on mw161 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4308 bytes in 2.927 second response time [16:46:23] RECOVERY - mw153.fsslc.wtnet SSL Check on sslhost is OK: OK - Certificate 'CloudFlare Origin Certificate' will expire on Tue 03 Apr 2040 12:14:00 PM GMT +0000. [16:46:33] RECOVERY - mw161 SSH on mw161 is OK: SSH OK - OpenSSH_10.0p2 (protocol 2.0) [16:46:43] PROBLEM - mw161 Puppet on mw161 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. 
[16:47:07] RECOVERY - Host mw151 is UP: PING OK - Packet loss = 0%, RTA = 0.29 ms [16:47:18] RECOVERY - Host cp201 is UP: PING OK - Packet loss = 0%, RTA = 0.27 ms [16:47:38] RECOVERY - mw153 Disk Space on mw153 is OK: DISK OK - free space: / 29144MiB (56.2% inode=89%); [16:47:38] RECOVERY - mw153 Puppet on mw153 is OK: OK: Puppet is currently enabled, last run 26 minutes ago with 0 failures [16:47:42] PROBLEM - mw163 HTTPS on mw163 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to mw163.fsslc.wtnet port 443 after 0 ms: Could not connect to server [16:47:43] PROBLEM - mw152 Puppet on mw152 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [16:47:46] PROBLEM - mw163.fsslc.wtnet SSL Check on sslhost is CRITICAL: connect to address mw163.fsslc.wtnet and port 443: Connection refusedHTTP CRITICAL - Unable to open TCP socket [16:47:48] RECOVERY - ping on mw152 is OK: PING OK - Packet loss = 0%, RTA = 0.22 ms [16:48:08] PROBLEM - cp201 Puppet on cp201 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. 
[16:48:08] RECOVERY - mw151 HTTPS on mw151 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4308 bytes in 0.456 second response time [16:48:10] RECOVERY - Host mw202 is UP: PING OK - Packet loss = 0%, RTA = 0.19 ms [16:48:10] RECOVERY - Host mw191 is UP: PING OK - Packet loss = 0%, RTA = 0.33 ms [16:48:11] RECOVERY - Host mw201 is UP: PING OK - Packet loss = 0%, RTA = 0.25 ms [16:48:15] RECOVERY - mw202 SSH on mw202 is OK: SSH OK - OpenSSH_10.0p2 (protocol 2.0) [16:48:17] PROBLEM - mw163 MediaWiki Rendering on mw163 is CRITICAL: connect to address 10.0.16.151 and port 443: Connection refusedHTTP CRITICAL - Unable to open TCP socket [16:48:19] RECOVERY - Host mw192 is UP: PING OK - Packet loss = 0%, RTA = 0.25 ms [16:48:32] RECOVERY - Host mw203 is UP: PING OK - Packet loss = 0%, RTA = 0.28 ms [16:48:33] PROBLEM - mw192 MediaWiki Rendering on mw192 is CRITICAL: connect to address 10.0.19.161 and port 443: Connection refusedHTTP CRITICAL - Unable to open TCP socket [16:48:35] RECOVERY - ping on cp201 is OK: PING OK - Packet loss = 0%, RTA = 0.25 ms [16:48:36] RECOVERY - mw162 Puppet on mw162 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [16:48:40] RECOVERY - mw161 Puppet on mw161 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [16:48:43] RECOVERY - mw151 Current Load on mw151 is OK: LOAD OK - total load average: 4.70, 1.28, 0.44 [16:48:43] RECOVERY - mw151 NTP time on mw151 is OK: NTP OK: Offset -6.344914436e-05 secs [16:48:43] PROBLEM - mw192 HTTPS on mw192 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to mw192.fsslc.wtnet port 443 after 0 ms: Could not connect to server [16:48:45] RECOVERY - cp201 HTTPS on cp201 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4357 bytes in 0.444 second response time [16:48:48] RECOVERY - mw151 Check unit status of php8.4-fpm_check_restart on mw151 is OK: 
OK: Status of the systemd unit php8.4-fpm_check_restart [16:48:48] RECOVERY - mw151 conntrack_table_size on mw151 is OK: OK: nf_conntrack is 1 % full [16:48:53] RECOVERY - ping on mw151 is OK: PING OK - Packet loss = 0%, RTA = 0.30 ms [16:48:54] RECOVERY - cp201 Haproxy TLS backend for mw181 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8119 [16:48:57] RECOVERY - mw163 HTTPS on mw163 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4308 bytes in 0.408 second response time [16:48:58] RECOVERY - mw151 PowerDNS Recursor on mw151 is OK: DNS OK: 0.032 seconds response time. mw151.fsslc.wtnet returns 10.0.15.114 [16:48:58] RECOVERY - mw151 Puppet on mw151 is OK: OK: Puppet is currently enabled, last run 59 seconds ago with 0 failures [16:48:58] RECOVERY - mw151 ferm_active on mw151 is OK: OK ferm input default policy is set [16:49:00] RECOVERY - cp201 Haproxy TLS backend for swiftproxy161 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8206 [16:49:01] RECOVERY - cp201 Haproxy TLS backend for reports171 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8205 [16:49:02] RECOVERY - mw163 MediaWiki Rendering on mw163 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 2.367 second response time [16:49:02] RECOVERY - cp201 health.wikitide.net HTTPS on cp201 is OK: HTTP OK: HTTP/2 200 - 112 bytes in 0.011 second response time [16:49:04] RECOVERY - mw191 SSH on mw191 is OK: SSH OK - OpenSSH_10.0p2 (protocol 2.0) [16:49:04] RECOVERY - mw191.fsslc.wtnet SSL Check on sslhost is OK: OK - Certificate 'CloudFlare Origin Certificate' will expire on Tue 03 Apr 2040 12:14:00 PM GMT +0000. [16:49:06] RECOVERY - cp201 Nginx Backend for mw161 on cp201 is OK: TCP OK - 0.000 second response time on localhost port 8115 [16:49:08] RECOVERY - mw191 PowerDNS Recursor on mw191 is OK: DNS OK: 0.299 seconds response time. 
mw191.fsslc.wtnet returns 10.0.19.160 [16:49:14] RECOVERY - mw163.fsslc.wtnet SSL Check on sslhost is OK: OK - Certificate 'CloudFlare Origin Certificate' will expire on Tue 03 Apr 2040 12:14:00 PM GMT +0000. [16:49:17] PROBLEM - swiftproxy171 Current Load on swiftproxy171 is CRITICAL: LOAD CRITICAL - total load average: 9.22, 5.20, 3.11 [16:49:32] RECOVERY - Host mw193 is UP: PING OK - Packet loss = 0%, RTA = 0.30 ms [16:49:33] PROBLEM - mw202 Puppet on mw202 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [16:49:33] RECOVERY - mw191 HTTPS on mw191 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4308 bytes in 0.075 second response time [16:49:37] RECOVERY - mw151 MediaWiki Rendering on mw151 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.249 second response time [16:49:37] RECOVERY - mw192 SSH on mw192 is OK: SSH OK - OpenSSH_10.0p2 (protocol 2.0) [16:49:37] RECOVERY - mw152 Puppet on mw152 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [16:49:38] PROBLEM - mw203 Puppet on mw203 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. 
[16:49:43] RECOVERY - ping on mw192 is OK: PING OK - Packet loss = 0%, RTA = 0.36 ms [16:49:43] RECOVERY - ping on mw193 is OK: PING OK - Packet loss = 0%, RTA = 0.21 ms [16:49:44] RECOVERY - Host cp191 is UP: PING OK - Packet loss = 0%, RTA = 0.22 ms [16:49:47] RECOVERY - ping on mw191 is OK: PING OK - Packet loss = 0%, RTA = 0.23 ms [16:49:55] RECOVERY - mw193 php-fpm on mw193 is OK: PROCS OK: 25 processes with command name 'php-fpm8.4' [16:49:58] RECOVERY - mw191 conntrack_table_size on mw191 is OK: OK: nf_conntrack is 3 % full [16:49:58] RECOVERY - swiftproxy171 Current Load on swiftproxy171 is OK: LOAD OK - total load average: 6.79, 5.12, 3.17 [16:49:59] RECOVERY - mw191 ferm_active on mw191 is OK: OK ferm input default policy is set [16:50:08] RECOVERY - mw192 ferm_active on mw192 is OK: OK ferm input default policy is set [16:50:11] RECOVERY - cp201 Puppet on cp201 is OK: OK: Puppet is currently enabled, last run 45 seconds ago with 0 failures [16:50:14] RECOVERY - mw193 PowerDNS Recursor on mw193 is OK: DNS OK: 0.303 seconds response time. mw193.fsslc.wtnet returns 10.0.19.164 [16:50:17] RECOVERY - mw193 SSH on mw193 is OK: SSH OK - OpenSSH_10.0p2 (protocol 2.0) [16:50:19] RECOVERY - mw193 APT on mw193 is OK: APT OK: 0 packages available for upgrade (0 critical updates). 
[16:50:20] RECOVERY - mw193 HTTPS on mw193 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4308 bytes in 0.073 second response time [16:50:24] RECOVERY - mw193 Check unit status of php8.4-fpm_check_restart on mw193 is OK: OK: Status of the systemd unit php8.4-fpm_check_restart [16:50:27] RECOVERY - mw193 NTP time on mw193 is OK: NTP OK: Offset -4.959106445e-05 secs [16:50:36] RECOVERY - mw193 ferm_active on mw193 is OK: OK ferm input default policy is set [16:50:38] RECOVERY - mw193 Disk Space on mw193 is OK: DISK OK - free space: / 33049MiB (62.1% inode=89%); [16:50:52] RECOVERY - mw203 Puppet on mw203 is OK: OK: Puppet is currently enabled, last run 44 seconds ago with 0 failures [16:50:53] RECOVERY - mw192.fsslc.wtnet SSL Check on sslhost is OK: OK - Certificate 'CloudFlare Origin Certificate' will expire on Tue 03 Apr 2040 12:14:00 PM GMT +0000. [16:50:55] RECOVERY - mw201.fsslc.wtnet SSL Check on sslhost is OK: OK - Certificate 'CloudFlare Origin Certificate' will expire on Tue 03 Apr 2040 12:14:00 PM GMT +0000. [16:51:00] RECOVERY - mw192 HTTPS on mw192 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4308 bytes in 0.396 second response time [16:51:06] RECOVERY - mw192 MediaWiki Rendering on mw192 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 3.366 second response time [16:51:08] RECOVERY - mw193 Current Load on mw193 is OK: LOAD OK - total load average: 7.70, 2.73, 0.98 [16:51:08] RECOVERY - mw193 MediaWiki Rendering on mw193 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.244 second response time [16:51:09] RECOVERY - mw202 Puppet on mw202 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [16:51:13] RECOVERY - mw193.fsslc.wtnet SSL Check on sslhost is OK: OK - Certificate 'CloudFlare Origin Certificate' will expire on Tue 03 Apr 2040 12:14:00 PM GMT +0000. 
[16:51:13] RECOVERY - mw193 conntrack_table_size on mw193 is OK: OK: nf_conntrack is 2 % full [16:51:22] RECOVERY - cp191 Varnish Backends on cp191 is OK: All 31 backends are healthy [16:51:26] RECOVERY - cp161 Varnish Backends on cp161 is OK: All 31 backends are healthy [16:51:29] RECOVERY - cp171 Varnish Backends on cp171 is OK: All 31 backends are healthy [16:51:32] RECOVERY - cp201 Varnish Backends on cp201 is OK: All 31 backends are healthy [16:51:38] RECOVERY - cp191 Nginx Backend for mw172 on cp191 is OK: TCP OK - 0.000 second response time on localhost port 8118 [16:51:38] RECOVERY - cp191 Haproxy TLS backend for mw181 on cp191 is OK: TCP OK - 0.000 second response time on localhost port 8119 [16:51:43] RECOVERY - cp191 Haproxy TLS backend for mw183 on cp191 is OK: TCP OK - 0.000 second response time on localhost port 8127 [16:51:43] RECOVERY - cp191 NTP time on cp191 is OK: NTP OK: Offset -5.510449409e-05 secs [16:51:43] RECOVERY - cp191 Nginx Backend for mw162 on cp191 is OK: TCP OK - 0.000 second response time on localhost port 8116 [16:51:48] RECOVERY - ping on cp191 is OK: PING OK - Packet loss = 0%, RTA = 0.23 ms [16:52:56] RECOVERY - mw193 Puppet on mw193 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [16:53:08] [02ssl] 07pskyechology approved pull request #904 13https://github.com/miraheze/ssl/pull/904#pullrequestreview-3870579503 [16:53:28] [02ssl] 07pskyechology merged 07whostacking's pull request #904: T15035: change all CONECORP wikis to use new domain (07miraheze:03main...07whostacking:03conecorpwikis) 13https://github.com/miraheze/ssl/pull/904 [16:53:30] [02ssl] 07pskyechology pushed 1 new commit to 03main 13https://github.com/miraheze/ssl/commit/2e9307b7b848bb2e50a594be68739d1e122a7e7c [16:53:31] 02ssl/03main 07whostacking 032e9307b T15035: change all CONECORP wikis to use new domain (#904)… [17:00:41] [02ssl] 07WikiTideBot pushed 1 new commit to 03main 
13https://github.com/miraheze/ssl/commit/657fe1296d5a3a1eaaf5df7a0f1decb48aac4428 [17:00:41] 02ssl/03main 07WikiTideBot 03657fe12 Bot: Auto-update domain lists [17:13:59] [Grafana] RESOLVED: DatasourceError https://grafana.wikitide.net/d/GtxbP1Xnk?from=1772292660000&orgId=1&to=1772298660000 [21:03:50] [02mediawiki-repos] 07SomeMWDev created 03remove-maps-patch (+1 new commit) 13https://github.com/miraheze/mediawiki-repos/commit/ac69c1c803c7 [21:03:50] 02mediawiki-repos/03remove-maps-patch 07SomeRandomDeveloper 03ac69c1c Remove Maps patch again… [21:04:09] [02mediawiki-repos] 07SomeMWDev opened pull request #129: Remove Maps patch again (03main...03remove-maps-patch) 13https://github.com/miraheze/mediawiki-repos/pull/129 [21:05:03] [02mediawiki-repos] 07SomeMWDev merged pull request #129: Remove Maps patch again (03main...03remove-maps-patch) 13https://github.com/miraheze/mediawiki-repos/pull/129 [21:05:03] [02mediawiki-repos] 07SomeMWDev pushed 1 new commit to 03main 13https://github.com/miraheze/mediawiki-repos/commit/ca1bc6ab3fc0c102237a48319830687d37a4ef2b [21:05:03] 02mediawiki-repos/03main 07SomeRandomDeveloper 03ca1bc6a Remove Maps patch again (#129)… [21:05:05] [02mediawiki-repos] 07SomeMWDev 04deleted 03remove-maps-patch at 03ac69c1c 13https://api.github.com/repos/miraheze/mediawiki-repos/commit/ac69c1c [21:06:26] !log [somerandomdeveloper@mwtask181] starting deploy of {'world': True, 'versions': '1.45'} to all [21:06:28] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [21:09:01] !log [somerandomdeveloper@mwtask181] finished deploy of {'world': True, 'versions': '1.45'} to all - SUCCESS in 155s [21:09:03] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [21:44:09] [02mw-config] 07SomeMWDev created 03T15017 (+1 new commit) 13https://github.com/miraheze/mw-config/commit/fabb22437311 
[21:44:09] 02mw-config/03T15017 07SomeRandomDeveloper 03fabb224 Restrict AutoCreatePage… [21:45:12] !log [somerandomdeveloper@test151] starting deploy of {'config': True} to test151 [21:45:13] !log [somerandomdeveloper@test151] finished deploy of {'config': True} to test151 - SUCCESS in 0s [21:45:14] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [21:45:16] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [21:45:40] [02mw-config] 07SomeMWDev opened pull request #6333: Restrict AutoCreatePage (03main...03T15017) 13https://github.com/miraheze/mw-config/pull/6333 [21:46:34] [02mw-config] 07SomeMWDev merged pull request #6333: Restrict AutoCreatePage (03main...03T15017) 13https://github.com/miraheze/mw-config/pull/6333 [21:46:36] [02mw-config] 07SomeMWDev pushed 1 new commit to 03main 13https://github.com/miraheze/mw-config/commit/1dd39a3c53c37ca4ae9b82164b84b02611821ad5 [21:46:38] !log [somerandomdeveloper@test151] starting deploy of {'pull': 'config', 'config': True} to test151 [21:46:38] [02mw-config] 07SomeMWDev 04deleted 03T15017 at 03fabb224 13https://api.github.com/repos/miraheze/mw-config/commit/fabb224 [21:46:39] !log [somerandomdeveloper@test151] finished deploy of {'pull': 'config', 'config': True} to test151 - SUCCESS in 0s [21:46:39] 02mw-config/03main 07SomeRandomDeveloper 031dd39a3 Restrict AutoCreatePage (#6333)… [21:46:40] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [21:46:42] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [21:46:54] miraheze/mw-config - SomeMWDev the build passed. 
[21:47:05] !log [somerandomdeveloper@mwtask181] starting deploy of {'pull': 'config', 'config': True} to all [21:47:07] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [21:47:30] !log [somerandomdeveloper@mwtask181] finished deploy of {'pull': 'config', 'config': True} to all - SUCCESS in 24s [21:47:32] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [22:35:18] PROBLEM - phorge171 Current Load on phorge171 is CRITICAL: LOAD CRITICAL - total load average: 8.35, 6.26, 3.37 [22:39:18] PROBLEM - phorge171 Current Load on phorge171 is WARNING: LOAD WARNING - total load average: 7.98, 7.33, 4.48 [22:41:18] PROBLEM - phorge171 Current Load on phorge171 is CRITICAL: LOAD CRITICAL - total load average: 8.62, 7.74, 4.98 [22:43:18] RECOVERY - phorge171 Current Load on phorge171 is OK: LOAD OK - total load average: 2.47, 5.95, 4.67 [22:50:56] PROBLEM - cp171 Disk Space on cp171 is WARNING: DISK WARNING - free space: / 45298MiB (10.0% inode=100%); [23:46:56] RECOVERY - cp171 Disk Space on cp171 is OK: DISK OK - free space: / 84967MiB (18.7% inode=100%); [23:47:18] PROBLEM - phorge171 Current Load on phorge171 is WARNING: LOAD WARNING - total load average: 7.82, 5.09, 2.44 [23:49:18] PROBLEM - phorge171 Current Load on phorge171 is CRITICAL: LOAD CRITICAL - total load average: 8.39, 6.22, 3.18 [23:57:18] RECOVERY - phorge171 Current Load on phorge171 is OK: LOAD OK - total load average: 2.20, 5.83, 4.53