[01:04:56] 10DBA, 10GrowthExperiments, 10Growth-Team (Current Sprint), 10Patch-For-Review, and 2 others: Slow load times for Special:Homepage on cswiki - https://phabricator.wikimedia.org/T267216 (10Tgr) Filed {T268700} about the alerts. >>! In T267216#6646584, @kostajh wrote: > * the mariadb-optimizer-bug, which I...
[02:31:27] PROBLEM - MariaDB sustained replica lag on db1089 is CRITICAL: 2.4 ge 2 https://wikitech.wikimedia.org/wiki/MariaDB/troubleshooting%23Replication_lag https://grafana.wikimedia.org/d/000000273/mysql?orgId=1&var-server=db1089&var-port=9104
[02:41:43] RECOVERY - MariaDB sustained replica lag on db1089 is OK: (C)2 ge (W)1 ge 0.8 https://wikitech.wikimedia.org/wiki/MariaDB/troubleshooting%23Replication_lag https://grafana.wikimedia.org/d/000000273/mysql?orgId=1&var-server=db1089&var-port=9104
[05:38:42] 10DBA, 10GrowthExperiments, 10Growth-Team (Current Sprint), 10Patch-For-Review, and 2 others: Slow load times for Special:Homepage on cswiki - https://phabricator.wikimedia.org/T267216 (10Marostegui) I agree with @Tgr - waiting for MariaDB to fix this might be a long shot. We also don't know whether: - Th...
[05:40:30] 10DBA, 10Community-Tech, 10Expiring-Watchlist-Items: Watchlist Expiry: Release plan [rough schedule] - https://phabricator.wikimedia.org/T261005 (10Marostegui) Excellent - thank you!
[06:20:21] 10DBA, 10Data-Services, 10Patch-For-Review, 10cloud-services-team (Kanban): Productionize clouddb10[13-20] - https://phabricator.wikimedia.org/T267090 (10Marostegui) s2 situation: * Transfer from db1074 (sanitarium master) to clouddb1014:3312 and clouddb1018:3312 completed successfully. * Sanitization on...
[06:20:49] 10DBA, 10Data-Services, 10Patch-For-Review, 10cloud-services-team (Kanban): Productionize clouddb10[13-20] - https://phabricator.wikimedia.org/T267090 (10Marostegui)
[06:31:54] 10DBA, 10Data-Services, 10Patch-For-Review, 10cloud-services-team (Kanban): Productionize clouddb10[13-20] - https://phabricator.wikimedia.org/T267090 (10Marostegui)
[06:36:19] 10DBA, 10Data-Services, 10Patch-For-Review, 10cloud-services-team (Kanban): Productionize clouddb10[13-20] - https://phabricator.wikimedia.org/T267090 (10Marostegui) Restarted clouddb1015:3314, clouddb1015:3316 and clouddb1019:3314, clouddb1019:3316 (they had no errors for a day) let's give them another 24...
[06:41:19] 10DBA, 10Data-Services, 10Patch-For-Review, 10cloud-services-team (Kanban): Productionize clouddb10[13-20] - https://phabricator.wikimedia.org/T267090 (10Marostegui) On-going transfers: db1125:3317 -> clouddb1014:3317 db1125:3317 -> clouddb1018:3317
[06:43:04] 10DBA, 10Orchestrator, 10User-Kormat: Enable report_host for mariadb - https://phabricator.wikimedia.org/T266483 (10Marostegui)
[06:52:45] 10Blocked-on-schema-change, 10DBA: Schema change for renaming namespace_title index on watchlist - https://phabricator.wikimedia.org/T268004 (10Marostegui) Nothing on logs regarding `namespace_title` index.
[06:54:26] 10Blocked-on-schema-change, 10DBA: Schema change for renaming namespace_title index on watchlist - https://phabricator.wikimedia.org/T268004 (10Marostegui) s5 progress [] labsdb1012 [] labsdb1011 [] labsdb1010 [] labsdb1009 [x] dbstore1003 [x] db1150 [x] db1145 [] db1144 [] db1130 [] db1124 [] db1113 [] db111...
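A note on the schema change tracked above (T268004): the exact DDL isn't quoted in the log, but based on the task title and MediaWiki's watchlist table, the per-replica change presumably renames the namespace_title index to something like wl_namespace_title over (wl_namespace, wl_title). A minimal sketch, with the index and column names, the placeholder wiki database, and the pymysql/.my.cnf setup all treated as assumptions:

```python
# Hypothetical sketch of the T268004 change, applied replica by replica with the
# statement kept out of the binlog (matching the per-host progress lists above).
# Index/column names are assumptions based on MediaWiki's watchlist schema.
import pymysql

DDL = """
ALTER TABLE watchlist
    DROP INDEX namespace_title,
    ADD INDEX wl_namespace_title (wl_namespace, wl_title)
"""

wiki_db = "testwiki"  # placeholder: run once per wiki database on the section
conn = pymysql.connect(read_default_file="/root/.my.cnf", database=wiki_db)
with conn.cursor() as cur:
    cur.execute("SET SESSION sql_log_bin = 0")  # keep the change off the binlog
    cur.execute(DDL)
conn.close()
```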
[06:54:43] 10Blocked-on-schema-change, 10DBA: Schema change for renaming namespace_title index on watchlist - https://phabricator.wikimedia.org/T268004 (10Marostegui)
[07:04:25] 10DBA, 10CheckUser: Monitor the growth of CheckUser tables at enwiki and few other very large wikis - https://phabricator.wikimedia.org/T267275 (10Marostegui)
[07:26:27] 10Blocked-on-schema-change, 10DBA: Schema change for renaming namespace_title index on watchlist - https://phabricator.wikimedia.org/T268004 (10Marostegui)
[07:27:10] 10Blocked-on-schema-change, 10DBA: Schema change for renaming namespace_title index on watchlist - https://phabricator.wikimedia.org/T268004 (10Marostegui)
[07:44:04] 10Blocked-on-schema-change, 10DBA: Schema change for renaming namespace_title index on watchlist - https://phabricator.wikimedia.org/T268004 (10Marostegui)
[08:01:52] marostegui: o/
[08:01:59] going to start es1024 reboot process now
[08:02:04] +1
[08:04:35] 10DBA, 10Operations, 10Patch-For-Review, 10Security, 10User-Kormat: Reboot es1024 (es5 master) - https://phabricator.wikimedia.org/T268469 (10Kormat)
[08:06:28] marostegui: looks like writes are disabled (i'm only seeing heartbeat changes in binlog)
[08:06:52] kormat: cool, let's stop mysql and leave it stopped for a few minutes to see how MW handles the "replication broken" issue on the slaves
[08:07:07] +1
[08:07:25] This is cool to pick also the report_host flag up
[08:07:31] yep :)
[08:07:41] 10Blocked-on-schema-change, 10DBA: Schema change for renaming namespace_title index on watchlist - https://phabricator.wikimedia.org/T268004 (10Marostegui) s2 progress [x] dbstore1004 [] db1146 [] db1129 [] db1122 [] db1105 [x] db1095 [] db1090 [] db1076 [] db1074
[08:09:37] Not seeing anything on logstash for now
[08:09:53] me neither, but that's mostly because it's timing out for me
[08:09:59] XD
[08:12:09] ok, so MW is now complaining
[08:12:28] but there are no fatals though
[08:13:01] https://logstash.wikimedia.org/goto/98e125366e4c8e4036e6711a29812a8f is whiny, yeah
[08:13:25] ok to proceed?
[08:13:37] Doesn't seem to be affecting requests https://grafana.wikimedia.org/d/000000180/varnish-http-requests?orgId=1&from=now-12h&to=now https://grafana.wikimedia.org/d/000000278/mysql-aggregated?orgId=1
[08:13:39] let's reboot
[08:14:05] 10DBA, 10Operations, 10Security, 10User-Kormat: Reboot es1024 (es5 master) - https://phabricator.wikimedia.org/T268469 (10Kormat)
[08:15:26] marostegui: I have two more schema changes, should I create their tickets now or wait until you go on vacation?
[08:15:41] Amir1: You can create them and then I will go on vacation and assign them to kormat
[08:16:10] 😭
[08:16:20] hahahaha
[08:18:52] es1024 is still waiting to reboot
[08:18:57] [* ] (4 of 5) A stop job is running for …d41792-part2 (4min 24s / no limit)
[08:19:57] odd
[08:20:04] i even umounted /srv before rebooting
[08:20:13] oh, i wonder if that's swap
[08:20:21] Yeah, now it showed sda2
[08:20:43] ok. maybe next time i'll do `swapoff -a` first too
[08:20:51] +1
[08:24:48] should I issue a reboot from the idrac?
[08:24:52] it's been 10 minutes
[08:25:36] sigh. yeah, probably.
[08:26:09] ok, doing it
[08:29:21] kormat: OS booting up
[08:29:47] Host up
[08:29:54] phew
[08:33:30] 10DBA, 10Operations, 10Security, 10User-Kormat: Reboot es1024 (es5 master) - https://phabricator.wikimedia.org/T268469 (10Kormat)
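Regarding the 08:06 check above (confirming that only heartbeat writes were reaching the es5 master's binlog before stopping mysqld): a minimal sketch of one way to sample that, assuming pymysql and admin credentials in /root/.my.cnf; this is an illustration, not the actual tooling used here.

```python
# Snapshot the binlog position, wait, then list only the events written in that
# window so you can eyeball whether anything besides heartbeat.heartbeat is still
# being written. Hostname and credentials file are placeholders.
import time
import pymysql

conn = pymysql.connect(host="es1024.eqiad.wmnet",
                       read_default_file="/root/.my.cnf",
                       cursorclass=pymysql.cursors.DictCursor)
with conn.cursor() as cur:
    cur.execute("SHOW MASTER STATUS")
    status = cur.fetchone()
    time.sleep(10)  # anything written during this window shows up below
    # (assumes the binlog does not rotate during the sleep)
    cur.execute("SHOW BINLOG EVENTS IN %s FROM %s",
                (status["File"], status["Position"]))
    for event in cur.fetchall():
        # with row-based replication the Table_map events name the table written
        print(event["Event_type"], event["Info"])
conn.close()
```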
[08:33:39] `db-replication-tree` is happy
[08:33:52] the fuck
[08:36:07] marostegui: why would es1024 come back up read-only?
[08:36:28] cause we have read_only enabled by default
[08:36:42] as a way to prevent split brains, consistency issues etc
[08:36:50] ohh, i see
[08:36:53] ie: a master crashing all of a sudden, it might come back broken
[08:36:58] so better to double check before allowing writes
[08:37:12] I see the page was more or less expected
[08:37:25] * sobanski goes back to shopping
[08:37:34] sobanski: more: expected by marostegui. less: unexpected by me. ;)
[08:37:45] kormat: did you downtime the whole host? or it just expired?
[08:37:58] marostegui: i downtimed the whole section, but for 30mins
[08:38:08] aaaah right
[08:38:11] which seemed like plenty of time to reboot one host :/
[08:38:22] yeah .(
[08:38:57] ok, icinga is green again,
[08:39:02] time to revert the MW change
[08:39:21] sweet
[08:40:48] 10DBA, 10Operations, 10Security, 10User-Kormat: Reboot es1024 (es5 master) - https://phabricator.wikimedia.org/T268469 (10Kormat)
[08:43:44] 10DBA, 10Operations, 10Security, 10User-Kormat: Reboot es1024 (es5 master) - https://phabricator.wikimedia.org/T268469 (10Kormat)
[08:48:06] to be fair, this is the one time where things were properly documented: :-) https://wikitech.wikimedia.org/wiki/MariaDB/troubleshooting#Master_comes_back_in_read_only
[08:49:13] 10DBA, 10Data-Services, 10Patch-For-Review, 10cloud-services-team (Kanban): Productionize clouddb10[13-20] - https://phabricator.wikimedia.org/T267090 (10Marostegui)
[08:51:00] 10DBA, 10Data-Services, 10Patch-For-Review, 10cloud-services-team (Kanban): Productionize clouddb10[13-20] - https://phabricator.wikimedia.org/T267090 (10Marostegui)
[08:51:54] 10DBA, 10Data-Services, 10Patch-For-Review, 10cloud-services-team (Kanban): Productionize clouddb10[13-20] - https://phabricator.wikimedia.org/T267090 (10Marostegui) >>! In T267090#6647511, @Marostegui wrote: > On-going transfers: > > db1125:3317 -> clouddb1014:3317 > db1125:3317 -> clouddb1018:3317 This...
[09:01:16] 10DBA, 10Data-Services, 10Patch-For-Review, 10cloud-services-team (Kanban): Productionize clouddb10[13-20] - https://phabricator.wikimedia.org/T267090 (10Marostegui)
[09:05:55] 10DBA, 10Operations, 10Security, 10User-Kormat: Reboot es1024 (es5 master) - https://phabricator.wikimedia.org/T268469 (10Kormat)
[09:06:18] 10DBA, 10Operations, 10Security, 10User-Kormat: Reboot es1024 (es5 master) - https://phabricator.wikimedia.org/T268469 (10Kormat) 05Open→03Resolved Completed.
[09:19:14] 10DBA, 10Data-Services, 10Patch-For-Review, 10cloud-services-team (Kanban): Productionize clouddb10[13-20] - https://phabricator.wikimedia.org/T267090 (10Marostegui)
[09:21:07] jynus: re: tox & comments, what is your local tox version?
[09:21:41] v0.6 has already been "released", so your proposed PR would not be included
[09:36:15] 10DBA, 10Data-Services, 10Patch-For-Review, 10cloud-services-team (Kanban): Productionize clouddb10[13-20] - https://phabricator.wikimedia.org/T267090 (10Marostegui)
[09:46:40] 10DBA, 10Operations, 10Orchestrator, 10Patch-For-Review: Base replication lag detection on heartbeat - https://phabricator.wikimedia.org/T268316 (10Kormat) 05Open→03Resolved a:03Kormat The orchestrator config change has been deployed, and the heartbeat tables for pc{1,2,3} have been cleaned up. Other...
[09:51:07] 10DBA, 10Operations, 10Orchestrator, 10Patch-For-Review: Base replication lag detection on heartbeat - https://phabricator.wikimedia.org/T268316 (10Marostegui) >>! In T268316#6647820, @Kormat wrote: > The orchestrator config change has been deployed, and the heartbeat tables for pc{1,2,3} have been cleaned...
[09:53:18] 10DBA, 10Operations, 10Orchestrator, 10Patch-For-Review: Base replication lag detection on heartbeat - https://phabricator.wikimedia.org/T268316 (10Kormat) I've tested it in pontoon — stopping heartbeat on the master causes immediate lag to show up for the entire tree.
[09:53:50] 10DBA, 10Operations, 10Orchestrator, 10Patch-For-Review: Base replication lag detection on heartbeat - https://phabricator.wikimedia.org/T268316 (10Marostegui) \o/
[09:54:24] 10DBA, 10Orchestrator, 10Patch-For-Review: Cleanup heartbeat.heartbeat on all production instances - https://phabricator.wikimedia.org/T268336 (10Marostegui)
[09:54:51] 10DBA, 10Operations, 10Orchestrator, 10Patch-For-Review: Base replication lag detection on heartbeat - https://phabricator.wikimedia.org/T268316 (10Kormat) (and re-starting heartbeat makes the lag disappear ~instantly)
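For context on T268316 above: heartbeat-based lag is derived from the timestamp pt-heartbeat keeps writing to heartbeat.heartbeat on the master rather than from Seconds_Behind_Master, which is why stopping and restarting heartbeat moves the reported lag immediately. A rough sketch of the computation, assuming the stock pt-heartbeat ts column and UTC timestamps (the production table also carries extra WMF-specific columns), with the hostname and credentials file as placeholders:

```python
# Rough illustration of heartbeat-based lag (T268316), not the orchestrator or
# pt-heartbeat code itself. Assumes ts is an ISO-8601 timestamp the master's
# heartbeat process updates roughly every second, written in UTC.
import datetime
import pymysql

conn = pymysql.connect(host="db1089.eqiad.wmnet", read_default_file="/root/.my.cnf")
with conn.cursor() as cur:
    # In production you would select the row written by the section's current
    # master (e.g. by server_id), not just the newest row.
    cur.execute("SELECT MAX(ts) FROM heartbeat.heartbeat")
    (ts,) = cur.fetchone()
conn.close()

written = datetime.datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%f")
lag = (datetime.datetime.utcnow() - written).total_seconds()
print("replica lag: %.2fs" % lag)  # the 02:31 alert above fires on a sustained >= 2s
```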
[09:55:48] jynus: also we should discuss how to deploy the wmfmariadbpy/wmfbackups updates
[09:57:25] ok
[09:59:00] did you see https://gerrit.wikimedia.org/r/c/operations/software/wmfmariadbpy/+/643287 ?
[09:59:20] yes. that's the "proposed PR" i referred to above.
[09:59:35] sorry, I missed that
[09:59:37] let me read it
[10:00:04] 3.7.0
[10:00:14] hmm, ok. i'm on 3.13.2
[10:00:25] so i'm guessing they 'fixed' the issue in later versions
[10:00:49] i'm happy to merge your PR, but it won't be in v0.6. is that good enough for now?
[10:00:52] are you sure? I checked bug reports and all seems closed as "won't fix"
[10:01:02] jynus: i'm sure that i'm on 3.13.2 and that i don't have this issue, yes.
[10:01:17] ok
[10:01:31] CI doesn't seem to have the issue either
[10:01:32] doesn't really affect me, as long as the 0.6 packages work
[10:01:47] ok
[10:03:14] so I say we upload all packages and then upgrade simultaneously on all related hosts
[10:09:37] jynus: alright. when suits you?
[10:09:43] now?
[10:09:48] WFM
[10:12:17] wmfmariadbpy packages built and uploaded to apt. i'll do a test on cumin2001
[10:12:40] I just pushed v0.4 on my side
[10:12:45] about to upload the packages
[10:14:18] `ModuleNotFoundError: No module named 'wmfmariadbpy.dbutil'`
[10:14:21] i need to fix a package
[10:14:25] oh
[10:14:35] should I stop uploading?
[10:14:43] no, doesn't matter
[10:14:48] ok
[10:14:49] just don't deploy it anywhere yet :)
[10:14:53] ok
[10:14:54] thanks
[10:20:00] kormat: I am going to test my 0.4 on alerts hosts, as it (-check) doesn't depend on wmfmariadbpy there
[10:20:14] alright
[10:23:00] grr. looks like i actually need to do a new release (v0.6.1 i guess)
[10:23:07] this shouldn't affect you, jynus
[10:23:11] yeah
[10:23:19] as long as 0.6 is not available for install
[10:23:29] the dependency is >0.6
[10:30:19] oh, my fault
[10:31:15] I checked the dependency of wmfbackups-check, but didn't realize it depended via python3-wmfbackups
[10:31:20] so I have to revert
[10:39:30] I also just realized that backups of es hosts increased by 10% compared to last week
[10:40:13] +100GB in a week per shard
[10:43:35] jynus: v0.6.1 released and uploaded. testing on cumin2001 now
[10:43:46] cool, waiting for confirmation
[10:44:57] issue fixed
[10:45:45] marostegui, kormat check this out, significant increase in size on es hosts last week: https://grafana.wikimedia.org/d/000000377/host-overview?viewPanel=28&orgId=1&refresh=5m&var-server=es2020&var-datasource=thanos&var-cluster=misc&from=1603709091889&to=1606301091889
[10:47:22] nothing to worry about, but to have it in mind to see if it is a one time thing or continues, for hardware provisioning
[10:47:49] Looking at the last 6 months, it's clearly over the trend line starting around November 17th
[10:48:48] we are at 15% capacity, so not an immediate concern, maybe just elections got more activity on wikis?
[10:49:39] jynus: i'm ready to deploy wmfmariadbpy everywhere
[10:49:51] although I don't see a huge correlation with edit activity: https://grafana.wikimedia.org/d/000000208/edit-count?orgId=1&refresh=5m&from=now-30d&to=now
[10:49:56] kormat: go on
[10:50:34] the only issues I think would be on cumin and dbprov
[10:50:45] jynus: done
[10:51:38] ah, good, not sure if you did manually install the package or got upgraded automatically when upgrading dependencies?
[10:52:12] oh, no, it didn't get upgraded automatically
[10:52:33] i used debdeploy
[10:53:38] ok, cumin upgraded
[10:53:54] going for dbprov now
[10:54:01] https://phabricator.wikimedia.org/P13406
[10:55:37] note wmfmariadbpy-common didn't get updated
[10:55:43] not sure if relevant
[10:55:53] the paste says it did?
[10:56:00] on dbprov hosts
[10:56:02] do you have an example host?
[10:56:14] yeah, dbprov1001: apt install python3-wmfbackups
[10:56:15] https://phabricator.wikimedia.org/P13406$11
[10:56:31] The following packages will be upgraded:
[10:56:32] python3-wmfbackups python3-wmfmariadbpy wmfmariadbpy-common
[10:57:00] fyi
[10:58:22] I also get "pkg_resources.DistributionNotFound: The 'python3-wmfmariadbpy>=0.6' distribution was not found and is required by wmfbackups"
[10:58:28] but that could be my mistake
[10:58:48] jynus: ah. puppet doesn't install wmfmariadbpy-common on dbprov1001, afaics
[10:59:53] the funny thing is python3-wmfmariadbpy 0.6.1 is installed
[11:00:11] hmm. debdeploy claimed dbprov1001 was up to date
[11:00:18] it is
[11:00:35] (I manually updated it)
[11:01:02] _was_. 10 minutes ago
[11:01:45] or is your report that packages didn't get updated from before i reported that i had deployed the updates?
[11:02:10] no, that is after you said it was upgraded
[11:02:28] I didn't touch things before, except alert* hosts
[11:02:34] ok. your "it is" was very misleading then
[11:02:43] it is as in, it is now
[11:02:49] packages installed with dpkg -i don't get reported to debmonitor in real time (there's no hook in dpkg as we use it for apt), those get reconciled by a daily system timer
[11:03:04] moritzm: i haven't looked at debmonitor
[11:03:31] ah, ok
[11:03:38] it looks like debdeploy skipped some hosts, and just assumed they were fine?
[11:04:08] it only updates to the version specified in the YAML file, so if that one was already installed, it simply skips these hosts
[11:04:32] moritzm: it was not already installed
[11:05:00] 5min after i ran debdeploy:
[11:05:04] `2020-11-25 10:55:12 upgrade wmfmariadbpy-common:amd64 0.4+deb9u1 0.6.1+deb9u1`
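The `2020-11-25 10:55:12 upgrade wmfmariadbpy-common:amd64 ...` line quoted above is the host's /var/log/dpkg.log record of the upgrade. When debdeploy or debmonitor reports look off, that log is the local ground truth; a small sketch for listing recent upgrades matching a package prefix, assuming the standard dpkg.log field layout:

```python
# Quick local check: list dpkg 'upgrade' entries for a given package prefix.
# Assumes the standard /var/log/dpkg.log line format:
#   DATE TIME ACTION PACKAGE:ARCH OLD-VERSION NEW-VERSION
import sys

prefix = sys.argv[1] if len(sys.argv) > 1 else "wmf"

with open("/var/log/dpkg.log") as log:
    for line in log:
        fields = line.split()
        if len(fields) >= 6 and fields[2] == "upgrade" and fields[3].startswith(prefix):
            date, time, _, pkg, old, new = fields[:6]
            print(date, time, pkg, old, "->", new)
```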
[11:05:38] i think i need to file a bug
[11:07:27] which host is that? I'm out for an errand, will have a look when I'm back
[11:07:35] dbprov1001
[11:07:41] k
[11:09:13] moritzm: i filed https://phabricator.wikimedia.org/T268735, and added you on CC
[11:10:00] kormat: please sanity check this- I think the wmfbackups require "python3-wmfmariadbpy>=0.6" is not working for 0.6.1
[11:10:33] that would be surprising. i did some testing with how it resolves versions
[11:10:33] As I get: pkg_resources.DistributionNotFound: The 'python3-wmfmariadbpy>=0.6' distribution was not found and is required by wmfbackups
[11:10:43] but 0.6.1 is installed
[11:10:57] can you give me an example host + command to reproduce the issue?
[11:11:27] run "backup-mariadb --help" on dbprov2003
[11:12:06] wait
[11:12:12] jynus: how sure are you that your package build was clean?
[11:12:30] because a previous PS in your CR referred to python3-wmfmariadbpy in setup.py
[11:12:36] well, I tested it with 0.6
[11:12:51] not with 0.6.1, all with my own build
[11:15:21] I can create a new package
[11:16:22] checking something
[11:16:51] I have edited manually the requires.txt file and it still doesn't work
[11:16:56] so may be something else
[11:17:02] yeah, it looks like `/usr/lib/python3/dist-packages/wmfbackups-0.4.egg-info/requires.txt` contains garbage
[11:17:05] pkg_resources.DistributionNotFound: The 'python3-wmfmariadbpy>=0.6.1' distribution was not found and is required by wmfbackups
[11:17:11] if i re-make the package locally, it contains:
[11:17:15] `wmfmariadbpy@ git+https://gerrit.wikimedia.org/r/operations/software/wmfmariadbpy@v0.6`
[11:17:28] so my theory is that you didn't build the package from the right commit
[11:18:08] yeah, I can check if that helps
[11:18:10] the `python3-` prefix is a giveaway, that's a debian package name, not a python package name
[11:18:22] oh, of course
[11:18:25] then it is my fault
[11:18:32] as in, not a build issue
[11:18:37] I merged the wrong commit
[11:19:39] jynus: HEAD on the wmfbackups repo is correct
[11:19:57] yeah, I didn't rebuild after merge
[11:20:25] I build https://gerrit.wikimedia.org/r/c/operations/software/wmfbackups/+/643236/3/setup.py
[11:20:41] *built
[11:21:10] yeah. that's what i was suggesting with
[11:21:17] 12:12:30 because a previous PS in your CR referred to python3-wmfmariadbpy in setup.py
[11:21:53] I will create a new package version (debian version) over the same code release
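On the pkg_resources.DistributionNotFound error being debugged above: the entry-point wrappers that setuptools generates re-resolve the declared requirements (the egg-info requires.txt mentioned at 11:17) at startup, so a bogus name there breaks `backup-mariadb --help` even though the code itself is installed. A quick sketch for inspecting what an installed distribution declares, using the standard pkg_resources API:

```python
# Inspect what the installed wmfbackups distribution actually declares; this is
# roughly the same resolution the generated entry-point wrapper performs, which
# is what raises DistributionNotFound for the bad 'python3-wmfmariadbpy' name.
import pkg_resources

dist = pkg_resources.get_distribution("wmfbackups")
print(dist.project_name, dist.version, dist.location)
for req in dist.requires():
    print("  requires:", req)

pkg_resources.require("wmfbackups")  # raises DistributionNotFound if unsatisfiable
```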
[11:23:08] i wonder how it was working before, then?
[11:23:14] as in, for me
[11:33:49] I see why I made the mistake
[11:34:37] because I didn't have the package for 0.6 yet, I built it myself, and it "worked on my machine" but I didn't test it on a clean install
[11:35:03] and this was before the release, so I forgot to rebuild it later
[11:47:14] my main issue is that I thought I had written wmfbackups and not python3-wmfbackups, and that that would create the same packages
[11:50:04] 10DBA, 10Data-Services, 10Patch-For-Review, 10cloud-services-team (Kanban): Productionize clouddb10[13-20] - https://phabricator.wikimedia.org/T267090 (10Marostegui)
[11:51:22] everything seems to work on dbprov2003, I will fix the mess I made with stretch now
[11:54:02] alert* hosts (backup monitoring) is also back to normal
[11:57:25] cool
[11:58:55] 10DBA, 10Data-Services, 10Patch-For-Review, 10cloud-services-team (Kanban): Productionize clouddb10[13-20] - https://phabricator.wikimedia.org/T267090 (10Marostegui)
[12:05:03] 10DBA, 10Orchestrator, 10User-Kormat: Enable report_host for mariadb - https://phabricator.wikimedia.org/T266483 (10Marostegui) es5 eqiad [] es1025 [x] es1024 [] es1023
[12:05:28] 10DBA, 10Orchestrator, 10User-Kormat: Enable report_host for mariadb - https://phabricator.wikimedia.org/T266483 (10Marostegui)
[12:25:57] ok, everything that should be upgraded is upgraded now: https://debmonitor.wikimedia.org/packages/python3-wmfbackups
[12:37:55] 10DBA: Test upgrading sanitarium hosts to Buster + 10.4 - https://phabricator.wikimedia.org/T268742 (10Marostegui)
[12:38:06] 10DBA: Test upgrading sanitarium hosts to Buster + 10.4 - https://phabricator.wikimedia.org/T268742 (10Marostegui) p:05Triage→03Medium
[12:38:28] 10DBA: Test upgrading sanitarium hosts to Buster + 10.4 - https://phabricator.wikimedia.org/T268742 (10Marostegui)
[12:38:34] 10DBA, 10Data-Services, 10Patch-For-Review, 10cloud-services-team (Kanban): Productionize clouddb10[13-20] - https://phabricator.wikimedia.org/T267090 (10Marostegui)
[12:38:36] 10DBA, 10DC-Ops, 10Operations, 10ops-eqiad: (Need By: 2020-11-29) rack/setup/install db11[51-76] - https://phabricator.wikimedia.org/T267043 (10Marostegui)
[12:38:38] 10DBA, 10Epic, 10Patch-For-Review: Upgrade WMF database-and-backup-related hosts to buster - https://phabricator.wikimedia.org/T250666 (10Marostegui)
[12:54:15] to answer my question based on package install, there are 141 installations of 10.4 and 79 of 10.1
[12:54:50] but that doesn't take into account multi-instance, or hosts to be decommed, etc.
[13:02:23] 10Blocked-on-schema-change, 10DBA: Schema change for renaming namespace_title index on watchlist - https://phabricator.wikimedia.org/T268004 (10Marostegui)
[14:02:37] 10Blocked-on-schema-change, 10DBA: Schema change for renaming namespace_title index on watchlist - https://phabricator.wikimedia.org/T268004 (10Marostegui)
[14:06:24] 10Blocked-on-schema-change, 10DBA: Schema change for renaming namespace_title index on watchlist - https://phabricator.wikimedia.org/T268004 (10Marostegui) s4 progress [x] dbstore1004 [x] db1150 [x] db1149 [x] db1148 [x] db1147 [x] db1146 [] db1145 [] db1144 [] db1143 [] db1142 [] db1141 [] db1138 [] db1121...
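The naming mix-up jynus describes above (11:47) is worth spelling out: install_requires in setup.py must name the Python distribution (wmfmariadbpy), while python3-wmfmariadbpy is only the Debian binary package built from it, so pkg_resources can never satisfy that spelling. A minimal illustration, not the project's actual setup.py (which, per 11:17, pins the dependency via a gerrit git URL):

```python
# Minimal illustration only (not the real wmfbackups setup.py): install_requires
# takes Python distribution names, so the Debian-style name is unresolvable.
from setuptools import setup, find_packages

setup(
    name="wmfbackups",
    version="0.4",
    packages=find_packages(),
    install_requires=[
        "wmfmariadbpy>=0.6",            # correct: the Python project name
        # "python3-wmfmariadbpy>=0.6",  # wrong: Debian binary package name,
                                        # which pkg_resources can never satisfy
    ],
)
```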
[14:07:51] 10Blocked-on-schema-change, 10DBA: Schema change for renaming namespace_title index on watchlist - https://phabricator.wikimedia.org/T268004 (10Marostegui)
[14:24:16] 10Blocked-on-schema-change, 10DBA: Schema change for renaming namespace_title index on watchlist - https://phabricator.wikimedia.org/T268004 (10Marostegui)
[15:52:49] sobanski: the invite issue was due to me joining on my private gmail account without realising
[15:53:14] Aha!
[15:54:46] (my usual flow has me joining from calendar, but as Somebody didn't invite me, that wasn't the case today)
[15:55:13] o_O
[15:55:22] How did that happen? I don't know.
[15:55:48] sobanski: i believe you. if you knew, i'm sure the person responsible would already have been fired
[15:55:55] Google knows I speak ill of it and adds these little glitches to make me look incompetent
[15:56:05] seems likely
[15:56:14] (is my usual line of defense)
[16:31:40] https://i.imgflip.com/4ntisr.jpg
[16:35:39] Did you poke them with the Fing-Longer?
[16:36:18] (https://futurama.fandom.com/wiki/Fing-Longer, for convenience)
[18:29:43] 10DBA, 10Operations, 10Patch-For-Review, 10User-Kormat, 10User-jbond: Standardize/centralize mapping from section to mariadb port/socket and prom-mysql-exporter port - https://phabricator.wikimedia.org/T257033 (10jcrespo) @kormat: My patches solved some of the issues above, even if one can improve over i...
[20:57:49] 10DBA, 10Operations, 10ops-eqiad: db1139 memory errors on boot (issue continues after board change) 2020-08-27 - https://phabricator.wikimedia.org/T261405 (10Jclark-ctr) @jcrespo just heard back from HP they want to remap dimm again We see that the original issue with the 3 DIMMs is now cleared. However,...
[21:01:19] 10DBA, 10Community-Tech, 10Expiring-Watchlist-Items: Watchlist Expiry: Release plan [rough schedule] - https://phabricator.wikimedia.org/T261005 (10ifried) @Marostegui No problem! In that case, are we good to go for our release on 12/1? Or shall I check in the day before, on 11/30? You can follow our relea...