[10:42:16] Amir1: you are going to love what I found
[10:42:57] oh god what now
[10:46:57] Amir1: https://phabricator.wikimedia.org/T289050
[10:47:31] 🤬 🤬 🤬 🤬
[10:48:28] And as a gift: https://phabricator.wikimedia.org/P17036
[10:49:23] wtf is user2?
[10:50:15] that's mysql
[10:50:37] this will help https://phabricator.wikimedia.org/P17036#87231
[10:50:57] it looks like some ancient testing
[10:51:06] oh okay, one less thing to be grumpy about
[10:51:16] I am going to kill that table right now
[10:51:29] it just has one row, related to browne.wikimedia.org
[10:55:27] for tracking: https://phabricator.wikimedia.org/T289051
[10:56:35] "User: toolserver"
[10:56:43] We migrated from toolserver in 2014
[10:57:38] Amir1: user2 is very obviously twice as good as user1
[10:58:26] lol
[10:58:27] It is also present on s1 and s2, although it is empty there
[10:58:37] It is not on any of the other sections
[10:58:38] we have querycachetwo (I'm not joking)
[10:58:53] T221449
[10:58:54] T221449: Redesign querycache* tables - https://phabricator.wikimedia.org/T221449
[10:59:04] > querycachetwo? seriously?
[11:08:05] Still better than querycache_tmp or querycache_tmp_do_not_use_in_prod
[11:39:31] * Amir1 PTSD kicks in
[13:25:35] marostegui: "not sure if she has other different ones or not." - of _course_ she does. who do you think i am? :P
[13:26:23] Emperor: nice work on the db2121 reimaging this morning! :)
[13:26:37] Emperor: can you please update the task description to mark that step as done?
[13:30:51] done
[13:31:05] Emperor: sweet, thanks :)
[13:36:41] sobanski: do we have a task for decommissioning the old pc* hardware?
i'm not finding anything
[13:37:12] I don’t think we do
[13:37:48] I can create one later
[13:40:19] kormat: assigned you https://gerrit.wikimedia.org/r/c/operations/puppet/+/713469 to review
[13:40:28] (which was a TODO from this morning)
[13:40:41] Emperor: +1'd, cheers
[13:41:31] (this gerrit "get someone to review and +1, then merge yourself" workflow is still a bit strange to me)
[13:55:51] some repos, notably mediawiki-related ones, also use "get someone to review and +2, which will auto-merge after tests pass", but with puppet and related repos you want the author to merge+deploy it themselves instead of it being merged immediately after review
[13:59:06] at $LASTJOB the author was always the one who merged, if they had access. we settled on that after a few occurrences of an author figuring out an issue with their change seconds after a reviewer had committed it
[14:29:38] marostegui: i've done terrible things to pc2, i think
[14:30:00] what happened?
[14:30:12] i mean, who can say, in this world of ours
[14:30:22] but when i did the changes to the replication tree,
[14:30:37] i somehow am ending up with stale entries in heartbeat.heartbeat that i can't get rid of
[14:31:00] oh
[14:31:49] pc2012 keeps switching between 0s behind master and 86k seconds
[14:31:49] the one for pc2008?
[14:31:51] which is nice
[14:31:54] yeah
[14:32:03] i.. did not do reset slave all.
[14:32:04] i'm bad.
[14:32:15] let me re-do this
[14:32:24] ok, I will not touch anything for now then
[14:34:10] aaand fixed. sigh.
[14:34:41] \o/
[14:34:58] I think I am reading images at 10MB/s, which should not be too impactful (swift has 200MB/s of reads per proxy)
[14:35:32] marostegui: ok, shall i do pc3 now as well, and get it out of the way?
[14:35:40] works for me!
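[Editor's note: a minimal sketch of the lag flip-flop discussed at 14:30-14:34. pt-heartbeat-style monitoring keeps one row per master server_id in heartbeat.heartbeat and computes lag as "now minus that row's timestamp"; the table layout, server_ids, and the monitoring behaviour below are assumptions for illustration, not WMF's actual tooling.]

```python
from datetime import datetime, timedelta

NOW = datetime(2021, 8, 18, 14, 30)

# Hypothetical heartbeat.heartbeat contents on pc2012 after the replication
# tree was changed without RESET SLAVE ALL: a fresh row from the current
# master plus a stale one left behind by the old master (pc2008).
heartbeat_rows = {
    2012: NOW - timedelta(seconds=1),       # current master: fresh
    2008: NOW - timedelta(seconds=86_400),  # old master: stale, ~1 day old
}

def lag_for(server_id: int) -> float:
    """Seconds behind master, judged from that server's heartbeat row."""
    return (NOW - heartbeat_rows[server_id]).total_seconds()

# A check that alternates over which row it treats as "the master" flips
# between ~0s and ~86k seconds, exactly the symptom seen on pc2012:
print(lag_for(2012))  # prints 1.0
print(lag_for(2008))  # prints 86400.0

# The fix from the log: redo the move cleanly (RESET SLAVE ALL on the
# repointed replica) so the stale row goes away, roughly equivalent to
# DELETE FROM heartbeat.heartbeat WHERE server_id = 2008;
del heartbeat_rows[2008]
assert all(lag_for(s) < 5 for s in heartbeat_rows)
```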
[14:35:57] i'm sure it does
[14:37:26] marostegui: i don't even want to think about the nightmare of changing a primary in a multi-dc setup
[14:37:36] y'know, when we actually _care_ about the data
[14:38:03] keep in mind that the multi-dc will be only for reads (for now)
[14:41:29] pc3.. done?
[14:42:00] orchestrator looks good to me
[14:42:35] Do you want me to fix tendril for pc1, pc2 and pc3?
[14:43:28] if you would, please
[14:43:32] ok!
[14:46:35] ok, fixed; the topology is now showing everything in tendril's tree
[15:30:52] "save storage space with this easy trick" "HD manufacturers hate him" "try not to cry when you see how many redundant copies our storage has": https://phabricator.wikimedia.org/T262668#7288764
[15:50:17] Hi! Was there any change lately on thanos swift? I am getting this error on the last deployment of tegola, our maps tile server that uses swift, and it's the first time I see it: https://phabricator.wikimedia.org/P17039
[15:52:21] I think the right people are afk
[15:53:33] I learned a little bit of swift for mw, but know nothing about it for maps
[15:59:40] I'm afraid I've not yet looked at the S3-compatible layer on Swift.
[16:00:28] the strange thing is that if I look at the docs, tiles live on cassandra, not swift
[16:01:22] this is a staging server of our new stack we are migrating
[16:01:31] production is indeed on cassandra
[16:02:12] ok, sorry; so as you see, we are probably the people who know least about it 0:-), I am guessing your contact was filippo?
[16:02:27] (on sre?)
[16:03:55] yes, I think we've talked with filippo
[16:04:11] It's not urgent either way
[16:04:18] he may be afk for today, but he is in this channel
[16:04:32] and may be able to answer tomorrow
[16:04:48] sounds good, thanks!
[16:41:02] I think it might be related to the upgrade to bullseye on thanos. I filed a ticket here: https://phabricator.wikimedia.org/T289076
[16:42:14] 👍