[11:11:11] Hi folks, thanos is running out of disk space again, I'm afraid. I'd normally tag g.odog on T351927 and he'd tweak retention a bit to free up space, but he's on sabbatical. Can someone here take care of it, please?
[11:11:12] T351927: Decide and tweak Thanos retention - https://phabricator.wikimedia.org/T351927
[11:11:22] I'm afraid I can only apologise that the new hardware is still not here :(
[11:49:53] tappof: Any opinion on this solution for routing alerts for the WMCS team: https://gerrit.wikimedia.org/r/c/operations/alerts/+/1087434
[11:50:29] It seems a little silly to duplicate the alerting rule; on the other hand, it does exactly what I want :-)
[12:05:47] Ah, tappof is out... Who's next in line for Alertmanager :-)
[13:38:15] slyngs: I was hoping that instead of duplicating the alert, there would be '$some' mechanism that, for wmcs hosts, automagically attaches the `team=wmcs` tag. Otherwise we would need to duplicate pretty much all the alerts?
[13:39:01] arturo: Yeah, I've asked o11y for advice, because that's potentially a lot of duplication
[13:40:44] ack
[13:45:40] hi folks, t.appof is out this week and next; i’ll check in with folks on this side of the world as they’re available
[13:46:17] lmata: DYK who is taking care of thanos retention whilst g.odog is out, pls?
[13:48:47] this was t.appof but will ask herron to take a look :)
[13:52:25] TY :)
[13:56:45] lmata: Thanks, no rush, we temporarily put back the Icinga monitoring
[14:53:11] Emperor: forgot today is the US election, so maybe tomorrow - is that ok?
[14:58:19] I should think so, yes.
[15:02:12] cool, thanks for the heads up!
[15:12:39] Emperor: Looking into it.
[22:40:46] hey 0lly, should I be able to see metrics data older than ~1 year in Thanos downsample-1h? I'm not finding anything; I tried some simple queries like `rate(node_load15{instance="elastic1094:9100"}[15m])` to no avail
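For context on the "$some mechanism" arturo asks about above: Prometheus can attach a label to outgoing alerts without duplicating any alerting rules, via `alert_relabel_configs`. A minimal sketch, assuming (purely for illustration) that WMCS hosts can be matched on the `instance` label with a `cloud.*` prefix; the actual host-naming pattern and whether this fits WMF's layered setup are assumptions:

```yaml
# prometheus.yml fragment (illustrative): attach team=wmcs to every alert
# whose instance label matches the assumed WMCS host naming, so per-team
# copies of the alerting rules are not needed.
alerting:
  alert_relabel_configs:
    - source_labels: [instance]
      regex: 'cloud.*'       # assumed WMCS host pattern, not verified
      target_label: team
      replacement: wmcs
```

Alertmanager routing on `team: wmcs` then works the same as if the label had been set on the rule itself.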
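On the retention tweak and the downsample-1h question above: Thanos retention is set per resolution on the compactor, and data older than the `--retention.resolution-1h` window is deleted even from the downsampled store, which would explain not seeing metrics past ~1 year. A hedged sketch of the relevant flags; the paths and duration values here are illustrative, not WMF's actual settings:

```
# thanos compact invocation fragment (values are illustrative assumptions)
thanos compact \
  --data-dir=/var/lib/thanos-compact \
  --objstore.config-file=/etc/thanos/objstore.yaml \
  --retention.resolution-raw=30d \
  --retention.resolution-5m=180d \
  --retention.resolution-1h=1y
```

Shortening these windows is the usual way to free object-storage space when new hardware is delayed.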