[10:31:09] Hi! FYI I've added a namespace template variable to https://grafana-rw.wikimedia.org/d/pz5A-vASz/kubernetes-resources, so you can see the requests/limits per namespace over time
[10:47:24] next SIG is on Earth Day, I'll try to reschedule it by a week
[10:48:33] brouberol: This is just for the Total Ram and Total CPUs panels?
[11:00:00] looks like it, I'll update the description to make it clear and switch the metric to kube_namespace_created, as its cardinality is way lower and thus it's a ton faster
[12:19:26] akosiaris: I tried using `kubernetes_namespace`, but IIRC all I saw was `kube-metrics`
[12:19:47] and correct, it was just for the total ram and total cpus, as they are the only ones in which we see requests
[15:00:08] brouberol: kube-system probably? not kube-metrics?
[15:00:31] but yes, the kubernetes_namespace label is IIRC populated by helm and it's the namespace the workload is deployed in
[15:00:46] e.g. kube_namespace_created is populated by kube-state-metrics
[15:00:57] every metric has a kubernetes_namespace and a namespace label
[15:01:10] the former being part of the helm deployment, the latter being the actual value we care about
[15:01:23] that's right, it was kube-system
[15:01:24] my bad
[15:08:27] understood, thanks for the clarification!
[15:21:18] hey folks, I've added a procedure to move live services to Istio ingress in https://phabricator.wikimedia.org/T391457
[15:21:26] at least, this is what I've used for citoid
[15:21:50] there are a couple of question marks about the cleanup steps, lemme know if you have opinions
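
For context on the 11:00:00 message: a minimal sketch of what the dashboard's namespace template variable might look like, assuming a Grafana `label_values()` variable query against a Prometheus datasource. `kube_namespace_created` is one series per namespace, so it is much cheaper to enumerate than a per-container metric; the exact variable query actually used on the dashboard is not shown in the log.

```
# Hypothetical Grafana template-variable query (Prometheus datasource):
# populate the namespace variable from kube_namespace_created, which has
# exactly one series per namespace (very low cardinality, fast to query).
label_values(kube_namespace_created, namespace)

# Deriving the same list from a per-container metric would work too, but
# that series set is far larger (one series per container per resource),
# which is the "cardinality is way lower and thus a ton faster" point above.
label_values(kube_pod_container_resource_requests, namespace)
```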
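
And for the 15:00-15:01 exchange about the two labels: a sketch of what a "Total CPUs"-style requests panel could look like, assuming the kube-state-metrics metric `kube_pod_container_resource_requests` and a template variable named `$namespace` (both are illustrative here; the dashboard's real queries may differ). The key point from the chat is to group by the `namespace` label (where the workload actually runs), not `kubernetes_namespace` (which on these metrics reflects the helm deployment of the exporter, e.g. kube-system).

```
# Requested CPU per namespace over time (sketch, not the actual panel query).
# Group by the kube-state-metrics "namespace" label, i.e. the namespace the
# pod runs in; "kubernetes_namespace" is the label added by the helm
# deployment of the exporter and is not the value we care about here.
sum by (namespace) (
  kube_pod_container_resource_requests{resource="cpu", namespace=~"$namespace"}
)

# Same idea for requested memory (bytes):
sum by (namespace) (
  kube_pod_container_resource_requests{resource="memory", namespace=~"$namespace"}
)
```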