[12:36:59] Flagging this for later: having the CI take 10+ minutes for an admin_ng change might be something we should try to optimize
[12:47:24] elukey: when you're around, would you mind helping btullis and me deploy istio updates on dse-k8s-eqiad? We're facing helm value issues atm
[12:47:26] thanks!
[13:36:52] brouberol: o/
[13:36:59] sorry I was afk and I didn't see the ping
[13:37:09] is it too late or are you still working on it?
[13:37:33] I’m out for a walk atm but we’ll need help
[13:37:50] *upgrading istio
[13:38:27] I think we need to change the config yaml file but I’m not sure if we can just copy the one we have in main or not
[13:38:57] Besides this, dse-k8s-eqiad is upgraded to 1.31!
[13:41:51] ah right, I can check; in theory copying main's one should be safe, modulo some sanity checks
[13:41:56] I am going to do it in a bit
[13:43:46] Cool, thanks. I'm also out to lunch, as it were. Back in a bit.
[14:09:05] Oh and btw, at no point did we delete/wipe etcd
[14:11:10] did I miss a "create updated envoy config" step in the documentation 🙈 - in that case please add something
[14:15:06] Or we missed it
[14:15:17] I’m back home. Let me check
[14:19:10] I'm not seeing anything ingress-related under https://wikitech.wikimedia.org/wiki/Kubernetes/Clusters/Upgrade/1.31#Required_patches
[14:27:00] brouberol, btullis - I'd try to use main's config.yaml, to be honest; you have a pretty basic config
[14:29:03] sounds great
[14:31:48] https://gerrit.wikimedia.org/r/c/operations/deployment-charts/+/1261467
[14:35:39] +1d it. I notice that we don't get any CI for this, presumably because of the custom deploy.
[14:38:50] Not super keen on line 73 `# TODO: Figure this out` but then I'm like that with quite a bit of istio :-)
[14:41:51] And we just apply it with `istioctl-1.24.2 apply dse-k8s-eqiad/config.yaml` as per https://wikitech.wikimedia.org/wiki/Kubernetes/Ingress#Istio_setup_and_configuration - is that correct?
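
The upgrade flow being discussed can be sketched as a short shell snippet. The wrapper name (`istioctl-1.24.2`) and config path are taken from the chat; the `kubectl` verification step at the end is an assumption on my part, not something the wikitech page is quoted as requiring. Since the real commands need cluster access, the sketch only prints them:

```shell
# Sketch of the istio upgrade sequence discussed above (assumed, not the
# verbatim runbook). Commands are echoed rather than executed because they
# require access to the dse-k8s-eqiad cluster.
ISTIOCTL=istioctl-1.24.2
CONFIG=dse-k8s-eqiad/config.yaml

# 1. Dry-run: render the manifest locally, touching nothing in the cluster.
echo "$ISTIOCTL manifest generate -f $CONFIG"

# 2. Apply the IstioOperator config, as per the wikitech Ingress docs.
echo "$ISTIOCTL apply $CONFIG"

# 3. Assumed follow-up: watch the ingressgateway pods roll to the new version.
echo "kubectl -n istio-system get pods -w"
```

Running the `manifest generate` step first gives a cheap sanity check of the config before anything is applied, which is what happened in the log below.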
[14:42:10] that's my understanding
[14:43:23] We would then see all of the ingressgateway pods restart with the new version, then we take the rest of the day off?
[14:43:48] my understanding as well
[14:44:36] OK, let's go for it.
[14:44:49] wee
[14:45:57] `istioctl-1.24.2 manifest generate -f ./dse-k8s/config.yaml` now works
[14:45:58] applying
[14:48:25] ✔ Installation complete
[14:49:04] Nice. Thanks.
[14:50:25] We noticed one interesting thing as well: during the upgrade, some (all?) of our nodes switched their `node.kubernetes.io/disk-type` label from `ssd` to `hdd` - when we know them to have only SSDs installed.
[14:50:52] We can create a ticket and look into this, but thought I'd share it in case anyone has any insight off the top of their head.
[14:55:42] Looks like it is probably related to this: https://github.com/wikimedia/operations-puppet/blob/production/modules/profile/manifests/kubernetes/node.pp#L105-L119 - we can come back to this, as it's probably not affecting anything.
[15:14:21] sorry, didn't read until now; yes, all correct for istio! nice!
[15:32:33] np!
[15:32:48] I'm pretty happy, and TBH pretty relieved that this is over
[15:46:29] yep, really nice job folks!
[15:49:32] thanks to y'all for a seamless puppet migration path
[17:08:08] what's the next k8s version we're aiming to support?
[18:09:44] we don't know yet... but given we wanted to support "supported" rolling upgrades it's probably =+1
[18:09:57] *+=1
[18:12:10] btullis: the disk type thing is odd, please file a task
[18:12:18] also: congrats
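
On the `disk-type` label question: Linux exposes a per-device rotational flag under sysfs (`/sys/block/<dev>/queue/rotational`, `0` for SSD/NVMe, `1` for spinning disk), which is the kind of signal the linked puppet profile would plausibly consult. A hypothetical sketch of that classification, using a fake sysfs directory so it can run anywhere (the function name and layout are illustrative, not the actual puppet logic):

```shell
# Hypothetical sketch of deriving a node's disk type from the kernel's
# rotational flag; not the actual logic in operations-puppet.
disk_type() {
    # $1: a sysfs-style directory containing <dev>/queue/rotational files
    local rot
    for rot in "$1"/*/queue/rotational; do
        # rotational=1 means a spinning disk; any HDD present wins
        if [ "$(cat "$rot")" = "1" ]; then
            echo hdd
            return
        fi
    done
    echo ssd
}

# Simulate a node with a single SSD (rotational=0)
tmp=$(mktemp -d)
mkdir -p "$tmp/sda/queue"
echo 0 > "$tmp/sda/queue/rotational"
disk_type "$tmp"   # prints "ssd"
```

If a node flipped from `ssd` to `hdd` during the upgrade, one thing worth checking under this assumption is whether a new device (e.g. a virtual or USB disk) appeared that reports `rotational=1`.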