[13:09:23] I'm pretty sure that behaviour has existed for a good while - basically you have to wait until the php-fpm workers cycle and pick up the new files from disk. In my usage I just restarted the web service on each deploy to force it, but sometimes sent SIGUSR1 to the master process when testing by hand.
[13:09:54] !log tools deployed jobs-api change to default resources, patching existing jobs
[13:09:57] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[13:38:05] jnuche: just checking in, are you fully unblocked now?
[13:49:39] !log tools patched all tools with new resource defaults, everything looks good
[13:49:43] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[13:54:23] andrewbogott: at least for the migration test, yeah, thanks again
[13:54:43] great! Hopefully it's useful long-term for you to be able to self-serve :)
[13:54:51] even though it's a tangled web
[18:37:40] Is envvars feeling happy? Multiple pods have hit `CreateContainerConfigError` with `Error: failed to sync secret cache: timed out waiting for the condition`
[18:48:14] There's a suspicious spike in the response times here: https://grafana-rw.wmcloud.org/d/8H1LfdwSz/envvars-service-overview?orgId=1&from=now-6h&to=now&timezone=browser&var-datasource=P8433460076D33992
[18:48:20] seems ok now though
[18:51:15] that's probably related to the k8s API/controllers rather than envvars-api itself; if it happens too often, please open a task to investigate 👍
[20:06:24] It's definitely happened more than 4 times today, but I'm finished for now and it eventually gets there... will make a ticket if it continues later
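
A minimal sketch of the two reload approaches mentioned at 13:09. The unit name and pid-file path below are assumptions (they vary by distribution and PHP version); note also that php-fpm documents SIGUSR2, not SIGUSR1, as the graceful worker reload, while SIGUSR1 only reopens log files.

```sh
# Force php-fpm to pick up newly deployed files. The unit name
# "php7.4-fpm" and the pid path are placeholders; adjust for your host.

# Full restart: drops every worker (and its opcache), guaranteed pickup.
sudo systemctl restart php7.4-fpm

# Graceful reload: SIGUSR2 re-execs the master and cycles all workers;
# SIGUSR1 would only reopen the log files, not reload code.
sudo kill -USR2 "$(cat /run/php/php7.4-fpm.pid)"
```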
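
For the 18:37 `CreateContainerConfigError` reports, a quick triage sketch: that error with "failed to sync secret cache" usually means the kubelet timed out fetching a Secret the pod references from the API server, which fits the 18:51 suspicion that the k8s API/controllers, not envvars-api, were slow. The pod, namespace, and secret names below are hypothetical placeholders.

```sh
# Events on the failing pod show which Secret the kubelet timed out on.
kubectl describe pod mytool-job-abc123 -n tool-mytool

# Confirm the referenced Secret actually exists and is readable.
kubectl get secret mytool-envvars -n tool-mytool

# Recent namespace events, newest last, to spot repeated sync timeouts.
kubectl get events -n tool-mytool --sort-by=.lastTimestamp
```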