[06:13:08] greetings
[07:30:45] I see OpenstackAPIResponse is firing, taking a look
[07:47:58] ok I think it is related to the latest changes in transient queues, open a task
[07:48:01] opening even
[09:40:40] tofuinfratest is still failing, in the last run the failures were openstack_containerinfra_cluster_v1 and openstack_compute_volume_attach_v2
[09:41:10] openstack_containerinfra_cluster_v1 is the one that is consistently failing, I opened T423393
[09:41:11] T423393: [tofuinfratest] Magnum cluster creation is failing - https://phabricator.wikimedia.org/T423393
[13:58:23] godog: I had one (or two?) of the api servers shut off for a while last night which probably made the response time average shoot up if it's averaged over all cloudcontrols. BUT, the reason I had them shut off was because I was seeing intermittent failures (which might be why the tofu tests are flaky). Will see if I can reproduce all that today.
[14:37:57] andrewbogott: ack, yeah I saw your sal and looked into it a little bit today, please take a look at https://gerrit.wikimedia.org/r/c/operations/puppet/+/1271719?usp=dashboard which I think should fix the permission denied errors
[14:40:25] maybe not the whole story, though it should help
[14:50:20] that's interesting! Is there any reason why that would be a new problem rather than something present for years in our install?
[14:51:37] like, is it from the rabbit queue change?
[14:52:28] yes indeed
[14:52:45] I'll expand on that point in the commit message
[14:56:00] {{done}}
[14:59:36] thanks! Let's merge and see if that cheers up the tofu runs
[15:00:45] will do
[15:20:08] ok change deployed, I need to step out though might catch up later today
[16:16:23] godog: tofu runs seem much better!
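
On the 09:40 tofuinfratest failure (T423393): a minimal sketch for reproducing the Magnum cluster creation outside the tofu run, assuming openstacksdk's container_infrastructure_management (Magnum) proxy is available. The cloud name and cluster template are placeholders, not the real test fixtures; the status_reason field is usually where Magnum reports why creation failed.

```python
# Sketch: create a Magnum cluster and poll it, printing status_reason on
# failure. Cloud name and template ID are placeholders (assumptions), not
# the actual tofuinfratest configuration.
import time

import openstack

conn = openstack.connect(cloud="tofuinfratest")  # placeholder clouds.yaml entry
coe = conn.container_infrastructure_management

cluster = coe.create_cluster(
    name="t423393-repro",
    cluster_template_id="k8s-template",  # placeholder template name/ID
    node_count=1,
    master_count=1,
)

# Poll until the cluster leaves CREATE_IN_PROGRESS.
while True:
    cluster = coe.get_cluster(cluster.id)
    if cluster.status != "CREATE_IN_PROGRESS":
        break
    time.sleep(30)

# On CREATE_FAILED, status_reason typically points at the failing Heat resource.
print(cluster.status, cluster.status_reason)
```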
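
On the 13:58 point about the response time average: if the OpenstackAPIResponse alert averages over all cloudcontrols, a host that is shut off contributes a full timeout per probe, which dominates the mean. A toy illustration of that arithmetic; the hostnames and probed endpoint are placeholders, not the real check behind the alert.

```python
# Toy illustration: requests to a dead host run into the timeout, so the
# mean over all cloudcontrols spikes even while the live hosts are fast.
# Hostnames and endpoint are placeholders (assumptions).
import statistics
import time

import requests

CLOUDCONTROLS = [  # placeholder hostnames
    "cloudcontrol1001.example.org",
    "cloudcontrol1002.example.org",
    "cloudcontrol1003.example.org",
]
TIMEOUT = 10.0  # seconds; a shut-off host costs the full timeout

def probe(host: str) -> float:
    """Time one request to the host's keystone root, counting timeouts."""
    start = time.monotonic()
    try:
        requests.get(f"https://{host}:5000/v3", timeout=TIMEOUT)
    except requests.exceptions.RequestException:
        pass  # count the time spent waiting either way
    return time.monotonic() - start

samples = {host: probe(host) for host in CLOUDCONTROLS}
print("per-host:", samples)
print("mean over all hosts:", statistics.mean(samples.values()))
```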
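
On the 14:37 permission denied errors: assuming they come from RabbitMQ vhost permissions not covering the queue names used after the transient queue change (an assumption here; the actual fix is in the Gerrit change linked above), a small pika sketch can probe whether a given user may declare a transient queue at all. Broker host, vhost, and credentials are placeholders.

```python
# Sketch: probe whether a RabbitMQ user may declare a transient (non-durable,
# auto-delete) queue, which is the shape oslo.messaging uses for reply/fanout
# queues. Host, vhost, and credentials are placeholders (assumptions).
import pika

params = pika.ConnectionParameters(
    host="cloudrabbit.example.org",  # placeholder broker host
    virtual_host="/",                # placeholder vhost
    credentials=pika.PlainCredentials("nova", "secret"),  # placeholder creds
)
connection = pika.BlockingConnection(params)
channel = connection.channel()
try:
    channel.queue_declare(
        queue="transient-perms-probe", durable=False, auto_delete=True
    )
    print("declare OK: permissions cover transient queues")
except pika.exceptions.ChannelClosedByBroker as exc:
    # A 403 ACCESS_REFUSED here reproduces the permission denied error.
    print(f"declare refused: {exc}")
finally:
    connection.close()
```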