[07:53:29] Continuous-Integration-Infrastructure (Zuul upgrade): Artifact storage for updated Zuul CI - https://phabricator.wikimedia.org/T397385#10949396 (hashar) From the Ceph Object Gateway [[ https://docs.ceph.com/en/reef/radosgw/swift/#features-support | Swift API - features-support ]] page, it is marked as suppor...
[07:54:16] Rados GW does support X-Delete-After from the Swift protocol, and the code was written back in September 2015
[07:54:16] https://github.com/ceph/ceph/commit/fa347d8f69b8eff2e246d35a127c4bfa5a50b5e0#diff-293ad9f8972ecf01da75b086c27e843af0c6b405b543d0ea542dda6912db8a0a
[07:55:01] which means that the Ansible role `upload-logs-swift` can push artifacts asking for them to be deleted after X days
[07:55:31] and Ceph RADOS would happily delete them for us. So no need to write a custom garbage collector
[07:56:05] I only used S3 because it had a tutorial. OpenDev uses Swift and I am gonna try to set it up
[12:34:10] 2025-06-26 12:30:16.131398 | localhost -> localhost | Output suppressed because no_log was given
[12:34:10] failure
[12:34:12] poor me :)
[13:11:06] I went to add a flag to disable the Ansible `no_log` for upload-logs-swift: https://review.opendev.org/c/zuul/zuul-jobs/+/953442/1/roles/upload-logs-swift/tasks/main.yaml
[13:11:57] and I have made a change that Depends-On it and sets zuul_log_verbose: true : https://gerrit.wikimedia.org/r/c/mediawiki/core/+/1164193/
[13:12:31] but that is not being applied for some reason :]
[15:53:42] so hmm
[15:53:45] YAML is hard
[15:53:47] doc is important
[15:54:11] I encrypted the whole clouds.yaml file and passed it as-is to the upload-logs-swift role
[15:54:25] but it really expects a YAML structure instead, so each value has to be encrypted individually
[15:54:56] once I found that out and finally fixed the botched structure https://gerrit.wikimedia.org/r/c/integration/config/+/1164252/1/zuul.d/secrets.yaml
[15:55:07] well, sure enough it worked: https://zuul-dev.wmcloud.org/t/wikimedia/build/85d02167307d44e6b51d65b75a197f80 has uploaded using the Swift API
[15:55:18] and the logs/console work there
[15:55:21] :party
[17:04:32] corvus: Link dumping here on any bits that I should read for setting up RBAC and other goodies on the Kubernetes cluster would be appreciated. :)
[17:06:34] jnuche: if you want we can talk about Zuul tomorrow. For this evening I am done :)
[17:35:01] bd808: here we go: https://review.opendev.org/c/zuul/nodepool/+/953479/1/doc/source/kubernetes.rst
[17:55:02] corvus: thanks. And thanks for making it an addition to the main docs so I might find it again in a year or so. :)
[18:04:59] of course! that's where it belongs :) I just recently did that analysis for someone, but hadn't gotten around to updating docs yet; I suspect a lot of folks just went with easy-mode cluster admin privs. :)
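The 07:54–07:55 messages rely on Swift's X-Delete-After header, which the Ceph RADOS Gateway honours for automatic object expiry. A minimal Ansible sketch of that mechanism, assuming placeholder values for swift_url, auth_token, and the container/object path (none of these come from the log):

```yaml
# Hedged sketch: PUT one artifact with a 30-day expiry; RADOS GW
# removes the object itself once X-Delete-After elapses, so no
# custom garbage collector is needed.
- name: Upload artifact with automatic expiry
  ansible.builtin.uri:
    url: "{{ swift_url }}/v1/AUTH_example/zuul-logs/build-uuid/console.log"
    method: PUT
    src: console.log
    headers:
      X-Auth-Token: "{{ auth_token }}"
      X-Delete-After: "{{ 30 * 24 * 3600 }}"  # retention in seconds
    status_code: 201
```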
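The 13:11 review adds an opt-out for Ansible's `no_log` on the upload task, so output stays suppressed by default but can be revealed while debugging. A minimal sketch of that pattern, assuming the zuul_log_verbose variable named in the log (the task shown is illustrative, not the actual role code):

```yaml
# Hedged sketch: suppress task output unless the job explicitly
# sets zuul_log_verbose: true.
- name: Upload logs to swift
  # ... upload module and arguments elided ...
  no_log: "{{ not (zuul_log_verbose | default(false)) }}"
```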
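The 15:54 messages found that the role wants the cloud configuration as a YAML structure, with only the sensitive leaves encrypted, rather than the whole clouds.yaml file encrypted as one blob. A hedged sketch of what such a Zuul secret can look like (names and ciphertext are placeholders, not the contents of the linked integration/config change):

```yaml
# Hedged sketch: a Zuul secret carrying the cloud config as structured
# data; only the password leaf is ciphertext (e.g. from zuul-client encrypt).
- secret:
    name: logs-swift-cloud          # placeholder name
    data:
      container: zuul-logs          # placeholder container
      cloud:
        auth:
          auth_url: https://keystone.example.org:5000  # placeholder
          username: zuul-uploader                      # placeholder
          password: !encrypted/pkcs1-oaep
            - UkVQTEFDRS1XSVRILVJFQUwtQ0lQSEVSVEVYVA==  # placeholder chunk
```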
[19:46:35] back in the day I'd write the doc upstream rather than in our in-house wiki :)
[19:49:08] nice, you get artifacts with human-friendly names at https://zuul.opendev.org/t/zuul/build/c912da694b4545eb8889be514933c5c6/artifacts
[19:49:23] > Docs preview site
[19:50:23] https://95ec6d5180d45c332d30-3a495235250eccc64faa024f22d41fdc.ssl.cf1.rackcdn.com/zuul/c912da694b4545eb8889be514933c5c6/docs/kubernetes.html \o/
[21:37:16] Continuous-Integration-Infrastructure (Zuul upgrade), Upstream: terraform-provider-openstack >v3.0.0 uses a gophercloud client that does not work with WMCS Magnum APIs - https://phabricator.wikimedia.org/T397106#10952240 (bd808) Open→Stalled
[22:31:20] Continuous-Integration-Infrastructure (Zuul upgrade), Patch-For-Review: Provision Kubernetes cluster and bastion using OpenTofu and Magnum - https://phabricator.wikimedia.org/T396936#10952379 (bd808) >>! In T396936#10936924, @bd808 wrote: > Maybe the next best thing would be to implement a proxy service...
[23:55:33] Continuous-Integration-Infrastructure (Zuul upgrade), Patch-For-Review: Provision Kubernetes cluster and bastion using OpenTofu and Magnum - https://phabricator.wikimedia.org/T396936#10952564 (bd808) I filed {T397994}. Until there is a fix for that, the Project Puppet settings will need to be managed via...
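On the 17:04 RBAC question and the kubernetes.rst addition: the alternative to easy-mode cluster-admin is a scoped service account for Nodepool. A hedged sketch, assuming the Kubernetes driver creates a namespace per node (which forces the namespace rules to be cluster-scoped); all names are placeholders, and the linked doc is the authoritative reference:

```yaml
# Hedged sketch: least-privilege RBAC for a nodepool service account
# instead of cluster-admin. All names are placeholders.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nodepool
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["create", "get", "list", "delete"]
  - apiGroups: [""]
    resources: ["pods", "pods/exec", "serviceaccounts", "secrets"]
    verbs: ["create", "get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nodepool
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nodepool
subjects:
  - kind: ServiceAccount
    name: nodepool
    namespace: nodepool
```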