[07:51:25] I guess T393564 can be closed now? as the beta has been started
[07:51:25] T393564: [Hypothesis] WE6.3.10 start a beta for the push-to-deploy features - https://phabricator.wikimedia.org/T393564
[08:15:06] I'm waiting to have the next task open for it
[08:15:32] as it also explains the process during the beta itself, though I can bubble it up to the parent task I guess
[08:15:38] yep, I'll do that
[08:33:21] done :)
[09:08:17] morning. the new toolsdb replica is now up&running, but will need some time to catch up
[09:27:37] \o/
[09:52:26] hmpf... more random vcr errors on maintain-kubeusers https://gitlab.wikimedia.org/repos/cloud/toolforge/maintain-kubeusers/-/jobs/551315
[10:04:05] hmm... some dns errors happening too https://gitlab.wikimedia.org/repos/cloud/toolforge/envvars-admission/-/jobs/551282 `fatal: unable to access 'https://gitlab.wikimedia.org/repos/cloud/toolforge/envvars-admission.git/': Could not resolve host: gitlab.wikimedia.org`
[10:04:36] it passed now though
[10:25:32] fixed the vcr recording again... going for lunch
[13:17:37] quick review fixing pre-commit tests on cicd repo https://gitlab.wikimedia.org/repos/cloud/cicd/gitlab-ci/-/merge_requests/63
[13:27:56] what is a good place for me to run 'reportbug'? I'm pretty sure my local debian VM can't send email.
[13:34:29] andrewbogott: I think you don't need a local proper mail setup if you follow https://wiki.debian.org/reportbug#Using_the_Debian_bug_report_SMTP_server instead
[13:37:29] is the theory that I run reportbug on the system exhibiting the issue so that it can include useful metadata? Because the actual hosts showing the issue are cloudcontrol*
[13:39:29] I usually just run reportbug on my laptop and include any relevant version/etc info manually
[13:39:49] if you have a specific use case I can also have a look, I already have a local setup for working with all the odd debian systems
[13:43:20] I was thinking I'd just run it on login.toolforge.org because mail is configured there :p
[13:53:10] looking for a review of https://gitlab.wikimedia.org/repos/cloud/toolforge/toolforge-deploy/-/merge_requests/856
[14:06:12] taavi: LGTM, +1d, dcaro do you have any concerns before deploying to tools?
[14:07:43] unrelated, can I get a +1 on this? https://gitlab.wikimedia.org/repos/cloud/cloud-vps/tofu-infra/-/merge_requests/252
[14:07:58] LGTM, to me, that does not enable it on all the workers right away right?
[14:08:30] dhinus: +1d
[14:08:42] thanks
[14:09:00] it only enables it on worker 108, nice
[14:09:14] * taavi merges before someone asks why -108 specifically
[14:09:25] is 108 just a random one? DAMN I was almost typing enter :P
[14:09:30] hahahah
[14:09:49] it's a random non-NFS worker
[14:10:19] ack
[14:18:41] next: https://gerrit.wikimedia.org/r/c/operations/puppet/+/1163729
[14:21:07] +1d
[14:21:30] ty
[15:09:28] https://gitlab.wikimedia.org/repos/cloud/toolforge/toolforge-deploy/-/merge_requests/858
[15:14:10] +1d
[15:22:47] Raymond_Ndibe: https://gitlab.wikimedia.org/repos/cloud/toolforge/toolforge-misctools/-/merge_requests/1 <- first step, building the package in ci
[15:23:08] I'm trying to use gerritlab for the first time, for stacked patches in gitlab, but I keep getting unauthorized
[15:23:15] my token has scopes "api, read_repository, write_repository", do I need something else?
[16:45:28] who in WMCS is brave enough to take over that change ? https://gerrit.wikimedia.org/r/c/operations/puppet/+/896052 or should it be abandoned ?
[16:46:36] probably me! but we likely should try that log-only first
[16:47:44] that would block all ipv6 traffic from outside cloud-vps?
[16:48:19] no
[16:49:51] I mean, not all traffic but all traffic originating from outside
[16:49:58] can you explain what it does then?
[16:59:30] it's only IPv4 because there are separate 'ip' and 'ip6' terms in nftables, so by definition this only affects IPv4
[17:00:09] I think at minimum it needs review in terms of the newer VXLAN-enabled networks though, we have new subnets for those that probably aren't covered by $virtual_subnet_cidr
[17:00:19] I think that's about traffic from wikiprod realm to wmcs 172.16.x private space. ideally there should be no traffic like that, but i suspect at least the current jenkins setup might be using those
[17:01:15] taavi: yes exactly, it will block connections to the 172.16.x range from elsewhere. Obviously as that is a private range internet destinations cannot source such connection requests, so ultimately the only place they could be coming from now which it would block is wmf prod.
[17:01:50] possibly a variant of this change with "accept" and "counter/log" is a good first step to evaluate where we are
[17:05:09] I will try to come up with an updated patch at some point!
[17:21:13] ok so for instance the service I'm building now that runs on a cloudcontrol but talks to k8s on a VM...
[17:21:29] that's different because it's using the cloud private realm?
[17:29:16] should be yeah, depends what IP on the cloudcontrol is exposing the service (but ideally it would be using a cloud-private one to fit the policy)
[17:29:55] in this case the traffic is originating on the cloudcontrol, outbound to the VM
[17:30:07] (seems like you're talking about the opposite but I may misunderstand)
[17:30:29] I had the opposite in mind yes, but ultimately it's the same question
[17:31:14] a cloudcontrol will 100% use the cloud-private network to reach VMs, that traffic will not go out via the core routers in and out of WMF prod. land
[17:31:29] so shouldn't be any issue with that
[17:31:36] ok! that's what I thought but I didn't want to build a castle on false hopes :)
[17:31:47] nope good to check!
[17:31:48] or on a route that's about to be removed
[17:39:24] * dcaro off
[17:59:18] something! https://gitlab.wikimedia.org/repos/cloud/toolforge/toolforge-misctools/-/jobs/552363/artifacts/browse/debs/ xd, for tomorrow
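For the reportbug exchange at 13:34:29, a rough sketch of what the no-local-MTA setup could look like, assuming a `~/.reportbugrc` with the Debian bug-report SMTP server from the linked wiki page; the name and address below are placeholders, not anyone's real config:

```
# ~/.reportbugrc -- hypothetical example; adjust realname/email to your own
realname "Your Name"
email "you@example.org"
# route the report through Debian's bug-report SMTP server
# instead of relying on a working local mail setup
smtphost reportbug.debian.org
```

With something like this in place, reportbug can be run on a laptop (or any host without a configured MTA) and the version/metadata for cloudcontrol* filled in manually, as suggested at 13:39:29.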
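And for the log-only variant floated at 17:01:50, a minimal nftables sketch of a rule that counts and logs, but still accepts, IPv4 traffic towards the cloud-vps instance range. The table/chain names and the 172.16.0.0/21 prefix are placeholders here, not what the Gerrit change actually defines (that one is parameterised on $virtual_subnet_cidr):

```
# hypothetical log-only evaluation rule, not the actual patch
table inet cloud_filter {
	chain forward {
		type filter hook forward priority 0; policy accept;
		# count and log (but still accept) IPv4 traffic destined for the
		# cloud-vps instance range, to see what a real drop rule would hit
		ip daddr 172.16.0.0/21 counter log prefix "cloud-private-ingress: " accept
	}
}
```

Swapping the final `accept` for `drop` (and covering the newer VXLAN subnets mentioned at 17:00:09) would then turn the evaluated rule into the actual block.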