[08:45:57] moritzm just a question on building go packages on build2002: I'm not vendoring dependencies for benthos so dh_golang needs to download dependencies at compile time, other than using USENETWORK=yes in .pbuilderrc && envvar is there something I can do to allow this?
[08:49:55] USENETWORK=yes should be all that's needed
[08:50:46] I think the majority of our internal go debs simply vendor within the deb, but USENETWORK=yes is exactly for that purpose
[08:51:28] ok, because I always get i/o timeouts when trying to fetch go deps
[08:58:22] I haven't used it for a long time myself, maybe also try it from build2001 to rule out that there's some bullseye->bookworm regression
[08:58:51] fabfur: I recall that I had a similar issue a long time ago (so my memories may be fuzzy), IIRC I tried placing USE_NETWORK=yes to ~/.pbuilderrc and it helped
[08:59:42] mmm maybe I missed something here
[08:59:44] fabfur@build2002:~$ cat .pbuilderrc
[08:59:44] USENETWORK=yes
[08:59:48] the underscore!
[08:59:52] :D
[09:04:35] unfortunately, seems that's not the case :(
[09:22:26] <_joe_> use_network on build2002 should be prohibited
[09:22:31] <_joe_> imho :)
[09:22:51] <_joe_> fabfur: you should indeed vendor all dependencies directly in your benthos repo.
[09:23:05] <_joe_> downloading them at compile time is just terrible practice
[09:23:23] <_joe_> this is how we're building all our go packages I've ever had to vet.
[09:23:59] yeah, that's just for a patch I'm trying to introduce to avoid T256098 but at this point it makes sense to vendor all, if godog agrees ...
[09:23:59] T256098: Segfault for systemd-sysusers.service on stat1007 - https://phabricator.wikimedia.org/T256098
[09:28:06] fabfur: I've had success in the past building benthos on build2001, do you mind sharing the command and error ?
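[editor's note] For reference, pbuilder's network switch is spelled `USENETWORK` with no underscore, which is exactly the pitfall in the exchange above (`USE_NETWORK` is silently ignored). A minimal sketch, written to `/tmp` here so it doesn't clobber a real `~/.pbuilderrc`:

```shell
# pbuilder reads USENETWORK (no underscore) from its rc file;
# USE_NETWORK=yes is silently ignored, which was the gotcha above.
cat > /tmp/pbuilderrc.example <<'EOF'
# allow network access inside the build chroot (needed when
# dh_golang downloads Go modules at compile time)
USENETWORK=yes
EOF
grep -c '^USENETWORK=yes' /tmp/pbuilderrc.example   # prints 1
```

The alternative _joe_ recommends avoids the question entirely: running `go mod vendor` in the package source tree writes every dependency into `vendor/`, which is then committed, so the build needs no network access at all.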
[09:28:54] on the vendoring specific question, I don't feel strongly either way, though in this case it seems a workaround for a different problem
[09:29:07] I'm just applying the quilt patches and then `BACKPORTS=yes DIST=bookworm GIT_PBUILDER_AUTOCONF=no gbp buildpackage -jauto -us -uc -sa --git-builder=git-pbuilder --git-ignore-new`
[09:29:29] the error is a timeout on all dependencies (`go get ...`)
[09:31:59] is the command in debian/README.source working ?
[09:33:25] with bookworm that is, not bullseye
[09:34:04] let me try setting `https_proxy`
[09:37:53] yep, that works, sorry I should've read the README first
[09:38:30] anyway, I think vendoring dependencies is worth the pain, so we don't have to set it anymore, but being a package that we import and patch WDYT?
[09:39:13] IMHO not worth the pain no
[16:56:29] hey folks, as FYI for the weekend, Kartotherian (kartotherian.discovery.wmnet, backend behind maps.wikimedia.org) is currently served by a mixture of wikikube k8s workers and bare metals (maps* nodes). The load is slightly more on k8s, but there should be plenty of capacity (the metrics looks really good now).
[16:57:09] If anything happens kartotherian-related and I am not around, the rollback is listed in the task's description of https://phabricator.wikimedia.org/T386926
[16:57:30] very easy, basically just pool only bare metals
[18:45:56] inflatador: We're getting quite a few alerts about cloudelastic things, I assume because of the updates you did/are doing. If you feel like accepting this ticket, please do! Otherwise please drop some notes there about monitoring changes that are now needed. Thanks! T388270
[18:45:57] T388270: Update alerting to correspond with the new cloudsearch cluster - https://phabricator.wikimedia.org/T388270
[18:51:14] andrewbogott apologies, I'm not sure why WMCS is a recipient for any of these alerts. But I'll def take a look
[18:51:32] thanks!
[18:51:47] They're non-paging so not urgent.
[18:57:48] NP.
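[editor's note] The `https_proxy` fix from debian/README.source (09:34 above) amounts to exporting a proxy before invoking gbp. The proxy URL below is an assumption based on the usual Wikimedia build-host convention; the authoritative value is whatever the README documents:

```shell
# Assumed proxy URL (standard WMF build-host convention) -- defer to
# debian/README.source for the real value on a given host.
export https_proxy=http://webproxy.eqiad.wmnet:8080
export http_proxy="$https_proxy"
echo "$https_proxy"   # prints http://webproxy.eqiad.wmnet:8080
# then run the build as quoted above, e.g.:
# BACKPORTS=yes DIST=bookworm GIT_PBUILDER_AUTOCONF=no gbp buildpackage \
#   -jauto -us -uc -sa --git-builder=git-pbuilder --git-ignore-new
```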
I assume WMCS does NOT want to get alerts for cloudelastic, at least not the very low-level ones y'all're getting now. LMK if that's not true
[19:08:34] andrewbogott ^^
[20:02:42] inflatador: I think that's right, as long as someone gets them :)
[20:04:09] andrewbogott ACK, my team should be getting those, not y'all, so I'll try and get to the bottom of that today. If you have any ideas LMK. Maybe something is tagging based on the 'cloud' in the hostnames?
[20:04:35] yeah, it might just be cloud*, not sure.
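[editor's note] The `cloud*` guess at the end is plausible: a bare hostname-prefix match can't distinguish cloudelastic (Search) hosts from genuine WMCS `cloud*` hosts. A hypothetical illustration of that overmatch (hostnames and team labels invented for the example):

```shell
# Hypothetical: a routing rule keyed on a bare cloud* hostname prefix
# lumps cloudelastic (Search) hosts in with real WMCS cloud* hosts.
for host in cloudelastic1001 cloudvirt1001 elastic1050; do
  case "$host" in
    cloud*) echo "$host -> wmcs" ;;    # prefix match: routed to WMCS
    *)      echo "$host -> search" ;;  # everything else
  esac
done
# prints:
# cloudelastic1001 -> wmcs
# cloudvirt1001 -> wmcs
# elastic1050 -> search
```

If that is the cause, the fix is a more specific matcher (e.g. excluding `cloudelastic*` from the WMCS route) rather than renaming hosts.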