[08:23:09] dcaro: hi, good to see you have handled the CI magic to upgrade tox to 3.21.4!
[08:40:43] hashar: thanks! mostly James_F :)
[08:41:50] dcaro: and your name somehow sounded familiar, so I did my research and sure enough I found out you have some patches for jenkins job builder :]
[08:42:18] long time ago yes, used it quite a lot at redhat
[08:42:40] it still has a community to this day ;]
[08:43:12] though I guess most have migrated toward Jenkinsfile pipelines or Zuul v3 (in the openstack context)
[08:43:17] anyway it is serving us well
[08:43:59] it has been a very useful tool for me too, did not keep up with the development though xd
[08:44:04] *I did not
[08:44:38] a bit of a side question: is there an easy way for me to run the tox tests that CI runs locally? (given that it's a docker image)
[09:05:52] dcaro: theoretically yes
[09:06:21] you could run the container bind-mounting the code to /src
[09:06:32] but the CI containers have USER nobody
[09:06:39] so there is some madness with the user permissions
[09:06:45] ie the .tox directory would not be able to be created
[09:08:39] yep, I'm seeing that right now :)
[09:09:02] and last time I checked, Docker does not let you map your local user uid to a different uid in the container
[09:09:02] though a chmod 777 .tox was enough
[09:09:12] yeah that would do
[09:09:34] and some of the CI containers have an entry point that runs git init / git fetch / git checkout
[09:09:40] I had to add some code to the run.sh to be able to skip the ci-init script
[09:09:46] yes, that xd
[09:09:50] yeah
[09:09:54] so the idea on CI is:
[09:09:57] would it be ok if I send a patch like that?
[09:10:13] the jenkins jobs are triggered by Zuul (which should sound familiar since you got involved in openstack at some point)
[09:10:25] and Zuul passes environment variables that have all the context for git to fetch the patch
[09:10:50] some of the CI containers thus have something that looks like: git init && git fetch $ZUUL_URL/$ZUUL_PROJECT $ZUUL_REF && git checkout -B $ZUUL_BRANCH
[09:11:07] then execute whatever the default command is, aka 'tox'
[09:11:22] when using a bind mount for the source code, I usually just override the entrypoint as well
[09:11:32] -v "$(pwd):/src" --entrypoint=tox
[09:11:37] which sometimes works out of the box ;)
[09:12:46] I guess what I can do is have the script skip its execution when it is not run under the CI environment (for example Jenkins sets the environment variable JENKINS_URL)
[09:13:19] I think that'd be nicer, so we make sure we run the same options for tox and setup as CI (skipping the clone)
[09:13:25] I can get that patch up :)
[09:14:06] and since that script doing the git dance was used in most images
[09:14:12] i have extracted it to a data-only image
[09:14:14] dockerfiles/ci-common/content/ci-src-setup.sh
[09:14:24] nice
[09:14:47] which is then copied to various images using something like COPY --from=ci-common /utils /utils
[09:14:58] well
[09:15:09] actually that is done in the CI base images ci-buster and ci-stretch
[09:15:32] so every single image has the script available, though not all entry points use it
[09:15:41] but yeah
[09:16:10] I would welcome a patch to that ci-src-setup.sh script that would bypass it when JENKINS_URL is not set
[09:16:24] or maybe we could use a CI variable :D
[09:17:09] I'm ok with both :), as long as it's doable, I can just run a mini bash script to run the CI tests (like the puppet repo does)
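(For illustration only: a minimal Python sketch of the kind of local-run helper being discussed above, rather than the mini bash script dcaro mentions. The image name, the assumption that the image's working directory is /src, and the pre-created .tox directory are taken from the conversation or invented for the example; this is not an actual script from the repo.)

```python
#!/usr/bin/env python3
"""Hypothetical helper: run a CI tox image against the local checkout.

Pass a tox CI image name on the command line (for example something like
releng/tox-buster; the exact name is an assumption, not confirmed here).
"""
import pathlib
import subprocess
import sys


def run_ci_tox(image: str, src: pathlib.Path) -> int:
    # The CI containers run as USER nobody, so pre-create .tox and open up
    # its permissions (the "chmod 777 .tox" workaround mentioned above).
    tox_dir = src / ".tox"
    tox_dir.mkdir(exist_ok=True)
    tox_dir.chmod(0o777)

    # Bind-mount the checkout to /src and override the entrypoint so the
    # ci-src-setup git init/fetch/checkout dance is skipped entirely,
    # assuming the image's working directory is /src.
    cmd = [
        "docker", "run", "--rm",
        "-v", f"{src}:/src",
        "--entrypoint", "tox",
        image,
    ]
    return subprocess.run(cmd).returncode


if __name__ == "__main__":
    sys.exit(run_ci_tox(sys.argv[1], pathlib.Path.cwd()))
```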
[09:22:12] dcaro: I think my original idea was to make the jenkins jobs as dumb as possible
[09:22:20] ie invoke a single image that would do everything
[09:22:33] (assuming it is passed the proper environment variables, which Zuul does for us)
[09:22:54] but that derailed as we built more of those images. So some do use the ci-src-setup script, some others don't
[09:23:12] and for images that do not use the script, we have the jenkins job invoke the releng/ci-src-setup-simple image, which just invokes the script
[09:23:17] it is obviously messy
[09:23:21] and definitely inconsistent
[09:24:03] hmm, so there are multiple docker runs, to overcome those images not running the script?
[09:24:26] yeah
[09:24:44] ack
[09:24:45] I call that the unintended, undesigned technical debt
[09:24:51] damn
[09:25:04] trying to outsmart myself clearly does not work
[09:25:14] xd
[09:25:26] so that is technical debt, and surely we could look at making sure all the CI images act the same and are consistent in their behavior
[09:25:29] that would be less surprising
[09:25:46] cause anytime one has to run one of those images locally they are kind of forced to have a look at the entry point
[09:28:47] I can for sure help with the tox-related ones for starters (as that's what I'm using on some of the projects directly), if that helps
[09:36:32] dcaro: yeah that would definitely be helpful
[09:37:45] I'll try to give it a shoot then :)
[09:38:23] *shot
[09:50:02] funny to see the tox stuff here
[09:50:11] I uploaded tox 3.21.4 to Debian this past Saturday :)
[10:05:34] volans: would you accept a patch removing py39 from spicerack's tox.ini?
[10:06:32] <_joe_> kormat: would you accept a patch upgrading your laptop?
[10:06:37] <_joe_> :P
[10:07:17] <_joe_> kormat: the issue with removing py39 is that... it will require anyone running newer distros to run tox in docker
[10:07:37] <_joe_> which ofc is doable and I already do it
[10:07:44] _joe_: which newer distros?
[10:07:45] <_joe_> but it's a counterbalance to consider here
[10:08:00] no released ubuntu, for example, ships with py39
[10:08:13] <_joe_> kormat: debian sid/testing and a few other distros are now shipping py39
[10:08:27] kormat: use pyenv
[10:08:32] I have all versions installed
[10:08:44] volans: i have py39 installed. tox 3.13.2 ignores it.
[10:08:50] <_joe_> I think fedora too
[10:09:04] kormat: upgrading tox from pip is an option?
[10:09:05] and meanwhile we cannot use f-strings for compatibility :-(
[10:09:23] <_joe_> jynus: compatibility with what?
[10:09:26] jynus: only if you need to support stretch hosts
[10:09:31] _joe_: i don't really take "people running unreleased distros might experience breakage/inconvenience" all that seriously.
[10:09:32] <3.6
[10:09:35] mgmt hosts are all on buster
[10:09:41] that as less than, not a heart :-)
[10:09:44] *is a
[10:10:17] <_joe_> kormat: well most of the people developing spicerack are, so ymmv
[10:10:27] volans, I would love to get rid of stretch precisely for that :-(
[10:10:38] volans: i try to avoid using pip outside of venvs. it's a clusterfuck
[10:10:56] _joe_: sigh
[10:11:05] <_joe_> but to be clear - it's ok to tell everyone to run tox in our CI docker image
[10:11:31] _joe_: you'd also have to tell them how to do so
[10:12:24] <_joe_> kormat: yes, I'm thinking what is the best way to do it
[10:12:36] right now it's not technically possible, I know that d.c.aro was looking to patch the tox images to allow for a local run too
[10:12:50] _joe_: ubuntu users, at least, can easily install multiple python versions. i have py3.{5,6,7,8,9} installed
[10:12:50] also the CI job runs multiple dockers in sequence
[10:15:57] <_joe_> volans: I'm not sure it's not technically possible, I've done it multiple times to test pythons I didn't have installed on my systems
[10:15:59] ok. i guess you could argue that ubuntu 21.04 will be released in a few months, and it will ship with py3.9. i'll bite the bullet and install tox from pip
[10:16:00] I'm working on making 'running the CI docker locally' easy for spicerack: T274338, so I'd wait until then
[10:16:06] T274338: [spicerack] easy run CI tests locally - https://phabricator.wikimedia.org/T274338
[10:16:32] <_joe_> dcaro: oh yes it's not "easy" right now :P
[10:16:43] <_joe_> it's pretty inconvenient
[10:17:22] <_joe_> and I support making it easy! It should be, and for all of our python projects
[10:17:37] I'm thinking of just creating a script like the one in the puppet repo
[10:17:39] +1
[10:32:24] * legoktm notes that Fedora (stable release) allows co-installing Python 3.5 through 3.10, all from packages
[10:34:52] <_joe_> yes, sorry, we hired another fedora user. At least it's not gentoo
[10:35:16] <_joe_> (also I suppose he's up at this hour fixing stuff that rpm upgrades messed up, and not working...)
[10:35:18] * kormat covers klausman's ears
[10:35:28] <_joe_> kormat: :}
[10:35:57] I'm being hip and writing stuff in Rust ;)
[10:43:44] <_joe_> rust is not cool anymore, it's institutionalized. You should move to Zig
[10:44:18] kormat: I am used to the abuse. Calloused, as it were.
[10:45:00] <_joe_> klausman: :*
[10:45:20] dcaro: I have filed a task about the ci-src-setup script https://phabricator.wikimedia.org/T274347
[10:45:38] I am off for the kids but will follow up later this afternoon if there is any activity on that task ;]
[10:47:53] klausman: you're no longer the lead for gentoo alpha? 😮 https://wiki.gentoo.org/index.php?title=Project%3AAlpha&type=revision&diff=877242&oldid=824818
[10:48:00] there goes my respect
[11:01:48] hasharLunch: thanks!
[11:30:20] volans: do you have any idea how to get pytest to run tests multiple times, with different environments?
[11:30:38] (in my case i want to run tests against mariadb version X, and then again against version Y, and so on)
[11:35:56] I think a parametrized fixture might do it, let me see
[11:37:39] https://docs.pytest.org/en/latest/fixture.html#parametrizing-fixtures
[11:38:12] ah haah. that looks promising. thanks!
[11:38:25] so you can have a connect() fixture that is parametrized with different URIs
[11:38:28] and connects to different DBs
[11:38:39] see also https://stackoverflow.com/a/59931678
[11:39:14] 👍
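(To make the parametrized-fixture suggestion above concrete, a minimal sketch; the DSNs, fixture name, and test are invented for illustration, and a real suite would open and close actual connections inside the fixture rather than just returning a string.)

```python
import pytest

# Hypothetical DSNs, one per MariaDB version under test; in a real suite
# these would point at actual test instances.
MARIADB_DSNS = {
    "10.4": "mysql://127.0.0.1:10104/test",
    "10.5": "mysql://127.0.0.1:10105/test",
}


@pytest.fixture(params=sorted(MARIADB_DSNS), ids=sorted(MARIADB_DSNS))
def mariadb_dsn(request):
    """Every test using this fixture runs once per MariaDB version.

    A real connect() fixture would open a client connection here
    (e.g. with pymysql) and yield it, closing it after the test.
    """
    return MARIADB_DSNS[request.param]


def test_dsn_points_at_mysql(mariadb_dsn):
    # Stand-in assertion; a real test would run queries over the connection.
    assert mariadb_dsn.startswith("mysql://")
```

With this pattern pytest reports the test once per parameter, e.g. test_dsn_points_at_mysql[10.4] and test_dsn_points_at_mysql[10.5].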
[12:09:29] moritzm: there's a change from you on puppet, access-related, so I am not going to merge it just in case you're still doing something else. My change can be merged anytime, so feel free to do so
[12:10:16] sorry, forgot to puppet-merge, will merge both now
[12:10:22] thanks!
[12:24:13] moritzm: there's also a change from me now :) that one too can go anytime
[12:24:19] so I'll leave it to you
[12:27:24] all done :-)
[12:28:04] hasharLunch: when you are back, let me know if there's any script to update the changelogs automatically... changing the ci-common images ends up touching almost everything (if not everything xd)
[12:28:28] dcaro: yeah we typically don't do that :/
[12:28:35] thx
[12:28:44] due to all the side effects when rebuilding the images
[12:29:47] dcaro: just rebuild the base images (ci-stretch and ci-buster) and then I guess the tox ones, since that is where you had the use case
[12:29:57] what about the others?
[12:30:48] but to answer, docker-pkg has a way to update the changelogs of all child images ( docker-pkg update --reason "Add feature X y z" ci-buster . )
[12:30:49] I can update them in several patches if needed too
[12:31:01] that would bump the changelog of all images descending from "ci-buster"
[12:31:06] the trouble is
[12:31:14] whenever the images are rebuilt, a bunch of debian packages get updated
[12:31:25] and anything that relies on outside state might either break or get magically updated
[12:31:44] so eventually, when we change the jenkins jobs to use the newer image, unrelated stuff explodes here and there
[12:32:51] sure, but the way to treat that is to expose it sooner, no? better than leaving it until it becomes a way bigger issue later (because even more things change)
[12:33:15] it also takes several hours to rebuild all the images
[12:34:15] and some amount of time to validate that jobs for high-traffic repos are working
[12:34:30] with the nice side effect of eventually having to spend more time catching up with whatever unrelated update happened
[12:34:58] so the update to the base image will eventually be included in future updates made for specific images
[12:34:59] ie
[12:35:41] if one gets to update the java-based images, the jobs get tested as a result of that update
[12:35:49] and if there is a side effect, that is adjusted at that point
[12:36:05] or maybe I am just overthinking all of that, but in the past the few mass updates did cause major havoc here and there
[12:36:26] so tldr: update ci-buster and ci-stretch, then another change that just updates the tox images
[12:36:39] and the rest of the fleet will eventually be updated :]
[12:38:17] dcaro: and I think I will look at normalizing all those images to use the updated ci-src-setup script eventually :)
[12:39:41] going to do homework with kids + groceries etc. Be back later this afternoon
[12:44:34] ack
[15:11:43] alert1001 and alert2001 give me the following error on puppet compiler: "ERROR: Compilation failed for hostname alert1001.wikimedia.org in environment change."
[15:12:43] I already reloaded the facts, thinking they were not updated
[15:20:27] jynus: link?
[15:20:41] https://integration.wikimedia.org/ci/job/operations-puppet-catalog-compiler/27968/console
[15:21:59] https://puppet-compiler.wmflabs.org/compiler1003/27968/alert2001.wikimedia.org/change.alert2001.wikimedia.org.err
[15:22:45] jynus: "Error: Function lookup() did not find a value for the name 'profile::dbbackups::check::db_password'"
[15:23:04] ah
[15:23:13] great catch, needs updating on the labs realm
[15:23:17] didn't think about that
[15:23:20] thank you a lot!
[15:23:47] I didn't think about that because it failed so early
[15:25:17] and that reminds me it will also need a change on production
[17:00:39] someone available for a quick +1? https://gerrit.wikimedia.org/r/c/operations/puppet/+/663242
[17:04:42] shdubsh: lookin
[17:07:16] jayme: Thanks!