[09:14:17] is there a cmdline utility for querying netbox?
[09:14:58] kormat: there is a python lib :)
[09:15:12] soo close. :)
[09:15:50] (more seriously, that'll be useful when i implement this properly)
[09:21:30] <_joe_> this?
[09:22:44] _joe_: `this->`
[09:23:21] i wrote a 25-line bash script to add hosts to the zarcillo db as a stop-gap measure.
[09:23:43] i then used it to automate inserting wrong data into the db. because that's what technology is for
[09:23:55] <_joe_> oh you want netbox => zarcillo to be automated
[09:23:57] <_joe_> ahahah
[09:25:26] <_joe_> yeah it makes sense. I've never looked at the netbox api but it shouldn't be too hard to do. I would just suggest you automate adding new databases, and not removing them :P
[09:25:52] <_joe_> (given our recent misfortunes)
[09:26:05] (why limit ourselves to "recent"?)
[09:27:26] bblack: slightly different, but i wrote some stuff to spoof the source address of dns queries in python some time back; it was something that was much easier to do in the original perl script (https://b4ldr.wordpress.com/2014/03/20/spoofing-dns-pcakets/). the original goal of that script was flawed due to BCP38, although it did let us test whether hosts were correctly doing BCP38 :)
[09:28:11] https://github.com/b4ldr/dnsreader was the finished project if curious
[11:51:04] kormat: we have some basic spicerack integration and you can access the api object from it
[11:51:28] that is from pynetbox
[11:51:33] we also have the swagger at https://netbox.wikimedia.org/api/docs/
[11:51:53] https://doc.wikimedia.org/spicerack/master/api/spicerack.netbox.html
[11:52:31] jbond42, XioNoX: https://gerrit.wikimedia.org/r/c/operations/dns/+/597514
[11:55:04] kormat: for a more general solution, there are multiple options (netbox custom scripts that run in the Netbox env, HTTP webhooks to push data to something zarcillo is listening to, Netbox plugins [overkill in this case], or integrating it into the reimage script once it's migrated to a cookbook)
[11:55:15] happy to chat about them when you want
[11:59:23] arturo: +1 from me
[12:04:04] volans: ack, thanks.
[12:06:06] kormat: then there is the puppet way... exported resources defined in puppet and zarcillo collecting them ;)
[12:07:14] arturo: great! I don't know enough about DNS but I can probably have a high-level look
[12:09:57] ACK
[14:21:22] <_joe_> can I say I find it *extremely* confusing that I have to merge the changes to labs/private and they get propagated to all the production puppetmasters?
[14:21:58] <_joe_> why do we even have it there besides the frontends
[14:30:24] _joe_: that's a good question, tbh i have not thought about it and now i wonder why it's on the production puppetmasters at all.
[14:30:55] i suspect because it was just easier to stick it everywhere when modifying puppet-merge
[14:31:49] <_joe_> yeah to be clear, I don't like that we have to run puppet-merge to merge things that don't go in production
[14:32:09] <_joe_> we circle back to the separation of concerns between cloud and prod puppet-wise
[14:32:33] i agree, and it complicates puppet-merge (current bug: https://phabricator.wikimedia.org/T251104)
[14:33:01] but is there any reason for the labs/private repo to exist on the production puppetmasters?
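On the 09:14 question about querying Netbox from the command line: a minimal sketch of using the pynetbox library mentioned at 11:51 from a throwaway script. The API URL is the instance linked above; the token, device name and role slug are placeholders, and in the WMF setup the same API object is also reachable through the spicerack.netbox integration linked above.

```python
#!/usr/bin/env python3
# Minimal sketch: query Netbox ad hoc with pynetbox.
# Token, device name and role slug below are placeholders.
import pynetbox

nb = pynetbox.api("https://netbox.wikimedia.org", token="XXXX-placeholder")

# Fetch a single device and print a few fields.
device = nb.dcim.devices.get(name="db1001")  # hypothetical host name
if device is not None:
    print(device.name, device.site.slug, device.status)

# Or iterate over devices matching a filter.
for dev in nb.dcim.devices.filter(role="database", status="active"):
    print(dev.name)
```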
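And on the 09:27 aside about spoofing the source address of DNS queries in Python: this is only an illustrative sketch, not the code from the linked dnsreader project. It assumes scapy is available, needs root to emit raw packets, and the addresses are documentation placeholders; networks enforcing BCP38 will drop such packets on egress, which is what makes it usable as a BCP38 test.

```python
#!/usr/bin/env python3
# Illustrative sketch only (not the dnsreader code): craft a DNS query whose
# IP source address is spoofed, using scapy. Run as root; addresses are
# placeholders from the documentation ranges.
from scapy.all import DNS, DNSQR, IP, UDP, send

SPOOFED_SRC = "192.0.2.10"   # address we pretend to be
RESOLVER = "198.51.100.53"   # resolver / host under test

query = (
    IP(src=SPOOFED_SRC, dst=RESOLVER)
    / UDP(sport=33333, dport=53)
    / DNS(rd=1, qd=DNSQR(qname="example.org"))
)
send(query, verbose=False)   # any reply goes to SPOOFED_SRC, not to us
```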
[14:33:55] <_joe_> jbond42: tangential reasons
[14:34:41] it feels like it may be left over from when we pushed to the labs puppet masters; as we no longer do that, i'm wondering if it just needs cleaning up
[14:35:02] <_joe_> no it doesn't
[18:04:27] I have a vague recollection of a move from trusting gerrit to determine the latest version of the puppet repo, to trusting a puppetmaster
[18:04:58] is it possible the prod puppetmaster was used for getting the correct status of labs/private.git or something? seems weird nonetheless, can't think of any reason prod would need that repo
[19:34:20] Krenair: that's correct, puppet-merge now writes the sha1 of the commit running on production. The labs environment then periodically polls to determine what to fetch for its own environment; i think previously it just pulled HEAD (could be wrong). The labs/private repo piggybacks on this process. however, as the production environment doesn't use the labs/private repo, it seems to me that we could
[19:34:26] disentangle this and have a puppet-merge script ...
[19:34:29] ... running on the cloud puppetmasters, but i guess i'm still missing something
[20:42:21] jbond42: I honestly don't remember the details, but I think I remember that andrewbogott had some part in the addition of labs/private.git to the puppet-merge process. He is certainly the WMCS team's puppetmaster SME
[20:44:30] part of the reason to have it in one script is "otherwise no one ever does it"
[20:45:59] jbond42: it could definitely be two different processes. That seems worse to me than the status quo, but I don't feel all that strongly.
[21:18:30] andrewbogott: bd808: thanks. i think one of the issues is that adding the labs/private repo to the puppet-merge script adds more complexity. further, i think a lot of people who run puppet-merge on the production puppet masters don't necessarily know whether to merge a change in the labs/private repo (which leads to https://phabricator.wikimedia.org/T251104). so "otherwise no one ever does it" may
[21:18:36] not be that effective. As such i think ...
[21:18:39] ... there is an argument to have it as a separate process with its own monitoring, so unmerged changes get merged quickly by someone who knows how to review them. the only disadvantage i see to this is that when you merge to both ops/puppet and labs/private you now need to run two commands instead of one. this is not a task i perform often, but i imagine that's quite different for the cloud admin team, so
[21:18:45] it's hard to gauge the inconvenience?
[21:24:10] most of the values in labs/private.git are placeholder values to make the manifests compile. There are very few "real" values in there that I can think of beyond ssh public keys for cloud-wide roots and a small number of passwords which are not treated as secrets in the Cloud VPS environment.
[21:24:52] Relatively many people outside of the wmcs team also modify labs/private (because it's consumed by the PCC). So it would be training those folks to do the extra step that is the hard part. If it requires logging onto a different puppetmaster to merge, that's yet more to remember.
[21:25:18] I believe that my original version of this had a --private switch on puppet-merge
[21:25:26] and that someone other than me removed that? But maybe I'm remembering wrong
[21:25:30] * andrewbogott looks at history
[21:25:37] if someone finds a time machine, a small side trip to have the repo named labs/not-private.git would be awesome.
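A rough sketch of the consuming side of the mechanism described at 19:34: production puppet-merge publishes the sha1 it merged, and the cloud puppetmasters periodically poll that value and move their checkout to exactly that commit rather than blindly pulling HEAD. The URL and repo path here are invented for illustration, not the real WMF ones.

```python
#!/usr/bin/env python3
# Sketch of the polling model: read the sha1 that the production puppet-merge
# published, then hard-reset the local clone to that commit.
# SHA1_URL and REPO_DIR are invented placeholders.
import subprocess
import urllib.request

SHA1_URL = "https://puppetmaster.example.org/labsprivate-sha1.txt"  # placeholder
REPO_DIR = "/srv/git/labs/private"                                  # placeholder

def git(*args):
    subprocess.run(["git", *args], cwd=REPO_DIR, check=True)

target = urllib.request.urlopen(SHA1_URL).read().decode().strip()
git("fetch", "origin")
git("reset", "--hard", target)  # only move to the commit production has vetted
print(f"labs/private now at {target}")
```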
[21:26:57] hm
[21:27:12] what is the relationship between modules/puppetmaster/templates/puppet-merge.erb and modules/puppetmaster/files/puppet-merge.py ?
[21:31:16] bd808: lol andrewbogott (high level) the python script does most of the work but is designed to work on a single host. the bash script takes care of locking and running the python script on all nodes
[21:31:31] makes sense
[21:32:40] i also know vola.ns is working on a more generic `git repo merge` type function which is also feeding into my thinking here re: separating
[21:35:20] also, i hear the issues regarding people not merging when using pcc; however, the people submitting the change probably did so because pcc failed with a message saying can't find key xxx:yyy, so one would think they would merge to fix their pcc test
[21:35:42] * jbond42 not saying it doesn't happen, just curious
[21:38:58] I'm not sure, I just had lots of experience finding a big backlog there when it was a separate command
[21:39:20] it's possible that would've gone away on its own as folks practiced
[21:39:51] I don't object to you rearranging the workflow, I just fear that it'll be a lot of work and result in a new workflow that is exactly as confusing as the current one :)
[21:42:11] bd808: in response to the "most of the values in labs/private.git are placeholder": when thinking about this earlier i was thinking that you could just pull HEAD for this repo like you used to; it's not as much of a threat as the ops/puppet repo. however a malicious user could manipulate one of the modules in that repo to add some evil functions, so i'm not sure that's a good idea. however [it's late, this
[21:42:17] could be a bad idea] we could potentially ...
[21:42:19] ... have a script that automatically merges HEAD unless there are changes to modules, or to be more specific modules/**/*.{pp,rb}. this would allow hiera changes to get automatically merged, and changes that could cause a breach (which happen much less often) would need manual intervention
[21:44:16] andrewbogott: agree with your worries, and i'm not planning on making changes just yet :), but long term i think it would be nice to split cloud and production
[21:44:30] * jbond42 personal aspiration
[21:45:01] while maintaining DRY (of course) which is the hard part
[21:45:32] I was going to reference our last major discussion of that goal in Dublin ;)
[21:45:57] jbond42: it used to just pull HEAD; I added the merge step because there was a compromise with malicious patches merged into the repo. Fortunately they didn't actually take effect anywhere.
[21:48:19] andrewbogott: yes, i remember, but that issue affected the production repo. similar issues exist in the labs/private repo but the attack surface could be reduced. i'm not sure what damage a malicious user could do by updating the yaml values, and the hieradata dir is the one that changes the most and the one that pcc users are most concerned with, so if we could auto-merge them it may solve 95% of the
[21:48:25] issue
[21:49:06] I would not want to be in charge of demonstrating that a cloud VM can't be compromised via changes to hiera :)
[21:51:17] yes valid point
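To make the 21:42 idea concrete, here is a hypothetical sketch of such a guard (not an existing script; the repo path and branch name are assumptions): it fast-forwards labs/private only when the incoming commits touch nothing matching modules/**/*.{pp,rb}, so hieradata-only changes merge themselves while anything that could execute code waits for a human. Whether that boundary is actually safe is the question raised at 21:49.

```python
#!/usr/bin/env python3
# Hypothetical sketch of the "auto-merge unless code changed" idea:
# fast-forward only when no puppet/ruby code under modules/ is touched.
# REPO_DIR and REMOTE_BRANCH are assumptions for illustration.
import subprocess
import sys

REPO_DIR = "/srv/git/labs/private"  # placeholder path
REMOTE_BRANCH = "origin/master"     # placeholder branch name

def git(*args):
    return subprocess.run(
        ["git", *args], cwd=REPO_DIR, check=True,
        capture_output=True, text=True,
    ).stdout

git("fetch", "origin")
changed = git("diff", "--name-only", f"HEAD..{REMOTE_BRANCH}").splitlines()

# Puppet or ruby code under modules/ blocks the auto-merge;
# hieradata-only changes sail through.
blocked = [
    path for path in changed
    if path.startswith("modules/") and path.endswith((".pp", ".rb"))
]

if blocked:
    print("manual review needed for:")
    print("\n".join(f"  {path}" for path in blocked))
    sys.exit(1)

git("merge", "--ff-only", REMOTE_BRANCH)
print(f"auto-merged {len(changed)} changed file(s)")
```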