[12:14:02] something changed in the puppet git repo and now my local lint script is broken :-/
[12:14:38] https://www.irccloud.com/pastebin/at4hKvBO/
[12:15:01] we no longer have a puppet_lint task?
[12:15:08] https://www.irccloud.com/pastebin/X7I4rtAj/
[12:23:13] ^^^ never mind. My patch didn't update any puppet manifest, that's why the task was not present
[16:56:28] so https://phabricator.wikimedia.org/T226778 has a list of racks with no db masters in them
[16:56:33] so those are likely our priority this week
[16:57:01] well, i thought it did
[16:57:05] where did marostegui put that comment?
[16:57:24] oh, not on the master task
[16:57:37] https://phabricator.wikimedia.org/T227138#5354060
[16:57:49] so today, a4 and a5 if we can help it
[16:57:54] i added it to the master task via a quote
[16:58:13] then tomorrow a3 and a7
[16:58:19] or mix in row b
[17:00:19] herron: o/ - a4 (https://phabricator.wikimedia.org/T227140) contains kafka1001
[17:00:25] (if you want to depool it etc..)
[17:00:59] elukey: heya, ahh ok good to know
[17:02:22] scb1001 needs to be depooled as well
[17:02:26] since it runs a ton of services
[17:02:58] I see July 22-26, is there a more specific schedule for racks?
[17:03:41] Rob mentioned a4 and a5 today
[17:03:53] I think that a more precise schedule will be written in the master task
[17:04:18] I'd like to make sure that no outages happen because people didn't read the phab updates :D
[17:12:10] thcipriani: et al., we're suffering a hardware failure so I need to move a lot of deployment-prep again. I'm doing it all in one lump, right now, so it shouldn't be offline for all that long.
[17:14:32] ack
[17:20:39] elukey: ok I’ll depool kafka1001 now then
[17:22:22] I am logging off in a bit, thanks!
[17:23:17] robh: any news on the schedule part?
[17:24:29] elukey: what do you mean?
[17:24:37] i said today a4 and a5
[17:24:43] beyond that no schedule, i'm actively working on a4 now
[17:25:15] but if you know a good time to do a specific one of the remaining racks, let me know =]
[17:25:22] otherwise it's "pick another rack without a db master and do it"
[17:25:32] so... a7 likely after a5
[17:25:50] then b1, b4, b6, b7
[17:26:13] i think we can do 2, maybe 3 racks a day, but i'm not sure, this is our first rack
[17:26:14] =]
[17:26:28] robh: I thought that you needed people to depool and/or check services while you operate; a lot of hosts don't have an owner, like scb1001 in a4 (which runs several services)
[17:27:18] it is not clear what kind of help you need from us :D
[17:27:48] I thought the tasks in phab were meant to make people aware of the work and to trigger actions when needed (like depool/pool)
[17:28:07] if not, I completely misunderstood :)
[17:29:55] kafka1001 is depooled, fwiw. I set downtime for 24h, robh do you think it will need more than that?
[17:31:21] nope, that is more than enough
[17:31:44] we're in the process of depowering 1 of the 2 pdu sides
[17:32:17] sweet! ok
[17:35:56] depooled scb1001
[17:39:57] as an FYI, some .mgmt interfaces are down
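(For context on the depool steps above: pooled state at Wikimedia is managed through conftool. A minimal sketch of how a host like scb1001 might be depooled and repooled, assuming the standard confctl CLI; the exact selector fields depend on the local conftool schema, and running `sudo depool` on the host itself is the usual shortcut:)

```
# Depool all services on scb1001 ahead of the PDU work,
# assuming conftool's confctl CLI is available (selector
# fields follow the local conftool schema).
confctl select 'name=scb1001.eqiad.wmnet' set/pooled=no

# Verify the resulting state before the rack work starts.
confctl select 'name=scb1001.eqiad.wmnet' get

# Repool once the PDU swap is done and services check out.
confctl select 'name=scb1001.eqiad.wmnet' set/pooled=yes
```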
[17:40:49] anyone familiar with reprepro know what this error means: https://phabricator.wikimedia.org/P8780
[17:42:41] chaomodus: you need buster-wikimedia, not buster
[17:42:41] chaomodus: s/buster/buster-wikimedia :)
[17:44:59] ah hah! thank you :D
[17:46:40] you don't need to rebuild, BTW, you can also force the distribution
[17:47:07] ignore=wrongdistribution
[17:48:10] how do i pass that argument?
[17:48:59] * apergos peeks in
[17:49:54] --ignore=wrongdistribution
[17:50:00] on the reprepro command line
[17:50:02] tah
[18:08:41] I am logging off, please follow up on the rack/pdu work
[18:09:17] o/
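(To make the reprepro exchange above concrete: the wrongdistribution error fires when a .changes file targets a distribution, here buster, that the repository doesn't carry under that name, here buster-wikimedia. The flag goes directly on the reprepro invocation. A sketch, where the .changes file name is hypothetical:)

```
# Either rebuild the package targeting buster-wikimedia, or tell
# reprepro to ignore the mismatch and file the upload under the
# codename given on the command line. 'mypackage_1.0_amd64.changes'
# is a made-up name for illustration.
reprepro --ignore=wrongdistribution include buster-wikimedia mypackage_1.0_amd64.changes
```

(Changing the distribution in debian/changelog and rebuilding is the cleaner long-term fix; --ignore=wrongdistribution is the quick one-off, as noted in the chat.)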