[02:11:08] The fingerprint coming back from login.toolforge.org seems to have changed and it doesn't match what's on Wikitech.
[02:11:27] 49:5a:5b:37:c7:01:81:b3:da:41:ac:9b:b5:c2:6c:ab is what I'm getting
[02:14:49] Nvm.
[06:46:59] !log tools delete tools-sgebastion-10 T314665
[06:47:04] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[06:47:04] T314665: Toolforge: Replace all bastion with grid-less bookworm based bastion hosts - https://phabricator.wikimedia.org/T314665
[07:19:55] `#!/bin/bash`
[07:19:55] `# make-admin.sh`
[07:19:57] `# Usage: ./make-admin.sh user@example.com`
[07:19:58] `if [ -z "$1" ]; then`
[07:20:00] ` echo "Usage: ./make-admin.sh <email>"`
[07:20:01] ` echo "Example: ./make-admin.sh user@example.com"`
[07:20:03] ` exit 1`
[07:20:04] `fi`
[07:20:06] `EMAIL=$1`
[07:20:07] `ADMIN_SECRET=${ADMIN_SECRET:-"01991123-2906-7a1d-ad81-63f20cc51461"}`
[07:20:09] `BASE_URL=${BASE_URL:-"https://cibembe.vercel.app"}`
[07:20:10] `echo "Making user admin: $EMAIL"`
[07:20:12] `RESPONSE=$(curl -s -X POST "$BASE_URL/api/auth/make-admin" \`
[07:20:13] ` -H "Content-Type: application/json" \`
[07:20:15] ` -d "{`
[07:20:16] ` \"email\": \"$EMAIL\",`
[07:20:18] ` \"adminSecret\": \"$ADMIN_SECRET\"`
[07:20:19] ` }")`
[07:20:21] `echo "Response: $RESPONSE"`
[07:20:22] `# Check if the response contains a success message`
[07:20:24] `if echo "$RESPONSE" | grep -q "promoted to admin successfully"; then`
[07:20:25] ` echo "✅ User $EMAIL has been successfully promoted to admin!"`
[07:20:27] `else`
[07:20:28] ` echo "❌ Failed to promote user to admin. Check the response above for details."`
[07:20:30] `fi`
[07:20:32] change-role.sh
[09:26:49] tool wikihistory: kubectl get pods lists a line "job2-6cf4b7549c-9zkcs 1/1 Terminating 1 (6d23h ago) 9d" … is this a zombie job?
[09:33:16] wurgl: might be, we have been having issues with NFS locking processes within pods, let me check
[09:33:22] what tool is that?
[09:33:26] wikihistory xd
[09:35:30] yep, that worker had some issues, I'll reboot it, that will force it to recreate the pod
[09:36:23] I restarted it already, so kill it (if you can kill an already dead process :-)
[09:37:02] I'll have to reboot the worker to kill it... they are pretty resilient stuck processes
[09:52:15] all fine. Thx
[10:00:27] let us know if it happens again, if there are a few processes stuck we get an alert, but if it's only one we don't have an automated way (yet) to differentiate it from real load
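
The first exchange is about the SSH host key fingerprint of login.toolforge.org not matching what is published on Wikitech. A minimal sketch of how to reproduce that check locally, assuming the colon-separated value pasted above is an MD5-style fingerprint (hence the `-E md5` flag); comparing the output against the Wikitech-published keys is left to the reader:

```bash
#!/bin/bash
# Print the fingerprints of the host keys currently served by the bastion,
# so they can be compared against the ones published on Wikitech.
HOST=login.toolforge.org

# SHA256 fingerprints (the default in modern OpenSSH)
ssh-keyscan "$HOST" 2>/dev/null | ssh-keygen -l -f -

# MD5 colon-separated fingerprints, matching the 49:5a:5b:... format in the log
ssh-keyscan "$HOST" 2>/dev/null | ssh-keygen -l -E md5 -f -
```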
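
The pasted make-admin.sh decides success by grepping the raw response body for a fixed English string and ships a default admin secret in the script itself. A hedged alternative sketch follows; the `/api/auth/make-admin` endpoint and its request body come from the paste above, while the status-code handling and the assumption that the service returns JSON are guesses about how it behaves:

```bash
#!/bin/bash
# Sketch: same request as make-admin.sh, but require the secret from the
# environment, capture the HTTP status code, and pretty-print the body with jq.
EMAIL=$1
ADMIN_SECRET=${ADMIN_SECRET:?set ADMIN_SECRET in the environment}
BASE_URL=${BASE_URL:-"https://cibembe.vercel.app"}

# -w appends the status code on its own line after the body.
RESPONSE=$(curl -s -w '\n%{http_code}' -X POST "$BASE_URL/api/auth/make-admin" \
 -H "Content-Type: application/json" \
 -d "{\"email\": \"$EMAIL\", \"adminSecret\": \"$ADMIN_SECRET\"}")

STATUS=$(echo "$RESPONSE" | tail -n1)
BODY=$(echo "$RESPONSE" | sed '$d')

if [ "$STATUS" = "200" ]; then
 echo "✅ $EMAIL promoted:"
 echo "$BODY" | jq .
else
 echo "❌ Request failed with HTTP $STATUS:"
 echo "$BODY"
fi
```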
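
The wikihistory question concerns a pod stuck in Terminating for days. A small sketch of how a tool account could spot such pods and attempt a force delete from its own namespace; as the admins note above, this will not help when the container's processes are wedged on NFS, since only a reboot of the worker clears those:

```bash
#!/bin/bash
# List pods whose STATUS column (column 3 of the default output) is Terminating.
kubectl get pods --no-headers | awk '$3 == "Terminating"'

# Optionally remove the pod object without waiting for the kubelet.
# This only deletes the API object; processes stuck on the node survive it.
# Replace job2-6cf4b7549c-9zkcs with the pod name reported above.
kubectl delete pod job2-6cf4b7549c-9zkcs --grace-period=0 --force
```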