[01:05:25] Gerrit: gerrit sometimes giving "internal server error: Error inserting change/patchset" - https://phabricator.wikimedia.org/T205503 (Bawolff) [07:22:34] Gerrit: gerrit sometimes giving "internal server error: Error inserting change/patchset" - https://phabricator.wikimedia.org/T205503 (hashar) Extracts from Gerrit logs The first connection: ``` name=sshd_log [2018-09-26 01:00:36,733 +0000] 6003b0c3 bawolff a/99 LOGIN FROM XXXXX [2018-09-26 01:00:41,701 +000... [07:34:53] Gerrit: gerrit sometimes giving "internal server error: Error inserting change/patchset" - https://phabricator.wikimedia.org/T205503 (hashar) [07:35:55] Gerrit: gerrit sometimes giving "internal server error: Error inserting change/patchset" - https://phabricator.wikimedia.org/T205503 (hashar) TLDR: I have no idea what has happened. Maybe delete the change and upload it again? [08:06:23] (PS4) Thiemo Kreuz (WMDE): Update message to talk about "top level" instead of "file comment" [tools/codesniffer] - https://gerrit.wikimedia.org/r/438031 [08:14:16] Project beta-scap-eqiad build #223090: ABORTED in 5 min 24 sec: https://integration.wikimedia.org/ci/job/beta-scap-eqiad/223090/ [08:14:38] !log Restarting CI Jenkins on contint1001 [08:14:46] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [08:23:44] Yippee, build fixed! [08:23:45] Project beta-code-update-eqiad build #220332: FIXED in 43 sec: https://integration.wikimedia.org/ci/job/beta-code-update-eqiad/220332/ [08:24:18] !log Restarting CI Jenkins on contint1001 [#2] [08:24:22] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [08:44:27] Continuous-Integration-Infrastructure, Release-Engineering-Team (Kanban), Release Pipeline: Migrate m4executor CI nodes to bigram instances - https://phabricator.wikimedia.org/T205362 (hashar) Thank you Dan for having dug into all those metrics.
[08:55:52] Continuous-Integration-Config, Growth-Team, MediaWiki-extensions-CentralAuth, Notifications, MW-1.29-release: Echo tests on REL1_29 fail due to CentralAuth - https://phabricator.wikimedia.org/T202667 (Aklapper) 1.29 is not supported anymore. Does this still happen in other supported versions... [08:55:54] Continuous-Integration-Config, Growth-Team, MediaWiki-extensions-CentralAuth, Thanks: Thanks REL1_29 failing due to CentralAuth tables being missing - https://phabricator.wikimedia.org/T202670 (Aklapper) 1.29 is not supported anymore. Does this still happen in other supported versions or shall th... [09:42:29] (PS1) Hashar: Migrate ArticlePlaceholder to Quibble [integration/config] - https://gerrit.wikimedia.org/r/462898 (https://phabricator.wikimedia.org/T180171) [09:49:24] (PS1) Hashar: Remove mwext-qunit-composer-jessie [integration/config] - https://gerrit.wikimedia.org/r/462901 (https://phabricator.wikimedia.org/T183512) [09:49:26] (PS1) Hashar: Phase out mwext-testextension* [integration/config] - https://gerrit.wikimedia.org/r/462902 (https://phabricator.wikimedia.org/T183512) [09:51:57] (PS2) Hashar: Phase out mwext-testextension* [integration/config] - https://gerrit.wikimedia.org/r/462902 (https://phabricator.wikimedia.org/T183512) [09:51:59] (CR) jerkins-bot: [V: -1] Phase out mwext-testextension* [integration/config] - https://gerrit.wikimedia.org/r/462902 (https://phabricator.wikimedia.org/T183512) (owner: Hashar) [09:54:20] (CR) jerkins-bot: [V: -1] Phase out mwext-testextension* [integration/config] - https://gerrit.wikimedia.org/r/462902 (https://phabricator.wikimedia.org/T183512) (owner: Hashar) [10:30:08] maintenance-disconnect-full-disks build 6023 integration-slave-docker-1034 (/srv: 95%): OFFLINE due to disk space [10:35:08] maintenance-disconnect-full-disks build 6024 integration-slave-docker-1034: OFFLINE due to disk space [10:40:06]
maintenance-disconnect-full-disks build 6025 integration-slave-docker-1034: OFFLINE due to disk space [10:41:02] Project-Admins: Create a component project for the EditAccount extension - https://phabricator.wikimedia.org/T205293 (planetenxin) @Aklapper no this was not yet communicated to the maintainers. I want to report issues and contribute. @CCicalese_WMF recommended to request a component project to have things ta... [10:45:07] maintenance-disconnect-full-disks build 6026 integration-slave-docker-1034: OFFLINE due to disk space [10:50:06] maintenance-disconnect-full-disks build 6027 integration-slave-docker-1034: OFFLINE due to disk space [10:55:06] maintenance-disconnect-full-disks build 6028 integration-slave-docker-1034: OFFLINE due to disk space [11:00:06] maintenance-disconnect-full-disks build 6029 integration-slave-docker-1034: OFFLINE due to disk space [11:05:08] maintenance-disconnect-full-disks build 6030 integration-slave-docker-1034: OFFLINE due to disk space [11:10:06] maintenance-disconnect-full-disks build 6031 integration-slave-docker-1034: OFFLINE due to disk space [11:15:06] maintenance-disconnect-full-disks build 6032 integration-slave-docker-1034: OFFLINE due to disk space [11:20:00] PROBLEM - Free space - all mounts on integration-slave-docker-1033 is CRITICAL: CRITICAL: integration.integration-slave-docker-1033.diskspace._srv.byte_percentfree (<30.00%) [11:20:07] maintenance-disconnect-full-disks build 6033 integration-slave-docker-1033 (/srv: 96%): OFFLINE due to disk space [11:20:07] maintenance-disconnect-full-disks build 6033 integration-slave-docker-1034: OFFLINE due to disk space [11:25:07] maintenance-disconnect-full-disks build 6034 integration-slave-docker-1033: OFFLINE due to disk space [11:25:07] maintenance-disconnect-full-disks build 6034 integration-slave-docker-1034: OFFLINE due to disk space [11:29:01] Project-Admins, wikiba.se: permit a phabricator page for the FactGrid project -
https://phabricator.wikimedia.org/T193071 (Olaf_Simons) The following points are of concern at https://database.factgrid.de/wiki/Main_Page a wikibase instance at the University of Erfurt (and a joint venture with Wikimedia.... [11:30:14] maintenance-disconnect-full-disks build 6035 integration-slave-docker-1033: OFFLINE due to disk space [11:30:14] maintenance-disconnect-full-disks build 6035 integration-slave-docker-1034: OFFLINE due to disk space [11:35:06] maintenance-disconnect-full-disks build 6036 integration-slave-docker-1033: OFFLINE due to disk space [11:35:07] maintenance-disconnect-full-disks build 6036 integration-slave-docker-1034: OFFLINE due to disk space [11:40:07] maintenance-disconnect-full-disks build 6037 integration-slave-docker-1033: OFFLINE due to disk space [11:40:07] maintenance-disconnect-full-disks build 6037 integration-slave-docker-1034: OFFLINE due to disk space [11:45:07] maintenance-disconnect-full-disks build 6038 integration-slave-docker-1033: OFFLINE due to disk space [11:45:07] maintenance-disconnect-full-disks build 6038 integration-slave-docker-1034: OFFLINE due to disk space [11:50:09] maintenance-disconnect-full-disks build 6039 integration-slave-docker-1033: OFFLINE due to disk space [11:50:10] maintenance-disconnect-full-disks build 6039 integration-slave-docker-1034: OFFLINE due to disk space [11:55:06] maintenance-disconnect-full-disks build 6040 integration-slave-docker-1033: OFFLINE due to disk space [11:55:07] maintenance-disconnect-full-disks build 6040 integration-slave-docker-1034: OFFLINE due to disk space [11:57:08] !log [[gerrit:462927]] (ores) is going to beta [11:57:12] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [12:00:06] maintenance-disconnect-full-disks build 6041 integration-slave-docker-1033: OFFLINE due to disk space [12:00:07] maintenance-disconnect-full-disks build 6041 integration-slave-docker-1034: OFFLINE due to disk space [12:05:07]
maintenance-disconnect-full-disks build 6042 integration-slave-docker-1033: OFFLINE due to disk space [12:05:08] maintenance-disconnect-full-disks build 6042 integration-slave-docker-1034: OFFLINE due to disk space [12:10:06] maintenance-disconnect-full-disks build 6043 integration-slave-docker-1033: OFFLINE due to disk space [12:10:07] maintenance-disconnect-full-disks build 6043 integration-slave-docker-1034: OFFLINE due to disk space [12:15:07] maintenance-disconnect-full-disks build 6044 integration-slave-docker-1033: OFFLINE due to disk space [12:15:08] maintenance-disconnect-full-disks build 6044 integration-slave-docker-1034: OFFLINE due to disk space [12:20:06] maintenance-disconnect-full-disks build 6045 integration-slave-docker-1033: OFFLINE due to disk space [12:20:06] maintenance-disconnect-full-disks build 6045 integration-slave-docker-1034: OFFLINE due to disk space [12:25:06] maintenance-disconnect-full-disks build 6046 integration-slave-docker-1033: OFFLINE due to disk space [12:25:06] maintenance-disconnect-full-disks build 6046 integration-slave-docker-1034: OFFLINE due to disk space [12:30:06] maintenance-disconnect-full-disks build 6047 integration-slave-docker-1033: OFFLINE due to disk space [12:30:06] maintenance-disconnect-full-disks build 6047 integration-slave-docker-1034: OFFLINE due to disk space [12:35:08] maintenance-disconnect-full-disks build 6048 integration-slave-docker-1033: OFFLINE due to disk space [12:35:08] maintenance-disconnect-full-disks build 6048 integration-slave-docker-1034: OFFLINE due to disk space [12:38:08] Beta-Cluster-Infrastructure, Analytics-Kanban, Operations, Patch-For-Review, and 2 others: Prometheus resources in deployment-prep to create grafana graphs of EventLogging - https://phabricator.wikimedia.org/T204088 (fgiunchedi) >>! In T204088#4616379, @Ottomata wrote: > BTW, I updated https://wi...
[12:40:06] maintenance-disconnect-full-disks build 6049 integration-slave-docker-1033: OFFLINE due to disk space [12:40:06] maintenance-disconnect-full-disks build 6049 integration-slave-docker-1034: OFFLINE due to disk space [12:45:06] maintenance-disconnect-full-disks build 6050 integration-slave-docker-1033: OFFLINE due to disk space [12:45:06] maintenance-disconnect-full-disks build 6050 integration-slave-docker-1034: OFFLINE due to disk space [12:50:06] maintenance-disconnect-full-disks build 6051 integration-slave-docker-1033: OFFLINE due to disk space [12:50:06] maintenance-disconnect-full-disks build 6051 integration-slave-docker-1034: OFFLINE due to disk space [12:55:06] maintenance-disconnect-full-disks build 6052 integration-slave-docker-1033: OFFLINE due to disk space [12:55:07] maintenance-disconnect-full-disks build 6052 integration-slave-docker-1034: OFFLINE due to disk space [12:55:44] Project-Admins: Create a component project for the EditAccount extension - https://phabricator.wikimedia.org/T205293 (CCicalese_WMF) It is, indeed, best to check with the maintainer of the extension first to be sure they want to use Phabricator to receive issue reports. I should have included that step in my...
[13:00:07] maintenance-disconnect-full-disks build 6053 integration-slave-docker-1033: OFFLINE due to disk space [13:00:08] maintenance-disconnect-full-disks build 6053 integration-slave-docker-1034: OFFLINE due to disk space [13:04:01] beta cluster nodes should start with two zeroes instead of one, I would love to loging to deployment-ores007 [13:05:08] maintenance-disconnect-full-disks build 6054 integration-slave-docker-1033: OFFLINE due to disk space [13:05:08] maintenance-disconnect-full-disks build 6054 integration-slave-docker-1034: OFFLINE due to disk space [13:08:18] Amir1: "oops, I typoed" [13:08:20] * Reedy grins [13:09:08] :P [13:10:06] maintenance-disconnect-full-disks build 6055 integration-slave-docker-1033: OFFLINE due to disk space [13:10:06] maintenance-disconnect-full-disks build 6055 integration-slave-docker-1034: OFFLINE due to disk space [13:15:07] maintenance-disconnect-full-disks build 6056 integration-slave-docker-1033: OFFLINE due to disk space [13:15:07] maintenance-disconnect-full-disks build 6056 integration-slave-docker-1034: OFFLINE due to disk space [13:17:10] lol [13:20:07] maintenance-disconnect-full-disks build 6057 integration-slave-docker-1033: OFFLINE due to disk space [13:20:08] maintenance-disconnect-full-disks build 6057 integration-slave-docker-1034: OFFLINE due to disk space [13:24:49] Beta-Cluster-Infrastructure, Analytics-Kanban, Operations, Patch-For-Review, and 2 others: Prometheus resources in deployment-prep to create grafana graphs of EventLogging - https://phabricator.wikimedia.org/T204088 (Ottomata) I didn't realize that either! Documenting.
[13:25:06] maintenance-disconnect-full-disks build 6058 integration-slave-docker-1033: OFFLINE due to disk space [13:25:06] maintenance-disconnect-full-disks build 6058 integration-slave-docker-1034: OFFLINE due to disk space [13:30:07] maintenance-disconnect-full-disks build 6059 integration-slave-docker-1033: OFFLINE due to disk space [13:30:07] maintenance-disconnect-full-disks build 6059 integration-slave-docker-1034: OFFLINE due to disk space [13:35:07] maintenance-disconnect-full-disks build 6060 integration-slave-docker-1033: OFFLINE due to disk space [13:35:08] maintenance-disconnect-full-disks build 6060 integration-slave-docker-1034: OFFLINE due to disk space [13:40:06] maintenance-disconnect-full-disks build 6061 integration-slave-docker-1033: OFFLINE due to disk space [13:40:06] maintenance-disconnect-full-disks build 6061 integration-slave-docker-1034: OFFLINE due to disk space [13:45:06] maintenance-disconnect-full-disks build 6062 integration-slave-docker-1033: OFFLINE due to disk space [13:45:06] maintenance-disconnect-full-disks build 6062 integration-slave-docker-1034: OFFLINE due to disk space [13:50:07] maintenance-disconnect-full-disks build 6063 integration-slave-docker-1033: OFFLINE due to disk space [13:50:07] maintenance-disconnect-full-disks build 6063 integration-slave-docker-1034: OFFLINE due to disk space [13:52:55] hasharAway: ^^ [13:55:06] maintenance-disconnect-full-disks build 6064 integration-slave-docker-1033: OFFLINE due to disk space [13:55:06] maintenance-disconnect-full-disks build 6064 integration-slave-docker-1034: OFFLINE due to disk space [14:00:06] maintenance-disconnect-full-disks build 6065 integration-slave-docker-1033: OFFLINE due to disk space [14:00:06] maintenance-disconnect-full-disks build 6065 integration-slave-docker-1034: OFFLINE due to disk space [14:05:09] maintenance-disconnect-full-disks build 6066 integration-slave-docker-1033: OFFLINE due to disk space [14:05:09] maintenance-disconnect-full-disks build 
6066 integration-slave-docker-1034: OFFLINE due to disk space [14:10:07] maintenance-disconnect-full-disks build 6067 integration-slave-docker-1033: OFFLINE due to disk space [14:10:07] maintenance-disconnect-full-disks build 6067 integration-slave-docker-1034: OFFLINE due to disk space [14:15:06] maintenance-disconnect-full-disks build 6068 integration-slave-docker-1033: OFFLINE due to disk space [14:15:06] maintenance-disconnect-full-disks build 6068 integration-slave-docker-1034: OFFLINE due to disk space [14:16:55] RECOVERY - Puppet errors on integration-slave-jessie-1003 is OK: OK: Less than 1.00% above the threshold [0.0] [14:20:07] maintenance-disconnect-full-disks build 6069 integration-slave-docker-1033: OFFLINE due to disk space [14:20:07] maintenance-disconnect-full-disks build 6069 integration-slave-docker-1034: OFFLINE due to disk space [14:25:07] maintenance-disconnect-full-disks build 6070 integration-slave-docker-1033: OFFLINE due to disk space [14:25:07] maintenance-disconnect-full-disks build 6070 integration-slave-docker-1034: OFFLINE due to disk space [14:30:06] maintenance-disconnect-full-disks build 6071 integration-slave-docker-1033: OFFLINE due to disk space [14:30:07] maintenance-disconnect-full-disks build 6071 integration-slave-docker-1034: OFFLINE due to disk space [14:35:07] maintenance-disconnect-full-disks build 6072 integration-slave-docker-1033: OFFLINE due to disk space [14:35:08] maintenance-disconnect-full-disks build 6072 integration-slave-docker-1034: OFFLINE due to disk space [14:40:06] maintenance-disconnect-full-disks build 6073 integration-slave-docker-1033: OFFLINE due to disk space [14:40:07] maintenance-disconnect-full-disks build 6073 integration-slave-docker-1034: OFFLINE due to disk space [14:42:18] uhhhhh [14:45:08] maintenance-disconnect-full-disks build 6074 integration-slave-docker-1033: OFFLINE due to disk space [14:45:08] maintenance-disconnect-full-disks build 6074 integration-slave-docker-1034: OFFLINE due to 
disk space [14:45:49] spammy [14:46:34] Release-Engineering-Team (Kanban), Release, Train Deployments, User-zeljkofilipin: 1.32.0-wmf.23 deployment blockers - https://phabricator.wikimedia.org/T191069 (Reedy) [14:47:41] !log investigating integration-slave-docker-103{3,4} [14:47:45] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [14:48:23] Release-Engineering-Team (Kanban), Release, Train Deployments, User-zeljkofilipin: 1.32.0-wmf.23 deployment blockers - https://phabricator.wikimedia.org/T191069 (Reedy) [14:55:08] Release-Engineering-Team (Kanban), Release, Train Deployments, User-zeljkofilipin: 1.32.0-wmf.23 deployment blockers - https://phabricator.wikimedia.org/T191069 (matmarex) [14:57:13] o/ [14:57:51] Folks, does anyone know if eventstreams is set up for the beta cluster? I had someone ask and I haven't been able to find an answer on the wiki pages or phab tickets that I've looked at. [15:00:28] https://stream.beta.wmflabs.org/ would suggest not [15:02:01] Yeah I figured that would be the URL [15:02:27] just seeing if horizon would give any clues [15:02:33] Godspeed [15:02:43] keep loading loading loading loading [15:03:23] marktraceur: Looks like a no [15:03:30] OK, thanks [15:03:43] But kafka is there..
[15:03:55] So I wonder if it's just provisioning a vm to do the frontend stuff [15:04:07] do the hiera config and apply profile::eventstreams [15:05:17] !log integration-slave-docker-1033:sudo rm -rf /srv/jenkins-workspace/workspace/* and bring back online [15:05:20] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [15:13:22] !log integration-slave-docker-1034:sudo rm -rf /srv/jenkins-workspace/workspace/* and bring back online -- https://phabricator.wikimedia.org/P7592 [15:13:25] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL [15:19:17] (PS1) Thcipriani: maintenance-full-disks: IRC alert every 5th build [integration/config] - https://gerrit.wikimedia.org/r/463095 [15:36:03] Project-Admins: Create a component project for the EditAccount extension - https://phabricator.wikimedia.org/T205293 (Aklapper) I'm fine making the maintainers members of the Phab project to be created, it's just that I'd like to see a "Yes" from a maintainer as there //might be// already a place where maintaine... [15:50:13] (PS2) Thcipriani: maintenance-full-disks: IRC alert every 5th build [integration/config] - https://gerrit.wikimedia.org/r/463095 [15:50:19] Release-Engineering-Team (Kanban), Release, Train Deployments, User-zeljkofilipin: 1.32.0-wmf.23 deployment blockers - https://phabricator.wikimedia.org/T191069 (pmiazga) [15:51:50] Release-Engineering-Team, Wikimedia-Incident, Wikimedia-production-error: Deployments of MediaWiki with scap cause a spam of "web request took longer than 60 seconds and timed out" - https://phabricator.wikimedia.org/T204871 (Jdlrobson) > Maybe we always had the issue and it is now showing up due to...
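The maintenance-full-disks patch above (Gerrit change 463095) throttles the disk-space IRC notifications to every 5th build of the monitoring job. A minimal sketch of that kind of throttling, assuming the job sees a monotonically increasing Jenkins build number (the function and variable names here are illustrative, not the actual job script):

```shell
# Hypothetical sketch of "IRC alert every 5th build"; this is not the
# real maintenance-disconnect-full-disks job code, just the modulo idea.
should_alert() {
  build="$1"          # Jenkins build number, e.g. 6065
  interval="${2:-5}"  # alert every Nth build, default 5
  # Succeed (exit 0) only when the build number is a multiple of the
  # interval, so a node that stays offline produces one IRC line per
  # 5 builds instead of one per build.
  [ $(( build % interval )) -eq 0 ]
}

if should_alert 6065; then
  echo "maintenance-disconnect-full-disks: OFFLINE due to disk space"
fi
```

With the job's roughly 5-minute cadence seen in the log, this would cut the channel spam from one line per node every 5 minutes to one every 25 minutes.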
[16:03:40] Release-Engineering-Team (Kanban), Release, Train Deployments, User-zeljkofilipin: 1.32.0-wmf.23 deployment blockers - https://phabricator.wikimedia.org/T191069 (zeljkofilipin) All blockers are removed or resolved, I'll move the train from group 0 to group 1 during morning SWAT (now). [16:04:38] (CR) Hashar: [C: 2] Migrate ArticlePlaceholder to Quibble [integration/config] - https://gerrit.wikimedia.org/r/462898 (https://phabricator.wikimedia.org/T180171) (owner: Hashar) [16:05:15] (CR) Hashar: [C: 2] Remove mwext-qunit-composer-jessie [integration/config] - https://gerrit.wikimedia.org/r/462901 (https://phabricator.wikimedia.org/T183512) (owner: Hashar) [16:06:15] (Merged) jenkins-bot: Migrate ArticlePlaceholder to Quibble [integration/config] - https://gerrit.wikimedia.org/r/462898 (https://phabricator.wikimedia.org/T180171) (owner: Hashar) [16:07:04] (PS3) Hashar: Phase out mwext-testextension* [integration/config] - https://gerrit.wikimedia.org/r/462902 (https://phabricator.wikimedia.org/T183512) [16:07:25] (Merged) jenkins-bot: Remove mwext-qunit-composer-jessie [integration/config] - https://gerrit.wikimedia.org/r/462901 (https://phabricator.wikimedia.org/T183512) (owner: Hashar) [16:19:07] Release-Engineering-Team (Kanban), Release, Train Deployments, User-zeljkofilipin: 1.32.0-wmf.23 deployment blockers - https://phabricator.wikimedia.org/T191069 (Krinkle) [16:28:25] (CR) Thcipriani: [C: 2] "Deployed" [integration/config] - https://gerrit.wikimedia.org/r/463095 (owner: Thcipriani) [16:28:55] (Merged) jenkins-bot: maintenance-full-disks: IRC alert every 5th build [integration/config] - https://gerrit.wikimedia.org/r/463095 (owner: Thcipriani) [16:40:49] (CR) Hashar: [C: 2] Phase out mwext-testextension* [integration/config] - https://gerrit.wikimedia.org/r/462902 (https://phabricator.wikimedia.org/T183512) (owner: Hashar) [16:42:57] (Merged) jenkins-bot:
Phase out mwext-testextension* [integration/config] - https://gerrit.wikimedia.org/r/462902 (https://phabricator.wikimedia.org/T183512) (owner: Hashar) [16:44:05] Release-Engineering-Team (Watching / External), Scap, ORES, Operations, Scoring-platform-team (Current): ORES should use a git large file plugin for storing serialized binaries - https://phabricator.wikimedia.org/T171619 (Halfak) [16:57:49] Scap, Datacenter-Switchover-2018: Scap canary warning monitoring URL is hard-coded with eqiad servers, so isn't useful when codfw is primary - https://phabricator.wikimedia.org/T205559 (Jdforrester-WMF) [17:24:48] Release-Engineering-Team (Kanban), Release, Train Deployments, User-zeljkofilipin: 1.32.0-wmf.23 deployment blockers - https://phabricator.wikimedia.org/T191069 (Jdforrester-WMF) [18:27:23] PROBLEM - SSH on integration-slave-docker-1021 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:30:06] RelEng-Archive-FY201718-Q2, Scap (Tech Debt Sprint FY201718-Q2), Scoring-platform-team: Need to make the number of cached revisions configurable - https://phabricator.wikimedia.org/T181176 (awight) [18:47:14] RECOVERY - SSH on integration-slave-docker-1021 is OK: SSH OK - OpenSSH_6.7p1 Debian-5+deb8u7 (protocol 2.0) [18:51:23] MediaWiki-Releasing, GitHub-Mirrors: Latest mediawiki release git tags did not sync to github - https://phabricator.wikimedia.org/T205568 (Legoktm) p:Triage>High [18:54:02] MediaWiki-Releasing, GitHub-Mirrors: Latest mediawiki release git tags did not sync to github - https://phabricator.wikimedia.org/T205568 (Reedy) [18:54:29] Continuous-Integration-Infrastructure, Release-Engineering-Team, MediaWiki-Core-Tests, Wikimedia-production-error (Shared Build Failure): Quibble CI jobs failing due to memory allocation - https://phabricator.wikimedia.org/T198432 (Krinkle) Open>Resolved a:dduvall Tentatively marking...
[18:55:05] MediaWiki-Releasing, GitHub-Mirrors: Latest mediawiki release git tags did not sync to github - https://phabricator.wikimedia.org/T205568 (Reedy) They made it to Phab... [18:56:01] Reedy: my understanding is that phab is pulling, while gerrit->gh is pushing [18:58:54] this is true [18:59:10] https://wikitech.wikimedia.org/wiki/Gerrit#Forcing_Replication_re-runs [19:01:56] thcipriani: hmm, commits appear to be replicating properly, but not these tags [19:02:19] lemme see if there's anything in the gerrit logs, which tag specifically? [19:02:20] the newer branches have replicated fine [19:02:22] anything in the replication log? [19:02:32] 1.31.1, 1.30.1, 1.29.?, 1.27.? [19:02:32] thcipriani the log is now at /var/log/gerrit :) [19:02:51] 1.31.1, 1.30.1, 1.29.3, 1.27.5 [19:05:32] hrm, is bast4001 having issues? I can't seem to ssh into anything in prod [19:05:45] ulsfo is down for server physical moves [19:06:02] use 1001 or 2001 [19:08:50] there we go, thanks [19:09:29] no_justification we are going to have to rename the top menu "documentation" to something like wmf documentation seeing as pg does not merge it now :) [19:09:34] so it shows twice for me. [19:13:06] well, on the 20th I see a message that 1.31.1 was created [19:13:47] but nothing outside of that [19:14:01] * thcipriani tries re-running replication for mediawiki/core [19:16:36] > Replicate mediawiki/core ref ..all.. to github.com, Succeeded!
(OK) [19:17:06] nothing in the error log related to replication while I was doing that [19:18:23] Looks to have fixed it [19:18:24] https://github.com/wikimedia/mediawiki/tags [19:18:32] They're there and verified :) [19:18:48] cool [19:34:05] MediaWiki-Releasing, GitHub-Mirrors: Latest mediawiki release git tags did not sync to github - https://phabricator.wikimedia.org/T205568 (Reedy) Open>Resolved a:thcipriani [20:32:19] Continuous-Integration-Infrastructure: Add jenkins syntax verification on operations/dns - https://phabricator.wikimedia.org/T205579 (jijiki) [20:32:45] Continuous-Integration-Infrastructure, User-jijiki: Add jenkins syntax verification on operations/dns - https://phabricator.wikimedia.org/T205579 (jijiki) [20:43:04] Release-Engineering-Team (Kanban), Education-Program-Dashboard, MediaWiki-extensions-EducationProgram, Epic, and 2 others: Deprecate and remove the EducationProgram extension from Wikimedia servers after June 30, 2018 - https://phabricator.wikimedia.org/T125618 (Iniquity) @Jforrester , hello:) Ma... [21:58:55] PROBLEM - Puppet errors on integration-slave-jessie-1003 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [22:33:55] RECOVERY - Puppet errors on integration-slave-jessie-1003 is OK: OK: Less than 1.00% above the threshold [0.0] [22:53:40] Release-Engineering-Team, MediaWiki-extensions-WikimediaIncubator, Epic, I18n: Make creating a new Language project easier - https://phabricator.wikimedia.org/T165585 (Liuxinyu970226)
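The GitHub tag-sync fix above amounted to re-running Gerrit replication for mediawiki/core, per the "Forcing Replication re-runs" wikitech page linked in the log. A hedged sketch of how such a re-run is typically triggered via the Gerrit replication plugin's SSH interface; the user, host, and port here are assumptions based on common Gerrit setups, not a confirmed record of the exact command used that day:

```shell
# Build (but deliberately do not execute) the replication-plugin
# invocation used to force Gerrit to re-push a project's refs to its
# mirrors, e.g. GitHub. All names are illustrative; actually running
# the printed command requires Gerrit admin SSH access.
replicate_cmd() {
  user="$1" host="$2" project="$3"
  # "replication start <project> --wait" asks the plugin to replicate
  # the project and block until it reports Succeeded/Failed, matching
  # the "Succeeded! (OK)" confirmation seen in the log above.
  printf 'ssh -p 29418 %s@%s replication start %s --wait\n' \
    "$user" "$host" "$project"
}

replicate_cmd admin gerrit.wikimedia.org mediawiki/core
```

Separating command construction from execution keeps the sketch runnable anywhere; in practice an operator would simply run the printed `ssh` line.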