[00:01:02] PROBLEM - Mediawiki Error Rate on graphite-labs is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [10.0]
[01:01:01] PROBLEM - Mediawiki Error Rate on graphite-labs is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [10.0]
[01:16:02] PROBLEM - Mediawiki Error Rate on graphite-labs is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [10.0]
[01:43:22] PROBLEM - Puppet errors on saucelabs-02 is CRITICAL: (Service Check Timed Out)
[01:43:22] PROBLEM - Puppet errors on deployment-elastic06 is CRITICAL: (Service Check Timed Out)
[01:43:22] PROBLEM - Free space - all mounts on deployment-prometheus02 is CRITICAL: (Service Check Timed Out)
[01:43:22] PROBLEM - Puppet errors on integration-slave-docker-1050 is CRITICAL: (Service Check Timed Out)
[01:43:22] PROBLEM - Free space - all mounts on deployment-deploy01 is CRITICAL: (Service Check Timed Out)
[01:44:59] PROBLEM - Free space - all mounts on deployment-aqs01 is CRITICAL: (Service Check Timed Out)
[01:44:59] PROBLEM - Puppet errors on integration-slave-jessie-1004 is CRITICAL: (Service Check Timed Out)
[01:44:59] PROBLEM - Puppet errors on deployment-deploy01 is CRITICAL: (Service Check Timed Out)
[01:44:59] PROBLEM - Puppet staleness on deployment-eventlog05 is CRITICAL: (Service Check Timed Out)
[01:44:59] PROBLEM - Puppet staleness on deployment-webperf12 is CRITICAL: (Service Check Timed Out)
[01:44:59] PROBLEM - Free space - all mounts on deployment-eventgate-1 is CRITICAL: (Service Check Timed Out)
[01:44:59] PROBLEM - Free space - all mounts on deployment-db06 is CRITICAL: (Service Check Timed Out)
[01:44:59] PROBLEM - Free space - all mounts on deployment-sca01 is CRITICAL: (Service Check Timed Out)
[01:44:59] PROBLEM - Puppet staleness on saucelabs-01 is CRITICAL: (Service Check Timed Out)
[01:45:23] PROBLEM - Free space - all mounts on deployment-ms-be06 is CRITICAL: (Service Check Timed Out)
[01:45:23] PROBLEM - Free space - all mounts on deployment-eventlog05 is CRITICAL: (Service Check Timed Out)
[01:45:23] PROBLEM - Free space - all mounts on deployment-puppetdb02 is CRITICAL: (Service Check Timed Out)
[01:45:23] PROBLEM - Puppet staleness on deployment-puppetdb02 is CRITICAL: (Service Check Timed Out)
[01:45:23] PROBLEM - Free space - all mounts on integration-slave-docker-1059 is CRITICAL: (Service Check Timed Out)
[01:45:23] PROBLEM - Puppet staleness on deployment-kafka-jumbo-1 is CRITICAL: (Service Check Timed Out)
[01:45:23] PROBLEM - Puppet errors on deployment-snapshot01 is CRITICAL: (Service Check Timed Out)
[01:45:23] PROBLEM - Puppet staleness on deployment-db05 is CRITICAL: (Service Check Timed Out)
[01:45:26] PROBLEM - Puppet errors on deployment-eventgate-1 is CRITICAL: (Service Check Timed Out)
[01:45:26] PROBLEM - Puppet staleness on deployment-restbase02 is CRITICAL: (Service Check Timed Out)
[01:45:26] PROBLEM - Puppet errors on integration-slave-docker-1048 is CRITICAL: (Service Check Timed Out)
[01:45:26] PROBLEM - Free space - all mounts on deployment-hadoop-test-1 is CRITICAL: (Service Check Timed Out)
[01:45:26] PROBLEM - Puppet staleness on deployment-urldownloader02 is CRITICAL: (Service Check Timed Out)
[01:45:27] PROBLEM - Free space - all mounts on integration-slave-docker-1040 is CRITICAL: (Service Check Timed Out)
[01:45:27] PROBLEM - Free space - all mounts on deployment-sessionstore02 is CRITICAL: (Service Check Timed Out)
[01:45:27] PROBLEM - Puppet errors on deployment-mwmaint01 is CRITICAL: (Service Check Timed Out)
[01:45:27] PROBLEM - Free space - all mounts on integration-slave-jessie-1002 is CRITICAL: (Service Check Timed Out)
[01:45:27] PROBLEM - Free space - all mounts on deployment-cumin02 is CRITICAL: (Service Check Timed Out)
[01:45:27] PROBLEM - Puppet errors on deployment-ms-be05 is CRITICAL: (Service Check Timed Out)
[01:45:27] PROBLEM - Free space - all mounts on deployment-mcs01 is CRITICAL: (Service Check Timed Out)
[01:45:27] PROBLEM - Puppet staleness on integration-slave-jessie-1001 is CRITICAL: (Service Check Timed Out)
[01:45:27] PROBLEM - Free space - all mounts on deployment-mediawiki-07 is CRITICAL: (Service Check Timed Out)
[01:45:27] PROBLEM - Puppet staleness on deployment-elastic07 is CRITICAL: (Service Check Timed Out)
[01:45:27] PROBLEM - Puppet staleness on integration-slave-docker-1052 is CRITICAL: (Service Check Timed Out)
[01:45:27] PROBLEM - Puppet errors on integration-agent-puppet-docker-1001 is CRITICAL: (Service Check Timed Out)
[01:46:17] PROBLEM - Free space - all mounts on integration-slave-docker-1048 is CRITICAL: (Service Check Timed Out)
[01:46:17] PROBLEM - Puppet staleness on deployment-db06 is CRITICAL: (Service Check Timed Out)
[01:46:17] PROBLEM - Puppet errors on integration-puppetmaster01 is CRITICAL: (Service Check Timed Out)
[01:46:17] PROBLEM - Puppet errors on integration-slave-docker-1051 is CRITICAL: (Service Check Timed Out)
[01:46:17] PROBLEM - Free space - all mounts on deployment-acme-chief04 is CRITICAL: (Service Check Timed Out)
[01:46:17] PROBLEM - Puppet errors on deployment-puppetmaster03 is CRITICAL: (Service Check Timed Out)
[01:46:17] PROBLEM - Puppet errors on deployment-logstash2 is CRITICAL: (Service Check Timed Out)
[01:46:17] PROBLEM - Puppet errors on deployment-cache-upload05 is CRITICAL: (Service Check Timed Out)
[01:46:17] PROBLEM - Puppet errors on deployment-restbase02 is CRITICAL: (Service Check Timed Out)
[01:46:17] PROBLEM - Free space - all mounts on integration-slave-docker-1054 is CRITICAL: (Service Check Timed Out)
[01:46:17] PROBLEM - Free space - all mounts on deployment-elastic05 is CRITICAL: (Service Check Timed Out)
[01:46:17] PROBLEM - Free space - all mounts on deployment-jobrunner03 is CRITICAL: (Service Check Timed Out)
[01:46:18] PROBLEM - Free space - all mounts on deployment-ms-be05 is CRITICAL: (Service Check Timed Out)
[01:46:18] PROBLEM - Puppet errors on deployment-sessionstore02 is CRITICAL: (Service Check Timed Out)
[01:46:18] PROBLEM - Puppet staleness on deployment-kafka-main-1 is CRITICAL: (Service Check Timed Out)
[01:46:18] PROBLEM - Free space - all mounts on deployment-cpjobqueue is CRITICAL: (Service Check Timed Out)
[01:46:18] PROBLEM - Puppet staleness on integration-slave-jessie-1004 is CRITICAL: (Service Check Timed Out)
[01:46:18] PROBLEM - Puppet errors on deployment-webperf12 is CRITICAL: (Service Check Timed Out)
[01:46:18] PROBLEM - Puppet staleness on deployment-fluorine02 is CRITICAL: (Service Check Timed Out)
[01:46:18] PROBLEM - Puppet staleness on deployment-changeprop is CRITICAL: (Service Check Timed Out)
[01:46:18] PROBLEM - Free space - all mounts on integration-r-lang-01 is CRITICAL: (Service Check Timed Out)
[01:47:33] RECOVERY - Puppet errors on saucelabs-02 is OK: OK: Less than 1.00% above the threshold [2.0]
[01:47:33] RECOVERY - Puppet errors on deployment-elastic06 is OK: OK: Less than 1.00% above the threshold [2.0]
[01:47:33] RECOVERY - Puppet errors on integration-slave-docker-1050 is OK: OK: Less than 1.00% above the threshold [2.0]
[01:47:33] RECOVERY - Free space - all mounts on deployment-prometheus02 is OK: OK: All targets OK
[01:47:42] RECOVERY - Free space - all mounts on deployment-aqs01 is OK: OK: All targets OK
[01:47:44] RECOVERY - Puppet errors on integration-slave-jessie-1004 is OK: OK: Less than 1.00% above the threshold [2.0]
[01:47:44] RECOVERY - Puppet errors on deployment-deploy01 is OK: OK: Less than 1.00% above the threshold [2.0]
[01:47:44] RECOVERY - Puppet staleness on deployment-eventlog05 is OK: OK: Less than 1.00% above the threshold [3600.0]
[01:47:44] RECOVERY - Puppet staleness on deployment-webperf12 is OK: OK: Less than 1.00% above the threshold [3600.0]
[01:47:47] RECOVERY - Free space - all mounts on deployment-sca01 is OK: OK: deployment-prep.deployment-sca01.diskspace._var.byte_percentfree (No valid datapoints found) deployment-prep.deployment-sca01.diskspace._srv.byte_percentfree (No valid datapoints found) deployment-prep.deployment-sca01.diskspace._mnt.byte_percentfree (No valid datapoints found) deployment-prep.deployment-sca01.diskspace._var_log.byte_percentfree (No valid datapoints
[01:47:49] RECOVERY - Puppet staleness on saucelabs-01 is OK: OK: Less than 1.00% above the threshold [3600.0]
[01:47:51] RECOVERY - Free space - all mounts on deployment-eventgate-1 is OK: OK: All targets OK
[01:47:51] RECOVERY - Free space - all mounts on deployment-db06 is OK: OK: All targets OK
[01:47:54] RECOVERY - Puppet staleness on deployment-kafka-jumbo-1 is OK: OK: Less than 1.00% above the threshold [3600.0]
[01:47:54] RECOVERY - Free space - all mounts on deployment-eventlog05 is OK: OK: deployment-prep.deployment-eventlog05.diskspace._var_lib_mysql.byte_percentfree (No valid datapoints found)
[01:47:55] RECOVERY - Free space - all mounts on deployment-puppetdb02 is OK: OK: All targets OK
[01:47:55] RECOVERY - Puppet staleness on deployment-puppetdb02 is OK: OK: Less than 1.00% above the threshold [3600.0]
[01:47:56] RECOVERY - Free space - all mounts on deployment-ms-be06 is OK: OK: All targets OK
[01:47:57] RECOVERY - Free space - all mounts on integration-slave-docker-1059 is OK: OK: All targets OK
[01:48:00] RECOVERY - Puppet errors on deployment-snapshot01 is OK: OK: Less than 1.00% above the threshold [2.0]
[01:48:00] RECOVERY - Puppet errors on deployment-eventgate-1 is OK: OK: Less than 1.00% above the threshold [2.0]
[01:48:01] RECOVERY - Puppet staleness on deployment-db05 is OK: OK: Less than 1.00% above the threshold [3600.0]
[01:48:01] RECOVERY - Puppet staleness on deployment-restbase02 is OK: OK: Less than 1.00% above the threshold [3600.0]
[01:48:04] RECOVERY - Puppet errors on integration-slave-docker-1048 is OK: OK: Less than 1.00% above the threshold [2.0]
[01:48:05] RECOVERY - Free space - all mounts on deployment-hadoop-test-1 is OK: OK: All targets OK
[02:12:49] RECOVERY - Free space - all mounts on deployment-memc06 is OK: OK: All targets OK
[02:12:55] RECOVERY - Puppet errors on deployment-maps04 is OK: OK: Less than 1.00% above the threshold [2.0]
[02:13:03] RECOVERY - Puppet staleness on deployment-mediawiki-07 is OK: OK: Less than 1.00% above the threshold [3600.0]
[02:13:16] RECOVERY - Free space - all mounts on deployment-urldownloader02 is OK: OK: All targets OK
[02:13:17] RECOVERY - Puppet staleness on integration-r-lang-01 is OK: OK: Less than 1.00% above the threshold [3600.0]
[02:13:28] RECOVERY - Puppet errors on integration-cumin is OK: OK: Less than 1.00% above the threshold [2.0]
[02:13:28] RECOVERY - English Wikipedia Mobile Main page on beta-cluster is OK: HTTP OK: HTTP/1.1 200 OK - 37009 bytes in 1.238 second response time
[02:13:28] RECOVERY - Puppet staleness on integration-slave-docker-1052 is OK: OK: Less than 1.00% above the threshold [3600.0]
[02:13:31] RECOVERY - Puppet errors on integration-agent-puppet-docker-1001 is OK: OK: Less than 1.00% above the threshold [2.0]
[02:13:34] RECOVERY - Puppet errors on integration-slave-docker-1050 is OK: OK: Less than 1.00% above the threshold [2.0]
[02:13:38] RECOVERY - Puppet staleness on deployment-sca04 is OK: OK: Less than 1.00% above the threshold [3600.0]
[02:13:39] RECOVERY - Free space - all mounts on deployment-mwmaint01 is OK: OK: All targets OK
[02:13:40] RECOVERY - Free space - all mounts on deployment-docker-citoid01 is OK: OK: All targets OK
[02:13:44] RECOVERY - Puppet staleness on integration-slave-docker-1054 is OK: OK: Less than 1.00% above the threshold [3600.0]
[02:13:44] RECOVERY - Free space - all mounts on deployment-maps04 is OK: OK: All targets OK
[02:13:44] RECOVERY - Puppet staleness on deployment-webperf12 is OK: OK: Less than 1.00% above the threshold [3600.0]
[02:13:48] RECOVERY - Free space - all mounts on integration-slave-docker-1051 is OK: OK: All targets OK
[02:13:48] RECOVERY - Free space - all mounts on deployment-dumps-puppetmaster02 is OK: OK: All targets OK
[02:13:48] RECOVERY - Puppet staleness on integration-agent-puppet-docker-1001 is OK: OK: Less than 1.00% above the threshold [3600.0]
[02:13:48] RECOVERY - Free space - all mounts on deployment-imagescaler03 is OK: OK: All targets OK
[02:13:51] RECOVERY - Puppet errors on deployment-fluorine02 is OK: OK: Less than 1.00% above the threshold [2.0]
[02:13:52] RECOVERY - Puppet staleness on deployment-webperf11 is OK: OK: Less than 1.00% above the threshold [3600.0]
[02:13:52] RECOVERY - Free space - all mounts on deployment-docker-mathoid01 is OK: OK: All targets OK
[02:13:56] RECOVERY - Puppet errors on deployment-restbase02 is OK: OK: Less than 1.00% above the threshold [2.0]
[02:14:00] RECOVERY - Puppet errors on deployment-sessionstore02 is OK: OK: Less than 1.00% above the threshold [2.0]
[02:14:01] RECOVERY - Puppet staleness on deployment-mediawiki-09 is OK: OK: Less than 1.00% above the threshold [3600.0]
[02:14:07] RECOVERY - Free space - all mounts on deployment-webperf12 is OK: OK: All targets OK
[02:14:07] RECOVERY - Puppet errors on deployment-jobrunner03 is OK: OK: Less than 1.00% above the threshold [2.0]
[02:14:07] RECOVERY - Puppet staleness on deployment-changeprop is OK: OK: Less than 1.00% above the threshold [3600.0]
[02:14:09] RECOVERY - Free space - all mounts on deployment-memc05 is OK: OK: All targets OK
[02:14:11] RECOVERY - Puppet staleness on deployment-ircd is OK: OK: Less than 1.00% above the threshold [3600.0]
[02:14:11] RECOVERY - Free space - all mounts on integration-r-lang-01 is OK: OK: integration.integration-r-lang-01.diskspace._mnt.byte_percentfree (No valid datapoints found)
[02:14:11] RECOVERY - Free space - all mounts on deployment-poolcounter05 is OK: OK: All targets OK
[02:14:16] RECOVERY - Free space - all mounts on deployment-hadoop-test-2 is OK: OK: All targets OK
[02:14:24] RECOVERY - Free space - all mounts on integration-slave-jessie-1002 is OK: OK: integration.integration-slave-jessie-1002.diskspace._mnt.byte_percentfree (No valid datapoints found)
[02:14:24] RECOVERY - Puppet staleness on deployment-aqs03 is OK: OK: Less than 1.00% above the threshold [3600.0]
[02:14:24] RECOVERY - Puppet staleness on deployment-ms-be05 is OK: OK: Less than 1.00% above the threshold [3600.0]
[02:14:24] RECOVERY - App Server Main HTTP Response on deployment-mediawiki-07 is OK: HTTP OK: HTTP/1.1 200 OK - 48172 bytes in 4.223 second response time
[02:14:24] RECOVERY - Free space - all mounts on deployment-mediawiki-09 is OK: OK: All targets OK
[02:14:24] RECOVERY - Puppet staleness on integration-slave-jessie-1001 is OK: OK: Less than 1.00% above the threshold [3600.0]
[02:14:24] RECOVERY - Free space - all mounts on deployment-mediawiki-07 is OK: OK: All targets OK
[02:14:27] RECOVERY - Puppet errors on integration-agent-docker-1001 is OK: OK: Less than 1.00% above the threshold [2.0]
[02:14:31] RECOVERY - Free space - all mounts on integration-slave-docker-1048 is OK: OK: All targets OK
[02:14:33] RECOVERY - Puppet errors on deployment-logstash03 is OK: OK: Less than 1.00% above the threshold [2.0]
[02:14:33] RECOVERY - Puppet staleness on deployment-prometheus02 is OK: OK: Less than 1.00% above the threshold [3600.0]
[02:14:33] RECOVERY - Puppet staleness on deployment-db06 is OK: OK: Less than 1.00% above the threshold [3600.0]
[02:14:33] RECOVERY - Puppet errors on deployment-eventlog05 is OK: OK: Less than 1.00% above the threshold [2.0]
[02:14:39] RECOVERY - Free space - all mounts on deployment-etcd-01 is OK: OK: All targets OK
[02:14:41] RECOVERY - Puppet errors on deployment-parsoid09 is OK: OK: Less than 1.00% above the threshold [2.0]
[02:14:49] RECOVERY - Free space - all mounts on integration-slave-jessie-1001 is OK: OK: integration.integration-slave-jessie-1001.diskspace._mnt.byte_percentfree (No valid datapoints found)
[02:14:49] RECOVERY - Puppet staleness on deployment-maps04 is OK: OK: Less than 1.00% above the threshold [3600.0]
[02:14:50] RECOVERY - Puppet staleness on deployment-mx02 is OK: OK: Less than 1.00% above the threshold [3600.0]
[02:14:57] RECOVERY - Free space - all mounts on deployment-elastic05 is OK: OK: deployment-prep.deployment-elastic05.diskspace._var_log.byte_percentfree (No valid datapoints found) deployment-prep.deployment-elastic05.diskspace._var_lib_elasticsearch.byte_percentfree (No valid datapoints found)
[02:15:01] RECOVERY - Puppet staleness on integration-slave-jessie-1002 is OK: OK: Less than 1.00% above the threshold [3600.0]
[02:15:02] RECOVERY - Free space - all mounts on deployment-zookeeper02 is OK: OK: All targets OK
[02:15:02] RECOVERY - Puppet staleness on deployment-memc06 is OK: OK: Less than 1.00% above the threshold [3600.0]
[02:15:13] RECOVERY - Puppet errors on integration-slave-docker-1052 is OK: OK: Less than 1.00% above the threshold [2.0]
[02:15:29] RECOVERY - Puppet staleness on deployment-ores01 is OK: OK: Less than 1.00% above the threshold [3600.0]
[02:16:00] RECOVERY - Puppet errors on deployment-dumps-puppetmaster02 is OK: OK: Less than 1.00% above the threshold [2.0]
[02:16:04] RECOVERY - Puppet staleness on deployment-wikifeeds01 is OK: OK: Less than 1.00% above the threshold [3600.0]
[02:16:08] RECOVERY - Puppet errors on deployment-kafka-jumbo-2 is OK: OK: Less than 1.00% above the threshold [2.0]
[02:16:15] RECOVERY - Free space - all mounts on deployment-alex-test is OK: OK: All targets OK
[03:39:43] PROBLEM - Free space - all mounts on deployment-deploy01 is CRITICAL: CRITICAL: deployment-prep.deployment-deploy01.diskspace.root.byte_percentfree (<10.00%)
[03:59:45] PROBLEM - Free space - all mounts on deployment-fluorine02 is CRITICAL: CRITICAL: deployment-prep.deployment-fluorine02.diskspace._srv.byte_percentfree (<50.00%)
[04:11:05] PROBLEM - Mediawiki Error Rate on graphite-labs is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [10.0]
[05:01:03] PROBLEM - Mediawiki Error Rate on graphite-labs is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [10.0]
[06:11:02] PROBLEM - Mediawiki Error Rate on graphite-labs is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [10.0]
[06:29:41] PROBLEM - Free space - all mounts on deployment-deploy01 is CRITICAL: CRITICAL: deployment-prep.deployment-deploy01.diskspace.root.byte_percentfree (<10.00%)
[07:09:48] RECOVERY - Free space - all mounts on deployment-fluorine02 is OK: OK: All targets OK
[08:11:03] PROBLEM - Mediawiki Error Rate on graphite-labs is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [10.0]
[10:11:05] PROBLEM - Mediawiki Error Rate on graphite-labs is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [10.0]
[11:49:42] RECOVERY - Free space - all mounts on deployment-deploy01 is OK: OK: All targets OK
[12:11:02] PROBLEM - Mediawiki Error Rate on graphite-labs is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [10.0]
[12:19:05] (PS1) Samwilson: Add new Discourse extension [integration/config] - https://gerrit.wikimedia.org/r/534906
[12:19:40] (PS2) Samwilson: Add new Discourse extension [integration/config] - https://gerrit.wikimedia.org/r/534906 (https://phabricator.wikimedia.org/T215053)
[14:11:06] PROBLEM - Mediawiki Error Rate on graphite-labs is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [10.0]
[15:06:02] (PS1) Daimona Eaytoy: Allow @template annotations [tools/codesniffer] - https://gerrit.wikimedia.org/r/534917 (https://phabricator.wikimedia.org/T232256)
[15:10:52] (CR) jerkins-bot: [V: -1] Allow @template annotations [tools/codesniffer] - https://gerrit.wikimedia.org/r/534917 (https://phabricator.wikimedia.org/T232256) (owner: Daimona Eaytoy)
[15:11:33] (PS2) Daimona Eaytoy: Allow @template annotations [tools/codesniffer] - https://gerrit.wikimedia.org/r/534917 (https://phabricator.wikimedia.org/T232256)
[16:04:58] hmm, i wonder what caused gerrit to use more memory between 4-10am https://gerrit.wikimedia.org/r/monitoring?part=graph&graph=usedMemory
[16:10:43] (CR) Krinkle: [C: +1] "Soon :)" [integration/config] - https://gerrit.wikimedia.org/r/534522 (owner: Jforrester)
[16:11:02] PROBLEM - Mediawiki Error Rate on graphite-labs is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [10.0]
[16:12:32] I'm going to ask vgutierrez to cleanup gerrit-slave (e.g remove it from dns/acme/gerrit apache)
[16:12:49] since no one should be using it and also gerrit-replica has been up for a while
[17:14:29] (PS3) Krinkle: Allow @template annotations [tools/codesniffer] - https://gerrit.wikimedia.org/r/534917 (https://phabricator.wikimedia.org/T232256) (owner: Daimona Eaytoy)
[17:14:50] (CR) Krinkle: "Renamed method to reduce chances of a grep result for the real thing." [tools/codesniffer] - https://gerrit.wikimedia.org/r/534917 (https://phabricator.wikimedia.org/T232256) (owner: Daimona Eaytoy)
[17:15:19] (CR) Krinkle: [C: +1] "LGTM, though I haven't quite understood how it's meant to work (still reading the Phan docs), certainly seems fine to allow use of." [tools/codesniffer] - https://gerrit.wikimedia.org/r/534917 (https://phabricator.wikimedia.org/T232256) (owner: Daimona Eaytoy)
[17:16:51] (CR) Daimona Eaytoy: "> Renamed method to reduce chances of a grep result for the real" [tools/codesniffer] - https://gerrit.wikimedia.org/r/534917 (https://phabricator.wikimedia.org/T232256) (owner: Daimona Eaytoy)
[17:55:23] PROBLEM - App Server Main HTTP Response on deployment-mediawiki-jhuneidi is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 hphp_invoke - string 'Wikipedia' not found on 'http://en.wikipedia.beta.wmflabs.org:80/wiki/Main_Page?debug=true' - 353 bytes in 0.060 second response time
[18:11:01] PROBLEM - Mediawiki Error Rate on graphite-labs is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [10.0]
[20:11:04] PROBLEM - Mediawiki Error Rate on graphite-labs is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [10.0]
[22:11:03] PROBLEM - Mediawiki Error Rate on graphite-labs is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [10.0]