[02:48:48] hello?
[02:49:23] Guest96242
[02:49:25] wat
[02:49:48] -_-
[02:50:40] -_-
[02:50:47] Feralex
[02:51:02] ?
[02:51:58] !admin
[02:52:00] ?
[02:52:15] how does this work..
[02:52:27] welp
[06:03:16] Wikidata is in read-only mode :(
[06:05:47] GerardM: yes. Is it going to last long, do you know?
[06:07:20] They have problems as it is: they cannot propagate changes to the wikis, the volume of changes is going up, and as I understand it part of the issue is a design issue at the WMF end (not Germany)
[06:07:42] hmm, that's why there
[06:07:43] the consequence is that they hack their way to greater performance
[06:07:52] 've been no changes since 5:48... I was thinking something broke in my updater
[06:08:02] (Germany as I understand it)
[06:08:28] so this isn't a planned thing then?
[06:08:36] no
[06:09:37] as it is, to make ends meet, changes (particularly mass changes) are discouraged. The problem is that some people will leave, never to come back
[06:09:39] RECOVERY - WDQS SPARQL on wdqs1001 is OK: HTTP OK: HTTP/1.1 200 OK - 13348 bytes in 0.001 second response time
[06:09:59] RECOVERY - WDQS HTTP on wdqs1001 is OK: HTTP OK: HTTP/1.1 200 OK - 13348 bytes in 0.001 second response time
[06:27:29] PROBLEM - High lag on wdqs2002 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [1800.0]
[06:27:34] de.wikipedia has also been locked since 5:48
[06:27:39] PROBLEM - High lag on wdqs2001 is CRITICAL: CRITICAL: 31.03% of data above the critical threshold [1800.0]
[06:28:09] PROBLEM - High lag on wdqs1003 is CRITICAL: CRITICAL: 34.48% of data above the critical threshold [1800.0]
[06:28:18] PROBLEM - High lag on wdqs2003 is CRITICAL: CRITICAL: 31.03% of data above the critical threshold [1800.0]
[06:30:18] PROBLEM - High lag on wdqs1002 is CRITICAL: CRITICAL: 31.03% of data above the critical threshold [1800.0]
[06:30:19] PROBLEM - High lag on wdqs2003 is CRITICAL: CRITICAL: 31.03% of data above the critical threshold [1800.0]
[06:33:39] PROBLEM - High lag on wdqs2001 is CRITICAL: CRITICAL: 44.83% of data above the critical threshold [1800.0]
[07:26:29] RECOVERY - High lag on wdqs1003 is OK: OK: Less than 30.00% above the threshold [600.0]
[07:26:38] RECOVERY - High lag on wdqs2003 is OK: OK: Less than 30.00% above the threshold [600.0]
[07:26:58] RECOVERY - High lag on wdqs2001 is OK: OK: Less than 30.00% above the threshold [600.0]
[07:27:29] RECOVERY - High lag on wdqs1002 is OK: OK: Less than 30.00% above the threshold [600.0]
[07:27:49] RECOVERY - High lag on wdqs2002 is OK: OK: Less than 30.00% above the threshold [600.0]
[08:40:10] PROBLEM - puppet last run on wdqs1001 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[13:27:22] aude: addshore: the jobs for Wikidata seem to pass again after changing to composer install
[13:27:30] at least https://gerrit.wikimedia.org/r/#/c/368312/ got a +2
[13:27:37] I have closed https://phabricator.wikimedia.org/T165316
[13:27:37] ack!
[13:27:40] solved by aude's patch \o/
[13:28:05] the reason is that the test extension job did something like: cd extensions/Wikidata/ && composer update && composer test && git clean
[13:28:16] that is an optimization to run composer test in the same job that runs the PHPUnit tests
[13:28:33] I am not sure why we use composer update in the first place
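
A minimal sketch of the test-extension step discussed at 13:28, assuming the fix was simply swapping composer update for composer install in the command sequence the log quotes; the actual Jenkins job definition is not shown here, and the git clean flags are an assumption (the log gives none).

```sh
# Sketch of the fixed CI step, per the sequence quoted at 13:28:05,
# with composer update replaced by composer install.
cd extensions/Wikidata/

# composer install reads composer.lock and installs the exact pinned
# versions, so the build is reproducible; composer update re-resolves
# dependencies to the newest matching versions and can break the job
# whenever an upstream package changes.
composer install

# Run the extension's composer "test" script in the same job that runs
# the PHPUnit tests, as described at 13:28:16.
composer test

# Clean up untracked files afterwards; the log only says "git clean",
# so the -x/-d/-f flags here are assumed.
git clean -xdf
```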
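
On the 06:07 exchange about "no changes since 5:48": one way to tell a stalled updater apart from a genuinely quiet wiki is to ask the query service itself when its data was last touched. A minimal sketch against the public endpoint, using the schema:dateModified pattern commonly used as a WDQS freshness check; it is not the updater's own diagnostic.

```sh
# Ask WDQS when its copy of the data was last modified; if this timestamp
# stops advancing while edits continue on the wiki, the updater is lagging.
curl -sG 'https://query.wikidata.org/sparql' \
  -H 'Accept: application/sparql-results+json' \
  --data-urlencode 'query=SELECT ?lastTouched WHERE { <http://www.wikidata.org> schema:dateModified ?lastTouched }'
```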
[20:41:25] multichill: https://www.wikidata.org/wiki/Wikidata:Property_proposal/Rijksmuseum_id seems like something you'd have an opinion on
[20:44:30] nikki: Didn't we block him? :S
[20:45:16] no, although he did get blocked for a day a couple of days ago for running a bot under his normal user account