[02:37:21] PROBLEM - Free space - all mounts on tools-webproxy is CRITICAL: CRITICAL: tools.tools-webproxy.diskspace._var.byte_percentfree.value (<44.44%)
[02:52:16] RECOVERY - Free space - all mounts on tools-webproxy is OK: OK: All targets OK
[06:08:53] PROBLEM - Puppet failure on tools-master is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0]
[06:09:17] PROBLEM - Puppet failure on tools-exec-09 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0]
[06:10:53] PROBLEM - Puppet failure on tools-exec-11 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0]
[06:10:55] PROBLEM - Puppet failure on tools-webgrid-05 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0]
[06:11:26] PROBLEM - Puppet failure on tools-exec-06 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0]
[06:11:56] PROBLEM - Puppet failure on tools-static is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0]
[06:12:56] PROBLEM - Puppet failure on tools-webgrid-tomcat is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0]
[06:13:16] PROBLEM - Puppet failure on tools-webgrid-02 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0]
[06:13:54] PROBLEM - Puppet failure on tools-exec-15 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0]
[06:13:54] PROBLEM - Puppet failure on tools-exec-wmt is CRITICAL: CRITICAL: 70.00% of data above the critical threshold [0.0]
[06:14:06] PROBLEM - Puppet failure on tools-dev is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0]
[06:14:34] PROBLEM - Puppet failure on tools-webgrid-generic-01 is CRITICAL: CRITICAL: 60.00% of data above the critical threshold [0.0]
[06:14:56] PROBLEM - Puppet failure on tools-exec-gift is CRITICAL: CRITICAL: 60.00% of data above the critical threshold [0.0]
[06:15:18] PROBLEM - Puppet failure on tools-login is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0]
[06:15:32] PROBLEM - Puppet failure on tools-exec-14 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0]
[06:15:32] PROBLEM - Puppet failure on tools-exec-02 is CRITICAL: CRITICAL: 11.11% of data above the critical threshold [0.0]
[06:16:00] PROBLEM - Puppet failure on tools-mail is CRITICAL: CRITICAL: 60.00% of data above the critical threshold [0.0]
[06:16:14] PROBLEM - Puppet failure on tools-exec-07 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0]
[06:16:48] PROBLEM - Puppet failure on tools-uwsgi-01 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0]
[06:16:56] PROBLEM - Puppet failure on tools-exec-13 is CRITICAL: CRITICAL: 60.00% of data above the critical threshold [0.0]
[06:17:06] PROBLEM - Puppet failure on tools-exec-08 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0]
[06:17:46] PROBLEM - Puppet failure on tools-exec-catscan is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0]
[06:18:37] PROBLEM - Puppet failure on tools-exec-12 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0]
[06:19:19] PROBLEM - Puppet failure on tools-webgrid-01 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0]
[06:20:21] PROBLEM - Puppet failure on tools-webgrid-03 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0]
[06:21:00] PROBLEM - Puppet failure on tools-shadow is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0]
[06:21:46] PROBLEM - Puppet failure on tools-exec-01 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0]
[06:22:44] PROBLEM - Puppet failure on tools-webgrid-04 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0]
[06:22:44] PROBLEM - Puppet failure on tools-exec-cyberbot is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0]
[06:22:51] PROBLEM - Puppet failure on tools-exec-04 is CRITICAL: CRITICAL: 60.00% of data above the critical threshold [0.0]
[06:23:57] PROBLEM - Puppet failure on tools-webproxy is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0]
[06:24:39] PROBLEM - Puppet failure on tools-submit is CRITICAL: CRITICAL: 60.00% of data above the critical threshold [0.0]
[06:34:16] RECOVERY - Puppet failure on tools-exec-09 is OK: OK: Less than 1.00% above the threshold [0.0]
[06:35:56] RECOVERY - Puppet failure on tools-exec-11 is OK: OK: Less than 1.00% above the threshold [0.0]
[06:36:28] RECOVERY - Puppet failure on tools-exec-06 is OK: OK: Less than 1.00% above the threshold [0.0]
[06:38:17] RECOVERY - Puppet failure on tools-webgrid-02 is OK: OK: Less than 1.00% above the threshold [0.0]
[06:38:53] RECOVERY - Puppet failure on tools-exec-wmt is OK: OK: Less than 1.00% above the threshold [0.0]
[06:38:53] RECOVERY - Puppet failure on tools-master is OK: OK: Less than 1.00% above the threshold [0.0]
[06:39:41] RECOVERY - Puppet failure on tools-webgrid-generic-01 is OK: OK: Less than 1.00% above the threshold [0.0]
[06:40:05] RECOVERY - Puppet failure on tools-exec-gift is OK: OK: Less than 1.00% above the threshold [0.0]
[06:40:55] RECOVERY - Puppet failure on tools-webgrid-05 is OK: OK: Less than 1.00% above the threshold [0.0]
[06:41:01] RECOVERY - Puppet failure on tools-mail is OK: OK: Less than 1.00% above the threshold [0.0]
[06:41:57] RECOVERY - Puppet failure on tools-static is OK: OK: Less than 1.00% above the threshold [0.0]
[06:41:58] RECOVERY - Puppet failure on tools-exec-13 is OK: OK: Less than 1.00% above the threshold [0.0]
[06:42:53] RECOVERY - Puppet failure on tools-webgrid-tomcat is OK: OK: Less than 1.00% above the threshold [0.0]
[06:43:32] RECOVERY - Puppet failure on tools-exec-12 is OK: OK: Less than 1.00% above the threshold [0.0]
[06:43:52] RECOVERY - Puppet failure on tools-exec-15 is OK: OK: Less than 1.00% above the threshold [0.0]
[06:44:17] RECOVERY - Puppet failure on tools-dev is OK: OK: Less than 1.00% above the threshold [0.0]
[06:44:17] RECOVERY - Puppet failure on tools-webgrid-01 is OK: OK: Less than 1.00% above the threshold [0.0]
[06:45:14] RECOVERY - Puppet failure on tools-login is OK: OK: Less than 1.00% above the threshold [0.0]
[06:45:22] RECOVERY - Puppet failure on tools-webgrid-03 is OK: OK: Less than 1.00% above the threshold [0.0]
[06:45:32] RECOVERY - Puppet failure on tools-exec-14 is OK: OK: Less than 1.00% above the threshold [0.0]
[06:45:38] RECOVERY - Puppet failure on tools-exec-02 is OK: OK: Less than 1.00% above the threshold [0.0]
[06:46:16] RECOVERY - Puppet failure on tools-exec-07 is OK: OK: Less than 1.00% above the threshold [0.0]
[06:46:48] RECOVERY - Puppet failure on tools-uwsgi-01 is OK: OK: Less than 1.00% above the threshold [0.0]
[06:47:06] RECOVERY - Puppet failure on tools-exec-08 is OK: OK: Less than 1.00% above the threshold [0.0]
[06:47:42] RECOVERY - Puppet failure on tools-webgrid-04 is OK: OK: Less than 1.00% above the threshold [0.0]
[06:47:46] RECOVERY - Puppet failure on tools-exec-catscan is OK: OK: Less than 1.00% above the threshold [0.0]
[06:47:50] RECOVERY - Puppet failure on tools-exec-04 is OK: OK: Less than 1.00% above the threshold [0.0]
[06:49:44] RECOVERY - Puppet failure on tools-submit is OK: OK: Less than 1.00% above the threshold [0.0]
[06:50:54] RECOVERY - Puppet failure on tools-shadow is OK: OK: Less than 1.00% above the threshold [0.0]
[06:51:47] RECOVERY - Puppet failure on tools-exec-01 is OK: OK: Less than 1.00% above the threshold [0.0]
[06:52:47] RECOVERY - Puppet failure on tools-exec-cyberbot is OK: OK: Less than 1.00% above the threshold [0.0]
[06:53:59] RECOVERY - Puppet failure on tools-webproxy is OK: OK: Less than 1.00% above the threshold [0.0]
[06:57:47] PROBLEM - Puppet failure on tools-uwsgi-01 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0]
[07:27:49] RECOVERY - Puppet failure on tools-uwsgi-01 is OK: OK: Less than 1.00% above the threshold [0.0]
[10:30:33] Hi there. We have a problem logging in to Wikitech. Some of us have tried it for a while now, but we've been getting "incorrect password" responses all day long. Does anyone know what's going on? (We can't just have been making typos in 100 tries)
[10:37:02] rhydon: strange. Gerrit does work, so LDAP is not down.
[10:37:24] rhydon: but the behavior (long wait before erroring out) suggests a timeout
[10:41:03] valhallasw`cloud: A timeout was also my first guess, especially since Gerrit works. Anyway, I have no idea why it occurs. A fix would be great.
[12:25:18] Hi
[12:26:00] I have a doubt
[12:26:08] Can anyone help me? :)
[12:27:36] NeoMahler: just ask your question; if anyone can help you, they will.
[12:27:44] ok
[12:28:05] Can I put an IRC bot for #vikidia-rc (a non-Wikimedia project) on Wikimedia Labs?
[12:31:54] NeoMahler: according to the rules, it would have to be WMF-related, but I think Vikidia is probably close enough in spirit to warrant an exception. Check with Coren (assuming you'd want to put it on Tool Labs), or maybe Ryan Lane.
[12:32:14] ok
[12:32:55] Coren, what do you think?
[12:33:05] (I don't see Ryan Lane here)
[12:33:48] NeoMahler: the best thing to do is probably to write an email to labs-l@lists.wikimedia.org. It's morning where Coren lives, and he might actually be in San Francisco for the dev summit, where it's only 4:30 AM
[12:34:00] (don't forget to subscribe to that mailing list, too, or you'll miss the answers)
[12:34:06] ok :)
[12:34:08] thanks!
[13:02:17] Tool-Labs: Shorten update interval of lighttpd error logs - https://phabricator.wikimedia.org/T87562#994301 (jkroll) NEW
[13:16:24] Tool-Labs: Shorten update interval of lighttpd error logs - https://phabricator.wikimedia.org/T87562#994342 (valhallasw) This is probably because this is an NFS mount, which means updates can take a while. If you want quicker updates, you can ssh to the relevant -webgrid node, and tail the error log there: `...
[13:47:21] can anyone review https://gerrit.wikimedia.org/r/#/c/186762/
[13:50:59] Mjbmr: check the project history for people who have merged before, and add them as reviewers
[13:51:19] OK
[13:51:43] (brian wolff, in this case)
[13:57:09] how did you find people who have merged before?
[13:59:21] Mjbmr: in the search bar, enter project:mediawiki/extensions/intersection
[13:59:37] then click a few of the non-l10n changes
[14:01:40] I don't see Bawolff?
[14:02:18] Mjbmr: https://gerrit.wikimedia.org/r/#/c/149631/
[14:02:18] oh, I see him now.
[14:02:23] ok.
[14:03:33] do these people have +2 or commit access?
[14:04:10] Mjbmr: you can see that list if you click the project name, then select the 'access' tab.
[14:04:35] project admins?
[14:05:18] https://gerrit.wikimedia.org/r/#/admin/projects/mediawiki/extensions/intersection,access.
[14:06:13] no, I mean: is there any difference between people who are project admins and people who have +2?
[14:07:41] Mjbmr: they need the 'Submit' right, I think. People in the 'Owner' group have that right.
[15:18:02] Any lab roots around?
[17:43:28] MediaWiki-extensions-OpenStackManager, Librarization, MediaWiki-extensions-Translate: Bring in spyc for OpenStackManager and Translate via composer - https://phabricator.wikimedia.org/T75945#994607 (Nikerabbit) Open>stalled No reaction in upstream ticket :(
[17:43:46] MediaWiki-extensions-OpenStackManager, Librarization, MediaWiki-extensions-Translate: Bring in spyc for OpenStackManager and Translate via composer - https://phabricator.wikimedia.org/T75945#994609 (Nikerabbit) p: Triage>Low
[18:10:18] Wikimedia-Labs-Infrastructure: Internal DNS look-ups fail every once in a while - https://phabricator.wikimedia.org/T72076#994662 (yuvipanda) p: High>Unbreak! Seems to be acting up again
[18:10:46] Wikimedia-Labs-Infrastructure: Internal DNS look-ups fail every once in a while - https://phabricator.wikimedia.org/T72076#994665 (yuvipanda)
[19:58:58] Wikimedia-Labs-Infrastructure: Internal DNS look-ups fail every once in a while - https://phabricator.wikimedia.org/T72076#994923 (yuvipanda) I increased the limit for connection tracking with `sysctl -w net.netfilter.nf_conntrack_max=131072` (doubling the value). That seems to have stabilized somewhere aroun...
[20:52:56] Tool-Labs: Provide namespace IDs and names in the databases similar to toolserver.namespace - https://phabricator.wikimedia.org/T50625#994993 (Nosy) any news here?
[20:59:21] has anyone made a CGI script that allows their bots to be rebooted through a web interface?
[20:59:31] not that difficult, and I can do it myself, but why duplicate effort
[21:01:14] Tool-Labs-tools-Other: [tracking] toolserver.org tools that have not been migrated - https://phabricator.wikimedia.org/T60865#995000 (Multichill)
[23:20:05] Hey! Just found that BounceHandler is surprisingly missing from deployment.wikimedia.beta.wmflabs.org! Can we get that re-installed?
[23:20:15] (PS3) Awight: Add more repos maintained by Fundraising Tech [labs/tools/grrrit] - https://gerrit.wikimedia.org/r/177378
[23:20:33] We had done the same some 6 months back - but cannot find it in deployment.wikimedia.beta.wmflabs.org/wiki/Special:Version
[23:21:33] <^d> YuviPanda: can we haz new labs project with lots of quota?
[23:26:29] reported https://phabricator.wikimedia.org/T87624?workflow=create
[23:50:08] (PS1) Awight: Un-flow the config file... [labs/tools/grrrit] - https://gerrit.wikimedia.org/r/186905
[23:51:18] (CR) Awight: "See also @I45cb9e963d1c199357b37055b8d93eb7159166a2 -- I've tested this change by dumping the file before and after into canonical form, a" [labs/tools/grrrit] - https://gerrit.wikimedia.org/r/186905 (owner: Awight)
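Note on the conntrack change mentioned at 19:58:58: a value set with `sysctl -w` applies immediately but is lost on reboot. Below is a minimal sketch of how such a setting might be persisted on a Debian/Ubuntu host; the `/etc/sysctl.d/99-conntrack.conf` file name is an illustrative assumption, not something taken from the log, and on puppet-managed Labs instances a change like this would normally be applied via puppet rather than by hand.

```
# Apply the new limit immediately (same value quoted in the log above).
sudo sysctl -w net.netfilter.nf_conntrack_max=131072

# Persist it across reboots (hypothetical file name; requires the
# nf_conntrack module to be loaded when the file is applied).
echo 'net.netfilter.nf_conntrack_max = 131072' | sudo tee /etc/sysctl.d/99-conntrack.conf

# Re-read all sysctl configuration files to confirm the setting sticks.
sudo sysctl --system
```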