[00:06:41] PROBLEM - Puppet run on tools-services-01 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0] [00:38:04] 06Labs, 10Labs-Infrastructure: Deprecate precise instances in Labs by 03/31/2017 - https://phabricator.wikimedia.org/T143349#2922026 (10Andrew) [00:39:18] Earwig: the copyvio detector times out, please take a look, any wiki main page as example [00:39:40] 06Labs, 10Labs-Infrastructure: Deprecate precise instances in Labs by 03/31/2017 - https://phabricator.wikimedia.org/T143349#2565438 (10Andrew) [00:43:08] 06Labs, 10Labs-Infrastructure: Deprecate precise instances in Labs by 03/31/2017 - https://phabricator.wikimedia.org/T143349#2922064 (10Andrew) >>! In T143349#2921069, @dschwen wrote: > Please do not remove the fastcci or maps-wma1 instances! They are being used. @dschwen, Just in case you've missed the conte... [00:53:46] PROBLEM - Puppet run on tools-worker-1010 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0] [00:54:56] gry: works for me [00:55:07] um [00:55:12] run it on the Main Page, you say? [00:55:34] yes you can try [00:55:35] also works [00:55:36] or https://tools.wmflabs.org/copyvios/?lang=en&project=wikipedia&title=Draft%3AGenetic+Method&oldid=749741623&action=search&use_engine=1&use_links=1&turnitin=0 [00:55:40] times out [00:56:25] 06Labs, 10Labs-Infrastructure: Deprecate precise instances in Labs by 03/31/2017 - https://phabricator.wikimedia.org/T143349#2922117 (10dschwen) I'm upgrading them to trusty, will that work? [00:56:28] "Originally generated in 638.668 seconds using 8 queries" [00:56:39] so it finished, just took 10 minutes [00:56:43] yeah [00:56:47] whoa.. [00:56:48] that's definitely too slow [00:57:08] only 8 queries, not too many URLs checked, def shouldn't take that long [00:57:13] I'll look [00:57:19] seems to be intermittent [00:57:24] thanks Earwig :) [00:57:30] it said database is locked one time [00:57:31] I did change something recently [00:57:38] but it should make things faster, not slower... [00:57:43] i didn't copy the error, silly me.. then it didn't show it again [00:57:57] that could just be random labs things, I see it sometimes [00:58:10] by the way, if you're enthusiastic, [00:58:32] i would suggest making it possible to click the URLs listed at the top of the report to highlight the matches related to that URL only [00:58:50] i see everything in red now. don't know which passage matches which source [00:58:57] oh... [00:59:11] the highlighted matches are only for the top source [00:59:14] 10Tool-Labs-tools-anagrimes, 06Wiktionary, 15User-Dereckson: Create a MediaWiki extension to supersede http://tools.wmflabs.org/anagrimes/hasard.php - https://phabricator.wikimedia.org/T154730#2922122 (10Dereckson) [00:59:16] ok [00:59:17] it doesn't show all of them at once [00:59:34] i would like to be able to view them for the second source as well please [00:59:34] maybe I can try to clarify that [00:59:55] should not be hard to program..
mainly hard for the UI programming but not much of the logic behind it [00:59:56] you can click the 'compare' button on the side [00:59:57] next to the URL [00:59:59] ok [01:00:20] that sorts it out, just add a clarification at the top [01:00:24] that the red stuff is only first source [01:04:23] it's kind of strange, the logs just show the tool stalling for a few minutes in the middle of working [01:05:49] PROBLEM - Puppet run on tools-bastion-05 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [01:05:56] I want to maybe try k8s, but can I run a python webserver that way if it's 2.7? [01:07:14] 10Tool-Labs-tools-Xtools, 06Community-Tech: Investigation: Plan for rewriting XTools - https://phabricator.wikimedia.org/T154551#2922144 (10kaldari) My preference would be for any server-specific aspects of XTools to be configurable or optional. If we want developers to maintain it, it needs to be easy to set... [01:07:15] try it, i think there should be a way [01:16:42] RECOVERY - Puppet run on tools-services-01 is OK: OK: Less than 1.00% above the threshold [0.0] [01:17:43] PROBLEM - Puppet run on tools-bastion-02 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [01:25:18] 06Labs, 10Labs-Infrastructure, 10Tool-Labs, 10DBA, 10Wikimedia-Developer-Summit (2017): Labsdbs for WMF tools and contributors: get more data, faster - https://phabricator.wikimedia.org/T149624#2922174 (10Quiddity) Potential question for discussion (or moving to somewhere more appropriate!) - * Where sho... [01:25:39] PROBLEM - Puppet run on tools-exec-1408 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [01:29:01] 06Labs, 10Labs-Infrastructure: Deprecate precise instances in Labs by 03/31/2017 - https://phabricator.wikimedia.org/T143349#2922210 (10Andrew) @dschwen, Yes, Trusty is fine although it will also be deprecated in a couple of years. Note, though, that actually doing an in-place dist-upgrade of an existing VM i... [01:29:11] Hello, I'm new and may be able to help with Puppet code. It wasn't clear whether Projects puppet or puppet-cleanup were still after contributors. What's the next step? [01:33:46] RECOVERY - Puppet run on tools-worker-1010 is OK: OK: Less than 1.00% above the threshold [0.0] [01:40:47] RECOVERY - Puppet run on tools-bastion-05 is OK: OK: Less than 1.00% above the threshold [0.0] [01:52:42] RECOVERY - Puppet run on tools-bastion-02 is OK: OK: Less than 1.00% above the threshold [0.0] [02:00:29] friendly12345: there may be some things to work on for WMF production puppet in https://phabricator.wikimedia.org/project/board/78/ [02:01:03] we are generally bad at triaging Puppet work to make it easy to get started helping [02:01:40] mutante: do you know of any general Puppet cleanup stuff that friendly12345 might be able to take a shot at writing patches for? [02:06:23] bd808: Alright I'll take a look at one of the 'trivial' ones and give it a go [02:08:27] friendly12345: the ops team tends to hang out in the #wikimedia-operations channel and may be able to help you find things that need doing. This time of day most of them are not watching irc though [02:09:24] EU work hours have the most of them about and things really clear out after about 21:00Z or so [02:10:05] we also have the mediawiki-vagrant project that uses Puppet to provision developer VMs. 
It may have some things worth working on [02:10:36] RECOVERY - Puppet run on tools-exec-1408 is OK: OK: Less than 1.00% above the threshold [0.0] [02:10:48] I've been working on a branch there to port us from Ubuntu Trusty to Debian Jessie and that's mostly puppet stuff [02:11:44] bd808: friendly12345: so i had one ticket that i mentored in Google-CodeIn but nobody took [02:11:59] the job is to add documentation to the puppet classes [02:12:18] what it needs is a one-line comment that describes what the class does [02:12:43] that means you don't actually have to write new puppet code, but look at the existing one and figure out what it does, on a high level [02:13:09] friendly12345: [02:14:09] PROBLEM - Puppet run on tools-webgrid-lighttpd-1210 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [02:17:00] legoktm: around? [02:19:54] friendly12345: https://phabricator.wikimedia.org/T127797 if that sounds interesting at all.. it doesn't have to be all of them, but might also be useful to get a feel for the existing code :) [02:21:14] enterprisey: hi [02:21:28] legoktm: hi [02:21:41] just wondering about the rationale behind using the internal cdn vs google's [02:22:12] external CDN is technically against the Labs TOU [02:22:32] although poorly documented and currently not enforced [02:22:50] hmm, good to know [02:22:54] in what way? [02:23:04] privacy [02:23:29] how so? [02:23:34] embedding external resources in html served from Labs leaks visitor IPs to third parties [02:23:41] mutante: You probably are well aware of this but taking a look at puppet-operations the structure is not exactly something a Puppet pattern newcomer would have seen within the last 2 years, although I did see a Phabricator RFC issue discussing future changes to structure [02:23:43] without consent [02:23:47] when you ask a browser to send a request to google's cdn, it may set tracking cookies, leak IP addresses, etc. [02:24:11] friendly12345: oh you will fit right in :) [02:24:27] _joe_ just published a new set of recommendations for new code [02:24:34] hmm [02:25:04] um [02:25:17] but that's only correct if it's the user's first time using a website that uses jquery [02:25:19] which is unlikely [02:25:19] enterprisey: the reason that is bad is that many Labs/Tools are linked directly from the wikis and so must follow the global WMF privacy policy [02:25:48] bd808: see above [02:25:52] not really [02:26:01] a false argument IMO [02:26:02] actually nvm [02:26:08] I was incorrect :) [02:26:20] etags means a 304 is still sent with headers to google [02:26:52] also we have a nice CDN, so make use of it ;) [02:27:24] I mean, performance-wise, I don't like it [02:27:25] but eh [02:27:36] and I've never, ever heard a privacy argument like that before [02:27:49] You've never heard that IP addresses are PII? [02:28:01] of course I've heard that [02:28:18] I've been in PII discussions at a web dev company before [02:28:40] User Agents are also PII. As are the Referer headers.
[02:28:49] every time you use the google or jquery CDN you give them a referrer, a timestamp, and an IP address [02:28:59] I doubt the performance advantage of using an external CDN versus the tool labs one matters that much with HTTP/2 and all that fancy stuff [02:29:57] (btw the context for all of this is https://github.com/APerson241/vote-history/issues/2) [02:29:58] friendly12345: the new standard is documented at https://wikitech.wikimedia.org/wiki/Puppet_coding#Organization [02:30:21] bd808: you mentioned a set of recommendations for new code? [02:30:42] enterprisey: for Puppet modules [02:30:46] oh [02:30:52] I thought those were about privacy, etc [02:31:08] I need to work on some for tools though. it's "on the list" ;) [02:31:12] two conversations are happening right now :P [02:31:24] * bd808 is split-brain [02:31:54] 06Labs, 10Tool-Labs, 06Community-Tech-Tool-Labs: Make a nag system to email maintainers of tools still running on precise gird hosts - https://phabricator.wikimedia.org/T149214#2922332 (10bd808) This looks like a promising start based on @scfc's research: ``` tools-bastion-02.tools:~/projects/T149214 bd808$... [02:34:01] * bd808 -> dinner [02:35:51] darn, now I have like at least 20 different repositories to update [02:37:26] bd808: I'll check it out [02:40:03] friendly12345: the structure is actually much closer to puppet style guide nowadays. especially since everythign from manifests/ is now in proper modules [02:40:39] there are only a few more to split up (mariadb) and it won't have autoloader layout warnings at all [02:47:31] the RFC is about adding the new concept of "profile" yea [02:51:13] mutante: Is the point of only allowing class { } and not include ::class in profiles to help with debugging? Because that means that profiles that have the same class in them can't both be used in a role due to duplicate declarations [02:53:59] mutante: Custom functions like require_package() and merge() look a bit clunky, but I'm sure there is/was a good reason for them [02:54:10] RECOVERY - Puppet run on tools-webgrid-lighttpd-1210 is OK: OK: Less than 1.00% above the threshold [0.0] [02:55:17] friendly12345: I don't have the answer to that as i didn't write it and we don't actually have profiles in code yet. Actually the mail that said we are actually starting to use that RFC and our Puppet coding guidelines is 15 hours old :) [02:56:01] 'change' the Puppet coding guidelines .. i meant.. 
at https://wikitech.wikimedia.org/wiki/Puppet_coding#Organization [03:11:11] Change on 12wikitech.wikimedia.org a page Nova Resource:Tools/Access Request/Juniorsys was created, changed by Juniorsys link https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Access_Request/Juniorsys edit summary: Created page with "{{Tools Access Request |Justification=I wish to assist with making changes to Puppet code |Completed=false |User Name=Juniorsys }}" [03:13:16] ^ That was me [03:21:18] friendly12345: if you want to test puppet code and have a local puppetmaster, you should request access to "labs" instead of "tool labs" [03:21:21] friendly12345: https://wikitech.wikimedia.org/wiki/Help:FAQ#What_is_the_difference_between_Labs_and_Tool_Labs_.3F [03:37:42] PROBLEM - Puppet run on tools-services-01 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [04:37:42] RECOVERY - Puppet run on tools-services-01 is OK: OK: Less than 1.00% above the threshold [0.0] [04:38:04] 06Labs, 10Gerrit: Strange errors when cloning operations/mediawiki-config from gerrit to labs NFS - https://phabricator.wikimedia.org/T142787#2922497 (10demon) [04:38:08] 06Labs, 10Tool-Labs, 10Gerrit, 10Pywikibot-core: Fresh clone of pywikibot from gerrit fails with error: RPC failed; result=56, HTTP code = 200 on Toollabs - https://phabricator.wikimedia.org/T151351#2922500 (10demon) [04:38:17] 06Labs, 10Gerrit: Strange errors when cloning operations/mediawiki-config from gerrit to labs NFS - https://phabricator.wikimedia.org/T142787#2546348 (10demon) (Yes, this is the older task but that one has more comments/details) [04:38:28] 06Labs, 10Tool-Labs, 10Gerrit, 10Pywikibot-core: Fresh clone of pywikibot from gerrit fails with error: RPC failed; result=56, HTTP code = 200 on Toollabs (NFS) - https://phabricator.wikimedia.org/T151351#2814978 (10demon) [04:54:21] 06Labs, 10Labs-Sprint-102, 10Labs-Sprint-103, 10Labs-Sprint-104, and 3 others: Audit projects' use of NFS, and remove it where not necessary - https://phabricator.wikimedia.org/T102240#2922530 (10Krinkle) [04:54:23] 06Labs: Investigate reducing use of NFS in cvn project - https://phabricator.wikimedia.org/T102370#2922527 (10Krinkle) 05Open>03Resolved a:03Krinkle This was resolved sometime last year. All bots are installed on local disks now, run from there, and read/write their data files locally as well. NFS is used... 
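To make the ETag point from the CDN discussion above concrete: even when a visitor already has jQuery cached, the browser still sends a conditional revalidation request to the third-party host, and that request carries the visitor's IP address, User-Agent and Referer regardless of whether the answer is a 304. A minimal curl sketch of that revalidation; the ETag and Referer values below are placeholders, not real ones:

```
# Simulate a cache revalidation against Google-hosted jQuery. Whatever the
# response code turns out to be (200 or 304), the request itself -- with the
# client's IP, User-Agent and Referer -- has already reached the external CDN.
curl -sI https://ajax.googleapis.com/ajax/libs/jquery/3.1.1/jquery.min.js \
     -H 'If-None-Match: "placeholder-etag"' \
     -H 'Referer: https://tools.wmflabs.org/example-tool/' \
     -o /dev/null -w 'got HTTP %{http_code} from %{url_effective}\n'
```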
[07:08:42] PROBLEM - Puppet run on tools-services-01 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [08:13:41] RECOVERY - Puppet run on tools-services-01 is OK: OK: Less than 1.00% above the threshold [0.0] [09:26:01] PROBLEM - Puppet run on tools-services-02 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [10:06:02] RECOVERY - Puppet run on tools-services-02 is OK: OK: Less than 1.00% above the threshold [0.0] [14:09:41] PROBLEM - Puppet run on tools-services-01 is CRITICAL: CRITICAL: 60.00% of data above the critical threshold [0.0] [15:31:58] PROBLEM - Puppet run on tools-worker-1003 is CRITICAL: CRITICAL: 100.00% of data above the critical threshold [0.0] [15:34:37] (03Restored) 10Hashar: Introduce tox as an entry point [labs/tools/crosswatch] - 10https://gerrit.wikimedia.org/r/265735 (owner: 10Hashar) [15:34:46] (03PS4) 10Hashar: Introduce tox as an entry point [labs/tools/crosswatch] - 10https://gerrit.wikimedia.org/r/265735 [15:48:41] (03PS1) 10Hashar: Fix up all flake8 errors [labs/tools/crosswatch] - 10https://gerrit.wikimedia.org/r/330919 [15:49:40] (03CR) 10Hashar: [V: 032 C: 032] Introduce tox as an entry point [labs/tools/crosswatch] - 10https://gerrit.wikimedia.org/r/265735 (owner: 10Hashar) [15:50:00] (03CR) 10Hashar: "check experimental" [labs/tools/crosswatch] - 10https://gerrit.wikimedia.org/r/330919 (owner: 10Hashar) [15:52:26] (03CR) 10Hashar: [C: 032] Point SSH key goal at local key management screen [labs/striker] - 10https://gerrit.wikimedia.org/r/328841 (https://phabricator.wikimedia.org/T144711) (owner: 10BryanDavis) [15:52:47] (03CR) 10Hashar: [C: 032] Remove some vertical whitespace on linked accounts screen [labs/striker] - 10https://gerrit.wikimedia.org/r/328621 (owner: 10BryanDavis) [15:53:27] (03CR) 10Hashar: [V: 032 C: 032] Replace git.wikimedia.org links with diffusion [labs/tools/crosswatch] - 10https://gerrit.wikimedia.org/r/305293 (https://phabricator.wikimedia.org/T139089) (owner: 10Paladox) [15:55:25] 10Tool-Labs-tools-anagrimes, 06Wiktionary, 15User-Dereckson: Create a MediaWiki extension to supersede http://tools.wmflabs.org/anagrimes/hasard.php - https://phabricator.wikimedia.org/T154730#2923562 (10Darkdadaah) The current tool uses a more complex database primarily designed for the custom search in htt... 
[15:55:25] (03CR) 10Hashar: [C: 032] Fix up all flake8 errors [labs/tools/crosswatch] - 10https://gerrit.wikimedia.org/r/330919 (owner: 10Hashar) [15:56:40] (03Merged) 10jenkins-bot: Fix up all flake8 errors [labs/tools/crosswatch] - 10https://gerrit.wikimedia.org/r/330919 (owner: 10Hashar) [16:07:05] PROBLEM - Puppet run on tools-exec-1401 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [16:08:31] PROBLEM - Puppet run on tools-worker-1007 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [16:08:37] PROBLEM - Puppet run on tools-exec-1417 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [16:08:44] PROBLEM - Puppet run on tools-proxy-02 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [16:10:14] PROBLEM - Puppet run on tools-precise-dev is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [16:10:34] PROBLEM - Puppet run on tools-webgrid-lighttpd-1403 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [16:10:38] PROBLEM - Puppet run on tools-exec-1406 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [16:11:54] PROBLEM - Puppet run on tools-exec-1214 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [16:12:06] PROBLEM - Puppet run on tools-exec-1407 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [16:12:50] PROBLEM - Puppet run on tools-worker-1013 is CRITICAL: CRITICAL: 60.00% of data above the critical threshold [0.0] [16:13:43] PROBLEM - Puppet run on tools-bastion-02 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [16:13:59] ^ this is expected I imagine as labservices1001 is under maint [16:14:21] PROBLEM - Puppet run on tools-exec-1412 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [16:14:29] PROBLEM - Puppet run on tools-exec-1409 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0] [16:15:11] PROBLEM - Puppet run on tools-webgrid-lighttpd-1209 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [16:15:13] PROBLEM - Puppet run on tools-webgrid-lighttpd-1210 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [16:15:17] PROBLEM - Puppet run on tools-exec-1212 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [16:15:34] 06Labs, 10Labs-Infrastructure, 06Operations, 10ops-eqiad, 07Wikimedia-Incident: Replace fans (or paste) on labservices1001 - https://phabricator.wikimedia.org/T154391#2923595 (10Cmjohnson) @Andrew Replaced the thermal paste on labservices1001....it didn't look dry and crusty so not 100% it will fix the i... 
[16:15:43] PROBLEM - Puppet run on tools-exec-1216 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0] [16:15:51] PROBLEM - Puppet run on tools-webgrid-lighttpd-1409 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0] [16:16:13] PROBLEM - Puppet run on tools-webgrid-lighttpd-1205 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0] [16:16:53] PROBLEM - Puppet run on tools-exec-gift is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [16:17:02] PROBLEM - Puppet run on tools-services-02 is CRITICAL: CRITICAL: 100.00% of data above the critical threshold [0.0] [16:17:50] PROBLEM - Puppet run on tools-exec-1217 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [16:18:12] PROBLEM - Puppet run on tools-webgrid-lighttpd-1410 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [16:19:42] RECOVERY - Puppet run on tools-services-01 is OK: OK: Less than 1.00% above the threshold [0.0] [16:42:07] RECOVERY - Puppet run on tools-exec-1401 is OK: OK: Less than 1.00% above the threshold [0.0] [16:43:30] RECOVERY - Puppet run on tools-worker-1007 is OK: OK: Less than 1.00% above the threshold [0.0] [16:45:17] RECOVERY - Puppet run on tools-precise-dev is OK: OK: Less than 1.00% above the threshold [0.0] [16:47:05] RECOVERY - Puppet run on tools-exec-1407 is OK: OK: Less than 1.00% above the threshold [0.0] [16:47:51] RECOVERY - Puppet run on tools-worker-1013 is OK: OK: Less than 1.00% above the threshold [0.0] [16:48:39] RECOVERY - Puppet run on tools-exec-1417 is OK: OK: Less than 1.00% above the threshold [0.0] [16:48:45] RECOVERY - Puppet run on tools-proxy-02 is OK: OK: Less than 1.00% above the threshold [0.0] [16:50:32] RECOVERY - Puppet run on tools-webgrid-lighttpd-1403 is OK: OK: Less than 1.00% above the threshold [0.0] [16:50:38] RECOVERY - Puppet run on tools-exec-1406 is OK: OK: Less than 1.00% above the threshold [0.0] [16:51:54] RECOVERY - Puppet run on tools-exec-1214 is OK: OK: Less than 1.00% above the threshold [0.0] [16:52:50] RECOVERY - Puppet run on tools-exec-1217 is OK: OK: Less than 1.00% above the threshold [0.0] [16:53:12] RECOVERY - Puppet run on tools-webgrid-lighttpd-1410 is OK: OK: Less than 1.00% above the threshold [0.0] [16:53:43] RECOVERY - Puppet run on tools-bastion-02 is OK: OK: Less than 1.00% above the threshold [0.0] [16:54:19] RECOVERY - Puppet run on tools-exec-1412 is OK: OK: Less than 1.00% above the threshold [0.0] [16:54:28] RECOVERY - Puppet run on tools-exec-1409 is OK: OK: Less than 1.00% above the threshold [0.0] [16:55:10] RECOVERY - Puppet run on tools-webgrid-lighttpd-1209 is OK: OK: Less than 1.00% above the threshold [0.0] [16:55:12] RECOVERY - Puppet run on tools-webgrid-lighttpd-1210 is OK: OK: Less than 1.00% above the threshold [0.0] [16:55:14] RECOVERY - Puppet run on tools-exec-1212 is OK: OK: Less than 1.00% above the threshold [0.0] [16:55:43] RECOVERY - Puppet run on tools-exec-1216 is OK: OK: Less than 1.00% above the threshold [0.0] [16:55:51] RECOVERY - Puppet run on tools-webgrid-lighttpd-1409 is OK: OK: Less than 1.00% above the threshold [0.0] [16:56:13] RECOVERY - Puppet run on tools-webgrid-lighttpd-1205 is OK: OK: Less than 1.00% above the threshold [0.0] [16:56:53] RECOVERY - Puppet run on tools-exec-gift is OK: OK: Less than 1.00% above the threshold [0.0] [17:02:00] RECOVERY - Puppet run on tools-services-02 is OK: OK: Less than 1.00% above the threshold [0.0] [17:05:42] PROBLEM - Puppet run on 
tools-services-01 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [17:31:25] halfak: Your instance snuggle-en is running Precise… I see that you also have snuggle-enwiki-01… does that mean that the former is obsolete? [17:33:10] andrewbogott, yes it is. I've not had time to make sure that I'm ready to delete yet. [17:33:12] Sorry for the trouble. [17:33:32] halfak: no problem, I'm just looking for low-hanging fruit. I'll make a note that you're on the case :) [17:33:42] Great :) [17:33:46] I'll have it resolved soon. [17:34:07] 06Labs, 10Labs-Infrastructure: Deprecate precise instances in Labs by 03/31/2017 - https://phabricator.wikimedia.org/T143349#2923862 (10Andrew) [17:46:36] halfak: for that gerrit thing... instead of refs/publish/master you need it to be refs/publish/the_other_branch_your_targeting. In your .git/config there should be a "merge" entry for the local branch. Set it to refs/heads/... and git-review should do the right thing [17:47:17] not literally ... of course ;) [17:47:45] Thanks bd808. I see "refs/publish" and "refs/heads" in your message. [17:48:29] yeah, the publish part you can see in your paste -- http://pastebin.ca/3753822 -- comes from git-review [17:48:58] but it's based on the upstream tracking branch for the local branch which is heads [17:49:00] I don't see "refs/publish" in my config [17:49:57] yeah, you shouldn't [17:50:09] but you should see heads for the upstream tracking [17:50:30] OK. But you said to change "refs/publish/master" [17:50:31] git-review should use the upstream tracking branch to figure out the push location [17:50:42] doh sorry [17:51:08] so for a local one I have, it's "merge = refs/heads/jessie-migration" [17:51:18] refs/heads/the_upstream_target [17:51:35] So just change the merge = refs/heads/... line for my local branch? [17:51:48] to merge = refs/heads/wmflabs? [17:51:54] yes, that should make git-review send the patch to the right branch in gerrit [17:52:05] kk testing :) [17:52:21] hmm.. same error [17:53:12] Even says "! [remote rejected] HEAD -> refs/publish/master/update_libraries (no new changes)" [17:53:16] I'll paste my config [17:53:47] http://pastebin.ca/3753839 [17:53:59] bd808, ^ does that look right? [17:55:01] Oh wait. I got the error to change to something that looks *almost* right by explicitly putting "wmflabs" as review branch via CLI [17:55:10] "! [remote rejected] HEAD -> refs/publish/wmflabs/update_libraries (no new changes)" [17:55:11] halfak: hmmm... yeah. [17:55:32] that looks like the correct publish url [17:55:34] Note the change from refs/publish/master/... to refs/publish/wmflabs/... [17:56:21] there is also a --track option to git-review that says "Choose the branch to submit the change against" [17:56:51] Here's a clear indication that there are, in fact changes: http://pastebin.ca/3753840 [17:56:54] Will try --track [17:57:44] Looks like "git review --track wmflabs" and "git review wmflabs" do the same thing [18:00:35] Here are my attempts and failures using --track: http://pastebin.ca/3753842 [18:01:46] so the other trick you could try is skipping git-review entirely and doing `git push origin HEAD:refs/for/wmflabs` [18:03:51] bd808, I would be infinitely happy to do that. Let me try. [18:04:18] "! [remote rejected] HEAD -> refs/for/wmflabs (no new changes)" [18:04:23] Arg! [18:04:29] How can there be no new changes!?
[18:06:03] Here's the attempt at pushing and a quick double-check that HEAD does, in fact, have my commit: http://pastebin.ca/3753845 [18:06:47] what does `git log @{upstream}..HEAD` tell you? [18:07:12] oh that's what you pasted basically [18:08:01] is that change-id recycled somehow? this is deep into gerrit voodoo :/ [18:11:00] "fatal: no upstream configured for branch 'update_libraries'" [18:11:28] so you local branch isn't tracking any upstream [18:11:40] I've been doing a lot of config editing :/ [18:11:46] but the direct push wouldn't care about that [18:12:56] should be "upstream = origin/wmflabs", right? [18:14:22] with the version of git I'm running locally it seems to set "merge = refs/heads/wmflabs" [18:14:49] Oh I have that merge line [18:14:58] We manually added that before. [18:15:42] RECOVERY - Puppet run on tools-services-01 is OK: OK: Less than 1.00% above the threshold [0.0] [18:15:42] but there's something funky with mine because @{upstream} seems to be master and not wmflabs :/ [18:16:35] Aha! I set "remote = origin" for the branch and then the "git log @{upstream}..HEAD" worked as intended. [18:16:59] still got "! [remote rejected] HEAD -> refs/publish/wmflabs/update_libraries (no new changes)" [18:17:04] for my git review call [18:17:09] * halfak curses gerrit [18:29:42] It turns out that the problem was that I'd git-review'd the exact commit as a change against master and abandoned it. The solution was to make a small change to the commit in question to generate a new hash and try that. [18:29:54] * halfak keeps the scrollback readers happy [19:02:42] !log tools Terminated deprecated instances tools-exec-121[2-6] (T154539) [19:02:46] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL [19:02:46] T154539: Reduce Precise OGE exec hosts to 5 - https://phabricator.wikimedia.org/T154539 [19:05:47] I would expect that shinken will whine about some of those tools-exec nodes being gone for a while. 
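Pulling the gerrit thread above together: a hedged sketch of what bd808 suggested and what eventually worked for halfak. The branch names (update_libraries, wmflabs) are the ones from the conversation; the rest of the repository layout is assumed.

```
# Make the local branch track the gerrit branch it should be submitted against,
# so git-review builds refs/publish/wmflabs/... instead of refs/publish/master/...
git checkout update_libraries
git branch --set-upstream-to=origin/wmflabs   # same effect as "merge = refs/heads/wmflabs" in .git/config

# Submit via git-review, naming the target branch explicitly...
git review wmflabs

# ...or skip git-review and push straight to gerrit's magic ref:
git push origin HEAD:refs/for/wmflabs

# If gerrit still answers "no new changes", the exact commit may already be
# attached to an earlier (here, abandoned) change. Amending it so the commit
# object differs and pushing again is what resolved it in the conversation above.
git commit --amend
git push origin HEAD:refs/for/wmflabs
```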
That happened the last time anyway [19:10:47] PROBLEM - Host tools-exec-1214 is DOWN: CRITICAL - Host Unreachable (10.68.17.253) [19:11:07] PROBLEM - Host tools-exec-1213 is DOWN: CRITICAL - Host Unreachable (10.68.17.252) [19:11:13] PROBLEM - Host tools-exec-1212 is DOWN: CRITICAL - Host Unreachable (10.68.17.166) [19:11:39] 06Labs, 10Tool-Labs, 07Epic: Phase out precise instances from Tool Labs - https://phabricator.wikimedia.org/T94790#2924139 (10bd808) [19:11:55] 06Labs, 10Tool-Labs, 07Epic: Phase out precise instances from Tool Labs - https://phabricator.wikimedia.org/T94790#1172717 (10bd808) [19:11:57] 06Labs, 10Tool-Labs, 13Patch-For-Review, 15User-bd808: Reduce Precise OGE exec hosts to 5 - https://phabricator.wikimedia.org/T154539#2915068 (10bd808) 05Open>03Resolved [19:24:01] PROBLEM - Puppet run on tools-exec-1218 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [19:24:39] PROBLEM - Puppet run on tools-docker-registry-01 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0] [19:24:47] PROBLEM - Puppet run on tools-worker-1010 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [19:25:09] PROBLEM - Puppet run on tools-webgrid-lighttpd-1402 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [19:25:11] PROBLEM - Puppet run on tools-webgrid-lighttpd-1203 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [19:25:18] PROBLEM - Puppet run on tools-redis-1001 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [19:25:48] PROBLEM - Puppet run on tools-webgrid-lighttpd-1404 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [19:25:54] PROBLEM - Puppet run on tools-worker-1005 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [19:26:13] PROBLEM - Puppet run on tools-webgrid-lighttpd-1201 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [19:27:09] PROBLEM - Puppet run on tools-webgrid-generic-1401 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [19:27:23] PROBLEM - Puppet run on tools-exec-1219 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [19:27:31] PROBLEM - Puppet run on tools-docker-registry-02 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [19:27:37] PROBLEM - Puppet run on tools-webgrid-lighttpd-1204 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0] [19:27:43] PROBLEM - Puppet run on tools-exec-1405 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [0.0] [19:28:09] PROBLEM - Puppet run on tools-webgrid-lighttpd-1414 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [19:28:20] PROBLEM - Puppet run on tools-worker-1019 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [19:28:22] PROBLEM - Puppet run on tools-worker-1004 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [19:28:30] PROBLEM - Puppet run on tools-webgrid-lighttpd-1406 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [19:28:42] that was me.. 
fixing it [19:28:45] 06Labs, 10Labs-Infrastructure, 07Need-volunteer, 13Patch-For-Review: Redirect https://toolserver.org/~magnus/ - https://phabricator.wikimedia.org/T113696#2924190 (10DatGuy) 05Open>03Resolved a:03DatGuy Cheers Dzahn [19:28:56] PROBLEM - Puppet run on tools-proxy-01 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [19:29:02] PROBLEM - Puppet run on tools-exec-1415 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [19:29:16] PROBLEM - Puppet run on tools-logs-02 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [19:29:31] 06Labs, 10Labs-Infrastructure, 07Need-volunteer, 13Patch-For-Review: Redirect https://toolserver.org/~magnus/ - https://phabricator.wikimedia.org/T113696#2924193 (10Dzahn) @DatGuy when i follow the original link it's still 404 though. caching? [19:29:38] PROBLEM - Puppet run on tools-webgrid-lighttpd-1407 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [19:29:40] PROBLEM - Puppet run on tools-worker-1022 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [19:29:42] PROBLEM - Puppet run on tools-exec-1404 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [19:30:08] PROBLEM - Puppet run on tools-exec-1419 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [19:30:18] PROBLEM - Puppet run on tools-webgrid-lighttpd-1413 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [19:30:30] PROBLEM - Puppet run on tools-worker-1014 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [19:30:31] PROBLEM - Puppet run on tools-static-10 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [19:30:33] PROBLEM - Puppet run on tools-webgrid-lighttpd-1208 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [19:31:05] PROBLEM - Puppet run on tools-webgrid-generic-1403 is CRITICAL: CRITICAL: 30.00% of data above the critical threshold [0.0] [19:32:11] PROBLEM - Puppet run on tools-webgrid-lighttpd-1207 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [19:32:23] PROBLEM - Puppet run on tools-docker-builder-03 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [19:32:39] PROBLEM - Puppet run on tools-exec-1408 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [19:32:59] PROBLEM - Puppet run on tools-exec-1403 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [19:33:17] PROBLEM - Puppet run on tools-worker-1015 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [0.0] [19:33:29] ^ next puppet run will do it [19:33:45] 06Labs, 10Labs-Infrastructure, 07Need-volunteer, 13Patch-For-Review: Redirect https://toolserver.org/~magnus/ - https://phabricator.wikimedia.org/T113696#2924201 (10DatGuy) As you said on May 17, > but https://tools.wmflabs.org/magnustools/thetalkpage is also 404 so that would not change much. Probably... 
[19:34:08] PROBLEM - Puppet run on tools-exec-1413 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [19:34:18] PROBLEM - Puppet run on tools-webgrid-generic-1404 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [19:35:04] 06Labs, 10Labs-Infrastructure, 13Patch-For-Review: Change upper-bound system uid range to 499 - https://phabricator.wikimedia.org/T45795#2924204 (10scfc) 05Open>03Resolved [19:36:48] PROBLEM - Puppet run on tools-bastion-05 is CRITICAL: CRITICAL: 60.00% of data above the critical threshold [0.0] [19:39:49] PROBLEM - Puppet run on tools-exec-1402 is CRITICAL: CRITICAL: 100.00% of data above the critical threshold [0.0] [19:49:42] RECOVERY - Puppet run on tools-exec-1404 is OK: OK: Less than 1.00% above the threshold [0.0] [20:01:11] RECOVERY - Puppet run on tools-webgrid-lighttpd-1201 is OK: OK: Less than 1.00% above the threshold [0.0] [20:02:24] RECOVERY - Puppet run on tools-exec-1219 is OK: OK: Less than 1.00% above the threshold [0.0] [20:02:32] RECOVERY - Puppet run on tools-docker-registry-02 is OK: OK: Less than 1.00% above the threshold [0.0] [20:02:38] RECOVERY - Puppet run on tools-webgrid-lighttpd-1204 is OK: OK: Less than 1.00% above the threshold [0.0] [20:02:46] RECOVERY - Puppet run on tools-exec-1405 is OK: OK: Less than 1.00% above the threshold [0.0] [20:03:06] RECOVERY - Puppet run on tools-webgrid-lighttpd-1414 is OK: OK: Less than 1.00% above the threshold [0.0] [20:04:03] RECOVERY - Puppet run on tools-exec-1218 is OK: OK: Less than 1.00% above the threshold [0.0] [20:04:15] RECOVERY - Puppet run on tools-logs-02 is OK: OK: Less than 1.00% above the threshold [0.0] [20:04:35] RECOVERY - Puppet run on tools-webgrid-lighttpd-1407 is OK: OK: Less than 1.00% above the threshold [0.0] [20:04:37] RECOVERY - Puppet run on tools-docker-registry-01 is OK: OK: Less than 1.00% above the threshold [0.0] [20:04:47] RECOVERY - Puppet run on tools-worker-1010 is OK: OK: Less than 1.00% above the threshold [0.0] [20:05:09] RECOVERY - Puppet run on tools-exec-1419 is OK: OK: Less than 1.00% above the threshold [0.0] [20:05:09] RECOVERY - Puppet run on tools-webgrid-lighttpd-1203 is OK: OK: Less than 1.00% above the threshold [0.0] [20:05:10] RECOVERY - Puppet run on tools-webgrid-lighttpd-1402 is OK: OK: Less than 1.00% above the threshold [0.0] [20:05:15] RECOVERY - Puppet run on tools-redis-1001 is OK: OK: Less than 1.00% above the threshold [0.0] [20:05:16] PROBLEM - Puppet staleness on tools-worker-1003 is CRITICAL: CRITICAL: 100.00% of data above the critical threshold [43200.0] [20:05:31] RECOVERY - Puppet run on tools-static-10 is OK: OK: Less than 1.00% above the threshold [0.0] [20:05:32] RECOVERY - Puppet run on tools-webgrid-lighttpd-1208 is OK: OK: Less than 1.00% above the threshold [0.0] [20:05:48] RECOVERY - Puppet run on tools-webgrid-lighttpd-1404 is OK: OK: Less than 1.00% above the threshold [0.0] [20:05:54] RECOVERY - Puppet run on tools-worker-1005 is OK: OK: Less than 1.00% above the threshold [0.0] [20:07:08] RECOVERY - Puppet run on tools-webgrid-generic-1401 is OK: OK: Less than 1.00% above the threshold [0.0] [20:07:12] RECOVERY - Puppet run on tools-webgrid-lighttpd-1207 is OK: OK: Less than 1.00% above the threshold [0.0] [20:07:40] RECOVERY - Puppet run on tools-exec-1408 is OK: OK: Less than 1.00% above the threshold [0.0] [20:08:17] RECOVERY - Puppet run on tools-worker-1015 is OK: OK: Less than 1.00% above the threshold [0.0] [20:08:19] RECOVERY - Puppet run on tools-worker-1019 is OK: 
OK: Less than 1.00% above the threshold [0.0] [20:08:21] RECOVERY - Puppet run on tools-worker-1004 is OK: OK: Less than 1.00% above the threshold [0.0] [20:08:29] RECOVERY - Puppet run on tools-webgrid-lighttpd-1406 is OK: OK: Less than 1.00% above the threshold [0.0] [20:08:55] RECOVERY - Puppet run on tools-proxy-01 is OK: OK: Less than 1.00% above the threshold [0.0] [20:09:01] RECOVERY - Puppet run on tools-exec-1415 is OK: OK: Less than 1.00% above the threshold [0.0] [20:09:11] RECOVERY - Puppet run on tools-exec-1413 is OK: OK: Less than 1.00% above the threshold [0.0] [20:09:18] PROBLEM - Puppet run on tools-worker-1001 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [20:09:22] RECOVERY - Puppet run on tools-webgrid-generic-1404 is OK: OK: Less than 1.00% above the threshold [0.0] [20:09:39] RECOVERY - Puppet run on tools-worker-1022 is OK: OK: Less than 1.00% above the threshold [0.0] [20:09:41] PROBLEM - Puppet run on tools-mail-01 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0] [20:09:51] RECOVERY - Puppet run on tools-exec-1402 is OK: OK: Less than 1.00% above the threshold [0.0] [20:09:53] PROBLEM - Puppet run on tools-worker-1002 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [0.0] [20:10:18] RECOVERY - Puppet run on tools-webgrid-lighttpd-1413 is OK: OK: Less than 1.00% above the threshold [0.0] [20:10:30] RECOVERY - Puppet run on tools-worker-1014 is OK: OK: Less than 1.00% above the threshold [0.0] [20:11:02] RECOVERY - Puppet run on tools-webgrid-generic-1403 is OK: OK: Less than 1.00% above the threshold [0.0] [20:11:32] PROBLEM - Puppet run on tools-webgrid-lighttpd-1403 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [20:11:38] PROBLEM - Puppet run on tools-exec-1406 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [20:11:49] RECOVERY - Puppet run on tools-bastion-05 is OK: OK: Less than 1.00% above the threshold [0.0] [20:12:22] RECOVERY - Puppet run on tools-docker-builder-03 is OK: OK: Less than 1.00% above the threshold [0.0] [20:12:26] PROBLEM - Puppet run on tools-checker-02 is CRITICAL: CRITICAL: 60.00% of data above the critical threshold [0.0] [20:12:59] RECOVERY - Puppet run on tools-exec-1403 is OK: OK: Less than 1.00% above the threshold [0.0] [20:13:11] PROBLEM - Puppet run on tools-webgrid-lighttpd-1412 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [20:13:19] PROBLEM - Puppet run on tools-prometheus-01 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0] [20:14:21] PROBLEM - Puppet run on tools-exec-1410 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0] [20:15:30] PROBLEM - Puppet run on tools-exec-1409 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [0.0] [20:47:25] RECOVERY - Puppet run on tools-checker-02 is OK: OK: Less than 1.00% above the threshold [0.0] [20:48:21] RECOVERY - Puppet run on tools-prometheus-01 is OK: OK: Less than 1.00% above the threshold [0.0] [20:49:16] RECOVERY - Puppet run on tools-worker-1001 is OK: OK: Less than 1.00% above the threshold [0.0] [20:49:39] RECOVERY - Puppet run on tools-mail-01 is OK: OK: Less than 1.00% above the threshold [0.0] [20:49:53] RECOVERY - Puppet run on tools-worker-1002 is OK: OK: Less than 1.00% above the threshold [0.0] [20:51:33] RECOVERY - Puppet run on tools-webgrid-lighttpd-1403 is OK: OK: Less than 1.00% above the threshold [0.0] [20:51:41] RECOVERY - Puppet run on tools-exec-1406 is OK: 
OK: Less than 1.00% above the threshold [0.0] [20:53:11] RECOVERY - Puppet run on tools-webgrid-lighttpd-1412 is OK: OK: Less than 1.00% above the threshold [0.0] [20:54:19] RECOVERY - Puppet run on tools-exec-1410 is OK: OK: Less than 1.00% above the threshold [0.0] [20:55:27] RECOVERY - Puppet run on tools-exec-1409 is OK: OK: Less than 1.00% above the threshold [0.0] [21:06:41] PROBLEM - Puppet run on tools-services-01 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [21:26:50] 06Labs, 10Labs-Sprint-109: Remove reliance on ldap $::projectid from shinkengen - https://phabricator.wikimedia.org/T108625#2924472 (10Andrew) [21:26:52] 06Labs, 07LDAP, 13Patch-For-Review: Clean up ldap host entries and references - https://phabricator.wikimedia.org/T148781#2924471 (10Andrew) [21:28:01] 06Labs, 10Labs-Sprint-109: Remove reliance on ldap $::projectid from shinkengen - https://phabricator.wikimedia.org/T108625#1525456 (10Andrew) I wrote an alternative fix for this, and I'm not sure which I like better. Probably Krenair's, in theory, although there may be package dependency issues. I'll test b... [21:35:27] andrewbogott, yeah, those need the python3 versions of the openstack client packages [21:35:44] Which in most cases don't exist [21:35:49] a bunch of pieces of software is blocked on migrating from the wikitech API on this [21:35:59] I think I'm going to just demote that tool to python2; it works fine in either. [21:36:21] well, that's your choice. I thought I got it working on a VM with python3 packages [21:36:34] oh? Maybe I haven't dug deep enough [21:36:50] I got a lot of incoherent dependency warnings when I tried to install the python3 packages [21:36:53] but I'll give it another go [21:37:06] (I already backported 20 or so things to get this working with python 2 :( ) [21:37:11] andrewbogott: demoting things to python2 makes yuvipanda really really sad [21:37:20] yeah I think python3 will need seem backports :( [21:37:24] (debian packages also make yuvipanda really really sad) [21:37:39] (about to run into flight) [21:37:47] yuvipanda: I'm coding for the environment I have, not the environment I want to have in 2022 [21:38:01] In this case, demoting is literally a one-character change [21:38:13] But if there are a bunch of other python3 tools that also need those openstack client packages... [21:38:14] then... [21:38:18] I will also be sad [21:39:17] https://gerrit.wikimedia.org/r/#/q/owner:krenair+status:open+label:Code-Review-1%252Ckrenair+project:operations/puppet [21:39:59] 4 python3 things [21:40:40] dammit [21:40:41] ok [21:40:58] I'm not convinced that python3 is better, just 'a different language that doesn't support our use cases' [21:41:35] it doesn't support our use cases? [21:42:25] I feel like this isn't the first time I've needed dependencies that aren't available for 3 [21:42:40] In theory those cases will diminish over time [21:42:53] AFAIK it's just openstack packages needing newer versions of dependencies [21:43:04] But given that supporting 3 will be trivial in 2 years but will take me a day of backporting today... [21:43:09] which is a ubuntu/debian packaging thing mostly [21:43:10] I'm not super convinced in the merits. [21:43:22] ubuntu/debian packaging == our use case [21:43:36] ubuntu/debian packaging may not support our use case [21:44:26] andrewbogott: asyncio is worth it by itself, and python2 is EOL in 2020. 
Ubuntu 16.04 already has python3 as default python, and I think stretch might also have it as default [21:44:37] it's really not some far off in the future thing [21:45:27] This is like ipv6. It's not that I'm against it, it's just that I'm not going to single-handedly drag the entire ecosystem into the new standard. [21:45:32] anyway, I need lunch, back later [21:45:40] I'm boarding in about 15 minutes, which is the best time to get into language related arguments :) [21:46:15] My point is, don't argue with me — argue with the openstack devs and mirantis and the ubuntu cloud archive people. [21:46:16] yeah, I mostly agree with Krenair - but hating on debian packaging doesn't get us anywhere I guess. must continue pulling nails with pliers [21:46:19] THEY are the ones that don't support 3 [21:46:35] andrewbogott: wait, are these libraries not available or are the packages not available? [21:46:51] if the former, then totally ok. nothing we can do about it [21:46:54] https://wiki.openstack.org/wiki/Python3 [21:47:14] it looks like they are sort of mostly supported [21:47:42] most of the client libraries seem green [21:47:48] But, again — I am going to have to spend a day backporting now [21:47:56] rather than just removing a single number from a source file [21:48:10] I honestly think that is completely worth it. [21:48:17] well, and making the scripts be a mix of python2 and 3 in the 'wrong' direction [21:48:20] I'm happy not arguing with anyone [21:48:21] There's no reason I can't just replace that little '3' on the #! line two years from now when someone else has done this packaging work. [21:48:21] almost all of the NFS scripts and what not are python3 [21:48:31] I have some patches and you can apply them to python 2 or 3 however you like [21:49:42] personally I think the forward-compatible way would be python 3, but I'm not on the wikimedia operations team [21:49:57] I can work with either way [21:50:51] andrewbogott: it's a different language, not 'just an executable name'... but anyway, I hope you consider doing the backports. [21:50:51] off now [22:02:51] andrewbogott, you know if we want we could skip the whole openstack client packages thing and just communicate with the APIs directly [22:02:53] I've done it before [22:03:00] 06Labs, 10Labs-Infrastructure, 07Need-volunteer, 13Patch-For-Review: Redirect https://toolserver.org/~magnus/ - https://phabricator.wikimedia.org/T113696#2924590 (10Dzahn) Oh, of course, you're right. :) resolved then [22:05:17] we could reuse code from it [22:11:40] RECOVERY - Puppet run on tools-services-01 is OK: OK: Less than 1.00% above the threshold [0.0] [22:20:32] Hi all. Looking for some help. python3 jobs (`jsub infobox-coords-migrator` and `jsub infobox-coords-migrator-cron`) submitted using my tool (jjmc89-bot) are failing. The last successful run was December 14, and the script has not changed since December 8. On December 25, I noticed python3 jobs being interrupted by a keyboard interrupt; the message was appearing in the .err. Currently the .err does not have any information on why the job termi [22:50:02] JJMC89: keyboard interrupt sounds like maybe running out of ram and getting killed by the job scheduler [22:51:46] JJMC89: you might try adding something like '-mem 1g' to your jsub command [22:52:19] that would give you double the base 512m ram allocation limit [22:52:32] Thanks bd808. I'll try that. [22:52:43] without some logging it's hard to say what's really going on
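A hedged sketch of bd808's suggestion to JJMC89: raise the grid engine memory reservation above the 512m default, then check the accounting record to confirm whether memory really was the problem. The job name comes from the conversation; the exact script invocation is an assumption.

```
# Resubmit with a 1 GB memory reservation instead of the default 512 MB
# (adjust the command to however the script is actually invoked).
jsub -mem 1g -N infobox-coords-migrator ./infobox-coords-migrator

# After the next run, inspect grid engine accounting for the job: a maxvmem
# close to the requested limit plus a non-zero exit_status suggests the job
# was killed for exceeding its memory reservation.
qacct -j infobox-coords-migrator | egrep 'maxvmem|exit_status|failed'
```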