[00:02:50] drewmutt: the sha256 fingerprint of the key you have in LDAP is SHA256:GeMCUKNUnTzkhz2J1fzvIYjKZVC8BA99GKto9zzgz4Q. You can check the fingerprint of a key with `ssh-keygen -lf FILE`
[00:03:26] bd808: Thanks, I actually got it sorted out, I wasn't using the right key.
[00:03:35] :) easy to mess up
[00:04:35] bd808: i have like 20 different ssh keys lol
[00:04:35] SSH always is ^_^ Thanks for the help!
[00:04:50] all for tools
[00:05:08] Zppix: ummm... why?
[00:05:33] one for each personal device i use you know how hard it is to use 1 key for multiple devices
[00:06:12] that's a lot of hardware :)
[00:06:34] bd808: all of them are encrypted to max ldap will allow
[00:08:05] and privatekeys on my devices are stored in md5 hashed files (up to about 10-15 times hashed)
[00:09:54] md5?
[00:12:14] yes md5
[00:13:30] why would you use md5 for anything?
[00:14:25] bd808, hi
[00:14:34] do you have to type passphrases when you load the keys, Zppix
[00:14:39] bd808, i shared you a pastebin related to perl a few days ago, pinging you
[00:14:42] yes mutante
[00:14:53] Reedy: i love encryption
[00:15:00] well, you'd know md5 is shit
[00:15:10] Reedy: they're ssh keys no-one wants them
[00:17:03] gry: yeah... I looked around a bit but haven't figured out the problem. :/ I did see some things that looked sort of related in old bug tracker reports for ... something. I forget now what I really saw.
[00:17:27] gry: is this something that you got working before?
[00:22:16] bd808, only before the precise migration. the tool is down for two weeks because of this issue.
[00:22:38] ok, so it worked on precise but blows up on trusty
[00:22:47] that might help us track something down
[00:23:05] yes please
[00:23:23] * bd808 looks at the pastebin again
[00:23:29] as you see in the pastebin, there's a package whose version does not match requirements of the package I'm installing from cpan
[00:24:17] why would you use md5 for anything? <= +1
[00:24:21] gry: what tool?
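The `ssh-keygen -lf` check mentioned at the top of this exchange can be sketched as follows; the key path and comment below are demo placeholders, not anything from the channel — point `-lf` at your own public key in practice.

```shell
# Generate a throwaway key just for the demo, then print its SHA256
# fingerprint -- this is the value to compare against the fingerprint
# the admin reads back out of LDAP.
tmp=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -C demo@example -f "$tmp/demo_key" -q
ssh-keygen -lf "$tmp/demo_key.pub"
# prints something like: 256 SHA256:<base64-hash> demo@example (ED25519)
```

If the printed SHA256 value does not match the one recorded server-side, you are offering the wrong key — exactly the mix-up resolved above.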
[00:24:47] zhuyifei1999_: i've been wanting to convert it to something better but i dont have the time atm
[00:24:48] gpy
[00:25:01] gry: what is gpy
[00:25:18] an irc bot for a few small wikis, announcing category changes and urls
[00:25:45] on wmflabs?
[00:26:31] gry: was that custom perl you are using built on trusty?
[00:26:59] Zppix yes
[00:27:27] bd808 `perl --version` returns `v5.18.2` if I am checking the right thing
[00:27:38] Zppix maybe not wmflabs, it's wmtools
[00:28:28] what last version did it work on?
[00:29:49] dunno, sorry
[00:29:50] gry: ok. I was just poking around the tool and saw the perlbrew built perl in there. Looks like the cpanm that you have locally just calls to `/usr/bin/env perl` and that does seem to be the system perl
[00:30:44] honestly python would have been a better choice (and this coming from a person who HATES python) but perl is better when it comes to syntax iirc
[00:30:52] bd808, why does it call to system perl
[00:30:58] gry: thats default
[00:31:09] Zppix: don't pick on anyone's choice of language in this channel please
[00:31:41] maybe I forgot `source ~/perl5/perlbrew/etc/bashrc`, I was pretty sure it was in bashrc
[00:31:42] bd808: i wasnt i was just offering my advice
[00:33:00] gry: the '#!/usr/bin/env perl' uses the active $PATH and the tool account isn't setting that to include the perl5/perlbrew/perls/perl-5.24.1/bin dir.
[00:33:21] or choice of encryption! /me would like to encourage conversations such as "have you looked at x as an alternative? it offers these things, these are the drawbacks of y" rather than 'the thing you use sucks'
[00:33:40] bd808, maybe I will add `perlbrew init' and `perlbrew switch perl-5.24.1` to the job script, I'm not sure
[00:33:43] or to bashrc
[00:33:44] or something
[00:34:32] gry: yeah, if you want to use the perlbrew version you'll need to do something like that.
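A minimal sketch of the fix discussed above: source perlbrew's bashrc in the job script so that `#!/usr/bin/env perl` resolves to the perlbrew build instead of the system `/usr/bin/perl`. The wrapper filename and `bot.pl` path are hypothetical; the perlbrew paths and version are the ones quoted in the channel.

```shell
# Write a job wrapper that activates the perlbrew perl before exec'ing
# the bot. Without the `source` line, $PATH never gains
# ~/perl5/perlbrew/perls/perl-5.24.1/bin, so the system perl wins.
cat > "$HOME/run-gpy.sh" <<'EOF'
#!/bin/bash
source "$HOME/perl5/perlbrew/etc/bashrc"
perlbrew use perl-5.24.1        # per-shell switch; `perlbrew switch` would be global
exec perl "$HOME/gpy/bot.pl" "$@"
EOF
chmod +x "$HOME/run-gpy.sh"
```

Submitting this wrapper to the grid (rather than the bot script directly) gives every job the perlbrew environment without touching the login bashrc.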
[00:34:53] the top of your paste has "cpanm (App::cpanminus) 1.7014 on perl 5.018002 built for x86_64-linux-gnu-thread-multi" and that version number matches the system perl
[00:35:02] yes thanks for checking
[00:35:22] bd808: tools is on debian isnt it?
[00:35:47] the job grid and bastion hosts are all Ubuntu Trusty (14.04)
[00:36:02] Zppix madhuvishy thanks for suggestions.. I've got source code at gpy.nongnu.org, if you think a python implementation has advantages I would be happy to maintain it as well (familiar with both languages, just a tad more confident with perl as I freelanced kind of for it before)
[00:36:03] the kubernetes runtime is Debian Jessie
[00:43:33] gry: do you mind if I try installing that local::lib module in your tool after switching to the perlbrew perl version?
[00:43:56] you could do that if you like but I think I just did exactly that
[00:44:39] 06Labs, 10wikitech.wikimedia.org: Assign Zppix content admin on WikiTech - https://phabricator.wikimedia.org/T162218#3155935 (10Reedy)
[00:45:06] although I think I simply broke it
[00:46:07] no it's doing ok now
[00:46:26] I'm just reinstalling all the deps because they have new versions or something
[00:46:37] oops. I may be fighting against you :/
[00:46:57] we could do it in a shared screen session
[00:47:03] what's your username? bd808 too?
[00:47:04] "Successfully installed local-lib-2.0000"
[00:47:31] I'll step out and let you work :)
[00:47:40] this looks promising though
[00:50:03] okay
[00:50:59] "tmux -S /tmp/shareds attach -t shared" should let you see what's happening if you like (if you figure out how to get access to /tmp/shareds , i dunno how to do that when we two don't have any user groups in common)
[00:51:09] may be useful if I break it again
[00:51:15] I have root, so ... :)
[00:52:03] I'd have to tmux in tmux which always creeps me out a little
[00:52:27] * bd808 lives in an 80 col by 60 line tmux session
[00:53:10] I do the same thing.
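The shared-session idea above hinges on both users being able to open the socket file; here is a sketch using the socket path from the channel. Loosening the socket permissions like this is only reasonable between accounts that trust each other (bd808 sidestepped the question entirely by having root).

```shell
# First user: create a detached session on an explicit socket file,
# then open the socket up so another account can attach to it.
tmux -S /tmp/shareds new-session -d -s shared
chmod 666 /tmp/shareds            # coarse; a shared group plus 660 is nicer

# Second user: attach (read-write) to the same session.
tmux -S /tmp/shareds attach -t shared
```

Without a shared group, as noted in the channel, there is no clean way to scope the socket to just the two users — hence the blunt `chmod`.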
ssh to a uni cluster, and open a screen session. within that the irssi session is nested in one of the windows
[00:57:54] gry: https://perlbrew.pl/Perlbrew-In-Shell-Scripts.html -- may help you figure out how to bootstrap your bot
[03:56:26] !log wikilabels deploying bb7637f to staging
[03:56:29] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Wikilabels/SAL
[03:58:37] !log wikilabels deploying bb7637f to prod
[03:58:39] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Wikilabels/SAL
[06:34:35] PROBLEM - Puppet run on tools-exec-1423 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [0.0]
[07:00:42] PROBLEM - Puppet run on tools-exec-1404 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [0.0]
[07:14:36] RECOVERY - Puppet run on tools-exec-1423 is OK: OK: Less than 1.00% above the threshold [0.0]
[07:27:05] 06Labs, 10Labs-Infrastructure: Experiment with Linux KSM (dedupe memory shared by instances) on labs infra - https://phabricator.wikimedia.org/T146037#3156414 (10akosiaris) Please note that since 2016-11-21 ksm is disabled on our ganeti hosts. The reasons are twofold. a) We did not really see that much of a ga...
[07:35:35] PROBLEM - Free space - all mounts on tools-docker-builder-04 is CRITICAL: CRITICAL: tools.tools-docker-builder-04.diskspace.root.byte_percentfree (<10.00%)
[07:35:41] RECOVERY - Puppet run on tools-exec-1404 is OK: OK: Less than 1.00% above the threshold [0.0]
[07:59:24] 06Labs, 10Labs-Infrastructure: Experiment with Linux KSM (dedupe memory shared by instances) on labs infra - https://phabricator.wikimedia.org/T146037#3156465 (10hashar) 05Open>03declined Thanks for the paper that is a really interesting attack and apparently quite fast. Since there was little memory gain...
[08:29:59] 10Labs-project-Wikipedia-Requests, 10MediaWiki-Interface: Problem with Wikipedia logo on cywiki - https://phabricator.wikimedia.org/T162242#3156513 (10Llywelyn2000)
[08:36:48] 06Labs: Provide snapshot of http://tools-elastic-01.tools.eqiad.wmflabs logs - https://phabricator.wikimedia.org/T158199#3156546 (10Tarrow) 05Open>03Resolved a:03Tarrow I'm not sure what the solution to this problem is; or if it might impact other people. In the end I set up my own cluster under wikifactm...
[09:23:19] 06Labs, 10Tool-Labs, 10InternetArchiveBot: tools.iabot is overloading the grid by running too many workers in parallel - https://phabricator.wikimedia.org/T161951#3156695 (10Cyberpower678) Well considering this is looking more like a grid issue and not a tool issue, I'm going to remove InternetArchiveBot fro...
[09:44:45] 06Labs, 10wikitech.wikimedia.org: Assign Zppix content admin on WikiTech - https://phabricator.wikimedia.org/T162218#3156739 (10Aklapper) To clarify: Which specific MediaWiki user rights is this request about? (as "content admin" does not exist)? Which problems do you see on wikitech.wikimedia.org currently a...
[10:02:24] 06Labs, 10wikitech.wikimedia.org: Assign Zppix content admin on WikiTech - https://phabricator.wikimedia.org/T162218#3156784 (10Coffee) p:05Triage>03Lowest Note that multiple English Wikipedia admins/Wikimedia OTRS members/Wikimedia IRC admins strongly suggest that any request by Zppix for any additional t...
[10:22:15] Change on 12wikitech.wikimedia.org a page Nova Resource:Tools/Access Request/Saanina was created, changed by Saanina link https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Access_Request/Saanina edit summary: Created page with "{{Tools Access Request |Justification=A new bot for wikipedia Arabic, because of the lack of enough tools there. |Completed=false |User Name=Saanina }}"
[11:24:26] what's the easiest way to resolve WMF dbnames to domain names in a script?
[11:25:11] dbnames to domain names?
which dbs are we talking?
[11:26:11] and how "well" do you need it- a quick and dirty script or a production-like quality?
[11:27:29] maybe you are looking for the meta_p table on labs replicas?
[11:28:34] https://wikitech.wikimedia.org/wiki/Help:Tool_Labs/Database#Metadata_database
[11:28:59] (that is non-canonical, but easy)
[11:34:33] 06Labs, 10Labs-Infrastructure, 10DBA: LabsDB replica service for tools and labs - issues and missing available views (tracking) - https://phabricator.wikimedia.org/T150767#3157034 (10jcrespo)
[11:34:35] 06Labs: New entries in meta_p.wiki are missing a URL - https://phabricator.wikimedia.org/T142759#3157031 (10jcrespo) 05Open>03Resolved a:03jcrespo ``` ./sql.py -h labsdb1001.eqiad.wmnet meta_p -e "select dbname from wiki where url is null" --no-dry-run Results for labsdb1001.eqiad.wmnet:3306/meta_p: 0 row...
[11:38:24] 06Labs, 10wikitech.wikimedia.org: Assign Zppix content admin on WikiTech - https://phabricator.wikimedia.org/T162218#3155923 (10MarcoAurelio) @Aklapper: `contentadmin` permissions do indeed exist: https://wikitech.wikimedia.org/wiki/Wikitech:Content_administrators & https://wikitech.wikimedia.org/wiki/Special:...
[12:55:45] 06Labs, 10wikitech.wikimedia.org: Assign Zppix content admin on WikiTech - https://phabricator.wikimedia.org/T162218#3157139 (10Zppix) 05Open>03Invalid Withdrawn (I mistaken the wiki that i meant to apply for)
[14:08:49] 10Tool-Labs-tools-Xtools, 03Community-Tech-Sprint: Build new front-end for xtools-articleinfo - https://phabricator.wikimedia.org/T159395#3157317 (10MusikAnimal) Okay moving to Needs Review for real this time! Try it out at http://tools.wmflabs.org/xtools-dev/articleinfo/en.wikipedia.org/Google (in particular...
[17:22:28] 10Tool-Labs-tools-Xtools, 03Community-Tech-Sprint: Build new front-end for xtools-articleinfo - https://phabricator.wikimedia.org/T159395#3066378 (10Niharika) Where's the code for this?
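The meta_p lookup suggested earlier in this exchange can be sketched as a one-liner. The hostname and credentials file below are the ones documented on the linked Help page for tool accounts at the time; treat them as assumptions if your environment differs.

```shell
# Resolve a dbname to its canonical URL via the meta_p metadata database
# on the Labs replicas. Tool accounts get credentials in ~/replica.my.cnf.
mysql --defaults-file="$HOME/replica.my.cnf" -h meta.labsdb meta_p \
    -e "SELECT dbname, url FROM wiki WHERE dbname = 'dewiki';"
# illustrative output shape:
#   dbname  url
#   dewiki  https://de.wikipedia.org
```

As noted in the channel, meta_p is non-canonical (the canonical source is the MediaWiki configuration), but it is by far the easiest option from a script.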
I don't see it at https://github.com/MusikAnimal/xtools-rebirth
[17:24:24] 10Tool-Labs-tools-Xtools, 03Community-Tech-Sprint: Build new front-end for xtools-articleinfo - https://phabricator.wikimedia.org/T159395#3066378 (10Matthewrbowker) >>! In T159395#3158037, @Niharika wrote: > Where's the code for this? I don't see it at https://github.com/MusikAnimal/xtools-rebirth http://git...
[17:25:09] 10Tool-Labs-tools-Xtools, 03Community-Tech-Sprint: Build new front-end for xtools-articleinfo - https://phabricator.wikimedia.org/T159395#3158053 (10MusikAnimal) >>! In T159395#3158037, @Niharika wrote: > Where's the code for this? I don't see it at https://github.com/MusikAnimal/xtools-rebirth We've been pu...
[17:50:05] 06Labs, 10Tool-Labs, 10InternetArchiveBot: tools.iabot is overloading the grid by running too many workers in parallel - https://phabricator.wikimedia.org/T161951#3148214 (10bd808) @Cyberpower678 I feel like we have been down this road before. What are you doing with the IABot account that requires such an i...
[17:51:28] 06Labs, 10Labs-Infrastructure, 05MW-1.29-release (WMF-deploy-2017-03-28_(1.29.0-wmf.18)), 13Patch-For-Review: Support project creation without OpenStackManager - https://phabricator.wikimedia.org/T150091#3158141 (10Andrew)
[17:56:24] 10Tool-Labs-tools-Xtools: Adminstats isn't labelling administrators properly - https://phabricator.wikimedia.org/T154408#3158191 (10Matthewrbowker) So, answer to this question is thus: there was no "continue" logic for the API query that generated the list of administrators. We would stop after just one query...
[17:58:17] 06Labs, 10Tool-Labs, 10InternetArchiveBot: tools.iabot is overloading the grid by running too many workers in parallel - https://phabricator.wikimedia.org/T161951#3158193 (10Cyberpower678) I wasn't aware that big brother can work on jobs other than the web service. Thank you for pointing that out. Now that...
[18:01:31] 10Tool-Labs-tools-Article-request, 10ArticleFeedbackv5, 06Collaboration-Team-Triage, 10Notifications: Notification Extension for Public-sourced Article Request - https://phabricator.wikimedia.org/T162038#3158204 (10Matthewrbowker) >>! In T162038#3151692, @Niharika wrote: >>>! In T162038#3151487, @Matthewrb...
[18:42:18] 06Labs, 10Continuous-Integration-Config, 10MediaWiki-extensions-Scribunto, 10Wikidata: For contintcloud either add RAM or Swap to the instances - https://phabricator.wikimedia.org/T162166#3154301 (10chasemp) Using swap seems like a non-starter unless we are totally devoid of RAM capacity in which case we h...
[19:59:56] 06Labs, 10Continuous-Integration-Config, 10MediaWiki-extensions-Scribunto, 10Wikidata: For contintcloud either add RAM or Swap to the instances - https://phabricator.wikimedia.org/T162166#3158650 (10hashar) >>! In T162166#3158359, @chasemp wrote: > Using swap seems like a non-starter unless we are totally...
[20:26:38] Anybody with a little bit more php experience than me? (my experience is close to zero) Does $result = mysql_unbuffered_query($query, $userdb2); lock the table for reading?
[20:27:21] More detailed: does it lock the table until all records are fetched?
[20:28:20] Wurgl: Unless its changed in PHP it should iirc
[20:29:11] the query is "SELECT * FROM dewiki_pd" and returns about 600k records.
[20:29:13] 06Labs, 10Continuous-Integration-Config, 10MediaWiki-extensions-Scribunto, 10Wikidata: For contintcloud either add RAM or Swap to the instances - https://phabricator.wikimedia.org/T162166#3154301 (10Platonides) Given that the problem seems to be just the algorithm memory check rather than actually needing...
[20:29:22] processing takes about 50–60 minutes
[20:29:54] And during that time it seems that this table cannot be accessed
[20:30:10] I don't like that :-(
[20:30:37] It seems to behave like an exclusive lock
[20:31:19] But I do not read anything about locks here: http://docs.php.net/manual/da/function.mysql-unbuffered-query.php
[20:32:38] Wurgl: dewiki is a pretty big DB it could just be wmf servers trying to prevent the query from overloading the server, perhaps, but I can't really say for sure.
[20:32:55] No, private database
[20:33:04] Wurgl: oh, in that case i am not sure
[20:33:35] s51412__data <-- thats the name of the database
[20:33:38] try something more specific if possible?
[20:35:03] Well … there is a tool for searching persons in de.wikipedia.org : https://tools.wmflabs.org/persondata/
[20:35:13] Wurgl: What storage engine?
[20:35:26] there can be some difference between innodb and myisam
[20:35:52] 06Labs, 10Labs-Infrastructure: bootstrap_vz: Move firstboot.sh out of the base image? - https://phabricator.wikimedia.org/T161327#3158818 (10Andrew) a:05Andrew>03None
[20:36:00] Reedy: ah you're around when you get a moment mind doing me a favor its related to puppet
[20:36:49] And accessing its webpage, even the url i just copied, takes usually 0.5 seconds. But when a certain job runs, which is every morning at 5:15 GMT the response time is giant. Yesterday I had 53 minutes, today 67 minutes
[20:37:21] Wurgl: it probably caches it and keeps it cached for x amount of time
[20:37:25] No idea what storage engine. I did not set up the database
[20:37:35] Wurgl: is it on labs or tools?
[20:37:39] the db that is
[20:38:52] tools-db
[20:39:05] $db = mysql_connect('tools-db', $ts_mycnf['user'], $ts_mycnf['password'], TRUE) or die("database connection failed: " . mysql_error());
[20:39:25] tools i think its mango or mysql i cant quite recall
[20:41:06] what does such a job do?
[20:42:25] 06Labs, 10Continuous-Integration-Config, 10MediaWiki-extensions-Scribunto, 10Wikidata: For contintcloud either add RAM or Swap to the instances - https://phabricator.wikimedia.org/T162166#3158827 (10hashar) Oops I forgot about overcommit_memory which I mentioned on the parent task. From T125050#3153574 :...
[20:42:53] Read in one connection all records (a record is something like a person, i.e. its wiki-page)
[20:43:48] for each record read with a second connection all categories that page has (this is done with the wikipedia-db)
[20:44:17] and that slows the other query?
[20:44:19] and in a third db-connection fill a different table (or delete records from there)
[20:44:28] No transactions!
[20:44:42] No lock statements, just select, insert, delete
[20:45:28] what engine is used by that table?
[20:45:35] the one in s51412__data
[20:45:50] show create database `s51412__data`;
[20:45:55] | s51412__data | CREATE DATABASE `s51412__data` /*!40100 DEFAULT CHARACTER SET latin1 */ |
[20:46:04] the table, not the database
[20:46:40] Table: ) ENGINE=MyISAM AUTO_INCREMENT=9853636 DEFAULT CHARSET=latin1 |
[20:47:24] Yes, I know the table … had to type in the query (and as I remember a default engine can be part of the database definition)
[20:47:40] try changing the table to innodb
[20:48:49] MyISAM doesn't support concurrent writes
[20:49:01] and I think there is probably a lock between the two jobs
[20:49:02] "ALTER TABLE table_name ENGINE=InnoDB;" <-- thats all?
[20:49:18] https://dev.mysql.com/doc/refman/5.7/en/converting-tables-to-innodb.html <-- when reading this it seems
[20:49:24] I doubt it can be done, but… try it
[20:50:15] ok, it is the right statement then :P
[20:50:55] Its funny, at least one table is innodb …
[20:51:58] Seems to work.
tried it with a small table with 5300 rows
[21:14:57] 10Tool-Labs-tools-Xtools, 03Community-Tech-Sprint: Add a server-side caching service for the new XTools - https://phabricator.wikimedia.org/T161057#3158872 (10kaldari) Just need to fix description of cacheGet().
[21:15:27] Platonides: I like that: "Stage: 1 of 2 'copy to tmp table' 160% of stage done" <-- 160 percent?
[21:24:35] Wurgl: hi
[21:25:18] So tomorrow morning I will see … thanks
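The engine check and conversion tried above can be sketched as SQL run through the mysql client. The database, table, and host names are the ones from the channel; note that `ALTER TABLE ... ENGINE=InnoDB` rewrites the entire table, so on a table approaching ten million AUTO_INCREMENT values it will take a while and should be run when the nightly job is idle.

```shell
# Show the storage engine of the table, then convert it to InnoDB.
# MyISAM takes table-level locks, so a long-running SELECT (buffered or
# not) blocks concurrent writers and vice versa; InnoDB locks per row.
mysql -h tools-db s51412__data -e "
    SHOW TABLE STATUS LIKE 'dewiki_pd';   -- Engine column: MyISAM or InnoDB
    ALTER TABLE dewiki_pd ENGINE=InnoDB;  -- full table rewrite
"
```

This matches the statement Platonides confirmed from the MySQL 5.7 conversion guide linked above; the `SHOW TABLE STATUS` step is just a convenient way to verify the engine before and after.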