[05:11:25] TimStarling: a couple of RFC meetings ago you wondered aloud if we should send pingbacks from new installs with information about the environment. I implemented that in https://gerrit.wikimedia.org/r/#/c/296699/ .
[05:12:09] interesting
[08:38:50] https://bugzilla.mozilla.org/show_bug.cgi?id=1154339 see, tabs are evil.
[08:38:50] Title: 1154339 – JS microbenchmark with JQuery proxy is much slower when inner callback function uses tabs instead of spaces for indentation (at bugzilla.mozilla.org)
[08:38:55] I'm just going to leave this here...
[10:01:49] Reedy: that's amazing
[10:47:29] Scary and amazing?
[10:59:51] ori: I wonder if Chrom(e|ium) is affected too.. Or just FF
[19:36:09] for the pingback, instead of sending $wgServer, I'd like to send a unique, stable identifier for the wiki
[19:36:32] I can generate a $wgPingbackId via the installer much like $wgSecretKey is generated
[19:36:49] but that won't work for updates
[19:39:11] what about a hash of the host's mac address? Can php get that reliably?
[19:39:25] although I guess you could obviously have many wikis per host
[19:39:41] and/or many hosts per wiki
[19:39:42] (dumb idea)
[19:39:46] generate random string, drop it into a file, read back
[19:40:29] if we generate a random identifier, it should probably go in the database rather than litter the filesystem
[19:40:29] is that for T56426?
[19:40:29] T56426: Create Ping server extension for MediaWiki - https://phabricator.wikimedia.org/T56426
[19:40:57] I thought of deriving it from e.g. a SHA1 hash of $wgServer + `select user_registration from user where user_id = (select min(user_id) from user);`
[19:41:29] it's for https://gerrit.wikimedia.org/r/#/c/296699/
[19:41:37] very old users don't have a registration date
[19:41:46] oh, good point
[19:42:26] awesome, thank you for working on that!
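For context, the "information about the environment" being pinged back might look something like the following. This is a hypothetical sketch in Python (the actual schema lives in the Gerrit change above; every field name and value here is illustrative, not taken from the patch):

```python
import json
import struct

# Hypothetical pingback payload. The real field names are defined in the
# Gerrit change linked above, so treat every key here as an assumption.
payload = {
    "mediawiki": "1.28.0-alpha",       # assumed: the installing MediaWiki version
    "php": "5.6.23",                   # assumed: the host's PHP version
    "database": "mysql",               # assumed: the configured DB backend
    "arch": struct.calcsize("P") * 8,  # 32- vs 64-bit, via pointer size
}

print(json.dumps(payload, sort_keys=True))
```

The `arch` field is the kind of thing the later "how many wikis run on 32-bit systems" question would be answered from.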
[19:42:45] I can generate a random ID and store it in the updatelog table, with ul_key = 'Pingback' and ul_value = random id
[19:43:07] that is abusing the updatelog table a little but seems marginally better than introducing a new table
[19:43:31] the hash of wgServer + DB name might work, unless wgServer is dynamic, but just storing a random key is way less trouble IMO
[19:43:32] do we have any tables that aren't abused? ;)
[19:44:31] well, the idea is to make the data safe to share widely. Faidon pointed out that attackers may be very interested to know that xyz.org is running version such-and-such of MediaWiki, etc.
[19:44:46] Special:Version
[19:45:12] but yeah putting it all in one pile will probably make it more discoverable
[19:46:18] ori: what's the use case for being able to identify unique wikis
[19:46:42] checking for upgrade progress?
[19:46:44] [12:44:31] well, the idea is to make the data safe to share widely. Faidon pointed out that attackers may be very interested to know that xyz.org is running version such-and-such of MediaWiki, etc. <-- wikiapiary already does this...and it's *not* opt-in
[19:46:46] right
[19:46:58] (re: bd808)
[19:47:57] suppose you want to know how many wikis run on 32-bit systems. you don't want to double-count wikis that have multiple entries in the logs
[19:48:56] well, it's opt-in in that your wiki could be behind an HTTP authentication scheme, or have a robots.txt that disallows crawling of Special:Version, etc.
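The scheme described above — generate a random ID once, park it in a key/value row, and read it back thereafter so the identifier survives upgrades and LocalSettings.php regeneration — can be sketched generically. This is a Python/SQLite model of the idea, not MediaWiki's actual code; the updatelog table and its ul_key/ul_value columns are only being imitated here:

```python
import secrets
import sqlite3

def get_pingback_id(conn):
    """Return a stable random wiki ID, creating and storing it on first call.

    Models the updatelog proposal from the discussion: a row with
    ul_key = 'Pingback' holds a random hex string. Later calls read the
    stored value back instead of generating a new one, so the ID is
    stable even if config files are regenerated.
    """
    conn.execute(
        "CREATE TABLE IF NOT EXISTS updatelog (ul_key TEXT PRIMARY KEY, ul_value TEXT)"
    )
    row = conn.execute(
        "SELECT ul_value FROM updatelog WHERE ul_key = 'Pingback'"
    ).fetchone()
    if row is not None:
        return row[0]
    new_id = secrets.token_hex(16)  # 32 hex chars of randomness
    conn.execute(
        "INSERT INTO updatelog (ul_key, ul_value) VALUES ('Pingback', ?)",
        (new_id,),
    )
    conn.commit()
    return new_id

conn = sqlite3.connect(":memory:")
first = get_pingback_id(conn)
second = get_pingback_id(conn)
print(first == second)  # the ID is stable across calls
```

Because the ID carries no host or URL information, double-counting in the logs can be avoided without making individual wikis identifiable the way $wgServer would.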
[19:49:12] and it would be good to check on things like the rate that security versions are adopted
[19:49:15] or be hosted on a private intranet (on a server that can dial out)
[19:49:20] but that's not what you really want to know though....you'd want to know how many hosting providers are giving their users 32-bit PHP
[19:49:32] (which wikiapiary kind of can determine for some webhosts)
[19:49:42] UUID stuffed in the db seems reasonable
[19:50:19] but yeah, I think it needs to be in the db because it's not unusual to do a re-generation of LocalSettings.php when going through the web updater
[19:50:45] related question: should we exclude development instances? and if so, how do we determine whether or not the wiki is a dev wiki? wgServer == 127.0.0.1? ( $wgShowExceptionDetails || $wgDevelopmentWarnings == true )?
[19:51:06] devs should know not to opt-in?
[19:51:19] just dump that information into the ping
[19:51:56] an install is an install
[19:52:30] the dataset is always going to have a lot of noise for something like this
[19:53:08] yeah, good suggestions
[19:55:25] IMO you ultimately want to know how many users are affected by core change X; the number of installs is less helpful
[19:55:53] so add something like log10(active users); that will filter out dev instances well enough
[20:13:37] * Krinkle likes the new Kibana, if it weren't for that hideous logo. Did they produce it by running the old logo through a broken VCR?
[20:15:27] did kibana3 still have the thatched hut logo? /me has already forgotten
[20:17:01] The elastic logo makes me think of pus and ooze -- https://www.elastic.co/assets/blt68e0d3e570096cfb/logo-elastic-1000x343.png
[20:27:01] bd808, reminds me of... World of Goo!
[20:27:26] That was a fun little game
[20:43:21] it looks like a soap bubble (the es logo)
[20:46:54] legoktm: is there a way to detect if there are other users needing T140074 in the wake of the GlobalRename problems? or would stewards have a way to spot that?
[20:46:55] T140074: Attach Biplab Anand's local accounts - https://phabricator.wikimedia.org/T140074
[20:49:12] bd808: can you think of a way to make kibana not require a login for requests originating from the cluster? it's easy to do with `RemoteIPHeader X-Real-IP` and `Require ip 10. <%= scope['::network::constants::all_networks'].join(' ') %>`, but the question is where to plug that, given that the kibana module only lets you choose from a set of prefabricated auth configs
[20:50:32] ori: I think we are going to open up port 9200 to tin and fluorine
[20:51:00] kibana4 doesn't allow arbitrary Elasticsearch queries
[20:51:52] right
[20:51:55] ori: do you have a use case that is actually for Kibana and not general Elasticsearch access?
[20:52:27] no; I should have asked about logstash
[20:52:43] this is for gwicke's deployment log monitoring python script
[20:52:57] yeah I talked to thcipriani about it today
[20:53:12] ah cool
[20:53:18] I think he's going to post a gerrit patch to open ferm up
[20:53:39] that's the plan
[20:54:02] the monitoring script no longer works since the kibana upgrade
[20:54:11] have to go directly to elasticsearch now
[20:56:18] nod
[20:56:34] https://gerrit.wikimedia.org/r/#/c/296699/ (pingback) updated, feedback welcome
[20:57:37] PHP Fatal Error: Call to private method AbuseFilter::getFilter() from context 'ContentTranslation\AbuseFilterCheck'
[20:57:54] This known? Mostly in wmf.10 from what I see, not new to wmf.11
[21:00:59] greg-g: Promoted https://wikitech.wikimedia.org/wiki/Caching_overview to be the entry page for "HTTP caching" in the "Wikimedia infrastructure" navigation category. (see new sidebar). Previously that link pointed to the Category page.
[21:01:12] ostriches: we backported a patch this morning that was supposed to fix that error
[21:01:27] a feck the backport has a problem too?
[21:01:46] maybe there is a related abusefilter patch that didn't get in?
[21:01:51] backported to.... wmf.10?
[21:01:57] yeah in swat
[21:02:11] cherry-pick
[21:02:24] ostriches: https://gerrit.wikimedia.org/r/#/c/299707/
[21:02:47] T139657
[21:02:47] T139657: Fatal error: Invalid static property access: AbuseFilter::filters in AbuseFilterCheck.php on line 121 - https://phabricator.wikimedia.org/T139657
[21:03:12] ostriches: this needs a cherry-pick too -- https://gerrit.wikimedia.org/r/#/c/299268/
[21:03:14] Krinkle: wow, that page was made 2 days after I started, pretty sure
[21:03:27] Hmmm
[21:03:48] bd808: Lemme see if I can narrow it
[21:05:55] bd808: AbuseFilter::getFilter() is also private...?
[21:06:13] "ostriches: this needs a cherry-pick too -- https://gerrit.wikimedia.org/r/#/c/299268/"
[21:06:49] Doing
[21:57:13] bd808: Backported and deployed.
[21:57:25] awesome
[21:58:03] Errors goin' away
[23:04:31] gwicke: TimStarling: James_F: https://www.mediawiki.org/wiki/Node.js_debugging#Chrome_DevTools
[23:04:48] https://i.imgur.com/HshfDOx.png
[23:05:07] Now without any additional tools (as long as you have a recent Chrome and Node v6.3+; ran on my localhost for now instead of Vagrant)
[23:05:15] But pointed Parsoid to my vagrant wiki
[23:06:22] Neat.
[23:06:49] Noticed that the instructions for Parsoid configuration are outdated. It seems localsettings.js doesn't work by default (you need to create a config.yaml first for service-runner). Should probably be updated in the readme etc.
[23:07:33] It was also very difficult to figure out what the Parsoid URL is for getting plain HTML. All documentation either refers to _rt or RESTBase, not the "/:domain/v3/page/html/:title" route. Eventually found it in IRC logs.
[23:38:01] Who wants to delete code?
https://gerrit.wikimedia.org/r/#/c/299798/ :)
[23:44:47] ostriches: {{done}}
[23:47:26] Thx
[23:47:46] On a completely different note....I was starting a fresh vagrant, and mwv-apt isn't happy :\
[23:47:51] deb [trusted=yes] http://mwv-apt.wmflabs.org/repo <%= scope['::lsbdistcodename'] %>-mwv-apt main
[23:48:11] Seems to evaluate to http://mwv-apt.wmflabs.org/repo/dists/trusty-mwv-apt/main/binary-i386/Packages
[23:48:13] Which is 404
[23:48:21] hmmm...
[23:48:22] (binary-all or binary-amd64 as options)
[23:49:12] should be http://mwv-apt.wmflabs.org/repo/dists/trusty-mwv-apt/main/binary-amd64/
[23:49:33] $::lsbdistcodename says binary-i386?
[23:49:36] Weird....
[23:50:01] my vm just has "deb [trusted=yes] http://mwv-apt.wmflabs.org/repo trusty-mwv-apt main"
[23:50:42] Yeah
[23:50:44] deb [trusted=yes] http://mwv-apt.wmflabs.org/repo trusty-mwv-apt main
[23:51:13] But when I ran apt-get update
[23:51:17] ::lsbdistcodename == trusty
[23:51:24] W: Failed to fetch http://mwv-apt.wmflabs.org/repo/dists/trusty-mwv-apt/main/binary-i386/Packages 404 Not Found
[23:51:53] oh... are you using LXC?
[23:52:43] parallels
[23:53:12] and an i386 image I take it
[23:54:14] I don't remember who did the parallels testing. Brion or Tyler
[23:55:52] I did some awhile back
[23:56:04] I just spun up a new vm today tho
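To make the 404 above concrete: apt builds the Packages index path from the distro codename and the guest's machine architecture, and per the log the mwv-apt repo only publishes binary-amd64 (plus binary-all), so an i386 guest asks for a path that does not exist. A small Python sketch of that URL construction (the URLs match those in the log; the set of published architectures is taken from the "(binary-all or binary-amd64 as options)" remark):

```python
def packages_url(codename, arch, repo="http://mwv-apt.wmflabs.org/repo"):
    """Build the Packages index URL apt fetches for a given guest architecture."""
    return f"{repo}/dists/{codename}-mwv-apt/main/binary-{arch}/Packages"

# Architectures the repo actually publishes, per the discussion above.
published = {"amd64", "all"}

for arch in ("amd64", "i386"):
    url = packages_url("trusty", arch)
    status = "ok" if arch in published else "404"
    print(f"{status}: {url}")
```

So the fix is on the guest side: an amd64 VM image (rather than the i386 Parallels image apparently in use here) requests binary-amd64 and avoids the 404.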