[00:02:42] Hmmmmm
[00:03:50] bd808: I'll destroy the vm and rm the box.
[00:03:57] See if we can get a better image for parallels
[00:05:51] I think you'll have other sadness with i386. I doubt much in the main wmf apt repo is built for it either
[00:06:56] I'm gonna try and get a non-i386 image.
[00:06:58] Lookin'
[00:18:03] Updating my vagrant and vagrant-parallels plugin while I'm at it, both kinda outdated.
[00:21:00] https://atlas.hashicorp.com/bento/boxes/ubuntu-14.04 seems to be widely used. And updated.
[00:21:44] It also supports vmware & virtualbox, which is kinda nice
[00:21:50] Although we should use jessie :)
[00:21:59] Maybe I'll guinea pig. I'm bored tonight
[00:22:25] bento's boxes seem good
[00:39:08] Heh, I think I have to jump to puppet 4
[00:40:16] yuck. I don't think our puppet code works with puppet4
[00:40:49] not tested, but I know I saw a tracking bug for ops/puppet about things that would have to change there
[00:46:21] I'm seeing if I can outsmart the plugin.
[00:46:25] We use 3.7.2 in prod.
[01:36:43] More AbuseFilter fun in prod.
[01:36:56] Stack trace resulting in User::loadFromSession called before the end of Setup.php
[01:37:28] parsing messages loads user prefs for parsing.
[02:57:14] a.nomie has a patch for that - https://gerrit.wikimedia.org/r/#/c/297903/
[15:30:08] ostriches: you should cherry-pick https://gerrit.wikimedia.org/r/#/c/297903/ to .11 (and maybe .10?) to fix that AbuseFilter message log spam. I just merged it into master.
[19:35:38] bd808: Do you happen to know what "git clone … returned 128 instead of one of [0]" means from vagrant? I've had it for months now with EventLogging, but hacking around it by disabling the roles it happens in no longer works, since with the Parsoid switch to service-runner that role is exhibiting it too. Totally fresh installs of vagrant don't fix it. :-(
[19:36:10] first result on SO
[19:36:11] "It's probably because your SSH key has been compromised. Make a new one and add it to your GitHub account."
[19:36:23] http://stackoverflow.com/a/33827734 ?
[19:36:32] it means some unspecified thing broke during the clone. any git failure exits with that same status code
[19:36:39] Helpful.
[19:37:01] the way to find out is to ssh into the instance and run 'git pull' for the repo
[19:37:14] that will usually tell you what's busted
[19:37:29] -vvvv
[19:37:41] often it is a corrupted pack file
[19:38:00] or a burp from gerrit that interrupted the fetch
[19:38:13] Thanks, will try that.
[19:39:05] I would *love* to find a way to make the git clones done by mw-vagrant more robust and give better error messages
[19:40:35] Reedy: `git pull -vvvv` returned 0 (and a bunch of verbosity).
[19:40:50] RESOLVED WORKSFORME
[19:41:33] probably a transient gerrit origin issue then :/
[19:41:46] But it's consistent on `vagrant provision`.
[19:41:58] hmmm
[19:42:14] and the pull that worked was done from inside the vm?
[19:42:18] Yes.
[19:43:06] * James_F `rm -rf`s the entire vagrant directory including the .git tree and starts again, just in case.
[19:43:14] interesting. You can try `PUPPET_DEBUG=1 vagrant provision` which will log a ton of info. That may help narrow things down
[19:43:59] most usefully, PUPPET_DEBUG will log the exact git command that is failing, which you can then try to run manually
[19:45:29] Gosh, so much logging.
[19:46:39] Aha!
[19:47:09] ==> default: Debug: Executing '/usr/bin/git clone --recurse-submodules ssh://USER@gerrit.wikimedia.org:29418/eventlogging.git /vagrant/srv/eventlogging'
[19:47:09] […]
[19:47:09] ==> default: Notice: /Stage[main]/Eventlogging/Git::Clone[eventlogging]/Exec[git_clone_eventlogging]/returns: Permission denied (publickey).
[19:47:17] That's… interesting.
[19:47:49] so it's got a bogus URI
[19:48:10] Yeah.
[19:48:20] let me check the puppet code for that one
[19:49:16] it's using a normal "git::clone" define
[19:50:03] origin ssh://jforrester@gerrit.wikimedia.org:29418/mediawiki/extensions/EventLogging.git (fetch)
[19:50:18] (from git remote -v)
[19:51:07] Same in the host and inside Vagrant.
[19:51:31] do you have things set up so that all your clones are ssh?
[19:51:41] Yes.
[19:51:49] Is that bad?
[19:52:15] not necessarily. Just not how I do things
[19:52:34] It's so I can start hacking on whatever the project is.
[19:52:41] it's more fiddly as you have to have ssh agent forwarding working into the vm
[19:52:49] * James_F nods.
[19:53:03] yeah. I do that with an insteadOf git config
[19:53:29] so outside my vm "https://gerrit.wikimedia.org/r/p/" turns into "ssh://gerrit/"
[19:53:33] * James_F nods.
[19:53:39] fancypants
[19:53:43] and .ssh/config maps the host and user bits
[19:54:11] I think I learned that one from o.ri at some point
[19:54:43] Very fancy.
[19:54:57] http://www.gossamer-threads.com/lists/wiki/wikitech/375350
[19:56:34] so James_F I'm still not sure why that one clone is blowing up for you
[19:56:45] but maybe you can figure it out now?
[19:56:57] no reviewers for https://gerrit.wikimedia.org/r/#/c/296699/ ? Come on, Make Ori a Programmer Again™
[19:57:33] ori: :-D
[19:57:36] ooh
[19:58:35] https://www.mediawiki.org/beacon 404s :P
[19:58:51] Is it caught by the proxies?
[19:59:04] If so, we should probably have some vague info page there
[19:59:05] it's https://www.mediawiki.org/beacon/event
[19:59:39] commit summary is wrong then?
[19:59:54] i prefer "abridged" ;)
[20:00:00] it actually works, as in if you run the patch you'll end up with a row in the relevant EL table
[20:00:20] beacon/event being a blank page doesn't feel nice either
[20:00:27] I presume there's a TODO somewhere about it?
[20:00:44] that's how EL works normally right?
[20:00:44] it's a beacon endpoint, not sure I see the point in having a page there
[20:00:46] yeah
[20:00:50] well..
[20:00:55] varnish 204s all /beacon/* URLs
[20:01:08] /beacon/event?... is claimed by eventlogging, but there are a few others
[20:01:13] "Why is my MW making requests to this page? I'll visit it in my browser. It's a blank page. Panic"
[20:01:32] "Post to IRC/phab/wikitech-l/mediawiki-l"
[20:01:55] well, consider that a large number of extensions already make requests to that page
[20:02:07] * bd808 looks for foods
[20:02:13] in a way that is far easier to observe (client-side code)
[20:02:29] and we don't get panicked e-mails
[20:04:13] Can I reserve the right to say "I told you so"?
[20:04:15] ;D
[20:04:30] sure
[20:04:38] Ok, we'll move on for now
[20:04:42] I'll try testing it in a bit
[20:04:44] I need fooood
[20:05:49] man cannot live by beacons alone
[20:06:00] unless the beacons are made out of bread
[20:47:09] Trivial. https://gerrit.wikimedia.org/r/#/c/300135/
[20:59:49] ori: Why only queue a ping if there's a post?
[21:00:14] because there's a database update involved
[21:00:32] that very high usage updatelog table? :P
[21:01:31] it's a matter of hygiene, not volume.
the plan is to have edge caches route requests to the primary or secondary dc depending on whether or not they need the master database
[21:02:24] Aaron has been chasing down synchronous (non-deferred) database writes that occur on GETs and fixing them; we're getting close to 0
[21:03:26] are we enabling this on WMF wikis? :P
[21:05:31] well, no, but it makes sense to adhere to these conventions whenever possible
[21:06:10] why not only do it on POST?
[21:07:21] the only reason I can think of is to ensure that wikis on which no page is ever edited and no user ever logs in are counted
[21:08:10] I wonder how many "readonly" wikis there are
[21:08:15] And whether they're upgraded
[21:08:15] maybe that's a valid reason
[21:08:28] Per chad... update.php should probably schedule a ping
[21:08:33] or, attempt to etc
[21:08:47] well, version upgrades schedule a ping
[21:08:57] the key we check for in the updatelog interpolates $wgVersion
[21:09:09] so whenever $wgVersion changes, a ping will be sent
[21:09:32] only if the wiki has some post type action
[21:10:11] I honestly don't know what the right answer is
[21:10:12] that may end up excluding wikis that we want to hear from
[21:10:16] I can sort of see it both ways
[21:10:19] you might be right
[21:10:58] IIRC master database connections are permitted in deferred postsend callbacks
[21:11:05] (AaronSchulz ^ right?)
[21:11:09] having update.php schedule a pingback seems to be a reasonable middle ground
[21:11:25] as a "just in case extra thing"
[21:11:30] middle ground between which two alternatives?
[21:12:11] I'm not meaning it as the only thing
[21:12:25] I'm just meaning vs these "readonly" wikis etc
[21:12:37] if it's OK to use the master DB on a postsend handler in a GET request, then I should just drop the $wgRequest->wasPosted() check
[21:13:14] I don't think it should be done on requests tbh.
[21:13:25] I like the idea of only collecting on update/install.
[21:13:45] I was going to suggest that if we are doing it on request, we should probably cache that one has been done in memc/similar rather than having to ask the DB
[21:13:54] it's tricky to do on install, because the wiki is not fully configured
[21:14:18] Special:PingBack
[21:14:38] could do it in a job, but then you have to keep track of whether or not a job has been enqueued, or have the job reschedule itself
[21:14:56] I don't think so.
[21:14:56] I don't see the reason not to do it on request. Again, remember this is in a post-send handler, so the user is not waiting on this
[21:15:05] it runs after the response has been flushed
[21:15:10] I think you can do it.
[21:15:31] what is the issue with the current approach?
[21:15:32] What needs to be set up that isn't done? If you're only doing it on install/update, you don't need the updatelog table anymore, or cache.
[21:15:57] for one, the SAPI is CLI (for the update.php case and for noninteractive installs), so there are different ini files loaded in most cases
[21:16:10] it's common for memory_limit to be different
[21:16:20] sometimes extensions, too
[21:17:02] This is true, ok. I think we can still defer it a little further w/o needing a db hit.
[21:17:12] Heck, we could write a file to cache/*
[21:17:19] and you don't get a useful $_SERVER['SERVER_SOFTWARE']
[21:17:22] Then it's just a quick file_exists()
[21:17:27] I don't see the issue with the current approach
[21:17:36] it's much less risky than attempting to rely on the disk
[21:17:43] It feels like we're hitting the db for nothing.
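A minimal sketch of the pattern ori is defending, assuming MediaWiki's DeferredUpdates and the per-version updatelog key mentioned in the discussion; the callback body is illustrative, not the actual change under review:

    // Sketch only: a post-send deferred update guarded by one updatelog
    // row per $wgVersion, so the master write happens at most once per
    // version and never while the user is waiting on the response.
    DeferredUpdates::addCallableUpdate( function () {
        global $wgVersion;
        $key = "Pingback-{$wgVersion}";
        $dbw = wfGetDB( DB_MASTER );
        // Guard row present: this version already pinged back.
        if ( $dbw->selectField( 'updatelog', 'ul_key', [ 'ul_key' => $key ] ) ) {
            return;
        }
        // ... gather system data and POST it to the beacon endpoint here ...
        $dbw->insert( 'updatelog', [ 'ul_key' => $key ], __METHOD__, [ 'IGNORE' ] );
    } );

Running it post-send keeps the write out of user-visible request time, which is the point being made above about the response already having been flushed.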
[21:18:05] I think you're overestimating how much work is being done
[21:18:35] we write to the master database exactly once per version
[21:19:44] Errr, how about we just raise the cache expiry to something really high? :)
[21:19:53] the data store of record for the wiki is the database, not the filesystem. I don't think it's a good idea to stash little bits of state on the disk
[21:20:00] (Like, a month, seriously)
[21:20:11] And then update.php can purge the cache entry explicitly, so we don't wait a month
[21:20:29] I really don't see the point
[21:21:23] Fair 'nuff.
[21:22:56] seriously, not trying to be difficult, but I think the little bit of care I exercised in avoiding needless database activity may be sending the wrong signal (that this is a potentially expensive database operation that needs to be handled delicately)
[21:22:56] Does EventLogging have protections against bogus data submissions?
[21:22:59] Since it's anon :)
[21:23:17] ori: It's not expensive, I just like avoiding things.
[21:23:18] data has to conform to the schema
[21:23:20] DB hits.
[21:23:27] Responsibility :P
[21:23:46] but there is no provision to prevent you from sending valid fake data
[21:24:04] we used to have one (we hashed the IP with a rotating salt, so we could associate all events originating from a particular IP)
[21:24:21] but we never had the occasion to use it in 3-4 years of EventLogging usage and it was a privacy/security risk
[21:24:24] I guess it's on us (as consumers) to massage the data and drop outliers/nonconformists?
[21:25:08] it's actually pretty annoying and painstaking to construct and send an event that is perfectly valid but bogus
[21:25:19] and there is really no gain to be had
[21:25:37] Truth
[21:26:38] "YOU MUST KEEP PHP 5.3 SUPPORT LOOK AT ALL THESE WIKIS USING IT"
[21:26:44] like, you could imagine some scenario where someone really passionate about keeping support for 32-bit arch decides to game the system by submitting fake pingbacks
[21:26:52] snap
[21:26:59] it's a lot of work to do it in a way that isn't trivial to recognize
[21:27:16] you'd need Dinesh and Gilfoyle's clickfarm randomizer thing
[21:27:19] lol
[21:27:23] Though, if you keep a date of when they came in
[21:27:29] we do
[21:27:32] they are timestamped
[21:27:34] After you announce we want to do X
[21:27:45] I think the description for usesGit is a little off, and doesn't tell us quite what we want.
[21:27:46] holy shit, there's a lot of data supporting keeping this
[21:27:49] it looks odd :P
[21:27:59] All we know is they've got git on that directory, not that they installed from our repo
[21:28:08] (ie: you may use git to deploy software at your work)
[21:28:18] good point
[21:28:28] is there a better way to tell tarball installs apart from git clones?
[21:28:50] Hmmmmm
[21:28:54] we could set a custom $wgSomething before tarballing them :D
[21:29:01] mediawiki/vendor vs composer install is a decent way
[21:29:11] Point ^
[21:29:29] beyond that, you're looking at remotes in git config files...
[21:29:38] My guess would be checking git remotes, but that's either a shell out to git or some custom reading of gitconfig
[21:30:09] legoktm: Lack of .git in vendor, you mean?
[21:30:13] (as how to tell?)
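A sketch of the remote-checking heuristic being floated here; the install path, variable names, and exact host list are assumptions, not the eventual patch:

    // Hypothetical usesGit heuristic: require both a .git directory and
    // an origin remote pointing at a repo we control.
    $installPath = dirname( __DIR__ ); // assumed MediaWiki install path
    $usesGit = false;
    if ( is_dir( "$installPath/.git" ) ) {
        // Shell out to git rather than parsing .git/config by hand.
        $origin = trim( (string)shell_exec(
            'git -C ' . escapeshellarg( $installPath ) . ' config --get remote.origin.url'
        ) );
        // gerrit/github/phab, per the "future-proof it from day 1" list below.
        $usesGit = (bool)preg_match(
            '!(gerrit\.wikimedia\.org|github\.com/wikimedia/|phabricator\.wikimedia\.org)!',
            $origin
        );
    }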
[21:31:00] it's very hard to do correctly without shelling out to git, because there are all sorts of edge cases to account for
[21:31:11] yeah, it's sucky
[21:31:31] ostriches: and that mediawiki/vendor has a bunch more dependencies than composer install will bring in
[21:31:49] We've got the gitinfo classes that Special:Version uses
[21:31:58] I like MatmaRex's suggestion but it's another step in the tarball process so ostriches et al. would have to be cool with that
[21:32:09] Script it? :P
[21:32:24] Oh, adding that to make-release would be trivial
[21:32:45] replace a string in DefaultSettings or similar?
[21:33:22] At the same time, shelling out is nbd.
[21:33:38] If they've got git, they've set $wgGitBin or whatever it's called.
[21:33:44] We use it for GitInfo, like Reedy said
[21:33:51] Would be trivial to check remotes there.
[21:34:21] And really, we *only* want to know if it's from a repo we control too.
[21:34:29] yeah
[21:34:35] (installing from your 3 year old forked git repo doesn't help us much)
[21:34:35] so is it github or gerrit
[21:34:42] github/gerrit/phab
[21:34:46] oh yeah
[21:34:47] :D
[21:34:47] Future-proof it from day 1.
[21:34:49] maybe I should just take usesGit out of this patch and do it in a follow-up?
[21:35:05] could do
[21:35:19] it's not like REL1_28 is going out very soon and we're desperate to get this in :)
[21:35:40] Although we have the version already too
[21:35:44] Actually, that is enough.
[21:36:01] I dunno
[21:36:04] I should take a break
[21:36:08] E_TOOMANYTABS
[21:36:35] ori: How do we view the stuff pinged to EL?
[21:37:41] it's on dbstore1002.eqiad.wmnet
[21:37:52] i think the credentials for the research account are on officewiki
[21:37:54] trying to find the right page
[21:38:32] https://office.wikimedia.org/wiki/Research_FAQ#How_do_I_access_instrumentation_data_.28EventLogging.29.3F says "get in touch with Analytics Engineering"
[21:38:40] it's on wikitech somewhere
[21:39:02] the goal would be to automate exporting a dump
[21:39:15] and making that public
[21:39:44] or as public as possible
[21:39:59] Would writing something to go on MW suck?
[21:40:07] Or would it be better having a bot to write a page on MW?
[21:40:17] I think we could easily expose the data on Labs/Tool Labs
[21:40:21] legoktm@stat1002:~$ mysql --defaults-file=/etc/mysql/conf.d/statistics-private-client.cnf -h analytics-store.eqiad.wmnet -A log
[21:40:23] ie crunch the numbers, do some maths, write a wiki page
[21:40:35] from there bots can do things and a nie interactive ui could be built
[21:40:38] legoktm beat me to it, https://wikitech.wikimedia.org/wiki/Analytics/EventLogging#Access_data_in_MySQL
[21:40:39] *nice
[21:41:29] the data pushed into labs could be aggregate tables based on interesting dimensions
[21:41:40] so less privacy concern (none if done right)
[21:45:13] nod
[21:49:01] how do I know if it worked then? :P
[21:50:07] replace $wgVersion with something filthy and challenge me to repeat it
[21:51:21] ( ! ) Warning: Cannot modify header information - headers already sent by (output started at /var/www/wiki/mediawiki/core/includes/specials/SpecialVersion.php:312) in /var/www/wiki/mediawiki/core/includes/WebResponse.php on line 42
[21:51:42] doesn't like the version I chose
[21:52:28] ori: That should've done it
[21:52:30] it's actually pretty annoying and painstaking to construct and send an event that is perfectly valid but bogus
[21:52:33] see what I mean?
:)
[21:52:58] Well, more the characters I put in my LocalSettings when I changed wgVersion
[21:53:21] did it get sent?
[21:53:30] it should've
[21:53:32] you probably got throttled or the row already exists in updatelog
[21:53:35] I edited a page
[21:53:44] and the version number I changed it to... wasn't a number
[21:55:09] I just sent one and it showed up
[21:55:15] {"event": {"MediaWiki": "1.28.0-alpha", "OS": "Darwin 15.6.0", "PHP": "5.6.23", "arch": 64, "database": "mysql", "machine": "x86_64"}, "recvFrom": "cp2004.codfw.wmnet", "revision": 15781718, "schema": "MediaWikiPingback", "seqId": 1423715, "timestamp": 1469051688, "userAgent": "\"MediaWiki/1.28.0-alpha\"", "uuid": "8c726d5001285de5b7520f4e8d03caa8", "wiki": "246252809670281e71397be191a350cb"}
[21:55:17] I don't see yours
[21:55:35] when do the deferred updates run?
[21:55:42] later
[21:55:58] post-send
[21:58:49] ori: you should have loads of pingbacks now
[21:59:27] 0.512976969274
[21:59:40] {"event": {"MediaWiki": "0.512976969274", "OS": "Linux 4.4.0-31-generic", "PHP": "7.0.4-7ubuntu2.1", "arch": 64, "database": "mysql", "machine": "x86_64", "usesGit": true}, "recvFrom": "cp3041.esams.wmnet", "revision": 15777149, "schema": "MediaWikiPingback", "seqId": 17705308, "timestamp": 1469051952, "userAgent": "\"MediaWiki/0.512976969274\"", "uuid": "f0584b65f3da51c7bc9643b758ce9ef3", "wiki": "4e69cb15e99d32acdb736473bd86776b"} , etc
[21:59:56] 894
[22:00:16] [eventlog1001:/srv/log/eventlogging] $ grep Pingback all-events.log | wc -l
[22:00:16] 896
[22:01:28] ('grep -c' was the right thing to do there)
[22:02:13] lol
[22:02:14] reedy@ubuntu64-web-esxi:/var/www/wiki/mediawiki/core$ grep -c "pingback sent OK" /tmp/log.txt
[22:02:14] 894
[22:02:34] 2 are from me
[22:04:06] lol
[22:04:30] I've no idea if my original pings got sent
[22:05:03] Certainly, the rest of the code seems to work fine :P
[22:05:25] get the latest patchset, and run wfGetDB( DB_MASTER )->delete( 'updatelog', [ 'ul_key' => "Pingback-{$wgVersion}" ] ); ObjectCache::getLocalClusterInstance()->delete( "Pingback-{$wgVersion}" );
[22:05:30] and visit a page
[22:10:04] git seems to be going slow
[22:10:29] demand a refund
[22:11:44] http://stackstatus.net/post/147710624694/outage-postmortem-july-20-2016
[22:12:12] 10 mins to deploy?
[22:12:15] even we're not that bad
[22:12:38] "The regular expression was: ^[\s\u200c]+|[\s\u200c]+$ Which is intended to trim unicode space from start and end of a line. A simplified version of the Regex that exposes the same issue would be \s+$ which to a human looks easy (“all the spaces at the end of the string”), but which means quite some work for a simple backtracking Regex engine.
[22:12:43] The malformed post contained roughly 20,000 consecutive characters of whitespace on a comment line that started with -- play happy sound for player to enjoy. For us, the sound was not happy."
[22:14:36] PyBal's health check uses Main_Page too
[22:17:58] that took a lot longer than it should've
[22:20:28] doesn't look to be logging anything
[22:21:38] ori: Any more pings?
[22:21:44] is $wgPingback true?
[22:21:58] no, still 896
[22:23:12] oh
[22:23:16] I thought I added it
[22:23:18] where'd it go
[22:23:47] now?
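The \s+$ failure mode quoted from the postmortem above is easy to reproduce in PHP; a sketch, with the exact behavior depending on your pcre.backtrack_limit setting:

    // Miniature of the outage quoted above: ~20k whitespace chars that do
    // NOT end the string. A backtracking engine retries \s+$ from every
    // offset, O(n^2) steps overall, so this either crawls or aborts once
    // the work exceeds pcre.backtrack_limit.
    $line = str_repeat( ' ', 20000 ) . 'x';
    $t = microtime( true );
    $result = preg_match( '/\s+$/', $line );
    printf( "result=%s error=%d elapsed=%.2fs\n",
        var_export( $result, true ), preg_last_error(), microtime( true ) - $t );
    // A linear fix is to not regex-trim at all:
    $trimmed = rtrim( $line ); // O(n), no backtracking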
[22:24:19] yep
[22:24:30] {"event": {"MediaWiki": "1.28.0-alpha", "OS": "Linux 4.4.0-31-generic", "PHP": "7.0.4-7ubuntu2.1", "arch": 64, "database": "mysql", "machine": "x86_64", "memoryLimit": "128M", "serverSoftware": "Apache/2.4.18 (Ubuntu)"}, "recvFrom": "cp3041.esams.wmnet", "revision": 15781718, "schema": "MediaWikiPingback", "seqId": 17711401, "timestamp": 1469053414, "userAgent": "\"MediaWiki/1.28.0-alpha\"", "uuid": "a5ff81776ec65dea8df7b96e07856d04", "wiki": "4e69cb15e99d32acdb736473bd86776b"}
[22:25:05] hahaha, IT'S ALREADY USEFUL
[22:25:10] Who here has been running MW on PHP 7?
[22:25:14] * ori points at Reedy
[22:25:40] Gotta find something to do to break the boredom
[22:26:03] add php7 compat to scribunto
[22:26:12] err, luasandbox
[22:26:28] there's a patch for it I think?
[22:26:34] oh cool
[22:26:39] heh
[22:26:58] Brad was working on it because legoktm is getting everything in php7 shape for debian
[22:27:07] I did one for wikidiff2
[22:27:16] nice!
[22:27:19] I think it is merged?
[22:27:35] Still can't work out why apache_request_headers() doesn't seem to work on PHP7
[22:28:10] well, doesn't exist
[22:28:29] wikidiff2 php7 compat was https://gerrit.wikimedia.org/r/#/c/293894/
[22:28:39] it's part of php-fpm
[22:29:06] are you running php7 under php-fpm? (is that still a thing in the world of php7?)
[22:29:20] fpm is installed
[22:29:29] ori: luasandbox is https://gerrit.wikimedia.org/r/#/c/298312/
[22:29:48] anomie, that's awesome
[22:30:32] php5 + php7 compat leads to ifdef soup :/
[22:31:19] there are a couple of half-done attempts at a macro/backport compat layer but none of them work well that I've found
[22:31:40] we just forked yaml because it was too gross otherwise
[22:33:58] we need a kibana4 url shortener
[22:36:04] https://github.com/elastic/kibana/issues/1553
[22:36:23] ori: yeah, fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
[22:37:30] lol @ "While having the entire state of the application in the url is very handy, there are times when it gets too long to share on certain platforms."
[22:41:36] heh. it borks IE too -- https://github.com/elastic/kibana/issues/3947
[22:42:00] yeah, ResourceLoader has to work around that
[22:42:58] the funny thing to me about this is that it is a regression to how kibana originally did things, which they dropped in kibana3 because it broke all kinds of things
[22:43:12] rewrites are doomed to repeat the same dumb bugs
[22:44:48] and add some extra, just for you
[22:46:40] In completely off-topic news, I'm hoping that Google gives Niantic some more server capacity soon because I can't catch any pokemon when the client can't connect to the server :/
[22:59:43] bd808: so, I reported this issue with wikidata and included a logstash (short) url: https://phabricator.wikimedia.org/T140955
[23:00:15] when I open it in Iceweasel 44 it gives me the default page
[23:00:19] but in Chromium it works
[23:00:50] BUT, it works if I copy/paste the long url into iceweasel
[23:01:07] (even though it's the same url in my address bar, just not going through the shorturl redirector)
[23:01:22] is this an issue on my end?
[23:02:18] bd808: second thing if you're not sure, adive on https://phabricator.wikimedia.org/T140954 ? :) :)
[23:02:23] advice
[23:02:29] maybe something wonky in their js on the initial https://logstash.wikimedia.org/app/kibana#1654394940997267372 response?
[23:03:20] what should the link be?
[23:03:29] https://logstash.wikimedia.org/app/kibana#/dashboard/Fatal-Monitor?_g=(refreshInterval:(display:'5%20minutes',pause:!f,section:2,value:300000),time:(from:now-2d,mode:relative,to:now))&_a=(filters:!(('$state':(store:appState),meta:(alias:!n,disabled:!f,index:'logstash-*',key:level,negate:!t,value:NOTICE),query:(match:(level:(query:NOTICE)))),('$state':(store:appState),meta:(alias:!n,disabled:!f,index
[23:03:35] :'logstash-*',key:message,negate:!t,value:SlowTimer),query:(match:(message:(query:SlowTimer,type:phrase)))),('$state':(store:appState),meta:(alias:!n,disabled:!f,index:'logstash-*',key:message,negate:!t,value:'Invalid%20host%20name'),query:(match:(message:(query:'Invalid%20host%20name',type:phrase)))),('$state':(store:appState),meta:(alias:!n,disabled:!f,index:'logstash-*',key:level,negate:!t,value:
[23:03:41] INFO),query:(match:(level:(query:INFO)))),('$state':(store:appState),meta:(alias:!n,disabled:!f,index:'logstash-*',key:level,negate:!t,value:WARNING),query:(match:(level:(query:WARNING)))),('$state':(store:appState),meta:(alias:!n,disabled:!f,index:'logstash-*',key:normalized_message.raw,negate:!f,value:'Wikibase%5CRepo%5CStore%5CWikiPageEntityStore::updateWatchlist:%20Automatic%20transaction%20with
[23:03:47] %20writes%20in%20progress%20(from%20DatabaseBase::query%20(LinkCache::addLinkObj)),%20performing%20implicit%20commit!!'),query:(match:(normalized_message.raw:(query:'Wikibase%5CRepo%5CStore%5CWikiPageEntityStore::updateWatchlist:%20Automatic%20transaction%20with%20writes%20in%20progress%20(from%20DatabaseBase::query%20(LinkCache::addLinkObj)),%20performing%20implicit%20commit!!',type:phrase))))),opt
[23:03:53] ions:(darkTheme:!t),panels:!((col:1,id:Top-20-Hosts,panelIndex:2,row:3,size_x:9,size_y:2,type:visualization),(col:1,columns:!(type,level,wiki,host,message),id:Default-Events-List,panelIndex:3,row:10,size_x:12,size_y:23,sort:!('@timestamp',desc),type:search),(col:1,id:Fatal-Events-Over-Time,panelIndex:4,row:1,size_x:12,size_y:2,type:visualization),(col:1,id:Trending-Messages,panelIndex:5,row:5,size_x
[23:03:59] :12,size_y:5,type:visualization),(col:10,id:MediaWiki-Versions,panelIndex:6,row:3,size_x:3,size_y:2,type:visualization)),query:(query_string:(analyze_wildcard:!t,query:'(type:mediawiki%20AND%20(channel:exception%20OR%20channel:wfLogDBError))%20OR%20type:hhvm')),title:'Fatal%20Monitor',uiState:(P-2:(spy:(mode:(fill:!f,name:!n)),vis:(legendOpen:!f)),P-4:(spy:(mode:(fill:!f,name:!n)),vis:(colors:(excep
[23:04:03] heh
[23:04:05] tion:%23C15C17,hhvm:%23BF1B00))),P-6:(spy:(mode:(fill:!f,name:!n)),vis:(legendOpen:!t))))
[23:04:05] lol
[23:04:08] ughhh
[23:04:15] https://phabricator.wikimedia.org/P3529
[23:04:37] irssi took the liberty of adding newlines, for some reason
[23:06:30] (since it's longer than the line max for IRC, I presume, I'm a bit dense right now)
[23:08:29] I was actually asking about the other link (for the UBN!) but I found it
[23:08:45] oh, sorry
[23:08:59] no wirries
[23:09:00] *worries
[23:09:22] ftr: I wasn't sure if it was UBN!
[23:09:35] it was not!
[23:09:50] fuck anyone who is installing MW bits using Composer
[23:10:08] I'd like to rip that out of everything we control
[23:10:14] it doesn't work and never will
[23:10:30] * bd808 shakes fist in general direction of SMW
[23:11:10] It really sucks that the most popular MW variant has such a bad relationship with upstream
[23:14:46] greg-g: your short link works for me with FF. Can you see any js errors when you try to use it in Iceweasel?
[23:15:29] also didn't Mozilla and Debian decide to get along and kill off the iceweasel fork?
[23:18:24] a couple "unreachable code after return statement" and a "mutating the [[Prototype]] of an object will cause your code to run very slowly; instead create the object with the correct initial [[Prototype]] value using Object.create"
[23:20:12] greg-g: no errors at all for me (FF 47.0.1)
[23:20:23] ok ok
[23:21:03] also kibana4 is a pile of stuff that I don't want to debug or extend. E_TOOMUCHJS
[23:21:10] * greg-g nods