[00:04:38] TimStarling: Here's another one with symbols in tidy -- https://phabricator.wikimedia.org/P120#543 [00:05:00] yeah, i deployed a package with symbols to labs [00:06:21] and this is definitely with a patched HHVM with support for the 'Z' arg type, right? [00:06:56] MediaWiki-Core-Team: Central Code Repository - https://phabricator.wikimedia.org/T1238#802797 (Legoktm) RfC: https://www.mediawiki.org/wiki/Requests_for_comment/Global_scripts [00:07:15] it's definitely the package _joe_ produced for that purpose, yes [00:07:33] and it's a segfault with a totally different bt than the one i was seeing earlier when i didn't have the patch [00:08:03] *than the crash [00:08:11] this one is actually a segfault [00:11:36] TimStarling: another one, with full arguments: . deployment-mediawiki02.eqiad.wmflabs:/home/ori/core.deployment-mediawiki02.hhvm.17244.1417565053 if you want to poke. [00:49:37] <^d> manybubbles: poolcounter segmenting conf went out. uneventful. might be more interesting tomorrow with new branch. [00:51:38] ^d: arc should go in /srv, imo. I hear you on the benefits of providing it for both the host and guest environments, but we decline to do that because sharing executable scripts with the host has hairy implications for security. [00:52:05] <^d> fair enough. patch was based on idle conversation in here anyway :) [00:53:08] <^d> ori: What about my other argument though. Like iegreview I was going to make a role for phab hacking. [00:53:40] I was looking for your comments -- did you say it on IRC or write it on the patch? [00:53:46] if the former, then I lost them [00:54:06] s/it/them [00:54:14] <^d> irc yesterday at some point. [00:55:14] bd808, marxarelli: any thoughts on this? [00:55:59] we keep parsoid in /srv and MediaWiki-Vagrant users are more likely to want to hack on parsoid than on iegreview. It's a shame that VirtualBox Shared Folders is so buggy and slow. [00:56:01] I think it's ok personally.
I've tossed 2 non-wiki roles in there myself. [00:56:16] in where? [00:56:33] in mw-v. Are we talking about arc of a general pahb role? [00:56:36] *phab [00:56:40] * bd808 is confused [00:56:56] <^d> Well if we're only doing arc then ori's point of moving it to /srv makes sense. [00:57:12] I suggested sharing arc between the host and guest so ... [00:57:12] <^d> For doing a general phab role for hacking you'd want it in /vagrant [00:57:21] i think both phab and arc can go in mwv; it's just a question of where to put files [00:57:27] i don't think we should share arc [00:57:39] we don't do that with git, or git-review [00:57:41] <^d> We share MW maintenance scripts, fwiw. [00:57:54] <^d> arc is just php like those. [00:58:16] And if mw-v ships arc there is one less thing to download [00:58:44] well, the wiki that LocalSettings.php configures exists on the guest, so those maintenance scripts won't work (and aren't meant to work) on the host [00:59:20] <^d> arc would do the same, it would be operating on the host. [00:59:23] <^d> Which is the point. [00:59:27] Installing things in /srv bugs me based on my assumption that this is a dev environment [00:59:29] how would people use arc on their host environment? would they use the full path each time? [00:59:42] <^d> Either that or add it to your $PATH. [00:59:48] add to path or a shell alias [01:00:02] <^d> Plenty of options once you *have* the binary :) [01:00:37] I think I'd be OK with putting in /vagrant so long as we don't actually encourage people to use it on their host environment [01:01:01] it won't be in $PATH, so it won't get used accidentally on the host, so that's fine [01:01:02] I'm not getting the security separation [01:01:27] You are worried about the contained local vm attacking the host? [01:03:12] yes. I'd like people to be able to expose their VM via things like http://localtunnel.me/ [01:04:39] and invite someone to remote-debug via SSH, or look at a proposed interface change together [01:05:41] k. 
So arc exposes a different or larger risk than edits to the Vagrantfile? [01:06:06] that's a good point [01:06:46] It is more attack surface but not really different [01:07:16] But if I was attacking folks who use vm sharing services I'd target the Vagrantfile ;) [01:07:57] And if I was specifically targeting mw-v users I'd attack Vagrantfile-extra.rb [01:10:04] But I have to agree in spirit. I don't like schemes that have been devised for git-review in the vm that require ssh-agent forwarding on a similar principle [01:13:30] <^d> luckily we won't have that same problem in arc. [01:13:33] <^d> yay, conduit. [01:13:34] kinda late to the party but yeah, i agree with bd808: if you're meaning for people to hack on it, leave it in /vagrant [01:15:13] imho, only supporting software/services that won't be touched by the user should go in /srv [01:24:19] what percentage of traffic is served by hhvm now? [01:26:40] RESTBase-Cassandra, MediaWiki-Core-Team: Review restbase indexing proposal - https://phabricator.wikimedia.org/T729#803068 (GWicke) [01:27:29] I haven't heard a number but _joe_ said the app servers would all be reimaged by Thursday [01:33:26] <^d> http://etherpad.wikimedia.org/p/app-server-upgrade has a link to a google doc. [01:33:52] <^d> I heard tell earlier today "done by thursday" or something. [01:35:00] seems to be about 60% [01:36:33] yeah, wednesday/thursday should be most of the reinstalls done [01:53:49] MediaWiki-Core-Team: kafkatee security review - https://phabricator.wikimedia.org/T75950#803961 (csteipp) A couple of minor asserts https://gerrit.wikimedia.org/r/#/c/177152/ Otherwise this should be fine to deploy.
[02:29:29] ori: https://wdq.wmflabs.org/api_documentation.html [04:52:40] MediaWiki-Core-Team: Isolate throughput issue on HHVM API action=parse - https://phabricator.wikimedia.org/T758#805753 (ori) [06:22:01] AaronSchulz: the whole thing could be made a lot simpler still [08:25:43] <_joe_> TimStarling: 60% is a reasonable figure atm, yes [08:27:33] <_joe_> and yeah, done by thursday is probably true [08:27:43] <_joe_> modulo terrible prod problems [11:23:21] MediaWiki-Core-Team, MediaWiki-extensions-GWToolset, Multimedia: Upload stopps regularly - https://phabricator.wikimedia.org/T76450#806478 (mark) [13:47:53] MediaWiki-Core-Team, MediaWiki-extensions-GWToolset, Multimedia: Upload stopps regularly - https://phabricator.wikimedia.org/T76450#807145 (Gilles) [14:54:50] MediaWiki-API, MediaWiki-Core-Team: API CORS preflight response should allow Api-User-Agent header - https://phabricator.wikimedia.org/T76340#807260 (Anomie) [15:47:19] <_joe_> hey any devs around? [15:47:32] <_joe_> robots.txt is timing out pretty much everywhere [15:51:44] * anomie is around [15:53:06] CirrusSearch, MediaWiki-Core-Team: insource:"A, B et C (Le Prisonnier)" should match interwiki links such as [[fr:A, B et C (Le Prisonnier)]] - https://phabricator.wikimedia.org/T74902#807326 (Manybubbles) [15:54:10] <_joe_> so, what I see is a lot of 503s on requests from googlebot, but if I request the same url again, it works [15:58:13] <_joe_> ori: the "apache" pool is 100% on HHVM as of now [16:04:56] _joe_: I see https://www.mediawiki.org/robots.txt isn't working, but I'm not seeing anything obvious on the MediaWiki end. Unless maybe it's only happening on wikis that don't have MediaWiki:robots.txt and something in the stack is choking on the mismatched Content-Length header?
[16:05:42] <_joe_> anomie: no idea [16:06:50] <_joe_> anomie: I can see it just fine calling the appservers directly [16:14:07] _joe_: We could try https://gerrit.wikimedia.org/r/#/c/177232/ and see if it helps. [16:15:03] <_joe_> anomie: do you think that might be related? [16:15:57] _joe_: Some spot checking didn't return anything contradictory to the theory that it's only wikis without MediaWiki:robots.txt having problems, and the Content-Length header is only wrong in that case. Worth a shot, IMO. [16:16:18] <_joe_> anomie: it might in fact [16:16:26] <_joe_> varnish could interpret that as a failure [16:17:19] <_joe_> anomie: go for it :) [16:18:33] <_joe_> eh sorry, I should've +1'd it [16:20:22] just updated firefox this morning. there is now a link which bounces people to enwiki's search.... [16:21:01] :) I saw that [16:21:20] It's been an option for quite a while but the new UI makes it easy to spot [16:22:00] _joe_: Darn, doesn't seem to have made a difference. Still getting the error page. [16:22:31] <_joe_> anomie: yes [16:26:24] * wingswednesday updated firefox [16:26:45] No Firefox, I don't care who your search deal is with. Yahoo still sucks and I'm not switching. [16:27:16] wingswednesday: you'll make my baby brother sad. :( [16:27:34] He works at Y! on their stores platform [16:28:25] Well he doesn't work in search then :) [16:28:38] nope. true enough [16:31:42] manybubbles: robh and I discussed it earlier, 24 seems fine to us too. [16:31:49] He said maybe 26 if we're feeling paranoid. [16:32:11] I think 26 is probably overkill for failover purposes. [16:32:53] <_joe_> anomie: why do you set the content-lenght at all? [16:33:12] <_joe_> anomie: it's set wrong if the content is then gzipped by apache [16:33:30] <_joe_> so, just don't set it and let apache do its work? [16:35:03] _joe_: Let's try it. 
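The failure mode being chased here can be sketched in a few lines: if the application emits a Content-Length computed from the uncompressed body, and a later layer (Apache's gzip handling, in this case) compresses the response without rewriting that header, the advertised length no longer matches the bytes on the wire, and a downstream cache like Varnish can treat the response as broken. A rough Python illustration; the variable names and body are made up, and this is not MediaWiki's actual code path:

```python
import gzip

# Body the application would serve, e.g. a default robots.txt
# for a wiki without a MediaWiki:robots.txt page.
body = b"User-agent: *\nDisallow: /w/\n" * 20

# The application sets Content-Length for the *uncompressed* body...
headers = {"Content-Length": str(len(body))}

# ...but a later layer compresses the response without fixing the header.
wire_bytes = gzip.compress(body)

# The header now disagrees with what actually goes over the wire,
# which a strict downstream proxy may treat as truncation/failure.
mismatch = int(headers["Content-Length"]) != len(wire_bytes)
print(mismatch)  # True
```

This is why dropping the header entirely and letting Apache manage both compression and Content-Length fixes the symptom.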
[16:35:45] <_joe_> I guess the same happens anywhere we set the content-lenght [16:35:52] <_joe_> *th [16:36:03] I don't think we normally let apache add gzip [16:36:08] do we? [16:36:50] There was/is an hhvm bug related to that. MW was gzipping and then hhvm double gzipped [16:36:59] <_joe_> was [16:37:10] <_joe_> we're not double-gzipping in fact [16:37:20] <_joe_> I think this is a result of some workaround of ours [16:37:38] <_joe_> and the sane thing is to let apache do the gzipping [16:37:49] _joe_: Yeah, killing Content-Length entirely there seems to have fixed it. [16:37:56] <_joe_> I don't think we're crazy enough to do that in php [16:38:05] <_joe_> anomie: high five :) [16:38:35] I tracked it down at some point. There is code in our php layer that chooses to add or not add the gzip output filter and sets some http headers when it does. [16:40:14] wfGzipHandler() is the place -- https://github.com/wikimedia/mediawiki/blob/7878e3896b5d5976254f4d580ad41e1adcf5a844/includes/OutputHandler.php#L95-L145 [16:41:21] And called from -- https://github.com/wikimedia/mediawiki/blob/7878e3896b5d5976254f4d580ad41e1adcf5a844/includes/OutputHandler.php#L56-L63 [17:11:28] <_joe_> bd808: anyway, when php gzips its content, it will output the relevant header I guess [17:11:47] <_joe_> while the robots.txt file was ungzipped in php from what I understand [17:11:59] bd808: I had a dream last night that involved someone (dunno who) telling me that running `brew upgrade` was bad. [17:12:02] <_joe_> and apache took care by itself to gzip it for clients [17:12:08] I need some time away from the command line... [17:12:20] _joe_: *nod* [17:13:30] wingswednesday: well... it is a package manager that uses binaries downloaded from the amazon cloud :) [17:13:54] wingswednesday: But yeah stop dreaming about security or you will wake up with a huge neckbeard at some point [17:13:56] Is that where sourceforge hosts their files these days? [17:14:07] amazon? 
[17:14:20] The "bottles" are in aws aren't they? [17:14:29] ==> Downloading https://downloads.sf.net/project/machomebrew/Bottles/libpng-1.6.15.yosemite.bottle.tar.gz [17:14:48] wingswednesday: well, brew and upgrade to yosemite don't go well [17:15:03] YuviPanda: It's ok if you have an SSD [17:15:04] I've been on yosemite. [17:15:14] Which I do in all my laptops :) [17:15:16] hmm, I should update one of these days [17:15:32] I did it on 2 laptops. It took about 30 minutes or so [17:15:40] Actually, I guess the old MBP in the closet is spinning disk. [17:15:47] But it just collects dust. [17:16:06] I did have to rebuild quite a bit of stuff. Anything that was linked against system libs got broken. [17:16:37] and dear god have a high bandwidth connection. [17:16:59] getting the update + xcode + other random things was 5G I think [17:17:21] dear god xcode. [17:17:22] <_joe_> bd808: oh the usual Xcode-breaks-brew thing [17:17:36] "Here download 6GB of bullshit you don't need just so you can use clang" [17:18:02] _joe_: Yeah and new python broke several things for me. Like vim [17:18:18] <_joe_> yosemite? [17:18:26] *nod* [17:18:44] My brew built vim was linked against the system python [17:18:54] <_joe_> gee [17:18:59] brew built vim? why? [17:19:00] <_joe_> I don't want that [17:19:02] I was on 10.8 though [17:19:07] <_joe_> oh ok [17:19:15] <_joe_> yeah that was kinda horrible [17:19:18] 10.8 -> 10.10 in one shot [17:19:31] I should probably just put debian on this thing at some point [17:19:40] <_joe_> YuviPanda: don't [17:19:44] <_joe_> you'll regret that [17:20:04] wingswednesday: because vim! Back in the olden days the system vim was crap. Might be better now. [17:20:26] +1 to _joe_ [17:20:34] I tried dual booting ubuntu and osx once. 
[17:20:38] That road...there be dragons [17:20:45] YuviPanda: If you want debian, you should probably get a lenovo laptop [17:20:51] that's true [17:20:54] I keep thinking this [17:21:03] and then going 'naaah, it is ok' [17:21:08] I tried dual booting osx and fedora, it was...not fun. [17:21:12] probably when I'm in a single place long enough to get a adesktop [17:21:12] <_joe_> I got a carbon x-1, it's nice but the keyboard sucks [17:21:17] and large, large screens [17:21:31] desktop? bah [17:21:36] but fedora on a real laptop is lovely :> [17:21:47] just get a good external monitor [17:21:51] _joe_: I can't use a computer if I'm not using http://images.anandtech.com/doci/7125/Kinesis%20Advantage%20(1).jpg as a keyboard [17:21:59] 20s on a normal keyboard and my RSI flares up [17:22:31] but carrying that keyboard around meant I can't carry my camera around, :( [17:22:33] but still ok [17:23:23] Heh, I should pastebin this old e-mail about my dual-boot woes. [17:23:34] <_joe_> YuviPanda: I have http://www.daskeyboard.com/model-s-professional-for-mac/ [17:23:56] _joe_: :D blue or brown keys? [17:24:24] oh, blue [17:24:34] * YuviPanda isn't a fan of blue keys too much. [17:24:50] https://phabricator.wikimedia.org/P123 [17:24:50] <_joe_> I love the hard feedback [17:24:52] _joe_: I have a majestouch ninja, which is somewhat similar. Can type on that for maybe 10 mins before it flares up again, though :( [17:25:01] I use this keyboard -- http://www.microsoft.com/hardware/en-us/p/sculpt-ergonomic-desktop/L5V-00001 [17:25:03] fuck me, I forgot how much GPT/MBR fighting sucked. [17:25:16] The only thing M$ ever made that I like is keyboards [17:25:44] <_joe_> and the mice, some years ago, were the best [17:26:03] Yeha they did make some good mice. I use a magic mouse now [17:26:16] * YuviPanda uses trackpad, can't use mice either [17:26:17] <_joe_> but don't tell bblack either thing [17:26:33] <_joe_> he's been in Logitech for some years I think :) [17:27:12] heh. 
My friends who work at HP don't like it when I point out that Canon makes better printers [17:27:26] and makes much of the guts of the HP printers too [17:27:58] I've had the same HP printer for 8 years and it still works. [17:28:07] It's a shit printer, but hey it works for me. [17:28:35] My LJ4M+ is finally dying. I bought it in 1997. [17:28:58] The transfer rollers are rotting :( [17:29:25] I think my mom's LJ5L still works. [17:29:28] Good printer. [17:29:53] bd808: operations/mediawiki-config now has a composer.json file that isn't in the repo root, so the normal php-composer-validate job won't work for it... [17:30:12] legoktm: hmmm... lame. [17:30:21] So we need to tweak the job I guess? [17:30:43] * bd808 remembers he needs to poke hashar about the cdb job too [17:30:49] yeah, I just have no idea how to do that :P [17:31:28] also, I was using the xhprof profiler last night, and it was super easy to use (compared to me fiddling around with the db one a few months ago)! :D [17:31:34] legoktm: The magic there is not too dark. bash scripts in yaml that generate xml. what could be easier? :) [17:31:57] wingswednesday: Debian seems to do GPT well (or well enough, anyway) now. [17:32:37] bd808: I think we just need to change the directory to multiversion [17:33:28] Making the jjb macros parametrizable is not magic I've tried yet. [17:34:07] legoktm: file a bug and maybe it will get magically fixed. If not we can take a shot at it [17:34:26] ok :P [17:38:37] anomie: Perhaps :) [17:38:53] That e-mail is dated 2011 :p [17:40:33] I guessed it was old. [17:41:33] * anomie installed Debian jessie on two GPT-using computers recently [17:45:39] _joe_, YuviPanda: woooooooooooooooooooooooow [17:45:43] you guys rock!! [17:45:50] <_joe_> ori: alex too :) [17:45:56] !!!!
[17:45:58] ^ :) [17:46:00] MediaWiki-Core-Team, Code-Review: Import all gerrit.wikimedia.org repositories with Diffusion - https://phabricator.wikimedia.org/T616#807595 (Chad) [17:46:12] i literally jumped when i read your e-mail [17:46:17] <_joe_> :) [17:46:38] not read it... [17:46:40] all done? [17:46:46] <_joe_> I said so on IRC first [17:46:54] Reedy: all appservers done. [17:46:59] well, except for one which has a bad disk. [17:46:59] I've not read backscroll either [17:47:02] been off all day [17:47:07] Reedy: slacker. [17:47:13] oh wow. Nice [17:47:15] Awesome [17:47:15] Congrats :D [17:47:23] YuviPanda: Moving house and shizz [17:47:26] <_joe_> ori: do you think it's better to wait for the tidy change for doing the same with API? [17:47:30] Reedy: nice :) [17:48:02] <_joe_> ori: and we didn't accomplish that without a nifty bug on robots.txt [17:48:10] oh what was the bug? [17:48:29] <_joe_> we printed out content-length from PHP [17:48:36] <_joe_> but then the file is .txt [17:48:44] We need to get hhvm logs into logstash. Fatalmonitor there is junk without the hhvm errors. [17:48:45] <_joe_> so if the file was requested as gzipped [17:49:01] manybubbles: woo, poolcounter rebuilt :) [17:49:02] <_joe_> apache gzips it [17:49:28] <_joe_> only that, in case of mod_proxy_fcgi, the original content-length doesn't get rewritten [17:49:46] <_joe_> or at least it's what I figured while firefighting it with anomie [17:50:12] s/HAT/HAL/ everywhere :) [17:50:25] <_joe_> L? [17:50:46] <_joe_> it's HNJ is my final goal :P [17:50:52] cirrus on enwiki, hhvm on app servers, it's time to retire [17:51:01] <_joe_> (HHVM, Nginx, jessie) [17:51:14] Linux :) [17:51:18] <_joe_> but it does sound horrible [17:51:28] Higher New Jersey [17:51:31] So we don't tie ourselves to a distro :p [17:51:41] <_joe_> wingswednesday: I'm in [17:51:43] * YuviPanda votes for arch [17:51:48] HAA [17:51:51] <_joe_> YuviPanda: arch? [17:51:52] ;-) [17:52:00] why not gentoo? pain must be felt!
[17:52:09] we can go full hog and use OS X [17:52:09] <_joe_> YuviPanda: it's clear I didn't inteview you [17:52:11] * bd808 slaps YuviPanda with a large trout [17:52:13] Slackware or gtfo. [17:52:26] No love for puppy? [17:52:34] Linux From Scratch [17:52:37] <_joe_> "what distro do you prefer in production" Gentoo, slackware or arch earn you a "no hire" :P [17:52:39] _joe_: heh, I'm pretty sure that if I interview at the WMF now I won't really make it through. [17:52:48] Reedy: We can make our own distro! [17:52:50] * YuviPanda hasn't even run arch in 5 years. [17:52:55] ReactOS or death [17:53:00] waitohshi [17:53:12] WMLinux [17:53:16] <_joe_> YuviPanda: nah, they hired me, you'd do fine [17:53:26] _joe_: heh :) [17:53:32] I have a lot of manuals for HPUX 8 if that will help [17:53:47] <_joe_> ori: there was an article on wired about TempleOS [17:53:53] I've some SCO disks at my dads work [17:54:03] _joe_: I've seen people with similar things to what I had on my resume get rejected directly by HR, so I suspect that'll be my fate too. [17:54:09] (as in, my resume as it was when I got hired) [17:54:09] I used z/OS before. [17:54:14] <_joe_> Reedy: did they try to sue you? [17:54:20] _joe_: yeah, they managed to avoid talking about the racism [17:54:20] lol [17:54:30] No, they've had a reasonable amount of money out of us [17:54:34] <_joe_> ori: yeah I noticed [17:54:45] Windows Small Business Server [17:54:49] I think that's still a thing? [17:54:49] <_joe_> Reedy: I meant the disks [17:55:00] YuviPanda: Not really, they've stopped it for newer versions [17:55:16] TFS! [17:55:17] ah, it's Windows Server Essentials now [17:55:25] It's not the same though [17:55:26] wingswednesday: I bet you'd love it if we migrated again ;) [17:55:37] Sure why not. 
[17:55:40] <_joe_> well, if we want to play the "I used some weird OS" game, I think I used pretty much any Unix around, and OpenVMS as well [17:55:56] <_joe_> which was kinda nice, btw [17:56:31] TempleOS spoils you [17:56:34] <_joe_> (I'm not that old, we had some really really old workstations at the university) [17:56:35] * YuviPanda will lose, on account of never having used any OS that's not a debian derivative for more than a few months [17:56:49] <_joe_> YuviPanda: you're too young :) [17:56:54] Heh, the caption on [[z/OS]] makes me lol. [17:56:55] no other OS comes with guidelines for talking with god via the equivalent of /dev/random [17:56:56] "A mainframe computer on which z/OS can run." [17:57:03] _joe_: heh :) First linux I ever tried was Ubuntu 7. something, I think :) [17:57:22] SCO... fond and yet horrible memories. My first "real" computer job was contract work reverse engineering the data storage of a point of sale system for gas stations built on SCO and ndbm. [17:57:28] where I proceeded to completely trash my system because I didn't understand partitioning [17:57:40] bd808: that sounds like a lot of fun actually [17:57:44] <_joe_> YuviPanda: I started with red hat 5 (not RHEL, redhat), then debian slink I think [17:57:47] i bet you have some fun stories [17:58:13] <_joe_> {insert "the history of mel" link} [17:58:16] I've had fun with C-ISAM files [17:58:27] * YuviPanda has no fun stories [17:58:40] YuviPanda: That one time you rm -rf / [17:58:41] :P [17:58:43] I have fun stories but they don't involve antiquated OS releases. [17:58:46] never written major JS without jquery, for example. [17:58:47] ori: It was. Perl and late nights in college. I figured out how to print reports that the vendor was asking $10k for and mine actually worked. [17:58:50] Reedy: ah, that's not fun :) [17:58:57] I think I got paid $500 and some pizza [17:59:01] heh [17:59:04] bd808: you took their jobs! 
[17:59:16] xD [17:59:20] I also broke their drm [17:59:27] bd808: http://www.geekpeak.de/images/produkte/i22/22-go-away-or-i-will-replace-you-de.jpg [17:59:43] The SCO system at my dads work... Them quoting everything in days or half days work [17:59:50] Minor changes "yeah, that's half a day" [18:00:00] Even in C.. That's gonna take you minutes to do [18:00:30] Rule #1 of contract programming: everything takes at least 4 hours. [18:00:43] greg-g: ^ FYI [18:00:44] Reedy: do you ever go, like, "I just pushed a new release to one of the world's biggest websites while on the phone with you" [18:00:44] ;) [18:01:30] My bullshit meter dealing with "IT suppliers" in the UK [18:01:31] Ugh [18:01:34] I've totally crashed the site while on the phone with my dad. [18:01:44] * Reedy high fives wingswednesday [18:37:11] FINCH, MediaWiki-Core-Team: Generalize NoScript detection in Wikipedia Zero to be used for all users - https://phabricator.wikimedia.org/T1384#807839 (Jdlrobson) [18:37:23] FINCH, MediaWiki-Core-Team: Generalize NoScript detection in Wikipedia Zero to be used for all users - https://phabricator.wikimedia.org/T1384#24251 (Jdlrobson) [18:39:30] Zero, FINCH, MediaWiki-Core-Team: Generalize NoScript detection in Wikipedia Zero to be used for all users - https://phabricator.wikimedia.org/T1384#807847 (Jdlrobson) [18:40:43] Does #mw-core-team == all things anyone wants to put into mw/core.git repo? [18:40:50] seems not quite right [18:42:20] it should be tasks that the mw core team is going to work on or should look at [18:42:49] MediaWiki-General-Or-Unknown is where stuff for mw/core.git should go [18:43:41] * bd808 votes for those FINCH tasks to go there instead [18:50:02] you guys need a better name [18:50:15] "The Helpless Cases Team" has a nice ring to it [18:50:45] or Hopeless, even better. [18:51:16] Actually, MW Core Team was a fine name.
[18:51:26] Before everyone else started using "core" to describe their stuff :) [18:51:29] could someone review https://gerrit.wikimedia.org/r/#/c/177021/ ? It's a script to remove totally invalid emails from the database...I'd like to run it in prod sometime soon [18:52:16] AaronS: https://gerrit.wikimedia.org/r/#/c/177268/ and its child, when you get a chance. [18:52:20] I'm pretty hopeless [18:52:34] wingswednesday: Why not just call you lot "the MediaWiki platform team"? [18:52:40] ori: any idea when api servers will be converted? [18:52:45] wingswednesday: And the rest can be "hangers-on". [18:52:57] because platform is a larger group [18:53:04] and we suck at naming things [18:53:19] * wingswednesday points to topic [18:53:51] Team51 [18:54:11] Actually when I was setting up our team workboard, I kept noticing we're project 37. [18:54:20] "Project 37" sounded like a nice skunkworks division name. [18:55:09] wingswednesday: oh! now I know who you are! your name was getting cut off and I was like "who the fucking is wingswedn?" [18:55:14] project 37 [18:55:17] best name [18:55:38] thanks manybubbles :D [18:55:40] Maybe all teams should be renamed to honor historic female encyclopedians. I can start a loomio poll on it. ;) [18:55:55] bd808: all teams everywhere [18:56:07] dammit bd808 you reminded me of loomio too early this month. [18:56:19] Only 3 days into December and already I remembered Loomio. [18:56:47] Experience seasonal joy by starting a loomio pool! [18:56:54] So I started trying to write that blog post I'm supposed to write. [18:57:05] Oh man, we did talk about doing that. [18:57:09] s/pool/poll/ [18:57:11] You wanna collab on it? [18:57:23] and I did a bit of reasearch. 
We has about the same number full text searches as cnn.com has page views [18:57:26] about [18:57:33] they have us by a few percentage points [18:57:39] That's a cool statistic :) [18:57:52] about a billion a month [18:57:54] about [18:58:13] I still can't wrap my head around our page view scale [18:58:15] I'll write a first rough rough rough draft and mail it to you. or something. [18:58:20] okie dokie [18:58:39] bd808: its _crazy_. varnish helps so so much [18:59:36] bd808: so, I see the new jobrunner running in labs, and not the old one, but the old one still has an init.d entry and there isn't one for the new runner [19:00:07] I think the new one is an upstart job right? [19:00:50] AaronS: /etc/init/jobrunner.conf [19:01:16] just saw it in grep of all /etc [19:01:42] Mobile-Web, MediaWiki-Core-Team, Scrum-of-Scrums: Unified diff on mobile - https://phabricator.wikimedia.org/T1223#807925 (kaldari) Open>Resolved a:kaldari This has already been released to stable. [19:01:51] where is it in /etc/init.d? We should probably clean that up [19:07:13] MediaWiki-JobRunner, Scrum-of-Scrums, MediaWiki-Core-Team, Beta-Cluster: beta cluster job runner keep running some periodic tasks - https://phabricator.wikimedia.org/T65681#808020 (aaron) Open>Resolved a:aaron Labs now uses the new job runner loop and the periodic tasks stuff in MW uses redis, not me... [19:07:42] bd808: mw-jobrunner or something like that [19:08:13] AaronS: Not child, second change. https://gerrit.wikimedia.org/r/#/c/177277/ [19:09:26] $ sudo rm /etc/default/mw-job-runner /etc/init.d/mw-job-runner /usr/local/bin/jobs-loop.sh [19:09:50] * bd808 runs puppet to see if they come back [19:10:06] that's one way to find out [19:11:20] AaronS: Gone now. :) [19:13:18] proper syntax helps. [19:13:24] I should test patches before pushing.
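For context on the invalid-email cleanup script under review earlier (gerrit change 177021): the idea is to drop only addresses that cannot possibly be valid. A toy sketch in Python; the regex and helper names are illustrative assumptions, not MediaWiki's actual Sanitizer::validateEmail logic, which is more permissive and RFC-aware:

```python
import re

# Deliberately minimal structural check, NOT MediaWiki's real validator:
# one '@', a non-empty local part, and a dotted domain with no whitespace.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_plausible_email(addr: str) -> bool:
    """Return False only for addresses that cannot possibly be valid."""
    return bool(EMAIL_RE.match(addr))

# Rows a cleanup script would scrub: keep the plausible, blank the rest.
user_emails = ["legoktm@example.org", "not an email", "missing-at.example.com", ""]
to_remove = [e for e in user_emails if not is_plausible_email(e)]
print(to_remove)  # ['not an email', 'missing-at.example.com', '']
```

The point of such a conservative filter is that a batch delete should only ever touch rows that no mail server could possibly deliver to.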
[19:17:22] csteipp: does the "editcontentmodel" right apply for when a page doesn't exist and you're trying to create a page with a different content model than what the default would be? [19:18:02] csteipp: MassMessage has a special page which will let people do that (create a new page with MassMessageListContent type) and I'm not sure if we should check for the right [19:19:30] Wikimedia-Logstash, MediaWiki-Core-Team: Get HHVM logs into logstash - https://phabricator.wikimedia.org/T76119#808038 (JanZerebecki) [19:19:47] MediaWiki-Core-Team, MediaWiki-General-or-Unknown: $wgProfileToDatabase still used - https://phabricator.wikimedia.org/T75917#808040 (Chad) Open>Resolved Closed by commit rMWdbca12bf9332. [19:23:11] csteipp: wctaiwan pointed out to me that MM uses the API internally, so we'll automatically do whatever core does. [19:23:15] legoktm: I would say yes, but I'm not sure what you're doing with it. If it's a fixed content type, and they're not "changing" it, then maybe not. [19:23:37] Cool [19:26:13] > Editing the list through the API failed with error code cantchangecontentmodel. [19:31:01] wingswednesday: added you to a few more cdb-related changes, I forgot to upload them when I wrote them offline [19:48:21] MediaWiki-Core-Team: Make Parser::parse tolerate recursion - https://phabricator.wikimedia.org/T76651#808255 (Welterkj) [19:53:10] MediaWiki-Core-Team: Make Parser::parse tolerate recursion - https://phabricator.wikimedia.org/T76651#808273 (matmarex) `recursiveTagsParse` is not magically "safe". Calling it means that all kinds of metadata generated during parsing will be associated with the currently ongoing parse (link tables, RL module... [20:00:09] Librarization, MediaWiki-Core-Team: Made MWException handle non-MW exceptions better - https://phabricator.wikimedia.org/T76652#808294 (aaron) [20:02:33] legoktm: kk.
[20:03:11] manybubbles: I haven't actually reviewed the lines of code yet, but your patch for extra at least compiles and passes tests for me locally :) [20:17:14] I wonder if newer rsync on reinstall apaches will speed up scap [20:20:09] I don't want new apaches in codfw. It'll slow scap back down :( [20:23:56] yeah :( [20:24:24] bd808: Dunno if we've discussed this... But we should make it so servers in codfw won't try and sync from eqiad [20:24:50] Obviously it has to happen for the first couple of proxies/2nd master, but beyond that, it shouldn't [20:24:50] IIRC for tampa it went everywhere [20:26:01] Reedy: Yeah. We need a sync server in codfw for sure. I'd actually like a warm master there (so we could deploy from codfw if tin was offline somehow) [20:26:13] right [20:26:44] but like I say, the "find nearest/fastest server" really shouldn't be causing a load of cross dc transfer [20:28:36] hmmm... so we'd need to split the rsync list by DC and give the right half to the right servers I guess [20:29:00] which would mean we'd need to know which servers are which in the dsh list [20:29:19] I think we had dc specific dsh lists before [20:29:33] We just didn't really use them [20:30:51] wingswednesday: we're not using it directly because you didn't like \Cdb\Exception [20:31:11] I don't! [20:31:12] :) [20:32:19] Reedy: We could figure it out inside scap if we used fqdns in the file too -- https://github.com/wikimedia/operations-puppet/blob/8930c23890075e1fac9f0f7d9580ae881a0671b5/modules/dsh/files/group/mediawiki-installation [20:32:31] Or we could use some better list [20:32:43] Well, the first # is always the DC. [20:32:59] So 4xxx's shouldn't fetch from 1xxx's, etc. [20:33:02] yeah [20:33:06] that's easier, duh :) [20:33:38] It'll work until we end up with >999 apaches in a single DC :) [20:34:20] With hhvm maybe that won't happen :) [20:35:12] Reedy: You should file a bug about it I guess. Do we know when the first codfw MW servers will be stood up? 
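The first-digit scheme discussed above for keeping codfw hosts from rsyncing across the DC boundary can be sketched as follows. This is a hypothetical illustration, not scap's real implementation; the digit-to-DC mapping and the hostnames are assumptions based on the naming convention mentioned in the conversation:

```python
import re
from collections import defaultdict

# Assumed mapping from the first digit of the host number to a datacenter,
# per the "the first # is always the DC" convention above (illustrative only).
DC_BY_DIGIT = {"1": "eqiad", "2": "codfw"}

def dc_of(host: str) -> str:
    """Derive the DC from the first digit of the host number, e.g. mw1017 -> eqiad."""
    m = re.search(r"\d", host)
    if not m:
        raise ValueError(f"no host number in {host!r}")
    return DC_BY_DIGIT.get(m.group(), "unknown")

def split_by_dc(hosts):
    """Partition a dsh-style host list so each DC can sync from its own proxies."""
    groups = defaultdict(list)
    for h in hosts:
        groups[dc_of(h)].append(h)
    return dict(groups)

hosts = ["mw1017", "mw1189", "mw2001", "mw2002"]
print(split_by_dc(hosts))  # {'eqiad': ['mw1017', 'mw1189'], 'codfw': ['mw2001', 'mw2002']}
```

As noted in the chat, the heuristic only holds while no DC has more than 999 app servers; a mapping keyed on the full hostname (or FQDN) would be the robust version.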
[20:35:22] Not sure... [20:36:11] If it's not until next month we could pretend that fixing scap is part of greg-g's top 5 project [20:36:25] lol, pretend. [20:37:33] Reedy: also you/we/us need to fix that patch that you and ori worked on for l10nupdate still. [20:38:17] ori's patch was a lot different to my original... :) [20:40:15] I think the only problem with it was that it didn't catch the right exception [20:50:01] 20:49:56 Finished sync_wikiversions (duration: 00m 05s) [20:50:04] that feels a lot quicker [20:53:03] MediaWiki-Vendor, OOjs-UI, MediaWiki-Core-Team, UI-Standardization: Import OOJS-UI - https://phabricator.wikimedia.org/T76662#808487 (bd808) p:Triage>Normal a:bd808 [20:53:19] MediaWiki-Vendor, OOjs-UI, MediaWiki-Core-Team, UI-Standardization: Import OOJS-UI - https://phabricator.wikimedia.org/T76662#808458 (bd808) [20:53:41] MediaWiki-Vendor, OOjs-UI, MediaWiki-Core-Team, UI-Standardization: Import OOJS-UI - https://phabricator.wikimedia.org/T76662#808458 (bd808) [20:54:03] MediaWiki-Vendor, OOjs-UI, MediaWiki-Core-Team, UI-Standardization: Import OOJS-UI - https://phabricator.wikimedia.org/T76662#808506 (bd808) Open>Resolved [20:54:09] OOjs -> "ew, js" in my head. [20:54:14] Every. Single. Time. [20:54:19] Librarization, MediaWiki-Core-Team: Made MWException handle non-MW exceptions better - https://phabricator.wikimedia.org/T76652#808508 (aaron) [20:54:26] i've been saying that we should rename it to just OOUI. [20:54:31] no one listens. :( [20:54:33] wingswednesday: I call it Wee-jus. [20:54:51] Also, GWToolset reminds me of GWT. [20:54:53] OO == "object oriented"? [20:55:47] oooh, oooh, JS! [20:55:51] it stands for "class oriented". :> [20:55:55] It's so shiny! [20:56:04] So "object oriented javascript user interface" I guess for a PHP template/widget library :/ [20:56:21] we are soooo bad at naming [20:57:12] You know where we talk about widgets a lot? Business school.
[20:57:18] 3CirrusSearch, MediaWiki-Core-Team: Prefix search containing only a namespace finds weird results - https://phabricator.wikimedia.org/T76350#808518 (10aaron)
[20:57:21] Everything in econ class is about "shipping widgets"
[20:57:58] The term always makes me think of Windows 3.x GUI programming
[20:58:26] "just add a tree widget and wire it up to the foo model"
[20:59:16] 3UI-Standardization, MediaWiki-Vendor, OOjs-UI, MediaWiki-Core-Team: Import OOJS-UI - https://phabricator.wikimedia.org/T76662#808525 (10bd808)
[20:59:45] AaronS: Thx for reviews. MediaWiki now has 4 fewer shitty globals.
[21:00:22] well they are still there ;)
[21:00:36] But they have @deprecated on them so I can actively ignore them :)
[21:01:06] csteipp: does someone else need to look at https://gerrit.wikimedia.org/r/#/c/177021/ ?
[21:01:20] AaronS: No, I just haven't actually tested it yet
[21:01:54] ok
[21:02:31] bd808: Yeah, I'm sure simple rsyncs are quicker with everything on 14.04
[21:03:01] Sweet. I like it when things get faster
[21:03:24] _joe_: YuviPanda ^^
[21:03:31] ~7 seconds to sync wmf-config
[21:03:45] it was usually 12-14 I think
[21:04:00] <_joe_> woa
[21:04:07] woo, things got quicker?
[21:04:23] seems to be
[21:04:26] <_joe_> Reedy: I think it has to do with the cpu being less loaded
[21:04:40] still
[21:04:40] net gain :)
[21:05:03] Clearly we need to find new usages for all this excess CPU we've got on hand.
[21:05:09] bitcoins?
[21:05:11] seti?
[21:05:41] We've folded proteins before :p
[21:07:23] 3MediaWiki-Core-Team, MediaWiki-extensions-WikibaseClient, Scrum-of-Scrums, Wikidata: Set languageLinkSiteGroup in $wgWBClientSettings to avoid fetching SiteList object from memcached - https://phabricator.wikimedia.org/T58602#808530 (10faidon)
[21:10:38] 3Wikimedia-General-or-Unknown, MediaWiki-Core-Team: Remove invalid emails from the database - https://phabricator.wikimedia.org/T76512#808533 (10aaron) a:3Kunalgrover05
[21:10:51] 3Wikimedia-General-or-Unknown, MediaWiki-Core-Team: Remove invalid emails from the database - https://phabricator.wikimedia.org/T76512#802252 (10aaron) a:5Kunalgrover05>3Legoktm
[21:15:04] 3MediaWiki-Core-Team: Refactor Title to make permission checking it's own class - https://phabricator.wikimedia.org/T75958#808540 (10csteipp) p:5Triage>3Normal a:3csteipp
[21:24:54] _joe_: Are you satisfied with https://gerrit.wikimedia.org/r/#/c/176191/ now, or do you want me to change it more? I'd really like to get that and the follow-up change to put hhvm errors into logstash done now that we have all the app servers on hhvm
[21:25:04] The error reporting is a bit blind without it
[21:25:13] well, at least some of the error reporting
[21:25:53] <_joe_> bd808: uh, looking, sorry
[21:26:19] _joe_: thanks. :)
[21:27:00] <_joe_> bd808: scope.lookupvar, not scope[]
[21:27:09] <_joe_> it's less error-prone
[21:27:17] You want the puppet2 syntax?
[21:27:25] ok
[21:27:40] <_joe_> bd808: uhm wait a sec
[21:28:01] <_joe_> I looked at this part of the docs specifically today
[21:28:30] <_joe_> yeah in 3 it's the same
[21:28:37] <_joe_> s/docs/source/
[21:28:51] <_joe_> sorry nevermind, old habits
[21:32:12] ori: http://graphite.wikimedia.org/render/?width=818&height=411&_salt=1417642405.405&target=MediaWiki.IndexPager.doQuery.LogPager.count&from=-14days yay for 09d941379540c94e9a0057056ab1396add33f878
[21:33:01] 3MediaWiki-JobRunner, Scrum-of-Scrums, MediaWiki-Core-Team, Beta-Cluster: beta cluster job runner keep running some periodic tasks - https://phabricator.wikimedia.org/T65681#808565 (10greg)
[21:35:45] 3MediaWiki-Core-Team: Investigate memcached-serious error spam that mostly effects HHVM API servers - https://phabricator.wikimedia.org/T75949#808569 (10aaron)
[21:39:19] 3MediaWiki-Core-Team, MediaWiki-extensions-WikibaseClient, Scrum-of-Scrums, Wikidata: avoid fetching SiteList object from memcached - https://phabricator.wikimedia.org/T58602#808572 (10JanZerebecki) 5Resolved>3Open
[21:39:43] 3Scrum-of-Scrums, MediaWiki-extensions-WikibaseClient, MediaWiki-Core-Team, Wikidata: avoid fetching SiteList object from memcached - https://phabricator.wikimedia.org/T58602#642522 (10JanZerebecki)
[21:39:57] 3Scrum-of-Scrums, MediaWiki-extensions-WikibaseClient, MediaWiki-Core-Team, Wikidata: avoid fetching SiteList object from memcached - https://phabricator.wikimedia.org/T58602#642522 (10JanZerebecki) a:5hoo>3Wikidata-bugs
[21:48:25] bd808: is dberror not in logstash?
[21:49:03] AaronS: It should be...
[21:49:20] Search for "type:dberror"
[21:49:30] I see one in the last 15 mins
[21:49:46] 3 in the last hour
[21:50:13] 2 are from vert1000 and one from enwiki
[21:50:30] I get none searching for that
[21:50:52] AaronS: Try https://logstash.wikimedia.org/#dashboard/temp/b8-GcnwaSoW0DtSqtwZ__g ?
[21:51:06] ah, wait
[21:51:15] if I keep clicking "zoom out" I see stuff
[21:51:47] bd808: btw, https://gerrit.wikimedia.org/r/#/c/175908/
[21:53:47] Can you kill that "reconnected" message too?
[21:54:04] Seems very low signal
[21:55:07] bd808: I'll make a separate patch now
[21:55:20] *nod*
[22:01:22] Krinkle: any chance to look at https://gerrit.wikimedia.org/r/#/c/174628/19 ?
[22:02:47] <^d> AaronS: Can you re-review https://gerrit.wikimedia.org/r/#/c/173472/ for me? It was a manual rebase.
[22:03:22] <^d> thx
[22:07:41] ^d: https://www.mediawiki.org/wiki/CirrusSearch/BlogDraft
[22:08:01] it's probably garbage but I needed to get _something_ down to start with
[22:09:01] <^d> I'll read it in a few.
[22:14:24] I think you should mention the cnn statistic :P
[22:15:54] <^d> "Wikimedia Search -- roughly as interesting as CNN"
[22:15:55] <^d> :)
[22:21:46] ^d: https://en.wikipedia.org/wiki/Tammar_wallaby?forceprofile=true finally, actually useful info
[22:25:02] shiny
[22:26:25] AaronS: "17.46% 83.463 1051 - wfRunHooks@3" yikes, hooks in hooks in hooks?
[22:26:48] <^d> I think that just proves hooks are evil :)
[22:26:51] bd808: or a hook that triggered two on the same level
[22:27:21] don't make that mistaken assumption of xhprof retry count=stack ;)
[22:27:35] but yeah, it's probably some ugly thing, heh
[22:27:55] sure. It means at least a hook in a hook, but they could be siblings
[22:28:17] Transclusion expansion time report (%,ms,calls,template)
[22:28:18] 15.49% 403.579 1 - Template:Reflist
[22:29:00] 57.03% 280.967 1 - SkinTemplate::outputPage
[22:29:07] man, lots of stuff to work on
[22:29:26] TimStarling: so is the general performance sprint thingy still happening?
[22:29:39] * AaronS could always just do stuff anyway, meh
[22:30:10] I have done a bit of profiling using that StartProfiler.php hack I recently merged
[22:30:25] right now I am helping the fundraising team with some kind of memcached-related emergency
[22:30:29] 315.230 16 - Template:Navbox
[22:30:47] bd808: it's just an infobox! ugh
[22:34:45] AaronS: that is neat!
[22:40:52] 3MediaWiki-Core-Team: Fix SectionProfiler percents - https://phabricator.wikimedia.org/T76668#808674 (10aaron)
[22:43:09] bd808: is there any documentation for configuring monolog with mediawiki?
[22:43:34] TimStarling: Not yet, other than the comments in the classes and the config I made for beta
[22:44:09] MWLoggerMonologSpi has pretty good doc comments
[22:44:30] but I have it on my list of things-to-do-soon to make a wiki page about it
[22:44:59] Hm.. anyone know a good way for the user options API to support setting more than one preference at a time? Seems rather broken that it doesn't support that whilst SpecialPreferences does. Maybe through associative array query parameters? PHP supports options[foo] ..
[22:45:28] can't you do change=foo=bar|baz=bar ?
[22:45:32] TimStarling: The beta config is here -- https://github.com/wikimedia/operations-mediawiki-config/blob/master/wmf-config/logging-labs.php
[22:46:06] Krinkle|detached: http://www.mediawiki.org/w/api.php?modules=options ?
[22:46:44] legoktm: can't do that if the pref name contains '=' or '|', or the value contains '|'
[22:46:46] I was just wondering if fundraising can use it, but I think their mediawiki is too old
[22:46:55] they don't have a vendor/monolog directory anyway
[22:47:23] ugh
[22:47:28] but most won't?
[22:47:44] but they can
[22:47:54] value can actually contain '|' perfectly fine, and does in the default pref set
[22:47:58] for the time correction one
[22:48:16] i can't reproduce https://phabricator.wikimedia.org/T76641 on my local installation (which is zend). is this possibly hhvm-related?
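Krinkle's objection to the `change=foo=bar|baz=bar` scheme can be shown with a naive parser. This is an illustrative sketch, not MediaWiki's actual API code; the timecorrection value format is only approximated here:

```python
def parse_change(raw):
    # Naive parse of a change=a=1|b=2 style parameter, as suggested
    # in the discussion. Hypothetical sketch.
    opts = {}
    for pair in raw.split('|'):
        name, _, value = pair.partition('=')
        opts[name] = value
    return opts

# Works for simple values:
parse_change('skin=vector|language=en')
# But a preference value that itself contains '|' (as the default
# timecorrection preference does, roughly "ZoneInfo|120|Europe/Berlin")
# gets split apart, and the pieces become bogus empty preferences:
parse_change('timecorrection=ZoneInfo|120|Europe/Berlin')
```

This is why a flat pipe-delimited `change` parameter can't round-trip every preference, and why associative parameters like `options[foo]` came up as an alternative.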
[22:48:40] ugh
[22:50:22] <^d> Problem is obviously that we have user preferences.
[22:51:42] TimStarling: We didn't merge all the patches until 1.25. They could in theory be cherry-picked back into an older release. I wrote most of it during the 1.23 dev cycle.
[22:52:08] Next time, I guess.
[22:53:05] jackmcbarn, have you tried hhvm?
[22:53:28] it happens on wmf production. i don't currently have an hhvm server of my own in usable condition
[22:54:32] <^d> jackmcbarn: mw-vagrant should set one up pretty quick.
[22:55:32] i have a vagrant. i just broke it a while ago and haven't fixed it
[22:56:08] vagrant destroy -f; git pull; ./setup.sh; vagrant up
[22:56:21] *poof* it's fixed!
[22:56:30] hopefully
[22:58:14] <^d> phpunittttttt!
[22:58:18] * ^d stabs stabs stabs
[23:20:39] if I'm using hhvm with ProfilerXhprof, can I do wfProfileIn( __METHOD__ . $thing );?
[23:20:44] or was that still a WIP?
[23:21:04] <^d> It's a no-op.
[23:21:44] <^d> You want scopedProfileIn()
[23:22:06] btw, I'm not seeing entries for that in the output :/
[23:22:26] ah, ok
[23:22:33] <^d> For what? scopedProfileIn() or wfProfileIn()?
[23:22:55] the former
[23:22:59] the DB classes call it
[23:23:19] <^d> *nod* yeah, should be showing up then.
[23:28:25] TimStarling: for extension registration, I was wondering if we could just skip putting some stuff in globals, and just have whatever needs it get it from the registry. So instead of populating $wgResourceModules, ResourceLoader would just do something like $registry->get( 'ResourceModules' ); whenever it needs them.
[23:28:53] AaronS: That's the hhvm xhprof bug I found.
[23:29:43] yeah, that would be good
[23:29:55] as long as it is O(1)
[23:30:38] AaronS: https://github.com/facebook/hhvm/issues/4354
[23:31:16] I think it should be, because Registry::get would just be returning whatever it already got out of the cache
[23:31:32] AaronS: I think the workaround would be to exclude the method we wrap xhprof_frame_begin() in from profiling
[23:31:53] <^d> legoktm: +1
[23:32:19] The problem is that when the method that calls xhprof_frame_begin() is popped from the stack, hhvm pops the custom frame instead of the calling method's frame
[23:33:31] so you have to use it directly right now?
[23:34:39] yeah :(
[23:34:52] even their own object doesn't work right
[23:35:06] so it's probably something somebody started working on and forgot about
[23:35:33] Or add the wrapping method to the global exclude list
[23:35:50] then it will not trigger a stack pop on exit
[23:35:57] and things will work correctly
[23:48:23] bd808: any idea how long that might take to get fixed upstream?
[23:48:50] AaronS: Dunno. You could send them a patch. :)
[23:49:00] I fixed it in the pecl patch I made
[23:49:33] The hhvm xhprof internals are actually pretty clean-looking. The patch probably won't be too hard
[23:49:51] They are much nicer than the php5 implementation
[23:50:34] I would guess that thinking of profiling when you write the interpreter helps make it easier to do
[23:55:17] 267.304 4 - Title::checkCascadingSourcesRestrictions
[23:55:21] * AaronS tracks that one down
[23:55:28] definitely a bug
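The frame-pop mismatch bd808 describes (hhvm issue #4354) and the exclude-list workaround can be modeled with a toy profiler stack. All names below are illustrative; this is not HHVM's actual implementation:

```python
# Toy model of the hhvm xhprof frame bug discussed above.
class ToyProfiler:
    def __init__(self, exclude=()):
        self.stack = []
        self.exclude = set(exclude)

    def enter(self, name):
        # Profiled methods push a frame unless excluded from profiling.
        if name not in self.exclude:
            self.stack.append(name)

    def exit(self, name):
        # Models the buggy behavior: exit pops whatever is on top, so a
        # custom frame pushed inside `name` gets popped instead of
        # `name`'s own frame.
        if name not in self.exclude:
            return self.stack.pop()
        return None

def wrapped_call(prof):
    prof.enter('scopedProfileIn')      # the wrapper method's own frame
    prof.stack.append('custom-frame')  # xhprof_frame_begin() analogue
    popped = prof.exit('scopedProfileIn')
    return popped, prof.stack

# With the wrapper profiled, the custom frame is popped by mistake and
# the wrapper's frame is left stranded on the stack:
print(wrapped_call(ToyProfiler()))
# With the wrapper on the exclude list (the suggested workaround), it
# pushes no frame of its own, so exit doesn't mis-pop the custom frame:
print(wrapped_call(ToyProfiler(exclude={'scopedProfileIn'})))
```

The second run leaves only the custom frame on the stack, to be closed later by the xhprof_frame_end() analogue, which is why excluding the wrapper makes the pops balance.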