[00:15:48] TimStarling: hey, you there?
[00:15:58] yes
[00:52:13] TimStarling: whoops sorry, didn't see your response... are you still there now?
[00:52:33] yes mwang
[00:54:53] TimStarling: :] so, i work on the hhvm team at facebook, and i'm working with fredemmott/fe (who you may have worked with in the past) on HHVM-Hack-OSS plans. we want to chat with some of the relevant folks over at wikipedia—are you a good point of contact?
[00:56:57] I did the initial development work to get HHVM deployed, but our ops department handled the more recent maintenance and upgrades
[00:57:10] so it depends on what sort of plans, I guess
[00:59:11] moritzm worked on it most recently, he is in #wikimedia-tech
[01:00:29] TimStarling: we wanna chat about any plans you might have around php7 migration, and other topics re: the future of the language/runtime, so folks on the ops side and engineering stakeholders are both people we want to hear from
[01:01:31] TimStarling: cool, i'll ping moritzm there. anyone else you think might be helpful to ping?
[01:02:20] paravoid
[01:02:59] both moritzm and paravoid are in europe, so it is probably too late for them right now
[01:05:34] what is your position on this? I know our ops folks are talking about switching to PHP 7 because they are pretty frustrated with the lack of attention facebook has been giving the open source product lately
[01:06:46] TimStarling: cool, thanks for the leads. could you share their emails? (by DM if you prefer)
[01:07:18] TimStarling: right, that's basically what we want to talk about. it's pretty obvious our OSS story has been neglected recently. we have concrete plans to pick things back up in Q4, and the reason i want to have this conversation is to get you guys in the loop about direction

[20:13:30] anomie, so, I overlooked one detail about the parsermigration api ... which is that the output is only the contents of the body tag, without modules, css, etc.
[20:14:24] so, after pondering for a little while .. i figured i could probably hack it by opening parsermigration-edit and selectively suppressing content on that page via jquery and css.
[20:14:59] worked fine locally .. but when i deployed it on a vm as part of the mass test ... i encountered this .. http://mw-expt-tests.wmflabs.org/visualdiff/pngs/enwiki/Panahida.tidy.png
[20:15:12] i suppose there isn't any easy way around this?
[20:19:16] subbu: Log into an account?
[20:20:28] i am having phantomjs hit the url ..
[20:20:47] so, i suppose i have to log in, get the relevant cookies, and pass them along with my post?
[20:20:52] Possibly the better solution would be to have the API module return the additional data from the ParserOutput so you can have modules and CSS.
[20:20:58] ya.
[20:21:26] i decided to hack it because it is another week before that comes along, because of release timelines.
[20:21:41] and i was hoping to have some numbers before wikimania.
[20:23:05] but, yes, fixing the api module is the ideal solution.
[20:23:28] i'll look into the login-and-pass-cookies option in the interim.
[21:51:13] anomie, ok, for now i just created a new test account and am having phantom post the central auth cookies ... seems to be doing the trick for now .. although for protected pages that need a higher user level, this will still fail .. but, good to get this process started.
[21:51:25] will follow up on getting the api fixed next week.
[21:53:19] 75K pages in all .. will know tomorrow what this shows.
[21:57:03] but, http://mw-expt-tests.wmflabs.org/ is where i kicked it off ... 75K pages sampled from about 60 wikis.
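
For context, the interim hack described at 20:14:24 amounts to loading the parsermigration-edit view (which ships with full modules and CSS, unlike the API output) and hiding everything except the rendering under test before screenshotting. A minimal sketch of that idea as jQuery run in the page; the skin selectors are standard Vector ids, but the parsermigration class name is a hypothetical placeholder, not confirmed markup:

```javascript
// Hide everything on the parsermigration-edit page except the parser
// output under test, so the screenshot captures only article content.
$('#mw-head, #mw-panel, #siteNotice, #footer').hide();            // Vector skin chrome
$('#firstHeading, #contentSub, .printfooter, #catlinks').hide();  // page furniture
// parsermigration-edit shows old and new renderings; keep only one.
// '.mw-parsermigration-preview' is an assumed class name for illustration.
$('.mw-parsermigration-preview').slice(1).hide();
```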
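And the 21:51:13 workaround, sketched as a PhantomJS script: seed the browser with the test account's CentralAuth session cookies before opening each page, then render a PNG for the visual diff. The centralauth_* names are what the CentralAuth extension normally sets, but the cookie values, domain, and target URL below are placeholders to copy from a real logged-in session; as noted in the log, this still fails on pages protected at a higher user level than the test account has.

```javascript
// Minimal PhantomJS sketch: attach a test account's CentralAuth session
// cookies, then fetch a page and render it to a PNG for visual diffing.
// Cookie values, domain, and URL are illustrative placeholders.
var page = require('webpage').create();

['centralauth_User', 'centralauth_Token', 'centralauth_Session'].forEach(function (name) {
    phantom.addCookie({
        name: name,
        value: 'COPIED_FROM_TEST_ACCOUNT_SESSION', // placeholder value
        domain: '.wikipedia.org',
        path: '/',
        httponly: true,
        secure: true
    });
});

page.open('https://en.wikipedia.org/wiki/Panahida', function (status) {
    if (status !== 'success') {
        console.error('failed to load page: ' + status);
        phantom.exit(1);
        return;
    }
    page.render('Panahida.tidy.png'); // screenshot consumed by the visual diff
    phantom.exit(0);
});
```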