[19:58:03] https://www.mediawiki.org/wiki/Thread:Talk:Release_Management_RFP/Cost_v._benefit
[19:58:41] hello there!
[19:59:06] * greg-g waves
[19:59:15] hey
[19:59:25] Hello
[19:59:27] we'll get started in a minute
[19:59:32] alright
[19:59:50] I'm idling in here (times two) and probably can't participate much in the next hour, but I'd really appreciate if the cost could be discussed in detail.
[20:00:20] ah, a new new nick, I thought you just changed yesterday, too?
[20:00:24] :)
[20:00:29] Old nick. :-)
[20:00:40] Marybelle: I responded on your lqt thread
[20:00:40] It's my alt.
[20:00:45] Marybelle: I'll put it in my notes to ask on your behalf
[20:00:46] hexmode: I responded back.
[20:00:59] k, I need to look
[20:01:01] ok, let's get going
[20:01:11] #startmeeting MediaWiki Release Management RFP IRC Office Hour (Channel is logged and publicly posted (DO NOT REMOVE THIS NOTE | | https://meta.wikimedia.org/wiki/IRC_office_hours))
[20:01:11] Meeting started Wed Jun 18 20:01:11 2014 UTC and is due to finish in 60 minutes. The chair is greg-g. Information about MeetBot at http://wiki.debian.org/MeetBot.
[20:01:12] Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
[20:01:12] The meeting name has been set to 'mediawiki_release_management_rfp_irc_office_hour__channel_is_logged_and_publicly_posted__do_not_remove_this_note_____https___meta_wikimedia_org_wiki_irc_office_hours__'
[20:01:17] #topic intro
[20:01:24] Hello and welcome to the IRC discussion with the two MediaWiki release management applicants.
[20:01:31] There are two applicants this time around: "Mark y Markus" and "The Consortium".
[20:01:35] #link https://www.mediawiki.org/wiki/Release_Management_RFP/2014/Mark_y_Markus_LLC
[20:01:38] and
[20:01:43] #link https://www.mediawiki.org/wiki/Release_Management_RFP/2014/Consortium
[20:01:50] * greg-g waits for the bot to catch up?
[20:02:03] maybe it's silent on links
[20:02:04] moving on
[20:02:14] I hope everyone has had a chance to read through the proposals before this meeting. :)
[20:02:37] btw, who here is here to ask questions (ie: not part of either application)?
[20:02:45] * mwalker raises hand
[20:02:51] * greg-g knows many will lurk and energize when a topic that interests them comes up
[20:03:11] * legoktm has questions too
[20:03:14] cool
[20:03:32] Then let's get going with the M&M proposal, want to have as much time for questions as possible
[20:03:36] #topic Mark y Markus
[20:03:40] First, hexmode and mglaser, please introduce yourselves.
[20:04:06] I'll go
[20:04:27] I've been active in the technical community since 2010
[20:04:46] at WMF and afterwards working with individuals and companies
[20:05:04] in the past year I've worked with mglaser on release management
[20:05:16] (for the record, you can both introduce yourselves at the same time, to save time, this goes doubly for "The Consortium" since there's more of them ;) )
[20:05:23] k
[20:05:42] I've been working with wikis in enterprises since 2007
[20:05:49] MediaWikis
[20:06:18] Since 2010 I've been involved with Wikimedia
[20:06:58] (let me know when you're done)
[20:07:02] There are some extensions I maintain(ed): BlueSpice, Windows Azure Storage, a Windows package for MediaWiki
[20:07:05] and others
[20:07:07] I don't know what else to say here except to point you to the rfp.
[20:07:11] :)
[20:07:23] (last year's has more info about me)
[20:07:24] with hexmode, I did the releases of MediaWiki in the last year
[20:07:25] that's perfect, just wanted to give you a moment to give context of who you are :)
[20:07:51] More details about both of us can be found in the proposal
[20:07:53] so: done
[20:07:57] :)
[20:07:57] awesome, ok, well, let's get on with the questions
[20:08:00] Questions for them?
[20:08:16] (I have some if no one speaks up ;) )
[20:08:25] Maybe start us off?
[20:08:33] you address tarball releases in your rfp; but I'm curious about any work you're doing with packaging
[20:08:54] mwalker: I've been in touch with Debian and Fedora devs
[20:08:56] specifically; the other proposal says they're going to work on install instructions for parsoid and other popular but complicated things
[20:09:13] coordinating with them about security releases
[20:09:27] mwalker, there's a lot of discussion going on about which distribution formats to use or not
[20:09:48] e.g. vagrant, debian packages or installer scripts
[20:10:00] mwalker: yeah, and parsoid does need to be packaged. I started on an RPM for a client (not this work) but haven't returned to it
[20:10:27] We think it is important to keep mediawiki easy enough to install that even people that are not sysops can test and use it
[20:10:28] mwalker: I will soon, though, because $CLIENT wants to deploy VE in the next couple of months
[20:10:36] so, "it's complicated"? ;)
[20:11:00] greg-g: packaging is targeting larger, non-shell users
[20:11:02] to that end, I had a lot of discussions, e.g. with Gabriel Wicke, about a proper packaging format or formats
[20:11:10] but; do you see that effort being mostly on your side of things? or is that a WMF responsibility?
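Since several distribution formats are being weighed in the exchange above (Vagrant, Debian packages, installer scripts), here is a minimal, purely illustrative sketch of the kind of pre-flight dependency check an installer script could run before setting up a full-featured MediaWiki with Parsoid/VisualEditor. The component list and the required/optional split are assumptions made for the sketch, not anything either proposal specifies; the discussion continues below.

#!/usr/bin/env python3
"""Illustrative pre-flight check an installer script might run before
setting up a full-featured MediaWiki (core plus Parsoid/VisualEditor).
The requirements below are assumptions for the sketch, not official minimums."""
import re
import shutil
import subprocess

# (binary, human-readable label) pairs to look for on PATH -- illustrative only
REQUIRED = [("php", "PHP CLI"), ("mysql", "MySQL/MariaDB client")]
OPTIONAL = [("node", "Node.js (needed for Parsoid/VisualEditor)"),
            ("memcached", "Memcached (object cache / rate limiting)")]

def binary_version(binary: str) -> str:
    """Return the first version-like token from `<binary> --version`, or ''."""
    try:
        out = subprocess.run([binary, "--version"], capture_output=True,
                             text=True, check=False).stdout
    except OSError:
        return ""
    match = re.search(r"\d+\.\d+(\.\d+)?", out)
    return match.group(0) if match else ""

def check(pairs, required: bool) -> bool:
    """Print what was found on PATH; return False if a required piece is missing."""
    ok = True
    for binary, label in pairs:
        path = shutil.which(binary)
        if path:
            print(f"  found {label}: {path} ({binary_version(binary) or 'unknown version'})")
        else:
            level = "MISSING" if required else "not found (feature will be skipped)"
            print(f"  {level}: {label}")
            ok = ok and not required
    return ok

if __name__ == "__main__":
    print("Checking required components:")
    required_ok = check(REQUIRED, required=True)
    print("Checking optional components:")
    check(OPTIONAL, required=False)
    print("Ready to install." if required_ok else "Install the missing components first.")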
[20:11:15] people who can maintain their own infrastructure
[20:11:28] mwalker: WMF already has a lot of packaging they do
[20:11:32] greg-g, yeah, it will become complicated as more components come in
[20:11:35] such as parsoid
[20:11:35] * greg-g nods
[20:11:40] mwalker: we'll reuse where appropriate
[20:12:06] mwalker: but a lot more needs to be done to make packaging work for non-WMF orgs
[20:12:13] mwalker, I think it's a joint effort
[20:12:30] +1
[20:12:44] Ori started mw-vagrant, for example
[20:12:58] * YuviPanda isn't sure vagrant can be considered similar to a 'package' of any sort, though
[20:13:00] I'm trying to build on that for Redhat installations
[20:13:17] there's also a difference between taking the lead (like ori did) or us following along and helping
[20:13:22] to throw in a potential curve ball, we know from experience that what the WMF uses to deploy to hundreds of servers is not what someone else will use on a shared host or even on a 5 vps 'cluster', who will take the torch and be the point person for making sure that use case is addressed effectively?
[20:13:29] YuviPanda: right, but it can be thought of as a start to deploying
[20:13:38] YuviPanda it's more about how to get MediaWiki to the people who want to use it
[20:14:06] If vagrant is a viable way of making it easy to start off with a MediaWiki, that's one way.
[20:14:10] So in last year's rfp - https://www.mediawiki.org/wiki/Release_Management_RFP/2013/NicheWork_and_Hallo_Welt!#Problems_with_MediaWiki_release_managment you listed a whole bunch of problems and "initial improvements" you wanted to make. How'd that go?
[20:14:20] hmm, right. but vagrant, as it is now, is very much a dev environment. lots of things that'd make sense in a production environment are turned off, and lots of things that are terrible ideas in a prod environment are turned on...
[20:14:21] I feel we will not have "the one and holy technology", though
[20:14:26] greg-g: that needs to be considered early on in the design process; it's not something you can bolt on easily
[20:14:35] greg-g: the release team on this contract should take the lead of adapting the work to non-wmf users
[20:14:55] gwicke: so are you volunteering? ;)
[20:15:00] YuviPanda, you're right there. That's why we have not made a final decision on this
[20:15:08] we are giving that a lot of thought already
[20:15:28] mglaser: do the maintainers of mw-vagrant know that you guys are potentially considering it as a starting point for deployment/packaging?
[20:15:45] I'm also not sure how that'll work, though. 'MW as a virtual appliance'?
[20:15:45] YuviPanda: absolutely, but using the vagrant work to iterate would be better than going from scratch
[20:15:46] to gwicke but generally: right, but something like this needs a champion or it'll just be forgotten
[20:16:06] greg-g: we have it in our goals for the service team
[20:16:20] YuviPanda: packaging, no. deployment, yes
[20:16:21] hexmode: mglaser ^^
[20:16:23] this includes a) packaging and b) design to scale down
[20:16:35] * greg-g nods
[20:16:38] \o/
[20:16:47] ok, so it seems like a team effort with gwicke doing much of the torch holding
[20:16:48] I like that
[20:16:59] moving on :)
[20:17:03] hexmode: ah, hmm. makes sense. do make sure that people like ori and bd808 (and perhaps me?) are in the loop though, whatever you guys decide to build on top of vagrant
[20:17:06] In the budget section (thanks for that! it's good to have that break down), you have a line for "advocate third-party interests" at 4 hours/week. That one seems very nebulous (there is no definitive definition of what that entails, exactly, in the proposal); can you explain a bit about what you see fitting into that line item vs the first three line items in the "cost of organising (sic ;) ) a user group" section?
[20:17:19] [01:14:11 PM] So in last year's rfp - https://www.mediawiki.org/wiki/Release_Management_RFP/2013/NicheWork_and_Hallo_Welt!#Problems_with_MediaWiki_release_managment you listed a whole bunch of problems and "initial improvements" you wanted to make. How'd that go? <-- I'd also like to hear an answer to this.
[20:17:22] something like packaging needs to be a wmf-wide effort
[20:17:36] we'll focus on service packaging, partly driven by our own needs as well
[20:17:37] bawolff, legoktm : we addressed this in our proposal
[20:17:42] ok, bawolff's question first if you haven't already started answering
[20:17:52] Maintain two major releases per year: done
[20:18:02] I haven't heard much from platform on core packaging however
[20:18:13] completed vs ongoing work sections from this year's RFP
[20:18:21] or is this something the release team would like to tackle?
[20:18:34] Continuous integration: We have integrated the make release script with Jenkins, it's triggered on a git tag
[20:18:44] greg-g: yeah, that section is meant to address bawolff's q.
[20:18:45] tarballs are automatically built and tested
[20:18:50] gwicke: let's not get bogged down in this topic, but yes, A) platform will work on that with you and B) release team will also be involved
[20:19:05] bawolff: is there something in the 12 month recap you want more info on?
[20:19:22] Work with extension developers: Reached out to them at various conferences, SMWCon, Hackathons, Wikimania, etc
[20:19:25] greg-g: that's good to hear
[20:19:59] More just a status report. There's a big list of things, what areas didn't work out, and why? What have you learned trying to do those things? What are you planning to do differently should you get the contract again?
[20:20:01] etc
[20:20:01] Lasting relationships with Open Source organisations: Mark is in touch with Mozilla and Debian packagers, and others afaik
[20:20:34] I have 3 questions related to bawolff's:
[20:20:34] • What were the hardest things you had to do this past year? How will you make them easier?
[20:20:38] • What were the easiest things you did this past year?
[20:20:40] • What was the biggest surprise from this past year?
[20:20:44] mglaser: well you listed 4 bullet points on https://www.mediawiki.org/wiki/Release_Management_RFP/2013/NicheWork_and_Hallo_Welt!#Problems_with_MediaWiki_release_managment (what bawolff linked), and I'm trying to figure out which of those you guys made progress on.
[20:20:51] bawolff: I'll try to do quick bullet points for you after greg-g's
[20:21:16] greg-g: easiest -- writing emails about releases
[20:21:35] :)
[20:22:04] greg-g: easiest: go to conferences and talk to third party users/developers
[20:22:14] (because I like this)
[20:22:23] :)
[20:22:28] greg-g: biggest surprise for me -- how much work needed to be done on getting things we needed done
[20:22:43] ex: waiting on the tarball server
[20:22:46] hardest: my first release, when I found that I have about half of the permissions needed.
[20:22:54] (now resolved)
[20:23:47] * greg-g nods
[20:23:54] * legoktm repeats self...
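For context on the continuous-integration answer above ("We have integrated the make release script with Jenkins, it's triggered on a git tag"; "tarballs are automatically built and tested"), here is a rough, hedged sketch of the core steps such a tag-triggered job might perform, including the GPG signing step that hexmode mentions shortly afterwards as not yet automated. The tag format, file names, and signing key are illustrative assumptions; the actual make-release tooling is not shown here.

#!/usr/bin/env python3
"""Rough sketch of the steps a tag-triggered release job might perform:
build a tarball from a git tag, record a checksum, and detach-sign it
with GPG (the step noted in the discussion as not yet automated).
Paths, tag naming, and the signing key ID are illustrative assumptions."""
import hashlib
import subprocess
import sys

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def build_tarball(tag: str, repo: str = ".") -> str:
    """Build mediawiki-<tag>.tar.gz from a git tag using `git archive`."""
    name = f"mediawiki-{tag.lstrip('v')}"
    tarball = f"{name}.tar.gz"
    run(["git", "-C", repo, "archive", "--format=tar.gz",
         f"--prefix={name}/", "-o", tarball, tag])
    return tarball

def sha256(path: str) -> str:
    """Checksum the tarball so the value can be published alongside it."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def gpg_sign(path: str, key_id: str) -> None:
    """Produce an ASCII-armored detached signature next to the tarball."""
    run(["gpg", "--batch", "--yes", "--armor", "--detach-sign",
         "--local-user", key_id, "--output", f"{path}.sig", path])

if __name__ == "__main__":
    tag = sys.argv[1] if len(sys.argv) > 1 else "1.23.1"   # hypothetical tag
    tarball = build_tarball(tag)
    print(f"{sha256(tarball)}  {tarball}")
    gpg_sign(tarball, key_id="release@example.org")        # hypothetical key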
[20:23:56] [01:20:45 PM] mglaser: well you listed 4 bullet points on https://www.mediawiki.org/wiki/Release_Management_RFP/2013/NicheWork_and_Hallo_Welt!#Problems_with_MediaWiki_release_managment (what bawolff linked), and I'm trying to figure out which of those you guys made progress on.
[20:24:03] items on "problems list" from last year to follow
[20:24:36] "Skinning sucks" -- it still sucks. Not really a release mgmt issue, but still something we want to fix
[20:25:20] greg-g: sorry to drag you back to packaging: so who will take responsibility for packaging core?
[20:25:27] So you mentioned last year: "Our first attempt at fundraising will be done with Kickstarter to fund the development of a better skin system for MediaWiki. We'll set up a skin exchange and publicize it as well."
[20:25:30] gwicke: off-topic for now, please
[20:25:31] did that happen?
[20:25:37] "Web-based config" -- still needed. Much more of a release issue. No significant progress b/c of the issues with actually putting out a release reliably.
[20:26:09] legoktm, these are issues that should be addressed by a functioning 3rd party community. In the first year, we were mainly focussed on the actual release process, to make this as tight as possible.
[20:26:10] bawolff: no
[20:26:36] "spam" -- still an issue. Not much work, but SimpleAntiSpam is included now so only started.
[20:26:37] bawolff: 'it's complicated' given trademarks and such
[20:26:49] so, 6 minute warning
[20:26:50] greg-g: so is this a retrospective only?
[20:26:53] as hexmode said, these are still issues and will continue to be unless we have a functioning third party community. So that's what we want to focus on this year.
[20:26:59] gwicke: no, but we have only 6 minutes left and I have other questions
[20:27:09] greg-g: What does trademarks have to do with it? There's no need for it to be officially Wikimedia branded
[20:27:09] we can't bogart this topic everywhere
[20:27:12] hexmode: "Not really a release mgmt issue, but still something we want to fix" <-- A bit confused. Last year you put it under a heading "Problems with MediaWiki release managment", but now it's not?
[20:27:45] at some point we should try to come up with a coherent distribution strategy
[20:27:45] "Help system" -- some work has been done on mw.o that could be used, but none to put in tarball yet.
[20:27:52] ok, it seems to be getting a bit nit-picky on what things were done/how much/and what category they're in, but, let's go bigger picture for the last 4 minutes
[20:27:59] gwicke: yes, this is not the venue
[20:28:08] ok, reset discussion....
[20:28:10] legoktm: yes, my view changed
[20:28:17] bawolff: it's hard to raise money for MediaWiki development when you can't use the word MediaWiki
[20:28:22] mwalker: go ahead
[20:28:29] oh right, that's trademarked too
[20:28:31] ok, thanks for clarifying
[20:28:41] yeah, I could see that being an issue
[20:28:45] greg-g: I think it should be very much part of the release process
[20:29:10] gwicke: we both agree, but there are other issues to talk about now, sorry, end of discussion on it (for now, not forever), we are time boxed here
[20:29:35] 1m?
[20:29:36] Marybelle, can we continue the budget question on MW.o?
[20:29:41] I thought mwalker had one
[20:29:47] you requested 75k last year, with three major focus areas. As an end user, I only saw improvement in the release process -- but can you describe the work you've done for supporting mediawiki users?
[20:30:16] e.g. did the funds / time actually get split evenly between all your tasks that you were going to focus on?
[20:30:31] mwalker: we underestimated how much time release management would take
[20:30:44] mwalker: the release took more time than we initially thought
[20:31:07] can you give an estimated hours/week it took?
[20:31:20] (The Consortium, we'll start at the end of this question)
[20:31:30] ?
[20:31:32] sorry
[20:31:35] 24 total is on the low end
[20:31:43] 24hour each week?
[20:31:47] s
[20:32:05] (to clarify)
[20:32:10] Producing a tarball takes about 4-8 hours.
[20:32:17] yes, I think that is a conservative amount considering all the communication, etc
[20:32:19] backporting not included
[20:32:29] there's stuff around this.
[20:32:37] such as automation
[20:32:41] additional testing
[20:32:45] but
[20:32:55] now that we have added automation
[20:33:02] we expect it to take less
[20:33:08] +1
[20:33:20] still, there are things that we don't have automated
[20:33:27] e.g. signing tarballs
[20:33:40] so, on a release week, you spent 30+ hours on just that? (given the probable fluctuation, other weeks being shorter)
[20:33:57] hexmode: and branching extensions on new major releases https://bugzilla.wikimedia.org/show_bug.cgi?id=64157
[20:34:14] greg-g, no
[20:34:40] it depends on the amount of backports
[20:34:51] I'd say 16 hrs
[20:35:02] average
[20:35:04] mobile-reportcard.wmflabs.org
[20:35:05] er
[20:35:09] 16:31 < hexmode> 24 total is on the low end
[20:35:14] (bad paste)
[20:35:18] also making sure we pick up bug reports from, say, support desk, etc
[20:35:37] cool, ok, that helps
[20:35:47] so, let's continue this thread on that talk page that Marybelle started
[20:35:51] thanks hexmode and mglaser !
[20:35:55] :)
[20:35:57] yw
[20:36:02] I'm going to push us on to the next group, sorry :)
[20:36:07] thanks greg-g and everyone in this discussion
[20:36:13] #topic The Consortium
[20:36:20] Those members of The Consortium present, please do introduce yourselves (concurrently).
[20:36:28] Hi
[20:36:28] Hi!
[20:36:32] hi
[20:36:34] rar!
[20:36:40] (one-liner intros probably)
[20:36:45] (Isarra isn't with us today)
[20:36:51] nicks to names are useful ;)
[20:36:54] Agenda?
[20:37:02] I'm Emufarmers (Benjamin Lees). I've been working with MediaWiki and providing support to people using it for...a while now.
[20:37:45] i've been a wikipedian since 2007 (but i've always been focusing on technical matters rather than writing the articles), and a mediawiki developer for about two years now, with +2 rights. (Bartosz Dziewoński here.)
[20:37:47] I'm Skizzerz (Ryan Schmidt), I started working with mediawiki in late 2007, developing custom extensions and providing a ton of end user support. A short list of things I did is available on the RFP as well as my mw.org user page
[20:38:10] I'm Jack Phoenix, I do stuff and have been doing so for a fair while; more info can be found on my MW.org page for those interested in my areas of expertise, extensions/skins I've developed and so on: https://www.mediawiki.org/wiki/User:Jack_Phoenix
[20:38:21] cool, thanks
[20:38:27] alright, questions?
[20:38:34] I have one to start...
[20:38:39] In your RFP, I felt like the word "we" was being used interchangeably to mean "The Consortium members" and "third-party users/sys-admins/devs". Example:
[20:38:44] (in the "For system administrators" section) "In order to serve our user communities, we need:" vs "Schedules should be consistent so we can plan our own upgrades accordingly."
[20:38:44] what is "The Consortium"?
[20:38:59] is that intentional or otherwise?
[20:39:04] Vulpix: what we are calling the 5 of us submitting the RFP
[20:39:20] https://www.mediawiki.org/wiki/Release_Management_RFP/2014/Consortium
[20:39:26] thanks gwicke :)
[20:39:40] thanks Skizzerz
[20:39:40] #link https://www.mediawiki.org/wiki/Release_Management_RFP/2014/Consortium
[20:39:47] probably accidental. The first example refers to us as the team, the latter seems to refer to end users (which does include us since all of us are involved with 3rd party wikis to some extent)
[20:39:50] if I can jump on greg's question -- we sort of have scheduled releases already; the last thurs of every month -- how will you improve
[20:40:24] or are you just identifying that it's something you'll need to keep doing?
[20:41:13] Skizzerz: to continue that thread, I like to be pedantic at times (we're all wikimedians in our own little ways), and I wonder if that will be confusing when your advocating on behalf of third-parties vs what 'the consortium as contractor' needs
[20:41:14] mwalker: yes, the current schedule is nice, assuming it is kept. we're probably going to continue doing this.
[20:41:36] s/your/you're/ #pedanticfail
[20:42:38] I have another question in case this one is answered
[20:42:44] On the flip side, if a single month did not have any significant number of bugfixes that were deemed good to backport, we may hold off for that month, as upgrading costs 3rd party sysadmins time and energy, and if we are delivering an update with only middling returns, they may decide to skip it entirely (in which case, what is the point?)
[20:43:31] greg-g: yeah, that was an oversight, since our team is full of third-party users too :) we should probably correct this
[20:43:40] greg-g: If you'd like I can go through and clarify what all of the "we"'s are sometime after this meeting to make it more clear :)
[20:43:40] * greg-g nods
[20:44:10] either way on that, Skizzerz, I think my bigger point is what I'm more thinking about (how you'll advocate for others vs yourselves)
[20:45:14] What's your plan for making the installation & maintenance of full-featured mediawiki (with caching, tidy, VE, parsoid etc) easier?
[20:45:20] :)
[20:45:45] greg-g: We (The Consortium) are all 3rd party users, and a lot of our goals as contractors align with pain points that we have experienced administering 3rd party wikis/wiki farms in the past. Does that answer your question?
[20:46:05] you don't have to address this in response to gwicke's question; but it is related: "if that question is answered -- how do you plan on improving the release notes / regression notices? Something that's traditionally been a very hard problem for the foundation"
[20:46:29] Skizzerz: I think so :)
[20:46:36] It wasn't that hard a problem in the svn days...
[20:46:37] Skizzerz: it's a nuanced thing, I think
[20:47:25] mwalker: well most of all, somebody should actually review the changes coming in, at least briefly. right now adding the release notes is an annoying process for developers as well (constant merge conflicts), so it's very easy to forget about them
[20:47:29] I see 'outreach' in your plan -- who is going to do the work?
[20:48:13] MatmaRex: would that be something the consortium could do?
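MatmaRex's point just above (somebody should review the incoming changes, and adding RELEASE-NOTES entries is painful because of constant merge conflicts) is the kind of task small scripts can support; he mentions his own tools just below, and those are not shown here. As a hedged illustration only, a helper along these lines could list the commits landed between two tags so a reviewer can consolidate them into release notes. The tag names and the grouping heuristic are placeholders.

#!/usr/bin/env python3
"""Illustrative helper for the release-notes review being discussed:
list commit subjects between two git tags, grouped by a rough guess at
the affected area, so a reviewer can fold them into RELEASE-NOTES.
This is a sketch, not MatmaRex's actual tooling; tag names are placeholders."""
import collections
import subprocess
import sys

def commits_between(old: str, new: str):
    """Yield (sha, subject) for commits reachable from `new` but not from `old`."""
    out = subprocess.run(
        ["git", "log", "--no-merges", "--pretty=format:%h\t%s", f"{old}..{new}"],
        capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        sha, _, subject = line.partition("\t")
        yield sha, subject

def area(subject: str) -> str:
    """Very rough grouping: use a '<component>:' prefix if the subject has one."""
    head, sep, _ = subject.partition(":")
    return head.strip() if sep and len(head) <= 30 else "(uncategorised)"

if __name__ == "__main__":
    old, new = (sys.argv[1:3] if len(sys.argv) > 2 else ("1.23.0", "1.23.1"))
    grouped = collections.defaultdict(list)
    for sha, subject in commits_between(old, new):
        grouped[area(subject)].append(f"* {subject} ({sha})")
    for component in sorted(grouped):
        print(f"== {component} ==")
        print("\n".join(grouped[component]), end="\n\n")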
[20:48:15] mwalker: i've previously written some simple tools to make adding release notes less painful, and i'm hoping to continue working on that
[20:48:29] greg-g: yes
[20:48:44] MatmaRex: eg, reading over the merged changes per wmfXX branch, consolidating that into the point releases or major releases
[20:48:52] * greg-g nods
[20:49:10] such problematic changes are often not very hard to spot if you're looking for them, but are very easy to forget about
[20:49:16] gwicke (in response to the "full-featured" question): We can package standalone extensions in the tarball similarly to how we are doing now, but that is only half of the story. Something I'd love to see is an install script that is capable of installing dependencies for packaged extensions, so you can package VE, check the "I want to install this" box, and the script tries to get parsoid set up, etc. (prompting the user with instructions if it is unable to do that)
[20:49:35] Another avenue is package repositories, such as debian and red-hat
[20:49:58] so you can apt-get install mediawiki-visualeditor and it'd set up node.js, Parsoid, and VE with sane default configs
[20:50:34] gwicke: is it feasible for a single host (vps or real machine) to host all the pieces (namely just parsoid, ve, and mwcore)?
[20:50:57] * greg-g goes down tangent, should probably stick to topic
[20:51:03] greg-g: sure
[20:51:08] You have two related (in my eyes) points in the "What we propose to do" section:
[20:51:11] sure, I've partially done it (was writing a guide, then left off on it)
[20:51:11] "Thorough testing before releases: multiple PHP versions, different operating systems and databases, and against common extensions" and "Development of automated testing infrastructure"
[20:51:12] my $3/month VPS certainly can
[20:51:12] greg-g: i can answer, definitely yes :)
[20:51:16] and from my experience, that's a ton of work for non-full time individuals. What experience does your team have with automated testing? Do you have experience with Jenkins (how much?) and selenium?
[20:51:28] gwicke: :)
[20:51:56] greg-g: as we discovered recently VE also needs the Cite extension and TemplateData (and some other Template stuff?) at minimum
[20:52:21] chrismcmahon: :) but those aren't separate services, at least0
[20:52:22] -0
[20:52:39] Skizzerz: given that we already have packages for services like parsoid -- are you going to build the installer packages and/or work on improving core packaging?
[20:52:41] chrismcmahon: Actually, that's just the way it's currently packaged; we could split it up if there's third party demand to not use e.g. the citation stuff.
[20:53:02] greg-g: We're about (2–3 months) to add the citoid service as an optional extra, BTW.
[20:53:04] The consortium: to keep you on track/topic: see the question from me re Jenkins/Selenium :)
[20:53:10] * James_F shuts up.
[20:53:12] James_F: oh right
[20:54:42] (6 minute warning)
[20:54:51] greg-g: re testing, i think we've admittedly only worked with jenkins as "end-users" until now (having it comment on our patches :) ), i have a bit of experience with ruby and selenium
[20:55:21] ok, so put another way: how will you go about doing the 'thorough testing before releases'?
[20:55:40] manual is well, manual and prone to miss things unless you are a pro at it. :)
[20:55:52] gwicke: there's my initial comment about making the tarball package capable of handling stuff like that (likely to be realized via composer, although I haven't given the fine details a whole lot of thought yet). I feel it is very important that the end user have multiple avenues of installing something, as not everyone will have SSH access and apt-get/yum. Thus, having a system in place that can set all of that up from a web UI would be very beneficial to those users, and isn't a point that the WMF has already made a lot of progress in, I believe
[20:56:38] (thanks for your full thought answers, Skizzerz, seriously)
[20:57:11] Skizzerz: you can't set up fully-featured MW on a shared host normally
[20:57:13] now that I have that out of my way, I've fiddled with jenkins a bit (I run my own instance at https://jenkins.skizzerz.net to run builds for a C++ project a couple of friends and I are working on), so I'm at least familiar with the admin side of it
[20:57:18] I don't have any other questions for now, unless more on the testing pre releases can be said by the team, so if others have short/quick questions in the last 4 minutes...
[20:57:31] Skizzerz: cool
[20:57:34] if that host provides node.js already installed (sadly I'm unaware of any that do), I don't see why you couldn't
[20:57:53] greg-g: i'm not sure what kind of answer you're looking for, unit tests and integration tests (like crawling a wiki with a lot of extensions installed and looking for fatals) are the usual approach and that's what we're planning
[20:57:56] my question was specifically about the growing group of users that have a cheap VPS with root
[20:58:05] greg-g: manual testing isn't perfect, but I think it should still be able to catch stuff like https://bugzilla.wikimedia.org/show_bug.cgi?id=60054
[20:58:06] and have the ability to run everything & the kitchen sink
[20:58:10] if there's good packaging
[20:58:15] so far we don't support them
[20:58:22] You mention documentation. Currently we have very little documentation on scaling up mediawiki (e.g. how to set up slave databases, how to set up varnish with htcp purges, etc). Is that something you guys plan to work on, or do you plan to concentrate on small users (which is arguably most of our third party user base)
[20:58:24] (as a random example)
[20:58:27] MatmaRex: right, so going with that, do you/your team feel comfortable doing it?
[20:58:52] legoktm: agreed, it isn't one XOR the other
[20:58:57] yup
[20:59:06] MatmaRex: cool, thanks
[20:59:12] greg-g: yup
[20:59:16] good instructions go a long way (I've started an incomplete guide for VE at https://mediawiki.org/wiki/User:Skizzerz/VisualEditor ), and easy install scripts go even further. For example we can ship an install.sh as part of VE that the user can run to grab all of the dependencies and set it up automagically :)
[20:59:29] er
[20:59:30] mislink
[20:59:47] https://www.mediawiki.org/wiki/User:Skizzerz/VE_on_1.23
[21:00:13] 1 minute left officially, we can officially end the meeting but you all can stay and chat for as long as you can.
[21:00:28] any last questions? :)
[21:00:36] oh, bawolff had one
[21:00:39] Skizzerz: so was that a 'no' ?
[21:00:41] ;)
[21:01:56] * greg-g waits for Skizzerz to answer bawolff
[21:02:23] Consortium: for collaboration with package maintainers, do you guys have contact with the current debian/redhat/etc maintainers?
[21:02:32] bawolff: I believe having good step-by-step guides is very beneficial to sysadmins who may not be entirely familiar with the mediawiki ecosystem. The issue with such guides is they quickly get out of date, so it would require going back to them and updating every so often. Things that are considered "scaling" can also benefit small users as well, for example rate limiting features are not available unless Memcached is used
[21:03:19] * greg-g now waits for an answer to csteipp's question
[21:03:27] (there's no one after us in here, so we can go long)
[21:03:44] Well different scaling is useful to different groups. You have to be pretty big to need master/slave db. Caching is useful to just about everybody
[21:04:06] We are not going to only focus on small users, however. Some of us are in a unique position to actually have access to and experience running larger farms (ShoutWiki for instance), and assisting others in bridging the gap from "small wiki" to something larger is something we'd be able to do
[21:05:36] large farms are less interesting for distribution IMO
[21:05:37] anyone on the consortium have contact with the current debian/redhat/etc mediawiki maintainers?
[21:05:50] (rephrasing/asking csteipp's question)
[21:06:04] gwicke: In my opinion, there are four different "classes" of third-party users: shared hosting, single wikis on dedicated/VPS, "clustered" wikis/wiki alliances, and wiki farms. Our efforts would mostly focus on the first two groups, as they are the most likely to make use of the tarball packages
[21:06:19] also, do you have experience with packaging yourself?
[21:06:28] csteipp: no. we're hoping they'll be happy to accept our contributions without a need to have a "contact" inside ;)
[21:06:36] :)
[21:07:08] ok, I'm going to officially end this, but please do keep chatting if you have time and more questions.
[21:07:11] Thanks all!
[21:07:25] Thanks!
[21:07:29] Thanks to the applicants and those who asked questions, especially. Great discussions.
[21:07:35] #endmeeting
[21:07:36] Meeting ended Wed Jun 18 21:07:35 2014 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
[21:07:36] Minutes: https://tools.wmflabs.org/meetbot/wikimedia-office/2014/wikimedia-office.2014-06-18-20.01.html
[21:07:36] Minutes (text): https://tools.wmflabs.org/meetbot/wikimedia-office/2014/wikimedia-office.2014-06-18-20.01.txt
[21:07:36] Minutes (wiki): https://tools.wmflabs.org/meetbot/wikimedia-office/2014/wikimedia-office.2014-06-18-20.01.wiki
[21:07:36] Log: https://tools.wmflabs.org/meetbot/wikimedia-office/2014/wikimedia-office.2014-06-18-20.01.log.html
[21:08:24] I have to head back to work, but if you have any further questions for me, feel free to ask on the RFP talk page :)
[21:08:27] we're on irc all the time if you have further questions :)
[21:08:31] that too
[21:08:38] I'll be around in a few hours
[21:10:10] I've got another question for any consortium folk, if you have a minute-- There wasn't much about security patches in the proposal. What's your general philosophy? And/or, who in the group would be taking the lead on those?
[21:11:06] What sort of philosophy issues are there? Are you referring to full disclosure vs non full disclosure?
[21:11:16] security issues are serious and will be treated as such. We'll coordinate a release as soon as possible, working with the package maintainers to ensure a simultaneous release (if we don't, we effectively give people using packaged versions a 0day hole)
[21:11:35] Sure, but lots of security issues aren't critical/severe.
[21:11:41] It's always a matter of assessment.
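As background on the testing approach MatmaRex describes a little earlier (integration tests "like crawling a wiki with a lot of extensions installed and looking for fatals"), here is a bare-bones, purely illustrative sketch of such a crawl. The wiki URL, the page list, and the error markers are assumptions made for the sketch; a real job would walk Special:AllPages, the API, and the special pages across several PHP and database combinations.

#!/usr/bin/env python3
"""Bare-bones sketch of the 'crawl a test wiki and look for fatals' idea
mentioned in the discussion.  The wiki URL and page list are illustrative
assumptions; a real job would cover far more pages and configurations."""
import sys
import urllib.error
import urllib.parse
import urllib.request

WIKI = "http://localhost/wiki/index.php"          # hypothetical test wiki
PAGES = ["Main_Page", "Special:Version", "Special:RecentChanges",
         "Special:SpecialPages"]                   # tiny illustrative sample
FATAL_MARKERS = ("Fatal error", "Internal error", "MWException")

def fetch(title: str):
    """Fetch one page and return (HTTP status, body text)."""
    url = f"{WIKI}?{urllib.parse.urlencode({'title': title})}"
    try:
        with urllib.request.urlopen(url, timeout=30) as resp:
            return resp.status, resp.read().decode("utf-8", "replace")
    except urllib.error.HTTPError as err:
        return err.code, err.read().decode("utf-8", "replace")

def main() -> int:
    failures = 0
    for title in PAGES:
        status, body = fetch(title)
        # Flag server errors and pages whose output contains a fatal/exception marker.
        bad = status >= 500 or any(marker in body for marker in FATAL_MARKERS)
        print(f"{'FAIL' if bad else 'ok  '} {status} {title}")
        failures += bad
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())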
[21:11:41] csteipp: we're going to come up with something to avoid making the patches public several hours before the release, as it happens now
[21:11:49] disclosure-wise, the release announcement will only give vagaries as to what the issue was, people could look at the bug (once it goes public), and any POCs if they want specifics
[21:12:15] MatmaRex: you have ideas on that?
[21:12:21] anyway, off for real
[21:12:23] :)
[21:12:26] thanks Skizzerz
[21:12:28] thanks everyone
[21:12:57] * Marybelle waves.
[21:12:59] greg-g: well, not commit them until the release happens? i've never understood why we changed to doing it the way we do now
[21:13:14] i mean, i know it's so that tarballs can be built automatically straight from the git repo
[21:13:24] i just don't see why the repo has to be public to do this…
[21:14:00] MatmaRex, this is not a new idea
[21:14:03] MatmaRex: it'd have to be a separate gerrit, as I understand it
[21:14:13] or separate git server, at least
[21:14:20] csteipp: i'm going to be the bugzilla/gerrit guy in general, i'm probably also the person to poke about new security patches
[21:14:21] gerrit optional ;)
[21:14:28] Well git is distributed, so you don't need a server
[21:14:29] the reason we do this in the public gerrit is that we don't have the resources to maintain a separate gerrit
[21:14:34] i also don't understand the part of the question about 'philosophy issues'
[21:14:37] bawolff: git repo* :P
[21:14:44] yeah, what bawolff said
[21:15:14] every git repo is a potential server yes, sorry for overloading/being redundant
[21:15:27] MatmaRex: The tradeoff is that to do it privately, then we're doing the build on basically non-production hardware (like my laptop)... and it's difficult to determine if that's more or less secure than pushing them into gerrit and having jenkins build them.
[21:15:37] maybe doing code review with phabricator will magically resolve this and allow us to make private commits :)
[21:15:49] * bawolff is unsure that would be a good thing
[21:15:53] do I trust csteipp's laptop to build tarballs? maybe. maybe not.
[21:16:18] "private commits" sounds kind of scary
[21:16:28] traditional building of binaries/tarballs at eg Mozilla is on a machine that isn't connected to the web in any way, and you need to go into a special room to do the builds (they don't do this anymore, but used to)
[21:16:34] Philosophy: Yeah, you guys answered well. I was curious about dedication to full-disclosure vs. waiting for patch before making it public
[21:16:37] building the tarball on csteipp's laptop evades the test process afterwards
[21:16:43] csteipp: do you mean this as a "trust" issue (we "trust" jenkins not to "hack" our wikis)? or is it actually about hardware?
[21:16:58] mglaser: that too, good point
[21:17:00] to me it sounds as if the entire private repo / infrastructure thing is something the WMF will have to figure out in any case
[21:17:07] MatmaRex: Trust as in trust the build machine isn't inserting backdoors.
[21:17:11] both for private & public use
[21:17:12] yeah
[21:17:38] Well it's an interpreted language. It's a bit harder to do that covertly than with a binary language
[21:17:52] although not impossible I suppose
[21:18:01] bawolff: you've seen our code, right? :P
[21:18:04] bawolff: Definitely
[21:18:22] greg-g: well there is that one line in Special:Version
[21:18:48] what's your plan for making it easier for third party users to receive timely security upgrades? Do you have ideas on how to make this possible without manual intervention?
[21:19:12] csteipp: it's kind of a fundamental openness issue, eh. it won't be a problem for us to do the builds somewhere you trust (public or not), anyway, if that's what you want
[21:19:25] gwicke: everyone use debs and turn on "automatically download and install security updates"? ;)
[21:19:32] you still need to trust someone (like, well, the release managers), or verify all patches yourself
[21:19:43] greg-g: dunno, didn't see anything like that in the plans
[21:19:52] gwicke: twas a joke, sorry
[21:20:08] if you care about integrity, i'd recommend upgrading from git or via patches
[21:20:53] gwicke: are you suggesting "phoning home" for updates?
[21:21:06] contacting a server, yes
[21:21:27] There's been failed gsoc projects in the past to do that
[21:21:36] i think there's been some vocal opposition when this was last suggested? i might be misremembering
[21:21:53] * bawolff would be fine with it provided it was optional, and possibly not default
[21:22:23] bawolff: +1 on making it optional
[21:22:46] The flip side of that is, if mediawiki can upgrade itself, that means it has to have write access to itself, which is probably not the greatest for security
[21:22:50] I was more interested to hear about ideas on how to get the option at all though
[21:22:59] oh yes, it would definitely have to be optional, of course
[21:23:09] i imagine WMF would like to turn such a thing off :P
[21:23:29] bawolff: you certainly would not want to do this as www-data
[21:24:06] we haven't really considered this afaik, since this was always contentious. we prefer obvious improvements :)
[21:24:16] MatmaRex: we actually have this turned on by default for most of our packages
[21:24:19] gwicke: Are you suggesting having a maintenance script downloadAndUpdateToLatestVersion.php?
[21:25:00] bawolff: not necessarily
[21:26:00] I just wanted to bring it up & see if there are ideas / plans on how to do this
[21:26:06] for third-party users
[21:26:16] gwicke: hmm.
[21:27:09] well, we weren't planning to do this, for the reasons mentioned above.
[21:27:50] okay, thanks!
[21:39:51] mglaser, hexmode: If you're still there, I'm kind of confused what sort of things the "user group" would do. Are they meant to fund development work for non-wmf priorities, are they a support/social group (similar to a linux user group), are they meant to fund contests (like the skinning thing you mentioned), or something else?
[21:42:37] bawolff: As a first step, this will be an interest group for all the people using MW as third parties.
[21:42:55] The "social group".
[21:43:20] it should be a link between the WMF developers and the users out there.
[21:43:40] As in a bunch of people going out to a restaurant and discussing MediaWiki?
[21:44:22] So as an example, people in the group might discuss the WMF development roadmap and evaluate the impact on their (outside) installations
[21:44:46] bawolff, not exactly a restaurant, more like regular IRC meetings
[21:44:47] :)
[21:44:58] Ok, so sort of a third-party wiki advisory group
[21:45:30] in that group, though, I expect that we will find issues that need development, e.g. support for another database
[21:45:35] or installer.
[21:46:02] so in the mid-term, the group should also be able to raise money to actually fund development
[21:46:19] or coordinate a network of volonteers.
[21:46:30] volunteers
[21:46:54] ok. But in the near term it will be a group of unpaid people who occasionally meet on irc and make recommendations about the direction of MediaWiki development?
[21:47:18] bawolff, yes, third party wiki advisory group sounds good
[21:47:48] yes, kind of. Unpaid in their function as members of the group.
[21:48:16] paid, maybe, as maintainers of a company's in-house wiki
[21:48:29] yes, that makes sense
[21:49:00] So then as my next question, what's the benefit to incorporating such a group?
[21:49:31] The user group might also be the initial seed for a MediaWiki Foundation.
[21:49:42] Oh no, I said the word... ;)
[21:49:52] Perhaps, but that's a long way off
[21:50:11] but that's long term and the user group is meant to be some kind of proof of concept
[21:50:48] benefit of incorporating: there are a few models of affiliation with Wikimedia Foundation: user groups, thematic organisations, chapters
[21:51:26] starting as a user group would somehow formally indicate we are serious about this. It's more than another mailing list
[21:52:42] So incorporation is part of the plan in order to get formal recognition from the foundation, which is wanted because it makes the user group be a "big" "official" thing
[21:53:18] yes. formal recognition by the Foundation is what we want
[21:53:56] And that's wanted purely for the symbolic benefits?
[21:54:41] That way, we can get credibility, which in turn is good if we want to get organisations on board that give money, say, for development of 3rd party relevant parts of MediaWiki
[21:54:53] e.g. Installer, Configuration, ACL, Skinning, ....
[21:55:19] Ok. So formal recognition as a pathway for easier funding
[21:55:36] that's one main aspect, yes
[21:56:04] maybe also to add weight to our position when talking to the WMF developers :)
[21:56:39] and, vice versa, to become a, no "the" channel of communication from WMF to 3rd parties
[21:58:38] Which brings me to my second question, so in your budget, you have $68,640 set aside towards this user group. Is the plan for that to go to you two as payment for organizing this group, or is some of that to become an asset of the user group as initial seed money?
[22:01:06] It's meant to be compensation for the time we spend on building the user group and ecosystem.
[22:01:46] I guess I'm also kind of curious about the $18,720 going to finance/fundraising for a group that has mid- and long-term goals of distributing funds for MW development, but not short-term goals of doing that or anything else that involves money (as far as I can tell)
[22:02:24] There's another component to this: we want to raise about $30k in addition, and this is not meant for us, but for, say, development
[22:03:25] building up a user group and its structures takes time. That involves the ability to raise more money.
[22:04:32] so in the short term, we decided not to aim for development, but to focus on the "advisory group" aspect
[22:04:45] hm, i might as well join in. by 'development' you mean sub-hiring someone else, then?
[22:04:52] yes
[22:04:56] MatmaRex
[22:05:43] I agree that it makes sense to start with "advisory" group.
[22:06:43] I think this might be similar to the current development environment, but with a focus on 3rd parties. There will be volunteer work (e.g. extensions contributed by companies, or just interested users), but there are issues that are beyond the individual volunteer, such as configuration management
[22:08:37] I just wonder if such a group would be better formed somewhat "organically", e.g. becoming incorporated after there's been sustained interest in the unincorporated group for some time
[22:10:01] as opposed to a big master plan
[22:13:57] bawolff, I kind of agree. On the other hand, a user group is just this: a lightweight form of organisation. The original master plan was to aim for a 501c3 organisation within 5 years. We decided not to propose this now and start smallish instead.
[22:15:04] http://meta.wikimedia.org/wiki/Wikimedia_affiliation_models/User_Groups
[22:15:16] a user group can even start unincorporated.
[22:16:07] Quote: "Incorporation: Any – Wikimedia groups can be incorporated or not as they see fit"
[22:17:06] and yes, I also think we have to prove the need and benefit for such an organisation first, before we even think about formal structures
[22:18:21] 60 grand in organizational-related costs doesn't seem like that small a starting place.
[22:21:36] Most of the time goes into communication. Get the people together. Build communication structures. Identify who they are. Reach out to developers, users, WMF staff who are interested. Define the tasks and shape the ecosystem.
[22:21:42] time = cost
[22:21:44] :)
[22:22:59] so "organisational" might be misleading, as it does not mean "the formal stuff", but "the effort needed to crystallize an environment"
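One closing illustration, tied to gwicke's earlier question about getting timely security updates to third-party users without manual intervention: the "contacting a server" idea that was discussed (and that participants noted is contentious and would have to be strictly opt-in) could look something like the minimal sketch below. The endpoint URL, the JSON format, and the version strings are entirely hypothetical; no such MediaWiki service exists or is implied by either proposal.

#!/usr/bin/env python3
"""Minimal sketch of the opt-in update check discussed earlier (the
'contacting a server' idea).  The endpoint and JSON format are entirely
hypothetical, and the check is off by default, reflecting the consensus
in the discussion that any such feature would have to be optional."""
import json
import urllib.request

CHECK_UPDATES = False                                # opt-in, off by default
ENDPOINT = "https://releases.example.org/mediawiki/latest.json"  # hypothetical
INSTALLED = "1.23.0"                                 # illustrative local version

def parse(version: str):
    """Turn '1.23.1' into a comparable tuple (1, 23, 1)."""
    return tuple(int(part) for part in version.split("."))

def check_for_update():
    with urllib.request.urlopen(ENDPOINT, timeout=10) as resp:
        latest = json.load(resp)          # e.g. {"version": "1.23.1", "security": true}
    if parse(latest["version"]) > parse(INSTALLED):
        kind = "security release" if latest.get("security") else "release"
        print(f"New {kind} available: {latest['version']} (installed: {INSTALLED})")
    else:
        print("MediaWiki is up to date.")

if __name__ == "__main__":
    if CHECK_UPDATES:
        check_for_update()
    else:
        print("Update check disabled (opt-in only).")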