[14:00:25] Krinkle: jzerebecki ? :)
[14:00:39] hey
[14:00:43] o/
[14:01:28] hashar: o/
[14:01:40] #startmeeting CI weekly meeting
[14:01:52] so
[14:02:40] present/lurking: jzerebecki legoktm Krinkle hashar
[14:02:40] :)
[14:02:40] I haven't really prepared this week's meeting :/
[14:02:42] Meeting started Tue Apr 21 14:01:40 2015 UTC and is due to finish in 60 minutes. The chair is hashar. Information about MeetBot at http://wiki.debian.org/MeetBot.
[14:02:42] Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
[14:02:42] The meeting name has been set to 'ci_weekly_meeting'
[14:02:56] hashar: We could continue triaging, only 17 left.
[14:03:06] #link agenda https://www.mediawiki.org/wiki/Continuous_integration/Meetings/2015-04-21
[14:03:21] let's look at past week's actions
[14:03:24] then we can do the triage ?
[14:03:31] Yeah
[14:03:41] #topic actions retrospective
[14:03:51] #link https://www.mediawiki.org/wiki/Continuous_integration/Meetings/2015-04-21#Actions_restrospective
[14:03:58] Antoine to update CI isolation architecture and reply to Chase/Andrew B questions from last meeting.
[14:04:03] that is still being discussed
[14:04:27] #info Update CI isolation architecture: stalled, it is pending discussion about VLANs in labs network
[14:04:28] When was the last communication?
[14:04:34] last friday
[14:04:38] cool
[14:04:39] so that is in mark's hands now
[14:05:13] hashar: I have a question about the Zuul package.
[14:05:15] #link https://phabricator.wikimedia.org/T95959#1206932 Pending mark to figure out how to split VLANs in labs network
[14:05:29] #info Antoine to reach out to mobile team and let them set up the gradle / android SDK stuff on CI labs slaves
[14:05:44] hashar: It is deployed in production now, but on labs instances I still have to do it manually? How is that? I thought if it's in prod, it would work automatically via apt?
[14:05:45] #info {notdone} haven't talked to them :(
[14:05:51] hold on
[14:05:55] "Timo to write his thoughts about the Workboard columns and potential usage ??"
[14:06:00] splitting vlans in labs? not happening I think :P
[14:06:02] I guess we can stall that one as well ?
[14:06:17] This is in the actions retrospective from last week, you skipped one point
[14:06:22] but yeah I need to look at that ticket on how to support CI
[14:06:24] mark: yeah sorry, badly echoing what chase told me last week :/
[14:06:35] mark: he assigned https://phabricator.wikimedia.org/T95959#1206932 to you
[14:06:59] I will be happy to talk about it with you as needed. Though I have no idea how the network is set up nor what the problem at hand is
[14:08:17] Krinkle: which point did I skip ?
[14:08:25] Zuul packaging
[14:08:45] https://www.mediawiki.org/wiki/Continuous_integration/Meetings/2015-04-14/Minutes#actions retrospective
[14:09:04] what a mess :)
[14:09:37] well the Zuul package should be on apt.wikimedia.org
[14:09:49] for both Trusty and Precise
[14:10:17] Last Saturday I created slave1017. It needed a manual Zuul install from your home dir
[14:10:24] Puppet/apt was unable to install Zuul
[14:10:31] doh
[14:10:54] ah it hasn't been uploaded to trusty :/
[14:11:24] Oh
[14:11:43] action point?
[14:12:07] looks like I have closed https://phabricator.wikimedia.org/T48552#1215158 too fast
[14:12:10] reopening it
[14:12:22] #action Zuul package to be uploaded for Trusty. Reopening https://phabricator.wikimedia.org/T48552#1215158
[14:13:03] "Timo to write his thoughts" wasn't from last week, it was from the week before. The "Actions retrospective" can be confusing since it appears the next week but is about the week before.
[14:13:20] Anyhow, last week we looked at the board and it was self-explanatory?
[14:13:21] LINK: https://phabricator.wikimedia.org/tag/continuous-integration/ Ci workboard and their columns (hashar, 14:09:40)
[14:13:25] (from last week)
[14:13:36] yeah I think it is fine
[14:13:42] I think the action point comes from 2 weeks ago
[14:14:05] The way we had Backlog/Next was a little bit non-standard, but now that we have "Untriaged" it's more obvious.
[14:14:08] the board column names look self-explanatory, so no need to write a doc explaining the workflow or detailing them
[14:14:15] Okay :)
[14:14:47] #agreed No need to document column names on https://phabricator.wikimedia.org/tag/continuous-integration/ they are self-explanatory
[14:14:49] :)
[14:15:05] anything from the past meeting?
[14:15:27] #topic CI isolation status
[14:15:48] #info nodepool created its first instance on contintcloud labs project! Instance does not boot yet but we're making progress
[14:15:57] Yeah, but I'll wait until after ^
[14:15:58] not much to say besides that
[14:16:06] #topic Triage
[14:16:11] #link https://phabricator.wikimedia.org/project/board/401/query/open/
[14:16:25] Hold on, one more from past week
[14:16:29] oooops
[14:16:35] https://phabricator.wikimedia.org/T96025: disabled core dumps
[14:16:38] Done.
[14:17:08] great!!!
[14:17:23] #link https://phabricator.wikimedia.org/T96025 Disable core dump generation on CI labs slaves. Done!
[14:17:39] that was set up last summer to grab core files for hhvm
[14:17:43] not much use for it nowadays
[14:17:51] beta still produces cores iirc
[14:17:54] so triage
[14:18:20] hashar: So we've got a few old ones to catch up on, but I'd like to discuss something more current first.
[14:18:33] sure
[14:19:12] I created a tree of tasks around "must wipe workspace", "set up git cache", "fix zuul io locks .git" and "create more smaller instances"
[14:19:44] #link https://phabricator.wikimedia.org/T96629: Convert pool to more smaller slaves
[14:19:56] Greg rightly questioned whether this is worth doing.
[14:20:15] depends on the timeline
[14:20:19] I think it is worth doing because most of the work will help isolation (it is not double work), and because this is causing problems right now.
[14:20:23] given I have no idea yet when ci isolation will land
[14:21:04] Zuul is not designed to work without workspace wiping; we copied OpenStack's system only partially. They never preserved workspaces, even before they had nodepool.
[14:21:26] yeah
[14:21:45] so using the git cache is definitely going to be set up for the isolated vm
[14:21:48] so that can be worked on
[14:21:52] Even if we remove the locks manually, there will be corruption because of cancelled processes. We should just discard the workspace and start fresh from local git cache and re-clone all relevant repositories.
[14:22:00] James E. Blair added an option to zuul-cloner to clone from a local cache
[14:22:07] Yeah
[14:22:37] So, see my points on https://phabricator.wikimedia.org/T96629.
[14:22:48] hashar: how will you have a git cache when the isolated vm is only spun up for one job? include it in the image?
[14:22:56] jzerebecki: yup
[14:23:10] jzerebecki: the software that is managing the pool of instances is named 'nodepool'
[14:23:13] jzerebecki: Yep. It's part of the image creation process, which would happen every few hours asynchronously
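
The workflow described here (wipe the workspace, clone from a local cache baked into the image, then fetch whatever is missing, such as the speculative ref prepared by zuul-merger, from Gerrit) is essentially two git operations. Below is a minimal sketch in Python, not the real tool: the cache path /srv/git, the workspace path and the zuul_ref argument are assumptions, and in practice this would be zuul-cloner's own local-cache support rather than a hand-rolled helper.

    # Sketch only: mirrors the "clone from cache, then fetch the rest" idea.
    import os
    import subprocess

    CACHE_DIR = '/srv/git'                       # assumed local mirror baked into the image
    GERRIT = 'https://gerrit.wikimedia.org/r/p'  # canonical remote

    def clone_with_cache(project, workspace, zuul_ref=None):
        cache = os.path.join(CACHE_DIR, project)
        dest = os.path.join(workspace, project)
        if os.path.isdir(cache):
            # Cloning from a local path lets git hardlink the pack files:
            # nearly instant and almost no extra disk space.
            subprocess.check_call(['git', 'clone', cache, dest])
        else:
            subprocess.check_call(['git', 'clone', GERRIT + '/' + project, dest])
        # Point origin back at Gerrit and fetch what the cache did not have,
        # e.g. the speculative commit zuul-merger prepared for this change.
        subprocess.check_call(['git', 'remote', 'set-url', 'origin',
                               GERRIT + '/' + project], cwd=dest)
        if zuul_ref:
            subprocess.check_call(['git', 'fetch', 'origin', zuul_ref], cwd=dest)
            subprocess.check_call(['git', 'checkout', 'FETCH_HEAD'], cwd=dest)

    clone_with_cache('mediawiki/core', '/tmp/workspace')

On nodepool images the cache itself would be refreshed as part of the periodic image build mentioned just above.
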
[14:23:25] it has the ability to build a new disk image and send it to OpenStack so we can boot instances out of it
[14:23:38] so the idea would be to have the image come with all git repos freshly cloned in
[14:23:42] and thus act as a mirror
[14:23:53] so in theory
[14:24:00] zuul-cloner will be able to clone from that mirror first
[14:24:19] if it is on the same disk and we teach zuul-cloner to do hardlinks / use git alternates, that is a super fast operation
[14:24:29] Git clone has the ability to look at multiple remotes and prefer the local one and then fetch the rest from zuul-merger (e.g. your proposed commit)
[14:24:30] then zuul-cloner will fetch the missing objects from Gerrit or zuul-merger
[14:25:48] hashar: What kind of instance size are you thinking of for nodepool instances?
[14:27:35] not sure :/
[14:27:47] we will most probably need custom ones
[14:28:00] hashar: Hm..
[14:28:42] hashar: I think it would be good to make our current pool and infrastructure behave more like nodepool, specifically the part where an instance has no concurrent jobs, is a little bit smaller, and has a local cache.
[14:28:53] But I'm not sure what size to pick, and how to set up the git cache.
[14:29:31] basically the idea is that if we have no concurrency, we can do git cache updates as a time-scheduled job for each slave. So it naturally will not race with other jobs.
[14:29:50] Maybe it would just be a for-loop in bash that git-clones/git-pulls repos in a directory?
[14:29:58] slightly relatedly, I've been wondering if we should start collapsing our jobs so that we're only running one, like 'mwext-testextension-zend' would run phplint, phpunit, and then qunit rather than having 3 different jobs
[14:30:27] legoktm: that is what will happen before we switch to nodepool.
[14:30:27] I thought git clone --depth was also considered in https://phabricator.wikimedia.org/T93703 but I don't find that anymore. Was it considered somewhere? Should --depth 1 be used in addition to --shared?
[14:30:45] Krinkle: for the git cache I will be happy to handle the setup / puppet thing etc
[14:30:52] jzerebecki: Yes, zuul-cloner should default to a small depth. There is no reason to do a full clone; it's useless.
[14:30:54] I have replied on https://phabricator.wikimedia.org/T96629#1224484
[14:31:10] for git clone --depth
[14:31:15] I don't think it is going to work
[14:31:19] have to investigate it
[14:31:25] namely, when you clone a repo at origin/HEAD
[14:31:34] then attempt to check out another branch - like REL1_24 -
[14:31:42] I have no idea what is going to happen
[14:31:46] since REL1_24 is not there
[14:32:17] I prefer doing the brute force approach
[14:32:17] hashar: It has to check before cloning
[14:32:20] have a full copy locally
[14:32:23] and do a full clone
[14:32:23] you can pass a commit/branch to clone
[14:32:56] now that we have a Debian package it is "easy" to apply Zuul patches to it
[14:33:04] Either we need to add depth, or do hard links.
[14:33:08] That must be a blocker.
[14:33:19] I would do hard links or alternates
[14:33:38] hashar: but if we always start with a clean workspace and thus reclone then it will never switch to another branch, right?
[14:33:39] creating task
[14:34:03] ah https://phabricator.wikimedia.org/T87294
[14:34:07] Nodepool images need Gerrit replication for git-clone performance
[14:34:14] git alternates seem too scary and fragile.
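
For reference, the three cheap-clone strategies being weighed here differ only in the git invocation. A quick comparison as a sketch, assuming an illustrative mirror at /srv/git/mediawiki/core and throwaway workspace paths:

    import subprocess

    CACHE = '/srv/git/mediawiki/core'
    GERRIT = 'https://gerrit.wikimedia.org/r/p/mediawiki/core'

    def git(*args):
        subprocess.check_call(['git'] + list(args))

    # 1) Hardlinks: cloning from a local path hardlinks the pack files, so the
    #    workspace is created almost instantly, costs little extra disk, and
    #    stays fully independent: deleting or repacking the cache cannot
    #    corrupt it.
    git('clone', CACHE, '/tmp/ws-hardlink')

    # 2) Alternates: --reference records the cache in
    #    .git/objects/info/alternates instead of copying objects. Faster still,
    #    but the workspace now depends on the cache staying intact -- the
    #    "scary and fragile" option.
    git('clone', '--reference', CACHE, GERRIT, '/tmp/ws-alternates')

    # 3) Shallow: --depth 1 only brings the branch named at clone time, so a
    #    later checkout of REL1_24 finds nothing unless the branch is passed
    #    at clone time (as here) or shallow-fetched explicitly afterwards.
    git('clone', '--depth', '1', '--branch', 'REL1_24', GERRIT, '/tmp/ws-shallow')

This is why the conversation leans towards hardlinks: the workspace keeps working no matter what later happens to the mirror.
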
[14:34:32] hashar: That is slightly different
[14:35:08] doesn't git fetch any missing objects from the remote when it is not in the alternate?
[14:35:17] jzerebecki: it should :)
[14:35:35] jzerebecki: Yes, but an alternate means it will not make any links in the local workspace
[14:35:48] one of the major issues we have right now is that we have a git clone of mediawiki/core which takes 3 minutes and wastes a ton of disk space
[14:35:48] There are lots of issues with it if you search the internet
[14:35:54] There is no reason for that.
[14:36:12] an idea is to point the mw/core git working copy to a local mirror so it fetches the objects locally / on the same disk
[14:36:40] hard links make the workspace independent.
[14:36:51] and still very fast
[14:38:00] so do we need a new task ?
[14:38:16] Yeah
[14:38:18] Subtask of https://phabricator.wikimedia.org/T96627
[14:39:28] I am assuming you are going to create it :)
[14:39:33] Oh
[14:39:35] OK :)
[14:40:01] I am too much in the context of Debian packages / nodepool :/
[14:40:28] if any task needs my attention or me to work on it, please mention it or even assign it to me
[14:40:34] will find some spare time to get to them
[14:41:16] hashar: Yeah, I need some pointers on how to do replication. I know when (scheduled job for each slave), but not "how".
[14:41:24] Also have to think about size.
[14:41:30] How large is the Gerrit replication?
[14:41:32] it is not that big
[14:42:15] On lanthanum: du -sm /srv/ssd/gerrit
[14:42:19] 9463 /srv/ssd/gerrit
[14:42:22] or 10GB
[14:42:26] Col
[14:42:27] Cool
[14:42:37] the problem is that when you update them
[14:42:42] that creates a bunch of objects
[14:42:48] that should be repacked from time to time
[14:43:07] and if you repack the mirror repo, I have no idea what will happen to the workspaces that have been cloned using hardlinks
[14:43:25] if the working repo has a hardlink to .git/packs/1234.pack
[14:43:27] hashar: hardlinks mean it has a strong reference.
[14:43:35] Even rm -rf of the git cache will not break the workspace
[14:43:36] and we repack the mirror so that 1234.pack disappears
[14:43:41] besides, it doesn't matter because it does not race.
[14:43:49] We will wipe workspaces if we use the git cache
[14:43:54] I think on the next operation in the working copy it will end up fetching everything from Gerrit :/
[14:43:56] and no concurrency per slave, remember?
[14:44:02] oh
[14:44:19] so yeah, we can hardlink :)
[14:44:27] It is a dependency; it has to happen together
[14:44:43] can you please take care of filing the tasks
[14:44:44] using hardlinks to pack files means then that repacking will duplicate packs instead of saving space...
[14:44:51] hashar: I already did, yesterday :)
[14:45:06] jzerebecki: we will just repack the mirror repo
[14:45:09] Except for "T96687: Set up git replication on integration slaves"; that one I created just now
[14:45:22] then the Jenkins job would destroy the workspace and git clone from the mirror
[14:45:29] which would use hardlinks
[14:45:32] jzerebecki: No, because the cloned workspace is not preserved.
[14:45:58] jzerebecki: The only thing preserved between builds is the git cache, which we can maintain however we like. We can disable auto-gc and run repack/gc every time we update the cache
[14:45:59] jzerebecki: but you are right, we have to carefully verify what exactly happens
[14:46:02] whatever we want
[14:46:21] will the workspace be destroyed on the next build or after the current build?
[14:46:30] Right now we're not destroying anything.
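
A sketch of the scheduled cache-update job this implies (Krinkle's "for-loop in bash" idea, written here in Python): it assumes the cache is a tree of bare *.git mirrors under an illustrative /srv/git, disables auto-gc so maintenance only ever happens inside this job, and repacks after each fetch. Because builds clone from the cache with hardlinks rather than pointing alternates at it, repacking here cannot break any workspace.

    import os
    import subprocess

    CACHE_ROOT = '/srv/git'  # assumed per-slave cache location

    def update_cache():
        for dirpath, dirnames, filenames in os.walk(CACHE_ROOT):
            if not dirpath.endswith('.git'):
                continue
            dirnames[:] = []  # this is a bare mirror, do not descend into it
            # Never let fetch trigger an implicit gc; we repack explicitly below.
            subprocess.check_call(['git', 'config', 'gc.auto', '0'], cwd=dirpath)
            subprocess.check_call(['git', 'fetch', '--prune', 'origin'], cwd=dirpath)
            # Consolidate the loose objects the fetch just created.
            subprocess.check_call(['git', 'repack', '-ad'], cwd=dirpath)
            subprocess.check_call(['git', 'prune-packed'], cwd=dirpath)

    if __name__ == '__main__':
        update_cache()

Run from cron or a timed Jenkins job per slave, this never races with builds as long as each slave only runs one job at a time, which is exactly the "no concurrency per slave" assumption above.
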
[14:46:37] But yes, we will do postmerge
[14:46:39] postbuild*
[14:46:44] Not prebuild
[14:47:06] I have this task as a blocker for https://phabricator.wikimedia.org/T93703
[14:47:10] (reduce copies of mediawiki/core in workspaces)
[14:47:12] ah good, then there is no problem with using hardlinks to pack files
[14:47:20] if we clean prebuild, then that task would not be fixed.
[14:47:31] It would still leave core copies in 2000 workspaces for mwext-*
[14:47:38] :)
[14:47:53] hashar: So... I will create medium instances (instead of the current large ones)
[14:48:03] Or should we ask labs for a small one with 2 CPU?
[14:48:21] we got rid of a lot of mwext-* jobs, a good number are now using a generic "mwext-testextension-zend" and "mwext-qunit"
[14:48:33] for mediawiki jobs I don't think 1 CPU is enough
[14:48:44] Yeah
[14:48:53] legoktm: yup, nice improvement!
[14:49:03] 4GB RAM / 40GB space is overkill?
[14:49:16] (which is medium)
[14:49:24] hashar: Do we actually use multiple CPUs?
[14:49:39] a mediawiki job would have PHPUnit hogging one CPU
[14:49:50] and having a second CPU for MySQL / system etc would be nice
[14:49:52] remember, no concurrent workers
[14:50:02] Ah, right.
[14:50:17] system, MySQL, Apache, AJAX requests
[14:50:20] OK. Good :)
[14:50:32] so a single job can well benefit from having multiple CPUs
[14:50:37] Yeah
[14:50:39] 4GB is probably overkill
[14:50:57] are you coming to the releng meeting an hour from now ?
[14:51:02] we can ask Andrew B. what he thinks about it
[14:51:39] hashar: So if we switch from large to medium, and from 4X to 1X, we will need more instances in total. Not many more though, because most of the quota will remain the same since we'll be using smaller instances.
[14:51:42] Okay
[14:51:44] potentially we could ask for a new instance type with 2 CPU - 2 GB RAM - 25 GB disk (10 GB for system, 10 GB for the git mirror and 5 GB for the rest)
[14:51:52] Yeah
[14:52:03] the instance count quota can be raised as needed, I am sure
[14:52:13] it is probably there to prevent potential abuve
[14:52:15] abuse
[14:52:23] so in short
[14:52:32] we need a task to determine the size of the image we want
[14:52:42] Let's discuss it on https://phabricator.wikimedia.org/T96629
[14:52:44] for ci isolation, I envision us having multiple instance types
[14:52:46] yea the instance count limit is regularly adjusted on request
[14:52:49] Task fatigue
[14:53:26] #agreed https://phabricator.wikimedia.org/T96629 to be used to discuss instance type sizes for ci
[14:53:35] so in theory
[14:53:46] we could have light jobs tied to small 1 CPU instances
[14:54:08] the rest on 2 CPU instances
[14:54:19] and some very heavy jobs that have multiple processes tied to 4 CPU instances
[14:54:26] nodepool has all the support for it
[14:55:05] you can define labels that are used by Jenkins, such as trusty-light, trusty-medium, trusty-heavy
[14:55:19] and for each, associate an instance type (m1.small, m1.medium, m1.large)
[14:55:26] then it maintains pools of them
[14:55:38] hashar: Yeah
[14:55:42] then in JJB it is all about using the proper node: stanza :)
[14:55:48] at least on paper, that looks easy
[14:56:22] so https://phabricator.wikimedia.org/T96629 Convert pool from a few large slaves (4X) to more smaller slaves (1X)
[14:56:26] that sounds 'epic'
[14:56:32] Meh, one week.
[14:56:43] or maybe it could be split into subtasks
[14:56:46] That's all I have.
[14:56:51] :)
[14:57:13] ahh I guess we can use the Phabricator checklists with [] a [] b [] c
[14:59:12] Krinkle: added to the releng meeting
[14:59:14] anything else ?
[14:59:21] jzerebecki: anything you wanna add ? :)
[14:59:31] legoktm: congrats on the mwext jobs
[14:59:37] :)
[14:59:42] legoktm: I am sure we can get the remaining ones moved to that unified job
[14:59:42] re https://phabricator.wikimedia.org/T95897
[14:59:49] hashar: Let's triage one old task so we make progress compared to last week
[14:59:58] refresh and pick one :)
[15:00:05] let's look at jzerebecki's task :)
[15:00:08] is there a consistent way one could use to detect if it's running under CI?
[15:00:16] so
[15:00:28] we inject a bunch of settings in LocalSettings.php
[15:00:33] and there is some $wmg variable for it
[15:01:00] $wgWikimediaJenkinsCI
[15:01:06] in integration/jenkins.git mediawiki/conf.d/10_set_wgWikimediaJenkinsCI.php
[15:01:10] $wgWikimediaJenkinsCI = true;
[15:01:19] so if the code runs on Wikimedia Jenkins it has that set to true
[15:01:35] I think we originally set it up for Wikidata with addshore or jeroen
[15:01:46] ah good, that should work
[15:02:03] so you can: if ( isset( $wgWikimediaJenkinsCI ) && $wgWikimediaJenkinsCI == true ) { echo "I hate Jenkins\n"; exit(0); }
[15:02:29] I did comment on the task, not sure whether it was very clear :(
[15:03:37] #link https://phabricator.wikimedia.org/T95897 the changed job configuration extension-unittests -> extension-unittests-generic for Wikidata.git makes it not run all tests and fail
[15:04:09] legoktm: what are the repos not using the standard/shared/unified mwext jobs ?
[15:04:21] legoktm: are they the ones that have a bunch of dependencies?
[15:04:38] or if they're non-voting
[15:04:46] ah yeah, non-voting
[15:04:48] https://github.com/wikimedia/integration-config/blob/master/jjb/mediawiki-extensions.yaml#L318
[15:04:54] those ones, nothing we can do about them
[15:05:19] so I think https://phabricator.wikimedia.org/T96264
[15:05:20] for the repositories that need dependencies, there is a way to inject them via extensions-load.txt
[15:05:47] does not need any more info so I triaged it.
[15:06:03] legoktm: so you could well have the shared job use extensions-load.txt :) The repos having no dependencies would only load themselves.
[15:06:23] #link https://phabricator.wikimedia.org/T96264 Add Wikidata to Jenkins job mediawiki-extensions-hhvm
[15:06:54] hashar: er, how would that work?
[15:07:09] where would we store the mapping of extension to dependencies?
[15:07:27] Indeed. The execution can use extensions-load.txt, but the job itself needs a way to store it.
[15:07:36] legoktm: https://github.com/wikimedia/integration-config/blob/master/jjb/mediawiki-extensions.yaml#L47-L57
[15:07:42] Unless we pass the variable from zuul-layout
[15:07:54] Which is possible in theory
[15:08:01] hmm
[15:08:29] I have an RfC open for extension dependencies, but haven't had time to work on it yet... if/when that's done, we could clone the extension, read its dependency lists, and keep cloning till everything is resolved
[15:08:33] I am not sure that zuul-cloner-extdeps is used anywhere besides the shared mediawiki-extensions-php job
[15:08:38] OpenStack does this for a few things already, with their roadmap of removing Jenkins entirely in mind.
[15:08:59] and we do it for doc_subpath
[15:09:13] legoktm: an unfinished project of mine was to have a job shared by all extensions deployed on wmf
[15:09:34] I don't think that's possible due to MW's hooking system.
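
The RfC idea legoktm describes (clone an extension, read its declared dependencies, keep going until everything is resolved) is a plain transitive-closure walk. A hypothetical sketch follows; the registry, the extension names and the read_dependencies() helper are all invented for illustration, since the real mapping does not exist yet.

    # Everything here is hypothetical: the registry stands in for whatever
    # per-extension dependency metadata the RfC would define.
    DEPENDENCY_REGISTRY = {
        'Wikibase': ['cldr', 'Scribunto'],
        'Scribunto': [],
        'cldr': [],
    }

    def read_dependencies(extension):
        # In the RfC this would be read from the extension's own metadata
        # after cloning it; here we just consult the in-memory mapping.
        return DEPENDENCY_REGISTRY.get(extension, [])

    def resolve(extension):
        """Return the extension plus all transitive dependencies."""
        resolved = []
        queue = [extension]
        while queue:
            current = queue.pop(0)
            if current in resolved:
                continue
            resolved.append(current)
            queue.extend(read_dependencies(current))
        return resolved

    # The list a shared job could clone and write to extensions-load.txt:
    print('\n'.join(resolve('Wikibase')))
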
[15:09:37] answered and marked https://phabricator.wikimedia.org/T95668 as invalid
[15:10:13] jzerebecki: nice!!!
[15:10:30] #link https://phabricator.wikimedia.org/T95668 Jenkins language screenshots job runs, it checks out an old VisualEditor revision. {done}
[15:10:56] a teaser for later
[15:10:57] https://gerrit.wikimedia.org/r/#/c/203347/3/zuul/debian_glue_functions.py,unified
[15:11:21] in Zuul we can apply a Python function to the job parameters before the job is triggered in Jenkins :)
[15:11:30] so we can have a bunch of mappings directly in Zuul
[15:11:31] so we can just write custom Python to pass env variables around?
[15:11:35] yeah
[15:11:41] in the code above
[15:11:51] params[] is a dict of parameters passed to the Gearman function
[15:11:59] which in turn are set as parameters on the Jenkins job
[15:12:06] and Jenkins makes them env variables
[15:12:13] so
[15:12:24] in theory you can detect that a job named foobar-php53
[15:12:38] ooh
[15:12:40] would have a PHP_BIN=/usr/bin/php5.3
[15:12:43] ok, I know how to do this then :)
[15:12:49] and then have all the Jenkins stuff use $PHP_BIN
[15:13:01] then foobar-hhvm would have PHP_BIN=/usr/bin/hhvm
[15:13:03] or whatever else
[15:13:24] legoktm: https://gerrit.wikimedia.org/r/#/c/203347/ should give a good base :)
[15:13:29] haven't tested it though
[15:13:39] well I guess it is time to close this meeting, isn't it ?
[15:14:29] #link https://gerrit.wikimedia.org/r/#/c/203347/ How Zuul can inject parameters to Jenkins (that end up as env variables)
[15:14:30] #link https://phabricator.wikimedia.org/T96690 for using the generic job for extensions with dependencies
[15:14:42] hehe
[15:15:05] legoktm: you can look at how we set doc_subpath; that is a Zuul python script as well :)
[15:15:18] Krinkle: jzerebecki: anything else or can I close this meeting ?
[15:15:39] Nope
[15:16:08] nope
[15:16:22] thanks everyone!
[15:16:35] will update the wiki / spam mail later on today
[15:16:39] o/
[15:16:57] legoktm: anything still ?
[15:17:07] nope, just waving goodbye :P
[15:17:10] hehe
[15:17:22] I thought you raised your hand to ask one last question :)
[15:17:24] #endmeeting
[15:17:24] Meeting ended Tue Apr 21 15:17:24 2015 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
[15:17:24] Minutes: https://tools.wmflabs.org/meetbot/wikimedia-office/2015/wikimedia-office.2015-04-21-14.01.html
[15:17:24] Minutes (text): https://tools.wmflabs.org/meetbot/wikimedia-office/2015/wikimedia-office.2015-04-21-14.01.txt
[15:17:24] Minutes (wiki): https://tools.wmflabs.org/meetbot/wikimedia-office/2015/wikimedia-office.2015-04-21-14.01.wiki
[15:17:25] Log: https://tools.wmflabs.org/meetbot/wikimedia-office/2015/wikimedia-office.2015-04-21-14.01.log.html
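
The parameter-function mechanism hashar describes above could look roughly like the following. This is a sketch in the spirit of the linked Gerrit change, not the actual file; the (item, job, params) signature follows my understanding of Zuul 2.x custom parameter functions (treat the exact signature as an assumption), and the PHP_BIN values are the examples from the discussion.

    # Registered in Zuul's layout; Zuul calls it just before submitting the
    # job to Gearman, so anything put into params becomes a Jenkins build
    # parameter and therefore an environment variable.
    def set_parameters(item, job, params):
        if job.name.endswith('-php53'):
            params['PHP_BIN'] = '/usr/bin/php5.3'  # example value from the discussion
        elif job.name.endswith('-hhvm'):
            params['PHP_BIN'] = '/usr/bin/hhvm'

A JJB builder could then invoke $PHP_BIN instead of hard-coding the interpreter, which is what makes a single job definition reusable across PHP flavours.
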
[17:01:27] can someone invite me to the staff channel?
[17:01:51] marktraceur, ^
[17:01:55] Hrrmmmmaybe! One sec stephanebisso
[17:01:59] stephanebisson even.
[17:02:18] Er
[17:04:08] stephanebisson: You can join now
[17:04:18] marktraceur: thanks!
[17:05:28] marktraceur could you invite me too?
[17:05:34] Jeez
[17:05:47] hey, them's the wages of being helpful ;)
[17:05:48] marktraceur add me too! :)
[17:06:46] ryasmeen: Done
[17:06:48] is this the correct channel ? https://www.youtube.com/watch?v=KDEofGDM_CI
[17:06:50] ANYONE ELSE? :P
[17:06:58] thanks!
[17:06:59] victorgrigas: No, it is not. -.-
[17:07:38] marktraceur: can you invite me too pls?
[17:08:03] We /really/ need to rename this channel to -meeting.
[17:08:14] And how
[17:08:18] the-wub: Sure, sec
[17:08:49] the-wub: Done
[17:08:51] ryasmeen: Fixed
[17:09:13] thanks :)
[17:09:31] Please Stand By
[17:09:47] cajoel: This channel is not for discussion of the meeting.
[17:10:04] So anyone who is here for the meeting needs to 1. register an account and 2. join the staff channel
[17:10:36] How do i do this
[17:11:11] victorgrigas, /join #wikimedia-staff
[17:11:22] He doesn't have a registered account, so he can't.
[17:12:40] how do i register an account
[17:13:04] James_F: I thought this was #-office, short for #-officehours
[17:13:07] victorgrigas: OfficeWiki [[IRC]]
[17:13:14] thanks
[17:36:13] /msg NickServ VERIFY REGISTER lcm522 helxgrjxveoh
[17:36:38] ...ouch?
[17:37:23] lol woops
[17:37:39] might suggest https://freenode.net/sasl/
[21:11:56] so when does the official desk reshuffling start? :)
[21:12:22] ebernhardson: *wrong channel*
[21:12:24] My goodness
[21:12:38] office is a horrible name for a channel :P
[21:12:49] It *sort of* makes sense.
[21:13:38] Have we considered just having the office hours in #wikimedia? It's not used for anything else.
[21:13:44] "#wikimedia-office" is apparently ambiguous
[21:15:52] It shouldn't be.
[21:20:55] who is in charge of the Wikimedia store?
[21:21:17] The layout of the menu is broken on firefox.
[21:26:28] marktraceur: regardless of should's or should not's, the reality is that it is. we have to live in reality :)
[21:39:06] ragesoss: Victoria