[13:57:13] hashar: meeting in a few minutes?
[13:57:20] yeah
[13:58:27] I cannot speak today, I'm in a friend's co-working space, but the meeting room is taken :|
[13:58:54] it should be available soon, so I will be audible again
[14:01:12] #startmeeting CI weekly meeting
[14:01:12] Meeting started Tue Oct 20 14:01:12 2015 UTC and is due to finish in 60 minutes. The chair is hashar. Information about MeetBot at http://wiki.debian.org/MeetBot.
[14:01:12] Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
[14:01:12] The meeting name has been set to 'ci_weekly_meeting'
[14:01:35] o/
[14:03:24] I am fed up with technology
[14:04:23] restarting
[14:05:16] o/
[14:05:22] my mic has some issues :-(
[14:06:27] o/
[14:08:16] *sigh* the browsertests for wikidata seem to fail a different way every run
[14:08:46] :(
[14:08:54] will be working on that this week
[14:10:23] I give up
[14:10:37] jzerebecki: zeljkof: looks like either my brand new headphones are already dead
[14:10:42] or my laptop mic plug is dead
[14:10:46] I am disappointed
[14:10:56] hashar: ouch :/
[14:11:07] are we having the meeting here?
[14:11:26] good old IRC :)
[14:12:27] I bought them for 20 euros
[14:12:31] question regarding https://phabricator.wikimedia.org/T87169 for pywikibot/core
[14:12:35] and they only served for half an hour on Monday ...
[14:12:46] #topic Pywikibot/core tests
[14:12:54] #link https://phabricator.wikimedia.org/T87169
[14:13:15] can we just add the tox-jessie job to check?
[14:13:28] I gave up following up after jayvdb stated an 11 minute delay is acceptable
[14:13:30] ok, leaving the hangout then
[14:13:39] I thought about just running the lint checks via tox-jessie
[14:13:52] and keeping more specific jobs for py26 / py27 / py3x
[14:13:58] so they are run in parallel
[14:14:19] * hashar mumbles at his headphones
[14:15:09] or are there still security concerns or something like that open with the tox-jessie job?
[14:15:13] for check
[14:15:19] or with any Nodepool job for that matter
[14:15:22] kunal asked about it on the QA list as well yesterday
[14:15:37] it is probably good enough to be turned on
[14:16:03] but jobs running on Nodepool instances are still unrestricted network-wise
[14:17:04] for pywikibot we can probably enable them
[14:17:08] and follow up after
[14:17:21] bikeshed time: we need a new pipeline name in Zuul :-}
[14:18:06] really? doesn't one of the existing ones work?
[14:18:29] do we have a ticket for the network restriction?
[14:18:38] looking it up but can't find it :(
[14:18:46] the idea was to isolate the labs projects from each other
[14:19:15] #link https://phabricator.wikimedia.org/T86168 Isolate contintcloud nova project from the rest of the wmflabs cloud
[14:19:19] jzerebecki: got it
[14:19:43] it should potentially be done at the OpenStack level
[14:19:58] should ask andrewbogott about it
[14:21:02] I am adding andrew to it
[14:22:15] maybe via security groups
[14:22:20] asked him
[14:22:32] but besides that, the jobs are limited in run time by the Jenkins build timeout
[14:22:39] so they are killed automatically after X minutes
[14:22:48] and Nodepool deletes the instances as a result
[14:24:37] ok good. so can we just reuse check and check-only?
[14:24:58] check does not run for whitelisted folks
[14:25:28] check-only only votes +1
[14:25:44] check-voter votes +2 but was meant for repos that only have safe / lint tests
[14:26:06] maybe we can reuse check-voter
[14:26:58] there may only be one pipeline that votes...
[14:27:31] yeah :-(
[14:28:09] so for a given repo, we have to union its check/test triggered jobs
[14:28:28] which means the jobs need to be in Nodepool
[14:28:46] so if the project has jobs that may only be in test, we need to keep that, add the job to it, and also add the job to check. if the project has no test pipeline then we can use check-voter.
[14:29:14] yup
[14:30:03] good. next topic?
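(The pipeline arrangement agreed above — duplicating the job into both check and test when a test pipeline exists — would translate to roughly the following fragment of Zuul's layout.yaml. This is a sketch only: the repository name is from the discussion, the per-interpreter job names are invented for illustration, and the exact syntax depends on the deployed Zuul version.)

```yaml
# Hypothetical excerpt of zuul/layout.yaml: run tox-jessie on every
# uploaded patchset via check, while keeping the more specific
# per-interpreter jobs in the test pipeline so they run in parallel.
projects:
  - name: pywikibot/core
    check:
      - tox-jessie
    test:
      - tox-jessie
      - pywikibot-core-tox-py26   # illustrative job name
      - pywikibot-core-tox-py27   # illustrative job name
```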
[14:30:16] #topic caching package managers
[14:30:26] so pip / gem / npm / composer etc.
[14:30:32] download a ton of tarballs from the internet
[14:30:38] maven also
[14:30:39] and to speed that up we would need some caching
[14:30:43] yeah, and gradle
[14:31:06] #link https://phabricator.wikimedia.org/T112560 [tracking] Disposable VMs need a cache for package managers
[14:31:14] that is the tracking bug
[14:31:35] I evaluated a solution that is solely for pip (devpi, https://phabricator.wikimedia.org/T114871)
[14:31:47] works like a charm but it is only for pip
[14:31:59] then angry-caching-proxy, a nodejs app https://phabricator.wikimedia.org/T112561
[14:32:02] that acts as a proxy
[14:32:12] so you set http_proxy=http://angry-cache/
[14:32:13] and it works
[14:32:26] na
[14:32:27] errr
[14:32:31] you have to change the npm/rubygems/pip index URL to point to it
[14:32:49] but gems and pip redirect to https and that causes some issues
[14:33:09] potentially we could hack the nodejs app to rewrite the URLs from http to https to grab the packages from the https index
[14:33:20] but then the index returns https URLs
[14:33:36] and the instance ends up downloading from the https URL directly from the internet, defeating the whole purpose
[14:33:37] so
[14:33:39] I ditched that
[14:33:50] I experimented with setting up an https man-in-the-middle proxy
[14:34:05] the idea is to set https_proxy=https://mitmproxy.integration.wmflabs.org/
[14:34:26] it is based on Squid, which is able to terminate the TLS connection and sign it with a custom CA,
[14:34:35] a CA that needs to be added to the instances so the package managers trust it
[14:34:42] then Squid downloads from the internet and caches the material
[14:35:00] and Squid sends the material back to the client over the hacked TLS connection
[14:35:28] the flow being:
[14:36:03] package manager ----> https_proxy=mitmproxy:8081 ----> Squid ----> https:// to the internet
[14:36:26] #action Antoine to file a sub task to
https://phabricator.wikimedia.org/T112560 describing the Squid / mitm setup
[14:36:52] which intentionally doesn't work with certificate pinning, like github does
[14:37:21] and you said npm also ships with a certificate pin
[14:37:34] yeah
[14:37:58] but we can add our custom CA in /etc/ssl/certs/ for it to be trusted, or in the case of npm pass --ca=custom.cert
[14:38:12] I have looked at composer
[14:38:26] but that works with some well-crafted URLs for pip / npm and gems
[14:38:30] I have hit a wall with bundler
[14:38:51] so I guess I will describe what I did, set up a test env, and list out the appropriate commands/setup for each of the package managers
[14:39:08] composer doesn't even check the certificate according to csteipp
[14:39:28] doesn't surprise me
[14:40:02] another possibility suggested by Dan
[14:40:09] and the way I prefer to improve it is to verify gpg-signed git tags
[14:40:20] it is that on postmerge we will run the package manager installers to populate a cache dir
[14:40:26] then save the result to a central place
[14:40:44] then jobs in test/gate-and-submit would be able to retrieve that cache
[14:40:48] kind of like Travis does
[14:40:58] maybe it was you jzerebecki that suggested it in the first place
[14:41:17] no idea who was first, but yeah we talked about that before
[14:41:18] the pro is that it will cache the native modules that need compilation
[14:41:31] another pro is that it is transparent (just: cache-restore && npm install)
[14:41:43] the con is we need a central storage place
[14:41:47] I thought about Swift
[14:42:01] but maybe we can just go with an rsync server, which is reasonably easy to set up and interact with
[14:42:40] the con is the same for the caching proxy, it would be a sort of central storage place, just less flexible
[14:43:10] yeah
[14:43:14] so I am not sure what to do :(
[14:43:29] the mitm is pretty much done but I am not pleased with it
[14:44:22] is there a way to give a secret to a job only when it is running in
gate-and-submit or postmerge?
[14:44:39] forget my question. the answer is yes.
[14:45:58] we could use a dedicated job
[14:46:01] so it would never be polluted by users
[14:46:27] so to do that
[14:46:42] I thought we could have the Nodepool jobs clone all the material under $WORKSPACE/src/
[14:46:52] and point the various package manager caches to $WORKSPACE/cache/
[14:46:58] so we can restore the cache there
[14:47:04] and avoid potential conflicts with the workspace
[14:48:16] do we need to isolate the cache for each repo? if not then it would probably be easy. just hand out an ssh key to one instance used as the caching store. also easy to switch to Swift if we get that in labs.
[14:48:49] yeah, I would like to split the cache per repo / branch
[14:49:15] so something like mediawiki-core@REL1_26
[14:49:35] should I bring up a proposal as a sub task of https://phabricator.wikimedia.org/T112560 "[tracking] Disposable VMs need a cache for package managers"
[14:49:38] ?
[14:49:46] split, sure. but does it need to be isolated? as in, the postmerge job for mediawiki/core can not in any way write to the one for pywikibot/core?
[14:50:09] ahh
[14:50:30] well, the job would be hardcoded in Jenkins using ZUUL_PROJECT / ZUUL_BRANCH
[14:50:39] and a credential available solely to that job
[14:50:50] so a postmerge job from another repo would have no way to pollute another cache
[14:51:00] and the check/test whatever pipeline would lack the credentials to update it
[14:51:52] one could craft a repo that executes code to intentionally write to another repo's cache unless each repo's cache has its own distinct credentials.
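(The per-repo/branch cache split mentioned above — e.g. mediawiki-core@REL1_26 — could be derived in the job's shell from the ZUUL_PROJECT and ZUUL_BRANCH variables Zuul exports to Jenkins. A minimal sketch; the helper name is hypothetical:)

```shell
#!/bin/sh
# Hypothetical helper: build a per-repository, per-branch cache name
# from the variables Zuul exports to Jenkins jobs.
cache_key() {
    # Replace "/" in the repository name so the key is a single
    # path component, then append the branch after an "@".
    printf '%s@%s\n' "$(printf '%s' "$1" | tr / -)" "$2"
}

cache_key "mediawiki/core" "REL1_26"   # prints: mediawiki-core@REL1_26
```

(A postmerge job holding the write credential could then rsync $WORKSPACE/cache/ to a path based on `cache_key "$ZUUL_PROJECT" "$ZUUL_BRANCH"` on the cache instance, while check/test jobs restore from the same path read-only.)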
[14:52:23] I mean during postmerge
[14:52:41] #action bring up a proposal for cache save/restore as a subtask of https://phabricator.wikimedia.org/T112560 "[tracking] Disposable VMs need a cache for package managers"
[14:53:38] jzerebecki: that would mean the change has been +2ed by someone
[14:53:44] yes
[14:53:58] and I think we can trust our +2ers
[14:54:18] if we ever spot someone doing such a nasty thing, I guess that deserves a global ban
[14:54:26] and we would have to dispose of the caches
[14:55:25] yeah, cleanup for such an occurrence is fairly easy
[14:56:15] this way of caching would also work for storing coverage to diff against for patch coverage
[14:56:39] I have filed https://phabricator.wikimedia.org/T116017 about that
[14:56:48] will have a look at using an rsync server
[14:57:02] and figure out the appropriate config files to point package managers to it
[14:57:27] you don't even need an rsync server. just an ssh key and an instance that has rsync installed.
[14:57:28] that is probably the best solution
[14:57:52] ohh
[14:58:04] hadn't thought about that
[14:58:17] so we would pass the ssh key via the Jenkins credentials system?
[14:58:21] yup
[14:58:38] sounds good
[14:58:44] and straightforward
[14:59:08] though the rsync server could be used to serve the caches anonymously
[14:59:18] it is just about including rsync::server (more or less)
[14:59:42] yeah, for read access that is probably the better way
[14:59:58] I guess we have a .plan :-}
[15:00:08] sounds good
[15:01:13] guess that is all for today :-}
[15:01:16] thanks jan!
[15:01:52] #topic meeting agenda
[15:01:52] #link https://www.mediawiki.org/wiki/Continuous_integration_meetings/2015-10-20
[15:01:58] #topic misc
[15:02:01] jzerebecki: the end?
[15:02:19] yup
[15:02:26] thx hashar
[15:02:27] oh
[15:02:28] DST
[15:02:35] we are changing hours next week
[15:02:40] let's keep it at 4pm CET?
[15:02:44] err
[15:02:48] 4pm CEST -> 4pm CET
[15:03:00] which means 2pm UTC -> 3pm UTC
[15:03:12] jzerebecki: ^^
[15:03:27] hashar: so just like the calendar says? fine with me.
[15:03:36] #agreed Next meeting at 4pm CET (3pm UTC)
[15:03:41] Thanks!
[15:03:44] #endmeeting
[15:03:45] Meeting ended Tue Oct 20 15:03:44 2015 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
[15:03:45] Minutes: https://tools.wmflabs.org/meetbot/wikimedia-office/2015/wikimedia-office.2015-10-20-14.01.html
[15:03:45] Minutes (text): https://tools.wmflabs.org/meetbot/wikimedia-office/2015/wikimedia-office.2015-10-20-14.01.txt
[15:03:45] Minutes (wiki): https://tools.wmflabs.org/meetbot/wikimedia-office/2015/wikimedia-office.2015-10-20-14.01.wiki
[15:03:45] Log: https://tools.wmflabs.org/meetbot/wikimedia-office/2015/wikimedia-office.2015-10-20-14.01.log.html
[15:04:41] minutes pasted on wiki https://www.mediawiki.org/wiki/Continuous_integration_meetings/2015-10-20/Minutes
[15:04:49] and at the bottom of https://www.mediawiki.org/wiki/Continuous_integration_meetings/2015-10-20#Meeting_minutes
[15:06:40] hashar, jzerebecki: are you joining the browser tests meeting?
[15:07:48] zeljkof: I probably should, invite plz
[15:08:30] jzerebecki: coming, I thought you were already invited
[15:09:35] jzerebecki: https://plus.google.com/hangouts/_/wikimedia.org/btest-triage
[15:13:44] #startmeeting browser tests weekly triage meeting
[15:13:44] Meeting started Tue Oct 20 15:13:44 2015 UTC and is due to finish in 60 minutes. The chair is zeljkof. Information about MeetBot at http://wiki.debian.org/MeetBot.
[15:13:45] Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
[15:13:46] The meeting name has been set to 'browser_tests_weekly_triage_meeting'
[15:14:50] #topic introduction
[15:14:54] #link the meeting on google hangouts https://plus.google.com/hangouts/_/wikimedia.org/btest-triage
[15:16:30] #link meeting notes https://www.mediawiki.org/wiki/Quality_Assurance/Browser_testing/Meetings/Notes
[15:17:01] #link browser tests phabricator board https://phabricator.wikimedia.org/tag/browser-tests/
[15:17:33] o/
[15:18:25] #topic failing wikidata browsertests
[15:18:55] #link https://phabricator.wikimedia.org/T110510 fix negative argument (ArgumentError) in browsertests
[15:24:10] zeljkof: sorry, have a bit of chat / discussion to handle so I can't really triage the browser tests stuff
[15:26:44] #link https://integration.wikimedia.org/ci/view/BrowserTests/view/Wikidata/job/browsertests-Wikidata-WikidataTests-linux-firefox-sauce/381/ this is the one that failed without an error
[15:58:45] #action jzerebecki will create a jjb patch to disable either cucumber's pretty formatter or raita
[15:58:54] #endmeeting
[15:58:54] Meeting ended Tue Oct 20 15:58:53 2015 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
[15:58:54] Minutes: https://tools.wmflabs.org/meetbot/wikimedia-office/2015/wikimedia-office.2015-10-20-15.13.html
[15:58:54] Minutes (text): https://tools.wmflabs.org/meetbot/wikimedia-office/2015/wikimedia-office.2015-10-20-15.13.txt
[15:58:54] Minutes (wiki): https://tools.wmflabs.org/meetbot/wikimedia-office/2015/wikimedia-office.2015-10-20-15.13.wiki
[15:58:55] Log: https://tools.wmflabs.org/meetbot/wikimedia-office/2015/wikimedia-office.2015-10-20-15.13.log.html