[14:01:04] #startmeeting Weekly CI triage
[14:01:10] WTF
[14:01:56] zeljkof: jzerebecki meetbot is dead :D
[14:02:07] meeting over hangout https://plus.google.com/hangouts/_/wikimedia.org/ci-weekly?authuser=0
[14:02:45] o/
[14:03:11] well bye bye meetbot
[14:03:13] o/
[14:03:30] hashar: anybody speaking in the hangout? I do not hear anything
[14:03:43] you want to force reload I guess
[14:04:14] broken
[14:06:00] #topic Zuul slow to update remotes from Gerrit
[14:07:29] jzerebecki: https://phabricator.wikimedia.org/T70481
[14:08:57] jzerebecki: I wrote a script to purge old refs
[14:08:59] zuul-clear-refs.py --verbose --dry-run --until 90 /srv/zuul/git/project
[14:09:05] can't remember whether it is installed on gallium
[14:10:22] we need zuul to create references using something like: zuul/master/2015/06/Zxxxxxx
[14:12:19] demo time on gallium as zuul: /home/hashar/zuul-clear-refs.py -n -v --until 90 /srv/ssd/zuul/git/mediawiki/core/
[14:16:57] #agreed Run zuul-clear-refs.py daily on all our repositories to reclaim Zuul references https://phabricator.wikimedia.org/T103528
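A minimal sketch of what the agreed daily cleanup could look like — hypothetical, not the deployed script. It assumes zuul-clear-refs.py is on the PATH of the Zuul host and that all repositories live under /srv/ssd/zuul/git, as in the demo above:

```python
#!/usr/bin/env python
"""Daily cleanup of stale Zuul refs (T103528) -- illustrative sketch only."""
import os
import subprocess

BASE_DIR = '/srv/ssd/zuul/git'  # assumption: same layout as the gallium demo


def clear_all_repos(days=90, dry_run=True):
    for root, dirs, files in os.walk(BASE_DIR):
        # A git repository has a .git directory (checkout) or a HEAD file (bare).
        if '.git' in dirs or 'HEAD' in files:
            dirs[:] = []  # do not descend into the repository itself
            cmd = ['zuul-clear-refs.py', '--verbose',
                   '--until', str(days), root]
            if dry_run:
                cmd.insert(1, '--dry-run')
            subprocess.check_call(cmd)


if __name__ == '__main__':
    clear_all_repos(days=90, dry_run=True)  # drop dry_run once verified
```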
[14:17:43] #action Publish the last meeting notes
[14:17:51] #topic composer for dependencies
[14:18:10] jzerebecki migrated the Wikidata/Wikibase jobs to use composer. Need to run composer twice due to missing features though
[14:20:03] #link https://github.com/wikimedia/composer-merge-plugin/issues/34
[14:20:09] wikibase has to run composer twice as a result :-(((
[14:21:31] #link https://github.com/wikimedia/composer-merge-plugin/pull/36 Bryan Davis sent a pull request!
[14:26:06] jzerebecki: https://phabricator.wikimedia.org/T90303 Fetch dependencies using composer instead of cloning mediawiki/vendor for non-wmf branches
[14:48:30] quick demo of Nodepool
[14:48:43] Does not put slaves offline though ( https://phabricator.wikimedia.org/T103551 )
[14:50:32] #agreed to puppetize Nodepool https://gerrit.wikimedia.org/r/#/c/201728/
[14:50:35] jzerebecki: zeljkof-meeting https://gerrit.wikimedia.org/r/#/c/201728/ :D
[14:52:51] the end :D
[15:02:25] marxarelli: meeting ping :)
[15:06:20] #startmeeting weekly browser tests triage
[15:07:54] #topic triaging triage column
[15:08:07] #link to the board sorted by priority https://phabricator.wikimedia.org/project/view/1078/?order=priority
[15:09:09] #agreed moving T102726 to backlog
[15:10:47] #agreed moving T102546 to backlog
[15:11:36] #agreed moving T102536 to backlog
[15:14:04] #topic triaging doing column
[15:26:52] #topic triaging "waiting for" column
[15:37:08] #topic triaging triage column
[15:52:42] done with the triage, good job everybody
[15:53:25] attending: hashar, CFisch_WMDE, marxarelli
[15:53:30] #endmeeting
[18:13:59] question for Yuri: is the map graphing a reasonable way to put a map on-wiki irrespective of graphing any data?
[18:16:12] is this the IRC channel for Showcase / Lightning Talks? (no info in the calendar invite :( )
[18:16:43] spagewmf: #wikimedia-tech
[18:17:05] thx Glaisher
[18:49:57] moizsyed, we're the cool kids!
[21:33:36] DanielK_WMDE_ / hashar: I believe the dumps are decompressed, the stream copied, and then recompressed
[21:33:55] which is not really the same as just storing the differences since last time
[21:34:43] but the problem with storing the differences since last time is that there's no way to stop hosting things that were revdeleted
[21:36:31] TimStarling: supposedly we will have to filter the dump with the current rev-deleted list?
[21:37:08] to do it efficiently, I think you either need a database format which allows fast seeking and overwrites, or split each dump up into many files
[21:37:15] that would probably be quicker than rebuilding a full dump...
[21:37:56] TimStarling: CDB?
[21:38:49] maybe an adaptation of CDB would work, or an adaptation of zip
[21:39:00] both have a directory at the end, which you would have to rewrite
[21:40:13] or maybe even Berkeley DB...
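To illustrate the "split each dump up into many files" option discussed above — a hedged sketch, not MediaWiki code, with a hypothetical on-disk layout: each revision becomes its own compressed file, so an incremental run only writes revisions it has not seen, and honouring a revdelete is a single unlink rather than decompressing and recompressing a monolithic stream:

```python
"""Illustrative sketch of a many-small-files dump store (hypothetical layout)."""
import gzip
import os

DUMP_DIR = 'dump-store'  # assumption: any writable directory


def rev_path(rev_id):
    # Shard by the last two digits so no single directory grows unbounded.
    return os.path.join(DUMP_DIR, '%02d' % (rev_id % 100), '%d.gz' % rev_id)


def write_revision(rev_id, text):
    path = rev_path(rev_id)
    if os.path.exists(path):
        return  # already dumped; incremental runs skip existing revisions
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with gzip.open(path, 'wt', encoding='utf-8') as f:
        f.write(text)


def delete_revision(rev_id):
    # Honour a revdelete without touching any other revision.
    try:
        os.remove(rev_path(rev_id))
    except OSError:
        pass
```

By contrast, a CDB- or zip-style single file keeps its directory at the end, so any overwrite or deletion forces that directory to be rewritten, which is the trade-off raised at the close of the discussion.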