[00:21:56] (03CR) 10Krinkle: "Nice :)" [integration/docroot] - 10https://gerrit.wikimedia.org/r/337947 (owner: 10Hashar)
[00:41:25] 10Continuous-Integration-Config, 10Gamepress: Skin:Gamepress broke tests for all extensions - https://phabricator.wikimedia.org/T158825#3048622 (10MaxSem)
[00:42:36] 10Continuous-Integration-Config, 10Gamepress: Skin:Gamepress broke tests for all extensions - https://phabricator.wikimedia.org/T158825#3048636 (10MaxSem) p:05Triage>03Unbreak!
[01:16:54] 10Continuous-Integration-Config, 10Gamepress: Skin:Gamepress broke tests for all extensions - https://phabricator.wikimedia.org/T158825#3048672 (10SamanthaNguyen) 05Open>03Resolved a:03MaxSem Thanks @MaxSem!
[01:56:44] 10Continuous-Integration-Config, 10Gamepress: Skin:Gamepress broke tests for all extensions - https://phabricator.wikimedia.org/T158825#3048622 (10Krinkle) Note, this was really T117710 as GamePress is not in the shared extension gate.
[01:57:57] 10Continuous-Integration-Infrastructure, 13Patch-For-Review: Builds from mwext-testextension jobs sometimes pick up tests from unrelated skins - https://phabricator.wikimedia.org/T117710#1782002 (10Krinkle) @hashar I thought we no longer preserve any working copies for Jenkins jobs. We use zuul cloner and git...
[03:27:34] PROBLEM - Free space - all mounts on deployment-fluorine02 is CRITICAL: CRITICAL: deployment-prep.deployment-fluorine02.diskspace._srv.byte_percentfree (<22.22%)
[04:08:44] Yippee, build fixed!
[04:08:45] Project selenium-MultimediaViewer » safari,beta,OS X 10.9,BrowserTests build #310: 09FIXED in 12 min: https://integration.wikimedia.org/ci/job/selenium-MultimediaViewer/BROWSER=safari,MEDIAWIKI_ENVIRONMENT=beta,PLATFORM=OS%20X%2010.9,label=BrowserTests/310/
[06:25:48] Project selenium-Wikibase » chrome,test,Linux,BrowserTests build #280: 04FAILURE in 1 hr 45 min: https://integration.wikimedia.org/ci/job/selenium-Wikibase/BROWSER=chrome,MEDIAWIKI_ENVIRONMENT=test,PLATFORM=Linux,label=BrowserTests/280/
[06:52:33] RECOVERY - Free space - all mounts on deployment-fluorine02 is OK: OK: All targets OK
[08:37:21] PROBLEM - Work requests waiting in Zuul Gearman server https://grafana.wikimedia.org/dashboard/db/zuul-gearman on contint1001 is CRITICAL: CRITICAL: Anomaly detected: 11 data above and 0 below the confidence bounds
[08:49:21] RECOVERY - Work requests waiting in Zuul Gearman server https://grafana.wikimedia.org/dashboard/db/zuul-gearman on contint1001 is OK: OK: No anomaly detected
[08:55:03] (03CR) 10Hashar: "Thanks :-}" [integration/docroot] - 10https://gerrit.wikimedia.org/r/337947 (owner: 10Hashar)
[08:59:02] 10Scap, 06WMF-Legal, 07Documentation, 07Software-Licensing: Scap is lacking a license - https://phabricator.wikimedia.org/T94239#3049027 (10hashar)
[09:03:34] 10Scap, 06WMF-Legal, 07Documentation, 07Software-Licensing: Scap is lacking a license - https://phabricator.wikimedia.org/T94239#3049032 (10hashar) From T94239#3045702 > Historically the scripts were solely on the cluster under /home/wikipedia/bin and /home/wikipedia/sbin. Potential authors would be anyone...
[09:18:05] 10Browser-Tests-Infrastructure, 10MediaWiki-General-or-Unknown, 07JavaScript, 13Patch-For-Review, and 2 others: Port Selenium tests from Ruby to Node.js - https://phabricator.wikimedia.org/T139740#3049050 (10zeljkofilipin)
[09:24:25] PROBLEM - Puppet run on deployment-conf03 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [0.0]
[09:26:58] 10Continuous-Integration-Infrastructure, 13Patch-For-Review: Builds from mwext-testextension jobs sometimes pick up tests from unrelated skins - https://phabricator.wikimedia.org/T117710#3049065 (10hashar) That blocked T156333 and caused T158825 There are bunch of random skins floating around: ``` root@integ...
[09:27:22] !log Clearing skins from testextension jobs T117710 salt -v '*slave*' cmd.run 'rm -fR /srv/jenkins-workspace/workspace/mwext-testextension*/src/skins/*'
[09:27:26] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL
[09:27:26] T117710: Builds from mwext-testextension jobs sometimes pick up tests from unrelated skins - https://phabricator.wikimedia.org/T117710
[09:27:51] 10Continuous-Integration-Config, 10ContentTranslation, 03Language-2017 Sprint 2, 03Language-2017 Sprint 3, and 4 others: mwext-qunit-jessie test fails on unrelated change - https://phabricator.wikimedia.org/T153038#3049069 (10Nikerabbit) Yeah, and we are only making read queries.
[09:40:18] 10Scap, 06WMF-Legal, 07Documentation, 07Software-Licensing: Scap is lacking a license - https://phabricator.wikimedia.org/T94239#1158049 (10Liuxinyu970226) Copied from V14#173: > Why there's no 2-clause or 3-clause BSD options? Or CC series?
[09:54:28] RECOVERY - Puppet run on deployment-conf03 is OK: OK: Less than 1.00% above the threshold [0.0]
[09:57:29] (03PS1) 10Hashar: Remove mwext-* job from mw-checks-test experimental [integration/config] - 10https://gerrit.wikimedia.org/r/339373 (https://phabricator.wikimedia.org/T117710)
[10:02:02] (03CR) 10Hashar: [C: 032] Remove mwext-* job from mw-checks-test experimental [integration/config] - 10https://gerrit.wikimedia.org/r/339373 (https://phabricator.wikimedia.org/T117710) (owner: 10Hashar)
[10:03:57] (03Merged) 10jenkins-bot: Remove mwext-* job from mw-checks-test experimental [integration/config] - 10https://gerrit.wikimedia.org/r/339373 (https://phabricator.wikimedia.org/T117710) (owner: 10Hashar)
[10:06:21] PROBLEM - Work requests waiting in Zuul Gearman server https://grafana.wikimedia.org/dashboard/db/zuul-gearman on contint1001 is CRITICAL: CRITICAL: Anomaly detected: 10 data above and 7 below the confidence bounds
[10:09:21] PROBLEM - Work requests waiting in Zuul Gearman server https://grafana.wikimedia.org/dashboard/db/zuul-gearman on contint1001 is CRITICAL: CRITICAL: Anomaly detected: 10 data above and 7 below the confidence bounds
[10:11:21] PROBLEM - Work requests waiting in Zuul Gearman server https://grafana.wikimedia.org/dashboard/db/zuul-gearman on contint1001 is CRITICAL: CRITICAL: Anomaly detected: 11 data above and 8 below the confidence bounds
[10:14:00] (03PS1) 10Hashar: Delete php53lint jobs [integration/config] - 10https://gerrit.wikimedia.org/r/339377 (https://phabricator.wikimedia.org/T158652)
[10:14:27] bah
[10:17:13] hashar: bah in general?
[10:17:23] yeah :(
[10:17:28] :(
[10:17:31] got too many things to deal with
[10:17:41] (03CR) 10Zfilipin: "Tested, green: https://integration.wikimedia.org/ci/job/selenium-Core-338344/" [selenium] - 10https://gerrit.wikimedia.org/r/336824 (https://phabricator.wikimedia.org/T157695) (owner: 10Zfilipin)
[10:17:49] funny you should say that... ;)
[10:18:21] RECOVERY - Work requests waiting in Zuul Gearman server https://grafana.wikimedia.org/dashboard/db/zuul-gearman on contint1001 is OK: OK: No anomaly detected
[10:22:19] (03CR) 10Zfilipin: "Tested, green: https://integration.wikimedia.org/ci/job/selenium-MultimediaViewer-338967/" [selenium] - 10https://gerrit.wikimedia.org/r/336824 (https://phabricator.wikimedia.org/T157695) (owner: 10Zfilipin)
[10:28:03] Tobi_WMDE_SW: if I want to run wikibase selenium tests on my machine, which vagrant role should I enable?
[10:28:30] wikibase_repo, wikidata, both?
[10:34:06] I'm reading the readmes in the wikibase repo (root and tests/browser) but vagrant roles are not mentioned
[10:34:12] nothing here either: https://www.mediawiki.org/wiki/Wikibase/Programmer%27s_guide_to_Wikibase#Browser_Testing_for_Wikidata
[10:41:45] zeljkof: wikibase_repo is just a plain wikibase repo install without clients
[10:41:48] vagrant source says wikidata is the way to go
[10:41:56] the wikidata role is a bit more, but AFAIK broken.
[10:42:03] Tobi_WMDE_SW: :)
[10:42:07] thanks, trying it out
[10:42:13] I've not tried browser tests with any of them yet
[10:42:28] it provisioned without failures, so, so far so good
[10:43:14] so, sitelinks tests will not work with the wikibase_repo role, because there's no client wiki
[10:43:15] I'm working on updating the code to Selenium 3, and all repos work fine, but wikibase fails completely :(
[10:43:35] https://integration.wikimedia.org/ci/view/Selenium/job/selenium-Wikibase-338787/
[10:43:49] zeljkof: oh, what? that's bad
[10:44:14] not sure if it is related to CI, or selenium 3, or some of the dependencies...
[10:44:21] so trying to reproduce locally
[10:44:21] I'll have a look in some minutes when I'm back at the office
[10:55:09] ok, managed to reproduce the error on my machine when targeting the beta cluster; I was getting different errors when targeting the local vagrant VM
[10:55:26] a step forward
[11:16:40] zeljkof: hm, looking at all these failures it seems that almost always it is having problems with locating the elements
[11:17:06] which is strange since AFAIK we're not doing anything special or different from other projects here
[11:17:20] Tobi_WMDE_SW: yeah, strange
[11:17:28] cannot find an element
[11:17:53] maybe something changed in the selenium implementation in the way elements are located
[11:18:02] I get the same error for both chrome and firefox
[11:23:55] ok, it's not wikibase only
[11:24:07] mobile frontend also fails while waiting for elements :(
[11:24:08] https://integration.wikimedia.org/ci/job/selenium-MobileFrontend-339378/BROWSER=chrome,MEDIAWIKI_ENVIRONMENT=beta,PLATFORM=Linux,label=BrowserTests/1/consoleFull
[11:26:24] zeljkof: ok
[11:26:31] giving up for now :(
[11:26:48] will try to get the patch merged and the gem updated, so it is easier to test
[11:26:59] now I have to patch all around just to get the test running in CI
[11:27:13] zeljkof: selenium 3 should be stable already, right? or still in beta?
[11:27:16] selenium 3 works fine for most repos, but fails for some; further testing and debugging needed
[11:27:24] stable
[11:27:26] ok
[11:30:26] zeljkof: I'm wondering about the version of chrome saucelabs is using...
[11:30:43] it's complicated :)
[11:31:17] I think linux builds are stuck at some version because chrome (and firefox) switched from 32-bit to 64-bit, and the sauce linux machines are 32-bit at the moment...
[11:31:25] I can provide details, let me check
[11:31:40] zeljkof: could that be the issue with selenium 3?
[11:31:50] and the failures are reproducible with the latest chrome and firefox, without sauce labs, on my machine
[11:31:57] zeljkof: ok. :(
[11:32:02] was just a guess
[11:32:05] who knows, probably, since it is reproducible on both browsers
[11:32:29] maybe not a problem, but a change in how elements are located
[12:03:05] (03PS2) 10Hashar: Delete php53lint jobs [integration/config] - 10https://gerrit.wikimedia.org/r/339377 (https://phabricator.wikimedia.org/T158652)
[12:14:47] 10Gerrit: Automatic reviewer assignment does not work for certain repos - https://phabricator.wikimedia.org/T158844#3049271 (10Osnard)
[12:17:06] (03PS1) 10Hashar: Delete composer*php53 [integration/config] - 10https://gerrit.wikimedia.org/r/339388 (https://phabricator.wikimedia.org/T158652)
[12:18:46] 10Continuous-Integration-Config, 07Composer: integration/composer need a integration-composer-check-hhvm job - https://phabricator.wikimedia.org/T158845#3049284 (10hashar)
[12:20:22] (03PS1) 10Hashar: Delete integration-composer-check-php53 [integration/config] - 10https://gerrit.wikimedia.org/r/339392 (https://phabricator.wikimedia.org/T158845)
[12:35:32] hashar: would you be able to help me with https://phabricator.wikimedia.org/T158628 ? I can probably move forward if I am made an admin of deployment-prep
[12:53:23] addshore: not this afternoon sorry. Gotta reproduce the Wikidata build failure due to autoload-dev
[12:55:24] 10Continuous-Integration-Config, 07Composer, 13Patch-For-Review: integration/composer need a integration-composer-check-hhvm job - https://phabricator.wikimedia.org/T158845#3049284 (10Paladox) Yeh we should do php 7.0 and hhvm.
[13:26:48] 10Gerrit, 07Regression: Automatic reviewer assignment does not work for certain repos - https://phabricator.wikimedia.org/T158844#3049504 (10Aklapper)
[13:33:14] How do I get an extension added to Doxygen?
[13:33:56] 10Continuous-Integration-Config, 10Wikidata, 13Patch-For-Review: Fatal error: Call to undefined function Wikibase\Client\Tests\RecentChanges\both() - on jenkins - https://phabricator.wikimedia.org/T158674#3049520 (10hashar)
[13:35:07] hashar: around?
[13:35:14] aude: yes
[13:35:30] I am finally looking at the Wikibase / Hamcrest issue https://phabricator.wikimedia.org/T158674
[13:35:45] you see the issue with the composer-dev-args.js script ?
[13:35:51] I have updated the test task with what I believe is a reproducible case
[13:35:59] and the way we use it in https://github.com/wikimedia/integration-jenkins/blob/master/bin/mw-fetch-composer-dev.sh
[13:36:10] yeah what I get is that you made the script to inject the autoload-dev section in vendor/composer.json
[13:36:13] what I don't get is
[13:36:22] idk if it's the best solution
[13:36:44] that the hamcrest libs are in require-dev; there are a bunch of composer/autoload* files that seem to refer to hamcrest
[13:36:53] so why the hell can't mediawiki load them ? :}
[13:36:54] yeah, the autoload-dev is missing
[13:37:07] they don't get copied or loaded into mediawiki-vendor's composer.json
[13:37:21] I have replayed the commands one by one, and sent to gerrit the result of my mediawiki/vendor https://gerrit.wikimedia.org/r/#/c/339404/
[13:37:24] when we run the composer-dev-args.js script and feed it to composer require
[13:37:58] the autoload-dev files are needed to load the files that contain the global functions
[13:38:09] GLOBALS ???????????????????????????????????????????????????????????????????????????
[13:38:12] ;D
[13:38:17] yeah :/
[13:38:43] * aude doesn't want to get into arguments about that so ok...
[13:38:49] ;-D
[13:38:51] neither do i
[13:38:57] let's take that as a hard constraint
[13:39:10] so eventually
[13:39:21] I wrote a very lame PHPUnit test that roughly:
[13:39:21] use Hamcrest\Matcher;
[13:39:22] what we need is the autoload-dev section added to mediawiki-vendor/composer.json
[13:39:26] and have a test method that "both()";
[13:39:29] which fatals out
[13:39:51] yeah, it's missing those functions which are in the autoload-dev files
[13:40:31] so then I looked at the vendor/composer/autoload classes
[13:40:33] anddddd
[13:40:35] 10Browser-Tests-Infrastructure, 10MediaWiki-General-or-Unknown, 07JavaScript, 13Patch-For-Review, and 2 others: Port Selenium tests from Ruby to Node.js - https://phabricator.wikimedia.org/T139740#3049531 (10zeljkofilipin)
[13:40:42] autoload_classmap.php: 'Hamcrest\\Matcher' => $vendorDir . '/hamcrest/hamcrest-php/hamcrest/Hamcrest/Matcher.php',
[13:40:42] autoload_static.php: 'Hamcrest\\Matcher' => __DIR__ . '/..' . '/hamcrest/hamcrest-php/hamcrest/Hamcrest/Matcher.php',
[13:40:46] the psr4 stuff would be there
[13:40:51] because it's in require-dev
[13:41:00] or part of that
[13:46:21] aude: I guess what I am struggling with is that I have no clue how composer autoloaders work : /
[13:47:38] nor why there are so many autoloaders to pick from :D
[13:47:57] :/
[13:48:09] i think these functions files are not autoloaded by default in these packages
[13:48:29] they are optional and have to be defined in the project's composer.json if one wishes to use them
[13:48:35] the global functions
[13:50:09] so even if vendor/composer/autoload_static.php has the Hamcrest\\Matcher
[13:50:19] those are autoloaded by the package
[13:50:22] something is not smart enough to find \Hamcrest\Matcher\both() ?
[13:50:33] the globals are not automatically autoloaded
[13:50:48] the ones in autoload-dev -> files
[13:51:22] yeah I got the part about autoload-dev -> files not being loaded
[13:51:41] https://github.com/hamcrest/hamcrest-php/pull/35
[13:51:52] the thing I don't get is why the class/function is not matched by the static_autoloader.php which apparently refers to it
[13:52:34] then something I noticed is that vendor/composer/autoload_files.php is apparently manually maintained
[13:52:55] i don't think it is manually maintained, or not sure
[13:55:13] 10Continuous-Integration-Config, 10Wikidata, 13Patch-For-Review: Fatal error: Call to undefined function Wikibase\Client\Tests\RecentChanges\both() - on jenkins - https://phabricator.wikimedia.org/T158674#3049558 (10hashar)
[13:56:02] ahhh
[13:56:43] i'm not sure my script is the ideal solution (it's not)
[13:56:53] but think it would work for now
[13:56:57] probably
[13:58:43] composer require wouldn't work but maybe composer install or update
[13:59:13] or maybe require does work + dump-autoload
[14:01:07] aude: the mw-fetch-dev script does the dump-autoload
[14:01:08] cd vendor && composer dump-autoload --optimize
[14:01:22] handling SWAT, brb
[14:01:25] ok
[14:01:38] ah addshore is doing it \O/
[14:02:11] ok
[14:02:38] i think the status within the topic could be updated, unless this backlog is hidden from me :)
[14:02:59] hmm
[14:03:02] got something working
[14:03:23] ^ well that's a first xD
[14:03:36] 10Continuous-Integration-Config, 10Wikidata, 13Patch-For-Review: Fatal error: Call to undefined function Wikibase\Client\Tests\RecentChanges\both() - on jenkins - https://phabricator.wikimedia.org/T158674#3049580 (10hashar)
[14:03:41] so at first my example on https://phabricator.wikimedia.org/T158674 was wrong; I have corrected it
[14:03:51] - use Hamcrest\Matcher;
[14:03:53] + use Hamcrest\Matchers;
[14:03:56] it is plural
[14:04:03] ah
[14:05:04] then I have no idea how PHP namespaces work
[14:05:07] never had to use them
[14:05:23] so here is how I brute force the whole thing
[14:05:24] reading vendor/hamcrest/hamcrest-php/hamcrest/Hamcrest/Matchers.php
[14:05:25] hashar: ;)
[14:05:34] namespace Hamcrest;
[14:05:34] class Matchers {
[14:05:45] static function both() {}
[14:05:51] }
[14:06:02] and you know what?
[14:06:11] PHP can't look up a static method
[14:06:21] unless it is explicitly referenced
[14:06:24] couldn't that be simplified to just function both() {}
[14:06:53] Zppix: what would make things simpler would be to use conventions over a pile of crap autoloaders. Eg migrate to python :D
[14:07:02] * hashar rolls the drums
[14:07:10] aude: \Hamcrest\Matchers::both(); # works
[14:07:22] hashar: migrating to python == giving a computer to a cow
[14:07:27] o_O
[14:07:35] * hashar laughs
[14:07:53] so my big newbie question is
[14:07:56] if \Hamcrest\Matchers::both(); works
[14:08:21] that seems to indicate the autoloader manages to find the proper file containing the class
[14:08:37] i hate python; the only reason i use it for my ircbot (and i may use it for the bot i'm slowly working on) is because 1) pywikibot is good for the bot i'm making 2) sopel is a decent framework for the ircbot
[14:08:43] but not the global functions
[14:11:51] * hashar tries
[14:12:31] hashar: question, do you look at hashbrowns and say i'm gonna hashar you?
[14:14:05] aude: ah so global functions are a second kind of problem, right?
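An aside for readers following along: autoload_files.php is generated, not hand-maintained. A hedged sketch of roughly what `composer dump-autoload` emits for an autoload-dev -> files entry; the hash key and exact layout here are illustrative, not copied from mediawiki/vendor:

```php
<?php
// vendor/composer/autoload_files.php -- machine-written by
// `composer dump-autoload`; not edited by hand.
$vendorDir = dirname( dirname( __FILE__ ) );
$baseDir = dirname( $vendorDir );

return array(
    // The key is a content hash composer assigns (made up here); the path
    // comes straight from the "files" list. Unlike classmap/PSR-4 entries,
    // every file listed here is require()d unconditionally when the
    // autoloader is included -- that eager include is what defines
    // Hamcrest's global both()/assertThat() helpers.
    '0badbadbadbadbadbadbadbadbadbad0' => $vendorDir . '/hamcrest/hamcrest-php/hamcrest/Hamcrest.php',
);
```

So if the autoload-dev section never makes it into mediawiki/vendor's composer.json, no files entry lands here, and the class autoloader alone cannot conjure the global functions, which matches what aude describes above.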
[14:15:37] hashar: is there a reason behind why jessie and trusty CI tests seem to be a lot slower than the others
[14:17:04] aude: in my dummy case if I invoke Hamcrest\Util::registerGlobalFunctions();
[14:17:14] the global functions work, eg assertThat();
[14:19:57] 10Continuous-Integration-Config, 10Wikidata, 13Patch-For-Review: Fatal error: Call to undefined function Wikibase\Client\Tests\RecentChanges\both() - on jenkins - https://phabricator.wikimedia.org/T158674#3049604 (10hashar)
[14:20:34] 10Continuous-Integration-Config, 10Wikidata, 13Patch-For-Review: Fatal error: Call to undefined function Wikibase\Client\Tests\RecentChanges\both() - on jenkins - https://phabricator.wikimedia.org/T158674#3043493 (10hashar)
[14:20:49] Zppix: what are "the others"?
[14:21:05] hashar: anything non jessie or trusty :P
[14:21:29] give some example or I am lost ? :}
[14:21:49] hashar: for example hhvm on jessie takes forever to test
[14:21:50] oh, interesting
[14:21:51] the jobs having jessie/trusty in their name run on a small pool of instances that get deleted on build completion
[14:21:58] and it takes some time to fill up the pool
[14:22:04] hashar: why is that?
[14:22:22] aude: so maybe the autoload-dev entries are properly injected somehow?
[14:23:00] aude: they are definitely in vendor/composer/autoload_static.php on my setup
[14:23:11] what about https://github.com/wmde/hamcrest-html-matchers/blob/master/src/functions.php
[14:23:21] and if I remove that file, MediaWiki's doMaintenance.php fails due to a missing file, which would suggest it uses it
[14:23:57] hashar: doMaintenance.php is something i'm familiar-ish with; anything i can look into to help?
[14:24:15] well I guess those global methods would need a way to be registered like upstream does? eg similar to how upstream does it in the PR you pointed to https://github.com/hamcrest/hamcrest-php/pull/35
[14:24:21] PROBLEM - Work requests waiting in Zuul Gearman server https://grafana.wikimedia.org/dashboard/db/zuul-gearman on contint1001 is CRITICAL: CRITICAL: Anomaly detected: 13 data above and 0 below the confidence bounds
[14:24:31] ( which is https://github.com/hamcrest/hamcrest-php/commit/d58749fd6a9bbafdeaca0cbc51227a9e5ca99d5f )
[14:24:57] i think someone is doing updates on grafana
[14:26:03] aude: now I am gonna test your autoload-dev thing
[14:26:08] and see how wonderful it is :}
[14:26:09] could be an option in phpunit setup
[14:26:26] not sure that's best or how it's intended though
[14:26:45] i love automation
[14:27:22] a question I have is why the static method is not found
[14:27:31] since \Hamcrest\Matchers::both(); works
[14:27:40] but use Hamcrest\Matchers; both(); does not
[14:27:54] no idea :/
[14:28:21] I am tempted to blame PHP
[14:28:22] but most probably I have no clue how it works
[14:30:47] :/
[14:32:06] hashar: that's not how it works
[14:32:21] it should be colons in both situations, i'm pretty sure
[14:45:31] aude: so for the static method ::both I fail to see how it can work
[14:50:19] RECOVERY - Work requests waiting in Zuul Gearman server https://grafana.wikimedia.org/dashboard/db/zuul-gearman on contint1001 is OK: OK: No anomaly detected
[14:53:18] interesting...
[14:55:38] 10Continuous-Integration-Config, 10Wikidata, 13Patch-For-Review: Fatal error: Call to undefined function Wikibase\Client\Tests\RecentChanges\both() - on jenkins - https://phabricator.wikimedia.org/T158674#3049720 (10hashar) So I don't think the PHP namespace lookup works for static methods. Eg `both()` would...
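The lookup rules being puzzled over can be reproduced without composer or MediaWiki at all. A minimal standalone PHP sketch; the Matchers class body is a stub, not the real Hamcrest code:

```php
<?php
namespace Hamcrest;

// Stub standing in for vendor/hamcrest/.../Matchers.php.
class Matchers {
    public static function both() {
        return 'called the static method';
    }
}

namespace Wikibase\Client\Tests\RecentChanges;

use Hamcrest\Matchers;

// Works: an explicit static-method call; the autoloader only has to find
// the class, and classmap/PSR-4 entries handle that fine.
echo Matchers::both(), "\n";

// Fatals: an unqualified function call only looks for a *function*, first
// \Wikibase\Client\Tests\RecentChanges\both(), then \both(). A "use" import
// aliases the class name only -- it never turns static methods into bare
// callable functions.
both(); // PHP Fatal error: Call to undefined function
        //   Wikibase\Client\Tests\RecentChanges\both()
```

The fatal message matches the task title exactly, which is why bare both() calls can only work once real global functions exist, either via Hamcrest.php's file of wrappers or an explicit \Hamcrest\Util::registerGlobalFunctions() call as tried above.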
[14:55:54] aude: wrote part of my conclusion on https://phabricator.wikimedia.org/T158674#3049720
[14:56:00] ok
[14:56:06] so now
[14:56:38] my understanding is that autoload-dev -> files -> vendor/hamcrest/hamcrest-php/hamcrest/Hamcrest.php
[14:56:55] which defines a global function both(); that in turn invokes the static method \Hamcrest\Core\CombinableMatcher::both()
[14:57:15] the bad thing is that it will pollute the global namespace
[14:57:23] yeah
[14:57:28] so maybe we should be explicit and use \Hamcrest\Util::registerGlobalFunctions();
[14:57:51] which does not quite solve the autoload-dev section not being merged by CI :D
[14:58:31] then once registerGlobalFunctions() is invoked, the global namespace is loaded with those functions for all following tests
[14:58:35] so yeah, not ideal either
[14:58:47] :/
[15:00:57] so at least now I have a better understanding of what is going on :)
[15:01:17] none of the solutions are so nice
[15:02:07] yeah :(
[15:04:21] * aude off to get food
[15:04:33] back later
[15:36:08] (03CR) 10Hashar: "After much debugging today (captured at T158674), this solution seems to be the most straightforward." [integration/jenkins] - 10https://gerrit.wikimedia.org/r/339202 (https://phabricator.wikimedia.org/T158674) (owner: 10Aude)
[15:41:10] (03Abandoned) 10Hashar: Capture composer.lock after mw-fetch-composer-dev [integration/config] - 10https://gerrit.wikimedia.org/r/339051 (https://phabricator.wikimedia.org/T158674) (owner: 10Hashar)
[15:47:57] aude: polishing your change :}
[15:54:17] (:
[15:58:50] (03PS2) 10Hashar: Add script to copy composer require-dev and autoload-dev to mw vendor [integration/jenkins] - 10https://gerrit.wikimedia.org/r/339202 (https://phabricator.wikimedia.org/T158674) (owner: 10Aude)
[15:59:21] aude: addshore https://gerrit.wikimedia.org/r/#/c/339202/ :D
[15:59:23] I think it will work
[15:59:33] looks like it does on my local machine. Gotta find out a way to test it though
[15:59:51] I guess I can depool a Jenkins slave
[15:59:55] apply that change
[16:00:05] then run a faulty job on it
[16:01:29] (03CR) 10Hashar: "Looks like it works on my local machine. I am not sure how to test it properly, so maybe we can harness that new behavior behind a featu" [integration/jenkins] - 10https://gerrit.wikimedia.org/r/339202 (https://phabricator.wikimedia.org/T158674) (owner: 10Aude)
[16:05:34] 06Release-Engineering-Team, 07Wikimedia-log-errors: Error: Couldn't find trailer dictionary - https://phabricator.wikimedia.org/T145772#3049940 (10hashar)
[16:09:58] 06Release-Engineering-Team, 07Wikimedia-log-errors: Error: Couldn't find trailer dictionary - https://phabricator.wikimedia.org/T145772#3049947 (10hashar) Yeah that might be texvc or OCG who knows really :( The reason is wfShellExec() by default does not capture standard error. That ends up being relayed by...
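On the wfShellExec() remark in that last comment, a hedged sketch of the default behaviour and the common workaround; $cmd and $retval are illustrative placeholders, not the actual texvc/OCG call sites:

```php
<?php
// wfShellExec() (MediaWiki's shell wrapper) returns the command's stdout;
// stderr is not captured by default, so it leaks to the parent process --
// for web requests that means the web server's error log, where it shows
// up unattributed, as described in the task above.
$out = wfShellExec( $cmd, $retval );

// Common workaround: fold stderr into stdout at the shell level so the
// error text lands in $out alongside the normal output.
$out = wfShellExec( $cmd . ' 2>&1', $retval );
```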
[16:13:15] hashar: thanks
[16:28:43] aude: will try that tomorrow morning when CI is not so heavily used
[16:28:56] and probably test again tonight
[16:29:00] ok
[16:29:08] I am heading back home *wave*
[16:29:23] as a temporary workaround, I added hamcrest to the wikidata build so we could have a passing build again
[16:39:51] (03PS1) 10Umherirrender: [Truglass] Add npm job [integration/config] - 10https://gerrit.wikimedia.org/r/339435
[16:46:02] (03PS1) 10Umherirrender: [Gamepress] Make unit tests voting [integration/config] - 10https://gerrit.wikimedia.org/r/339436
[17:08:01] (03PS1) 10Arlolra: Run parsoid's timedmediahandler tests as well [integration/config] - 10https://gerrit.wikimedia.org/r/339445
[17:30:48] (03CR) 10Arlolra: [C: 04-1] ""TimedMediaHandler requires the MwEmbedSupport extension."" [integration/config] - 10https://gerrit.wikimedia.org/r/339445 (owner: 10Arlolra)
[17:32:21] (03PS2) 10Arlolra: Run parsoid's timedmediahandler tests as well [integration/config] - 10https://gerrit.wikimedia.org/r/339445
[17:44:08] 10Continuous-Integration-Infrastructure, 06Labs, 10Labs-Infrastructure: Nodepool quota bump - https://phabricator.wikimedia.org/T158320#3050369 (10chasemp) Currently: ```+----------------------+--------------+ | Field | Value | +----------------------+--------------+ | cores...
[17:47:39] 10Gerrit, 103d: Create repo for extension 3d and import into gerrit - https://phabricator.wikimedia.org/T158881#3050391 (10Reedy)
[17:47:45] 10Continuous-Integration-Infrastructure, 06Labs, 10Labs-Infrastructure: Nodepool quota bump - https://phabricator.wikimedia.org/T158320#3050404 (10Andrew) 05Open>03Resolved a:03Andrew done!
[17:47:51] 10Gerrit, 103d: Create repo for extension 3d and import into gerrit - https://phabricator.wikimedia.org/T158881#3050408 (10Reedy)
[17:58:27] 10Gerrit, 103d: Create repo for extension 3d and import into gerrit - https://phabricator.wikimedia.org/T158881#3050434 (10Aklapper) > Repo needs creating on gerrit That might be https://www.mediawiki.org/wiki/Gerrit/New_repositories ?
[18:19:47] thcipriani: about?
[18:20:05] chasemp: sure what's up?
[18:20:18] thcipriani: https://gerrit.wikimedia.org/r/#/c/339463/ via https://phabricator.wikimedia.org/T158320
[18:20:23] I feel like it's ok to merge but hashar is afk
[18:20:26] wanted to give you a heads up
[18:20:33] this does nothing but let nodepool use it's new headroom
[18:20:50] s/it's/its
[18:20:51] chasemp: awesome, yup, hashar will be happy about that one, I think :)
[18:21:17] thcipriani: +1?
[18:21:50] chasemp: done
[18:22:11] is it irony I'm waiting on jenkins now...?
[18:23:05] alanis morissette-style irony, sure
[18:23:46] congrats dude you've dated yourself
[18:23:49] 90's kid confirmed
[18:23:59] :)
[18:24:17] :P
[18:33:41] Is there a shortage of Jessie CI workers?
[18:33:50] https://integration.wikimedia.org/zuul/ looks like lots of *-jessie jobs are stuck in the queued state
[18:34:02] They were just bumped up
[18:34:30] but strange, it seems not a lot are being used.
[18:34:49] The queue looks stuck
[18:34:54] There's only one jessie job in progress now
[18:35:01] chasemp ^^
[18:35:13] thcipriani ^^
[18:35:27] * thcipriani looks
[18:35:37] RoanKattouw: we just upped the available instances and restarted nodepool, which means it's cleaning up after itself and now finally respawning
[18:35:51] I think
[18:35:56] !log 18:29 < chasemp> !log labnodepool1001:~# service nodepool restart
[18:35:59] Logged the message at https://wikitech.wikimedia.org/wiki/Release_Engineering/SAL
[18:36:14] * greg-g just cross-pollinates the SALs
[18:36:17] is there a way to hit two SALs at once?
[18:36:39] copy/paste to two channels :)
[18:36:49] heh
[18:36:51] fwiw
[18:36:53] labnodepool1001:~# nodepool list | egrep 'ci\-' | wc
[18:36:53] 25
[18:37:11] that's right, it's just that 6 are still deleting
[18:37:15] and/or in cleanup
[18:38:59] yeah, logs look clean afaict
[18:39:19] OK so the restart is just slow?
[18:39:27] The queue's been stuck for almost 20 mins
[18:39:29] They have all started working now
[18:39:41] I see a lot of jessie tests running now.
[18:39:54] I'm not sure on how zuul interprets things, but on the nodepool side I see movement
[18:40:03] Oh, I see
[18:40:08] gate-and-submit is moving
[18:40:20] restart + lots of patch sets + stuff in the gate-and-submit queue = perfect storm of slowness
[18:40:21] test is not yet, but it makes sense to prioritize g&s (if it's doing that)
[18:40:36] yeah, that queue goes through before others
[18:40:39] iirc that is actually how it works
[18:40:41] yep
[18:40:44] Cool
[18:48:38] thcipriani: things seem to be settling down
[18:49:16] 10Continuous-Integration-Config, 10Page-Previews, 06Reading-Web-Backlog, 07Browser-Tests, and 2 others: add rake entrypoint and rubocop to ext:Popups - https://phabricator.wikimedia.org/T136285#3050657 (10ovasileva) 05Open>03Resolved a:03ovasileva
[18:49:35] chasemp: yeah seems to be catching up slowly but sanely afaict
[18:49:59] that's in opposition to the usual fast and insane
[18:50:18] :)
[18:52:44] its current behavior is giving me the impression of sanity is all I can say.
[18:53:09] * thcipriani pulls a descartes
[18:55:08] we know jenkins exists as it is slow, so something must there frustrating us
[18:55:17] must be even :)
[18:55:39] :D
[19:32:06] 10Deployment-Systems, 10Scap: Considering adding a --no-touch flag to scap that stops automatic touch of InitialiseSettings.php - https://phabricator.wikimedia.org/T149872#3050880 (10thcipriani) >>! In T149872#3047407, @bd808 wrote: > I honestly haven't gone back to read all of this in depth and remember what...
[19:39:51] 10Continuous-Integration-Infrastructure, 06Labs, 10Labs-Infrastructure: Nodepool quota bump - https://phabricator.wikimedia.org/T158320#3050943 (10hashar) And Chase kindly updated the Nodepool config via https://gerrit.wikimedia.org/r/#/c/339463/ with: max servers: 25 min jessie: 12 min trusty: 5 Thank...
[19:52:47] 10Scap, 06WMF-Legal, 07Documentation, 07Software-Licensing: Scap is lacking a license - https://phabricator.wikimedia.org/T94239#3051069 (10greg) @Liuxinyu970226 A) this vote should only be voted on by those who have contributed code to scap 2) It was just a stawpoll to gauge what people were thinking. It...
[19:55:36] Dear releng people, there's a TypeError from scap in the beta-scap-eqiad console output: https://integration.wikimedia.org/ci/job/beta-scap-eqiad/143661/console
[19:56:06] Seems to be non-critical, and scap is continuing to run, but we shouldn't have errors in the output
[19:57:47] I thought we fixed that bugger...
[19:58:04] RoanKattouw: Or maybe we should have errors to prove people pay attention ;-)
[19:58:30] RainbowSprinkles: Be careful with that line of reasoning or I might turn it back to you for wikimedia-log-errors :P
[19:59:14] Eh, #logspam is a one-way function. I file bugs, nothing comes out ;-)
[20:00:49] RainbowSprinkles: if we had time, we would act as product owner/manager for the #wikimedia-log-errors
[20:01:01] Yeah, but hours/day
[20:01:03] :)
[20:01:07] get other teams to allocate resources to those tasks and push everyone forward
[20:01:09] but yeah
[20:01:17] I am already doing too much
[20:05:33] 10Gerrit, 07Regression: Automatic reviewer assignment does not work for certain repos - https://phabricator.wikimedia.org/T158844#3051110 (10valhallasw) Is this still happening? I pushed a change to ignore broken regexes yesterday, and the error log shows no new errors (apart from a changeset that apparently d...
[20:21:46] 10Gerrit, 07Regression: Automatic reviewer assignment does not work for certain repos - https://phabricator.wikimedia.org/T158844#3051151 (10Paladox) There's been no gerrit config change in at least a week+, see https://github.com/wikimedia/puppet/commits/production/modules/gerrit
[20:24:28] blerg. that scap error is not one single error, it's just what happens when (a) there is some error and (b) makepickle fails to send that error to logstash. Since we don't have a logstash when we test, we don't see the makepickle errors.
[20:25:20] (03CR) 10Arlolra: "This is working now," [integration/config] - 10https://gerrit.wikimedia.org/r/339445 (owner: 10Arlolra)
[20:28:38] hrm, error started within the past hour: https://integration.wikimedia.org/ci/job/beta-scap-eqiad/143659/console
[20:29:20] and nothing changed with scap so... and now it's gone https://integration.wikimedia.org/ci/job/beta-scap-eqiad/143663/console
[20:29:24] well that bodes well.
[20:32:22] (03CR) 10Hashar: "See also some discussion on the mediawiki/core patch that added the autoload-dev https://gerrit.wikimedia.org/r/#/c/335814/1/composer.json" [integration/jenkins] - 10https://gerrit.wikimedia.org/r/339202 (https://phabricator.wikimedia.org/T158674) (owner: 10Aude)
[20:35:22] thcipriani: which error?
[20:37:29] (03CR) 10Hashar: [C: 032] "\o/ Congratulations!" [integration/config] - 10https://gerrit.wikimedia.org/r/339445 (owner: 10Arlolra)
[20:38:45] (03Merged) 10jenkins-bot: Run parsoid's timedmediahandler tests as well [integration/config] - 10https://gerrit.wikimedia.org/r/339445 (owner: 10Arlolra)
[20:40:31] hashar it looks like BlueOcean is about to get a stable release in jenkins :)
[20:40:48] that's according to what they're calling their latest version, "the final countdown".
[20:41:55] i can now create pipelines through blue ocean :), loving the ui :)
[20:43:59] hashar: the one from https://integration.wikimedia.org/ci/job/beta-scap-eqiad/143661/console
[20:45:35] the TypeError thing is part of the logstash logger handler failing, so it seems to show up from time to time, but it's actually hiding something. I'm going to try to track that down... somehow...
[20:46:10] thcipriani: I am pretty sure we had that bug before
[20:46:19] eg the 'msg' being hand-crafted
[20:46:26] yeah
[20:46:28] and missing some arguments for string.format() to act on
[20:46:51] Django shows the local variables in the stacktraces, which is rather helpful :]
[20:46:52] "TypeError: not enough arguments for format string" -- there's a % in the message
[20:47:12] oh so some arguments would not escape % ?
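To make the failure mode concrete, here is a minimal standalone reproduction with Python's stdlib logging; this is not scap's actual handler code, and the message text and host name are made up:

```python
import logging

logging.basicConfig()
log = logging.getLogger('demo')

# A hand-crafted msg that already contains a literal '%s' (e.g. data that
# was concatenated in rather than passed as an argument), plus one real
# format arg: the handler's eventual `record.msg % record.args` now has two
# conversion specifiers but only one argument.
log.error('rsync to %s failed (free space below 22.22%s)', 'deployment-host')

# logging catches the resulting
#   TypeError: not enough arguments for format string
# inside the handler, prints the traceback to stderr via handleError(), and
# execution continues -- matching the non-fatal noise in the Jenkins console.
```

The usual fix is to escape literal percent signs as %% (or pass the data as args instead of pre-formatting), plus, as discussed next, catching the exception, showing msg and args, and re-raising so the original message the failure is hiding survives.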
[20:47:18] yup, that specific problem has come up at least twice. But it's hiding some other message, which makes it hard to track down the root cause.
[20:47:41] one way would be to catch the exception
[20:47:45] show the msg and args
[20:47:46] and re-raise
[20:48:11] yeah, seems like the right thing to do for now since the root cause of the recent error is unknown and evidently went away
[20:55:46] and I still don't get arcanist
[20:56:10] when I "arc land" it really just pushes to git, doesn't it?
[21:26:38] so I have missed looking at the job configuration to send emails :/ [21:26:47] for the autoloader yeah potentially figured it out [21:27:37] Zppix: https://gerrit.wikimedia.org/r/#/c/339202/ has some summary [21:27:40] hashar dont worry about the job config thing honestly, it probably doesnt exist and if it does its not really the top of the list atm we have alot of deadlines [21:27:54] which is more or less a summary of https://phabricator.wikimedia.org/T158674#3049720 [21:28:31] if you are looking for things to hack on, maybe there are a few ideas that might be worth your time to spend on them [21:28:51] one I would love to see, is move most of the logic that runs mediawiki tests OUT of Jenkins jobs and integration/jenkins scripts [21:28:56] something like a mediawiki testrunner [21:29:09] (03CR) 10Zppix: "Are we purposely leaving out an else statement in composer-copy-mw-dev.js?" [integration/jenkins] - 10https://gerrit.wikimedia.org/r/339202 (https://phabricator.wikimedia.org/T158674) (owner: 10Aude) [21:29:24] that given parameters would clone proper repos, run composer or use vendor.git, inject $wg settings appropriate [21:29:35] and runs bunch of tests commands, gather results and give a 1 or 0 [21:29:54] maybe I should just start something basic [21:30:45] hashar honestly i try to stay away from the important code that would cause catatrophic failure [21:30:54] hashar considering i tend to screw things up xD [21:31:50] (03CR) 10Hashar: "By "else statement" do you mean the catch when there is no composer.json and just:" [integration/jenkins] - 10https://gerrit.wikimedia.org/r/339202 (https://phabricator.wikimedia.org/T158674) (owner: 10Aude) [21:32:04] hashar i mean an actual ELSE statment [21:32:15] usually if statements are if blah blah else blah [21:32:16] Zppix: well screwing thing sup is actually a useful skill! [21:32:25] we can give you code/software to play with [21:32:29] you would screw it up [21:32:42] and we both make the software more resilient to prevent future screw up [21:32:43] hashar trust me gerrit already gives me stuff to screw up [21:32:50] ;D [21:32:54] heck being in irc gives me stuff [21:33:24] but https://gerrit.wikimedia.org/r/#/c/339202/2/tools/composer-copy-mw-dev.js I dont see an "else" ! [21:34:22] ohhhh leaving out [21:34:23] bah [21:34:26] I had mixed up [21:34:28] (03CR) 10Zppix: "as mentioned in IRC i mean an else statement such as" [integration/jenkins] - 10https://gerrit.wikimedia.org/r/339202 (https://phabricator.wikimedia.org/T158674) (owner: 10Aude) [21:34:54] (03CR) 10Zppix: "correction: js" [integration/jenkins] - 10https://gerrit.wikimedia.org/r/339202 (https://phabricator.wikimedia.org/T158674) (owner: 10Aude) [21:35:04] 10Deployment-Systems, 10Scap: Considering adding a --no-touch flag to scap that stops automatic touch of InitialiseSettings.php - https://phabricator.wikimedia.org/T149872#3051328 (10demon) Tbh, I don't think it's that big of a deal to invalidate the cache, we shouldn't be worried about it all that much (it on... [21:35:23] I dont know whether an else is needed there. 
[21:36:11] hashar well if something were to fail, an else statement would stop the code from just erroring out, possibly causing issues with other things
[21:37:48] the idea of that code is to copy the autoload-dev section from mediawiki/core to mediawiki/vendor
[21:37:59] then if mediawiki/core does not have such a section, there is nothing to copy
[21:38:05] eg
[21:38:17] else { /* there is nothing to copy */ }
[21:38:33] so that if is a guard
[21:38:49] hashar js can do some weird stuff though
[21:39:30] well
[21:39:52] what it does is really prevent an error when mwComposer['autoload-dev'] is not set
[21:41:25] meh, i just prefer if statements to always have an else
[21:42:25] well it is not necessarily needed
[21:42:35] a common thing is a feature flag
[21:42:48] you would have a setting such as $wgEnableFeature = false;
[21:42:54] and the code gets something like:
[21:43:06] if ( $wgEnableFeature ) { $this->doFeature(); }
[21:43:19] when the feature is not enabled, there is nothing to do
[21:43:37] so surely one could comment about it: else { /* feature is disabled, we don't do anything */ }
[21:43:49] but that is already implied when you read: if $wgEnableFeature
[21:43:57] so it depends :]
[21:47:41] hashar i may add a harmless else statement that will just exit the script properly or whatever would be appropriate
[21:47:56] ^ i can type in english 1% of the time
[21:49:56] nah, we don't want to exit :]
[21:50:01] if you look at https://gerrit.wikimedia.org/r/#/c/339202/2/tools/composer-copy-mw-dev.js
[21:50:11] Zppix: hi
[21:50:14] hashar read the whole message there
[21:50:18] aude stranger danger
[21:50:37] on line 9 we copy the 'require-dev' section from mediawiki to vendor
[21:50:44] if the script is used on patches with older mediawiki (e.g. a release branch), then autoload-dev won't be there
[21:50:53] then we attempt to copy the autoload-dev one if there is one
[21:50:57] it's not an error though....
[21:51:08] but if there is no autoload-dev, we still want the 'require-dev' one to be copied. So we should not exit
[21:51:16] aude i know, i'm saying having an else to properly end the if statement is a better idea
[21:51:28] hashar i realise that now after saying that
[21:53:30] i'm not sure it's a 'fixme' situation in that case, though maybe we could log something
[21:53:51] aude that was just an example
[21:54:10] ok
[21:54:18] what should it log? i'm already on the changeset, i can do it
[22:00:22] (03PS3) 10Zppix: Add script to copy composer require-dev and autoload-dev to mw vendor [integration/jenkins] - 10https://gerrit.wikimedia.org/r/339202 (https://phabricator.wikimedia.org/T158674) (owner: 10Aude)
[22:00:38] ^ is that okay?
[22:05:38] should the else be on line 13? (inside the try/catch)
[22:06:01] good catch
[22:06:09] pun not intended
[22:06:12] and maybe say autoload-dev not found
[22:06:24] and not copied (or something)
[22:06:42] ok
[22:07:03] it couldn't be in the catch, could it
[22:07:08] considering the catch needs to be closed
[22:07:11] no?
[22:07:44] hashar do you know?
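For readers following the review, the shape being converged on looks roughly like this. A sketch, not the exact contents of tools/composer-copy-mw-dev.js: mwComposer and the log message are taken from the discussion, while vendorComposer is an assumed name.

```js
// Sketch of the guard discussed above -- illustrative, not the real file.
// mwComposer: parsed composer.json from mediawiki/core
// vendorComposer: parsed composer.json from mediawiki/vendor (assumed name)

// require-dev is copied unconditionally...
vendorComposer['require-dev'] = mwComposer['require-dev'];

// ...while autoload-dev only exists on newer branches, so it is guarded.
// The else is purely informational: there is nothing to copy, and the
// script must NOT exit, or require-dev would never reach vendor on old
// release branches.
if ( mwComposer['autoload-dev'] ) {
	vendorComposer['autoload-dev'] = mwComposer['autoload-dev'];
} else {
	console.log( 'autoload-dev not found in mediawiki/core. Not copying to vendor' );
}
```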
[22:07:56] let's see what jenkins says about ps3 first aude
[22:08:12] it passed
[22:08:13] hmm
[22:08:24] the code is not run
[22:08:27] oh
[22:08:32] hashar can you test it
[22:08:39] i have no way of testing it
[22:09:00] oh you could
[22:09:08] clone mediawiki/core and mediawiki/vendor
[22:09:41] hashar, actually i cannot because my pc is a little cuckoo right now (i'm barely able to function on it as is)
[22:09:45] then do something like: node tools/composer-copy-mw-dev.js /path/to/mediawiki/core/composer.json /path/to/mediawiki/vendor/composer.json
[22:10:00] in tools?
[22:10:04] * hashar upgrades Zppix's PC
[22:10:08] oh
[22:10:22] tools is a directory in integration/jenkins.git
[22:10:36] which you probably already have since you have sent a new patchset ?
[22:10:44] hashar no, i just edited using the gerrit UI
[22:10:49] OHHH
[22:10:56] YOU ARE SUCH A HACKER!!!!
[22:11:01] how do you do that ?
[22:11:02] in fact that's how i've been doing all my patchsets
[22:11:11] (sorry I am kidding)
[22:11:17] click on the file name then click the blue paper thingy with a pencil
[22:11:28] I clone everything locally
[22:11:35] * Zppix upgrades hashar to rank: Newb
[22:11:37] and use my heavily customized text editor :]
[22:11:55] i clone things like integration-config, puppet and core
[22:12:00] and my own 2 repos
[22:12:04] which does syntax highlighting with the colors I want, auto-completion of code, runs lint/tests in the background, etc
[22:12:22] gerrit does syntax highlighting, which can be customised in prefs
[22:12:35] for some langs
[22:12:40] anyway
[22:12:49] the else and log should be moved up
[22:12:55] to?
[22:12:55] after line 13
[22:13:18] (03PS4) 10Zppix: Add script to copy composer require-dev and autoload-dev to mw vendor [integration/jenkins] - 10https://gerrit.wikimedia.org/r/339202 (https://phabricator.wikimedia.org/T158674) (owner: 10Aude)
[22:13:20] and I would change it to: "autoload-dev not found in mediawiki/core. Not copying to vendor"
[22:13:23] done
[22:13:24] ;]
[22:13:33] I am going to read a book and sleep
[22:13:38] will test that all tomorrow
[22:13:45] and maybe deploy it for real
[22:13:55] we will see how fast I am at testing it on the CI infra
[22:13:57] (03PS5) 10Zppix: Add script to copy composer require-dev and autoload-dev to mw vendor [integration/jenkins] - 10https://gerrit.wikimedia.org/r/339202 (https://phabricator.wikimedia.org/T158674) (owner: 10Aude)
[22:15:14] * aude can take a look again tomorrow
[22:15:22] aude all i did was what hashar said
[22:15:36] ok
[22:15:52] I think it will work aude
[22:15:58] but I wanna test it with a few weird cases
[22:16:01] sounds good
[22:16:08] like a different branch and some extensions known to be full of exceptions
[22:16:17] yeah
[22:16:31] ideally I want to get rid of that problem before the week-end
[22:16:31] hashar we should create a releng dummy repo that has multiple jobs set to non-voting
[22:16:47] hashar did you ever find a good on-demand service you were trying to find the other day?
[22:16:58] paladox: nah, we gave up
[22:17:12] oh
[22:17:12] paladox: will look again later I guess
[22:17:20] ok :)
[22:17:24] Zppix: that is a good idea :]
[22:17:47] Zppix: but jobs are testing a repo, so the dummy repo would need a lot of material included
[22:17:47] hashar we could just put in a lotta code that we can steal i mean borrow from other repos
[22:18:00] so it is easier to just use the real repos
[22:18:10] what we have in zuul is an 'experimental' pipeline
[22:18:16] hashar not really, 'cause then we don't have to fluff up the commit history
[22:18:19] jobs in it are only triggered when one comments 'check experimental'
[22:29:14] * hashar waves
[22:33:57] 10Gerrit, 06Release-Engineering-Team, 06Operations: Make sure replying to emails in gerrit 2.14 works - https://phabricator.wikimedia.org/T158915#3051483 (10Paladox)
[22:34:03] 10Gerrit, 06Release-Engineering-Team, 06Operations: Make sure replying to emails in gerrit 2.14 works - https://phabricator.wikimedia.org/T158915#3051495 (10Paladox) p:05Triage>03Low
[22:34:21] 10Gerrit, 06Release-Engineering-Team: Update gerrit to 2.14 - https://phabricator.wikimedia.org/T156120#3051523 (10Paladox)
[22:34:23] 10Gerrit, 06Release-Engineering-Team, 06Operations: Make sure replying to emails in gerrit 2.14 works - https://phabricator.wikimedia.org/T158915#3051483 (10Paladox)
[22:34:31] 10Gerrit, 06Release-Engineering-Team, 06Operations: Make sure replying to emails in gerrit 2.14 works - https://phabricator.wikimedia.org/T158915#3051483 (10Paladox) p:05Low>03Lowest
[22:34:44] paladox we get it, we get it, you want gerrit 2.14 xD
[22:38:17] Zppix no, i'm just preparing, as there are a lot of changes.
[22:38:34] We need to make sure it works, otherwise it could break.
[22:38:51] paladox i know, i was kidding, gosh why is everything i say taken so seriously :/
[22:39:00] oh sorry.
[22:39:06] it's okay
[22:39:31] i'm ticked off because my PC decided it will power off whenever it feels like it
[22:39:47] lol
[22:39:57] My pc did that before, so i got a replacement
[22:40:14] Zppix that means your pc is either broken or you have some battery-saving mode on.
[22:41:16] 10Gerrit, 06Release-Engineering-Team, 06Operations: Make sure replying to emails in gerrit 2.14 works - https://phabricator.wikimedia.org/T158915#3051542 (10demon) It won't work without config/setup on our end, no. And I think we should upgrade and //then// look into using it, not work on it beforehand (we d...
[22:41:46] 10Gerrit, 06Release-Engineering-Team, 06Operations: Make sure replying to emails in gerrit 2.14 works - https://phabricator.wikimedia.org/T158915#3051558 (10Paladox) ok.
[22:45:13] paladox considering it's a desktop, i don't think battery-saving mode matters
[22:45:33] Zppix, windows these days has it turned on by default.
[22:46:38] Zppix: Did you forget to feed the fairies inside your PC?
[22:46:46] Computers won't work right if the fairies are unhappy
[22:46:55] RainbowSprinkles my change allowing users to create tags and delete them through the web ui was merged today :)
[22:47:12] Fairies?
[22:47:51] paladox i am using windows 7 pro
[22:48:01] Yeah, fairies like these http://images.photowall.com/products/45827/fairies-dancing.jpg
[22:48:02] lol why not upgrade to windows 10 pro?
[22:48:07] lolololol
[22:48:11] Disney films
[22:49:46] RainbowSprinkles i'm guessing you're watching princess films.
[22:49:59] Not right this minute, but sounds like a good idea for after work :D
[22:50:21] * RainbowSprinkles scrolls through his media center for Disney movies
[22:50:54] lol, you have a media center for disney movies?
[22:51:12] isn't there an app for that?
[22:51:14] that you pay every month for access to disney films
[22:53:16] 10Scap, 06WMF-Legal, 07Documentation, 07Software-Licensing: Scap is lacking a license - https://phabricator.wikimedia.org/T94239#3051575 (10mmodell) It looks to me like we can safely choose GPLv3 or Apache 2.0, or both. I will defer to WMF Legal for a final say on that. @Slaporte: Given the poll results...
[23:05:06] Not just for Disney movies
[23:05:09] Lots of other movies
[23:05:14] (I'm kind of a film dork)