[07:21:19] Reedy: which of the passwordblacklist patches should I merge?
[07:21:27] Not sure :)
[07:22:01] * Reedy looks at execution time of the tests
[07:22:46] Looks like they tell us nothing, presumably due to host load
[07:27:25] 0.001 -> 0.00001 would drop the false positive probability down to... 1 in 100K?
[07:28:09] Question is whether we just go for 1M
[07:28:19] * Reedy makes a 3rd comparison patch
[07:35:57] legoktm: Part of me says we just go for 1 in 1M false positive
[07:36:04] what's the cost?
[07:36:10] perf / disk space wise
[07:36:35] 240K currently for 1 in 1000
[07:36:42] 476K for 1 in 1M
[07:39:19] * Reedy nukes the vendor folder
[07:43:55] PHP 7.2.10
[07:44:24] legoktm: Negligible difference in time between current and 1M
[07:44:27] 0.02056884765625
[07:44:32] 0.019856929779053
[07:44:40] That's for the whole setup, loading from disk, and just checking for 1 password
[07:44:59] 1M is apparently quicker, but I don't quite believe it
[07:47:49] Ok, that's more believable. Current 0.013-0.014
[07:48:49] 1M is 0.02 ish
[07:50:33] Noting this isn't on the most performant VM in the world either
[07:51:06] Looks like when things are warmed up, it can do them in 0.006-7
[07:51:25] Which is the same for the current too when warmed up
[07:52:57] Now that we don't require older PHP support either, we might be able to get further gains later from using a different bloom filter implementation
[08:00:41] <_joe_> sorry, what are you trying to optimize?
[08:00:55] We're not trying to optimise anything
[08:00:56] <_joe_> checking bad passwords when people change theirs?
[08:01:21] We're seeing what the perf penalty is using 1 in 1M false positive rather than 1 in 1K
[08:01:52] <_joe_> when do you want to use this list? on every login or just at password changes?
[08:02:13] <_joe_> if the latter is the case, unless it takes 10 seconds, I'd always go with the more accurate result
[08:02:42] I think it's potentially run any time you enter your password
[08:02:44] So it does include logins
[08:02:59] <_joe_> ok so keeping it within 1s or so is advisable
[08:03:00] But we're talking < 0.02s
[08:03:07] Even bumping it up to 1 in 1M
[08:03:07] <_joe_> yeah, that's irrelevant
[08:03:21] Aye, it was mostly sanity checking that it wasn't going to go crazy :)
[08:03:32] Plus the machine I'm running it on is hardly comparable perf-wise to prod
[08:03:36] legoktm: Go with 1 in 1M :P
[08:03:49] <_joe_> Reedy: also that :P
[08:04:14] Relative perf has some basic usefulness
[08:05:13] Bleugh, I accidentally clicked CR+2 in gerrit
[08:05:13] :D
[08:12:54] SHA384 is not supported by your openssl extension, could not verify the phar file integrity
[08:12:56] gj composer
[08:18:22] Reedy: y u self-merge
[08:18:30] I didn't do it on purpose
[08:18:38] I accidentally clicked the big blue button
[08:18:46] And by the time I realised, jenkins was too fast
[08:19:03] words that no one has ever said before: "jenkins was too fast"
[08:19:18] ikr?
[08:19:32] took a whole 46 seconds
[08:37:51] Reedy: there's no train this week
[08:37:57] pfft
[08:43:25] Reedy: also https://gerrit.wikimedia.org/r/q/project:mediawiki%252Flibs%252FPasswordBlacklist+status:open
[13:40:26] legoktm: https://github.com/ralouphie/getallheaders/pull/3 and https://github.com/guzzle/psr7/pull/233
[13:40:27] :P
[13:53:00] https://github.com/guzzle/promises/pull/96 and https://github.com/guzzle/guzzle/pull/2229 too :P
[14:10:35] legoktm: Got a reference for "you don't need composer.json in the resultant vendor"? IIRC you had one before
[15:48:50] Reedy: thank you for the Guzzle review. It all looks good to me, but I'd like to give legoktm a chance to look over it, as my level of familiarity with our composer usage is - um - formative.
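[Editor's note: the 240K vs 476K sizes quoted in the discussion above are consistent with the standard Bloom filter sizing formula, where the required bits per stored item is -ln(p) / (ln 2)^2 for false-positive rate p. A quick illustrative sketch in Python (not the library's actual PHP implementation; the function name here is made up):]

```python
import math

def bloom_bits_per_item(p: float) -> float:
    """Bits per stored item for an optimally-sized Bloom filter with
    false-positive rate p, using the standard formula m/n = -ln(p) / (ln 2)^2."""
    return -math.log(p) / (math.log(2) ** 2)

for p in (1e-3, 1e-6):
    print(f"p = {p:g}: {bloom_bits_per_item(p):.1f} bits per item")

# Since ln(1e-6) is exactly twice ln(1e-3), the 1-in-1M filter needs
# exactly double the bits of the 1-in-1K one, which matches the
# roughly 240K -> 476K size jump reported in the log.
```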
[16:27:14] Would anyone care to give https://gerrit.wikimedia.org/r/c/operations/mediawiki-config/+/477581 a quick review? Adding an entry to $wmgMonologChannels.
[16:36:31] anomie: hrm, would have been happy to, but it doesn't look like I can +2 that repository
[16:39:15] bpirkle: The operations/mediawiki-config repository is special anyway; anything +2ed must be immediately deployed or bots in #wikimedia-operations start complaining. Reviewers who aren't prepared to deploy it typically give a +1 comment and say that it looks good to deploy.
[16:45:34] anomie: ok, thanks for the info. I've done that much, in case it helps you.
[16:45:41] bpirkle: Thanks
[16:48:01] hi, who would be the person to steer me in the right direction to move mediawiki to the new logging infrastructure? context is https://phabricator.wikimedia.org/T211124
[16:52:10] Is there a WIP / proposed patch for unbreaking WikibaseMediaInfo with the new SlotRoleHandler code, or should I just revert it?
[16:52:24] godog: bd808 did some work on MediaWiki's logging a few years ago. I don't know who has been handling it since.
[16:52:50] James_F: This is the first I've heard of any breakage. Details?
[16:53:10] Ah, I see an email 8 minutes ago.
[16:53:18] anomie: https://phabricator.wikimedia.org/T211052 – SRH introduced new requirements that obviously we didn't meet.
[16:53:31] anomie: thanks! I'll nag^Wping him
[16:53:36] Given that the entire short-term point of the code was to make WBMI work better, this feels like a failure. :-)
[16:54:39] James_F: Daniel is out until Thursday, so unless anomie can do the unbreaking it might be better to revert
[16:55:06] I'd really rather not revert it, I know it was a lot of work.
[16:55:27] * anomie will roll eyes hard if it gets rolled back, because in the meeting yesterday SDC people were pushing for it to be merged...
[16:55:34] But it means we've not been able to do any QA, design work, or anything else for the past 24 hours.
[16:55:56] anomie: Maybe someone should have written an integration test.
[17:00:47] James_F: No one from the SDC side could figure out how to work with the new code? I think you need to call MediaWikiServices::getSlotRoleRegistry()->defineRole( 'mediainfo', ... ) to define the mediainfo role. The "..." is a factory function for a SlotRoleHandler object, either a mediainfo-specific subclass or SlotRoleHandler itself if __construct() has enough flexibility to do the right thing.
[17:02:48] anomie: OK, but the only human on Earth who actually understands how WBMI sub-classes some of Wikibase is Daniel.
[17:03:09] I can write a sub-class, and it might even work.
[17:03:15] But a heads-up would have been nice.
[17:39:13] James_F: apologies on behalf of the Core Platform Team. We should indeed have given you a heads-up. The patch was supposed to be a net benefit, of course, not a regression. Do you have the information you need from anomie to make a fix? Anything we can do to help?
[17:40:15] CindyCicaleseWMF: I'm still in the rush of morning meetings but I *hopefully* will be able to (thanks anomie!). Will update if I get stuck/totally lost.
[20:16:33] bpirkle, Reedy: we should create an unpolyfill library for getallheaders()
[20:17:36] I'll do a more thorough look tonight
[20:23:00] legoktm: thank you
[22:42:29] bpirkle thanks!
[22:43:16] paladox: sure thing, thanks for the bump; that one had fallen off my radar
[22:43:24] :)
[22:53:04] anomie hi, around for some questions on converting to db-select? :)
[23:38:54] anomie could you review / merge https://gerrit.wikimedia.org/r/#/c/mediawiki/extensions/ContributionScores/+/477701/ please? :)
[23:39:00] I've tested locally and seems to work