[06:49:58] heads up, after 3 years of testing, I am going to enable https://phabricator.wikimedia.org/T172489
[06:50:06] 2.5 years of those are accidental
[06:51:06] <_joe_> in what sense accidental?
[06:51:13] <_joe_> we just forgot to enable it?
[06:54:57] kinda yes
[06:55:18] we were waiting to have the check around for some time
[06:55:25] to make it page
[06:55:40] and once the check was there, there was no rush to make it page
[06:56:05] the check was already useful by itself to notice an issue
[06:57:25] so it was a question of "what was done was good enough, so priority was lowered on top of other things"
[07:16:49] in git, how do I delete the most recent commit without touching the unstaged changes?
[07:17:00] I tried git reset but it doesn't do anything...
[07:49:53] _joe_: it's morning btw, and I believe we are both here :)
[07:58:12] the new checks are already making advances, a misconfiguration on 2 servers was detected
[08:02:51] <_joe_> addshore: gimme a few minutes
[08:02:59] ack!
[08:07:27] <_joe_> addshore: ok so, what memory limit would you need?
[08:07:52] Good question, but I'm pretty sure double should be fine
[08:08:00] even + 50% should be fine
[08:08:05] <_joe_> ok, modifying mwdebug1002
[08:09:20] <_joe_> for some reason I have a very bad connection to eqiad right now, please be patient
[08:09:28] no worries, I'm here all day :)
[08:10:13] <_joe_> addshore: so, as I remembered
[08:10:21] <_joe_> we do set a memory limit in php.ini
[08:10:29] <_joe_> but we override it in mediawiki-config
[08:11:34] <_joe_> sorry I need to fix this network
[08:11:37] so in theory I could actually override said memory limit too on a debug host with the correct change?
[08:11:38] np :)
[08:12:12] <_joe_> yes, but don't even bother, we can do it by hand and run scap pull later to fix it
[08:14:01] <_joe_> doing so now
[08:15:21] <_joe_> addshore: do your tests on mwdebug1002
[08:16:21] let me see if it completes (may also need a timeout increase if possible)
[08:17:14] yup
[08:17:14] "info": "[34894448-2bcd-496c-b3c1-8dd8ce664c97] Caught exception of type WMFTimeoutException",
[08:17:33] <_joe_> uhm
[08:19:02] <_joe_> done
[08:19:13] *runs again
[08:19:13] <_joe_> in ~ 10 sec it should take effect
[08:19:20] aah! :P I went too quick!
[08:20:36] <_joe_> uhm you still got a WMFTimeoutException after 60 seconds?
[08:20:42] it's still running
[08:20:47] I got a result! :D
[08:20:59] I'm going to run it 3 times or so to get a few different profiles
[08:21:18] <_joe_> ok, the result is "we need to put limits on this kind of requests, and poolcounter too"
[08:21:29] <_joe_> I can tell you without looking at the profile :D
[08:21:31] peak memory usage 707,431,680 bytes
[08:21:41] <_joe_> addshore: the time it took though
[08:21:44] 91,486,100 µs
[08:21:45] yup
[08:21:47] <_joe_> that's wildly unacceptable
[08:22:13] <_joe_> tbh anything longer than 2 seconds should be rate-limited, but things that take 90s should just be forbidden
[08:22:47] this is by far our most used api too
[08:22:51] (our being wikibase)
[08:23:04] <_joe_> happy to hear that!
[08:23:30] <_joe_> lmk when I can revert the hacks
[08:23:39] will do, just running it a third time now
[08:26:11] _joe_: feel free to remove the hacks
[08:26:18] <_joe_> ack!
[08:27:55] for reference, it is this ticket https://phabricator.wikimedia.org/T249587#6115380
[08:29:39] I want to try and create a situation where I can profile that exact request locally, but I fear that might be a little too much work :P
[08:32:56] <_joe_> "Snak" you're missing a c there
[08:33:05] <_joe_> there, I solved the problem
[08:34:14] :D
[10:13:11] XioNoX: what about `git reset HEAD~1`
[10:14:31] jbond42: thanks, volans helped me, the reset wasn't doing what I wanted
[10:15:40] ack cool
[10:29:59] * volans hides
[10:32:55] yet another instance of "better call volans"
[10:46:12] "git volans"
[13:49:54] volans (and anyone else): got a question re static methods. I created one in the ferm-status script https://github.com/wikimedia/puppet/blob/production/modules/ferm/files/ferm_status.py#L188-L198 and I call it within the object using self (https://github.com/wikimedia/puppet/blob/production/modules/ferm/files/ferm_status.py#L219). Should I in fact call it with `Rule._resolve_port(port)` or even just
[13:50:00] `_resolve_port(port)`, or should it even be ...
[13:50:03] ... a staticmethod?
[13:56:35] we just deployed the new paging alerts fleet-wide
[13:56:50] nothing should happen, but always assuming the worst
[13:57:28] jbond42: so, it works both ways of calling it (self. and ClassName.). If it doesn't use self within the body, multiple linters would technically complain with no-self-use
[13:57:35] hinting that it should be a static method
[13:57:40] or a module function
[13:57:43] that's up to you
[13:58:17] the fact that it's "private" is not a problem or particularly weird either
[13:59:26] so tl;dr strictly it should be called with the Class but it doesn't really matter if you call it with self
[14:00:41] yes, ClassName is the canonical one I'd say, but yeah both work.
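For anyone following along, `git reset HEAD~1` answers the earlier question about deleting the most recent commit without touching uncommitted changes. A throwaway-repo sketch (the repo layout and file names here are made up for illustration) of why the default `--mixed` reset does what was asked, where `--hard` would not:

```shell
#!/bin/sh
# Sketch: drop the most recent commit while keeping working-tree edits.
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m "first"
echo "committed" > file.txt
git add file.txt
git -c user.email=a@b -c user.name=t commit -q -m "second"
echo "unstaged edit" >> file.txt   # an uncommitted change we want to keep

# git reset defaults to --mixed: it moves HEAD back one commit and unstages
# its changes, but leaves the working tree alone. (--hard would also throw
# away file.txt's edits, which is exactly what was NOT wanted.)
git reset -q HEAD~1

git log --oneline   # only "first" remains
cat file.txt        # both lines survive in the working tree
```

A bare `git reset` with no commit argument just resets the index to the current HEAD, which explains the "it doesn't do anything" observation: the `HEAD~1` is the part that actually discards the commit.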
Ofc inheritance rules apply
[14:00:51] ack thanks
[14:00:55] that might make things less obvious, but not in your case
[14:01:53] * bd808 now wants to know if volans' advice was to use `git stash` or something else
[14:02:58] bd808: if you're referring to git advice, in that specific case yes, stash was involved :D
[14:12:42] I may have, at one particularly ugly point in refactoring MediaWiki-Vagrant stuff, had a stash queue ~15 entries deep
[14:14:27] lol
[15:38:00] as an FYI, from today all the stat100x hosts run the same config and have the same access rules
[15:38:13] I have updated https://wikitech.wikimedia.org/wiki/Analytics/Data_access
[15:38:24] hopefully fewer POSIX groups and less complexity
[15:39:12] more work still needs to be done but the bulk of it should be completed, so people on SRE Clinic duty should hate analytics less from today :D
[19:31:38] apergos: is there a way to manually download the dumps referenced on https://phabricator.wikimedia.org/T221917?
[19:32:37] yes, the link is right there
[19:33:27] https://phabricator.wikimedia.org/T221917?#5809286
[19:34:10] if you find that's too slow, go to one of the mirrors, the path from other/ will be the same of course
[22:56:37] apergos: so sorry I missed that, thanks!
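The staticmethod advice above (self. and ClassName. both work, linters flag no-self-use, and inheritance is where the two spellings diverge) can be sketched with a toy class. The `Rule` name and port mapping below are hypothetical stand-ins for illustration, not the actual ferm_status.py code:

```python
class Rule:
    @staticmethod
    def _resolve_port(port):
        """Toy mapping from a service name to a port number."""
        known = {"ssh": 22, "http": 80}
        return known.get(port, port)

    def normalise(self, port):
        # For a staticmethod, self._resolve_port(...) and
        # Rule._resolve_port(...) reach the same function; the self.
        # spelling is what linters would flag as no-self-use if the
        # whole method never used self.
        return self._resolve_port(port)


class SubRule(Rule):
    # This is the inheritance caveat: calls via self. dispatch through
    # the instance's class, so inherited methods pick up this override,
    # whereas a hardcoded Rule._resolve_port(...) would not.
    @staticmethod
    def _resolve_port(port):
        return 9999


print(Rule().normalise("ssh"))      # 22, via self._resolve_port
print(Rule._resolve_port("http"))   # 80, explicit class call
print(SubRule().normalise("ssh"))   # 9999, self. picks up the override
```

So "strictly call it with the Class" is the canonical style, but calling via self. is what you want if subclasses should be able to override the helper.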