[22:07:31] I have a wiki where the job queue occasionally fills up with cirrusSearchElasticaWrite jobs faster than the job runner can process them… is there a way to hook the job queue backlog into the maxlag mechanism, or some other way to make well-behaved bots slow down automatically when the job queue is full?
[22:08:00] (or should I maybe try increasing $wgJobRunRate, which is currently set to 0? does that usually work for people?)
[22:14:21] how is the queue full? if it's backed by the database, there's no (virtual) limit. If it's backed by Redis, well, it's up to the Redis configuration
[22:15:42] $wgJobRunRate should be 0 if you're running jobs from a background runner or periodically. I wouldn't touch that
[22:16:58] “full” just in the sense of there being many jobs in it, sorry ^^
[22:17:10] not reaching any limit, but users are complaining that search results are outdated
[22:17:29] and I work around it by running runJobs in the shell, in addition to the job runner, until the numbers are reasonable again
[22:27:43] well, there's not much you can do. The only thing a bot can check is the replication lag, and there's no equivalent signal for a large job queue. The statistics API does return the approximate number of pending jobs, though; a bot could query it and back off if the number gets too high
[22:27:47] https://www.mediawiki.org/wiki/Special:ApiSandbox#action=query&format=json&meta=siteinfo&formatversion=2&siprop=statistics
[22:28:39] Otherwise, tell the bot operator to run at a slower rate
[22:35:27] actually, it should be possible to hook this into ApiMaxLagInfo – it’s even mentioned in the hook documentation https://gerrit.wikimedia.org/g/mediawiki/core/+/2c025ead5ff12bd507773392e5748fce161cfe46/includes/api/Hook/ApiMaxLagInfoHook.php#23
[22:35:43] though AFAICT there’s no code behind it, it’s just something Anomie suggested might make sense ^^ https://gerrit.wikimedia.org/r/c/mediawiki/core/+/436543/comment/48c6025a_3aceecfd/
[22:35:48] I’ll see if I can wire that together later
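
(Editor's note: a minimal sketch of the bot-side backoff suggested at 22:27:43, polling the statistics API and pausing while the pending-job count is high. The API URL and the 10000-job threshold are placeholders, not values from the conversation.)

    <?php
    // Sketch (untested): back off based on the approximate pending-job count
    // returned by action=query&meta=siteinfo&siprop=statistics.
    $api = 'https://wiki.example.org/w/api.php'; // placeholder URL
    $threshold = 10000;                          // placeholder threshold, tune per wiki

    function pendingJobs( string $api ): int {
        $url = $api . '?action=query&meta=siteinfo&siprop=statistics&format=json&formatversion=2';
        $data = json_decode( file_get_contents( $url ), true );
        return (int)( $data['query']['statistics']['jobs'] ?? 0 );
    }

    // Before each batch of edits, wait until the backlog has drained.
    while ( pendingJobs( $api ) > $threshold ) {
        sleep( 60 );
    }
    // ... proceed with the bot's next batch of edits ...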
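
(Editor's note: and a rough LocalSettings.php sketch of what "wiring together" the ApiMaxLagInfo idea from 22:35:27 might look like. As noted at 22:35:43 there is no such implementation in core; the 1000-jobs-per-second scaling factor and the 'jobqueue' type label below are arbitrary assumptions.)

    <?php
    // Sketch (untested): report the job queue backlog as fake "lag" so that
    // maxlag-aware bots slow down automatically.
    $wgHooks['ApiMaxLagInfo'][] = static function ( &$lagInfo ) {
        $jobsPerLagSecond = 1000; // assumption: how many pending jobs count as one second of lag
        // SiteStats::jobs() is the same approximate figure that siprop=statistics reports.
        $fakeLag = SiteStats::jobs() / $jobsPerLagSecond;
        // Per the hook documentation, only override the info if the computed lag is higher.
        if ( $fakeLag > $lagInfo['lag'] ) {
            $lagInfo = [
                'host' => wfHostname(),
                'lag'  => $fakeLag,
                'type' => 'jobqueue', // arbitrary label, not a type defined by core
            ];
        }
    };

With something like this in place, bots that already send maxlag=5 and honor the resulting error would pause on their own whenever the backlog gets large, with no bot-side changes needed.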