[11:17:51] lunch
[13:58:46] \o
[14:00:20] o/
[14:16:51] yes...
[14:59:09] could explain perhaps why it's a bit more fragile?
[14:59:14] yea perhaps
[14:59:31] although.. i dunno, i guess i worry about it more because it's stateful, i guess it should "work"
[15:00:02] but even just having multiple round trips increases error probability i guess
[15:00:24] we should definitely avoid scrolls, but no clue how large the response is to justify multiple calls
[15:02:44] my read is it's because they expect to throw away many of the results, i suppose i would probably want to try harder to move those conditions into the query, but i don't fully understand from reading this what those conditions are
[15:31:11] yes...
[17:12:33] workout, back in ~40
[18:05:32] back
[20:26:08] hmm, odd: The following artifacts could not be resolved: org.wikimedia.utils:lucene-regex-rewriter:pom:1.0.7 (absent): Could not transfer artifact org.wikimedia.utils:lucene-regex-rewriter:pom:1.0.7 from/to gitlab-maven (https://gitlab.wikimedia.org/api/v4/groups/186/-/packages/maven): status code: 401, reason phrase: Unauthorized (401)
[20:27:27] but fetching it from an incognito window via the ui works fine...
[21:05:22] ahh, it turns out the problem was i had an old token in ~/.m2/settings.xml
[21:05:37] (and i was misdirected because i'd assumed this was all unauthenticated public access)
[21:58:57] inflatador: running about 5' late. looks like there might be some rdf streaming updater and wdqs lag alerts
[22:06:22] ryankemper ACK, I think the wdqs lag has passed. We may still wanna look at T410320 and T410043 to see if there are some candidates for banning
[22:06:22] T410320: WDQS: proactively check for/ban abusive queries - https://phabricator.wikimedia.org/T410320
[22:06:23] T410043: Resurrect/update "Wikidata Query Service Errors" logstash dashboard - https://phabricator.wikimedia.org/T410043
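
For context on the 401 diagnosed above: the stale-token situation typically looks something like the sketch below in ~/.m2/settings.xml. This is an illustrative fragment, not the actual file from the log; the server id "gitlab-maven" is an assumption and must match the repository id declared in the project's pom.xml. GitLab's Maven registry accepts a personal access token sent as a Private-Token HTTP header, and an expired or revoked token configured this way overrides anonymous access, producing a 401 even for packages that an unauthenticated request (e.g. from an incognito browser window) can fetch fine.

```xml
<!-- ~/.m2/settings.xml (sketch): server id assumed to match the
     repository id "gitlab-maven" in the project's pom.xml.
     A stale token here causes 401 Unauthorized on every transfer. -->
<settings>
  <servers>
    <server>
      <id>gitlab-maven</id>
      <configuration>
        <httpHeaders>
          <property>
            <name>Private-Token</name>
            <!-- replace with a current GitLab personal access token,
                 or remove this <server> entry for anonymous access -->
            <value>EXPIRED_TOKEN_HERE</value>
          </property>
        </httpHeaders>
      </configuration>
    </server>
  </servers>
</settings>
```

The fix described in the log amounts to either refreshing the token in the `<value>` element or deleting the stale `<server>` entry so Maven falls back to unauthenticated public access.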