[11:05:39] oops, ryankemper I might have restarted blazegraph at the wrong time on wdqs1023. Sincere apologies.
[11:06:10] I'm not sure I should still be trusted with my production access :/
[12:39:49] I'm taking a sick day today. Will probably be back tomorrow
[12:40:21] inflatador: good luck! get better!
[13:43:17] \o
[13:47:59] something awkward about the docker/mwdd env on cindy, go is panicking (i.e. throwing exceptions) on startup... will try rebooting and hope resetting docker makes it all happy
[14:10:08] can anyone say if we're already collecting metrics on the number of federated queries from the scholarly subgraph of wikidata to the main graph (and vice versa)? Did I just miss some metrics somewhere?
[14:13:37] tarrow: hmm, i'm not completely sure. I know we log the queries themselves to `{eqiad,codfw}.wdqs-{external,internal}.sparql-query` but it's not clear to me whether we know if those are federated or not
[14:13:54] maybe federation could be detected from the raw sparql query
[14:13:57] would you want to know?
[14:14:38] tarrow: in a general curiosity sense, probably, but i'm not sure we have anything actionable to do with the data
[14:15:00] We (the wikibase cloud team at WMDE) are doing some work in this area, looking through https://datahub.wikimedia.org/dataset/urn:li:dataset:(urn:li:dataPlatform:hive,discovery.processed_external_sparql_query,PROD)/Schema?is_lineage_mode=false&schemaFilter=
[14:18:15] tarrow: plausibly that could be updated to add a field about federation
[14:18:44] i'm not super familiar with it, but it's implemented via org.wikidata.query.rdf.spark.transform.queries.sparql.QueryExtractor
[14:18:58] ebernhardson: yeah, there's SERVICE in the parsed q_info we are already looking at
[14:19:21] which includes quite a few calls to the scholarly endpoint
[14:19:35] as well as maybe some "self federation" to query.wikidata.org
[14:19:53] just wanted to check we aren't reinventing the wheel
[14:20:13] probably not :) we don't have a ton of metrics around the rdf bits
[14:20:30] and also to determine whether you would really benefit from separately tracking those graph-split federation metrics?
[14:22:15] honestly i don't know if we would benefit from it or not; it seems like the kind of thing that might be nice to have in a graph somewhere, and might suggest something in the future when looking at an incident or some such, but i don't know of anything concrete today
[14:48:20] ebernhardson: sure; but you didn't have some kind of KPI or something you need to track to say "the graph split is going well! we can see that 90% of queries that used to work on the joined graph have now switched to federating between the two graphs" or something?
[16:46:15] meh... we have a search-query-suggestions dashboard in superset... but in the 5 years since it was last used, superset lost the custom SQL query that backed it :S
[16:46:41] maybe i can guess what that query was supposed to be ...
[17:09:01] annoyingly there are mysterious metrics :P Like what was `search_count_norm_sum` summing? What was the normalization? :S
[17:33:09] aha... turns out you can export a dashboard into yamls, which finally shows everything it remembers (sadly not the source query though...)
[18:15:49] plausibly this is what the dashboard was supposed to look like, maybe: https://superset.wikimedia.org/superset/dashboard/search-query-suggestions
[18:19:32] not sure i like that form of ab test reporting vs a jupyter notebook. It's missing most of the statistical analysis, but it's good to remember what we were previously looking at
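For reference, digging through such a dashboard export is easy to script. Below is a minimal sketch, not what was actually run: it assumes the export is a zip containing `charts/*.yaml` files with `slice_name` and `params` keys, which may differ between Superset versions, and the input path is a placeholder.

```python
# A rough sketch: walk a Superset dashboard export and print what each chart
# still remembers about itself. The zip layout (charts/*.yaml) and the
# `slice_name`/`params` keys are assumptions that may vary by Superset version.
import zipfile

import yaml


def list_chart_definitions(export_path: str) -> None:
    with zipfile.ZipFile(export_path) as zf:
        for name in zf.namelist():
            if "/charts/" in name and name.endswith(".yaml"):
                chart = yaml.safe_load(zf.read(name))
                print(chart.get("slice_name"))
                # Metric names like `search_count_norm_sum` tend to show up in
                # the chart params; the SQL behind a virtual dataset is stored
                # with the dataset, not the chart, so it may simply be gone.
                print("  params:", chart.get("params"))


if __name__ == "__main__":
    # Placeholder filename for the exported bundle.
    list_chart_definitions("search_query_suggestions_export.zip")
```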
[18:41:29] hello, do you know if brian is out today, by any chance?
[18:42:58] for T394543 I was looking to do the final test of the upgrade-firmware cookbook's support for SSD firmware, and cirrussearch2113 was the next candidate
[18:42:58] T394543: SSD firmware update not working in firmware cookbook - https://phabricator.wikimedia.org/T394543
[18:43:25] but since I see it is pooled and not downtimed, maybe it's not a good candidate anymore, or it just needs to be depooled/downtimed.
[18:43:52] LMK if you could give me a test host for this firmware upgrade
[18:48:26] volans: hmm, he should be around
[18:48:39] volans: oh wait, actually he said he was feeling sick this morning and took the day
[18:56:48] ah sorry to hear that
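Circling back to the federation question from earlier in the afternoon: counting SERVICE calls out of the logged queries could look roughly like the sketch below, run against the `discovery.processed_external_sparql_query` table linked above. This is not the team's pipeline; the endpoint URLs, the raw `query` column, and the `year`/`month` partition columns are assumptions rather than the table's actual schema.

```python
# A minimal sketch, assuming the table exposes the raw SPARQL text in a
# `query` column and is partitioned by year/month; adjust names to the real
# schema before using.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("wdqs-federation-count").getOrCreate()

# Hypothetical endpoint patterns for graph-split federation; confirm the real
# scholarly endpoint URL before trusting the numbers.
SCHOLARLY = r"(?i)SERVICE\s*<https://query-scholarly\.wikidata\.org/sparql>"
SELF = r"(?i)SERVICE\s*<https://query\.wikidata\.org/sparql>"

queries = spark.table("discovery.processed_external_sparql_query").where(
    (F.col("year") == 2025) & (F.col("month") == 5)
)

queries.select(
    F.count("*").alias("total_queries"),
    F.sum(F.col("query").rlike(SCHOLARLY).cast("long")).alias("service_to_scholarly"),
    F.sum(F.col("query").rlike(SELF).cast("long")).alias("service_to_wdqs_itself"),
).show()
```

If this turns out to be worth tracking, the cleaner route the channel already points at is extending QueryExtractor so a federation field lands in the table itself, rather than regexing it out after the fact.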