[00:08:37] Analytics: Superset throwing up performance errors - https://phabricator.wikimedia.org/T231614 (JKatzWMF) Resolved→Open Reopening. This is happening for other metrics now My entire reading dashboard...which used to work, is throwing up performance errors. You can look at each chart to try and figur...
[03:53:39] Analytics, Analytics-Kanban, User-Elukey: Shorten the time it takes to move files from hadoop to dump hosts by Kerberizing/hadooping the dump hosts - https://phabricator.wikimedia.org/T234229 (Nuria)
[04:01:20] Analytics: Add anomaly detection alarm to detect traffic variations on countries overall - https://phabricator.wikimedia.org/T234484 (Nuria)
[04:05:04] Analytics, Analytics-Kanban: Add alarm on distribution of user agents and/or pageview titles - https://phabricator.wikimedia.org/T234588 (Nuria)
[04:51:21] Analytics, Multimedia, Tool-Pageviews: Add mediarequests metrics to wikistats UI - https://phabricator.wikimedia.org/T234589 (Nuria)
[04:53:06] Analytics, Multimedia, Tool-Pageviews: Add ability to the pageview tool in labs to get mediarequests per file similar to existing functionality to get pageviews per page title - https://phabricator.wikimedia.org/T234590 (Nuria)
[05:11:26] Analytics, Multimedia, Tool-Pageviews: Backfill data from mediacounts into mediarequests tables in cassandra so as to have historical mediarequest data - https://phabricator.wikimedia.org/T234591 (Nuria)
[05:15:31] Analytics, Analytics-Kanban: alarming based on anomaly detection: add alarm on distribution of user agents and/or pageview titles - https://phabricator.wikimedia.org/T234588 (Nuria)
[05:15:35] Analytics: Migrate eventlogging to python3 - https://phabricator.wikimedia.org/T234593 (Nuria)
[05:24:10] Analytics, Analytics-EventLogging, EventBus, CPT Initiatives (Modern Event Platform (TEC2)), Services (watching): Eventlogging can use the stream config module to dynamically adjust sampling rates - https://phabricator.wikimedia.org/T234594 (Nuria)
[06:31:21] Analytics, Patch-For-Review, Performance-Team (Radar): Eventlogging processors are frequently failing heartbeats causing consumer group rebalances - https://phabricator.wikimedia.org/T222941 (elukey) I just packaged and deployed 1.4.7 to deployment-eventlog05. Let's see if it works fine in there, and...
[06:36:53] o/
[06:37:17] deployed python-kafka 1.4.7 on deployment-eventlog05
[06:37:21] let's see if it works there
[06:56:26] Analytics, Patch-For-Review, Performance-Team (Radar), User-Elukey: Eventlogging processors are frequently failing heartbeats causing consumer group rebalances - https://phabricator.wikimedia.org/T222941 (elukey)
[07:04:36] Analytics: Upgrade eventlogging to Python 3 - https://phabricator.wikimedia.org/T233231 (elukey)
[07:05:30] Good morning team
[07:07:04] bonjour!
[07:09:01] Analytics: Upgrade eventlogging to Python 3 - https://phabricator.wikimedia.org/T233231 (elukey)
[07:10:50] Hi elukey
[07:12:28] Analytics: Upgrade eventlogging to Python 3 - https://phabricator.wikimedia.org/T233231 (elukey)
[07:15:36] Analytics, Analytics-Kanban, decommission, User-Elukey: Decommission kerberos1001 - https://phabricator.wikimedia.org/T234600 (elukey)
[07:24:31] Analytics, Analytics-Kanban, decommission, Patch-For-Review, User-Elukey: Decommission kerberos1001 - https://phabricator.wikimedia.org/T234600 (ops-monitoring-bot) cookbooks.sre.hosts.decommission executed by elukey@cumin1001 for hosts: `kerberos1001.eqiad.wmnet` - kerberos1001.eqiad.wmnet...
[07:28:08] Analytics: Upgrade eventlogging to Python 3 - https://phabricator.wikimedia.org/T233231 (awight) >>! In T233231#5544764, @Ottomata wrote: > No idea why: > https://github.com/wikimedia/eventlogging/commit/375ae076ef70d87d7c3b7b9676d0433cd5c139c4 > > It looks like it has something to do with python 2 utf-8, w...
[07:32:49] Analytics, Analytics-Kanban, User-Elukey: Make the Kerberos infrastructure production ready - https://phabricator.wikimedia.org/T226089 (elukey)
[07:32:52] Analytics, Analytics-Kanban, decommission, Patch-For-Review, User-Elukey: Decommission kerberos1001 - https://phabricator.wikimedia.org/T234600 (elukey) Open→Resolved
[07:36:10] Analytics, Analytics-Kanban, User-Elukey: Make the Kerberos infrastructure production ready - https://phabricator.wikimedia.org/T226089 (elukey) Updates: * re-created all principals and keytabs for the Hadoop test cluster and move it to krb1001/krb2001 * verified that replication works between krb10...
[07:36:43] we are very close to have --^ ready
[07:38:50] joal: https://www.cineca.it/en/news/leonardo-eurohpc
[07:39:00] this monster is 5mins from my home :)
[07:40:54] ohhhhhh elukey :) The beast looks powerful :)
[07:41:57] elukey: interestingly I have coworked with some researchers in differences between those beasts and the type of infrastructure/architecture we use, and there actually are meaningful differences :)
[07:43:25] Most noticeably: storage/compute is separated for real on supercomputers - There is a big data bus to transfer big data volumes to compute, but data-collocation optimizations can't be used
[07:44:18] What this leads me to think of: those beasts are great for big-compute stuff with not too big data (like model simulations for instance) - Our stuff is better for big-data analytics :)
[07:44:22] elukey: --^
[07:50:05] yep yep! :)
[07:50:14] it is also very difficult to program those stuff
[07:50:28] I remember following a course of mpi/openmp
[07:50:36] elukey: more or less :)
[07:50:55] elukey: a lot more low-level programming
[07:51:07] better: model your algorithm to make it scale as best as possible on that monster
[07:51:17] I love low level programming :)
[07:51:44] * joal would love to be a good enough programmer to love low-level programming :S
[07:52:11] I said I love it not that I am good at it :)
[08:56:24] Analytics, Analytics-Kanban, User-Elukey: Make the Kerberos infrastructure production ready - https://phabricator.wikimedia.org/T226089 (elukey) Did a quick test: * kdestroy on an-tool1006 * stop kdc on krb1001 * kinit on an-tool1006 * check kdc logs for my username on krb2001 And everything worked...
[10:16:58] Analytics, Analytics-Kanban: Upgrade matomo to its latest upstream version - https://phabricator.wikimedia.org/T234607 (elukey)
[10:21:33] Analytics, MinervaNeue, Readers-Web-Backlog (Kanbanana-2019-20-Q2): MinervaClientError sends malformed events - https://phabricator.wikimedia.org/T234344 (phuedx) @Jdrewniak: > The event is sent as something like this: https://en.m.wikipedia.org/beacon/statsv?MediaWiki.minerva.WebClientError.anon=1c...
[10:28:48] Analytics, MinervaNeue, Readers-Web-Backlog (Kanbanana-2019-20-Q2): MinervaClientError sends malformed events - https://phabricator.wikimedia.org/T234344 (phuedx) For reference, the code that processes statsv URLs is [[ https://github.com/wikimedia/analytics-statsv/blob/master/statsv.py#L170 | here ]].
[10:47:08] * elukey lunch!
[12:19:22] elukey@an-coord1001:/var/log/hadoop-hdfs/hdfs-balancer$ du -hs *
[12:19:22] 2.7G balancer.log
[12:28:46] elukey: is it recent?
[12:31:58] Analytics, Analytics-EventLogging, EventBus, CPT Initiatives (Modern Event Platform (TEC2)), Services (watching): Eventlogging Client Side can use the stream config module to dynamically adjust sampling rates - https://phabricator.wikimedia.org/T234594 (Ottomata)
[12:32:30] Analytics, Patch-For-Review, Performance-Team (Radar), User-Elukey: Eventlogging processors are frequently failing heartbeats causing consumer group rebalances - https://phabricator.wikimedia.org/T222941 (Ottomata) Yeehaw +1
[12:32:34] joal: hello! do you want to redo the mediawiki history snapshot today or wait until monday?
[12:35:14] joal: yep today
[13:41:53] Analytics, Analytics-Kanban, Patch-For-Review: Upgrade eventlogging to Python 3 - https://phabricator.wikimedia.org/T233231 (elukey)
[13:45:48] first step --^
[13:46:24] in theory if we do things properly we could even jump to buster after that
[13:50:00] Hey folks. Would I be able to find requests to https://ores.wmflabs.org in the webrequests table?
[13:54:06] halfak i don't think so
[13:54:22] webrequest only has stuff from prod varnish caches, and i don't think wmflabs.org goes through them......
[13:54:25] or does it?
[13:55:00] Hmm. I'm honestly not sure. There's a main proxy in front of all of the hosts. That is configurable via openstack. I'll ask around -cloud to see if they know how I might find logs.
[13:55:19] halfak: is that in beta?
[13:55:29] / deployment-prep
[13:55:31] ?
[13:55:32] Nope. Separate cloud VPS.
[13:55:35] ah k
[13:55:59] I melted an iceberg trying to figure out the webrequests table already :'(
[13:56:05] haha
[13:56:17] (PS12) Fdans: Add oozie job to load top mediarequests data [analytics/refinery] - https://gerrit.wikimedia.org/r/538880 (https://phabricator.wikimedia.org/T233717)
[13:56:25] halfak: we can find out pretty easy, if i hit the url and it doesn't show up in kafka
[13:56:27] then nope!
[13:57:22] i'd say nope!
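A minimal Hive sketch of the check discussed above, i.e. whether any ores.wmflabs.org requests actually land in the webrequest table. The partition values and the 'text' webrequest_source are placeholder assumptions, not taken from the conversation; pick any recent hour before drawing conclusions.

```sql
-- Count requests for ores.wmflabs.org in one (arbitrary) hour of webrequest.
SELECT COUNT(*) AS hits
FROM wmf.webrequest
WHERE webrequest_source = 'text'   -- assumption: the text caches would carry this host if anything did
  AND year = 2019 AND month = 10 AND day = 4 AND hour = 13
  AND uri_host = 'ores.wmflabs.org';
-- A count of 0 for recent hours supports the conclusion above: wmflabs.org
-- traffic does not flow through the production varnish caches.
```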
[14:00:10] Analytics, Analytics-EventLogging, EventBus, CPT Initiatives (Modern Event Platform (TEC2)), and 2 others: Modern Event Platform: Stream Configuration: Implementation - https://phabricator.wikimedia.org/T233634 (Ottomata) a:Ottomata
[14:06:44] Analytics: Move the Analytics infrastructure to Debian Buster - https://phabricator.wikimedia.org/T234629 (elukey) p:Triage→Normal
[14:07:12] Analytics: Move the Analytics infrastructure to Debian Buster - https://phabricator.wikimedia.org/T234629 (elukey)
[14:07:16] Analytics, Patch-For-Review: Install Debian Buster on Hadoop - https://phabricator.wikimedia.org/T231067 (elukey)
[14:09:44] Analytics, Analytics-EventLogging, EventBus, CPT Initiatives (Modern Event Platform (TEC2)), and 2 others: Modern Event Platform: Stream Configuration: Implementation - https://phabricator.wikimedia.org/T233634 (Ottomata) I've got a few things working, but I'm not so sure I can use ResourceLoader...
[14:10:18] (PS13) Fdans: Add oozie job to load top mediarequests data [analytics/refinery] - https://gerrit.wikimedia.org/r/538880 (https://phabricator.wikimedia.org/T233717)
[14:15:24] (PS14) Fdans: Add oozie job to load top mediarequests data [analytics/refinery] - https://gerrit.wikimedia.org/r/538880 (https://phabricator.wikimedia.org/T233717)
[14:54:30] mforns: you there?
[15:00:02] fdans, yep!
[15:00:18] mforns: can we bc for a second? I'm going a lil crazy with a syntax error
[15:00:28] sure
[15:00:31] omw
[15:32:53] Analytics, Performance-Team, Research, Security-Team, WMF-Legal: A Large-scale Study of Wikipedia Users' Quality of Experience: data release - https://phabricator.wikimedia.org/T217318 (Gilles) After discussing this, we've come to the conclusion that we can do this effectively with 2 separate...
[15:33:06] Analytics, Performance-Team, Research, Security-Team, WMF-Legal: A Large-scale Study of Wikipedia Users' Quality of Experience: data release - https://phabricator.wikimedia.org/T217318 (Gilles) a:JFishback_WMF→Gilles
[15:35:30] Analytics, Analytics-EventLogging, EventBus, CPT Initiatives (Modern Event Platform (TEC2)), and 2 others: Modern Event Platform: Stream Configuration: Implementation - https://phabricator.wikimedia.org/T233634 (Ottomata) PoC extension work here: https://github.com/ottomata/mediawiki-extensions-C...
[15:35:51] Analytics, MinervaNeue, Readers-Web-Backlog (Kanbanana-2019-20-Q2): MinervaClientError sends malformed events - https://phabricator.wikimedia.org/T234344 (Jdlrobson) isn't the real issue here that the CORs requests to maps.wikimedia.org are not working? Should we record that as a bug and let traffic...
[15:36:05] (PS15) Fdans: Add oozie job to load top mediarequests data [analytics/refinery] - https://gerrit.wikimedia.org/r/538880 (https://phabricator.wikimedia.org/T233717)
[15:39:05] joal: we've discovered what we think is a bug in HQL
[15:40:17] it seems like if you start a grouping set with a non-column-like statement (like regexp_replace(base_name, '\t', '') ) hive won't accept it and it will throw a syntax error
[15:40:48] but if you put that statement in any other position in the grouping set, it will work correctly
[15:42:45] (PS16) Fdans: Add oozie job to load top mediarequests data [analytics/refinery] - https://gerrit.wikimedia.org/r/538880 (https://phabricator.wikimedia.org/T233717)
[15:46:40] (PS17) Fdans: Add oozie job to load top mediarequests data [analytics/refinery] - https://gerrit.wikimedia.org/r/538880 (https://phabricator.wikimedia.org/T233717)
[16:02:08] ottomata: standuuuppppppp
[16:05:02] Analytics, Analytics-Kanban: Migrate eventlogging to python3 - https://phabricator.wikimedia.org/T234593 (Nuria) a:elukey
[16:20:47] Analytics, Analytics-Kanban, Multimedia, Tool-Pageviews: Backfill data from mediacounts into mediarequests tables in cassandra so as to have historical mediarequest data - https://phabricator.wikimedia.org/T234591 (Nuria) a:fdans
[16:22:03] mforns: rsync from /home/mforns/mediawiki_history_dumps/2019-08 yes?
[16:22:16] to /srv/dumps/xmldatadumps/public/other/mediawiki_history/2019-08 on labstore?
[16:22:19] ottomata, yes :]
[16:27:21] ok mforns its going!
[16:27:29] \o/
[16:27:33] thanks ottomata
[16:27:39] !log manually rsyncing mediawiki_history 2019-08 snapshot to labstore1006
[16:27:40] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[16:30:46] ottomata: I am wondering - is it needed only on 1006 or also on 1007? Not sure what is the diff :(
[16:34:22] going off o/
[16:34:55] i think just 1006
[16:34:57] i am not sure either
[16:37:35] i'm signing off too, laters all!
[17:53:33] hey a-team: I'd like to sync up with one of you for a 1/2 hour to make sure the Growth Team's plan for de-whitelisting the HelpPanel schema makes sense. Unsure how to schedule that, could I have some advice?
[18:11:57] nuria: is your approval needed for T234473 ?
[18:27:21] Analytics: Superset not able to load a reading dashboard - https://phabricator.wikimedia.org/T234684 (Nuria)
[18:35:02] Analytics, Analytics-EventLogging, Better Use Of Data, EventBus, and 3 others: Modern Event Platform: Stream Configuration: Implementation - https://phabricator.wikimedia.org/T233634 (jlinehan)
[18:35:15] Analytics, Analytics-EventLogging, Better Use Of Data, EventBus, and 2 others: Eventlogging Client Side can use the stream config module to dynamically adjust sampling rates - https://phabricator.wikimedia.org/T234594 (jlinehan)
[19:10:24] a-team: what is the policy about removing inactive EventLogging schemas from the whitelist? is there any reason not to remove them?
[19:23:38] On reflection, it seems like the entries for inactive schemas might be kept intentionally, because the old data is still liable to be deleted if the entry suddenly disappears from the whitelist. Is that correct?
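A minimal HiveQL sketch of the GROUPING SETS behaviour fdans reports at 15:39–15:40 above. The table and column names are illustrative only, and the "fails"/"works" comments simply restate the observation from the conversation rather than confirming a Hive bug.

```sql
-- Reported to fail with a syntax error: the expression is the first
-- element of a grouping set (illustrative table/columns, not the real job).
SELECT regexp_replace(base_name, '\t', '') AS clean_name,
       referer_class,
       SUM(view_count) AS views
FROM some_mediarequest_table
GROUP BY regexp_replace(base_name, '\t', ''), referer_class
GROUPING SETS (
    (regexp_replace(base_name, '\t', ''), referer_class),
    (referer_class)
);

-- Reported to work: the same expression placed in a non-leading position
-- inside the grouping set.
SELECT regexp_replace(base_name, '\t', '') AS clean_name,
       referer_class,
       SUM(view_count) AS views
FROM some_mediarequest_table
GROUP BY regexp_replace(base_name, '\t', ''), referer_class
GROUPING SETS (
    (referer_class, regexp_replace(base_name, '\t', '')),
    (referer_class)
);
```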
[19:27:56] Analytics, Language-analytics, Product-Analytics: Hash all pageTokens or temporary identifiers from the EL Sanitization white-list for Language - https://phabricator.wikimedia.org/T226856 (Neil_P._Quinn_WMF) Open→Resolved After three months, I can report that Language doesn't have any whiteli...
[19:27:59] Analytics, Product-Analytics, VisualEditor: Hash all pageTokens or temporary identifiers from the EL Sanitization white-list - https://phabricator.wikimedia.org/T220410 (Neil_P._Quinn_WMF)
[19:47:37] Analytics, Analytics-EventLogging, Better Use Of Data, EventBus, and 3 others: Modern Event Platform: Stream Configuration: Implementation - https://phabricator.wikimedia.org/T233634 (Krinkle) >>! In T233634#5546961, @Ottomata wrote: > […] https://grafana.wikimedia.org/d/000000505/eventlogging?or...
[20:04:51] neilpquinn, removing an EL schema from the sanitization white-list does not delete existing data. It just prevents that incoming data is kept in the sanitized database, and thus persisted indefinitely.
[20:05:27] neilpquinn, whether to keep the existing data or drop it is up to the data owner/users.
[20:05:50] I can delete a table from the event database if you want me to.
[20:05:53] mforns: i see! so if we see old schemas in there that haven't received data in years, would it be reasonable to submit a patch removing them?
[20:06:10] I'm just thinking about the whitelist for now—
[20:06:35] neilpquinn, if you're the current owner of that data and think it has no historical value, yes!
[20:07:54] oh, I see what you mean neilpquinn, if you think no more events will be coming in, then yes let's remove it from the white-list.
[20:09:34] mforns: okay, thank you, that answers my question! I'm glad because if we remove defunct entries, it will be easier to survey what's currently being persisted :) I'll update the docs on Wikitech
[20:09:39] is this about HelpPanel? Nettrom mentioned that above
[20:10:02] neilpquinn, sure, thanks a lot.
[20:11:26] neilpquinn and I have two separate questions about similar things
[20:11:27] mforns: actually, I wanted to review any whitelisted schemas for Language because of https://phabricator.wikimedia.org/T226856, and I noticed a bunch of defunct, inactive schemas (10+) :)
[20:11:40] *I think
[20:11:46] not including HelpPanel
[20:12:08] nettrom you're a modern day Descartes
[20:12:18] neilpquinn: as I am
[20:12:23] neilpquinn, ok :]
[20:12:32] Nettrom, can I help?
[20:12:48] heh
[20:13:21] mforns: hopefully :) I'm working with the Growth Team on de-whitelisting the HelpPanel schema, but also retaining some data in it as we have data from two experiments with two different retention deadlines
[20:13:49] aha
[20:14:01] I'm trying to learn how this process would go
[20:14:08] the data you want to keep is already in the event_sanitized database?
[20:14:33] mforns: correct
[20:14:39] I assume, because the event (unsanitized) database does only keep last 90 days of data
[20:14:40] ok
[20:15:01] so, the only thing needed is removing the schema from the white-list
[20:15:14] existing data will not be removed because of that
[20:15:33] mforns: well, it's complicated by the fact that event_sanitized.helppanel has both data that should be kept and data that should be deleted
[20:15:47] oh, I see
[20:16:19] mforns: it's also complicated by the fact that moving forward, we'll have data coming in that's also data we want to keep
[20:16:53] Nettrom, then why do you want to remove it from the white-list?
[20:18:34] the reason you want to delete parts of the data is because it contains privacy-sensitive information?
[20:18:52] or is it just to clean invalid data?
[20:19:23] mforns: parts of the data is privacy-sensitive and covered by an extension to the data retention policy that is about to expire
[20:19:37] oh, I understand now
[20:20:07] the schema is reused in two places, which is why we have multiple experiments running all logging data through the HelpPanel schema
[20:20:20] (or maybe it's three places, I know it's at least >1)
[20:20:39] aha
[20:20:40] mforns: I
[20:20:51] I'm not sure if selectively deleting data from event_sanitized.helppanel is an option?
[20:21:12] mmmm
[20:22:09] Analytics, Analytics-EventLogging, Better Use Of Data, EventBus, and 3 others: Modern Event Platform: Stream Configuration: Implementation - https://phabricator.wikimedia.org/T233634 (Ottomata) > There are currently 71 active schemas on enwiki (counted from RL's schemaRevision object client-side...
[20:22:22] is the sensitive data to be removed located in a particular slice of time, or is it rather in a given field throughout all time?
[20:22:33] or fields
[20:24:35] Analytics, Product-Analytics: Hash all pageTokens or temporary identifiers from the EL Sanitization white-list for Editing - https://phabricator.wikimedia.org/T226855 (Neil_P._Quinn_WMF) Open→Resolved I've checked through the Editing schemas on the whitelist (`EditAttemptStep` and `VisualEditorFe...
[20:24:39] Analytics, Product-Analytics, VisualEditor: Hash all pageTokens or temporary identifiers from the EL Sanitization white-list - https://phabricator.wikimedia.org/T220410 (Neil_P._Quinn_WMF)
[20:24:43] mforns: it's a combination of time (prior to deployment of the Homepage) and rows in the table (which should be identifiable through certain values of a given column)
[20:24:56] aha
[20:26:28] Nettrom, deleting slices of time (all fields) is easy in Hadoop, but to isolate the rows that need deletion we'd need to do some work by hand
[20:27:30] mforns: okay, so if I were to back up the data that is to be retained and then ask you to delete a slice of time, that would perhaps be a straightforward way to solve this?
[20:27:30] like insert-select the rows to be kept into another temporary table, then delete all the original data, and finally move the data to be kept back to the original place
[20:28:33] Nettrom, yes, that's an option as well
[20:29:16] but then the backed up data will not be in the table any more right? would that be a problem?
[20:29:36] mforns: not really, we can adapt our queries to handle it
[20:30:26] I'll discuss this some more with the team and figure out what we can do
[20:30:34] gotta run to a meeting, thanks for your help mforns! :)
[20:30:55] Nettrom, I'd say it's better to put the retainable data back to the HelpPanel table, no? This way you don't need to alter queries
[20:31:07] ok, will talk about this in our standup meeting
[20:31:10] and let you know
[20:31:17] thanks!
[20:33:05] Analytics, Analytics-EventLogging, Better Use Of Data, EventBus, and 3 others: Modern Event Platform: Stream Configuration: Implementation - https://phabricator.wikimedia.org/T233634 (Ottomata) See also {T228175} (@jlinehan can provide more info here).
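A rough HiveQL sketch of the insert-select / delete / restore sequence mforns outlines at 20:27–20:30 above. The table names, the cutoff timestamp, and the row filter are placeholders, and the partitioning of the real EventLogging tables is glossed over; this only illustrates the shape of the operation, not an agreed procedure.

```sql
-- 1. Copy the rows to retain into a temporary table (placeholder filter:
--    "after the Homepage deployment OR rows marked retainable").
CREATE TABLE event_sanitized.helppanel_keep AS
SELECT *
FROM event_sanitized.helppanel
WHERE dt >= '2019-05-01T00:00:00Z'
   OR some_column = 'retain_me';

-- 2. Remove all of the original data (assumes a managed, non-external table).
TRUNCATE TABLE event_sanitized.helppanel;

-- 3. Move the retained rows back, then drop the temporary table.
INSERT INTO TABLE event_sanitized.helppanel
SELECT * FROM event_sanitized.helppanel_keep;

DROP TABLE event_sanitized.helppanel_keep;
```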
[21:58:10] Analytics, Analytics-EventLogging, Better Use Of Data, EventBus, and 3 others: Modern Event Platform: Stream Configuration: Implementation - https://phabricator.wikimedia.org/T233634 (Nuria) @Ottomata I think loading at once config for all streams could never happen on 1st pageload (which is when...
[22:12:29] Analytics: Superset throwing up performance errors - https://phabricator.wikimedia.org/T231614 (Nuria) I am closing this in favour of this ticket: https://phabricator.wikimedia.org/T234684 The problem in the dashboard here had to do with cardinality and number of segments (i checked some UA fields have card...
[22:12:40] Analytics: Superset throwing up performance errors - https://phabricator.wikimedia.org/T231614 (Nuria) Open→Resolved
[22:25:28] Analytics: Superset not able to load a reading dashboard - https://phabricator.wikimedia.org/T234684 (Nuria) I see errors reaching to 1001 but i thought it was 1003 the one that answered to superset? ` raceback (most recent call last): File "/srv/deployment/analytics/superset/venv/lib/python3.7/site-pack...
[22:43:15] Analytics: Superset not able to load a reading dashboard - https://phabricator.wikimedia.org/T234684 (Nuria) This dashboard also includes pageviews by browser family (top 50) in a timespan of 4 years, if i execute that query in 1001 it works in 200ms curl -X POST 'http://localhost:8082/druid/v2/?pretty'...
[23:06:18] Analytics, Analytics-EventLogging, Better Use Of Data, EventBus, and 4 others: Modern Event Platform: Stream Configuration: Implementation - https://phabricator.wikimedia.org/T233634 (Krinkle)
[23:07:34] Analytics, Analytics-EventLogging, Better Use Of Data, EventBus, and 4 others: Modern Event Platform: Stream Configuration: Implementation - https://phabricator.wikimedia.org/T233634 (Krinkle) //(Tagging to remember next week)// @Ottomata I'll elaborate next week, but the 2K is not the EL code,...
[23:18:20] Analytics: Superset not able to load a reading dashboard - https://phabricator.wikimedia.org/T234684 (Nuria) mmm.. funny if you do "view results" they are there but a top50 requests requires looking at UAs, i wonder if the number of bytes sent to superset for this query is too large as well