[06:05:27] /win 28 [06:05:29] uff [06:05:31] morning :) [07:13:27] 10Analytics, 10Pageviews-Anomaly: Abnormal peaks @ huwiki - https://phabricator.wikimedia.org/T249792 (10Bencemac) [07:55:29] 10Analytics, 10Pageviews-Anomaly: Abnormal peaks @ huwiki - https://phabricator.wikimedia.org/T249792 (10Aklapper) @Bencemac: Is this the case for both desktop and mobile? [08:16:33] 10Analytics, 10Pageviews-Anomaly: Abnormal peaks @ huwiki - https://phabricator.wikimedia.org/T249792 (10Bencemac) Only desktop, I believe. [08:48:29] (03CR) 10Addshore: Only track unique users disabling TwoColConflict (031 comment) [analytics/wmde/scripts] - 10https://gerrit.wikimedia.org/r/587232 (https://phabricator.wikimedia.org/T247944) (owner: 10WMDE-Fisch) [08:57:40] 10Analytics, 10Operations, 10ops-eqiad, 10Patch-For-Review: (Need by: TBD) rack/setup/install kafka-jumbo100[789].eqiad.wmnet - https://phabricator.wikimedia.org/T244506 (10elukey) Summary: - the partman recipe is fixed - 1009 seems good - 1007's mgmt is not reachable - 1008's mgmt works, but I can't pxe... [09:22:50] * elukey out to buy groceries, available on phone! [11:20:09] all right back, quick lunch and then I'll be back [11:45:03] (03PS1) 10WMDE-Fisch: Fix unique user count for TwoColConf [analytics/wmde/scripts] - 10https://gerrit.wikimedia.org/r/587720 (https://phabricator.wikimedia.org/T247944) [12:44:33] (03CR) 10Awight: [C: 03+2] "And I guess user_id / up_user isn't unique across wikis..." 
[analytics/wmde/scripts] - 10https://gerrit.wikimedia.org/r/587720 (https://phabricator.wikimedia.org/T247944) (owner: 10WMDE-Fisch) [12:44:59] (03Merged) 10jenkins-bot: Fix unique user count for TwoColConf [analytics/wmde/scripts] - 10https://gerrit.wikimedia.org/r/587720 (https://phabricator.wikimedia.org/T247944) (owner: 10WMDE-Fisch) [12:49:52] (03PS1) 10WMDE-Fisch: Fix unique user count for TwoColConf [analytics/wmde/scripts] - 10https://gerrit.wikimedia.org/r/587728 (https://phabricator.wikimedia.org/T247944) [12:50:11] 10Analytics, 10Analytics-Kanban: Upgrade AMD ROCm to latest upstream - https://phabricator.wikimedia.org/T247082 (10elukey) From Miriam's tests it seems that stat1008 (with ROCm 3.3) works better than before, with some positive effects also on T248574. Since some tests are ongoing on stat1005 I'll update it in... [12:50:35] (03CR) 10Addshore: [C: 03+2] Fix unique user count for TwoColConf [analytics/wmde/scripts] - 10https://gerrit.wikimedia.org/r/587728 (https://phabricator.wikimedia.org/T247944) (owner: 10WMDE-Fisch) [12:50:41] (03CR) 10Addshore: [V: 03+2 C: 03+2] Fix unique user count for TwoColConf [analytics/wmde/scripts] - 10https://gerrit.wikimedia.org/r/587728 (https://phabricator.wikimedia.org/T247944) (owner: 10WMDE-Fisch) [12:50:55] (03CR) 10WMDE-Fisch: "> And I guess user_id / up_user isn't unique across wikis..." [analytics/wmde/scripts] - 10https://gerrit.wikimedia.org/r/587720 (https://phabricator.wikimedia.org/T247944) (owner: 10WMDE-Fisch) [12:51:06] (03Merged) 10jenkins-bot: Fix unique user count for TwoColConf [analytics/wmde/scripts] - 10https://gerrit.wikimedia.org/r/587728 (https://phabricator.wikimedia.org/T247944) (owner: 10WMDE-Fisch) [13:01:27] Is there any way quarry can take a list of accounts and output when they were registered and another query for when they were locked? 
[13:06:25] RhinosF1: hey, we do not really maintain quarry, not sure if people will be able to answer :( [13:07:32] elukey: I mean, I can give anyone a list of the accounts tonight in whatever format is best and the sql help with [13:08:44] ah okok, no idea, let's see if people can help later on [13:10:35] elukey: it being run privately is probably safer given the usecase, I can throw someone an email or PM if they want more details [13:12:34] ack! [13:22:59] hey team :] [13:30:49] not part of team but afternoon anyway [13:52:57] hello! [13:55:25] hi ottomata [14:03:17] 10Analytics, 10Better Use Of Data, 10Wikimedia-Logstash, 10Documentation, and 3 others: Documentation of client side error logging capabilities on mediawiki - https://phabricator.wikimedia.org/T248884 (10Niedzielski) I was hoping to add a Web team [[ https://www.mediawiki.org/wiki/Reading/Web/Chores | chor... [14:25:52] 10Analytics, 10Better Use Of Data, 10Wikimedia-Logstash, 10Documentation, and 3 others: Documentation of client side error logging capabilities on mediawiki - https://phabricator.wikimedia.org/T248884 (10Nuria) @Niedzielski I think it is worth syncing up with members of product infrastructure team on your end... [14:41:56] \o/ internet is back :) [14:48:47] 10Analytics: Superset: Repeatedly asking to re-log in - https://phabricator.wikimedia.org/T249824 (10Aklapper) [14:51:58] 10Analytics, 10Better Use Of Data, 10Wikimedia-Logstash, 10Documentation, and 3 others: Documentation of client side error logging capabilities on mediawiki - https://phabricator.wikimedia.org/T248884 (10Niedzielski) Thanks, @Nuria! The link you shared says it's unable to be completely restored but hopeful... [14:52:36] 10Analytics: Superset: Repeatedly asking to re-log in - https://phabricator.wikimedia.org/T249824 (10Aklapper) [14:58:25] 10Analytics: Superset: Repeatedly asking to re-log in - https://phabricator.wikimedia.org/T249824 (10Nuria) Do you get this same error on a different browser? Chrome or similar? 
We have seen this error being reported by other users but not for a while which makes me think it might be a particular browser issue. [15:00:29] 10Analytics, 10Better Use Of Data, 10Wikimedia-Logstash, 10Documentation, and 3 others: Documentation of client side error logging capabilities on mediawiki - https://phabricator.wikimedia.org/T248884 (10Nuria) @Niedzielski If you go to log stash home page you can see a link to mediawiki frontend error da... [15:03:44] 10Analytics, 10Better Use Of Data, 10Wikimedia-Logstash, 10Documentation, and 3 others: Documentation of client side error logging capabilities on mediawiki - https://phabricator.wikimedia.org/T248884 (10Niedzielski) Got it. Thanks, @nuria! [15:11:52] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review: Vet high volume bot spike detection code - https://phabricator.wikimedia.org/T238363 (10Nuria) [15:11:54] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review: Create UDF for actor id generation - https://phabricator.wikimedia.org/T247342 (10Nuria) 05Open→03Resolved [15:12:06] 10Analytics: Defining a better authentication scheme for Druid and Presto - https://phabricator.wikimedia.org/T241189 (10Nuria) [15:12:08] 10Analytics, 10Analytics-Kanban, 10User-Elukey: Prepare the Hadoop Analytics cluster for Kerberos - https://phabricator.wikimedia.org/T237269 (10Nuria) [15:12:10] 10Analytics, 10Analytics-Kanban, 10Product-Analytics, 10Patch-For-Review, 10User-Elukey: Kerberize Superset to allow Presto queries - https://phabricator.wikimedia.org/T239903 (10Nuria) 05Open→03Resolved [15:13:02] 10Analytics, 10Analytics-Kanban, 10Better Use Of Data, 10Desktop Improvements, and 8 others: Enable client side error logging in prod for small wiki - https://phabricator.wikimedia.org/T246030 (10Nuria) 05Open→03Resolved [15:13:06] 10Analytics, 10Better Use Of Data, 10Desktop Improvements, 10Product-Infrastructure-Team-Backlog, and 7 others: Client side error logging production launch - https://phabricator.wikimedia.org/T226986 
(10Nuria) [15:20:11] !log absent spark refine timers on an-coord1001 and move them to an-launcher1001 [15:20:12] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log [15:22:43] elukey: just checked, and indeed the timer script does store the salts in an-coord1001 under /etc/refinery/salts/eventlogging_sanitization [15:23:04] elukey: we need to copy those over to the new host before launching sanitization [15:24:23] maybe an improvement to that would be to add a line to that puppet script that pulls from hdfs first, then executes saltrotate, then puts back to hdfs [15:24:40] this way we move the state to hdfs [15:25:18] PROBLEM - Check the last execution of refine_mediawiki_job_events on an-coord1001 is CRITICAL: NRPE: Command check_check_refine_mediawiki_job_events_status not defined https://wikitech.wikimedia.org/wiki/Analytics/Systems/Managing_systemd_timers [15:25:33] this is me --^ [15:25:52] PROBLEM - Check the last execution of refine_mediawiki_events on an-coord1001 is CRITICAL: NRPE: Command check_check_refine_mediawiki_events_status not defined https://wikitech.wikimedia.org/wiki/Analytics/Systems/Managing_systemd_timers [15:25:54] mforns: what happens if for some reason we lose that file? Say host completely borked for a while, disk corrupted, etc.. [15:42:52] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review: Move systemd timer from an-coord1001 to an-launcher1001 - https://phabricator.wikimedia.org/T249593 (10elukey) [15:54:22] 10Analytics, 10Research: Proposed adjustment to wmf.wikidata_item_page_link to better handle page moves - https://phabricator.wikimedia.org/T249773 (10Isaac) [15:56:12] 10Analytics: Possible bot traffic from SA not categorized as bot traffic but regular pageviews - https://phabricator.wikimedia.org/T249835 (10ssingh) [15:57:20] 10Analytics: GPUs are not correctly handling multitasking - https://phabricator.wikimedia.org/T248574 (10Miriam) So we did a few tests with the latest ROCm version. 
* When the GPU saturates, there is no need to reboot, as killing the stalled processes is enough for the GPU to release the resources. This is a big... [16:00:33] !log move camus timers from an-coord1001 to an-launcher1001 [16:00:34] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log [16:02:20] 10Analytics: Possible bot traffic from SA not categorized as bot traffic but regular pageviews - https://phabricator.wikimedia.org/T249835 (10ssingh) [16:04:40] PROBLEM - Check the last execution of camus-mediawiki_job on an-coord1001 is CRITICAL: NRPE: Command check_check_camus-mediawiki_job_status not defined https://wikitech.wikimedia.org/wiki/Analytics/Systems/Managing_systemd_timers [16:04:45] yes yes [16:04:48] I am bad I know [16:04:58] PROBLEM - Check the last execution of camus-eventlogging on an-coord1001 is CRITICAL: NRPE: Command check_check_camus-eventlogging_status not defined https://wikitech.wikimedia.org/wiki/Analytics/Systems/Managing_systemd_timers [16:05:40] PROBLEM - Check the last execution of camus-mediawiki_analytics_events on an-coord1001 is CRITICAL: NRPE: Command check_check_camus-mediawiki_analytics_events_status not defined https://wikitech.wikimedia.org/wiki/Analytics/Systems/Managing_systemd_timers [16:10:01] all right all good for now! [16:13:32] joal: https://phabricator.wikimedia.org/T248574 [16:18:01] 10Analytics, 10Analytics-Kanban: Move systemd timer from an-coord1001 to an-launcher1001 - https://phabricator.wikimedia.org/T249593 (10elukey) [16:18:43] are we doing grooming? [16:32:34] I guess no :) [16:34:12] elukey: re. "what happens if for some reason we lose that file?" yea, that's why I think we should pull it from hdfs no? [16:35:49] mforns: also save no? [16:36:19] elukey: yes. 1) pull from hdfs, 2) execute saltrotate locally, 3) push to hdfs [16:36:33] this way the local copy is disposable [16:36:49] +! [16:36:51] +1 [16:37:01] mforns: is it difficult to implement? 
[16:38:11] elukey: I'm not 100% sure, but I think it should be just adding the hdfs dfs get command at the beginning of /usr/local/bin/refinery-eventlogging-saltrotate no? [16:39:03] and also the save/put, or is it already happening? (sorry trying to understand) [16:39:49] elukey: yes, it's already happening, the current command is: [16:40:06] /srv/deployment/analytics/refinery/bin/saltrotate -p '3 months' -b '50 days' /etc/refinery/salts/eventlogging_sanitization && hdfs dfs -rm -r /user/hdfs/salts/eventlogging_sanitization && hdfs dfs -put /etc/refinery/salts/eventlogging_sanitization /user/hdfs/salts [16:41:25] ah it is not part of the script, but it is done via && [16:41:26] ack [16:41:35] we could add at the start of it: rm -r /etc/refinery/salts/eventlogging_sanitization && hdfs dfs -get /user/hdfs/salts/eventlogging_sanitization /etc/refinery/salts && [16:41:40] yes [16:42:02] get might have -f to overwrite, need to check [16:42:08] but yes seems super feasible [16:42:13] let's do it before moving it [16:42:13] don't know if this would be very hacky? [16:42:22] ok makes sense [16:42:26] nah seems good [16:42:36] eventually it would be great to have it part of the script [16:42:48] but it is python so no client etc.. [16:42:52] hmm, at the beginning the script did that [16:42:53] yes now I get it [16:43:12] but we changed to make the script less coupled [16:43:13] well I mean instead of the ^^ [16:43:17] && [16:43:23] aha [16:43:33] anyway, adding a get at the beginning is super fine [16:43:34] +1 [16:43:46] can you send a code change whenever you have a moment? [16:43:53] elukey: sure! 
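The pull/rotate/push flow agreed on above (so the local copy under /etc/refinery is disposable and HDFS holds the authoritative state) could be sketched as a small shell function. The paths and `saltrotate` flags are copied from the command quoted at 16:40:06; the function itself and the overridable `SALTROTATE` variable are illustrative, not the actual puppet-managed script:

```shell
# Sketch of the proposed salt lifecycle: pull from HDFS, rotate locally,
# push back, so HDFS is the authoritative store and the local copy is
# disposable. SALTROTATE is overridable only to make dry-runs easy.
SALTROTATE="${SALTROTATE:-/srv/deployment/analytics/refinery/bin/saltrotate}"

rotate_salts() {
    local salt_dir=/etc/refinery/salts/eventlogging_sanitization
    local hdfs_dir=/user/hdfs/salts

    # 1) Pull the authoritative salts from HDFS over the local copy.
    rm -rf "$salt_dir"
    hdfs dfs -get "$hdfs_dir/eventlogging_sanitization" "${salt_dir%/*}"

    # 2) Rotate locally (same flags as the existing timer command).
    "$SALTROTATE" -p '3 months' -b '50 days' "$salt_dir"

    # 3) Push the rotated salts back so HDFS stays current as a backup.
    hdfs dfs -rm -r "$hdfs_dir/eventlogging_sanitization"
    hdfs dfs -put "$salt_dir" "$hdfs_dir"
}
```

With this shape, losing /etc/refinery/salts on the host (the scenario raised at 15:25:54) only costs a fresh pull from HDFS.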
[16:45:00] <3 [16:59:44] 10Analytics: Explore in jupyter notebook whether the raw pageview timeseries can help on outage/censhorsip automatic detection - https://phabricator.wikimedia.org/T249849 (10Nuria) [17:01:51] 10Analytics, 10Analytics-Kanban, 10Analytics-SWAP, 10Product-Analytics: pip not accessible in new SWAP virtual environments - https://phabricator.wikimedia.org/T247752 (10Nuria) 05Open→03Resolved [17:03:47] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review, 10User-Elukey: Investigate sporadic failures in oozie hive actions due to Kerberos auth - https://phabricator.wikimedia.org/T241650 (10Nuria) This issue does not seem to be happening again, does it? [17:05:14] elukey: https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/587815/ [17:05:21] 10Analytics, 10Analytics-Kanban, 10Event-Platform, 10Services (watching): jsonschema-tools should add a 'latest' symlink - https://phabricator.wikimedia.org/T245859 (10Nuria) 05Open→03Resolved [17:05:24] 10Analytics, 10Analytics-EventLogging, 10Analytics-Kanban, 10Event-Platform, and 2 others: Modern Event Platform: Schema Registry: Implementation - https://phabricator.wikimedia.org/T206789 (10Nuria) [17:07:45] 10Analytics, 10Analytics-Kanban: Unify puppet roles for stat and notebook hosts - https://phabricator.wikimedia.org/T243934 (10Nuria) [17:07:47] 10Analytics, 10Analytics-Kanban, 10Operations, 10User-Elukey: Refactor Analytics POSIX groups in puppet to improve maintainability - https://phabricator.wikimedia.org/T246578 (10Nuria) 05Open→03Resolved [17:09:12] mforns: left a small comment to avoid the -r in rm, just to be safer [17:09:33] 10Analytics, 10Analytics-Kanban, 10Event-Platform, 10MW-1.35-notes (1.35.0-wmf.26; 2020-03-31), 10Patch-For-Review: Update MW Vagrant to work with EventLogging and EventGate changes - https://phabricator.wikimedia.org/T240355 (10Nuria) 05Open→03Resolved [17:09:34] aha [17:09:36] 10Analytics, 10Better Use Of Data, 10Event-Platform, 10MW-1.35-notes (1.35.0-wmf.27; 
2020-04-07), and 2 others: EventLogging MEP Upgrade - https://phabricator.wikimedia.org/T238544 (10Nuria) [17:09:37] also left a comment [17:10:36] mforns: the dir is created by puppet in data_purge.pp, all good [17:10:36] elukey: but that is a directory? will it be removed without -r? [17:10:44] oh cool [17:10:53] ah snap it is a dir, okok [17:10:57] I thought it was a file [17:11:11] 10Analytics, 10Analytics-Kanban, 10User-Elukey: Prepare the Hadoop Analytics cluster for Kerberos - https://phabricator.wikimedia.org/T237269 (10Nuria) [17:11:13] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review, 10User-Elukey: profile::hive::site_hdfs should work with kerberos-run-command - https://phabricator.wikimedia.org/T240880 (10Nuria) 05Open→03Resolved [17:11:17] oh, it is a dir containing all salts [17:12:51] file { ["${refinery_config_dir}/salts", "${refinery_config_dir}/salts/eventlogging_sanitization"]: [17:12:54] ensure => 'directory', [17:12:57] owner => 'analytics', [17:12:59] } [17:13:02] mforns: --^ [17:13:11] it is created by puppet, can we clean up its content? like rm -f something/* [17:13:18] and hdfs dfs -get something/* [17:13:47] elukey: so if <%= refinery_config_dir %> for some reason instead of "/etc/refinery" is updated to "/etc/refinery ", then it would remove everything... [17:14:39] oh, good question... [17:15:01] I don't know, it might get in conflict with the puppet agent [17:15:32] how many files are stored in there? [17:15:43] what we can do is remove the ensure->directory of the latter "${refinery_config_dir}/salts/eventlogging_sanitization" [17:15:47] joal: yt? i've almost got stuff working but am about to try some crazy stuff, maybe you have some tips [17:15:54] for now just a couple [17:17:53] hi ottomata - How may I help? [17:18:33] bc? [17:18:45] sure ottomata [17:20:02] mforns: let's do something like this: if file is not present, hdfs get, otherwise regular command [17:20:48] elukey: or... 
wait updating the patch [17:24:06] mforns: it is as simple as if [ -z "$(ls -A /path)" ]; then hdfs dfs -get etc..; fi [17:24:38] elukey: I don't understand why we need that? [17:25:13] mforns: to avoid any rm, and be safe [17:25:36] the script checks if the dir is empty, and if so it gets from hdfs [17:25:45] since we'd only need to do it when we bootstrap [17:26:06] aah..., but then it's still keeping state there [17:26:33] mforns: yeah but it saves it at the end on hdfs, so in case of fire we have a backup [17:27:12] elukey: I see [17:27:35] we'd have a backup anyway in hdfs's trash folder for a month [17:27:37] no? [17:28:14] mforns: I mean that my initial concern was if we had state only on an-coord1001, and not on hdfs [17:28:23] now the only problem is bootstrapping in my opinion [17:28:24] I see [17:28:28] does it make sense? [17:29:38] elukey: yes, totally, will change [17:29:57] super [17:30:21] mforns: going to log-off but will check tomorrow morning! Thanks :) [17:30:29] ok, see ya! 
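Spelled out, the one-liner idea from 17:24:06 needs `then` rather than `do`, and quotes around the `$(ls -A ...)` substitution. A sketch, where the function name and argument handling are illustrative rather than the real puppet-deployed script:

```shell
# Fetch salts from HDFS only when the local directory is empty, i.e.
# only when bootstrapping a fresh host. Note `then` (not `do`), and the
# quotes around $(ls -A ...): unquoted, a directory holding several
# salt files would expand to multiple words and break the test.
bootstrap_salts() {
    local salt_dir="$1" hdfs_src="$2"
    if [ -z "$(ls -A "$salt_dir" 2>/dev/null)" ]; then
        hdfs dfs -get "$hdfs_src"/* "$salt_dir"
    fi
}
```

After the first run the directory is populated, so subsequent runs skip the fetch and the existing `hdfs dfs -put` at the end of the timer command keeps the HDFS backup current.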
[17:30:32] o/ [17:31:57] (03CR) 10Nuria: Add parse_user_agent transform function (033 comments) [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/587305 (https://phabricator.wikimedia.org/T238230) (owner: 10Ottomata) [18:37:53] (03PS1) 10Mforns: Add traffic entropy data quality stats in hourly resolution [analytics/refinery] - 10https://gerrit.wikimedia.org/r/587844 (https://phabricator.wikimedia.org/T249759) [18:56:35] (03PS3) 10Ottomata: Unify Refine transform functions and add user agent parser transform [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/586447 (https://phabricator.wikimedia.org/T238230) [19:03:32] (03CR) 10jerkins-bot: [V: 04-1] Unify Refine transform functions and add user agent parser transform [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/586447 (https://phabricator.wikimedia.org/T238230) (owner: 10Ottomata) [19:04:07] (03CR) 10Ottomata: "See new patches in https://gerrit.wikimedia.org/r/c/analytics/refinery/source/+/586447" (034 comments) [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/587305 (https://phabricator.wikimedia.org/T238230) (owner: 10Ottomata) [19:04:27] (03Abandoned) 10Ottomata: Add parse_user_agent transform function [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/587305 (https://phabricator.wikimedia.org/T238230) (owner: 10Ottomata) [19:11:08] (03PS4) 10Ottomata: Unify Refine transform functions and add user agent parser transform [analytics/refinery/source] - 10https://gerrit.wikimedia.org/r/586447 (https://phabricator.wikimedia.org/T238230) [19:46:42] nuria: have you encountered null pointer exceptions when using GetGeoDataUDF before? [20:02:12] (03CR) 10Nuria: [C: 03+1] Add traffic entropy data quality stats in hourly resolution [analytics/refinery] - 10https://gerrit.wikimedia.org/r/587844 (https://phabricator.wikimedia.org/T249759) (owner: 10Mforns) [20:20:48] Hello hello [20:22:08] If I recall correctly (let me see... 
https://phabricator.wikimedia.org/T219522, https://phabricator.wikimedia.org/T216528, https://phabricator.wikimedia.org/T216226) someone once said: GPUs on stat1005? [20:23:49] So I was thinking how costly would it be to have xgboost compiled w. GPU support (see: https://xgboost.readthedocs.io/en/latest/build.html) on stat1005? Just asking. Thanks. [20:30:54] ottomata: there might be something weird with camus :( [20:31:11] ooh? [20:31:16] hour 15:00 is missing the _IMPORTED flag [20:31:27] for upload and text (webrequest) [20:31:52] that is more or less when I switched to an-launcher1001 [20:32:17] but following hours are ok? [20:32:57] yep [20:33:38] that is very strange [20:34:27] looking... [20:36:39] in /wmf/data/raw/webrequest/webrequest_text/hourly/2020/04/09/15 there are files, I am wondering if for some reason I have interrupted the last import for the hour [20:36:43] or something similar [20:37:04] i'm thinking it might be just a missed run of the camus checker [20:37:09] it is the one that writes those flags [20:37:32] going to try to run it and see... [20:38:32] ahh interesting [20:40:08] size wise the hour seems to be around 90G, in line with adjacent hours [20:42:44] i'm not 100% sure how to re-run the checker for that time, the checker uses camus offset files.. [20:43:47] ah snap [20:44:12] ah the camus wrapper we have makes this hard [20:44:16] the jar has more options [20:45:53] let me know if I can help [20:46:36] also this seems to have happened only for webrequest [20:48:24] hm ya [20:48:35] so what i'm doing now [20:48:53] i did the same command that the systemd timer does but without the --run flag, and added the --dry-run flag [20:49:01] this printed out the java command that gets run for the checker [20:49:19] now i'm doing that with -n 60 (look back 60 runs) [20:49:28] but i think it isn't finding anything to do? [20:50:27] AHH it is outputting to another file.. 
[20:50:56] ya elukey this time it wrote the flag [20:50:56] Flag created: hdfs://analytics-hadoop/wmf/data/raw/webrequest/webrequest_upload/hourly/2020/04/09/15/_IMPORTED [20:51:05] wow! [20:51:36] re-running webrequest hour 15:00 [20:51:48] thx guys [20:51:56] elukey: [20:52:08] is time messed up on an-launcher? [20:52:21] hmmm wait no, date output is just now showing AM/PM [20:52:23] which it didn't use to! [20:52:25] ah that's annoying [20:52:34] elukey@an-launcher1001:~$ date [20:52:35] Thu 09 Apr 2020 08:52:24 PM UTC [20:52:39] looks good [20:52:59] ottomata: all good, jobs are running now [20:53:00] ya but why PM that's so annoying [20:53:01] ok great [20:53:28] thanks a ton for this fix, I wouldn't really have thought about it [20:54:20] !log re-run webrequest upload/text hour 15:00 from Hue (stuck due to missing _IMPORTED flag, caused by an-launcher1001 migration. Andrew fixed it by manually re-running the Camus checker) [20:54:21] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log [20:56:36] ok also sent an email to alerts@ explaining the issue [20:56:48] super, everything looks good, thanks again ottomata [20:58:06] mforns: all the last storm of alerts are probably all jobs waiting for hour 15:00 [20:58:20] yeah [21:02:02] yea yea
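For future reference, the recovery ottomata walks through above can be summarized as follows. This paraphrases the conversation rather than documenting a verified CLI, so treat the wrapper invocation and flags as a sketch:

```text
# 1) Re-create the systemd timer's camus invocation, but replace --run
#    with --dry-run so the wrapper only prints the java command it
#    would execute for the partition checker (the component that
#    writes the _IMPORTED flags).
# 2) Copy the printed java command and run it by hand with a larger
#    look-back window, e.g. -n 60 to revisit the last 60 camus runs,
#    so it re-checks the hour whose flag is missing.
# 3) Verify the flag appeared, e.g.:
#    hdfs dfs -ls /wmf/data/raw/webrequest/webrequest_upload/hourly/2020/04/09/15/_IMPORTED
# 4) Re-run the downstream jobs stuck on that hour from Hue.
```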