[00:33:20] 10Analytics: Check home leftovers of lexnasser - https://phabricator.wikimedia.org/T252363 (10lexnasser) I think that the following should be saved: - stat1007: `api`, `byc`, `refinery` - notebook1003: `Search_Engine_Testing.ipynb`, `Geoeditors.ipynb` - hive: `lex.webrequest_subset`, `lex.geoeditors_public_monthly`
[03:18:08] 10Analytics: Check home leftovers of lexnasser - https://phabricator.wikimedia.org/T252363 (10Nuria) @lexnasser is coming back in October. I mentioned this to @MoritzMuehlenhoff and he was OK with leaving homedir as is (we can move the data from notebooks to the stats machines)
[03:23:47] 10Analytics, 10Analytics-Kanban, 10LDAP-Access-Requests, 10Operations: LDAP access to the wmf group for Segun Oworu (superset, turnilo, hue) - https://phabricator.wikimedia.org/T252703 (10Nuria) @Dzahn is there an additional step we do to verify employment?
[05:29:20] 10Analytics: Check home leftovers of lexnasser - https://phabricator.wikimedia.org/T252363 (10elukey) Seems fine, but I'll copy his notebook100x dirs to stat1007 since the hosts will be deprecated by the time he will be back :)
[05:56:08] joal: bonjour! When you have time:
[05:57:01] 1) I have stopped your jupyter unit on stat1008, for some reason it doesn't load, the error is weird, can you tell me if you did something different to create it? Also, can you re-login to jupyterhub's UI to see if it works?
[05:57:05] 2)
[05:57:24] when you are ready let's test + deploy the new druid snapshot for aqs :)
[06:00:15] 10Analytics, 10Analytics-Cluster: Monitoring GPU Usage on stat Machines - https://phabricator.wikimedia.org/T251938 (10elukey) 05Resolved→03Open
[06:01:36] (03PS39) 10Fdans: Add pageview daily dump oozie job to replace Pagecounts-EZ [analytics/refinery] - 10https://gerrit.wikimedia.org/r/595152 (https://phabricator.wikimedia.org/T251777)
[06:01:52] (03CR) 10Fdans: Add pageview daily dump oozie job to replace Pagecounts-EZ (0310 comments) [analytics/refinery] - 10https://gerrit.wikimedia.org/r/595152 (https://phabricator.wikimedia.org/T251777) (owner: 10Fdans)
[06:56:04] 10Analytics, 10Analytics-Kanban: Move the Analytics infrastructure to Debian Buster - https://phabricator.wikimedia.org/T234629 (10elukey)
[07:05:51] 10Analytics, 10Analytics-Kanban: Move Matomo to Debian Buster - https://phabricator.wikimedia.org/T252740 (10elukey)
[07:07:49] 10Analytics, 10Analytics-Kanban: Upgrade matomo to the latest upstream - https://phabricator.wikimedia.org/T252741 (10elukey)
[07:13:27] 10Analytics, 10Operations, 10vm-requests: Create a VM for matomo1002 (eqiad) - https://phabricator.wikimedia.org/T252742 (10elukey)
[07:14:07] Good morning elukey - No time in the morning as usual - just restarted the notebook server: fails again :( I assume the error might come from me rsyncing my whole home folder, including the venv with a different version :( I realized that after - I'm sorry :(
[07:14:22] About AQS snapshot: let's do it early afternoon if ok for you
[07:15:06] joal: ahhh ok that explains, okok :)
[07:15:22] whenever you want, np :)
[07:15:29] elukey: do we have a not too complicated way to reset the venv?
[07:16:13] joal: in theory, if you mv it to say venv-backup and then re-login, jupyterhub would re-create it
[07:16:32] ack elukey
[07:17:06] elukey: testing
[07:18:04] elukey: still failed :(
[07:18:50] the issue is /bin/bash: line 0: exec: jupyterhub-singleuser: not found, that is weird, it should work
[07:20:10] I am going to test with my notebook, there is probably a PEBCAK
[07:20:15] (from my side I mean)
[07:20:22] stat1008 was one of the last to be configured
[07:20:24] elukey: could very well be from my side :)
[07:20:26] I might have missed something
[07:20:36] nono it is surely on my side, the error is weird
[07:21:02] elukey: no venv recreated so far
[07:22:08] ahhh
[07:22:29] joal: now yes
[07:22:34] I had to restart jupyterhub
[07:23:17] ah it works now!
[07:24:52] * elukey afk for a bit!
[07:50:32] shaky internet today :S
[07:51:12] indeed elukey, notebook server started :)
[08:15:37] I ran into something awkward after moving a scala-spark library over to Gerrit. Is it possible that no other repos are using sbt, and I'm looking at creating a CI image from scratch?
[08:16:41] s/library/one-off thing/ ... it isn't a big deal if I have to V+2 by hand, of course.
[08:41:44] 10Analytics, 10Analytics-Kanban, 10LDAP-Access-Requests, 10Operations: LDAP access to the wmf group for Segun Oworu (superset, turnilo, hue) - https://phabricator.wikimedia.org/T252703 (10Dzahn) @Nuria Yea, for now we can still check on the corporate LDAP (OIT) servers (though they might be shut down in th...
[08:42:08] 10Analytics, 10Analytics-Kanban, 10LDAP-Access-Requests, 10Operations: LDAP access to the wmf group for Segun Oworu (superset, turnilo, hue) - https://phabricator.wikimedia.org/T252703 (10Dzahn) a:03Dzahn
[08:58:31] ottomata: presto-server/presto-cli is flagged for downgrade on an-presto1001, it has 0.2666-2 installed, but -1 is in apt.wikimedia.org; looking at the changelog it appears that -2 should get uploaded to the repo?
[09:30:59] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review: Upgrade matomo to the latest upstream - https://phabricator.wikimedia.org/T252741 (10elukey) Now we have: ` matomo | 3.11.0-2 | stretch-wikimedia | main | amd64, i386 matomo | 3.13.3-1 | stretch-wikimedia | thirdparty/matomo | amd64 matomo |...
[09:34:39] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review: Upgrade matomo to the latest upstream - https://phabricator.wikimedia.org/T252741 (10MoritzMuehlenhoff) >>! In T252741#6136337, @elukey wrote: > Now we have: > > ` > matomo | 3.11.0-2 | stretch-wikimedia | main | amd64, i386 > matomo | 3.13.3...
[09:39:37] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review: Upgrade matomo to the latest upstream - https://phabricator.wikimedia.org/T252741 (10elukey) And ready for the upgrade: ` elukey@matomo1001:~$ apt-cache policy matomo matomo: Installed: 3.11.0-2 Candidate: 3.13.3-1 Version table: 3.13.3-1 1001...
[09:50:31] !log set matomo in maintenance mode as prep step for upgrade
[09:50:33] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[09:53:03] !log upgrade matomo to 3.13.3
[09:53:05] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[09:53:42] Experimental image for linting sbt repos: https://gerrit.wikimedia.org/r/#/c/integration/config/+/596407
[09:56:32] matomo upgraded!
[10:10:16] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review: Upgrade matomo to the latest upstream - https://phabricator.wikimedia.org/T252741 (10elukey) Removed the package from the main component as Moritz suggested. Created https://wikitech.wikimedia.org/wiki/Analytics/Systems/Matomo#Upgrade_Matomo
[10:10:26] 10Analytics, 10Analytics-Kanban: Upgrade matomo to the latest upstream - https://phabricator.wikimedia.org/T252741 (10elukey) p:05Triage→03High
[10:10:56] 10Analytics, 10Analytics-Kanban: Upgrade matomo to the latest upstream - https://phabricator.wikimedia.org/T252741 (10elukey) a:03elukey
[10:11:41] sooo a-team the only thing in the train etherpad is to deploy refinery to bump up a job's refinery hive version, so I'm going to do that now unless someone objects
[10:13:57] cool beans
[10:20:46] 10Analytics, 10Analytics-Kanban: Upgrade matomo to the latest upstream - https://phabricator.wikimedia.org/T252741 (10elukey) While writing the docs, I realized that I missed a step, namely checking that the last version in the upstream debian repo is effectively the latest released. It is not in this case, I...
[10:23:31] 10Analytics, 10Analytics-Cluster, 10User-Elukey: Monitoring GPU Usage on stat Machines - https://phabricator.wikimedia.org/T251938 (10elukey)
[10:27:17] 10Analytics, 10Analytics-Cluster, 10User-Elukey: Monitoring GPU Usage on stat Machines - https://phabricator.wikimedia.org/T251938 (10elukey) I see `/opt/rocm-3.3.0/bin/rocm-smi --showpids` that could help, will investigate!
[10:34:39] fdans: o/ I am going out for lunch but ping me on the phone if you need help
[10:35:06] elukey: thanks! have a good pranzo
[10:35:17] grazie!
[11:01:01] (03PS3) 10Awight: Clean up table persistence [analytics/wmde/TW/edit-conflicts] - 10https://gerrit.wikimedia.org/r/596225
[11:01:50] (03PS1) 10Awight: Strict linting [analytics/wmde/TW/edit-conflicts] - 10https://gerrit.wikimedia.org/r/596413
[11:15:51] Hi team
[11:18:56] (03CR) 10Thiemo Kreuz (WMDE): [C: 03+2] Strict linting (031 comment) [analytics/wmde/TW/edit-conflicts] - 10https://gerrit.wikimedia.org/r/596413 (owner: 10Awight)
[11:22:02] So, one day soon I might have new, high-quality data about edit conflicts. Should I be thinking about publishing some subset of that publicly? (T252507)
[11:22:04] T252507: TwoColConflict analytics April refresh - https://phabricator.wikimedia.org/T252507
[11:22:59] awight: as long as the data you rely on is public already, I don't see why not
[11:25:02] joal: Most of this comes from new eventlogging streams, so no I don't think it's been public before now. And of course, some of it must be excluded like the joined, high-resolution EditAttemptStep records.
[11:25:33] right awight - in that case a formal review from WMF-Legal seems the correct approach :)
[11:26:46] I'll write up what I'm imagining on-wiki or in a task, thanks for the pointer :)
[11:39:33] (03CR) 10Awight: Strict linting (031 comment) [analytics/wmde/TW/edit-conflicts] - 10https://gerrit.wikimedia.org/r/596413 (owner: 10Awight)
[11:39:36] (03CR) 10Awight: [V: 03+2] Strict linting [analytics/wmde/TW/edit-conflicts] - 10https://gerrit.wikimedia.org/r/596413 (owner: 10Awight)
[11:45:51] 10Analytics, 10MediaWiki-extensions-WikibaseRepository, 10Wikidata, 10wikidata-tech-focus: ApiAction log in data lake doesn't record Wikibase API actions - https://phabricator.wikimedia.org/T174474 (10Aklapper)
[11:45:54] 10Analytics, 10MediaWiki-API, 10CPT Initiatives (Modern Event Platform (TEC2)), 10Patch-For-Review, 10User-Addshore: Run ETL for wmf_raw.ActionApi into wmf.action_* aggregate tables - https://phabricator.wikimedia.org/T137321 (10Aklapper)
[11:48:03] (03PS4) 10Awight: Clean up table persistence [analytics/wmde/TW/edit-conflicts] - 10https://gerrit.wikimedia.org/r/596225
[11:52:51] 10Analytics, 10Analytics-Cluster, 10User-Elukey: Monitoring GPU Usage on stat Machines - https://phabricator.wikimedia.org/T251938 (10Aroraakhil) Thanks much!
[12:02:41] Q: How can I reset my Kerberos password for the stat machines?
[12:03:36] (03PS1) 10Awight: Embed Jupyter kernel template [analytics/wmde/TW/edit-conflicts] - 10https://gerrit.wikimedia.org/r/596421
[12:07:08] (03PS2) 10Awight: Embed Jupyter kernel template [analytics/wmde/TW/edit-conflicts] - 10https://gerrit.wikimedia.org/r/596421
[12:13:28] A: Seems I could recall it now :)
[12:16:40] Where can I find the debug log for my jupyter kernel? Something's not right...
[12:23:37] awight: what happens? :)
[12:23:51] kart_: good :)
[12:24:16] awight: in any case, you can file a bug with the Analytics tag
[12:24:22] but we can also discuss it in here
[12:25:47] elukey: Thanks. Yeah it's not urgent, just a slight inconvenience. I've been using a custom kernel copied from joal, which worked well until I added one more JAR to its classpath, so I'm sure it's something simple.
[12:25:56] Now it reboots in a kernel death loop.
[12:26:34] I found the command line in ps and am trying to run it from the console...
[12:27:10] /facepalm
[12:27:25] I used spaces where I should have had a comma.
[12:27:51] awight: I would have suggested checking that first :)
[12:28:52] elukey: shall we deploy aqs?
[12:29:42] joal: sure!
[12:30:06] joal: aqs1004 depooled and ready
[12:30:14] yup, ack elukey
[12:31:20] all good elukey
[12:35:00] :100%: Kernel works now, thanks for the friendly ear :-)
[12:35:39] (03PS3) 10Awight: Embed Jupyter kernel template [analytics/wmde/TW/edit-conflicts] - 10https://gerrit.wikimedia.org/r/596421
[12:37:04] 10Analytics, 10Analytics-Kanban: Move Archiva to Debian Buster - https://phabricator.wikimedia.org/T252767 (10elukey)
[12:38:30] I have a data pipeline which we'll only use one-off now and then, but it creates spark parquet dirs of data which is expensive to calculate. Can you recommend a HDFS path where I can leave this for a few years? Currently in /tmp/awight.
[12:39:14] awight: definitely not /tmp, it'll get deleted every now and then :)
[12:39:15] --also, we may want to access this data from notebooks from time to time, which is why I'm thinking medium-term persistence might make sense.
[12:39:27] Good tip!
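The kernel fix awight lands on above ([12:27:25]–[12:27:51]) is a common Spark pitfall: `--jars` (and the `spark.jars` property in a kernel spec) takes a single comma-separated list, while space-separated paths are parsed as separate arguments. A minimal sketch, with hypothetical paths and class name:

```
# Broken: the second jar is read as the application jar / an extra argument,
# not as a dependency (hypothetical paths).
#   spark2-submit --jars /srv/jars/dep.jar /srv/jars/extra.jar --class org.example.Job job.jar

# Working: one comma-separated value, no spaces between the jars.
spark2-submit \
  --jars /srv/jars/dep.jar,/srv/jars/extra.jar \
  --class org.example.Job \
  job.jar
```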
[12:39:32] awight: your personal folder on HDFS won't get touched (except if you leave)
[12:39:54] awight: /user/awight
[12:40:20] haha okay ty. I'll ask at WMDE whether there's any appetite for keeping our stuff in a shared path
[12:40:41] ack awight - if so, a ticket with explanations will help
[12:40:51] +1
[12:43:59] elukey: all good for AQS?
[12:44:27] joal: I am doing the roll restart
[12:44:47] ack elukey - since you didn't answer my previous ping, I didn't know where we were :)
[12:44:54] thanks elukey :)
[12:45:36] ah sorry I am creating tasks :)
[12:46:00] no prob elukey - /me is being insistent :)
[12:46:45] joal: all done!
[12:46:50] \o/
[12:46:52] checking in UI
[12:47:25] elukey: I like those moments when UI is just a little slower because of data recomputing :)
[12:48:28] poor Druid
[12:48:58] elukey: I warm it up before we swap ;)
[12:49:00] I am about to create the task to add 4 new druid nodes
[12:49:10] All good elukey - Thanks again :)
[12:49:39] !log Release 2020-04 mediawiki_history_reduced to public druid for AQS (elukey did it :-P)
[12:49:40] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[12:49:47] MOAR DRUIDS :)
[12:49:56] elukey: shall we plan to bump to 0.18 as well?
[12:50:34] elukey: there is some testing I'd like to do - if perfs change a lot for group-by (I don't expect but eh, who knows), maybe some of our mediawiki-reduced discussion can be straight solved
[12:54:08] joal: I'd prefer to just concentrate on upgrading to Buster + adding nodes for the moment, and do 0.18 next q
[12:54:17] no problemo elukey :)
[12:54:19] basically in 15 days
[12:54:47] elukey: 1 month and a half ;)
[12:55:06] elukey: there still is time this quarter
[12:55:44] elukey: Thanks for the documentation update :)
[12:56:07] ah yes June is still Q4
[12:56:09] also elukey - I'm done on notebook machines :)
[12:56:21] Thanks for the gentle push :)
[12:56:23] nono in June I should be able to work on Druid 0.18
[12:56:35] just gimme 2 weeks to work on Buster first :D
[12:56:41] otherwise too many moving parts
[12:56:51] (new hosts, buster, etc..)
[12:56:57] please elukey - You tell me when you're ready :)
[12:58:15] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review: Move Archiva to Debian Buster - https://phabricator.wikimedia.org/T252767 (10elukey) The disk usage is very interesting: https://grafana.wikimedia.org/d/000000377/host-overview?panelId=12&fullscreen&orgId=1&refresh=5m&var-server=archiva1001&var-datasource...
[12:58:22] joal: --^ is interesting
[12:58:32] looking
[12:58:50] elukey: broken link :)
[12:59:45] elukey: how come there are regular spikes like that?
[13:02:33] joal: what spikes?
[13:03:09] what I am worried about is the disk space
[13:03:16] Ah
[13:03:32] there is also a cron that andrew created for symlinks etc..
[13:03:53] it is probably that one that causes some load, but nothing problematic in my opinion
[13:04:06] /var/lib/archiva is ~90G
[13:04:08] ok makes sense
[13:04:28] and it grew +8G during the past 5 months IIUC
[13:04:31] elukey: we should do some cleaning of cached jars I guess
[13:04:46] joal: is it ok? Otherwise I can ask for a new vm with 200G
[13:04:53] (the one with buster)
[13:05:07] it is better to do a clean up if we have unused jars
[13:05:19] but if they are useful, let's keep them
[13:06:01] elukey: cleaning should be ok (first requests for jars will be slower, and that's it) - We should probably nonetheless get more storage, as we are asking others (Discovery, notably) to use the same pattern we have, which is using an archiva-only repo
[13:06:54] joal: what I mean is cleaning up old refinery jars for example, the ones that we don't use anymore
[13:07:00] they shouldn't be requested again, no?
[13:07:28] agreed elukey - We should also clean the cache - I'm pretty sure some jars are here that will not be used ever again
[13:09:05] actually, maybe the cache cleans itself by default...
[13:17:08] 10Analytics: Add new Druid nodes to analytics and public clusters - https://phabricator.wikimedia.org/T252771 (10elukey)
[13:22:26] 10Analytics: Add new Druid nodes to analytics and public clusters - https://phabricator.wikimedia.org/T252771 (10elukey) On druid1001: ` /dev/mapper/druid1001--vg-druid 2.9T 1.2T 1.6T 43% /var/lib/druid ` A clean solution could be this - for each node: - disable puppet - we stop all daemons (could be coup...
[13:23:34] 10Analytics, 10Analytics-Kanban: Move the Analytics infrastructure to Debian Buster - https://phabricator.wikimedia.org/T234629 (10elukey)
[13:26:42] (03PS1) 10Awight: Make task-numbered notebook more self-explanatory [analytics/wmde/TW/edit-conflicts] - 10https://gerrit.wikimedia.org/r/596435
[13:27:02] (03PS1) 10Awight: Move data to semi-permanent path [analytics/wmde/TW/edit-conflicts] - 10https://gerrit.wikimedia.org/r/596436
[13:28:00] (03PS1) 10Awight: Helper to lookup namespace names by ID [analytics/wmde/TW/edit-conflicts] - 10https://gerrit.wikimedia.org/r/596438
[13:36:23] 10Analytics, 10Analytics-Kanban, 10Event-Platform: Evaluate possible replacements for Camus: Gobblin, Marmaray, etc. - https://phabricator.wikimedia.org/T238400 (10Ottomata) I haven't tried Gobblin or Marmamay yet, but here are some quick thoughts. Gobblin runs jobs in MapReduce, much like Camus. It is a h...
[13:37:20] 10Analytics: Add new Druid nodes to analytics and public clusters - https://phabricator.wikimedia.org/T252771 (10Ottomata) +1
[13:39:26] * elukey afk for a bit
[13:41:37] 10Analytics, 10Event-Platform, 10Inuka-Team (Kanban), 10KaiOS-Wikipedia-app (MVP), 10Patch-For-Review: Capture and send back client-side errors - https://phabricator.wikimedia.org/T248615 (10AMuigai) 05Open→03Resolved
[13:47:03] (03PS5) 10Awight: Clean up table persistence [analytics/wmde/TW/edit-conflicts] - 10https://gerrit.wikimedia.org/r/596225
[13:47:06] (03PS1) 10Awight: Remove known buggy conflicts [analytics/wmde/TW/edit-conflicts] - 10https://gerrit.wikimedia.org/r/596439
[13:49:33] 10Analytics, 10Analytics-Kanban, 10Event-Platform: Evaluate possible replacements for Camus: Gobblin, Marmaray, Kafka Connect HDFS, etc. - https://phabricator.wikimedia.org/T238400 (10Ottomata)
[13:51:23] (03PS2) 10Awight: Remove known buggy conflicts [analytics/wmde/TW/edit-conflicts] - 10https://gerrit.wikimedia.org/r/596439
[14:06:19] 10Analytics, 10Product-Analytics: [Spike] Should EventLogging support DNT? - https://phabricator.wikimedia.org/T252438 (10mforns) I think we should respect DNT. I know at the WMF we take our user's privacy very seriously. Even if we collect DNT events, I know that the information that we keep will never be us...
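Before cleaning the Archiva cache discussed above ([13:04:06]–[13:09:05]), it helps to measure what is actually taking the ~90G. A quick check along these lines (the subdirectory layout under /var/lib/archiva is an assumption; adjust to what is on disk):

```
# Total size of the Archiva data directory mentioned above.
sudo du -sh /var/lib/archiva

# Largest entries underneath it, as candidates for cached-jar cleanup.
sudo du -h --max-depth=2 /var/lib/archiva 2>/dev/null | sort -rh | head -20
```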
[14:14:13] 10Analytics, 10Operations: Move kafkamon hosts to Debian Buster - https://phabricator.wikimedia.org/T252773 (10elukey)
[14:20:32] 10Analytics, 10Analytics-Kanban, 10Event-Platform: Evaluate possible replacements for Camus: Gobblin, Marmaray, Kafka Connect HDFS, etc. - https://phabricator.wikimedia.org/T238400 (10Ottomata) A con against Kafka Connect: It does not run natively in Yarn. We'd have to run it in k8s or figure out how to run...
[14:25:28] 10Analytics, 10Event-Platform: Replace Camus with Kafka Connect for event data imports - https://phabricator.wikimedia.org/T223628 (10Ottomata) Merging this into a parent task about replacing Camus with ____.
[14:25:47] 10Analytics, 10Event-Platform: Replace Camus with Kafka Connect for event data imports - https://phabricator.wikimedia.org/T223628 (10Ottomata)
[14:26:38] 10Analytics, 10Event-Platform: Kafka Connect development work - https://phabricator.wikimedia.org/T223626 (10Ottomata) Merging this into the ingestion framework evaluation task.
[14:26:55] 10Analytics, 10Event-Platform: Kafka Connect development work - https://phabricator.wikimedia.org/T223626 (10Ottomata)
[14:26:57] 10Analytics, 10Analytics-Kanban, 10Event-Platform: Evaluate possible replacements for Camus: Gobblin, Marmaray, Kafka Connect HDFS, etc. - https://phabricator.wikimedia.org/T238400 (10Ottomata)
[14:27:11] 10Analytics, 10Analytics-Kanban, 10Event-Platform: Evaluate possible replacements for Camus: Gobblin, Marmaray, Kafka Connect HDFS, etc. - https://phabricator.wikimedia.org/T238400 (10Ottomata)
[14:27:15] 10Analytics, 10Analytics-EventLogging, 10Event-Platform, 10Goal, 10Services (watching): Modern Event Platform: Stream Connectors - https://phabricator.wikimedia.org/T214430 (10Ottomata)
[14:35:58] 10Analytics, 10Analytics-Kanban: Add new kafka brokers kafka-jumbo100[789] to the jumbo-eqiad Kafka cluster - https://phabricator.wikimedia.org/T252675 (10Ottomata)
[14:38:12] 10Analytics, 10Analytics-Kanban: Corrupted parquet statistics when querying webrequest data via Superset/Presto - https://phabricator.wikimedia.org/T251231 (10Nuria) 05Open→03Resolved
[14:38:22] 10Analytics, 10Analytics-Kanban: Corrupted parquet statistics when querying webrequest data via Superset/Presto - https://phabricator.wikimedia.org/T251231 (10Nuria)
[14:40:06] 10Analytics, 10Analytics-Kanban: Geoeditors job is faliing due to problems with geo udf - https://phabricator.wikimedia.org/T252205 (10Nuria) This needs a deployment to be effective, did we deployed and re-started the job?
[14:40:23] 10Analytics, 10Analytics-Kanban: Troubleshoot EventLogging sanitization immediate - https://phabricator.wikimedia.org/T251794 (10Nuria) 05Open→03Resolved
[14:40:34] 10Analytics, 10Analytics-Kanban: Troubleshoot EventLogging sanitization immediate - https://phabricator.wikimedia.org/T251794 (10Nuria)
[14:40:58] 10Analytics, 10Analytics-Kanban, 10Analytics-SWAP, 10Product-Analytics, 10User-Elukey: pip not accessible in new SWAP virtual environments - https://phabricator.wikimedia.org/T247752 (10nshahquinn-wmf) FYI, @Mayakp.wiki was blocked by this when trying to switch one of the stat machines in response to T24...
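For context on the Kafka Connect option weighed in T238400 above ([14:20:32]): with the Confluent HDFS sink, ingestion is registered through Connect's REST API rather than scheduled like Camus. A rough sketch only; the host, topic, and settings below are placeholders and this was not deployed here:

```
# Rough sketch: register a Confluent HDFS sink connector via the Kafka Connect
# REST API. All values are placeholders; schema/converter handling is the part
# that would still need custom work, as discussed in T238400.
curl -s -X POST http://localhost:8083/connectors \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "webrequest-hdfs-sink",
    "config": {
      "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
      "tasks.max": "4",
      "topics": "webrequest_text",
      "hdfs.url": "hdfs://analytics-hadoop",
      "flush.size": "100000",
      "hive.integration": "true",
      "hive.metastore.uris": "thrift://metastore.example.net:9083",
      "partitioner.class": "io.confluent.connect.hdfs.partitioner.HourlyPartitioner"
    }
  }'
```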
[14:41:09] 10Analytics, 10Analytics-Kanban: Corrupted parquet statistics when querying webrequest data via Superset/Presto - https://phabricator.wikimedia.org/T251231 (10elukey) 05Resolved→03Open
[14:41:12] 10Analytics, 10Analytics-Kanban: Unify puppet roles for stat and notebook hosts - https://phabricator.wikimedia.org/T243934 (10Nuria)
[14:41:15] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review, 10User-Elukey: Add SWAP profile to stat1005 - https://phabricator.wikimedia.org/T245179 (10Nuria) 05Open→03Resolved
[14:41:30] 10Analytics, 10Analytics-Kanban: Corrupted parquet statistics when querying webrequest data via Superset/Presto - https://phabricator.wikimedia.org/T251231 (10elukey) Before closing, let's figure out if this is a Presto bug or something else in our parquet files :)
[14:42:36] 10Analytics, 10Analytics-Kanban: Automated deletion of actor data for bot prediction after 90 days - https://phabricator.wikimedia.org/T247344 (10Nuria)
[14:42:43] 10Analytics, 10Analytics-Kanban: Automated deletion of actor data for bot prediction after 90 days - https://phabricator.wikimedia.org/T247344 (10Nuria) 05Open→03Resolved
[14:42:47] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review: Create UDF for actor id generation - https://phabricator.wikimedia.org/T247342 (10Nuria)
[14:43:20] 10Analytics, 10Research-Backlog: [Open question] Improve bot identification at scale - https://phabricator.wikimedia.org/T138207 (10Nuria)
[14:43:24] 10Analytics: Label high volume bot spikes in pageview data as automated traffic - https://phabricator.wikimedia.org/T238357 (10Nuria) 05Open→03Resolved
[14:43:44] 10Analytics, 10Pageviews-API, 10Pageviews-Anomaly: "Venuše (planeta)" on cs.wp has surprisingly high numbers in Pageviews Analysis (and also Topviews Analysis) - https://phabricator.wikimedia.org/T239532 (10Nuria) 05Open→03Resolved
[14:43:47] 10Analytics: Label high volume bot spikes in pageview data as automated traffic - https://phabricator.wikimedia.org/T238357 (10Nuria)
[14:44:14] elukey: should we maybe refactor the java puppet class now and also use it for kafka broker? i can do that real quick I think
[14:44:43] 10Analytics, 10Analytics-Kanban, 10Operations, 10Traffic, 10Patch-For-Review: Remove North Korea from data quality traffic entropy reports - https://phabricator.wikimedia.org/T251546 (10Nuria) 05Open→03Resolved
[14:44:59] 10Analytics, 10Analytics-Kanban: Make sqoop run in production queue - https://phabricator.wikimedia.org/T249155 (10Nuria) 05Open→03Resolved
[14:45:35] ottomata: there is the question of the file.encoding thing, it will get applied in multiple places that we might not want
[14:45:38] in theory it is harmless
[14:45:56] but we use it only for analytics afaics
[14:46:31] but if you feel super strongly about it ok
[14:46:34] 10Analytics, 10Analytics-Kanban: Kerberos-run-command doesn't work with spark-submit [workaround] - https://phabricator.wikimedia.org/T250161 (10Nuria) 05Open→03Resolved
[14:46:54] yeah i think we can do both
[14:47:05] java_8 class, but keep analytics class that includes java_8 class
[14:47:37] 10Analytics, 10Analytics-Kanban: Unify puppet roles for stat and notebook hosts - https://phabricator.wikimedia.org/T243934 (10Nuria)
[14:47:39] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review: Unify stat1007 puppet role with the rest of the stats cluster - https://phabricator.wikimedia.org/T249754 (10Nuria) 05Open→03Resolved
[14:48:13] ottomata: ack better option yes
[14:49:49] 10Analytics, 10Analytics-Kanban: Unify puppet roles for stat and notebook hosts - https://phabricator.wikimedia.org/T243934 (10elukey) Everything is done except decommissioning the notebook hosts, that can be done separately (there is a subtask about it).
[14:49:57] 10Analytics, 10Analytics-Kanban: Unify puppet roles for stat and notebook hosts - https://phabricator.wikimedia.org/T243934 (10elukey)
[14:50:24] \o/
[14:50:54] 10Analytics, 10Analytics-Kanban, 10Research, 10Patch-For-Review: covid19 data preservation - https://phabricator.wikimedia.org/T248600 (10Nuria) 05Open→03Resolved
[14:51:57] 10Analytics, 10Analytics-Kanban, 10LDAP-Access-Requests, 10Operations, 10Patch-For-Review: LDAP access to the wmf group for Segun Oworu (superset, turnilo, hue) - https://phabricator.wikimedia.org/T252703 (10Dzahn) @soworu You have been added to the "wmf" group. You should now be able to login.
[14:52:36] 10Analytics, 10Analytics-Kanban, 10Privacy Engineering, 10Privacy, and 3 others: Identify pending analyses needing access to data older than 90 days - https://phabricator.wikimedia.org/T250857 (10Nuria) ping on this, while there is no deadline i was hoping to resolve this issue before the end of quarter if...
[14:53:11] 10Analytics, 10Product-Analytics, 10User-Elukey: Learn how to make dashboard on top of data on hadoop/hive via presto - https://phabricator.wikimedia.org/T247329 (10Nuria)
[14:54:59] 10Analytics, 10Analytics-Kanban: Legend for 12 months is not correct in graphs of analytics - https://phabricator.wikimedia.org/T238894 (10Nuria) 05Open→03Resolved
[14:55:38] 10Analytics, 10Analytics-Kanban, 10LDAP-Access-Requests, 10Operations, 10Patch-For-Review: LDAP access to the wmf group for Segun Oworu (superset, turnilo, hue) - https://phabricator.wikimedia.org/T252703 (10Dzahn) @elukey @Ottomata Please sync soworu's user for Hue access (needed per https://wikitech.wi...
[14:58:49] 10Analytics, 10Analytics-Kanban, 10LDAP-Access-Requests, 10Operations, 10Patch-For-Review: LDAP access to the wmf group for Segun Oworu (superset, turnilo, hue) - https://phabricator.wikimedia.org/T252703 (10Ottomata) For Hue the user must also have shell access and be added to the `analytics-privatedata...
[14:58:56] 10Analytics-Kanban, 10Better Use Of Data, 10Product-Analytics: Experiment with Druid and SqlAlchemy - https://phabricator.wikimedia.org/T249681 (10Nuria) >Got Druid error while using pageviews_hourly table and group by project and countries: This is expected as sql access uses presto which will not be able t...
[14:59:11] 10Analytics-Kanban, 10Better Use Of Data, 10Product-Analytics: Superset Updates - https://phabricator.wikimedia.org/T211706 (10Nuria)
[14:59:13] 10Analytics-Kanban, 10Better Use Of Data, 10Product-Analytics: Experiment with Druid and SqlAlchemy - https://phabricator.wikimedia.org/T249681 (10Nuria) 05Open→03Resolved
[14:59:24] elukey: https://gerrit.wikimedia.org/r/c/operations/puppet/+/596460
[15:01:26] ping elukey milimetric mforns
[15:02:25] sorry, lil late nuria
[15:12:28] (03PS3) 10Awight: Remove known buggy conflicts [analytics/wmde/TW/edit-conflicts] - 10https://gerrit.wikimedia.org/r/596439
[15:12:31] (03PS4) 10Awight: Count displayed rows [analytics/wmde/TW/edit-conflicts] - 10https://gerrit.wikimedia.org/r/596145 (https://phabricator.wikimedia.org/T252507)
[15:20:23] 10Analytics, 10Analytics-Kanban: Move the Analytics infrastructure to Debian Buster - https://phabricator.wikimedia.org/T234629 (10elukey)
[15:24:52] 10Analytics, 10Operations, 10ops-codfw: furud mgmt interface is down - https://phabricator.wikimedia.org/T252616 (10Papaul) 05Open→03Resolved It was a loosed cable . ` PING 10.193.1.42 (10.193.1.42) 56(84) bytes of data. 64 bytes from 10.193.1.42: icmp_seq=1 ttl=62 time=2.08 ms 64 bytes from 10.193.1.42...
[15:28:30] 10Analytics, 10Analytics-Kanban, 10LDAP-Access-Requests, 10Operations: LDAP access to the wmf group for Segun Oworu (superset, turnilo, hue) - https://phabricator.wikimedia.org/T252703 (10Dzahn) Oh, thanks for pointing that out @Ottomata In this case i am not sure if that is really needed because the re...
[15:29:14] (03CR) 10Awight: [C: 04-1] "Even though the data is temporarily stranded under my home dir, it still deserves more namespacing:" [analytics/wmde/TW/edit-conflicts] - 10https://gerrit.wikimedia.org/r/596436 (owner: 10Awight)
[15:38:56] 10Analytics, 10Analytics-Kanban, 10LDAP-Access-Requests, 10Operations: LDAP access to the wmf group for Segun Oworu (superset, turnilo, hue) - https://phabricator.wikimedia.org/T252703 (10Nuria) >@soworu You will have superset and turnilo This is all you need to access all data to be clear.
[15:39:53] 10Analytics, 10Analytics-Kanban, 10LDAP-Access-Requests, 10Operations: LDAP access to the wmf group for Segun Oworu (superset, turnilo, hue) - https://phabricator.wikimedia.org/T252703 (10Dzahn) 05Open→03Resolved Alright, thanks Nuria for clarification. I will claim it's resolved then. If any issues f...
[15:50:31] 10Analytics, 10Event-Platform, 10Inuka-Team (Kanban), 10KaiOS-Wikipedia-app (MVP), 10Patch-For-Review: Capture and send back client-side errors - https://phabricator.wikimedia.org/T248615 (10Ottomata) @SBison can you add your code to the list of clients here? https://wikitech.wikimedia.org/wiki/Event_Pla...
[16:22:02] Hi @elukey did you see my reply in Phab about getting write access to /srv/published on the new Jupyter servers? I'm blocked until I can move my reports to that dir. Thanks.
[16:26:27] 10Analytics, 10Event-Platform, 10MediaWiki-Vagrant: EventLogging vagrant role fails to provision - https://phabricator.wikimedia.org/T252794 (10DLynch) I'd speculate it's to do with the changes from T240355.
[16:40:27] snowick: hi! I saw it but didn't get time to investigate yet
[16:40:50] snowick: on what host are you trying?
[16:41:03] @elukey stat1007
[16:41:40] (03PS5) 10Awight: Analyze row count [analytics/wmde/TW/edit-conflicts] - 10https://gerrit.wikimedia.org/r/596188 (https://phabricator.wikimedia.org/T252507)
[16:41:43] (03PS1) 10Awight: Refresh notebooks with April 2020 data [analytics/wmde/TW/edit-conflicts] - 10https://gerrit.wikimedia.org/r/596480 (https://phabricator.wikimedia.org/T252507)
[16:42:00] snowick: and what commands are you executing? (trying to repro)
[16:42:36] if they are on the task I'll check them
[16:42:42] >cd /srv/published >mkdir dashboards
[16:44:34] strange, you are in the wikidev group and you should be able to create a dir
[16:44:45] @elukey I think we had this problem before on another server because I wasn't in the right user group so didn't have the right permissions
[16:45:22] snowick: I just sudoed as your user, and I was able to create the dir
[16:45:26] a test dir sorry
[16:45:32] what error do you get?
[16:45:56] hang on let me try again
[16:48:01] ok, it's working in the terminal, sorry I can't repro the error but glad it works, thanks for checking
[16:49:47] snowick: super, glad that it works!
[16:50:32] me too, don't know what was wrong before but I'll let you know when my files are off the notebook servers
[16:51:45] snowick: yes please no rush, I just pinged people two weeks in advance as a reminder, you can do it even next week
[16:51:59] I didn't mean to push you now :)
[16:52:28] ok good I prob won't get it done til then, no problem
[16:58:11] * elukey afk for a bit
[17:00:48] elukey: did you know about https://grafana.wikimedia.org/explore?orgId=1&left=%5B%22now-1h%22,%22now%22,%22eqiad%20prometheus%2Fk8s%22,%7B%7D,%7B%22mode%22:%22Metrics%22%7D,%7B%22ui%22:%5Btrue,true,true,%22none%22%5D%7D%5D
[17:00:49] oops
[17:00:50] no
[17:00:53] https://grafana.wikimedia.org/explore
[17:00:55] !?!?!??!
[17:00:58] i just learned about it
[17:01:05] this is so much better than tunnelling to the prometheus UI
[17:01:14] (03CR) 10Thiemo Kreuz (WMDE): [C: 03+2] Make task-numbered notebook more self-explanatory [analytics/wmde/TW/edit-conflicts] - 10https://gerrit.wikimedia.org/r/596435 (owner: 10Awight)
[17:02:14] ottomata: i must not have permissions for that cause i do not see anything different
[17:02:37] (03CR) 10Thiemo Kreuz (WMDE): ""Temporarily stranded under my home dir". 😆" [analytics/wmde/TW/edit-conflicts] - 10https://gerrit.wikimedia.org/r/596436 (owner: 10Awight)
[17:03:12] you have to log into grafana probably
[17:03:22] but you should be able to with your ldap
[17:03:36] wow ottomata - Thanks for that link !!!
[17:09:19] if only we could get a grafana druid datasource working!
[17:10:19] ottomata: have we even tried?
[17:10:21] https://grafana.com/grafana/plugins/abhisant-druid-datasource/installation
[17:10:22] (03CR) 10Thiemo Kreuz (WMDE): Helper to lookup namespace names by ID (033 comments) [analytics/wmde/TW/edit-conflicts] - 10https://gerrit.wikimedia.org/r/596438 (owner: 10Awight)
[17:11:15] i have looked several times for a good druid datasource plugin but they never worked because they were too old
[17:11:23] dunno if this is the one i tried or not
[17:11:27] ack ottomata
[17:11:31] (03CR) 10Thiemo Kreuz (WMDE): "Uh, I honestly can't say anything about this. Feel free to merge anyhow." [analytics/wmde/TW/edit-conflicts] - 10https://gerrit.wikimedia.org/r/596421 (owner: 10Awight)
[17:19:30] 10Analytics: Decomission notebook hosts - https://phabricator.wikimedia.org/T249752 (10Iflorez) Hi @elukey I'll transfer files and shut down notebooks over the next few days. I'll check in on Tuesday with an update or questions if any. Thank you!
[17:36:17] 10Analytics, 10Event-Platform, 10Inuka-Team (Kanban), 10KaiOS-Wikipedia-app (MVP), 10Patch-For-Review: Capture and send back client-side errors - https://phabricator.wikimedia.org/T248615 (10SBisson) >>! In T248615#6137559, @Ottomata wrote: > @SBisson can you add your code to the list of clients here? ht...
[17:39:57] 10Analytics, 10Event-Platform, 10Inuka-Team (Kanban), 10KaiOS-Wikipedia-app (MVP), 10Patch-For-Review: Capture and send back client-side errors - https://phabricator.wikimedia.org/T248615 (10Ottomata) Thank you!
[17:48:41] joal: btw did you see that hadoop 3 has built-in slider-like functionality that can run docker images? :o
[18:25:20] 10Analytics, 10Event-Platform, 10MediaWiki-Vagrant: EventLogging vagrant role fails to provision - https://phabricator.wikimedia.org/T252794 (10Tgr)
[18:25:22] 10Analytics, 10Analytics-EventLogging, 10MediaWiki-Vagrant: eventlogging vagrant role: 'ParsedRequirement' object has no attribute 'req' - https://phabricator.wikimedia.org/T251864 (10Tgr)
[18:26:08] 10Analytics, 10Event-Platform, 10MediaWiki-Vagrant: EventLogging vagrant role fails to provision - https://phabricator.wikimedia.org/T252794 (10Tgr) I'll remove the broken library when I get around to it. The error doesn't really break anything though.
[18:38:41] (03CR) 10Awight: [V: 03+2] Make task-numbered notebook more self-explanatory [analytics/wmde/TW/edit-conflicts] - 10https://gerrit.wikimedia.org/r/596435 (owner: 10Awight)
[18:41:22] 10Analytics, 10Analytics-Kanban: Add TLS to Kafka Mirror Maker - https://phabricator.wikimedia.org/T250250 (10Ottomata) Ah! Since profile::kafka::mirror `ssl.keystore.location`, it will attempt to authenticate with Kafka. User:ANONYMOUS is allowed to do anything, but if a client authenticates and their princ...
[18:41:43] !log fixed TLS authentication for Kafka mirror maker on jumbo - T250250
[18:41:45] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[18:41:46] T250250: Add TLS to Kafka Mirror Maker - https://phabricator.wikimedia.org/T250250
[19:01:15] 10Analytics: Decomission notebook hosts - https://phabricator.wikimedia.org/T249752 (10MMiller_WMF) @elukey -- I tried to do this today, and I rsynced my home directory from notebook1004 to stat1005. Now that I try to run my notebooks there, I think not all the packages I'm used to are installed there, for inst...
[19:08:33] 10Analytics: Decomission notebook hosts - https://phabricator.wikimedia.org/T249752 (10Ottomata) @MMiller_WMF pip install pandasql? I should hope folks aren't trying to rsync their jupyter virtualenvs, that'll almost surely break things :)
[19:12:24] 10Analytics: Decomission notebook hosts - https://phabricator.wikimedia.org/T249752 (10MMiller_WMF) @Ottomata -- I rsynced my whole home directory. Does that mean I rsynced my jupyter virtualenv? Here's what I got when I tried to pip install: {F31818387}
[19:12:48] 10Analytics, 10Analytics-Kanban: Add TLS to Kafka Mirror Maker - https://phabricator.wikimedia.org/T250250 (10elukey) >>! In T250250#6138068, @Ottomata wrote: > Ah! Since profile::kafka::mirror sets `ssl.keystore.location`, kafka-mirror will attempt to authenticate with Kafka. User:ANONYMOUS is allowed to do...
[19:13:27] ottomata: so mirror maker didn't work all this time?
[19:13:51] elukey: it was running, but i think it hadn't bounced since your change
[19:13:58] I remember restarting it on 1001, but at this point I did a sloppy change
[19:13:59] so the bounce on 1006 today picked up your new configs
[19:14:00] and it couldn't start
[19:14:14] what the hell, really strange
[19:14:27] I'll do more tests next time, sorry for the problem :)
[19:14:35] err it was meant to be :(
[19:15:47] 10Analytics: Decomission notebook hosts - https://phabricator.wikimedia.org/T249752 (10Ottomata) Yup, I think that will break stuff! Let me reset your venv on stat1005. I just stopped your Notebook Server too. Try to log back in and we'll see if you get a clean venv.
[19:16:22] elukey: no worries! glad we caught it before it really broke :)
[19:16:53] hey elukey Q did you give users specific instructions to rsync their homedirs from notebook boxes?
[19:16:54] well we'd have got alarms for lag I suppose, so we'd have got icinga screaming soonish hopefully
[19:17:26] well, only one was bounced so only one process went down, the other consumers picked up the slack :)
[19:17:43] ottomata: I did not, I was about to say that we should explicitly state limits for venvs on wikitech
[19:17:45] didn't think of this before either, but users should not rsync the venv
[19:17:47] yeah
[19:18:23] the page is https://wikitech.wikimedia.org/wiki/Analytics/Systems/Clients#Rsync_between_clients
[19:18:57] the example mentions stat1005 and notebook1004, that unfortunately is not great for jupyter
[19:19:27] but in theory between stretch-based hosts it should work
[19:19:29] no?
[19:19:34] I mean copying the venv
[19:21:22] i'll edit the task description.
[19:21:26] added --exclude venv to that doc too
[19:21:58] 10Analytics: Decomission notebook hosts - https://phabricator.wikimedia.org/T249752 (10Ottomata)
[19:23:02] thanks :)
[19:23:23] I just got https://github.com/wikimedia/operations-software-druid_exporter/pull/10/files for the new exporter
[19:23:41] druid 0.18 seems to have a 'router' daemon?
[19:24:09] https://druid.apache.org/docs/latest/design/router.html
[19:24:11] wow
[19:25:51] all right all seems good, logging off, ttl o/
[19:26:49] laters!
[19:32:48] 10Analytics: Decomission notebook hosts - https://phabricator.wikimedia.org/T249752 (10cchen) @Ottomata Hi Andrew, me and Jennifer rsynced the virtualenvs and ran into errors as well.. can you also reset our venv on stat1005? username is conniecc1 and jiawang Thank you!
[19:36:58] 10Analytics, 10Analytics-Kanban, 10Privacy Engineering, 10Privacy, and 3 others: Identify pending analyses needing access to data older than 90 days - https://phabricator.wikimedia.org/T250857 (10Mayakp.wiki) Just been informed that DiscussionTools data will be retained for more than 90 days, which could i...
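Putting the migration advice above together ([19:16:53]–[19:21:26] and the T249752 comments): copy the home directory without the Jupyter virtualenv and let SWAP rebuild it on the destination host. A sketch only; take the exact rsync invocation (module and path) from the wikitech page linked above rather than from here:

```
# Copy the old home dir, skipping the host-specific virtualenv.
# The source module/path below is a placeholder; see
# https://wikitech.wikimedia.org/wiki/Analytics/Systems/Clients#Rsync_between_clients
rsync -av --exclude venv notebook1004.eqiad.wmnet::home/$USER/ ~/

# If a venv was copied anyway (or is otherwise broken), move it aside, stop the
# Jupyter Notebook server, and log back into JupyterHub so SWAP recreates it.
mv ~/venv ~/venv-backup
```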
[20:09:00] ottomata: I'm just now reading https://phabricator.wikimedia.org/T241241#6088935 and thinking on it. It looks like you're decided and going forward, but if you want to brainbounce more I'm happy to.
[20:09:57] milimetric: btw that was made before https://phabricator.wikimedia.org/T251609
[20:10:00] so more context is in ^
[20:10:06] i think we don't really have a good solution
[20:10:19] yep, I'm reading that one now
[20:10:32] ok, yeah, and I think we need a great solution, so I'll think super hard :)
[20:10:35] we were going to try to put everything in stream config
[20:10:50] but that doesn't really work, because then we'd also need a way to ID which streams each eventgate instance is allowed to produce
[20:11:32] milimetric: i really don't like gobblin
[20:11:36] i think i am still biased though
[20:11:41] trying to be neutral... :)
[20:11:56] I think that was the opinion last time too, I never looked at it in any detail
[20:12:19] what don't you like about gobblin?
[20:14:09] it is just so complicated!
[20:14:19] maybe that is a good thing but it doesn't feel like it
[20:17:23] the docs are extensive, but i've been reading them for probably a total of 5 hours between yesterday and today and i'm still not sure how to make it do what camus does...
[20:22:16] hello analytics! do I need separate credentials to connect to each shard on the replicas? It looks like I can't get into anything other than s1-analytics-replica.eqiad.wmnet
[20:25:46] musikanimal: all the credential stuff should be handled by the analytics-mysql wrapper
[20:25:49] are you using that
[20:25:50] ?
[20:26:20] ottomata: wow, yea, gobblin seems to take a very comprehensive approach, even has data quality checks and other stuff built in
[20:26:47] not sure... I normally run `mysql --defaults-file=/etc/mysql/conf.d/analytics-research-client.cnf -hs1-analytics-replica.eqiad.wmnet -P 3311`, changing that to s2 or s8 just hangs
[20:28:20] musikanimal: you should be able to just do
[20:28:20] e.g.
[20:28:21] analytics-mysql itwiki
[20:28:25] and it will connect you to the right one
[20:28:29] https://wikitech.wikimedia.org/wiki/Analytics/Systems/MariaDB
[20:28:33] see MySQL wrapper
[20:28:49] beautiful!!! I did not know about that. Thank you :)
[20:29:52] ottomata: what Camus does is between https://gobblin.readthedocs.io/en/latest/user-guide/Compaction/ and https://gobblin.readthedocs.io/en/latest/user-guide/Partitioned-Writers/ it feels like, no?
[20:30:11] camus does compaction but we don't use it
[20:30:17] all we use, yeah, is partitioned writers
[20:30:32] but i mean, we are thinking bigger here, so doing MORE than what camus does is good
[20:30:52] kafka connect hdfs's hive integration is actually really good and easy to use
[20:31:27] it was pretty easy for me to set it up to automatically ingest kafka data (in a semi-streaming manner even!) and to have it create and evolve hive tables (using jsonschema with my converter plugin) AND have it auto add partitions
[20:31:49] gobblin can do that too i think
[20:31:50] https://gobblin.readthedocs.io/en/latest/user-guide/Hive-Registration/
[20:31:56] but we are going to have to write some code for sure
[20:33:43] yeah, it's clearly extensible but no idea what the ecosystem looks like and if there are good solutions already for our use cases
[20:34:03] but the code you have to write seems simple and cookie cutter... boring Java but at least simple
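To make the Gobblin option being discussed here a bit more concrete, this is roughly the shape of a Kafka-to-HDFS job definition pieced together from the upstream docs linked above; the class and property names are assumptions taken from those docs and none of this has been validated on our cluster:

```
# Unvalidated sketch of a Gobblin .pull job (Kafka -> HDFS); values and class
# names are assumptions for illustration, not a working job definition.
cat > kafka_webrequest.pull <<'EOF'
job.name=kafka_webrequest_to_hdfs
job.group=analytics

source.class=org.apache.gobblin.source.extractor.extract.kafka.KafkaSimpleSource
kafka.brokers=kafka-jumbo1001.eqiad.wmnet:9092
topic.whitelist=webrequest_text

writer.builder.class=org.apache.gobblin.writer.SimpleDataWriterBuilder
writer.file.path.type=tablename

data.publisher.type=org.apache.gobblin.publisher.TimePartitionedDataPublisher
data.publisher.final.dir=/wmf/data/raw/webrequest
EOF
```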
[20:34:16] the compaction handles late arrivals in a systematic way, that seems nice
[20:35:00] I read the architecture and all that makes sense, seems simple enough.
[20:36:27] 10Analytics: Decomission notebook hosts - https://phabricator.wikimedia.org/T249752 (10Ottomata) OK done. FYI, all you have to do to reset your venv is to delete your ~/venv directory (or move it out of the way). SWAP will recreate the venv from scratch if it doesn't exist if you stop your jupyter server and t...
[20:37:15] milimetric: i think I get what we'd need to implement to make it work
[20:37:43] i do not yet get the deployment models, or how to structure config. i am not sure of the difference between configs for the gobblin process vs configs for a job
[20:37:47] or if there is any difference
[20:43:01] yeah, it's weird where gobblin ends and where yarn or oozie or whatever pick up, it's almost like gobblin wants to do all of those things but is trying hard to hold itself back
[20:43:22] yeah
[20:43:38] and config management too
[20:43:39] https://gobblin.readthedocs.io/en/latest/user-guide/Config-Management/
[20:45:27] at retention
[20:45:28] https://gobblin.readthedocs.io/en/latest/data-management/Gobblin-Retention/
[20:45:32] i mean, maybe all this complexity is good
[20:45:34] not quite sure
[20:50:28] it seems to have lots of opinions, but no strong opinions, I don't hate the aesthetics so far, I guess it'd be good to test and get it to do actual work, yak shave for a bit, see how that feels
[20:55:39] 10Analytics: Decomission notebook hosts - https://phabricator.wikimedia.org/T249752 (10MMiller_WMF) Thanks, @Ottomata. I re-opened Jupyterhub, but now am not able to start a server: {F31818560}
[20:58:57] 10Analytics: Decomission notebook hosts - https://phabricator.wikimedia.org/T249752 (10jwang) @Ottomata, I met the same issue that MMiller_WMF had. Any idea? Thanks, Jennifer
[21:07:42] interesting, the Config-Management. It feels overwhelming to think about schemas, this, and the kind of stuff we have to put into Atlas. I think the challenge will be unifying as much of that as possible, defining once and automatically propagating to the other systems in a clear simple way.
[21:08:00] like schema -> job config -> metadata
[21:09:54] 10Analytics: Decomission notebook hosts - https://phabricator.wikimedia.org/T249752 (10Ottomata) Can you log out and log back in?
[21:15:16] 10Analytics: Decomission notebook hosts - https://phabricator.wikimedia.org/T249752 (10jwang) @Ottomata, after I logged out and logged in again, the issue is gone. thank you! Jennifer
[21:38:41] 10Analytics, 10Analytics-Wikistats: Implement inequality metrics for WikiStats - https://phabricator.wikimedia.org/T248964 (10Milimetric) Likewise, @Quasipodo.
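One closing note on the replica question musikanimal raised earlier ([20:22:16]–[20:28:49]): the wrapper looks up the right shard host and port per wiki, which is likely why pointing the hand-written command at s2/s8 appeared to hang. Both forms are taken from that exchange:

```
# Preferred: the wrapper resolves the correct analytics replica shard/port.
analytics-mysql itwiki

# What it replaces: connecting to one shard by hand (s1 shown; other sections
# live on different hosts/ports).
mysql --defaults-file=/etc/mysql/conf.d/analytics-research-client.cnf \
  -h s1-analytics-replica.eqiad.wmnet -P 3311
```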