[08:27:11] elukey: o/
[08:32:13] joal: o/
[08:32:45] elukey: Last week was my messy week ...
[08:33:17] I realised that my loading job for cassandra was still loading the old keyspace (the one we used for first tests)
[08:33:21] elukey: --^
[08:33:42] * joal cries and hides
[08:34:13] * elukey hugs joal
[08:34:18] I have restarted the loading and load-test, and hope that perf result won't drastically change
[08:38:11] fingers crossed
[08:41:29] elukey: now I have finished crying, how was your weekend?
[08:42:10] goood! Weather not super great but overall nice :)
[08:42:17] cool :)
[08:42:26] for once weather was great here :)
[08:42:32] I had some thoughts about vk, it bugs me too much :P
[08:42:39] huhuhu
[08:42:51] I know the feeling
[08:44:54] last patch should be ready to go but I would really love to add some unit testing to the codebase
[08:46:26] mobrovac: hello! Today I'd need to reboot kafka100[12] for kernel upgrades and it would be great to have you around when I do it
[08:46:38] I ran into a couple of kafka-producer issues with codfw
[08:47:11] I think I know why but for eqiad I'd prefer to proceed with more caution
[08:49:12] what's the procedure you plan to use?
[08:53:38] it's the one outlined in the Admin wiki, namely removing one server from LVS, make sure no traffic is flowing, stopping kafka, rebooting, ensuring kafka is ok and launching a preferred replica election, check EventBus, re-add the server to LVS
[08:54:06] and then of course same thing for the other host
[08:54:45] but last week I got a 500 for /v1/events (but not for /v1/topics) due to an issue pushing stuff to kafka. I restarted Event Logging and all went fine
[08:55:33] probably because zk needs to be told the server you are rebooting will no longer repond
[08:55:42] respond
[08:56:19] no no zk doesn't need to know anything, the producer should switch to another broker asking for metadata and broker partition leaders
[08:57:10] but I rebooted event bus multiple times and last week was the only time I got some issue
[08:57:11] aren't those kept in zk?
[08:58:19] kafka uses zookeeper to coordinate the cluster activities but the producers are in charge of asking for metadata and reacting to timeouts afaik
[08:58:57] k
[08:58:57] moreover, when you stop kafka the broker notifies the cluster about its departure
[08:59:05] so it is a clean shutdown
[08:59:10] kk
[09:03:51] mobrovac: so I am going to schedule downtime for kafka1001 and start the procedure, good time for you? I'll just ask to check that nothing is on fire
[09:04:53] euh i've just deployed a new version of change-prop
[09:04:58] and i've got an errand to run
[09:05:06] elukey: ok to do it an hour from now?
[09:05:37] mobrovac: super fine :)
[09:05:40] ping me when you are ok
[09:05:51] cool, thnx
[09:22:29] Analytics, DBA: dbstore1002 crashed - https://phabricator.wikimedia.org/T136333#2356887 (jcrespo) Open>Resolved a:jcrespo
[09:34:57] Analytics-Cluster, Analytics-Kanban, Patch-For-Review: Single Kafka partition replica periodically lags - https://phabricator.wikimedia.org/T121407#2356914 (elukey) This bug was resolved by Kafka 0.9 migration
[09:35:03] Analytics-Cluster, Analytics-Kanban, Patch-For-Review: Single Kafka partition replica periodically lags - https://phabricator.wikimedia.org/T121407#2356915 (elukey) Open>Resolved
[09:42:49] joal: newbie question about hive.. I am trying to repro https://phabricator.wikimedia.org/T136844
[09:42:57] so I tried
[09:43:06] select * from webrequest where hostname is null and webrequest_source = "text" and year = 2016 and month = 5 and day = 30 and (hour >=1 or hour <= 5);
[09:43:14] in wmf_raw and wmf
[09:43:22] but I can't find any result
[09:43:34] is there anything really wrong that I am doing?
[09:44:20] elukey: you can go for wmf_raw (json is wmf_raw)
[09:44:31] elukey: and you can restrict to hour = 1 (example is so)
[09:44:40] But except from that, it should work
[09:45:02] ah yes I tried with "use wmf_raw" and "use wmf", but nothing
[09:45:10] hm
[09:45:15] indeed
[09:45:20] I'll ask to madhu
[09:45:25] elukey: trying
[10:01:55] elukey: tried with various times, no result :(
[10:02:37] joal: thanks! I'll update the phab task!
[10:04:22] Analytics: Investigate where Kafka records will almost all null fields are coming from - https://phabricator.wikimedia.org/T136844#2357017 (elukey) I tried today the following beeline/hive query but didn't find anything within wmf_raw and wmf: ``` select * from webrequest where hostname is null and webreque...
[10:16:46] Analytics-Wikistats: Unexpected increase in traffic for 4 languages in same region, on smaller projects - https://phabricator.wikimedia.org/T136084#2357026 (ezachte) 99% of the robots.txt queries on ru.wikipedia.org have same user agent string: "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1)" (so no clu...
[10:22:33] elukey: ping?
[10:24:19] mobrovac: pong
[10:24:56] elukey: the correct answer is "beer pong"
[10:24:56] :P
[10:25:06] hahahaha
[10:25:08] i'm here so we can proceed
[10:25:15] sure, let me log everything
[10:25:46] elukey: re the list of brokers, the eventlogging service gets that from puppet, so it might be good to remove the node being rebooted
[10:27:01] mobrovac: mmmmm the kafka producer should be able to react by itself, but just to double check we are going to test event logging before re-enabling it via LVS. What do you think?
[10:27:34] re-enabling what?
[10:27:36] * mobrovac confused
[10:27:49] mobrovac: via confctl :)
[10:27:59] btw, RB and change-prop send stuff to the eventlogging service all the time
[10:28:00] let me explain, my bad
[10:28:16] so it needs to be able to function properly
[10:28:25] otherwise updates to pages are lost
[10:28:30] yeah
[10:28:31] MW is sending there too
[10:28:47] pybal also has layer 7 pings
[10:29:21] atm I de-pooled kafka1001.eqiad.wmnet from eventbus.svc.eqiad.wmflabs
[10:29:24] via confctl
[10:29:44] so even after the reboot, the server will not receive traffic for eventbus.svc.eqiad.wmflabs
[10:29:47] ok, that's for the http proxy service
[10:30:46] I assume that all the services that need to use event bus contact the servers via eventbus.svc.eqiad.wmflabs
[10:30:49] right?
[10:30:54] please don't tell me that this is not the case :D
[10:31:11] ah ok kafka contacted directly
[10:31:17] probably
[10:31:35] * elukey waits for Marko's assurances before proceeding
[10:33:53] mobrovac: ?
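A minimal sketch of how the repro query above could be scripted from a stat host, assuming beeline is on the PATH and a local HiveServer2 JDBC endpoint (both assumptions, not from the chat). One editorial note: the predicate `(hour >= 1 or hour <= 5)` in the chat matches every hour of the day, so `and` is presumably what was intended, although even the broader window returned no rows.

```python
# Hypothetical helper to re-run the null-hostname check from T136844.
import subprocess

QUERY = """
SELECT *
FROM wmf_raw.webrequest
WHERE hostname IS NULL
  AND webrequest_source = 'text'
  AND year = 2016 AND month = 5 AND day = 30
  AND hour >= 1 AND hour <= 5  -- 'and' rather than the 'or' used in the chat
LIMIT 10;
"""

# JDBC URL is a placeholder; adjust for the actual HiveServer2 host.
subprocess.run(
    ["beeline", "-u", "jdbc:hive2://localhost:10000/wmf_raw", "-e", QUERY],
    check=True,
)
```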
[10:34:28] elukey: for eventbus http service, yes
[10:34:32] but let me double check
[10:34:54] also, i need to check where is change-prop sending events exactly
[10:36:14] mobrovac: thanks :)
[10:39:23] I am almost sure that eventbus.svc.eqiad.wmflabs, but anyhow if kafka is used directly it must be able to react properly to a broker failure
[10:39:48] "that eventbus.svc.eqiad.wmflabs" is the main entry point
[10:39:58] ok, so MW is sending to eventbus.svc, so we're good there
[10:40:14] change-prop sends the events directly to kafka
[10:40:19] and uses metadata from ZK
[10:41:00] i don't think that will be a problem, though, since on a clean shutdown zk should be updated
[10:41:22] yep
[10:41:34] however, change-prop clients that were initialised with, e.g. kafka1001, they'll keep that client
[10:41:38] hmmm
[10:41:41] so no bueno
[10:42:01] elukey: do you have an estimate of how long would the whole process take?
[10:42:07] 15 min? 30?
[10:42:30] time for a reboot, assuming that nothing will break
[10:42:58] mobrovac: anyhow, let's analyze what happens to change prop when it contacts kafka1001
[10:43:25] i would suggest stopping change-prop during the upgrade
[10:43:27] the kafka producer will go in timeout and will ask another broker for the topics metadata
[10:43:35] :O
[10:43:43] and bring it back up once the upgrade is done entirely
[10:44:00] let's do this
[10:44:06] wait for ottomata
[10:44:16] we also have the Event bus sync right?
[10:44:24] this upgrade can wait
[10:44:27] yes, at 17 CEST
[10:44:33] agreed
[10:44:35] all right so let's have a good discussion
[10:44:45] re-adding kafka1001 to eventbus.svc.eqiad.wmflabs
[10:46:35] done :)
[10:49:01] mobrovac: last thing! If you have time today, https://gerrit.wikimedia.org/r/#/c/292568/1 :P
[10:49:33] kk elukey, will look at it once i get out of the zone :)
[10:50:23] whenever you have time, thanks :)
[10:56:43] joal: http://events.linuxfoundation.org/events/apache-big-data-europe
[10:57:23] elukey: shall we meet over there ?
[10:57:52] that would be awesome :)
[10:58:11] elukey: Let's ask nuria if she agrees for us to go there
[10:58:20] elukey: And let's ask marcel as well ;)
[10:59:00] yep!
[11:38:37] * elukey lunch!
[11:43:19] * joal is AFK for a while
[12:39:34] (PS1) KartikMistry: pep8 cleanup [analytics/limn-language-data] - https://gerrit.wikimedia.org/r/292915
[12:43:50] Analytics-EventLogging, Performance-Team, Patch-For-Review: Support kafka in eventlogging client on terbium - https://phabricator.wikimedia.org/T112660#2357299 (faidon)
[12:43:54] Analytics, Analytics-EventLogging: Upgrade eventlogging servers to Jessie - https://phabricator.wikimedia.org/T114199#2357296 (faidon) declined>Open Could you or someone else elaborate a little bit on what's the new plan? A migration to systemd (& jessie) eventually is inevitable and a migration...
[13:03:31] (PS2) KartikMistry: pep8 cleanup [analytics/limn-language-data] - https://gerrit.wikimedia.org/r/292915
[13:21:41] hi team :]
[13:32:52] Analytics-Wikistats: Unexpected increase in traffic for 4 languages in same region, on smaller projects - https://phabricator.wikimedia.org/T136084#2357454 (ezachte) @Nemo_bis > Only the /wiki URLs are counted, right? Not sure, but only text/html requests. My query grabbed all requests, including images,...
[13:40:09] mforns: o/
[13:40:14] hey elukey
[13:40:17] :[
[13:40:19] ah!
[13:40:21] :]
[13:41:12] human life is so weird, between happiness and sadness there's only 1 cm
[13:43:07] ahhahah
[13:56:27] hahha
[13:56:29] \
[13:56:30] "
[14:15:53] ottomata1: https://gerrit.wikimedia.org/r/#/c/292568/1 - what do you think?
[14:17:03] I am a bit struggling to decide what is best, between not replicating something existing (and then gets stale as soon as services updates it) or having analytics specific alarms
[14:24:54] ah snap probably without analytics in monitoring::service->contact_group we would not get icinga alerts in here
[14:25:23] heyyy elukey sorry, there's a guy here trying to fix our internet setup
[14:25:34] loooking...
[14:30:33] ahh yes so now I finally got how to add hadoop alerts in here
[14:30:44] nrpe::monitor_service->contact_group = admins,analytics
[14:37:31] (CR) Nikerabbit: [C: -1] pep8 cleanup (1 comment) [analytics/limn-language-data] - https://gerrit.wikimedia.org/r/292915 (owner: KartikMistry)
[14:45:38] Analytics, Analytics-EventLogging: Upgrade eventlogging servers to Jessie - https://phabricator.wikimedia.org/T114199#2357519 (Ottomata) eventlog1001 is Trusty, not Precise, so we didn't think it was urgent. Closing this doesn't mean we won't do it, it just means we aren't letting it take up any headspa...
[14:47:35] Analytics, Analytics-EventLogging: Upgrade eventlogging servers to Jessie - https://phabricator.wikimedia.org/T114199#2357522 (Ottomata)
[14:47:37] Analytics-EventLogging, Performance-Team, Patch-For-Review: Support kafka in eventlogging client on terbium - https://phabricator.wikimedia.org/T112660#2357521 (Ottomata)
[14:57:18] Analytics, Analytics-EventLogging: Upgrade eventlogging servers to Jessie - https://phabricator.wikimedia.org/T114199#2357536 (faidon) Thanks :) Not urgent, but needs to happen at some point regardless (upstart is pretty dead, even in Ubuntu), so keeping this open sounds like a plan. Did you try @ units...
[14:57:29] mmmm so contact_group => 'admins,team-services', is set also in aqs::monitoring
[14:57:53] and now I get how the multi-instance check is done
[14:58:09] in the beginning of the class you have
[14:58:10] define cassandra::instance(
[14:58:16] $instances = $::cassandra::instances,
[14:58:20] ) {
[14:58:23] and then at some point
[14:58:33] check_command => "check_tcp_ip!${listen_address}!9042",
[14:59:05] not sure if the aqs class is that smart
[15:00:05] Analytics, Analytics-EventLogging: Upgrade eventlogging servers to Jessie - https://phabricator.wikimedia.org/T114199#2357544 (Ottomata) Templated units, right? Ja I tried that. IIRC, that doesn't help much with grouping of services, just with DRYing them.
[15:01:01] ah no it is very basic
[15:01:17] joal: elukey wanna come to eventbus meeting?
[15:01:22] ottomata: joining
[15:08:50] ottomata: sorry I'm late, joining
[15:17:51] Analytics, MediaWiki-extensions-WikimediaEvents, The-Wikipedia-Library, Wikimedia-General-or-Unknown, Patch-For-Review: Implement Schema:ExternalLinkChange - https://phabricator.wikimedia.org/T115119#2357624 (Milimetric) Yeah, still nothing, something else must be wrong. Is there someone I c...
[15:22:56] Analytics, Commons, Multimedia, Tabular-Data, and 4 others: Review shared data namespace (tabular data) implementation - https://phabricator.wikimedia.org/T134426#2357634 (Milimetric) > Another needed feature is to reuse metadata from another page, thus allowing multiple pages to have the same st...
[15:25:51] Analytics, Commons, Multimedia, Tabular-Data, and 4 others: Review shared data namespace (tabular data) implementation - https://phabricator.wikimedia.org/T134426#2357641 (Yurik) @Milimetric, lets do it online - most interested parties are all over the globe, hard to pick a time.
[15:27:58] Analytics, Commons, Multimedia, Tabular-Data, and 4 others: Review shared data namespace (tabular data) implementation - https://phabricator.wikimedia.org/T134426#2357644 (Milimetric) works for me, brainstorm "meetings" could be new tasks in #tabular-data?
[15:29:42] milimetric, yep, go ahead :)
[15:50:04] Analytics, Revision-Slider, TCB-Team, WMDE-Analytics-Engineering, and 2 others: Data need: Data need: Explore range of article revision comparisons - https://phabricator.wikimedia.org/T134861#2357759 (Lea_WMDE)
[16:05:52] madhuvishy: you froze
[16:06:42] ha ha i spoke fully before i realized
[16:33:18] Analytics, Discovery: Many crapped encoded titles in pagecounts raw - https://phabricator.wikimedia.org/T137013#2357945 (Milimetric) Open>declined @eranroz this dataset (pagecounts-all-sites) will be deprecated at the end of this month. Please use the pageviews dataset. For more information see...
[16:34:31] Analytics-Kanban, Discovery: Many crapped encoded titles in pagecounts raw - https://phabricator.wikimedia.org/T137013#2357948 (Milimetric)
[16:34:43] Analytics-Kanban, Operations, ops-eqiad: Smartctl disk defects on kafka1012 - https://phabricator.wikimedia.org/T136933#2357954 (Milimetric)
[16:35:22] Analytics, Analytics-Cluster: Beeline does not print full stack traces when a query fails {hawk} - https://phabricator.wikimedia.org/T136858#2357963 (Milimetric) p:Triage>Normal
[16:36:11] Analytics: Investigate where Kafka records will almost all null fields are coming from - https://phabricator.wikimedia.org/T136844#2357970 (Milimetric) p:Triage>Normal
[16:36:13] madhuvishy: who are the knowledgeable jenkins folks? hashar? who else?
[16:36:59] ottomata: not sure - i guess some other people in releng but I mostly worked with hashar. What's up?
[16:37:25] i know a bit now - can maybe help
[16:37:25] i need to tell the eventlogging python tox check that the librdkafka .deb package is required
[16:37:34] ah ah
[16:37:37] pip install confluent-kafka compiles against it
[16:37:48] https://integration.wikimedia.org/ci/job/tox-jessie/8541/console
[16:38:05] interesting
[16:38:08] looking
[16:38:40] Analytics: Puppetize job that saves old versions of geoIP database - https://phabricator.wikimedia.org/T136732#2357987 (Milimetric) p:Triage>Normal
[16:40:23] Analytics-Cluster, Analytics-Kanban: Kafka 0.9's partitions rebalance causes data log mtime reset messing up with time based log retention - https://phabricator.wikimedia.org/T136690#2357994 (Milimetric) p:Triage>Unbreak!
[16:43:28] Analytics, Datasets-General-or-Unknown, Services: Many error 500 from pageviews API "Error in Cassandra table storage backend" - https://phabricator.wikimedia.org/T125345#1985043 (Milimetric) Thanks for the bug report, looks like that 10 req/s limit is too high. For now we might not get to tuning it...
[16:43:46] Analytics, Datasets-General-or-Unknown, Services: Many error 500 from pageviews API "Error in Cassandra table storage backend" - https://phabricator.wikimedia.org/T125345#1985043 (Milimetric) p:Normal>High
[16:44:46] ottomata: I think I know how to do that
[16:44:52] let me patch
[16:48:13] Analytics, Patch-For-Review: Implement a standard page title normalization algorithm (same as mediawiki) - https://phabricator.wikimedia.org/T126669#2358017 (Milimetric) a:Milimetric>None
[16:48:37] oo, ok!
[16:48:46] i think it needs librdkafka1 and librdkafka-dev
[16:49:57] ottomata: I think https://github.com/wikimedia/eventlogging/blob/master/tox.ini
[16:50:02] you add deps=
[16:50:43] hmmm madhuvishy but they are deb package deps
[16:50:47] not pip
[16:50:51] right looking
[16:51:07] and, i'm not sure if it is right to add it to tox/setup.py stuff
[16:51:23] because the .deb dependency is specific to debian and our stack
[16:51:30] right
[16:51:36] we can add in jenkins i think
[16:51:41] yeah, that would be a good place if we can
[16:51:46] tox jobs take a toxenv arg
[16:52:12] the pywikibot job specifies nose and nose34
[16:52:22] but dont think they are deb either
[16:59:20] ottomata: hmmm i guess they go here? https://github.com/wikimedia/operations-puppet/blob/production/modules/contint/manifests/packages.pp
[17:01:09] oh, hm!
[17:01:35] ok madhuvishy pretty inelegant but that will do
[17:01:36] thank you!
[17:01:44] ottomata: there are also
[17:01:50] https://github.com/wikimedia/operations-puppet/tree/production/modules/contint/manifests/packages
[17:01:58] so you could put them just in python.pp
[17:02:16] or somehow have a new class
[17:02:44] hmm
[17:02:44] aye
[17:03:02] hm, i see
[17:03:10] ok madhuvishy something in there will make sense
[17:03:14] but i dont think we can isolate a class and apply it to a job
[17:03:18] aye
[17:03:32] so python.pp seems like the narrowest place
[17:40:50] milimetric, joal, I was trying renaming stuff in testwiki and I think we were doing wrong assumptions...
[17:41:10] when a page P is renamed, id(P) stays the same
[17:41:27] the redirect that is created (if created) is the one that has a new id
[17:42:08] Also, I'm confused what move_redir means, and I could not reproduce such an event...
[17:42:17] weird...
[17:42:18] mforns: REAAAAAALLY? So basically, for the existing page, page_id stays the same, title changes, and new page is created with old title?
[17:42:29] yes
[17:42:34] wow
[17:42:39] Good finding mforns !!!!
[17:42:42] so the page_id is the meta id we needed essentially
[17:42:47] at least for move (not move_redir)
[17:42:58] milimetric: Means, no need for meta_id ;)
[17:43:04] madhuvishy: o/
[17:43:04] but this is not necessarily how it always worked, though unlikely it changed
[17:43:18] elukey: \o
[17:43:24] it just looks like the data suggests otherwise. weird
[17:43:46] madhuvishy: if you have time today would you mind to check https://phabricator.wikimedia.org/T136844 ?
[17:43:57] elukey: ah yes
[17:44:00] milimetric, joal, yea, so mmhhh, could we find someone to shed light? :]
[17:44:05] I'll have to stare harder at the log events
[17:44:06] thanksss
[17:44:17] mforns: I can try :)
[17:44:59] brb
[17:49:32] joal, I can try that too :]
[17:49:53] mforns: I'll try to ping Gilles Dubuc tomorrow morning
[17:50:05] joal, ok, cool
[17:50:44] here are the things I found: https://www.mediawiki.org/wiki/Manual:Page_moving
[17:51:24] and also the definition of move_redir is: "move over a redirect"
[17:56:23] here https://www.mail-archive.com/mediawiki-commits@lists.wikimedia.org/msg352645.html move_redir is defined as "Move with overwriting of redirects"
[17:57:23] and here: https://gist.github.com/gr2m/2906718 it is defined as "move overwriting a redirect in the new title"
[17:58:05] I tried that in testwiki and I could not, I guess move_redir is only for admins
[17:58:58] oh! I see... so move_redir totally erases an old page? Crazy
[17:59:10] guess it makes sense it's more restricted
[17:59:29] move_redir * -> singularity. mission complete
[17:59:33] milimetric, I managed to create a move_redir
[17:59:38] xDDD
[18:00:11] when you move a normal page, it changes its name and leaves a redirect behind, right? that's a move.
[18:00:30] right
[18:00:33] now, when you move the page back to its earlier title, it creates a move_redir!
[18:00:44] because you smash the newly created redirect page
[18:01:06] yeah, but it doesn't have to be an earlier title of the same page
[18:01:18] it could be another totally separate page, right?
[18:01:36] like say A -> B is the renaming of A to B
[18:02:12] move: new page table record, title A, new id, is_redirect, old A record is renamed to B
[18:02:28] move_redir: B is an existing page?
[18:02:37] (reading manual now)
[18:03:58] it's vague
[18:06:03] milimetric, in my tests, I only managed to do the move_redir when moving back to its own newly created redirect
[18:06:18] I tried to move it over another redirect, and it didn't let me
[18:06:47] so if you move A->B you can't then move C->A, right?
[18:06:48] even if the redirect was pointing at the page at that point in time
[18:06:58] milimetric, I guess so
[18:07:43] ok, cool
[18:07:59] so then it is a bit simpler, but still has a few problems
[18:08:22] aha
[18:09:41] note that when doing a move_redir, the old redir disappears (I guess it is removed from the table) and another one with a new id is created for the title the page had before the move_redir
[18:10:12] unless the flag noredir is 1! heheh
[18:11:11] wait that's a confusing way to say it
[18:11:19] so we move A->B
[18:11:23] then B->A
[18:11:51] that's not special, it's just that the title A happens to already exist.
[18:12:13] milimetric, yes
[18:12:16] but the same thing happens: the record with title B and the initial id for this page changes the title to A
[18:12:48] and then a new page record is created with title A and a new id
[18:12:59] oh, but the special thing is the old record with title A is deleted, right, that's different
[18:13:24] does that get deleted as in any revisions to it go to the archive table?!
[18:13:49] no idea...
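A toy model of the move behaviour mforns describes above, not actual MediaWiki code: a plain move keeps the page_id of the moved page and mints a new id for the redirect left behind, while a move_redir first drops the overwritten redirect row. The page-table-as-dict and the id counter are simplifications for illustration only.

```python
# Toy simulation of the page table during move / move_redir, per the chat above.
next_id = 1
pages = {}  # title -> {"page_id": int, "is_redirect": bool}

def new_row(title, is_redirect=False):
    global next_id
    pages[title] = {"page_id": next_id, "is_redirect": is_redirect}
    next_id += 1

def move(old_title, new_title, leave_redirect=True):
    row = pages.pop(old_title)        # the moved page keeps its page_id
    if new_title in pages:            # move_redir: the overwritten redirect row is dropped
        del pages[new_title]
    pages[new_title] = row
    if leave_redirect:                # the redirect left behind gets a brand-new page_id
        new_row(old_title, is_redirect=True)

new_row("A")       # A has page_id 1
move("A", "B")     # B keeps page_id 1, redirect A gets page_id 2
move("B", "A")     # move_redir: redirect 2 disappears, A keeps page_id 1,
                   # and a new redirect B gets page_id 3
print(pages)
```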
[18:15:07] milimetric, I have to leave, I'll connect later and synch up with you, or tomorrow
[18:15:20] no problem, thanks for all the help
[18:15:37] * milimetric is going to be afk for a bit
[18:32:26] ottomata: completed all Brandon suggestions for the vk patch, if you could review the last version of the code today it would be gooood
[18:32:43] I'll run a lot of tests tomorrow and package it
[18:33:06] final step would be to deploy it on misc/maps by the end of the week
[18:33:17] a-team: logging offfffffff! byeeeee o/
[18:34:39] elukey: ok will do!
[19:15:58] !log restarting kafka broker on kafka1020 to test python consumption client
[19:16:00] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log, Master
[19:27:43] Analytics-Kanban, Patch-For-Review: Event Logging doesn't handle kafka nodes restart cleanly - https://phabricator.wikimedia.org/T133779#2358665 (Ottomata) confluent-kafka seems to work A-ok! It passed the production kafka broker bounce test. Am running 2 processors in a screen on stat1002, will leave...
[19:34:33] PROBLEM - Difference between raw and validated EventLogging overall message rates on graphite1001 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [30.0]
[19:36:53] ^ that should clear in a sec
[19:42:55] RECOVERY - Difference between raw and validated EventLogging overall message rates on graphite1001 is OK: OK: Less than 20.00% above the threshold [20.0]
[20:44:29] madhuvishy: indeed the .deb for python packages can be added to the contint::packages::python . That is only taken into account once per day when images are rebuilt though :(
[20:44:50] madhuvishy: we would need to find a way for developers to list their .deb dependencies and have CI sudo apt-get install them somehow
[20:47:43] hi milimetric
[20:48:00] hi
[20:49:03] hey mforns what's up
[20:50:05] hasharAway: hmmm I thought of having a pre build shell script to do that in the job - but doesn't seem like the best idea
[20:53:11] milimetric, can I help in sth?
[20:53:49] no worries Marcel, I got this page thing, I'll figure it out
[20:54:04] ok :] good luck
[20:54:54] feel free to grab something fun off the kanban or think about how to join the user and page intermediates to get the full denormalized schema
[20:55:28] ok, will do
[21:00:50] madhuvishy: needs for extra packages do not happen that often anyway. For Mediawiki services, their npm modules and tests depend on various packages and that ended up being defined in the service puppet class. Example is modules/graphoid/manifests/packages.pp
[21:01:20] madhuvishy: one can list out runtime packages and build/test packages. Then CI nodes can just include graphoid::packages and puppet sort it out
[21:02:11] madhuvishy: anyway for python modules, contint::packages::python is probably good enough
[21:02:50] madhuvishy: anyway sleep time. Thanks for supporting others regarding CI :)
[21:03:06] Right makes sense
[21:03:17] Good night hasharAway :)
[22:25:05] Analytics, MediaWiki-extensions-CentralNotice, Operations, Traffic: Generate a list of junk CN cookies being sent by clients - https://phabricator.wikimedia.org/T132374#2359414 (AndyRussG)
[23:49:06] ahoy, are there any analytics jobs that call zero portal to get info from it?
[23:51:07] Analytics, MediaWiki-Authentication-and-authorization: Verify there are no analytics jobs accessing Zero portal - https://phabricator.wikimedia.org/T137174#2359752 (Yurik)
[23:51:11] that ^
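A minimal confluent-kafka producer sketch along the lines of what the broker bounce test above exercises: with several brokers listed in bootstrap.servers, the client re-fetches metadata and keeps producing while a single broker is restarted. Broker host names and the topic are placeholders, not taken from the chat.

```python
from confluent_kafka import Producer

producer = Producer({
    # Two bootstrap brokers so the client can recover when one is rebooted.
    "bootstrap.servers": "kafka1001.eqiad.wmnet:9092,kafka1002.eqiad.wmnet:9092",
})

def on_delivery(err, msg):
    # Called once per message with either an error or delivery metadata.
    if err is not None:
        print("delivery failed:", err)
    else:
        print("delivered to", msg.topic(), "partition", msg.partition())

producer.produce("test-topic", value=b'{"test": true}', callback=on_delivery)
producer.flush(30)  # wait up to 30s for outstanding deliveries
```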