[01:03:58] 10Analytics, 10Contributors-Analysis, 10Product-Analytics: Streamline Superset signup and authentication - https://phabricator.wikimedia.org/T203132 (10Tbayer)
[01:05:12] 10Analytics, 10Contributors-Analysis, 10Product-Analytics: Streamline Superset signup and authentication - https://phabricator.wikimedia.org/T203132 (10Tbayer) >>! In T203132#4544113, @Neil_P._Quinn_WMF wrote: >>>! In T203132#4544021, @Nuria wrote: >>>Ask the Analytics team to create a new Superset account....
[05:44:21] morning!
[05:49:27] ok going to resume the hadoop workers reboots
[05:49:33] each time checking the bios settings (sigh)
[06:09:10] checked all the remaining worker nodes, no trace of the boot load issue
[06:20:16] !log re-run webrequest-load-wf-text-2018-8-31-4, failed for hadoop workers reboots
[06:20:17] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[06:31:20] 10Analytics, 10Analytics-Kanban: Reboot Analytics hosts for kernel security upgrades - https://phabricator.wikimedia.org/T203165 (10elukey)
[06:48:29] 10Analytics, 10Analytics-Kanban: Reboot Analytics hosts for kernel security upgrades - https://phabricator.wikimedia.org/T203165 (10elukey)
[06:57:11] elukey: what was the BIOS issue?
[06:57:26] and thorium fully unused now, BTW?
[06:59:22] moritzm: so when I rebooted analytics1042 it went into PXE, luckily not going through the reimage since it didn't find any disk to use. racadm get bios.BiosBootSettings.BootSeq returned BootSeq=NIC.Integrated.1-1-1 (and not BootSeq=HardDisk.List.1-1,NIC.Integrated.1-1-1)
[06:59:43] ugh
[07:00:00] this is the first time that we reboot the worker nodes after the reimage to stretch, realized it yesterday
[07:00:04] leftover from a failed reimage, probably
[07:00:12] so I checked all the hadoop workers and they are fine
[07:00:18] especially the masters :D
[07:00:22] sometimes the IPMI commands seem to fail mysteriously
[07:00:44] :(
[07:01:04] about thorium, it will be used for stat.wikimedia.org and datasets.wikimedia.org
[07:01:13] when I reimaged the parsoid servers to stretch I also had an error when the reimage script sent the command to reboot, but the hw didn't act on it
[07:01:14] and other analytics websites
[07:01:44] but andrew already sent an email to alert about the upcoming stretch upgrade
[07:01:48] ok, but with turnilo gone I can remove nodejs?
[07:01:58] not gone, moved to a different VM
[07:02:05] ah yes sire
[07:02:06] *sure
[07:02:13] k, doing that now
[07:03:47] what's the timeline for the reimage? if it happens in the next 1-2 weeks, I'll make a note to skip it for L1TF reboots (as the reimage will pick up the latest kernel anyway)
[07:04:28] Sept 5th
[07:05:03] we are almost in stretch-only :)
[07:05:07] (in analytics)
[07:05:17] (and also java 7 free now!)
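A sketch of how the per-host boot-order check elukey describes could be scripted against each node's management card. The host range, credential handling, and expected value below are illustrative assumptions, not the actual procedure used:

    # Compare each iDRAC's boot sequence against the expected value; hosts
    # showing only the NIC entry (the PXE-boot leftover) get flagged.
    EXPECTED='BootSeq=HardDisk.List.1-1,NIC.Integrated.1-1-1'
    for host in analytics10{42..77}.mgmt.eqiad.wmnet; do   # placeholder range
        seq=$(racadm -r "$host" -u root -p "$RACADM_PASS" \
                get BIOS.BiosBootSettings.BootSeq | grep -o 'BootSeq=.*')
        [[ "$seq" == "$EXPECTED" ]] || echo "$host: unexpected boot order: $seq"
    done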
[07:06:52] thx, adding a note to skip it, then
[07:07:28] we still have Java 7 on the old Kafka cluster, though
[07:07:32] 10Analytics, 10Operations: Decommission Ganeti vm meitnerium.wikimedia.org (old Archiva host) - https://phabricator.wikimedia.org/T203087 (10elukey)
[07:07:50] ah snap you are right
[07:07:53] and for zookeeper on conf[12]00[1-3]
[07:07:56] I almost forgot about all these nodes
[07:08:06] yep those are technically not analytics'
[07:08:07] :P
[07:08:45] yeah, when Giuseppe is back I'll look into rebooting these
[07:09:10] also zookeeper is not on conf100[123] anymore
[07:09:22] only on conf100[4-6] (stretch + java8)
[07:09:22] it's also really not great that we're co-hosting etcd and zookeeper on servers, we should split that off mid-term
[07:09:43] I specifically asked for that but I was told (gently) no :)
[07:09:58] ok, it's still installed, though (on 100[1-3])
[07:10:22] but it only overcomplicates any maintenance and there's no real connection on the service level that I can see?
[07:10:22] yep yep, IIRC I only masked it at the time
[07:10:31] nope, completely separate
[07:10:42] I wanted to order 6 new hosts for zookeeper
[07:10:57] to be completely separated from etcd
[07:10:57] I'll bring it up with Joe at some point again (wrt split)
[07:11:03] thanks :)
[07:11:21] apart from the mentioned servers we only use Java 7 on contint*
[07:11:34] very nice, we are closing down
[07:11:43] yeah, light at the end of the tunnel :-)
[07:12:13] the main issue with the analytics kafka brokers is that they receive traffic from mediawiki, which is using a super old kafka client. We tried to switch to jumbo once but there were perf issues
[07:12:42] https://wikitech.wikimedia.org/wiki/Analytics/Systems/Cluster/MediaWiki_Avro_Logging
[07:26:49] !log re-run webrequest-load-wf-text-2018-8-31-4, failed due to hadoop workers reboots
[07:26:50] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[07:33:08] Many thanks for the bios-messed-up-reboots elukey :)
[07:33:11] morning
[07:33:39] good morning :)
[07:34:24] just rebooted 1059
[07:34:45] less than 20 to go (4/5 batches and we should be done)
[08:02:46] analytics1057 returned an I/O error for a disk while booting, running fsck
[08:02:52] weird, second time that happens in this batch
[08:19:10] and I think it is my fault
[08:19:44] this time I concatenated, via cumin, the datanode daemon stop and the shutdown -r +1 (that should wait a min before)
[08:20:06] apparently doing it is too fast and causes some partition inconsistencies
[08:20:14] this is the only explanation I found
[08:20:25] (also fsck reported datanode paths as broken)
[08:20:42] now they are fixed, so I'll try to be more conservative
[08:20:49] even if it should be the same as the last reboot
[08:20:55] but now we have a new OS etc..
[08:21:02] not sure, anyhow will add some delay
[08:32:53] elukey: could be related to already-started operations that take time to finish?
[08:34:52] (03PS1) 10Joal: Correct end-point definitions [analytics/aqs] - 10https://gerrit.wikimedia.org/r/456573
[08:35:42] joal: not sure, I think that the shutdown happens too soon for the kernel to save its state to the fs
[08:36:07] I think I don't fully understand what's at stake here :)
[08:38:52] I don't either, but usually the kernel allows the write syscalls to the file system to return, even if it has not sent everything to the disk controller
[08:39:01] it does that periodically in the background
[08:39:42] so if a reboot/crash/etc..
happens when the state is not yet fully saved on the disk partition, when the os boots it recognizes the weirdness
[08:39:54] and fsck usually is able to fix the most common errors
[08:40:03] I think that I was the source of this :D
[08:40:13] (the source of the inconsistency :D)
[08:41:19] ok ... This kinda tells me I know very little about how computers actually work ;)
[08:52:04] I don't agree with this statement :)
[08:53:17] joal: this is explained nicely https://docs.oracle.com/cd/E19455-01/805-7228/6j6q7uf0e/index.html
[08:55:46] interesting ! Thanks elukey :)
[09:00:37] joal: the link is a bit confusing, because those in-core buffers are (if I am not terribly mistaken) part of the Kernel page cache
[09:00:55] that one is periodically flushed by the kernel to the block devices
[09:01:10] I was confused for a moment with the different terminology
[09:15:51] !log re-run webrequest-load-wf-upload-2018-8-31-[7,8], failed due to hadoop workers reboots
[09:15:52] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[09:24:11] elukey: I have a strategy to vet those geowiki numbers if you have a little time today :)
[09:24:17] sure!
[09:41:14] still rebooting the 106* series
[09:41:21] takes a bit of time to drain jobs
[09:57:38] 10Analytics: Wikistats: add functions you apply to dimensional data such as "accumulate" - https://phabricator.wikimedia.org/T203180 (10JAllemandou) Quick note on that idea: As per the discussion with @fdans and @Nuria yesterday, it would be great to implement the notion of **function** to be applied to a datas...
[09:59:17] 10Analytics-Kanban: Add endpoints to RESTBase for new WKS2 endpoints - https://phabricator.wikimedia.org/T203175 (10JAllemandou) See https://github.com/wikimedia/restbase/pull/1056
[09:59:32] 10Analytics-Kanban: Add endpoints to RESTBase for new WKS2 endpoints - https://phabricator.wikimedia.org/T203175 (10JAllemandou) a:03JAllemandou
[10:04:45] 10Analytics-Kanban, 10Patch-For-Review: Fix mediawiki-history-druid oozie job - https://phabricator.wikimedia.org/T201620 (10JAllemandou)
[10:05:56] !log re-run webrequest-load-wf-upload-2018-8-31-7, failed due to hadoop workers reboots
[10:05:57] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[10:10:30] fdans:
[10:10:31] # Puppet Name: refinery-druid-drop-public-snapshots
[10:10:31] MAILTO=analytics-alerts@wikimedia.org
[10:10:31] 0 7 15 * * export PYTHONPATH=${PYTHONPATH}:/srv/deployment/analytics/refinery/python && /srv/deployment/analytics/refinery/bin/refinery-drop-druid-snapshots -d mediawiki_history_reduced -t druid1004.eqiad.wmnet:8081 -s -f /var/log/refinery/drop-druid-public-snapshots.log
[10:10:36] looks good
[10:11:28] niiiiiiiice thank you elukey :D
[10:17:44] \o/ !
[12:04:50] Taking a break team
[12:17:05] mmmm so an1068 seems dead
[12:17:52] It was stuck during boot, I powercycled it, and now it is showing a black screen in console
[12:19:49] maybe try "racadm racreset"?
[12:20:06] (03PS1) 10Fdans: Makes closed sites searchable and disables private sites [analytics/wikistats2] - 10https://gerrit.wikimedia.org/r/456612 (https://phabricator.wikimedia.org/T203105)
[12:24:23] moritzm: I don't recall which one also resets the password to access mgmt, IIRC once I did the hard one and Chris had to intervene manually
[12:26:09] was it another command?
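For reference on the racreset exchange: the command that wipes the password is likely racresetcfg, not racreset. A hedged summary, assuming standard Dell iDRAC racadm behaviour (exact semantics vary by generation):

    racadm racreset soft    # graceful restart of the management card OS only
    racadm racreset hard    # forceful restart of the card; host OS untouched in theory
    racadm racresetcfg      # factory-reset the card config, credentials included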
[12:27:25] ah yes there is soft and hard
[12:27:37] I don't think racreset resets the password, I've used it multiple times before, it only reboots the mgmt card
[12:28:32] ah, https://wikitech.wikimedia.org/wiki/Management_Interfaces agrees
[12:28:46] In case the management card is unresponsive to IPMI and maybe ping but SSH is still working, a card reset can be attempted. It will just restart the card OS, not affecting (in theory) the underlying host.
[12:29:13] if that doesn't help Chris needs to drain flea power or have a look in presence
[12:41:01] nope same thing, need to open a task to Chris :(
[12:41:47] ack, he's usually not in the DC on Fridays, though
[12:44:14] 10Analytics, 10Operations, 10ops-eqiad: analytics1068 doesn't boot - https://phabricator.wikimedia.org/T203244 (10elukey) p:05Triage>03Normal
[12:44:53] not a big deal, it can wait Monday
[13:16:06] hi, mornin
[13:19:47] hello Dan!
[13:33:19] 10Analytics, 10Analytics-Kanban, 10Analytics-Wikistats, 10Patch-For-Review: Wikistats 2.0: "aa.wikipedia.org" exists and has data available, but marked "Invalid" - https://phabricator.wikimedia.org/T187414 (10fdans) The issue with aa.wikipedia.org is fixed in https://gerrit.wikimedia.org/r/#/c/analytics/wi...
[14:30:54] last two hosts to reboot and all the hadoop workers are done
[14:31:39] !log re-run webrequest-load-wf-upload-2018-8-31-11, failed due to hadoop workers reboots
[14:31:40] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[15:06:35] 10Analytics, 10Contributors-Analysis, 10Product-Analytics: Streamline Superset signup and authentication - https://phabricator.wikimedia.org/T203132 (10Milimetric) > Nice! Let's make sure to update the documentation (https://wikitech.wikimedia.org/wiki/Analytics/Systems/Superset#Access ) regarding any improv...
[15:07:56] 10Analytics, 10Analytics-Kanban, 10Patch-For-Review: Drop mediawiki history old snapshots from druid public cluster - https://phabricator.wikimedia.org/T197889 (10Nuria) 05Open>03Resolved
[15:18:12] 10Analytics, 10Analytics-Kanban, 10Analytics-Wikistats: Add project-families to backend for additive-metrics - https://phabricator.wikimedia.org/T203258 (10JAllemandou) p:05Triage>03High
[15:18:58] (03PS2) 10Joal: Update WKS2 endpoints to accept project families [analytics/aqs] - 10https://gerrit.wikimedia.org/r/456442 (https://phabricator.wikimedia.org/T203258)
[15:19:37] 10Analytics, 10Analytics-Kanban, 10Analytics-Wikistats, 10Patch-For-Review: Add project-families to AQS for additive-metrics - https://phabricator.wikimedia.org/T203258 (10JAllemandou) a:03JAllemandou
[15:20:00] 10Analytics, 10MediaWiki-API, 10Patch-For-Review, 10User-Addshore: Run ETL for wmf_raw.ActionApi into wmf.action_* aggregate tables - https://phabricator.wikimedia.org/T137321 (10Milimetric) ping @anomie or @Tgr: were either of you interested in finishing this change https://gerrit.wikimedia.org/r/#/c/anal...
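Going back to the fsck errors from the morning: a minimal sketch of the "more conservative" reboot sequence elukey describes, adding an explicit flush between stopping the datanode and the delayed reboot. The cumin alias, batch size, and systemd unit name are assumptions, not the exact invocation used:

    # Stop the HDFS datanode, flush dirty page-cache buffers to disk, then
    # reboot with a one-minute delay so in-flight I/O can settle first.
    sudo cumin -b 5 'A:hadoop-worker' \
        'systemctl stop hadoop-hdfs-datanode && sync && shutdown -r +1'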
[15:52:56] (03PS1) 10Joal: [WIP] Add python script importing xml dumps onto hdfs [analytics/refinery] - 10https://gerrit.wikimedia.org/r/456654 (https://phabricator.wikimedia.org/T202489)
[16:04:38] (03PS8) 10Milimetric: Annotate wikistats [analytics/wikistats2] - 10https://gerrit.wikimedia.org/r/440971 (https://phabricator.wikimedia.org/T194705)
[16:05:45] (03CR) 10Milimetric: "For testing, you'll need:" [analytics/wikistats2] - 10https://gerrit.wikimedia.org/r/440971 (https://phabricator.wikimedia.org/T194705) (owner: 10Milimetric)
[16:29:25] 10Analytics, 10MediaWiki-API, 10Patch-For-Review, 10User-Addshore: Run ETL for wmf_raw.ActionApi into wmf.action_* aggregate tables - https://phabricator.wikimedia.org/T137321 (10Anomie) Feel free as far as I'm concerned. I don't know anything about Oozie.
[16:31:44] * elukey off!
[17:15:58] Say I want to query the Data Lake to get data on user edits for each day of their first, say 14, days since registration. I can run a query with 14 SUM(IF(…)) statements (one for each day) to get that (grouping over user IDs). Is there a trick I could use to make it less verbose, or is having an IF for each day the way to go?
[17:17:24] 10Analytics, 10MediaWiki-API, 10Patch-For-Review, 10User-Addshore: Run ETL for wmf_raw.ActionApi into wmf.action_* aggregate tables - https://phabricator.wikimedia.org/T137321 (10Nuria) To be fair I do not think we will have any time to work on this soon and I am really of the opinion that it would be bes...
[17:18:02] Nettrom: on meeting can look at this later, maybe joal or milimetric can help
[17:24:15] nuria: thanks! not a rush, I have a query that should work, so I’m mostly wondering if there’s an HQL-way of doing this that I just can find :)
[17:26:22] *just can’t
[18:04:18] Nettrom: ok, let me see, me no sql master. you want to have a list of say count(edits), timestamp per editor with 15 timestamps
[18:05:57] count(edits) , timestamp where timestamp editor_registration_date Right?
[18:08:02] pseudo code as data on data lake differs, i think you first want to get registration dates per editor (for the time period you are considering) and with those build your second query. This style of syntax might help: https://github.com/wikimedia/analytics-refinery/blob/master/oozie/virtualpageview/hourly/virtualpageview_hourly.hql#L38
[18:10:05] Nettrom: see the "with blah as" that creates an initial table called blah used in the subsequent query. I will try to first run it for 1 smaller wiki on 1 of the snapshots, say russian or romanian wiki
[18:21:51] nuria: since I’m using the mediawiki_history table (limited to cswiki and kowiki), user registration is available for every row, so it’s fairly easy to identify if an edit was made within a specific day.
[18:22:40] the question is mainly whether I need an IF-statement for each day, or if there’s a trick that allows me to build that part of the query, for example with a loop
[18:22:47] I suspect the answer is “no” :)
[18:23:13] (03CR) 10Nuria: Update WKS2 endpoints to accept project families (033 comments) [analytics/aqs] - 10https://gerrit.wikimedia.org/r/456442 (https://phabricator.wikimedia.org/T203258) (owner: 10Joal)
[18:27:38] Nettrom: Heya - Reformulating to make sure I understand correctly
[18:28:14] Nettrom: You want, for each edit made within 14 days of user registration, to know how many days since registration?
[18:28:50] joal: for each day since registration, how many edits were made on that day
[18:28:58] (for some number of days, say 14, or 30)
[18:29:13] (03Abandoned) 10Nuria: Removing bad line left out on release branch [analytics/wikistats2] (release) - 10https://gerrit.wikimedia.org/r/455735 (owner: 10Nuria)
[18:29:34] Nettrom: you want the number of edits made on day 1 after registration, day 2 etc per day?
[18:30:05] Nettrom: if so, I think imbricated-queries are the easiest
[18:30:16] joal: that’s what I’m looking for, yes
[18:30:39] Nettrom: Give me, let's say, 10 minutes, I should be back with something
[18:30:41] it is good joal came cause i have no clue what imbricated means
[18:30:47] :D
[18:30:57] nuria: query over a query
[18:30:58] nuria: me neither! this makes me very happy, thanks joal! :)
[18:31:19] imbricated is the englishization of a french term - It probably doesn't exist :)
[18:31:32] joal: what is the French term, “imbrication”?
[18:31:35] sometimes englishization works though
[18:31:43] yes
[18:31:56] Ah - nested is the word
[18:32:15] ahahahahah
[18:33:00] I am a beginner in French, so I will try to add this to my vocabulary :)
[18:33:17] “imbrication” is also an English term, so Google Translate is being unhelpful
[18:34:35] Nettrom: As with freedom or liberty, englishization of the french term exists (liberté in french), but sounds less english than the other
[18:40:23] Nettrom: actually, feels like nested is not even needed
[18:44:23] joal: as soon as you said “not even needed”, I was wondering if this could be done with a few joins. Make a table of “days”, join it with the user table to get timestamps since registration for each day, then join that with the history table to get the revisions, then sum over user and days
[18:44:58] Nettrom: Computing the date-diff in SQL, then group by this
[18:45:10] Nettrom: Testing my query and sending you the result
[18:45:20] joal: awesome, thanks so much!
[18:46:40] Nettrom: https://gist.github.com/jobar/c76ea11cd636e5fd2c7e396453bb5b29
[18:47:32] Nettrom: funny spike at day 8 :)
[18:48:11] (03CR) 10Nuria: Makes closed sites searchable and disables private sites (031 comment) [analytics/wikistats2] - 10https://gerrit.wikimedia.org/r/456612 (https://phabricator.wikimedia.org/T203105) (owner: 10Fdans)
[18:49:14] joal: ah see, that makes total sense
[18:57:03] nuria: I've seen your comments on project-families, and I have a question about the main one
[18:57:44] joal: yessir
[18:58:39] nuria: Is what you suggest that we split subdomain and family in 2, like in: edits/aggregate/en/wikipedia/....
[18:58:42] ?
[18:59:17] or that we name all-wikipedia-projects something different than project family?
[18:59:43] joal: that helps me a lot, thanks again so much for your help with this!
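joal's gist is not quoted in the log; a hedged reconstruction of the date-diff-and-group-by approach he describes, replacing the 14 SUM(IF(...)) columns Nettrom started with. Field names follow wmf.mediawiki_history as of 2018; the snapshot value and LIMIT are illustrative:

    # One datediff expression, grouped, instead of one column per day.
    hive -e "
    SELECT
        wiki_db,
        DATEDIFF(event_timestamp, event_user_registration_timestamp) AS day_since_registration,
        COUNT(*) AS edits
    FROM wmf.mediawiki_history
    WHERE snapshot = '2018-07'                  -- illustrative snapshot
      AND wiki_db IN ('cswiki', 'kowiki')
      AND event_entity = 'revision'
      AND event_type = 'create'
      AND event_user_registration_timestamp IS NOT NULL
      AND DATEDIFF(event_timestamp, event_user_registration_timestamp) BETWEEN 0 AND 13
    GROUP BY wiki_db, DATEDIFF(event_timestamp, event_user_registration_timestamp)
    ORDER BY wiki_db, day_since_registration
    LIMIT 100;
    "

Extending the window to 30 days is just a matter of widening the BETWEEN bound; per-user breakdowns would add event_user_id to the SELECT and GROUP BY.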
[18:59:49] joal: ah no, but i see what you mean, cause parsing is url based, the "position" is fixed for the project parameter
[18:59:54] and yes, there are some interesting spikes in when users come back to make additional edits
[19:00:24] edits\/aggregate\/\/: Unknown command
[19:00:38] sorry "edits/aggregate//"
[19:01:00] Nettrom: You might be interested to correlate the registration_lag we defined with event_user_revision_count (incremental number of revision per user)
[19:01:10] Nettrom: You're very welcome :)
[19:01:17] * joal is happy when data is useful :)
[19:01:49] nuria: absolutely true
[19:02:16] nuria: We can't easily have edits/aggregate// or edits/aggregate// with different interpretations for them
[19:02:34] joal: ya cause our parameters are not named
[19:02:49] So, just as "all-projects" is a special value, we have all-wikipedia-projects and other families as special values
[19:03:14] nuria: I feel it's easier to maintain this way than having a dedicated different endpoint
[19:04:45] joal: i agree that if we are talking about the same metric it makes no sense to have it on a different endpoint
[19:05:08] joal: let me look at user requests in this regard, one sec
[19:16:22] :) this doesn't count since I'm way late to the party, but I would've done it the exact same way joal
[19:16:46] milimetric: late people are always welcome when they share my views :-P
[19:17:55] nuria: Just went for a bunch of comments on your prez - Super nice :) Please feel free to discard everything :)
[19:18:18] milimetric: jajaja
[19:22:25] joal: ok agreed that we have to live with things as they are, I saw in our tickets families are called "all language versions" by some users but i think "all-wikipedia-projects" is a better alternative cc milimetric
[19:23:48] nuria: the reason I picked this option is to mimic the other parameters we have (all-projects, all-editor-types ...) - And I think it is very explicit
[19:25:47] nuria / joal: did you chat about this somewhere other than https://gerrit.wikimedia.org/r/#/c/analytics/aqs/+/456442/2/v1/bytes-difference.yaml?
[19:26:00] (03CR) 10Nuria: "Updating per conversation on Irc, given that our parameters are not named and their meaning comes from the position in the url is not easy" [analytics/aqs] - 10https://gerrit.wikimedia.org/r/456442 (https://phabricator.wikimedia.org/T203258) (owner: 10Joal)
[19:26:33] milimetric: no, we were talking about it on irc just 30 mins ago
[19:26:40] milimetric: nope, I don't think so
[19:26:48] Ah - too late
[19:26:50] oh IRC, ok, reading
[19:28:26] (03CR) 10Joal: "Seems that we are nearly good to go - Yay :)" (031 comment) [analytics/aqs] - 10https://gerrit.wikimedia.org/r/456442 (https://phabricator.wikimedia.org/T203258) (owner: 10Joal)
[19:29:10] milimetric: more thoughts welcome
[19:29:27] very much indeed !
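If the patch under review lands as discussed, a family-level request would reuse the existing {project} slot with a special value, mirroring all-projects and all-editor-types. An illustrative call (the URL shape is assumed from the current edits endpoint; the feature itself is still in review at this point in the log):

    curl -s 'https://wikimedia.org/api/rest_v1/metrics/edits/aggregate/all-wikipedia-projects/all-editor-types/all-page-types/monthly/20180101/20180901'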
[19:29:48] yeah, was just reading the code too
[19:31:22] joal: you could just apply ^all-(.*)-projects$ to the project and look up the matched group
[19:31:34] and I think that format is great actually
[19:31:59] milimetric: Nope - regex-injection
[19:31:59] in my mind, we could even extend that project parameter to be "wiki project"
[19:32:20] and people could specify wiki-project-Medicine
[19:32:36] milimetric: you can kill druid easily with a regex
[19:32:57] milimetric: we either want to filter before and then apply the way you describe, or define first
[19:33:40] milimetric: filtering first makes sense, and allows for easier integration in future
[19:34:20] oh I didn't mean to send that to druid, just to "lookup" the regex, so in makeRegexFilter
[19:36:03] never mind anyway, it's totally fine the way it is
[19:36:06] and less code change
[19:36:28] milimetric: I got your point though, and did it this way on purpose for now
[19:36:44] yeah, so I like overloading that term, that was my thought about it from the very beginning in pageview API, and I continue to dream of a day when we have wiki-project level stats :)
[19:37:04] milimetric: One day, my friend
[19:37:48] milimetric: that day shall come but let's first show families in wikistats UI, maybe next quarter, challenges ahead cause there are no per family pageviews just unique devices
[19:38:23] oh yeah, I mean, that's why it's a dream :)
[19:38:42] nuria: the problem of per-family pageviews might be very easy to solve
[19:39:04] I don't dream of stuff that's like well organized in the project plan and kanban, that'd be no fun :)
[19:39:18] yeah, pageviews are additive
[19:39:22] * joal hugs milimetric
[19:40:04] milimetric: we serve pageviews in WKS2 at projectview level only - Maybe we could load that in druid?
[19:40:53] mmm, would be cool, does it save us more time than it costs us though?
[19:41:22] milimetric: removes 4 jobs in cassandra, and provides more querying patterns
[19:41:55] well we're moving those jobs to simpler Druid jobs
[19:41:58] *druid loading
[19:42:22] milimetric: indeed - with parquet loading, no data-prep, no pre-aggregation
[19:42:53] milimetric: will launch an indexation manually, to check datasize
[19:42:58] oh! you'd let the druid indexer take care of aggregation and feed it raw?! No...
[19:43:07] milimetric: of course yes !
[19:43:15] milimetric: from projectviews
[19:43:23] oh duh right
[19:44:10] yeah, that's nice, also opens the door even more widely for wiki-project! It'd be easier to add if it's in Druid
[20:15:42] nuria: if you have time, can you try doing this in your vagrant wiki's js console?
[20:15:44] https://www.irccloud.com/pastebin/queWAFoG/
[20:16:32] for me it's working on the Special:JavaScriptTest page, but not on MainPage... which makes very little sense
[20:16:41] and it of course works on other wikis
[20:19:11] milimetric: can do but now i am in the middle of fixing https://phabricator.wikimedia.org/T203275
[20:19:20] ah, no rush at all
[20:19:21] just curious
[20:29:39] 10Analytics, 10ContentTranslation, 10Operations, 10SRE-Access-Requests: Add kartik to analytics-privatedata-users - https://phabricator.wikimedia.org/T135704 (10Petar.petkovic)
[20:30:53] nuria: I posted the question on my gerrit change, so timo will see that, no worries
[20:31:09] it's just weird, it was kind of half-working last week when I tried it.
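One way to do the datasize check joal mentions after the manual indexation: ask the Druid coordinator for per-datasource metadata. The host and port come from the cron entry earlier in the log; the jq filter is an assumption about the response shape, so adjust against the actual output:

    # List datasources with full metadata and pull the total segment size in bytes.
    curl -s 'http://druid1004.eqiad.wmnet:8081/druid/coordinator/v1/datasources?full' \
        | jq '.[] | {name: .name, bytes: .properties.segments.size}'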
[20:34:33] 10Analytics, 10ContentTranslation, 10Operations, 10SRE-Access-Requests: Add amire80 to analytics-privatedata-users group - https://phabricator.wikimedia.org/T122524 (10Petar.petkovic)
[22:54:54] milimetric: yt?
[22:55:41] hey nuria what's up
[22:56:07] milimetric: ah you might be done for today! i just realized it is late
[22:56:19] just tried your mw vagrant stuff
[22:56:37] oh, did it work for you?
[22:56:59] milimetric: and on main page does an ajax post: "http://localhost:8080/w/api.php".
[22:57:29] Yeah, the error is in the .then after the post
[23:00:21] milimetric: no error in console i can see, we can look at this on Monday if you want, i will be working