[01:07:53] milimetric: nuria hi!! another quick question about Hive data... webrequest x_analytics still has loggedIn data as indicated here? https://wikitech.wikimedia.org/wiki/X-Analytics Maybe it's only set for certain kinds of requests? I'm not seeing it set on some requests where I expected it, on Special:BannerLoader
[02:28:14] (PS24) Milimetric: Script sqooping mediawiki tables into hdfs [analytics/refinery] - https://gerrit.wikimedia.org/r/306292 (https://phabricator.wikimedia.org/T141476)
[02:35:07] (PS25) Milimetric: Script sqooping mediawiki tables into hdfs [analytics/refinery] - https://gerrit.wikimedia.org/r/306292 (https://phabricator.wikimedia.org/T141476)
[02:36:54] (PS26) Milimetric: Script sqooping mediawiki tables into hdfs [analytics/refinery] - https://gerrit.wikimedia.org/r/306292 (https://phabricator.wikimedia.org/T141476)
[06:26:44] Analytics, Commons, Multimedia, Tabular-Data, and 4 others: Review shared data namespace (tabular data) implementation - https://phabricator.wikimedia.org/T134426#2743916 (Yurik)
[07:13:38] Analytics, Discovery, Discovery-Analysis, Operations, Ops-Access-Requests: Pivot access for Discovery's Analysis team - https://phabricator.wikimedia.org/T149144#2744012 (Peachey88)
[07:28:18] Analytics-Cluster, Operations, ops-eqiad: Degraded RAID on oxygen - https://phabricator.wikimedia.org/T149167#2744023 (Peachey88) kafkatee cluster best I can tell
[07:47:04] joal: o/
[07:47:27] Yesterday Eric confirmed that alter + repair system_auth should work fine
[07:47:36] so whenever you are ready I'd start the procedure
[08:40:25] elukey: Hi ! o/
[08:40:33] elukey: Let's go whenever you want :)
[08:40:48] elukey: Should be blazing fast (system_auth is very small)
[08:42:04] joal: all right, proceeding
[08:42:21] so first thing
[08:42:32] logging in as cassandra in cqlsh
[08:42:33] and then
[08:42:34] ALTER KEYSPACE "system_auth" WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 6 };
[08:44:17] (logging in with cqlsh -u cassandra aqs1006-a)
[08:44:20] joal: --^
[08:44:27] +1
[08:45:20] running nodetool repairs
[08:45:31] elukey: one node after another, right?
[08:46:20] one instance at a time
[08:46:27] even more precise ;)
[08:46:31] it takes ~30 seconds to complete :)
[08:46:38] yeah, would have guessed that
[08:48:55] some 503s registered, probably unavoidable
[08:49:16] elukey: weird
[08:49:24] elukey: I think it's not supposed to happen
[08:49:42] will dig in the logs a bit
[08:49:48] after finishing
[08:49:53] sure, thanks mate
[08:50:27] I haven't de-pooled each node first
[08:50:31] this is probably why
[08:50:42] I thought it was not necessary
[08:53:14] from logstash (aqs-cassandra) I see only INFO messages
[08:53:15] weird
[08:56:41] Analytics-Kanban, Wikipedia-iOS-App-Backlog, iOS-app-feature-Analytics, iOS-app-v5.3.0-Welsh-Dragon: Drop in iOS app pageviews since version 5.2.0 - https://phabricator.wikimedia.org/T148663#2744090 (JAllemandou)
[08:57:37] elukey: I wouldn't have thought repair would cause issues
[08:57:41] elukey: weird :(
[08:58:03] elukey: I also want to thank you again for the fix in yarn UI (spark) - This really makes my life easier :)
[09:00:31] joal: sorry that took so long!
[09:00:52] elukey: please don't be sorry, you fixed an issue, this is good !
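(For reference, a minimal sketch of the rolling repair procedure described above, as a script. Only the ALTER statement and aqs1006-a appear in the conversation itself; the full instance list, the ssh invocation, and the depool/pool commands are assumptions, including the depool step elukey notes was skipped and likely caused the 503s.)

```python
#!/usr/bin/env python3
"""Sketch of the system_auth replication bump + rolling repair above.

Instance names other than aqs1006-a and the depool/pool commands are
hypothetical, not the exact procedure that was run.
"""
import subprocess

# One Cassandra *instance* at a time, as agreed above.
INSTANCES = ["aqs1004-a", "aqs1004-b", "aqs1005-a",
             "aqs1005-b", "aqs1006-a", "aqs1006-b"]

ALTER = ('ALTER KEYSPACE "system_auth" WITH REPLICATION = '
         "{ 'class' : 'SimpleStrategy', 'replication_factor' : 6 };")


def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)


# Step 1: raise the replication factor once, from any instance
# (assumes cqlsh credentials come from ~/.cqlshrc or a prompt).
run(["cqlsh", "-u", "cassandra", "aqs1006-a", "-e", ALTER])

# Step 2: repair each instance in turn, depooling the host first to
# avoid the 503s seen above (depool/pool are hypothetical commands).
for instance in INSTANCES:
    host = instance.split("-")[0]  # e.g. aqs1004-a lives on aqs1004
    run(["ssh", host, "depool"])
    # Selecting the right instance's JMX port is elided here;
    # system_auth is tiny, so each repair takes ~30 seconds.
    run(["ssh", host, "nodetool", "repair", "system_auth"])
    run(["ssh", host, "pool"])
```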
[09:01:18] mod_proxy_html is really nice but that behavior was really weird
[09:01:43] elukey: I'm moving toward working on T148663 - it's urgent and related to our pageview code base
[09:01:43] T148663: Drop in iOS app pageviews since version 5.2.0 - https://phabricator.wikimedia.org/T148663
[09:03:56] wow
[09:04:08] sneaky header
[09:04:34] elukey: Our pageview definition is not flexible enough, unfortunately
[09:04:38] also it reminded me that I have to start documenting the ldap groups
[09:10:25] (CR) Hashar: "check experimental" [analytics/dashiki] - https://gerrit.wikimedia.org/r/316904 (https://phabricator.wikimedia.org/T147884) (owner: Milimetric)
[09:11:50] Analytics-Dashiki, Continuous-Integration-Config, Patch-For-Review: Add CI job for Dashiki - https://phabricator.wikimedia.org/T148019#2744145 (hashar) I guess that this more or less depends on the bower -> npm migration T147884
[10:24:45] Krenair: o/ If you have time I'd need to ask some basic questions about the wmf and nda ldap groups
[10:24:51] more specifically, things like https://phabricator.wikimedia.org/T148832
[10:36:46] We now have Pivot in production (pivot.wikimedia.org) but several people can't access it (even some members of the WMF, apparently not in the wmf LDAP group)
[10:36:59] so what I want to do is understand how to proceed
[10:47:08] Analytics, Analytics-Cluster, Operations, Research-and-Data, and 2 others: GPU upgrade for stats machine - https://phabricator.wikimedia.org/T148843#2744294 (elukey)
[10:47:15] Analytics, Analytics-Cluster, Operations, Research-and-Data, and 2 others: GPU upgrade for stats machine - https://phabricator.wikimedia.org/T148843#2734568 (elukey) p:Triage>Normal
[10:48:57] Analytics, Analytics-Cluster, Operations, Research-and-Data, and 2 others: GPU upgrade for stats machine - https://phabricator.wikimedia.org/T148843#2734568 (elukey) @DarTar would it make sense to add initially one GPU card only to stat1003 (research data cruncher) and see how it goes, rather tha...
[11:04:26] Analytics, Analytics-Cluster: Improve Hue user management - https://phabricator.wikimedia.org/T127850#2744338 (elukey) Another thing to consider is whether the users need to be in an nda-like LDAP group or not.
[11:14:24] joal: one thing that we should do in the ops meeting is to groom https://phabricator.wikimedia.org/project/view/655/
[11:14:33] for some reason we don't do it often
[11:15:46] dcausse ping
[11:20:09] * elukey lunch!
[12:03:23] milimetric: I found T148832 already opened for WMDE people, but we
[12:03:24] T148832: NDA access for Daniel - https://phabricator.wikimedia.org/T148832
[12:03:36] we'd need another one for whoever is not in
[12:03:42] 'wmf' like Leila
[12:07:25] elukey: but people like Leila should be in wmf by default, so that should be filed as a bug, no?
[12:07:57] Or still on Ops-Access-Requests
[12:08:01] ?
[12:09:09] elukey: I was going to make the list for you but I can't find where in puppet people are set up in wmf/nda, so I didn't know who to put on the list
[12:12:35] physikerwelt____: pong
[12:14:58] milimetric: sure, I think it is a bug, I'll try to come up with the list :)
[12:15:46] elukey: so there's no publicly accessible puppet for this?
[12:16:36] milimetric: what do you mean?
[12:16:52] wmf/nda are set in LDAP only, as far as I know
[12:18:10] i see, yeah, I meant if there was a list I could see so I could give you the people not already set up
[12:20:21] ah yeah, this is a bit tricky
[12:20:50] I was thinking to use the data.yaml file in the admin module (the one listing the ssh keys)
[12:21:00] but there is no mention of wmf people
[12:21:18] but I can start from a diff between LDAP and that list
[12:21:26] the users are not the same all the time
[12:21:31] this is another interesting thing
[12:24:19] I always wondered why it seemed such a big deal to onboard/offboard people. Now I understand :)
[12:27:16] milimetric: more horrors are linked from https://phabricator.wikimedia.org/T142815 :-)
[12:27:37] cleaning this up will be a forthcoming ops goal, though
[12:27:50] "Require/track email addresses"
[12:27:58] what?! lol, people don't have email addresses?
[12:28:18] that's awesome, moritzm, I toast to you and all who are cleaning this up
[12:31:39] milimetric: this is the task that I was talking about yesterday: https://phabricator.wikimedia.org/T129786
[12:31:43] couldn't find it
[12:34:12] good morning! does anyone know if WMF prefers DSA or RSA ssh keys?
[12:35:24] I don't think that there is any restriction, probably a strong RSA key (>= 2048 bits) is good enough
[12:37:19] Hey milimetric
[12:37:49] hi :)
[12:37:51] milimetric: I didn't answer your message, but I confirm spark correctly reads booleans in the minwiki test table :)
[12:38:06] Thanks for having solved that !
[12:39:15] yay, did you try wikidatawiki too?
[12:39:20] (should be the same)
[12:39:22] milimetric: I didn't !
[12:39:27] milimetric: will do now
[12:42:54] milimetric: Confirmed with wikidatawiki !
[12:42:57] Yay :)
[12:43:37] score
[12:44:03] i added a note in the patch about why this is so messed up
[12:44:32] milimetric: it's mysql defaults that have changed, right?
[12:44:39] I still am a bit shocked that we have different schemas, complete with different columns and types, across all our dbs
[12:44:55] yeah, but ... like, create scripts should not leave stuff up to defaults...
[12:45:06] milimetric: I bless you for having worked through that, but I'm a bit scared
[12:45:18] it's "create table" not "create whatever you best think fits my purpose"
[12:45:31] milimetric: :)
[12:45:57] milimetric: We indeed kinda have a core need of correct DBs
[12:58:59] thanks elukey
[13:02:25] o/
[13:02:39] Heya halfak
[13:33:14] Analytics, Commons, Multimedia, Tabular-Data, and 3 others: Allow structured datasets on a central repository (CSV, TSV, JSON, GeoJSON, XML, ...) - https://phabricator.wikimedia.org/T120452#2744753 (Yurik) Tabular data is now enabled on labs commons: https://commons.wikimedia.beta.wmflabs.org/wi...
[13:41:23] hi team :]
[13:43:46] hiyaa
[13:47:29] milimetric: do you have 10 mins to review T149187 with me?
[13:47:29] T149187: WMF or NDA LDAP access request for WMF employee - https://phabricator.wikimedia.org/T149187
[13:47:39] after that I'll add them to wmf
[14:04:39] elukey: sorry, been in a meeting
[14:04:47] batcave in a bit?
[14:08:46] elukey: ok, in cave
[14:13:52] milimetric: sorry, just seen the msg, joining
[14:52:29] Hi everyone! I'm from the Citolytics project and we plan to test a new system for article recommendations ( https://phabricator.wikimedia.org/T142477 ). Therefore, we're looking for a way to get the recommendations into ElasticSearch to make them accessible via CirrusSearch. Can somebody provide some guidance on how we can accomplish this? The recommendation generation is currently done with an Apache Flink job and is not d
[14:58:25] mschwarzer: hi, can you clarify the context of your project?
[15:00:32] ottomata, joal: stadddupppp
[15:00:48] We want to do research on whether a link-based approach can outperform the current text-based MoreLikeThis. The goal is to A/B test both methods in the Android app.
[15:02:09] Analytics, Commons, Multimedia, Tabular-Data, and 3 others: Allow structured datasets on a central repository (CSV, TSV, JSON, GeoJSON, XML, ...) - https://phabricator.wikimedia.org/T120452#2745063 (JEumerus) Hmm, if that goes live where will the repository be?
[15:14:52] Analytics, MediaWiki-extensions-WikimediaEvents, The-Wikipedia-Library, Wikimedia-General-or-Unknown, and 2 others: Implement Schema:ExternalLinksChange - https://phabricator.wikimedia.org/T115119#2745110 (Milimetric) I can try to debug if someone will sit with me and show me how. I've never rea...
[15:32:10] milimetric, mforns: debrief meeting ?
[15:32:21] nuria, not in batcave?
[15:32:30] mforns: let's do the other link
[15:32:34] ok
[15:57:07] joal / nuria: I wonder if there's a way to make git notify us of changes on specific lines of code (like on the User Agent string in the mobile apps)
[15:57:23] milimetric: ya, git triggers
[15:57:26] I mean, of course there's a way, we could do it manually
[15:57:27] milimetric: if this is feasible, that'd be awesome !
[15:57:35] milimetric: jaja, ya i know
[15:57:39] but I mean, I wonder if it's worthwhile for us to set it up
[15:57:43] i would prefer humans notifying us
[15:57:46] me too :)
[15:57:58] * milimetric does not trust humans
[16:16:08] mschwarzer: as I said in phab, if it's only for research purposes I suppose you need to contact the research team first to get an idea of how you can proceed
[16:40:01] Analytics, Discovery, Discovery-Analysis, LDAP-Access-Requests, Operations: Pivot access for Discovery's Analysis team - https://phabricator.wikimedia.org/T149144#2745374 (Krenair) I think pivot access just relies on wmf/nda grouping in LDAP, not production shell
[17:02:59] milimetric, yt?
[17:04:02] hey mforns, I'm just eating lunch, what's up
[17:04:32] hey milimetric, I'll let you eat lunch, np, I'll ping you back in a while
[17:05:03] we have the dashboard meeting together in a bit, maybe we can talk after that
[17:05:09] milimetric, sure
[17:09:52] Analytics, Discovery, Discovery-Analysis, LDAP-Access-Requests, Operations: Pivot access for Discovery's Analysis team - https://phabricator.wikimedia.org/T149144#2745487 (elukey)
[17:17:15] * elukey afk!
[17:17:18] byeeeee o/
[17:17:23] bye elukey !
[17:30:17] milimetric: I realize that our ee dashboards and SoS are at the same time cc mforns
[17:30:38] yep, np nuria, we'll run the meeting
[17:30:41] anything you want us to impart?
[17:31:24] milimetric: usual analytics TLC mixed with tough love
[17:31:28] *thought
[17:31:39] :)
[17:31:46] tough
[17:31:48] yes
[17:45:38] Analytics, Analytics-Cluster, Operations, Research-and-Data, and 2 others: GPU upgrade for stats machine - https://phabricator.wikimedia.org/T148843#2745636 (ellery) @elukey I have a slight preference for stat1004 since it has access to HDFS
[17:56:55] oh milimetric I forgot, do you want to look at the pageviews.js problem in a couple minutes?
[17:58:22] mforns: yeah, come to cave
[17:58:26] cool
[17:58:28] recapping last meeting with nuria
[18:28:26] Analytics, Analytics-Cluster, Operations, Research-and-Data, and 2 others: GPU upgrade for stats machine - https://phabricator.wikimedia.org/T148843#2745920 (Ottomata) Would stat1002 be acceptable? It is more of a local ‘compute’ node (more storage and RAM) than stat1004.
[18:37:29] Analytics, Analytics-Cluster, Operations, Research-and-Data, and 2 others: GPU upgrade for stats machine - https://phabricator.wikimedia.org/T148843#2745932 (ellery) Yes, I was operating under the assumption that stat1004 was the local "compute" node and that stat1002 is more or less reserved for...
[18:43:39] hmm, elukey, still around?
[18:48:07] hi, does the analytics cluster use yarn?
[18:50:36] I see the answer is yes: https://wikitech.wikimedia.org/wiki/Analytics/Cluster/Spark
[18:50:46] physikerwelt: yup :)
[18:51:29] nuria: a bunch of these dashiki tests were never working in the first place, I'm very glad we did this cleanup
[18:51:49] milimetric: in a meeting, can talk in a bit
[18:51:52] the code was broken (like in exception cases and other edge cases) and the tests were broken
[18:52:19] no worries, just wanted to say this cleanup was definitely worthwhile now that I see the results
[18:52:53] ottomata: we were discussing whether it would be more reasonable to run the flink job on yarn ourselves (https://ci.apache.org/projects/flink/flink-docs-release-1.1/setup/yarn_setup.html) or to motivate someone(TM) to deploy flink
[18:56:26] physikerwelt: you are singing a very beautiful song :D
[18:56:48] we have experimented with running flink in yarn
[18:56:50] and have done it
[18:57:02] but, it is not officially supported by analytics
[18:57:05] so, you can totally do it
[18:57:17] but you'll have to do it on your own, we can help as much as we can, but we don't have the throughput to productionize it yet
[18:57:30] physikerwelt: can i ask what is your end goal?
[18:57:37] but, down the road, we are talking about a streaming cluster, possibly a dedicated flink cluster, unsure
[18:57:41] yeah, i'm v curious too!
[18:58:08] we want to improve article recommendations https://phabricator.wikimedia.org/T143197#2745943
[18:58:36] physikerwelt: aha, and why is flink needed?
[18:58:55] physikerwelt: recommendations are calculated async as prior data needs to be chewed up
[19:00:48] to be really honest, we started with flink since I was involved in the development of flink (or stratosphere at that time) and it has a much nicer api compared to hadoop, however this specific problem could also be solved with map reduce alone
[19:02:00] we calculate the recommendation based on the links in other articles
[19:02:46] Analytics-Kanban, Wikipedia-iOS-App-Backlog, iOS-app-feature-Analytics, iOS-app-v5.3.0-Welsh-Dragon: Drop in iOS app pageviews since version 5.2.0 - https://phabricator.wikimedia.org/T148663#2729390 (Nuria) @Jminor: Please be so kind as to communicate changes to apps user agents to us before they h...
[19:03:25] physikerwelt: ya, there is nothing on your ticket that implies flink
[19:04:33] physikerwelt: as far as i can see. But I would like to request that prior to running an experiment you define the metrics you are trying to move. I do not see that in the ticket (side comment to the flink discussion)
[19:06:01] physikerwelt: +1 to what nuria says, but Flink on Yarn is totally fine
[19:06:03] if you can run it
[19:06:32] nuria: yes, I'll add the metrics to https://phabricator.wikimedia.org/T142477
[19:06:51] analytics team doesn't support it yet, but it is possible to submit it as a yarn job using just flink .jars, etc.
[19:07:07] not sure how that would integrate with oozie as a regular job, but i'm sure it's possible
[19:07:23] (and personally i think it's exciting, flink is super cool :) )
[19:07:30] ottomata: thank you
[19:07:55] physikerwelt: what dataset does citolytics run on?
[19:08:08] physikerwelt: That ticket talks about data collection, not specific metrics (did i miss it?)
[19:09:04] nuria: i think they want to push recommendations calculated in hadoop to elasticsearch
[19:09:10] ottomata: at TU Berlin we used wikidumps that contain links. but in a production environment this does not seem to be reasonable
[19:09:13] they already use hadoop to generate some input to elasticsearch
[19:09:55] physikerwelt: yeah, processing xml dumps in hadoop is hard, halfak and joal have some experience with that though
[19:10:10] also some folks on the research team do some link analysis
[19:10:16] not sure what dataset they start with
[19:11:05] physikerwelt, were you using pagelinks?
[19:11:07] ottomata: yes, but we use the link proximity.. this is somewhat special, so we need to extract that from the wikitext
[19:11:52] halfak: we were using the normal wiki links like [[link]]
[19:12:08] physikerwelt, yes. Wikilinks are stored in the pagelinks table.
[19:12:46] halfak: yes, but not the position
[19:13:19] ahh.. You want to parse wikitext to get links at a char offset?
[19:13:32] so the measure uses the distance of two links to calculate the relatedness
[19:13:33] yes
[19:14:10] Like [[Berlin]] is in [[Germany]] --> (Berlin, Germany, 2)
[19:14:15] physikerwelt, how do you feel about python 3.x?
[19:15:18] halfak: we are quite flexible with the language ... after mathoid I can now even use javascript
[19:15:37] OK. We have good utilities for python 3
[19:15:43] I recommend mwxml
[19:15:52] It makes it easy to split processing over multiple cores.
[19:15:56] I use it in hadoop streaming too.
[19:16:00] * halfak gets an example
[19:16:42] https://github.com/mediawiki-utilities/python-mwxml/blob/master/ipython/labs_example.ipynb
[19:16:45] This is a good one.
[19:17:03] It finds revisions that change the number of image links.
[19:17:10] nuria: for our previous evaluation we were using the click-through rate, I think we did not plan to change that
[19:17:24] physikerwelt, It would work for your purposes with minor changes
[19:18:05] halfak: ok, but do you think it would be good to use the dumps
[19:18:25] Yeah. mwxml is designed to process the dumps simply and efficiently.
[19:18:45] I think that is really the only way that you'll find image links within the text efficiently.
[19:18:52] The API would not really be tractable at this scale.
[19:18:57] No other way to get text.
[19:19:51] ok, and is there a kind of regularly scheduled job that we could use to get the data into the elasticsearch index
[19:20:35] ^ no idea what you mean :)
[19:21:35] physikerwelt: we have a weekly job that exports data from hadoop to elasticsearch. The best way to get data into that is if your job could output a hive table that has the page id, and then whatever data needs to be exported
[19:23:06] basically we would join the existing data we export against your table using the primary key, and then ship the various data
[19:23:13] err, primary key = page id
[19:23:37] ok, that sounds doable
[19:24:56] If you wanted to do mwxml still, you could use hadoop streaming. That's what I do when I want to process the XML dumps directly in hadoop. You need to set 1 mapper per file though.
[19:25:25] I'm sure Java has some good SAX parsers, but I think you'll find that managing events is a pain.
[19:25:38] Either way, this seems like a pretty straightforward processing job.
[19:26:39] halfak: have you ever written to a hive table with mwxml?
[19:29:27] Yup.
[19:29:51] So, outputting to a JSON/columns format is easy.
[19:30:17] FYI: http://pythonhosted.org/mwxml/
[19:31:24] physikerwelt, this utility converts XML revisions to "revdocs" json: https://github.com/mediawiki-utilities/python-mwxml/blob/master/mwxml/utilities/dump2revdocs.py
[19:33:03] halfak: do you also have redirect resolution https://github.com/wikimedia/citolytics/blob/master/support/runtimes.md (it doubled our flink runtime)
[19:35:18] halfak: we were running the experiments on our research group's student cluster using a DOP of 200 and it still took a while
[19:35:44] physikerwelt, I'm not sure how you'd resolve redirects from an XML dump file.
[19:35:51] But if you can figure out a way, rock on.
[19:35:58] mwxml just makes it easy to parse XML
[19:36:16] physikerwelt: do we track clickthrough rates anywhere in Android? Or was this a one-off metric calculation for this experiment?
[19:37:40] nuria: so are android users excluded from the tracking
[19:38:02] physikerwelt: what tracking?
[19:38:23] physikerwelt: sorry, we do not calculate click-through metrics at all
[19:38:30] physikerwelt: if that is your question
[19:39:10] yes, we used flink to calculate it from the pageview data
[19:42:53] physikerwelt: ok, you approximated clickthrough rates from the pageview dataset and you hope to do the same just for android users
[19:43:51] Analytics-Kanban, Discovery, Operations, Discovery-Analysis (Current work), and 2 others: Can't install R package Boom (& bsts) on stat1002 (but can on stat1003) - https://phabricator.wikimedia.org/T147682#2746113 (mpopov) @Ottomata: Any luck?
[19:44:13] nuria: we were expecting that we could see from the clickstream data that the link was clicked from the article page on the android app
[19:45:05] Analytics-Kanban, Discovery, Operations, Discovery-Analysis (Current work), and 2 others: Can't install R package Boom (& bsts) on stat1002 (but can on stat1003) - https://phabricator.wikimedia.org/T147682#2746114 (Ottomata) No, sorry, I haven't had time yet :/ not sure when I'll get to this. W...
[19:45:08] the main problem of our last evaluation was that it worked only for articles that were linked within the text
[19:45:22] otherwise there was no entry in the clickstreams
[19:49:30] physikerwelt: my advice would be that before thinking about flink or similar you make sure you have available the data you need to quantify your experiment. i would calculate the metrics you hope to move for a while before you run it, to actually make sure your experiment is effective.
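(A minimal sketch of the link-proximity extraction discussed above, using mwxml as halfak suggests. The wikilink regex and the "number of links in between" distance are simplifications and assumptions, not Citolytics' actual measure, and redirect resolution — which doubled the flink runtime above — is left out entirely.)

```python
#!/usr/bin/env python3
"""Extract wikilink pairs with a proximity score from an XML dump,
in the spirit of the (Berlin, Germany, 2) example above."""
import re
from itertools import combinations

import mwxml

LINK_RE = re.compile(r"\[\[([^\]|#]+)")  # target of a [[wikilink]]


def link_pairs(dump_path, max_distance=5):
    dump = mwxml.Dump.from_file(open(dump_path, encoding="utf-8"))
    for page in dump:
        for revision in page:  # pages-articles dumps carry one revision per page
            links = [m.group(1).strip()
                     for m in LINK_RE.finditer(revision.text or "")]
            # Proximity-based relatedness: closer link pairs score higher,
            # here measured as the count of links between them.
            for (i, a), (j, b) in combinations(enumerate(links), 2):
                if j - i <= max_distance:
                    yield page.title, a, b, j - i


if __name__ == "__main__":
    for row in link_pairs("enwiki-pages-articles.xml"):
        print("\t".join(map(str, row)))
```

Run as a hadoop streaming mapper (one mapper per dump file, per halfak's note above), this would emit tab-separated rows that could feed a hive table keyed by page for the weekly elasticsearch export join.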
[19:50:47] physikerwelt: otherwise you might spend unneeded energy and cycles trying to measure some variables that are not measurable (not saying this is the case, just making sure you can truly build your experiment)
[19:52:17] nuria: Yes, that's certainly a valid point. But those are slightly unrelated topics.
[19:54:04] physikerwelt: well, if you are going to run an A/B test, determining that the metric you want to move is computable is the 1st thing we need to do
[19:54:32] physikerwelt: halfak or peer can advise in this regard too
[19:56:23] nuria: yes. I'll have to check back with my colleague on that topic
[19:57:37] physikerwelt: and determining whether that metric is subject to so much random variation that it might render your experiment not doable is probably the second thing we need to do
[19:57:40] However, regardless of the use of the mobile app or another front-end device, we need to find a way to generate the data in a way that it can be used in production
[19:58:12] physikerwelt: any considerations as to tech (flink or otherwise) should come after these two; we can help you with those, but you need to establish the validity of your experiment first
[20:03:12] nuria: ok, we will elaborate on that ... I did not expect that this would be discussed until much later, when we write a paper
[20:10:56] physikerwelt: to write a paper you need sound data, let's make sure you have that before any tech discussions
[20:15:51] nuria: you convinced me... however, writing a paper is obviously not my main motivation; it is to improve the recommendations. So we will calculate the performance values for the current MLT-based system first
[20:36:04] Analytics-Kanban, Operations, Performance-Team, Reading-Admin, Traffic: Preliminary Design document for A/B testing - https://phabricator.wikimedia.org/T143694#2746331 (Nuria) @BBlack Would you be so kind as to look at our latest proposal to bucket users in this doc: https://docs.google.com/docum...
[20:44:40] Analytics-Kanban, Operations, Performance-Team, Reading-Admin, Traffic: Preliminary Design document for A/B testing - https://phabricator.wikimedia.org/T143694#2746336 (ellery) The pseudo code does not quite match the current text description of the Double Bucket proposal.
[22:11:40] zareen: welcome !
[22:12:04] good luck with your project
[22:12:26] hi matanya! thanks!
[22:12:36] :)
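(As a footnote to the metrics discussion above, a toy sketch of the clickthrough-rate baseline physikerwelt describes. The (prev, curr, type, n) column layout is an assumption modeled on the public clickstream dumps; the real computation ran in flink over pageview data.)

```python
"""Toy clickthrough-rate baseline:
CTR(source, target) ~ clicks(source -> target) / pageviews(source).

Assumes a TSV with (prev, curr, type, n) columns — an assumption,
not the exact dataset format — where type == "link" marks clicks on
internal article links."""
import csv
from collections import defaultdict


def clickthrough_rates(clickstream_tsv, pageviews):
    """pageviews: dict mapping article title -> total view count."""
    clicks = defaultdict(int)
    with open(clickstream_tsv, encoding="utf-8") as f:
        for prev, curr, type_, n in csv.reader(f, delimiter="\t"):
            if type_ == "link":
                clicks[(prev, curr)] += int(n)
    return {(prev, curr): c / pageviews[prev]
            for (prev, curr), c in clicks.items()
            if pageviews.get(prev)}
```

Note the limitation raised at 19:45:08 applies here too: only links actually present in the article text produce clickstream entries, so recommendations shown outside the text would need their own instrumentation.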