[05:49:37] How much novelty should our natural language generation system embody, compared to what prior efforts on the subject (such as those discussed on the mailing list) have produced? Should it be modeled entirely from scratch using items (for constructors) and lexemes (for renderers) as primitives? Or should we be more inclined to parallel something from some other playbook in such a model?
[10:10:42] I just wanted to share the paper by @vrandecic in Communications of the ACM about Wikifunctions and Abstract Wikipedia, as I did not find it here.
[10:10:44] https://cacm.acm.org/magazines/2021/4/251343-building-a-multilingual-wikipedia/fulltext
[16:50:25] That's an interesting question, and my answer is "up to the community; the development team won't be deciding that". As a community member I'd say that we should learn from existing systems but use the freedom to deviate from them and build from scratch (re @mahir256: How much novelty should our natural language generation system embody, compared to what prior efforts on the subject (such as those discussed on the mailing l…)
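[Editor's note: to make the "items as constructors, lexemes as renderers" idea from the first message concrete, here is a minimal hypothetical sketch. All class and function names are illustrative assumptions, not part of any actual Abstract Wikipedia design; it only shows the shape of pairing a language-independent abstract frame with per-language lexemes.]

```python
from dataclasses import dataclass, field

@dataclass
class Lexeme:
    """A language-specific word form (cf. Wikidata lexemes). Illustrative only."""
    lemma: str
    language: str

@dataclass
class Constructor:
    """A language-independent abstract content frame keyed by an item ID."""
    item: str                               # e.g. a Wikidata item ID like "Q5"
    arguments: dict = field(default_factory=dict)

def render(constructor: Constructor, lexicon: dict, language: str) -> str:
    """Render an abstract constructor into text using a per-language lexeme.

    The lexicon maps (item ID, language code) pairs to Lexeme entries, so the
    same abstract frame yields different surface text per language.
    """
    lex = lexicon[(constructor.item, language)]
    args = " ".join(str(v) for v in constructor.arguments.values())
    return f"{lex.lemma} {args}".strip()

# One abstract frame, two renderings:
lexicon = {
    ("Q5", "en"): Lexeme("human", "en"),
    ("Q5", "de"): Lexeme("Mensch", "de"),
}
frame = Constructor(item="Q5")
print(render(frame, lexicon, "en"))  # human
print(render(frame, lexicon, "de"))  # Mensch
```

The point of the sketch is only the separation of concerns the message describes: constructors carry language-independent content, while renderers (lexemes) are the only language-specific layer.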