[16:44:19] I liked what @vrandecic wrote in the Weekly Summary.
[16:44:36] This is just what I pointed out in my IJCAI Workshop paper.
[16:45:12] You cannot believe how interested NLP and ML scientists were in the approach when I presented my paper to them.
[16:47:56] There is one other small point that I also raised when dealing with LLMs and Abstract Wikipedia or Wikifunctions: word order in the training corpus can affect how the output is generated.
[16:49:16] For example, these three sentences, which differ only in the order of their coordinated subjects, should be equivalent:
[16:49:16] "A, B, and C work together"
[16:49:18] "B, C, and A work together"
[16:49:19] "C, B, and A work together".
[16:50:09] However, for an LLM, their probabilities are not equivalent, and fine-tuning does not solve the issue.
[16:50:34] We could probably find a way to use Wikifunctions for training data augmentation for LLMs.
[16:51:53] I know I am not bringing anything new, since I already shared my paper in this channel, but I think it is useful to write this again. As for the paper itself, it is under review for an IJCAI 2025 Workshop.
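
To make the point above concrete (this is only an illustrative sketch, not code from the paper): the snippet below generates every ordering of the coordinated subjects, which is the kind of permutation-based augmentation Wikifunctions could produce, and scores each variant with a pretrained causal LM to show that logically equivalent orderings get different probabilities. The model name "gpt2", the example names, and the helper `sentence_log_prob` are assumptions chosen just for this demonstration.

```python
# Illustrative sketch (assumptions: model "gpt2", made-up subject names).
from itertools import permutations

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Approximate total log-probability of a sentence under the model."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    # outputs.loss is the mean negative log-likelihood per predicted token,
    # so multiply by the number of predicted positions to get the sum.
    n_predicted = inputs["input_ids"].shape[1] - 1
    return -outputs.loss.item() * n_predicted

# Augmentation: every ordering of the coordinated subjects.
subjects = ["Alice", "Bob", "Carol"]
variants = [
    f"{a}, {b}, and {c} work together."
    for a, b, c in permutations(subjects)
]

# The scores typically differ across orderings, even though the sentences
# are logically equivalent.
for variant in variants:
    print(f"{sentence_log_prob(variant):8.2f}  {variant}")
```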