

The study of this in academia
You are linking to an arXiv preprint. I do not know these researchers, and nothing here indicates that this source is any more credible than a blog post.
has found that LLM hallucination rate can be dropped to almost nothing
Where? It doesn’t seem to be in this preprint, which is mostly a history of RAG and mentions hallucinations only as a problem affecting certain types of RAG more than others. It makes some relative claims about accuracy that suggest including irrelevant data might make models more accurate, but it says nothing about “hallucination rate being dropped to almost nothing”.
(less than a human)
You know what has a 0% hallucination rate about the contents of a text? The text itself.
You can see in the images I posted that it both answered the question and also correctly cited the source which was the entire point of contention.
This is anecdotal evidence, and it is also not the only point of contention. Another point was, for example, that AI text is horrible to read. I don’t think RAG (or any other tacked-on tool they’ve been trying for the past few years) fixes that.




Which makes that a bit harder to do. Any clue why?





As I said, the text has a 0% error rate about its own contents, which is what the LLM is summarising and to which it adds its own error rate. Then you read that summary and add your own error rate on top.
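To put rough, made-up numbers on it: if the model misstates, say, 5% of the claims in its summary and you misread 2% of whatever you read, then after reading the summary you end up wrong about roughly 1 − (0.95 × 0.98) ≈ 6.9% of the material, whereas reading the text directly leaves only your own 2%. The exact figures don’t matter; the point is that the errors stack.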
Can we???
Why… would I want that? I read novels because I like reading novels. I also think LLMs are especially bad at summaries, since their architecture makes no distinction between “important” and “unimportant”. The point of a summary is to keep only the important points, so the two clash.
No LLM can do this. LLMs are notoriously bad at analysing any style element of this kind because of their architecture. Why would you pick this example?
I still have not seen any evidence for this, and it still does not address the point that the summary would be pretty much unreadable.