Do legal AI tools that use RAG still hallucinate?

Large language models (LLMs) have a well-known propensity to "hallucinate," or provide false information in response to a user's prompt. (The National Institute of Standards and Technology prefers the term "confabulate," but that usage has yet to catch on.) Researchers at Stanford University previously found that legal AI tools hallucinated 58–82% of the time on legal queries….
By: Fenwick & West LLP