Large language models (LLMs) have a well-known propensity to “hallucinate,” that is, to provide false information in response to a user’s prompt. (The National Institute of Standards and Technology prefers the term “confabulate,” though it has yet to catch on.) Researchers at Stanford University previously found that general-purpose LLMs hallucinated 58–82% of the time on legal queries….
By: Fenwick & West LLP