New study finds AI legal research tools unreliable, prone to 'hallucinations'
SACRAMENTO - A new study by researchers at Stanford and Yale universities has raised concerns about the reliability of artificial intelligence (AI) tools used for legal research, urging lawyers to be cautious about AI outputs, reported Xinhua.
The study, titled "Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools", examined AI tools that lawyers use for legal research to assess their potential for generating false information, known as "hallucinations".
Lawyers are increasingly using AI tools to assist in their legal practice. However, the large language models underlying these tools are prone to "hallucinating" and thus pose risks to legal practice, said the authors of the study, which was published on Stanford University's website.
Some companies that make these AI tools claim to have fixed this problem, but the researchers found those claims to be "overstated".
The study tested popular tools from LexisNexis and Thomson Reuters and found that they produced incorrect information between 17 per cent and 33 per cent of the time.
According to the study, some "hallucinations" happen when AI tools cite non-existent legal rules or misinterpret legal precedents.
The study, citing previous research, noted that as of January 2024, at least 41 of the 100 largest law firms in the United States had begun to use AI in their practice.
The authors called on lawyers to supervise and verify AI-generated outputs, while urging AI tool providers to be honest about the accuracy of their products and to provide evidence to back up their claims. - BERNAMA-XINHUA