Anthropic Tried to Defend Itself With AI and It Backfired Horribly


The advent of AI has already made a splash in the legal world, to say the least.
In the past few months, we've watched as a tech entrepreneur gave testimony through an AI avatar, trial lawyers filed a massive brief riddled with AI hallucinations, and the MyPillow guy tried to exonerate himself in front of a federal judge with ChatGPT.
By now, it ought to be well known that AI is an unreliable source of information for just about anything, let alone for something as intricate as a legal filing. One Stanford University study found that AI tools fabricate information in response to 58 to 82 percent of legal queries, an astonishing rate by any measure.
That's evidently something AI company Anthropic wasn't aware of, because it was just caught using AI as part of its defense against allegations that the company trained its software on copyrighted music.
Earlier this week, a federal judge in California raged that Anthropic had filed a brief containing a major "hallucination," the term describing AI's knack for making up information that doesn't actually exist.
Per Reuters, the music publishers suing the AI company argued that Anthropic cited a "nonexistent academic article" in a filing in order to lend credibility to its case. The judge demanded answers, and Anthropic's response was mind-numbing.
Rather than deny that the AI had produced a hallucination, defense attorneys doubled down, admitting that they had used Anthropic's own AI chatbot Claude to write their legal filing. Anthropic defense attorney Ivana Dukanovic claimed that, while the source Claude cited was genuine, the formatting got lost in translation, leaving the citation with a title and authors that pointed to an article that didn't exist.
As far as Anthropic is concerned, according to The Verge, Claude simply made an "honest citation mistake, and not a fabrication of authority."
"I asked Claude.ai to provide a properly formatted legal citation for that source using the link to the correct article," Dukanovic confessed. "Unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors. Our manual citation check did not catch that error."
Anthropic apologized for the flagrant error, saying it was "an embarrassing and unintentional mistake."
Whatever you want to call it, one thing it clearly is not: a great sales pitch for Claude.
It'd be fair to assume that Anthropic, of all companies, would have a better internal process in place for scrutinizing the work of its in-house AI system — especially before it's in the hands of a judge overseeing a landmark copyright case.
As it stands, Claude is joining the ranks of infamous courtroom gaffes committed by the likes of OpenAI's ChatGPT and Google's Gemini, further evidence that no existing AI model has what it takes to go up in front of a judge.
More on AI: Judge Blasts Law Firm for Using ChatGPT to Estimate Legal Costs