Jason Boleman | April 21, 2026
Since the public launch of generative artificial intelligence tools in November 2022, professional industries have adapted to and embraced the technology in daily office life. The legal community is no exception, with lawyers increasingly finding use cases for AI in their professional lives.
But generative AI is not without flaws. In recent years, courts have reported receiving filings containing AI “hallucinations,” or instances in which documents drafted entirely or partially with the assistance of AI contained citations to nonexistent cases, inaccurate citations or fabricated quotes.
An informal “AI Hallucination Cases” database maintained by a Paris business professor has tallied more than 800 U.S. legal decisions in cases in which generative AI produced hallucinations, including several in Virginia’s federal district courts and the 4th U.S. Circuit Court of Appeals.
Noting that such errors can run afoul of legal ethics rules, lawyers and legal scholars say attorneys must be responsible in how they use and implement AI in their practices.
“I’d advise lawyers to treat generative AI models like an intern who’s extremely knowledgeable but unreliable and way too eager to please,” said University of Virginia School of Law Professor Kevin Cope.
In March, the issue of AI hallucinations grabbed the attention of legal professionals in the region following an unpublished opinion from the 4th U.S. Circuit Court of Appeals.
In In re: Eric Chibueze Nwaubani, the Fourth Circuit issued a public admonishment of Washington, D.C., attorney Eric Nwaubani, who filed a brief “contain[ing] citations to nonexistent judicial opinions potentially derived from generative artificial intelligence.
“The proceeding before us is an original one,” the court’s per curiam opinion noted.
Ultimately, the federal court found a violation of the District of Columbia’s local rules and noted that “courts across the country are grappling with how to address the use of generative AI in materials submitted to the court — and nonexistent cases … are the frequent posterchild for problems.
“The result — while the use of generative AI is currently perceived by some in the legal field as an exception, it may soon become the norm,” the judges noted.
Nwaubani did not respond to a request for comment.
“The facts here should remind all members of the Fourth Circuit Bar that there is no shortcut around the rules,” the court wrote. “Whether done in modern, innovative ways or old-fashioned ways, citing nonexistent decisions is not permitted.”
The opinion is the latest local example of hallucinated cases appearing in legal filings. Iria Giuffrida, a professor at William & Mary Law School, clarified that this phenomenon is not increasing in the way one might think.
“I wouldn’t say there is an increase in hallucination, but there is an increase in hallucinated cases being found,” Giuffrida said. “The number of hallucinations in terms of the output of the machine is pretty much the same, but there are more lawyers using it, and lawyers are failing to check the cites.”
The pitfalls of AI can come because “the tendency of the tool is to affirm [the user’s] point of view,” Giuffrida noted.
“The output is likely to really fascinate lawyers because they now have exactly the kind of argument that they need to win in court,” Giuffrida said. “The problem is, it’s not based on actual case law or statute because the hallucinations are beyond just case law.”
Cope noted that large language models offer many benefits for lawyers, notably in helping to summarize materials provided to them “reliably and without hallucinations.
“But where LLMs often fail is when they’re asked to provide support for a specific legal claim, and no such support in fact exists,” Cope said. “In those cases, they often act as though a fabricated answer is better than no answer and hallucinate one.”
Giuffrida noted that hallucinations have begun “percolating up into judgments,” and said she “would like a little bit of compassion for the position judges find themselves in.
“Some judges do have clerks who should be checking the cites, but not every judge has one,” Giuffrida said. “They don’t have the bandwidth of being expected to check every submission’s cite, so the responsibility is with us lawyers.”
Beth Burgin Waller, chair of Woods Rogers’ Cybersecurity and Data Privacy Practice, said she tries to “responsibly lean on AI tools in the same way I do any other technology.
“The rise of AI has shifted my practice toward advising clients on AI governance, compliance and AI-specific contractual considerations,” she said.
As an attorney working on cybersecurity and critical infrastructure matters, Waller noted that her use of AI tools comes with obligations to protect her clients’ privacy and security.
“I apply strict guardrails to ensure client confidences are protected, particularly amid evolving national security risks,” she added.
The implementation of AI in practices like Waller’s is expected to continue, with Giuffrida stating that the use of generative AI has changed as the tools have evolved.
As a professor of the practice of law who researches emerging technologies, Giuffrida has found that, over the years, the use of generative AI has shifted from saving time on routine workflow to using the tools “to test their cases.”
The use of AI thus creates a “faster first draft, but not necessarily a faster overall work product” because of the need to check citations.
“It’s really useful in having [generative AI] almost like a mirror that you use to see where the blind spots are that exist in your argument,” Giuffrida said. “That’s where I think my colleagues could truly draw great value from these tools.”
Cope echoed this potential usage of AI for attorneys.
“They can be pretty good at drafting arguments based on primary legal sources, filings and memos, if provided with enough guidance,” Cope said.
As a word of caution for her fellow attorneys, Waller added that all work products that AI is involved in should be “carefully cite-checked against authoritative legal sources to confirm that the cited cases exist and are accurately described.”
The potential for mistakes in documents can also come when an attorney is working in an area where they may lack experience. Cope said AI tools should be used “with great caution in areas of law you’re less familiar with.”
He added, “Just like when working with a junior lawyer, you need to know enough to know when they’re probably wrong.”
While firm policies and a skeptical eye can help keep hallucinations out of court filings, they do not cover every instance where the courtroom and generative AI usage intersect.
Many of the decisions in the informal AI hallucinations database come from cases involving pro se litigants, who represent themselves in court. Waller cited local rules adopted in Richmond courts that address both attorneys and pro se litigants.
Adopted on Jan. 13, the local rule on artificial intelligence explicitly allows the use of AI tools but requires litigants to provide a certification form 10 days before a trial or hearing disclosing any use of AI in a filing.
“[A]ll parties employing AI bear the burden of exercising due care in reviewing filings before their submission to the Court to ensure they are ‘hallucination-free’ and are otherwise well grounded in fact and warranted by existing law or a good faith argument for the extension, modification, or reversal of existing law,” the local rule states.
The rule notes that the failure to check AI-generated sources may violate Va. Code § 8.01-271.1(B), the statute governing the signing of pleadings and motions.
“It is my hope that more jurisdictions adopt rules similar to those implemented by the Circuit Court for the City of Richmond,” Waller said.
Giuffrida said it is important to teach her law students early on about the use of AI. Her course on AI and the law launched at William & Mary in 2018 and has evolved following the commercial availability of generative AI platforms.
Part of the course requires students to use an AI tool to draft a paper on a topic of their choice. The paper includes reflections on using AI in the writing process and observations about where AI use can lead to pitfalls.
“I want my students to understand in the safety of a course the strengths and weaknesses of these tools,” Giuffrida said.
As with Richmond’s local rule, Giuffrida’s students must complete an AI disclosure statement for the course explaining which AI tools they used and in what context.
“In many courts nationwide, we are seeing an increase of courts asking lawyers to confirm if they used a large language model, and if so, how,” Giuffrida said. “I’m trying to get them in the habit of disclosing the use of the tool.”
Despite hesitancy by some of her students — and some practicing attorneys — to adopt AI, Giuffrida said, nobody studying AI’s use in the legal profession would advise attorneys not to use AI tools.
However, remaining educated on the tools and checking any AI output are crucial to success.
“The responsibility is for us lawyers to make sure that we are confident that what we say in our court documents is correct and that our citations exist,” Giuffrida said.