
ChatGPT: Just because we can doesn't mean we should

By Jeffrey H. Geiger // June 19, 2023

Artificial intelligence technology, including tools like ChatGPT, has become a hot topic across the legal landscape. From concerned law professors scanning student work for chatbot-generated content to practicing lawyers finding themselves in hot water during casework, these tools are a double-edged sword. And as the tech space continues to develop ever more intelligent tools, we're left with increasing responsibility as attorneys to weigh their risks and rewards.

A recent Forbes magazine article highlighted an instance of the AI tool ChatGPT being used inappropriately and ineffectively by an attorney. Where did things go wrong? A Manhattan attorney, representing a man in a personal injury lawsuit, submitted a federal court filing citing at least six cases that simply didn't exist. The offending lawyer used ChatGPT to fill gaps in his research, which may seem like a great way to save time and collect case law effectively and efficiently. But unfortunately, and allegedly unbeknownst to this user of the new technology, ChatGPT has a knack for inventing facts, people, places and, yes, even court cases out of thin air. The result? He is now facing a sanctions hearing.

AI technology like ChatGPT can be a useful tool when used correctly and with a healthy dose of verification. Like any other tool, it calls for caution and proper safeguards to ensure an accurate, and ethical, outcome. Here are several tips for approaching AI tools in the legal landscape:

Fact check. Independent, manual fact-checking will never go out of style. It can be a slog, but it's better than facing sanctions or other legal ramifications that can affect your ability to practice.

Use multiple sources for research. Relying on AI tools alone to complete research will inevitably produce mistakes and inaccuracies. Just as we draw on multiple texts and cases in our research, don't limit yourself to a single technology for assistance.

When in doubt, leave it out. If you're unsure how best to apply this new technology to your research, casework or other tasks, you are under no obligation to use it. In fact, you may find that more traditional methods suit you just fine, without the stress or worry an AI tool can add.

Keep it ethical. Your obligations of competence, diligence and supervision remain. "The robot did it" will never be an acceptable excuse.

The bottom line? Just because a tool may be useful in the practice of law, we should still question whether it should be used. The current consensus is that ChatGPT is a great starting point for content creation, perhaps even in legal work, but that legal professionals should proceed with caution when applying the tool. An age-old problem persists: humans are apt to believe whatever the machines say, no matter how wrong. And as "smart" as it may be, ChatGPT doesn't know when it's providing inaccurate information. The responsibility is squarely on our shoulders to check facts, get them right and serve our clients with the sharpest tools available, including one very old-fashioned method: the power of the human mind. Your clients and partners (and legal malpractice insurers) will thank you.

Jeffrey H. Geiger is a partner with Sands Anderson and serves as general counsel and on its board of directors. He practices in the areas of civil litigation and in the representation of lawyers and law firms, assisting with their business, ethics and professional responsibility matters.
