US Lawyer Uses ChatGPT For Case Research – But It Backfires

Sarah Raines
In the ever-evolving landscape of legal research and technology, lawyers are constantly exploring innovative methods to streamline their workflow and enhance their ability to provide accurate and effective counsel to their clients.
ChatGPT by OpenAI. Photo: BoliviaInteligente | Unsplash

While many legal professionals have found value in leveraging AI tools like ChatGPT for various aspects of their work, one US lawyer's reliance on the technology took an unexpected turn. Steven Schwartz, a US lawyer, reportedly faces penalties for using ChatGPT in a court case.

As reported by the BBC, Steven A Schwartz is facing a court hearing after admitting he used the AI chatbot for case research. The research the chatbot produced turned out to be false: the filing cited legal cases that did not exist. The judge said the court faced an "unprecedented circumstance."

OpenAI's ChatGPT generates original text on request, though it warns users that it "can produce inaccurate information." With this turn of events, OpenAI risks losing users who rely on ChatGPT for their information needs.

ChatGPT’s Error Explained

Over the past few months, ChatGPT has garnered over 100 million users, who use the chatbot to generate information in response to their prompts. Although the chatbot warns that it may produce inaccurate information, it has become the primary AI assistant for many users.

However, after the incident with the US lawyer, people may be forced to rethink their reliance on the chatbot. Schwartz was representing a man who sued an airline over injuries caused by a serving cart during a flight. Schwartz admitted to using OpenAI's chatbot for his research but said he was "unaware of the possibility that the content could be false."

Schwartz has said he will not use the chatbot to supplement his research again without verifying its output. Even so, the episode may leave other users hesitant to rely on it.

Is ChatGPT Reliable?

ChatGPT answers questions using natural, human-like language.

Last month, ChatGPT falsely accused a law professor of sexually harassing a student, a claim no evidence supports. Schwartz has similarly fallen victim to ChatGPT's inaccuracy. While the chatbot has been increasingly helpful to users all over the world, its errors are proving increasingly costly.

Chatbots like ChatGPT share this challenge. Google's Bard has also been reported to generate false information in response to some prompts, underscoring the fears of tech experts who worry that AI could get out of hand.

AI chatbots like ChatGPT were built to "sound impressive," not to be accurate. Users therefore need to scrutinize the information they get from these chatbots. ChatGPT provides fast, easy answers to prompts, but those answers must be verified.

Will OpenAI Be Sued?

This is not the first time OpenAI's ChatGPT has generated false information in response to prompts. The AI company has yet to release an official statement on the matter, and in truth, it may not have to: the company is not responsible for how users apply the information they get from its chatbot.

The lawyer in question, Steven Schwartz, said this was his first time using ChatGPT for legal research, and unfortunately he fell victim to the inaccurate information it generated. It is fair to say that OpenAI is not to blame for Mr. Schwartz's failure to verify the information.

Tech companies like OpenAI include disclaimers acknowledging that AI chatbots may generate false information. Users should take heed.
