AI legal advice case asks whether ChatGPT crosses legal boundaries
Legal dispute tests whether AI tools can provide legal advice without violating practice-of-law rules.
A newly filed lawsuit against OpenAI raises a key issue: Does allowing generative AI systems like ChatGPT to provide legal advice violate laws that bar the unauthorised practice of law (UPL)? UPL means providing legal services, such as drafting filings or giving advice, without the required legal qualifications or a state licence.
The case claims an individual used ChatGPT to prepare legal filings in a dispute with Nippon Life Insurance, prompting the company to argue OpenAI should be held responsible for the outcome.
The lawsuit claims ChatGPT helped the user reopen a settled legal dispute, forcing Nippon Life to spend additional time and resources responding to filings produced with the chatbot. The claim alleges tortious interference with a contract, meaning the unlawful disruption of an existing agreement by inducing one party to breach or alter it.
The suit also claims unauthorised practice of law and abuse of the judicial process, which means using the legal system improperly to gain an advantage, and argues OpenAI should be liable because ChatGPT operates under its control. At its core, the case asks whether an AI system may analyse a dispute and offer legal advice as a lawyer would.
Advocates argue such tools could widen access to justice by making legal support more affordable for those who cannot easily hire a lawyer. However, US legal frameworks restrict the provision of legal advice to licensed lawyers, rules designed to protect consumers and ensure professional accountability.
Critics argue that limiting legal advice to licensed lawyers preserves an expensive monopoly and hinders access to justice. AI-driven legal tools highlight this tension over the future of legal services.
The outcome of this lawsuit will likely hinge on whether AI-generated responses constitute legal advice and whether OpenAI can be held liable for such outputs. Even if the suit fails, it foregrounds the broader debate about granting generative AI a legitimate role in legal guidance.
