Growing scrutiny over AI errors in professional use

Experts advise treating AI as a helper, not a decision-maker, to prevent costly workplace mistakes.

Judges and professionals face mounting problems from AI errors, prompting calls for stricter oversight and verification.

Judges and employers are confronting a surge in AI-generated mistakes, from fabricated legal citations to inaccurate workplace data. Courts in the United States have already recorded hundreds of flawed filings, raising concerns about unchecked reliance on generative systems.

Experts urge professionals to treat AI as an assistant rather than an authority. AI tools can support research and report writing, yet unverified outputs often contain subtle inaccuracies that can mislead users or damage reputations.

In a matter of months, data scientist Damien Charlotin has identified nearly 500 court documents containing false AI-generated information. Even established firms have faced judicial penalties after submitting briefs with non-existent case references, underlining the growing professional risks.

Workplace advisers recommend verifying AI results, protecting confidential information, and obtaining consent before using digital notetakers. Training and prompt literacy are becoming essential skills as AI tools continue to shape daily operations across industries.
