AI cheating allegation sparks discrimination lawsuit

The case raises urgent questions about how universities handle AI suspicions when technology, academic integrity, and disability rights collide.

A University of Michigan student has filed a federal lawsuit accusing the university of disability discrimination after professors allegedly claimed she used AI to write her essays. The student, identified in court documents as ‘Jane Doe,’ denies using AI and argues that symptoms linked to her medical conditions were wrongly interpreted as signs of cheating.

According to the complaint, Doe has obsessive-compulsive disorder and generalised anxiety disorder. Her lawyers argue that traits associated with those conditions, including a formal tone, structured writing, and a consistent style, were cited by instructors as evidence that her work was AI-generated. They say she provided drafts and medical documentation supporting her account but was still subjected to disciplinary action and prevented from graduating.

The lawsuit alleges that the university failed to provide appropriate disability-related accommodations during the academic integrity process. It also claims that the same professor who raised the concerns remained responsible for grading her work and overseeing remedial assignments, despite what the complaint describes as subjective judgments and questionable AI-detection methods.

The case highlights broader tensions on campuses as educators grapple with the rapid rise of generative AI tools. Professors across the United States report growing difficulty distinguishing between student work and machine-generated text, while students have increasingly challenged accusations they say rely on unreliable detection software.

Similar legal disputes have emerged elsewhere, with students and families filing lawsuits after being accused of submitting AI-written assignments. Research has suggested that some AI-detection systems can produce inaccurate results, raising concerns about fairness and due process in academic settings.

The University of Michigan has been asked to comment on the lawsuit, which is likely to intensify debate over how institutions balance academic integrity, disability rights, and the limits of emerging AI detection technologies.
