AI platforms under scrutiny for overstating mental health support capabilities

Texas Attorney General Ken Paxton has opened an investigation into Meta AI Studio and Character.AI for potentially deceptive mental health marketing, especially to children.


Attorney General Ken Paxton of Texas has issued civil investigative demands to Meta AI Studio and Character.AI over alleged deceptive trade practices in promoting chatbots that lack medical credentials as mental health support tools.

The investigation probes whether these platforms mislead users, including vulnerable children, by portraying themselves as private, trustworthy sources of emotional support while lacking professional oversight.

Paxton’s office alleges these tools impersonate licensed therapists, make fraudulent claims, and exploit user data for algorithm development and advertising.

Texas authorities further express concern that users may interpret chatbot recommendations as legitimate therapy, a risk heightened by the absence of proper disclaimers, safeguards, or medical accountability.

The investigation follows earlier actions, including scrutiny of AI safety concerns involving tools such as Character.AI, and underscores broader anxieties over AI platforms offering mental health advice without regulation or clinical validation.
