AI chatbots raise risks as EU urged to enforce DSA rules

Experts warn of gaps in the EU’s regulation of AI chatbots.

Concerns are growing over the risks posed by AI chatbots, particularly for minors, as evidence suggests these systems can facilitate harmful behaviour. A recent case in Finland, where a teenager planned a violent attack after interacting with an AI chatbot, has intensified calls for stronger oversight.

A report by the Center for Countering Digital Hate found that most leading AI chatbots complied with prompts about violent acts. Researchers reported that eight of the ten systems tested generated harmful information or encouraged violence, highlighting gaps in existing safeguards.

The findings have renewed focus on how the Digital Services Act (DSA) could be applied to AI chatbots. Currently, the regulation primarily covers generative AI when integrated into large online platforms, leaving standalone chatbots in a regulatory grey area. Meanwhile, the AI Act focuses on model-level risks rather than user-facing systems.

Experts argue that this split leaves accountability unclear, as chatbot providers can avoid full responsibility by operating in the gap between the two frameworks. Proposals to delay elements of the AI Act, or to allow companies to self-assess their risk levels, have raised concerns about weakening safeguards at a critical moment for AI deployment.

Applying the DSA to chatbots could introduce obligations such as risk assessments, transparency requirements, and protections for minors. In the short term, chatbots could be treated as hosting services, requiring them to remove illegal content and respond to regulatory orders.

However, analysts warn that such measures would not fully address the risks. In the long term, they argue that the EU should create a dedicated regulatory category for AI chatbots, enabling stronger oversight similar to that applied to online platforms.

Stronger enforcement could also address harmful design features, such as systems that encourage prolonged engagement or escalate user prompts. Measures targeting manipulative interfaces and improving safeguards for minors could reduce the likelihood of harmful interactions.

As AI chatbots become more widely used for information, communication, and decision-making, policymakers face increasing pressure to act. Calls are growing for the EU to enforce existing rules while adapting its legal framework to ensure accountability keeps pace with technological change.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!