Denmark faces backlash over AI welfare surveillance

Amnesty urges Denmark to halt AI-driven welfare tools and calls for EU intervention.

Amnesty International criticises Denmark’s AI-driven welfare system for breaching privacy and fostering discrimination.

Concerns are mounting over Denmark’s use of AI in welfare fraud detection, with Amnesty International condemning the system for violating privacy and risking discrimination. Algorithms developed by Udbetaling Danmark (UDK) and ATP flag individuals suspected of benefit fraud, potentially in breach of EU law. Amnesty argues these tools classify citizens unfairly, in ways that resemble prohibited social scoring practices.

The AI models process extensive personal data, including residency, citizenship, and other sensitive information that may act as a proxy for ethnicity or migration status. Critics highlight the disproportionate targeting of marginalised groups such as migrants and low-income individuals, and Amnesty accuses the algorithms of entrenching systemic discrimination and exacerbating existing inequalities in Danish society.

Experts warn that the system undermines trust, with many benefit recipients reporting stress and depression linked to invasive investigations. One algorithm, known as ‘Really Single’, scrutinises family dynamics and living arrangements without clear criteria, leading to arbitrary decisions. Amnesty’s findings suggest these practices compromise human dignity and lack transparency.

Amnesty is urging Danish authorities to halt the system’s use and calling on the EU to clarify how its AI regulations apply. The organisation emphasises the need for independent oversight and a ban on the use of discriminatory data. Danish authorities dispute Amnesty’s findings but have yet to offer transparency about how their algorithms work.