Amnesty International raises alarm over AI-driven discrimination in Danish welfare system
The new report calls for a halt to the current system, greater transparency, and adherence to international human rights standards.
Amnesty International has raised significant concerns about the Danish welfare authority, Udbetaling Danmark (UDK), and its partner, Arbejdsmarkedets Tillægspension (ATP), using AI tools in fraud detection for social benefits.
The organisation warns that these AI systems may disproportionately discriminate against vulnerable groups, including people with disabilities, low-income individuals, migrants, refugees, and marginalised racial communities. The findings are detailed in Amnesty’s report, ‘Coded Injustice: Surveillance and Discrimination in Denmark’s Automated Welfare State,’ which argues that the system risks entrenching social inequalities rather than supporting at-risk populations.
The report condemns what it describes as mass surveillance practices, highlighting the erosion of privacy caused by the extensive collection of sensitive data on residency, citizenship, and family relationships. Amnesty argues that such practices not only compromise individual dignity but also enable algorithmic discrimination, particularly through systems like the ‘Really Single’ and ‘Model Abroad’ algorithms. These tools may unfairly target atypical family arrangements or people with foreign affiliations, further marginalising already vulnerable communities. The psychological impact is severe: individuals describe the stress of ongoing investigations as living ‘at the end of a gun,’ which exacerbates mental distress, particularly among people with disabilities.
Why does it matter?
The report points to failures of transparency and accountability, criticising UDK and ATP for resisting full disclosure of their AI systems and for dismissing claims that they operate a social scoring mechanism without robust justification. It also links these practices to potential violations of international, EU, and Danish commitments to privacy and non-discrimination. Amnesty calls for an immediate halt to the use of these algorithms and a prohibition on using ‘foreign affiliation’ data in risk assessments, and urges the European Commission to clarify which AI practices constitute social scoring, so that human rights are safeguarded amid technological advancement.