UK’s information commissioner warns DWP it risks contempt of court over lack of transparency in AI-assisted welfare fraud checks

The UK’s DWP has used AI to detect fraud in welfare claims, but its secrecy over these tools has prompted a contempt of court warning from the information commissioner. Concerns include withholding information, rejecting freedom of information requests, and blocking inquiries from MPs, all of which point to a lack of transparency in the use of AI in welfare programmes.


The Department for Work and Pensions (DWP), which is responsible for welfare, pensions, and child maintenance policy in the UK, has deployed AI tools over the past two years to detect fraud and error in universal credit claims.

At the same time, the UK’s information commissioner has warned the DWP that it risks contempt of court over growing concerns about the transparency of these AI tools in its investigation processes. Specifically, the information commissioner stated that the DWP has 35 days to change its approach and improve its broader handling of freedom of information requests. According to the Guardian, the government has maintained secrecy over the system, refusing to comply with freedom of information requests and blocking questions from MPs.

In July 2022, the Guardian asked the DWP what type of information the AI tools’ algorithms are fed when deciding who might be cheating. The DWP refused to provide any information, claiming that its release would harm the prevention or detection of crime. The DWP has said it takes compliance with the Freedom of Information Act and the Cabinet Office code of practice seriously and keeps its approach to the publication of information under constant review.

Why does it matter?

Beyond the information commissioner’s warning, UN experts have cautioned that without greater transparency, the use of AI in the UK’s welfare system could create serious problems for benefit claimants. Child poverty campaigners have also stated that the use of AI tools could have devastating consequences if benefits are wrongly suspended. Additionally, Big Brother Watch, a privacy campaign group, has expressed serious concern about the government’s actions. It previously reported that 540,000 individuals applying for benefits had fraud risk scores assigned to them by secretive algorithms before receiving support.