How Labour Is Using Biased AI to Determine Benefit Claims


After pushback from disability justice and privacy groups, Labour scrapped the Tories’ dystopian data protection and digital information (DPDI) bill, only to replace it with their own version – the data (use and access) bill. Far from being an improvement on what the Tories had planned, this new bill will massively expand the scope of automated decision-making. What’s more, Labour has also resurrected one of the most controversial parts of the DPDI in the form of the public authorities (fraud, error and recovery) bill, which will force banks to spy on all their customers in the interest of tackling welfare fraud. 

Prime minister Keir Starmer has said he wants to “mainline AI” into the veins of government. Starmer’s AI fervour is encouraged by the Tony Blair Institute, which advocated for turning the DWP into an “AI exemplar” department in a July 2024 report. Labour is opening the floodgates to not just a digitised welfare system that treats all claimants as suspects simply because they need support – but a public sector where AI tools are being used to dictate our lives in ways that we aren’t necessarily aware of. 

Biased and ineffective. 

In documents released under the Freedom of Information Act at the end of last year, the DWP admitted to finding bias in an AI tool used to detect fraud in universal credit (UC) claims.

The machine learning tool – which focuses on claims for cash advances to cover the five-week waiting period while a UC application is processed – had been assessed for fairness by the DWP several times since at least July 2023, the documents revealed. Each analysis found that the algorithm and intervention process were more likely to incorrectly flag claimants with certain protected characteristics, to an extent researchers considered “statistically significant”. Essentially, the AI incorrectly assumed that some people were more likely to commit fraud based on factors including their age, nationality and whether or not they were married. 

What’s more, the DWP’s own reports admit that the metrics it uses to assess fairness are incomplete. It fails to test for bias against many marginalised and discriminated-against cohorts – or for bias regarding intersectional vulnerabilities.

It’s become clear that the Labour government is betting the house on AI as a magic formula for bolstering growth. In January, it published a 50-point AI opportunities action plan – essentially a notice of intent to procure services for AI development, allowing the government to “rapidly test, build or buy tools”. 

Disability campaigners have warned that benefits claimants are merely guinea pigs in the government’s public sector AI plan – without their consent or even their awareness. The DWP has been using machine learning tools since at least 2020, but there is a serious lack of transparency around the identity of all AI tools used, their efficacy and what – if any – safeguards have been put in place to prevent bias or mitigate the impact of real-world harm. 

https://novaramedia.com/2025/04/15/how-the-labour-party-is-using-biased-ai-to-determine-benefit-claims/

Keir Starmer says that his Labour Party is intensely relaxed about assaulting the very poorest and most vulnerable.
