By Seraphine Lai

The modern British welfare state emerged from industrialisation and the dual crisis of a global recession followed by the Second World War, conditions which forged a consensus around the need to build a society better able to deal with the human costs of a largely unregulated market economy. Subsequently, the economic downturn of the 1970s, followed by the rise of neoliberalism as a global ideology, saw the public sector shrink and financialisation take hold of the economy, presenting numerous challenges to the welfare state and its continued relevance.
And yet, perhaps no greater threat has been presented than the increasing integration of artificial intelligence (AI) into the public and third sectors across the United Kingdom. With the crisis of the COVID-19 pandemic swiftly changing the parameters of the economy and state, academics have noted that the elevated role of Big Tech will cause social welfare as we know it to become “strongly driven by private corporations, and it will use their tools and platforms–whose ultimate goal is generating profit. Crucially, it will be based on opaque and intrusive forms of datafication”. In fact, we can already see this happening before our very eyes: over 55 automated tools are currently deployed by UK public authorities, and hundreds of social workers are using AI systems such as the Magic Notes tool, which records conversations and suggests actions for client management.[1] While industry experts can see the potential of AI to enhance efficiency and address operational challenges in social work, they have also raised serious ethical, legal and practical concerns about its use in contexts involving vulnerable populations.[2]
Historically, industries reliant on human connection–such as social work, therapy, education and community management–have been perceived as resistant to rapid advances in automation. These professions, which depend on trust, relational understanding, and the ability to make contextually nuanced decisions, require human practitioners who bring emotional intelligence and ethical reasoning to their work; qualities that are difficult to replicate through algorithmic models. However, the growing adoption of AI challenges these assumptions, as its capacity to analyse data and suggest actionable outcomes begins to influence even the most human-centred professions.
The trend of turning ever more of our social lives into data points to be collected and analysed is rapidly transforming the ways in which the provision of public services is organised, with significant implications for how we might think about the welfare state. While the emphasis on data infrastructures in the context of COVID-19 has made this more explicit, the conditions for these developments were already well underway. As noted by Philip Alston, the UN Special Rapporteur on extreme poverty and human rights, the “digital welfare state” is already a reality or emerging in many countries across the globe. In these states, “systems of social protection and assistance are increasingly driven by digital data and technologies that are used to automate, predict, identify, surveil, detect, target, and punish”.[3]
Indeed, we can see these effects happening around us. Magic Notes, currently used by councils such as Swindon, Barnet, and Kingston, supplements social workers dealing with social welfare cases, helping not only to keep track of conversations with clients but to generate summaries and follow-up actions that the social worker can choose to take. While some social workers report that the tool enhances engagement with clients and improves efficiency, critics argue that its reliance on sensitive personal data–including health records, finances, and demographics–raises critical issues of data privacy and governance.[4] Although the system includes features such as data deletion within one month and prohibits the use of personal data for AI training, questions about the adequacy of these safeguards persist. Furthermore, and perhaps most significantly, the threat that these AI tools will eventually replace human decision-making, reducing human practitioners to mere operators, remains a significant worry, and one that is not without reasonable grounds.[5]
AI systems used by the UK government are known to perpetuate bias relating to people’s age, disability, marital status and nationality. An internal assessment of a machine-learning programme used by the Department for Work and Pensions (DWP) to review thousands of claims for universal credit payments across England found that the system incorrectly selected people from some groups more than others when recommending whom to investigate for potential fraud, leading to a “statistically significant outcome disparity”.[6] Further research found that 200,000 people have been wrongly investigated for housing benefit fraud and error because of poor algorithmic judgment.[7]
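The DWP’s own fairness analysis is not public, but the kind of finding it reports can be illustrated with a basic statistical check: comparing the rate at which two groups are flagged for investigation and testing whether the gap exceeds what chance would explain. The sketch below uses a standard two-proportion z-test with invented counts; it is not the DWP’s method, only an illustration of what an “outcome disparity” measurement looks like.

```python
import math

def two_proportion_z(f1, n1, f2, n2):
    """Two-proportion z-test: is the gap between two groups'
    referral rates larger than chance would explain?
    f = claims flagged for investigation, n = total claims."""
    p1, p2 = f1 / n1, f2 / n2
    pooled = (f1 + f2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical counts: group A's claims are flagged twice as often as group B's.
z, p = two_proportion_z(f1=180, n1=1000, f2=90, n2=1000)
print(f"z = {z:.2f}, p = {p:.2g}")  # a very small p indicates a statistically significant disparity
```

A disparity of this size would be flagged as statistically significant; the harder question, which the test alone cannot answer, is whether the underlying fraud base rates or the algorithm’s judgment explain it.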
Undoubtedly, technology has played a transformative role in the move across the UK’s public and third sectors towards the use of AI to cut costs and make savings. Following the 2008 financial crisis, austerity policies further weakened the British public sector, leading to substantial cuts and the privatisation of welfare services. Local councils in the UK, having to adjust to operating in a service economy reliant on global supply chains and precarious labour whilst witnessing a significant decline in trade union membership, were forced to face budget reductions of up to 60%, whilst public assets were increasingly transferred to private ownership, shrinking state enterprises to less than 2%. It is thus not surprising that the emergence of disruptive technology–which facilitated mass data collection, automation and the creation of AI–heightened these tensions and pushed the public sector to consider integrating AI systems to make service provision more efficient and allow the welfare state to deliver on its promises.

The integration of technology into public administration has thus further advanced social engineering, population management and the monitoring of citizens. Early digitalisation efforts rationalised public services but also reinforced distinctions between “deserving” and “undeserving” citizens, with academics describing this shift as an evolution from “new public management” to “new public analytics”, centred on algorithmic regulation. Across Europe, automated decision-making now allocates public health treatments, identifies child neglect, and detects benefit fraud. In the UK, these technological integrations raise critical questions about datafication and its implications for public infrastructure and human rights.
These questions are echoed by professional organisations and academics. The British Association of Social Workers (BASW) has called for a national framework of ethical principles for AI use in social work, highlighting the importance of accountability and the protection of human rights. Christian Kerr, a senior social work lecturer at Leeds Beckett University, has urged local authorities to carefully consider the implications of AI for privacy and ethical standards. He has called for a more cautious approach to AI implementation, arguing that regulators, professional associations, educators, and practitioners must first address its moral challenges before it becomes embedded in social work practice.
Nevertheless, there are voices within the profession advocating for the sector to engage proactively with AI. BASW chair Julia Ross has argued that while AI presents challenges, social workers cannot afford to reject the technology outright. “If we just stand back and say we don’t like it, then we won’t do ourselves, the profession, or the people we work with any advantage,” Ross stated.[8] She emphasized the need to merge AI tools with the emotional intelligence and ethical reasoning that are hallmarks of social work, rather than allowing technology to displace these fundamental aspects of the profession.
This debate reflects a broader tension between the opportunities and risks presented by AI in social work. On the one hand, AI has the potential to streamline administrative tasks, freeing up social workers to focus on building relationships with clients and addressing their needs. On the other hand, the risks associated with data misuse, algorithmic bias, and over-reliance on automated systems demand robust regulatory frameworks and ethical oversight.
Most insidious, perhaps, are the concerns that datafication is fundamentally changing the way in which the welfare state operates. Data-driven systems often frame social issues as individual problems, emphasising personal risk factors (e.g. behaviour, characteristics) over structural causes such as poverty or systemic racism. This reflects a neoliberal governance style that dangerously shifts responsibility from the state to individuals, undermining the notion of shared social responsibility. As welfare systems increasingly draw on data from a wide array of sources, embedding themselves within private data industries, there is a move towards deepened state surveillance, particularly of marginalised groups, creating “digital poorhouses” where hyper-surveillance and biased algorithms, shaped by the discriminatory construction of datasets, perpetuate existing inequalities.
At a macro level, the integration of data-driven systems into the welfare state also makes the public sector more reliant on revenue extraction through rents (e.g. money or data), embedding welfare systems within commercial operations and global markets. This dependency transforms social welfare into a computationally optimised problem, displacing public infrastructure with private, programmable infrastructures. The commodification of data threatens to restructure social practices, prioritising data accumulation over traditional production, with platforms acting as intermediaries extracting value from economic and social interactions. Rentierism will only expand this phenomenon, with the welfare state treating data as a primary source of value while turning services into rentable assets.
In conclusion, the integration of AI into the UK’s welfare system appears inevitable, driven by government enthusiasm that often overlooks the pressing need for regulation. This shift has profound implications for millions of vulnerable individuals, as decisions traditionally grounded in human empathy risk being ceded to algorithms. While AI has the potential to enhance efficiency, it is imperative to critically examine its role and prioritise human decision-making in sectors like social welfare, where the stakes are deeply personal and far-reaching. Awareness and accountability must guide this transformation to ensure technology serves, rather than undermines, the public good.
Bibliography
[1] Guardian News and Media. (2024, September 28). Social workers in England begin using AI systems to assist their work. The Guardian. https://www.theguardian.com/society/2024/sep/28/social-workers-england-ai-system-magic-notes
[2] The Tracking Automated Government Register. Public Law Project. (2024, December 5). https://publiclawproject.org.uk/resources/the-tracking-automated-government-register/
[3] Dencik, L. (2022). The datafied welfare state: A perspective from the UK. SpringerLink. https://link.springer.com/chapter/10.1007/978-3-030-96180-0_7
[4] Niklas, J. (2022, October 3). We need to talk about social rights in AI policy. Media@LSE. https://blogs.lse.ac.uk/medialse/2022/10/03/we-need-to-talk-about-social-rights-in-ai-policy/
[5] Saied-Tessier, A. (2024, May). AI in the family justice system. Nuffield Family Justice Observatory. https://www.nuffieldfjo.org.uk/wp-content/uploads/2024/05/NFJO_AI_Briefing_Final.pdf
[6] Booth, R. (2024, December 6). Revealed: Bias found in AI system used to detect UK benefits fraud. The Guardian. https://www.theguardian.com/society/2024/dec/06/revealed-bias-found-in-ai-system-used-to-detect-uk-benefits
[7] McRae, I. (2025, January 14). AI in DWP benefits system could “harm” people, experts fear. Big Issue. https://www.bigissue.com/news/social-justice/ai-dwp-benefits-system-universal-credit/
[8] Koutsounia, A. (2024, October 4). AI could be time-saving for social workers but needs regulation, say sector bodies. Community Care. https://www.communitycare.co.uk/2024/10/04/ai-could-be-time-saving-for-social-workers-but-needs-regulation-say-sector-bodies/