AI Regulation in the UK: Following or Leading the Global Trend?

Leon Harness

Updated: Feb 13



As detailed in the AI regulation white paper of August 2023[1] (the ‘white paper’) and the government’s February 2024 response to feedback,[2] the UK government is prioritising a flexible, principles-based framework and sector-specific laws over comprehensive regulation. By contrast, the current global leading AI regulation – the European Union’s Artificial Intelligence Act (‘EU AI Act’) – provides a comprehensive, risk-based framework of horizontal regulation across all 27 member states, with enforcement of the majority of its provisions commencing in August 2026.[3]

 

The UK’s lighter-touch framework seeks to invite innovation through the ‘AI Action Plan’,[4] which aims to leverage AI to improve public services and foster economic growth by evaluating the UK’s infrastructure needs and attracting AI talent. Where the EU is building an ambitious model that seeks to manage AI risks systemically while encouraging innovation within set parameters, the UK favours a hands-off approach intended to avoid stifling innovative sectoral growth.

 

This article examines whether the UK’s planned AI regulatory framework can carve out a unique leadership role in the development of AI regulation, or whether it is destined to follow larger global leaders in the field. To do this, I will offer a comparative commentary on the EU AI Act and the framework the UK plans to implement.

 

 

UK Framework Principles

The white paper establishes five cross-sectoral principles for existing regulators to interpret and apply within their respective domains:

 

i. Principle 1 – AI systems function in a robust, secure, and safe way.

·       This requires providing guidance on what good cybersecurity looks like, applying risk management frameworks, and being aware of technical standards and risk treatment measures.

ii. Principle 2 – AI systems are appropriately transparent and explainable.

·       Regulators will need to set expectations relating to: (a) the nature and purpose of the AI systems in question; (b) the data being used; (c) the training data used; (d) the logic and process used; and (e) accountability for the AI system and any specific outcomes.

iii. Principle 3 – AI systems are fair. They should not discriminate unfairly, undermine the legal rights of individuals or organisations, or create unfair market outcomes.

·       Regulators will need to articulate what ‘fair’ means with reference to their specific sectors and implement appropriate governance requirements.

iv. Principle 4 – Regulators should ensure there are governance measures in place, with clear lines of accountability across the AI life cycle.

·       Regulators will determine who is accountable for compliance with existing regulation and the principles, and provide guidance on how to demonstrate accountability.

v. Principle 5 – Impacted third parties and actors in the AI life cycle are able to contest an AI decision or outcome that is harmful or creates a material risk of harm, and to access suitable redress.

·       Regulators will need to consider creating or updating guidance that identifies the formal routes of redress available to those affected by AI harms.[5]

 

Differences between the frameworks


The EU’s risk-based approach ensures that regulation is proportional to the potential harm caused by AI systems. For example, high-risk AI technologies, such as systems used in critical infrastructure that could put citizens’ lives at risk, or law enforcement systems, are subject to strict obligations before being allowed to enter the market.[6] By comparison, the UK’s framework avoids categorising risks formally, leaving it to sectoral regulators to develop appropriate guidance and codes of practice while avoiding AI-specific legislation. This is despite a growing consensus about the potential harms and risks that can arise from insufficiently regulated AI.

 

Enforcement and accountability are a key difference. The EU AI Act imposes administrative fines of up to €35 million or 7% of worldwide annual turnover, whichever is higher,[7] for non-compliance. The centralised European AI Office oversees the Act’s enforcement,[8] ensuring uniform application across EU member states. In contrast, the UK lacks a centralised structure to enforce its framework. Instead, the government plans to delegate enforcement responsibility to existing regulators, such as the Financial Conduct Authority, the Information Commissioner’s Office, and the Competition and Markets Authority.[9] This approach allows for context-specific governance, with each regulator developing policies relevant to its own domain, and offers greater flexibility than the EU AI Act by ensuring that regulation is proportionate to sectoral risks.

However, this flexibility comes at the cost of risking a fragmented regulatory landscape. General-purpose AI systems, like ChatGPT, operate across multiple sectors, which makes it difficult for sectoral regulators to coordinate their oversight. In an attempt to mitigate this, the white paper sets out cross-sectoral principles[10] and core characteristics of AI (adaptivity and autonomy)[11] to encourage cross-sector coordination of regulation. Because these principles are non-statutory and carry no enforceable legal obligations, the UK’s principles-led model will struggle to manage the risks of more complicated AI systems that operate across multiple sectors.[12]

 

Can the UK lead in AI governance?


The UK can be viewed as striking a balance between the safety-focused, heavily regulated approach of the EU and the less regulated approach of the US under the Trump administration, where individual states continue to adopt sector-specific AI regulation.[13] In addition, the UK AI Safety Institute is positioning itself as a global leader in research on the most important risks that AI presents to society.[14] Despite this positioning, the UK government’s own post-Brexit deregulatory agenda presents challenges to AI governance. The draft Data Protection and Digital Information Bill aims to replace the retained EU law derived from the General Data Protection Regulation (GDPR)[15] with a more lenient model. It would introduce a ‘growth and innovation’ duty and limit the independence of the Information Commissioner’s Office by requiring government approval for its codes of practice, raising concerns about politicisation and a weakened capacity to safeguard consumer data.[16]

 

In addition, further concern stems from the dwindling independence of oversight bodies like the Responsible Technology Adoption Unit (formerly the Centre for Data Ethics and Innovation (CDEI)),[17] which now fosters working partnerships[18] with the private sector rather than fulfilling its original purpose of advising on data ethics and AI governance. This new role reduces its ability to hold the government accountable and risks private actors overriding public-interest concerns.[19]

 

The EU AI Act imposes strict rules on high-risk systems in critical sectors.[20] While the UK is no longer bound by EU law post-Brexit, it must maintain its data adequacy arrangements with the EU to enable the free flow of data between institutions; if it fails to do so, UK-based companies would face barriers to doing business in the EU.[21] Despite the UK planning a separate regulatory framework for AI systems, the EU’s framework therefore creates a strong incentive to align, as failing to do so risks exclusion from the EU’s internal market, the largest in the world, and leaves the UK at a competitive disadvantage. Moreover, like the GDPR before it, the AI Act is a global first in attempting to regulate AI and could set global standards for multinational companies to adhere to,[22] since non-compliance would leave them unable to access the EU’s internal market. Companies operating in both the UK and the EU will therefore likely opt to follow the stricter EU regulation to avoid regulatory duplication.[23]

 

As AI continues to gain traction, so will other nations’ attempts at regulation. We can already see countries looking to legislate their own sets of rules, which will further fragment the global digital landscape.[24] One way to avoid this is through cooperation on AI governance, such as the EU-US Trade and Technology Council – a partnership committed to cooperation on new technologies based on shared democratic values and respect for human rights.[25] Having left the EU, the UK is no longer involved in these discussions and is limited in its ability to influence AI governance on an international scale, lacking both the cross-jurisdictional reach of the EU and the international influence of the US.

 

Conclusion


The UK’s AI regulatory strategy positions it as a potential innovation hub, but its success hinges on balancing growth with regulatory safeguards and unifying enforcement oversight. The sector-led, principles-based approach provides flexibility but risks creating gaps in regulation and undermining public trust in the framework, given the lack of bodies with independent oversight of the government’s implementation of policy. To avoid this, the UK could place its AI principles on a statutory footing and improve regulators’ ability to coordinate cross-sector enforcement.

 

On the international stage, the UK must accept that it lacks the cross-jurisdictional influence that other international actors possess. Its deregulatory ambitions may well foster innovation in the UK, but this will come at the expense of regulatory uncertainty and reduced global competitiveness, as businesses may favour aligning with established regulatory frameworks, like the EU AI Act, to ensure market access and compliance across multiple jurisdictions.


Bibliography


[3] White & Case, ‘Long awaited EU AI Act becomes law after publication in the EU’s Official Journal’ (July 2024), https://www.whitecase.com/insight-alert/long-awaited-eu-ai-act-becomes-law-after-publication-eus-official-journal

[4] Department for Science, Innovation and Technology, ‘AI expert to lead Action Plan to ensure UK reaps the benefits of Artificial Intelligence’, https://www.gov.uk/government/news/ai-expert-to-lead-action-plan-to-ensure-uk-reaps-the-benefits-of-artificial-intelligence

[5] White & Case, ‘AI Watch: Global regulatory tracker – United Kingdom’, https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-united-kingdom

[9] n 5, under the ‘Status of the AI Regulations’ section of the article.

[10] n 1.

[12] Tech Policy Press, ‘The AI Gambit: Will the UK Lead or Follow?’, https://www.techpolicy.press/the-ai-gambit-will-the-uk-lead-or-follow-/

[14] Ibid.

[16] n 12, under the ‘Deregulation and the Weakness of Oversight’ section of the article.

[17] GOV.UK, ‘Centre for Data Ethics and Innovation is now part of Department for Science, Innovation and Technology’, https://www.gov.uk/government/organisations/centre-for-data-ethics-and-innovation

[19] n 12, under the ‘Deregulation and the Weakness of Oversight’ section of the article.

[20] n 6.

[21] n 12, under the ‘International Constraints: The Looming Shadow of the European Union’ section of the article.

[22] Ibid.

[23] Ibid.
