AI governance made clear: UK’s new platform and EU’s compliance path
The UK Government has just announced its AI Assurance Platform, a new "one-stop-shop" for UK businesses to find guidance on identifying and mitigating potential risks posed by AI adoption. This platform is a cornerstone of the UK Government’s broader AI strategy, focused on fostering innovation, promoting safety and establishing transparent governance frameworks. This approach stands in contrast to the EU’s more regulatory-focused AI governance model. This article explores these differing approaches to AI governance and how UK businesses can successfully navigate this dual landscape.
The UK’s AI governance approach: Fostering trustworthy AI
With the introduction of the AI Assurance Platform, British companies will now have access to essential tools and guidance to manage AI risks effectively, fostering a safer AI environment while encouraging innovation. This initiative aims to build public trust in AI and establish the UK as a global leader in AI safety and assurance.
The UK’s platform will offer practical resources for responsible AI use, particularly tailored for small and medium-sized enterprises (SMEs). Businesses will be able to access self-assessment tools that enable them to implement responsible AI management practices. Additionally, the platform will include a public consultation to ensure its tools and services reflect industry needs. It will also aid British companies in aligning with international AI standards, strengthening the UK’s reputation as a hub for AI assurance expertise.
The platform is currently being developed, though the exact release date has not yet been specified.
The EU’s AI Act: A framework for compliance and global impact
While the UK is advancing its AI support, the EU is taking a regulatory approach through the EU Artificial Intelligence Act (“the EU’s AI Act”). The EU’s AI Act adopts a risk-based approach which categorises AI systems by their potential impact – prohibiting certain uses while heavily regulating others deemed high-risk. This allows the EU to tailor regulatory obligations according to the potential impact of the AI technology on individuals and society.
Under the EU’s AI Act, all AI systems will be categorised under one of the following four risk levels (a simple illustrative mapping follows the list):
- Prohibited AI systems: These include AI systems that pose a clear threat to the safety, livelihoods or rights of people. Examples include systems that manipulate individuals’ behaviour through subliminal techniques or exploit vulnerabilities based on age or disability. The use of AI systems by public authorities for social scoring, such as systems that assess and rank individuals’ behaviour or characteristics in order to make broad judgements about them, is also prohibited.
- High-risk AI systems: AI systems with significant potential to affect fundamental rights and safety fall into this category. Examples include AI used in critical sectors like healthcare, finance, law enforcement and education. High-risk AI systems are subject to strict compliance requirements, including rigorous risk management, technical documentation, data governance and human oversight. These systems must also undergo a conformity assessment before deployment, in some cases carried out by an independent third party.
- Limited-risk AI systems: While not heavily regulated, these systems still require transparency measures, especially if they interact directly with humans. For example, chatbots and AI-driven customer service systems will need to inform users that they are interacting with AI.
- Minimal or no-risk AI systems: These encompass most AI applications, such as spam filters or recommendation engines. While there are no specific regulatory requirements for these systems under the EU’s AI Act, businesses are encouraged to adopt best practices.
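For readers who think in code, the four tiers can be captured as a simple taxonomy. The sketch below is purely illustrative: the system names and their assigned tiers are hypothetical assumptions drawn from the examples above, not a legal classification tool, and real classification turns on legal analysis of each system’s intended purpose and context of use.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers, highest to lowest."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical examples based on the categories described above;
# actual classification requires legal analysis of each system.
example_classifications = {
    "social-scoring-system": RiskTier.PROHIBITED,
    "credit-scoring-model": RiskTier.HIGH,         # finance
    "exam-grading-assistant": RiskTier.HIGH,       # education
    "customer-service-chatbot": RiskTier.LIMITED,  # must disclose AI use
    "spam-filter": RiskTier.MINIMAL,
}

for system, tier in example_classifications.items():
    print(f"{system}: {tier.value} risk")
```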
Why the EU’s AI Act is relevant to UK businesses
Whilst the EU’s AI Act does not directly apply in the UK, it will nevertheless affect many businesses here. In particular, UK businesses operating in or serving EU markets must comply with the EU’s AI Act’s requirements. This includes UK companies that directly provide AI solutions to EU clients or UK companies that have EU-based subsidiaries or affiliates. Moreover, UK companies must comply with the EU’s AI Act if their AI system’s outputs are used within the EU, even when the system itself is deployed outside EU borders.
Timelines for compliance with the EU’s AI Act have been established for different risk classifications. While the majority of the obligations will apply to organisations from 2 August 2026, some provisions will apply before and after that milestone. Non-compliance with the EU’s AI Act could lead to significant fines, which are capped at a percentage of global annual turnover in the previous financial year or a fixed amount (whichever is higher), as follows (a short worked example of how the cap applies appears after the list):
- €35 million or 7% of global annual turnover for non-compliance with prohibited AI system rules.
- €15 million or 3% of global annual turnover for non-compliance with other obligations.
- €5 million or 1% of global annual turnover for supplying incorrect, incomplete or misleading information required under the EU’s AI Act.
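To see how the “whichever is higher” cap operates, here is a minimal worked sketch. The fine tiers are the statutory maxima listed above; the turnover figure is a made-up assumption for illustration only.

```python
# Maximum fine tiers under the EU AI Act: the cap is the *higher* of
# a fixed amount (EUR) and a percentage of global annual turnover.
FINE_TIERS = {
    "prohibited_ai_rules": (35_000_000, 0.07),
    "other_obligations": (15_000_000, 0.03),
    "incorrect_information": (5_000_000, 0.01),
}

def max_fine(breach: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a given breach category."""
    fixed, pct = FINE_TIERS[breach]
    return max(fixed, pct * global_annual_turnover_eur)

# Illustrative only: a company with EUR 1bn global annual turnover.
turnover = 1_000_000_000
print(max_fine("prohibited_ai_rules", turnover))    # 70,000,000 (7% > EUR 35m)
print(max_fine("other_obligations", turnover))      # 30,000,000 (3% > EUR 15m)
print(max_fine("incorrect_information", turnover))  # 10,000,000 (1% > EUR 5m)
```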
It is anticipated that by establishing high standards for AI governance, the EU’s AI Act will trigger a "Brussels effect", whereby the EU’s regulations set a precedent that influences global AI policies, including those of the UK Government. As a result, compliance with the EU’s AI Act may benefit even purely UK-focused companies by helping them prepare for future regulatory trends and demonstrate a commitment to ethical AI practices.
Compliance strategies for UK businesses
To navigate this complex regulatory landscape, UK businesses should take a proactive approach to compliance with both the UK’s AI assurance framework and the EU’s AI Act. This should include the following:
- Inventory of AI systems: Businesses should start by identifying and categorising their AI systems according to the risk framework established by the EU’s AI Act. This inventory serves as a baseline for assessing whether their AI systems will be subject to the EU’s AI Act’s requirements (a hypothetical inventory sketch follows this list).
- Assess compliance obligations: UK businesses must assess each AI system’s compliance requirements based on its risk classification and intended use in EU or UK markets.
- Implement transparency measures: For systems that interact with users, businesses must ensure users are clearly notified that they are interacting with AI, in line with both UK and EU standards.
- Engage in the UK’s consultation: Participating in the UK's public consultation will allow businesses to influence the development of AI support resources, ensuring they are tailored to industry needs.
- Plan resources, strategy and processes for compliance: Compliance with the EU’s AI Act may require substantial changes to internal processes, resources and strategic approaches.
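As a concrete starting point for the inventory and assessment steps above, a structured record per AI system can capture the facts that drive scope and obligations. The sketch below is hypothetical: the field names, the flag_obligations helper and the obligation labels are assumptions for illustration, not a compliance tool.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in an AI system inventory (illustrative fields only)."""
    name: str
    purpose: str
    serves_eu_market: bool      # in scope of the EU AI Act if True
    outputs_used_in_eu: bool    # also in scope even if deployed outside the EU
    interacts_with_users: bool  # triggers transparency duties if limited-risk
    risk_tier: str              # from the four-tier taxonomy above
    obligations: list[str] = field(default_factory=list)

def flag_obligations(rec: AISystemRecord) -> AISystemRecord:
    """Attach indicative obligations based on the record's facts."""
    in_eu_scope = rec.serves_eu_market or rec.outputs_used_in_eu
    if in_eu_scope and rec.risk_tier == "high":
        rec.obligations += ["risk management", "technical documentation",
                            "data governance", "human oversight"]
    if in_eu_scope and rec.interacts_with_users:
        rec.obligations.append("disclose AI interaction to users")
    return rec

chatbot = flag_obligations(AISystemRecord(
    name="support-chatbot", purpose="customer service",
    serves_eu_market=True, outputs_used_in_eu=True,
    interacts_with_users=True, risk_tier="limited"))
print(chatbot.obligations)  # ['disclose AI interaction to users']
```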
Conclusion
The UK and EU’s respective approaches to AI governance illustrate the evolving regulatory landscape businesses must navigate. While the EU’s AI Act provides a robust regulatory framework, the UK’s AI Assurance Platform offers complementary support, creating a balanced approach that fosters both innovation and trust in AI technologies. UK businesses that proactively align with these standards can not only mitigate compliance risks but also enhance their reputational standing as leaders in responsible AI deployment.
What can we do to help?
For guidance on navigating these regulatory landscapes, please contact either James Sarjantson on 0113 201 0401 (jsarjantson@lcf.co.uk) or Thomas Taylor on 0113 204 0407 (ttaylor@lcf.co.uk).