
Evolving plans for AI regulation

Reconciling frameworks with the competitiveness agenda

April 2025

The EU's prescriptive AI Act has entered into force, with rules for generative AI applying from August 2025. The UK is instead proceeding with its flexible, principles-based approach, requiring no new regulatory frameworks in the short term. However, in the face of geopolitical developments, both jurisdictions are simultaneously reconsidering the balance they strike between competitiveness and regulatory safeguards.

UK developments

The BoE/FCA's latest AI survey has shown that 75% of firms are already using some form of AI in their operations, up from 53% in 2022. This increase is not being driven solely by back-office efficiency gains but also includes significant use cases such as credit risk assessment (an activity designated as 'high risk' under the EU AI Act), algorithmic trading and capital management.

To investigate this trend, the Treasury Committee has launched a Call for Evidence on the impacts of the increased use of AI in banking, pensions and other areas of financial services. And the Bank of England's Financial Policy Committee has published an assessment of AI's impact on financial stability.

Nonetheless, for now, the UK government is continuing with its principles-based approach (see more in our previous articles here, here and here). The FCA and the Bank of England have determined that their existing toolkits remain appropriate to address the risks posed by AI, as these risks are 'not unique'. Being predominantly outcomes-focused, these toolkits provide sufficient agility to adjust to unforeseen technology and market changes.

This approach is being reinforced by the wider push for growth and competitiveness (see more in our article here). With AI seen as a key "engine for growth", the government's focus is pivoting away from safeguards and towards innovation (while still, of course, accounting for national security).

For example, at the Paris AI Action Summit in February, the UK and US were the only two countries to opt out of signing the non-binding international declaration on 'inclusive and sustainable' AI.

The government has renamed its AI Safety Institute to become the AI Security Institute. The announcement of this renaming promised that the rebranded institute "will not focus on bias or freedom of speech", but will instead prioritise unleashing economic growth.

The FCA continues to encourage innovators and wider stakeholders to engage with its initiatives (including its recently launched AI Lab).

And the government has issued a response to the AI Opportunities Action Plan, endorsing almost all of the original 50 recommendations, including setting up AI "growth zones", creating a "sovereign AI unit" and requiring regulators to report publicly each year on their activities to promote AI innovation. The response also confirmed that there will be a consultation on legislation to protect against risks associated with the "next generation of the most powerful models".

EU developments

In the EU, the AI Act has entered into force, with rules for generative models applying from August 2025. The Act classifies AI applications into four risk tiers (unacceptable, high, limited and minimal risk), introducing the most stringent requirements for high-risk use cases.
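For firms mapping their own use cases against this classification, the sketch below (in Python, purely illustrative and not legal advice) shows how the Act's four risk tiers might be recorded in an internal AI inventory. The tier names follow the Act; the inventory structure and the non-credit example classifications are assumptions for illustration.

    # Illustrative sketch: mapping FS AI use cases to the EU AI Act's risk tiers.
    # Tier names follow the Act; the inventory structure is illustrative only.
    from dataclasses import dataclass
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited practice"   # banned outright under the Act
        HIGH = "high risk"                     # strict requirements apply
        LIMITED = "limited risk"               # transparency duties
        MINIMAL = "minimal risk"               # no new obligations

    @dataclass
    class AIUseCase:
        name: str
        tier: RiskTier

    # Creditworthiness assessment is designated high risk under the Act;
    # the other classifications here are indicative assumptions.
    inventory = [
        AIUseCase("creditworthiness assessment", RiskTier.HIGH),
        AIUseCase("customer-facing chatbot", RiskTier.LIMITED),
        AIUseCase("internal document search", RiskTier.MINIMAL),
    ]

    for use_case in inventory:
        print(f"{use_case.name}: {use_case.tier.value}")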

This is being complemented by additional lower-level guidance for financial services from the European Supervisory Authorities (ESAs), demonstrating how existing sectoral legislation should be interpreted in the context of AI (e.g., guidance from ESMA on ensuring compliance with MiFID II, and from EIOPA on AI risk management).

Member States have until August 2025 to designate the national competent authorities that will oversee application of the Act's rules within their jurisdictions and carry out surveillance. As the capitals are expected to select different types of authority (e.g., data protection, telecommunications, or bespoke AI bodies), certain challenges are expected. In particular, these authorities will need to find a 'common language' and avoid regulatory fragmentation.

In advance of full applicability of the Act (from August 2026), some Member States (e.g., Spain, Italy) have chosen to implement their own national rulebooks. These rulebooks will need to be monitored and updated to reflect any future amendments to the Act.

As the first AI law adopted by a major jurisdiction, the AI Act represented a major regulatory milestone. However, its prescriptive nature arguably reduces its ability to remain agile in the face of fast-moving technology. Indeed, during negotiations, the underlying risk classification system had to be amended to account for the emergence of general-purpose AI. And, as in the UK, the EU is having to reconcile its approach with increasing calls for international competitiveness.

The AI Code of Practice (COP) for general-purpose AI complements the Act and aims to provide more detailed guidance to help companies adhere to ethical standards, even for systems that are not considered high risk. If successful, the European Commission can give the COP legal standing, allowing providers to self-certify conformity as part of their compliance. Due to be completed in May, the COP has already been significantly watered down (or "simplified") during drafting, mostly in response to industry concerns and the broader competitiveness agenda. The third draft introduced much greater flexibility on copyright requirements, along with further adjustments to high-risk safety and security measures and to transparency reporting obligations.

The EU has also announced a €200 billion InvestAI initiative, signalling an increasing reliance on private capital to fuel growth in this area. These sources of private funding could also push policymakers to reduce the compliance burden.

International developments

Although individual jurisdictions like the UK and EU are being influenced by the drive for competitiveness, the international standard setters continue to focus on risks (and mitigants).

Standard setters have flagged concerns around herding and concentration risk in capital markets, especially if trading strategies become largely derived from open-source models. As a result, they urge national regulators to provide guidance on model risk management and to emphasise stress testing.

Others have echoed these concerns, identifying concentration, third-party dependencies and data considerations among the most commonly cited AI risks. These risks become more concerning when paired with the trend of increasing use of AI to support decision-making.

What this means for firms

Despite any movement towards regulatory "simplification", firms still need to ensure their risk and control frameworks properly account for AI use.

Those firms with a footprint in the EU must begin navigating the AI Act as a baseline, either building new governance and control frameworks or uplifting existing ones.

UK firms currently have no prescriptive rulebook to comply with. However, in many ways their task is more difficult, as the onus is on them to work out how to manage this rapidly evolving technology.

How KPMG in the UK can help

KPMG in the UK has experience of advising businesses on integrating new technology into their operations, including developing AI integration and adoption plans. Our technology teams can provide expertise and build out test cases, while our risk and legal teams can support with designing and implementing control frameworks.

If you have any questions or would like to discuss any matters concerning AI, please get in touch.

Our insights

Regulatory Insights

Providing pragmatic and insightful intelligence on regulatory developments.

Digital Finance

The digitalisation of the financial sector continues.

Our people

Kate Dawson

Wholesale Conduct & Capital Markets, EMA FS Regulatory Insight Centre

KPMG in the UK

Bronwyn Allan

Manager, Regulatory Insight Centre

KPMG in the UK


Connect with us

KPMG combines our multi-disciplinary approach with deep, practical industry knowledge to help clients meet challenges and respond to opportunities. Connect with our team to start the conversation.
