Lack of AI governance could leave pension trustees exposed

Elisabeth Storey and Sheila Pancholi say trustees should take all possible steps to ensure members are protected


As the speed of AI adoption escalates, regulation is struggling to keep up, leaving pensions trustees potentially exposed to AI risks and litigation.

Recent data from the Bank of England highlights that 75% of financial firms have already started using AI, and a further 10% plan to do so in the near future. Insurers are now starting to offer cover for AI-related risks to protect businesses from losses if tools such as chatbots go awry.

UK regulation in this area is currently lacking, so what do pensions trustees need to know to protect members, and who is legally liable if something goes wrong? RSM UK explores what can be done to minimise the risks for trustees and pensions savers.

The rise of AI in the pensions industry

AI tools are more widespread than ever, and the pensions industry is no exception. The critical difference is that pension scheme trustees are one step removed from the testing and implementation of those tools, as they often rely on third parties for the key functions of their pension scheme operations.

As AI tools become more widely used for pension scheme governance, administration, and member communications, trustees need to understand which third parties are using AI, what they are using, and how it impacts their members.

If trustees are not asking questions, and do not know which of their advisers are using AI or how member data is being used and stored, how can they ensure that data is protected at all stages, including when it is shared with third parties?

The regulation of AI

Unfortunately, regulation of AI use is limited at present, leaving pension scheme trustees exposed to the risks of AI misuse and abuse. The UK currently relies on the General Data Protection Regulation (GDPR) and the Equality Act.

The GDPR was last updated in 2021 post-Brexit, and the Equality Act dates back to 2010, when AI use was rare. The world of AI has moved on at such a pace that this regulation is now woefully lacking. A 'wild west' of AI assurance firms has also appeared in recent years, some of which are also AI developers, raising concerns about their independence. As a result, the British Standards Institution (BSI) is imminently introducing new standards to support the AI audit market.

The Prudential Regulation Authority (PRA) and Financial Conduct Authority (FCA) don't currently offer any specific rules on AI use, although the FCA is currently developing an AI live testing service, due to go live in September, which will support firms in deploying AI safely and responsibly.

European Union lawmakers signed the Artificial Intelligence Act (EU AI Act) in June 2024, and while this regulation doesn't apply to UK-based scheme members, its principles can serve as a more up-to-date and helpful guide.

Trustees with pension scheme members resident in the EU should note, however, that the EU AI Act does apply to them if an AI system impacts people located in the EU. Trustees may be at risk of non-compliance if they don't know where their members are based, or what AI tools are being used on the data of those non-UK residents. The EU AI Act will be fully in force by summer next year, and it's highly likely that the UK government will follow with a similar act.

TPR guidance

The Pensions Regulator (TPR) has recognised the importance of this issue, publishing its Digital, Data and Technology (DDaT) Strategy in October 2024.

This provided a good starting point and was followed by the publication of TPR's Data Strategy in March 2025. TPR is also establishing a Pensions Data and Digital Working Group, which aims to drive better collaboration across the sector around the use of digital technology and deliver improved saver outcomes.

Currently, though, neither the Trustee Toolkit nor the General Code covers AI. This is worrying, as trustees could still be held liable for any consequences their members suffer, such as loss of funds through fraud or identity theft.

Any AI failure resulting in a data breach could see multi-million-pound fines imposed by the Information Commissioner's Office (ICO), and the reputational damage to the employer's brand could be significant, depending on the nature of the failure and its impact on scheme members.

What should trustees be doing?

Many trustees use third-party organisations to provide essential services to their pension scheme. Part of the General Code's Effective System of Governance (ESOG) requirements is for trustees to demonstrate oversight of those third parties. For many, this means obtaining, reviewing and challenging the internal controls included in those third parties' Audit and Assurance Faculty (AAF) reports. But the latest AAF requirements were published in 2020, and there is no specific testing of internal controls relating to AI risks.

Trustees should therefore extend their questioning of third parties, asking for documentation of how and where AI is being used for their scheme, what risk assessments have been carried out, and how ongoing use of AI is being monitored.

Trustees should also have an AI strategy or policy in place to govern and monitor the use of AI.

In addition, they should consider including AI on their risk register and ensure that appropriate controls are in place to govern its use across all areas of scheme operations, including by third parties.

If the trustee board does not itself have the skillset to identify and mitigate AI risks, an external AI governance audit can help trustees get to grips with the monitoring and AAF reporting requirements relating to those risks, including those arising from third parties.

Who is ultimately legally responsible if AI goes rogue is currently unclear, so we'd recommend trustees take all possible steps to ensure their members are protected and AI is used responsibly.

Elisabeth Storey is pensions audit director and Sheila Pancholi is national lead for technology and cyber risk assurance at RSM UK
