How AI deepfakes are becoming the new frontier of pension fraud

Eddie Harrison asks how trustees can defend against increasingly sophisticated AI fraud

Eddie Harrison: The threat is here and it is accelerating

According to Action Fraud, losses from pension fraud totalled £17.5m in 2024, with an average loss of approximately £34,000 per individual. That may seem like a lot, but very soon those figures may look like chicken feed.

That's because where fraudsters once relied on basic attacks – such as email hacking, phishing, and duplicate accounts – generative AI is moving the goalposts. Fraudsters can now access sophisticated, low-cost deepfake technology including headshots, real-time video face filters, voice changers, and AI-crafted messages to bypass fraud controls.

Their deepfakes can replicate the voice of a family member, the face of a financial adviser, or a recently deceased pensioner with near-perfect fidelity.

For trustees of UK occupational pension funds, deepfakes are both a financial risk and a governance challenge. Schemes need to ask whether their fraud controls can defend against a threat that didn't even exist when those controls were designed.

KYC spoofs

The fiduciary duty of schemes is to pay the right benefit to the right person. Firms use checks and balances to ensure they are paying the right benefits, to the right bank accounts, and only to people who are still entitled to them.

Until now, schemes have relied upon tangible factors to prevent frauds, such as a face on a video call, a voice on a telephone, or a signed certificate. Deepfake technologies risk invalidating these approaches outright.

Deepfake exposure is most pronounced at the highest-risk events: pension transfers, benefit crystallisation events, and when nominated beneficiaries are added or amended. While many of the checks that take place at these events have become increasingly automated, deepfake tools are well suited to bypassing precisely the controls put in place to mitigate these higher-risk activities.

In a world where identity can be manufactured, verification has become a continuous, real-time problem rather than a simple, one-off check.

Proof-of-life vulnerabilities

Proof-of-life checks present a related risk. Many schemes rely on periodic confirmation processes designed for a simpler threat: catching the non-reporting of a member's death. Forms, emails, and video check-ins reflect that scope and were never built to withstand deliberate impersonation.

The problem is compounded by the over one million UK pensioners living abroad, for whom proof-of-life is already harder to verify and official records more difficult to cross-reference.

The financial consequences are already visible before deepfakes are factored in.

Overpayments linked to undetected deaths represent a substantial drain on scheme assets – for the state pension as well as across defined benefit (DB) and defined contribution (DC) schemes. Introduce a technology that makes fraudulent proof-of-life confirmation easier to execute and harder to detect, and that trajectory shoots up.

For trustees – particularly those overseeing defined contribution schemes, where individual member outcomes are directly at stake – the fiduciary dimension of this risk is equally concerning.

The key consideration is whether trustees can demonstrate, credibly and with evidence, that the verification standards they applied were adequate for the threat environment.

Where a scheme makes a payment on the basis of a verification process compromised by a deepfake, the question of whether trustees exercised appropriate diligence will fall under the spotlight.

That scrutiny will be applied against TPR's General Code of Practice, in force since March 2024, which requires trustees to maintain an Effective System of Governance and, for schemes with 100 or more members, complete a formal Own Risk Assessment documenting how material risks are identified and mitigated.

A fraud risk register with no reference to AI-enabled identity threats will be difficult to defend. TPR has already flagged impersonation fraud as a priority concern. The regulatory infrastructure for accountability is in place. What's absent is explicit guidance naming deepfakes as a specific threat within it. This will likely follow soon.

What trustees can do now

For trustees, the priority is to move from awareness to action. That means five things:

  1. Map your proof-of-life process end-to-end: Validation is no longer a one-time event. Every time you communicate with a member, take a video call, or send out comms, that touchpoint creates data which can be used to detect potential fraud. Treating each touchpoint as a standalone event means you cannot detect changes in IP address or phone number, or spot the subtle inconsistencies that different deepfakes introduce.
  2. Risk assess your customer processes: Organisations must ask: what activity are we performing for the customer, and what risks does it carry? A pension withdrawal of a large sum or a change of payout bank details is high risk; an enquiry about payment amounts that makes no changes is lower risk. Once activities are mapped end-to-end, controls can be put in place that reflect those risks.
  3. Audit every point where human interaction is treated as proof of identity: Map the moments where a human judgement call is the primary or sole verification control. These are your exposure points, and they must be known before they can be addressed.
  4. Stress-test your processes against spoofing scenarios: Work with administrators to run structured exercises simulating deepfake-enabled fraud attempts and assess whether your current frameworks would detect them.
  5. Increase oversight of third-party administrators: Review administration agreements to establish whether minimum verification standards are explicitly specified and adequate for the current threat environment. Monitor administrators' fraud rates to ensure they remain at an acceptable level, and take action against those that repeatedly fail to meet the standards.
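To make steps 1 and 2 concrete, the sketch below shows how an administrator might tier member-initiated activities by risk, log touchpoint signals across interactions, and escalate verification when signals change. Every activity name, tier, and threshold here is a hypothetical assumption for illustration, not a description of any existing scheme's controls.

```python
# Illustrative sketch: risk-tiering activities and flagging touchpoint
# anomalies across member interactions. All names, tiers, and thresholds
# are hypothetical assumptions, not an existing product or standard.

HIGH_RISK = {"transfer_out", "change_bank_details", "large_withdrawal"}
LOW_RISK = {"balance_enquiry", "statement_request"}

def risk_tier(activity: str) -> str:
    """Map a member-initiated activity to a risk tier."""
    if activity in HIGH_RISK:
        return "high"
    if activity in LOW_RISK:
        return "low"
    return "medium"  # unknown activities default to extra scrutiny

def touchpoint_flags(history: list[dict], current: dict) -> list[str]:
    """Compare the current touchpoint against the member's last contact.

    Each touchpoint is a dict of signals captured at contact time,
    e.g. {"ip": "...", "phone": "...", "channel": "video"}.
    """
    flags = []
    if history:
        last = history[-1]
        if current.get("ip") != last.get("ip"):
            flags.append("ip_changed")
        if current.get("phone") != last.get("phone"):
            flags.append("phone_changed")
    return flags

def required_controls(activity: str, flags: list[str]) -> list[str]:
    """Select verification controls proportionate to tier and anomalies."""
    tier = risk_tier(activity)
    controls = ["knowledge_check"]         # baseline for every contact
    if tier != "low" or flags:
        controls.append("liveness_check")  # escalate on risk or anomaly
    if tier == "high":
        controls.append("biometric_match")
        controls.append("manual_review")
    return controls
```

The design point is that a routine balance enquiry from a familiar device needs only the baseline check, while a bank-detail change, or any activity arriving from a changed IP or phone number, triggers progressively stronger verification.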

Key defence enablers

Executing this plan requires an uplift in the technological capabilities underpinning verification.

A credible defensive posture requires interlocking layers. AI-powered deepfake detection provides a first line of defence at the point of human interaction, identifying the artefacts synthetic media leaves in video and audio which a human reviewer would not catch. This must sit within an automated verification framework incorporating passive liveness checks (confirming physical presence rather than a replay or AI-generated image) alongside biometric identity assurance that ties confirmed presence to a verified identity record.
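A minimal sketch of how those interlocking layers might compose, assuming each layer is a placeholder for a real detection service rather than a working detector: every layer must pass independently and the pipeline fails closed, so a deepfake that fools one layer still has to defeat the others.

```python
# Illustrative sketch of layered identity verification. Each check is a
# stub standing in for a real vendor service; the point is the
# composition: all layers must pass before a payment instruction proceeds.

from dataclasses import dataclass
from typing import Callable

@dataclass
class LayerResult:
    name: str
    passed: bool

def verify_identity(session: dict,
                    layers: list[Callable[[dict], LayerResult]]) -> tuple[bool, list[LayerResult]]:
    """Run every verification layer in order; fail closed on first failure."""
    results: list[LayerResult] = []
    for layer in layers:
        result = layer(session)
        results.append(result)
        if not result.passed:
            return False, results  # stop immediately: fail closed
    return True, results

# Placeholder layers -- a real deployment would call detection APIs here.
def deepfake_artefact_check(session: dict) -> LayerResult:
    return LayerResult("deepfake_detection",
                       not session.get("synthetic_artefacts", False))

def passive_liveness_check(session: dict) -> LayerResult:
    return LayerResult("liveness", session.get("live_presence", False))

def biometric_match_check(session: dict) -> LayerResult:
    return LayerResult("biometric_match",
                       session.get("biometric_score", 0.0) >= 0.9)

LAYERS = [deepfake_artefact_check, passive_liveness_check, biometric_match_check]
```

Ordering the cheap artefact check first means most spoofed sessions are rejected before the more expensive biometric comparison runs.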

For schemes with overseas members, the challenge extends further: an effective solution must validate official documents across a wide range of international jurisdictions and integrate with national and cross-border databases where data-sharing agreements permit.

Treating identity verification as a live governance question is also key. That means ensuring fraud-risk registers reflect AI-enabled threats specifically. It also means reviewing scheme rules and administration agreements to confirm that verification decisions are made on a basis adequate for the environment schemes now operate in. And it means accepting that the standard against which trustee conduct will be judged is not what was common practice at the time, but what was available, known, and reasonably implementable.

The time to act is now

In 2019, the CEO of a UK energy firm was deceived by an AI-generated voice call, in a voice indistinguishable from that of his own boss, into transferring €220,000 to a fraudster.

That attack required significant technical sophistication. Today, the same capability is available as a commercial subscription service for less than £30 a month and requires no specialist knowledge.

The threat is here, it is accelerating, and UK occupational pension schemes, with their large, ageing, geographically dispersed memberships and governance frameworks built for a different era, are squarely in its sights.

Trustees who act now will better protect their members and set the standard for what good governance looks like in the AI era.

Eddie Harrison is chief growth officer at Navro
