The Observer published a piece back in March on the dire state of member data in the Teachers’ Pension Scheme – an all-too-familiar issue across the UK pensions landscape. I submitted a letter in response. It wasn’t published, but the point still stands – and is arguably more urgent now than ever. So I’m sharing it here.
The technology exists. The tools exist. What’s missing is the urgency.
It’s 2025 – accurate data should be the baseline, not the exception.

Read the original article on the Guardian’s website.

An enterprise-grade fraud stack is not a product. It is a latency-constrained decisioning system in which multiple layers – data collection, identity validation, enrichment, scoring, and decisioning – operate as a single flow. In most transaction environments, that entire loop runs in under 300 milliseconds, and only marginally longer for onboarding.
The challenge is not assembling the stack. Most institutions already have the core components in place, often across multiple vendors and internal systems. The challenge is understanding how those components interact in practice – and where the system produces decisions that appear well-supported, but are not.
A fraud decision is not generated by a single model or rule. It is the result of a sequence of stages, each contributing a different type of signal or constraint.
At a high level, the system collects observable signals, validates identity claims, enriches those signals with external data, applies probabilistic scoring, enforces deterministic rules, and aggregates all outputs into a final decision. Cases that fall outside clear thresholds are escalated, and outcomes are fed back into the system to continuously refine performance.
This flow is consistent across financial institutions, even where implementation details differ. What varies is the relative strength of each layer, and the degree to which each one contributes meaningful signal to the final decision.
In practice, this decisioning flow can be broken down into eight functional layers:
1. Signal Collection
The system captures all observable inputs at the point of interaction, including device fingerprinting, IP intelligence, behavioral biometrics, and identity data. These signals form the raw input for all downstream analysis.
2. Identity Verification (IDV)
Identity attributes are validated against trusted sources such as credit bureau headers, SSA records, and sanctions lists. This establishes whether the identity exists and meets regulatory requirements.
3. Data Enrichment
External data sources are used to expand the identity profile. This includes email intelligence, phone intelligence, address validation, and consortium-based signals that provide additional context beyond the initial claim.
4. Risk Scoring
Machine learning models transform raw and enriched signals into probabilistic risk scores. These models typically target specific fraud types, including application fraud, synthetic identity fraud, and account takeover.
5. Rules Engine
Deterministic rules enforce policy and known fraud patterns. These include hard blocks (e.g., sanctions matches), velocity thresholds, and mismatch conditions that cannot be fully captured by models.
6. Orchestration & Decisioning
All signals, model outputs, and rule evaluations are aggregated into a final decision – approve, review, or decline – through a centralized decisioning layer.
7. Step-Up & Case Management
Cases that fall into intermediate risk bands are escalated through additional verification (e.g., biometric checks, OTP) or routed to human investigation workflows.
8. Feedback & Model Governance
Confirmed fraud outcomes, false positives, and analyst decisions are fed back into the system to retrain models, refine rules, and monitor performance over time.
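Taken together, layers 4 through 7 reduce to a small aggregation step: deterministic rules act first, model scores are compared against thresholds, and anything in between escalates. The sketch below illustrates that shape in Python. The threshold values, score names, and rule labels are hypothetical, and a real orchestration layer weighs far more inputs:

```python
from dataclasses import dataclass, field

# Hypothetical thresholds for illustration only; real values are tuned
# per institution against loss and conversion targets.
APPROVE_BELOW = 0.2   # worst model score at or below this -> approve
DECLINE_ABOVE = 0.8   # worst model score at or above this -> decline

@dataclass
class Assessment:
    scores: dict                 # fraud type -> probabilistic score in [0, 1]
    hard_blocks: list            # deterministic rule hits, e.g. ["sanctions_match"]
    reasons: list = field(default_factory=list)

def decide(a: Assessment) -> str:
    """Aggregate rule hits and model scores into approve / review / decline."""
    # Layer 5: deterministic rules override everything else.
    if a.hard_blocks:
        a.reasons.extend(a.hard_blocks)
        return "decline"
    # Layer 6: combine per-fraud-type scores; here the worst score wins.
    worst = max(a.scores.values())
    if worst >= DECLINE_ABOVE:
        return "decline"
    if worst <= APPROVE_BELOW:
        return "approve"
    # Layer 7: the intermediate band escalates to step-up or case management.
    return "review"

print(decide(Assessment(scores={"ato": 0.1, "synthetic": 0.05}, hard_blocks=[])))  # approve
print(decide(Assessment(scores={"ato": 0.5}, hard_blocks=[])))                     # review
print(decide(Assessment(scores={"ato": 0.1}, hard_blocks=["sanctions_match"])))    # decline
```

The worst-case-score aggregation is one of the simplest possible policies; weighted combinations or a meta-model over the per-type scores are equally common choices.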
This architecture is broadly consistent across the industry. The presence of these layers, however, does not guarantee effective decisioning.
The following simplified view highlights how each layer contributes to the final decision, and where its limitations typically emerge:

This view is intentionally reductive. Its purpose is not to describe the system exhaustively, but to make visible where signal strength and decision confidence can diverge.
Failures rarely occur because a layer is absent. They occur when a layer produces an output that appears sufficient, but lacks underlying depth.
An identity may pass bureau and SSA validation, present no device or velocity risk, and return acceptable enrichment signals. Yet the identity may still lack coherence across time – no consistent footprint, no reinforcing signals, and no evidence of persistence.
This is the central gap.
Most stacks are effective at confirming that an identity exists. Many can confirm that a user is physically present. Far fewer can determine whether the identity behaves like a reliable individual over time.
These limitations are not purely technical. They are structural.
Latency constraints limit the ability to incorporate deeper or slower data sources. Scale requires reliance on generalized models rather than case-specific analysis. Cost and conversion pressures reduce tolerance for additional friction or enrichment calls.
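The latency constraint shapes the architecture directly: an enrichment source that cannot return within the decision budget simply never contributes signal. A minimal Python sketch of that pattern, using simulated vendor calls and an assumed ~300 ms budget (the function names and sleep times are invented for illustration):

```python
import concurrent.futures as cf
import time

# Hypothetical enrichment calls; each simulates an external vendor lookup.
def email_intel(claim):
    time.sleep(0.05)
    return {"email_risk": "low"}

def phone_intel(claim):
    time.sleep(0.05)
    return {"phone_risk": "low"}

def slow_source(claim):
    time.sleep(1.0)          # deeper data, but far outside the budget
    return {"deep_history": "rich"}

BUDGET_S = 0.3               # overall decision budget, roughly 300 ms

def enrich(claim: dict) -> dict:
    """Fan out enrichment calls and keep only what returns within budget."""
    pool = cf.ThreadPoolExecutor()
    futures = [pool.submit(f, claim) for f in (email_intel, phone_intel, slow_source)]
    done, not_done = cf.wait(futures, timeout=BUDGET_S)
    signals = {}
    for fut in done:
        signals.update(fut.result())
    # Slower sources are abandoned: the decision proceeds without them.
    pool.shutdown(wait=False, cancel_futures=True)
    return signals

print(enrich({"email": "a@example.com"}))
```

In this run the fast email and phone lookups land inside the budget while the slow source is dropped, which is exactly the structural bias the paragraph above describes: the stack sees what is fast, not necessarily what is deep.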
As a result, systems tend to emphasize two things: confirming that an identity exists, and confirming that a real person is present at the moment of interaction.
Both are necessary. Neither is sufficient to fully resolve identity risk.
The “perfect” fraud stack is a myth. In practice, every stack reflects a set of trade-offs – between latency, cost, scale, and risk tolerance. Different institutions prioritize different parts of the system:

Understanding the structure of a fraud stack is necessary, but not sufficient. The more important task is evaluating how the stack behaves under real conditions.
Key questions include:
Fraud does not typically exploit missing components. It exploits the assumptions created by partial signal coverage.
This report provides a structural view of the modern fraud stack. In the accompanying evaluation guide, we extend this framework to:
Follow us to be notified when the full evaluation guide is released.

A recent data review identified deceased members still recorded as active – including deaths dating back to 2002.

A recent pension data cleanse for a large UK industrial defined benefit scheme identified that approximately 2% of members were deceased, including several individuals whose deaths dated back more than twenty years.
Two members recorded as active in the scheme records were found to have died in 2002.
For large defined benefit schemes, discrepancies of this scale can represent a material number of member records requiring validation before insurer pricing can proceed.
No administrative exception had been raised. The discrepancy only became visible once member records were validated against external sources.
These findings illustrate how member data inaccuracies can remain embedded within scheme records for extended periods without triggering operational alerts.

When schemes approach buy-in or buy-out transactions, insurers undertake detailed due diligence on the member population. Confidence in the integrity of scheme data therefore becomes an important consideration.
Insurers typically review several areas, including:
Where information cannot be independently validated, additional verification work may be required before pricing can be confirmed. In some cases this can extend transaction timelines or introduce further assumptions into pricing models.
The Pensions Regulator also emphasises that trustees are responsible for maintaining complete and accurate member data as part of effective scheme governance.
Pension schemes operate over long time horizons. Member records may remain in administrative systems for several decades and often pass through multiple administrators and technology platforms.
Over time, several structural issues can arise. Members may pass away without the scheme being notified, particularly where contact with the scheme has been lost.
In England and Wales alone, over half a million deaths are registered each year, according to the UK Office for National Statistics (ONS). Reconciling long-standing member records against this scale of national mortality data is therefore an important element of maintaining accurate scheme populations.
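Conceptually, a mortality reconciliation is a join between scheme records and national death data. The toy Python sketch below illustrates the idea; the field names and the matching key (surname plus date of birth) are illustrative only, and production cleanses rely on fuller identifiers and fuzzy matching:

```python
# Toy scheme records and mortality data, for illustration only.
members = [
    {"id": "M001", "surname": "Smith", "dob": "1941-03-12", "status": "active"},
    {"id": "M002", "surname": "Patel", "dob": "1950-07-01", "status": "active"},
]
mortality_register = {
    # (surname, date of birth) -> date of death
    ("SMITH", "1941-03-12"): "2002-11-05",
}

def flag_deceased(members, register):
    """Return members recorded as active who appear in the mortality data."""
    flagged = []
    for m in members:
        key = (m["surname"].upper(), m["dob"])
        if m["status"] == "active" and key in register:
            flagged.append({**m, "date_of_death": register[key]})
    return flagged

print(flag_deceased(members, mortality_register))  # M001 is flagged; M002 is not
```

Even this trivial version makes the structural point: until records are joined against an external source, a deceased member recorded as active raises no exception at all.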
Increasing international mobility also reduces visibility within domestic datasets. Addresses and contact details may remain unchanged for extended periods, and historical system migrations can introduce inconsistencies across records.
These issues do not necessarily affect day-to-day administration but can become visible when scheme data is examined more closely during transaction preparation.
To address these risks, schemes increasingly supplement internal records with additional verification sources such as:
Platforms such as Heka help consolidate these signals into structured intelligence. This allows schemes to validate member records, identify mortality indicators, and improve confidence in the accuracy of their member population.
Undetected deaths in scheme records illustrate a broader issue: member data can deteriorate silently over time.
Routine administrative processes may not surface these discrepancies. However, when schemes approach buy-in or buy-out preparation, such gaps can become operationally and financially relevant.
Early validation of member data can therefore reduce uncertainty, support insurer due diligence, and improve readiness for endgame transactions.

The "traditional" UK retiree is a vanishing demographic. As of 2026, the Office for National Statistics (ONS) and the DWP report that over 1.1 million UK pensioners now reside overseas. This isn't just a trend for high-net-worth individuals; it is a cross-demographic shift driven by global mobility and the search for lower costs of living.
However, the risk to pension schemes doesn't start at the point of retirement. It begins decades earlier.
While pensioners moving abroad is a well-documented trend, a more systemic risk is quietly accumulating in the "deferred" category: The Young Mobile Workforce.
1. The Fiduciary "Out of Touch" Trap
A trustee’s duty of care does not end when a member moves overseas. Traditional UK-centric tracing is no longer a "reasonable endeavour" when a significant portion of the membership is international. Without global data, trustees cannot fulfil mandated disclosure requirements or support members in making informed retirement choices.
2. The Mortality Blindspot
The most significant financial risk is overpayment. Without robust international mortality screening, schemes can continue paying benefits for years after a member has passed away overseas. Reclaiming these funds from foreign jurisdictions is legally complex and often impossible.
3. Member Welfare & Social Responsibility
Small pots represent a member's future livelihood. When schemes lose touch, they lose the ability to provide value. For the mobile workforce, being "out of touch" means being "under-saved."
To address these complexities, the industry is moving toward AI-enabled web intelligence that looks beyond standard registry searches. Heka’s approach focuses on three core pillars to restore scheme integrity:
As the UK workforce becomes more international, the risk of "lost" members is no longer a fringe issue – it is a core governance challenge. Trustees who bridge the global data gap today will protect their members’ welfare and their scheme’s long-term financial health.