2025 marked a turning point in digital identity risk. Fraud didn’t simply become more sophisticated – it became industrialized. What emerged across financial institutions was not a new fraud “type,” but a new production model: fraud operations shifted from human-led tactics to system-led pipelines capable of assembling identities, navigating onboarding flows, and adapting to defenses at machine speed.
Synthetic identities, account takeover attempts, and document fraud didn’t just rise in volume; they became more operationally consistent, more repeatable, and more automated. Fraud rings began functioning less like informal criminal networks and more like tech companies: deploying AI agents, modular tooling, continuous integration pipelines, and automated QA-style probing of institutional controls.
This is why so many identity controls failed in 2025. They were calibrated for adversaries who behave like people.
The most consequential development of 2025 was the normalization of autonomous or semi-autonomous fraud workflows. AI agents began executing tasks traditionally requiring human coordination: assembling identity components, navigating onboarding flows, probing rule thresholds, and iterating on failures in real time. Anthropic’s September findings – documenting agentic AI gaining access to confirmed high-value targets – validated what fraud teams were already observing: the attacker is no longer just an individual actor but a persistent, adaptive system.
According to Visa, activity across its ecosystem shows clear evidence of an AI shift. Mentions of “AI Agent” in underground forums have surged 477%, reflecting how quickly fraudsters are adopting autonomous systems for social engineering, data harvesting, and payment workflows.

The operational consequences were immediate: controls calibrated for human irregularity struggled against machine-level consistency. The threat model had shifted, but the control model had not.
2025 also saw the industrialization of synthetic identity creation – driven by both generative AI and the rapid expansion of fraud-as-a-service (FaaS) marketplaces. What previously required technical skill or bespoke manual work is now fully productized. Criminal marketplaces provide identity components, pre-validated templates, and automated tooling that mirror legitimate SaaS workflows.
By supplying every component of the fabrication chain, this ecosystem eliminated traditional constraints on identity fabrication. In North America, synthetic document fraud rose 311% year-on-year. Globally, deepfake incidents surged 700%. And with access to consumer data platforms like BeenVerified, fraud actors needed little more than a name to construct a plausible identity footprint.
The critical challenge was not just volume, but coherence: synthetic identities were often too clean, too consistent, and too well-structured. Legacy controls interpret clean data as low risk. But today, the absence of noise is often the strongest indicator of machine-assembled identity.
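To make the “too clean” signal concrete, here is a minimal sketch of the kind of heuristic a fraud team might prototype. The field names and checks are illustrative assumptions, not a production model: it simply counts the natural imperfections a real applicant tends to accumulate and treats their total absence as a risk signal.

```python
# Illustrative sketch: score an applicant record for "absence of noise".
# Field names are hypothetical; real deployments would learn these
# signals from labeled data rather than hard-code them.

def cleanliness_score(record: dict) -> float:
    """Return a 0-1 score where 1.0 means suspiciously noise-free."""
    noise_signals = [
        # Real users accumulate small inconsistencies over time.
        record.get("has_nickname_variants", False),    # e.g. "Bob" vs "Robert"
        record.get("has_old_addresses", False),        # address history exists
        record.get("email_predates_application", False),
        record.get("phone_carrier_tenure_years", 0) > 1,
        record.get("minor_format_inconsistencies", False),  # typos, casing
    ]
    observed_noise = sum(bool(s) for s in noise_signals)
    # No imperfections at all -> score near 1.0 (machine-assembled pattern).
    return 1.0 - observed_noise / len(noise_signals)

applicant = {
    "has_nickname_variants": False,
    "has_old_addresses": False,
    "email_predates_application": False,
    "phone_carrier_tenure_years": 0,
    "minor_format_inconsistencies": False,
}
print(cleanliness_score(applicant))  # 1.0 -> route to enhanced review
```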
Because FaaS marketplaces standardized production, institutions began seeing near-identical identity patterns across geographies, platforms, and product types – a hallmark of industrialized fraud. Controls validated what “existed,” not whether it reflected a real human identity. That gap widened every quarter in 2025.
As fraud operations industrialized, several foundational identity controls reached structural limits. These were not tactical failures; they reflected the fact that the underlying assumptions behind these controls no longer matched the behavior of modern adversaries.
For years, device fingerprinting was a strong differentiator between legitimate users and automated or high-risk actors. That assumption collapsed publicly with Europol’s Operation SIMCARTEL in October 2025, one of many recent cases in which criminals used genuine hardware and SIM box technology – specifically, 40,000 physical SIM cards – to generate real, high-entropy device signals that bypassed checks. Fraud rings moved from spoofing devices to operating them at scale, eroding the effectiveness of fingerprinting models designed to catch software-based manipulation.
With PII exposure at unprecedented levels and AI retrieval tools able to surface answers instantly, knowledge-based authentication no longer correlates with genuine identity ownership. Breaches like the TransUnion incident in late August 2025, which exposed 4.4 million sensitive records, flood the dark web with exactly the answers needed to defeat security questions; paired with AI retrieval tools, they render KBA controls defenseless. What was once a reliable fallback degraded into a near-zero-value signal.
High-volume, automated adversarial probing enabled fraud actors to map rule thresholds with precision. UK Finance and Cifas jointly reported 26,000 ATO attempts engineered to stay just under the £500 review limit. Rules didn’t fail because they were poorly designed. They failed because automation made them predictable.
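The detection counter-move is to treat the threshold itself as a signal. Below is a minimal sketch, assuming a £500 review limit and a per-account list of transfer amounts (the limit, band, and ratios are illustrative, not calibrated values): flag accounts whose transfers cluster unnaturally just below the limit.

```python
# Illustrative sketch: flag accounts whose transfer amounts cluster just
# below a review threshold. All parameters are assumptions for the example.

REVIEW_LIMIT = 500.00      # e.g. the £500 manual-review threshold
BAND = 0.90                # "just below" = within the top 10% under the limit
MIN_EVENTS = 5             # need enough events to call it a pattern
SUSPICIOUS_RATIO = 0.6     # >60% of transfers in the band is unusual

def hugs_threshold(amounts: list[float]) -> bool:
    if len(amounts) < MIN_EVENTS:
        return False
    in_band = [a for a in amounts if BAND * REVIEW_LIMIT <= a < REVIEW_LIMIT]
    return len(in_band) / len(amounts) >= SUSPICIOUS_RATIO

# Automated probing tends to produce exactly this shape:
print(hugs_threshold([499.0, 495.5, 498.2, 499.9, 497.0, 42.0]))  # True
print(hugs_threshold([120.0, 33.5, 760.0, 55.0, 210.0]))          # False
```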
Most controls still anchor identity validation to isolated events – onboarding, large transactions, or high-friction workflows. Fraud operations exploited the unmonitored spaces in between. Legacy controls were built for linear journeys; fraud in 2025 moved laterally.
The institutions that performed best in 2025 were not the ones with the most tools – they were the ones that recalibrated how identity is evaluated and how fraud is expected to behave. The shift was operational, not philosophical: identity is no longer an event to verify, but a system to monitor continuously.
Three strategic adjustments separated resilient teams from those that saw the highest loss spikes.
Onboarding signals are now the weakest indicators of identity integrity. Fraud prevention improved when teams shifted focus from the application moment to how identities behave after onboarding.
Continuous identity monitoring is replacing traditional KYC cadence. The strongest institutions treated identity as something that must prove itself repeatedly, not once.
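One way to operationalize that cadence shift is to replace a fixed KYC calendar with event-driven re-checks. A minimal sketch, with hypothetical trigger names, of routing logic that re-verifies an identity whenever its risk context changes rather than on a fixed anniversary:

```python
# Illustrative sketch: event-driven identity re-verification instead of a
# fixed annual KYC review. Trigger names are hypothetical placeholders.

RECHECK_TRIGGERS = {
    "contact_info_changed",
    "new_device_enrolled",
    "dormant_account_reactivated",
    "beneficiary_added",
    "credential_reset",
}

def needs_reverification(event: str, days_since_last_check: int) -> bool:
    # Re-verify on any risk-relevant lifecycle event, with a long backstop
    # cadence so no identity goes unexamined indefinitely.
    return event in RECHECK_TRIGGERS or days_since_last_check > 365

print(needs_reverification("contact_info_changed", 12))  # True
print(needs_reverification("card_viewed", 40))           # False
```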
Industrialized fraud exploits the gaps left by internal-only models. High-performing institutions widened their aperture and integrated signals from external environments, particularly the open web.
These signals exposed synthetics that passed internal checks flawlessly but could not replicate authentic, long-term human activity on the open web.
Identity integrity is now a multi-environment assessment, not an internal verification process.
Most fraud in 2025 exhibited machine-level regularity – predictable timing, optimized retries, stable sequences. Teams that succeeded treated automation itself as a primary detection signal, incorporating timing regularity, retry behavior, and sequence stability into their models.
Fraud no longer “looks suspicious”; it behaves systematically. Detection must reflect that.
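A simple statistical version of this idea: humans produce noisy inter-event timing, while scripted agents are suspiciously regular. The sketch below, a toy illustration rather than a production detector, uses the coefficient of variation of gaps between session events; a very low value suggests automation. The 0.15 cutoff is an assumption for the example.

```python
# Illustrative sketch: use timing regularity as an automation signal.
# The threshold is an assumption, not a calibrated value.
from statistics import mean, pstdev

def automation_suspicion(event_times: list[float]) -> bool:
    """True if inter-event gaps are too regular to look human."""
    if len(event_times) < 4:
        return False
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    cv = pstdev(gaps) / mean(gaps)   # coefficient of variation of gaps
    return cv < 0.15                 # humans rarely click/type this evenly

print(automation_suspicion([0.0, 2.0, 4.01, 6.0, 8.02]))  # True: near-constant gaps
print(automation_suspicion([0.0, 3.7, 4.2, 9.9, 11.0]))   # False: human-like jitter
```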
Fragmented fraud stacks produced fragmented intelligence. Institutions saw the strongest improvements when they unified their detection tools, data sources, and risk signals into a single, coherent decision layer. Data orchestration provided the coordination and shared context that legacy stacks could not.
The shift isn’t toward more controls; it is toward coordination.
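As a concrete illustration of what “coordination” means in practice, here is a minimal sketch of a decision layer that fuses independent detector outputs into one explainable result. The signal names and weights are hypothetical; the point is the single fusion surface, not the specific signals.

```python
# Illustrative sketch: unify independent fraud signals in one decision
# layer instead of letting each tool decide in isolation. Signal names
# and weights are hypothetical.

SIGNAL_WEIGHTS = {
    "device_risk": 0.25,
    "identity_coherence": 0.35,   # external / open-web consistency
    "automation_score": 0.25,     # timing regularity, retry patterns
    "threshold_hugging": 0.15,
}

def decide(signals: dict[str, float], review_at: float = 0.5) -> dict:
    score = sum(SIGNAL_WEIGHTS[k] * signals.get(k, 0.0) for k in SIGNAL_WEIGHTS)
    # Keep the contributing signals visible so the decision stays explainable.
    top = max(SIGNAL_WEIGHTS, key=lambda k: SIGNAL_WEIGHTS[k] * signals.get(k, 0.0))
    return {
        "score": round(score, 3),
        "action": "review" if score >= review_at else "allow",
        "top_driver": top,
    }

print(decide({"device_risk": 0.2, "identity_coherence": 0.9,
              "automation_score": 0.8, "threshold_hugging": 0.1}))
# {'score': 0.58, 'action': 'review', 'top_driver': 'identity_coherence'}
```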
Identity controls didn’t fail in 2025 because institutions lacked capability. They failed because the models underpinning those controls were anchored to a world where identity was stable, fraud was manual, and behavioral irregularity differentiated good actors from bad.
In 2025, identity became dynamic and distributed. Fraud became industrialized and system-led.
Institutions that recalibrate their approach now – treating identity as a living system, integrating external context, and unifying decisioning layers – will be best positioned to defend against the operational realities of 2026.
At Heka Global, our platform delivers real-time, explainable intelligence from thousands of global data sources to help fraud teams spot non-human patterns, identity inconsistencies, and early lifecycle divergence long before losses occur.

FOR IMMEDIATE RELEASE
Windare Ventures, Barclays and other institutional investors back Heka’s AI engine as financial institutions seek stronger defenses against synthetic fraud and identity manipulation.
New York, 15 July 2025
Consumer fraud is at an all-time high. Last year, losses hit $12.5 billion – a 38% jump year-over-year. The rise is fueled by burner behavior, synthetic profiles, and AI-generated content. But the tools meant to stop it – from credit bureau data to velocity models – miss what’s happening online. Heka was built to close that gap.
Inspired by the tradecraft of the intelligence community, Heka analyzes how a person actually behaves and appears across the open web. Its proprietary AI engine assembles digital profiles that surface alias use, reputational exposure, and behavioral anomalies. This helps financial institutions detect synthetic activity, connect with real customers, and act faster with confidence.
At the core of Heka’s web intelligence engine is an analyst-grade AI agent. Unlike legacy tools that rely on static files, scores, or blacklists, Heka’s AI processes large volumes of web data to produce structured outputs like fraud indicators, updated contact details, and contextual risk signals. In one recent deployment with a global payment processor, Heka’s AI engine caught 65% of account takeover losses without disrupting healthy user activity.
Heka is already generating millions in revenue through partnerships with banks, payment processors, and pension funds. Clients use Heka’s intelligence to support critical decisions from fraud mitigation to account management and recovery. The $14 million Series A round, led by Windare Ventures with participation by Barclays, Cornèr Banca, and other institutional investors, will accelerate Heka’s U.S. expansion and deepen its footprint across the UK and Europe.
“Heka’s offering stood out for its ability to address a critical need in financial services – helping institutions make faster, smarter decisions using trustworthy external data. We’re proud to support their continued growth as they scale in the U.S.,” said Kester Keating, Head of US Principal Investments at Barclays.
Ori Ashkenazi, Managing Partner at Windare Ventures, added: “Identity isn’t a fixed file anymore. It’s a stream of behavior. Heka does what most AI can’t: it actually works in the wild, delivering signals banks can use seamlessly in workflows.”
Heka was founded by Rafael Berber, former Global Head of Equity Trading at Merrill Lynch; Ishay Horowitz, a senior officer in the Israeli intelligence community; and Idan Bar-Dov, a fintech and high-tech lawyer. The broader team includes intel analysts, data scientists, and domain experts in fraud, credit, and compliance.
“The credit bureaus were built for another era. Today, both consumers and risk live online. Heka’s mission is to be the default source of truth for this new digital reality – always-on, accurate, and explainable,” said Idan Bar-Dov, Co-founder and CEO of Heka.
About Heka
Heka delivers web intelligence to financial services. Its AI engine is used by banks, payment processors, and pension funds to fill critical blind spots in fraud mitigation, credit decisioning, and account recovery. The company was founded in 2021 and is headquartered in New York and Tel Aviv.
Press contact
Joy Phua Katsovich, VP Marketing | joy@hekaglobal.com
Fraud is no longer a technical skill. It’s a shopping experience.
What used to require specialized knowledge, custom scripting, and underground connections is now available through polished marketplaces that look indistinguishable from mainstream e-commerce platforms. Scrollable product cards. Star ratings. Tiered subscriptions. “Customers also bought…” recommendations.
Fraud-as-a-Service (FaaS) is not just an ecosystem – it is a parallel economy, built on the same principles as Amazon, Fiverr, and Shopify, but optimized for identity crime.
The result is a dramatic shift in the threat landscape: lower entry barriers, lower operational costs, and attacks that scale instantly. Fraud is no longer limited by human capability – it is limited only by how quickly these marketplaces can generate new products.
This blog exposes how the FaaS ecosystem actually works, what is available inside these marketplaces, and why the industrialization of fraud is reshaping digital risk.
The biggest misconception about digital crime is that it is messy, unstructured, and technically demanding. The truth is the opposite.
Today’s fraud marketplaces offer the full retail experience: scrollable product cards, star ratings, tiered subscriptions, and automated recommendations. The buyer journey mirrors legitimate SaaS. And like Fiverr, each vendor specializes in a discrete slice of the fraud supply chain.
Fraud hasn’t just scaled – it has industrialized.
This is the part most institutions underestimate. The breadth and maturity of the offerings are staggering. Here is what is openly sold across FaaS platforms – with the same clarity you’d expect from Amazon.

Full synthetic personas are sold as complete packages, and vendors guarantee the profile will pass KYC at specific institutions. The price range? $25–$200 per profile.
These aren’t crude Photoshopped IDs. Some vendors offer automated generation APIs: “Generate 1,000 EU passports → Deliver in 40 seconds.”

Pre-built phishing engines, priced at $10–$50 per campaign and often bundled with free updates.
Many platforms now include “Fraud-GPT” engines – fraud-tuned GenAI models capable of producing tailored scam messages, emotional manipulation scripts, romance-fraud personas, and real-time social-engineering dialog. These systems can hold multi-turn conversations with victims while dynamically adjusting tone, urgency, and narrative to increase conversion rates.

Not just credential stuffing – full operational bots that learn from failure and retry with adjusted parameters.
Just add a username and phone number, and the bundle is ready to run. They are marketed explicitly: “ATO at scale. 94% success rate on XYZ bank. Guaranteed replacement if blocked.”
Highly organized product categories, where every item carries an age, a source, and a validity score.

Turnkey operations, packaged end to end.
When you step back from the catalog of available tools, one truth becomes impossible to ignore: fraud is no longer defined by human capability. It is defined by the capabilities of the systems that now produce and distribute it.
Every component of the fraud economy – identity creation, verification bypass, account takeover, social engineering, automation – has been modularized, optimized, and packaged for scale. The human actor is no longer the limiting factor. The marketplace provides the expertise, the automation provides the execution, and the criminal business model provides the incentive structure.
The result is a threat landscape that looks less like episodic misconduct and more like a supply chain. Fraud behaves like a coordinated operation, not a series of individual attempts. It adapts quickly, repeats consistently, and expands effortlessly – because the work is performed by tools, not people.
This is why traditional controls struggle. Identity verification was built on the assumption that inconsistencies, friction, and human error would reveal risk. But the industrialization of fraud produces identities that are consistent, documents that are polished, and behavioral patterns that are machine-stable. What used to feel like a red flag – a clean file, a frictionless onboarding journey – is now a symptom of a system-generated identity.
The deeper consequence is strategic: the attacker no longer “thinks” like a human adversary. They probe controls the way software tests an API. They run parallel attempts the way a product team runs A/B tests. They scale operations the way cloud infrastructure scales workloads. And because their tooling is continuously updated, they improve quickly – while defenses remain constrained by review cycles, risk committees, and static models.
For financial institutions, the rise of Fraud-as-a-Service has exposed the limits of a decades-old assumption: that identity can be validated by inspecting individual attributes. In an industrialized fraud economy, every discrete signal – documents, device profiles, PII, behavioral cues – can be purchased, replicated, or simulated on demand. A synthetic identity can now satisfy every checkbox a traditional onboarding flow requires.
What it cannot reliably produce is contextual coherence.
Real customers exhibit history, relationships, communication patterns, platform interactions, and digital residue that accumulate organically. Their identities make sense across time, across channels, and across environments. Their behavior reflects inconsistency, natural drift, and the kinds of imperfections that automated systems struggle to fabricate.
Synthetic identities, even sophisticated ones, tend to be too clean, too consistent, and too shallow, lacking the accumulated history, relationships, and organic digital residue of real people.
This is the gap FIs must now address. Identity is no longer something you confirm once. It is something you understand – continuously – by examining whether its story holds together.
The operational shift is simple to articulate, harder to execute:
Verification must move from checking attributes to validating coherence.
Does the identity align with long-term behavioral patterns?
Does the footprint exist beyond the onboarding moment?
Does it behave like a human navigating life, or a system navigating workflows?
Does it fit the context in which it appears?
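Those four questions can be turned into a rough scoring rubric. The sketch below is one hypothetical way to encode them; each boolean stands in for a full signal pipeline (behavioral history, footprint depth, navigation style, context fit) that would be far richer in practice, and the equal weighting is an assumption for clarity.

```python
# Illustrative sketch: score identity coherence along the four questions
# above. Each boolean stands in for a real signal pipeline; equal
# weighting is an assumption for the example.

COHERENCE_CHECKS = [
    "matches_longterm_behavior",    # aligns with long-term behavioral patterns?
    "footprint_beyond_onboarding",  # exists outside the onboarding moment?
    "human_navigation_style",       # navigates like a person, not a workflow?
    "fits_declared_context",        # consistent with where/how it appears?
]

def coherence_score(identity: dict) -> float:
    passed = sum(identity.get(check, False) for check in COHERENCE_CHECKS)
    return passed / len(COHERENCE_CHECKS)

synthetic = {"matches_longterm_behavior": False,
             "footprint_beyond_onboarding": False,
             "human_navigation_style": True,
             "fits_declared_context": True}
print(coherence_score(synthetic))  # 0.5 -> escalate for deeper review
```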
Fraud has become industrial. Identity fabrication has become automated. What separates real from synthetic is no longer the presence of data, but whether that data forms a believable whole.
Financial institutions that recalibrate their controls toward coherence – contextual, cross-signal intelligence – will be positioned to detect what Fraud-as-a-Service still struggles to imitate: the complexity of genuine human identity.
At Heka Global, our platform delivers real-time, explainable intelligence from thousands of global data sources to help fraud teams spot non-human patterns, identity inconsistencies, and early lifecycle divergence long before losses occur.
In an AI-versus-AI world, timing is everything. The earlier your system understands an identity, the sooner you can stop the threat.

The biggest shift in fraud today isn’t the sophistication of attackers – it’s the way identity itself has changed.
AI has blurred the boundaries between real and fake. Identities can now be assembled, morphed, or automated using the same technologies that power legitimate digital experiences. Fraudsters don’t need to steal an identity anymore; they can manufacture one. They don’t guess passwords manually; they automate the behavioral patterns of real users. They operate across borders, devices, and platforms with no meaningful friction.
The scale of the problem continues to accelerate. According to the Deloitte Center for Financial Services, synthetic identity fraud is expected to reach US $23 billion in losses by 2030. Meanwhile, account takeover (ATO) activity has risen by nearly 32% since 2021, with an estimated 77 million people affected, according to Security.org. These trends reflect not only rising attack volume, but the widening gap between how identity operates today and how legacy systems attempt to secure it.
This isn’t just “more fraud.” It’s a fundamental reconfiguration of what identity means in digital finance – and how easily it can be manipulated. Synthetic profiles that behave like real customers, account takeovers that mimic human activity, and dormant accounts exploited at scale are no longer anomalies. They are a logical outcome of this new system.
The challenge for banks, neobanks, and fintechs is no longer verifying who someone is, but understanding how digital entities behave over time and across the open web.
Most fraud stacks were built for a world where identity was stable, fraud was manual, and behavioral irregularity separated good actors from bad. Today’s adversaries exploit the gaps in that outdated model.

Blind Spot 1 — Static Identity Verification
Traditional KYC treats identity as fixed. Synthetic profiles exploit this entirely by presenting clean credit files, plausible documents, and AI-generated faces that pass onboarding without friction.
Blind Spot 2 — Device and Channel Intelligence
Legacy device fingerprinting and IP checks no longer differentiate bots from humans. AI agents now mimic device signatures, geolocation drift, and even natural session friction.
Blind Spot 3 — Transaction-Centric Rules
Fraud rarely begins with a transaction anymore. Synthetics age accounts for months, ATO attackers update contact information silently, and dormant accounts remain inactive until the moment they’re exploited.
In short: fraud has become dynamic; most defenses remain static.
For decades, digital identity was treated as a stable set of attributes: a name, a date of birth, an address, and a document. The financial system – and most fraud controls – were built around this premise. But digital identity in 2025 behaves very differently from the identities these systems were designed to protect.
Identity today is expressed through patterns of activity, not static attributes. Consumers interact across dozens of platforms, maintain multiple email addresses, replace devices frequently, and leave fragmented traces across the open web. None of this is inherently suspicious – it’s simply the consequence of modern digital life.
The challenge is that fraudsters now operate inside these same patterns.
A synthetic identity can resemble a thin-file customer.
An ATO attacker can look like a user switching devices.
A dormant account can appear indistinguishable from legitimate inactivity.
In other words, the difficulty is not that fraudsters hide outside normal behavior – it is that the behavior considered “normal” has expanded so dramatically that older models no longer capture its boundaries.
This disconnect between how modern identity behaves and how traditional systems verify it is precisely what makes certain attack vectors so effective today. Synthetic identities, account takeovers, and dormant-account exploitation thrive not because they are new techniques, but because they operate within the fluid, multi-channel reality of contemporary digital identity – where behavior shifts quickly, signals are fragmented, and legacy controls cannot keep pace.
Synthetic identities combine real data fragments with fabricated details to create a customer no institution can validate – because no real person is missing. This gives attackers long periods of undetected activity to build credibility.
Fraudsters use synthetics to open accounts, build credit and credibility over time, and ultimately extract value at scale.
Equifax estimates synthetics now account for 50–70% of credit fraud losses among U.S. banks.
One-time verification cannot identify a profile that was never tied to a real human. Institutions need ongoing, external intelligence that answers a different question:
Does this identity behave like an actual person across the real web?
Account takeover (ATO) is particularly difficult because it begins with a legitimate user and legitimate credentials. Financial losses tied to ATO continue to grow. VPNRanks reports a sustained increase in both direct financial impact and the volume of compromised accounts, further reflecting how identity-based attacks have become central to modern fraud.

Fraudsters increasingly use AI to automate credential testing and to mimic the behavioral patterns of real users. Once inside, attackers move quickly to secure control, silently updating contact details and recovery channels. Early indicators are subtle and often scattered across devices, channels, and account settings.
The issue is not verifying credentials; it is determining whether the behavior matches the real user.
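One practical pattern for catching this early is to correlate those scattered signals in time: any single event looks benign, but several inside a short window rarely do. A sketch, with hypothetical event names and a 72-hour window chosen purely for illustration:

```python
# Illustrative sketch: correlate scattered ATO indicators inside a short
# window. Event names and the 72-hour window are assumptions.
from datetime import datetime, timedelta

ATO_INDICATORS = {"new_device_login", "contact_info_update",
                  "password_reset", "new_payee_added"}
WINDOW = timedelta(hours=72)

def ato_risk(events: list[tuple[datetime, str]], min_hits: int = 3) -> bool:
    hits = sorted(t for t, name in events if name in ATO_INDICATORS)
    # Slide over indicator timestamps; flag if enough land in one window.
    for i, start in enumerate(hits):
        if sum(1 for t in hits[i:] if t - start <= WINDOW) >= min_hits:
            return True
    return False

now = datetime(2025, 11, 3, 9, 0)
events = [(now, "new_device_login"),
          (now + timedelta(hours=5), "contact_info_update"),
          (now + timedelta(hours=30), "new_payee_added"),
          (now + timedelta(days=9), "login")]
print(ato_risk(events))  # True: three indicators inside 72 hours
```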
Dormant or inactive accounts, once considered low-risk, have become reliable targets for fraud. Their inactivity provides long periods of concealment, and they often receive less scrutiny than active accounts. This makes them attractive staging grounds for synthetic identities, mule activity, and small-value laundering that can later escalate.
Fraudsters use dormant accounts because they represent the perfect blend of low visibility and high permission: the infrastructure of a legitimate customer without the scrutiny of an active one.
Dormant accounts are vulnerable because of their inactivity – not in spite of it.
Institutions benefit from treating dormancy itself as a risk state and monitoring inactive accounts with the same rigor as active ones.
Dormant ≠ safe. Dormant = unobserved.
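A minimal version of that observation in code: treat any high-risk action on a long-dormant account as an event worth scoring, regardless of amount. The dormancy threshold and action names are illustrative assumptions.

```python
# Illustrative sketch: escalate when a long-dormant account suddenly
# performs a high-risk action. Threshold and action names are assumptions.
from datetime import date

DORMANT_AFTER_DAYS = 180
HIGH_RISK_ACTIONS = {"contact_info_update", "new_payee_added",
                     "credential_reset", "outbound_transfer"}

def reactivation_alert(last_activity: date, today: date, action: str) -> bool:
    dormant_days = (today - last_activity).days
    return dormant_days >= DORMANT_AFTER_DAYS and action in HIGH_RISK_ACTIONS

print(reactivation_alert(date(2025, 1, 10), date(2025, 11, 3),
                         "outbound_transfer"))  # True -> review before release
```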
Fraud today is not opportunistic. It is operational, coordinated, and increasingly automated.
AI enables fraudsters to automate tasks that were once slow or manual. This automation feeds into a consistent operational lifecycle.
This automation feeds into a consistent operational lifecycle.
Most institutions detect fraud only at the final stage of that lifecycle, once money is already moving. Modern prevention requires detecting divergence in the earlier stages, long before monetization.
Fraud has evolved from discrete events to continuous identity manipulation. Defenses must do the same. This shift is fundamental.

Institutions must understand identity the way attackers exploit it – as something dynamic, contextual, and shaped by behavior over time.
Fraud is becoming faster, more coordinated, and scaling at levels never seen before. Institutions that adapt will be those that begin viewing it as a continuously evolving system.
Those that win the next phase of this battle will stop relying on static checks and begin treating identity as something contextual and continuously evolving.
That requires intelligence that looks beyond internal systems and into the open web, where digital footprints, behavioral signals, and online history reveal whether an identity behaves like a real person, or a synthetic construct designed to exploit the gaps.
At Heka Global, our platform delivers real-time, explainable intelligence from thousands of global data sources to help fraud teams spot non-human patterns, identity inconsistencies, and early lifecycle divergence long before losses occur.
In an AI-versus-AI world, timing is everything. The earlier your system understands an identity, the sooner you can stop the threat.

We’re proud to announce our partnership with ZEDRA Governance to help pension schemes tackle one of the sector’s biggest challenges: tracing missing members.
Following a successful pilot where Heka’s AI-powered tracing identified 50% of previously unreachable members, ZEDRA will now offer our technology to clients via a dedicated architecture, bringing scale and speed to both small and large schemes.
“Reuniting members with their full retirement benefits is a core fiduciary duty,” said Mark Stopard, Head of Proposition Development at ZEDRA Governance. “We’re excited to see the results of this initiative as part of our commitment to helping clients solve the issue of lost pensions.”
Heka's technology helps schemes locate current contact details, life status, and digital signals even when records are outdated or fragmented. By partnering with ZEDRA, we’re enabling better member engagement, reduced risk, and readiness for future reforms.
“Many of the toughest challenges in the pensions sector start with missing data,” said Max Lack, Business Development Manager at Heka. “Solving that unlocks everything else, from dashboard readiness to retirement adequacy.”
Read the full announcement on ZEDRA’s website.
We’re excited to announce that Heka is now live on NayaOne, the leading fintech and data marketplace for financial institutions.
Through the NayaOne platform, banks and insurers can now securely trial Heka’s external customer intelligence engine, accessing real-time, explainable insights for credit, fraud, onboarding, and more, all within a sandboxed environment.
This marks a major step in making Heka more accessible to innovation teams looking to accelerate decision-making with trustworthy, real-time web intelligence.
We’re proud to support Dalriada Trustees in tracing victims of pension fraud using our AI-driven identity and contact resolution tools. The collaboration has already reunited members with their rightful benefits where traditional tracing methods failed. Read the full article published by Professional Pensions to learn more about how our partnership is helping deliver real outcomes in complex fraud scenarios.
👉 As featured in Professional Pensions