Credit Risk Signals You Can Trust.

Credit files only tell part of the story. Heka helps underwriters assess intent, stability, and reputational and lifestyle risk using real-time signals from the open web – enabling credit teams to increase exposure to strong applicants while reducing overall risk.
TURNING WEB NOISE INTO DECISIONS

How It Works

Existing Customer Record

Name, ID, date of birth, city — even if incomplete.

Live Data Extraction

Heka scans the open web for real-time behavioural, reputational, and relational signals.

AI-Powered Signal Structuring

Our AI parses noise into patterns, turning raw data into clear signals that reveal risk factors and anomalies — with full traceability.

Actionable, Traceable Output

A structured, explainable result — with indicators showing whether the profile is:

  • No Risk Factor Found
  • Needs Underwriter Review
  • Critical

We return only what you need – clear, evidence-based results, with links to sources.

Powered by
Heka’s Identity Intelligence Engine

Imagine an AI analyst agent that thinks like your best team member: trained to trace digital footprints, flag risk, and never miss a signal.
Real-time & batch API – no install needed
Global coverage spanning 50+ countries
Proprietary AI models for identity, behaviour and risk
Fully explainable & traceable outputs
Privacy by Design
Purpose-built for financial services
3,000+ verified data sources
Embedded LLMs in decision workflow
Testimonials
The trustee wants to pay all members the right benefits, so it was important to explore different avenues to try and find these missing members. Heka has helped us do this quickly and effectively. From our perspective, it is really important that Heka is able to find missing members regardless of whether or not they are based in the UK.
Doug Ross, Trustee Chair at MNRPF
Heka’s offering stood out for its ability to address a critical need in financial services – helping institutions make faster, smarter decisions using trustworthy external data. We’re proud to support their continued growth as they scale in the U.S.
Kester Keating, Head of US Principal Investments at Barclays
Making sure all members receive their benefits is a core fiduciary duty, as is controlling the financial costs of missing member data. Further to that, tracing these individuals will allow schemes to significantly boost their ability to re-engage members…We’re excited to see the results of this initiative.
Mark Stopard at Zedra
Dalriada has welcomed the work done by Heka to date, which has enabled us to get in contact with some members we were previously unable to. The techniques employed by Heka are innovative and this has seen positive results where other, more traditional member tracing options have failed.
Sean Browes, Professional Trustee at Dalriada
Heka has completely transformed how I think about client DD. Their tech-savvy approach delivers fast, smart insights that help us win at onboarding. It’s a pleasure working with them.
Moran Alon, CEO at Banque Pictet

Explore More Resources


Why Did So Many Identity Controls Fail in 2025?

Why did the industry's most trusted identity controls fail in 2025? Explore the structural limits of device intelligence, KBA, and static rules in an age of automation.

2025 marked a turning point in digital identity risk. Fraud didn’t simply become more sophisticated – it became industrialized. What emerged across financial institutions was not a new fraud “type,” but a new production model: fraud operations shifted from human-led tactics to system-led pipelines capable of assembling identities, navigating onboarding flows, and adapting to defenses at machine speed.

Synthetic identities, account takeover attempts, and document fraud didn’t just rise in volume; they became more operationally consistent, more repeatable, and more automated. Fraud rings began functioning less like informal criminal networks and more like tech companies: deploying AI agents, modular tooling, continuous integration pipelines, and automated QA-style probing of institutional controls.

This is why so many identity controls failed in 2025. They were calibrated for adversaries who behave like people. 

Automation Became the Default Operating Mode

The most consequential development of 2025 was the normalization of autonomous or semi-autonomous fraud workflows. AI agents began executing tasks traditionally requiring human coordination: assembling identity components, navigating onboarding flows, probing rule thresholds, and iterating on failures in real time. Anthropic’s September findings – documenting agentic AI gaining access to confirmed high-value targets – validated what fraud teams were already observing: the attacker is no longer just an individual actor but a persistent, adaptive system.

According to Visa, activity across their ecosystem shows clear evidence of an AI shift. Mentions of “AI Agent” in underground forums have surged 477%, reflecting how quickly fraudsters are adopting autonomous systems for social engineering, data harvesting, and payment workflows.

[Chart: underground fraud forum mentions of "AI Agent" – Visa Report: Five Forces Reshaping Payment Security in 2025]

Operational consequences were immediate:

  • Attempt volumes exceeded human-constrained detection models
  • Timing patterns became too consistent for human-based anomaly rules
  • Retries and adjustments became systematic rather than opportunistic
  • Session structures behaved more like software than people
  • Attacks ran continuously, unaffected by time zones, fatigue, or manual bottlenecks

Controls calibrated for human irregularity struggled against machine-level consistency. The threat model had shifted, but the control model had not.
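To make "machine-level consistency" concrete: one simple, widely used heuristic is to measure how uniform the gaps between a session's events are. The sketch below is illustrative only – the function names and the 0.1 cutoff are invented for this example, not any vendor's production logic:

```python
import statistics

def timing_regularity(timestamps):
    """Coefficient of variation of inter-event gaps.

    Human activity tends to be bursty (high CV); scripted
    activity tends toward near-constant gaps (low CV).
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return None  # not enough events to judge
    mean = statistics.mean(gaps)
    if mean == 0:
        return 0.0
    return statistics.stdev(gaps) / mean

def looks_automated(timestamps, cv_threshold=0.1):
    # Hypothetical cutoff: flag sessions whose gaps are
    # suspiciously uniform.
    cv = timing_regularity(timestamps)
    return cv is not None and cv < cv_threshold
```

A session firing events at exact one-second intervals scores near zero and gets flagged; ordinary human browsing, with its pauses and bursts, does not.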

Synthetic Identity Production Reached Industrial Scale

2025 also saw the industrialization of synthetic identity creation – driven by both generative AI and the rapid expansion of fraud-as-a-service (FaaS) marketplaces. What previously required technical skill or bespoke manual work is now fully productized. Criminal marketplaces provide identity components, pre-validated templates, and automated tooling that mirror legitimate SaaS workflows.

[Image: one of many fraud-as-a-service marketplaces Heka's team found]

These marketplaces supply:

  • AI-generated facial images and liveness-passing videos
  • Country-specific forged document packs
  • Pre-scraped digital footprints from public and commercial sources
  • Bulk synthetic identity templates with coherent PII
  • Automated onboarding scripts designed to work across popular IDV vendors
  • APIs capable of generating thousands of synthetic profiles at once
  • And more…

This ecosystem eliminated traditional constraints on identity fabrication. In North America, synthetic document fraud rose 311% year-on-year. Globally, deepfake incidents surged 700%. And with access to consumer data platforms like BeenVerified, fraud actors needed little more than a name to construct a plausible identity footprint.

The critical challenge was not just volume, but coherence: synthetic identities were often too clean, too consistent, and too well-structured. Legacy controls interpret clean data as low risk. But today, the absence of noise is often the strongest indicator of machine-assembled identity.

Because FaaS marketplaces standardized production, institutions began seeing near-identical identity patterns across geographies, platforms, and product types – a hallmark of industrialized fraud. Controls validated what “existed,” not whether it reflected a real human identity. That gap widened every quarter in 2025.
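To make the "too clean" point concrete, here is a toy sketch – field names, the threshold, and the routing policy are all invented for illustration – of treating a suspiciously complete applicant file as a review trigger rather than a green light:

```python
def cleanliness_score(profile, expected_fields):
    """Fraction of expected fields that are present and non-empty.

    Real customer records almost always carry gaps and noise;
    a perfect score on a brand-new applicant can itself be a
    signal of machine-assembled identity.
    """
    present = sum(1 for f in expected_fields if profile.get(f) not in (None, ""))
    return present / len(expected_fields)

def too_clean(profile, expected_fields, threshold=1.0):
    # Hypothetical policy: route 100%-complete new applications
    # to underwriter review instead of auto-approval.
    return cleanliness_score(profile, expected_fields) >= threshold
```

In practice this would be one weak signal among many, not a verdict on its own.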

Where Identity Controls Reached Their Limits

As fraud operations industrialized, several foundational identity controls reached structural limits. These were not tactical failures; they reflected the fact that the underlying assumptions behind these controls no longer matched the behavior of modern adversaries.

Device intelligence weakened as attackers shifted to hardware

For years, device fingerprinting was a strong differentiator between legitimate users and automated or high-risk actors. That assumption rested on attackers relying on emulated or spoofed devices. Europol’s Operation SIMCARTEL in October 2025 exposed the shift: in one of many recent cases, criminals used genuine hardware and SIM box technology – 40,000 physical SIM cards – to generate real, high-entropy device signals that bypassed checks. Fraud rings moved from spoofing devices to operating them at scale, eroding the effectiveness of fingerprinting models designed to catch software-based manipulation.

Knowledge-based authentication effectively collapsed

With PII circulating at unprecedented volume, knowledge-based authentication no longer correlated with genuine identity ownership. Breaches like the TransUnion incident in late August 2025, which exposed 4.4 million sensitive records, flooded the dark web with PII – giving bad actors the exact answers needed to bypass security questions and, when paired with AI retrieval tools that surface those answers instantly, rendering KBA controls defenseless. What was once a fallback degraded into a near-zero-value signal.

Rules were systematically reverse-engineered

High-volume, automated adversarial probing enabled fraud actors to map rule thresholds with precision. UK Finance and Cifas jointly reported 26,000 ATO attempts engineered to stay just under the £500 review limit. Rules didn’t fail because they were poorly designed. They failed because automation made them predictable.
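A pattern like the UK Finance/Cifas example above can be surfaced with a very simple measure: the share of a customer's transactions that cluster just under a review limit. A minimal sketch, with the limit and band chosen purely for illustration:

```python
def threshold_hugging_rate(amounts, limit=500.0, band=0.10):
    """Share of transactions that land just under a review
    limit (within `band` of it, e.g. 450-499.99 for a 500
    limit). Automated probing that has mapped the rule
    produces a conspicuous spike in this band.
    """
    lo = limit * (1 - band)
    near = [a for a in amounts if lo <= a < limit]
    return len(near) / len(amounts) if amounts else 0.0
```

An account where most activity sits in that narrow band is behaving like something that knows where the rule is.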

Lifecycle gaps remained unprotected

Most controls still anchor identity validation to isolated events – onboarding, large transactions, or high-friction workflows. Fraud operations exploited the unmonitored spaces in between:

  • contact detail changes
  • dormant account reactivation
  • incremental credential resets
  • low-value testing

Legacy controls were built for linear journeys. Fraud in 2025 moved laterally.

What 2026 Fraud Strategy Now Requires

The institutions that performed best in 2025 were not the ones with the most tools – they were the ones that recalibrated how identity is evaluated and how fraud is expected to behave. The shift was operational, not philosophical: identity is no longer an event to verify, but a system to monitor continuously.

Four strategic adjustments separated resilient teams from those that saw the highest loss spikes.

1. Treat identity as a longitudinal signal, not a point-in-time check

Onboarding signals are now the weakest indicators of identity integrity. Fraud prevention improved when teams shifted focus to:

  • behavioral drift over time
  • sequence patterns across user journeys
  • changes in device, channel, or footprint lineage
  • reactivation profiles on dormant accounts

Continuous identity monitoring is replacing traditional KYC cadence. The strongest institutions treated identity as something that must prove itself repeatedly, not once.
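One minimal way to operationalize "changes in device, channel, or footprint lineage" is to compare the identifiers seen in a recent window against the account's historical baseline. A sketch, assuming you already collect per-account device or channel histories:

```python
def lineage_drift(baseline_devices, recent_devices):
    """1 minus the Jaccard similarity between the identifiers
    seen historically and those seen in the recent window.
    0.0 = fully familiar footprint; 1.0 = completely new one.
    """
    base, recent = set(baseline_devices), set(recent_devices)
    union = base | recent
    if not union:
        return 0.0
    return 1 - len(base & recent) / len(union)
```

A long-standing account that suddenly scores near 1.0 is exactly the kind of lateral movement legacy, event-anchored controls miss.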

2. Incorporate external and open-web intelligence into identity decisions

Industrialized fraud exploits the gaps left by internal-only models. High-performing institutions widened their aperture and integrated signals from:

  • digital footprint depth and entropy
  • cross-platform identity reuse
  • domain/phone/email lineage
  • web presence maturity
  • global device networks and associations

These signals exposed synthetics that passed internal checks flawlessly but could not replicate authentic, long-term human activity on the open web.

Identity integrity is now a multi-environment assessment, not an internal verification process.

3. Detect automation explicitly

Most fraud in 2025 exhibited machine-level regularity – predictable timing, optimized retries, stable sequences. Teams that succeeded treated automation as a primary signal, incorporating:

  • micro-timing analysis
  • session-structure profiling
  • velocity and retry pattern detection
  • non-human cadence modeling

Fraud no longer “looks suspicious”; it behaves systematically. Detection must reflect that.

4. Shift from tool stacks to orchestration

Fragmented fraud stacks produced fragmented intelligence. Institutions saw the strongest improvements when they unified:

  • IDV
  • behavioral analytics
  • device and network intelligence
  • OSINT and digital footprint context
  • transaction and account-change data

into a single, coherent decision layer. Data orchestration provided two outcomes legacy stacks could not:

  1. Contextual scoring – identities evaluated across signals, not in isolation
  2. Consistent policy application – reducing false positives and operational drag

The shift isn’t toward more controls; it is toward coordination.

Closing Perspective

Identity controls didn’t fail in 2025 because institutions lacked capability. They failed because the models underpinning those controls were anchored to a world where identity was stable, fraud was manual, and behavioral irregularity differentiated good actors from bad.

In 2025, identity became dynamic and distributed. Fraud became industrialized and system-led.

Institutions that recalibrate their approach now – treating identity as a living system, integrating external context, and unifying decisioning layers – will be best positioned to defend against the operational realities of 2026.

At Heka Global, our platform delivers real-time, explainable intelligence from thousands of global data sources to help fraud teams spot non-human patterns, identity inconsistencies, and early lifecycle divergence long before losses occur.

The New Faces of Fraud: How AI Is Redefining Identity, Behavior, and Digital Risk

Modern fraud has become dynamic, yet most defenses remain static. Learn how to identify the three critical blind spots in today’s fraud stacks and shift toward a model of continuous intelligence.

1. Introduction – Identity Is No Longer a Fixed Attribute

The biggest shift in fraud today isn’t the sophistication of attackers – it’s the way identity itself has changed.

AI has blurred the boundaries between real and fake. Identities can now be assembled, morphed, or automated using the same technologies that power legitimate digital experiences. Fraudsters don’t need to steal an identity anymore; they can manufacture one. They don’t guess passwords manually; they automate the behavioral patterns of real users. They operate across borders, devices, and platforms with no meaningful friction.

The scale of the problem continues to accelerate. According to the Deloitte Center for Financial Services, synthetic identity fraud is expected to reach US $23 billion in losses by 2030. Meanwhile, account takeover (ATO) activity has risen by nearly 32% since 2021, with an estimated 77 million people affected, according to Security.org. These trends reflect not only rising attack volume, but the widening gap between how identity operates today and how legacy systems attempt to secure it.

This isn’t just “more fraud.” It’s a fundamental reconfiguration of what identity means in digital finance – and how easily it can be manipulated. Synthetic profiles that behave like real customers, account takeovers that mimic human activity, and dormant accounts exploited at scale are no longer anomalies. They are a logical outcome of this new system.

The challenge for banks, neobanks, and fintechs is no longer verifying who someone is, but understanding how digital entities behave over time and across the open web.

2. The Blind Spots in Modern Fraud Prevention

Most fraud stacks were built for a world where:

  • identity was stable
  • behavior was predictable
  • fraud required human effort

Today’s adversaries exploit the gaps in that outdated model.

[Image: the blind spots in modern fraud prevention | Artwork generated by Gemini AI]

Blind Spot 1 — Static Identity Verification

Traditional KYC treats identity as fixed. Synthetic profiles exploit this entirely by presenting clean credit files, plausible documents, and AI-generated faces that pass onboarding without friction.

Blind Spot 2 — Device and Channel Intelligence

Legacy device fingerprinting and IP checks no longer differentiate bots from humans. AI agents now mimic device signatures, geolocation drift, and even natural session friction.

Blind Spot 3 — Transaction-Centric Rules

Fraud rarely begins with a transaction anymore. Synthetics age accounts for months, ATO attackers update contact information silently, and dormant accounts remain inactive until the moment they’re exploited.

In short: fraud has become dynamic; most defenses remain static.

3. The Changing Nature of Digital Identity

For decades, digital identity was treated as a stable set of attributes: a name, a date of birth, an address, and a document. The financial system – and most fraud controls – were built around this premise. But digital identity in 2025 behaves very differently from the identities these systems were designed to protect.

Identity today is expressed through patterns of activity, not static attributes. Consumers interact across dozens of platforms, maintain multiple email addresses, replace devices frequently, and leave fragmented traces across the open web. None of this is inherently suspicious – it’s simply the consequence of modern digital life.

The challenge is that fraudsters now operate inside these same patterns.
A synthetic identity can resemble a thin-file customer.
An ATO attacker can look like a user switching devices.
A dormant account can appear indistinguishable from legitimate inactivity.

In other words, the difficulty is not that fraudsters hide outside normal behavior – it is that the behavior considered “normal” has expanded so dramatically that older models no longer capture its boundaries.

This disconnect between how modern identity behaves and how traditional systems verify it is precisely what makes certain attack vectors so effective today. Synthetic identities, account takeovers, and dormant-account exploitation thrive not because they are new techniques, but because they operate within the fluid, multi-channel reality of contemporary digital identity – where behavior shifts quickly, signals are fragmented, and legacy controls cannot keep pace.

4. Synthetic IDs: Fraud With No Victim and No Footprint

Synthetic identities combine real data fragments with fabricated details to create a customer no institution can validate – because no real person is missing. This gives attackers long periods of undetected activity to build credibility.

Fraudsters use synthetics to:

  • open accounts and credit lines,
  • build transaction history,
  • establish low-risk behavioral patterns,
  • execute high-value bust-outs that are difficult to recover.

Why synthetics succeed

  • Thin-file customers look similar to fabricated identities.
  • AI-generated faces and documents bypass superficial verification.
  • Onboarding flows optimized for user experience leave less room for deep checks.
  • Synthetic identities “warm up” gradually, behaving consistently for months.

Equifax estimates synthetics now account for 50–70% of credit fraud losses among U.S. banks.

What institutions must modernize

One-time verification cannot identify a profile that was never tied to a real human. Institutions need ongoing, external intelligence that answers a different question:

Does this identity behave like an actual person across the real web?
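One way to approximate that question – purely illustrative, and far cruder than a real open-web analysis – is to score how many independent sources an identity appears in and how far back those traces go:

```python
from datetime import date

def footprint_depth(traces):
    """`traces` is a list of (source_name, first_seen_date)
    tuples gathered from the open web. Genuine identities tend
    to show many independent sources spanning years; machine-
    assembled ones are typically shallow and recent.
    """
    if not traces:
        return 0.0
    sources = {s for s, _ in traces}
    dates = [d for _, d in traces]
    span_years = (max(dates) - min(dates)).days / 365.25
    return len(sources) * (1 + span_years)
```

A synthetic profile fabricated last quarter can pass document checks, but it cannot retroactively plant a decade of dated traces across unrelated sources.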

5. Account Takeover: When Verified Identity Becomes the Attack Surface

Account takeover (ATO) is particularly difficult because it begins with a legitimate user and legitimate credentials. Financial losses tied to ATO continue to grow. VPNRanks reports a sustained increase in both direct financial impact and the volume of compromised accounts, further reflecting how identity-based attacks have become central to modern fraud.

[Chart: financial losses tied to ATO, 2022–2025]

Fraudsters increasingly use AI to automate:

  • credential-stuffing attempts,
  • session replay and friction simulation,
  • device and browser mimicry,
  • navigation patterns that resemble human users.

Once inside, attackers move quickly to secure control:

  • updating email addresses and phone numbers,
  • adding new devices,
  • temporarily disabling MFA,
  • initiating transfers or withdrawals.

Signals that matter today

Early indicators are subtle and often scattered:

  • Email change + new device within a short window
  • Logins from IP ranges linked to synthetic identity clusters
  • High-velocity credential attempts preceding a legitimate login
  • Sudden extensions of the user’s online footprint
  • Contact detail changes followed by credential resets

The issue is not verifying credentials; it is determining whether the behavior matches the real user.
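Several of these indicators are correlations across events rather than single alerts. As a sketch – the event names and the 24-hour window are assumptions for illustration, not a standard – a rule that flags a contact-detail change followed shortly by a new device or credential reset might look like:

```python
from datetime import datetime, timedelta

def ato_risk_events(events, window=timedelta(hours=24)):
    """`events` is a time-ordered list of (timestamp, kind)
    pairs, where kind is e.g. "email_change", "new_device",
    or "credential_reset". Flags any email change followed by
    a new device or credential reset within `window` - one of
    the correlated patterns described above.
    """
    flagged = []
    for i, (t1, k1) in enumerate(events):
        if k1 != "email_change":
            continue
        for t2, k2 in events[i + 1:]:
            if t2 - t1 > window:
                break  # events are time-ordered; stop scanning
            if k2 in ("new_device", "credential_reset"):
                flagged.append((t1, k2))
    return flagged
```

Each event alone looks routine; it is the sequence inside a tight window that carries the signal.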

6. Dormant Accounts: The Silent Fraud Vector

Dormant or inactive accounts, once considered low-risk, have become reliable targets for fraud. Their inactivity provides long periods of concealment, and they often receive less scrutiny than active accounts. This makes them attractive staging grounds for synthetic identities, mule activity, and small-value laundering that can later escalate.

Fraudsters use dormant accounts because they represent the perfect blend of low visibility and high permission: the infrastructure of a legitimate customer without the scrutiny of an active one.

Why dormant ≠ low-risk

Dormant accounts are vulnerable because of their inactivity – not in spite of it.

  • They bypass many ongoing monitoring rules.
    Most systems deprioritize accounts with no transactional activity.
  • Attackers can prepare without triggering alerts.
    Inactivity hides credential testing, information gathering, and initial contact-detail changes.
  • Reactivation flows are often weaker than onboarding flows.
    Institutions assume returning customers are inherently trustworthy.
  • Contact updates rarely raise suspicion.
    A fraudster changing an email or phone number on a dormant account is often treated as routine.
  • Fraud can accumulate undetected for long periods.
    Months or years of dormancy create a wide window for planning, staging, and lateral movement.

Better defenses

Institutions benefit from:

  • refreshing identity lineage at the moment of reactivation,
  • updating digital-footprint context rather than relying on historical data,
  • linking dormant accounts to known synthetic or mule clusters.

Dormant ≠ safe. Dormant = unobserved.
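A reactivation policy along these lines can be expressed as a simple step-up ladder. The dormancy thresholds and check names below are invented for illustration:

```python
from datetime import date, timedelta

def reactivation_checks(last_active, today, contact_changed_recently):
    """Hypothetical policy sketch: the longer the dormancy, the
    more identity context gets refreshed at reactivation.
    Returns the list of step-up checks to run."""
    dormancy = today - last_active
    checks = []
    if dormancy > timedelta(days=180):
        checks.append("refresh_digital_footprint")
    if dormancy > timedelta(days=365):
        checks.append("re_verify_identity_lineage")
    if contact_changed_recently:
        checks.append("out_of_band_confirmation")
    return checks
```

The design choice is that reactivation friction scales with how long the account has been unobserved, instead of treating every returning customer as inherently trustworthy.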

7. How Modern Fraud Actually Operates (AI + Lifecycle)

Fraud today is not opportunistic. It is operational, coordinated, and increasingly automated.

How AI amplifies fraud operations

AI enables fraudsters to automate tasks that were once slow or manual:

  • Identity creation: synthetic faces, forged documents, fabricated businesses
  • Scalable onboarding: bots submitting high volumes of applications
  • Behavioral mimicry: friction simulation, geolocation drift, session replay
  • Customer-support evasion: LLM agents bypassing KBA or manipulating staff
  • OSINT mining: automated scraping of breached data and persona fragments

This automation feeds into a consistent operational lifecycle.

The modern fraud lifecycle
  1. Identity Fabrication
    AI assembles identity components designed to pass onboarding.
  2. Frictionless Onboarding
    Attackers target institutions with low-friction digital processes.
  3. Seasoning or Dormancy
    Accounts age quietly, building legitimacy or remaining inactive.
  4. Account Manipulation
    Email, phone, and device updates prepare the account for monetization.
  5. Monetization & Disappearance
    Funds move quickly – often across jurisdictions – before detection.

Most institutions detect fraud in Stage 5. Modern prevention requires detecting divergence in Stages 1–4.

8. Rethinking Defense: From Static Checks to Continuous Intelligence

Fraud has evolved from discrete events to continuous identity manipulation. Defenses must do the same. This shift is fundamental:

[Image: legacy vs. modern fraud defense | Artwork generated by Gemini AI]

Institutions must understand identity the way attackers exploit it – as something dynamic, contextual, and shaped by behavior over time.

9. Conclusion

Fraud is becoming faster, more coordinated, and scaling at levels never seen before. Institutions that adapt will be those that begin viewing it as a continuously evolving system.

Those that win the next phase of this battle will stop relying on static checks and begin treating identity as something contextual and continuously evolving.

That requires intelligence that looks beyond internal systems and into the open web, where digital footprints, behavioral signals, and online history reveal whether an identity behaves like a real person, or a synthetic construct designed to exploit the gaps.

At Heka Global, our platform delivers real-time, explainable intelligence from thousands of global data sources to help fraud teams spot non-human patterns, identity inconsistencies, and early lifecycle divergence long before losses occur.

In an AI-versus-AI world, timing is everything. The earlier your system understands an identity, the sooner you can stop the threat.

Data. It's What's for Dinner.


What is data? Loosely defined, data is facts or statistics collected together for reference or analysis. That definition does a poor job of painting a picture to show you what data is, where you can find it, and what to do with it. It’s like the instructions competitors in “The Great British Bake-Off” get during their technical challenge, where they are told to “make bread” or “bake” and not given any additional information. But any collection operation that intends to be around for more than a few months needs to know a lot more about data than that definition offers. It needs to be eating it, sleeping with it, taking it to meet its parents.

Data should be influencing every decision you make - whom to call and when, what to put in the body or subject line of an email, where to put the “Make a Payment” button on your portal. Data has become a critical asset for making informed decisions and optimizing recovery strategies. As a company focused on helping organizations analyze their data, we’ve seen firsthand how leveraging the right information can drastically improve collection rates and operational efficiency. However, with the vast amount of data available, it can be challenging to identify which data points are truly essential to your collection operation. In this blog post, we'll explore the most important data points you should focus on, how AI in collections and predictive analytics can enhance their value, and provide actionable insights to help you take your collection strategies to the next level.

What makes this such a difficult topic is that it is very much a “your mileage may vary” situation. The data you have access to, and that matters to you, will differ from the data your peers and colleagues have access to and care about. It’s not exactly a snowflake situation, but the nuances and idiosyncrasies of different collection platforms and appetites for risk will change the quality and quantity of data inside a collection operation.

Generally speaking, here are some areas that will yield data you can put to good use. Some of this you may have access to and some of it you may not, but you can use this as a departure point.

Customer Financial Profile

  • Credit score
  • Income level
  • Employment status
  • Debt-to-income ratio

Payment History

  • Past payment patterns
  • Frequency of late payments
  • Average payment amounts

Communication Preferences

  • Preferred contact methods
  • Best times to reach the customer
  • Response rates to different communication channels

Behavioral Data

  • Website interactions
  • Call center engagement
  • Response to different collection approaches

Debt Information

  • Type of debt
  • Age of debt

Now that we've identified some key data points, let's explore how to effectively collect and utilize this information:

  • Implement a robust CRM system: Centralize your data collection efforts by using a Customer Relationship Management (CRM) system tailored for the collections industry. This will help you track customer interactions, payment histories, and communication preferences in one place.
  • Leverage alternative data sources: Look beyond traditional credit reports. Utilize public records, social media data, and other alternative sources to build a more comprehensive customer profile. For example, our web intelligence technology allows customers to access alternative data sources and build dynamic and comprehensive consumer profiles.
  • Invest in data analytics tools: Employ machine learning for debt recovery to identify patterns and trends in your data that human analysis might miss. Learn how AI is revolutionizing debt recovery and helping companies identify crucial data patterns.
  • Train your team on data literacy: Ensure that your collection agents understand the importance of accurate data entry and how to interpret data-driven insights.
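Even a basic aggregation over CRM interaction logs can start answering the "preferred contact methods" question. A minimal sketch, assuming each logged interaction records its channel and whether the customer responded:

```python
from collections import defaultdict

def best_channel(interactions):
    """`interactions` is a list of (channel, responded) pairs
    pulled from a CRM export. Returns the channel with the
    highest observed response rate - a first step toward the
    communication-preference data points listed above."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for channel, responded in interactions:
        totals[channel] += 1
        hits[channel] += 1 if responded else 0
    return max(totals, key=lambda c: hits[c] / totals[c])
```

A real model would control for timing, debt type, and sample size, but even this level of analysis beats calling everyone at 9 a.m.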

Identifying the data points that most improve your productivity and efficiency is crucial to the success of your collection operation. Look at the data your operation is creating, talk to others about what data they are using, and work with your vendors to create reports that yield actionable insights you can put to work today.


Heka Raises $14M to Bring Real-Time Identity Intelligence to Financial Institutions

Windare Ventures, Barclays and other institutional investors back Heka’s AI engine as financial institutions seek stronger defenses against synthetic fraud and identity manipulation.

FOR IMMEDIATE RELEASE


New York, 15 July 2025

Consumer fraud is at an all-time high. Last year, losses hit $12.5 billion – a 38% jump year-over-year. The rise is fueled by burner behavior, synthetic profiles, and AI-generated content. But the tools meant to stop it – from credit bureau data to velocity models – miss what’s happening online. Heka was built to close that gap.

Inspired by the tradecraft of the intelligence community, Heka analyzes how a person actually behaves and appears across the open web. Its proprietary AI engine assembles digital profiles that surface alias use, reputational exposure, and behavioral anomalies. This helps financial institutions detect synthetic activity, connect with real customers, and act faster with confidence.

At the core of Heka’s web intelligence engine is an analyst-grade AI agent. Unlike legacy tools that rely on static files, scores, or blacklists, Heka’s AI processes large volumes of web data to produce structured outputs like fraud indicators, updated contact details, and contextual risk signals. In one recent deployment with a global payment processor, Heka’s AI engine caught 65% of account takeover losses without disrupting healthy user activity.

Heka is already generating millions in revenue through partnerships with banks, payment processors, and pension funds. Clients use Heka’s intelligence to support critical decisions, from fraud mitigation to account management and recovery. The $14 million Series A round, led by Windare Ventures with participation from Barclays, Cornèr Banca, and other institutional investors, will accelerate Heka’s U.S. expansion and deepen its footprint across the UK and Europe.

“Heka’s offering stood out for its ability to address a critical need in financial services – helping institutions make faster, smarter decisions using trustworthy external data. We’re proud to support their continued growth as they scale in the U.S.,” said Kester Keating, Head of US Principal Investments at Barclays.
Ori Ashkenazi, Managing Partner at Windare Ventures, added: “Identity isn’t a fixed file anymore. It’s a stream of behavior. Heka does what most AI can’t: it actually works in the wild, delivering signals banks can use seamlessly in workflows.”

Heka was founded by Rafael Berber, former Global Head of Equity Trading at Merrill Lynch; Ishay Horowitz, a senior officer in the Israeli intelligence community; and Idan Bar-Dov, a fintech and high-tech lawyer. The broader team includes intel analysts, data scientists, and domain experts in fraud, credit, and compliance.

“The credit bureaus were built for another era. Today, both consumers and risk live online. Heka’s mission is to be the default source of truth for this new digital reality – always-on, accurate, and explainable,” said Idan Bar-Dov, Co-founder and CEO of Heka.

About Heka
Heka delivers web intelligence to financial services. Its AI engine is used by banks, payment processors, and pension funds to fill critical blind spots in fraud mitigation, credit decisioning, and account recovery. The company was founded in 2021 and is headquartered in New York and Tel Aviv.

Press contact
Joy Phua Katsovich, VP Marketing | joy@hekaglobal.com

Ready to See What Others Miss?

Let’s help you get started.
Talk To Us
If you’d like to contact us directly, you can email us at info@hekaglobal.com.