$442 Billion Lost to Fraud in One Year. Big Tech Just Signed a Pledge to Help. It's Voluntary.
Interpol's new report reveals $442 billion in global financial fraud losses in 2025, driven by AI-powered scams. The same week, Google, Meta, Amazon and OpenAI signed a voluntary anti-scam accord with no enforcement. Here's what that gap looks like from around the world.

Two things happened in the same week that, placed side by side, tell you everything about the state of online fraud in 2026.
On Monday, Interpol released its Global Financial Fraud Threat Assessment. The headline: financial fraud drained $442 billion from the global economy in 2025. AI-powered scams surged 1,210 percent in a single year. The agency rated the risk for 2026 as "high."
On Sunday, eight of the world's largest technology companies — Google, Microsoft, Meta, Amazon, OpenAI, LinkedIn, Adobe, and Match Group — signed something called the Online Services Accord Against Scams. It promises shared threat intelligence, fraud detection tools, and best practices.
It is entirely voluntary. There are no penalties for noncompliance.
The Scale of What's Being Stolen
The Interpol number — $442 billion — is almost impossible to process. It's larger than the GDP of 160 countries. It's more than the entire global humanitarian aid budget. It exceeds what the world spent responding to the 2004 Indian Ocean tsunami, the 2010 Haiti earthquake, and every major disaster since — combined.
And it's accelerating. AI-driven scams grew at six times the rate of traditional fraud in 2025. Deepfake video impersonation, AI voice cloning, and synthetic identity creation have made it possible for criminal operations to run at industrial scale with tiny workforces.
Deepfake-related schemes alone cost victims $1.1 billion worldwide in 2025, according to Surfshark research cited by Euronews. In the US, AI-driven deepfakes caused over $3 billion in losses between January and September 2025. Nearly half of those incidents used celebrity likenesses to lend scams credibility.
The trajectory is clear. In the US alone, projected losses from AI-enabled fraud could reach $40 billion by 2027.
The Human Trafficking Engine
Behind the statistics is something far darker than clever software.
A UN report published in February found that at least 300,000 people are trapped in scam compounds across Southeast Asia — primarily in Cambodia, Myanmar, and Laos. These aren't willing participants. They're victims of human trafficking, lured by fake job offers, stripped of their passports, and forced to run scam operations from guarded compounds.
The so-called "pig butchering" model — where scammers build long-term relationships with victims before extracting money through fake cryptocurrency investments — has become the dominant fraud type globally. The operations are sophisticated, multilingual, and increasingly AI-assisted.
The US Department of Justice seized $61 million in Tether cryptocurrency linked to pig butchering operations in February alone. That seizure represents a fraction of a fraction of what's flowing through these networks.
This is the context in which eight tech companies signed a voluntary pledge.
What the Accord Actually Promises
The Online Services Accord Against Scams, first reported by Axios, commits its signatories to several broad goals: adding fraud detection tools, introducing user security features, requiring stronger verification for financial transactions, and sharing threat intelligence between companies and with law enforcement.
On the policy side, the coalition will ask governments to "declare scam prevention a national priority."
Some of these companies already have programs in place. Meta recently rolled out features across Facebook, Messenger, and WhatsApp to flag suspicious accounts. LinkedIn introduced verification requirements for recruiters and executives to reduce job scams. Google has fraud detection tools built into its advertising platform.
The question isn't whether these companies are doing something. It's whether what they're doing matches the scale of the problem.
The Gap Between Promise and Enforcement
Here is the core problem with voluntary accords in a $442 billion crisis: there is no mechanism to ensure they work.
The accord includes no penalties. No timelines. No external audits. No independent body to verify whether signatories follow through. If Google decides next year that a particular fraud detection tool is too expensive or too burdensome for advertisers, nothing in this agreement prevents the company from quietly dropping it.
This pattern is familiar. The tech industry has a long history of self-regulation that looks good in press releases and evaporates under commercial pressure. Voluntary commitments on election integrity, misinformation, and data privacy have followed the same arc: announcement, praise, slow erosion, repeat.
The companies that signed this accord are the same companies whose platforms host the scams. Meta's Facebook and Instagram are among the top vectors for romance scams and fake investment schemes. Google's search results and ad network serve scam content. Amazon's marketplace hosts fraudulent sellers. Match Group's dating apps are primary hunting grounds for pig butchering operations.
They are signing a pledge to combat a crisis they profit from. The accord never mentions this structural conflict of interest.
Where the Money Goes
The geography of online fraud follows a predictable pattern. Money flows from wealthy countries to criminal networks, often through cryptocurrency, and into financial systems poorly equipped to trace or recover it.
The victims are disproportionately in the United States, Europe, and wealthy Asia-Pacific nations. But the human cost is heaviest in the countries where scam operations are based. In Myanmar, Cambodia, and Laos, trafficking victims face beatings, torture, and in some cases death for failing to meet fraud quotas.
Interpol's report warns that AI is accelerating every phase of this cycle. Voice cloning makes phone scams more convincing. Deepfakes enable video calls where victims believe they're speaking to real bank officials, government employees, or romantic partners. Large language models generate convincing messages in any language, eliminating the broken English that once served as an early warning sign.
The criminals are adopting AI faster than the companies that built it.
What Regulation Looks Like Elsewhere
The European Union's Digital Services Act, which took full effect in 2024, imposes mandatory obligations on large platforms to address illegal content, including scam advertising. Noncompliance carries fines of up to 6 percent of global revenue. The UK's Online Safety Act includes provisions targeting fraudulent content with regulatory enforcement.
Neither framework is perfect. But they share something the tech industry's voluntary accord lacks: consequences.
Australia has proposed mandatory scam prevention obligations for banks and platforms. Singapore's anti-scam framework requires financial institutions to share liability for fraud losses. India's cybercrime reporting system processes millions of complaints annually.
The US, where most of the accord's signatories are headquartered, has no equivalent federal framework for platform liability in fraud cases.
The Pattern That Keeps Repeating
In 2019, major tech companies signed the Christchurch Call, a voluntary commitment to combat terrorist content online after the mosque shootings in New Zealand. Researchers later found that enforcement was uneven, that many platforms failed to meet their commitments, and that the agreement had limited measurable impact.
In 2020, Facebook, Google, and Twitter signed up to the EU Code of Practice on Disinformation. Subsequent audits found inconsistent application and significant gaps in enforcement.
Voluntary accords serve a function. They create norms. They signal intent. They generate headlines. But when the problem is worth $442 billion a year and growing at quadruple-digit rates, norms without teeth aren't enough.
What Would Actually Work
Researchers and law enforcement officials have been clear about what's needed: mandatory reporting requirements for platforms when they detect scam networks; shared liability that makes companies financially responsible when their tools are used to defraud people; investment in law enforcement capacity to dismantle trafficking-based scam operations in Southeast Asia; and real-time information sharing that isn't optional.
The technology exists. The companies have the resources. The Interpol data makes the urgency undeniable.
What's missing is the part that voluntary accords can't provide: the willingness to impose costs on the platforms whose business models make fraud easy and enforcement hard.
Eight of the world's most powerful companies just promised to fight a $442 billion problem. They asked for no penalties if they fail. In the world of online fraud, that's called a confidence game.
Sources & Verification
Based on 5 sources from 3 regions
- Interpol / CNBC TV18 (International)
- Engadget (North America)
- The Register (Europe)
- Forbes (North America)
- Euronews (Europe)