The Numbers Are Worse Than You Think
KPMG Canada's 2026 fraud survey didn't bury the lede. Eighty-one percent of surveyed companies experienced AI-enabled attacks in the past 12 months. The attack types are familiar to security teams but newly accessible at scale: deepfake audio and video, voice-cloned executive impersonation calls, and AI-generated phishing.
These attacks work because of the trust gap. Employees are trained to trust a phone call from the CFO. They act when an executive sounds urgent. AI exploits that trust with synthetic voices and video that human detection can't reliably catch. Human accuracy at spotting deepfakes hovers around 55-60%, barely better than a coin flip.
The financial damage is real. Losing up to 5% of annual revenue to fraud isn't a rounding error. For a mid-sized company doing $50 million in revenue, that's up to $2.5 million gone. For a large enterprise, the exposure is much larger.
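The arithmetic behind that exposure figure is worth making explicit. A minimal sketch, using the "up to 5% of annual revenue" ceiling cited above; the revenue figures are illustrative, not survey data:

```python
# Illustrative only: upper-bound fraud exposure at the commonly cited
# "up to 5% of annual revenue" rate. Inputs are hypothetical examples.
def fraud_exposure(annual_revenue: float, loss_rate: float = 0.05) -> float:
    """Upper-bound annual fraud exposure in dollars."""
    return annual_revenue * loss_rate

mid_market = fraud_exposure(50_000_000)    # $50M revenue company
print(f"${mid_market:,.0f}")               # → $2,500,000
```

The same one-liner scales the point to any company size: at $1 billion in revenue, the ceiling is $50 million.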
Only 1 in 4 Organizations Has a Real Plan
Only 26% of organizations surveyed have a tested response plan that covers AI-enabled attacks like deepfakes and voice clones. That means three out of four companies are improvising, responding to sophisticated AI-powered attacks with processes built for a pre-AI world.
A voice cloning attack doesn't wait for your incident response team to convene. A deepfake video of your CEO authorizing a wire transfer doesn't pause while you check your playbook. These attacks move fast. The organizations that survive them have verification infrastructure in place before the call comes in.
What AI Fraud Actually Looks Like in Practice
In February 2024, engineering firm Arup lost $25 million when a finance employee joined a video call where every other participant, including the CFO, was a deepfake. The employee wired the funds believing they had received direct authorization from company leadership.
That case became the benchmark for executive impersonation via AI video. Since then, voice cloning has become even more accessible. Attackers need just 20-30 seconds of publicly available audio to clone an executive's voice with convincing fidelity. Earnings calls, conference appearances, podcast interviews: all of it feeds the model.
The KPMG findings confirm this isn't isolated. It's a pattern affecting companies across industries, and the majority of victims had no tested plan in place when the attack arrived.
The Verification Gap Is the Real Vulnerability
Most organizations focus their fraud defenses on detection: spam filters, anomaly monitoring, security awareness training. Detection is necessary, but against AI-generated content, detection alone is losing the race.
The faster fix is verification. Instead of asking whether something is real after the fact, organizations need a way to confirm authenticity before acting. That's the problem the AI Defense Suite was built to solve.
Prove a Real Person Is Behind the Communication
Proof of Life, part of the AI Defense Suite, lets executives and employees create biometric-verified selfies called Proofies. A Proofie isn't a regular photo. It requires Face ID or Touch ID to create, meaning a living human was physically present and verified at the moment of capture. A cryptographic timestamp locks in exactly when it was taken, and anyone can verify the result at proof.proofoflife.io without an app or account.
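To see why a cryptographic timestamp matters, here is a hedged sketch of the general pattern: a signed record that binds an image hash to a capture time, so that neither can be altered after the fact. This illustrates the concept only; the key name and record layout are assumptions, not Proof of Life's actual implementation:

```python
# Hypothetical sketch of a signed capture record. The device key, field
# names, and HMAC scheme are assumptions for illustration; this is NOT
# how Proof of Life actually implements its cryptographic timestamp.
import hashlib
import hmac
import json
import time

DEVICE_KEY = b"per-device secret released after biometric check"  # assumption

def sign_capture(image_bytes: bytes) -> dict:
    """Bind an image hash and capture time together under one signature."""
    record = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "captured_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_capture(image_bytes: bytes, record: dict) -> bool:
    """Recompute the signature; any change to image or time breaks it."""
    expected = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "captured_at": record["captured_at"],
    }
    payload = json.dumps(expected, sort_keys=True).encode()
    sig = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, record.get("signature", ""))
```

The design point: because the signature covers both the image hash and the timestamp, an attacker can't swap in a different photo or backdate the capture without invalidating the record.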
For executive communications, the use case is direct. Before a sensitive instruction goes out, before a wire transfer gets authorized, before a major decision gets communicated down the chain, the executive sends a Proofie. Anyone who receives it can confirm the image was taken by a real, biometrically verified person at a specific time.
AI can clone a voice. AI can generate a video. AI cannot fake a biometric scan tied to a specific device at a specific moment. Proofies close that gap.
Proof of Life is free to download and available on the iOS App Store and Google Play Store.
Protect the Channels Where Attacks Arrive
Deepfake impersonation calls and AI-generated phishing don't arrive through a single channel. They come through email, SMS, WhatsApp, Slack, and every other platform your teams use daily. That's where Agent Safe comes in.
Agent Safe is a nine-tool security suite from the AI Defense Suite, built to protect teams and AI agents from phishing, business email compromise, and social engineering across any messaging platform. When a suspicious message arrives claiming to be from an executive, Agent Safe can check sender reputation, scan URLs, detect prompt injection in AI workflows, and flag manipulation patterns before anyone acts on the message.
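The checkpoint pattern described above can be sketched in a few lines: screen an inbound message for stacked risk signals before a human acts on it. The heuristics, names, and thresholds here are deliberately simple assumptions for illustration; they are not Agent Safe's actual detection logic:

```python
# Illustrative checkpoint between an inbound message and the recipient.
# All heuristics and thresholds are hypothetical, not Agent Safe's logic.
import re
from dataclasses import dataclass, field

URGENCY = re.compile(r"\b(urgent|immediately|right now|confidential)\b", re.I)
MONEY = re.compile(r"\b(wire|transfer|payment|gift cards?)\b", re.I)

@dataclass
class Verdict:
    flags: list = field(default_factory=list)

    @property
    def suspicious(self) -> bool:
        # Require multiple independent signals before escalating.
        return len(self.flags) >= 2

def screen(sender_domain: str, body: str, trusted_domains: set) -> Verdict:
    """Flag unknown senders, urgency pressure, and money-movement asks."""
    verdict = Verdict()
    if sender_domain not in trusted_domains:
        verdict.flags.append("unknown-sender")
    if URGENCY.search(body):
        verdict.flags.append("urgency-pressure")
    if MONEY.search(body):
        verdict.flags.append("money-movement")
    return verdict
```

A lookalike-domain message demanding an urgent wire trips three flags at once; a routine note from a trusted domain trips none. The multi-signal threshold is the point: single indicators are noisy, but executive-impersonation attacks tend to stack urgency, money, and an unfamiliar channel together.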
For the 81% of companies already experiencing AI-enabled attacks, adding Agent Safe across communication channels puts a checkpoint between the attacker and the employee who might otherwise comply.
Learn more at agentsafe.aidefensesuite.com.
What Organizations Should Do Right Now
The KPMG data points to a clear action plan. Organizations don't need to solve everything at once. They need to close the verification gap first.
Step 1: Establish a verification protocol for high-stakes communications. Any instruction involving money movement, access credentials, or sensitive data should require a verified confirmation, not just a voice or video call.
Step 2: Deploy biometric verification for executive communications. Proof of Life gives leadership a simple, free tool to attach a tamper-proof, biometrically verified proof to any instruction or announcement. Roll it out to the C-suite and finance team first.
Step 3: Protect inbound communication channels. AI-generated phishing is part of the same attack ecosystem as deepfakes and voice clones. Agent Safe screens messages across platforms before they reach the people most likely to act on them.
Step 4: Build and test a response plan. The 26% of companies with a tested response plan are far better positioned than the other 74%. Document your verification protocols. Run simulations. The plan you build before an attack is the one that works during one.
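Step 1's verification protocol reduces to a policy gate that is easy to state precisely: a high-stakes request proceeds only when it has been confirmed on at least one channel other than the one it arrived on. A minimal sketch, with request types and channel names as assumptions:

```python
# Minimal sketch of an out-of-band verification gate (Step 1).
# Request categories and channel names are illustrative assumptions.
HIGH_STAKES = {"wire_transfer", "credential_change", "data_export"}

def may_proceed(request_type: str, arrival_channel: str,
                confirmed_channels: set) -> bool:
    """Allow low-stakes requests; require high-stakes requests to be
    confirmed on a channel other than the one they arrived on."""
    if request_type not in HIGH_STAKES:
        return True
    return bool(confirmed_channels - {arrival_channel})
```

Note what this rule does to the Arup-style attack: a wire request that arrives on a video call and is "confirmed" on that same video call still fails the gate, because a deepfaked channel can always confirm itself. Only an independent channel, a callback to a known number or a verified Proofie, opens it.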
The Window to Act Is Narrow
AI fraud tools are getting cheaper and more accessible every month. The gap between what attackers can do and what most organizations have deployed to stop them keeps widening, and the KPMG survey puts hard numbers to it.
Organizations that close that gap first will have verification infrastructure in place when the next impersonation call arrives. The ones that wait will keep improvising, and improvising against AI-powered fraud is an expensive habit.
Truth needs proof. The AI Defense Suite gives organizations the tools to provide it.