
A few weeks ago, in a glassy downtown branch that could have been Toronto or Vancouver (the furniture all looks like it came from the same catalogue now), a teller pointed to the small sign about “protecting your identity” and laughed, not because it was funny, but because it felt quaint. Card theft, she said, is no longer the real issue. The real issue is someone posing as you: grinning on camera, answering questions in your rhythm, and performing just well enough to move a loan application along.
Canadian banks seem to be trying to discuss this without frightening anyone. But the phrase “deepfake lending” lands hard because it says the quiet part out loud: the system increasingly runs on remote trust, and remote trust is being forged.
| Item | Details |
|---|---|
| Topic | Rising “deepfake lending” fraud attempts targeting Canadian banks and lenders |
| Where it’s showing up | Digital onboarding, remote identity checks, video/voice verification, “instant” lending products |
| Why it’s growing | Cheaper AI tools, more remote banking habits, more personal data floating around, faster loan decisions |
| Common ingredients | Synthetic or stolen identity data + AI face/voice impersonation + urgency + convincing “official” language |
| What authorities are seeing | Canadian Anti-Fraud Centre (CAFC) reports increased deepfake-related reporting and warns deepfake videos often impersonate trusted figures |
| Scale of the broader fraud problem | CAFC reported over $638 million in fraud losses in 2024 and 108,878 fraud reports |
| What banks/regulators worry deepfakes enable | Identity theft, bypassing controls, scam escalation; OSFI/FCAC note deepfakes and voice spoofing as financial-sector risks |
| One credible reference | Forensiq.ca |
Regarding the direction of travel, the Canadian Anti-Fraud Centre has been direct. It has cautioned that fraudsters are using deepfake technology to impersonate politicians, celebrities, and news anchors in order to borrow credibility quickly, typically on social media platforms, where the content can spread before anyone notices.
The lending twist is that the same strategies can be used to sell a phony “borrower” in addition to phony investments, particularly when approvals are scheduled to take place in a matter of minutes.
How much of this is successful fraud, as opposed to attempted fraud that gets stopped, is still unknown. The distinction matters because the industry can look largely under control even while the warning signs accumulate: how frequently banks update their scam alert pages, and how quickly the terminology shifts from “phishing” to “impersonation” to “AI-driven deception.”
The fraud baseline in Canada is already unattractive. According to the CAFC, 108,878 fraud reports were filed in 2024, and Canadians lost over $638 million to fraud. These figures come with an unsettling footnote: reporting rates are generally thought to be low.
Seeing those numbers in the open makes it difficult to ignore how fraud has evolved into a sort of background tax on contemporary life, one that is paid for with money, time, and humiliation.
One significant way that deepfakes differ from more traditional scams is that they do more than just convince you. They are able to pose as you. In their joint examination of AI applications and risks in federally regulated financial institutions, OSFI and the FCAC specifically identify voice spoofing and deepfakes as instruments that can facilitate identity theft, get around security measures, or commit fraud. That isn’t advertising copy.
That’s a regulator acknowledging that “verification” itself is under attack.
Since lending is a combination of narrative and mathematics, this is where things start to feel personal for banks. A borrower arrives with a story that sounds credible, such as a new job, moving expenses, debt consolidation, or unforeseen bills. The story can be polished with generative AI, and the storyteller can be made up with cloned voice or deepfake video.
The dullest applications—a neatly presented applicant, a steady voice during a call, a brief selfie, or a scanned document that passes the initial automated checks—may be the most risky.
Fraudsters do not have to break through every barrier. The overconfident “instant decision” funnel, the hurried agent at the end of a long shift, or the moment a customer is encouraged to disregard a warning because the person on the screen appears comforting are the only gaps they require. Additionally, there is a subtle but pervasive suspicion that we have taught people to believe the wrong signals—a polished video, a self-assured voice, or a logo in the corner of a page.
The public is becoming more anxious. According to a recent poll cited by TD, 82% of Canadians believe scams are more difficult to identify, and 75% believe AI has made them more susceptible to financial fraud. If you’ve ever watched people squint at their phones in coffee shops, pause over a text that “seems official,” and then tap anyway because life is busy and the message says the account will be frozen, you’ll understand why those numbers seem plausible.
The industry’s response is beginning to resemble an ecosystem battle rather than a battle fought solely by banks. In their description of a cross-sector “Canadian Anti-Scam Coalition,” the Canadian Bankers Association advocates for coordinated education and cooperation among tech, telecom, and financial services. Additionally, BMO has presented the coalition push as an effort to coordinate against scams that are using deepfakes and artificial intelligence more and more.
Even though it subtly acknowledges something else, that’s the right course of action. Banks can fortify their own barriers, but many scams start upstream, on ad networks, in social media feeds, with spoof numbers, and in locations where attention is bought and sold.
What comes next will probably resemble a small-scale arms race: improved liveness checks, more onboarding friction, and additional “step-up” verification when the system detects anomalies such as strange camera behavior, voice inconsistencies, or unusual device patterns. These measures will irritate real customers, and that is where the tension lies: everyone wants speed, right up until speed becomes a vulnerability for criminals to exploit.
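To make the “step-up” idea concrete, here is a minimal sketch of how a lending flow might escalate friction as risk signals stack up, rather than making a single pass/fail call. Every signal name, threshold, and outcome label below is hypothetical, invented for illustration; no bank’s actual scoring logic is being described.

```python
# Hypothetical sketch of a "step-up" verification gate for a remote
# lending flow. Signal names and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    liveness_score: float      # 0.0-1.0 from a face-liveness check
    voice_match_score: float   # 0.0-1.0 from voice verification
    device_seen_before: bool   # device fingerprint known for this customer
    camera_anomalies: int      # e.g. virtual-camera or replay indicators

def required_verification(s: SessionSignals) -> str:
    """Escalate friction as risk signals accumulate, instead of
    returning a single pass/fail decision."""
    risk = 0
    if s.liveness_score < 0.8:
        risk += 2
    if s.voice_match_score < 0.7:
        risk += 2
    if not s.device_seen_before:
        risk += 1
    risk += 2 * s.camera_anomalies

    if risk == 0:
        return "proceed"            # instant decision stays instant
    if risk <= 2:
        return "step_up_otp"        # extra one-time-passcode challenge
    if risk <= 4:
        return "live_agent_review"  # a human reviews before approval
    return "hold_and_callback"      # bank calls the number on file
```

The design point is the last return value: when everything looks wrong at once, the safest move is to leave the inbound channel entirely and contact the customer through details the bank already holds, which is exactly the cross-checking habit the CAFC recommends to individuals.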
The unsettling recommendation for the average person is to view realism with suspicion.
Presume it is possible to stage the flawless video call. Presume that urgency is a strategy. Use the official channels you initiate for cross-checking, not the ones that were sent to you in a message. In essence, the CAFC’s advice is to treat anything that a public figure seems to endorse as fraudulent until you can show otherwise and to independently confirm information such as phone numbers and URLs.
“Deepfake lending” will likely compel a cultural shift in the banking industry, too, and not a dramatic one: a slower tempo, more questioning, and greater latitude for employees to voice discomfort. The best institutions may not be the ones with the most impressive AI; they may be the ones that keep a measured skepticism in the workflow, even when a face on a screen smiles back like someone you’ve known for years.
