You Can Be Deepfaked Right Now and Lose Millions

The potential dangers of artificial intelligence (AI) have everyone abuzz, including technology leaders Elon Musk and Apple co-founder Steve Wozniak. They have signed a public petition urging OpenAI, the maker of the conversational chatbot ChatGPT, to halt development for six months so that the AI can be “rigorously audited and overseen by independent outside experts.”


While there may be some merit to their concerns about humanity’s future safety, the reality is that AI is already being leveraged today to defraud thousands of companies and individuals. The US Federal Trade Commission (FTC) recently felt the need to issue a warning about an AI-facilitated scam that “sounds like a plot from a science fiction story,” as NPR put it.


Yet science fiction it is not. Deepfake tools, whether AI-facilitated or simply voice imitations crafted with well-written software, enabled scammers to steal approximately $11 million last year. They did it by mimicking the voices of medical professionals, lawyers, business associates, loved ones, and other acquaintances, urgently requesting money transfers.


The FTC warned, “All [the scammer] needs is a short audio clip of your family member’s voice – which he could get from content posted online – and a voice-cloning program. When the scammer calls you, he’ll sound just like your loved one.”

Businesses of all sizes and in all manner of industries are quickly falling victim to this new fraud trend.


Take, for instance, the employee of a UK-based energy firm whom a deepfaking scammer convinced he was talking to the company’s CEO; he transferred roughly $250,000 to the attackers. Or the bank manager in Hong Kong who fell victim to deepfaked calls from someone claiming to be the bank director; that fraud resulted in a transfer of $35 million.


Another attack avenue, which the FBI is warning businesses about, is the creation of deepfake “employee candidates” for remote work positions; once the deepfakes are hired, attackers gain access to sensitive information.


Deepfakes’ potential to impact small business owners and executives is especially worrisome, as many have appeared in publicly available videos on sites like Facebook, YouTube, and LinkedIn. Even someone who has never appeared in a video can have their voice “stolen” by fraudsters, whether from a recorded voicemail greeting or through a short call in which the attackers engage the target in brief conversation while recording the whole time.


How do you combat this type of attack? With firm policy. No financial manager in your business should ever be allowed to initiate a financial transaction as a result of an incoming phone call. No transaction over a set amount should be authorized without prior written approval from multiple executives, and a signed request or contract must exist for every transaction request, with no special shortcuts allowed, even for CEOs.
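
As a rough illustration only, here is a minimal Python sketch of how those controls might be encoded in an internal payments workflow. The dollar threshold, the approver count, and all field and function names here are illustrative assumptions, not any real system’s API.

from dataclasses import dataclass, field

# Hypothetical policy check mirroring the controls above.
# Threshold and approver count are illustrative assumptions.
APPROVAL_THRESHOLD = 10_000   # transfers above this need extra sign-off
REQUIRED_APPROVERS = 2        # written approval from multiple executives

@dataclass
class TransferRequest:
    amount: float
    origin: str  # how the request arrived, e.g. "phone" or "signed_form"
    executive_approvals: list[str] = field(default_factory=list)
    signed_document_id: str | None = None

def validate_transfer(req: TransferRequest) -> tuple[bool, str]:
    """Never act on an incoming call alone; require a signed document
    and multi-executive approval for large amounts."""
    if req.origin == "phone":
        return False, "Rejected: transfers may not originate from an incoming call."
    if req.signed_document_id is None:
        return False, "Rejected: a signed request or contract is required."
    if req.amount > APPROVAL_THRESHOLD and len(set(req.executive_approvals)) < REQUIRED_APPROVERS:
        return False, f"Rejected: amounts over ${APPROVAL_THRESHOLD:,} need {REQUIRED_APPROVERS} executive approvals."
    return True, "Approved."

# Example: an urgent wire demanded by a "CEO" over the phone is blocked
# automatically, no matter how convincing the voice sounds.
urgent_call = TransferRequest(amount=250_000, origin="phone")
print(validate_transfer(urgent_call))

The point of a check like this is that it removes judgment calls from the moment of pressure: a convincing voice on the phone cannot satisfy the policy, so the scam fails regardless of how persuasive the deepfake is.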


AI technology such as ChatGPT should give us all pause, but the clear and present danger is deepfake technology that allows attackers to imitate executives and other employees, and it will no doubt become increasingly available and popular among attackers.


Connect with Webcheck Security today to discuss your organization’s security strategy, needs, and how our expert consultants can assist you.

