
Deepfakes Are Getting More Realistic—Here’s What it Means for Online Security

By Rachael Roth

With the generative AI boom comes a host of security concerns and fears of job displacement.

These concerns overlap in a particular sector: “synthetic media.” This includes deepfakes (hyper-realistic, AI-generated videos) and voice cloning or audio deepfakes (AI-generated text-to-speech meant to sound like specific people). 

Promoters of synthetic media highlight its advantages: lower production costs for film and advertising, and increased accessibility as creator tools have become less cost-prohibitive and easier to use. Yet the downsides are sobering.

Deepfakes threaten biometric security, including facial and voice recognition technology, which could lead to fraud and impact both individual and national security. Sophisticated deepfakes are harder to detect as false, making way for the spread of misinformation and disinformation in the media. 

Governmental regulation of generative AI and synthetic media, as well as alternatives to biometric security, can help combat these vulnerabilities. 

Here’s how deepfake technology is progressing, what lawmakers are saying, and how to take control of your own security. 

What is a deepfake? 

The term “deepfake” blends “deep learning” with “fake,” and refers to synthetic videos generated by deep learning models. The technology goes far beyond tools like Photoshop and conventional video editors, and it makes creating convincing fake videos much, much faster.

Just as large language models like ChatGPT are trained on publicly available text from across the internet, deepfake models are trained on video sources so they can duplicate the movements of people in real footage. Creators can prompt the tool to substitute one face for another, for example, and the software will refine the video or image until it looks realistic. Voice cloning, or audio deepfakes, uses the same underlying technology, but instead of duplicating movements, the models are trained on audio sources to mimic a speaker’s tone and intonation. The sketch below illustrates the classic face-swap setup.
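
To make that concrete, here is a minimal, illustrative PyTorch sketch of the shared-encoder, per-person-decoder architecture behind early face-swap deepfakes. Everything here is a toy assumption: random tensors stand in for aligned face crops, the networks are tiny, and real pipelines add face detection, alignment, far more data, and often adversarial training.

    import torch
    import torch.nn as nn

    # One shared encoder learns general facial structure; each person gets
    # their own decoder that reconstructs that specific person's face.
    class Encoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            )

        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, z):
            return self.net(z)

    encoder = Encoder()    # shared between both people
    decoder_a = Decoder()  # reconstructs person A
    decoder_b = Decoder()  # reconstructs person B

    params = (list(encoder.parameters()) + list(decoder_a.parameters())
              + list(decoder_b.parameters()))
    optimizer = torch.optim.Adam(params, lr=1e-3)
    loss_fn = nn.MSELoss()

    # Stand-ins for datasets of aligned 64x64 face crops of two people.
    faces_a = torch.rand(8, 3, 64, 64)
    faces_b = torch.rand(8, 3, 64, 64)

    for step in range(100):
        optimizer.zero_grad()
        loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
                + loss_fn(decoder_b(encoder(faces_b)), faces_b))
        loss.backward()
        optimizer.step()

    # The "swap": encode footage of person A, then decode it with person B's
    # decoder, so B's face appears with A's pose and expression.
    swapped = decoder_b(encoder(faces_a))

The swap works because the shared encoder captures pose and expression, while each decoder supplies the identity it was trained on.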

Not only are the results highly advanced, but in some cases, they can be generated in minutes. 

The bad and the ugly: Deepfakes and the spread of misinformation

Many users have created harmless deepfakes just for fun (see: Jim Carrey in The Shining). Brands are experimenting with AI-generated videos as well, and Gartner predicts that by 2025, 30% of outbound marketing messages for major companies will leverage synthetic videos.

But nefarious deepfakes are highly prevalent. In 2017, a Redditor brought deepfake technology to wider public attention when he used AI-driven software to create false pornographic videos that appeared to feature celebrities. According to an article published by The Guardian in 2020, the AI firm Deeptrace found that 96% of deepfake videos online in 2019 were pornographic.

Deepfakes are also used for political subversion, as in a false video of President Biden in which he appears to announce a draft to send American soldiers to Ukraine. Misinformation has long been spread through articles and social media posts; deepfakes are another, more convincing vehicle for false information.

Synthetic media and security

Criminals have already used synthetic media to impersonate others, resulting in substantial fraud. Scammers target businesses by using AI-driven technology to pose as someone from the company and request large transfers.

Back in 2019, a scammer used AI-powered voice technology to convince a CEO to transfer $243,000; in 2020, a company in Hong Kong was duped through voice cloning into authorizing $35 million in fraudulent transfers. Deepfake-related fraud continues to rise: cases doubled between 2022 and 2023 and accounted for 2.6% of fraud in the US in the first quarter of 2023 alone.

These AI-generated attacks target individuals as well: imposter scams accounted for $2.6 billion in consumer losses in 2022, according to the FTC.

Voice cloning can be used to bypass security measures like voice verification, which banks and other sensitive accounts rely on. In February 2023, a reporter for Motherboard described how he cloned his own voice with AI-powered software and used it to access his bank account.

Facial recognition technology is likewise vulnerable to deepfake attacks. The Verge reported that “liveness tests,” a form of facial recognition that checks a user’s face in a camera against their ID photo, can be easily duped by deepfakes. 

The Verge notes that security measures like Apple’s Face ID, which verifies identity based on the shape of a user’s face, can’t be tricked by deepfakes. This type of biometric security checks a user’s face or fingerprint against data stored on the user’s own device rather than in a central security database. These measures also require the user to be physically present and can’t be unlocked with a photo or video.

How to prevent deepfake security attacks 

In addition to using detection software (like Intel’s FakeCatcher) to spot deepfakes, companies can adopt new procedures to protect their assets. For example, a policy against requesting or authorizing transfers over phone or video calls removes a common opening for deepfake scams.

For individuals, The Atlantic suggests establishing code words for friends and family in the event that you receive a call from a scammer posing as a loved one requesting money.  

Alternatives to facial and voice recognition 

Unlike facial and voice recognition, strong passwords and multifactor authentication (MFA) are secure ways to protect your sensitive accounts.
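
To show what the most common MFA second factor looks like in practice, here is a minimal sketch of time-based one-time passwords (TOTP) using the open-source pyotp library (assumes pip install pyotp). It illustrates the standard TOTP scheme, not any particular vendor’s implementation.

    import pyotp

    # The secret is generated once and shared with the user's authenticator
    # app (usually via a QR code); both sides derive codes from it over time.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    code = totp.now()         # 6-digit code that rotates every 30 seconds
    print(code)
    print(totp.verify(code))  # True while the code is in its time window

Because the code changes constantly, a stolen password alone isn’t enough to get into the account.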

Strong passwords remain the best defense against cybersecurity attacks, and there are many steps individuals and businesses can take to ensure that their passwords are unique and hard to crack. 
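
A password manager will generate strong passwords for you, but the underlying idea is simple enough to sketch with Python’s standard library. The length and character set below are illustrative defaults, not any specific product’s rules.

    import secrets
    import string

    # secrets draws from a cryptographically secure random source,
    # unlike the general-purpose random module.
    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def generate_password(length: int = 20) -> str:
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(generate_password())  # a unique, hard-to-guess 20-character string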

Dashlane’s password management solution offers a Password Generator and uses the open-source zxcvbn algorithm to measure password strength, while also allowing you to adjust generated passwords to individual account requirements (e.g., length and use of special characters).
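
zxcvbn itself is open source, and a community Python port makes it easy to see what its scoring looks like (assumes pip install zxcvbn; scores run from 0, weakest, to 4, strongest):

    from zxcvbn import zxcvbn

    result = zxcvbn("Tr0ub4dour&3")
    print(result["score"])                    # 0-4 strength estimate
    print(result["feedback"]["suggestions"])  # concrete hints for improvement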

Businesses can implement single sign-on (SSO). With SSO, employees have fewer passwords to remember, and IT admins gain more control over company security and can monitor for extraneous logins.

Another alternative is passwordless login, which Dashlane has in the works. It uses optional facial recognition (the kind that can’t be fooled by deepfakes) or a unique PIN stored locally on a user’s device, rather than a Master Password. This method resists phishing because accessing the account requires both the device and the user themselves.
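
That phishing resistance comes from public-key challenge-response, the core idea behind passkey-style logins. Here is a hand-rolled sketch of that idea using the cryptography package; it is a simplification for illustration, not Dashlane’s implementation, and real systems use the FIDO2/WebAuthn protocol with keys held in secure hardware.

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Registration: the device generates a key pair; only the public half
    # ever leaves the device, so there is no shared secret to steal.
    device_key = Ed25519PrivateKey.generate()
    registered_public_key = device_key.public_key()

    # Login: the server sends a fresh random challenge, and the device
    # signs it locally (after the user unlocks it with a face scan or PIN).
    challenge = os.urandom(32)
    signature = device_key.sign(challenge)

    # Server side: verify the signature against the registered public key.
    try:
        registered_public_key.verify(signature, challenge)
        print("login ok")
    except InvalidSignature:
        print("login rejected")

Because each challenge is random and the private key never leaves the device, an intercepted signature can’t be replayed for a later login.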

Governmental policy and deepfakes

Concerns over generative AI’s impact on the labor market, as well as on security, have reached Congress. Sam Altman, the chief executive of OpenAI, urged US lawmakers during a Senate hearing in May 2023 to create regulation around generative AI to mitigate potential harm.

Though it’s unclear how US lawmakers will proceed, some countries have already created mandates to address the risks posed by AI. The UK funds research for deepfake detection and is considering a law requiring labeling of AI-generated photos and videos, while nations like China and South Korea have already put comprehensive laws into place that require disclosure of deepfakes, as well as user consent.   

For now, the proliferation and advancement of deepfakes require individuals and businesses to be discerning in order to prevent fraud and identity-based attacks. You can even test your own ability to detect a deepfake through the Detect Fakes website, created as part of an MIT research project, and stay aware of potential phishing and social engineering scams. 

Using alternatives to voice and facial verification, like strong passwords and passwordless logins, can help reduce your risk of cyberattacks.
