AI Fraud Alert: How to avoid falling for deepfakes

Voice cloning, phishing, emergency scams, IRS imposters, fake ads, malware: the Federal Trade Commission reports that consumers have lost $2.7 billion to scams since 2021. And AI-driven deepfake technology is becoming more sophisticated, making it harder and harder to distinguish what's real from a computer-generated scam.

Deepfake scammers use AI algorithms to manipulate videos or images, creating realistic, fraudulent content. It’s important to know the telltale signs of an AI fake.

First, look for inconsistencies in the video. Deepfakes are often created by stitching together footage from different videos, so the lighting, the setting, or the person's appearance may not match from one moment to the next. When the subject turns their head to the side, their face may blur, and facial expressions and eye movements can look unnatural; often the person won't blink.
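For readers curious what checking one of these cues looks like in practice, here is a minimal sketch, not a deepfake detector, that flags abrupt frame-to-frame lighting shifts in a video. It assumes the OpenCV library is installed; the file name and threshold are placeholders, not values from the article.

```python
# Hypothetical sketch: flag abrupt lighting jumps between consecutive frames,
# one of the inconsistencies the article suggests watching for.
# Requires OpenCV (pip install opencv-python numpy).
import cv2
import numpy as np

def flag_lighting_jumps(video_path: str, threshold: float = 15.0) -> list[int]:
    """Return indices of frames whose mean brightness jumps sharply from the previous frame."""
    cap = cv2.VideoCapture(video_path)
    suspicious = []
    prev_brightness = None
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        brightness = float(np.mean(gray))
        if prev_brightness is not None and abs(brightness - prev_brightness) > threshold:
            suspicious.append(index)
        prev_brightness = brightness
        index += 1
    cap.release()
    return suspicious

if __name__ == "__main__":
    # "clip.mp4" is a placeholder path for whatever video you want to inspect.
    print(flag_lighting_jumps("clip.mp4"))
```

A flagged frame is only a hint that something was stitched together or re-lit; real verification still comes down to the human checks described above and the fact-checking tools mentioned below.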

Experts say voices are the easiest thing for AI fraudsters to clone. To increase your security, set up multifactor authentication and never use voiceprints to access accounts. Facial verification is safer, and a one-time text code is the most effective way to protect yourself.
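The article recommends one-time text codes; the sketch below illustrates the closely related app-based version, a time-based one-time password (TOTP), which is what authenticator apps generate. It assumes the pyotp library and is only an illustration of how such codes work, not how any particular service implements them.

```python
# Hypothetical sketch of an app-based one-time code (TOTP).
# Requires pyotp (pip install pyotp). In practice the shared secret is
# provisioned by the service when you enroll, not generated locally like this.
import pyotp

secret = pyotp.random_base32()   # shared secret established at enrollment
totp = pyotp.TOTP(secret)

code = totp.now()                # the 6-digit code an authenticator app would show
print("Current one-time code:", code)

# The service verifies the code against the same secret and time window.
print("Accepted:", totp.verify(code))
```

Because the code changes every 30 seconds and never depends on your voice or face, a cloned voiceprint alone is not enough to get into an account protected this way.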

Free tools can help, too. The “InVID-WeVerify” plugin, developed by the University of Maryland, lets users verify videos. And “Reuters Fact Check” examines claims made on social media and other platforms, with a dedicated section on its website specifically for deepfakes.

The United States has announced the strongest global action yet on AI safety. President Joe Biden signed an executive order that requires artificial intelligence developers to share safety test results with the US government. Among other measures, it also protects consumer privacy and creates a program to evaluate potentially harmful AI-related health care practices. The U.S. is the first country to take such steps.