The November elections are quickly approaching. That means Americans should prepare to be inundated with ads, images, videos, and promotional materials about each candidate. But some of that material may be deepfakes.
Deepfakes are videos, images, or audio clips fabricated with artificial intelligence, and they are becoming increasingly common.
Earlier this year, a robocall faked President Joe Biden’s voice to tell New Hampshire residents not to vote in the state’s primary.
And Donald Trump supporters created an AI-generated image of Black voters in an effort to encourage Black Americans to vote Republican.
Virtually anyone can create these types of phony materials.
But there are some tricks to help identify a deepfake.
First, in a video, the speaker’s voice may not be entirely in sync with their lips, creating an audio-visual mismatch. Unnatural facial expressions or head movements can also indicate that a video is fake.
Deepfakes also often show lighting that doesn’t match the background, producing odd or inconsistent shadows. And it can be a red flag if the speaker is making unusual or out-of-character remarks.
If you think you’ve come across a deepfake on social media, don’t share it. Instead, report the content to the platform, such as Facebook, Instagram, or X.
When policing fake election content, social media platforms can take it down, attach a warning label, or demote it so that fewer users see it. Some oversight board members are pushing for stricter rules on deepfakes.