JACKSONVILLE, Fla. – Is that really Donald Trump and Joe Biden, or is it an image or voice generated by an intelligent computer algorithm?
The social media company Meta is concerned that elections around the world could fall prey to the influence of artificial intelligence.
The issue of altered images has been a topic of discussion at the White House.
“We are alarmed by the reports of the circulation of images that you just laid out. False images, to be more exact. And it is alarming,” White House press secretary Karine Jean-Pierre said.
Meta is working with other companies to create industry-standard invisible watermarks to identify any image created by artificial intelligence tools. With election season a top priority, Meta is promising to label any content that isn’t real.
That includes images of celebrities like Taylor Swift, who was a recent victim of virtual impersonation.
Meta’s new labels will roll out across Facebook, Instagram, and Threads in multiple languages, and Meta has already applied a similar label to images created with its own AI generator tool.
Audio and video generated by AI won’t be automatically labeled just yet because Meta said the industry isn’t yet embedding the identifying data in those formats.
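The labeling approach Meta describes leans on embedded metadata and invisible watermarks rather than anything visible in the picture itself. As a rough illustration only, the sketch below checks whether a file’s embedded metadata text contains the IPTC “trainedAlgorithmicMedia” digital-source-type value, one published marker for synthetic media; real detection tools are far more involved, and the file name used here is a placeholder, not anything Meta has published.

```python
from pathlib import Path

# IPTC digital-source-type value used to flag synthetic media in embedded
# XMP/IPTC metadata (assumed here to appear as plain text inside the file).
AI_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_labeled(image_path: str) -> bool:
    """Naive check: do the file's raw bytes contain the AI-generated marker?"""
    return AI_MARKER in Path(image_path).read_bytes()

if __name__ == "__main__":
    # "example.jpg" is a hypothetical file name for illustration.
    print(looks_ai_labeled("example.jpg"))
```

This catches only images that still carry their metadata; stripping or re-encoding a file removes the label, which is why the industry is also pursuing invisible watermarks baked into the pixels themselves.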
News4JAX asked internet networking security consultant Chris Hamer if AI will soon be able to create a product that can’t be identified by social media companies.
“[It’s like] two bulldozers pulling on the same chain,” Hamer said. “One wants to be able to continue to identify AI-generated images and videos and audio and scripts and books and everything else. Whereas the AI side of it says we need to be able to generate material that is undetectable as AI. Who’s going to win? It depends on who has the greater resources and the better-vested interest.”
Hamer said the only thing that could slow the growth of artificial intelligence deepfakes is legislation that would make certain aspects of AI illegal.
So how can you determine if what you’re looking at online is real or if it’s fake? Hamer said the first step is to change your overall mindset when viewing everything online.
“It’s difficult to actually challenge everything. Critical thinking is, it’s difficult,” Hamer said. “And in some cases, that leads you to be rather cynical, and you take all the beauty out of something that really just exists to be beautiful. Instead of tearing it apart and going, ‘Well, that tree shouldn’t be growing there and those fruits are wrong.’”
To spot an AI-generated deepfake, the experts say:
- Look for unnatural eye movement, or a lack of eye movement, like no blinking (a rough sketch of how this cue can be measured follows this list).
- Check the hands; AI has a hard time generating their natural movement.
- Watch for unnatural facial expressions or a lack of emotion.
- Note awkward-looking body posture.
- Be suspicious of teeth or hair that doesn’t look real.
- Listen for inconsistent audio and noise.
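These tips are aimed at viewers, but the “no blinking” cue is also something researchers measure programmatically using the eye aspect ratio from Soukupová and Čech’s blink-detection work. The sketch below is only an illustration of that idea; the landmark ordering and the rough 0.2 threshold come from that literature, not from anything Meta or Hamer described.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Compute the eye aspect ratio from six eye-contour landmarks.

    eye: array of shape (6, 2), ordered p1..p6 as in the common
    68-point facial landmark scheme (corners first, then lids).
    """
    vertical_1 = np.linalg.norm(eye[1] - eye[5])   # upper-to-lower lid
    vertical_2 = np.linalg.norm(eye[2] - eye[4])   # upper-to-lower lid
    horizontal = np.linalg.norm(eye[0] - eye[3])   # eye corner to corner
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

# Example with made-up landmark coordinates: an open eye scores well above
# the ~0.2 range; during a blink the ratio drops sharply. A video whose
# ratio never dips may contain no blinking at all, one possible deepfake tell.
open_eye = np.array([[0, 5], [3, 8], [6, 8], [9, 5], [6, 2], [3, 2]], dtype=float)
print(round(eye_aspect_ratio(open_eye), 2))
```

In practice the landmarks would come from a face-tracking library run over each video frame; the point here is only that several of the cues on this list can be turned into simple measurements.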
“People are going to have to use their brains more and believe the internet less,” Hamer said.
Meta is expected to roll out its new labels for AI-generated images in the coming months.