- AI deepfakes increasingly target Olympians, posing risks to their marketability.
- Deepfakes mislead consumers, as seen when a fake Oprah endorsement promoted turmeric as a weight-loss supplement.
SALT LAKE CITY — AI deepfakes have become a blight on social media and the internet. Just about every celebrity imaginable has been targeted. And now, that includes members of Team USA.
"In the case of Olympians, there's so much video footage of them out there that it's easy for a bad actor to make a video of that person," said Chris Simpson, director of National University's Center of Cybersecurity.
AI deepfakes hurt the people being imitated: if audiences can't trust that what they're saying is genuine, their real words lose power, and so does their marketability.
And depending on what the deepfake is saying, it can hurt you too, as Lisa Swearingen recently found when she bought weight loss pills.
She had purchased several bottles after catching an ad online featuring the endorsement of the one and only Oprah Winfrey.
Only, it wasn't Oprah. It was her deepfake doppelganger.
Swearingen said those pills turned out to be a common spice, turmeric — really expensive turmeric.
Experts say, in a very short amount of time, these deepfakes have become a lot harder to spot. And they can be created using relatively little source material.
"As a consumer, how do I protect myself, besides being skeptical and assuming everything is fraud?" I asked Simpson.
"Honestly, that's probably the best way," he answered.
The good news is that something is finally being done. Not so much to shut down the fakes; instead, the focus is on helping us investigate whether a video we see on social media is real or not.
A group called the Coalition for Content Provenance and Authenticity is working on a system that basically adds a digital label to photos and videos so people can check where they came from and see whether artificial intelligence was used in their creation.
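The idea behind that system can be sketched in code. The following is a simplified illustration of the concept only, not the coalition's actual format (real content credentials use certificate-based signatures and embedded metadata, and all names here are hypothetical): a signed "manifest" records the file's fingerprint and provenance claims, such as whether AI was involved, and anyone with the verification key can later confirm that the file and its label haven't been tampered with.

```python
import hashlib
import hmac
import json

# Hypothetical shared key for this sketch; real provenance systems use
# public-key certificates rather than a shared secret.
SIGNING_KEY = b"demo-secret-key"

def create_manifest(media_bytes: bytes, claims: dict) -> dict:
    """Bind provenance claims to a media file's hash and sign the result."""
    payload = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "claims": claims,
    }
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check that the file is unmodified and its label is authentic."""
    payload = manifest["payload"]
    if hashlib.sha256(media_bytes).hexdigest() != payload["sha256"]:
        return False  # the media was altered after it was labeled
    body = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

video = b"...raw video bytes..."
manifest = create_manifest(video, {"generator": "CameraApp 1.0", "ai_generated": False})
print(verify_manifest(video, manifest))         # True: untouched file checks out
print(verify_manifest(video + b"x", manifest))  # False: tampering is detected
```

The key point the sketch captures is that the label travels with the file: any edit to the video, or to the claims in the label, breaks the signature check.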