(Bloomberg) Deepfakes already being put to some scammy uses

Jordan Howlett, a 26-year-old with 24 million followers on Instagram, TikTok and YouTube, is very careful about the brands he works with. He’s signed deals with Domino’s Pizza, Google and Wingstop, and makes his living creating videos on subjects such as how to “open jars like a pro” or the best way to “properly eat Chipotle bowls.” So he was spooked when he began receiving messages asking why he was advertising a supposed cure for blindness on Facebook and Instagram.

Howlett clicked on one of the links to the videos in question and listened in horror as a voice that sounded just like his described how “top researchers from Cambridge” had discovered a seven-second ritual that could give anyone perfect vision. The video, which consisted of stock images of brain X-rays and middle-aged people squinting into their cellphones, was poorly edited. But the audio was utterly convincing, according to Howlett. “When I heard my voice, I was terrified,” he says. “They could theoretically get me to say anything.”

Cybersecurity experts have spent years warning about deepfakes: artificially generated or manipulated media that can pass as authentic. While much of the concern has centered on images and video, it’s become clear over the past year that audio deepfakes, sometimes called voice clones, pose the most immediate threat. Vijay Balasubramaniyan, founder of the fraud-detection company Pindrop, says his company has already begun to see attacks on banking customers in which fraudsters use synthetic audio to impersonate account holders in customer support calls.

Read it all.
