It is now possible to generate fake but realistic content with little more than the click of a mouse. This can be fun: a TikTok account on which—among other things—an artificial Tom Cruise wearing a purple robe sings “Tiny Dancer” to (the real) Paris Hilton holding a toy dog has attracted 5.1m followers. It is also a profound change in societies that have long regarded images, video and audio as close to ironclad proof that something is real. Phone scammers now need just ten seconds of audio to mimic the voices of loved ones in distress; rogue AI-generated Tom Hankses and Taylor Swifts endorse dodgy products online, and fake videos of politicians are proliferating.
The fundamental problem is an old one. From the printing press to the internet, new technologies have often made it easier to spread untruths or impersonate the trustworthy. Typically, humans have used shortcuts to sniff out foul play: one too many spelling mistakes suggests an email might be a phishing attack, for example. Most recently, AI-generated images of people have often been betrayed by their strangely rendered hands; fake video and audio can sometimes be out of sync. Implausible content now immediately raises suspicion among those who know what AI is capable of doing.
The trouble is that the fakes are rapidly getting harder to spot.