(SA) Humans Find AI-Generated Faces More Trustworthy Than the Real Thing

When TikTok videos emerged in 2021 that seemed to show “Tom Cruise” making a coin disappear and enjoying a lollipop, the account name was the only obvious clue that this wasn’t the real deal. The creator of the “deeptomcruise” account on the social media platform was using “deepfake” technology to show a machine-generated version of the famous actor performing magic tricks and having a solo dance-off.

One tell for a deepfake used to be the “uncanny valley” effect, an unsettling feeling triggered by the hollow look in a synthetic person’s eyes. But increasingly convincing images are pulling viewers out of the valley and into the world of deception promulgated by deepfakes.

The startling realism has implications for malevolent uses of the technology: its potential weaponization in disinformation campaigns for political or other gain, the creation of false porn for blackmail, and any number of intricate manipulations for novel forms of abuse and fraud. Developing countermeasures to identify deepfakes has turned into an “arms race” between security sleuths on one side and cybercriminals and cyberwarfare operatives on the other.

Read it all.

