(New Yorker) A New Generation Of Robots Seems Increasingly Human

In 1920, the Czech writer Karel Čapek wrote “R.U.R. (Rossum’s Universal Robots),” a play set in the year 2000 that follows the evolution of robota, a new class of android worker-slaves that eventually rises up and annihilates its human masters. The play introduced both the word “robot” and a narrative of human-robot conflict that, by now, has become familiar in movies such as “The Terminator,” “RoboCop,” and “Blade Runner.” Will robots fashioned to look like us, and programmed to accede to our wishes, spur people to think of them as friends and co-workers—or to treat them like chattel? Onstage in Telluride, David Hanson said that the purpose of robots like Sophia is to teach people compassion. But it seemed counterintuitive to suggest that a machine that can only mimic human emotions has the ability to inculcate in us something so fundamental to the human experience.

In Hanson’s view, Sophia is no different from a character in a book, and we know that stories can engender empathy. But given the speed at which artificial-intelligence models are being deployed, and their tendency to behave erratically, we would be wise not to wholly abandon the caution inspired by Čapek and his heirs. Matthias Scheutz, the C.E.O. of Thinking Robots, pointed out that unless designers build constraints and ethical guardrails into the A.I. models that will power the robots of the future, there is a risk of inadvertently creating machines that could harm us in unforeseen ways. “The situation we need to avoid with these machines is that when we test them, they give us the answers we want to hear, but behind the scenes they’re developing their own agenda,” Scheutz told me. “I’m listening to myself talking right now, and it sounds like sci-fi. Unfortunately, it’s not.”

Read it all.
