[Elon] Musk, [Bill] Gates and [Stephen] Hawking made headlines by speaking to the dangers that A.I. may pose. Their points are important, but, I fear, were largely misunderstood by many readers. Relying on efforts to program A.I. not to “harm humans” (inspired by Isaac Asimov’s “three laws” of robotics from 1942) makes sense only when an A.I. knows what humans are and what harming them might mean. There are many ways that an A.I. might harm us that have nothing to do with its malevolence toward us, and chief among these is exactly following our well-meaning instructions to an idiotic and catastrophic extreme. Instead of mechanical failure or a transgression of a moral code, the A.I. may pose an existential risk because it is both powerfully intelligent and disinterested in humans. To the extent that we recognize A.I. by its anthropomorphic qualities, or presume its preoccupation with us, we are vulnerable to those eventualities.
Read it carefully and read it all (emphasis mine).
Right and wrong as compared with what? I am sure that many liberals would know the answer and how to program the robot, but there is much to think about here. I wish Lewis were here to comment on this.