It had never occurred to me until I read The Age of AI that what differentiates AI from HI – human intelligence – is that even the most brilliant human chess player rules out ex ante certain moves that involve very high sacrifice. But AlphaZero plays chess “without reflection or volition, with strict adherence to the rules”. It is unbeatable partly because it has inferred from the rules certain tactics – and hence, cumulatively, a strategy – that HI would never consider.
The other obvious difference is that AI is much, much faster than HI. As the authors note, “An AI … scanning for targets follows its own logic, which may be inscrutable to an adversary and unsusceptible to traditional signals and feints – and which will, in most cases, proceed faster than the speed of human thought”. The idea of an AI program waging war, rather than playing chess, with the same ruthlessness and speed is deeply frightening. No doubt DeepMind is already working on AlphaHero. One imagines with a shudder the program sacrificing entire armies or armadas as readily as its chess-playing predecessor sacrificed its queen. No doubt the reader should feel reassured that the United States has committed itself to developing only “AI-enabled weapons”, as opposed to “AI weapons … that make lethal decisions autonomously from human operators”. “Created by humans, AI should be overseen by humans”, the authors declare. But why should America’s undemocratic adversaries exercise the same restraint? Inhuman intelligence sounds like the natural ally of regimes that are openly contemptuous of human rights.
If the foe of the future is literally inhuman as well as inhumane, how shall we defend ourselves? The varieties of deterrence that evolved during the first Cold War, up to and including Mutually Assured Destruction, seem unlikely to apply to AI war. Because, unlike nuclear weapons, AI will be widely used in multiple ways and at multiple scales, “the achievement of mutual strategic restraint … will be more difficult than before”. That seems an understatement. I have thought for some time that there may simply be no deterrence in the areas of cyberwar and information warfare.
We are left with only two possibilities. “For nations”, the authors note, “disconnection could become the ultimate form of defense.” This makes sense. The past five years have vividly revealed the dangers of a hyperconnected world. Without effective circuit-breakers that sever network links at the first indication of hazardous contagion, we are as vulnerable to cyberattack as we were to fake news in 2016 or a novel pathogen in 2020.
Read it all (subscription).