As computers play a bigger role in warfare, the dangers to humans rise
The contest between China and America, the world’s two superpowers, has many dimensions, from skirmishes over steel quotas to squabbles over student visas. One of the most alarming and least understood is the race towards artificial-intelligence-enabled warfare. Both countries are investing large sums in militarised artificial intelligence (AI), from autonomous robots to software that gives generals rapid tactical advice in the heat of battle. China frets that America has an edge thanks to the breakthroughs of Western companies, such as their successes in sophisticated strategy games. America fears that China’s autocrats have free access to copious data and can enlist local tech firms in national service. Neither side wants to fall behind. As Jack Shanahan, a general who is the Pentagon’s point man for AI, put it last month, “What I don’t want to see is a future where our potential adversaries have a fully AI-enabled force and we do not.”
AI-enabled weapons may offer superhuman speed and precision. But they also have the potential to upset the balance of power. To gain a military advantage, armies will be tempted to allow such systems not only to recommend decisions but also to give orders. That could have worrying consequences. Able to think faster than humans, an AI-enabled command system might cue up missile strikes on aircraft carriers and airbases at a pace that leaves no time for diplomacy, and in ways that its operators do not fully understand. On top of that, AI systems can be hacked, and tricked with manipulated data.
During the 20th century the world eventually found a way to manage a paradigm shift in military technology: the emergence of the nuclear bomb. A global disaster was avoided through a combination of three approaches: deterrence, arms control and safety measures. Many are looking to this template for AI. Unfortunately it is of only limited use, and not just because the technology is new.
Deterrence rested on the consensus that if nuclear bombs were used, they would pose catastrophic risks to both sides. But the threat posed by AI is less lurid and less clear. It might aid surprise attacks or confound them, and the death toll could range from none to millions. Likewise, cold-war arms control rested on transparency: the ability to know with some confidence what the other side was up to. Unlike missile silos, software cannot be spied on from satellites. And whereas warheads can be inspected by enemies without reducing their potency, showing the outside world an algorithm could compromise its effectiveness. The incentive may be for each side to mislead the other. “Adversaries’ ignorance of AI-developed configurations will become a strategic advantage,” suggests Henry Kissinger, who led America’s cold-war arms-control efforts with the Soviet Union.
That leaves the third approach: safety. Nuclear arsenals involve complex systems in which the risk of accidents is high. Protocols have been developed to ensure that weapons cannot be used without authorisation, such as fail-safe mechanisms that stop bombs from detonating if they are dropped prematurely. More thinking is required on how analogous measures might apply to AI systems, particularly those entrusted with orchestrating military forces across a chaotic and foggy battlefield.
The principles that such rules must embody are straightforward. AI will have to reflect human values, such as fairness, and be resilient to attempts to fool it. Crucially, to be safe, AI weapons will have to be as explainable as possible, so that humans can understand how they take decisions. Many Western companies developing AI for commercial applications, such as self-driving cars and facial-recognition software, are already testing their systems to ensure that they exhibit some of these characteristics. The stakes are higher in the military sphere, where deception is routine and the pace is frenzied. Amid a confrontation between the world’s two big powers, the temptation will be to cut corners for temporary advantage. So far there is little sign that the dangers have been taken seriously enough, although the Pentagon’s AI centre is hiring an ethicist. Leaving warfare to computers will make the world a more dangerous place.