Securing the nuclear second-strike capability is the basis of the deterrence strategy that has so far prevented any potential aggressor from launching a nuclear attack: “He who shoots first, dies second.” To be able to react to a threat to their second-strike capability, the nuclear powers have developed and installed extensive computerised early warning and decision-making systems. These systems support human operators and aim to recognise an attack early enough that a state’s own nuclear launchers can be activated before the devastating impact. Such a strategy is known as a launch-on-warning strategy.
Enemy missile launches are detected by radar systems, satellites with various sensors, and maritime listening sensors that track the movement of submarines and ships. Because warning times keep shrinking, for example due to modern hypersonic missiles, artificial intelligence techniques are increasingly required to solve certain subtasks automatically and to provide decision recommendations. However, errors occur even in highly complex systems, and it is impossible to build such a system entirely free of errors. Furthermore, the data available for a decision in the event of an alarm is usually vague, uncertain and incomplete. Vague quantities such as brightness and size play a role in the evaluation of sensor signals, and they lie on a continuous spectrum between “does not apply” and “applies”. This is why even AI systems cannot make one-hundred-per-cent correct decisions in such situations, and the machine’s conclusions can hardly be checked in the short time available: the staff can only believe what the machine delivers.

In the past, errors have repeatedly led to a nuclear missile attack being reported even though there was no threat. The causes of such errors include incorrect or misinterpreted sensor data, transmission errors and computer errors. In connection with nuclear weapons, automatic decisions based on such errors could be fatal. Similar uncertainties also affect conventional weapon systems, but there the consequences are usually limited, whereas in a nuclear war the survival of the entire human race may be at stake.
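To make the notion of a continuous spectrum between “does not apply” and “applies” concrete, here is a minimal sketch in Python of a fuzzy membership function grading sensor readings instead of forcing a binary verdict. All threshold values and readings are invented for illustration; real early warning systems use far more elaborate signal processing.

```python
def membership(value, low, high):
    """Fuzzy membership: 0.0 below `low`, 1.0 above `high`,
    linear in between - a continuous spectrum instead of a
    binary 'does not apply' / 'applies' decision."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

# Hypothetical sensor readings for a suspected missile launch;
# thresholds are illustrative, not real system parameters.
brightness = membership(0.62, low=0.4, high=0.9)  # infrared flare
size = membership(0.55, low=0.3, high=0.8)        # radar cross-section

# One simple way to combine fuzzy evidence: take the minimum,
# i.e. both criteria must apply to some degree.
launch_score = min(brightness, size)
print(f"launch plausibility: {launch_score:.2f}")  # 0.44 - neither clearly yes nor no
```

The point of the sketch is that the system never obtains a clean “attack: yes/no” signal; any hard decision must be derived from graded, uncertain evidence.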
Comparability with analogous phenomena
To better visualise the dangers, consider an analogous field: misdiagnosis by medical professionals. Despite years of experience and extensive training, even experts can misinterpret symptoms, especially in time-critical or complex situations. As with early warning systems for missile attacks, such errors can have drastic consequences, and given the complexity of the data they cannot be entirely prevented.
Early warning systems for missile attacks existed before the increased use of AI, and even then the algorithms for detecting missile launches were among the most sophisticated computational processes in existence. Today, the growing importance of artificial intelligence in the military sector is made possible only by the generation of large amounts of data (through the sensor technology described above), increasing networking and the resulting high speed of data processing. Previously, human decisions carried more weight. One example is the incident of 26 September 1983: a satellite of the Soviet early warning system reported five attacking intercontinental missiles. As the report passed the prescribed checks, the Soviet officer on duty, Stanislav Petrov, should have passed the warning on in accordance with the regulations. However, he considered an American attack with only five missiles to be rather unlikely and, despite the data in front of him, decided that it was probably a false alarm, thereby preventing a catastrophe of nuclear strike and counter-strike. Petrov’s intuition told him it was a false alarm; he did not want to be responsible for the deaths of millions of people and decided accordingly.
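Petrov’s judgement can be read as informal Bayesian reasoning: when a genuine first strike has an extremely low prior probability, even a rarely-failing detection system produces alarms that are far more likely to be false than real. The following Python sketch illustrates this with Bayes’ theorem; all probabilities are invented for illustration and do not describe any real system.

```python
# Illustrative base-rate calculation; all numbers are assumptions.
p_attack = 1e-6           # prior probability of a real attack on a given day
p_alarm_if_attack = 0.99  # P(alarm | real attack): the system rarely misses
p_alarm_if_none = 1e-4    # P(alarm | no attack): rare but non-zero false alarms

# Total probability of an alarm, then Bayes' theorem for P(attack | alarm).
p_alarm = p_alarm_if_attack * p_attack + p_alarm_if_none * (1 - p_attack)
p_attack_given_alarm = p_alarm_if_attack * p_attack / p_alarm
print(f"P(attack | alarm) = {p_attack_given_alarm:.3f}")  # ~0.010

# Even this very reliable hypothetical system yields an alarm that is
# roughly 99% likely to be false, because real attacks are so rare.
```

Under these assumptions, following the regulations blindly would mean acting on an alarm that is almost certainly false, which is exactly the asymmetry Petrov weighed.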
It becomes clear that empathy and judgement of the global political context matter in human decision-making. A machine has no such feelings; it decides according to fixed rules. Today there are additional risks in such situations: cyberattacks could interfere with early warning systems, for example by injecting false data, or deepfake techniques could be used to impersonate superiors in conferences convened to assess an alarm.
Social relevance
In recent years, a new arms race has begun across several military domains. This applies to new nuclear weapon delivery systems such as hypersonic missiles, the planned weaponisation of space, the expansion of cyber-warfare capabilities and the increasing use of artificial intelligence (AI) systems, up to and including autonomous weapon systems.
The risk of nuclear war is likely to rise sharply in the coming years and decades. Climate change will lead to more crises, and new technical developments will increase the complexity of early warning systems and threat situations to such an extent that controlling them will become ever more difficult. Ultimately, the survival of humanity as a whole may be at stake. To reduce the risks, political measures and new agreements are urgently needed, such as:
- Improving trust, communication and cooperation between nuclear powers
- Agreements on the reduction of nuclear weapons and on the alert mode of nuclear weapons
- Improving the exchange of information, including in connection with early warning systems
- No automatic decisions on the use of nuclear weapons