Artificial Intelligence's impact on future strategy

Sailor stands combat information watch aboard USS Mustin

Introduction

            The pervasiveness of information, coupled with the rapid pace of technological development, is transforming the political landscape and the character of warfare. Owing to its ubiquity, information is now a component of power; the ‘best informed’ will gain strategic overmatch through enhanced decision-making and increased mission effectiveness (Freedman 2017, 228). Artificial Intelligence will be at the heart of information operations and will play a critical role in transforming the OODA loop in the Digital Age (Guyonneau and Le Dez 2019, 103). However, Artificial Intelligence is not a strategic game-changer in isolation; its military applications to conventional capabilities have the potential to accelerate decision-making, enable warfare at machine speed, and push conflict beyond human cognitive ability. Artificial Intelligence is already shaping military doctrine and will continue to define future strategy as the technology matures. Arguably, AI-augmented conventional capabilities have the potential to undermine strategic stability and upset the current balance of power by exacerbating the ‘fog of war’, prompting inadvertent conflict escalation, and increasing the likelihood of nuclear confrontation.

The article will:

  1. Define Artificial Intelligence and outline the possible strategic defence applications of AI/ML and their stabilising effect on international strategic relations.
  2. Assess autonomous technologies and the potential negative effects of the incorporation of AI-enabled technologies by nuclear-armed states.
  3. Conclude that the integration of AI will exacerbate the security dilemma, ‘inject greater uncertainty and friction in the security environment, and diminish predictability, which is likely to aggravate the odds of escalation’ and nuclear warfare (Sweijs 2018).

Defining AI

            The hyperbole associated with Artificial Intelligence often distorts our perception of the opportunities it offers and the challenges of deploying it at scale on the battlefield. It is therefore paramount to understand that Artificial Intelligence, or the ‘capability of computer systems to perform tasks that require human intelligence’, is a technological enabler rather than a revolutionary weapons system that will inevitably lead to a ‘Skynet’ doomsday scenario (Sweijs 2018). Technology is the ‘great enabler in war’, which allows for a ‘conceptual shift in the way we think about conducting it’ (Coker 2015, 57). Artificial Intelligence currently exceeds human intelligence only in performing a narrow set of specific tasks. Further research is required to achieve Artificial General Intelligence and Artificial Superintelligence, where machine capability surpasses human cognitive ability across any given task (Sweijs 2018). However, we are not far from unlocking the myriad opportunities Artificial Intelligence offers in the military realm.

Given the rapid technological development in the field of disruptive innovation and Deep Learning, it is appropriate to expect that significant breakthroughs in Artificial Intelligence and Machine Learning will allow great powers to fulfil their ambitions and field autonomous technologies to their respective forces in the next decade. Vladimir Putin stated that ‘whoever becomes the leader in AI, will become the ruler of the world’ (Pecotic 2019). The United States launched the ‘Third Offset Strategy’ to advance ‘Deep Learning Machine Systems, human-machine combat teaming, and network enabled semi-autonomous weapons’, while China crystallised its plans to become the world’s leader in AI by 2030, shifting from ‘informatised’ to ‘intelligentised warfare’ (Sweijs 2018). ‘The potential strategic effects of military AI are not unique or exclusive to this technology’; however, the ‘multifaceted possible intersections of this disruptive technology with advanced conventional capabilities’ make Artificial Intelligence dangerous, yet promising and attractive (Johnson 2020, 16). While AI is inherently multifaceted, a new generation of artificial intelligence–enhanced conventional capabilities may produce a stabilising effect on international strategic relations, as states strengthen their retaliatory capability by enhancing early warning and command and control.

AI delivering strategic stability?

            At the strategic level, AI-enabled technologies have the potential to act as force multipliers, transform C4ISR, enhance conventional precision missile munitions to target strategic weapons, accelerate decision-making and lighten the cognitive burden on the Warfighter, and improve air defence and electronic warfare capabilities in anti-access/area denial (A2AD) operating environments (Johnson 2020, 28). Advances in AI/ML technologies can reinforce a state’s perception of its second-strike retaliatory nuclear capability and thus minimise the risk of nuclear confrontation. By enabling more reliable early warning and ISR capability, AI can equip nuclear decision-makers with the information required to make data-driven decisions in time-critical situations (Boulanin et al. 2020, 113). Early warning will enable de-escalation in strategic decision-making by equipping Commanders with a range of response options, including ones not commonly considered by human decision-makers. AI-enabled image recognition applied to data collected by space assets will further strengthen verification of arms control and provide the international peacekeeping community with effective monitoring tools. Accurate decision-making and prediction of conflict scenarios can further be corroborated through AI-enabled wargaming and advanced simulation. Additionally, AI-enabled technologies can foster the development of more survivable nuclear delivery systems, including hypersonic weapon systems and unmanned submarines, augmenting confidence in deterrence capability (Boulanin et al. 2020, 102). Through predictive maintenance, Artificial Intelligence can strengthen the protection of nuclear weapons and related infrastructure. All of these measures would theoretically underpin strategic stability; however, these implications of AI deployment should be considered with a pinch of scepticism, as the outlined circumstances are sanitised and analysed in an intellectual vacuum.
‘AI’s impact on stability, deterrence, and escalation will be determined as much by a state’s perception of its functionality than capability’ (Johnson 2020, 17).

On the other hand…

            The adoption of AI could trigger a security dilemma, as other nuclear-armed states could perceive the integration of this disruptive technology as a threat to their national security. As with any new capability, states attempt to ‘achieve similar capabilities, to find asymmetrical responses, or to change doctrines to either nullify or offset the advantage offered by the new technology’ (Boulanin et al. 2020, 105). In an increasingly contested nuclear multipolar structure, AI-enabled technologies are likely to upset the balance of power and cause inadvertent conflict escalation due to misperception by other political actors. Unlike discrete weapons systems, such as low-yield nuclear weapons, AI is not a single application and is not easily verified, owing to its multifaceted and multi-purpose characteristics. The ‘possible intersections of AI with nuclear weapons; the interplay of these intersections with strategic nonnuclear capabilities; and the backdrop of a competitive multipolar nuclear world order’ may destabilise world order and aggravate the ‘fog and friction’ of warfare.

The application of AI in nuclear weapon systems by one state is likely to prompt the other nuclear powers to lose confidence in their second-strike capability and field AI to defend strategic assets and match the adversary’s developments. The urge to adopt AI at pace may introduce serious risks, as states may consider it appropriate to integrate immature technology, compromising security standards and verification processes to seize advantage on the increasingly information-dominated battlefield. Beyond adversaries’ misperception of the use of AI, it is not difficult to imagine a number of destabilising scenarios in which strategic nuclear operations are augmented by AI-enabled technologies. AI used for remote sensing, non-nuclear strategic strike, and nuclear weapon delivery can profoundly affect military strategy and produce negative effects on international relations.

AI, autonomy and information advantage

AI for remote sensing can revolutionise the conduct of warfare by enhancing the situational awareness and detection capability of the deploying state. Artificial Intelligence-enabled technologies can be exploited to process large volumes of disparate datasets and extract actionable intelligence from background clutter for the provision of mission-critical information to the Commander. Autonomous swarms, enabled with AI/ML, could be deployed beyond the reach of manned platforms to execute nuclear ISR missions and identify the C4I systems of nuclear and nonnuclear missile launchers with AI-infused sensors (Johnson 2020, 23). Anti-Submarine Warfare currently relies on the use of conventional assets; however, NATO has already initiated experimentation to field Maritime Unmanned Systems (MUS) and intelligent autonomous ASW networks (NATO Centre for Maritime Research and Experimentation 2019). The ambition is to integrate advanced cooperative autonomy through hybrid networks, with an emphasis on combining ‘conventional assets with heterogeneous networks of smart sensors’ (NATO Centre for Maritime Research and Experimentation 2019). As this technology matures, unmanned undersea vehicles (UUVs), unmanned surface vessels (USVs), and unmanned aerial vehicles (UAVs) exploited in the maritime domain could transform ASW and render deterrence at sea obsolete.

Ballistic missile submarines are seen as the most survivable type of nuclear-launch platform due to their stealth. Deployment of AI solutions able to fuse, digest, and exploit enormous volumes of data would enable the detection, tracking, and targeting of SSBNs, challenging the survivability of the second-strike retaliatory capability of the nuclear state in question. The US Navy has already outlined its intent to increase reliance on unmanned vessels and secured FY2021 funding to advance procurement of large unmanned surface vessels (LUSVs) for the purpose of extending reach, enhancing remote sensing, and reinforcing global presence. LUSVs, equipped with missile magazines, would navigate autonomously and engage identified targets, allowing manned surface combatants to be fielded for extended periods of time (Larter 2021). The US Navy’s acquisition of Sea Hunter autonomous systems could also play an important role in monitoring chokepoints used by adversarial SSBNs. China has also expressed great interest in integrating UUVs into its naval force structure. Chinese policy-makers are seeking to develop an ‘underwater Great Wall’ with UUVs to amplify ASW capability and challenge US nuclear ballistic and nonnuclear attack submarines (Johnson 2020, 25). By transforming ISR capability, AI systems will provide their user with an unequivocal information advantage, which may pressure less capable nuclear-armed states into an arms race.

            Apart from enabling autonomous ISR assets, AI has the potential to transform kinetic and non-kinetic target engagement and revolutionise conventional non-nuclear strategic weapons systems. The prospect of reduced casualties as a result of autonomous systems employment, coupled with ambiguous rules of engagement and a weak legal framework around AI accountability, will likely prompt states to exploit AI-enabled technologies for asymmetric warfare and conventional conflict. Sorties of AI/ML drone swarms could overwhelm the adversary’s defence apparatus by increasing the mission effectiveness of air interdiction operations, amphibious ground assaults, long-range target engagement, and maritime operations in degraded, contested, and denied environments. Swarms of autonomous robotic assets will likely increase the range and speed of warfare, necessitating the transformation of CONOPs and doctrine. Stealth variants of robotic systems, if deployed with electromagnetic jammers, could undermine enemy C3I systems, break the opponent’s networked system of systems, and penetrate multi-layered air defence.

Chinese military practitioners have already begun research into ‘bee swarms’ with an emphasis on autonomous navigation, open network architecture, and anti-jamming provisions, reportedly to target US aircraft carriers (Johnson 2020, 21). Able to exploit data in real time, AI-empowered autonomous systems will likely be able to perform highly complex operations, including the detection and engagement of nuclear deterrence forces. AI is likely to further enhance long-range precision-guided munitions and hypersonic vehicles, providing strategic overmatch and enabling the user to strike the opponent’s nuclear deterrence assets with conventional warheads. AI thus underpins the ‘problem of entanglement between conventional and nuclear weapons’ (Boulanin et al. 2020, 107). AI application to conventional weapons systems and the integration of drone swarms with kinetic capability will likely apply pressure on weaker nuclear states, which will be enticed to field unverified AI systems or respond with nuclear weapons to ensure their national security in an increasingly digitised operating environment.

AI friction, uncertainty, and volatility

            The ‘commingling and entangling’ of nuclear and non-nuclear capabilities, augmented by machine-speed warfare, will likely aggravate uncertainty, produce a destabilising state of affairs in the international political arena, and increase the odds of conflict escalation (Johnson 2020, 16). The existing friction in the security environment could be exacerbated by state employment of AI machine learning technologies for nuclear weapons delivery. AI technologies are expected to enable advancements in the development of hypersonic weapons and long-range conventional and nuclear precision munitions through ‘autonomous navigation, advanced vision-based guidance systems, real-time processing of large datasets for intelligence analysis, and pattern interpretation’ (Johnson 2020, 27). Nuclear delivery could be placed on hypersonic glide vehicles, UAVs, and UUVs, which would operate autonomously. Chinese strategists have already initiated programmes to enhance hypersonic technologies and develop intelligent autonomous weapons for multi-domain warfare (Johnson 2020, 26). Similarly, Russia is advancing its Machine Learning capability to establish C2 systems for hypersonic glide vehicles. AI variants could strengthen hypersonic glide vehicle resilience against cyber threats and enhance hypersonic missile flight plans by analysing data in real time and accounting for unexpected changes in the location of the target (Boulanin et al. 2020, 95).

However, inhibitors to fielding autonomous nuclear delivery weapons include ethical principles as well as technological challenges. Ethical and political guidelines dictate that the human must remain in the decision-making loop and ‘pull the trigger’ when circumstances demand it. However, this perception may change as AI technologies mature and become more reliable. Robert Work, then US Deputy Secretary of Defense, stated in 2016 that the US DoD ‘will not delegate lethal authority to a machine to make a decision’ in the use of force. However, Work conceded that nuclear-armed opponents, such as China or Russia, may be ‘more willing to delegate authority to machines than we are and, as that competition unfolds, we’ll have to make decisions on how we can best compete’ (Lamothe 2016). Work’s statement suggests that the United States will seriously consider removing the human from the loop, should its adversaries take such a decision. Such willingness to compete in the autonomous realm is alarming, as it corroborates the prediction that AI will increase the friction, uncertainty, and volatility of international security.

In conclusion

            While future-gazing is extremely difficult, one must always imagine the art of the possible to formulate effective military strategy for future multi-domain operations. What seemed to be science fiction several decades ago is today’s reality. Artificial Intelligence is an incredibly powerful force multiplier and technological enabler, which is multifaceted, scalable, and multi-purpose. AI applications can provide the competitive edge to the armed forces by accelerating decision-making, transforming the OODA loop, and revolutionising C4ISR capability. However, until we can unveil the ‘unpredictable, brittle, inflexible, and unexplainable features of AI’, the technology will continue to ‘outpace strategy and human error’ (Johnson 2020, 30). AI, like any revolutionary technology, is likely to generate heated competition between nuclear-armed states, leading to a security dilemma. AI’s unpredictable nature, coupled with an increasingly multipolar world order, will likely lead to inadvertent conflict escalation and an increased risk of brinkmanship by less powerful nuclear-armed states seeking to preempt their adversary’s first strike. AI’s potential capabilities may challenge second-strike retaliatory capability and undermine nuclear deterrence. By injecting uncertainty into the international security environment, AI applications are likely to erode predictability and amplify the odds of conflict escalation.

Sources
Boulanin, Vincent, Lora Saalman, Petr Topychkanov, Fei Su, and Moa Peldán Carlsson. 2020. Artificial Intelligence, Strategic Stability and Nuclear Risk. Stockholm: Stockholm International Peace Research Institute.
Coker, Christopher. 2015. Future War. Cambridge: Polity Press.
Guyonneau, Rudy, and Arnaud Le Dez. 2019. "Artificial Intelligence in Digital Warfare: Introducing the Concept of the Cyberteammate." The Cyber Defense Review 4, no. 2: 103-16.
Johnson, James S. 2020. "Artificial Intelligence: A Threat to Strategic Stability." Strategic Studies Quarterly 14, no. 1: 16-39.
Lamothe, Dan. 2016. "The Killer Robot Threat: Pentagon Examining How Enemy Nations Could Empower Machines". The Washington Post. https://www.washingtonpost.com/news/checkpoint/wp/2016/03/30/the-killer-robot-threat-pentagon-examining-how-enemy-nations-could-empower-machines/.
Larter, David. 2021. "Unclear on Unmanned: The US Navy's Plans for Robot Ships Are on the Rocks". Defense News. https://www.defensenews.com/digital-show-dailies/surface-navy-association/2021/01/10/unclear-on-unmanned-the-us-navys-plans-for-robot-ships-are-on-the-rocks/.
Freedman, Lawrence. 2017. The Future of War: A History. London: Allen Lane.
NATO Centre for Maritime Research and Experimentation. 2019. "Programme: Cooperative Antisubmarine Warfare". La Spezia.
Pecotic, Adrian. 2019. "Whoever Predicts the Future Will Win the AI Arms Race". Foreign Policy. https://foreignpolicy.com/2019/03/05/whoever-predicts-the-future-correctly-will-win-the-ai-arms-race-russia-china-united-states-artificial-intelligence-defense/.
Sweijs, Tim. 2018. Report. The Hague: Hague Centre for Strategic Studies.