Integrating emerging technology in multinational military operations: The case of artificial intelligence

Last fall, Perry World House hosted a two-day colloquium titled "How Emerging Technologies Are Rewiring the Global Order." The essays in this roundtable emerged from a panel on how emerging technologies like AI are changing international politics. 

Below is one of those essays, by Erik Lin-Greenberg, who joins MIT this fall as an assistant professor of political science. The essays were featured in the Texas National Security Review.

June 2, 2020 | Texas National Security Review | Erik Lin-Greenberg

States acquire and develop new military technologies to gain an advantage on the battlefield, to increase the efficiency of operations, and to decrease operational risk. Although these technologies can help shift the balance of power, militaries often have difficulty integrating new equipment and practices into their existing force structure due to a combination of technical and institutional barriers.45 The challenge of integrating new technologies into military operations is magnified in the case of alliances and other multinational coalitions. While members of these entities share security interests, each pursues its own national interests and has its own set of military priorities and procedures. As a result, each state may have different views on how and when to use these new technologies, complicating both planning and operations.

Multinational operations have long posed challenges to national security practitioners, but the development of more advanced military technologies has the potential to make cooperation even more complex.46 NATO, for instance, faced significant challenges during the 1999 Kosovo Air War because the encrypted radio systems used by some member states could not communicate with those of other states.47 More recently, allies have disagreed over the best policies and principles to guide cyber operations.48 To explore the challenges of integrating new technologies into multinational military operations, I examine the case of artificial intelligence (AI)—a technology that is rapidly finding its way into military systems around the world. I argue that AI will complicate multinational military operations in two key ways. First, it will pose unique challenges to interoperability — the ability of forces from different states to operate alongside each other. Second, AI may strain existing alliance decision-making processes, which are frequently characterized by time-consuming consultations among member states. These challenges, however, are not insurmountable. Just as allies have integrated advanced technologies such as GPS and nuclear weapons into operations and planning, they will also be able to integrate AI.

Conceptualizing multinational operations and artificial intelligence

Before tackling these issues, I outline some key definitions and concepts. Alliances and coalitions are cooperative endeavors in which members contribute resources in pursuit of shared security interests.49 Alliances are typically more formalized institutions whose operations are codified in treaties, while coalitions are typically shorter-term pacts created to achieve specific tasks — such as the defeat of an adversary. In today’s security environment, states often carry out military operations alongside allies or coalition partners. These multinational efforts yield both political and operational benefits. Politically, multilateral operations can lend greater legitimacy to the use of force than unilateral actions. Since multinational efforts require the buy-in of multiple states, they can help signal that a military operation is justified.50 Alliances and coalitions also allow for burden sharing, with various member states each contributing to the planning and conduct of military operations. This can reduce the strain on any one state’s military during contingency operations and allow states to leverage the specialized skills of different alliance or coalition members.

Despite the virtues of alliance and coalition operations, they also pose obstacles to strategic and operational coordination. Even if allies share security interests, they may have difficulty agreeing on how to pursue their objectives—a task that becomes increasingly challenging as the number of states in an alliance or coalition increases, and if the terms of the alliance commitment are vague (as they often are, to prevent allies from being drawn into conflicts they would prefer to avoid).51 At a more operational level, allies and partners may face challenges when operating alongside each other because of technical, cultural, or procedural factors.52 AI has the potential to exacerbate all of these issues.

In the military domain, AI has been increasingly used in roles that traditionally required human intelligence. In some cases, AI is employed as part of analytical processes, like the use of machine learning to help classify geospatial or signals intelligence targets. Or, it can be part of the software used to operate physical systems, like self-driving vehicles or aircraft. States around the world have already fielded a range of military systems that rely on AI technology. The US Department of Defense, for instance, launched Project Maven to develop AI to process and exploit the massive volume of video collected by reconnaissance drones.53 Similarly, Australia is working with Boeing to develop an advanced autonomous drone intended for use on combat missions, and the US Navy is exploring the use of self-operating ships for anti-submarine warfare operations.54 Military decision-makers look to these systems as ways of increasing the efficiency and reducing the risk of conducting military operations. Automating processes like signals analysis can reduce manpower requirements, while replacing sailors or soldiers with computers on the front lines can mitigate the political risks associated with suffering friendly casualties.55

As states develop AI capabilities, leaders must consider the challenges that may arise when fielding AI as part of broader alliance or coalition efforts. First, alliance leaders must consider the unequal rates at which alliance members will adopt AI—and the consequences this could have on alliance and coalition operations. Second, leaders must consider how AI will affect two important components of alliance dynamics: shared decision-making and interoperability.

Challenges of adopting AI56

New technology does not diffuse across the world at the same rate, meaning that some states will possess and effectively operate AI-enabled capabilities, while others will not.57 This unequal distribution of technology can result from variation in material and human resources, or from political resistance to adoption. In the case of AI, large and wealthy states (e.g., the United States) and smaller but technologically advanced countries (e.g., Singapore) have established robust AI development programs.58 In contrast, less wealthy allies have tended to allocate their limited defense funding to other, more basic capabilities. Many of NATO’s poorer members, for example, have opted to invest in modernizing conventional equipment rather than developing new military AI capabilities.59

In addition to variation in material resources, public support for the development of military AI capabilities varies significantly across states, potentially shaping whether and how states develop AI. Even though AI enables a range of military capabilities, the notion of AI-enabled weapons often conjures up images of killer robots in the minds of the public. One recent survey finds strong opposition to the use of autonomous weapons among the populations of US allies such as South Korea and Germany.60 The public and many political and military decision-makers in these countries remain reluctant to delegate life-or-death decisions to computers, and worry about the implications of AI-enabled technologies making mistakes.61 This type of resistance can lead states to ban the use of AI-enabled systems or hamper the development of AI technologies for military use, at least temporarily, as it did when Google terminated its involvement in Project Maven after employee protests.62 The resulting divergence in capabilities between AI haves and have-nots within a multinational coalition may stymie burden-sharing. States without AI-enabled capabilities may be less able to contribute to missions, forcing better-equipped allies to take on a greater share of the work—possibly generating friction between coalition members.

Challenges to multinational decision-making and interoperability

Even if allies and coalition partners overcome the domestic obstacles to developing AI-enabled military technology, the use of these systems may still complicate decision-making and pose vexing interoperability challenges for multinational coalitions. These challenges can hamper multinational operations and potentially jeopardize cohesion among security partners.

Decision-making among allies is often characterized as a complex coordination game. Although allies share some set of objectives and goals, each state still maintains its own national interests. The negotiations needed to compromise on these divergent political interests can result in drawn-out decision-making timelines.63 AI, however, has the potential to greatly accelerate warfare to what former US Deputy Secretary of Defense Bob Work referred to as “machine speed.”64 The faster rate at which information is produced and operations are carried out may strain existing alliance decision-making processes. The current NATO decision-making construct, for instance, requires the 29-member North Atlantic Council to debate and vote on issues related to the use of force.65 As AI accelerates the speed of war, decision-making timelines may be compressed. Coalition leaders may find themselves making decisions without the luxury of extended debates.

At the more tactical level, the increased deployment of AI-enabled systems has the potential to complicate interoperability between coalition forces. Interoperability — “the ability to act together coherently, effectively, and efficiently to achieve tactical, operational, and strategic objectives” and “the condition achieved among communications-electronics systems…when information or services can be exchanged directly and satisfactorily between them and/or their users” — is critical to multinational operations.66 Interoperability ensures military personnel and assets from each member state are equipped with both the technology and procedures that allow them to support other member states on the battlefield.

As new AI-enabled systems are introduced to the battlefield, they must — like older generations of technology — be able to communicate and integrate with each other and with existing legacy systems. The data-intensive underpinnings of AI, however, can make this a complicated task for political and technical reasons. Politically, states may be hesitant to share military and intelligence data even with close allies.67 They may fear that providing unfettered access to data risks disclosing sensitive sources and methods, or revealing that states have been snooping on their allies.68 These revelations could cause mistrust, strain political relationships, or compromise ongoing intelligence operations.

Even if allies are willing to share data, significant technical obstacles remain. Data produced by different states that could be used to train AI systems, for example, may be stored in different formats or tagged with different labels, making it difficult to integrate data from multiple states.69 Further, much of this military and intelligence data resides on classified national networks that are not typically designed to enable easy information sharing. These information stovepipes have hampered past attempts at information and data sharing during coalition operations. This will only become more pronounced as data requirements increase in an age of AI-enabled warfare.70

Preparing for “machine speed” multinational operations

Modern military operations are increasingly integrating AI, and potential rivals are developing robust AI capabilities. To deter these adversaries and to more efficiently carry out coalition operations, the United States must work with its allies to responsibly develop AI capabilities that are interoperable and support coalition decision-making processes. To these ends, the United States and its allies can take several steps to better posture their forces for AI-enabled multinational operations.

First, allies and security partners should establish AI collaboration agreements that outline where and how AI will be used. Singapore and the United States, for instance, launched a partnership to coordinate on AI use in the national security domain.71 These agreements would provide guidelines for developing shared tactics, techniques, and standardization procedures, allowing allies and coalition partners to more effectively integrate their AI-enabled capabilities. Coalition-wide standards on labeling and formatting of data, for instance, would help streamline AI development and enhance interoperability, much in the same way that NATO standards today ensure that radios and other systems used by NATO partners can communicate with each other.
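
To make the idea of shared data standards concrete, the short sketch below (in Python) shows how coalition partners might map their national labeling conventions onto an agreed common schema before pooling data for AI development. The schema, field names, label vocabularies, and records are entirely hypothetical and are meant only to illustrate why a coalition-wide standard simplifies integration.

```python
# Hypothetical sketch: normalizing differently labeled national datasets
# into a shared coalition schema before pooling them for AI training.
# All field names, label vocabularies, and records are illustrative only.

from dataclasses import dataclass

@dataclass
class CoalitionRecord:
    """Shared schema agreed across coalition partners (hypothetical)."""
    sensor_type: str      # e.g., "EO", "SAR", "SIGINT"
    target_class: str     # e.g., "armored_vehicle", "surface_vessel"
    timestamp_utc: str    # ISO 8601, agreed common time reference

# Each partner supplies a mapping from its national labels to the shared vocabulary.
NATION_A_LABELS = {"tank": "armored_vehicle", "ship": "surface_vessel"}
NATION_B_LABELS = {"Panzer": "armored_vehicle", "Schiff": "surface_vessel"}

def normalize(record: dict, label_map: dict) -> CoalitionRecord:
    """Translate a nationally formatted record into the coalition schema."""
    return CoalitionRecord(
        sensor_type=record["sensor"].upper(),
        target_class=label_map[record["label"]],
        timestamp_utc=record["time"],
    )

# Records arriving in two different national formats...
raw_a = {"sensor": "eo", "label": "tank", "time": "2020-06-02T12:00:00Z"}
raw_b = {"sensor": "sar", "label": "Schiff", "time": "2020-06-02T12:05:00Z"}

# ...become directly comparable once normalized, and could then be pooled
# to train a common AI model.
pooled = [normalize(raw_a, NATION_A_LABELS), normalize(raw_b, NATION_B_LABELS)]
print(pooled)
```

Without such an agreed schema, each partner would need bespoke translation logic for every other partner's data, which is precisely the kind of friction coalition-wide standards are meant to remove.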

Second, coalition leaders should explore how to streamline decision-making processes so they are responsive to AI-enabled warfare. This might include developing pre-established rules of engagement that delegate authorities to frontline commanders, or even to AI-enabled computers and weapon systems — something that leaders may be hesitant to do. Third, alliances should look to develop technologies and processes that overcome barriers to the sharing of sensitive data. To do this, allies could draw from previous agreements that governed the sharing of extremely sensitive intelligence.72 Partners could also leverage procedures like secure multiparty computation, a privacy-preserving technique in which AI analyzes data inputs from sources that seek to keep the data secret, but produces outputs that are public to authorized users.73
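
The intuition behind secure multiparty computation can be illustrated with a minimal additive secret-sharing sketch, one common building block for such protocols. The partners, inputs, and parameters below are hypothetical, and real protocols are considerably more elaborate; the point is simply that each participant splits its private input into random shares, and only the combined aggregate is ever revealed.

```python
# Minimal sketch of additive secret sharing, a building block behind many
# secure multiparty computation protocols. Three hypothetical partners compute
# the sum of their private inputs without revealing any individual input.
import random

MODULUS = 2**61 - 1  # arithmetic is done modulo a large prime

def share(secret: int, n_parties: int) -> list:
    """Split a secret into n random shares that sum to the secret mod MODULUS."""
    shares = [random.randrange(MODULUS) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MODULUS)
    return shares

# Each partner's private input (e.g., a count it does not want to disclose).
private_inputs = {"partner_a": 120, "partner_b": 75, "partner_c": 310}

# Each partner splits its input and sends one share to every participant.
all_shares = {name: share(value, 3) for name, value in private_inputs.items()}

# Participant i sums the i-th share it receives from every partner; no single
# share reveals anything about the underlying inputs.
partial_sums = [sum(all_shares[name][i] for name in all_shares) % MODULUS
                for i in range(3)]

# Combining the partial sums reveals only the aggregate, which matches the
# true total while keeping each partner's individual input secret.
total = sum(partial_sums) % MODULUS
assert total == sum(private_inputs.values())
print("Aggregate (visible to authorized users):", total)
```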

Finally, allies and partners should help acclimate their forces to AI-enabled operations. For instance, multinational exercises might prominently feature AI-enabled capabilities like drone swarms or intelligence reports produced by AI-powered systems. Military leaders might also be asked to employ their own AI-enabled capabilities and to deter or defeat those of rivals during wargames. These exercises will help national security practitioners better understand what AI can and cannot do. This deeper understanding of AI will help decision-makers develop more informed plans and strategies, and help ensure multinational coalitions are ready for modern, AI-enabled warfare.


REFERENCES

45 Michael C. Horowitz, The Diffusion of Military Power: Causes and Consequences for International Politics (Princeton, NJ: Princeton University Press, 2010).

46 Eric Larson et al., Interoperability: A Continuing Challenge in Coalition Air Operations (Santa Monica, CA: RAND, 2000), 28–33.

47 Larson et al., 20.

48 Max Smeets, “Cyber Command’s Strategy Risks Friction With Allies,” Lawfare, May 28, 2019, https://www.lawfareblog.com/cyber-commands-strategy-risks-friction-allies; Christopher Porter and Klara Jordan, “Don’t Let Cyber Attribution Debates Tear Apart the NATO Alliance,” Lawfare, Feb. 14, 2019, https://www.lawfareblog.com/dont-let-cyber-attribution-debates-tear-apart-nato-alliance.

49 Stephen Walt, The Origins of Alliance (Ithaca, NY: Cornell University Press, 1990); Glenn H. Snyder, Alliance Politics (Ithaca, NY: Cornell University Press, 1997).

50 A large body of international relations scholarship suggests that support from multinational organizations can increase public support for the use of force and the legitimacy of military operations. See, Terrence L. Chapman, “Audience Beliefs and International Organization Legitimacy,” International Organization 63, no. 4 (October 2009): 733–64, https://doi.org/10.1017/S0020818309990154; Terrence L. Chapman and Dan Reiter, “The United Nations Security Council and the Rally ’Round the Flag Effect,” Journal of Conflict Resolution 48, no. 6 (December 2004): 886–909, https://www.jstor.org/stable/4149799.

51 Michael Beckley, “The Myth of Entangling Alliances: Reassessing the Security Risks of US Defense Pacts,” International Security 39, no. 4 (Spring 2015): 7–48, https://doi.org/10.1162/ISEC_a_00197.

52 Joint Publication 3-16: Multinational Operations, Joint Chiefs of Staff, March 1, 2019, I–3, https://www.jcs.mil/Portals/36/Documents/Doctrine/pubs/jp3_16.pdf?ver=2019-11-14-170112-293.

53 Theresa Hitchens, “In 1st Interview, PDUSDI Bingen Talks Artificial Intelligence, Project Maven, Ethics,” Breaking Defense, Aug. 26, 2019, https://breakingdefense.com/2019/08/in-1st-interview-pdusdi-bingen-talks-artificial-intelligence-project-maven-ethics/.

54 Ewen Levick, “Boeing’s Autonomous Fighter Jet Will Fly Over the Australian Outback,” IEEE Spectrum, Jan. 2, 2020, https://spectrum.ieee.org/aerospace/military/boeings-autonomous-fighter-jet-will-fly-over-the-australian-outback; Megan Eckstein, “Sea Hunter USV Will Operate With Carrier Strike Group, As SURFDEVRON Plans Hefty Testing Schedule,” US Naval Institute News, Jan. 21, 2020, https://news.usni.org/2020/01/21/sea-hunter-usv-will-operate-with-carrier-strike-group-as-surfdevron-plans-hefty-testing-schedule.

55 James Igoe Walsh and Marcus Schulzke, Drones and Support for the Use of Force (Ann Arbor: University of Michigan Press, 2018); Smriti Srivastava, “Indian Army Encourages Potent AI Mechanism to Reduce Manpower Dependency,” Analytics Insight, Sept. 26, 2019, https://www.analyticsinsight.net/indian-army-encourages-potent-ai-mechanism-reduce-manpower-dependency/.

56 For a deeper discussion of this argument, see, Erik Lin-Greenberg, “Allies and Artificial Intelligence: Obstacles to Operations and Decision-making,” Texas National Security Review 3, no. 2 (Spring 2020).

57 On the diffusion of new technologies, see, Everett M. Rogers, Diffusion of Innovations, 5th ed. (New York: Free Press, 2003); and Horowitz, The Diffusion of Military Power.

58 Prashanth Parameswaran, “What’s in the New US-Singapore Artificial Intelligence Defense Partnership?” The Diplomat, July 1, 2019, https://thediplomat.com/2019/07/whats-in-the-new-us-singapore-artificial-intelligence-defense-partnership/.

59 “Modernization of the Armed Forces,” Republic of Albania: Ministry of Defense, Oct. 12, 2019, http://www.mod.gov.al/eng/index.php/security-policies/others-from-mod/modernization/68-modernization-of-the-armed-forces.

60 “Six in Ten (61%) Respondents Across 26 Countries Oppose the Use of Lethal Autonomous Weapons Systems,” Ipsos, Jan. 21, 2019, https://www.ipsos.com/en-us/news-polls/human-rights-watch-six-in-ten-oppose-autonomous-weapons.

61 Colin Clark, “Air Combat Commander Doesn’t Trust Project Maven’s Artificial Intelligence — Yet,” Breaking Defense, Aug. 21, 2019, https://breakingdefense.com/2019/08/air-combat-commander-doesnt-trust-project-mavens-artificial-intelligence-yet/.

62 “Country Views on Killer Robots,” Campaign to Stop Killer Robots, Aug. 21, 2019, https://www.stopkillerrobots.org/wp-content/uploads/2019/08/KRC_CountryViews21Aug2019.pdf; Daisuke Wakabayashi and Scott Shane, “Google Will Not Renew Pentagon Contract that Upset Employees,” New York Times, June 1, 2018, https://www.nytimes.com/2018/06/01/technology/google-pentagon-project-maven.html.

63 Kenneth N. Waltz, Theory of International Politics (Boston, MA: McGraw Hill, 1979), 163–70; John J. Mearsheimer, “The False Promise of International Institutions,” International Security 19, no. 3 (Winter 1994/95): 32, https://www.jstor.org/stable/2539078.

64 Robert Work, “Remarks to the Association of the US Army Annual Convention,” US Department of Defense, Oct. 4, 2016, https://www.defense.gov/Newsroom/Speeches/Speech/Article/974075/remarks-to-the-association-of-the-us-army-annual-convention/.

65 “North Atlantic Council,” NATO, last updated Oct. 10, 2017, http://www.nato.int/cps/en/natohq/topics_49763.htm.

66 DOD Dictionary of Military and Associated Terms, The Joint Staff, last updated January 2020, https://www.jcs.mil/Portals/36/Documents/Doctrine/pubs/dictionary.pdf.

67 James Igoe Walsh, The International Politics of Intelligence Sharing (New York: Columbia University Press, 2009).

68 Matthew Karnitschnig, “NSA Flap Strains Ties with Europe,” Wall Street Journal, Feb. 9, 2014, https://www.wsj.com/articles/wave-of-nsa-reports-strain-ties-with-europe-1391971428?tesla=y.

69 These data problems plague state militaries and would be magnified in the multinational context. See, Sydney J. Freedberg, “Pentagon’s AI Problem Is ‘Dirty’ Data: Lt. Gen. Shanahan,” Breaking Defense, Nov. 13, 2019, https://breakingdefense.com/2019/11/exclusive-pentagons-ai-problem-is-dirty-data-lt-gen-shanahan/.

70 Matt Pottinger, Michael T. Flynn, and Paul D. Batchelor, Fixing Intel: A Blueprint for Making Intelligence Relevant in Afghanistan (Washington, DC: Center for New American Security, 2010), https://www.cnas.org/publications/reports/fixing-intel-a-blueprint-for-making-intelligence-relevant.

71 Parameswaran, “What’s in the New US-Singapore Artificial Intelligence Defense Partnership?”

72 Walsh, The International Politics of Intelligence Sharing.

73 Andrew C. Yao, “Protocols for Secure Computations,” in Proceedings of the 23rd Annual Symposium on Foundations of Computer Science, SFCS ’82 (Washington, DC: IEEE Computer Society, 1982), 160–64.

Erik Lin-Greenberg is a postdoctoral fellow at the University of Pennsylvania’s Perry World House. In Fall 2020, he will start as an assistant professor of political science at MIT.

This publication was made possible (in part) by a grant from Carnegie Corporation of New York. The statements made and views expressed are solely the responsibility of the author(s).