Coding the Law of Armed Conflict: First Steps

[Editor’s note: The following post highlights a subject addressed in the Lieber Studies volume The Future Law of Armed Conflict, which was published 27 May 2022. For a general introduction to this volume, see Professor Matt Waxman’s introductory post.]


Killer robots have captured the collective imagination, but many other predictive algorithms will arrive sooner on the battlefield to provide decision support to the military. The military may seek algorithms that help it predict a person’s legal status, whether someone is holding a weapon in a hostile pose, or whether a particular attack would be proportionate. Even though coders cannot embed the Law of Armed Conflict (LOAC) directly into these algorithms, the military will benefit from LOAC-informed algorithms. In chapter 3 of The Future Law of Armed Conflict, I explore what might be involved in an effort to build and deploy LOAC-aware algorithms.

There is widespread skepticism about programming the LOAC into code; many doubt that autonomous systems can implement complex legal concepts such as distinction and proportionality. Indeed, work to date suggests that it is very difficult to translate abstract, context-dependent legal concepts directly into code. There are a few areas of law in which computer scientists and lawyers have coded legal rules; TurboTax, for example, produces reliable legal conclusions about a user’s tax liability. Additionally, judges use algorithms in the criminal justice setting to predict how dangerous a person is, which informs their decisions about bail, parole, and sentencing. These examples suggest that it is possible to create predictive algorithms that take the law into account ex ante (where programmers understand the legal contexts for which they produce the algorithms) but still require human decision makers (informed by those predictions) to apply the law ex post.

Military personnel seeking to use predictive algorithms to make sense of large amounts of information and identify patterns and anomalies must proceed through a three-phase process. First, coders and lawyers must identify the LOAC rules that will be relevant to the type of operation for which the algorithm’s predictions will be used and assess which features or facts will be most salient to the predictions the algorithms will make. For example, an algorithm used to predict whether a person poses an imperative security threat might include characteristics such as known suspicious or hostile actions, age, prior detentions, associations with organized armed groups or other hostile actors, previous employment, tribal relationships, and communication networks.
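To make that first phase concrete, the sketch below (in Python, and purely illustrative; the field names and example values are assumptions drawn from the factors listed above, not from any actual system) shows one way coders and lawyers might record an agreed-upon feature set before any model is trained.

```python
# Hypothetical sketch of phase one: recording the features that lawyers and
# coders have agreed are salient to an "imperative security threat" prediction.
# Field names mirror the illustrative factors in the text; nothing here
# reflects an actual military system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ThreatFeatures:
    """Illustrative inputs to a hypothetical security-threat prediction."""
    known_hostile_actions: int        # count of documented suspicious or hostile acts
    age: int
    prior_detentions: int
    armed_group_association: bool     # known ties to organized armed groups or other hostile actors
    previous_employment: Optional[str]
    tribal_affiliation: Optional[str]
    communication_contacts: int       # size of known communication network

# An entirely invented example case, used only to show the structure.
example = ThreatFeatures(
    known_hostile_actions=2,
    age=34,
    prior_detentions=1,
    armed_group_association=True,
    previous_employment="driver",
    tribal_affiliation=None,
    communication_contacts=17,
)
```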

Second, programmers, working with lawyers and military operators, must code these features into the algorithm and train the algorithm on past cases that include examples of these features. One way to keep the system’s recommendations in line with legal rules would be to constrain it tightly and configure it to surface only recommendations in which it has high confidence. Third, the algorithm would produce a prediction about the identity or nature of a person or object, identifying the level of confidence in the prediction. Based on that prediction, legal officers would assess whether the person or object meets the legal test set out in the LOAC and advise the commander on the legality of the action.
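A minimal sketch of that second-phase idea, assuming a generic scikit-learn classifier, might look like the following; the feature vectors, labels, and the 0.9 confidence cutoff are all hypothetical assumptions introduced for illustration.

```python
# Hypothetical sketch of confidence gating: train a simple classifier on past
# cases, then surface a recommendation only when the model's confidence clears
# a threshold. All data and the threshold are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented historical cases: each row is a numeric feature vector
# (e.g., hostile acts, prior detentions, communication contacts).
X_train = np.array([[2, 1, 17], [0, 0, 3], [4, 2, 30], [1, 0, 5]])
y_train = np.array([1, 0, 1, 0])  # 1 = assessed as a threat in a past case

model = LogisticRegression().fit(X_train, y_train)

def recommend(features, threshold=0.9):
    """Return (prediction, confidence), or None if confidence is below the threshold."""
    proba = model.predict_proba([features])[0]
    label = int(proba.argmax())
    confidence = float(proba[label])
    if confidence < threshold:
        return None  # withhold low-confidence recommendations; defer to humans
    return label, confidence

print(recommend([3, 1, 20]))  # may print None if the model is uncertain
```

The design choice mirrors the paragraph above: withholding low-confidence predictions keeps the human decision maker, not the system, as the final interpreter of the legal test.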

There would be real benefits to pursuing these algorithms. Their use could not only improve the speed and accuracy of targeting and detention decisions, but could also have ancillary benefits. First, the process of developing these algorithms can force government officials to come to greater agreement on which bodies of law apply to particular situations and which factors are relevant to an algorithm involving detention or targeting decisions. Because the coding process will involve decisions on the nuances of the LOAC and may occur ex ante, before the system is deployed in the field, there may be greater opportunities for a set of US government actors beyond Department of Defense officials to participate in these decisions. Second, the use of these algorithms can make it easier for the military to recreate and verify its own detention and targeting decisions. Third, automation-focused efforts to quantify the characteristics and levels of confidence surrounding LOAC decisions may require military actors to question and more clearly articulate what drives their non-computerized military decisions.

When creating these decision support algorithms, military operators, programmers, and lawyers will undoubtedly face challenges. Training these types of algorithms requires large amounts of high-quality data, and the military must guard against hacking and frequently retest machine learning systems. Determining the specific characteristics relevant to the application of a LOAC rule will involve trial and error, as well as steep learning curves for everyone involved. Lawyers will need to understand the capabilities, requirements, and limitations of algorithms, while programmers will need to learn the basics of the LOAC and how the military makes LOAC-infused decisions under pressure. Keeping lawyers involved throughout the law-algorithm-law process is the safest course, at least until war becomes “hyperwar.”

***

Ashley Deeks is a professor of law at the University of Virginia School of Law, where she teaches international law and national security law.

Photo credit: Piqsels