Sistemi di supporto alle decisioni basati sull’IA e crimini di guerra: alcune riflessioni alla luce di una recente inchiesta giornalistica [AI-based decision support systems and war crimes: some reflections in light of a recent journalistic investigation]

Amoroso, Daniele
2024-01-01

Abstract

Recent reports by +972 magazine have highlighted the extensive use of AI-based decision support systems by the Israel Defense Forces (IDF) in military targeting. In particular, the investigation into the use of the so-called ‘Lavender’ system to identify members of the Hamas military wing describes decision-making processes that, if confirmed, could support allegations of war crimes such as ‘Attacks against the civilian population’ and ‘Excessive collateral damage’ under Art. 8(2)(b)(i) and (iv) of the ICC Statute. To be clear, Lavender is not an Autonomous Weapon System (AWS), but a decision support system that merely provides recommendations for the identification of legitimate targets and the execution of attacks. In other words, the final decision on target selection and engagement remains with human commanders and operators. Yet, the +972 investigation illustrates how the pervasive integration of AI into decision-making processes, even when it does not take humans out of the loop, can profoundly affect the conduct of hostilities. Against this backdrop, this article uses the +972 investigation into Lavender as a case study to discuss the implications, for international criminal law, of the increasing reliance on AI systems in targeting processes, considering both the hurdles to establishing individual criminal responsibility resulting from such reliance, and the new ways of perpetrating criminal conduct made possible by this technology.
Lavender; artificial intelligence; individual criminal responsibility; mens rea; proportionality
Files in this product:
  • Amoroso DUDI 2024.pdf — editorial version, Adobe PDF, 384.79 kB (access restricted to archive managers; copy available on request)
  • Amoroso DUDI 2024 post referaggio.pdf — post-print version, Adobe PDF, 1.08 MB (under embargo until 07/03/2026; copy available on request)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11584/415983