Autonomous Weapon Systems and International Humanitarian Law: Selected Issues (ICRC)
https://www.icrc.org/en/article/autonomous-weapon-systems-and-international-humanitarian-law-selected-issues
The ICRC's concerns about AWS
In the ICRC's view, shared by many States and other actors, AWS are weapon systems that, once activated, can select and engage one or more targets without further human intervention. After initial activation or launch, an autonomous weapon system triggers a strike in response to information from the environment received through sensors, on the basis of a generalized "target profile". As a result, the user does not choose, or even know, the specific target(s) or the precise timing and location of the resulting application of force.
The use of AWS entails serious risks due to the difficulty of anticipating and limiting their effects. The loss of human control and judgement in decisions over life and death raises profound humanitarian, legal and ethical concerns. In particular, AWS:
pose risks of harm to those affected by armed conflict, both civilians and combatants, as well as dangers of conflict escalation;
raise challenges for compliance with international law, including IHL, notably the rules on the conduct of hostilities; and
raise fundamental ethical concerns by delegating life and death decisions to machines, which diminishes both the moral agency of the users and the human dignity of those against whom force is used.
Regardless of the sophistication of AWS and associated sensor, software and robotics technologies, it is important to emphasize that IHL obligations regarding the conduct of hostilities must always be fulfilled by humans. It is not the weapon system that must comply with IHL, but the humans using it.
Download position paper (473K PDF)
https://www.icrc.org/sites/default/files/2026-03/4896_002_Autonomous_Weapons_Systems_-_IHL-ICRC.pdf