Dialnet

Los Sistemas de Armas Autónomos Letales y el Derecho Internacional Humanitario en la Guerra de Ucrania

  • Authors: Antonio Pedro Marín Martínez
  • Published in: Relaciones internacionales, ISSN-e 1699-3950, No. 53, 2023 (Open issue), pp. 71-90
  • Language: Spanish
  • Parallel titles:
    • Lethal Autonomous Weapons Systems and International Humanitarian Law in the Ukrainian War
  • Abstract
    • Spanish

      Within the framework of International Relations, Lethal Autonomous Weapons Systems (LAWS), which rely on technologies associated with Artificial Intelligence (AI) and robotics, are increasingly present in the operational theatre of the Ukrainian War. Unfortunately, scientific progress is not always accompanied by a parallel adaptation of International Law, particularly International Humanitarian Law (IHL). Moreover, the rapid pace of technological research continues to expand the operational capabilities of these systems without human intervention, further straining the proper application of the law in the military sphere, and more specifically in relation to cyberwarfare. This situation is aggravated by the withdrawal of the Russian Federation from various international bodies. Since no new international treaty is in sight in the near future, any armed conflict will have to be faced with the existing legal framework rather than the one we would wish to have. At present, however, there is no consensus among states on how to apply existing International Law, in practice, in cyberspace. To address this problem, it would be worth exploring the potential of certain non-binding legal instruments already developed, such as the Tallinn Manuals of the North Atlantic Treaty Organization (NATO) or the guiding principles on LAWS formulated by the Group of Governmental Experts (GGE) within the framework of the United Nations (UN) Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects (CCW), to serve as a basis for future binding international agreements.

      The possibility of developing new AI-based algorithms should also be studied, enabling so-called Artificial Moral Agents (AMA) that could be implemented in LAWS as instruments for controlling their use, based on the guidelines established by IHL.

    • English

      The Ukrainian War has provoked an acceleration in the development of highly sophisticated Lethal Autonomous Weapons Systems (LAWS) based on Artificial Intelligence (AI). Unfortunately, these LAWS are being used by the Russian Federation against the civilian population, such as the Iranian “suicide” drones “Shahed 136” or the Russian “Geran-2”, continuously destroying basic critical civilian infrastructure, such as electricity and water supplies, as well as residential complexes, provoking terror, human suffering and displacement. These are indiscriminate attacks that violate the foundations of International Humanitarian Law (IHL), in particular the principles of distinction, humanity and proportionality within the international law of armed conflict, especially as codified in the Additional Protocol I (API) of 1977 and the Geneva Conventions of 1949.

      In an international situation of increasing complexity, volatility and geopolitical instability, the key question is to determine the impact that the use of LAWS would have on their most critical function: the release of force, a product of the combination of the physical and computational worlds in a scenario of “Mixed Reality”, with an increasing depersonalization of war. This use of force is immersed in a technological acceleration of “weapons systems autonomy”, a concept defined by the International Committee of the Red Cross (ICRC) in 2019 as “any weapon system with autonomy in its critical functions. That is, a weapon system that can select and attack targets without human intervention”. However, this definition is not an international standard; different states and organizations use different ones, which may vary over time or with the geopolitical situation. In any case, the effects of an autonomous weapons system depend not only on its design but also on how it is used and on the vulnerability of those affected.
      In addition, it must be reiterated, with regard to the Ukrainian War, that the concept of a “just war”, within the area of ius ad bellum, requires that any harm to civilians (persons and/or objects) must not be excessive in relation to the military advantage obtained, and that the destruction arising from such a war must not be disproportionate. Thus, a war such as that initiated by the Russian Federation in Ukraine could be considered unlawful if the harm caused is deemed too high. Likewise, within ius in bello, collateral damage to civilians is prohibited when it is disproportionate, as an excessive use of force, under Article 51(5)(b) of the API. Moreover, under Article 8(2)(b)(i-iv) of the Rome Statute, such acts could constitute “war crimes”. However, it would be difficult to apply these articles to the Russian Federation, since it has withdrawn from the Rome Statute and therefore does not accept the jurisdiction of the International Criminal Court in The Hague.

      These difficulties have given rise to campaigns by a number of Non-Governmental Organizations (NGOs), such as “Stop Killer Robots”, which work to prohibit LAWS and to maintain human control over the use of force, so that a machine would never make decisions about life or death, while also questioning the capacity of LAWS to comply with the principles of IHL. This position has also been put forward by a number of Latin American and Caribbean states during the Conference on the Social and Human Impact of Autonomous Weapons, held in February 2023. It has, however, met with reticence from some of the major world powers, including the United States of America, the Russian Federation, Australia and the United Kingdom, which consider it premature.
      The current discussions within the Group of Governmental Experts (GGE) of the CCW on LAWS at the UN therefore need to be revamped, which seems to have been the case during the 2023 sessions. However, there is still no consensus on definitions, especially on what is considered a LAWS and on its technical characteristics, nor on the meaning of Meaningful Human Control (MHC). At the same time, these discussions are subject to continuous change depending on the geopolitical situation. We must therefore recognize that, although there are growing calls for a new international legal treaty on LAWS and IHL, the current international convulsions and geopolitical difficulties suggest that no progress will be made in the near future. Moreover, the various proposals for self-regulation through “soft law” carry no legal obligations. This means that actions involving LAWS by some states, such as their use by the Russian Federation in the Ukrainian War, will have to be addressed through existing IHL rules that were not designed for the digital world. It also remains unclear how the “war crimes” committed in the Ukrainian War will be prosecuted unless a legal body accepted by all parties is created.

      We therefore believe it is imperative to develop a new legal treaty on LAWS and IHL that binds all states. Such a legal instrument would need to be sponsored by the UN and include international standards defining both the term LAWS and the issue of MHC. At the same time, it seems clear that some type of AI algorithm should be implemented as moral software in all LAWS, one able to inhibit the action of these artifacts before they commit actions that are illegal under IHL, as a means of implementing the principle of precaution. Such algorithms would be based on a deontological ethics grounded in IHL, while retaining practical aspects drawn from military operations, such as the development of new Rules of Engagement (ROE).
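      The abstract's closing proposal, a deontological "moral software" that inhibits a LAWS before it commits an IHL-illegal action, can be illustrated with a minimal sketch. This is purely an illustration of the idea, not anything specified in the article: all names (`Engagement`, `moral_gate`) and the numeric thresholds are hypothetical assumptions, and real distinction or proportionality assessments are far beyond a few boolean checks.

```python
from dataclasses import dataclass

@dataclass
class Engagement:
    """Hypothetical summary of a proposed strike (illustrative only)."""
    target_is_military: bool          # distinction: is the target a military objective?
    expected_civilian_harm: float     # estimated collateral damage (arbitrary units)
    military_advantage: float         # estimated military advantage (same units)
    classification_confidence: float  # confidence of the target classification, 0..1

def moral_gate(e: Engagement, min_confidence: float = 0.95) -> bool:
    """Return True only if the engagement passes simple IHL-inspired checks.

    Encodes the principles named in the abstract as hard deontological
    rules: distinction, precaution, and proportionality. Any failure
    inhibits the release of force (the default is 'do not engage').
    """
    if not e.target_is_military:                         # distinction
        return False
    if e.classification_confidence < min_confidence:     # precaution
        return False
    if e.expected_civilian_harm > e.military_advantage:  # proportionality
        return False
    return True

# A non-military or low-confidence target is always inhibited.
print(moral_gate(Engagement(False, 0.0, 10.0, 0.99)))  # False
print(moral_gate(Engagement(True, 1.0, 5.0, 0.99)))    # True
```

      The design choice of returning `False` on any failed check mirrors the precautionary logic the abstract calls for: the system must inhibit action before a potentially illegal act takes place, rather than assess it afterwards.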

