The
possibility of early deployment of lethal autonomous weapons on battlefields presents
an urgent need for global action to regulate these technologies.
That
is the conclusion of a new book entitled “The AI Military Race: Common Good
Governance in the Age of Artificial Intelligence,” written by Denise Garcia, professor
of political science and international affairs at Northeastern University,
who was part of the International Panel for the Regulation of
Autonomous Weapons from 2017 to 2022.
As
artificial intelligence progresses, weapons of war increasingly become capable
of killing people without meaningful human supervision, raising troubling
questions about how today’s and tomorrow’s wars will be waged, and how
autonomous weapons systems could weaken accountability for possible
violations of international law accompanying their deployment.
In
her book, Denise Garcia condenses these bleak realities and explores the
challenges of “creating a global governance framework” that anticipates a world
of unbridled AI weapons systems in the context of deteriorating international
law and norms. She notes that military AI applications have already
been deployed in the ongoing conflicts in Europe and the Middle East; one of
the best-known examples is the Israeli Iron Dome.
“The
world must come together and create new global public goods, which I would say
should include a framework for governing AI, but also commonly agreed rules on
the use of AI in the military,” Garcia said in a statement from her university.
Garcia
warns that accelerating militarized AI in this way is not the right approach
and risks adding more volatility to an already very unstable international
system. “Simply put, AI should not be trusted to make decisions about war,” she
says.
Some
4,500 AI and robotics researchers have collectively stated that AI should not make
decisions about killing humans, a position, Garcia says, that is in line with European
Parliament guidelines and European Union regulation. But U.S. officials have
pushed for a regulatory paradigm of rigorous testing and design so that humans
can use artificial intelligence technology “to make the decision to kill.”
“This
seems good on paper, but it is very difficult to achieve in reality, as
algorithms are unlikely to grasp the enormous complexity of what happens
in war,” Garcia says.
AI
weapons systems not only threaten to alter accountability standards under
international law, but also make the prosecution of war crimes much more difficult
because of problems associated with attributing “combatant status” to AI systems,
Garcia says.
“International
law has evolved to focus on the human being,” she says. “When a robot or
software is inserted into the equation, who will be responsible?”
She
continues: “The difficulties of attributing responsibility will accelerate the
dehumanization of war. When humans are reduced to data, human dignity will dissipate.”
Existing
AI and quasi-AI military applications have already caused a sensation in defense
circles. One such application allows a single person to control multiple
unmanned systems, according to a source, such as a swarm of drones capable of
attacking by air or under the sea. In the war in Ukraine, loitering munitions
have sparked a debate about exactly how much control human agents have over
decisions on targets.