The nature of war, defined by violence, chance and rationality, remains constant, while the character of war, influenced by geopolitics, geo-economics, societal norms and technology, is prone to constant change.
Over the decades, despite several Revolutions in Military Affairs (RMA) marked by the invention of gunpowder, tanks, aircraft and nuclear weapons, the phenomena elaborated by the famous Prussian strategist Clausewitz remain relevant.
In particular, modern conflicts are witnessing a revolutionary transformation in the character of warfare through the development and employment of AI-based weapon systems.
Advancements in AI have enabled the introduction of Lethal Autonomous Weapon Systems (LAWS) with the ability to autonomously scan, surveil, acquire, identify, lock onto, track and destroy a range of airborne, seaborne and ground-based targets with remarkable accuracy, and to carry out battle damage assessment.
AI-based systems are affecting the different domains (command, control, communications, computers, intelligence, information, surveillance and reconnaissance), levels (sub-tactical, tactical, operational, strategic and grand strategic) and decision-making processes of modern warfare.
Such autonomy, however, often results in unacceptable collateral damage; it not only challenges the extent of desired human control but also raises serious questions about the degree of decision-making autonomy delegated to machines.
More and more countries and military-industrial complexes worldwide are spending billions of dollars and dedicating resources to surpass one another in the pursuit of AI-enabled and AI-centric command and control systems and manned, unmanned and autonomous weapon systems.
In 2017, the UN Office for Disarmament Affairs (UNODA) carried out a study that identified a growing trend among a number of countries to pursue and develop autonomous weapon systems. According to the report, this ever-growing trend carried a real risk of 'uncontrollable war'.
Similarly, a study on 'AI and Urban Operations' conducted by the University of South Florida, USA, concluded that 'the armed forces may soon be able to monitor, strike and kill their opponents and even civilians at will'.
The ruthless and lethal use of an AI-driven targeting system (target selection, acquisition, firing and destruction) was exemplified by the Israeli Defence Forces (IDF) in Gaza.
The Guardian revealed in December 2023 that the IDF used an AI-based targeting system called Habsora (the Gospel) to generate more than 100 targets in a day, whereas, according to former IDF chief Aviv Kochavi, a human intelligence-based system could identify only up to 50 targets in a year.
The chief executive of the Israeli tech firm Start-Up Nation Central, Avi Hasson, stated that the 'war in Gaza has provided an opportunity for the IDF to test emerging technologies which had never been used in past conflicts'.
Consequently, the IDF destroyed more than 360,000 buildings, indiscriminately killed over 50,000 Palestinians and injured over 113,500, most of them innocent women and children.
Ironically, the indiscriminate killing of non-combatants is forbidden under the Fourth Geneva Convention of 1949.
In the same context, Pakistan has launched the Centre for Artificial Intelligence and Computing (CENTAIC) under the auspices of the Pakistan Air Force to spearhead AI development and the AI-based integration of various air, land and sea-based weapon systems into the operational and strategic domains.
In the South Asian context, given the long-standing enmity under the nuclear overhang, the introduction of AI-based LAWS and their unhesitating use could have serious repercussions for the security architecture.
The absence of a comprehensive regulatory legal framework, coupled with the non-existence of state monopoly over such technology, further complicates the security situation.
To gauge the destructive and dangerous nature of AI-driven command and control systems, in January 2024 a group of researchers from four different US universities simulated a war scenario using five different AI models, including those of OpenAI and Meta's Llama. The findings of the study revealed that all the simulated models chose nuclear weapons as their first weapon of choice against their adversaries, over other weapons and peace initiatives. The results were shocking for the scientists and champions of AI-based LAWS, and a wake-up call for global leaders to set their differences aside and help the UN regulate the use of AI in all domains of warfare.
The ready availability of AI technology, alongside the absence of global or state-level regulation and monopoly, makes it vulnerable to use by non-state actors (NSAs).
The situation demands collective action and the incorporation of a stringent regulatory framework at the global and national levels.
Globally, concerted efforts are required to bring to a legally and ethically binding conclusion initiatives such as the UN dialogue on AI-based LAWS, under way since 2014 through the Convention on Certain Conventional Weapons (CCW) and its Group of Governmental Experts (GGE).
UN Secretary-General Antonio Guterres, sensing the significance and urgency of the issue, highlighted in his 2023 'A New Agenda for Peace' that 'there is a necessity to conclude a legally binding instrument to prohibit the development and deployment of autonomous weapon systems by 2026'.
—The writer, a retired Air Commodore, is recipient of SI(M) and TI(M).