ROLE OF MILITARY ARTIFICIAL INTELLIGENCE  
IN THE EVOLUTION OF CONFLICTS  
Lieutenant-general (ret.) Professor Cristea DUMITRU, Ph.D  
(Academy of Romanian Scientists, 3 Ilfov Street, 050044, Bucharest,  
Romania, email: secretariat@aosr.ro)  
Abstract: Artificial intelligence may accelerate the pace of war, but security  
architecture must also accelerate and deepen at the same pace, building a new  
generation of capabilities.  
The real challenge in military artificial intelligence (AI) is to maintain
human accountability and political wisdom while preserving technological
superiority as we move toward algorithmic dominance. Therefore, every technical,
legal, and ethical step taken today will shape not only the nature of war but also the  
future of the international order.  
Keywords: military artificial intelligence, security architecture, AI norms,  
AI in military operations, human control in military AI, hyper-war.  
DOI: 10.56082/annalsarscimilit.2026.1.19
Military experts believe that artificial intelligence (AI) can accelerate  
the pace of war and determine combat success, but the security architecture  
must accelerate, adapt, and deepen at the same pace, building and developing  
a new generation of capabilities. If this does not happen, the fragility/entropy  
of strategic stability will increase within the millisecond decision cycles of  
next-generation AI-powered warfare technology.  
Senior officials responsible for military AI meet periodically at summits
dedicated to its responsible use. Such meetings were held in The Hague in
2023, in Seoul in 2024, and in Spain in February 2026. After this year's
summit, attended by experts from 85 countries, a declaration was adopted
comprising 20 key principles.
The principles set out in that declaration include maintaining human
responsibility for AI-based weapons, establishing clear chains of command
and control, exchanging information on national oversight mechanisms, and
focusing on risk assessment, training, and testing processes for military
personnel. At the same time, some states, especially those whose investments
in AI are paramount, consider this approach too cautious. In their view,
excessive caution and regulation induce critical delays in the impetuous
development of AI and quantum technologies, and in military terms any delay
or loss of initiative in this extremely dynamic field can constitute a
catastrophic deficit. The reality is that we are living at the beginning of
a new era of humanity: the era of AI, of intelligent robots and quantum
computing, of technologies that change a social order based on work. This
colossal transformation generates seismic social imbalances, which almost
inevitably trigger both internal and international conflicts that are
difficult to control and stop. Anxiety over this prospect favors the
solution of cautious regulation, even though such regulation would have
little real effect; on this view, it is precisely the acceleration of AI
development, and the attainment of a threshold of general superintelligence,
that would provide systemic solutions to social problems and eliminate the
root causes of world conflicts.
It is clear that decades of neglect of the potential of technological
advances by the political and economic spheres have generated surprise, a
total lack of preparation, and consequently a setback in the wide-scale
adoption of AI, together with a tendency toward overregulation, including in
the military. The risk is that, in the military domain, an actor who breaks
the rules can create an insurmountable advantage in a possible conflict if
it achieves AI dominance. Moreover, once AI dominance is secured, the
adversary has no chance of recovery; the loss is final and cataclysmic.
Thus, future conflicts will be conflicts for the dominance of algorithms.
At the REAIM (Responsible AI in the Military Domain) summits of 2023 and
2024, many countries, coordinated by the US, adopted a more sensible
framework of principles. This year, however, the fact that two leading
countries in AI technology, the USA and China, did not sign the declaration,
and that the number of countries ratifying it was limited (35), demonstrated
how limited the development and sharing of an internationally accepted set
of norms and principles remains. Unfortunately, a common denominator
regarding AI in the military has not yet been established in the
international system.
This situation, in addition to the profound transformation triggered by  
AI technologies in the operational environment, can be considered a threat to  
security.  
Military Artificial Intelligence in the Context of Security Threats  
Currently, the use of AI in the military context covers a wide range of  
areas, from command and control and decision support to electronic and cyber  
warfare, missile defense and unmanned aerial, land and naval vehicles.  
In numerous scenarios, which exceed human cognitive capacity or  
require rapid analysis of large and complex data sets, AI functions as a key  
element. Technological progress already places almost all the requirements
of the modern battlefield in this category. In other words, AI already takes
warfare to a dimension that goes beyond human perception and reaction. It is  
already a reality that, in this new type of warfare, defined in the specialized  
literature as “HYPER-WAR”, the decision-making and execution cycle can  
be completed in milliseconds.  
It is clear that, at this point, human control must be considered from a
much broader perspective than that of the simple "finger that pulls the
trigger".
The first step in this direction is therefore to design human control as a
meaningful and effective component at every stage of AI-enabled processes.
Such a structure would ensure that ethical and legal responsibility for the
use of artificial intelligence remains entirely within human control and
competence. At the same time, the vulnerabilities of AI technologies must
also be taken into account, the greatest of which is exposure to strong
electromagnetic pulses, a vulnerability that could quickly eliminate the
advantages of an ultra-technological army.
The second step would be to develop filters and barriers that prevent
systems and processes directly or indirectly managed by AI from "negatively
feeding" one another, interacting, and escalating a conflict. Such a
multi-level filtering structure could prevent false alarms, "AI
hallucinations", loss of situational awareness, or unwanted engagements.
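As a purely illustrative sketch of such a multi-level filtering structure (the filter names, thresholds, and data fields below are assumptions for illustration, not features of any fielded system), independent checks can be chained so that an AI-generated engagement recommendation escalates only if every layer agrees:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A hypothetical AI-generated engagement recommendation."""
    target_id: str
    confidence: float           # the model's own confidence, 0..1
    corroborating_sensors: int  # independent sensors confirming the track
    human_approved: bool        # explicit human confirmation received

def confidence_filter(rec, threshold=0.95):
    """Reject low-confidence outputs (guards against AI hallucinations)."""
    return rec.confidence >= threshold

def corroboration_filter(rec, min_sensors=2):
    """Require independent sensor corroboration (guards against false alarms)."""
    return rec.corroborating_sensors >= min_sensors

def human_gate(rec):
    """Final barrier: no engagement without explicit human approval."""
    return rec.human_approved

FILTERS = [confidence_filter, corroboration_filter, human_gate]

def passes_all_filters(rec):
    """A recommendation escalates only if every layer independently agrees."""
    return all(f(rec) for f in FILTERS)
```

The design point this sketch illustrates is that each layer can veto independently, so a single faulty component (a hallucinating model, a spoofed sensor) cannot trigger an engagement on its own.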
The third crucial step in managing internal security risks in military AI
could be validating the security and integrity of the algorithms and of the
datasets that feed them. Preventing contamination of datasets, identifying
the conditions and moments in which AI algorithms cannot make fully
autonomous decisions, and certifying and monitoring both the algorithms and
the hardware they run on are important measures in this context.
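One minimal, hedged illustration of this certification idea (the manifest format and artifact names are assumptions made for the example): dataset and model artifacts can be pinned to cryptographic digests at certification time, so that any later contamination or tampering is detectable before the system is trusted again:

```python
import hashlib
import json

def digest(data: bytes) -> str:
    """SHA-256 digest of an artifact (dataset shard, model weights, etc.)."""
    return hashlib.sha256(data).hexdigest()

def build_manifest(artifacts: dict[str, bytes]) -> str:
    """Record the certified digest of every artifact in a manifest that
    can be signed off at certification time."""
    return json.dumps({name: digest(blob) for name, blob in artifacts.items()},
                      sort_keys=True)

def verify(manifest: str, artifacts: dict[str, bytes]) -> list[str]:
    """Return the names of artifacts whose content no longer matches the
    certified manifest (an empty list means integrity is intact)."""
    certified = json.loads(manifest)
    return [name for name, blob in artifacts.items()
            if certified.get(name) != digest(blob)]
```

In practice such a manifest would itself be cryptographically signed and checked continuously, not only at deployment; the sketch shows only the detection principle.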
However, it is important to highlight that the lack of a minimum global
consensus on military AI makes it impossible to adopt structured ethical,
legal, and normative measures in this area. The existing international
deficit of strategic trust is worsening today, aggravated further by the
attractiveness of the opportunities and capabilities offered by military
artificial intelligence; the concerns periodically expressed by various
lobby groups and states under headings such as human rights,
confidentiality, and protection of personal data grow weak in the face of
the increasing effectiveness of artificial intelligence in military
operations.
Regulation of Artificial Intelligence. Limits and Requirements  
At this point, we must ask ourselves how we can determine whether  
an AI software or algorithm is safe or “human”. Among AI experts, it is  
proposed that a structure similar to the International Atomic Energy Agency  
(IAEA), which oversees nuclear facilities, be established for military artificial  
intelligence. It must be emphasized, however, that the code making up an
algorithm leaves no traces the way radioactive elements do, and tracing the
source of code is far more difficult than supervising the construction and
operation of a nuclear facility. As a solution, it is therefore proposed to
establish "algorithmic control rooms", in which countries would test and
verify artificial intelligence systems without revealing their fundamental
architecture or confidential components. The certification resulting from
tests carried out in these simulation environments could set a standard in
both the commercial and military fields.
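A hedged sketch of the "algorithmic control room" idea (the scenario format, field names, and pass criterion are illustrative assumptions): the system under test is exercised only through its input/output interface, so an evaluator can certify behavior on standardized scenarios without ever seeing the model's architecture or weights:

```python
from typing import Callable

# The system under test is exposed only as an opaque callable:
# scenario inputs in, decision out. Its internals remain confidential.
BlackBoxSystem = Callable[[dict], str]

def certify(system: BlackBoxSystem, scenarios: list[dict]) -> bool:
    """A system is certified only if its observable decision matches the
    pre-agreed expected outcome on every standardized test scenario."""
    return all(system(s["inputs"]) == s["expected"] for s in scenarios)

# Illustrative scenario set: engagement must be withheld without explicit
# human authorization, regardless of the detected threat.
SCENARIOS = [
    {"inputs": {"threat": True,  "human_authorized": False}, "expected": "hold"},
    {"inputs": {"threat": True,  "human_authorized": True},  "expected": "engage"},
    {"inputs": {"threat": False, "human_authorized": False}, "expected": "hold"},
]

def compliant_system(inputs: dict) -> str:
    """A toy stand-in for a vendor's confidential system."""
    return "engage" if inputs["threat"] and inputs["human_authorized"] else "hold"
```

The scenario suite, not the source code, is what gets standardized and shared, which is why such behavioral certification could work across states that refuse to disclose their algorithms.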
It is therefore important to emphasize that the main issue is not  
whether AI will be used in the battlespace, but how, by whom and within  
what limits it will be used. At the same time, it is clear that the
development of norms in the field of military artificial intelligence cannot
be considered independently of the competition between great powers. It is
necessary that technology leaders such as the US and China intensify their  
participation in the development of norms with a view to creating a globally  
accepted ethical framework in the field of generative AI.  
For example, China has stated that its long-planned military  
modernization will proceed in three distinct but overlapping phases:  
mechanization, i.e., the adoption of modern machinery and equipment;  
informatization/digitalization, i.e., the integration of advanced information  
technologies and cyber networks to connect military platforms and enable  
real-time information exchange; and intelligentization, i.e., the
application of artificial intelligence to automate operations, support
decision-making, and control advanced weapons. Today, China is placing
greater emphasis on integrating AI into its military by prototyping AI
capabilities that can pilot unmanned combat vehicles, detect and respond to
cyberattacks, track naval vessels and submarines, and identify and strike
targets on land, at sea, and in space.
The Chinese military is also developing systems that ingest, analyze,  
and augment massive amounts of data to improve tactical and strategic  
decision-making, as well as tools that create deepfake images and videos for  
disinformation campaigns. Accordingly, Beijing believes that the military  
that better develops and adopts artificial intelligence and other emerging  
technologies will gain a major advantage in future wars, especially those AI  
technologies that support and accelerate decision-making. The Chinese  
military’s AI experiments also extend to cyber and information operations,  
developing AI tools to automate the detection of intrusions into its computer  
networks, to increase the resilience of military communications, and to  
enhance its cyber operations. Meanwhile, Chinese officers and soldiers are  
using AI systems to simulate virtual battlefields and model the behavior of  
competitors, which improves their training for future conflicts.  
The Chinese military is increasingly applying AI to diminish U.S.
advantages in the space and maritime domains. It is openly pursuing satellite-targeting
algorithms as well as new anti-satellite weapons, some of which involve small  
robots that can capture and disable an adversary’s space platforms. China  
aims to build a military that leverages advanced technologies to learn,
adapt, and make decisions quickly and more accurately across all operational
domains. China’s path will certainly not be smooth. The Chinese military still  
faces obstacles in integrating artificial intelligence across its entire force and  
operations. Moreover, training AI systems requires a large amount of data  
that is not readily available on the internet, such as classified images of  
military platforms or the electromagnetic signatures of various radars and  
weapons.  
Unless technological leaders like the U.S. and China participate in the  
discussions, efforts to create an accepted global ethical framework will  
inevitably remain limited.  
At the national level, institutionalizing measures such as meaningful  
human oversight, algorithmic security, data integrity, and multi-layered  
filtering mechanisms is an urgent need. This is because a significant portion  
of the risks posed by military AI come not from intentional misuse, but from  
design flaws, inaccurate data, unpredictable interactions, and the pressure for  
speed.  
   
Conclusions
Artificial intelligence may accelerate warfare, but the security
architecture must also accelerate and deepen at the same pace, building a
new generation of capabilities.
The real challenge in military artificial intelligence (AI) is to maintain
human accountability and political wisdom while preserving technological
superiority. Therefore, every technical, legal, and ethical step taken today will
shape not only the nature of warfare but also the future of the international  
order.  