The impact of the post-information age on modern warfare

The integration of Artificial Intelligence (AI) and other advanced technologies is redefining modern warfare, extending conflict from traditional physical battlefields into the digital realm. This essay examines the impact of the post-information age on military strategies, operations, and national security. Key areas of exploration include the role of cyberspace as the fifth domain of warfare, the implementation of AI in decision-making, autonomous weapon systems, and the spread of misinformation in military disputes.

Throughout history, technology has significantly influenced the nature of warfare. A clear example is how a nation of several million people managed to build an empire on which the sun never set. Simply put, those with more advanced technology than their opponents often gain a significant advantage. However, with the advent of the post-information age, conflicts are shifting beyond physical battlefields into the digital realm. This evolution is profoundly transforming the traditional nature of warfare, opening up both new opportunities and new risks; examples include cyberattacks, manipulation of public opinion, and autonomous weapons. This essay aims to shed light on the issues of the post-information age in warfare, examine the impacts of this transformation, and reflect on the ethical questions associated with the use of advanced technologies in military settings.

The role of cyberspace in contemporary conflicts

The Internet, as one of the most important inventions, has broken down barriers and transformed the way we communicate, work, entertain ourselves, and deliver services. Our world has become increasingly networked, with digitized information supporting key services and infrastructures (Gallaher, 2008). Nations, states, organizations, and ultimately users are all concerned about threats to the confidentiality, integrity, and availability of digitized information (Rid & Buchanan, 2014). The rapid advancement of digitalization and the increasing number of vulnerable devices affect not only ordinary users but also governments and militaries. Cyberspace has become the fifth domain of warfare, alongside land, sea, air, and space (Geers, 2011). Modern conflicts are increasingly conducted through cyberattacks, which have the potential to cripple critical infrastructure such as power grids, transportation systems, and healthcare services (Clarke & Knake, 2010).

In a digital world that is progressively permeating every area of our daily lives, both public and private, security is a must. In the field of information technology, cybersecurity plays a critical role; when an attack occurs, it is the first thing that comes to mind (Ablon et al., 2014). Protecting personal data online has become a major concern, so the military in the 21st century needs to adapt to prevent potential problems related to cyber warfare.

Cybersecurity is therefore a must-have capability for modern armies. In today’s digital world, it is an essential safeguard that protects both individuals and organizations from the growing risks of cyberattacks. As technology integrates into every aspect of daily life, from personal communication to critical infrastructure, ensuring the safety and integrity of digital systems is paramount. In the 21st century, this is especially true for the military, where cyber threats pose risks to national security and operational effectiveness.

How does cybersecurity work?

Cybersecurity in the military works through a structured, multi-step approach to protect sensitive systems and ensure operational security (a brief illustrative sketch of the access-control step follows the list below):

  1. Threat Prevention: Military systems use firewalls, intrusion prevention systems (IPS), regular updates, and endpoint protection to block unauthorized access and prevent malware.
  2. Access Control: Multi-factor authentication (MFA), biometric systems, zero trust architecture, and role-based access ensure only authorized personnel access sensitive data.
  3. Monitoring and Detection: Security Operations Centers (SOC) and Intrusion Detection Systems (IDS) monitor networks 24/7 to identify anomalies or threats, while threat intelligence updates defenses against new attacks.
  4. Incident Response: During an attack, predefined protocols activate to isolate threats, conduct forensic analysis, and restore systems using secure backups.
  5. Offensive Cyber Capabilities: Military cyber units execute operations to disrupt enemy systems, such as communication networks or critical infrastructure.
  6. Training and Simulations: Regular drills and cybersecurity training prepare personnel to recognize threats and respond effectively to attacks.
  7. Encryption and Secure Communication: End-to-end and quantum encryption protect critical communications and data from interception.

(ChatGPT4, original prompt: “Could you explain in multiple steps how does cybersecurity in the military works?”)
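
To make step 2 more concrete, below is a minimal, illustrative Python sketch of role-based access control combined with an MFA gate. It is not drawn from any real military system; all roles, clearance levels, and resource names are invented for the example.

```python
# Illustrative sketch of role-based access control with an MFA gate.
# All roles, clearances, and resource names are hypothetical examples,
# not a depiction of any real military system.
from dataclasses import dataclass, field

CLEARANCE_LEVELS = {"public": 0, "restricted": 1, "secret": 2, "top_secret": 3}

@dataclass
class User:
    name: str
    role: str
    clearance: str
    mfa_verified: bool = False

@dataclass
class Resource:
    name: str
    required_clearance: str
    allowed_roles: set = field(default_factory=set)

def can_access(user: User, resource: Resource) -> bool:
    """Grant access only if MFA succeeded, the role is allowed,
    and the user's clearance meets the resource's requirement."""
    if not user.mfa_verified:
        return False  # zero-trust flavor: no access without a verified factor
    if resource.allowed_roles and user.role not in resource.allowed_roles:
        return False
    return CLEARANCE_LEVELS[user.clearance] >= CLEARANCE_LEVELS[resource.required_clearance]

analyst = User("a.smith", role="intel_analyst", clearance="secret", mfa_verified=True)
logs = Resource("mission_logs", required_clearance="secret", allowed_roles={"intel_analyst"})
print(can_access(analyst, logs))  # True
```

The ordering of the checks reflects the zero-trust idea from the list: the authentication factor is verified before role or clearance is even considered.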

Now that we have a basic understanding of how cybersecurity works, I would like to provide some real-world examples to demonstrate how serious the consequences can be when these defenses fail. One of the most notable is Stuxnet.

In May 2011, the Pentagon announced an official list of cyber weapon capabilities approved for use against adversaries. The list included a “toolkit” of methods to hack foreign networks, examine and test their functionality and operations, and the ability to leave “viruses” to facilitate future targeting (Nakashima, 2011). Several months before that, in July 2010, the Iranian-Israeli conflict seemed to have taken a dangerous and accelerating turn, as details were revealed of the cyberattack on the Iranian Natanz nuclear facility using a virus, or “malicious computer worm”, called Stuxnet (Fruhlinger, 2017).

The attack revealed the possibility of causing massive physical destruction in industrial facilities or vital infrastructure networks of any state without the need to mobilize armies or move fleets. The pace and momentum of reciprocal cyberattacks between the two sides has accelerated since the Stuxnet attack was first revealed in 2010. Analysts have come to use the term “cyber war” without hesitation to describe the reciprocal cyberattacks between Iran and Israel. This was accompanied by a large cloud of controversy, mutual accusations, and theories seeking to probe the depths of the new term that had emerged in political circles: “cyber war”. (Mohee, 2022)

Another example showcasing the importance of cybersecurity is the Russian cyberattacks on Estonia in 2007. In April 2007, tensions with Russia increased significantly due to the decision of the authorities of Tallinn, the Estonian capital, to remove the Bronze Soldier of Tallinn, a statue commemorating the Soviet soldiers who had liberated Estonia. For Estonians, it was a symbol of oppression; for Russians, its removal meant the destruction of cultural heritage and a lack of respect for the Red Army, which had fought against Nazi Germany during World War II. After the statue was moved, relations between Estonia and Russia became very tense. The Kremlin accused the Tallinn authorities of violating human rights and demanded the resignation of the Estonian Prime Minister. Simultaneously, serious street riots erupted between the police and the Russian minority in Estonia, along with protests in front of the Estonian Embassy in Moscow and a massive cyberattack campaign. Estonia was highly dependent on the internet: almost the whole country was covered by Wi-Fi, all government services were available online, and 86% of the Estonian population did their banking online. In 2007 there was also the opportunity to vote electronically, and 5.5% of voters did so. On 26 April, a growing volume of cyberattacks was noticed, and this day is commonly recognized as the beginning of the massive attack. The attack peaked on May 9; from that date, the number of hostile acts started to decrease. On May 11, the paid botnet activity ended, and the last attack took place on May 23. (Kozlowski, 2014)

The spread of false information

With the rise of technology and artificial intelligence, the spread of deepfakes and other types of counterfeit information has grown significantly and now affects everyday life. Altered footage and videos are used as a tactic to spread false information about politicians, events, and data, and to sway public opinion, particularly the views of citizens in countries affected by military disputes, exposing them to accusations of bias.

Since the beginning of the military aggression in Ukraine, the state has been severely exposed and targeted by a Russian disinformation campaign maneuvered by the Kremlin Regiment and other pro-Russian groups as part of the Kremlin’s hybrid warfare. Its main purpose is to disseminate rumors about Ukrainian political corruption until the Ukrainian government loses its credibility in the eyes of its nation and allies. (Ștefan & Balla, 2024)

To be fair, propaganda does not come from only one side of the conflict. Ukraine’s online propaganda is mostly focused on creating heroes and martyrs, as well as possibly incorrect data regarding war statistics. President Volodymyr Zelensky declared that 31,000 Ukrainian soldiers had been killed since the beginning of the war, although unofficial estimates suggest that those numbers are grossly underestimated. The real numbers of dead and injured are not disclosed by either side, each often describing the other’s losses as “vast”. The Ukrainian President officially stated: “I don’t know how many of them died, how many were killed, how many were murdered, tortured, how many were deported.” (Ștefan & Balla, 2024) Countries at war also attempt to spread disinformation in order to manipulate the citizens of the opposing state, further complicating the cyberspace landscape. The main goal of such efforts is to create a situation so convoluted that it becomes difficult for anyone to distinguish truth from falsehood. These actions can lead to public disorder and make a difficult situation even worse. We can observe this phenomenon today, as many people across Europe believe that Russia is in the right; such individuals may potentially become allies of the aggressor, highlighting the far-reaching impact of disinformation campaigns in cyberspace.

Use of AI in the military

Artificial Intelligence (AI) is the driving force behind the latest technological advancements, enabling machines to perform tasks that require human-like intelligence. This technology is rapidly emerging as a powerful tool that holds immense potential for benefiting future generations. The proliferation of AI across various sectors has led to remarkable progress, with persistent research and innovation pushing advancements in many fields, including the economy, society, and power politics.

To survive in the complex geopolitical landscape of today, countries must be able to defend against key security challenges, manage their geopolitical complexities, and maintain a strong military. Military strength depends on strategy, doctrine, equipment, and warfare tactics, all of which contribute to combat readiness and sustainable military capabilities. AI plays a crucial role in reshaping these areas, directly and indirectly influencing military operations.

The self-evolving nature of AI makes it essential for developing advanced military strategies and technologies. As AI continues to evolve, it will impact virtually all operational domains, including land, sea, air, space, and information. AI is set to improve military applications such as reconnaissance, surveillance, intelligence analysis, command and control, and logistics. By enhancing these areas, AI will fundamentally change how warfare is conducted, as well as improve border security, cyber defense, emergency operations, counterterrorism, and threat evaluation.

With these changes, new paradigms of military power will emerge, along with evolving geopolitical complexities and national security challenges. As militaries adapt to these advancements, they must be well acquainted with the ongoing progress in AI to leverage its operational benefits and secure their position in the shifting power-political dynamics. (Gaire, 2023)

Artificial Intelligence (AI) is already extensively implemented in numerous areas within and beyond the military, but there are domains where its potential is either limited or still in developmental phases.

Autonomous weapons

Autonomous weapons systems (AWS) and military robots are progressing from science fiction movies to designers’ drawing boards, to engineering laboratories, and to the battlefield. These machines have prompted a debate among military planners, roboticists, and ethicists about the development and deployment of weapons that are able to perform increasingly advanced functions, including targeting and the application of force, with little or no human oversight. Some military experts hold that these autonomous weapons systems not only confer significant strategic and tactical advantages on the battlefield, but that they are also preferable to the use of human combatants on moral grounds. In contrast, critics hold that these weapons should be curbed, if not banned altogether, for a variety of moral and legal reasons.

Those who call for further development and deployment of autonomous weapons systems generally point to several advantages. (a) Autonomous weapons systems act as a “force multiplier;” that is, fewer soldiers are needed for a given mission, and the efficacy of each soldier is greater. (b) Autonomous weapons systems expand the battlefield, allowing combat to reach into areas that were previously inaccessible. And (c) Autonomous weapons systems reduce casualties by removing human soldiers from dangerous missions (Marchant et al. 2011).

The Pentagon’s Unmanned Systems Roadmap 2007–2032 provides additional motivations for pursuing AWS. These include that robots are better suited than humans for “dull,” “dangerous,” and “dirty” missions. Examples given for each respective category of mission include long sorties, bomb disposal, and operating in nuclear clouds or areas with high radioactivity (Clapper et al., 2007).

The long-term savings that could be achieved through fielding an army of military robots have also been highlighted. The Fiscal Times notes that each US soldier in Afghanistan costs the Pentagon roughly $850,000 per year (some estimate the cost to be over $1 million per soldier per year), which does not include the long-term costs of providing health care to veterans. Conversely, the TALON robot, a small, armed robot, can be built for only $230,000 and is relatively cheap to maintain (Francis 2013).

Opposition to autonomous weapons systems

In July of 2015, an open letter calling for a ban on autonomous weapons was released at an International Joint Conference on Artificial Intelligence. The letter warns: “Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is—practically if not legally—feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms” (Autonomous Weapons 2015). The letter also notes that AI has the potential to benefit humanity, but that if a military AI arms race ensues, its reputation could be tarnished and a public backlash might curtail future benefits of AI. The letter has an impressive list of signatories, including Elon Musk (inventor and founder of Tesla), Steve Wozniak (co-founder of Apple), physicist Stephen Hawking (University of Cambridge), and Noam Chomsky (MIT), among others. Over 3000 AI and Robotics researchers have also signed the letter. The open letter simply calls for “a ban on offensive autonomous weapons beyond meaningful human control.”

In 2013, a group of engineers, AI and robotics experts, and other scientists and researchers from 37 countries issued the “Scientists’ Call to Ban Autonomous Lethal Robots.” The statement notes the lack of scientific evidence that robots could, in the foreseeable future, have “the functionality required for accurate target identification, situational awareness or decisions regarding the proportional use of force.” Hence they may cause a high level of collateral damage. The statement ends by insisting that “Decisions about the application of violent force must not be delegated to machines” (ICRAC 2013).

Historical examples, such as the accidental downing of civilian aircraft or misdirected missile strikes due to human or technological errors, demonstrate the devastating consequences of failures in decision-making. Introducing fully autonomous systems without sufficient safeguards exacerbates these risks.

While AWS promise significant advantages, the risks associated with their errors cannot be overlooked. AI-driven autonomous systems must undergo thorough evaluation, and their deployment should be governed by clear international laws ensuring accountability and compliance with humanitarian principles. As the open letters and scientific calls for bans suggest, placing meaningful human oversight at the core of AWS operation is crucial to minimizing errors and safeguarding human lives. (ChatGPT-4, original prompt: “With regard to this text (chapter about AWS), what is your opinion on the use of autonomous weapon systems powered by AI? Is there a possibility that AI could make tremendous mistakes, costing innocent lives?”)

Robots

Since the beginning of the 21st century, robots have become an indispensable part of military operations, starting with their use in Afghanistan. Since then, their presence and variety have increased significantly in the armies of different countries. Military robots can be classified according to various parameters, such as the type of movement (ground, air, underwater), the degree of autonomy (remotely operated, semi-autonomous, fully autonomous), and their functional purpose (reconnaissance, transport, combat operations, etc.). Among the best-known examples of military robots are unmanned aerial vehicles (UAVs), which can be controlled from a distance or follow a predetermined route. These UAVs perform many tasks, such as surveillance, reconnaissance, guidance assistance, bombing, and even dogfights with other drones.

The era of autonomous robots capable of operating without continuous human control marks a new stage in the evolution of military technology. In the world of military technology, robots play the role of complex and multifunctional tools that security forces can use to expand their capabilities on the ground, especially in areas that are difficult to protect with standard patrols. They become a kind of additional “eyes” and “ears”, providing information about the situation on the ground. A distinctive feature of military robots is their ability to see and hear much better than people do. Because of their tirelessness and autonomy, they can perform tasks that would otherwise be boring, dirty, or dangerous for humans. For night vision and the detection of thermal traces or smoke, they are equipped with infrared cameras, microphones, thermal imaging cameras, as well as sensors for flame, smoke, temperature, gas, and radioactivity. A further advantage of robots lies in their uninterrupted operation: they are able to bypass obstacles and analyze video streams to detect anomalies more effectively than humans. Guard robots equipped with video cameras can detect and signal intrusions using loudspeakers or sirens and deter potential intruders.

For robots to adapt successfully to complex and constantly changing situations on the battlefield, it is necessary to create intelligent autonomous robots with an artificial brain similar to a human one. The concept of an artificial brain is to develop a computer system that can mimic the structure and function of a real human brain. An artificial brain can be based on a variety of approaches, including neural networks, genetic algorithms, or cognitive architectures. It has various characteristics such as memory, the ability to focus, emotions, language, and even consciousness. Such an artificial brain is implanted into the robot to give it intellectual abilities. The advantages of smart autonomous robots with an artificial brain are noticeable when compared to traditional machines. They are able to independently study the environment and their own capabilities, develop new strategies and tactics to complete tasks, make decisions independently based on available information, and evaluate their actions and correct their behavior. What’s more, they can communicate and coordinate with other robots, allowing them to work together as a team. (Morozov & Yashchenko, 2023)
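
As a toy illustration of the “artificial brain” idea described above, the sketch below hand-wires a single neural layer that maps sensor readings to a choice of action. The sensors, weights, and action names are invented for the example; a real system would learn such weights from data rather than have them fixed by hand.

```python
# Toy sketch of neural action selection: one hand-wired layer maps
# sensor readings to an action. All sensors, weights, and actions are
# invented for illustration; real systems would learn these from data.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# sensor inputs: [smoke, heat, radiation], each normalized to 0..1
sensors = [0.8, 0.2, 0.05]

# one weight row per candidate action (hand-picked for the example)
actions = {
    "advance":  [-1.0, -1.0, -2.0],
    "report":   [ 2.0,  1.0,  1.0],
    "withdraw": [ 1.0,  2.0,  3.0],
}

def choose_action(sensors, actions):
    """Pick the action whose weighted sensor activation is highest."""
    scores = {
        name: sigmoid(sum(w * s for w, s in zip(weights, sensors)))
        for name, weights in actions.items()
    }
    return max(scores, key=scores.get), scores

best, scores = choose_action(sensors, actions)
print(best, {name: round(v, 3) for name, v in scores.items()})  # "report" wins here
```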

Drones

Military autonomous drones (UAVs) can fly to a specific location, pick their own targets, and kill without the assistance of a remote human operator. The idea of a “killer robot” has therefore moved from fantasy to reality. Most people would probably understand “autonomous drones” as “smart technology”, for example drones that can operate on the basis of a self-selected option (which in military terminology is referred to as “system initiative” or “full autonomy”). Such drones are programmed with a large number of alternative responses to the various challenges they may encounter in carrying out their missions. This is not science fiction: the technology is largely in place, although, to our knowledge, no approved autonomous drone systems are yet operational. The limiting factor is not the technology, but rather the political will to create, and admit to possessing, such a politically “sensitive” technology that would allow lethal machines to operate without direct human supervision.

Autonomous drones have no legal definition. There are advanced drones programmed with algorithms for countless human-defined courses of action to meet emerging challenges (Dyndal et al., 2017). Drones were first used by military forces, so the concept of drone warfare is not new. For years, they have been used to carry out reconnaissance, target infrastructure, and attack people. The U.S. in particular has used drones extensively to kill militants and destroy physical targets (Hernandez, 2021). The US Department of Defense has used drones in nearly every military operation since the 1950s to provide reconnaissance, surveillance, and intelligence on enemy forces. Currently, it is estimated that nearly 100 countries use military drones (Karyoti, 2021). Equipped with the latest generation of cameras, they provide an accurate topography of the terrain and are used in combat and rescue missions. Those with artificial intelligence communicate with soldiers and provide them with continuous information about enemy movements. Drones can also transport ever heavier loads. Equipped with sets of anti-tank guided missiles, they engage targets and also help in developing war tactics. (Konert & Balcerzak, 2021)
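
As a small, hypothetical illustration of the “predetermined route” capability mentioned above, the following sketch computes the great-circle length of a waypoint route, the kind of primitive an autonomous navigation stack builds on. The coordinates are arbitrary, and the code stands in for no real drone software.

```python
# Illustrative sketch of following a predetermined route: compute the
# great-circle distance of each leg between waypoints. The waypoints
# are invented for the example.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

route = [(50.08, 14.43), (50.10, 14.50), (50.15, 14.55)]  # hypothetical waypoints

total = 0.0
for start, end in zip(route, route[1:]):
    leg = haversine_km(start[0], start[1], end[0], end[1])
    total += leg
    print(f"leg {start} -> {end}: {leg:.2f} km")
print(f"total route length: {total:.2f} km")
```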

AI in support of the military decision-making process

The military decision-making process (MDMP) is an iterative, logical planning method used to select the best course of action (COA) for a given battlefield situation. It can be conducted at levels ranging from the tactical to the strategic. Each step in this process lends itself to automation. This holds not only for the MDMP but also for related processes such as the intelligence cycle and the targeting cycle. As argued in Ekelhof (2018), instead of focusing on target engagement as an endpoint, the process should be examined in its entirety.

Given the limitations of human decision-making, the advantage of (partial) automation with AI can be found both in the temporal dimension and in decision quality. A NATO Research Task Group, for instance, examined the need for automation in every step of the intelligence cycle (NATO Science & Technology Organization, 2020) and found that AI helps to automate manual tasks, identify patterns in complex datasets, and accelerate the decision-making process in general. Since the collection of more information and perspectives results in less biased intelligence products (Richey, 2015), using computing power to increase the amount of data that can be processed and analyzed may reduce cognitive bias. Confirmation bias, for instance, can be avoided through the automated analysis of competing hypotheses (Dhami et al., 2019). Other advantages of machines over humans are that they allow for scalable simulations, conduct logical reasoning, and have transferable knowledge and an expandable memory space (Suresh & Guttag, 2021; Silver et al., 2016).

An important aspect of the current debate about the use of AI for decision-making concerns the potential dangers of providing AI systems with too much autonomy, leading to unforeseen consequences. Part of the solution is to provide sufficient information to the leadership about how the AI systems have been designed, what their decisions are based on (explainability), which tasks are suitable for automation, and how to deal with technical errors (Lever & Schneider, 2021). Tasks not suitable for automation, i.e., those in which humans outperform machines, are typically tasks of high complexity (Blair et al., 2021). The debate on responsible AI should therefore also take human strengths into account. In practice, AI systems cannot work in isolation but need to team up with human decision-makers. Next to the acknowledgment of bounded rationality and human weakness, it is also important to take into consideration that AI cannot be completely free of bias, for two reasons. First, all AI systems based on machine learning have a so-called inductive bias, comprising the set of implicit or explicit assumptions required for making predictions about unseen data. Second, the output of machine learning systems is based on past data collected from human decision-making events. Uncovering the second type of bias may lead to insights regarding past human performance and may ultimately improve the overall process.
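
To illustrate the automated analysis of competing hypotheses mentioned above, here is a minimal sketch: every hypothesis is scored against every piece of evidence, so a favored hypothesis cannot simply ignore disconfirming data. All hypotheses, evidence items, weights, and consistency scores are invented for the example.

```python
# Minimal sketch of automated "analysis of competing hypotheses" (ACH):
# each hypothesis is scored against all evidence, weighted by assessed
# reliability, which counters confirmation bias. Everything here is an
# invented example, not real intelligence data.
evidence = {
    "troop_buildup_north": 0.9,   # weight = assessed reliability
    "supply_convoys_east": 0.6,
    "radio_silence_south": 0.4,
}

# consistency[hypothesis][evidence]: +1 consistent, -1 inconsistent, 0 neutral
consistency = {
    "attack_north": {"troop_buildup_north": 1, "supply_convoys_east": -1, "radio_silence_south": 0},
    "attack_east":  {"troop_buildup_north": -1, "supply_convoys_east": 1, "radio_silence_south": 0},
    "feint":        {"troop_buildup_north": 1, "supply_convoys_east": 1, "radio_silence_south": 1},
}

def ach_scores(consistency, evidence):
    """Weighted sum of consistency over all evidence, per hypothesis."""
    return {
        h: sum(evidence[e] * score for e, score in row.items())
        for h, row in consistency.items()
    }

for hypothesis, score in sorted(ach_scores(consistency, evidence).items(),
                                key=lambda kv: kv[1], reverse=True):
    print(f"{hypothesis}: {score:+.2f}")
```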

Examples of AI in the MDMP

It is important to examine the risks of AI and strategies for their mitigation. This mitigation, however, is of little use without examining the corresponding opportunities at the same time. This section therefore presents some examples of AI applications in the MDMP, providing an impetus for expanding the debate on responsible AI by weighing opportunities alongside risks.

An example of machine strength is the use of AI to aid the intelligence analyst in the generation of geospatial information products for tactical terrain analysis. This is an essential sub-step of the MDMP, since military land operations depend heavily on terrain. AI-supported terrain analysis enables the optimization of possible COAs for a military commander, and additionally allows for an optimized analysis of the most likely enemy course of action (De Reus et al., 2021). Another example is the use of autonomous technologies to aid in target system analysis (TSA), a process that normally takes months (Ekelhof, 2018). TSA consists of the analysis of an enemy’s system in order to identify and prioritize specific targets (and their components), with the goal of optimizing resources in neutralizing the opponent’s most vulnerable assets (Jux, 2021). Examples of AI use in TSA include automated entity recognition in satellite footage to improve the information position necessary to conduct TSA, and AI-supported prediction of enemy troop locations, buildup, and dynamics based upon information gathered from the imagery analysis phase. Ekelhof (2018) also provides examples of autonomous technologies currently in use for weaponeering (i.e., the assessment of which weapon should be used for the selected targets and related military objectives) and collateral damage estimation (CDE), both sub-steps of the targeting process.

Another illustrative example of the added value of AI for the MDMP is in wargaming, an important part of the COA analysis phase. In wargames, AI can help participants understand the possible perspectives, perceptions, and calculations of adversaries (Davis & Bracken, 2021). Yet another example is the possibility of a 3D view of a certain COA, enabling swift examination of terrain characteristics (e.g., potential sightlines) to enhance decision-making (Kase et al., 2022). AI-enabled cognitive systems can also collect and assess information about the attentional state of human decision-makers, using sensor technologies and neuroimaging data to detect mind wandering or cognitive overload (Weelden et al., 2022). Algorithms from other domains may also bring value to the MDMP, such as the weather-routing optimization algorithm for ships (Lin et al., 2013), the team formation optimization tool used in sports (Beal et al., 2019), or the many applications of deep learning in natural language processing (NLP) (Otter et al., 2020), with NLP applications that summarize texts (such as Quillbot and Wordtune) decreasing time to decision in the MDMP. Finally, digital twin technology (using AI) has already demonstrated its value in a military context and holds promise for future applications, e.g., enabling maintenance personnel to predict future engine failures on airplanes (Mendi et al., 2021). In the future, live monitoring of all physical assets relevant to military operations, such as (hostile) military facilities, platforms, and (national) critical infrastructure, might be possible.
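
As a simplified stand-in for AI-supported tactical terrain analysis, the sketch below runs Dijkstra’s algorithm over an invented movement-cost grid to find the cheapest route between two points. Real terrain-analysis tools are far richer, but cost-grid search of this kind is a common building block for route and COA optimization.

```python
# Illustrative sketch of terrain-aware route optimization: Dijkstra's
# algorithm over a movement-cost grid. The grid values are invented;
# higher cost = harder terrain.
import heapq

terrain = [
    [1, 1, 3, 9],
    [1, 2, 3, 9],
    [1, 1, 1, 2],
    [9, 9, 1, 1],
]

def cheapest_path_cost(grid, start, goal):
    """Return the minimal cumulative cost of moving from start to goal,
    counting the cost of each cell entered."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start)]
    best = {start: 0}
    while frontier:
        cost, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return cost
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                new_cost = cost + grid[nr][nc]
                if new_cost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = new_cost
                    heapq.heappush(frontier, (new_cost, (nr, nc)))
    return None

print(cheapest_path_cost(terrain, (0, 0), (3, 3)))  # 6: the route skirts the high-cost cells
```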

Conclusion

The post-information age has profoundly changed the nature of warfare, introducing opportunities for innovation and efficiency through AI and digital technologies. Cybersecurity has become indispensable in protecting critical military systems from cyberattacks, while AI enhances operational capabilities in reconnaissance, logistics, and decision-making. However, these advancements come with ethical and strategic dilemmas, particularly in the deployment of autonomous weapons and the regulation of misinformation.

Bibliography

  • Gallaher, M. P., Link, A. N., & Rowe, B. R. (2008). Cyber Security: Economic Strategies and Public Policy Alternatives. Edward Elgar Publishing. Available at: https://ideas.repec.org/b/elg/eebook/12762.html
  • Geers, K. (2011). Strategic Cyber Security. NATO Cooperative Cyber Defence Centre of Excellence. Available at: https://ccdcoe.org/library/publications/strategic-cyber-security/
  • Clarke, R. A., & Knake, R. K. (2010). Cyber War: The Next Threat to National Security and What to Do About It. HarperCollins. Available at: https://www.harpercollins.com/products/cyber-war-richard-a-clarke-robert-knake
  • Ablon, L., Libicki, M. C., & Golay, A. A. (2014). Markets for Cybercrime Tools and Stolen Data: Hackers’ Bazaar. RAND Corporation. Available at: https://www.rand.org/pubs/research_reports/RR610.html
  • Nakashima, E. (2011). List of Cyber-Weapons Developed by Pentagon to Streamline Computer Warfare. The Washington Post. Available at: https://www.washingtonpost.com/national/list-of-cyber-weapons-developed-by-pentagon-to-streamline-computer-warfare/2011/05/31/AGSublFH_story.html
  • Fruhlinger, J. (2017). What Is Stuxnet, Who Created It and How Does It Work? CSO Online. Available at: https://www.csoonline.com/article/3218104/what-is-stuxnet-who-created-it-and-how-does-it-work.html
  • Mohee, A. (2022). A Realistic Analysis of the Stuxnet Cyber-Attack. Available at: https://preprints.apsanet.org/engage/apsa/article-details/621e416fce899b8848a85f0b
  • Kozlowski, A. (2014). Comparative analysis of cyberattacks on Estonia, Georgia and Kyrgyzstan. COBISS.MK-ID 95468554, 236. Available at: https://www.researchgate.net/profile/Nnedinma-Umeokafor/publication/260107032_International_Scientific_Forum_ISF_2013vol3/links/02e7e52f964505c201000000/International-Scientific-Forum-ISF-2013vol3.pdf#page=246

  • Ștefan, A. M., & Balla, P. (2024). Halting the Spread of Misinformation in Countries Affected by Military Disputes. Available at: https://www.munob.ro/importantd/rr/RR_SOCHUM_Topic%201_MUNOB%202024.pdf
  • Gaire, U. S. (2023). Application of Artificial Intelligence in the Military: An Overview. Unity Journal, 4(01), 161-174.
  • Konert, A., & Balcerzak, T. (2021). Military autonomous drones (UAVs): from fantasy to reality. Legal and ethical implications. Transportation Research Procedia, 59, 292-299.
  • Meerveld, H. W., Lindelauf, R. H. A., Postma, E. O., & Postma, M. (2023). The irresponsibility of not using AI in the military. Ethics and Information Technology, 25(1), 14.
  • Morozov, A. O., & Yashchenko, V. O. (2023). Robots in modern war. Prospects for the development of smart autonomous robots with artificial brain. Mathematical Machines and Systems, 3, 3-12.