

An Ethical Dilemma: The Weaponization of Artificial Intelligence

By Justin K. Steinhoff

 

The world has seen sweeping technological change over the past decade, including advances in voice-assistant technology, facial-recognition software, and cryptocurrency markets. Today, voice-assistant devices that use artificial intelligence (AI) and machine-learning (ML) technologies, such as the Amazon Echo and Google Home, are projected to be part of more than half of American households (Juniper Research, 2017). The rise of AI is changing the way we interact (Department of State, 2022). The US Department of Defense (DoD) has invested heavily in technology advancement and AI/ML adoption (Chief Digital and Artificial Intelligence Office [CDAO], 2022a).

Numerous companies gather and use customer information to respond to customers' needs and to create targeted advertising campaigns and sales strategies (Goddard, 2019). Less widely appreciated, companies such as Oracle and Cambridge Analytica have used vast quantities of data, known as "big data," to create psychometric profiles that can be used to influence the population (Goddard, 2019; Porotsky, 2019). The US Government likewise applies big data, AI, and ML at scale to military operations (Department of State, 2022). In June 2022, the DoD CDAO and the US Air Force jointly conducted an exercise to evaluate a project called Smart Sensor Brain, an AI-enabled autonomous unmanned aerial system that can perform "automated surveillance and reconnaissance functions in contested environments" (CDAO Public Affairs, 2022, para. 3). The US Government's use of transformative technology creates moral challenges as well as numerous ethical dilemmas.

The Problem

The US Government's ability to weaponize advanced technologies that use AI, ML, or autonomy poses significant ethical and moral problems that it must address. Elon Musk, chief executive officer of Tesla, Inc. and SpaceX, has spoken out about what he considers the greatest existential threat to the US (Molina, 2017). Musk argued that the government must regulate AI precisely because it represents a "fundamental existential risk for human civilization" (Molina, 2017, para. 1). AI has existed for decades, but with advances in ML, specifically neural-network systems, new capabilities will keep pushing the limits of what is possible.

Nearly 40 years ago, James Cameron, writer and director of The Terminator, offered a simple illustration of AI/ML's power. Cameron's film depicts an imaginary world in which a seemingly indestructible cyborg is controlled by Skynet, an artificial superintelligence (IMDb, n.d.). His storyline presents Skynet as the neural-network software product of AI and ML, a decidedly dystopian projection of what advanced AI could become. Yet the premise is not entirely fanciful: Wissner-Gross and Freer (2013) argue that intelligent behavior can emerge spontaneously as a physical consequence of a system acting to maximize its future freedom of action.
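As context for that claim, a compact restatement of the central equation from Wissner-Gross and Freer (2013) may help; the notation below follows their published paper and adds nothing beyond it:

    F(X_0) = T_c \nabla_X S_c(X, \tau) |_{X_0}

Here S_c(X, \tau) is the entropy of the causally reachable future paths available to the system over a finite time horizon \tau, and T_c is a temperature-like constant setting the strength of this "causal entropic force." A system pushed by such a force acts to keep the largest possible set of futures open, which is the precise sense in which intelligent-seeming behavior can "spontaneously emerge" in their account.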

Like most adopters of innovative technology, the US Government is continually evolving and modernizing to stay competitive on any future battlefield, and the development, advancement, and implementation of AI-enabled, highly adaptive, weaponized systems raise multiple ethical problems. According to Sullivan et al. (2017), the US will lose its technological advantage over other countries on future battlefields because of the convergence of advanced technologies such as AI. One of the most important ethical questions is at what point the US Government should consider AI-enabled targeting or decision-making unethical. Furthermore, current US policy demands that weapon systems allow for an "appropriate level of human judgment over the use of force" (DoD, 2017, p. 2). AI has both positive and negative effects on society and will continue to be a major technological advance.

The Impact

Analyzing the US Government's AI innovation and development, it is clear that the goal of programs like human enhancement and lethal autonomous weapon systems is to lower the overall risk to Soldiers. The US Congress defines the technology this way: "the term 'artificial intelligence' means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments" (William M. (Mac) Thornberry National Defense Authorization Act, 2020, p. 1210). As we recognize the potential and application of AI and ML for making predictions and decisions, the future of Soldiers' combat experience on the battlefield is certain to change. The Smart Sensor Brain is an example of AI and ML technology supporting autonomous control of multiple sensors, "perceiving, making inferences, and reporting observations without the requirement for ground Processing, Exploitation and Dissemination" (CDAO Public Affairs, 2022, para. 4).

Sullivan et al. (2017) add that, due to the convergence of highly disruptive technologies such as AI-enabled lethal autonomous weapon systems, the US Government will no longer enjoy an undisputed advantage over its competitors.

The Root Cause

AI and enhanced ML have enabled the broader use of advanced technologies across many career fields, including healthcare, education, and the military. In the private sector, AI technology has outperformed Stanford radiologists in accurately diagnosing diseases from chest x-rays (Husain, 2021). Yet Husain (2021) notes that some of humanity's most important discoveries have yet to be made by AI. The ethical dilemma in implementing highly advanced, AI-enabled technologies often stems from a grave misinterpretation of AI's capabilities. AI is frequently imagined as a dangerous yet tangible creation, the Skynet example again. Unlike a nuclear weapon, however, which is a physical object that can be detected, controlled, and monitored, AI is a science (Husain, 2021), one that operates at speeds and scales unfathomable to humans. Although a solution to this root cause may take decades to develop, the US Government has increased its efforts to address the ethical challenges of AI-enabled autonomous weaponry.

The Solution

The US Government recognizes the ethical dilemmas associated with weaponizing AI/ML software and has established policies at the DoD level to ensure a system of checks and balances (Sayler, 2022). The DoD must be clear about what AI/ML enhancement can and cannot do. To make the most of AI/ML-enabled technologies, military leaders and operators must be able to use them effectively, and they will also need to train for future human-machine hybrid operations (Mooers, 2022; Sullivan et al., 2017).

The US Army has invested more than $72 million to develop capabilities and foster innovation using AI/ML (ARL Public Affairs, 2019). The Army Research Laboratory, under Army Futures Command, leads the Army's AI modernization effort. The DoD also established the Joint Artificial Intelligence Center as the military component in 2018 and the CDAO in 2022 as the civilian oversight component under the US Government's National Artificial Intelligence Initiative Act of 2020 (CDAO, 2022a; CDAO, 2022b). The CDAO formed a partnership with the Johns Hopkins University Applied Physics Laboratory (JHU-APL) to enhance the DoD's research and development capabilities (CDAO Public Affairs, 2022). The joint research and training exercises conducted by the Army, the Air Force, the CDAO, and JHU-APL contribute to the DoD's future-oriented capabilities and provide additional expertise for addressing the ethical concerns of military AI use on future battlefields. The ethical processing model can be used to examine these dilemmas and support the ethical use of AI/ML technologies in this era of innovation and modernization.

Ethical Lenses

Making decisions is essential to life. Leaders may face ethical dilemmas, or issues on which people of good conscience disagree, and they can depend on their character and values to guide their ethical decision making. Further, the ethical processing model supports leaders in their ethical reasoning and helps them reach ethical decisions.

According to the Department of the Army (2019), military expertise is one of the five essential characteristics of the Army profession. Army leaders must possess military expertise across the leader and human development, moral-ethical, geo-cultural and political, and military-technical fields of knowledge, and Army professionals must be able to apply moral-ethical knowledge to develop "moral solutions to diverse problems" (Department of the Army, 2019, p. 2-3). The ethical processing model is one method of ethical reasoning for finding moral solutions to ethical problems.

The ethical processing model consists of four steps: recognizing an ethical issue or conflict, evaluating the options through the three ethical lenses, making a decision, and acting on that decision (Kem, n.d.). The second step, evaluating the ethical dilemma, involves looking at the issue through the three approaches of the ethical triangle, rules (principles), outcomes (consequences), and virtues (beliefs), and weighing the alternatives each approach suggests (Kem, n.d.). The model can thus be used to examine the dilemma posed by AI-enabled lethal autonomous weapon systems on the battlefield; each corner of the triangle sharpens the understanding needed to reach the best ethical decision, as the sketch following this paragraph illustrates.
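To make the model's mechanics concrete, a minimal illustrative sketch in Python follows. The class and function names are hypothetical inventions for this example only; Kem (n.d.) describes the process in prose, not code, and step three is deliberately left as a human-supplied callback to mirror the policy requirement for human judgment over the use of force (DoD, 2017).

    from dataclasses import dataclass, field
    from enum import Enum

    class Lens(Enum):
        RULES = "rules"        # principles-based ethics: what do the rules say?
        OUTCOMES = "outcomes"  # consequences-based ethics: which results are maximized?
        VIRTUES = "virtues"    # virtues-based ethics: what would a person of good character do?

    @dataclass
    class Option:
        description: str
        assessments: dict = field(default_factory=dict)  # Lens -> free-text judgment from step 2

    def process_dilemma(dilemma, options, decide):
        """Walk the four steps: recognize, evaluate, decide, act (after Kem, n.d.)."""
        print(f"Step 1 - recognize the dilemma: {dilemma}")
        for option in options:                     # Step 2 - evaluate via all three lenses
            for lens in Lens:
                note = option.assessments.get(lens, "not yet assessed")
                print(f"  {option.description} [{lens.value}]: {note}")
        choice = decide(options)                   # Step 3 - a human judgment, not automated here
        print(f"Step 4 - act on the decision: {choice.description}")
        return choice

    # Hypothetical usage: two candidate options for one dilemma.
    deploy = Option("field the system with a human on the loop",
                    {Lens.RULES: "permitted; DoD (2017) requires human judgment over force"})
    refrain = Option("delay fielding pending clearer international norms")
    process_dilemma("weaponizing an AI-enabled system", [deploy, refrain],
                    decide=lambda opts: opts[0])  # placeholder standing in for the leader's call

The structural point of the sketch is that steps one, two, and four are mechanical, while step three stays outside the code; the model organizes reasoning but does not replace the decision-maker.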

The Rules Lens: Principles-Based Ethics

At the top of the ethical triangle, the rules lens examines the ethical dilemma in light of any rules that currently exist, or that should exist, regarding what is ethically and morally acceptable (Kem, n.d.). In the case of weaponizing AI, military leaders have recognized the ethical dilemma that AI-enabled autonomous weapon systems pose. Even so, current "US policy does not prohibit the development or employment of lethal autonomous weapon systems" (Sayler, 2022, para. 2). High-level discussions among top officials indicate that senior military leaders may need to invest in and develop lethal autonomous weapons to keep pace with global competition (Sayler, 2022).

Contrary to popular belief, current DoD policy states that fully autonomous and semi-autonomous weapon systems must provide appropriate levels of human judgment (DoD, 2017). The policy does not, however, apply to cyberspace operations or to unarmed, unmanned platforms (DoD, 2017). In addition, no international definition or common understanding of an AI-enabled autonomous weapon system has yet been established (Sayler, 2022). From the perspective of the outcomes lens, weaponized AI has the ability not only to reduce the threat to the force but also to increase the speed at which future battles occur.

The Outcomes Lens: Consequences-Based Ethics

Kem (n.d.) writes that the second ethical approach analyzes the consequences, or outcomes, of an ethical dilemma, with actions "judged by their consequences depending on the results to be maximized" (p. 5). The outcomes lens assesses how effectively an action, or inaction, resolves the ethical dilemma, and it examines how the action produces the greatest benefit across interests such as pleasure, refuge, and dignity (Kem, n.d.).

The outcomes lens highlights the ethical dilemma that arises when an AI executes lethal actions outside the bounds of the law of armed conflict and of expectation. In the same vein, it frames the US Government's response to AI and ML advancements and the potential disadvantages that response may create.

Globally, AI-enabled lethal autonomous weapon systems are under discussion, and ethical concerns have prompted calls for a preemptive ban on these advanced weapons. Opponents of such systems highlight many ethical concerns, including operational risk, fractured accountability, and the proportionality of their use during armed conflict. To date, "30 countries and 165 nongovernmental organizations" have endorsed a global prohibition (Sayler, 2022, para. 17). The US Government does not favor a ban on lethal autonomous weapon systems; indeed, through the virtues lens, senior government leaders may see the importance and utility of keeping that capability operational.

The Virtues Lens: Virtues-Based Ethics

The final lens of the ethical triangle is the virtues lens. It is distinct from the rules and outcomes lenses because it relies on accumulated wisdom about character rather than on existing laws or rules. The virtues lens approaches an ethical dilemma through a collective understanding of how a person should behave; the result is fundamentally a moral decision made by someone of good character (Kem, n.d.).

Weaponizing AI/ML to manufacture autonomous lethal weapons may appear to be the antithesis of virtuous ethical reasoning. Yet senior US Government leaders analyzing the weaponization of AI through the virtues lens must consider that peer and near-peer threat actors have consistently ignored such ethics (Sullivan et al., 2017). Producing and possessing lethal autonomous weapon systems may seem ethically imprudent, but, informed by an understanding of adversarial capabilities, the US Government can view it as an ethical decision. The US Government might also see the virtue in weaponizing AI when viewed alongside the rules lens, understanding that this ethical decision protects US national interests.

Conclusion

Transformative technologies such as AI and ML have definitively changed how humans interact, and they will shape the future of battle. The US military is developing AI, ML, and other transformative technologies (such as autonomy) to increase survivability and overall success on the battlefield. The ethical dilemma is how to use AI/ML-enabled transformative technology to preserve military advantage and overmatch on the battlefield, and it is exacerbated by the possibility of fully autonomous lethal weapon systems that operate independently of human intervention. Developing autonomous lethal weapon systems reduces the risks of ground combat, yet it sharpens the ethical dilemma of a weapon that decides or acts on AI-enabled data without human intervention. Examining such morally and ethically difficult decisions requires the four steps of the ethical processing model, including evaluation through the ethical triangle and its three approaches of rules, outcomes, and virtues.

References

Army Research Laboratory [ARL] Public Affairs. (2019, March 12). Battlefield artificial intelligence gets $72M Army investment. US Army. https://www.army.mil/article/218354/battlefield_artificial_intelligence_gets_72m_army_investment

Chief Digital and Artificial Intelligence Office [CDAO]. (2022a). Chief digital and artificial intelligence officer. https://www.ai.mil/

Chief Digital and Artificial Intelligence Office [CDAO]. (2022b). The JAIC story. https://www.ai.mil/about.html

CDAO Public Affairs. (2022, June 22). DoD CDAO and USAF conduct developmental test flight of an AI- and autonomy-enabled unmanned aerial vehicle. https://www.ai.mil/docs/press_release_062222_DoD_CDAO_USAF_Conduct_Developmental_Test_Flight.pdf

Department of Defense [DoD]. (2017). Autonomy in weapon systems (DoD Directive 3000.09). Washington Headquarters Services, Executive Services Directorate. https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf

Department of State. (2022). Artificial intelligence. https://www.state.gov/artificial-intelligence/

Department of the Army. (2019). Army leadership and the profession (ADP 6-22). https://armypubs.army.mil/epubs/DR_pubs/DR_a/ARN20039-ADP_6-22-001-WEB-0.pdf

Goddard, W. (2019, January 14). How do big companies collect customer data? ITChronicles. https://itchronicles.com/big-data/how-do-big-companies-collect-customer-data/

Husain, A. (2021, November 18). AI is shaping the future of war. PRISM, 9(3), 50-61. https://ndupress.ndu.edu/Portals/68/Documents/prism/prism_9-3/prism_9-3.pdf

Internet Movie Database [IMDb]. (n.d.). The terminator. https://www.imdb.com/title/tt0088247/

Juniper Research. (2017, November 8). Amazon Echo and Google Home to reside in over 50% of US households by 2022, as multi-assistant devices take off. https://www.juniperresearch.com/press/amazon-echo-google-home-reside-over-50pc-us-house

Kem, J. D. (n.d.). Ethical decision making: Using the "ethical triangle." http://www.cgscfoundation.org/wp-content/uploads/2016/04/Kem-UseoftheEthicalTriangle.pdf

Molina, B. (2017, July 17). Musk: Government needs to regulate artificial intelligence. USA Today. https://www.usatoday.com/story/tech/talkingtech/2017/07/17/musk-government-needs-regulate-artificial-intelligence/484318001/

Mooers, N. (2022). The role of AI tools in decision-making and the responsibilities for leaders who use them [Manuscript submitted for publication]. US Army Concepts Development Division, Maneuver Capabilities Development and Integration Directorate.

Porotsky, S. (2019, June 10). Cambridge Analytica: The dark side of big data. Global Security Review. https://globalsecurityreview.com/cambridge-analytica-darker-side-big-data/

Sayler, K. M. (2022, November 14). Defense primer: U.S. policy on lethal autonomous weapon systems. Library of Congress, Congressional Research Service. https://crsreports.congress.gov/product/pdf/IF/IF11150

Sullivan, I., Santaspirt, M., & Shabro, L. (2017). Mad scientist: Visualizing multi-domain battle 2030-2050. https://community.apan.org/wg/tradoc-g2/mad-scientist/m/visualizing-multi-domain-battle-2030-2050/210183#

William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021, H.R. 6395, 116th Cong. (2020) (enacted). https://www.congress.gov/116/crpt/hrpt617/CRPT-116hrpt617.pdf#page=1210

Wissner-Gross, A. D., & Freer, C. E. (2013). Causal entropic forces. Physical Review Letters, 110(16), 168702. https://doi.org/10.1103/PhysRevLett.110.168702

