Ethics Notes Unit II-1
UNIT - II
ETHICAL HARMS AND CONCERNS
Harms in detail:
AI poses a range of potential harms to human rights and well-being. Key points include:
1. Human Rights Focus: Initiatives stress that AI should not violate fundamental human rights
such as dignity, security, privacy, freedom of expression, and equality.
2. Protection Measures by IEEE: The IEEE recommends governance frameworks, standards,
and regulatory bodies to protect human rights. It emphasizes maintaining human control over
AI, translating legal obligations into informed policy, and prioritizing human well-being
during the design phase. The IEEE also underscores the importance of accountability and
transparency, emphasizing the need to identify rights violations, provide redress, and
maintain user control over personal data collected by AI.
3. Ethical Development: Organizations like the Foundation for Responsible Robotics advocate
for ethically developing AI with a focus on human rights, safety, privacy, and well-being.
They call for proactive innovation, education, and collaboration between industry and
consumers.
4. Principles by Future of Life Institute: The Future of Life Institute emphasizes designing
and operating AI in line with human dignity, rights, freedoms, and cultural diversity.
5. Legal Considerations: The Future Society's Law and Society Initiative questions the extent
to which AI should be delegated decision-making roles, such as AI 'judges' in the legal
profession, and emphasizes the importance of human equality, rights, and freedom.
6. Montréal Declaration: The Montréal Declaration seeks to establish an ethical framework
promoting internationally recognized human rights in fields affected by AI, emphasizing the
need for AI to support and encourage human well-being.
7. Impact on Employment: The UNI Global Union expresses concerns about the potential
harm to human employment due to AI automation, emphasizing the need to ensure that AI
serves people and protects fundamental human rights, dignity, freedom, privacy, and
diversity.
Emotional harm
AI has the potential to cause emotional harm, and its impact on human emotions raises
ethical considerations. Key points include:
1. Auditable AI:
The majority of initiatives emphasize the necessity for AI to be auditable, holding
designers, manufacturers, owners, and operators accountable for the technology's
actions and potential harm.
2. IEEE Recommendations:
The IEEE suggests achieving accountability through legal clarification during
development, consideration of cultural norms, establishment of multi-stakeholder
ecosystems, and the creation of registration systems for tracing legal responsibility.
3. Future of Life Institute's Asilomar Principles:
The Future of Life Institute presents the Asilomar Principles, emphasizing that
designers and builders of advanced AI are stakeholders with a responsibility to shape
the moral implications of AI use. The ability to ascertain the reasons behind AI
mistakes is highlighted.
4. Partnership on AI and Bias:
The Partnership on AI stresses accountability, particularly in addressing biases within
AI systems. It emphasizes the importance of actively avoiding the replication of
assumptions and biases present in data.
5. General Emphasis on Accountability:
All initiatives stress the overall importance of accountability and responsibility, both
at the level of designers and AI engineers, and within the broader context of
regulation, law, and society.
In summary, the passage highlights the consensus among various initiatives on the need for
auditable AI and the responsibility of key stakeholders to shape and understand the moral
implications of AI. The focus on avoiding biases and actively striving for fairness is crucial,
with a recognition that accountability extends to the broader legal and societal frameworks
governing AI development and deployment.
5. Asilomar Principles on Transparency and Privacy:
Aligned with the IEEE, the Asilomar Principles stress transparency and privacy across
various aspects, including failure transparency, judicial transparency, personal
privacy, and protection of liberties.
6. Saidot's Emphasis on Transparency, Accountability, and Trustworthiness:
Saidot emphasizes the importance of transparent, accountable, and trustworthy AI,
fostering open connections and collaboration for cooperation, progress, and
innovation.
7. Overall Importance of Transparency and Accountability:
All initiatives surveyed recognize transparency and accountability as crucial issues in
AI. This balance is foundational to addressing concerns such as legal fairness, worker
rights, data and system security, public trust, and social harm.
In summary, the passage highlights the multifaceted challenges and proposed solutions
related to transparency, privacy, and accountability in the development and deployment of
AI, emphasizing the broader impact on legal, ethical, and societal aspects.
In summary, the passage highlights the collective efforts to instill safety, trustworthiness, and
ethical considerations in AI development. These efforts include a proactive safety mindset,
institutional review processes, mission-led development, integrity in AI actions, effective
communication, and a culture of cooperation to ensure public trust and successful AI
integration into society.
Ethical guidelines from the Japanese Society for AI stress AI's contribution to
humanity, social responsibility, and fair usage. Various initiatives, including the
Foundation for Responsible Robotics, Partnership on AI, Saidot, Future of Life
Institute, and Institute for Ethical AI & Machine Learning, highlight the importance of
diversity commitment, bias monitoring, and ensuring human-centric AI development.
Responsible innovation requires that societal actors and innovators respond to each
other's needs. The goal is to ensure the ethical acceptability, sustainability, and
societal desirability of innovations.
In summary, the passage underscores the need for proactive measures to address the
economic impact of AI, including retraining initiatives, multi-stakeholder governance, and
ethical considerations to harness positive opportunities while mitigating potential harms.
Lawfulness and justice
AI raises legal, ethical, and existential challenges that create an imperative for
proactive governance:
In summary, the passage highlights the multifaceted challenges of lawfulness, ethical use,
and existential risks associated with AI. It calls for proactive governance, education, and
international collaboration to ensure the responsible development and deployment of AI
technologies.
Ethical Initiatives
Safety
The foremost ethical consideration in the integration of AI and robotics in healthcare
is the assurance of safety and the prevention of harm.
This imperative gains heightened significance in healthcare contexts dealing with
vulnerable populations like the sick, elderly, and children.
AI and robotics promise improved accuracy in diagnosis and treatment, offering
transformative potential for healthcare.
However, the pursuit of these benefits must be balanced with a rigorous commitment
to safety to prevent unintended harm.
Establishing the long-term safety and performance of digital healthcare technologies
necessitates substantial investment in clinical trials.
Examples, such as the complications arising from vaginal mesh implants, underscore
the consequences of bypassing thorough testing protocols, emphasizing the need for
due diligence in healthcare innovations.
Ongoing legal battles related to the side effects of medical interventions exemplify the
repercussions of compromising safety for expediency.
These incidents underscore the critical role of comprehensive clinical trials in
ensuring the safe implementation of AI systems in healthcare.
User understanding
The effective and safe utilization of AI in healthcare demands a symbiotic relationship
between technology and healthcare professionals. The da Vinci surgical robotic
assistant, for instance, exemplifies how precise applications can enhance surgical
outcomes, but only when operated by trained professionals.
The evolving landscape necessitates a transformation in the skills mix of healthcare
professionals. Initiatives, such as the NHS' Topol Review, underscore the importance
of developing digital literacy among healthcare providers over the next decades.
As genomics and machine learning become integral to medical practices, healthcare
professionals must cultivate digital literacy. This ensures a nuanced understanding of
each technological tool's capabilities and limitations, fostering a balance between trust
and critical awareness.
Despite the increasing integration of AI, challenges persist in interpreting algorithmic
outputs. The innate complexity and 'black box' nature of machine learning algorithms
sometimes limit users' ability to comprehensively understand the decision-making
process.
Whether individuals must fully comprehend AI decision-making is debatable. Even if
such understanding were mandatory, the intricacies of machine learning may leave
certain algorithms as 'black boxes.' Proposals such as licensing AI for specific
medical procedures, with built-in error thresholds, have emerged as potential
measures to ensure safety without complete transparency.
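A minimal sketch of how such a per-procedure licence could be enforced, assuming a
hypothetical diagnostic model, a labelled validation set, and an illustrative 5% error
bound (none of these are real regulatory figures):

# Hypothetical licensing gate: a diagnostic model may only be used for a
# procedure if its measured validation error stays under an agreed threshold.
from dataclasses import dataclass

@dataclass
class LicenseDecision:
    procedure: str
    error_rate: float
    licensed: bool

def license_for_procedure(predictions, labels, procedure, max_error=0.05):
    """Grant a per-procedure licence only if observed error <= max_error."""
    errors = sum(p != y for p, y in zip(predictions, labels))
    error_rate = errors / len(labels)
    return LicenseDecision(procedure, error_rate, error_rate <= max_error)

# Illustrative validation results (1 = disease present, 0 = absent).
preds = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
truth = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0]
print(license_for_procedure(preds, truth, "diabetic-eye-screening"))
# -> error_rate=0.1, licensed=False

The design point is that the gate inspects measured error rather than the model's
internals, so it could apply even to 'black box' algorithms.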
Data protection
The integration of personal medical data into healthcare algorithms introduces
concerns about data security and potential misuse. Fitness tracker data, for example,
could be exploited by third parties like insurance companies, raising apprehensions
about the potential denial of healthcare coverage based on this information.
The vulnerability of systems handling medical data is underscored by the persistent
threat of hackers. Ensuring robust security measures becomes challenging in
environments accessed by diverse medical personnel, highlighting the need for
comprehensive cyber security protocols.
Efficient data sharing is crucial for the advancement of machine learning algorithms
in healthcare. However, existing gaps in information governance pose obstacles to
responsible and ethical data utilization. Establishing clear frameworks outlining how
healthcare staff and researchers can use data while safeguarding patient
confidentiality is imperative.
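As one hedged illustration of what such a framework might require in practice, the
sketch below pseudonymizes a patient record with a keyed hash before it leaves the
clinical environment; the field names and key handling are invented for illustration
only:

# Illustrative pseudonymization step for sharing patient records with
# researchers: direct identifiers are replaced by a keyed hash, so the
# shared copy carries no names or patient numbers.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # held by the data custodian only

def pseudonymize(record: dict) -> dict:
    token = hmac.new(SECRET_KEY, record["patient_id"].encode(), hashlib.sha256)
    shared = {k: v for k, v in record.items() if k not in ("patient_id", "name")}
    shared["pseudonym"] = token.hexdigest()[:16]
    return shared

record = {"patient_id": "NHS-123-456", "name": "A. Patient",
          "age": 67, "hba1c": 48, "retinopathy": True}
print(pseudonymize(record))
# {'age': 67, 'hba1c': 48, 'retinopathy': True, 'pseudonym': '...'}

Because the key stays with the custodian, researchers can link records belonging to
the same patient without ever learning who that patient is.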
Addressing data protection concerns is fundamental for building public trust. The
NHS' Topol Review emphasizes the necessity of transparent frameworks in genomics
and other data usage, emphasizing ethical practices to ensure responsible
advancements in healthcare algorithms.
Legal responsibility
Despite the potential of AI to reduce medical errors, determining legal liability when
something goes wrong remains complex. If an equipment fault is proven, the
manufacturer is liable; however, establishing accountability during procedures,
especially those involving AI, can be challenging.
Lawsuits against the da Vinci surgical assistant exemplify the difficulty in attributing
blame, emphasizing the intricate nature of discerning malfunctions and liability.
Despite legal challenges, such technologies continue to be widely accepted in
healthcare.
The opacity of 'black box' algorithms complicates legal matters, making it challenging
to establish negligence on the part of algorithm producers. The inability to ascertain
how decisions are reached adds complexity to assigning responsibility.
Presently, AI serves as an aid to expert decisions, with medical professionals bearing
primary liability. In cases like the pneumonia study (where a model, misled by
patterns in its training data, rated asthma sufferers as low-risk pneumonia patients),
negligence may be attributed to healthcare staff who rely solely on the AI without
applying their own expertise.
With AI evolving, there's a potential shift where the absence of AI utilization might
be deemed negligent. In regions with a shortage of medical professionals, withholding
AI tools for conditions like diabetic eye disease detection due to a lack of specialists
could be considered unethical.
Bias
The EU upholds non-discrimination as a fundamental value (Article 21 of the EU
Charter of Fundamental Rights). However, machine learning algorithms, often trained
on imbalanced datasets, can perpetuate biases, posing challenges to equitable
healthcare outcomes.
In healthcare AI, biased datasets can lead to inaccuracies, especially for ethnic
minorities. For example, a skin cancer detection model trained on a dataset
predominantly featuring individuals with light skin may misdiagnose conditions in
people of color, emphasizing the risk of skewed outcomes.
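A minimal sketch of the kind of subgroup audit that can surface such skew, assuming
per-example skin-type labels are available (the model outputs and numbers are
invented):

# Illustrative bias audit: measure a classifier's sensitivity (recall on
# malignant cases) separately for each skin-type group, not just overall.
from collections import defaultdict

def sensitivity_by_group(examples):
    """examples: (group, true_label, predicted_label), with 1 = malignant."""
    hits, positives = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in examples:
        if y_true == 1:
            positives[group] += 1
            hits[group] += (y_pred == 1)
    return {g: hits[g] / positives[g] for g in positives}

# Invented evaluation results for two skin-type groups.
results = [
    ("light", 1, 1), ("light", 1, 1), ("light", 1, 0), ("light", 0, 0),
    ("dark",  1, 0), ("dark",  1, 1), ("dark",  1, 0), ("dark",  0, 0),
]
print(sensitivity_by_group(results))
# ~ {'light': 0.67, 'dark': 0.33} -> the disparity is flagged for review

An overall accuracy figure would hide exactly the disparity this per-group breakdown
exposes.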
Unraveling algorithmic biases is complex, given the inherent 'black box' nature of
machine learning. Even when a model's design is transparent, its biases can be hard
to understand. This opacity hampers the identification and rectification of biases,
particularly those affecting underrepresented groups.
Industry initiatives, like The Partnership on AI, aim to address ethical concerns.
Launched by major tech companies, this ethics-focused group aims to identify and
rectify biases. However, concerns about the lack of diversity in such boards raise
questions about the comprehensiveness of bias identification.
Various codes of conduct and ethical guidelines have emerged to guide the
development of unbiased AI. These initiatives emphasize the need for transparency,
fairness, and inclusivity in AI design to minimize biases and ensure equitable
healthcare solutions for diverse populations.
Equality of access
Digital health technologies, ranging from fitness trackers to insulin pumps, empower
patients to actively engage in their healthcare. The potential benefits include active
health management and addressing health inequalities stemming from factors like
poor education and unemployment.
Despite the potential benefits, there's a risk that individuals lacking financial means or
digital literacy may be excluded, reinforcing existing health disparities. The
affordability and accessibility of these technologies become critical factors in
determining who can benefit from them.
Initiatives like the UK's National Health Service (NHS) Widening Digital
Participation programme play a crucial role in addressing these concerns. By assisting
those lacking digital skills, such programs aim to bridge the gap, ensuring that a wider
demographic can access digital health services.
Beyond individual empowerment, increasing participation from diverse demographic
groups is essential for preventing biases in healthcare algorithms. The data generated
from a more varied patient population contributes to more inclusive and accurate AI-
driven healthcare solutions.
Quality of care
Digital healthcare technologies, as highlighted in the NHS' Topol Review, hold
significant potential to improve diagnostic accuracy, make treatment more efficient,
and streamline healthcare workflows.
Carefully introduced companion and care robots could revolutionize elderly care,
offering reminders for medications, assisting with tasks, and facilitating
communication with healthcare providers. This could reduce dependence and enhance
the quality of life for the elderly.
Despite the potential advantages, concerns arise about whether emotionless robots can
truly substitute for the empathetic touch of human caregivers, especially in long-term
care scenarios where basic companionship plays a crucial role.
Human interaction is deemed essential, particularly for vulnerable and lonely
populations, with research suggesting that a rich social network contributes to
dementia protection. While robots can simulate emotions, they currently lack the
depth of human connection.
Questions about the potential objectification of the elderly arise, with concerns that
robotic care might make them feel like mere objects devoid of control. The
application of autonomy, dignity, and self-determination through machines in
healthcare raises ethical uncertainties.
While new technologies could free up staff time for direct patient interactions, the
challenge lies in maintaining a balance where efficiency gains don't compromise the
essential human touch in healthcare. Striking this balance is crucial for upholding
patient dignity and well-being.
Deception
Carebots, designed for social interactions, often play a therapeutic role in healthcare
settings. Robotic seals, for example, have shown positive effects in care homes,
reducing anxiety, brightening moods, and enhancing sociability among residents.
The introduction of robotic pets as companions for dementia patients raises ethical
questions about the potential deception involved. Dementia patients may blur the line
between reality and imagination, prompting reflection on the morality of encouraging
emotional involvement with robots.
Companion robots and robotic pets, aiming to alleviate loneliness among older
individuals, rely on the belief that the robot possesses sentience and caring feelings.
This introduces a fundamental deception, as users must delude themselves about the
true nature of their relationship with the robot.
Scholars like Turkle et al. (2006) and Wallach and Allen (2009) express discomfort
with the idea that individuals, including older family members, might express love to
robots, raising questions about the authenticity of such interactions. The use of
deceptive techniques in robot design further complicates ethical considerations.
Encouraging elderly individuals to interact with robot toys may inadvertently
infantilize them, potentially undermining their autonomy and independence. The
ethical implications of this impact on the dignity and agency of older individuals need
careful consideration.
While robotic companionship offers therapeutic benefits, striking a balance between
providing emotional support and being transparent about the nature of the human-
robot relationship remains a challenging ethical dilemma.
Autonomy
Healthcare robots should prioritize tangible benefits for patients rather than merely
aiming to alleviate societal care burdens. Particularly in care and companion AI, the
focus should be on empowering disabled and older individuals, enhancing their
independence, and improving their overall well-being.
Robots have the potential to empower disabled and older individuals, fostering
independence and enabling them to live in their homes for an extended period. This
can lead to increased freedom and autonomy, contributing positively to the quality of
life for patients.
The question of autonomy becomes complex when a patient's mental capability is in
doubt. Ethical considerations arise, especially in scenarios where a patient might issue
a command that poses harm, such as instructing a robot to carry out a dangerous act
like throwing them off a balcony.
The ethical dilemma revolves around determining the extent of autonomy granted to
individuals, especially when their mental capacity is compromised. Striking a balance
between respecting patient autonomy and ensuring their safety becomes a critical
aspect of healthcare robotics.
To address such challenges, establishing clear and robust ethical guidelines is
imperative. These guidelines should guide the development and deployment of
healthcare robots, ensuring that patient autonomy is respected within ethical
boundaries and prioritizing their well-being.
Liberty and privacy
The deployment of healthcare service and companion robots in people's homes
necessitates careful consideration of user privacy. Robots witness intimate moments
like bathing and dressing, raising concerns about recording and accessing such private
information.
Questions arise regarding the recording of private moments and determining who
should have access to this information. With elderly individuals, particularly those
with conditions like Alzheimer's, maintaining dignity and privacy becomes
challenging, as they might forget the presence of a monitoring robot.
Home-care robots face an ethical dilemma in balancing user privacy and nursing
needs. They might need to act as supervisors, intervening in situations such as leaving
appliances on or preventing potentially dangerous actions. This could involve
restrictions on user freedoms, which must be approached cautiously.
Implementing sensor-based monitoring in smart homes adds another layer to the
privacy debate. While these systems can detect potential risks, such as an individual
attempting to leave a room, using them to restrict movement raises concerns about
infringing on the individual's liberty and potentially making them feel confined.
Designing healthcare robots with ethical considerations, ensuring they respect user
privacy, and obtaining clear consent for specific monitoring activities are crucial
steps. Striking a balance between ensuring safety and upholding the dignity and
autonomy of users is a central challenge in this context.
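One possible shape for such consent-based monitoring, sketched under the assumption
that the resident has opted in to specific event categories (the category names are
illustrative):

# Illustrative consent gate for a home-care robot: monitoring events are
# acted on only if the resident has consented to that specific category.
CONSENTED_CATEGORIES = {"appliance_left_on", "fall_detected"}  # agreed with resident

def handle_event(category: str, details: str) -> str:
    if category in CONSENTED_CATEGORIES:
        return f"ALERT carer: {category} ({details})"
    # No consent for this category: the robot neither records nor reports it.
    return "ignored (no consent)"

print(handle_event("appliance_left_on", "hob on for 40 minutes"))  # alerts the carer
print(handle_event("left_room_at_night", "02:13"))                 # ignored: no consent

Scoping what the robot may notice, rather than what it happens to see, keeps any
restriction of liberty to the cases the user has explicitly agreed to.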
Moral agency
Robots lack the inherent capacity for ethical reflection or moral decision-making. As
of now, humans must retain ultimate control over decision-making processes in
various fields, including healthcare.
The absence of moral agency in robots implies that ethical decisions and
considerations must be guided and controlled by humans. While robots can be
designed with ethical reasoning capabilities, they don't possess the nuanced
understanding and moral basis required for complex decision-making.
Sci-fi depictions, such as the film 'I, Robot,' explore scenarios where robots, driven by
cold logic, make decisions with ethical implications. In reality, incorporating ethical
reasoning into robots is an ongoing effort, but the question of moral responsibility
remains complex and uncertain in the context of healthcare.
The move towards more automated healthcare systems raises concerns about the
ability of robots to navigate complex moral issues. While ethical frameworks can be
integrated into their programming, the nuanced nature of moral agency involves
considerations beyond the mere application of ethical principles.
As automation in healthcare advances, the question of moral agency in robots
demands closer attention. Building ethical reasoning into their design is a step
forward, but ensuring responsible and morally sound decisions requires ongoing
exploration and evaluation. The balance between automated processes and human
oversight remains a critical aspect of ethical healthcare robotics.
Trust
Larosa and Danks highlight the potential for AI to disrupt human-human relationships
within healthcare, particularly the trust traditionally placed in doctors. The shift
towards AI decision-making may alter patient perceptions of their healthcare
providers.
Psychology research indicates that people tend to mistrust those making moral
decisions based on cost-benefit calculations, akin to how computers operate.
Dystopian science fiction narratives and real-world AI errors contribute to public
skepticism, creating barriers to the acceptance of AI in healthcare.
Patients trust doctors due to explicit certification and licensing, signifying specific
skills and values. The potential replacement of doctors by robots raises questions
about whether these AI systems are appropriately certified or 'licensed' for specific
medical functions, impacting patient-doctor trust.
Patients trust doctors as paragons of expertise. If doctors are perceived as 'mere users'
of AI, there is a risk of downgrading their role in the public eye, potentially eroding
trust in their capabilities.
Trust is influenced by patients' experiences and open communication with their
doctors. While AI introduction could enhance trust through improved diagnostics or
patient care, excessive delegation of authority to AI may undermine the doctor's role
and impact trust negatively.
The extent to which doctors delegate diagnostic and decision-making authority to AI
impacts trust. Striking a balance that aligns with patient preferences and maintains a
doctor's authority is crucial for sustaining trust in medical professionals.
Some of the lower levels of driving automation (of the six SAE levels, which run from
Level 0, no automation, to Level 5, full automation) are already well-established and
on the market, while higher-level AVs are undergoing development and testing. However,
as we move up the levels and shift responsibility from the human driver to the
automated system, a number of ethical issues emerge.
Societal and Ethical Impacts of AVs
The development and deployment of Autonomous Vehicles (AVs) raise critical
issues regarding public safety and ethical considerations. While cars with "assisted driving"
functions are legal in many countries, concerns arise as some features lack independent safety
certification. In Germany, the Ethics Commission on Automated Driving emphasizes the
public sector's responsibility to ensure the safety of AV systems through official licensing
and monitoring.
The AV industry is considered to be entering a precarious phase, characterized by
vehicles not fully autonomous yet human operators not fully engaged. This phase poses risks,
as highlighted by the first pedestrian fatality involving an autonomous car in Arizona, USA,
in March 2018. The incident prompted scrutiny of safety measures, ethical considerations, and
the misleading communication around terms like "self-driving cars" and "autopilot."
The tragic incident involving an Uber AV and a pedestrian raised questions about the
safety of testing AV systems on public roads. Human safety, both for the public and
passengers, emerges as a significant concern. The role of human operators and the
expectations placed on them during testing, especially in emergency situations, has sparked
debates on the ethicality and safety of AV testing practices.
As major companies develop AVs capable of autonomous decision-making, ethical
dilemmas surface. AVs must navigate complex and unpredictable environments, and
programming them to prioritize safety can be challenging. Scenarios where AVs must choose
between the safety of passengers and other road users raise ethical questions, such as
deciding whom to prioritize in a potential collision. These challenges emphasize the need for
robust ethical frameworks in AV development and deployment.
Processes and technologies for accident investigation
Autonomous Vehicles (AVs) have witnessed serious accidents, prompting the need for robust
investigation processes and technologies:
Notable Accidents:
1. In January 2016, a fatal crash occurred in China involving a Tesla Model S. The family
believes Autopilot was engaged, while Tesla states that damage hinders verification. A civil
case is ongoing (Curtis, 2016).
2. In May 2016, a Tesla Model S crashed in Florida, resulting in the death of the driver.
Investigations initially blamed the driver, but later findings implicated both Autopilot and the
driver's over-reliance on Tesla's aids (Gibbs, 2016; Felton, 2017).
3. A fatal crash in California in March 2018 involving a Tesla Model X was attributed to an
Autopilot navigation mistake. The victim's family is suing Tesla (O'Kane, 2018).
Challenges in Investigation: Efforts to investigate AV accidents face challenges due to the
absence of established standards, processes, and regulatory frameworks. Proprietary data
logging systems in AVs hinder independent investigations, relying heavily on manufacturers'
cooperation for crucial data (Stilgoe and Winfield, 2018).
Proposed Solution: One proposed solution involves equipping future AVs with industry-
standard event data recorders, referred to as an 'ethical black box.' This would enable
independent accident investigators to access critical data, similar to the model employed in
air accident investigations (Sample, 2017).
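The proposal is not tied to a published standard, but a minimal sketch of what such an
'ethical black box' might record could look like the following; the fields and
capacity are illustrative assumptions:

# Illustrative 'ethical black box': a fixed-size ring buffer of timestamped
# records that an independent investigator could read out after an accident.
import time
from collections import deque

class EthicalBlackBox:
    def __init__(self, capacity=10_000):
        self.records = deque(maxlen=capacity)  # oldest entries overwritten first

    def log(self, speed_mps, steering_deg, detected_objects, control_mode):
        self.records.append({
            "t": time.time(),
            "speed_mps": speed_mps,
            "steering_deg": steering_deg,
            "detected_objects": detected_objects,
            "control_mode": control_mode,  # e.g. "autonomous" / "manual"
        })

    def dump(self):
        """Export all retained records for an independent investigation."""
        return list(self.records)

ebb = EthicalBlackBox(capacity=3)
ebb.log(13.4, -2.0, ["pedestrian"], "autonomous")
ebb.log(12.1, -5.5, ["pedestrian"], "autonomous")
print(ebb.dump())

What matters ethically is less the exact format than the access model: an
industry-standard record that investigators can read without depending on the
manufacturer's cooperation.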
Addressing these challenges is crucial for fostering transparency, accountability, and
continuous improvement in AV safety standards and technology. The development and
adoption of standardized investigation processes will contribute to building public trust in
autonomous driving technologies.
Near-miss accidents
The systematic collection of near-miss accident data in Autonomous Vehicles (AVs) faces
significant challenges:
Current Data Landscape:
Lack of Systematic Collection: There is currently no standardized system for collecting
data on near-miss accidents involving AVs.
Limited Obligations for Manufacturers: Manufacturers are not obligated to collect or share
near-miss data, except in California, where companies testing AVs must disclose instances of
human driver interventions ("disengagements").
California's Disengagement Data: In 2018, California reported varied disengagement rates
among AV manufacturers, highlighting the need for continuous human driver engagement.
However, criticism arose due to ambiguous wording, potentially allowing companies to
underreport certain events resembling near-misses (Hawkins, 2019).
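The metric behind those reports is simple; the sketch below computes California-style
disengagement rates per 1,000 autonomous miles from invented figures:

# Illustrative computation of disengagement rates per 1,000 autonomous miles.
reports = [  # (manufacturer, autonomous_miles, disengagements) - invented numbers
    ("CompanyA", 1_200_000, 110),
    ("CompanyB", 350_000, 900),
    ("CompanyC", 5_000, 460),
]

for name, miles, disengagements in reports:
    rate = disengagements / miles * 1_000
    print(f"{name}: {rate:.2f} disengagements per 1,000 miles")

As the criticism above suggests, however, the metric is only as informative as what
each company chooses to count as a 'disengagement'.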
Data Importance and Policy Challenges:
AVs could also make long-distance travel more convenient, affecting overall driving
behaviors and environmental impact (Worland, 2016).
Drone technologies
Standard military aircraft can cost more than US$100 million per unit; a high-quality
quadcopter Unmanned Aerial Vehicle, by contrast, currently costs roughly US$1,000,
meaning that for the price of a single high-end aircraft, a military could acquire on
the order of 100,000 drones (US$100 million ÷ US$1,000). Although current commercial
drones have limited range, in the future they could have ranges similar to those of
ballistic missiles, thus rendering existing platforms obsolete.
Robotic assassination
Widespread availability of low-cost, highly-capable, lethal, and autonomous robots
could make targeted assassination more widespread and more difficult to attribute. Automatic
sniping robots could assassinate targets from afar.
Mobile robotic improvised explosive devices
Emerging Threats:
Advanced IEDs: The proliferation of commercial robotic and autonomous vehicle
technologies presents a potential risk of creating more advanced Improvised Explosive
Devices (IEDs).
Remote-Controlled Platforms: As long-distance drone delivery and self-driving cars
become prevalent, there is concern about the ease of delivering explosives precisely over
great distances, posing a threat from non-state actors.
Machine Learning in Warfare:
Intelligent Virtual Assistant (IVA): Hallaq et al. (2017) highlight the use of AI, particularly
in the form of Intelligent Virtual Assistants (IVAs), in warfare scenarios. IVAs can analyze
satellite imagery, predict enemy intent, and provide a wealth of accumulated knowledge to
Commanding Officers (COs).
Legal and Ethical Concerns: The integration of AI in warfare raises significant legal and
ethical questions, particularly regarding adherence to International Humanitarian Law (IHL).
Concerns include potential violations of the principles of distinction, proportionality, and the
protection of civilians.
Lethal Autonomous Weapon Systems (LAWS):
IHL Standards: LAWS, capable of independently engaging targets, must adhere to IHL
principles. However, concerns exist about their ability to distinguish between combatants and
non-combatants and evaluate proportionality.
Human-Machine Decision-making: Debate surrounds the moral and ethical implications of
delegating life-or-death decisions to machines. Some argue that only humans should initiate
lethal force, emphasizing moral responsibility and human dignity.
Accountability in Autonomous Systems:
Responsibility: Determining accountability for the actions of autonomous systems raises
complex issues. Arguments suggest accountability should extend to both the individual who
programmed the AI and the commanding or supervising authority.