
CCS345 – ETHICS AND AI

UNIT - II
ETHICAL HARMS AND CONCERNS
Harms in detail:
AI poses a range of potential harms to human rights and well-being. Key points include:

1. Human Rights Focus: Initiatives stress that AI should not violate fundamental human rights
such as dignity, security, privacy, freedom of expression, and equality.
2. Protection Measures by IEEE: The IEEE recommends governance frameworks, standards,
and regulatory bodies to protect human rights. It emphasizes maintaining human control over
AI, translating legal obligations into informed policy, and prioritizing human well-being
during the design phase. It also underscores the importance of accountability and
transparency, emphasizing the need to identify rights violations, provide redress, and
maintain user control over personal data collected by AI.
3. Ethical Development: Organizations like the Foundation for Responsible Robotics advocate
for ethically developing AI with a focus on human rights, safety, privacy, and well-being.
They call for proactive innovation, education, and collaboration between industry and
consumers.
4. Principles by Future of Life Institute: The Future of Life Institute emphasizes designing
and operating AI in line with human dignity, rights, freedoms, and cultural diversity.
5. Legal Considerations: The Future Society's Law and Society Initiative questions the extent
to which AI should be delegated decision-making roles, such as AI 'judges' in the legal
profession, and emphasizes the importance of human equality, rights, and freedom.
6. Montréal Declaration: The Montréal Declaration seeks to establish an ethical framework
promoting internationally recognized human rights in fields affected by AI, emphasizing the
need for AI to support and encourage human well-being.
7. Impact on Employment: The UNI Global Union expresses concerns about the potential
harm to human employment due to AI automation, emphasizing the need to ensure that AI
serves people and protects fundamental human rights, dignity, freedom, privacy, and
diversity.
Emotional harm
AI can also cause emotional harm, raising ethical considerations about its impact on human
emotions. Key points include:

1. Impact on Human Emotional Experience:


 Acknowledges AI's influence on human emotions, emphasizing the core role of affect
(emotion and desire) in intelligence.
 Recognizes susceptibility to emotional influence both positively and negatively.
2. Cultural Sensitivity and Influence:
 The passage highlights the variation of affect across cultures and the potential for AI
to shape how individuals perceive society.
 Recommends mitigating this risk through adaptive AI norms and values based on
cultural sensitivities.


3. Potential Harms and Ethical Initiatives:


 Discusses ways AI could cause emotional harm, such as false intimacy,
over-attachment, and objectification.
 Various ethical initiatives, including the Foundation for Responsible Robotics,
Partnership on AI, AI Now institute, the Montréal Declaration, and EURON
Roadmap, address these concerns.
4. Intimate Systems and Ethical Guidelines:
 Focuses on potential harms in developing intimate relationships with AI, especially in
the context of the sex industry.
 Introduces the concept of "intimate systems" and outlines ethical guidelines to prevent
sexism, inequality, manipulation, and criminal behavior.
5. Affective AI and Nudging:
 Defines "nudging" as AI subtly modifying behavior through affective systems.
 Raises ethical concerns about potential negative impacts on human health and the
need for systematic analyses and user education.
6. Ethical Considerations for Governmental Nudging:
 Discusses the ethical appropriateness of governments using nudging through AI to
influence public behavior.
 Emphasizes the importance of transparency regarding the beneficiaries of such
nudging to prevent misuse.
7. Additional Issues:
 Highlights concerns related to technology addiction and emotional harm stemming
from societal or gender bias.

In summary, the passage underscores the importance of ethical considerations in AI
development, particularly concerning emotional well-being, cultural sensitivity, and the
potential risks associated with nudging and intimate relationships with AI. Transparency,
education, and ongoing discussions among stakeholders are crucial for responsible AI
deployment.

Accountability and responsibility


There is a crucial need for accountability and responsibility in the field of artificial
intelligence (AI). Key points include:

1. Auditable AI:
 The majority of initiatives emphasize the necessity for AI to be auditable, holding
designers, manufacturers, owners, and operators accountable for the technology's
actions and potential harm.
2. IEEE Recommendations:
 The IEEE suggests achieving accountability through legal clarification during
development, consideration of cultural norms, establishment of multi-stakeholder
ecosystems, and the creation of registration systems for tracing legal responsibility.
3. Future of Life Institute's Asilomar Principles:


 The Future of Life Institute presents the Asilomar Principles, emphasizing that
designers and builders of advanced AI are stakeholders with a responsibility to shape
the moral implications of AI use. The ability to ascertain the reasons behind AI
mistakes is highlighted.
4. Partnership on AI and Bias:
 The Partnership on AI stresses accountability, particularly in addressing biases within
AI systems. It emphasizes the importance of actively avoiding the replication of
assumptions and biases present in data.
5. General Emphasis on Accountability:
 All initiatives stress the overall importance of accountability and responsibility, both
at the level of designers and AI engineers, and within the broader context of
regulation, law, and society.

In summary, the passage highlights the consensus among various initiatives on the need for
auditable AI and the responsibility of key stakeholders to shape and understand the moral
implications of AI. The focus on avoiding biases and actively striving for fairness is crucial,
with a recognition that accountability extends to the broader legal and societal frameworks
governing AI development and deployment.

Transparency and privacy


Critical concerns surround transparency, explicability, security, reproducibility, and
interpretability in artificial intelligence (AI) systems. Key points include:

1. Transparency and Accountability Concerns:


 The lack of transparency in AI systems, especially in safety-critical contexts like
driverless cars and medical diagnoses, raises issues of user understanding,
accountability, and difficulty in holding relevant parties responsible for potential
harm.
2. IEEE's Proposed Standards:
 The IEEE proposes developing measurable and testable transparency standards,
catering to different stakeholders. This includes a 'why-did-you-do-that' button for
users and an 'ethical black box' for certification agencies to access relevant
algorithms, ensuring failure transparency (a toy sketch of such a decision trace
appears at the end of this section).
3. Privacy and Personally Identifiable Information (PII):
 AI's reliance on personal data raises concerns about individuals' right to keep
information private and have control over its use.
 The IEEE suggests the possibility of a personalized 'privacy AI' to help individuals
manage and foresee ethical implications of machine learning data exchange.
4. Regulatory Measures and Consent:
 Regulation (EU) 2016/679 establishes Personally Identifiable Information (PII) as an
individual's asset, requiring explicit consent for data collection to protect autonomy
and dignity.
5. Future of Life Institute's Asilomar Principles:


 Aligned with the IEEE, the Asilomar Principles stress transparency and privacy across
various aspects, including failure transparency, judicial transparency, personal
privacy, and protection of liberties.
6. Saidot's Emphasis on Transparency, Accountability, and Trustworthiness:
 Saidot emphasizes the importance of transparent, accountable, and trustworthy AI,
fostering open connections and collaboration for cooperation, progress, and
innovation.
7. Overall Importance of Transparency and Accountability:
 All initiatives surveyed recognize transparency and accountability as crucial issues in
AI. This balance is foundational to addressing concerns such as legal fairness, worker
rights, data and system security, public trust, and social harm.

In summary, the passage highlights the multifaceted challenges and proposed solutions
related to transparency, privacy, and accountability in the development and deployment of
AI, emphasizing the broader impact on legal, ethical, and societal aspects.
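
The 'why-did-you-do-that' button in point 2 presupposes that a system records, for every
decision, the inputs it saw and the factors that drove it, so the rationale can be replayed
on demand. Below is a toy sketch of such a decision trace; the class and field names are
illustrative assumptions of this note, not part of any IEEE standard.

```python
import json
import time

class DecisionTrace:
    """Toy decision log backing a 'why-did-you-do-that' query.

    Each record stores what the system decided, the inputs it saw, and
    the factors that drove the decision, so a user or certification
    agency can ask for an explanation after the fact.
    """

    def __init__(self):
        self._records = []

    def log(self, decision, inputs, factors):
        """Record one decision; returns its id for later queries."""
        self._records.append({
            "id": len(self._records),
            "timestamp": time.time(),
            "decision": decision,
            "inputs": inputs,
            "factors": factors,
        })
        return self._records[-1]["id"]

    def why(self, decision_id):
        """The 'why-did-you-do-that' button: replay a decision's rationale."""
        r = self._records[decision_id]
        return (f"Decision '{r['decision']}' was taken because: "
                f"{'; '.join(r['factors'])}. Inputs: {json.dumps(r['inputs'])}")

# Example: a triage assistant explaining a referral recommendation.
trace = DecisionTrace()
rec_id = trace.log("refer to specialist",
                   {"age": 72, "blood_pressure": "160/100"},
                   ["blood pressure above referral threshold", "age over 65"])
print(trace.why(rec_id))
```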

Safety and trust


Safety and trust are crucial aspects of the deployment of artificial intelligence (AI). Key
points include:

1. Safety Mindset Advocated by IEEE:


 The IEEE suggests fostering a 'safety mindset' among AI researchers, aiming to
proactively identify and address unintended behaviors. The focus is on developing
systems that are inherently safe by design, minimizing risks during the development
phase.
2. Institutional Review Boards and Community Sharing by IEEE:
 The IEEE recommends the establishment of review boards at institutions to evaluate
AI projects and progress. Additionally, the promotion of a community sharing culture
is encouraged to disseminate information on safety-related developments, research,
and tools.
3. Mission-Led Development for Public Trust by Future of Life Institute:
 The Future of Life Institute's Asilomar principles advocate a mission-led approach to
AI development. The norm is that AI should be developed in service of widely shared
ethical ideals, benefiting all humanity rather than serving the interests of a single state
or organization. This mission-led approach is seen as essential for building public
trust in AI integration.
4. AI Acting with Integrity and Effective Communication by JSAI:
 The Japanese Society for AI underscores the importance of AI acting with integrity
and advocates for earnest communication between AI and society. Consistent and
effective communication is viewed as a means to strengthen mutual understanding,
contributing to overall peace and happiness.
5. Culture of Cooperation and Trust by Partnership on AI:


 The Partnership on AI strives to ensure that AI is trustworthy and aims to foster a
culture of cooperation, trust, and openness among AI scientists and engineers.
6. Dialogue and Transparency Emphasized by Institute for Ethical AI & Machine
Learning:
 The Institute for Ethical AI & Machine Learning emphasizes the significance of
dialogue, especially in addressing issues of trust and privacy. Their core tenets
mandate that AI technologists communicate with stakeholders about processes and
data involved, aiming to build trust and spread understanding throughout society.

In summary, the passage highlights the collective efforts to instill safety, trustworthiness, and
ethical considerations in AI development. These efforts include a proactive safety mindset,
institutional review processes, mission-led development, integrity in AI actions, effective
communication, and a culture of cooperation to ensure public trust and successful AI
integration into society.

Social harm and social justice: inclusivity, bias, and discrimination

AI development must be socially responsible, with a focus on inclusivity, bias mitigation,
and the prevention of discrimination. Key points include:

1. Diversity and Social Alignment:


 AI development must embrace diverse viewpoints aligned with community norms,
values, and ethics. Biases and assumptions should be avoided, and AI should respect
cultural diversity, aligning with public values and working for the common good.
2. Social Responsibility of Developers:
 Developers have a social responsibility to embed ethical values into AI, avoiding
harm to any segment of society. Initiatives such as AI4All advocate for fair and
equitable inclusion at all stages, particularly supporting under-represented groups.
3. Norm Identification and Conflict Resolution:
 The IEEE suggests identifying social and moral norms of specific communities where
AI is deployed. Designing AI with "norm updating" in mind, accommodating
dynamic cultural changes, and transparently addressing norm conflicts are crucial.
Collaboration is key, with careful evaluation to prevent biases disadvantaging specific
social groups.
4. Global Inequality and AI's Impact:
 Addressing global inequality, the passage emphasizes AI's potential humanitarian
usefulness. It must not widen gaps but mitigate inequality through actions like CSR
integration, transparent power structures, and global knowledge sharing. Aligning AI
with Sustainable Development Goals is essential for worldwide availability and
ethical AI education.
5. Ethical Guidelines and Social Responsibility:


 Ethical guidelines from the Japanese Society for AI stress AI's contribution to
humanity, social responsibility, and fair usage. Various initiatives, including the
Foundation for Responsible Robotics, Partnership on AI, Saidot, Future of Life
Institute, and Institute for Ethical AI & Machine Learning, highlight the importance of
diversity commitment, bias monitoring, and ensuring human-centric AI development.

In summary, the passage advocates for socially responsible AI practices, emphasizing
diversity, ethical considerations, and global equity. It calls for collaborative efforts to embed
the right values into AI systems, ensuring fair and inclusive development while mitigating
biases and discriminatory risks.

Financial harm: Economic opportunity and employment


Concerns arise about potential financial harm from AI's impact on employment and the
economy. Key points include:

1. Economic Disruption and Job Loss:


 AI's rapid development poses a risk of job displacement and economic disruption,
necessitating a reevaluation of traditional employment structures. The focus extends
beyond job numbers to consider the broader implications on workers' rights and
displacement strategies.
2. Need for Retraining and Adaptability:
 To cope with the pace of technological change, the workforce must embrace
adaptability and acquire new skill sets. The IEEE emphasizes the importance of
retraining programs, starting as early as high school, to ensure accessibility to future
employment opportunities.
3. Multi-Stakeholder Governance for Ethical AI:
 The UNI Global Union advocates for multi-stakeholder ethical AI governance bodies
on global and regional levels. Bringing together diverse stakeholders, including trade
unions, designers, researchers, and employers, ensures that AI benefits people broadly
and equally, with policies addressing economic, technological, and social divides.
4. AI's Impact on Working Conditions:
 The AI Now Institute collaborates with various stakeholders to understand how AI,
through automation and early integration, influences labor and working conditions
across different sectors. The Future Society poses ethical and professional questions
about AI's impact on the legal profession.
5. Positive Opportunities and Workplace Bias:
 AI's role extends beyond potential job loss, offering opportunities to address
workplace bias and identify deficiencies in product development. The IEEE suggests
that if developed with ethical considerations, AI can contribute to proactive
improvements in design processes.
6. RRI as a Transparent and Interactive Process:
 The passage introduces the concept of Responsible Research and Innovation (RRI),
emphasizing a transparent and interactive process where societal actors and

innovators respond to each other's needs. The goal is to ensure the ethical
acceptability, sustainability, and societal desirability of innovations.

In summary, the passage underscores the need for proactive measures to address the
economic impact of AI, including retraining initiatives, multi-stakeholder governance, and
ethical considerations to harness positive opportunities while mitigating potential harms.

Lawfulness and justice

AI raises legal, ethical, and existential challenges, creating an imperative for proactive
governance. Key points include:

1. Legal and Ethical Considerations:


 The IEEE emphasizes the necessity for AI to adhere to existing international and
domestic laws. It rejects the notion of granting AI any level of 'personhood' and
advocates for a robust legal framework governing AI development, distribution, and
accountability. Legal challenges include determining AI's status, addressing
governmental use, ensuring accountability for harm, and maintaining transparency.
2. Control and Ethical Use of AI:
 As AI becomes more sophisticated, concerns about misuse, data exploitation, and
hacking rise. The passage calls for increased public awareness and education on AI,
especially focusing on undergraduate and postgraduate programs. Various initiatives,
such as the Foundation for Responsible Robotics, Partnership on AI, and others, stress
the importance of clear, open dialogue between AI and society to build understanding,
acceptance, and trust.
3. Existential Risk:
 The Future of Life Institute highlights the existential risk associated with AI's
competence rather than malevolence. AI's continuous learning and potential
misalignment with human goals pose challenges. The risk of autonomous weapons
systems (AWS) is acknowledged, with the IEEE proposing recommendations for
meaningful human control over AWS, including accountability, transparency,
predictability, and adherence to ethical codes. Concerns about an arms race in lethal
autonomous weapons are raised, urging pre-emptive regulation to avoid societal harm.
4. International Cooperation and Mitigation Efforts:
 The passage stresses the need for international cooperation to address the risks posed
by AI, particularly in the development of autonomous weapons. The Future of Life
Institute cautions against assumptions about the upper limits of AI capabilities and
emphasizes the profound impact of advanced AI on the course of human history.

In summary, the passage highlights the multifaceted challenges of lawfulness, ethical use,
and existential risks associated with AI. It calls for proactive governance, education, and
international collaboration to ensure the responsible development and deployment of AI
technologies.


Ethical Initiatives

[Comparative overview of specific ethical initiatives not reproduced in this copy.]

3.3. Case studies


3.3.1. Case study: healthcare robots
 AI and robotics are increasingly integrated into healthcare for tasks like diagnosis,
surgeries, patient monitoring, and physical interventions.
 The potential benefits include improved diagnostics, enhanced patient care, and
support for medical professionals.
 Machine learning, especially in medical image diagnostics, has demonstrated
capabilities matching or exceeding human abilities in detecting illnesses.
 Embodied AI, represented by robots, poses tangible risks to physical safety.
 Historical incidents, such as a malfunctioning surgical robot causing injury or a
factory robot leading to a worker's death, highlight the need for careful
implementation.
 As robots become more prevalent, especially in domains like driverless cars, drones,
and assistive robots, decisions made by these systems directly impact human safety
and well-being.
 The physical presence and moving parts of robots in the real world elevate the stakes,
necessitating stringent safety measures.
 The physical nature of robots introduces risks, especially when dealing with
vulnerable populations like children and the elderly.
 Continuous advancements in AI and robotics require careful consideration of potential
harm and proactive safety measures.


Safety
 The foremost ethical consideration in the integration of AI and robotics in healthcare
is the assurance of safety and the prevention of harm.
 This imperative gains heightened significance in healthcare contexts dealing with
vulnerable populations like the sick, elderly, and children.
 AI and robotics promise improved accuracy in diagnosis and treatment, offering
transformative potential for healthcare.
 However, the pursuit of these benefits must be balanced with a rigorous commitment
to safety to prevent unintended harm.
 Establishing the long-term safety and performance of digital healthcare technologies
necessitates substantial investment in clinical trials.
 Examples, such as the complications arising from vaginal mesh implants, underscore
the consequences of bypassing thorough testing protocols, emphasizing the need for
due diligence in healthcare innovations.
 Ongoing legal battles related to the side effects of medical interventions exemplify the
repercussions of compromising safety for expediency.
 These incidents underscore the critical role of comprehensive clinical trials in
ensuring the safe implementation of AI systems in healthcare.
User understanding
 The effective and safe utilization of AI in healthcare demands a symbiotic relationship
between technology and healthcare professionals. The da Vinci surgical robotic
assistant, for instance, exemplifies how precise applications can enhance surgical
outcomes, but only when operated by trained professionals.
 The evolving landscape necessitates a transformation in the skills mix of healthcare
professionals. Initiatives, such as the NHS' Topol Review, underscore the importance
of developing digital literacy among healthcare providers over the next decades.
 As genomics and machine learning become integral to medical practices, healthcare
professionals must cultivate digital literacy. This ensures a nuanced understanding of
each technological tool's capabilities and limitations, fostering a balance between trust
and critical awareness.
 Despite the increasing integration of AI, challenges persist in interpreting algorithmic
outputs. The innate complexity and 'black box' nature of machine learning algorithms
sometimes limit users' ability to comprehensively understand the decision-making
process.
 The necessity for individuals to fully comprehend AI decision-making is debatable.
Even with mandatory understanding, the intricacies of machine learning may render
certain algorithms as 'black boxes.' Proposals, such as licensing AI for specific
medical procedures with built-in error thresholds, emerge as potential measures to
ensure safety without complete transparency.
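
The licensing idea in the final bullet can be read as a simple statistical gate: the AI
remains licensed for a given procedure only while its measured error rate stays under an
agreed threshold. A minimal sketch under that reading follows; the procedures, thresholds,
and function names are hypothetical, not drawn from any actual regulator.

```python
# Hypothetical licensing gate: an AI tool remains licensed for a procedure
# only while its observed error rate stays below an agreed threshold.
# All names and numbers below are illustrative.

LICENSED_ERROR_THRESHOLDS = {
    "diabetic_retinopathy_screening": 0.05,  # max tolerated error rate
    "skin_lesion_triage": 0.10,
}

def is_use_permitted(procedure: str, errors: int, cases: int) -> bool:
    """Return True if the tool's observed error rate is within its licence."""
    if procedure not in LICENSED_ERROR_THRESHOLDS:
        return False  # no licence exists for this procedure at all
    if cases == 0:
        return False  # no evidence yet; do not permit unsupervised use
    observed_rate = errors / cases
    return observed_rate <= LICENSED_ERROR_THRESHOLDS[procedure]

# 12 errors in 300 monitored cases = 4% error rate: within the 5% licence.
print(is_use_permitted("diabetic_retinopathy_screening", errors=12, cases=300))  # True
# 45 errors in 300 cases = 15%: outside the 10% licence for triage.
print(is_use_permitted("skin_lesion_triage", errors=45, cases=300))              # False
```
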
Data protection
 The integration of personal medical data into healthcare algorithms introduces
concerns about data security and potential misuse. Fitness tracker data, for example,
could be exploited by third parties like insurance companies, raising apprehensions
about the potential denial of healthcare coverage based on this information.
 The vulnerability of systems handling medical data is underscored by the persistent
threat of hackers. Ensuring robust security measures becomes challenging in
environments accessed by diverse medical personnel, highlighting the need for
comprehensive cyber security protocols.


 Efficient data sharing is crucial for the advancement of machine learning algorithms
in healthcare. However, existing gaps in information governance pose obstacles to
responsible and ethical data utilization. Establishing clear frameworks outlining how
healthcare staff and researchers can use data while safeguarding patient
confidentiality is imperative.
 Addressing data protection concerns is fundamental for building public trust. The
NHS' Topol Review stresses the necessity of transparent frameworks for genomics
and other data usage, emphasizing ethical practices to ensure responsible
advancements in healthcare algorithms.
Legal responsibility
 Despite the potential of AI to reduce medical errors, determining legal liability in case
of issues remains complex. Equipment faults, if proven, hold manufacturers liable;
however, establishing accountability during procedures, especially involving AI, can
be challenging.
 Lawsuits against the da Vinci surgical assistant exemplify the difficulty in attributing
blame, emphasizing the intricate nature of discerning malfunctions and liability.
Despite legal challenges, such technologies continue to be widely accepted in
healthcare.
 The opacity of 'black box' algorithms complicates legal matters, making it challenging
to establish negligence on the part of algorithm producers. The inability to ascertain
how decisions are reached adds complexity to assigning responsibility.
 Presently, AI serves as an aid for expert decisions, with medical professionals bearing
primary liability. In cases like the pneumonia study, if healthcare staff solely rely on
AI without applying their expertise, negligence may be attributed to them.
 With AI evolving, there's a potential shift where the absence of AI utilization might
be deemed negligent. In regions with a shortage of medical professionals, withholding
AI tools for conditions like diabetic eye disease detection due to a lack of specialists
could be considered unethical.
Bias
 The EU upholds non-discrimination as a fundamental value (Article 21 of the EU
Charter of Fundamental Rights). However, machine learning algorithms, often trained
on imbalanced datasets, can perpetuate biases, posing challenges to equitable
healthcare outcomes.
 In healthcare AI, biased datasets can lead to inaccuracies, especially for ethnic
minorities. For example, a skin cancer detection model trained on a dataset
predominantly featuring individuals with light skin may misdiagnose conditions in
people of color, emphasizing the risk of skewed outcomes (a simple dataset audit is
sketched after this list).
 Unraveling algorithmic biases is complex, given the inherent 'black box' nature of
machine learning. Even when a model's design is clear, understanding its biases is
challenging. This opacity hampers the identification and rectification of biases,
particularly those affecting underrepresented groups.
 Industry initiatives, like The Partnership on AI, aim to address ethical concerns.
Launched by major tech companies, this ethics-focused group works to identify and
rectify biases. However, concerns about the lack of diversity in such boards raise
questions about the comprehensiveness of bias identification.
 Various codes of conduct and ethical guidelines have emerged to guide the
development of unbiased AI. These initiatives emphasize the need for transparency,
fairness, and inclusivity in AI design to minimize biases and ensure equitable
healthcare solutions for diverse populations.
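
The skin cancer example above is, at root, a measurable dataset imbalance that can be
checked before training. Below is a minimal sketch of such an audit, assuming each image
carries a Fitzpatrick-style skin-type label; the 10% cutoff is an illustrative choice of
this note, not a published standard.

```python
from collections import Counter

def audit_group_balance(labels, min_share=0.10):
    """Flag demographic groups that are under-represented in a training set.

    labels: one group label per training example (e.g. Fitzpatrick skin type).
    min_share: smallest acceptable fraction per group (illustrative cutoff).
    """
    counts = Counter(labels)
    total = sum(counts.values())
    report = {}
    for group, n in sorted(counts.items()):
        share = n / total
        report[group] = (share, "OK" if share >= min_share else "UNDER-REPRESENTED")
    return report

# Toy dataset: skin-type distribution of a dermatology training set.
skin_types = ["I"] * 400 + ["II"] * 350 + ["III"] * 180 + ["V"] * 50 + ["VI"] * 20
for group, (share, status) in audit_group_balance(skin_types).items():
    print(f"type {group}: {share:.1%} {status}")
```

An audit like this only surfaces the imbalance; correcting it still requires collecting
more representative data or rebalancing during training.
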
Equality of access


 Digital health technologies, ranging from fitness trackers to insulin pumps, empower
patients to actively engage in their healthcare. The potential benefits include active
health management and addressing health inequalities stemming from factors like
poor education and unemployment.
 Despite the potential benefits, there's a risk that individuals lacking financial means or
digital literacy may be excluded, reinforcing existing health disparities. The
affordability and accessibility of these technologies become critical factors in
determining who can benefit from them.
 Initiatives like the UK's National Health Service (NHS) Widening Digital
Participation programme play a crucial role in addressing these concerns. By assisting
those lacking digital skills, such programs aim to bridge the gap, ensuring that a wider
demographic can access digital health services.
 Beyond individual empowerment, increasing participation from diverse demographic
groups is essential for preventing biases in healthcare algorithms. The data generated
from a more varied patient population contributes to more inclusive and accurate
AI-driven healthcare solutions.
Quality of care
 Digital healthcare technologies, as highlighted in the NHS' Topol Review, hold
significant potential to enhance diagnostic accuracy, improve treatment efficiency, and
streamline healthcare workflows.
 Carefully introduced companion and care robots could revolutionize elderly care,
offering reminders for medications, assisting with tasks, and facilitating
communication with healthcare providers. This could reduce dependence and enhance
the quality of life for the elderly.
 Despite the potential advantages, concerns arise about whether emotionless robots can
truly substitute for the empathetic touch of human caregivers, especially in long-term
care scenarios where basic companionship plays a crucial role.
 Human interaction is deemed essential, particularly for vulnerable and lonely
populations, with research suggesting that a rich social network contributes to
dementia protection. While robots can simulate emotions, they currently lack the
depth of human connection.
 Questions about the potential objectification of the elderly arise, with concerns that
robotic care might make them feel like mere objects devoid of control. The
application of autonomy, dignity, and self-determination through machines in
healthcare raises ethical uncertainties.
 While new technologies could free up staff time for direct patient interactions, the
challenge lies in maintaining a balance where efficiency gains don't compromise the
essential human touch in healthcare. Striking this balance is crucial for upholding
patient dignity and well-being.
Deception
 Carebots, designed for social interactions, often play a therapeutic role in healthcare
settings. Robotic seals, for example, have shown positive effects in care homes,
reducing anxiety, brightening moods, and enhancing sociability among residents.
 The introduction of robotic pets as companions for dementia patients raises ethical
questions about the potential deception involved. Dementia patients may blur the line
between reality and imagination, prompting reflection on the morality of encouraging
emotional involvement with robots.
 Companion robots and robotic pets, aiming to alleviate loneliness among older
individuals, rely on the belief that the robot possesses sentience and caring feelings.


This introduces a fundamental deception, as users must delude themselves about the
true nature of their relationship with the robot.
 Scholars like Turkle et al. (2006) and Wallach and Allen (2009) express discomfort
with the idea that individuals, including older family members, might express love to
robots, raising questions about the authenticity of such interactions. The use of
deceptive techniques in robot design further complicates ethical considerations.
 Encouraging elderly individuals to interact with robot toys may inadvertently
infantilize them, potentially undermining their autonomy and independence. The
ethical implications of this impact on the dignity and agency of older individuals need
careful consideration.
 While robotic companionship offers therapeutic benefits, striking a balance between
providing emotional support and being transparent about the nature of the
human-robot relationship remains a challenging ethical dilemma.
Autonomy
 Healthcare robots should prioritize tangible benefits for patients rather than merely
aiming to alleviate societal care burdens. Particularly in care and companion AI, the
focus should be on empowering disabled and older individuals, enhancing their
independence, and improving their overall well-being.
 Robots have the potential to empower disabled and older individuals, fostering
independence and enabling them to live in their homes for an extended period. This
can lead to increased freedom and autonomy, contributing positively to the quality of
life for patients.
 The question of autonomy becomes complex when a patient's mental capability is in
doubt. Ethical considerations arise, especially in scenarios where a patient might issue
a command that poses harm, such as instructing a robot to carry out a dangerous act
like throwing them off a balcony.
 The ethical dilemma revolves around determining the extent of autonomy granted to
individuals, especially when their mental capacity is compromised. Striking a balance
between respecting patient autonomy and ensuring their safety becomes a critical
aspect of healthcare robotics.
 To address such challenges, establishing clear and robust ethical guidelines is
imperative. These guidelines should guide the development and deployment of
healthcare robots, ensuring that patient autonomy is respected within ethical
boundaries and prioritizing their well-being.
Liberty and privacy
 The deployment of healthcare service and companion robots in people's homes
necessitates careful consideration of user privacy. Robots witness intimate moments
like bathing and dressing, raising concerns about recording and accessing such private
information.
 Questions arise regarding the recording of private moments and determining who
should have access to this information. With elderly individuals, particularly those
with conditions like Alzheimer's, maintaining dignity and privacy becomes
challenging, as they might forget the presence of a monitoring robot.
 Home-care robots face an ethical dilemma in balancing user privacy and nursing
needs. They might need to act as supervisors, intervening in situations such as leaving
appliances on or preventing potentially dangerous actions. This could involve
restrictions on user freedoms, which must be approached cautiously.
 Implementing sensor-based monitoring in smart homes adds another layer to the
privacy debate. While these systems can detect potential risks, such as an individual


attempting to leave a room, using them to restrict movement raises concerns about
infringing on the individual's liberty and potentially making them feel confined.
 Designing healthcare robots with ethical considerations, ensuring they respect user
privacy, and obtaining clear consent for specific monitoring activities are crucial
steps. Striking a balance between ensuring safety and upholding the dignity and
autonomy of users is a central challenge in this context.
Moral agency
 Robots lack the inherent capacity for ethical reflection or moral decision-making. As
of now, humans must retain ultimate control over decision-making processes in
various fields, including healthcare.
 The absence of moral agency in robots implies that ethical decisions and
considerations must be guided and controlled by humans. While robots can be
designed with ethical reasoning capabilities, they don't possess the nuanced
understanding and moral basis required for complex decision-making.
 Sci-fi depictions, such as the film 'I, Robot,' explore scenarios where robots, driven by
cold logic, make decisions with ethical implications. In reality, incorporating ethical
reasoning into robots is an ongoing effort, but the question of moral responsibility
remains complex and uncertain in the context of healthcare.
 The move towards more automated healthcare systems raises concerns about the
ability of robots to navigate complex moral issues. While ethical frameworks can be
integrated into their programming, the nuanced nature of moral agency involves
considerations beyond the mere application of ethical principles.
 As automation in healthcare advances, the question of moral agency in robots
demands closer attention. Building ethical reasoning into their design is a step
forward, but ensuring responsible and morally sound decisions requires ongoing
exploration and evaluation. The balance between automated processes and human
oversight remains a critical aspect of ethical healthcare robotics.
Trust
 Larosa and Danks highlight the potential for AI to disrupt human-human relationships
within healthcare, particularly the trust traditionally placed in doctors. The shift
towards AI decision-making may alter patient perceptions of their healthcare
providers.
 Psychology research indicates that people tend to mistrust those making moral
decisions based on cost-benefit calculations, akin to how computers operate.
Dystopian science fiction narratives and real-world AI errors contribute to public
skepticism, creating barriers to the acceptance of AI in healthcare.
 Patients trust doctors due to explicit certification and licensing, signifying specific
skills and values. The potential replacement of doctors by robots raises questions
about whether these AI systems are appropriately certified or 'licensed' for specific
medical functions, impacting patient-doctor trust.
 Patients trust doctors as paragons of expertise. If doctors are perceived as 'mere users'
of AI, there is a risk of downgrading their role in the public eye, potentially eroding
trust in their capabilities.
 Trust is influenced by patients' experiences and open communication with their
doctors. While AI introduction could enhance trust through improved diagnostics or
patient care, excessive delegation of authority to AI may undermine the doctor's role
and impact trust negatively.
 The extent to which doctors delegate diagnostic and decision-making authority to AI
impacts trust. Striking a balance that aligns with patient preferences and maintains a
doctor's authority is crucial for sustaining trust in medical professionals.


 As evidence supporting the therapeutic benefits of robotic healthcare systems
accumulates, trust in these technologies is likely to increase. The da Vinci surgical
robotic assistant serves as an example of a robotic system gaining trust as it
demonstrates positive outcomes in medical applications.
Employment replacement
 Similar to concerns in other industries, the healthcare sector is apprehensive about the
potential threat of emerging technologies, with carebots capable of performing a
significant portion of nurses' tasks. This has raised fears of job displacement among
healthcare professionals.
 The NHS' Topol Review in 2019 provides a more optimistic perspective, asserting
that emerging technologies, including AI, will not replace healthcare professionals but
rather enhance or augment their capabilities. The emphasis is on technology serving
as a supportive tool, allowing healthcare workers to focus more on direct patient care.
 Despite the introduction of carebots that can handle a substantial portion of nurses'
duties, the overarching view is that these technologies are intended to complement
human efforts rather than replace healthcare professionals entirely. Carebots are seen
as tools to increase efficiency and productivity.
 The Topol Review emphasizes the importance of fostering a learning environment
within healthcare systems to ensure that employees are digitally capable. This
approach acknowledges the evolving nature of healthcare roles and the need for
continuous adaptation to technological advancements.
 The key challenge lies in striking a balance between leveraging technological
advancements for improved healthcare outcomes and ensuring that the human
workforce remains integral. The goal is to create synergy between AI and healthcare
professionals, maximizing the benefits of both.
3.3.2 Case study: Autonomous Vehicles
Autonomous Vehicles (AVs) are vehicles that are capable of sensing their
environment and operating with little to no input from a human driver. While the idea of
self-driving cars has been around since at least the 1920s, it is only in recent years that technology
has developed to a point where AVs are appearing on public roads.
According to automotive standardization body SAE International (2018), there are six
levels of driving automation:

Level 0 (No automation): An automated system may issue warnings and/or momentarily
intervene in driving, but has no sustained vehicle control.
Level 1 (Hands on): The driver and automated system share control of the vehicle. For
example, the automated system may control engine power to maintain a set speed (e.g.
Cruise Control), engine and brake power to maintain and vary speed (e.g. Adaptive Cruise
Control), or steering (e.g. Parking Assistance). The driver must be ready to retake full
control at any time.
Level 2 (Hands off): The automated system takes full control of the vehicle (including
accelerating, braking, and steering). However, the driver must monitor the driving and be
prepared to intervene immediately at any time.
Level 3 (Eyes off): The driver can safely turn their attention away from the driving tasks
(e.g. to text or watch a film), as the vehicle will handle any situations that call for an
immediate response. However, the driver must still be prepared to intervene, if called upon
by the AV to do so, within a timeframe specified by the AV manufacturer.
Level 4 (Minds off): As level 3, but no driver attention is ever required for safety,
meaning the driver can safely go to sleep or leave the driver's seat.
Level 5 (Steering wheel optional): No human intervention is required at all. An example of
a level 5 AV would be a robotic taxi.
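
Because each SAE level is essentially defined by who must pay attention, the taxonomy is
straightforward to encode as data. The sketch below is one illustrative (and simplified)
encoding; the field names are this note's own, not SAE's.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SaeLevel:
    """One SAE J3016 driving-automation level (illustrative encoding)."""
    level: int
    name: str
    system_drives: bool        # does the automated system have sustained control?
    driver_must_monitor: bool  # must the driver watch the road at all times?
    driver_is_fallback: bool   # must the driver be ready to intervene if asked?

SAE_LEVELS = [
    SaeLevel(0, "No automation",           False, True,  True),
    SaeLevel(1, "Hands on",                True,  True,  True),
    SaeLevel(2, "Hands off",               True,  True,  True),
    SaeLevel(3, "Eyes off",                True,  False, True),
    SaeLevel(4, "Minds off",               True,  False, False),
    SaeLevel(5, "Steering wheel optional", True,  False, False),
]

def driver_obligations(level: int) -> str:
    """Summarise what the human driver must still do at a given level."""
    lv = SAE_LEVELS[level]
    if lv.driver_must_monitor:
        return f"Level {lv.level} ({lv.name}): driver must monitor and be ready to act."
    if lv.driver_is_fallback:
        return f"Level {lv.level} ({lv.name}): driver may look away but must respond when asked."
    return f"Level {lv.level} ({lv.name}): no driver attention required."

for n in range(6):
    print(driver_obligations(n))
```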

Some of the lower levels of automation are already well-established and on the market, while
higher level AVs are undergoing development and testing. However, as we transition up the
levels and put more responsibility on the automated system than the human driver, a number
of ethical issues emerge.
Societal and Ethical Impacts of AVs
The development and deployment of Autonomous Vehicles (AVs) raise critical
issues regarding public safety and ethical considerations. While cars with "assisted driving"
functions are legal in many countries, concerns arise as some features lack independent safety
certification. In Germany, the Ethics Commission on Automated Driving emphasizes the
public sector's responsibility to ensure the safety of AV systems through official licensing
and monitoring.
The AV industry is considered to be entering a precarious phase, characterized by
vehicles not fully autonomous yet human operators not fully engaged. This phase poses risks,
as highlighted by the first pedestrian fatality involving an autonomous car in Arizona, USA,
in March 2018. The incident prompted scrutiny of safety measures, ethical considerations, and
the misleading communication around terms like "self-driving cars" and "autopilot."


The tragic incident involving an Uber AV and a pedestrian raised questions about the
safety of testing AV systems on public roads. Human safety, both for the public and
passengers, emerges as a significant concern. The role of human operators and the
expectations placed on them during testing, especially in emergency situations, have sparked
debates on the ethicality and safety of AV testing practices.
As major companies develop AVs capable of autonomous decision-making, ethical
dilemmas surface. AVs must navigate complex and unpredictable environments, and
programming them to prioritize safety can be challenging. Scenarios where AVs must choose
between the safety of passengers and other road users raise ethical questions, such as
deciding whom to prioritize in a potential collision. These challenges emphasize the need for
robust ethical frameworks in AV development and deployment.
Processes and technologies for accident investigation
Autonomous Vehicles (AVs) have witnessed serious accidents, prompting the need for robust
investigation processes and technologies:
Notable Accidents:
1. In January 2016, a fatal crash occurred in China involving a Tesla Model S. The family
believes Autopilot was engaged, while Tesla states that damage hinders verification. A civil
case is ongoing (Curtis, 2016).
2. In May 2016, a Tesla Model S crashed in Florida, resulting in the death of the driver.
Investigations initially blamed the driver, but later findings implicated both Autopilot and the
driver's over-reliance on Tesla's aids (Gibbs, 2016; Felton, 2017).
3. A fatal crash in California in March 2018 involving a Tesla Model X was attributed to an
Autopilot navigation mistake. The victim's family is suing Tesla (O'Kane, 2018).
Challenges in Investigation: Efforts to investigate AV accidents face challenges due to the
absence of established standards, processes, and regulatory frameworks. Proprietary data
logging systems in AVs hinder independent investigations, relying heavily on manufacturers'
cooperation for crucial data (Stilgoe and Winfield, 2018).
Proposed Solution: One proposed solution involves equipping future AVs with industry-
standard event data recorders, referred to as an 'ethical black box.' This would enable
independent accident investigators to access critical data, similar to the model employed in
air accident investigations (Sample, 2017).
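
The 'ethical black box' proposal amounts to a standardized, append-only event recorder
whose contents investigators can trust were not altered after an accident. One common way
to make a log tamper-evident is hash chaining, sketched below; this illustrates the general
technique under this note's own assumptions and is not a proposed standard format.

```python
import hashlib
import json
import time

class EthicalBlackBox:
    """Append-only, hash-chained event recorder (illustrative sketch).

    Each record embeds the hash of the previous record, so any
    after-the-fact edit breaks the chain and is detectable by an
    independent accident investigator.
    """

    def __init__(self):
        self._chain = []
        self._prev_hash = "0" * 64  # genesis value

    def record(self, event: dict) -> None:
        """Append one timestamped event, chained to its predecessor."""
        entry = {"t": time.time(), "event": event, "prev": self._prev_hash}
        serialized = json.dumps(entry, sort_keys=True).encode()
        entry_hash = hashlib.sha256(serialized).hexdigest()
        self._chain.append((entry, entry_hash))
        self._prev_hash = entry_hash

    def verify(self) -> bool:
        """Re-walk the chain; returns False if any record was altered."""
        prev = "0" * 64
        for entry, stored_hash in self._chain:
            if entry["prev"] != prev:
                return False
            serialized = json.dumps(entry, sort_keys=True).encode()
            if hashlib.sha256(serialized).hexdigest() != stored_hash:
                return False
            prev = stored_hash
        return True

box = EthicalBlackBox()
box.record({"sensor": "lidar", "obstacle_detected": True, "speed_kmh": 43})
box.record({"controller": "planner", "action": "emergency_brake"})
print(box.verify())  # True while the log is untouched
```
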
Addressing these challenges is crucial for fostering transparency, accountability, and
continuous improvement in AV safety standards and technology. The development and
adoption of standardized investigation processes will contribute to building public trust in
autonomous driving technologies.
Near-miss accidents
The systematic collection of near-miss accident data in Autonomous Vehicles (AVs) faces
significant challenges:
Current Data Landscape:
Lack of Systematic Collection: There is currently no standardized system for the systematic
collection of near-miss accidents involving AVs.
Limited Obligations for Manufacturers: Manufacturers are not obligated to collect or share
near-miss data, except in California, where companies testing AVs must disclose instances of
human driver interventions ("disengagements").
California's Disengagement Data: In 2018, California reported varied disengagement rates
among AV manufacturers, highlighting the need for continuous human driver engagement.
However, criticism arose due to ambiguous wording, potentially allowing companies to
underreport certain events resembling near-misses (Hawkins, 2019).
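
California's reports pair each company's autonomous test mileage with its count of
disengagements, and the headline metric is simply miles per disengagement. A quick sketch
of the arithmetic, using invented company names and figures:

```python
# Miles-per-disengagement, the headline metric in California's AV reports.
# The company names and figures below are invented for illustration only.

reports = [
    {"company": "ExampleAV", "autonomous_miles": 1_200_000, "disengagements": 110},
    {"company": "DemoDrive", "autonomous_miles": 30_000,    "disengagements": 900},
]

for r in reports:
    miles_per_disengagement = r["autonomous_miles"] / r["disengagements"]
    print(f'{r["company"]}: one disengagement every '
          f'{miles_per_disengagement:,.0f} autonomous miles')
```

As the criticism above suggests, the metric is only as meaningful as each company's
definition of a reportable disengagement.
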
Data Importance and Policy Challenges:


Informing Regulation: The absence of comprehensive near-miss data hampers
policymakers' ability to assess the frequency and significance of such incidents and the
subsequent measures taken by manufacturers.
Lessons from Aviation: A model similar to aviation, where near misses are rigorously
logged and independently investigated, could provide valuable insights for regulatory
frameworks in the AV sector.
Data privacy
The increasing prevalence of Autonomous Vehicles (AVs) raises critical questions regarding
data privacy:
Current Landscape:
Data Collection by Manufacturers: AV manufacturers gather substantial amounts of data
from these vehicles.
Privacy Implications: Concerns arise about the extent to which this data collection
compromises the privacy and data protection rights of both drivers and passengers.
Emerging Issues:
Misuse for Advertising: Privacy concerns extend to potential misuse of AV data for
advertising purposes, raising ethical questions about the handling of personal information
(Lin, 2014).
Unethical Data Use: Instances, such as Tesla's handling of AV data logs, highlight ethical
concerns. The company shared drivers' private data with the media without consent, raising
questions about transparency and user control (Thielman, 2017).
Proposed Solutions:
Data Sovereignty: The German Ethics Commission on Automated Driving proposes a
solution emphasizing data sovereignty for AV drivers. This approach aims to grant users
control over how their data is utilized, addressing privacy concerns (Ethics Commission,
2017).
Employment
Employment Risks:
Truck Drivers: The growth of AVs poses a medium-term risk to truck drivers, as
long-distance trucks lead AV technology adoption. Commercial deliveries and fully autonomous
trips have showcased the potential for job displacement in the trucking industry (Viscelli,
2018).
Bus Drivers: Cities globally are considering self-driving shuttles, challenging the future of
bus drivers. Tensions with labor unions have emerged, reflecting concerns about job security
and the impact on traditional public transport (Calder, 2018; Weinberg, 2019).
Taxi Industry Disruption:
Long-Term Impact: While fully autonomous taxis might become a reality in the long term,
plans for self-driving taxis in major cities raise concerns among taxi drivers about job
security and industry disruption. Initiatives like autonomous taxis in London and Arizona
signal a shift in the traditional taxi model (BBC, 2018; Sage, 2019).
Urban Environment Transformation:
Infrastructure Changes: AVs could reshape urban landscapes, requiring new infrastructure
such as AV-only lanes. Long-term planning may need adjustments to accommodate the
changing dynamics of transportation (Marshall and Davies, 2018).
5G Network Expansion: The widespread adoption of AVs will necessitate the significant
extension of 5G network coverage, impacting urban planning strategies (Khosravi, 2018).
Environmental Considerations:
Driving Behavior Shifts: The potential environmental benefits of reduced fuel usage with
self-driving cars might be counteracted by increased driving distances. AVs could make
long-distance travel more convenient, affecting overall driving behaviors and environmental
impact (Worland, 2016).

Legal and ethical responsibility


Ethical dilemmas in development
Trolley Dilemma Variation: A survey by the Open Roboethics initiative presented a
scenario where an autonomous car had to choose between saving its passenger or a child.
Most respondents preferred saving themselves, revealing a challenging ethical decision
(ORi, 2014a).
Passenger Input vs. Pre-programmed Settings: Balancing passenger input and pre-
programmed settings poses a dilemma. Allowing users to set ethical preferences in advance
may lead to potential harm, raising accountability concerns. Pre-programmed decisions by
manufacturers, on the other hand, face skepticism regarding user consent (Millar, 2016).
Legal Responsibility and Compensation:
Crash Responsibility: Determining legal responsibility for crashes caused by autonomous
vehicles controlled by algorithms remains a challenge. Courts need to establish clear
guidelines to allocate responsibility and ensure fair compensation for victims (Lin et al.,
2017).
Manufacturer Liability: The issue of unexpected costs for robot manufacturers and the
impact on investments is a critical legal concern. Striking a balance between fair
compensation for victims and maintaining trust in autonomous vehicles is crucial for public
acceptance (Lin et al., 2017).
Programming Ethical Approaches:
Uncertain Conditions: AVs face challenges in making ethical decisions in uncertain or 'no-
win' situations. The lack of legal guidance on ethical programming leaves questions about the
appropriate ethical approach to follow (Lin et al., 2017).
Shared Responsibility: Views on who should choose the ethical principles for AVs differ.
Loh and Loh argue for shared responsibility among engineers, drivers, and the autonomous
driving system. In contrast, Millar emphasizes user input, drawing parallels with the
importance of informed consent in medical decisions (Loh and Loh, 2017; Millar, 2016).
3.3.3 Case study: Warfare and weaponisation
Advancements in Military Technology:
Historical Context: Military technology has integrated partially autonomous systems since
World War II. However, recent strides in machine learning and AI represent a transformative
era in warfare automation.
Current Applications: AI is actively employed in military operations, ranging from satellite
imagery analysis to cyber defense. The full potential of AI in warfare is yet to be fully
realized (Allen and Chan, 2017).
Transformational Impact: Experts assert that AI has the potential to revolutionize warfare
as significantly as historical milestones like nuclear weapons, aircraft, computers, and
biotechnology.
Lethal Autonomous Weapons (LAWs):
Delegated Authority: Militaries increasingly entrust autonomous systems with authority,
indicating a potential AI-driven arms race. Russia's ambitious plan aims for 30% of its
combat power to comprise remote-controlled and autonomous robotic platforms by 2030.
Global AI Arms Race: The pursuit of AI technologies in the military domain suggests a
global arms race. While the U.S. Department of Defense imposes restrictions on autonomous
systems with lethal capabilities, other nations and non-state actors may not exercise similar
restraint.


Drone technologies
Standard military aircraft can cost more than US$100 million per unit; a high-quality
quadcopter Unmanned Aerial Vehicle, however, currently costs roughly US$1,000, meaning
that for the price of a single high-end aircraft, a military could acquire roughly 100,000
drones. Although current commercial drones have limited range, in the future they could
have ranges similar to those of ballistic missiles, rendering existing platforms obsolete.
Robotic assassination
Widespread availability of low-cost, highly-capable, lethal, and autonomous robots
could make targeted assassination more widespread and more difficult to attribute. Automatic
sniping robots could assassinate targets from afar.
Mobile-robotic-Improvised Explosive Devices
Emerging Threats:
Advanced IEDs: The proliferation of commercial robotic and autonomous vehicle
technologies presents a potential risk of creating more advanced Improvised Explosive
Devices (IEDs).
Remote-Controlled Platforms: As long-distance drone delivery and self-driving cars
become prevalent, there is concern about the ease of delivering explosives precisely over
great distances, posing a threat from non-state actors.
Machine Learning in Warfare:
Intelligent Virtual Assistant (IVA): Hallaq et al. (2017) highlight the use of AI, particularly
in the form of Intelligent Virtual Assistants (IVAs), in warfare scenarios. IVAs can analyze
satellite imagery, predict enemy intent, and provide a wealth of accumulated knowledge to
Commanding Officers (COs).
Legal and Ethical Concerns: The integration of AI in warfare raises significant legal and
ethical questions, particularly regarding adherence to International Humanitarian Law (IHL).
Concerns include potential violations of the principles of distinction, proportionality, and the
protection of civilians.
Lethal Autonomous Weapon Systems (LAWS):
IHL Standards: LAWS, capable of independently engaging targets, must adhere to IHL
principles. However, concerns exist about their ability to distinguish between combatants and
non-combatants and evaluate proportionality.
Human-Machine Decision-making: Debate surrounds the moral and ethical implications of
delegating life-or-death decisions to machines. Some argue that only humans should initiate
lethal force, emphasizing moral responsibility and human dignity.
Accountability in Autonomous Systems:
Responsibility: Determining accountability for the actions of autonomous systems raises
complex issues. Arguments suggest accountability should extend to both the individual who
programmed the AI and the commanding or supervising authority.
