
UNIT II

ETHICAL INITIATIVES AND FRAMEWORKS FOR AI

Ethical initiatives in the field of artificial intelligence:

 AI builds upon previous revolutions in ICT and computing and, as such, will face a number of similar ethical problems.
 While technology may be used for good, potentially it may be misused.
 We may excessively anthropomorphize and humanize AI, blurring the
lines between human and machine.
 The ongoing development of AI will bring about a new 'digital divide',
with technology benefiting some socioeconomic and geographic groups
more than others.
 Further, AI will have an impact on our biosphere and environment that is yet to be quantified.

3.1. INTERNATIONAL ETHICAL INITIATIVES


While official regulation remains scarce, many independent initiatives
have been launched internationally to explore these – and other – ethical
quandaries.

The initiatives explored in this section are outlined below.
3.2. ETHICAL HARMS AND CONCERNS TACKLED BY THESE
INITIATIVES
 All of the initiatives listed above agree that AI should be researched,
developed, designed, deployed, monitored, and used in an ethical manner
– but each has different areas of priority.
 This section analyses and groups the initiatives above by the type of issues they aim to address, and then outlines some of the proposed approaches and solutions to protect against harms.
A number of key issues emerge from the initiatives, which can be broadly split
into the following categories:
1. Human rights and well-being
2. Emotional harm
3. Accountability and responsibility
4. Security, privacy, accessibility and transparency
5. Safety and trust
6. Social harm and justice
7. Financial harm
8. Lawfulness and justice
9. Control and the ethical use or misuse of AI
10. Environmental harm and sustainability
11. Informed use
12. Existential risk

1. HUMAN RIGHTS AND WELL-BEING


 All initiatives adhere to the view that AI must not impinge on basic and
fundamental human rights, such as human dignity, security, privacy,
freedom of expression and information, protection of personal data,
equality, solidarity and justice.
1. To safeguard human well-being, defined as 'human satisfaction with
life and the conditions of life, as well as an appropriate balance
between positive and negative affect' (ibid), the IEEE suggest
prioritizing human well-being throughout the design phase, and using
the best and most widely-accepted available metrics to clearly
measure the societal success of an AI.
2. According to the Foundation for Responsible Robotics
 AI must be ethically developed with human rights in mind to achieve
their goal of 'responsible robotics', which relies upon proactive innovation
to uphold societal values like safety, security, privacy, and well-being.
 The Foundation engages with policymakers, organises and hosts events,
publishes consultation documents to educate policymakers and the public,
and creates public-private collaborations to bridge the gap between
industry and consumers, to create greater transparency.
 It calls for ethical decision-making right from the research and
development phase, greater consumer education, and responsible law-
and policymaking – made before AI is released and put into use.
3. The Future of Life Institute defines a number of principles, ethics,
and values for consideration in the development of AI, including the
need to design and operate AI in a way that is compatible with the
ideals of human dignity, rights, freedoms, and cultural diversity.
 This is echoed by the Japanese Society for AI Ethical
Guidelines, which places the utmost importance on AI being
realised in a way that is beneficial to humanity.
4. The Future Society's Law and Society Initiative emphasises that
human beings are equal in rights, dignity, and freedom to flourish, and
are entitled to their human rights.
5. The UNI Global Union strives to protect an individual's right to work.
Over half of the work currently done by people could be done
faster and more efficiently in an automated way, says the Union.
The Union states that we must ensure that AI serves people and
the planet, and both protects and increases fundamental human rights,
human dignity, integrity, freedom, privacy, and cultural and gender
diversity.

2. EMOTIONAL HARM
 What is it to be human? AI will interact with and have an impact on the
human emotional experience in ways that have not yet been quantified.
 Humans are susceptible to emotional influence both positively and
negatively, and 'affect' – how emotion and desire influence behaviour –
is a core part of intelligence.
 There are various ways in which AI could inflict emotional harm, including false intimacy, over-attachment, objectification and commodification of the body, and social or sexual isolation. These are covered by several of the aforementioned ethical initiatives, including the Foundation for Responsible Robotics and the Partnership on AI.
NUDGING:
Affective AI also opens up the possibility of deceiving and coercing its users. Researchers have defined 'nudging' as the act of an AI subtly modifying a user's behaviour by emotionally manipulating and influencing them through the affective system.
1. While this may be useful in some contexts – for example, supporting recovery from drug dependency or encouraging healthy eating – it could also trigger behaviours that worsen human health.
2. Systematic analyses must examine the ethics of affective design prior to deployment; users must be educated on how to recognise and distinguish between nudges; users must have an opt-in system for autonomous nudging systems; and vulnerable populations that cannot give informed consent, such as children, must be subject to additional protection (a minimal opt-in gating sketch is given after this list).
3. Other issues include technology addiction and emotional harm due to
societal or gender bias.
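
The opt-in and additional-protection requirements above can be expressed as a simple gating rule evaluated before any nudge is delivered. The sketch below is purely illustrative; the NudgePolicy class, its fields and the age threshold are hypothetical assumptions, not taken from any of the cited initiatives.

    from dataclasses import dataclass

    @dataclass
    class UserProfile:
        user_id: str
        age: int
        opted_in_to_nudging: bool   # explicit, revocable opt-in

    class NudgePolicy:
        """Hypothetical gate deciding whether an affective nudge may be delivered."""
        ADULT_AGE = 18   # assumed age of informed consent

        def may_nudge(self, user: UserProfile) -> bool:
            # Vulnerable populations that cannot give informed consent
            # (e.g. children) receive additional protection: never nudge.
            if user.age < self.ADULT_AGE:
                return False
            # Autonomous nudging must be strictly opt-in.
            return user.opted_in_to_nudging

    policy = NudgePolicy()
    print(policy.may_nudge(UserProfile("u1", age=34, opted_in_to_nudging=True)))   # True
    print(policy.may_nudge(UserProfile("u2", age=12, opted_in_to_nudging=True)))   # False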

3. ACCOUNTABILITY AND RESPONSIBILITY


The vast majority of initiatives mandate that AI must be auditable, in
order to assure that the designers, manufacturers, owners, and operators of AI
are held accountable for the technology or system's actions, and are thus
considered responsible for any potential harm it might cause.
According to the IEEE, this could be achieved by the courts clarifying issues of culpability and liability during the development and deployment phases where possible, so that those involved understand their obligations and rights, and by designers and developers taking into account the diversity of existing cultural norms among the various user groups.
1. The Future of Life Institute tackles the issue of accountability via its
Asilomar Principles, a list of 23 guiding principles for AI to follow in
order to be ethical in the short and long term.
Designers and builders of advanced AI systems are 'stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications'.
If an AI should make a mistake, it should also be possible to ascertain why.
2. The Partnership on AI also stresses the importance of accountability in terms of bias: we should be sensitive to the fact that assumptions and biases exist within data, and thus within systems built from these data, and strive not to replicate them – i.e. to be actively accountable for building fair, bias-free AI.

4. ACCESS AND TRANSPARENCY VS. SECURITY AND PRIVACY


A main concern over AI is its transparency, explicability, security,
reproducibility, and interpretability: is it possible to discover why and how a
system made a specific decision, or why and how a robot acted in the way it
did?
This is especially pressing in the case of safety-critical systems that may
have direct consequences for physical harm.
For example, in driverless cars or medical diagnosis systems: without transparency, users may struggle to understand the systems they are using – and their associated consequences – and it will be difficult to hold the relevant persons accountable and responsible.
 To address this, the IEEE propose developing new standards that
detail measurable and testable levels of transparency, so systems can
be objectively assessed for their compliance.
 This will likely take different forms for different stakeholders; a robot user may require a 'why-did-you-do-that' button, while a certification agency or accident investigator will require access to relevant algorithms in the form of an 'ethical black box' which provides failure transparency (a minimal decision-logging sketch is given below).
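
As an illustration of the 'ethical black box' idea, the following sketch records, for every decision an AI system makes, the inputs, the output, a short rationale and a timestamp in an append-only log. The class and field names are hypothetical; real standards of the kind the IEEE propose would specify far more detail.

    import json, time
    from typing import Any

    class EthicalBlackBox:
        """Append-only decision log supporting 'why-did-you-do-that' queries (illustrative)."""
        def __init__(self, path: str = "decision_log.jsonl"):
            self.path = path

        def record(self, inputs: dict, output: Any, rationale: str) -> None:
            entry = {
                "timestamp": time.time(),
                "inputs": inputs,
                "output": output,
                "rationale": rationale,   # human-readable explanation of the decision
            }
            with open(self.path, "a") as f:
                f.write(json.dumps(entry) + "\n")

        def last_decision(self) -> dict:
            # Answers the user's 'why did you do that?' for the most recent action.
            with open(self.path) as f:
                lines = f.readlines()
            return json.loads(lines[-1])

    box = EthicalBlackBox()
    box.record({"obstacle_distance_m": 0.4}, "stop", "Obstacle closer than 0.5 m safety threshold")
    print(box.last_decision()["rationale"])
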
AI systems require data to continually learn and develop their automated decision-making. These data are often personal and may be used to identify a particular individual's physical, digital, or virtual identity (i.e. they constitute personally identifiable information, PII).

5. SAFETY AND TRUST


 Where AI is used to supplement or replace human decision-making, there
is consensus that it must be safe, trustworthy, and reliable, and act with
integrity.
ETHICAL BLACK BOX:
 A device that can record information about said system to ensure its
accountability and transparency, but that also includes clear data on the
ethical consideration built into the system from the beginning.
1. The IEEE propose cultivating a 'safety mindset' among researchers, to 'identify and pre-empt unintended and unanticipated behaviors in their systems' and to develop systems which are 'safe by design'.
2. The Future of Life Institute's Asilomar principles indicate that all involved in developing and deploying AI should be mission-led, adopting the norm that AI 'should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization'. This approach would build public trust in AI, something that is key to its successful integration into society.
3. The Japanese Society for AI proposes that AI should act with integrity at all times, and that AI and society should earnestly seek to learn from and communicate with one another. 'Consistent and effective communication' will strengthen mutual understanding, says the Society, and '[contribute] to the overall peace and happiness of mankind'.
4. The Partnership on AI agrees, and strives to ensure AI is trustworthy and
to create a culture of cooperation, trust, and openness among AI scientists
and engineers.
5. The Institute for Ethical AI & Machine Learning also emphasises the
importance of dialogue; it ties together the issues of trust and privacy in
its eight core tenets, mandating that AI technologists communicate with
stakeholders about the processes.

6. SOCIAL HARM AND SOCIAL JUSTICE: INCLUSIVITY, BIAS, AND DISCRIMINATION

 AI development requires a diversity of viewpoints.
 Several organisations have established that AI systems must be in line with community viewpoints and align with social norms, values, ethics, and preferences, and that biases and assumptions must not be built into data or systems.

1. The IEEE suggest first identifying the social and moral norms of the specific community in which an AI will be deployed, and those around the specific task or service it will offer, and then designing the AI with the idea of 'norm updating' in mind, given that norms are not static and AI must change dynamically and transparently alongside culture.
2. Several initiatives – such as AI4All and the AI Now Institute – explicitly
advocate for fair, diverse, equitable, and non-discriminatory inclusion in
AI at all stages, with a focus on support for under-represented groups.

3. A set of ethical guidelines published by the Japanese Society for AI emphasises, among other considerations, the importance of a) contribution to humanity, and b) social responsibility.
4. The Foundation for Responsible Robotics includes a Commitment to
Diversity in its push for responsible AI; the Partnership on AI cautions
about the 'serious blind spots' of ignoring the presence of biases and
assumptions hidden within data.
5. Saidot aims to ensure that, although our social values are now 'increasingly mediated by algorithms', this mediation remains transparent and accountable.
6. The Future of Life Institute highlights a need for AI imbued with
human values of cultural diversity and human rights;
7. Institute for Ethical AI & Machine Learning includes 'bias evaluation'
for monitoring bias in AI development and production.

7. FINANCIAL HARM: ECONOMIC OPPORTUNITY AND EMPLOYMENT

 AI may disrupt the economy and lead to job losses or work disruption for many people, and will have an impact on workers' rights and displacement strategies as many types of work become automated (and vanish in the related business change).
Additionally, rather than just focusing on the number of jobs lost or gained,
traditional employment structures will need to be changed to mitigate the effects
of automation and take into account the complexities of employment.
1. The UNI Global Union call for multi-stakeholder ethical AI governance
bodies on global and regional levels, bringing together designers,
manufacturers, developers, researchers, trade unions, lawyers, CSOs,
owners, and employers.
2. The AI Now Institute works with diverse stakeholder groups to better
understand the implications that AI will have for labour and work,
including automation and early-stage integration of AI changing the
nature of employment and working conditions in various sectors.
3. The Future Society specifically asks how AI will affect the legal
profession: 'If AI systems are demonstrably superior to human attorneys
at certain aspects of legal work, what are the ethical and professional
implications for the practice of law?
RRI (RESPONSIBLE RESEARCH AND INNOVATION):
 'RRI is a transparent, interactive process by which societal actors and innovators become mutually responsive to each other with a view to the (ethical) acceptability, sustainability and societal desirability of the innovation process and its marketable products (in order to allow a proper embedding of scientific and technological advances in our society).'

8. LAWFULNESS AND JUSTICE


 Several initiatives address the need for AI to be lawful, equitable, fair,
just and subject to appropriate, pre-emptive governance and regulation.
 The many complex ethical problems surrounding AI translate directly and
indirectly into discrete legal challenges. How should AI be labelled: as a
product? An animal? A person? Something new?
1. The IEEE conclude that AI should not be granted any level of
'personhood', and that, while development, design and distribution of AI
should fully comply with all applicable international and domestic law,
there is much work to be done in defining and implementing the relevant
legislation.
Legal issues fall into a few categories:
legal status, governmental use (transparency, individual rights), legal
accountability for harm, and transparency, accountability, and verifiability.
The IEEE suggest that AI should remain subject to the applicable regimes of
property law; that stakeholders should identify the types of decisions that should
never be delegated to AI.

9. CONTROL AND THE ETHICAL USE – OR MISUSE – OF AI


With more sophisticated and complex new AI come more sophisticated and
complex possibilities for misuse.

Personal data may be used maliciously or for profit, systems are at risk of
hacking, and technology may be used exploitatively.

1. The IEEE suggests new ways of educating the public on ethics and
security issues, for example a 'data privacy' warning on smart devices that
collect personal data; delivering this education in scalable, effective
ways; and educating government, lawmakers, and enforcement agencies
surrounding these issues, so they can work collaboratively with citizens –
in a similar way to police officers providing safety lectures in schools –
and avoid fear and confusion.
Other issues include manipulation of behaviour and data.
 Humans must retain control over AI and oppose subversion. Most of the initiatives reviewed flag this as a potential issue.
 AI must also work for the good of humankind, must not exploit people,
and be regularly reviewed by human experts.

10.ENVIRONMENTAL HARM AND SUSTAINABILITY


The production, management, and implementation of AI must be sustainable and avoid environmental harm. This also ties in to the concept of well-being; a key recognised aspect of well-being is environmental, concerning the air, biodiversity, climate change, soil and water quality, and so on.
1. The IEEE (EAD, 2019) state that AI must do no harm to Earth's natural
systems or exacerbate their degradation, and contribute to realising
sustainable stewardship, preservation, and/or the restoration of Earth's
natural systems.
2. The UNI Global Union state that AI must put people and the planet first,
striving to protect and even enhance our planet's biodiversity and
ecosystems (UNI Global Union, n.d.).
3. The Foundation for Responsible Robotics identifies a number of
potential uses for AI in coming years, from agricultural and farming roles
to monitoring of climate change and protection of endangered species.

11.INFORMED USE: PUBLIC EDUCATION AND AWARENESS


 Members of the public must be educated on the use, misuse, and potential
harms of AI, via civic participation, communication, and dialogue with
the public.
 The issue of consent – and how much an individual may reasonably and
knowingly give – is core to this.
1. For example, the IEEE raise several instances in which consent is less clear-cut than might be ethical: what if one's personal data are used to make inferences they are uncomfortable with or unaware of?

 To remedy this, the IEEE suggest employee data impact assessments to deal with these corporate nuances and ensure that no data is collected without employee consent.
 Data must also be only gathered and used for specific, explicitly
stated, legitimate purposes, kept up-to-date, lawfully processed, and
not kept for a longer period than necessary.
2. To increase awareness and understanding of AI, undergraduate and
postgraduate students must be educated on AI and its relationship to
sustainable human development, say the IEEE.
 Specifically, curriculum and core competencies should be defined
and prepared; degree programmes focusing on engineering in
international development and humanitarian relief should be
exposed to the potential of AI applications; and awareness should
be increased of the opportunities and risks faced by Lower Middle
Income Countries in the implementation of AI in humanitarian
efforts across the globe.

3. Many initiatives focus on this, including the Foundation for Responsible Robotics, Partnership on AI, Japanese Society for AI Ethical Guidelines, Future Society and AI Now Institute; these and others maintain that clear, open and transparent dialogue between AI and society is key to the creation of understanding, acceptance, and trust.

12.EXISTENTIAL RISK
 According to the Future of Life Institute, the main existential
issue surrounding AI 'is not malevolence, but competence' – AI
will continually learn as they interact with others and gather data,
leading them to gain intelligence over time and potentially develop
aims that are at odds with those of humans.
1. AI also poses a threat in the form of autonomous weapons systems
(AWS). As these are designed to cause physical harm, they raise
numerous ethical quandaries.
2. The pursuit of AWS may lead to an international arms race and geopolitical instability; as such, the IEEE state that systems designed to act outside the boundaries of human control or judgement are unethical and violate fundamental human rights and legal accountability for weapons use.
3. Given their potential to seriously harm society, these concerns must be
controlled for and regulated pre-emptively, says the Foundation for
Responsible Robotics. Other initiatives that cover this risk explicitly
include the UNI Global Union and the Future of Life Institute.

Addressing Ethical Challenges in AI Development

Ethical Frameworks and Guidelines
Developing and adhering to comprehensive ethical frameworks and guidelines is crucial.
These frameworks should encompass principles of fairness, transparency, accountability, and
respect for human values.

Ethical AI Design
Integrating ethics into the design phase of AI systems is essential. This involves
multidisciplinary collaboration, including ethicists, policymakers, technologists, and end-
users, to identify and mitigate potential ethical issues.

Continuous Evaluation and Auditing


Regular evaluation and auditing of AI systems for ethical considerations are necessary. This
process involves assessing biases, transparency, data privacy, and the societal impact of AI
applications.
Education and Awareness
Raising awareness and providing education on AI ethics among developers, policymakers,
and the public is crucial. Understanding the ethical implications of AI fosters responsible
development and deployment practices.

Ethical Considerations When Designing AI Solutions


Artificial intelligence (AI) has the potential to revolutionize industries and reshape many aspects
of human life. Its capacity to automate processes, enhance decision-making, and uncover insights
from massive datasets promises numerous benefits. However, the responsible and ethical use of
AI is crucial to ensuring its positive impact. Without ethical safeguards, AI systems can
exacerbate inequalities, perpetuate bias, and cause unintended harm. This article explores the key
ethical considerations to keep in mind when designing AI solutions and how they can help create
more just, transparent, and inclusive systems.

Transparency and Explainability


One of the most critical ethical considerations in AI design is ensuring transparency and
explainability. AI systems are often seen as "black boxes," where users and stakeholders struggle
to understand how decisions are made. This lack of clarity can result in mistrust and reluctance to
adopt AI, especially in high-stakes areas like healthcare, law, or finance.
To address this, AI systems should be designed with mechanisms that allow stakeholders to
understand the rationale behind decisions and recommendations. Explainable AI (XAI)
technologies focus on making AI decision-making processes more understandable to humans.
This enhances transparency and ensures that AI can be held accountable for its actions.
Moreover, transparency helps detect bias in AI decision-making. By offering clear explanations,
stakeholders can assess whether the AI is making decisions based on ethically sound principles or
whether it is perpetuating harmful biases. Transparent AI systems increase accountability,
allowing developers, regulators, and users to spot errors or discriminatory outcomes more easily.
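
To make explainability concrete, the sketch below shows one very simple form of it: for a linear scoring model, each feature's contribution to a decision can be reported alongside the decision itself. This is an illustrative toy under assumed weights and feature names, not a reference to any particular XAI product; real systems typically use richer techniques such as SHAP values or counterfactual explanations.

    # Toy explainable scorer: a linear model whose every decision can be decomposed
    # into per-feature contributions (weight * value), giving a human-readable 'why'.
    WEIGHTS = {"income": 0.4, "existing_debt": -0.7, "years_employed": 0.2}
    THRESHOLD = 1.0

    def decide_and_explain(applicant: dict):
        contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
        score = sum(contributions.values())
        decision = "approve" if score >= THRESHOLD else "decline"
        # Sort features by how strongly they pushed the decision either way.
        explanation = [f"{feat}: {contrib:+.2f}"
                       for feat, contrib in sorted(contributions.items(),
                                                   key=lambda kv: abs(kv[1]), reverse=True)]
        return decision, explanation

    decision, why = decide_and_explain({"income": 3.0, "existing_debt": 1.5, "years_employed": 2.0})
    print(decision, why)   # the 'why-did-you-do-that' view for this applicant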

Fairness and Mitigating Bias


AI systems must be fair and free from bias to avoid perpetuating or exacerbating societal
inequalities. Bias in AI can stem from several sources, including the datasets used to train models
and the assumptions built into the algorithms. For example, training an AI model on historical
hiring data that reflects past discriminatory practices can result in biased recommendations that
disadvantage underrepresented groups.
To ensure fairness, AI developers must be vigilant in curating diverse, representative datasets.
This includes actively identifying and mitigating biases during the data collection process and
continuously refining models throughout their lifecycle. Bias detection tools, combined with
continuous monitoring and auditing, are essential to ensuring that AI systems do not inadvertently
reinforce harmful stereotypes or discriminatory practices.
Additionally, algorithmic fairness involves balancing competing ethical concerns, such as equal
treatment and equal opportunity. Achieving fairness often requires trade-offs, and developers
must carefully weigh these decisions while ensuring that AI systems serve diverse populations
equitably.
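
Bias detection tools of the kind mentioned above often start from simple group-level metrics. The sketch below computes two common ones, the demographic parity difference and the disparate impact ratio, over a model's decisions; the decisions and group labels are made up for illustration.

    def positive_rate(decisions, groups, group):
        # Fraction of members of `group` who received a positive decision.
        selected = [d for d, g in zip(decisions, groups) if g == group]
        return sum(selected) / len(selected)

    def fairness_metrics(decisions, groups, group_a, group_b):
        rate_a = positive_rate(decisions, groups, group_a)
        rate_b = positive_rate(decisions, groups, group_b)
        return {
            "demographic_parity_difference": rate_a - rate_b,   # ideally close to 0
            "disparate_impact_ratio": rate_b / rate_a,          # ideally close to 1
        }

    # Hypothetical hiring-model outputs: 1 = shortlisted, 0 = rejected.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    print(fairness_metrics(decisions, groups, "A", "B"))
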
Privacy and Data Protection
In an era of massive data collection, safeguarding user privacy is of paramount importance when
developing AI systems. AI solutions often rely on large datasets, which can include sensitive
personal information. Without proper safeguards, these systems can pose significant privacy
risks, leading to unauthorized data access or misuse.
AI developers must ensure that data is collected and processed in compliance with data protection
regulations, such as the General Data Protection Regulation (GDPR) in Europe or the California
Consumer Privacy Act (CCPA) in the United States. This includes implementing techniques like
data anonymization, which ensures that personally identifiable information (PII) is stripped from
datasets before processing.
Moreover, AI systems should be designed with privacy by default and privacy by design
principles, meaning that privacy protections are integrated into every stage of AI development.
Ensuring data security is also critical, requiring encryption and robust access controls to protect
against data breaches or cyberattacks.
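
A minimal illustration of the anonymization step described above: direct identifiers are dropped or replaced with salted hashes before records are handed to a training pipeline. The field names are hypothetical, and real GDPR or CCPA compliance involves much more (for example, assessing re-identification risk), but the basic pattern looks like this.

    import hashlib

    DIRECT_IDENTIFIERS = {"name", "email", "phone"}   # dropped entirely
    PSEUDONYMISED = {"patient_id"}                    # replaced with a salted hash
    SALT = "replace-with-a-secret-salt"               # keep out of source control

    def anonymise(record: dict) -> dict:
        out = {}
        for field, value in record.items():
            if field in DIRECT_IDENTIFIERS:
                continue                              # strip PII outright
            if field in PSEUDONYMISED:
                out[field] = hashlib.sha256((SALT + str(value)).encode()).hexdigest()[:16]
            else:
                out[field] = value                    # non-identifying attributes kept
        return out

    raw = {"patient_id": 1042, "name": "Jane Doe", "email": "jane@example.com",
           "age": 57, "diagnosis": "type 2 diabetes"}
    print(anonymise(raw))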

Accountability and Responsibility


Determining who is accountable for the development, deployment, and oversight of AI systems is
another key ethical issue. AI solutions, by their nature, involve multiple stakeholders, including
developers, businesses, governments, and end users. When AI systems malfunction or produce
harmful outcomes, the question of responsibility can become murky.
Ethical AI design requires that clear accountability frameworks are in place. This includes
defining roles and responsibilities for developers, data scientists, and decision-makers, ensuring
that there is a transparent chain of responsibility for managing AI systems. Developers should
document and make public the decision-making processes and ethical considerations involved in
creating the AI system, enabling external scrutiny. Moreover, AI systems should be subject to
regular ethical reviews and risk assessments to ensure their continued alignment with societal
values. This helps identify potential risks or unintended consequences early in the development
process and ensures that AI systems are updated or discontinued when necessary.

Human Oversight and Control


Despite the sophistication of AI, it is crucial to maintain human oversight and the ability to
intervene in the decisions AI systems make. AI should not replace human judgment entirely,
particularly in areas where decisions carry moral, ethical, or social consequences, such as
healthcare, criminal justice, or education.
Human-in-the-loop (HITL) systems allow humans to review and override AI decisions when
necessary. This ensures that AI serves as an aid to human decision-making rather than a
replacement. AI systems should be designed to augment human capabilities, enhancing
productivity and decision-making, but not diminishing the role of human responsibility and
judgment.
Maintaining human control is essential for maintaining trust in AI systems, particularly in high-
risk environments. Without this safeguard, the delegation of decision-making power to machines
risks eroding the role of human intuition, empathy, and ethical reasoning.
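
The human-in-the-loop pattern can be sketched as a simple routing rule: low-risk, high-confidence outputs pass through automatically, while everything else is queued for a human reviewer who may accept or override the suggestion. The domain list, thresholds and function names below are illustrative assumptions rather than a standard API.

    HIGH_RISK_DOMAINS = {"healthcare", "criminal_justice", "education"}
    CONFIDENCE_THRESHOLD = 0.9

    def route_decision(domain: str, ai_suggestion: str, confidence: float, human_review) -> str:
        """Return the final decision, escalating to a human where oversight is required."""
        needs_human = domain in HIGH_RISK_DOMAINS or confidence < CONFIDENCE_THRESHOLD
        if needs_human:
            # The reviewer sees the AI suggestion but is free to override it.
            return human_review(ai_suggestion)
        return ai_suggestion

    # Example: a reviewer callback that overrides a borderline recommendation.
    final = route_decision("healthcare", "discharge patient", 0.97,
                           human_review=lambda suggestion: "keep for observation")
    print(final)   # 'keep for observation' – the human decision prevails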

Safety and Reliability


The safety and reliability of AI systems are vital to ensuring that they do not cause harm. AI
safety involves thoroughly testing systems to ensure they behave predictably and do not produce
unintended outcomes. This is particularly important in areas like autonomous vehicles, healthcare
diagnostics, and financial systems, where errors can have significant real-world consequences.
Regular monitoring, updates, and security assessments are essential for maintaining AI systems’
safety over time. The rapid pace of technological advancement means that AI systems must
continually evolve to respond to new challenges, such as cybersecurity threats or changes in
regulatory requirements.

Ethical Use of AI
Considerations of how AI is used and its impact on society must guide development. AI
applications should align with ethical standards, respect human rights, and contribute
positively to societal well-being.

Human-Centric Approach
Maintaining a human-centric approach in AI development involves prioritizing human
values, well-being, and autonomy. Human oversight and control over AI systems should be
paramount, ensuring that AI augments human capabilities rather than replacing or dictating
them.

Social and Ethical Impact


The broader social impact of AI must also be considered when designing AI solutions. AI has the
potential to transform industries, but it can also disrupt labor markets, amplify inequalities, and
challenge existing social norms. Developers must engage with a wide range of stakeholders,
including ethicists, sociologists, policymakers, and the communities affected by AI, to assess
these impacts.
For example, the deployment of AI in automated decision-making systems, such as loan
approvals or criminal sentencing, can have profound effects on people's lives. AI developers need
to anticipate and mitigate any potential harms that could arise from the widespread use of their
systems. Ensuring inclusion and diverse representation in AI development teams is also key to
understanding and addressing these complex social dynamics.

Job Displacement
The advancement of AI automation has the potential to replace human jobs, resulting in
widespread unemployment and exacerbating economic inequalities. Conversely, some argue
that while AI will replace knowledge workers – just as robots are replacing manual laborers –
AI has the potential to create far more jobs than it destroys. Addressing the impacts of job
displacement requires proactive measures such as retraining programs and policies that
facilitate a just transition for affected workers, as well as far-reaching social and economic
support systems.

Autonomous Weapons
Ethical concerns arise with the development of AI-powered autonomous weapons. Questions
of accountability, the potential for misuse, and the loss of human control over life-and-death
decisions necessitate international agreements and regulations to govern the use of such
weapons. Ensuring responsible deployment becomes essential to prevent catastrophic
consequences.

Addressing the ethical issues surrounding AI requires collaboration among technologists,
policymakers, ethicists, and society at large. Establishing robust regulations, ensuring
transparency in AI systems, promoting diversity and inclusivity in development, and
fostering ongoing discussions are integral to responsible AI deployment. By proactively
engaging with these concerns, we can harness the incredible potential of AI while upholding
ethical principles to shape a future where socially responsible AI is the norm.

Key principles for developing ethical AI systems


AI researchers have identified a handful of principles that can help guide the development of
ethical AI. These principles are not yet legally enforceable, but they can still act as critical
guideposts as AI creators navigate this new frontier.

 Transparency and explainability: AI models should be transparent, and their decisions explainable. People affected by an AI system should be able to understand why it made a particular decision.
 Fairness and non-discrimination: Artificial intelligence should treat all individuals fairly,
avoiding biases that could lead to discriminatory outcomes. This includes both explicit and
unconscious bias, which is often embedded in the data used to train an AI model.
 Privacy and data protection: AI tools must respect user privacy and personal data. This
includes not only securing data from unauthorized access, but also respecting a user's right to
control how their data is used.

Ethical data sourcing and management


Sourcing with integrity

Data is the backbone of any AI model—meaning ethical data sourcing is critical.

Sourcing data ethically means obtaining data in a way that respects individuals' privacy,
consent, and applicable data rights. While ethical data sourcing helps to maintain an AI
system's integrity and public trust, it can also mitigate potential legal risks.

Irresponsible practices like inadequate data security or violation of privacy rights can erode
public trust, cause data breaches, damage the reputation of the organization, and lead to legal
repercussions.

Managing data lifecycle

Proper data management for AI tools involves secure storage, controlled access, and
regulated deletion practices.
Data should be properly secured, employing encryption methods and firewall systems to
prevent unauthorized access or breaches. Access to data should be limited to necessary
personnel, with a system for tracking who has accessed the data and for what purpose.

Additionally, a clear data deletion policy should be implemented. Once data has outlived its
utility or an individual requests that their data is deleted, it should be permanently removed to
maintain privacy and respect individual rights.
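
The consent, access-control and deletion practices described above can be prototyped as a small data-store wrapper that checks consent before release, logs every access, and purges records once their retention period expires or the subject requests erasure. Everything here – class names, fields, the one-year retention period – is a hypothetical sketch, not a compliance tool.

    import time

    RETENTION_SECONDS = 60 * 60 * 24 * 365          # assumed one-year retention policy

    class ManagedDataStore:
        def __init__(self):
            self._records = {}                      # record_id -> {"data", "consent", "stored_at"}
            self.access_log = []                    # who accessed what, and why

        def store(self, record_id, data, consent_given: bool):
            self._records[record_id] = {"data": data, "consent": consent_given,
                                        "stored_at": time.time()}

        def read(self, record_id, accessor: str, purpose: str):
            rec = self._records.get(record_id)
            if rec is None or not rec["consent"]:
                raise PermissionError("no record, or consent not given for this data")
            self.access_log.append({"record": record_id, "by": accessor,
                                    "purpose": purpose, "at": time.time()})
            return rec["data"]

        def purge_expired_or_on_request(self, erase_requests=()):
            now = time.time()
            for rid in list(self._records):
                expired = now - self._records[rid]["stored_at"] > RETENTION_SECONDS
                if expired or rid in erase_requests:
                    del self._records[rid]          # permanent removal

    store = ManagedDataStore()
    store.store("r1", {"age": 41}, consent_given=True)
    print(store.read("r1", accessor="research-team", purpose="model training"))
    store.purge_expired_or_on_request(erase_requests={"r1"})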

Global perspectives on the ethics of artificial intelligence


International standards and guidelines

Many countries and international organizations are recognizing the importance of establishing ethical guidelines for AI development—formulating their own policies and
recommendations for ethical AI.

For instance, the European Union (EU) has proposed a framework that emphasizes
transparency, accountability, and protection of individual rights. Meanwhile, countries
like Singapore and Canada have published their own AI ethics guidelines, emphasizing
principles of fairness, accountability, and human-centric values.

At the global level, UNESCO has released draft recommendations on the Ethics of
Artificial Intelligence—emphasizing the need for a human-centered approach to AI that
focuses on human rights, cultural diversity, and fairness. It also stresses the importance of
transparency, accountability, and the need for AI to be understandable and controllable by
human beings.

While the specifics may vary, the global consensus leans towards a human-centric approach
that stresses transparency, accountability, and the protection of individual rights.

Collaboration and consensus

As AI technologies continue to permeate international borders, fostering global collaboration and consensus on the ethics of artificial intelligence is crucial. It’s essential to have
standardized, universally adopted ethical guidelines to ensure the responsible use of AI across
all nations.

These globally recognized standards can help bridge cultural and societal differences, while
establishing a common ground for the ethical use and development of AI. Such an
international approach not only promotes the responsible development and use of AI
technologies, but also fosters trust, cooperation, and mutual understanding among nations.

Practical implementation of AI ethics


From theory to practice

Translating ethical principles into actionable guidelines is key to realizing ethical AI. This
involves integrating ethical considerations into every stage of the AI lifecycle, from initial
design to deployment, to monitoring.

Implementing ethical principles begins at the conceptualization and design stage. AI developers should incorporate ethical considerations from the start, ensuring their AI code is
designed to be fair, transparent, and respectful of user privacy.

During the development phase, it’s essential to source and manage data ethically. This
involves obtaining data sets responsibly, ensuring secure storage, and managing its lifecycle
properly.

Once the AI system is deployed, its performance and ethical behavior should be consistently
monitored. Continuous auditing can help identify any ethical issues or biases that arise and
address them promptly.

Additionally, clear communication about how the AI works, its limitations, and the data it
uses will help ensure transparency and maintain user trust. This can be accomplished through
comprehensive, user-friendly documentation and, where appropriate, interfaces that allow
users to review and understand the AI’s decisions.

Lastly, it's crucial to have an accountability framework in place, so there are clear lines of
responsibility if the AI system fails or causes harm. This is a helpful way to support both
internal and legal accountability.

By integrating these steps into the development process, ethical principles can be translated
into practical, actionable guidelines.

Case studies: AI ethics in practice


Google's AI Principles, first published in 2018, serve as an ethical framework to guide the
responsible development and use of AI across the company's products and services. These
principles emphasize the social benefits of AI, noting potential transformative impacts in
fields like health care, security, energy, transportation, manufacturing, and entertainment.

Google's approach to implementing these principles involves a combination of education programs, AI ethics reviews, and technical tools. Furthermore, the company collaborates with
NGOs, industry partners, academics, and ethicists throughout the product development
process.

Microsoft’s AI Ethics

Microsoft's approach to AI ethics is guided by six key principles: accountability, inclusiveness, reliability and safety, fairness, transparency, and privacy and security.
These principles provide internal guidance on how to design, build, and test AI models
responsibly. The company also proactively establishes guardrails to anticipate and
mitigate AI risks, while maximizing benefits.

Furthermore, Microsoft reviews its AI systems to identify those that may have an adverse
impact on people, organizations, and society, and applies additional oversight to these
systems.

IBM’s Trustworthy AI

IBM is recognized as a leader in the field of trustworthy AI, with a focus on ethical principles
and practices in its use of technology. The company has developed a Responsible Use of
Technology framework to guide its decision-making and governance processes, fostering a
culture of responsibility and trust.

Trustworthiness in AI, according to IBM, involves continuous monitoring and frequent validation of AI models to ensure they can be trusted by various stakeholders. IBM's
approach to trustworthy AI also emphasizes trust in data, models, and processes.

The World Economic Forum has highlighted IBM's efforts in a case study, providing
practical resources for organizations to operationalize ethics in their use of technology.

AI Principles and Guidelines


The Principles of Artificial Intelligence Ethics for the Intelligence Community are intended to guide personnel on whether and how to develop and use AI, including machine learning, in furtherance of the IC's (Intelligence Community) mission.

Respect the Law and Act with Integrity


We will employ AI in a manner that respects human dignity, rights, and
freedoms. Our use of AI will fully comply with applicable legal authorities and
with policies and procedures that protect privacy, civil rights, and civil liberties.

Transparent and Accountable


We will provide appropriate transparency to the public and our customers
regarding our AI methods, applications, and uses within the bounds of security,
technology, and releasability by law and policy, and consistent with the
Principles of Intelligence Transparency for the IC. We will develop and employ
mechanisms to identify responsibilities and provide accountability for the use of
AI and its outcomes.

Objective and Equitable


Consistent with our commitment to providing objective intelligence, we will
take affirmative steps to identify and mitigate bias.

Human-Centered Development and Use


We will develop and use AI to augment our national security and enhance our
trusted partnerships by tempering technological guidance with the application of
human judgment, especially when an action has the potential to deprive
individuals of constitutional rights or interfere with their free exercise of civil
liberties.

Secure and Resilient


We will develop and employ best practices for maximizing reliability, security,
and accuracy of AI design, development, and use. We will employ security best
practices to build resilience and minimize potential for adversarial influence.

Informed by Science and Technology


We will apply rigor in our development and use of AI by actively engaging both
across the IC and with the broader scientific and technology communities to
utilize advances in research and best practices from the public and private
sector.

Knowledge and behaviour: the 10 principles of ethical AI

The ten core principles of ethical AI enjoy broad consensus for a reason: they align with
globally recognized definitions of fundamental human rights, as well as with multiple
international declarations, conventions and treaties. The first two principles can help you
acquire the knowledge that can allow you to make ethical decisions for your AI. The next
eight can help guide those decisions.

1. Interpretability. AI models should be able to explain their overall decision-making process and, in high-risk cases, explain how they made specific predictions or chose certain actions. Organisations should be transparent about what algorithms are making what decisions on individuals using their own data.
2. Reliability and robustness. AI systems should operate within design parameters and make
consistent, repeatable predictions and decisions.
3. Security. AI systems and the data they contain should be protected from cyber threats —
including AI tools that operate through third parties or are cloud-based.
4. Accountability. Someone (or some group) should be clearly assigned responsibility for the
ethical implications of AI models’ use — or misuse.
5. Beneficiality. Consider the common good as you develop AI, with particular attention to
sustainability, cooperation and openness.
6. Privacy. When you use people’s data to design and operate AI solutions, inform individuals
about what data is being collected and how that data is being used, take precautions to protect
data privacy, provide opportunities for redress and give the choice to manage how it’s used.
7. Human agency. For higher levels of ethical risk, enable more human oversight over and
intervention in your AI models’ operations.
8. Lawfulness. All stakeholders, at every stage of an AI system’s life cycle, must obey the law
and comply with all relevant regulations.
9. Fairness. Design and operate your AI so that it will not show bias against groups or
individuals.
10. Safety. Build AI that is not a threat to people’s physical safety or mental integrity.

AI principles into action: context and traceability

A top challenge to navigating these ten principles is that they often mean different things in
different places — and to different people. The laws a company has to follow in the US, for
example, are likely different than those in China. In the US they may also differ from one
state to another. How your employees, customers and local communities define the common
good (or privacy, safety, reliability or most of the ethical AI principles) may also differ.

To put these ten principles into practice, then, you may want to start by contextualising them:
Identify your AI systems’ various stakeholders, then find out their values and discover any
tensions and conflicts that your AI may provoke. You may then need discussions to
reconcile conflicting ideas and needs.
When all your decisions are underpinned by human rights and your values, regulators,
employees, consumers, investors and communities may be more likely to support you — and
give you the benefit of the doubt if something goes wrong.

To help resolve these possible conflicts, consider explicitly linking the ten principles to
fundamental human rights and to your own organisational values. The idea is to create
traceability in the AI design process: for every decision with ethical implications that you
make, you can trace that decision back to specific, widely accepted human rights and your
declared corporate principles.
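
Traceability of this kind can be supported by something as simple as a structured record that links each ethically significant design decision to the principles, human rights, and organisational values it relies on. The schema below is a hypothetical sketch of such a record, not an established standard.

    from dataclasses import dataclass, field

    @dataclass
    class DesignDecision:
        decision: str                                     # what was decided
        rationale: str                                    # why it was decided
        principles: list = field(default_factory=list)    # e.g. 'Privacy', 'Fairness'
        human_rights: list = field(default_factory=list)  # e.g. a UDHR article
        org_values: list = field(default_factory=list)    # declared corporate principles

    decision_register = [
        DesignDecision(
            decision="Exclude postcode from the credit-scoring feature set",
            rationale="Postcode acts as a proxy for ethnicity in the training data",
            principles=["Fairness", "Lawfulness"],
            human_rights=["Non-discrimination (UDHR Art. 2)"],
            org_values=["Treat customers equitably"],
        ),
    ]

    # Trace any decision back to the principles and rights it rests on.
    for d in decision_register:
        print(d.decision, "->", d.principles, d.human_rights)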

Artificial Intelligence Ethics Framework


AI should:
• Be used when it is an appropriate means to achieve a defined purpose after evaluating the potential risks;
• Be used in a manner consistent with respect for individual rights and liberties of affected individuals, and use data obtained lawfully and consistent with legal obligations and policy requirements;
• Incorporate human judgment and accountability at appropriate stages to address risks across the lifecycle of the AI and inform decisions appropriately;
• Identify, account for, and mitigate potential undesired bias, to the greatest extent practicable without undermining its efficacy and utility;
• Be tested at a level commensurate with foreseeable risks associated with the use of the AI;
• Maintain accountability for iterations, versions, and changes made to the model;
• Document and communicate the purpose, limitation(s), and design outcomes;
• Use explainable and understandable methods, to the extent practicable, so that users, overseers, and the public, as appropriate, understand how and why the AI generated its outputs;
• Be periodically reviewed to ensure the AI continues to further its purpose and identify issues for resolution.

Identify who will be accountable for the AI and its effects at each stage and across its lifecycle,
including responsibility for maintaining records created. Identifying and addressing risk is best
achieved by involving appropriate stakeholders. As such, consumers, technologists, developers,
mission personnel, risk management professionals, civil liberties and privacy officers, and legal
counsel should utilize this framework collaboratively, each leveraging their respective experiences,
perspectives, and professional skills.

CASE STUDIES
3.3.1. CASE STUDY: HEALTHCARE ROBOTS
 Artificial Intelligence and robotics are rapidly moving into the field
of healthcare and will increasingly play roles in diagnosis and
clinical treatment.
 For example, currently or in the near future, robots will help in the diagnosis of patients, the performance of simple surgeries, and the monitoring of patients' health and mental wellness in short- and long-term care facilities. They may also provide basic physical interventions, work as companion carers, and remind patients to take their medication. In medical image diagnostics, machine learning has been proven to match or even surpass our ability to detect illnesses.
1. Safety
 The most important ethical issue arising from the growth of AI and
robotics in healthcare is that of safety and avoidance of harm.
 It is vital that robots should not harm people, and that they should be safe
to work with. This point is especially important in areas of healthcare that
deal with vulnerable people, such as the ill, elderly, and children.
 Digital healthcare technologies offer the potential to improve the accuracy of diagnosis and treatments, but to thoroughly establish a technology's long-term safety and performance, investment in clinical trials is required.

2. User understanding
The correct application of AI by a healthcare professional is important to ensure
patient safety.
'THE DA VINCI' ROBOT
 The precise surgical robotic assistant 'the da Vinci' has proven a useful tool in minimizing surgical recovery time, but it requires a trained operator.
 It is important for users to trust the AI presented but to be aware of each
tool's strengths and weaknesses, recognising when validation is
necessary. For instance, a generally accurate machine learning study to
predict the risk of complications in patients with pneumonia erroneously
considered those with asthma to be at low risk.
 However, it's questionable to what extent individuals need to understand
how an AI system arrived at a certain prediction in order to make
autonomous and informed decisions.
 Even if an in-depth understanding of the mathematics is made obligatory,
the complexity and learned nature of machine learning algorithms often
prevent the ability to understand how a conclusion has been made from a
dataset — a so called 'black box' .

Data protection
 Personal medical data needed for healthcare algorithms may be at risk.
 For instance, there are worries that data gathered by fitness trackers might
be sold to third parties, such as insurance companies, who could use those
data to refuse healthcare coverage.
 Hackers are another major concern, as providing adequate security for
systems accessed by a range of medical personnel is problematic.
 Clear frameworks for how healthcare staff and researchers use data, such as genomics, in a way that safeguards patient confidentiality are necessary to establish public trust and enable advances in healthcare algorithms.
Legal responsibility
 Although AI promises to reduce the number of medical mishaps, when
issues occur, legal liability must be established.
 If equipment can be proven to be faulty then the manufacturer is liable,
but it is often tricky to establish what went wrong during a procedure and
whether anyone, medical personnel or machine, is to blame.
 For instance, there have been lawsuits against the da Vinci surgical
assistant, but the robot continues to be widely accepted.
 For now, AI is used as an aide for expert decisions, and so experts remain
the liable party in most cases.
Bias
 Non-discrimination is one of the fundamental values of the EU, but machine learning algorithms are trained on datasets that often have proportionally less data available about minorities, and as such can be biased.
 This can mean that algorithms trained to diagnose conditions are less likely to be accurate for patients from minority ethnic groups; for instance, in the dataset used to train a model for detecting skin cancer, less than 5 percent of the images were from individuals with dark skin, presenting a risk of misdiagnosis for people of colour (a simple representation audit of this kind is sketched at the end of this subsection).
 To ensure the most accurate diagnoses are presented to people of all
ethnicities, algorithmic biases must be identified and understood.
 Even with a clear understanding of model design this is a difficult task
because of the aforementioned 'black box' nature of machine learning.
However, various codes of conduct and initiatives have been introduced
to spot biases earlier.
 For instance, The Partnership on AI, an ethics-focused industry group, was launched by Google, Facebook, Amazon, IBM and Microsoft – although, worryingly, this board is not very diverse.
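
Spotting the kind of under-representation described above (for example, fewer than 5 percent of skin-cancer training images showing dark skin) can start with a simple audit of group proportions in the training set. The sketch below is a generic illustration with made-up labels and an assumed minimum share, not a reference to any specific dermatology dataset.

    from collections import Counter

    def representation_report(group_labels, minimum_share=0.10):
        """Report each group's share of the dataset and flag under-represented groups."""
        counts = Counter(group_labels)
        total = sum(counts.values())
        report = {}
        for group, n in counts.items():
            share = n / total
            report[group] = {"share": round(share, 3),
                             "under_represented": share < minimum_share}
        return report

    # Hypothetical skin-tone annotations for a training set of images.
    labels = ["light"] * 960 + ["dark"] * 40
    print(representation_report(labels))
    # e.g. {'light': {'share': 0.96, ...}, 'dark': {'share': 0.04, 'under_represented': True}}
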
Equality of access
 Digital health technologies, such as fitness trackers and insulin pumps,
provide patients with the opportunity to actively participate in their own
healthcare.
 Some hope that these technologies will help to redress health inequalities
caused by poor education, unemployment, and so on. However, there is a
risk that individuals who cannot afford the necessary technologies or do
not have the required 'digital literacy' will be excluded, so reinforcing
existing health inequalities.
Quality of care
 'There is remarkable potential for digital healthcare technologies to
improve accuracy of diagnoses and treatments, the efficiency of care, and
workflow for healthcare professionals'.
 If introduced with careful thought and guidelines, companion and care
robots, for example, could improve the lives of the elderly, reducing their
dependence, and creating more opportunities for social interaction.

EXAMPLE :
 Imagine a home-care robot that could: remind you to take your
medications; fetch items for you if you are too tired or are already in bed;
perform simple cleaning tasks; and help you stay in contact with your
family, friends and healthcare provider via video link.
 Human interaction is particularly important for older people, as research
suggests that an extensive social network offers protection against
dementia.
 At present, robots are far from being real companions. Although they can
interact with people, and even show simulated emotions, their
conversational ability is still extremely limited, and they are no
replacement for human love and attention.
Carebots
 A number of 'carebots' are designed for social interactions and are often
touted to provide an emotional therapeutic role.
 For instance, care homes have found that a robotic seal pup's animal-like
interactions with residents brightens their mood, decreases anxiety and
actually increases the sociability of residents with their human caregivers.
Deception
 However, the line between reality and imagination is blurred for dementia
patients, so is it dishonest to introduce a robot as a pet and encourage a
social-emotional involvement? And if so, is if morally justifiable?
 Companion robots and robotic pets could alleviate loneliness amongst
older people, but this would require them believing, in some way, that a
robot is a sentient being who cares about them and has feelings — a
fundamental deception.

EXAMPLE :
 'The fact that our parents, grandparents and children might say 'I love
you' to a robot who will say 'I love you' in return, does not feel
completely comfortable; it raises questions about the kind of authenticity
we require of our technology'.
 For an individual to benefit from owning a robot pet, they must continually delude themselves about the real nature of their relationship with the 'animal'. What's more, encouraging elderly people to interact with robot toys has the effect of infantilising them.
Autonomy
 It's important that healthcare robots actually benefit the patients
themselves, and are not just designed to reduce the care burden on the rest
of society — especially in the case of care and companion AI.
 Robots could empower disabled and older people and increase their
independence; in fact, given the choice, some might prefer robotic over
human assistance for certain intimate tasks such as toileting or bathing.
 Robots could be used to help elderly people live in their own homes for
longer, giving them greater freedom and autonomy. However, how much
control, or autonomy, should a person be allowed if their mental
capability is in question? If a patient asked a robot to throw them off the
balcony, should the robot carry out that command?
Liberty and privacy
 As with many areas of AI technology, the privacy and dignity of users need to be carefully considered when designing healthcare service and companion robots.
 Working in people's homes means that robots will be privy to private
moments such as bathing and dressing; if these moments are recorded,
who should have access to the information, and how long should
recordings be kept?
 The issue becomes more complicated if an elderly person's mental state
deteriorates and they become confused — someone with Alzheimer's
could forget that a robot was monitoring them, and could perform acts or
say things thinking that they are in the privacy of their own home.
 Home-care robots need to be able to balance their user's privacy and
nursing needs, for example by knocking and awaiting an invitation before
entering a patient's room, except in a medical emergency.
 To ensure their charge's safety, robots might sometimes need to act as
supervisors, restricting their freedoms.
EXAMPLE
 A robot could be trained to intervene if the cooker was left on, or the bath
was overflowing.
 Robots might even need to restrain elderly people from carrying out
potentially dangerous actions, such as climbing up on a chair to get
something from a cupboard.
 Smart homes with sensors could be used to detect that a person is attempting to leave their room, and lock the door, or call staff — but in so doing the elderly person would effectively be imprisoned.
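EXAMPLE (illustrative Python sketch):
The knock-and-wait behaviour mentioned above can be pictured as a very simple decision rule with an emergency override. The following minimal sketch is illustrative only; the function name and inputs are hypothetical and are not drawn from any real care-robot system.

# Illustrative sketch of a privacy-respecting room-entry rule for a home-care robot.
# All names and parameters here are hypothetical.
def decide_entry(medical_emergency: bool, invitation_received: bool) -> str:
    """Return the action a care robot should take before entering a private room."""
    if medical_emergency:
        # Nursing need overrides privacy: enter immediately.
        return "enter_immediately"
    if invitation_received:
        # The resident has given consent, so the robot may enter.
        return "enter"
    # Default: respect privacy by knocking and waiting for an invitation.
    return "knock_and_wait"

# Routine check with no invitation yet: the robot knocks and waits.
print(decide_entry(medical_emergency=False, invitation_received=False))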
Moral agency
 Robots do not have the capacity for ethical reflection or a moral basis for
decision-making, and thus humans must currently hold ultimate control
over any decision-making.
EXAMPLE :
 An example of ethical reasoning in a robot can be found in the 2004
dystopian film 'I, Robot', where Will Smith's character disagreed with
how the robots of the fictional time used cold logic to save his life over
that of a child's.
 If more automated healthcare is pursued, then the question of moral
agency will require closer attention.
 Ethical reasoning is being built into robots, but moral responsibility is
about more than the application of ethics — and it is unclear whether
robots of the future will be able to handle the complex moral issues in
healthcare.
Trust
'Psychology research shows people mistrust those who make moral decisions by calculating costs and benefits — like computers do'.
1. Firstly, doctors are explicitly certified and licensed to practice medicine,
and their license indicates that they have specific skills, knowledge, and
values such as 'do no harm'.
 If a robot replaces a doctor for a particular treatment or diagnostic
task, this could potentially threaten patient-doctor trust, as the patient
now needs to know whether the system is appropriately approved or
'licensed' for the functions it performs.
2. Secondly, patients trust doctors because they view them as paragons of
expertise. If doctors were seen as 'mere users' of the AI, we would expect
their role to be downgraded in the public's eye, undermining trust.
3. Thirdly, a patient's experiences with their doctor are a significant driver
of trust. If a patient has an open line of communication with their doctor,
and engages in conversation about care and treatment, then the patient
will trust the doctor.
Employment replacement
 As in other industries, there is a fear that emerging technologies may threaten employment; for instance, there are carebots now available that can perform up to a third of nurses' work.
 Despite these fears, the NHS' Topol Review concluded that 'these
technologies will not replace healthcare professionals but will
enhance them ('augment them'), giving them more time to care for
patients'.
 The review also outlined how the UK's NHS will nurture a learning
environment to ensure digitally capable employees.

……………………………………………………………………………………
……

3.3.2 CASE STUDY: AUTONOMOUS VEHICLES
 Autonomous Vehicles (AVs) are vehicles that are capable of sensing their environment and operating with little to no input from a human driver.
 While the idea of self-driving cars has been around since at least the
1920s, it is only in recent years that technology has developed to a
point where AVs are appearing on public roads.
According to automotive standardisation body SAE International (2018), there are six levels of driving automation:

Level 0 (No automation): An automated system may issue warnings and/or momentarily intervene in driving, but has no sustained vehicle control.
Level 1 (Hands on): The driver and automated system share control of the vehicle. For example, the automated system may control engine power to maintain a set speed (e.g. Cruise Control), engine and brake power to maintain and vary speed (e.g. Adaptive Cruise Control), or steering (e.g. Parking Assistance). The driver must be ready to retake full control at any time.
Level 2 (Hands off): The automated system takes full control of the vehicle (including accelerating, braking, and steering). However, the driver must monitor the driving and be prepared to intervene immediately at any time.
Level 3 (Eyes off): The driver can safely turn their attention away from the driving tasks (e.g. to text or watch a film) as the vehicle will handle any situations that call for an immediate response. However, the driver must still be prepared to intervene, if called upon by the AV to do so, within a timeframe specified by the AV manufacturer.
Level 4 (Minds off): As level 3, but no driver attention is ever required for safety, meaning the driver can safely go to sleep or leave the driver's seat.
Level 5 (Steering wheel optional): No human intervention is required at all. An example of a level 5 AV would be a robotic taxi.
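EXAMPLE (illustrative Python sketch):
The levels above can also be written down as a small lookup structure. The following minimal sketch is a hypothetical encoding (the field names are not part of the SAE standard); it simply flags how much attention each level still demands of the human driver.

# Illustrative encoding of the six SAE driving-automation levels.
# Field names are hypothetical; descriptions follow the table above.
SAE_LEVELS = {
    0: {"name": "No automation",           "driver_must_monitor": True,  "driver_ever_needed": True},
    1: {"name": "Hands on",                "driver_must_monitor": True,  "driver_ever_needed": True},
    2: {"name": "Hands off",               "driver_must_monitor": True,  "driver_ever_needed": True},
    3: {"name": "Eyes off",                "driver_must_monitor": False, "driver_ever_needed": True},
    4: {"name": "Minds off",               "driver_must_monitor": False, "driver_ever_needed": False},
    5: {"name": "Steering wheel optional", "driver_must_monitor": False, "driver_ever_needed": False},
}

def attention_required(level: int) -> str:
    """Summarise what is expected of the human driver at a given SAE level."""
    info = SAE_LEVELS[level]
    if info["driver_must_monitor"]:
        return f"Level {level} ({info['name']}): the driver must monitor and be ready to intervene."
    if info["driver_ever_needed"]:
        return f"Level {level} ({info['name']}): the driver may look away but must respond if requested."
    return f"Level {level} ({info['name']}): no human intervention is required."

print(attention_required(3))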

Societal and Ethical Impacts of AVs
 'We cannot build these tools saying, 'we know that humans act a certain way, we're going to kill them – here's what to do'.' (John Havens)
Public safety and the ethics of testing on public roads
 At present, cars with 'assisted driving' functions are legal in most
countries. Notably, some Tesla models have an Autopilot function, which
provides level 2 automation.
 Drivers are legally allowed to use assisted driving functions on public
roads provided they remain in charge of the vehicle at all times. However,
many of these assisted driving functions have not yet been subject to
independent safety certification, and as such may pose a risk to drivers
and other road users.
 In Germany, a report published by the Ethics Commission on Automated
Driving highlights that it is the public sector's responsibility to guarantee
the safety of AV systems introduced and licensed on public roads, and
recommends that all AV driving systems be subject to official licensing
and monitoring.
Issue of human safety
 This issue of human safety — of both public and passenger — is
emerging as a key issue concerning self-driving cars. Major companies
— Nissan, Toyota, Tesla, Uber, Volkswagen — are developing
autonomous vehicles capable of operating in complex, unpredictable
environments without direct human control, and capable of learning,
inferring, planning and making decisions.
 Self-driving vehicles could offer multiple benefits: statistics suggest you are generally safer in a car driven by a computer than in one driven by a human.
 They could also ease congestion in cities, reduce pollution, reduce travel
and commute times, and enable people to use their time more
productively. However, they won't mean the end of road traffic accidents.
Even if a self-driving car has the best software and hardware available,
there is still a collision risk.
 An autonomous car could be surprised, say by a child emerging from behind a parked vehicle, and there is always the question of programming: how should such cars be programmed when they must decide whose safety to prioritise?
 Driverless cars may also have to choose between the safety of passengers
and other road users.

EXAMPLE :
 Say that a car travels around a corner where a group of school children
are playing; there is not enough time to stop, and the only way the car can
avoid hitting the children is to swerve into a brick wall — endangering
the passenger. Whose safety should the car prioritise: the children's, or the passenger's?
Processes and technologies for accident investigation
 AVs are complex systems that often rely on advanced machine learning
technologies. Several serious accidents have already occurred, including a
number of fatalities involving level 2 AVs:
EXAMPLE :
1. In January 2016, 23-year-old Gao Yaning died when his Tesla Model S crashed into the back of a road-sweeping truck on a highway in Hebei, China. The family believe Autopilot was engaged when the accident occurred and accuse Tesla of exaggerating the system's capabilities. Tesla state that the damage to the vehicle made it impossible to determine whether Autopilot was engaged and, if so, whether it malfunctioned. A civil case into the crash is ongoing, with a third-party appraiser reviewing data from the vehicle.

2. In May 2016, 40-year-old Joshua Brown died when his Tesla Model S collided with a truck while Autopilot was engaged in Florida, USA. An investigation by the National Highway Traffic Safety Administration found that the driver, and not Tesla, was at fault. However, the National Transportation Safety Board later determined that both Autopilot and over-reliance by the motorist on Tesla's driving aids were to blame.

ETHICAL BLACK BOX
 One solution is to fit all future AVs with industry standard event data
recorders — a so-called 'ethical black box' — that independent accident
investigators could access. This would mirror the model already in place
for air accident investigations.
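EXAMPLE (illustrative Python sketch):
As a rough illustration of what such an event data recorder might capture, the following minimal sketch keeps a rolling window of time-stamped vehicle state and control decisions that could later be exported for investigators. It is a hypothetical design, not an industry specification; all field names are invented.

# Minimal, hypothetical sketch of an AV event data recorder ("ethical black box").
import json
import time
from collections import deque

class EventDataRecorder:
    def __init__(self, window_seconds: float = 30.0):
        # Keep only the most recent records, so the moments before an
        # incident can be reconstructed by an independent investigator.
        self.window_seconds = window_seconds
        self.records = deque()

    def log(self, speed_kmh: float, steering_deg: float, control_mode: str, decision: str) -> None:
        """Append one time-stamped record of vehicle state and the controlling decision."""
        now = time.time()
        self.records.append({
            "timestamp": now,
            "speed_kmh": speed_kmh,
            "steering_deg": steering_deg,
            "control_mode": control_mode,  # e.g. "human" or "automated"
            "decision": decision,          # e.g. "maintain_lane", "brake"
        })
        # Drop records that fall outside the rolling window.
        while self.records and now - self.records[0]["timestamp"] > self.window_seconds:
            self.records.popleft()

    def export(self) -> str:
        """Serialise the retained records for accident investigators."""
        return json.dumps(list(self.records), indent=2)

edr = EventDataRecorder(window_seconds=30.0)
edr.log(speed_kmh=48.0, steering_deg=-2.0, control_mode="automated", decision="maintain_lane")
edr.log(speed_kmh=31.0, steering_deg=-15.0, control_mode="automated", decision="brake")
print(edr.export())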
Near-miss accidents
 At present, there is no system in place for the systematic collection of
near-miss accidents. While it is possible that manufacturers are collecting
this data already, they are not under any obligation to do so — or to share
the data.
 The only exception at the moment is the US state of California, which
requires all companies that are actively testing AVs on public roads to
disclose the frequency at which human drivers were forced to take control
of the vehicle for safety reasons (known as 'disengagement').
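EXAMPLE (illustrative Python sketch):
To make disengagement reporting concrete, the short sketch below computes the kind of headline figure such disclosures allow, namely disengagements per 1,000 miles of testing. The input records are invented purely for illustration and do not reproduce any company's real figures.

# Hypothetical disengagement summaries: (company, miles tested, disengagements).
test_reports = [
    {"company": "CompanyA", "miles_tested": 52_000, "disengagements": 14},
    {"company": "CompanyB", "miles_tested": 8_500,  "disengagements": 61},
]

for report in test_reports:
    # Disengagements per 1,000 miles: a simple way to normalise the raw counts.
    rate = report["disengagements"] / report["miles_tested"] * 1_000
    print(f"{report['company']}: {rate:.2f} disengagements per 1,000 miles")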
Data privacy
 It is becoming clear that manufacturers collect significant amounts of data
from AVs.
 As these vehicles become increasingly common on our roads, the
question emerges: to what extent are these data compromising the privacy
and data protection rights of drivers and passengers?
1. Already, data management and privacy issues have appeared, with some
raising concerns about the potential misuse of AV data for advertising
purposes.
 Tesla have also come under fire for the unethical use of AV data logs.
In an investigation by The Guardian, the newspaper found multiple
instances where the company shared drivers' private data with the
media following crashes, without their permission, to prove that its
technology was not responsible. At the same time, Tesla does not
allow customers to see their own data logs.
 One solution, proposed by the German Ethics Commission on Automated Driving, is to ensure that all AV drivers be given full data sovereignty. This would allow them to control how their data is used.
Employment
 The growth of AVs is likely to put certain jobs — most pertinently bus,
taxi, and truck drivers — at risk.
 In the medium term, truck drivers face the greatest risk as long-distance
trucks are at the forefront of AV technology.
 In 2016, the first commercial delivery of beer was made using a self-
driving truck, in a journey covering 120 miles and involving no human
action.
 Looking further forward, bus drivers are also likely to lose jobs as more
and more buses become driverless.
 Numerous cities across the world have announced plans to introduce self-
driving shuttles in the future, including Edinburgh, New York and
Singapore.
 In some places, this vision has already become a reality; the Las Vegas
shuttle famously got off to a bumpy start when it was involved in a
collision on its first day of operation, and tourists in the small Swiss
town of Neuhausen Rheinfall can now hop on a self-driving bus to visit
the nearby waterfalls.
 Fully autonomous taxis will likely only become realistic in the long term,
once AV technology has been fully tested and proven at levels 4 and 5.
Nonetheless, with plans to introduce self-driving taxis in London by 202,
and an automated taxi service already available in Arizona, USA, it is
easy to see why taxi drivers are uneasy.
The quality of urban environments
 In the long-term, AVs have the potential to reshape our urban
environment. Some of these changes may have negative consequences for
pedestrians, cyclists and locals.
 As driving becomes more automated, there will likely be a need for
additional infrastructure (e.g. AV-only lanes).
 There may also be more far-reaching effects for urban planning, with
automation shaping the planning of everything from traffic congestion
and parking to green spaces and lobbies.
 The environmental impact of self-driving cars should also be considered.
 While self-driving cars have the potential to significantly reduce fuel usage and associated emissions, these savings could be counteracted by the fact that self-driving cars make it easier and more appealing to drive long distances.
 The impact of automation on driving behaviours should therefore not be underestimated.
Legal and ethical responsibility
 From a legal perspective, who is responsible for crashes caused by robots,
and how should victims be compensated (if at all) when a vehicle
controlled by an algorithm causes injury?
 If courts cannot resolve this problem, robot manufacturers may incur
unexpected costs that would discourage investment. However, if victims
are not properly compensated then autonomous vehicles are unlikely to
be trusted or accepted by the public.
 Robots will need to make judgement calls in conditions of uncertainty, or 'no win' situations. However, which ethical approach or theory should a robot be programmed to follow when there is no legal guidance?
 Additionally, who should choose the ethics for the autonomous vehicle
— drivers, consumers, passengers, manufacturers, politicians?
 The responsibility should be shared among the engineers, the driver
and the autonomous driving system itself.
 However, Millar suggests that the user of the technology, in this case the
passenger in the self-driving car, should be able to decide what ethical or
behavioral principles the robot ought to follow.
 Using the example of doctors, who do not have the moral authority to
make important decisions on end-of-life care without the informed
consent of their patients, he argues that there would be a moral outcry if
engineers designed cars without either asking the driver directly for their
input, or informing the user ahead of time how the car is programmed to
behave in certain situations.
……………………………………………………………………………………
….
3.3.3 CASE STUDY: WARFARE AND WEAPONISATION
 Although partially autonomous and intelligent systems have been used in military technology since at least the Second World War, advances in machine learning and AI signify a turning point in the use of automation in warfare.
 AI is already sufficiently advanced and sophisticated to be used in areas
such as satellite imagery analysis and cyber defence, but the true scope of
applications has yet to be fully realized.
 A recent report concludes that AI technology has the potential to
transform warfare to the same, or perhaps even a greater, extent than the
advent of nuclear weapons, aircraft, computers and biotechnology.
Lethal autonomous weapons
 As automatic and autonomous systems have become more capable, militaries have become more willing to delegate authority to them. This is likely to continue with the widespread adoption of AI, leading to an AI-inspired arms race.
 The Russian Military Industrial Committee has already approved an
aggressive plan whereby 30% of Russian combat power will consist of
entirely remote-controlled and autonomous robotic platforms by 2030.
 Other countries are likely to set similar goals.
 While the United States Department of Defense has enacted restrictions
on the use of autonomous and semi-autonomous systems wielding lethal
force, other countries and non-state actors may not exercise such self-
restraint.
Drone technologies
 Standard military aircraft can cost more than US$100 million per unit; a high-quality quadcopter Unmanned Aerial Vehicle, however, currently costs roughly US$1,000, meaning that for the price of a single high-end aircraft, a military could acquire on the order of 100,000 drones.
 Although current commercial drones have limited range, in the future
they could have similar ranges to ballistic missiles, thus rendering
existing platforms obsolete.
Robotic assassination
 Widespread availability of low-cost, highly-capable, lethal, and
autonomous robots could make targeted assassination more widespread
and more difficult to attribute.
 Automatic sniping robots could assassinate targets from afar.
Mobile-robotic-Improvised Explosive Devices
 As commercial robotic and autonomous vehicle technologies become
widespread, some groups will leverage this to make more advanced
Improvised Explosive Devices (IEDs).
 Currently, the technological capability to rapidly deliver explosives to a
precise target from many miles away is restricted to powerful nation
states.
 Similarly, self-driving cars could make suicide car bombs more frequent
and devastating since they no longer require a suicidal driver.
EXAMPLE :
 One report describes an example where a Commanding Officer (CO) could employ an Intelligent Virtual Assistant (IVA) within a fluid battlefield environment that automatically scanned satellite imagery to detect specific vehicle types, helping to identify threats in advance.
 It could also predict the enemy's intent, and compare situational data to a
stored database of hundreds of previous wargame exercises and live
engagements.
Lethal Autonomous Weapon Systems
 In particular, many researchers are concerned that Lethal Autonomous
Weapon Systems (LAWS) — a type of autonomous military robot that
can independently search for and 'engage' targets using lethal force —
may not meet the standards set by International Humanitarian Law, as
they are not able to distinguish civilians from combatants, and would not
be able to judge whether the force of the attack was proportional given
the civilian damage it would incur.
 Robots also have no concept of what it means to kill the 'wrong' person.
'It is only because humans can feel the rage and agony that accompanies
the killing of humans that they can understand sacrifice and the use of
force against a human. Only then can they realise the 'gravity of the
decision' to kill.
What others think about AI in warfare:
 However, others argue that there is no particular reason why being killed
by a machine would be a subjectively worse, or less dignified, experience
than being killed by a cruise missile strike. 'What matters is whether the
victim experiences a sense of humiliation in the process of getting killed.
 Victims being threatened with a potential bombing will not care whether
the bomb is dropped by a human or a robot.
 In addition, not all humans have the emotional capacity to conceptualise
sacrifice or the relevant emotions that accompany risk.
 In the heat of battle, soldiers rarely have time to think about the concept
of sacrifice, or generate the relevant emotions to make informed decisions
each time they deploy lethal force.
Utilitarianism and AI: Maximizing Utility vs Minimizing Harm
When applying utilitarianism to artificial intelligence (AI), "maximizing utility" means the AI should always choose the action that produces the greatest overall good or benefit for the most people, while "minimizing harm" means the AI should prioritize actions that cause the least amount of negative consequences or suffering, even if it means sacrificing some potential benefit. Essentially, both concepts aim to achieve the best outcome for the greatest number, but "minimizing harm" places a stronger emphasis on avoiding negative impacts.
Key points to consider:
 Core principle:
Utilitarianism is a moral philosophy that dictates choosing the action that
produces the greatest overall happiness or utility for the most people affected by a
decision.
 Applying to AI:
When designing AI systems, a utilitarian approach would involve programming
them to make decisions that maximize positive outcomes for the largest group,
even if it means causing some harm to a smaller group.
 Maximizing utility:
This aspect focuses on identifying the action that generates the most overall
benefit, even if it involves some level of harm to a few individuals.
 Minimizing harm:
This approach prioritizes avoiding negative consequences as much as possible,
even if it means sacrificing some potential positive outcomes.
Challenges in applying utilitarianism to AI:
 Quantifying utility:
Determining the "greatest good" can be complex, especially when dealing with
diverse human values and situations.
 Unforeseen consequences:
AI systems may produce unintended negative outcomes, making it difficult to
accurately predict the full impact of a decision.
 Individual rights:
A purely utilitarian approach may sometimes overlook the rights and well-being
of individual people, especially if they are in a minority group.
Example scenarios:
 Self-driving car dilemma:
A utilitarian AI might choose to hit a single pedestrian to avoid a larger accident with multiple casualties, while a "minimizing harm" approach would prioritize avoiding any casualties even if it means causing a smaller accident (a brief code sketch of this trade-off follows after these example scenarios).
 Medical diagnosis:
An AI designed to maximize utility might prioritize diagnosing a larger number
of patients with a common illness, even if it means missing a few rare but serious
conditions, whereas a "minimizing harm" approach would prioritize identifying
all potential serious diseases, even if it means missing some less severe cases.
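EXAMPLE (illustrative Python sketch):
As flagged in the self-driving car dilemma above, the two decision rules can point to different actions. The following minimal sketch scores two hypothetical manoeuvres: a purely utilitarian rule picks the action with the highest expected utility, while a harm-minimising rule picks the action whose worst outcome causes the least harm. All probabilities, utilities and harm scores are invented and carry no moral authority.

# Each action maps to possible outcomes as (probability, utility, harm) triples.
# All numbers are invented for illustration.
actions = {
    "brake_hard": [
        (0.95,    0.0,   0.0),   # most likely: the car stops in time, no one is hurt
        (0.05, -100.0, 100.0),   # small chance: severe collision with pedestrians
    ],
    "swerve_into_wall": [
        (1.0,   -10.0,  10.0),   # certain: moderate injury to the passenger
    ],
}

def expected_utility(outcomes):
    # Utilitarian score: probability-weighted sum of utilities.
    return sum(p * utility for p, utility, _ in outcomes)

def worst_case_harm(outcomes):
    # Harm-minimising score: the harm of the single worst outcome.
    return max(harm for _, _, harm in outcomes)

utilitarian_choice = max(actions, key=lambda a: expected_utility(actions[a]))
harm_minimising_choice = min(actions, key=lambda a: worst_case_harm(actions[a]))

print("Highest expected utility:", utilitarian_choice)      # brake_hard
print("Smallest worst-case harm:", harm_minimising_choice)  # swerve_into_wall

With these invented numbers the two rules disagree, which is exactly the tension between maximizing utility and minimizing harm described above.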

Utilitarianism is a moral theory that advocates for actions that maximize overall happiness
or utility and minimize harm. When applied to AI, utilitarianism can help guide decisions
about how to develop, deploy, and regulate AI systems to ensure they produce the greatest
good for the greatest number of people while minimizing potential harms.

How utilitarianism and AI can be linked together, particularly in the context of maximizing
utility and minimizing harm:

1. Utilitarianism: An Overview
Utilitarianism is a form of consequentialism, meaning that it judges actions based on their
outcomes or consequences. The central tenet is the greatest happiness principle, which
holds that the best action is the one that produces the greatest good (or utility) for the most
people. The basic idea is to:

 Maximize Utility: Increase overall happiness or well-being.
 Minimize Harm: Reduce suffering or negative outcomes.
In the context of AI, this framework can guide ethical decision-making, focusing on ensuring
that AI systems provide more benefits than harms to individuals and society as a whole.

2. AI and Maximizing Utility

When applying utilitarianism to AI systems, the primary goal is to design AI systems that
maximize benefits and utility for the largest number of stakeholders. This could involve:

 Improving Efficiency: AI can optimize processes in various sectors (e.g., healthcare, transportation, manufacturing) to save time, resources, and effort, increasing overall societal well-being.
o Example: AI-driven medical diagnostic tools can help doctors identify
diseases faster, leading to earlier treatment, better outcomes, and improved
quality of life.
 Enhancing Accessibility: AI can make essential services more accessible to
underserved populations, improving well-being by giving people access to education,
healthcare, and financial services.
o Example: AI-powered language translation tools can help bridge
communication gaps, enabling people from different cultures or regions to
access knowledge and services that were previously unavailable to them.
 Optimizing Resource Allocation: AI systems can help manage resources more
effectively, reducing waste, and ensuring that limited resources are used where they
are most needed.
o Example: AI algorithms can be used to optimize energy consumption,
reducing costs and environmental impact.

3. AI and Minimizing Harm

Utilitarianism also requires minimizing harm, which involves preventing negative consequences or unintended side effects that could arise from AI systems. This can be a complex challenge, as AI systems can sometimes cause harm even with the best intentions. Here are some considerations for minimizing harm:

 Preventing Bias: One of the key challenges in AI is bias, especially in data-driven systems. AI models that are trained on biased data can reinforce existing inequalities, leading to harmful outcomes for certain groups.
o Example: A hiring algorithm that is trained on biased data may
unintentionally discriminate against women or racial minorities, leading to
harmful social consequences like reinforcing inequality.
 Ensuring Safety: AI systems, especially autonomous systems like self-driving cars or
drones, must be designed with safety in mind to avoid accidents or harmful failures.
o Example: An autonomous vehicle must be programmed to avoid accidents
and prioritize the safety of pedestrians and passengers. Failure to do so could
lead to serious harm and societal distrust of the technology.
 Protecting Privacy: AI systems often require access to large amounts of personal
data. Ensuring that this data is used ethically and securely is vital to minimizing harm
to individuals’ privacy.
o Example: AI systems used in healthcare must follow strict privacy regulations
(e.g., HIPAA in the U.S.) to protect sensitive medical data from being exposed
or misused.
 Preventing Job Displacement: AI has the potential to displace jobs, especially in
areas like manufacturing, transportation, and customer service. While AI can improve
efficiency and reduce costs, it’s important to consider the societal impact of mass
unemployment.
o Example: The automation of factory jobs may increase productivity but could
also lead to significant job losses. Ethical AI development includes
considering how to retrain workers and minimize negative social
consequences.
 Ensuring Fairness: An AI system should be designed to be fair and equitable,
ensuring that no group or individual is disproportionately harmed by its use.
o Example: When designing AI systems for criminal justice, it’s essential to
ensure that algorithms do not unfairly target or disadvantage certain groups,
such as minorities.

4. Challenges in Applying Utilitarianism to AI

While the goal of maximizing utility and minimizing harm sounds clear, applying
utilitarianism to AI can be challenging for several reasons:

 Uncertainty of Long-Term Consequences: The long-term consequences of AI systems can be hard to predict. A system that seems beneficial in the short term could have unintended harmful consequences in the long term, especially with technologies like AI that evolve rapidly.
o Example: AI-driven content recommendation systems might maximize
engagement (and utility) in the short term but lead to increased polarization,
misinformation, or mental health issues in the long run.
 Balancing Competing Interests: Different stakeholders might have conflicting
interests. For instance, a company might design an AI system that maximizes profit
(economic utility), but this could harm workers through job displacement or
exacerbate inequality.
o Example: A factory may use AI-powered automation to increase productivity,
but the resulting job losses could harm workers, and the community may lose
economic stability.
 Ethical Trade-offs: In some cases, utilitarianism might require making difficult
trade-offs. For instance, ensuring the safety of a self-driving car might mean it
prioritizes the safety of its passengers over pedestrians in certain scenarios.
o Example: In a critical situation, an autonomous vehicle might need to make a
decision about whether to protect the passengers or avoid hitting a pedestrian.
Such scenarios raise complex ethical questions about the value of life and
well-being.
5. Utilitarianism in AI Regulation

Governments and international bodies may adopt utilitarian frameworks to regulate AI,
ensuring that AI systems are developed and deployed in ways that maximize societal benefits
while minimizing harm. This might involve:

 Ethical Guidelines and Standards: Establishing clear ethical guidelines, such as the
IEEE 7000 series, which promotes integrating ethics into AI system design.
 Transparency and Accountability: Ensuring that AI systems are transparent and
accountable, so their utility can be accurately assessed and harm can be mitigated.
 Ongoing Monitoring and Adjustment: Given that AI systems evolve over time,
continuous monitoring is crucial to ensuring that they continue to maximize utility
and minimize harm in changing societal contexts.

Utilitarianism theory and the Cambridge Analytica – Facebook case
The utilitarianism theory is one of the consequentialist theories, which are concerned with the overall ethical consequences of a particular action. This theory was
chosen as artificial intelligence operates with large amounts of users’ data and it is a
technological tool for people, which is very close to the main idea of this ethical theory - to
achieve a greater good for society. The utilitarianism theory “is one of the most common
approaches to making ethical decisions, especially decisions with consequences that concern
large groups of people, in part because it instructs us to weigh the different amounts of good
and bad that will be produced by our action” (Bonde & Firenze, 2013). The assessment of
good and bad should be done by the people using the system. For this reason, the GDPR (General Data Protection Regulation) has been established. Although companies are legally regulated by it, ethical standards differ from person to person. Therefore,
there is a possibility of a business formally obeying the GDPR law, but still performing in an
unethical way with the data of its customers. As it is still not possible to create an ethical law
and it probably never will be, people can only trust organizations, their policies and
regulators.
However, an unethical usage and processing of data through the use of artificial
intelligence still could be punished by the laws in charge. An example of that is the very
famous case of Cambridge Analytica, which is a political consultancy company founded in
2013 that “combines the predictive data analytics, behavioral sciences, and innovative ad tech
into one award winning approach.” (Rathi, 2019). However, in March 2018 a former analyst publicly revealed that the practices the organization had used in the 2016 US Presidential Election were unethical. Shortly after this revelation, Cambridge Analytica went bankrupt. What the company was able to do was collect Facebook users’ data with the help of the social media giant. “They are blamed for deceiving consumers in how their data was collected and about identifiable information (FTC). While the data was showing the user a personality score, the firm was harvesting each Facebook User ID to gain insight for voter profiling.” (Boerboom, 2020). Cambridge Analytica lied about what information exactly it was going to collect from users, and consequently its actions were proven unethical. However, Facebook also played a role. The social media company did not read all of the privacy policy information that Cambridge Analytica had presented to it, and so remained unaware of what the political consultancy business was doing. Due to Facebook’s poor attention to its own privacy policy, Cambridge Analytica was able to harvest the data of more than 50 million people. “The Federal Trade Commission wanted to
be sure that users could be confident in their rights with the platform influencing
communication worldwide. Therefore, they issued a $5 Billion fine and demanded a new
privacy compliance system which includes two-factor authentication, and other new tools that
helps the FTC monitor Facebook in an effort to make a statement about the importance and
seriousness concerning data privacy (FTC).” (Boerboom, 2020). This case shows that it will
not be possible to fully trust the ethics of organizations with people’s personal information.
For that to happen successfully, societies need regulators and laws to forbid inappropriate
actions. The utilitarianism theory applied for the Cambridge Analytica and Facebook case
provides an obvious reasoning of how companies could act in an unethical way with their
user’s data.
A main idea of the theory is “that some good and some bad will necessarily be the
result of our action and that the best action will be that which provides the most good or does
the least harm, or, to put it another way, produces the greatest balance of good over harm”
(Bonde & Firenze, 2013). Obviously, to exploit and put in danger the data of millions of
people just to use it for the good of one is definitely against utilitarianism.
Utilitarianism applied in organizational culture
 In the book ‘Business Ethics’ by William H. Shaw, the first ethical theory discussed is utilitarianism. By analysing business ethics through it, the author concluded that it is a very appealing theory as a moral standard for organizations. It should first be noted that organizations can be considered moral agents. “If corporations are moral agents, then they can be seen as having obligations and as being morally responsible for their actions, just as individuals are.” (Shaw, 2017). Therefore, such entities should take ethical considerations into account in the performance of their activities. For the successful development of the ethical tool, three aspects of utilitarianism derived by W. Shaw will be mentioned.
These are: -
“First, utilitarianism provides a clear and straightforward basis for formulating and testing
policies. By utilitarian standards, an organizational policy, decision, or action is good if it promotes
the general welfare more than any other alternative.”
- “Second, utilitarianism provides an objective and attractive way of resolving conflicts of self-
interest. This feature of utilitarianism dramatically contrasts with egoism, which seems incapable of
resolving such conflicts… Thus, individuals within organizations make moral decisions and evaluate
their actions by appealing to a uniform standard: the general good.”
- “Third, utilitarianism provides a flexible, result-oriented approach to moral decision making…
This facet of utilitarianism enables organizations to make realistic and workable moral decisions.”
These three conclusions made by William Shaw will be used in the development of the ethical tool as proof that an ethical culture can be developed in organizations, following the main idea of the utilitarianism theory (to do what brings the best consequences for society).
To do that, important aspects should be developed based on the literature review and expert interviews’ findings. First, there is the idea of the theory itself: to do that which will result in the best consequences for society. Therefore,
Aspect #1 of the borderline will be ‘good consequences for society’ on the ethical side and ‘bad consequences for society’ on the unethical one. ‘Consequences’ are defined under different dimensions of societal progress: health, income, education, happiness and safety (more personal focus); economy, technology and environment (more global focus). Secondly, an analysis of the consequences should be made, together with how utility is calculated: the right action will be the one that maximizes the average utility of society (McNamee et al., 2001).
Aspect #2 will be about utility calculation based on the previously mentioned dimensions. On the ethical side are the action(s) that maximize the average utility of society; on the unethical side, the action(s) that do not. Thirdly, personal data should only be used for the greater good in compliance with the GDPR. However, as discussed, there are ways to go around it, which is why organizational ethical culture is of extreme importance. Aspect #3 is therefore about the need for organizations to implement an ethical culture to assure regulators that their activities are ethical. Lastly, organizations must be accountable for the transparency and trustworthiness of the AI systems they use.
Aspect #4 will be the successful implementation of trusted AI systems, with a complete understanding of their processes, in order to satisfy the transparency requirements of regulators and customers.
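EXAMPLE (illustrative Python sketch):
A toy calculation in the spirit of Aspect #2 is sketched below: each candidate action is scored for every affected person across the dimensions listed above, and the action with the highest average score is treated as the ethical choice. The dimension weights and scores are invented for illustration and are not part of any established methodology.

# Toy sketch of "maximize the average utility of society" across several dimensions.
# Weights and scores are invented for illustration only.
WEIGHTS = {"health": 0.3, "income": 0.2, "education": 0.15,
           "happiness": 0.15, "safety": 0.2}

def person_utility(scores: dict) -> float:
    """Weighted utility of one person's outcome across the societal dimensions."""
    return sum(WEIGHTS[dim] * scores.get(dim, 0.0) for dim in WEIGHTS)

def average_social_utility(population_outcomes: list) -> float:
    """Average utility across everyone affected by a candidate action."""
    return sum(person_utility(p) for p in population_outcomes) / len(population_outcomes)

# Two candidate organisational actions, each described by its effect on three people.
candidate_actions = {
    "deploy_with_safeguards": [
        {"health": 0.6, "income": 0.5, "education": 0.4, "happiness": 0.6, "safety": 0.8},
        {"health": 0.5, "income": 0.6, "education": 0.5, "happiness": 0.5, "safety": 0.7},
        {"health": 0.7, "income": 0.4, "education": 0.6, "happiness": 0.6, "safety": 0.9},
    ],
    "deploy_without_safeguards": [
        {"health": 0.8, "income": 0.9, "education": 0.4, "happiness": 0.7, "safety": 0.2},
        {"health": 0.3, "income": 0.2, "education": 0.3, "happiness": 0.2, "safety": 0.1},
        {"health": 0.4, "income": 0.3, "education": 0.3, "happiness": 0.3, "safety": 0.2},
    ],
}

for action, outcomes in candidate_actions.items():
    print(f"{action}: average utility = {average_social_utility(outcomes):.2f}")

best = max(candidate_actions, key=lambda a: average_social_utility(candidate_actions[a]))
print("Action that maximizes the average utility of society:", best)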