EAI - UNIT II
2. EMOTIONAL HARM
What is it to be human? AI will interact with and have an impact on the
human emotional experience in ways that have not yet been quantified.
Humans are susceptible to emotional influence, both positive and
negative, and 'affect' – how emotion and desire influence behaviour –
is a core part of intelligence.
There are various ways in which AI could inflict emotional harm,
including false intimacy, over-attachment, objectification and
commodification of the body, and social or sexual isolation. These are
covered by several of the aforementioned ethical initiatives, including the
Foundation for Responsible Robotics and the Partnership on AI.
NUDGING:
Affective AI also opens up the possibility of deceiving and coercing its
users – researchers have defined 'nudging' as the act of an AI subtly
modifying behaviour by emotionally manipulating and influencing its user
through the affective system.
1. While this may be useful in some ways – for example, helping to address drug
dependency or encourage healthy eating – it could also trigger behaviours that
worsen human health.
2. Systematic analyses must examine the ethics of affective design prior
to deployment; users must be educated on how to recognise and
distinguish between nudges; users must have an opt-in system for
autonomous nudging systems; and vulnerable populations that cannot
give informed consent, such as children, must be subject to additional
protection.
3. Other issues include technology addiction and emotional harm due to
societal or gender bias.
1. The IEEE suggests first identifying the social and moral norms of the specific
community in which an AI will be deployed, and those around the
specific task or service it will offer, and then designing AI with the idea of 'norm
updating' in mind, given that norms are not static and AI must change
dynamically and transparently alongside culture.
2. Several initiatives – such as AI4All and the AI Now Institute – explicitly
advocate for fair, diverse, equitable, and non-discriminatory inclusion in
AI at all stages, with a focus on support for under-represented groups.
Personal data may be used maliciously or for profit, systems are at risk of
hacking, and technology may be used exploitatively.
1. The IEEE suggests new ways of educating the public on ethics and
security issues, for example a 'data privacy' warning on smart devices that
collect personal data; delivering this education in scalable, effective
ways; and educating government, lawmakers, and enforcement agencies
on these issues so that they can work collaboratively with citizens –
in a similar way to police officers providing safety lectures in schools –
and avoid fear and confusion.
Other issues include manipulation of behaviour and data.
Humans must retain control over AI and oppose subversion. Most of the
initiatives reviewed flag this as a potential issue.
AI must also work for the good of humankind, must not exploit people,
and must be regularly reviewed by human experts.
12. EXISTENTIAL RISK
According to the Future of Life Institute, the main existential
issue surrounding AI 'is not malevolence, but competence' – AI
systems will continually learn as they interact with others and gather data,
gaining intelligence over time and potentially developing
aims that are at odds with those of humans.
1. AI also poses a threat in the form of autonomous weapons systems
(AWS). As these are designed to cause physical harm, they raise
numerous ethical quandaries.
2. The pursuit of AWS may lead to an international arms race and
geopolitical instability; as such, the IEEE holds that systems
designed to act outside the boundaries of human control or judgement are
unethical and violate fundamental human rights and legal accountability
for weapons use.
3. Given their potential to seriously harm society, these concerns must be
controlled for and regulated pre-emptively, says the Foundation for
Responsible Robotics. Other initiatives that cover this risk explicitly
include the UNI Global Union and the Future of Life Institute.
Ethical AI Design
Integrating ethics into the design phase of AI systems is essential. This involves
multidisciplinary collaboration, including ethicists, policymakers, technologists, and end-
users, to identify and mitigate potential ethical issues.
Ethical Use of AI
Considerations of how AI is used and its impact on society must guide development. AI
applications should align with ethical standards, respect human rights, and contribute
positively to societal well-being.
Human-Centric Approach
Maintaining a human-centric approach in AI development involves prioritizing human
values, well-being, and autonomy. Human oversight and control over AI systems should be
paramount, ensuring that AI augments human capabilities rather than replacing or dictating
them.
Job Displacement
The advancement of AI automation has the potential to replace human jobs, resulting in
widespread unemployment and exacerbating economic inequalities. Conversely, some argue
that while AI will replace knowledge workers – just as robots are replacing manual laborers –
AI has the potential to create far more jobs than it destroys. Addressing the impacts of job
displacement requires proactive measures such as retraining programs and policies that
facilitate a just transition for affected workers, as well as far-reaching social and economic
support systems.
Autonomous Weapons
Ethical concerns arise with the development of AI-powered autonomous weapons. Questions
of accountability, the potential for misuse, and the loss of human control over life-and-death
decisions necessitate international agreements and regulations to govern the use of such
weapons. Ensuring responsible deployment becomes essential to prevent catastrophic
consequences.
Sourcing data ethically means obtaining data in a way that respects individuals' privacy,
consent, and applicable data rights. While ethical data sourcing helps to maintain an AI
system's integrity and public trust, it can also mitigate potential legal risks.
Irresponsible practices like inadequate data security or violation of privacy rights can erode
public trust, cause data breaches, damage the reputation of the organization, and lead to legal
repercussions.
Proper data management for AI tools involves secure storage, controlled access, and
regulated deletion practices.
Data should be properly secured, employing encryption methods and firewall systems to
prevent unauthorized access or breaches. Access to data should be limited to necessary
personnel, with a system for tracking who has accessed the data and for what purpose.
Additionally, a clear data deletion policy should be implemented. Once data has outlived its
utility or an individual requests that their data is deleted, it should be permanently removed to
maintain privacy and respect individual rights.
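EXAMPLE (illustrative code):
The sketch below shows one possible way to combine the practices described above – encrypted storage, controlled and logged access, and permanent deletion on request. It is a minimal Python sketch under stated assumptions, not a prescribed implementation: the PersonalDataStore class and its methods are hypothetical, and it assumes the third-party 'cryptography' package is available.

from datetime import datetime, timezone
from cryptography.fernet import Fernet

class PersonalDataStore:
    def __init__(self):
        # In practice the key would live in a managed key store, not in memory.
        self._cipher = Fernet(Fernet.generate_key())
        self._records = {}    # subject_id -> encrypted payload
        self.access_log = []  # (timestamp, accessor, subject_id, purpose)

    def store(self, subject_id: str, payload: bytes) -> None:
        # Data is encrypted at rest to prevent unauthorized access.
        self._records[subject_id] = self._cipher.encrypt(payload)

    def read(self, subject_id: str, accessor: str, purpose: str) -> bytes:
        # Controlled access: every read is attributed to a person and a purpose.
        self.access_log.append((datetime.now(timezone.utc), accessor, subject_id, purpose))
        return self._cipher.decrypt(self._records[subject_id])

    def erase(self, subject_id: str) -> None:
        # Deletion policy: remove the record permanently when it has outlived
        # its utility or the individual requests deletion.
        self._records.pop(subject_id, None)

store = PersonalDataStore()
store.store("patient-042", b"resting heart rate: 72 bpm")
print(store.read("patient-042", accessor="dr_lee", purpose="diagnosis"))
store.erase("patient-042")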
For instance, the European Union (EU) has proposed a framework that emphasizes
transparency, accountability, and protection of individual rights. Meanwhile, countries
like Singapore and Canada have published their own AI ethics guidelines, emphasizing
principles of fairness, accountability, and human-centric values.
At the global level, UNESCO has released draft recommendations on the Ethics of
Artificial Intelligence—emphasizing the need for a human-centered approach to AI that
focuses on human rights, cultural diversity, and fairness. It also stresses the importance of
transparency, accountability, and the need for AI to be understandable and controllable by
human beings.
While the specifics may vary, the global consensus leans towards a human-centric approach
that stresses transparency, accountability, and the protection of individual rights.
These globally recognized standards can help bridge cultural and societal differences, while
establishing a common ground for the ethical use and development of AI. Such an
international approach not only promotes the responsible development and use of AI
technologies, but also fosters trust, cooperation, and mutual understanding among nations.
Translating ethical principles into actionable guidelines is key to realizing ethical AI. This
involves integrating ethical considerations into every stage of the AI lifecycle, from initial
design to deployment, to monitoring.
During the development phase, it’s essential to source and manage data ethically. This
involves obtaining data sets responsibly, ensuring secure storage, and managing its lifecycle
properly.
Once the AI system is deployed, its performance and ethical behavior should be consistently
monitored. Continuous auditing can help identify any ethical issues or biases that arise and
address them promptly.
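EXAMPLE (illustrative code):
A minimal sketch of what one continuous-audit check might look like, assuming a batch of recent predictions with ground-truth labels can be collected periodically. The function name, tolerance threshold, and data are hypothetical, not part of any standard.

def audit(recent_labels, recent_predictions, baseline_accuracy, tolerance=0.05):
    # Recompute accuracy on the latest labelled sample.
    correct = sum(y == p for y, p in zip(recent_labels, recent_predictions))
    accuracy = correct / len(recent_labels)
    if accuracy < baseline_accuracy - tolerance:
        # In practice this would notify the system's owner or open an incident.
        print(f"ALERT: accuracy dropped to {accuracy:.2%} (baseline {baseline_accuracy:.2%})")
    return accuracy

# The model was accepted at 92% accuracy; this week's labelled sample scores
# only 80%, so the audit raises an alert for prompt investigation.
audit([1, 1, 0, 1, 0] * 20, [1, 0, 0, 1, 0] * 20, baseline_accuracy=0.92)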
Additionally, clear communication about how the AI works, its limitations, and the data it
uses will help ensure transparency and maintain user trust. This can be accomplished through
comprehensive, user-friendly documentation and, where appropriate, interfaces that allow
users to review and understand the AI’s decisions.
Lastly, it's crucial to have an accountability framework in place, so there are clear lines of
responsibility if the AI system fails or causes harm. This is a helpful way to support both
internal and legal accountability.
By integrating these steps into the development process, ethical principles can be translated
into practical, actionable guidelines.
Microsoft’s AI Ethics
Microsoft reviews its AI systems to identify those that may have an adverse
impact on people, organizations, and society, and applies additional oversight to these
systems.
IBM’s Trustworthy AI
IBM is recognized as a leader in the field of trustworthy AI, with a focus on ethical principles
and practices in its use of technology. The company has developed a Responsible Use of
Technology framework to guide its decision-making and governance processes, fostering a
culture of responsibility and trust.
The World Economic Forum has highlighted IBM's efforts in a case study, providing
practical resources for organizations to operationalize ethics in their use of technology.
The ten core principles of ethical AI enjoy broad consensus for a reason: they align with
globally recognized definitions of fundamental human rights, as well as with multiple
international declarations, conventions and treaties. The first two principles can help you
acquire the knowledge that can allow you to make ethical decisions for your AI. The next
eight can help guide those decisions.
A top challenge to navigating these ten principles is that they often mean different things in
different places — and to different people. The laws a company has to follow in the US, for
example, are likely different than those in China. In the US they may also differ from one
state to another. How your employees, customers and local communities define the common
good (or privacy, safety, reliability or most of the ethical AI principles) may also differ.
To put these ten principles into practice, then, you may want to start by contextualising them:
Identify your AI systems’ various stakeholders, then find out their values and discover any
tensions and conflicts that your AI may provoke. You may then need discussions to
reconcile conflicting ideas and needs.
When all your decisions are underpinned by human rights and your values, regulators,
employees, consumers, investors and communities may be more likely to support you — and
give you the benefit of the doubt if something goes wrong.
To help resolve these possible conflicts, consider explicitly linking the ten principles to
fundamental human rights and to your own organisational values. The idea is to create
traceability in the AI design process: for every decision with ethical implications that you
make, you can trace that decision back to specific, widely accepted human rights and your
declared corporate principles.
Identify who will be accountable for the AI and its effects at each stage and across its lifecycle,
including responsibility for maintaining records created. Identifying and addressing risk is best
achieved by involving appropriate stakeholders. As such, consumers, technologists, developers,
mission personnel, risk management professionals, civil liberties and privacy officers, and legal
counsel should utilize this framework collaboratively, each leveraging their respective experiences,
perspectives, and professional skills.
CASE STUDIES
3.3.1. CASE STUDY: HEALTHCARE ROBOTS
Artificial Intelligence and robotics are rapidly moving into the field
of healthcare and will increasingly play roles in diagnosis and
clinical treatment.
For example, currently, or in the near future, robots will help in the
diagnosis of patients; the performance of simple surgeries; and the
monitoring of patients' health and mental wellness in short and
long-term care facilities. They may also provide basic physical
interventions, work as companion carers, and remind patients to take
their medication. In medical image diagnostics, machine learning has been shown
to match or even surpass our ability to detect illnesses.
1. Safety
The most important ethical issue arising from the growth of AI and
robotics in healthcare is that of safety and avoidance of harm.
It is vital that robots should not harm people, and that they should be safe
to work with. This point is especially important in areas of healthcare that
deal with vulnerable people, such as the ill, elderly, and children.
Digital healthcare technologies offer the potential to improve the accuracy of
diagnoses and treatments, but investment in clinical trials is required to
thoroughly establish a technology's long-term safety and performance.
2. User understanding
The correct application of AI by a healthcare professional is important to ensure
patient safety.
'THE DA VINCI' ROBOT
The precise surgical robotic assistant 'da Vinci' has proven a useful
tool in minimising surgical recovery time, but it requires a trained operator.
It is important for users to trust the AI presented but to be aware of each
tool's strengths and weaknesses, recognising when validation is
necessary. For instance, a generally accurate machine learning study to
predict the risk of complications in patients with pneumonia erroneously
considered those with asthma to be at low risk.
However, it's questionable to what extent individuals need to understand
how an AI system arrived at a certain prediction in order to make
autonomous and informed decisions.
Even if an in-depth understanding of the mathematics is made obligatory,
the complexity and learned nature of machine learning algorithms often
prevent the ability to understand how a conclusion has been made from a
dataset – a so-called 'black box'.
Data protection
Personal medical data needed for healthcare algorithms may be at risk.
For instance, there are worries that data gathered by fitness trackers might
be sold to third parties, such as insurance companies, who could use those
data to refuse healthcare coverage.
Hackers are another major concern, as providing adequate security for
systems accessed by a range of medical personnel is problematic.
Clear frameworks for how healthcare staff and researchers use data, such
as genomics, in a way that safeguards patient confidentiality are necessary
to establish public trust and enable advances in healthcare algorithms.
Legal responsibility
Although AI promises to reduce the number of medical mishaps, when
issues occur, legal liability must be established.
If equipment can be proven to be faulty then the manufacturer is liable,
but it is often tricky to establish what went wrong during a procedure and
whether anyone, medical personnel or machine, is to blame.
For instance, there have been lawsuits against the da Vinci surgical
assistant, but the robot continues to be widely accepted.
For now, AI is used as an aide for expert decisions, and so experts remain
the liable party in most cases.
Bias
Non-discrimination is one of the fundamental values of the EU, but
machine learning algorithms are trained on datasets that often have
proportionally less data available about minorities, and as such can be
biased.
This can mean that algorithms trained to diagnose conditions are less
likely to be accurate for ethnic minority patients; for instance, in the dataset used to
train a model for detecting skin cancer, less than 5 percent of the images
were from individuals with dark skin, presenting a risk of misdiagnosis
for people of colour.
To ensure the most accurate diagnoses are presented to people of all
ethnicities, algorithmic biases must be identified and understood.
Even with a clear understanding of model design this is a difficult task
because of the aforementioned 'black box' nature of machine learning.
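EXAMPLE (illustrative code):
One practical way to surface the kind of bias described above, even for a 'black box' model, is to compare its accuracy across patient groups in a labelled test set. The Python sketch below is hypothetical: the group names, data, and the 0.1 disparity threshold are illustrative only.

from collections import defaultdict

def accuracy_by_group(records):
    # records: iterable of (group, true_label, predicted_label)
    totals, correct = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / totals[g] for g in totals}

test_results = [
    ("light_skin", 1, 1), ("light_skin", 0, 0), ("light_skin", 1, 1),
    ("dark_skin", 1, 0), ("dark_skin", 0, 0), ("dark_skin", 1, 0),
]
per_group = accuracy_by_group(test_results)
print(per_group)  # e.g. {'light_skin': 1.0, 'dark_skin': 0.33}

# A large gap between groups signals the need for more representative data
# or re-weighting before the model is used for diagnosis.
if max(per_group.values()) - min(per_group.values()) > 0.1:
    print("WARNING: accuracy disparity across skin-tone groups")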
However, various codes of conduct and initiatives have been introduced
to spot biases earlier.
For instance, the Partnership on AI, an ethics-focused industry group,
was launched by Google, Facebook, Amazon, IBM and Microsoft –
although, worryingly, this board is not very diverse.
Equality of access
Digital health technologies, such as fitness trackers and insulin pumps,
provide patients with the opportunity to actively participate in their own
healthcare.
Some hope that these technologies will help to redress health inequalities
caused by poor education, unemployment, and so on. However, there is a
risk that individuals who cannot afford the necessary technologies or do
not have the required 'digital literacy' will be excluded, so reinforcing
existing health inequalities.
Quality of care
'There is remarkable potential for digital healthcare technologies to
improve accuracy of diagnoses and treatments, the efficiency of care, and
workflow for healthcare professionals'.
If introduced with careful thought and guidelines, companion and care
robots, for example, could improve the lives of the elderly, reducing their
dependence, and creating more opportunities for social interaction.
EXAMPLE:
Imagine a home-care robot that could: remind you to take your
medications; fetch items for you if you are too tired or are already in bed;
perform simple cleaning tasks; and help you stay in contact with your
family, friends and healthcare provider via video link.
Human interaction is particularly important for older people, as research
suggests that an extensive social network offers protection against
dementia.
At present, robots are far from being real companions. Although they can
interact with people, and even show simulated emotions, their
conversational ability is still extremely limited, and they are no
replacement for human love and attention.
Carebots
A number of 'carebots' are designed for social interactions and are often
touted to provide an emotional therapeutic role.
For instance, care homes have found that a robotic seal pup's animal-like
interactions with residents brighten their mood, decrease anxiety and
actually increase the sociability of residents with their human caregivers.
Deception
However, the line between reality and imagination is blurred for dementia
patients, so is it dishonest to introduce a robot as a pet and encourage a
social-emotional involvement? And if so, is it morally justifiable?
Companion robots and robotic pets could alleviate loneliness amongst
older people, but this would require them believing, in some way, that a
robot is a sentient being who cares about them and has feelings — a
fundamental deception.
EXAMPLE:
'The fact that our parents, grandparents and children might say 'I love
you' to a robot who will say 'I love you' in return, does not feel
completely comfortable; it raises questions about the kind of authenticity
we require of our technology'.
For an individual to benefit from owning a robot pet, they must
continually delude themselves about the real nature of their relation with
the animal. What's more, encouraging elderly people to interact with
robot toys has the effect of infantilising them.
Autonomy
It's important that healthcare robots actually benefit the patients
themselves, and are not just designed to reduce the care burden on the rest
of society — especially in the case of care and companion AI.
Robots could empower disabled and older people and increase their
independence; in fact, given the choice, some might prefer robotic over
human assistance for certain intimate tasks such as toileting or bathing.
Robots could be used to help elderly people live in their own homes for
longer, giving them greater freedom and autonomy. However, how much
control, or autonomy, should a person be allowed if their mental
capability is in question? If a patient asked a robot to throw them off the
balcony, should the robot carry out that command?
Liberty and privacy
As with many areas of AI technology, the privacy and dignity of users
need to be carefully considered when designing healthcare service and
companion robots.
Working in people's homes means that robots will be privy to private
moments such as bathing and dressing; if these moments are recorded,
who should have access to the information, and how long should
recordings be kept?
The issue becomes more complicated if an elderly person's mental state
deteriorates and they become confused — someone with Alzheimer's
could forget that a robot was monitoring them, and could perform acts or
say things thinking that they are in the privacy of their own home.
Home-care robots need to be able to balance their user's privacy and
nursing needs, for example by knocking and awaiting an invitation before
entering a patient's room, except in a medical emergency.
To ensure their charge's safety, robots might sometimes need to act as
supervisors, restricting their freedoms.
EXAMPLE
A robot could be trained to intervene if the cooker was left on, or the bath
was overflowing.
Robots might even need to restrain elderly people from carrying out
potentially dangerous actions, such as climbing up on a chair to get
something from a cupboard.
Smart homes with sensors could be used to detect that a person is
attempting to leave their room, and lock the door, or call staff — but in so
doing the elderly person would be imprisoned.
Moral agency
Robots do not have the capacity for ethical reflection or a moral basis for
decision-making, and thus humans must currently hold ultimate control
over any decision-making.
EXAMPLE:
An example of ethical reasoning in a robot can be found in the 2004
dystopian film 'I, Robot', where Will Smith's character disagreed with
how the robots of the fictional time used cold logic to save his life over
that of a child.
If more automated healthcare is pursued, then the question of moral
agency will require closer attention.
Ethical reasoning is being built into robots, but moral responsibility is
about more than the application of ethics — and it is unclear whether
robots of the future will be able to handle the complex moral issues in
healthcare.
Trust
'Psychology research shows people mistrust those who make moral decisions by
calculating costs and benefits — like computers do'.
1. Firstly, doctors are explicitly certified and licensed to practice medicine,
and their license indicates that they have specific skills, knowledge, and
values such as 'do no harm'.
If a robot replaces a doctor for a particular treatment or diagnostic
task, this could potentially threaten patient-doctor trust, as the patient
now needs to know whether the system is appropriately approved or
'licensed' for the functions it performs.
2. Secondly, patients trust doctors because they view them as paragons of
expertise. If doctors were seen as 'mere users' of the AI, we would expect
their role to be downgraded in the public's eye, undermining trust.
3. Thirdly, a patient's experiences with their doctor are a significant driver
of trust. If a patient has an open line of communication with their doctor,
and engages in conversation about care and treatment, then the patient
will trust the doctor.
Employment replacement
As in other industries, there is a fear that emerging technologies may
threaten employment; for instance, there are carebots now available
that can perform up to a third of nurses' work.
Despite these fears, the NHS' Topol Review concluded that 'these
technologies will not replace healthcare professionals but will
enhance them ('augment them'), giving them more time to care for
patients'.
The review also outlined how the UK's NHS will nurture a learning
environment to ensure digitally capable employees.
CASE STUDY: AUTONOMOUS VEHICLES (AVs)
EXAMPLE:
Say that a car travels around a corner where a group of school children
are playing; there is not enough time to stop, and the only way the car can
avoid hitting the children is to swerve into a brick wall — endangering
the passenger. Whose safety should the car prioritise: the children's or
the passenger's?
Processes and technologies for accident investigation
AVs are complex systems that often rely on advanced machine learning
technologies. Several serious accidents have already occurred, including a
number of fatalities involving level 2 AVs:
EXAMPLE:
1. In January 2016, 23-year-old Gao Yaning died when his Tesla Model S
crashed into the back of a road-sweeping truck on a highway in Hebei,
China. The family believe Autopilot was engaged when the accident
occurred and accuse Tesla of exaggerating the system's capabilities. Tesla
state that the damage to the vehicle made it impossible to determine
whether Autopilot was engaged and, if so, whether it malfunctioned. A
civil case into the crash is ongoing, with a third-party appraiser
reviewing data from the vehicle.
2. In May 2016, 40-year-old Joshua Brown died when his Tesla Model S
collided with a truck while Autopilot was engaged in Florida, USA. An
investigation by the National Highway Traffic Safety Administration
found that the driver, and not Tesla, was at fault. However, the National
Transportation Safety Board later determined that both Autopilot's design
and over-reliance by the motorist on Tesla's driving aids were to blame.
Maximizing utility:
This aspect focuses on identifying the action that generates the most overall
benefit, even if it involves some level of harm to a few individuals.
Minimizing harm:
This approach prioritizes avoiding negative consequences as much as possible,
even if it means sacrificing some potential positive outcomes.
Challenges in applying utilitarianism to AI:
Quantifying utility:
Determining the "greatest good" can be complex, especially when dealing with
diverse human values and situations.
Unforeseen consequences:
AI systems may produce unintended negative outcomes, making it difficult to
accurately predict the full impact of a decision.
Individual rights:
A purely utilitarian approach may sometimes overlook the rights and well-being
of individual people, especially if they are in a minority group.
Example scenarios:
Self-driving car dilemma:
A utilitarian AI might choose to hit a single pedestrian to avoid a larger accident
with multiple casualties, while a "minimizing harm" approach would prioritize
avoiding any casualties even if it means causing a smaller accident.
Medical diagnosis:
An AI designed to maximize utility might prioritize diagnosing a larger number
of patients with a common illness, even if it means missing a few rare but serious
conditions, whereas a "minimizing harm" approach would prioritize identifying
all potential serious diseases, even if it means missing some less severe cases.
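EXAMPLE (illustrative code):
The toy Python sketch below makes the contrast concrete. One way to formalise the utilitarian rule is to pick the action with the lowest expected harm (highest expected utility); one way to formalise a 'minimizing harm' rule is to pick the action whose worst case is least bad. The actions, probabilities, and harm scores are invented, and are chosen so that the two rules disagree.

ACTIONS = {
    # action: list of (probability, harm) outcomes -- numbers are invented
    "swerve":      [(1.0, 3)],              # a certain, minor accident
    "stay_course": [(0.9, 0), (0.1, 20)],   # probably fine, small chance of severe harm
}

def expected_harm(outcomes):
    return sum(p * harm for p, harm in outcomes)

def worst_case_harm(outcomes):
    return max(harm for _, harm in outcomes)

# Utilitarian rule: minimize expected harm (i.e. maximize expected utility).
print(min(ACTIONS, key=lambda a: expected_harm(ACTIONS[a])))    # 'stay_course' (2.0 vs 3.0)

# Harm-minimizing rule: minimize the worst-case outcome.
print(min(ACTIONS, key=lambda a: worst_case_harm(ACTIONS[a])))  # 'swerve' (3 vs 20)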
Utilitarianism is a moral theory that advocates for actions that maximize overall happiness
or utility and minimize harm. When applied to AI, utilitarianism can help guide decisions
about how to develop, deploy, and regulate AI systems to ensure they produce the greatest
good for the greatest number of people while minimizing potential harms.
The following outlines how utilitarianism and AI can be linked together, particularly in
the context of maximizing utility and minimizing harm:
1. Utilitarianism: An Overview
Utilitarianism is a form of consequentialism, meaning that it judges actions based on their
outcomes or consequences. The central tenet is the greatest happiness principle, which
holds that the best action is the one that produces the greatest good (or utility) for the most
people. The basic idea is to maximize overall benefit while minimizing harm.
In the context of AI, this framework can guide ethical decision-making, focusing on ensuring
that AI systems provide more benefits than harms to individuals and society as a whole.
When applying utilitarianism to AI systems, the primary goal is to design AI systems that
maximize benefits and utility for the largest number of stakeholders.
While the goal of maximizing utility and minimizing harm sounds clear, applying
utilitarianism to AI can be challenging for several reasons, including the difficulty of
quantifying utility, unforeseen consequences, and the risk of overlooking individual rights.
Governments and international bodies may adopt utilitarian frameworks to regulate AI,
ensuring that AI systems are developed and deployed in ways that maximize societal benefits
while minimizing harm. This might involve:
Ethical Guidelines and Standards: Establishing clear ethical guidelines, such as the
IEEE 7000 series, which promotes integrating ethics into AI system design.
Transparency and Accountability: Ensuring that AI systems are transparent and
accountable, so their utility can be accurately assessed and harm can be mitigated.
Ongoing Monitoring and Adjustment: Given that AI systems evolve over time,
continuous monitoring is crucial to ensuring that they continue to maximize utility
and minimize harm in changing societal contexts.