Chapter 3 (Ethics in AI) (Part 1)

This document discusses ethics in artificial intelligence, specifically concerns related to data and to the implications of AI technology. It defines ethics and AI ethics, noting that AI systems can reflect the biases present in the real-world data used to train them. For example, a recruitment system developed by Amazon and the COMPAS risk assessment tool investigated by ProPublica demonstrated gender and racial bias, respectively. The document also discusses the problem of inclusion that can result when certain groups are left out of, or discriminated against by, AI systems because of biases learned from training data.


ARTIFICIAL INTELLIGENCE

CHAPTER 3 (ETHICS IN AI)


PART 1
AI Ethics

➢ Ethics is defined as the moral principles governing the behaviour or actions of an individual or a group.
ARTIFICIAL INTELLIGENCE (AI) ETHICS
 Ethical concerns are among the most critical problem areas to have emerged
from the development of Artificial Intelligence and related technology. In the
simplest possible words, we can define ‘ethics’ as a system of moral principles
that governs an individual’s behaviour or actions. Ethics is concerned with
what is good for individuals and societies. Similarly, ethical concerns are the
issues, situations or questions that cause individuals, societies and/or
organisations to evaluate different choices in terms of what is right (ethical)
and what is wrong (unethical).
The term ‘AI Ethics’ is a blanket term used to cover all the ethical concerns
and issues related to AI systems. AI Ethics is normally divided into two
categories:

AI ETHICS

➢ Concerns related to data

➢ Concerns related to the implications of the AI technology itself


ETHICAL CONCERNS OF AI RELATED TO DATA
(BIAS AND INCLUSION CONCERNS)

BIAS IN REAL WORLD DATA:


Bias is a phenomenon that occurs when an algorithm produces results that are
discriminatory because of prejudices built into the data or the algorithm’s design.
AI bias is the underlying prejudice in the data used to create AI algorithms, which
can ultimately result in discrimination and other social consequences.
The problem here is that an AI system learns from the real-world data fed into it.
This means that AI systems can reinforce the biases present in that data. For
example, a computer system trained on data from the last 200 years might find that
more women were involved in certain jobs, or that a higher percentage of successful
businesses were established by men, and conclude that a specific gender is better
equipped for handling certain jobs (gender bias).
Understanding, or even detecting, such biases is not easy because many AI systems
act as black boxes: the reasoning behind their decisions is difficult, and in some
cases impossible, to trace. Often, the programmers of AI systems themselves cannot
explain the logic behind the decisions the systems take.
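The mechanism described above, a model reproducing the bias in its training data, can be sketched with a toy example in Python. The dataset, the numbers and the scoring function are entirely hypothetical and made up for illustration:

```python
# Hypothetical historical hiring records as (gender, hired) pairs.
# The imbalance reflects past human bias, not real ability.
history = [("M", True)] * 80 + [("M", False)] * 20 \
        + [("F", True)] * 20 + [("F", False)] * 80

def hire_rate(records, gender):
    """P(hired | gender) as estimated from the historical data."""
    outcomes = [hired for g, hired in records if g == gender]
    return sum(outcomes) / len(outcomes)

def score(records, gender):
    """A naive 'model' that scores a candidate by the historical
    hire rate of their group -- it simply reproduces the bias
    present in its training data."""
    return hire_rate(records, gender)

print(score(history, "M"))  # 0.8
print(score(history, "F"))  # 0.2
```

The model never sees the word "bias"; it just summarises the data it was given, and the data carries the prejudice.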
Two famous cases of AI Bias

Amazon Recruitment:
In 2014, scientists at Amazon developed a recruitment AI to select software
engineers the company might consider for hiring. Amazon quickly found that
the system had picked up gender bias from the data used to train it and, as a
result, was discriminating against women. Amazon was forced to abandon the
AI recruitment system in 2017.

COMPAS (the ProPublica investigation):
In 2016, journalists at ProPublica investigated COMPAS, a risk assessment
system used in the American judiciary to predict the chances of repeat
offences by criminals. The idea was to help judges make better judgements
with this additional information. The system had picked up the racial bias
that has existed in America for hundreds of years and was biased against
the Black community.
Since the beginning of computer programming, programmers have understood
the classic computer maxim: Garbage in, Garbage out, i.e., the output a
computer produces depends on the input it is given. Feed it useless or
irrelevant inputs and the outputs will also be useless and irrelevant.

This also holds true for AI systems, but in the case of AI the problem is
further complicated because AI systems learn from the data fed into them,
and this real-world data is rarely free from bias! AI systems trained on
biased data also learn the bias from that data.
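The garbage-in, garbage-out maxim can be shown with a minimal sketch. The "model" below is hypothetical and deliberately simple: it just memorises the majority label in its training data, so corrupted labels directly corrupt its output:

```python
def train_majority(labels):
    """Toy 'model': learns to predict the most common label
    seen in its training data."""
    return max(set(labels), key=labels.count)

clean = ["spam"] * 70 + ["ham"] * 30    # mostly correct labels
garbage = ["ham"] * 70 + ["spam"] * 30  # same items, labels corrupted

print(train_majority(clean))    # spam
print(train_majority(garbage))  # ham -- garbage in, garbage out
```

The algorithm is identical in both cases; only the quality of the data changes, and the output changes with it.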
PROBLEM OF INCLUSION:
Inclusion in AI means that an AI system serves, and fairly takes into account,
every group of people it affects.
AI systems trained on biased real-world data create a problem of inclusion, i.e.
the problem that some people are left out of, or treated unfairly by, an AI
decision-making system. Consider the AI system used by Amazon for recruitment:
it created a situation in which many eligible women were left out of
consideration. This is known as the problem of inclusion.

Click on the link given below to watch the video on AI Ethics and the problem of
inclusion.

https://youtu.be/qPvMKkN8ES8
