Class 10 Unit 1 Notes

The document outlines the AI Project Cycle, which consists of six stages: Problem Scoping, Data Acquisition, Data Visualization & Exploration, Model Selection & Building, Evaluation, and Deployment. It also categorizes AI models into three domains—Statistical Data, Computer Vision, and Natural Language Processing—while emphasizing the importance of ethical frameworks to guide AI development and deployment. Ethical frameworks are classified into sector-based and value-based types, with principles like respect for autonomy and justice being crucial for ensuring AI systems operate fairly and effectively.

Uploaded by mohitsharma.u
Copyright
© All Rights Reserved

Unit 1: Revisiting AI Project Cycle & Ethical Frameworks for AI (Notes)

AI Project Cycle

 The AI Project Cycle is a cyclical process with six main stages:


o Problem Scoping: Defining the specific problem to be solved,
identifying relevant parameters, and clarifying the goals and
objectives of the AI solution.
o Data Acquisition: Collecting data from reliable and authentic
sources that is sufficient and relevant to the problem parameters.
Ensuring data quality is also important.
o Data Visualization & Exploration: Transforming raw data into
visual representations like graphs, charts, flowcharts, and maps
to identify patterns, making interpretation more intuitive and
accessible.
o Model Selection & Building: Researching potential AI models
suitable for the problem, testing different models to determine
the most efficient one, selecting the most effective model as the
foundation, and developing algorithms around it.
o Evaluation: Testing the model with new data not used during
training, analysing the results to identify strengths and
weaknesses, and refining and improving the model based on these
insights.
o Deployment: Integrating the AI solution into real-world
environments, ensuring its successful operation in practical
settings, and delivering value and impact to users and
stakeholders.
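The six stages above can be traced end-to-end in a small Python sketch. Everything in it is invented for illustration (a toy pass/fail dataset and a one-number threshold "model"); it only shows how the stages connect, not how a real AI project is built:

```python
# A toy walk-through of the six AI Project Cycle stages.
# The dataset and the one-number "model" are invented for illustration.

# 1. Problem Scoping: predict whether a student passes (1) or fails (0)
#    from hours studied per week.
# 2. Data Acquisition: a small hand-made dataset of (hours, passed) pairs.
data = [(1, 0), (2, 0), (3, 0), (4, 1), (5, 1), (6, 1), (7, 1), (8, 1)]

# 3. Data Visualization & Exploration: inspect the pattern before modelling.
pass_hours = [h for h, p in data if p == 1]
fail_hours = [h for h, p in data if p == 0]
avg_pass = sum(pass_hours) / len(pass_hours)   # 6.0
avg_fail = sum(fail_hours) / len(fail_hours)   # 2.0

# 4. Model Selection & Building: a threshold halfway between the averages.
threshold = (avg_pass + avg_fail) / 2          # 4.0

def model(hours):
    return 1 if hours >= threshold else 0

# 5. Evaluation: test on new data that was not used to build the model.
test_data = [(2, 0), (6, 1), (9, 1)]
correct = sum(1 for h, p in test_data if model(h) == p)
accuracy = correct / len(test_data)

# 6. Deployment: the model is now a reusable function for fresh inputs.
print(f"threshold={threshold}, test accuracy={accuracy:.0%}")
```

Note how stage 5 deliberately uses data the model never saw during building; that separation is what makes the accuracy figure meaningful.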
Introduction to AI Domains

 AI becomes intelligent based on the training it receives from datasets.


 AI models can be broadly categorised into three domains based on the
type of data fed into them:
o Statistical Data:
 Related to data systems and processes in which the system
collects large amounts of data, maintains datasets, and derives
meaning from them.
 The extracted information can be used for decision-making.
 Example: Price Comparison Websites.
 Credit Scoring Systems
 Process financial history, payment patterns, and
demographic data
 Calculate risk scores for loan approvals
 Examples: FICO scoring models, bank lending
algorithms
o Computer Vision (CV):
 The capability of a machine to acquire and analyse visual
information and make predictions or decisions from it.
 The process involves image acquisition, screening, analysis,
identification, and information extraction.
 Input can be photographs, videos, and pictures from various
sensors.
 Computer vision projects translate digital visual data into
computer-readable language to aid decision-making.
 The main objective is to teach machines to collect
information from pixels.
 Examples: Agricultural Monitoring (crop monitoring, pest
detection, yield estimation using drones), Surveillance
Systems (detecting suspicious activities, tracking
individuals/vehicles).
o Natural Language Processing (NLP):
 Deals with the interaction between computers and humans
using natural language (spoken and written by people).
 Attempts to extract information from spoken and written
words using algorithms.
 The ultimate objective is to read, decipher, understand, and
make sense of human languages in a valuable manner.
 Examples: Email filters (spam detection), Machine
Translation (like Google Translate).
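The credit-scoring example under Statistical Data can be sketched as a simple weighted-sum rule. The features, weights, and approval cutoff below are all made up for illustration; real systems such as FICO models are far more complex:

```python
# Hypothetical weighted-sum credit score: every weight and the cutoff
# is invented for illustration, not taken from any real scoring system.
def credit_score(on_time_ratio, debt_ratio, years_history):
    # More on-time payments and a longer history raise the score;
    # a higher existing debt load lowers it.
    return round(300 + 400 * on_time_ratio
                     - 200 * debt_ratio
                     + 10 * years_history)

def approve_loan(score, cutoff=600):
    return score >= cutoff

score = credit_score(on_time_ratio=0.95, debt_ratio=0.30, years_history=8)
print(score, approve_loan(score))  # 700 True
```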
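The Computer Vision idea of collecting information from pixels can be shown with a toy "image": a grid of brightness numbers on which even simple arithmetic extracts information (the 5x5 grid below is made up):

```python
# A grayscale "image" is just a grid of brightness values
# (0 = black, 255 = white). This 5x5 grid is invented.
image = [
    [ 10,  12,  11,  10,  13],
    [ 11, 200, 210, 205,  12],
    [ 10, 198, 220, 201,  11],
    [ 12, 202, 208, 199,  10],
    [ 13,  11,  10,  12,  11],
]

# Extract information from the pixels: count how many are brighter
# than a threshold. A bright cluster on a dark background could
# indicate an object worth a closer look.
bright = sum(1 for row in image for px in row if px > 128)
total = len(image) * len(image[0])
print(f"{bright} of {total} pixels are bright")  # 9 of 25 pixels are bright
```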
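The email-filter example under NLP can be mimicked with a toy keyword rule. Real spam filters use statistical models trained on large datasets; the hand-picked keywords below only illustrate the idea of extracting a signal from words:

```python
# A toy keyword-based spam filter. The keyword list and threshold
# are invented; real filters learn such signals from data.
SPAM_WORDS = {"lottery", "winner", "free", "prize", "urgent"}

def is_spam(message, threshold=2):
    # Count how many spam keywords appear in the message.
    words = set(message.lower().split())
    hits = len(words & SPAM_WORDS)
    return hits >= threshold

print(is_spam("You are a lottery winner claim your free prize"))  # True
print(is_spam("Meeting moved to 3 pm tomorrow"))                  # False
```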
Ethical Frameworks for AI

 Frameworks: A set of steps that help in solving problems by providing a
structured approach, ensuring all relevant factors are considered, and
facilitating communication and consistency.
 Ethical Frameworks: Frameworks that help ensure choices made do not
cause unintended harm by providing a systematic approach to
navigating complex moral dilemmas and promoting positive outcomes.
 Why we need Ethical Frameworks for AI:
o AI is used as a decision-making/influencing tool, so we need to
ensure it makes morally acceptable recommendations.
o Ethical frameworks help avoid unintended outcomes, even before
they occur, especially those resulting from bias in AI solutions
(e.g., biased hiring algorithms).
 Factors which could influence decision-making (knowingly or
unknowingly): Culture, Value of humans, Value of non-humans, Religion,
Intuition & Values, Identity of the charity recipient, Location of the
recipient, Bias towards relatives, Availability of information.

 Types of Ethical Frameworks: Classified into sector-based and
value-based frameworks.
o Sector-based Frameworks: Tailored to specific sectors or
industries.
 Example: Bioethics in healthcare, focusing on patient
privacy, data security, and ethical use of AI in medical
decision-making. Can also apply to finance, education,
transportation, agriculture, governance, and law
enforcement.
o Value-based Frameworks: Focus on fundamental ethical
principles and values guiding decision-making. Classified into three
categories:
 Rights-based: Prioritizes the protection of human rights and
dignity, valuing human life, and emphasizing individual
autonomy and freedoms. In AI, this ensures systems do not
violate human rights or discriminate.
 Utility-based: Evaluates actions based on maximizing
overall good and minimizing harm, aiming for the greatest
benefit for the greatest number. In AI, this involves
weighing potential benefits against risks like job
displacement or privacy concerns.

 Virtue-based: Focuses on the character and intentions of
individuals involved in decision-making, considering
alignment with virtuous principles like honesty, compassion,
and integrity. In AI, this involves whether developers, users,
and regulators uphold ethical values.

 Bioethics: An ethical framework used in healthcare and life sciences,
dealing with ethical issues related to health, medicine, and biological
sciences, ensuring AI applications in healthcare adhere to ethical
standards.
o Principles of bioethics:
 Respect for Autonomy: Enabling users to be fully aware of
decision-making. For AI, users should know how the
algorithm functions, the data it was trained on should be
reproducible and accessible, and model predictions and
data labels should be released if performance concerns
arise.
 Do not harm (Non-maleficence): Avoiding harm to anyone
(human or non-human) at all costs, choosing the least
harmful path if no harmless option exists. AI algorithms
must be trained on datasets that equitably reduce harm for
all groups, preventing inappropriate resource allocation.
 Ensure maximum benefit for all (Beneficence): Actions
must focus on providing the maximum benefit possible,
going beyond avoiding harm. AI solutions should be held to
clinical practice standards and use unbiased training data
reflecting the needs of all populations.
 Give justice (Justice): All benefits and burdens must be
distributed fairly across people irrespective of their
background. Solution development requires understanding
social structures causing biases, and the solution needs to
be aware of social determinants and work against
inequities.

Questions and Answers


1. What are the six stages of the AI Project Cycle?
The AI Project Cycle consists of six key stages: Problem Scoping, where the
specific problem is defined, relevant parameters identified, and project goals
clarified; Data Acquisition, involving the collection of data from reliable and
authentic sources; Data Visualisation & Exploration, where raw data is
transformed into visual formats to identify patterns; Model Selection & Building,
which includes researching, testing, and selecting the most effective AI model;
Evaluation, where the model is tested with new data to assess its performance;
and Deployment, which involves integrating the AI solution into real-world
environments.

2. What are the three main domains of AI based on the type of data used
for training?
Based on the type of data fed into AI models, there are three main domains:
Statistical Data, which involves collecting and analysing large datasets to derive
meaning and inform decisions, exemplified by price comparison websites;
Computer Vision (CV), which focuses on enabling machines to acquire and
analyse visual information, with applications in areas like agricultural monitoring
and surveillance systems; and Natural Language Processing (NLP), which deals
with the interaction between computers and humans using natural language,
with applications such as email filters and machine translation.

3. What is the purpose of ethical frameworks in the context of Artificial
Intelligence?
Ethical frameworks for AI are structured approaches designed to help ensure
that the development and deployment of AI systems align with moral principles
and values. They provide a step-by-step guide for navigating complex ethical
dilemmas, helping to distinguish right from wrong and prevent unintended
harm. By using ethical frameworks, developers and organisations can make well-
informed decisions, avoid biased outcomes, and promote positive impacts for
all stakeholders.

4. What are the two main categories of ethical frameworks for AI?
Ethical frameworks for AI can be broadly categorised into two main types:
Sector-based frameworks and Value-based frameworks. Sector-based
frameworks are tailored to specific industries or domains, such as Bioethics in
healthcare, which addresses ethical considerations related to patient privacy
and the use of AI in medical decisions. Value-based frameworks, on the other
hand, focus on fundamental ethical principles that guide decision-making, and
include categories like Rights-based, Utility-based, and Virtue-based
frameworks.

5. Can you explain the key principles of a value-based ethical framework
for AI?
Value-based ethical frameworks are centred on fundamental moral
philosophies. Rights-based frameworks prioritise the protection of human rights
and dignity. Utility-based frameworks evaluate actions based on their ability to
maximise overall benefit and minimise harm. Virtue-based frameworks focus on
the character and intentions of individuals involved in the AI lifecycle,
emphasising virtuous principles like honesty and integrity.

6. What is Bioethics and how does it serve as a sector-based ethical
framework for AI in healthcare?
Bioethics is a sector-based ethical framework specifically used in healthcare and
life sciences. It addresses ethical issues arising from advancements in biology
and medicine, and in the context of AI, it ensures that AI applications in
healthcare adhere to ethical standards. Key principles of bioethics include
Respect for Autonomy (ensuring users are aware of decision-making processes),
Non-maleficence (avoiding harm), Beneficence (maximising benefit), and Justice
(ensuring fair distribution of benefits and burdens).

7. Why is it important to consider factors like culture and values when
developing AI systems?
Factors like culture, religion, personal values, and societal biases can
significantly influence ethical decision-making. When developing AI systems, it
is crucial to consider these factors to avoid unintended consequences and
ensure that AI makes morally acceptable and fair recommendations. Awareness
of these influences helps in uncovering potential biases in data and algorithms,
leading to the development of more equitable and ethically sound AI solutions.
Case Study
A company aimed to support hospitals in optimizing patient care by creating an
AI algorithm designed to identify individuals at high risk. The objective was to
provide healthcare providers with valuable insights to allocate resources
effectively and ensure those most in need receive appropriate attention.
However, potential unintended consequences led to problems in the
model: the algorithm could inadvertently exacerbate existing biases or
inaccuracies in the data, potentially leading to misclassification of
patients or overlooked critical cases. Addressing concerns about the
algorithm's accuracy and reliability becomes paramount, as any flaws in
its design or training data could compromise patient care and outcomes.
The problem it caused:
Patients from the Western region of a particular area, who were categorized at
the same risk level by the algorithm, generally exhibited more severe health
conditions compared to patients from other regions.
Why the problem happened:
• The algorithm utilized was trained on healthcare expense data as a
measure for health metrics rather than actual physical illness.

• This algorithm was created in the United States, where less money is
spent on healthcare for patients from the Western region than on
healthcare for other groups of patients.
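The failure described above, using healthcare spending as a proxy for actual illness, can be reproduced with a few invented numbers: two equally ill patients receive very different risk scores because less has historically been spent on one group's care:

```python
# Hypothetical numbers reproducing the case study's flaw: the model
# scores "risk" from past healthcare spending instead of actual illness.
patients = [
    # (name, true_severity 0-10, past_spending, group)
    ("P1", 8, 9000, "A"),
    ("P2", 8, 4500, "B"),  # equally ill, but half the historical spending
    ("P3", 3, 3000, "A"),
]

def risk_score(spending):
    # Flawed proxy: treats money spent as a stand-in for how ill someone is.
    return spending / 1000

scores = {name: risk_score(spend) for name, severity, spend, group in patients}
# P1 and P2 have the same true severity, yet the model ranks P2 far lower,
# so group B patients would be under-prioritised for care.
print(scores)  # {'P1': 9.0, 'P2': 4.5, 'P3': 3.0}
```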

8. How can the principles of Bioethics be applied to avoid unintended
negative consequences in AI for healthcare, as illustrated in the case
study?
The case study highlighted how an AI algorithm trained on healthcare expense
data inadvertently disadvantaged patients from a specific region. By applying
the principles of Bioethics, such negative consequences could be avoided.
Respect for Autonomy would involve making the data and functioning of the
algorithm transparent. Non-maleficence would necessitate training the
algorithm on datasets that equitably reduce harm for all patient groups.
Beneficence would require striving for maximum benefit for all patients, using
unbiased data reflecting diverse healthcare needs. Justice would demand a deep
understanding of social structures causing disparities and actively working
against them to ensure fair distribution of healthcare resources.
