AI PROJECT
Academic year-2024-25
ARTIFICIAL INTELLIGENCE
PROJECT ON
AI PROJECT CYCLE
Student’s profile
NAME: MD. RAIYAN
CLASS: X A
ROLL: 19
ADM NO: 10446
SUBJECT: ARTIFICIAL INTELLIGENCE
TEACHER IN CHARGE: MR. WARIS JAMAL
GYAN NIKETAN SCHOOL
TEACHER’S SIGNATURE-
Acknowledgement
In the successful accomplishment of this project, many people have bestowed upon me their blessings and heartfelt support.
Contents
1. Introduction
2. Different stages of the AI Project Cycle
3. Problem Scoping
4. Data Acquisition
5. Data Exploration
6. Modelling
7. Evaluation
8. Bibliography
AI PROJECT CYCLE
INTRODUCTION:
The AI Project Cycle is a structured framework that outlines
the different stages of creating an AI system.
It is a systematic, step-by-step process for developing
and implementing AI projects.
Each stage involves careful planning and execution to
design and deploy AI solutions.
1. PROBLEM SCOPING
2. DATA ACQUISITION
3. DATA EXPLORATION
4. MODELLING
5. EVALUATION
Problem scoping
It is the stage where we set clear goals and outline the
objectives of the AI project.
It includes precisely outlining the issues, defining them
explicitly, identifying their causes, and developing a plan to
fix them.
SDGs
Sustainable Development Goals (SDGs), or Global Goals, are
17 goals that were agreed upon by world leaders from 193
countries in 2015 at the United Nations Headquarters in New
York. They aim to make the world a better place by 2030 by
addressing issues like poverty, hunger, inequality, and
climate change.
GOAL 1: NO POVERTY
GOAL 2: ZERO HUNGER
GOAL 3: GOOD HEALTH AND WELL-BEING
GOAL 4: QUALITY EDUCATION
GOAL 5: GENDER EQUALITY
GOAL 6: CLEAN WATER AND SANITATION
GOAL 7: AFFORDABLE AND CLEAN ENERGY
GOAL 8: DECENT WORK AND ECONOMIC GROWTH
GOAL 9: INDUSTRY, INNOVATION AND INFRASTRUCTURE
GOAL 10: REDUCED INEQUALITIES
GOAL 11: SUSTAINABLE CITIES AND COMMUNITIES
GOAL 12: RESPONSIBLE CONSUMPTION AND PRODUCTION
GOAL 13: CLIMATE ACTION
GOAL 14: LIFE BELOW WATER
GOAL 15: LIFE ON LAND
GOAL 16: PEACE, JUSTICE AND STRONG INSTITUTIONS
GOAL 17: PARTNERSHIPS FOR THE GOALS
Data features
They describe the type of information that will be collected in
response to the problem statement.
Sources of data
Surveys
Web scraping
Sensors
Cameras
Observations
APIs (Application Programming Interfaces)
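Once data is collected from sources like these, it usually arrives in a structured form such as a CSV file. A minimal sketch of reading such data, using made-up survey values for illustration:

```python
import csv
import io

# Hypothetical survey responses; in a real project this text would come
# from a file, a sensor log, or an API response.
raw = """name,age,hours_online
Asha,15,3
Ravi,16,5
Meena,15,2
"""

# csv.DictReader turns each row into a dictionary keyed by the header row.
rows = list(csv.DictReader(io.StringIO(raw)))
ages = [int(r["age"]) for r in rows]
print(len(rows), sum(ages) / len(ages))
```

Reading the acquired data into a program like this is the first step before the data exploration stage.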
Data exploration
This stage involves the exploration and analysis of the
collected data to interpret patterns, trends and relationships.
It helps us in the following ways:
It simplifies complex data, making it easier to understand.
It helps gain a deeper understanding of the
trends, relationships, and patterns present within the data.
It uncovers hidden relationships or anomalies (odd
behaviour).
It helps us select models for the subsequent stages of the AI
Project Cycle.
It makes it easy to communicate insights to others, even non-
technical people.
Data visualisation tools
Bullet Graphs
Histogram
Scatter plots
Tree Diagram
Flowchart
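A histogram, for example, shows how often each value appears in the data. A small sketch of the same idea in plain Python, using invented marks data and text bars instead of a chart:

```python
from collections import Counter

# Hypothetical marks data, just for illustration.
marks = [72, 85, 90, 72, 65, 85, 90, 90, 72, 65]

# Count how often each mark occurs, then print a text histogram.
counts = Counter(marks)
for mark in sorted(counts):
    print(f"{mark}: {'#' * counts[mark]}")
```

Looking at the bars makes the pattern in the numbers (which marks occur most) easy to see at a glance, which is the whole point of data visualisation.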
Modelling
The aim is to select the appropriate AI model to achieve the
goal. This model should be able to learn from the data and
make predictions.
AI Modelling refers to the development of a program or
algorithm that can be used to draw conclusions or generate
predictions based on the available data.
It is of two types:
1. Rule-based model
2. Learning-based model
Rule-based approach
It refers to AI Modelling where the relationships or patterns
in the data are defined by the developer, and outcomes are
produced based on some pre-defined conditions or rules.
The developer feeds data along with some rules to the
model. The rules are in the form: "If this happens, then do
this."
It uses labelled data.
It may require less training time.
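A minimal sketch of a rule-based model: a tiny spam checker where the rules are written by the developer rather than learned from data (the keyword list is made up for illustration):

```python
# Rules are fixed in advance by the developer, not learned from examples.
SPAM_WORDS = {"lottery", "winner", "free"}

def is_spam(message: str) -> bool:
    # Rule: "If the message contains a spam word, then mark it as spam."
    words = set(message.lower().split())
    return bool(words & SPAM_WORDS)

print(is_spam("You are a lottery winner"))   # these words match the rules
print(is_spam("Meeting at 5 pm tomorrow"))   # no rule matches
```

Notice the model can never handle a case the developer did not write a rule for, which is the main limitation of this approach.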
Learning-based approach
It refers to AI Modelling where the machine learns the
relationships or patterns in the data on its own.
Supervised learning, one form of this approach, is of two types:
1. Classification
2. Regression
CLASSIFICATION
The model learns from the labelled data and then classifies
new data on the basis of their labels to predict the result.
It works on discrete and labelled datasets, which means it is
used with data that can be grouped into specific categories.
Real-life examples: email spam detection, image recognition,
language detection, etc.
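A minimal classification sketch, written by hand with no libraries: a 1-nearest-neighbour model that learns from labelled points (invented here) and labels a new point by finding its closest example:

```python
# Labelled training data: each point has a category label (made up).
labelled = [((1, 1), "cat"), ((1, 2), "cat"), ((8, 8), "dog"), ((9, 8), "dog")]

def classify(point):
    # Predict the label of the closest labelled example (squared distance).
    def dist2(p):
        return (p[0] - point[0]) ** 2 + (p[1] - point[1]) ** 2
    nearest = min(labelled, key=lambda item: dist2(item[0]))
    return nearest[1]

print(classify((2, 1)))  # closest to the "cat" examples
print(classify((8, 9)))  # closest to the "dog" examples
```

The model places new data into one of the discrete categories it saw in the labelled data, which is exactly what classification means.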
Regression
The model predicts the behaviour of one variable depending
on the value of another variable.
It works on continuous and labelled data (data that can have
any value within a certain range) and is used to predict a
continuous numeric value.
Real-life examples: weather forecasting, stock price
prediction, economic growth, etc.
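A minimal regression sketch in pure Python: fitting a straight line by least squares to predict one variable from another. The data points are invented and chosen so the answer is easy to check:

```python
# Invented data where y = 2x exactly, so the fit should recover slope 2.
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope = covariance(x, y) / variance(x); intercept comes from the means.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(slope, intercept)        # 2.0 0.0
print(slope * 6 + intercept)   # prediction for a new value, x = 6
```

Unlike classification, the output here is a continuous number, not a category.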
Unsupervised learning
It works on unlabelled datasets, i.e., the data fed to it is
random.
It is a data-driven process where models are used to
identify relationships, patterns, and trends in the data that is
fed into them.
Its goal is to find similarities and differences between data
points and help users understand what the data is about
and the major features identified by the machine.
It learns from experience.
It is of two types:
1. Clustering
2. Dimensionality Reduction
Clustering
The model groups the dataset into different clusters based on
algorithms or rules that it generates on its own.
It works on discrete and unlabelled (random) datasets, which
means it is used with data that can be grouped into specific
categories.
Real-life examples: search engines, social network analysis, etc.
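A tiny clustering sketch: a hand-written k-means loop on 1-D data (the numbers and starting centres are made up). Note that no labels are given; the groups emerge from the data itself:

```python
# Unlabelled data that visibly falls into two groups.
data = [1.0, 1.5, 2.0, 9.0, 9.5, 10.0]
centres = [1.0, 9.0]  # initial guesses for the two cluster centres

for _ in range(5):  # a few refinement rounds are enough here
    # Step 1: assign each point to its nearest centre.
    clusters = [[], []]
    for x in data:
        nearest = min(range(2), key=lambda i: abs(x - centres[i]))
        clusters[nearest].append(x)
    # Step 2: move each centre to the mean of its assigned points.
    centres = [sum(c) / len(c) for c in clusters]

print(centres)  # the centres settle at the middle of each group
```

The model discovered the two groups on its own, which is the defining feature of unsupervised learning.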
Dimensionality reduction
It makes complex, higher-dimensional data simpler while still
making sense of it, even though this comes at the cost of
losing some information.
Real-life examples: document classification, image
compression, etc.
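A very simple illustration of the idea (not a full technique like PCA): collapsing 3-dimensional colour values (red, green, blue) down to a single greyscale number. The brightness survives, but the exact colour is lost:

```python
# Made-up pixels: pure red, pure green, and a grey.
pixels = [(255, 0, 0), (0, 255, 0), (120, 120, 120)]

def to_grey(rgb):
    # Standard luminance weights for converting RGB to greyscale.
    r, g, b = rgb
    return round(0.299 * r + 0.587 * g + 0.114 * b)

greys = [to_grey(p) for p in pixels]
print(greys)
```

Each pixel went from three numbers to one, which is exactly the trade-off described above: simpler data at the cost of some information.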
Reinforcement learning (RL)
A type of AI model that aims to maximise rewards through
trial and error is called Reinforcement Learning.
It is a type of machine learning method where an intelligent
agent, i.e., a computer program, interacts with the
environment in such a way that it can gain maximum
rewards.
It works on unlabelled data.
It is a feedback-based learning technique.
Real-life examples: recommendation systems, self-driving
cars, Google's AlphaGo, etc.
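A minimal trial-and-error sketch (a two-armed bandit, a classic toy RL setting). The hidden payout probabilities are invented; the agent is never told which choice is better and must learn it from the reward feedback alone:

```python
import random

random.seed(0)                     # fixed seed so the run is repeatable
true_payout = [0.2, 0.8]           # hidden reward chances (made up)
estimates = [0.0, 0.0]             # the agent's learned value of each choice
counts = [0, 0]

for step in range(1000):
    # Explore occasionally; otherwise exploit the best estimate so far.
    if random.random() < 0.1:
        arm = random.randrange(2)
    else:
        arm = 0 if estimates[0] > estimates[1] else 1
    reward = 1 if random.random() < true_payout[arm] else 0
    counts[arm] += 1
    # Update the running average reward for the chosen arm.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)  # the better arm ends up with the higher estimate
```

The agent was never given labels, only rewards, and it still learned which action to prefer; that is the feedback-based learning described above.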
Evaluation
It refers to systematically checking and analysing the
merits, correctness, and reliability of an AI model based on
the output produced.
It helps us find the best model that represents our data
and see how well the chosen model will work in the future.
Once a model has been developed and trained, it is tested
using a dataset called testing data, which was separated from
the acquired dataset at the Data Acquisition stage.
The efficiency and effectiveness of the model are determined
on the basis of various parameters that help assess its
overall performance.
The parameters are:
1. Accuracy
2. Precision
3. Recall
4. F1 Score
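The four parameters can be computed by hand from a model's predictions on the testing data. A small sketch with invented actual and predicted labels (1 = positive, 0 = negative):

```python
# Invented testing-data labels and a model's predictions on them.
actual    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predicted = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]

# Confusion-matrix counts: true/false positives and negatives.
tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)

accuracy  = (tp + tn) / len(actual)      # fraction of correct predictions
precision = tp / (tp + fp)               # how many flagged positives were real
recall    = tp / (tp + fn)               # how many real positives were found
f1 = 2 * precision * recall / (precision + recall)  # balance of the two
print(accuracy, precision, recall, f1)
```

Accuracy alone can be misleading when one class is rare, which is why precision, recall and the F1 score are checked alongside it.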
BIBLIOGRAPHY
https://en.wikipedia.org
https://intellipaat.com
https://m.youtube.com
https://chat.openai.com