IAI-Unit-1(Part-1)
AI is one of the most fascinating and universal fields of computer science, and it has great scope in the future. AI aims to make a machine work like a human.
Artificial Intelligence is one of the booming technologies of computer science, ready to create a new revolution in the world by making intelligent machines. Artificial Intelligence is now all around us. It is currently applied in a variety of areas, ranging from general to specific, such as self-driving cars, playing chess, proving theorems, playing music, painting, etc.
Artificial Intelligence is composed of two words, Artificial and Intelligence, where Artificial means "man-made" and Intelligence means "thinking power"; hence AI means "a man-made thinking power."
So, we can define AI as:
"It is a branch of computer science by which we can create intelligent machines which can behave like a human, think like humans, and are able to make decisions."
With Artificial Intelligence you do not need to preprogram a machine to do some work; instead, you can create a machine with programmed algorithms which can work with its own intelligence, and that is the awesomeness of AI.
o With the help of AI, you can create software or devices which can solve real-world problems easily and accurately, in areas such as health, marketing, traffic, etc.
o With the help of AI, you can create your personal virtual assistant, such as Cortana, Google Assistant, Siri, etc.
o With the help of AI, you can build robots which can work in environments where human survival can be at risk.
Intelligence is an intangible property of our brain: a combination of reasoning, learning, problem-solving, perception, language understanding, etc.
Artificial Intelligence can be categorized in several ways, primarily based on two main criteria: capabilities and functionality.
Types of AI based on capabilities:
1. Weak AI or Narrow AI: Narrow AI, also known as Weak AI, is like a specialist in the world of Artificial Intelligence. Imagine it as a virtual expert dedicated to performing one specific task with intelligence. For example, think of Apple's Siri. It's pretty smart when it comes to voice commands and answering questions, but it doesn't understand or do much beyond that. Narrow AI operates within strict limits, and if you ask it to step outside its comfort zone, it might not perform as expected. This type of AI is everywhere in today's world, from self-driving cars to image recognition on your smartphone. IBM's Watson is another example of Narrow AI. It's a supercomputer that combines Expert Systems, Machine Learning, and Natural Language Processing, but it's still a specialist. It's excellent at crunching data and providing insights but doesn't venture far beyond its defined tasks.
2. General AI: General AI, often referred to as Strong AI, is like the holy grail of artificial intelligence. Picture it as a system that could do any intellectual task with the efficiency of a human. General AI aims to create machines that think and learn like humans, but here's the catch: there's no such system in existence yet. Researchers worldwide are working diligently to make it a reality, but it's a complex journey that will require significant time and effort.
3. Super AI: Super AI takes AI to another level entirely. It's the pinnacle of machine intelligence, where machines surpass human capabilities in every cognitive aspect. These machines can think, reason, solve puzzles, make judgments, plan, learn, and communicate independently. However, it's important to note that Super AI is currently a hypothetical concept. Achieving such a level of artificial intelligence would be nothing short of revolutionary, and it's a challenge that's still on the horizon.
Types of AI based on functionality:
1. Reactive Machines: Reactive Machines represent the most basic form of Artificial Intelligence. These machines live in the present moment and don't have memories or past experiences to guide their actions. They focus solely on the current scenario and respond with the best possible action based on their programming. Examples of reactive machines are IBM's Deep Blue, the chess-playing computer, and Google's AlphaGo, which excels at the ancient game of Go.
2. Limited Memory: Limited Memory machines can remember some past experiences or data, but only for a short period. They use this stored information to make decisions and navigate situations. A great example of this type of AI is seen in self-driving cars. These vehicles store recent data like the speed of nearby cars, distances, and speed limits to safely navigate the road.
3. Theory of Mind: Theory of Mind AI is still in the realm of research and development. These AI systems aim to understand human emotions and beliefs and engage in social interactions much like humans. While this type of AI hasn't fully materialized yet, researchers are making significant strides toward creating machines that can understand and interact with humans on a deeper, more emotional level.
4. Self-Awareness: Self-Awareness AI is the future frontier of Artificial Intelligence. These machines will be extraordinarily intelligent, possessing their own consciousness, emotions, and self-awareness. They'll be smarter than the human mind itself. However, it's crucial to note that Self-Awareness AI remains a hypothetical concept and does not yet exist in reality. Achieving this level of AI would be a monumental leap in technology and understanding.
Advantages of Artificial Intelligence
o High Accuracy with fewer errors: AI machines or systems are prone to fewer errors and higher accuracy, as they take decisions based on prior experience or information.
o High Speed: AI systems can operate at very high speed with fast decision-making; because of this, an AI system can beat a chess champion at chess.
o High reliability: AI machines are highly reliable and can perform the same action multiple times with high accuracy.
o Useful for risky areas: AI machines can be helpful in situations such as defusing a bomb or exploring the ocean floor, where employing a human can be risky.
o Digital Assistant: AI can be very useful for providing digital assistance to users; for example, AI technology is currently used by various e-commerce websites to show products according to customer requirements.
o Useful as a public utility: AI can be very useful for public utilities, such as self-driving cars which can make our journeys safer and hassle-free, facial recognition for security purposes, natural language processing to communicate with humans in human language, etc.
o Enhanced Security: AI can be very helpful in enhancing security, as it can detect and respond to cyber threats in real time, helping companies protect their data and systems.
o Aid in Research: AI is very helpful in the research field, as it assists researchers by processing and analyzing large datasets, accelerating discoveries in fields such as astronomy, genomics, and materials science.
Disadvantages of Artificial Intelligence
o High Cost: The hardware and software requirements of AI are very costly, and AI systems require a lot of maintenance to meet current world requirements.
o Can't think out of the box: Even though we are making smarter machines with AI, they still cannot work out of the box; a robot will only do the work for which it is trained or programmed.
o No feelings and emotions: An AI machine can be an outstanding performer, but it does not have feelings, so it cannot form any kind of emotional attachment with humans, and it may sometimes be harmful to users if proper care is not taken.
o Increased dependency on machines: With the advancement of technology, people are getting more dependent on devices, and hence they are losing some of their mental capabilities.
o No Original Creativity: Humans are creative and can imagine new ideas, but AI machines cannot match this power of human intelligence and cannot be creative and imaginative.
o Complexity: Building and maintaining AI systems can be very complicated and needs a lot of knowledge. This can make it hard for some groups or people to use them.
o Job Concerns: As AI gets better, it might take away not just basic jobs but also some skilled ones. This worries people about losing jobs in different fields.
Challenges of AI
Artificial Intelligence offers incredible advantages, but it also presents some challenges that need to be addressed:
o Doing the Right Thing: AI should make the right choices, but sometimes it doesn't. It can make mistakes or do things that aren't fair. We need to teach AI to be better at making good choices.
o Government and AI: Sometimes, governments use AI to keep an eye on people. This can be a problem for our freedom. We need to make sure they use AI in a good way.
o Bias in AI: AI can sometimes be a bit unfair, especially when it comes to recognizing people's faces. This can cause problems, especially for people who aren't like the majority.
o AI and Social Media: What you see on social media is often decided by AI. But sometimes, AI shows things that aren't true or are kind of mean. We need to make sure AI shows the right stuff.
o Legal and Regulatory Challenges: The rapid evolution of AI has outpaced the development of comprehensive laws and regulations, leading to uncertainty about issues like liability and responsibility.
Applications of AI
Scope of AI:
1. Games:
There are several ways in which artificial intelligence (AI) is being used in the gaming industry.
There are also a few limitations to the use of artificial intelligence (AI) in the gaming industry, for example:
5. Ethical concerns: Some people may have ethical concerns about the use of AI in games, such as the potential for AI to be used for unethical purposes or to perpetuate harmful biases.
Game Playing in Artificial Intelligence
There are two main approaches to game playing in AI: rule-based systems and machine learning-based systems.
o There might be some situations where more than one agent is searching for the solution in the same search space; this situation usually occurs in game playing.
o So, searches in which two or more players with conflicting goals are trying to explore the same search space for the solution are called adversarial searches, often known as Games.
Zero-Sum Game
The phrase "zero-sum game" comes from game theory and the notion that if one person wins and the other person loses, this produces a net gain of zero.
o One player of the game tries to maximize one single value, while the other player tries to minimize it.
o Each of the players tries to figure out what to do, anticipating the response of the opponent to their actions. This requires embedded thinking, or backward reasoning, to solve game problems in AI.
o Utility(s, p): A utility function gives the final numeric value for a game that ends in terminal state s for player p. It is also called the payoff function. For chess, the outcomes are a win, loss, or draw, and the payoff values are +1, 0, and ½. For tic-tac-toe, the utility values are +1, -1, and 0.
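To make the maximize/minimize idea concrete, here is a minimal minimax sketch in Python. The game tree and its payoff values are hypothetical assumptions for illustration, not taken from the text; real game-playing programs add depth limits and pruning on top of this idea.

```python
# Minimal minimax sketch for a two-player zero-sum game.
# The game tree and utilities below are hypothetical, for illustration only.

UTILITY = {"W": 1, "D": 0, "L": -1}   # payoffs for the maximizing player

TREE = {                              # state -> successor states
    "root": ["a", "b"],
    "a": ["W", "D"],
    "b": ["L", "W"],
}

def minimax(state, maximizing):
    if state in UTILITY:              # terminal state: return its payoff
        return UTILITY[state]
    values = [minimax(s, not maximizing) for s in TREE[state]]
    return max(values) if maximizing else min(values)

print(minimax("root", True))          # -> 0, the value MAX can guarantee
```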
Theorem Proving
This field has gained significant traction with the incredible advancements in machine learning, cognitive computing, and artificial intelligence in the past few years. Theorem proving is a vital tool in fields such as computer science, mathematics, physics, engineering, and artificial intelligence.
Theorem proving has numerous applications. Its applications extend to various fields such as computer science and mathematics, and it plays an essential role in the optimization of complex systems. In computer science, theorem proving is commonly used to verify the correctness of algorithms and software systems. In mathematics, theorem proving helps automate the tedious process of mathematical proofs.
There are multiple components to theorem proving. Each component has a vital role in the overall theorem proving process. These components include:
Logic: The logic used in theorem proving determines the expressive strength of the statement or proposition to be proved. For example, first-order logic can handle statements involving natural numbers and sets, while second-order logic can handle statements involving natural numbers and sets as well as second-order statements.
Inference Engine: This component is responsible for applying the rules of inference that are used to derive new statements or propositions. Inference engines implement various rules such as Modus Ponens (if A implies B and A is true, then B is true) and resolution.
Axioms: These are statements or propositions that are assumed to be true and are used to build the theorem. Axioms are the building blocks of mathematical theorems and are taken as true or self-evident.
Lemmas: These are intermediate results that are assumed to be true and are essential in proving the final result.
Optimization: Theorem proving can help optimize complex systems that are prone to errors. By identifying and fixing errors automatically, theorem proving helps improve the effectiveness and efficiency of these systems.
There are multiple techniques used in theorem proving that help optimize the process. These techniques include:
Backward Chaining: This is a technique where the theorem prover starts
by assuming the goal and then moves backward trying to derive the
prerequisites of the goal. This technique is widely used in automated
reasoning where the theorem to be proved is known beforehand.
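As a rough illustration of backward chaining, here is a minimal sketch that assumes a goal and works backward to its prerequisites. The facts, rules, and symbol names are hypothetical assumptions for demonstration; real theorem provers additionally handle variables, unification, and loop detection.

```python
# Minimal backward-chaining sketch over if-then rules.
# Facts and rules below are hypothetical, for illustration only.

FACTS = {"A", "B"}
RULES = [
    (["A", "B"], "C"),    # if A and B then C
    (["C"], "goal"),      # if C then goal
]

def prove(goal):
    """Assume `goal` and try to derive its prerequisites backward."""
    if goal in FACTS:                 # base case: goal is a known fact
        return True
    return any(
        conclusion == goal and all(prove(p) for p in premises)
        for premises, conclusion in RULES
    )

print(prove("goal"))                  # True: goal <- C <- (A and B)
```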
Theorem proving is not without its challenges. Here are some of the factors that make the task harder:
Ambiguity: The natural language used to state theorems can sometimes be ambiguous. This poses a challenge when translating the language into a formal language that the theorem prover can process and understand.
What is NLP?
NLP stands for Natural Language Processing, which is a part of Computer Science, Human Language, and Artificial Intelligence. It is the technology used by machines to understand, analyse, manipulate, and interpret human languages. It helps developers to organize knowledge for performing tasks such as translation, automatic summarization, Named Entity Recognition (NER), speech recognition, relationship extraction, and topic segmentation.
History of NLP
(1940-1960) - Focused on Machine Translation (MT)
Work on Natural Language Processing started in the 1940s.
1948 - In the Year 1948, the first recognisable NLP application was introduced in Birkbeck College, London.
1950s - In the 1950s, there was a conflicting view between linguistics and computer science. Later, Chomsky published his first book, Syntactic Structures, and claimed that language is generative in nature.
In 1957, Chomsky also introduced the idea of Generative Grammar, which consists of rule-based descriptions of syntactic structures.
(1960-1980) - Flavored with Artificial Intelligence (AI)
In the year 1960 to 1980, the key developments were:
Augmented Transition Networks (ATN)
An Augmented Transition Network is a finite state machine that is capable of recognizing regular languages.
Case Grammar
Case Grammar was developed by linguist Charles J. Fillmore in the year 1968. Case Grammar uses languages such as English to express the relationship between nouns and verbs by using prepositions.
In Case Grammar, case roles can be defined to link certain kinds of verbs and objects.
For example: "Neha broke the mirror with the hammer". In this example, case grammar identifies Neha as an agent, the mirror as a theme, and the hammer as an instrument.
In the year 1960 to 1980, key systems were:
SHRDLU
SHRDLU is a program written by Terry Winograd in 1968-70. It allowed users to communicate with the computer and move objects. It could handle instructions such as "pick up the green ball" and also answer questions like "What is inside the black box?" The main importance of SHRDLU is that it showed that syntax, semantics, and reasoning about the world can be combined to produce a system that understands natural language.
LUNAR
LUNAR is the classic example of a Natural Language database interface system; it used ATNs and Woods' Procedural Semantics. It was capable of translating elaborate natural language expressions into database queries and handled 78% of requests without errors.
1980 - Current
Until 1980, natural language processing systems were based on complex sets of hand-written rules. After 1980, NLP introduced machine learning algorithms for language processing.
In the beginning of the 1990s, NLP started growing faster and achieved good accuracy, especially in English grammar. In the 1990s, electronic text corpora were also introduced, which provided a good resource for training and evaluating natural language programs. Other factors included the availability of computers with fast CPUs and more memory. The major factor behind the advancement of natural language processing was the Internet.
Now, modern NLP consists of various applications, like speech recognition, machine translation, and machine text reading. When we combine all these applications, we allow artificial intelligence to gain knowledge of the world. Consider the example of AMAZON ALEXA: you can ask Alexa a question, and it will reply to you.
Advantages of NLP
o NLP helps users to ask questions about any subject and get a direct response within seconds.
o NLP offers exact answers to questions, meaning it does not offer unnecessary or unwanted information.
o NLP helps computers to communicate with humans in their languages.
o It is very time efficient.
o Most companies use NLP to improve the efficiency and accuracy of documentation processes and to identify information from large databases.
Disadvantages of NLP
A list of disadvantages of NLP is given below:
o NLP may not show context.
o NLP is unpredictable.
o NLP may require more keystrokes.
o NLP is unable to adapt to new domains and has limited functionality; this is why an NLP system is built for a single, specific task only.
Components of NLP
There are the following two components of NLP -
1. Natural Language Understanding (NLU) (What does the user say?)
Natural Language Understanding (NLU) helps the machine to understand and analyse human language by extracting metadata from content such as concepts, entities, keywords, emotion, relations, and semantic roles.
NLU is mainly used in business applications to understand the customer's problem in both spoken and written language.
NLU involves the following tasks -
o It is used to map the given input into useful representation.
o It is used to analyze different aspects of the language.
2. Natural Language Generation (NLG) (What should I say to the user?)
Natural Language Generation (NLG) acts as a translator that converts computerized data into a natural language representation. It mainly involves text planning, sentence planning, and text realization.
Note: NLU is more difficult than NLG.
Difference between NLU and NLG
o NLU is the process of reading and interpreting language; it produces non-linguistic outputs from natural language inputs.
o NLG is the process of writing or generating language; it constructs natural language outputs from non-linguistic inputs.
Applications of NLP
There are the following applications of NLP -
1. Question Answering
Question Answering focuses on building systems that automatically answer the questions asked by humans in a
natural language.
2. Spam Detection
Spam detection is used to detect unwanted e-mails getting to a user's inbox.
3. Sentiment Analysis
Sentiment Analysis is also known as opinion mining. It is used on the web to analyse the attitude, behaviour, and emotional state of the sender. This application is implemented through a combination of NLP (Natural Language Processing) and statistics, by assigning values to the text (positive, negative, or neutral) and identifying the mood of the context (happy, sad, angry, etc.).
4. Machine Translation
Machine translation is used to translate text or speech from one natural language to another natural language.
6. Speech Recognition
Speech recognition is used for converting spoken words into text. It is used in applications such as mobile devices, home automation, video retrieval, dictating to Microsoft Word, voice biometrics, voice user interfaces, and so on.
7. Chatbot
Implementing a chatbot is one of the important applications of NLP. It is used by many companies to provide customer chat services.
8. Information extraction
Information extraction is one of the most important applications of NLP. It is used for extracting structured
information from unstructured or semi-structured machine-readable documents.
9. Natural Language Understanding (NLU)
It converts a large set of text into more formal representations, such as first-order logic structures, that are easier for computer programs to manipulate.
How to build an NLP pipeline
There are the following steps to build an NLP pipeline -
Step 1: Sentence Segmentation
Sentence segmentation is the first step in building the NLP pipeline. It breaks the paragraph into separate sentences.
Example: Consider the following paragraph -
Independence Day is one of the important festivals for every Indian citizen. It is celebrated on the 15th of
August each year ever since India got independence from the British rule. The day celebrates independence
in the true sense.
Sentence Segment produces the following result:
1. "Independence Day is one of the important festivals for every Indian citizen."
2. "It is celebrated on the 15th of August each year ever since India got independence from the British rule."
3. "This day celebrates independence in the true sense."
Step 2: Word Tokenization
A word tokenizer is used to break the sentence into separate words or tokens.
Example:
JavaTpoint offers Corporate Training, Summer Training, Online Training, and Winter Training.
Word Tokenizer generates the following result:
"JavaTpoint", "offers", "Corporate", "Training", "Summer", "Training", "Online", "Training", "and", "Winter",
"Training", "."
Step 3: Stemming
Stemming is used to normalize words into their base or root form. For example, celebrates, celebrated, and celebrating all originate from the single root word "celebrate." The big problem with stemming is that it sometimes produces a root word which has no meaning.
For example, intelligence, intelligent, and intelligently all reduce to the single root "intelligen." In English, the word "intelligen" has no meaning.
Step 4: Lemmatization
Lemmatization is quite similar to stemming. It is used to group different inflected forms of a word into a single form, called the lemma. The main difference between stemming and lemmatization is that lemmatization produces a root word which has a meaning.
For example: in lemmatization, the words intelligence, intelligent, and intelligently have the root word intelligent, which has a meaning.
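A minimal lemmatization sketch with NLTK's WordNet lemmatizer (assumes the 'wordnet' data can be downloaded):

```python
import nltk
nltk.download("wordnet", quiet=True)     # lexical database for the lemmatizer
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
# pos="v" tells the lemmatizer to treat the word as a verb.
print(lemmatizer.lemmatize("celebrating", pos="v"))  # -> celebrate
print(lemmatizer.lemmatize("celebrated", pos="v"))   # -> celebrate
```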
Step 5: Identifying Stop Words
In English, there are a lot of words that appear very frequently, like "is", "and", "the", and "a". NLP pipelines flag these words as stop words. Stop words might be filtered out before doing any statistical analysis.
Example: He is a good boy.
Note: When you are building a rock band search engine, you should not ignore the word "The."
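A minimal stop-word filtering sketch using NLTK's built-in English stop-word list (assumes the 'stopwords' data can be downloaded):

```python
import nltk
nltk.download("stopwords", quiet=True)
from nltk.corpus import stopwords

stop_words = set(stopwords.words("english"))
tokens = ["He", "is", "a", "good", "boy"]
print([t for t in tokens if t.lower() not in stop_words])  # -> ['good', 'boy']
```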
Step 6: Dependency Parsing
Dependency parsing is used to find how all the words in a sentence are related to each other (a combined example for Steps 6 and 7 follows Step 7 below).
Step 7: POS tags
POS stands for parts of speech, which include noun, verb, adverb, and adjective. A POS tag indicates how a word functions in meaning as well as grammatically within the sentence. A word can have one or more parts of speech based on the context in which it is used.
Example: "Google" something on the Internet.
In the above example, Google is used as a verb, although it is a proper noun.
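A minimal combined sketch of Steps 6 and 7 using spaCy (a library choice of ours; it assumes the spacy package and its small English model en_core_web_sm are installed):

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Google something on the Internet.")
for token in doc:
    # token.pos_ is the part-of-speech tag; token.dep_ names the dependency
    # relation, and token.head is the word this token attaches to.
    print(token.text, token.pos_, token.dep_, "->", token.head.text)
```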
Step 8: Named Entity Recognition (NER)
Named Entity Recognition (NER) is the process of detecting the named entity such as person name, movie name,
organization name, or location.
Example: Steve Jobs introduced the iPhone at the Macworld Conference in San Francisco, California.
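A minimal NER sketch, again with spaCy's small English model (same assumptions as above; the exact entity labels depend on the model):

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Steve Jobs introduced the iPhone at the Macworld Conference "
          "in San Francisco, California.")
for ent in doc.ents:
    print(ent.text, "->", ent.label_)
# Typically: Steve Jobs -> PERSON, San Francisco -> GPE, California -> GPE
```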
Step 9: Chunking
Chunking is used to collect individual pieces of information and group them into bigger chunks, such as phrases within a sentence.
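A small noun-phrase chunking sketch using NLTK's regular-expression chunker (the grammar pattern is a common textbook example, assumed here for illustration; it also needs the POS tagger data):

```python
import nltk
nltk.download("averaged_perceptron_tagger", quiet=True)
from nltk import RegexpParser, pos_tag, word_tokenize

# An NP chunk = optional determiner, any adjectives, then a noun.
chunker = RegexpParser("NP: {<DT>?<JJ>*<NN>}")

tagged = pos_tag(word_tokenize("He is a good boy"))
print(chunker.parse(tagged))   # groups "a good boy" into a single NP chunk
```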
Phases of NLP
There are the following five phases of NLP: Lexical Analysis, Syntactic Analysis (Parsing), Semantic Analysis, Discourse Integration, and Pragmatic Analysis.
Difference between Natural Language and Computer Language:
o Natural language has a very large vocabulary, whereas computer language has a very limited vocabulary.
o Natural language is easily understood by humans, whereas computer language is easily understood by machines.
Computer Vision
On a certain level, computer vision is all about pattern recognition: it includes training machine systems to understand visual data such as images and videos.
First, a vast amount of labeled visual data is provided to the machine to train it. This labeled data enables the machine to analyze the different patterns in the data points and relate them to the labels. E.g., suppose we provide visual data of millions of dog images. In that case, the computer learns from this data, analyzes each photo (shape, the distance between shapes, color, etc.), and identifies patterns common to dogs, generating a model. As a result, this computer vision model can accurately detect whether a given input image contains a dog or not.
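As a rough sketch of how such a trained classification model is used at inference time, here is a minimal example with a pretrained torchvision network. The library, model choice, and image path are our assumptions for illustration, not the text's:

```python
import torch
from torchvision import models
from PIL import Image

# Load a small image classifier pretrained on ImageNet.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

preprocess = weights.transforms()            # resize, crop, normalize
image = Image.open("dog.jpg")                # hypothetical input image

with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))
idx = logits.argmax(dim=1).item()
print(weights.meta["categories"][idx])       # predicted class label
```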
Task Associated with Computer Vision
Although computer vision has been utilized in so many fields, there are a few common tasks for computer vision systems.
These tasks are given below:
o Object classification: Object classification is a computer vision technique/task used to classify an image, such as
whether an image contains a dog, a person's face, or a banana. It analyzes the visual content (videos & images) and
classifies the object into the defined category. It means that we can accurately predict the class of an object present
in an image with image classification.
o Object Identification/detection: Object identification or detection uses image classification to identify and locate
the objects in an image or video. With such detection and identification technique, the system can count objects in
a given image or scene and determine their accurate location and labeling. For example, in a given image, one dog,
one cat, and one duck can be easily detected and classified using the object detection technique.
o Object Verification: The system processes videos, finds the objects based on search criteria, and tracks their
movement.
o Object Landmark Detection: The system defines the key points for the given object in the image data.
o Image Segmentation: Image segmentation not only detects the classes in an image, as image classification does; it classifies each pixel of an image to specify what objects it contains. It tries to determine the role of each pixel in the image.
o Object Recognition: In this, the system recognizes the object's location with respect to the image.
How to learn computer vision?
Computer vision requires all the basic concepts of machine learning, deep learning, and artificial intelligence. If you are eager to learn computer vision, you should follow the points below:
1. Build your foundation:
o Before entering this field, you must have strong knowledge of advanced mathematical concepts such as
Probability, statistics, linear algebra, calculus, etc.
o The knowledge of programming languages like Python would be an extra advantage to getting started with
this domain.
3. Semantic Segmentation
o Semantic Segmentation is not only about detecting the classes in an image, as image classification is. Instead, it classifies each pixel of an image to specify what objects it contains, trying to determine the role of each pixel in the image. It classifies pixels into a particular category without differentiating object instances; in other words, it classifies similar objects as a single class at the pixel level. For example, if an image contains two dogs, semantic segmentation will put both dogs under the same label.
4. Instance Segmentation
o Instance segmentation can classify the objects in an image at the pixel level, similar to semantic segmentation, but at a more advanced level. It means instance segmentation can classify similar types of objects into different instances. For example, if a visual consists of various cars, then with semantic segmentation we can only tell that there are multiple cars, but with instance segmentation we can label them individually according to their colour, shape, etc.
5. Panoptic Segmentation
o Panoptic Segmentation is one of the most powerful computer vision techniques as it combines the Instance
and Semantic Segmentation techniques. It means with Panoptic Segmentation, you can classify image
objects at pixel levels and can also identify separate instances of that class.
6. Keypoint Detection
o Keypoint detection tries to detect some key points in an image to give more details about a class of objects.
It basically detects people and localizes their key points. There are mainly two keypoint detection areas,
which are Body Keypoint Detection and Facial Keypoint Detection.
o For example, Facial keypoint detection includes detecting key parts of the human face such as the nose,
eyes, corners, eyebrows, etc. Keypoint detection mainly has applications, including face detection, pose
detection, etc.
7. Person Segmentation
o Person segmentation is a type of image segmentation technique used to separate a person from the background within an image. It can be used together with pose estimation, so that we can closely identify the exact location of the person in the image as well as that person's pose.
8. Depth Perception
Depth perception is a computer vision technique that gives machines the visual ability to estimate the 3D depth/distance of an object from the source. Depth perception has wide applications, including the reconstruction of objects in augmented reality, robotics, self-driving cars, etc. LiDAR (Light Detection and Ranging) is one of the popular techniques used for depth perception. It measures the relative distance of an object by illuminating it with laser light and then measuring the reflections using sensors.
9. Image Captioning
o Image captioning, as the name suggests, is about giving an image a suitable caption that describes it. It makes use of neural networks: when we input an image, the network generates a caption that describes the image. It is not only a computer vision task but also an NLP task.
Components of a Robot
o Actuators: Actuators are the devices responsible for moving and controlling a system or machine. They help achieve physical movement by converting energy such as electrical, hydraulic, or pneumatic energy. Actuators can create linear as well as rotary motion.
o Power Supply: It is an electrical device that supplies electrical power to an electrical load. The primary
function of the power supply is to convert electrical current to power the load.
o Electric Motors: These are the devices that convert electrical energy into mechanical energy and are
required for the rotational motion of the machines.
o Pneumatic Air Muscles: Air muscles are soft pneumatic devices ideally suited for robotics. They can contract and extend, and they operate by pressurized air filling a pneumatic bladder. Whenever air is introduced, they can contract up to 40%.
o Muscle Wires: These are made of a nickel-titanium alloy called Nitinol and are very thin. They can extend and contract when a specific amount of heat or electric current is supplied. They can also be formed and bent into different shapes when in their martensitic form. They contract by about 5% when electric current passes through them.
o Piezo Motors and Ultrasonic Motors: Piezoelectric motors or Piezo motors are the electrical devices that
receive an electric signal and apply a directional force to an opposing ceramic plate. It helps a robot to
move in the desired direction. These are the best suited electrical motors for industrial robots.
o Sensors: Sensors provide abilities such as seeing, hearing, touch, and movement, similar to humans. Sensors are the devices or machines which help to detect events or changes in the environment and send the data to the computer processor. These devices are usually paired with other electronic devices. Similar to human sense organs, electrical sensors also play a crucial role in Artificial Intelligence and robotics: AI algorithms control robots by sensing the environment, and sensors provide real-time information to the computer processors.
Applications of Robotics
Robotics has different application areas. Some of the important application domains of robotics are as follows:
o Robotics in the defense sector: The defense sector is undoubtedly one of the main parts of any country. Every country wants its defense system to be strong. Robots help to approach inaccessible and dangerous zones during war. DRDO has developed a robot named Daksh to destroy life-threatening objects safely. Robots help soldiers remain safe and are deployed by the military in combat scenarios. Besides combat support, robots are also deployed in anti-submarine operations, fire support, battle damage management, strike missions, and laying machines.
o Robotics in Medical sectors: Robots also help in various medical fields such as laparoscopy, neurosurgery,
orthopaedic surgery, disinfecting rooms, dispensing medication, and various other medical domains.
o Robotics in the Industrial Sector: Robots are used in various manufacturing industries for tasks such as cutting, welding, assembly, disassembly, pick and place for printed circuit boards, packaging & labelling, palletizing, product inspection & testing, colour coating, drilling, polishing, and handling materials.
Moreover, robotics technology increases productivity and profitability and reduces human effort, resulting in lower physical strain and injury. The industrial robot has some important advantages, which are as follows:
o Accuracy
o Flexibility
o Reduced labour charge
o Low noise operation
o Fewer production damages
o Increased productivity rate.
o Robotics in Entertainment: Over the last decade, the use of robots has continuously increased in entertainment areas. Robots are being employed in the entertainment sector, such as movies, animation, games, and cartoons. Robots are very helpful where repetitive actions are required. A camera-wielding robot can help shoot a movie scene as many times as needed without getting tired or frustrated. The big-name studio Disney has launched hundreds of robots for the film industry.
o Robots in the mining industry: Robotics is very helpful for various mining applications such as robotic dozing, excavation and haulage, robotic mapping & surveying, robotic drilling, explosive handling, etc. A mining robot can navigate flooded passages on its own and use cameras and other sensors to detect valuable minerals. Further, robots also help in excavation by detecting gases and other materials, keeping humans safe from harm and injury. Robot rock climbers are used for space exploration, and underwater drones are used for ocean exploration.
Expert System
Note: It is important to remember that an expert system is not used to replace human experts; instead, it is used to assist a human in making a complex decision. These systems do not have human capabilities of thinking; they work on the basis of the knowledge base of a particular domain.
Below are some popular examples of the Expert System:
o DENDRAL: It was an artificial intelligence project that was made as a chemical analysis expert system.
It was used in organic chemistry to detect unknown organic molecules with the help of their mass spectra
and knowledge base of chemistry.
o MYCIN: It was one of the earliest backward chaining expert systems that was designed to find the bacteria
causing infections like bacteraemia and meningitis. It was also used for the recommendation of antibiotics
and the diagnosis of blood clotting diseases.
o PXDES: It is an expert system used to determine the type and level of lung cancer. To determine the disease, it takes a picture of the upper body, which looks like a shadow; this shadow identifies the type and degree of harm.
o CaDeT: The CaDet expert system is a diagnostic support system that can detect cancer at early stages.
Characteristics of Expert System
o High Performance: The expert system provides high performance for solving any type of complex
problem of a specific domain with high efficiency and accuracy.
o Understandable: It responds in a way that can be easily understood by the user. It can take input in human language and provides the output in the same way.
o Reliable: It is highly reliable for generating efficient and accurate output.
o Highly responsive: An ES provides the result for any complex query within a very short period of time.
Components of Expert System
An expert system mainly consists of three components:
o User Interface
o Inference Engine
o Knowledge Base
1. User Interface
With the help of a user interface, the expert system interacts with the user, takes queries as input in a readable format, and passes them to the inference engine. After getting the response from the inference engine, it displays the output to the user. In other words, it is an interface that helps a non-expert user communicate with the expert system to find a solution.
2. Inference Engine (Rules Engine)
o The inference engine is known as the brain of the expert system, as it is the main processing unit of the system. It applies inference rules to the knowledge base to derive a conclusion or deduce new information. It helps in deriving an error-free solution to the queries asked by the user.
o With the help of the inference engine, the system extracts knowledge from the knowledge base.
o There are two types of inference engine:
o Deterministic inference engine: The conclusions drawn from this type of inference engine are assumed to be true. It is based on facts and rules.
o Probabilistic inference engine: This type of inference engine contains uncertainty in its conclusions and is based on probability.
The inference engine uses the following modes to derive solutions:
o Forward Chaining: It starts from the known facts and rules and applies the inference rules to add their conclusions to the known facts.
o Backward Chaining: It is a backward reasoning method that starts from the goal and works backward to prove it from the known facts.
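A minimal forward-chaining sketch (the facts and if-then rules are hypothetical, loosely in the spirit of a diagnostic ES, and are our assumption for illustration):

```python
# Forward chaining: repeatedly fire rules whose premises are all known
# facts, adding their conclusions, until nothing new can be derived.
# Facts and rules below are hypothetical, for illustration only.

facts = {"fever", "stiff_neck"}
rules = [
    ({"fever", "stiff_neck"}, "suspect_infection"),
    ({"suspect_infection"}, "recommend_tests"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)   # now also contains the two derived conclusions
```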
3. Knowledge Base
o The knowledge base is a type of storage that stores knowledge acquired from the different experts of the particular domain. It is considered a big store of knowledge. The larger the knowledge base, the more precise the Expert System will be.
o It is similar to a database that contains information and rules of a particular domain or subject.
o One can also view the knowledge base as a collection of objects and their attributes; for example, a lion is an object, and its attributes are that it is a mammal, it is not a domestic animal, etc.
Components of Knowledge Base
o Factual Knowledge: The knowledge which is based on facts and accepted by knowledge engineers comes
under factual knowledge.
o Heuristic Knowledge: This knowledge is based on practice, the ability to guess, evaluation, and
experiences.
Knowledge Representation: It is used to formalize the knowledge stored in the knowledge base using if-then rules.
Knowledge Acquisition: It is the process of extracting, organizing, and structuring the domain knowledge, specifying the rules to acquire the knowledge from various experts, and storing that knowledge in the knowledge base.
Development of Expert System
Here, we will explain the working of an expert system by taking the example of the MYCIN ES. Below are some steps to build MYCIN:
o Firstly, the ES should be fed with expert knowledge. In the case of MYCIN, human experts specialized in the medical field of bacterial infections provide information about the causes, symptoms, and other knowledge in that domain.
o Once the KB of MYCIN is updated, it must be tested; to do so, the doctor provides a new problem to it. The problem is to identify the presence of the bacteria by inputting the details of a patient, including the symptoms, current condition, and medical history.
o The ES will need a questionnaire to be filled by the patient to know the general information about the
patient, such as gender, age, etc.
o Now the system has collected all the information, so it will find the solution for the problem by applying
if-then rules using the inference engine and using the facts stored within the KB.
o In the end, it will provide a response to the patient by using the user interface.
Participants in the development of Expert System
There are three primary participants in the building of Expert System:
1. Expert: The success of an ES depends greatly on the knowledge provided by human experts. These experts are people who are specialized in the specific domain.
2. Knowledge Engineer: The knowledge engineer is the person who gathers knowledge from the domain experts and then codifies that knowledge into the system according to the chosen formalism.
3. End-User: This is a particular person or a group of people who may not be experts; they work with the expert system because they need solutions or advice for their complex queries.
Why Expert System?
Before using any technology, we must have an idea of why to use it, and the same holds for the ES. Although we have human experts in every field, why develop a computer-based system? The points below describe the need for the ES:
1. No memory limitations: It can store as much data as required and can recall it at the time of application. Human experts, however, have limits on what they can memorize at any time.
2. High efficiency: If the knowledge base is updated with correct knowledge, then it provides highly efficient output, which may not be possible for a human.
3. Expertise in a domain: There are lots of human experts in each domain, all with different skills and different experiences, so it is not easy to get a single final answer to a query. But if we put the knowledge gained from human experts into the expert system, it provides an efficient output by combining all the facts and knowledge.
4. Not affected by emotions: These systems are not affected by human emotions such as fatigue, anger, depression, anxiety, etc., hence the performance remains constant.
5. High security: These systems provide high security when resolving any query.
6. Considers all the facts: To respond to any query, it checks and considers all the available facts and provides the result accordingly, whereas a human expert might not consider some facts for one reason or another.
7. Regular updates improve the performance: If there is an issue in the result provided by the expert system, we can improve the performance of the system by updating the knowledge base.
Capabilities of the Expert System
Below are some capabilities of an Expert System:
o Advising: It is capable of advising a human being on queries within the domain of the particular ES.
o Provide decision-making capabilities: It provides the capability of decision-making in any domain, such as making financial decisions, decisions in medical science, etc.
o Demonstrate a device: It is capable of demonstrating any new product, including its features, specifications, how to use it, etc.
o Problem-solving: It has problem-solving capabilities.
o Explaining a problem: It is also capable of providing a detailed description of an input problem.
o Interpreting the input: It is capable of interpreting the input given by the user.
o Predicting results: It can be used for the prediction of a result.
o Diagnosis: An ES designed for the medical field is capable of diagnosing a disease without using multiple components, as it already contains various built-in medical tools.
Advantages of Expert System
o These systems are highly reproducible.
o They can be used in risky places where human presence is not safe.
o Error possibilities are lower if the KB contains correct knowledge.
o The performance of these systems remains steady, as it is not affected by emotions, tension, or fatigue.
o They respond to a particular query at very high speed.
Limitations of Expert System
o The response of the expert system may be wrong if the knowledge base contains wrong information.
o Unlike a human being, it cannot produce creative output for different scenarios.
o Its maintenance and development costs are very high.
o Knowledge acquisition for designing an ES is quite difficult.
o For each domain, we require a specific ES, which is one of its big limitations.
o It cannot learn by itself and hence requires manual updates.
Applications of Expert System
o In designing and manufacturing domain
It can be broadly used for designing and manufacturing physical devices such as camera lenses and
automobiles.
o In the knowledge domain
These systems are primarily used for publishing relevant knowledge to users. Two popular ES used in this domain are an advisor and a tax advisor.
o In the finance domain
In the finance industry, it is used to detect any type of possible fraud and suspicious activity, and to advise bankers on whether they should provide loans to a business or not.
o In the diagnosis and troubleshooting of devices
Medical diagnosis was the first area where expert systems were used, and ES systems are still used there.
o Planning and Scheduling
The expert systems can also be used for planning and scheduling some particular tasks for achieving the
goal of that task.
Artificial Intelligence (AI) Technique
Artificial intelligence (AI) is both a tool and a fundamental shift in intelligence used by and for humans. What is
this paradigm composed of? Is it evolving well in all aspects of human intelligence? Let us explore.
Artificial intelligence (AI) is getting closer and closer to the heights and depths of human intelligence. That's what some of us want, and that's what we sense in John McCarthy's description of AI too: "The science and engineering of making intelligent machines, especially intelligent computer programs." All this intelligence comes from building agents that act rationally. That is where we can define the AI technique as a composite of three areas: a type of method built on knowledge, which organizes and uses this knowledge and is also aware of its complexity.
Let’s break this down one by one.
Search in artificial intelligence (AI)
Artificial intelligence (AI) agents essentially perform some kind of search algorithm in the background to
complete their expected tasks. That’s why search is a major building block for any artificial intelligence (AI)
solution.
Any artificial intelligence (AI) solution has a set of states, a start state from where the search begins, and a goal state. By the use of search algorithms, the solution moves from the start state to the goal state.
This is done through various approaches (a minimal sketch follows the list below).
o Blind search
o Uninformed and informed search
o Depth first search
o Breadth first search
o Uniform cost search
o Search heuristics
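As a minimal concrete sketch, here is a breadth-first search from a start state to a goal state over a hypothetical state graph (the graph itself is an assumption for illustration):

```python
from collections import deque

# Hypothetical state space: state -> neighbouring states.
GRAPH = {
    "start": ["A", "B"],
    "A": ["C"],
    "B": ["goal"],
    "C": ["goal"],
}

def bfs(start, goal):
    """Breadth-first search; returns a path from start to goal, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbour in GRAPH.get(path[-1], []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

print(bfs("start", "goal"))   # -> ['start', 'B', 'goal']
```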
Knowledge representation in artificial intelligence (AI)
Any artificial intelligence (AI) agent has to work on some input. This work can happen only when there is some
knowledge about the input or about its handling. Artificial intelligence (AI), hence, has to be strong in
understanding, reasoning, and interpreting knowledge. This is done by the representation of knowledge. It is
where the beliefs, intentions, and judgments of an intelligent agent are expressed by reasoning. This is the place
for modeling intelligent behavior for an agent.
Here, the representation of information from the real world happens for a computer to understand and leverage
this knowledge to solve complex real-life problems. This knowledge can be in the form of the following.
o Objects
o Events
o Performance
o Facts
o Meta-knowledge
o Knowledge-base
o Declarative knowledge
o Structural knowledge
o Procedural knowledge
o Meta knowledge
o Heuristic knowledge
o Perception component
o Learning component
o Reasoning
o Execution component
All this is woven together in many ways through logical, semantic, frame, and production-rule representations as ways of knowledge representation.
Abstraction in artificial intelligence (AI)
When we talk of abstraction, we are looking at an arrangement of the complexity of computer systems. It helps
to reduce complexity and achieve a simplified view of various parts and their interplay with each other.
This is very important considering the significant criticism that AI tools face. The 'black box' effect is a big problem because a lot of effective and stellar AI models cannot explain how they do what they do. This opacity is a massive barrier to gaining confidence in, and adoption of, artificial intelligence (AI). Several AI techniques span these areas of search, knowledge, and abstraction, such as the following.
o Data Mining – where statistics and artificial intelligence are used for the analysis of large data sets to
discover helpful information
o Machine Vision – where the system can use imaging-based automatic inspection and analysis for
guidance, decisions, automatic inspection, process control, etc.
o Machine Learning (ML) – where models learn from experience and evolve their precision and delivery
over a period
o Natural Language Processing or NLP – where machines can understand and respond to text or voice
data
o Robotics – where expert systems can perform tasks like a human.
As we can see, these techniques are evolving and will keep getting better and sharper to bring artificial intelligence
(AI) into proximity to the complexity and beauty of human intelligence. We need a lot of work in these areas
because we need to address privacy, bias, discrimination, unexplainability, and misapplication that many artificial
intelligence (AI) solutions face. We can achieve more trust in AI and its techniques only by getting stronger in all
these areas – search, knowledge, and abstraction. That’s where we will remove the most significant gap between
a dog and a robot dog – a creature that human intelligence can feel sure of.