Soft Computing 2023

The document discusses soft computing, computational intelligence, and intelligent techniques. These terms are used synonymously to refer to building intelligence into machines. The goal is to develop cost-effective approximate solutions to complex problems by exploiting tolerance for imprecision. This is achieved using techniques like fuzzy logic, neural networks, and genetic algorithms. Soft computing allows developing useful solutions without high precision or accuracy, reducing computational time and costs.

The three terms, namely

1. Soft Computing
2. Computational Intelligence
3. Intelligent Techniques

are used synonymously in the computing literature.


Aim/Objective

The basic aim of

1. Soft Computing
2. Computational Intelligence
3. Intelligent Techniques

is to build intelligence into the machine.


Fundamental principles

(A) Conventional computing


1. Precision
2. Accuracy

(B) Non-conventional computing


1. Imprecision
2. Approximation


Complexity of the System

With the advances in science and technology, the physical systems to be modelled by conventional mathematical tools are becoming more and more complex. Hence the modelling time or computational cost has increased many fold.

The purpose of the new approaches is to reduce the complexity of the system while maintaining its full utility, and hence to reduce the modelling time or computational cost.
Real-world examples:

1. The problem of parking a car

We find it relatively easy to park a car because the final position of the car is not specified precisely. However, if we were asked to park the car so that the outside wheels are within 0.01 mm of the side line of the parking space and the wheels are within 0.01 degree of a specified angle, it would take a very long time. The point is that the cost (i.e., the time required) to park a car increases as the precision demanded of the parking task increases.

This trade-off between precision and cost exists not only in car parking but also in control, modelling, decision making, and almost any kind of problem.
2. Travelling salesman problem

This problem is frequently used as a test bed for assessing the effectiveness of various methods of solution. The important point about this problem is the steep rise in computing cost as a function of the precision of the solution.

These and many similar examples lead to the basic premise and the guiding principle of soft computing.
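The steep rise in computing cost can be made concrete with a toy comparison (the city coordinates and problem size below are invented for illustration): an exact solution examines all (n-1)! tours, while a cheap nearest-neighbour heuristic produces an approximate tour almost instantly, at the price of optimality.

```python
import itertools
import math
import random

# Toy TSP instance: 8 random cities in the unit square.
random.seed(0)
cities = [(random.random(), random.random()) for _ in range(8)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    # Total length of the closed tour visiting cities in the given order.
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exact: brute force over all (n-1)! = 5040 tours -- explodes as n grows.
best = min(itertools.permutations(range(1, len(cities))),
           key=lambda p: tour_length((0,) + p))
exact = tour_length((0,) + best)

# Approximate: nearest-neighbour heuristic, O(n^2), no optimality guarantee.
unvisited, order = set(range(1, len(cities))), [0]
while unvisited:
    nxt = min(unvisited, key=lambda c: dist(cities[order[-1]], cities[c]))
    order.append(nxt)
    unvisited.remove(nxt)
approx = tour_length(order)

print(exact, approx)  # the heuristic tour is never shorter than the optimum
```

Already at a few dozen cities the brute-force search becomes infeasible, while the heuristic still runs in a fraction of a second; accepting an imprecise answer is what keeps the cost manageable.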
The Cost-Precision Trade-Off

Conceptually, Fig. 1 can be used to depict this trade-off for many systems. The horizontal axis represents the degree of precision, while the vertical axis serves the dual purpose of representing both the cost and the degree of utility. As the precision of a system increases, the cost for developing the system also increases, typically in an exponential manner. On the other hand, the utility (i.e., usefulness) of the system does not increase proportionally as its precision increases; it usually saturates after a certain point. This insight about the trade-off between precision, cost, and utility inspired Zadeh and his followers to exploit the gray area in Fig. 1, which resulted in a revolutionary way of thinking for developing approximate solutions that are both cost-effective and highly useful. In other words, the fundamental principle of soft computing is to develop cost-effective approximate solutions to complex problems by exploiting the tolerance for imprecision.

Fig. 1. The Cost-Precision Trade-Off
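The trade-off above can be sketched numerically. The curve shapes (exponential cost, saturating utility) follow the text; the constants are purely illustrative.

```python
import math

def cost(p, k=5.0):
    # Development cost grows exponentially with precision p in [0, 1].
    return math.exp(k * p) - 1.0

def utility(p, k=5.0):
    # Utility grows quickly at first, then saturates toward 1.0.
    return 1.0 - math.exp(-k * p)

for p in (0.2, 0.5, 0.8, 0.99):
    print(f"precision={p:.2f}  cost={cost(p):8.2f}  utility={utility(p):.3f}")
```

The printout shows the "gray area" of Fig. 1: beyond moderate precision, cost keeps climbing steeply while utility barely improves, which is exactly where an approximate solution pays off.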
The principle of incompatibility

“As the complexity of a system increases, our ability to make precise and yet significant statements about its behaviour diminishes until a threshold is reached beyond which precision and significance become almost mutually exclusive characteristics. It is in this sense that precise quantitative analyses of the behaviour of humanistic systems are not likely to have much relevance to the real-world societal, political, economic, and other types of problems which involve humans either as individuals or in groups.”
Soft Computing

The term soft computing (SC) was first coined by Zadeh in 1992.

Basic Premises

1. The real world is pervasively imprecise and uncertain.
2. Precision and certainty carry cost.

Guiding Principle

Intelligent systems should exploit the tolerance for imprecision, uncertainty, and partial truth to achieve tractability, robustness, and low solution cost.
Supremacy of Technology

Two decades ago, there was intense competition between various methodologies:

1. Fuzzy logic (1962)
2. Neural networks (1943)
3. Genetic algorithms (1971)

Zadeh realized that more could be gained by cooperation than by claims and counter-claims of superiority.
Essential Characteristics of SC

1. The basic tenet of SC is that, in general, better results can be achieved by employing the constituent methodologies in combination than in stand-alone mode.
2. Soft computing is not a single methodology. Rather, it is a coalition or consortium of distinct methodologies.
3. Its constituent methodologies are, for the most part, complementary rather than competitive.
Soft Computing Definition
Zadeh:
1. Soft computing is tolerant of imprecision, uncertainty, and partial
truth. In effect, the role model of soft computing is
the human mind.
2. Soft computing is a collection of methodologies
that aim to exploit the tolerance for imprecision and
uncertainty to achieve tractability, robustness, and
low solution cost.
3. Soft computing is an emerging approach to
computing which parallels the remarkable ability of
the human mind to reason and learn in an
environment of uncertainty and imprecision.
Definition
Wildberger:
Soft computing might be described as
“automated intelligent estimation.” Soft
computing attempts to emulate and automate
the pragmatic techniques used by intelligent
humans to deal adequately and quickly both
with routine problems and with crises. It is an
attempt to automate what is often called
“human intuition”.
Definition
Yuan:
The minimum definition is that soft computing
includes fuzzy logic, neural networks, genetic
algorithms, rough sets, and probabilistic theory.

Kosko:
I do not believe there is a genuine new field behind
the term SC. Soft computing has yet to define its
own orthogonal axis. I support the fields that make
up SC, and I am happy to support them as one
large set or as any subset combination.
Definition

Takagi:
A computational framework for tolerance to imprecision is the basic idea of SC. The techniques used for this goal are not the keywords that define SC.

Onisawa:
NN, FL, GA, etc. each have both advantages and disadvantages for dealing with imprecision, uncertainty, partial truth, etc. For example, FL is appropriate for the representation of fuzzy rules, while NN and GA are appropriate for knowledge acquisition. SC is a total technology for knowledge acquisition and the representation of fuzzy rules, including methods that complement the disadvantages of NN, FL, GA, etc.
Hard Computing and Soft Computing

Soft computing is a natural extension of hard computing. Hence, it is reasonable to discuss the basic differences between hard computing and soft computing. A summary of these distinctions is presented in Table I.
Table I: Distinctions between hard computing and soft computing

Hard computing: It is precise and quantitative.
Soft computing: It is inexact (approximate) and qualitative.

Hard computing: Imprecision and uncertainty are undesirable properties.
Soft computing: The tolerance for imprecision and uncertainty is exploited.

Hard computing: It seeks to achieve ‘absolute’ optimization.
Soft computing: It accepts ‘good enough’ results.

Hard computing: It is based on binary logic, crisp systems, numerical analysis, and crisp software.
Soft computing: It is based on fuzzy logic, neural nets, and probabilistic reasoning.

Hard computing: The process is local: independent sets of bits are processed.
Soft computing: The process is global: strongly dependent information is processed, i.e., a change in the value of one bit is transmitted to all other bits of the computer.

Hard computing: Sensitive to initial values of parameters.
Soft computing: Insensitive to initial values of parameters.

Hard computing: Most suitable for serial computation.
Soft computing: Most suitable for parallel computation.

Hard computing: Solutions are found at a high computational cost.
Soft computing: Solutions can be found at a much lower cost in terms of calculation effort.
Soft Computing Constituents

1. Fuzzy Logic (FL)
2. Neural Networks (NN)
3. Probabilistic Reasoning (PR)
4. Genetic Algorithms (GA)
5. Chaos Theory (ChT)

FL is the kernel of SC. It can be used as a springboard for the generalization of any theory.
Fuzzy Sets

A theory which provides a systematic calculus to deal with imprecise and incomplete information linguistically.

Numerical computation is performed using linguistic labels stipulated by membership functions.
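A minimal sketch of a membership function (the linguistic labels and breakpoints below are invented for illustration): a triangular function maps a crisp input, here a temperature, to a degree of membership in each linguistic label.

```python
def triangular(x, a, b, c):
    """Membership degree of x in a triangle with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic labels for temperature in degrees Celsius.
labels = {"cold": (-10, 0, 15), "warm": (5, 20, 30), "hot": (25, 35, 50)}

t = 18.0
memberships = {name: triangular(t, *abc) for name, abc in labels.items()}
print(memberships)  # 18 degC belongs strongly to "warm", not at all to the others
```

The point is that 18 degrees is not forced into exactly one crisp category; it belongs to "warm" to a degree, which is what lets numerical computation proceed on linguistic labels.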
Artificial Neural Networks

Inspired by the mammalian brain.

A simplistic model in comparison to the mammalian brain.

A non-algorithmic approach.
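The non-algorithmic flavour can be illustrated with a hypothetical example (the data and parameters are invented): a single perceptron acquires a separating rule from labelled examples rather than being programmed with one explicitly.

```python
import random

random.seed(1)
w = [random.uniform(-1, 1) for _ in range(2)]  # random initial weights
b = 0.0

# Training examples: points above the line y = x are class 1, below are class 0.
data = [((0.0, 1.0), 1), ((1.0, 2.0), 1), ((1.0, 0.0), 0), ((2.0, 1.0), 0)]

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: nudge weights toward misclassified examples.
for _ in range(50):
    for x, target in data:
        err = target - predict(x)
        w[0] += 0.1 * err * x[0]
        w[1] += 0.1 * err * x[1]
        b += 0.1 * err

print([predict(x) for x, _ in data])  # matches the targets after training
```

No rule "class 1 means y > x" appears anywhere in the code; the network extracts it from the examples, which is the "learning by example" property listed among the advantages below.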
Evolutionary Computation

Genetic algorithms are based on the evolutionary principle of natural selection.

They offer the capacity for population-based systematic random search.
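Population-based random search can be sketched as follows (the "one-max" fitness function and every parameter below are illustrative, not from the text): a genetic algorithm evolves bit strings toward the all-ones string through selection, crossover, and mutation.

```python
import random

random.seed(2)
N, LEN, GENS = 20, 16, 60  # population size, string length, generations

def fitness(bits):
    return sum(bits)  # "one-max": count of 1s, maximum is LEN

def select(pop):
    # Tournament selection: the better of two random individuals survives.
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

pop = [[random.randint(0, 1) for _ in range(LEN)] for _ in range(N)]
for _ in range(GENS):
    nxt = []
    while len(nxt) < N:
        p1, p2 = select(pop), select(pop)
        cut = random.randrange(1, LEN)   # one-point crossover
        child = p1[:cut] + p2[cut:]
        if random.random() < 0.05:       # mutation: flip one random bit
            i = random.randrange(LEN)
            child[i] ^= 1
        nxt.append(child)
    pop = nxt

print(max(fitness(ind) for ind in pop))  # close to the optimum LEN = 16
```

Because many candidate strings are improved in parallel, the search explores several directions at once, which is the property credited later with reducing the risk of local extremes.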
Fuzzy Logic

Advantages

1. It has a great advantage over the other two techniques in the sense that the knowledge base is computationally much less complex and the linguistic representation is very close to human reasoning.
2. Fuzzy sets can be used as a universal approximator, which is very important for modelling unknown objects.
3. If an operator cannot tell linguistically what kind of action he or she takes in a specific situation, then it is quite useful to model his or her control actions using numerical data.
Fuzzy Logic

Disadvantages

1. Fuzzy logic in its pure form is not always useful for easily constructing intelligent systems. For example, when a designer does not have sufficient prior knowledge (information) about the system, the development of an acceptable fuzzy rule base becomes impossible.
2. As the complexity of the system increases, it becomes difficult to specify a correct set of rules and membership functions for adequately describing the behaviour of the system.
3. Fuzzy systems also have the disadvantage of not being able to extract additional knowledge from experience and correct the fuzzy rules to improve the performance of the system.
Neural Networks

Advantages

1. A key feature is the intrinsic parallelism that allows fast computations.
2. Parallel fine-grained implementation of non-linear static or dynamic systems.
3. An adaptive nature, where “learning by example” replaces traditional “programming” in problem solving.
4. Generalization capability.
5. ANNs are viable computational models for a wide variety of problems including pattern classification, speech synthesis and recognition, curve fitting, function approximation, image data compression, associative memory, and the modelling and control of non-linear unknown systems.
Neural Networks
Disadvantages
1. In general, they can learn correctly from
examples, but what is learned is not easy for
humans to understand, i.e., the knowledge
base extracted from them does not have such
an intuitive representation as that provided,
for example, by FL.
2. The types of functions that can be used in NNs have to possess precise regularity features, and the derivatives of these functions have to be known a priori.
Genetic Algorithms
Advantages
1. The functions that can be used in GAs can be much
more general in nature and knowledge of the
gradient of the functions is not usually required.
2. As these algorithms explore in several directions at
the same time, they are affected much less than NNs
by the problem of local extremes; that is, a GA has
far less likelihood than an NN of finding a local
extreme rather than a global one.
3. Even if the extreme found is not a global one, it is
likely to correspond to a less significant learning
error.

Disadvantages
Their learning speed is usually slower.
Future Trends of SC

• In coming years, most systems with high MIQ (Machine Intelligence Quotient) will be hybrid systems, that is, systems which achieve superior performance by employing a variety of combinations of methodologies of soft computing.

• The use of hybrid intelligent systems is leading to the development of numerous manufacturing systems, multimedia systems, intelligent robots, trading systems, etc., which exhibit a high level of MIQ.
Intelligent Combinations of the Components of SC

The following are known principal combinations of the components of SC:

1. Neuro computing + fuzzy logic (Neuro-Fuzzy: NF)
2. Fuzzy logic + genetic algorithms (FG)
3. Fuzzy logic + Chaos theory (FCh)
4. Neural networks + genetic algorithms (NG)
5. Neural networks + Chaos theory (NCh)
6. Fuzzy logic + neural networks + genetic algorithms (FNG)
7. Neural networks + fuzzy logic + genetic algorithms (NFG)
8. Fuzzy logic + Probabilistic reasoning (FP)
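As a hypothetical sketch of one such combination, the FG pairing (fuzzy logic + genetic algorithms), an evolutionary loop can tune the peak of a triangular membership function to fit sample data. The sample points and all parameters below are invented for illustration.

```python
import random

def triangular(x, a, b, c):
    # Triangular membership function with feet a, c and peak b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical (input, desired membership) observations; peak near x = 5.
samples = [(2.0, 0.25), (4.0, 0.75), (5.0, 1.0), (6.0, 0.75)]

def error(b):
    # Squared error of a candidate peak b against the observations.
    return sum((triangular(x, 0.0, b, 10.0) - m) ** 2 for x, m in samples)

random.seed(3)
pop = [random.uniform(1.0, 9.0) for _ in range(10)]  # candidate peaks
for _ in range(40):
    pop.sort(key=error)
    parents = pop[:5]                                 # truncation selection
    pop = parents + [p + random.gauss(0.0, 0.3) for p in parents]  # mutation

best = min(pop, key=error)
print(round(best, 2))  # tuned peak, near the sampled maximum at x = 5
```

The fuzzy component supplies the interpretable representation (a membership function), while the evolutionary component supplies the knowledge acquisition, which is the complementarity the coalition view of SC emphasizes.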
[Fig. 2 shows the computing technologies: hard computing as the base of classical Artificial Intelligence, and soft computing as the base of Computational Intelligence with high MIQ. Fuzzy logic forms the kernel of Soft Computing, surrounded by probabilistic reasoning, neural networks, genetic algorithms, chaos theory, and hybrid systems.]

Fig. 2. The main components of Soft Computing
