SOFT COMPUTING
(IT-701)
For
Fourth Year Students
Department of Information Technology
Vision of IT Department
The Department of Information Technology envisions preparing technically competent problem
solvers, researchers, innovators, entrepreneurs, and skilled IT professionals who can meet modern
computing challenges and contribute to the development of the rural and backward areas of the country.
Our graduates will show management skills and teamwork to attain employers’ objectives in their
careers.
Our graduates will explore the opportunities to succeed in research and/or higher studies.
Our graduates will apply technical knowledge of Information Technology for innovation and
entrepreneurship.
Our graduates will uphold ethical and professional practices for the betterment of society.
Program Outcomes (POs)
| CO | Course Outcome | PO1 | PO2 | PO3 | PO4 | PO5 | PO6 | PO7 | PO8 | PO9 | PO10 | PO11 | PO12 | PSO1 | PSO2 | PSO3 |
| IT 701.1 | Understand the concept of ANN and explain the XOR problem. | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| IT 701.2 | Use supervised neural networks to classify given inputs. | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| IT 701.3 | Understand unsupervised neural networks for clustering data. | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| IT 701.4 | Build a fuzzy inference system using concepts of fuzzy logic. | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| IT 701.5 | Obtain an optimized solution to a given problem using a genetic algorithm. | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| Average | | 0.2 | 0.8 | 0.8 | 0.2 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.4 | 0 | 0.6 |
List of Programs
| S. No. | List | Course Outcome |
| | INTRODUCTION TO SOFT COMPUTING | |
| 1 | Form a perceptron net for basic logic gates with binary input and output | CO2 |
| 2 | Calculation of new weights for a backpropagation network, given the values of input pattern, output pattern, target output, learning rate and activation function | |
| 3 | Implement the Travelling Salesman Problem using a Genetic Algorithm | |
| 4 | Optimization of a problem like job shop scheduling using a Genetic Algorithm | |
| 5 | Using Adaline net, generate XOR function with bipolar inputs and targets | |
| 6 | Design fuzzy inference system for a given problem | |
| 7 | Maximize the function y = 3x² + 2 for some given values of x using a Genetic Algorithm | |
INTRODUCTION TO SOFT COMPUTING
Soft computing is a branch of computer science and computational intelligence that deals
with approximate reasoning, uncertainty, and imprecision to handle complex real-world
problems. Unlike traditional computing techniques, which rely on precise mathematical
models and algorithms, soft computing techniques employ heuristic methods to approximate
solutions in situations where crisp mathematical models may be insufficient.
Soft computing encompasses several subfields, including fuzzy logic, neural networks,
evolutionary computation, and probabilistic reasoning, each of which addresses different
aspects of imprecision and uncertainty in problem-solving. Here's a detailed introduction to
each of these components:
1. Fuzzy Logic: Fuzzy logic provides a framework for dealing with uncertainty by allowing
for degrees of truth instead of the usual binary true/false (1/0) values. It employs linguistic
variables, which are described by fuzzy sets that represent imprecise concepts. Fuzzy logic
enables the modeling of human reasoning processes by incorporating linguistic terms such
as "very hot" or "slightly cold," allowing for more flexible and human-like decision-making
systems.
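For example, the degree to which a temperature belongs to the fuzzy set "hot" can be expressed by a membership function. A minimal sketch is given below; the breakpoints (20 °C and 35 °C) are illustrative choices, not part of any standard:

def hot_membership(temp_c):
    # Degree of truth (0 to 1) that temp_c belongs to the fuzzy set "hot";
    # illustrative breakpoints: not hot below 20 C, fully hot above 35 C
    if temp_c <= 20:
        return 0.0
    if temp_c >= 35:
        return 1.0
    return (temp_c - 20) / 15.0

print(hot_membership(30))  # ~0.67: "fairly hot" rather than strictly hot or not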
2. Neural Networks: Neural networks are computational models inspired by the structure and
functioning of biological neural networks in the human brain. They consist of interconnected
nodes (neurons) organized into layers, with each neuron processing information and
transmitting signals to other neurons. Neural networks learn from data through a process
called training, where they adjust their internal parameters to minimize errors and improve
performance on specific tasks, such as pattern recognition, classification, and prediction.
3. Evolutionary Computation: Evolutionary computation algorithms are inspired by
principles of natural selection and genetics. They include genetic algorithms, evolutionary
strategies, and genetic programming, among others. These algorithms iteratively generate
and evaluate a population of candidate solutions to a problem, selecting the fittest individuals
for reproduction and applying genetic operators (mutation, crossover) to produce offspring
with potentially improved characteristics. Over successive generations, evolutionary
computation techniques converge towards optimal or near-optimal solutions in complex
search spaces.
4. Probabilistic Reasoning: Probabilistic reasoning involves reasoning under uncertainty
using probability theory. It provides a formal framework for representing and updating
beliefs about uncertain events based on available evidence. Techniques such as Bayesian
networks, Markov models, and probabilistic graphical models are commonly used in soft
computing to model uncertain and dynamic systems, make predictions, and perform
decision-making under uncertainty.
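For example, Bayes' rule updates the probability of a hypothesis when new evidence arrives. The sketch below uses purely illustrative numbers for a diagnostic-test scenario:

def bayes_posterior(prior, likelihood, false_positive_rate):
    # P(H | E) = P(E | H) * P(H) / P(E), with P(E) from the law of total probability
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical example: 1% prior, 90% sensitivity, 5% false-positive rate
print(bayes_posterior(0.01, 0.90, 0.05))  # ~0.154: evidence raises belief from 1% to ~15%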
1. Form a perceptron net for basic logic gates with binary input and
output
A perceptron is a simple neural network model typically used for binary classification tasks.
While a single perceptron can only solve linearly separable problems, we can combine
multiple perceptrons to form networks capable of implementing more complex functions,
including basic logic gates like AND, OR, and NOT gates. Here, I'll show you how to form
perceptron networks for these basic logic gates with binary input and output.
AND Gate: The AND gate outputs 1 (or True) only if both inputs are 1; otherwise, it outputs
0 (or False). To implement the AND gate using a perceptron, we need to set appropriate
weights and bias.
Here, the weights are both 1, and the bias is -1. The perceptron's output is 1 only when both
inputs are 1.
OR Gate: The OR gate outputs 1 if at least one of the inputs is 1. We can implement the OR
gate using a perceptron with appropriate weights and bias.
Here, both weights are 1, and the bias is 0. The perceptron's output is 1 if either or both inputs
are 1.
NOT Gate: The NOT gate outputs the opposite of its input. We can implement the NOT gate
using a single-input perceptron.
Here, the weight is -1, and the bias is 1. The perceptron's output is the opposite of the input.
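A minimal Python sketch of these three perceptrons is given below, assuming a step activation that outputs 1 only when the weighted sum plus bias is strictly positive (the threshold convention that matches the weights and biases stated above):

def perceptron(inputs, weights, bias):
    # Step activation: fire only when the weighted sum plus bias is positive
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

# AND gate: weights (1, 1), bias -1
for a in (0, 1):
    for b in (0, 1):
        print(a, "AND", b, "=", perceptron((a, b), (1, 1), -1))

# OR gate: weights (1, 1), bias 0
for a in (0, 1):
    for b in (0, 1):
        print(a, "OR", b, "=", perceptron((a, b), (1, 1), 0))

# NOT gate: weight -1, bias 1
for a in (0, 1):
    print("NOT", a, "=", perceptron((a,), (-1,), 1))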
These perceptron networks demonstrate how simple neural networks can represent basic logic
gates using binary inputs and outputs. By combining multiple perceptrons or using more
complex neural network architectures, we can build networks capable of implementing more
intricate logical functions and solving diverse computational tasks.
2. Calculation of new weights for a backpropagation network, given the
values of input pattern, output pattern, target output, learning rate and
activation function.
To calculate new weights for a backpropagation neural network, you need to follow these
steps:
1. Forward Propagation: Compute the network's output using the current weights.
2. Calculate Error: Find the error between the computed output and the target output.
3. Backpropagation: Propagate the error backward through the network to update the weights.
4. Update Weights: Use the error and learning rate to adjust the weights.
1. Forward Propagation: Compute the output of the neural network for a given input pattern
using the current weights. This involves passing the input through the network's layers and
applying the activation function at each neuron to get the output.
2. Calculate Error: Find the error between the computed output and the target output. This
could be calculated using a suitable error function, such as mean squared error (MSE) or cross-
entropy loss, depending on the problem.
3. Backpropagation: Backpropagation involves propagating the error backward through the
network to compute the gradients of the error with respect to the weights. This is done by
applying the chain rule of calculus to update the weights of each layer based on the error
contributions from subsequent layers.
4. Update Weights: Finally, update the weights of the network using the calculated gradients
and the learning rate. This step ensures that the network learns from its mistakes and adjusts
the weights to minimize the error.
During backpropagation, each weight is updated according to

w_new = w_old + η · δ · x

where η is the learning rate, δ is the error signal (local gradient) propagated backward to the
neuron, and x is the input along that connection; equivalently, w_new = w_old − η · ∂E/∂w.
This formula represents how each weight is updated based on the error signal propagated
backward through the network during backpropagation. The learning rate controls the step
size of the weight updates, preventing the network from overshooting the optimal weights.
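As a concrete illustration, the sketch below applies these four steps to a single sigmoid neuron with one layer of weights; the input pattern, weights, target, and learning rate are hypothetical values chosen for demonstration:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical example values (not from the manual)
x = np.array([0.5, 0.8])    # input pattern
w = np.array([0.4, -0.2])   # current weights
b = 0.1                     # bias
t = 1.0                     # target output
eta = 0.25                  # learning rate

# 1. Forward propagation
y = sigmoid(np.dot(w, x) + b)
# 2-3. Error and local gradient (delta) via the chain rule, for E = 1/2 (t - y)^2
delta = (t - y) * y * (1 - y)
# 4. Update weights and bias
w_new = w + eta * delta * x
b_new = b + eta * delta
print("new weights:", w_new, "new bias:", b_new)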
3. Implement the Travelling Salesman Problem using a Genetic Algorithm
Implementing the Traveling Salesman Problem (TSP) using a Genetic Algorithm (GA)
involves representing candidate solutions as permutations of cities, creating a population of
such solutions, and evolving them over generations to find an optimal or near-optimal solution.
Here's a basic outline of how you can implement it:
import random
import math

# Example city coordinates (illustrative values for demonstration)
cities = {
    'A': (0, 0), 'B': (1, 5), 'C': (5, 2), 'D': (6, 6), 'E': (8, 3),
}

def distance(city1, city2):
    # Euclidean distance between two cities
    x1, y1 = cities[city1]
    x2, y2 = cities[city2]
    return math.hypot(x2 - x1, y2 - y1)

def total_distance(tour):
    # Length of the closed tour (returns to the starting city)
    return sum(distance(tour[i], tour[i + 1]) for i in range(len(tour) - 1)) + distance(tour[-1], tour[0])

def generate_population(size):
    # Each individual is a random permutation of the cities
    return [random.sample(list(cities.keys()), len(cities)) for _ in range(size)]

# Tournament selection
def select_parents(population, num_parents, tournament_size=3):
    parents = []
    for _ in range(num_parents):
        contenders = random.sample(population, tournament_size)
        parents.append(min(contenders, key=total_distance))
    return parents

# Order 1 crossover
def crossover(parent1, parent2):
    start, end = sorted(random.sample(range(len(parent1)), 2))
    offspring = parent1[start:end]
    missing_cities = [city for city in parent2 if city not in offspring]
    return offspring + missing_cities

# Swap mutation
def mutate(individual, mutation_rate):
    if random.random() < mutation_rate:
        idx1, idx2 = random.sample(range(len(individual)), 2)
        individual[idx1], individual[idx2] = individual[idx2], individual[idx1]
    return individual

# Genetic Algorithm
def genetic_algorithm(num_generations, population_size, num_parents, mutation_rate):
    population = generate_population(population_size)
    for generation in range(num_generations):
        parents = select_parents(population, num_parents)
        offspring = []
        while len(offspring) < population_size:
            parent1, parent2 = random.sample(parents, 2)
            child = crossover(parent1, parent2)
            child = mutate(child, mutation_rate)
            offspring.append(child)
        population = offspring
    best_tour = min(population, key=total_distance)
    return best_tour, total_distance(best_tour)

# Example usage
best_tour, min_distance = genetic_algorithm(num_generations=100, population_size=100,
                                            num_parents=50, mutation_rate=0.1)
print("Best tour:", best_tour)
print("Minimum distance:", min_distance)
This implementation uses a simple genetic algorithm with tournament selection, order 1
crossover, and swap mutation. You may need to adjust parameters such as the population size,
number of generations, and mutation rate to find a good solution for your specific TSP
instance. Additionally, you can experiment with other genetic operators and strategies to
improve the performance of the algorithm.
4. Optimization of problem like Job shop scheduling using Genetic
algorithm
Job shop scheduling is a classic optimization problem where a set of jobs need to be processed
on a set of machines, each with its own processing time. The objective is to find an optimal
schedule that minimizes a certain criterion, such as the total completion time (makespan), total
tardiness, or maximum lateness.
Here's how you can approach job shop scheduling using a Genetic Algorithm (GA):
1. Representation: Represent each solution (schedule) as a permutation of jobs, where the order
of the permutation gives the order in which the jobs are processed on the machines.
2. Initialization: Generate an initial population of random schedules.
3. Fitness Evaluation: Evaluate the fitness of each schedule based on the chosen objective
function (e.g., makespan, total tardiness).
4. Selection: Select individuals (schedules) from the population to serve as parents for the next
generation. Common selection methods include roulette wheel selection, tournament selection,
or rank-based selection.
5. Crossover: Recombine pairs of parent schedules (e.g., with an order crossover) to produce
offspring.
6. Mutation: Randomly perturb offspring (e.g., by swapping two jobs) with a small probability
to maintain diversity in the population.
7. Replacement: Replace the current population with the new generation of individuals,
potentially employing elitism to retain the best solutions.
8. Termination: Repeat steps 3-7 for a fixed number of generations or until a termination
condition is met, such as reaching a maximum number of iterations or finding a satisfactory
solution.
Here's a high-level Python-like pseudocode for implementing GA for job shop scheduling:
import random

def initialize_population(population_size):
    # Generate random initial population of schedules (job permutations)
    pass

def evaluate_fitness(schedule):
    # Calculate fitness of a schedule (e.g., makespan, total tardiness)
    pass

def genetic_algorithm(num_generations, population_size, num_parents, mutation_rate):
    # Skeleton GA loop; selection, crossover, and mutation are placeholders
    population = initialize_population(population_size)
    for generation in range(num_generations):
        parents = selection(population, num_parents)
        offspring = [mutation(crossover(*random.sample(parents, 2)), mutation_rate)
                     for _ in range(population_size)]
        population = offspring
    best = min(population, key=evaluate_fitness)
    return best, evaluate_fitness(best)

# Example usage
best_solution, best_fitness = genetic_algorithm(num_generations=100, population_size=100,
                                                num_parents=50, mutation_rate=0.1)
print("Best solution:", best_solution)
print("Best fitness:", best_fitness)
This pseudocode provides a framework for implementing a Genetic Algorithm for job shop
scheduling. You need to replace placeholder functions like initialize_population,
evaluate_fitness, selection, crossover, and mutation with actual implementations tailored to
your specific problem instance and objective function. Additionally, you may need to fine-
tune parameters like population size, number of generations, number of parents, and mutation
rate to achieve optimal performance.
5. Using Adaline net, generate XOR function with bipolar inputs and
targets.
To implement the XOR function using an Adaline neural network with bipolar inputs and
targets, we need to follow these steps:
1. Define the Adaline neural network architecture.
2. Initialize the network's weights and bias.
3. Define the activation function (linear activation in the case of Adaline).
4. Train the network using the XOR dataset.
5. Test the trained network.
import numpy as np

class Adaline:
    def __init__(self, input_size):
        self.input_size = input_size
        self.weights = np.random.randn(input_size)
        self.bias = np.random.randn()
        self.learning_rate = 0.1
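The class above only initializes the parameters. A minimal sketch of the remaining steps, assuming delta-rule (least-mean-squares) training and a bipolar sign output, could look like this; the method names and epoch count are illustrative, and the methods continue the class above:

    def train(self, X, targets, epochs=100):
        # Delta rule (LMS): nudge weights toward the least-squares linear fit
        for _ in range(epochs):
            for x, t in zip(X, targets):
                y = np.dot(self.weights, x) + self.bias  # linear activation
                error = t - y
                self.weights += self.learning_rate * error * x
                self.bias += self.learning_rate * error

    def predict(self, x):
        # Bipolar output: sign of the linear response
        return 1 if np.dot(self.weights, x) + self.bias >= 0 else -1

# XOR with bipolar inputs and targets
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
targets = np.array([-1, 1, 1, -1])
net = Adaline(input_size=2)
net.train(X, targets)
print([net.predict(x) for x in X])  # a single Adaline cannot match all four targets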
Note: Adaline is a linear classifier, so it can only learn linearly separable functions. The
XOR function is not linearly separable, so a single Adaline cannot classify all four patterns
correctly: delta-rule training converges to the least-mean-squares linear fit rather than a
perfect solution. In practice, the XOR function is realized by combining several Adaline
units into a Madaline network.
6. Design fuzzy inference system for a given problem.
To design a fuzzy inference system (FIS) for a given problem, we need to follow these steps:
Let's design a fuzzy inference system for a simple problem: determining the amount of water
to be poured into a plant based on the soil moisture level and the temperature.
1. Identify inputs and outputs:
• Soil moisture level and temperature are the inputs; the amount of water to pour is the output.
2. Fuzzification:
• Define linguistic terms for each variable (e.g., dry, moist, wet for soil moisture; cold, hot for temperature; low, medium, high for water) and assign a membership function to each term.
3. Rule base:
• Write IF-THEN rules relating the inputs to the output (e.g., IF soil is dry AND temperature is hot THEN water amount is high).
4. Inference:
• Apply the rules to the fuzzified inputs to compute the firing strength of each rule and the resulting fuzzy output.
5. Aggregation:
• Combine the fuzzy outputs of all fired rules into a single fuzzy set.
6. Defuzzification:
• Convert fuzzy output into crisp values using methods such as centroid, mean of maximum (MOM), or weighted average.
Here's a sketch of how this could be implemented in Python with the scikit-fuzzy library; the universes of discourse, membership functions, and rules below are illustrative choices, not fixed by the problem:
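import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Input and output variables (universes of discourse are illustrative)
moisture = ctrl.Antecedent(np.arange(0, 101, 1), 'moisture')       # % soil moisture
temperature = ctrl.Antecedent(np.arange(0, 51, 1), 'temperature')  # degrees Celsius
water = ctrl.Consequent(np.arange(0, 101, 1), 'water')             # % of full watering

# Triangular membership functions for each linguistic term
moisture['dry'] = fuzz.trimf(moisture.universe, [0, 0, 50])
moisture['moist'] = fuzz.trimf(moisture.universe, [25, 50, 75])
moisture['wet'] = fuzz.trimf(moisture.universe, [50, 100, 100])
temperature['cold'] = fuzz.trimf(temperature.universe, [0, 0, 25])
temperature['hot'] = fuzz.trimf(temperature.universe, [20, 50, 50])
water['low'] = fuzz.trimf(water.universe, [0, 0, 50])
water['medium'] = fuzz.trimf(water.universe, [25, 50, 75])
water['high'] = fuzz.trimf(water.universe, [50, 100, 100])

# Rule base
rules = [
    ctrl.Rule(moisture['dry'] & temperature['hot'], water['high']),
    ctrl.Rule(moisture['dry'] & temperature['cold'], water['medium']),
    ctrl.Rule(moisture['moist'] & temperature['hot'], water['medium']),
    ctrl.Rule(moisture['wet'], water['low']),
]

# Build the control system and run one inference (centroid defuzzification by default)
watering = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
watering.input['moisture'] = 30
watering.input['temperature'] = 35
watering.compute()
print("Water amount:", watering.output['water'])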
You would need to adapt the variable ranges, membership functions, and rules to your specific
problem; libraries such as scikit-fuzzy in Python handle the inference and defuzzification
computations for you.
7. Maximize the function y = 3x² + 2 for some given values of x using a
Genetic Algorithm
1. Define the problem and its representation: In this case, we want to find the maximum value of
y for given values of x; each candidate solution encodes a value of x within the allowed range.
2. Initialize the population: Generate a random population of candidate x values within the given
range.
3. Define the fitness function: The fitness function will evaluate how close a given solution is to
the maximum value of y.
4. Implement genetic operators: We'll define selection, crossover, and mutation operations to
evolve the population.
5. Implement the genetic algorithm: We'll use the genetic operators to iteratively evolve the
population and find the optimal solution.
import numpy as np
# Genetic Algorithm
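# A minimal sketch of the missing pieces, under these assumptions: each
# individual is a real value of x, parents are the fittest individuals,
# crossover averages two parents, and mutation adds Gaussian noise.
# (These design choices and helper names are illustrative, not from the manual.)
def fitness(x):
    # Objective to maximize: y = 3x^2 + 2
    return 3 * x ** 2 + 2

def genetic_algorithm(num_generations, population_size, num_parents,
                      mutation_rate, x_range):
    low, high = x_range
    population = np.random.uniform(low, high, population_size)
    for _ in range(num_generations):
        # Selection: keep the fittest individuals as parents
        parents = sorted(population, key=fitness, reverse=True)[:num_parents]
        offspring = []
        while len(offspring) < population_size:
            p1, p2 = np.random.choice(parents, 2)
            child = (p1 + p2) / 2.0  # arithmetic crossover
            if np.random.rand() < mutation_rate:
                child += np.random.normal(0.0, 0.1 * (high - low))  # Gaussian mutation
            offspring.append(np.clip(child, low, high))  # stay inside x_range
        population = np.array(offspring)
    best = max(population, key=fitness)
    return best, fitness(best)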
# Example usage
best_solution, max_value = genetic_algorithm(num_generations=100, population_size=100,
num_parents=50, mutation_rate=0.1, x_range=(-10, 10))
print("Best solution:", best_solution)
print("Maximum value of y:", max_value)
In this implementation, each individual is a candidate value of x, selection keeps the fittest
individuals, crossover averages two parents, and mutation adds a small random perturbation;
since y = 3x² + 2 grows with |x|, the population converges toward the boundary of the given
range (here x = ±10, giving y = 302).