Subject Code: 18CS3064: Time: 2 Hours Max. Marks: 50 Key and Scheme of Evaluation
Functions can have "hills and valleys": places where they reach a minimum or maximum
value.
It may not be the minimum or maximum for the whole function, but locally it is.
Local Maximum
First we need to choose an interval:
Then we can say that a local maximum occurs at a point a where:
The height of the function at a is greater than (or equal to) the height anywhere else in that
interval.
Or, more briefly: f(a) ≥ f(x) for all x in the interval.
Note: a should be inside the interval, not at one end or the other.
The maximum or minimum over the entire function is called an "Absolute" or "Global"
maximum or minimum.
There is only one global maximum (and one global minimum) but there can be more than one
local maximum or minimum.
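A small R illustration of the difference between a local and a global maximum; the function f below is an assumed example with several hills and valleys:
f=function(x) sin(x)+sin(2*x)              # assumed example function
optimize(f,interval=c(3,4),maximum=TRUE)   # local maximum inside the interval [3, 4] (near x = 3.71)
x=seq(0,5,by=0.01)
x[which.max(f(x))]                         # location of the global maximum on [0, 5] (near x = 0.94)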
(1). __________ is the class of the object defined by x <- c(4, TRUE).
(2). _______________ returns TRUE then X can be termed as a matrix data object.
(3). Suppose I have a list defined as x <- list(2, "a", "b", TRUE). How can I fetch character
vector "b" from the list?
(8). ___________ function is used to create box plots for visualization in the R programming
language.
(9). Name the function used to extract the first name from the string “Mrs. Jake Luther”.
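For reference, a short R session illustrating the behaviour these blanks refer to (the strsplit call shown for (9) is just one possible way to pull the name out of the string):
x <- c(4, TRUE)                            # TRUE is coerced to 1, so the class is "numeric"
class(x)
X <- matrix(1:6, nrow = 2)
is.matrix(X)                               # returns TRUE, so X is a matrix data object
x <- list(2, "a", "b", TRUE)
x[[3]]                                     # fetches the character vector "b" from the list
boxplot(mtcars$mpg)                        # boxplot() creates box plots
strsplit("Mrs. Jake Luther", " ")[[1]][2]  # "Jake"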
Answers
3. Identify the different ways of representing solutions and evaluation functions for the bag
prices problem and the Sum of Bits problem.
Representation 4M
Evaluation function 4M
Representation of a problem
• A major decision when using modern optimization methods is how to represent a possible
solution.
• This decision sets the search space and its size, and thus has an impact on how new
solutions are searched.
• There are several possibilities for representing a solution: binary, integer, character,
real-valued and ordered vectors, matrices, trees and virtually any computer-based
representation form (e.g., a computer program) can be used to encode solutions.
Evaluation function
• Another important decision for handling optimization tasks is the definition of the
evaluation function, which should translate the desired goal (or goals) to be maximized or
minimized.
Bag prices – a solution can be represented as an integer vector of D prices (one per bag), each
within 1 to 1000; the evaluation function is the total profit computed from the sales and cost of
each bag (the sales and cost formulas applied in Question 9), to be maximized.
Sum of bits – a solution can be represented as a binary vector of D bits; the evaluation function
is the number of bits set to 1, to be maximized. A minimal R sketch of both follows.
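A minimal R sketch of one possible representation and evaluation function for each problem (the sales and cost formulas follow the ones applied in Question 9; the marketing factors m and cost factors u vary per instance and are passed as arguments):
# Bag prices: a solution x is a vector of D integer prices, each in 1..1000
profit=function(x,m,u)                    # m: marketing factors, u: per-unit cost factors
{ sales=round((1000/log(x+200)-141)*m)    # expected sales for each price (formula of Question 9)
  cost=100+u*sales                        # production cost of each bag (formula of Question 9)
  sum(x*sales-cost)                       # total profit, to be maximized
}

# Sum of bits: a solution is a binary vector; the evaluation counts the ones
sumbits=function(x) sum(x)                # to be maximized
sumbits(c(1,0,1,1,0))                     # evaluates to 3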
5. Answer 5a and 5b
5a. Outline the different methods used in dealing with infeasible solutions.
- death-penalty
- penalty-weights
- repair
- only feasible solutions (decoders and special operators)
• Death-penalty is a simple method, which involves assigning a very large penalty value,
such that infeasible solutions are quickly discarded by the search.
• This method is not very efficient and often puts the search effort into discarding
solutions rather than finding the optimum value.
• Penalty-weights adds a penalty term to the evaluation function; quite often the evaluation
function takes the form f(s) = Objective(s) – Penalty(s) (a small sketch of this form is
given after this list).
• The main problem with penalty-weights is that it is often difficult to find the ideal
weights, in particular when several constraints are involved.
• Repair transforms an infeasible solution into a feasible one.
• Finally, the approaches that only generate feasible solutions are based on decoders and
special operators.
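A minimal R sketch of the penalty-weights form f(s) = Objective(s) – Penalty(s); the objective, the constraint and the weight below are assumed values chosen only for illustration:
W=100                                   # assumed penalty weight
budget=1200                             # assumed constraint: the two prices must sum to at most 1200
feval=function(s)
{ objective=sum(5*s)                    # assumed objective to be maximized
  violation=max(0,sum(s)-budget)        # amount by which the constraint is violated
  objective-W*violation                 # infeasible solutions are penalized, not discarded
}
feval(c(400,500))                       # feasible: no penalty applied
feval(c(900,900))                       # infeasible: the penalty lowers the evaluation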
5b. Using R-Programming, write a user defined function that finds the sum of the digits in a
given number.
Partial Logic 5M
Correct Logic 8.5M
sumofdig = function(n)
{
  s = 0                               # running sum of the digits
  while (n > 0) {
    r = n %% 10                       # extract the last digit
    s = s + r                         # add it to the sum
    n = n %/% 10                      # drop the last digit
  }
  print(paste("Sum of the digits is :", s))
}
n = as.integer(readline(prompt = "Enter a number :"))
sumofdig(n)
Input
Enter a number :121
Output
"Sum of the digits is : 4"
6. Answer 6a and 6b
6b. Using R programming, create a data frame for the following data, then find the details of the
student having the lowest grade and the details of the students who joined the CSE branch.
Retrieving student having lowest grade 4.5M
Retrieving details of students who joined in CSE Branch 4M
retval <- subset(data, Grade == min(Grade))
print(retval)
Output
retval <- subset(data, Branch == "CSE")
print(retval)
Output
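Since the student data table itself is not reproduced here, the data frame below is purely hypothetical (invented names, branches and grades) and only illustrates how the two subset() calls above behave:
data <- data.frame(
  Name   = c("Asha", "Ravi", "Kiran"),   # hypothetical students
  Branch = c("CSE", "ECE", "CSE"),
  Grade  = c(8.1, 6.5, 9.0)
)
subset(data, Grade == min(Grade))        # the student with the lowest grade (Ravi)
subset(data, Branch == "CSE")            # the students who joined the CSE branch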
7. Distinguish between the Full Blind Search algorithm and the Depth-First Search algorithm.
• Here we will discuss two blind search functions: fsearch and dfsearch.
• The former is a simpler function that requires the search space to be explicitly defined in
a matrix in the format solutions × D (argument Search), while the latter performs a
recursive implementation of the depth-first search and requires the definition of the
domain values for each variable to be optimized (argument domain).
• Both functions receive as arguments the evaluation function (FUN), the optimization type
(type, a character with "min" or "max") and extra arguments (denoted by ...) that
might be used by the evaluation function FUN. A minimal sketch of both ideas is given
after these points.
• This strategy requires much less memory than breadth-first search, since it only needs to
store a single path from the root of the tree down to the leaf node.
• However, it is potentially incomplete, since it keeps going down one branch until it finds a
dead end, and it is non-optimal: if there is a solution at the fourth level of the first branch
tried and a solution at the second level of the next branch over, the solution at the fourth
level will be returned.
• The time complexity of depth-first search is O(b^m), where b is the branching factor (2 for
binary trees) and m is the maximum depth of the tree. Its space complexity is only
O(b·m).
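A minimal base-R sketch of the two ideas (these are illustrative functions, not the exact fsearch/dfsearch implementations): a full blind search that evaluates an explicit matrix of candidate solutions, and a recursive depth-first enumeration built from the domain values of each variable.
fsearch_sketch=function(search,FUN,type="min",...)   # search: solutions x D matrix
{ f=apply(search,1,FUN,...)                          # evaluate every row (candidate solution)
  ib=switch(type,min=which.min(f),max=which.max(f))  # index of the best candidate
  list(sol=search[ib,],eval=f[ib])
}

dfsearch_sketch=function(domain,FUN,type="min",sol=c(),...)
{ if(length(domain)==0)                              # a complete solution was built: evaluate it
    return(list(sol=sol,eval=FUN(sol,...)))
  best=NULL
  for(v in domain[[1]])                              # go deep on each value of the next variable
  { r=dfsearch_sketch(domain[-1],FUN,type,c(sol,v),...)
    if(is.null(best) || (type=="min" && r$eval<best$eval) ||
       (type=="max" && r$eval>best$eval)) best=r
  }
  best
}

# usage: maximize the sum of three bits (best solution is 1 1 1 with evaluation 3)
fsearch_sketch(as.matrix(expand.grid(0:1,0:1,0:1)),sum,"max")
dfsearch_sketch(list(0:1,0:1,0:1),sum,"max")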
• Local maximum: at a local maximum, all neighboring states have values that are worse
than the current state. Since hill climbing uses a greedy approach, it will not move to a
worse state and terminates, so the process ends even though a better solution may exist (a
tiny sketch illustrating this is given after this list).
• Plateau: on a plateau all neighbors have the same value, hence it is not possible to select
the best direction to move in.
• Ridge: any point on a ridge can look like a peak because movement in all possible
directions is downward, hence the algorithm stops when it reaches this state.
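A tiny R sketch of the greedy behaviour described above; the function and the starting point are assumed for illustration, and depending on where the search starts it stops at a local maximum:
f=function(x) sin(x)+sin(2*x)            # assumed function with several hills and valleys
x=3.5                                     # assumed starting point
repeat
{ cand=c(x-0.05,x+0.05)                   # the two neighboring states
  best=cand[which.max(f(cand))]           # greedy: pick the better neighbor
  if(f(best)<=f(x)) break                 # no neighbor improves: stop climbing
  x=best
}
c(x,f(x))   # stops near x = 3.71 (a local maximum), not at the global maximum near x = 0.94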
9. Find maximum profit obtained for bag prices problem when D=2 with x= (413,395) and m=
(1.25,1).
X=413 4M
X=395 4M
Sales(413) = round((1000/ln(413+200) - 141)*1.25)
           = round((1000/6.418349 - 141)*1.25)
           = round((155.80292 - 141)*1.25)
           = round(18.50365) = 19
Sales(395) = round((1000/ln(395+200) - 141)*1)
           = round((1000/6.38856 - 141)*1)
           = round((156.529797 - 141)*1)
           = round(15.529797) = 16
Cost(413) = 100 + 15*19 = 385
Cost(395) = 100 + 10*16 = 260
Profit = (413*19 - 385) + (395*16 - 260) = 7462 + 6060 = 13522
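The same numbers can be checked directly in R (the cost factors 15 and 10 are the ones used in the cost expressions above):
x=c(413,395); m=c(1.25,1); u=c(15,10)
sales=round((1000/log(x+200)-141)*m)   # 19 16
cost=100+u*sales                       # 385 260
sum(x*sales-cost)                      # maximum profit: 13522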
10. Illustrate the concept of Simulated Annealing Search technique with neat diagram.
• We need some mechanism that can help us escape from the trap of local minima;
simulated annealing is one such method.
• Simulated annealing is a variation of the hill climbing technique that was proposed in the
1980s and that is inspired by the annealing phenomenon of metallurgy, which involves
first heating a particular metal and then performing a controlled cooling.
• This single-state method differs from the hill climbing search by adopting a control
temperature parameter (T) that is used to compute the probability of accepting inferior
solutions.
• Annealing is the process in which materials are raised to high energy levels for melting
and are then cooled to a solid state.
• In contrast with stochastic hill climbing, which adopts a fixed value for T, simulated
annealing uses a variable temperature value during the search.
• The method starts with a high temperature and then gradually decreases (cooling process)
the control parameter until a small value is achieved (similar to the hill climbing).
• It should be noted that for high temperatures the method is almost equivalent to Monte
Carlo search, thus behaving more like a global search method, while for low temperatures
the method is similar to the hill climbing local search (a sketch of the acceptance rule is
given after these points).
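A minimal R sketch of the temperature-dependent acceptance rule (minimization is assumed, and the Metropolis form exp(-Δ/T) together with the cooling factor 0.1 are illustrative choices, not prescribed by the answer above):
accept=function(f_new,f_cur,T)            # should a candidate solution be accepted?
{ if(f_new<=f_cur) return(TRUE)           # better (or equal) solutions are always accepted
  runif(1)<exp(-(f_new-f_cur)/T)          # inferior solutions: probability high for large T
}
T=1000
for(i in 1:5)
{ cat("T =",T," accept a solution worse by 50:",accept(150,100,T),"\n")
  T=T*0.1                                 # cooling: gradually decrease the temperature
}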
11a. Illustrate the Grid Search optimization algorithm with neat diagram.
• Grid search reduces the space of solutions by implementing a regular hyper-dimensional
search with a given step size.
• Uniform design search (Huang et al. 2007) is similar to the standard grid search method,
except that it uses a different type of grid, with fewer search points.
• Nested grid search is another variant that uses several grid search levels: a first grid is
applied over the whole search space, then a second grid level is applied over the best
point, searching over a smaller area and with a smaller step size, and so on.
• Nested search is not a pure blind method, since it incorporates a greedy heuristic, where
the next level search is guided by the result of the current level search.
Algorithm
gsearch=function(step,lower,upper,FUN,type="min",...)
{ D=length(step) # dimension
  domain=vector("list",D); L=vector(length=D) # domain values and their lengths
  for(i in 1:D)
  { domain[[i]]=seq(lower[i],upper[i],by=step[i])
    L[i]=length(domain[[i]])
  }
  LS=prod(L) # total number of grid points
  s=matrix(nrow=LS,ncol=D) # search space: one row per candidate solution
  for(i in 1:D)
  { if(i==1) E=1 else E=E*L[i-1]
    s[,i]=rep(domain[[i]],length.out=LS,each=E)
  }
  f=apply(s,1,FUN,...) # evaluate all grid points
  ib=switch(type,min=which.min(f),max=which.max(f)) # index of the best point
  return(list(sol=s[ib,],eval=f[ib]))
}
11b. Using R-Programming, Apply Grid search for Bag Prices problem.
The gsearch function defined in 11a is reused here, together with an evaluation (profit) function
for the bag prices problem.
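A hedged sketch of the profit function needed by the call below; the sales and cost formulas follow Question 9, but the marketing factors m and the per-bag cost factors u are assumed values, so the best evaluation obtained with this sketch may differ from the output shown:
m=c(2,1.75,1.5,1.25,1)                     # assumed marketing factors for the 5 bags
u=c(30,25,20,15,10)                        # assumed per-unit cost factors for the 5 bags
profit=function(x)                         # x: vector of 5 bag prices
{ sales=round((1000/log(x+200)-141)*m)     # expected sales (formula of Question 9)
  sales=pmax(sales,0)                      # sales cannot be negative
  cost=100+u*sales                         # production cost (formula of Question 9)
  sum(x*sales-cost)                        # total profit, to be maximized
}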
S1=gsearch(rep(100,5),rep(1,5),rep(1000,5),profit,"max")
Output
gsearch best s: 401 401 401 401 501 f: 43142 time: 4.149 s
12a. Identify the differences between Tabu Search and Simulated Annealing Search technique.
Simulated Annealing
• We need some mechanism that can help us escape from the trap of local minima;
simulated annealing is one such method.
• Simulated annealing is a variation of the hill climbing technique that was proposed in the
1980s and that is inspired by the annealing phenomenon of metallurgy, which involves
first heating a particular metal and then performing a controlled cooling.
• This single-state method differs from the hill climbing search by adopting a control
temperature parameter (T) that is used to compute the probability of accepting inferior
solutions.
• Annealing is the process in which materials are raised to high energy levels for melting
and are then cooled to a solid state.
• In contrast with stochastic hill climbing, which adopts a fixed value for T, simulated
annealing uses a variable temperature value during the search.
• The method starts with a high temperature and then gradually decreases (cooling process)
the control parameter until a small value is achieved (similar to the hill climbing).
• It should be noted that for high temperatures the method is almost equivalent to Monte
Carlo search, thus behaving more like a global search method, while for low temperatures
the method is similar to the hill climbing local search.
Tabu Search
• Tabu search was created by Glover (1986) and uses the concept of “memory” to force the
search into new areas.
• The algorithm is a variation of the hill climbing method that includes a tabu list of
length L, which stores the most recent solutions; these become “tabu” and thus cannot be
used when selecting a new solution.
• The intention is to keep a short-term memory of recent changes, preventing future moves
from undoing these changes.
• It explores the solution space beyond the local optimum by using the tabu list.
• Tabu list: contains the set of solutions that are forbidden and are not to be used anymore.
• Tabu tenure: the same move cannot be repeated for a certain period of time (number of
iterations).
• Worsening moves can be accepted if no improving move is available (for example, when
the search is stuck at a strict local minimum).
• Tabu search was devised for discrete spaces and combinatorial problems (e.g., the
traveling salesman problem).
• However, the method can be extended to work with real-valued points if a similarity
function is used to check whether a solution is very close to a member of the tabu list (a
tiny sketch of the tabu-list idea follows these points).
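A tiny base-R sketch of these ideas on the sum of bits task (an illustrative loop, not the tabuSearch package nor the textbook implementation): recently flipped bits are kept in the tabu list and cannot be flipped again for a few iterations, and the best admissible move is taken even when it worsens the current solution.
D=8; s=rep(0,D); best=s                     # current and best solutions (all bits start at 0)
tabu=integer(0); L=4                        # tabu list of recently flipped bit positions
for(it in 1:30)
{ cand=setdiff(1:D,tabu)                    # bit-flip moves that are not currently tabu
  f=sapply(cand,function(i){ v=s; v[i]=1-v[i]; sum(v) })  # evaluate each admissible move
  i=cand[which.max(f)]                      # best admissible move (may be a worsening one)
  s[i]=1-s[i]                               # apply the move
  tabu=c(tabu,i); if(length(tabu)>L) tabu=tabu[-1]        # short-term memory of length L
  if(sum(s)>sum(best)) best=s               # keep the best solution found so far
}
cat("best:",best,"f:",sum(best),"\n")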
12b. Using R-Programming, Solve Bag Prices Problem using Simulated Annealing Search
Optimization technique.
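The call to optim() below assumes that an evaluation function (eval) and a neighbourhood change function (ichange) already exist. A hedged sketch of such helpers, reusing the profit function sketched for 11b; the ±100 change step is an assumed value:
eval=function(x) -profit(round(x))          # optim minimizes, so the profit is negated
ichange=function(par,...)                   # SANN neighbour function: perturb one price
{ i=sample(1:length(par),1)                 # choose one bag at random
  par[i]=par[i]+sample(-100:100,1)          # change its price by an assumed random step
  par[i]=min(max(par[i],1),1000)            # keep the price inside 1..1000
  par
}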
D=5 # number of bags
C=list(maxit=10000,temp=1000,trace=TRUE,REPORT=10000) # SANN control: iterations and initial temperature
s=sample(1:1000,D,replace=TRUE) # assumed random initial prices (optim needs a starting point)
s=optim(s,eval,gr=ichange,method="SANN",control=C)
cat("best:",s$par,"profit:",abs(s$value),"\n")
Output