Lec 9 & 10 (A) Multi-Objective Optimization
Optimization
Dr. Sobia Tariq Javed
Outline
• The objectives of this lecture are to:
• Introduce multi-objective optimization
• Address the design issues of evolutionary multi-objective optimization algorithms
• Explore ways to handle constraints
Multi-objective Optimization
• Multiple Objectives: The problem has several goals to optimize, often conflicting (e.g., cost vs. quality).
[Figure: comparing two solutions: X is better than the other]
Multi-objective optimization
Objective function – Minimize f(x) = (f1(x), f2(x), f3(x))
[Figure: two solutions compared on f1(x), f2(x), and f3(x); one is better than the other in every objective]
Multi-objective optimization
Objective function – Minimize f(x) = (f1(x), f2(x), f3(x))
[Figure: two solutions compared on f1(x), f2(x), and f3(x); each is better in some objectives and worse in others, so is one better than the other?]
Multi-Objective - Dominance Relationship
• A solution x dominates another solution y if
• x is no worse than y in all objectives, and
• x is strictly better than y in at least one of the objectives.
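A minimal code sketch of this dominance test (assuming every objective is to be minimized, which is an assumption rather than something stated on the slide):

```python
from typing import Sequence

def dominates(x: Sequence[float], y: Sequence[float]) -> bool:
    """Return True if x dominates y, assuming all objectives are minimized.

    x dominates y when x is no worse than y in every objective
    and strictly better in at least one.
    """
    no_worse_everywhere = all(xi <= yi for xi, yi in zip(x, y))
    strictly_better_somewhere = any(xi < yi for xi, yi in zip(x, y))
    return no_worse_everywhere and strictly_better_somewhere
```

For example, dominates((2, 3), (2, 5)) is True, while dominates((2, 3), (1, 5)) and dominates((1, 5), (2, 3)) are both False: those two solutions are incomparable.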
Dominated Solution
• A solution x is said to dominate the other solution y if
• the solution x is no worse than y in all objectives, and
• the solution x is strictly better than y in at least one objective
• 1 vs 2: ?
• 1 vs 5: ?
• 1 vs 4: ?
Pareto Optimal Front
• Non-Dominated Solution Set
• The non-dominated solution set consists of all solutions within a given set that are not dominated by any other solution in the set.
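A small sketch of extracting that set, reusing the `dominates` helper sketched earlier (minimization assumed):

```python
def non_dominated_set(points):
    """Keep only the points not dominated by any other point in the list."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Example: the last point is dominated by (2, 2), the rest are non-dominated
# non_dominated_set([(1, 5), (2, 2), (3, 1), (4, 4)]) -> [(1, 5), (2, 2), (3, 1)]
```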
Non-dominated Solutions: 2D Case
[Figure: solutions plotted in objective space with f1(x) and f2(x) on the axes]
Dominance in 2D
[Figure: objective space with axes f1(x) and f2(x); the regions around a solution are labelled dominated, dominating, and incomparable]
Pareto Optimal Front
[Figure: Pareto-optimal fronts for the four objective-direction combinations: min-min, min-max, max-min, max-max]
Pareto Optimal Solutions
• Pareto optimal solutions are the non-dominated solutions
[Figure: solutions labelled 1-5 plotted in objective space with f1 (maximize) on the horizontal axis and f2 (minimize) on the vertical axis]
Decision and Objective Space
[Figure: decision space mapped into objective space (f1(x), f2(x)); Pareto front = { }]
Algorithm Design issues
Optimization Goal?
• Find all Pareto-optimal solutions?
• How should the decision maker handle 10,000 solutions?
• Find a representative subset of the Pareto set?
• Many problems are NP-hard
• What does representative really mean?
• Find a good approximation of the Pareto set?
• What is a good approximation?
• How to formalize the intuitive understanding:
• Close to the Pareto front
• Well distributed
Algorithm Design issues
Evolutionary Multiobjective Optimization (EMO) Principles
Two ideal goals of multi-objective optimization:
• Convergence
• Find a (finite) set of solutions which lie on the Pareto-optimal front.
• Diversity
• Find a set of solutions which are diverse enough to represent the entire range of the Pareto-optimal front.
Basic aim: guiding the search towards the Pareto set and keeping a diverse set of non-dominated solutions.
Algorithm Design issues
1. Fitness assignment
2. Diversity preservation
3. Elitism
Parameters - Weights
• Lack of Adaptability:
• Fixed weights do not adapt dynamically during evolution.
• Requires multiple runs with different weights to explore the entire Pareto front.
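For reference, the weighted-sum fitness assignment that these fixed weights belong to can be sketched as follows (the objective values and weights are illustrative placeholders):

```python
def weighted_sum_fitness(objective_values, weights):
    """Aggregate several (minimized) objective values into one scalar fitness.

    A fixed weight vector turns the multi-objective problem into a
    single-objective one; different weight vectors must be tried in
    separate runs to trace out different parts of the Pareto front.
    """
    return sum(w * f for w, f in zip(weights, objective_values))

# Example: cost = 120.0 weighted 0.7, quality loss = 0.15 weighted 0.3
fitness = weighted_sum_fitness((120.0, 0.15), (0.7, 0.3))
```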
B) Criterion-based (Objective-based)
• These methods switch between objectives during the selection phase.
• Different objectives may influence the selection of individuals at different times.
• Example: Vector Evaluated Genetic Algorithm (VEGA) by David Schaffer.
• Subpopulations are created, each evaluated based on a different objective.
• Ensures that solutions for all objectives are explored separately.
B) Criterion-based (Objective-based)
• Advantages:
• Simple & Easy to Implement (can extend single-objective algorithms).
• Finds Individual Best Solutions (useful when independent optimal solutions for each objective are needed).
• Disadvantages:
• Each solution considers only one objective (ignores the trade-offs).
• May get stuck in local optima (reduces diversity in the final solution set).
B) Criterion-based -Example
• A company wants to design a lightweight yet strong material for an aircraft component. The optimization problem has two conflicting objectives:
• 1. Minimize Weight (kg) → Reduce material usage.
• 2. Maximize Strength (MPa) → Ensure durability.
• Apply VEGA
• 1) Initialization
• A population of 10 solutions is randomly selected from the available materials.
• Each solution is a material selection for the component.
• 2) Subpopulation Creation
• VEGA splits the population into two equal subpopulations:
• Subpopulation 1 → Optimizes Weight
• Subpopulation 2 → Optimizes Strength
B) Criterion-based -Example
• 3. Selection Process
• In Subpopulation 1, solutions are ranked by minimum weight.
• In Subpopulation 2, solutions are ranked by maximum strength.
• The best-ranked solutions from each subpopulation are combined to form the next generation (a code sketch of this selection step is given below).
• Interpretation:
• Solution A is the best → It is non-dominated (Front 1, DR = 0, DC = 2).
• Solution B is dominated only by A → It is in Front 2.
• Solution C is the worst → It is dominated by both A and B (Front 3).
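A minimal sketch of the VEGA initialization and selection steps above. The material values and the truncation-style ranking are illustrative simplifications; Schaffer's original VEGA uses proportional selection within each subpopulation.

```python
import random

# Each candidate material is a (weight_kg, strength_mpa) tuple; values are illustrative
population = [(random.uniform(5.0, 15.0), random.uniform(200.0, 600.0)) for _ in range(10)]

def vega_selection(pop):
    """One VEGA-style selection step: split the population into two subpopulations,
    rank each on a single objective, and merge the winners into a mating pool."""
    random.shuffle(pop)
    half = len(pop) // 2
    sub1, sub2 = pop[:half], pop[half:]
    # Subpopulation 1 optimizes weight (minimize), subpopulation 2 optimizes strength (maximize)
    best_by_weight = sorted(sub1, key=lambda m: m[0])[:half // 2]
    best_by_strength = sorted(sub2, key=lambda m: m[1], reverse=True)[:half // 2]
    # Crossover and mutation would be applied to this combined pool to form the next generation
    return best_by_weight + best_by_strength

mating_pool = vega_selection(population)
```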
C) Dominance-based
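Dominance-based fitness assignment ranks solutions by the Pareto front they fall into (Front 1, Front 2, ...), as in the interpretation above. A minimal sketch of that front-by-front sorting, reusing the earlier `dominates` helper (minimization assumed):

```python
def sort_into_fronts(points):
    """Peel off successive non-dominated fronts: Front 1, Front 2, ..."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Example:
# sort_into_fronts([(1, 1), (2, 3), (3, 2), (4, 4)])
# -> [[(1, 1)], [(2, 3), (3, 2)], [(4, 4)]]
```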
2. Diversity Preservation
• Ensuring a well-distributed Pareto front is crucial for obtaining a diverse set of trade-off solutions. Diversity preservation prevents solutions from clustering in certain regions while maintaining coverage across all objectives.
• The chance of an individual being selected depends on the density of solutions in its vicinity:
• Increases: if the number of nearby solutions is low (promotes diversity).
• Decreases: if the number of nearby solutions is high (prevents overcrowding).
2. Density-Based Selection:
• Solutions in less crowded regions are preferred.
• Ensures diversity by avoiding too many solutions in one part of the search space.
• Example: Crowding Distance in NSGA-II.
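A minimal sketch of the NSGA-II crowding-distance measure mentioned above: within one front, the boundary solutions for each objective receive infinite distance, and interior solutions accumulate the normalized gap between their two neighbours.

```python
def crowding_distance(front):
    """Return one crowding distance per solution in the front.

    `front` is a list of objective tuples belonging to the same Pareto front.
    Larger distance means a less crowded neighbourhood, which is preferred.
    """
    n = len(front)
    if n <= 2:
        return [float("inf")] * n
    distance = [0.0] * n
    num_objectives = len(front[0])
    for m in range(num_objectives):
        order = sorted(range(n), key=lambda i: front[i][m])
        f_min, f_max = front[order[0]][m], front[order[-1]][m]
        # Boundary solutions for this objective are always kept
        distance[order[0]] = distance[order[-1]] = float("inf")
        if f_max == f_min:
            continue
        for k in range(1, n - 1):
            i = order[k]
            distance[i] += (front[order[k + 1]][m] - front[order[k - 1]][m]) / (f_max - f_min)
    return distance
```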
3. Elitism-Environmental Selection
• Selection Criteria for Environmental Selection
3. Time-Based Selection:
• Each solution has an equal probability of being retained in the elite archive.
• Helps maintain exploration by not always favoring the best-known solutions.
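One possible sketch of such an elite archive update; the capacity limit and the uniform-random removal used to illustrate "equal probability of being retained" are assumptions, not a specific published scheme. It reuses the `dominates` helper from earlier.

```python
import random

def update_archive(archive, candidate, capacity=50):
    """Insert a candidate into an elite archive of non-dominated solutions."""
    if any(dominates(a, candidate) for a in archive):
        return archive  # candidate is dominated, archive unchanged
    # Remove archive members the candidate dominates, then add the candidate
    archive = [a for a in archive if not dominates(candidate, a)]
    archive.append(candidate)
    if len(archive) > capacity:
        # Time/uniform-style truncation: every member is equally likely to be dropped
        archive.remove(random.choice(archive))
    return archive
```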
Techniques to solve MOP
• 1) A Priori Methods
• Definition: Preferences are defined before the optimization process starts.
• How it Works: The decision-maker assigns weights, goals, or priorities to objectives in advance.
• Example: Weighted Sum Method (assigning importance to each objective).
• Pros: Simple and fast.
• Cons: Requires accurate preference estimation upfront, may miss other valuable solutions.
Techniques to solve MOP
• 2) A Posteriori Methods:
• Definition: Optimization is performed first, and preferences are applied after generating a set of solutions.
• How it Works: The algorithm finds the Pareto front, and the decision-maker selects the preferred solution from this set.
• Example: NSGA-II, MOPSO (evolutionary algorithms providing diverse solutions).
• Pros: Provides a range of trade-off solutions, flexible for decision-making.
• Cons: Computationally intensive, especially for large problems.
Techniques to solve MOP
• 3) Interactive Methods:
• Definition: Preferences are refined during the optimization process in an iterative manner.
• How it Works: The decision-maker gives feedback after each iteration to guide the search toward more preferred regions.
• Example: Reference Point-Based Methods, Trade-off Analysis.
• Pros: Dynamic adjustment to evolving preferences, better alignment with the decision-maker's goals.
• Cons: Requires continuous involvement, can be time-consuming.
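As a hedged illustration of the reference-point idea, candidates can be compared by their largest weighted deviation from an aspiration point supplied by the decision maker. The Chebyshev-style scalarization below is one common choice, not necessarily the exact method intended in the slides.

```python
def achievement(objectives, reference_point, weights):
    """Chebyshev-style achievement value: smaller means closer to the aspiration point.

    Assumes all objectives are minimized; weights scale the per-objective deviations.
    """
    return max(w * (f - r) for f, r, w in zip(objectives, reference_point, weights))

# Between iterations the decision maker adjusts reference_point (and possibly weights),
# and the search is steered toward solutions with the smallest achievement value.
```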
Techniques to solve MOP
Optimization and Decision Making