SSRN 5091503
Metaheuristic Algorithms

1*P. Anju Chowdary, 2Meka Sai Sri Hanish, 3BG. Shresta, 4T. Mahithi Reddy, 5Mandapati Bindusree, 6Gayathri Ramasamy

1*,2,3,4,5,6 Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa Vidyapeetham, Bengaluru, Karnataka, India.

1* bl.en.u4aie22170@bl.students.amrita.edu, 2 bl.en.u4aie22130@bl.students.amrita.edu, 3 bl.en.u4aie22106@bl.students.amrita.edu, 4 bl.en.u4aie22078@bl.students.amrita.edu, 5 bl.en.u4aie22078@bl.students.amrita.edu, 6 rgayathri@blr.amrita.edu
Abstract
1. Introduction
Job shop scheduling is a well-known problem in operations research and industrial engineering: a series of jobs must be assigned to different resources at specified times so as to optimize a chosen criterion. The task is both difficult and interesting because it is highly combinatorial and dynamic, and this combination makes traditional methods fall short in complex industrial environments. The consequences cut across sectors, appearing as decreased productivity and increased operational costs in manufacturing, logistics, and service industries. In their article [14], 'A genetic algorithm-based approach for solving job shop scheduling problem', Surekha and Sumathi propose solving the JSSP with evolutionary computation techniques, including genetic algorithms and fuzzy logic. The paper, which appeared in the International Journal on Artificial Intelligent Systems and Machine Learning, shows how the proposed algorithm performs through simulations on the FT10 benchmark problem. To address these challenges, this study examines metaheuristic algorithms - ACO, GA, and SA - and their combinations with FIFO and RR under a common framework. Our goal is to minimize the makespan, a critical scheduling efficiency measure, under the proposed sequencing strategies.
The novel algorithm given by K. Rameshkumar and co-authors in [2] utilizes a unique schedule builder to generate active schedules, producing high-quality solutions compared with existing approaches, including hybrid particle swarm and variable neighbourhood search PSO algorithms, across various benchmark job shop scheduling problems. The works in [3, 4] address decision-making complexity in job shops by employing discrete-event simulation (DES) to evaluate makespan, flow time, and tardiness-based measures, integrating MCDM methods to define job priorities, and demonstrating strong performance on large-scale real-world problems with static and dynamic job arrivals.
Beyond decision-making complexity, [4] also introduces a simulated annealing-based metaheuristic for permutation flow shop scheduling problems, using Pareto-optimal solutions to reduce both makespan and total elapsed time.
The primary objective of this research is to evaluate the performance of different scheduling algorithms through a custom-built simulation platform. This platform models predefined jobs across multiple machines, allowing a comprehensive evaluation of algorithmic effectiveness and adaptability in real-world scheduling situations. Through this study, we aim to provide valuable insights into the potential of metaheuristic algorithms for addressing job shop scheduling challenges, thereby contributing to the advancement of scheduling methodologies and fostering operational excellence across diverse industrial domains.
2. Literature Review
Hegen Xiong, Shuangyuan Shi and co-authors in [5] analyze JSSP entities, attributes, subtypes, and performance measures, along with a statistical analysis of 297 papers from 72 journals (2016-2021), offering valuable insights for researchers and scholars in scheduling research.
In [9], Mohammad Mahdi Ahmadian et al. proposed a variable neighbourhood search (VNS) approach for the just-in-time job shop scheduling (JIT-JSS) problem, minimizing earliness and tardiness penalties attached to the completion times of operations. The algorithm decomposes JIT-JSS into a collection of sub-problems and uses different neighbourhood structures, including new relaxed neighbourhoods, to produce optimal or near-optimal sequences of operations. Computational experiments on benchmark instances show the efficiency of the VNS algorithm, which obtained new best solutions for a large number of them, suggesting that the proposed approach is very promising for future scheduling work.
The problem can be formulated as an MIP model that minimizes the weighted sum of the overall delay of urgent jobs and the duration of typical jobs [10]. Due to the problem's NP-hard nature, metaheuristic algorithms, including a greedy algorithm and a simulated annealing algorithm, are used to find solutions efficiently. Computational experiments demonstrate the effectiveness and efficiency of the proposed approaches in obtaining quality solutions quickly [4].
The paper [11] by Kashif Hussain and co-authors addresses four strands of metaheuristic research: the introduction of new algorithms, comparisons and analysis, hybrids and modifications, and research gaps and future directions, guiding further work in this area. The use of simulation modelling and decision-making analysis to enhance this problem is discussed in [15].
Vishnu Kumar Prajapati and co-authors in [12] provided a comprehensive survey on Tabu Search Algorithms (TSA), a metaheuristic approach aimed at finding global optimal solutions for problems such as the vehicle routing problem (VRP). The paper discusses the Tabu Search framework, the algorithm itself, and its applications, and concludes with its performance on the problem and potential areas of improvement. It serves as a valuable resource for researchers and practitioners interested in applying TSA to optimization problems.
The principles of local search optimization underpin the basic SA algorithm, whose theoretical properties have been established. Practical considerations such as finite-time approximation, cooling schedules, and stopping criteria are also explored. Applications of SA to real-world optimization problems are numerous: knapsack problems, travelling salesperson problems, and aircraft trajectory planning, to name a few.
3. Methodology
The study addresses the Job Shop Scheduling problem, which involves scheduling a group of jobs on a given set of machines so as to minimize the makespan, i.e., the time needed to complete all jobs.
Simulated Annealing: an optimization algorithm inspired by the annealing process in metallurgy. It is an iterative search technique that occasionally accepts worse solutions, with a probability that declines over time, which helps it escape local optima and converge towards a global optimum [22]. A temperature parameter, decreased gradually according to a cooling schedule, governs the acceptance rate of sub-optimal moves.
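To make these mechanics concrete, the sketch below applies the acceptance rule and a geometric cooling schedule to a toy single-machine sequencing problem (minimizing total flow time). The cost function, swap neighbourhood, and parameter values are our own illustrative assumptions, not the paper's implementation.

```python
import math
import random

def total_flow_time(order, durations):
    """Sum of job completion times for a single-machine sequence."""
    t = total = 0
    for j in order:
        t += durations[j]
        total += t
    return total

def simulated_annealing(durations, t0=10.0, cooling=0.95, steps=500, seed=0):
    """SA over job permutations: swap two jobs, accept worse moves
    with probability exp(-delta / T), cool T geometrically."""
    rng = random.Random(seed)
    order = list(range(len(durations)))
    cost = total_flow_time(order, durations)
    temp = t0
    for _ in range(steps):
        i, j = rng.sample(range(len(order)), 2)
        cand = order[:]
        cand[i], cand[j] = cand[j], cand[i]
        delta = total_flow_time(cand, durations) - cost
        # Always accept improvements; accept worse moves with a
        # probability that shrinks as the temperature cools.
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            order, cost = cand, cost + delta
        temp *= cooling  # geometric cooling schedule
    return order, cost
```

On this tiny instance the shortest-processing-time order is recovered; larger instances would need more iterations and a slower cooling schedule.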
First-In-First-Out (FIFO): a simple scheduling policy that prioritizes jobs by arrival time: the oldest job in the queue is executed first, followed by the next in order of arrival. Although simple and easy to implement, FIFO does not account for job complexity or processing time, which can lead to suboptimal schedules and increased makespan in complex job shop environments [23].
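A minimal sketch of this policy for the (machine, duration) job format used in this study, assuming independent jobs and one queue per machine (a simplification for illustration, not the exact simulator):

```python
def fifo_makespan(jobs, num_machines):
    """Schedule jobs strictly in arrival order; each machine runs
    its assigned jobs back to back. Returns makespan and schedule."""
    finish = [0] * num_machines          # next free time per machine
    schedule = []                        # (job_index, machine, start, end)
    for idx, (machine, duration) in enumerate(jobs):
        start = finish[machine]
        finish[machine] = start + duration
        schedule.append((idx, machine, start, finish[machine]))
    return max(finish), schedule
```

With the example input 0,3;1,2;2,2 each machine receives a single job, so the makespan is simply the longest duration, 3.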
Round Robin: a scheduling algorithm commonly used in CPU scheduling, where each process is assigned a fixed time slice (quantum) and executed cyclically. A process runs for at most one quantum before being preempted and placed at the end of the queue, allowing the other processes to run for their quantum in turn, so that CPU time is shared equally among all processes. This strategy helps to achieve a balanced load, although it may incur high context-switching overhead [24].
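The cyclic quantum mechanism can be sketched as follows; the burst-time representation and quantum value are illustrative assumptions:

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Classic RR: each job runs for at most `quantum` time units,
    then is preempted and requeued. Returns completion times."""
    remaining = list(burst_times)
    completion = [0] * len(burst_times)
    queue = deque(range(len(burst_times)))
    clock = 0
    while queue:
        j = queue.popleft()
        run = min(quantum, remaining[j])
        clock += run
        remaining[j] -= run
        if remaining[j] > 0:
            queue.append(j)      # preempted: back of the queue
        else:
            completion[j] = clock
    return completion
```

For burst times [3, 2, 2] and a quantum of 2, job 0 is preempted once and finishes last, illustrating the fairness/overhead trade-off described above.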
Tabu Search: an iterative local search algorithm that extends basic local search by maintaining a tabu list of recently applied moves, which it refuses to revisit [25]. This mechanism steers the search into regions of the solution space it could not otherwise reach, avoids cycling, and allows the algorithm to converge towards better solutions.
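A compact sketch of the tabu mechanism on the same toy single-machine sequencing cost used above; the swap neighbourhood, tenure, and absence of an aspiration criterion are our own simplifying assumptions:

```python
from collections import deque
from itertools import combinations

def tabu_search(durations, iters=50, tenure=5):
    """Best-improvement swap neighbourhood with a FIFO tabu list of
    recently applied swaps, minimizing total flow time on one machine.
    No aspiration criterion is included in this sketch."""
    def cost(order):
        t = total = 0
        for j in order:
            t += durations[j]
            total += t
        return total

    current = list(range(len(durations)))
    best, best_cost = current[:], cost(current)
    tabu = deque(maxlen=tenure)          # recently applied swap moves
    for _ in range(iters):
        candidates = []
        for i, j in combinations(range(len(current)), 2):
            if (i, j) in tabu:
                continue                 # skip tabu (recently used) moves
            cand = current[:]
            cand[i], cand[j] = cand[j], cand[i]
            candidates.append((cost(cand), (i, j), cand))
        if not candidates:
            break                        # entire neighbourhood is tabu
        c, move, cand = min(candidates)  # best non-tabu neighbour
        tabu.append(move)
        current = cand                   # move even if worse than best
        if c < best_cost:
            best, best_cost = cand[:], c
    return best, best_cost
```

Note that the search always takes the best non-tabu neighbour, even when it worsens the current solution; the separately tracked `best` preserves the incumbent.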
The algorithm input consists of jobs represented in the format "machine, duration", where each job specifies the machine it runs on and its processing time. The number of machines available for scheduling is also supplied. The primary metric for evaluating algorithm performance is the makespan, which represents the total completion time of all jobs. Additionally, execution time, time complexity, and space complexity are measured. A robustness analysis is conducted to assess algorithm performance under disruptions, which are simulated by introducing random changes to job durations.
The input data for our job shop scheduling problem consists of a list of jobs, each with a specific duration and a specific machine on which it is to be processed. Each job is represented as a pair (machine, duration), and consecutive pairs are separated by a semicolon (;). For example, 0,3;1,2;2,2 represents three jobs: the first is processed on machine 0 for 3 units of time, the second on machine 1 for 2 units, and the third on machine 2 for 2 units. Additionally, the number of machines available for processing the jobs is specified as an input parameter.
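The format above can be parsed in a few lines; a sketch (the function name is ours, not the platform's API):

```python
def parse_jobs(spec):
    """Parse a 'machine,duration;machine,duration;...' string into
    a list of (machine, duration) integer pairs."""
    jobs = []
    for part in spec.split(';'):
        machine, duration = part.split(',')
        jobs.append((int(machine), int(duration)))
    return jobs
```

For instance, the example string yields three pairs ready to hand to any of the schedulers.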
For each selected algorithm, the following steps are performed. First, the provided job data and algorithm parameters are parsed to prepare the input: information such as job durations, machine assignments, and the number of machines is extracted. Once the input is parsed, the selected algorithm is executed on it. During execution, the algorithm applies its specific logic to schedule the jobs on the available machines, aiming to minimize the makespan, the total completion time of all jobs. After the algorithm finishes running, its performance is evaluated using several metrics. The major metrics are the makespan and time-based measures such as the runtime of the algorithm. In addition, complexity analyses covering time complexity (e.g., in Big O notation) and space complexity are conducted to assess the algorithm's computational efficiency.
To make sense of the scheduling outcomes, the results are presented as Gantt charts. A Gantt chart displays the execution timeline of all jobs on the machines, making it easy to grasp the schedule solution at a glance. Moreover, a dependency graph is drawn to show how jobs relate under scheduling constraints or dependencies. We then compare the generated results across all the algorithms on the basis of makespan and complexity criteria. Makespan, the time taken to complete all jobs, is the basic performance metric: a smaller makespan indicates better job scheduling optimization, while a larger makespan indicates a less effective algorithm.
Figure 1. A schematic diagram representing the whole process of our study.
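The Gantt charts described above can be approximated in plain text; a small sketch, assuming a schedule given as (job, machine, start, end) tuples (the tuple layout is our own convention):

```python
def text_gantt(schedule):
    """Render a schedule of (job, machine, start, end) tuples as one
    row per machine: '#' marks busy time, '.' marks idle time."""
    machines = {}
    for job, machine, start, end in schedule:
        machines.setdefault(machine, []).append((start, end))
    width = max(end for *_, end in schedule)   # shared time axis length
    lines = []
    for m in sorted(machines):
        row = ['.'] * width
        for start, end in machines[m]:
            for t in range(start, end):
                row[t] = '#'
        lines.append(f"M{m} |" + ''.join(row))
    return '\n'.join(lines)
```

This is only a readability aid; the study's actual charts are rendered graphically.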
In addition to makespan, complexity measures such as time and space complexity are included to analyse the computational efficiency of the algorithms. Time complexity is the longest (worst-case) time an algorithm can take to execute as a function of the input size n; it is denoted T(n) and expressed in Big O notation.
4. Results
Upon executing the selected algorithms under the defined constraints, the following observations were made:
Table 1. Job dependencies and makespan for each algorithm.

Algorithm                     | Job dependencies | Makespan
Ant Colony Optimization (ACO) | P1-P2-P0         | 7
Genetic Algorithm (GA)        | P0-P1-P2         | 4
Simulated Annealing (SA)      | P0-P1-P2         | 4
First-In-First-Out (FIFO)     | P0-P1-P2         | 7
NSGA-II                       | P0-P1-P2         | 3
On executing the chosen algorithms, detailed Gantt charts and dependency graphs, shown in the figures below, were generated. These visualizations articulate the scheduling process, showing how the tasks are distributed over the machines as well as the relations between the tasks. The charts are useful for assessing the scheduling decisions made by each algorithm, as they offer an efficient way to understand the quality of the scheduling solutions produced.
Makespan Evaluation: the makespan, which reflects the total time needed to complete all jobs, must be calculated because it directly reflects a scheduling algorithm's efficiency. Careful scrutiny of the generated results showed that some algorithms achieved notably lower makespans than others. Shorter makespans indicate better scheduling efficiency and are more desirable in real-world problems, where completing jobs in the shortest possible time is essential.
Apart from makespan, other complexity measures, namely time complexity and space complexity, were examined in detail. The time complexity of an algorithm describes the computational effort it requires as a function of input size, while the space complexity refers to the memory it needs.
Figure 2. Comparison of all the algorithms based on their make span time
The comparison graph in Figure 2 shows how all the selected algorithms perform with respect to makespan. The figure makes it quick and convenient to compare the scheduling efficiency of the different algorithms, and it helps decision makers identify the approaches most favourable for their specific requirements through a visual representation of relative performance.
The complexity comparison below makes it easy to compare the basic time complexities of the algorithms and to dig deeper into this issue. Time complexity plays a crucial role in evaluating an algorithm's computational efficiency, offering a quantitative perspective on algorithmic execution. Comparing time complexities therefore gives a better understanding of scalability and of performance on large-scale scheduling problems.
Algorithm                     | Time Complexity | Space Complexity
Ant Colony Optimization (ACO) | O(n²)           | O(n)
Genetic Algorithm (GA)        | O(g · n)        | O(n)
Simulated Annealing (SA)      | O(n²)           | O(n)
First-In-First-Out (FIFO)     | O(n)            | O(n)
A detailed breakdown of execution times for each algorithm is provided in Table 3. This
information allows for an in-depth analysis of the computational resources required by each
algorithm. These results are particularly useful for decision-makers in selecting algorithms,
considering both computational constraints and performance objectives. Among the tested
algorithms, NSGA-II showed the highest execution time, while FIFO recorded the lowest.
Table 3. Execution time of each algorithm (in microseconds).

Algorithm                     | Execution time (µs)
Ant Colony Optimization (ACO) | 14.30
Genetic Algorithm (GA)        | 15942.70
Simulated Annealing (SA)      | 11362.20
First-In-First-Out (FIFO)     | 8.90
NSGA-II                       | 10135885.30
4.8 Discussion
The Round Robin approach achieved the shortest makespan of 3, as shown in Table 1. In
scenarios with higher machine counts or more intricate job dependencies, the Non-Dominated
Sorting Genetic Algorithm II also produced a makespan of 3, making it especially suitable for
multiobjective optimization problems where multiple conflicting goals must be addressed
concurrently.
Regarding execution time, FIFO was the fastest, completing jobs in only 8.90 microseconds
(see Table 3), making it effective in applications requiring rapid decision-making. However,
this speed comes at the cost of makespan optimization in most scenarios.
Although Round Robin minimized the makespan in all tests, its efficiency might vary
depending on the job shop environment. In contexts with a high number of machines and
complex job interactions, NSGA-II might offer superior overall performance due to its multi-
objective capabilities. FIFO, despite being simple and fast, may not be ideal for applications
prioritizing makespan optimization, as it does not consider job complexity or processing times.
GA and SA can effectively identify job sequences, making them valuable in large, dynamic
industrial settings where dependencies and priorities frequently change.
The theoretical contributions of this study include a broad literature review offering a comparative analysis of various kinds of scheduling algorithms, and an evaluation of the job shop scheduling problem using makespan, time complexity, and space complexity. The results reveal clear differences in scheduling efficiency, with some of the aforementioned algorithms superior in efficiency and scalability. These findings have significant implications for industry, providing the information needed to identify and select suitable scheduling approaches.
It would also be worthwhile to repeat the simulation studies for new hybrid algorithms that combine stronger elements of the genetic algorithm with algorithms such as ant colony optimization. Since these algorithms are intended not as theoretical models tested in a controlled environment but as tools for solving real-life problems, observing their behaviour under circumstances such as machine breakdowns or varying job arrival rates is of paramount importance. In addition, applying advanced computation techniques such as parallel computation and optimized algorithm parameters can enhance the best-performing algorithm and increase its scalability.
Furthermore, multi-objective optimization can unite two or more opposing objectives, for example minimizing job completion times while maximizing resource utilization. Continued investment in these areas will ensure that relevant and resilient scheduling methods are conceptualized and implemented to meet the current and future requirements of modern industry.
References