CPU Scheduling in Operating System
Scheduling of processes/work is done to finish the work on time. CPU scheduling is a process that allows one process to use the CPU while another process is delayed (on standby) because a resource such as I/O is unavailable, thus making full use of the CPU. The purpose of CPU scheduling is to make the system more efficient, faster, and fairer.
What is a process?
In computing, a process is the instance of a computer program that is being executed by one
or many threads. It contains the program code and its activity. Depending on the operating
system (OS), a process may be made up of multiple threads of execution that execute instructions
concurrently.
The process memory is divided into four sections for efficient operation:
The text section contains the compiled program code, which is read in from non-volatile storage when the program is launched.
The data section contains global and static variables, allocated and initialized before main is executed.
The heap is used for dynamic memory allocation and is managed by calls to new, delete, malloc, free, etc.
The stack is used for local variables; space on the stack is reserved for a local variable when it is declared.
To learn more, you can refer to our detailed article on States of a Process in Operating System.
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
Scheduling is important in many different computing environments, and one of the most important cases is deciding which programs will run on the CPU. This task is handled by the operating system (OS), and there are many different policies we can choose to schedule programs with.
Process scheduling allows the OS to allocate CPU time to each process. Another important reason to use a process scheduling system is that it keeps the CPU busy at all times, which yields lower response times for programs.
Considering that there may be hundreds of programs that need to run, the OS must launch a program, stop it, switch to another program, and so on. The way the OS switches the CPU from one program to another is called "context switching". If the OS keeps context-switching programs in and out of the available CPUs, it can give the user the illusion that any number of programs can run all at once.
So, given that only one program can run on a CPU at a time, and that the OS can swap programs in and out using a context switch, how do we choose which program to run next, and for how long?
That’s where scheduling comes in! First, you determine a metric, for example "time to completion", which we will define as the interval between a job entering the system and its completion. Second, you choose a scheduling policy that minimizes that metric; here, we want our jobs to finish as soon as possible.
CPU scheduling is the process of deciding which process will own the CPU while another process is suspended. The main goal of CPU scheduling is to ensure that whenever the CPU would otherwise sit idle, the OS selects one of the processes available in the ready queue.
If most processes change their state from running to waiting, the CPU may sit idle much of the time. To minimize this waste, the OS needs to schedule tasks so as to make full use of the CPU and avoid the possibility of deadlock.
Minimum turnaround time: the time taken by a process to finish execution should be as low as possible.
Minimum waiting time: a process should not starve in the ready queue.
Minimum response time: the time at which a process produces its first response should be as early as possible.
What are the different terminologies to take care of in any CPU Scheduling
algorithm?
Arrival Time: the time at which the process arrives in the ready queue.
Completion Time: the time at which the process completes its execution.
Burst Time: the CPU time required by the process for its execution.
Turn Around Time: the difference between completion time and arrival time.
Waiting Time (W.T): the difference between turn around time and burst time.
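These two formulas can be checked with a tiny sketch (the process times below are made up for illustration):

```python
def timings(arrival, burst, completion):
    """Derive turnaround and waiting time from the definitions above."""
    turnaround = completion - arrival  # turn around time = completion - arrival
    waiting = turnaround - burst       # waiting time = turn around time - burst
    return turnaround, waiting

# A process arriving at t=1 with a 3-unit burst that finishes at t=7
print(timings(1, 3, 7))  # (6, 3)
```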
Different CPU scheduling algorithms have different structures, and the choice of a particular algorithm depends on a variety of factors. Many criteria have been proposed for comparing CPU scheduling algorithms.
CPU utilization: The main purpose of any CPU scheduling algorithm is to keep the CPU as busy as possible. Theoretically, CPU utilization can range from 0 to 100 percent, but in a real system it typically varies from 40 to 90 percent depending on the system load.
Throughput: The number of processes completed per unit of time is called throughput, and it measures average CPU performance. Throughput can vary depending on the length or duration of the processes.
Turnaround Time: For a particular process, an important criterion is how long it takes to execute that process. The time elapsed from the submission of a process to its completion is known as the turnaround time. Turnaround time is the sum of the time spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and waiting for I/O.
Waiting Time: The scheduling algorithm does not affect the time required to complete a process once it has started executing. It only affects the waiting time of the process, i.e. the time it spends waiting in the ready queue.
Response Time: In an interactive system, turnaround time is not the best criterion. A process may produce some output early and continue computing new results while previous results are being shown to the user. A better measure is therefore the time from the submission of a request until the first response is produced. This measure is called response time.
Preemptive Scheduling: Preemptive scheduling is used when a process switches from the running state to the ready state or from the waiting state to the ready state.
1. First Come First Serve (FCFS):
FCFS is considered the simplest of all operating system scheduling algorithms. The first come, first serve scheduling algorithm states that the process that requests the CPU first is allocated the CPU first, and it is implemented using a FIFO queue.
Characteristics of FCFS:
This algorithm is not very efficient in performance, and the waiting time is quite high.
Advantages of FCFS:
Easy to implement
Disadvantages of FCFS:
The average waiting time is much higher than with the other algorithms.
FCFS is very simple and easy to implement, and hence not very efficient.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on First come, First serve Scheduling.
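As a rough illustration, FCFS can be simulated in a few lines of Python (the process names and times are invented; this is a sketch, not a full scheduler):

```python
def fcfs(processes):
    """Simulate non-preemptive FCFS.

    processes: list of (name, arrival, burst), assumed sorted by arrival.
    Returns {name: (waiting_time, turnaround_time)}.
    """
    time, out = 0, {}
    for name, arrival, burst in processes:
        time = max(time, arrival)        # CPU may sit idle until the process arrives
        waiting = time - arrival
        time += burst                    # run to completion: no preemption
        out[name] = (waiting, time - arrival)
    return out

# A long job arriving first makes the short jobs behind it wait
print(fcfs([("P1", 0, 10), ("P2", 1, 1), ("P3", 2, 1)]))
```

Note how P2 and P3 each wait 9 units behind the long first job, which is why the average waiting time of FCFS tends to be high.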
2. Shortest Job First (SJF):
Shortest job first (SJF) is a scheduling algorithm that selects the waiting process with the smallest execution time to execute next. This scheduling method may or may not be preemptive. It significantly reduces the average waiting time for the other processes waiting to be executed.
Characteristics of SJF:
Shortest Job first has the advantage of having a minimum average waiting time among all
operating system scheduling algorithms.
It may cause starvation if shorter processes keep coming. This problem can be solved using the
concept of ageing.
As SJF reduces the average waiting time, it is better than the first come, first serve scheduling algorithm.
Disadvantages of SJF:
It is often difficult to predict the length of the upcoming CPU burst.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on Shortest Job First.
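A minimal, non-preemptive SJF sketch in the same style (process data invented; it assumes burst times are known in advance, which, as noted above, is rarely true in practice):

```python
def sjf(processes):
    """Non-preemptive SJF: among arrived processes, run the shortest burst.

    processes: list of (name, arrival, burst). Returns {name: waiting_time}.
    """
    pending = sorted(processes, key=lambda p: p[1])  # order by arrival time
    time, waiting = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                        # CPU idle: jump to the next arrival
            time = min(p[1] for p in pending)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])  # shortest burst
        waiting[name] = time - arrival
        time += burst                        # runs to completion once started
        pending.remove((name, arrival, burst))
    return waiting

# P3 (burst 1) jumps ahead of P2 (burst 4) once P1 finishes
print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1)]))
```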
3. Longest Job First (LJF):
Longest Job First (LJF) scheduling is just the opposite of shortest job first (SJF): as the name suggests, this algorithm is based on the idea that the process with the largest burst time is processed first. Longest Job First is non-preemptive in nature.
Characteristics of LJF:
Among all the processes waiting in the waiting queue, the CPU is always assigned to the process having the largest burst time.
If two processes have the same burst time then the tie is broken using FCFS i.e. the process that
arrived first is processed first.
Advantages of LJF:
No other task can be scheduled until the longest job or process executes completely.
Disadvantages of LJF:
Generally, the LJF algorithm gives a very high average waiting time and average turn-around
time for a given set of processes.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on the Longest job first scheduling.
4. Priority Scheduling:
Preemptive priority scheduling is an algorithm in which each process is assigned a priority and the scheduler always allots the CPU to the ready process with the highest priority.
Advantages of Priority Scheduling:
Less complex
Disadvantages of Priority Scheduling:
One of the most common demerits of the preemptive priority CPU scheduling algorithm is the starvation problem, in which a low-priority process has to wait a long time to be scheduled onto the CPU.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on Priority Preemptive Scheduling algorithm.
5. Round robin:
Round Robin is a CPU scheduling algorithm where each process is cyclically assigned a fixed
time slot. It is the preemptive version of First come First Serve CPU Scheduling algorithm.
Round Robin CPU scheduling generally focuses on the time-sharing technique.
It is simple, easy to use, and starvation-free, as all processes get a balanced CPU allocation.
It is considered preemptive, as each process is given the CPU for only a limited time slice.
Round robin is fair in the sense that every process gets an equal share of the CPU.
The newly created process is added to the end of the ready queue.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on the Round robin Scheduling algorithm.
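The cyclic time-slot idea can be sketched as follows (process data and quantum are invented, and all processes are assumed to arrive at time 0 to keep the sketch short):

```python
from collections import deque

def round_robin(processes, quantum):
    """Round Robin with a fixed time slot (quantum).

    processes: list of (name, burst), all assumed to arrive at t=0.
    Returns {name: completion_time}.
    """
    queue = deque(processes)          # the ready queue, served cyclically
    time, done = 0, {}
    while queue:
        name, remaining = queue.popleft()
        slice_ = min(quantum, remaining)
        time += slice_
        remaining -= slice_
        if remaining:
            queue.append((name, remaining))  # preempted: back to the tail
        else:
            done[name] = time
    return done

# P1 and P2 alternate in 2-unit slices until each finishes
print(round_robin([("P1", 5), ("P2", 3)], quantum=2))
```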
6. Shortest Remaining Time First:
Shortest remaining time first is the preemptive version of the Shortest job first which we have
discussed earlier where the processor is allocated to the job closest to completion. In SRTF the
process with the smallest amount of time remaining until completion is selected to execute.
The SRTF algorithm makes the processing of jobs faster than the SJF algorithm, provided its overhead is not counted.
Context switching occurs far more often in SRTF than in SJF and consumes valuable CPU time. This adds to its processing time and diminishes its advantage of fast processing.
Advantages of SRTF:
The system also requires very little overhead since it only makes a decision when a process
completes or a new process is added.
Disadvantages of SRTF:
Like the shortest job first, it also has the potential for process starvation.
Long processes may be held off indefinitely if short processes are continually added.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on the shortest remaining time first.
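A unit-by-unit SRTF sketch (process data invented; a real scheduler reacts to events such as arrivals and completions rather than ticking in fixed steps):

```python
def srtf(processes):
    """Preemptive SJF (SRTF), simulated one time unit at a time.

    processes: list of (name, arrival, burst). Returns {name: completion_time}.
    """
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: a for name, a, _ in processes}
    time, done = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                 # nothing has arrived yet: CPU idles
            time += 1
            continue
        # The choice is re-made every tick, so a new shorter job preempts
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            del remaining[current]
            done[current] = time
    return done

# P2 arrives at t=1 with less remaining work and preempts P1
print(srtf([("P1", 0, 5), ("P2", 1, 2)]))
```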
7. Longest Remaining Time First (LRTF):
Longest remaining time first is a preemptive version of the longest job first scheduling algorithm. The operating system uses it to schedule incoming processes in a systematic way. This algorithm schedules first those processes that have the longest processing time remaining until completion.
Among all the processes waiting in a waiting queue, the CPU is always assigned to the process
having the largest burst time.
If two processes have the same burst time then the tie is broken using FCFS i.e. the process that
arrived first is processed first.
LRTF CPU scheduling can be either preemptive or non-preemptive.
No other process can execute until the longest task executes completely.
Disadvantages of LRTF:
This algorithm gives a very high average waiting time and average turn-around time for a given
set of processes.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on the longest remaining time first.
8. Highest Response Ratio Next (HRRN):
The criterion for HRRN is the response ratio, and the mode is non-preemptive.
HRRN is considered a modification of Shortest Job First intended to reduce the problem of starvation.
In comparison with SJF, under the HRRN scheduling algorithm the CPU is allotted to the process with the highest response ratio rather than simply to the process with the smallest burst time, where
Response Ratio = (W + S) / S
Here, W is the waiting time of the process so far and S is the burst time of the process.
Advantages of HRRN:
HRRN Scheduling algorithm generally gives better performance than the shortest job first
Scheduling.
There is a reduction in waiting time for longer jobs and also it encourages shorter jobs.
Disadvantages of HRRN:
HRRN scheduling cannot be implemented exactly in practice, because the burst time of every job cannot be known in advance.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on Highest Response Ratio Next.
9. Multilevel Queue Scheduling:
Processes in the ready queue can be divided into different classes, where each class has its own scheduling needs. A common division, for example, is between foreground (interactive) processes and background (batch) processes, which have different scheduling needs. Multilevel queue scheduling is used for this kind of situation.
System Processes: The OS has processes of its own to run, generally termed system processes.
Interactive Processes: An interactive process is a type of process that interacts with the user and therefore needs short response times.
Batch Processes: Batch processing is generally a technique in the Operating system that collects
the programs and data together in the form of a batch before the processing starts.
Advantages of Multilevel Queue Scheduling:
The main merit of the multilevel queue is that it has a low scheduling overhead.
Disadvantages of Multilevel Queue Scheduling:
Starvation problem
It is inflexible in nature
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on Multilevel Queue Scheduling.
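The core idea, always serving the highest-priority non-empty queue first, can be sketched as (queue names and contents are invented):

```python
def multilevel_pick(queues):
    """Pick from the highest-priority non-empty queue.

    queues: list of FIFO lists, ordered highest priority first
    (e.g. system > interactive > batch). Each class keeps its own queue.
    Returns the chosen process name, or None if every queue is empty.
    """
    for queue in queues:
        if queue:
            return queue.pop(0)   # FCFS within the class
    return None

system, interactive, batch = [], ["editor"], ["report"]
print(multilevel_pick([system, interactive, batch]))
```

Because lower queues are only served when every higher queue is empty, a steady stream of interactive work can starve the batch queue, which is the starvation problem noted above.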
10. Multilevel Feedback Queue Scheduling:
Multilevel Feedback Queue Scheduling (MLFQ) is like multilevel queue scheduling, but here processes can move between the queues, which makes it much more efficient than multilevel queue scheduling.
Unlike the plain multilevel queue, processes are not permanently assigned to a queue here, which costs some scheduling overhead but makes MLFQ more flexible.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on Multilevel Feedback Queue Scheduling.