Lecture 3
Process Scheduling
What is a scheduler?
Many processes compete for a single resource,
the CPU; these processes are managed by the
operating system.
Process scheduling is the activity of the
process manager (the OS): it removes the
running process from the CPU and selects
another process on the basis of a particular
strategy.
Process Scheduling
Process scheduling is an essential part of
multiprogramming operating systems. Such
operating systems allow more than one process
to be loaded into executable memory at a
time, and the loaded processes share the CPU using
time multiplexing.
Process Scheduling Queue
The OS maintains all PCBs in Process
Scheduling Queues.
The OS maintains a separate queue for each of
the process states and PCBs of all processes in the
same execution state are placed in the same
queue.
When the state of a process changes, its PCB is
unlinked from its current queue and moved to the
queue for its new state.
Process Scheduling Queue
The Operating System maintains the following
important process scheduling queues:
Ready queue –
This queue keeps the set of all processes residing
in main memory, ready and waiting to execute.
Device queues – The processes that are blocked
due to unavailability of an I/O device constitute
this queue; they wait there until the device is free.
Process Scheduling Queues
Job queue - This queue keeps all the processes in the
system; a process enters it when it wants to be
admitted to RAM (moved from disk to RAM).
Scheduling Queues
• As the process enters the system or when a
running process is interrupted, it is put into a
ready queue
• There are also device queues(waiting queues),
where each device has its own device queue.
• All are generally stored in a queue(linked list), not
necessarily a FIFO queue.
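The queue bookkeeping above can be sketched as follows; this is a minimal illustration, with the PCB reduced to a bare process id and the device names purely hypothetical:

```python
from collections import deque

# Sketch: the OS keeps one queue per process state.
ready_queue = deque()                                   # processes ready to run
device_queues = {"disk": deque(), "printer": deque()}   # one waiting queue per device

def admit(pid):
    """A new or interrupted process is linked into the ready queue."""
    ready_queue.append(pid)

def block_on(pid, device):
    """A process blocked on I/O moves to that device's queue."""
    device_queues[device].append(pid)

def io_complete(device):
    """When the device frees, its first waiter returns to the ready queue."""
    pid = device_queues[device].popleft()
    ready_queue.append(pid)

admit(1); admit(2)
block_on(ready_queue.popleft(), "disk")  # P1 blocks on disk
io_complete("disk")                      # P1 becomes ready again
print(list(ready_queue))                 # → [2, 1]
```

Changing a process's state is thus just unlinking its entry from one queue and appending it to another, exactly as the slide describes.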
Types of Process Schedulers
1. Long-Term Scheduler or Job Scheduler
• The job scheduler is another name for the Long-Term
scheduler.
• It selects processes from secondary memory and
then maintains them in the primary memory’s ready
queue.
Types of Process Schedulers
2. Short-Term Scheduler or CPU Scheduler
• It chooses one job from the ready queue and
then sends it to the CPU for processing.
• To determine which job will be dispatched for
execution, a scheduling algorithm is used.
• The Short-Term scheduler’s task is critical
in the sense that if it chooses a job with a long
CPU burst time, all subsequent jobs will have to
wait in the ready queue for a long period.
Types of Process Schedulers
• This is known as starvation, and it can occur if the
Short-Term scheduler makes a poor choice when
selecting the job.
Types of Process Schedulers
3. Medium-Term Scheduler
• If a running process requires some I/O time to
complete, its state must be changed from running to
waiting.
• This is accomplished using a Medium-Term scheduler.
• It stops the process from executing in order to make
space for other processes.
• Swapped-out processes are an example of this, and the
operation is known as swapping.
Scheduling Algorithms
What are the most common algorithms??
1. First Come First Served
2. Round Robin
3. Shortest Job First
4. Shortest Remaining Job First
5. Priority Scheduling
FCFS (First Come First Serve)
Selection criteria:
• The process that requests first is served first. It means
that processes are served in the exact order of their
arrival.
Decision Mode:
• Non preemptive: Once a process is selected, it runs until
it is blocked for an I/O or some event, or it is terminated.
Implementation:
• This strategy can be easily implemented by using a FIFO
queue; FIFO means First In, First Out.
• When the CPU becomes free, the process at the front
of the queue is selected to run.
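A minimal sketch of this FIFO strategy; the lecture's own process table is not reproduced here, so the workload below (name, arrival, burst) is hypothetical:

```python
def fcfs(processes):
    """FCFS: serve in arrival order; each process runs to completion
    (non-preemptive). processes: (name, arrival, burst), sorted by arrival."""
    time, schedule = 0, []
    for name, arrival, burst in processes:
        start = max(time, arrival)            # CPU may sit idle until arrival
        time = start + burst
        schedule.append((name, start, time))  # (name, start, completion)
    return schedule

# Hypothetical workload, all times in ms.
print(fcfs([("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]))
# → [('P0', 0, 5), ('P1', 5, 8), ('P2', 8, 16), ('P3', 16, 22)]
```

Each process simply runs back-to-back in arrival order, which is why the implementation is nothing more than a FIFO walk.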
Cont’…
• Example: Consider the following set of four
processes. Their arrival time and time required to
complete the execution are given in the following
table. Consider all time values in milliseconds.
Cont’…
• Initially only process P0 is present and it is
allowed to run. But when P0 completes,
all other processes are present.
• So the next process, P1, from the ready queue is
selected and allowed to run till it
completes.
• This procedure is repeated till all
processes complete their execution.
Cont’…
Advantages:
Simple, fair, no starvation.
Easy to understand, easy to implement.
Disadvantages :
Not efficient; average waiting time is too high.
CPU utilization may be less efficient, especially when a
CPU-bound process is running with many I/O-bound
processes.
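The waiting-time disadvantage can be quantified with a small hypothetical workload: one long CPU-bound job scheduled ahead of three short ones inflates the average wait dramatically.

```python
def avg_wait(bursts):
    """Average waiting time when jobs (all arriving at t=0) run in the
    given order under FCFS."""
    wait, elapsed = 0, 0
    for b in bursts:
        wait += elapsed     # each job waits for everything scheduled before it
        elapsed += b
    return wait / len(bursts)

# Hypothetical bursts (ms): one long CPU-bound job, three short ones.
print(avg_wait([24, 3, 3, 3]))  # long job first → 20.25
print(avg_wait([3, 3, 3, 24]))  # short jobs first → 4.5
```

The same four jobs in a different order cut the average wait from 20.25 ms to 4.5 ms, which is exactly the weakness that shortest-job-first exploits.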
Shortest Job First (SJF)
Selection Criteria:
• The process, that requires shortest time to complete
execution, is served first.
Decision Mode:
• Non preemptive: Once a process is selected, it runs
until either it is blocked for an I/O or some event, or
it is terminated.
Implementation:
• This strategy can be implemented by using a sorted
FIFO queue. All processes in the queue are sorted in
ascending order based on their required CPU bursts.
• When CPU becomes free, a process from the first
position in a queue is selected to run.
Shortest Job First (SJF)
• Example: Consider the following set of four processes.
• Their arrival time and time required to complete the
execution are given in the following table. Consider all
time values in milliseconds.
Shortest Job First (SJF)
• Initially only process P0 is present and it is allowed to
run. But, when P0 completes, all other processes are
present.
• So, process with shortest CPU burst P2 is selected and
allowed to run till it completes.
• Whenever more than one process is available, this
type of decision is taken.
• This procedure is repeated till all processes complete
their execution.
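A sketch of non-preemptive SJF using a sorted (heap-ordered) ready queue; the workload is hypothetical, chosen so that the shortest job, P2, runs right after P0 as in the walkthrough above:

```python
import heapq

def sjf(processes):
    """Non-preemptive SJF: when the CPU frees, pick the ready job with the
    shortest burst. processes: list of (name, arrival, burst)."""
    pending = sorted(processes, key=lambda p: p[1])   # by arrival time
    ready, time, order, i = [], 0, [], 0
    while i < len(pending) or ready:
        while i < len(pending) and pending[i][1] <= time:
            name, arrival, burst = pending[i]
            heapq.heappush(ready, (burst, name))      # min-heap keyed on burst
            i += 1
        if not ready:                                 # CPU idle until next arrival
            time = pending[i][1]
            continue
        burst, name = heapq.heappop(ready)
        time += burst                                 # runs to completion
        order.append(name)
    return order

# Hypothetical workload: after P0 finishes at t=5, P2 has the shortest burst.
print(sjf([("P0", 0, 5), ("P1", 1, 7), ("P2", 2, 3), ("P3", 3, 4)]))
# → ['P0', 'P2', 'P3', 'P1']
```

The heap plays the role of the "sorted FIFO queue" the slide describes: the front element is always the shortest available burst.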
Shortest Job First (SJF)
Advantages:
• Less waiting time.
• Good response for short processes.
Disadvantages :
• It is difficult to estimate the time required to complete execution.
• Starvation is possible for long processes: a long process may wait forever.
Shortest Remaining Time Next (SRTN)
Selection criteria
• The process, whose remaining run time is shortest, is
served first.
• This is a preemptive version of SJF scheduling
Decision Mode
• Preemptive: When a new process arrives, its total time is
compared to the remaining run time of the current process.
• If the new job needs less time to finish than the current
process, the current process is suspended and the new
job is started.
Shortest Remaining Time Next (SRTN)
Implementation
• This strategy can also be implemented by using a sorted
FIFO queue.
• All processes in the queue are sorted in ascending order
of their remaining run time.
• When CPU becomes free, a process from the first
position in a queue is selected to run.
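A millisecond-by-millisecond sketch of SRTN with a hypothetical workload; note how a newly arrived job with less remaining time preempts the current one:

```python
def srtn(processes):
    """Preemptive shortest-remaining-time: each ms, run the arrived process
    with the least remaining time. processes: list of (name, arrival, burst)."""
    remaining = {name: burst for name, arrival, burst in processes}
    arrives = {name: a for name, a, b in processes}
    time, timeline = 0, []
    while any(r > 0 for r in remaining.values()):
        ready = [n for n in remaining
                 if remaining[n] > 0 and arrives[n] <= time]
        if not ready:                       # CPU idle, advance the clock
            time += 1
            continue
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1             # run for one ms
        time += 1
        if not timeline or timeline[-1] != current:
            timeline.append(current)        # record each switch
    return timeline

# Hypothetical workload: P1 and P2 each preempt the longer job running.
print(srtn([("P0", 0, 8), ("P1", 1, 4), ("P2", 2, 2)]))
# → ['P0', 'P1', 'P2', 'P1', 'P0']
```

The timeline shows P0 being preempted at t=1 by P1, P1 preempted at t=2 by the even shorter P2, and P0 only finishing after everything shorter has drained.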
Shortest Remaining Time Next (SRTN)
Advantages:
Less waiting time.
Quite good response for short processes.
Disadvantages:
Again, it is difficult to estimate the remaining time necessary
to complete execution.
Starvation is possible for long processes:
a long process may wait forever.
• Context switch overhead exists.
Round Robin
Selection Criteria:
Each selected process is assigned a time interval, called
time quantum or time slice.
Process is allowed to run only for this time interval
Decision Mode:
Preemptive
Implementation :
This strategy can be implemented by using a circular FIFO
queue.
Whenever a new process arrives, a running process
releases the CPU, or a process is preempted, it is placed
at the tail of the queue.
Round Robin
Example :
Consider the following set of four processes.
Their arrival time and time required to complete the
execution are given in the following table.
Consider that the time quantum is 4 ms and the
context-switch overhead is 1 ms.
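A sketch of round robin using the quantum of 4 ms and context-switch cost of 1 ms stated above; the two-process workload itself is hypothetical (all jobs assumed to arrive at t=0):

```python
from collections import deque

def round_robin(processes, quantum=4, switch=1):
    """Round robin with a fixed time slice and an explicit context-switch
    cost. processes: list of (name, burst), all arriving at t=0."""
    queue = deque(processes)
    time, log = 0, []
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)              # run a full slice or to completion
        log.append((name, time, time + run))  # (name, start, end) of this slice
        time += run
        if left > run:
            queue.append((name, left - run))  # unfinished: back of the queue
        if queue:
            time += switch                    # pay the context-switch overhead
    return log

print(round_robin([("P0", 6), ("P1", 3)]))
# → [('P0', 0, 4), ('P1', 5, 8), ('P0', 9, 11)]
```

P0 is preempted after its 4 ms slice, P1 runs to completion within one slice, and P0 finishes its remaining 2 ms, with 1 ms lost at each switch.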
Round Robin
Advantages:
• One of the oldest, simplest, fairest and most widely used algorithms.
Disadvantages:
• Context switch overhead exists.
• Determination of the time quantum is critical. If it is too short, it causes frequent context
switches and lowers CPU efficiency. If it is too long, it causes poor response for short
interactive processes.
Non Preemptive Priority Scheduling:
Selection criteria:
• The process that has the highest priority is served first.
Decision Mode:
• Non Preemptive: Once a process is selected, it runs until
it blocks for an I/O or some event, or it terminates.
Implementation:
• This strategy can be implemented by using sorted FIFO
queue.
• All processes in the queue are sorted based on their
priority, with the highest-priority process at the front end.
Non Preemptive Priority Scheduling:
Example :
• Consider the following set of four processes.
• Their arrival time, total time required to complete the
execution, and priorities are given in the following table.
• Consider all time values in milliseconds; a small value
for priority means higher priority of the process.
Non Preemptive Priority Scheduling
• Initially only process P0 is present and it is allowed to run.
But, when P0 completes, all other processes are present.
• So, process with highest priority P3 is selected and allowed
to run till it completes.
• This procedure is repeated till all processes complete their
execution.
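A sketch of non-preemptive priority scheduling with a hypothetical workload (smaller number = higher priority), mirroring the walkthrough above where P3 runs right after P0:

```python
def priority_np(processes):
    """Non-preemptive priority scheduling: when the CPU frees, pick the
    ready process with the highest priority (smallest number).
    processes: list of (name, arrival, burst, priority)."""
    done, time, order = set(), 0, []
    while len(done) < len(processes):
        ready = [p for p in processes
                 if p[0] not in done and p[1] <= time]
        if not ready:                       # CPU idle until something arrives
            time += 1
            continue
        name, arrival, burst, prio = min(ready, key=lambda p: p[3])
        time += burst                       # runs to completion
        done.add(name)
        order.append(name)
    return order

# Hypothetical workload: after P0 finishes, P3 (priority 1) is picked first.
print(priority_np([("P0", 0, 5, 2), ("P1", 1, 4, 3),
                   ("P2", 2, 6, 4), ("P3", 3, 3, 1)]))
# → ['P0', 'P3', 'P1', 'P2']
```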
Non Preemptive Priority Scheduling
Advantages:
Priority is considered. Critical processes can get even
better response time.
Disadvantages:
Starvation is possible for low priority processes.
It can be overcome by using a technique called ‘Aging’.
Aging: gradually increases the priority of processes that
wait in the system for a long time.
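Aging can be sketched as follows; the two-job workload and the one-point-per-round priority boost are hypothetical choices made for illustration:

```python
def schedule_with_aging(jobs, rounds):
    """jobs: name -> base priority (smaller = higher). One job runs per
    round; every job that waited gets its effective priority improved by 1
    (aging), and a job's accumulated age resets when it runs."""
    age = {name: 0 for name in jobs}
    ran = []
    for _ in range(rounds):
        # effective priority = base priority minus accumulated age
        winner = min(jobs, key=lambda n: jobs[n] - age[n])
        ran.append(winner)
        for n in jobs:
            if n != winner:
                age[n] += 1     # waiters age toward higher priority
        age[winner] = 0
    return ran

# Hypothetical: without aging, "bg" (priority 5) would starve behind "fg" (1).
print(schedule_with_aging({"fg": 1, "bg": 5}, 8))
```

After a few rounds of waiting, the low-priority job's effective priority overtakes the high-priority one's, so it runs periodically instead of starving forever.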
Figure 5-1. (a) Fixed memory partitions with separate input queues for each
partition. (b) Fixed memory partitions with a single input queue.
Multiprogramming with Fixed partitions
• The disadvantage of sorting the incoming jobs into
separate queues becomes obvious when the queue for a
large partition is empty but the queue for a small
partition is full, as is the case for partitions 1 and 3 in Fig.
5-1(a)
• Here small jobs have to wait to get into memory, even
though plenty of memory is free.
• An alternative organization is to maintain a single queue
as in Fig. 5-1(b).
• Whenever a partition becomes free, the job closest to the
front of the queue that fits in it could be loaded into the
empty partition and run.
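The single-queue scheme of Fig. 5-1(b) can be sketched as a search from the queue front for the first job that fits each freed partition; the partition and job sizes below are hypothetical:

```python
def load_jobs(partitions, jobs):
    """Single input queue over fixed partitions (Fig. 5-1(b) scheme): for
    each free partition, load the job closest to the queue front that fits.
    partitions: list of sizes; jobs: list of (name, size)."""
    queue = list(jobs)
    placed = []
    for part in partitions:
        for job in queue:
            name, size = job
            if size <= part:            # first fitting job from the front
                placed.append((name, part))
                queue.remove(job)
                break
    return placed

# Hypothetical partitions (K) and jobs: the small job B need not wait for
# the small partition's own queue, it just takes the first partition it fits.
print(load_jobs([100, 400], [("A", 300), ("B", 80)]))
# → [('B', 100), ('A', 400)]
```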
Variable size partitioning
• Also known as dynamic partitioning, the memory is
divided into blocks of varying sizes.
• At run-time, processes request blocks of main memory
from variable-sized memory partitions.
• If enough main memory is available, the process will be
allotted a block of main memory with the same size as
the process requires.
Variable size partitioning
• For example, a process P1 of size 4 MB comes
into the memory. After that, another process P2
of size 12 MB comes into the memory, and then
another process P3 of size 5 MB comes into the
memory. In the memory, these processes will look
like this-
• This method overcomes internal fragmentation but suffers
from external fragmentation. If the process P1 frees the
memory block, then that space will only be used by a process
less than or equal to 4 MB.
• Till then, the space will remain wasted.
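The 4 MB hole from the example can be modelled directly: a new process fits only if some single free hole is large enough, which is exactly what external fragmentation breaks.

```python
# Sketch of variable (dynamic) partitioning: memory is a list of
# exactly-sized blocks. Layout follows the example: P1=4 MB, P2=12 MB, P3=5 MB.
memory = [("P1", 4), ("P2", 12), ("P3", 5)]

# P1 terminates: its 4 MB block becomes a free hole.
memory[0] = ("free", 4)

def can_fit(memory, size):
    """A new process fits only if a single free hole is large enough."""
    return any(tag == "free" and hole >= size for tag, hole in memory)

print(can_fit(memory, 3))   # → True: a 3 MB process reuses part of the hole
print(can_fit(memory, 6))   # → False: the 4 MB hole is too small
```

Internal fragmentation is gone (every block is exactly the requested size), but the freed hole is stranded between allocated blocks, so a 6 MB process cannot use it.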
Multiprogramming with dynamic
partitions
• If enough free memory is not available to fit the process, process needs to
wait until required memory becomes available
Multiprogramming with dynamic
partitions
Whenever any process terminates, it releases the
space it occupied. If the released free space is contiguous to
another free partition, the two free partitions are
merged into a single free partition.
Better utilization of memory than fixed-size
partitioning.
This method suffers from external fragmentation.
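The merge step can be sketched as a single pass that coalesces adjacent free entries; the memory layout below is hypothetical:

```python
def merge_free(memory):
    """Coalesce adjacent free partitions into one hole.
    memory: list of (tag, size) where tag is "free" or a process name."""
    merged = []
    for tag, size in memory:
        if tag == "free" and merged and merged[-1][0] == "free":
            merged[-1] = ("free", merged[-1][1] + size)  # join with neighbour
        else:
            merged.append((tag, size))
    return merged

# Hypothetical: a process just terminated next to an existing 4 MB hole.
print(merge_free([("free", 4), ("free", 12), ("P3", 5), ("free", 2)]))
# → [('free', 16), ('P3', 5), ('free', 2)]
```

Only *adjacent* holes merge: the trailing 2 MB hole stays separate because P3 sits between it and the 16 MB hole, which is why external fragmentation persists.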
Figure 5-2. Memory allocation changes as processes come into memory and leave it. The shaded
regions are unused memory
Fragmentation
• When free memory is broken into small blocks, it
can be difficult to allocate larger chunks of
memory to processes.
• This can lead to wasted memory and reduced
system performance.
• The free memory space gets fragmented
Types of fragmentation
Internal Fragmentation
• It occurs when the memory allocator leaves extra
space empty inside a block of memory that has been
allocated for a client.
• This usually happens because the processor’s design
requires that memory be cut into blocks of certain
sizes; for example, blocks may be required to be evenly
divisible by four, eight, or 16 bytes.
Internal Fragmentation
• When this occurs, a client that needs 57 bytes of
memory, for example, may be allocated a block
that contains 60 bytes, or even 64.
• The extra bytes that the client doesn’t need go to
waste, and over time these tiny chunks of
unused memory can build up and create large
quantities of memory that can’t be put to use by
the allocator.
• Because all of these useless bytes are inside larger
memory blocks, the fragmentation is considered
internal.
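The rounding in the 57-byte example can be reproduced with a tiny allocator sketch; the block sizes used are the ones the text mentions:

```python
def allocate(request, block=8):
    """Allocator that hands out whole blocks of `block` bytes: the request
    is rounded up, and the surplus inside the last block is internal
    fragmentation."""
    granted = -(-request // block) * block   # ceiling division, then scale
    return granted, granted - request        # (allocated, wasted bytes)

print(allocate(57, block=4))  # → (60, 3): 3 bytes wasted inside the block
print(allocate(57, block=8))  # → (64, 7): coarser blocks waste even more
```

Per-request waste is tiny, but as the text notes, these slivers accumulate across many allocations into memory the allocator can never hand out.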
External Fragmentation
• External fragmentation in an operating system (OS) is
when a dynamic memory allocation technique leaves a
small amount of unusable memory after allocating
memory.
• This happens when the allocated memory is divided
into smaller pieces, which are scattered throughout the
memory space