
Lecture 3

The document discusses process scheduling in operating systems, explaining the role of schedulers in managing multiple processes and their states through various queues. It outlines different types of schedulers (long-term, short-term, medium-term) and common scheduling algorithms such as First Come First Served, Shortest Job First, and Round Robin, along with their advantages and disadvantages. Additionally, it addresses deadlock conditions, detection, and prevention strategies, emphasizing the importance of resource management to avoid deadlocks.

Lecture 3…

Process Scheduling
What is a scheduler?
 Many processes compete for a single resource (the CPU); these processes are managed by the operating system.
 Process scheduling is the activity of the process manager (the OS): it removes the running process from the CPU and selects another process on the basis of a particular strategy.
Process Scheduling
 Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
Process Scheduling Queue
 The OS maintains all PCBs in Process
Scheduling Queues.
 The OS maintains a separate queue for each of
the process states and PCBs of all processes in the
same execution state are placed in the same
queue.
 When the state of a process changes, its PCB is unlinked from its current queue and moved to its new state's queue.
Process Scheduling Queue
The Operating System maintains the following
important process scheduling queues:
Ready queue –
This queue keeps the set of all processes located in main memory, ready and waiting to execute.
Device queues - Processes that are blocked due to the unavailability of an I/O device constitute this queue; they wait here until the device becomes free.
Process Scheduling Queues
Job queue - This queue keeps all the processes in the system; a process enters it when it wants to be admitted into RAM (i.e., moved from hard disk to RAM).
Scheduling Queues
• As the process enters the system or when a
running process is interrupted, it is put into a
ready queue
• There are also device queues (waiting queues), where each device has its own queue.
• All are generally stored as a queue (linked list), not necessarily a FIFO queue.
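The queue bookkeeping described above can be sketched in a few lines of Python; the state names and PCB placeholders here are illustrative, not from the lecture:

```python
from collections import deque

# One queue per process state (illustrative states; a real OS has more).
queues = {"ready": deque(), "waiting": deque(), "running": deque()}

def change_state(pcb, old_state, new_state):
    """Unlink the PCB from its current state queue and link it to the new one."""
    queues[old_state].remove(pcb)
    queues[new_state].append(pcb)

queues["ready"].extend(["P0", "P1", "P2"])
change_state("P1", "ready", "waiting")   # e.g. P1 blocks on an I/O request
```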

Types of Process Schedulers
1. Long-Term Scheduler or Job Scheduler
• The job scheduler is another name for the Long-Term scheduler.
• It selects processes from secondary memory and then maintains them in the primary memory's ready queue.
Types of Process Schedulers
2. Short-Term Scheduler or CPU Scheduler
• It chooses one job from the ready queue and
then sends it to the CPU for processing.
• To determine which job will be dispatched for execution, a scheduling algorithm is utilized.
• The Short-Term scheduler’s task can be essential
in the sense that if it chooses a job with a long
CPU burst time, all subsequent jobs will have to
wait in a ready queue for a long period.
Types of Process Schedulers
• This is known as starvation, and it can occur if the Short-Term scheduler makes a poor choice when selecting the job.
Types of Process Schedulers
3. Medium-Term Scheduler
• If a running process requires some I/O time to complete, its state must be changed from running to waiting.
• This is accomplished using a Medium-Term scheduler.
• It stops the process from executing in order to make
space for other processes.
• Swapped out processes are examples of this, and the
operation is known as swapping
Scheduling Algorithms
What are the most common algorithms??
1. First Come First Served
2. Round Robin
3. Shortest Job First
4. Shortest Remaining Job First
5. Priority Scheduling
FCFS (First Come First Serve)
Selection criteria:
• The process that requests first is served first: processes are served in the exact order of their arrival.
Decision Mode:
• Non-preemptive: Once a process is selected, it runs until it is blocked for an I/O or some event, or it is terminated.
Implementation:
• This strategy can be easily implemented by using a FIFO (First In First Out) queue.
• When the CPU becomes free, the process at the front of the queue is selected to run.
Cont’…
• Example: Consider the following set of four processes. Their arrival times and the time required to complete execution are given in the following table. Consider all time values in milliseconds.
Cont’…
• Initially only process P0 is present, and it is allowed to run. But when P0 completes, all other processes are present.
• So the next process, P1, is selected from the ready queue and allowed to run until it completes.
• This procedure is repeated until all processes complete their execution.
Cont’…
Advantages:
 Simple, fair, no starvation.
 Easy to understand, easy to implement.
Disadvantages:
 Not efficient. Average waiting time is too high.
 CPU utilization may be inefficient, especially when a CPU-bound process is running with many I/O-bound processes.
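As a minimal sketch (the process names and time values below are illustrative, not the table from the slides), FCFS can be simulated to compute completion, turnaround, and waiting times:

```python
def fcfs(processes):
    """processes: list of (name, arrival, burst), served in arrival order.
    Returns {name: (completion, turnaround, waiting)} in milliseconds."""
    time, results = 0, {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival) + burst   # CPU idles until arrival if needed
        results[name] = (time, time - arrival, time - arrival - burst)
    return results

print(fcfs([("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]))
```

With these values the average waiting time is (0 + 4 + 6 + 13) / 4 = 5.75 ms, illustrating how FCFS waiting times grow quickly behind long jobs.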
Shortest Job First (SJF)
Selection Criteria:
• The process that requires the shortest time to complete execution is served first.
Decision Mode:
• Non-preemptive: Once a process is selected, it runs until either it is blocked for an I/O or some event, or it is terminated.
Implementation:
• This strategy can be implemented by using a sorted queue. All processes in the queue are sorted in ascending order of their required CPU bursts.
• When the CPU becomes free, the process at the front of the queue is selected to run.
Shortest Job First (SJF)
• Example: Consider the following set of four processes.
• Their arrival times and the time required to complete execution are given in the following table. Consider all time values in milliseconds.
Shortest Job First (SJF)
• Initially only process P0 is present, and it is allowed to run. But when P0 completes, all other processes are present.
• So the process with the shortest CPU burst, P2, is selected and allowed to run until it completes.
• Whenever more than one process is available, this type of decision is taken.
• This procedure is repeated until all processes complete their execution.
Shortest Job First (SJF)

Advantages:
• Less waiting time.
• Good response for short processes.
Disadvantages:
• It is difficult to estimate the time required to complete execution.
• Starvation is possible for long processes; a long process may wait forever.
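The non-preemptive SJF decision rule can be sketched as follows (process names and values are illustrative): at each completion, the shortest available burst is chosen next.

```python
def sjf(processes):
    """Non-preemptive SJF. processes: list of (name, arrival, burst).
    Returns the order in which processes are executed."""
    remaining = sorted(processes, key=lambda p: p[1])
    time, order = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                           # CPU idle: jump to next arrival
            time = min(p[1] for p in remaining)
            continue
        job = min(ready, key=lambda p: p[2])    # shortest CPU burst first
        remaining.remove(job)
        time += job[2]                          # runs to completion
        order.append(job[0])
    return order

print(sjf([("P0", 0, 10), ("P1", 1, 6), ("P2", 2, 2), ("P3", 3, 4)]))
```

As in the narrative above, P0 runs first (it is alone at time 0); once it completes, the shortest burst, P2, is picked next.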
Shortest Remaining Time Next (SRTN)
Selection criteria
• The process whose remaining run time is shortest is served first.
• This is a preemptive version of SJF scheduling.
Decision Mode
• Preemptive: When a new process arrives, its total time is compared to the current process's remaining run time.
• If the new job needs less time to finish than the current process, the current process is suspended and the new job is started.
Shortest Remaining Time Next (SRTN)
Implementation
• This strategy can also be implemented by using a sorted queue.
• All processes in the queue are sorted in ascending order of their remaining run time.
• When the CPU becomes free, the process at the front of the queue is selected to run.
Shortest Remaining Time Next (SRTN)
Advantages:
 Less waiting time.
 Quite good response for short processes.
Disadvantages:
 Again, it is difficult to estimate the remaining time necessary to complete execution.
 Starvation is possible for long processes; a long process may wait forever.
 Context switch overhead is present.
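The preemptive variant can be sketched as a per-millisecond simulation (illustrative values): at every tick the ready job with the least remaining time runs, so a short new arrival preempts a long running job.

```python
def srtn(processes):
    """SRTN: run, at each time unit, the ready job with the least remaining
    time. processes: list of (name, arrival, burst). Returns completion times."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: arr for name, arr, _ in processes}
    time, done = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1                                 # CPU idle
            continue
        job = min(ready, key=lambda n: remaining[n])  # shortest remaining time
        remaining[job] -= 1
        time += 1
        if remaining[job] == 0:
            del remaining[job]
            done[job] = time
    return done

print(srtn([("P0", 0, 8), ("P1", 1, 4)]))
```

Here P1 arrives at t = 1 needing 4 ms while P0 still needs 7 ms, so P0 is suspended; P1 finishes at t = 5 and P0 at t = 12.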
Round Robin
Selection Criteria:
 Each selected process is assigned a time interval, called a time quantum or time slice.
 A process is allowed to run only for this time interval.
Decision Mode:
 Preemptive.
Implementation:
 This strategy can be implemented by using a circular FIFO queue.
 Whenever a process arrives, releases the CPU, or is preempted, it is placed at the tail of the queue.
Round Robin
Example:
Consider the following set of four processes.
 Their arrival times and the time required to complete execution are given in the following table.
 Consider that the time quantum is 4 ms and the context switch overhead is 1 ms.
Round Robin

• At 4 ms, process P0 completes its time quantum, so it is preempted and another process, P1, is allowed to run.
• At 12 ms, process P2 voluntarily releases the CPU, and another process is selected to run.
• 1 ms is wasted on each context switch as overhead.
• This procedure is repeated until all processes complete their execution.
Round Robin

Advantages:
• One of the oldest, simplest, fairest, and most widely used algorithms.
Disadvantages:
• Context switch overhead is present.
• Determination of the time quantum is critical. If it is too short, it causes frequent context switches and lowers CPU efficiency; if it is too long, it causes poor response for short interactive processes.
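Round Robin with a quantum and a per-switch overhead can be sketched as follows (burst values illustrative; all processes assumed to arrive at t = 0):

```python
from collections import deque

def round_robin(processes, quantum=4, switch_cost=1):
    """processes: list of (name, burst), all arrived at time 0.
    Returns {name: completion_time}, charging switch_cost per context switch."""
    queue = deque(processes)
    time, done = 0, {}
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)
        time += run
        if left > run:
            queue.append((name, left - run))   # preempted: back of the queue
        else:
            done[name] = time                  # finished within this quantum
        if queue:
            time += switch_cost                # overhead for the context switch
    return done

print(round_robin([("P0", 10), ("P1", 6), ("P2", 4)]))
```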
Non Preemptive Priority Scheduling:
Selection criteria:
• The process that has the highest priority is served first.
Decision Mode:
• Non-preemptive: Once a process is selected, it runs until it blocks for an I/O or some event, or it terminates.
Implementation:
• This strategy can be implemented by using a sorted queue.
• All processes in the queue are sorted by their priority, with the highest-priority process at the front.
Non Preemptive Priority Scheduling:
Example:
• Consider the following set of four processes.
• Their arrival times, the total time required to complete execution, and their priorities are given in the following table.
• Consider all time values in milliseconds; smaller priority values mean higher priority.
Non Preemptive Priority Scheduling
• Initially only process P0 is present, and it is allowed to run. But when P0 completes, all other processes are present.
• So the process with the highest priority, P3, is selected and allowed to run until it completes.
• This procedure is repeated until all processes complete their execution.
Non Preemptive Priority Scheduling
Advantages:
 Priority is considered. Critical processes can get even
better response time.
Disadvantages:
 Starvation is possible for low priority processes.
 It can be overcome by using technique called ‘Aging’.
 Aging: gradually increases the priority of processes that
wait in the system for a long time.
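Aging can be sketched by lowering the priority number (i.e., raising the priority) of every waiting process each time a job completes; the names and numbers below are illustrative:

```python
def priority_with_aging(processes, age_step=1):
    """Non-preemptive priority scheduling with aging; a smaller number
    means higher priority. processes: list of (name, priority).
    Returns the order of execution."""
    pool = dict(processes)
    order = []
    while pool:
        job = min(pool, key=lambda n: pool[n])   # highest priority first
        order.append(job)
        del pool[job]
        for name in pool:                        # aging: boost every waiter
            pool[name] -= age_step
    return order

print(priority_with_aging([("P0", 3), ("P1", 1), ("P2", 2)]))
```

A low-priority process that keeps waiting eventually reaches the smallest number in the pool, so it cannot starve indefinitely.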

Preemptive Priority Scheduling (Read!)


Deadlock
 A set of processes is deadlocked if each process in
the set is waiting for an event that only another
process in the set can cause.
 Suppose process D holds resource T and process C holds resource U.
 Now process D requests resource U and process C requests resource T, but neither process will get its resource because each is already held by the other process. Both are blocked, with neither able to proceed; this situation is called deadlock.
Deadlock
Resources come in two types:
oPreemptable
oNon-preemptable.
A preemptable resource is one that can be
taken away from the process holding it with no
ill effects.
Consider, for example, a system with 32 MB of user memory, one printer, and two 32-MB processes, each of which wants to print something.
Deadlock
 Process A requests and gets the printer, then starts to compute the values to print; meanwhile process B requests and gets the memory it needs before going to the printer.
 Before A has finished its computation, it exceeds its time quantum and is swapped out.
 Process B now runs and tries, unsuccessfully, to acquire the printer.
 Potentially, we now have a deadlock situation,
because A has the printer and B has the
memory, and neither can proceed without the
resource held by the other.
Deadlock
• Fortunately, it is possible to preempt (take
away) the memory from B by swapping it out
and swapping A in.
• Now A can run, do its printing, and then release
the printer.
• No deadlock occurs…
Deadlock
 A non-preemptable resource, in contrast, is one
that cannot be taken away from its current
owner without causing the computation to fail.
 If a process has begun to burn a CD-ROM, suddenly taking the CD recorder away from it and giving it to another process will result in a garbled CD; CD recorders are not preemptable at an arbitrary moment.
 In general, deadlocks involve non-preemptable
resources.
Deadlock
The conditions that lead to deadlock
There are four conditions or characteristics that must hold for deadlock:
1) Mutual exclusion condition
 It is the concurrency control property that ensures that only
one process can access a shared resource at a time.
 This prevents race condition
2) Hold and wait condition
 Hold and wait is a condition in which a process is holding one resource while simultaneously waiting for another resource that is held by another process. The process cannot continue until it gets all the required resources.
Deadlock
3) No preemption condition
 This condition states that resources cannot be taken away from a process that is holding them unless the process releases them voluntarily.
 This condition can contribute to deadlock.
4) Circular wait condition
 There must be a circular chain of two or more processes.
 Each process is waiting for a resource that is held by the next member of the chain.
1. Explain deadlock ignorance. OR Explain the Ostrich Algorithm
Pretend (imagine) that there is no problem.
This is the easiest way to deal with the problem.
This algorithm says: stick your head in the sand and pretend that there is no problem at all.
Deadlock ignorance. OR
Explain Ostrich Algorithm.
 This strategy suggests ignoring the deadlock problem: deadlocks occur rarely, while system crashes due to hardware failures, compiler errors, and operating system bugs occur frequently, so it is not worth paying a large penalty in performance or convenience to eliminate deadlocks.
This method is reasonable if:
 Deadlocks occur very rarely
 The cost of prevention is high
 The system is difficult to recover
Deadlock Detection and Recovery
2. Detection Algorithms
Deadlock Detection with One Resource of Each Type
Deadlock Detection with Multiple Resources of Each Type
Banker’s Algorithm for Deadlock
Avoidance
• Deadlock can be avoided by allocating resources
carefully
• Carefully analyze each resource request to see if
it can be safely granted.
• We need an algorithm that can always avoid deadlock by making the right choice every time.
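The core of the Banker's algorithm is a safety check: a request is granted only if, after the allocation, some order still exists in which every process can finish. A sketch follows; the matrices used in the example are textbook-style illustrations, not values from these slides.

```python
def is_safe(available, allocation, need):
    """Banker's safety check. available: units free per resource type;
    allocation[i]/need[i]: units held / still needed by process i.
    Returns True if every process can finish in some order."""
    work = available[:]
    finish = [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for i in range(len(allocation)):
            if not finish[i] and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion and release its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                progressed = True
    return all(finish)

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe([3, 3, 2], allocation, need))
```

A request that would make `is_safe` return False is simply not granted.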
Deadlock detection for multiple resource
• When multiple copies of some of the resources exist, a matrix-based algorithm can be used for detecting deadlock among n processes.
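The matrix-based detection algorithm can be sketched as follows: repeatedly mark any process whose outstanding request can be satisfied from the available vector (such a process could run to completion and release what it holds); any process left unmarked at the end is deadlocked. The matrices below are illustrative:

```python
def find_deadlocked(available, allocation, request):
    """Matrix-based deadlock detection. Returns the indices of
    deadlocked processes (an empty list means no deadlock)."""
    work = available[:]
    marked = [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for i in range(len(allocation)):
            if not marked[i] and all(r <= w for r, w in zip(request[i], work)):
                # i's request can be met: assume it finishes and releases all.
                work = [w + a for w, a in zip(work, allocation[i])]
                marked[i] = True
                progressed = True
    return [i for i, m in enumerate(marked) if not m]

# Two processes each holding one unit and requesting one more unit of a
# resource with no units left: both are deadlocked.
print(find_deadlocked([0], [[1], [1]], [[1], [1]]))
```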
Prevent Deadlock
Eliminate Mutual Exclusion:
• Mutual exclusion is a condition where multiple
users are blocked from accessing the same shared
data or variable at the same time.
• It's a necessary condition for deadlocks, and in
most cases, it's not possible to prevent it.
• The only way to eliminate mutual exclusion is
with a hardware solution, like increasing the
number of devices.
Eliminate Hold and Wait:
• To eliminate this condition, you can prevent processes from holding resources while waiting for other resources. Here are some strategies:
• Allocate all resources before execution: This prevents processes from waiting for resources during execution. However, it can lead to underused resources if the process doesn't need all resources at once.
• Require processes to release held resources before requesting more.
Eliminate No Preemption:
• In an operating system, no preemption is a condition that prevents resources from being forcibly taken away from a process, and it can lead to deadlock.
• To eliminate no preemption, you can try these strategies:
• Establish priorities: Allow higher-priority processes to preempt lower-priority processes.
• Define rollback and recovery: Create mechanisms to release resources held by processes in deadlock states.
Eliminate Circular Wait:
 To eliminate a circular wait, you can impose a linear ordering on resources and ensure that processes request resources in a consistent order. This can be done by:
1. Assigning priority numbers to resources: each resource is assigned a priority number, and processes can only request resources in increasing order of priority.
2. Numbering resource types: each resource type is assigned a number, and processes can only allocate resources in increasing order of those numbers. For example, if the tape drive is numbered 1, the disk drive 5, and the printer 12, a process must allocate the disk drive before the printer.
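The numbering scheme in point 2 can be sketched with locks: if every piece of code acquires resources in increasing number order, no circular chain can form. The resource names and numbers below follow the example in the text:

```python
import threading

# Resource numbering from the example: requests must go in increasing order.
RESOURCE_ORDER = {"tape_drive": 1, "disk_drive": 5, "printer": 12}
locks = {name: threading.Lock() for name in RESOURCE_ORDER}

def acquire_in_order(*names):
    """Acquire the named resource locks in increasing number order,
    regardless of the order the caller listed them."""
    ordered = sorted(names, key=RESOURCE_ORDER.get)
    for name in ordered:
        locks[name].acquire()
    return ordered

held = acquire_in_order("printer", "disk_drive")  # disk_drive locked first
for name in held:
    locks[name].release()
```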
Deadlock avoidance:
• The Operating system runs an algorithm on requests to
check for a safe state.
• Any request that may result in a deadlock is not
granted.
• Example: Checking each car and not allowing any car
that can block the road. If there is already traffic on the
road, then a car coming from the opposite direction can
cause blockage.
Deadlock detection & recovery:
• Deadlock detection & recovery: OS detects
deadlock by regularly checking the system
state, and recovers to a safe state using
recovery techniques.
• Example: Unblocking the road by backing
cars from one side. Deadlock prevention
and deadlock avoidance are carried out
before deadlock occurs.
Lecture 5 loading …
Tuesday 2:00 to 4:00 G.C
Memory management
• Memory management is the process of organizing and regulating a computer's memory to efficiently allocate and deallocate memory space to programs.
 It is the task of subdividing the memory among different processes.
 The main aim of memory management is to achieve efficient utilization of memory.
Memory management
• Memory is the important part of the computer
that is used to store the data.
• At any time, many processes are competing for it.
• Moreover, to increase performance, several
processes are executed simultaneously.
• The event of copying a process from hard disk to main memory is called swapping in.
• The event of copying a process from main memory to hard disk is called swapping out.
Swapping in memory management
• When swapping creates multiple holes in memory, it is possible to combine them all into one big hole by moving all the processes downward as far as possible (memory compaction).
Memory Management Techniques
• A uniprogramming operating system (OS) is a type of OS that runs only one program at a time.
• In a uniprogramming OS, the CPU dedicates all its resources to a single program, allowing it to execute to completion.
• In a uniprogramming system, jobs are submitted in batches.
• From within the batch, the jobs are processed one by one.
• A collection of jobs forms a batch, from which the jobs are executed one by one.
• Every user submits his/her job to the operator, who forms a batch of jobs.
• The entire system is used by one process at a time.
Multiprogramming
• A multiprogramming operating system (OS)
allows multiple programs to run simultaneously
on a single processor computer. The main goal of
multiprogramming is to keep the CPU busy and
efficiently manage the system's resources
Multiprogramming
Characteristics
Concurrent execution: Multiprogramming involves the execution of multiple programs concurrently.
Time slicing: The CPU allocates a time slice to each program, switching between them rapidly.
Context switching: The CPU saves the context of one program and restores the context of another program, allowing for efficient switching between programs.
Multiprogramming method
Fixed-size partitioning
• Also known as static partitioning, the memory is
partitioned into blocks of fixed size.
• For example, in the diagram below, the memory is divided into five blocks, each of size 4 MB.
If a process of size 4 MB comes, it will be easily allocated to any of the
4 MB memory blocks.
Fixed-size partitioning
• If a process of less than 4 MB comes, we can easily allocate that process in memory. Still, that process will suffer internal fragmentation, since a whole 4 MB block will be allocated even though it is not required, and the leftover memory will be wasted.
• Moreover, if a process larger than 4 MB comes, we cannot allocate it at all, since spanning multiple blocks is not allowed in fixed-size partitioning.
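Both effects can be sketched numerically, using the 4 MB blocks from the example above:

```python
BLOCK_MB = 4   # fixed partition size, as in the example

def place(process_mb, free_blocks):
    """Try to place a process in a single fixed-size block.
    Returns (placed, internal_fragmentation_mb)."""
    if process_mb > BLOCK_MB or free_blocks == 0:
        return False, 0                    # spanning blocks is not allowed
    return True, BLOCK_MB - process_mb     # leftover space inside the block

print(place(3, free_blocks=5))   # a 3 MB process wastes 1 MB of its block
print(place(6, free_blocks=5))   # a 6 MB process cannot be placed at all
```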
Multiprogramming with Fixed partitions
• Whenever any program needs to be loaded in memory, a
free partition big enough to hold the program is found.
• This partition will be allocated to that program or
process.
• If there is no free partition of the required size available, the process needs to wait. Such a process will be put in a queue.
• In fixed partitioning, main memory is not utilized effectively.
• The memory assigned to each process is of the same size, which can cause some processes to have more memory than they need.
Multiprogramming with Fixed partitions

Figure 5-1. (a) Fixed memory partitions with separate input queues for each
partition. (b) Fixed memory partitions with a single input queue.
Multiprogramming with Fixed partitions
• The disadvantage of sorting the incoming jobs into
separate queues becomes obvious when the queue for a
large partition is empty but the queue for a small
partition is full, as is the case for partitions 1 and 3 in Fig.
5-1(a)
• Here small jobs have to wait to get into memory, even
though plenty of memory is free.
• An alternative organization is to maintain a single queue
as in Fig. 5-1(b).
• Whenever a partition becomes free, the job closest to the
front of the queue that fits in it could be loaded into the
empty partition and run.
Variable size partitioning
• Also known as dynamic partitioning, the memory is
divided into blocks of varying sizes.
• At run-time, processes request blocks of main memory
from variable-sized memory partitions.
• If enough main memory is available, the process will be
allotted a block of main memory with the same size as
the process requires.
Variable size partitioning
• For example, a process P1 of size 4 MB comes
into the memory. After that, another process P2
of size 12 MB comes into the memory, and then
another process P3 of size 5 MB comes into the
memory. In the memory, these processes will look
like this-
• This method overcomes internal fragmentation but suffers
from external fragmentation. If the process P1 frees the
memory block, then that space will only be used by a process
less than or equal to 4 MB.
• Till then, the space will remain wasted.
Multiprogramming with dynamic
partitions

• If enough free memory is not available to fit the process, process needs to
wait until required memory becomes available
Multiprogramming with dynamic
partitions
 Whenever any process terminates, it releases the space it occupied. If the released free space is contiguous to another free partition, the two free partitions are merged into a single free partition.
 Better utilization of memory than fixed-size partitioning.
 This method suffers from external fragmentation.
Figure 5-2. Memory allocation changes as processes come into memory and leave it. The shaded
regions are unused memory
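The merging of adjacent free partitions can be sketched over a free list of (start, size) holes; the addresses below are illustrative:

```python
def coalesce(holes):
    """holes: list of (start, size) free partitions. Any hole that ends
    exactly where the next one starts is merged into a single bigger hole."""
    merged = []
    for start, size in sorted(holes):
        if merged and merged[-1][0] + merged[-1][1] == start:   # contiguous?
            merged[-1] = (merged[-1][0], merged[-1][1] + size)
        else:
            merged.append((start, size))
    return merged

print(coalesce([(0, 4), (4, 8), (20, 4)]))   # the first two holes merge
```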
Fragmentation
• When free memory is broken into small blocks, it
can be difficult to allocate larger chunks of
memory to processes.
• This can lead to wasted memory and reduced
system performance.
• The free memory space gets fragmented
Types of fragmentation
Internal Fragmentation
• It occurs when the memory allocator leaves extra space empty inside a block of memory that has been allocated for a client.
• This usually happens because the processor's design requires that memory be cut into blocks of certain sizes; for example, blocks may be required to be evenly divisible by four, eight, or 16 bytes.
Internal Fragmentation
• When this occurs, a client that needs 57 bytes of
memory, for example, may be allocated a block
that contains 60 bytes, or even 64.
• The extra bytes that the client doesn’t need go to
waste, and over time these tiny chunks of
unused memory can build up and create large
quantities of memory that can’t be put to use by
the allocator.
• Because all of these useless bytes are inside larger
memory blocks, the fragmentation is considered
internal
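The 57-byte example above can be reproduced with a simple rounding rule (an 8-byte block multiple is assumed here for illustration):

```python
def allocated_size(request_bytes, block=8):
    """Round a request up to the allocator's block multiple; the
    difference between what is granted and what was asked for is
    internal fragmentation."""
    blocks = -(-request_bytes // block)      # ceiling division
    return blocks * block

granted = allocated_size(57)
print(granted, granted - 57)                 # 64 bytes granted, 7 wasted
```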
External Fragmentation
• External fragmentation in an operating system (OS) is
when a dynamic memory allocation technique leaves a
small amount of unusable memory after allocating
memory.
• This happens when the allocated memory is divided
into smaller pieces, which are scattered throughout the
memory space
