
Unit 2

Problem:1
First-Come, First-Served (FCFS) Scheduling:
Process Burst Time Priority Arrival Time
P1 10 5 0
P2 6 2 0
P3 7 4 1
P4 4 1 1
P5 5 3 2
The Gantt chart for the schedule is:
P1 P2 P3 P4 P5
0 10 16 23 27 32
Waiting Time
Process    Waiting Time    Subtract    Arrival Time    Total
P1 0 - 0 0
P2 10 - 0 10
P3 16 - 1 15
P4 23 - 1 22
P5 27 - 2 25
Average Waiting Time = Waiting Time / No of Process
Average Waiting Time = (0 + 10 + 15 + 22 + 25) / 5
= 72 / 5 = 14.4 MS
Turnaround Time = Waiting time + Burst Time
Process    Waiting Time    Addition    Burst Time    Total
P1 0 + 10 10
P2 10 + 6 16
P3 15 + 7 22
P4 22 + 4 26
P5 25 + 5 30

Average Turnaround Time = Turnaround Time / No of Process
= (10 + 16 + 22 + 26 + 30) / 5
= 104 / 5 = 20.8 MS
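As a cross-check, the FCFS arithmetic above can be recomputed mechanically. Below is a minimal C sketch with this problem's data hard-coded; it relies on the processes being listed in arrival order and on the CPU never being idle, both of which hold here:

#include <stdio.h>

int main(void) {
    int burst[]   = {10, 6, 7, 4, 5};   /* P1..P5 burst times   */
    int arrival[] = { 0, 0, 1, 1, 2};   /* P1..P5 arrival times */
    int n = 5, clock = 0, wait_sum = 0, tat_sum = 0;

    for (int i = 0; i < n; i++) {           /* FCFS: run in arrival order     */
        int waiting = clock - arrival[i];   /* start time minus arrival time  */
        int tat     = waiting + burst[i];   /* turnaround = waiting + burst   */
        wait_sum += waiting;
        tat_sum  += tat;
        clock    += burst[i];
    }
    printf("Average waiting time:    %.1f ms\n", (double)wait_sum / n);  /* 14.4 */
    printf("Average turnaround time: %.1f ms\n", (double)tat_sum / n);   /* 20.8 */
    return 0;
}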
Priority Scheduling (Preemptive)
The Gantt chart for the schedule is:
P2 P4 P2 P5 P3 P1
0 1 5 10 15 22 32
Waiting Time
Process    Waiting Time    Subtract    Arrival Time    Total
P1 22 - 0 22
P2 0+(5-1)=4 - 0 4
P3 15 - 1 14
P4 1 - 1 0
P5 10 - 2 8
Average Waiting Time = Waiting Time / No of Process
Average Waiting Time = (22 + 4 + 14 + 0 + 8) / 5
= 48 / 5
= 9.6 MS
Turnaround Time = Waiting time + Burst Time
Process    Waiting Time    Addition    Burst Time    Total
P1 22 + 10 32
P2 4 + 6 10
P3 14 + 7 21
P4 0 + 4 4
P5 8 + 5 13
Average Turnaround Time = Turnaround Time / No of Process
= (32 + 10 + 21 + 4 + 13) / 5
= 80 / 5
= 16 MS

Priority Scheduling (Non-Preemptive)
The Gantt chart for the schedule is:
P2 P4 P5 P3 P1
0 6 10 15 22 32

Waiting Time
Process    Waiting Time    Subtract    Arrival Time    Total
P1 22 - 0 22
P2 0 - 0 0
P3 15 - 1 14
P4 6 - 1 5
P5 10 - 2 8
Average Waiting Time = Waiting Time / No of Process
Average Waiting Time = (22 + 0 + 14 + 5 + 8) / 5
= 49 / 5
= 9.8 MS
Turnaround Time = Waiting time + Burst Time
Process    Waiting Time    Addition    Burst Time    Total
P1 22 + 10 32
P2 0 + 6 6
P3 14 + 7 21
P4 5 + 4 9
P5 8 + 5 13
Average Turnaround Time = Turnaround Time / No of Process
= (32 + 6 + 21 + 9 + 13) / 5
= 81 / 5
= 16.2 MS
Round Robin Scheduling
(Time Quantum = 5)

The Gantt chart for the schedule is:
P1 P2 P3 P4 P5 P1 P2 P3
0 5 10 15 19 24 29 30 32
Waiting Time
Process    Waiting Time    Subtract    Arrival Time    Total
P1 0 + (24 – 5) = 19 - 0 19
P2 5 + (29 – 10) = 24 - 0 24
P3 10 + (30 – 15) = 25 - 1 24
P4 15 - 1 14
P5 19 - 2 17
Average Waiting Time = Waiting Time / No of Process
Average Waiting Time = (19 + 24 + 24 + 14 + 17) / 5
= 98 / 5
= 19.6 MS
Turnaround Time = Waiting time + Burst Time
Process    Waiting Time    Addition    Burst Time    Total
P1 19 + 10 29
P2 24 + 6 30
P3 24 + 7 31
P4 14 + 4 18
P5 17 + 5 22
Average Turnaround Time = Turnaround Time / No of Process
= (29 + 30 + 31 + 18 + 22) / 5
= 130 / 5
= 26 MS
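The Round Robin figures can be reproduced with a short simulation. This C sketch follows the convention the Gantt chart above also uses: a process that arrives during a time-slice joins the ready queue before the preempted process rejoins it:

#include <stdio.h>

#define N 5
#define QUANTUM 5

int main(void) {
    int burst[N]   = {10, 6, 7, 4, 5};   /* P1..P5 burst times   */
    int arrival[N] = { 0, 0, 1, 1, 2};   /* P1..P5 arrival times */
    int remaining[N], queue[64];
    int head = 0, tail = 0, clock = 0, done = 0, admitted = 0;
    int wait_sum = 0, tat_sum = 0;

    for (int i = 0; i < N; i++) remaining[i] = burst[i];
    while (admitted < N && arrival[admitted] <= clock)
        queue[tail++] = admitted++;       /* processes are listed in arrival order */

    while (done < N) {
        int p = queue[head++];            /* dequeue the next ready process */
        int slice = remaining[p] < QUANTUM ? remaining[p] : QUANTUM;
        printf("P%d runs %d-%d\n", p + 1, clock, clock + slice);
        clock += slice;
        remaining[p] -= slice;
        while (admitted < N && arrival[admitted] <= clock)
            queue[tail++] = admitted++;   /* new arrivals join first...    */
        if (remaining[p] > 0)
            queue[tail++] = p;            /* ...then the preempted process */
        else {
            int tat = clock - arrival[p]; /* turnaround = finish - arrival */
            tat_sum  += tat;
            wait_sum += tat - burst[p];   /* waiting = turnaround - burst  */
            done++;
        }
    }
    printf("Average waiting time:    %.1f ms\n", (double)wait_sum / N);  /* 19.6 */
    printf("Average turnaround time: %.1f ms\n", (double)tat_sum / N);   /* 26.0 */
    return 0;
}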
Problem:2
First-Come, First-Served (FCFS) Scheduling:
Process Burst Time Priority Arrival Time
P1 6 2 0

P2 2 2 1
P3 3 4 1
P4 1 1 2
P5 2 3 2
The Gantt chart for the schedule is:
P1 P2 P3 P4 P5
0 6 8 11 12 14
Waiting Time
Process    Waiting Time    Subtract    Arrival Time    Total
P1 0 - 0 0
P2 6 - 1 5
P3 8 - 1 7
P4 11 - 2 9
P5 12 - 2 10
Average Waiting Time = Waiting Time / No of Process
Average Waiting Time = (0 + 5 + 7 + 9 + 10) / 5
= 31 / 5 = 6.2 MS
Turnaround Time = Waiting time + Burst Time
Process    Waiting Time    Addition    Burst Time    Total
P1 0 + 6 6
P2 5 + 2 7
P3 7 + 3 10
P4 9 + 1 10
P5 10 + 2 12
Average Turnaround Time = Turnaround Time / No of Process
= (6 + 7 + 10 + 10 + 12) / 5
= 45 / 5 = 9 MS
Priority Scheduling (Preemptive)
The Gantt chart for the schedule is:

P1 P4 P1 P2 P5 P3
0 2 3 7 9 11 14
Waiting Time
Process    Waiting Time    Subtract    Arrival Time    Total
P1 0 + (3-2) = 1 - 0 1
P2 7 - 1 6
P3 11 - 1 10
P4 2 - 2 0
P5 9 - 2 7
Average Waiting Time = Waiting Time / No of Process
Average Waiting Time = (1 + 6 + 10 + 0 + 7) / 5
= 24 / 5
= 4.8 MS
Turnaround Time = Waiting time + Burst Time
Process    Waiting Time    Addition    Burst Time    Total
P1 1 + 6 7
P2 6 + 2 8
P3 10 + 3 13
P4 0 + 1 1
P5 7 + 2 9
Average Turnaround Time = Turnaround Time / No of Process
= (7 + 8 + 13 + 1 + 9) / 5
= 38 / 5
= 7.6 MS
Priority Scheduling (Non-Preemptive)
The Gantt chart for the schedule is:
P1 P4 P2 P5 P3
0 6 7 9 11 14

Waiting Time
Process    Waiting Time    Subtract    Arrival Time    Total
P1 0 - 0 0
P2 7 - 1 6
P3 11 - 1 10
P4 6 - 2 4
P5 9 - 2 7
Average Waiting Time = Waiting Time / No of Process
Average Waiting Time = (0 + 6 + 10 + 4 + 7) / 5
= 27 / 5
= 5.4 MS
Turnaround Time = Waiting time + Burst Time
Process    Waiting Time    Addition    Burst Time    Total
P1 0 + 6 6
P2 6 + 2 8
P3 10 + 3 13
P4 4 + 1 5
P5 7 + 2 9
Average Turnaround Time = Turnaround Time / No of Process
= (6 + 8 + 13 + 5 + 9) / 5
= 41 / 5
= 8.2 MS
Round Robin Scheduling
(Time Quantum = 1)
The Gantt chart for the schedule is:
P1 P2 P3 P4 P5 P1 P2 P3 P5 P1 P3 P1 P1 P1
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Waiting Time
Process    Waiting Time    Subtract    Arrival Time    Total
P1 0+(5-1)+(9-6)+(11-10)=8 - 0 8
P2 1+(6-2)=5 - 1 4
P3 2+(7-3)+(10-8)=8 - 1 7
P4 3 - 2 1
P5 4+(8-5)=7 - 2 5
Average Waiting Time = Waiting Time / No of Process
Average Waiting Time = (8 + 4 + 7 + 1 + 5) / 5
= 25 / 5
= 5 MS
Turnaround Time = Waiting time + Burst Time
Process    Waiting Time    Addition    Burst Time    Total
P1 8 + 6 14
P2 4 + 2 6
P3 7 + 3 10
P4 1 + 1 2
P5 5 + 2 7
Average Turnaround Time = Turnaround Time / No of Process
= (14 + 6 + 10 + 2 + 7) / 5
= 39 / 5
= 7.8 MS
Problem:3
First-Come, First-Served (FCFS) Scheduling:
Process Burst Time Arrival Time
P1 8 0
P2 4 1
P3 9 2
P4 5 3
The Gantt chart for the schedule is:
P1 P2 P3 P4

0 8 12 21 26
Waiting Time
Process    Waiting Time    Subtract    Arrival Time    Total
P1 0 - 0 0
P2 8 - 1 7
P3 12 - 2 10
P4 21 - 3 18
Average Waiting Time = Waiting Time / No of Process
Average Waiting Time = (0 + 7 + 10 + 18) / 4
= 35 / 4
= 8.75 MS

Turnaround Time = Waiting time + Burst Time


Process    Waiting Time    Addition    Burst Time    Total
P1 0 + 8 8
P2 7 + 4 11
P3 10 + 9 19
P4 18 + 5 23
Average Turnaround Time = Turnaround Time / No of Process
= (8 + 11 + 19 + 23) / 4
= 61 / 4 = 15.25 MS
Shortest Job First Scheduling (Preemptive)
The Gantt chart for the schedule is:
P1 P2 P4 P1 P3
0 1 5 10 17 26
Waiting Time

Process    Waiting Time    Subtract    Arrival Time    Total
P1 0 + (10-1) = 9 - 0 9
P2 1 - 1 0
P3 17 - 2 15
P4 5 - 3 2
Average Waiting Time = Waiting Time / No of Process
Average Waiting Time = (9 + 0 + 15 + 2) / 4
= 26 / 4
= 6.5 MS
Turnaround Time = Waiting time + Burst Time
Process    Waiting Time    Addition    Burst Time    Total
P1 9 + 8 17
P2 0 + 4 4
P3 15 + 9 24
P4 2 + 5 7
Average Turnaround Time = Turnaround Time / No of Process
= (17 + 4 + 24 + 7) / 4
= 52 / 4
= 13 MS
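The preemptive (shortest-remaining-time) schedule can be checked by simulating one time unit at a time. The sketch below hard-codes this problem's data and breaks ties by lower process number:

#include <stdio.h>

#define N 4

int main(void) {
    int burst[N]   = {8, 4, 9, 5};   /* Problem 3 burst times   */
    int arrival[N] = {0, 1, 2, 3};   /* Problem 3 arrival times */
    int remaining[N], finish[N];
    int clock = 0, done = 0;

    for (int i = 0; i < N; i++) remaining[i] = burst[i];

    while (done < N) {
        int p = -1;
        for (int i = 0; i < N; i++)  /* pick the smallest remaining time */
            if (arrival[i] <= clock && remaining[i] > 0 &&
                (p < 0 || remaining[i] < remaining[p]))
                p = i;
        remaining[p]--;              /* run the chosen process one tick  */
        clock++;
        if (remaining[p] == 0) { finish[p] = clock; done++; }
    }
    for (int i = 0; i < N; i++) {
        int tat = finish[i] - arrival[i];
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, tat - burst[i], tat);
    }
    return 0;   /* waiting 9,0,15,2 and turnaround 17,4,24,7 as above */
}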
Shortest Job First Scheduling (Non-Preemptive)
The Gantt chart for the schedule is:
P1 P2 P4 P3
0 8 12 17 26
Waiting Time
Process    Waiting Time    Subtract    Arrival Time    Total
P1 0 - 0 0
P2 8 - 1 7
P3 17 - 2 15

P4 12 - 3 9
Average Waiting Time = Waiting Time / No of Process
Average Waiting Time = (0 + 7 + 15 + 9) / 4
= 31 / 4
= 7.75 MS
Turnaround Time = Waiting time + Burst Time
Process    Waiting Time    Addition    Burst Time    Total
P1 0 + 8 8
P2 7 + 4 11
P3 15 + 9 24
P4 9 + 5 14
Average Turnaround Time = Turnaround Time / No of Process
= (8 + 11 + 24 + 14) / 4
= 57 / 4 = 14.25 MS
Round Robin Scheduling
(Time Quantum = 1)
The Gantt chart for the schedule is:
Each slot is one time unit (quantum = 1), from time 0 to 26:
0-1 P1, 1-2 P2, 2-3 P3, 3-4 P4, 4-5 P1, 5-6 P2, 6-7 P3, 7-8 P4, 8-9 P1, 9-10 P2, 10-11 P3, 11-12 P4, 12-13 P1, 13-14 P2, 14-15 P3, 15-16 P4, 16-17 P1, 17-18 P3, 18-19 P4, 19-20 P1, 20-21 P3, 21-22 P1, 22-23 P3, 23-24 P1, 24-25 P3, 25-26 P3
Waiting Time
Process    Waiting Time    Subtract    Arrival Time    Total
P1 0+(4-1)+(8-5)+(12-9)+(16-13)+(19-17)+(21-20)+(23-22) - 0 16
= 0+3+3+3+3+2+1+1 = 16
P2 1+(5-2)+(9-6)+(13-10) = 1+3+3+3 = 10 - 1 9
P3 2+(6-3)+(10-7)+(14-11)+(17-15)+(20-18)+(22-21)+(24-23) - 2 15
= 2+3+3+3+2+2+1+1 = 17
P4 3+(7-4)+(11-8)+(15-12)+(18-16) - 3 11
= 3+3+3+3+2 = 14
Average Waiting Time = Waiting Time / No of Process

Average Waiting Time = (16 + 9 + 15 + 11) / 4
= 51 / 4
= 12.75 MS
Turnaround Time = Waiting time + Burst Time
Process    Waiting Time    Addition    Burst Time    Total
P1 16 + 8 24
P2 9 + 4 13
P3 15 + 9 24
P4 11 + 5 16
Average Turnaround Time = Turnaround Time / No of Process
= (24 + 13 + 24 + 16) / 4
= 77 / 4
= 19.25 MS

CPU SCHEDULING:
•	The assignment of physical processors to processes allows processors to accomplish work.
•	The problem of determining when processors should be assigned, and to which processes, is called processor scheduling or CPU scheduling.
•	When more than one process is runnable, the operating system must decide which one to run first.
•	The part of the operating system concerned with this decision is called the scheduler, and the algorithm it uses is called the scheduling algorithm.
Goals of scheduling (objectives):
•	Many objectives must be considered in the design of a scheduling discipline.
•	In particular, a scheduler should consider fairness, efficiency, response time, turnaround time, throughput, etc.

•	Some of these goals depend on the system one is using, for example a batch system, interactive system or real-time system, but there are also some goals that are desirable in all systems.
Scheduling Criteria:
Some commonly used criteria are given below:
•	Fairness: avoid starvation; all processes must be given an equal opportunity to execute.
•	Waiting time: the average period of time a process spends waiting. Waiting time may be expressed as turnaround time less the actual execution time.
•	Efficiency: the scheduler should keep the system (in particular the CPU) busy one hundred percent of the time when possible. If the CPU and all the input/output devices can be kept running all the time, more work gets done per second than if some components are idle.
•	Response time: a scheduler should minimize the response time for interactive users.
•	Turnaround: a scheduler should minimize the time batch users must wait for output.
•	Throughput: the amount of work completed in a unit of time, i.e. the number of processes the system can execute in a period of time. The higher the number, the more work is done by the system.
Preemptive vs. Nonpreemptive Scheduling:
Scheduling algorithms can be divided into two categories with respect to how they deal with clock interrupts.
Non-preemptive Scheduling
A scheduling discipline is non-preemptive if, once a process has been given the CPU, the CPU cannot be taken away from that process.
Following are some characteristics of non-preemptive scheduling:
•	In a non-preemptive system, short jobs are made to wait by longer jobs, but the overall treatment of all processes is fair.
•	In a non-preemptive system, response times are more predictable because incoming high-priority jobs cannot displace waiting jobs.
•	In non-preemptive scheduling, a scheduler dispatches jobs in the following two situations:
	-	when a process switches from the running state to the waiting state, and
	-	when a process terminates.

Preemptive Scheduling
A scheduling discipline is preemptive if, once a process has been given the CPU, the CPU can be taken away from it. The strategy of allowing processes that are logically runnable to be temporarily suspended is called preemptive scheduling, and it contrasts with the "run to completion" method.
Scheduling Algorithms:
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. Following are some scheduling algorithms:
•	FCFS scheduling
•	Round Robin scheduling
•	SJF scheduling
•	Priority scheduling
•	Multilevel queue scheduling
•	Multilevel feedback queue scheduling
First-Come-First-Served (FCFS) Scheduling:
Other names of this algorithm are:
•	First-In-First-Out (FIFO)
•	Run-to-Completion
•	Run-Until-Done
•	The First-Come-First-Served algorithm is perhaps the simplest scheduling algorithm.
•	Processes are dispatched according to their arrival time on the ready queue.
•	Being a non-preemptive discipline, once a process has the CPU, it runs to completion.
•	FCFS scheduling is fair in the formal or human sense of fairness, but it is unfair in the sense that long jobs make short jobs wait and unimportant jobs make important jobs wait.
•	FCFS is more predictable than most other schemes, since jobs are served strictly in order of arrival.
•	The FCFS scheme is not useful for scheduling interactive users because it cannot guarantee good response time.
•	The code for FCFS scheduling is simple to write and understand.
•	One of the major drawbacks of this scheme is that the average waiting time is often quite long.
•	The First-Come-First-Served algorithm is rarely used as a master scheme in modern operating systems, but it is often embedded within other schemes.

Round Robin Scheduling:
•	One of the oldest, simplest, fairest and most widely used algorithms is round robin (RR).
•	In round robin scheduling, processes are dispatched in a FIFO manner but are given a limited amount of CPU time called a time-slice or a quantum.
•	If a process does not complete before its CPU time expires, the CPU is preempted and given to the next process waiting in the queue.
•	The preempted process is then placed at the back of the ready list.
•	Round robin scheduling is preemptive (at the end of the time-slice); therefore it is effective in time-sharing environments in which the system needs to guarantee reasonable response times for interactive users.
•	The only interesting issue with the round robin scheme is the length of the quantum. Setting the quantum too short causes too many context switches and lowers CPU efficiency. On the other hand, setting the quantum too long may cause poor response time and approximates FCFS.
•	In any event, the average waiting time under round robin scheduling is often quite long.
Shortest-Job-First (SJF) Scheduling:
•	Another name for this algorithm is Shortest-Process-Next (SPN).
•	Shortest-Job-First (SJF) is a non-preemptive discipline in which the waiting job (or process) with the smallest estimated run-time-to-completion is run next.
•	In other words, when the CPU is available, it is assigned to the process that has the smallest next CPU burst.
•	SJF scheduling is especially appropriate for batch jobs for which the run times are known in advance. Since the SJF scheduling algorithm gives the minimum average waiting time for a given set of processes, it is probably optimal.
•	The SJF algorithm favors short jobs (or processes) at the expense of longer ones.
•	The obvious problem with the SJF scheme is that it requires precise knowledge of how long a job or process will run, and this information is not usually available.
•	The best the SJF algorithm can do is to rely on user estimates of run times.
•	In a production environment where the same jobs run regularly, it may be possible to provide reasonable estimates of run time, based on the past performance of the process.
•	But in a development environment users rarely know how their program will execute.

•	Like FCFS, SJF is non-preemptive; therefore, it is not useful in a timesharing environment in which reasonable response time must be guaranteed.
Shortest-Remaining-Time (SRT) Scheduling:
•	SRT is the preemptive counterpart of SJF and is useful in time-sharing environments.
•	In SRT scheduling, the process with the smallest estimated run-time to completion is run next, including new arrivals.
•	In the SJF scheme, once a job begins executing, it runs to completion.
•	In the SRT scheme, a running process may be preempted by a newly arrived process with a shorter estimated run-time.
•	The SRT algorithm has higher overhead than its counterpart SJF.
•	SRT must keep track of the elapsed time of the running process and must handle occasional preemptions.
•	In this scheme, newly arrived short processes will run almost immediately. However, longer jobs have an even longer mean waiting time.
Priority Scheduling:
•	The basic idea is straightforward: each process is assigned a priority, and the process with the highest priority is allowed to run. Equal-priority processes are scheduled in FCFS order. The Shortest-Job-First (SJF) algorithm is a special case of the general priority scheduling algorithm.
•	An SJF algorithm is simply a priority algorithm where the priority is the inverse of the (predicted) next CPU burst. That is, the longer the CPU burst, the lower the priority, and vice versa.
•	Priority can be defined either internally or externally. Internally defined priorities use some measurable quantities or qualities to compute the priority of a process.
•	Priority scheduling can be either preemptive or non-preemptive.
•	When a process arrives at the ready queue, its priority is compared with the priority of the currently running process.
•	A preemptive priority-scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.
•	A non-preemptive priority-scheduling algorithm will simply put the new process at the head of the ready queue.
•	A major problem with priority-scheduling algorithms is indefinite blocking or starvation.

•	A process that is ready to run but lacking the CPU is considered blocked - waiting for the CPU.
•	A priority-scheduling algorithm can leave some low-priority processes waiting indefinitely for the CPU.
•	Generally, one of two things will happen: either the process will eventually be run, or the computer system will eventually crash and lose all unfinished low-priority processes.
•	A solution to the problem of indefinite blockage of low-priority processes is aging. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time (see the sketch below).
Multilevel Queue Scheduling:
•	A multilevel queue scheduling algorithm partitions the ready queue into several separate queues.
•	The processes are permanently assigned to one queue, based on some property of the process, such as memory size, process priority or process type.
•	Each queue has its own scheduling algorithm.
•	For example, separate queues can be used for foreground and background processes.
	-	Foreground queue: for interactive processes, with the highest priority. This can be scheduled using the RR scheduling algorithm.
	-	Background queue: for batch processes, with the lowest priority; it uses the FCFS scheduling algorithm.
Fig. A multilevel queue with five priority levels, from highest to lowest: system processes, interactive processes, interactive editing processes, batch processes, student processes.
•	Let us look at an example of a multilevel queue scheduling algorithm with five queues:
	-	System processes
	-	Interactive processes

	-	Interactive editing processes
	-	Batch processes
	-	Student processes
•	We execute the highest-priority queue first.
•	This means that no process in the batch queue, for example, could run unless the queues for system processes, interactive processes, and interactive editing processes were all empty.
•	If an interactive editing process entered the ready queue while a batch process was running, the batch process would be preempted.
Advantages:
•	Processes are permanently assigned to a queue when they enter the system, and they do not change their foreground or background nature.
•	So there is low scheduling overhead.
Disadvantages:
•	Starvation for the CPU can occur: processes are never allowed to change their queue (inflexibility), so a low-priority process may wait indefinitely if one or other of the higher-priority queues never becomes empty.

Multilevel Feedback Queue Scheduling:
•	In a multilevel queue scheduling algorithm, processes are permanently assigned to a queue on entry to the system.
•	Processes do not move between queues.
•	If there are separate queues for foreground and background processes, for example, processes do not move from one queue to the other, since processes do not change their foreground or background nature.
•	Multilevel feedback queue scheduling allows a process to move between the queues.
•	The idea is to separate processes with different CPU-burst characteristics.
•	If a process uses too much CPU time, it will be moved to a lower-priority queue.
•	If a process waits too long in a lower-priority queue, it may be moved to a higher-priority queue. This form of aging prevents starvation.

Fig. A multilevel feedback queue with three levels: queue 0 with time quantum q = 8 (highest priority), queue 1 with q = 16, and queue 2 scheduled FCFS (lowest priority); aging moves long-waiting processes to higher-priority queues.

•	For example, consider a multilevel feedback queue scheduler with three queues, numbered from 0 to 2.
•	The scheduler first executes all processes in queue 0.
•	Only when queue 0 is empty will it execute processes in queue 1.
•	Similarly, processes in queue 2 will be executed only if queues 0 and 1 are empty.
•	A process that arrives for queue 1 will preempt a process in queue 2.
•	A process that arrives for queue 0 will, in turn, preempt a process in queue 1.
•	A process entering the ready queue is put in queue 0.
•	A process in queue 0 is given a time quantum of 8 milliseconds.
•	If it does not finish within this time, it is moved to the tail of queue 1. If queue 0 is empty, the process at the head of queue 1 is given a quantum of 16 milliseconds. If it does not complete, it is preempted and put into queue 2.
•	Processes in queue 2 are run on an FCFS basis, only when queues 0 and 1 are empty.
A multilevel feedback queue scheduler is defined by the following parameters:
•	the number of queues
•	the scheduling algorithm for each queue
•	the method used to determine when to upgrade a process to a higher-priority queue
•	the method used to determine when to demote a process to a lower-priority queue
•	the method used to determine which queue a process will enter when the process needs service (a small sketch of these policies follows).
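As an illustration of the first two parameters and the demotion rule, here is a minimal C sketch of the three-queue configuration described above. The function names and the encoding of the rule are illustrative, not from the source:

#include <stdio.h>

enum { Q0, Q1, Q2 };   /* the three queues described above */

int quantum_for(int level) {
    switch (level) {
    case Q0: return 8;     /* queue 0: q = 8 ms                  */
    case Q1: return 16;    /* queue 1: q = 16 ms                 */
    default: return -1;    /* queue 2: FCFS, runs to completion  */
    }
}

/* A process that exhausts its full quantum is demoted one level;
 * a process that blocks early (I/O-bound) keeps its level. */
int next_level(int level, int used_full_quantum) {
    return (used_full_quantum && level < Q2) ? level + 1 : level;
}

int main(void) {
    int level = Q0;
    while (level != Q2) {
        printf("running at level %d with quantum %d\n", level, quantum_for(level));
        level = next_level(level, 1);   /* pretend it always uses its full quantum */
    }
    printf("settled in the FCFS queue\n");
    return 0;
}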
_____________________________________________________________________________
Multiple Processor Scheduling:
•	If multiple CPUs are available, the scheduling problem is more complex.
•	In a multiprocessor system, processors that are identical in terms of functionality are called homogeneous.
•	CPUs with different functionality are called heterogeneous.
•	Even within a homogeneous multiprocessor, there are sometimes limitations on scheduling.
•	If several identical processors are available, then load sharing can occur.
•	It would be possible to provide a separate queue for each processor.
•	In this case one processor could be idle, with an empty queue, while another processor was very busy.
•	To prevent this situation, we use a common ready queue.
•	All processes go into one queue and are scheduled onto any available processor.
•	In such a scheduling scheme, one of two scheduling approaches may be used:
	-	Self-scheduling: each processor examines the common ready queue and selects a process to execute.
	-	Master-slave structure: appoint one processor as the scheduler for the other processors, thus creating a master-slave structure.
_____________________________________________________________________________
Real Time Scheduling:
•	Real-time computing is divided into two types:
	-	hard real-time systems
	-	soft real-time systems
•	Hard real-time systems are required to complete a critical task within a guaranteed amount of time. A process is submitted together with a statement of the amount of time in which it needs to complete or perform I/O. The scheduler either admits the process, guaranteeing that the process will complete on time, or rejects the request as impossible. This is known as resource reservation.
•	Such a guarantee requires that the scheduler know exactly how long each type of operating system function takes to perform, and therefore each operation must be guaranteed to take a maximum amount of time. Therefore, hard real-time systems run on special software dedicated to their critical processes, and lack the full functionality of modern computers and operating systems.
•	Soft real-time computing is less restrictive. It requires that critical processes receive priority over less fortunate ones. Implementing soft real-time functionality requires careful design of the scheduler and related aspects of the operating system.

•	The system must have priority scheduling, and real-time processes must have the highest priority. The priority of real-time processes must not degrade over time, even though the priority of non-real-time processes may.
•	The dispatch latency must be small. The smaller the latency, the faster a real-time process can start executing once it is runnable.
___________________________________________________________________________

Process Scheduling in Linux:


Linux provides two separate process scheduling algorithms:
1. a time-sharing algorithm for fair preemptive scheduling among multiple processes, and
2. an algorithm designed for real-time tasks, where absolute priorities are more important than fairness.
•	Linux permits only processes running in user mode to be preempted. A process may not be preempted while it is running in kernel mode, even if a real-time process with a higher priority is available to run.
•	For time-shared processes, Linux uses a prioritized credit-based algorithm. Every process possesses a certain number of scheduling credits.
•	When a new task must be chosen to run, the process with the most credits is selected. Every time a timer interrupt occurs, the currently running process loses one credit. When its credits reach zero, it is suspended and another process is selected.
•	If no runnable process has any credits, Linux performs a recrediting operation, adding credits to every process in the system based on the rule:
	Credits = (Credits / 2) + Priority
•	This allows the mixing of two factors:
1. the process's history
2. its priority
•	The use of process priority in calculating new credits allows the priority of a process to be fine-tuned.
•	Background batch jobs can be given a low priority; they will automatically receive fewer credits than interactive users' jobs, and so will receive a smaller percentage of CPU time than similar jobs with higher priorities. Linux employs this priority system to implement the standard UNIX nice process priority mechanism.
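As an illustration, a minimal C sketch of the recrediting pass. The task structure and its fields are illustrative only, not the actual Linux kernel data structures:

#include <stdio.h>

struct task { int credits; int priority; };   /* illustrative fields only */

/* Recrediting pass: run when no runnable process has credits left. */
void recredit(struct task tasks[], int n) {
    for (int i = 0; i < n; i++)
        tasks[i].credits = tasks[i].credits / 2 + tasks[i].priority;
}

int main(void) {
    struct task t[] = { {0, 20}, {3, 5} };   /* interactive vs. nice'd batch job */
    recredit(t, 2);
    printf("%d %d\n", t[0].credits, t[1].credits);   /* prints 20 and 6 */
    return 0;
}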
_____________________________________________________________________________

PROCESS SYNCHRONIZATION:
•	As we know, the act of changing context from an executing program to an interrupt handler is called context switching.
•	On the other hand, the method of changing context from an executing program to another process is called process switching.
Tasking may be implicit or explicit:
1. Implicit tasking means processes are defined by the system.
2. Explicit tasking means processes are defined by the programmer in a single application.
Suppose C, L, S and R are four instructions requiring 3, 4, 2 and 3 units of time, respectively.
Then there are two ways of executing them:
1. Sequential Processing
while (true)
{
C;
L;
R;
S;
} So, one round takes 12 units of time

2. Multitasking
Fig. Overlapped execution of successive rounds: while one round's R and S instructions execute, the next round's C and L instructions proceed in parallel.
To complete two rounds it takes 15 units of time, whereas in the sequential case it takes 24 units of time. So there is a performance gain.
However, process synchronization is needed here.
Race condition:
The situation where two or more processes are reading or writing some shared data and the final result depends on who runs precisely when is called a race condition.
To avoid race conditions, processes must coordinate access to shared data; this leads to the critical section problem.
CRITICAL SECTION PROBLEM:
A system consists of n processes {P0, P1, …, Pn-1}. Each process has a segment of code called a critical section.
Critical Section:
•	In its critical section a process may be changing common variables, updating a table, writing a file, and so on. At any moment, at most one process can execute in its critical section. When one process is executing in its critical section, no other process is allowed to execute in its critical section. The execution of critical sections by the processes is mutually exclusive in time.
•	Each process must request permission to enter its critical section. The section of code implementing this request is the entry section.
Exit Section:
•	The critical section may be followed by an exit section. The remaining code is the remainder section.
Fig. General structure of a typical process Pi:
do {
   entry section
      critical section
   exit section
      remainder section
} while (1);
A solution to the critical section problem must satisfy the following requirements:
•	Mutual exclusion: if process Pi is executing in its critical section, then no other process can be executing in its critical section.
•	Progress: if no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in the decision of which will enter its critical section next, and this selection cannot be postponed indefinitely.
•	Bounded waiting: there exists a bound on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Two Process Solutions:
1. Algorithm 1:
These algorithms handle only two processes at a time, represented as P0 and P1.
A common integer variable turn, initially assigned 0 or 1, is shared by the two processes.
When turn == i, process Pi is permitted to run in its critical section.
do {
   while (turn != i);
      critical section
   turn = j;
      remainder section
} while (1);
This gives assurance that only one process at a time can be in its critical section.
But it does not satisfy the progress requirement.
2. Algorithm 2:
The remedy to algorithm 1 is to replace the variable turn with the following array:
boolean flag[2];
The elements of the array are initialized to false.
When flag[i] is true, this value indicates that Pi is ready to enter its critical section.
The structure of process Pi is given as
do {
   flag[i] = true;
   while (flag[j]);
      critical section
   flag[i] = false;
      remainder section
} while (1);
Here, process Pi first sets flag[i] = true.
This indicates that it is ready to enter its critical section.
Then Pi checks to verify that process Pj is not also ready to enter its critical section.
If Pj is ready, Pi has to wait until Pj no longer wants to be in the critical section.
Then Pi enters its critical section.
On exiting the critical section, Pi sets flag[i] = false.
Mutual exclusion is satisfied, but progress is not satisfied.
3. Algorithm 3:
By combining the key ideas of algorithms 1 and 2, we obtain a correct solution to the critical section problem, where all three requirements are met.
The processes share two variables:
boolean flag[2];
int turn;
Initially flag[0] = flag[1] = false.
The structure of process Pi is given as
do {
   flag[i] = true;
   turn = j;
   while (flag[j] && turn == j);
      critical section
   flag[i] = false;
      remainder section
} while (1);
Here process Pi first assigns flag[i] = true.
Then it sets turn = j, asserting that if the other process wishes to enter its critical section, it may do so.
So the other process wishing to enter the critical section can enter. If both processes try to enter at the same time, the value of turn decides which of the two may enter.

Multiple Process Solution:
The bakery algorithm is used as a multiple-process solution; it solves the critical section problem for n processes.
Each process requesting entry to its critical section is given a numbered token such that the number on the token is larger than the maximum number issued earlier.
This algorithm was developed for a distributed environment.
do {
   choosing[i] = true;
   number[i] = max(number[0], number[1], …, number[n-1]) + 1;
   choosing[i] = false;
   for (j = 0; j < n; j++)
   {
      while (choosing[j]);
      while ((number[j] != 0) && ((number[j], j) < (number[i], i)));
   }
      critical section
   number[i] = 0;
      remainder section
} while (1);
_____________________________________________________________________________

Synchronization Hardware:
•	Many systems provide hardware support for critical section code.
•	In a uniprocessor environment, the critical section problem is simply solved: the current sequence of instructions is allowed to execute in order, without preemption.
•	No other instructions are run, so no unexpected changes are made to the shared variable. But this is not suitable for a multiprocessor environment: disabling interrupts on a multiprocessor can be time-consuming, as the message must be passed to all the processors.
•	This message passing
	-	delays entry into every critical section, and
	-	decreases system efficiency.
•	Modern machines therefore provide special atomic hardware instructions that
	-	either test a memory word and set its value,
	-	or swap the contents of two memory words.
Test-and-set instruction
This instruction is executed atomically.
The definition of the TestAndSet instruction is given as
boolean TestAndSet (boolean *target)
{
   boolean rv = *target;
   *target = TRUE;
   return rv;
}
When two TestAndSet instructions are executed at the same time, they will be executed sequentially in some arbitrary order.
Mutual exclusion is implemented by declaring a Boolean variable lock, initialized to false, on machines that provide the TestAndSet instruction.
The structure of process Pi is given as
do
{
   while (TestAndSet(&lock))
      ;   /* no-op: busy-wait */
      critical section
   lock = false;
      remainder section
} while (1);
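On real hardware this maps onto an atomic instruction. For instance, C11's atomic_flag provides exactly test-and-set semantics; a minimal sketch using it as a spin lock (compile with -pthread):

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* the Boolean "lock", initially false */
long counter = 0;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        while (atomic_flag_test_and_set(&lock))
            ;                            /* spin: busy-wait for the lock */
        counter++;                       /* critical section             */
        atomic_flag_clear(&lock);        /* lock = false                 */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* always 200000 */
    return 0;
}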
Swap instruction:
The Swap instruction is used on the contents of the two words.
The mutual exclusion is provided when the system supports swap instruction.
This swap instruction is defined as
void Swap (boolean *a, boolean *b)
{
boolean temp = *a;
*a = *b;
*b = temp;
}
The mutual exclusion is obtained by declaring a global Boolean variable ‘Lock’.
This ‘Lock’ is assigned to false.
Also every process holds a local Boolean variable key.
The structure of process Pi given as
do{
key = TRUE;
while ( key == TRUE)
Swap (&lock, &key );
critical section
lock = FALSE;
remainder section
}while (1);
_____________________________________________________________________________
Semaphores:
Semaphores overcome the difficulty of the critical section problem.
A semaphore is a synchronization tool that does not require busy waiting.
A semaphore S is an integer variable (a counting semaphore) that can be accessed only via two standard atomic operations, wait and signal,
originally termed P and V.
Wait:
operation wait(s):
begin
   if s > 0
      then s := s - 1
   else block the process on s;
end;
Signal:
operation signal(s):
begin
   if some processes are blocked on s
      then activate one blocked process
   else s := s + 1;
end;
When a process performs a wait operation on a semaphore, the operation either blocks the process on the semaphore or allows it to continue its execution after decrementing the value of the semaphore.
A signal operation activates a process blocked on the semaphore or increments the value of the semaphore by 1.
Usage:
Semaphores can be used to solve the n-process critical section problem. The n processes share a semaphore called mutex, initialized to 1.
Every process Pi is structured as
do {
   wait(mutex);
      critical section
   signal(mutex);
      remainder section
} while (1);
Semaphores can also be used to solve synchronization problems.
Example:
Let P1 be a process with a statement S1, and P2 a process with a statement S2, where S2 must be executed only after S1 has completed.
This is achieved by letting P1 and P2 share a common semaphore synch, initialized to 0, and by inserting the statements
   S1;
   signal(synch);
in process P1, and the statements
   wait(synch);
   S2;
in process P2.
Because synch is initialized to 0, P2 will execute S2 only after P1 has invoked signal(synch) after S1.
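With POSIX semaphores this ordering pattern can be demonstrated directly. A minimal sketch, where sem_init's initial value of 0 plays the role of synch above (compile with -pthread):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t synch;                       /* initialized to 0, as in the text */

void *p1(void *arg) {
    printf("S1\n");                /* statement S1 */
    sem_post(&synch);              /* signal(synch) */
    return NULL;
}

void *p2(void *arg) {
    sem_wait(&synch);              /* wait(synch): blocks until S1 is done */
    printf("S2\n");                /* statement S2 runs only after S1 */
    return NULL;
}

int main(void) {
    sem_init(&synch, 0, 0);
    pthread_t a, b;
    pthread_create(&b, NULL, p2, NULL);   /* start P2 first on purpose */
    pthread_create(&a, NULL, p1, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;                      /* always prints S1 then S2 */
}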
Implementation:
The main disadvantage of the semaphore implementation given above is that it requires busy waiting.
While a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the entry code.
This continuous looping is a problem in a multiprogramming system: busy waiting wastes CPU cycles that some other process might be able to use productively. This kind of semaphore is called a spin lock. Spin locks are also used in multiprocessor systems.
Semaphore implementation with no busy waiting:
To overcome the need for busy waiting, we can modify the definition of the wait and signal semaphore operations.
With each semaphore there is an associated waiting queue.
Each entry in a waiting queue has two data items:
   value (of type integer)
   pointer to the next record in the list
Assume two simple operations:
•	block suspends the process that invokes it.
•	wakeup(P) removes process P from the waiting queue and places it in the ready queue.
Semaphore operations are now defined as
wait(S):
{
   value--;
   if (value < 0)
   {
      add this process to the waiting queue;
      block();
   }
}
signal(S):
{
   value++;
   if (value <= 0)
   {
      remove a process P from the waiting queue;
      wakeup(P);
   }
}
Block/Resume Semaphore Implementation:
If a process is blocked, enqueue the PCB of the process and call the scheduler to run a different process.
Semaphore operations must be executed atomically:
•	No two processes execute wait and signal at the same time.
•	A mutex can be used to make sure that two processes do not change the count at the same time.
•	If an interrupt occurs while the mutex is held, it will result in a long delay.
•	Solution: turn off interrupts during the critical section.
Two Types of Semaphores:
•	Counting semaphore - the integer value can range over an unrestricted domain.
•	Binary semaphore - the integer value can range only between 0 and 1; simpler to implement. It is also known as a mutex lock.
•	A counting semaphore S can be implemented using binary semaphores.
Implementing S (counting semaphore) as a Binary Semaphore:
Let S be a counting semaphore. To implement it in terms of binary semaphores, we need the following data structures:
binary semaphore S1, S2;
int C;
Initially S1 = 1, S2 = 0, and the value of the integer C is set to the initial value of the counting semaphore S.
Implementing S:
Wait operation:
   wait(S1);
   C--;
   if (C < 0)
   {
      signal(S1);
      wait(S2);
   }
   signal(S1);

Signal operation:
   wait(S1);
   C++;
   if (C <= 0)
      signal(S2);
   else
      signal(S1);
Deadlock and Starvation:
Deadlock – two or more processes are waiting indefinitely for an event that can be caused by
only one of the waiting processes
Let S and Q be two semaphores initialized to 1
P0 P1
wait (S); wait (Q);
wait (Q); wait (S);
. . .
signal (S); signal (Q);
signal (Q); signal (S);

Starvation – indefinite blocking. A process may never be removed from the semaphore queue
in which it is suspended.
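The deadlock above disappears if both processes acquire the semaphores in one agreed global order, so the circular wait can never form. A minimal POSIX sketch of the corrected ordering; the worker bodies are illustrative:

#include <pthread.h>
#include <semaphore.h>

sem_t S, Q;   /* the two semaphores from the example, both initialized to 1 */

void *proc0(void *arg) {
    sem_wait(&S); sem_wait(&Q);   /* global order: always S before Q */
    /* ... use both resources ... */
    sem_post(&Q); sem_post(&S);
    return NULL;
}

void *proc1(void *arg) {
    sem_wait(&S); sem_wait(&Q);   /* same order as proc0, not reversed */
    /* ... use both resources ... */
    sem_post(&Q); sem_post(&S);
    return NULL;
}

int main(void) {
    sem_init(&S, 0, 1);
    sem_init(&Q, 0, 1);
    pthread_t a, b;
    pthread_create(&a, NULL, proc0, NULL);
    pthread_create(&b, NULL, proc1, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}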
_____________________________________________________________________________
CLASSIC PROBLEMS OF SYNCHRONIZATION:
•	Bounded Buffer Problem
•	Readers and Writers Problem
•	Dining-Philosophers Problem
I. Bounded Buffer Problem:
This illustrates the concept of cooperating processes; consider the producer-consumer problem. A producer process produces information that is consumed by a consumer. For example, a print program produces characters that are consumed by the printer device. We must have a buffer of items that can be filled by the producer and emptied by the consumer. A producer can produce one item while the consumer is consuming another item. The producer and the consumer must be synchronized so that the consumer does not try to consume an item that has not yet been produced.
The unbounded-buffer producer-consumer problem places no limit on the size of the buffer. The consumer may have to wait for new items, but the producer can always produce new items.
The bounded-buffer producer-consumer problem assumes a fixed buffer size. In this case the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.
In the bounded buffer problem, we assume the pool consists of n buffers, each capable of holding one item. The mutex semaphore provides mutual exclusion for access to the buffer pool and is initialized to the value 1. The empty and full semaphores count the number of empty and full buffers. The semaphore empty is initialized to the value n, and full is initialized to the value 0.

Fig. The structure of the producer process.
do {
   …
   produce an item
   …
   wait(empty);
   wait(mutex);
   …
   add the item to the buffer
   …
   signal(mutex);
   signal(full);
} while (1);
Fig. The structure of the consumer process.
do {
   wait(full);
   wait(mutex);
   …
   remove an item from the buffer
   …
   signal(mutex);
   signal(empty);
   …
   consume the item
   …
} while (1);
II. The Readers –Writers Problem
A data object such as a file is to be shared among several processes. Some of these processes may want only to read the contents of the object, while others may want to update it. Processes that are interested only in reading are referred to as readers, and the rest as writers. If two readers share the data object, no ill effects will result; but if a writer and some other process access the shared object simultaneously, errors may occur.
This synchronization problem is referred to as the readers-writers problem. The readers-writers problem has several variations. The simplest one, referred to as the first readers-writers problem, requires that no reader be kept waiting unless a writer has already obtained permission to use the shared object. The second readers-writers problem requires that once a writer is ready, that writer performs its write as soon as possible.
A solution to either problem may result in starvation: in the first case writers may starve, in the second case readers may starve.
In the solution to the first readers-writers problem, the reader processes share the following data structures:
semaphore mutex, wrt;
int readcount;
The semaphores mutex and wrt are initialized to 1; readcount is initialized to 0. The semaphore wrt is common to both the reader and writer processes. The mutex semaphore is used to ensure mutual exclusion when the variable readcount is updated. The readcount variable keeps track of how many processes are currently reading the object.

Fig. The structure of a reader process:
wait(mutex);
readcount++;
if (readcount == 1)
   wait(wrt);
signal(mutex);
…
reading is performed
…
wait(mutex);
readcount--;
if (readcount == 0)
   signal(wrt);
signal(mutex);
Fig. The structure of a writer process:
wait(wrt);

writing is performed

signal(wrt);
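A direct POSIX translation of this first readers-writers solution, runnable as a small demo; the thread counts are arbitrary (compile with -pthread):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex, wrt;               /* both initialized to 1, as in the text */
int readcount = 0;
int shared_data = 0;

void *reader(void *arg) {
    sem_wait(&mutex);
    if (++readcount == 1)       /* first reader locks out writers */
        sem_wait(&wrt);
    sem_post(&mutex);

    printf("read %d\n", shared_data);   /* reading is performed */

    sem_wait(&mutex);
    if (--readcount == 0)       /* last reader lets writers in */
        sem_post(&wrt);
    sem_post(&mutex);
    return NULL;
}

void *writer(void *arg) {
    sem_wait(&wrt);
    shared_data++;              /* writing is performed */
    sem_post(&wrt);
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);
    sem_init(&wrt, 0, 1);
    pthread_t r[3], w;
    pthread_create(&w, NULL, writer, NULL);
    for (int i = 0; i < 3; i++) pthread_create(&r[i], NULL, reader, NULL);
    pthread_join(w, NULL);
    for (int i = 0; i < 3; i++) pthread_join(r[i], NULL);
    return 0;
}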
III. The Dining-Philosophers Problem:
Consider five philosophers who spend their lives thinking and eating. The philosophers share a common circular table surrounded by five chairs, one for each philosopher. In the center of the table is a bowl of rice, and the table is laid with five single chopsticks. When a philosopher thinks, she does not interact with her colleagues. From time to time, a philosopher gets hungry and tries to pick up the two chopsticks that are closest to her. A philosopher may pick up only one chopstick at a time. She cannot pick up a chopstick that is already in the hand of a neighbor. When a hungry philosopher has both her chopsticks at the same time, she eats without releasing her chopsticks. When she is finished eating, she puts down both of her chopsticks and starts thinking again.

The dining-philosophers problem is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. One simple solution is to represent each chopstick by a semaphore. A philosopher acquires a chopstick by executing a wait operation on that semaphore, and releases her chopsticks by executing the signal operation on the appropriate semaphores. Thus the shared data are:
semaphore chopstick[5];
where all the elements of chopstick are initialized to 1.

do {
   wait(chopstick[i]);
   wait(chopstick[(i + 1) % 5]);
   …
   eat
   …
   signal(chopstick[i]);
   signal(chopstick[(i + 1) % 5]);
   …
   think
   …
} while (1);
Although this solution guarantees that no two neighbors are eating simultaneously, it must be rejected because it has the possibility of creating a deadlock. Suppose that all five philosophers become hungry simultaneously, and each picks up her left chopstick. All the elements of chopstick will now be equal to 0. When each philosopher tries to pick up her right chopstick, she will be delayed forever.
Several remedies to the deadlock problem are:
•	Allow at most four philosophers to be sitting simultaneously at the table.
•	Allow a philosopher to pick up her chopsticks only if both chopsticks are available.
•	Use an asymmetric solution: an odd philosopher picks up first her left chopstick and then her right chopstick, whereas an even philosopher picks up her right and then her left chopstick (sketched below).
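A minimal POSIX sketch of the asymmetric remedy, taking chopstick[i] as philosopher i's left chopstick to match the pseudocode above (compile with -pthread):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5
sem_t chopstick[N];             /* all initialized to 1, as above */

void *philosopher(void *arg) {
    int i = *(int *)arg;
    /* odd philosophers take the left chopstick first, even ones the right,
     * so the circular wait "everyone holds her left chopstick" cannot form */
    int first  = (i % 2 == 1) ? i : (i + 1) % N;
    int second = (i % 2 == 1) ? (i + 1) % N : i;

    sem_wait(&chopstick[first]);
    sem_wait(&chopstick[second]);
    printf("philosopher %d eats\n", i);   /* eat */
    sem_post(&chopstick[first]);
    sem_post(&chopstick[second]);
    return NULL;                          /* think */
}

int main(void) {
    pthread_t t[N];
    int id[N];
    for (int i = 0; i < N; i++) sem_init(&chopstick[i], 0, 1);
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}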
_____________________________________________________________________________

Monitors & Critical Regions:


1 Introduction
Given that we have semaphores, why do we need another programming language feature for dealing with concurrency?
•	Semaphores are low-level.
•	Omitting a wait breaches safety - we can end up with more than one process inside a critical section.
•	Omitting a signal can lead to deadlock.
•	Semaphore code is distributed throughout a program, which causes maintenance problems.
•	Better to use a high-level construct/abstraction.
2 Critical Regions
•	A critical region is a section of code that is always executed under mutual exclusion.
•	Critical regions shift the responsibility for enforcing mutual exclusion from the programmer (where it resides when semaphores are used) to the compiler.
•	They consist of two parts:
1. Variables that must be accessed under mutual exclusion.
2. A new language statement that identifies a critical region in which the variables are accessed.
Example: this is only pseudo-Pascal-FC (Pascal-FC doesn't support critical regions):
var
v : shared T;
...
region v do
begin
...
end;
All critical regions that are `tagged' with the same variable have compiler-enforced mutual
exclusion so that only one of them can be executed at a time:
Process A:
region V1 do
begin
{ Do some stuff. }
end;
region V2 do
begin

{ Do more stuff. }
end;
Process B:
region V1 do
begin
{ Do other stuff. }
end;
Here process A can be executing inside its V2 region while process B is executing inside its V1
region, but if they both want to execute inside their respective V1 regions only one will be
permitted to proceed.
Each shared variable (V1 and V2 above) has a queue associated with it. Once one process is
executing code inside a region tagged with a shared variable, any other processes that attempt to
enter a region tagged with the same variable are blocked and put in the queue.
3 Conditional Critical Regions
Critical regions aren't equivalent to semaphores. As described so far, they lack condition
synchronization. We can use semaphores to put a process to sleep until some condition is met
(e.g. see the bounded-buffer Producer-Consumer problem), but we can't do this with critical
regions.
Conditional critical regions provide condition synchronization for critical regions:
region v when B do
begin
...
end;
where B is a boolean expression (usually B will refer to v).
Conditional critical regions work as follows:
1. A process wanting to enter a region for v must obtain the mutex lock. If it cannot, then it
is queued.
2. Once the lock is obtained the Boolean expression B is tested. If B evaluates to true then
the process proceeds, otherwise it releases the lock and is queued. When it next gets the
lock it must retest B.
3.1 Implementation

Each shared variable now has two queues associated with it. The main queue is for processes
that want to enter a critical region but find it locked. The event queue is for the processes that
have blocked because they found the condition to be false. When a process leaves the conditional
critical region the processes on the event queue join those in the main queue. Because these
processes must retest their condition they are doing something akin to busy-waiting, although the
frequency with which they will retest the condition is much less. Note also that the condition is
only retested when there is reason to believe that it may have changed (another process has
finished accessing the shared variable, potentially altering the condition). Though this is more
controlled than busy-waiting, it may still be sufficiently close to it to be unattractive.
3.2 Limitations
•	Conditional critical regions are still distributed among the program code.
•	There is no control over the manipulation of the protected variables - no information hiding or encapsulation. Once a process is executing inside a critical region it can do whatever it likes to the variables it has exclusive access to.
•	Conditional critical regions are more difficult to implement efficiently than semaphores.
_____________________________________________________________________________
Monitors:
•	Monitors consist of private data and operations on that data.
•	They can contain types, constants, variables and procedures.
•	Only the procedures explicitly marked can be seen outside the monitor.
•	The monitor body allows the private data to be initialized.
•	The compiler enforces mutual exclusion on a particular monitor.
•	Each monitor has a boundary queue, and processes wanting to call a monitor routine join this queue if the monitor is already in use.
•	Monitors are an improvement over conditional critical regions because all the code that accesses the shared data is localized.
4.1 Condition synchronization
Just as plain critical regions needed to be extended to conditional critical regions, so do monitors
(as described thus far) need to be extended to make them as applicable as semaphores.
Condition variables allow processes to block until some condition is true and then be woken up:
var

c : condition;
Two operations are defined for condition variables:
1. delay
2. resume
4.1.1 Delay
•	Similar to the semaphore wait operation.
•	delay(c) blocks the calling process on c and releases the lock on the monitor.
4.1.2 Resume
•	Similar to the semaphore signal operation.
•	resume(c) unblocks a process waiting on c.
•	resume is a no-op if no processes are blocked (compare to signal, which always has an effect).
But once resume has been called we have (potentially) two processes inside the monitor:
•	the process that called delay and has been woken up, and
•	the process that called resume.
Solutions:
•	Resume-and-continue: the woken process must wait until the one that called resume releases the monitor.
•	Immediate resumption: the process that called resume must immediately leave the monitor (this is what Pascal-FC uses).
Resume-and-continue means that processes that call delay must use
while not B do
delay(c);
instead of
if not B then
delay(c);
Because a resumer might carry on and alter the condition after calling resume but before exiting
the monitor.
4.2 Immediate resumption variants
1. Resume and exit: The resumer is automatically forced to exit the monitor after calling
resume.

2. Resume and wait: The resumer is put back on the monitor boundary queue. When it gets
back in, it is allowed to continue from where it left off.
3. Resume and urgent wait: The resumer is put on a second queue that has priority over the
monitor boundary queue.
4.3 Nested monitor calls
A nested monitor call is when a procedure in monitor A calls a procedure in a different monitor,
say monitor B. They are potentially troublesome: think about what happens if the procedure in
monitor B contains a delay statement.
There are several approaches that can be taken when faced with this situation:
1. Retain the lock on A when calling the procedure in B, release the lock on B when calling
delay in B.
2. Retain the lock on A when calling the procedure in B, release both locks when calling
delay in B.
3. Release the lock on A when making a nested call.
4. Ban nested calls altogether!
Pascal-FC chooses the first of these alternatives.
_____________________________________________________________________________

Deadlock:
A process requests resources; if the resources are not available at that time, the process enters the waiting state. If the requested resources are held by other waiting processes, so that all of them remain in the waiting state, the situation is said to be a deadlock.
Example:
Example:
Fig. A simple deadlock: P1 requests R1, which is held by P2, while P2 requests R2, which is held by P1.
•	P1 and P2 are two processes.
•	R1 and R2 are two resources.
•	P1 requests resource R1.
•	R1 is held by process P2.
•	P2 requests resource R2.
•	R2 is held by process P1.
•	Both processes then enter the waiting state.
•	There is no work progress for processes P1 and P2: this is a deadlock.
Resource Allocation Graph (RAG):
Deadlock can be represented by a resource allocation graph.
A resource allocation graph is a directed graph used to represent deadlocks.
The graph consists of two types of nodes:
1. processes, represented by circles, and
2. resources, represented by squares.
The graph also consists of two types of edges:
1. request edges
2. assignment edges
An edge from a process to a resource is a request edge, and an edge from a resource to a process is an assignment edge.
If the resource allocation graph contains a cycle, such as P1 → R1 → P2 → R2 → P1, a deadlock may exist. If the system is in a deadlock state, the resource allocation graph must contain a cycle.
Fig. A resource allocation graph containing the cycle P1 → R1 → P2 → R2 → P1.

Conditions for Deadlock (Characteristics of Deadlock):

A deadlock can arise only if the following four conditions hold simultaneously.

1. Mutual exclusion:
At least one resource is in non-sharable mode; that is, only one process at a time can use the resource.
If some other process requests that resource, the requesting process must wait until the resource has been released.
For example: line printers.
A line printer is always in non-sharable mode; only one process can use it at a time.
2. Hold and Wait:
Each process in the deadlock state must hold at least one resource and be waiting for additional resources that are currently being held by other processes.
Example:
Fig. Hold-and-wait: processes P1, P2 and P3 each hold one or more of the resources R1-R4 and wait for another.
P1, P2 and P3 are processes.
R1, R2, R3 and R4 are resources.
Each process holds a resource and is waiting for another:
P1 holds resource R3 and is waiting for resource R1.
P2 holds resources R1 and R2 and is waiting for resource R3.
P3 holds resource R2 and is waiting for resource R4.
3. No preemption:
Resources are not released in the middle of a process's work; they are released only after the process has completed its task.
4. Circular wait:
In the figure above,
•	P1 is waiting for resource R1, held by process P2.
•	P2 is waiting for resource R2, held by process P3.
•	P3 is waiting for resource R4, held by process P2.
•	P2 is waiting for resource R3, held by process P1.
This gives the cycle P1 → R1 → P2 → R2 → P3 → R4 → P2 → R3 → P1.

Methods for Handling Deadlocks:


Principally, there are three different methods for dealing with the deadlock problem:
1. We can use a protocol to ensure that the system will never enter a deadlock state.
2. We can allow the system to enter a deadlock state, detect it, and then recover.
3. We can ignore the problem altogether.
To ensure that deadlocks never occur, the system can use either a deadlock-prevention or a deadlock-avoidance scheme.
Deadlock prevention is a set of methods for ensuring that at least one of the necessary conditions cannot hold. These methods prevent deadlocks by constraining how requests for resources can be made.
Deadlock avoidance, on the other hand, requires that the operating system be given in advance additional information concerning which resources a process will request and use during its lifetime.
If a system does not employ either a deadlock-prevention or a deadlock-avoidance algorithm, then a deadlock situation may occur. In this environment, the system can provide an algorithm that examines the state of the system to determine whether a deadlock has occurred, and an algorithm to recover from the deadlock (if a deadlock has indeed occurred).
Deadlock Prevention:
In general, prevention means acting before the problem strikes, like taking medicine before the attack of a disease. Deadlock prevention, likewise, applies preventive methods before a deadlock can occur.
For a deadlock to occur, each of the four necessary conditions must hold. By ensuring that at least one of these conditions cannot hold, we can prevent the occurrence of deadlock. So we apply this technique to each of the four necessary conditions.
1. Mutual exclusion:
Only one process can use a non-sharable resource at a time; such a resource cannot be shared by several processes simultaneously.
We can deny this condition with a simple protocol: convert all non-sharable resources into sharable resources. Then the mutual-exclusion condition no longer holds, and the deadlock is prevented.
One important caveat: a printer cannot be shared by several processes at a time, because we cannot convert a printer from non-sharable to sharable mode. So this prevention method cannot be applied if the system includes intrinsically non-sharable resources such as printers.
2. Hold and wait:
Each process in a deadlock must be holding at least one resource while waiting for at least one more. We can deny this condition with one of two simple protocols (a sketch follows the list below):
• A process may request resources only when it holds none.
This protocol is expensive and time consuming. For example, suppose a process wants to copy data from a tape drive to a disk file and then take a printout. Initially the process holds the tape drive and the disk file; when it later needs the printer, the protocol forces it to release the tape drive and the disk file before requesting the printer. So it is time consuming.
• Each process must request and be allocated all its resources before it begins execution.
This is very difficult to implement, because a large number of resources must be available before execution can begin. For example, if processes P1, …, P10 each require a printer to complete their jobs, then under this protocol the system must provide a printer to every one of them at the same time, which is very difficult to arrange.
There are two main disadvantages:
• Resource utilization is very poor, since many resources may be allocated but remain unused for a long period.
• Starvation: a process that needs several popular resources may have to wait indefinitely, because at least one of the resources it needs is always allocated to some other process.
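A minimal sketch of denying hold-and-wait, assuming resources are modeled as Python locks; acquire_all and the lock names are illustrative. A process either obtains everything it asked for or backs off holding nothing, so it never waits while holding a resource:

    import threading

    tape, disk, printer = threading.Lock(), threading.Lock(), threading.Lock()

    def acquire_all(*locks):
        """All-or-nothing: if any lock is busy, release what we hold and back off."""
        held = []
        for lock in locks:
            if lock.acquire(blocking=False):
                held.append(lock)
            else:
                for h in held:      # release everything already held
                    h.release()
                return False
        return True

    # The process declares everything it needs before starting its task.
    if acquire_all(tape, disk, printer):
        try:
            pass  # copy tape -> disk, then print
        finally:
            for lock in (tape, disk, printer):
                lock.release()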
3. No preemption:
This condition states that resources are not released in the middle of a process's work. We ensure that the condition does not hold by using the following protocol:
• When a process requests some resources, we first check whether they are available. If they are available, we allocate them.
• If they are not available, we check whether they are allocated to some other process that is itself waiting for additional resources. If so, we preempt the desired resources from the waiting process and allocate them to the requesting process.
• If the resources are neither available nor held by a waiting process, the requesting process must wait. While it is waiting, some of its own resources may be preempted by other processes.
4. Circular wait:
We ensure that a circular wait cannot occur by applying a simple solution: number all the resource types, and require each process to request resources in increasing order of enumeration (a sketch follows below).
Alternatively, whenever a process requests an instance of resource type Rj, it must first have released any resource Ri such that F(Ri) ≥ F(Rj). Equivalently, having been allocated instances of type Ri, a process may request instances of type Rj only if F(Rj) > F(Ri).
If either of these protocols is used, then the circular-wait condition cannot hold.
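A minimal sketch of the resource-ordering protocol, assuming F is simply a numbering of lock names; the helper names are illustrative. Because every process acquires locks in the same global order, no cycle of waiting processes can form:

    import threading

    # F: enumerate every resource type.
    F = {"disk": 1, "tape": 2, "printer": 3}
    locks = {name: threading.Lock() for name in F}

    def acquire_in_order(*names):
        """Acquire resources strictly in increasing order of F."""
        for name in sorted(names, key=F.get):
            locks[name].acquire()

    def release_all(*names):
        for name in names:
            locks[name].release()

    # Even if a process asks for ("printer", "disk"), the disk is locked first,
    # so all processes agree on the order and circular wait is impossible.
    acquire_in_order("printer", "disk")
    release_all("printer", "disk")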
Deadlock Avoidance:
Deadlock avoidance is a method of dynamically escaping from deadlocks; here "dynamically" means at run time (online). In this scheme, when a process requests resources, the avoidance algorithm checks the state of the system before the resources are allocated. There are two states:
• Safe state: the system allocates the resources to the requesting process.
• Unsafe state: the system does not allocate the resources.
Taking this care before allocation is what we call deadlock avoidance.
Students may wonder about the difference between safe and unsafe states. A safe state means no deadlock will happen even if we allocate the resources to the requesting process. An unsafe state means a deadlock may happen if we grant the resources.
Figure: safe, unsafe, and deadlock state spaces; the deadlock region lies inside the unsafe region.
Figure: an unsafe state in a resource-allocation graph.
A safe state is not a deadlock state. On the other hand, a deadlock state is always an unsafe state. Not all unsafe states are deadlocks; however, an unsafe state may lead to a deadlock. As long as the state is safe, the operating system can avoid unsafe (and hence deadlock) states.
Banker's Algorithm:
The resource-allocation graph algorithm is not applicable to a resource-allocation system with multiple instances of each resource type. The deadlock-avoidance algorithm that we describe next is applicable to such a system, but it is less efficient than the resource-allocation graph scheme. This algorithm is commonly known as the banker's algorithm.
The name was chosen because the algorithm could be used in a banking system to ensure that the bank never allocates its available cash in such a way that it can no longer satisfy the needs of all its customers. When a new process enters the system, it must declare the maximum number of instances of each resource type that it may need. This number may not exceed the total number of resources in the system. When a user requests a set of resources, the system must determine whether the allocation of these resources will leave the system in a safe state. If it will, the resources are allocated; otherwise, the process must wait until some other process releases enough resources. Several data structures must be maintained to implement the banker's algorithm. These data structures encode the state of the resource-allocation system.
Example:
Resource-allocation state: Available (A, B, C) = Total − Allocated = [10, 5, 7] − [7, 2, 5] = [3, 3, 2]

Process   Allocation    MAX         Available
          A  B  C       A  B  C     A  B  C
P1        0  1  0       7  5  3     3  3  2
P2        2  0  0       3  2  2
P3        3  0  2       9  0  2
P4        2  1  1       2  2  2
P5        0  0  2       4  3  3

Need = MAX − Allocation

Process   Need
          A  B  C
P1        7  4  3
P2        1  2  2
P3        6  0  0
P4        0  1  1
P5        4  3  1
We now check whether the system is currently in a safe state by finding a safe sequence.
Safe sequence: A safe sequence is calculated as follows:
1. The Need of each process is compared with Available. If Needi ≤ Available, the resources can be allocated to that process; the process then runs to completion and releases all its resources.
2. If Needi > Available (in some component), the next process's need is taken for comparison.
3. In the above example, the Need of process P1 is (7, 4, 3) and Available is (3, 3, 2).
Is Need ≤ Available? (7, 4, 3) ≤ (3, 3, 2) → False.
So the system moves on to the next process.
4. The Need of process P2 is (1, 2, 2) and Available is (3, 3, 2), so
Need ≤ Available: (1, 2, 2) ≤ (3, 3, 2) → True.
Then Finish[P2] = True: the request of P2 is granted, and P2 completes and releases its resources to the system.
Work := Work + Allocation, i.e.
Available = Available + Allocation = (3, 3, 2) + (2, 0, 0) = (5, 3, 2) (new Available).
This procedure continues for all processes.
5. Next, the Need of process P3 (6, 0, 0) is compared with the new Available (5, 3, 2).
(6, 0, 0) ≤ (5, 3, 2) → False (6 > 5 in component A).
So the system moves on to the next process.
6. The Need of process P4 (0, 1, 1) is compared with the new Available (5, 3, 2).
(0, 1, 1) ≤ (5, 3, 2) → True.
Available = Available + Allocation = (5, 3, 2) + (2, 1, 1) = (7, 4, 3) (new Available).
7. Then the Need of process P5 (4, 3, 1) is compared with the new Available (7, 4, 3).
(4, 3, 1) ≤ (7, 4, 3) → True.
Available = Available + Allocation = (7, 4, 3) + (0, 0, 2) = (7, 4, 5) (new Available).
8. The Need of process P1 (7, 4, 3) could now also be satisfied from the Available (7, 4, 5); granting it would leave only (0, 0, 2) momentarily available, but P1 would then finish and release everything it holds. These notes defer P1 and consider P3 first; either order leads to a safe sequence.
9. The Need of process P3 (6, 0, 0) is compared with the new Available (7, 4, 5).
(6, 0, 0) ≤ (7, 4, 5) → True.
Available = Available + Allocation = (7, 4, 5) + (3, 0, 2) = (10, 4, 7) (new Available).
10. Last, for the remaining process P1, the Need (7, 4, 3) is compared with the Available (10, 4, 7).
(7, 4, 3) ≤ (10, 4, 7) → True.
Available = Available + Allocation = (10, 4, 7) + (0, 1, 0) = (10, 5, 7).
The state is safe. The safe sequence is <P2, P4, P5, P3, P1>. A sketch of this safety check appears below.
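The safety check above can be sketched in a few lines of Python; the matrices reproduce the worked example, and the function name is_safe is an illustrative choice. Note that a plain left-to-right scan reaches P1 before P3, yielding <P2, P4, P5, P1, P3>, which is equally safe:

    # Worked example: 5 processes, 3 resource types (A, B, C).
    allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
    need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
    available  = [3, 3, 2]

    def is_safe(available, allocation, need):
        """Banker's safety algorithm: return a safe sequence, or None if unsafe."""
        work = available[:]
        finish = [False] * len(allocation)
        sequence = []
        while len(sequence) < len(allocation):
            progressed = False
            for i in range(len(allocation)):
                if not finish[i] and all(n <= w for n, w in zip(need[i], work)):
                    # Pi can finish and release everything it holds.
                    work = [w + a for w, a in zip(work, allocation[i])]
                    finish[i] = True
                    sequence.append("P%d" % (i + 1))
                    progressed = True
            if not progressed:
                return None  # no process can proceed: unsafe state
        return sequence

    print(is_safe(available, allocation, need))  # ['P2', 'P4', 'P5', 'P1', 'P3']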
Deadlock Detection:
Deadlock detection employs an algorithm that tracks circular waiting and then kills one or more processes, so that the deadlock is removed.
In deadlock-detection techniques:
• Resources are granted to requesting processes without any check.
• The system state is examined periodically to determine whether a set of processes is deadlocked.
• A deadlock is resolved by aborting and restarting a process, which gives up all the resources the process held.
There are two algorithms for deadlock detection:
• Single instance of each resource type
• Multiple (several) instances of each resource type
Single Instance of Each Resource Type
1. If every resource in the resource-allocation graph (RAG) has only a single instance, we can define a deadlock-detection algorithm that uses a variant of the RAG called the wait-for graph (WFG).
How do we get this graph from the RAG? We obtain the WFG by removing the nodes of type resource and collapsing the appropriate edges. In this setting, if the wait-for graph has any cycle, then there is a deadlock in the system.
2. To detect deadlocks, the system needs to maintain the WFG and periodically invoke an algorithm that searches for a cycle in the graph. The complexity of this algorithm is O(n²), where n is the number of vertices in the graph.
Drawing the corresponding WFG for the example above (removing all resource nodes and collapsing their edges), observe carefully that the system is in a deadlock state: a cycle exists among P1, …, P4, so all processes P1 to P4 are deadlocked. A sketch of this collapsing step follows.
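Here is a brief sketch of collapsing a single-instance RAG into a wait-for graph, reusing the rag map and has_cycle routine from the earlier RAG example; the convention that resource names start with "R" is an assumption for illustration:

    def wait_for_graph(rag):
        """Collapse a RAG into a WFG: add edge Pi -> Pj whenever Pi requests
        a resource that is currently assigned to Pj."""
        wfg = {}
        for node, succs in rag.items():
            if node.startswith("R"):              # skip resource nodes
                continue
            for res in succs:                     # request edge: node -> res
                for holder in rag.get(res, []):   # assignment edge: res -> holder
                    wfg.setdefault(node, []).append(holder)
        return wfg

    wfg = wait_for_graph(rag)   # rag and has_cycle as defined earlier
    print(has_cycle(wfg))       # True: the processes on the cycle are deadlocked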
Several Instances of a Resource Type
The wait-for graph scheme is not applicable to a resource-allocation system with multiple instances of each resource type. The deadlock-detection algorithm that applies to such a system employs several time-varying data structures similar to those used in the banker's algorithm. This algorithm requires on the order of m × n² operations to detect whether the system is in a deadlocked state.
Recovery from Deadlock:
When a detection algorithm determines that a deadlock exists, several alternatives are available:
• One possibility is to inform the operator that a deadlock has occurred and let the operator deal with the deadlock manually.
• The other possibility is to let the system recover from the deadlock automatically.
There are two options for breaking a deadlock:
• The first solution is simply to abort one or more processes to break the circular wait.
• The second option is to preempt some resources from one or more of the deadlocked processes.
Recovery from Deadlock: Process Termination
• Abort all deadlocked processes: release all processes in the deadlock state and restart the allocation from the beginning.
• Abort one process at a time until the deadlock cycle is eliminated: first abort one of the processes in the deadlock state, allocate its resources to some other process in the deadlock, and then check whether the deadlock is broken. If not, abort another process, and continue until the system recovers from the deadlock. Compared with the first option, this one is better, since less work is discarded.
• In which order should we choose processes to abort?
- Priority of the process.
- How long the process has computed, and how much longer it needs to complete.
- Resources the process has used.
- Resources the process needs to complete.
- How many processes will need to be terminated.
- Whether the process is interactive or batch.
Recovery from Deadlock: Resource Preemption
• Selecting a victim: choose the process whose preemption minimizes cost.
• Rollback: return the process to some safe state and restart it from that state.
• Starvation: the same process may always be picked as the victim, so include the number of rollbacks in the cost factor.