
UNIT – 2

Process Management
2.1 Process Concept:
A process is the unit of work in a modern time-sharing system.
A process is a program in execution. It includes the current activity, represented by the value of the
program counter and the contents of the processor's registers.
A program is a passive entity, such as the contents of a file stored on disk.
A process is an active entity, with a program counter specifying the next instruction to be fetched
and executed.
Process State:
As a process executes, it changes state.
A process may be in one of the following states:
1. New -> The process is being created.
2. Running -> Instructions are being executed.
3. Waiting -> The process is waiting for some event to occur (such as I/O completion).
4. Ready -> The process is waiting to be assigned to a processor.
5. Terminated -> The process has finished execution.

FIG: Process state diagram
new --admitted--> ready
ready --scheduler dispatch--> running
running --interrupt--> ready
running --exit--> terminated
running --I/O or event wait--> waiting
waiting --I/O or event completion--> ready

Process Control Block:
Each process is represented in the operating system by a process control block (PCB), also
called a task control block.

FIG: Process control block (PCB), with fields:
pointer | process state
process number
program counter
registers
memory limits
list of open files
...

The PCB contains the following information:

1. Process state:
The state may be new, ready, running, waiting, halted, and so on.
2. Program counter:
The counter indicates the address of the next instruction to be executed for this
process.
3. CPU registers:
These hold the process's temporary working values. They include accumulators, index
registers, stack pointers, general-purpose registers, and condition-code information.
Together with the program counter, they must be saved when the process is interrupted
so that it can continue correctly afterward.
4. CPU-scheduling information:
This includes the process priority and pointers to scheduling queues.
5. Memory-management information:
This includes the values of the base and limit registers and the page tables or segment
tables, depending on the memory system used by the operating system.
6. Accounting information:
This includes the amount of CPU and real time used, time limits, account numbers, job
or process numbers, and so on.
7. I/O status information:
This includes the I/O devices allocated to the process, a list of open files, and so on.

FIG: CPU switch from process to process
process P0 executing -> interrupt or system call -> save state into PCB0 -> (idle) ->
reload state from PCB1 -> process P1 executing -> interrupt or system call ->
save state into PCB1 -> (idle) -> reload state from PCB0 -> process P0 executing

Threads:
A thread is a single flow of control within a process; it is a logical extension of the process
concept. A traditional program in execution is a process that performs a single thread of execution.

2.2 Process Scheduling

The main objective of multiprogramming is to have some process running at all times to
maximize CPU utilization.

The main objective of time-sharing is to switch the CPU among different processes so
frequently that users can interact with each program while it is running.

A system with one CPU can have only one running process at any time. As user jobs enter the
system, they are put on a queue called the "job pool". This consists of all jobs in the system.

Processes that reside in main memory that are ready to execute are put on the “Ready Queue”.
A queue has a “header” node which contains pointers to the first and the last PCBs in the list.

There are also other queues in the system like device queue which is the list of the processes
waiting for a particular device. Each device has its own queue.

A general queuing diagram is given below:

FIG: The ready queue as a linked list. The queue header holds head and tail pointers;
each PCB (with its registers and other fields) contains a pointer to the next PCB in the queue.

The list of processes waiting for a particular I/O device is called a device queue. Each device
has its own device queue.
A common representation of process scheduling is a queuing diagram.
Two types of queues are present.
1. ready queue
2. set of device queues.

FIG: Queueing diagram of process scheduling. Rectangles represent queues, circles represent the
resources that serve them, and arrows represent the flow of processes:
- a process in the ready queue is dispatched to the CPU;
- from the CPU, the process may issue an I/O request and enter an I/O queue;
- its time slice may expire, returning it to the ready queue;
- it may fork a child and wait for the child's termination;
- it may wait for an interrupt to occur.
In each case the process eventually returns to the ready queue.

A new process is initially put in the ready queue, where it waits until it is selected for
execution. Once the process is dispatched to the CPU, one of the following events could occur:

1. The process could issue an I/O request and be placed in an I/O queue.
2. The process could create a new subprocess and wait for its termination.
3. The process could be removed from the CPU as a result of an interrupt and be put back in the ready queue.

Schedulers:

The selection of processes from the queues is managed by schedulers. There are two kinds:
• Long-term scheduler selects processes from a mass-storage (i.e., hard disk) device where they
are spooled and loads them into memory for execution. Also referred to as job scheduler, it
selects a job to run and creates a new process.
• Short term Scheduler (CPU Scheduler) is responsible for scheduling ready processes that are
already loaded in memory and are ready to execute.

The long-term scheduler brings the processes into memory and hands them over to the CPU
scheduler. It controls the number of processes that the CPU scheduler is handling thus maintains the
degree of multiprogramming which corresponds to the number of processes in memory.

The long-term scheduler has to make a careful selection among I/O-bound and CPU-bound
processes. I/O-bound processes spend more of their time doing I/O than computation, while CPU-
bound processes spend more time on computation than I/O.

A long-term scheduler should pick a relatively good mix of I/O- and CPU-bound processes so
that the system resources are better utilized. If all processes are I/O-bound, the ready queue will almost
always be empty. If all processes are CPU-bound, I/O queue will be almost always empty. A balanced
combination should be selected for system efficiency.
Context Switching

Switching the CPU from one process to another requires saving the state of the running
process into its PCB and loading the saved state of the new process from its PCB. This is known as a
context switch.

Switching involves copying registers, local and global data, file buffers and other information to the
PCB.

2.3 Operations on Processes


Major operations on processes are creation and termination.

Process Creation

The long-term scheduler creates a job’s first process as the job is selected for execution from
the job pool.

A process may create new processes via system calls. The created processes are called
child processes, while the creating process is referred to as the parent process. On Unix,
the system call is fork(). Creating a process involves creating a PCB for it and scheduling it for
execution.

Process Execution
Depending on OS policy, a newly created process may inherit its resources from its parent or it
may acquire its own resources from the OS. When a child process is restricted to the parent’s
resources, new processes do not overload the system. At the same time some initialization data may be
passed from the parent to the child process. In Unix, each process has a unique process identifier
(the process number referred to above), and each child process initially executes in a copy of the
parent's address space. This eases communication between parent and child processes.

When a new process (child) is created, either the parent runs concurrently with its child or
parent waits until the child terminates.

Process Termination
Having completed its execution and sent its output to its parent, a process terminates by
signaling the OS that it is finished. On Unix, this is accomplished via the exit() system call. The OS then
deallocates memory and reclaims resources, such as I/O buffers and open files, that were allocated to the process.
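The creation/termination life cycle above can be seen in a few lines of C on a Unix system. This is a minimal illustrative sketch (the printed messages are ours, not from the text): fork() creates the child, exit() terminates it, and wait() lets the parent block until the child has finished.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* create a child process */
    if (pid < 0) {                      /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {              /* child: starts as a copy of the parent */
        printf("child: pid %d\n", (int)getpid());
        exit(0);                        /* child signals the OS that it is finished */
    } else {                            /* parent: waits until the child terminates */
        wait(NULL);
        printf("parent: child %d has terminated\n", (int)pid);
    }
    return 0;
}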

A parent may terminate its child process if:
• the child has exceeded its usage of the resources it has been allocated; or
• the task assigned to the child is no longer required; or
• the OS does not allow a child to continue after its parent terminates.
On some systems, when a parent process terminates, the OS terminates all of its children as well.
Terminating all children of a terminated parent is known as cascading termination.

2.4 Cooperating Processes


Processes running simultaneously may be either independent or cooperating. An independent
process neither affects nor is affected by the other processes executing in the system. A cooperating
process can affect, or be affected by, the state of other processes, for example by sharing memory or
sending signals.

Advantages of process cooperation:

1. Information sharing: Accessing some information sources like a shared database by multiple
processes simultaneously may be essential.
2. Computation speed-up: Computer systems with multiple CPUs may allow decomposition of
the task into subtasks and then the parallel execution of them provides speedup.
3. Modularity: The system can be constructed in a modular fashion.
4. Convenience: A user may be willing to use different system tools like editing, printing,
compiling, etc. in parallel.

Example:

Consider the producer-consumer problem, a typical example of cooperating processes that
demonstrates the classical inter-process communication problem.

The idea is that an operating system may have many processes that need to communicate.
Imagine a program that prints output somewhere internally which is later consumed by a printer driver.

In the unbounded-buffer producer-consumer problem, there is no restriction on the size of the
buffer. The bounded-buffer producer-consumer problem, on the other hand, assumes a fixed buffer
size. A producer process produces information that is consumed by a consumer process. The producer
places its production into a buffer, and the consumer takes its consumption from the buffer. When the
buffer is full, the producer must wait until the consumer consumes at least one item; likewise, when the
buffer is empty, the consumer must wait until the producer places at least one item into the buffer.
Consider a shared-memory solution to the bounded-buffer problem.

Shared variables:
var n; // buffer size
type item = …; // kind of items kept in the buffer
var buffer : array [0 .. n-1] of item; // buffer
in, out : 0 .. n-1; // in & out indexes to buffer.

Producer code:

var nextp : item; // local variable of item produced

repeat
….
produce an item in nextp;
….
while (in + 1) mod n = out do no-op;
buffer[in] := nextp;
in := (in + 1) mod n;
until false;

Consumer code:

var nextc: item; // local variable of item consumed

repeat
while in = out do no-op;
nextc := buffer [out];
out := (out + 1) mod n;
….
Consume the item in nextc;
….
until false;

Shared buffer is an array that is used circularly as follows:

FIG: The shared buffer implemented as a circular array with in and out indexes. in points to the
next free slot, out to the first full slot.
Buffer empty when: in = out
Buffer full when: (in + 1) mod n = out
(Note that with this scheme at most n-1 slots can be full at once.)
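The pseudocode above can be rendered in C, with two POSIX threads standing in for the producer and consumer processes. This is a hedged sketch rather than the text's own code: the buffer size N, the int item type, and the item count are illustrative choices, and in/out are declared _Atomic so that the busy-wait loops are well defined under the C11 memory model. Compile with -pthread.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define N 8                       /* buffer size; at most N-1 slots are ever full */

int buffer[N];                    /* shared circular buffer */
atomic_int in = 0;                /* index of the next free slot */
atomic_int out = 0;               /* index of the first full slot */

void *producer(void *arg) {
    for (int item = 0; item < 20; item++) {
        while ((in + 1) % N == out)   /* buffer full: busy wait */
            ;
        buffer[in] = item;            /* produce an item in nextp */
        in = (in + 1) % N;
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 20; i++) {
        while (in == out)             /* buffer empty: busy wait */
            ;
        int nextc = buffer[out];      /* consume the item in nextc */
        out = (out + 1) % N;
        printf("consumed %d\n", nextc);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}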
2.5 Inter-Process Communication

Inter-process communication (IPC) provides a mechanism to allow processes to communicate
and synchronize their actions. It is best provided by a message-passing system.

An IPC facility basically provides two operations: send (message) and receive (message). The
messages may be of fixed or variable length.

If two processes want to communicate, a communication link must exist between them so that
they can send and receive messages to and from each other.

The communication established can be either direct communication or indirect communication.

Direct Communication

In this type of communication, each process that wants to communicate must explicitly name the
recipient or sender of the communication:
• send (P, message) : Process P is the recipient.
• receive (Q, message): Process Q is the sender.

Direct communication links have the following properties:

• A link is established automatically between every pair of processes that want to communicate;
the processes need to know only each other's identity.
• A link is associated with exactly two processes.
• Between each pair of processes, there exists exactly one link.
• A link may be unidirectional, but is usually bidirectional.

Consider the producer – consumer problem: direct communication algorithm for producer and
consumer are as follows:

Producer:

repeat
…
produce an item in nextp;
…
send (Consumer, nextp);
until false;

Consumer:

repeat
receive (Producer, nextc);
…
consume the item in nextc;
…
until false;

The main disadvantage of direct communication is limited modularity: the names of the producer
and the consumer are hard-coded, so renaming a process requires updating every send and receive
that refers to it.

Indirect Communication

Messages are sent to and received from mailboxes, also known as ports. Each mailbox has a
unique identification, and processes place messages into and remove them from it. Two processes
can communicate only if they share a mailbox. The operations take the following form:
• send (A, message)
• receive (A, message), where A is the name of a mailbox.
Indirect communication has the following properties:
• A link is established between a pair of processes only if a shared mailbox is available;
• A link may be associated with more than two processes;
• Between two processes, there may be a number of different links, each link corresponding to a
mailbox;
• A link may be unidirectional or bidirectional.

What if a single mailbox is shared by more than two processes? Suppose processes P1, P2 and P3
share mailbox A, and P1 sends a message to A. Which of the other processes will receive it, P2 or
P3? In order to resolve this issue, either
• a link is associated with at most two processes, or
• only one process at a time can execute a receive operation, or
• the system identifies the recipient.

Buffering (Link Capacity)

A link's capacity determines the number of messages that can reside in it temporarily. A link
may be viewed as a queue of messages, which can be implemented in one of three ways:

1. Zero Capacity (No buffering): The queue has maximum length 0.


The link cannot hold any messages waiting in it. Sender must wait until the previously sent
message is received.

2. Bounded Capacity: The queue has finite length n. Hence at most n messages can reside in it. If
the link is full, the sender must be delayed.

3. Unbounded Capacity: Any number of messages can wait in it.
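On POSIX systems, indirect communication with bounded capacity is available directly as message queues: the queue named in mq_open plays the role of the mailbox, and its mq_maxmsg attribute is the link capacity. A minimal sketch, with the queue name /unit2_mbox and the sizes chosen arbitrarily (link with -lrt on some systems):

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    struct mq_attr attr = {0};
    attr.mq_maxmsg = 10;                       /* bounded capacity: at most 10 messages */
    attr.mq_msgsize = 64;

    /* create or open the mailbox */
    mqd_t mq = mq_open("/unit2_mbox", O_CREAT | O_RDWR, 0600, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    const char *msg = "hello";
    mq_send(mq, msg, strlen(msg) + 1, 0);      /* send(A, message): blocks if the queue is full */

    char buf[64];                              /* must be at least mq_msgsize bytes */
    if (mq_receive(mq, buf, sizeof buf, NULL) >= 0)  /* receive(A, message) */
        printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/unit2_mbox");
    return 0;
}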

Message Acknowledgement

After receiving a message, the receiver sends an acknowledgement message back to inform the
sender of the receipt. Process P sends a message to process Q as follows:

1st: P executes send(Q, message)
2nd: Q executes receive(P, message)
3rd: Q executes send(P, "Acknowledgement")
4th: P executes receive(Q, message)

Exception Conditions
Message systems are useful in distributed environments, since the probability of error during
communication is larger there than in a single-machine environment. In a single-machine environment,
a shared-memory system is generally used instead.

If an error occurs in a computer system, error recovery or exception condition handling must
take place. Some typical errors that must be handled can be as follows:

a) A Process Terminates:
A sender or receiver process may terminate before a message is processed. This leaves behind
messages that will never be received, or processes waiting for messages that will never be sent.

• P waits for a message from Q, but Q has terminated: P will be blocked forever.
• P sends a message to Q, but Q has terminated. If P requires an acknowledgement from Q for
further processing, P will be blocked forever.

In both cases, the OS should either terminate P or notify P that Q has terminated.

b) Lost Messages
A message may be lost in transit due to hardware or communication-line failure. Either
1) the OS is responsible for detecting the loss and responding (for example, by notifying the
sending process that the message has been lost), or
2) the sending process is responsible for detecting the loss and should retransmit the message.

The most common method for detecting lost messages is to use timeouts. For instance, if the
acknowledgement does not arrive at the sending process within a specified time interval, the message
is assumed to be lost and is resent.

c) Scrambled Messages
Because of noise in communication channels, a delivered message may be scrambled (modified)
in transit. Error-detecting codes, such as checksums, are commonly used to detect this so that the
message can be retransmitted.

2.6 Threads

A thread, in computer science, is short for a thread of execution. Threads are a way for a
program to fork (or split) itself into two or more simultaneously (or pseudo-simultaneously) running
tasks. Threads and processes differ from one operating system to another, but in general a thread is
contained inside a process, and different threads of the same process share some resources while
different processes do not.
A thread has:
1. A thread ID, which uniquely identifies the thread.
2. A program counter (PC), which indicates the next instruction the thread will execute.
3. A register set, which holds the thread's current working values.
4. A stack, which holds the thread's local variables and function-call history.

Uses
Large applications, such as web browsers, benefit from multiple threads: a separate thread can
service each module (for example, one thread displays a page while another fetches data from the
network). A traditional (heavyweight) process has a single thread of control.
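As a concrete illustration, the POSIX threads library lets a single process split into several threads that share its address space. A minimal sketch (the worker function and its messages are invented for the example; compile with -pthread):

#include <pthread.h>
#include <stdio.h>

/* each thread gets its own ID, program counter, register set and stack,
   but shares the process's code, data and open files */
void *worker(void *arg) {
    const char *name = arg;
    for (int i = 0; i < 3; i++)
        printf("%s: step %d\n", name, i);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;                          /* thread IDs */
    pthread_create(&t1, NULL, worker, "thread-1");
    pthread_create(&t2, NULL, worker, "thread-2");
    pthread_join(t1, NULL);                    /* wait for both to finish */
    pthread_join(t2, NULL);
    return 0;
}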

2.7 Multithreading

Multiple threads can be executed in parallel on many computer systems.

• This multithreading generally occurs by time slicing (similar to time-division
multiplexing), wherein a single processor switches between different threads, in which case
the processing is not literally simultaneous, for the single processor is really doing only one
thing at a time.
• This switching can happen so fast as to give the illusion of simultaneity to an end user. For
instance, many PCs today only contain one processor core, but one can run multiple
programs at once, such as typing in a document editor while listening to music in an audio
playback program; though the user experiences these things as simultaneous, in truth, the
processor quickly switches back and forth between these separate processes.
• On a multiprocessor or multi-core system, now coming into general use, threading can be
achieved via multiprocessing, wherein different threads and processes can run literally
simultaneously on different processors or cores.
Benefits:
1. Responsiveness
2. Resource sharing
3. Economy
4. Utilization of multiprocessor architectures

1. Responsiveness
A multithreaded application remains responsive to the user even when part of it is blocked
or performing a lengthy operation.
2. Resource sharing
Threads share the memory and resources (CPU, files, and so on) of the process to which
they belong.
3. Economy
Creating and context-switching threads is much cheaper than creating and switching
whole processes.
4. Utilization of multiprocessor architectures
Each thread may run in parallel on a different processor, increasing concurrency.

2.8 PROCESS SYNCHRONIZATION

Concurrent Processes

• several processes are in progress (between starting and finishing) at the same time

Concurrency

• managing concurrent processes that share resources

Mutual Exclusion

• only one process is allowed access to a shared resource at the same time (e.g. printer, processor,
variable, data structure, semaphore)

• Example: A train requires exclusive access to the track running through the tunnel. The shared
variable is the single track going through the tunnel.

Resource Allocation

• we will focus on shared variables
• printers are actually controlled by one process called the printer server or printer daemon;
modern operating systems do not use mutual exclusion techniques for controlling the printer

Low-Level Techniques for Mutual Exclusion

1. Turn Off Interrupts


o in UNIX, if the operating system is about to make changes to its kernel data structures
(e.g., create a new process by changing the process table)
o turn off interrupts; therefore, a process may get more time on the processor
 process cannot lose the processor, because the short-term scheduler is run in
response to a timer interrupt
o change data structure
 keep the code very short without bugs
 usually only one or two changes are needed here
o allow interrupts again
o used on hercules

o great for single-processor machines

Two Problems:

1. if you need to do a lot of operations in the mutually exclusive segment
2. multiple processor machines (multiprocessors)
 Zeus has 12 processors that share the same memory (shared-memory
architecture)
 turning off interrupts gives one process exclusive access to its processor, but
other processes on other processors can still make changes to memory

2. Busy Wait (SpinLock)

• one process ``busy waits'' for another
• used for multiple processor machines

2.9 The Critical Section Problem

Critical Section

• set of instructions that must be controlled so as to allow exclusive access to one process
• execution of the critical section by processes is mutually exclusive in time

Critical Section (S&G, p. 166) (for example, ``for the process table'')

repeat
entry section
critical section
exit section
remainder section
until FALSE

The entry and exit sections surround the critical section and enforce exclusive access.

Solution to the Critical Section Problem must meet three conditions...

1. mutual exclusion: if a process is executing in its critical section, no other process may be executing
in its critical section
2. progress: if no process is executing in its critical section and there exists some processes that
wish to enter their critical sections, then only those processes that are not executing in their
remainder section can participate in the decision of which will enter its critical section next, and
this decision cannot be postponed indefinitely
o if no process is in critical section, can decide quickly who enters
o only one process can enter the critical section so in practice, others are put on the queue
3. bounded waiting: there must exist a bound on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its critical
section and before that request is granted
o The wait is the time from when a process makes a request to enter its critical section
until that request is granted
o in practice, once a process enters its critical section, it does not get another turn until a
waiting process gets a turn (managed as a queue)

Solutions to the Critical Section Problem

Assumption

• assume that a variable (memory location) can only have one value; never ``between values''
• if processes A and B write a value to the same memory location at the ``same time,'' either the
value from A or the value from B will be written rather than some scrambling of bits

Peterson's Algorithm

• a simple algorithm that can be run by two processes to ensure mutual exclusion for one
resource (say one variable or data structure)
• does not require any special hardware
• it uses busy waiting (a spinlock)

Peterson's Algorithm
Shared variables are created and initialized before either process starts. The shared variables flag[0]
and flag[1] are initialized to FALSE because neither process is yet interested in the critical section. The
shared variable turn is set to either 0 or 1 randomly (or it can always be set to say 0).

var flag: array [0..1] of boolean;


turn: 0..1;
%flag[k] means that process[k] is interested in the critical section
flag[0] := FALSE;
flag[1] := FALSE;
turn := random(0..1)

After initialization, each process, which is called process i in the code (with j denoting the other process), runs this code:

repeat
flag[i] := TRUE;
turn := j;
while (flag[j] and turn=j) do no-op;
CRITICAL SECTION
flag[i] := FALSE;
REMAINDER SECTION
until FALSE;

Information common to both processes (note that the examples below number the two processes 1 and 2):

turn = 1
flag[1] = FALSE
flag[2] = FALSE

EXAMPLE 1
Process 1 (i=1, j=2):
flag[1] := TRUE
turn := 2
check (flag[2] = TRUE and turn = 2)
- condition is false because flag[2] = FALSE
- since the condition is false, there is no waiting in the while loop
- enter the critical section
- Process 1 happens to lose the processor
Process 2 (i=2, j=1):
flag[2] := TRUE
turn := 1
check (flag[1] = TRUE and turn = 1)
- condition is true, so Process 2 busy waits until it loses the processor
Process 1:
- resumes and continues until it finishes in the critical section
- leaves the critical section
flag[1] := FALSE
- starts executing the remainder section (anything else a process does besides using the critical section)
- Process 1 happens to lose the processor
Process 2:
check (flag[1] = TRUE and turn = 1)
- condition now fails because flag[1] = FALSE
- no more busy waiting
- enter the critical section
EXAMPLE 2
Process 1 (i=1, j=2):
flag[1] := TRUE
turn := 2
- loses the processor here
Process 2 (i=2, j=1):
flag[2] := TRUE
turn := 1
check (flag[1] = TRUE and turn = 1)
- condition is true, so Process 2 busy waits until it loses the processor
Process 1:
check (flag[2] = TRUE and turn = 2)
- condition is false because turn = 1
- no waiting in the loop
- enters the critical section

EXAMPLE 3
Process 1 (i=1, j=2):
flag[1] := TRUE
- loses the processor here (before setting turn)
Process 2 (i=2, j=1):
flag[2] := TRUE
turn := 1
check (flag[1] = TRUE and turn = 1)
- condition is true, so Process 2 busy waits until it loses the processor
Process 1:
turn := 2
check (flag[2] = TRUE and turn = 2)
- condition is true, so Process 1 busy waits until it loses the processor
Process 2:
check (flag[1] = TRUE and turn = 1)
- the condition is now false (turn = 2), so Process 2 enters
the critical section
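The pseudocode translates almost line for line into C. The sketch below is an illustrative rendering, not code from the text: flag and turn are declared _Atomic because Peterson's algorithm is only correct if the stores to them are not reordered, which C11's default sequentially consistent atomics guarantee. Compile with -pthread.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

atomic_bool flag[2];        /* flag[k]: process k is interested in the critical section */
atomic_int turn;            /* whose turn it is to defer */
long counter = 0;           /* shared data protected by the algorithm */

void *proc(void *arg) {
    int i = *(int *)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        flag[i] = true;                     /* flag[i] := TRUE */
        turn = j;                           /* turn := j */
        while (flag[j] && turn == j)        /* busy wait (spinlock) */
            ;
        counter++;                          /* CRITICAL SECTION */
        flag[i] = false;                    /* flag[i] := FALSE */
        /* REMAINDER SECTION */
    }
    return NULL;
}

int main(void) {
    int id0 = 0, id1 = 1;
    pthread_t t0, t1;
    pthread_create(&t0, NULL, proc, &id0);
    pthread_create(&t1, NULL, proc, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}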

Summary of Techniques for Critical Section Problem


Software

1. Peterson's Algorithm: based on busy waiting
2. Semaphores: general facility provided by the operating system (e.g., OS/2)
o based on low-level techniques such as busy waiting or hardware assistance
o described in more detail below
3. Monitors: programming language technique
o see S&G, pp. 190--197 for more details

Hardware

1. Exclusive access to memory location

o always assumed
2. Interrupts that can be turned off
o must have only one processor for mutual exclusion
3. Test-and-Set: special machine-level instruction
o described in more detail below
4. Swap: atomically swaps contents of two words
o see S&G, pp. 173--174 for more details

Test-and-Set

• hardware assistance for process synchronization


• a special hardware instruction that does two operations atomically
• i.e., both operations are executed or neither

o it returns (as the result) the current value of the target
o it sets the target to TRUE
• Test-and-Set Solution to the Critical Section Problem (lock is a shared boolean, initially FALSE):

repeat
while Test-and-Set(lock) do no-op;
critical section
lock := FALSE;
remainder section
until false

Test-and-Set(target):
result := target
target := TRUE
return result

Information common to both processes...

lock = FALSE
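C11 exposes test-and-set directly: atomic_flag_test_and_set atomically returns the old value of a flag and sets it to true, which is exactly Test-and-Set(target) above. A hedged sketch of the resulting spinlock (the function names are ours):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;          /* the target, initially FALSE */

void enter_critical(void) {
    while (atomic_flag_test_and_set(&lock))   /* while Test-and-Set(lock) do no-op */
        ;                                     /* busy wait */
}

void leave_critical(void) {
    atomic_flag_clear(&lock);                 /* lock := FALSE */
}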

2.10 Semaphores

• originally, semaphores were flags for signalling between ships


• a variable used for signalling between processes
• operations possible on a semaphore:

o initialization
 done before individual processes try to operate on the semaphore
o two main operations:
 wait (or acquire)
 signal (or release)
o the wait and signal operations are atomic operations (e.g., the test-and-set at the top
of the loop of wait is done before losing the processor)
o e.g., a resource such as a shared data structure is protected by a semaphore; you
must acquire the semaphore before using the resource

wait(S):

while S <= 0 do no-op;
S := S - 1;

signal(S):
S := S + 1;

In either case, the initial value for S:

1. equals 1 if only one process is allowed in the critical section (binary semaphore)
2. equals n if at most n processes are allowed in the critical section

Semaphore Solution to the Critical Section Problem (mutex is a semaphore initialized to 1)

repeat
wait(mutex);
critical section
signal(mutex);
remainder section
until false;

Alternative Implementation of Wait and Signal (avoids busy waiting by suspending the waiting
process; S.L is a list of processes waiting on S)

wait(S):
S.value := S.value - 1;
if S.value < 0
then begin
add this process to S.L;
suspend this process;
end;
(this code is executed atomically; the actual waiting is done by the suspended process)

signal(S):
S.value := S.value + 1;
if S.value <= 0
then begin
remove a process P from S.L;
wakeup(P);
end
(also executed atomically)

mutex: semaphore used to enforce mutual exclusion (i.e., solve the critical section problem)
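POSIX provides counting semaphores with exactly this wait/signal pair, named sem_wait and sem_post (sem_wait suspends the caller rather than spinning, as in the alternative implementation above). A minimal sketch using a binary semaphore as the mutex; the loop counts are illustrative. Compile with -pthread.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex;                    /* binary semaphore, initial value 1 */
int shared = 0;                 /* data protected by the semaphore */

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);       /* wait(mutex): may suspend the caller */
        shared++;               /* critical section */
        sem_post(&mutex);       /* signal(mutex): wakes one waiter, if any */
    }
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);     /* 0 = shared between threads, not processes */
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("shared = %d (expected 200000)\n", shared);
    sem_destroy(&mutex);
    return 0;
}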

Implementing Semaphores
Must use:

• turn off interrupts
• busy waiting: to make sure only one process does a wait operation at once
o Test-and-Set
o Peterson's Algorithm

Bounded Buffer Problem (Producer/Consumer Problem)


for example, in UNIX a pipe between two processes is implemented as a 4 KB buffer between the two
processes

Producer

• creates data and adds to the buffer


• do not want to overflow the buffer

Consumer

• removes data from buffer (consumes it)


• do not want to get ahead of producer

Information common to both processes...


empty := n
full := 0
mutex := 1
Producer Process
repeat

produce an item in nextp

wait(empty);
wait(mutex);

add nextp to buffer

signal(mutex);
signal(full);

until false;

Consumer Process
repeat

wait(full);
wait(mutex);

remove an item from buffer to nextc

signal(mutex);
signal(empty);

consume the item in nextc

until false;

mutex: (semaphore) initialized to 1


empty: count of empty locations in the buffer
initialized to n, the buffer size
full: count of full locations in the buffer
initialized to 0

The empty and full semaphores are used for process synchronization. The mutex semaphore is used to
ensure mutually exclusive access to the buffer.
If we were to change the code in the consumer process from:
wait(full)
wait(mutex)
to
wait(mutex)
wait(full),
then we could reach a deadlock: with an empty buffer, the consumer would hold mutex while waiting
on full, and the producer would wait forever on mutex; each process would be waiting for the other.

2.11 Classical Problems of Synchronization


• Bounded-Buffer Problem
• Readers and Writers Problem
• Dining-Philosophers Problem

Bounded-buffer
• Shared data
o semaphore full, empty, mutex;
• Initially:
o full = 0, empty = n, mutex = 1

Bounded-buffer producer
do {

produce an item in nextp

wait(empty);
wait(mutex);

add nextp to buffer

signal(mutex);
signal(full);
} while (TRUE);

Bounded-buffer consumer
do {
wait(full);
wait(mutex);

remove an item from buffer to nextc

signal(mutex);
signal(empty);

consume the item in nextc

} while (TRUE);
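Putting the pieces together, here is a runnable C rendering of the bounded-buffer solution with POSIX semaphores and two threads. It is an illustrative sketch: N and the item counts are arbitrary, and in/out need no extra protection beyond mutex because each is updated only inside the critical section. Compile with -pthread.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8
int buffer[N];
int in = 0, out = 0;            /* updated only inside the mutex-protected sections */
sem_t empty, full, mutex;       /* empty = N, full = 0, mutex = 1 */

void *producer(void *arg) {
    for (int item = 0; item < 20; item++) {
        sem_wait(&empty);               /* wait(empty) */
        sem_wait(&mutex);               /* wait(mutex) */
        buffer[in] = item;              /* add nextp to buffer */
        in = (in + 1) % N;
        sem_post(&mutex);               /* signal(mutex) */
        sem_post(&full);                /* signal(full) */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 20; i++) {
        sem_wait(&full);                /* wait(full) */
        sem_wait(&mutex);               /* wait(mutex) */
        int nextc = buffer[out];        /* remove an item from buffer to nextc */
        out = (out + 1) % N;
        sem_post(&mutex);               /* signal(mutex) */
        sem_post(&empty);               /* signal(empty) */
        printf("consumed %d\n", nextc); /* consume the item in nextc */
    }
    return NULL;
}

int main(void) {
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}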

Readers-Writers Problem
• The readers-writers problem deals with data objects shared among several concurrent
processes.
• Some processes are readers, others writers
o writers require exclusive access
• aka shared vs exclusive locks

• Several variations:
o this discussion deals with first readers-writers problem
o No reader will be kept waiting unless a writer has already obtained permission to use
the shared object (readers don't need to wait for other readers).

Readers-Writers solution
// Shared data

semaphore mutex, wrt;

// mutex serves as common mutual exclusion (it protects readcount)
// wrt serves as mutual exclusion for writers
// wrt is also used by the last exiting reader to signal that a writer can start

// Initially

mutex = 1, wrt = 1, readcount = 0

Writer process
wait(wrt);
// …
// writing is performed
// …
signal(wrt);

Reader process
wait(mutex);
readcount++;
if (readcount == 1)
wait(wrt);
signal(mutex);
// …
// reading is performed
// …
wait(mutex);
readcount--;
if (readcount == 0)
signal(wrt);
signal(mutex);
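The same solution in runnable C with POSIX semaphores; the reader/writer bodies and thread counts are illustrative. Compile with -pthread.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex, wrt;               /* both initialized to 1 */
int readcount = 0;
int shared_data = 0;

void *writer(void *arg) {
    sem_wait(&wrt);             /* writers need exclusive access */
    shared_data++;              /* writing is performed */
    sem_post(&wrt);
    return NULL;
}

void *reader(void *arg) {
    sem_wait(&mutex);           /* protect readcount */
    if (++readcount == 1)
        sem_wait(&wrt);         /* first reader locks out writers */
    sem_post(&mutex);

    printf("read %d\n", shared_data);   /* reading is performed */

    sem_wait(&mutex);
    if (--readcount == 0)
        sem_post(&wrt);         /* last reader lets a writer start */
    sem_post(&mutex);
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);
    sem_init(&wrt, 0, 1);
    pthread_t w, r[3];
    pthread_create(&w, NULL, writer, NULL);
    for (int i = 0; i < 3; i++) pthread_create(&r[i], NULL, reader, NULL);
    pthread_join(w, NULL);
    for (int i = 0; i < 3; i++) pthread_join(r[i], NULL);
    return 0;
}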

Dining Philosophers problem


Philosopher
The following simple solution has the possibility of creating deadlock:

// Shared data
semaphore chopstick[5];
// Initially all values are 1
do {
wait(chopstick[i]);
wait(chopstick[(i+1) % 5]);
// …
// eat
// …
signal(chopstick[i]);
signal(chopstick[(i+1) % 5]);
// …
// think
// …
} while (TRUE);
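One standard repair, discussed under deadlock prevention later in this unit, is to break the circular wait by making every philosopher pick up the lower-numbered chopstick first. A hedged C sketch with POSIX semaphores (the thread bodies and round counts are illustrative; compile with -pthread):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define PH 5
sem_t chopstick[PH];            /* all initialized to 1 */

void *philosopher(void *arg) {
    int i = *(int *)arg;
    int left = i, right = (i + 1) % PH;
    int first  = left < right ? left : right;   /* lower-numbered chopstick */
    int second = left < right ? right : left;   /* higher-numbered chopstick */
    for (int round = 0; round < 3; round++) {
        sem_wait(&chopstick[first]);    /* acquiring in increasing order means */
        sem_wait(&chopstick[second]);   /* no circular wait, hence no deadlock */
        printf("philosopher %d eats\n", i);
        sem_post(&chopstick[second]);
        sem_post(&chopstick[first]);
        /* think */
    }
    return NULL;
}

int main(void) {
    pthread_t t[PH];
    int id[PH];
    for (int i = 0; i < PH; i++) sem_init(&chopstick[i], 0, 1);
    for (int i = 0; i < PH; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < PH; i++) pthread_join(t[i], NULL);
    return 0;
}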

2.12 DEADLOCK

Deadlock refers to a specific condition when two or more processes are each waiting for
another to release a resource, or more than two processes are waiting for resources in a circular chain
(see Necessary conditions).

Deadlock is a common problem in multiprocessing where many processes share a specific type
of mutually exclusive resource known as a software, or soft, lock.

Computers intended for the time-sharing and/or real-time markets are often equipped with a
hardware lock (or hard lock) which guarantees exclusive access to processes, forcing serialization.
Deadlocks are particularly troubling because there is no general solution to avoid (soft) deadlocks.

Necessary conditions

There are four necessary conditions for a deadlock to occur:
1. Mutual exclusion condition: a resource is either assigned to one process or it is available.
2. Hold and wait condition: processes already holding resources may request new resources.
3. No preemption condition: only a process holding a resource may release it
4. Circular wait condition: two or more processes form a circular chain where each process waits
for a resource that the next process in the chain holds

Deadlock can occur only in systems where all four conditions can hold. The main strategies for
dealing with deadlocks are:

• Prevention (for example, circular-wait prevention)
• Avoidance
• Detection (and recovery)

2.13 System Model

A system consists of a finite number of resources to be distributed among a number of competing
processes.

A process may utilize a resource only in the following sequence:

1. Request:
If the request cannot be granted immediately, the requesting process must wait until it can
acquire the resource.

2. Use:
The process can operate on the resource. E.g., the printer prints the document.

3. Release:
The process releases the resource.

2.14 DEADLOCK CHARACTERIZATION

• A deadlock situation can arise if the following 4 conditions hold simultaneously in a system.
o Mutual Exclusion: only one process at a time can use a resource.
o Hold and Wait: a process must be holding at least one resource and waiting to acquire
additional resources currently held by another process.
o No Preemption: resources cannot be preempted; that is, a resource can be released only
voluntarily by the holding process.

o Circular Wait: there exists a set {P0, P1, ..., Pn} of waiting processes such that P0 is waiting
for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a
resource held by Pn, and Pn is waiting for a resource held by P0.
Resource Allocation Graph:
• Deadlocks can be described in terms of a directed graph called the system resource-allocation graph.
• The resource-allocation graph is a graph with a set of vertices (V) and a set of edges (E).
• A vertex may be a process or a resource type:
Processes -> P = {P1, P2, ..., Pn} -> the set of all active processes.
Resource types -> R = {R1, R2, ..., Rm} -> the set of all resource types in the system.
• A directed edge from process Pi to resource type Rj, written Pi -> Rj, is called a "request edge".
It signifies that the process has requested the resource.
• A directed edge from resource type Rj to process Pi, written Rj -> Pi, is called an "assignment
edge". It signifies that process Pi holds an instance of resource type Rj.
FIG: Sample resource-allocation graph, with processes P1, P2, P3 and resource types R1, R2, R3, R4
(request edges P1 -> R1, P2 -> R3; assignment edges R1 -> P2, R2 -> P1, R3 -> P3).

The resource allocation graph depicts the following situation.


1. The sets P, R and E:
• P = {P1, P2, P3}
• R = {R1, R2, R3, R4}
• E = {P1 -> R1, P2 -> R3, R1 -> P2, R2 -> P1, R3 -> P3}

The resource-allocation graph with a deadlock is shown below.

FIG: Resource-allocation graph with a deadlock. Two cycles exist:
P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1
P2 -> R3 -> P3 -> R2 -> P2

Now consider a resource-allocation graph that has a cycle but no deadlock.

FIG: Resource-allocation graph with the cycle P1 -> R1 -> P3 -> R2 -> P1 but no deadlock:
P2 and P4 also hold instances of these resource types and can release them, breaking the cycle.

NOTE:
If a resource-allocation graph does not have a cycle, then the system is not in a deadlock state. On
the other hand, if there is a cycle, then the system may or may not be in a deadlock state.

2.15 Methods for Handling Deadlocks

The Deadlocks can be handled by the following methods:

PAID:

1. Prevention
2. Avoidance
3. Ignorance
4. Detection & Recovery

Ignorance
‘Ostrich Algorithm’: Stick your head in the sand and pretend there is no problem at all.

Justification: Problem does not happen frequently.
Used by: UNIX, JVM, Windows
Problem: Once it happens problem does not go away. Lockup exists until reboot or deadlocked
process is manually aborted.
Solution: Let application take care of it via designing deadlock prevention.
Prevention
Prevent one of the four conditions from happening:
Mutual Exclusion: Avoid this by allowing spooling or sharing.
• E.g. Spool output to printer instead of waiting for printer.
• E.g. Have multiple processes read from same file simultaneously.
• Problem: Not all resources are sharable.
Hold and Wait: A process may not request a resource if it holds a resource.
• Have processes define all the resources they need at the beginning of requests.
• If resource is not available, do not allocate any resource and wait until all resources are
available.
• Alternately: allow a process to request a resource only if it gives up all resources being
held first. It may claim the resources back if it succeeds in the request.
• Problem: Resources are held for longer than they are used.
• Problem: Starvation is possible.
No Preemption: If a process requests a resource it cannot immediately get, it is forced to release all
held resources.
• Problem: Don't want to give away a printer or tape drive that has already been written to
while waiting for resource.
• Problem: If process re-requests resources before problem is solved.
Circular Wait: Each process requests resources in an increasing order of enumeration.
• Problem: Resources may be held for a longer time than they are used.

Dining Philosophers Problem


Situation:
There are 5 philosophers.
Each philosopher eats for a while and then thinks for a while.
Each philosopher eats with 2 forks: fork on left and fork on right.
There is a table with spaghetti with 5 chairs, 5 plates, 5 forks.
Problem: Deadlock. Which of the prevention techniques above could solve it?

Detection & Recovery


Requested resources are granted to processes whenever possible.
Periodically, the OS performs a graph-based algorithm to detect circular-wait conditions.
When to check:
• Check at resource request/not available. Advantage: Incremental checks & early
detection
• Check every hour or so. Advantage: less overhead.
When deadlock detected solutions can be:
• Abort all deadlocked processes.
• Back up deadlocked processes to checkpoint and restart.
• Successively abort deadlocked processes until deadlock no longer exists.
• Successively preempt resources and back up preempted processes.
Preempt process which (for example):
• Consumed least amount of processor, or least total resources, or incurs least number of
rollbacks.
• Has lowest priority.
Problems with preempting:
May leave data in inconsistent state.

Avoidance:

Deadlock can be avoided if certain information about processes is available in advance of
resource allocation; this approach is described in Section 2.17 below.

2.16 Deadlock Prevention

o Mutual Exclusion: not required for sharable resources; must hold for nonsharable
resources.
o Hold and Wait: must guarantee that whenever a process requests a resource, it does not
hold any other resources.
 Require process to request and be allocated all its resources before it begins
execution, or allow process to request resources only when the process has none
already.
 Low resource utilization; starvation possible
o No Preemption:
 If the process that is holding some resources requests another resource that cannot
immediately be allocated to it, then all resources currently being held are released.
 Preempted resources are added to the list of resources for which the process is
waiting.
 Process will be restarted only when it can regain its old resources, as well as the
new ones that it is requesting.
o Circular Wait: impose a total ordering of all resource types, and require that each process
requests resources in an increasing order of enumeration.

2.17 Deadlock Avoidance:

Deadlock can be avoided if certain information about processes is available in advance of
resource allocation. For every resource request, the system sees if granting the request will mean that
the system will enter an unsafe state, meaning a state that could result in deadlock. The system then
only grants request that will lead to safe states.

In order for the system to be able to figure out whether the next state will be safe or unsafe, it
must know in advance at any time the number and type of all resources in existence, available, and
requested. One known algorithm that is used for deadlock avoidance is the Banker's algorithm, which
requires resource usage limit to be known in advance. However, for many systems it is impossible to
know in advance what every process will request. This means that deadlock avoidance is often
impossible.

Two other algorithms are Wait/Die and Wound/Wait, each of which uses a symmetry-breaking
technique. In both these algorithms there exists an older process (O) and a younger process (Y).
Process age can be determined by a time stamp at process creation time. Smaller time stamps are older
processes, while larger timestamps represent younger processes.
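The heart of the Banker's algorithm is its safety check: given the Available vector and the Allocation and Need matrices, repeatedly look for a process whose remaining need can be met, pretend it runs to completion and releases everything it holds, and see whether all processes can finish. A hedged sketch; the matrices are invented for illustration, and Need = Max - Allocation.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define P 3                     /* number of processes (illustrative) */
#define R 2                     /* number of resource types (illustrative) */

/* Returns true if the state described by avail, alloc and need is safe. */
bool is_safe(int avail[R], int alloc[P][R], int need[P][R]) {
    int work[R];
    bool finish[P] = { false };
    memcpy(work, avail, sizeof work);
    int done = 0;
    while (done < P) {
        bool progressed = false;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            bool can_run = true;                 /* Need_i <= Work? */
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                       /* pretend P_i runs and releases */
                for (int j = 0; j < R; j++) work[j] += alloc[i][j];
                finish[i] = true;
                done++;
                progressed = true;
            }
        }
        if (!progressed) return false;           /* no safe sequence exists */
    }
    return true;
}

int main(void) {
    int avail[R]    = {3, 3};
    int alloc[P][R] = {{0, 1}, {2, 0}, {3, 0}};
    int need[P][R]  = {{7, 3}, {1, 2}, {5, 0}};  /* Need = Max - Allocation */
    printf("state is %s\n", is_safe(avail, alloc, need) ? "safe" : "unsafe");
    return 0;
}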

2.18 Deadlock Detection

Often neither deadlock avoidance nor deadlock prevention may be used. Instead deadlock
detection and process restart are used by employing an algorithm that tracks resource allocation and
process states, and rolls back and restarts one or more of the processes in order to remove the deadlock.
Detecting a deadlock that has already occurred is easily possible since the resources that each process
has locked and/or currently requested are known to the resource scheduler or OS.

Detecting the possibility of a deadlock before it occurs is much more difficult and is, in fact,
generally undecidable, because the halting problem can be rephrased as a deadlock scenario. However,
in specific environments, using specific means of locking resources, deadlock detection may be
decidable. In the general case, it is not possible to distinguish between algorithms that are merely
waiting for a very unlikely set of circumstances to occur and algorithms that will never finish because
of deadlock.
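For single-instance resources, detection reduces to finding a cycle in the wait-for graph: processes are vertices, with an edge Pi -> Pj when Pi waits for a resource that Pj holds. A minimal depth-first-search sketch; the example graph is invented.

#include <stdbool.h>
#include <stdio.h>

#define NP 4
/* waits_for[i][j] = true if process i waits for a resource held by process j */
bool waits_for[NP][NP];

static bool dfs(int p, bool visited[], bool on_stack[]) {
    visited[p] = on_stack[p] = true;
    for (int q = 0; q < NP; q++)
        if (waits_for[p][q]) {
            if (on_stack[q]) return true;        /* back edge: a cycle, i.e. deadlock */
            if (!visited[q] && dfs(q, visited, on_stack)) return true;
        }
    on_stack[p] = false;
    return false;
}

bool deadlocked(void) {
    bool visited[NP] = { false }, on_stack[NP] = { false };
    for (int p = 0; p < NP; p++)
        if (!visited[p] && dfs(p, visited, on_stack)) return true;
    return false;
}

int main(void) {
    waits_for[0][1] = waits_for[1][2] = waits_for[2][0] = true;  /* P0->P1->P2->P0 */
    printf("deadlock detected: %s\n", deadlocked() ? "yes" : "no");
    return 0;
}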

Deadlocks prevention detection and recovery

There are two principal methods of dealing with the deadlock problem: use a deadlock-prevention
protocol, or allow the system to enter a deadlock state and then try to detect it and recover from it.

Deadlocks can be prevented by denying at least one of the following four conditions:

• Removing the mutual exclusion condition, this means that no process may have exclusive
access to a resource. This proves impossible for resources that cannot be spooled, and even
with spooled resources deadlock could still occur. Algorithms that avoid mutual exclusion are
called non-blocking synchronization algorithms.
• The "hold and wait" conditions may be removed by requiring processes to request all the
resources they will need before starting up (or before starting a particular set of operations);
getting this kind of information is difficult to get in advance and it is also inefficient way to use
resources. Another way is to require processes to release all their resources before requesting
all the resources they will need.

• A "no preemption" (lockout) condition may also be difficult or impossible to avoid as a process
has to be able to have a resource for a certain amount of time, or the processing outcome may
be inconsistent or thrashing may occur. However, the inability to enforce preemption may
interfere with a priority algorithm. (Note: Preemption of a "locked out" resource generally
implies a rollback, and is to be avoided, since it is very costly in terms of overhead.)
Algorithms that allow preemption include lock-free and wait-free algorithms and optimistic
concurrency control.
• Circular wait prevention consists of allowing processes to wait for resources, but ensure that
the waiting can't be circular. One approach might be to assign precedence to each resource and
force processes to allocate resources in order of increasing precedence. That is to say, if a
process holds some resources and the highest precedence of these resources is m, then this
process cannot request any resource with precedence smaller than m. This forces resource
allocation to follow a particular and non-circular ordering, so circular wait cannot occur.
Another approach is to allow holding only one resource per process; if a process requests
another resource, it must first free the one it's currently holding (or hold-and-wait).

2.19 Recovery from Deadlocks:

When a deadlock is detected, the most common method is to roll back one or more transactions to
break the deadlock. Three actions are required
1. Selecting a victim
2. Rollback, can be total rollback or partial rollback. Total rollback means that the complete
transaction is rolled back. Partial rollback means that the transaction is rolled back only to the
point where the deadlock breaks.
3. In a system where victim selection is based on cost, the same victim may be chosen
again and again. This causes the victim to starve; this situation has to be detected and avoided.

Log Based Recovery:


All updates are recorded on the log, which must be kept in stable storage. These logs have different
formats. When a failure occurs, the log file is read and the changes which are incomplete are undone.
There are 2 types of schemes:

In the deferred-modifications scheme, during the execution of a transaction, all the write operations
are deferred until the transaction partially commits, at which time the information on the log associated
with the transaction is used in executing the deferred writes.

In the immediate-modifications scheme, all updates are applied directly to the database. If a crash
occurs, the information in the log is used in restoring the state of the system to a previous consistent
state.

An example log-record format has the following fields:

• Transaction identifier, which is unique for each transaction
• Data identifier, which identifies the data item
• Old value: the value of the data item prior to the transaction
• New value: the value the data item will have after the transaction completes

