Lecture 4 - Process Synchronization

This document discusses process communication and synchronization in operating systems. It covers the need for synchronization when processes access shared resources to prevent race conditions and ensure proper execution order. The critical section concept is introduced, where synchronization algorithms are needed to enforce mutual exclusion so only one process is in the critical section at a time. Several synchronization algorithms using lock, turn, and interested variables are described and compared in terms of how they satisfy requirements like mutual exclusion, progress, and bounded waiting.

CS 334: Principles of Operating System

Lecture 4: Process Communication and Synchronization
Outline
• Introduction
• Need for Synchronization
• Critical Section
• Synchronization Algorithms
• Semaphores
• Message Passing Systems
Introduction
• Previous chapters considered processes to be independent
• In reality, processes cooperate with each other; that is,
they share data and communicate among themselves
• The concurrent access of resources by these cooperative
processes may cause problems in the system if not
managed well.
• The procedure involved in preserving the appropriate
order of execution of cooperative processes is known as
Process Synchronization.
Need for Synchronization
Data Access Synchronization
• Consider a scenario of concurrent access to a shared
variable, XY by two processes A and B.
Initially XY=4
A updates the variable as XY = XY + 3
New value XY = 7
B updates the variable as XY = XY - 2
New value XY = 5
• If the two updates interleave (each process reads XY,
computes, then writes back), one update can be lost: for
example, both read XY = 4, A writes back 7, and B then
writes back 2, overwriting A's update
Need for Synchronization
Data Access Synchronization
• When more than one process accesses and updates the
same data concurrently, the result depends on the
sequence of execution of the instructions.
• This situation is known as a race condition.
• Race conditions lead to data inconsistency and, thereby, to
wrong results.
• Data access synchronization is required to prevent
processes from updating global data concurrently.
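The lost-update scenario above can be replayed deterministically. The following is a minimal Python sketch (the function and names are illustrative, not from the lecture) that models each update as a separate read step and write step, so the interleaving can be chosen explicitly:

```python
def run(schedule):
    """Run the two updates step by step in the order given by `schedule`."""
    shared = {"XY": 4}                  # initially XY = 4
    local = {"A": None, "B": None}      # each process's privately read value
    delta = {"A": +3, "B": -2}          # A adds 3, B subtracts 2
    for proc, step in schedule:
        if step == "read":
            local[proc] = shared["XY"]                # read shared variable
        else:  # "write"
            shared["XY"] = local[proc] + delta[proc]  # write back result
    return shared["XY"]

# Serial order: A completes before B starts -> 4 + 3 - 2 = 5
print(run([("A", "read"), ("A", "write"), ("B", "read"), ("B", "write")]))  # 5

# Interleaved: both read XY = 4 before either writes -> A's update is lost
print(run([("A", "read"), ("B", "read"), ("A", "write"), ("B", "write")]))  # 2
```

The second schedule is exactly the race condition: the final value depends on the order in which the read and write steps happen to execute.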
Need for Synchronization
Control Synchronization
• Consider two interacting processes where one process, P2,
depends on the output of another process, P1
• P2 has to wait until the required input is received from
P1
• There should be control over the processes such that P2
is forced to wait until the execution of P1 has been
finished
• This is known as control synchronization in interacting
processes where the order of the execution of processes
is maintained.
Need for Synchronization
Resource Access Synchronization
• Lack of control over competing processes for accessing
the same resources can lead to a severe problem called
deadlock.

Example scenario
• A Process P1 is accessing a
Resource R1 and needs
another Resource R2 to
complete its execution.
• Another Process P2 is holding
a Resource R2 and requires R1
to proceed
• Both processes are waiting for
each other to release the
resources they hold;
processes in this state can
wait indefinitely
Critical Section
• The region of a program that tries to access shared
resources and may cause race conditions is called the
critical section
• The key to preventing problems involving shared
resources is to prohibit more than one process from
accessing the shared resource at the same time.
• This is called mutual exclusion, that is, when one
process is using a shared resource, the other processes
will be prevented from using the resource.
• Mutual exclusion can be implemented by ensuring that
no two processes are ever in their critical regions at the
same time
Critical Section
Synchronization Algorithms
Requirements of a Synchronization Algorithm
i. Mutual Exclusion: If one process is executing inside its
critical section, then no other process may enter the
critical section.
ii. Progress: If a process does not need to enter the
critical section, it must not prevent other processes
from entering the critical section.
iii. Bounded Waiting: A process must not wait indefinitely to
enter the critical section.
Synchronization Algorithms
Lock Variable Mechanism
• One of the simplest synchronization mechanisms.
• It is a software mechanism implemented in User mode
(user application).
• A variable lock is used with two possible values, either 0
or 1.
• Lock value 0 means that the critical section is vacant
while the lock value 1 means that it is occupied.
• A process that wants to enter the critical section first
checks the value of the lock variable. If it is 0, the
process sets lock to 1 and enters the critical section;
otherwise it waits.
Synchronization Algorithms
Lock Variable Mechanism
// Execute non-critical-section (CS) code

// Check 'lock' until its value is equal to zero
while (lock != 0) {
    // perform busy waiting
}

// Set lock to one to prevent other processes from entering the CS
lock = 1;

// Execute code in the CS

// Set lock to zero to allow other processes to enter the CS
lock = 0;
Synchronization Algorithms
Lock Variable Mechanism
Analysis of the solution

Requirement       Status   Explanation
Mutual Exclusion  Fail     Two or more processes can each see the value of the lock
                           variable as 0 (e.g., when a context switch occurs between
                           the check and the assignment) and enter the critical
                           section at the same time
Progress          Pass
Bounded Waiting   Pass

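The mutual-exclusion failure can be traced step by step. The Python sketch below (variable names are illustrative) replays one bad interleaving in a single thread, simulating a context switch between the check of the lock and the assignment to it:

```python
# Why the lock-variable mechanism fails: a context switch between
# "check lock" and "set lock" lets both processes observe lock == 0.

lock = 0
in_cs = []      # which processes are inside the critical section

# Process A checks the lock ...
a_sees = lock            # a_sees == 0, so A decides to enter
# ... context switch: B runs and also checks the lock
b_sees = lock            # b_sees == 0, so B also decides to enter

# Both now set the lock and enter the critical section
lock = 1; in_cs.append("A")
lock = 1; in_cs.append("B")

print(in_cs)   # ['A', 'B'] -> two processes in the critical section at once
```

The check and the assignment must execute as one indivisible step for the mechanism to work, which plain user-mode code cannot guarantee.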
Synchronization Algorithms
Turn Variable Mechanism
• An improvement to the lock variable algorithm.
• It is a software mechanism implemented in User mode
(user application), works with only two processes.
• A variable turn is used with two possible values, either i
or j.
• i is the PID of process Pi and j is the PID of process Pj.
• A process can only enter the critical section if the value
of turn is equal to its PID.
• This solves the main problem of the lock mechanism, where
more than one process could see the value of the lock
variable as 0 and enter the critical section together
Synchronization Algorithms
Turn Variable Mechanism
Process i:
// Execute non-CS code

// Check 'turn' until its value is equal to i
while (turn != i) {
    // perform busy waiting
}

// Execute code in the critical section

// Set turn equal to j to allow process j to enter the critical section
turn = j;

Process j:
// Execute non-CS code

// Check 'turn' until its value is equal to j
while (turn != j) {
    // perform busy waiting
}

// Execute code in the critical section

// Set turn equal to i to allow process i to enter the critical section
turn = i;
Synchronization Algorithms
Turn Variable Mechanism
Analysis of the solution

Requirement       Status   Explanation
Mutual Exclusion  Pass     A process enters the critical section only when the turn
                           variable equals its process ID, so more than one process
                           can never be in the critical section at the same time
Progress          Fail     - Pj cannot enter the critical section until Pi has
                             entered and exited the critical section
                           - Pi cannot re-enter the critical section until Pj has
                             entered and exited the critical section
Bounded Waiting   Pass
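The progress failure amounts to forced alternation. A minimal Python sketch (names are illustrative) shows Pi locked out after its own exit, even though Pj never asks to enter:

```python
# Strict alternation under the turn-variable mechanism: after Pi
# leaves the critical section, it cannot re-enter until Pj has taken
# a turn, even if Pj is not interested at all.

turn = "i"

def can_enter(pid):
    """A process may enter only when turn equals its PID."""
    return turn == pid

assert can_enter("i")   # Pi enters the critical section
turn = "j"              # Pi exits and hands the turn to Pj

# Pj is not interested, yet Pi is now blocked:
print(can_enter("i"))   # False -> progress requirement violated
```

An uninterested process (Pj here) is holding up an interested one, which is exactly what the progress requirement forbids.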
Synchronization Algorithms
Interested Variable Mechanism
• An improvement to the 'turn' variable mechanism.
• The interested variable mechanism makes use of a
Boolean variable named interested.
• A process that wants to enter the critical section first
checks whether the other process is interested in getting
inside.
• The process waits for as long as the other process is
interested.
• On exit, a process sets its interested variable to false
so that the other process can get into the critical
section.
Synchronization Algorithms
Interested Variable Mechanism
Process i:
// Execute non-CS code

// Set interested variable equal to true
interested[i] = T;

// Check if the other process is interested
while (interested[j] == T) {
    // perform busy waiting
}

// Execute code in the critical section

// Set interested variable equal to false
interested[i] = F;

Process j:
// Execute non-CS code

// Set interested variable equal to true
interested[j] = T;

// Check if the other process is interested
while (interested[i] == T) {
    // perform busy waiting
}

// Execute code in the critical section

// Set interested variable equal to false
interested[j] = F;
Synchronization Algorithms
Interested Variable Mechanism
Analysis of the solution

Requirement       Status   Explanation
Mutual Exclusion  Pass     If one process is interested in entering the critical
                           section, the other process waits until it becomes
                           uninterested; therefore more than one process can never
                           be in the critical section at the same time
Progress          Pass     If a process is not interested in entering the critical
                           section, it does not stop the other process from
                           entering the critical section
Bounded Waiting   Fail     A process can be stuck in its while loop indefinitely,
                           waiting for the other process's interested variable to
                           become false
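The bounded-waiting failure arises when a context switch lands between the two flag assignments. The single-threaded Python sketch below (names are illustrative) replays that interleaving:

```python
# How the interested-variable mechanism can wait forever: if a context
# switch occurs right after both processes set their interested flags,
# each spins waiting for the other, indefinitely.

interested = {"i": False, "j": False}

interested["i"] = True    # Pi declares interest ...
interested["j"] = True    # ... context switch: Pj declares interest too

# Each process now busy-waits while the OTHER process is interested:
pi_blocked = interested["j"]      # True -> Pi spins
pj_blocked = interested["i"]      # True -> Pj spins
print(pi_blocked and pj_blocked)  # True -> neither can ever proceed
```

Since neither process ever reaches its exit code, neither flag is ever reset to false; this mutual spin is effectively a deadlock.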
Synchronization Algorithms
Peterson Solution
• A software mechanism implemented in user mode.
• The algorithm uses two variables: flag and turn. Flag is a
boolean array of size 2 while turn is an int variable
• A flag[n] value of true indicates that the process n wants
to enter the critical section.
• Entrance to the critical section is granted for process P0
if P1 does not want to enter its critical section or if P1
has given priority to P0 by setting turn to 0
Synchronization Algorithms
Peterson Solution
Process i:
// Execute non-CS code
flag[i] = true;   // set flag to true
turn = j;         // set turn to index of other process

// Check if the other process wants to enter
while (flag[j] == true && turn == j) {
    // busy wait
}

// critical section
...
// end of critical section
flag[i] = false;

Process j:
// Execute non-CS code
flag[j] = true;   // set flag to true
turn = i;         // set turn to index of other process

// Check if the other process wants to enter
while (flag[i] == true && turn == i) {
    // busy wait
}

// critical section
...
// end of critical section
flag[j] = false;

• Initially, both flags are set to false.
• When a process wants to execute its critical section, it sets its flag to true and turn to the index of
the other process.
• This means that the process wants to execute, but it will allow the other process to run first.
• The process performs busy waiting until the other process has finished its own critical section.
• After this, the current process enters its critical section and updates the shared data (in this
example, adding or removing a number from the shared buffer).
• After completing the critical section, it sets its own flag to false, indicating it does not wish to
execute anymore.
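The two code columns map directly onto threads. Below is a minimal runnable sketch in Python (the names and iteration count are illustrative); in CPython, the GIL makes the individual reads and writes of flag and turn effectively atomic, standing in for the atomic loads and stores the algorithm assumes on real hardware:

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often so busy waits stay short

flag = [False, False]   # flag[n] is True when process n wants to enter
turn = 0
counter = 0             # shared data protected by the critical section
N = 1000                # iterations per thread (illustrative)

def worker(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True              # declare interest
        turn = j                    # give priority to the other process
        while flag[j] and turn == j:
            pass                    # busy wait
        counter += 1                # critical section
        flag[i] = False             # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # 2000: no increment was lost
```

Without the entry protocol, the unprotected `counter += 1` could lose updates; with Peterson's algorithm, every increment survives.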
Synchronization Algorithms
Peterson Solution
Analysis of the solution

Requirement       Status   Explanation
Mutual Exclusion  Pass     The while condition involves both variables: a process is
                           kept out of the critical section only while the other
                           process is interested and it was itself the last to
                           update the turn variable, so both can never be inside at
                           the same time
Progress          Pass     An uninterested process can never stop the other,
                           interested process from entering the critical section
Bounded Waiting   Pass     A deadlock can never happen, because the process that
                           first set the turn variable enters the critical section
Synchronization Algorithms
Semaphores
• The semaphore is used to protect resources such as
global shared memory that need to be accessed and
updated by many processes simultaneously.
• The semaphore acts as a guard on the resource: whenever
a process needs to access the resource, it must first
obtain permission from the semaphore.
• If the resource is free the process will be allowed,
otherwise permission is denied.
• In case of denial, the requesting process needs to wait
until semaphore permits it
Synchronization Algorithms
Semaphores
• The semaphore is implemented as an integer variable (S)
and can be initialized with any positive integer value.
• The semaphore is accessed through only two indivisible
operations: wait (P) and signal (V).
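The slides do not show the bodies of wait and signal, so the following is a hedged sketch (class and names are illustrative), assuming the check-and-update in each operation executes atomically; a real kernel guarantees this by disabling interrupts or using hardware atomics, and blocks the caller instead of busy waiting:

```python
class Semaphore:
    """Illustrative semaphore: wait (P) and signal (V) on a counter."""

    def __init__(self, count=1):
        self.count = count          # number of free resources

    def wait(self):                 # P operation
        while self.count <= 0:
            pass                    # busy wait until a resource is free
        self.count -= 1             # claim one resource

    def signal(self):               # V operation
        self.count += 1             # release one resource

s = Semaphore(1)
s.wait()        # enter critical section (count: 1 -> 0)
s.signal()      # leave critical section (count: 0 -> 1)
print(s.count)  # 1
```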
Synchronization Algorithms
Semaphores
• Initially, the count of semaphore is 1
• Whenever a process tries to enter the critical section, it
performs the wait operation.
• The count of the semaphore is decremented when a process
enters the critical section, hence it becomes zero.
• Another process trying to access the critical section will
not be allowed to enter unless the semaphore value
becomes greater than zero.
• When a process exits the critical section, it performs the
signal operation which increments the semaphore by 1
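This counting behaviour maps onto Python's built-in `threading.Semaphore`, whose `acquire` and `release` methods correspond to wait (P) and signal (V). A short usage sketch (thread count and loop size are illustrative) with a binary semaphore guarding a shared counter:

```python
import threading

sem = threading.Semaphore(1)   # binary semaphore: count starts at 1
shared = {"count": 0}

def worker():
    for _ in range(1000):
        sem.acquire()              # wait (P): decrement, or block at zero
        shared["count"] += 1       # critical section
        sem.release()              # signal (V): increment, wake a waiter

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(shared["count"])   # 4000: all increments preserved
```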
Synchronization Algorithms
Semaphores
Home Work: Discuss how semaphores can be used to solve
the following classic synchronization problems
i. Producer–Consumer Synchronization Problem
ii. Reader–Writer Synchronization Problem
iii. Dining-philosopher Synchronization Problem
iv. Cigarette Smokers’ Synchronization Problem
v. Sleeping Barber Synchronization Problem
Message Passing
• The message-passing system allows processes to
communicate through explicit messages instead of
sharing memory as discussed in the previous sections
• In this system, a source process (sender) sends a
message to a known destination process (receiver)
• Two system calls are used to implement these systems
• send (name of destination process, message);
• receive (name of source process, message).
Message Passing
Addressing between two processes can take place through
two methods
Direct Addressing
• The two processes need to name each other to
communicate.
• This becomes easy if they have the same parent.
Message Passing
Addressing between two processes can take place through
two methods
Indirect Addressing
• Each process has a mailbox that it uses for receiving
messages
• The sender and receiver processes share mailbox
information before starting communication
• The sender submits a message to the mailbox that will
be relayed later to the receiver process
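Indirect addressing can be sketched with a thread-safe queue acting as the mailbox (the process names and message contents below are illustrative):

```python
import queue
import threading

mailbox = queue.Queue()   # shared mailbox known to both processes
received = []

def sender():
    mailbox.put(("P1", "result ready"))   # send(mailbox, message)

def receiver():
    src, msg = mailbox.get()              # receive(mailbox, message); blocks
    received.append((src, msg))           # until a message arrives

t_recv = threading.Thread(target=receiver)
t_send = threading.Thread(target=sender)
t_recv.start(); t_send.start()
t_recv.join(); t_send.join()
print(received)   # [('P1', 'result ready')]
```

Because `get()` blocks until a message is available, the mailbox also provides the control synchronization discussed earlier: the receiver cannot proceed until the sender has produced its output.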
Home Work
1. Go to https://teach-sim.com/tutorials/
2. Perform HW 3: Investigating Synchronization
3. Upload the results (1 per group) in the LMS
