OS Notes Unit-2

The document discusses process synchronization, focusing on the critical section problem, semaphores, and classical synchronization issues. It explains the concepts of race conditions, critical sections, and various synchronization techniques such as monitors and semaphores. Additionally, it covers deadlocks, their characterization, and methods for handling them, emphasizing the importance of synchronization in maintaining data integrity and system efficiency.

UNIT- II

Process Synchronization: The Critical Section Problem, Semaphores, and Classical Problems of Synchronization, Critical Regions, Monitors, Synchronization examples.

Deadlocks: Principles of Deadlocks, System Model, Deadlock Characterization, Methods for Handling Deadlocks, Deadlock Prevention, Avoidance, Detection & Recovery from Deadlocks.

************

Process Synchronization

Process synchronization is the coordination of the execution of multiple processes in a multi-process system to ensure that they access shared resources in a controlled and predictable manner. It aims to resolve race conditions and other synchronization issues in a concurrent system.
The main objective of process synchronization is to ensure that multiple processes access shared resources without interfering with each other, and to prevent inconsistent data due to concurrent access. To achieve this, synchronization techniques such as semaphores, monitors, and critical sections are used.
In a multi-process system, synchronization is necessary to ensure data consistency and integrity, and to avoid the risk of deadlocks and other synchronization problems. Process synchronization is an important aspect of modern operating systems, and it plays a crucial role in the correct and efficient functioning of multi-process systems.
On the basis of synchronization, processes are categorized as one of the following two types:
 Independent Process: The execution of one process does not affect the execution of other processes.
 Cooperative Process: A process that can affect or be affected by other processes executing in the system.
Synchronization problems arise with cooperative processes, because cooperative processes share resources.

Race Condition
When more than one process executes the same code or accesses the same memory or shared variable, there is a possibility that the output, or the value of the shared variable, is wrong; the processes are effectively "racing", each one claiming its own output is the correct one. This condition is known as a race condition. When several processes access and manipulate the same data concurrently, the outcome depends on the particular order in which the accesses take place. A race condition is a situation that may occur inside a critical section: it happens when the result of multiple threads executing in the critical section differs according to the order in which the threads run. Race conditions in critical sections can be avoided if the critical section is treated as an atomic instruction. Proper thread synchronization using locks or atomic variables can also prevent race conditions.
Example:
Let us walk through an example to understand race conditions better.
Say there are two processes P1 and P2 that share a common variable (shared = 10); both are in the ready queue waiting for their turn to execute. Suppose P1 runs first: the CPU copies the common variable (shared = 10) into P1's local variable (X = 10) and increments it by 1 (X = 11). When the CPU then reaches the line sleep(1), it switches from P1 to process P2 in the ready queue, and P1 goes into a waiting state for 1 second.
The CPU now executes P2 line by line: it copies the common variable (shared = 10) into P2's local variable (Y = 10) and decrements Y by 1 (Y = 9). When the CPU reaches sleep(1), P2 also goes into a waiting state, and the CPU is idle for some time because the ready queue is empty. After P1's 1 second elapses and it returns to the ready queue, the CPU resumes P1 and executes its remaining line of code, storing the local variable (X = 11) into the common variable (shared = 11). The CPU then idles again until P2's 1 second elapses; when P2 returns to the ready queue, the CPU executes P2's remaining line, storing the local variable (Y = 9) into the common variable (shared = 9).
Initially shared = 10

Process P1          Process P2
----------          ----------
int X = shared      int Y = shared
X++                 Y--
sleep(1)            sleep(1)
shared = X          shared = Y

Note: We expect the final value of the common variable (shared) after the execution of P1 and P2 to be 10 (P1 increments shared from 10 to 11 and P2 decrements it back to 10). But we get an undesired value due to the lack of proper synchronization: both processes read shared = 10 before either writes back, so one update is lost.
Actual meaning of race condition:
 If P1's write back happens first (shared = 11, then overwritten by P2), the final value of the common variable (shared) is 9.
 If P2's write back happens first (shared = 9, then overwritten by P1), the final value of the common variable (shared) is 11.
 Here the outcomes 9 and 11 are racing: if we execute these two processes on our computer system, sometimes we will get 9 and sometimes 11 as the final value of the common variable. This phenomenon is called a race condition.
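The lost-update interleaving above can be reproduced deterministically with a short Python sketch (the function name run and its flag parameter are illustrative, not from the notes): both "processes" read shared before either writes back, so whichever write lands last wins.

```python
def run(p1_writes_first):
    """Simulate the P1/P2 interleaving from the example.

    Both processes read shared = 10 before either writes back,
    so one of the two updates is always lost.
    """
    shared = 10
    x = shared      # P1 reads the shared variable (X = 10)
    x += 1          # P1 increments its local copy (X = 11)
    y = shared      # P2 reads the same stale value (Y = 10)
    y -= 1          # P2 decrements its local copy (Y = 9)
    if p1_writes_first:
        shared = x  # P1 writes 11 ...
        shared = y  # ... then P2 overwrites it with 9
    else:
        shared = y  # P2 writes 9 ...
        shared = x  # ... then P1 overwrites it with 11
    return shared

print(run(True))   # 9  (P1's write back happens first)
print(run(False))  # 11 (P2's write back happens first)
```

Neither outcome is the expected 10, which is exactly the data inconsistency that synchronization is meant to prevent.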
Critical Section Problem
A critical section is a code segment that can be accessed by only one process at a time. The critical section contains shared variables that need to be synchronized to maintain the consistency of data. The critical section problem is therefore the problem of designing a protocol that cooperative processes can use to access shared resources without creating data inconsistencies. A typical process is structured as an entry section, the critical section itself, an exit section, and a remainder section; in the entry section, the process requests permission to enter its critical section.
Any solution to the critical section problem must satisfy three requirements:
 Mutual Exclusion: If a process is executing in its critical section, then no
other process is allowed to execute in the critical section.
 Progress: If no process is executing in the critical section and other processes are waiting outside it, then only those processes that are not executing in their remainder sections can participate in deciding which will enter the critical section next, and the selection cannot be postponed indefinitely.
 Bounded Waiting: A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made a
request to enter its critical section and before that request is granted.
Peterson’s Solution
Peterson’s Solution is a classical software-based solution to the critical section problem.
In Peterson’s solution, we have two shared variables:
 boolean flag[i]: Initialized to FALSE; initially no process is interested in entering the critical section.
 int turn: Indicates whose turn it is to enter the critical section.

Peterson’s Solution preserves all three conditions:


 Mutual Exclusion is assured as only one process can access the critical section
at any time.
 Progress is also assured, as a process outside the critical section does not block
other processes from entering the critical section.
 Bounded Waiting is preserved as every process gets a fair chance.

Disadvantages of Peterson’s Solution

 It involves busy waiting. (In Peterson’s solution, the statement “while(flag[j] && turn == j);” is responsible for this. Busy waiting is not favored because it wastes CPU cycles that could be used to perform other tasks.)
 It is limited to 2 processes.
 Peterson’s solution is not guaranteed to work on modern CPU architectures, which may reorder independent memory reads and writes unless memory barriers are used.
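A minimal runnable sketch of Peterson's algorithm for two threads is shown below in Python. This works in CPython because the GIL gives effectively sequentially consistent memory behavior between threads; on real hardware, as noted above, the same code would need memory barriers. The worker function and iteration count are illustrative choices, not from the notes.

```python
import sys
import threading

sys.setswitchinterval(0.0005)  # switch threads often to force contention

flag = [False, False]  # flag[i]: thread i wants to enter its critical section
turn = 0               # which thread must yield when both want to enter
counter = 0            # shared variable protected by the algorithm

def worker(i, iterations):
    global turn, counter
    j = 1 - i                          # index of the other thread
    for _ in range(iterations):
        flag[i] = True                 # entry section: declare interest
        turn = j                       # give priority to the other thread
        while flag[j] and turn == j:
            pass                       # busy waiting (the drawback noted above)
        counter += 1                   # critical section
        flag[i] = False                # exit section

t0 = threading.Thread(target=worker, args=(0, 500))
t1 = threading.Thread(target=worker, args=(1, 500))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 1000: no increments are lost when mutual exclusion holds
```

If the entry and exit sections were removed, the unprotected counter += 1 could lose updates, illustrating the race condition discussed earlier.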
***********

Semaphores
A semaphore is a signaling mechanism: a thread that is waiting on a semaphore can be signaled by another thread. This differs from a mutex, which can be released only by the thread that acquired it.
A semaphore is an integer variable that can be accessed only through two atomic operations, wait() and signal(), used for process synchronization.
There are two types of semaphores: Binary Semaphores and Counting Semaphores.
 Binary Semaphores: They can take only the values 0 and 1. They are also known as mutex locks, as they can provide mutual exclusion. All the processes share the same mutex semaphore, which is initialized to 1. A process must wait until the semaphore's value becomes 1; it then sets the value to 0 and enters its critical section. When it completes its critical section, it resets the value to 1 so that some other process can enter its critical section.
 Counting Semaphores: They can have any value and are not restricted to a
certain domain. They can be used to control access to a resource that has a
limitation on the number of simultaneous accesses. The semaphore can be
initialized to the number of instances of the resource. Whenever a process
wants to use that resource, it checks if the number of remaining instances is
more than zero, i.e., the process has an instance available. Then, the process
can enter its critical section thereby decreasing the value of the counting
semaphore by 1. After the process is over with the use of the instance of the
resource, it can leave the critical section thereby adding 1 to the number of
available instances of the resource.
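The counting-semaphore pattern described above can be sketched with Python's threading.Semaphore (the pool/peak bookkeeping names are illustrative, not from the notes): the semaphore is initialized to the number of resource instances, wait() decrements it before use, and signal() increments it afterwards.

```python
import threading

MAX_INSTANCES = 3                       # number of instances of the resource
pool = threading.Semaphore(MAX_INSTANCES)
in_use = 0                              # instances currently held
peak = 0                                # highest simultaneous usage observed
stats_lock = threading.Lock()           # protects the two counters above

def use_resource():
    global in_use, peak
    pool.acquire()                      # wait(): blocks when all 3 are taken
    with stats_lock:
        in_use += 1
        peak = max(peak, in_use)
    # ... the resource instance would be used here ...
    with stats_lock:
        in_use -= 1
    pool.release()                      # signal(): return the instance

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(peak <= MAX_INSTANCES)  # True: never more than 3 users at once
```

Even with 10 competing threads, the semaphore guarantees that at most MAX_INSTANCES of them hold an instance at the same time.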

Advantages of Process Synchronization


 Ensures data consistency and integrity
 Avoids race conditions
 Prevents inconsistent data due to concurrent access
 Supports efficient and effective use of shared resources

Disadvantages of Process Synchronization


 Adds overhead to the system
 Can lead to performance degradation
 Increases the complexity of the system
 Can cause deadlocks if not implemented properly.
**************
Critical Regions:
In an operating system, a critical region refers to a section of code or a data structure that must be accessed by only one process or thread at a time. Critical regions are used to serialize access to shared resources, such as variables, data structures, or devices, in order to maintain data integrity and avoid race conditions.
The concept of critical regions is closely tied to the need for synchronization and mutual exclusion in multi-threaded or multi-process environments. Without proper synchronization mechanisms, concurrent access to shared resources can lead to data inconsistencies, unpredictable behavior, and errors.
To enforce mutual exclusion and protect critical regions, operating systems provide synchronization mechanisms such as locks, semaphores, and monitors. These mechanisms ensure that only one process or thread can be inside the critical region at any given time, while other processes or threads are prevented from entering until the current occupant releases the lock.

Critical Region Characteristics and Requirements


Following are the characteristics and requirements of critical regions in an operating system.
1. Mutual Exclusion
Only one process or thread can access the critical region at a time. This ensures that concurrent access does not result in data corruption or inconsistent states.
2. Atomicity
The code within a critical region is treated as an indivisible unit of execution: once a process or thread enters the critical region, it completes its work there without interference from other processes.
3. Synchronization
Processes or threads waiting to enter a critical region are synchronized to prevent simultaneous access. They typically employ synchronization primitives, such as locks or semaphores, to control access and enforce mutual exclusion.
4. Minimal Time Spent in Critical Regions
It is preferable to minimize the time spent inside critical regions, to reduce contention and improve system performance. Lengthy execution within a critical region increases the waiting time of other processes or threads.
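The four points above can be illustrated with a lock-protected critical region in Python (a sketch; the worker function and counts are illustrative): the read-modify-write on the shared variable is the critical region, the lock enforces mutual exclusion, and the region is kept as short as possible.

```python
import threading

lock = threading.Lock()
shared = 0

def worker(iterations):
    global shared
    for _ in range(iterations):
        with lock:          # entry: acquire the lock (mutual exclusion)
            shared += 1     # critical region, kept minimal (point 4)
        # lock released here: other threads may now enter

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(shared)  # 40000: no updates are lost
```

Without the lock, the four threads' interleaved read-modify-write sequences could lose increments, exactly the race condition described earlier.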
***********
Monitors in Process Synchronization
Monitors are a higher-level synchronization construct that simplifies process
synchronization by providing a high-level abstraction for data access and
synchronization. Monitors are implemented as programming language constructs,
typically in object-oriented languages, and provide mutual exclusion, condition variables,
and data encapsulation in a single construct.

1. A monitor is essentially a module that encapsulates a shared resource and


provides access to that resource through a set of procedures. The procedures
provided by a monitor ensure that only one process can access the shared
resource at any given time, and that processes waiting for the resource are
suspended until it becomes available.
2. Monitors are used to simplify the implementation of concurrent programs by
providing a higher-level abstraction that hides the details of synchronization.
Monitors provide a structured way of sharing data and synchronization
information, and eliminate the need for complex synchronization primitives
such as semaphores and locks.
3. The key advantage of using monitors for process synchronization is that they
provide a simple, high-level abstraction that can be used to implement complex
concurrent systems. Monitors also ensure that synchronization is encapsulated
within the module, making it easier to reason about the correctness of the
system.
However, monitors have some limitations. For example, they can be less efficient than
lower-level synchronization primitives such as semaphores and locks, as they may
involve additional overhead due to their higher-level abstraction. Additionally, monitors
may not be suitable for all types of synchronization problems, and in some cases, lower-
level primitives may be required for optimal performance.

The monitor is one of the ways to achieve process synchronization. Monitors are supported by programming languages to achieve mutual exclusion between processes; for example, Java's synchronized methods, together with its wait() and notify() constructs.

1. It is a collection of condition variables and procedures combined together in a special kind of module or package.
2. Processes running outside the monitor cannot access the monitor's internal variables, but they can call the monitor's procedures.
3. Only one process at a time can execute code inside the monitor.

Condition Variables: Two operations are performed on the condition variables of a monitor:
wait
signal
Suppose we declare two condition variables:
condition x, y; // declaring condition variables
Wait operation, x.wait(): a process performing a wait operation on a condition variable is suspended. Suspended processes are placed in the blocked queue of that condition variable. Note: each condition variable has its own blocked queue.
Signal operation, x.signal(): when a process performs a signal operation on a condition variable, one of the blocked processes is given a chance to resume:
if (x's blocked queue is empty)
    // ignore the signal
else
    // resume a process from the blocked queue
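The wait/signal behavior above can be sketched with Python's threading.Condition, which bundles a lock and a condition variable much like a monitor (the ready flag and thread names are illustrative, not from the notes): wait() suspends the caller on the blocked queue, and notify() plays the role of signal(), resuming one blocked process if any.

```python
import threading

monitor = threading.Condition()   # one lock + one condition variable
ready = False                     # the condition being waited on
result = []

def consumer():
    with monitor:                 # enter the monitor
        while not ready:          # re-check the condition after waking
            monitor.wait()        # suspend; join the blocked queue
        result.append("consumed")

def producer():
    global ready
    with monitor:
        ready = True
        monitor.notify()          # signal: resume one blocked process, if any

c = threading.Thread(target=consumer)
c.start()
p = threading.Thread(target=producer)
p.start()
c.join(); p.join()
print(result)  # ['consumed']
```

The while loop around wait() mirrors the rule that a signal on an empty blocked queue is simply ignored: the waiter relies on the ready flag, not on having observed the signal itself.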

Advantages of Monitors: Monitors make parallel programming easier and less error-prone than lower-level techniques such as semaphores.
Disadvantages of Monitors: Monitors must be implemented as part of the programming language.
*********************
Bounded-Buffer Problem
 N buffers, each of which can hold one item
 Semaphore mutex initialized to the value 1
 Semaphore full initialized to the value 0
 Semaphore empty initialized to the value N
The structure of the producer process
while (true)
{
... /* produce an item in next_produced*/ ...
wait(empty);
wait(mutex);
... /* add next produced to the buffer */ ...
signal(mutex);
signal(full);
}
The structure of the consumer process
while (true)
{
wait(full);
wait(mutex);
... /* remove an item from buffer to next_consumed */ ...
signal(mutex);
signal(empty); ...
/* consume the item in next consumed */ ...
}
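The producer and consumer pseudocode above can be turned into a runnable Python sketch using threading.Semaphore, with N = 5 buffer slots (the deque-based buffer and the item values are illustrative choices, not from the notes).

```python
import threading
from collections import deque

N = 5
buffer = deque()
mutex = threading.Semaphore(1)  # protects the buffer itself
empty = threading.Semaphore(N)  # counts free slots
full = threading.Semaphore(0)   # counts filled slots

def producer(items):
    for item in items:
        empty.acquire()         # wait(empty)
        mutex.acquire()         # wait(mutex)
        buffer.append(item)     # add next_produced to the buffer
        mutex.release()         # signal(mutex)
        full.release()          # signal(full)

consumed = []
def consumer(count):
    for _ in range(count):
        full.acquire()          # wait(full)
        mutex.acquire()         # wait(mutex)
        consumed.append(buffer.popleft())  # remove an item from the buffer
        mutex.release()         # signal(mutex)
        empty.release()         # signal(empty)

p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(20)))  # True: all 20 items arrive in order
```

Note that empty blocks the producer when all N slots are filled and full blocks the consumer when the buffer is empty, while mutex serializes the actual buffer accesses, exactly matching the wait/signal pairs in the pseudocode.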
****************
DEADLOCKS
