
UNIT-3

Critical Section Problem in Operating Systems


Definition: The critical section problem is a classic synchronization problem in operating systems.
It refers to a situation where multiple processes or threads share a common resource, and they must
coordinate their access to that resource to avoid race conditions and ensure data consistency.

 Key Objectives:
1. Mutual Exclusion: Only one process should be allowed to enter its critical section at a
time.
2. Progress: If no process is in its critical section and some processes are waiting, one of the
waiting processes must be allowed to enter.
3. Bounded Waiting: There must be a limit on the number of times other processes can enter
their critical sections after a process has made a request to enter its critical section.

Critical Section:
A critical section is a portion of the program code where a process accesses and manipulates shared
resources. It's a region of code that needs to be protected to avoid conflicts.

 Solution Approaches:
1. Software-Based Solutions:
a) Peterson's Algorithm: A simple software-based solution for two processes to achieve mutual
exclusion (a minimal sketch follows this section).
b) Dekker's Algorithm: Another software-based algorithm for two processes.
c) Test-and-Set, Swap, and Compare-and-Swap: Atomic instructions used as building blocks
when designing synchronization solutions.
d) Semaphores: A synchronization mechanism to protect critical sections. They can be
binary or counting.

2. Hardware-Based Solutions:
Some modern processors offer atomic hardware instructions for synchronization, making it easier
to implement mutual exclusion.
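
To make this concrete, here is a minimal sketch of Peterson's algorithm for two threads, written in Python with illustrative names. It is a teaching sketch only: the busy-waiting version relies on CPython executing simple loads and stores in program order, and production code would simply use a lock.

import threading

# Shared state for Peterson's algorithm (two threads, numbered 0 and 1).
flag = [False, False]   # flag[i]: thread i wants to enter its critical section
turn = 0                # which thread must yield if both want to enter
counter = 0             # shared resource the algorithm protects

def worker(i):
    global turn, counter
    other = 1 - i
    for _ in range(50_000):
        # Entry section
        flag[i] = True
        turn = other                          # politely let the other go first
        while flag[other] and turn == other:  # busy-wait
            pass
        # Critical section
        counter += 1
        # Exit section
        flag[i] = False

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start(); t0.join(); t1.join()
print(counter)  # expected: 100000

The entry section satisfies all three objectives: a thread waits only while the other thread has raised its flag and it is the other thread's turn, so no thread can be bypassed more than once.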

 Examples of Critical Section Problem:


 A printer shared by multiple processes.
 A database accessed by multiple transactions.
 A shared data structure in a multi-threaded program.

The critical section problem is a fundamental challenge in concurrent programming and operating
systems. Synchronizing access to shared resources is crucial to ensure data consistency and prevent
race conditions. Various software and hardware-based solutions, along with operating system
support, can be used to address this problem while considering associated challenges like deadlock
and starvation. Understanding and effectively managing the critical section problem is essential
for developing reliable and efficient concurrent software systems.

Semaphores in Operating Systems


Definition: Semaphores are a synchronization mechanism used in operating systems to manage
access to shared resources and coordinate the activities of multiple processes or threads. They were
introduced by Edsger Dijkstra and are essential for preventing race conditions and achieving
mutual exclusion.
Types of Semaphores:
1. Binary Semaphore:
A binary semaphore has only two values: 0 and 1.
It is typically used for mutual exclusion and can be thought of as a mutex or a lock.

2. Counting Semaphore:
A counting semaphore can have a non-negative integer value.
It can be used to control access to a resource with multiple instances, like a pool of database
connections.
Semaphore Operations:
Two fundamental operations associated with semaphores:

1. Wait (P) Operation:


Decrements the semaphore value. If the result is negative, the calling process is blocked until
another process performs a Signal (V) operation.

2. Signal (V) Operation:


Increments the semaphore value. If any processes are blocked on a "Wait" operation, one of
them is unblocked.
Semaphore Properties:
1. Mutual Exclusion: Semaphores can be used to achieve mutual exclusion by ensuring that
only one process can enter a critical section at a time.
2. Synchronization: Semaphores enable processes to synchronize their actions by
controlling the order of execution.
3. Counting and Resource Management: Counting semaphores can be used to manage
resources with limited availability, such as printer spooling, database connections, or
thread pools (see the sketch below).
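
For instance, a counting semaphore can cap how many threads use a pool of resources at once. A minimal Python sketch, assuming an illustrative pool of three connection slots:

import threading, time

pool = threading.Semaphore(3)      # at most 3 "connections" in use at a time

def run_query(i):
    with pool:                     # P: take a connection slot (blocks if none free)
        print(f"worker {i} holds a connection")
        time.sleep(0.1)            # simulate the query
    # V: the slot is released automatically when the with-block exits

workers = [threading.Thread(target=run_query, args=(i,)) for i in range(10)]
for t in workers: t.start()
for t in workers: t.join()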

Semaphore Example:
Suppose you have a binary semaphore protecting a critical section.
In pseudocode:
Semaphore mutex = 1;

Process 1:
P(mutex); // Entry section
// Critical section
V(mutex); // Exit section

Process 2:
P(mutex); // Entry section
// Critical section
V(mutex); // Exit section

In this example, only one process can be in the critical section at a time, ensuring mutual exclusion.
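
The same pattern can be tried directly in Python, whose threading.Semaphore uses acquire in place of P and release in place of V (the process names are illustrative):

import threading

mutex = threading.Semaphore(1)     # binary semaphore, initially 1
shared = []                        # resource needing mutual exclusion

def process(name):
    mutex.acquire()                # P(mutex): entry section
    shared.append(name)            # critical section
    mutex.release()                # V(mutex): exit section

threads = [threading.Thread(target=process, args=(f"P{i}",)) for i in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(shared)                      # both entries present, no corruption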

Semaphore Implementation:
Semaphores can be implemented using atomic hardware instructions, as well as through software
constructs, depending on the operating system and hardware support.
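
As a software-level illustration only (not how any particular OS implements it), a counting semaphore can be sketched in Python on top of a lock and a condition variable; the class and method names here are ours:

import threading

class CountingSemaphore:
    """Illustrative semaphore: wait (P) blocks while the count is zero."""
    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()

    def wait(self):                    # P operation
        with self._cond:
            while self._value == 0:
                self._cond.wait()      # block until a signal arrives
            self._value -= 1

    def signal(self):                  # V operation
        with self._cond:
            self._value += 1
            self._cond.notify()        # wake one blocked waiter, if any

This blocking formulation never lets the count go negative; it is equivalent in effect to the negative-count formulation of the Wait operation described above.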
Benefits of Semaphores:
1. General Purpose: Semaphores are versatile and can be used to solve a wide range of
synchronization problems.
2. Efficiency: They are efficient in terms of CPU and memory usage when properly
implemented.
3. Concurrency Control: Semaphores help manage concurrent access to shared resources,
preventing data corruption.

Challenges and Considerations:


1. Deadlocks: Poorly managed semaphores can lead to deadlocks, where processes wait
indefinitely for resources.
2. Starvation: Unfair use of semaphores can result in some processes always getting access
while others wait.

Semaphores are a crucial synchronization mechanism in operating systems and concurrent
programming. They provide a systematic way to control access to shared resources, prevent race
conditions, and enable cooperation between processes. Understanding their use and appropriate
implementation is fundamental for designing reliable and efficient multi-process or multi-threaded
applications.

Classical Problems of Synchronization in Operating Systems


Introduction: Synchronization is a critical aspect of operating systems that ensures orderly
execution of processes or threads sharing common resources. Classical synchronization problems
help demonstrate the challenges of coordination and provide insights into solving complex
concurrency issues.

Key Classical Synchronization Problems:


1. Bounded-buffer (or Producer-Consumer) Problem,
2. Dining-Philosophers Problem,
3. Readers and Writers Problem,
4. Sleeping Barber Problem
1. Bounded-Buffer (or Producer-Consumer) Problem
The Bounded-Buffer problem is also known as the Producer-Consumer problem. Producers add
items to a shared, fixed-size buffer and consumers remove them, each operating on one buffer
slot at a time. The standard solution creates two counting semaphores, "full" and "empty", to
keep track of the current number of full and empty buffer slots respectively.
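
A minimal Python sketch of this solution, with an illustrative buffer capacity of 5 and a mutex added to protect the buffer itself:

import threading, collections

N = 5                                  # buffer capacity (illustrative)
buffer = collections.deque()
mutex = threading.Semaphore(1)         # protects the buffer
empty = threading.Semaphore(N)         # counts empty slots
full = threading.Semaphore(0)          # counts full slots

def producer():
    for item in range(10):
        empty.acquire()                # wait for a free slot
        with mutex:
            buffer.append(item)        # produce into the buffer
        full.release()                 # one more filled slot

def consumer():
    for _ in range(10):
        full.acquire()                 # wait for an available item
        with mutex:
            item = buffer.popleft()    # consume from the buffer
        empty.release()                # one more free slot
        print("consumed", item)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()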

2. Dining-Philosophers Problem
The Dining-Philosophers problem has K philosophers seated around a circular table with one
chopstick between each pair of philosophers. A philosopher may eat only after picking up the
two chopsticks adjacent to him, and each chopstick may be held by only one of its two
neighboring philosophers at a time. This problem models the allocation of limited resources
to a group of processes in a deadlock-free and starvation-free manner.
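
One deadlock-free approach orders the resources: every philosopher picks up the lower-numbered chopstick first, which breaks the circular-wait condition. A minimal Python sketch (K and the meal count are illustrative):

import threading

K = 5
chopsticks = [threading.Lock() for _ in range(K)]

def philosopher(i):
    left, right = i, (i + 1) % K
    # Always acquire the lower-numbered chopstick first; with a global
    # acquisition order, a cycle of waiting philosophers cannot form.
    first, second = (left, right) if left < right else (right, left)
    for _ in range(3):                 # eat three times, then leave
        with chopsticks[first]:
            with chopsticks[second]:
                print(f"Philosopher {i} is eating")

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(K)]
for t in threads: t.start()
for t in threads: t.join()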

3. Readers and Writers Problem


Suppose that a database is to be shared among several concurrent processes. Some of these
processes may want only to read the database, whereas others may want to update (that is, to read
and write) it. We distinguish between these two types of processes by referring to the former as
readers and to the latter as writers. In OS terminology, this situation is called the readers-writers
problem.
Problem parameters:
 One set of data is shared among a number of processes.
 Once a writer is ready, it performs its write. Only one writer may write at a time.
 While a process is writing, no other process may read the data.
 While at least one reader is reading, no other process may write.
 Readers only read; they never write.
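
A sketch of the classical reader-preference solution in Python (the variable names are illustrative): the first reader locks writers out, and the last reader lets them back in.

import threading

rc_mutex = threading.Semaphore(1)   # protects read_count
rw_lock = threading.Semaphore(1)    # held by a writer, or by the reader group
read_count = 0                      # number of readers currently reading

def reader(i):
    global read_count
    rc_mutex.acquire()
    read_count += 1
    if read_count == 1:             # first reader locks writers out
        rw_lock.acquire()
    rc_mutex.release()

    print(f"Reader {i} reading")    # reading section (shared access)

    rc_mutex.acquire()
    read_count -= 1
    if read_count == 0:             # last reader lets writers back in
        rw_lock.release()
    rc_mutex.release()

def writer(i):
    rw_lock.acquire()               # exclusive access
    print(f"Writer {i} writing")
    rw_lock.release()

threads = [threading.Thread(target=reader, args=(i,)) for i in range(3)]
threads += [threading.Thread(target=writer, args=(i,)) for i in range(2)]
for t in threads: t.start()
for t in threads: t.join()

Note that this reader-preference variant can starve writers if readers keep arriving, which is why writer-preference and fair variants also exist.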

4. Sleeping Barber Problem


Consider a barber shop with one barber, one barber chair, and N chairs for waiting customers.
When there are no customers, the barber goes to sleep in the barber chair and must be woken
when a customer comes in. While the barber is cutting hair, newly arriving customers take an
empty waiting chair, or leave if all the chairs are occupied. This is the Sleeping Barber problem.
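
A condensed Python sketch (the numbers of chairs and customers are illustrative): the barber sleeps on the "customers" semaphore, and a mutex guards the count of free waiting chairs.

import threading, time

N = 3                                   # waiting chairs
customers = threading.Semaphore(0)      # waiting customers; barber sleeps on this
barber_ready = threading.Semaphore(0)   # barber signals he can take a customer
mutex = threading.Semaphore(1)          # protects free_seats
free_seats = N

def barber():
    while True:                         # the barber works forever
        customers.acquire()             # sleep until a customer arrives
        barber_ready.release()          # call the next waiting customer
        print("Barber: cutting hair")

def customer(i):
    global free_seats
    with mutex:
        if free_seats == 0:
            print(f"Customer {i}: no vacancy, leaving")
            return
        free_seats -= 1                 # take a waiting chair
    customers.release()                 # wake the barber if he is asleep
    barber_ready.acquire()              # wait to be called to the barber chair
    with mutex:
        free_seats += 1                 # the waiting chair becomes free
    print(f"Customer {i}: getting a haircut")

threading.Thread(target=barber, daemon=True).start()
arrivals = [threading.Thread(target=customer, args=(i,)) for i in range(6)]
for t in arrivals: t.start()
for t in arrivals: t.join()
time.sleep(0.5)                         # let the barber finish before the demo exits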

Common Synchronization Tools:


a) Semaphores: Binary and counting semaphores are used to control access to resources and
signal events between processes.
b) Mutexes: Mutexes (short for "mutual exclusion") are used to protect critical sections,
ensuring that only one process can execute them at a time.
c) Condition Variables: Condition variables allow processes to wait for a specific condition
to be met before proceeding.
d) Atomic Operations: Modern processors provide atomic operations to ensure that specific
operations are executed in a single, uninterruptible step.
Challenges in Synchronization:
 Deadlocks: Poor synchronization can lead to deadlocks where processes are stuck waiting
for each other indefinitely.
 Starvation: Unfair resource allocation may result in some processes never getting access
to resources.
 Race Conditions: Failure to synchronize correctly can result in race conditions, leading to
data corruption.

Characteristics of Deadlocks in Operating Systems


Definition: Deadlock is a situation in a computer system where two or more processes are unable
to proceed because each is waiting for the other to release resources. It is a critical issue in
operating systems, as it can lead to a system becoming unresponsive and unproductive.

A deadlock happens in an operating system when two or more processes each need, in order to
complete their execution, a resource that is held by another of those processes.

A deadlock occurs if the four Coffman conditions hold true.

Characteristics of Deadlocks:
1. Mutual Exclusion
There should be a resource that can only be held by one process at a time.

For example, a single instance of Resource 1 can be held by only one process, say Process 1, at a time.

2. Hold and Wait


A process can hold multiple resources and still request more resources from other processes which
are holding them.
For example, Process 2 may hold Resource 2 and Resource 3 while requesting Resource 1, which
is held by Process 1.

3. No Preemption
A resource cannot be preempted from a process by force. A process can only release a resource
voluntarily.

For example, Process 2 cannot forcibly take Resource 1 from Process 1; it will only be released
when Process 1 relinquishes it voluntarily after its execution is complete.

4. Circular Wait
A process is waiting for the resource held by the second process, which is waiting for the resource
held by the third process and so on, till the last process is waiting for a resource held by the first
process. This forms a circular chain.

For example: Process 1 is allocated Resource 2 and is requesting Resource 1, while Process 2 is
allocated Resource 1 and is requesting Resource 2. This forms a circular wait loop.
Methods of Handling Deadlock
1. Deadlock Ignorance
Deadlock ignorance is the most widely used of all these approaches, and it is adopted by many
operating systems aimed at end users. Under this approach, the operating system assumes that
deadlock never occurs and simply ignores it. It is best suited to single-user systems where the
machine is used for browsing and other routine tasks.

There is always a trade-off between correctness and performance, and operating systems like
Windows and Linux mainly focus on performance. Running a deadlock-handling mechanism all
the time degrades the system; if a deadlock happens, say, once in 100 runs, paying that cost
continuously is unnecessary.

In these systems, the user simply restarts the computer when a deadlock occurs. Windows and
Linux mainly use this approach.

2. Deadlock Prevention
Deadlock happens only when mutual exclusion, hold and wait, no preemption, and circular wait
all hold simultaneously. If one of these four conditions can be violated at all times, then deadlock
can never occur in the system.
The idea behind this approach is simple: defeat one of the four conditions. The hard part is the
physical implementation of that idea in a real system.

3. Deadlock Avoidance
In deadlock avoidance, the operating system checks whether the system is in a safe state or an
unsafe state at every step it performs. Execution continues as long as the system stays in a safe
state; the moment an allocation would move it to an unsafe state, the OS backtracks that step.
In simple words, the OS reviews each allocation so that the allocation never causes a deadlock
in the system.

4. Deadlock Detection and Recovery


This approach lets processes fall into deadlock and periodically checks whether a deadlock has
occurred in the system. If one has, the OS applies recovery methods to the system to get rid of
the deadlock.

Deadlock Prevention in Operating Systems


Introduction: Deadlock prevention is a proactive strategy to avoid the occurrence of deadlocks in
computer systems by eliminating one or more of the necessary conditions for a deadlock to occur.
It aims to ensure that processes can continue to execute without getting into a deadlock state.

Necessary Conditions for Deadlock: Before we delve into deadlock prevention, it's essential to
understand the four necessary conditions for a deadlock:
 Mutual Exclusion: Processes must request exclusive access to resources, meaning only
one process can use a resource at a time.
 Hold and Wait: Processes must hold some resources while requesting additional ones,
leading to potential resource contention.
 No Preemption: Resources cannot be forcibly taken away from a process. A process must
release its resources voluntarily.
 Circular Wait: There must be a circular chain of processes, with each waiting for a
resource held by the next one in the chain.
Deadlock Prevention Strategies:
 Mutual Exclusion Prevention: This condition cannot be eliminated as some resources
require exclusive access. However, it can be minimized by making resources shareable
whenever possible.
 Hold and Wait Prevention: Processes are not allowed to hold resources while requesting
additional ones. A process must request all the resources it needs at once. This approach
reduces the likelihood of processes being stuck in a hold-and-wait state.
 No Preemption Prevention: In situations where deadlock can occur, resources can be
preempted from one process and allocated to another if needed. This approach is more
suitable for certain types of resources and can lead to complexities.
 Circular Wait Prevention: Assign a unique numerical priority to each resource or
process, and ensure that processes request resources in increasing numerical order. This
eliminates circular wait, because processes can only request resources with higher
numerical priorities (a lock-ordering sketch follows this list).
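
A minimal sketch of this ordering discipline in Python, with locks standing in for resources (the names and resource count are illustrative):

import threading

# Resources with a fixed global ordering: their index in this list.
resources = [threading.Lock() for _ in range(3)]

def use_resources(needed):
    """Acquire the needed locks strictly in increasing index order."""
    order = sorted(needed)             # increasing order breaks circular wait
    acquired = []
    try:
        for idx in order:
            resources[idx].acquire()
            acquired.append(idx)
        print("working with resources", order)
    finally:
        for idx in reversed(acquired):
            resources[idx].release()

# These two threads would risk deadlock if they locked in opposite orders:
t1 = threading.Thread(target=use_resources, args=([0, 2],))
t2 = threading.Thread(target=use_resources, args=([2, 0],))
t1.start(); t2.start(); t1.join(); t2.join()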

Banker's Algorithm:
 The Banker's algorithm is a well-known technique based on the concept of safe and
unsafe states; strictly speaking, it performs deadlock avoidance rather than prevention.
 It is used to allocate resources to processes in a way that ensures the system remains in a
safe state.
 Processes must specify their maximum resource requirements at the start, and the system
only allocates resources when it is safe to do so.

Resource Allocation Graph (RAG):


 A resource allocation graph can be used to monitor resource allocation and detect potential
deadlocks.
 It represents processes as nodes and resources as rectangles, with arrows indicating
resource allocation and resource request edges.
 A cycle in the graph indicates a potential deadlock.

Advantages of Deadlock Prevention:


1. Proactive Approach: Deadlock prevention aims to avoid deadlocks from occurring in the
first place, eliminating the need for deadlock recovery strategies.
2. Resource Efficiency: Preventing deadlocks reduces resource wastage and improves
system efficiency.

Challenges and Considerations:


1. Complexity: Deadlock prevention strategies can be complex to implement, especially in
systems with a large number of processes and resources.
2. Resource Preemption: Preempting resources from processes can be problematic in some
situations, as it may result in data corruption or undesirable system behavior.

Deadlock prevention is a fundamental strategy in operating systems to ensure that processes can
execute without getting stuck in deadlocks. By addressing the necessary conditions for deadlocks,
such as hold-and-wait and circular wait, and employing techniques like the Banker's algorithm and
resource allocation graphs, systems can be designed to operate more efficiently and reliably.
Proper deadlock prevention strategies are essential for the smooth functioning of modern computer
systems.

Deadlock Avoidance in Operating Systems


Introduction: Deadlock avoidance is a strategy used in operating systems to prevent the
occurrence of deadlocks by carefully managing resource allocation in a way that ensures the
system remains in a safe state. It focuses on controlling the resource requests of processes to avoid
situations that might lead to deadlock.

Safe State:
In the context of deadlock avoidance, a system is in a safe state if there exists a sequence of
resource allocations and process completions such that every process can eventually complete its
execution, without causing a deadlock.

Resource Allocation Graph (RAG):


 The Resource Allocation Graph is a graphical representation used to monitor resource
allocation and detect potential deadlocks.
 In the RAG, processes are represented as nodes, and resources are represented as
rectangles. Arrows indicate resource allocation and resource request edges.
 A cycle in the graph indicates a potential deadlock.
Banker's Algorithm:
 The Banker's algorithm is a widely used approach for deadlock avoidance.
 It allows processes to request resources but ensures that the system remains in a safe state.
 To use the Banker's algorithm, processes must declare their maximum resource
requirements at the start.

Banker's Algorithm Steps:


1. Initialization:
 Determine the number of resources available in each class.
 Create a matrix representing the maximum claim for each process and the allocation
of resources.

2. Request Validation:
 When a process requests resources, check if it can be granted without causing an
unsafe state.
 If the request doesn't lead to an unsafe state, allocate the resources and continue.
 If the request would lead to an unsafe state, the process must wait.

3. Release Resources:
 When a process finishes, it releases the allocated resources.

4. Safety Check:
 After each resource allocation or resource release, check if the system remains in a
safe state.
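
Putting the steps above together, here is a compact sketch of the safety check in Python; the matrices are illustrative textbook-style values:

def is_safe(available, max_claim, allocation):
    """Banker's safety check: True if some order lets every process finish."""
    n, m = len(max_claim), len(available)          # processes, resource classes
    need = [[max_claim[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)
    finished = [False] * n
    while True:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):                 # process i finishes and
                    work[j] += allocation[i][j]    # returns its resources
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)                   # safe iff everyone can finish

# Illustrative state: 5 processes, 3 resource classes.
available = [3, 3, 2]
max_claim = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_claim, allocation))   # True: a safe order exists

Request validation (step 2) then amounts to tentatively adding the request to the process's allocation, rerunning this check, and rolling the allocation back if the resulting state is unsafe.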

Characteristics of Deadlock Avoidance:


1. Dynamic Allocation:
 Resources are allocated to processes incrementally based on resource requests.
 A resource request is granted only if it doesn't lead to an unsafe state.

2. Safety and Liveness:


 Safety is the property that the system is in a safe state.
 Liveness ensures that every process eventually obtains the resources it needs.

3. Request-Release Model:
 A process requests resources when it starts and releases them when it is done.
 The system continually checks if granting a request would lead to an unsafe state.

4. Resource Constraints:
 The system must know the maximum number of resources available in each
resource class.
Advantages of Deadlock Avoidance:
1. Proactive Approach: Deadlock avoidance prevents deadlocks before they occur, reducing
the need for deadlock recovery mechanisms.
2. Resource Efficiency: Resource allocation is controlled, reducing the chance of resource
wastage.

Challenges and Considerations:


1. Complexity: Deadlock avoidance strategies can be complex, especially in systems with
many processes and resources.
2. Process Cooperation: Processes must cooperate by declaring their maximum resource
requirements in advance.

Deadlock avoidance is a proactive strategy in operating systems that ensures processes can execute
without getting stuck in deadlocks. By controlling resource allocation using techniques like the
Banker's algorithm and continuously checking for safe states, systems can operate more efficiently
and reliably. Proper implementation of deadlock avoidance is crucial for the smooth functioning
of modern computer systems.

Deadlock Detection and Recovery


In this approach, the OS does not apply any mechanism to avoid or prevent deadlocks; it accepts
that a deadlock may well occur. To get rid of deadlocks, the OS periodically checks the system
for any deadlock. If it finds one, the OS recovers the system using recovery techniques.
The main task of the OS here is detecting deadlocks, which it can do with the help of the
resource allocation graph.
With single-instance resource types, a cycle in the graph always means a deadlock. With
multiple-instance resource types, detecting a cycle is not enough: a safety-style algorithm must
be applied by converting the resource allocation graph into an allocation matrix and a request
matrix, as sketched below.
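
The detection algorithm is essentially the Banker's safety check run with the Request matrix in place of the Need matrix. A minimal Python sketch, using an illustrative two-process, two-resource state in which the processes wait on each other:

def find_deadlocked(available, allocation, request):
    """Return the set of process indices that are deadlocked."""
    n, m = len(allocation), len(available)
    work = list(available)
    # A process holding nothing cannot be blocking anyone.
    finished = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finished[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):                   # assume process i finishes
                    work[j] += allocation[i][j]      # and releases what it holds
                finished[i] = True
                progressed = True
    return {i for i in range(n) if not finished[i]}  # unfinished => deadlocked

# Illustrative state: each process holds what the other is requesting.
available = [0, 0]
allocation = [[1, 0], [0, 1]]
request = [[0, 1], [1, 0]]
print(find_deadlocked(available, allocation, request))  # {0, 1}: both deadlocked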
To recover the system from deadlocks, the OS acts either on resources or on processes.

For Resource
1. Preempt the resource
We can take one of the resources away from its owning process and give it to another process,
expecting the latter to complete its execution and release the resource sooner. Choosing which
resource to preempt, however, is difficult.

2. Rollback to a safe state


The system passes through various states before reaching the deadlock state. The operating
system can roll the system back to the previous safe state; for this purpose, the OS needs to
implement checkpointing at every state. The moment a deadlock is detected, all the allocations
are rolled back to reach the previous safe state.

For Process
1. Kill a process
Killing a process can solve the problem, but the bigger concern is deciding which process to kill.
Generally, the operating system kills the process that has done the least amount of work so far.

2. Kill all processes


This is not an advisable approach, but it can be used if the problem becomes very serious.
Killing all processes leads to inefficiency in the system because every process must then execute
again from the beginning.
Process Scheduling in OS (Operating System)
The operating system uses the various schedulers described below for process scheduling.
1. Long term scheduler
The long term scheduler is also known as the job scheduler. It chooses processes from the pool
(secondary memory) and places them in the ready queue maintained in primary memory.

The long term scheduler mainly controls the degree of multiprogramming. Its purpose is to
choose a good mix of I/O-bound and CPU-bound processes from the jobs present in the pool.

If the job scheduler chooses mostly I/O-bound processes, then all of the jobs may sit in the
blocked state most of the time and the CPU will remain idle, which reduces the degree of
multiprogramming. The job of the long term scheduler is therefore critical and may affect the
system for a very long time.

2. Short term scheduler


The short term scheduler is also known as the CPU scheduler. It selects one of the jobs from the
ready queue and dispatches it to the CPU for execution.
A scheduling algorithm is used to select which job will be dispatched. The job of the short term
scheduler can be critical in the sense that if it selects a job whose CPU burst time is very high,
then all the jobs after it will have to wait in the ready queue for a very long time.
This problem is called starvation, and it can arise if the short term scheduler makes poor choices
when selecting jobs.

3. Medium term scheduler


The medium term scheduler takes care of swapped-out processes. If a running process needs
some I/O time to complete, its state must change from running to waiting.

The medium term scheduler handles this: it removes the process from the running state to make
room for other processes. Such processes are the swapped-out processes, and this procedure is
called swapping. The medium term scheduler is responsible for suspending and resuming
processes.

It reduces the degree of multiprogramming. Swapping is necessary to maintain a good mix of
processes in the ready queue.

Some Other Schedulers


1. I/O schedulers: I/O schedulers are in charge of managing the execution of I/O operations
such as reading from and writing to disks or networks. They can use various algorithms to
determine the order in which I/O operations are executed, such as FCFS (First-Come, First-
Served) or RR (Round Robin).
2. Real-time schedulers: In real-time systems, real-time schedulers ensure that critical tasks
are completed within a specified time frame. They can prioritize and schedule tasks using
various algorithms such as EDF (Earliest Deadline First) or RM (Rate Monotonic).
Comparison among Schedulers

1. Role: The long term scheduler is a job scheduler; the short term scheduler is a CPU
scheduler; the medium term scheduler is a process-swapping scheduler.
2. Speed: The long term scheduler is generally slower than the short term scheduler; the short
term scheduler is the fastest of the three; the medium term scheduler's speed lies between
the other two.
3. Multiprogramming: The long term scheduler controls the degree of multiprogramming;
the short term scheduler gives less control over how much multiprogramming is done; the
medium term scheduler reduces the degree of multiprogramming.
4. Time-sharing systems: The long term scheduler is barely present or nonexistent in
time-sharing systems; the short term scheduler is minimal in time-sharing systems; the
medium term scheduler is a component of time-sharing systems.
5. Function: The long term scheduler admits processes from the pool into memory for
execution; the short term scheduler selects those processes which are ready to execute; the
medium term scheduler can re-introduce a process into memory so that its execution can be
continued.

Hritika Vaishnav
Assistant Professor
MITS Jadan
