Unit-3 OS
Key Objectives:
1. Mutual Exclusion: Only one process should be allowed to enter its critical section at a
time.
2. Progress: If no process is in its critical section and some processes are waiting, one of the
waiting processes must be allowed to enter.
3. Bounded Waiting: There must be a limit on the number of times other processes can enter
their critical sections after a process has made a request to enter its critical section.
Critical Section:
A critical section is a portion of the program code where a process accesses and manipulates shared
resources. It's a region of code that needs to be protected to avoid conflicts.
Solution Approaches:
1. Software-Based Solutions:
a) Peterson's Algorithm: A simple software-based solution for two processes to achieve mutual exclusion.
b) Dekker's Algorithm: Another software-based algorithm for two processes.
c) Test-and-Set, Swap, and Compare-and-Swap: Atomic instructions (provided by the hardware) that are used as building blocks when designing synchronization primitives.
d) Semaphores: A synchronization mechanism to protect critical sections. They can be binary or counting.
2. Hardware-Based Solutions:
Some modern processors offer atomic hardware instructions for synchronization, making it easier to implement mutual exclusion. A sketch of a Test-and-Set spin lock appears after this list.
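As a rough illustration of how an atomic Test-and-Set operation can enforce mutual exclusion, the following is a minimal spin-lock sketch using C11 atomics; the names lock_acquire and lock_release are chosen here for illustration and are not from any particular OS or library.

/* Minimal sketch of a Test-and-Set spin lock using C11 atomics. */
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = unlocked */

void lock_acquire(void) {
    /* atomic_flag_test_and_set atomically sets the flag and returns its
     * previous value; spin while another thread already holds the lock. */
    while (atomic_flag_test_and_set(&lock)) {
        /* busy-wait until the lock becomes free */
    }
}

void lock_release(void) {
    atomic_flag_clear(&lock);   /* reset the flag so another thread may enter */
}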
The critical section problem is a fundamental challenge in concurrent programming and operating
systems. Synchronizing access to shared resources is crucial to ensure data consistency and prevent
race conditions. Various software and hardware-based solutions, along with operating system
support, can be used to address this problem while considering associated challenges like deadlock
and starvation. Understanding and effectively managing the critical section problem is essential
for developing reliable and efficient concurrent software systems.
2. Counting Semaphore:
A counting semaphore can have a non-negative integer value.
It can be used to control access to a resource with multiple instances, like a pool of database
connections.
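As a rough sketch (assuming POSIX semaphores and pthreads; the "connection pool" is only simulated by a sleep), a counting semaphore initialized to 3 lets at most three threads use the pool at a time:

/* Sketch: a counting semaphore guarding a pool of 3 simulated connections. */
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define POOL_SIZE 3
static sem_t pool;                 /* counting semaphore, initialized to 3 */

static void *worker(void *arg) {
    sem_wait(&pool);               /* P(): blocks when all 3 slots are taken */
    printf("thread %ld acquired a connection\n", (long)arg);
    sleep(1);                      /* pretend to use the connection */
    printf("thread %ld released a connection\n", (long)arg);
    sem_post(&pool);               /* V(): free the slot for another thread */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&pool, 0, POOL_SIZE); /* value = number of available instances */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&pool);
    return 0;
}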
Semaphore Operations:
Two fundamental operations are associated with semaphores:
1. wait (P): Decrements the semaphore value. If no unit of the resource is available, the calling process blocks until the semaphore becomes available.
2. signal (V): Increments the semaphore value and, if any process is blocked on the semaphore, wakes one of them.
Semaphore Example:
Suppose you have a binary semaphore protecting a critical section.
In pseudocode:
Semaphore mutex = 1;
Process 1:
P(mutex); // Entry section
// Critical section
V(mutex); // Exit section
Process 2:
P(mutex); // Entry section
// Critical section
V(mutex); // Exit section
In this example, only one process can be in the critical section at a time, ensuring mutual exclusion.
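The same idea can be written as a compilable sketch using POSIX semaphores, with two threads standing in for Process 1 and Process 2 (one possible implementation, not the only one):

/* Sketch of the binary-semaphore example above using POSIX semaphores. */
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t mutex;                /* binary semaphore, initial value 1 */
static int shared_counter = 0;     /* the shared resource */

static void *process(void *arg) {
    sem_wait(&mutex);              /* P(mutex): entry section */
    shared_counter++;              /* critical section */
    printf("%s: counter = %d\n", (const char *)arg, shared_counter);
    sem_post(&mutex);              /* V(mutex): exit section */
    return NULL;
}

int main(void) {
    pthread_t p1, p2;
    sem_init(&mutex, 0, 1);        /* Semaphore mutex = 1 */
    pthread_create(&p1, NULL, process, "Process 1");
    pthread_create(&p2, NULL, process, "Process 2");
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    sem_destroy(&mutex);
    return 0;
}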
Semaphore Implementation:
Semaphores can be implemented using atomic hardware instructions, as well as through software
constructs, depending on the operating system and hardware support.
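As one possible software construction, here is a minimal sketch of a counting semaphore built from a pthread mutex and condition variable; a real kernel implementation would instead use atomic instructions and sleep queues.

/* Minimal sketch of a counting semaphore built from a mutex and a
 * condition variable; it only illustrates the idea behind P()/V(). */
#include <pthread.h>

typedef struct {
    int value;
    pthread_mutex_t lock;
    pthread_cond_t  cond;
} semaphore_t;

void sem_init_simple(semaphore_t *s, int value) {
    s->value = value;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->cond, NULL);
}

void P(semaphore_t *s) {                 /* wait */
    pthread_mutex_lock(&s->lock);
    while (s->value == 0)                /* block while no units are available */
        pthread_cond_wait(&s->cond, &s->lock);
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void V(semaphore_t *s) {                 /* signal */
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->cond);       /* wake one waiting process */
    pthread_mutex_unlock(&s->lock);
}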
Benefits of Semaphores:
1. General Purpose: Semaphores are versatile and can be used to solve a wide range of
synchronization problems.
2. Efficiency: They are efficient in terms of CPU and memory usage when properly
implemented.
3. Concurrency Control: Semaphores help manage concurrent access to shared resources,
preventing data corruption.
2. Dining-Philosophers Problem
The Dining-Philosophers Problem states that K philosophers are seated around a circular table with one chopstick placed between each pair of adjacent philosophers. A philosopher may eat only if he can pick up the two chopsticks adjacent to him. A chopstick may be picked up by either of its adjacent philosophers, but not by both at the same time. This problem involves the allocation of limited resources to a group of processes in a deadlock-free and starvation-free manner.
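One classic way to make the problem deadlock-free is to impose a global order on the chopsticks so that a circular wait can never form. The sketch below (using pthread mutexes as chopsticks, with K = 5 chosen only for illustration) has each philosopher pick up the lower-numbered chopstick first:

/* Sketch of a deadlock-free Dining-Philosophers solution: every
 * philosopher picks up the lower-numbered chopstick first, which
 * breaks the circular-wait condition. */
#include <pthread.h>
#include <stdio.h>

#define K 5
static pthread_mutex_t chopstick[K];

static void *philosopher(void *arg) {
    long i = (long)arg;
    int first  = (i < (i + 1) % K) ? i : (i + 1) % K;  /* lower index  */
    int second = (i < (i + 1) % K) ? (i + 1) % K : i;  /* higher index */

    pthread_mutex_lock(&chopstick[first]);   /* pick up lower-numbered chopstick */
    pthread_mutex_lock(&chopstick[second]);  /* then the higher-numbered one     */
    printf("philosopher %ld is eating\n", i);
    pthread_mutex_unlock(&chopstick[second]);
    pthread_mutex_unlock(&chopstick[first]);
    return NULL;
}

int main(void) {
    pthread_t t[K];
    for (int i = 0; i < K; i++)
        pthread_mutex_init(&chopstick[i], NULL);
    for (long i = 0; i < K; i++)
        pthread_create(&t[i], NULL, philosopher, (void *)i);
    for (int i = 0; i < K; i++)
        pthread_join(t[i], NULL);
    return 0;
}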
Deadlock:
A deadlock happens in an operating system when two or more processes each need a resource to complete their execution that is held by another of the processes.
Characteristics of Deadlocks:
1. Mutual Exclusion
There should be a resource that can only be held by one process at a time.
For example, a single instance of Resource 1 can be held by Process 1 only.
2. Hold and Wait
A process holds at least one resource while waiting to acquire additional resources that are held by other processes.
For example, a process may hold Resource 1 while waiting for Resource 2, which is held by another process.
3. No Preemption
A resource cannot be preempted from a process by force. A process can only release a resource
voluntarily.
For example, Process 2 cannot preempt Resource 1 from Process 1; Resource 1 will only be released when Process 1 relinquishes it voluntarily after its execution is complete.
4. Circular Wait
A process is waiting for the resource held by the second process, which is waiting for the resource
held by the third process and so on, till the last process is waiting for a resource held by the first
process. This forms a circular chain.
For example: Process 1 is allocated Resource 2 and it is requesting Resource 1. Similarly, Process 2 is allocated Resource 1 and it is requesting Resource 2. This forms a circular wait loop. The sketch below shows how such a circular wait arises in code.
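To make the circular wait concrete, here is a small sketch (using pthread mutexes; the sleep calls only widen the timing window) in which two threads acquire the same two resources in opposite order and therefore deadlock in exactly the way described above:

/* Sketch: two threads acquiring two locks in opposite order, producing
 * the circular wait described above. With the sleeps in place, this
 * program almost always hangs at the joins. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t resource1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t resource2 = PTHREAD_MUTEX_INITIALIZER;

static void *process1(void *arg) {
    pthread_mutex_lock(&resource2);   /* Process 1 is allocated Resource 2 */
    sleep(1);                         /* give the other thread time to run */
    pthread_mutex_lock(&resource1);   /* ... and now requests Resource 1   */
    printf("process 1 got both resources\n");
    pthread_mutex_unlock(&resource1);
    pthread_mutex_unlock(&resource2);
    return NULL;
}

static void *process2(void *arg) {
    pthread_mutex_lock(&resource1);   /* Process 2 is allocated Resource 1 */
    sleep(1);
    pthread_mutex_lock(&resource2);   /* ... and now requests Resource 2   */
    printf("process 2 got both resources\n");
    pthread_mutex_unlock(&resource2);
    pthread_mutex_unlock(&resource1);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, process1, NULL);
    pthread_create(&t2, NULL, process2, NULL);
    pthread_join(t1, NULL);           /* never returns once both threads block */
    pthread_join(t2, NULL);
    return 0;
}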
Method of handling deadlock
1. Deadlock Ignorance
Deadlock Ignorance is the most widely used approach among all the mechanisms. It is used by many operating systems, mainly for end-user systems. In this approach, the operating system assumes that deadlock never occurs; it simply ignores deadlock. This approach is best suited for a single end-user system where the user uses the system only for browsing and other everyday tasks.
There is always a trade-off between correctness and performance, and operating systems like Windows and Linux mainly focus on performance. The performance of the system decreases if a deadlock-handling mechanism is used all the time; if a deadlock happens, say, 1 time out of 100, it is unnecessary to pay that cost on every operation.
In these types of systems, the user simply has to restart the computer in the case of a deadlock. Windows and Linux mainly use this approach.
2. Deadlock prevention
A deadlock can happen only when Mutual Exclusion, Hold and Wait, No Preemption and Circular Wait all hold simultaneously. If it is possible to violate one of these four conditions at all times, then deadlock can never occur in the system.
The idea behind this approach is simple: ensure that at least one of the four conditions can never hold. The difficulty lies in how to implement this in the system in practice.
3. Deadlock avoidance
In deadlock avoidance, the operating system checks whether the system is in a safe state or an unsafe state before every allocation step it performs. Allocation continues only as long as the system remains in a safe state; if an allocation would move the system into an unsafe state, the OS backs that step out and does not grant it.
In simple words, the OS reviews each allocation so that the allocation does not cause a deadlock in the system.
Necessary Conditions for Deadlock: Before we delve into deadlock prevention, it's essential to
understand the four necessary conditions for a deadlock:
Mutual Exclusion: Processes must request exclusive access to resources, meaning only
one process can use a resource at a time.
Hold and Wait: Processes must hold some resources while requesting additional ones,
leading to potential resource contention.
No Preemption: Resources cannot be forcibly taken away from a process. A process must
release its resources voluntarily.
Circular Wait: There must be a circular chain of processes, with each waiting for a
resource held by the next one in the chain.
Deadlock Prevention Strategies:
Mutual Exclusion Prevention: This condition cannot be eliminated as some resources
require exclusive access. However, it can be minimized by making resources shareable
whenever possible.
Hold and Wait Prevention: Processes are not allowed to hold resources while requesting
additional ones. A process must request all the resources it needs at once. This approach
reduces the likelihood of processes being stuck in a hold-and-wait state.
No Preemption Prevention: In situations where deadlock can occur, resources can be
preempted from one process and allocated to another if needed. This approach is more
suitable for certain types of resources and can lead to complexities.
Circular Wait Prevention: Assign a unique number to each resource and require processes to request resources in increasing numerical order. Because a process holding a higher-numbered resource can never wait for a lower-numbered one, no circular chain can form. A lock-ordering sketch appears after this list.
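As a small sketch of circular-wait prevention by resource ordering (the names resource1/resource2 mirror the deadlock example above), every code path now acquires the lower-numbered lock first, so the opposite-order deadlock cannot occur:

/* Sketch of circular-wait prevention by resource ordering: all threads
 * acquire resource1 (the lower-numbered lock) before resource2. */
#include <pthread.h>

static pthread_mutex_t resource1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t resource2 = PTHREAD_MUTEX_INITIALIZER;

static void use_both_resources(void) {
    pthread_mutex_lock(&resource1);   /* always lock the lower number first */
    pthread_mutex_lock(&resource2);   /* then the higher number             */
    /* ... work with both resources ... */
    pthread_mutex_unlock(&resource2); /* release in reverse order */
    pthread_mutex_unlock(&resource1);
}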
Banker's Algorithm:
The Banker's algorithm is a well-known technique for deadlock avoidance, based on the concept of safe and unsafe states.
It is used to allocate resources to processes in a way that ensures the system remains in a
safe state.
Processes must specify their maximum resource requirements at the start, and the system
only allocates resources when it is safe to do so.
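The following is a rough sketch of the Banker's safety check; the process counts and matrix values are made-up example data, not taken from these notes:

/* Sketch of the Banker's safety check for NPROC processes and NRES
 * resource types; the matrices below are illustrative example data. */
#include <stdbool.h>
#include <stdio.h>

#define NPROC 3
#define NRES  2

int available[NRES]         = {2, 3};                      /* free instances   */
int max_need[NPROC][NRES]   = {{3, 2}, {2, 2}, {4, 3}};    /* declared maximums */
int allocation[NPROC][NRES] = {{1, 0}, {1, 1}, {1, 1}};    /* currently held    */

/* Returns true if there is an order in which every process can finish. */
bool is_safe(void) {
    int work[NRES];
    bool finished[NPROC] = {false};
    for (int r = 0; r < NRES; r++) work[r] = available[r];

    for (int done = 0; done < NPROC; ) {
        bool progressed = false;
        for (int p = 0; p < NPROC; p++) {
            if (finished[p]) continue;
            bool can_run = true;
            for (int r = 0; r < NRES; r++)
                if (max_need[p][r] - allocation[p][r] > work[r])
                    can_run = false;              /* remaining need exceeds work */
            if (can_run) {
                for (int r = 0; r < NRES; r++)
                    work[r] += allocation[p][r];  /* process finishes, frees all */
                finished[p] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed) return false;            /* no process can finish: unsafe */
    }
    return true;                                  /* every process can finish */
}

int main(void) {
    printf("system is %s\n", is_safe() ? "in a safe state" : "unsafe");
    return 0;
}

When a process makes a new request, the same check can be run on a tentative state (the request subtracted from available and added to that process's allocation); the request is granted only if the resulting state is still safe.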
Deadlock prevention is a fundamental strategy in operating systems to ensure that processes can execute without getting stuck in deadlocks. By addressing the necessary conditions for deadlocks, such as hold-and-wait and circular wait, and by employing related avoidance techniques like the Banker's algorithm and resource-allocation graphs, systems can be designed to operate more efficiently and reliably.
Proper deadlock prevention strategies are essential for the smooth functioning of modern computer
systems.
Safe State:
In the context of deadlock avoidance, a system is in a safe state if there exists a sequence of
resource allocations and process completions such that every process can eventually complete its
execution, without causing a deadlock.
How deadlock avoidance works:
1. Request Validation:
When a process requests resources, check whether the request can be granted without leading to an unsafe state.
If the request does not lead to an unsafe state, allocate the resources and continue.
If the request would lead to an unsafe state, the process must wait.
2. Release Resources:
When a process finishes, it releases the allocated resources.
3. Safety Check:
After each resource allocation or resource release, check whether the system remains in a safe state.
4. Request-Release Model:
A process requests resources when it starts and releases them when it is done. The system continually checks whether granting a request would lead to an unsafe state.
5. Resource Constraints:
The system must know the maximum number of resources available in each resource class.
Advantages of Deadlock Avoidance:
1. Proactive Approach: Deadlock avoidance prevents deadlocks before they occur, reducing
the need for deadlock recovery mechanisms.
2. Resource Efficiency: Resource allocation is controlled, reducing the chance of resource
wastage.
Deadlock avoidance is a proactive strategy in operating systems that ensures processes can execute
without getting stuck in deadlocks. By controlling resource allocation using techniques like the
Banker's algorithm and continuously checking for safe states, systems can operate more efficiently
and reliably. Proper implementation of deadlock avoidance is crucial for the smooth functioning
of modern computer systems.
4. Deadlock Detection and Recovery
Once a deadlock has been detected, the system can recover by acting either on the resources or on the processes involved.
For Resource
1. Preempt the resource
We can take one of the resources away from its current owner (process) and give it to another process, with the expectation that the other process will complete its execution and release the resource sooner. Choosing which resource to preempt, however, is difficult.
For Process
1. Kill a process
Killing a process can resolve the deadlock, but the bigger concern is deciding which process to kill. Generally, the operating system kills the process that has done the least amount of work so far.
The long-term scheduler mainly controls the degree of multiprogramming. Its purpose is to choose a good mix of I/O-bound and CPU-bound processes from among the jobs present in the pool.
If the job scheduler chooses mostly I/O-bound processes, then all of the jobs may sit in the blocked state most of the time and the CPU will remain idle, which reduces the effective degree of multiprogramming. The job of the long-term scheduler is therefore critical and can affect the system for a long time.
The medium-term scheduler removes processes from memory to make room for other processes; such processes are said to be swapped out, and the procedure is called swapping. The medium-term scheduler is responsible for suspending and resuming processes.
Swapping temporarily reduces the degree of multiprogramming, but it is necessary to maintain a good mix of processes in the ready queue.
Comparison of schedulers:
Long-term scheduler: It is a job scheduler. It is barely present or nonexistent in a time-sharing system.
Short-term scheduler: It is a CPU scheduler. It is minimal in a time-sharing system.
Medium-term scheduler: It is a process-swapping scheduler. It is a component of systems for time sharing.
Hritika Vaishnav
Assistant Professor
MITS Jadan