Ch. 2 Part 2
Process Management and Synchronization
BY DR. VEEJYA KUMBHAR
Syllabus
• Process
• PCB
• Job and processor scheduling
• Problems of concurrent processes:
• Critical sections, mutual exclusion, synchronization
• Deadlock
• Device and File Management
Concurrent Processes
In today's computing environment, with multi-core CPUs and high-speed networking widely
available, concurrent processing has become increasingly important for operating systems to meet
the demands of users.
Concurrent processing refers to the ability of an operating system to execute multiple tasks or
processes simultaneously, allowing efficient utilization of resources and improved performance.
It involves the parallel execution of tasks, with the operating system managing and coordinating the
tasks to ensure that they do not interfere with each other.
Concurrent processing is typically achieved through techniques such as process scheduling, multi-
threading, and parallel processing. It is a critical capability of modern operating systems,
enabling them to provide the performance, scalability, and responsiveness required by today's
computing environment.
Advantages of Concurrent Processes
• Improved performance
• Resource utilization
• Enhanced responsiveness
• Scalability
• Flexibility
Types of Concurrent Processes
Process scheduling − This is the most basic form of concurrent processing: the operating
system executes multiple processes one after another, giving each process a time slice to execute
before it is suspended and replaced by the next process in the queue.
Multi-threading − This involves the use of threads within a process, with each thread executing a
different task concurrently. Threads share the same memory space within a process, allowing them to
communicate and coordinate with each other easily.
Parallel processing − This involves the use of multiple processors or cores within a system to
execute multiple tasks simultaneously. Parallel processing is typically used for computationally intensive
tasks, such as scientific simulations or video rendering.
Distributed processing − This involves the use of multiple computers or nodes connected by a
network to execute a single task. Distributed processing is typically used for large-scale, data-intensive
applications, such as search engines or social networks.
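As a small illustration of the multi-threading case above, the Python sketch below runs two threads inside one process; both write into the same `results` dictionary, showing the shared memory space threads communicate through (the function and variable names are illustrative, not from any real API):

```python
import threading

# Two threads in one process share the same memory: both write into
# the `results` dictionary. The names here are illustrative only.
results = {}

def double(name, value):
    results[name] = value * 2   # write into shared memory

threads = [
    threading.Thread(target=double, args=("a", 1)),
    threading.Thread(target=double, args=("b", 2)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()            # wait until both threads have finished

print(sorted(results.items()))  # [('a', 2), ('b', 4)]
```

Because the threads share one address space, no explicit message passing is needed: each thread's result is immediately visible to the other and to the main thread after `join()`.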
Process Scheduling
Process scheduling is a core operating system function that manages the allocation of
system resources, particularly the CPU, among multiple running processes.
Process scheduling is necessary for achieving concurrent processing, as it allows the
operating system to share the CPU among multiple processes or threads.
Scheduling algorithms
• Round-robin scheduling
• Priority-based scheduling
• Lottery scheduling
• Shortest Job First Scheduling
• Shortest Job Remaining First Scheduling
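The time-slice behavior of round-robin scheduling can be sketched as a toy simulation (not a real scheduler; burst times and the quantum are abstract time units):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin scheduling: each process runs for at most
    one time quantum, then rejoins the back of the ready queue.
    Returns the order in which processes complete."""
    queue = deque(enumerate(bursts))    # (pid, remaining burst time)
    order = []
    while queue:
        pid, remaining = queue.popleft()
        if remaining > quantum:
            # Time slice expires: suspend and requeue the process.
            queue.append((pid, remaining - quantum))
        else:
            # Process finishes within this slice.
            order.append(pid)
    return order

print(round_robin([5, 2, 3], quantum=2))  # [1, 2, 0]
```

With bursts of 5, 2, and 3 units and a quantum of 2, the shortest process (P1) finishes first and the longest (P0) last, even though P0 arrived first: the quantum keeps any single process from monopolizing the CPU.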
Process Synchronization and Deadlocks
•Independent Process: The execution of one process does not affect the
execution of other processes.
•Cooperative Process: The execution of one process can affect, or be affected by, the
execution of other processes; such processes share data and therefore need synchronization.
When more than one process executes the same code or accesses the same memory or shared
variable, the resulting output or value of the shared variable may be wrong: the processes
effectively race to determine whose update takes effect. This situation is known as a race
condition.
When several processes access and manipulate the same data concurrently, the outcome depends
on the particular order in which the accesses take place.
A race condition is a situation that may occur inside a critical section.
This happens when the result of multiple thread execution in the critical section differs
according to the order in which the threads execute.
Race conditions in critical sections can be avoided if the critical section is treated as an atomic
instruction.
Also, proper thread synchronization using locks or atomic variables can prevent race
conditions.
Critical Section
A critical section is a code segment that can be accessed by only one process
at a time.
So, the critical section problem means designing a way for cooperative
processes to access shared resources without creating data inconsistencies.
Requirements of a Critical-Section Solution
•If a process enters the critical section, then no other process should be allowed to enter the critical section.
This is called mutual exclusion.
•If no process is executing in its critical section and some processes wish to enter, then the selection of
the next process to enter cannot be postponed indefinitely. This is called progress.
•There must be a bound on the number of times other processes are allowed to enter the critical section
after a process has requested entry and before that request is granted, so no process waits forever.
This property is called bounded waiting.
•The solution should be independent of the system's architecture. This is called neutrality.
Solutions to the Critical Section Problem
•Some software-based solutions to the critical-section problem are Peterson's
solution, semaphores, and monitors.
•Some hardware-based solutions to the critical-section problem rely on atomic instructions
such as test-and-set, compare-and-swap, and swap.
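Peterson's solution, mentioned above, can be sketched in Python for two processes. This sketch relies on CPython's global interpreter lock to provide sequentially consistent memory; a real C implementation would additionally need memory barriers, and the short switch interval below just keeps the busy-wait brief:

```python
import sys
import threading

sys.setswitchinterval(0.0005)   # switch threads often so busy-waits stay short

# Peterson's solution for two processes (ids 0 and 1).
flag = [False, False]   # flag[i] is True when process i wants to enter
turn = 0                # which process must yield if both want to enter
counter = 0             # shared variable updated inside the critical section
N = 1000

def process(i):
    global turn, counter
    other = 1 - i
    for _ in range(N):
        flag[i] = True                      # entry section: announce intent
        turn = other                        # give the other process priority
        while flag[other] and turn == other:
            pass                            # busy-wait until safe to enter
        counter += 1                        # critical section
        flag[i] = False                     # exit section

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)
```

The `flag` array ensures each process announces its intent, and `turn` breaks the tie when both want to enter, so mutual exclusion, progress, and bounded waiting all hold: every increment of `counter` survives.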
Deadlocks
Examples Of Deadlock
1. The system has 2 tape drives. P1 and P2 each hold one tape drive and each
needs another one.
2. Semaphores A and B are each initialized to 1, and processes P0 and P1 deadlock as follows:
• P0 executes wait(A) and is then preempted.
• P1 executes wait(B).
• P0 then executes wait(B) and blocks; P1 executes wait(A) and blocks.
• P0 and P1 are now deadlocked, each waiting for a semaphore held by the other.
3. Assume 200K bytes of space are available for allocation, and the following sequence
of events occurs.
Deadlock can arise if the following four conditions hold simultaneously (necessary conditions):
• Mutual Exclusion: At least one resource is held in a non-sharable mode.
• Hold and Wait: A process holds at least one resource while waiting to acquire others.
• No Preemption: A resource can be released only voluntarily by the process holding it.
• Circular Wait: A set of processes wait for each other in circular form.
Deadlock Handling
Prevention:
The idea is to never let the system enter a deadlock state: the system ensures that the four
necessary conditions mentioned above cannot all hold. These techniques are costly, so we use
them in cases where keeping the system deadlock-free is the priority.
Prevention is done by negating one of the necessary conditions for deadlock, and can be done
in four different ways:
1. Eliminate mutual exclusion
2. Eliminate hold and wait
3. Allow preemption
4. Break circular wait
Deadlock prevention or avoidance
Avoidance:
Avoidance looks ahead: before a process executes, the system must know all the resources the
process may need during its execution. The Banker's algorithm (due to Dijkstra) then grants
only those resource requests that keep the system in a safe state, thereby avoiding deadlock.
In both prevention and avoidance, we preserve the correctness of data, but performance decreases.
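The safe-state check at the heart of the Banker's algorithm can be sketched as follows (a minimal version; the data below is the classic five-process, three-resource textbook example, with Need = Max − Allocation):

```python
def is_safe(available, allocation, need):
    """Banker's safety check: True iff some order exists in which
    every process can acquire its remaining need and finish."""
    work = list(available)                  # resources currently free
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # Process i can finish; it then releases everything it holds.
                for j in range(len(work)):
                    work[j] += allocation[i][j]
                finished[i] = True
                progress = True
    return all(finished)                    # safe iff everyone can finish

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe([3, 3, 2], allocation, need))  # True: a safe sequence exists
```

In avoidance, a request is granted only if the state after granting it still passes this check; here, for example, the safe sequence P1, P3, P4, P0, P2 exists.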
2) Deadlock detection and recovery:
If deadlock prevention or avoidance is not applied, the system can instead handle deadlocks by
detection and recovery, which consists of two phases:
1. In the first phase, we examine the state of the processes and check whether a deadlock
exists in the system.
2. If a deadlock is found in the first phase, we apply an algorithm to recover from it.
With deadlock detection and recovery, we preserve the correctness of data, but performance
decreases.
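The detection phase is often implemented as cycle detection on a wait-for graph; a minimal sketch (assuming every process appears as a key in the dict):

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph, given as a dict mapping each
    process to the list of processes it is waiting on. A cycle means
    the processes on it are deadlocked."""
    WHITE, GRAY, BLACK = 0, 1, 2        # unvisited / in progress / done
    color = {p: WHITE for p in wait_for}

    def visit(p):
        color[p] = GRAY
        for q in wait_for[p]:
            if color[q] == GRAY:        # back edge: we found a cycle
                return True
            if color[q] == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in wait_for)

print(has_deadlock({"P1": ["P2"], "P2": ["P1"]}))  # True: circular wait
print(has_deadlock({"P1": ["P2"], "P2": []}))      # False: P2 can finish
```

Once a cycle is found, recovery typically either aborts one of the processes on the cycle or preempts one of the resources involved.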
3) Deadlock ignorance: The system simply ignores the problem and pretends that deadlocks never
occur. This is the cheapest approach and the one taken by most general-purpose operating
systems, since deadlocks are rare in practice and the other strategies are expensive.