
Ch. 2 ... Part - 2

The document discusses concurrent processing and synchronization in operating systems. It defines concurrent processing as the ability of an operating system to execute multiple tasks simultaneously. It discusses advantages like improved performance and resource utilization. It also covers different types of concurrent processes like process scheduling, multi-threading, and parallel processing. The document then discusses process synchronization techniques needed to coordinate shared resources and prevent race conditions and deadlocks.


CH. 2
Process Management and Synchronization
BY DR. VEEJYA KUMBHAR
Ch. 2
Process Management and Synchronization
Syllabus

• Process 
• PCB 
• Job and processor scheduling 
• Problems of concurrent processes:
• Critical sections, Mutual exclusion, Synchronization
• Deadlock
• Device and File Management
Concurrent Processes

Concurrent processing is the ability of an operating system to execute multiple tasks
simultaneously, allowing for efficient utilization of resources and improved performance.

In today's computing environment, with the availability of multi-core CPUs and high-speed
networking, concurrent processing has become increasingly important for operating systems to meet
the demands of users.
Concurrent Processes

Definition of concurrency processing

Concurrent processing, also known as concurrency processing, refers to the ability of an operating
system to execute multiple tasks or processes simultaneously, allowing for efficient utilization of
resources and improved performance.
It involves the parallel execution of tasks, with the operating system managing and coordinating the
tasks to ensure that they do not interfere with each other.
Concurrent processing is typically achieved through techniques such as process scheduling,
multi-threading, and parallel processing. It is a critical technology in modern operating systems,
enabling them to provide the performance, scalability, and responsiveness required by today's
computing environment.
Advantages of Concurrent Processes

• Improved performance
• Resource utilization
• Enhanced responsiveness
• Scalability
• Flexibility
Types of Concurrent Processes

Process scheduling − This is the most basic form of concurrency processing, in which the operating
system executes multiple processes one after the other, with each process given a time slice to execute
before being suspended and replaced by the next process in the queue.
Multi-threading − This involves the use of threads within a process, with each thread executing a
different task concurrently. Threads share the same memory space within a process, allowing them to
communicate and coordinate with each other easily.

Parallel processing − This involves the use of multiple processors or cores within a system to
execute multiple tasks simultaneously. Parallel processing is typically used for computationally intensive
tasks, such as scientific simulations or video rendering.

Distributed processing − This involves the use of multiple computers or nodes connected by a
network to execute a single task. Distributed processing is typically used for large-scale, data-intensive
applications, such as search engines or social networks.
Process Scheduling

Process scheduling is a core operating system function that manages the allocation of
system resources, particularly the CPU, among multiple running processes.
Process scheduling is necessary for achieving concurrency processing, as it allows the
operating system to execute multiple processes or threads simultaneously.
Scheduling algorithms
• Round-robin scheduling
• Priority-based scheduling
• Lottery scheduling
• Shortest Job First Scheduling
• Shortest Remaining Time First scheduling (preemptive SJF)
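The time-slice mechanism behind round-robin scheduling can be sketched as a small simulation (Python used purely for illustration; the process names, burst times, and quantum below are hypothetical):

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate round-robin scheduling for processes that all arrive at t=0.

    burst_times: {name: cpu_time_needed}; quantum: length of one time slice.
    Returns {name: completion_time}.
    """
    remaining = dict(burst_times)
    queue = deque(burst_times)          # FIFO ready queue
    clock = 0
    completion = {}
    while queue:
        name = queue.popleft()
        slice_ = min(quantum, remaining[name])
        clock += slice_                 # the process runs for its slice
        remaining[name] -= slice_
        if remaining[name] == 0:
            completion[name] = clock    # finished: record completion time
        else:
            queue.append(name)          # preempted: back of the queue
    return completion

# Three hypothetical processes with a 2-unit quantum.
print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# {'P3': 5, 'P2': 8, 'P1': 9}
```

Each suspended process goes to the back of the queue, which is exactly the "time slice, suspend, next process" cycle described earlier.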
Process Synchronization and Deadlocks

Process synchronization is the coordination of execution of multiple processes in a
multi-process system to ensure that they access shared resources in a controlled and predictable
manner. It aims to resolve the problem of race conditions and other synchronization issues in
a concurrent system.
The main objective of process synchronization is to ensure that multiple processes access
shared resources without interfering with each other, and to prevent the possibility of
inconsistent data due to concurrent access. To achieve this, various synchronization techniques
such as semaphores, monitors, and critical sections are used.
In a multi-process system, synchronization is necessary to ensure data consistency and
integrity, and to avoid the risk of deadlocks and other synchronization problems. Process
synchronization is an important aspect of modern operating systems, and it plays a crucial role
in ensuring the correct and efficient functioning of multi-process systems.
Process Types

• Independent process: the execution of one process does not affect the
execution of other processes.
• Cooperative process: a process that can affect or be affected by other
processes executing in the system.
• The process synchronization problem arises with cooperative processes,
because resources are shared among them.
RACE condition

When more than one process executes the same code or accesses the same memory or shared
variable, there is a possibility that the resulting value of the shared variable is wrong. Because
the processes effectively "race" to produce their own result, this situation is known as a race
condition.
When several processes access and manipulate the same data concurrently, the outcome depends
on the particular order in which the accesses take place.
A race condition is a situation that may occur inside a critical section.
This happens when the result of multiple thread execution in the critical section differs
according to the order in which the threads execute.
Race conditions in critical sections can be avoided if the critical section is treated as an atomic
instruction.
Also, proper thread synchronization using locks or atomic variables can prevent race
conditions.
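The lock-based fix mentioned above can be sketched as follows (Python's `threading` module used for illustration; the thread count and iteration count are arbitrary choices):

```python
import threading

counter = 0                      # shared variable that the threads race over
lock = threading.Lock()

def safe_increment(n):
    """counter += 1 is really load/add/store; the lock makes it atomic."""
    global counter
    for _ in range(n):
        with lock:               # entry to the critical section
            counter += 1         # shared update, now race-free

threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # always 400000 with the lock; without it, the result can vary
```

Removing the `with lock:` line reintroduces the race: the final count may then be anything up to 400000, depending on how the threads interleave.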
Atomic Instruction

• An operation during which a processor can simultaneously read a
location and write to it in the same bus operation.
• This prevents any other processor or I/O device from writing or
reading memory until the operation is complete.
• Atomic implies indivisibility and irreducibility, so an atomic
operation must be performed entirely or not performed at all.
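To see why a non-atomic update misbehaves, note that the single statement `x = x + 1` is really a load, an add, and a store. The sketch below deterministically replays one worst-case interleaving of those steps for two "processes" (the step ordering is a hypothetical schedule, written out by hand):

```python
def interleaved_increments():
    """Model two processes each doing x = x + 1 as separate load/add/store
    steps, interleaved so that both load before either one stores."""
    x = 0
    a = x          # P1 loads x (sees 0)
    b = x          # P2 loads x (also sees 0 -- P1 has not stored yet)
    a = a + 1      # P1 adds
    b = b + 1      # P2 adds
    x = a          # P1 stores 1
    x = b          # P2 also stores 1 -- P1's increment is lost
    return x

print(interleaved_increments())  # 1, not 2
```

An atomic increment instruction would fuse the three steps, so no other process could slip in between the load and the store.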
Critical Section Problem

A critical section is a code segment that can be accessed by only one process
at a time.
The critical section contains shared variables that need to be synchronized to
maintain the consistency of data variables.
So, the critical section problem means designing a way for cooperative
processes to access shared resources without creating data inconsistencies.
Critical Section

In the entry section, the process requests entry into the critical section.
Any solution to the critical section problem must satisfy three
requirements:
•Mutual Exclusion: If a process is executing in its critical section,
then no other process is allowed to execute in the critical section.
•Progress: If no process is executing in the critical section and other
processes are waiting outside the critical section, then only those
processes that are not executing in their remainder section can
participate in deciding which will enter the critical section next,
and the selection cannot be postponed indefinitely.
•Bounded Waiting: A bound must exist on the number of times that
other processes are allowed to enter their critical sections after a
process has made a request to enter its critical section and before that
request is granted.
Solutions to the Critical Section Problem

•If a process enters the critical section, then no other process should be allowed to enter the critical section.
This is called mutual exclusion.
•If a process is in the critical section and another process arrives, then the new process must wait until the first
process exits the critical section. In such cases, the process that is waiting to enter the critical section should
not wait for an unlimited period. This is called progress.
•If a process wants to enter into the critical section, then there should be a specified time that the process can
be made to wait. This property is called bounded waiting.
•The solution should be independent of the system's architecture. This is called neutrality.

•Some of the software-based solutions for critical section problems are Peterson's
solution, semaphores, monitors.
•Some of the hardware-based solutions for the critical section problem involve atomic instructions such
as TestAndSet, compare and swap, Unlock and Lock.
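Peterson's solution, named above, can be sketched for two processes as follows (Python threads used for illustration; note that CPython's interpreter lock gives the sequentially consistent memory the algorithm assumes, whereas on real hardware memory barriers would also be needed, and the iteration count and switch interval here are arbitrary choices):

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # switch threads often so the demo finishes quickly

flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # which process yields when both want in
counter = 0             # shared variable updated inside the critical section
N = 1000

def worker(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        # ---- entry section (Peterson's algorithm) ----
        flag[i] = True
        turn = j                        # give the other process priority
        while flag[j] and turn == j:    # busy-wait while it is the other's turn
            pass
        # ---- critical section ----
        counter += 1
        # ---- exit section ----
        flag[i] = False

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 2000: every increment survived, so mutual exclusion held
```

The algorithm satisfies all three requirements listed earlier: `flag`/`turn` give mutual exclusion, the `turn = j` handover gives progress, and a process waits at most one turn, giving bounded waiting.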
Deadlocks

A process in an operating system uses resources in the following way:
1. Requests the resource
2. Uses the resource
3. Releases the resource
A deadlock is a situation where a set of processes are blocked because each process is holding a
resource and waiting for another resource acquired by some other process.
Consider an example where two trains are coming toward each other on a single track:
once they are in front of each other, neither train can move. A similar situation
occurs in operating systems when two or more processes hold some resources and wait
for resources held by others. For example, in the diagram below, Process 1 is holding Resource 1
and waiting for Resource 2, which is acquired by Process 2, and Process 2 is waiting for Resource 1.
Deadlocks

Examples Of Deadlock
1. The system has 2 tape drives. P1 and P2 each hold one tape drive and each
needs another one.
2. Semaphores A and B, each initialized to 1; P0 and P1 deadlock as follows:
• P0 executes wait(A) and is then preempted.
• P1 executes wait(B).
• P0 resumes and executes wait(B); P1 executes wait(A). Each now waits for a
semaphore held by the other, so P0 and P1 are deadlocked.
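The interleaving in example 2 can be traced with a toy model of `wait()` (illustrative only; a real semaphore would block the caller instead of returning `False`):

```python
def wait(sems, s):
    """Toy model of semaphore wait(): decrement if positive, else report a block."""
    if sems[s] > 0:
        sems[s] -= 1
        return True       # acquired the semaphore
    return False          # a real process would block here

sems = {"A": 1, "B": 1}           # both semaphores initialized to 1
wait(sems, "A")                   # P0: wait(A) succeeds; P0 is then preempted
wait(sems, "B")                   # P1: wait(B) succeeds
p0_blocked = not wait(sems, "B")  # P0 resumes: B is 0, so P0 blocks
p1_blocked = not wait(sems, "A")  # P1: A is 0, so P1 blocks
print(p0_blocked, p1_blocked)     # True True -> each waits on the other: deadlock
```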
Deadlocks

3. Assume 200K bytes of space are available for allocation, and the following sequence
of events occurs.

Deadlock occurs if both processes progress to their second request.
Deadlocks

Deadlock can arise if the following four conditions hold simultaneously (necessary conditions):

Mutual Exclusion: Two or more resources are non-shareable (only one
process can use a resource at a time).
Hold and Wait: A process is holding at least one resource and waiting for
additional resources held by other processes.
No Preemption: A resource cannot be taken from a process unless the process
releases the resource.
Circular Wait: A set of processes are waiting for each other in circular form.
Deadlock Handling

There are three ways to handle deadlock:
1) Deadlock prevention or avoidance
2) Deadlock detection and recovery
3) Deadlock ignorance
Deadlock prevention or avoidance

Prevention:
The idea is to not let the system enter a deadlock state. The system makes sure that
the four conditions mentioned above cannot all arise. These techniques are very costly, so we
use them in cases where our priority is making the system deadlock-free.
Prevention is done by negating one of the above-mentioned necessary conditions
for deadlock. It can be done in four different ways:
1. Eliminate mutual exclusion
2. Solve hold and wait
3. Allow preemption
4. Break circular wait
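Breaking circular wait is commonly done by imposing a global order on resources and always acquiring them in that order. A sketch (Python locks used for illustration; the rank assignments and task names are made up):

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
rank = {id(lock_a): 1, id(lock_b): 2}    # global ordering on the resources

def acquire_in_order(*locks):
    """Take locks in ascending rank, so a circular wait can never form."""
    ordered = sorted(locks, key=lambda lk: rank[id(lk)])
    for lk in ordered:
        lk.acquire()
    return ordered

done = []
def task(name, first, second):
    held = acquire_in_order(first, second)  # requested order is irrelevant
    done.append(name)                       # work that needs both resources
    for lk in reversed(held):
        lk.release()

# The two tasks request the locks in opposite orders -- the classic deadlock
# setup -- but the global order makes the schedule deadlock-free.
t1 = threading.Thread(target=task, args=("T1", lock_a, lock_b))
t2 = threading.Thread(target=task, args=("T2", lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(done))  # ['T1', 'T2'] -> both completed, no deadlock
```

With a total order on resources, no cycle can exist in the wait-for relation, which negates the circular-wait condition.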
Deadlock prevention or avoidance

Avoidance:
Avoidance looks ahead. Using this strategy, we make an assumption: all
information about the resources a process will need must be known to us
before the execution of the process. We use Banker's algorithm (due to
Dijkstra) to avoid deadlock.
In prevention and avoidance, we get correctness of data, but performance decreases.
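The safety check at the heart of Banker's algorithm can be sketched as follows (illustrative; the resource vectors in the example are made up):

```python
def is_safe(available, allocation, need):
    """Banker's algorithm safety check: is there some order in which every
    process can run to completion?

    available: free units of each resource type;
    allocation[i] / need[i]: units held / still needed by process i.
    """
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, req in enumerate(need):
            if not finished[i] and all(r <= w for r, w in zip(req, work)):
                # Pretend process i runs to completion and returns everything.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

# Hypothetical single-resource system: 3 free units, three processes.
print(is_safe([3], [[2], [3], [2]], [[4], [1], [5]]))  # True: P1 -> P2 -> P0
print(is_safe([1], [[4], [4]], [[3], [3]]))            # False: nobody can finish
```

The avoidance rule is then: grant a request only if the resulting state would still pass this safety check.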
2) Deadlock detection and recovery:

If deadlock prevention or avoidance is not applied, we can handle deadlock
by detection and recovery, which consists of two phases:
1. In the first phase, we examine the state of the processes and check whether
there is a deadlock in the system.
2. If a deadlock is found in the first phase, we apply an algorithm to recover
from it.
In deadlock detection and recovery, we get correctness of data, but performance
decreases.
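The detection phase is often implemented as cycle detection on a wait-for graph: a cycle means a set of deadlocked processes. A sketch (the process names are hypothetical):

```python
def has_deadlock(wait_for):
    """Return True if the wait-for graph contains a cycle (i.e., a deadlock).

    wait_for maps each process to the list of processes it is waiting on.
    Uses depth-first search with three colors: unvisited, on the current
    path (GREY), and fully explored (BLACK)."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {p: WHITE for p in wait_for}

    def visit(p):
        color[p] = GREY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GREY:       # back edge -> cycle
                return True
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in wait_for)

# P1 waits for P2 and P2 waits for P1: the circular wait from earlier.
print(has_deadlock({"P1": ["P2"], "P2": ["P1"]}))  # True
print(has_deadlock({"P1": ["P2"], "P2": []}))      # False
```

Once a cycle is found, recovery typically proceeds by aborting one of the processes in the cycle or preempting one of its resources.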
3) Deadlock ignorance:

If deadlocks are very rare, we simply let them happen and reboot the system.
This is the approach that both Windows and UNIX take; it is known as the
ostrich algorithm.
With deadlock ignorance, performance is better than with the above two
methods, but correctness of data is not guaranteed.
Thank You
