5-Week5 Osa Theory

Process synchronization ensures that no two processes access the same shared data simultaneously, addressing the critical section problem through mutual exclusion, progress, and bounded waiting. Semaphores are used as synchronization tools to manage access to shared resources, while deadlocks occur when processes are unable to proceed because they are each waiting for resources held by the other. The document also discusses methods for handling deadlocks, including prevention, avoidance, detection, and recovery, as well as the differences between user-level and kernel-level threads.

Process Synchronization

Process Synchronization is the task of coordinating the execution of processes so that no two processes can access the same shared data and resources at the same time.
Critical Section: a code segment that accesses shared variables and has to be executed as an atomic action. It refers to the segment of code that accesses or modifies the values of variables in a shared resource.

Critical Section Problem: The problem of how to ensure that at most one
process is executing its critical section at a given time.
A solution to the critical-section problem must satisfy the following three
requirements:
 Mutual exclusion. If process Pi is executing in its critical section, then
no other processes can be executing in their critical sections.

 Progress: If no process is executing in its critical section and some processes wish to enter theirs, then only the processes not executing in their remainder sections may participate in deciding which will enter next, and this decision cannot be postponed indefinitely.

In other words, if one process does not need the critical section, it must not stop other processes from getting into the critical section.

 Bounded waiting: There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

In other words, the waiting time for every process to get into the critical section is bounded; no process waits endlessly to enter.
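A classic software solution that satisfies all three requirements for two processes is Peterson's algorithm. The sketch below is an illustration of mine, not part of these notes; it relies on CPython's global interpreter lock for memory visibility, whereas real hardware would need memory barriers or atomic instructions.

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # switch threads often so the busy-wait makes progress

flag = [False, False]  # flag[i] is True when process i wants to enter
turn = 0               # whose turn it is to defer
counter = 0            # shared data protected by the critical section

def peterson(me):
    global turn, counter
    other = 1 - me
    for _ in range(200):
        flag[me] = True                      # announce intent (mutual exclusion)
        turn = other                         # yield priority (bounded waiting)
        while flag[other] and turn == other:
            pass                             # busy wait; the other enters at most once
        counter += 1                         # critical section
        flag[me] = False                     # exit section (progress)

threads = [threading.Thread(target=peterson, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400: no increment was lost
```

Because each process sets `turn` to favor the other, neither can starve: the other process enters at most once before a waiting process gets in, which is exactly the bounded-waiting guarantee.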

Semaphores
A synchronization tool used to solve critical-section problem is called
Semaphore.
A semaphore S is an integer variable that, apart from initialization, is
accessed only through two standard atomic operations: wait ( ) and signal ( ).
The wait ( ) operation was originally termed P (from the Dutch
Proberen, "to test"); signal( ) was originally called V (from
Verhogen, "to increment").
The definition of wait ( ) is as follows:

wait(S) {
    while (S <= 0)
        ;   // busy wait (no-op)
    S--;
}
The definition of signal( ) is as follows:

signal(S) {
    S++;
}
 As wait ( ) and signal ( ) are atomic operations, when one process
modifies the semaphore value, no other process can simultaneously
modify that same semaphore value.
In the case of wait(S), the testing of the integer value of S (S <= 0), as well as its possible modification (S--), must be executed without interruption.
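As a concrete illustration, Python's threading.Semaphore exposes acquire() and release(), which play the roles of wait() and signal(). This is a minimal sketch of my own, not from the notes:

```python
import threading

counter = 0                     # shared data
mutex = threading.Semaphore(1)  # binary semaphore initialized to 1

def worker(iterations):
    global counter
    for _ in range(iterations):
        mutex.acquire()   # wait(S): blocks while the semaphore value is 0
        counter += 1      # critical section: at most one thread is here
        mutex.release()   # signal(S): increments the value, waking a waiter

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: no update was lost
```

Initializing the semaphore to 1 makes it a mutex; initializing it to n would instead allow up to n processes into the section at once, which is how counting semaphores manage pools of identical resources.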

Deadlock- System model
Deadlock
“When a process requests a resource that is not available at that time, the process enters a waiting state. A waiting process may never change state again, because the resources it has requested are held by other waiting processes. This situation is called a deadlock.”
System model
A system consists of a finite number of resources distributed among a number of processes. These resources are partitioned into several types, each consisting of some number of identical instances.

A process must request a resource before using it, and it must release the resource after using it. It can request any number of resources to carry out a designated task, but the amount requested may not exceed the total number of resources available.

A process may utilize the resources in only the following sequences:


1. Request: If the request cannot be granted immediately, the requesting process must wait until it can acquire the resource.
2. Use: The process can operate on the resource, e.g. use the memory to perform its task.
3. Release: The process releases the resource after using it.

Deadlock may involve different types of resources.

A deadlock happens in an operating system when two or more processes each need some resource held by another process in order to complete their execution.

For example, suppose process 1 holds resource 1 and needs to acquire resource 2, while process 2 holds resource 2 and needs to acquire resource 1. Process 1 and process 2 are in deadlock, as each needs the other's resource to complete its execution but neither is willing to relinquish the resource it holds.

Necessary Conditions for Deadlock

The four necessary conditions for a deadlock to arise are as follows.

 Mutual Exclusion: Only one process can use a resource at any given time
i.e. the resources are non-sharable.
 Hold and wait: A process is holding at least one resource at a time and is
waiting to acquire other resources held by some other process.
 No preemption: A resource cannot be forcibly taken away from a process; it can be released only voluntarily by the process holding it, after that process has completed its task.
 Circular Wait: A set of processes are waiting for each other in a circular
fashion.


Methods for handling deadlocks


The deadlock problem can be dealt with in one of three ways:
1. Use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a deadlock state.
2. Allow the system to enter a deadlock state, detect it, and recover.
3. Ignore the problem altogether and pretend that deadlocks never occur in the system.

Deadlock prevention
For a deadlock to occur, each of the four necessary conditions must hold. By ensuring that at least one of these conditions cannot hold, we can prevent the occurrence of a deadlock. The four necessary conditions are: mutual exclusion, hold and wait, no preemption, and circular wait.

Mutual Exclusion:
If a resource could be used by more than one process at the same time, processes would never have to wait for it. Deadlock can therefore be prevented by making resources sharable where possible; however, some resources are intrinsically non-sharable, so the mutual-exclusion condition cannot always be denied.

Hold and Wait:


To ensure that the hold-and-wait condition never occurs in the
system, we must guarantee that, whenever a process requests a
resource, it does not hold any other resources.
Two protocols can be used:
 Require each process to request and be allocated all its resources before it begins execution.
 Allow a process to request resources only when it has none.
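The first protocol, requesting everything up front, can be sketched as an all-or-nothing allocator. The class below is a hypothetical illustration of mine, not a real OS interface:

```python
import threading

class AllOrNothingAllocator:
    """Grant a request only if every instance is available, so a process
    never holds some resources while waiting for others (no hold-and-wait)."""

    def __init__(self, available):
        self.available = dict(available)  # resource type -> free instances
        self.lock = threading.Lock()

    def try_acquire(self, request):
        with self.lock:
            if all(self.available.get(r, 0) >= n for r, n in request.items()):
                for r, n in request.items():
                    self.available[r] -= n
                return True
            return False  # nothing granted: the caller retries, holding nothing

    def release(self, request):
        with self.lock:
            for r, n in request.items():
                self.available[r] += n

alloc = AllOrNothingAllocator({"printer": 1, "disk": 2})
print(alloc.try_acquire({"printer": 1, "disk": 1}))  # True: everything available
print(alloc.try_acquire({"disk": 2}))                # False: only one disk left
```

Because a rejected request grants nothing, a waiting process never holds resources, which breaks the hold-and-wait condition at the cost of lower resource utilization.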

No preemption:
Deadlock can persist because a resource, once allocated, is normally not taken back until the process releases it. However, if we allow resources to be preempted, i.e. taken away from a waiting process that is contributing to the deadlock, then deadlock can be prevented.
Circular Wait:
One way to ensure that the circular-wait condition never holds is to impose a total ordering of all resource types and to require that each process requests resources in an increasing order of enumeration.
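The ordering rule can be sketched as a helper that sorts locks by a global rank before acquiring them. This is my own illustration, assuming two hypothetical resources represented as Python locks:

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
ORDER = {id(lock_a): 0, id(lock_b): 1}  # total ordering of all resource types

def acquire_in_order(*locks):
    # Every thread acquires in increasing rank, so no wait cycle can form.
    for lk in sorted(locks, key=lambda l: ORDER[id(l)]):
        lk.acquire()

results = []

def task(name, *locks):
    acquire_in_order(*locks)  # both threads take lock_a before lock_b
    results.append(name)
    for lk in locks:
        lk.release()

t1 = threading.Thread(target=task, args=("t1", lock_a, lock_b))
t2 = threading.Thread(target=task, args=("t2", lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results))  # ['t1', 't2']: both finished, no deadlock
```

Even though the two threads name the locks in opposite orders, the helper acquires them in the same global order, so a circular wait is impossible and both threads complete.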

Deadlock Avoidance
An alternative method for avoiding deadlocks is to require additional
information about how resources are to be requested.

When a process requests a resource, the deadlock-avoidance algorithm examines the resource-allocation state. If allocating that resource would send the system into an unsafe state, the request is not granted.

The algorithm therefore requires additional information, such as how many resources of each type each process may require. If granting a request would move the system into an unsafe state, the request is deferred so that deadlock is avoided.
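The best-known avoidance technique is the banker's algorithm, whose core is a safety check: is there some order in which every process can finish with the resources currently available? The function below is my own simplified sketch of that check:

```python
def is_safe(available, allocation, need):
    """Return True if some ordering lets every process run to completion."""
    work = list(available)                # resources currently free
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(need[i][j] <= work[j] for j in range(len(work))):
                for j in range(len(work)):        # process i finishes and
                    work[j] += allocation[i][j]   # returns what it holds
                finished[i] = True
                progress = True
    return all(finished)

# Two processes, one resource type, one instance free.
print(is_safe([1], [[1], [1]], [[1], [2]]))  # True: P0 can finish, then P1
print(is_safe([0], [[1], [1]], [[1], [2]]))  # False: unsafe, neither can proceed
```

A request is granted only if the state after the (pretend) allocation still passes this check; otherwise the process waits.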

Deadlock Detection and Recovery

We let the system fall into a deadlock and, if that happens, we detect it using a detection algorithm and try to recover.
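A common detection algorithm searches the wait-for graph for a cycle: an edge p → q means process p is waiting for a resource held by q. A small sketch of mine using depth-first search:

```python
def has_deadlock(wait_for):
    """Return True if the wait-for graph contains a cycle (a deadlock)."""
    visiting, done = set(), set()

    def dfs(p):
        if p in visiting:   # back edge: we returned to a process on the current path
            return True
        if p in done:
            return False
        visiting.add(p)
        for q in wait_for.get(p, ()):
            if dfs(q):
                return True
        visiting.remove(p)
        done.add(p)
        return False

    return any(dfs(p) for p in wait_for)

# P1 waits for P2 and P2 waits for P1: circular wait, hence deadlock.
print(has_deadlock({"P1": ["P2"], "P2": ["P1"]}))  # True
print(has_deadlock({"P1": ["P2"], "P2": []}))      # False
```

Running such a check periodically, or whenever a request cannot be granted, tells the system when to invoke one of the recovery methods below.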

Some ways of recovery are as follows.

 Abort all the deadlocked processes.
 Abort one process at a time until the system recovers from the deadlock.
 Resource preemption: resources are taken one by one from a process and assigned to higher-priority processes until the deadlock is resolved.

Deadlock Ignorance

In this method, the system assumes that deadlocks never occur. Since deadlock situations are infrequent, some systems simply ignore the problem. Operating systems such as UNIX and Windows follow this approach. If a deadlock does occur, rebooting the system resolves it.
Threads – Multithreading models, Threads, and processes.

What is Thread?

A thread is a subset of a process and is also known as a lightweight process. A process can have more than one thread, and these threads are managed independently by the scheduler. All the threads within one process are interrelated: they share common information such as the data segment, code segment, and open files with their peer threads, but each thread has its own registers, stack, and program counter.

o If a single thread executes in a process, it is known as a single-threaded process; if multiple threads execute simultaneously, it is known as multithreading.

Multithreading Model:
A thread is the smallest unit of processing that can be
performed in an OS. In most modern operating systems,
a thread exists within a process - that is, a single process
may contain multiple threads.

Multithreading allows the application to divide its task into individual threads.

With the use of multithreading, multitasking can be achieved.

Multithreading allows the execution of multiple parts of a program at the same time. These parts are known as threads and are lightweight processes available within the process. Multithreading therefore leads to maximum utilization of the CPU through multitasking.

The main drawback of single-threaded systems is that only one task can be performed at a time; multithreading overcomes this by allowing multiple tasks to be performed concurrently.
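The division of a task into threads can be sketched with Python's threading module; the example below is my own, not from the notes, and splits a summation across four worker threads:

```python
import threading

data = list(range(1, 101))  # task: sum the numbers 1..100
partials = [0] * 4          # one result slot per worker thread

def partial_sum(idx, chunk):
    partials[idx] = sum(chunk)  # each thread handles one slice of the task

threads = []
for i in range(4):
    t = threading.Thread(target=partial_sum, args=(i, data[i * 25:(i + 1) * 25]))
    threads.append(t)
    t.start()
for t in threads:
    t.join()
print(sum(partials))  # 5050: same result as a single-threaded sum
```

Note that in CPython these threads share one interpreter, so the gain here is concurrency rather than parallel CPU speedup; the point is the decomposition of one process's task into lightweight threads sharing the process's data.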

Types of threads - Kernel level and User level

The two main types of threads are user-level threads and kernel-level threads.

User-Level Threads

User-level threads are implemented in user space, and the kernel is not aware of the existence of these threads; it handles the process as if it were single-threaded.
User-level threads are small and much faster than kernel-level threads. They are represented by a program counter (PC), stack, registers, and a small control block.
Also, there is no kernel involvement in synchronization for user-level
threads.
Advantages of User-Level Threads
Some of the advantages of user-level threads are as follows −
 User-level threads are easier and faster to create than kernel-level
threads. They can also be more easily managed.

 User-level threads can be run on any operating system.
 There are no kernel mode privileges required for thread switching in
user-level threads.
Disadvantages of User-Level Threads
Some of the disadvantages of user-level threads are as follows −
 Multithreaded applications in user-level threads cannot use
multiprocessing to their advantage.
 The entire process is blocked if one user-level thread performs a blocking operation.

Kernel-Level Threads
Kernel-level threads are handled by the operating system directly and
the thread management is done by the kernel.
The context information for the process as well as the process threads is
all managed by the kernel. Because of this, kernel-level threads are
slower than user-level threads.

Advantages of Kernel-Level Threads

Some of the advantages of kernel-level threads are as follows −
 Multiple threads of the same process can be scheduled on different
processors in kernel-level threads.
 The kernel routines can also be multithreaded.
 If a kernel-level thread is blocked, another thread of the same
process can be scheduled by the kernel.
Disadvantages of Kernel-Level Threads
Some of the disadvantages of kernel-level threads are as follows −
 A mode switch to kernel mode is required to transfer control from
one thread to another in a process.
 Kernel-level threads are slower to create as well as manage as
compared to user-level threads.
Assignment Questions
 What is process synchronization? Write a solution to the critical-section problem.
 Explain semaphores.
 When two or more processes are each waiting for a resource held by another process in order to complete their execution, what is this situation called, and what are the necessary conditions for it?
 Justify how a process is different from a thread.
 Why are two levels of threads required? Justify.

