5-Week5 Osa Theory
A critical section is the segment of code in which a process accesses or
modifies variables in a shared resource.
Critical Section Problem: The problem of how to ensure that at most one
process is executing its critical section at a given time.
A solution to the critical-section problem must satisfy the following three
requirements:
Mutual exclusion: If process Pi is executing in its critical section, then
no other process can be executing in its critical section.
Progress: If no process is executing in its critical section and some
processes wish to enter their critical sections, then only those processes
that are not in their remainder sections may take part in deciding which
will enter next, and this decision cannot be postponed indefinitely.
Bounded waiting: There is a bound on the number of times other processes
are allowed to enter their critical sections after a process has requested
entry and before that request is granted, so no process waits forever to
get into the critical section.
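As an illustration (not part of the original notes, and assuming a POSIX system with pthreads), the following C sketch shows the problem these requirements address: two threads execute the critical section counter++ with no synchronization, so updates can be lost and the final count is usually below 200000. The semaphore described in the next section is one way to repair this.

    /* Sketch of an unprotected critical section: a race condition. */
    #include <pthread.h>
    #include <stdio.h>

    static long counter;  /* shared variable: counter++ is the critical section */

    static void *worker(void *arg)
    {
        for (int i = 0; i < 100000; i++)
            counter++;    /* read-modify-write is not atomic: updates can be lost */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld (expected 200000)\n", counter);
        return 0;
    }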
Semaphores
A synchronization tool used to solve the critical-section problem is called a
semaphore.
A semaphore S is an integer variable that, apart from initialization, is
accessed only through two standard atomic operations: wait ( ) and signal ( ).
The wait ( ) operation was originally termed P (from the Dutch
Proberen, "to test"); signal( ) was originally called V (from
Verhogen, "to increment").
The definition of wait ( ) is as follows:
wait(S) {
    while (S <= 0)
        ;   // busy wait (no-op)
    S--;
}
The definition of signal( ) is as follows:
signal(S) {
    S++;
}
As wait ( ) and signal ( ) are atomic operations, when one process
modifies the semaphore value, no other process can simultaneously
modify that same semaphore value.
In the case of wait(S), the testing of the integer value of S (S <= 0),
as well as its possible modification (S--), must be executed without
interruption.
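As a concrete example (not from the notes, assuming a POSIX environment with unnamed semaphores), the sketch below uses the POSIX semaphore API, whose sem_wait() and sem_post() calls correspond to wait() and signal(). Two threads increment a shared counter inside a critical section guarded by a semaphore initialized to 1, so the final count is always 200000.

    /* A counting semaphore used as a mutex around a critical section. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t mutex;          /* semaphore initialized to 1 */
    static long shared_counter;  /* shared resource */

    static void *worker(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);    /* wait(S): blocks while S <= 0, then decrements */
            shared_counter++;    /* critical section */
            sem_post(&mutex);    /* signal(S): increments S, waking a waiter */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;

        sem_init(&mutex, 0, 1);  /* initial value 1 => binary semaphore */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", shared_counter);  /* always 200000 */
        sem_destroy(&mutex);
        return 0;
    }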
Deadlock – System model
Deadlock
“When a process requests a resource that is not available at that time, the
process enters a waiting state. A waiting process may never change its state
again, because the resources it has requested are held by other waiting
processes. This situation is called a deadlock.”
System model
A system consists of a finite number of resources to be distributed among a
number of competing processes. The resources are partitioned into several
types, each consisting of some number of identical instances.
A deadlock occurs when every process in a set needs some resource, held by
another process in the set, to complete its execution. A deadlock can arise
only if the following four conditions hold simultaneously:
Mutual Exclusion: Only one process can use a resource at any given time,
i.e. the resource is non-sharable.
Hold and wait: A process is holding at least one resource and is waiting to
acquire additional resources that are held by other processes.
No preemption: A resource can be released only voluntarily by the process
holding it, i.e. after that process has completed its task.
Circular Wait: A set of processes are waiting for each other in a circular
fashion, each waiting for a resource held by the next process in the chain.
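The following C sketch (an illustration, not part of the original notes) shows these conditions in action: two threads acquire two pthread mutexes in opposite orders, so each ends up holding one lock while waiting for the other (hold and wait plus circular wait), and the program can hang in a deadlock.

    /* Two locks acquired in opposite orders: a classic deadlock. */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    static void *thread1(void *arg)
    {
        pthread_mutex_lock(&lock_a);   /* holds A ...                      */
        sleep(1);                      /* give thread2 time to take B      */
        pthread_mutex_lock(&lock_b);   /* ... and waits for B              */
        printf("thread1 got both locks\n");
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return NULL;
    }

    static void *thread2(void *arg)
    {
        pthread_mutex_lock(&lock_b);   /* holds B ...                      */
        sleep(1);
        pthread_mutex_lock(&lock_a);   /* ... and waits for A -> deadlock  */
        printf("thread2 got both locks\n");
        pthread_mutex_unlock(&lock_a);
        pthread_mutex_unlock(&lock_b);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, thread1, NULL);
        pthread_create(&t2, NULL, thread2, NULL);
        pthread_join(t1, NULL);        /* typically never returns */
        pthread_join(t2, NULL);
        return 0;
    }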
Deadlock prevention
For a deadlock to occur, each of the four necessary conditions must
hold. By ensuring that at least one of these conditions cannot hold,
we can prevent the occurrence of a deadlock. The four necessary
conditions are: Mutual Exclusion, Hold and Wait, No preemption, and
Circular Wait.
Mutual Exclusion:
If a resource could be used by more than one process at the same time,
no process would ever have to wait for it. So if we can keep resources
from behaving in a mutually exclusive manner, i.e. make them sharable,
this condition is violated and deadlock can be prevented.
No preemption:
Deadlock can arise because a resource, once allocated, cannot be taken
away from the process holding it. If we allow resources to be preempted,
i.e. taken away from a process that is holding them while it waits for
others, deadlock can be prevented.
Circular Wait:
One way to ensure that circular wait condition never holds is to impose
a total ordering of all resource types and to require that each process
requests resources in an increasing order of enumeration.
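A small sketch of this ordering rule (an assumption for illustration, not from the notes): every thread acquires the mutexes in the same global order, lock_a before lock_b, so a circular wait cannot form and the earlier deadlock example is fixed.

    /* Resource ordering: all threads lock in the same fixed order. */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;  /* order 1 */
    static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;  /* order 2 */

    /* Both threads use this helper, so the acquisition order is identical. */
    static void do_work(const char *name)
    {
        pthread_mutex_lock(&lock_a);   /* always lowest-numbered resource first */
        pthread_mutex_lock(&lock_b);
        printf("%s is in its critical section\n", name);
        pthread_mutex_unlock(&lock_b); /* release in reverse order */
        pthread_mutex_unlock(&lock_a);
    }

    static void *t1(void *arg) { do_work("thread1"); return NULL; }
    static void *t2(void *arg) { do_work("thread2"); return NULL; }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, t1, NULL);
        pthread_create(&b, NULL, t2, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }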
Deadlock Avoidance
An alternative method for avoiding deadlocks is to require additional
information in advance about how resources are to be requested, for example
the maximum number of each resource type that each process may need; the
system then grants a request only if doing so leaves it in a safe state.
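As a minimal sketch of this idea (an assumption, since the notes do not give an algorithm), the code below implements the safe-state test used by the Banker's algorithm: with each process's remaining need known, the system checks whether some ordering of the processes lets every one of them finish.

    /* Safe-state check for deadlock avoidance (Banker's algorithm style). */
    #include <stdbool.h>
    #include <stdio.h>

    #define P 3  /* number of processes      */
    #define R 2  /* number of resource types */

    static bool is_safe(int available[R], int allocation[P][R], int need[P][R])
    {
        int work[R];
        bool finished[P] = { false };

        for (int j = 0; j < R; j++)
            work[j] = available[j];

        /* Repeatedly look for a process whose remaining need fits in work. */
        bool progress = true;
        while (progress) {
            progress = false;
            for (int i = 0; i < P; i++) {
                if (finished[i])
                    continue;
                bool fits = true;
                for (int j = 0; j < R; j++)
                    if (need[i][j] > work[j])
                        fits = false;
                if (fits) {
                    /* Pretend process i runs to completion and releases its resources. */
                    for (int j = 0; j < R; j++)
                        work[j] += allocation[i][j];
                    finished[i] = true;
                    progress = true;
                }
            }
        }
        for (int i = 0; i < P; i++)
            if (!finished[i])
                return false;  /* some process can never finish: unsafe */
        return true;
    }

    int main(void)
    {
        int available[R]     = { 1, 1 };
        int allocation[P][R] = { {1, 0}, {0, 1}, {1, 1} };
        int need[P][R]       = { {1, 1}, {1, 0}, {0, 1} };  /* max - allocation */

        printf("state is %s\n", is_safe(available, allocation, need) ? "safe" : "unsafe");
        return 0;
    }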
Deadlock Ignorance
In this method, the system assumes that deadlocks never occur. Since
deadlock situations are infrequent in practice, some systems simply ignore
them; operating systems such as UNIX and Windows follow this approach. If a
deadlock does occur, the system can be rebooted and the deadlock is resolved.
Threads – Multithreading models, Threads, and processes.
What is a Thread?
A thread is the smallest unit of processing that can be performed in an
OS. In most modern operating systems, a thread exists within a process;
that is, a single process may contain multiple threads.
Multithreading Model:
The main drawback of single-threaded systems is that only one task can be
performed at a time. Multithreading overcomes this drawback by allowing
multiple tasks to be performed concurrently within the same process.
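For illustration (not part of the original notes), the sketch below creates one process containing two pthreads; both threads share the process's global data but run independently, which is what lets a multithreaded program work on several tasks at once.

    /* One process, two threads, one shared address space. */
    #include <pthread.h>
    #include <stdio.h>

    static const char *shared_message = "hello from the shared address space";

    static void *task(void *arg)
    {
        /* Each thread has its own id and stack, but sees the same globals. */
        int id = *(int *)arg;
        printf("thread %d reads: %s\n", id, shared_message);
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[2];
        int ids[2] = { 1, 2 };

        for (int i = 0; i < 2; i++)
            pthread_create(&threads[i], NULL, task, &ids[i]);
        for (int i = 0; i < 2; i++)
            pthread_join(threads[i], NULL);
        return 0;
    }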
Types of threads - Kernel level and User level
The two main types of threads are user-level threads and kernel-level
threads.
User-Level Threads
User-level threads are implemented and managed in user space, without
involving the kernel.
Advantages of User-Level Threads
Some of the advantages of user-level threads are as follows −
User-level threads can be run on any operating system.
There are no kernel mode privileges required for thread switching in
user-level threads.
Disadvantages of User-Level Threads
Some of the disadvantages of user-level threads are as follows −
Multithreaded applications in user-level threads cannot use
multiprocessing to their advantage.
The entire process is blocked if one user-level thread performs a
blocking operation.
Kernel-Level Threads
Kernel-level threads are handled by the operating system directly and
the thread management is done by the kernel.
The context information for the process as well as the process's threads is
all managed by the kernel. Because of this, kernel-level threads are slower
to create and manage than user-level threads.