UNIT-2
Concurrent Processes:
Concurrent processing in operating systems refers to the ability to make progress on
multiple tasks at overlapping times, allowing for efficient utilization of resources and
improved performance. This is achieved through techniques such as process scheduling,
multi-threading, and parallel processing.
Types of Concurrency Processing
Process scheduling: - This is the most basic form of concurrency processing, in which
the operating system executes multiple processes one after the other, with each
process given a time slice to execute before being suspended and replaced by the next
process in the queue.
Multi-threading: - This involves the use of threads within a process, with each thread
executing a different task concurrently. Threads share the same memory space within
a process, allowing them to communicate and coordinate with each other easily.
Parallel processing: - This involves the use of multiple processors or cores within a
system to execute multiple tasks simultaneously. Parallel processing is typically used
for computationally intensive tasks, such as scientific simulations or video rendering.
Distributed processing: - This involves the use of multiple computers or nodes
connected by a network to execute a single task. Distributed processing is typically
used for large-scale, data-intensive tasks.
Principles of Concurrency
Both interleaved and overlapped processes can be viewed as examples of concurrent
processes, and both present the same problems.
The relative speed of execution cannot be predicted. It depends on the following:
The activities of other processes
The way operating system handles interrupts
The scheduling policies of the operating system
Problems in Concurrency
Sharing global resources: Sharing of global resources safely is difficult. If two
processes both make use of a global variable and both perform read and write on that
variable, then the order in which various read and write are executed is critical.
Optimal allocation of resources: It is difficult for the operating system to manage
the allocation of resources optimally.
Locating programming errors: It is very difficult to locate a programming error
because reports are usually not reproducible.
Locking the channel: It may be inefficient for the operating system to simply lock
a channel and prevent its use by other processes.
Advantages of Concurrency
Running of multiple applications: It enables multiple applications to run at the same
time.
Better resource utilization: It allows resources that are unused by one application to
be used by other applications.
Better average response time: Without concurrency, each application has to be run
to completion before the next one can be run.
Better performance: It enables better overall performance.
When one application uses only the processor and another application uses only the
disk drive then the time to run both applications concurrently to completion will be
shorter than the time to run each application consecutively.
Drawbacks of Concurrency
Multiple applications must be protected from one another.
Multiple applications must be coordinated through additional mechanisms.
The operating system incurs additional performance overhead and complexity in
switching among applications.
Sometimes running too many applications concurrently leads to severely degraded
performance.
Issues of Concurrency
Non-atomic: Operations that are non-atomic but interruptible by multiple processes
can cause problems.
Race conditions: A race condition occurs when the outcome depends on which of
several processes reaches a given point first.
Blocking: Processes can block waiting for resources. A process could be blocked for a
long period of time waiting for input from a terminal. If the process is required to
periodically update some data, this would be very undesirable.
Starvation: Starvation occurs when a process is perpetually denied the service it
needs to make progress.
Deadlock: Deadlock occurs when two or more processes are each blocked waiting for
the other, so neither can proceed to execute.
Race conditions
A race condition is a situation that arises in a multithreaded environment when multiple
threads or processes access and manipulate shared resources concurrently. This can lead to
unpredictable and undesirable outcomes, as the final state of the resource depends on the
order in which the threads execute.
Types of Race Conditions
1. Read-Modify-Write: This occurs when two processes read a value and then write
back a new value. If not handled properly, it can cause inconsistent data.
2. Check-Then-Act: This happens when two processes check a value and then act on it.
If the value changes between the check and the act, it can lead to incorrect behaviour.
Identification
Detecting race conditions can be challenging. Tools for static and dynamic analysis are used
to find race conditions. Static analysis tools scan the code without executing it, while
dynamic analysis tools monitor the program during execution.
Prevention
Locks and Mutexes: Implementing locks (like mutexes) ensures that only one thread
can access a resource at a time, preventing conflicting operations.
Synchronization: Proper synchronization techniques, such as semaphores, ensure that
threads work in a coordinated sequence when accessing shared data.
Volatile Keyword: In Java, the volatile keyword ensures that the value of a variable is
always read from the main memory, preventing threads from caching the value
locally.
Synchronized Keyword: The synchronized keyword in Java ensures that only one
thread can access a method or code block at a time, protecting shared resources.
The general structure of a process that uses a critical section is:
do {
// Entry Section: request permission to enter the critical section
// Critical Section: access the shared resource
// Exit Section: release the critical section
// Remainder Section: other work
} while (true);
Example Problem:
Two processes want to update a shared variable counter. If both do it at the same time without
control, the final value may be wrong.
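This scenario can be sketched in Python (an illustrative sketch; the original does not name a language). Two threads perform the read-modify-write on the shared counter, and a threading.Lock serializes the update so no increments are lost:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:          # entry section: acquire the lock
            counter += 1    # critical section: read-modify-write is now safe

# Two "processes" (threads here) updating the same shared variable
threads = [threading.Thread(target=increment, args=(100000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000; without the lock, updates can be lost
```

Removing the `with lock:` line reintroduces the race: both threads may read the same old value and write back the same new one, losing an increment.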
Producer-Consumer problem
The Producer-Consumer problem is a classic synchronization issue in operating systems. It
involves two types of processes: producers, which generate data, and consumers, which
process that data. Both share a common buffer. The challenge is to ensure that the producer
doesn’t add data to a full buffer and the consumer doesn’t remove data from an empty buffer
while avoiding conflicts when accessing the buffer.
Mutual exclusion
It is a fundamental concept in operating systems and concurrent programming, ensuring that
no two processes can access a shared resource or critical section simultaneously. This concept
is crucial for preventing race conditions, where the outcome of processes depends on the
sequence of their execution.
For example: a changing room in a mall can be used by only one person at a time.
Key Principles of Mutual Exclusion
Mutual exclusion ensures that when one process is accessing a shared resource, no
other process can access that resource until the first process has finished. This is
essential for maintaining data consistency and preventing conflicts.
Conditions for Mutual Exclusion
1. Mutual Exclusion: Only one process can be in the critical section at a time.
2. Progress: If no process is in the critical section, the decision about which process
may enter next cannot be postponed indefinitely.
3. Bounded Waiting: There must be a limit on the number of times other processes can
enter the critical section after a process has made a request to enter.
4. No Assumptions: No assumptions should be made about the relative speeds or the
number of CPUs.
Implementing Mutual Exclusion
Software Methods
Software methods rely on algorithms to ensure mutual exclusion. Examples include:
Dekker's Algorithm
Peterson's Algorithm
Lamport's Bakery Algorithm
These algorithms use busy-waiting, where a process repeatedly checks if it can enter
the critical section.
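As a sketch of this busy-waiting style, here is Peterson's algorithm for two processes, simulated with Python threads (CPython's interpreter happens to provide the in-order memory behaviour the algorithm assumes; on real hardware, memory barriers would also be needed):

```python
import threading

flag = [False, False]   # flag[i] is True while process i wants to enter
turn = 0                # which process must yield when both want in
counter = 0             # shared resource protected by the algorithm

def process(i, iterations):
    global turn, counter
    other = 1 - i
    for _ in range(iterations):
        # Entry section
        flag[i] = True
        turn = other                          # give the other process priority
        while flag[other] and turn == other:
            pass                              # busy wait
        # Critical section
        counter += 1
        # Exit section
        flag[i] = False

t0 = threading.Thread(target=process, args=(0, 2000))
t1 = threading.Thread(target=process, args=(1, 2000))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 4000: no updates are lost
```

Setting `turn = other` before spinning is what gives the algorithm both mutual exclusion and bounded waiting: when both processes want in, the one that wrote `turn` last is the one that waits.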
Hardware Methods
Hardware methods use special machine instructions to achieve mutual exclusion.
Examples include:
Test-and-Set
Compare-and-Swap
These instructions are atomic, meaning they complete without interruption, ensuring
that only one process can modify a shared variable at a time.
Programming Language Methods
Some programming languages and operating systems provide built-in support for
mutual exclusion through constructs like:
Locks (Mutexes)
Semaphores
Monitors
These constructs help manage access to shared resources, ensuring that only one
process can access the resource at a time.
Producer-Consumer Solution Using Semaphores
Producer Code
do {
# Produce an item
wait(empty)
wait(mutex)
# place in buffer
signal(mutex)
signal(full)
} while (true)
When the producer produces an item, the value of empty is reduced by 1 because one slot
will be filled now.
The value of mutex is also reduced to prevent the consumer from accessing the buffer.
Now, the producer has placed the item and thus the value of full is increased by 1. The value
of mutex is also increased by 1 because the task of the producer has been completed and the
consumer can access the buffer.
Consumer Code
do {
wait(full)
wait(mutex)
# consume item from buffer
signal(mutex)
signal(empty)
} while (true)
When the consumer removes an item from the buffer, the value of full is reduced by 1 and
the value of mutex is also reduced so that the producer cannot access the buffer at this
moment.
Now, the consumer has consumed the item, thus increasing the value of empty by 1. The
value of mutex is also increased so that the producer can access the buffer now.
Using semaphores to solve the Producer-Consumer problem ensures that producers and
consumers access the shared buffer in an organized way. Semaphores help manage the
buffer's state, preventing the producer from adding data when the buffer is full and
stopping the consumer from removing data when the buffer is empty. This approach ensures
smooth, conflict-free data processing between producers and consumers.
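The producer and consumer pseudocode above can be realized with Python's threading primitives (an illustrative sketch; the buffer size and item count are arbitrary choices):

```python
import threading
from collections import deque

BUFFER_SIZE = 5
buffer = deque()
empty = threading.Semaphore(BUFFER_SIZE)  # counts empty slots, initially n
full = threading.Semaphore(0)             # counts full slots, initially 0
mutex = threading.Lock()                  # binary semaphore guarding the buffer

def producer(items):
    for item in items:
        empty.acquire()          # wait(empty): block if the buffer is full
        with mutex:              # wait(mutex) ... signal(mutex)
            buffer.append(item)  # place in buffer
        full.release()           # signal(full): one more full slot

def consumer(count, consumed):
    for _ in range(count):
        full.acquire()           # wait(full): block if the buffer is empty
        with mutex:
            consumed.append(buffer.popleft())  # consume item from buffer
        empty.release()          # signal(empty): one more empty slot

consumed = []
p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20, consumed))
p.start(); c.start()
p.join(); c.join()
print(consumed)  # all 20 items, in production order
```

Note that the semaphores are acquired before the mutex, never inside it; taking `mutex` first and then blocking on `full`/`empty` could deadlock.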
Test and Set

while (1) {
while (TestAndSet(lock)); // Busy wait until the lock becomes false
// Critical section
lock = false; // Release the lock
// Remainder section
}
In this pseudocode, the TestAndSet function checks the value of the lock. If the lock is false,
it sets it to true and allows the process to enter the critical section. If the lock is true, the
process keeps busy-waiting until the lock becomes false.
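A sketch of this mechanism in Python (Python has no hardware test-and-set instruction, so an internal threading.Lock stands in for the instruction's atomicity; the class and counts below are illustrative):

```python
import threading

class TestAndSetLock:
    def __init__(self):
        self.flag = False
        self._atomic = threading.Lock()  # simulates hardware atomicity

    def test_and_set(self):
        # Atomically: return the old value and set the flag to True
        with self._atomic:
            old = self.flag
            self.flag = True
            return old

    def acquire(self):
        while self.test_and_set():  # busy wait while the lock is held
            pass

    def release(self):
        self.flag = False

counter = 0
lock = TestAndSetLock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        lock.acquire()   # entry section
        counter += 1     # critical section
        lock.release()   # exit section

threads = [threading.Thread(target=worker, args=(5000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 10000
```

The spin in `acquire` is exactly the busy-waiting drawback discussed below: a waiting thread burns CPU until the holder releases the lock.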
Characteristics
Mutual Exclusion
The Test and Set mechanism guarantees mutual exclusion, meaning only one process can
enter the critical section at a time. This is achieved by the atomic nature of the Test and Set
instruction.
Deadlock-Free
The mechanism is deadlock-free: if the critical section is free, at least one waiting
process will succeed in setting the lock and entering it, so the system as a whole always
makes progress.
Bounded Waiting
However, Test and Set does not guarantee bounded waiting. A process may starve if it
repeatedly finds the lock to be true and keeps waiting indefinitely.
Spin Lock
The mechanism suffers from busy-waiting (spinlock): a process loops continuously until it
can enter the critical section, which can lead to high CPU usage.
Architectural Neutrality
Test and Set is not architecturally neutral as it depends on hardware support for the Test and
Set instruction. Some platforms may not provide this instruction.
Example Scenario
Consider a scenario where multiple processes need to access a shared resource, such as a
printer. Using the Test and Set mechanism, the first process to execute the Test and Set
instruction will set the lock and enter the critical section. Other processes will keep busy-
waiting until the first process exits and releases the lock.
Test and Set is an effective hardware synchronization mechanism that ensures mutual
exclusion and prevents deadlock, but it may cause starvation and high CPU usage due to
busy-waiting.
Classical Problems of Synchronization
Semaphores can be used in other synchronization problems besides mutual exclusion.
Below are some classical problems depicting flaws of process synchronization in
systems where cooperating processes are present.
We will discuss the following three problems:
1. Bounded Buffer (Producer-Consumer) Problem
2. Dining Philosophers Problem
3. The Readers Writers Problem
Bounded Buffer Problem
Because the buffer pool has a maximum size, this problem is often called the Bounded
buffer problem.
This problem is generalised in terms of the Producer Consumer problem, where
a finite buffer pool is used to exchange messages between producer and consumer
processes.
The solution to this problem is to create two counting semaphores, "full" and
"empty", to keep track of the current number of full and empty buffer slots
respectively.
Producers produce items and consumers consume them, but each may operate on
only one buffer slot at a time.
The main complexity of this problem is that we must maintain counts of both the
empty and full slots that are available.
Bounded buffer Problem Example
There is a buffer of n slots and each slot is capable of storing one unit of data. There
are two processes running, namely, producer and consumer, which are operating on
the buffer.
Process Generation
Process generation in an OS refers to the creation of new processes during the execution of a
program. A process is a program in execution, and operating systems provide mechanisms
for processes to create other processes — often referred to as child processes.
Main component:
Process States:
After generation, the process goes through various states: New → Ready → Running
→ Waiting → Terminated.
System Call
A system call is a programmatic way for a user-level process to request a service from
the operating system kernel. These services can involve hardware operations (e.g.,
reading from disk), process control, file management, and more.
Why System Calls Are Needed:
User programs cannot access hardware or critical resources directly.
To maintain security and stability, requests must go through a controlled interface.
System calls act as the interface between user programs and the OS kernel.
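For instance, file operations in a user program end up as system calls into the kernel. In Python, the os module exposes thin wrappers over the corresponding POSIX calls (the file name here is an arbitrary example):

```python
import os

path = "syscall_demo.txt"  # example scratch file

# Each os.* call below traps into the kernel via a system call
fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)  # open(2)
os.write(fd, b"hello via system calls")                    # write(2)
os.close(fd)                                               # close(2)

fd = os.open(path, os.O_RDONLY)                            # open(2)
data = os.read(fd, 1024)                                   # read(2)
os.close(fd)                                               # close(2)
os.remove(path)                                            # unlink(2)

print(data)  # b'hello via system calls'
```

The program never touches the disk directly; every request crosses the user/kernel boundary through this controlled interface.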
Types of System Calls: