Unit 2
Concurrent Processes
Concurrent processes refer to the ability to manage and execute multiple
tasks or processes seemingly simultaneously, improving system
efficiency and responsiveness, even on single-core processors through
techniques like time-slicing.
Examples of Concurrent Processes
Running multiple applications simultaneously on a computer.
Downloading a file while using another application.
Printing a document while working on another document.
Handling multiple network requests concurrently by a web server.
Process Concept
A process is basically a program in execution. The execution of a
process must progress in a sequential fashion.
Stack
The process stack contains temporary data such as
method/function parameters, return addresses, and local variables.
Heap
This is memory dynamically allocated to the process during its run
time.
Text
This section contains the compiled program code, together with the
current activity represented by the value of the Program Counter and
the contents of the processor's registers.
Data
This section contains the global and static variables.
Start
This is the initial state when a process is first created.
Ready
The process is waiting to be assigned to a processor. Ready processes
are waiting for the operating system to allocate the processor to them
so that they can run. A process may enter this state after the Start
state, or while running, if the scheduler interrupts it to assign the
CPU to some other process.
Running
Once the process has been assigned to a processor by the OS
scheduler, the process state is set to running and the processor
executes its instructions.
Waiting
Process moves into the waiting state if it needs to wait for a
resource, such as waiting for user input, or waiting for a file to
become available.
Terminated or Exit
Once the process finishes its execution, or is terminated by the
operating system, it is moved to the terminated state, where it waits
to be removed from main memory.
Principles of Concurrency
A Process Control Block (PCB) is a data structure maintained by the
operating system for every process. It keeps the following information
about each process:
Process State
The current state of the process, i.e., whether it is ready, running,
waiting, and so on.
Process privileges
This is required to allow/disallow access to system resources.
Process ID
A unique identifier for each process in the operating system.
Pointer
A pointer to the parent process.
Program Counter
Program Counter is a pointer to the address of the next instruction to
be executed for this process.
CPU registers
The various CPU registers whose contents must be saved when the
process leaves the running state, so that it can later resume execution.
CPU Scheduling Information
Process priority and other scheduling information which is required
to schedule the process.
Memory management information
This includes information such as the page table, memory limits, and
segment table, depending on the memory management scheme used by the
operating system.
Accounting information
This includes the amount of CPU time used for process execution, time
limits, execution ID, etc.
I/O status information
This includes a list of I/O devices allocated to the process.
Producer/Consumer Problem
The Producer-Consumer problem is a classic concurrency issue where
multiple processes (producers and consumers) share a fixed-size
buffer. Producers add items to the buffer, while consumers remove
items. The challenge is to synchronize access to the buffer to prevent
issues like buffer overflow (producer tries to add to a full buffer) or
buffer underflow (consumer tries to remove from an empty buffer).
Key Concepts:
Producers: These processes generate data and add it to a shared buffer.
Consumers: These processes take data from the shared buffer and
process it.
Buffer: A shared data structure (like a queue) where producers store
data for consumers.
Synchronization: Mechanisms (like semaphores, mutexes, or monitors)
are used to control access to the shared buffer and prevent data
inconsistencies.
Mutual Exclusion
Mutual exclusion, also known as a mutex, is a mechanism that prevents
multiple processes or threads from accessing the same shared variable
or data at the same time. In concurrent programming, this idea is
applied to the critical section: a region of code in which multiple
processes or threads access the same shared resource.
Locks / Mutex
A lock has two possible states: locked and unlocked.
Semaphore
A semaphore is an integer variable manipulated by two atomic
operations: wait() is used in the entry section to gain access to the
critical section, and signal() is used in the exit section to release it.
Atomic Operations
Atomic operations are indivisible hardware-supported instructions (for
example, test-and-set or compare-and-swap) that complete as a single
step, so no other thread can observe them half-finished.
Software-based Techniques
These techniques guarantee that only a single thread at a time can use
the critical section by combining turn variables, flags, and busy
waiting.
Printer Spooling
When many processes send jobs to a shared printer, the spooler queues
them; mutual exclusion over the spool queue ensures that two jobs are
never interleaved in the output.
Electronic Banking
Many people may simultaneously try to access and alter their accounts
in an electronic banking system. To avoid problems such as lost updates
or inconsistent balances, mutual exclusion is required: locks or other
synchronization primitives allow only a single transaction to access a
given bank account at a time, preserving data integrity and avoiding
conflicts.
In the general structure of a critical section, the entry section
handles entry into the critical section: it acquires the resources
needed by the process. The exit section handles exit from the critical
section: it releases the resources and informs the other processes that
the critical section is free.
Any solution to the critical section problem must satisfy the following
three requirements:
Mutual Exclusion
Mutual exclusion implies that only one process can be inside the
critical section at any time. If any other processes require the
critical section, they must wait until it is free.
Progress
Progress means that if a process is not using the critical section,
then it should not stop any other process from accessing it. In other
words, any process can enter a critical section if it is free.
Bounded Waiting
Bounded waiting means that each process must have a limited
waiting time. It should not wait endlessly to access the critical
section.
Dekker’s Algorithm
Dekker’s algorithm was the first provably-correct solution to the
critical section problem. It allows two threads to share a single-use
resource without conflict, using only shared memory for
communication. It avoids the strict alternation of a naïve turn-taking
algorithm, and was one of the first mutual exclusion algorithms to be
invented.
Although there are many versions of Dekker’s Solution, the final or 5th
version is the one that satisfies all of the above conditions and is the
most efficient of them all.
A process is generally represented as :
do {
    // entry section
    critical section
    // exit section
    remainder section
} while (TRUE);
Strengths
Dekker's Algorithm is simple and easy to understand.
The algorithm guarantees mutual exclusion and progress, meaning
that at least one process will eventually enter the critical section.
The algorithm does not require hardware support and can be
implemented in software.
Weaknesses
Earlier versions of the algorithm are prone to starvation, and it
works only for exactly two processes, so it does not generalize
easily to larger systems.
The algorithm requires busy waiting, which can lead to high CPU
usage and inefficiency.
The algorithm is susceptible to race conditions and may fail under
certain conditions.
Time complexity
Dekker's algorithm coordinates exactly two processes. A process trying
to enter its critical section may busy-wait for as long as the other
process remains inside it, so the entry cost is bounded by the length
of the other process's critical section rather than by a constant.
Space complexity
The space complexity of the algorithm is O(1), as it only requires a
few flags and turn variables.
Peterson’s Solution
Peterson’s Algorithm is a classic solution to the critical section
problem for two processes. It ensures mutual exclusion, meaning only
one process can access the critical section at a time, and prevents
race conditions. The algorithm coordinates the two processes with two
shared variables: a flag array, which records which processes want to
enter, and a turn variable, which decides which process yields.
Together these ensure that both processes follow a fair order of
execution, making the solution simple and effective for two-process
scenarios.
// Process Pi
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);
    // critical section
    flag[i] = false;
    // remainder section
} while (true);

// Process Pj
do {
    flag[j] = true;
    turn = i;
    while (flag[i] && turn == i);
    // critical section
    flag[j] = false;
    // remainder section
} while (true);
Waiting Loop: Both processes enter a loop where they check the
flag of the other process and the turn variable:
- If the other process wants to enter (i.e., flag[1 - processID] == true), and
- it is the other process's turn (i.e., turn == 1 - processID),
then the process waits, allowing the other process to enter the
critical section.
This loop ensures that only one process can enter the critical section
at a time, preventing a race condition.
Exiting the Critical Section: After finishing its work in the critical
section, the process resets its flag to false. This signals that it no
longer wants to enter the critical section, and the other process can
now have its turn.
Semaphores
Semaphores are integer variables that are used to solve the critical
section problem by using two atomic operations wait and signal that are
used for process synchronization.
The definitions of wait and signal are as follows:
Wait
The wait operation decrements the value of its argument S if it is
positive. If S is zero or negative, the process waits (spins) until S
becomes positive.
wait(S)
{
    while (S <= 0)
        ;   // busy wait
    S--;
}
Signal
The signal operation increments the value of its argument S.
signal(S)
{
    S++;
}
Types of Semaphores
There are two main types of semaphores: counting semaphores and
binary semaphores. Details about these are given as follows:
Counting Semaphores
These are integer-valued semaphores with an unrestricted value domain.
They are used to coordinate access to a resource that has multiple
instances, where the semaphore count is the number of available
instances.
Binary Semaphores
The binary semaphores are like counting semaphores but their
value is restricted to 0 and 1. The wait operation only works when
the semaphore is 1 and the signal operation succeeds when
semaphore is 0. It is sometimes easier to implement binary
semaphores than counting semaphores.
Advantages of Semaphores
Semaphores allow only one process into the critical section. They
follow the mutual exclusion principle strictly and are much more
efficient than some other methods of synchronization.
There is no resource wastage due to busy waiting: with blocking
semaphores, processor time is not wasted repeatedly checking whether a
condition allows a process to access the critical section.
Semaphores are implemented in the machine independent code of
the microkernel. So they are machine independent.
Disadvantages of Semaphores
Semaphores are complicated so the wait and signal operations
must be implemented in the correct order to prevent deadlocks.
Semaphores are impractical for large-scale use as their use leads to
loss of modularity. This happens because the wait and signal
operations prevent the creation of a structured layout for the
system.
Semaphores may lead to a priority inversion where low priority
processes may access the critical section first and high priority
processes later.
Dining Philosophers Problem
In the Dining Philosophers problem, five philosophers alternately think
and eat at a round table with five shared chopsticks; a philosopher
needs both adjacent chopsticks (represented as semaphores) to eat.
Philosopher i can be written as:
process P[i]:
    while true do
    {
        THINK;
        PICKUP(CHOPSTICK[i], CHOPSTICK[(i + 1) mod 5]);
        EAT;
        PUTDOWN(CHOPSTICK[i], CHOPSTICK[(i + 1) mod 5]);
    }
Message Queues
Multiple processes can read and write data to the message queue without being
connected to each other. Messages are stored on the queue until their recipient
retrieves them. Message queues are quite useful for inter process communication
and are used by most operating systems.
Process Generation
Process generation (process creation) is the act of one process
creating another. In UNIX-like systems, the fork() system call creates
a child process as a copy of the calling parent, and the exec() family
replaces the child's program image. The creating process is called the
parent and the new processes are its children, so the processes in a
system form a tree.