
Concurrent Processes

Unit 2

Concurrent Processes
Concurrent processes refer to the ability to manage and execute multiple
tasks or processes seemingly simultaneously, improving system
efficiency and responsiveness, even on single-core processors through
techniques like time-slicing.
Examples of Concurrent Processes
 Running multiple applications simultaneously on a computer.
 Downloading a file while using another application.
 Printing a document while working on another document.
 Handling multiple network requests concurrently by a web server.

Process Concept
A process is basically a program in execution. The execution of a process must progress in a sequential fashion.

A process is defined as an entity which represents the basic unit of work to be implemented in the system.

When a program is loaded into memory and becomes a process, it can be divided into four sections: stack, heap, text, and data.

Prepared by: Mrs Ankita Tripathi [Asst. Professor, KCNIT College of Education, Banda]



Stack
The process Stack contains the temporary data such as
method/function parameters, return address and local variables.
Heap
This is dynamically allocated memory to a process during its run
time.
Text
This contains the compiled program code. The current activity is represented by the value of the Program Counter and the contents of the processor's registers.
Data
This section contains the global and static variables.


Process Life Cycle


When a process executes, it passes through different states. These stages
may differ in different operating systems, and the names of these states
are also not standardized.

Start

This is the initial state when a process is first started/created.

Ready
The process is waiting to be assigned to a processor. Ready processes are waiting for the operating system to allocate the processor to them so that they can run. A process may enter this state after the Start state, or after being preempted while running so that the scheduler can assign the CPU to some other process.
Running
Once the process has been assigned to a processor by the OS
scheduler, the process state is set to running and the processor
executes its instructions.


Waiting
Process moves into the waiting state if it needs to wait for a
resource, such as waiting for user input, or waiting for a file to
become available.
Terminated or Exit
Once the process finishes its execution, or is terminated by the operating system, it is moved to the terminated state, where it waits to be removed from main memory.


Principles of Concurrency

The principles of concurrency in operating systems are designed to


ensure that multiple processes or threads can execute efficiently and
effectively, without interfering with each other or causing deadlock.

 Interleaving − Interleaving refers to the interleaved execution of multiple processes or threads. The operating system uses a scheduler to determine which process or thread to execute at any given time. Interleaving allows for efficient use of CPU resources and ensures that all processes or threads get a fair share of CPU time.

 Synchronization − Synchronization refers to the coordination of multiple processes or threads to ensure that they do not interfere with each other. This is done through the use of synchronization primitives such as locks, semaphores, and monitors. These primitives allow processes or threads to coordinate access to shared resources such as memory and I/O devices.

 Mutual exclusion − Mutual exclusion refers to the principle of ensuring that only one process or thread can access a shared resource at a time. This is typically implemented using locks or semaphores to ensure that multiple processes or threads do not access a shared resource simultaneously.

 Deadlock avoidance − Deadlock is a situation in which two or more processes or threads are each waiting for the other to release a resource, so that none of them can proceed. Operating systems use various techniques, such as resource allocation graphs and deadlock prevention algorithms, to avoid deadlock.


 Process or thread coordination − Processes or threads may need to coordinate their activities to achieve a common goal. This is typically achieved using synchronization primitives such as semaphores, or message passing mechanisms such as pipes or sockets.

 Resource allocation − Operating systems must allocate resources such as memory, CPU time, and I/O devices to multiple processes or threads in a fair and efficient manner. This is typically achieved using scheduling algorithms such as round-robin, priority-based, or real-time scheduling.


Process Control Block (PCB)


A Process Control Block is a data structure maintained by the Operating
System for every process. The PCB is identified by an integer process
ID (PID).

Process State
The current state of the process, i.e., whether it is ready, running, or waiting.
Process privileges
This is required to allow/disallow access to system resources.
Process ID
Unique identification for each process in the operating system.
Pointer
A pointer to the parent process.
Program Counter
The Program Counter is a pointer to the address of the next instruction to be executed for this process.
CPU registers
The various CPU registers whose contents must be saved when the process leaves the running state, so that it can later resume execution.
CPU Scheduling Information
Process priority and other scheduling information which is required to schedule the process.
Memory management information
This includes page table, memory limits, and segment table information, depending on the memory system used by the operating system.


Accounting information
This includes the amount of CPU used for process execution, time
limits, execution ID etc.
IO status information
This includes a list of I/O devices allocated to the process.

The architecture of a PCB is completely dependent on Operating System


and may contain different information in different operating systems.
Here is a simplified diagram of a PCB −

The PCB is maintained for a process throughout its lifetime, and is


deleted once the process terminates.


Producer/Consumer Problem
The Producer-Consumer problem is a classic concurrency issue where
multiple processes (producers and consumers) share a fixed-size
buffer. Producers add items to the buffer, while consumers remove
items. The challenge is to synchronize access to the buffer to prevent
issues like buffer overflow (producer tries to add to a full buffer) or
buffer underflow (consumer tries to remove from an empty buffer).

Key Concepts:
 Producers: These processes generate data and add it to a shared buffer.
 Consumers: These processes take data from the shared buffer and
process it.
 Buffer: A shared data structure (like a queue) where producers store
data for consumers.
 Synchronization: Mechanisms (like semaphores, mutexes, or monitors)
are used to control access to the shared buffer and prevent data
inconsistencies.
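The coordination described above can be sketched with Python threads, using two counting semaphores (one counting empty slots, one counting filled slots) and a mutex around the buffer. The buffer size and item counts are illustrative choices, not part of the original problem statement.

```python
import threading
from collections import deque

BUFFER_SIZE = 5
buffer = deque()                          # shared fixed-size buffer
mutex = threading.Lock()                  # guards the buffer itself
empty = threading.Semaphore(BUFFER_SIZE)  # counts free slots
filled = threading.Semaphore(0)           # counts items in the buffer
consumed = []

def producer(n_items):
    for i in range(n_items):
        empty.acquire()          # wait for a free slot (blocks if buffer full)
        with mutex:
            buffer.append(i)     # add the item to the buffer
        filled.release()         # signal that an item is available

def consumer(n_items):
    for _ in range(n_items):
        filled.acquire()         # wait for an item (blocks if buffer empty)
        with mutex:
            item = buffer.popleft()
        empty.release()          # signal that a slot is free
        consumed.append(item)

p = threading.Thread(target=producer, args=(20,))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start(); p.join(); c.join()
```

With one producer and one consumer, items come out in the order they went in; the semaphores prevent both overflow and underflow without the consumer ever reading an empty buffer.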


Mutual Exclusion
Mutual exclusion is a property of concurrency control that prevents multiple processes or threads from accessing the same shared variable or data at the same time. In concurrent programming, this idea is applied to the critical section: a region of code in which multiple processes or threads access the same shared resource. A lock object that enforces mutual exclusion is known as a mutex.

Requisites of Mutual Exclusion


 Only one process can enter its critical section at a particular time.
 A process is permitted to execute in its critical section only for a
bounded time.
 No process (which is not in its critical section) can prevent another
process from entering into its critical section.
 No process must wait indefinitely to enter its critical section.

Techniques of Mutual Exclusion in Synchronization

Mutual exclusion can be achieved using a variety of strategies, such as the following −

Locks / Mutex

Synchronization primitives called locks or mutexes (short for mutual exclusion) are implemented to maintain consistency of the value or status of the shared resources.

There are two possible states for a lock: locked and unlocked.

Prepared by:Mrs Ankita Tripathi[Asst.Professor,KCNIT College of Education,Banda] Page 10


Concurrent Processes
Unit 2

A process or thread must acquire the lock before it can use the shared resource. If the lock is already held by another thread, the requesting thread is blocked until the lock is released.
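A minimal sketch of this with Python's threading.Lock (names illustrative): without the lock, the concurrent read-modify-write increments below could interleave and lose updates.

```python
import threading

counter = 0
lock = threading.Lock()          # two states: locked and unlocked

def increment(n):
    global counter
    for _ in range(n):
        with lock:               # acquire; blocks if another thread holds it
            counter += 1         # critical section: read-modify-write
                                 # the lock is released automatically on exit

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because every increment happens while holding the lock, the final count is exactly the number of increments performed.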

Semaphore

Semaphore is an advanced synchronization tool. It uses two atomic operations, wait() and signal().

The wait() instruction is used in the entry section to gain access to the
critical section.

The signal() operation is used to release control of the shared resource.

Semaphores can be of two types: binary semaphores and counting semaphores.

In a counting semaphore, the semaphore maintains a counter. When a thread needs to enter the critical section, it decrements the counter; once the counter reaches zero, further threads block, signifying that the critical section (or every instance of the resource) is in use.

Atomic Operations

Certain processors offer atomic operations which may be employed to guarantee mutual exclusion without using locks or semaphores.

Atomic operations are advantageous for modifying shared variables because they are indivisible and cannot be interrupted partway through.

For instance, an atomic compare-and-swap (CAS) operation alters a variable only if its current value matches an expected value.
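Python exposes no hardware CAS instruction, so the sketch below only illustrates the semantics of compare-and-swap: the `AtomicInt` class and its methods are hypothetical names, and a lock stands in for the indivisibility that a real CAS gets from a single hardware instruction.

```python
import threading

class AtomicInt:
    """Illustrates compare-and-swap (CAS) semantics.

    A real CAS is one indivisible hardware instruction; here a lock
    merely simulates that indivisibility for demonstration."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def compare_and_swap(self, expected, new):
        """Set the value to `new` only if it currently equals `expected`.
        Returns True on success, False otherwise."""
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

    def get(self):
        with self._lock:
            return self._value

def cas_increment(a):
    # Lock-free style update: retry the CAS until no other thread
    # changed the value between our read and our swap.
    while True:
        old = a.get()
        if a.compare_and_swap(old, old + 1):
            return

counter = AtomicInt(0)
threads = [threading.Thread(target=lambda: [cas_increment(counter) for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The retry loop is the characteristic CAS usage pattern: a failed swap means another thread won the race, so the update is recomputed from the fresh value rather than blocking.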


Software-based Techniques

Mutual exclusion can be achieved using a variety of software-based algorithms and strategies, including Peterson's algorithm, Dekker's algorithm, or Lamport's bakery algorithm.

These techniques combine turn variables, flags, and busy-waiting to guarantee that only a single thread at a time can execute the critical section.

Practical Instances of Mutual Exclusion in Synchronization

A few real-life instances of mutual exclusion in synchronization −

Printer Spooling

In a multi-user operating system, several processes or users may request printing at once. Mutual exclusion guarantees that only one process at a time has access to the printer. A lock or semaphore serializes access to the printer, avoids conflicts, and ensures that print jobs are handled in the proper order.

Bank Account Transactions

Many users may simultaneously try to read and update their accounts in an electronic banking system. Mutual exclusion is required to avoid problems such as lost updates or inconsistent balances. Locks or other synchronization primitives allow only one transaction at a time to access a given bank account, maintaining data integrity and avoiding conflicts.


Traffic Signal Control

Traffic signals at an intersection must be coordinated to manage the movement of vehicles safely. Mutual exclusion prevents conflicting signals from being displayed at the same time: only one direction receives a green signal at a time, which promotes efficient and orderly traffic flow.

Resource Allocation in Shared Database

Mutual exclusion is essential to preserving data consistency in database systems where multiple processes or transactions access shared data concurrently. For instance, when two separate transactions try to modify the same record at the same time, mutual exclusion mechanisms ensure that only one of them can alter the data at a time, preventing conflicts and maintaining data integrity.

Accessing Shared Memory in Real-Time Systems

Mutual exclusion is also required in real-time systems where tasks use shared memory to communicate or cooperate. Critical shared-memory regions are protected with synchronization primitives such as locks or semaphores, which ensure that only one task at a time can access and modify the shared region.


Critical Section Problem


The critical section is a code segment where the shared variables can be
accessed. An atomic action is required in a critical section i.e. only one
process can execute in its critical section at a time. All the other
processes have to wait to execute in their critical sections.
A diagram that demonstrates the critical section is as follows −

In the above diagram, the entry section handles the entry into the critical
section. It acquires the resources needed for execution by the process.
The exit section handles the exit from the critical section. It releases the
resources and also informs the other processes that the critical section is
free.


Solution to the Critical Section Problem

The critical section problem needs a solution to synchronize the different processes. The solution to the critical section problem must satisfy the following conditions −

 Mutual Exclusion
Mutual exclusion implies that only one process can be inside the
critical section at any time. If any other processes require the
critical section, they must wait until it is free.
 Progress
Progress means that if a process is not using the critical section,
then it should not stop any other process from accessing it. In other
words, any process can enter a critical section if it is free.
 Bounded Waiting
Bounded waiting means that each process must have a limited
waiting time. It should not wait endlessly to access the critical
section.


Dekker’s Algorithm
Dekker’s algorithm was the first provably-correct solution to the critical section problem. It allows two threads to share a single-use resource without conflict, using only shared memory for communication. It avoids the strict alternation of a naïve turn-taking algorithm, and was one of the first mutual exclusion algorithms to be invented.
Although there are many versions of Dekker’s solution, the final or 5th version is the one that satisfies all of the above conditions and is the most efficient of them all.
A process is generally represented as :
do {
    // entry section
    critical section
    // exit section
    remainder section
} while (TRUE);

One of the most popular software solutions is Dekker's algorithm, devised by the Dutch mathematician Th. J. Dekker and described by Edsger Dijkstra in 1965. It is a simple algorithm that allows only one process to access a shared resource at a time. The algorithm achieves mutual exclusion by using two flags that indicate each process's intent to enter the critical section, together with a turn variable. By checking the other process's flag and deferring via the turn variable when both want to enter, the algorithm ensures that only one process enters the critical section at a time.
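The final version of Dekker's algorithm can be sketched with Python threads as below. This is illustrative code: CPython's interpreter lock happens to supply the sequentially consistent memory the algorithm assumes, which real multiprocessor hardware does not guarantee without barriers, and the switch interval is lowered only so the demonstration finishes quickly.

```python
import sys
import threading

sys.setswitchinterval(5e-5)  # encourage frequent thread switches for the demo

flag = [False, False]   # flag[i]: process i wants to enter
turn = 0                # who yields when both want to enter
counter = 0

def enter(i):
    j = 1 - i
    flag[i] = True
    while flag[j]:              # the other process also wants in
        if turn == j:           # it is the other's turn: back off
            flag[i] = False
            while turn == j:    # busy-wait until it becomes our turn
                pass
            flag[i] = True      # raise our intent again and re-check

def leave(i):
    global turn
    turn = 1 - i                # hand the turn to the other process
    flag[i] = False

def worker(i, n):
    global counter
    for _ in range(n):
        enter(i)
        counter += 1            # critical section
        leave(i)

t0 = threading.Thread(target=worker, args=(0, 2000))
t1 = threading.Thread(target=worker, args=(1, 2000))
t0.start(); t1.start(); t0.join(); t1.join()
```

If mutual exclusion held throughout, no increment is lost and the counter equals the total number of critical section entries.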


Strengths
 Dekker's Algorithm is simple and easy to understand.
 The algorithm guarantees mutual exclusion and progress, meaning
that at least one process will eventually enter the critical section.
 The algorithm does not require hardware support and can be
implemented in software.

Weaknesses
 The algorithm is prone to starvation since it does not ensure strict fairness: one process could repeatedly enter the critical section while the other waits.
 The algorithm requires busy waiting, which can lead to high CPU usage and inefficiency.
 The algorithm assumes sequentially consistent memory; on modern processors that reorder memory operations, it may fail unless memory barriers are used.

Time complexity
 The time complexity of the algorithm is O(n^2) in the worst case,
where n is the number of processes.
 This is because each process may need to wait for the other process
to finish its critical section, leading to a potential infinite loop.

Space complexity
 The space complexity of the algorithm is O(1), as it only requires a
few flags and turn variables.


Peterson’s Solution
Peterson’s Algorithm is a classic solution to the critical section problem in process synchronization. It ensures mutual exclusion, meaning only one process can access the critical section at a time, and avoids race conditions. The algorithm uses two shared variables to manage the turn-taking mechanism between two processes, ensuring that both processes follow a fair order of execution. It is simple and effective for solving synchronization issues in two-process scenarios.
Peterson’s Solution is a mutual exclusion algorithm used in operating
systems to ensure that two processes do not enter their critical sections
simultaneously. It provides a simple and effective way to manage
access to shared resources by two processes, preventing race
conditions. The solution uses two shared variables to coordinate the
two processes: a flag array and a turn variable.

Algorithm for process Pi:

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);
    // critical section
    flag[i] = false;
    // remainder section
} while (true);

Algorithm for process Pj:

do {
    flag[j] = true;
    turn = i;
    while (flag[i] && turn == i);
    // critical section
    flag[j] = false;
    // remainder section
} while (true);


Here’s how Peterson’s Algorithm works step-by-step:

 Initial Setup: Initially, both processes set their respective flag


values to false, meaning neither wants to enter the critical section.
The turn variable is set to the ID of one of the processes
(either 0 or 1), indicating that it’s that process’s turn to enter.

 Intention to Enter: When a process wants to enter the critical


section, it sets its flag value to true signaling its intent to enter.
 Set the Turn: The process then sets the turn variable to the other process's ID, giving the other process priority if both are trying to enter at the same time.

 Waiting Loop: Both processes enter a loop where they check the
flag of the other process and the turn variable:
o If the other process wants to enter (i.e., flag[1 - processID]
== true), and
o It’s the other process’s turn (i.e., turn == 1 - processID),
then the process waits, allowing the other process to enter
the critical section.
This loop ensures that only one process can enter the critical section
at a time, preventing a race condition.

 Critical Section: Once a process successfully exits the loop, it


enters the critical section, where it can safely access or modify the
shared resource without interference from the other process.

 Exiting the Critical Section: After finishing its work in the critical
section, the process resets its flag to false. This signals that it no
longer wants to enter the critical section, and the other process can
now have its turn.
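The steps above can be run directly with Python threads. This is an illustrative sketch: CPython's interpreter lock provides the sequentially consistent memory Peterson's algorithm assumes (real multiprocessor hardware needs memory barriers), and the switch interval is lowered only to make the demo finish quickly.

```python
import sys
import threading

sys.setswitchinterval(5e-5)     # encourage frequent thread switches for the demo

flag = [False, False]           # flag[i]: process i intends to enter
turn = 0                        # whose turn it is to wait
counter = 0

def lock(i):
    global turn
    j = 1 - i
    flag[i] = True              # step 2: announce intent to enter
    turn = j                    # step 3: give the other process priority
    while flag[j] and turn == j:
        pass                    # step 4: busy-wait while the other is ahead

def unlock(i):
    flag[i] = False             # step 6: no longer interested

def worker(i, n):
    global counter
    for _ in range(n):
        lock(i)
        counter += 1            # step 5: critical section
        unlock(i)

t0 = threading.Thread(target=worker, args=(0, 2000))
t1 = threading.Thread(target=worker, args=(1, 2000))
t0.start(); t1.start(); t0.join(); t1.join()
```

Since every increment happens inside the critical section, a lost update would show up as a final count below the number of entries.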


Example of Peterson’s Algorithm

Peterson’s solution is often used as a simple example of mutual


exclusion in concurrent programming. Here are a few scenarios where
it can be applied:

 Accessing a shared printer: Peterson’s solution ensures that only


one process can access the printer at a time when two processes are
trying to print documents.

 Reading and writing to a shared file: It can be used when two


processes need to read from and write to the same file, preventing
concurrent access issues.

 Competing for a shared resource: When two processes are


competing for a limited resource, such as a network connection or
critical hardware, Peterson’s solution ensures mutual exclusion to
avoid conflicts.
Advantages of the Peterson’s Solution
1. With Peterson’s solution, multiple processes can access and share a
resource without causing any resource conflicts.
2. Every process has a chance to be carried out.
3. It uses straightforward logic and is easy to put into practice.
4. Since it is entirely software based and operates in user mode, it can be used on any hardware.
5. It eliminates the chance of a deadlock.

Disadvantages of the Peterson’s Solution


1. A process may spend a long time waiting for the other process to exit the critical section. We call it busy waiting.
2. On systems that have multiple CPUs, the algorithm may not function correctly, because modern processors can reorder memory operations.


Semaphores
Semaphores are integer variables that are used to solve the critical
section problem by using two atomic operations wait and signal that are
used for process synchronization.
The definitions of wait and signal are as follows −

 Wait
The wait operation waits until the value of its argument S becomes positive, and then decrements it.
wait(S)
{
    while (S <= 0)
        ; // busy wait
    S--;
}
 Signal
The signal operation increments the value of its argument S.
signal(S)
{
    S++;
}

Types of Semaphores

There are two main types of semaphores, i.e., counting semaphores and binary semaphores. Details about these are given as follows −

 Counting Semaphores

These are integer valued semaphores with an unrestricted value domain. They are used to coordinate resource access, where the semaphore count is the number of available resources. When a resource is released, the semaphore count is incremented; when a resource is acquired, the count is decremented.

 Binary Semaphores
The binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The wait operation only succeeds when the semaphore is 1, and the signal operation succeeds when the semaphore is 0. Binary semaphores are sometimes easier to implement than counting semaphores.
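Both kinds can be sketched with Python's threading module: a counting semaphore initialized to 3 guards a pool of three identical resources, while a plain lock plays the role of a binary semaphore protecting the bookkeeping counters. The pool size, thread count, and sleep are illustrative choices.

```python
import threading
import time

pool = threading.Semaphore(3)    # counting semaphore: 3 identical resources
guard = threading.Lock()         # binary semaphore/mutex for the counters
in_use = 0
peak = 0

def use_resource():
    global in_use, peak
    pool.acquire()               # wait(): blocks once all 3 resources are taken
    with guard:
        in_use += 1
        peak = max(peak, in_use) # record the maximum concurrent holders
    time.sleep(0.01)             # simulate holding the resource
    with guard:
        in_use -= 1
    pool.release()               # signal(): return the resource to the pool

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

However the ten threads interleave, the semaphore's count guarantees that no more than three of them ever hold a resource at once.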

Advantages of Semaphores
 Semaphores allow only one process into the critical section. They
follow the mutual exclusion principle strictly and are much more
efficient than some other methods of synchronization.
 With a blocking implementation, there is no resource wastage due to busy waiting: processor time is not wasted repeatedly checking whether a condition is fulfilled; a waiting process is instead suspended until it may proceed.
 Semaphores are implemented in the machine independent code of
the microkernel. So they are machine independent.

Disadvantages of Semaphores
 Semaphores are complicated so the wait and signal operations
must be implemented in the correct order to prevent deadlocks.
 Semaphores are impractical for large scale use as their use leads to loss of modularity. This happens because the wait and signal operations prevent the creation of a structured layout for the system.
 Semaphores may lead to a priority inversion where low priority
processes may access the critical section first and high priority
processes later.


Classical Problem in Concurrency


Dining Philosopher Problem
The Dining Philosopher Problem states that K philosophers are seated around a circular table with one chopstick between each pair of philosophers. A philosopher may eat only if he can pick up both chopsticks adjacent to him. Each chopstick may be picked up by either of its two neighboring philosophers, but not by both at once.

Semaphore Solution to Dining Philosopher

process P[i]:
while true do
{
    THINK;
    PICKUP(CHOPSTICK[i], CHOPSTICK[(i+1) mod 5]);
    EAT;
    PUTDOWN(CHOPSTICK[i], CHOPSTICK[(i+1) mod 5]);
}


The correctness properties it needs to satisfy are:


 Mutual Exclusion Principle: no two philosophers can hold the same chopstick simultaneously.
 Freedom from deadlock: every philosopher gets the chance to eat within some finite time.
 Freedom from starvation: when several philosophers are waiting, each of them gets a chance to eat in a while.
 No strict alternation.
 Proper utilization of time.

There are three states of the philosopher: THINKING, HUNGRY, and EATING. Here there are two semaphores: a mutex and a semaphore array for the philosophers. The mutex ensures that no two philosophers execute a pickup or putdown operation at the same time. The array is used to control the behavior of each philosopher. But semaphores can still result in deadlock due to programming errors.
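One standard way to avoid the deadlock just mentioned is to admit at most K−1 philosophers to the table at once, which breaks the circular wait. The sketch below (round counts illustrative) models chopsticks as locks and the "room" as a counting semaphore.

```python
import threading

K = 5
chopstick = [threading.Lock() for _ in range(K)]  # one chopstick per gap
room = threading.Semaphore(K - 1)  # at most K-1 philosophers compete at once,
                                   # which breaks the circular wait (no deadlock)
meals = [0] * K

def philosopher(i, rounds):
    for _ in range(rounds):
        room.acquire()                       # enter the dining room
        chopstick[i].acquire()               # pick up left chopstick
        chopstick[(i + 1) % K].acquire()     # pick up right chopstick
        meals[i] += 1                        # EAT (critical section)
        chopstick[(i + 1) % K].release()     # put down right chopstick
        chopstick[i].release()               # put down left chopstick
        room.release()                       # leave the room

threads = [threading.Thread(target=philosopher, args=(i, 50)) for i in range(K)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With K−1 admissions, at least one philosopher at the table can always obtain both chopsticks, so every thread completes all its rounds.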

Advantages of this Solution


 Allows a large degree of concurrency
 Free from Starvation
 Free from Deadlock
 More Flexible Solution
 Economical
 Fairness
 Boundedness


Sleeping Barber Problem


The Sleeping Barber problem is a classic problem in process
synchronization that is used to illustrate synchronization issues that can
arise in a concurrent system.
There is a barber shop with one barber and a number of chairs for
waiting customers. Customers arrive at random times and if there is an
available chair, they take a seat and wait for the barber to become
available. If there are no chairs available, the customer leaves. When
the barber finishes with a customer, he checks if there are any waiting
customers. If there are, he begins cutting the hair of the next customer
in the queue. If there are no customers waiting, he goes to sleep.
The problem is to write a program that coordinates the actions of the
customers and the barber in a way that avoids synchronization
problems, such as deadlock or starvation.
One solution to the Sleeping Barber problem is to use semaphores to
coordinate access to the waiting chairs and the barber chair.
Initialize two semaphores: one for the number of waiting chairs and
one for the barber chair. The waiting chairs semaphore is initialized to
the number of chairs, and the barber chair semaphore is initialized to
zero.
Problem: The analogy is based upon a hypothetical barber shop with one barber, one barber chair, and n waiting chairs for customers.

 If there is no customer, then the barber sleeps in his own chair.


 When a customer arrives, he has to wake up the barber.


 If there are many customers and the barber is cutting a customer’s hair, then the remaining customers either wait, if there are empty chairs in the waiting room, or leave, if no chairs are empty.
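The semaphore solution described above can be sketched with Python threads. This is illustrative code: the chair count and the number of arriving customers are arbitrary, and a final extra release (with the waiting count at zero) is a small added convention to let the barber thread terminate cleanly.

```python
import threading

N_CHAIRS = 3
customers = threading.Semaphore(0)  # counts waiting customers; barber sleeps on it
cut_done = threading.Semaphore(0)   # barber signals a finished haircut
mutex = threading.Lock()            # protects the shared counters
waiting = 0
served = 0
turned_away = 0

def barber():
    global waiting, served
    while True:
        customers.acquire()         # sleep until a customer arrives (or closing)
        with mutex:
            if waiting == 0:        # closing signal: nobody left to serve
                return
            waiting -= 1            # take the next customer from a chair
        served += 1                 # cut hair
        cut_done.release()          # tell that customer the haircut is finished

def customer():
    global waiting, turned_away
    with mutex:
        if waiting < N_CHAIRS:
            waiting += 1            # take a chair
            seated = True
        else:
            turned_away += 1        # no free chair: leave the shop
            seated = False
    if seated:
        customers.release()         # wake the barber if he is asleep
        cut_done.acquire()          # wait for the haircut to finish

b = threading.Thread(target=barber)
b.start()
arrivals = [threading.Thread(target=customer) for _ in range(10)]
for t in arrivals:
    t.start()
for t in arrivals:
    t.join()
customers.release()                 # closing time: wakes the barber with waiting == 0
b.join()
```

Every arriving customer is either served or turned away, and the barber sleeps (blocks on the semaphore) whenever no one is waiting, exactly as the problem statement requires.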


Advantages

1. Efficient use of resources: The use of semaphores or other synchronization mechanisms ensures that resources (e.g., the barber chair and waiting room) are used efficiently, without wasting resources or causing unnecessary delays.

2. Prevention of race conditions: By ensuring that only one customer is in the barber chair at a time and that the barber is always working on a customer if there is one in the chair, synchronization mechanisms prevent race conditions that could lead to errors or incorrect results.

3. Fairness: Synchronization mechanisms can be used to ensure that all customers have a fair chance to be served by the barber.

Disadvantages

1. Complexity: Implementing synchronization mechanisms can be complex, especially for larger systems or more complex synchronization scenarios.

2. Overhead: Synchronization mechanisms can introduce overhead in terms of processing time, memory usage, and other system resources.

3. Deadlocks: Incorrectly implemented synchronization mechanisms can lead to deadlocks, where processes are unable to proceed because they are waiting for resources that are held by other processes.


Inter Process Communication Model


Inter process communication is the mechanism provided by the
operating system that allows processes to communicate with each other.
This communication could involve a process letting another process
know that some event has occurred or transferring of data from one
process to another.

The models of inter process communication are as follows −

Shared Memory Model

Shared memory is memory that can be simultaneously accessed by multiple processes. This is done so that the processes can communicate with each other. All POSIX systems, as well as Windows operating systems, use shared memory.

Advantage of Shared Memory Model


Communication is faster in the shared memory model than in the message passing model on the same machine.


Disadvantages of Shared Memory Model


 All the processes that use the shared memory model need to make sure that
they are not writing to the same memory location.
 Shared memory model may create problems such as synchronization and
memory protection that need to be addressed.

Message Passing Model

Multiple processes can read and write data to the message queue without being
connected to each other. Messages are stored on the queue until their recipient
retrieves them. Message queues are quite useful for inter process communication
and are used by most operating systems.
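The message-passing pattern just described can be sketched with a thread-safe queue. This is an illustrative sketch using threads for simplicity; between real processes the same pattern applies with multiprocessing.Queue or an OS message queue, and the None end-of-stream marker is a convention added here, not part of the model itself.

```python
import queue
import threading

# A message queue decouples sender and receiver: the sender deposits
# messages, and the receiver retrieves them later, with no variables
# shared directly between the two.
mailbox = queue.Queue()

def sender():
    for i in range(5):
        mailbox.put({"seq": i, "payload": f"message {i}"})  # send a message
    mailbox.put(None)                                       # end-of-stream marker

def receiver(out):
    while True:
        msg = mailbox.get()     # blocks until a message is available
        if msg is None:
            break
        out.append(msg["payload"])

received = []
s = threading.Thread(target=sender)
r = threading.Thread(target=receiver, args=(received,))
s.start(); r.start(); s.join(); r.join()
```

The receiver never touches the sender's state directly; all coordination flows through the queue, which is exactly the property that makes message passing easier to reason about than shared memory.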

Advantage of Messaging Passing Model


The message passing model is much easier to implement than the shared memory
model.

Disadvantage of Messaging Passing Model


The message passing model has slower communication than the shared memory
model because the connection setup takes time.
A diagram that demonstrates the shared memory model and message passing
model is given as follows −


Process Generation
In an operating system, a process can generate (create) other processes during its execution. The creating process is called the parent process, and the newly created process is called its child. Each child may in turn create further processes, so the processes in a system form a tree.

Key aspects of process generation:

 Identification: Most operating systems identify each process by a unique process identifier (PID); a new PID is assigned to every generated process.
 Resource sharing: A child may share all, some, or none of the parent's resources, depending on the operating system.
 Execution: The parent may continue to run concurrently with its children, or it may wait until some or all of its children have terminated.
 Address space: The child may be a duplicate of the parent (as with the UNIX fork() system call), or a new program may be loaded into it (as with exec()).
 Termination: When a process finishes, it asks the operating system to remove it (for example, via exit()) and may return status data to its parent.
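On POSIX systems, process generation with fork() can be sketched as follows (illustrative code; fork() is unavailable on Windows, and the pipe message is just a demonstration device):

```python
import os

def spawn_child():
    """Generate a child process with fork(); the child gets a copy of the
    parent's address space. fork() returns 0 in the child and the child's
    PID in the parent (POSIX systems only)."""
    r, w = os.pipe()            # pipe for the child to report back
    pid = os.fork()
    if pid == 0:                # child process
        os.close(r)
        os.write(w, b"hello from child")
        os.close(w)
        os._exit(0)             # child exits without running parent cleanup
    else:                       # parent process
        os.close(w)
        msg = os.read(r, 1024).decode()
        os.close(r)
        os.waitpid(pid, 0)      # parent waits for the child to terminate
        return msg

message = spawn_child()
```

The parent waiting on waitpid() is the "parent waits for children" execution option described above; replacing the child's body with an os.execv() call would load a new program into the child instead.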

