
UNIT-2

Concurrent Processes:
Concurrent processing in operating systems refers to the ability to execute multiple tasks
simultaneously, allowing for efficient utilization of resources and improved performance.
This is achieved through various techniques such as process scheduling, multi-threading, and
parallel processing.
Types of Concurrency Processing
 Process scheduling: - This is the most basic form of concurrency processing, in which
the operating system executes multiple processes one after the other, with each
process given a time slice to execute before being suspended and replaced by the next
process in the queue.
 Multi-threading: - This involves the use of threads within a process, with each thread
executing a different task concurrently. Threads share the same memory space within
a process, allowing them to communicate and coordinate with each other easily.
 Parallel processing: - This involves the use of multiple processors or cores within a
system to execute multiple tasks simultaneously. Parallel processing is typically used
for computationally intensive tasks, such as scientific simulations or video rendering.
 Distributed processing: - This involves the use of multiple computers or nodes
connected by a network to execute a single task. Distributed processing is typically
used for large-scale, data-intensive applications, such as search engines or social
networks.

Features of Concurrency Processing


 Improved performance: With the advent of multi-core processors, modern operating
systems can execute multiple threads or processes simultaneously, leading to
improved system performance. Concurrency processing enables the operating system
to make optimal use of available resources, thereby maximizing system throughput.
 Resource utilization: Concurrency processing allows for better utilization of system
resources such as CPU, memory, and I/O devices. By executing multiple threads or
processes concurrently, the operating system can use idle resources to execute other
tasks, leading to better resource utilization.
 Enhanced responsiveness: Concurrency processing enables the operating system to
handle multiple user requests simultaneously, leading to improved system
responsiveness. This is particularly important in applications that require real-time
processing, such as online gaming or financial trading applications.
 Scalability: Concurrency processing enables the operating system to scale efficiently
as the number of users or tasks increases. By executing tasks concurrently, the
operating system can handle a larger workload, leading to better system scalability.
 Flexibility: Concurrency processing enables the operating system to execute tasks
independently, making it easier to manage and maintain the system. This flexibility
makes it possible to develop complex applications that require multiple threads or
processes without compromising system performance.
Process
A process is a fundamental concept in operating systems, representing a program in
execution. It is an active entity, unlike a program, which is a passive set of instructions. When
a program is loaded into memory and starts execution, it becomes a process. This
transformation involves the program being divided into several sections: stack, heap, text,
and data.
Components of a Process
1. Stack: This section contains temporary data such as function parameters, return
addresses, and local variables.
2. Heap: This is the memory dynamically allocated to the process during its runtime.
3. Text: This includes the compiled program code, along with the current activity
represented by the value of the program counter and the contents of the processor's
registers.
4. Data: This section contains global and static variables.
Process States
A process goes through various states during its lifecycle:
1. New: The process is being created.
2. Ready: The process is waiting to be assigned to a processor.
3. Running: The process is currently being executed by the processor.
4. Waiting: The process is waiting for some event to occur (e.g., I/O completion).
5. Terminated: The process has finished execution
Difference between process and program
 A program is a passive entity: a set of instructions stored on disk as an executable
file. A process is an active entity: a program in execution, with a program counter,
registers, and allocated resources.
 A program is static; a process is dynamic, created when the program is loaded into
memory and destroyed when execution ends.
 One program can give rise to several processes, for example many users running the
same editor at once.
Operation of process
The chief operations on processes are creation (a parent process creates child
processes), scheduling and dispatching (the OS selects a ready process and gives it
the CPU), blocking (a process waits for an event such as I/O), and termination (the
process finishes and its resources are reclaimed).
Process Control Block (PCB)


The Process Control Block (PCB) is a crucial data structure maintained by the operating
system for each process. It contains all the information needed to manage the process,
including:
 Process ID: A unique identifier for the process.
 Process State: The current state of the process.
 Program Counter: The address of the next instruction to be executed.
 CPU Registers: The contents of the processor registers.

 CPU Scheduling Information: Information such as process priority and scheduling
queue pointers.
 Memory Management Information: Information about the process's memory
allocation.
 I/O Status Information: Information about the I/O devices allocated to the process.
Role of PCB in Context Switching:
During context switching, the CPU:
1. Saves the state of the currently running process into its PCB.
2. Loads the state of the next process to run from its PCB.
This allows multiple processes to share the CPU without losing their execution state.
Concurrency in Operating System
 Concurrency in operating systems refers to the capability of an OS to handle more
than one task or process at the same time, thereby enhancing efficiency and
responsiveness.
 It may be supported by multi-threading or multi-processing.
 Concurrency is essential in modern operating systems due to the increasing demand
for multitasking, real-time processing, and parallel computing.
 It is used in a wide range of applications, including web servers, databases, scientific
simulations, and multimedia processing.
 concurrency also introduces new challenges such as race conditions, deadlocks, and
priority inversion, which need to be managed effectively to ensure the stability and
reliability of the system.
Principles of Concurrency
 The principles of concurrency in operating systems are designed to ensure that
multiple processes or threads can execute efficiently and effectively, without
interfering with each other or causing deadlock.

Functions of concurrent execution:

 Physical resource sharing: Multiple users share limited hardware resources in a
multiuser environment.
 Logical resource sharing: Several processes access the same piece of information,
such as a shared file.
 Computation speedup: A task is split into parts that execute in parallel.
 Modularity: System functions are divided into separate processes.

Principles of Concurrency
Both interleaved and overlapped processes can be viewed as examples of concurrent
processes, and both present the same problems.
The relative speed of execution cannot be predicted. It depends on the following:
 The activities of other processes
 The way operating system handles interrupts
 The scheduling policies of the operating system

Problems in Concurrency
 Sharing global resources: Sharing of global resources safely is difficult. If two
processes both make use of a global variable and both perform read and write on that
variable, then the order in which various read and write are executed is critical.
 Optimal allocation of resources: It is difficult for the operating system to manage
the allocation of resources optimally.
 Locating programming errors: It is very difficult to locate a programming error
because reports are usually not reproducible.
 Locking the channel: It may be inefficient for the operating system to simply lock
the channel and prevent its use by other processes.
Advantages of Concurrency
 Running of multiple applications: It enables multiple applications to run at the
same time.
 Better resource utilization: Resources left unused by one application can be used
by other applications.
 Better average response time: Without concurrency, each application has to be run
to completion before the next one can be run.
 Better performance: It enables the better performance by the operating system.
When one application uses only the processor and another application uses only the
disk drive then the time to run both applications concurrently to completion will be
shorter than the time to run each application consecutively.

Drawbacks of Concurrency
 It is required to protect multiple applications from one another.
 It is required to coordinate multiple applications through additional mechanisms.
 Additional performance overheads and complexities in operating systems are required
for switching among applications.
 Sometimes running too many applications concurrently leads to severely degraded
performance.

Issues of Concurrency
 Non-atomic: Operations that are not atomic and can be interrupted by other
processes can cause problems.
 Race conditions: A race condition occurs if the outcome depends on which of
several processes gets to a point first.
 Blocking: Processes can block waiting for resources. A process could be blocked for
a long period of time waiting for input from a terminal. If the process is required to
periodically update some data, this would be very undesirable.
 Starvation: Starvation occurs when a process does not obtain the service it needs to
make progress.
 Deadlock: Deadlock occurs when two processes are blocked and hence neither can
proceed to execute.

Race conditions
A race condition is a situation that arises in a multithreaded environment when multiple
threads or processes access and manipulate shared resources concurrently. This can lead to
unpredictable and undesirable outcomes, as the final state of the resource depends on the
order in which the threads execute.
Types of Race Conditions
1. Read-Modify-Write: This occurs when two processes read a value and then write
back a new value. If not handled properly, it can cause inconsistent data.
2. Check-Then-Act: This happens when two processes check a value and then act on it.
If the value changes between the check and the act, it can lead to incorrect behaviour.

Examples of Race Conditions: Tube Light Example


Consider a tube light with two switches. If both switches are turned on simultaneously, the
tube light may end up in an inconsistent state, similar to how race conditions affect computer
systems.
Producer-Consumer Problem
In the Producer-Consumer problem, if the producer threads are faster than the consumer
threads, extra messages will be produced that the consumer cannot consume in time.
Conversely, if the consumer threads are faster, they might consume the same message
multiple times.

Identifying and Preventing Race Conditions

Identification
Detecting race conditions can be challenging. Tools for static and dynamic analysis are used
to find race conditions. Static analysis tools scan the code without executing it, while
dynamic analysis tools monitor the program during execution.
Prevention
 Locks and Mutexes: Implementing locks (like mutexes) ensures that only one thread
can access a resource at a time, preventing conflicting operations.
 Synchronization: Proper synchronization techniques, such as semaphores, ensure that
threads work in a coordinated sequence when accessing shared data.
 Volatile Keyword: In Java, the volatile keyword ensures that the value of a variable is
always read from main memory, preventing threads from caching the value
locally.
 Synchronized Keyword: The synchronized keyword in Java ensures that only one
thread can access a method or code block at a time, protecting shared resources.

Critical section problem


The Critical Section Problem is a classical synchronization problem that occurs in
concurrent programming when multiple processes (or threads) need to access shared
resources (like memory, files, variables) without conflict.
A critical section is a part of the program where the shared resource is accessed. If
multiple processes enter their critical sections simultaneously, it may lead to data
inconsistency or corruption.

do {
    // Entry Section: Request to enter critical section

    // Critical Section: Access shared resource

    // Exit Section: Release control of shared resource

    // Remainder Section: Other code
} while (true);

Example Problem:

Two processes want to update a shared variable counter. If both do it at the same time without
control, the final value may be wrong.

Producer-Consumer problem
The Producer-Consumer problem is a classic synchronization issue in operating systems. It
involves two types of processes: producers, which generate data, and consumers, which
process that data. Both share a common buffer. The challenge is to ensure that the producer
doesn’t add data to a full buffer and the consumer doesn’t remove data from an empty buffer
while avoiding conflicts when accessing the buffer.
Mutual exclusion
It is a fundamental concept in operating systems and concurrent programming, ensuring that
no two processes can access a shared resource or critical section simultaneously. This concept
is crucial for preventing race conditions, where the outcome of processes depends on the
sequence of their execution.
For example: a changing room in a mall.
Key Principles of Mutual Exclusion
Mutual exclusion ensures that when one process is accessing a shared resource, no
other process can access that resource until the first process has finished. This is
essential for maintaining data consistency and preventing conflicts.
Conditions for Mutual Exclusion
1. Mutual Exclusion: Only one process can be in the critical section at a time.
2. Progress: If no process is in the critical section, one of the processes that wish to
enter must be allowed to do so without indefinite delay.
3. Bounded Waiting: There must be a limit on the number of times other processes can
enter the critical section after a process has made a request to enter.
4. No Assumptions: No assumptions should be made about the relative speeds or the
number of CPUs.
Implementing Mutual Exclusion
Software Methods
Software methods rely on algorithms to ensure mutual exclusion. Examples include:
 Dekker's Algorithm
 Peterson's Algorithm
 Lamport's Bakery Algorithm
These algorithms use busy-waiting, where a process repeatedly checks if it can enter
the critical section.
Hardware Methods
Hardware methods use special machine instructions to achieve mutual exclusion.
Examples include:
 Test-and-Set
 Compare-and-Swap
These instructions are atomic, meaning they complete without interruption, ensuring
that only one process can modify a shared variable at a time.
Programming Language Methods
Some programming languages and operating systems provide built-in support for
mutual exclusion through constructs like:
 Locks (Mutexes)
 Semaphores
 Monitors
These constructs help manage access to shared resources, ensuring that only one
process can access the resource at a time.

Critical Section Problem


The critical section problem in operating systems arises when multiple processes or threads
need to access shared resources concurrently. This can lead to conflicts or inconsistencies,
known as race conditions, where the outcome depends on the sequence of access by the
processes.
Key Requirements for Solutions
To address the critical section problem, any solution must meet three key
requirements:
1. Mutual Exclusion: Only one process can execute in its critical section at a time. This
ensures that shared resources are accessed by only one process, preventing conflicts
and data corruption.
2. Progress: If no process is in the critical section and some processes wish to enter, one
of those processes should be allowed to enter without indefinite delay.
3. Bounded Waiting: There must be a limit on the number of times other processes can
enter their critical sections after a process has requested access but before that request
is granted. This ensures fairness and prevents starvation.
Example
Consider a scenario where money is withdrawn from a bank account by both a cashier
and an ATM simultaneously. If the account balance is ₹10,000, and the cashier
withdraws ₹7,000 while the ATM withdraws ₹6,000 within the same time frame, the
total withdrawal exceeds the account balance. This inconsistency occurs because both
withdrawals happen concurrently.
Advantages and Disadvantages
Advantages
 Data Integrity: Ensures consistent and predictable modification of shared data.
 Simplicity: Straightforward implementation with synchronization primitives.
 Predictable Execution: Allows developers to specify exclusive execution parts of the
code.
 Compatibility: Supported by various programming languages and operating
systems.
Disadvantages
 Potential for Deadlocks: Can lead to deadlocks if not used carefully.
 Reduced Concurrency: Limits parallelism as other processes must wait.
 Overhead: Acquiring and releasing locks incurs overhead.
 Complexity in Debugging: Debugging issues related to race conditions and
deadlocks can be challenging.
Dekker’s solution
Dekker's algorithm is the first known software solution to the critical section problem.
The algorithm allows two threads to share a single-use resource without conflict,
using only shared memory for communication.
Key Principles
Dekker's algorithm ensures three critical conditions for solving the critical section
problem:
1. Mutual Exclusion: Only one process can enter the critical section at a time.
2. Progress: If no process is in the critical section, one of the waiting processes must be
allowed to enter.
3. Bounded Waiting: There is a limit on the number of times other processes can enter
the critical section before a waiting process is granted access.

Peterson’s Algorithm

Peterson's Algorithm is a well-known solution for ensuring mutual exclusion in process
synchronization. It is designed to manage access to shared resources between two processes
in a way that prevents conflicts or data corruption.
 Set turn to either 0 or 1, indicating which process may enter its critical section first.
 Set flag[i] to true, indicating that process i wants to enter its critical section.
 Set turn to j, the index of the other process.
 While flag[j] is true and turn equals j, wait.
 Enter the critical section.
 Set flag[i] to false, indicating that process i has finished its critical section.
 Remainder section.

Peterson's Algorithm is a software-based solution to the mutual exclusion problem, ensuring
that only one process is ever in its critical section at a time. The algorithm is based on two
shared variables: a flag array and a turn variable.
Each process has an associated flag in the flag array, which contains Boolean values to
indicate whether a process is interested in entering its critical section. The turn variable is a
number that determines which process should proceed first in case of a conflict.
A process invokes the algorithm's lock() and unlock() methods each time it wishes to enter or
exit its critical section. The implementation of the lock() method is as follows −
void lock(int i) {
    int j = 1 - i;   // j is the index of the other process
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);   // busy-wait
}
The unlock() method is simpler:
void unlock(int i) {
    flag[i] = false;
}
The lock() method signals that the calling process wants to access its critical section by first
setting its own flag to true. If both processes attempt to access their critical sections
simultaneously, the method sets the turn variable to the other process's index (j), indicating
that the other process should proceed first.
Then the method enters a busy-wait loop, repeatedly checking whether the other process's
flag is true and whether it is the other process's turn. The loop continues as long as both
conditions hold; once either condition becomes false, the loop ends and the calling process
proceeds to its critical section.
The calling process can exit its critical section and indicate it no longer wants to enter by
setting its flag to false using the unlock() method.
Semaphores
A semaphore is a synchronization tool used to manage access to shared resources. It works
like a signal that allows multiple processes or threads to coordinate their actions. Semaphores
use counters to keep track of how many resources are available, ensuring that no two
processes can use the same resource at the same time, thus preventing conflicts and ensuring
orderly execution.
Types of Semaphores
Binary Semaphore: Similar to a mutex lock but not the same. It can have only two
values – 0 and 1. It is used to implement the solution of critical section problems with
multiple processes.
Counting Semaphore: Its value can range over an unrestricted domain. It is used to
control access to a resource that has multiple instances.
Solution Using Semaphores
To solve the Producer-Consumer problem, we need two counting semaphores, Full and
Empty. "Full" keeps track of the number of items in the buffer at any given time, and
"Empty" keeps track of the number of unoccupied slots.
Initialization of semaphores:
mutex = 1
full = 0   # Initially, all slots are empty, so full slots are 0
empty = n  # All slots are empty initially
Producer Code

do {
    # Produce an item
    wait(empty)
    wait(mutex)
    # Place the item in the buffer
    signal(mutex)
    signal(full)
} while (true)
When the producer produces an item, the value of empty is reduced by 1 because one slot
will be filled now.
The value of mutex is also reduced to prevent the consumer from accessing the buffer.
Now, the producer has placed the item and thus the value of full is increased by 1. The value
of mutex is also increased by 1 because the task of the producer has been completed and the
consumer can access the buffer.
Consumer Code
do {
    wait(full)
    wait(mutex)
    # Consume an item from the buffer
    signal(mutex)
    signal(empty)
} while (true)
When the consumer removes an item from the buffer, the value of full is reduced by 1 and the
value of mutex is also reduced so that the producer cannot access the buffer at this moment.
Once the consumer has consumed the item, the value of empty is increased by 1, and the
value of mutex is also increased so that the producer can access the buffer again.

Using semaphores to solve the Producer-Consumer problem ensures that producers and
consumers access the shared buffer in an organized way. Semaphores help manage the
buffer's state, preventing the producer from adding data when the buffer is full and stopping
the consumer from removing data when the buffer is empty. This approach ensures smooth,
conflict-free data processing between producers and consumers.

Test and Set in Operating Systems


Test and Set is a hardware synchronization mechanism used to solve the critical section
problem in operating systems. It ensures that only one process can enter the critical section at
a time, thereby preventing race conditions.
Definition and Working
The Test and Set instruction is an atomic operation that tests the value of a lock and sets it to
1 if it is 0. This operation is performed as a single, indivisible step, ensuring that no other
process can interrupt it. The instruction returns the old value of the lock, which indicates
whether the critical section is currently occupied.
Pseudocode
Here is the pseudocode for the Test and Set mechanism:
// Shared variable lock, initialized to false
boolean lock = false;

boolean TestAndSet(boolean *target) {
    boolean rv = *target;   // read the old value
    *target = true;         // set the lock
    return rv;              // performed as one atomic step
}

while (1) {
    while (TestAndSet(&lock));   // busy wait
    // Critical section
    lock = false;
    // Remainder section
}

In this pseudocode, the TestAndSet function returns the old value of the lock while setting it
to true. If the old value was false, the process enters the critical section; if it was true, the
process keeps busy-waiting until the lock becomes false.
Characteristics
Mutual Exclusion
The Test and Set mechanism guarantees mutual exclusion, meaning only one process can
enter the critical section at a time. This is achieved by the atomic nature of the Test and Set
instruction.
Deadlock-Free
The mechanism is deadlock-free because once a process enters the critical section, no other
process can enter until the first process exits and sets the lock to false.
Bounded Waiting
However, Test and Set does not guarantee bounded waiting. A process may starve if it
repeatedly finds the lock to be true and keeps waiting indefinitely.
Spin Lock
The mechanism suffers from spin lock, where a process keeps busy-waiting in a loop until it
can enter the critical section. This can lead to high CPU usage.
Architectural Neutrality
Test and Set is not architecturally neutral, as it depends on hardware support for the Test and
Set instruction. Some platforms may not provide this instruction.
Example Scenario
Consider a scenario where multiple processes need to access a shared resource, such as a
printer. Using the Test and Set mechanism, the first process to execute the Test and Set
instruction will set the lock and enter the critical section. Other processes will keep busy-
waiting until the first process exits and releases the lock.
Test and Set is an effective hardware synchronization mechanism that ensures mutual
exclusion and prevents deadlock, but it may cause starvation and high CPU usage due to
busy-waiting.
Classical Problems of Synchronization
Semaphores can be used in other synchronization problems besides mutual exclusion.
Below are some of the classical problems depicting flaws of process synchronization in
systems where cooperating processes are present.
We will discuss the following three problems:
1. Bounded Buffer (Producer-Consumer) Problem
2. Dining Philosophers Problem
3. The Readers Writers Problem
Bounded Buffer Problem
Because the buffer pool has a maximum size, this problem is often called the Bounded
buffer problem.
 This problem is generalised in terms of the Producer Consumer problem, where
a finite buffer pool is used to exchange messages between producer and consumer
processes.
 The solution to this problem is to create two counting semaphores, "full" and
"empty", to keep track of the current number of full and empty buffers respectively.
 In this problem, producers produce items and consumers consume them, but both
may use only one of the buffer slots at a time.
 The main complexity of this problem is that we must maintain the count of both the
empty and the full slots that are available.
Bounded buffer Problem Example
 There is a buffer of n slots and each slot is capable of storing one unit of data. There
are two processes running, namely, producer and consumer, which are operating on
the buffer.

Dining Philosophers Problem


 The dining philosopher's problem involves the allocation of limited resources to a
group of processes in a deadlock-free and starvation-free manner.
 There are five philosophers sitting around a table, with five chopsticks/forks kept
beside them and a bowl of rice in the centre. When a philosopher wants to eat, he
uses two chopsticks - one from his left and one from his right. When a philosopher
wants to think, he puts both chopsticks back down at their original places.
The Readers Writers Problem
 In this problem there are some processes (called readers) that only read the shared
data, and never change it, and there are other processes (called writers) who may
change the data in addition to reading, or instead of reading it.
 There are various types of readers-writers problems, most centred on the relative
priorities of readers and writers.
 The main complexity with this problem arises from allowing more than one reader to
access the data at the same time.

Process Generation
Process generation in an OS refers to the creation of new processes during the execution of a
program. A process is a program in execution, and operating systems provide mechanisms
for processes to create other processes — often referred to as child processes.
Main components:

Parent and Child Processes:

 A parent process is an existing process that creates another process.
 The child process inherits properties (like code and data) from the parent.

System Calls Used:

 In UNIX/Linux, the following system calls are commonly used:
o fork() – to create a new process.
o exec() – to replace the process's memory space with a new program.
o wait() – to wait for child process termination.
o exit() – to terminate a process.

Process States:

 After generation, the process goes through various states: New → Ready → Running
→ Waiting → Terminated.

Process Generation Uses:

 Running background tasks.
 Spawning threads in server applications.
 Creating shells and subshells in command-line interfaces.
 Executing different parts of a program concurrently.

System Call

A system call is a programmatic way for a user-level process to request a service from
the operating system kernel. These services can involve hardware operations (e.g.,
reading from disk), process control, file management, and more.
Why System Calls Are Needed:
 User programs cannot access hardware or critical resources directly.
 To maintain security and stability, requests must go through a controlled interface.
 System calls act as the interface between user programs and the OS kernel.
Types of System Calls:

Category                  Examples                            Description
Process Control           fork(), exit(), wait(), exec()      Create, terminate, or manage processes
File Management           open(), read(), write(), close()    File access and operations
Device Management         ioctl(), read(), write()            Interact with I/O devices
Information Maintenance   getpid(), alarm(), sleep()          Get system info or delay
Communication             pipe(), shmget(), send(), recv()    IPC (Inter-Process Communication)
How System Calls Work:

1. The user program requests a service (e.g., open a file).
2. A trap (software interrupt) is generated.
3. Control is transferred to the OS kernel.
4. The OS executes the service and returns the result to the user program.
