Process Management
Dr Ajay Kumar
Process
For example, when we write a program in C or C++ and compile it, the
compiler creates binary code. The original code and binary code are both
programs. When we actually run the binary code, it becomes a process.
Process vs Program
The operating system is responsible for the following activities in connection with Process
Management
• Scheduling processes and threads on the CPUs.
• Creating and deleting both user and system processes.
• Suspending and resuming processes.
• Providing mechanisms for process synchronization.
• Providing mechanisms for process communication.
Components of a Process
• Data : The global as well as static variables are included in this section
Process Attributes
• Process state : The process’s current state, such as ready, waiting, or running.
• Process privileges : This is required in order to grant or deny access to system resources.
• Program counter : The program counter refers to a pointer that points to the address of
the process’s next instruction.
• Priority : Every process has its own priority. The process with the highest priority among
the processes gets the CPU first. This is also stored on the process control block.
Process Attributes
• CPU registers : The values held in the various CPU registers while the process is in the
running state; they must be saved and restored on a context switch.
• Memory management information : This includes information from the page table,
memory limitations, and segment table, all of which are dependent on the amount of
memory used by the OS.
• Accounting information : This comprises CPU use for process execution, time
constraints, and execution ID, among other things.
• IO status information : This section includes a list of the process’s I/O devices.
Process Control Block
•The PCB architecture is fully dependent
on the operating system, and different
operating systems may include different
information. A simplified diagram of a PCB
is shown.
•The PCB is kept for the duration of a
process and is removed once the
process is finished.
Process States
The process, from its creation to completion, passes through various states. The
minimum number of states is five. The names of the states are not standardized
although the process may be in one of the following states during execution.
4. Block or wait : From the Running state, a process can make the transition to the
block or wait state depending upon the scheduling algorithm or the intrinsic
behaviour of the process.
When a process waits for a certain resource to be assigned or for input from
the user, the OS moves this process to the block or wait state and assigns the
CPU to other processes.
6. Suspend ready : If the main memory is full and a higher-priority process arrives for
execution, the OS has to make room for it in the main memory by moving a lower-
priority process out into the secondary memory. The suspend-ready processes remain in
the secondary memory until the main memory becomes available.
7. Suspend wait : Instead of removing a process from the ready queue, it is better to
remove a blocked process that is waiting for some resource in the main memory.
Since it is already waiting for a resource to become available, it may as well wait in
the secondary memory and make room for the higher-priority process. These processes
complete their execution once the main memory becomes available and their wait is finished.
Process States
Inter Process Communication
A system can have two types of processes i.e. independent or cooperating.
Cooperating processes affect each other and may share data and information among
themselves.
Message Queues
In general, several different processes are allowed to
read from and write to the message queue. The
messages are stored in the queue until their
recipients retrieve them.
Pipes
It is a half-duplex method (one-way communication)
used for IPC between two related processes. Typically, it
uses the standard methods for input and output. Pipes
are used in all types of POSIX systems and in
different versions of the Windows operating system as well.
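As a hedged illustration of pipe-based IPC between two related processes, here is a minimal C sketch (the function name demo_pipe and the message text are illustrative, not from the original slides):

```c
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Half-duplex pipe between related processes: the child writes one
   message, the parent reads it. Returns 0 on success, -1 on error. */
int demo_pipe(char *buf, size_t len) {
    int fd[2];
    if (pipe(fd) == -1) return -1;
    pid_t pid = fork();
    if (pid < 0) return -1;
    if (pid == 0) {                /* child: keeps only the write end */
        close(fd[0]);
        write(fd[1], "hello", 5);
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                  /* parent: keeps only the read end */
    ssize_t n = read(fd[0], buf, len - 1);
    buf[n > 0 ? n : 0] = '\0';
    close(fd[0]);
    wait(NULL);                    /* reap the child */
    return n > 0 ? 0 : -1;
}
```

Closing the unused end in each process is what makes the channel one-way, matching the half-duplex behaviour described above.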
Indirect Communication
• Pairs of communicating processes have shared mailboxes.
• Link (uni-directional or bi-directional) is established between pairs of
processes.
• Sender process puts the message in the port or mailbox of a receiver
Direct Communication
In this, processes that want to communicate must explicitly name the receiver or sender.
• A pair of communicating processes must have one link between them.
• A link (generally bi-directional) establishes between every pair of communicating processes.
Message Passing
Since multiple processes can read and write on shared variables and data, it is very
important to maintain consistency of data, else we can get error-prone results. To prevent
this we synchronize the processes. Some common ways to achieve this are given below.
Semaphore: A variable that manages several processes' access to a shared resource.
Binary and counting semaphores are the two types of semaphores.
Mutual Exclusion or mutex: a mechanism that ensures only one process or thread at a
time can enter the critical section. This avoids race conditions.
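To make the mutex idea concrete, here is a small POSIX-threads sketch (illustrative names; link with -lpthread). Without the lock, the two threads would race on the shared counter and lose updates:

```c
#include <pthread.h>

/* Two threads increment a shared counter 100000 times each; the mutex
   serializes the read-modify-write sequence so no update is lost. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* enter critical section */
        counter++;
        pthread_mutex_unlock(&lock);   /* leave critical section */
    }
    return NULL;
}

/* Runs both workers and returns the final counter value. */
long run_two_workers(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return counter;
}
```

With the lock, the result is always 200000; without it, some increments would typically be lost.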
Role of Synchronization in IPC
The prime aim of the process scheduling system is to keep the CPU busy all the time and to
deliver minimum response time for all programs. For achieving this, the scheduler must apply
appropriate rules for swapping processes IN and OUT of CPU.
Process Scheduling
1. Switching the CPU to another process requires saving the state of the old process
and loading the saved state for the new process. This task is known as a Context Switch.
4. Context Switching has become such a performance bottleneck that programmers are
using new structures(threads) to avoid it whenever and wherever possible.
Operation on Processes
1. Creation : Once the process is created, it will be ready and come into the ready queue (main memory)
and will be ready for the execution.
2. Scheduling : Out of the many processes present in the ready queue, the operating system chooses
one process and starts executing it. Selecting the process to be executed next is known as
scheduling.
3. Execution : Once the process is scheduled for execution, the processor starts executing it. The
process may move to the blocked or wait state during execution; in that case the processor starts
executing other processes.
4. Deletion/killing : Once the purpose of the process gets over then the OS will kill the process. The
Context of the process (PCB) will be deleted and the process gets terminated by the Operating system.
Process Creation
A process during its execution can create many new processes via system call(depending upon the
OS).
The creating process is called the parent while the created process is called the child.
Every process in the system is identified with a process identifier(PID) which is unique for each
process.
Process Creation
Why create a child process?
#include <sys/types.h>
#include <sys/wait.h>

pid_t wait(int *status);
status
If status is not NULL, wait stores the exit status of the terminated child in the int to which
status points.
Return value
On success, wait returns the PID of the terminated child. On failure (no child), wait returns -1.
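A minimal sketch of fork() and wait() working together (the exit status 7 is arbitrary, chosen only for illustration):

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Parent forks a child; the child exits with status 7; the parent
   blocks in wait() and retrieves that status, as described above. */
int child_exit_status(void) {
    pid_t pid = fork();
    if (pid < 0) return -1;          /* fork failed */
    if (pid == 0)
        _exit(7);                    /* child terminates immediately */
    int status;
    pid_t done = wait(&status);      /* returns the terminated child's PID */
    if (done == pid && WIFEXITED(status))
        return WEXITSTATUS(status);
    return -1;
}
```

WIFEXITED and WEXITSTATUS are the standard macros for decoding the status value that wait() fills in.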
Orphans
• An orphan process is a process whose parent process has terminated, though it remains
running itself. Any orphaned process will be immediately adopted by the special init
system process with PID 1.
• Both the parent process and the child process compete for the CPU with all other
processes in the system. The operating system decides which process to execute when
and for how long. The processes in the system execute concurrently.
• Most often the parent terminates before the child; the child becomes an orphan
process adopted by init (PID = 1) and therefore reports PPID = 1.
• Sometimes the child process terminates before its parent, and then the child is able to
report PPID equal to the PID of the parent.
Zombie
A terminated process is said to be a zombie or defunct until the parent does wait on the child.
When a process terminates all of the memory and resources associated with it are deallocated so
they can be used by other processes.
However, the exit status is maintained in the PCB until the parent picks up the exit status
using wait and deletes the PCB.
A child process always first becomes a zombie.
In most cases, under normal system operation zombies are immediately waited on by their parent.
Processes that stay zombies for a long time are generally an error and cause a resource leak.
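On Linux, the zombie state can be observed directly: between the child's exit and the parent's wait(), /proc/<pid>/stat reports state 'Z'. A hedged, Linux-only sketch (the one-second sleep is a simplification to let the child exit first):

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Returns the child's process state as seen in /proc after the child
   exits but before the parent waits on it: 'Z' means zombie/defunct.
   Linux-specific (relies on the /proc filesystem). */
char child_state_before_wait(void) {
    pid_t pid = fork();
    if (pid < 0) return '?';
    if (pid == 0)
        _exit(0);                    /* child terminates at once */
    sleep(1);                        /* crude: give the child time to exit */
    char path[64], comm[64], state = '?';
    snprintf(path, sizeof path, "/proc/%d/stat", (int)pid);
    FILE *f = fopen(path, "r");
    if (f) {
        /* /proc/<pid>/stat format: pid (comm) state ... */
        fscanf(f, "%*d %63s %c", comm, &state);
        fclose(f);
    }
    wait(NULL);                      /* reap: the zombie disappears */
    return state;
}
```

After wait() returns, the PCB (and the /proc entry) is gone, which is exactly the "parent picks up the exit status and deletes the PCB" step described above.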
Process Termination
There are two methods a process can terminate:
1. Normal termination – A process finishes executing its final statement. All the resources allocated
to it are released back to the operating system.
2. Forced Termination – a parent process can terminate its child process by invoking the appropriate
system call. The parent can terminate the child due to the following reasons:
1. Child exceeds its usage of resources
3. Parent exits and OS does not allow child to run if parent terminates
Note: Certain operating systems do not allow a process to exist without its parent. Now, suppose a process
terminates either normally or forcibly. Then all its child processes will also be killed, which may in turn kill
their child processes. This is called cascading termination.
CPU Scheduling
The goal of the multi-programming system is to keep the CPU busy at all times.
In a uni-processor system whenever the CPU becomes idle it is allocated to a new process.
Which process will be allocated the CPU is decided by the short-term scheduler.
CPU scheduling means to select the next process to be allocated to the CPU whenever the CPU
becomes idle.
Other common names of CPU scheduling are process scheduling or thread scheduling.
CPU Scheduling
The question is – ”When to call the short-term scheduler?”
4. A Process terminates.
Dispatcher
A Dispatcher is that module of the operating system which gives control of the CPU to the process.
Short term scheduler only decides which process to allocate to CPU. Whereas the dispatcher actually
allocates the CPU to that process.
• Finally, Jumping to the proper location in the user program to restart that process.
Scheduling Algorithms
Which is the best CPU scheduling algorithm?
This is a common question that comes to our mind. An Operating System uses different algorithms
for CPU scheduling. Every algorithm can perform better depending upon the situation.
There are five points to decide which algorithm performs better than others.
1. CPU utilization – It means how much the CPU is busy. That is, for a given duration of time how long
was the CPU doing the processing. The CPU utilization should be high (40-90%)
2. Throughput – Throughput is the number of processes that complete execution in a given time.
3. Turnaround time – is the interval from the time of submission of the process to the time of its
completion
Scheduling Algorithms
4. Waiting time – is the time spent by the process in the ready queue (waiting for the CPU)
5. Response time – is the time from the submission of the process until it gives the first response.
Note: An algorithm is effective if it has high CPU utilization and throughput, whereas low turnaround,
waiting and response time.
CPU Scheduling Algorithms
Arrival Time: Time at which process enters the ready queue or state.
Response Time: Time at which process get CPU for the first time – Arrival Time
CPU Scheduling Algorithms
The idea is that the process that comes first must use the resource first.
It states that the process which requests the CPU first is allocated the CPU first.
It is implemented using a FIFO (First In First Out) queue and is a non-preemptive
scheduling algorithm.
First Come First Serve
Example:

Process   Burst Time   Waiting Time
P1        8            0
P2        3            8
P3        5            11
P4        3            16
Analysing the given processes, first the process P1 will be executed. Therefore, waiting time of process
P1 will be 0.
As, process P1 takes 8 units of time for completion, so waiting time for process P2 will be 8.
Likewise, waiting time of process P3 will be execution time of process P1 + execution time for process P2, i.e.
(8 + 3) units = 11 units. Furthermore, for process P4 it will be the sum of execution times of P1, P2 and P3.
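The waiting-time reasoning above generalizes: under FCFS with all processes arriving at time 0, each process waits for the sum of the burst times of everything queued ahead of it. A short sketch:

```c
/* FCFS waiting times when all processes arrive at time 0: each process
   waits for the total burst time of the processes queued before it. */
void fcfs_waiting(const int burst[], int wait[], int n) {
    wait[0] = 0;                           /* first process never waits */
    for (int i = 1; i < n; i++)
        wait[i] = wait[i - 1] + burst[i - 1];
}
```

For the bursts 8, 3, 5, 3 above this reproduces the waiting times 0, 8, 11, 16.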
First Come First Serve
Example:
Process   Arrival Time   Burst Time   Completion Time   Turnaround Time   Waiting Time   Response Time
P1        0              2            2                 2                 0              0
P2        1              2            4                 3                 1              1
P3        5              3            8                 3                 0              0
P4        6              4            12                6                 2              2

Gantt Chart:
| P1 | P2 | idle | P3 | P4 |
0    2    4      5    8     12
First Come First Serve
Advantages of FCFS :
• Simple
• Easy, useful and understandable

Convoy Effect :
In the convoy effect, processes with higher burst time arrive before processes
with smaller burst time. The smaller processes then have to wait for a long
time for the longer processes to release the CPU.

Disadvantages of FCFS :
• Because of non-preemptive scheduling, the process will continue to run until it is finished.
• As the scheduling is non-preemptive, short processes at the end of the ready queue have to wait for a
long time, leading to the problem of starvation.
If two processes have the same burst time, then FCFS is used to break the tie.
The pre-emptive mode of Shortest Job First is called Shortest Remaining Time First (SRTF).
Shortest Job First (SJF)
Example:
Consider the set of 5 processes whose arrival time and burst time are given below
Process Id   Arrival time   Burst time
P1           3              1
P2           1              4
P3           4              2
P4           0              6
P5           2              3
If the CPU scheduling policy is SJF non-preemptive, calculate the average waiting time and average
turn around time.
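The question above can be checked with a short simulation. This is a sketch of non-preemptive SJF (ties broken by array order, which matches FCFS here; at most 16 processes assumed):

```c
/* Non-preemptive SJF: repeatedly pick, among the arrived and unfinished
   processes, the one with the shortest burst time. Returns the average
   waiting time. Assumes n <= 16. */
double sjf_avg_wait(const int arrival[], const int burst[], int n) {
    int done[16] = {0}, finished = 0, t = 0;
    double total_wait = 0;
    while (finished < n) {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (!done[i] && arrival[i] <= t &&
                (best < 0 || burst[i] < burst[best]))
                best = i;
        if (best < 0) { t++; continue; }   /* CPU idle until an arrival */
        total_wait += t - arrival[best];   /* time waited since arrival */
        t += burst[best];
        done[best] = 1;
        finished++;
    }
    return total_wait / n;
}
```

For the five processes above the schedule comes out as P4, P1, P3, P5, P2, giving an average waiting time of 4.8 units.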
Shortest Job First (SJF)
Example:
Consider the set of 5 processes whose arrival time and burst time are given below
Process Id   Arrival time   Burst time
P1           3              1
P2           1              4
P3           4              2
P4           0              6
P5           2              3
If the CPU scheduling policy is SJF preemptive, calculate the average waiting time and average turn
around time.
Shortest Job First (SJF)
Example:
Consider the set of 5 processes whose arrival time and burst time are given below
Process Id   Arrival time   Burst time
P1 0 7
P2 1 5
P3 2 3
P4 3 1
P5 4 2
P6 5 1
If the CPU scheduling policy is shortest remaining time first, calculate the average waiting time and
average turn around time.
Technique to predict the Burst Time
There are several techniques which try to predict the burst time for the processes so that the
algorithm can be implemented.
Static Techniques to predict the Burst Time
1. Based on Process Size
This technique predicts the burst time for a process based on its size.
Burst time of the already executed process of similar size is taken as the burst time for the process to be
executed.
Example-
Consider a process of size 200 KB took 20 units of time to complete its execution.
Then, burst time for any future process having size around 200 KB can be taken as 20 units.
NOTE
•The predicted burst time may not always be right.
•This is because the burst time of a process also depends on what kind of a process it is.
Static Techniques to predict the Burst Time
2. Based on Process Type
This technique predicts the burst time for a process based on its type.
The following figure shows the burst time assumed for several kinds of processes.
Dynamic Techniques to predict the Burst Time
1. Based on Simple Averaging
Burst time for the process to be executed is taken as the average of the burst times of all the
processes executed till now.
Given n processes P1, P2, …, Pn and the burst time of each process Pi as ti, the predicted burst
time for process Pn+1 is
τ(n+1) = (t1 + t2 + … + tn) / n

2. Based on Exponential Averaging
The predicted burst time for the (n+1)th process is
τ(n+1) = α·tn + (1 − α)·τn
where-
α is called the smoothening factor (0 <= α <= 1), tn is the actual burst time of the nth process,
and τn is its predicted burst time.
Solution-
Given:
Predicted burst time for 1st process = 10 units
Actual burst time of the first four processes = 4, 8, 6, 7
α = 0.5
Example
Predicted Burst Time for 2nd Process = α × (actual burst time of 1st process) + (1 − α) × (predicted
burst time of 1st process) = 0.5 × 4 + 0.5 × 10 = 7 units
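The exponential-averaging recurrence can be checked numerically with a small sketch (the function name is illustrative):

```c
/* Exponential averaging: tau(n+1) = alpha*t(n) + (1-alpha)*tau(n).
   tau1 is the initial prediction; t[] holds actual burst times of the
   first n processes. Returns the prediction for process n+1. */
double predict_burst(double tau1, double alpha, const double t[], int n) {
    double tau = tau1;
    for (int i = 0; i < n; i++)
        tau = alpha * t[i] + (1.0 - alpha) * tau;
    return tau;
}
```

With τ1 = 10, α = 0.5 and actual bursts 4, 8, 6, 7, the predictions for the 2nd through 5th processes come out as 7, 7.5, 6.75 and 6.875 units.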
Advantages-
No process can complete until the longest job also reaches its completion.
All the processes finish at approximately the same time.
Disadvantages-
Longest Job First Algorithm
Example:
Consider the set of 5 processes whose arrival time and burst time are given below. If the CPU scheduling
policy is LJF non-preemptive, calculate the average waiting time and average turn around time.
Process Id   Arrival time   Burst time
P1 0 3
P2 1 2
P3 2 4
P4 3 5
P5 4 6
Longest Job First Algorithm
Solution:

Process Id   Arrival time   Burst time
P1           0              3
P2           1              2
P3           2              4
P4           3              5
P5           4              6

Turn Around time = Exit time – Arrival time

Process   Exit Time   Turn Around Time   Waiting Time
P1        3           3 – 0 = 3          3 – 3 = 0
P2        20          20 – 1 = 19        19 – 2 = 17
P3        18          18 – 2 = 16        16 – 4 = 12
P4        8           8 – 3 = 5          5 – 5 = 0
P5        14          14 – 4 = 10        10 – 6 = 4

Average Turn Around time = (3 + 19 + 16 + 5 + 10) / 5 = 53 / 5 = 10.6 unit
Average waiting time = (0 + 17 + 12 + 0 + 4) / 5 = 33 / 5 = 6.6 unit
Longest Job First Algorithm
Example:
Consider the set of 4 processes whose arrival time and burst time are given below. If the CPU scheduling
policy is LJF preemptive, calculate the average waiting time and average turn around time.
Process Id   Arrival time   Burst time
P1           1              2
P2           2              4
P3           3              6
P4           4              8
Longest Job First Algorithm
Solution:
Process Id   Arrival time   Burst time
P1           1              2
P2           2              4
P3           3              6
P4           4              8
Highest Response Ratio Next Algorithm
In HRRN Scheduling,
Out of all the available processes, CPU is assigned to the process having highest response ratio.
In case of a tie, it is broken by FCFS Scheduling.
It operates only in non-preemptive mode.
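The slide omits the formula; the standard definition of the response ratio (assumed here) is (waiting time + burst time) / burst time:

```c
/* HRRN response ratio: (W + S) / S, where W is the time the process has
   waited so far and S is its (estimated) burst time. The ratio is 1 for
   a process that just arrived and grows the longer the process waits,
   so short jobs are favoured but long waiters cannot starve. */
double response_ratio(int waiting, int burst) {
    return (double)(waiting + burst) / burst;
}
```

For example, a process that has waited 6 units and needs 2 units of CPU has ratio (6 + 2) / 2 = 4.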
Disadvantages-
Process Id   Arrival time   Burst time
P0           0              3
P1           2              6
P2           4              4
P3           6              5
P4           8              2
If the CPU scheduling policy is Highest Response Ratio Next, calculate the average waiting
time and average turn around time.
Highest Response Ratio Next Algorithm
Example:

Process   Completion Time   Turnaround Time   Waiting Time
P0        3                 3 – 0 = 3         3 – 3 = 0
P1        9                 9 – 2 = 7         7 – 6 = 1
P2        13                13 – 4 = 9        9 – 4 = 5
P3        20                20 – 6 = 14       14 – 5 = 9
P4        15                15 – 8 = 7        7 – 2 = 5
Round Robin Scheduling Algorithm
Disadvantages-
It leads to starvation for processes with larger burst time as they have to repeat the cycle many
times.
Its performance heavily depends on time quantum.
Priorities can not be set for the processes.
Round Robin Scheduling Algorithm
Note-01:
With increasing value of time quantum, Round Robin Scheduling tends to become
FCFS Scheduling.
When time quantum tends to infinity, Round Robin Scheduling becomes FCFS
Scheduling.
Note-04:
The performance of Round Robin scheduling heavily depends on the value of time
quantum.
The value of time quantum should be such that it is neither too big nor too small.
Round Robin Scheduling Algorithm
Example: Consider the set of 5 processes whose arrival time and burst time
are given below
Process Id   Arrival time   Burst time
P1 0 5
P2 1 3
P3 2 1
P4 3 2
P5 4 3
If the CPU scheduling policy is Round Robin with time quantum = 2 unit,
calculate the average waiting time and average turn around time.
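The question above can be checked with a simulation sketch. One convention detail matters: here, processes arriving during a time slice enter the ready queue before the preempted process is re-queued — a common textbook convention; other conventions give slightly different numbers. At most 16 processes assumed:

```c
#include <string.h>

/* Round Robin with time quantum q. Returns the average waiting time,
   where waiting time = turnaround time - burst time. Assumes n <= 16. */
double rr_avg_wait(const int arrival[], const int burst[], int n, int q) {
    int rem[16], queue[256], in_q[16] = {0};
    int head = 0, tail = 0, t = 0, finished = 0;
    double total_wait = 0;
    memcpy(rem, burst, n * sizeof(int));
    while (finished < n) {
        for (int i = 0; i < n; i++)        /* admit processes arrived so far */
            if (!in_q[i] && arrival[i] <= t) {
                queue[tail++] = i; in_q[i] = 1;
            }
        if (head == tail) { t++; continue; }   /* CPU idle */
        int p = queue[head++];
        int run = rem[p] < q ? rem[p] : q;     /* one slice (or remainder) */
        t += run;
        rem[p] -= run;
        for (int i = 0; i < n; i++)        /* arrivals during the slice */
            if (!in_q[i] && arrival[i] <= t) {
                queue[tail++] = i; in_q[i] = 1;
            }
        if (rem[p] > 0)
            queue[tail++] = p;             /* re-queue behind new arrivals */
        else {
            finished++;
            total_wait += (t - arrival[p]) - burst[p];
        }
    }
    return total_wait / n;
}
```

For the five processes above with quantum 2, this yields an average waiting time of 5.8 units.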
Round Robin Scheduling Algorithm
Example: Consider the set of 6 processes whose arrival time and burst time
are given below

Process Id   Arrival time   Burst time
P1 0 4
P2 1 5
P3 2 2
P4 3 1
P5 4 6
P6 6 3
If the CPU scheduling policy is Round Robin with time quantum = 2 unit,
calculate the average waiting time and average turn around time.
Round Robin Scheduling Algorithm
Example: Consider the set of 6 processes whose arrival time and burst time
are given below

Process Id   Arrival time   Burst time
P1 5 5
P2 4 6
P3 3 7
P4 1 9
P5 2 2
P6 6 3
If the CPU scheduling policy is Round Robin with time quantum = 3 unit,
calculate the average waiting time and average turn around time.
Priority Scheduling
In Priority Scheduling,
Out of all the available processes, CPU is assigned to the process having the highest priority.
It considers the priority of the processes and allows the important processes to run first.
Priority scheduling in pre-emptive mode is best suited for real time operating system.
Disadvantages-

Note-01:
The waiting time for the process having the highest priority will always be zero in preemptive
mode.
The waiting time for the process having the highest priority may not be zero in non-preemptive
mode.
Note-02:
Priority scheduling in preemptive and non-preemptive mode behaves exactly same under following
conditions-
The arrival time of all the processes is the same.
All the processes become available for execution at the same time.
Priority Scheduling
Example: Consider the set of 5 processes whose arrival time and burst time
are given below
Process Id   Arrival time   Burst time   Priority
P1           0              4            2
P2           1              3            3
P3           2              1            4
P4           3              5            5
P5           4              2            5
Solution (non-preemptive, higher number means higher priority):

Process   Completion Time   Turnaround Time   Waiting Time
P1        4                 4 – 0 = 4         4 – 4 = 0
P2        15                15 – 1 = 14       14 – 3 = 11
P3        12                12 – 2 = 10       10 – 1 = 9
P4        9                 9 – 3 = 6         6 – 5 = 1
P5        11                11 – 4 = 7        7 – 2 = 5
Solution (preemptive, higher number means higher priority):

Process Id   Arrival time   Burst time   Priority
P1           0              4            2
P2           1              3            3
P3           2              1            4
P4           3              5            5
P5           4              2            5

Process   Completion Time   Turnaround Time   Waiting Time
P1        15                15 – 0 = 15       15 – 4 = 11
P2        12                12 – 1 = 11       11 – 3 = 8
P3        3                 3 – 2 = 1         1 – 1 = 0
P4        8                 8 – 3 = 5         5 – 5 = 0
P5        10                10 – 4 = 6        6 – 2 = 4
2. Consider the set of 4 processes whose arrival time and burst time are given below-
Process No.   Arrival Time   Priority   CPU Burst   I/O Burst   CPU Burst
P1            0              2          1           5           3
P2            2              3          3           3           1
P3            3              1          2           3           1
If the CPU scheduling policy is Priority Scheduling, calculate the average waiting time and average turn
around time. (Lower number means higher priority)
Thread
What are Threads?
A thread is an execution unit that consists of its own program counter, a stack, and a set of registers.
The program counter keeps track of which instruction to execute next, the registers hold the thread's
current working variables, and the stack contains its execution history.
Kernel threads are supported within the kernel of the OS itself. All modern OSs support kernel-
level threads, allowing the kernel to perform multiple simultaneous tasks and/or to service multiple
kernel system calls simultaneously.
Types of Thread
User Level Threads:
• These threads are implemented by users.
• These threads are not recognized by operating systems.
• Context switching requires no hardware support.
• These threads are mainly designed as dependent threads.
• If one user-level thread performs a blocking operation, the entire process is blocked.
• Examples: Java threads, POSIX threads.
• Implementation is done by a thread library and is easy.

Kernel Level Threads:
• These threads are implemented by the operating system.
• These threads are recognized by operating systems.
• Hardware support is needed for context switching.
• These threads are mainly designed as independent threads.
• If one kernel thread performs a blocking operation, another thread can continue execution.
• Examples: Windows, Solaris.
• Implementation is done by the operating system and is complex.
Multithreading allows an application to divide its task into individual threads. In multithreading,
the same process or task can be carried out by a number of threads; that is, more than one
thread performs the task. With the use of multithreading, multitasking can be achieved.
Multithreading
The main drawback of single threading systems is that only one task can be performed at a time, so
to overcome the drawback of this single threading, there is multithreading that allows multiple tasks
to be performed.
Multithreading : Example
In the above example, client1, client2, and client3 are accessing the web server without any
waiting. In multithreading, several tasks can run at the same time.
Multithreading Models
The user threads must be mapped to kernel threads, by one of the following
strategies:
Many to One Model
One to One Model
Many to Many Model
Many to One Model
In the many to one model, many user-level threads are all mapped onto a single
kernel thread.
Thread management is handled by the thread library in user space, which is
efficient in nature.
This model is used when the operating system does not support kernel-level
threads: a user-level thread library implemented on such a system maps all
of its user threads onto a single kernel thread.
Many to One Model
One to One Model
The one to one model creates a separate kernel thread to handle
each and every user thread.
Most implementations of this model place a limit on how many
threads can be created.
Linux and Windows from 95 to XP implement the one-to-one
model for threads.
This model provides more concurrency than that of many to one
Model.
One to One Model
Many to Many Model
The many to many model multiplexes any number of user threads onto an
equal or smaller number of kernel threads, combining the best features of the
one-to-one and many-to-one models.
Users can create any number of threads.
Blocking the kernel system calls does not block the entire process.
Processes can be split across multiple processors.
Many to Many Model
Thread Libraries
Thread libraries provide programmers with API for the creation and
management of threads.
Thread libraries may be implemented either in user space or in kernel space.
The user space involves API functions implemented solely within the user space,
with no kernel support.
The kernel space involves system calls and requires a kernel with thread library
support.
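As a minimal example of a thread-library API, the POSIX pthread calls below create a thread and collect its result (a sketch; returning the value through the void* exit pointer assumes long fits in a pointer, as on typical 64-bit systems):

```c
#include <pthread.h>

/* Thread entry point: squares its argument and returns it through the
   thread's exit value. */
static void *square(void *arg) {
    long x = (long)arg;
    return (void *)(x * x);
}

/* Creates one thread via the library API and joins it for the result. */
long square_in_thread(long x) {
    pthread_t tid;
    void *result = NULL;
    pthread_create(&tid, NULL, square, (void *)x);
    pthread_join(tid, &result);
    return (long)result;
}
```

pthread_create and pthread_join are the two calls every thread library provides in some form: start a new thread of execution, then wait for it and retrieve its result.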
Types of Thread
Signal Handling
Signals are used in UNIX systems to notify a process that a particular event has
occurred. Now, when a multithreaded process receives a signal, to which thread should
it be delivered? It can be delivered to all threads or to a single thread.
Multithreading Issues
fork() System Call
fork() is a system call executed in the kernel through which a process
creates a copy of itself. Now the problem in the Multithreaded process
is, if one thread forks, will the entire process be copied or not?
Security Issues
Yes, there can be security issues because of the extensive sharing of
resources between multiple threads.