Unit 2 CA213
Process Management (Unit 2)
By Farha Zia, Department of Computer Application, IUL

Process Management
Process management refers to the techniques and strategies used by organizations to design,
monitor, and control their business processes to achieve their goals efficiently and effectively.
It involves identifying the steps involved in completing a task, assessing the resources
required for each step, and determining the best way to execute the task.
Process management can help organizations improve their operational efficiency, reduce
costs, increase customer satisfaction, and maintain compliance with regulatory requirements.
It involves analyzing the performance of existing processes, identifying bottlenecks, and
making changes to optimize the process flow.
Process management includes various tools and techniques such as process mapping, process
analysis, process improvement, process automation, and process control. By applying these
tools and techniques, organizations can streamline their processes, eliminate waste, and
improve productivity.
Overall, process management is a critical aspect of modern business operations and can help
organizations achieve their goals and stay competitive in today’s rapidly changing
marketplace.
If the operating system supports multiple users, then the services in this category are very
important. In this regard, the operating system has to keep track of all the competing
processes, schedule them, and dispatch them one after another. Each user, however, should
feel that he has full control of the CPU.
Some of the system calls in this category are as follows (a short example follows the list).
1. Create a child process identical to the parent
2. Terminate a process
3. Wait for a child process to terminate
4. Change the priority of process
5. Block the process
6. Ready the process
7. Dispatch a process
8. Suspend a process
9. Resume a process
10. Delay a process
11. Fork a process
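As an illustration, here is a minimal sketch of how the first three of these calls look on a
POSIX-style system, using fork() to create a child, exit() to terminate it, and wait() to wait
for it; the printed messages are only for demonstration.

/* Minimal POSIX example: create a child with fork(), terminate it
   with exit(), and wait for it in the parent with wait(). */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();            /* 1. create a child identical to the parent */
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                /* this branch runs in the child */
        printf("child: pid=%d\n", getpid());
        exit(42);                  /* 2. terminate the process */
    }
    int status;
    wait(&status);                 /* 3. wait for the child to terminate */
    if (WIFEXITED(status))
        printf("parent: child exited with %d\n", WEXITSTATUS(status));
    return 0;
}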
Explanation of Process
Text Section: The text section contains the compiled program code. The current
activity is represented by the value of the Program Counter.
Stack: The stack contains temporary data, such as function parameters, return
addresses, and local variables.
Data Section: Contains the global and static variables.
Heap Section: Memory dynamically allocated to the process during its run time.
Characteristics of a Process
A process has the following attributes.
Process Id: A unique identifier assigned by the operating system.
Process State: Can be ready, running, etc.
CPU registers: Like the Program Counter (CPU registers must be saved and restored
when a process is swapped in and out of the CPU)
Accounting information: Amount of CPU used for process execution, time limits,
execution ID, etc.
I/O status information: For example, devices allocated to the process, open files, etc.
CPU scheduling information: For example, priority (different processes may have
different priorities; for example, a shorter process is assigned high priority in shortest
job first scheduling).
States of Process
During its lifetime a process moves through several states: New (the process is being
created), Ready (waiting to be assigned to the CPU), Running (instructions are being
executed), Waiting/Blocked (waiting for some event such as I/O completion), and
Terminated (the process has finished execution).
Context Switching of Process
The process of saving the context of one process and loading the context of another process
is known as Context Switching. In simple terms, it is like loading and unloading the process
from the running state to the ready state.
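To make this concrete, the C sketch below shows the kind of bookkeeping a context switch
involves. The structure and field names are illustrative only; they are not taken from any
real kernel, where this work is done in privileged, architecture-specific code.

/* Illustrative sketch of the state saved and restored on a context switch.
   Field names are hypothetical, not from a real kernel. */
#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct cpu_context {
    uint64_t program_counter;    /* where to resume execution */
    uint64_t stack_pointer;
    uint64_t registers[16];      /* general-purpose register contents */
};

struct pcb {                     /* process control block */
    int pid;                     /* process id */
    enum proc_state state;       /* ready, running, ... */
    struct cpu_context ctx;      /* saved CPU state */
    int priority;                /* scheduling information */
};

/* Conceptually: save the outgoing process's CPU state into its PCB,
   then load the incoming process's state from its PCB. */
void context_switch(struct pcb *out, struct pcb *in, struct cpu_context *cpu) {
    out->ctx = *cpu;             /* save context of the outgoing process */
    out->state = READY;
    *cpu = in->ctx;              /* load context of the incoming process */
    in->state = RUNNING;
}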
Types of Processes:
Cooperating Process: Cooperating processes are those that depend on other processes.
They work together to achieve a common task in an operating system, and they interact
with each other by sharing resources such as CPU, memory, and I/O devices to complete
the task.
Inter Process Communication (IPC)
1. Shared Memory
2. Message passing
Figure 1 below shows a basic structure of communication between processes via the shared
memory method and via the message passing method.
An operating system can implement both methods of communication. First, we will discuss
the shared memory method of communication and then message passing. Communication
between processes using shared memory requires processes to share some variable, and it
depends entirely on how the programmer implements it. One way of communicating through
shared memory can be imagined like this: suppose process1 and process2 are executing
simultaneously, and process2 uses some information produced by process1. Process1
generates information about certain computations or resources being used and keeps it as a
record in shared memory. When process2 needs to use the shared information, it checks the
record stored in shared memory, takes note of the information generated by process1, and
acts accordingly. Processes can use shared memory both for extracting information recorded
by another process and for delivering specific information to other processes. Let’s discuss
an example of communication between processes using the shared memory method.
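Below is a minimal C sketch of this idea using an anonymous shared mapping between a
parent (process1, the producer) and its child (process2, the consumer). The record
structure, the ready flag, and the polling loop are simplifications for illustration; a real
program would synchronize with a semaphore.

/* Sketch of shared-memory IPC between a parent and a child process.
   A real program would use a semaphore instead of polling a flag. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

struct record {
    volatile int ready;          /* 1 once process1 has written its data */
    char data[64];
};

int main(void) {
    /* shared region, visible to both processes after fork() */
    struct record *shm = mmap(NULL, sizeof *shm, PROT_READ | PROT_WRITE,
                              MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shm == MAP_FAILED) { perror("mmap"); return 1; }
    shm->ready = 0;

    if (fork() == 0) {                  /* process2: the consumer */
        while (!shm->ready)             /* wait for process1's record */
            usleep(1000);
        printf("process2 read: %s\n", shm->data);
        return 0;
    }
    /* process1: the producer writes a record into shared memory */
    strcpy(shm->data, "result of some computation");
    shm->ready = 1;
    wait(NULL);                         /* reap the child */
    return 0;
}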
Process scheduling
Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process based on a particular
strategy.
Categories of Scheduling
Non-preemptive: In this case, a process’s resources cannot be taken away before the process
has finished running. Resources are switched only when the running process finishes and
transitions to a waiting state.
Preemptive: In this case, the OS assigns resources to a process for a predetermined period.
The process switches from the running state to the ready state, or from the waiting state to
the ready state. This switching happens because the CPU may give other processes priority
and substitute a higher-priority process for the currently active one.
Process Schedulers
1. Long-Term or Job Scheduler
It brings the new process to the ‘Ready State’. It controls the Degree of Multiprogramming,
i.e., the number of processes present in a ready state at any point in time. It is important that
the long-term scheduler makes a careful selection of both I/O-bound and CPU-bound
processes. I/O-bound tasks are those that spend much of their time on input and output
operations, while CPU-bound processes are those that spend their time on the CPU. The job
scheduler increases efficiency by maintaining a balance between the two. Long-term
schedulers operate at a high level and are typically used in batch-processing systems.
2. Short-Term or CPU Scheduler
It is responsible for selecting one process from the ready state and scheduling it into the
running state. Note: the short-term scheduler only selects the process; it does not itself load
the process onto the CPU. This is where all the scheduling algorithms are used. The CPU
scheduler must ensure that processes with long burst times do not cause the starvation of
other processes.
The dispatcher is responsible for loading the process selected by the short-term scheduler
onto the CPU (Ready to Running state). Context switching is done by the dispatcher only. A
dispatcher does the following:
Switching context.
Switching to user mode.
Jumping to the proper location in the user program to restart that program.
3. Medium-Term Scheduler
It is responsible for suspending and resuming processes. It mainly performs swapping
(moving processes from main memory to disk and vice versa). Swapping may be necessary
to improve the process mix, or because a change in memory requirements has overcommitted
the available memory, requiring memory to be freed up. It helps maintain a balance between
the I/O-bound and the CPU-bound processes, and it reduces the degree of multiprogramming.
I/O schedulers: I/O schedulers are in charge of managing the execution of I/O operations
such as reading from and writing to disks or networks. They can use various algorithms to
determine the order in which I/O operations are executed, such as FCFS (First-Come, First-
Served) or RR (Round Robin).
Real-time schedulers: In real-time systems, real-time schedulers ensure that critical tasks are
completed within a specified time frame. They can prioritize and schedule tasks using various
algorithms such as EDF(Earliest Deadline First) or RM (Rate Monotonic).
Thread in Operating System
Within a program, a Thread is a separate execution path. It is a lightweight process that the
operating system can schedule and run concurrently with other threads. The operating system
creates and manages threads, and they share the same memory and resources as the program
that created them. This enables multiple threads to collaborate and work efficiently within a
single program.
A thread is a single sequence stream within a process. Threads are also called lightweight
processes, as they possess some of the properties of processes. Each thread belongs to exactly
one process. In an operating system that supports multithreading, a process can consist of
many threads. But threads can run truly in parallel only if the machine has more than one
CPU; otherwise, the threads must take turns on the single CPU through context switching.
Threads running in parallel can improve application performance. Each thread has its own
CPU state and stack, but threads share the address space of the process and its environment.
Threads can share common data, so they do not need to use interprocess communication.
Like processes, threads also have states like ready, executing, blocked, etc.
Priority can be assigned to the threads just like the process, and the highest priority thread is
scheduled first.
Each thread has its own Thread Control Block (TCB). As with a process, a context switch
occurs for the thread, and the register contents are saved in the TCB. Since threads share the
same address space and resources, synchronization is also required for the various activities
of the threads.
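The short pthreads sketch below illustrates these points: two threads share the global
variable counter (shared address space, so no interprocess communication is needed), and a
mutex provides the required synchronization. The names worker and counter are just for
this example.

/* Two threads of one process share the global counter; a mutex
   synchronizes access. Compile with: cc threads.c -pthread */
#include <pthread.h>
#include <stdio.h>

long counter = 0;                           /* shared data */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);          /* synchronization is required */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);                 /* wait for both threads */
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);     /* always 200000 with the lock */
    return 0;
}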
Difference Between Process and Thread
The primary difference is that threads within the same process run in a shared memory space,
while processes run in separate memory spaces. Threads are not independent of one another
like processes are, and as a result, threads share with other threads their code section, data
section, and OS resources (like open files and signals). But, like a process, a thread has its
own program counter (PC), register set, and stack space.
CPU Scheduling
The CPU scheduler decides the order and priority of the processes to run and allocates the
CPU time based on various parameters such as CPU usage, throughput, turnaround, waiting
time, and response time.
CPU scheduling is essential for the system’s performance; it ensures that processes are
executed correctly and on time. Different CPU scheduling algorithms have different
properties, and the choice of a particular algorithm depends on various factors. Many criteria
have been suggested for comparing CPU scheduling algorithms.
CPU Scheduling has several criteria. Some of them are mentioned below.
CPU utilization
The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible.
Theoretically, CPU utilization can range from 0 to 100 percent, but in a real system it varies
from 40 to 90 percent depending on the load upon the system.
Throughput
A measure of the work done by the CPU is the number of processes being executed and
completed per unit of time. This is called throughput. The throughput may vary depending
on the length or duration of the processes.
Turnaround Time
For a particular process, an important criterion is how long it takes to execute that process.
The time elapsed from the time of submission of a process to the time of completion is
known as the turnaround time. Turn-around time is the sum of times spent waiting to get
into memory, waiting in the ready queue, executing in CPU, and waiting for I/O.
Turn Around Time = Completion Time - Arrival Time.
Waiting Time
A scheduling algorithm does not affect the time required to complete the process once it
starts execution. It only affects the waiting time of a process i.e. time spent by a process
waiting in the ready queue.
Waiting Time = Turnaround Time - Burst Time.
Response Time
In an interactive system, turn-around time is not the best criterion. A process may produce
some output fairly early and continue computing new results while previous results are
being output to the user. Thus another criterion is the time taken from submission of the
process of the request until the first response is produced. This measure is called response
time.
Response Time = Time of first CPU allocation (when the CPU was allocated for the first
time) - Arrival Time
Burst Time refers to the time, in milliseconds, required by a process for its execution. The
Burst Time takes into consideration only the CPU time of a process; the I/O time is not taken
into consideration. It is also called the execution time or running time of the process. The
process makes a transition from the Running state to the Completion state during this time
frame. Burst time can be calculated as the difference between the Turnaround Time and the
Waiting Time, that is,
Burst Time = Turnaround Time - Waiting Time.
Completion Time
The completion time is the time when the process stops executing, which means that the
process has completed its burst time and is completely executed.
Priority
If the operating system assigns priorities to processes, the scheduling mechanism should
favor the higher-priority processes.
Predictability
A given process should always run in about the same amount of time under a similar
system load.
Process Queues
The Operating system manages various types of queues for each of the process states. The
PCB related to the process is also stored in the queue of the same state. If the Process is
moved from one state to another state then its PCB is also unlinked from the corresponding
queue and added to the other state queue in which the transition is made.
1. Job Queue
Initially, all the processes are stored in the job queue. It is maintained in secondary
memory. The long-term scheduler (job scheduler) picks some of the jobs and puts them in
primary memory.
2. Ready Queue
The ready queue is maintained in primary memory. The short-term scheduler picks a job
from the ready queue, and the dispatcher hands it to the CPU for execution.
3. Waiting Queue
When a process needs an I/O operation in order to complete its execution, the OS changes
the state of the process from running to waiting. The context (PCB) associated with the
process is stored in the waiting queue and will be used by the processor when the process
finishes its I/O.
In the Non-Preemptive approach, once a process starts its execution, the CPU is allotted to
that same process until the process completes. The CPU does not shift between processes:
the CPU is fully allocated to the process, and there is no change of CPU allocation until the
process is complete.
In the Preemptive approach, once a process starts its execution, the CPU is not necessarily
allotted to it until completion. The CPU does shift between processes: the CPU is allocated
to a process only while certain required conditions hold, and the allocation changes when
those conditions break, for example when the time quantum expires or a higher-priority
process arrives.
Types of CPU Scheduling Algorithms
First Come First Serve (FCFS)
First Come First Serve is executed in the non-preemptive approach: once a process gets
the CPU, it runs until it completes.
The process that enters the Ready Queue first is executed first. So, we say that FCFS
follows a First In First Out (FIFO) approach.
Advantages
FCFS is simple, easy to understand, and easy to implement; processes are served strictly
in their order of arrival.
Disadvantages
FCFS suffers from the convoy effect: one long process at the head of the queue makes all
the shorter processes behind it wait, which raises the average waiting time.
Examples
Process ID   Process Name   Arrival Time   Burst Time
P1           A              0              6
P2           B              2              2
P3           C              3              1
P4           D              4              9
P5           E              5              8
Now, we are going to apply the FCFS (First Come First Serve) CPU scheduling algorithm
to the above problem. In FCFS the basic formulas apply directly: Turn Around Time =
Completion Time - Arrival Time, and Waiting Time = Turn Around Time - Burst Time.
Gantt Chart
| P1 | P2 | P3 | P4 | P5 |
0    6    8    9    18   26
Completion times: P1 = 6, P2 = 8, P3 = 9, P4 = 18, P5 = 26.
Average Completion Time = the total sum of the completion times divided by the total
number of processes.
Average Completion Time = (CT1 + CT2 + CT3 + . . . + CTn) / n
Average Turn Around Time = the total sum of the turnaround times divided by the total
number of processes.
Average Turn Around Time = (TAT1 + TAT2 + TAT3 + . . . + TATn) / n
Average Waiting Time = the total sum of the waiting times divided by the total number of
processes.
Average Waiting Time = (WT1 + WT2 + WT3 + . . . + WTn) / n
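The C sketch below applies these formulas to the FCFS example above and prints the
completion, turnaround, and waiting time of each process. For this data it reports
Avg CT = 13.4, Avg TAT = 10.6, and Avg WT = 5.4.

/* FCFS for the table above: compute completion, turnaround, and
   waiting times with the formulas just given. */
#include <stdio.h>

int main(void) {
    /* processes are already sorted by arrival time (FCFS order) */
    int at[] = {0, 2, 3, 4, 5};      /* arrival times of P1..P5 */
    int bt[] = {6, 2, 1, 9, 8};      /* burst times   of P1..P5 */
    int n = 5, t = 0;
    double sum_ct = 0, sum_tat = 0, sum_wt = 0;

    for (int i = 0; i < n; i++) {
        if (t < at[i]) t = at[i];    /* CPU idles until the process arrives */
        t += bt[i];                  /* the process runs to completion */
        int ct = t;
        int tat = ct - at[i];        /* Turn Around Time = CT - AT */
        int wt = tat - bt[i];        /* Waiting Time = TAT - BT   */
        printf("P%d: CT=%2d TAT=%2d WT=%2d\n", i + 1, ct, tat, wt);
        sum_ct += ct; sum_tat += tat; sum_wt += wt;
    }
    printf("Avg CT=%.1f  Avg TAT=%.1f  Avg WT=%.1f\n",
           sum_ct / n, sum_tat / n, sum_wt / n);
    return 0;
}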
Shortest Job First CPU Scheduling Algorithm
Shortest Job First depends heavily on the burst times. In the Shortest Job First CPU
scheduling algorithm, the CPU is allotted to the process in the ready queue that has the least
burst time.
If two processes in the ready queue have the same burst time, we can choose either one for
execution. In actual operating systems, when this situation arises, the resources are allocated
sequentially, in order of arrival.
Shortest Job First is called SJF in short form.
Characteristics:
SJF (Shortest Job First) has the least average waiting time. This is because all the
heavy processes are executed last, so all the very small processes are executed first,
which prevents small processes from starving behind large ones.
The burst time is used as the measure of how long each job will take.
If shorter processes keep arriving, starvation of longer processes might result. The
idea of aging can be used to overcome this issue.
Shortest Job First can be executed in both a preemptive and a non-preemptive way.
Advantages
SJF is used because it has the least average waiting time of the classical CPU
scheduling algorithms.
SJF is also suitable as a long-term (job) scheduling algorithm in batch systems, where
the required CPU time can be estimated in advance.
Disadvantages
Starvation of long processes is one of the negative traits the Shortest Job First CPU
scheduling algorithm exhibits.
Often, it is difficult to forecast how long the next CPU burst will take.
Example: consider the following six processes (the same process set is used in the
calculation table further below).

Process   Arrival Time (AT)   Burst Time (BT)
P0        1                   3
P1        2                   6
P2        0                   2
P3        3                   7
P4        2                   4
P5        6                   2

Now, we are going to solve this problem in both the preemptive and the non-preemptive
way. We will first solve the problem in the non-preemptive way; a small simulation that
reproduces these figures follows the calculation.

Gantt Chart:
| P2 | P0 | P4 | P5 | P1 | P3 |
0    2    5    9    11   17   24

Completion times: P2 = 2, P0 = 5, P4 = 9, P5 = 11, P1 = 17, P3 = 24.
Average Turn Around Time = (4 + 15 + 2 + 21 + 7 + 5) / 6 = 54 / 6 = 9 ms
Average Waiting Time = (1 + 9 + 0 + 14 + 3 + 3) / 6 = 30 / 6 = 5 ms
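The simulation below reproduces these non-preemptive SJF figures: whenever the CPU
becomes free, it picks the arrived, unfinished process with the smallest burst time. The array
layout is just one convenient encoding of the table above.

/* Non-preemptive SJF for the table above. Whenever the CPU is free,
   pick the arrived, unfinished process with the smallest burst. */
#include <stdio.h>

int main(void) {
    int at[] = {1, 2, 0, 3, 2, 6};   /* arrival times of P0..P5 */
    int bt[] = {3, 6, 2, 7, 4, 2};   /* burst times */
    int n = 6, done = 0, t = 0;
    int finished[6] = {0};
    double sum_tat = 0, sum_wt = 0;

    while (done < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)        /* shortest available job */
            if (!finished[i] && at[i] <= t &&
                (pick < 0 || bt[i] < bt[pick]))
                pick = i;
        if (pick < 0) { t++; continue; }   /* CPU idle: nothing has arrived */
        t += bt[pick];                     /* run it to completion */
        int tat = t - at[pick];
        int wt = tat - bt[pick];
        printf("P%d: CT=%2d TAT=%2d WT=%2d\n", pick, t, tat, wt);
        finished[pick] = 1; done++;
        sum_tat += tat; sum_wt += wt;
    }
    printf("Avg TAT=%.2f  Avg WT=%.2f\n", sum_tat / n, sum_wt / n);
    return 0;
}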
Preemptive Approach
In the preemptive way (also called Shortest Remaining Time First), whenever a new process
arrives, the scheduler compares its burst time with the remaining time of the running process
and always runs the process with the shortest remaining time.
Gantt chart:
| P2 | P0 | P4 | P5 | P4 | P1 | P3 |
0    2    5    6    8    11   17   24
Calculations:

Process   Arrival     Burst       Completion   Turnaround Time   Waiting Time
          Time (AT)   Time (BT)   Time (CT)    (TAT = CT - AT)   (WT = TAT - BT)
P0        1           3           5            4                 1
P1        2           6           17           15                9
P2        0           2           2            2                 0
P3        3           7           24           21                14
P4        2           4           11           9                 5
P5        6           2           8            2                 0

Average Turn Around Time = (4 + 15 + 2 + 21 + 9 + 2) / 6 = 53 / 6 ≈ 8.83 ms
Average Waiting Time = (1 + 9 + 0 + 14 + 5 + 0) / 6 = 29 / 6 ≈ 4.83 ms
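The preemptive (shortest-remaining-time-first) figures can be checked the same way. The
sketch below re-evaluates the choice every millisecond, which is inefficient but easy to
follow, and it reproduces the table above (average TAT ≈ 8.83 ms, average WT ≈ 4.83 ms).

/* Preemptive SJF (shortest remaining time first) for the same six
   processes, advancing time 1 ms at a time. */
#include <stdio.h>

int main(void) {
    int at[] = {1, 2, 0, 3, 2, 6};   /* arrival times of P0..P5 */
    int bt[] = {3, 6, 2, 7, 4, 2};
    int rem[6], n = 6, done = 0, t = 0;
    double sum_tat = 0, sum_wt = 0;
    for (int i = 0; i < n; i++) rem[i] = bt[i];

    while (done < n) {
        int p = -1;
        for (int i = 0; i < n; i++)   /* shortest remaining time among arrived */
            if (rem[i] > 0 && at[i] <= t && (p < 0 || rem[i] < rem[p]))
                p = i;
        if (p < 0) { t++; continue; } /* CPU idle */
        rem[p]--; t++;                /* run the chosen process for 1 ms */
        if (rem[p] == 0) {
            int tat = t - at[p], wt = tat - bt[p];
            printf("P%d: CT=%2d TAT=%2d WT=%2d\n", p, t, tat, wt);
            sum_tat += tat; sum_wt += wt; done++;
        }
    }
    printf("Avg TAT=%.2f  Avg WT=%.2f\n", sum_tat / n, sum_wt / n);
    return 0;
}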
Preemptive Shortest Job First (SJF) Scheduling - Numerical Example and Solution
Given Processes:

Process   Arrival Time (AT)   Burst Time (BT)
P1        0                   9
P2        1                   4
P3        2                   2
P4        3                   1

Gantt Chart:
| P1 | P2 | P3 | P4 | P3 | P2 | P2 | P2 | P1 | P1 | P1 | P1 | P1 | P1 | P1 | P1 |
0    1    2    3    4    5    6    7    8    9    10   11   12   13   14   15   16

(At t = 3, P4’s burst of 1 ties with P3’s remaining time of 1; this solution breaks the tie in
favor of the newly arrived process.)

Average TAT = (16 + 7 + 3 + 1) / 4 = 27 / 4 = 6.75 ms
Average WT = (7 + 3 + 1 + 0) / 4 = 11 / 4 = 2.75 ms
Priority CPU Scheduling
This is another type of CPU scheduling algorithm. Here we are going to learn how the CPU
allots its resources to processes under this scheme.
Priority CPU scheduling differs from the remaining CPU scheduling algorithms in that each
and every process has a certain priority number.
Preemptive priority scheduling: in the preemptive strategy, the priority of a process
determines how the scheduling algorithm operates: the highest-priority process must run
first, preempting a lower-priority running process if necessary. When there is a conflict, that
is, when several processes have equal priority, the algorithm falls back on the FCFS (First
Come First Serve) approach.
Advantages
The typical or average waiting time under Priority CPU Scheduling is shorter than
under First Come First Serve (FCFS).
Priority CPU scheduling is easy to handle and less complex than many alternatives.
Disadvantages
The starvation problem is one of the most prevalent flaws of the preemptive Priority
CPU Scheduling algorithm: because of it, a low-priority process may wait a very long
time before being scheduled onto the CPU. This issue is also called the hunger
problem.
Examples:
Now, let us explain priority scheduling with the help of an example. Consider four
processes, where a smaller priority number means higher priority:

Process   Arrival Time   Burst Time   Priority
P1        0              5            3
P2        1              3            1
P3        2              8            4
P4        3              6            2

Non-preemptive priority scheduling gives the following schedule.

Gantt Chart
| P1 | P2 | P4 | P3 |
0    5    8    14   22

Average TAT = (5 + 7 + 20 + 11) / 4 = 43 / 4 = 10.75 ms
Average WT = (0 + 4 + 12 + 5) / 4 = 21 / 4 = 5.25 ms
Preemptive Priority Scheduling
With the same four processes, the preemptive version preempts the running process
whenever a higher-priority process arrives; a verifying simulation follows the calculation.

Gantt Chart
| P1 | P2 | P4 | P1 | P3 |
0    1    4    10   14   22

Average TAT = (14 + 3 + 20 + 7) / 4 = 44 / 4 = 11.0 ms
Average WT = (9 + 0 + 12 + 1) / 4 = 22 / 4 = 5.5 ms
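The sketch below implements the preemptive variant, re-selecting the highest-priority
arrived process every millisecond; for this data it reproduces the averages just computed
(11.0 ms and 5.5 ms).

/* Preemptive priority scheduling (1 = highest priority) for the four
   processes above, advancing time 1 ms at a time. */
#include <stdio.h>

int main(void) {
    int at[] = {0, 1, 2, 3};      /* arrival times of P1..P4 */
    int bt[] = {5, 3, 8, 6};
    int pr[] = {3, 1, 4, 2};      /* smaller number = higher priority */
    int rem[4], n = 4, done = 0, t = 0;
    double sum_tat = 0, sum_wt = 0;
    for (int i = 0; i < n; i++) rem[i] = bt[i];

    while (done < n) {
        int p = -1;
        for (int i = 0; i < n; i++)   /* highest-priority arrived process */
            if (rem[i] > 0 && at[i] <= t && (p < 0 || pr[i] < pr[p]))
                p = i;
        if (p < 0) { t++; continue; } /* CPU idle */
        rem[p]--; t++;                /* run it for 1 ms (may be preempted) */
        if (rem[p] == 0) {
            int tat = t - at[p], wt = tat - bt[p];
            printf("P%d: CT=%2d TAT=%2d WT=%2d\n", p + 1, t, tat, wt);
            sum_tat += tat; sum_wt += wt; done++;
        }
    }
    printf("Avg TAT=%.2f  Avg WT=%.2f\n", sum_tat / n, sum_wt / n);
    return 0;
}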
Round Robin (RR) CPU Scheduling - Numerical Example & Solution
Definition:
In Round Robin (RR) scheduling, each process in the ready queue gets the CPU for a fixed
time quantum, in cyclic (FIFO) order. If a process does not finish within its quantum, it is
preempted and placed at the back of the ready queue.
Examples:
Consider the same four processes as above (P1, P2, P3, P4 arriving at times 0, 1, 2, 3 with
burst times 5, 3, 8, 6) and a time quantum of 3 ms.

Gantt Chart
| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P3 |
0    3    6    9    12   14   17   20   22

Completion times: P2 = 6, P1 = 14, P4 = 20, P3 = 22.
Average TAT = (14 + 5 + 20 + 17) / 4 = 56 / 4 = 14 ms
Average WT = (9 + 2 + 12 + 11) / 4 = 34 / 4 = 8.5 ms
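The queue-based simulation below reproduces this Round Robin schedule. The fixed-size
array queue is a simplification that is large enough for this example, and the code assumes
the CPU is never idle, which holds for this data.

/* Round Robin with time quantum 3 for the example above
   (P1..P4 arrive at t = 0,1,2,3 with bursts 5,3,8,6). */
#include <stdio.h>

#define N 4
#define QUANTUM 3

int main(void) {
    int at[N] = {0, 1, 2, 3};
    int bt[N] = {5, 3, 8, 6};
    int rem[N], queue[64], head = 0, tail = 0, next_arrival = 0;
    int t = 0, done = 0;

    for (int i = 0; i < N; i++) rem[i] = bt[i];
    /* enqueue everything that has arrived by time t = 0 */
    while (next_arrival < N && at[next_arrival] <= t)
        queue[tail++] = next_arrival++;

    while (done < N) {
        int p = queue[head++];                 /* dequeue the next process */
        int slice = rem[p] < QUANTUM ? rem[p] : QUANTUM;
        t += slice;
        rem[p] -= slice;
        /* newly arrived processes join the queue before the preempted one */
        while (next_arrival < N && at[next_arrival] <= t)
            queue[tail++] = next_arrival++;
        if (rem[p] > 0)
            queue[tail++] = p;                 /* not finished: re-enqueue */
        else {
            done++;
            printf("P%d: CT=%2d TAT=%2d WT=%2d\n",
                   p + 1, t, t - at[p], t - at[p] - bt[p]);
        }
    }
    return 0;
}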
Multiple Processors Scheduling in Operating System
There are two approaches to multiple processor scheduling in the operating system:
Symmetric Multiprocessing and Asymmetric Multiprocessing.
Processor Affinity
Processor Affinity means a process has an affinity for the processor on which it is
currently running. When a process runs on a specific processor, there are certain effects
on the cache memory: the data most recently accessed by the process populates that
processor’s cache, so successive memory accesses by the process are often satisfied
from cache memory.
1. Soft Affinity: When an operating system has a policy of keeping a process running on
the same processor but not guaranteeing it will do so, this situation is called soft
affinity.
2. Hard Affinity: Hard Affinity allows a process to specify a subset of processors on
which it may run. Some Linux systems implement soft affinity and provide system
calls like sched_setaffinity() that also support hard affinity.
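As a small illustration of hard affinity on Linux, the sketch below pins the calling process to
CPU 0 using sched_setaffinity(). This code is Linux-specific.

/* Pin the calling process to CPU 0 with sched_setaffinity()
   (hard affinity). Linux-specific. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t mask;
    CPU_ZERO(&mask);          /* start with an empty CPU set */
    CPU_SET(0, &mask);        /* allow only CPU 0 */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(mask), &mask) == -1) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("now restricted to CPU 0\n");
    return 0;
}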
Load Balancing
Load Balancing is the practice of keeping the workload evenly distributed across all
processors in an SMP system. Load balancing is necessary only on systems where each
processor has its own private queue of processes eligible to execute; on systems with a
common run queue, load balancing is unnecessary, because an idle processor immediately
extracts a runnable process from the common run queue. On SMP (symmetric
multiprocessing) systems, it is important to keep the workload balanced among all
processors to fully utilize the benefits of having more than one processor; otherwise, one or
more processors will sit idle while other processors have high workloads and lists of
processes awaiting the CPU. There are two general approaches to load balancing:
1. Push Migration: In push migration, a task routinely checks the load on each
processor. If it finds an imbalance, it evenly distributes the load on each processor by
moving the processes from overloaded to idle or less busy processors.
2. Pull Migration: Pull migration occurs when an idle processor pulls a waiting task
from a busy processor for its own execution.
Symmetric Multiprocessor
In the Symmetric Multiprocessing (SMP) model, there is one copy of the OS in memory,
but any central processing unit can run it. When a system call is made, the central
processing unit on which the system call was made traps to the kernel and processes that
system call. This model balances processes and memory dynamically, and each processor
is self-scheduling.
There are mainly three sources of contention that can be found in a multiprocessor operating
system.
o Locking system: Since resources are shared in a multiprocessor system, they need to
be protected for safe access among the multiple processors. The main purpose of a
locking scheme is to serialize access to the resources by the multiple processors.
o Shared data: When the multiple processors access the same data at the same time,
then there may be a chance of inconsistency of data, so to protect this, we have to use
some protocols or locking schemes.
o Cache coherence: Shared resource data may be stored in multiple local caches.
Suppose two clients have a cached copy of a memory block and one client changes it;
the other client could be left with an invalid cache without notification of the change.
This conflict is resolved by maintaining a coherent view of the data.
Master-Slave Multiprocessor
In this multiprocessor model, there is a single data structure that keeps track of the ready
processes. One central processing unit works as the master and the others work as slaves.
All the processors are handled by that single processor, which is called the master server.
The master server runs the operating system process, and the slave servers run the user
processes. The memory and input-output devices are shared among all the processors, and
all the processors are connected to a common bus. This arrangement is simple and reduces
data sharing, which is why this system is called Asymmetric Multiprocessing.