
DEPARTMENT OF COMPUTER APPLICATION

PRINCIPLES OF OPERATING SYSTEM


(CA213)
Notes

Process Management
(UNIT -2)

Process

A process is a program in execution. A process is an ‘active’ entity, whereas a program is a
‘passive’ entity. A single program can create many processes when run multiple times; for
example, when we open a .exe or binary file multiple times, multiple instances begin
(multiple processes are created).
Program vs Process: A process is a program in execution. For example, when we write a
program in C or C++ and compile it, the compiler creates binary code. The original code and
binary code are both programs. When we actually run the binary code, it becomes a process.

Process Management
In an operating system, process management refers to the techniques the OS uses to create,
schedule, and control processes so that the system’s goals are met efficiently and effectively.
It involves keeping track of each process (through its Process Control Block), determining
the resources required by each process, and deciding the order in which processes get the CPU.

Good process management improves CPU utilization, shortens response times, and keeps the
system fair: no process should starve, and no process should monopolize the CPU. The
operating system monitors the progress of running processes, identifies bottlenecks (for
example, processes blocked on I/O), and switches the CPU between processes to keep work flowing.

Process management includes mechanisms such as process creation and termination, process
scheduling, context switching, inter-process communication, and process synchronization. By
applying these mechanisms, the operating system lets many processes share one machine safely
and productively.
Overall, process management is a critical aspect of a modern operating system: it is what
allows a multiprogrammed system to run many processes at once while giving each user the
impression of having the machine to himself.

By-FARHA ZIA
DEPARTMENT OF COMPUTER APPLICATION,IUL Page 1
If the operating system supports multiple users, then the services in this category are very
important. In this regard, the operating system has to keep track of all the competing
processes, schedule them, and dispatch them one after another; yet each user should feel that
he has full control of the CPU.
Some of the system calls in this category are as follows:
1. Create a child process identical to the parent
2. Terminate a process
3. Wait for a child process to terminate
4. Change the priority of process
5. Block the process
6. Ready the process
7. Dispatch a process
8. Suspend a process
9. Resume a process
10. Delay a process
11. Fork a process

What Does a Process Look Like in Memory?


A process in memory is divided into the following sections:

Explanation of Process

 Text Section: Contains the program code. The current activity is represented by the
value of the Program Counter.
 Stack: The stack contains temporary data, such as function parameters, return
addresses, and local variables.
 Data Section: Contains the global and static variables.
 Heap Section: Memory dynamically allocated to the process during its run time.

Characteristics of a Process
A process has the following attributes.
 Process Id: A unique identifier assigned by the operating system.
 Process State: Can be ready, running, etc.
 CPU registers: Like the Program Counter (CPU registers must be saved and restored
when a process is swapped in and out of the CPU)
 Accounting information: Amount of CPU time used for process execution, time limits,
execution ID, etc
 I/O status information: For example, devices allocated to the process, open files, etc
 CPU scheduling information: For example, Priority (Different processes may have
different priorities, for example, a shorter process assigned high priority in the shortest
job first scheduling).

States of Process

A process is in one of the following states:


 New: The process is newly created (being created).
 Ready: After creation, the process moves to the Ready state, i.e. the process is ready
for execution.
 Run: The process is currently running on the CPU (only one process at a time can be
under execution on a single processor).
 Wait (or Block): The process is waiting, for example when it requests I/O access.
 Complete (or Terminated): The process has completed its execution.
 Suspended Ready: When the ready queue becomes full, some processes are moved to the
suspended ready state.
 Suspended Block: When the waiting queue becomes full, some blocked processes are moved
to the suspended block state.

Context Switching of Process
The process of saving the context of one process and loading the context of another process
is known as Context Switching. In simple terms, it is like unloading the running process back
to the ready state and loading the next ready process onto the CPU.
Types of Processes:

In the operating system there are two types of processes:


 Independent Process: Independent Processes are those processes whose task is not
dependent on any other processes.

 Cooperating Process: Cooperating Processes are those processes that depend on other
processes. They work together to achieve a common task in an operating system. These
processes interact with each other by sharing resources such as CPU, memory, and I/O
devices to complete the task.

Some other types of Processes


CPU-Bound vs I/O-Bound Processes
A CPU-bound process requires more CPU time or spends more time in the running
state. An I/O-bound process requires more I/O time and less CPU time. An I/O-bound
process spends more time in the waiting state.
Process scheduling is an integral part of process management in the operating system. It refers
to the mechanism used by the operating system to determine which process runs next. The
goal of process scheduling is to improve overall system performance by maximizing CPU
utilization, minimizing turnaround time, and improving system response time.

Inter Process Communication (IPC)

Inter-process communication (IPC) is a mechanism that allows processes to communicate


with each other and synchronize their actions. The communication between these processes
can be seen as a method of co-operation between them. Processes can communicate with
each other through both:

1. Shared Memory
2. Message passing

Figure 1 below shows a basic structure of communication between processes via the shared
memory method and via the message passing method.
An operating system can implement both methods of communication. First, we will discuss
the shared memory methods of communication and then message passing. Communication
between processes using shared memory requires processes to share some variable, and it
completely depends on how the programmer will implement it. One way of communication
using shared memory can be imagined like this: Suppose process1 and process2 are executing
simultaneously, and they share some resources or use some information from another
process. Process1 generates information about certain computations or resources being used
and keeps it as a record in shared memory. When process2 needs to use the shared
information, it will check in the record stored in shared memory and take note of the
information generated by process1 and act accordingly. Processes can use shared memory for
extracting information as a record from another process as well as for delivering any specific
information to other processes. Let’s discuss an example of communication between
processes using the shared memory method.
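The exchange just described can be sketched with Python's `multiprocessing` module, where a `Value` object gives both processes a view of the same shared integer (the names `producer` and `record`, and the value 42, are illustrative):

```python
from multiprocessing import Process, Value

# Sketch of the shared-memory method above: process1 (the child) writes a
# "record" into shared memory; process2 (the parent) reads it afterwards.
def producer(record):
    with record.get_lock():       # synchronize access to the shared variable
        record.value = 42         # process1 stores its result

def run_shared_memory_demo():
    record = Value('i', 0)        # one shared integer visible to both processes
    p1 = Process(target=producer, args=(record,))
    p1.start()
    p1.join()                     # wait, then read what process1 produced
    return record.value

if __name__ == "__main__":
    print(run_shared_memory_demo())   # prints 42
```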

Process scheduling

Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process based on a particular
strategy.

Process scheduling is an essential part of a Multiprogramming operating system. Such


operating systems allow more than one process to be loaded into the executable memory at a
time and the loaded process shares the CPU using time multiplexing.

Categories of Scheduling

Scheduling falls into one of two categories:

Non-preemptive: In this case, a process’s resource cannot be taken before the process has
finished running. When a running process finishes and transitions to a waiting state, resources
are switched.

Preemptive: In this case, the OS assigns the CPU to a process for a predetermined period. A
process may be switched from the running state to the ready state, or from the waiting state
to the ready state. This switching happens because the CPU may give other processes priority
and substitute the currently active process with the higher-priority process.

Process Schedulers

Types of Process Schedulers

There are three types of process schedulers:

1. Long Term or Job Scheduler

It brings the new process to the ‘Ready State’. It controls the Degree of Multi-programming,
i.e., the number of processes present in a ready state at any point in time. It is important that
the long-term scheduler make a careful selection of both I/O-bound and CPU-bound processes.
I/O-bound processes are those that spend much of their time in input and output operations,
while CPU-bound processes are those that spend their time on the CPU. The job scheduler
increases efficiency by maintaining a balance between the two. Long-term schedulers operate
at a high level and are typically used in batch-processing systems.

2. Short-Term or CPU Scheduler

It is responsible for selecting one process from the ready state and scheduling it onto the
running state. Note: the short-term scheduler only selects the process; it does not itself load
the process onto the CPU. This is where all the scheduling algorithms are used. The CPU
scheduler is responsible for ensuring that no process starves because of processes with high
burst times.
The dispatcher is responsible for loading the process selected by the short-term scheduler
onto the CPU (Ready to Running state); context switching is done by the dispatcher only. A
dispatcher does the following:

Switching context.

Switching to user mode.

Jumping to the proper location in the newly loaded program.

3. Medium-Term Scheduler

It is responsible for suspending and resuming the process. It mainly does swapping (moving
processes from main memory to disk and vice versa). Swapping may be necessary to improve
the process mix or because a change in memory requirements has overcommitted available
memory, requiring memory to be freed up. It is helpful in maintaining a perfect balance
between the I/O bound and the CPU bound. It reduces the degree of multiprogramming.

Some Other Schedulers

I/O schedulers: I/O schedulers are in charge of managing the execution of I/O operations
such as reading and writing to discs or networks. They can use various algorithms to
determine the order in which I/O operations are executed, such as FCFS (First-Come,
First-Served) or RR (Round Robin).

Real-time schedulers: In real-time systems, real-time schedulers ensure that critical tasks are
completed within a specified time frame. They can prioritize and schedule tasks using various
algorithms such as EDF(Earliest Deadline First) or RM (Rate Monotonic).

Thread in Operating System

Within a program, a Thread is a separate execution path. It is a lightweight process that the
operating system can schedule and run concurrently with other threads. The operating system
creates and manages threads, and they share the same memory and resources as the program
that created them. This enables multiple threads to collaborate and work efficiently within a
single program.

A thread is a single sequence stream within a process. Threads are also called lightweight
processes as they possess some of the properties of processes. Each thread belongs to exactly
one process. In an operating system that supports multithreading, the process can consist of
many threads. But threads can run truly in parallel only if there is more than one CPU;
otherwise, the threads have to context switch to share the single CPU.

Why Do We Need Thread?

Threads run in parallel, improving the application performance. Each thread has its own
CPU state and stack, but they share the address space of the process and the environment.

Threads can share common data so they do not need to use interprocess communication. Like
the processes, threads also have states like ready, executing, blocked, etc.

Priority can be assigned to the threads just like the process, and the highest priority thread is
scheduled first.

Each thread has its own Thread Control Block (TCB). Like the process, a context switch
occurs for the thread, and register contents are saved in (TCB). As threads share the same
address space and resources, synchronization is also required for the various activities of the
thread.
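These points can be sketched in Python: four threads share one counter in the process's address space, and a lock provides the synchronization mentioned above (the names and counts are illustrative):

```python
import threading

# Sketch: four threads share the process's address space, so they can update
# one counter directly; a lock provides the required synchronization.
counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:            # protect the shared data
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                  # wait for all threads to finish

print(counter)   # prints 40000: 4 threads x 10,000 increments
```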
Difference Between Process and Thread

The primary difference is that threads within the same process run in a shared memory space,
while processes run in separate memory spaces. Threads are not independent of one another
like processes are, and as a result, threads share with other threads their code section, data
section, and OS resources (like open files and signals). But, like a process, a thread has its
own program counter (PC), register set, and stack space.

Multi-Threading

A thread is also known as a lightweight process. The idea is to achieve parallelism by


dividing a process into multiple threads. For example, in a browser, multiple tabs can be
different threads. MS Word uses multiple threads: one thread to format the text, another
thread to process inputs, etc. More advantages of multithreading are discussed below.

Multithreading is a technique used in operating systems to improve the performance and


responsiveness of computer systems. Multithreading allows multiple threads (i.e., lightweight
processes) to share the same resources of a single process, such as the CPU, memory, and I/O
devices.

CPU Scheduling Criteria

The CPU scheduler decides the order and priority of the processes to run and allocates the
CPU time based on various parameters such as CPU usage, throughput, turnaround, waiting
time, and response time.

CPU scheduling is essential for the system’s performance and ensures that processes are
executed correctly and on time. Different CPU scheduling algorithms have different properties,
and the choice of a particular algorithm depends on various factors. Many criteria have been
suggested for comparing CPU scheduling algorithms.

Criteria of CPU Scheduling

CPU Scheduling has several criteria. Some of them are mentioned below.

CPU utilization

The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible.
Theoretically, CPU utilization can range from 0 to 100 percent, but in a real system it varies
from 40 to 90 percent depending on the load upon the system.

Throughput
A measure of the work done by the CPU is the number of processes being executed and
completed per unit of time. This is called throughput. The throughput may vary depending
on the length or duration of the processes.

Turnaround Time
For a particular process, an important criterion is how long it takes to execute that process.
The time elapsed from the time of submission of a process to the time of completion is
known as the turnaround time. Turn-around time is the sum of times spent waiting to get
into memory, waiting in the ready queue, executing in CPU, and waiting for I/O.
Turn Around Time = Completion Time - Arrival Time.

Waiting Time
A scheduling algorithm does not affect the time required to complete the process once it
starts execution. It only affects the waiting time of a process i.e. time spent by a process
waiting in the ready queue.
Waiting Time = Turnaround Time - Burst Time.
Response Time
In an interactive system, turn-around time is not the best criterion. A process may produce
some output fairly early and continue computing new results while previous results are
being output to the user. Thus another criterion is the time taken from submission of the
process of the request until the first response is produced. This measure is called response
time.
Response Time = CPU Allocation Time(when the CPU was allocated for the first) -
Arrival Time

Burst Time (BT) :

Burst Time refers to the time, in milliseconds, required by a process for its execution. The
Burst Time counts only the CPU time of a process; the I/O time is not taken into
consideration. It is also called the execution time or running time of the process. The
process is in the Running state during this time frame. Burst time can be calculated as the
difference between the Turn Around Time of the process and the Waiting Time, that is,

Burst Time (B.T.) = Turn Around Time (T.A.T.) - Waiting Time (W.T.)

Completion Time
The completion time is the time when the process stops executing, which means that the
process has completed its burst time and is completely executed.

Priority
If the operating system assigns priorities to processes, the scheduling mechanism should
favor the higher-priority processes.

Predictability
A given process always should run in about the same amount of time under a similar
system load.

Process Queues

The Operating system manages various types of queues for each of the process states. The
PCB related to the process is also stored in the queue of the same state. If the Process is
moved from one state to another state then its PCB is also unlinked from the corresponding
queue and added to the other state queue in which the transition is made.

There are the following queues maintained by the Operating system.

1. Job Queue

In the beginning, all the processes get stored in the job queue. It is maintained in secondary
memory. The long-term scheduler (job scheduler) picks some of the jobs and puts them in
primary memory.

2. Ready Queue

The ready queue is maintained in primary memory. The short-term scheduler picks a job from
the ready queue and dispatches it to the CPU for execution.

3. Waiting Queue

When a process needs some I/O operation in order to complete its execution, the OS changes
the state of the process from running to waiting. The context (PCB) associated with the
process is stored on the waiting queue and will be used by the processor when the process
finishes its I/O.

CPU SCHEDULING ALGORITHM

Modes in CPU Scheduling Algorithms

There are two modes in CPU Scheduling Algorithms. They are:


1. Pre-emptive Approach
2. Non Pre-emptive Approach

In the Non-Preemptive approach, once a process starts its execution, the CPU is allotted to
that same process until the process completes. There is no shifting of processes by the
Central Processing Unit: the CPU stays allocated to the process, and the allocation does not
change until the process is complete.

In the Preemptive approach, a process that starts its execution is not guaranteed the CPU
until its completion. The Central Processing Unit shifts between processes: the CPU is
allocated to a process only while certain required conditions hold, and the allocation
changes whenever those conditions break, for example when a higher-priority process arrives
or a time slice expires.
Types of CPU Scheduling Algorithms

 First Come First Serve


 Shortest Job First
 Priority Scheduling
 Round Robin Scheduling

First Come First Serve Scheduling Algorithm

The First-Come, First-Served (FCFS) scheduling algorithm is a non-preemptive


scheduling method in which processes are executed in the order they arrive in the ready
queue. It is similar to a queue structure (FIFO: First In, First Out). We can also say that the
First Come First Serve CPU Scheduling Algorithm follows First In First Out in the Ready
Queue. First Come First Serve is called FCFS in short form.

Characteristics of FCFS (First Come First Serve):

 First Come First Serve is executed in the Non-Preemptive approach: once a process
starts, it runs to completion.

 The process which enters the Ready Queue first is executed first. So, we say that FCFS
follows the First In First Out approach.

Advantages

 Very easy to implement


 Follows FIFO Queue Approach

Disadvantages

 First Come First Serve is not very efficient.


 First Come First Serve suffers because of Convoy Effect.

Examples

Process ID   Process Name   Arrival Time   Burst Time
P1           A              0              6
P2           B              2              2
P3           C              3              1
P4           D              4              9
P5           E              5              8

Now, we are going to apply the FCFS (First Come First Serve) CPU Scheduling Algorithm to
the above problem.
In FCFS, the basic formulas apply as usual:
Turn Around Time = Completion Time - Arrival Time, and
Waiting Time = Turn Around Time - Burst Time.

Process   Arrival   Burst   Completion   Turn Around   Waiting
ID        Time      Time    Time         Time          Time
P1        0         6       6            6             0
P2        2         2       8            6             4
P3        3         1       9            6             5
P4        4         9       18           14            5
P5        5         8       26           21            13

Gantt Chart

| P1 | P2 | P3 | P4 | P5 |
0    6    8    9    18   26

Average Completion Time = The Total Sum of Completion Times which is divided by the
total number of processes is known as Average Completion Time.
Average Completion Time =( CT1 + CT2 + CT3 + . . . . . . . . . . . + CTn ) / n

Average Completion Time


1. Average CT = ( 6 + 8 +9 + 18 + 26 ) / 5
2. Average CT = 67 / 5
3. Average CT = 13.4

Average Turn Around Time = The Total Sum of Turn Around Times which is divided by
the total number of processes is known as Average Turn Around Time.
Average Turn Around Time = (TAT1 + TAT2 + TAT3 + . . . . . . . . . . . + TATn ) / n

Average Turn Around Time


1. Average TAT = ( 6 + 6 + 6 + 14 + 21 ) / 5
2. Average TAT = 53 / 5
3. Average TAT = 10.6

Average Waiting Time = The Total Sum of Waiting Times which is divided by the total
number of processes is known as Average Waiting Time.
Average Waiting Time = (WT1 + WT2 + WT3 + . . . . . . . . . . . + WTn ) / n

Average Waiting Time


1. Average WT = ( 0 + 4 + 5 + 5 + 13 ) / 5
2. Average WT = 27 / 5
3. Average WT = 5.4
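The whole calculation can be checked with a short Python sketch (the helper name `fcfs` is illustrative; it applies TAT = CT - AT and WT = TAT - BT to the table above):

```python
# Recomputes the FCFS example above with TAT = CT - AT and WT = TAT - BT.
# Process data is copied from the table.
procs = [("P1", 0, 6), ("P2", 2, 2), ("P3", 3, 1), ("P4", 4, 9), ("P5", 5, 8)]

def fcfs(processes):
    """Serve in arrival order; return (name, CT, TAT, WT) rows."""
    time, rows = 0, []
    for name, at, bt in sorted(processes, key=lambda p: p[1]):
        time = max(time, at) + bt                    # this process completes now
        rows.append((name, time, time - at, time - at - bt))
    return rows

rows = fcfs(procs)
n = len(rows)
avg_ct = sum(r[1] for r in rows) / n
avg_tat = sum(r[2] for r in rows) / n
avg_wt = sum(r[3] for r in rows) / n
print(avg_ct, avg_tat, avg_wt)   # 13.4 10.6 5.4
```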

Shortest Job First CPU Scheduling Algorithm

The Shortest Job First algorithm depends heavily on the Burst Times. In the Shortest Job
First CPU Scheduling Algorithm, the CPU allots its resources to the process in the ready
queue which has the least Burst Time.

If two processes present in the Ready Queue have the same Burst Time, we can choose either
process for execution; in actual operating systems, the tie is typically broken by arrival
order.
Shortest Job First is called SJF in short form.

Characteristics:

 SJF (Shortest Job First) has the least average waiting time. This is because all the
heavy (long) processes are executed last, so all the small processes are executed
first, which prevents small processes from waiting behind long ones.
 The estimated burst time is used as the measure of how long each process will take.
 If shorter processes continue to arrive, longer processes might starve. The idea of
aging can be used to overcome this issue.
 Shortest Job First can be executed in a preemptive as well as a non-preemptive way.

Advantages

 SJF is used because it has the least average waiting time among the CPU
Scheduling Algorithms.
 SJF is well suited to long-term (job) scheduling in batch systems, where burst
times can be estimated in advance.

Disadvantages

 Starvation is one of the negative traits Shortest Job First CPU Scheduling Algorithm
exhibits.
 Often, it becomes difficult to forecast how long the next CPU request will take.

Examples for Shortest Job First

Process ID   Arrival Time   Burst Time
P0           1              3
P1           2              6
P2           0              2
P3           3              7
P4           2              4
P5           5              5

Now, we are going to solve this problem in both pre-emptive and non pre-emptive way
We will first solve the problem in non pre-emptive way.

Non Pre Emptive Shortest Job First CPU Scheduling

Gantt Chart:

| P2 | P0 | P4 | P5 | P1 | P3 |
0    2    5    9    14   20   27

Now let us find out Average Completion Time, Average Turn Around Time, Average
Waiting Time.

Average Completion Time:

1. Average Completion Time = ( 5 + 20 + 2 + 27 + 9 + 14 ) / 6

2. Average Completion Time = 77/6

3. Average Completion Time = 12.833

Average Waiting Time:

1. Average Waiting Time = ( 1 + 12 + 0 + 17 + 3 + 4 ) / 6

2. Average Waiting Time = 37 / 6

3. Average Waiting Time = 6.166

Average Turn Around Time:

1. Average Turn Around Time = ( 4 + 18 + 2 + 24 + 7 + 9 ) / 6

2. Average Turn Around Time = 64 / 6

3. Average Turn Around Time = 10.666
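The schedule can be reproduced with a short Python sketch of non-preemptive SJF (the helper name is illustrative; P2 is taken as arriving at time 0, which is what the worked completion times imply):

```python
# Non-preemptive SJF for the example above. P2 is taken as arriving at time 0,
# which is what the worked completion times imply.
procs = {"P0": (1, 3), "P1": (2, 6), "P2": (0, 2),
         "P3": (3, 7), "P4": (2, 4), "P5": (5, 5)}

def sjf_nonpreemptive(processes):
    """Return {process: completion time}, always picking the least burst time."""
    remaining = dict(processes)
    time, ct = 0, {}
    while remaining:
        ready = [p for p, (at, _) in remaining.items() if at <= time]
        if not ready:                                  # CPU idle: jump ahead
            time = min(at for at, _ in remaining.values())
            continue
        p = min(ready, key=lambda q: remaining[q][1])  # shortest burst first
        time += remaining.pop(p)[1]                    # runs to completion
        ct[p] = time
    return ct

ct = sjf_nonpreemptive(procs)
print(ct)   # P2: 2, P0: 5, P4: 9, P5: 14, P1: 20, P3: 27
```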

Pre Emptive Approach
In the preemptive approach (Shortest Remaining Time First), whenever a new process arrives
whose burst time is shorter than the remaining time of the running process, the running
process is preempted. (In this worked example, P5 is taken as arriving at time 6 with a burst
time of 2 ms, as the calculations table shows.)

Gantt chart:

| P2 | P0 | P4 | P5 | P4 | P1 | P3 |
0    2    5    6    8    11   17   24

Calculations:
Process   Arrival     Burst       Completion   Turnaround Time   Waiting Time
          Time (AT)   Time (BT)   Time (CT)    (TAT = CT - AT)   (WT = TAT - BT)
P0        1           3           5            4                 1
P1        2           6           17           15                9
P2        0           2           2            2                 0
P3        3           7           24           21                14
P4        2           4           11           9                 5
P5        6           2           8            2                 0

Average Completion Time

1. Average Completion Time = ( 5 + 17 + 2 + 24 + 11 +8 ) / 6

2. Average Completion Time = 67 / 6

3. Average Completion Time = 11.166

Average Turn Around Time

1. Average Turn Around Time = ( 4 +15 + 2 + 21 + 9 + 2 ) / 6

2. Average Turn Around Time = 53 / 6

3. Average Turn Around Time = 8.833

Average Waiting Time

1. Average Waiting Time = ( 1 + 9 + 0 + 14 + 5 + 0 ) /6

2. Average Waiting Time = 29 / 6

3. Average Waiting Time = 4.833

Preemptive Shortest Job First (SJF) Scheduling - Numerical Example and Solution

Given Processes:

Process Arrival Time (AT) Burst Time (BT)


P1 0 ms 8 ms
P2 1 ms 4 ms
P3 2 ms 2 ms
P4 3 ms 1 ms

Gantt Chart Representation

| P1 | P2 | P3 | P4 | P3 | P2 | P1 |
0    1    2    3    4    5    8    15

Calculate Completion, Turnaround, and Waiting Times

Process   Arrival     Burst       Completion   Turnaround Time   Waiting Time
          Time (AT)   Time (BT)   Time (CT)    (TAT = CT - AT)   (WT = TAT - BT)
P1        0           8           15           15 - 0 = 15       15 - 8 = 7
P2        1           4           8            8 - 1 = 7         7 - 4 = 3
P3        2           2           5            5 - 2 = 3         3 - 2 = 1
P4        3           1           4            4 - 3 = 1         1 - 1 = 0

Calculate Average Times

 Average Turnaround Time (TAT)

Average TAT = (15 + 7 + 3 + 1) / 4 = 26 / 4 = 6.5 ms

 Average Waiting Time (WT)

Average WT = (7 + 3 + 1 + 0) / 4 = 11 / 4 = 2.75 ms
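This schedule can be reproduced with a millisecond-by-millisecond simulation in Python (a sketch; the function name is illustrative). Note that at t = 3 there is a tie between P3 and P4, each with 1 ms left; this sketch lets the earlier process finish first, which swaps P3 and P4 in the chart but leaves every average unchanged:

```python
# Millisecond-by-millisecond simulation of SRTF (preemptive SJF).
procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 2), "P4": (3, 1)}

def srtf(processes):
    """Return {process: completion time}."""
    remaining = {p: bt for p, (_, bt) in processes.items()}
    ct, time = {}, 0
    while remaining:
        ready = [p for p in remaining if processes[p][0] <= time]
        p = min(ready, key=lambda q: remaining[q])   # least remaining time runs
        remaining[p] -= 1                            # run it for 1 ms
        time += 1
        if remaining[p] == 0:
            del remaining[p]
            ct[p] = time
    return ct

ct = srtf(procs)
tat = {p: ct[p] - procs[p][0] for p in ct}   # TAT = CT - AT
wt = {p: tat[p] - procs[p][1] for p in ct}   # WT = TAT - BT
print(sum(tat.values()) / 4, sum(wt.values()) / 4)   # 6.5 2.75
```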

Priority CPU Scheduling

This is another type of CPU Scheduling Algorithms. Here, in this CPU Scheduling Algorithm
we are going to learn how CPU is going to allot resources to the certain process.

The Priority CPU Scheduling is different from the remaining CPU Scheduling Algorithms.
Here, each and every process has a certain Priority number.

There are two conventions for priority values:

 In some systems, the highest number is considered the highest priority.

 In other systems, the lowest number is considered the highest priority (this is the
convention used in the examples below).

In preemptive priority scheduling, the priority of a process determines how the CPU
Scheduling Algorithm operates: the CPU always runs the highest-priority ready process,
preempting a lower-priority running process if necessary. When there is a conflict, that is,
when several processes have equal priority, the tie is broken using the FCFS (First Come
First Serve) approach.

Characteristics

 Priority CPU scheduling organizes tasks according to importance.


 When a task with a lower priority is being performed while a task with a higher
priority arrives, the task with the lower priority is replaced by the task with the higher
priority, and the latter is stopped until the execution is finished.
 A process's priority level rises as the allocated number decreases.

Advantages

 The typical or average waiting time for Priority CPU Scheduling is shorter than First
Come First Serve (FCFS).
 It is easier to handle Priority CPU scheduling
 It is less complex

Disadvantages

 The Starvation Problem is one of the Preemptive Priority CPU Scheduling
Algorithm's most prevalent flaws. Because of this issue, a low-priority process may
have to wait a very long time to be scheduled onto the CPU; this is known as the
starvation (or hunger) problem.

Examples:

Now, let us explain this problem with the help of an example of Priority Scheduling

Non-Preemptive Priority Scheduling

Process   Arrival Time (AT)   Burst Time (BT)   Priority (lower number = higher priority)
P1        0 ms                5 ms              3
P2        1 ms                3 ms              1
P3        2 ms                8 ms              4
P4        3 ms                6 ms              2

Gantt Chart
| P1 | P2 | P4 | P3 |
0 5 8 14 22

Calculate Completion, Turnaround, and Waiting Times


Process   Arrival     Burst       Priority   Completion   Turnaround Time   Waiting Time
          Time (AT)   Time (BT)              Time (CT)    (TAT = CT - AT)   (WT = TAT - BT)
P1        0           5           3          5            5 - 0 = 5         5 - 5 = 0
P2        1           3           1          8            8 - 1 = 7         7 - 3 = 4
P3        2           8           4          22           22 - 2 = 20       20 - 8 = 12
P4        3           6           2          14           14 - 3 = 11       11 - 6 = 5
Calculate Average Times

Average TAT = (5 + 7 + 20 + 11) / 4 = 43 / 4 = 10.75 ms

Average WT = (0 + 4 + 12 + 5) / 4 = 21 / 4 = 5.25 ms
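The same result can be checked with a small Python sketch of non-preemptive priority scheduling (helper name illustrative; lower number = higher priority):

```python
# Non-preemptive priority scheduling for the table above
# (convention: lower number = higher priority).
procs = {"P1": (0, 5, 3), "P2": (1, 3, 1), "P3": (2, 8, 4), "P4": (3, 6, 2)}

def priority_nonpreemptive(processes):
    """Return {process: completion time}; each (AT, BT, priority) runs whole."""
    remaining = dict(processes)
    time, ct = 0, {}
    while remaining:
        ready = [p for p, (at, _, _) in remaining.items() if at <= time]
        if not ready:                                  # CPU idle: jump ahead
            time = min(at for at, _, _ in remaining.values())
            continue
        p = min(ready, key=lambda q: remaining[q][2])  # lowest number first
        time += remaining.pop(p)[1]                    # run to completion
        ct[p] = time
    return ct

ct = priority_nonpreemptive(procs)
print(ct)   # P1: 5, P2: 8, P4: 14, P3: 22
```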

Preemptive Priority Scheduling

Completion, Turnaround, and Waiting Times

Process   Arrival     Burst       Priority   Completion   Turnaround Time   Waiting Time
          Time (AT)   Time (BT)              Time (CT)    (TAT = CT - AT)   (WT = TAT - BT)
P1        0           5           3          14           14 - 0 = 14       14 - 5 = 9
P2        1           3           1          4            4 - 1 = 3         3 - 3 = 0
P3        2           8           4          22           22 - 2 = 20       20 - 8 = 12
P4        3           6           2          10           10 - 3 = 7        7 - 6 = 1

Gantt Chart
| P1 | P2 | P4 | P1 | P3 |

0 1 4 10 14 22

Calculate Average Times

Average CT = (14 + 4 + 22 + 10) / 4 = 50 / 4 = 12.5 ms

Average TAT = (14 + 3 + 20 + 7) / 4 = 44 / 4 = 11.0 ms

Average WT = (9 + 0 + 12 + 1) / 4 = 22 / 4 = 5.5 ms

Round Robin (RR) CPU Scheduling - Numerical Example & Solution

Definition:

 Round Robin (RR) is a preemptive CPU scheduling algorithm.


 Each process gets a fixed time slice (time quantum) to execute.
 If a process does not complete in the given time quantum, it is preempted and added
to the end of the ready queue.

Examples:

Given the following processes:

Process   Arrival Time (AT)   Burst Time (BT)
P1        0 ms                5 ms
P2        1 ms                3 ms
P3        2 ms                8 ms
P4        3 ms                6 ms

 Time Quantum (TQ) = 3 ms

Gantt Chart

| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P3 |
0 3 6 9 12 14 17 20 22

Turnaround Time (TAT = CT - AT)

Process | Arrival Time (AT) | Completion Time (CT) | Turnaround Time (TAT = CT - AT)
P1      | 0 ms              | 14 ms                | 14 - 0 = 14 ms
P2      | 1 ms              | 6 ms                 | 6 - 1 = 5 ms
P3      | 2 ms              | 22 ms                | 22 - 2 = 20 ms
P4      | 3 ms              | 20 ms                | 20 - 3 = 17 ms

Waiting Time (WT = TAT - BT)


Process | Turnaround Time (TAT) | Burst Time (BT) | Waiting Time (WT = TAT - BT)
P1      | 14 ms                 | 5 ms            | 14 - 5 = 9 ms
P2      | 5 ms                  | 3 ms            | 5 - 3 = 2 ms
P3      | 20 ms                 | 8 ms            | 20 - 8 = 12 ms
P4      | 17 ms                 | 6 ms            | 17 - 6 = 11 ms

Calculate Average Times

Average TAT = (14 + 5 + 20 + 17) / 4 = 56/4 = 14 ms

Average WT = (9 + 2 + 12 + 11) / 4 = 34/4 = 8.5 ms
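The Round Robin schedule can also be simulated with a FIFO ready queue. The sketch below (a minimal Python illustration, not part of the original notes) assumes the common convention that processes arriving during a time slice are enqueued before the preempted process that returns at the same instant.

```python
from collections import deque

# Round Robin simulation for the example above (time quantum = 3 ms).
procs = {"P1": (0, 5), "P2": (1, 3), "P3": (2, 8), "P4": (3, 6)}  # (AT, BT)
tq = 3
remaining = {p: procs[p][1] for p in procs}
arrivals = sorted(procs, key=lambda p: procs[p][0])
queue, time, i, ct = deque(), 0, 0, {}
while len(ct) < len(procs):
    while i < len(arrivals) and procs[arrivals[i]][0] <= time:
        queue.append(arrivals[i]); i += 1
    if not queue:                          # CPU idle: jump to the next arrival
        time = procs[arrivals[i]][0]
        continue
    p = queue.popleft()
    run = min(tq, remaining[p])            # run for one quantum or until finished
    time += run
    remaining[p] -= run
    while i < len(arrivals) and procs[arrivals[i]][0] <= time:
        queue.append(arrivals[i]); i += 1  # arrivals during the slice go first
    if remaining[p] == 0:
        ct[p] = time
    else:
        queue.append(p)                    # preempted: back to the end of the queue

tat = {p: ct[p] - procs[p][0] for p in procs}  # TAT = CT - AT
wt = {p: tat[p] - procs[p][1] for p in procs}  # WT  = TAT - BT
print(ct)
print(sum(tat.values()) / 4, sum(wt.values()) / 4)  # 14.0 8.5
```

The completion times match the Gantt chart (P2 at 6, P1 at 14, P4 at 20, P3 at 22), as do the averages.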

Multiple Processors Scheduling in Operating System

Multiple processor scheduling, or multiprocessor scheduling, concerns the design of
the scheduling function for a system with more than one processor. In multiprocessor
scheduling, multiple CPUs share the load (load sharing) so that various processes run
simultaneously. In general, multiprocessor scheduling is more complex than
single-processor scheduling. When the processors are identical, any process can run
on any processor at any time.

Approaches to Multiple Processor Scheduling

There are two approaches to multiple processor scheduling in the operating system:
Symmetric Multiprocessing and Asymmetric Multiprocessing.

1. Symmetric Multiprocessing: Each processor is self-scheduling. All processes may
   be in a common ready queue, or each processor may have its own private queue of
   ready processes. Scheduling proceeds by having the scheduler for each processor
   examine the ready queue and select a process to execute.
2. Asymmetric Multiprocessing: All scheduling decisions and I/O processing are
   handled by a single processor called the master server; the other processors
   execute only user code. This approach is simple and reduces the need for data
   sharing.

Processor Affinity

Processor Affinity means a process has an affinity for the processor on which it is
currently running. When a process runs on a specific processor, the data it has most
recently accessed populates that processor's cache. As a result, successive memory
accesses by the process are often satisfied from cache memory.

There are two types of processor affinity, such as:

1. Soft Affinity: When an operating system has a policy of keeping a process running on
the same processor but not guaranteeing it will do so, this situation is called soft
affinity.
2. Hard Affinity: Hard Affinity allows a process to specify a subset of processors on
which it may run. Some Linux systems implement soft affinity and provide system
calls like sched_setaffinity() that also support hard affinity.
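As an illustration of hard affinity on Linux, Python's standard library exposes the sched_setaffinity(2) system call as os.sched_setaffinity / os.sched_getaffinity (pid 0 means the calling process). The sketch below pins the current process to a single CPU and then restores the original mask; it is Linux-only, so it guards for availability.

```python
import os

# Hedged sketch of hard processor affinity (Linux only).
if hasattr(os, "sched_getaffinity"):
    allowed = os.sched_getaffinity(0)        # current set of CPUs the process may use
    os.sched_setaffinity(0, {min(allowed)})  # pin this process to one CPU
    print(os.sched_getaffinity(0))           # now a single-CPU set
    os.sched_setaffinity(0, allowed)         # restore the original mask
else:
    print("sched_setaffinity is not available on this platform")
```

Pinning like this preserves cache contents across scheduling decisions, which is exactly the benefit processor affinity is meant to capture.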

Load Balancing

Load Balancing keeps the workload evenly distributed across all processors in an SMP
system. Load balancing is necessary only on systems where each processor has its own
private queue of processes eligible to execute; on systems with a common run queue,
load balancing is unnecessary, because an idle processor immediately extracts a
runnable process from the common queue. On SMP (symmetric multiprocessing) systems,
it is important to keep the workload balanced among all processors to fully utilize
the benefits of having more than one processor; otherwise, one or more processors may
sit idle while other processors are overloaded, with lists of processes awaiting the
CPU. There are two general approaches to load balancing:

1. Push Migration: In push migration, a task routinely checks the load on each
processor. If it finds an imbalance, it evenly distributes the load on each processor by
moving the processes from overloaded to idle or less busy processors.
2. Pull Migration: Pull migration occurs when an idle processor pulls a waiting task
   from a busy processor for execution.

Symmetric Multiprocessor

Symmetric Multiprocessing (SMP) is the third model. In this model, there is one copy
of the OS in memory, but any central processing unit can run it. When a system call
is made, the CPU on which it was made traps to the kernel and processes that system
call. This model balances processes and memory dynamically, and each processor is
self-scheduling.

There are mainly three sources of contention that can be found in a multiprocessor operating
system.

o Locking system: Since resources are shared in a multiprocessor system, they must
be protected for safe access by the multiple processors. The purpose of the locking
scheme is to serialize access to the resources by the multiple processors.

o Shared data: When multiple processors access the same data at the same time, the
data may become inconsistent, so it must be protected with suitable protocols or
locking schemes.
o Cache coherence: Shared data may be stored in multiple local caches. Suppose two
clients hold a cached copy of a memory block and one client changes it; the other
client could be left with an invalid cache without any notification of the change.
This conflict is resolved by maintaining a coherent view of the data.

Master-Slave Multiprocessor

In this multiprocessor model, there is a single data structure that keeps track of
the ready processes. One central processing unit works as the master and the others
as slaves; all the processors are handled by the single processor called the master
server.

The master server runs the operating system processes, and the slaves run user
processes. The memory and input-output devices are shared among all the processors,
which are connected to a common bus. This system is simple and reduces the need for
data sharing, which is why it is called asymmetric multiprocessing.

