Cpu Scheduling Final

The document discusses CPU scheduling in operating systems. It introduces basic concepts of CPU scheduling like processes alternating between CPU bursts and I/O bursts. It describes the role of the CPU scheduler in allocating the CPU to ready processes. It then covers the different types of CPU scheduling algorithms like preemptive and non-preemptive, and evaluation criteria like CPU utilization, throughput, turnaround time and waiting time. Specific algorithms discussed include first-come first-served scheduling.

Uploaded by

dave vegafria

CPU SCHEDULING

CPU scheduling is the basis of multiprogrammed operating systems. By switching the CPU among
processes, the operating system can make the computer more productive. This lesson introduces basic
CPU-scheduling concepts, presents several CPU-scheduling algorithms, and considers the problem of
selecting an algorithm for a particular system.

CHAPTER OBJECTIVES

1. To introduce CPU scheduling, which is the basis for multiprogrammed operating systems.
2. To describe various CPU-scheduling algorithms.
3. To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a particular system.
4. To examine the scheduling algorithms of several operating systems.

BASIC CONCEPTS

In a single-processor system, only one process can run at a time. Others must wait until the CPU is
free and can be rescheduled. The objective of multiprogramming is to have some process running at all
times, to maximize CPU utilization. A process is executed until it must wait, typically for the completion of
some I/O request. In a simple computer system, the CPU then just sits idle. All this waiting time is wasted;
no useful work is accomplished. With multiprogramming, we try to use this time productively. When one
process must wait, the operating system takes the CPU away from that process and gives the CPU to
another process. This pattern continues: every time one process must wait, another process can take
over use of the CPU.

CPU– I/O BURST CYCLE


CPU BURST

The amount of time a process uses the CPU until it starts waiting for some input or is interrupted by
some other process.

I/O BURST (INPUT-OUTPUT BURST)

The amount of time a process waits for input-output before needing the CPU again.
BURST CYCLE

The execution process consists of a cycle of CPU burst and I/O burst. Usually, it starts with CPU burst and
then followed by I/O burst, another CPU burst, another I/O burst and so on. This cycle will continue until
the process is terminated.

CPU SCHEDULER

Whenever the CPU becomes idle, the operating system must select one of the processes in the
ready queue to be executed. The selection process is carried out by the short-term scheduler, or CPU
scheduler. The scheduler selects a process from the processes in memory that are ready to execute and
allocates the CPU to that process.
TYPES OF CPU SCHEDULING

CPU-scheduling decisions may take place under the following four circumstances:

1. When a process switches from the running state to the waiting state (for example, as the result of an
I/O request or an invocation of wait() for the termination of a child process)
2. When a process switches from the running state to the ready state (for example, when an interrupt
occurs)
3. When a process switches from the waiting state to the ready state (for example, at completion of I/O)
4. When a process terminates

PREEMPTIVE SCHEDULING - Preemptive scheduling is used when a process switches from the running
state to the ready state or from the waiting state to the ready state. The resources (mainly CPU cycles)
are allocated to the process for a limited amount of time and then taken away, and the process is again
placed back in the ready queue if that process still has CPU burst time remaining. That process stays in
the ready queue till it gets its next chance to execute.
ADVANTAGES

1. Because a process may not monopolize the processor, it is a more reliable method.
2. Urgent or high-priority work is not blocked, since the scheduler can interrupt ongoing tasks.
3. The average response time is improved.
4. Utilizing this method in a multi-programming environment is more advantageous.
5. The operating system makes sure that every process gets a fair share of CPU time.
DISADVANTAGES

1. It uses more of the limited computational resources, since the scheduler itself consumes time and
memory.
2. Suspending the running process, switching the context, and dispatching the new incoming process all
take extra time.
3. A low-priority process would have to wait if multiple high-priority processes arrive at the same
time.

NONPREEMPTIVE SCHEDULING - is used when a process terminates, or a process switches from the
running state to the waiting state. In this scheduling, once the resources (CPU cycles) are allocated to a
process, the process holds the CPU until it terminates or reaches a waiting state. Non-preemptive
scheduling does not interrupt a process running on the CPU in the middle of its execution. Instead, it
waits until the process completes its CPU burst, and only then allocates the CPU to another process.
ADVANTAGES

1. It has a minimal scheduling burden.


2. It is a very easy procedure.
3. Less computational resources are used.
4. It has a high throughput rate.
DISADVANTAGES

1. Its response time to processes is poor; a long CPU burst delays every process behind it.


2. Bugs can cause a computer to freeze up.

KEY DIFFERENCES BETWEEN PREEMPTIVE AND NON-PREEMPTIVE SCHEDULING


1. In preemptive scheduling, the CPU is allocated to the processes for a limited time, whereas in non-
preemptive scheduling, the CPU is allocated to the process until it terminates or switches to the waiting
state.
2. The executing process in preemptive scheduling is interrupted in the middle of execution when a
higher-priority one arrives, whereas the executing process in non-preemptive scheduling is not
interrupted in the middle of execution and runs until it completes.
3. Preemptive scheduling has the overhead of switching processes between the ready and running
states (and vice versa) and of maintaining the ready queue, whereas non-preemptive scheduling has
no overhead of switching a process from the running state to the ready state.
4. In preemptive scheduling, if high-priority processes frequently arrive in the ready queue, then a
low-priority process has to wait a long time and may starve. In non-preemptive scheduling, if the CPU
is allocated to a process with a large burst time, then processes with small burst times may starve.
5. Preemptive scheduling attains flexibility by allowing critical processes to access the CPU as they
arrive in the ready queue, no matter what process is executing currently. Non-preemptive scheduling
is called rigid, as even if a critical process enters the ready queue, the process running on the CPU is not
disturbed.
6. Preemptive scheduling has to maintain the integrity of shared data, which adds cost; this is not the
case with non-preemptive scheduling.

SCHEDULING CRITERIA

Different CPU-scheduling algorithms have different properties, and the choice of a particular algorithm
may favor one class of processes over another. In choosing which algorithm to use in a particular situation,
we must consider the properties of the various algorithms.

Many criteria have been suggested for comparing CPU-scheduling algorithms. Which characteristics are
used for comparison can make a substantial difference in which algorithm is judged to be best. The criteria
include the following:
▪ CPU UTILIZATION - We want to keep the CPU as busy as possible. Conceptually, CPU utilization can
range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded
system) to 90 percent (for a heavily loaded system).
▪ THROUGHPUT – If the CPU is busy executing processes, then work is being done. One measure of
work is the number of processes that are completed per time unit, called throughput. For long
processes, this rate may be one process per hour; for short transactions, it may be ten processes per
second.
▪ TURNAROUND TIME - From the point of view of a particular process, the important criterion is how
long it takes to execute that process. The interval from the time of submission of a process to the time
of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get
into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
▪ WAITING TIME – The CPU-scheduling algorithm does not affect the amount of time during which a
process executes or does I/O. It affects only the amount of time that a process spends waiting in the
ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.
▪ RESPONSE TIME - In an interactive system, turnaround time may not be the best criterion. Often, a
process can produce some output fairly early and can continue computing new results while previous
results are being output to the user. Thus, another measure is the time from the submission of a
request until the first response is produced. This measure, called response time, is the time it takes to
start responding, not the time it takes to output the response. The turnaround time is generally
limited by the speed of the output device.
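
Since each criterion is simple arithmetic over a finished schedule, the measures can be computed mechanically. Below is a minimal Python sketch; the `metrics` helper and its tuple format are illustrative assumptions, not part of the lesson, and the sample values are chosen to match the FCFS waiting times quoted later in this chapter (0, 24, and 27 ms):

```python
# Sketch: deriving the scheduling criteria for each process from a
# finished schedule. All names here are illustrative.

def metrics(procs):
    """procs maps name -> (arrival, burst, completion, first_start)."""
    out = {}
    for name, (at, bt, ct, fs) in procs.items():
        tat = ct - at   # turnaround time: submission to completion
        wt = tat - bt   # waiting time: time spent in the ready queue
        rt = fs - at    # response time: until first CPU allocation
        out[name] = (tat, wt, rt)
    return out

print(metrics({"P1": (0, 24, 24, 0),
               "P2": (0, 3, 27, 24),
               "P3": (0, 3, 30, 27)}))
```

Note that waiting time falls out of turnaround time minus burst time, which is why the detailed examples later compute WT = TAT − BT.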

CPU SCHEDULING ALGORITHMS

CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be
allocated the CPU.

1. FIRST-COME, FIRST-SERVED SCHEDULING – first-come, first-served (FCFS) is the simplest and most
straightforward CPU scheduling strategy. Under this scheduling technique, the task that requests the
CPU first is served first: the CPU starts by executing the process with the earliest arrival time. It is a
non-preemptive scheduling technique, since processes are run to completion in arrival order regardless
of their priority. This scheduling algorithm is implemented with a FIFO (first-in, first-out) queue. When a
process becomes ready to execute, its Process Control Block (PCB) is linked onto the tail of this FIFO
queue. When the CPU becomes free, it is assigned to the process at the head of the queue.

Advantages
1. Involves no complex logic and just picks processes from the ready queue one by one.
2. Easy to implement and understand.
3. Every process will eventually get a chance to run so no starvation occurs.
Disadvantages
1. Waiting time for processes with less execution time is often very long.
2. It favors CPU-bound processes over I/O-bound processes.
3. Leads to convoy effect.
4. Causes lower device and CPU utilization.
5. Poor in performance as the average wait time is high.
If three processes arrive at time 0 in the order P1, P2, P3, with CPU bursts of 24, 3, and 3 milliseconds
respectively (the burst times implied by the waiting times below), and are served in FCFS order, we get
the result shown in the following Gantt chart, which is a bar chart that illustrates a particular schedule,
including the start and finish times of each of the participating processes:

P1 (0–24), P2 (24–27), P3 (27–30)

The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27 milliseconds for
process P3. Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds. If the processes arrive in the
order P2, P3, P1, however, the results will be as shown in the following Gantt chart:

P2 (0–3), P3 (3–6), P1 (6–30)

The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds. This reduction is substantial. Thus, the
average waiting time under an FCFS policy is generally not minimal and may vary substantially if the
processes’ CPU burst times vary greatly.

Note also that the FCFS scheduling algorithm is nonpreemptive. Once the CPU has been allocated to a
process, that process keeps the CPU until it releases the CPU, either by terminating or by requesting I/O.
The FCFS algorithm is thus particularly troublesome for time-sharing systems, where it is important that
each user get a share of the CPU at regular intervals. It would be disastrous to allow one process to keep
the CPU for an extended period.

DETAILED EXAMPLE

Algorithm: First Come First Serve


Mode: Non-Preemptive
Criteria: Arrival Time
Formulas: TAT = CT − AT; WT = TAT − BT (CT is read from the Gantt chart; AT and BT from the table)
Process Arrival Burst Time Completion Turn Around Waiting Response
Time (AT) (BT) Time (CT) Time (TAT) Time (WT) Time (RT)
P0 8 5 20 12 7 7
P1 1 3 11 10 7 7
P2 0 8 8 8 0 0
P3 15 6 26 11 5 5
P4 5 4 15 10 6 6
Avg. CT=16 Avg. TAT=10.20 Avg. WT = 5 Avg. RT = 5
Gantt Chart
P2 P1 P4 P0 P3
0 8 11 15 20 26

Total TAT (total turnaround time) = 12 + 10 + 8 + 11 + 10 = 51
Avg. TAT (average turnaround time) = Total TAT / number of processes = 51 / 5 = 10.20
Total WT (total waiting time) = 7 + 7 + 0 + 5 + 6 = 25
Avg. WT (average waiting time) = Total WT / number of processes = 25 / 5 = 5
Throughput = number of processes / (max completion time − minimum arrival time) × 100
           = 5 / (26 − 0) × 100 = 19.23%
CPU Utilization = total BT / last completion time in chart × 100
                = (5 + 3 + 8 + 6 + 4) / 26 × 100 = 26 / 26 × 100 = 100%
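
The table above can be reproduced programmatically. Here is a short sketch of non-preemptive FCFS; the `fcfs` function and its `(name, arrival, burst)` tuple format are my own assumptions, not from the lesson:

```python
# Non-preemptive FCFS: serve processes in arrival-time order.
# Reproduces the detailed example above (P0..P4 with the given AT/BT).

def fcfs(processes):
    """processes: list of (name, arrival, burst).
    Returns {name: (completion, turnaround, waiting)}."""
    result = {}
    time = 0
    for name, at, bt in sorted(processes, key=lambda p: p[1]):
        time = max(time, at)   # CPU may sit idle until the process arrives
        time += bt             # run the whole burst without preemption
        ct = time
        result[name] = (ct, ct - at, (ct - at) - bt)
    return result

procs = [("P0", 8, 5), ("P1", 1, 3), ("P2", 0, 8), ("P3", 15, 6), ("P4", 5, 4)]
for name, (ct, tat, wt) in sorted(fcfs(procs).items()):
    print(name, ct, tat, wt)
```

Sorting by arrival time and advancing a single clock is the whole algorithm; `max(time, at)` models the CPU idling until the next arrival (here P3 arrives at 15 but waits until P0 finishes at 20).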

2. SHORTEST-JOB-FIRST SCHEDULING – Shortest Job First (SJF) is a non-preemptive scheduling algorithm
in which the process with the shortest CPU burst time is executed first. The shorter the execution time,
the sooner the process gets the CPU. The scheduler must know (or estimate) the burst time of each
process in advance, and among the processes that have already arrived it always picks the one with the
smallest burst. If two processes have the same burst time, then First Come First Serve (FCFS) scheduling
is used to break the tie.
The preemptive mode of SJF scheduling is known as the Shortest Remaining Time First scheduling
algorithm.

Advantages
1. Results in increased Throughput by executing shorter jobs first, which mostly have a shorter
turnaround time.
2. Gives the minimum average waiting time for a given set of processes.
3. Best approach to minimize waiting time for other processes awaiting execution.
4. Useful for batch-type processing where CPU time is known in advance and waiting for jobs to
complete is not critical.

Disadvantages
1. May lead to starvation: if shorter processes keep arriving, longer processes may never
get a chance to run.
2. Time taken by a process must be known to the CPU beforehand, which is not always possible.
Consider four processes arriving at time 0 with CPU bursts of 6, 8, 7, and 3 milliseconds for P1, P2, P3,
and P4 respectively (the burst times implied by the waiting times below). Using SJF scheduling, we would
schedule these processes according to the following Gantt chart:

P4 (0–3), P1 (3–9), P3 (9–16), P2 (16–24)

The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9 milliseconds for
process P3, and 0 milliseconds for process P4. Thus, the average waiting time is (3 + 16 + 9 + 0)/4 = 7
milliseconds. By comparison, if we were using the FCFS scheduling scheme, the average waiting time
would be 10.25 milliseconds.

The SJF scheduling algorithm is provably optimal, in that it gives the minimum average waiting time
for a given set of processes. Moving a short process before a long one decreases the waiting time of
the short process more than it increases the waiting time of the long process. Consequently, the
average waiting time decreases.

Consider the following four processes, with arrival time and the length of the CPU burst given in
milliseconds (the values implied by the calculation below): P1 arrives at time 0 with a burst of 8; P2 at
time 1 with a burst of 4; P3 at time 2 with a burst of 9; P4 at time 3 with a burst of 5.

If the processes arrive at the ready queue at the times shown and need the indicated burst times, then
the resulting preemptive SJF schedule is as depicted in the following Gantt chart:

P1 (0–1), P2 (1–5), P4 (5–10), P1 (10–17), P3 (17–26)

Process P1 starts at time 0, since it is the only process in the queue. Process P2 arrives at time 1. The
remaining time for process P1 (7 milliseconds) is larger than the time required by process P2 (4
milliseconds), so process P1 is preempted, and process P2 is scheduled. The average waiting time for this
example is [(10 − 1) + (1 − 1) + (17 − 2) + (5 − 3)]/4 = 26/4 = 6.5 milliseconds. Non-preemptive SJF
scheduling would result in an average waiting time of 7.75 milliseconds.
DETAILED EXAMPLE

Algorithm: Shortest Job First


Mode: Non-Preemptive
Criteria: Burst Time
Formulas: TAT = CT − AT; WT = TAT − BT (CT is read from the Gantt chart; AT and BT from the table)
Process Arrival Burst Time Completion Turn Around Waiting Response
Time (AT) (BT) Time (CT) Time (TAT) Time (WT) Time (RT)
P1 0 3 3 3 0 0
P2 1 5 8 7 2 2
P3 2 7 17 15 8 8
P4 2 10 27 25 15 15
P5 4 2 10 6 4 4
Avg. CT=13 Avg. TAT=11.2 Avg. WT = 5.8 Avg. RT = 5.8

Gantt Chart
P1 P2 P5 P3 P4
0 3 8 10 17 27

Throughput = number of processes / (max completion time − minimum arrival time) × 100
           = 5 / (27 − 0) × 100 = 18.52%
CPU Utilization = total BT / last completion time in chart × 100
                = (3 + 5 + 7 + 10 + 2) / 27 × 100 = 27 / 27 × 100 = 100%
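
The non-preemptive SJF selection rule can be sketched in a few lines, reproducing the table above. The `sjf` function and the `(name, arrival, burst)` tuple format are assumptions of this sketch:

```python
import heapq

# Non-preemptive SJF: when the CPU frees up, pick the arrived process
# with the smallest burst time. Reproduces the detailed example above.

def sjf(processes):
    """processes: list of (name, arrival, burst).
    Returns {name: (completion, turnaround, waiting)}."""
    pending = sorted(processes, key=lambda p: p[1])   # by arrival time
    ready, result, time = [], {}, 0
    while pending or ready:
        # Move every process that has arrived into the ready heap.
        while pending and pending[0][1] <= time:
            name, at, bt = pending.pop(0)
            heapq.heappush(ready, (bt, at, name))
        if not ready:                 # CPU idles until the next arrival
            time = pending[0][1]
            continue
        bt, at, name = heapq.heappop(ready)
        time += bt                    # run the whole burst, no preemption
        result[name] = (time, time - at, time - at - bt)
    return result

procs = [("P1", 0, 3), ("P2", 1, 5), ("P3", 2, 7), ("P4", 2, 10), ("P5", 4, 2)]
print(sjf(procs))
```

The heap is keyed on `(burst, arrival, name)`, so equal bursts fall back to arrival order, matching the FCFS tie-break the text prescribes.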

3. SHORTEST REMAINING TIME FIRST (SRTF) SCHEDULING ALGORITHM - The Shortest Remaining Time
First (SRTF) scheduling algorithm is the preemptive mode of the Shortest Job First (SJF) algorithm, in
which jobs are scheduled according to the shortest remaining time. The process with the shortest
remaining burst time is executed first by the CPU, and the arrival times of the processes need not be
the same. If a process arrives whose burst time is shorter than the running process's remaining time,
the current process is preempted, and the newly ready job executes first.

Advantages
1. The SRTF algorithm processes jobs faster than the non-preemptive SJF algorithm, provided its
overhead charges are not counted.
2. Short processes are handled very quickly, which lowers the average waiting time.
3. The scheduler reacts immediately to new arrivals, so urgent short jobs do not wait behind
long-running ones.

Disadvantages
1. The context switch is done many more times in SRTF than in SJF and consumes the CPU's valuable
time for processing. This adds to its processing time and diminishes its advantage of fast
processing.
2. The remaining burst time of every process must be known (or estimated) in advance, which is not
always possible.
3. Long processes may starve if shorter processes keep arriving.

DETAILED EXAMPLE

Example 1

Algorithm: Shortest Remaining Time First


Mode: Preemptive
Criteria: Burst Time
Formulas: TAT = CT − AT; WT = TAT − BT (CT is read from the Gantt chart; AT and BT from the table)
Process Arrival Burst Time Completion Turn Around Waiting Response
Time (AT) (BT) Time (CT) Time (TAT) Time (WT) Time (RT)
P1 0 8 17 17 9 0
P2 3 4 7 4 0 0
P3 4 5 22 18 13 13
P4 6 3 10 4 1 1
P5 10 2 12 2 0 0
Avg. CT=13.6 Avg. TAT= 9 Avg. WT = 4.6 Avg. RT = 2.8

Gantt Chart
P1 P2 P4 P5 P1 P3
0 3 7 10 12 17 22

Throughput = number of processes / (max completion time − minimum arrival time) × 100
           = 5 / (22 − 0) × 100 = 22.72%
CPU Utilization = total BT / last completion time in chart × 100
                = (8 + 4 + 5 + 3 + 2) / 22 × 100 = 22 / 22 × 100 = 100%
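
Example 1 can be replayed with a unit-time simulation of SRTF. This is a sketch: the names are my own, and ties on remaining time are broken by arrival order, which is the convention the example's Gantt chart follows (P1 beats P3 at time 12):

```python
# Preemptive SJF (SRTF): at every time unit, run the arrived process
# with the least remaining burst time. Reproduces Example 1 above.

def srtf(processes):
    """processes: list of (name, arrival, burst).
    Returns {name: (completion, turnaround, waiting, response)}."""
    arrival = {n: at for n, at, bt in processes}
    burst = {n: bt for n, at, bt in processes}
    remaining = dict(burst)
    first_start, completion = {}, {}
    time = 0
    while len(completion) < len(processes):
        # Candidates: arrived and unfinished processes.
        ready = [(remaining[n], arrival[n], n) for n in remaining
                 if arrival[n] <= time and remaining[n] > 0]
        if not ready:                  # CPU idles until the next arrival
            time += 1
            continue
        _, _, name = min(ready)        # least remaining time; ties by arrival
        first_start.setdefault(name, time)
        remaining[name] -= 1           # execute one time unit
        time += 1
        if remaining[name] == 0:
            completion[name] = time
    return {n: (completion[n],
                completion[n] - arrival[n],               # turnaround
                completion[n] - arrival[n] - burst[n],    # waiting
                first_start[n] - arrival[n])              # response
            for n in completion}

procs = [("P1", 0, 8), ("P2", 3, 4), ("P3", 4, 5), ("P4", 6, 3), ("P5", 10, 2)]
print(srtf(procs))
```

Stepping one time unit at a time is inefficient but makes the preemption decision at every arrival explicit, which is the point of the example.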

Example 2

Algorithm: Shortest Remaining Time First


Mode: Preemptive
Criteria: Burst Time
Formulas: TAT = CT − AT; WT = TAT − BT (CT is read from the Gantt chart; AT and BT from the table)
Process Arrival Burst Time Completion Turn Around Waiting Response
Time (AT) (BT) Time (CT) Time (TAT) Time (WT) Time (RT)
P1 3 4 11 8 4 4
P2 5 9 30 25 16 16
P3 8 4 15 7 3 3
P4 0 7 7 7 0 0
P5 12 6 21 9 3 3
Avg. CT=16.8 Avg. TAT= 11.2 Avg. WT = 5.2 Avg. RT = 5.2

Gantt Chart
P4 P4 P4 P1 P1 P3 P3 P5 P2
0 3 5 7 8 11 12 15 21 30

Throughput = number of processes / (max completion time − minimum arrival time) × 100
           = 5 / (30 − 0) × 100 = 16.67%
CPU Utilization = total BT / last completion time in chart × 100
                = (4 + 9 + 4 + 7 + 6) / 30 × 100 = 30 / 30 × 100 = 100%

4. ROUND-ROBIN SCHEDULING – the round-robin (RR) scheduling algorithm is designed
especially for timesharing systems. It is similar to FCFS scheduling, but preemption is added to enable
the system to switch between processes. A small unit of time, called a time quantum or time slice, is
defined. A time quantum is generally from 10 to 100 milliseconds in length. The ready queue is
treated as a circular queue: the CPU is allocated to each process for at most one time quantum. If the
process completes its execution within the given quantum of time, it releases the CPU voluntarily and
the next process in the queue executes. But if the process is not completely executed within the given
time quantum, it is preempted, added back to the tail of the ready queue, and waits for its next turn to
complete its execution.

Round-robin scheduling is one of the oldest and simplest scheduling algorithms and derives its
name from the round-robin principle, in which each person takes an equal share of
something in turn. This algorithm is mostly used for multitasking in time-sharing systems and in
operating systems serving multiple clients, so that they can make efficient use of resources.

Advantages

1. All processes are given the same priority; hence all processes get an equal share of the CPU.
2. Since it is cyclic in nature, no process is left behind, and starvation doesn't exist.
Disadvantages

1. The performance of Throughput depends on the length of the time quantum. Setting it too short
increases the overhead and lowers the CPU efficiency, but if we set it too long, it gives a poor
response to short processes and tends to exhibit the same behavior as FCFS.
2. The average waiting time of the Round Robin algorithm is often long.
3. Context switching happens many more times, which adds overhead.

Consider the following set of processes that arrive at time 0, with the length of the CPU burst given
in milliseconds: P1 = 24, P2 = 3, P3 = 3 (the burst times implied by the description below).
If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds. Since it
requires another 20 milliseconds, it is preempted after the first time quantum, and the CPU is given
to the next process in the queue, process P2. Process P2 does not need 4 milliseconds, so it quits
before its time quantum expires. The CPU is then given to the next process, process P3. Once each
process has received 1 time quantum, the CPU is returned to process P1 for an additional time
quantum. The resulting RR schedule is as follows:

P1 (0–4), P2 (4–7), P3 (7–10), then P1 in successive quanta from 10 to 30

Let's calculate the average waiting time for this schedule. P1 waits for 6 milliseconds (10 − 4), P2 waits
for 4 milliseconds, and P3 waits for 7 milliseconds. Thus, the average waiting time is 17/3 = 5.66
milliseconds.

In the RR scheduling algorithm, no process is allocated the CPU for more than 1 time quantum in a
row (unless it is the only runnable process). If a process’s CPU burst exceeds 1 time quantum, that
process is preempted and is put back in the ready queue. The RR scheduling algorithm is thus
preemptive.

DETAILED EXAMPLE

Example 1

Algorithm: Round Robin


Mode: Preemptive (Time Quantum = 3)
Criteria: Burst Time
Formulas: TAT = CT − AT; WT = TAT − BT (CT is read from the Gantt chart; AT and BT from the table)
Process Arrival Burst Time Completion Turn Around Waiting Response
Time (AT) (BT) Time (CT) Time (TAT) Time (WT) Time (RT)
P1 0 17 28 28 11 0
P2 0 3 6 6 3 3
P3 0 6 17 17 11 6
P4 0 2 11 11 9 9
Avg. CT=15.5 Avg. TAT= 15.5 Avg. WT = 8.5 Avg. RT = 4.5

Gantt Chart
P1 P2 P3 P4 P1 P3 P1
0 3 6 9 11 14 17 28

Throughput = number of processes / (max completion time − minimum arrival time) × 100
           = 4 / (28 − 0) × 100 = 14.28%
CPU Utilization = total BT / last completion time in chart × 100
                = (17 + 3 + 6 + 2) / 28 × 100 = 28 / 28 × 100 = 100%
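
Example 1 above (all arrivals at time 0, quantum 3) can be reproduced with a queue-based sketch. Names are my own assumptions; note that when arrivals are staggered, conventions differ on whether a newly arrived process or a just-preempted one joins the queue first — with simultaneous arrivals, as here, the choice does not matter:

```python
from collections import deque

# Round robin with time quantum q: each process runs at most q units,
# then rejoins the tail of the ready queue if unfinished.
# Reproduces Example 1 above (all processes arrive at time 0, q = 3).

def round_robin(processes, q):
    """processes: list of (name, arrival, burst).
    Returns {name: (completion, turnaround, waiting)}."""
    pending = sorted(processes, key=lambda p: p[1])
    arrival = {n: at for n, at, bt in processes}
    burst = {n: bt for n, at, bt in processes}
    remaining = dict(burst)
    queue, result, time = deque(), {}, 0
    while pending or queue:
        while pending and pending[0][1] <= time:
            queue.append(pending.pop(0)[0])   # newly arrived -> tail
        if not queue:                         # CPU idles until next arrival
            time = pending[0][1]
            continue
        name = queue.popleft()
        run = min(q, remaining[name])         # one slice, at most q units
        time += run
        remaining[name] -= run
        while pending and pending[0][1] <= time:
            queue.append(pending.pop(0)[0])   # arrivals during the slice
        if remaining[name] > 0:
            queue.append(name)                # preempted -> back of queue
        else:
            result[name] = (time, time - arrival[name],
                            time - arrival[name] - burst[name])
    return result

procs = [("P1", 0, 17), ("P2", 0, 3), ("P3", 0, 6), ("P4", 0, 2)]
print(round_robin(procs, 3))
```

The `min(q, remaining[name])` line captures the rule in the text: a process that finishes early gives up the CPU before its quantum expires, while an unfinished one is preempted and requeued.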

Example 2

Algorithm: Round Robin


Mode: Preemptive (Time Quantum = 4)
Criteria: Burst Time
Formulas: TAT = CT − AT; WT = TAT − BT (CT is read from the Gantt chart; AT and BT from the table)
Process Arrival Burst Time Completion Turn Around Waiting Response
Time (AT) (BT) Time (CT) Time (TAT) Time (WT) Time (RT)
P1 0 8 21 21 13 0
P2 3 4 8 5 1 1
P3 4 5 22 18 13 4
P4 6 3 15 9 6 6
P5 10 2 17 7 5 5
Avg. CT=16.6 Avg. TAT= 12 Avg. WT = 7.6 Avg. RT = 3.2
Gantt Chart
P1 P2 P3 P4 P5 P1 P3
0 4 8 12 15 17 21 22

Throughput = number of processes / (max completion time − minimum arrival time) × 100
           = 5 / (22 − 0) × 100 = 22.72%
CPU Utilization = total BT / last completion time in chart × 100
                = (8 + 4 + 5 + 3 + 2) / 22 × 100 = 22 / 22 × 100 = 100%
