CPU Scheduling Final
CPU scheduling is the basis of multiprogrammed operating systems. By switching the CPU among
processes, the operating system can make the computer more productive. This lesson introduces basic
CPU-scheduling concepts, presents several CPU-scheduling algorithms, and considers the problem of
selecting an algorithm for a particular system.
CHAPTER OBJECTIVES
1. To introduce CPU scheduling, which is the basis for multiprogrammed operating systems.
2. To describe various CPU-scheduling algorithms.
3. To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a particular system.
4. To examine the scheduling algorithms of several operating systems.
BASIC CONCEPT
In a single-processor system, only one process can run at a time. Others must wait until the CPU is
free and can be rescheduled. The objective of multiprogramming is to have some process running at all
times, to maximize CPU utilization. A process is executed until it must wait, typically for the completion of
some I/O request. In a simple computer system, the CPU then just sits idle. All this waiting time is wasted;
no useful work is accomplished. With multiprogramming, we try to use this time productively. When one
process must wait, the operating system takes the CPU away from that process and gives the CPU to
another process. This pattern continues: every time one process must wait, another process can take
over use of the CPU.
CPU BURST
It is the amount of time a process uses the CPU until it starts waiting for some input or is interrupted by
some other process.
I/O BURST OR INPUT OUTPUT BURST
It is the amount of time a process waits for input-output before needing the CPU again.
BURST CYCLE
The execution of a process consists of a cycle of CPU bursts and I/O bursts. Usually, it starts with a CPU
burst, which is followed by an I/O burst, another CPU burst, another I/O burst, and so on. This cycle
continues until the process is terminated.
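The burst cycle can be modeled as a simple alternating sequence. The Python sketch below (the burst lengths are made-up illustrative values, not from this lesson) represents a process as a list of (kind, duration) pairs and totals the CPU and I/O time:

```python
# A process's execution modeled as alternating CPU and I/O bursts.
# Durations are illustrative values in milliseconds.
bursts = [("cpu", 5), ("io", 12), ("cpu", 3), ("io", 8), ("cpu", 2)]

total_cpu = sum(t for kind, t in bursts if kind == "cpu")
total_io = sum(t for kind, t in bursts if kind == "io")
print(total_cpu, total_io)  # 10 20
```

While this process spends 20 of its 30 milliseconds waiting on I/O, multiprogramming lets the CPU run other processes during those gaps.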
CPU SCHEDULER
Whenever the CPU becomes idle, the operating system must select one of the processes in the
ready queue to be executed. The selection process is carried out by the short-term scheduler, or CPU
scheduler. The scheduler selects a process from the processes in memory that are ready to execute and
allocates the CPU to that process.
TYPES OF CPU SCHEDULING
CPU-scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example, as the result of an
I/O request or an invocation of wait() for the termination of a child process)
2. When a process switches from the running state to the ready state (for example, when an interrupt
occurs)
3. When a process switches from the waiting state to the ready state (for example, at completion of I/O)
4. When a process terminates
PREEMPTIVE SCHEDULING - Preemptive scheduling is used when a process switches from the running
state to the ready state or from the waiting state to the ready state. The resources (mainly CPU cycles)
are allocated to the process for a limited amount of time and then taken away, and the process is again
placed back in the ready queue if that process still has CPU burst time remaining. That process stays in
the ready queue until it gets its next chance to execute.
ADVANTAGES
1. Because a process cannot monopolize the processor, it is a more reliable method.
2. A long-running task cannot block other tasks indefinitely, since it can be interrupted at any time.
3. The average response time is improved.
4. Utilizing this method in a multiprogramming environment is more advantageous.
5. The operating system makes sure that every process gets a fair share of CPU time.
DISADVANTAGES
1. The frequent context switching consumes CPU time and adds scheduling overhead.
2. If a preempted process was updating shared data, the data can be left inconsistent unless access is synchronized.
3. A low-priority process may wait a long time if higher-priority processes keep arriving.
SCHEDULING CRITERIA
Different CPU-scheduling algorithms have different properties, and the choice of a particular algorithm
may favor one class of processes over another. In choosing which algorithm to use in a particular situation,
we must consider the properties of the various algorithms.
Many criteria have been suggested for comparing CPU-scheduling algorithms. Which characteristics are
used for comparison can make a substantial difference in which algorithm is judged to be best. The criteria
include the following:
▪ CPU UTILIZATION - We want to keep the CPU as busy as possible. Conceptually, CPU utilization can
range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded
system) to 90 percent (for a heavily loaded system).
▪ THROUGHPUT – If the CPU is busy executing processes, then work is being done. One measure of
work is the number of processes that are completed per time unit, called throughput. For long
processes, this rate may be one process per hour; for short transactions, it may be ten processes per
second.
▪ TURNAROUND TIME - From the point of view of a particular process, the important criterion is how
long it takes to execute that process. The interval from the time of submission of a process to the time
of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get
into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
▪ WAITING TIME – The CPU-scheduling algorithm does not affect the amount of time during which a
process executes or does I/O. It affects only the amount of time that a process spends waiting in the
ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.
▪ RESPONSE TIME - In an interactive system, turnaround time may not be the best criterion. Often, a
process can produce some output fairly early and can continue computing new results while previous
results are being output to the user. Thus, another measure is the time from the submission of a
request until the first response is produced. This measure, called response time, is the time it takes to
start responding, not the time it takes to output the response. The turnaround time is generally
limited by the speed of the output device.
SCHEDULING ALGORITHMS
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be
allocated the CPU.
1. FIRST-COME, FIRST-SERVED SCHEDULING – First-come, first-served (FCFS) is the simplest and most
straightforward CPU-scheduling strategy. With this technique, the process that requests the CPU first is
served first; that is, the CPU executes the process with the earliest arrival time. It is a non-preemptive
scheduling technique, since processes run to completion in arrival order regardless of their priority. This
scheduling algorithm is implemented with a FIFO (First In, First Out) queue. When a process becomes
ready to execute, its Process Control Block (PCB) is linked to the tail of this FIFO queue. When the CPU
becomes free, it is assigned to the process at the head of the queue.
Advantages
1. Involves no complex logic and just picks processes from the ready queue one by one.
2. Easy to implement and understand.
3. Every process will eventually get a chance to run so no starvation occurs.
Disadvantages
1. Waiting time for processes with less execution time is often very long.
2. It favors CPU-bound processes over I/O-bound processes.
3. Leads to convoy effect.
4. Causes lower device and CPU utilization.
5. Poor in performance as the average wait time is high.
Consider three processes with CPU burst times of 24, 3, and 3 milliseconds for P1, P2, and P3,
respectively. If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the result
shown in the following Gantt chart, which is a bar chart that illustrates a particular schedule, including
the start and finish times of each of the participating processes:

| P1 | P2 | P3 |
0   24   27   30

The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27 milliseconds for
process P3. Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds. If the processes arrive in
the order P2, P3, P1, however, the results will be as shown in the following Gantt chart:

| P2 | P3 | P1 |
0    3    6   30

The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds. This reduction is substantial. Thus, the
average waiting time under an FCFS policy is generally not minimal and may vary substantially if the
processes’ CPU burst times vary greatly.
Note also that the FCFS scheduling algorithm is nonpreemptive. Once the CPU has been allocated to a
process, that process keeps the CPU until it releases the CPU, either by terminating or by requesting I/O.
The FCFS algorithm is thus particularly troublesome for time-sharing systems, where it is important that
each user get a share of the CPU at regular intervals. It would be disastrous to allow one process to keep
the CPU for an extended period.
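The FCFS mechanics above reduce to serving bursts in arrival order and accumulating each process's wait. This is an illustrative Python sketch (the function name is my own, not from the lesson), checked against the 24/3/3-millisecond example:

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process when served in arrival (FIFO) order."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)  # a process waits until all earlier arrivals finish
        clock += burst
    return waits

# Burst times from the example: P1=24, P2=3, P3=3 (all arriving at time 0).
print(fcfs_waiting_times([24, 3, 3]))  # [0, 24, 27] -> average 17 ms
print(fcfs_waiting_times([3, 3, 24]))  # [0, 3, 6]   -> average 3 ms
```

Reordering the same bursts changes the average wait from 17 ms to 3 ms, which is exactly the convoy effect discussed above.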
DETAILED EXAMPLE
2. SHORTEST-JOB-FIRST SCHEDULING – This algorithm associates with each process the length of its next
CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst;
if the next bursts of two processes are the same, FCFS scheduling is used to break the tie.
Advantages
1. Results in increased Throughput by executing shorter jobs first, which mostly have a shorter
turnaround time.
2. Gives the minimum average waiting time for a given set of processes.
3. Best approach to minimize waiting time for other processes awaiting execution.
4. Useful for batch-type processing where CPU time is known in advance and waiting for jobs to
complete is not critical.
Disadvantages
1. May lead to starvation as if shorter processes keep on coming, then longer processes will never
get a chance to run.
2. Time taken by a process must be known to the CPU beforehand, which is not always possible.
Consider four processes with CPU burst times of 6, 8, 7, and 3 milliseconds for P1, P2, P3, and P4,
respectively. Using SJF scheduling, we would schedule these processes according to the following Gantt
chart:

| P4 | P1 | P3 | P2 |
0    3    9   16   24

The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9 milliseconds for
process P3, and 0 milliseconds for process P4. Thus, the average waiting time is (3 + 16 + 9 + 0)/4 = 7
milliseconds. By comparison, if we were using the FCFS scheduling scheme, the average waiting time
would be 10.25 milliseconds.
The SJF scheduling algorithm is provably optimal, in that it gives the minimum average waiting time
for a given set of processes. Moving a short process before a long one decreases the waiting time of
the short process more than it increases the waiting time of the long process. Consequently, the
average waiting time decreases.
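Nonpreemptive SJF can be sketched by sorting the ready processes by burst length and then serving them FCFS-style. A minimal Python sketch (function name is my own), assuming all processes arrive at time 0, checked against the 6/8/7/3-millisecond example:

```python
def sjf_waiting_times(bursts):
    """Nonpreemptive SJF: run the shortest job first (all arrive at time 0).
    Returns waiting times in the original process order."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])  # shortest first
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock  # process i waits for every shorter job ahead of it
        clock += bursts[i]
    return waits

# Burst times from the example: P1=6, P2=8, P3=7, P4=3.
w = sjf_waiting_times([6, 8, 7, 3])
print(w, sum(w) / len(w))  # [3, 16, 9, 0] 7.0
```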
Consider the following four processes, with arrival times and the length of the CPU burst given in
milliseconds:

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

If the processes arrive at the ready queue at the times shown and need the indicated burst times, then
the resulting preemptive SJF schedule is as depicted in the following Gantt chart:

| P1 | P2 | P4 | P1 | P3 |
0    1    5   10   17   26

Process P1 starts at time 0, since it is the only process in the queue. Process P2 arrives at time 1. The
remaining time for process P1 (7 milliseconds) is larger than the time required by process P2 (4
milliseconds), so process P1 is preempted, and process P2 is scheduled. The average waiting time for this
example is [(10−1) + (1−1) + (17−2) + (5−3)]/4 = 26/4 = 6.5 milliseconds. Nonpreemptive SJF scheduling
would result in an average waiting time of 7.75 milliseconds.
DETAILED EXAMPLE
Gantt Chart

| P1 | P2 | P5 | P3 | P4 |
0    3    8   10   17   27
Advantages
1. The SRTF algorithm processes jobs faster than the SJN algorithm, provided its overhead charges are
not counted.
2. Short processes are handled very quickly, which keeps the average waiting time low.
Disadvantages
1. The context switch occurs far more often in SRTF than in SJN and consumes valuable CPU time for
processing, which adds to the total processing time and diminishes the advantage of fast processing.
2. The remaining burst time of each process must be known in advance, which is not always possible.
3. Long processes may starve if shorter processes keep arriving.
DETAILED EXAMPLE
Example 1
Gantt Chart

| P1 | P2 | P4 | P5 | P1 | P3 |
0    3    7   10   12   17   22
Example 2
Gantt Chart

| P4 | P4 | P4 | P1 | P1 | P3 | P3 | P5 | P2 |
0    3    5    7    8   11   12   15   21   30
3. ROUND-ROBIN SCHEDULING – Round-robin scheduling is the oldest and simplest scheduling algorithm
and derives its name from the round-robin principle, in which each person takes an equal share of
something in turn. This algorithm is mostly used for multitasking in time-sharing systems and in operating
systems with multiple clients, so that resources are used efficiently.
Advantages
1. All processes are given the same priority; hence all processes get an equal share of the CPU.
2. Since it is cyclic in nature, no process is left behind, and starvation doesn't exist.
Disadvantages
1. The performance of Throughput depends on the length of the time quantum. Setting it too short
increases the overhead and lowers the CPU efficiency, but if we set it too long, it gives a poor
response to short processes and tends to exhibit the same behavior as FCFS.
2. The average waiting time under the Round Robin policy is often long.
3. Context switching occurs much more frequently and adds to the overhead.
Consider the following set of processes that arrive at time 0, with the length of the CPU burst given
in milliseconds:

Process   Burst Time
P1        24
P2        3
P3        3
If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds. Since it
requires another 20 milliseconds, it is preempted after the first time quantum, and the CPU is given
to the next process in the queue, process P2. Process P2 does not need 4 milliseconds, so it quits
before its time quantum expires. The CPU is then given to the next process, process P3. Once each
process has received 1 time quantum, the CPU is returned to process P1 for an additional time
quantum. The resulting RR schedule is as follows:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7   10   14   18   22   26   30
Let’s calculate the average waiting time for this schedule. P1 waits for 6 milliseconds (10 − 4), P2 waits
for 4 milliseconds, and P3 waits for 7 milliseconds. Thus, the average waiting time is 17/3 = 5.66
milliseconds.
In the RR scheduling algorithm, no process is allocated the CPU for more than 1 time quantum in a
row (unless it is the only runnable process). If a process’s CPU burst exceeds 1 time quantum, that
process is preempted and is put back in the ready queue. The RR scheduling algorithm is thus
preemptive.
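Round robin can be sketched with a FIFO queue: dequeue a process, run it for at most one quantum, and re-enqueue it if work remains. A minimal Python sketch (function name is my own), assuming all processes arrive at time 0, checked against the quantum-4 example above:

```python
from collections import deque

def rr_waiting_times(bursts, quantum):
    """Round robin with the given time quantum (all processes arrive at time 0).
    Returns per-process waiting times."""
    n = len(bursts)
    remaining = list(bursts)
    finish = [0] * n
    queue = deque(range(n))
    clock = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])  # at most one time quantum
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)   # preempted: back to the tail of the ready queue
        else:
            finish[i] = clock
    return [finish[i] - bursts[i] for i in range(n)]

# Burst times from the example: P1=24, P2=3, P3=3, with a 4 ms quantum.
w = rr_waiting_times([24, 3, 3], 4)
print(w, sum(w) / len(w))  # [6, 4, 7] 5.666...
```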
DETAILED EXAMPLE
Example 1
Gantt Chart

| P1 | P2 | P3 | P4 | P1 | P3 | P1 |
0    3    6    9   11   14   17   28
Throughput = (total number of processes) / (max completion time − minimum arrival time) × 100
           = 4 / (28 − 0) × 100 = 14.28%

CPU Utilization = (total burst time) / (last process completion time in the chart) × 100
                = (17 + 3 + 6 + 2) / 28 × 100 = 28/28 × 100 = 100%
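The throughput and CPU-utilization arithmetic can be checked directly. A small Python sketch, assuming the formulas as given in the lesson (process count over the schedule span, and total burst time over the last completion time, each scaled by 100; note the lesson truncates 14.285…% to 14.28%):

```python
# Metrics for the example schedule: bursts 17, 3, 6, 2 ms; last completion at t = 28.
bursts = [17, 3, 6, 2]
span = 28 - 0                          # max completion time - minimum arrival time

throughput = len(bursts) / span * 100  # 4/28 * 100, per the lesson's formula
cpu_util = sum(bursts) / 28 * 100      # (17+3+6+2)/28 * 100

print(round(throughput, 2), cpu_util)  # 14.29 100.0
```

CPU utilization is 100% here because the CPU is never idle between time 0 and the last completion.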
Example 2