Scheduling Part1

Chapter 4 discusses CPU scheduling in operating systems, emphasizing its role in maximizing CPU utilization and ensuring fair allocation of CPU time among processes. It outlines the concepts of CPU bursts, I/O waits, and the functions of the CPU scheduler and dispatcher, as well as the differences between preemptive and non-preemptive scheduling. The chapter also covers various scheduling algorithms, their criteria for optimization, and examples of their implementation.

Chapter 4: CPU Scheduling

Presented by: Dr. Ikram Saadaoui

Basic Concepts

● CPU scheduling is the basis of multi-programmed operating systems. By switching the CPU among processes, the operating system can make the computer more productive.

● In a single-processor system, only one process can run at a time. Others must wait until the CPU is free and can be rescheduled. The objective of multitasking is to have some process running at all times, to maximize CPU utilization (minimize CPU idle times).



Basic Concepts

▪ Process execution consists of a cycle of CPU execution (CPU burst) and I/O wait (I/O burst).

▪ A CPU burst is the period a process spends actively using the CPU before needing to wait for I/O or some other event.

▪ Process execution begins with a CPU burst, which is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on.

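To make this alternation concrete, here is a toy sketch (an illustration assumed for this text, not taken from the slides; the helper name toy_process is hypothetical): the loop alternates a pure-computation phase, standing in for a CPU burst, with a sleep that stands in for an I/O wait.

```python
import time

def toy_process(cycles):
    """Alternate CPU bursts (computation) with I/O bursts (simulated here by sleep)."""
    for _ in range(cycles):
        _ = sum(i * i for i in range(200_000))  # CPU burst: the process actively uses the CPU
        time.sleep(0.05)                        # I/O burst: the process waits and the CPU is free

toy_process(3)
```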


Histogram of CPU-burst Times

[Figure: distribution of CPU-burst durations. A large number of processes have short bursts; a small number of processes have longer bursts.]


CPU scheduler

▪ Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed.
▪ The selection is carried out by the CPU scheduler.

Definition: Scheduling is the process by which the operating system decides which tasks (threads/processes) should be executed by the processor at any given time. The main goals of scheduling are:
▪ To maximize CPU utilization,
▪ To ensure fair allocation of CPU time,
▪ To provide a balanced and responsive system.



The dispatcher

▪ The dispatcher is the module that gives control of the CPU to the process selected by the CPU scheduler.

▪ This function involves the following:
  • Switching context
  • Switching to user mode
  • Jumping to the proper location in the user program to restart that program (program counter, PC)

▪ Dispatch latency: the time taken by the dispatcher to stop one process and start another.

▪ The dispatcher should be as fast as possible, since it is invoked during every process switch.



Scheduling

CPU-scheduling decisions may take place under the following four circumstances:

1. When a process switches from running to waiting state
   ▪ for example, as the result of an I/O request or an invocation of wait() for the termination of a child process
2. When a process switches from running to ready state
   ▪ for example, when an interrupt occurs (timeout), or a higher-priority process needs the CPU
3. When a process switches from waiting to ready state
   ▪ for example, at the completion of I/O
4. When a process terminates.



Representation of Process Scheduling



Scheduling
▪ For situations 1 (Running to Waiting) and 4 (Termination):
  ▪ The scheduler has no choice but to select a new process from the ready queue, because the current process cannot continue running.

▪ For situations 2 (Running to Ready) and 3 (Waiting to Ready):
  ▪ The scheduler has a choice. It can decide whether to preempt the currently running process or allow it to continue, based on the scheduling policy in place.

When scheduling takes place only under circumstances 1 and 4, we say that the scheduling is non-preemptive; otherwise, it is preemptive.



Preemptive and Nonpreemptive Scheduling

▪ Under non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.
  • Virtually all modern operating systems, including Windows, macOS, Linux, and UNIX, use preemptive scheduling algorithms.

▪ Preemptive scheduling can result in race conditions when data are shared among several processes.
  • Consider the case of two processes that share data. While one process is updating the data, it is preempted so that the second process can run. The second process then tries to read the data, which are in an inconsistent state.

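To make the shared-data hazard concrete, here is a minimal sketch (assumed for illustration, not from the chapter) using Python threads in place of processes: two threads update a shared counter without synchronization, so being preempted between the read and the write can lose updates.

```python
import threading

counter = 0  # shared data

def update(n):
    """Increment the shared counter n times with no locking."""
    global counter
    for _ in range(n):
        value = counter   # read the shared data
        value += 1        # compute the new value
        counter = value   # write back; a preemption between read and write loses updates

threads = [threading.Thread(target=update, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With atomic updates this would print 200000; an unsynchronized run may print less.
print(counter)
```

Protecting the read-modify-write with a mutual-exclusion lock (for example threading.Lock) removes the race.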


Scheduling Criteria
▪ Choosing the right CPU scheduling algorithm depends on the specific needs and priorities of the system, and on several criteria such as:

▪ CPU utilization – keep the CPU as busy as possible

▪ Throughput – number of processes that complete their execution per time unit

▪ Turnaround time – amount of time to execute a particular process

▪ Waiting time – amount of time a process has been waiting in the ready queue

▪ Response time – amount of time it takes from when a request was submitted until the first response is produced



Scheduling Criteria
Many criteria have been suggested for comparing CPU-scheduling algorithms.

▪ CPU utilization: keep the CPU as busy as possible.

▪ Throughput: The number of processes that are completed per time unit.

▪ Turnaround time: The interval from the time of submission of a process to the
time of completion is the turnaround time.
▪ Turnaround time is the sum of the periods spent waiting to get into
memory, waiting in the ready queue, executing on the CPU, and doing I/O.

▪ Waiting time: the amount of time a process spends waiting in the ready
queue.
▪ Waiting time is the sum of the periods spent waiting in the ready queue.

▪ Response time: the amount of time it takes from when a request was
submitted until the first response is produced.



Scheduling Algorithm Optimization Criteria

▪ Ideally, we aim to maximize:

• CPU utilization (keeping the CPU busy) and

• throughput (number of processes completed).

▪ Simultaneously, we want to minimize:

• turnaround time (time to complete a process),

• waiting time (time spent in the ready queue), and

• response time (time to first response).



Scheduling Algorithm Optimization Criteria

Frequently used scheduling criteria

● Turnaround time = termination time - submission time

● Average turnaround time = ∑ turnaround time / number of processes

● Waiting time = turnaround time - execution time (te) - I/O time

● Average waiting time = ∑ waiting time / number of processes

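As a small worked illustration of these formulas (the helper names and the numbers are hypothetical, not from the slides):

```python
def turnaround_time(termination, submission):
    """Turnaround time = termination time - submission time."""
    return termination - submission

def waiting_time(turnaround, execution, io_time):
    """Waiting time = turnaround time - execution time - I/O time."""
    return turnaround - execution - io_time

# A process submitted at t = 0 that finishes at t = 30 after 24 units of
# CPU execution and 2 units of I/O:
tat = turnaround_time(30, 0)    # 30
wt = waiting_time(tat, 24, 2)   # 4
print(tat, wt)
```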


Non-Preemptive Scheduling Algorithms

I. FCFS / FIFO (First Come, First Served)
   Priority criterion: submission time / arrival time.
   Principle: the process that enters the ready queue first is elected first.

II. SJF (Shortest Job First)
   Priority criterion: burst time (an estimated value of the computing time needed).
   Principle: the process that requires the least burst time is elected.

III. Priority without preemption
   Priority criterion: static priority (Pr) of the process according to its class (system, kernel, user).
   Principle: the process with the highest priority value is elected (a sketch follows after this list).

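The next slides give worked FCFS and SJF examples; for the third policy, here is a minimal sketch of non-preemptive priority scheduling (illustrative and assumed, not from the chapter: all processes arrive at time 0, a larger priority value means higher priority as stated above, and the sample data are made up).

```python
def priority_schedule(processes):
    """Non-preemptive priority scheduling with all arrivals at t = 0.

    processes: list of (name, burst_time, priority); a larger priority
    value means higher priority.
    Returns the execution order and the average waiting time.
    """
    order = sorted(processes, key=lambda p: p[2], reverse=True)
    elapsed, waits = 0, {}
    for name, burst, _ in order:
        waits[name] = elapsed        # waiting time = start time when all arrive at 0
        elapsed += burst
    return [p[0] for p in order], sum(waits.values()) / len(waits)

# Hypothetical processes: (name, burst, priority)
print(priority_schedule([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5)]))
# (['P4', 'P3', 'P1', 'P2'], 4.25)
```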


First- Come, First-Served (FCFS) Scheduling
Process    Burst Time
P1         24
P2         3
P3         3

▪ Suppose that the processes arrive in the order: P1, P2, P3
  The Gantt chart for the schedule is:

  | P1: 0–24 | P2: 24–27 | P3: 27–30 |

▪ Waiting time for P1 = 0; P2 = 24; P3 = 27


▪ Average waiting time: (0 + 24 + 27)/3 = 17

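The same numbers can be reproduced with a short sketch (illustrative code, not from the chapter; the helper name fcfs_waiting_times is hypothetical), assuming all three processes arrive at time 0 and are served in the listed order:

```python
def fcfs_waiting_times(bursts):
    """FCFS waiting times when all processes arrive at t = 0.

    bursts: list of (name, burst_time) in arrival order.
    Each process waits for the total burst time of the processes ahead of it.
    """
    waits, elapsed = {}, 0
    for name, burst in bursts:
        waits[name] = elapsed
        elapsed += burst
    return waits

waits = fcfs_waiting_times([("P1", 24), ("P2", 3), ("P3", 3)])
print(waits)                              # {'P1': 0, 'P2': 24, 'P3': 27}
print(sum(waits.values()) / len(waits))   # 17.0
```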


FCFS Scheduling Limitations
Suppose that the processes arrive in the order: P2, P3, P1
▪ The Gantt chart for the schedule is:

  | P2: 0–3 | P3: 3–6 | P1: 6–30 |

▪ Waiting time for P1 = 6; P2 = 0; P3 = 3
▪ Average waiting time: (6 + 0 + 3)/3 = 3
▪ Much better than the previous case (see the sketch after this list)
▪ Convoy effect – short processes stuck behind a long process
  • Consider one CPU-bound process and many I/O-bound processes
▪ Note:
  • A CPU-bound process is a process that spends most of its time actively using the CPU to perform computations.
  • An I/O-bound process is a process that spends most of its time waiting for input/output (I/O) operations to complete.
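Using the same hypothetical fcfs_waiting_times sketch from the previous example, reordering the arrivals reproduces this result and shows how a long process at the head of the queue creates the convoy effect:

```python
print(fcfs_waiting_times([("P2", 3), ("P3", 3), ("P1", 24)]))   # {'P2': 0, 'P3': 3, 'P1': 6}
# Average waiting time drops from 17 to (0 + 3 + 6) / 3 = 3.
```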
Shortest-Job-First (SJF) Scheduling

▪ Associate with each process the length of its next CPU burst

• Use these lengths to schedule the process with the shortest time

▪ How do we determine the length of the next CPU burst?

• Could ask the user

• Estimate



Example of SJF

Process    Burst Time
P1         6
P2         8
P3         7
P4         3

▪ SJF scheduling chart

  | P4: 0–3 | P1: 3–9 | P3: 9–16 | P2: 16–24 |

▪ Average waiting time = (3 + 16 + 9 + 0) / 4 = 7

▪ SJF is optimal – it gives the minimum average waiting time for a given set of processes.
▪ The SJF algorithm cannot be implemented at the level of CPU scheduling, as there is no way to know the length of the next CPU burst.
▪ One approach to this problem is to try to approximate SJF scheduling.

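A minimal sketch (illustrative, not from the chapter; the helper name sjf_waiting_times is hypothetical) of non-preemptive SJF when all processes arrive at time 0: sort by burst length, then accumulate waiting times as in FCFS.

```python
def sjf_waiting_times(processes):
    """Non-preemptive SJF when all processes arrive at t = 0.

    processes: list of (name, burst_time); the shortest burst runs first.
    """
    waits, elapsed = {}, 0
    for name, burst in sorted(processes, key=lambda p: p[1]):
        waits[name] = elapsed
        elapsed += burst
    return waits

waits = sjf_waiting_times([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)])
print(waits)                              # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(waits.values()) / len(waits))   # 7.0
```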


Prediction of the Length of the Next CPU Burst

▪ Since the next CPU burst length is unknown, one practical approach is to predict it based on past behavior.
▪ A common method is exponential averaging, which estimates the next CPU burst as a weighted average of previous bursts:

  τ_{n+1} = α · t_n + (1 − α) · τ_n

  where t_n is the measured length of the n-th CPU burst, τ_n is the previous prediction, and 0 ≤ α ≤ 1 controls how heavily the most recent burst is weighted.

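A minimal sketch of this predictor (illustrative; the initial estimate τ₀ = 10, the weight α = 0.5, and the sample burst lengths are assumptions chosen for the example):

```python
def exponential_average(bursts, alpha=0.5, tau0=10.0):
    """Predict CPU bursts with exponential averaging.

    tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
    bursts: observed CPU-burst lengths t_0, t_1, ...
    alpha:  weight of the most recent burst (0 <= alpha <= 1)
    tau0:   initial guess used for the first prediction
    Returns the list of successive predictions tau_0, tau_1, ...
    """
    tau = tau0
    predictions = [tau]
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau
        predictions.append(tau)
    return predictions

# With alpha = 0.5 each prediction is the average of the last burst and the old estimate.
print(exponential_average([6, 4, 6, 4, 13, 13, 13]))
# [10.0, 8.0, 6.0, 6.0, 5.0, 9.0, 11.0, 12.0]
```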




Shortest Remaining Time First Scheduling

▪ Preemptive version of Shortest Job First.

▪ Whenever a new process arrives in the ready queue, the decision on which process to schedule next is redone using the SJF algorithm.
▪ Is SRT more “optimal” than SJF in terms of the minimum average waiting time for a given set of processes?

▪ SRT is always at least as good as SJF in terms of average waiting time and often performs better.
▪ However, SRT has higher overhead due to frequent context switching, which can impact system efficiency.



Example of Shortest-remaining-time-first

▪ Now we add the concepts of varying arrival times and preemption to the analysis.

Process    Arrival Time    Burst Time
P1         0               8
P2         1               4
P3         2               9
P4         3               5

▪ SRT (Preemptive SJF) Gantt chart

  | P1: 0–1 | P2: 1–5 | P4: 5–10 | P1: 10–17 | P3: 17–26 |

▪ Turnaround Time = Completion Time − Arrival Time
▪ Waiting Time = Turnaround Time − Burst Time
▪ Average waiting time = [(9) + (0) + (15) + (2)] / 4 = 26/4 = 6.5

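A sketch of an SRT simulation (illustrative, not from the chapter; the helper name srt_waiting_times is hypothetical) that advances one time unit at a time, always running the arrived process with the least remaining burst, and reproduces the waiting times above:

```python
def srt_waiting_times(processes):
    """Shortest-remaining-time-first (preemptive SJF) simulation.

    processes: list of (name, arrival_time, burst_time).
    Advances one time unit at a time and always runs the arrived,
    unfinished process with the smallest remaining burst.
    Returns {name: waiting_time} with waiting = turnaround - burst.
    """
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: at for name, at, _ in processes}
    burst = dict(remaining)
    completion, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:
            clock += 1          # CPU idle: no process has arrived yet
            continue
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1
        clock += 1
        if remaining[current] == 0:
            completion[current] = clock
            del remaining[current]
    return {n: (completion[n] - arrival[n]) - burst[n] for n in completion}

waits = srt_waiting_times([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
print(waits)                              # {'P2': 0, 'P4': 2, 'P1': 9, 'P3': 15}
print(sum(waits.values()) / len(waits))   # 6.5
```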
