
Operating Systems

“Uniprocessor scheduling”

Mathieu Delalandre
University of Tours, Tours city, France
mathieu.delalandre@univ-tours.fr

Lecture available at http://mathieu.delalandre.free.fr/teachings/operating1.html

1
Operating Systems
“Uniprocessor scheduling”
1. About short-term scheduling
2. Context switch, quantum and ready queue
3. Process and diagram models
4. Scheduling algorithms
4.1. FCFS scheduling
4.2. Priority based scheduling
4.3. Optimal scheduling
4.4. Time-sharing based scheduling
4.5. Priority/Time-sharing based scheduling
5. Modeling multiprogramming
6. Evaluation of algorithms

2
About short-term scheduling (1)

(Short-term) scheduler is a system process running an algorithm that decides which of the ready processes
is to be executed next (i.e. allocated to the CPU). Different performance criteria have to be considered,
some related to the user, others to the system:

Performance criteria related to the user
- Response time: total time between the submission of a request and its completion
- Predictability: the ability to predict the execution time of processes and to avoid wide variations
in response time
- Waiting time: amount of time a process has been waiting in the ready queue

Performance criteria related to the system
- Throughput: number of processes that complete their execution per time unit
- CPU utilization: keeping the CPU as busy as possible
- Fairness: no process should suffer starvation, i.e. never be loaded onto the CPU
- Enforcing priorities: when processes are assigned priorities, the scheduling policy should
favor the high-priority processes
- Balancing resources: the scheduling policy should keep the resources of the system busy
- Etc.

3
About short-term scheduling (2)

Depending on the considered systems (mainframes, server computers, personal computers, real-time systems, embedded
systems, etc.), different scheduling problems have to be considered. The main parameters, with the standard
choice in a time-sharing system, are:

Algorithm's features:
- on-line / off-line → on-line
- preemptive / non-preemptive → both
- relative deadline / strict deadline → relative deadline
- static priority / dynamic priority → both
- optimal / not optimal → both

Process model:
- independent / dependent → both
- without resource / with resource → both
- aperiodic / periodic → aperiodic

Type of system:
- mono-core / multi-core → mono-core
- centralized / distributed → centralized

4
About short-term scheduling (3)

Depending on the considered systems (mainframes, server computers, personal computers, real-time systems, embedded
systems, etc.), different scheduling problems have to be considered:

On-line/off-line: off-line scheduling builds complete planning sequences with all the parameters of the processes.
The schedule is known before process execution and can be implemented efficiently.

Preemptive/non-preemptive: in preemptive scheduling, a running process may be preempted and the
processor allocated to a more urgent process with a higher priority.

Relative/strict deadline: a process is said to have no (or a relative) deadline if its response time affects
neither the performance of the system nor its correct behavior.

Dynamic/static priority: static algorithms base their scheduling decisions on fixed
parameters, assigned to processes before their activation. Dynamic scheduling employs parameters that may
change during the system evolution.

Optimal: an algorithm is said to be optimal if it minimizes a given cost function.

5
About short-term scheduling (4)

Depending on the considered systems (mainframes, server computers, personal computers, real-time systems, embedded
systems, etc.), different scheduling problems have to be considered:

Dependent/independent process: a process is dependent (or cooperating) if it can affect (or be affected by) the
other processes. Clearly, any process that shares data and uses IPC is a cooperating process.

Resource sharing: from a process point of view, a resource is any software structure that can be used by the
process to advance its execution.

Periodic/aperiodic process: a process is said to be periodic if, each time it is ready, it issues a periodic request.

Mono-core/multi-core: when a computer system contains a set of processors that share a common main
memory, we are talking about multiprocessor/multi-core scheduling.

Centralized/distributed: scheduling is centralized when it is implemented on a standalone architecture.
Scheduling is distributed when each site defines a local scheduling, and the cooperation between sites leads to a
global scheduling strategy.

6
About short-term scheduling (5)

The general algorithm of a short-term scheduler is

While
1. A timer interrupt causes the scheduler to run once every time slice
2. Data acquisition (i.e. list the processes in the ready queue and update their parameters)
3. Selection of the process to run, based on the scheduling criteria of the algorithm
4. If the process to run differs from the current process, order the dispatcher to switch the context
5. System execution goes on …

The real problem in scheduling is the definition of the scheduling criteria; the algorithm itself is little discussed.

7
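The loop above can be sketched in Python (a minimal hypothetical skeleton: `run_scheduler`, `select` and the quantum loop stand for steps 1-5, and the timer interrupt is modeled as a plain iteration):

```python
# Minimal sketch of the short-term scheduler loop described above.
# The scheduling policy is pluggable: `select` implements step 3.

def run_scheduler(ready_queue, select, quanta):
    """Simulate `quanta` timer interrupts over a ready queue of process ids."""
    current = None
    context_switches = 0
    for _ in range(quanta):            # 1. timer interrupt, once per time slice
        processes = list(ready_queue)  # 2. data acquisition
        if not processes:
            continue
        chosen = select(processes)     # 3. selection by the scheduling criteria
        if chosen != current:          # 4. the dispatcher switches the context
            context_switches += 1
            current = chosen
        # 5. system execution goes on until the next interrupt
    return current, context_switches

# e.g. a FCFS-like policy: always pick the head of the ready queue
last, switches = run_scheduler(["A", "B", "C"], select=lambda ps: ps[0], quanta=3)
```

The policy function is the only part that changes between the algorithms of section 4.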
Operating Systems
“Uniprocessor scheduling”
1. About short-term scheduling
2. Context switch, quantum and ready queue
3. Process and diagram models
4. Scheduling algorithms
4.1. FCFS scheduling
4.2. Priority based scheduling
4.3. Optimal scheduling
4.4. Time-sharing based scheduling
4.5. Priority/Time-sharing based scheduling
5. Modeling multiprogramming
6. Evaluation of algorithms

8
Context switch, quantum and ready queue (1)
Dispatcher is in charge of passing the control of the CPU to the process selected by the short-term scheduler.

Context switch is the operation of storing and restoring the state (context) of a CPU so that execution can be resumed from the
same point at a later time. It is based on two distinct sub-operations, state save and state restore. Switching from one process to
another requires a certain amount of time (saving and loading the registers, the memory maps, etc.).

Quantum (or time slice) is the period of time for which a process is allowed to run in a preemptive multitasking system. The
scheduler is run once every time slice to choose the next process to run.

e.g. Switching between two processes P0 and P1:

1. P0 running, P1 waiting
2. interrupt or system call
3. save state into PCB0, reload state from PCB1 (context switch: P0 and P1 both waiting)
4. P0 waiting, P1 running for a quantum (time slice)
5. interrupt or system call
6. save state into PCB1, reload state from PCB0 (context switch: P0 and P1 both waiting)
7. P0 running, P1 waiting
9
Context switch, quantum and ready queue (2)
e.g. We consider the case of:
i. Three processes A, B, C and a dispatcher, with their traces (i.e. instruction listings) given in the next table.
ii. Processes are scheduled in a predefined order (A, B, then C).
iii. The OS here only allows a process to continue for a maximum of six instruction cycles (the quantum), after
which it is interrupted.

Traces (instruction addresses):
Process A: 5000-5011, Process B: 8000-8003, Process C: 12000-12011, Dispatcher: 100-105

Execution trace (cycles 1-28):
- cycles 1-6: 5000-5005 (Process A starts, interrupted after six cycles)
- cycles 7-12: 100-105 (Dispatcher)
- cycles 13-16: 8000-8003 (Process B starts and ends)
- cycles 17-22: 100-105 (Dispatcher)
- cycles 23-28: 12000-12005 (Process C starts, interrupted)
10
Context switch, quantum and ready queue (3)
e.g. (continued)

Execution trace (cycles 29-52):
- cycles 29-34: 100-105 (Dispatcher)
- cycles 35-40: 5006-5011 (Process A continues and ends)
- cycles 41-46: 100-105 (Dispatcher)
- cycles 47-52: 12006-12011 (Process C continues and ends)
11
Context switch, quantum and ready queue (4)
e.g. (continued) Summing up the trace quantum by quantum:

quantum             i     i+1    i+2    i+3    i+4
instruction cycles  6     4      6      6      6
scheduled process   A     B      C      A      C
ready queue state   B,C   C,A    A      C      -

- 5 quanta / 4 context switches (n-1 switches for n quanta)
- 28 process instructions (6+4+6+6+6)
- 6×4 = 24 dispatcher instructions
- a maximum of two processes in the ready queue

The length of the quantum can be critical to balance the
system performance vs. process responsiveness:
- If the quantum is too short, the scheduler will consume too much processing time.
- If the quantum is too long, processes will take longer to respond to inputs.
12
Context switch, quantum and ready queue (5)

The ready queue is a list generally composed of PCB pointers; it is stored
as a linked list in the main memory, managing pointers from the first to the last PCB.

First, last and next are PCB pointers in the list. If we delete a PCB (i), the next
pointer of the previous PCB (i-1) jumps to the following one (i+1), i.e. it is not
necessary to fill the empty space or to move (copy) the PCBs.
13
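The delete operation above can be sketched as follows (a hypothetical minimal structure; a real PCB also holds registers, memory maps, etc.):

```python
# Sketch of the ready queue as a singly linked list of PCBs.

class PCB:
    def __init__(self, pid):
        self.pid = pid
        self.next = None          # pointer to the next PCB in the queue

class ReadyQueue:
    def __init__(self):
        self.first = None         # head of the queue
        self.last = None

    def append(self, pcb):
        if self.first is None:
            self.first = self.last = pcb
        else:
            self.last.next = pcb
            self.last = pcb

    def delete(self, pid):
        """Unlink PCB i: the previous PCB's pointer jumps to PCB i+1,
        no data is moved or copied."""
        prev, cur = None, self.first
        while cur is not None and cur.pid != pid:
            prev, cur = cur, cur.next
        if cur is None:
            return False
        if prev is None:
            self.first = cur.next
        else:
            prev.next = cur.next
        if cur is self.last:
            self.last = prev
        return True

    def pids(self):
        out, node = [], self.first
        while node is not None:
            out.append(node.pid)
            node = node.next
        return out

q = ReadyQueue()
for pid in (1, 2, 3, 4):
    q.append(PCB(pid))
q.delete(3)   # PCB2.next now jumps directly to PCB4
```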
Operating Systems
“Uniprocessor scheduling”
1. About short-term scheduling
2. Context switch, quantum and ready queue
3. Process and diagram models
4. Scheduling algorithms
4.1. FCFS scheduling
4.2. Priority based scheduling
4.3. Optimal scheduling
4.4. Time-sharing based scheduling
4.5. Priority/Time-sharing based scheduling
5. Modeling multiprogramming
6. Evaluation of algorithms

14
Process and diagram models (1)

Process model and context parameters

Process parameters:
- PID: process number
- rank: rank in the ready queue
- w0: wakeup time
- C: capacity
- P: priority
- s: start time (first run)
- e: end time (termination)
- RT = e - w0: response time
- WT = RT - C: waiting time

Context parameters:
- C(t): residual capacity at t, with C(w0) = C and C(e) = 0
- T(t) = C - C(t): CPU time consumed at t, with T(w0) = 0 and T(e) = C
- E(t) = t - w0: CPU time entitled at t, with E(w0) = 0 and E(e) = RT
- WT(t) = E(t) - T(t): waiting time at t, with WT(w0) = 0 and WT(e) = WT
15
Process and diagram models (2)

Process model and context parameters: e.g. a process wakes up at w0 = 1, starts immediately
(s = w0 = 1), has capacity C = 6 and runs during the intervals [1,4], [6,7] and [10,12],
terminating at e = 12. We obtain:

- w0 = 1 (= s), C = 6, e = 12, P = Na
- RT = e - w0 = 12 - 1 = 11
- WT = RT - C = 11 - 6 = 5
16
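The parameters of this example can be checked numerically (a small sketch; the run intervals are those of the slide):

```python
# Compute RT and WT of the example process from its run intervals.
runs = [(1, 4), (6, 7), (10, 12)]   # intervals during which the process holds the CPU

w0 = 1                              # wakeup time
s = runs[0][0]                      # start time (first run)
e = runs[-1][1]                     # end time (termination)
C = sum(b - a for a, b in runs)     # capacity = total CPU time consumed
RT = e - w0                         # response time
WT = RT - C                         # waiting time
```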
Process and diagram models (3)
CPU burst time is an assumption of how long a process requires the CPU between I/O waits,
i.e. the amount of time that a process uses the CPU without interruption.

There is a direct relationship between the durations of the bursts tn to come and the
residual capacity C(t): any future burst is a fraction of the residual capacity.

e.g. For the previous example process (runs in [1,4], [6,7] and [10,12], with interrupts
occurring within the same burst):
- burst t0: position [1,4], duration 4-1 = 3
- burst t1: position [4,12], duration (7-6)+(12-10) = 3
- C(t=1) = t0 + t1 = 3+3 = 6; C(t=4) = t1 = 3

Between two bursts, a process leaves the CPU for an I/O wait (blocked queue) and returns to
the ready queue at I/O completion; the short-term scheduling then allocates the CPU for the
next burst, until the process exits.
17
Process and diagram models (4)

Process behavior: some processes spend most of their time computing (time-bound, or CPU-bound),
while others spend most of their time waiting for I/O (I/O-bound).

The key factor is the length of the CPU burst, not the length of the I/O burst, i.e. the
I/O-bound processes do not compute much between the I/O requests.

It is worth noting that as CPUs get faster, processes tend to become I/O-bound. As a
consequence, resource scheduling becomes an important issue.

Time measurement is related to the analysis of the duration of CPU bursts. The CPU burst
durations tend to have a frequency distribution characterized as an exponential. This law
varies from process to process and from computer to computer.

18


Process and diagram models (5)

Scheduling diagrams vary from book to book and from lecture to lecture:

• Gantt diagram: a single time axis divided into intervals t0, t1, t2, t3, ..., each labeled
with the process (P1, P2, P3) holding the CPU.

• Process diagram: one time axis per process, marking the intervals during which each
process runs.

• Table: one row per process and one column per time unit (quantum), giving the variation
of C(t). The notation "x-y" gives the value when the quantum starts and when it stops: a
running process decreases its residual capacity (e.g. C(t=2) = 5 and C(t=3) = 4 is written
5-4), while a waiting process keeps it constant (e.g. 2-2). A table may also track the
variation of another criterion (e.g. a dynamic priority) alongside C(t). e.g.:

time or quantum  0-1  1-2  2-3  3-4  4-5  5-6  6-7  7-8  8-9
P1 C(t)          7-6  6-5  5-4  4-3  3-2  2-2  2-2  2-1  1-0

• The diagram is a text.


Operating Systems
“Uniprocessor scheduling”
1. About short-term scheduling
2. Context switch, quantum and ready queue
3. Process and diagram models
4. Scheduling algorithms
4.1. FCFS scheduling
4.2. Priority based scheduling
4.3. Optimal scheduling
4.4. Time-sharing based scheduling
4.5. Priority/Time-sharing based scheduling
5. Modeling multiprogramming
6. Evaluation of algorithms

20
Algorithm                              | Preemptive | Scheduling criterion                | Priority       | Predictable capacity | Performance criteria                                    | Taxonomy
First Come First Serve                 | no         | rank in the queue                   | static         | no                   | arrival time                                            | Arrival
Priority Scheduling                    | yes/no     | process priority                    | static         | no                   | respecting the priority                                 | Priority
Dynamic Priority Scheduling with aging | yes        | process priority                    | dynamic        | no                   | respecting the priority and ensuring fairness           | Priority
Highest Response Ratio Next            | no         | response ratio                      | dynamic        | yes                  | optimal response time                                   | Optimization
Shortest Job First                     | yes/no     | shortest remaining time             | static/dynamic | yes                  | optimal waiting time                                    | Optimization
Time prediction                        | no/yes     | shortest predicted time             | dynamic        | no                   | achieving predictability with the SJF                   | Optimization
Guaranteed Scheduling                  | yes        | CPU use ratio                       | dynamic        | no                   | enforcing the response time                             | Time sharing
Round-Robin                            | yes        | rank in the queue and round         | dynamic        | no                   | enforcing the response time                             | Time sharing
Fair-Share Scheduling                  | yes        | process priority                    | dynamic        | no                   | respecting the priority and enforcing the response time | Priority & time sharing
Multilevel feedback queue scheduling   | yes        | process priority and queue position | static/dynamic | no                   | respecting the priority and enforcing the response time | Priority & time sharing
Scheduling algorithms
“First Come, First Served (FCFS)”
First Come First Serve (FCFS): processes are scheduled regarding their positions in the ready
queue (1, 2, 3, …). With an equal arrival date (wakeup time) w0, the process id can be used:
P1 > P2 > P3, etc.

Processes | Wakeup (w0) | Capacity (C)
P1        | 0           | 24
P2        | 5           | 3
P3        | 9           | 3
P4        | 9           | 3

Gantt: P1 [0,24], P2 [24,27], P3 [27,30], P4 [30,33]

- P1 arrives at t=0, the single process in the ready queue.
- P2, P3, P4 arrive at t=5 and t=9 while P1 is scheduled; in a non-preemptive policy P1 terminates first.
- When P1 ends, P2, P3, P4 are scheduled regarding their arrival dates w0 in the ready queue; P2 starts.
- With a similar wakeup time w0 for P3 and P4, we use the process id P3 > P4 for scheduling.
22
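The example can be replayed with a short sketch (`fcfs` is a hypothetical helper; ties on w0 are broken by process id, as stated above):

```python
# Non-preemptive FCFS sketch: processes sorted by wakeup time w0
# (ties broken by process id), each running to completion.

def fcfs(processes):
    """processes: list of (pid, w0, C); returns {pid: end time}."""
    end = {}
    t = 0
    for pid, w0, C in sorted(processes, key=lambda p: (p[1], p[0])):
        t = max(t, w0) + C          # wait for arrival if the CPU is idle
        end[pid] = t
    return end

ends = fcfs([("P1", 0, 24), ("P2", 5, 3), ("P3", 9, 3), ("P4", 9, 3)])
```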
Scheduling algorithms
“Priority Scheduling (PS)” (1)
Priority Scheduling (PS): when a process is finished, we shift to the process with the highest
priority (i.e. the lowest P value).

Processes | Wakeup (w0) | Capacity (C) | Priority (P)
P1        | 0           | 6            | 3
P2        | 1           | 1            | 1
P3        | 2           | 2            | 4
P4        | 3           | 1            | 5
P5        | 4           | 6            | 2

Gantt: P1 [0,6], P2 [6,7], P5 [7,13], P3 [13,15], P4 [15,16]

- P1, alone in the ready queue, arrives at t=0.
- At t=6, P2, P3, P4 and P5 are in the ready queue; the scheduling goes on with the priority
order P2 > P5 > P3 > P4.
24
Scheduling algorithms
“Priority Scheduling (PS)” (2)
Priority Scheduling (PS), the preemptive case: at any time, we look for the process with the
highest priority (i.e. the lowest P value).

Processes | Wakeup (w0) | Capacity (C) | Priority (P)
P1        | 0           | 6            | 3
P2        | 1           | 1            | 1
P3        | 2           | 2            | 4
P4        | 3           | 1            | 5
P5        | 4           | 6            | 2

t or q   0-1  1-2  2-3  3-4  4-5  5-6  6-7  7-8  8-9  9-10 10-11 11-12 12-13 13-14 14-15 15-16
P1 C(t)  6-5  5-5  5-4  4-3  3-3  3-3  3-3  3-3  3-3  3-3  3-2   2-1   1-0
P2 C(t)       1-0
P3 C(t)            2-2  2-2  2-2  2-2  2-2  2-2  2-2  2-2  2-2   2-2   2-2   2-1   1-0
P4 C(t)                 1-1  1-1  1-1  1-1  1-1  1-1  1-1  1-1   1-1   1-1   1-1   1-1   1-0
P5 C(t)                      6-5  5-4  4-3  3-2  2-1  1-0

- At t=1, P2, of highest priority, takes the CPU.
- At t=4, P5, of highest priority among the ready processes, preempts P1.
- When a process ends, the ready process with the lowest P value (i.e. the highest priority) is scheduled.
25
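The trace above can be replayed with a unit-time simulation (a sketch; `preemptive_priority` is a hypothetical helper, re-electing the lowest P value at every time unit):

```python
# Preemptive priority scheduling sketch: at each time unit, run the ready
# process with the lowest P value (highest priority).

def preemptive_priority(processes):
    """processes: list of (pid, w0, C, P); returns {pid: end time}."""
    remaining = {pid: C for pid, w0, C, P in processes}
    end = {}
    t = 0
    while remaining:
        ready = [(P, pid) for pid, w0, C, P in processes
                 if pid in remaining and w0 <= t]
        if not ready:
            t += 1
            continue
        _, pid = min(ready)          # lowest P value wins
        remaining[pid] -= 1
        t += 1
        if remaining[pid] == 0:
            del remaining[pid]
            end[pid] = t
    return end

ends = preemptive_priority([("P1", 0, 6, 3), ("P2", 1, 1, 1),
                            ("P3", 2, 2, 4), ("P4", 3, 1, 5),
                            ("P5", 4, 6, 2)])
```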
Scheduling algorithms
“Dynamic Priority Scheduling (DPS)”
Dynamic Priority Scheduling (DPS): works with a dynamic priority P(t) and is a preemptive algorithm:
1. a process starts with P(t=w0) = P, its initial priority value
2. when a process is running, P(t) is constant
3. when a process is waiting, P(t+1) = P(t)+1
4. at any time, the process of highest P(t) takes the CPU
5. if Pi(t) = Pj(t) for two processes i, j, we look at Pi(w0), Pj(w0)
6. when a process recovers the CPU at tn, we reset P(tn) = P(w0) = P

Processes | Wakeup (w0) | Capacity (C) | Priority (P)
P1        | 0           | ∞            | 1
P2        | 0           | ∞            | 3
P3        | 0           | ∞            | 5

t or q   0-1  1-2  2-3  3-4  4-5  5-6  6-7  7-8  8-9  9-10 10-11 11-12 12-13 13-14 14-15 15-16
P1 P(t)  1-2  2-3  3-4  4-5  5-6  1-1  1-2  2-3  3-4  4-5  5-6   6-7   1-1   1-2   2-3   3-4
P2 P(t)  3-4  4-5  5-6  3-3  3-4  4-5  5-6  3-3  3-4  4-5  5-6   3-3   3-4   4-5   5-6   3-3
P3 P(t)  5-5  5-5  5-5  5-6  5-5  5-6  5-5  5-6  5-5  5-5  5-5   5-6   6-7   5-5   5-5   5-6

- While P3 is running, its P(t) is constant.
- At a context switch, we reset the P(t) of the elected process.
- In the equivalence cases (Pi(t) = Pj(t)), we look at P(w0).
26
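The first quanta of the table above can be replayed step by step (a sketch following rules 1-6; `dps_step` is a hypothetical helper, and the tie-break of rule 5 is assumed to favor the higher initial priority P(w0), as the table suggests):

```python
# Dynamic priority scheduling with aging: waiting processes age (+1 per
# quantum), the highest P(t) gets the CPU, and a process recovering the
# CPU resets its P(t) to P(w0).

def dps_step(prio, base, running):
    """One quantum: returns (elected pid, updated prio dict)."""
    # rules 4-5: elect the highest P(t), ties broken by P(w0)
    elected = max(prio, key=lambda pid: (prio[pid], base[pid]))
    if elected != running:
        prio[elected] = base[elected]          # rule 6: reset on recovery
    for pid in prio:
        if pid != elected:
            prio[pid] += 1                     # rule 3: waiting processes age
    return elected, prio

base = {"P1": 1, "P2": 3, "P3": 5}
prio = dict(base)                              # rule 1: P(t=w0) = P
running = None
trace = []
for _ in range(6):                             # quanta 0-1 .. 5-6 of the table
    running, prio = dps_step(prio, base, running)
    trace.append(running)
```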
Scheduling algorithms
“Highest Response Ratio Next (HRRN)” (1)
For each process, we would like to minimize a normalized turnaround time defined as

R_i(t) = (WT_i(t) + C_i) / C_i = WT_i(t)/C_i + 1

with WT_i(t) the waiting time of process i at t, and C_i the capacity. Let's note that 1 ≤ R_i(t) < ∞.

Considering a non-preemptive scheduling, we have T(t) = 0 at t < s,
then WT(t) = E(t) - (T(t)=0) = E(t) = t - w0; R(t) is then

R_i(t) = ((t - w0_i) + C_i) / C_i

The scheduling is non-preemptive and looks for the highest R(t) value at any context switch.
The idea behind this method is to keep the mean response ratio low:
if a job has a high response ratio, it should be run at once to reduce the mean.
28
Scheduling algorithms
“Highest Response Ratio Next (HRRN)” (2)
For each process, we would like to minimize the normalized
turnaround time defined as

R_i(t) = (WT_i(t) + C_i) / C_i = ((t - w0_i) + C_i) / C_i

Processes | Wakeup (w0) | Capacity (C)
P1        | 0           | 3
P2        | 2           | 6
P3        | 4           | 4
P4        | 6           | 5
P5        | 8           | 2

Gantt: P1 [0,3], P2 [3,9], P3 [9,13], P5 [13,15], P4 [15,20]

- P1 arrives at t=0; it is the single process in the ready queue, and is then scheduled.
- P2 arrives at w0=2, the single waiting process in the ready queue; it is scheduled next.
- At t=9, P3, P4 and P5 are here; we compute the R_i(t):
  R3(9) = ((9-4)+4)/4 = 9/4 = 2.25
  R4(9) = ((9-6)+5)/5 = 8/5 = 1.6
  R5(9) = ((9-8)+2)/2 = 3/2 = 1.5
  we have R3(t) > R4(t) > R5(t); P3 is scheduled next.
- At t=13, P4 and P5 are still waiting:
  R4(13) = ((13-6)+5)/5 = 12/5 = 2.4
  R5(13) = ((13-8)+2)/2 = 7/2 = 3.5
  with R5(t) > R4(t), P5 is the next process.
- P4, the last process, is scheduled next.
- We observe a priority inversion between P4 and P5 at t=9 and t=13 due to the dynamic R(t).

29
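The HRRN selection can be sketched as follows (`hrrn` is a hypothetical helper; non-preemptive, picking the highest response ratio at each context switch):

```python
# HRRN sketch: non-preemptive, at each context switch pick the ready
# process with the highest response ratio R(t) = ((t - w0) + C) / C.

def hrrn(processes):
    """processes: list of (pid, w0, C); returns the schedule order."""
    waiting = list(processes)
    order, t = [], 0
    while waiting:
        ready = [p for p in waiting if p[1] <= t]
        if not ready:                        # CPU idle: wait for the next arrival
            ready = [min(waiting, key=lambda p: p[1])]
        best = max(ready, key=lambda p: ((t - p[1]) + p[2]) / p[2])
        waiting.remove(best)
        order.append(best[0])
        t = max(t, best[1]) + best[2]        # run to completion
    return order

order = hrrn([("P1", 0, 3), ("P2", 2, 6), ("P3", 4, 4), ("P4", 6, 5), ("P5", 8, 2)])
```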
Scheduling algorithms
“Shortest Job First (SJF)”
Shortest Job First (SJF): in the preemptive case, at any time, it looks for the process with
the shortest residual capacity C(t) in the ready queue; it is also called Shortest Remaining
Time (SRT). The non-preemptive version is called Shortest Process Next (SPN): when a process
ends, it looks for the process with the shortest capacity C in the ready queue.

Processes | Wakeup (w0) | Capacity (C)
P1        | 0           | 7
P2        | 2           | 4
P3        | 4           | 1
P4        | 5           | 4

t or q   0-1  1-2  2-3  3-4  4-5  5-6  6-7  7-8  8-9  9-10 10-11 11-12 12-13 13-14 14-15 15-16
P1 C(t)  7-6  6-5  5-5  5-5  5-5  5-5  5-5  5-5  5-5  5-5  5-5   5-4   4-3   3-2   2-1   1-0
P2 C(t)            4-3  3-2  2-2  2-1  1-0
P3 C(t)                      1-0
P4 C(t)                           4-4  4-4  4-3  3-2  2-1  1-0

- When a shorter process arises, we shift the context.
- When a process ends, we shift to the process with the shortest remaining C(t).
30
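The SRT trace can be replayed with a unit-time simulation (a sketch; `srt` is a hypothetical helper, re-electing the shortest residual capacity at every time unit):

```python
# SRT (preemptive SJF) sketch: at each time unit, run the ready process
# with the shortest residual capacity C(t).

def srt(processes):
    """processes: list of (pid, w0, C); returns {pid: end time}."""
    remaining = {pid: C for pid, w0, C in processes}
    end, t = {}, 0
    while remaining:
        ready = [(remaining[pid], pid) for pid, w0, C in processes
                 if pid in remaining and w0 <= t]
        if not ready:
            t += 1
            continue
        _, pid = min(ready)                  # shortest residual capacity first
        remaining[pid] -= 1
        t += 1
        if remaining[pid] == 0:
            del remaining[pid]
            end[pid] = t
    return end

ends = srt([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
```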
Scheduling algorithms
“Time prediction” (1)

One difficulty with the SJF algorithm is the need to know the required residual capacity.
When the system cannot guarantee predictability, we can use time prediction. For the
I/O-bound processes, the OS may keep a CPU burst average Tn for each process; this
criterion interpolates a fraction 1/n of the CPU time consumed (and then the residual
capacity C(t)).

The simplest calculation for Tn+1 would be the following:

T(n+1) = (1/n) × sum_{i=1..n} t(i)

To avoid recalculating the entire summation each time, we can rewrite the previous equation as:

T(n+1) = (1/n) × t(n) + ((n-1)/n) × T(n)

A common technique for predicting a future value on the basis of a time series is
exponential averaging:

T(n+1) = α × t(n) + (1-α) × T(n)

which unrolls into:

T(n+1) = α t(n) + α(1-α) t(n-1) + … + α(1-α)^j t(n-j) + … + (1-α)^(n+1) T(0)

with:
- T(n+1): the prediction of the next CPU burst "n+1"
- T(n): the time prediction of the current CPU burst "n"
- t(n): the time value of the current CPU burst "n"
- α: controls the relative weight (in [0,1]) between the most recent burst t(n) and the past history T(n)

Because α ∈ [0,1], each term has less weight than its predecessor.
31
Scheduling algorithms
“Time prediction” (2)
A common technique for predicting a future value on the basis of a time series is
exponential averaging:

T(n+1) = α × t(n) + (1-α) × T(n)

with T(n+1) the prediction of the next CPU burst, t(n) the time value and T(n) the time
prediction of the current burst, and α the relative weight (in [0,1]) between them:

- α = 0: T(n+1) = T(n), recent history has no effect
- α = 1: T(n+1) = t(n), only the most recent CPU burst matters

At the first execution (i.e. at w0), T(0) is chosen as a constant (e.g. the overall system average).

e.g. with T(0) = 10:

 t_i    T_i (α=0.1)   T_i (α=0.5)   T_i (α=0.9)
  6.00  10.00         10.00         10.00
  4.00   9.60          8.00          6.40
  6.00   9.04          6.00          4.24
  4.00   8.74          6.00          5.82
 13.00   8.26          5.00          4.18
 13.00   8.74          9.00         12.12
 13.00   9.16         11.00         12.91
 13.00   9.55         12.00         12.99

e.g. 9.6 = 0.1×6 + 0.9×10 and 6.4 = 0.9×6 + 0.1×10
32
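The α = 0.5 column of the table can be reproduced with a few lines (a sketch; `predictions` is a hypothetical helper returning the prediction used before each observed burst):

```python
# Exponential averaging sketch: T(n+1) = alpha * t(n) + (1 - alpha) * T(n).

def predictions(bursts, alpha, T0):
    """Return the prediction in force before each observed burst."""
    preds, T = [], T0
    for t in bursts:
        preds.append(T)                     # T(n) predicts the burst t(n)
        T = alpha * t + (1 - alpha) * T     # update for the next burst
    return preds

bursts = [6, 4, 6, 4, 13, 13, 13, 13]
preds = predictions(bursts, alpha=0.5, T0=10)   # the alpha = 0.5 column above
```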
Scheduling algorithms
“Time prediction” (3)
e.g. Time prediction with the SRT algorithm (preemptive SJF):

i. We consider two processes A and B, with the observed CPU bursts and I/O completion events
below, over a time interval [t0, t0+T]. At t0, A and B are in the blocked queue.
ii. We have T(0) = 5 and α = 0.4 as parameters.
iii. We assume that at any I/O completion event, A and B compete for CPU access (i.e. when B
is released A is scheduled, and vice versa).

Observed CPU bursts and predictions (α = 0.4):
- Process A: t_i = 4, 5, 3 with T_i = 5.00, 4.60, 4.76
- Process B: t_i = 3, 6, 4 with T_i = 5.00, 4.20, 4.92

I/O completion events: (1) A, (2) B, (3) A, (4) A and B, (5) B.

Between two bursts, a process waits in the blocked queue and returns to the ready queue at
I/O completion; the short-term scheduling then allocates the CPU.
33
Scheduling algorithms
“Time prediction” (4)
e.g. (continued) The resulting trace over the I/O completion events:

- Event 1: A is released and alone in the ready queue, then scheduled with T_A = 5.
- Event 2: B is released while A is scheduled (iii); with T_B = T_A = 5, B waits in the
ready queue. When A's burst ends (T_A ← 4.6), B is scheduled.
- Event 3: A is released while B is scheduled (iii); with T_A = 4.6 < T_B = 5, B is
preempted. When A ends (T_A ← 4.76), B is scheduled again and completes its burst
(T_B ← 4.2), then returns to the blocked queue.
- Event 4: two releases occur at the same time; with T_B = 4.2 < T_A = 4.76, A waits in the
queue and B is scheduled. When B ends (T_B ← 4.92), A is scheduled.
- Event 5: B is released while A is scheduled (iii); with T_B = 4.92 > T_A = 4.76, B waits
in the ready queue. When A ends, B shifts from the ready queue to the CPU, then ends and
returns to the blocked queue.

34
Scheduling algorithms
“Time prediction” (5)
e.g. (continued) The resulting process diagram, with the burst durations in parentheses:

- [t0, t0+4]: A (4)
- [t0+4, t0+12]: B, A (3, 5), B being preempted by A
- [t1, t1+6]: B (6)
- [t1+6, t1+9]: A (3)
- [t1+9, t1+13]: B (4)
35
Scheduling algorithms
“Guaranteed Scheduling (GS)” (1)
With n processes running, all things being equal, each one should get 1/n of
the CPU utilization. For a process i, the scheduling algorithm:

1. keeps track of the actual CPU time consumed: F1(t) = T_i(t) = C_i - C_i(t)
2. computes the CPU time entitled ratio: F2(t) = E_i(t)/n = (t - w_i)/n
3. normalizes the CPU time consumed with the CPU time entitled ratio;
the lowest value has the highest priority:

R_i(t) = F1(t)/F2(t) = T_i(t) × n / E_i(t) = T_i(t) × n / (t - w_i)

- R_i(t) > 1 ⇔ T_i(t) > E_i(t)/n: P_i got more CPU time than guaranteed.
- R_i(t) = 1 ⇔ T_i(t) = E_i(t)/n: P_i got a right fraction of the CPU.
- R_i(t) < 1 ⇔ T_i(t) < E_i(t)/n: P_i is in starvation and has a high priority.
37
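The normalized ratio can be sketched as a one-liner (`gs_ratio` is a hypothetical helper; T, t, w0 and n follow the definitions above):

```python
# Guaranteed scheduling sketch: normalized CPU consumption
# R(t) = T(t) / (E(t) / n); the lowest value has the highest priority.

def gs_ratio(T, t, w0, n):
    """T: CPU time consumed, t: current time, w0: wakeup time, n: process count."""
    E = t - w0                        # CPU time entitled
    return T / (E / n)

# e.g. a process that consumed 4 time units after 7 entitled units, with n = 3:
r = gs_ratio(T=4, t=7, w0=0, n=3)     # 4 / (7/3) = 12/7, more than guaranteed
```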
Scheduling algorithms
“Guaranteed Scheduling (GS)” (2)
With n processes running, all things being equal, each one should get 1/n of the CPU
utilization. The CPU time consumed is normalized with the CPU time entitled ratio,
R_i(t) = T_i(t) × n / (t - w_i); the lowest value has the highest priority.

Processes | Wakeup (w0) | Capacity (C)
P1        | 0           | ∞
P2        | 2           | ∞
P3        | 4           | ∞

t or q     0-1  1-2  2-3  3-4  4-5  5-6  6-7  7-8
n          1    1    2    2    3    3    3    3
P1 T(t)    0-1  1-2  2    2-3  3    3    3-4  4
P1 t-w0    0-1  1-2  2-3  3-4  4-5  5-6  6-7  7-8
P1 R(t)    0    1    2    1.3  2.2  1.8  1.5  1.7
P2 T(t)              0-1  1    1    1-2  2    2
P2 t-w0              0-1  1-2  2-3  3-4  4-5  5-6
P2 R(t)              0    2    1.5  1    1.5  1.2
P3 T(t)                        0-1  1    1    1-2
P3 t-w0                        0-1  1-2  2-3  3-4
P3 R(t)                        0    3    1.5  1

e.g. for P1 at the start of quantum 7-8: R(t) = (4/7)×3 = 1.7.

- When n increases, R(t) increases; we shift to the process with the lowest R(t).
- When R1(t) = R2(t) = R3(t), we apply a selection on the id: P1 > P2 > P3.
- While a process is scheduled, R(t) increases as T(t) increases; while a process is waiting,
R(t) decreases as T(t) is constant.
- After a while, the algorithm looks for a convergence R1(t) ≈ R2(t) ≈ R3(t) ≈ 1.
38
Scheduling algorithms
“Round Robin (RR)”
Round Robin (RR): we assign the CPU by quanta of m time units to each process, in equal
portions and in circular order (a FCFS look-alike), handling all the processes without priority:
i. Every m time units, we shift to the following process in the ready queue.
ii. When a process ends before exhausting its quantum, we shift to the next process immediately.

Processes | Wakeup (w0) | Capacity (C)
P1        | 0           | 53
P2        | 0           | 17
P3        | 0           | 68
P4        | 0           | 24

We will use here m = 20.

t or q   0-20   20-37  37-57  57-77  77-97  97-117  117-121  121-134  134-154  154-162
P1 C(t)  53-33  33-33  33-33  33-33  33-13  13-13   13-13    13-0
P2 C(t)  17-17  17-0
P3 C(t)  68-68  68-68  68-48  48-48  48-48  48-28   28-28    28-28    28-8     8-0
P4 C(t)  24-24  24-24  24-24  24-4   4-4    4-4     4-0

- Processes are scheduled regarding their positions in the ready queue.
- When a process ends before m, we shift to the next process.
- The last process is terminated in some successive steps.
39
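The trace above can be reproduced in a few lines (a minimal sketch; the (name, capacity) tuple format is an illustrative assumption):

```python
from collections import deque

def round_robin(workload, m):
    """Round-robin simulation with quantum m.

    workload: list of (name, capacity) pairs, all waking up at t = 0,
    queued in the given order. Returns {name: completion time}.
    """
    queue = deque(workload)
    t, finish = 0, {}
    while queue:
        name, rem = queue.popleft()
        run = min(m, rem)              # a process may end before its quantum
        t += run
        rem -= run
        if rem == 0:
            finish[name] = t
        else:
            queue.append((name, rem))  # back to the tail of the ready queue
    return finish
```

On the workload above it gives P2 done at t = 37, P4 at 121, P1 at 134 and P3 at 162, as in the table.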
  Algorithm                    Preemptive  Scheduling criterion      Priority  Predictable  Performance criteria           Taxonomy
                                                                               capacity
  First Come First Serve       no          rank in the queue         static    no           Arrival time                   Arrival
  Priority Scheduling          yes/no      process priority          static    no           Respecting the priority        Priority
  Dynamic Priority             yes         process priority          dynamic   no           Respecting the priority        Priority
  Scheduling with aging                                                                     and avoiding starvation
  Highest Response Ratio Next  no          response ratio            dynamic   yes          Optimal response time          Optimization
  Shortest Job First           yes/no      shortest remaining time   static/   yes          Optimal waiting time           Optimization
                                                                     dynamic
  Time prediction              no/yes      shortest predicted time   dynamic   no           Achieving the predictability   Optimization
                                                                                            with the SJF
  Guaranteed Scheduling        yes         CPU use ratio             dynamic   no           Enforcing the response time    Time sharing
  Round-Robin                  yes         rank in the queue         dynamic   no           Enforcing the response time    Time sharing
                                           and round
  Fair-Share Scheduling        yes         process priority          dynamic   no           Respecting the priority and    Priority &
  Multilevel feedback          yes         process priority          static/   no           enforcing the response time    time sharing
  queue scheduling                         and queue position        dynamic
Scheduling algorithms
“Fair-Share Scheduling (FSS)” (1)
Applications may be organized with multiple processes. The FSS scheduling algorithm allocates a fraction of the
processor resources to each group. A hybrid scheduling, mixing the round robin & priority scheduling (using a base
priority, an exponential iterative reduction rule, a group weighting), assures a fair share of the CPU for each process.

Pj(i) is the priority of process j at the beginning of interval i,
lower values equal higher priorities.
Basej is the base “or root” priority of process j.
CPUj(i) is the measure of processor utilization by process j
through the interval i.
GCPUk(i) is the measure of processor utilization by group k
through the interval i.
wk is the weight assigned to group k, with the constraints
0 < wk ≤ 1 and Σk wk = 1.

The scheduler applies a round robin and looks for minimization of the criterion Pj(i) at each round.

   Pj(i) = Basej + CPUj(i)/2 + GCPUk(i)/(4 × wk)

   with CPUj(i) = CPUj(i−1)/2
   and GCPUk(i) = GCPUk(i−1)/2

(1) If two or more minimum Pj(i) appear (e.g. when all the Pj(i) are equal), we apply a selection
based on the round robin, e.g. P1 > P2 > P3 (Round Robin).
(2) Whenever a standalone minimum Pj(i) appears, that process is selected (priority scheduling).
41
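The priority formula and the decay rule can be sketched directly (integer arithmetic mirrors the rounding used in the worked example on the next slide; the function names are illustrative):

```python
def fss_priority(base, cpu, gcpu, w):
    """Fair-share priority of one process at the start of an interval.

    base: base priority, cpu: decayed per-process CPU usage,
    gcpu: decayed group CPU usage, w: group weight (0 < w <= 1).
    Lower values mean higher priority.
    """
    return base + cpu // 2 + int(gcpu / (4 * w))

def decay(usage):
    """Exponential iterative reduction applied at each interval boundary."""
    return usage // 2
```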
Scheduling algorithms
“Fair-Share Scheduling (FSS)” (2)
The scheduler applies a round robin and looks for minimization of the criterion Pj(i) at each round.

   Pj(i) = Basej + CPUj(i)/2 + GCPUk(i)/(4 × wk)
   with CPUj(i) = CPUj(i−1)/2
   and GCPUk(i) = GCPUk(i−1)/2

  Processes   Wakeup (w0)   Priority   Capacity (C)   Group
  P1          0             60         ∞              1
  P2          0             60         ∞              2
  P3          0             60         ∞              2

e.g. w1 = w2 = 0.5 and m = 60

  t or q      0-60          60-120        120-180       180-240        240-300        300-360
P1 CPU(t)     0-60          30            15-75         37             18-78          39
   GCPU(t)    0-60          30            15-75         37             18-78          39
   P(t)       60 (60+0+0)   90 (60+15+15) 74 (60+7+7)   96 (60+18+18)  78 (60+9+9)    98 (60+19+19)
P2 CPU(t)     0             0-60          30            15             7              3-63
   GCPU(t)    0             0-60          30            15-75          37             18-78
   P(t)       60 (60+0+0)   60 (60+0+0)   90 (60+15+15) 74 (60+7+7)    81 (60+3+18)   70 (60+1+9)
P3 CPU(t)     0             0             0             0-60           30             15
   GCPU(t)    0             0-60          30            15-75          37             18-78
   P(t)       60 (60+0+0)   60 (60+0+0)   75 (60+0+15)  67 (60+0+7)    93 (60+15+18)  76 (60+7+9)

When P1(t) = P2(t) = P3(t), we apply a selection based on the round robin, P1 > P2 > P3.
When P1 is scheduled, P1(t) increases and P2(t) = P3(t) remains constant; the RR policy applies here.
P2(t) ≠ P3(t) with a same GCPU2(t), as CPU2(t) ≠ CPU3(t).
Scheduling will go on; P1 will have more chance to get the CPU as it constitutes a single group.
42
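The six intervals of this example can be replayed with a compact simulation (a sketch; the chosen process runs one full interval of m = 60 units, ties fall back to process id order, and integer division reproduces the rounding of the table):

```python
def fss_trace(groups, base, w, intervals):
    """Fair-Share Scheduling simulation, one full quantum per interval.

    groups: {name: group id}; every process wakes at t = 0 with the
    same base priority. w: {group id: weight}. Returns the scheduled
    process per interval.
    """
    cpu = {p: 0 for p in groups}
    gcpu = {g: 0 for g in set(groups.values())}
    trace = []
    for _ in range(intervals):
        prio = {p: base + cpu[p] // 2 + int(gcpu[groups[p]] / (4 * w[groups[p]]))
                for p in groups}
        chosen = min(sorted(groups), key=lambda p: prio[p])   # ties: id order
        trace.append(chosen)
        cpu[chosen] += 60                   # the chosen process runs the interval
        gcpu[groups[chosen]] += 60
        cpu = {p: v // 2 for p, v in cpu.items()}             # exponential decay
        gcpu = {g: v // 2 for g, v in gcpu.items()}
    return trace
```

On the workload above it yields the interval schedule P1, P2, P1, P3, P1, P2, matching the table.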
Scheduling algorithms
“Multilevel feedback queue scheduling (MLFQ)” (1)
The multilevel feedback queue scheduling algorithm allows processes to
move between queues. The idea is to separate the processes according to their
CPU bursts.

1. The queues are organized according to priority levels, ∪i=0..n RQi.

2. When a process first enters the system, it is placed in RQ0.

3. In general, a process scheduled from RQi is allowed to execute a maximum
of m = 2^(k+i) time units (i.e. quantum) before a preemption.

4. After a preemption at level i, a process shifts to the level i+1.

5. Within each queue, a simple FCFS mechanism is used.

6. A process at a priority level i can preempt any process at a priority level > i.

(Figure: the dispatcher serves the ready queue levels RQ0, RQ1, …, RQn in order;
a feedback control with Round Robin moves a preempted process down one level.)
43
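The per-level quantum rule in step 3 can be encoded directly (a sketch; the function name is illustrative, and k is the tuning constant used in the example slides):

```python
def mlfq_quantum(i, k=0):
    """Quantum granted to a process scheduled from RQi: m = 2**(k + i).

    With k = 0 this yields 1, 2, 4, ... time units for RQ0, RQ1, RQ2, ...
    """
    return 2 ** (k + i)
```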
Scheduling algorithms
“Multilevel feedback queue scheduling (MLFQ)” (2)

  Processes   Wakeup (w0)   Capacity (C)        m with k=0
  P1          0             3                   RQ0   2^(0+0) = 1
  P2          2             6                   RQ1   2^(0+1) = 2
  P3          4             4                   RQ2   2^(0+2) = 4
  P4          6             5
  P5          8             2

  t or q    0-1   1-2   2-3   3-4     4-5   5-6     6-7     7-8        8-9     9-10
  RQ0       P1          P2            P3            P4                 P5
  RQ1             P1    P1    P1,P2   P2    P2,P3   P2,P3   P2,P3,P4   P3,P4   P3,P4,P5
  RQ2                                                                  P2      P2
P1 C(t)     3-2   2-1   1     1-0
   P(t)     0-1   1     1     1
P2 C(t)                 6-5   5       5     5-4     4       4-3        3       3
   P(t)                 0-1   1       1     1       1       1-2        2       2
P3 C(t)                               4-3   3       3       3          3       3-2
   P(t)                               0-1   1       1       1          1       1
P4 C(t)                                             5-4     4          4       4
   P(t)                                             0-1     1          1       1
P5 C(t)                                                                2-1     1
   P(t)                                                                0-1     1

P1 starts in the queue RQ0; after a quantum 2^0 = 1, P1 shifts to RQ1.
P2 starts and preempts P1 as i=0 < i=1; same case with P3, P4 and P5 when they arrive.
P1 ends before to shift to RQ2.
P2 shifts to RQ2 after 2^0 + 2^1 = 3 time units.
When P2 shifts to RQ1, within the FCFS policy P1 is scheduled first.
44
Scheduling algorithms
“Multilevel feedback queue scheduling (MLFQ)” (3)

  Processes   Wakeup (w0)   Capacity (C)        m with k=1
  P1          0             3                   RQ0   2^(1×0) = 1
  P2          2             6                   RQ1   2^(1×1) = 2
  P3          4             4                   RQ2   2^(1×2) = 4
  P4          6             5
  P5          8             2

  t or q    10-11      11-12   12-13   13-14      14-15      15-16      16-17      17-18   18-19   19-20
  RQ0
  RQ1       P3,P4,P5   P4,P5   P4,P5   P5
  RQ2       P2         P2,P3   P2,P3   P2,P3,P4   P2,P3,P4   P2,P3,P4   P2,P3,P4   P3,P4   P4      P4
P1          (ended at t = 4)
P2 C(t)     3          3       3       3          3-2        2-1        1-0
   P(t)     2          2       2       2          2          2          2
P3 C(t)     2-1        1       1       1          1          1          1          1-0
   P(t)     1-2        2       2       2          2          2          2          2
P4 C(t)     4          4-3     3-2     2          2          2          2          2       2-1     1-0
   P(t)     1          1       1-2     2          2          2          2          2       2       2
P5 C(t)     1          1       1       1-0
   P(t)     1          1       1       1

After a quantum 2^1 = 2, P3 shifts to RQ2; same case with P4.
P5 ends before to shift to RQ2.
Within RQ2, P2 can execute on 3 < 2^2 time units before completion.
Scheduling will go on.
45
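The two traces above can be reproduced with a small step-by-step simulator (a sketch; the conventions that a preempted process keeps its quantum progress and stays at the head of its queue are inferred from the example, and the tuple format is illustrative):

```python
def mlfq(workload, quanta):
    """Multilevel feedback queue simulation, one time unit per step.

    workload: (name, wakeup, capacity) tuples; quanta: quantum per level.
    New arrivals enter RQ0 and preempt any lower-level process.
    Returns {name: completion time}.
    """
    state = {n: [c, 0] for n, w, c in workload}   # [remaining, used in level]
    queues = [[] for _ in quanta]                 # FCFS lists of names
    arrivals = sorted(workload, key=lambda p: p[1])
    t, finish = 0, {}
    while len(finish) < len(workload):
        while arrivals and arrivals[0][1] <= t:   # admit new processes to RQ0
            queues[0].append(arrivals.pop(0)[0])
        level = next((i for i, q in enumerate(queues) if q), None)
        if level is None:                         # CPU idle: wait for arrivals
            t += 1
            continue
        name = queues[level][0]                   # head of the top queue runs
        state[name][0] -= 1
        state[name][1] += 1
        t += 1
        if state[name][0] == 0:                   # process completed
            queues[level].pop(0)
            finish[name] = t
        elif state[name][1] == quanta[level]:     # quantum exhausted: demote
            queues[level].pop(0)
            queues[min(level + 1, len(quanta) - 1)].append(name)
            state[name][1] = 0
    return finish
```

With quanta (1, 2, 4) and the example workload, it gives the completion times of the tables: P1 at t = 4, P5 at 14, P2 at 17, P3 at 18 and P4 at 20.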
Operating Systems
“Uniprocessor scheduling”
1. About short-term scheduling
2. Context switch, quantum and ready queue
3. Process and diagram models
4. Scheduling algorithms
4.1. FCFS scheduling
4.2. Priority based scheduling
4.3. Optimal scheduling
4.4. Time-sharing based scheduling
4.5. Priority/Time-sharing based scheduling
5. Modeling multiprogramming
6. Evaluation of algorithms

47
Modeling multiprogramming
Modeling multiprogramming: from a probabilistic point of view,
suppose that a process spends a fraction p of its time waiting for
I/O to complete.

With n processes in memory, the probability that all these processes
are waiting for I/O (the case where the CPU will be idle) is p^n. The
CPU utilization is then given by the formula

   CPU utilization = 1 − p^n          n is the number of processes
                                      p is their (common) I/O rate

e.g. 80% I/O rate, 4 processes: CPU utilization = 1 − 0.8^4 = 0.5904

When the I/O rates are different, the formula can be expressed as

   CPU utilization = 1 − Π(i=1..n) pi     n is the number of processes
                                          pi is the I/O rate of process i

e.g. P1 (80%), P2 (60%), P3 (40%), P4 (60%): CPU utilization = 1 − 0.8 × 0.6 × 0.4 × 0.6 = 0.8848

48
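Both formulas reduce to one computation, since the common-rate case is just a product of n equal factors (a minimal sketch; the function name is illustrative):

```python
from math import prod

def cpu_utilization(io_rates):
    """CPU utilization under multiprogramming: the CPU is idle only when
    every resident process is waiting for I/O at the same time, which
    happens with probability prod(io_rates)."""
    return 1 - prod(io_rates)
```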
Operating Systems
“Uniprocessor scheduling”
1. About short-term scheduling
2. Context switch, quantum and ready queue
3. Process and diagram models
4. Scheduling algorithms
4.1. FCFS scheduling
4.2. Priority based scheduling
4.3. Optimal scheduling
4.4. Time-sharing based scheduling
4.5. Priority/Time-sharing based scheduling
5. Modeling multiprogramming
6. Evaluation of algorithms

49
Evaluation of algorithms

Simulation aims to handle a model of the OS for evaluation (scheduling algorithm, processes, etc.). The simulator has a
variable representing a clock; when it increases, the simulator modifies the state of the system.
The data to drive the simulation can be generated in two main ways:
- using synthetic data with a random number generator;
- recording trace tapes by monitoring a real system.

Random number generator → process workload:

  P    r0   C
  P1   0    10
  P2   5    20
  P3   3    3
  P4   7    7
  P5   9    12

Recording a real system → trace tapes:

  P    r0   Bursts
  P1   0    {7,3}
  P2   5    {1,8,7,4}
  P3   3    {2,1}
  P4   7    {2,1,2,2}
  P5   9    {8,1,3}

Statistics, e.g. waiting time (WT):

        WT
  FCFS  28
  SJF   13
  RR    23

As the simulation reflects a real system, statistics about the algorithm performances can be computed. However,
simulation requires hours of computation and a huge amount of data. In addition, design, coding and debugging of a
simulator can be a major task.
50
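A minimal simulator in this spirit, computing the mean waiting time of the synthetic workload under FCFS (a sketch; the WT figures in the statistics table are illustrative, so the number produced here is not expected to match them):

```python
def fcfs_mean_wait(workload):
    """Mean waiting time under FCFS for (name, release r0, capacity C)
    tuples; ties on release time keep the listed order."""
    t, total = 0, 0
    for name, r0, c in sorted(workload, key=lambda p: p[1]):
        t = max(t, r0)           # the CPU may idle until the next release
        total += t - r0          # waiting time = start time - release time
        t += c
    return total / len(workload)
```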
