Part 3
“Uniprocessor scheduling”
Mathieu Delalandre
University of Tours, Tours city, France
mathieu.delalandre@univ-tours.fr
Operating Systems
“Uniprocessor scheduling”
1. About short-term scheduling
2. Context switch, quantum and ready queue
3. Process and diagram models
4. Scheduling algorithms
4.1. FCFS scheduling
4.2. Priority based scheduling
4.3. Optimal scheduling
4.4. Time-sharing based scheduling
4.5. Priority/Time-sharing based scheduling
5. Modeling multiprogramming
6. Evaluation of algorithms
About short-term scheduling (1)
The (short-term) scheduler is a system process running an algorithm to decide which of the ready processes is to be executed next (i.e. allocated to the CPU). Different performance criteria have to be considered:
- Waiting time: amount of time a process has been waiting in the ready queue.
- Throughput: number of processes that complete their execution per time unit.
- CPU utilization: keep the CPU as busy as possible.
- Fairness: a process should not suffer starvation, i.e. never be loaded onto the CPU.
- Enforcing priorities: when processes are assigned priorities, the scheduling policy should favor the high-priority processes.
- Balancing resources: the scheduling policy should keep the resources of the system busy.
- Etc.
About short-term scheduling (2)
Depending on the considered systems (mainframes, server computers, personal computers, real-time systems, embedded systems, etc.), different scheduling problems have to be considered:
[Table: scheduling parameters.]
About short-term scheduling (3)
Depending on the considered systems (mainframes, server computers, personal computers, real-time systems, embedded systems, etc.), different scheduling problems have to be considered:
- On-line/off-line: off-line scheduling builds complete planning sequences with all the parameters of the processes. The schedule is known before the process execution and can be implemented efficiently.
- Relative/strict deadline: a process is said to have no (or a relative) deadline if its response time neither affects the performance of the system nor jeopardizes its correct behavior.
- Dynamic/static priority: static algorithms are those in which the scheduling decisions are based on fixed parameters, assigned to processes before their activation. Dynamic scheduling employs parameters that may change as the system evolves.
About short-term scheduling (4)
Depending on the considered systems (mainframes, server computers, personal computers, real-time systems, embedded systems, etc.), different scheduling problems have to be considered:
- Dependent/independent process: a process is dependent (or cooperating) if it can affect (or be affected by) the other processes. Clearly, any process that shares data and uses IPC is a cooperating process.
- Resource sharing: from a process point of view, a resource is any software structure that can be used by the process to advance its execution.
- Periodic/aperiodic process: a process is said to be periodic if, each time it is ready, it releases a periodic request.
- Mono-core/multi-core: when a computer system contains a set of processors that share a common main memory, we are talking about multiprocessor/multi-core scheduling.
About short-term scheduling (5)
The real problem in scheduling is the definition of the scheduling criteria; the algorithm itself is little discussed.
Context switch, quantum and ready queue (1)
The dispatcher is in charge of passing the control of the CPU to the process selected by the short-term scheduler.
A context switch is the operation of storing and restoring the state (context) of a CPU so that the execution can be resumed from the same point at a later time. It is based on two distinct sub-operations, state save and state restore. Switching from one process to another requires a certain amount of time (saving and loading the registers, the memory maps, etc.).
The quantum (or time slice) is the period of time for which a process is allowed to run in a preemptive multitasking system. The scheduler is run once every time slice to choose the next process to run.
[Diagram: P1 running while P0 waits; on an interrupt or system call, a context switch saves the state into PCB1 and reloads the state from PCB0; P0 then runs for a quantum (time slice) until the next switch or exit.]
Context switch, quantum and ready queue (2)
e.g.
i. We consider three processes A, B, C and a dispatcher, with their traces (i.e. instruction listings) given in the next table.

Process A | Process B | Process C | Dispatcher
5000      | 8000      | 12000     | 100
5001      | 8001      | 12001     | 101
...       | 8002      | ...       | ...
5011      | 8003      | 12011     | 105

ii. Processes are scheduled in a predefined order (A, B then C).
iii. The OS here only allows a process to continue for a maximum of six instruction cycles (the quantum), after which it is interrupted.

Resulting trace:
cycles 1-6: Process A executes 5000-5005 (A starts, then is interrupted at the quantum)
cycles 7-12: Dispatcher executes 100-105
cycles 13-16: Process B executes 8000-8003 (B ends before exhausting its quantum)
cycles 17-22: Dispatcher executes 100-105
cycles 23-28: Process C executes 12000-12005 (C is interrupted at the quantum)
cycles 29-34: Dispatcher executes 100-105
cycles 35-40: Process A executes 5006-5011 (A ends)
cycles 41-46: Dispatcher executes 100-105
cycles 47-52: Process C executes 12006-12011 (C ends)
Context switch, quantum and ready queue (4)
e.g. Summary of the same example, one scheduling round per quantum (a process may use fewer than six cycles if it ends early):
round                   | i | i+1 | i+2 | i+3 | i+4
instruction cycles used | 6 | 4   | 6   | 6   | 6
scheduled process       | A | B   | C   | A   | C
Context switch, quantum and ready queue (5)
The ready queue is generally composed of PCB pointers; it is stored as a linked list in the main memory, managing pointers from the first to the last PCB. Scheduling decisions translate into insert and delete operations on this list.
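The linked-list organization above can be sketched in a few lines of Python (a minimal illustration; the class and field names are ours, not from the course):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PCB:
    """Minimal process control block: only the fields needed here."""
    pid: int
    next: Optional["PCB"] = None  # link to the next PCB in the ready queue

class ReadyQueue:
    """Singly linked list of PCBs with head/tail pointers."""
    def __init__(self):
        self.head = None  # first PCB (next to be dispatched)
        self.tail = None  # last PCB (most recently inserted)

    def insert(self, pcb: PCB) -> None:
        """Insert at the tail, in O(1)."""
        pcb.next = None
        if self.tail is None:
            self.head = self.tail = pcb
        else:
            self.tail.next = pcb
            self.tail = pcb

    def delete(self) -> Optional[PCB]:
        """Remove and return the head PCB, in O(1); None if empty."""
        pcb = self.head
        if pcb is not None:
            self.head = pcb.next
            if self.head is None:
                self.tail = None
            pcb.next = None
        return pcb

rq = ReadyQueue()
for pid in (1, 2, 3):
    rq.insert(PCB(pid))
order = [rq.delete().pid for _ in range(3)]  # FIFO order: [1, 2, 3]
```

Both operations are O(1) thanks to the head and tail pointers, which is why the ready queue is usually kept as a linked list of PCB pointers rather than an array of PCBs.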
Process and diagram models (1)
A process is modeled with the following parameters:
- rank: rank in the ready queue
- w0: wakeup time
- C: capacity
- P: priority
- C(t): residual capacity at time t
[Diagram: along the time axis, a process alternates waiting (WT) and computing (C) phases.]
Process and diagram models (2)
Process parameters:
- rank: rank in the ready queue
- w0: wakeup time
- C: capacity
- P: priority
- s: start time (runs for the first time)
- e: end time (termination)
- RT = e - w0: response time
- WT = RT - C: waiting time
e.g. for a process with w0 = 1 (here w0 = s), s = 1, C = 6, e = 12 and P = n/a: RT = 12 - 1 = 11 and WT = 11 - 6 = 5.

Context parameters:
- C(t): residual capacity at t, with C(w0) = C and C(e) = 0
- T(t) = C - C(t): CPU time consumed at t, with T(w0) = 0 and T(e) = C
- E(t) = t - w0: CPU time entitled at t, with E(w0) = 0 and E(e) = RT
- WT(t) = E(t) - T(t): waiting time at t, with WT(w0) = 0 and WT(e) = WT
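The context parameters above can be computed from the intervals during which a process held the CPU; a minimal Python sketch, reproducing the slide's example (w0 = s = 1, C = 6, e = 12, running over [1,4], [6,7] and [10,12]):

```python
def context(w0, C, run_intervals, t):
    """Context parameters of a process at time t, from the definitions above.
    run_intervals: (start, end) intervals during which the process held the CPU."""
    T = sum(min(e, t) - s for s, e in run_intervals if s < t)  # CPU time consumed
    E = t - w0   # CPU time entitled
    Ct = C - T   # residual capacity
    WTt = E - T  # waiting time at t
    return {"C(t)": Ct, "T(t)": T, "E(t)": E, "WT(t)": WTt}

# Example from the slide: the process runs over [1,4], [6,7] and [10,12].
runs = [(1, 4), (6, 7), (10, 12)]
at_end = context(1, 6, runs, 12)
# At e = 12: C(e) = 0, T(e) = C = 6, E(e) = RT = 11, WT(e) = WT = 5
```

The boundary identities of the slide (C(e) = 0, T(e) = C, E(e) = RT, WT(e) = WT) can be checked directly on the returned dictionary.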
Process and diagram models (3)
A CPU burst is the amount of time a process uses the CPU without interruption; bursts are separated by I/O interrupts.
e.g. for the process above (w0 = 1, C = 6) executing over [1,4], [6,7] and [10,12]:
Burst | Position | Duration
t0    | [1, 4]   | 4 - 1 = 3
t1    | [4, 12]  | (7 - 6) + (12 - 10) = 3
(burst t1 occupies the positions [4,12] but executes over [6,7] and [10,12]; in between, the process waits in the ready queue)
C(t=1) = 3 + 3 = 6, C(t=4) = 3.
There is a direct relationship between the durations of the bursts tn to come and the residual capacity C(t) (i.e. any future burst is a fraction of the residual capacity): C(t) = Σn tn, the sum of the durations of the bursts to come (t counted within the same burst).
[Diagram: short-term scheduling loop. From the ready queue, a process is dispatched to the CPU; it leaves on exit, or moves to the blocked queue on an I/O wait; on I/O completion it returns to the ready queue for its next burst.]
Process and diagram models (4)
Process behavior: some processes spend most of their time computing (time-bound, or CPU-bound), while others spend most of their time waiting for I/O (I/O-bound). The key factor is the length of the CPU burst, not the length of the I/O burst, i.e. the I/O-bound processes do not compute much between the I/O requests.
It is worth noting that as a CPU gets faster, processes tend to become I/O-bound. As a consequence, resource scheduling becomes an important issue.
The frequency distribution of CPU burst durations is generally exponential. This law varies from process to process and from computer to computer.
• Process diagram
t0, t1, t2, t3 delimit the allocations: P1 runs over [t0, t1], P2 over [t1, t2], then P1 again over [t2, t3].
• Table (residual capacities per quantum; C(t) stays constant while a process waits)
time or quantum         | 0-1   | 1-2   | 2-3   | 3-4   | 4-5  | 5-6 | 6-7 | 7-8 | 8-9
P1 C(t), current burst  | 7-6   | 6-5   | 5-4   | 4-3   | 3-2  | 2-2 | 2-2 | 2-1 | 1-0
P1 C(t), all bursts     | 14-13 | 13-12 | 12-11 | 11-10 | 10-9 | 9-9 | 9-9 | 9-8 | 8-7
P2 C(t)                 |       |       |       |       | 2-2  | 2-1 | 1-0 |     |
Algorithm | Preemptive | Scheduling criterion | Priority | Predictable capacity | Performance criteria | Taxonomy
First Come First Serve | no | rank in the queue | static | no | Arrival time | Arrival
Fair-Share Scheduling | yes | process priority | dynamic | no | Respecting the priority and enforcing the response time | Priority & time sharing
Multilevel feedback queue scheduling | yes | process priority and queue position | static/dynamic | no | Respecting the priority and enforcing the response time | Priority & time sharing
Scheduling algorithms
“First Come, First Served (FCFS)”
First Come First Served (FCFS): processes are scheduled regarding their positions in the ready queue (1, 2, 3, ...). With an equal arrival date (wakeup time) w0, the process id can be used: P1 > P2 > P3, etc.

Processes | Wakeup (w0) | Capacity (C)
P1 | 0 | 24
P2 | 5 | 3
P3 | 9 | 3
P4 | 9 | 3

Gantt: P1 [0,24], P2 [24,27], P3 [27,30], P4 [30,33]
P2, P3 and P4 arrive at t=5 and t=9 while P1 is scheduled; in a non-preemptive policy, P1 terminates first. With a similar wakeup time w0 for P3 and P4, we use the process id for scheduling: P3 > P4.
Scheduling algorithms
“Priority Scheduling (PS)” (1)
Priority Scheduling (PS): when a process is finished, we shift to the ready process with the highest priority (i.e. the lowest P value).

Processes | Wakeup (w0) | Capacity (C) | Priority (P)
P1 | 0 | 6 | 3
P2 | 1 | 1 | 1
P3 | 2 | 2 | 4
P4 | 3 | 1 | 5
P5 | 4 | 6 | 2

Gantt: P1 [0,6], P2 [6,7], P5 [7,13], P3 [13,15], P4 [15,16]
P1, alone in the ready queue, arrives at t=0. At t=6, P2, P3, P4 and P5 are in the ready queue; the scheduling goes on in the priority order P2 > P5 > P3 > P4.
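The non-preemptive variant can be sketched as follows (a minimal Python simulation of the example above, with our own data layout):

```python
def priority_schedule(processes):
    """Non-preemptive priority scheduling: at each completion, pick the ready
    process with the lowest P value. processes: id -> (w0, C, P)."""
    t = 0
    done = {}
    remaining = dict(processes)
    while remaining:
        ready = {pid: v for pid, v in remaining.items() if v[0] <= t}
        if not ready:  # CPU idles until the next wakeup
            t = min(v[0] for v in remaining.values())
            continue
        pid = min(ready, key=lambda p: ready[p][2])  # lowest P = highest priority
        w0, C, P = remaining.pop(pid)
        done[pid] = (t, t + C)
        t += C
    return done

procs = {"P1": (0, 6, 3), "P2": (1, 1, 1), "P3": (2, 2, 4),
         "P4": (3, 1, 5), "P5": (4, 6, 2)}
gantt = priority_schedule(procs)
# {'P1': (0, 6), 'P2': (6, 7), 'P5': (7, 13), 'P3': (13, 15), 'P4': (15, 16)}
```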
Scheduling algorithms
“Priority Scheduling (PS)” (2)
Priority Scheduling (PS), the preemptive case: at any time, we look for the process with the highest priority (i.e. the lowest P value).

Processes | Wakeup (w0) | Capacity (C) | Priority (P)
P1 | 0 | 6 | 3
P2 | 1 | 1 | 1
P3 | 2 | 2 | 4
P4 | 3 | 1 | 5
P5 | 4 | 6 | 2

Gantt: P1 [0,1], P2 [1,2], P1 [2,4], P5 [4,10], P1 [10,13], P3 [13,15], P4 [15,16]

t or q   | 0-1 | 1-2 | 2-3 | 3-4 | 4-5 | 5-6 | 6-7 | 7-8 | 8-9 | 9-10 | 10-11 | 11-12 | 12-13 | 13-14 | 14-15 | 15-16
P1 C(t)  | 6-5 | 5-5 | 5-4 | 4-3 | 3-3 | 3-3 | 3-3 | 3-3 | 3-3 | 3-3  | 3-2   | 2-1   | 1-0   |       |       |
P2 C(t)  |     | 1-0 |     |     |     |     |     |     |     |      |       |       |       |       |       |
P3 C(t)  |     |     | 2-2 | 2-2 | 2-2 | 2-2 | 2-2 | 2-2 | 2-2 | 2-2  | 2-2   | 2-2   | 2-2   | 2-1   | 1-0   |
P4 C(t)  |     |     |     | 1-1 | 1-1 | 1-1 | 1-1 | 1-1 | 1-1 | 1-1  | 1-1   | 1-1   | 1-1   | 1-1   | 1-1   | 1-0
P5 C(t)  |     |     |     |     | 6-5 | 5-4 | 4-3 | 3-2 | 2-1 | 1-0  |       |       |       |       |       |
Scheduling algorithms
“Dynamic Priority Scheduling (DPS)”
Dynamic Priority Scheduling (DPS) works with a dynamic priority P(t) and is a preemptive algorithm:
1. a process starts with P(t=w0) = P, its initial priority value
2. when a process is running, P(t) is constant
3. when a process is waiting, P(t+1) = P(t) + 1
4. at any time, the process with the highest P(t) takes the CPU
5. if Pi(t) = Pj(t) for two processes i, j, we look at Pi(w0), Pj(w0)
6. when a process recovers the CPU at tn, we reset P(tn) = P(w0) = P

Processes | Wakeup (w0) | Priority (P)
P1 | 0 | 1
P2 | 0 | 3
P3 | 0 | 5

t or q   | 0-1 | 1-2 | 2-3 | 3-4 | 4-5 | 5-6 | 6-7 | 7-8 | 8-9 | 9-10 | 10-11 | 11-12 | 12-13 | 13-14 | 14-15 | 15-16
P1 P(t)  | 1-2 | 2-3 | 3-4 | 4-5 | 5-6 | 1-1 | 1-2 | 2-3 | 3-4 | 4-5  | 5-6   | 6-7   | 1-1   | 1-2   | 2-3   | 3-4
P2 P(t)  | 3-4 | 4-5 | 5-6 | 3-3 | 3-4 | 4-5 | 5-6 | 3-3 | 3-4 | 4-5  | 5-6   | 3-3   | 3-4   | 4-5   | 5-6   | 3-3
P3 P(t)  | 5-5 | 5-5 | 5-5 | 5-6 | 5-5 | 5-6 | 5-5 | 5-6 | 5-5 | 5-5  | 5-5   | 5-6   | 6-7   | 5-5   | 5-5   | 5-6

Notes: while P3 is running, its P(t) is constant; when a process recovers the CPU (a context switch), we reset P(t). In the equivalence cases (e.g. P2(t) = P3(t) = 5 at t = 2), we look at P(w0).
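Rules 1-6 can be sketched as a small Python simulation (an illustration of the table above; we assume, as in the table, that a tie on P(t) is broken in favor of the larger initial priority):

```python
def dps(initial, horizon):
    """Dynamic priority scheduling sketch: waiting processes age by +1 per
    quantum, the running process keeps P(t) constant, the highest P(t) gets
    the CPU (ties broken by the initial priority), and a process recovering
    the CPU resets P(t) to its initial value.
    initial: id -> initial priority (all awake at t=0, infinite capacity)."""
    P = dict(initial)             # current P(t)
    trace, running = [], None
    for _ in range(horizon):
        # highest P(t) wins; on a tie, the highest initial priority wins
        nxt = max(P, key=lambda p: (P[p], initial[p]))
        if nxt != running:        # context switch: reset on recovering the CPU
            P[nxt] = initial[nxt]
            running = nxt
        for p in P:               # aging of the waiting processes
            if p != running:
                P[p] += 1
        trace.append(running)
    return trace

# P1, P2, P3 with initial priorities 1, 3, 5, as in the example
trace6 = dps({"P1": 1, "P2": 3, "P3": 5}, 6)
# ['P3', 'P3', 'P3', 'P2', 'P3', 'P1'] -- matching the first six quanta of the table
```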
Scheduling algorithms
“Highest Response Ratio Next (HRRN)” (1)
For each process, we would like to minimize a normalized turnaround time defined as
R_i(t) = (WT_i(t) + C_i) / C_i = WT_i(t)/C_i + 1
with WT_i(t) the waiting time of process i at t and C_i its capacity. Let's note that R_i(t) >= 1.
Considering a non-preemptive scheduling, we have T(t) = 0 at t < s, then WT(t) = E(t) - (T(t) = 0) = E(t) = t - w0; R(t) is then
R_i(t) = ((t - w0_i) + C_i) / C_i = (t - w0_i)/C_i + 1
The scheduling is non-preemptive and looks for the highest R(t) value at any context switch.
The idea behind this method is to keep the mean response ratio low, so if a job has a high response ratio, it should be run at once to reduce the mean.
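The selection rule above can be sketched as a non-preemptive simulation in Python (an illustration with our own data layout; it reproduces the example of the next slide):

```python
def hrrn(processes):
    """Non-preemptive HRRN: at each completion, run the ready process with the
    highest response ratio R(t) = ((t - w0) + C) / C. processes: id -> (w0, C).
    Returns a list of (id, start, end)."""
    t, order, remaining = 0, [], dict(processes)
    while remaining:
        ready = {p: v for p, v in remaining.items() if v[0] <= t}
        if not ready:  # CPU idles until the next wakeup
            t = min(v[0] for v in remaining.values())
            continue
        pid = max(ready, key=lambda p: ((t - ready[p][0]) + ready[p][1]) / ready[p][1])
        w0, C = remaining.pop(pid)
        order.append((pid, t, t + C))
        t += C
    return order

procs = {"P1": (0, 3), "P2": (2, 6), "P3": (4, 4), "P4": (6, 5), "P5": (8, 2)}
sched = hrrn(procs)
# [('P1', 0, 3), ('P2', 3, 9), ('P3', 9, 13), ('P5', 13, 15), ('P4', 15, 20)]
```

At t = 13, for instance, R_P5 = (5+2)/2 = 3.5 beats R_P4 = (7+5)/5 = 2.4, so the short job P5 overtakes P4.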
Scheduling algorithms
“Highest Response Ratio Next (HRRN)” (2)
For each process, we would like to minimize a normalized turnaround time defined as
R_i(t) = (WT_i(t) + C_i)/C_i = ((t - w0_i) + C_i)/C_i

Processes | Wakeup (w0) | Capacity (C)
P1 | 0 | 3
P2 | 2 | 6
P3 | 4 | 4
P4 | 6 | 5
P5 | 8 | 2

Gantt: P1 [0,3], P2 [3,9], P3 [9,13], P5 [13,15], P4 [15,20]

e.g. a trace (C(t) per quantum) under the preemptive Shortest Remaining Time (SRT) policy, i.e. preemptive SJF, for another process set:
t or q  | 0-1 | 1-2 | 2-3 | 3-4 | 4-5 | 5-6 | 6-7 | 7-8 | 8-9 | 9-10 | 10-11 | 11-12 | 12-13 | 13-14 | 14-15 | 15-16
P1 C(t) | 7-6 | 6-5 | 5-5 | 5-5 | 5-5 | 5-5 | 5-5 | 5-5 | 5-5 | 5-5  | 5-5   | 5-4   | 4-3   | 3-2   | 2-1   | 1-0
P2 C(t) |     |     | 4-3 | 3-2 | 2-2 | 2-1 | 1-0 |     |     |      |       |       |       |       |       |
P3 C(t) |     |     |     |     | 1-0 |     |     |     |     |      |       |       |       |       |       |
P4 C(t) |     |     |     |     |     | 4-4 | 4-4 | 4-3 | 3-2 | 2-1  | 1-0   |       |       |       |       |
Scheduling algorithms
“Time prediction” (1)
One difficulty with the SJF algorithm is the need to know the required residual capacity. When the system cannot guarantee predictability, we can use time prediction.
For the I/O-bound processes, the OS may keep a CPU burst average Tn for each of the processes. This criterion interpolates a fraction 1/n of the CPU time consumed (and thus the residual capacity C(t)).
The simplest calculation for T_{n+1} would be the following:
T_{n+1} = (1/n) * sum_{i=1..n} t_i
To avoid recalculating the entire summation each time, we can rewrite the previous equation as:
T_{n+1} = (1/n) * t_n + ((n-1)/n) * T_n
Scheduling algorithms
“Time prediction” (2)
A common technique for predicting a future value on the basis of a time series is exponential averaging:
T_{n+1} = alpha * t_n + (1 - alpha) * T_n
with:
- T_{n+1}: the prediction of the next CPU burst n+1
- T_n: the time prediction of the current CPU burst n
- t_n: the time value of the current CPU burst n
- alpha: controls the relative weight (0-1) between the last observation and the previous prediction
For the first execution (i.e. at w0), T0 is chosen as a constant (e.g. the overall system average).

e.g. with T0 = 10:
t_i   | T_i (alpha=0.1) | T_i (alpha=0.5) | T_i (alpha=0.9)
6.00  | 10.00           | 10.00           | 10.00
4.00  | 9.60            | 8.00            | 6.40
6.00  | 9.04            | 6.00            | 4.24
4.00  | 8.74            | 6.00            | 5.82
13.00 | 8.26            | 5.00            | 4.18
13.00 | 8.74            | 9.00            | 12.12
13.00 | 9.16            | 11.00           | 12.91
13.00 | 9.55            | 12.00           | 12.99
Scheduling algorithms
“Time prediction” (4)
e.g. Time prediction with the SRT algorithm (preemptive SJF).
i. We consider the case of two processes A and B, with the observed CPU bursts and I/O completion events below, over a time interval [t0, t0+T]. At t0, A and B are in the blocked queue.
ii. We have T0 = 5 and alpha = 0.4 as parameters.
iii. We assume that at any I/O completion event A and B are concurrent for the CPU access (i.e. when B is released A is scheduled, and vice versa).

Observed bursts and predictions (alpha = 0.4):
Process A: t_i = 4, 5, 3 with T_i = 5.00, 4.60, 4.76
Process B: t_i = 3, 6, 4 with T_i = 5.00, 4.20, 4.92
I/O completion events: 1: A; 2: B; 3: A; 4: A and B; 5: B.

- At event 1, A is released and alone in the ready queue, then scheduled with T_A = 5.
- At event 2, B is released while A is scheduled (iii); with T_B = T_A = 5, B waits in the ready queue. When A ends, B shifts from the ready queue to the CPU, then ends and returns to the blocked queue.
- At event 3, A is released while B is scheduled (iii); with T_A = 4.6 below B's remaining predicted time, B is preempted.
- At event 4, two releases occur at the same time; with T_B = 4.2 < T_A = 4.6, B is scheduled and A waits in the queue.
- At event 5, when A ends, B is scheduled.
Scheduling algorithms
“Time prediction” (5)
With the same setup, the resulting allocation of the CPU over [t0, t0+T] is the sequence
A (4), B (3), A (5), B (6), A (3), B (4)
i.e. the processes alternate on the CPU with their observed burst durations.
Scheduling algorithms
“Guaranteed Scheduling (GS)” (1)
With n processes running, all things being equal, each one should get 1/n of the CPU utilization. For a process i, the scheduling algorithm compares the CPU time actually consumed to the CPU time entitled.
Scheduling algorithms
“Guaranteed Scheduling (GS)” (2)
With n processes running, all things being equal, each one should get 1/n of the CPU utilization. The CPU time consumed is normalized with the CPU time entitled:
R_i(t) = T_i(t) / ((t - w_i) / n)
and the lowest value has the highest priority.

Processes | Wakeup (w0) | Capacity (C)
P1 | 0 | ∞
P2 | 2 | ∞
P3 | 4 | ∞

e.g. execution trace (residual capacities over the intervals 0-20, 20-37, 37-57, 57-77, 77-97, 97-117, 117-121, 121-134, 134-154, 154-162):
P1 C(t): 53-33, 33-33, 33-33, 33-33, 33-13, 13-13, 13-13, 13-0
P2 C(t): 17-17, 17-0
P3 C(t): 68-68, 68-68, 68-48, 48-48, 48-48, 48-28, 28-28, 28-28, 28-8, 8-0
P4 C(t): 24-24, 24-24, 24-24, 24-4, 4-4, 4-4, 4-0
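The selection criterion can be sketched in a few lines of Python (the snapshot values in the example below are hypothetical, chosen only to illustrate the ratio):

```python
def gs_pick(t, processes):
    """Guaranteed scheduling: pick the process with the lowest ratio
    R_i(t) = T_i(t) / ((t - w_i) / n), i.e. consumed over entitled CPU time.
    processes: id -> (w_i, T_i(t)) for the processes present at time t."""
    n = len(processes)
    def ratio(pid):
        w, T = processes[pid]
        entitled = (t - w) / n
        return T / entitled if entitled > 0 else 0.0  # a just-woken process has priority
    return min(processes, key=ratio)

# Hypothetical snapshot at t = 60, three processes woken at t = 0:
# P1 has consumed 40, P2 has consumed 15, P3 has consumed 5 time units.
pick = gs_pick(60, {"P1": (0, 40), "P2": (0, 15), "P3": (0, 5)})
# Each process is entitled 60/3 = 20; the ratios are 2.0, 0.75 and 0.25,
# so P3 (the most starved process) gets the CPU next.
```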
Scheduling algorithms
“Fair-Share Scheduling (FSS)” (1)
Applications may be organized with multiple processes. The FSS scheduling algorithm allocates a fraction of the processor resources to each group. A hybrid scheduling, mixing the round robin and priority scheduling (using a base priority, an exponential iterative reduction rule and a group weighting), assures a fair share of the CPU for each process.
The scheduler applies a round robin and looks for the minimization of the criterion Pj(i) at each round:
(1) if, for all j, the Pj(i) are equal (two or more minima appear), we apply a selection based on the round robin, e.g. P1 > P2 > P3;
(2) whenever, for some j, a standalone minimum Pj(i) exists, the corresponding process is selected.
Scheduling algorithms
“Fair-Share Scheduling (FSS)” (2)
The scheduler applies a round robin and looks for the minimization of the criterion Pj(i) at each round:
P_j(i) = Base_j + CPU_j(i)/2 + GCPU_k(i)/(4*w_k)
with CPU_j(i) = CPU_j(i-1)/2 and GCPU_k(i) = GCPU_k(i-1)/2
(CPU_j is the CPU counter of process j, GCPU_k the counter of its group k, and w_k the group weight; the halvings use integer division in the example.)

Processes | Wakeup (w0) | Priority (Base) | Capacity (C) | Group
P1 | 0 | 60 | ∞ | 1
P2 | 0 | 60 | ∞ | 2
P3 | 0 | 60 | ∞ | 2
e.g. w1 = w2 = 0.5 and m = 60

t or q     | 00-60       | 60-120       | 120-180      | 180-240      | 240-300      | 300-360
P1 CPU(t)  | 0-60        | 30           | 15-75        | 37           | 18-78        | 39
P1 GCPU(t) | 0-60        | 30           | 15-75        | 37           | 18-78        | 39
P1 P(t)    | 60 (60+0+0) | 90 (60+15+15)| 74 (60+7+7)  | 96 (60+18+18)| 78 (60+9+9)  | 98 (60+19+19)
P2 CPU(t)  | 0           | 0-60         | 30           | 15           | 7            | 3-63
P2 GCPU(t) | 0           | 0-60         | 30           | 15-75        | 37           | 18-78
P2 P(t)    | 60 (60+0+0) | 60 (60+0+0)  | 90 (60+15+15)| 74 (60+7+7)  | 81 (60+3+18) | 70 (60+1+9)
P3 CPU(t)  | 0           | 0            | 0            | 0-60         | 30           | 15
P3 GCPU(t) | 0           | 0-60         | 30           | 15-75        | 37           | 18-78
P3 P(t)    | 60 (60+0+0) | 60 (60+0+0)  | 75 (60+0+15) | 67 (60+0+7)  | 93 (60+15+18)| 76 (60+7+9)
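One FSS measurement interval can be sketched as follows (a minimal Python illustration of the formulas above; we assume, as the table suggests, that the halvings use integer division):

```python
def fss_round(base, w, cpu_prev, gcpu_prev, ran, group_ran):
    """One FSS measurement interval for a process j in group k:
    CPU_j(i) = CPU_j(i-1)/2, GCPU_k(i) = GCPU_k(i-1)/2, then
    P_j(i) = Base_j + CPU_j(i)/2 + GCPU_k(i)/(4*w_k).
    `ran` / `group_ran`: CPU time accumulated over the last interval."""
    cpu = (cpu_prev + ran) // 2        # decayed CPU counter of the process
    gcpu = (gcpu_prev + group_ran) // 2  # decayed CPU counter of its group
    prio = base + cpu // 2 + int(gcpu // (4 * w))
    return cpu, gcpu, prio

# P1 (group 1, w = 0.5, Base = 60) after the first interval, in which it ran
# for the whole 60 units: counters decay to 30 and P becomes 60 + 15 + 15 = 90.
round1 = fss_round(60, 0.5, 0, 0, 60, 60)
# Second interval: P1 does not run, the counters keep decaying: P = 60 + 7 + 7 = 74.
round2 = fss_round(60, 0.5, 30, 30, 0, 0)
```

The exponential decay means a process that has just consumed the CPU sees its priority value rise (and thus its precedence drop), while idle processes and idle groups drift back toward the base priority.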
Scheduling algorithms
“Multilevel feedback queue scheduling (MLFQ)” (2)
Rule: a process at a priority level i can preempt any process at a priority level > i.
Three ready queues RQ0, RQ1, RQ2 are used, with a quantum growing with the depth of the queue: q(RQi) = 2^i, i.e. 1 for RQ0, 2 for RQ1 and 4 for RQ2 (the last queue is served round robin). A new process enters RQ0; each time it consumes the full quantum of its level, it is demoted to the next queue.

Processes | Wakeup (w0) | Capacity (C)
P1 | 0 | 3
P2 | 2 | 6
P3 | 4 | 4
P4 | 6 | 5
P5 | 8 | 2

Resulting schedule (one entry per quantum of one time unit):
q0: P1 (RQ0), q1: P1 (RQ1), q2: P2 (RQ0), q3: P1 (RQ1, P1 ends at t=4), q4: P3 (RQ0), q5: P2 (RQ1), q6: P4 (RQ0), q7: P2 (RQ1, demoted to RQ2), q8: P5 (RQ0), q9-10: P3 (RQ1, demoted), q11-12: P4 (RQ1, demoted), q13: P5 (RQ1, P5 ends at t=14), q14-16: P2 (RQ2, ends at t=17), q17: P3 (RQ2, ends at t=18), q18-19: P4 (RQ2, ends at t=20).
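The queue/demotion mechanism can be sketched as follows (a simplified Python model: quanta 1, 2, 4 as above, demotion after a full quantum, no preemption modeled within a quantum):

```python
from collections import deque

def mlfq(processes, quanta=(1, 2, 4)):
    """Multilevel feedback queue sketch: a new process enters RQ0; each time it
    consumes the full quantum of its queue it is demoted one level (the last
    queue is served round robin). processes: id -> (w0, C).
    Returns id -> completion time."""
    queues = [deque() for _ in quanta]
    arrivals = sorted(processes.items(), key=lambda kv: (kv[1][0], kv[0]))
    remaining = {pid: C for pid, (w0, C) in processes.items()}
    t, done, i = 0, {}, 0
    while len(done) < len(processes):
        while i < len(arrivals) and arrivals[i][1][0] <= t:
            queues[0].append(arrivals[i][0]); i += 1
        level = next((l for l, q in enumerate(queues) if q), None)
        if level is None:          # CPU idles until the next arrival
            t = arrivals[i][1][0]; continue
        pid = queues[level].popleft()
        run = min(quanta[level], remaining[pid])
        t += run
        remaining[pid] -= run
        while i < len(arrivals) and arrivals[i][1][0] <= t:
            queues[0].append(arrivals[i][0]); i += 1
        if remaining[pid] == 0:
            done[pid] = t
        else:                      # demotion after a full quantum
            queues[min(level + 1, len(queues) - 1)].append(pid)
    return done

# A single process of capacity 5 runs 1 unit in RQ0, 2 in RQ1, then 2 of the 4
# allowed in RQ2, completing at t = 5 after two demotions.
```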
Modeling multiprogramming
Modeling multiprogramming: from a probabilistic point of view, suppose that a process spends a fraction p of its time waiting for I/O to complete. With n independent processes in memory, the probability that all of them wait for I/O simultaneously is p1 * p2 * ... * pn, so the CPU utilization is 1 - (p1 * p2 * ... * pn).
e.g. P1 (80%), P2 (60%), P3 (40%), P4 (60%): CPU utilization = 1 - 0.8 * 0.6 * 0.4 * 0.6 = 1 - 0.1152 = 0.8848
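The model reduces to a product; a one-line Python sketch of the example above:

```python
from math import prod

def cpu_utilization(io_fractions):
    """CPU utilization with n processes in memory: the CPU idles only when all
    processes wait for I/O at once, so utilization = 1 - p1*p2*...*pn
    (assuming the I/O waits are independent)."""
    return 1 - prod(io_fractions)

u = cpu_utilization([0.8, 0.6, 0.4, 0.6])  # 1 - 0.1152 = 0.8848
```

Adding a process to memory multiplies the idle probability by its I/O fraction, which is the probabilistic argument for multiprogramming: utilization grows quickly with the degree of multiprogramming as long as memory holds.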
Evaluation of algorithms
Simulation aims to handle a model of the OS for evaluation (scheduling algorithm, processes, etc.). The simulator has a variable representing a clock; when it increases, the simulator modifies the state of the system.
The data to drive the simulation can be generated in two main ways:
- synthetic data produced with a random number generator;
- trace tapes recorded by monitoring a real system.

e.g. process workloads and resulting statistics:
Random number generator, P (r0, C): P1 (0, 10), P2 (5, 20), P3 (3, 3), P4 (7, 7), P5 (9, 12)
Recording a real system, P (r0, bursts): P1 (0, {7,3}), P2 (5, {1,8,7,4}), P3 (3, {2,1}), P4 (7, {2,1,2,2}), P5 (9, {8,1,3})
Statistics (WT): FCFS 28, SJF 13, RR 23

As the simulation reflects a real system, statistics about the algorithms' performance can be computed. However, simulation may require hours of computation and a huge amount of data. In addition, the design, coding and debugging of a simulator can be a major task.
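The first approach (synthetic data from a random number generator) can be sketched as a small driver comparing two policies; the workload parameters below are illustrative, not the course's, and for simplicity all processes wake up at t = 0 (the setting in which SJF provably minimizes the mean waiting time):

```python
import random

def mean_wt(capacities, order):
    """Mean waiting time for a batch of processes all woken at t = 0,
    run non-preemptively in the given order."""
    t, total = 0, 0
    for pid in order:
        total += t          # time spent waiting before starting
        t += capacities[pid]
    return total / len(order)

def simulate(n_procs=5, n_runs=100, seed=42):
    """Drive the simulation with synthetic data from a random number generator
    and compare FCFS (arrival order) against SJF (shortest capacity first)."""
    rng = random.Random(seed)
    stats = {"FCFS": 0.0, "SJF": 0.0}
    for _ in range(n_runs):
        caps = {i: rng.randint(1, 20) for i in range(n_procs)}
        stats["FCFS"] += mean_wt(caps, list(caps))
        stats["SJF"] += mean_wt(caps, sorted(caps, key=caps.get))
    return {k: v / n_runs for k, v in stats.items()}

stats = simulate()
# With simultaneous arrivals, running the shortest jobs first can only reduce
# the mean waiting time, so stats["SJF"] <= stats["FCFS"] on every workload.
```

Replacing the random capacities by recorded burst lists would give the trace-tape variant of the same driver.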