
Real Time System – Lecture 7
Rekha A., BITS Pilani, Pilani Campus

Priority Driven Schedulers
LRT (Latest Release Time) Algorithm
- The schedule is built from right to left (in reverse chronological order).
- The algorithm treats release times as deadlines and deadlines as release times, and schedules the jobs backward from the latest deadline in a priority-driven manner.

Example (execution times are given before the feasible intervals):
J1: 3, (0, 6]    J2: 2, (5, 8]    J3: 2, (2, 7]

- The latest deadline is 8, so the schedule is built backward from time 8.
- J2 must start at time 6 so that it completes at time 8 (= 6 + execution time 2).
- Going backward, at time 7 J3 becomes ready (its deadline is 7), so it is scheduled after J2: it must start at time 4 so that it completes at time 6 (= 4 + execution time 2).
- At time 6, J1 becomes ready (its deadline is 6), so it is scheduled after J3: it must start at time 1 so that it completes at time 4 (= 1 + execution time 3).

Resulting schedule: idle (0, 1], J1 (1, 4], J3 (4, 6], J2 (6, 8]
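A minimal sketch of the LRT idea for the three jobs above, assuming integer time units; the function name lrt_schedule and the job table are illustrative, not taken from the lecture.

# LRT: build the schedule backward; in reverse time a job becomes "ready"
# once the clock is at or before its deadline, and the ready job with the
# latest (original) release time is picked first.
def lrt_schedule(jobs, horizon):
    """jobs: name -> (execution time, release time, deadline)."""
    remaining = {n: e for n, (e, r, d) in jobs.items()}
    slots = {}                                    # slot t covers (t, t+1]
    for t in range(horizon - 1, -1, -1):          # right to left
        ready = [n for n, (e, r, d) in jobs.items()
                 if remaining[n] > 0 and d >= t + 1]
        if not ready:
            continue
        job = max(ready, key=lambda n: jobs[n][1])    # latest release time first
        assert jobs[job][1] <= t                  # never run a job before its release
        slots[t] = job
        remaining[job] -= 1
    return slots

jobs = {"J1": (3, 0, 6), "J2": (2, 5, 8), "J3": (2, 2, 7)}
print(sorted(lrt_schedule(jobs, 8).items()))
# -> [(1, 'J1'), (2, 'J1'), (3, 'J1'), (4, 'J3'), (5, 'J3'), (6, 'J2'), (7, 'J2')]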
Least Laxity First (LLF)
The LL strategy is another run-time optimal scheduler.
It is also known as Minimum-Laxity-First (MLF) or Least-
Slack-Time-First (LST) algorithm.
Let c(i) denote the remaining computation time of a task at
time i.
Let d(i) denote the deadline of a task relative to the current
time i.
The laxity is the maximum time a task can delay execution
without missing its deadline in the future; at time i it equals d(i) - c(i).
The LL scheduler executes at every instant the ready task
with the smallest laxity.
LST (Least-Slack-Time-First) Algorithm
- A dynamic-priority algorithm at both the task level and the job level
- The job with the smallest slack has the highest priority
- At time t, the slack of a job whose remaining execution time is x and whose deadline is d is: slack = d - t - x

Example: Consider two tasks T1 = (2, 0.9) and T2 = (5, 2.3).
T1 releases jobs J1,1, J1,2, ... at times 0, 2, 4, 6, 8, 10; T2 releases jobs J2,1, J2,2, J2,3 at times 0, 5, 10.

Slacks of the ready jobs at the scheduling decision instants, and the decisions taken:
t = 0:   J1,1: 1.1,  J2,1: 2.7   ->  J1,1 scheduled
t = 2:   J1,2: 1.1,  J2,1: 1.8   ->  J1,2 scheduled
t = 4:   J1,3: 1.1,  J2,1: 0.9   ->  J2,1 scheduled (J1,3 runs after J2,1 completes at 4.1)
t = 5:   J2,2 is the only ready job  ->  J2,2 scheduled
t = 6:   J1,4: 1.1,  J2,2: 2.7   ->  J1,4 scheduled
t = 8:   J1,5: 1.1,  J2,2: 1.8   ->  J1,5 scheduled
t = 10:  J1,6: 1.1,  J2,3: 2.7   ->  J1,6 scheduled
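A sketch of a preemptive least-slack-time simulation of the two tasks above. As in the figure, priorities are re-evaluated only at job release and completion instants, and exact arithmetic (Fraction) is used because the execution times 0.9 and 2.3 are fractional; the names Job, simulate_lst and the task list are illustrative assumptions.

from dataclasses import dataclass
from fractions import Fraction as F

@dataclass
class Job:
    name: str
    deadline: F
    remaining: F

def simulate_lst(tasks, horizon):
    """tasks: list of (name, period, execution time); relative deadline = period."""
    releases = sorted((F(k * p), f"{name},{k + 1}", F(p), F(e))
                      for name, p, e in tasks
                      for k in range(int(horizon / p) + 1) if k * p < horizon)
    ready, schedule, t = [], [], F(0)
    while t < horizon:
        while releases and releases[0][0] <= t:          # admit released jobs
            rt, jname, period, execution = releases.pop(0)
            ready.append(Job(jname, rt + period, execution))
        if not ready:                                    # idle until the next release
            t = releases[0][0] if releases else F(horizon)
            continue
        job = min(ready, key=lambda j: j.deadline - t - j.remaining)   # least slack
        # run the chosen job until it finishes, the next release, or the horizon
        candidates = [t + job.remaining, F(horizon)]
        if releases:
            candidates.append(releases[0][0])
        next_event = min(candidates)
        schedule.append((t, next_event, job.name))
        job.remaining -= next_event - t
        if job.remaining == 0:
            ready.remove(job)
        t = next_event
    return schedule

tasks = [("J1", 2, F(9, 10)), ("J2", 5, F(23, 10))]      # T1 = (2, 0.9), T2 = (5, 2.3)
for start, end, name in simulate_lst(tasks, 12):
    print(f"{name} runs in ({float(start)}, {float(end)}]")
# the choices at t = 0, 2, 4, 6, 8, 10 match the slack table above

Replacing the key function j.deadline - t - j.remaining with just j.deadline turns the same sketch into an EDF simulator (compare with the EDF slides that follow).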
LST (Least Slack Time First) Algorithm
- Also known as the Minimum-Laxity-First (MLF) algorithm.
- The laxity (slack) of a job is the time remaining until its deadline minus its remaining computation time: slack = d - t - remaining execution time.
- The smaller the slack, the higher the priority.
- While a job executes its slack stays the same; it changes (decreases) only while the job waits, for example after being preempted.

Example (execution times are given before the feasible intervals):
J1: 3, (0, 6]    J2: 2, (5, 8]    J3: 3, (2, 8]

- t = 0: J1 is released; it is the only job in the system, so it is scheduled.
- t = 2: J3 is released.
      Slack of J1 = 6 - 2 - (3 - 2) = 3
      Slack of J3 = 8 - 2 - 3 = 3
  Both have the same priority; let J1 continue.
- t = 3: J1 completes; J3 starts.
- t = 5: J2 is released.
      Slack of J2 = 8 - 5 - 2 = 1
      Slack of J3 = 8 - 5 - (3 - 2) = 2
  J2 has the higher priority, so it preempts J3.
- t = 7: J2 completes; J3 is scheduled again. Slack of J3 = 8 - 7 - (3 - 2) = 0.
- t = 8: J3 completes.

Resulting schedule: J1 (0, 3], J3 (3, 5], J2 (5, 7], J3 (7, 8]
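A quick, illustrative check of the slack values used at the decision instants above (the helper slack simply encodes d - t - remaining):

def slack(d, t, remaining):
    return d - t - remaining

print(slack(6, 2, 1), slack(8, 2, 3))    # t = 2: J1 -> 3, J3 -> 3 (tie, J1 continues)
print(slack(8, 5, 2), slack(8, 5, 1))    # t = 5: J2 -> 1, J3 -> 2 (J2 preempts J3)
print(slack(8, 7, 1))                    # t = 7: J3 -> 0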
EDF (Earliest Deadline First) Algorithm
- Dynamic-priority algorithm
- The job with the earliest deadline has the highest priority

Example: Consider two tasks T1 = (2, 0.9) and T2 = (5, 2.3).
T1 releases jobs J1,1, J1,2, ... at times 0, 2, 4, 6, 8, 10; T2 releases jobs J2,1, J2,2, J2,3 at times 0, 5, 10.

Absolute deadlines of the ready jobs at the scheduling decision instants, and the decisions taken:
t = 0:   J1,1: 2,   J2,1: 5    ->  J1,1 scheduled
t = 2:   J1,2: 4,   J2,1: 5    ->  J1,2 scheduled
t = 4:   J1,3: 6,   J2,1: 5    ->  J2,1 scheduled
t = 5:   J2,2 is the only ready job  ->  J2,2 scheduled
t = 6:   J1,4: 8,   J2,2: 10   ->  J1,4 scheduled
t = 8:   J1,5: 10,  J2,2: 10   ->  J1,5 scheduled
t = 10:  J1,6: 12,  J2,3: 15   ->  J1,6 scheduled
EDF (Earliest Deadline First) Algorithm
- The job with the earliest (absolute) deadline has the highest priority (the job closest to its deadline is scheduled first).
- Priorities are assigned dynamically, since the absolute deadlines of the jobs vary; hence EDF is a dynamic-priority algorithm.

Example (execution times are given before the feasible intervals):
J1: 3, (0, 6]    J2: 2, (5, 8]    J3: 3, (2, 8]

- t = 0: J1 is released; it is the only job in the system, so it is scheduled.
- t = 2: J3 is released. Deadline of J1 = 6, deadline of J3 = 8, so J1 has the higher priority and continues.
- t = 3: J1 completes; J3 starts.
- t = 5: J2 is released. Deadline of J2 = 8, deadline of J3 = 8, so both have the same priority; let J3 continue.
- t = 6: J3 completes; J2 is scheduled.
- t = 8: J2 completes.

Resulting schedule: J1 (0, 3], J3 (3, 6], J2 (6, 8]
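A minimal sketch of a preemptive EDF scheduler for the aperiodic jobs in this example, assuming integer time units; the function name edf_schedule and the job table are illustrative. On a deadline tie the currently running job is kept, as in the example.

def edf_schedule(jobs, horizon):
    """jobs: name -> (execution time, release time, absolute deadline)."""
    remaining = {n: e for n, (e, r, d) in jobs.items()}
    timeline, current = [], None
    for t in range(horizon):
        ready = [n for n, (e, r, d) in jobs.items() if r <= t and remaining[n] > 0]
        if not ready:
            timeline.append(None)
            current = None
            continue
        # earliest absolute deadline first; on a tie, keep the running job
        current = min(ready, key=lambda n: (jobs[n][2], n != current))
        remaining[current] -= 1
        timeline.append(current)
    return timeline

jobs = {"J1": (3, 0, 6), "J2": (2, 5, 8), "J3": (3, 2, 8)}
print(edf_schedule(jobs, 8))
# -> ['J1', 'J1', 'J1', 'J3', 'J3', 'J3', 'J2', 'J2']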
Exercise

[Precedence graph figure. The jobs and their feasible intervals:
J1 (0, 10], J2 (1, 4], J3 (0, 5], J4 (1, 6], J5 (3, 9], J6 (2, 10], J7 (1, 12], J8 (1, 12], J9 (1, 12]]
The feasible interval of each job in the precedence graph is given next to its name. The execution time of every job is equal to 1.
a) Find the effective release times and deadlines of the jobs.
b) Construct an EDF schedule of the jobs.
Exercise – Answer (a): Effective Release Times

- J1, J4 and J5 have no predecessors, so their effective release times are the same as their own release times.
- Then traverse the graph from left to right.
- Effective release time of every other job = max(own release time, effective release times of its predecessors):
      Effective release time (J2) = max(0, 1, 1) = 1
      Effective release time (J7) = max(1, 1) = 1
      Effective release time (J3) = max(0, 1, 3) = 3
      Effective release time (J6) = max(2, 3, 3) = 3
      Effective release time (J8) = max(1, 3) = 3
      Effective release time (J9) = max(1, 3) = 3

Job    Effective release time
J1     0
J2     1
J3     3
J4     1
J5     3
J6     3
J7     1
J8     3
J9     3
Exercise – Answer (a): Effective Deadlines

- J3, J6 and J9 have no successors, so their effective deadlines are the same as their own deadlines.
- Then traverse the graph from right to left.
- Effective deadline of every other job = min(own deadline, effective deadlines of its successors):
      Effective deadline (J8) = min(12, 12) = 12
      Effective deadline (J2) = min(4, 5, 10) = 4
      Effective deadline (J5) = min(9, 5, 10, 12) = 5
      Effective deadline (J7) = min(12, 12) = 12
      Effective deadline (J4) = min(6, 4, 12) = 4
      Effective deadline (J1) = min(10, 4) = 4

Job    Effective deadline
J1     4
J2     4
J3     5
J4     4
J5     5
J6     10
J7     12
J8     12
J9     12
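A sketch of the effective release time / effective deadline computation above. The precedence edges below are inferred from the worked answers (the original graph figure is not reproduced in this text), so treat them as an assumption; the computation itself is just the max/min rule stated on the slides.

# job -> (release time, deadline); the execution time of every job is 1
params = {"J1": (0, 10), "J2": (1, 4), "J3": (0, 5), "J4": (1, 6), "J5": (3, 9),
          "J6": (2, 10), "J7": (1, 12), "J8": (1, 12), "J9": (1, 12)}
# predecessor -> successors (assumed; consistent with the answer tables)
succ = {"J1": ["J2"], "J4": ["J2", "J7"], "J2": ["J3", "J6"],
        "J5": ["J3", "J6", "J8"], "J7": ["J8"], "J8": ["J9"],
        "J3": [], "J6": [], "J9": []}
pred = {j: [p for p in succ if j in succ[p]] for j in params}

def eff_release(j):     # max of the job's own release time and its predecessors'
    return max([params[j][0]] + [eff_release(p) for p in pred[j]])

def eff_deadline(j):    # min of the job's own deadline and its successors'
    return min([params[j][1]] + [eff_deadline(s) for s in succ[j]])

for j in params:
    print(j, eff_release(j), eff_deadline(j))
# reproduces the two tables, e.g. J2 -> 1, 4;  J6 -> 3, 10;  J8 -> 3, 12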
Exercise – Answer (b): EDF Schedule

- Time 0: J1 is released and scheduled.
- Time 1: J2, J4 and J7 are released. J2 and J4 have the same effective deadline (4), earlier than that of J7, but J4 is a predecessor of J2, so J4 must be scheduled first.
- Time 2: Of J2 and J7, J2 has the earlier deadline (4), so it is scheduled.
- Time 3: All the remaining jobs have been released. J3 and J5 have the earliest deadline (5), but J5 is a predecessor of J3, so J5 is scheduled.
- Time 4: J3 is scheduled, since it now has the earliest deadline (5).
- Time 5: J6 is scheduled, since it has the earliest deadline (10).
- Times 6, 7, 8: J7, J8 and J9 all have deadline 12, i.e. the same priority, but J7 is a predecessor of J8 and J8 is a predecessor of J9, so they are scheduled in the order J7, J8, J9.

Resulting schedule (using the effective release times and deadlines from part (a); every job runs for one unit):
(0,1] J1   (1,2] J4   (2,3] J2   (3,4] J5   (4,5] J3   (5,6] J6   (6,7] J7   (7,8] J8   (8,9] J9
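A sketch of the schedule above: EDF on the effective deadlines, where a job becomes eligible only once it is released and all of its predecessors have completed. The predecessor lists are the same assumed edges as in the previous sketch; every execution time is 1.

release = {"J1": 0, "J2": 1, "J3": 0, "J4": 1, "J5": 3,
           "J6": 2, "J7": 1, "J8": 1, "J9": 1}
eff_deadline = {"J1": 4, "J2": 4, "J3": 5, "J4": 4, "J5": 5,
                "J6": 10, "J7": 12, "J8": 12, "J9": 12}
pred = {"J2": ["J1", "J4"], "J7": ["J4"], "J3": ["J2", "J5"],
        "J6": ["J2", "J5"], "J8": ["J5", "J7"], "J9": ["J8"]}

done, schedule = set(), []
for t in range(len(release)):                     # one unit-length slot per job
    eligible = [j for j in release
                if j not in done and release[j] <= t
                and all(p in done for p in pred.get(j, []))]
    job = min(eligible, key=lambda j: eff_deadline[j])   # earliest effective deadline
    schedule.append(job)
    done.add(job)

print(schedule)   # -> ['J1', 'J4', 'J2', 'J5', 'J3', 'J6', 'J7', 'J8', 'J9']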
Rate Monotonic (RM) Algorithm
- Fixed-priority algorithm
- The shorter the period, the higher the priority
- The rate is the inverse of the period, so the higher the rate, the higher the priority - hence the name 'rate monotonic'

Example: 3 tasks T1 = (4, 1), T2 = (5, 2), T3 = (20, 5) to be scheduled by the RM algorithm.

- T1 has the shortest period (4), so it has the highest priority, followed by T2 and then T3.
- T1 is released every 4 time units and gets scheduled at times 0, 4, 8, 12, 16, 20, ...
- T2 gets scheduled at times 1, 5 and 10, each time for 2 time slots. When T2 is released again at time 15 it gets the processor, but at time 16 T1 is released and preempts it; once T1 is done, T2 is scheduled again at time 17 and completes at 18.
- T3 runs in the remaining slots 3 and 7. It also gets slot 9, since T1 is done and T2 has not yet been released, but it is preempted when T2 is released at time 10. It runs again in slots 13 and 14, with which it completes its execution for one period of 20.

Time slot:  0   1   2   3   4   5   6   7   8   9   10  11  12  13  14  15  16  17  18  19
Running:    T1  T2  T2  T3  T1  T2  T2  T3  T1  T3  T2  T2  T1  T3  T3  T2  T1  T2  --  --
(At time 20 the pattern repeats, starting with T1.)
Rate Monotonic (RM) Algorithm

(A different representation of the same schedule: per-task timelines over [0, 20], showing jobs J1,1-J1,6 of T1, J2,1-J2,4 of T2, and the segments of J3,1 of T3.)
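A sketch of a preemptive fixed-priority (rate-monotonic) simulation that reproduces the schedule above, assuming integer time units; the names are illustrative. Sorting the tasks by relative deadline instead of period would turn the same sketch into a deadline-monotonic (DM) scheduler, discussed next.

def rm_schedule(tasks, horizon):
    """tasks: list of (name, period, execution time); shorter period = higher priority."""
    tasks = sorted(tasks, key=lambda task: task[1])      # rate-monotonic priorities
    backlog = {name: 0 for name, _, _ in tasks}          # unfinished work per task
    timeline = []
    for t in range(horizon):
        for name, period, execution in tasks:            # release a new job each period
            if t % period == 0:
                backlog[name] += execution
        running = next((name for name, _, _ in tasks if backlog[name] > 0), None)
        if running is not None:
            backlog[running] -= 1
        timeline.append(running)
    return timeline

tasks = [("T1", 4, 1), ("T2", 5, 2), ("T3", 20, 5)]
print(rm_schedule(tasks, 20))
# -> ['T1','T2','T2','T3','T1','T2','T2','T3','T1','T3',
#     'T2','T2','T1','T3','T3','T2','T1','T2', None, None]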
Deadline Monotonic (DM) Algorithm
- Fixed-priority algorithm
- The shorter the relative deadline, the higher the priority
- When the relative deadline of every task is proportional to its period, the schedules produced by the RM and DM algorithms are identical.
- When the relative deadlines are arbitrary, DM may produce a feasible schedule where RM fails.

Example:
T1 = (50, 50, 25, 100), T2 = (0, 60, 10, 20), T3 = (0, 125, 25, 50)   (phase, period, execution time, relative deadline)
According to the DM algorithm, T2 has the highest priority because its relative deadline is 20.
Similarly, T1 has the lowest priority and T3's priority is in between.

(Figure: the DM schedule of T1, T2 and T3 - at each release T2's job runs first, then T3's, and T1's jobs run in the remaining time, preempted when necessary.)

Schedulable Utilization
A scheduling algorithm can feasibly schedule any set of periodic tasks on a processor if the total utilization of the tasks is less than or equal to the schedulable utilization of the algorithm. (The total utilization U of a set of periodic tasks is the sum, over all tasks, of execution time divided by period.)

No algorithm can schedule a set of tasks with a total utilization greater than 1.

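A tiny, illustrative helper for the total utilization used on the following slides (U = sum of execution time / period over all tasks):

def utilization(tasks):
    """tasks: list of (period, execution time)."""
    return sum(e / p for p, e in tasks)

print(utilization([(2, 1), (5, 3)]))             # 1.1 - the over-utilized example below
print(utilization([(4, 1), (5, 2), (20, 5)]))    # 0.9 - the earlier RM example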
Schedulable Utilization
Example for U > 1:
Consider the EDF schedule for the tasks T1 = (2, 1), T2 = (5, 3).
Total utilization U = 1/2 + 3/5 = 0.5 + 0.6 = 1.1, so the task set cannot be schedulable; indeed J2,2 misses its deadline at 10.

Absolute deadlines of the ready jobs at the scheduling decision instants, and the decisions taken:
t = 0:   J1,1: 2,   J2,1: 5    ->  J1,1 scheduled
t = 2:   J1,2: 4,   J2,1: 5    ->  J1,2 scheduled
t = 4:   J1,3: 6,   J2,1: 5    ->  J2,1 scheduled
t = 5:   J1,3: 6,   J2,2: 10   ->  J1,3 scheduled
t = 6:   J1,4: 8,   J2,2: 10   ->  J1,4 scheduled
t = 8:   J1,5: 10,  J2,2: 10   ->  either can be scheduled
t = 10:  J1,6: 12,  J2,2: 10   ->  J2,2 scheduled, but J2,2 has already missed its deadline at 10

Relative Merits
- Criterion to measure performance: schedulable utilization.
- The higher the schedulable utilization, the better the algorithm.
- An algorithm whose schedulable utilization is 1 is an optimal algorithm.
- Optimal dynamic-priority algorithms outperform fixed-priority algorithms in terms of schedulable utilization.
- But the advantage of fixed-priority algorithms is predictability.
Example

T1( 2,1) T2( 5, 2.5)


J2,1 missed deadline

Rate Monotonic
Algorithm
J1,1 J2,1 J1,2 J2,1 J1,3
0 1 2 3 4 5 6 7

EDF Algorithm

J1,1 J2,1 J1,2 J2,1 J1,3


0 1 2 3 4 5 6 7

Schedulable Utilization of EDF Algorithm
Theorem:
A system T of independent, preemptable tasks with relative deadlines equal to
their respective periods or larger than their periods can be feasibly scheduled
on a processor if and only if its total utilization is equal to or less than 1.

The schedulable utilization of the LST algorithm is also 1.

Schedulability Test of EDF Algorithm

Schedulability condition for the EDF algorithm:

    sum over k = 1..n of  e_k / min(D_k, p_k)  <=  1

If the above condition is satisfied, the system is schedulable according to the EDF algorithm.

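A small, illustrative check of the condition above:

def edf_schedulable(tasks):
    """tasks: list of (period p, execution e, relative deadline D)."""
    return sum(e / min(D, p) for p, e, D in tasks) <= 1

# e.g. the earlier task set T1 = (2, 0.9), T2 = (5, 2.3) with D = p:
print(edf_schedulable([(2, 0.9, 2), (5, 2.3, 5)]))   # True (0.45 + 0.46 = 0.91 <= 1)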
Example

Check whether the following periodic tasks are schedulable using EDF / LLF, by applying the condition  sum over k of e_k / min(D_k, p_k) <= 1:

        J1   J2   J3
   r     0    0    0
   e     1    1    5
   d     1    2    5

Optimality of RM & DM Algorithms
- Since these algorithms assign fixed priorities, they cannot be optimal in general.
- While the RM algorithm is not optimal for arbitrary periods, it is optimal in the special case when the periodic tasks in the system are simply periodic.
- A system of periodic tasks is simply periodic if, for every pair of tasks Ti and Tk in the system with pi < pk, pk is an integer multiple of pi.
- In other words, for simply periodic tasks the periods of all tasks are integer multiples of the shortest period.

Optimality of RM Algorithm
Theorem:

A system of simply periodic, independent, preemptable tasks whose relative deadlines are equal to or larger than their periods is schedulable on a processor according to the RM algorithm if and only if its total utilization is equal to or less than 1.

The RM algorithm is optimal among all fixed-priority algorithms whenever the relative deadlines of the tasks are proportional to their periods.
Optimality of DM Algorithm
Theorem:

A system T of periodic, independent, preemptable tasks that are in phase and have relative deadlines equal to or less than their periods can be feasibly scheduled on a processor according to the DM algorithm whenever it can be feasibly scheduled according to any fixed-priority algorithm.

Sufficient Schedulability Condition for RM Algorithm

Theorem:
A system of m independent, preemptable periodic tasks with relative deadlines equal to their respective periods can be feasibly scheduled on a processor according to the RM algorithm if its total utilization U is less than or equal to

    U_RM(m) = m (2^(1/m) - 1)
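A small, illustrative check of this sufficient condition. Note that failing the test is inconclusive, since the bound is only sufficient - the example task set below was in fact scheduled successfully by RM earlier.

def rm_bound(m):
    return m * (2 ** (1 / m) - 1)

def rm_sufficient(tasks):
    """tasks: list of (period, execution time), with deadline = period."""
    U = sum(e / p for p, e in tasks)
    return U <= rm_bound(len(tasks))

print(round(rm_bound(3), 3))                        # about 0.78 for m = 3
print(rm_sufficient([(4, 1), (5, 2), (20, 5)]))     # U = 0.9 > 0.78 -> False (inconclusive)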
