CH 05
Gantt chart, FCFS order P1, P2, P3: P1 runs 0-24, P2 runs 24-27, P3 runs 27-30
Gantt chart, FCFS order P2, P3, P1: P2 runs 0-3, P3 runs 3-6, P1 runs 6-30
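As a rough check of the two orderings above (assuming burst lengths of 24, 3, and 3 for P1, P2, and P3, as the Gantt chart times suggest), a few lines of Python reproduce the average waiting times and show why putting the long job first hurts:

def fcfs_waiting_times(bursts):
    """Return per-process waiting times when processes run in list order."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # a process waits until all earlier ones finish
        clock += burst
    return waits

order_a = [("P1", 24), ("P2", 3), ("P3", 3)]
order_b = [("P2", 3), ("P3", 3), ("P1", 24)]

for order in (order_a, order_b):
    names = [n for n, _ in order]
    waits = fcfs_waiting_times([b for _, b in order])
    print(names, "waits:", waits, "average:", sum(waits) / len(waits))
# Order P1,P2,P3 -> average 17; order P2,P3,P1 -> average 3 (the convoy effect)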
Associate with each process the length of its next CPU burst
Use these lengths to schedule the process with the
shortest time
SJF is optimal – gives minimum average waiting time for a
given set of processes
The difficulty is knowing the length of the next CPU request
Could ask the user
If two processes have the same length next CPU burst, FCFS
scheduling is used to break the tie.
Two schemes:
Nonpreemptive – once the CPU is given to the process, it cannot be
preempted until it completes its CPU burst.
Preemptive – if a new process arrives with a CPU burst length
less than the remaining time of the currently executing process,
preempt. This scheme is known as Shortest-Remaining-
Time-First (SRTF).
Gantt chart, nonpreemptive SJF: P4 runs 0-3, P1 runs 3-9, P3 runs 9-16, P2 runs 16-24
Gantt chart, preemptive SJF (SRTF): P1 runs 0-1, P2 runs 1-5, P4 runs 5-10, P1 runs 10-17, P3 runs 17-26
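The preemptive (SRTF) chart above can be reproduced with a short simulation. This is only a sketch: the arrival times and burst lengths (P1: 0/8, P2: 1/4, P3: 2/9, P4: 3/5) are inferred from the chart, and ties are broken by arrival time, matching the FCFS tie-break rule above.

import heapq

def srtf(processes):
    """processes: list of (name, arrival, burst). Returns the Gantt segments."""
    events = sorted(processes, key=lambda p: p[1])     # order by arrival time
    ready = []                                         # heap of (remaining, arrival, name)
    gantt, time, i = [], 0, 0
    while i < len(events) or ready:
        if not ready:                                  # CPU idle until next arrival
            time = max(time, events[i][1])
        while i < len(events) and events[i][1] <= time:
            name, arrival, burst = events[i]
            heapq.heappush(ready, (burst, arrival, name))
            i += 1
        remaining, arrival, name = heapq.heappop(ready)
        # run until this job finishes or the next arrival, whichever comes first
        until = events[i][1] if i < len(events) else time + remaining
        run = min(remaining, until - time)
        end = time + run
        if gantt and gantt[-1][0] == name and gantt[-1][2] == time:
            gantt[-1] = (name, gantt[-1][1], end)      # extend the current segment
        else:
            gantt.append((name, time, end))
        time = end
        if remaining - run > 0:                        # preempted: back into the heap
            heapq.heappush(ready, (remaining - run, arrival, name))
    return gantt

print(srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]))
# [('P1', 0, 1), ('P2', 1, 5), ('P4', 5, 10), ('P1', 10, 17), ('P3', 17, 26)]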
Each process gets a small unit of CPU time (time quantum q),
usually 10-100 milliseconds. After this time has elapsed, the
process is preempted and added to the end of the ready
queue.
RR scheduling is implemented as a FIFO queue of processes.
New processes are added to the tail of the ready queue.
A process may itself release the CPU if its CPU burst is less
than one quantum; otherwise, the timer will cause an interrupt and
the process will be put at the tail of the queue
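A minimal sketch of the RR mechanism described above, assuming all processes are ready at time 0 and an illustrative quantum of 4 time units (real systems would use something in the 10-100 millisecond range mentioned above):

from collections import deque

def round_robin(bursts, quantum):
    """bursts: dict name -> CPU burst length. Returns the run segments."""
    queue = deque(bursts.items())      # FIFO ready queue, as on the slide
    schedule, time = [], 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)  # process may leave before the quantum expires
        schedule.append((name, time, time + run))
        time += run
        if remaining > run:            # timer interrupt: back to the tail
            queue.append((name, remaining - run))
    return schedule

print(round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4))
# P1 runs 0-4, P2 4-7, P3 7-10, then P1 finishes in 4-unit slices until t=30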
Multilevel Feedback Queue Scheduling
A new job enters queue Q0 which is
served FCFS
When it gains CPU, job receives 8
milliseconds
If it does not finish in 8
milliseconds, job is moved to
queue Q1
At Q1 job is again served FCFS and
receives 16 additional milliseconds
If it still does not complete, it is
preempted and moved to queue Q2
Processes in Q2 are run on an FCFS
basis, but only when Q0 and Q1 are empty
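A simplified sketch of this three-queue scheme, with made-up burst lengths for illustration; it ignores new arrivals preempting lower queues and simply demotes any job that exhausts its quantum:

from collections import deque

QUANTA = [8, 16, None]          # Q0: 8 ms, Q1: 16 ms, Q2: run to completion (FCFS)

def mlfq(jobs):
    """jobs: dict name -> total CPU demand in ms. Returns run segments."""
    queues = [deque(jobs.items()), deque(), deque()]
    schedule, time = [], 0
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        name, remaining = queues[level].popleft()
        run = remaining if QUANTA[level] is None else min(QUANTA[level], remaining)
        schedule.append((name, level, time, time + run))
        time += run
        if remaining > run:                                  # used up quantum: demote
            queues[min(level + 1, 2)].append((name, remaining - run))
    return schedule

print(mlfq({"A": 5, "B": 30, "C": 12}))
# A finishes in Q0; B and C drop to Q1; B eventually finishes in Q2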
Fairness
Unfairness in scheduling:
STCF (shortest time-to-completion first) *must* be unfair to be optimal
Favor I/O-bound jobs? Then you penalize CPU-bound jobs.
How do we implement fairness?
Could give each queue a fraction of the CPU.
But this isn’t always fair. What if there’s one long-
running job, and 100 short-running ones?
Could adjust priorities: increase the priority of jobs as they
fail to get service. This is what's done in UNIX.
The problem is that this is ad hoc – at what rate should you
increase priorities?
And as the system gets overloaded, no job gets CPU time,
so every job's priority increases.
The result is that interactive jobs suffer – both short
and long jobs end up with high priority!
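A toy illustration of priority aging, the UNIX-style fix mentioned above. The boost rate (1 per tick) and the initial priority values are arbitrary choices; picking that rate is exactly the ad hoc question raised above.

def schedule_with_aging(jobs, ticks, boost=1):
    """jobs: dict name -> initial priority (higher runs first)."""
    priority = dict(jobs)
    history = []
    for _ in range(ticks):
        running = max(priority, key=priority.get)   # pick the highest-priority job
        history.append(running)
        for name in priority:
            if name != running:
                priority[name] += boost             # age the jobs that waited
        priority[running] = jobs[running]           # reset the job that just ran
    return history

print(schedule_with_aging({"interactive": 5, "batch": 1}, ticks=10))
# The batch job's priority creeps up until it occasionally wins the CPU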
[Table: per-job CPU share for different job mixes – 1/1: 91% / 9%; 0/2: NA / 50%; 2/0: 50% / NA; 10/1: 10% / 1%; 1/10: 50% / 5%; column headers not recoverable]
Thread Scheduling
Hyper-threading (simultaneous
multithreading, or SMT) – assigning multiple
hardware threads to a single processing
core.
Intel processors, such as the i7, support
two threads per core.
The Oracle SPARC M7 processor supports eight
threads per core, with eight cores per
processor, thus providing the operating
system with 64 logical CPUs.
Linux scheduling
Windows scheduling
Solaris scheduling
A distribution-driven simulation may be inaccurate
Trace tapes record sequences of real events in real systems
Trace files provide an excellent way to compare two algorithms on exactly the same set of real inputs
Simulations can be expensive in terms of computing time
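A small sketch of the trace-driven idea: feed the same list of recorded burst lengths (made-up values below) to two algorithms and compare average waiting times on identical input.

def avg_wait(bursts):
    """Average waiting time when bursts run back-to-back in the given order."""
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)
        clock += b
    return sum(waits) / len(waits)

trace = [24, 3, 3, 7, 12, 5]                 # stand-in for a recorded trace
print("FCFS:", avg_wait(trace))              # run in recorded order, about 27.8
print("SJF :", avg_wait(sorted(trace)))      # run shortest first, about 11.3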