OS Unit 4 DC
Date:23-08-2024
DISCLAIMER: The information contained in this e-book is compiled and distributed for
educational purposes only. This e-book has been designed to help learners understand
relevant concepts with a more dynamic interface. The compiler of this e-book and
Vidyalankar Institute of Technology give full and due credit to the authors of the contents,
developers and all websites from wherever information has been sourced. We
acknowledge our gratitude towards the websites YouTube, Wikipedia, and Google search
engine. No commercial benefits are being drawn from this project.
Unit 4 Scheduling
Contents:
Scheduling: Uniprocessor Scheduling, Multiprocessor and Real-Time Scheduling
Recommended Books:
1. Operating Systems – Internals and Design Principles, William Stallings, Pearson, 9th edition
2. Operating System Concepts Abraham Silberschatz, Wiley 8th edition
3. Operating Systems Godbole and Kahate McGraw Hill 3rd edition
a. Long-term scheduling:
a. The decision to add to the pool of processes to be executed. Long-term scheduling is
performed when a new process is created. This is a decision whether to add a new process
to the set of processes that are currently active.
b. The long-term scheduler determines which programs are admitted to the system for
processing. Thus, it controls the degree of multiprogramming. Once admitted, a job or user
program becomes a process and is added to the queue for the short-term scheduler.
c. In some systems, a newly created process begins in a swapped-out condition, in which case it
is added to a queue for the medium-term scheduler.
d. In a batch system, or for the batch portion of an OS, newly submitted jobs are routed to disk
and held in a batch queue.
e. The long-term scheduler creates processes from the queue when it can. There are two
decisions involved. The scheduler must decide when the OS can take on one or more
additional processes.
f. The scheduler must decide which job or jobs to accept and turn into processes.
g. The decision as to when to create a new process is generally driven by the desired degree of
multiprogramming. The more processes that are created, the smaller is the percentage of
time that each process can be executed.
h. Thus, the long-term scheduler may limit the degree of multiprogramming to provide
satisfactory service to the current set of processes. Each time a job terminates, the scheduler
may decide to add one or more new jobs.
i. The decision as to which job to admit next can be on a simple first-come-first-served (FCFS)
basis, or it can be a tool to manage system performance. The criteria used may include
priority, expected execution time, and I/O requirements.
b. Medium-term scheduling:
a. The decision to add to the number of processes that are partially or fully in main memory.
b. Medium-term scheduling is a part of the swapping function. This is a decision whether to
add a process to those that are at least partially in main memory and therefore available
for execution.
c. the swapping-in decision is based on the need to manage the degree of multiprogramming.
d. On a system that does not use virtual memory, memory management is also an issue.
e. Thus, the swapping-in decision will consider the memory requirements of the
swapped-out processes.
c. Short-term scheduling:
a. The decision as to which available process will be executed by the processor.
b. Short-term scheduling is the actual decision of which ready process to execute next. The
short-term scheduler is invoked whenever an event occurs that may lead to the blocking of
the current process or that may provide an opportunity to preempt a currently running
process in favor of another. Examples of such events include:
c. Clock interrupts
d. I/O interrupts
e. Operating system calls
f. Signals (e.g., semaphores)
Fig.: Working of the three types of schedulers (long-term, medium-term, and short-term).
Non-preemptive:
• In this case, once a process is in the Running state, it continues to execute until (a) it terminates, or
(b) it blocks itself to wait for I/O or to request some OS service.
Preemptive:
• The currently running process may be interrupted and moved to the Ready state by the OS.
• The decision to preempt may be made when a new process arrives; when an interrupt occurs
that places a blocked process in the Ready state; or periodically, based on a clock interrupt.
• Preemptive policies incur greater overhead than non-preemptive ones but may provide better
service to the total population of processes, because they prevent any one process from
monopolizing the processor for very long.
• In addition, the cost of preemption may be kept relatively low by using efficient process-switching
mechanisms (with as much help from hardware as possible) and by providing a large main memory
to keep a high percentage of programs in main memory.
COARSE-GRAINED AND VERY COARSE-GRAINED PARALLELISM:
With coarse- and very coarse-grained parallelism, there is synchronization among processes, but at a
very gross level. This kind of situation is easily handled as a set of concurrent processes running on a
multiprogrammed uniprocessor and can be supported on a multiprocessor with little or no change to
user software. In general, any collection of concurrent processes that need to communicate or
synchronize can benefit from the use of a multiprocessor architecture.
MEDIUM-GRAINED PARALLELISM:
A single application can be effectively implemented as a collection of threads within a single process.
In this case, the programmer must explicitly specify the potential parallelism of an application.
Typically, there needs to be a fairly high degree of coordination and interaction among the threads of
an application, leading to a medium-grain level of synchronization.
FINE-GRAINED PARALLELISM:
Fine-grained parallelism represents a much more complex use of parallelism than is found in the use
of threads. Although much work has been done on highly parallel applications, this is so far a
specialized and fragmented area, with many different approaches
Design Issues
Thread Scheduling:
• An application can be implemented as a set of threads that cooperate and execute
concurrently in the same address space.
• On a uniprocessor, threads can be used as a program structuring aid and to overlap I/O with
processing. Because of the minimal penalty in doing a thread switch compared to a process switch,
these benefits are realized with little cost.
• Load sharing: Processes are not assigned to a particular processor. A global queue of ready
threads is maintained, and each processor, when idle, selects a thread from the queue. The term
load sharing is used to distinguish this strategy from load-balancing schemes in which work is
allocated on a more permanent basis.
• Gang scheduling: A set of related threads is scheduled to run on a set of processors at the same
time, on a one-to-one basis.
• Dedicated processor assignment: This is the opposite of the load-sharing approach and provides
implicit scheduling defined by the assignment of threads to processors. Each program, for the
duration of its execution, is allocated a number of processors equal to the number of threads in
the program. When the program terminates, the processors return to the general pool for
possible allocation to another program.
• Dynamic scheduling: The number of threads in a process can be altered during the course of
execution, allowing the OS to adjust the load to improve utilization.
Real-time operating systems can be characterized as having unique requirements in five general areas:
1. Determinism:
An operating system is deterministic to the extent that it performs operations at fixed, predetermined
times or within predetermined time intervals. When multiple processes are competing for resources and
processor time, no system will be fully deterministic. In a real-time operating system, process requests for
service are dictated by external events and timings.
The extent to which an operating system can deterministically satisfy requests depends first on the speed
with which it can respond to interrupts and, second, on whether the system has sufficient capacity to
handle all requests within the required time
2. Responsiveness:
Responsiveness is concerned with how long, after acknowledgment, it takes an operating system to
service the interrupt. It includes the amount of time required to initially handle the interrupt and
begin execution of the interrupt service routine (ISR). If execution of the ISR requires a process
switch, then the delay will be longer than if the ISR can be executed within the context of the current
process.
3. User control:
User control is generally much broader in a real-time operating system than in ordinary operating systems.
In a typical non-real-time operating system, the user either has no control over the scheduling function of
the operating system or can only provide broad guidance, such as grouping users into more than one
priority class. In a real-time system, however, it is essential to allow the user fine-grained control over
task priority.
4. Reliability:
Reliability is typically far more important for real-time systems than for non-real-time systems. A transient
failure in a non-real-time system may be solved by simply rebooting the system.
A processor failure in a multiprocessor non-real-time system may result in a reduced level of service until
the failed processor is repaired or replaced. But a real-time system is responding to and controlling events
in real time; loss or degradation of performance may have catastrophic consequences.
5. Fail-soft operation
Fail-soft operation is a characteristic that refers to the ability of a system to fail in such a way as to
preserve as much capability and data as possible. For example, a typical
traditional UNIX system, when it detects a corruption of data within the kernel, issues a failure message on
the system console, dumps the memory contents to disk for later failure analysis, and terminates execution
of the system.
CPU SCHEDULING ALGORITHMS
SJF (Non-preemptive)
According to this algorithm, the scheduler selects the process with the shortest CPU burst time.
Non-preemptive: a process, once scheduled, continues running until the end of its CPU burst.
SJF is an optimal algorithm that gives the minimum average waiting time for any set of processes,
but it suffers from the drawback of assuming that run times are known in advance.
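As a sketch of the idea, non-preemptive SJF selection can be simulated in a few lines of Python. The process names, arrival times, and burst times below are illustrative, not taken from these notes.

```python
# Non-preemptive SJF: among the processes that have already arrived,
# always run the one with the shortest CPU burst, to completion.

def sjf(processes):
    """processes: list of (name, arrival_time, burst_time).
    Returns {name: (waiting_time, turnaround_time)}."""
    remaining = sorted(processes, key=lambda p: p[1])  # order by arrival
    time, results = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                       # CPU idle until the next arrival
            time = remaining[0][1]
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])
        remaining.remove((name, arrival, burst))
        time += burst                       # run to completion, no preemption
        turnaround = time - arrival
        results[name] = (turnaround - burst, turnaround)
    return results

print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
# {'P1': (0, 7), 'P3': (3, 4), 'P2': (6, 10), 'P4': (7, 11)}
```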
SJF (Preemptive)
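Preemptive SJF is also known as Shortest Remaining Time (SRT): whenever a new process arrives, the scheduler compares its burst with the remaining time of the running process and switches if the newcomer is shorter. A unit-by-unit simulation sketch, with illustrative processes:

```python
# Shortest Remaining Time (preemptive SJF), simulated one time unit at a
# time: at every tick, the ready process with the least remaining burst
# runs, so a short new arrival can preempt the current process.

def srt(processes):
    """processes: list of (name, arrival, burst) -> {name: (wait, turnaround)}."""
    remaining = {name: b for name, _, b in processes}   # remaining burst
    arrival = {name: t for name, t, _ in processes}
    burst = {name: b for name, _, b in processes}
    time, results = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                                   # nothing has arrived yet
            time += 1
            continue
        n = min(ready, key=lambda x: remaining[x])      # least remaining time
        remaining[n] -= 1
        time += 1
        if remaining[n] == 0:                           # process n finishes
            del remaining[n]
            tat = time - arrival[n]
            results[n] = (tat - burst[n], tat)
    return results

print(srt([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2)]))
# {'P3': (0, 2), 'P2': (2, 6), 'P1': (6, 14)}
```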
PRIORITY SCHEDULING
• Priority scheduling associates a priority number with each process, stored in its PCB
• The runnable process with the highest priority is assigned to the CPU
• A clash of 2 processes with the same priority is handled using FCFS
• The need to take external factors into account leads to priority scheduling.
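A minimal sketch of non-preemptive priority scheduling following the rules above (highest priority first, FCFS on ties). Here a smaller number means higher priority, matching the convention used in the assignment questions; the sample processes are illustrative.

```python
# Non-preemptive priority scheduling: pick the highest-priority ready
# process; equal priorities fall back to FCFS (list) order.

def priority_schedule(processes):
    """processes: list of (name, arrival, burst, priority), in arrival order.
    Smaller priority number = higher priority. Returns execution order."""
    order, time = [], 0
    pending = list(processes)
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                          # CPU idle until next arrival
            time = min(p[1] for p in pending)
            continue
        # min() returns the FIRST minimal element, so ties resolve FCFS
        p = min(ready, key=lambda x: x[3])
        pending.remove(p)
        time += p[2]                           # run the whole burst
        order.append(p[0])
    return order

print(priority_schedule([("P1", 0, 5, 3), ("P2", 0, 3, 1), ("P3", 1, 4, 1)]))
# ['P2', 'P3', 'P1']
```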
ROUND ROBIN SCHEDULING
The Round Robin scheduling algorithm has been designed specifically for time-sharing systems.
A time quantum (or time slice) is a small unit of time after which preemption of the running
process takes place.
The ready queue is maintained in FIFO order, with each process getting access to the CPU in a
circular manner.
Each process must wait no longer than (n − 1) × q time units until its next time quantum.
If q is very small, the result is too many context switches, leading to context-switch overhead.
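The round-robin mechanism can be sketched with a FIFO queue; the processes and quantum below are illustrative.

```python
# Round Robin with a FIFO ready queue: each process runs for at most one
# quantum q, then goes to the back of the queue. With n processes, each
# waits at most (n - 1) * q until its next quantum.

from collections import deque

def round_robin(processes, q):
    """processes: list of (name, burst), all arriving at t = 0.
    Returns {name: completion_time}."""
    queue = deque(processes)
    time, done = 0, {}
    while queue:
        name, rem = queue.popleft()
        run = min(q, rem)                    # run one quantum (or less)
        time += run
        if rem > run:
            queue.append((name, rem - run))  # back of the FIFO queue
        else:
            done[name] = time                # burst finished
    return done

print(round_robin([("P1", 5), ("P2", 3), ("P3", 8)], q=4))
# {'P2': 7, 'P1': 12, 'P3': 16}
```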
FCFS disk scheduling, total head movement (request queue 98, 183, 41, 122, 14, 124, 65, 67;
head initially at cylinder 53):
= (98 – 53) + (183 – 98) + (183 – 41) + (122 – 41) + (122 – 14) + (124 – 14) + (124 –
65) + (67 – 65)
= 632
SSTF (shortest seek time first) disk scheduling, same request queue:
= (65 – 53) + (67 – 65) + (67 – 41) + (41 – 14) + (98 – 14) + (122 – 98) + (124 – 122)
+ (183 – 124)
= 12 + 2 + 26 + 27 + 84 + 24 + 2 + 59
= 236
SCAN Scheduling
In the SCAN algorithm, the disk arm starts at one end of the disk and moves
toward the other end, servicing requests as it reaches each cylinder, until it gets to
the other end of the disk. At the other end, the direction of head movement is
reversed, and servicing continues. The head continuously scans back and forth
across the disk. The SCAN algorithm is sometimes called the elevator algorithm,
since the disk arm behaves just like an elevator in a building, first servicing all the
requests going up and then reversing to service requests the other way
Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122,
14, 124, 65, 67. The SCAN scheduling algorithm is used. The head is initially at
cylinder number 53 moving towards larger cylinder numbers on its servicing pass.
The cylinders are numbered from 0 to 199.
= (65 – 53) + (67 – 65) + (98 – 67) + (122 – 98) + (124 – 122) + (183 – 124) + (199 –
183) + (199 – 41) + (41 – 14)
= 12 + 2 + 31 + 24 + 2 + 59 + 16 + 158 + 27
= 331
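The SCAN total can be reproduced by sweeping up to the last cylinder (199) and then back down:

```python
# SCAN ("elevator") head movement: service all requests on the way up to
# the highest cylinder, then reverse and service the rest on the way down.

def scan_seek(start, requests, max_cyl):
    up = sorted(r for r in requests if r >= start)
    down = sorted((r for r in requests if r < start), reverse=True)
    total, pos = 0, start
    for r in up:                        # service requests moving upward
        total += r - pos
        pos = r
    if down:                            # go to the edge, then sweep back down
        total += max_cyl - pos
        pos = max_cyl
        for r in down:
            total += pos - r
            pos = r
    return total

print(scan_seek(53, [98, 183, 41, 122, 14, 124, 65, 67], 199))  # 331
```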
C-SCAN Scheduling
Provides a more uniform wait time than SCAN.
The head moves from one end of the disk to the other, servicing requests as it
goes. When it reaches the other end, however, it immediately returns to the
beginning of the disk, without servicing any requests on the return trip.
Treats the cylinders as a circular list that wraps around from the last cylinder to the
first one.
= (65 – 53) + (67 – 65) + (98 – 67) + (122 – 98) + (124 – 122) + (183 – 124) + (199 –
183) + (199 – 0) + (14 – 0) + (41 – 14)
= 12 + 2 + 31 + 24 + 2 + 59 + 16 + 199 + 14 + 27
= 386
LOOK
• LOOK Algorithm is an improved version of the SCAN Algorithm
• Head starts from the first request at one end of the disk and moves towards
the last request at the other end servicing all the requests in between.
• After reaching the last request at the other end, head reverses its direction.
• It then returns to the first request at the starting end servicing all the
requests in between.
• The same process repeats.
= (65 – 53) + (67 – 65) + (98 – 67) + (122 – 98) + (124 – 122) + (183 – 124) + (183 –
41) + (41 – 14)
= 12 + 2 + 31 + 24 + 2 + 59 + 142 + 27
= 299
CLOOK
Version of C-SCAN
Arm only goes as far as the last request in each direction, then reverses direction
immediately, without first going all the way to the end of the disk.
Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122,
14, 124, 65, 67. The C-LOOK scheduling algorithm is used. The head is initially at
cylinder number 53 moving towards larger cylinder numbers on its servicing pass.
The cylinders are numbered from 0 to 199.
= (65 – 53) + (67 – 65) + (98 – 67) + (122 – 98) + (124 – 122) + (183 – 124) + (183 –
14) + (41 – 14)
= 12 + 2 + 31 + 24 + 2 + 59 + 169 + 27
= 326
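The three variants differ only in what happens after the upward sweep. The sketch below reproduces the totals worked out above (386 for C-SCAN, 299 for LOOK, 326 for C-LOOK) for the same request queue, head at 53 moving upward, cylinders 0 to 199. It assumes pending requests exist on both sides of the head, as in this example.

```python
# C-SCAN, LOOK, and C-LOOK head movement for a head moving toward
# higher cylinder numbers.

def cscan_seek(start, requests, max_cyl):
    up = sorted(r for r in requests if r >= start)
    low = sorted(r for r in requests if r < start)
    total = (up[-1] - start) if up else 0
    if low:   # continue to the top edge, jump to cylinder 0, sweep up again
        total += (max_cyl - (up[-1] if up else start)) + max_cyl + low[-1]
    return total

def look_seek(start, requests):
    up = sorted(r for r in requests if r >= start)
    low = sorted(r for r in requests if r < start)
    total = (up[-1] - start) if up else 0   # only as far as the last request
    if low:                                 # reverse down to the lowest request
        total += (up[-1] if up else start) - low[0]
    return total

def clook_seek(start, requests):
    up = sorted(r for r in requests if r >= start)
    low = sorted(r for r in requests if r < start)
    total = (up[-1] - start) if up else 0
    if low and up:   # jump straight to the lowest request, then sweep up
        total += (up[-1] - low[0]) + (low[-1] - low[0])
    return total

reqs = [98, 183, 41, 122, 14, 124, 65, 67]
print(cscan_seek(53, reqs, 199))  # 386
print(look_seek(53, reqs))        # 299
print(clook_seek(53, reqs))       # 326
```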
OPTIMAL PAGE REPLACEMENT
The optimal policy selects for replacement that page for which the time to the next
reference is the longest. It can be shown that this policy results in the fewest
page faults.
Clearly, this policy is impossible to implement, because it would require the OS to
have perfect knowledge of future events. However, it does serve as a standard
against which to judge real-world algorithms
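Although unrealizable online, the optimal policy can be simulated offline once the whole reference string is known. The sketch below counts OPT faults for the reference string from the assignment questions, with an assumed allocation of 3 frames.

```python
# Optimal (OPT) page replacement: on a fault with full frames, evict the
# resident page whose next reference lies furthest in the future.

def opt_faults(refs, nframes):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:                  # hit: nothing to do
            continue
        faults += 1
        if len(frames) < nframes:           # free frame available
            frames.append(page)
            continue
        def next_use(p):                     # distance to p's next reference
            future = refs[i + 1:]
            return future.index(p) if p in future else len(future)
        victim = max(frames, key=next_use)   # needed furthest away (or never)
        frames[frames.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(opt_faults(refs, 3))  # 7 faults with 3 frames
```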
Banker’s Algorithm
The total amount of resources R1, R2, and R3 are 9, 3, and 6 units, respectively. In
the current state allocations have been made to the four processes, leaving 1 unit
of R2 and 1 unit of R3 available. Is this a safe state?
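The safety check works by repeatedly finding a process whose remaining need fits within Available, letting it run to completion, and reclaiming its allocation. The Claim and Allocation matrices below are taken from the well-known Stallings example that matches these totals (9, 3, 6 total; (0, 1, 1) available); verify them against the table in the notes before relying on the numbers.

```python
# Banker's algorithm safety check.
# Need = Claim - Allocation; a process can finish if Need <= Work.

def is_safe(available, allocation, claim):
    need = [[c - a for c, a in zip(crow, arow)]
            for crow, arow in zip(claim, allocation)]
    work = list(available)
    finished = [False] * len(allocation)
    sequence = []
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # process i can finish; reclaim its allocation
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(i)
                progress = True
    return all(finished), sequence

# Allocation and Claim rows for P1..P4 (0-indexed below):
alloc = [[1, 0, 0], [6, 1, 2], [2, 1, 1], [0, 0, 2]]
claim = [[3, 2, 2], [6, 1, 3], [3, 1, 4], [4, 2, 2]]
print(is_safe([0, 1, 1], alloc, claim))  # (True, [1, 2, 3, 0])
```

The sequence [1, 2, 3, 0] means P2 finishes first (its need (0, 0, 1) fits in (0, 1, 1)), then P3, P4, and finally P1, so the state is safe.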
BUDDY SYSTEM
Both fixed and dynamic partitioning schemes have drawbacks. A fixed partitioning scheme limits
the number of active processes and may use space inefficiently if there is a poor match between
available partition sizes and process sizes. A dynamic partitioning scheme is more complex to
maintain and includes the overhead of compaction. An interesting compromise is the buddy
system.
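A minimal sketch of the allocation side of the buddy system: block sizes are powers of two, and a larger free block is repeatedly split into two "buddies" until a block of the right size exists. Freeing and coalescing are omitted, and the sizes are illustrative (in KB).

```python
# Buddy-system allocation sketch: free_lists maps order k to the number
# of free blocks of size 2**k. A request is rounded up to a power of
# two, and bigger blocks are split in half until the right size exists.

def buddy_alloc(free_lists, request, min_order=0):
    """Returns the order of the block handed out, or None if out of memory."""
    order = min_order
    while (1 << order) < request:       # round request up to a power of 2
        order += 1
    k = order
    while k not in free_lists or free_lists[k] == 0:
        k += 1                          # find the smallest big-enough block
        if k > max(free_lists, default=-1):
            return None                 # no block large enough
    while k > order:                    # split down, creating buddy pairs
        free_lists[k] -= 1
        free_lists[k - 1] = free_lists.get(k - 1, 0) + 2
        k -= 1
    free_lists[order] -= 1              # hand out one block of the right size
    return order

# One free 1024 KB block (order 10); a 70 KB request gets a 128 KB block.
free_lists = {10: 1}
print(buddy_alloc(free_lists, 70))   # 7, i.e. a 128 KB (2**7) block
print(free_lists)                    # {10: 0, 9: 1, 8: 1, 7: 1}
```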
ASSIGNMENT QUESTIONS
Calculate the average waiting and turnaround time for the following using FCFS, SRT,
priority (a smaller priority number implies higher priority), and round robin with
quantum 30 ms.
A disk has 200 cylinders numbered 0 to 199. The disk arm starts at cylinder 53 and
the direction of movement is toward the outer tracks. If the sequence of disk
requests is as follows: 98, 183, 37, 122, 14, 124, 65, 67, calculate the total seek time
using the FCFS, SSTF, SCAN, CSCAN, LOOK, CLook disk scheduling algorithm.
Suppose a disk has 1000 cylinders numbered from 0 to 999. The disk arm starts at
cylinder 143 and is moving upward (toward higher cylinder numbers). Given the
sequence of disk requests: 562, 345, 699, 143, 892, 42, 500, 400, 67, calculate the
total seek time using the Elevator (SCAN) disk scheduling algorithm.
A 1-Mbyte block of memory is allocated using the buddy system.
a. Show the results of the following sequence: Request 70; Request 35; Request 80;
Return A; Request 60; Return B; Return D; Return C.
b. Show the binary tree representation following Return B.
Given the following state for the Banker's Algorithm, with 6 processes P0 through P5:
c. Show that the current state is safe; that is, show a safe sequence of processes. In
addition to the sequence, show how the Available (working) array changes as each
process terminates.
d. Given the request (3, 2, 3, 3) from process P5, should this request be granted? Why
or why not?
Consider the following string of page references: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2.
Show the frame allocation and calculate the number of page faults for:
a. FIFO (first-in-first-out)
b. LRU (least recently used)
c. OPT(Optimal Page Replacement)
Practice Questions:
1. Express your view on the below mentioned design issues related to processor: Assignment of
processes to processor.
2. Explain the working and the functioning of Medium-term scheduling.
3. Mention and explain the unique requirements of Real-time operating systems.
4. Compare Shortest Remaining Time algorithm with Shortest Process Next.
5. Explain the following terms related to performance of a processor: Turnaround time, Response
time, Deadlines, throughput.
6. Explain the working and the functioning of Short-term scheduling.
7. Mention and explain three different versions of Load Sharing.
8. Explain independent and fine-grained parallelism in operating systems.
9. Explain the working and the functioning of Long-term scheduling.
10. What are preemptive and non-preemptive processes in operating systems?
11. Mention and explain the five different types of Granularity in operating systems.
12. Mention the three design issues of scheduling on multiprocessor. Explain any one of them in
detail.
13. Mention and explain the different classification of Multiprocessor system.
14. Calculate the average waiting and turnaround time for the following using priority scheduling &
FCFS.
15. Complete the below table with proper description related to granularity in operating systems:
Grain Size Description
Fine
Medium
Coarse
Very Coarse
Independent
YT Links:
1. https://www.youtube.com/watch?v=2dJdHMpCLIg
2. https://www.youtube.com/watch?v=MZdVAVMgNpA
3. https://www.youtube.com/watch?v=zFnrUVqtiOY
4. https://www.youtube.com/watch?v=Q-4jzYIcdxM
Practice problems:
1. Gate Vidyalaya: https://www.gatevidyalay.com/cpu-scheduling-practice-problems-
numericals/
2. Geeksforgeeks: https://www.geeksforgeeks.org/cpu-scheduling-in-operating-systems/
3. NotesJam: https://www.notesjam.com/2018/11/scheduling-algorithms-examples.html