
MODULE-4: Scheduling

Compiled by: Bhavesh


Shah
bhavesh.shah@vsit.edu.in
Vidyalankar School of
Information Technology
Wadala (E), Mumbai
www.vsit.edu.in
Certificate
This is to certify that the e-book titled “OPERATING SYSTEMS”
comprises all elementary learning tools for a better understanding of the relevant
concepts. This e-book is comprehensively compiled as per the predefined eight
parameters and guidelines.

Date: 23-08-2024

Mr. Bhavesh Shah


Assistant Professor
Department of IT

DISCLAIMER: The information contained in this e-book is compiled and distributed for
educational purposes only. This e-book has been designed to help learners understand
relevant concepts with a more dynamic interface. The compiler of this e-book and
Vidyalankar Institute of Technology give full and due credit to the authors of the contents,
developers and all websites from wherever information has been sourced. We
acknowledge our gratitude towards the websites YouTube, Wikipedia, and Google search
engine. No commercial benefits are being drawn from this project.
Unit 4 Scheduling

Contents:
Scheduling: Uniprocessor Scheduling, Multiprocessor and Real-Time Scheduling

Recommended Books:
1. Operating Systems – Internals and Design Principles, William Stallings, Pearson, 9th edition
2. Operating System Concepts, Abraham Silberschatz, Wiley, 8th edition
3. Operating Systems, Godbole and Kahate, McGraw Hill, 3rd edition

Prerequisites and Linking


UNIT-I: Process and Thread
Pre-requisites: Basics of Computers
Linked in later semesters (SEM-II to SEM-VI): Linux, Project
Processor Scheduling
1. TYPES OF PROCESSOR SCHEDULING
The aim of processor scheduling is to assign processes to be executed by the processor or processors over
time, in a way that meets system objectives, such as response time, throughput, and processor efficiency.
In many systems, this scheduling activity is broken down into three separate functions: long-, medium-,
and short-term scheduling.

a. Long-term scheduling:
a. The decision to add to the pool of processes to be executed. Long-term scheduling is
performed when a new process is created. This is a decision whether to add a new process
to the set of processes that are currently active.
b. The long-term scheduler determines which programs are admitted to the system for
processing. Thus, it controls the degree of multiprogramming. Once admitted, a job or user
program becomes a process and is added to the queue for the short-term scheduler.
c. In some systems, a newly created process begins in a swapped-out condition, in which case it
is added to a queue for the medium-term scheduler.
d. In a batch system, or for the batch portion of an OS, newly submitted jobs are routed to disk
and held in a batch queue.
e. The long-term scheduler creates processes from the queue when it can. There are two
decisions involved. The scheduler must decide when the OS can take on one or more
additional processes.
f. The scheduler must decide which job or jobs to accept and turn into processes.
g. The decision as to when to create a new process is generally driven by the desired
degree of multiprogramming. The more processes that are created, the smaller the
percentage of time that each process can be executed.
h. Thus, the long-term scheduler may limit the degree of multiprogramming to provide
satisfactory service to the current set of processes. Each time a job terminates, the scheduler
may decide to add one or more new jobs.
i. The decision as to which job to admit next can be on a simple first-come-first-served (FCFS)
basis, or it can be a tool to manage system performance. The criteria used may include
priority, expected execution time, and I/O requirements.
b. Medium-term scheduling:
a. The decision to add to the number of processes that are partially or fully in main memory.
b. Medium-term scheduling is a part of the swapping function. This is a decision whether to
add a process to those that are at least partially in main memory and therefore available
for execution.
c. The swapping-in decision is based on the need to manage the degree of multiprogramming.
d. On a system that does not use virtual memory, memory management is also an issue.
e. Thus, the swapping-in decision will consider the memory requirements of the
swapped-out processes.

c. Short-term scheduling:
a. The decision as to which available process will be executed by the processor.
b. Short-term scheduling is the actual decision of which ready process to execute next. The
short-term scheduler is invoked whenever an event occurs that may lead to the blocking of
the current process, or that may provide an opportunity to preempt a currently running
process in favor of another. Examples of such events include:
c. Clock interrupts
d. I/O interrupts
e. Operating system calls
f. Signals (e.g., semaphores)
[Figure omitted: working of the long-, medium-, and short-term schedulers]

What are the scheduling criteria?

[Figure/table omitted. Typical criteria include turnaround time, response time, deadlines, throughput, and processor utilization.]

Characteristics of Scheduling Algorithms

Non-preemptive:
• In this case, once a process is in the Running state, it continues to execute until (a) it terminates or
(b) it blocks itself to wait for I/O or to request some OS service.
• Preemptive policies incur greater overhead than non-preemptive ones but may provide better
service to the total population of processes, because they prevent any one process from
monopolizing the processor for very long.
• In addition, the cost of preemption may be kept relatively low by using efficient process-switching
mechanisms (as much help from hardware as possible) and by providing a large main memory to
keep a high percentage of programs in main memory.

Preemptive:
• The currently running process may be interrupted and moved to the Ready state by the OS.
• The decision to preempt may be performed when a new process arrives; when an interrupt
occurs that places a blocked process in the Ready state; or periodically, based on a clock
interrupt.
COARSE-GRAINED PARALLELISM:
This kind of situation is easily handled as a set of concurrent processes running on a multiprogrammed
uniprocessor and can be supported on a multiprocessor with little or no change to user software.
In general, any collection of concurrent processes that need to communicate or synchronize can benefit
from the use of a multiprocessor architecture.

MEDIUM-GRAINED PARALLELISM:
A single application can be effectively implemented as a collection of threads within a single process.
In this case, the programmer must explicitly specify the potential parallelism of an application.
Typically, there will need to be a rather high degree of coordination and interaction among the
threads of an application, leading to a medium-grain level of synchronization.

FINE-GRAINED PARALLELISM:
Fine-grained parallelism represents a much more complex use of parallelism than is found in the use
of threads. Although much work has been done on highly parallel applications, this is so far a
specialized and fragmented area, with many different approaches.

Design Issues

Scheduling on a multiprocessor involves three interrelated issues:
• The assignment of processes to processors
• The use of multiprogramming on individual processors
• The actual dispatching of a process

ASSIGNMENT OF PROCESSES TO PROCESSORS


If we assume that the architecture of the multiprocessor is uniform, in the sense that no processor
has a particular physical advantage with respect to access to main memory or to I/O devices, then the
simplest scheduling approach is to treat the processors as a pooled resource and assign processes to
processors on demand.
The question then arises as to whether the assignment should be static or dynamic. If a process is
permanently assigned to one processor from activation until its completion, then a dedicated
short-term queue is maintained for each processor. An advantage of this approach is that there may be
less overhead in the scheduling function, because the processor assignment is made once and for all.

THE USE OF MULTIPROGRAMMING ON INDIVIDUAL PROCESSORS


When each process is statically assigned to a processor for the duration of its lifetime, a new question
arises: Should that processor be multiprogrammed? The reader’s first reaction may be to wonder why
the question needs to be asked; it would appear particularly wasteful to tie up a processor with a
single process when that process may frequently be blocked waiting for I/O or because of
concurrency/synchronization considerations.
In the traditional multiprocessor, which deals with coarse-grained or independent synchronization
granularity, it is clear that each individual processor should be able to switch among a number of
processes to achieve high utilization and therefore better performance.
PROCESS DISPATCHING:
The final design issue related to multiprocessor scheduling is the actual selection of a process to run.
We have seen that, on a multiprogrammed uniprocessor, the use of priorities or of sophisticated
scheduling algorithms based on past usage may improve performance over a simple-minded
first-come-first-served strategy.

Mention three different versions of Load Sharing


• First-come-first-served (FCFS): When a job arrives, each of its threads is placed consecutively at the
end of the shared queue. When a processor becomes idle, it picks the next ready thread, which it
executes until completion or blocking.
• Smallest number of threads first: The shared ready queue is organized as a priority queue, with
highest priority given to threads from jobs with the smallest number of unscheduled threads. Jobs
of equal priority are ordered according to which job arrives first. As with FCFS, a scheduled thread
is run to completion or blocking.
• Preemptive smallest number of threads first: Highest priority is given to jobs with the smallest
number of unscheduled threads. An arriving job with a smaller number of threads than an
executing job will preempt threads belonging to the scheduled job

Thread Scheduling:
• An application can be implemented as a set of threads that cooperate and execute
concurrently in the same address space.
• On a uniprocessor, threads can be used as a program structuring aid and to overlap I/O with
processing. Because of the minimal penalty in doing a thread switch compared to a process switch,
these benefits are realized with little cost.
• Load sharing: Processes are not assigned to a particular processor. A global queue of ready
threads is maintained, and each processor, when idle, selects a thread from the queue. The term
load sharing is used to distinguish this strategy from load-balancing schemes in which work is
allocated on a more permanent basis.
• Gang scheduling: A set of related threads is scheduled to run on a set of processors at the same
time, on a one-to-one basis.
• Dedicated processor assignment: This is the opposite of the load-sharing approach and provides
implicit scheduling defined by the assignment of threads to processors. Each program, for the
duration of its execution, is allocated a number of processors equal to the number of threads in
the program. When the program terminates, the processors return to the general pool for
possible allocation to another program.
• Dynamic scheduling: The number of threads in a process can be altered during execution

Real Time Scheduling


Real-time computing may be defined as that type of computing in which the correctness of the system
depends not only on the logical result of the computation but also on the time at which the results are
produced.
• In a real-time system, some of the tasks are real-time tasks, and these have a certain degree of urgency.
• Such tasks are attempting to control or react to events that take place in the outside world.
• Because these events occur in “real time,” a real-time task must be able to keep up with the
events with which it is concerned.
• A hard real-time task is one that must meet its deadline; otherwise, it will cause unacceptable
damage or a fatal error to the system.
• A soft real-time task has an associated deadline that is desirable but not mandatory; it still makes
sense to schedule and complete the task even if it has passed its deadline.
Another characteristic of real-time tasks is whether they are periodic or aperiodic. An aperiodic task
has a deadline by which it must finish or start, or it may have a constraint on both start and finish
time. In the case of a periodic task, the requirement may be stated as “once per period T” or
“exactly T units apart.”

Real-time operating systems can be characterized as having unique requirements in five general areas:
1. Determinism:
An operating system is deterministic to the extent that it performs operations at fixed, predetermined
times or within predetermined time intervals. When multiple processes are competing for resources and
processor time, no system will be fully deterministic. In a real-time operating system, process requests for
service are dictated by external events and timings.
The extent to which an operating system can deterministically satisfy requests depends first on the speed
with which it can respond to interrupts and, second, on whether the system has sufficient capacity to
handle all requests within the required time
2. Responsiveness:
Responsiveness is concerned with how long, after acknowledgment, it takes an operating system to
service the interrupt.
Responsiveness includes the amount of time required to initially handle the interrupt and begin
execution of the interrupt service routine (ISR). If execution of the ISR requires a process switch, then
the delay will be longer than if the ISR can be executed within the context of the current process.
3. User control:
User control is generally much broader in a real-time operating system than in ordinary operating systems.
In a typical non-real-time operating system, the user either has no control over the scheduling function of
the operating system or can only provide broad guidance, such as grouping users into more than one
priority class. In a real-time system, however, it is essential to allow the user fine-grained control over
task priority.
4. Reliability:
Reliability is typically far more important for real-time systems than non-real-time systems. A transient
failure in a non-real-time system may be solved by simply rebooting the system.
A processor failure in a multiprocessor non-real-time system may result in a reduced level of service
until the failed processor is repaired or replaced. But a real-time system is responding to and
controlling events in real time.
5. Fail-soft operation
Fail-soft operation is a characteristic that refers to the ability of a system to fail in such a way as to
preserve as much capability and data as possible. For example, a typical
traditional UNIX system, when it detects a corruption of data within the kernel, issues a failure message on
the system console, dumps the memory contents to disk for later failure analysis, and terminates execution
of the system.
CPU SCHEDULING ALGORITHM

FIRST COME FIRST SERVE (FCFS)

• It is a non-preemptive algorithm where the ready queue is based on FIFO procedure.
• Processes are assigned to the CPU in the order they requested it.
• The strength of the FCFS algorithm is that it is easy to understand and equally easy to program.
• It has a major disadvantage of a high amount of waiting time.
• It also suffers from the convoy effect, wherein many small processes have to wait for a longer
process to release the CPU.

Process | Burst Time | Arrival | Start | Wait | Finish | TA
P1 | 24 | 0 | 0 | 0 | 24 | 24
P2 | 3 | 0 | 24 | 24 | 27 | 27
P3 | 3 | 0 | 27 | 27 | 30 | 30

Gantt chart: | P1 (0–24) | P2 (24–27) | P3 (27–30) |

average waiting time: (0+24+27)/3 = 17


average turnaround time: (24+27+30)/3 = 27
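The FCFS averages above can be reproduced with a short simulation (an illustrative sketch, not part of the source; it assumes all processes arrive at time 0, as in the table):

```python
# FCFS scheduling: processes run in arrival order, each waiting for all
# earlier processes to finish. Burst times taken from the example above.
def fcfs(bursts):
    """Return (waiting_times, turnaround_times) for FCFS, all arrivals at 0."""
    waits, turnarounds, clock = [], [], 0
    for burst in bursts:
        waits.append(clock)          # waits until every earlier process finishes
        clock += burst
        turnarounds.append(clock)    # finish time == turnaround when arrival is 0
    return waits, turnarounds

waits, tats = fcfs([24, 3, 3])
print(sum(waits) / len(waits))   # 17.0
print(sum(tats) / len(tats))     # 27.0
```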

SHORTEST JOB FIRST (SJF)

Each process is associated with the length of its next CPU burst.

According to the algorithm the scheduler selects the process with the shortest time

SJF is of two types

Non-preemptive: a process once scheduled will continue running until the end of
its CPU burst time.

Preemptive, also known as shortest remaining time next (SRTN): a process is
preempted if a new process arrives with a CPU burst length less than the
remaining time of the currently executing process.

SJF is an optimal algorithm which gives minimum average waiting time for any set
of processes but suffers from the drawback of assuming the run times are known
in advance.

SJF (Non-Preemptive)

Process | Burst Time | Arrival | Start | Wait | Finish | TA
P1 | 6 | 0 | 3 | 3 | 9 | 9
P2 | 8 | 0 | 16 | 16 | 24 | 24
P3 | 7 | 0 | 9 | 9 | 16 | 16
P4 | 3 | 0 | 0 | 0 | 3 | 3

Gantt chart: | P4 (0–3) | P1 (3–9) | P3 (9–16) | P2 (16–24) |

average waiting time: (3+16+9+0)/4 = 7

average turnaround time: (9+24+16+3)/4 = 13
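The non-preemptive SJF result can be checked with a minimal sketch (illustrative, not from the source; all arrivals at time 0, processes sorted by burst length):

```python
# Non-preemptive SJF: run processes in increasing order of burst time.
def sjf(bursts):
    """Return (wait, turnaround) dicts keyed by process index, arrivals at 0."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])  # shortest first
    wait, tat, clock = {}, {}, 0
    for i in order:
        wait[i] = clock
        clock += bursts[i]
        tat[i] = clock
    return wait, tat

wait, tat = sjf([6, 8, 7, 3])      # P1..P4 from the table above
print(sum(wait.values()) / 4)      # 7.0
print(sum(tat.values()) / 4)       # 13.0
```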

SJF (Preemptive)

Process | Burst Time | Arrival | Start | Wait | Finish | TA
P1 | 8 | 0 | 0 | 9 | 17 | 17
P2 | 4 | 1 | 1 | 0 | 5 | 4
P3 | 9 | 2 | 17 | 15 | 26 | 24
P4 | 5 | 3 | 5 | 2 | 10 | 7

Gantt chart: | P1 (0–1) | P2 (1–5) | P4 (5–10) | P1 (10–17) | P3 (17–26) |

average waiting time: (9+0+15+2)/4 = 6.5

average turnaround time: (17+4+24+7)/4 = 13
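The preemptive (SRTN) figures can be reproduced with a tick-by-tick simulation (an illustrative sketch under the arrival/burst data of the table above):

```python
# Preemptive SJF (shortest remaining time next): at every time unit, run the
# arrived, unfinished process with the least remaining burst.
def srtf(procs):
    """procs: list of (arrival, burst). Returns (avg wait, avg turnaround)."""
    n = len(procs)
    remaining = [b for _, b in procs]
    finish, clock, done = [0] * n, 0, 0
    while done < n:
        ready = [i for i in range(n) if procs[i][0] <= clock and remaining[i] > 0]
        if not ready:
            clock += 1
            continue
        i = min(ready, key=lambda j: remaining[j])   # least remaining time
        remaining[i] -= 1
        clock += 1
        if remaining[i] == 0:
            finish[i] = clock
            done += 1
    tat = [finish[i] - procs[i][0] for i in range(n)]
    wait = [tat[i] - procs[i][1] for i in range(n)]
    return sum(wait) / n, sum(tat) / n

print(srtf([(0, 8), (1, 4), (2, 9), (3, 5)]))   # (6.5, 13.0)
```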

PRIORITY SCHEDULING

• Priority scheduling associates a priority number with each process in its PCB
block
• The runnable process with the highest priority is assigned to the CPU
• A clash of 2 processes with the same priority is handled using FCFS
• The need to take external factors into account leads to priority scheduling.

• To prevent high-priority processes from running indefinitely, the scheduler may decrease the
priority of the currently running process at each clock tick.
• Priorities can be assigned to processes statically or dynamically.
• The algorithm suffers from starvation: low-priority processes may never execute, as they may
have to wait indefinitely for the CPU. As a solution, aging is applied to each process.
Process | Burst Time | Priority | Arrival | Start | Wait | Finish | TA
P1 | 10 | 3 | 0 | 6 | 6 | 16 | 16
P2 | 1 | 1 | 0 | 0 | 0 | 1 | 1
P3 | 2 | 4 | 0 | 16 | 16 | 18 | 18
P4 | 1 | 5 | 0 | 18 | 18 | 19 | 19
P5 | 5 | 2 | 0 | 1 | 1 | 6 | 6

Gantt chart: | P2 (0–1) | P5 (1–6) | P1 (6–16) | P3 (16–18) | P4 (18–19) |

average waiting time: (6+0+16+18+1)/5 = 8.2

average turnaround time: (16+1+18+19+6)/5 = 12
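The priority-scheduling table can be reproduced with a short sketch (illustrative; non-preemptive, all arrivals at 0, smaller priority number meaning higher priority, as in the example):

```python
# Non-preemptive priority scheduling: run in increasing priority-number order.
def priority_schedule(procs):
    """procs: list of (burst, priority). Returns (waits, turnarounds) by index."""
    order = sorted(range(len(procs)), key=lambda i: procs[i][1])
    wait, tat, clock = [0] * len(procs), [0] * len(procs), 0
    for i in order:
        wait[i] = clock
        clock += procs[i][0]
        tat[i] = clock
    return wait, tat

wait, tat = priority_schedule([(10, 3), (1, 1), (2, 4), (1, 5), (5, 2)])
print(sum(wait) / 5)   # 8.2
print(sum(tat) / 5)    # 12.0
```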

ROUND ROBIN SCHEDULING (RR)

The Round Robin scheduling algorithm has been designed specifically for time-sharing systems.

A time quantum or time slice is a small unit of time after which preemption of the running
process takes place.

The ready queue is based on FIFO order, with each process getting access in a circular manner.

The RR scheduling algorithm is thus preemptive. If there are n processes in the ready queue and the
time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units.
Each process must wait no longer than (n − 1) × q time units until its next time quantum.

The selection of time slice (q) plays an important role.

If q is very large, Round Robin behaves like FCFS.

If q is very small, it will result in too many context switches, leading to overhead.

Process | Burst Time | Arrival | Start | Wait | Finish | TA
P1 | 24 | 0 | 0 | 6 | 30 | 30
P2 | 3 | 0 | 4 | 4 | 7 | 7
P3 | 3 | 0 | 7 | 7 | 10 | 10

Time quantum = 4

Gantt chart: | P1 (0–4) | P2 (4–7) | P3 (7–10) | P1 (10–30) |

average waiting time: (6+4+7)/3 = 5.67

average turnaround time: (30+7+10)/3 = 15.67
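The Round Robin trace above can be reproduced with a small queue-based simulation (an illustrative sketch; all processes assumed to arrive at time 0, as in the example):

```python
from collections import deque

# Round Robin: each process runs for at most one quantum, then rejoins the
# back of the ready queue if it still has burst time left.
def round_robin(bursts, quantum):
    """Return (waits, turnarounds), all arrivals at 0."""
    n = len(bursts)
    remaining = list(bursts)
    finish, clock = [0] * n, 0
    queue = deque(range(n))
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)        # back of the queue for another time slice
        else:
            finish[i] = clock
    wait = [finish[i] - bursts[i] for i in range(n)]
    return wait, finish

wait, tat = round_robin([24, 3, 3], 4)
print(round(sum(wait) / 3, 2))   # 5.67
print(round(sum(tat) / 3, 2))    # 15.67
```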

DISK SCHEDULING ALGORITHM

FIRST COME FIRST SERVE (FCFS)

• The simplest form of disk scheduling is, of course, the first-come, first-served (FCFS)
algorithm. This algorithm is intrinsically fair, but it generally does not provide the
fastest service.
• Consider, for example, a disk queue with requests for I/O to blocks on cylinders
98, 183, 41, 122, 14, 124, 65, 67.
• If the disk head is initially at cylinder 53, it will first move from 53 to 98, then
to 183, 41, 122, 14, 124, 65, and finally to 67, for a total head movement of
632 cylinders.
• The wild swing from 122 to 14 and then back to 124 illustrates the problem
with this schedule. If the requests for cylinders 41 and 14 could be serviced
together, before or after the requests for 122 and 124, the total head
movement could be decreased substantially, and performance thereby improved.

Total head movements incurred while servicing these requests

= (98 – 53) + (183 – 98) + (183 – 41) + (122 – 41) + (122 – 14) + (124 – 14) + (124 –
65) + (67 – 65)

= 45 + 85 + 142 + 81 + 108 + 110 + 59 + 2

= 632
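The total above can be checked in a few lines (a sketch; FCFS simply services the requests in arrival order):

```python
# FCFS disk scheduling: total seek distance servicing requests in given order.
def seek_distance(start, requests):
    total, pos = 0, start
    for cyl in requests:
        total += abs(cyl - pos)   # head moves directly to the next request
        pos = cyl
    return total

print(seek_distance(53, [98, 183, 41, 122, 14, 124, 65, 67]))   # 632
```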

SHORTEST-SEEK TIME FIRST (SSTF)


It seems reasonable to service all the requests close to the current head position
before moving the head far away to service other requests. This assumption is the
basis for the shortest-seek-time-first (SSTF) algorithm. The SSTF algorithm selects
the request with the least seek time from the current head position. In other words,
SSTF chooses the pending request closest to the current head position.
SSTF scheduling is essentially a form of shortest-job-first (SJF) scheduling; and like
SJF scheduling, it may cause starvation of some requests. Remember that requests
may arrive at any time

Total head movements incurred while servicing these requests

= (65 – 53) + (67 – 65) + (67 – 41) + (41 – 14) + (98 – 14) + (122 – 98) + (124 – 122)
+ (183 – 124)

= 12 + 2 + 26 + 27 + 84 + 24 + 2 + 59

= 236

SCAN Scheduling
In the SCAN algorithm, the disk arm starts at one end of the disk and moves
toward the other end, servicing requests as it reaches each cylinder, until it gets to
the other end of the disk. At the other end, the direction of head movement is
reversed, and servicing continues. The head continuously scans back and forth
across the disk. The SCAN algorithm is sometimes called the elevator algorithm,
since the disk arm behaves just like an elevator in a building, first servicing all the
requests going up and then reversing to service requests the other way

Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122,
14, 124, 65, 67. The SCAN scheduling algorithm is used. The head is initially at
cylinder number 53 moving towards larger cylinder numbers on its servicing pass.
The cylinders are numbered from 0 to 199.

Total head movements incurred while servicing these requests

= (65 – 53) + (67 – 65) + (98 – 67) + (122 – 98) + (124 – 122) + (183 – 124) + (199 –
183) + (199 – 41) + (41 – 14)

= 12 + 2 + 31 + 24 + 2 + 59 + 16 + 158 + 27

= 331
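A SCAN sketch under the same assumptions (head initially moving toward larger cylinders, sweeping all the way to the last cylinder, 199, before reversing):

```python
# SCAN (elevator): sweep up to the disk's last cylinder, then sweep back down.
def scan(start, requests, max_cyl):
    up = sorted(c for c in requests if c >= start)
    down = sorted((c for c in requests if c < start), reverse=True)
    total, pos = 0, start
    # visit the end of the disk only if there are requests behind the head
    for cyl in up + ([max_cyl] if down else []) + down:
        total += abs(cyl - pos)
        pos = cyl
    return total

print(scan(53, [98, 183, 41, 122, 14, 124, 65, 67], 199))   # 331
```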

C-SCAN Scheduling
C-SCAN provides a more uniform wait time than SCAN.
The head moves from one end of the disk to the other, servicing requests as it
goes. When it reaches the other end, however, it immediately returns to the
beginning of the disk, without servicing any requests on the return trip.
It treats the cylinders as a circular list that wraps around from the last cylinder
to the first one.

Total head movements incurred while servicing these requests

= (65 – 53) + (67 – 65) + (98 – 67) + (122 – 98) + (124 – 122) + (183 – 124) + (199 –
183) + (199 – 0) + (14 – 0) + (41 – 14)

= 12 + 2 + 31 + 24 + 2 + 59 + 16 + 199 + 14 + 27

= 386
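The circular sweep can be sketched similarly (illustrative; the head sweeps up to cylinder 199, jumps back to 0, then services the remaining requests on the way up):

```python
# C-SCAN: sweep up to the last cylinder, jump to cylinder 0, sweep up again.
def cscan(start, requests, max_cyl):
    up = sorted(c for c in requests if c >= start)
    wrapped = sorted(c for c in requests if c < start)
    total, pos = 0, start
    # the 199 -> 0 return trip counts as head movement but services nothing
    path = up + ([max_cyl, 0] if wrapped else []) + wrapped
    for cyl in path:
        total += abs(cyl - pos)
        pos = cyl
    return total

print(cscan(53, [98, 183, 41, 122, 14, 124, 65, 67], 199))   # 386
```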

LOOK
• LOOK Algorithm is an improved version of the SCAN Algorithm
• Head starts from the first request at one end of the disk and moves towards
the last request at the other end servicing all the requests in between.
• After reaching the last request at the other end, head reverses its direction.
• It then returns to the first request at the starting end servicing all the
requests in between.
• The same process repeats

Total head movements incurred while servicing these requests

= (65 – 53) + (67 – 65) + (98 – 67) + (122 – 98) + (124 – 122) + (183 – 124) + (183 –
41) + (41 – 14)

= 12 + 2 + 31 + 24 + 2 + 59 + 142 + 27

= 299

C-LOOK
C-LOOK is a version of C-SCAN. The arm only goes as far as the last request in each
direction, then reverses direction immediately, without first going all the way to
the end of the disk.

Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122,
14, 124, 65, 67. The C-LOOK scheduling algorithm is used. The head is initially at
cylinder number 53 moving towards larger cylinder numbers on its servicing pass.
The cylinders are numbered from 0 to 199.

Total head movements incurred while servicing these requests

= (65 – 53) + (67 – 65) + (98 – 67) + (122 – 98) + (124 – 122) + (183 – 124) + (183 –
14) + (41 – 14)

= 12 + 2 + 31 + 24 + 2 + 59 + 169 + 27

= 326

PAGE REPLACEMENT ALGORITHM


FIRST IN FIRST OUT (FIFO)
The simplest page-replacement algorithm is a first-in, first-out (FIFO) algorithm. A
FIFO replacement algorithm associates with each page the time when that page
was brought into memory. When a page must be replaced, the oldest page is
chosen. Notice that it is not strictly necessary to record the time when a page is
brought in. We can create a FIFO queue to hold all pages in memory. We replace
the page at the head of the queue. When a page is brought into memory, we insert
it at the tail of the queue.

Total Page Fault = 15
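The reference string used in the worked figure is not reproduced in the text; the sketch below assumes the classic 20-element string from Silberschatz with 3 frames, which yields the 15 faults quoted above:

```python
from collections import deque

# FIFO page replacement: on a fault with full frames, evict the oldest page.
def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                          # hit: nothing to do
        faults += 1
        if len(frames) == nframes:
            frames.discard(queue.popleft())   # evict the longest-resident page
        frames.add(page)
        queue.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))   # 15
```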



OPTIMAL PAGE REPLACEMENT (OPT)

The optimal policy selects for replacement that page for which the time to the next
reference is the longest. It can be shown that this policy results in the fewest
number of page faults
Clearly, this policy is impossible to implement, because it would require the OS to
have perfect knowledge of future events. However, it does serve as a standard
against which to judge real-world algorithms

Total Page fault = 9
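Although OPT cannot be implemented in a real OS, it can be simulated when the whole reference string is known in advance. The sketch below assumes the same classic Silberschatz string with 3 frames, which yields the 9 faults quoted above:

```python
# Optimal page replacement: evict the resident page whose next use is farthest
# in the future (or that is never used again).
def opt_faults(refs, nframes):
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            future = refs[i + 1:]
            victim = max(frames,
                         key=lambda p: future.index(p) if p in future else len(future))
            frames.discard(victim)
        frames.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(refs, 3))   # 9
```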

LEAST RECENTLY USED PAGE REPLACEMENT (LRU)


The least recently used (LRU) policy replaces the page in memory that has not
been referenced for the longest time. By the principle of locality, this should be the
page least likely to be referenced in the near future. And, in fact, the LRU policy
does nearly as well as the optimal policy. The problem with this approach is the
difficulty in implementation. One approach would be to tag each page with the
time of its last reference; this would have to be done at each memory reference,
both instruction and data. Even if the hardware would support such a scheme, the
overhead would be tremendous. Alternatively, one could maintain a stack of page
references, again an expensive prospect

Total Page Fault = 12
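LRU can be simulated cheaply by keeping resident pages ordered by recency (a sketch; again assuming the classic Silberschatz reference string with 3 frames, which yields the 12 faults quoted above):

```python
# LRU page replacement: evict the page that has gone unreferenced the longest.
def lru_faults(refs, nframes):
    frames, faults = [], 0          # frames[0] is the least recently used page
    for page in refs:
        if page in frames:
            frames.remove(page)     # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)       # evict the least recently used page
        frames.append(page)         # most recently used goes to the end
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # 12
```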

Banker’s Algorithm
The total amount of resources R1, R2, and R3 are 9, 3, and 6 units, respectively. In
the current state allocations have been made to the four processes, leaving 1 unit
of R2 and 1 unit of R3 available. Is this a safe state?

BUDDY SYSTEM
Both fixed and dynamic partitioning schemes have drawbacks. A fixed partitioning scheme limits
the number of active processes and may use space inefficiently if there is a poor match between
available partition sizes and process sizes. A dynamic partitioning scheme is more complex to
maintain and includes the overhead of compaction. An interesting compromise is the buddy
system.

ASSIGNMENT QUESTIONS
Calculate the average waiting and turnaround time for the following using FCFS, SRT,
priority (a smaller priority number implies higher priority), and round robin with
quantum 30 ms.

A disk has 200 cylinders numbered 0 to 199. The disk arm starts at cylinder 53 and
the direction of movement is toward the outer tracks. If the sequence of disk
requests is as follows: 98, 183, 37, 122, 14, 124, 65, 67, calculate the total seek time
using the FCFS, SSTF, SCAN, C-SCAN, LOOK, and C-LOOK disk scheduling algorithms.
Suppose a disk has 1000 cylinders numbered from 0 to 999. The disk arm starts at
cylinder 143 and is moving upward (toward higher cylinder numbers). Given the
sequence of disk requests: 562, 345, 699, 143, 892, 42, 500, 400, 67, calculate the
total seek time using the Elevator (SCAN) disk scheduling algorithm.
A 1-Mbyte block of memory is allocated using the buddy system. a. Show the
results of the following sequence:
Request 70; Request 35; Request 80; Return A; Request 60; Return B; Return D;
Return C. b. Show the binary tree representation following Return B
Given the following state for the Banker’s Algorithm.

6 processes P0 through P5

4 resource types: A (15 instances); B (6 instances)

C (9 instances); D (10 instances)

Snapshot at time T0:



a. Verify that the Available array has been calculated correctly.

b. Calculate the Need matrix.

c. Show that the current state is safe, that is, show a safe sequence of processes. In

addition, to the sequence show how the Available (working array) changes as each

process terminates.

d. Given the request (3,2,3,3) from Process P5, should this request be granted? Why
or why not?
Consider the following string of page references 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2.
Show the frame allocation and calculate the page fault
a. FIFO (first-in-first-out)
b. LRU (least recently used)
c. OPT(Optimal Page Replacement)
Practice Questions:
1. Express your view on the below mentioned design issues related to processor: Assignment of
processes to processor.
2. Explain the working and the functioning of Medium-term scheduling.
3. Mention and explain the unique requirements of Real-time operating systems.
4. Compare Shortest Remaining Time algorithm with Shortest Process Next.
5. Explain the following terms related to performance of a processor: Turnaround time, Response
time, Deadlines, throughput.
6. Explain the working and the functioning of Short-term scheduling.
7. Mention and explain three different versions of Load Sharing.
8. Explain independent and fine-grained parallelism in operating systems.
9. Explain the working and the functioning of Long-term scheduling.
10. What are preemptive and non-preemptive processes in operating systems?
11. Mention and explain the five different types of Granularity in operating systems.
12. Mention the three design issues of scheduling on multiprocessor. Explain any one of them in
detail.
13. Mention and explain the different classification of Multiprocessor system.
14. Calculate the average waiting and turnaround time for the following using priority scheduling &
FCFS.

15. Complete the below table with proper description related to granularity in operating systems:
Grain Size Description
Fine
Medium
Coarse
Very Coarse
Independent

YT Links:
1. https://www.youtube.com/watch?v=2dJdHMpCLIg
2. https://www.youtube.com/watch?v=MZdVAVMgNpA
3. https://www.youtube.com/watch?v=zFnrUVqtiOY
4. https://www.youtube.com/watch?v=Q-4jzYIcdxM

Practice problems:
1. Gate Vidyalaya: https://www.gatevidyalay.com/cpu-scheduling-practice-problems-
numericals/
2. Geeksforgeeks: https://www.geeksforgeeks.org/cpu-scheduling-in-operating-systems/
3. NotesJam: https://www.notesjam.com/2018/11/scheduling-algorithms-examples.html
