
Operating System Basics

Lesson-1

Prepared by
JAISE JOSE
Operating System
Syllabus
• System calls, processes, threads,
inter‐process communication, concurrency
and synchronization. Deadlock. CPU and
I/O scheduling. Memory management and
virtual memory. File systems.
Planned class flow
• Introduction and discussion.
• Processes and Threads
• Interprocess Communication, Concurrency
and Synchronization
• Deadlock and CPU Scheduling
• Memory Management and Virtual Memory
• File Systems, I/O Systems, Protection and
Security
Hardware Basics
• OS and hardware closely tied together
• Many useful hardware features have been
invented to complement the OS
• Basic hardware resources
– CPU
– Memory
– Disk
– I/O
CPU
• CPU controls everything in the system
– if work needs to be done, CPU gets involved
• Most precious resource
– this is what you're paying for
– want to get high utilization (from useful work)
• Only one process on a CPU at a time
• Hundreds of millions of instructions / sec
– and getting faster all the time
Memory
• Limited in capacity
– never enough memory
• Temporary (volatile) storage
• Electronic storage
– fast, random access
• Any program to run on the CPU must be in
memory
Disk
• Virtually infinite capacity
• Permanent storage
• Orders of magnitude slower than memory
– mechanical device
– millions of CPU instructions can execute in the
time it takes to access a single piece of data on
disk
• All data is accessed in blocks
– usually 512 bytes
I/O
• Disk is actually part of the I/O subsystem
– I/O devices are of special interest to the OS
• Many other I/O devices
– printers, monitor, keyboard, etc.
• Most I/O devices are painfully slow
• Need to find ways to hide I/O latency
– like multiprogramming
What is an Operating System?

An operating system is an event-driven program which acts as an interface between a user of a computer and the computer hardware.
✓ An operating system is an intermediary between a computer user and the hardware.
✓ Makes the hardware convenient to use.
✓ Manages system resources.
✓ Uses the hardware in an efficient manner.
System Call
• A System Call is a mechanism by which a
program requests a service from the
operating system's kernel. It acts as a bridge
between user-level applications and the OS,
allowing programs to execute functions like
reading from a file, writing to a disk,
allocating memory, and more.
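On POSIX systems these kernel services are reached through thin wrappers. As an illustrative sketch (the filename demo.txt is just an example), Python's os module exposes several of them directly — each call below traps into the kernel via open(2), write(2), read(2), or close(2):

```python
import os

# open(2): create a file for writing, with permission bits 0644
fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
os.write(fd, b"hello, kernel\n")   # write(2)
os.close(fd)                       # close(2)

fd = os.open("demo.txt", os.O_RDONLY)
data = os.read(fd, 64)             # read(2): up to 64 bytes
os.close(fd)
os.remove("demo.txt")              # unlink(2)
print(data)  # b'hello, kernel\n'
```

Every one of these lines crosses the user/kernel boundary exactly as the definition above describes.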
“inside view”
From the “inside view” of the system, the OS is written as a collection of procedures, each of which can call any other whenever required. Each procedure has a well-defined interface with input parameters and output results.
System Diagram (layered view, top to bottom):
• Users: user 1, user 2, ..., user n
• System and application programs: compiler, text editor, ls, ...
• Operating system
• Hardware
Types of Systems
Batch System
• Submit large number of jobs at one time
• System decides what to run and when
Multiprogramming Operating
System
• A single user cannot keep either the CPU or
the I/O devices busy at all times.
Multiprogramming increases CPU
utilization by organizing jobs so that the
CPU is not idle at any time. Jobs to be
executed are maintained in the memory
simultaneously and the OS switches among
these jobs for their execution.
• When one job waits for some input (WAIT
state), the CPU switches to another job. This
process is followed for all jobs in memory.
When the wait for the first job is over, the
CPU is back to serve it.
• The CPU will be busy until all the jobs have
been executed, thus increasing CPU
utilization and throughput.
Multitasking Operating System
• A multitasking system is an extension of a
multiprogramming system. In this system,
jobs are executed in time-sharing mode.
• It uses CPU scheduling and
multiprogramming techniques to give each
user a small portion of CPU time (a time
slice). Each user has at least one separate
program in memory.
Multiprocessor operating system
A multiprocessor operating system runs on a system with more
than one processor.
It improves the throughput of the system and is also more
reliable than a single-processor system, because if one processor
breaks down, the others can share the load of that
processor.
RTOS
The Real-Time Operating System (RTOS) is specialized to
handle applications that need to process data in real time,
meaning it must respond to inputs and deliver outputs within a
specific time frame to meet the requirements of time-sensitive
tasks.

Deterministic Response Time:
An RTOS is designed to handle requests with minimal
latency, ensuring tasks are completed within precise time
constraints. This is crucial for applications like embedded
systems, robotics, and aerospace.
Types of RTOS:
• Soft RTOS: Can tolerate occasional deadline misses without severe
consequences. Multimedia systems (e.g., video streaming) are typical
examples, where slight delays do not impact overall functionality.
• Hard RTOS: Must meet strict deadlines without fail. Failure to do so
may result in system failure or hazards. Examples include control
systems in airplanes or medical devices.
Which of the following systems is an example of a hard real-
time system?
a) Online transaction system
b) Multimedia streaming
c) Anti-lock braking system
d) Web server

Ans. c) Anti-lock braking system
• Consider three periodic tasks with execution
times of 2 ms, 1 ms, and 2 ms, and
respective periods of 5 ms, 10 ms, and 20
ms. Check if these tasks are schedulable on
a single processor.
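The exercise above can be checked with the utilization-based test: on a single processor under EDF scheduling, the task set is schedulable iff U = ΣCi/Ti ≤ 1, and a sufficient test for rate-monotonic scheduling uses the bound n(2^(1/n) − 1). A small sketch:

```python
# Schedulability check for the periodic task set above.
executions = [2, 1, 2]    # execution times C_i (ms)
periods    = [5, 10, 20]  # periods T_i (ms)

n = len(executions)
utilization = sum(c / t for c, t in zip(executions, periods))
rm_bound = n * (2 ** (1 / n) - 1)  # Liu & Layland bound for rate-monotonic

print(f"U = {utilization:.2f}")                          # U = 0.60
print(f"EDF schedulable: {utilization <= 1}")            # True
print(f"RM sufficient test: {utilization <= rm_bound}")  # True (0.60 <= ~0.78)
```

Since U = 2/5 + 1/10 + 2/20 = 0.6 ≤ 1, the tasks are schedulable on a single processor; they even pass the stricter rate-monotonic bound of about 0.78.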
An operating system is concerned with
the management functions of the
following resources:
(a) process management,
(b) memory management,
(c) file management
(d) device management.
PROCESS MANAGEMENT
• A process is a program in execution. It is an active
entity that represents a running instance of a
program, which includes not only the program’s
code but also its current activity, such as the
program counter, register contents, memory
allocation, and other dynamic elements. A process
is managed by the operating system, which keeps
track of its state, resources, and scheduling needs.
Attributes of a Process
Each process in an operating system is represented by a set of attributes stored in a Process
Control Block (PCB). The PCB keeps track of essential details about each process,
allowing the OS to manage and schedule processes effectively.
Process States
• A process can exist in various states during its lifecycle. These states help the operating system
manage and schedule processes effectively. Here’s a summary of each:
• New: This is the initial state, where a new process is created. The OS sets up the necessary resources
and structures for the process before it begins execution.
• Running: In this state, the process is currently being executed by the CPU. Only one process per
CPU core can be in the running state at any moment.
• Waiting: The process is waiting for a resource that is currently unavailable, such as an I/O operation
to complete or a shared resource to be released by another process.
• Ready: The process has all the resources it needs and is ready to run but is waiting for CPU time. It’s
in a queue for the CPU to become available.
• Terminated: The process has completed execution, and the OS has released its resources. The
process is removed from the process table.
• Suspended Ready: When there are more processes than available main memory, some ready
processes are moved to secondary storage to free up space. These processes are in a ready state but
are temporarily stored in secondary memory.
• Suspended Wait: Similar to suspended ready, but the process is in a waiting state. It’s waiting for a
resource while also being in secondary storage due to memory limitations.

These states, particularly the suspended states, help manage memory efficiently and allow
the OS to handle more processes than the main memory can accommodate.
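The lifecycle above can be sketched as a transition table. The state names and the allowed moves below follow the descriptions on this slide; this is a simplification, since real kernels track additional states:

```python
# Which states a process may legally move to, per the slide's lifecycle.
TRANSITIONS = {
    "new":             ["ready"],
    "ready":           ["running", "suspended_ready"],
    "running":         ["waiting", "ready", "terminated"],
    "waiting":         ["ready", "suspended_wait"],
    "suspended_ready": ["ready"],
    "suspended_wait":  ["waiting", "suspended_ready"],
    "terminated":      [],
}

def can_transition(src, dst):
    return dst in TRANSITIONS.get(src, [])

print(can_transition("running", "waiting"))  # True  (process blocks on I/O)
print(can_transition("waiting", "running"))  # False (must pass through ready)
```

Note that a waiting process can never go straight to running: it must first become ready and be selected by the scheduler.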
Schedulers
• In an operating system, there are three types of
schedulers that manage the processes in different stages of
execution. These schedulers are responsible for deciding
which process runs and when, based on the system's goals
of maximizing CPU utilization, throughput, and
minimizing wait times. Here are the three types of
schedulers:
– Long-term Scheduler (or Admission Scheduler)
– Short-term Scheduler (or CPU Scheduler)
– Mid-term Scheduler (or Swapper)
Long-term Scheduler (or Admission Scheduler):
•Function: The long-term scheduler controls the process admission into
the system. It decides when a new process is brought into the ready
queue for execution. Essentially, it controls the degree of
multiprogramming—i.e., how many processes can be in memory at
once.
•Timing: It runs less frequently compared to the other two schedulers,
typically when new processes need to be loaded into the system from
secondary storage (disk).
•Objective: It ensures that the system does not get overloaded by too
many processes, maintaining a balance between CPU-bound and I/O-
bound processes.
Short-term Scheduler (or CPU Scheduler):
•Function: The short-term scheduler selects a process from
the ready queue and allocates CPU time to it. It moves
processes from the ready state to the running state, typically
making scheduling decisions on a very short time scale
(milliseconds).
•Timing: It is invoked frequently, as often as every few
milliseconds, depending on the scheduling algorithm.
•Objective: Its primary goal is to ensure that the CPU is
efficiently utilized by allocating CPU time to processes that
are ready to execute.
Mid-term Scheduler (or Swapper):
•Function: The mid-term scheduler is responsible for suspending
processes and resuming them when necessary. It decides which processes
should be swapped out of main memory to secondary storage (disk) and
which ones should be swapped back into memory. This helps to manage
memory usage when there are more processes than available physical
memory.
•Timing: It is invoked when the system is under memory pressure or
when a process needs to be suspended and swapped out.
•Objective: It aims to optimize memory usage and system performance by
managing the processes in secondary memory and main memory
efficiently.
Dispatcher
The dispatcher is responsible for:
• Context Switching: Saving the state (context) of
the currently running process and loading the state
of the next process to be executed.
• Switching Processes: Handling the transition
from one process to another by managing the
Program Counter and other registers.
Degree of Multiprogramming
• This term refers to the number of processes currently held in
memory and is controlled by the long-term scheduler.
• Higher Degree of Multiprogramming typically means more
processes are in the system, leading to higher CPU utilization
but possibly also more overhead.
Ready Queue and Data Structure
• The ready queue holds all processes that are ready to execute
and waiting for CPU time.
• A doubly linked list is often preferred for implementing the
ready queue because it allows efficient insertion and deletion of
processes from both ends
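As a sketch, Python's collections.deque (a doubly linked structure with O(1) operations at both ends) can model this ready queue; the process names are hypothetical:

```python
from collections import deque

ready_queue = deque()
ready_queue.append("P1")         # new process joins the tail
ready_queue.append("P2")
ready_queue.appendleft("P0")     # an urgent arrival can be placed at the head
running = ready_queue.popleft()  # dispatcher takes the process at the head
print(running, list(ready_queue))  # P0 ['P1', 'P2']
```

Both append and popleft run in constant time, which is exactly why a doubly linked list suits this workload better than a plain array.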
Each type of scheduler (particularly the short-term
scheduler) uses a scheduling algorithm to choose which
process to prioritize:

• First Come, First Serve (FCFS): Processes are scheduled in the order of arrival.
• Shortest Job First (SJF): Prioritizes processes with the shortest execution time.
• Shortest Remaining Time First (SRTF): A preemptive version of SJF that always
picks the process with the shortest remaining time.
• Round Robin: Assigns each process a fixed time slice in a cyclic order.
• Priority-based: Prioritizes processes based on their assigned priority levels.
• Highest Response Ratio Next (HRRN): Balances waiting time and burst time to select
the process with the highest response ratio.
• Multilevel Queue: Separates processes into multiple queues based on type or priority,
each queue possibly having its own algorithm.
• Multilevel Feedback Queue: Allows processes to move between queues based on their
behavior and aging, optimizing both short and long tasks.
How Schedulers Use
Scheduling Algorithms
• Short-term Scheduler: Uses scheduling algorithms like
FCFS, SJF, Round Robin, or Priority to determine which
process in the ready queue gets CPU time.
• Long-term Scheduler: Sometimes relies on admission
criteria or priority algorithms to determine which processes
enter the system from secondary storage.
• Mid-term Scheduler: May use algorithms that prioritize
processes based on memory usage or system load, deciding
which processes to suspend or swap out.
CPU Scheduling
• CPU scheduling is the technique to put a process from ready queue to running
state. The objective of CPU scheduling is to minimize the turnaround time and
average waiting time of a process. The short-term scheduler is used to apply
the CPU scheduling in the ready state to select the process to schedule on to
the processor (running state).
• The short-term scheduler (also known as the CPU scheduler) is responsible
for selecting the next process from the ready queue and allocating the CPU to
that process. This decision is made based on the scheduling algorithm being
used. The short-term scheduler typically makes decisions every few
milliseconds, ensuring that processes get fair access to the CPU.
Key Scheduling Objectives:
• Maximize CPU utilization: Ensuring the CPU is used as efficiently as
possible.
• Fair process execution: Preventing starvation and ensuring that all processes
get a fair share of CPU time.
• Optimizing throughput: Maximizing the number of processes completed
within a given time.
CPU Scheduling Algorithms
CPU scheduling algorithms are used by the operating system to decide the order in which processes
are executed by the CPU. Each algorithm has its own way of managing the ready queue to optimize
various performance metrics like waiting time, turnaround time, and response time.
There are many CPU scheduling algorithms as follows:
• First come, first serve (FCFS)
• Shortest job first (SJF)
• Shortest remaining time first (SRTF)
• Round robin
• Priority based
• Highest response ratio next (HRRN)
• Multilevel queue
• Multilevel feedback queue
Preemptive scheduling vs Non-
preemptive scheduling
Preemptive scheduling is a CPU scheduling method where a process can be interrupted during its execution and
moved back to the ready queue to allow another process to run. This is useful for prioritizing shorter or higher-
priority tasks, allowing for more efficient use of CPU resources in multitasking systems. Preemptive scheduling
makes it possible to switch the CPU from one process to another, helping to prevent long processes from
monopolizing the CPU, and is especially useful in real-time and interactive systems where responsiveness is
important.
Key Characteristics of Preemptive Scheduling:
• Interruption: Processes can be interrupted, which allows the system to allocate CPU time to more urgent or
shorter tasks.
• Responsive: Increases responsiveness, especially for high-priority tasks, as the CPU isn’t tied up by long-
running processes.
• Better resource allocation: Makes it easier to balance CPU usage across multiple processes, improving
overall system performance.
Examples of Preemptive Scheduling Algorithms:
– Shortest Remaining Time First (SRTF): Schedules the process with the shortest remaining execution time.
– Round Robin (RR): Processes are given a fixed time slice, after which they’re preempted if not complete.
– Priority Scheduling (preemptive): Processes are interrupted if a higher-priority process arrives.
Comparison with Non-Preemptive Scheduling:
• Non-preemptive scheduling allows a process to run to completion once it starts, without interruptions.
• Preemptive scheduling is generally better suited for time-sharing and real-time systems where responsiveness
and dynamic allocation of CPU resources are required, while non-preemptive scheduling may be preferred in
simpler, batch-processing systems.
Time Periods
First-Come, First-Serve (FCFS)
Scheduling
• FCFS (First-Come, First-Serve) is one of the simplest CPU
scheduling algorithms. It works on the principle that the process that
arrives first gets executed first. The CPU is allocated to processes in
the order they arrive in the ready queue.
Characteristics of FCFS Scheduling:
• Non-preemptive: Once a process starts executing, it runs to
completion without being interrupted by other processes.
• Queue-based: A queue (usually a FIFO queue) is used to manage the
processes. The first process that arrives is added to the end of the
queue, and the CPU scheduler selects the process at the head of the
queue for execution.
• Simple: FCFS is easy to implement and understand.
• Fair: It treats all processes equally based on their arrival time.
However, it may not always be efficient in terms of performance.
Steps in FCFS Scheduling:
• Process Arrival: When a new process arrives in the
system, it enters the ready queue.
• Process Scheduling: The process at the head of the
queue is selected for execution.
• Execution: The selected process runs to completion,
and then the next process in the queue is selected.
• Queue Management: After the running process
completes, it is removed from the queue, and the
process at the front of the queue is scheduled next.
Advantages, Disadvantages of FCFS
Scheduling:
Advantages of FCFS Scheduling:
• Simplicity: Easy to understand and implement.
• Fairness: Processes are scheduled in the order they arrive, ensuring no process is treated
unfairly.

Disadvantages of FCFS Scheduling:


• Convoy Effect: If a long process arrives before several short processes, it can delay the
execution of short processes, increasing their waiting time significantly.
• Poor Average Waiting Time: Processes with long burst times can cause a high average waiting
time for other processes.
• Non-preemptive: It doesn’t allow interrupting processes once they start, even if a new process
with a higher priority arrives.
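The FCFS steps above can be sketched as a short simulation. The arrival and burst times below are hypothetical, not taken from the exercise that follows:

```python
# Minimal FCFS sketch: processes run to completion in arrival order.
processes = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1)]  # (name, arrival, burst)

time = 0
results = {}
for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
    start = max(time, arrival)       # CPU may sit idle until the process arrives
    completion = start + burst
    results[name] = {
        "waiting": start - arrival,
        "turnaround": completion - arrival,
    }
    time = completion

avg_wait = sum(r["waiting"] for r in results.values()) / len(results)
print(results, avg_wait)
```

Here the long P1 delays the short P3 by 6 units (the convoy effect in miniature), giving an average waiting time of 10/3 ≈ 3.33.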
Calculate the average turnaround time using FCFS
scheduling
Gantt Chart:
Calculate the average turnaround time and average waiting time
using FCFS scheduling
Shortest Job First (SJF)
Description: Shortest Job First (SJF) is a CPU scheduling algorithm
that selects the process with the shortest processing time (or burst time)
from the set of processes that are ready to execute. Here’s how it works
and its advantages and disadvantages:
Pros: Minimizes average waiting time.
• Cons: It is difficult to predict burst times accurately. Can cause
starvation for longer processes.
• Example: If process A needs 4 ms, B needs 2 ms, and C needs 1 ms,
then C will be executed first, followed by B, and then A.
Key Concepts of Non-preemptive SJF
1. Selection Criteria:
– The CPU is assigned to the process with the shortest burst time.
– If multiple processes have the same burst time, they are scheduled in order of arrival.
2. Non-preemptive Nature:
– Once the CPU starts executing a process, it runs to completion without interruption.
– This differs from preemptive algorithms, where a process can be stopped if a new, higher-priority process arrives.
3. Turnaround and Waiting Time:
– Turnaround Time is the total time taken from a process’s arrival to its completion.
– Waiting Time is the time a process spends in the ready queue, waiting for its turn to execute.
4. Average Turnaround and Waiting Times:
– Non-preemptive SJF aims to minimize the average waiting time by prioritizing shorter jobs, which usually results in lower average turnaround time as well.
5. Suitability:
– SJF is optimal for minimizing average waiting time when all jobs arrive simultaneously or have predictable burst times.
– It is best suited for systems with non-interactive tasks, like batch processing systems.
– However, it is less suitable for real-time systems or environments where processes arrive continuously and require rapid response times.
Steps in SJF Scheduling
•When the CPU is free, the scheduler checks the
list of processes that are in the ready queue.
•It chooses the process with the minimum burst
time and allocates the CPU to it.
•This repeats for each process until all processes
are executed.
Following the non-preemptive SJF scheduling rule:
• At time 0, P1 is the only process available, so it starts execution.
• At time 1, P2 arrives, but P1 continues to execute since it is non-preemptive.
• At time 2, P3 arrives. Although P3 has the shortest burst time (1), P1 continues
executing until it finishes.
• Once P1 completes at time 7, P3 (which has the shortest burst time among the remaining
processes) runs next.
• After P3 completes, P2 will run as the only remaining process
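The walkthrough above can be reproduced in code. P1 and P3 follow the slide (arrival 0 with burst 7, arrival 2 with burst 1); P2's burst time is not stated on the slide, so the value 4 below is an assumption:

```python
# Non-preemptive SJF: pick the shortest ready job, run it to completion.
processes = [("P1", 0, 7), ("P2", 1, 4), ("P3", 2, 1)]  # (name, arrival, burst)

time, order, remaining = 0, [], list(processes)
while remaining:
    ready = [p for p in remaining if p[1] <= time]
    if not ready:                       # CPU idle until the next arrival
        time = min(p[1] for p in remaining)
        continue
    name, arrival, burst = min(ready, key=lambda p: (p[2], p[1]))
    order.append(name)
    time += burst                       # runs to completion, no preemption
    remaining.remove((name, arrival, burst))

print(order)  # ['P1', 'P3', 'P2']
```

The output matches the narrative: P1 runs first despite P3's shorter burst, because scheduling decisions are only made when the CPU becomes free.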
Shortest Remaining Time First
(SRTF)
Shortest Remaining Time First (SRTF) is the preemptive version of
Shortest Job First (SJF) scheduling. In this approach, the CPU is assigned
to the process with the shortest remaining burst time, even if a process is
currently running. If a new process arrives with a burst time shorter than
the remaining time of the currently running process, SRTF will preempt
the current process and allocate the CPU to the new process.
How SRTF Works
• Process Arrival and Preemption: Whenever a new process arrives,
the scheduler checks if this new process has a shorter burst time than
the remaining time of the current process. If it does, the current process
is interrupted, and the CPU is reassigned to the new process.
• Context Switching: This frequent checking and possible preemption
require saving the state of the interrupted process and loading the new
process, which leads to context switching overhead.
SRTF
Advantages of SRTF
• Optimal Waiting Time: By always choosing the process with the shortest
remaining time, SRTF minimizes the average waiting time, making it one of
the most efficient algorithms for reducing turnaround and waiting times.
• Improved Response for Short Jobs: Small, quick jobs get executed faster,
which improves response time for interactive systems with short tasks.
Disadvantages of SRTF
• Context Switching Overhead: Because SRTF can frequently interrupt
processes, it leads to higher context switching, which can reduce overall
system efficiency.
• Starvation of Long Jobs: Similar to SJF, SRTF can cause long processes to
wait indefinitely if shorter jobs keep arriving, which is known as starvation.
• Requires Precise Burst Time: The algorithm needs an accurate prediction of
burst times, which may be difficult in many systems.
Key Characteristics of SRTF
• Preemptive Nature:
– In SRTF, a process may be preempted if another process with a shorter remaining burst time arrives.
This makes SRTF more dynamic than the non-preemptive version (SJF).
• Selection Criteria:
– The CPU is assigned to the process with the shortest remaining burst time.
– If a new process arrives with a shorter remaining burst time than the currently running process, the
running process is suspended, and the new process is executed.
• Completion Time, Turnaround Time, and Waiting Time:
– Turnaround Time = Completion Time − Arrival Time; Waiting Time = Turnaround Time − Burst Time.
• Optimizing Average Waiting Time:
– SRTF tries to minimize the average waiting time, just like SJF, but it can achieve better results in some
cases because it allows more frequent context switching to favor shorter processes.
• Starvation:
– Similar to SJF, SRTF may lead to starvation for processes with long burst times if shorter processes
keep arriving.
How SRTF Works
• SRTF works by continuously comparing the remaining
burst time of all processes in the ready queue. It always
selects the process with the shortest remaining burst
time to execute. Here’s the step-by-step process for
scheduling:
• At any given time, SRTF looks at all the processes in the
ready queue and selects the process with the shortest
remaining burst time.
• If a new process arrives with a shorter remaining burst
time than the currently running process, the CPU
switches to the new process (preemption).
• This continues until all processes complete execution.
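The steps above can be sketched as a unit-by-unit simulation, using hypothetical processes. Each loop iteration advances time by 1 ms and re-selects the process with the least remaining burst, so a newly arrived short job preempts automatically:

```python
# Preemptive SRTF: re-evaluate the shortest remaining time every time unit.
processes = {"P1": (0, 5), "P2": (1, 2), "P3": (3, 1)}  # name: (arrival, burst)

remaining = {name: burst for name, (arrival, burst) in processes.items()}
time, timeline = 0, []
while any(remaining.values()):
    ready = [n for n, (arrival, _) in processes.items()
             if arrival <= time and remaining[n] > 0]
    current = min(ready, key=lambda n: remaining[n])  # shortest remaining time
    timeline.append(current)
    remaining[current] -= 1
    time += 1

print(timeline)  # ['P1', 'P2', 'P2', 'P3', 'P1', 'P1', 'P1', 'P1']
```

P1 is preempted at time 1 by the shorter P2 and again at time 3 by P3, only finishing once no shorter job remains — each such switch would incur real context-switching overhead on actual hardware.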
Round Robin (RR) Scheduling
• In the Round Robin (RR) scheduling algorithm, each process in the
ready queue is given a fixed amount of CPU time, known as the time
quantum. Processes are executed in a circular order, with each process
receiving the CPU in turn for one time quantum, ensuring fair access to
the CPU. If a process requires less time than the quantum, it completes
its execution within that cycle and releases the CPU before the
quantum expires. If a process requires more time than the quantum, it
will be paused when the quantum is up and placed back in the ready
queue to await its next turn.
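A minimal Round Robin sketch with a hypothetical quantum of 2 and three processes assumed to arrive at time 0:

```python
from collections import deque

quantum = 2
burst = {"P1": 5, "P2": 3, "P3": 1}    # remaining burst time per process
ready = deque(["P1", "P2", "P3"])      # circular service order

timeline = []
while ready:
    p = ready.popleft()
    run = min(quantum, burst[p])       # a short job may finish mid-quantum
    timeline.append((p, run))
    burst[p] -= run
    if burst[p] > 0:
        ready.append(p)                # unfinished: back to the tail of the queue

print(timeline)
# [('P1', 2), ('P2', 2), ('P3', 1), ('P1', 2), ('P2', 1), ('P1', 1)]
```

P3 releases the CPU after 1 unit without waiting out its quantum, while P1 and P2 cycle back until their bursts are exhausted.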
Option (a): P = 4, Q = 10, R = 6, S = 2
• P completes in one time quantum (4 units). No context switch needed after S to P since P completes in one go.
• S completes in half a quantum (2 units).
• Q and R will need multiple turns due to their burst times.
Based on the context switching requirements, this setup could satisfy all given conditions.
So, Option (a) is possible.
Option (b): P = 2, Q = 9, R = 5, S = 1
• P completes in less than one time quantum (2 units).
• S completes in less than one time quantum (1 unit).
• Q and R need multiple cycles.
This setup could satisfy the context-switching requirements.
So, Option (b) is possible.
Option (c): P = 4, Q = 12, R = 5, S = 4
• P completes in one full time quantum (4 units).
• S completes in one full time quantum (4 units).
• Q and R would require multiple turns due to their burst times.
This setup could also satisfy the context-switching requirements. So, Option (c) is possible.

Option (d): P = 3, Q = 7, R = 7, S = 3
1.P and S complete in less than one time quantum (3 units each).
2.Q and R would need multiple turns (7 units each), but the context-switching requirements would not
match this burst time distribution.
This setup does not satisfy the context-switching requirements because P and S both complete too
quickly, leaving too few possible context switches to match the requirement.
Priority Scheduling
• In Priority Scheduling, each process is assigned a
priority value, and the process with the highest
priority is selected for CPU execution first. This
approach makes use of a priority queue data
structure, where the highest priority process
appears at the front of the queue. When two or
more processes have the same priority, they are
scheduled in First-Come, First-Serve (FCFS)
order to ensure fair handling.
Key Features of Priority
Scheduling:
• Priority-Based Execution: Processes are executed based on their priority
levels, not just arrival time or burst time. This means that a process with a
higher priority will preempt a lower-priority one if it becomes ready.
• Starvation Issue: One major drawback of this algorithm is starvation. Lower-
priority processes can be indefinitely delayed if higher-priority processes
continuously arrive. This leads to a situation where a low-priority process
might never get a chance to execute, as it keeps getting pushed back in the
queue.
• Possible Solution - Aging: To mitigate starvation, a technique called aging is
often implemented. In aging, the priority of a waiting process is gradually
increased the longer it stays in the queue, so that it eventually gets a chance to
execute, even if it initially had a low priority.
Types of Priority Scheduling:
• Preemptive Priority Scheduling: If a new
process arrives with a higher priority than
the currently executing process, the CPU is
preempted and allocated to the new process.
• Non-Preemptive Priority Scheduling:
Once a process starts execution, it continues
until completion, even if a higher-priority
process arrives.
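A non-preemptive priority sketch; the convention that a lower number means higher priority, and the process values themselves, are assumptions for illustration:

```python
# Non-preemptive priority scheduling with FCFS tie-breaking.
# (name, arrival, burst, priority) — lower priority number = more urgent.
processes = [("P1", 0, 4, 2), ("P2", 1, 3, 1), ("P3", 2, 2, 3)]

time, order, remaining = 0, [], list(processes)
while remaining:
    ready = [p for p in remaining if p[1] <= time]
    if not ready:                       # CPU idle until next arrival
        time = min(p[1] for p in remaining)
        continue
    # highest priority first; ties fall back to arrival order (FCFS)
    chosen = min(ready, key=lambda p: (p[3], p[1]))
    order.append(chosen[0])
    time += chosen[2]                   # non-preemptive: runs to completion
    remaining.remove(chosen)

print(order)  # ['P1', 'P2', 'P3']
```

P1 runs first only because it is alone at time 0; once it finishes, the higher-priority P2 jumps ahead of the earlier-ready P3.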
Example 1
Highest Response Ratio Next
(HRRN)
• The Highest Response Ratio Next
(HRRN) algorithm is a non-preemptive
CPU scheduling approach that selects the
process with the highest response ratio for
execution. This method is designed to
balance between giving priority to short
processes while preventing long processes
from waiting indefinitely.
Highest Response Ratio Next
(HRRN)
In the HRRN algorithm, the processor is assigned to a process
on the basis of its response ratio, calculated by the following
formula:

Response Ratio = (Waiting Time + Burst Time) / Burst Time

The HRRN algorithm favours shorter jobs while limiting
the waiting time of longer jobs. It is a non-preemptive
scheduling algorithm.
How HRRN Works:
• Select Process with Highest Ratio: The scheduler
calculates the response ratio for each process in the
ready queue.
• Execute Process with Highest Ratio: The process
with the highest response ratio is selected for
execution. Since HRRN is non-preemptive, this
process will complete its execution before the next
scheduling decision.
• Recalculate Ratios: Once the current process finishes,
the response ratios are recalculated for all remaining
processes.
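The ratio calculation in the steps above can be sketched as follows; the waiting and burst times are hypothetical:

```python
def response_ratio(waiting, burst):
    # Response Ratio = (Waiting Time + Burst Time) / Burst Time
    return (waiting + burst) / burst

# One scheduling decision: name -> (waiting time so far, burst time)
candidates = {"P2": (6, 2), "P3": (4, 8), "P4": (5, 3)}
ratios = {n: response_ratio(w, b) for n, (w, b) in candidates.items()}
chosen = max(ratios, key=ratios.get)
print(ratios, chosen)  # {'P2': 4.0, 'P3': 1.5, 'P4': 2.666...} P2
```

The short, long-waiting P2 wins here, but note that P3's ratio keeps rising the longer it waits, which is exactly the mechanism that prevents starvation.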
Using HRRN, we balance the need to prioritize short jobs while allowing
longer jobs to eventually run. This leads to reduced waiting times for
processes and limits starvation, making it a fair scheduling method in
many scenarios.

Benefits of HRRN:
• Reduces Starvation: Unlike pure shortest job first (SJF) scheduling,
HRRN prevents long processes from waiting indefinitely by increasing
their response ratios over time.
• Balances Short and Long Jobs: Shorter jobs still tend to have higher
response ratios initially, so they are often scheduled first, but longer
jobs don’t suffer excessive delays.
Example 2

Step-by-Step Execution with HRRN Scheduling
1.At Time 0:
1. P1 is the only available process, so it starts execution.
2.Completion of P1:
1. P1 completes after 4 units of time (0 + 4 = 4).
2. Calculate response ratios for the remaining processes:
1. At time 4, P2, P3, and P4 have arrived and are in the ready queue.
4. Completion of P2:
P2 completes after 2 units of time (4 + 2 = 6).
Calculate response ratios for the remaining processes (P3 and
P4) at time 6.

P4 has the highest response ratio, so it will execute next.


6. Completion of P4:
P4 completes after 3 units of time (6 + 3 = 9).
Now, only P3 is left.
7. Execution of P3:
P3 starts at time 9 and completes after 8 units of time (9 + 8 = 17).
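The walkthrough above can be reproduced with a short non-preemptive HRRN simulation. The original process table is not reproduced here, so the arrival times below (0, 1, 2, 3) are an assumption chosen to be consistent with the bursts and the schedule in the steps:

```python
def hrrn_schedule(processes):
    """Non-preemptive HRRN. processes: list of (name, arrival, burst).
    Returns a list of (name, completion_time) in execution order."""
    pending = sorted(processes, key=lambda p: p[1])
    order, time = [], 0
    while pending:
        # Processes that have already arrived; if none, jump to the earliest arrival.
        ready = [p for p in pending if p[1] <= time] or [pending[0]]
        # Response ratio = (waiting + burst) / burst
        chosen = max(ready, key=lambda p: ((time - p[1]) + p[2]) / p[2])
        time = max(time, chosen[1]) + chosen[2]
        order.append((chosen[0], time))
        pending.remove(chosen)
    return order

procs = [("P1", 0, 4), ("P2", 1, 2), ("P3", 2, 8), ("P4", 3, 3)]  # arrivals assumed
print(hrrn_schedule(procs))
# [('P1', 4), ('P2', 6), ('P4', 9), ('P3', 17)] -- matches the steps above
```

With these assumed arrivals, the simulation produces the same order (P1, P2, P4, P3) and the same completion times (4, 6, 9, 17) as the worked example.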
Multilevel Queue Scheduling
Multilevel Queue Scheduling is a type of CPU scheduling where the
ready queue is divided into multiple queues, each handling a specific
category or priority of processes. These categories can be based on
process priority, memory requirements, process type (e.g., system
processes vs. user processes), or any other relevant factor.

This method is often used in environments where process types are
well-defined and need separate handling for optimized performance.
Key Points of Multilevel Queue
Scheduling
• Separate Queues for Different Categories: Each queue holds a specific type
of process, like interactive, batch, or system processes. Each queue can also
implement its own scheduling algorithm, such as First-Come, First-Serve
(FCFS) for batch processes or Round Robin for interactive processes.
• Priority Levels: Different queues may have different priority levels.
Higher-priority queues are given CPU access before lower-priority ones.
For instance, a system process might have priority over a user process.
• Scheduling Within and Between Queues:
– Within Queue: Processes within a queue are scheduled according to the
algorithm assigned to that specific queue.
– Between Queues: Processes in higher-priority queues preempt those in
lower-priority queues. Lower-priority queues only get CPU time when the
higher-priority queues are empty.
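The within-queue and between-queue rules can be sketched as a toy model. The two queue names and the FCFS within-queue policy are illustrative assumptions; a real system would have more levels and per-queue policies:

```python
from collections import deque

class MultilevelQueue:
    """Two fixed queues: 'system' (high priority) is always drained before 'user'."""
    def __init__(self):
        # Dict order encodes priority: earlier keys are higher priority.
        self.queues = {"system": deque(), "user": deque()}

    def add(self, process, level):
        self.queues[level].append(process)

    def next_process(self):
        # Between queues: a lower-priority queue runs only when higher ones are empty.
        for level in self.queues:
            if self.queues[level]:
                # Within queue: FCFS here; each queue could use its own algorithm.
                return self.queues[level].popleft()
        return None

mlq = MultilevelQueue()
mlq.add("backup_job", "user")
mlq.add("page_daemon", "system")
print(mlq.next_process())  # "page_daemon" -- the system queue runs first
```

Even though the user-level job was enqueued first, the system-level job is dispatched first, which is the between-queue priority rule in action.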
Advantages and Disadvantages
• Advantages: Clear priority levels, efficient
handling of different types of processes, and
flexibility with scheduling policies.
• Disadvantages: Lower-priority processes
may suffer from starvation if high-priority
queues are constantly filled. Adjustments
are needed to ensure fairness.
Multilevel Feedback Queue
(MLFQ) Scheduling
Multilevel Feedback Queue (MLFQ) Scheduling is an advanced CPU scheduling
technique that enhances the basic multilevel queue approach by allowing processes to
move between queues based on their behavior and requirements. This flexibility helps
balance short-term tasks with longer-running processes and reduces the risk of starvation
for lower-priority tasks.
Key Features of MLFQ
Scheduling
• Dynamic Queue Assignment: Unlike traditional multilevel queue scheduling,
processes in MLFQ are not permanently assigned to a queue. Processes can be
promoted to higher-priority queues or demoted to lower-priority queues based
on their behavior, such as their CPU burst time or how long they have waited.
• Time-Slice (Quantum) Variation Across Queues: Each queue has a different
time slice or quantum value. Higher-priority queues (at the top) have smaller
quanta, giving short tasks quick access to the CPU. Lower-priority queues (at
the bottom) have larger quanta, allowing longer processes more uninterrupted
time.
• Preemption Rules:
– A process entering a high-priority queue preempts a process in a
lower-priority queue.
– When a process does not finish within its quantum, it moves to the
next lower queue. This reduces the priority of processes with high CPU
demands, allowing other processes to get CPU time.
– Processes that wait too long in lower-priority queues may be moved
up, preventing starvation.
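The core demotion rule (a process that exhausts its quantum drops one level) can be sketched as follows. The three-level layout and the quantum values 2, 4, 8 are illustrative assumptions, and this sketch omits arrivals and promotion for brevity:

```python
from collections import deque

QUANTA = [2, 4, 8]  # smaller quantum at higher priority; level 0 is highest

def run_mlfq(processes):
    """processes: dict of name -> burst time. Returns names in finish order."""
    levels = [deque(processes), deque(), deque()]  # everyone starts at the top
    remaining = dict(processes)
    finished = []
    while any(levels):
        lvl = next(i for i, q in enumerate(levels) if q)  # highest non-empty level
        name = levels[lvl].popleft()
        slice_ = min(QUANTA[lvl], remaining[name])
        remaining[name] -= slice_
        if remaining[name] == 0:
            finished.append(name)
        else:
            # Quantum exhausted: demote one level (bottom queue keeps the process).
            levels[min(lvl + 1, len(levels) - 1)].append(name)
    return finished

print(run_mlfq({"short": 2, "long": 10}))  # ['short', 'long']
```

Here the short job finishes within the top queue's quantum, while the long job is demoted twice (using quanta 2, then 4, then 8), which is exactly the behavior-based priority adjustment described above.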
Questions
Interrupts
• The priority of interrupts is essential for managing the
order in which the CPU responds to various interrupt
requests. High-priority interrupts are handled first, while
lower-priority ones are deferred. Non-maskable interrupts
ensure that critical issues are immediately addressed, and
interrupt nesting allows for more flexible and efficient
handling of multiple interrupts with different priorities.
This system of priorities ensures the stability and
responsiveness of the system, especially in environments
where multiple processes and hardware devices require
attention simultaneously.
Types of Interrupts and Their Priorities
• Hardware Interrupts: These are generated by external hardware devices
(e.g., keyboard, mouse, I/O devices, etc.) to get the CPU's attention for
tasks like input/output operations. These interrupts are usually assigned
priorities based on the criticality of the device.
• Software Interrupts: These interrupts are generated by software (e.g.,
system calls, exceptions, or errors) when a process needs attention. These
may have different priorities depending on the type of software interrupt.
• Maskable Interrupts: These interrupts can be temporarily disabled or
"masked" by the CPU if it is currently processing a higher-priority
interrupt. Maskable interrupts are typically used for less critical tasks that
can wait.
• Non-Maskable Interrupts (NMI): These interrupts cannot be disabled or
masked by the CPU, meaning they always take precedence over other
interrupts. NMIs are typically used for critical system errors like hardware
failures or emergency situations that require immediate attention.
Priority Levels of Interrupts
• Highest Priority: Non-Maskable Interrupts (NMI) – Used for urgent
tasks that cannot be deferred, like hardware failure detection or
power-down signals.
• High Priority: Interrupts related to important hardware devices such as
timers, system clocks, or critical peripherals.
• Medium Priority: Interrupts from I/O devices like keyboards or serial
ports, which are important but can generally wait for a brief moment
while other tasks are processed.
• Low Priority: Interrupts from less critical devices, such as peripheral
devices or devices that do not require immediate attention. These are the
least urgent and can be delayed in favor of higher-priority interrupts.
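Servicing pending interrupts in priority order can be modeled with a min-heap keyed on priority level (0 = highest). The device names and numeric levels below mirror the list above but are illustrative, not real hardware vectors:

```python
import heapq

# Lower number = higher priority; level 0 is the NMI
PRIORITY = {"nmi": 0, "timer": 1, "keyboard": 2, "printer": 3}

def dispatch(pending):
    """Return pending interrupts in service order (highest priority first);
    equal priorities are served in arrival order via the tie-breaking index."""
    heap = [(PRIORITY[irq], i, irq) for i, irq in enumerate(pending)]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, irq = heapq.heappop(heap)
        order.append(irq)
    return order

print(dispatch(["keyboard", "nmi", "printer", "timer"]))
# ['nmi', 'timer', 'keyboard', 'printer']
```

The NMI jumps to the front regardless of when it arrived, while the printer interrupt, the least critical here, is deferred until everything else is serviced.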
Explanation:
Interrupts are prioritized based on the criticality of the event to the system's stability
and functionality. Here's why each interrupt has a particular priority:
1. CPU Temperature Sensor Interrupt:
This interrupt is critical as it signals a potential hardware failure due to
overheating. If the CPU temperature rises too high, the system could sustain
permanent damage. Hence, this interrupt is handled with the highest priority to
ensure the system can take immediate corrective action (e.g., throttling the CPU,
shutting down, or activating cooling systems).
2. Hard Disk Interrupt:
This interrupt typically indicates the completion of a disk operation. While
important for system performance, it is not as critical as a CPU overheating event.
It is handled with a lower priority.
3. Mouse Interrupt:
This interrupt is user-interface-related and indicates mouse movement or button
presses. While necessary for user interaction, it has a low priority, as it does not
affect system stability.
4. Keyboard Interrupt:
Like the mouse interrupt, the keyboard interrupt is related to user interaction and
is given a low priority, similar to the mouse interrupt.
