lec_1_intro_Processes_sheduling_alg_l
Lesson-1
Prepared by
JAISE JOSE
Operating System
Syllabus
• System calls, processes, threads,
inter‐process communication, concurrency
and synchronization. Deadlock. CPU and
I/O scheduling. Memory management and
virtual memory. File systems.
Planned class flow
• Introduction and discussion.
• Processes and Threads
• Interprocess Communication, Concurrency
and Synchronization
• Deadlock and CPU Scheduling
• Memory Management and Virtual Memory
• File Systems, I/O Systems, Protection and
Security
Hardware Basics
• OS and hardware closely tied together
• Many useful hardware features have been
invented to complement the OS
• Basic hardware resources
– CPU
– Memory
– Disk
– I/O
CPU
• CPU controls everything in the system
– if work needs to be done, CPU gets involved
• Most precious resource
– this is what you're paying for
– want to get high utilization (from useful work)
• Only one process on a CPU at a time
• Hundreds of millions of instructions / sec
– and getting faster all the time
Memory
• Limited in capacity
– never enough memory
• Temporary (volatile) storage
• Electronic storage
– fast, random access
• Any program to run on the CPU must be in
memory
Disk
• Virtually infinite capacity
• Permanent storage
• Orders of magnitude slower than memory
– mechanical device
– millions of CPU instructions can execute in the
time it takes to access a single piece of data on
disk
• All data is accessed in fixed-size blocks
– traditionally 512 bytes (4 KB on many modern disks)
I/O
• Disks are actually part of the I/O subsystem
– they are of special interest to the OS
• Many other I/O devices
– printers, monitor, keyboard, etc.
• Most I/O devices are painfully slow
• Need to find ways to hide I/O latency
– like multiprogramming
What is an Operating System?
✓ An operating system is an event-driven program which acts as an interface between a user of a computer and the computer hardware.
✓ An operating system is an intermediary between a computer user and the hardware.
✓ Makes the hardware convenient to use.
✓ Manages system resources.
✓ Uses the hardware in an efficient manner.
System Call
• A System Call is a mechanism by which a
program requests a service from the
operating system's kernel. It acts as a bridge
between user-level applications and the OS,
allowing programs to execute functions like
reading from a file, writing to a disk,
allocating memory, and more.
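As a rough illustration, Python's os module exposes thin wrappers over POSIX system calls such as open, read, write, close, and unlink; the file name below is made up for the example.

```python
import os

path = "demo.txt"  # hypothetical file name for illustration

# os.open / os.write / os.close are thin wrappers around the
# open(), write(), and close() system calls on POSIX systems.
fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
os.write(fd, b"hello, kernel\n")   # write() system call
os.close(fd)

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 64)             # read() system call
os.close(fd)
os.remove(path)                    # unlink() system call

print(data)
```

Each of these calls crosses the user/kernel boundary: the program traps into the kernel, the kernel performs the work on its behalf, and control returns to user mode.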
“inside view”
To consider the “inside view” of the system, the OS is written as a collection of procedures, each of which can call any other whenever required. Each procedure has a well-defined interface with input parameters and output results.
System Diagram
user 1   user 2   ...   user n
operating system
hardware
Types of Systems
Batch System
Process States
A process moves between states such as new, ready, running, waiting, suspended, and terminated. These states, particularly the suspended states, help manage memory efficiently and allow the OS to handle more processes than the main memory can accommodate.
Schedulers
• In an operating system, there are three types of
schedulers that manage the processes in different stages of
execution. These schedulers are responsible for deciding
which process runs and when, based on the system's goals
of maximizing CPU utilization, throughput, and
minimizing wait times. Here are the three types of
schedulers:
– Long-term Scheduler (or Admission Scheduler)
– Short-term Scheduler (or CPU Scheduler)
– Mid-term Scheduler (or Swapper)
Long-term Scheduler (or Admission Scheduler):
•Function: The long-term scheduler controls the process admission into
the system. It decides when a new process is brought into the ready
queue for execution. Essentially, it controls the degree of
multiprogramming—i.e., how many processes can be in memory at
once.
•Timing: It runs less frequently compared to the other two schedulers,
typically when new processes need to be loaded into the system from
secondary storage (disk).
•Objective: It ensures that the system does not get overloaded by too
many processes, maintaining a balance between CPU-bound and I/O-
bound processes.
Short-term Scheduler (or CPU Scheduler):
•Function: The short-term scheduler selects a process from
the ready queue and allocates CPU time to it. It moves
processes from the ready state to the running state, typically
making scheduling decisions on a very short time scale
(milliseconds).
•Timing: It is invoked frequently, as often as every few
milliseconds, depending on the scheduling algorithm.
•Objective: Its primary goal is to ensure that the CPU is
efficiently utilized by allocating CPU time to processes that
are ready to execute.
Mid-term Scheduler (or Swapper):
•Function: The mid-term scheduler is responsible for suspending
processes and resuming them when necessary. It decides which processes
should be swapped out of main memory to secondary storage (disk) and
which ones should be swapped back into memory. This helps to manage
memory usage when there are more processes than available physical
memory.
•Timing: It is invoked when the system is under memory pressure or
when a process needs to be suspended and swapped out.
•Objective: It aims to optimize memory usage and system performance by
managing the processes in secondary memory and main memory
efficiently.
Dispatcher
The dispatcher is responsible for:
• Context Switching: Saving the state (context) of
the currently running process and loading the state
of the next process to be executed.
• Switching Processes: Handling the transition
from one process to another by managing the
Program Counter and other registers.
Degree of Multiprogramming
• This term refers to the number of processes currently held in
memory and is controlled by the long-term scheduler.
• Higher Degree of Multiprogramming typically means more
processes are in the system, leading to higher CPU utilization
but possibly also more overhead.
Ready Queue and Data Structure
• The ready queue holds all processes that are ready to execute
and waiting for CPU time.
• A doubly linked list is often preferred for implementing the
ready queue because it allows efficient insertion and deletion of
processes from both ends.
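The idea above can be sketched with Python's collections.deque, which supports O(1) insertion and removal at both ends; the process names are made up for illustration.

```python
from collections import deque

# A minimal sketch of a ready queue: ready processes join at the
# tail, and the dispatcher removes the process at the head.
ready_queue = deque()

ready_queue.append("P1")      # new process becomes ready (tail)
ready_queue.append("P2")
ready_queue.append("P3")

running = ready_queue.popleft()  # dispatcher picks the head
print(running)                   # P1

# A preempted process rejoins at the tail (as in Round Robin).
ready_queue.append(running)
print(list(ready_queue))         # ['P2', 'P3', 'P1']
```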
Each type of scheduler (particularly the short-term
scheduler) uses a scheduling algorithm to choose which
process to prioritize:
• First Come, First Serve (FCFS): Processes are scheduled in the order of arrival.
• Shortest Job First (SJF): Prioritizes processes with the shortest execution time.
• Shortest Remaining Time First (SRTF): A preemptive version of SJF that always
picks the process with the shortest remaining time.
• Round Robin: Assigns each process a fixed time slice in a cyclic order.
• Priority-based: Prioritizes processes based on their assigned priority levels.
• Highest Response Ratio Next (HRRN): Balances waiting time and burst time to select
the process with the highest response ratio.
• Multilevel Queue: Separates processes into multiple queues based on type or priority,
each queue possibly having its own algorithm.
• Multilevel Feedback Queue: Allows processes to move between queues based on their
behavior and aging, optimizing both short and long tasks.
How Schedulers Use
Scheduling Algorithms
• Short-term Scheduler: Uses scheduling algorithms like
FCFS, SJF, Round Robin, or Priority to determine which
process in the ready queue gets CPU time.
• Long-term Scheduler: Sometimes relies on admission
criteria or priority algorithms to determine which processes
enter the system from secondary storage.
• Mid-term Scheduler: May use algorithms that prioritize
processes based on memory usage or system load, deciding
which processes to suspend or swap out.
CPU Scheduling
• CPU scheduling is the mechanism that moves a process from the ready queue to the
running state. The objective of CPU scheduling is to minimize the turnaround time and
average waiting time of a process. The short-term scheduler applies the CPU
scheduling algorithm in the ready state to select the process to schedule onto
the processor (running state).
• The short-term scheduler (also known as the CPU scheduler) is responsible
for selecting the next process from the ready queue and allocating the CPU to
that process. This decision is made based on the scheduling algorithm being
used. The short-term scheduler typically makes decisions every few
milliseconds, ensuring that processes get fair access to the CPU.
Key Scheduling Objectives:
• Maximize CPU utilization: Ensuring the CPU is used as efficiently as
possible.
• Fair process execution: Preventing starvation and ensuring that all processes
get a fair share of CPU time.
• Optimizing throughput: Maximizing the number of processes completed
within a given time.
CPU Scheduling Algorithms
CPU scheduling algorithms are used by the operating system to decide the order in which processes
are executed by the CPU. Each algorithm has its own way of managing the ready queue to optimize
various performance metrics like waiting time, turnaround time, and response time.
There are many CPU scheduling algorithms as follows:
• First come, first serve (FCFS)
• Shortest job first (SJF)
• Shortest remaining time first (SRTF)
• Round robin
• Priority based
• Highest response ratio next (HRRN)
• Multilevel queue
• Multilevel feedback queue
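As an illustrative sketch (not from the lecture), non-preemptive Shortest Job First can be simulated in a few lines of Python; the process names, arrival times, and burst times are invented for the example.

```python
# A minimal sketch of non-preemptive SJF. Processes are
# (name, arrival_time, burst_time) tuples; the scheduler repeatedly
# picks the shortest job among those that have already arrived.

def sjf(processes):
    remaining = sorted(processes, key=lambda p: p[1])  # by arrival
    time, order = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                      # CPU idle until next arrival
            time = remaining[0][1]
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])
        remaining.remove((name, arrival, burst))
        time += burst
        order.append((name, time))         # (process, completion time)
    return order

procs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]
print(sjf(procs))
```

Note that P3 (burst 1) overtakes P2 and P4 even though it arrived later, which is exactly the behavior that makes pure SJF prone to starving long jobs.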
Preemptive Scheduling vs Non-Preemptive Scheduling
Preemptive scheduling is a CPU scheduling method where a process can be interrupted during its execution and
moved back to the ready queue to allow another process to run. This is useful for prioritizing shorter or higher-
priority tasks, allowing for more efficient use of CPU resources in multitasking systems. Preemptive scheduling
makes it possible to switch the CPU from one process to another, helping to prevent long processes from
monopolizing the CPU, and is especially useful in real-time and interactive systems where responsiveness is
important.
Key Characteristics of Preemptive Scheduling:
• Interruption: Processes can be interrupted, which allows the system to allocate CPU time to more urgent or
shorter tasks.
• Responsive: Increases responsiveness, especially for high-priority tasks, as the CPU isn’t tied up by long-
running processes.
• Better resource allocation: Makes it easier to balance CPU usage across multiple processes, improving
overall system performance.
Examples of Preemptive Scheduling Algorithms:
– Shortest Remaining Time First (SRTF): Schedules the process with the shortest remaining execution time.
– Round Robin (RR): Processes are given a fixed time slice, after which they’re preempted if not complete.
– Priority Scheduling (preemptive): Processes are interrupted if a higher-priority process arrives.
Comparison with Non-Preemptive Scheduling:
• Non-preemptive scheduling allows a process to run to completion once it starts, without interruptions.
• Preemptive scheduling is generally better suited for time-sharing and real-time systems where responsiveness
and dynamic allocation of CPU resources are required, while non-preemptive scheduling may be preferred in
simpler, batch-processing systems.
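A minimal Round Robin simulation makes the preemption visible: each process runs for at most one quantum and, if unfinished, rejoins the tail of the queue. The burst times and the assumption that all processes arrive at time 0 are invented for the example.

```python
from collections import deque

# A sketch of Round Robin with a fixed time quantum.
# Each queue entry is [name, remaining_burst].

def round_robin(bursts, quantum):
    queue = deque([name, burst] for name, burst in bursts)
    time, completions = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)      # run for one quantum at most
        time += run
        remaining -= run
        if remaining:                      # preempted: back of the queue
            queue.append([name, remaining])
        else:
            completions[name] = time       # completion time
    return completions

print(round_robin([("P1", 5), ("P2", 3), ("P3", 6)], quantum=2))
```

With quantum 2, P2 finishes first at time 9, then P1 at 12 and P3 at 14; no process can monopolize the CPU for more than one quantum at a time.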
Time Periods
First-Come, First-Serve (FCFS)
Scheduling
• FCFS (First-Come, First-Serve) is one of the simplest CPU
scheduling algorithms. It works on the principle that the process that
arrives first gets executed first. The CPU is allocated to processes in
the order they arrive in the ready queue.
Characteristics of FCFS Scheduling:
• Non-preemptive: Once a process starts executing, it runs to
completion without being interrupted by other processes.
• Queue-based: A queue (usually a FIFO queue) is used to manage the
processes. The first process that arrives is added to the end of the
queue, and the CPU scheduler selects the process at the head of the
queue for execution.
• Simple: FCFS is easy to implement and understand.
• Fair: It treats all processes equally based on their arrival time.
However, it may not always be efficient in terms of performance.
Steps in FCFS Scheduling:
Option (d): P = 3, Q = 7, R = 7, S = 3
1.P and S complete in less than one time quantum (3 units each).
2.Q and R would need multiple turns (7 units each), but the context-switching requirements would not
match this burst time distribution.
This setup does not satisfy the context-switching requirements because P and S both complete too
quickly, leaving too few possible context switches to match the requirement.
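A minimal FCFS sketch, with invented arrival and burst times, runs processes strictly in arrival order and computes each process's waiting and turnaround time.

```python
# FCFS: processes run in arrival order, non-preemptively.
# waiting   = start_time - arrival_time
# turnaround = completion_time - arrival_time

def fcfs(processes):
    time, stats = 0, []
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)          # CPU may sit idle until arrival
        waiting = time - arrival
        time += burst
        turnaround = time - arrival
        stats.append((name, waiting, turnaround))
    return stats

procs = [("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)]
for name, w, t in fcfs(procs):
    print(name, "waiting =", w, "turnaround =", t)
```

Here the short process P3 waits 5 units behind the earlier arrivals, illustrating the convoy effect that makes FCFS simple but not always efficient.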
Priority Scheduling
• In Priority Scheduling, each process is assigned a
priority value, and the process with the highest
priority is selected for CPU execution first. This
approach makes use of a priority queue data
structure, where the highest priority process
appears at the front of the queue. When two or
more processes have the same priority, they are
scheduled in First-Come, First-Serve (FCFS)
order to ensure fair handling.
Key Features of Priority
Scheduling:
• Priority-Based Execution: Processes are executed based on their priority
levels, not just arrival time or burst time. This means that a process with a
higher priority will preempt a lower-priority one if it becomes ready.
• Starvation Issue: One major drawback of this algorithm is starvation. Lower-
priority processes can be indefinitely delayed if higher-priority processes
continuously arrive. This leads to a situation where a low-priority process
might never get a chance to execute, as it keeps getting pushed back in the
queue.
• Possible Solution - Aging: To mitigate starvation, a technique called aging is
often implemented. In aging, the priority of a waiting process is gradually
increased the longer it stays in the queue, so that it eventually gets a chance to
execute, even if it initially had a low priority.
Types of Priority Scheduling:
• Preemptive Priority Scheduling: If a new
process arrives with a higher priority than
the currently executing process, the CPU is
preempted and allocated to the new process.
• Non-Preemptive Priority Scheduling:
Once a process starts execution, it continues
until completion, even if a higher-priority
process arrives.
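A minimal sketch of non-preemptive priority scheduling with aging follows; the aging step, the priority numbering (smaller number = higher priority), and the example values are assumptions for illustration, not from the lecture.

```python
# Non-preemptive priority scheduling with aging: at every
# scheduling decision, each still-waiting process has its priority
# number reduced (i.e., its priority raised), so it cannot starve.

AGING_STEP = 1  # assumed aging rate per scheduling decision

def priority_with_aging(processes):
    # processes: list of (name, priority, burst); smaller = higher priority
    pending = [list(p) for p in processes]
    order = []
    while pending:
        pending.sort(key=lambda p: p[1])   # highest priority first
        name, _, burst = pending.pop(0)
        order.append(name)
        for p in pending:                  # waiting processes age
            p[1] = max(0, p[1] - AGING_STEP)
    return order

print(priority_with_aging([("P1", 5, 10), ("P2", 1, 3), ("P3", 3, 4)]))
```

Without the aging loop, a steady stream of high-priority arrivals could push P1 back indefinitely; aging guarantees its priority number eventually reaches the front of the queue.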
Example 1
Highest Response Ratio Next
(HRRN)
• The Highest Response Ratio Next
(HRRN) algorithm is a non-preemptive
CPU scheduling approach that selects the
process with the highest response ratio for
execution. This method is designed to
balance between giving priority to short
processes while preventing long processes
from waiting indefinitely.
Highest Response Ratio Next
(HRRN)
In the HRRN algorithm, the processor is assigned to a process on the basis of its response ratio, which is calculated by the following formula:

Response Ratio = (Waiting Time + Burst Time) / Burst Time
Benefits of HRRN:
• Reduces Starvation: Unlike pure shortest job first (SJF) scheduling,
HRRN prevents long processes from waiting indefinitely by increasing
their response ratios over time.
• Balances Short and Long Jobs: Shorter jobs still tend to have higher
response ratios initially, so they are often scheduled first, but longer
jobs don’t suffer excessive delays.
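The selection rule can be sketched in a few lines of Python; the arrival and burst values below are invented for illustration.

```python
# HRRN selection: at a scheduling decision at time `now`, pick the
# ready process with the highest (waiting + burst) / burst ratio.

def response_ratio(waiting, burst):
    return (waiting + burst) / burst

def hrrn_pick(ready, now):
    # ready: list of (name, arrival_time, burst_time)
    return max(ready, key=lambda p: response_ratio(now - p[1], p[2]))

ready = [("P1", 0, 8), ("P2", 2, 4), ("P3", 3, 2)]
now = 10
for name, arrival, burst in ready:
    print(name, round(response_ratio(now - arrival, burst), 2))
print("selected:", hrrn_pick(ready, now)[0])
```

At time 10 the ratios are P1 = 2.25, P2 = 3.0, P3 = 4.5, so the short job P3 is picked; but if P1 kept waiting, its ratio would keep growing, which is how HRRN avoids starvation.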
Example 2