
Unit-3

Operating system basics, types of embedded operating systems, tasks, task scheduling algorithms: preemptive, non-preemptive, round robin, weighted round robin. Kernel objects: semaphores, mutex, pipes and message queues.

Examples (monolithic kernel): Linux, Unix, Windows

Examples (microkernel): Minix, QNX, L4, macOS, Windows NT

Tasks, Processes & Threads
• In the operating system context, a task is defined as a program in execution together with the related information maintained by the operating system for that program.
• A task is also known as a 'job' in the operating system context.
• A program, or part of it, in execution is also called a process.
• The terms task, job and process refer to the same entity in the operating system context, and most often they are used interchangeably.
• A process requires various system resources: CPU for executing the process, memory for storing the code corresponding to the process and its associated variables, I/O devices for information exchange, etc.

Process States & State Transition:
Created State: The state in which a process is being created is referred to as the 'Created State'. The operating system recognizes a process in the 'Created State', but no resources are allocated to the process.
Ready State: The state in which a process has been loaded into memory and is awaiting processor time for execution is known as the 'Ready State'. At this stage, the process is placed in the 'Ready list' queue maintained by the OS.
Running State: The state in which the instructions corresponding to the process are being executed is called the 'Running State'. It is the state in which the process execution actually happens.

Blocked State/Wait State: Refers to a state in which a running process is temporarily suspended from execution and does not have immediate access to resources. The blocked state might be invoked by various conditions: the process enters a wait state for an event to occur (e.g. waiting for user input such as keyboard input), or waits to get access to a shared resource like a semaphore, mutex, etc.
Completed State: The state in which the process completes its execution.
The transition of a process from one state to another is known as 'state transition'.
When a process changes its state, from Ready to Running, from Running to Blocked or Completed, or from Blocked to Running, the CPU allocation for the process may also change; this is handled through context switching.
Task Scheduling:
• In a multitasking system, there should be some mechanism in place to share the CPU among the different tasks and to decide which process/task is to be executed at a given point of time
• Determining which task/process is to be executed at a given point of time is known as task/process scheduling
• Task scheduling forms the basis of multitasking
• Scheduling policies form the guidelines for determining which task is to be executed and when. The scheduling policies are implemented in an algorithm which is run by the kernel as a service
• The kernel service/application which implements the scheduling algorithm is known as the 'Scheduler'
• The task scheduling policy can be preemptive, non-preemptive or cooperative
• Depending on the scheduling policy, the process scheduling decision may take place when a process switches its state to
– 'Ready' state from 'Running' state
– 'Blocked/Wait' state from 'Running' state
– 'Ready' state from 'Blocked/Wait' state
– 'Completed' state

Task Scheduling - Scheduler Selection:
The selection of a scheduling criterion/algorithm should consider:
• CPU Utilization: The scheduling algorithm should always keep CPU utilization high. CPU utilization is a direct measure of what percentage of the CPU is being utilized.
• Throughput: Gives an indication of the number of processes executed per unit of time. The throughput for a good scheduler should always be high.
• Turnaround Time: The amount of time taken by a process to complete its execution. It includes the time spent waiting for main memory, time spent in the ready queue, time spent completing I/O operations, and the time spent in execution. The turnaround time should be minimal for a good scheduling algorithm.
• Waiting Time: The amount of time spent by a process in the 'Ready' queue waiting to get CPU time for execution. The waiting time should be minimal for a good scheduling algorithm.
• Response Time: The time elapsed between the submission of a process and its first response. For a good scheduling algorithm, the response time should be as low as possible.

Task Scheduling - Queues
The various queues maintained by the OS in association with CPU scheduling are:
• Job Queue: Contains all the processes in the system
• Ready Queue: Contains all the processes which are ready for execution and waiting for the CPU to get their turn. The Ready queue is empty when there is no process ready for running.
• Device Queue: Contains the set of processes which are waiting for an I/O device

Non-preemptive scheduling – First Come First Served (FCFS)/First In First Out (FIFO) Scheduling:
• Allocates CPU time to the processes based on the order in which they enter the 'Ready' queue
• The first entered process is serviced first
• It is the same as any real-world application where queue systems are used, e.g. ticketing
Drawbacks:
• Favors monopolization by a process: a process which does not contain any I/O operation continues its execution until it finishes its task
• In general, FCFS favors CPU-bound processes; I/O-bound processes may have to wait until the completion of a CPU-bound process if the currently executing process is CPU bound. This leads to poor device utilization.
• The average waiting time is not minimal for the FCFS scheduling algorithm

EXAMPLE: Three processes with process IDs P1, P2, P3 and estimated completion times 10, 5 and 7 milliseconds respectively enter the ready queue together in the order P1, P2, P3. Calculate the waiting time and Turn Around Time (TAT) for each process, and the average waiting time and average Turn Around Time (assuming there is no I/O waiting for the processes).
Solution: The sequence of execution of the processes by the CPU is
P1 (0 to 10 ms) → P2 (10 to 15 ms) → P3 (15 to 22 ms)

Assuming the CPU is readily available at the time of arrival of P1, P1 starts executing without any waiting in the 'Ready' queue. Hence the waiting time for P1 is zero.
Waiting Time for P1 = 0 ms (P1 starts executing first)
Waiting Time for P2 = 10 ms (P2 starts executing after completing P1)
Waiting Time for P3 = 15 ms (P3 starts executing after completing P1 and P2)
Average waiting time = (Waiting time for all processes) / No. of processes
= (Waiting time for (P1+P2+P3)) / 3
= (0+10+15)/3 = 25/3 = 8.33 milliseconds
Turn Around Time (TAT) for P1 = 10 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 15 ms (-Do-)
Turn Around Time (TAT) for P3 = 22 ms (-Do-)
Average Turn Around Time = (Turn Around Time for all processes) / No. of processes
= (Turn Around Time for (P1+P2+P3)) / 3
= (10+15+22)/3 = 47/3 = 15.66 milliseconds
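The arithmetic above is mechanical enough to verify in a few lines of code. A minimal C sketch, reusing the burst times from this example (an illustration only, not any particular kernel's API):

    /* FCFS waiting time and turnaround time for processes that
       arrive together, in queue order. Values mirror the example. */
    #include <stdio.h>

    int main(void)
    {
        const char *name[] = { "P1", "P2", "P3" };
        int burst[] = { 10, 5, 7 };      /* estimated completion times (ms) */
        int n = 3, elapsed = 0, total_wait = 0, total_tat = 0;

        for (int i = 0; i < n; i++) {
            int wait = elapsed;          /* time spent waiting in the queue */
            int tat  = wait + burst[i];  /* waiting time + execution time */
            printf("%s: waiting = %2d ms, TAT = %2d ms\n", name[i], wait, tat);
            total_wait += wait;
            total_tat  += tat;
            elapsed    += burst[i];
        }
        printf("Average waiting time = %.2f ms\n", (float)total_wait / n);
        printf("Average TAT = %.2f ms\n", (float)total_tat / n);
        return 0;
    }

Running it reproduces the averages computed above (8.33 ms and 15.67 ms, the latter printed rounded rather than truncated).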

Non-preemptive scheduling – Last Come First Served (LCFS)/Last In First Out (LIFO) Scheduling:
• Allocates CPU time to the processes based on the order in which they enter the 'Ready' queue
• The last entered process is serviced first
• LCFS scheduling is also known as Last In First Out (LIFO), since the process which is put last into the 'Ready' queue is serviced first
Drawbacks:
• Favors monopolization by a process: a process which does not contain any I/O operation continues its execution until it finishes its task
• In general, LCFS favors CPU-bound processes; I/O-bound processes may have to wait until the completion of a CPU-bound process if the currently executing process is CPU bound. This leads to poor device utilization.
• The average waiting time is not minimal for the LCFS scheduling algorithm
EXAMPLE: Three processes with process IDs P1, P2, P3 and estimated completion times 10, 5 and 7 milliseconds respectively enter the ready queue together in the order P1, P2, P3 (assume only P1 is present in the 'Ready' queue when the scheduler picks it up, and P2 and P3 enter the 'Ready' queue after that). Now a new process P4 with estimated completion time 6 ms enters the 'Ready' queue 5 ms after P1 is scheduled. Calculate the waiting time and Turn Around Time (TAT) for each process, and the average waiting time and average Turn Around Time. Assume all the processes contain only CPU operations and no I/O operations are involved.
Solution: Initially only P1 is available in the Ready queue, and the scheduling sequence would be P1, P3, P2. P4 enters the queue during the execution of P1 and becomes the last process to enter the 'Ready' queue. Now the order of execution changes to P1, P4, P3, P2, as given below:
P1 (0 to 10 ms) → P4 (10 to 16 ms) → P3 (16 to 23 ms) → P2 (23 to 28 ms)

The waiting times for all the processes are given as
Waiting Time for P1 = 0 ms (P1 starts executing first)
Waiting Time for P4 = 5 ms (P4 starts executing after completing P1. But P4 arrived after 5 ms of execution of P1. Hence its waiting time = Execution Start Time – Arrival Time = 10-5 = 5)
Waiting Time for P3 = 16 ms (P3 starts executing after completing P1 and P4)
Waiting Time for P2 = 23 ms (P2 starts executing after completing P1, P4 and P3)
Average waiting time = (Waiting time for all processes) / No. of processes
= (Waiting time for (P1+P4+P3+P2)) / 4
= (0 + 5 + 16 + 23)/4 = 44/4 = 11 milliseconds
Turn Around Time (TAT) for P1 = 10 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4 = 11 ms (Time spent in Ready Queue + Execution Time = (Execution Start Time – Arrival Time) + Estimated Execution Time = (10-5) + 6 = 5 + 6)
Turn Around Time (TAT) for P3 = 23 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 28 ms (Time spent in Ready Queue + Execution Time)
Average Turn Around Time = (Turn Around Time for all processes) / No. of processes
= (Turn Around Time for (P1+P4+P3+P2)) / 4
= (10+11+23+28)/4 = 72/4 = 18 milliseconds

Non-preemptive scheduling – Shortest Job First (SJF) Scheduling:
• Allocates CPU time to the processes based on their estimated execution completion times
• The average waiting time for a given set of processes is minimal in SJF scheduling
• Optimal compared to other non-preemptive scheduling algorithms like FCFS
Drawbacks:
• A process whose estimated execution completion time is high may not get a chance to execute if more and more processes with shorter estimated execution times enter the 'Ready' queue before the process with the longest estimated execution time starts its execution
• May lead to 'starvation' of processes with high estimated completion times
• It is difficult to know in advance the next shortest process in the 'Ready' queue for scheduling, since new processes with different estimated execution times keep entering the 'Ready' queue at any point of time
Non-preemptive scheduling – Priority based Scheduling:
• A priority, which may be unique or shared, is associated with each task
• The priority of a task is expressed in different ways, such as a priority number or the time required to complete the execution
• In number-based priority assignment, the priority is a number ranging from 0 to the maximum priority supported by the OS. The maximum level of priority is OS dependent. Windows CE supports 256 levels of priority (priority numbers 0 to 255, with 0 being the highest priority)
• The priority is assigned to a task when it is created. It can also be changed dynamically (if the operating system supports this feature)
• The non-preemptive priority based scheduler sorts the 'Ready' queue based on priority and picks the process with the highest priority for execution
EXAMPLE: Three processes with process IDs P1, P2, P3 with estimated completion times 10, 5 and 7 milliseconds and priorities 0, 3, 2 (0 = highest priority, 3 = lowest priority) respectively enter the ready queue together. Calculate the waiting time and Turn Around Time (TAT) for each process, and the average waiting time and average Turn Around Time (assuming there is no I/O waiting for the processes) under the priority based scheduling algorithm.
Solution: The scheduler sorts the 'Ready' queue based on priority and schedules the process with the highest priority (P1 with priority number 0) first, the next highest priority process (P3 with priority number 2) second, and so on. The order in which the processes are scheduled for execution is
P1 (0 to 10 ms) → P3 (10 to 17 ms) → P2 (17 to 22 ms)

The waiting times for all the processes are given as
Waiting Time for P1 = 0 ms (P1 starts executing first)
Waiting Time for P3 = 10 ms (P3 starts executing after completing P1)
Waiting Time for P2 = 17 ms (P2 starts executing after completing P1 and P3)
Average waiting time = (Waiting time for all processes) / No. of processes
= (Waiting time for (P1+P3+P2)) / 3
= (0+10+17)/3 = 27/3 = 9 milliseconds
Turn Around Time (TAT) for P1 = 10 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P3 = 17 ms (-Do-)
Turn Around Time (TAT) for P2 = 22 ms (-Do-)
Average Turn Around Time = (Turn Around Time for all processes) / No. of processes
= (Turn Around Time for (P1+P3+P2)) / 3
= (10+17+22)/3 = 49/3 = 16.33 milliseconds
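A minimal C sketch of the same calculation for non-preemptive priority scheduling, reusing the numbers from this example (0 = highest priority). Sorting the whole array with qsort stands in for the sorted 'Ready' queue that a real kernel maintains incrementally:

    #include <stdio.h>
    #include <stdlib.h>

    struct task { const char *name; int burst; int priority; };

    /* lower priority number first, i.e. highest priority first */
    static int by_priority(const void *a, const void *b)
    {
        return ((const struct task *)a)->priority -
               ((const struct task *)b)->priority;
    }

    int main(void)
    {
        struct task q[] = { {"P1", 10, 0}, {"P2", 5, 3}, {"P3", 7, 2} };
        int n = 3, elapsed = 0;

        qsort(q, n, sizeof q[0], by_priority);
        for (int i = 0; i < n; i++) {
            printf("%s: waiting = %2d ms, TAT = %2d ms\n",
                   q[i].name, elapsed, elapsed + q[i].burst);
            elapsed += q[i].burst;
        }
        return 0;
    }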

Preemptive scheduling:
• Employed in systems which implement the preemptive multitasking model
• Every task in the 'Ready' queue gets a chance to execute. When and how often each process gets a chance to execute (gets the CPU time) depends on the type of preemptive scheduling algorithm used for scheduling the processes
• The scheduler can preempt (stop temporarily) the currently executing task/process and select another task from the 'Ready' queue for execution
• When to preempt a task, and which task to pick from the 'Ready' queue after preempting the current task, is purely dependent on the scheduling algorithm
• A task which is preempted by the scheduler is moved to the 'Ready' queue. The act of moving a 'Running' process/task into the 'Ready' queue by the scheduler, without the process requesting it, is known as 'preemption'
• Time-based preemption and priority-based preemption are the two important approaches adopted in preemptive scheduling

Preemptive scheduling – Preemptive SJF Scheduling/Shortest Remaining Time (SRT):
• The non-preemptive SJF scheduling algorithm sorts the 'Ready' queue only after the current process completes execution or enters the wait state, whereas the preemptive SJF scheduling algorithm sorts the 'Ready' queue whenever a new process enters it, and checks whether the execution time of the new process is shorter than the remaining execution time of the currently executing process
• If the execution time of the new process is less, the currently executing process is preempted and the new process is scheduled for execution
• It always compares the execution completion time (i.e. the remaining execution time) of a new process entering the 'Ready' queue with the remaining time for completion of the currently executing process, and schedules the process with the shortest remaining time for execution

EXAMPLE: Three processes with process IDs P1, P2, P3 and estimated completion times 10, 5 and 7 milliseconds respectively enter the ready queue together. A new process P4 with estimated completion time 2 ms enters the 'Ready' queue after 2 ms. Assume all the processes contain only CPU operations and no I/O operations are involved.
Solution: At the beginning, there are only three processes (P1, P2 and P3) available in the 'Ready' queue, and the SRT scheduler picks the process with the shortest remaining time for execution completion (here P2, with remaining time 5 ms) for scheduling. Process P4 with estimated execution completion time 2 ms enters the 'Ready' queue after 2 ms of execution of P2. The processes are re-scheduled for execution in the following order:
P2 (0 to 2 ms) → P4 (2 to 4 ms) → P2 (4 to 7 ms) → P3 (7 to 14 ms) → P1 (14 to 24 ms)

The waiting times for all the processes are given as
Waiting Time for P2 = 0 ms + (4-2) ms = 2 ms (P2 starts executing first, is preempted by P4, and has to wait till the completion of P4 to get the next CPU slot)
Waiting Time for P4 = 0 ms (P4 starts executing by preempting P2, since the execution time for completion of P4 (2 ms) is less than the remaining time for execution completion of P2 (here 3 ms))
Waiting Time for P3 = 7 ms (P3 starts executing after completing P4 and P2)
Waiting Time for P1 = 14 ms (P1 starts executing after completing P4, P2 and P3)
Average waiting time = (Waiting time for all processes) / No. of processes
= (Waiting time for (P4+P2+P3+P1)) / 4
= (0 + 2 + 7 + 14)/4 = 23/4 = 5.75 milliseconds
Turn Around Time (TAT) for P2 = 7 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4 = 2 ms (Time spent in Ready Queue + Execution Time = (Execution Start Time – Arrival Time) + Estimated Execution Time = (2-2) + 2)
Turn Around Time (TAT) for P3 = 14 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P1 = 24 ms (Time spent in Ready Queue + Execution Time)
Average Turn Around Time = (Turn Around Time for all processes) / No. of processes
= (Turn Around Time for (P2+P4+P3+P1)) / 4
= (7+2+14+24)/4 = 47/4 = 11.75 milliseconds
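Replaying this schedule in code makes the preemption explicit. A minimal C sketch that simulates SRT one millisecond at a time, assuming the CPU is never idle (true here, since three of the processes arrive at time 0):

    #include <stdio.h>

    int main(void)
    {
        const char *name[] = { "P1", "P2", "P3", "P4" };
        int arrival[]   = { 0, 0, 0, 2 };
        int burst[]     = { 10, 5, 7, 2 };
        int remaining[] = { 10, 5, 7, 2 };
        int finish[4] = { 0 }, n = 4, done = 0;

        for (int clock = 0; done < n; clock++) {
            int pick = -1;                  /* arrived process with the */
            for (int i = 0; i < n; i++)     /* shortest remaining time  */
                if (arrival[i] <= clock && remaining[i] > 0 &&
                    (pick < 0 || remaining[i] < remaining[pick]))
                    pick = i;
            if (--remaining[pick] == 0) {   /* run the pick for 1 ms */
                finish[pick] = clock + 1;
                done++;
            }
        }
        for (int i = 0; i < n; i++)
            printf("%s: TAT = %2d ms, waiting = %2d ms\n", name[i],
                   finish[i] - arrival[i],
                   finish[i] - arrival[i] - burst[i]);
        return 0;
    }

The output matches the figures above: P4 is picked at the 2 ms tick because its 2 ms burst is shorter than P2's remaining 3 ms.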
Preemptive scheduling – Round Robin (RR) Scheduling:
• Each process in the 'Ready' queue is executed for a pre-defined time slot
• The execution starts with the first process in the 'Ready' queue, which is executed for a pre-defined time
• When the pre-defined time elapses, or the process completes before the pre-defined time slice, the next process in the 'Ready' queue is selected for execution
• This is repeated for all the processes in the 'Ready' queue
• Once each process in the 'Ready' queue has been executed for the pre-defined time period, the scheduler comes back and picks the first process in the 'Ready' queue again for execution
• Round Robin scheduling is similar to FCFS scheduling; the only difference is that time-slice based preemption is added to switch execution between the processes in the 'Ready' queue

EXAMPLE: Three processes with process IDs P1, P2, P3 and estimated completion times 6, 4 and 2 milliseconds respectively enter the ready queue together in the order P1, P2, P3. Calculate the waiting time and Turn Around Time (TAT) for each process, and the average waiting time and average Turn Around Time (assuming there is no I/O waiting for the processes), under the RR algorithm with time slice = 2 ms.
Solution: The scheduler sorts the 'Ready' queue based on FCFS policy, picks the first process P1 from the 'Ready' queue and executes it for the 2 ms time slice. When the time slice expires, P1 is preempted and P2 is scheduled for execution. The time slice expires after 2 ms of execution of P2. Now P2 is preempted and P3 is picked up for execution. P3 completes its execution within the time slice, and the scheduler picks P1 again for the next time slice. This procedure is repeated till all the processes are serviced. The order in which the processes are scheduled for execution is
P1 (0-2) → P2 (2-4) → P3 (4-6) → P1 (6-8) → P2 (8-10) → P1 (10-12)

The waiting times for all the processes are given as
Waiting Time for P1 = 0 + (6-2) + (10-8) = 0+4+2 = 6 ms (P1 starts executing first, waits for two time slices to get execution back, and again one time slice for CPU time)
Waiting Time for P2 = (2-0) + (8-4) = 2+4 = 6 ms (P2 starts executing after P1 executes for one time slice, and waits for two time slices to get the CPU time again)
Waiting Time for P3 = (4-0) = 4 ms (P3 starts executing after the first time slices of P1 and P2, and completes its execution in a single time slice)
Average waiting time = (Waiting time for all processes) / No. of processes
= (Waiting time for (P1+P2+P3)) / 3
= (6+6+4)/3 = 16/3 = 5.33 milliseconds
Turn Around Time (TAT) for P1 = 12 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 10 ms (-Do-)
Turn Around Time (TAT) for P3 = 6 ms (-Do-)
Average Turn Around Time = (Turn Around Time for all processes) / No. of processes
= (Turn Around Time for (P1+P2+P3)) / 3
= (12+10+6)/3 = 28/3 = 9.33 milliseconds
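The slice-by-slice bookkeeping is easy to get wrong by hand, so here is a minimal C sketch that replays this example with a 2 ms slice; a circular scan over the arrays stands in for the 'Ready' queue:

    #include <stdio.h>

    int main(void)
    {
        const char *name[] = { "P1", "P2", "P3" };
        int burst[]     = { 6, 4, 2 };
        int remaining[] = { 6, 4, 2 };
        int finish[3] = { 0 }, n = 3, slice = 2, clock = 0, left = n;

        while (left > 0) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] == 0)
                    continue;                /* already finished */
                int run = remaining[i] < slice ? remaining[i] : slice;
                clock += run;                /* run for one time slice */
                remaining[i] -= run;
                if (remaining[i] == 0) { finish[i] = clock; left--; }
            }
        }
        for (int i = 0; i < n; i++)          /* all arrivals at time 0 */
            printf("%s: TAT = %2d ms, waiting = %2d ms\n",
                   name[i], finish[i], finish[i] - burst[i]);
        return 0;
    }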

Preemptive scheduling – Priority based Scheduling:
• Same as non-preemptive priority based scheduling, except for the switching of execution between tasks
• In preemptive priority based scheduling, any high priority process entering the 'Ready' queue is immediately scheduled for execution, whereas in non-preemptive scheduling any high priority process entering the 'Ready' queue is scheduled only after the currently executing process completes its execution or voluntarily releases the CPU
• The priority of a task/process in preemptive priority based scheduling is indicated in the same way as in the mechanisms adopted for non-preemptive multitasking
EXAMPLE: Three processes with process IDs P1, P2, P3 with estimated completion times 10, 5 and 7 milliseconds and priorities 1, 3, 2 (0 = highest priority, 3 = lowest priority) respectively enter the ready queue together. A new process P4 with estimated completion time 6 ms and priority 0 enters the 'Ready' queue after 5 ms of start of execution of P1. Assume all the processes contain only CPU operations and no I/O operations are involved.
Solution: At the beginning, there are only three processes (P1, P2 and P3) available in the 'Ready' queue, and the scheduler picks the process with the highest priority (here P1 with priority 1) for scheduling. Process P4 with estimated execution completion time 6 ms and priority 0 enters the 'Ready' queue after 5 ms of execution of P1. The processes are re-scheduled for execution in the following order:
P1 (0 to 5 ms) → P4 (5 to 11 ms) → P1 (11 to 16 ms) → P3 (16 to 23 ms) → P2 (23 to 28 ms)
The waiting times for all the processes are given as
Waiting Time for P1 = 0 + (11-5) = 0+6 = 6 ms (P1 starts executing first, gets preempted by P4 after 5 ms, and gets the CPU again after the completion of P4)
Waiting Time for P4 = 0 ms (P4 starts executing immediately on entering the 'Ready' queue, by preempting P1)
Waiting Time for P3 = 16 ms (P3 starts executing after completing P1 and P4)
Waiting Time for P2 = 23 ms (P2 starts executing after completing P1, P4 and P3)
Average waiting time = (Waiting time for all processes) / No. of processes
= (Waiting time for (P1+P4+P3+P2)) / 4
= (6 + 0 + 16 + 23)/4 = 45/4 = 11.25 milliseconds
Turn Around Time (TAT) for P1 = 16 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4 = 6 ms (Time spent in Ready Queue + Execution Time = (Execution Start Time – Arrival Time) + Estimated Execution Time = (5-5) + 6 = 0 + 6)
Turn Around Time (TAT) for P3 = 23 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 28 ms (Time spent in Ready Queue + Execution Time)
Average Turn Around Time = (Turn Around Time for all processes) / No. of processes
= (Turn Around Time for (P2+P4+P3+P1)) / 4
= (16+6+23+28)/4 = 73/4 = 18.25 milliseconds
Weighted Round Robin (WRR) Scheduling Algorithm
• The Weighted Round Robin (WRR) scheduling algorithm is an enhancement of the
standard Round Robin (RR) scheduling technique.
• It is designed to address the limitation of equal resource allocation in traditional Round
Robin by assigning different weights to tasks or processes based on their priority, resource
demands, or importance.
• This makes WRR particularly useful in CPU scheduling, network traffic management,
and load balancing scenarios.
How It Works
• Assign Weights: Each process or task is assigned a weight, which determines how many time slices (or requests) it gets in each cycle.
• Serve Based on Weights: The scheduler allocates CPU or resources to each process in proportion to its weight. A process with a higher weight gets more execution time before moving to the next process.
• Cycle Repeats: Once all processes have been scheduled according to their assigned weights, the cycle repeats.
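A minimal C sketch of the weight-proportional slicing described above; the task names, weights and the 10 ms slice are made-up illustration values, not from the slides:

    #include <stdio.h>

    int main(void)
    {
        const char *name[] = { "A", "B", "C" };
        int weight[] = { 3, 2, 1 };          /* slices granted per cycle */
        int n = 3, slice_ms = 10;

        for (int cycle = 0; cycle < 2; cycle++)
            for (int i = 0; i < n; i++)
                for (int w = 0; w < weight[i]; w++)
                    printf("cycle %d: task %s runs for %d ms\n",
                           cycle, name[i], slice_ms);
        return 0;
    }

With weights 3:2:1, task A receives half of every 60 ms cycle, which is exactly the 'more execution time before moving to the next process' behavior described above.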
Advantages of WRR
•Ensures fair resource distribution while considering priority.
•Prevents starvation by guaranteeing execution for lower-weighted tasks.
•Useful in network traffic management where bandwidth allocation is required.
Disadvantages of WRR
•Can be inefficient if weight assignments are not optimized.
•May still cause delays for lower-weighted processes.
•Not ideal for real-time systems where strict deadlines exist.
Applications of WRR
•CPU scheduling in operating systems.
•Load balancing in web servers and cloud computing.
•Network packet scheduling in Quality of Service (QoS) management.

Kernel Objects in Operating Systems
What Are Kernel Objects?
•Kernel objects are data structures maintained by the
operating system kernel to manage system resources such as
processes, threads, memory, and synchronization
mechanisms. These objects facilitate communication between
user-mode applications and the kernel, ensuring efficient
resource management and process coordination.
Types of Kernel Objects
•Semaphore Object
– Synchronization mechanism that controls access to shared
resources.
– Prevents race conditions by allowing a limited number of processes/threads to access a resource.
• Mutex Object (Mutual Exclusion)
– Ensures that only one thread can access a critical section at a
time.
– Used to prevent data inconsistency in multi-threaded
environments.
• Event Object
– Used for signaling between threads or processes.
– Can be manual-reset (remains set until explicitly cleared) or
auto-reset (resets automatically after a single wait operation).
• Timer Object
– Triggers events after a specified time interval.
– Used for scheduling tasks, process timeouts, and periodic
execution.

• Message queue is a kernel object that enables processes to
exchange discrete messages instead of a continuous stream of
bytes (like in pipes). Messages can be stored in a queue until they
are retrieved.
– Supports asynchronous communication between processes.
– Messages can have priorities, allowing urgent messages to be
processed first.
– Messages persist even if the receiving process is not active.
• Pipe is a kernel object that provides a unidirectional or
bidirectional communication channel between processes. It
allows one process to write data that another process can read,
functioning like a FIFO (First In, First Out) buffer.

Why Are Kernel Objects Important?
•Efficient Resource Management: Helps the OS allocate
and de-allocate resources dynamically.
•Process Synchronization & Communication: Prevents
race conditions and deadlocks.
•Security & Access Control: Ensures only authorized
processes can access system resources.
•Multitasking Support: Enables concurrent execution of
processes and threads.

Semaphores
• A semaphore (sometimes called a semaphore token) is a kernel
object that one or more threads of execution can acquire or
release for the purposes of synchronization or mutual exclusion.
• When a semaphore is first created, the kernel assigns to it an
associated semaphore control block (SCB), a unique ID, a value
(binary or a count), and a task-waiting list.

• A kernel can support many different types of semaphores,
including binary, counting, and mutual-exclusion (mutex)
semaphores.
• Binary Semaphores
• A binary semaphore can have a value of either 0 or 1. When a binary semaphore's value is 0, the semaphore is considered unavailable (or empty); when the value is 1, the binary semaphore is considered available (or full).
Note: when a binary semaphore is first created, it can be initialized to
either available or unavailable (1 or 0, respectively).

Counting Semaphores
•A counting semaphore uses a count to allow it to be acquired or
released multiple times. When creating a counting semaphore, assign
the semaphore a count that denotes the number of semaphore tokens it
has initially. If the initial count is 0, the counting semaphore is created
in the unavailable state. If the count is greater than 0, the semaphore is
created in the available state, and the number of tokens it has equals its
count.

Mutex
• A mutual exclusion (mutex) semaphore is a special binary semaphore that supports ownership, recursive access, task deletion safety, and one or more protocols for avoiding problems inherent to mutual exclusion.
•Note: Some kernels might use the terms lock and unlock
for a mutex instead of acquire and release.

Typical Semaphore Operations
Typical operations that developers might want to perform with the semaphores in an application include:
– creating and deleting semaphores,
– acquiring and releasing semaphores,
– clearing a semaphore's task-waiting list, and
– getting semaphore information.

Creating and Deleting Semaphores

Operation   Description
Create      Creates a semaphore
Delete      Deletes a semaphore

If a kernel supports different types of semaphores, different calls might be used for creating binary, counting, and mutex semaphores, as follows:
– binary: specify the initial semaphore state and the task-waiting order.
– counting: specify the initial semaphore count and the task-waiting order.
– mutex: specify the task-waiting order and enable task deletion safety, recursion, and priority-inversion avoidance protocols, if supported.
Acquiring and Releasing Semaphores
• The operations for acquiring and releasing a semaphore might have different names depending on the kernel: for example, take and give, sm_p and sm_v, pend and post, or lock and unlock. Regardless of the name, they all effectively acquire and release semaphores.

Operation   Description
Acquire     Acquire a semaphore token
Release     Release a semaphore token

Tasks typically make a request to acquire a semaphore in one of the following ways:
• Wait forever: the task remains blocked until it is able to acquire the semaphore.
• Wait with a timeout: the task remains blocked until it is able to acquire the semaphore or until a set interval of time, called the timeout interval, passes. At that point the task is removed from the semaphore's task-waiting list and put in either the ready state or the running state.
• Do not wait: the task makes a request to acquire a semaphore token but, if one is not available, it does not block.
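These three request modes map naturally onto the POSIX semaphore API, shown here as one concrete illustration; RTOS kernels expose analogous calls under different names (take/give, pend/post, etc.):

    #include <semaphore.h>
    #include <stdio.h>
    #include <time.h>

    static void acquire_examples(sem_t *sem)
    {
        /* Wait forever: block until a token is available. */
        sem_wait(sem);
        sem_post(sem);                        /* give the token back */

        /* Wait with a timeout: give up after roughly 2 seconds. */
        struct timespec deadline;
        clock_gettime(CLOCK_REALTIME, &deadline);
        deadline.tv_sec += 2;
        if (sem_timedwait(sem, &deadline) == 0)
            sem_post(sem);
        else
            printf("timed out waiting for the semaphore\n");

        /* Do not wait: try once and carry on if no token is available. */
        if (sem_trywait(sem) == 0)
            sem_post(sem);
        else
            printf("semaphore unavailable, not blocking\n");
    }

    int main(void)
    {
        sem_t sem;
        sem_init(&sem, 0, 1);   /* binary use: one token, initially available */
        acquire_examples(&sem);
        sem_destroy(&sem);
        return 0;
    }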

Clearing Semaphore Task-Waiting Lists

Operation   Description
Flush       Unblocks all tasks waiting on a semaphore

Getting Semaphore Information

Operation            Description
Show info            Shows general information about a semaphore
Show blocked tasks   Gets a list of IDs of tasks that are blocked on a semaphore

Typical Semaphore Use
•Semaphores are useful either for synchronizing execution of
multiple tasks or for coordinating access to a shared resource.
– wait-and-signal synchronization,
– multiple-task wait-and-signal synchronization,
– credit-tracking synchronization,
– single shared-resource-access synchronization,
– recursive shared-resource-access synchronization, and
– multiple shared-resource-access synchronization.

Wait-and-Signal Synchronization
• Two tasks can communicate for the purpose of synchronization without exchanging data.
• For example, a binary semaphore can be used between two tasks to coordinate the transfer of execution control.
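A minimal sketch of this pattern using POSIX threads and a semaphore initialized to 0 (unavailable): the waiting thread blocks until the signaling thread posts, and no data changes hands:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <unistd.h>

    static sem_t signal_sem;

    static void *waiter(void *arg)
    {
        (void)arg;
        sem_wait(&signal_sem);        /* block until signaled */
        printf("waiter: got the signal, continuing\n");
        return NULL;
    }

    static void *signaler(void *arg)
    {
        (void)arg;
        sleep(1);                     /* do some work first */
        sem_post(&signal_sem);        /* hand over execution control */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        sem_init(&signal_sem, 0, 0);  /* binary use, initially unavailable */
        pthread_create(&t1, NULL, waiter, NULL);
        pthread_create(&t2, NULL, signaler, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        sem_destroy(&signal_sem);
        return 0;
    }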

Multiple-Task Wait-and-Signal Synchronization
•When coordinating the synchronization of more than two
tasks, use the flush operation on the task-waiting list of a
binary semaphore.

Credit-Tracking Synchronization
•Sometimes the rate at which the signaling task
executes is higher than that of the signaled task. In this
case, a mechanism is needed to count each signaling
occurrence. The counting semaphore provides just this
facility.
• With a counting semaphore, the signaling task can continue to execute and increment the count at its own pace, while the waiting task, when unblocked, executes at its own pace.
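A minimal POSIX sketch of credit-tracking: the signaler posts five times at full speed, and the semaphore count remembers every signal while the slower waiter catches up:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <unistd.h>

    static sem_t credits;

    static void *signaler(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 5; i++)
            sem_post(&credits);            /* each post adds one credit */
        return NULL;
    }

    static void *waiter(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 5; i++) {
            sem_wait(&credits);            /* consume one credit */
            printf("consumed credit %d\n", i + 1);
            usleep(100 * 1000);            /* deliberately slower pace */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t s, w;
        sem_init(&credits, 0, 0);          /* counting semaphore, count 0 */
        pthread_create(&s, NULL, signaler, NULL);
        pthread_create(&w, NULL, waiter, NULL);
        pthread_join(s, NULL);
        pthread_join(w, NULL);
        sem_destroy(&credits);
        return 0;
    }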

• Single Shared-Resource-Access Synchronization
• Recursive Shared-Resource-Access Synchronization
• Multiple Shared-Resource-Access Synchronization
Semaphore Issues (Task Synchronisation Issues)
• Forgetting to take semaphore
• Forgetting to release semaphore
• Taking wrong semaphore
• Holding semaphore for too long
• Priority Inversion
• Deadlock

• Mutual Exclusion (Mutex) Semaphores

Mutex Ownership
• Ownership of a mutex is gained when a task first locks the mutex
by acquiring it. Conversely, a task loses ownership of the mutex
when it unlocks it by releasing it. When a task owns the mutex, it
is not possible for any other task to lock or unlock that mutex.
• Contrast this concept with the binary semaphore, which can be
released by any task, even a task that did not originally acquire
the semaphore.
Recursive Locking
•Many mutex implementations also support recursive locking, which
allows the task that owns the mutex to acquire it multiple times in
the locked state. Depending on the implementation, recursion within
a mutex can be automatically built into the mutex, or it might need
to be enabled explicitly when the mutex is first created.
•The mutex with recursive locking is called a recursive mutex. This
type of mutex is most useful when a task requiring exclusive access
to a shared resource calls one or more routines that also require
access to the same resource. A recursive mutex allows nested
attempts to lock the mutex to succeed, rather than cause deadlock,
which is a condition in which two or more tasks are blocked and are
waiting on mutually locked resources.
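A minimal sketch with a POSIX recursive mutex: recursion is enabled explicitly when the mutex is created, and the nested lock taken inside helper() succeeds instead of deadlocking:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock;
    static int shared_counter;

    static void helper(void)
    {
        pthread_mutex_lock(&lock);     /* nested lock by the owner: succeeds */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }

    static void update(void)
    {
        pthread_mutex_lock(&lock);     /* first lock: take ownership */
        helper();                      /* calls a routine that locks again */
        pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
        pthread_mutex_init(&lock, &attr);

        update();
        printf("shared_counter = %d\n", shared_counter);

        pthread_mutex_destroy(&lock);
        pthread_mutexattr_destroy(&attr);
        return 0;
    }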

Mutex vs Semaphore

Feature              Mutex                            Semaphore
Ownership            Owned by the acquiring task      No ownership; any task can signal
Usage                Mutual exclusion for a single    Synchronization or counting
                     resource                         purposes
Priority Inversion   Supports priority inheritance    No built-in priority management
Recursive Locking    Possible (recursive mutex)       Not applicable

Message Queues
• A message queue is a buffer-like object through which
tasks and ISRs send and receive messages to
communicate and synchronize with data.

Message Queue States

Typical message queue operations include the following:
– creating and deleting message queues,
– sending and receiving messages, and
– obtaining message queue information.

Creating and Deleting Message Queues

Operation   Description
Create      Creates a message queue
Delete      Deletes a message queue
Sending and Receiving Messages

Operation   Description
Send        Sends a message to a message queue
Receive     Receives a message from a message queue
Broadcast   Broadcasts messages
Sending Messages
In any case, messages are sent to a message queue in the following ways:
– not block (ISRs and tasks),
– block with a timeout (tasks only), and
– block forever (tasks only).

Receiving Messages
Messages can be read from the head of a message queue
in two different ways:
– destructive read, and
– non-destructive read.
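As one concrete illustration of these operations, the POSIX message queue API is sketched below; the queue name and sizes are arbitrary, and mq_receive is a destructive read that removes the highest-priority message from the head of the queue (link with -lrt on Linux):

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
        mqd_t q = mq_open("/demo_queue", O_CREAT | O_RDWR, 0600, &attr);

        /* higher number = higher priority: the urgent message jumps ahead */
        mq_send(q, "routine message", strlen("routine message") + 1, 1);
        mq_send(q, "urgent message", strlen("urgent message") + 1, 9);

        char buf[64];
        unsigned prio;
        mq_receive(q, buf, sizeof buf, &prio);
        printf("received \"%s\" with priority %u\n", buf, prio);

        mq_close(q);
        mq_unlink("/demo_queue");
        return 0;
    }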

Obtaining Message Queue Information

Operation                        Description
Show queue info                  Gets information on a message queue
Show queue's task-waiting list   Gets a list of tasks in the queue's task-waiting list
Typical Message Queue Use
The following are typical ways to use message queues
within an application:
•non-interlocked, one-way data communication,
•interlocked, one-way data communication,
•interlocked, two-way data communication, and
•broadcast communication.

• Non-Interlocked, One-Way Data Communication

• Interlocked, One-Way Data Communication

• Interlocked, Two-Way Data Communication

• Broadcast Communication

Pipes
•Pipes are kernel objects that provide unstructured data
exchange and facilitate synchronization among tasks. In a
traditional implementation, a pipe is a unidirectional data
exchange facility. Two descriptors, one for each end of the
pipe (one end for reading and one for writing), are returned
when the pipe is created.
•Data is written via one descriptor and read via the other.
The data remains in the pipe as an unstructured byte
stream. Data is read from the pipe in FIFO order.
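A minimal POSIX sketch of an unnamed pipe: pipe() returns the two descriptors, the child writes through one end, and the parent reads the bytes back in FIFO order:

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];                      /* fd[0]: read end, fd[1]: write end */
        if (pipe(fd) != 0)
            return 1;

        if (fork() == 0) {              /* child: the writer */
            close(fd[0]);               /* close the unused read end */
            const char *msg = "data through the pipe";
            write(fd[1], msg, strlen(msg) + 1);
            close(fd[1]);
            _exit(0);
        }

        close(fd[1]);                   /* parent: the reader */
        char buf[64];
        read(fd[0], buf, sizeof buf);
        printf("parent read: \"%s\"\n", buf);
        close(fd[0]);
        wait(NULL);
        return 0;
    }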

• A pipe provides a simple data flow facility so that the reader
becomes blocked when the pipe is empty, and the writer
becomes blocked when the pipe is full.
• Typically, a pipe is used to exchange data between a data-producing task and a data-consuming task. It is also permissible to have several writers for the pipe, with multiple readers on it.

Pipe Control Blocks

Pipe States

Named and Unnamed Pipes
A kernel typically supports two kinds of pipe objects: named pipes and unnamed pipes.
• A named pipe, also known as a FIFO, has a name similar to a file name and appears in the file system as if it were a file or a device. Any task or ISR that needs to use the named pipe can reference it by name.
• An unnamed pipe does not have a name and does not appear in the file system. It must be referenced by the descriptors that the kernel returns when the pipe is created.
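A minimal POSIX sketch of a named pipe: mkfifo() creates an entry in the file system (the path below is arbitrary), and the processes then open it by name instead of sharing descriptors:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/tmp/demo_fifo";
        mkfifo(path, 0600);                 /* the FIFO appears in the file system */

        if (fork() == 0) {                  /* child opens the FIFO by name */
            int wfd = open(path, O_WRONLY);
            write(wfd, "hello via FIFO", 15);
            close(wfd);
            _exit(0);
        }

        int rfd = open(path, O_RDONLY);     /* blocks until a writer opens it */
        char buf[32];
        read(rfd, buf, sizeof buf);
        printf("read from FIFO: \"%s\"\n", buf);
        close(rfd);
        wait(NULL);
        unlink(path);                       /* remove the FIFO entry */
        return 0;
    }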

Typical Pipe Operations
– create and destroy a pipe,
– read from or write to a pipe,
– issue control commands on the pipe, and
– select on a pipe.

Create and Destroy

Operation   Description
Pipe        Creates a pipe
Open        Opens a pipe
Close       Deletes or closes a pipe
Read and Write

Operation   Description
Read        Reads from the pipe
Write       Writes to a pipe

Control

Operation   Description
Fcntl       Provides control over the pipe descriptor
Select

Operation   Description
Select      Waits for conditions to occur on a pipe
Typical Uses of Pipes

