OS Unit-II
Scheduling criteria:
Many criteria have been suggested for comparing CPU-scheduling algorithms. The criteria
include the following:
CPU utilization: We want to keep the CPU as busy as possible. CPU utilization may range from
0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded system)
to 90 percent (for a heavily used system).
Throughput: If the CPU is busy executing processes, then work is being done. One measure of
work is the number of processes completed per time unit, called throughput. For long processes,
this rate may be 1 process per hour; for short transactions, throughput might be 10 processes per
second.
Turnaround time: The interval from the time of submission of a process to the time of
completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to
get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
Waiting time: Waiting time is the sum of the periods spent waiting in the ready queue.
Response time: In an interactive system, turnaround time may not be the best criterion.
Another measure is the time from the submission of a request until the first response is
produced. This measure, called response time, is the amount of time it takes to start responding,
but not the time that it takes to output that response.
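As a small worked check of these definitions, the sketch below (Python; the process data is hypothetical) computes turnaround and waiting time from arrival, burst, and completion times:

```python
# Turnaround time = completion time - arrival time
# Waiting time    = turnaround time - CPU burst time
# Hypothetical processes: (name, arrival, burst, completion), all in ms
processes = [("P1", 0, 24, 24), ("P2", 0, 3, 27), ("P3", 0, 3, 30)]

for name, arrival, burst, completion in processes:
    turnaround = completion - arrival   # submission to completion
    waiting = turnaround - burst        # time spent in the ready queue
    print(name, turnaround, waiting)
```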
CPU Scheduling Algorithm:
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. Four commonly used algorithms are first-come first-served (FCFS), shortest-job-first (SJF), priority, and round-robin (RR) scheduling.
1. First Come, First Served Scheduling (FCFS) Algorithm:
This is the simplest CPU-scheduling algorithm. In this scheme, the process that requests the CPU first is allocated the CPU first. The implementation of the FCFS algorithm is easily managed with a FIFO queue: when a process enters the ready queue, its PCB is linked onto the tail of the queue. The average waiting time under the FCFS policy is often quite long. Consider the following example:
Suppose processes P1, P2, and P3 arrive at time 0 in that order, with CPU burst times of 24, 3, and 3 milliseconds. The FCFS Gantt chart is:

| P1 | P2 | P3 |
0         24   27   30

The waiting times are 0, 24, and 27 milliseconds, so the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds. If the processes arrive in the order P2, P3, P1 instead, the Gantt chart becomes:

| P2 | P3 | P1 |
0    3    6         30

and the average waiting time drops to (6 + 0 + 3)/3 = 3 milliseconds.

2. Shortest Job First (SJF) Scheduling Algorithm:
This algorithm associates with each process the length of its next CPU burst and allocates the CPU to the process with the smallest next burst; ties are broken in FCFS order. Consider processes P1, P2, P3, and P4 with burst times of 6, 8, 7, and 3 milliseconds. SJF schedules them as:

| P4 | P1 | P3 | P2 |
0    3    9    16   24

giving waiting times of 3, 16, 9, and 0 milliseconds and an average of (3 + 16 + 9 + 0)/4 = 7 milliseconds. SJF may be either nonpreemptive or preemptive; the preemptive version is called shortest-remaining-time-first. With processes P1, P2, P3, and P4 arriving at times 0, 1, 2, and 3 with burst times 8, 4, 9, and 5 milliseconds, preemptive SJF gives:

| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26

for an average waiting time of ((10 − 1) + (1 − 1) + (17 − 2) + (5 − 3))/4 = 6.5 milliseconds.
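The averages above can be verified with a short simulation. This is a minimal sketch assuming nonpreemptive scheduling with all processes arriving at time 0 (the helper name avg_waiting is my own):

```python
def avg_waiting(bursts, order):
    """Average waiting time when processes run to completion in `order`.
    `bursts` maps process name -> CPU burst (ms); all arrive at time 0."""
    clock, total_wait = 0, 0
    for name in order:
        total_wait += clock           # time this process sat in the ready queue
        clock += bursts[name]
    return total_wait / len(order)

bursts = {"P1": 24, "P2": 3, "P3": 3}
print(avg_waiting(bursts, ["P1", "P2", "P3"]))   # FCFS arrival order -> 17.0
print(avg_waiting(bursts, ["P2", "P3", "P1"]))   # short jobs first  -> 3.0

sjf = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}
order = sorted(sjf, key=sjf.get)                 # SJF picks the smallest burst
print(order, avg_waiting(sjf, order))            # ['P4','P1','P3','P2'] 7.0
```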
3. Priority Scheduling Algorithm:
In this scheduling a priority is associated with each process and the CPU is allocated
to the process with the highest priority. Equal priority processes are scheduled in
FCFS manner.
Priorities are generally indicated by some fixed range of numbers, such as 0 to 7 or 0
to 4,095. However, there is no general agreement on whether 0 is the highest or
lowest priority.
Priority scheduling can be either preemptive or nonpreemptive. A preemptive priority scheduler preempts the CPU if the priority of a newly arrived process is higher than that of the running process; a nonpreemptive scheduler simply puts the new process at the head of the ready queue.
Starvation: A major problem with priority scheduling algorithms is indefinite blocking, or starvation. A process that is ready to run but waiting for the CPU can be considered blocked, and a priority scheduling algorithm can leave some low-priority processes waiting indefinitely. A solution to this problem is aging: gradually increasing the priority of processes that wait in the system for a long time.
Consider the following set of processes, assumed to arrive at time 0, with the given CPU burst times (in milliseconds) and priorities (a smaller number indicating a higher priority):

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

Priority scheduling runs them in the order P2, P5, P1, P3, P4, giving waiting times of 6, 0, 16, 18, and 1 milliseconds and an average waiting time of 41/5 = 8.2 milliseconds.
4. Round Robin (RR) Scheduling Algorithm:
The round-robin algorithm is designed especially for time-sharing systems. It is similar to FCFS, but preemption is added: a small unit of time, called a time quantum, is defined, and the ready queue is treated as a circular queue. Consider again processes P1, P2, and P3 with burst times of 24, 3, and 3 milliseconds:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30
If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds.
Since it requires another 20 milliseconds, it is preempted after the first time quantum, and
the CPU is given to the next process in the queue, process P2 . Process P2 does not need
4 milliseconds, so it quits before its time quantum expires. The CPU is then given to the
next process, process P3. Once each process has received 1 time quantum, the CPU is
returned to process P1 for an additional time quantum. P1 waits for 6 milliseconds (10 − 4), P2 waits for 4 milliseconds, and P3 waits for 7 milliseconds, so the average waiting time is 17/3 ≈ 5.66 milliseconds.
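The RR waiting times above can be reproduced with a small simulation (a sketch assuming all processes arrive at time 0; the function name is mine):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin for processes that all arrive at time 0.
    Returns per-process waiting time (time spent ready but not running)."""
    queue = deque(bursts)                  # ready queue, FIFO over dict keys
    remaining = dict(bursts)
    last_ran = {p: 0 for p in bursts}      # time each process last left the CPU
    waiting = {p: 0 for p in bursts}
    clock = 0
    while queue:
        p = queue.popleft()
        waiting[p] += clock - last_ran[p]  # waited since it became ready again
        run = min(quantum, remaining[p])
        clock += run
        remaining[p] -= run
        last_ran[p] = clock
        if remaining[p] > 0:               # quantum expired: back of the queue
            queue.append(p)
    return waiting

print(round_robin({"P1": 24, "P2": 3, "P3": 3}, 4))   # {'P1': 6, 'P2': 4, 'P3': 7}
```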
System call interface for process management: fork, exit, wait, waitpid, exec
System call interfaces are a crucial part of the operating system, providing a way for user-level processes to request services or functionality from the kernel. In process management, several system calls are commonly used to create, manage, and control processes:
• fork: creates a new process by duplicating the calling process; it returns the child's PID in the parent and 0 in the child.
• exit: terminates the calling process and returns an exit status to its parent.
• wait: suspends the parent until one of its children terminates, returning that child's status.
• waitpid: like wait, but waits for a specific child and can be made nonblocking.
• exec: replaces the calling process's memory image with a new program; on success it does not return.
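As a concrete illustration, the sketch below uses Python's os module, whose os.fork, os.waitpid, and os.execvp functions are thin wrappers over the corresponding POSIX system calls (this assumes a Unix-like system with the standard `true` utility on the PATH):

```python
import os

pid = os.fork()                 # fork: duplicate the calling process
if pid == 0:
    # Child process: replace the image with the `true` program via exec.
    # execvp searches PATH and only returns if the exec fails.
    os.execvp("true", ["true"])
    os._exit(127)               # exit: terminate immediately if exec failed
else:
    # Parent: waitpid blocks until this specific child terminates.
    _, status = os.waitpid(pid, 0)
    print("child exit code:", os.WEXITSTATUS(status))   # 0 on success
```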
System Model:
A system consists of a finite number of resources to be distributed among a number of competing processes. The resources are partitioned into several types, each of which consists of a number of identical instances. A process may utilize a resource in the following sequence:
• Request: The process requests the resource; if the request cannot be granted immediately, the process must wait until it can acquire the resource.
• Use: The process operates on the resource.
• Release: The process releases the resource.
Deadlock Characteristics:
In a deadlock, processes never finish executing, and system resources are tied up, preventing other jobs from starting. A deadlock situation can arise if the following four conditions hold simultaneously in a system:
• Mutual Exclusion: Only one process at a time can use a resource. If another process requests that resource, the requesting process must wait until the resource has been released.
• Hold and Wait: A process must be holding at least one resource and waiting to acquire additional resources that are currently held by other processes.
• No Preemption: Resources allocated to a process cannot be forcibly taken away from it; a resource can be released only voluntarily by the process holding it, after that process has completed its task.
• Circular Wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., P(n − 1) is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
Resource Allocation Graph:
Deadlocks can be described more precisely in terms of a directed graph called a system resource-allocation graph.
The graph consists of a set of vertices ‘V’ and a set of edges ‘E’.
The set of vertices ‘V’ is partitioned into two different types of nodes: P = {P1, P2, ..., Pn}, the set of all active processes in the system, and R = {R1, R2, ..., Rm}, the set of all resource types in the system.
A directed edge from process Pi to resource type Rj, denoted Pi → Rj, signifies that process Pi has requested an instance of resource type Rj and is currently waiting for that resource.
A directed edge from resource type Rj to process Pi, denoted Rj → Pi, signifies that an instance of resource type Rj has been allocated to process Pi. The edge Pi → Rj is called a request edge, and Rj → Pi is called an assignment edge.
When process Pi requests an instance of resource type Rj, a request edge is inserted in the resource-allocation graph. When the request can be fulfilled, the request edge is transformed into an assignment edge. When the process no longer needs access to the resource, it releases the resource, and the assignment edge is deleted. The resource-allocation graph shown in the figure below represents the following situation.
• The sets P, R, and E:
P = {P1, P2, P3}
R = {R1, R2, R3, R4}
E = {P1 → R1, P2 → R3, R1 → P2, R2 → P2, R2 → P1, R3 → P3}
• The resource instances:
Resource R1 has one instance.
Resource R2 has two instances.
Resource R3 has one instance.
Resource R4 has three instances.
Example of a Resource Allocation Graph
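The edge set E above can be checked for cycles programmatically: a cycle is a necessary condition for deadlock, and also a sufficient one when every resource type has a single instance. A minimal sketch (the helper name is mine):

```python
def has_cycle(edges):
    """DFS cycle detection on a directed graph given as a list of (u, v) edges."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on DFS stack / done
    color = {}
    def dfs(u):
        color[u] = GRAY
        for v in graph.get(u, []):
            if color.get(v, WHITE) == GRAY:   # back edge: cycle found
                return True
            if color.get(v, WHITE) == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False
    nodes = {n for e in edges for n in e}
    return any(color.get(n, WHITE) == WHITE and dfs(n) for n in nodes)

E = [("P1", "R1"), ("P2", "R3"), ("R1", "P2"),
     ("R2", "P2"), ("R2", "P1"), ("R3", "P3")]
print(has_cycle(E))   # False: no cycle, so no deadlock in this graph
```

Adding an edge P3 → R2 (P3 requesting R2) would close the cycle P1 → R1 → P2 → R3 → P3 → R2 → P1 and the check would return True.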
Deadlock Avoidance:
Deadlock avoidance requires additional information about how resources are to be used. The simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need. The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition. The resource-allocation state is defined by the number of available and allocated resources and the maximum demands of the processes.
Safe State
When a process requests an available resource, the system must decide whether immediate allocation leaves the system in a safe state. A system is in a safe state if there exists a safe sequence of all processes: a sequence <P1, P2, ..., Pn> of ALL the processes in the system such that, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all Pj with j < i. That is:
• If Pi's resource needs are not immediately available, then Pi can wait until all Pj with j < i have finished.
• When Pj is finished, Pi can obtain the needed resources, execute, return its allocated resources, and terminate.
• When Pi terminates, Pi+1 can obtain its needed resources, and so on.
If a system is in a safe state, there can be no deadlock. If a system is in an unsafe state, there is a possibility of deadlock. The OS cannot prevent processes from requesting resources in a sequence that leads to deadlock, so avoidance ensures that the system will never enter an unsafe state.
Avoidance Algorithms
• Single instance of each resource type: use a resource-allocation graph.
• Multiple instances of a resource type: use the banker's algorithm.
Resource-Allocation Graph Algorithm
In this scheme a claim edge Pi → Rj, drawn as a dashed line, indicates that process Pi may request resource Rj at some time in the future. A request is granted only if converting the request edge into an assignment edge does not form a cycle in the resource-allocation graph.
Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available and Finish[i] = false for i = 0, 1, ..., n − 1.
2. Find an index i such that Finish[i] == false and Needi ≤ Work. If no such i exists, go to step 4.
3. Work = Work + Allocationi; Finish[i] = true; go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
Resource-Request Algorithm
Let Requesti be the request vector for process Pi. If Requesti[j] = k, then process Pi wants k instances of resource type Rj. In the classic banker's-algorithm example, suppose process P1 requests (1, 0, 2). After checking that the request exceeds neither P1's remaining need nor the available resources, the system pretends to grant it, giving the state:

Process   Allocation   Need    Available
P0        0 1 0        7 4 3   2 3 0
P1        3 0 2        0 2 0
P2        3 0 2        6 0 0
P3        2 1 1        0 1 1
P4        0 0 2        4 3 1

Executing the safety algorithm on this state shows that the sequence <P1, P3, P4, P0, P2>
satisfies the safety requirement, so the request can be granted.
Deadlock Detection
• If a system employs neither deadlock prevention nor deadlock avoidance, a detection algorithm is needed; the algorithm used depends on whether each resource type has a single instance or several instances. If every resource type has only a single instance, we can define a deadlock-detection algorithm that uses a variant of the resource-allocation graph called a wait-for graph. We obtain this graph from the resource-allocation graph by removing the resource nodes and collapsing the appropriate edges: an edge Pi → Pj exists in the wait-for graph if process Pi is waiting for a resource held by process Pj, and a deadlock exists exactly when this graph contains a cycle. The figure below shows a resource-allocation graph and its corresponding wait-for graph.
Detection Algorithm (several instances of a resource type)
This algorithm is similar to the safety algorithm, but it compares each process's current Request vector (rather than its remaining Need) against Work, and a process holding no resources is assumed not to be involved in a deadlock. Consider a system with five processes P0 through P4, three resource types A (7 instances), B (2 instances), and C (6 instances), and Available = (0, 0, 0):

Process   Allocation   Request
          A B C        A B C
P0        0 1 0        0 0 0
P1        2 0 0        2 0 2
P2        3 0 3        0 0 0
P3        2 1 1        1 0 0
P4        0 0 2        0 0 2

The sequence <P0, P2, P3, P1, P4> results in Finish[i] = true for all i, so the system is not deadlocked. Now suppose P2 makes one additional request for an instance of type C, so its Request becomes (0, 0, 1):

Process   Request
          A B C
P0        0 0 0
P1        2 0 2
P2        0 0 1
P3        1 0 0
P4        0 0 2

State of the system? Although the resources held by P0 can be reclaimed, the remaining available resources are not sufficient to fulfill the requests of the other processes. A deadlock therefore exists, consisting of processes P1, P2, P3, and P4.
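Both cases of this example can be checked with a short sketch of the detection algorithm (the function name is mine):

```python
def detect_deadlock(available, allocation, request):
    """Deadlock detection for multiple resource instances: like the safety
    algorithm, but Work is compared against each process's current Request.
    Returns the list of deadlocked process indices (empty if none)."""
    n, m = len(allocation), len(available)
    work = list(available)
    # A process holding no resources cannot be part of a deadlock.
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                # Pi's request can be met now; assume it finishes and releases.
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                progressed = True
    return [i for i in range(n) if not finish[i]]

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 3], [2, 1, 1], [0, 0, 2]]
request    = [[0, 0, 0], [2, 0, 2], [0, 0, 0], [1, 0, 0], [0, 0, 2]]
print(detect_deadlock([0, 0, 0], allocation, request))   # [] -> no deadlock

request[2] = [0, 0, 1]   # P2 now asks for one more instance of C
print(detect_deadlock([0, 0, 0], allocation, request))   # [1, 2, 3, 4]
```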
Recovery from Deadlock:
• Process Termination: abort all deadlocked processes, or abort one process at a time until the deadlock cycle is eliminated.
• Resource Preemption: successively preempt resources from processes and give them to other processes until the deadlock cycle is broken.
• Rollback: return a preempted process to some safe state and restart it from that state.