UNIT 2 Notes-OS
Virtualization technology enables a single PC or server to simultaneously run multiple operating systems
or multiple sessions of a single OS
A machine with virtualization software can host numerous applications, including those that run on
different operating systems, on a single platform
The host operating system can support a number of virtual machines, each of which has the
characteristics of a particular OS
The solution that enables virtualization is a virtual machine monitor (VMM), or hypervisor
A virtual machine takes the layered approach to its logical conclusion. It treats hardware and the operating-system kernel as though they were all hardware.
The resources of the physical computer are shared to create the virtual machines.
1. CPU scheduling can create the appearance that users have their own processor.
2. Spooling and a file system can provide virtual card readers and virtual line printers.
The virtual-machine concept provides complete protection of system resources since each virtual
machine is isolated from all other virtual machines.
A virtual-machine system is a perfect vehicle for operating-systems research and development. System
development is done on the virtual machine, instead of on a physical machine and so does not disrupt
normal system operation.
The virtual machine concept is difficult to implement due to the effort required to provide an exact
duplicate of the underlying machine.
CS6401-OPERATING SYSTEMS
UNIT II PROCESS MANAGEMENT
Processes-Process Concept, Process Scheduling, Operations on Processes, Interprocess
Communication; Threads- Overview, Multicore Programming, Multithreading Models;
Windows 7 -Thread and SMP Management. Process Synchronization - Critical Section
Problem, Mutex Locks, Semaphores, Monitors; CPU Scheduling and Deadlocks.
PROCESS CONCEPTS
Process Concept
A process can be thought of as a program in execution.
A process is the unit of work in a modern time-sharing system.
A process is more than the program code, which is sometimes known as the text section.
It also includes the current activity, as represented by the value of the program counter
and the contents of the registers.
A process generally also includes the process stack, which contains temporary data (such
as function parameters, return addresses, and local variables), and a data section, which
contains global variables. A process may also include a heap, which is memory that is
dynamically allocated during process run time.
PROCESS SCHEDULING
As processes enter the system, they are put into a job queue.
The processes that are residing in main memory and are ready and waiting to execute
are kept on a list called the ready queue.
The list of processes waiting for an I/O device is kept in a device queue for that
particular device.
A new process is initially put in the ready queue. It waits in the ready queue until it is selected for execution, or dispatched.
Once the process is assigned to the CPU and is executing, one of several events could occur:
The process could issue an I/O request, and then be placed in an I/O
queue.
The process could create a new subprocess and wait for its
termination.
The process could be removed forcibly from the CPU, as a result of
an interrupt, and be put back in the ready queue.
A common representation of process scheduling is a queueing diagram.
Schedulers
The operating system must select, for scheduling purposes, processes from these queues
in some order
The selection process is carried out by the appropriate scheduler.
They are:
1. Long-term Scheduler or Job Scheduler
2. Short-term Scheduler or CPU Scheduler
3. Medium term Scheduler
Long-Term Scheduler
The long-term scheduler, or job scheduler, selects processes from this pool and loads
them into memory for execution. It is invoked very infrequently. It controls the degree
of multiprogramming.
Short-Term Scheduler
The short-term scheduler, or CPU scheduler, selects from among the
processes that are ready to execute, and allocates the CPU to one of them. It is invoked
very frequently.
Processes can be described as either I/O bound or CPU bound.
An I/O-bound process spends more of its time doing I/O than it spends doing
computations.
A CPU-bound process, on the other hand, generates I/O requests infrequently,
using more of its time doing computation than an I/O-bound
process uses.
The system with the best performance will have a combination of CPU- bound
and I/O-bound processes.
Medium Term Scheduler
Some operating systems, such as time-sharing systems, may introduce an
additional, intermediate level of scheduling.
The key idea is that the medium-term scheduler removes processes from memory and thus
reduces the degree of multiprogramming.
At some later time, the process can be reintroduced into memory and its execution
can be continued where it left off. This scheme is called swapping.
Context Switching
Switching the CPU to another process requires saving the state of the old process
and loading the saved state for the new process. This task is known as a context
switch.
Context-switch time is pure overhead, because the system does no useful work
while switching.
Its speed varies from machine to machine, depending on the memory speed, the
number of registers that must be copied, and the existence of special
instructions.
OPERATIONS ON PROCESSES
1. Process Creation
A process may create several new processes, during execution.
The creating process is called a parent process, whereas the new processes are called
the children of that process.
When a process creates a new process, two possibilities exist in terms of
execution:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.
There are also two possibilities in terms of the address space of the new process:
1. The child process is a duplicate of the parent process.
2. The child process has a program loaded into it.
In UNIX, each process is identified by its process identifier, which is a unique
integer. A new process is created by the fork system call.
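A minimal C sketch of this pattern on a POSIX system (the use of /bin/ls here is purely illustrative):
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
int main(void)
{
    pid_t pid = fork();            /* create a child process */
    if (pid < 0) {                 /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {         /* child: load a new program into it */
        execlp("/bin/ls", "ls", (char *)NULL);
    } else {                       /* parent: wait for the child to terminate */
        wait(NULL);
        printf("child complete\n");
    }
    return 0;
}
Here the parent executes concurrently with the child until it calls wait(), and the child has a new program loaded into its address space, illustrating both pairs of possibilities listed above.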
In Linux, the kthreadd process is responsible for creating additional processes that perform tasks on
behalf of the kernel (for example, khelper and pdflush).
The sshd process is responsible for managing clients that connect to the system by using ssh
(which is short for secure shell). The login process is responsible for managing clients that
directly log onto the system.
In general, when a process creates a child process, that child process will need certain
resources (CPU time, memory, files, I/O devices) to accomplish its task.
A child process may be able to obtain its resources directly from the operating system, or it
may be constrained to a subset of the resources of the parent process.
The parent may have to partition its resources among its children, or it may be able to share
some resources (such as memory or files) among several of its children. Restricting a child
process to a subset of the parent's resources prevents any process from overloading the system
by creating too many child processes.
2. Process Termination
A process terminates when it finishes executing its final statement and asks the
operating system to delete it by using the exit system call.
At that point, the process may return data (output) to its parent process (via the wait
system call).
A process can cause the termination of another process via an appropriate system
call.
A parent may terminate the execution of one of its children for a variety of reasons,
such as these:
1. The child has exceeded its usage of some of the resources that it has
been allocated.
2. The task assigned to the child is no longer required.
3. The parent is exiting, and the operating system does not allow a child to continue
if its parent terminates. On such systems, if a process terminates (either normally
or abnormally), then all its children must also be terminated. This phenomenon,
referred to as cascading termination, is normally initiated by the
operating system.
When a process terminates, its resources are de-allocated by the operating system.
A process that has terminated, but whose parent has not yet called wait(), is known as a zombie
process.
Now consider what would happen if a parent did not invoke wait() and instead terminated,
thereby leaving its child processes as orphans. UNIX addresses this scenario by assigning the
init process as the new parent of orphan processes; init periodically invokes wait(), collecting
the exit status of any terminated orphan.
COOPERATING PROCESSES
Processes executing concurrently in the operating system may be either independent processes
or cooperating processes.
A process is independent if it cannot affect or be affected by the other processes executing in the
system. Any process that does not share data with any other process is independent.
A process is cooperating if it can affect or be affected by the other processes executing in the
system. Clearly, any process that shares data with other processes is a cooperating process.
There are several reasons for providing an environment that allows process cooperation:
Information sharing. Since several users may be interested in the same piece of information
(for instance, a shared file), we must provide an environment to allow concurrent access to such
information.
Computation speedup. If we want a particular task to run faster, we must break it into
subtasks, each of which will be executing in parallel with the others. Notice that such a speedup
can be achieved only if the computer has multiple processing cores.
Modularity. We may want to construct the system in a modular fashion, dividing the system
functions into separate processes or threads.
Convenience. Even an individual user may work on many tasks at the same time. For instance,
a user may be editing, listening to music, and compiling in parallel.
INTERPROCESS COMMUNICATION
Cooperating processes require an Inter Process Communication (IPC) mechanism that will
allow them to exchange data and information. There are two fundamental models of interprocess
communication: shared memory and message passing.
In the shared-memory model, a region of memory that is shared by cooperating processes is
established. Processes can then exchange information by reading and writing data to the shared
region.
Other processes that wish to communicate using this shared-memory segment must attach it to
their address space.
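As an illustration, a minimal POSIX shared-memory sketch; the object name "/ipc_demo" and the message are made up for the example:
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>
int main(void)
{
    const int SIZE = 4096;
    /* create (or open) a named shared-memory object */
    int fd = shm_open("/ipc_demo", O_CREAT | O_RDWR, 0666);
    ftruncate(fd, SIZE);                       /* set the region's size */
    char *ptr = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);       /* attach to address space */
    strcpy(ptr, "hello");                      /* write into the shared region */
    /* a cooperating process would shm_open() and mmap() the same name
       to read the message */
    return 0;
}
On some systems this must be linked with -lrt. The key point is that, once mapped, the region is accessed with ordinary memory reads and writes, with no further kernel involvement.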
Message passing
1. Basic Structure:
If processes P and Q want to communicate, they must send messages to and receive
messages from each other; a communication link must exist between them.
There are several methods for logically implementing a link and the send()/receive()
operations:
1. Direct or indirect communication
2. Symmetric or asymmetric communication
3. Automatic or explicit buffering
4. Send by copy or send by reference
5. Fixed-sized or variable-sized messages
2. Naming
Processes that want to communicate must have a way to refer to each other.
They can use either direct or indirect communication.
1. Direct Communication
Each process that wants to communicate must explicitly name the recipient or
sender of the communication.
A communication link in this scheme has the following properties:
i. A link is established automatically between every pair of processes
that want to communicate. The processes need to know only each
other's identity to communicate.
ii. A link is associated with exactly two processes.
iii. Exactly one link exists between each pair of processes.
There are two ways of addressing, namely:
Symmetry in addressing
Asymmetry in addressing
In symmetry in addressing, the send and receive primitives are defined as:
send(P, message) Send a message to process P
receive(Q, message) Receive a message from Q
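For illustration, the same send/receive pattern can be sketched with POSIX message queues; the queue name "/demo_q" and the attribute values are assumptions for this example:
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    mqd_t q = mq_open("/demo_q", O_CREAT | O_RDWR, 0644, &attr);
    mq_send(q, "ping", 5, 0);                  /* send(P, message) */
    char buf[64];
    mq_receive(q, buf, sizeof(buf), NULL);     /* receive(Q, message) */
    printf("got: %s\n", buf);
    mq_close(q);
    mq_unlink("/demo_q");
    return 0;
}
This is an indirect-communication example: the processes name a shared mailbox (the queue) rather than naming each other. Link with -lrt on Linux.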
THREADS
Thread
A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register
set, and a stack.
It shares with other threads belonging to the same process its code section, data section, and
other operating-system resources, such as open files and signals. Traditional (or heavyweight)
process has a single thread of control.
If a process has multiple threads of control, it can perform more than one task at a time.
Motivation
Most software applications that run on modern computers are multithreaded. An application
typically is implemented as a separate process with several threads of control.
A web browser might have one thread display images or text while another thread retrieves data
from the network.
A word processor may have a thread for displaying graphics, another thread for responding to
keystrokes from the user, and a third thread for performing spelling and grammar checking in the
background.
MULTITHREADING
Multithreading is the ability of a program or an operating system process to manage its use by
more than one user at a time, and even to manage multiple requests by the same user, without
having to run multiple copies of the program in the computer.
Benefits
There are four major categories of benefits to multi-threading:
1. Responsiveness - One thread may provide rapid response while other threads are blocked
or slowed down doing intensive calculations.
2. Resource sharing - By default threads share common code, data, and other resources,
which allows multiple tasks to be performed simultaneously in a single address space.
3. Economy - Creating and managing threads ( and context switches between them ) is
much faster than performing the same tasks for processes.
4. Scalability, i.e. Utilization of multiprocessor architectures - A single threaded process
can only run on one CPU, no matter how many may be available, whereas the execution of
a multi-threaded application may be split amongst available processors
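A minimal POSIX threads sketch illustrating these benefits; the worker() function and the thread count are illustrative:
#include <pthread.h>
#include <stdio.h>
void *worker(void *arg)
{
    /* all threads share the process's address space and resources */
    printf("hello from thread %ld\n", (long)arg);
    return NULL;
}
int main(void)
{
    pthread_t tid[4];
    for (long i = 0; i < 4; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i); /* cheap to create */
    for (int i = 0; i < 4; i++)
        pthread_join(tid[i], NULL);    /* wait for each thread to finish */
    return 0;
}
On a multicore machine these threads may run in parallel on separate cores, which is the scalability benefit listed above.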
Multithreading Models
1. Many-to-One
2. One-to-One
3. Many-to-Many
1. Many-to-One:
The many-to-one model maps many user-level threads to one kernel-level thread. Thread
management is done in user space. When a thread makes a blocking system call, the entire
process blocks. Because only one thread can access the kernel at a time, multiple threads are
unable to run in parallel on multiprocessors.
2. One-to-One:
The one-to-one model maps each user-level thread to a kernel thread. It provides more
concurrency than the many-to-one model, since another thread can run when one thread makes
a blocking system call, and it allows threads to run in parallel on multiprocessors. The drawback
is that creating a user thread requires creating the corresponding kernel thread.
3. Many-to-Many:
The many-to-many model multiplexes many user-level threads to a smaller or equal number of
kernel threads, combining the strengths of the other two models.
MULTICORE PROGRAMMING
Single-CPU systems evolved into multi-CPU systems. A more recent, similar trend in system
design is to place multiple computing cores on a single chip.
Each core appears as a separate processor to the operating system. Whether the cores appear across
CPU chips or within CPU chips, we call these systems multicore or multiprocessor systems.
Multithreaded programming provides a mechanism for more efficient use of these multiple
computing cores and improved concurrency.
A concurrent system supports more than one task by allowing all the tasks to make progress,
whereas a parallel system can perform more than one task simultaneously. Thus, it is possible
to have concurrency without parallelism.
In general, five areas present challenges in programming for multicore systems:
1. Identifying tasks. This involves examining applications to find areas that can be divided into
separate, concurrent tasks.
2. Balance. While identifying tasks that can run in parallel, programmers must also ensure that
the tasks perform equal work of equal value.
3. Data splitting. Just as applications are divided into separate tasks, the data accessed and
manipulated by the tasks must be divided to run on separate cores.
4. Data dependency. The data accessed by the tasks must be examined for dependenciesbetween
two or more tasks. When one task depends on data from another, programmers must ensure that
the execution of the tasks is synchronized to accommodate the data dependency.
5. Testing and debugging. When a program is running in parallel on multiple cores, many
different execution paths are possible. Testing and debugging such concurrent programs is
inherently more difficult than testing and debugging single-threaded applications.
Types of Parallelism
In general, there are two types of parallelism: data parallelism and task parallelism.
Data parallelism focuses on distributing subsets of the same data across multiple computing
cores and performing the same operation on each core.
Task parallelism involves distributing not data but tasks (threads) across multiple computing
cores.
PROCESS SYNCHRONIZATION
Suppose that we modify the producer-consumer code by adding a variable counter, initialized
to 0 and incremented each time a new item is added to the buffer (and decremented each time an
item is removed).
Race condition: the situation where several processes access and manipulate shared data
concurrently, and the final value of the shared data depends upon which process finishes last.
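To see how a race condition can arise, consider how counter++ might be implemented in machine language (a standard illustration):
register1 = counter;
register1 = register1 + 1;
counter = register1;
The consumer's counter-- is implemented similarly using register2. If the two sequences interleave with counter initially 5, for example: register1 = counter (5); register1 = register1 + 1 (6); register2 = counter (5); register2 = register2 - 1 (4); counter = register1 (6); counter = register2 (4), then counter ends up at 4, even though the correct result is 5.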
Definition: Each process has a segment of code, called a critical section (CS), in which the
process may be changing common variables, updating a table, writing a file, and so on. Each
process must request permission to enter its critical section; the section of code implementing this
request is the entry section. The critical section may be followed by an exit section, and the
remaining code is the remainder section.
Requirements to be satisfied for a solution to the critical-section problem:
1. Mutual exclusion: if a process is executing in its critical section, then no other process can
be executing in its critical section.
2. Progress: if no process is executing in its critical section and some processes wish to enter
their critical sections, then the selection of the process that will enter next cannot be postponed
indefinitely.
3. Bounded waiting: there is a bound on the number of times that other processes are allowed
to enter their critical sections after a process has made a request to enter its critical section and
before that request is granted.
The general structure of a typical process is:
do {
entry section
critical section
exit section
remainder section
} while (1);
Two general approaches are used to handle critical sections in operating systems: preemptive
kernels and nonpreemptive kernels.
A nonpreemptive kernel does not allow a process running in kernel mode to be preempted; a
kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the
CPU.
MUTEX LOCKS
Operating-systems designers build software tools to solve the critical-section problem. The
simplest of these tools is the mutex lock. A process must acquire the lock before entering a
critical section and must release the lock when it exits the critical section; in this way, mutex
locks protect critical sections and prevent race conditions. A mutex lock has a boolean variable
available whose value indicates whether the lock is available or not. If the lock
is available, a call to acquire() succeeds, and the lock is then considered
unavailable.
acquire()
{
while (!available); /* busy wait */
available = false;
}
release()
{
available = true;
}
The main disadvantage of this implementation is that it requires busy waiting; this type of mutex
lock is also called a spinlock. Busy waiting is wasteful in a real multiprogramming system, where
a single CPU is shared among many processes, because it wastes CPU cycles that some other
process might be able to use productively.
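For comparison, a minimal sketch of the same acquire/release pattern using a POSIX mutex (assuming pthreads), which blocks the caller instead of busy waiting; the shared counter is illustrative:
#include <pthread.h>
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int shared_counter = 0;
void increment(void)
{
    pthread_mutex_lock(&lock);     /* acquire(): blocks if unavailable */
    shared_counter++;              /* critical section */
    pthread_mutex_unlock(&lock);   /* release() */
}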
SEMAPHORES
A semaphore S is an integer variable that, apart from initialization, is accessed only through two
standard atomic operations: wait() and signal(). They can be defined as:
wait(S)
{
while (S <= 0); /* busy wait */
S--;
}
signal(S)
{
S++;
}
Semaphore Usage
Operating systems often distinguish between counting and binary semaphores. The value of a
binary semaphore can range only between 0 and 1, while a counting semaphore can range over
an unrestricted domain. Counting semaphores can be used to control access to a resource
consisting of a finite number of instances: the semaphore is initialized to the number of instances
available, and processes that wish to use a resource will block until the count becomes greater than 0.
Semaphores can also be used to solve synchronization problems. Consider two concurrently
running processes: P1 with a statement S1 and P2 with a statement S2. Suppose we require that
S2 be executed only after S1 has completed. We can implement this scheme readily by letting
P1 and P2 share a common semaphore synch, initialized to 0. In process P1, we insert the statements
S1;
signal(synch);
In process P2, we insert the statements
wait(synch);
S2;
Because synch is initialized to 0, P2 will execute S2 only after P1 has invoked signal(synch),
which is after statement S1 has been executed.
Semaphore Implementation
To avoid busy waiting, we can modify the definition of the wait() and signal() operations as
follows: When a process executes the wait() operation and finds that the semaphore value is not
positive, it must wait; rather than busy waiting, the process can block itself and be placed into a
waiting queue associated with the semaphore. A blocked process is restarted by a wakeup()
operation when some other process executes signal(). (The CPU may or may not be switched from
the running process to the newly ready process, depending on the CPU-scheduling algorithm.)
typedef struct
{
int value;
struct process *list;
} semaphore;
Each semaphore has an integer value and a list of processes. When a process must wait on a
semaphore, it is added to this list of processes.
A signal() operation removes one process from the list of waiting processes and awakens that
process.
wait(semaphore *S)
{
S->value--;
if (S->value < 0)
{
add this process to S->list;
block();
}
}
signal(semaphore *S)
{
S->value++;
if (S->value <= 0)
{
remove a process P from S->list;
wakeup(P);
}
}
The block() operation suspends the process that invokes it, and the wakeup(P) operation resumes
the execution of a blocked process P.
The implementation of a semaphore with a waiting queue may result in a situation where two
or more processes are waiting indefinitely for an event that can be caused only by one of the
waiting processes; when such a state is reached, these processes are said to be deadlocked.
Priority Inversion
A scheduling challenge arises when a higher-priority process needs to read or modify kernel
data that are currently being accessed by a lower-priority process, or a chain of lower-priority
processes.
Since the kernel data are typically protected with a lock, the higher-priority process will have to
wait for a lower-priority one to finish with the resource.
The situation becomes more complicated if the lower-priority process is preempted in favor
of another process with a higher priority.
This problem, known as priority inversion, can occur only in systems with more than two
priorities, so one solution is to have only two priorities.
Typically, though, the problem is solved by implementing a priority-inheritance protocol.
According to this protocol, all processes that are accessing resources needed by a higher-priority
process inherit the higher priority until they are finished with the resources in question.
Bounded-Buffer Problem
Assume a pool of n buffers, each capable of holding one item. The mutex semaphore provides
mutual exclusion for accesses to the buffer pool; the empty and full semaphores count the number
of empty and full buffers, so empty is initialized to the value n and full to 0.
The producer and consumer processes share the following data structures:
int n;
semaphore mutex = 1;
semaphore empty = n;
semaphore full = 0;
The structure of the producer process.
do {
...
/* produce an item in next produced */
...
wait(empty);
wait(mutex);
...
/* add next produced to the buffer */
...
signal(mutex);
signal(full);
} while (true);
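The structure of the consumer process mirrors the producer; this is the standard counterpart to the code above:
do {
wait(full);
wait(mutex);
...
/* remove an item from buffer to next consumed */
...
signal(mutex);
signal(empty);
...
/* consume the item in next consumed */
...
} while (true);
Note the symmetry: the producer waits on empty and signals full, while the consumer waits on full and signals empty.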
Readers-Writers Problem
The readers-writers (R-W) problem is another classic problem against which the design of
synchronization and concurrency mechanisms can be tested, alongside the producer-consumer
and dining-philosophers problems.
Definition
There is a data area that is shared among a number of processes.
Any number of readers may simultaneously read the data area.
Only one writer at a time may write to the data area.
If a writer is writing to the data area, no reader may read it.
If there is at least one reader reading the data area, no writer may write to it.
Readers only read and writers only write
A process that reads and writes to a data area must be considered a writer (consider
producer or consumer)
In the solution to the first readers-writers problem, the reader processes share the following data
structures:
semaphore rw_mutex = 1;
semaphore mutex = 1;
int read_count = 0;
The semaphore rw_mutex is common to both readers and writers and serves as a mutual-exclusion
semaphore for the writers; read_count keeps track of how many processes are currently reading,
and the mutex semaphore ensures mutual exclusion when read_count is updated.
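The standard structures of the writer and reader processes using these semaphores follow.
The structure of a writer process:
do {
wait(rw_mutex);
...
/* writing is performed */
...
signal(rw_mutex);
} while (true);
The structure of a reader process:
do {
wait(mutex);
read_count++;
if (read_count == 1)
wait(rw_mutex); /* first reader locks out writers */
signal(mutex);
...
/* reading is performed */
...
wait(mutex);
read_count--;
if (read_count == 0)
signal(rw_mutex); /* last reader readmits writers */
signal(mutex);
} while (true);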
Dining-Philosophers Problem
Consider five philosophers sitting around a circular dining table. The dining table has
five chopsticks and a bowl of rice in the middle.
At any instant, a philosopher is either eating or thinking. When a philosopher wants to eat, he uses
two chopsticks - one from their left and one from their right.
When a philosopher wants to think, he keeps down both chopsticks at their original place.
while(TRUE) {
wait(stick[i]);
wait(stick[(i+1) % 5]); // mod is used because if i=5, next
// chopstick is 1 (dining table is circular)
/* eat */
signal(stick[i]);
signal(stick[(i+1) % 5]);
/* think */
}
When a philosopher wants to eat the rice, he will wait for the chopstick at his left and picks up that
chopstick. Then he waits for the right chopstick to be available, and then picks it too. After eating,
he puts both the chopsticks down.
But if all five philosophers are hungry simultaneously and each of them picks up one chopstick,
then a deadlock occurs, because each will wait forever for the second chopstick.
Some ways in which the deadlock can be avoided:
1) A philosopher must be allowed to pick up the chopsticks only if both the left and right
chopsticks are available.
2) Allow only four philosophers to sit at the table. That way, if all the four philosophers pick
up four chopsticks, there will be one chopstick left on the table. So, one philosopher can
start eating and eventually, two chopsticks will be available. In this way, deadlocks can
be avoided.
MONITORS
A monitor is an abstract data type that encapsulates shared data together with the procedures
that operate on those data. Processes cannot directly access the monitor's internal data structures
from procedures declared outside the monitor.
Monitor Usage
A procedure defined within a monitor can access only those variables declared locally within the
monitor and its formal parameters.
The monitor construct by itself is not sufficiently powerful for modeling some synchronization
schemes; for this purpose, condition variables are provided. The only operations that can be
invoked on a condition variable are wait() and signal(). The operation
x. wait();
means that the process invoking this operation is suspended until another process invokes
x.signal();
The x.signal() operation resumes exactly one suspended process.
A deadlock-free solution to the dining-philosophers problem can be written as a monitor:
monitor DiningPhilosophers
{
enum {THINKING, HUNGRY, EATING} state[5];
condition self[5];
void pickup(int i)
{
state[i] = HUNGRY;
test(i);
if (state[i] != EATING)
self[i].wait();
}
void putdown(int i)
{
state[i] = THINKING;
test((i + 4) % 5);
test((i + 1) % 5);
}
void test(int i)
{
if ((state[(i + 4) % 5] != EATING) && (state[i] == HUNGRY) && (state[(i + 1) % 5] !=
EATING))
{
state[i] = EATING;
self[i].signal();
}
}
initialization_code()
{
for (int i = 0; i < 5; i++)
state[i] = THINKING;
}
}
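Each philosopher i must invoke the operations pickup() and putdown() in the following sequence:
DiningPhilosophers.pickup(i);
...
/* eat */
...
DiningPhilosophers.putdown(i);
This solution guarantees that no two neighbors eat simultaneously and that no deadlock occurs, although a philosopher could still starve.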
CPU SCHEDULING
By switching the CPU among processes, the operating system can make the computer more
productive.
Basic Concepts
The objective of multi-programming is to have some process running at all times, to
maximize CPU utilization.
For a Uni-processor system, there will never be more than one running process.
Scheduling is a fundamental operating system function.
The idea of multi-programming is to execute a process until it must wait, typically for the
completion of some I/O request.
The CPU is one of the primary computer resources.
The CPU scheduling is central to operating system design.
CPU Scheduler
When the CPU becomes idle, the operating system must select one of the processes in the
ready queue to be executed.
The selection process is carried out by the short-term scheduler (CPU scheduler)
The scheduler selects from among the processes in memory that are ready to execute, and
allocates the CPU to one of them.
A ready queue may be implemented as a FIFO queue, a priority queue, a tree or simply an
unordered link list.
All the processes in the ready queue are lined up waiting for a chance to run on the CPU.
Nonpreemptive Scheduling: A scheduling discipline is nonpreemptive if, once a process has
been given the CPU, the CPU cannot be taken away from that process.
Preemptive Scheduling: A scheduling discipline is preemptive if the CPU can be taken away
from a process to which it has been allocated.
Dispatcher
Dispatcher is the module that gives control of the CPU to the process selected by the short-term
scheduler. This function involves the following:
1. Switching context
2. Switching to user mode
3. Jumping to the proper location in the user program to restart that program
Dispatch latency: the time taken for the dispatcher to stop one process and start another
running.
Scheduling Criteria
1. CPU utilization: keep the CPU as busy as possible.
2. Throughput: the number of processes that complete their execution per time unit.
3. Turnaround time: the amount of time to execute a particular process.
4. Waiting time: the amount of time a process has been waiting in the ready queue.
5. Response time: the amount of time from when a request was submitted until the first
response is produced, not the final output (relevant in a time-sharing environment).
A Process Scheduler schedules different processes to be assigned to the CPU based on particular
scheduling algorithms.
1. First-Come, First-Served (FCFS) Scheduling
2. Shortest-Job-First (SJF) Scheduling
3. Priority Scheduling
4. Round Robin(RR) Scheduling
First-Come, First-Served (FCFS) Scheduling
The process that requests the CPU first is allocated the CPU first. Consider three processes
P1, P2, and P3 with CPU burst times of 24 ms, 3 ms, and 3 ms, and suppose that they arrive in
the order P2, P3, P1. The Gantt chart:
| P2 | P3 | P1 |
0    3    6              30
Waiting time: P1 = 6, P2 = 0, P3 = 3
Average Waiting Time = (6+0+3)/3 = 3 ms.
Turnaround Time = Waiting Time + Burst Time
Turnaround Time for P1=(6+24)=30,P2=(0+3)=3,P3=(3+3)=6
Average Turnaround Time=(30+3+6)/3=13ms
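As an illustration of the arithmetic above, a small C sketch (assuming all three processes arrive at time 0 in the order P2, P3, P1):
#include <stdio.h>
int main(void)
{
    int burst[] = {3, 3, 24};      /* P2, P3, P1 in arrival order */
    int n = 3, wait = 0, total_wait = 0, total_tat = 0;
    for (int i = 0; i < n; i++) {
        total_wait += wait;             /* waiting time of this process */
        total_tat  += wait + burst[i];  /* turnaround = waiting + burst */
        wait       += burst[i];         /* the next process waits this long */
    }
    printf("avg waiting = %.2f, avg turnaround = %.2f\n",
           total_wait / 3.0, total_tat / 3.0);   /* prints 3.00 and 13.00 */
    return 0;
}
Each process simply waits for the total burst time of its predecessors, which is why FCFS average waiting time depends so strongly on arrival order.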
Priority Scheduling
The SJF algorithm is a special case of the general priority-scheduling algorithm.
A priority number (an integer) is associated with each process, and the CPU is allocated to
the process with the highest priority (a smaller integer denotes a higher priority).
Equal-priority processes are scheduled in FCFS order.
Gantt chart (the same three processes, scheduled by priority):
| P2 | P1 | P3 |
0    3    27             30
Waiting time
For P1=3,P2=0,P3=27
Average Waiting Time=(3+0+27)/3=10ms
Turnaround Time = Waiting Time + Burst Time
Turnaround Time for P1=(3+24)=27,P2=(0+3)=3,P3=(27+3)=30
Average Turnaround Time=(27+3+30)/3=20ms.
Round Robin (RR) Scheduling
Each process gets a small unit of CPU time, called a time quantum. If the CPU burst time is less
than the time quantum, the process itself will release the CPU voluntarily. Otherwise, if the CPU
burst of the currently running process is longer than the time quantum, a context switch will be
executed and the process will be put at the tail of the ready queue.
Gantt chart (the same three processes, time quantum = 4 ms):
| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30
Waiting time: P1 = 10 - 4 = 6, P2 = 4, P3 = 7
Average waiting time = (6+4+7)/3 = 17/3 = 5.66 ms
Turnaround Time = Waiting Time + Burst Time
Turnaround Time for P1=(6+24)=30,P2=(4+3)=7,P3=(7+3)=10
Average Turnaround Time=(30+7+10)/3=15.6ms.
Multilevel Feedback Queue Scheduling
The scheduler first executes all processes in queue 0. Only when queue 0 is empty will it execute
processes in queue 1. Similarly, processes in queue 2 will be executed only if queues 0 and 1 are
empty. A process that arrives for queue 1 will preempt a process in queue 2, and a process that
arrives for queue 0 will, in turn, preempt a process in queue 1.
A multilevel feedback queue scheduler is defined by the following parameters:
1. The number of queues
2. The scheduling algorithm for each queue
3. The method used to determine when to upgrade a process to a higher-priority queue
4. The method used to determine when to demote a process to a lower-priority queue
5. The method used to determine which queue a process will enter when it needs service
DEADLOCK
Definition:
A process requests resources; if the resources are not available at that time, the process enters
a wait state. It may happen that waiting processes will never again change state, because the
resources they have requested are held by other waiting processes. This situation is called a
deadlock.
System Model
A system consists of a finite number of resources to be distributed among a number of
competing processes.
A process must request a resource before using it and must release the resource after using it.
Under the normal mode of operation, a process may utilize a resource in only the following
sequence:
1. Request. The process requests the resource. If the request cannot be granted immediately then
the requesting process must wait until it can acquire the resource.
2. Use. The process can operate on the resource.
3. Release. The process releases the resource.
Deadlock Characterization
In a deadlock, processes never finish executing, and system resources are tied up, preventing
other jobs from starting.
A deadlock situation can arise if the following four conditions hold simultaneously in a system:
1) MUTUAL EXCLUSION: At least one resource must be held in a non-sharable mode; that is,
only one process can hold this resource at a time, and other requesting processes must wait until
it is released.
2) HOLD & WAIT: There must exist a process that is holding at least one resource and is waiting
to acquire additional resources that are currently being held by other processes.
3) NO PREEMPTION: Resources cannot be preempted; that is, a resource can be released only
voluntarily by the process holding it, after that process has completed its task.
4) CIRCULAR WAIT: There must exist a set {P0, P1, ..., Pn} of waiting processes such that
P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, and so on;
Pn is waiting for a resource held by P0.
Resource-Allocation Graph
A deadlock can be described in terms of a directed graph called system resource-allocation
graph.
A set of vertices V and a set of edges E.
V is partitioned into two types:
1. P = {P1, P2, ..., Pn}, the set consisting of all the processes in the system.
2. R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.
A request edge is a directed edge Pi -> Rj; an assignment edge is a directed edge Rj -> Pi.
Resource instances: each resource type Rj may have one or more instances.
Process states: process P1 is holding an instance of resource type R2 and is waiting for an
instance of resource type R1; process P2 is holding an instance of R1 and an instance of R2
and is waiting for an instance of R3; process P3 is holding an instance of R3.
DEADLOCK PREVENTION
By ensuring that at least one of the four necessary conditions cannot hold, we can prevent the
occurrence of a deadlock.
1. Mutual Exclusion
Mutual exclusion is not required for sharable resources; it must hold for non-sharable resources.
For example, a printer cannot be simultaneously shared by several processes.
A process never needs to wait for a sharable resource.
2. Hold and Wait
We must guarantee that whenever a process requests a resource, it does not hold any other
resources.
One protocol requires each process to request and be allocated all its resources before it
begins execution,
Or another protocol allows a process to request resources only when the process has
none. So, before it can request any additional resources, it must release all the resources
that it is currently allocated.
3. No Preemption
If a process that is holding some resources requests another resource that cannot be
immediately allocated to it, then all resources currently being held are released.
Preempted resources are added to the list of resources for which the process is waiting.
Process will be restarted only when it can regain its old resources, as well as the new
ones that it is requesting.
4. Circular Wait
Impose a total ordering of all resource types and require that each process requests resources
in an increasing order of enumeration.
Let R = {R1,R2,...Rm} be the set of resource types.
Assign to each resource type a unique integer number.
If the set of resource types R includes tapedrives, disk drives and printers.
F(tapedrive)=1,
F(diskdrive)=5,
F(Printer)=12.
Each process can request resources only in an increasing order of enumeration.
DEADLOCK AVOIDANCE
A state is safe if the system can allocate resources to each process (up to its maximum) in some
order and still avoid a deadlock. A sequence of processes <P1, P2, ..., Pn> is safe for the current
allocation state if, for each Pi, the resources that Pi can still request can be satisfied by the
currently available resources plus the resources held by all the Pj, with j < i.
That is:
If Pi's resource needs are not immediately available, then Pi can wait until all Pj
have finished.
When Pj is finished, Pi can obtain its needed resources, execute, return its allocated
resources, and terminate.
When Pi terminates, Pi+1 can obtain its needed resources, and so on.
Banker's Algorithm
The name was chosen because the algorithm could be used in a banking system to ensure that
the bank never allocated its available cash in such a way that it could no longer satisfy the needs
of all its customers. It handles multiple instances of each resource type.
Each process must a priori claim its maximum use.
When a process requests a resource, it may have to wait.
When a process gets all its resources, it must return them in a finite amount of time.
Let n = number of processes, and m = number of resources types.
1. Available: indicates the number of available resources of each type.
2. Max: if Max[i, j] = k, then process Pi may request at most k instances of
resource type Rj.
3. Allocation: if Allocation[i, j] = k, then process Pi is currently allocated k
instances of resource type Rj.
4. Need: if Need[i, j] = k, then process Pi may need k more instances of
resource type Rj; Need[i, j] = Max[i, j] - Allocation[i, j].
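To make the safety check concrete, here is a compact C sketch of the safety algorithm over these data structures; the dimensions N and M and the name is_safe() are illustrative, not part of the original notes:
#include <stdbool.h>
#include <string.h>
#define N 5   /* number of processes */
#define M 3   /* number of resource types */
bool is_safe(int available[M], int need[N][M], int allocation[N][M])
{
    int work[M];
    bool finish[N] = {false};
    memcpy(work, available, sizeof(work));   /* Work = Available */
    for (int count = 0; count < N; count++) {
        bool found = false;
        for (int i = 0; i < N && !found; i++) {
            if (finish[i]) continue;
            bool ok = true;                   /* Need_i <= Work ? */
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { ok = false; break; }
            if (ok) {                         /* pretend P_i runs to completion */
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];  /* and returns its resources */
                finish[i] = true;
                found = true;
            }
        }
        if (!found) break;                    /* no runnable process remains */
    }
    for (int i = 0; i < N; i++)
        if (!finish[i]) return false;         /* some P_i can never finish */
    return true;                              /* a safe sequence exists */
}
The worked example below follows exactly these steps by hand.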
c) Is the system in a safe state? If so, show one sequence of processes which allows
the system to complete. If not, explain why.
1. Initialize the Work and Finish vectors.
Work = Available = (1, 2, 0)
Finish = (false, false, false, false, false)
2. Repeatedly find an index i such that Finish[i] = false and Needi <= Work; set
Finish[i] = true and add Allocationi to Work.
3. Since Finish[i] = true for all i at the end, the system is in a safe state. One sequence
of processes which allows the system to complete is P1, P3, P2, P4, P0.
d) Given the request (1, 2, 0) from Process P2. Should this request be granted? Why
or why not?
1. Check that Request2 <= Need2.
Since (1, 2, 0) <= (2, 3, 3), hence, this condition is satisfied.
2. Check that Request2 <= Available.
Since (1, 2, 0) <= (1, 2, 0), hence, this condition is satisfied.
3. Modify the state as follows:
Available = Available - Request2 = (1, 2, 0) - (1, 2, 0) = (0, 0, 0)
Allocation2 = Allocation2 + Request2 = (0, 3, 0) + (1, 2, 0) = (1, 5, 0)
Need2 = Need2 - Request2 = (2, 3, 3) - (1, 2, 0) = (1, 1, 3)
4. Apply the safety algorithm to check if granting this request leaves the
system in a safe state.
1. Initialize the Work and Finish vectors.
Work = Available = (0, 0, 0)
Finish = (false, false, false, false, false)
2. At this point, there does not exist an index i such that Finish[i] = false
and Needi <= Work.
Since Finish[i] is not true for all i, the system is not in a safe state.
Therefore, this request from process P2 should not be granted.
Resource-Request Algorithm
Let Requesti be the request vector for process Pi . If Requesti [ j] == k, then process Pi
wants k instances of resource type Rj . When a request for resources is made by process
Pi , the following actions are taken:
1. If Requesti <= Needi, go to step 2. Otherwise, raise an error condition, since the
process has exceeded its maximum claim.
2. If Requesti <= Available, go to step 3. Otherwise, Pi must wait, since the
resources are not available.
3. Have the system pretend to have allocated the requested resources to process Pi
by modifying the state as follows:
Available = Available - Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi - Requesti;
DEADLOCK DETECTION
(i) Single instance of each resource type
If all resources have only a single instance, then we can define a deadlock-detection algorithm
that uses a variant of the resource-allocation graph, called a wait-for graph. An edge Pi -> Pj in
a wait-for graph implies that process Pi is waiting for process Pj to release a resource that Pi
needs; a deadlock exists if and only if the wait-for graph contains a cycle.
(Figure: a resource-allocation graph and its corresponding wait-for graph.)
DEADLOCK RECOVERY
When a detection algorithm determines that a deadlock exists, the system can recover either by
aborting one or more of the deadlocked processes or by preempting resources from some of them.
WINDOWS 7 - THREAD AND SMP MANAGEMENT
The native process structures and services provided by the Windows Kernel are relatively simple
and general purpose, allowing each OS subsystem to emulate a particular process structure and
functionality.
Related to the process is a series of blocks that define the virtual address space
currently assigned to this process.
The process cannot directly modify these structures but must rely on the virtual
memory manager, which provides a memory allocation service for the process.
The process includes an object table, with handles to other objects known to this
process. The process has access to a file object and to a section object that defines
a section of shared memory.
Thread States
A Windows thread passes through six states during its lifetime: ready, standby, running,
waiting, transition, and terminated.
Problem
1. Consider the following set of processes, with the length of the CPU-burst time given in
milliseconds:
Process   Burst Time   Arrival Time   Priority
P1        23           0              2
P2        3            1              1
P3        6            2              4
P4        2            3              3
a. Draw four Gantt charts illustrating the execution of these processes using FCFS,
SJF(Preemptive), a non-preemptive priority (a smaller priority number implies a higher
priority), and RR (quantum = 1) scheduling.
c. What is the waiting time of each process for each of the scheduling algorithms in part a?
d. Which of the schedules in part a results in the minimal average waiting time (over all
processes)?
FCFS SCHEDULING
SJF(SHORTEST JOB FIRST)
In Pre-emptive Shortest Job First Scheduling, jobs are put into ready queue as they arrive, but as
a process with short burst time arrives, the existing process is pre-empted.
PRIORITY
ROUND ROBIN