UNIT 2 Notes-OS

Additional Topics

Virtual Machines (VM)

Virtualization technology enables a single PC or server to simultaneously run multiple operating systems
or multiple sessions of a single OS

A machine with virtualization software can host numerous applications, including those that run on
different operating systems, on a single platform
The host operating system can support a number of virtual machines, each of which has the
characteristics of a particular OS
The solution that enables virtualization is a virtual machine monitor (VMM), or hypervisor

A virtual machine takes the layered approach to its logical conclusion. It treats hardware and the
operating-system kernel as though they were all hardware, providing an interface identical to the
underlying bare hardware, so that each virtual machine appears to run on its own processor
with its own (virtual) memory.

The resources of the physical computer are shared to create the virtual machines.

1. CPU scheduling can create the appearance that users have their own processor.

2. Spooling and a file system can provide virtual card readers and virtual line printers.

3. A normal user time-sharing terminal serves as the virtual machine console.

Advantages/Disadvantages of Virtual Machines

The virtual-machine concept provides complete protection of system resources since each virtual
machine is isolated from all other virtual machines.

This isolation, however, permits no direct sharing of resources.

A virtual-machine system is a perfect vehicle for operating-systems research and development. System
development is done on the virtual machine, instead of on a physical machine and so does not disrupt
normal system operation.

The virtual machine concept is difficult to implement due to the effort required to provide an exact
duplicate of the underlying machine.
CS6401-OPERATING SYSTEMS
UNIT II PROCESS MANAGEMENT
Processes-Process Concept, Process Scheduling, Operations on Processes, Interprocess
Communication; Threads- Overview, Multicore Programming, Multithreading Models;
Windows 7 -Thread and SMP Management. Process Synchronization - Critical Section
Problem, Mutex Locks, Semaphores, Monitors; CPU Scheduling and Deadlocks.

PROCESS CONCEPTS
Process Concept
A process can be thought of as a program in execution.
A process is the unit of work in a modern time-sharing system.

A process is more than the program code, which is sometimes known as the text section.
It also includes the current activity, as represented by the value of the program counter
and the contents of the registers.

A process generally also includes the process stack, which contains temporary data (such
as function parameters, return addresses, and local variables), and a data section, which
contains global variables. A process may also include a heap, which is memory that is
dynamically allocated during process run time.

Difference between program and process


A program is a passive entity, such as the contents of a file stored on disk, whereas a
process is an active entity, with a program counter specifying the next instruction to
execute and a set of associated resources.

Process Control Block (PCB)


Each process is represented in the operating system by a process control block
(PCB)-also called a task control block.
A PCB defines a process to the operating system.
It contains the entire information about a process.
Some of the information in a PCB:
Process state: The state may be new, ready, running, waiting, halted,
and so on.
Program counter: The counter indicates the address of the next
instruction to be executed for this process.
CPU registers: The registers vary in number and type, depending
on the computer architecture.
CPU-scheduling information: This information includes a process
priority, pointers to scheduling queues, and any other
scheduling parameters.
Memory-management information: This information may include such
information as the value of the base and limit registers, the page tables, or
the segment tables, depending on the memory system used by the
operating system.
Accounting information: This information includes the amount
of CPU and real time used, time limits, account numbers, job or process numbers, and
so on.
Status information: The information includes the list of I/Odevices allocated to this
process, a list of open files, and so on.
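To make the list above concrete, a PCB can be pictured as a C structure. The sketch below is
illustrative only; the field names are invented for this example and are not taken from any real
kernel.

/* Illustrative PCB layout; field names are hypothetical. */
struct pcb {
    int pid;                    /* process identifier */
    enum { NEW, READY, RUNNING, WAITING, TERMINATED } state;  /* process state */
    unsigned long pc;           /* program counter */
    unsigned long regs[16];     /* saved CPU registers */
    int priority;               /* CPU-scheduling information */
    unsigned long base, limit;  /* memory-management information */
    unsigned long cpu_time;     /* accounting information */
    int open_files[16];         /* status information: open files */
    struct pcb *next;           /* link used by scheduling queues */
};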
Process States:
As a process executes, it changes state.
The state of a process is defined in part by the current activity of that process.
Each process may be in one of the following states:
New: The process is being created.
Running: Instructions are being executed.
Waiting: The process is waiting for some event to occur (such as an
I/O completion or reception of a signal).
Ready: The process is waiting to be assigned to a processor.
Terminated: The process has finished execution.
Diagram shows CPU switch from process to process.

PROCESS SCHEDULING

The objective of multiprogramming is to have some process running at all


times, so as to maximize CPU utilization.
Scheduling Queues
There are 3 types of scheduling queues .They are :
1. Job Queue
2. Ready Queue
3. Device Queue

As processes enter the system, they are put into a job queue.

The processes that are residing in main memory and are ready and waiting to execute
are kept on a list called the ready queue.

The list of processes waiting for an I/O device is kept in a device queue for that
particular device.

A new process is initially put in the ready queue. It waits in the ready queue until it is selected for
execution, or dispatched.
Once the process is assigned to the CPU and is executing, one of several events could occur:

The process could issue an I/O request, and then be placed in an I/O
queue.
The process could create a new subprocess and wait for its
termination.
The process could be removed forcibly from the CPU, as a result of
aninterrupt, and be put back in the ready Queue.
A common representation of process scheduling is a queueing diagram.

Schedulers
The operating system must select, for scheduling purposes, processes from these queues
in some order
The selection process is carried out by the appropriate scheduler.

They are:
1. Long-term Scheduler or Job Scheduler
2. Short-term Scheduler or CPU Scheduler
3. Medium term Scheduler
Long-Term Scheduler
The long-term scheduler, or job scheduler, selects processes from this pool and loads
them into memory for execution. It is invoked very infrequently. It controls the degree
of multiprogramming.
Short-Term Scheduler
The short-term scheduler, or CPU scheduler, selects from among the
processes that are ready to execute, and allocates the CPU to one of them. It is invoked
very frequently.
Processes can be described as either I/O bound or CPU bound.
An I\O-bound process spends more of its time doing I/O than it spends doing
computations.
A CPU-bound process, on the other hand, generates I/O requests infrequently,
using more of its time doing computation than an I/O-bound
process uses.
The system with the best performance will have a combination of CPU- bound
and I/O-bound processes.
Medium Term Scheduler
Some operating systems, such as time-sharing systems, may introduce an
additional, intermediate level of scheduling.
The key idea is that the medium-term scheduler removes processes from memory and thus
reduces the degree of multiprogramming.
At some later time, the process can be reintroduced into memory and its execution
can be continued where it left off. This scheme is called swapping.

Context Switching
Switching the CPU to another process requires saving the state of the old process
and loading the saved state for the new process. This task is known as a context
switch.
Context-switch time is pure overhead, because the system does no useful work
while switching.
Its speed varies from machine to machine, depending on the memory speed, the
number of registers that must be copied, and the existence of special
instructions.

OPERATIONS ON PROCESS

1. Process Creation
A process may create several new processes, during execution.
The creating process is called a parent process, whereas the new processes are called
the children of that process.
When a process creates a new process, two possibilities exist in terms of
execution:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.
There are also two possibilities in terms of the address space of the new process:
1. The child process is a duplicate of the parent process.
2. The child process has a program loaded into it.
In UNIX, each process is identified by its process identifier, which is a unique
integer. A new process is created by the fork system call.
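The following minimal C sketch illustrates both possibilities listed above on UNIX: after
fork() the child is a duplicate of the parent, and it may then have a new program loaded into
it with exec. The use of /bin/ls here is just an example.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                  /* create a child process */

    if (pid < 0) {                       /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {               /* child: initially a duplicate of the parent */
        execlp("/bin/ls", "ls", NULL);   /* load a new program into the child */
        perror("execlp");                /* reached only if exec fails */
        exit(1);
    } else {                             /* parent: wait for the child to terminate */
        wait(NULL);
        printf("child %d complete\n", (int)pid);
    }
    return 0;
}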

A tree of processes on a typical Linux system.

In it we see two children of init: kthreadd and sshd.

The kthreadd process is responsible for creating additional processes that perform tasks on
behalf of the kernel (in this situation, khelper and pdflush).

The sshd process is responsible for managing clients that connect to the system by using ssh
(which is short for secure shell). The login process is responsible for managing clients that
directly log onto the system.

In general, when a process creates a child process, that child process will need certain
resources (CPU time, memory, files, I/O devices) to accomplish its task.

A child process may be able to obtain its resources directly from the operating system, or it
may be constrained to a subset of the resources of the parent process.

The parent may have to partition its resources among its children, or it may be able to share
some resources (such as memory or files) among several of its children. Restricting a child
process to a subset of the parent's resources prevents any process from overloading the system
by creating too many child processes.

2. Process Termination
A process terminates when it finishes executing its final statement and asks the
operating system to delete it by using the exit system call.

At that point, the process may return data (output) to its parent process (via the wait
system call).

A process can cause the termination of another process via an appropriate system
call.

A parent may terminate the execution of one of its children for a variety of reasons,
such as these:
1. The child has exceeded its usage of some of the resources that it has
been allocated.
2. The task assigned to the child is no longer required.
3. The parent is exiting, and the operating system does not allow a child to continue
if its parent terminates. On such systems, if a process terminates (either normally
or abnormally), then all its children must also be terminated. This phenomenon,
referred to as cascading termination, is normally initiated by the
operating system.
When a process terminates, its resources are de-allocated by the operating system.

A process that has terminated, but whose parent has not yet called wait(), is known as a zombie
process.

Now consider what would happen if a parent did not invoke wait() and instead terminated,
thereby leaving its child processes as orphans.

CO-OPERATING PROCESS

Processes executing concurrently in the operating system may be either independent processes
or cooperating processes.

A process is independent if it cannot affect or be affected by the other processes executing in the
system. Any process that does not share data with any other process is independent.

A process is cooperating if it can affect or be affected by the other processes executing in the
system. Clearly, any process that shares data with other processes is a cooperating process.

There are several reasons for providing an environment that allows process cooperation:

Information sharing. Since several users may be interested in the same piece of information
(for instance, a shared file), we must provide an environment to allow concurrent access to such
information.

Computation speedup. If we want a particular task to run faster, we must break it into
subtasks, each of which will be executing in parallel with the others. Notice that such a speedup
can be achieved only if the computer has multiple processing cores.

Modularity. We may want to construct the system in a modular fashion, dividing the system
functions into separate processes or threads.

Convenience. Even an individual user may work on many tasks at the same time. For instance,
a user may be editing, listening to music, and compiling in parallel.

INTERPROCESS COMMUNICATION

Cooperating processes require an Inter Process Communication (IPC) mechanism that will
allow them to exchange data and information. There are two fundamental models of interprocess
communication: shared memory and message passing.
In the shared-memory model, a region of memory that is shared by cooperating processes is
established. Processes can then exchange information by reading and writing data to the shared
region.

In the message-passing model, communication takes place by means of messages exchanged


between the cooperating processes.

(a) Message passing. (b) Shared memory.


Shared-Memory Systems
Interprocess communication using shared memory requires communicating
processes to establish a region of shared memory.

Other processes that wish to communicate using this shared-memory segment must attach it to
their address space.
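A minimal sketch of this on a POSIX system follows, using shm_open() and mmap() to create a
shared segment and attach it to the process's address space. The segment name /demo_shm is
illustrative, and error checking is omitted for brevity (compile with -lrt on Linux).

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/demo_shm";   /* illustrative segment name */
    const size_t size = 4096;

    int fd = shm_open(name, O_CREAT | O_RDWR, 0666);   /* create the segment */
    ftruncate(fd, size);                               /* set its size */

    /* attach the segment to this process's address space */
    char *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    strcpy(ptr, "hello from the producer");   /* write into the shared region */

    /* another process would shm_open(name, O_RDWR, 0666), mmap it,
       and read the same bytes */
    munmap(ptr, size);
    close(fd);
    return 0;
}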

Message passing

Message passing provides a mechanism to allow processes to communicate and to synchronize


their actions without sharing the same address space.

1. Basic Structure:
If processes P and Q want to communicate, they must send messages to and receive
messages from each other; a communication link must exist between them.

Physically, the link can be implemented through a hardware bus, a network, etc.

There are several methods for logically implementing a link and the
operations:
1. Direct or indirect communication
2. Symmetric or asymmetric communication
3. Automatic or explicit buffering
4. Send by copy or send by reference
5. Fixed-sized or variable-sized messages
2. Naming
Processes that want to communicate must have a way to refer to each other.
They can use either direct or indirect communication.
1. Direct Communication
Each process that wants to communicate must explicitly name the recipient or
sender of the communication.
A communication link in this scheme has the following properties:
i. A link is established automatically between every pair of processes
that want to communicate. The processes need to know only each
other's identity to communicate.
ii. A link is associated with exactly two processes.
iii. Exactly one link exists between each pair of processes.
There are two ways of addressing namely
Symmetry in addressing
Asymmetry in addressing
In symmetry in addressing, the send and receive primitives are defined as:
send(P, message) - Send a message to process P.
receive(Q, message) - Receive a message from process Q.

In asymmetry in addressing, the send and receive primitives are defined as:
send(P, message) - Send a message to process P.
receive(id, message) - Receive a message from any process; id is set to the
name of the sender.
2. Indirect Communication
With indirect communication, the messages are sent to and received from
mailboxes, or ports.
The send and receive primitives are defined as follows:
send (A, message) Send a message to mailbox A. receive
(A, message) Receive a message from mailbox A.
A communication link has the following properties:
i. A link is established between a pair of processes only if both members
of the pair have a shared mailbox.
ii. A link may be associated with more than two processes.
iii. A number of different links may exist between each pair of
communicating processes, with each link corresponding to one mailbox.
3. Buffering
A link has some capacity that determines the number of messages that
can reside in it temporarily. This property can be viewed as a queue of messages
attached to the link.
There are three ways that such a queue can be implemented.
Zero capacity: The queue has maximum length 0, so no message can wait
in it. The sender must block until the recipient receives the message.
Bounded capacity: The queue has finite length n; thus at most n
messages can reside in it.
Unbounded capacity: The queue has potentially infinite length; thus
any number of messages can wait in it. The sender is never delayed.
4. Synchronization
Message passing may be either blocking or non-blocking.
1. Blocking Send - The sender blocks itself till the message sent by it is
received by the receiver.
2. Non-blocking Send - The sender does not block itself after sending
the message but continues with its normal operation.
3. Blocking Receive - The receiver blocks itself until it receives the
message.
4. Non-blocking Receive - The receiver does not block itself.
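As a small illustration of the blocking primitives above, the following C sketch uses a UNIX
pipe: the parent sends a message and the child performs a blocking receive (read() blocks until
data arrives). The message text is arbitrary.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    pipe(fd);                               /* fd[0]: read end, fd[1]: write end */

    if (fork() == 0) {                      /* child: receiver */
        char buf[64];
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);   /* blocking receive */
        buf[n] = '\0';
        printf("received: %s\n", buf);
        close(fd[0]);
    } else {                                /* parent: sender */
        close(fd[0]);
        write(fd[1], "ping", strlen("ping"));   /* send the message */
        close(fd[1]);
        wait(NULL);
    }
    return 0;
}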

THREADS

Thread
A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register
set, and a stack.

It shares with other threads belonging to the same process its code section, data section, and
other operating-system resources, such as open files and signals. A traditional (or heavyweight)
process has a single thread of control.

If a process has multiple threads of control, it can perform more than one task at a time.

Motivation
Most software applications that run on modern computers are multithreaded. An application
typically is implemented as a separate process with several threads of control.

A web browser might have one thread display images or text while another thread retrieves data
from the network.

A word processor may have a thread for displaying graphics, another thread for responding to
keystrokes from the user, and a third thread for performing spelling and grammar checking in the
background.

MULTITHREADING

Multithreading is the ability of a program or an operating system process to execute multiple
threads of control concurrently, so that several tasks, or several requests from the same user,
can be serviced at the same time without running multiple copies of the program.
Benefits
There are four major categories of benefits to multi-threading:
1. Responsiveness - One thread may provide rapid response while other threads are blocked
or slowed down doing intensive calculations.
2. Resource sharing - By default threads share common code, data, and other resources,
which allows multiple tasks to be performed simultaneously in a single address space.
3. Economy - Creating and managing threads (and context switching between them) is
much faster than performing the same tasks for processes.
4. Scalability, i.e. Utilization of multiprocessor architectures - A single threaded process
can only run on one CPU, no matter how many may be available, whereas the execution of
a multi-threaded application may be split amongst available processors
Multithreading Models
1. Many-to-One
2. One-to-One
3. Many-to-Many
1. Many-to-One:
The many-to-one model maps many user-level threads to
one kernel-level thread. Thread management is done in user
space. When a thread makes a blocking system call, the entire
process is blocked. Only one thread can access the kernel
at a time, so multiple threads are unable to run in parallel on
multiprocessors.
This model is used when the operating system does not
support kernel threads: the thread library is implemented
entirely in user space.
2. One-to-One:
There is a one-to-one relationship of each user-level
thread to a kernel-level thread.
This model provides more concurrency than
the many-to-one model.
It also allows another thread to run when a thread
makes a blocking system call. It supports multiple threads executing in parallel
on multiprocessors.
3. Many-to-Many Model:
In this model, many user-level threads are multiplexed onto
a smaller or equal number of kernel threads.
The number of kernel threads may be specific to either
a particular application or a particular machine.
In this model, developers can create as many user threads
as necessary, and the corresponding kernel threads can
run in parallel on a multiprocessor.
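A minimal Pthreads sketch is given below (compile with -pthread); Linux's Pthreads
implementation follows the one-to-one model. Two threads of the same process update a shared
variable; the mutex used here is explained in more detail later in this unit.

#include <pthread.h>
#include <stdio.h>

int shared_sum = 0;                              /* data section shared by all threads */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    int id = *(int *)arg;
    pthread_mutex_lock(&lock);                   /* protect the shared variable */
    shared_sum += id;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;

    pthread_create(&t1, NULL, worker, &id1);     /* each call creates one thread */
    pthread_create(&t2, NULL, worker, &id2);
    pthread_join(t1, NULL);                      /* wait for both threads to finish */
    pthread_join(t2, NULL);

    printf("shared_sum = %d\n", shared_sum);
    return 0;
}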

THREADING ISSUES:

1. fork() and exec() system calls.


A fork() system call may duplicate all threads or duplicate only the thread that invoked
fork().
If a thread invokes the exec() system call, the program specified in the parameter to exec()
will replace the entire process.
2. Thread cancellation.
It is the task of terminating a thread before it has completed. A thread
that is to be cancelled is called a target thread.
There are two types of cancellation:
1. Asynchronous cancellation - One thread immediately terminates the target
thread.
2. Deferred cancellation - The target thread can periodically check if it should
terminate, and does so in an orderly fashion.
3. Signal handling
1. A signal is used to notify a process that a particular event has occurred.
2. A generated signal is delivered to the process in one of the following ways:
a. Deliver the signal to the thread to which the signal applies.
b. Deliver the signal to every thread in the process.
c. Deliver the signal to certain threads in the process.
d. Assign a specific thread to receive all signals for the process.
3. Once delivered, the signal must be handled by either:
i. A default signal handler
ii. A user-defined signal handler
4. Thread pools
Creating an unlimited number of threads could exhaust system resources such as CPU time or
memory; hence we use a thread pool.
In a thread pool, a number of threads are created at process startup and placed in
the pool.
When there is a need for a thread, the process picks a thread from the pool and
assigns it a task.
After completion of the task, the thread is returned to the pool.
5. Thread specific data
Threads belonging to a process share the data of the process. However each thread might
need its own copy of certain data known as thread-specific data.
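A short sketch of thread-specific data using Pthreads keys follows; the variable names are
illustrative. Each thread stores and retrieves its own copy through the same key.

#include <pthread.h>
#include <stdio.h>

pthread_key_t key;                       /* one key, a private value per thread */

void *worker(void *arg)
{
    pthread_setspecific(key, arg);       /* store this thread's own copy */
    int *id = pthread_getspecific(key);  /* later: fetch this thread's copy */
    printf("this thread sees id %d\n", *id);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int a = 1, b = 2;

    pthread_key_create(&key, NULL);      /* NULL: no destructor needed here */
    pthread_create(&t1, NULL, worker, &a);
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_key_delete(&key);
    return 0;
}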

MULTICORE PROGRAMMING

Single-CPU systems evolved into multi-CPU systems. A more recent, similar trend in system
design is to place multiple computing cores on a single chip.

Each core appears as a separate processor to the operating system. Whether the cores appear across
CPU chips or within CPU chips, we call these systems multicore or multiprocessor systems.

Multithreaded programming provides a mechanism for more efficient use of these multiple
computing cores and improved concurrency.

A system is parallel if it can perform more than one task simultaneously.

A concurrent system supports more than one task by allowing all the tasks to make progress. Thus,
it is possible to have concurrency without parallelism.

In general, five areas present challenges in programming for multicore systems:

1. Identifying tasks. This involves examining applications to find areas that can be divided into
separate, concurrent tasks.

2. Balance. While identifying tasks that can run in parallel, programmers must also ensure that
the tasks perform equal work of equal value.

3. Data splitting. Just as applications are divided into separate tasks, the data accessed and
manipulated by the tasks must be divided to run on separate cores.
4. Data dependency. The data accessed by the tasks must be examined for dependencies between
two or more tasks. When one task depends on data from another, programmers must ensure that
the execution of the tasks is synchronized to accommodate the data dependency.

5. Testing and debugging. When a program is running in parallel on multiple cores, many
different execution paths are possible. Testing and debugging such concurrent programs is
inherently more difficult than testing and debugging single-threaded applications.

Types of Parallelism
In general, there are two types of parallelism: data parallelism and task parallelism.
Data parallelism focuses on distributing subsets of the same data across multiple computing
cores and performing the same operation on each core.
Task parallelism involves distributing not data but tasks (threads) across multiple computing
cores.
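The data-parallel case can be sketched in C with Pthreads: two threads apply the same operation
(summing) to different halves of one array. The array contents are arbitrary example data.

#include <pthread.h>
#include <stdio.h>

#define N 8
static int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};   /* example data */

struct range { int lo, hi; long sum; };

void *sum_range(void *arg)
{
    struct range *r = arg;
    for (int i = r->lo; i < r->hi; i++)
        r->sum += data[i];               /* same operation, different data subset */
    return NULL;
}

int main(void)
{
    struct range a = {0, N / 2, 0}, b = {N / 2, N, 0};
    pthread_t t1, t2;

    pthread_create(&t1, NULL, sum_range, &a);    /* one core: first half */
    pthread_create(&t2, NULL, sum_range, &b);    /* another core: second half */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("total = %ld\n", a.sum + b.sum);
    return 0;
}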

PROCESS SYNCHRONIZATION

Concurrent access to shared data may result in data inconsistency.

Maintaining data consistency requires mechanisms to ensure the orderly execution of


cooperating processes.

The shared-memory solution to the bounded-buffer problem allows at most n - 1 items in the
buffer at the same time. A solution where all n buffers are used is not simple.

Suppose that we modify the producer-consumer code by adding a variable counter, initialized
to 0 and incremented each time a new item is added to the buffer.

Race condition: The situation where several processes access and manipulate shared data
concurrently. The final value of the shared data depends upon which process finishes last.

To prevent race conditions, concurrent processes must be synchronized.
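A race condition can be demonstrated with a short C program: two unsynchronized threads
increment the same counter, and the final value is usually less than expected because counter++
is a load-add-store sequence, not an atomic operation.

#include <pthread.h>
#include <stdio.h>

long counter = 0;                        /* shared data */

void *increment(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        counter++;                       /* load, add, store: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}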

THE CRITICAL-SECTION PROBLEM

Definition: Each process has a segment of code, called a critical section (CS), in which the
process may be changing common variables, updating a table, writing a file, and so on.

The critical-section problem is to design a protocol that the processes can use to cooperate, such
that, when one process is executing in its CS, no other
process is allowed to execute in its CS.

Each process must request permission to enter its critical section. The section of code implementing this
request is the entry section.
Requirements to be satisfied for a Solution to the Critical-Section Problem:

1. Mutual Exclusion - If process Pi is executing in its critical section, then no other


processes can be executing in their critical sections.
2. Progress - If no process is executing in its critical section and there exist some processes
that wish to enter their critical section, then the selection of the processes that will enter
the critical section next cannot be
postponed indefinitely.
3. Bounded Waiting - A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has
made a request to enter its critical section and before that request is
granted.

General structure of process Pi

do {
entry section
critical section

exit section

remainder section

} while (1);
Two general approaches are used to handle critical sections in operating systems: preemptive
kernels and nonpreemptive kernels.

A preemptive kernel allows a process to be preempted while it is running in kernel mode.

A non-preemptive kernel does not allow a process running in kernel mode to be preempted; a
kernelmode process will run until it exits kernel mode, blocks, or voluntarily yields control of the
CPU.

MUTEX LOCKS

Operating-systems designers build software tools to solve the critical-section problem. The
simplest of these tools is the mutex lock.

We use the mutex lock to protect critical regions and thus prevent race conditions.

A process must acquire the lock before entering a critical section; it releases the lock
when it exits the critical section.

Solution to the critical-section problem using mutex locks.


do {
acquire lock
critical section
release lock
remainder section
} while (true);

A mutex lock has a boolean variable available whose value indicates if the lock is available
or not. If the lock is available, a call to acquire() succeeds, and the lock is then considered
unavailable.

acquire()
{
while (!available)
; /* busy wait */
available = false;
}

release()
{
available = true;
}

Calls to either acquire() or release() must be performed atomically; thus, mutex locks are
often implemented using one of the hardware mechanisms.

The main disadvantage of the implementation given here is that it requires busy waiting.

While a process is in its critical section, any other process that tries to enter its critical section
must loop continuously in the call to acquire().

This type of mutex lock is also called a spinlock because the process spins while
waiting for the lock to become available.

Busy waiting is a problem in a real multiprogramming system, where a
single CPU is shared among many processes. Busy waiting wastes CPU cycles that some other
process might be able to use productively.
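One such hardware mechanism is an atomic test-and-set instruction. A minimal spinlock sketch
using the C11 atomic_flag type (which compiles down to such an instruction) is shown below;
it is a sketch, not a production-quality lock.

#include <stdatomic.h>

atomic_flag lock_flag = ATOMIC_FLAG_INIT;   /* clear: lock available */

void acquire(void)
{
    /* atomically set the flag and return its previous value;
       spin (busy-wait) as long as it was already set */
    while (atomic_flag_test_and_set(&lock_flag))
        ;
}

void release(void)
{
    atomic_flag_clear(&lock_flag);          /* make the lock available again */
}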

SEMAPHORES

A semaphore S is an integer variable that, apart from initialization, is accessed only through
two standard atomic operations:
wait() and
signal().

The wait() operation was originally termed P (from the Dutch proberen, "to test");
signal() was originally called V (from verhogen, "to increment").

wait(S)
{
while (S <= 0); // busy wait
S--;
}

signal(S)
{
S++;
}
Semaphore Usage

The value of a binary semaphore can range only between 0 and 1.

Binary semaphores behave similarly to mutex locks.

On systems that do not provide mutex locks, binary semaphores can be used instead for
providing mutual exclusion.

The value of a counting semaphore can range over an unrestricted domain; counting
semaphores can control access to a resource consisting of a finite
number of instances.

The semaphore is initialized to the number of resources available. Each process that wishes
to use a resource performs a wait() operation on the semaphore
(thereby decrementing the count).

When a process releases a resource, it performs a
signal() operation (incrementing the count).

When the count for the semaphore goes to 0, all resources are being used. After that,
processes that wish to use a resource will block until the count becomes greater than 0.

Semaphores can also be used to solve various synchronization problems.

For example, consider two concurrently running processes: P1 with a statement S1 and P2
with a statement S2. Suppose we require that S2 be executed only after S1 has completed. We
can implement this scheme readily by letting P1 and P2 share a common semaphore synch,
initialized to 0. In process P1, we insert the statements
S1;
signal(synch);
In process P2, we insert the statements
wait(synch);
S2;
Because synch is initialized to 0, P2 will execute S2 only after P1 has invoked signal(synch),
which is after statement S1 has been executed.
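The same ordering can be demonstrated with POSIX semaphores and two threads on Linux;
sem_init(&synch, 0, 0) plays the role of the semaphore synch initialized to 0.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t synch;

void *p1(void *arg)
{
    printf("S1\n");          /* statement S1 */
    sem_post(&synch);        /* signal(synch) */
    return NULL;
}

void *p2(void *arg)
{
    sem_wait(&synch);        /* wait(synch): blocks until p1 signals */
    printf("S2\n");          /* statement S2 runs only after S1 */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&synch, 0, 0);              /* shared between threads, value 0 */
    pthread_create(&t2, NULL, p2, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&synch);
    return 0;
}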

Semaphore Implementation
To avoid busy waiting, we can modify the definition of the wait() and
signal() operations as follows: When a process executes the wait() operation and finds that the
semaphore value is not positive, it must wait.

However, rather than engaging in busy waiting, the process can block itself:
the state of the process is switched to the waiting state.

A process that is blocked should be restarted when some other
process executes a signal() operation.

The process is restarted by a wakeup() operation, which changes the process from the waiting
state to the ready state.

(The CPU may or may not be switched from
the running process to the newly ready process, depending on the CPU-scheduling algorithm.)

typedef struct
{
int value;
struct process *list;
} semaphore;
Each semaphore has an integer value and a list of processes, list. When a process must wait on a semaphore, it is added to this list of processes.

A signal() operation removes one process from the list of waiting processes and awakens that
process.

wait(semaphore *S)
{
S->value--;
if (S->value < 0)
{
add this process to S->list;
block();
}
}

signal(semaphore *S)
{
S->value++;
if (S->value <= 0)
{
remove a process P from S->list;
wakeup(P);
}
}
The block() operation suspends the process that invokes it; the wakeup(P) operation resumes the execution of a blocked process P.

Deadlocks and Starvation

The implementation of a semaphore with a waiting queue may result in a situation where two
or more processes are waiting indefinitely for an event that can be caused only by one of the
waiting processes.

To illustrate, consider a system consisting of two processes, P0 and P1, each accessing
two semaphores, S and Q, set to the value 1:
P0 P1
wait(S); wait(Q);
wait(Q); wait(S);
.. ..
.. ..
.. ..
signal(S); signal(Q);
signal(Q); signal(S);

Suppose that P0 executes wait(S) and then P1 executes wait(Q). When P0 executes wait(Q), it
must wait until P1 executes signal(Q); similarly, when P1 executes wait(S), it must wait until
P0 executes signal(S). Since these signal() operations cannot be executed, P0 and P1 are
deadlocked.

A set of processes is in a deadlocked state when every process in the set is waiting
for an event that can be caused only by another process in the set.

Another problem related to deadlocks is indefinite blocking, or starvation, in which
processes wait indefinitely within the semaphore.

Indefinite blocking may occur if we remove processes from the list associated with a
semaphore in LIFO (last-in, first-out) order.

Priority Inversion
Priority inversion arises when a higher-priority process needs to read or modify kernel
data that are currently being accessed by a lower-priority process or a chain of lower-priority
processes.

Since kernel data are typically protected with a lock, the higher-priority process will have to
wait for a lower-priority one to finish with the resource.
The situation becomes more complicated if the lower-priority process is preempted in favor
of another process with a higher priority.

This problem can occur only in systems with more than two
priorities, so one solution is to have only two priorities.

Typically, though, these systems solve the problem by implementing a priority-inheritance protocol.
According to this protocol, all processes that are accessing resources needed by a higher-priority
process inherit the higher priority until they are finished with the resources in question.

For example, with three processes L, M, and H whose priorities follow the order L < M < H, a
priority-inheritance protocol would allow process L to temporarily inherit the priority of
process H, so that M cannot preempt L while H is waiting for the resource that L holds.

CLASSIC PROBLEMS OF SYNCHRONIZATION

1. Bounded Buffer Problem


2. Reader Writer Problem
3. Dining Philosopher's Problem

The Bounded-Buffer Problem


We assume that the pool consists of n buffers, each capable of holding one item. The mutex
semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to the value
1.
The empty and full semaphores count the number of empty and full buffers; empty is
initialized to the value n, and full is initialized to the value 0.

The producer and consumer processes share the following data structures:
int n;
semaphore mutex = 1;
semaphore empty = n;
semaphore full = 0;
The structure of the producer process.

do {
...
/* produce an item in next produced */
...
wait(empty);
wait(mutex);
...
/* add next produced to the buffer */
...
signal(mutex);
signal(full);
} while (true);

The structure of the consumer process.


do {
wait(full);
wait(mutex);
...
/* remove an item from buffer to next consumed */
...
signal(mutex);
signal(empty);
...
/* consume the item in next consumed */
...
} while (true);

We can interpret this code as the producer producing full buffers for the consumer, or as the
consumer producing empty buffers for the producer.
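The pseudocode above can be turned into a runnable sketch with POSIX semaphores and a
Pthreads mutex; the buffer size of 5 and the 10 items are arbitrary choices for the example.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5
int buffer[N];
int in = 0, out = 0;

sem_t empty, full;                                 /* counting semaphores */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER; /* the mutex semaphore */

void *producer(void *arg)
{
    for (int item = 0; item < 10; item++) {
        sem_wait(&empty);                  /* wait(empty) */
        pthread_mutex_lock(&mutex);        /* wait(mutex) */
        buffer[in] = item;                 /* add next produced to the buffer */
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);      /* signal(mutex) */
        sem_post(&full);                   /* signal(full) */
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int i = 0; i < 10; i++) {
        sem_wait(&full);                   /* wait(full) */
        pthread_mutex_lock(&mutex);        /* wait(mutex) */
        int item = buffer[out];            /* remove an item from the buffer */
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);      /* signal(mutex) */
        sem_post(&empty);                  /* signal(empty) */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty, 0, N);    /* empty = n */
    sem_init(&full, 0, 0);     /* full = 0 */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}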

Reader Writer Problem

The R-W problem is another classic problem for which design of synchronization and concurrency
mechanisms can be tested. The producer/consumer is another such problem; the dining
philosophers is another.

Definition
There is a data area that is shared among a number of processes.
Any number of readers may simultaneously read from the data area.
Only one writer at a time may write to the data area.
If a writer is writing to the data area, no reader may read it.
If there is at least one reader reading the data area, no writer may write to it.
Readers only read and writers only write
A process that reads and writes to a data area must be considered a writer (consider
producer or consumer)

In the solution to the first readers writers problem, the reader processes share the following data
structures:
semaphore rw_mutex = 1;
semaphore mutex = 1;
int read_count = 0;

The semaphore rw_mutex functions as a mutual-exclusion semaphore for the writers; mutex is
used to ensure mutual exclusion when the variable read_count is updated.

The structure of a writer process.


do {
wait(rw_mutex);
...
/* writing is performed */
...
signal(rw_mutex);
} while (true);

The structure of a reader process.


do {
wait(mutex);
read_count++;
if (read_count == 1)
wait(rw_mutex);
signal(mutex);
...
/* reading is performed */
...
wait(mutex);
read_count--;
if (read_count == 0)
signal(rw_mutex);
signal(mutex);
} while (true);

Dining Philosophers Problem

Consider there are five philosophers sitting around a circular dining table. The dining table has
five chopsticks and a bowl of rice in the middle.
At any instant, a philosopher is either eating or thinking. When a philosopher wants to eat, he uses
two chopsticks - one from his left and one from his right.

When a philosopher wants to think, he puts down both chopsticks at their original place.

When a philosopher thinks, he does not interact with the others.


From time to time, a philosopher gets hungry and tries to pick up the two chopsticks that are
closest to him (the chopsticks between him and his left and right neighbors).
A philosopher may pick up only one chopstick at a time. Obviously, he cannot pick up a
chopstick that is already in the hand of a neighbor.
When a hungry philosopher has both his chopsticks at the same time, he eats without releasing
them.
When he is finished eating, he puts down both of his chopsticks and starts thinking again.
Solution:
From the problem statement, it is clear that a philosopher can think for an indefinite amount of
time. But when a philosopher starts eating, he has to stop at some point of time. The philosopher
is in an endless cycle of thinking and eating.

An array of five semaphores, stick[5], for each of the five chopsticks.

The code for each philosopher looks like:

while(TRUE) {
wait(stick[i]);
wait(stick[(i+1) % 5]); // mod is used because the table is circular:
                        // the chopstick after stick[4] is stick[0]
/* eat */
signal(stick[i]);
signal(stick[(i+1) % 5]);
/* think */
}
When a philosopher wants to eat the rice, he waits for the chopstick on his left and picks it up.
Then he waits for the right chopstick to be available and picks it up too. After eating, he puts
both chopsticks down.

But if all five philosophers are hungry simultaneously and each of them picks up one chopstick,
then a deadlock situation occurs, because each will wait forever for the other chopstick.

The possible solutions for this are:

1) A philosopher must be allowed to pick up the chopsticks only if both the left and right
chopsticks are available.

2) Allow only four philosophers to sit at the table. That way, if all four philosophers pick
up four chopsticks, there will be one chopstick left on the table. So, one philosopher can
start eating, and eventually two chopsticks will become available. In this way, deadlocks
can be avoided.
MONITORS

Definition: Monitor is a high-level language construct with a collection of procedures, variables,


and data structures that are all grouped together in a special kind of module or package.

directly access the monitor's internal data structures from procedures declared outside the
monitor.

chieving mutual exclusion:


only one process can be active in a monitor at any instant.

Monitor Usage

A monitor type presents a set of programmer-defined operations that are provided mutual

exclusion within the monitor.

The monitor type also contains the declaration of variables whose values define the state of an

instance of that type, along with the bodies of procedures or functions that operate on those
variables.

monitor monitor name


{
/* shared variable declarations */
function P1 ( . . . ) {
...
}
function P2 ( . . . ) {
...
}
.
.
function Pn ( . . . ) {
...
}
initialization code ( . . . ) {
...
}
}

A procedure defined within a monitor can access only those variables declared locally within the
monitor and its formal parameters.

Similarly, the local variables of a monitor can be accessed by only the local procedures.


Schematic view of a Monitor

The monitor construct is not sufficiently powerful for modeling some synchronization schemes.

Additional synchronization mechanisms are provided by the condition construct: condition x, y;

The only operations that can be invoked on a condition variable are wait() and signal(). The
operation
x. wait();

means that the process invoking this operation is suspended until another process invokes
x.signal();
The x.signal() operation resumes exactly one suspended process.

A monitor solution to the dining-philosopher problem.

monitor DiningPhilosophers
{
enum {THINKING, HUNGRY, EATING} state[5];
condition self[5];
void pickup(int i)
{
state[i] = HUNGRY;
test(i);
if (state[i] != EATING)
self[i].wait();
}
void putdown(int i)
{
state[i] = THINKING;
test((i + 4) % 5);
test((i + 1) % 5);
}
void test(int i)
{
if ((state[(i + 4) % 5] != EATING) && (state[i] == HUNGRY) && (state[(i + 1) % 5] !=
EATING))
{
state[i] = EATING;
self[i].signal();
}
}
initialization code()
{
for (int i = 0; i < 5; i++)
state[i] = THINKING;
}
}

CPU SCHEDULING

CPU scheduling is the basis of multi-programmed operating systems.

By switching the CPU among processes, the operating system can make the computer more
productive.

Basic Concepts
The objective of multi-programming is to have some process running at all times, to
maximize CPU utilization.
For a Uni-processor system, there will never be more than one running process.
Scheduling is a fundamental operating system function.
The idea of multi-programming is to execute a process until it must wait, typically for the
completion of some I/O request.
The CPU is one of the primary computer resources.
The CPU scheduling is central to operating system design.

CPU Scheduler
When the CPU becomes idle, the operating system must select one of the processes in the
ready queue to be executed.
The selection process is carried out by the short-term scheduler (CPU scheduler)
The scheduler selects from among the processes in memory that are ready to execute, and
allocates the CPU to one of them.
A ready queue may be implemented as a FIFO queue, a priority queue, a tree, or simply an
unordered linked list.
All the processes in the ready queue are lined up waiting for a chance to run on the CPU.

CPU scheduling decisions may take place when a process:


1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates

Scheduling under 1 and 4 is nonpreemptive.


All other scheduling is preemptive.

Nonpreemptive Scheduling: A scheduling discipline is nonpreemptive if, once a process has
been given the CPU, the CPU cannot be taken away from that process.

Preemptive Scheduling: A scheduling discipline is preemptive if the CPU can be taken away
from a process after it has been given the CPU.

Dispatcher

Dispatcher is the module that gives control of the CPU to the process selected by the short-term
scheduler. This function involves the following:
Switching context
Switching to user mode
Jumping to the proper location in the user program to restart that program

Dispatch latency: the time it takes for the dispatcher to stop one process and start another
running.

Scheduling criteria

1. CPU utilization - keep the CPU as busy as possible.
2. Throughput - the number of processes that complete their execution per time unit.
3. Turnaround time - the amount of time to execute a particular process.
4. Waiting time - the amount of time a process has been waiting in the ready queue.
5. Response time - the amount of time from when a request was submitted until the
first response is produced, not output (for time-sharing environments).

The best algorithm maximizes CPU utilization and throughput and minimizes turnaround
time, waiting time, and response time.

Formulas to calculate turnaround time and waiting time:

Waiting time = Finishing time - (CPU burst time + Arrival time)
Turnaround time = Waiting time + Burst time
Scheduling Algorithms

A Process Scheduler schedules different processes to be assigned to the CPU based on particular
scheduling algorithms.
1. First-Come, First-Served (FCFS) Scheduling
2. Shortest-Job-First (SJF) Scheduling
3. Priority Scheduling
4. Round Robin(RR) Scheduling

First-Come, First-Served (FCFS) Scheduling algorithm

FCFS is the simplest CPU-scheduling algorithm.
The process that requests the CPU first is allocated the CPU first.
The implementation of the FCFS policy is easily managed with a FIFO queue. When a process
enters the ready queue, its PCB is linked onto the tail of the queue.
When the CPU is free, it is allocated to the process at the head of the queue; the
running process is then removed from the queue.

Example Problem
Consider the following set of processes that arrive at time 0, with the length of the CPU burst
time given in milliseconds:

Suppose that the processes arrive in the order P1, P2, P3, with burst times of 24, 3, and 3 ms
(the same burst times as in the table below). The Gantt chart:

P1 P2 P3
0 24 27 30

Waiting time
For P1 = 0, P2 = 24, P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17 ms.

Turnaround Time = Waiting Time + Burst Time
Turnaround time for P1 = 24, P2 = 27, P3 = 30
Average turnaround time = (24 + 27 + 30)/3 = 27 ms.

Shortest-Job-First (SJF) Scheduling

SJF associates with each process the length of its next CPU burst. These lengths are used
to schedule the process with the shortest time.
When the CPU is available, it is assigned to the process that has the smallest next
CPU burst; the algorithm is also called shortest-next-CPU-burst scheduling.
If the next CPU bursts of two processes are the same, FCFS scheduling is used
to break the tie.
Process Burst Time
P1 24
P2 3
P3 3
Gantt Chart

P2 P3 P1
0 3 6 30
Waiting time
For P1=6,P2=0,P3=3
Average Waiting Time=(6+0+3)/3=3 ms.
Turnaround Time = Waiting Time + Burst Time
Turnaround Time for P1=(6+24)=30,P2=(0+3)=3,P3=(3+3)=6
Average Turnaround Time=(30+3+6)/3=13ms
Priority Scheduling
The SJF algorithm is a special case of the general priority-scheduling algorithm.
A priority number (integer) is associated with each process and the CPU is allocated to
the process with the highest priority.
Equal-priority processes are scheduled in FCFS order.
The CPU is allocated to the process with the highest priority (smallest integer = highest
priority).

Process Burst Time Priority


P1 24 2
P2 3 1
P3 3 3

Gantt Chart

P2 P1 P3
0 3 27 30
Waiting time
For P1=3,P2=0,P3=27
Average Waiting Time=(3+0+27)/3=10ms
Turnaround Time = Waiting Time + Burst Time
Turnaround Time for P1=(3+24)=27,P2=(0+3)=3,P3=(27+3)=30
Average Turn Around Time=(27+3+30)/3=20ms.

Round robin scheduling


Round robin scheduling is designed especially for time-sharing
systems.
It is similar to FCFS, but preemption is added to switch between
processes.
Each process gets a small unit of CPU time called a time quantum or
time slice.
To implement RR scheduling, the ready queue is kept as a FIFO queue of processes.
New processes are added to the tail of the ready queue. The CPU scheduler picks the
first process from the ready queue, sets a timer to interrupt after 1 time quantum and
dispatches the process.

If the CPU burst time is less than the time quantum, the process itself will release the
CPU voluntarily. Otherwise, if the CPU burst of the currently running process is longer
than the time quantum a context switch will be executed and the process will be put at
the tail of the ready queue.

Example: with the same three processes (P1 = 24, P2 = 3, P3 = 3 ms) and a time quantum of
4 ms (consistent with the waiting times below), the Gantt chart is:

P1 P2 P3 P1
0 4 7 10 30

Waiting time
For P1 = 10 - 4 = 6, P2 = 4, P3 = 7
Average waiting time = (6 + 4 + 7)/3 = 17/3 = 5.66 ms
Turnaround Time = Waiting Time + Burst Time
Turnaround time for P1 = (6 + 24) = 30, P2 = (4 + 3) = 7, P3 = (7 + 3) = 10
Average turnaround time = (30 + 7 + 10)/3 = 15.66 ms.

Multilevel Queue Scheduling


It partitions the ready queue into several separate queues .
The processes are permanently assigned to one queue, generally based on some
property of the process, such as memory size, process priority, or process type.
There must be scheduling between the queues, which is commonly
implemented as a fixed-priority preemptive scheduling.
For example the foreground queue may have absolute priority over the
background queue.
Example of a multilevel queue scheduling algorithm with five queues:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
Each queue has absolute priority over lower-priority queue.
Multilevel Feedback Queue Scheduling
It allows a process to move between queues.
The idea is to separate processes with different CPU-burst characteristics.
If a process uses too much CPU time, it will be moved to a lower-priority queue.
This scheme leaves I/O-bound and interactive processes in the higher- priority
queues.
Similarly, a process that waits too long in a lower priority queue may be moved to
a higher-priority queue.
This form of aging prevents starvation.
Example:

Consider a multilevel feedback queue scheduler with three queues, numbered from 0 to 2.
The scheduler first executes all processes in queue 0.
Only when queue 0 is empty will it execute processes in queue 1.

Similarly, processes in queue 2 will be executed only if queues 0 and 1 are empty.
A process that arrives for queue 1 will preempt a process in queue 2.
A process that arrives for queue 0 will, in turn, preempt a process in queue 1.
A multilevel feedback queue scheduler is defined by the following
parameters:

1. The number of queues


2. The scheduling algorithm for each queue
3. The method used to determine when to upgrade a process to a higher
priority queue
4. The method used to determine when to demote a process to a lower-
priority queue
5. The method used to determine which queue a process will enter when
that process needs service
Multiple Processor Scheduling
If multiple CPUs are available, the scheduling problem is correspondingly more
complex.
If several identical processors are available, then load-sharing can occur.
It is possible to provide a separate queue for each processor.
In this case however, one processor could be idle, with an empty queue, while
another processor was very busy.
To prevent this situation, we use a common ready queue.
All processes go into one queue and are scheduled onto any available processor.
In such a scheme, one of two scheduling approaches may be used.
1. Self Scheduling - Each processor is self-scheduling.
Each processor examines the common ready queue and selects a process to
execute. We must ensure that two processors do not choose the same process,
and that processes are not lost from the queue.
2. Master Slave Structure - This avoids the problem by appointing one
processor as scheduler for the other processors, thus creating a master-slave
structure.
Real-Time Scheduling
Real-time computing is divided into two types.
1. Hard real-time systems
2. Soft real-time systems
Hard real-time systems
Hard RTS are required to complete a critical task within a guaranteed
amount of time.
Generally, a process is submitted along with a statement of the
amount of time in which it needs to complete or perform I/O.
The scheduler then either admits the process, guaranteeing that the
process will complete on time, or rejects the request as impossible. This is
known as resource reservation.
Soft real-time systems
Soft real-time computing is less restrictive. It requires that critical processes receive
priority over less fortunate ones.
The system must have priority scheduling, and real-time processes must have the
highest priority.
The priority of real-time processes must not degrade over time, even though the
priority of non-real-time processes may.
Dispatch latency must be small. The smaller the latency, the faster a real- time
process can start executing.
The high-priority process would be waiting for a lower-priority one to finish.
This situation is known as priority inversion.

DEAD LOCK

Definition:
A process requests resources; if the resources are not available at that time, the process enters
a wait state. It may happen that the waiting processes will never again change state, because
the resources they have requested are held by other waiting processes. This situation is called a
deadlock.

System Model
A system consists of a finite number of resources to be distributed among a number of
competing processes.

The resources may be partitioned into several types, each consisting of
some number of identical instances.

Memory space, CPU cycles, files, and I/O devices (such as printers and DVD drives) are
examples of resource types.

A process must request a resource before using it and must release the resource after using it.

Under the normal mode of operation, a process may utilize a resource in only the following
sequence:
1. Request. The process requests the resource. If the request cannot be granted immediately then
the requesting process must wait until it can acquire the resource.
2. Use. The process can operate on the resource.
3. Release. The process releases the resource.

Deadlock Characterizations:-
In a deadlock, processes never finish executing, and system resources are tied up, preventing
other jobs from starting.

Necessary Conditions for Deadlock:-

A dead lock situation can arise if the following four conditions hold simultaneously in a system.
1) MUTUAL EXCLUSION:- At least one resource must be held in a non-sharable mode; that is,
only one process at a time can hold the resource. Other requesting processes must wait until it is
released.
2) HOLD & WAIT:- There must exist a process that is holding at least one resource and is waiting
to acquire additional resources that are currently being held by other processes.
3) NO PREEMPTION:- Resources cannot be preempted; that is, a resource can be released
only voluntarily by the process holding it, after that process has completed its task.
4) CIRCULAR WAIT:- There must exist a set {P0, P1, ..., Pn} of waiting processes such that
P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2,
and so on; Pn is waiting for a resource held by P0.

Resource-Allocation Graph
A deadlock can be described in terms of a directed graph called system resource-allocation
graph.
A set of vertices V and a set of edges E.
V is partitioned into two types:
P = {P1, P2, ..., Pn}, the set consisting of all the processes in the system.
R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.
Request edge: a directed edge Pi -> Rj.
Assignment edge: a directed edge Rj -> Pi.

As an example, consider a resource-allocation graph with the following sets P, R, and E:

P = {P1, P2, P3}
R = {R1, R2, R3, R4}
E = {P1 -> R1, P2 -> R3, R1 -> P2, R2 -> P2, R2 -> P1, R3 -> P3}

Resource instances: one instance of R1, two instances of R2, one instance of R3, and three
instances of R4.

Process states:
Process P1 is holding an instance of resource type R2 and is waiting for an instance of
resource type R1.
Process P2 is holding an instance of R1 and an instance of R2 and is waiting for an
instance of R3.
Process P3 is holding an instance of R3.
Resource-allocation graph with a deadlock.


Processes P1, P2, and P3 are deadlocked. Process P2 is waiting for the resource R3, which
is held by process P3. Process P3 is waiting for either process P1 or process P2 to release
resource R2. In addition, process P1 is waiting for process P2 to release resource R1.
We also have two cycles: P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1 and P2 -> R3 -> P3 -> R2 -> P2.
If the graph contains no cycles, then no process in the system is deadlocked.

If the graph does contain a cycle, then a deadlock may exist.

Resource-allocation graph with a cycle but no deadlock.

Methods for Handling Deadlocks


We can deal with the deadlock problem in one of three ways:
1. We can use a protocol to prevent or avoid deadlocks, ensuring that the system will
never enter a deadlocked state
2. We can allow the system to enter a deadlocked state, detect it, and recover.
3. We can ignore the problem altogether and pretend that deadlocks never occur in the
system.
The third solution is the one used by most operating systems, including Linux and
Windows.
Deadlock prevention provides a set of methods to ensure that at least one of the
necessary conditions cannot hold.
Deadlock avoidance requires that the operating system be given additional information
in advance concerning which resources a process will request and use during its lifetime.

DEADLOCK PREVENTION

Deadlock prevention provides a set of methods for ensuring that at least one of the four
necessary conditions cannot hold, thus preventing a deadlock.
1. Mutual Exclusion
not required for sharable resources; must hold for non-sharable resources.
For example, a printer cannot be simultaneously shared by several processes.
A process never needs to wait for a sharable resource.
2. Hold and Wait
We must guarantee that whenever a process requests a resource, it does not hold any other
resources.
One protocol requires each process to request and be allocated all its resources before it
begins execution,
Or another protocol allows a process to request resources only when the process has
none. So, before it can request any additional resources, it must release all the resources
that it is currently allocated.
3. No Preemption
If a process that is holding some resources requests another resource that cannot be
immediately allocated to it, then all resources currently being held are released.
Preempted resources are added to the list of resources for which the process is waiting.
Process will be restarted only when it can regain its old resources, as well as the new
ones that it is requesting.
4. Circular Wait
Impose a total ordering of all resource types and allow each process to
request for resources in an increasing order of enumeration.
Let R = {R1,R2,...Rm} be the set of resource types.
Assign to each resource type a unique integer number.
If the set of resource types R includes tape drives, disk drives, and printers, we might assign:
F(tape drive) = 1,
F(disk drive) = 5,
F(printer) = 12.
Each process can request resources only in an increasing order of enumeration.

DEADLOCK AVOIDANCE

Deadlock avoidance requires that the system be given additional information in advance
about how resources are to be requested. With this knowledge, for each request the system
can consider:

the resources currently available,
the resources currently allocated to each process, and
the future requests and releases of each process.

A deadlock-avoidance algorithm dynamically examines the resource-allocation state to


ensure that a circular-wait condition can never exist.

The resource-allocation state is defined by the number of available and allocated


resources and the maximum demands of the processes.
Safe State
When a process requests an available resource, system must decide if immediate
allocation leaves the system in a safe state.

The system is in a safe state if there exists a safe sequence of all processes <P1, P2, ..., Pn>
such that for each Pi, the resources that Pi can still request can be satisfied
by the currently available resources plus the resources held by all the Pj, with j < i.
That is:
If Pi resource needs are not immediately available, then Pi can wait until all Pj
have finished.
When Pj is finished, Pi can obtain needed resources, execute, return allocated
resources, and terminate.
When Pi terminates, Pi +1 can obtain its needed resources, and so on.

Banker's Algorithm

The resource-allocation-graph algorithm is not applicable to a resource-allocation
system with multiple instances of each resource type; the banker's algorithm is.

The name was chosen because the algorithm could be used in a banking system to
ensure that the bank never allocated its available cash in such a way that it could no
longer satisfy the needs of all its customers.
It handles multiple instances of each resource type.
Each process must a priori claim its maximum use.
When a process requests a resource, it may have to wait.
When a process gets all its resources, it must return them in a finite amount of time.
Let n = number of processes, and m = number of resource types.
1. Available: indicates the number of available resources of each type.
2. Max: Max[i, j] = k means process Pi may request at most k instances of
resource type Rj.
3. Allocation: Allocation[i, j] = k means process Pi is currently allocated k
instances of resource type Rj.
4. Need: Need[i, j] = k means process Pi may need k more instances of
resource type Rj, where Need[i, j] = Max[i, j] - Allocation[i, j].


Safety algorithm
1. Initialize Work := Available and Finish[i] := false for i = 1, 2, 3, ..., n.
2. Find an i such that both
a. Finish[i] = false
b. Needi <= Work
If no such i exists, go to step 4.
3. Work := Work + Allocationi; Finish[i] := true; go to step 2.
4. If Finish[i] = true for all i, then the system is in a safe state.
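The safety algorithm translates directly into C. The sketch below uses a small made-up snapshot
(3 processes, 2 resource types) purely to exercise the code; it is not the snapshot from the example
that follows.

#include <stdbool.h>
#include <stdio.h>

#define NP 3   /* processes */
#define NR 2   /* resource types */

bool is_safe(int available[NR], int alloc[NP][NR], int need[NP][NR])
{
    int work[NR];
    bool finish[NP] = {false};

    for (int j = 0; j < NR; j++)           /* step 1: Work := Available */
        work[j] = available[j];

    for (int done = 0; done < NP; ) {
        bool progress = false;
        for (int i = 0; i < NP; i++) {     /* step 2: find i with Need_i <= Work */
            if (finish[i]) continue;
            bool fits = true;
            for (int j = 0; j < NR; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {                    /* step 3: Work := Work + Allocation_i */
                for (int j = 0; j < NR; j++)
                    work[j] += alloc[i][j];
                finish[i] = true;
                progress = true;
                done++;
            }
        }
        if (!progress)                     /* step 4: some process can never finish */
            return false;
    }
    return true;                           /* Finish[i] = true for all i: safe */
}

int main(void)
{
    int available[NR] = {3, 3};                        /* made-up values */
    int alloc[NP][NR] = {{1, 0}, {2, 1}, {0, 2}};
    int need[NP][NR]  = {{1, 2}, {0, 1}, {3, 0}};

    printf("safe: %s\n", is_safe(available, alloc, need) ? "yes" : "no");
    return 0;
}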
Example:
Given the following state for the banker's algorithm:
5 processes P0 through P4
3 resource types A (6 instances), B (9 instances) and C (5 instances).

Snapshot at time T0:

a) Calculate the available vector.


b) Calculate the Need matrix.
c) Is the system in a safe state? If so, show one sequence of processes which allows
the system to complete. If not, explain why.
d) Given the request (1, 2, 0) from Process P2. Should this request be granted? Why
or why not?

a) Calculate the available vector.

b) Calculate the Need matrix.

c) Is the system in a safe state? If so, show one sequence of processes which allows
the system to complete. If not, explain why.
1. Initialize the Work and Finish vectors.
Work = Available = (1, 2, 0)
Finish = (false, false, false, false, false)
2. Repeatedly find an index i such that Finish[i] = false and Needi <= Work; then set
Work = Work + Allocationi and Finish[i] = true.
3. Since Finish[i] = true for all i, the system is in a safe state. One sequence
of processes which allows the system to complete is P1, P3, P2, P4, P0.
d) Given the request (1, 2, 0) from Process P2. Should this request be granted? Why
or why not?
1. Check that Request2 <= Need2.
Since (1, 2, 0) <= (2, 3, 3), this condition is satisfied.
2. Check that Request2 <= Available.
Since (1, 2, 0) <= (1, 2, 0), this condition is satisfied.
3. Modify the state as follows:
Available = Available - Request2 = (1, 2, 0) - (1, 2, 0) = (0, 0, 0)
Allocation2 = Allocation2 + Request2 = (0, 3, 0) + (1, 2, 0) = (1, 5, 0)
Need2 = Need2 - Request2 = (2, 3, 3) - (1, 2, 0) = (1, 1, 3)
4. Apply the safety algorithm to check whether granting this request leaves the
system in a safe state.
1. Initialize the Work and Finish vectors.
Work = Available = (0, 0, 0)
Finish = (false, false, false, false, false)
2. At this point, there does not exist an index i such that Finish[i] = false
and Needi <= Work.
Since Finish[i] is not true for all i, the system is not in a safe state.
Therefore, this request from process P2 should not be granted.
Resource-Request Algorithm
Let Requesti be the request vector for process Pi. If Requesti[j] == k, then process Pi
wants k instances of resource type Rj. When a request for resources is made by process
Pi, the following actions are taken:
1. If Requesti <= Needi, go to step 2. Otherwise, raise an error condition, since the
process has exceeded its maximum claim.
2. If Requesti <= Available, go to step 3. Otherwise, Pi must wait, since the
resources are not available.
3. Have the system pretend to have allocated the requested resources to process Pi
by modifying the state as follows:
Available = Available - Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi - Requesti;
If the resulting state is safe, the resources are allocated to Pi; otherwise, Pi must wait
and the old resource-allocation state is restored. A C sketch of this check follows.
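Continuing the same hypothetical C sketch (it reuses is_safe(), N, and M from the safety-algorithm code above), the resource-request algorithm becomes:

/* Try to grant request[] for process pid; returns true if granted.
 * Uses the hypothetical is_safe() and dimensions from the sketch above. */
bool request_resources(int pid, int request[M],
                       int available[M], int allocation[N][M], int need[N][M]) {
    int seq[N];

    for (int j = 0; j < M; j++)
        if (request[j] > need[pid][j]) return false;  /* step 1: exceeds max claim */
    for (int j = 0; j < M; j++)
        if (request[j] > available[j]) return false;  /* step 2: must wait */

    /* Step 3: pretend to allocate. */
    for (int j = 0; j < M; j++) {
        available[j]       -= request[j];
        allocation[pid][j] += request[j];
        need[pid][j]       -= request[j];
    }

    if (is_safe(available, allocation, need, seq))
        return true;                                  /* safe: grant the request */

    /* Unsafe: roll back to the old state and make the process wait. */
    for (int j = 0; j < M; j++) {
        available[j]       += request[j];
        allocation[pid][j] -= request[j];
        need[pid][j]       += request[j];
    }
    return false;
}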

DEADLOCK DETECTION

(i) Single instance of each resource type
If all resources have only a single instance, then we can define a deadlock-detection
algorithm that uses a variant of the resource-allocation graph called a wait-for graph.
An edge from Pi to Pj in the wait-for graph means that Pi is waiting for a resource held
by Pj; a deadlock exists if and only if the wait-for graph contains a cycle.

(Figure: a resource-allocation graph and its corresponding wait-for graph.)
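Detecting a cycle in the wait-for graph is a standard depth-first search. A minimal C sketch follows; the adjacency matrix wait_for and the process count NP are illustrative assumptions:

#include <stdbool.h>

#define NP 4  /* number of processes (hypothetical) */

/* wait_for[i][j] == true means Pi is waiting for a resource held by Pj. */
static bool dfs(int u, bool wait_for[NP][NP], int color[NP]) {
    color[u] = 1;                         /* grey: on the current DFS path */
    for (int v = 0; v < NP; v++) {
        if (!wait_for[u][v]) continue;
        if (color[v] == 1) return true;   /* back edge: cycle => deadlock */
        if (color[v] == 0 && dfs(v, wait_for, color)) return true;
    }
    color[u] = 2;                         /* black: fully explored */
    return false;
}

/* Returns true if the wait-for graph contains a cycle (i.e., a deadlock). */
bool has_deadlock(bool wait_for[NP][NP]) {
    int color[NP] = { 0 };                /* 0 = white (unvisited) */
    for (int u = 0; u < NP; u++)
        if (color[u] == 0 && dfs(u, wait_for, color))
            return true;
    return false;
}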
(ii) Several instances of a resource type
The detection algorithm uses the following data structures:
Available: the number of available resources of each type.
Allocation: the number of resources of each type currently allocated to each process.
Request: the current request of each process.
If Request[i, j] = k, then process Pi is requesting k more instances of resource
type Rj.
1. Initialize Work := Available. For i = 1, 2, ..., n, if Allocationi != 0, then
Finish[i] := false; otherwise, Finish[i] := true.
2. Find an index i such that both
a. Finish[i] = false
b. Requesti <= Work.
If no such i exists, go to step 4.
3. Work := Work + Allocationi; Finish[i] := true; go to step 2.
4. If Finish[i] = false for some i, then process Pi is deadlocked.
This algorithm differs from the safety algorithm only in that the Request matrix takes
the place of Need, as in the sketch below.
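In the same hypothetical C setting as the earlier Banker's-algorithm sketches (same N and M), the detection algorithm can be written as:

/* Deadlock detection for multiple instances: same shape as is_safe(),
 * but uses Request instead of Need, and Finish[i] starts true for
 * processes holding no resources. Marks deadlocked processes and
 * returns true if any exist. */
bool find_deadlocked(int available[M], int allocation[N][M],
                     int request[N][M], bool deadlocked[N]) {
    int work[M];
    bool finish[N];
    for (int j = 0; j < M; j++) work[j] = available[j];
    for (int i = 0; i < N; i++) {
        finish[i] = true;                 /* trivially finished ...        */
        for (int j = 0; j < M; j++)       /* ... unless it holds resources */
            if (allocation[i][j] != 0) { finish[i] = false; break; }
    }

    bool progress = true;
    while (progress) {                    /* steps 2 and 3 */
        progress = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool fits = true;
            for (int j = 0; j < M; j++)
                if (request[i][j] > work[j]) { fits = false; break; }
            if (fits) {
                for (int j = 0; j < M; j++) work[j] += allocation[i][j];
                finish[i] = true;
                progress = true;
            }
        }
    }

    bool any = false;
    for (int i = 0; i < N; i++) {
        deadlocked[i] = !finish[i];       /* step 4: unfinished => deadlocked */
        if (deadlocked[i]) any = true;
    }
    return any;
}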

DEADLOCK RECOVERY.

There are three basic approaches to recovery from deadlock:
1. Inform the system operator and allow manual intervention.
2. Terminate one or more processes involved in the deadlock.
3. Preempt resources.
1. Process Termination
Two basic approaches, both of which recover resources allocated to terminated
processes:
Terminate all processes involved in the deadlock. This definitely solves the
deadlock, but at the expense of terminating more processes than would be
absolutely necessary.
Terminate processes one by one until the deadlock is broken. This is more
conservative, but requires doing deadlock detection after each step.
In the latter case there are many factors that can go into deciding which
processes to terminate next:
1. Process priorities.
2. How long the process has been running, and how close it is to finishing.
3. How many and what type of resources the process is holding (are they
easy to preempt and restore?).
4. How many more resources the process needs to complete.
5. How many processes will need to be terminated.
6. Whether the process is interactive or batch.
7. Whether or not the process has made non-restorable changes to any
resource.
2. Resource Preemption
When preempting resources to relieve deadlock, there are three important
issues to be addressed:
1. Selecting a victim - Deciding which resources to preempt from which processes
involves many of the same decision criteria outlined above.
2. Rollback - Ideally one would like to roll back a preempted process to a safe state
prior to the point at which the resource was originally allocated to it.
Unfortunately it can be difficult or impossible to determine what such a safe
state is, and so the only safe rollback is to roll back all the way to the
beginning (i.e., abort the process and make it start over).
3. Starvation - How do you guarantee that a process won't starve because its resources
are constantly being preempted? One option would be to use a priority system, and
increase the priority of a process every time its resources get preempted. Eventually
it should get a high enough priority that it won't get preempted any more.

WINDOWS 7 THREAD AND SMP MANAGEMENT

The native process structures and services provided by the Windows Kernel are relatively simple
and general purpose, allowing each OS subsystem to emulate a particular process structure and
functionality.

Characteristics of Windows processes:


Windows processes are implemented as objects.
A process can be created as new process, or as a copy of an existing process.
An executable process may contain one or more threads.
Both process and thread objects have built-in synchronization capabilities.
A Windows Process and Its Resources
Each process is assigned a security access token, called the primary token of the
process. When a user first logs on, Windows creates an access token that
includes the security ID for the user.
Every process that is created by or runs on behalf of this user has a copy of this
access token.
Windows uses the token to validate the ability to access secured objects or
to perform restricted functions on the system and on secured objects. The access
token controls whether the process can change its own attributes.

Related to the process is a series of blocks that define the virtual address space
currently assigned to this process.

The process cannot directly modify these structures but must rely on the virtual
memory manager, which provides a memory allocation service for the process.

The process includes an object table, with handles to other objects known to this
process. The process has access to a file object and to a section object that defines
a section of shared memory.

Process and Thread Objects

The object-oriented structure of Windows facilitates the development of a general-
purpose process facility. Windows makes use of two kinds of process-related objects:
processes and threads.
A process is an entity corresponding to a user job or application that owns resources,
such as memory and open files.
A thread is a dispatchable unit of work that executes sequentially and is interruptible,
so that the processor can turn to another thread.

(Figures: Windows process and thread objects; Windows process object attributes;
Windows thread object attributes; thread states.)
Problem

1. Consider the following set of processes, with the length of the CPU-burst time given in
milliseconds:

Process  Burst Time  Arrival Time  Priority
P1       23          0             2
P2       3           1             1
P3       6           2             4
P4       2           3             3

a. Draw four Gantt charts illustrating the execution of these processes using FCFS,
preemptive SJF, non-preemptive priority (a smaller priority number implies a higher
priority), and RR (quantum = 1) scheduling.

c. What is the waiting time of each process for each of the scheduling algorithms in part a?

d. Which of the schedules in part a results in the minimal average waiting time (over all
processes)?

FCFS SCHEDULING
Gantt order: P1 (0-23), P2 (23-26), P3 (26-32), P4 (32-34).
Waiting times: P1 = 0, P2 = 22, P3 = 24, P4 = 29; average = 18.75 ms.
SJF (SHORTEST JOB FIRST, PREEMPTIVE)

In preemptive shortest-job-first scheduling, jobs are put into the ready queue as they
arrive; when a process arrives with a burst time shorter than the remaining time of the
running process, the running process is preempted.
Gantt order: P1 (0-1), P2 (1-4), P4 (4-6), P3 (6-12), P1 (12-34).
Waiting times: P1 = 11, P2 = 0, P3 = 4, P4 = 1; average = 4.00 ms, the smallest of the
four schedules.

PRIORITY (NON-PREEMPTIVE)
Gantt order: P1 (0-23), P2 (23-26), P4 (26-28), P3 (28-34).
Waiting times: P1 = 0, P2 = 22, P3 = 26, P4 = 23; average = 17.75 ms.
ROUND ROBIN (QUANTUM = 1)
Under the common convention that processes arriving at time t enter the ready queue
ahead of the process preempted at time t, the completion times are P2 = 9, P4 = 10,
P3 = 18, and P1 = 34.
Waiting times: P1 = 11, P2 = 5, P3 = 10, P4 = 5; average = 7.75 ms. A small simulator
to cross-check these timings appears below.
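Because the round-robin chart is the easiest to get wrong by hand, here is a small, self-contained C simulator (illustrative, not part of the original notes) that reproduces the timings above under the stated queue convention:

#include <stdio.h>

#define NP 4
#define QUANTUM 1

/* Round-robin simulator for the problem above. Convention: processes
 * arriving at time t enter the ready queue before the process that was
 * just preempted at time t. */
int main(void) {
    int arrival[NP] = { 0, 1, 2, 3 };
    int burst[NP]   = { 23, 3, 6, 2 };
    int remain[NP], finish[NP];
    int queue[64], head = 0, tail = 0;
    int t = 0, arrived = 0, done = 0;

    for (int i = 0; i < NP; i++) remain[i] = burst[i];
    while (arrived < NP && arrival[arrived] <= t) queue[tail++] = arrived++;

    while (done < NP) {
        if (head == tail) {            /* CPU idle: jump to next arrival */
            t = arrival[arrived];
            while (arrived < NP && arrival[arrived] <= t) queue[tail++] = arrived++;
            continue;
        }
        int p = queue[head++];         /* dispatch front of ready queue */
        int run = remain[p] < QUANTUM ? remain[p] : QUANTUM;
        t += run;
        remain[p] -= run;
        /* new arrivals up to time t go in ahead of the preempted process */
        while (arrived < NP && arrival[arrived] <= t) queue[tail++] = arrived++;
        if (remain[p] > 0) queue[tail++] = p;
        else { finish[p] = t; done++; }
    }

    double total = 0;
    for (int i = 0; i < NP; i++) {
        int wait = finish[i] - arrival[i] - burst[i];
        total += wait;
        printf("P%d: finish=%2d wait=%2d\n", i + 1, finish[i], wait);
    }
    printf("average waiting time = %.2f ms\n", total / NP);
    return 0;
}

Changing the queue convention (e.g., requeueing the preempted process ahead of simultaneous arrivals) shifts a few completion times slightly, but not the overall ranking of the four algorithms.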

2. (A second worked example with SJF, Priority, and Round Robin Gantt charts.)
