OS - Unit 2
Process Management
Prachi Shah
IT Dept.
BVM
Contents
Processes:
◦ Definition, Process Model, Process Creation & Termination, Process Hierarchies,
Process States, Implementation of process.
Threads:
◦ Concept of multithreads, Thread Usage, Thread Model, Thread Implementation.
Interprocess Communication:
◦ Race Conditions, Critical Section, Mutual Exclusion, Hardware Solution, Strict
Alternation, Peterson’s Solution, The Producer Consumer Problem, Semaphores,
Event Counters, Monitors, Message Passing, Classical IPC Problems: Readers &
Writers Problem, Dining Philosophers Problem.
Process Scheduling:
◦ Introduction to Scheduling, Definition, Scheduling Objectives, Scheduling Algorithm
Goals, Scheduling in Batch Systems, Scheduling in Interactive Systems, Scheduling
in Real-Time Systems, Thread Scheduling.
What is a process?
• System initialization is one of the events that causes processes to be created.
• A process has exactly one parent but may have zero, one, or more
children
The Thread Model (2)
The Thread Model (3)
How do threads work?
• Start with one thread in a process
• Thread contains (id, registers, attributes)
• Use library calls to create new threads and to manage them (a pthreads sketch follows)
thread_create: includes a parameter indicating what procedure to run
thread_exit: causes the calling thread to exit and disappear (it can no longer be scheduled)
thread_join: the calling thread blocks until another thread finishes its work
thread_yield: allows a thread to voluntarily give up the CPU to let
another thread run
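A minimal POSIX threads (pthreads) sketch of these calls; the worker procedure and its message are made up for illustration, and thread_yield maps to sched_yield on POSIX systems:

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Procedure the new thread runs (the parameter passed to thread_create). */
void *worker(void *arg)
{
    printf("worker says: %s\n", (const char *)arg);
    sched_yield();                      /* thread_yield: give up the CPU voluntarily */
    return NULL;                        /* returning here acts like thread_exit */
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, "hello");  /* thread_create(procedure, argument) */
    pthread_join(tid, NULL);            /* thread_join: block until worker finishes */
    return 0;
}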
Implementing Threads in User Space
• If thread blocks, run time system stores thread info in table and finds
new thread to run.
• State saving and scheduling are invoked locally, which is faster than a kernel call (no
trap, no cache flush)
Threads in user space-the bad
• Can’t let a thread execute a system call that blocks, because it would block
all of the other threads in the process
• No elegant solution
o Hack the system library to avoid blocking calls
o Could use the select system call (available in some versions of Unix) to check
in advance whether a call would block
• Threads don’t voluntarily give up the CPU
o Could arrange a periodic clock interrupt to give control to the run-time system
o The overhead of this solution is a problem…..
Threads in kernel space-the good
• The kernel keeps a thread table analogous to the user-space table
• If a thread blocks, the kernel just picks another one to run
Not necessarily from the same process!
INTERPROCESS COMMUNICATION (IPC)
• Three problems:
How one process can pass information to another
Making sure two processes do not get in each other’s way
Proper sequencing when dependencies are present
• Conditions for a good solution:
Ensure that two processes can’t be in the critical region at the same
time
No assumptions about speeds or number of CPUs
No process outside its critical region can block other processes
No starvation: no process waits forever to enter its critical region
What we are trying to do
First attempts-Busy Waiting
1. Disabling interrupts
2. Lock variables
3. Strict alternation
4. Peterson's solution
5. The TSL instruction
1. Disabling Interrupts
• Idea: process disables interrupts, enters CR (Critical
Region), enables interrupts when it leaves CR
• Problems:
A process might never re-enable interrupts, crashing the
system
Won’t work on multi-core chips, as disabling interrupts
only affects one CPU at a time
Good for the kernel, but not appropriate for user processes
2. Lock variables
Problems:
• A process can read the lock as 0 and be interrupted before setting it to 1; a second
process then also reads 0, and both enter the critical region (the same race condition as before)
3. Strict alternation
Problems:
• If one process is outside the CR and it is its turn, the other process
has to wait until the outside process finishes both its outside AND inside (CR) work
4. Peterson's Solution
• Combines a turn variable with per-process interest flags: a process calls enter_region
before entering its critical region and leave_region when it leaves. A sketch follows.
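A C sketch of Peterson's two-process algorithm, following the usual textbook presentation; processes are numbered 0 and 1:

#define FALSE 0
#define TRUE  1
#define N     2                          /* number of processes */

int turn;                                /* whose turn is it? */
int interested[N];                       /* all values initially FALSE */

void enter_region(int process)           /* process is 0 or 1 */
{
    int other = 1 - process;             /* number of the other process */
    interested[process] = TRUE;          /* show that this process is interested */
    turn = process;                      /* set the turn variable */
    while (turn == process && interested[other] == TRUE)
        ;                                /* busy-wait until it is safe to enter */
}

void leave_region(int process)
{
    interested[process] = FALSE;         /* indicate departure from the critical region */
}

A process brackets its critical region with enter_region(i) and leave_region(i).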
5. TSL Instruction (Hardware solution)
• TSL atomically reads lock into a register and stores a NONZERO VALUE in lock
(e.g. the process number)
• Mutex: a variable which can be in one of two states: unlocked (0) or locked (any
nonzero value)
• Easy to implement
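TSL itself is a hardware instruction, but its effect can be sketched in C using the C11 atomic test-and-set operation (sched_yield is used here only to be polite while busy-waiting):

#include <stdatomic.h>
#include <sched.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = unlocked, set = locked */

void enter_region(void)
{
    /* Atomically set the flag and return its previous value, like TSL. */
    while (atomic_flag_test_and_set(&lock))
        sched_yield();                        /* it was already locked: keep trying */
}

void leave_region(void)
{
    atomic_flag_clear(&lock);                 /* store the unlocked value back */
}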
Event Counters
• An event counter is similar to the “next customer number” used in systems where each
customer takes a sequentially numbered ticket and waits for that number
to be called
Operations on Event Counters
read(E): return the count associated with event counter E.
advance(E): atomically increment the count of E by 1.
await(E,v):
◦ if E >= v, continue.
◦ Otherwise, block until E >= v.
More about Event Counters
Two event counters are used, in and out, each of which has an initial
count of zero.
Each item has a unique location in the buffer (based on its sequence
number), so mutually exclusive access to the buffer is not required.
Producer-Consumer with Event Counters (1)
#define N 100
#define TRUE 1
typedef int event_counter; /* await, advance and the buffer routines are assumed primitives */
event_counter in = 0; /* counts items inserted into buffer */
event_counter out = 0; /* counts items removed from buffer */
void producer(void)
{
int item, sequence = 0;
while(TRUE) {
produce_item(&item);
sequence = sequence + 1; /* counts items produced */
await(out, sequence - N); /* wait for room in buffer */
enter_item(item); /* insert into buffer */
advance(&in); /* inform consumer */
}
}
Producer-Consumer with Event Counters (2)
void consumer(void)
{
int item, sequence = 0;
while(TRUE) {
sequence = sequence + 1; /* count items consumed */
await(in, sequence); /* wait for item */
remove_item(&item); /* take item from buffer */
advance(&out); /* inform producer */
consume_item(item);
}
}
[D] Monitors
• It is easy to make a mess of things using semaphores and mutexes; little errors cause
disasters.
• Producer-consumer with semaphores: interchanging the two downs in the
producer code causes deadlock
• A monitor is a language construct which enforces mutual exclusion
and provides a blocking mechanism
• A monitor consists of {procedures, data structures, and variables}
grouped together in a “module”
• A process can call the procedures inside the monitor, but cannot
directly access the data inside the monitor
• C does not have monitors
More about Monitors
• In a monitor it is the job of the compiler, not the programmer, to
enforce mutual exclusion.
• Only one process at a time can be active in the monitor
When a process calls a monitor procedure, the first thing done is to check whether another
process is currently in the monitor. If so, the calling process is suspended.
• Blocking must be enforced as well –
use condition variables
use wait and signal operations on condition variables
• When a process in the monitor discovers that it can’t continue (e.g. the buffer is full), it does a
wait on a condition variable (e.g. full), causing the calling process (e.g. the
producer) to block
• Another process is then allowed to enter the monitor (e.g. the
consumer). This process can issue a signal on that condition variable, causing the blocked
process (the producer) to wake up
• The process issuing the signal must then leave the monitor immediately
Monitor – Syntax:
Producer Consumer Monitor
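C has no monitor construct, but the same discipline can be approximated with a pthread mutex (for the implicit mutual exclusion) and condition variables (for wait/signal); a minimal sketch using a simple counter-based buffer of size N:

#include <pthread.h>

#define N 100

static int buffer[N], count = 0;                         /* shared buffer state */
static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

void insert(int item)                                    /* "monitor procedure" for the producer */
{
    pthread_mutex_lock(&monitor_lock);                   /* only one thread inside at a time */
    while (count == N)
        pthread_cond_wait(&not_full, &monitor_lock);     /* buffer full: wait, releasing the lock */
    buffer[count++] = item;
    pthread_cond_signal(&not_empty);                     /* wake a waiting consumer */
    pthread_mutex_unlock(&monitor_lock);
}

int remove_item(void)                                    /* "monitor procedure" for the consumer */
{
    pthread_mutex_lock(&monitor_lock);
    while (count == 0)
        pthread_cond_wait(&not_empty, &monitor_lock);    /* buffer empty: wait */
    int item = buffer[--count];
    pthread_cond_signal(&not_full);                      /* wake a waiting producer */
    pthread_mutex_unlock(&monitor_lock);
    return item;
}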
Monitors: Good vs. Bad
• The good: no messy direct programmer control of semaphores
• The bad: you need a language which supports monitors (e.g. Java)
• OSes are written in C
Reality
• Monitors and semaphores only work with shared memory
• They don’t work for multiple CPUs which have their own private memory,
e.g. workstations on an Ethernet
[E] Message Passing
• Information exchange between processes, possibly on different machines
• Two primitives
send(destination, &message)
receive(source, &message)
• Lots of design issues
Message loss
o acknowledgements and timeouts deal with loss
Authentication – how does a process know the identity of the sender?
...
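One concrete realization of the two primitives, sketched with POSIX message queues (the queue name /unit2_demo and the fixed buffer size are assumptions for illustration):

#include <fcntl.h>
#include <mqueue.h>
#include <string.h>

#define QUEUE_NAME "/unit2_demo"            /* hypothetical queue name */

void sender(void)
{
    mqd_t q = mq_open(QUEUE_NAME, O_CREAT | O_WRONLY, 0600, NULL);
    const char *msg = "hello";
    mq_send(q, msg, strlen(msg) + 1, 0);    /* send(destination, &message) */
    mq_close(q);
}

void receiver(void)
{
    char buf[8192];                         /* must be at least the queue's mq_msgsize */
    mqd_t q = mq_open(QUEUE_NAME, O_RDONLY);
    mq_receive(q, buf, sizeof(buf), NULL);  /* receive(source, &message) */
    mq_close(q);
}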
The Readers and Writers Problem (Continued)
...
PROCESS SCHEDULING
1. Scheduling in Batch Systems
1) First-come first-served
2) Shortest job first (non-preemptive)
3) Shortest remaining time next (shortest job first,
preemptive)
1.1 First come first serve
• Easy to implement
• Won’t work well for a varied workload
o A compute-bound process (long CPU burst) runs in front of I/O-
bound processes (short CPU bursts), delaying them badly
1.2 Shortest Job First
• Need to know run times in advance
• Non-preemptive algorithm
• Provably optimal when all jobs are available simultaneously
• Running the shortest job first minimizes the mean turnaround time (worked example below)
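For instance, suppose four jobs arrive together with run times of 8, 4, 4 and 4 time units. Run in that order (first-come first-served), they finish at times 8, 12, 16 and 20, for a mean turnaround of 14. Run shortest-first (4, 4, 4, 8), they finish at 4, 8, 12 and 20, for a mean turnaround of 11.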
1.3 Shortest Remaining Time Next
• Pick the job whose remaining run time is shortest to execute next
• Preemptive: compare the run time of a newly arrived job to the
remaining time of the currently running job
• Need to know the run times of jobs in advance
2. Scheduling in Interactive Systems
1) Round robin
2) Priority
3) Multiple Queues
4) Shortest Process Next
5) Guaranteed Scheduling
6) Lottery Scheduling
7) Fair Share Scheduling
2.1 Round-Robin Scheduling
• Each runnable process is given a time quantum; when the quantum expires (or the
process blocks), the CPU is switched to the next process in the list
• The main design issue is the length of the quantum: too short wastes time on
context switches, too long hurts response time (see the arithmetic below)
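As a rough illustration of the quantum trade-off: if a context switch costs 1 ms and the quantum is 4 ms, then 1 ms out of every 5 ms, i.e. 20% of the CPU, is wasted on switching; with a 100 ms quantum the overhead drops to about 1%, but a user at the end of a long ready queue may wait seconds for a response.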
2.2 Priority Scheduling
• Run jobs according to their priority
• Priorities can be assigned statically or adjusted dynamically
• Typically RR is combined with priority: each priority class uses
RR internally
2.3 Multiple Queues with Priority Scheduling
• Processes in the highest class run for one quantum, the
second-highest class gets 2 quanta, the next 4, and so on
o If a process finishes within its quanta, great. Otherwise it is bumped down to the
next-lower priority class, and so on
2.4 Shortest Process Next
• Cool idea if you know the remaining run times
• Exponential smoothing (aging) can be used to estimate a job’s run time (see below)
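A common aging scheme: if T0 is the previous estimate and T1 the most recently measured run time, the new estimate is aT0 + (1 − a)T1 for some weighting factor a. With a = 1/2 the successive estimates become T0, T0/2 + T1/2, T0/4 + T1/4 + T2/2, ..., so old measurements fade away quickly.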
2.5 Guaranteed Scheduling
• Promise:
If there are n users logged in while you are working, you will
receive about 1/n of the CPU power.
Similarly, on a single-user system with n processes running, all
things being equal, each one should get 1/n of the CPU cycles.
• The system must keep track of how much CPU each process has had
since its creation
• Then compute the amount of CPU each one is entitled to, i.e. the time
since creation divided by n
• Compute the ratio of actual CPU time consumed to CPU time entitled.
• The algorithm is then to run the process with the lowest ratio until its
ratio has moved above that of its closest competitor (a sketch follows).
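A minimal sketch of the selection step, assuming a hypothetical process record that tracks CPU used and time alive:

/* Hypothetical process record; the fields are assumptions for illustration. */
struct proc {
    double cpu_used;     /* CPU time consumed since creation */
    double alive_time;   /* wall-clock time since creation */
};

/* Run the process whose ratio of CPU consumed to CPU entitled is lowest. */
int pick_guaranteed(struct proc p[], int n)
{
    int best = 0;
    double best_ratio = -1.0;
    for (int i = 0; i < n; i++) {
        double entitled = p[i].alive_time / n;      /* each process is entitled to 1/n */
        double ratio = p[i].cpu_used / entitled;    /* actual vs. entitled CPU time */
        if (best_ratio < 0 || ratio < best_ratio) {
            best_ratio = ratio;
            best = i;
        }
    }
    return best;
}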
2.6 Lottery Scheduling
• Tickets are given to processes for various resources
• Random selection of tickets is done by scheduler
• Hold lottery for CPU time several times a second
• Can enforce priorities by allowing more tickets for “more
important” processes
• E.g. a process holding 20 out of 100 tickets has a 20% chance of winning each lottery
(a sketch of one lottery draw follows)
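A sketch of one lottery draw, assuming a hypothetical process table where each entry records its ticket count:

#include <stdlib.h>

struct proc { int tickets; };                 /* hypothetical process entry */

/* Return the index of the process holding the winning ticket. */
int hold_lottery(struct proc p[], int nproc, int total_tickets)
{
    int winner = rand() % total_tickets;      /* pick a winning ticket number */
    for (int i = 0; i < nproc; i++) {
        winner -= p[i].tickets;               /* walk past each process's tickets */
        if (winner < 0)
            return i;                         /* this process holds the winning ticket */
    }
    return nproc - 1;                         /* unreachable if totals are consistent */
}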
2.7 Fair-share Scheduling
• Until now, with round robin or equal priorities, the scenario was:
If user 1 starts up 9 processes and user 2 starts up 1 process,
user 1 will get 90% of the CPU and user 2 will get only 10% of it
• To prevent this situation, some systems take into account who owns a
process before scheduling it.
• In this model, each user is allocated some fraction of the CPU and the
scheduler picks processes in such a way as to enforce it
• For example, consider a system with two users, each of whom has been
promised 50% of the CPU.
• User 1 has four processes, A, B, C, and D, and user 2 has only one
process, E.
• If round-robin scheduling is used: A E B E C E D E A E B E C E D E . .
• On the other hand, if user 1 is entitled to twice as much CPU time as
user 2, we might get: A B E C D E A B E C D E ...
3. Real Time Scheduling
• Time plays an essential role; For example,
in a disc player system, converting bits into music within a very tight time interval
patient monitoring in a hospital intensive-care unit
the autopilot in an aircraft
robot control in an automated factory
• Two categories: (i) Hard real time and (ii) soft real time
Hard: robot control in a factory
Soft: CD player
• Real-time behavior is achieved by dividing the program into a number of
processes, each of whose behavior is predictable and known in advance.
• When an external event is detected, it is the job of the scheduler to schedule
the processes in such a way that all deadlines are met
3. Real Time Scheduling (Continued)
• Events can be periodic or aperiodic
• If there are m periodic events, and
• event i occurs with period Pi and requires Ci seconds of CPU time to handle
each occurrence, then the load can only be handled if
C1/P1 + C2/P2 + … + Cm/Pm ≤ 1
• A real-time system meeting this criterion is said to be schedulable (worked example below)
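For example, with three periodic events of periods 100, 200 and 500 ms requiring 50, 30 and 100 ms of CPU time per occurrence, the load is 50/100 + 30/200 + 100/500 = 0.5 + 0.15 + 0.2 = 0.85 ≤ 1, so the system is schedulable.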