
Unit -2

Process Management

Prachi Shah
IT Dept.
BVM
Contents
 Processes:
◦ Definition, Process Model, Process Creation & Termination, Process Hierarchies,
Process States, Implementation of process.
 Threads:
◦ Concept of multithreads, Thread Usage, Thread Model, Thread Implementation.
 Interprocess Communication:
◦ Race Conditions, Critical Section, Mutual Exclusion, Hardware Solution, Strict
Alternation, Peterson’s Solution, The Producer Consumer Problem, Semaphores,
Event Counters, Monitors, Message Passing, Classical IPC Problems: Readers &
Writers Problem, Dining Philosophers Problem.
 Process Scheduling:
◦ Introduction to Scheduling, Definition, Scheduling Objectives, Scheduling Algorithm
Goals, Scheduling in Batch Systems, Scheduling in Interactive Systems, Scheduling
in Real-Time Systems, Thread Scheduling.
What is a process?

• An instance of an executing program, including the current values of the
registers, the variables, and the program counter

• It has a program, input, output and a state

• Why is this idea necessary?

 A computer manages many computations concurrently; we need an
abstraction to describe how it does so
 Example of process handling – recipe vs. first-aid book
Process Model - Multiprogramming

(a) Multiprogramming of four programs.


(b) Conceptual model of four independent, sequential processes.
(c) Only one program is active at once.
Process Creation

Events which can cause process creation

• System initialization.

• Execution of a process creation system call by a running process.

• A user request to create a new process.

• Initiation of a batch job.


Process Termination
Events which cause process termination:

• Normal exit (voluntary)

• Error exit (voluntary) – file does not exist

• Fatal error (involuntary) – error caused by some process

• Killed by another process (involuntary)


Process Hierarchies

• A process has exactly one parent but zero or more children

• Process group (UNIX) – a signal can be delivered to all members of a
group; each process can catch the signal, ignore it, or be killed by it

• In Windows, when a process is created, the parent is given a special
token (called a handle) that it can use to control the child

• However, the parent is free to pass this token to some other process
Process States

A process can be in running, blocked, or ready state. Transitions


between these states are as shown.
Implementation of Processes (1)

The lowest layer of a process-structured operating system


handles interrupts and scheduling. Above that layer are
sequential processes.
Implementation of Processes (2)

Some of the fields of a typical process table entry.


OS processes an interrupt

Skeleton of what the lowest level of the operating system


does when an interrupt occurs.
Threads

• Multithreading - The ability of an OS to support


multiple, concurrent paths of execution within a single
process

• The unit of resource ownership is referred to as a


process or task

• The unit of dispatching is referred to as a thread or


lightweight process
Single Threaded Approaches
 A single execution path per process, in which the concept of a
thread is not recognized, is referred to as a single-threaded
approach

 MS-DOS and some versions of UNIX supported only this type of
process.
Multithreaded Approaches
 The right half of Figure 4.1 depicts multithreaded approaches
 A Java run-time environment is a system of one process with
multiple threads; Windows and some UNIXes support multiple
multithreaded processes.
Thread model types
Threads are like processes
• Have same states
 Running
 Ready
 Blocked
• Have their own stacks –same as processes
• Stacks contain frames for (un-returned) procedure calls
 Local variables
 Return address to use when procedure comes back

• Processes are used to group resources together; threads are the


entities scheduled for execution on the CPU.
The Thread Model (1)

(a) Three processes each with one thread


(b) One process with three threads

The Thread Model (2)

Items shared by all threads in a process


Items private to each thread

The Thread Model (3)

Each thread has its own stack


Thread Usage (1)
• Parallel processes
• Lighter-weight
• Easy to create and destroy
• Performance argument
• Systems with multiple CPU
(where real parallelism is
possible)

A word processor with three threads


Thread Usage (2)

A multithreaded Web server


Thread Usage (3)

Rough outline of code for previous slide


(a) Dispatcher thread
(b) Worker thread

How do threads work?
• Start with one thread in a process
• Thread contains (id, registers, attributes)
• Use library call to create new threads and to use threads
 Thread_create includes parameter indicating what procedure to run
 Thread_exit causes thread to exit and disappear (can’t schedule it)
 Thread_join Thread blocks until another thread finishes its work
 Thread_yield allows a thread to voluntarily give up the CPU to let
another thread run
Implementing Threads in User Space

A user-level threads package


Implementing Threads in the Kernel

A threads package managed by the kernel


Threads in user space-the good

• Thread table contains info about threads (program counter, stack


pointer...) so that run time system can manage them

• If thread blocks, run time system stores thread info in table and finds
new thread to run.

• State save and scheduling are invoked faster than a kernel call (no
trap, no cache flush)
Threads in user space-the bad

• Can’t let thread execute system call which blocks because it will block
all of the other threads
• No elegant solution
o Hack system library to avoid blocking calls
o Could use the select system call (in some versions of UNIX) to check
in advance whether a call would block
• Threads don’t voluntarily give up CPU
o Could interrupt periodically to give control to run time system
o Overhead of this solution is a problem…..
Threads in kernel space-the good
• Kernel keeps same thread table as user table
• If thread blocks, kernel just picks another one
 Not necessarily from same process!

Threads in kernel space-the bad


• Expensive to manage the threads in the kernel and takes
valuable kernel space
• Signals are sent to processes, not to threads (what if two threads
register for same signal)
Hybrid Implementations

Multiplexing user-level threads onto kernel- level threads

INTERPROCESS COMMUNICATION (IPC)
• Three problems:

1. How to actually pass information


2. How to deal with process conflicts (2 airline reservations for
same seat)
3. How to do correct sequencing when dependencies are
present (aim the gun before firing it)

• SAME ISSUES FOR THREADS AS FOR PROCESSES-SAME


SOLUTIONS AS WELL (only for 2nd and 3rd point)
Race Conditions

in is a shared variable containing a pointer to the next free slot in the
spooler directory

out is a shared variable pointing to the next file to be printed
How to avoid races
• Mutual exclusion – only one process at a time can use a
shared variable/file
• Critical regions – part of the program where the shared
memory is accessed (which leads to races)

• Solution:
 Ensure that two processes can’t be in the critical region at the same
time
 No assumptions about speeds or number of CPU’s
 No process outside critical region can block other processes
 No starvation-no process waits forever to enter critical region
What we are trying to do
First attempts-Busy Waiting

A list of proposals to achieve mutual exclusion

1. Disabling interrupts
2. Lock variables
3. Strict alternation
4. Peterson's solution
5. The TSL instruction
1. Disabling Interrupts
• Idea: process disables interrupts, enters CR (Critical
Region), enables interrupts when it leaves CR

• Problems:
 Process might never enable interrupts, crashing
system
 Won’t work on multi-core chips, as disabling interrupts only
affects one CPU at a time
 Good for kernel, but not appropriate for user processes
2. Lock variables

• A software solution-everyone shares a lock (variable)


 When lock is 0, process turns it to 1 and enters CR
 When exit CR, turn lock to 0
• Problem: Race condition
• Same situation as in the spooler example
3. Strict Alternation

Problems:

• Employs busy waiting-while waiting for the CR, a process spins

• If one process is outside the CR and it is its turn, the other process
has to wait until the first finishes both its outside AND inside (CR) work
4. Peterson's Solution

5. TSL Instruction (Hardware solution)

• TSL reads lock into register and stores NON ZERO VALUE in lock
(e.g. process number)

• Instruction is atomic: done by freezing access to bus line (bus


disable)
5.1 Using TSL

TSL is atomic. Memory bus is locked until it is finished executing.


5.2 XCHG instruction
What’s wrong with Peterson, TSL, XCHG?
Problems:
• Busy waiting – waste of CPU time
• Priority inversion problem
 A high-priority process H busy-waits forever while the low-priority
process L, which is never scheduled, cannot leave its CR

Solution: Replace busy waiting by blocking calls


• Sleep blocks process
• Wakeup unblocks process
• Both system calls take one parameter: a memory address, used to
match each sleep with the corresponding wakeup of the other process
[A] The Producer-Consumer Problem
(or Bounded-Buffer Problem)
The problem with sleep and wake-up calls
• Empty buffer, count==0
• Consumer gets replaced by producer before it goes to sleep
• Produces something, count++, sends wakeup to consumer
• Consumer not asleep, ignores wakeup, thinks count == 0, goes to
sleep
• Producer fills buffer, goes to sleep
• P and C sleep forever
• So the problem is lost wake-up calls
[B] Semaphores
• Semaphore is an integer variable
• Used to count pending wakeups for sleeping processes
• Two operations, down and up
• Down checks semaphore.
 If not zero, decrements semaphore.
 If zero, process goes to sleep
• Up increments semaphore.
 If more than one process is asleep on it, one is chosen at random and
allowed to complete its down and enter the critical region
• ATOMIC IMPLEMENTATION-interrupts disabled
Producer Consumer with Semaphores
• Semaphores solve the lost-wakeup problem

• 3 semaphores: full, empty and mutex


• Full counts full slots (initially 0)
• Empty counts empty slots (initially N)
• Mutex guarantees mutually exclusive access to the buffer holding the
items produced and consumed
• Semaphores that are initialized to 1 and used by two or more
processes to ensure that only one of them can enter its critical region
at the same time are called binary semaphores
Producer Consumer with semaphores
[C] Mutexes
• Don’t always need counting operation of semaphore, just mutual
exclusion part

• Mutex: variable which can be in one of two states: unlocked (0) or
locked (1 or any other value)

• Easy to implement

• Good for using with thread packages in user space


 A thread (process) that wants access to the CR calls mutex_lock.
 If the mutex is unlocked, the call succeeds; otherwise the thread blocks until the
thread in the CR does a mutex_unlock.
User space code for mutex lock and unlock
More about Mutex
• Enter region code for TSL results in busy waiting (process keeps
testing lock)
 Clock runs out, process is bumped from CPU

• Use thread_yield to give up the CPU to another thread.


 Thread yield is fast (in user space).
 No kernel calls needed for mutex_lock or mutex_unlock.
[D] Event Counters
 An event counter is another data structure that can be used for process
synchronization.

 Like a semaphore, it has an integer count and a set of waiting process
identifications.

 Unlike semaphores, the count variable only increases.

 It is similar to the “next customer number” used in systems where each
customer takes a sequentially numbered ticket and waits for that number
to be called
Operations on Event Counters
 read(E): return the count associated with event counter E.

 advance(E): atomically increment the count associated with


event counter E.

 await(E, v):
◦ If E.count >= v, then continue.
◦ Otherwise, block until E.count >= v.
More about Event Counters
 Two event counters are used, in and out, each of which has an initial
count of zero.

 Each process includes a private sequence variable which indicates
the sequence number of the last item it has produced or will
consume.

 Items are sequentially produced and consumed.

 Each item has a unique location in the buffer (based on its sequence
number), so mutually exclusive access to the buffer is not required.
Producer-Consumer with Event Counters (1)
#define N 100
#define TRUE 1
typedef int event_counter;
event_counter in = 0; /* counts inserted items */
event_counter out = 0; /* items removed from buffer */

void producer(void)
{
int item, sequence = 0;
while(TRUE) {
produce_item(&item);
sequence = sequence + 1; /* counts items produced */
await(out, sequence - N); /* wait for room in buffer */
enter_item(item); /* insert into buffer */
advance(&in); /* inform consumer */
}
}
Producer-Consumer with Event Counters (2)
void consumer(void)
{
int item, sequence = 0;

while(TRUE) {
sequence = sequence + 1; /* count items consumed */
await(in, sequence); /* wait for item */
remove_item(&item); /* take item from buffer */
advance(&out); /* inform producer */
consume_item(item);
}
}
[E] Monitors
• Easy to make a mess of things using Mutexes. Little errors cause
disasters.
• Producer consumer with semaphores – interchange two downs in
producer code causes deadlock
• Monitor is a language construct which enforces mutual exclusion
and blocking mechanism
• Monitor consists of {procedures, data structures, and variables}
grouped together in a “module”
• A process can call procedures inside the monitor, but cannot
directly access the stuff inside the monitor
• C does not have monitors
More about Monitors
• In a monitor it is the job of the compiler, not the programmer to
enforce mutual exclusion.
• Only one process at a time can be in the monitor
 When a process calls a monitor, the first thing done is to check if another
process is in the monitor. If so, calling process is suspended.
• Need to enforce blocking as well –
 use condition variables
 use wait and signal operations on condition variables
• Monitor discovers that it can’t continue (e.g. buffer is full), issues a
signal on a condition variable (e.g. full) causing process (e.g.
producer) to block
• Another process is allowed to enter the monitor (e.g. the
consumer). This process can issue a signal, causing the blocked
process (producer) to wake up
• Process issuing signal leaves monitor
Monitor – Syntax:
Producer Consumer Monitor
Monitors: Good vs. Bad
• The good-No messy direct programmer control of semaphores
• The bad- You need a language which supports monitors (Java).
• OS’s are written in C

Semaphores: Good vs. Bad


• The good-Easy to implement
• The bad- Easy to mess up

Reality
• Monitors and semaphores only work for shared memory
• Don’t work for multiple CPU’s which have their own private memory,
e.g. workstations on an Ethernet
[F] Message Passing
• Information exchange between machines
• Two primitives
 Send(destination, &message)
 Receive(source, &message)
• Lots of design issues
 Message loss
o acknowledgements, time outs deal with loss
 Authentication – how does a process know the identity of the sender

Producer Consumer Using Message Passing


• Consumer sends N empty messages to producer
• Producer fills message with data and sends to consumer
Producer-Consumer Problem
with Message Passing (1)
Producer-Consumer Problem
with Message Passing (2)
Message Passing Approaches
• Have unique ID for address of recipient process
• Mailbox
o In producer consumer, have one for the producer and one for the
consumer
• No buffering-sending process blocks until the receive
happens. Receiver blocks until send occurs (Rendezvous)
• Usage: MPI
[G] Barriers

Barriers are intended for synchronizing groups of processes


Often used in scientific computations.
Classical IPC Problems:
(i) Dining Philosophers Problem

Lunch time in the Philosophy Department.


Dining Philosophers Problem – Solution 1
When a philosopher gets hungry, she tries to acquire her left and
right forks, one at a time, in either order.
If successful in acquiring two forks, she eats for a while, then puts
down the forks, and continues to think.
Problem:
◦ If all five philosophers take their left forks simultaneously, none will be
able to take their right forks, and there will be a deadlock.
Dining Philosophers Problem – Solution 2
After taking the left fork, the program checks to see if the right
fork is available.
If it is not, the philosopher puts down the left one, waits for some
time, and then repeats the whole process.
Problem:
◦ All the philosophers could start the algorithm simultaneously, each
picking up her left fork, seeing that her right fork is not available,
putting down her left fork, waiting, and then repeating forever.
◦ The above situation, in which all the programs continue to run indefinitely
but fail to make any progress, is called starvation.
Dining Philosophers Problem – Solution 3
Protect the five statements following the call to think with a binary
semaphore.
Before starting to acquire forks, a philosopher would do a down
on mutex. After replacing the forks, she would do an up on mutex.
Problem:
◦ Only one philosopher can be eating at any instant. With five forks
available, we should be able to allow two philosophers to eat at the same
time.
Dining Philosophers Problem – Solution 4
 This solution allows the maximum parallelism for an arbitrary number of
philosophers.
 It uses an array, state, to keep track of whether a philosopher is eating,
thinking, or hungry (trying to acquire forks).
 A philosopher may only move into eating state if neither neighbor is eating.
 Philosopher i's neighbors are defined by the macros LEFT and RIGHT.
 The program uses an array of semaphores, one per philosopher, so hungry
philosophers can block if the needed forks are busy.
 Note that each process runs the procedure philosopher as its main code, but
the other procedures, take_forks, put_forks, and test, are ordinary procedures
and not separate processes.
Dining Philosophers Problem – Solution 4
Dining Philosophers Problem – Solution 4 (Continued)
Classical IPC Problems:
(ii) The Readers and Writers Problem

...
The Readers and Writers Problem (Continued)
...
PROCESS SCHEDULING

Who cares about scheduling algorithms?


• Batch servers
• Time sharing machines
• Networked servers
• You care if you have a bunch of users and/or if
the demands of the jobs differ

Who doesn’t care about scheduling algorithms?


• PC’s
o One user who only competes with himself for the CPU
Scheduling – Process Behavior

Bursts of CPU usage alternate with periods of waiting for I/O.


(a) A CPU-bound process. (b) An I/O-bound process.
When to make scheduling decisions
 Schedule when
• New process is created (run parent or child)
• A process exits
• A process blocks (e.g. on a semaphore)
• I/O interrupt happens

Two types of Process Scheduling:


 Preemptive scheduling – picks a process and lets it run for at most
some fixed time; a clock interrupt occurs at the end of the time
interval and the scheduler picks a new process
 Nonpreemptive scheduling – picks a process to run and then just lets
it run until it blocks or voluntarily releases the CPU
Categories of Scheduling Algorithms

1. Batch (accounts receivable, payroll…..)


2. Interactive (servers)
3. Real time

Note: It depends on the use to which the CPU is being put


Scheduling Algorithm Goals
Note:
Prepare the sums of Scheduling
algorithms from the notes
1. Scheduling in Batch Systems

1) First-come first-served
2) Shortest job first (Non-preemptive)
3) Shortest remaining time next (Shortest job first-
preemptive)
1.1 First come first serve
• Easy to implement
• Won’t work well for a varied workload
o A compute-bound process with long CPU bursts can run in front of
I/O-bound processes that need the CPU only briefly, delaying them
1.2 Shortest Job First
• Need to know run times in advance
• Non pre-emptive algorithm
• Provably optimal
• Smallest time has to come first to minimize the mean turnaround time
1.3 Shortest Remaining Time Next
• Pick job with shortest time to execute next
• Pre-emptive: compare running time of new job to
remaining time of existing job
• Need to know the run times of jobs in advance
2. Scheduling in Interactive Systems

1) Round robin
2) Priority
3) Multiple Queues
4) Shortest Process Next
5) Guaranteed Scheduling
6) Lottery Scheduling
7) Fair Share Scheduling
2.1 Round-Robin Scheduling

Process list - before and after

• Quantum – time interval assigned to each process, during which it is


allowed to run
• Quantum too short => too many process switches
• Quantum too long => poor response time for short interactive requests
• Don’t need to know run times in advance
2.2 Priority Scheduling
• Run jobs according to their priority
• Can be static or can do it dynamically
• Typically combine RR with priority. Each priority class uses
RR inside
2.3 Multiple Queues with Priority Scheduling
• Processes in Highest class run for one quantum,
second highest gets 2 quanta…..
o If highest finishes during quantum, great. Otherwise bump it to
second highest priority and so on

• Consequently, shortest (high priority) jobs get out of


town first
2.4 Shortest Process Next
• Cool idea if you know the remaining times
• Exponential smoothing can be used to estimate a job’s run time:

new estimate = a · (previous estimate) + (1 − a) · (latest measured run time)

computed over successive runs of the same job
• The technique of estimating the next value in a series by taking
the weighted average of the current measured value and the
previous estimate is sometimes called aging.
• Aging is especially easy to implement when a = 1/2.
• All that is needed is to add the new value to the current estimate
and divide the sum by 2 (by shifting it right 1 bit).
2.5 Guaranteed Scheduling
• Promise:
 If there are n users logged in while you are working, you will
receive about 1/n of the CPU power.
 Similarly, on a single-user system with n processes running, all
things being equal, each one should get 1/n of the CPU cycles.
• The system must keep track of how much CPU each process has had
since its creation
• Then compute the amount of CPU each one is entitled to, i.e. the time
since creation divided by n
• Compute the ratio of actual CPU time consumed to CPU time entitled.
• The algorithm is then to run the process with the lowest ratio until its
ratio has moved above its closest competitor.
2.6 Lottery Scheduling
• Tickets are given to processes for various resources
• Random selection of tickets is done by scheduler
• Hold lottery for CPU time several times a second
• Can enforce priorities by allowing more tickets for “more
important” processes
• E.g. holding 20 of 100 tickets = 20% chance of winning each lottery

• Cooperating processes may exchange tickets if they wish

• For example, in a video server, if frames are needed at 10, 20


and 25 frames/sec; allocate 10, 20 and 25 tickets respectively
to the processes
2.7 Fair-share Scheduling
• Until now, with round robin or equal priorities, the scenario was:
 If user 1 starts up 9 processes and user 2 starts up 1 process,
user 1 will get 90% of the CPU and user 2 will get only 10% of it
• To prevent this situation, some systems take into account who owns a
process before scheduling it.
• In this model, each user is allocated some fraction of the CPU and the
scheduler picks processes so as to enforce that fraction
• For example, consider a system with two users, each of whom has been
promised 50% of the CPU.
• User 1 has four processes, A, B, C, and D; user 2 has only one
process, E.
• If round-robin scheduling is used: A E B E C E D E A E B E C E D E ...
• On the other hand, if user 1 is entitled to twice as much CPU time as
user 2, we might get: A B E C D E A B E C D E ...
3. Real Time Scheduling
• Time plays an essential role; For example,
 in a disc player system, converting bits into music within a very tight time interval
 patient monitoring in a hospital intensive-care unit
 the autopilot in an aircraft
 robot control in an automated factory
• Two categories: (i) Hard real time and (ii) soft real time
 Hard: robot control in a factory
 Soft: CD player
• Real-time behavior is achieved by dividing the program into a number of
processes, each of whose behavior is predictable and known in advance.
• When an external event is detected, it is the job of the scheduler to schedule
the processes in such a way that all deadlines are met
3. Real Time Scheduling (Continued)
• Events can be periodic or aperiodic
• If there are m periodic events, and event i occurs with period Pi and
requires Ci seconds of CPU time to handle each occurrence, then the load
can be handled only if:

C1/P1 + C2/P2 + … + Cm/Pm ≤ 1
• A real-time system that meets this criterion is said to be schedulable.


• For example, consider a soft real-time system with three periodic events,
with periods of 100, 200, and 500 msec, respectively. If these events require
50, 30, and 100 msec of CPU time per event, the system is schedulable.
• Algorithms can be static (know run times in advance) or dynamic (run time
decisions)
Thread Implementation
Thread Scheduling

• Depends on supported thread type (user-level thread or kernel-level thread or


both)
• User-level thread: Kernel picks process (left)
• Kernel-level thread: Kernel picks thread (right)
