Lecture 2 Process Management
Each process is represented in the operating system by a process control block (PCB), also called a task control block.
This single thread of control allows the process to perform only one task at a time.
The user cannot simultaneously type in characters and run the spell checker within
the same process, for example. Most modern operating systems have extended the
process concept to allow a process to have multiple threads of execution and thus to
perform more than one task at a time.
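As a rough illustration (not from the lecture; the task names are made up), the sketch below uses POSIX threads to give one process additional threads of control, so it can, for example, keep accepting input while a checker runs at the same time:

/* A minimal sketch of one process with multiple threads of control. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *accept_input(void *arg) {
    for (int i = 0; i < 3; i++) { printf("reading keystrokes...\n"); sleep(1); }
    return NULL;
}

static void *spell_check(void *arg) {
    for (int i = 0; i < 3; i++) { printf("checking spelling...\n"); sleep(1); }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, accept_input, NULL);  /* second thread of control */
    pthread_create(&t2, NULL, spell_check, NULL);   /* third thread of control */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

(Compile with -pthread.)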
As processes enter the system, they are put into a job queue, which consists of all
processes in the system. The processes that are residing in main memory and are
ready and waiting to execute are kept on a list called the ready queue. This queue is
generally stored as a linked list. A ready-queue header contains pointers to the first
and final PCBs in the list. Each PCB includes a pointer field that points to the next
PCB in the ready queue.
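As a sketch of the layout just described (the field names are assumptions, not the lecture's), the ready queue could be declared in C like this:

struct pcb {
    int pid;                  /* process identifier */
    /* ... other PCB fields: state, saved registers, accounting info, ... */
    struct pcb *next;         /* pointer to the next PCB in the ready queue */
};

struct ready_queue {
    struct pcb *head;         /* first PCB in the list */
    struct pcb *tail;         /* final PCB in the list */
};

/* Append a PCB to the tail of the ready queue. */
static void ready_enqueue(struct ready_queue *q, struct pcb *p) {
    p->next = NULL;
    if (q->tail == NULL)
        q->head = q->tail = p;
    else {
        q->tail->next = p;
        q->tail = p;
    }
}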
Since there are many processes in the system, the disk may be busy with the I/O
request of some other process. The process therefore may have to wait for the disk.
The list of processes waiting for a particular I/O device is called a device queue.
In the first two cases, the process eventually switches from the waiting state
to the ready state and is then put back in the ready queue. A process
continues this cycle until it terminates, at which time it is removed from all
queues and has its PCB and resources deallocated.
Long-term scheduler (or job scheduler) – selects processes from the pool of jobs spooled to mass storage and loads them into memory for execution.
The long-term scheduler is invoked infrequently (seconds, minutes) and so may be slow.
The long-term scheduler controls the degree of multiprogramming (the
number of processes in memory).
If the degree of multiprogramming is stable, then the average rate of
process creation must be equal to the average departure rate of processes
leaving the system. Thus, the long-term scheduler may need to be invoked
only when a process leaves the system.
Because of the longer interval between executions, the long-term scheduler
can afford to take more time to decide which process should be selected for
execution.
A child process may be able to obtain its resources directly from the
operating system, or it may be constrained to a subset of the resources of
the parent process.
Execution options
When a process creates a new process, two possibilities for execution exist:
• The parent continues to execute concurrently with its children.
• The parent waits until some or all of its children have terminated.

• Convenience. Cooperating processes are also convenient for the user: an individual user may work on many tasks at the same time. For instance, a user may be editing, listening to music, and compiling in parallel.
a. Shared-memory model: a region of memory that is shared by cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region.
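A minimal sketch of establishing such a shared region with the POSIX shared-memory API (the object name and size here are arbitrary choices, not part of the lecture):

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const char *name = "/demo_shm";              /* hypothetical object name */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd == -1) return 1;
    if (ftruncate(fd, 4096) == -1) return 1;     /* size the shared region */

    /* Map the region; a cooperating process can shm_open/mmap the same name. */
    char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) return 1;

    strcpy(region, "hello, consumer");           /* write data into the region */

    munmap(region, 4096);
    close(fd);
    shm_unlink(name);                            /* remove the object when done */
    return 0;
}

(On some systems this links with -lrt.)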
• The bounded buffer assumes a fixed buffer size. In this case, the
consumer must wait if the buffer is empty, and the producer must
wait if the buffer is full.
Producer Consumer Problem
• Let’s look more closely at how the bounded buffer illustrates interprocess communication using shared memory. The following variables reside in a region of memory shared by the producer and consumer processes (see the declarations below).
• The shared buffer is implemented as a circular array with two logical pointers: in and out. The variable in points to the next free position in the buffer; out points to the first full position in the buffer. The buffer is empty when in == out; the buffer is full when ((in + 1) % BUFFER_SIZE) == out (e.g., (9 + 1) % 10 == 0).
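The declarations themselves are not reproduced on the slide; the usual shared definitions look like this (the item payload is a placeholder assumption):

#define BUFFER_SIZE 10

typedef struct {
    int data;                 /* placeholder payload for a produced item */
} item;

item buffer[BUFFER_SIZE];     /* circular array kept in the shared region */
int in = 0;                   /* next free position in the buffer */
int out = 0;                  /* first full position in the buffer */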
Producer Consumer Problem
• The code below is for the producer process and the consumer process.
• The producer process has a local variable next_produced in which the
new item to be produced is stored.
• The consumer process has a local variable next_consumed in which
the item to be consumed is stored.
Producer Consumer
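The code figures are not reproduced here; the loops below are a sketch matching the description above (busy-waiting on the buffer conditions), using the shared declarations shown earlier:

/* Producer process */
item next_produced;
while (true) {                          /* true from <stdbool.h> */
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ;   /* do nothing -- buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}

/* Consumer process */
item next_consumed;
while (true) {
    while (in == out)
        ;   /* do nothing -- buffer is empty */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}

Note that this scheme can hold at most BUFFER_SIZE - 1 items at once, because in == out is reserved to mean "empty."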
Bounded Buffer Problem
• The Bounded Buffer problem is to make sure that the
producer won't try to add data into the buffer if it's full and
that the consumer won't try to remove data from an empty
buffer.
A message-passing facility provides at least two operations: send(message) and receive(message).
a) Zero capacity. The queue has a maximum length of zero; thus, the link cannot have any messages waiting in it. In this case, the sender must block until the recipient receives the message.
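A rough illustration (not from the lecture; all names are made up) of that blocking behavior between two threads, using POSIX semaphores so the sender cannot return until the receiver has taken the message:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static int slot;                 /* single hand-off slot, never a queue */
static sem_t slot_filled;        /* posted by sender after writing slot */
static sem_t slot_taken;         /* posted by receiver after reading slot */

static void send_msg(int message) {
    slot = message;
    sem_post(&slot_filled);      /* tell the receiver a message is ready */
    sem_wait(&slot_taken);       /* block until the receiver has taken it */
}

static int receive_msg(void) {
    sem_wait(&slot_filled);      /* block until a message arrives */
    int message = slot;
    sem_post(&slot_taken);       /* unblock the sender */
    return message;
}

static void *receiver(void *arg) {
    printf("received %d\n", receive_msg());
    return NULL;
}

int main(void) {
    sem_init(&slot_filled, 0, 0);
    sem_init(&slot_taken, 0, 0);
    pthread_t t;
    pthread_create(&t, NULL, receiver, NULL);
    send_msg(42);                /* returns only after the receiver reads 42 */
    pthread_join(t, NULL);
    return 0;
}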