Lecture 2 Process Management

Chapter 3 of 'Operating System Concepts' focuses on process management, defining a process as a program in execution and discussing its various states, features, and the role of the Process Control Block (PCB). It covers process scheduling, the distinction between I/O-bound and CPU-bound processes, and the importance of inter-process communication (IPC) mechanisms for cooperating processes. The chapter also addresses process creation and termination, emphasizing the parent-child relationship and resource sharing among processes.

Operating System Concepts

Chapter 3: Process Management

Operating System Concepts 9th Edition


Objectives

 To introduce the notion of a process – a program in execution, which forms the basis of all computation
 To describe the various features of processes, including scheduling, creation and termination, and communication



Process Concept
 An operating system executes a variety of programs:
 A batch system executes jobs
 Time-shared systems have user programs or tasks
 Even on a single-user system, a user may be able to run several programs at one time: a word processor, a Web browser, and an e-mail package
 In many respects, all these activities are similar, so we call all of them processes
 The textbook uses the terms job and process almost interchangeably
 Process – a program in execution; process execution must progress in sequential fashion



Process Concept (Contd.)
 A program is a passive entity stored on disk (an executable file); a process is an active entity
 A program becomes a process when its executable file is loaded into memory
 Execution of a program is started via GUI mouse clicks, command-line entry of its name, etc.
 One program can be several processes
 Consider multiple users executing the same program



Process in Memory
 A process has multiple parts:
 The program code, also called text section
 Current activity including program counter,
processor registers
 Stack containing temporary data
 Function parameters, return addresses, local
variables
 Data section containing global variables
 Heap containing memory dynamically allocated
during run time



Process States
 As a process executes, it changes state
 new: The process is being created
 running: Instructions are being executed
 waiting: The process is waiting for some event to occur
 ready: The process is waiting to be assigned to a processor
 terminated: The process has finished execution



Process Control Block (PCB)
Information associated with each process
(also called task control block)
 Process state – may be new, ready, running,
waiting, halted, and so on.
 Program counter - indicates the address of the next
instruction to be executed for this process
 CPU registers – The registers vary in number and
type, depending on the computer architecture.
They include accumulators, index registers, stack
pointers, and general-purpose registers, plus any
condition-code information.
 CPU scheduling information- includes a process
priority, pointers to scheduling queues, and any
other scheduling parameters.
 Memory-management information – memory allocated
to the process
 Accounting information – CPU used, clock time elapsed
since start, time limits
 I/O status information – I/O devices allocated to process,
list of open files.

Each process is represented in the operating system by a PCB.



CPU Switch from Process to Process
The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler.



Threads
The process model discussed so far has implied that a process is a program that
performs a single thread of execution. For example, when a process is running a
word-processor program, a single thread of instructions is being executed.

This single thread of control allows the process to perform only one task at a time.
The user cannot simultaneously type in characters and run the spell checker within
the same process, for example. Most modern operating systems have extended the
process concept to allow a process to have multiple threads of execution and thus to
perform more than one task at a time.

This feature is especially beneficial on multicore systems, where multiple threads can run in parallel. On a system that supports threads, the PCB is expanded to include information for each thread.

 Consider having multiple program counters per process
 Must then have storage for thread details, multiple program counters in PCB



Process Scheduling
The objective of multiprogramming is to have some process running at all times,
to maximize CPU utilization. The objective of time sharing is to switch the CPU
among processes so frequently that users can interact with each program while it is
running.

To meet these objectives, the process scheduler selects an available process (possibly from a set of several available processes) for program execution on the CPU.

As processes enter the system, they are put into a job queue, which consists of all
processes in the system. The processes that are residing in main memory and are
ready and waiting to execute are kept on a list called the ready queue. This queue is
generally stored as a linked list. A ready-queue header contains pointers to the first
and final PCBs in the list. Each PCB includes a pointer field that points to the next
PCB in the ready queue.



Ready Queue And Various I/O Device Queues



Process Scheduling (Contd.)
The system also includes other queues. When a process is allocated the CPU, it
executes for a while and eventually quits, is interrupted, or waits for the occurrence
of a particular event, such as the completion of an I/O request. Suppose the process
makes an I/O request to a shared device, such as a disk.

Since there are many processes in the system, the disk may be busy with the I/O
request of some other process. The process therefore may have to wait for the disk.
The list of processes waiting for a particular I/O device is called a device queue.



Process Scheduling (Contd.)
A new process is initially put in the ready queue. It waits there until it is
selected for execution, or dispatched. Once the process is allocated the CPU
and is executing, one of several events could occur:
• The process could issue an I/O request and then be placed in an I/O
queue.
• The process could create a new child process and wait for the child’s
termination.
• The process could be removed forcibly from the CPU, as a result of an
interrupt, and be put back in the ready queue.

In the first two cases, the process eventually switches from the waiting state
to the ready state and is then put back in the ready queue. A process
continues this cycle until it terminates, at which time it is removed from all
queues and has its PCB and resources deallocated.



Representation of Process Scheduling

Schedulers
In a batch system, more processes are submitted than can be executed
immediately. These processes are spooled to a mass-storage device (typically a
disk), where they are kept for later execution.

Long-term scheduler (or job scheduler) – selects processes from this pool and loads them into memory for execution.
 Long-term scheduler is invoked infrequently (seconds, minutes)  (may
be slow)
 The long-term scheduler controls the degree of multiprogramming (the
number of processes in memory).
 If the degree of multiprogramming is stable, then the average rate of
process creation must be equal to the average departure rate of processes
leaving the system. Thus, the long-term scheduler may need to be invoked
only when a process leaves the system.
 Because of the longer interval between executions, the long-term scheduler
can afford to take more time to decide which process should be selected for
execution.



Schedulers
Short-term scheduler (or CPU scheduler) – selects from among the processes that are ready to execute and allocates the CPU to one of them.
 The short-term scheduler must select a new process for the CPU
frequently.
 Short-term scheduler is invoked frequently (milliseconds). Often, the short-
term scheduler executes at least once every 100 milliseconds. Because of
the short time between executions, the short-term scheduler must be fast.

 Processes can be described as either:


 I/O-bound process – one that spends more of its time doing I/O than doing computations
 CPU-bound process – one that generates I/O requests infrequently, using more of its time doing computations



Schedulers
 It is important that the long-term scheduler select a good process mix of I/O-
bound and CPU-bound processes.
 If all processes are I/O bound, the ready queue will almost always be empty, and the short-term scheduler will have little to do. If all processes are CPU bound, the I/O waiting queue will almost always be empty, and devices will go unused.
 Some operating systems, such as time-sharing systems, may introduce an
additional, intermediate level of scheduling. The key idea behind a medium-
term scheduler is that sometimes it can be advantageous to remove a process
from memory and thus reduce the degree of multiprogramming.
 Later, the process can be reintroduced into memory, and its execution can be
continued where it left off. This scheme is called swapping. The process is
swapped out, and is later swapped in, by the medium-term scheduler.



Context Switch
 When CPU switches to another process, the system must save the state of the
old process and load the saved state for the new process via a context switch
 When a context switch occurs, the kernel saves the context of the old process
in its PCB and loads the saved context of the new process scheduled to run.
 Context-switch time is overhead; the system does no useful work while switching
 The more complex the OS and the PCB  the longer the context switch.
 Switching speed varies from machine to machine, depending on the
memory speed, the number of registers that must be copied, and the
existence of special instructions
 Time dependent on hardware support
 Some hardware provides multiple sets of registers per CPU  multiple
contexts loaded at once



Operations on Processes – Process Creation
 The processes in most systems can execute concurrently, and they may be
created and deleted dynamically. Thus, these systems must provide a
mechanism for process creation and termination.
 A parent process creates child processes, which, in turn, create other processes, forming a tree of processes
 Generally, a process is identified and managed via a process identifier (pid)
 Resource sharing options
 In general, when a process creates a child process, that child process will
need certain resources (CPU time, memory, files, I/O devices) to
accomplish its task.

 A child process may be able to obtain its resources directly from the
operating system, or it may be constrained to a subset of the resources of
the parent process.
 .



Operations on Processes – Process Creation
 The parent may have to partition its resources among its children, or it may
be able to share some resources (such as memory or files) among several
of its children.

 Restricting a child process to a subset of the parent’s resources prevents any process from overloading the system by creating too many child processes

 Execution options
When a process creates a new process, two possibilities for execution exist:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.



Operations on Processes – Process Termination
 Process executes last statement and then asks the operating system to delete it
using the exit() system call.
 Returns status data from child to parent (via wait())
 Process’ resources are deallocated by operating system
 Parent may terminate the execution of children processes using the abort()
system call. Some reasons for doing so:
 Child has exceeded allocated resources
 Task assigned to child is no longer required
 The parent is exiting and the operating system does not allow a child to continue if its parent terminates



Operations on Processes – Process Termination
 Some operating systems do not allow a child to exist if its parent has terminated. If a process terminates, then all its children must also be terminated:
 cascading termination. All children, grandchildren, etc. are terminated.
 The termination is initiated by the operating system.
 The parent process may wait for termination of a child process by using the
wait() system call. The call returns status information and the pid of the terminated
process
pid = wait(&status);
 If no parent is waiting (did not invoke wait()), the process is a zombie
 If the parent terminated without invoking wait(), the process is an orphan



Cooperating Processes
Processes executing concurrently in the operating system may be either independent or cooperating processes.
There are several reasons for providing an environment that allows
process cooperation:

• Information sharing. Since several users may be interested in the same piece of information (for instance, a shared file), we must provide an environment to allow concurrent access to such information.

• Computation speedup. If we want a particular task to run faster, we must break it into subtasks, each of which will be executing in parallel with the others. Notice that such a speedup can be achieved only if the computer has multiple processing cores.

• Modularity. We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.

• Convenience. Individual user may also work on many tasks at the same time.
For instance, a user may be editing, listening to music, and compiling in parallel.

IPC Models
• Cooperating processes require an inter-process
communication (IPC) mechanism that will allow them to
exchange data and information. There are two
fundamental models of inter-process communication:

a. Shared-memory model – a region of memory that is shared by cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region.

b. Message-passing model – communication takes place by means of messages exchanged between the cooperating processes.
a) Shared-Memory System
• Inter-process communication using shared memory
requires communicating processes to establish a
region of shared memory. Typically, a shared-
memory region resides in the address space of the
process creating the shared-memory segment.

• Other processes that wish to communicate using this shared-memory segment must attach it to their address space.

• Recall that, normally, the operating system tries to prevent one process from accessing another process’s memory.

• Shared memory requires that two or more processes agree to remove this restriction.
a) Shared-Memory System
• They can then exchange information by reading and
writing data in the shared areas.

• The form of the data and the location are determined by these processes and are not under the operating system’s control.

• The processes are also responsible for ensuring that they are not writing to the same location simultaneously.

• To illustrate the concept of cooperating processes, let’s consider the producer–consumer problem, which is a common paradigm for cooperating processes.

Producer Consumer Problem
• Also known as the bounded-buffer problem, a producer process
produces information that is consumed by a consumer process. For
example, a compiler may produce assembly code that is consumed
by an assembler. The assembler, in turn, may produce object
modules that are consumed by the loader.

• In case it is found that the buffer is full, the producer is not allowed to produce and store any data into the memory buffer.

• Data can be consumed by the consumer only if the memory buffer is not empty.

• In case it is found that the buffer is empty, the consumer is not allowed to use any data from the memory buffer.
Producer Consumer Problem
• The producer and consumer must be synchronized, so that the
consumer does not try to consume an item that has not yet been
produced.

• Two types of buffers can be used. The unbounded buffer places no practical limit on the size of the buffer. The consumer may have to wait for new items, but the producer can always produce new items.

• The bounded buffer assumes a fixed buffer size. In this case, the
consumer must wait if the buffer is empty, and the producer must
wait if the buffer is full.
Producer Consumer Problem
• Let’s look more closely at how the
bounded buffer illustrates inter-
process communication using
shared memory. The following
variables reside in a region of
memory shared by the producer
and consumer processes:
• The shared buffer is implemented as
a circular array with two logical
pointers: in and out. The variable in
points to the next free position in the
buffer; out points to the first full
position in the buffer. The buffer is
empty when in == out; the buffer is full
when ((in + 1) % BUFFER_SIZE) == out (e.g., (9+1) % 10 == 0)
Producer Consumer Problem
• The code below is for the producer and consumer processes.
• The producer process has a local variable next_produced in which the
new item to be produced is stored.
• The consumer process has a local variable next_consumed in which
the item to be consumed is stored.

Bounded Buffer Problem
• The Bounded Buffer problem is to make sure that the
producer won't try to add data into the buffer if it's full and
that the consumer won't try to remove data from an empty
buffer.

• The solution for the producer is to either go to a sleep/wait state or discard data if the buffer is full. The next time the consumer removes an item from the buffer, it notifies the producer, who starts to fill the buffer again. In the same way, the consumer can go to a sleep/wait state if it finds the buffer is empty.
b) Message Passing System
• Message passing provides a mechanism to allow
processes to communicate and to synchronize their
actions without sharing the same address space.
• It is particularly useful in a distributed environment,
where the communicating processes may reside on
different computers connected by a network.
• For example, an Internet chat program could be
designed so that chat participants communicate with
one another by exchanging messages.
• A message-passing facility provides at least two
operations:

send(message) receive(message)

• Messages sent by a process can be either fixed or variable in size.
b) Message Passing System
• If processes P and Q want to communicate, they must send
messages to and receive messages from each other: a
communication link must exist between them.

• This link can be implemented in a variety of ways. We are concerned here not with the link’s physical implementation but rather with its logical implementation.

• Here are several methods for logically implementing a link and the send/receive operations:

1. Direct or indirect communication
2. Synchronous or asynchronous communication
3. Buffering
1) Direct/Indirect Communication
• Processes that want to communicate must have a way to refer to each other.
They can use either direct or indirect communication.

A) Under direct communication, each process that wants to communicate must explicitly name the recipient or sender of the communication. In this scheme, the send() and receive() primitives are defined as:

• send(P, message)—Send a message to process P.
• receive(Q, message)—Receive a message from process Q.

• Here a communication link has the following properties:
• A link is established automatically between every pair of processes that want to communicate.
• The processes need to know only each other’s identity to
communicate.
• A link is associated with exactly two processes.
• Exactly one link exists between each pair of processes.
1) Direct/Indirect Communication
• This scheme exhibits symmetry in addressing; that is, both the sender
process and the receiver process must name the other to
communicate.
• A variant of this scheme employs asymmetry in addressing. Here,
only the sender names the recipient; the recipient is not required to
name the sender.
• send(P, message)—Send a message to process P.
• receive(id, message)—Receive a message from any process.
The variable id is set to the name of the process with which
communication has taken place.
• The disadvantage in both of these schemes (symmetric and asymmetric)
is the limited modularity of the resulting process definitions.
• Changing the identifier of a process may necessitate examining all other
process definitions.
• All references to the old identifier must be found, so that they can be
modified to the new identifier.
• In general, any such hard-coding techniques, where identifiers must be
explicitly stated, are less desirable than techniques involving indirection,
as in the next technique.
1) Direct/Indirect Communication
B) With indirect communication, the messages are sent to and
received from mailboxes, or ports.

• A mailbox can be viewed abstractly as an object into which messages can be placed by processes and from which messages can be removed. Each mailbox has a unique identification.

• A process can communicate with another process via a number of different mailboxes, but two processes can communicate only if they have a shared mailbox. The send() and receive() primitives are defined as follows:

• send(A, message)—Send a message to mailbox A.
• receive(A, message)—Receive a message from mailbox A.
2) Synchronous/Asynchronous Communication
• Communication between processes takes place through calls to send()
and receive() primitives.
• There are different design options for implementing each primitive:

• Blocking (synchronous) send. The sending process is blocked until the message is received by the receiving process or by the mailbox.

• Non-blocking (asynchronous) send. The sending process sends the message and resumes operation.

• Blocking receive. The receiver blocks until message is available.

• Non-blocking receive. The receiver retrieves either a valid message or a null.
3) Buffering
• Whether communication is direct or indirect, messages exchanged by
communicating processes reside in a temporary queue. Basically, such
queues can be implemented in three ways:

a) Zero capacity. The queue has a maximum length of zero; thus, the link cannot have any messages waiting in it. In this case, the sender must block until the recipient receives the message.

b) Bounded capacity. The queue has finite length n; thus, at most n messages can reside in it. If the queue is not full when a new message is sent, the message is placed in the queue and the sender can continue execution without waiting.

c) Unbounded capacity. The queue’s length is potentially infinite; thus, any number of messages can wait in it. The sender never blocks.