
Nutan College Of Engineering & Research, Talegaon Dabhade, Pune - 410507
Department of Computer Science & Engineering
Operating System
Unit 2 – Processes And CPU Scheduling
1. Process Concept
When the operating system (OS) runs a program, the program is loaded into memory and control is transferred to its first instruction.
A process is a program in execution. A process is defined as an entity which represents the basic unit of work to be implemented in the system.
The components of a process are as follows:
S.N. Component & Description
1 Object: Program code to be executed.
2 Data: Data to be used for executing the program.
3 Resources: Resources that the program may require while executing.
4 Status: The status of the process's execution. A process can run to completion only when all requested resources have been allocated to it. Two or more processes may execute the same program, each using its own data and resources.

A process is not the same as its program code; it is much more than that. A process is an 'active' entity, whereas a program is a 'passive' entity. In simple terms, we write a computer program in a text file, and when we execute it, it becomes a process that performs all the tasks specified in the program. Attributes held by a process include its hardware state, memory, CPU, etc. One program can also give rise to multiple processes, for example several instances of Internet Explorer.

When a program is loaded into memory and becomes a process, it can be divided into four sections ─ stack, heap, text and data. The following figure shows a simplified layout of a process inside main memory.

S.N. Component & Description

1 Stack: The process stack contains temporary data such as method/function parameters, return addresses and local variables.
2 Heap: This is memory that is dynamically allocated to the process during its run time.
3 Text: This comprises the compiled program code; the current activity is represented by the value of the Program Counter and the contents of the processor's registers.
4 Data: This section contains the global and static variables.

a. Differences between Process and Program


Process | Program
A process is a dynamic entity. | A program is a static entity.
A process is a sequence of instruction executions. | A program is a sequence of instructions.
A process is loaded into main memory. | A program is stored on secondary storage devices.
The time span of a process is limited. | The time span of a program is unlimited.
A process is an active entity. | A program is a passive entity.
A. Process States
a) Two-State Process Model
In this model, we consider two main states of the process:

State 1: The process is Running on the CPU.
State 2: The process is Not Running on the CPU.
When a new process is created, it is in the Not Running state. Suppose a new process P2 is created; P2 is then in the Not Running state, waiting in a queue. When the CPU becomes free, the dispatcher allows P2 to use the CPU.
Dispatcher: The dispatcher is the program that hands the CPU to the process selected by the CPU scheduler. Suppose the dispatcher allows P2 to execute on the CPU.
Running: When the dispatcher allows P2 to execute on the CPU, P2 starts its execution; P2 is now in the Running state.
Now, according to priority scheduling, if a process with higher priority, say P3, wants to execute on the CPU, then P2 is paused (it moves back to the Not Running state, waiting in the queue) and P3 enters the Running state.
When P3 terminates, the dispatcher can again allow P2 to execute on the CPU.

b) Five-State Process Model


As a process executes, it changes state. The state of a process is defined as the current activity of the process. A process can be in one of the following five states at a time.

S.N. State & Description

1 New: The process is being created.
2 Ready: The process is waiting to be assigned to a processor. Ready processes are waiting for the operating system to allocate the processor to them so that they can run.
3 Running: Process instructions are being executed (the process that is currently being executed possesses all the resources needed for its execution).
4 Waiting: The process is waiting for some event to occur (such as the completion of an I/O operation).
5 Terminated: The process has finished execution.

Whenever a process changes state, the operating system reacts by placing the process's PCB in the list that corresponds to its new state. Only one process can be running on any processor at any instant, while many processes may be in the ready and waiting states.

c) Seven-State Process Model


 New (Create) – The process is about to be created but has not yet been created; it is the program present in secondary memory that will be picked up by the OS to create the process.
 Ready – New -> Ready to run. After creation, the process enters the ready state, i.e. it is loaded into main memory. The process here is ready to run and is waiting to get CPU time for its execution. Processes that are ready for execution by the CPU are maintained in a ready queue.
 Run – The process is chosen by the CPU for execution and its instructions are executed by one of the available CPU cores.
 Blocked or wait – Whenever the process requests I/O, needs input from the user, or needs access to a critical region (whose lock is already acquired), it enters the blocked or wait state. The process continues to wait in main memory and does not require the CPU. Once the I/O operation is completed, the process goes back to the ready state.
 Terminated or completed – The process is killed and its PCB is deleted.
 Suspend ready – A process that was initially in the ready state but was swapped out of main memory and placed in external storage by the scheduler is said to be in the suspend-ready state. It transitions back to the ready state whenever it is brought into main memory again.
 Suspend wait or suspend blocked – Similar to suspend ready, but for a process that was performing I/O (blocked) when a shortage of main memory caused it to be moved to secondary memory. When its I/O completes, it may move to the suspend-ready state.

B. Process Control Block
Each process is represented in the operating system by a process control block (PCB), also called a task control block. The PCB is the data structure the operating system uses to group together all the information it needs about a particular process. A PCB contains many pieces of information associated with a specific process. The architecture of a PCB is completely dependent on the operating system and may contain different information in different operating systems. Here is a simplified diagram of a PCB −

Pointer
Process State
Process ID
Program Counter
CPU Registers
CPU Scheduling Information
Memory – Management Information
Accounting Information
I/O Status Information
Priority
S.N. Information & Description

1 Pointer: Points to another process control block. The pointer is used for maintaining the scheduling list.
2 Process State: The state may be new, ready, running, waiting and so on.
3 Process ID: Unique identification for each process in the operating system.
4 Program Counter: Indicates the address of the next instruction to be executed for this process.
5 CPU Registers: Includes general-purpose registers, stack pointers, index registers, accumulators, etc. The number and type of registers depend entirely on the computer architecture.
6 Memory-Management Information: May include the values of the base and limit registers, the page tables, or the segment tables, depending on the memory system used by the operating system. This information is useful for deallocating memory when the process terminates.
7 Accounting Information: Includes the amount of CPU and real time used, time limits, job or process numbers, account numbers, etc.
8 CPU-Scheduling Information: Includes pointers to scheduling queues and other scheduling parameters.

9 I/O Status Information: Includes the list of I/O devices allocated to the process and the list of open files.
10 Priority: Every process has its own priority. The process with the highest priority among the ready processes gets the CPU first. This is also stored in the process control block.

The PCB serves as the repository for any information which can vary from process to
process. Loader/linker sets flags and registers when a process is created. If that process gets
suspended, the contents of the registers are saved on a stack and the pointer to the particular
stack frame is stored in the PCB. By this technique, the hardware state can be restored so that the
process can be scheduled to run again. The PCB is maintained for a process throughout its
lifetime, and is deleted once the process terminates.
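As a rough illustration, the fields listed above could be collected into a C structure along the following lines. This is only a sketch: the field names and sizes are illustrative and do not correspond to the actual PCB of any particular operating system.

/* Simplified, hypothetical PCB mirroring the table above. */
struct pcb {
    struct pcb   *next;            /* pointer used to link PCBs into a scheduling queue */
    int           state;           /* new, ready, running, waiting, terminated          */
    int           pid;             /* unique process identifier                         */
    unsigned long program_counter; /* address of the next instruction to execute        */
    unsigned long registers[16];   /* saved general-purpose registers                   */
    int           priority;        /* scheduling priority                               */
    unsigned long base, limit;     /* memory-management information                     */
    long          cpu_time_used;   /* accounting information                            */
    int           open_files[16];  /* I/O status: descriptors of open files             */
};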
2. Process scheduling
The process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy.
Multiprogramming operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
a) Categories of Scheduling
There are two categories of scheduling:
Non-preemptive: The CPU cannot be taken away from a process until the process completes its execution. The CPU is switched to another process only when the running process terminates or moves to a waiting state.
Preemptive: The OS allocates the CPU to a process for a limited amount of time. A process may be switched from the running state to the ready state, or from the waiting state to the ready state, because the CPU may be given to a higher-priority process, which replaces the currently running process.

b) Process Scheduling Queues


 The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues. The
OS maintains a separate queue for each of the process states and PCBs of all processes in

the same execution state are placed in the same queue. When the state of a process is
changed, its PCB is unlinked from its current queue and moved to its new state queue.
 Scheduling queues refer to queues of processes or devices. They maintain information about all processes that are ready for the CPU or for devices.
 A sample queue is shown below:
 A queue is normally maintained as a linked list. Its header contains a pointer to the first PCB and a pointer to the last PCB in the list.
 Each PCB has a pointer field that points to the next process in the queue.

The Operating System maintains the following important process scheduling queues −

Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps a set of all processes residing in main memory, ready and
waiting to execute. A new process is always put in this queue.
Device queues − The processes which are blocked due to unavailability of an I/O device
constitute this queue.

In the above-given Diagram,
 Rectangle represents a queue.

 Circle denotes the resource
 Arrow indicates the flow of the process.

A newly arrived process is put in the ready queue. Processes wait in the ready queue until the CPU is allocated to them. Once the CPU is assigned to a process, that process executes. While executing, any one of the following events can occur:
 The process could issue an I/O request and then be placed in an I/O queue.
 The process could create a new sub-process and wait for its termination.
 The process could be removed forcibly from the CPU as a result of an interrupt and put back in the ready queue.

c) Schedulers
Schedulers are special system software which handles process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of three types: the Long-Term Scheduler, the Short-Term Scheduler and the Medium-Term Scheduler.

Long Term Scheduler :Long term scheduler runs less frequently. Long term scheduler is also
known as job scheduler. It chooses the processes from the pool (secondary memory) and keeps
them in the ready queue maintained in the primary memory.

Long Term scheduler mainly controls the degree of Multiprogramming. The purpose of long
term scheduler is to choose a perfect mix of IO bound and CPU bound processes among the jobs
present in the pool. An optimal degree of Multiprogramming means the average rate of process
creation is equal to the average departure rate of processes from the execution memory.

Short Term Scheduler: This is also known as the CPU scheduler and runs very frequently. It selects one of the jobs from the ready queue and dispatches it to the CPU for execution. A scheduling algorithm is used to select which job is dispatched. The short-term scheduler is responsible for selecting one process from the ready state and scheduling it on the running state. The primary aim of this scheduler is to enhance CPU performance and increase the process execution rate.
Medium Term Scheduler: Medium-term scheduling is a part of swapping; it is also called the swapping scheduler. It removes processes from memory and thereby reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this situation, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This procedure is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.

This complete process is depicted in the diagram below:

d) Difference Between Long-Term, Short-Term and Medium-Term Schedulers

Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
Also known as the job scheduler. | Also known as the CPU scheduler. | Also called the swapping scheduler.
It is either absent or minimal in a time-sharing system. | It is insignificant in a time-sharing system. | It is an element of time-sharing systems.
Its speed is less than that of the short-term scheduler. | Its speed is the fastest of the three. | It offers medium speed.
It selects processes from the job pool and loads them into memory. | It only selects processes that are in the ready state for execution. | It helps to send processes back into memory.
It offers full control over the degree of multiprogramming. | It offers less control. | It reduces the degree of multiprogramming.

e) Context Switch:
A context switch is the procedure of switching the CPU from one process or task to another. It is the method of storing and restoring the state (context) of a CPU in the PCB, so that process execution can be resumed from the same point at a later time. Context switching is essential for a multitasking OS. In this mechanism, the execution of the process that is in the running state is suspended by the kernel and another process that is in the ready state is executed by the CPU.
Context switching involves a number of steps; you cannot directly switch a process from the running state to the ready state. You first have to save the context of that process. If the context of a process P is not saved, then when P later returns to the CPU for execution it would start executing from the beginning, whereas it should continue from the point where it left the CPU in its previous execution. So, the context of the running process must be saved before any other process is put into the running state.
A context is the contents of a CPU's registers and program counter at any point in time. Context switching can happen for the following reasons:
 When a process of higher priority enters the ready state. In this case, the execution of the running process should be stopped and the higher-priority process should be given the CPU.
 When an interrupt occurs, the process in the running state should be stopped and the CPU should handle the interrupt before doing anything else.
 When a transition between user mode and kernel mode is required, a context switch may be performed.
Steps involved in Context Switching
The process of context switching involves a number of steps. The following diagram
depicts the process of context switching between the two processes P1 and P2.

In the above figure, you can see that initially the process P1 is in the running state and the process P2 is in the ready state. Now, when an interrupt occurs, you have to switch process P1 from the running to the ready state (after saving its context) and process P2 from the ready to the running state. The following steps are performed:

1. Firstly, the context of the process P1 i.e. the process present in the running state
will be saved in the Process Control Block of process P1 i.e. PCB1.
2. Now, you have to move the PCB1 to the relevant queue i.e. ready queue, I/O
queue, waiting queue, etc.
3. From the ready state, select the new process that is to be executed i.e. the process
P2.
4. Now, update the Process Control Block of process P2 i.e. PCB2 by setting the
process state to running. If the process P2 was earlier executed by the CPU, then
you can get the position of last executed instruction so that you can resume the
execution of P2.
5. Similarly, if you want to execute the process P1 again, then you have to follow
the same steps as mentioned above(from step 1 to 4).
In general, at least two processes are required for a context switch; in the case of the round-robin algorithm, a context switch can occur even with only one process, when its time quantum expires and it is rescheduled.
The time involved in the context switching of one process by other is called the Context
Switching Time.
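The numbered steps above can be sketched as a toy, user-space simulation in C. Here the saved "context" is reduced to a single program-counter value and the PCB is the simplified structure below; a real kernel would save all CPU registers and manipulate real scheduling queues.

#include <stdio.h>

enum pstate { READY, RUNNING };

struct pcb {
    int         pid;
    enum pstate state;
    int         saved_pc;                 /* last executed instruction (simplified)     */
};

void context_switch(struct pcb *running, struct pcb *next, int current_pc)
{
    running->saved_pc = current_pc;       /* step 1: save the context of P1 into PCB1   */
    running->state = READY;               /* step 2: P1 goes back to the ready queue    */

    next->state = RUNNING;                /* steps 3-4: select P2 and mark it running   */
    printf("resuming P%d at pc=%d\n",     /* P2 resumes from its previously saved point */
           next->pid, next->saved_pc);
}

int main(void)
{
    struct pcb p1 = {1, RUNNING, 0}, p2 = {2, READY, 40};
    context_switch(&p1, &p2, 17);         /* P1 had reached instruction 17              */
    return 0;
}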
Advantage of Context Switching
Context switching is used to achieve multitasking i.e. multiprogramming with time-sharing
Disadvantage of Context Switching
The disadvantage of context switching is that it takes time, i.e. the context-switching time. Time is required to save the context of the process that is leaving the running state and to restore the context of the process that is about to enter the running state. During that time, the CPU does no useful work from the user's perspective, so context switching is pure overhead.

3. Operations on Process:
Process Creation and Process termination are used to create and terminate processes respectively.
A. Process Creation
A process may be created in the system for different operations. Some of the events that lead to
process creation are as follows −
 User request for process creation
 System Initialization
 Batch job initialization
 Execution of a process creation system call by a running process
A process may create several new processes, via a create-process system call, during the course
of execution. The creating process is called a parent process, and the new processes are called the
children of that process. Each of these new processes may in turn create other processes, forming
a tree of processes. Most operating systems (including UNIX and the Windows family of
operating systems) identify processes according to a unique process identifier (or pid), which is
typically an integer number. Figure illustrates a typical process tree for the Solaris operating
system, showing the name of each process and its pid.

A new process is created by the fork() system call. The new process consists of a copy of the address space of the original process. This mechanism allows the parent process to communicate easily with its child process. Both processes (the parent and the child) continue execution at the instruction after the fork(), with one difference: the return code of fork() is zero for the new (child) process, whereas the (nonzero) process identifier of the child is returned to the parent. Typically, the exec() system call is used after a fork() by one of the two processes to replace the process's memory space with a new program. The exec() system call loads a binary file into memory (destroying the memory image of the program containing the exec() call) and starts its execution. In this manner, the two processes are able to communicate and then go their separate ways.
The parent can then create more children; or, if it has nothing else to do while the child runs, it can issue a wait() system call to move itself off the ready queue until the termination of the child. The parent waits for the child process to complete with the wait() system call. When the child process completes (by either implicitly or explicitly invoking exit()), the parent process resumes from the call to wait(), and then itself completes using the exit() system call.
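A minimal sketch of this fork()/exec()/wait() pattern on a UNIX-like system is shown below; the program run by the child, ls -l, is just an arbitrary example.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                         /* create a child process                  */

    if (pid < 0) {                              /* fork failed                             */
        perror("fork");
        exit(1);
    } else if (pid == 0) {                      /* child: fork() returned 0                */
        execlp("ls", "ls", "-l", (char *)NULL); /* replace the child's memory image        */
        perror("exec");                         /* reached only if exec fails              */
        exit(1);
    } else {                                    /* parent: fork() returned the child's pid */
        wait(NULL);                             /* wait until the child terminates         */
        printf("Child %d finished\n", (int)pid);
    }
    return 0;
}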
A diagram that demonstrates process creation using fork() is as follows:

Depending on the system implementation, there are three possibilities for resource sharing after a child process is created:
 Parent and children share all resources
 Children share subset of parent’s resources
 Parent and child share no resources
There are two options for the parent process after creating the child:
 Parent and children execute concurrently
 Parent waits until children terminate
Two possibilities for the address space of the child relative to the parent:
The child may be an exact duplicate of the parent, sharing the same program and data segments
in memory.
The child process may have a new program loaded into its address space, with all new code and
data segments.
UNIX examples
 fork system call creates new process
 exec system call used after a fork to replace the process’ memory space with a new
program
B. Process Termination
Process termination occurs when a process finishes execution or is killed. The exit() system call is used by most operating systems for process termination.
Some of the causes of process termination are as follows −
•A process may be terminated after its execution is naturally completed. This process leaves the
processor and releases all its resources.
•A child process may be terminated if its parent process requests for its termination.
•A process can be terminated if it tries to use a resource that it is not allowed to. For example - A
process can be terminated for trying to write into a read only file.
•If an I/O failure occurs for a process, it can be terminated. For example - If a process requires
the printer and it is not working, then the process will be terminated.
•In most cases, if a parent process is terminated then its child processes are also terminated. This
is done because the child process cannot exist without the parent process.
•If a process requires more memory than is currently available in the system, then it is terminated
because of memory scarcity.
•A process might also be terminated because some other process executes a system call telling the operating system (OS) to kill it (e.g. abort()).
•Some systems do not allow a child to exist if its parent has terminated. In such systems, if a
process terminates (either normally or abnormally), then all its children must also be terminated.
This phenomenon, referred to as cascading termination, is normally initiated by the operating
system.
4. Interprocess communication
Interprocess communication is the mechanism provided by the operating system that allows
processes to communicate with each other. This communication could involve a process letting
another process know that some event has occurred or the transferring of data from one process
to another.
A diagram that illustrates interprocess communication is as follows:

Reasons for using interprocess communication:
Information Sharing
Several processes may need to access the same piece of information. The IPC mechanism allows different processes to share the same information concurrently.
Computation Speedup
Smaller computations execute faster. Thus, if you want a certain task to execute faster, break it into smaller sub-tasks; the interprocess communication mechanism lets these sub-tasks run in parallel. To achieve such a speedup, however, the system must have multiple processing elements, such as multiple CPUs or I/O channels.
Modularity
The interprocess communication mechanism allows the construction of a system in a modular fashion, where the functions of the system are divided into separate processes and threads.
Convenience
A single user can perform several tasks simultaneously, such as editing a document, printing a document and playing music.

Disadvantages of Inter-Process Communication (IPC)


Complexity: IPC can add complexity to the design and implementation of software systems, as
it requires careful coordination and synchronization between processes. This can lead to
increased development time and maintenance costs.
Overhead: IPC can introduce additional overhead, such as the need to serialize and deserialize
data, and the need to synchronize access to shared resources. This can impact the performance of
the system.
Scalability: IPC can also limit the scalability of a system, as it may be difficult to manage and
coordinate large numbers of processes communicating with each other.

Different approaches to implementing interprocess communication (IPC communication models) are given as follows:
a. Shared Memory
An operating system implementing the shared memory model is a shared-memory system. Here a cooperating process creates a shared memory segment in its address space; any other process that wants to communicate with it must attach itself to this shared memory segment. Shared memory is memory that can be simultaneously accessed by multiple processes, allowing them to communicate with each other. All POSIX systems, as well as Windows operating systems, use shared memory.
The communicating processes share information by:
 Writing data to the shared region
 Reading from the shared region.

Fig: Shared memory communication model
Advantage of Shared Memory Model
Memory communication is faster on the shared memory model as compared to the message
passing model on the same machine.
Disadvantages of Shared Memory Model
Some of the disadvantages of shared memory model are as follows:
 All the processes that use the shared memory model need to make sure that they are not
writing to the same memory location.
 Shared memory model may create problems such as synchronization and memory
protection that need to be addressed.
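As an illustrative sketch of the writer side, the POSIX calls shm_open(), ftruncate() and mmap() can be used to create and fill a shared region; the segment name /demo_shm and the message are made-up examples, and on some systems the program must be linked with -lrt. A reader process would shm_open() the same name, mmap() it and simply read the bytes.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char  *name = "/demo_shm";                  /* hypothetical segment name        */
    const size_t SIZE = 4096;

    int fd = shm_open(name, O_CREAT | O_RDWR, 0666);  /* create the shared segment        */
    ftruncate(fd, SIZE);                              /* set the segment size             */
    char *ptr = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    strcpy(ptr, "Hello from the writer process");     /* write into the shared region     */
    printf("Shared region now holds: %s\n", ptr);     /* read it back                     */

    munmap(ptr, SIZE);
    close(fd);
    shm_unlink(name);                                 /* remove the segment               */
    return 0;
}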

b. Message Passing
The message-passing system facilitates communication via at least two operations: one that sends messages and one that receives messages.
Multiple processes can read and write data to the message queue without being connected to
each other. Messages are stored on the queue until their recipient retrieves them. Message queues
are quite useful for interprocess communication and are used by most operating systems.

Message Queues
Messages are stored in a linked list inside the OS kernel, and each message queue is identified by a "message queue identifier".
Direct Communication
In direct communication, a process that wants to communicate must explicitly name the recipient or sender of the communication.
Processes must name each other explicitly:
send (P, message) – send a message to process P
receive(Q, message) – receive a message from process Q
 A pair of communicating processes must have one link between them.
 A link (generally bi-directional) establishes between every pair of communicating
processes.
Indirect Communication
 Messages are directed and received from mailboxes (also referred to as ports)
 Each mailbox has a unique id.
 Processes can communicate only if they share a mailbox.
 Link (uni-directional or bi-directional) is established between pairs of processes.
 Sender process puts the message in the port or mailbox of a receiver process and the
receiver process takes out (or deletes) the data from the mailbox.

 Operations
o create a new mailbox (port)
o send and receive messages through mailbox
o destroy a mailbox
 Primitives are defined as:
send(A, message) – send a message to mailbox A
receive(A, message) – receive a message from mailbox A
Size of messages:
The size of messages can be fixed or variable. With fixed-size messages the system-level implementation is simple, but programming such a system becomes more difficult. With variable-size messages the system-level implementation is more difficult, but programming becomes simpler.
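Indirect (mailbox-style) communication can be sketched with POSIX message queues, where mq_send() and mq_receive() play the roles of send(A, message) and receive(A, message). The queue name /demo_mq and the message sizes below are arbitrary, and linking with -lrt may be required.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 10,
                            .mq_msgsize = 64, .mq_curmsgs = 0 };

    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0666, &attr); /* create the mailbox */

    const char *msg = "event: job finished";
    mq_send(mq, msg, strlen(msg) + 1, 0);        /* send(A, message)                      */

    char buf[64];
    mq_receive(mq, buf, sizeof(buf), NULL);      /* receive(A, message)                   */
    printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_mq");                       /* destroy the mailbox                   */
    return 0;
}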

The message passing can either be:


1. Synchronous
2. Asynchronous.
Synchronous Message Passing System
In synchronous message passing, the sender is blocked until the receiver receives the message.
And the receiver is blocked until the message is available.
Blocking is considered synchronous
o Blocking send -- the sender is blocked until the message is received
o Blocking receive -- the receiver is blocked until a message is available

Asynchronous Message Passing System


In asynchronous message passing, the sender sends a message and resumes its operation.
Whereas the receiver may receive a valid message or a null.
Non-blocking is considered asynchronous
o Non-blocking send -- the sender sends the message and continue
o Non-blocking receive -- the receiver receives:A valid message, or Null message

Fig: Message Passing Communication Model

Advantage of Message Passing Model
The message passing model is much easier to implement than the shared memory model.
Disadvantage of Message Passing Model
The message passing model has slower communication than the shared memory model because
the connection setup takes time.
A diagram that demonstrates the shared memory model and message passing model is given as
follows:

c. Pipes
Pipes are a simple form of IPC used to allow communication between two processes. A pipe is a
unidirectional communication channel that allows one process to send data to another process.
The receiving process can read the data from the pipe, and the sending process can write data to
the pipe. Pipes are used in all POSIX systems as well as Windows operating systems.

The | command is called a pipe. It is used to pipe, or transfer, the standard output from the
command on its left into the standard input of the command on its right.
Example: echo "Hello World" | wc -w
# First, echo "Hello World" will send Hello World to the standard output.
# Next, pipe | will transfer the standard output to the next command's standard input.
# Finally, wc -w will count the number of words from its standard input, which is 2.
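Inside a program, the same idea is available through the POSIX pipe() call. A minimal sketch in which a parent process writes a short (arbitrary) message that its child reads:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int  fd[2];                    /* fd[0] = read end, fd[1] = write end       */
    char buf[32];

    pipe(fd);                      /* create the unidirectional channel         */

    if (fork() == 0) {             /* child: the reading process                */
        close(fd[1]);              /* close the unused write end                */
        read(fd[0], buf, sizeof(buf));
        printf("child read: %s\n", buf);
        close(fd[0]);
        _exit(0);
    } else {                       /* parent: the writing process               */
        close(fd[0]);              /* close the unused read end                 */
        write(fd[1], "hello via pipe", 15);   /* 15 bytes including the '\0'    */
        close(fd[1]);
        wait(NULL);
    }
    return 0;
}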

d. Socket
The socket is the endpoint for sending or receiving data in a network. This is true for data sent
between processes on the same computer or data sent between different computers on the same
network. Most of the operating systems use sockets for interprocess communication. A socket is
a combination of an IP address and a port number, which allows a process to connect to another
process over the network.
Sockets are commonly used for client-server applications, where a server listens on a socket for
incoming connections, and clients connect to the server over the socket.
From the diagram below, the first part is the Client and the second part is the Server.

The Client wants to request something from the Server, and the Server has to fulfil the request by providing whatever the Client asks for. For this to occur, there must be a communication link between the Server and the Client; this communication link is established using the socket concept defined earlier.
 From the first part of the diagram, a Client process is trying to establish a connection
between the Client and the Server. The host computer will assign a port number to this
process which wants to communicate with the server.

 The IP address of the host computer in the diagram above is 146.86.5.20. A process that belongs to the host computer wants to communicate with the web server. For that to be possible, a specific port number is assigned by the host computer to the client process.
 The client socket is then the IP address together with that port number, 146.86.5.20:1625. A port number is an arbitrary number greater than 1024; port numbers below 1024 are considered well known and are reserved for implementing standard services.
 Similarly, in part B (the web server), the web server has a socket. This socket belongs to the process in the web server that is going to communicate with the client process, and it has the IP address and port number 161.25.19.8:80.
 The packets travelling between the client process and the server process are delivered appropriately based on these port numbers. This is how communication between systems/processes takes place using sockets.

Fig: IPC using sockets
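A minimal client-side sketch in C, reusing the server address 161.25.19.8:80 from the example above; the HTTP request string is only an illustrative payload and most error handling is omitted.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);     /* create a TCP endpoint             */

    struct sockaddr_in server = {0};
    server.sin_family = AF_INET;
    server.sin_port   = htons(80);                  /* well-known port of the web server */
    inet_pton(AF_INET, "161.25.19.8", &server.sin_addr);

    if (connect(sock, (struct sockaddr *)&server, sizeof(server)) == 0) {
        const char *req = "GET / HTTP/1.0\r\n\r\n";
        write(sock, req, strlen(req));              /* send a request to the server      */

        char buf[512];
        ssize_t n = read(sock, buf, sizeof(buf) - 1);   /* read part of the reply        */
        if (n > 0) {
            buf[n] = '\0';
            printf("%s\n", buf);
        }
    }
    close(sock);
    return 0;
}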

e. Remote Procedure Call (RPC):


Remote Procedure Call (RPC) is a powerful technique for constructing distributed, client-
server based applications. It is based on extending the conventional local procedure calling
so that the called procedure need not exist in the same address space as the calling
procedure. The two processes may be on the same system, or they may be on different
systems with a network connecting them.
The following steps take place during a RPC :
1. A client invokes a client stub procedure, passing parameters in the usual way. The client stub
resides within the client’s own address space.
2. The client stub marshals (packs) the parameters into a message. Marshalling includes converting the representation of the parameters into a standard format and copying each parameter into the message.
3. The client stub passes the message to the transport layer, which sends it to the remote server
machine.

4. On the server, the transport layer passes the message to a server stub, which demarshals (unpacks) the parameters and calls the desired server routine using the regular procedure-call mechanism.
5. When the server procedure completes, it returns to the server stub (e.g., via a normal
procedure call return), which marshalls the return values into a message. The server stub then
hands the message to the transport layer.
6. The transport layer sends the result message back to the client transport layer, which hands
the message back to the client stub.
7. The client stub demarshalls the return parameters and execution returns to the caller.

5. Cooperating processes
In the computer system there are many processes, which may be either independent processes or cooperating processes. A process is independent if it cannot affect or be affected by any other process running in the system; clearly, any process that does not share any data (temporary or persistent) with another process is independent. A cooperating process, on the other hand, is one that can affect or be affected by other processes running on the computer; a cooperating process shares data with another process.
Example:
Below is a very simple example of two cooperating processes. The problem is called the
Producer Consumer problem and it uses two processes, the producer and the consumer.
Producer Process: It produces information that will be consumed by the consumer.
Consumer Process: It consumes information produced by the producer.
Both processes run concurrently. If the consumer has nothing to consume, it waits.
There are two versions of the producer. In version one, the producer can produce an infinite
amount of items. This is called the Unbounded Buffer Producer Consumer Problem. In the other

version, there is a fixed limit to the buffer size. When the buffer is full, the producer must wait
until there is some space in the buffer before it can produce a new item.
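A minimal sketch of the bounded-buffer version is shown below. BUFFER_SIZE, the busy-wait loops and the sequential main() are illustrative simplifications; a real implementation would run the producer and consumer concurrently and add proper synchronization (semaphores or a mutex).

#include <stdio.h>

#define BUFFER_SIZE 8

int buffer[BUFFER_SIZE];
int in = 0, out = 0;               /* next free slot / next full slot            */

/* Producer inserts an item; it must wait while the buffer is full. */
void produce(int item)
{
    while (((in + 1) % BUFFER_SIZE) == out)
        ;                          /* buffer full: producer waits                */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
}

/* Consumer removes an item; it must wait while the buffer is empty. */
int consume(void)
{
    while (in == out)
        ;                          /* buffer empty: consumer waits               */
    int item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return item;
}

int main(void)
{
    for (int i = 1; i <= 3; i++)   /* sequential demonstration only              */
        produce(i * 10);
    for (int i = 0; i < 3; i++)
        printf("consumed %d\n", consume());
    return 0;
}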

There are several reasons for providing an environment that allows process cooperation:
1. Information sharing
Several users may be interested in the same piece of information (for instance, a shared file), so we must provide an environment that allows concurrent access to such resources.
2. Computation speedup
If we want a task to run faster, we break it into sub-tasks, each of which executes in parallel with the others. Such a speedup can be achieved only if the computer has multiple processing elements (such as CPUs or I/O channels).
3. Modularity
We may want to construct the system in a modular fashion, dividing its functions into separate processes.
4. Convenience
An individual user may have many tasks to perform at the same time, such as editing, printing and compiling.
6. Threads
A thread is a path of execution within a process. A thread is a basic unit of CPU utilization, consisting of a program counter, a stack, a set of registers and a thread ID. A process can contain multiple threads. A thread is also known as a lightweight process. The idea is to achieve parallelism by dividing a process into multiple threads. For example, in a browser, multiple tabs can be different threads; MS Word uses multiple threads, one to format the text, another to process inputs, and so on.
A thread shares with its peer threads information such as the code segment, data segment and open files. When one thread alters a code-segment memory item, all other threads see the change. Each thread has its own CPU state and stack, but all threads share the address space of the process and its environment.
Because threads can share common data, they do not need to use interprocess communication.
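For example, with the POSIX threads (Pthreads) library, two threads of one process can directly share a global variable; the worker() function and the counter below are an illustrative sketch only (left unsynchronized for brevity).

#include <pthread.h>
#include <stdio.h>

int counter = 0;                              /* shared: both threads see the same variable      */

void *worker(void *arg)
{
    const char *name = arg;
    counter++;                                /* update shared data in the common address space  */
    printf("%s sees counter = %d\n", name, counter);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)"thread-1");
    pthread_create(&t2, NULL, worker, (void *)"thread-2");
    pthread_join(t1, NULL);                   /* wait for both threads to finish                  */
    pthread_join(t2, NULL);
    return 0;
}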

There are four major categories of benefits to multi-threading:


Responsiveness - One thread may provide rapid response while other threads are blocked or
slowed down doing intensive calculations.

Resource sharing - By default threads share common code, data, and other resources, which
allows multiple tasks to be performed simultaneously in a single address space.
Economy - Creating and managing threads ( and context switches between them ) is much faster
than performing the same tasks for processes.
Scalability, i.e. Utilization of multiprocessor architectures - A single threaded process can
only run on one CPU, no matter how many may be available, whereas the execution of a multi-
threaded application may be split amongst available processors. ( Note that single threaded
processes can still benefit from multi-processor architectures when there are multiple processes
contending for the CPU, i.e. when the load average is above some certain threshold. )

A. Difference between Process and Thread


S.N. Process | Thread
1 A process is heavyweight, or resource intensive. | A thread is lightweight, taking fewer resources than a process.
2 Process switching needs interaction with the operating system. | Thread switching does not need to interact with the operating system.
3 In multiple processing environments, each process executes the same code but has its own memory and file resources. | All threads can share the same set of open files and child processes.
4 If one process is blocked, then no other process can execute until the first process is unblocked. | While one thread is blocked and waiting, a second thread in the same task can run.
5 Multiple processes without using threads use more resources. | Multiple threaded processes use fewer resources.
6 In multiple processes, each process operates independently of the others. | One thread can read, write or change another thread's data.

B. Types of Thread
Threads are implemented in the following two ways −
User Level Threads (ULT) –
User-level threads are implemented by user-level software. They are created and managed by a thread library, which the OS provides as an API for creating, managing and synchronizing threads. They are faster than kernel-level threads and are basically represented by a program counter, stack, registers and a PCB. Thread switching does not need to call the OS or interrupt the kernel; the kernel does not know about user-level threads and manages the process as if it were single-threaded. Context switching between user-level threads is therefore faster.
Example – user thread libraries include POSIX threads (Pthreads) and Mach C-threads.
Advantages
 Thread switching does not require Kernel mode privileges.
 User level thread can run on any operating system.
 Scheduling can be application specific in the user level thread.
 User level threads are fast to create and manage.

Disadvantages
 Multithreaded applications on user-level threads cannot benefit from multiprocessing.

 If a single user-level thread performs a blocking operation, the entire process is halted.

Kernel Level Thread (KLT):

Kernel-level threads are implemented using system calls and are recognized by the OS. Kernel-level threads are slower to create and manage than user-level threads, and context switching between kernel-level threads is slower. However, even if one kernel-level thread performs a blocking operation, it does not block the other threads of the process. The OS kernel provides system calls to create and manage threads.

Example: Windows, Solaris.
Advantages
 The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
 If one thread in a process is blocked, the kernel can schedule another thread of the same process.
 Kernel routines themselves can be multithreaded.
Disadvantages
 Kernel threads are generally slower to create and manage than user threads.
 Transfer of control from one thread to another within the same process requires a mode switch to the kernel.

C. Difference between User-Level & Kernel-Level Thread


S.N. User-Level Threads | Kernel-Level Threads
1 User-level threads are faster to create and manage. | Kernel-level threads are slower to create and manage.
2 Implementation is by a thread library at the user level. | The operating system supports creation of kernel threads.
3 User-level threads are generic and can run on any operating system. | Kernel-level threads are specific to the operating system.
4 Multi-threaded applications cannot take advantage of multiprocessing. | Kernel routines themselves can be multithreaded.

D. Thread States:
Creation/New/Born State: When an application is to be processed, it creates a thread.
Ready/Runnable: The thread is then allocated the required resources (such as a network) and enters the READY queue.
Running: When the thread scheduler (like a process scheduler) assigns the thread a processor, it enters the RUNNING state.
Waiting: When the thread needs some other event to be triggered, which is outside its control (such as another process completing), it transitions from RUNNING to the WAITING queue.
Delayed/Sleep: When the application has the capability to delay the processing of the thread, it can put the thread to sleep for a specific amount of time; the thread then transitions from RUNNING to the DELAYED queue.
Blocked/Not Runnable: When the thread generates an I/O request and cannot proceed until it is done, it transitions from RUNNING to the BLOCKED queue.
Finished/Terminated: After its work is completed, the thread transitions from RUNNING to FINISHED.

The difference between the WAITING and BLOCKED states is that in WAITING the thread waits for a signal from another thread or for another process to complete, so the waiting time is bounded, while in the BLOCKED state there is no specified time (it depends, for example, on when the user provides input).

THREAD STATES
E. Thread Control Block in Operating System
A Thread Control Block (TCB) represents a thread generated in the system. It contains information about the thread, such as its ID and state.

The components have been defined below:
Thread ID: It is a unique identifier assigned by the Operating System to the thread when it is
being created.
Thread states: These are the states of the thread, which change as the thread progresses through the system.
CPU information: It includes everything that the OS needs to know about, such as how far the
thread has progressed and what data is being used.
Thread Priority: It indicates the weight (or priority) of the thread over other threads which
helps the thread scheduler to determine which thread should be selected next from the READY
queue.
A pointer which points to the process which triggered the creation of this thread.
A pointer which points to the thread(s) created by this thread.

7. Multithreading Models
A process is divided into a number of smaller tasks, and each task is called a thread. When a number of threads within a process execute at the same time, this is called multithreading. In multithreading, the same process or task is carried out by a number of threads, i.e. there is more than one thread to perform the task. With the use of multithreading, multitasking can be achieved.

Multithreading models are of three types:


 Many to one relationship.
 One to one relationship.
 Many to many relationship.
1. Many to one relationship.
In this model, multiple user threads are mapped to one kernel thread. When a user thread makes a blocking system call, the entire process blocks. Since there is only one kernel thread, only one user thread can access the kernel at a time, so multiple threads cannot run in parallel on a multiprocessor. Thread management is done at the user level, so it is more efficient.

2. One to One Model
 The one to one model maps each of the user threads to a kernel thread. This means that
many threads can run in parallel on multiprocessors and other threads can run when one
thread makes a blocking system call.
 A disadvantage of the one to one model is that the creation of a user thread requires a
corresponding kernel thread. Since a lot of kernel threads burden the system, there is
restriction on the number of threads in the system.
 OS/2, Windows NT and Windows 2000 use the one-to-one model.

3. Many to Many Model

 The many-to-many model maps many user threads to an equal or smaller number of kernel threads. The number of kernel threads may depend on the application or the machine. This model provides the best level of concurrency: when a thread performs a blocking system call, the kernel can schedule another thread for execution.
 The many-to-many model does not have the disadvantages of the one-to-one or many-to-one models. There can be as many user threads as required, and their corresponding kernel threads can run in parallel on a multiprocessor.
 A blocking kernel system call does not block the entire process.
 Processes can be split across multiple processors.

8. CPU Scheduling: Scheduling Criteria


There are many different criteria to consider when choosing the "best" scheduling algorithm; they are:
 CPU Utilization
It is the average fraction of time during which the processor is busy. To make the best use of the CPU and not waste any CPU cycle, the CPU should be kept working most of the time (ideally 100% of the time). In a real system, CPU usage should range from 40% (lightly loaded) to 90% (heavily loaded).
 Throughput
It is the total number of processes completed per unit time, i.e. the total amount of work done in a unit of time.
 Turnaround Time
The total amount of time spent by the process from its arrival to its completion, is called
Turnaround time.
 Waiting Time
The total amount of time for which the process waits in the ready queue to get control of the CPU is called waiting time.

 Response Time
The difference between the arrival time and the time at which the process first gets the CPU
is called Response Time.
 Priority: Give preferential treatment to processes with highest priorities
 Balanced Utilization: For better performance Utilization of memory, I/O devices and other
resources should be considered along with CPU utilization.
 Fairness: Avoid starvation; all processes should get an equal opportunity to execute.

Various Times related to the Process:

Turnaround Time(TAT)=Completion Time(CT)-Arrival Time(AT)

Waiting Time(WT)= Turnaround Time(TAT)-Burst Time(BT)

Arrival Time: The time at which the process enters into the ready queue is called the arrival
time.
Burst Time: The total amount of time required by the CPU to execute the whole process is
called the Burst Time. This does not include the waiting time.
Completion Time:The Time at which the process enters into the completion state or the time at
which the process completes its execution, is called completion time.
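As a small worked example with hypothetical values: a process with Arrival Time 2 ms, Burst Time 5 ms and Completion Time 12 ms has Turnaround Time = 12 - 2 = 10 ms and Waiting Time = 10 - 5 = 5 ms.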

9. Scheduling Algorithm
The main types of process scheduling algorithms are:
A. First Come First Serve (FCFS)
B. Shortest-Job-First (SJF) Scheduling
C. Shortest Remaining Time
D. Priority Scheduling
E. Round Robin Scheduling
F. Multilevel Queue Scheduling
G. Multilevel Feedback Queue Scheduling
H. Multiprocessor Scheduling
These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are
designed so that once a process enters the running state, it cannot be preempted until it completes

its allotted time, whereas the preemptive scheduling is based on priority where a scheduler may
preempt a low priority running process anytime when a high priority process enters into a ready
state.
A. First Come First Serve (FCFS)
The process that enters the ready state first is executed first by the CPU, irrespective of its burst time or priority. In other words, whichever process enters the ready queue first is executed first, so the First Come First Serve algorithm follows the First In First Out (FIFO) principle and is implemented using a FIFO queue. When a process enters the ready state, its PCB is linked to the tail of the queue, and the CPU executes processes by taking them from the head of the queue. Once the CPU has been allocated to a process, it cannot be taken back until that process finishes its execution.
Advantages of FCFS:
 It is the most simple scheduling algorithm and is easy to implement.
 It is easy to use.
 It follows first come, first serve.
Disadvantages of FCFS:
 This algorithm is non-preemptive so you have to execute the process fully and after that
other processes will be allowed to execute.
 Throughput is not efficient.
 This algorithm has Long Waiting Time
 FCFS suffers from the convoy effect: if a process with a very high burst time arrives first, it is executed first even though processes with much smaller burst times are waiting in the ready state. Because of this, shorter jobs stuck behind larger jobs take too long to complete, and the waiting time, turnaround time and completion time become very high. This hurts the performance of the operating system and leads to poor utilization of resources.
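A small C sketch of how FCFS completion, turnaround and waiting times can be computed is given below; the arrival and burst times are hypothetical sample data, already ordered by arrival time.

#include <stdio.h>

int main(void)
{
    int n = 3;
    int at[] = {0, 1, 2};                 /* arrival times (sample data)               */
    int bt[] = {10, 4, 2};                /* burst times (sample data)                 */
    int time = 0;

    for (int i = 0; i < n; i++) {
        if (time < at[i]) time = at[i];   /* CPU idles until the process arrives       */
        int ct  = time + bt[i];           /* completion time                           */
        int tat = ct - at[i];             /* turnaround time = CT - AT                 */
        int wt  = tat - bt[i];            /* waiting time = TAT - BT                   */
        printf("P%d: CT=%d TAT=%d WT=%d\n", i + 1, ct, tat, wt);
        time = ct;                        /* the next process starts after this one    */
    }
    return 0;
}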

B. Shortest Job First (Non-preemptive)


With FCFS we saw that if a process with a very high burst time arrives first, processes with very low burst times have to wait for their turn. To remove this problem we use a new approach, Shortest Job First (SJF).
In this technique, the process having the minimum burst time at a particular instant of time is executed first. It is a non-preemptive approach: once a process starts its execution, it runs to completion before another process is selected.
Advantages of SJF:
 Short processes will be executed first.
 Maximum throughput
 Minimum average waiting and turnaround time.
Disadvantages of SJF:
 May lead to starvation as if shorter processes keep on coming, then longer processes will
never get a chance to run.
 Time taken by a process must be known to the CPU beforehand, which is not always
possible.
C. Shortest Remaining Time First /Shortest Job First (Preemptive)

This is the preemptive version of the Shortest Job First algorithm. In SRTF, the execution of a process can be stopped after a certain amount of time. On the arrival of every process, the short-term scheduler schedules, from among the available processes and the running process, the one with the least remaining burst time. Once all the processes are in the ready queue, no further preemption is done and the algorithm behaves like SJF scheduling.
Here, at every instant of time, the CPU checks for the shortest remaining job. For example, at time 0 ms, suppose P1 is the shortest process. P1 will execute for 1 ms and then the CPU will check whether some other process is now shorter than P1. If there is no such process, P1 keeps executing for the next 1 ms; if some process is shorter than P1, that process is executed instead. This continues until all processes finish execution.
This algorithm is known as Shortest Remaining Time First because the process is scheduled based on the shortest remaining time of the processes.
Advantages-
 SRTF is optimal and guarantees the minimum average waiting time.
 It provides a standard for other algorithms since no other algorithm performs better than
it.
Disadvantages-
 It can not be implemented practically since burst time of the processes can not be known
in advance.
 It leads to starvation for processes with larger burst time.
 Priorities can not be set for the processes.
 Processes with larger burst time have poor response time.
D. Round-Robin
In this approach of CPU scheduling we have a fixed time quantum, and the CPU is allocated to a process for at most that amount of time at a stretch. It is similar to FCFS scheduling, but preemption is added, allowing the system to switch between processes.
For example, if we have three processes P1, P2 and P3 and our time quantum is 2 ms, then P1 will be given 2 ms for its execution, then P2 will be given 2 ms, then P3 will be given 2 ms. After one cycle P1 will again be given 2 ms, then P2, and so on until the processes complete their execution.
Round robin is generally used in time-sharing environments, and there is no starvation in its case.
Advantages-
 It gives the best performance in terms of average response time.
 It is best suited for time sharing system, client server architecture and interactive system.
Disadvantages-
 It leads to starvation for processes with larger burst time as they have to repeat the cycle
many times.
 Its performance heavily depends on time quantum.
 Priorities cannot be set for the processes.
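A small C sketch of round-robin timing with a fixed quantum is shown below; the burst times and the 2 ms quantum are hypothetical sample data, and all processes are assumed to arrive at time 0.

#include <stdio.h>

int main(void)
{
    int n = 3, quantum = 2;
    int remaining[] = {5, 3, 4};          /* CPU time still needed by each process (sample data) */
    int time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;                /* process i runs for one time slice                   */
            remaining[i] -= slice;
            if (remaining[i] == 0) {      /* process finishes within this slice                  */
                done++;
                printf("P%d completes at %d ms (turnaround = %d ms)\n", i + 1, time, time);
            }
        }
    }
    return 0;
}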

E. Priority Scheduling
In this approach, a priority number is associated with each process, and based on that priority number the CPU selects one process from the list of processes. The priority number can be anything; it is used only to identify which process has a higher priority and which has a lower priority. For example, you can denote 0 as the highest priority and 100 as the lowest priority, or the reverse, i.e. 100 as the highest priority and 0 as the lowest. Processes with the same priority are executed in FCFS order.
Types of Priority Scheduling Algorithm
Priority scheduling can be of two types:
1. Preemptive Priority Scheduling: If a newly arrived process in the ready queue has a higher priority than the currently running process, the CPU is preempted, which means the processing of the current process is stopped and the incoming process with the higher priority gets the CPU for its execution.
2. Non-Preemptive Priority Scheduling: In non-preemptive priority scheduling, if a new process arrives with a higher priority than the currently running process, the incoming process is put at the head of the ready queue, which means it will be processed after the execution of the current process completes.
Advantages-
 It considers the priority of the processes and allows the important processes to run first.
 Priority scheduling in preemptive mode is best suited for real-time operating systems.
Disadvantages-
 Processes with lower priority may starve for the CPU.
 When the time quantum for scheduling is small, the Gantt chart becomes very long.
 This algorithm can lead to larger waiting times.
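A minimal sketch of non-preemptive priority selection (hypothetical process data; following the convention above that a lower number means a higher priority):

#include <stdio.h>

struct proc { const char *name; int priority; int burst; };

/* Pick the unfinished process with the highest priority (lowest number);
 * ties are broken in the order the processes are scanned (FCFS-like). */
int pick_next(struct proc p[], int n, int finished[]) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (finished[i]) continue;
        if (best == -1 || p[i].priority < p[best].priority)
            best = i;
    }
    return best;   /* -1 when everything has finished */
}

int main(void) {
    struct proc p[] = { {"P1", 3, 10}, {"P2", 1, 4}, {"P3", 2, 6} };
    int finished[3] = {0, 0, 0};
    int n = 3, time = 0, next;

    while ((next = pick_next(p, n, finished)) != -1) {
        printf("t=%2d..%2d  %s (priority %d)\n",
               time, time + p[next].burst, p[next].name, p[next].priority);
        time += p[next].burst;       /* runs to completion: non-preemptive */
        finished[next] = 1;
    }
    return 0;
}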

F. Multilevel Queue Scheduling
A multilevel queue scheduling technique partitions the ready queue into several separate queues. Processes are permanently assigned to one queue, usually based on some property of the process, such as memory size, process priority and/or process type. Each queue has its own scheduling algorithm.
Scheduling must also be done between the queues, i.e. deciding how much CPU time one queue gets relative to the others. Two common options are strict priority (no job in a lower-priority queue runs until all higher-priority queues are empty) and round-robin (each queue gets a time slice in turn, possibly of different sizes).
For example, a common division is made between foreground (or interactive) processes and background (or batch) processes. These two types of processes have different response-time requirements and so might have different scheduling needs. In addition, foreground processes may have priority over background processes. The foreground queue might be scheduled by a Round Robin algorithm, while the background queue is scheduled by an FCFS algorithm.

Each queue has absolute priority over lower-priority queues. No process in the batch queue, for example, could run unless the queues for system processes, interactive processes, and interactive editing processes were all empty. If an interactive editing process entered the ready queue while a batch process was running, the batch process would be preempted.
Advantages
 Multilevel queue scheduling helps us apply different scheduling algorithms for different
processes.
 It will have a low scheduling overhead.
Disadvantages:
 There are chances of starving for the lower priority processes.
 It is inflexible in nature.

G. Multilevel feedback Queue Scheduling
 Multilevel feedback queue scheduling allows a process to move between queues. The idea is to separate processes with different CPU-burst characteristics. If a process uses too much CPU time, it is moved to a lower-priority queue. Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.
 In a multilevel feedback queue, we have a list of queues with different priorities, and a higher-priority queue is always executed first.
 The aging technique promotes a lower-priority process to the next higher-priority queue after a certain interval of time.

Advantages :
 MFQS is a flexible scheduling algorithm.
 It allows the different processes to move between queues.
 Prevents CPU starvation.
Disadvantages :
 It is the most complex scheduling algorithm.
 It requires some means of selecting values for its parameters in order to define the best scheduler.
 Moving processes between queues adds CPU overhead.
Example: Assume we have three queues: queue 0, queue 1 and queue 2 (a small sketch of the demotion rule follows this list). A process entering the ready queue is placed in queue 0.
 If it does not finish within 8 milliseconds, it is moved to the tail of queue 1.
 If it still does not complete in queue 1, it is preempted and placed into queue 2.
 Processes in queue 2 run on an FCFS basis, but only when queue 0 and queue 1 are empty.
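A minimal sketch of this demotion rule for a single process, ignoring other arrivals and preemption by higher queues; the 8 ms quantum for queue 0 comes from the example, while the 16 ms quantum for queue 1 and the 30 ms burst are assumptions:

#include <stdio.h>

/* Multilevel feedback queue demotion sketch:
 * queue 0 quantum = 8 ms, queue 1 quantum = 16 ms (assumed),
 * queue 2 = FCFS (runs to completion).                        */
int main(void) {
    int quantum[2] = {8, 16};
    int burst = 30;          /* hypothetical total CPU burst of one process */
    int level = 0;           /* every process starts in queue 0 */

    while (burst > 0) {
        if (level < 2) {
            int slice = burst < quantum[level] ? burst : quantum[level];
            printf("queue %d: run %d ms\n", level, slice);
            burst -= slice;
            if (burst > 0)
                level++;     /* used its full quantum: demote one level */
        } else {
            printf("queue 2 (FCFS): run remaining %d ms\n", burst);
            burst = 0;
        }
    }
    return 0;
}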

10.Thread Scheduling:
Thread scheduling involves scheduling at two boundaries:
 Scheduling of user-level threads (ULT) onto kernel-level threads (KLT) via lightweight processes (LWP), as arranged by the application developer.
 Scheduling of kernel-level threads by the system scheduler to perform different OS functions.
Lightweight process (LWP):
Lightweight processes are threads in user space that act as an interface for the ULTs to access the physical CPU resources. The thread library schedules which thread of a process runs on which LWP and for how long.
In practice, the first boundary of thread scheduling goes beyond specifying the scheduling policy and the priority. It requires two controls to be specified for user-level threads: contention scope and allocation domain.

1. Contention Scope: The word contention here refers to the competition among user-level threads to access kernel resources. Depending on the extent of contention, it is classified as Process Contention Scope and System Contention Scope.
Process Contention Scope (PCS)
The contention takes place among threads within the same process. The thread library schedules the highest-priority PCS thread to access the resources via the available LWPs (the priority is specified by the application developer during thread creation).
System Contention Scope (SCS)
The contention takes place among all threads in the system. In this case, every SCS thread is associated with its own LWP by the thread library and is scheduled by the system scheduler to access the kernel resources.
In Linux and UNIX operating systems, the POSIX Pthread library provides the function pthread_attr_setscope to define the contention scope of a thread at the time of its creation.

int pthread_attr_setscope(pthread_attr_t *attr, int scope);

The first parameter is a pointer to the attribute object of the thread whose contention scope is being defined.
The second parameter defines the contention scope for that thread. It takes one of two values: PTHREAD_SCOPE_SYSTEM or PTHREAD_SCOPE_PROCESS.
If the specified scope value is not supported by the system, the function returns ENOTSUP.
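A minimal usage sketch (compile with -pthread; error handling abbreviated) showing a thread created with system contention scope via this call:

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    (void)arg;
    puts("thread running");
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    pthread_t tid;

    pthread_attr_init(&attr);
    /* Request system contention scope: the thread competes with all
     * threads in the system and is scheduled by the kernel scheduler. */
    if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM) != 0)
        puts("PTHREAD_SCOPE_SYSTEM not supported (ENOTSUP)");

    pthread_create(&tid, &attr, worker, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}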
2. Allocation Domain: The allocation domain is a set of one or more resources for which a thread competes. One ULT can be part of one or more allocation domains. Owing to the high complexity of dealing with hardware and software architectural interfaces, this control is usually not specified by the application developer.

The amount of CPU resources allocated to each process and to each thread depends on the contention scope, the scheduling policy and the priority of each thread defined by the application developer using the thread library, and also on the system scheduler. These user-level threads may have different contention scopes. For every system call made to access kernel resources, a kernel-level thread is created and associated with a separate LWP by the system scheduler.

Advantages of PCS over SCS :


 If all threads are PCS, then context switching, synchronization and scheduling all take place within user space. This reduces system calls and achieves better performance.
 PCS is cheaper than SCS.
 PCS threads share one or more available LWPs, whereas every SCS thread is associated with a separate LWP. For every system call, a separate KLT is created.
 The number of KLTs and LWPs created depends heavily on the number of SCS threads created. This increases the kernel's complexity in handling scheduling and synchronization, and thereby limits SCS thread creation: the number of SCS threads should be kept smaller than the number of PCS threads.

In a multiprocessor architecture, threads can be used to achieve parallelism. There are four approaches for effective thread scheduling to achieve parallelism.
a) Load Sharing: A global pool of ready threads is maintained. When a processor becomes idle, the operating system runs the scheduler, which selects a thread from this queue and allots it to the idle processor, so the load is distributed evenly among all processors. The global queue can be organized and accessed using schemes such as first-come-first-served, smallest number of threads first, or preemptive smallest number of threads first.
Advantages of load sharing
 Load distributed evenly across the processors
 No centralized scheduler required
 Global queue can be organized and accessed
Disadvantages of load sharing
 Central queue needs mutual exclusion
b) Gang Scheduling:
A gang of threads/processes that belong to the same process, or to closely related processes of the same program, is scheduled to run together. This reduces switching and hence increases system performance.
c) Dedicated Processor Assignment:
In this method, a complete application is assigned to a dedicated processor or set of processors. This may result in poor performance, because many of its processes/threads may get blocked while waiting for I/O, leaving the dedicated processors idle for long stretches of time.
d) Dynamic Scheduling:
In this method, threads are assigned to processors at run time; depending on the current situation, a processor is chosen and assigned.

11.Multiple-Processor Scheduling
 When multiple processors are available, then the scheduling gets more complicated,
because now there is more than one CPU which must be kept busy and in effective use at
all times.
 Multiprocessor scheduling focuses on designing the scheduling function for a system that consists of more than one processor.
 The multiple CPUs in the system are in close communication and share a common bus, memory and other peripheral devices, so the system is said to be tightly coupled.
 These systems are used when a large amount of data must be processed, and they are mainly used in applications such as satellite control and weather forecasting.
 Multiprocessor systems may be heterogeneous (different kinds of CPUs) or homogeneous (the same kind of CPU).

Approaches to Multiple-Processor Scheduling –


 One approach is for all scheduling decisions and I/O processing to be handled by a single processor, called the master server, while the other processors execute only user code. This is simple and reduces the need for data sharing. This scenario is called Asymmetric Multiprocessing.
 A second approach uses Symmetric Multiprocessing, where each processor is self-scheduling. All processes may be in a common ready queue, or each processor may have its own private queue of ready processes. Scheduling proceeds by having the scheduler for each processor examine the ready queue and select a process to execute.
Processor Affinity
 Processor Affinity means a process has an affinity for the processor on which it is currently
running.
 When a process runs on a specific processor, there are certain effects on the cache memory.
The data most recently accessed by the process populate the cache for the processor. As a
result, successive memory access by the process is often satisfied in the cache memory.
 Now, suppose the process migrates to another processor. In that case, the contents of the
cache memory must be invalidated for the first processor, and the cache for the second
processor must be repopulated.
There are two types of processor affinity, such as:
 Soft Affinity: Soft affinity occurs when the system attempts to keep a process on the same processor but makes no guarantee that it will do so.
 Hard Affinity: Linux and some other operating systems support hard affinity, in which a process specifies that it is not to be moved between processors (see the sketch below).
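As an illustrative, Linux-specific sketch (not part of the original notes), hard affinity can be requested with the sched_setaffinity system call:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);   /* restrict this process to CPU 0 only */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(set), &set) == -1) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pid %d is now pinned to CPU 0\n", getpid());
    return 0;
}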

Load Balancing: Load balancing is the practice of keeping the workload evenly distributed across all processors in an SMP system. On SMP (symmetric multiprocessing) systems it is important to keep the workload balanced among all processors to fully utilize the benefits of having more than one processor; otherwise, one or more processors may sit idle while other processors have high workloads and queues of processes awaiting the CPU. There are two general approaches to load balancing:

Push Migration: In push migration, a task routinely checks the load on each processor. If it
finds an imbalance, it evenly distributes the load on each processor by moving the processes
from overloaded to idle or less busy processors.
Pull Migration: Pull Migration occurs when an idle processor pulls a waiting task from a busy
processor for its execution.

12.Scheduling Algorithms Evaluation


The first thing we need to decide is how we will evaluate the algorithms. To do this we need to
decide on the relative importance of the factors Fairness, Efficiency, Response Times,
Turnaround and Throughput. Only once we have decided on our evaluation method can we carry
out the evaluation.
a) Deterministic Modeling (analytic evaluation): Deterministic modeling is one type of analytic evaluation. This method takes a particular predetermined workload and compares the performance of different scheduling algorithms on that workload.
Consider the following example of four processes with different CPU burst times.
[Figure: Processes with different CPU burst times]


Our aim is to find out the best algorithm from FCFS, SJF and RR algorithms by calculating the
average waiting time in each case.
FCFS algorithm:
[Figure: Gantt chart for FCFS]

The average waiting time in this case is (8 + 32 + 34) / 4 = 18.5 msec.
SJF algorithm:
[Figure: Gantt chart for SJF]
The average waiting time in this case is (2 + 8 + 16) / 4 = 6.5 msec.

RR algorithm: (with a time quantum of 8 msec)

The average waiting time in this case is (0 + (8 + 8) + 16 + 18) / 4 = 12.5 msec.

Since the SJF algorithm results in only 6.5 msec of average waiting time, it can be evaluated as
the best algorithm in this case.
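The averages above can be checked programmatically. The sketch below assumes four processes that all arrive at time 0 with burst times of 8, 24, 2 and 6 ms (one set of bursts consistent with the averages quoted above; the original figure with the exact process data is not reproduced here) and recomputes the FCFS and SJF average waiting times. The RR case would additionally need a quantum rotation like the one sketched earlier.

#include <stdio.h>

/* Deterministic-modeling sketch: compute average waiting times for a
 * fixed workload.  Burst times are an assumption consistent with the
 * averages quoted in the text (all processes arrive at time 0).      */
static double avg_wait(const int burst[], int n) {
    int time = 0, total = 0;
    for (int i = 0; i < n; i++) {
        total += time;      /* waiting time = start time (arrival = 0) */
        time += burst[i];
    }
    return (double)total / n;
}

int main(void) {
    int fcfs[] = {8, 24, 2, 6};   /* arrival (FCFS) order */
    int sjf[]  = {2, 6, 8, 24};   /* same bursts, sorted  */
    printf("FCFS average wait = %.1f ms\n", avg_wait(fcfs, 4));  /* 18.5 */
    printf("SJF  average wait = %.1f ms\n", avg_wait(sjf, 4));   /*  6.5 */
    return 0;
}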
The advantages of deterministic modeling are that it is exact and fast to compute. The main disadvantage of this evaluation method is that it applies only to the given workload, and we may not know the exact details of all the processes in advance.
b) Queuing Models
Another method of evaluating scheduling algorithms is to use queuing theory. Using data from
real processes we can arrive at a probability distribution for the length of a burst time and the I/O
times for a process. We can now generate these times with a certain distribution.
We can also generate arrival times for processes (arrival time distribution). If we define a queue
for the CPU and a queue for each I/O device we can test the various scheduling algorithms using
queuing theory.
Knowing the arrival rates and the service rates we can calculate various figures such as average
queue length, average wait time, CPU utilization etc.
One useful formula is Little's Formula.
n = λw
Where
n is the average queue length
λ is the average arrival rate for new processes (e.g. five a second)
w is the average waiting time in the queue
Knowing two of these values we can, obviously, calculate the third.
For example, if we know that eight processes arrive every second and there are normally sixteen
processes in the queue we can compute that the average waiting time per process is two seconds.
The main disadvantage of using queuing models is that it is not always easy to define realistic
distribution times and we have to make assumptions. This results in the model only being an
approximation of what actually happens.

c) Simulations
Simulations provide a relatively more accurate evaluation of scheduling algorithms. Here, we
generate the data required for simulations in a number of ways. After the simulation executes,
the statistics that indicate the performance are gathered and thus the efficiency of an algorithm is
evaluated.
However, simulations are expensive, consuming both money and computer time. Moreover, we cannot rely upon the results completely. It is also cumbersome to design, code and debug a simulator.

d) Implementation of Algorithms:
The only way to evaluate the performance of an algorithm in an efficient and precise manner is to code it, put it into a real environment (operating system), and then observe its performance.

However, this is a very costly approach. Also, the operating system may need to be changed to
some extent to support the new algorithm.
