
TAITA TAVETA UNIVERSITY

SIC 2141 – Operating System


Session 3: Process Management

Session Topics
Overview

Learning Outcome
a) Describe process, process management, process architecture and process blocks as
well as different types of programs
b) Describe process coordination and synchronization, critical section, critical section
problems and how to solve critical section problems
c) Describe Process Scheduling, Process Communication and Real-time Management

Topics to be covered in this session:


 Process and Program Concepts,
 Process Coordination and Synchronization,
 Process Scheduling,
 Inter Process Communication,
 Real-Time Clock Management

Session 3: Process Management


Firstly, open an MSWord document to type a brief description of the answers from your web searches.
Remember to continually save your document. Secondly, using Chapter 4 of your textbook (Abraham
Silberschatz, Greg Gagne, Peter B. Galvin, 2018, Operating System Concepts, Wiley; 10th Edition.
ISBN-10: 1119456339, ISBN-13: 978-1119456339), the links below and any of your favorite search
engines and sites as a guide:

Focus Task 3.1
1. Describe the two general roles of an operating system, and elaborate why these roles
are important
Focus Task 3.2
1. What is a process scheduler? State the characteristics of a good process scheduler.
2. Explain time slicing. How does its duration affect the overall working of the system?
3. What is Shortest Remaining Time (SRT) scheduling?
Focus Task 3.3
1. What are the different principles which must be considered while selecting a
scheduling algorithm?
Focus Task 3.4
Read Chapter 3 of your textbook (Abraham Silberschatz, Greg Gagne, Peter B. Galvin, 2018,
Operating System Concepts, Wiley; 10th Edition).

Overview of the functions of Operating Systems


An Operating System acts as a communication bridge (interface) between the user and computer
hardware. The purpose of an operating system is to provide a platform on which a user can execute
programs in a convenient and efficient manner.

An operating system is a piece of software that manages the allocation of computer hardware. The
coordination of the hardware must be appropriate to ensure the correct working of the computer system
and to prevent user programs from interfering with the proper working of the system.

Example: Just as a boss gives orders to an employee, in a similar way we pass our requests to
the Operating System. The main goal of the Operating System is thus to make the computer
environment more convenient to use, and the secondary goal is to use the resources in the most
efficient manner.

Essential Reading
Abraham Silberschatz, Greg Gagne, Peter B. Galvin, 2018, Operating System Concepts, Wiley; 10th
Edition. ISBN-10: 1119456339, ISBN-13: 978-1119456339

Process and Program Concepts

Process Management in Operating System: PCB in OS

What is a Process?

A process is the execution of a program that performs the actions specified in that program. It
can be defined as an execution unit where a program runs. The OS helps you create, schedule,
and terminate the processes used by the CPU. A process created by the main process is
called a child process.

Process operations can be easily controlled with the help of the PCB (Process Control Block).
You can consider it the brain of the process, containing all the crucial information related to
the process, such as process ID, priority, state, CPU registers, etc.

What is Process Management?

Process management involves various tasks like the creation, scheduling, and termination of
processes, as well as deadlock handling. A process is a program under execution, and process
management is an important part of modern-day operating systems. The OS must allocate
resources that enable processes to share and exchange information. It also protects the
resources of each process from other processes and allows synchronization among processes.

It is the job of the OS to manage all the running processes of the system. It handles operations
by performing tasks such as process scheduling and resource allocation.

Process Architecture


The architecture of a process consists of the following sections:

 Stack: The stack stores temporary data like function parameters, return addresses,
and local variables.
 Heap: The heap holds memory that is dynamically allocated to the process during its run time.
 Data: It contains the global and static variables.
 Text: The text section includes the current activity, which is represented by the value of the
Program Counter.

Process Control Blocks

PCB is the full form of Process Control Block. It is a data structure maintained by the
Operating System for every process, identified by an integer Process ID (PID). It stores all
the information required to keep track of a running process.

It is also accountable for storing the contents of processor registers. These are saved when the
process moves out of the running state and are restored when it returns to it. The information is
quickly updated in the PCB by the OS as soon as the process makes a state transition.

Process States


A process state is a condition of the process at a specific instant of time. It also defines the
current position of the process.

There are mainly seven stages of a process:

 New: The new process is created when a specific program is loaded from secondary
memory (hard disk) into primary memory (RAM).
 Ready: In the ready state, the process is loaded into primary memory and is ready
for execution.
 Waiting: The process is waiting for the allocation of CPU time and other resources for
execution.
 Executing: The process is in the execution (running) state.
 Blocked: A time interval when a process is waiting for an event like an I/O operation to
complete.
 Suspended: The suspended state describes a process that is ready for execution
but has not been placed in the ready queue by the OS.
 Terminated: The terminated state specifies the point at which a process is terminated.

After a process completes its execution, all the resources it used are released and its memory
becomes free.

Process Control Block(PCB)

Every process is represented in the operating system by a process control block, which is also
called a task control block.

Here, are important components of PCB:


 Process state: A process can be new, ready, running, waiting, etc.
 Program counter: The program counter holds the address of the next instruction to be
executed for the process.
 CPU registers: This component includes accumulators, index and general-purpose
registers, and condition-code information.
 CPU scheduling information: This component includes the process priority, pointers to
scheduling queues, and various other scheduling parameters.
 Accounting information: It includes the amount of CPU time and real time used, time
limits, job or process numbers, etc.
 Memory-management information: This includes the values of the base and limit
registers and the page or segment tables, depending on the memory system used by the
operating system.
 I/O status information: This block includes the list of open files, the list of I/O devices
allocated to the process, etc.
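
To make these components concrete, here is a minimal, hypothetical sketch of a PCB as a C structure. The field names and sizes are illustrative only; a real kernel's PCB (for example, Linux's struct task_struct) holds far more fields.

/* Hypothetical, minimal PCB sketch -- field names are illustrative */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

struct pcb {
    int           pid;                 /* integer Process ID                 */
    proc_state_t  state;               /* process state                      */
    unsigned long program_counter;     /* address of the next instruction    */
    unsigned long registers[16];       /* saved CPU registers                */
    int           priority;            /* CPU-scheduling information         */
    struct pcb   *next_in_queue;       /* pointer for a scheduling queue     */
    unsigned long cpu_time_used;       /* accounting information             */
    unsigned long base_reg, limit_reg; /* memory-management information      */
    int           open_files[16];      /* I/O status: open file descriptors  */
};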

Summary:

 A process is defined as the execution of a program that performs the actions specified in
that program.
 Process management involves various tasks like the creation, scheduling, and termination
of processes, as well as deadlock handling.
 The important elements of Process architecture are 1) Stack 2) Heap 3) Data, and 4)
Text

 PCB is the full form of Process Control Block. It is a data structure maintained by the
Operating System for every process.
 A process state is a condition of the process at a specific instant of time.
 Every process is represented in the operating system by a process control block, which
is also called a task control block.

Process Coordination and Synchronization

Process Coordination
Process coordination, or concurrency control, deals with mutual exclusion and synchronization.

Mutual exclusion: ensuring that two concurrent activities do not access shared data (a resource)
at the same time. A critical region is a set of instructions that only one process may execute
at a time.

Synchronization: using a condition to coordinate the actions of concurrent activities. It is a
generalization of mutual exclusion.

When considering process coordination, we must keep in mind the following situations:

1. Deadlock occurs when two activities are waiting for each other and neither can
proceed. For example:

Suppose processes A and B each need two tape drives to continue, but only one drive
has been assigned to each of them. If the system has only 2 drives, neither process can
ever proceed.

2. Starvation occurs when a blocked activity is consistently passed over and not
allowed to run. For example:

Consider two CPU bound jobs, one running at a higher priority than the other. The lower priority
process will never be allowed to execute. Some synchronization primitives lead to starvation.

Critical Section

Instructions that must be executed while another activity is excluded are called a critical
section. For instance, the statement A[n++] = item may be executed by only one of the
processes AddChar1 and AddChar2 at a time; this statement is a critical section.

Starvation and Deadlock

Consider the following situation:

Process p1 is in a critical section c1 waiting for process p2 to get out of a critical section c2.
Process p2, meanwhile, is waiting for process p1 to get out of c1. Both p1 and p2 are stuck:
each is waiting to get into a critical section being executed by the other. This sort of circular
waiting is called a deadlock. In Europe, it is known by the more striking name of “deadly
embrace”. Another problem related to process coordination is starvation. This problem arises
when one or more processes waiting for a critical section are never allowed to enter the region.
Different process coordination techniques are often judged on the basis of their ability to prevent
these two problems.

Other Requirements

There are many other metrics of measuring the effectiveness of a process coordination
technique:

 Concurrency: How much concurrency does it allow while meeting the process
coordination constraints?
 Functionality: What range of process coordination constraints does it support? Does
it support both mutual exclusion and synchronization? What kinds of synchronization
constraints does it support?
 Programmability: What is the programming cost of using the approach? How easy is it
to make mistakes when using it?
 Efficiency: What are the space and time costs? Does it result in frequent context
switches? Worse, does it do polling?
 Multiprocessor systems: Does it work on multiprocessor systems?

Semaphores in Operating System


Semaphores are integer variables that are used to solve the critical section problem by means
of two atomic operations, wait and signal, that are used for process synchronization.

The definitions of wait and signal are as follows −

 Wait

The wait operation decrements the value of its argument S if S is positive. If S is zero or
negative, the caller waits (in this implementation, by busy-waiting) until S becomes positive
before decrementing it.

wait(S)
{
    while (S <= 0)
        ;       // busy-wait until S becomes positive
    S--;
}

 Signal
The signal operation increments the value of its argument S.

signal(S)
{
    S++;
}

The following invariant is true for semaphores:

A nonnegative count means that the queue is empty; a semaphore count of negative n means
that the queue contains n waiting processes.

The four operations are:

 Wait: decrements count, and enqueues process if count is negative.


 Signal: increments count and make a process ready if waiting.
 Create: generates a new semaphore with a count.
 Delete: destroys an existing semaphore.

The last two operations are non-standard; they allow semaphores to be created and destroyed
dynamically. The description of semaphores given above is implementation-oriented; a valid
implementation might, for instance, increment rather than decrement a count in wait. Here is a
more general description of the semaphore semantics. A semaphore is associated with an initial
count, a FIFO process queue, and the wait and signal operations. It keeps track of how many
times wait and signal are called. The signal operation dequeues the process, if any, at the head
of the queue.

The wait operation puts the calling process in the queue if:

    (number of previous signals) + (initial count) < (number of previous waits)

Types of Semaphores
There are two main types of semaphores i.e. counting semaphores and binary semaphores.
Details about these are given as follows −

 Counting Semaphores: These are integer-valued semaphores with an unrestricted
value domain. They are used to coordinate resource access, where the semaphore count is
the number of available resources. If resources are added, the semaphore count is
automatically incremented; if resources are removed, the count is decremented.

 Binary Semaphores: Binary semaphores are like counting semaphores, but their
value is restricted to 0 and 1. The wait operation only proceeds when the semaphore is 1,
and the signal operation succeeds when the semaphore is 0. Binary semaphores are
sometimes easier to implement than counting semaphores.
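
To make semaphore use concrete, here is a brief sketch using POSIX unnamed semaphores from <semaphore.h> (sem_init, sem_wait, sem_post). A semaphore initialized to 1 acts as a binary semaphore guarding a critical section; the names mutex, worker, and shared_counter are illustrative, not part of any standard.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex;            /* binary semaphore: initial count 1        */
int shared_counter = 0; /* shared resource the semaphore guards     */

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);   /* wait (P): decrement or block          */
        shared_counter++;   /* critical section                      */
        sem_post(&mutex);   /* signal (V): increment, wake a waiter  */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);            /* 0 = shared between threads */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);   /* always 200000    */
    sem_destroy(&mutex);
    return 0;
}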

Advantages of Semaphores
Some of the advantages of semaphores are as follows −

 Semaphores allow only one process into the critical section. They follow the mutual
exclusion principle strictly and are much more efficient than some other methods of
synchronization.

 There is no resource wastage due to busy waiting, because processor time is not wasted
unnecessarily checking whether a condition is fulfilled to allow a process to access the
critical section.

 Semaphores are implemented in the machine-independent code of the microkernel, so
they are machine independent.

Disadvantages of Semaphores
Some of the disadvantages of semaphores are as follows −

 Semaphores are complicated, so the wait and signal operations must be implemented in
the correct order to prevent deadlocks.

 Semaphores are impractical for large-scale use, as their use leads to a loss of modularity.
This happens because the wait and signal operations prevent the creation of a
structured layout for the system.

 Semaphores may lead to priority inversion, where low-priority processes access
the critical section first and high-priority processes access it later.

Process Synchronization

Process Synchronization: Critical Section Problem in OS

What is Process Synchronization?

Process Synchronization is the task of coordinating the execution of processes in a way that
no two processes can have access to the same shared data and resources.

It is specially needed in a multi-process system when multiple processes are running together,
and more than one processes try to gain access to the same shared resource or data at the
same time.

This can lead to inconsistency of shared data: a change made by one process is not
necessarily reflected when other processes access the same shared data. To avoid this type
of data inconsistency, the processes need to be synchronized with each other.

On the basis of synchronization, processes are categorized as one of the following two types:

 Independent Process: Execution of one process does not affect the execution of other
processes.
 Cooperative Process: Execution of one process affects the execution of other processes.

How Does Process Synchronization Work?


For example, suppose process A is changing the data in a memory location while another
process B is trying to read the data from the same memory location. There is a high probability
that the data read by the second process will be erroneous.

Sections of a Program

Here, are four essential elements of the critical section:

 Entry Section: It is the part of the process which decides whether a particular process
may enter.
 Critical Section: This part allows one process to enter and modify the shared variable.
 Exit Section: The exit section allows the other processes that are waiting in the Entry
Section to enter the Critical Section. It also ensures that a process that has finished its
execution is removed through this section.
 Remainder Section: All other parts of the code, which are not in the Critical, Entry, and
Exit Sections, are known as the Remainder Section.

What is Critical Section Problem?

A critical section is a segment of code which can be accessed by a single process at a specific
point of time. It means that in a group of cooperating processes, at a given point of time, only
one process must be executing its critical section. If any other process also wants to execute its
critical section, it must wait until the first one finishes.

The section consists of shared data resources that need to be accessed by other processes.

 The entry to the critical section is handled by the wait() function, and it is represented as
P().
 The exit from a critical section is controlled by the signal() function, represented as V().

In the critical section, only a single process can be executed. Other processes, waiting to
execute their critical section, need to wait until the current process completes its execution.

The process synchronization problem also arises for cooperative processes, because resources
are shared among them. Process synchronization was introduced to handle problems that arise
during the execution of multiple processes.

Rules for Critical Section

A solution to the critical section problem must enforce all three rules:

 Mutual Exclusion: Not more than one process may execute in its critical section
at one time. (A mutex is a special type of binary semaphore used for controlling
access to the shared resource; it may include a priority inheritance mechanism to
avoid extended priority inversion problems.)
 Progress: This rule applies when no one is in the critical section and someone
wants in. The processes that are not in their remainder section should then
decide who may enter, in a finite time.
 Bounded Waiting: When a process makes a request to enter its critical section,
there is a limit on how many other processes may enter their critical sections
first. When the limit is reached, the system must grant the requesting process
entry to its critical section.

Solutions to The Critical Section

In Process Synchronization, critical section plays the main role so that the problem must be
solved.

Here are some widely used methods to solve the critical section problem.

a. Peterson Solution

Peterson's solution is a widely used solution to critical section problems. The algorithm was
developed by computer scientist Gary L. Peterson, which is why it is named Peterson's solution.

In this solution, when one process is executing in its critical section, the other process executes
only the rest of its code, and vice versa. This ensures that only a single process runs in its
critical section at a specific time.

Example

PROCESS Pi:
    FLAG[i] = true
    while ( (TURN != i) AND (CS is not free) ) { wait; }
    // CRITICAL SECTION
    FLAG[i] = false
    TURN = j    // choose another process to go to the CS

 Assume there are N processes (P1, P2, ... PN), and every process at some point of time
requires to enter the Critical Section.
 A FLAG[] array of size N is maintained, which is false by default. Whenever a process
requires to enter the critical section, it has to set its flag to true. For example, if Pi wants
to enter, it will set FLAG[i] = TRUE.
 Another variable called TURN indicates the process number that is currently allowed to
enter the CS.
 The process which enters the critical section, while exiting, changes TURN to another
number from the list of ready processes.
 Example: if TURN is 2, then P2 enters the critical section, and while exiting it sets
TURN = 3; therefore P3 breaks out of its wait loop.
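
For the classic two-process case, Peterson's algorithm can be sketched in C as below. This is an illustrative sketch, not the textbook's code: the C11 atomics stand in for the memory-ordering guarantees the pseudocode silently assumes, since plain variables would not be safe on modern CPUs.

#include <stdatomic.h>
#include <stdbool.h>

/* Classic two-process Peterson's algorithm (processes 0 and 1). */
atomic_bool flag[2];  /* flag[i]: process i wants to enter */
atomic_int  turn;     /* whose turn it is to yield          */

void enter_critical(int i) {
    int j = 1 - i;                       /* the other process         */
    atomic_store(&flag[i], true);        /* announce intent           */
    atomic_store(&turn, j);              /* politely yield first      */
    while (atomic_load(&flag[j]) &&      /* busy-wait while the other */
           atomic_load(&turn) == j)      /* wants in and has the turn */
        ;                                /* spin                      */
}

void exit_critical(int i) {
    atomic_store(&flag[i], false);       /* drop the intent flag      */
}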

Race Condition
When more than one process executes the same code or accesses the same memory or a
shared variable, there is a possibility that the output or the value of the shared variable comes
out wrong; all the processes then "race" to claim that their output is correct. This condition is
known as a race condition. When several processes access and manipulate the same data
concurrently, the outcome depends on the particular order in which the accesses take place.
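
A minimal sketch of a race condition (all names are illustrative): two POSIX threads increment a shared counter without synchronization, so increments can be lost and the final value usually falls short of the expected 200000.

#include <pthread.h>
#include <stdio.h>

int counter = 0;   /* shared variable, deliberately unprotected */

void *racer(void *arg) {
    for (int i = 0; i < 100000; i++)
        counter++;   /* read-modify-write is not atomic, so updates race */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, racer, NULL);
    pthread_create(&b, NULL, racer, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Expected 200000, but lost updates typically leave it lower. */
    printf("counter = %d\n", counter);
    return 0;
}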

Solution to Critical Section Problem


A solution to the critical section problem must satisfy the following three conditions:
1. Mutual Exclusion
Out of a group of cooperating processes, only one process can be in its critical section at a
given point of time.

2. Progress
If no process is in its critical section, and if one or more threads want to execute their critical
section then any one of these threads must be allowed to get into its critical section.

3. Bounded Waiting
After a process makes a request for getting into its critical section, there is a limit for how many
other processes can get into their critical section, before this process's request is granted. So
after the limit is reached, the system must grant the process permission to get into its critical
section.

Synchronization Hardware

Sometimes the problems of the critical section are also resolved by hardware. Some operating
systems offer lock functionality, where a process acquires a lock when entering the critical
section and releases the lock after leaving it.

So when another process tries to enter the critical section, it will not be able to enter, as the
section is locked. It can enter only when the lock is free, by acquiring the lock itself.

Mutex Locks

Synchronization hardware is not a simple method to implement for everyone, so a strict
software method known as Mutex Locks was also introduced.

In this approach, in the entry section of the code, a LOCK is obtained over the critical resources
used inside the critical section. In the exit section, that lock is released.
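
A short sketch of this lock/unlock pattern with a POSIX mutex (pthread_mutex_lock / pthread_mutex_unlock); the lock in the entry section and the unlock in the exit section bracket the critical section. The names are illustrative.

#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int shared_data = 0;

void *worker(void *arg) {
    pthread_mutex_lock(&lock);    /* entry section: acquire the lock */
    shared_data++;                /* critical section                */
    pthread_mutex_unlock(&lock);  /* exit section: release the lock  */
    return NULL;
}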

Semaphore Solution
A semaphore is simply a non-negative variable that is shared between threads. It is another
algorithm or solution to the critical section problem. It is a signaling mechanism: a thread that
is waiting on a semaphore can be signaled by another thread.
It uses two atomic operations, 1) wait and 2) signal, for process synchronization.

Example

WAIT(S):
    while (S <= 0)
        ;           // busy-wait
    S = S - 1;

SIGNAL(S):
    S = S + 1;

Summary:
 Process synchronization is the task of coordinating the execution of processes in a way
that no two processes can have access to the same shared data and resources.
 The four elements of a critical section are 1) Entry section 2) Critical section 3) Exit
section 4) Remainder section.
 A critical section is a segment of code which can be accessed by a single process at a
specific point of time.
 The three rules which must be enforced by a critical section solution are: 1) Mutual
Exclusion 2) Progress 3) Bounded Waiting.
 Mutual exclusion is a special type of binary semaphore which is used for controlling
access to the shared resource.
 Progress applies when no one is in the critical section and someone wants in.
 In bounded waiting, after a process makes a request for getting into its critical
section, there is a limit on how many other processes can get into their critical sections first.
 Peterson's solution is a widely used solution to critical section problems.
 Problems of the critical section are also resolved by synchronization hardware.
 Synchronization hardware is not a simple method to implement for everyone, so the
strict software method known as Mutex Locks was also introduced.

 Semaphore is another algorithm or solution to the critical section problem.

Classical Problems of Synchronization


Semaphores can be used in other synchronization problems besides mutual exclusion.

Below are some of the classical problems depicting flaws of process synchronization in systems
where cooperating processes are present.

We will discuss the following three problems:

a) Bounded Buffer (Producer-Consumer) Problem


b) Dining Philosophers Problem
c) The Readers Writers Problem

a) Bounded Buffer (Producer-Consumer) Problem

This problem is generalized in terms of the Producer-Consumer problem, where a finite buffer
pool is used to exchange messages between producer and consumer processes.

Because the buffer pool has a maximum size, this problem is often called the Bounded Buffer
problem.

The solution to this problem is to create two counting semaphores, "full" and "empty", to keep
track of the current number of full and empty buffers respectively, as sketched below.
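
A possible sketch of that solution with POSIX semaphores: empty_slots starts at the buffer size, full_slots at zero, and a binary semaphore mutex protects the buffer itself. All names and the buffer size are illustrative.

#include <semaphore.h>

#define BUF_SIZE 8

int buffer[BUF_SIZE];
int in = 0, out = 0;          /* insert/remove positions           */
sem_t empty_slots;            /* counts empty slots, starts at 8   */
sem_t full_slots;             /* counts full slots, starts at 0    */
sem_t mutex;                  /* binary semaphore guarding buffer  */

void producer_put(int item) {
    sem_wait(&empty_slots);   /* wait for an empty slot            */
    sem_wait(&mutex);
    buffer[in] = item;        /* critical section: insert item     */
    in = (in + 1) % BUF_SIZE;
    sem_post(&mutex);
    sem_post(&full_slots);    /* announce one more full slot       */
}

int consumer_get(void) {
    int item;
    sem_wait(&full_slots);    /* wait for a full slot              */
    sem_wait(&mutex);
    item = buffer[out];       /* critical section: remove item     */
    out = (out + 1) % BUF_SIZE;
    sem_post(&mutex);
    sem_post(&empty_slots);   /* announce one more empty slot      */
    return item;
}

/* Initialization (e.g., in main):
   sem_init(&empty_slots, 0, BUF_SIZE);
   sem_init(&full_slots,  0, 0);
   sem_init(&mutex,       0, 1);                                   */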

b) Dining Philosophers Problem

The dining philosophers problem involves the allocation of limited resources to a group of
processes in a deadlock-free and starvation-free manner.

Five philosophers sit around a table with five chopsticks/forks kept beside them and a bowl of
rice in the center. When a philosopher wants to eat, he uses two chopsticks: one from his left
and one from his right. When a philosopher wants to think, he puts down both chopsticks at
their original places.

c) The Readers Writers Problem

 In this problem there are some processes (called readers) that only read the shared
data and never change it, and there are other processes (called writers) that may
change the data in addition to, or instead of, reading it.
 There are various types of readers-writers problems, most centered on the relative
priorities of readers and writers.

Peterson’s Solution preserves all three conditions:


 Mutual Exclusion is assured as only one process can access the critical section at any
time.
 Progress is also assured, as a process outside the critical section does not block other
processes from entering the critical section.
 Bounded Waiting is preserved as every process gets a fair chance.

Disadvantages of Peterson’s Solution


 It involves Busy waiting
 It is limited to 2 processes.

Process Scheduling
Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy.
Process scheduling is an essential part of a multiprogramming operating system. Such
operating systems allow more than one process to be loaded into executable memory at a
time, and the loaded processes share the CPU using time multiplexing.

Process Scheduling Queues


The OS maintains all PCBs in process scheduling queues. The OS maintains a separate
queue for each of the process states, and the PCBs of all processes in the same execution state
are placed in the same queue. When the state of a process changes, its PCB is unlinked from
its current queue and moved to its new state queue.
The Operating System maintains the following important process scheduling queues:
 Job queue - This queue keeps all the processes in the system.
 Ready queue - This queue keeps the set of all processes residing in main memory, ready
and waiting to execute. A new process is always put in this queue.
 Device queues - The processes which are blocked due to the unavailability of an I/O
device constitute this queue.

The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.).
The OS scheduler determines how to move processes between the ready and run queues; the
run queue can have only one entry per processor core on the system.
Two State Process Model
The two state process model refers to the running and non-running states, which are described
below.
1. Running: When a new process is created by the Operating System, it enters the system
in the running state.
2. Not Running: Processes that are not running are kept in a queue, waiting for their turn
to execute. Each entry in the queue is a pointer to a particular process, and the queue is
implemented using a linked list. The dispatcher works as follows: when a process is
interrupted, it is transferred to the waiting queue; if the process has completed or aborted,
it is discarded. In either case, the dispatcher then selects a process from the queue to
execute.

Schedulers
Schedulers are special system software that handles process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to run.
Schedulers are of three types:

a) Long Term Scheduler
b) Short Term Scheduler
c) Medium Term Scheduler

a. Long Term Scheduler


It is also called the job scheduler. The long-term scheduler determines which programs are
admitted to the system for processing.
The job scheduler selects processes from the queue and loads them into memory for execution;
the process is loaded into memory for CPU scheduling. The primary objective of the job
scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound. It also
controls the degree of multiprogramming. If the degree of multiprogramming is stable, then
the average rate of process creation must be equal to the average departure rate of processes
leaving the system.
On some systems, the long-term scheduler may be unavailable or minimal; time-sharing
operating systems have no long-term scheduler. The long-term scheduler is used when a
process changes state from new to ready.

b. Short Term Scheduler


It is also called the CPU scheduler. Its main objective is to increase system performance in
accordance with the chosen set of criteria. It handles the change of a process from the ready
state to the running state. The CPU scheduler selects a process from among the processes that
are ready to execute and allocates the CPU to it.
The short-term scheduler, also known as the dispatcher, executes most frequently and makes
the fine-grained decision of which process to execute next. The short-term scheduler is faster
than the long-term scheduler.

c. Medium Term Scheduler


Medium-term scheduling is a part of swapping. It removes processes from memory and thereby
reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling
the swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended process
cannot make any progress towards completion. In this condition, to remove the process from
memory and make space for another process, the suspended process is moved to secondary
storage. This procedure is called swapping, and the process is said to be swapped out or
rolled out. Swapping may be necessary to improve the process mix.

Comparison between Schedulers

S.N. | Long Term Scheduler | Short Term Scheduler | Medium Term Scheduler
1. | It is a job scheduler. | It is a CPU scheduler. | It is a process-swapping scheduler.
2. | Speed is lesser than the short-term scheduler. | Speed is the fastest among the three. | Speed is in between the short- and long-term schedulers.
3. | It controls the degree of multiprogramming. | It provides less control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
4. | It is almost absent or minimal in time-sharing systems. | It is also minimal in time-sharing systems. | It is a part of time-sharing systems.
5. | It selects processes from the pool and loads them into memory for execution. | It selects from among the processes that are ready to execute. | It can re-introduce a process into memory, and execution can be continued.

What is Context switch?

A context switch is a method to store and restore the state (context) of a CPU in the PCB, so
that process execution can be resumed from the same point at a later time. The context
switching method is important for a multitasking OS.

Summary:

 Process scheduling is an OS task that schedules processes in their different states, like
ready, waiting, and running.
 The two-state process model consists of 1) Running and 2) Not Running.
 Process scheduling maximizes the number of interactive users within acceptable
response times.
 A scheduler is a type of system software that allows you to handle process scheduling.
 The three types of schedulers are 1) Long-term 2) Short-term 3) Medium-term.
 The long-term scheduler regulates the programs admitted to the system; it selects
processes from the queue and loads them into memory for execution.
 The medium-term scheduler handles the swapped-out processes.
 The main goal of the short-term scheduler is to boost system performance according to
a set of criteria.
 The long-term scheduler is also known as the job scheduler, the short-term scheduler as
the CPU scheduler, and the medium-term scheduler as the swapping scheduler.

Round Robin Scheduling Algorithm with Example

What is Round-Robin Scheduling?

The name of this algorithm comes from the round-robin principle, where each person gets an
equal share of something in turns. It is the oldest, simplest scheduling algorithm, which is mostly
used for multitasking.

In round-robin scheduling, each ready task runs turn by turn in a cyclic queue for a limited
time slice. This algorithm also offers starvation-free execution of processes.

Characteristics of Round-Robin Scheduling

 Round robin is a pre-emptive algorithm.
 The CPU is shifted to the next process after a fixed interval of time, called the time
quantum (time slice).
 The process that is preempted is added to the end of the queue.
 Round robin is a hybrid, clock-driven model.
 The time slice should be the minimum that is assigned to a specific task to be
processed; however, it may differ from OS to OS.
 It is a real-time algorithm in the sense that it responds to an event within a specific
time limit.
 Round robin is one of the oldest, fairest, and easiest algorithms.
 It is a widely used scheduling method in traditional OS.

Example of Round-robin Scheduling

Consider the following three processes (time quantum = 2):

Process | Burst time
P1 | 4
P2 | 3
P3 | 5

Step 1) The execution begins with process P1, which has a burst time of 4. Here, every process
executes for 2 seconds; P2 and P3 are still in the waiting queue.

Step 2) At time =2, P1 is added to the end of the Queue and P2 starts executing

Step 3) At time=4, P2 is preempted and added at the end of the queue. P3 starts executing.

Step 4) At time=6, P3 is preempted and added at the end of the queue. P1 starts executing.

Step 5) At time=8, P1 has a burst time of 4. It has completed execution. P2 starts execution

Step 6) P2 has a burst time of 3. It has already executed for 2 intervals. At time=9, P2
completes execution. Then, P3 starts execution till it completes.

Step 7) Let's calculate the average waiting time for the above example.
Wait time
P1 = 0 + 4 = 4
P2 = 2 + 4 = 6
P3 = 4 + 3 = 7
Average waiting time = (4 + 6 + 7)/3 = 17/3 ≈ 5.67
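
The wait times above can be reproduced programmatically. The small sketch below is illustrative; it assumes all three processes arrive at time 0, which is why a fixed cyclic scan matches a true round-robin queue. It simulates the schedule with a time quantum of 2 and prints each process's waiting time as completion time minus burst time.

#include <stdio.h>

int main(void) {
    int burst[3]  = {4, 3, 5};   /* P1, P2, P3 burst times        */
    int remain[3] = {4, 3, 5};   /* remaining time per process    */
    int finish[3] = {0};         /* completion time per process   */
    int quantum = 2, time = 0, done = 0;

    while (done < 3) {
        for (int i = 0; i < 3; i++) {        /* cyclic ready queue */
            if (remain[i] == 0) continue;
            int run = remain[i] < quantum ? remain[i] : quantum;
            time += run;                     /* run one time slice */
            remain[i] -= run;
            if (remain[i] == 0) { finish[i] = time; done++; }
        }
    }
    for (int i = 0; i < 3; i++)    /* wait = turnaround - burst    */
        printf("P%d wait = %d\n", i + 1, finish[i] - burst[i]);
    return 0;
}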

Advantage of Round-robin Scheduling

Here, are pros/benefits of Round-robin scheduling method:

 It doesn't face the issues of starvation or the convoy effect.
 All the jobs get a fair allocation of CPU.
 It deals with all processes without any priority.
 If you know the total number of processes on the run queue, then you can also
estimate the worst-case response time for a process.
 This scheduling method does not depend upon burst time, which is why it is easy to
implement.
 Once a process has executed for a specific time period, it is preempted, and another
process executes for the same time period.
 It allows the OS to use the context switching method to save the states of preempted
processes.
 It gives the best performance in terms of average response time.

Disadvantages of Round-robin Scheduling

Here, are drawbacks/cons of using Round-robin scheduling:

 If the time slice of the OS is low, processor output is reduced.
 This method spends more time on context switching.
 Its performance heavily depends on the time quantum.
 Priorities cannot be set for the processes.
 Round-robin scheduling doesn't give special priority to more important tasks.
 It decreases comprehension.
 A lower time quantum results in higher context-switching overhead in the system.
 Finding the correct time quantum is quite a difficult task in this system.

Worst Case Latency

This term is used for the maximum time taken for the execution of all the tasks.

 dt = detection time, when a task is brought into the list
 st = switching time from one task to another
 et = task execution time

Formula (summing over all N tasks, since in the worst case every task must be detected,
switched to, and executed before the last one completes):

    Tworst = (dt1 + st1 + et1) + (dt2 + st2 + et2) + ... + (dtN + stN + etN)

Summary:

 The name of this algorithm comes from the round-robin principle, where each person
gets an equal share of something in turns.
 Round robin is one of the oldest, fairest, and easiest algorithms and widely used
scheduling methods in traditional OS.
 Round robin is a pre-emptive algorithm
 The biggest advantage of the round-robin scheduling method is that if you know the total
number of processes on the run queue, then you can also estimate the worst-case
response time for a process.
 This method spends more time on context switching
 Worst-case latency is a term used for the maximum time taken for the execution of all
the tasks.

Preemptive vs Non-Preemptive Scheduling: Key Differences

What is Preemptive Scheduling?

Preemptive scheduling is a scheduling method in which tasks are mostly assigned priorities.
Sometimes it is important to run a task with a higher priority before another, lower-priority
task, even if the lower-priority task is still running.

At that point, the lower-priority task is put on hold for some time and resumes when the
higher-priority task finishes its execution.

What is Non- Preemptive Scheduling?

In this type of scheduling method, the CPU is allocated to a specific process. The process that
keeps the CPU busy will release the CPU either by switching context or by terminating.

It is the only method that can be used on various hardware platforms, because it doesn't need
specialized hardware (for example, a timer) like preemptive scheduling does.

Non-preemptive scheduling occurs when a process voluntarily enters the wait state or
terminates.

Difference Between Preemptive and Non-Preemptive Scheduling in OS

Preemptive Scheduling | Non-preemptive Scheduling
A processor can be preempted to execute a different process in the middle of any current process execution. | Once the processor starts its execution, it must finish before executing another process; it can't be paused in the middle.
CPU utilization is more efficient compared to non-preemptive scheduling. | CPU utilization is less efficient compared to preemptive scheduling.
Waiting and response times of preemptive scheduling are lower. | Waiting and response times of the non-preemptive scheduling method are higher.
Preemptive scheduling is prioritized; the highest-priority process is the one currently using the CPU. | When any process enters the running state, it is not removed from the scheduler until it finishes its job.
Preemptive scheduling is flexible. | Non-preemptive scheduling is rigid.
Examples: Shortest Remaining Time First, Round Robin, etc. | Examples: First Come First Serve, Shortest Job First, Priority Scheduling, etc.
A process can be pre-empted, i.e., rescheduled. | In non-preemptive scheduling, a process cannot be rescheduled.
The CPU is allocated to the processes for a specific time period. | The CPU is allocated to the process until it terminates or switches to the waiting state.
The preemptive algorithm has the overhead of switching the process from the ready state to the running state and vice-versa. | Non-preemptive scheduling has no such overhead of switching the process from running into the ready state.

Advantages of Preemptive Scheduling

Here, are pros/benefits of Preemptive Scheduling method:

 The preemptive scheduling method is more robust: one process cannot monopolize
the CPU.
 The choice of running task is reconsidered after each interruption.
 Each event causes an interruption of the running task.
 The OS makes sure that CPU usage is shared by all running processes, i.e., all the
running processes make use of the CPU equally.
 This scheduling method also improves the average response time.
 Preemptive scheduling is beneficial when used in a multi-programming
environment.

Advantages of Non-Preemptive Scheduling

Here, are pros/benefits of Non-Preemptive Scheduling method:

 Offers low scheduling overhead.
 Tends to offer high throughput.
 It is a conceptually very simple method.
 Fewer computational resources are needed for scheduling.

Disadvantages of Preemptive Scheduling

Here, are cons/drawback of Preemptive Scheduling method:

 Needs more computational resources for scheduling.
 The scheduler takes more time to suspend the running task, switch the context,
and dispatch the new incoming task.
 A low-priority process may need to wait for a long time if high-priority
processes arrive continuously.

Disadvantages of Non-Preemptive Scheduling

Here, are cons/drawback of Non-Preemptive Scheduling method:

 It can lead to starvation, especially for real-time tasks.
 Bugs can cause a machine to freeze up.
 It can make real-time and priority scheduling difficult.
 It gives poor response times for processes.

Example of Non-Preemptive Scheduling

In non-preemptive SJF scheduling, once the CPU is allocated to a process, the process
holds it until it reaches a waiting state or terminates.

Consider the following five processes, each having its own unique burst time and arrival time.

Process | Burst time | Arrival time
P1 | 6 | 2
P2 | 2 | 5
P3 | 8 | 1
P4 | 3 | 0
P5 | 4 | 4

Step 0) At time=0, P4 arrives and starts execution.

Step 1) At time= 1, Process P3 arrives. But, P4 still needs 2 execution units to complete. It will
continue execution.

Step 2) At time =2, process P1 arrives and is added to the waiting queue. P4 will continue
execution.

Step 3) At time = 3, process P4 will finish its execution. The burst time of P3 and P1 is
compared. Process P1 is executed because its burst time is less compared to P3.

Step 4) At time = 4, process P5 arrives and is added to the waiting queue. P1 will continue
execution.

Step 5) At time = 5, process P2 arrives and is added to the waiting queue. P1 will continue
execution.

Step 6) At time = 9, process P1 will finish its execution. The burst time of P3, P5, and P2 is
compared. Process P2 is executed because its burst time is the lowest.

Step 7) At time=10, P2 is executing, and P3 and P5 are in the waiting queue.

Step 8) At time = 11, process P2 will finish its execution. The burst time of P3 and P5 is
compared. Process P5 is executed because its burst time is lower.

Step 9) At time = 15, process P5 will finish its execution.

Step 10) At time = 23, process P3 will finish its execution.

Step 11) Let's calculate the average waiting time for the above example.

Wait time
P4 = 0 - 0 = 0
P1 = 3 - 2 = 1
P2 = 9 - 5 = 4
P5 = 11 - 4 = 7
P3 = 15 - 1 = 14
Average waiting time = (0 + 1 + 4 + 7 + 14)/5 = 26/5 = 5.2
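
These wait times can be checked with a short simulation of non-preemptive SJF (an illustrative sketch using the bursts and arrivals above): at each completion, the scheduler picks the arrived, unfinished process with the smallest burst and runs it to completion.

#include <stdio.h>

int main(void) {
    /* P1..P5 from the example above */
    int burst[5]   = {6, 2, 8, 3, 4};
    int arrival[5] = {2, 5, 1, 0, 4};
    int done[5] = {0};
    int time = 0, total_wait = 0;

    for (int completed = 0; completed < 5; completed++) {
        int pick = -1;
        for (int i = 0; i < 5; i++)   /* shortest arrived, unfinished job */
            if (!done[i] && arrival[i] <= time &&
                (pick < 0 || burst[i] < burst[pick]))
                pick = i;
        if (pick < 0) { time++; completed--; continue; }  /* CPU idle */
        int wait = time - arrival[pick];   /* wait = start - arrival  */
        printf("P%d wait = %d\n", pick + 1, wait);
        total_wait += wait;
        time += burst[pick];               /* run to completion       */
        done[pick] = 1;
    }
    printf("average wait = %.1f\n", total_wait / 5.0);    /* 5.2     */
    return 0;
}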

Example of Pre-emptive Scheduling

Consider the following three processes scheduled with Round Robin (time quantum = 2):

Process | Burst time
P1 | 4
P2 | 3
P3 | 5

Step 1) The execution begins with process P1, which has burst time 4. Here, every process
executes for 2 seconds. P2 and P3 are still in the waiting queue.

Step 2) At time =2, P1 is added to the end of the Queue and P2 starts executing

Step 3) At time=4, P2 is preempted and added at the end of the queue. P3 starts executing.

Step 4) At time=6, P3 is preempted and added at the end of the queue. P1 starts executing.

Step 5) At time=8, P1 has a burst time of 4. It has completed execution. P2 starts execution

Step 6) P2 has a burst time of 3. It has already executed for 2 intervals. At time=9, P2
completes execution. Then, P3 starts execution till it completes.

Step 7) Let's calculate the average waiting time for the above example.

Wait time
P1 = 0 + 4 = 4
P2 = 2 + 4 = 6
P3 = 4 + 3 = 7
Average waiting time = (4 + 6 + 7)/3 = 17/3 ≈ 5.67

KEY DIFFERENCES

 In preemptive scheduling, the CPU is allocated to a process for a specific time
period; in non-preemptive scheduling, the CPU is allocated to the process until it
terminates.
 In preemptive scheduling, tasks are switched based on priority, while in non-preemptive
scheduling no switching takes place.
 The preemptive algorithm has the overhead of switching processes between the ready
and running states, while non-preemptive scheduling has no such switching overhead.
 Preemptive scheduling is flexible, while non-preemptive scheduling is rigid.

Context Switch
A context switch is the mechanism to store and restore the state, or context, of a CPU in the
Process Control Block so that process execution can be resumed from the same point at a later
time. Using this technique, a context switcher enables multiple processes to share a single CPU.
Context switching is an essential feature of a multitasking operating system. When the
scheduler switches the CPU from executing one process to executing another, the state of the
currently running process is stored in its process control block. After this completes, the state
of the process to run next is loaded from its own PCB and used to set the PC, registers, etc. At
that point, the second process can begin executing.

Context switches are computationally intensive, since register and memory state must be saved
and restored. To reduce context-switching time, some hardware systems employ two or more
sets of processor registers. When a process is switched, the following information is stored for
later use:
 Program counter
 Scheduling information
 Base and limit register values
 Currently used registers
 Changed state
 I/O state information
 Accounting information

Inter Process Communication
What is Inter Process Communication?

Inter-process communication (IPC) is used for exchanging data between multiple threads in one
or more processes or programs. The processes may be running on a single computer or on
multiple computers connected by a network. The full form of IPC is inter-process communication.

It is a set of programming interfaces which allow a programmer to coordinate activities among
various program processes which can run concurrently in an operating system. This allows a
specific program to handle many user requests at the same time.

Since every single user request may result in multiple processes running in the operating
system, the processes may need to communicate with each other. Each IPC protocol approach
has its own advantages and limitations, so it is not unusual for a single program to use all of the
IPC methods.

Alternatively:

Inter Process Communication (IPC) refers to a mechanism, where the operating systems allow
various processes to communicate with each other. This involves synchronizing their actions
and managing shared data.

Inter Process Communication (IPC) is a mechanism that involves communication of one
process with another process. This usually occurs only in one system.

Communication can be of two types −

 Between related processes initiating from only one process, such as parent and child
processes.

 Between unrelated processes, or two or more different processes.

Approaches for Inter-Process Communication

Here are a few important methods for inter-process communication:

1. Pipes − Communication between two related processes. The mechanism is half duplex
meaning the first process communicates with the second process. To achieve a full
duplex i.e., for the second process to communicate with the first process another pipe is
required.
2. FIFO − Communication between two unrelated processes. FIFO is a full duplex,
meaning the first process can communicate with the second process and vice versa at
the same time.
3. Message Queues − Communication between two or more processes with full duplex
capacity. The processes will communicate with each other by posting a message and
retrieving it out of the queue. Once retrieved, the message is no longer available in the
queue.
4. Shared Memory − Communication between two or more processes is achieved
through a shared piece of memory among all processes. The shared memory needs to
be protected from each other by synchronizing access to all the processes.
5. Semaphores − Semaphores are meant for synchronizing access to multiple processes.
When one process wants to access the memory (for reading or writing), it needs to be
locked (or protected) and released when the access is removed. This needs to be
repeated by all the processes to secure data.
6. Signals − A signal is a mechanism for communicating between multiple processes by
way of signaling. A source process sends a signal (recognized by number) and the
destination process handles it accordingly.

7. Direct Communication − In this type of inter-process communication, processes should
name each other explicitly. In this method, a link is established between one pair of
communicating processes, and between each pair only one link exists.
8. Indirect Communication − Indirect communication is established only when processes
share a common mailbox, and each pair of processes may share several communication
links. A link can communicate with many processes, and the link may be bidirectional
or unidirectional.
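
As a concrete sketch of the first approach, here is the classic POSIX pattern: the parent creates a pipe with pipe(), forks a child, and the child reads what the parent writes (half duplex, related processes). The message text and buffer size are illustrative.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                 /* fd[0]: read end, fd[1]: write end */
    char buf[32];

    if (pipe(fd) == -1) return 1;

    if (fork() == 0) {                       /* child: the reader      */
        close(fd[1]);                        /* close unused write end */
        read(fd[0], buf, sizeof(buf));
        printf("child received: %s\n", buf);
        close(fd[0]);
    } else {                                 /* parent: the writer     */
        close(fd[0]);                        /* close unused read end  */
        const char *msg = "hello via pipe";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        wait(NULL);                          /* reap the child         */
    }
    return 0;
}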

Why IPC?

Here are the reasons for using an inter-process communication protocol for information
sharing:

 It helps to speed up modularity
 Computational speedup
 Privilege separation
 Convenience
 It helps processes communicate with each other and synchronize their actions.

Real-Time Clock Management


Real-time Computing

Real-time computing (RTC), or reactive computing, is the computer science term
for hardware and software systems subject to a "real-time constraint", for example
from event to system response. Real-time programs must guarantee response within specified
time constraints, often referred to as "deadlines".

Real-time responses are often understood to be in the order of milliseconds, and sometimes
microseconds. A system not specified as operating in real time cannot usually guarantee a
response within any timeframe, although typical or expected response times may be given.
Real-time processing fails if not completed within a specified deadline relative to an event;
deadlines must always be met, regardless of system load.

A real-time system has been described as one which "controls an environment by receiving
data, processing them, and returning the results sufficiently quickly to affect the environment at
that time". The term "real-time" is also used in simulation to mean that the simulation's clock
runs at the same speed as a real clock, and in process control and enterprise systems to
mean "without significant delay".

Real-time software may use one or more of the following: synchronous programming
languages, real-time operating systems, and real-time networks, each of which provides an
essential framework on which to build a real-time software application.

Systems used for many mission critical applications must be real-time, such as for control
of fly-by-wire aircraft, or anti-lock brakes, both of which demand immediate and accurate
mechanical response.

Criteria for Real-Time Computing


A system is said to be real-time if the total correctness of an operation depends not only upon
its logical correctness, but also upon the time in which it is performed. Real-time systems, as
well as their deadlines, are classified by the consequence of missing a deadline:

 Hard – missing a deadline is a total system failure.
 Firm – infrequent deadline misses are tolerable, but may degrade the system's quality of
service. The usefulness of a result is zero after its deadline.
 Soft – the usefulness of a result degrades after its deadline, thereby degrading the system's
quality of service.

Thus, the goal of a hard real-time system is to ensure that all deadlines are met, but for soft
real-time systems the goal becomes meeting a certain subset of deadlines in order to optimize
some application-specific criteria. The particular criteria optimized depend on the application,
but some typical examples include maximizing the number of deadlines met, minimizing the
lateness of tasks and maximizing the number of high priority tasks meeting their deadlines.

Hard real-time systems are used when it is imperative that an event be reacted to within a strict
deadline. Such strong guarantees are required of systems for which not reacting in a certain
interval of time would cause great loss in some manner, especially damaging the surroundings
physically or threatening human lives (although the strict definition is simply that missing the
deadline constitutes failure of the system). Some examples of hard real-time systems:

 A car engine control system is a hard real-time system because a delayed signal may
cause engine failure or damage.
 Medical systems such as heart pacemakers. Even though a pacemaker's task is simple,
because of the potential risk to human life, medical systems like these are typically required
to undergo thorough testing and certification, which in turn requires hard real-time
computing in order to offer provable guarantees that a failure is unlikely or impossible.

 Industrial process controllers, such as a machine on an assembly line. If the machine is
delayed, the item on the assembly line could pass beyond the reach of the machine (leaving
the product untouched), or the machine or the product could be damaged by activating the
robot at the wrong time. If the failure is detected, both cases would lead to the assembly line
stopping, which slows production. If the failure is not detected, a product with a defect could
make it through production, or could cause damage in later steps of production.
 Hard real-time systems are typically found interacting at a low level with physical hardware,
in embedded systems. Early video game systems such as the Atari
2600 and Cinematronics vector graphics had hard real-time requirements because of the
nature of the graphics and timing hardware.
 Soft modems replace a hardware modem with software running on a computer's CPU. The
software must run every few milliseconds to generate the next audio data to be output. If
that data is late, the receiving modem will lose synchronization, causing a long interruption
as synchronization is reestablished or causing the connection to be lost entirely.
 Many types of printers have hard real-time requirements, such as inkjets (the ink must be
deposited at the correct time as the print-head crosses the page), laser printers (the laser
must be activated at the right time as the beam scans across the rotating drum), and dot
matrix and various types of line printers (the impact mechanism must be activated at the
right time as the print mechanism comes into alignment with the desired output). A failure in
any of these would cause either missing output or misaligned output.

In the context of multitasking systems, the scheduling policy is normally priority-driven (pre-
emptive schedulers). In some situations, these can guarantee hard real-time performance (for
instance, if the set of tasks and their priorities is known in advance). There are other hard real-
time schedulers, such as rate-monotonic scheduling, which are not common in general-purpose
systems, as they require additional information in order to schedule a task: namely, a bound or
worst-case estimate of how long the task must execute. Specific algorithms for scheduling such
hard real-time tasks exist, such as earliest deadline first, which, ignoring the overhead of context
switching, is sufficient for system loads of less than 100%. Newer overlay scheduling systems,
such as an adaptive partition scheduler, assist in managing large systems with a mixture of
hard real-time and non-real-time applications.

Firm real-time systems are more nebulously defined, and some classifications do not include
them, distinguishing only hard and soft real-time systems. Some examples of firm real-time
systems:

 The assembly line machine described earlier as hard real-time could instead be
considered firm real-time. A missed deadline still causes an error which needs to be dealt
with: there might be machinery to mark a part as bad or eject it from the assembly line, or
the assembly line could be stopped so an operator can correct the problem. However, as
long as these errors are infrequent, they may be tolerated.

Soft real-time systems are typically used to solve issues of concurrent access and the need to
keep a number of connected systems up-to-date through changing situations. Some examples
of soft real-time systems:

 Software that maintains and updates the flight plans for commercial airliners. The flight plans must be kept reasonably current, but the system can operate with a latency of a few seconds.
 Live audio-video systems are also usually soft real-time. A frame of audio that's played late
may cause a brief audio glitch (and may cause all subsequent audio to be delayed
correspondingly, causing a perception that the audio is being played slower than normal),
but this may be better than the alternatives of continuing to play silence, static, a previous
audio frame, or estimated data. A frame of video that's delayed typically causes even less
disruption for viewers. The system can continue to operate and also recover in the future
using workload prediction and reconfiguration methodologies.[7]
 Similarly, video games are often soft real-time, particularly as they try to meet a
target frame rate. As the next image cannot be computed in advance, since it depends on
inputs from the player, only a short time is available to perform all the computing needed to
generate a frame of video before that frame must be displayed. If the deadline is missed,
the game can continue at a lower frame rate; depending on the game, this may only affect
its graphics (while the gameplay continues at normal speed), or the gameplay itself may be
slowed down (which was common on older third- and fourth-generation consoles).

Real-time in Digital Signal Processing

In a real-time digital signal processing (DSP) process, the analyzed (input) and generated (output) samples can be processed (or generated) continuously in the time it takes to input and output the same set of samples, independent of the processing delay. This means that the processing delay must be bounded even if the processing continues for an unlimited time, and that the mean processing time per sample, including overhead, is no greater than the sampling period, which is the reciprocal of the sampling rate. This criterion applies whether the samples are grouped together in large segments and processed as blocks or are processed individually, and whether there are long, short, or non-existent input and output buffers.

Consider an audio DSP example; if a process requires 2.01 seconds to analyze, synthesize,
or process 2.00 seconds of sound, it is not real-time. However, if it takes 1.99 seconds, it is or
can be made into a real-time DSP process.
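
The criterion can be checked with simple arithmetic. Below is a small sketch in Python; the 44.1 kHz sampling rate and the function name are illustrative assumptions, while the 1.99 s and 2.01 s figures come from the example above.

def is_real_time(mean_seconds_per_sample, sampling_rate_hz):
    """Real-time iff mean processing time per sample <= sampling period."""
    sampling_period = 1.0 / sampling_rate_hz
    return mean_seconds_per_sample <= sampling_period

samples = 2.00 * 44_100                       # 2.00 s of 44.1 kHz audio
print(is_real_time(1.99 / samples, 44_100))   # True  -> real-time
print(is_real_time(2.01 / samples, 44_100))   # False -> not real-time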

A common real-life analogy is standing in a queue waiting for the checkout in a grocery store. If the line grows longer and longer without bound, the checkout process is not real-time. If the length of the line is bounded, customers are being "processed" and output as rapidly, on average, as they arrive, and the process is real-time. The grocer might go out of business, or at least lose business, if the checkout process cannot be made real-time; thus, it is fundamentally important that this process is real-time.

A signal processing algorithm that cannot keep up with the flow of input data, with its output falling farther and farther behind the input, is not real-time. But if the delay of the output (relative to the input) is bounded for a process that operates over an unlimited time, then that signal processing algorithm is real-time, even if the throughput delay may be very long.
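
The bounded-backlog idea can be illustrated with a toy simulation. The arrival and service rates below are illustrative assumptions, chosen only to contrast the bounded and unbounded cases.

def backlog_over_time(arrival_rate, service_rate, steps):
    """Track queue length when `arrival_rate` items arrive and at most
    `service_rate` items are processed per time step."""
    backlog, history = 0.0, []
    for _ in range(steps):
        backlog = max(0.0, backlog + arrival_rate - service_rate)
        history.append(backlog)
    return history

print(backlog_over_time(10, 11, 5)[-1])  # 0.0 -> backlog stays bounded: real-time
print(backlog_over_time(10, 9, 5)[-1])   # 5.0 and growing -> not real-time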

Memory Management in OS: Contiguous, Swapping, Fragmentation


What is Memory Management?

Memory Management is the process of controlling and coordinating computer memory, assigning portions known as blocks to various running programs to optimize the overall performance of the system.

It is one of the most important functions of an operating system. It manages primary memory, moves processes back and forth between main memory and the disk during execution, and keeps track of every memory location, irrespective of whether it is allocated to some process or remains free.

Why Use Memory Management?

 It decides how much memory to allocate to each process, and when each process should receive it.
 It tracks whenever memory gets freed or unallocated and updates the status accordingly.
 It allocates space to application routines.
 It makes sure that these applications do not interfere with each other.
 It helps protect different processes from each other.
 It places programs in memory so that memory is utilized to its full extent.

Memory Management Techniques


1. Single Contiguous Allocation

It is the easiest memory management technique. In this method, all of the computer's memory, except for a small portion reserved for the OS, is available to a single application. For example, the MS-DOS operating system allocates memory in this way. An embedded system also typically runs a single application.

2. Partitioned Allocation

It divides primary memory into various memory partitions, which are usually contiguous areas of memory. Every partition stores all the information for a specific task or job. This method consists of allotting a partition to a job when it starts and deallocating it when the job ends.

3. Paged Memory Management

This method divides the computer's main memory into fixed-size units known as page frames. A hardware memory management unit (MMU) maps pages onto frames, and memory is allocated on a page-by-page basis.

4. Segmented Memory Management

Segmented memory is the only memory management method that does not provide the user's
program with a linear and contiguous address space.

Segments need hardware support in the form of a segment table. The table contains the physical address of each segment in memory, its size, and other data such as access protection bits and status.

What is Swapping?

Swapping is a method in which a process is temporarily moved from main memory to a backing store, and later brought back into memory to continue execution.

The backing store is a hard disk or some other secondary storage device that must be big enough to accommodate copies of all memory images for all users, and it must offer direct access to these memory images.
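
As a rough illustration, the toy sketch below writes a stand-in "process image" to a backing store file and reads it back; the file name, the image contents, and the use of a temporary directory are all illustrative assumptions.

import os, tempfile

image = bytes(range(64))                      # stand-in for a process image
backing_store = os.path.join(tempfile.gettempdir(), "pid42.swap")

with open(backing_store, "wb") as f:          # swap out: copy image to disk
    f.write(image)
image = None                                  # main-memory frame freed for another process

with open(backing_store, "rb") as f:          # swap in: bring the image back
    image = f.read()
print(len(image))                             # 64 -> execution can continue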

Benefits of Swapping

Here are the major benefits of swapping:

 It offers a higher degree of multiprogramming.
 It allows dynamic relocation. For example, if execution-time address binding is used, a process can be swapped back into a different location; with compile-time or load-time binding, it must be swapped back into the same location.
 It helps achieve better utilization of memory.
 It wastes little CPU time and can be combined with priority-based scheduling to improve performance.

What is Memory allocation?

Memory allocation is a process by which computer programs are assigned memory or space.

Here, main memory is divided into two types of partitions:

1. Low memory – the operating system resides in this part of memory.
2. High memory – user processes are held in this part of memory.

Partition Allocation

Memory is divided into different blocks or partitions, and each process is allocated a partition according to its requirements. Sizing partitions to fit each process is an effective way to avoid internal fragmentation.

Below are the various partition allocation schemes (a minimal allocation sketch follows the list):

 First Fit: the process is allocated the first free block, searching from the beginning of main memory, that is large enough.
 Best Fit: the process is allocated the smallest free partition that is large enough.
 Worst Fit: the process is allocated the largest free partition that is large enough.
 Next Fit: similar to First Fit, but the search for a sufficient partition starts from the point of the last allocation.
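
The sketch below shows all four strategies over a list of free partition sizes; the function name, the block sizes, and the request size are illustrative assumptions.

def allocate(free_blocks, request, strategy, last=0):
    """Return the index of the chosen free block, or None if none fits."""
    fits = [i for i, size in enumerate(free_blocks) if size >= request]
    if not fits:
        return None
    if strategy == "first":
        return fits[0]
    if strategy == "best":                    # smallest block that fits
        return min(fits, key=lambda i: free_blocks[i])
    if strategy == "worst":                   # largest block that fits
        return max(fits, key=lambda i: free_blocks[i])
    if strategy == "next":                    # first fit, resuming from `last`
        return next((i for i in fits if i >= last), fits[0])

free = [100, 500, 200, 300, 600]
print(allocate(free, 212, "first"))   # 1 -> the 500-unit block
print(allocate(free, 212, "best"))    # 3 -> the 300-unit block
print(allocate(free, 212, "worst"))   # 4 -> the 600-unit block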

What is Paging?

Paging is a storage mechanism that allows the OS to retrieve processes from secondary storage into main memory in the form of pages. In the paging method, main memory is divided into small fixed-size blocks of physical memory called frames. The size of a frame is kept the same as that of a page to maximize utilization of main memory and to avoid external fragmentation. Paging allows faster access to data, and a page is a logical rather than a physical concept.
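
To see how paging turns a logical address into a physical one, here is a minimal sketch; the 4 KB page size and the page-table contents are illustrative assumptions.

PAGE_SIZE = 4096                      # 4 KB pages (2**12)
page_table = {0: 5, 1: 2, 2: 7}       # page number -> frame number

def translate(logical_address):
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame = page_table[page]          # raises KeyError on an invalid page
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))         # page 1, offset 0xABC -> frame 2 -> 0x2abc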

What is Fragmentation?

As processes are loaded into and removed from memory, the free memory space is broken into pieces that are too small to be used by other processes.

Eventually, processes cannot be allocated memory because the remaining free blocks are too small, and those blocks stay unused; this condition is called fragmentation. The problem arises in dynamic memory allocation systems when the free blocks become so small that they cannot satisfy any request.

Two types of Fragmentation methods are:

1. External fragmentation
2. Internal fragmentation

 External fragmentation can be reduced by rearranging memory contents to place all free memory together in a single block, a technique called compaction (sketched below).
 Internal fragmentation can be reduced by assigning the smallest partition that is still large enough to hold the entire process.
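
Here is a toy sketch of compaction over a one-dimensional "memory" of cells; the layout and the free-cell marker are illustrative assumptions. A real OS would also have to update base registers or page tables for every moved process.

def compact(memory, free_marker="."):
    """memory: list of cells; free cells hold `free_marker`. Allocated cells
    keep their relative order; all free cells move to one end."""
    allocated = [cell for cell in memory if cell != free_marker]
    return allocated + [free_marker] * (len(memory) - len(allocated))

mem = ["A", ".", "B", "B", ".", ".", "C", "."]
print(compact(mem))   # ['A', 'B', 'B', 'C', '.', '.', '.', '.']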

What is Segmentation?

The segmentation method works almost like paging. The only difference between the two is that segments are of variable length, whereas in the paging method, pages are always of fixed size.

A program segment might include the program's main function, data structures, utility functions, and so on. The OS maintains a segment map table for every process, together with a list of free memory blocks, their sizes, segment numbers, and their locations in main memory or virtual memory.
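
Address translation with a segment table can be sketched in a few lines; the (base, limit) pairs below are illustrative assumptions. An offset beyond the segment's limit is rejected, which is the hardware check behind a "segmentation fault".

segment_table = {0: (1400, 1000),   # segment number -> (base address, limit)
                 1: (6300, 400),
                 2: (4300, 1100)}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:               # protect against out-of-bounds access
        raise MemoryError("segmentation fault: offset beyond segment limit")
    return base + offset

print(translate(2, 53))    # 4353
try:
    translate(1, 500)      # offset exceeds the 400-unit limit
except MemoryError as e:
    print(e)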

What is Dynamic Loading?

With dynamic loading, a routine of a program is not loaded until the program calls it. All routines are kept on disk in a relocatable load format; only the main program is loaded into memory and executed, and other routines are brought in on demand. Dynamic loading therefore provides better memory-space utilization.
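
Python's importlib gives a convenient, if high-level, illustration of loading a module only when it is first needed; the choice of the json module is an illustrative assumption (on a typical CPython startup it is not preloaded).

import importlib
import sys

def dump_on_demand(obj):
    json = importlib.import_module("json")   # loaded here, on first call
    return json.dumps(obj)

print("json" in sys.modules)        # False -> not yet loaded
print(dump_on_demand({"a": 1}))     # {"a": 1}
print("json" in sys.modules)        # True  -> loaded on demand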

What is Dynamic Linking?

Linking is a method that helps the OS collect and merge various modules of code and data into a single executable file that can be loaded into memory and executed. With static linking, system-level libraries are combined into the program at compile or load time. With dynamic linking, libraries are linked at execution time, so the program's code size can remain small.
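
As a concrete illustration, Python's ctypes can bind to a shared library at run time rather than at build time; the library name libm.so.6 assumes a typical Linux system and is an illustrative assumption.

import ctypes

libm = ctypes.CDLL("libm.so.6")        # resolved at execution time, not build time
libm.sqrt.restype = ctypes.c_double    # declare the C function's signature
libm.sqrt.argtypes = [ctypes.c_double]
print(libm.sqrt(2.0))                  # 1.4142135623730951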

Difference Between Static and Dynamic Loading

Static Loading:
 The entire program is loaded at once. At compile time, the whole program is linked and compiled without any external module or program dependency.
 At loading time, the entire program is loaded into memory and starts its execution.

Dynamic Loading:
 Only references are provided at build time, and the actual loading is done at the time of execution.
 Routines of a library are loaded into memory only when they are required by the program.

Difference Between Static and Dynamic Linking

Here is the main difference between static and dynamic linking:

Static Linking:
 All modules required by a program are combined into a single executable image. This helps the OS avoid any run-time dependency on external libraries.

Dynamic Linking:
 The actual module or library is not copied into the executable. Instead, a reference to the dynamic module is recorded at compile and link time, and the module itself is linked in at execution time.

Summary:

 Memory management is the process of controlling and coordinating computer memory, assigning portions called blocks to various running programs to optimize the overall performance of the system.

 It decides how much memory to allocate to each process, and when each process should receive it.
 In Single Contiguous Allocation, all of the computer's memory, except a small portion reserved for the OS, is available to a single application.
 The Partitioned Allocation method divides primary memory into various memory partitions, which are usually contiguous areas of memory.
 Paged Memory Management method divides the computer's main memory into fixed-
size units known as page frames
 Segmented memory is the only memory management method that does not provide the
user's program with a linear and contiguous address space.
 Swapping is a method in which a process is temporarily moved from main memory to a backing store and later brought back into memory to continue execution.
 Memory allocation is a process by which computer programs are assigned memory or
space.
 Paging is a storage mechanism that allows OS to retrieve processes from the secondary
storage into the main memory in the form of pages.
 Fragmentation refers to the condition of memory in which free space is broken into pieces that are too small to satisfy allocation requests.
 Segmentation method works almost similarly to paging. The only difference between the
two is that segments are of variable-length, whereas, in the paging method, pages are
always of fixed size.
 With dynamic loading, a routine of a program is not loaded until the program calls it.
 Linking is a method that helps OS to collect and merge various modules of code and
data into a single executable file.
