
Unit IV

(Memory Management)

Mono programming in Operating System


What is Mono programming?
Mono programming means that only a single task or program resides in main memory at a particular time. It was common in early computers and mobile phones, which could run only a single application at a time.
Characteristics of Mono programming:
1. It allows only one program to sit in the memory at one time.
2. The memory requirement is small, as only one program is resident.
3. The resources are allocated to the program that is in the memory at that time.

What is Multiprogramming?
Multiprogramming means that several programs can be run at the same time.
The programs reside together in the main memory (RAM) of the system, and an
operating system that handles multiple programs in this way is known as a
multiprogramming operating system.
Characteristics of Multiprogramming:
1. The memory can hold several programs at a time.
2. The resources are allocated to different programs.
3. The memory size required is larger as compared to mono-programming.

Mono-Programming vs Multiprogramming

1. In mono-programming, the system runs smoothly since only one task runs at a time, so it can function on a slow processor as well. In multiprogramming, the processor needs to be faster.

2. In mono-programming, the main memory can be smaller, as only one task sits in it at a time. In multiprogramming, the main memory needs more space.

3. Mono-programming uses a fixed-size partition, whereas multiprogramming can use both fixed-size and variable-size partitions.

4. Some examples of mono-programming are the operating systems of old mobile phones and batch processing in old computers. Some examples of multiprogramming are the operating systems used in modern computers, such as Windows 10.

Modeling Multiprogramming
When multiprogramming is used, CPU utilization can be improved. Crudely put, if the average process
computes only 20% of the time it is sitting in memory, then with five processes in memory at once the CPU
should be busy all the time. This model is unrealistically optimistic, however, since it tacitly assumes that all
five processes will never be waiting for I/O at the same time.
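A more realistic probabilistic model assumes that each process waits for I/O a fraction p of the time, independently of the others. The CPU is then idle only when all n resident processes are waiting for I/O at once, so CPU utilization is approximately 1 - p^n. The short sketch below (illustrative Python, with the 20% compute / 80% I/O-wait figures from the example above) evaluates this model:

# CPU utilization under the probabilistic multiprogramming model:
# each process waits for I/O a fraction p of the time, independently.
def cpu_utilization(p, n):
    """Probability that at least one of n resident processes is not waiting for I/O."""
    return 1 - p ** n

# Example: processes compute 20% of the time (I/O wait fraction p = 0.8).
for n in (1, 2, 5, 10):
    print(n, "processes ->", round(cpu_utilization(0.8, n) * 100, 1), "% CPU utilization")
# With 5 processes the CPU is busy only about 67% of the time,
# not 100% as the crude model above suggests.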

Multi-programming with Fixed and Variable Partition

In operating systems, memory management is the function responsible for allocating and managing a
computer’s main memory. The memory management function keeps track of the status of each
memory location, either allocated or free, to ensure effective and efficient use of primary memory.

There are two memory management techniques: contiguous and non-contiguous. In the contiguous
technique, an executing process must be loaded entirely into main memory. The contiguous technique can be
divided into:

1. Fixed (or static) partitioning


2. Variable (or dynamic) partitioning

Fixed Partitioning:
This is the oldest and simplest technique used to put more than one process
in the main memory. In this partitioning, a number of partitions (non-overlapping) in RAM
is fixed but the size of each partition may or may not be the same. As it
is contiguous allocation, hence no spanning is allowed. Here partitions are made before
execution or during system configure.

As illustrated in the above figure, the first process consumes only 1 MB of the 4 MB partition in main
memory.
Hence, internal fragmentation in the first block is (4-1) = 3 MB.
Sum of internal fragmentation in every block = (4-1) + (8-7) + (8-7) + (16-14) = 3 + 1 + 1 + 2 = 7 MB.
Suppose a process P5 of size 7 MB arrives. This process cannot be accommodated, in spite of the
available free space, because of contiguous allocation (spanning is not allowed). Hence, the 7 MB
becomes part of external fragmentation.

There are some advantages and disadvantages of fixed partitioning.

Advantages of Fixed Partitioning –

1. Easy to implement:
The algorithms needed to implement fixed partitioning are simple. Placing a process
just requires putting it into a suitable partition, without any complex bookkeeping (although internal
and external fragmentation may still arise).
2. Little OS overhead:
Fixed partitioning requires little extra or indirect computational
power.

Disadvantages of Fixed Partitioning –

1. Internal Fragmentation:
Main memory use is inefficient. Any program, no matter how small, occupies an entire partition. This
can cause internal fragmentation.
2. External Fragmentation:
The total unused space (as stated above) of the various partitions cannot be used to load other
processes, even though free space is available, because it is not contiguous (spanning is not
allowed).
3. Limit on process size:
A process larger than the largest partition in main memory cannot be accommodated. The
partition size cannot be varied according to the size of the incoming process. Hence, a process
of size 32 MB in the above-stated example would be invalid.
4. Limitation on degree of multiprogramming:
Partitions in main memory are made before execution or during system configuration, so main memory is
divided into a fixed number of partitions. If there are n partitions in RAM and m processes to be
loaded, then the condition m ≤ n must be fulfilled; a number of processes greater than the number of
partitions in RAM is invalid in fixed partitioning.
Variable Partitioning –
It is a part of the contiguous allocation technique, used to alleviate
the problems faced by fixed partitioning. In contrast with fixed partitioning, partitions are not
made before execution or during system configuration. Features associated with
variable partitioning:
1. Initially, RAM is empty and partitions are made during run-time according to the processes' needs,
instead of during system configuration.
2. The size of each partition is equal to the size of the incoming process.
3. The partition size varies according to the need of the process, so internal fragmentation is
avoided and RAM is utilized efficiently.
4. The number of partitions in RAM is not fixed; it depends on the number of incoming processes and
the size of main memory.

There are some advantages and disadvantages of variable partitioning over fixed partitioning, as given below.
Advantages of Variable Partitioning –

1. No Internal Fragmentation:
In variable Partitioning, space in the main memory is allocated strictly according to the need of the
process, hence there is no case of internal fragmentation. There will be no unused space left in the
partition.
2. No restriction on Degree of Multiprogramming:
More processes can be accommodated due to the absence of internal fragmentation. Processes
can be loaded until main memory is full.
3. No limitation on the size of the process:
In fixed partitioning, a process larger than the largest partition could not be loaded, and the
process could not be divided, since splitting is not allowed in contiguous allocation. In
variable partitioning, the process size is not restricted in this way, since the partition size is decided
according to the process size.

Disadvantages of Variable Partitioning –

1. Difficult Implementation:
Implementing variable partitioning is more difficult than fixed partitioning, as it involves the
allocation of memory during run-time rather than during system configuration.
2. External Fragmentation:
There will be external fragmentation despite the absence of internal fragmentation. For example,
suppose in the above example processes P1 (2 MB) and P3 (1 MB) complete their execution.
Two holes are left, of 2 MB and 1 MB. Now let a process P5 of size 3 MB arrive. The empty
space in memory cannot be allocated to it, as spanning is not allowed in contiguous allocation: a
process must occupy a continuous region of main memory to be executed. Hence this results in
external fragmentation.
Requirements of Memory Management System
Memory management is meant to satisfy certain requirements that we should keep in mind.
These requirements of memory management are:
1. Relocation – The available memory is generally shared among a number of processes
in a multiprogramming system, so it is not possible to know in advance which other
programs will be resident in main memory at the time of execution of a given program.
Swapping active processes in and out of main memory enables the operating
system to have a larger pool of ready-to-execute processes.

When a program is swapped out to disk, it is not always possible for it to be swapped
back into the same main memory location, since that location may still be occupied by
another process. We may then need to relocate the process to a different area of memory.
Thus there is a possibility that a program may be moved within main memory due to swapping.

The figure depicts a process image occupying a contiguous region of main memory. The
operating system will need to know many things, including the location of process control
information, the execution stack, and the code entry point. Within a program, there are
memory references in various instructions, and these are called logical addresses.

After the program is loaded into main memory, the processor and the operating system
must be able to translate logical addresses into physical addresses. Branch instructions
contain the address of the next instruction to be executed. Data reference instructions
contain the address of the byte or word of data referenced.
2. Protection – There is always a danger, when multiple programs are in memory at the same time,
that one program may write into the address space of another. So every process must
be protected against unwanted interference, whether accidental or intentional, when another
process tries to write into its memory. A trade-off exists between the relocation and protection
requirements, since satisfying the relocation requirement increases the difficulty of satisfying the
protection requirement.

The location of a program in main memory cannot be predicted, so it is
impossible to check absolute addresses at compile time to assure protection.
Moreover, most programming languages allow the dynamic calculation of addresses at run time.
The memory protection requirement must therefore be satisfied by the processor (hardware) rather
than the operating system, because the operating system can hardly control a process while it
occupies the processor. Only in this way is it possible to check the validity of memory references
as they occur.
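As a minimal sketch of how hardware can enforce both relocation and protection, the illustrative Python below models a base/limit (bounds) register pair: every logical address is first checked against the limit register and then relocated by adding the base register. The register values here are assumed for the example only.

# Minimal sketch of dynamic relocation with a base/limit register pair.
# Register values are assumed for illustration.
BASE = 0x4000     # physical address where this process was loaded
LIMIT = 0x1000    # size of the process's address space (4 KB)

def translate(logical_address):
    """Check the logical address against the limit, then relocate it by the base."""
    if logical_address < 0 or logical_address >= LIMIT:
        raise MemoryError("protection fault: address outside process bounds")
    return BASE + logical_address      # physical address

print(hex(translate(0x0123)))   # valid reference -> 0x4123
# translate(0x2000) would raise a protection fault (trap to the OS).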

Memory management using bit maps vs linked list


There are two common ways of keeping track of memory: using a bitmap, and using a linked list. With a
bitmap, we maintain a bit map whose size equals the number of allocation units (one bit per unit). With a
linked list, we maintain two linked lists: one for allocated memory, and one for holes (free memory).

Linked list
Pros: Small space requirements, as each free block can itself store a pointer to the next free block.

Cons: To traverse the list, you need to read each block. Also, it is costly to maintain the list in a
"contiguous" (address-ordered) manner in order to limit fragmentation (think about the cost of updating the
list in a smarter way than just appending each new free block at the end).
The linked list scheme can be made slightly more efficient by storing multiple free-block IDs in each
block (thus fewer I/Os are required to retrieve some amount of free blocks).

Bitmap
Pros: Random allocation: checking whether a block is free only requires reading the corresponding bit; also,
checking for a large contiguous section is relatively fast (as you can check a word's worth of bits from the
bitmap in a single read). Fast deletion: you can just flip a bit to "free" a block, without overwriting the
data.

Cons: Higher memory requirements, as you need one bit per block (about 128 MB for a 1 TB disk with
1 KB blocks).
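The illustrative Python below sketches the bitmap approach under the assumption of fixed-size allocation units: allocation scans for a run of clear bits (first fit) and sets them, while freeing simply clears the bits again.

# Minimal sketch of bitmap-based allocation (one bit per fixed-size unit).
class BitmapAllocator:
    def __init__(self, num_units):
        self.bits = [0] * num_units          # 0 = free, 1 = allocated

    def allocate(self, units_needed):
        """Find the first run of free units large enough (first fit)."""
        run_start, run_len = 0, 0
        for i, bit in enumerate(self.bits):
            if bit == 0:
                if run_len == 0:
                    run_start = i
                run_len += 1
                if run_len == units_needed:
                    for j in range(run_start, run_start + units_needed):
                        self.bits[j] = 1
                    return run_start         # index of the first allocated unit
            else:
                run_len = 0
        return None                          # no hole large enough

    def free(self, start, units):
        for j in range(start, start + units):
            self.bits[j] = 0

alloc = BitmapAllocator(16)
print(alloc.allocate(4))   # -> 0
print(alloc.allocate(3))   # -> 4
alloc.free(0, 4)
print(alloc.allocate(2))   # -> 0 (reuses the freed hole)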

Contiguous Memory Allocation:


The main memory has to accommodate both the operating system and the various user
processes. Therefore, the allocation of memory becomes an important task in the operating
system. The memory is usually divided into two partitions: one for the resident operating
system and one for the user processes. We normally need several user processes to reside in
memory simultaneously. Therefore, we need to consider how to allocate available memory
to the processes that are in the input queue waiting to be brought into memory. In contiguous
memory allocation, each process is contained in a single contiguous segment of memory.

Contiguous memory allocation

Memory allocation:

To gain proper memory utilization, memory must be allocated in an efficient manner.
One of the simplest methods for allocating memory is to divide memory into several fixed-
sized partitions, where each partition contains exactly one process. Thus, the degree of
multiprogramming is bounded by the number of partitions.
Multiple partition allocation: In this method, a process is selected from the input queue and
loaded into a free partition. When the process terminates, the partition becomes available
for other processes.
Variable partition allocation: In this method, the operating system maintains a table that
indicates which parts of memory are available and which are occupied by processes. Initially,
all memory is available for user processes and is considered one large block of available
memory, known as a "hole". When a process arrives and needs memory, we search for a hole
that is large enough to store this process. If one is found, we allocate only as much memory as
is needed, keeping the rest available to satisfy future requests. While allocating memory in this
way, the dynamic storage allocation problem arises, which concerns how to satisfy a request of
size n from a list of free holes. There are
some solutions to this problem:
First fit:-
In first fit, the first free hole that is large enough for the process is allocated.

Here, in this diagram, the 40 KB memory block is the first available free hole that can store
process A (size 25 KB), because the first two blocks did not have sufficient memory
space.
Best fit:-
In best fit, we allocate the smallest hole that is big enough for the process's requirements. For
this, we search the entire list, unless the list is ordered by size.

Here, in this example, we first traverse the complete list and find that the last hole, of 25 KB, is the
best suitable hole for process A (size 25 KB).
In this method, memory utilization is maximum as compared to the other memory allocation
techniques.
Worst fit:-
In worst fit, we allocate the largest available hole to the process. This method
produces the largest leftover hole.

Here, in this example, process A (size 25 KB) is allocated to the largest available memory
block, which is 60 KB. Inefficient memory utilization is the major issue with worst fit.
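The illustrative Python below sketches the three placement strategies over a list of free holes. The hole sizes used are assumptions chosen to match the examples above (the original figure is not reproduced here, so only the 40 KB, 25 KB and 60 KB holes are taken from the text).

# Sketch of first-fit, best-fit and worst-fit hole selection.
# The free-hole sizes are assumed for illustration (the original figure is not shown).
def first_fit(holes, size):
    return next((i for i, h in enumerate(holes) if h >= size), None)

def best_fit(holes, size):
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(candidates)[1] if candidates else None

holes = [12, 10, 40, 60, 25]        # free hole sizes in KB (assumed)
request = 25                        # process A needs 25 KB
print("first fit ->", holes[first_fit(holes, request)], "KB hole")   # 40 KB
print("best fit  ->", holes[best_fit(holes, request)], "KB hole")    # 25 KB
print("worst fit ->", holes[worst_fit(holes, request)], "KB hole")   # 60 KB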

What is Virtual Memory in OS (Operating System)?


Virtual memory is a storage scheme that provides the user with the illusion of having a very big main memory.
This is done by treating a part of secondary memory as if it were main memory.
In this scheme, the user can load processes bigger than the available main memory, under the
illusion that enough memory is available to load the whole process.

Instead of loading one big process entirely into main memory, the operating system loads different parts
of more than one process into main memory.

By doing this, the degree of multiprogramming is increased and, therefore, CPU utilization
also increases.

Advantages of Virtual Memory


1. The degree of multiprogramming is increased.
2. Users can run large applications with less physical RAM.

3. There is no need to buy more RAM.

Disadvantages of Virtual Memory


1. The system becomes slower, since swapping takes time.
2. Switching between applications takes more time.

3. The user has less hard disk space available for other use.

Paging in OS (Operating System)


In operating systems, paging is a storage mechanism used to retrieve processes from secondary
storage into main memory in the form of pages.

The main idea behind paging is to divide each process into pages. The main memory is
likewise divided into frames.

One page of a process is stored in one of the frames of memory. The pages can be stored at
different locations in memory, but the priority is always to find contiguous frames (holes).

Pages of a process are brought into main memory only when they are required; otherwise, they
reside in secondary storage.

Different operating systems define different frame sizes, but within a system all frames must be of equal
size. Since pages are mapped one-to-one onto frames in paging, the page size must be the same
as the frame size.
Example
Let us consider a main memory of size 16 KB and a frame size of 1 KB; the main memory will
therefore be divided into a collection of 16 frames of 1 KB each.

There are 4 processes in the system, P1, P2, P3 and P4, of 4 KB each. Each process is divided into
pages of 1 KB each, so that one page can be stored in one frame.

Initially, all the frames are empty, so the pages of the processes are stored in a contiguous way.

Frames, pages and the mapping between the two are shown in the image below.

Let us consider that, after some time, P2 and P4 are moved to the waiting state. Now 8 frames become
empty, and other pages can be loaded in their place. A process P5 of size 8 KB (8
pages) is waiting in the ready queue.
Given that we have 8 non-contiguous free frames available in memory, and that paging provides the
flexibility of storing a process's pages at different places, we can load the pages of process P5
in place of those of P2 and P4.

Memory Management Unit


The purpose of the Memory Management Unit (MMU) is to convert logical addresses into physical
addresses. The logical address is the address generated by the CPU for every page, while the physical
address is the actual address of the frame where that page is stored.

When a page is to be accessed by the CPU using its logical address, the operating system needs to
obtain the physical address to access that page physically.

The logical address has two parts.

1. Page Number

2. Offset

The memory management unit of the OS needs to convert the page number into the frame number.

Example: Considering the above image, let's say that the CPU demands the 10th word of the 4th page of process
P3. Since page number 4 of process P3 is stored at frame number 9, the 10th word of the 9th
frame will be returned as the physical address.
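As a minimal sketch of this translation (illustrative Python, with an assumed 1 KB page size and an assumed page table), the logical address is split into a page number and an offset, the page number is looked up in the page table, and the resulting frame number is recombined with the unchanged offset:

# Minimal sketch of logical-to-physical address translation in paging.
# Page size and page-table contents are assumed for illustration.
PAGE_SIZE = 1024                      # 1 KB pages (and frames)

# page_table[page_number] = frame_number (assumed mapping)
page_table = {0: 5, 1: 9, 2: 2, 3: 7}

def translate(logical_address):
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame_number = page_table[page_number]       # MMU lookup
    return frame_number * PAGE_SIZE + offset     # offset is unchanged

# The 10th word of page 1 maps into frame 9:
print(translate(1 * PAGE_SIZE + 10))   # -> 9 * 1024 + 10 = 9226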

Page Table in OS
The page table is a data structure used by the virtual memory system to store the mapping between logical
addresses and physical addresses.
Logical addresses are generated by the CPU for the pages of the processes, so they are the addresses
used by the processes.

Physical addresses are the actual frame addresses of memory. They are used by the hardware,
or more specifically by the RAM subsystem.

The image given below assumes:

Physical Address Space = M words
Logical Address Space = L words
Page Size = P words
Physical Address = m bits, where m = log2(M)
Logical Address = l bits, where l = log2(L)
Page offset = p bits, where p = log2(P)
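As a worked example (values assumed for illustration): with a logical address space of L = 2^20 words and a page size of P = 2^10 words, the logical address is l = 20 bits and the offset is p = 10 bits, so the page number occupies the remaining l - p = 10 bits, giving 2^10 = 1024 page-table entries.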

The CPU always accesses processes through their logical addresses. However, the main memory
recognizes only physical addresses.

In this situation, a unit named the Memory Management Unit comes into the picture. It converts the page
number of the logical address into the frame number of the physical address. The offset remains the same in
both addresses.

To perform this task, the Memory Management Unit needs a special kind of mapping, which is provided by the page
table. The page table stores the frame number corresponding to each page number.

In other words, the page table maps each page number to the page's actual location (frame number) in
memory.

The image given below shows how the required word of a frame is accessed with the help of the offset.
Page Fault Handling in Operating System
A page fault is handled much like an error or exception. A page fault will happen if a program tries to access a piece
of memory (a page) that is not present in physical memory (main memory). The fault requires the operating
system to locate the data with the help of its virtual memory management structures and then bring it from
secondary memory, such as a hard disk, into primary memory.

A page fault trap occurs if the requested page is not loaded into memory. The page fault raises
an exception, which notifies the operating system that it must retrieve the "page" from secondary storage
so that execution can continue. Once the data has been placed into physical memory, the program resumes
normal operation. Page fault handling occurs in the background, and thus the user is unaware of it.

1. The computer's hardware traps to the kernel, and the program counter is saved on the stack. The
CPU registers hold information about the current state of the instruction.

2. An assembly-language routine is started, which saves the general registers and other volatile information
so that the operating system does not destroy it.

Page Fault Handling


A page fault happens when you access a page that has been marked as invalid. The paging hardware
notices that the invalid bit is set while translating the address through the page table, which
causes a trap to the operating system. The trap is caused by the needed page not having been loaded
into memory.

Now, let's understand the procedure of page fault handling in the OS:

1. Firstly, an internal table for this process is checked to determine whether the reference was a valid or an
invalid memory access.
2. If the reference is invalid, the process is terminated. Otherwise, the page will be
paged in.

3. After that, a free frame is found from the free-frame list.

4. Now, a disk operation is scheduled to read the required page from the disk into the chosen frame.

5. When the I/O operation is completed, the process's page table is updated with the new frame number,
and the invalid bit is changed to valid. It is now a valid page reference.

6. Finally, the instruction that caused the page fault is restarted; if it faults again, these steps are repeated.
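The illustrative Python below sketches this procedure under simplified assumptions (a single process, a small free-frame list, and no page replacement when frames run out): a page table with a valid bit is consulted, and a miss triggers the fault handler, which loads the page into a free frame and restarts the access.

# Simplified sketch of demand paging with page-fault handling.
# Single process, small free-frame list, no page replacement (assumptions).
page_table = {}                      # page_number -> (frame_number, valid_bit)
free_frames = [0, 1, 2, 3]

def load_from_disk(page_number, frame_number):
    print(f"disk I/O: reading page {page_number} into frame {frame_number}")

def access(page_number):
    entry = page_table.get(page_number)
    if entry and entry[1]:           # valid bit set: page hit
        return entry[0]
    # Page fault: trap to the OS.
    if not free_frames:
        raise MemoryError("no free frame (replacement not modeled here)")
    frame = free_frames.pop(0)                 # 3. take a frame from the free-frame list
    load_from_disk(page_number, frame)         # 4. schedule the disk read
    page_table[page_number] = (frame, True)    # 5. update the table, set the valid bit
    return frame                               # 6. restart the access

print(access(7))   # page fault, loaded into frame 0
print(access(7))   # page hit, no I/O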

Page Fault Terminology


There are various page fault terminologies in the operating system. Some terminologies of page fault
are as follows:

1. Page Hit

When the CPU attempts to obtain a needed page from main memory and the page exists in main
memory (RAM), it is referred to as a "PAGE HIT".

2. Page Miss

If the needed page is not present in the main memory (RAM), it is known as a "PAGE MISS".

3. Page Fault Time

The time taken to service a page fault, that is, to fetch the required page from secondary memory into
main memory and resume the access, is known as the "PAGE FAULT TIME" (page fault service time).

4. Page Fault Rate

The rate at which threads encounter page faults is referred to as the "PAGE FAULT RATE".
The page fault rate is measured per second.

5. Hard Page Fault

If a required page exists in the hard disk's page file, it is referred to as a "HARD PAGE FAULT".

6. Soft Page Fault

If a required page is not located on the hard disk but is found somewhere else in memory, it is referred
to as a "SOFT PAGE FAULT".

7. Minor Page Fault

If a process needs data and that data exists in memory but is being allotted to another process at the
same moment, it is referred to as a "MINOR PAGE FAULT".
Page Replacement Algorithms in Operating Systems
In an operating system that uses paging for memory management, a page replacement
algorithm is needed to decide which page needs to be replaced when a new page comes in.
Page Fault: A page fault happens when a running program accesses a memory page that is
mapped into the virtual address space but not loaded in physical memory. Since actual
physical memory is much smaller than virtual memory, page faults happen. In case of a
page fault, Operating System might have to replace one of the existing pages with the
newly needed page. Different page replacement algorithms suggest different ways to decide
which page to replace. The target for all algorithms is to reduce the number of page faults.

Page Replacement Algorithms:

1. First In First Out (FIFO): This is the simplest page replacement algorithm. In this
algorithm, the operating system keeps track of all pages in memory in a queue, with the
oldest page at the front of the queue. When a page needs to be replaced, the page at the front
of the queue is selected for removal.
Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the
number of page faults.

Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —
> 3 page faults.
When 3 comes, it is already in memory —> 0 page faults. Then 5 comes; it is not
in memory, so it replaces the oldest page, i.e. 1 —> 1 page fault. 6 comes; it is
also not in memory, so it replaces the oldest page, i.e. 3 —> 1 page
fault. Finally, when 3 comes it is not in memory, so it replaces 0 —> 1 page fault. Total = 6 page faults.

Belady’s anomaly shows that it is possible to have more page faults when increasing the
number of page frames while using the First In First Out (FIFO) page replacement
algorithm. For example, for the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4, FIFO with
3 frames gives 9 total page faults, but if we increase the number of frames to 4, we get 10 page faults.
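A short simulation makes these counts easy to check. The sketch below (illustrative Python) counts FIFO page faults for a reference string and reproduces both the 6-fault example above and the Belady's anomaly figures (9 faults with 3 frames, 10 with 4):

from collections import deque

def fifo_page_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement."""
    frames = deque()                 # front of the deque = oldest page
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()     # evict the oldest page
            frames.append(page)
    return faults

print(fifo_page_faults([1, 3, 0, 3, 5, 6, 3], 3))                    # -> 6
belady = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_page_faults(belady, 3), fifo_page_faults(belady, 4))      # -> 9 10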
2. Optimal Page Replacement: In this algorithm, the page replaced is the one that will not be
used for the longest duration of time in the future.
Example 2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page
frames. Find the number of page faults.

Initially, all slots are empty, so when 7, 0, 1, 2 are allocated to the empty slots —> 4 page
faults.
0 is already there —> 0 page faults. When 3 comes, it takes the place of 7 because 7 is
not used for the longest duration of time in the future —> 1 page fault. 0 is already there —
> 0 page faults. 4 takes the place of 1 —> 1 page fault.
For the rest of the page reference string —> 0 page faults, because the pages are already
available in memory.
Optimal page replacement is perfect, but not possible in practice, as the operating system
cannot know future requests. The use of optimal page replacement is to set up a benchmark
against which other replacement algorithms can be analysed.
3. Least Recently Used (LRU): In this algorithm, the page replaced is the one that has been least recently
used.
Example 3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4
page frames. Find the number of page faults.

Initially, all slots are empty, so when 7, 0, 1, 2 are allocated to the empty slots —> 4 page
faults.
0 is already there —> 0 page faults. When 3 comes, it takes the place of 7 because 7 is the
least recently used page —> 1 page fault.
0 is already in memory —> 0 page faults.
4 takes the place of 1 —> 1 page fault.
For the rest of the page reference string —> 0 page faults, because the pages are already
available in memory.
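The sketch below (illustrative Python) counts LRU page faults by keeping the pages ordered by recency of use; it reproduces the 6 faults of Example 3:

def lru_page_faults(reference_string, num_frames):
    """Count page faults under LRU replacement."""
    frames = []                       # most recently used page kept at the end
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.remove(page)       # refresh its recency
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)         # evict the least recently used page
        frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(lru_page_faults(refs, 4))       # -> 6 page faults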
4. Most Recently Used (MRU): In this algorithm, the page that has been most recently
used is replaced. Belady's anomaly can occur in this algorithm.
Segmentation in Operating System
In operating systems, segmentation is a memory management technique in which the memory is
divided into variable-size parts. Each part is known as a segment, which can be allocated to a process.

The details about each segment are stored in a table called the segment table. The segment table is itself stored in
one (or more) of the segments.

The segment table mainly contains two pieces of information about each segment:

1. Base: the base address of the segment.


2. Limit: the length of the segment.

With paging, the operating system does not care about the user's view of the process. It may divide the same function into
different pages, and those pages may or may not be loaded into memory at the same time. This decreases
the efficiency of the system.

It is better to have segmentation, which divides the process into segments. Each segment contains
the same type of content; for example, the main function can be included in one segment and the library
functions can be included in another segment.

Translation of Logical address into physical address by segment table

CPU generates a logical address which contains two parts:

1. Segment Number

2. Offset

For example:

Suppose a 16-bit address is used, with 4 bits for the segment number and 12 bits for the segment offset.
Then the maximum segment size is 4096 words and the maximum number of segments that can be referenced is
16.
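The illustrative Python below sketches this translation for the assumed 16-bit format (4-bit segment number, 12-bit offset): the segment number indexes a segment table of (base, limit) pairs, the offset is checked against the limit, and the physical address is base + offset. The segment table contents are assumed for the example.

# Minimal sketch of segmented address translation (16-bit logical address:
# 4-bit segment number, 12-bit offset).  Segment table values are assumed.
segment_table = [
    (0x4000, 0x0800),   # segment 0: base, limit (2 KB)
    (0x8000, 0x1000),   # segment 1: base, limit (4 KB)
]

def translate(logical_address):
    segment_number = logical_address >> 12        # top 4 bits
    offset = logical_address & 0x0FFF             # low 12 bits
    base, limit = segment_table[segment_number]
    if offset >= limit:
        raise MemoryError("segmentation fault: offset exceeds segment limit")
    return base + offset

print(hex(translate(0x1234)))   # segment 1, offset 0x234 -> 0x8234
# translate(0x0900) would trap: offset 0x900 >= limit 0x800 of segment 0.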

When a program is loaded into memory, the segmentation system tries to locate space that is large
enough to hold the first segment of the process; space information is obtained from the free list
maintained by the memory manager. Then it tries to locate space for the other segments. Once adequate space
is located for all the segments, it loads them into their respective areas.
Paged Segmentation and Segmented Paging
Paged Segmentation and Segmented Paging are two different memory management
techniques that combine the benefits of paging and segmentation.

1. Paged Segmentation is a memory management technique that divides a process’s

address space into segments and then divides each segment into pages. This allows for a
flexible allocation of memory, where each segment can have a different size, while the
pages within a segment are of fixed size.
2. Segmented Paging, on the other hand, is a memory management technique that divides
the logical address space into segments and then pages each segment: physical memory
is divided into frames, and each segment has its own page table that maps the segment's
pages to frames.
3. Both Paged Segmentation and Segmented Paging provide the benefits of paging, such
as improved memory utilization, reduced fragmentation, and increased performance.
They also provide the benefits of segmentation, such as increased flexibility in memory
allocation, improved protection and security, and reduced overhead in memory
management.
Segmented paging
A solution to the problem of large page tables is to use segmentation along with paging. Traditionally,
a program is divided into four segments, namely the code segment,
data segment, stack segment and heap segment.

Segments of a process

The size of the page table can be reduced by creating a separate page table for each segment. To
accomplish this, hardware support is required. The address provided by the CPU is now
partitioned into a segment number, a page number and an offset.

The memory management unit (MMU) uses the segment table, which contains the
address of each segment's page table (the base) and its limit. The page table then points to the page frames of the
segment in main memory.
Segmented Paging
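As a minimal sketch of this two-level lookup (illustrative Python; the page size, field meanings and table contents are assumed), the segment number selects a per-segment page table, the page number is checked against that segment's limit and then indexes the page table, and the frame number is combined with the offset:

# Minimal sketch of segmented paging: segment table -> per-segment page table -> frame.
# Page size and table contents are assumed for illustration.
PAGE_SIZE = 1024

# segment_table[segment] = (page_table, limit_in_pages)   (assumed)
segment_table = {
    0: ({0: 3, 1: 8}, 2),         # code segment: 2 pages
    1: ({0: 5, 1: 6, 2: 12}, 3),  # data segment: 3 pages
}

def translate(segment, page, offset):
    page_table, limit = segment_table[segment]
    if page >= limit:
        raise MemoryError("trap: page number exceeds segment limit")
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

print(translate(1, 2, 40))   # data segment, page 2 -> frame 12 -> 12*1024 + 40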

Advantages of Segmented Paging


1. The page table size is reduced, as page-table entries exist only for the pages that the segments
actually contain, reducing the memory requirements.
2. Gives a programmer's view along with the advantages of paging.
3. Reduces external fragmentation in comparison with segmentation.
4. Since an entire segment need not be swapped out, swapping out to virtual
memory becomes easier.
Disadvantages of Segmented Paging
1. Internal fragmentation still exists in pages.
2. Extra hardware is required.
3. Translation requires more sequential table lookups, increasing the memory access time.
4. External fragmentation occurs because of the varying sizes of page tables and segment
tables in today’s systems.
Paged Segmentation
1. In segmented paging, not every process has the same number of segments, and the
segment tables can be large in size, which causes external fragmentation due to the
varying segment table sizes. To solve this problem, we use paged segmentation, which
requires the segment table itself to be paged. The logical address generated by the CPU will
now consist of page no. #1, segment no., page no. #2 and an offset.
2. The page table, even with segmented paging, can have a lot of invalid pages. Instead of
using multi-level paging along with segmented paging, the problem of a large page table
can also be solved by directly applying multi-level paging instead of segmented paging.
Paged Segmentation

Advantages of Paged Segmentation


1. No external fragmentation.
2. Reduced memory requirements, as the number of pages is limited by the segment size.
3. The page table size is smaller, just as in segmented paging.
4. Similar to segmented paging, an entire segment need not be swapped out.
5. Increased flexibility in memory allocation: Paged Segmentation allows for a flexible
allocation of memory, where each segment can have a different size (pages within a
segment remain of fixed size).
6. Improved protection and security: Paged Segmentation provides better protection and
security by isolating each segment and its pages, preventing a single segment from
affecting the entire process’s memory.
7. Increased program structure: Paged Segmentation provides a natural program structure,
with each segment representing a different logical part of a program.
8. Improved error detection and recovery: Paged Segmentation enables the detection of
memory errors and the recovery of individual segments, rather than the entire process’s
memory.
9. Reduced overhead in memory management: Paged Segmentation reduces the overhead
in memory management by eliminating the need to maintain a single, large page table
for the entire process’s memory.
10. Improved memory utilization: Paged Segmentation can improve memory utilization by
reducing fragmentation and allowing for the allocation of larger blocks of contiguous
memory to each segment.
Disadvantages of Paged Segmentation
1. Internal fragmentation remains a problem.
2. The hardware is more complex than for segmented paging.
3. The extra level of paging at the first stage adds to the delay in memory access.
4. Increased complexity in memory management: Paged Segmentation introduces
additional complexity into the memory management process, as it requires the
maintenance of multiple page tables for each segment, rather than a single page table
for the entire process’s memory.
5. Increased overhead in memory access: Paged Segmentation introduces additional
overhead in memory access, as it requires multiple lookups in multiple page tables to
access a single memory location.
6. Reduced performance: Paged Segmentation can result in reduced performance, as the
additional overhead in memory management and access can slow down the overall
process.

With the help of the segment table and hardware assistance, the operating system can easily translate
a logical address into a physical address during the execution of a program.

The segment number is used as an index into the segment table. The limit of the respective segment is compared
with the offset. If the offset is less than the limit, the address is valid; otherwise an error is raised
because the address is invalid.

For valid addresses, the base address of the segment is added to the offset to get the physical
address of the actual word in main memory.

The above figure shows how address translation is done in the case of segmentation.

Advantages of Segmentation
1. No internal fragmentation
2. The average segment size is larger than the actual page size.

3. Less overhead.

4. It is easier to relocate segments than an entire address space.

5. The segment table is of smaller size as compared to the page table in paging.

Disadvantages
1. It can have external fragmentation.
2. It is difficult to allocate contiguous memory to variable-sized partitions.

3. Costly memory management algorithms.
