

NotesNeo

Unit 3 : Memory Management and Virtual Memory

Syllabus :
Memory Management: Basic concept, Logical and Physical address map, Memory
allocation: Contiguous Memory allocation – Fixed and variable partition – Internal and
External fragmentation and Compaction; Paging: Principle of operation – Page allocation
– Hardware support for paging, Protection and sharing, Disadvantages of paging.
Virtual Memory: Basics of Virtual Memory – Hardware and control structures – Locality of
reference, Page fault, Working Set, Dirty page/Dirty bit – Demand paging, Page
Replacement algorithms: First In First Out (FIFO), Optimal, and Least Recently Used (LRU).

Section 1 : Memory Management

I. Introduction to Memory Management

A. Basic Concepts of Memory Management


●​ Memory management is a critical function of an operating system that involves
managing the computer's memory resources, including primary memory (RAM) and
secondary storage (such as hard disks and SSDs).
●​ The primary goal of memory management is to ensure efficient and effective use
of memory, enabling multiple processes to run concurrently without interfering with
each other.

Function of Memory Management:


1.​ Allocation: Assigning memory space to processes or programs, ensuring they have
adequate resources for execution.
2.​ Mapping: Establishing the correspondence between logical addresses (generated
by the CPU) and physical addresses (actual memory locations).
3.​ Protection: Implementing mechanisms to prevent unauthorized access to memory
regions, enhancing system security.
4.​ Deallocation: Reclaiming memory space when it is no longer needed by a process,
preventing memory leaks and optimizing resource utilization.

Importance of Memory Management:


1.​ Efficient Utilization: Ensuring that memory is used efficiently, reducing wastage,
and maximizing the number of processes that can run concurrently.
2.​ System Performance: Enhancing system performance by optimizing memory
access patterns and minimizing the overhead associated with memory
management.
3.​ System Stability: Preventing memory leaks and ensuring that processes do not
interfere with each other, thereby maintaining overall system stability and
reliability.


B. Logical and Physical Address Mapping

Logical Address (Virtual Address):


●​ Generated by the CPU during program execution.
●​ Doesn’t exist physically, so also known as a virtual address.
●​ Used by applications to access memory.
●​ Allows for abstraction and protection, as programs operate in their own address
space.

Physical Address:
●​ Actual location in main memory where data is stored.
●​ Directly corresponds to the hardware, i.e., physical memory cells.
●​ Used by the memory controller to access physical memory.

Address Translation:
●​ Processes access memory using logical addresses, which are translated into
physical addresses before each memory access.
●​ The memory management unit (MMU), a hardware unit, performs this translation
at run time, while the program executes.
●​ In the simplest scheme, it uses a relocation register (base register) to map logical
addresses to physical addresses.
●​ In paging and segmentation, this translation is facilitated by page tables or
segment tables, which store the mapping information.
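As a toy illustration of the base-register scheme (the register values below are hypothetical, not from any particular system), the translation can be sketched as:

```python
# Toy sketch of base/limit address translation (hypothetical values).
# The MMU adds the relocation (base) register to every logical address
# at run time; the limit register bounds-checks the address first.

def translate(logical_addr, base, limit):
    """Map a logical address to a physical one, trapping on overflow."""
    if logical_addr >= limit:
        raise MemoryError("trap: address outside process space")
    return base + logical_addr

# A process loaded at physical address 14000 with a 3000-byte space:
print(translate(346, base=14000, limit=3000))   # -> 14346
```

Every address the process generates is checked and relocated in hardware, which is what lets each process run in its own private address space.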

II. Memory Allocation Techniques


Memory allocation techniques are essential for efficient utilization of computer memory.
There are two primary Memory Allocation Techniques:
1.​ Contiguous
2.​ Non-Contiguous

In Contiguous Memory Allocation, the memory is allocated in a contiguous block. It is
further divided into two types:
1.​ Fixed (Static) Partitioning :- The memory is divided into fixed-size partitions, and
each process occupies one partition.
2.​ Variable (Dynamic) Partitioning :- Partitions are dynamically allocated based on
process requirements.

Various contiguous memory allocation algorithms include:
1.​ First Fit :- Assigns the first available partition that fits the process.
2.​ Best Fit :- Allocates the smallest available partition that fits the process.
3.​ Worst Fit :- Allocates the largest available partition.
4.​ Next Fit :- Similar to First Fit, but starts searching from the last allocated partition.

In Non-Contiguous Memory Allocation, the memory allocated to a process need not be one
continuous block. It is further divided into two types:
1.​ Paging :- Divides processes into fixed-size pages and main memory into
equal-sized frames. Efficient, but may lead to internal fragmentation.
2.​ Segmentation :- Divides processes into variable-sized segments (e.g., code, data,
stack) and allocates memory to each segment.

A. Contiguous Memory Allocation


Contiguous memory allocation is a method of allocating memory where each process is
allocated a contiguous block of memory. This means that the memory allocated to a
process is one continuous chunk, without any gaps in between.

1. Fixed (Static) Partitioning


●​ Fixed partitioning divides memory into fixed-size partitions, and each partition is
allocated to a single process.
●​ It's simple and easy to implement but can lead to wastage of memory if processes
don't perfectly fit into the fixed partitions.
●​ This approach is like dividing a pizza into equal slices before giving each person
their portion.
●​ Examples: Fixed partitioning is commonly used in early operating systems and
embedded systems where memory requirements are predictable and static.

In fixed partitioning:
●​ Main memory is divided into partitions of equal or different sizes.
●​ The operating system typically resides in the first partition, while the remaining
partitions are available to store user processes.
●​ Memory is assigned to processes in a contiguous manner, meaning that a process
must occupy a single, uninterrupted block of memory.

Advantages of Fixed Partitioning:


1.​ Simplicity and Ease of Implementation: Fixed partitioning is simple to understand
and implement. The algorithms required for this technique are straightforward.
2.​ Predictability: The operating system can ensure a minimum amount of memory for
each process, providing predictability in memory allocation.
3.​ Prevention of Process Interference: Processes are confined to their respective
partitions, preventing interference with each other's memory space. This
contributes to system security and stability.
4.​ Low Overhead: Fixed partitioning requires minimal overhead, making it suitable for
systems with limited resources.

Disadvantages of Fixed Partitioning:
1.​ Internal Fragmentation:
●​ When a process occupies a partition that is larger than its size, the extra
memory within the partition remains unused, resulting in wastage. This
wasted memory is known as internal fragmentation.
●​ For example: If a 3 MB process is loaded into a 4 MB partition, then the
remaining 1 MB of memory within the partition remains unused.

2.​ External Fragmentation:


●​ Even if there is enough total unused space across multiple partitions, it may
not be usable for loading new processes if the free space is not contiguous.
●​ For instance, if there are multiple 4 MB partitions each with 1 MB of unused
space, a new 4 MB process cannot be loaded because the available space is
fragmented across different partitions.
3.​ Limitation on Process Size:
●​ The size of a process must not exceed the size of the largest partition. If a
process size exceeds the maximum partition size, it cannot be loaded into
memory.
●​ This limitation imposes constraints on the size of processes that can be
executed in the system.
4.​ Reduced Degree of Multiprogramming:
●​ The degree of multiprogramming refers to the maximum number of
processes that can be simultaneously loaded into memory.
●​ In fixed partitioning, the degree of multiprogramming is fixed and limited
due to the fixed sizes of partitions. This limitation reduces the system's
ability to run multiple processes concurrently.
●​ Since partition sizes cannot be adjusted dynamically, the degree of
multiprogramming remains constant and relatively low.

2. Variable (Dynamic) Partitioning
●​ Variable partitioning is a memory management technique designed to overcome
the limitations and inefficiencies of fixed partitioning.
●​ Variable partitioning allocates memory dynamically by allowing partitions to be
created and resized dynamically based on the size of the processes being loaded
into memory. This approach reduces wastage of memory but requires more
complex management to handle varying sizes of processes.
●​ It's like cutting a cake into slices of varying sizes based on how many people need
to be served.
●​ Examples: Variable partitioning is used in modern operating systems like Linux and
Windows, where memory requirements of processes vary dynamically.

In dynamic partitioning:
●​ Partition sizes are not predefined; instead, they are determined at the time of
process loading based on the size of the process.
●​ The first partition is typically reserved for the operating system, while the
remaining space is divided into partitions of varying sizes to accommodate
processes.

Advantages of Dynamic Partitioning over Fixed Partitioning:


1.​ No Internal Fragmentation:
●​ Since partitions are created according to the size of the process being
loaded, there is no unused space within partitions, eliminating internal
fragmentation.
2.​ No Limitation on Process Size:
●​ Unlike fixed partitioning, where processes must fit within predefined
partition sizes, dynamic partitioning allows processes of any size to be
executed as long as there is enough available memory.
3.​ Dynamic Degree of Multiprogramming:
●​ With no internal fragmentation, there is more efficient use of memory,
allowing for a dynamic degree of multiprogramming where more processes
can be loaded into memory simultaneously.

Disadvantages of Dynamic Partitioning:
1.​ External Fragmentation:
●​ Despite the absence of internal fragmentation, external fragmentation can
still occur. When processes are completed and memory is freed, the resulting
unused partitions may not be contiguous, leading to wasted space that
cannot be utilized by new processes.
●​ For example: If processes P1 (5 MB), P2 (2 MB), P3 (3 MB), and P4 (4 MB) are
loaded and then P1 and P3 complete, their memory becomes available (5 MB
and 3 MB blocks). However, a new process P5 requiring 8 MB cannot be
allocated since the free memory is not contiguous.

2.​ Complex Memory Allocation:


●​ The allocation and deallocation of memory partitions in dynamic
partitioning are more complex compared to fixed partitioning. The operating
system must constantly track and manage partitions, adjusting their sizes as
processes are loaded and unloaded.
3.​ Difficulty in Implementation:
●​ Dynamic partitioning is more difficult to implement compared to fixed
partitioning due to its dynamic nature and continuous allocation
adjustments.

B. Fragmentation Issues
Fragmentation refers to the wastage of memory space due to inefficient memory
allocation: the available memory space becomes divided into small, non-contiguous
blocks, making it challenging to allocate large contiguous blocks of memory to
processes. It leads to inefficient memory utilization and can lower the performance of
the system.
There are two types of fragmentation:

1. Internal Fragmentation
Internal fragmentation happens when a process is allocated more memory than it
actually needs, resulting in wasted space within the allocated memory block. It's like
buying a large pizza but only eating a small portion, leaving the rest uneaten and wasted.
●​ Causes and Consequences: Internal fragmentation occurs due to fixed-sized
partitions or memory allocation strategies that allocate memory in fixed-sized
chunks. It reduces overall system efficiency by wasting memory that could have
been utilized by other processes.
●​ Removal: Using variable-sized (dynamic) partitions, or allocation units closer to
the actual request size, can reduce internal fragmentation; compaction cannot,
since the wasted space lies inside allocated blocks.

2. External Fragmentation
External fragmentation occurs when there is enough total free memory to satisfy a
request, but it is not contiguous: the free space is scattered in small chunks throughout
memory, making it difficult to allocate contiguous blocks to processes. It's like having
enough total space in your room, but it's divided into small, scattered areas, making it
challenging to find a single large enough space for a new item.
●​ Causes and Consequences: External fragmentation arises when processes are
loaded and removed from memory, leaving gaps of free memory between
allocated blocks. These gaps cannot be utilized by processes requesting contiguous
memory, leading to inefficient memory utilization.
●​ Removal: Compaction can reduce external fragmentation by consolidating free
memory into one contiguous block; paging and segmentation avoid it by allowing
non-contiguous allocation.

Internal Fragmentation vs External Fragmentation

Aspect | Internal Fragmentation | External Fragmentation
Definition | Occurs when allocated memory is larger than what is needed. | Occurs when there is unused space between allocated memory blocks.
Cause | Fixed-sized memory blocks larger than needed. | Inefficient dynamic allocation and deallocation of memory.
Location | Happens within allocated memory blocks. | Exists outside allocated memory blocks.
Occurrence | More common in fixed partitioning and paging. | More common in dynamic partitioning.
Impact | Reduces efficient use of allocated memory. | Prevents allocation of large contiguous blocks.
Solution | Use variable-sized partitions or dynamic partitioning. | Memory compaction, paging, or segmentation.
Example | Allocating 4 MB memory for a 3 MB process (1 MB wasted). | Multiple 1 MB free memory blocks, unable to fit a 2 MB process.

C. Compaction Techniques
Compaction is a technique used to reduce external fragmentation by moving allocated
blocks together so that all free memory forms one contiguous block. It's like
rearranging furniture in a room to create more space.
Compaction can be done periodically or when necessary to ensure efficient memory
usage. However, compaction can be time-consuming and may require moving active
processes, impacting performance temporarily.

Explanation and Implementation:


●​ Compaction involves rearranging memory allocation to consolidate free memory
blocks and reduce fragmentation.
●​ During compaction, allocated blocks are moved together so that the free memory
merges into larger contiguous blocks that can be allocated to processes.
●​ Compaction is typically performed during periods of low system activity to
minimize disruption to running processes.
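A minimal sketch of the idea (the memory layout below is made up, and real compaction must also relocate the processes' addresses):

```python
# Minimal compaction sketch: slide every allocated block toward address 0
# so that all free memory coalesces into one contiguous hole at the end.
# Memory is a list of (owner, size) pairs; None marks a free hole.

def compact(blocks):
    allocated = [b for b in blocks if b[0] is not None]
    free_space = sum(size for owner, size in blocks if owner is None)
    return allocated + [(None, free_space)]

memory = [("P1", 5), (None, 2), ("P3", 3), (None, 4)]   # sizes in MB
print(compact(memory))   # -> [('P1', 5), ('P3', 3), (None, 6)]
```

After compaction the two scattered holes (2 MB and 4 MB) become one 6 MB hole, so a process needing up to 6 MB of contiguous memory can now be loaded.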

Pros and Cons of Compaction:


●​ Pros: Compaction can significantly reduce fragmentation and improve memory
utilization, leading to better overall system performance.

●​ Cons: Compaction may introduce overhead and delay in memory allocation,
especially in systems with large memory capacities. Additionally, compaction may
not be feasible in real-time systems where strict timing constraints must be met.

D. Contiguous Memory Allocation Algorithms


●​ First Fit, Best Fit, and Worst Fit are collectively types of contiguous memory
allocation algorithms or partitioning algorithms used in operating systems. These
algorithms determine how the operating system assigns memory blocks to
processes requesting memory.
●​ Each algorithm has its own approach to selecting a suitable memory block for
allocation, influencing factors such as memory utilization, fragmentation, and
computational overhead.

1. First-Fit Allocation:
In First-Fit memory allocation, the operating system searches for the first available
memory block that is large enough to hold the incoming process. It doesn't search for
the best-suited block but allocates the process to the first available partition with
sufficient size.

Advantages:
●​ Simple and efficient search algorithm.
●​ Low allocation overhead, since the search stops at the first suitable hole.
●​ Fast allocation of memory.

Disadvantages:
●​ Poor performance in highly fragmented memory.
●​ May lead to poor memory utilization.
●​ May allocate larger blocks of memory than required.

2. Best-Fit Allocation:
Best-Fit allocation allocates memory by searching for the smallest free block of memory
that is large enough to accommodate the process. It aims to minimize memory wastage
by selecting the closest-fitting free partition.

Advantages:
●​ Efficient memory usage: chooses the closest-fitting free partition.
●​ Minimizes the leftover space in the chosen partition, saving memory from being
wasted.

Disadvantages:
●​ Slow process due to extensive memory search.
●​ Increased computational overhead.
●​ Tends to leave many tiny leftover holes, worsening external fragmentation.
●​ Can lead to slow memory allocation times.

3. Worst-Fit Allocation:
Worst-Fit allocation searches for the largest free block of memory to place the process.
The idea is that the hole left over after allocation is large, so other smaller processes
can still use the leftover space.

Advantages:
●​ Leaves large leftover holes that other processes can often still use.

Disadvantages:
●​ Slow process due to traversing the entire memory.
●​ Quickly breaks up the largest holes, so later large processes may not fit.
●​ Can result in slow memory allocation times.
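The three placement policies can be compared with a short sketch (the hole sizes and the 212 KB request below are illustrative):

```python
# Sketch of the three placement policies over a list of free hole sizes.
# Each function returns the index of the chosen hole, or None if no hole fits.

def first_fit(holes, size):
    # Stop at the first hole large enough.
    return next((i for i, h in enumerate(holes) if h >= size), None)

def best_fit(holes, size):
    # Smallest hole that still fits.
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    # Largest hole overall.
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]   # free partition sizes in KB
print(first_fit(holes, 212))  # -> 1 (500 KB, first that fits)
print(best_fit(holes, 212))   # -> 3 (300 KB, smallest that fits)
print(worst_fit(holes, 212))  # -> 4 (600 KB, largest hole)
```

The same request lands in a different hole under each policy, which is exactly the trade-off the three algorithms make between search speed and leftover space.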

III. Paging

A. Need for Paging


In the context of memory management in operating systems, the need for paging arises
from the limitations and disadvantages associated with other memory allocation
techniques, particularly dynamic partitioning. The primary issue addressed by paging is
external fragmentation.
1.​ Problem Scenario with Dynamic Partitioning:
●​ Imagine a scenario where the main memory is divided into partitions, and
some processes leave gaps or holes in the memory after completion.
●​ Now, a new process requires memory larger than the available contiguous
holes. Although there might be sufficient total space, the lack of contiguous
memory prevents the process from being loaded.
2.​ Addressing Contiguous Memory Constraints:
●​ The primary challenge is to efficiently utilize available memory space even
when it is not contiguous.
●​ Compaction, a technique used to remove external fragmentation by
rearranging processes, introduces inefficiency due to the movement of
processes in memory.
3.​ Introducing Paging:
●​ The need for paging arises as a solution to the problem of storing processes
in a more optimal way, especially when contiguous memory is not readily
available.
●​ Paging divides processes into smaller units called pages, and the main
memory into frames of equal size. This division allows pages of a process to
be stored at different locations in memory, addressing the issue of
contiguous allocation.
4.​ Example:

●​ If a process P1 needs 2 MB of space, and there are two holes of 1 MB each in
the main memory but not contiguous, paging allows storing different pages
of P1 in those non-contiguous locations.

Paging in OS (Operating System):


Paging is a memory management technique used in operating systems to efficiently
utilize main memory. It involves dividing processes into fixed-size units called pages and
dividing main memory into corresponding units called frames. Each page of a process is
stored in a frame of the main memory. Paging is used to retrieve processes from
secondary storage into the main memory in the form of pages.

Principle of Operation / Working:


1.​ Division into Pages and Frames:
●​ Each process is divided into pages, and the main memory is divided into
frames of equal size.
●​ The size of a page matches the size of a frame.

2.​ Storage Mechanism:


●​ One page of a process is stored in one frame of the main memory.
●​ The frames holding a process's pages need not be adjacent: pages can be
stored in different locations of memory, eliminating the need for
contiguous memory allocation.

3.​ Dynamic Loading:


●​ Pages of a process are brought into the main memory only when they are
needed for execution. Otherwise, they reside in secondary storage, such as a
hard drive.

●​ This dynamic loading optimizes memory usage by loading only the
necessary pages into memory at any given time.

4.​ Frame Size Consistency:


●​ Different operating systems may define different frame sizes, but the size of
each frame within an OS must be equal.
●​ To ensure efficient paging, the page size must match the frame size so that
pages can be mapped directly to frames.

5.​ Page Table:


●​ Each process has its own page table that maps the logical pages to the
physical frames.
●​ The page table is stored in memory and contains entries for each page,
indicating the corresponding frame in physical memory.

6.​ Memory Management Unit (MMU):


●​ The Memory Management Unit (MMU) plays a crucial role in paging by
converting logical addresses generated by the CPU into physical addresses,
allowing access to the actual locations of pages in the main memory.
●​ The logical address consists of two parts:
○​ Page Number: Specifies the page within the process.
○​ Offset: Indicates the position within the page.
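The page-number/offset split can be sketched as follows (the 1 KB page size and page-table contents are made up for illustration):

```python
# Splitting a logical address into (page number, offset) and mapping it
# through a toy page table. Page size is 1 KB; the table below is
# hypothetical.

PAGE_SIZE = 1024
page_table = {0: 5, 1: 2, 2: 7}   # page number -> frame number

def translate(logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)  # split the address
    frame = page_table[page]   # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

print(translate(1030))   # page 1, offset 6 -> frame 2 -> 2054
```

Because the offset is simply carried over unchanged, the MMU only has to look up the frame number; with page sizes that are powers of two, the split is a cheap bit operation in hardware.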

7.​ Page Allocation:


●​ Pages are allocated to processes dynamically as needed.
●​ When a process requires memory, the operating system allocates pages
from the free list and maps them to available page frames in the physical
memory.

Example:
1.​ Consider a scenario with a main memory size of 16 KB and a frame size of 1 KB.
This results in 16 frames, each of 1 KB size.
2.​ Suppose there are four processes, P1, P2, P3, and P4, each requiring 4 KB of
memory. Each process is divided into pages of 1 KB, allowing one page to be stored
in one frame.
3.​ Initially, all frames are empty, and the pages of the processes are stored in
contiguous frames, one page per frame.


4.​ Later, processes P2 and P4 enter a waiting state, freeing up eight frames. Another
process, P5, requiring 8 KB of memory (eight pages), is waiting in the ready queue.
5.​ With eight non-contiguous empty frames available in memory, pages of process P5
can be loaded into these frames, replacing the pages of processes P2 and P4.

B. Hardware Support for Paging


Paging requires hardware support from the Memory Management Unit (MMU) and the
Translation Lookaside Buffer (TLB).
●​ The MMU translates logical addresses generated by the CPU into physical
addresses by using a page table. This table keeps track of which pages are
mapped to which frames in physical memory.
●​ The MMU contains a page table base register (PTBR) that points to the base
address of the page table.

●​ The TLB is a cache used to speed up address translation by storing the recent page
table entries.
●​ When a logical address is translated, the MMU first checks the TLB. If the entry is
found (a TLB hit), the physical address is obtained quickly.
●​ If the entry is not found (a TLB miss), the MMU accesses the page table in memory
to retrieve the frame number.
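The hit/miss behaviour can be sketched with a toy TLB in front of a page table (all sizes and values below are illustrative):

```python
# Toy TLB in front of a page table: a hit answers immediately, a miss
# falls back to the page table in memory and caches the entry.

page_table = {0: 5, 1: 2, 2: 7, 3: 1}   # page number -> frame number
tlb = {}                                 # small cache of recent entries

def lookup(page):
    if page in tlb:
        return tlb[page], "TLB hit"      # fast path
    frame = page_table[page]             # slow path: extra memory access
    tlb[page] = frame                    # cache the entry for next time
    return frame, "TLB miss"

print(lookup(2))   # -> (7, 'TLB miss')
print(lookup(2))   # -> (7, 'TLB hit')
```

The second access to the same page avoids the page-table walk entirely, which is why a high TLB hit ratio is essential for paging performance.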

C. Protection and Sharing in Paging


Paging also provides mechanisms for protecting memory and sharing it among different
processes. Each page can be assigned specific permissions, such as read-only or
read-write, to control access. Additionally, pages can be shared among multiple
processes, allowing them to access the same memory region. It's like giving certain
students access to read-only notes while allowing others to edit and add their own notes
to the same pages.

Security Measures in Paging:


●​ Paging enables the operating system to implement memory protection
mechanisms to prevent unauthorized access to memory regions.
●​ Each page table entry contains protection bits that specify whether the page is
readable, writable, or executable.
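A toy check of these protection bits might look like this (the bit layout and page contents are hypothetical):

```python
# Toy protection-bit check: each page-table entry carries R/W/X bits,
# and every access is validated against them before it proceeds.

READ, WRITE, EXEC = 0b100, 0b010, 0b001

page_perms = {0: READ | EXEC,   # code page: read + execute
              1: READ | WRITE}  # data page: read + write

def check_access(page, requested):
    if page_perms[page] & requested != requested:
        raise PermissionError("protection fault")   # trap to the OS
    return True

print(check_access(1, READ | WRITE))   # -> True
```

An attempt to write to the code page (page 0) would raise the fault, which is how, for example, a shared read-only page is kept safe from modification.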

Mechanisms for Sharing Pages:


●​ Paging facilitates memory sharing among processes by allowing multiple processes
to map the same physical page to their respective logical address spaces.
●​ Shared memory regions can be used for inter-process communication and data
sharing.

D. Advantages and Disadvantages of Paging

Advantages:
1.​ Eliminates External Fragmentation:
●​ Paging divides memory into fixed-size pages and frames, preventing gaps
between allocated memory blocks. This avoids external fragmentation
where free memory is scattered in non-contiguous blocks.
2.​ Efficient Memory Utilization:
●​ Processes can be loaded into any available frames, making it easier to use
all available memory without worrying about finding a large enough
contiguous block of memory.
3.​ Simplicity in Memory Management:
●​ Fixed-size pages simplify memory allocation and deallocation. The operating
system can easily keep track of free frames and allocate them to processes
as needed.
4.​ Support for Virtual Memory:

●​ Paging supports virtual memory, allowing processes to use more memory
than is physically available by swapping pages between physical memory
and secondary storage (e.g., a hard disk).
5.​ Protection and Isolation:
●​ Each process has its own page table, ensuring that one process cannot
accidentally access another process's memory. This provides better
protection and isolation between processes.
6.​ Flexible Address Space:
●​ Paging allows processes to have a non-contiguous address space, making it
easier to manage memory for processes with varying memory needs and
sizes.
7.​ Easier to Load Large Programs:
●​ Large programs can be loaded into memory even if there isn’t a single large
contiguous block of free memory. The program is divided into pages, and
each page can be loaded into any available frame.

Disadvantages:
1.​ Internal Fragmentation:
●​ Since pages and frames are of fixed size, there can be wasted space within
pages if a process does not use the entire page. This unused space within a
page is known as internal fragmentation.
2.​ Overhead of Maintaining Page Tables:
●​ Paging requires additional memory and computational overhead to
maintain page tables. Each process needs a page table to map its logical
pages to physical frames.
3.​ TLB Miss Penalty:
●​ Address translation involves looking up entries in the Translation Lookaside
Buffer (TLB). If the TLB does not contain the required page table entry (a
TLB miss), it must be fetched from the page table in memory, which can slow
down the process.
4.​ Complexity in Address Translation:
●​ Translating logical addresses to physical addresses requires additional steps
and hardware support (such as a Memory Management Unit, or MMU),
adding complexity to the system.
5.​ Thrashing:
●​ Thrashing occurs when the system spends excessive time swapping pages
between main memory and secondary storage due to high paging activity.
This severely degrades system performance.
6.​ Memory Overhead for Page Tables:
●​ Large processes require large page tables, which consume additional
memory. This can be particularly problematic in systems with limited
memory.
7.​ Increased Latency:
●​ Accessing memory through paging can introduce latency due to the multiple
steps involved in address translation, including potential TLB lookups and
page table accesses.


IV. Segmentation
Segmentation is a memory management technique in operating systems where memory
is divided into variable-sized parts called segments. Each segment can be allocated to a
process, and details about each segment are stored in a table called a segment table.

The segment table mainly contains two pieces of information about each segment:


1.​ Base: It is the base address of the segment.
2.​ Limit: It is the length of the segment.

Why Segmentation is Required:


●​ Paging is closer to the operating system's view of memory than to the user's. It
divides every process into fixed-size pages regardless of the fact that logically
related parts of the process, such as a single function, may need to be loaded
together.
●​ The operating system doesn't care about the user's view of the process: it may
split the same function across different pages, and those pages may or may not
be loaded into memory at the same time, which decreases the efficiency of the
system.
●​ Segmentation instead divides the process into logical segments. Each segment
contains one type of content; for example, the main function can be included in
one segment and the library functions in another.

Translation of Logical Address:


When the CPU generates a logical address, it contains a segment number and an offset.
For example, a 16-bit address with 4 bits for the segment number and 12 bits for the
offset allows a maximum segment size of 4096 and a maximum of 16 segments.
Segmentation translates logical addresses into physical addresses using segment tables.
The segment number is mapped to the segment table, and the offset is compared with the
segment's limit. If valid, the base address of the segment is added to the offset to obtain
the physical address in main memory.
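This check-then-add translation can be sketched as follows (the base/limit values in the segment table are hypothetical):

```python
# Segment-table translation sketch: each entry holds (base, limit); the
# offset is checked against the limit before the base is added.

segment_table = [(1400, 1000),   # segment 0: base 1400, limit 1000
                 (6300, 400),    # segment 1
                 (4300, 1100)]   # segment 2

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

print(translate(2, 53))   # -> 4353
```

Unlike paging, the offset here is bounds-checked against a per-segment limit, because segments are variable-sized.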


Advantages of Segmentation:
1.​ No internal fragmentation: Segments are variable-sized and allocated exactly
the space they need, so segmentation has no internal fragmentation.
2.​ Larger average segment size: Segments can be larger than pages, optimizing
memory usage.
3.​ Less overhead: Segmentation has lower overhead compared to other techniques.
4.​ Easy relocation: Relocating segments is easier than relocating entire address
spaces.
5.​ Smaller segment tables: Segment tables are smaller than page tables in paging.

Disadvantages of Segmentation:
1.​ External fragmentation: Segmentation may lead to external fragmentation,
reducing memory efficiency.
2.​ Difficulty in contiguous memory allocation: Allocating contiguous memory to
variable-sized partitions can be challenging.
3.​ Costly memory management: The algorithms used in segmentation can be
resource-intensive.

Paging vs Segmentation:

Feature | Paging | Segmentation
Contiguity of Memory Allocation | Memory allocation is non-contiguous, with pages distributed throughout memory | Memory allocation is non-contiguous, with segments distributed throughout memory
Division of Program | Divides program into fixed-size pages | Divides program into variable-size segments
Responsibility | Managed by the operating system | Managed by the compiler
Speed | Generally faster than segmentation | Typically slower than paging
Proximity to System | Closer to the Operating System | Closer to the User
Fragmentation | Prone to internal fragmentation | Prone to external fragmentation
External Fragmentation | Not applicable, as pages are of fixed size | Occurs, as segments are variable in size
Logical Address Division | Logical address divided into page number and page offset | Logical address divided into segment number and segment offset
Table Maintenance | Page table used to maintain page information | Segment table used to maintain segment information
Entry Content | Each entry contains frame number and flag bits | Each entry contains base address of segment and protection bits

Section 2 : Virtual Memory

V. Virtual Memory

A. Basics of Virtual Memory

Definition and Purpose:


●​ Virtual memory is a memory management technique that allows a computer to use
more memory than is physically available by temporarily transferring data from
main memory (RAM) to secondary storage (hard disk or SSD).
●​ It allows multiple processes to run simultaneously without the need for all program
code and data to be loaded into physical memory simultaneously.
●​ Its significance is that it creates the illusion that even a process larger than
main memory can be executed.

How Virtual Memory Works:


●​ When a process requires more memory than the available physical RAM, the
operating system transfers some of the less frequently used data from RAM to the
secondary storage, freeing up space in RAM for other processes.
●​ This process, known as paging or swapping, allows multiple processes to share the
limited physical memory effectively.

●​ When data is needed from the secondary storage, it is swapped back into RAM,
replacing other data if necessary. This is transparent to the user and appears as if
the entire program is stored in RAM.

●​ Virtual memory management is handled by the operating system's memory
management unit (MMU), which performs address translation between logical
addresses used by the CPU and physical addresses in RAM or secondary storage.

Snapshot of Virtual Memory Management System:


Consider two processes, P1 and P2, each having four pages with a page size of 1 KB. The
main memory has eight frames of 1 KB each, with the OS residing in the first two
partitions. In such a snapshot, main memory holds pages from both processes alongside
their page tables. A CPU register holds the base address of each process's page table,
which is used to locate the relevant page-table entries.

Features of Virtual Memory:


●​ Address Translation: Logical addresses generated by the CPU are translated into
physical addresses by the memory management unit (MMU) using page tables.
●​ Demand Paging: Only the portions of a program that are actively being used are
loaded into physical memory, reducing the need to load the entire program into
memory at once.
●​ Page Replacement: When the physical memory is full, the OS must decide which
pages to swap out to make room for new pages. This is handled by page
replacement algorithms such as FIFO (First-In, First-Out), LRU (Least Recently
Used), or others.

Address Translation Process:


●​ Generation of Logical Address: CPU generates logical addresses for each page of a
process, consisting of a page number and offset.

●​ Scaling: The page table base address is added to the page number to locate the
page entry in the table. This process is known as scaling.
●​ Generation of Physical Address: The page table entry provides the frame number
corresponding to the page number. The physical address is generated, comprising
the frame number and offset.
●​ Accessing Main Memory: The frame number and offset are used to access the
actual memory location.
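The four steps above can be sketched in Python, assuming (for illustration only) a 1 KB page size, which gives 10 offset bits, and a small hypothetical page table:

```python
PAGE_SIZE = 1024                       # 1 KB pages -> 10 offset bits
OFFSET_BITS = 10

# Hypothetical page table: page number -> frame number
PAGE_TABLE = {0: 2, 1: 5, 2: 3, 3: 7}

def translate(logical_addr: int) -> int:
    page = logical_addr >> OFFSET_BITS         # high bits: page number
    offset = logical_addr & (PAGE_SIZE - 1)    # low bits: offset
    frame = PAGE_TABLE[page]                   # page-table lookup (the "scaling" step)
    return (frame << OFFSET_BITS) | offset     # physical = frame number + offset

print(translate(1 * PAGE_SIZE + 100))          # page 1 -> frame 5 -> 5220
```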

Page Table:
In the context of virtual memory management, a Page Table is a critical data structure
used by the operating system to manage the mapping between logical addresses
generated by the CPU and physical addresses in the main memory.

Logical Addresses vs. Physical Addresses:


●​ Logical addresses: Generated by the CPU for accessing data within processes.
●​ Physical addresses: Actual addresses of memory locations used by hardware or
RAM subsystems.

Purpose of Page Table:


●​ The CPU operates with logical addresses, whereas the main memory recognizes
physical addresses.
●​ The Memory Management Unit (MMU) is responsible for translating logical
addresses into physical addresses. It uses the page table for this translation.
●​ Page table stores the mapping between page numbers (logical addresses) and
frame numbers (physical addresses).

Page Table Entry:


●​ Each entry in the page table holds the frame number and additional bits
representing various attributes of the page.
●​ Attributes include caching status, referencing status, modification status,
protection level, and presence/absence in main memory.

Size of the Page Table:


●​ The size of the page table depends on the number of pages and the bytes stored in
each entry.
●​ For instance, if the logical address space is 24 bits and the page size is 4 KB (2^12 bytes):
○​ Number of pages = 2^(24 − 12) = 2^12 = 4096 pages
○​ Size of the page table = 4096 entries × 1 byte = 4 KB (assuming each entry is 1 byte)
●​ In some cases, the size of the page table may not match the frame size. In such
cases, the page table is considered a collection of frames and stored in different
frames in main memory.
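The sizing arithmetic above can be checked in a couple of lines (the 1-byte entry size is the same assumption made in the text):

```python
LOGICAL_ADDR_BITS = 24
PAGE_SIZE = 4 * 1024                     # 4 KB -> 12 offset bits
OFFSET_BITS = 12
ENTRY_SIZE = 1                           # bytes per page-table entry (assumed)

num_pages = 2 ** (LOGICAL_ADDR_BITS - OFFSET_BITS)   # 2^12 = 4096 pages
table_size = num_pages * ENTRY_SIZE                  # 4096 bytes = 4 KB

print(num_pages, table_size)             # 4096 4096
```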

Advantages of Virtual Memory:


1.​ Execution of Large Processes: Processes larger than the physical memory can be
executed, as only portions of the process are loaded into RAM as needed.

2.​ Increased memory capacity: Virtual memory enables computers to run programs
that require more memory than what is physically available.
3.​ Improved performance: By utilizing secondary storage, virtual memory reduces
the chances of running out of memory, thereby preventing system crashes and
improving overall performance.
4.​ Increased Degree of Multiprogramming: Allows more processes to be run
simultaneously by utilizing secondary storage as additional memory.
5.​ Efficient Use of Physical RAM: Optimizes the utilization of physical RAM by
swapping out less frequently used data.
6.​ Cost-Effective: Eliminates the need to purchase additional physical RAM, reducing
hardware costs.

Disadvantages of Virtual Memory:


1.​ Page Fault Overhead: Handling page faults involves significant overhead,
including disk I/O and context switching, which can slow down system
performance.
2.​ Internal Fragmentation: Even though virtual memory reduces external
fragmentation, there can still be internal fragmentation within pages.
3.​ Complexity: Managing virtual memory, including maintaining page tables and
handling page faults, adds complexity to the operating system.
4.​ Thrashing: Excessive paging activity (thrashing) can occur if there are too many
page faults, severely degrading system performance.
5.​ Increased Latency: Swapping data between RAM and secondary storage can
increase latency, especially when accessing data not currently in RAM.
6.​ Reduced Disk Space: Allocating space on the secondary storage for virtual
memory reduces the available disk space for user data and applications.

B. Hardware and Control Structures


Hardware support for virtual memory includes the Memory Management Unit (MMU) and
the operating system's control structures. The MMU translates virtual addresses to
physical addresses using page tables, similar to how it's done in paging. The operating
system manages these structures, allocating and deallocating memory as needed.

Components Involved in Virtual Memory:


●​ Memory Management Unit (MMU): Hardware component that translates virtual
addresses to physical addresses using page tables.
●​ Page Tables: Data structures used by the MMU to store the mapping between
logical and physical addresses.
●​ Translation Lookaside Buffer (TLB): Hardware cache that stores recently accessed
page table entries to speed up address translation.

Control Structures for Efficient Virtual Memory Management:
●​ Page Table Entries: Each entry in the page table contains information about the
mapping between a logical page number and a physical frame number, as well as
status bits for protection and other attributes.

C. Locality of Reference
Locality of reference is a key concept in virtual memory management that describes the
tendency of programs to access memory locations near each other.

Understanding the Principle of Locality:


●​ Temporal Locality: The tendency of a program to access the same memory
locations repeatedly over a short period of time.
●​ Spatial Locality: The tendency of a program to access memory locations that are
close to each other in address space.

Importance in Virtual Memory Systems:


●​ Locality of reference allows the operating system to optimize virtual memory
performance by prefetching and caching pages that are likely to be accessed in the
near future.
●​ Efficient use of locality reduces the number of page faults and improves overall
system responsiveness.

D. Page Fault Handling

Causes of Page Faults:


●​ Page faults occur when a program tries to access a page of memory that is not
currently in RAM. When this happens, the operating system must handle the page
fault by fetching the required page from disk into RAM.
●​ Page faults can occur due to demand paging, where pages are loaded into
memory only when they are needed, or due to page replacement, where a page is
evicted from memory to make room for a new page.

Handling Page Faults in Virtual Memory:


●​ Page fault handling involves suspending the offending process, transferring data
between disk and RAM, updating page tables, and resuming the process.
●​ Page Fault Handler: The operating system's mechanism for handling page faults by
loading the required page into physical memory from secondary storage.
●​ Page Replacement Algorithms: When physical memory is full, the operating system
selects a page to evict from memory based on a page replacement algorithm.

E. Working Set
The working set is a measure of the set of pages that a process is actively using at any
given time.

Definition and Significance:
●​ The working set concept defines the set of pages that a process actively uses
during its execution. It represents the minimum amount of memory needed to
prevent excessive page faults and maintain good performance.
●​ The working set changes over time as the program's memory requirements evolve.
Operating systems use the working set to make decisions about page replacement
and memory allocation.
●​ It's like the items you need for a specific task, such as ingredients for cooking,
which may vary depending on the recipe you're preparing.
●​ Working Set Size: The number of pages that a process typically accesses during a
given time interval, such as a fixed time window.
●​ Application: The working set concept is used by the operating system to
dynamically adjust memory allocation to processes based on their working set
sizes, ensuring optimal performance.
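As a minimal sketch, the working set W(t, Δ) can be computed as the set of distinct pages referenced in the last Δ references; the reference string here is hypothetical:

```python
def working_set(refs, t, delta):
    """Distinct pages referenced in the window of `delta` references ending at time t."""
    start = max(0, t - delta + 1)
    return set(refs[start:t + 1])

refs = [1, 2, 1, 3, 2, 4, 4, 4, 1]
print(working_set(refs, 5, 4))     # pages touched at times 2..5 -> {1, 2, 3, 4}
print(working_set(refs, 8, 3))     # pages touched at times 6..8 -> {1, 4}
```

A large working set that no longer fits in the frames allocated to the process is exactly the condition that leads to thrashing.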

F. Dirty Page and Dirty Bit


The dirty page and dirty bit are mechanisms used to track whether a page in memory has
been modified since it was last loaded from secondary storage.

Explanation:
●​ Dirty Page: A dirty page is a page of memory that has been modified since it was
last read from disk or written to disk.
●​ Dirty Bit: The dirty bit is a flag associated with each page which indicates whether
the page has been modified.
●​ When a dirty page is evicted from memory, it must be written back to disk to
ensure data integrity. The dirty bit helps the operating system identify which pages
need to be written back to disk.

Handling Dirty Pages in Virtual Memory:


●​ Dirty Page Handling: When a dirty page is evicted from memory, it must be written
back to secondary storage to ensure data integrity.
●​ Dirty Bit Management: The operating system updates the dirty bit whenever a
page is modified, allowing it to efficiently identify dirty pages during page
replacement.
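A toy sketch of this bookkeeping (the `Frame` class and the `disk` dictionary are invented for illustration): writing to a page sets its dirty bit, and eviction writes the page back only when the bit is set:

```python
class Frame:
    def __init__(self, page):
        self.page = page
        self.data = None
        self.dirty = False      # clean until the page is modified

def write(frame, data):
    frame.data = data
    frame.dirty = True          # page modified since it was loaded

def evict(frame, disk):
    if frame.dirty:             # only dirty pages need a write-back
        disk[frame.page] = frame.data
    return frame.page           # frame is now free for reuse

disk = {}
f = Frame(page=7)
write(f, "payload")
evict(f, disk)                  # dirty: written back to disk
evict(Frame(page=9), disk)      # clean: no disk I/O needed
print(disk)                     # {7: 'payload'}
```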

VI. Demand Paging

A. Concept and Implementation


●​ Demand paging is a memory management scheme used by operating systems to
efficiently manage memory by bringing pages into memory only when they are
required or demanded by the CPU. This approach helps in optimizing memory
usage and reducing unnecessary overhead.
●​ Unlike traditional memory management schemes where entire programs are
loaded into memory at once, demand paging brings in only the required pages as
the program executes.

●​ This approach saves memory space and reduces the time needed to load programs
into memory initially. When a program references a page that is not currently in
memory, a page fault occurs, triggering the operating system to load the required
page from secondary storage (such as a hard drive) into memory.

How Demand Paging Works:


1.​ Page Fault Handling:
●​ When a process needs a page that is not currently in the main memory, a
page fault occurs.
●​ The operating system intercepts this page fault and suspends the process
temporarily.
●​ It then checks whether the requested page is valid or not.
2.​ Page Replacement:
●​ If the requested page is not valid, the operating system locates a free frame
in the main memory to bring in the required page.
●​ An I/O operation is scheduled to move the page from the secondary
memory (disk) to the allocated frame in the main memory.
●​ Once the page is brought into memory, the process's page table is updated,
marking the page as valid.
3.​ Process Resumption:
●​ The suspended process is then resumed, and the instruction that caused the
page fault is restarted from the beginning.
●​ The process can now access the required page from the main memory
without causing further page faults.
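The three phases can be sketched as a toy demand-paging loop. Everything here is hypothetical, and a free frame is assumed to always be available, so page replacement is left out:

```python
PAGE_TABLE = {}                      # page -> frame (only valid entries present)
MEMORY = {}                          # frame -> contents
DISK = {0: "code", 1: "data", 2: "stack"}
free_frames = [2, 1, 0]
fault_count = 0

def access(page):
    global fault_count
    if page in PAGE_TABLE:           # valid entry: page hit
        return MEMORY[PAGE_TABLE[page]]
    fault_count += 1                 # invalid entry: page fault
    frame = free_frames.pop()        # locate a free frame (assumed available)
    MEMORY[frame] = DISK[page]       # I/O: bring the page in from disk
    PAGE_TABLE[page] = frame         # update the page table, mark valid
    return MEMORY[frame]             # resume: the access now succeeds

access(0); access(1); access(0)
print(fault_count)                   # 2 -- the second access to page 0 is a hit
```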

B. Advantages and Disadvantages

Advantages:
1.​ Efficient Memory Usage: Demand paging allows for more efficient use of memory
by loading only the necessary pages into memory, thereby reducing memory
wastage.
2.​ Faster Program Startup: Since only the essential pages are loaded initially,
programs can start up faster compared to loading the entire program into memory
at once.
3.​ Improved System Responsiveness: Demand paging helps in improving overall
system responsiveness by reducing the amount of time spent loading unnecessary
pages into memory.
4.​ Fragmentation Avoidance: Demand paging helps avoid external fragmentation by
allocating memory on-demand rather than allocating fixed-sized blocks.
5.​ Reduced I/O Overhead: Only pages that are actually needed by processes are
loaded into memory, reducing unnecessary I/O operations.
6.​ Flexibility in Memory Management: Demand paging provides flexibility in
managing memory, as pages can be loaded and unloaded dynamically based on
process requirements.

7.​ Support for Large Programs: Demand paging enables the execution of large
programs that may not fit entirely in physical memory, as only the required
portions are loaded.

Disadvantages:
1.​ Increased Overheads: Demand paging introduces additional overhead, interrupts,
and page table management, due to the need to manage page faults and load
pages from secondary storage into memory as needed. This overhead can impact
system performance.
2.​ Potential for Thrashing: If the system constantly swaps pages between main
memory and secondary memory due to high memory pressure, it can lead to
thrashing, where the system spends more time swapping pages than executing
processes.
3.​ Complexity in Page Fault Handling: Managing page faults and ensuring efficient
page replacement strategies can be complex and require careful optimization to
avoid performance bottlenecks.
4.​ Longer Memory Access Times: Accessing pages from secondary memory (disk) is
slower compared to accessing pages from main memory, leading to longer
memory access times.
5.​ Complexity in Implementation: Implementing demand paging requires complex
algorithms for page replacement, handling page faults, and managing page tables.

Pure Demand Paging:


●​ In pure demand paging, no pages are initially loaded into memory when a process
starts execution.
●​ Pages are brought into memory only when demanded by the process, resulting in
page faults at the beginning of execution.
●​ Pure demand paging minimizes initial memory usage but may lead to frequent
page faults until all required pages are loaded into memory.

VII. Page Replacement Algorithms


Page replacement algorithms are used in demand-paged memory management systems
to decide which pages should be replaced when a page fault occurs and there is no free
space in memory. These algorithms aim to minimize the number of page faults and
optimize memory usage by selecting the most appropriate page to evict from memory.
There are three main types of Page Replacement Algorithms. They are:
1.​ Optimal Page Replacement Algorithm
2.​ First In First Out (FIFO) Page Replacement Algorithm
3.​ Least Recently Used (LRU) Page Replacement Algorithm

A. Optimal Page Replacement


●​ The Optimal Page Replacement Algorithm replaces the page that will not be used
for the longest period of time in the future. It is considered the best possible
algorithm because it results in the lowest possible page-fault rate.

●​ When a page fault occurs, the algorithm looks ahead in the reference string to see
which pages will be used and replaces the one that will not be used for the longest
time.

How OPTIMAL Works:


1.​ Maintain a set of memory frames: These frames hold the pages loaded from
secondary storage.
2.​ Process the reference string: This string represents the sequence of page
references made by the program.
3.​ For each page reference:
●​ If the page is already in a frame:
○​ A page hit occurs (no action needed).
●​ If the page is not in a frame (page fault):
○​ Find the page in memory that will be used farthest in the future
based on the remaining reference string.
○​ Replace that page with the new page.
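The steps above can be sketched in Python. On a fault with full frames, the victim is the resident page whose next use lies farthest ahead; a page never referenced again is the ideal victim:

```python
def optimal_faults(refs, capacity):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                          # page hit: nothing to do
        faults += 1                           # page fault
        if len(frames) < capacity:
            frames.append(page)               # free frame available
            continue

        def next_use(p):                      # index of p's next reference
            try:
                return refs.index(p, i + 1)
            except ValueError:
                return float("inf")           # never used again: best victim
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page   # replace the farthest-used page
    return faults

print(optimal_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))   # 7
```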

Example:
Consider the reference string: 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 4, 0, with three
frames. Let's calculate the number of page faults using the OPTIMAL Page Replacement
Algorithm.

●​ Number of Page Hits = 9
●​ Number of Page Faults = 11
●​ Ratio of Page Hits to Page Faults = 9 : 11 ≈ 0.82
●​ Page Hit Percentage = (9 / 20) * 100 = 45%
●​ Page Fault Percentage = 100 - Page Hit Percentage = 100 - 45 = 55%

Advantages:
●​ Optimal Page Replacement guarantees the fewest possible page faults because it
always picks the best page to replace.
●​ It gives the best performance in theory.
●​ It is used as a benchmark to compare the performance of other page replacement
algorithms

Limitations:
●​ The main drawback of the Optimal Page Replacement algorithm is that it requires
knowledge of future memory access patterns, which is impossible to know in
real-world scenarios.
●​ It's not feasible to implement because it's like trying to predict the future, which is
beyond the capabilities of computers.
●​ It is not used in real systems due to its impracticality.

B. First-In-First-Out (FIFO) Page Replacement


●​ The FIFO algorithm replaces the oldest page in memory, that is, the page that has
been resident the longest, tracked with a FIFO queue.
●​ When a page fault occurs, the page that has been in memory the longest (the first
one loaded) is replaced, regardless of how frequently or recently it has been
accessed.

How FIFO Works:


1.​ Maintain a set of memory frames: These frames hold the pages loaded from
secondary storage.
2.​ Process the reference string: This string represents the sequence of page
references made by the program.
3.​ For each page reference:
●​ If the page is already in a frame:
○​ A page hit occurs (no action needed).
●​ If the page is not in a frame (page fault):
○​ Find the page that has been in memory the longest (at the front of
the queue).
○​ Replace that page with the new page.
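The steps above can be sketched in Python, with a queue recording the order in which pages were loaded:

```python
from collections import deque

def fifo_faults(refs, capacity):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                          # page hit: FIFO order unchanged
        faults += 1                           # page fault
        if len(frames) == capacity:
            frames.discard(queue.popleft())   # evict the oldest loaded page
        frames.add(page)
        queue.append(page)                    # newest page joins the back
    return faults

refs = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0]
print(fifo_faults(refs, 3))                   # 12
```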

Example:
Consider the reference string: 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0, with three
frames. Let's calculate the number of page faults using the FIFO Page Replacement
Algorithm.

●​ Number of Page Hits = 8


●​ Number of Page Faults = 12

●​ Ratio of Page Hits to Page Faults = 8 : 12 = 2 : 3 ≈ 0.67
●​ Page Hit Percentage = 8 *100 / 20 = 40%
●​ Page Fault Percentage = 100 - Page Hit Percentage = 100 - 40 = 60%

Advantages:
●​ Simplicity: FIFO is easy to implement because it only requires keeping track of the
order in which pages were loaded into memory.
●​ Low overhead: FIFO has minimal overhead compared to more complex page
replacement algorithms.

Limitations:
●​ High number of page faults: It can result in a high number of page faults,
especially for frequently accessed pages that are replaced simply because they
were loaded first.
●​ Belady's Anomaly: One of the main drawbacks of FIFO is the occurrence of
Belady's Anomaly, where increasing the number of page frames can unexpectedly
lead to more page faults instead of reducing them.
●​ Poor performance with certain access patterns: FIFO may not perform well with
certain access patterns, especially those that exhibit poor locality of reference.
●​ Lack of adaptability: FIFO does not consider how often pages are accessed or how
recently they were used, which can lead to suboptimal performance in some cases.

C. Least Recently Used (LRU) Page Replacement


●​ The LRU algorithm replaces the page that has not been used for the longest period
of time in the past.
●​ When a page fault occurs, the algorithm looks back at the history of page usage
and replaces the page that was least recently accessed.

How LRU Works:


1.​ Maintain a set of memory frames: These frames hold the pages loaded from
secondary storage.
2.​ Track recently used pages: LRU utilizes a mechanism (often a data structure like a
doubly linked list or a hash table with timestamps) to keep track of which page was
used most recently.
3.​ Process the reference string: This string represents the sequence of page
references made by the program.
4.​ For each page reference:
●​ If the page is already in a frame:
○​ A page hit occurs (update its usage marker to reflect recent access).
●​ If the page is not in a frame (page fault):
○​ Find the least recently used page (based on the tracking mechanism).
○​ Replace that page with the new page.
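The steps above can be sketched with an `OrderedDict` as the recency list: a hit moves the page to the back (most recently used), and the victim is popped from the front:

```python
from collections import OrderedDict

def lru_faults(refs, capacity):
    frames = OrderedDict()                    # front = least recently used
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)          # hit: mark as most recently used
            continue
        faults += 1                           # page fault
        if len(frames) == capacity:
            frames.popitem(last=False)        # evict the least recently used page
        frames[page] = True
    return faults

refs = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0]
print(lru_faults(refs, 3))                    # 13
```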

Example:
Consider the reference string: 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0, with three
frames. Let's calculate the number of page faults using the LRU Page Replacement
Algorithm.

●​ Number of Page Hits = 7
●​ Number of Page Faults = 13
●​ Ratio of Page Hits to Page Faults = 7 : 13 ≈ 0.54
●​ Page Hit Percentage = (7 / 20) * 100 = 35%
●​ Page Fault Percentage = 100 - Page Hit Percentage = 100 - 35 = 65%

Advantages:
●​ Optimality: LRU is designed to approximate optimal page replacement by selecting
the page that has not been used for the longest time. This helps minimize the
number of page faults and improve overall system performance.
●​ Simple concept: Despite its optimality, the concept behind LRU is relatively simple
and intuitive to understand.
●​ Good performance in practice: LRU tends to perform well in practice for many
types of workloads and access patterns.

Limitations:
●​ Complex implementation: While the concept of LRU is simple, its implementation
can be complex and require additional overhead, especially in systems with limited
resources or large amounts of memory.
●​ Expensive to maintain access history: Keeping track of access times for each page
can be expensive in terms of memory and computational resources, especially in
systems with a large number of pages.
●​ Vulnerable to access patterns: LRU may not perform well with certain access
patterns, such as those with frequent access to a small subset of pages or those
that exhibit cyclic access patterns.

D. Comparison of Page Replacement Algorithms


●​ Optimal page replacement guarantees the fewest page faults but is impractical to
implement.
●​ FIFO is simple to implement but suffers from Belady's Anomaly.

●​ LRU provides good performance by approximating the optimal algorithm but may
be complex to implement efficiently.

| Algorithm | Advantages | Disadvantages | Implementability | Performance |
|-----------|------------|---------------|------------------|-------------|
| Optimal | Lowest page fault rate | Requires future knowledge | Theoretical, not practical | Best possible |
| FIFO | Simple and easy to implement | Can cause many page faults (Belady's anomaly) | Simple to implement | Can be inefficient |
| LRU | Considers actual usage patterns | Requires complex tracking and additional resources | Moderate complexity | Generally effective |


MDU PYQs (Short)

May 2023
1.​ Differentiate between paging and segmentation.
2.​ Differentiate between contiguous and non-contiguous memory allocation.

July 2022
3.​ What is contiguous memory allocation?

Others
4.​ Why are page sizes always powers of 2? Explain.
5.​ Differentiate between virtual memory and main memory.
6.​ What is the cause of thrashing? How does the system detect thrashing?

MDU PYQs (Long)

May 2023
1.​ (a) Explain the concept of virtual memory and its advantages and disadvantages.
(b) What is fragmentation ? Explain the difference between internal and external
fragmentation briefly.
2.​ Explain the following:
a.​ Optimal Page Replacement and Least Recently Used.
b.​ Demand Paging.

July 2022
3.​ What is Paging ? Describe various page replacement algorithms ?
4.​ (a) What is Memory Management ? Explain Contiguous and Non-contiguous
Memory allocation.
(b) Explain internal and external fragmentation with example.

July 2021
5.​ What do you mean by page replacement? Describe various page replacement
algorithms.
6.​ What is memory management ? Explain the concept of virtual memory.

Others
7.​ Describe first fit, best fit and worst fit mechanisms for memory allocation.
8.​ Explain paging and segmentation techniques.
9.​ Explain virtual memory management.
10.​Explain swapping in detail.
