Overlays in Memory Management
In memory management, overlays are a technique used to enable programs to run within limited memory by dynamically
loading only the parts of a program that are required at a given time. This is particularly useful when the total size of a
program exceeds the available physical memory.
1. Program Segmentation:
o A large program is divided into smaller segments or modules.
o Only the required module or segment is loaded into memory while the others are stored on disk.
2. Dynamic Loading:
o When a particular part of the program is needed, it is loaded from secondary storage (disk) into memory.
o Once its execution is complete, it can be unloaded to free up memory space for other parts of the program.
3. Overlay Manager:
o A specialized part of the operating system or program that manages which parts of the program are loaded
or swapped in and out of memory.
o The manager ensures that the correct segment is available in memory when needed.
4. Memory Efficiency:
o Overlays help in managing memory efficiently, especially in systems with limited RAM.
o By loading only essential parts of the program, it reduces memory wastage and ensures that larger programs
can run on smaller systems.
Overlay Structure:
Main Segment: This is the central part of the program that remains in memory throughout execution. It contains
the overlay manager.
Overlay Segments: These are the sections of the program that are loaded and unloaded dynamically as needed
during execution.
Example:
Imagine a program consisting of three functions: A, B, and C. If the system memory cannot hold all three at once, an
overlay can be used to load only the function that is being executed at the moment.
This process continues until the program completes execution, ensuring that only a small portion of the program is ever in
memory at any given time.
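The idea can be sketched in a few lines of Python. This is only an illustrative sketch, not how real overlay linkers work: the module names funcs_a, funcs_b, and funcs_c are hypothetical overlay segments, and run() is an assumed entry point in each. The resident part acts as the overlay manager, keeping at most one segment loaded at a time.

import importlib
import sys

OVERLAYS = {"A": "funcs_a", "B": "funcs_b", "C": "funcs_c"}  # hypothetical segment modules
_loaded = None  # name of the overlay segment currently "in memory"

def call_overlay(name, *args):
    """Load the requested overlay segment, unloading the previous one first."""
    global _loaded
    if _loaded is not None and _loaded != name:
        sys.modules.pop(OVERLAYS[_loaded], None)      # drop the old segment from the module cache
    module = importlib.import_module(OVERLAYS[name])  # load only the segment that is needed now
    _loaded = name
    return module.run(*args)                          # each segment exposes a run() entry point (assumed)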
Benefits of Overlays:
1. Memory Conservation: Overlays allow programs to run within limited memory, making efficient use of available
resources.
2. Execution of Large Programs: Programs larger than the physical memory can still run by loading sections
dynamically.
3. Cost Reduction: Reduces the need for expensive hardware upgrades by allowing large applications to run on
systems with lower memory.
Drawbacks of Overlays:
1. Complexity: Designing and implementing overlays can be complex, requiring careful planning of which segments
are loaded and when.
2. Performance Overhead: The frequent loading and unloading of program segments can slow down the execution
speed.
3. Obsolete Technique: With modern systems having large amounts of memory and advanced virtual memory
techniques, overlays are rarely used today.
Illustration:
Memory: +-----------------+
| Main Program |
+-----------------+
| Overlay Segment |
+-----------------+
| Free Space |
+-----------------+
This overlay structure helps fit large programs into limited memory environments: the Main Program (containing the
overlay manager) stays in memory throughout execution, while the various overlay segments are loaded into the overlay
area as needed.
Modern Use:
While overlays were a common technique in older systems, modern operating systems often use virtual memory to handle
similar situations by swapping data between RAM and disk. However, overlays are still an important concept for
understanding memory management in early systems.
VIRTUAL MEMORY
Virtual Memory is a memory management technique used by modern operating systems to give the illusion of having a
large amount of physical memory (RAM) available, even when the actual physical memory is limited. It allows programs
to use more memory than is physically present on the system by extending RAM onto the hard disk or SSD.
1. Address Space:
o In virtual memory, the logical (virtual) address space used by a process is independent of the physical
address space in the hardware.
o A program can be assigned a large contiguous address space, while the actual data might be stored in
different physical memory locations or even on disk.
2. Paging:
o Virtual memory systems often use a technique called paging to divide the virtual memory into fixed-size
blocks called pages (usually 4KB).
o Correspondingly, physical memory is divided into page frames, which are also of the same size as pages.
o Pages are loaded into physical memory as needed, and if memory runs out, some pages are moved to disk
(known as page swapping).
3. Page Table:
o Each process maintains a page table, which keeps track of where virtual pages are stored in physical
memory (or on disk if paged out).
o The page table maps virtual addresses to physical addresses.
4. Page Fault:
o A page fault occurs when a program tries to access a page that is not currently loaded in physical memory.
o The operating system handles this by loading the required page from disk into memory and updating the
page table.
5. Swap Space:
o The area on the disk used for storing pages that are swapped out of memory is called swap space or paging
file.
o When memory is full, pages chosen by the replacement policy (often the least recently used) are written to swap space to free up physical memory.
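The translation and page-fault handling described above can be sketched in Python. This is a toy model, not an OS implementation: the page table is simply a dictionary from virtual page numbers to physical frames, and a fault "loads" the page into a free frame or a victim frame.

PAGE_SIZE = 4096  # 4 KB pages, as mentioned above

page_table = {}             # virtual page number -> physical frame number
free_frames = [0, 1, 2, 3]  # toy physical memory with four frames

def translate(virtual_address):
    """Translate a virtual address to a physical address, handling page faults."""
    vpn, offset = divmod(virtual_address, PAGE_SIZE)
    if vpn not in page_table:                 # page fault: page not in physical memory
        frame = free_frames.pop(0) if free_frames else evict_some_page()
        page_table[vpn] = frame               # "load" the page and update the page table
    return page_table[vpn] * PAGE_SIZE + offset

def evict_some_page():
    """Victim selection stand-in; a real OS would use a replacement policy such as LRU."""
    victim_vpn = next(iter(page_table))
    return page_table.pop(victim_vpn)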
Let’s assume a system with 4 GB of physical RAM running a program that needs 8 GB of memory. Virtual memory makes
this possible: even though physical memory is limited, each process (say, A and B) is given a large virtual address space,
and its pages are loaded into RAM and swapped out to disk as needed.
Performance Considerations:
1. Thrashing:
o Thrashing occurs when the system spends most of its time swapping pages in and out of memory, resulting
in very poor performance.
o Thrashing can happen if the working set of pages needed by a program exceeds the available physical
memory.
2. Page Replacement Algorithms:
o To decide which pages to swap out of memory when needed, the operating system uses page replacement
algorithms, such as:
Least Recently Used (LRU): Replaces the page that has not been used for the longest time.
First-In, First-Out (FIFO): Replaces the page that has been in memory the longest.
Optimal: Replaces the page that will not be needed for the longest time in the future.
Summary:
Virtual memory allows systems to run larger applications than the available physical memory.
It uses paging to load only the necessary parts of programs into memory.
The operating system manages page faults, page tables, and swap space to ensure efficient memory usage.
While virtual memory is highly effective, improper tuning or overuse can lead to thrashing and performance
degradation.
PAGE REPLACEMENT ALGORITHMS
Page Replacement Algorithms are techniques used in operating systems to manage how pages are swapped in and out of
physical memory when the system is running low on memory. When a page fault occurs (when a program tries to access a
page that is not currently in memory), the operating system must decide which page to evict to make room for the new
page. The choice of which page to replace can significantly affect system performance.
Illustration:
Imagine you have 3 page frames and the following page reference string:
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Least Recently Used (LRU): when the page frames are full, LRU replaces the page that was least recently used. After each
access, that page becomes the most recently used; on a page fault, the least recently accessed page in memory is evicted.
Illustration:
Using the same page reference string:
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
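A short Python sketch of LRU on this reference string with 3 frames; an OrderedDict keeps pages in recency order so the least recently used page is always at the front.

from collections import OrderedDict

def lru_faults(refs, n_frames):
    frames = OrderedDict()        # pages ordered from least to most recently used
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)          # hit: page becomes most recently used
        else:
            faults += 1                       # page fault
            if len(frames) == n_frames:
                frames.popitem(last=False)    # evict the least recently used page
            frames[page] = True
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))

For this string and 3 frames, LRU produces 10 page faults.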
Optimal (OPT): This algorithm chooses the page that will not be used for the longest time in the future.
Illustration:
Using the same reference string:
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Here, page 4 is evicted at step 7 because, based on future knowledge, it is the resident page that will not be needed for the
longest time.
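Optimal needs knowledge of future references, so it can only be simulated offline. A minimal sketch: on a fault, evict the resident page whose next use lies farthest in the future (or that is never used again).

def optimal_faults(refs, n_frames):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1                                   # page fault
        if len(frames) == n_frames:
            # Evict the page whose next use is farthest away (len(refs) if never used again).
            def next_use(p):
                return refs.index(p, i + 1) if p in refs[i + 1:] else len(refs)
            frames.remove(max(frames, key=next_use))
        frames.append(page)
    return faults

print(optimal_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))

On this string, Optimal incurs 7 page faults, the minimum possible for 3 frames.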
Least Frequently Used (LFU):
Concept: LFU keeps a usage count for each page in memory and, when a page must be replaced, evicts the page with the
lowest count.
Illustration (3 frames, same reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5):
(The trace table records, for each step, the referenced page, the contents of Frames 1-3, the frequency counts of pages 1-5, and whether a page fault occurred.)
Explanation:
1. Steps 4-7: on each of these faults, every resident page has been used only once, so the frequency counts tie; the FIFO
tie-breaker evicts the oldest resident page each time (1, then 2, then 3, then 4).
2. Steps 8-9: pages 1 and 2 are referenced again while resident, raising their frequency counts to 2.
3. Steps 10-12: pages 1 and 2 now have the highest counts, so on each remaining fault the third resident page (5, then 3,
then 4), which has a count of only 1, is the least frequently used and is evicted, while 1 and 2 stay in memory.
Behavior of LFU:
LFU tends to hold onto frequently accessed pages and replaces less frequently used ones.
When two or more pages have the same frequency, a tie-breaker like the FIFO principle is often used.
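A compact Python sketch of LFU with the FIFO tie-breaker described above: each resident page carries a use count, and on a fault the page with the lowest count (oldest first on ties) is evicted.

def lfu_faults(refs, n_frames):
    counts, order, faults = {}, [], 0   # counts: page -> uses while resident; order: load order
    for page in refs:
        if page in counts:
            counts[page] += 1                             # hit: bump the frequency count
            continue
        faults += 1                                       # page fault
        if len(counts) == n_frames:
            victim = min(order, key=lambda p: counts[p])  # lowest count, FIFO on ties
            order.remove(victim)
            del counts[victim]
        counts[page] = 1
        order.append(page)
    return faults

print(lfu_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 10 faults for this string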
The Second Chance algorithm is an enhancement of the FIFO (First In First Out) algorithm, designed to improve
performance by giving pages a "second chance" before being replaced.
How It Works:
1. Reference Bit: Each page in memory has an associated reference bit, which is set to 1 when the page is accessed (read or
written).
2. Clock Hand: Pages are arranged in a circular list (like a clock), and a "clock hand" points to the next page to be replaced.
3. When a page needs to be replaced:
o If the reference bit of the page pointed to by the clock hand is 0, the page is replaced.
o If the reference bit is 1, the bit is cleared to 0 (giving the page a second chance), and the clock hand moves to the
next page.
The algorithm continues this process until it finds a page with a reference bit of 0, which is then replaced.
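A small Python sketch of the clock mechanism: frames form a circular list, each with a reference bit, and the hand sweeps until it finds a bit of 0, clearing bits of 1 along the way. The reference string in the usage line is just an example.

class ClockReplacer:
    def __init__(self, n_frames):
        self.frames = [None] * n_frames   # page stored in each frame (None = empty)
        self.ref_bits = [0] * n_frames    # reference bit per frame
        self.hand = 0                     # clock hand position

    def access(self, page):
        """Return True if the access caused a page fault."""
        if page in self.frames:
            self.ref_bits[self.frames.index(page)] = 1    # hit: set the reference bit
            return False
        while True:                                       # fault: sweep for a victim frame
            if self.frames[self.hand] is None or self.ref_bits[self.hand] == 0:
                self.frames[self.hand] = page             # replace (or fill) this frame
                self.ref_bits[self.hand] = 1
                self.hand = (self.hand + 1) % len(self.frames)
                return True
            self.ref_bits[self.hand] = 0                  # second chance: clear bit, move on
            self.hand = (self.hand + 1) % len(self.frames)

clock = ClockReplacer(3)
print(sum(clock.access(p) for p in [1, 2, 3, 1, 4, 5]))   # number of page faults (5 here)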
Illustration:
Initial Setup: three empty frames. The trace table tracks, for each step, the referenced page, the contents of Frames 1-3, the reference bits of pages 1-6, the clock-hand position, and whether a page fault occurred.
Explanation:
1. Step 1-3: The first three pages (1, 2, 3) are loaded into frames, causing page faults each time. Their reference bits are set to
1.
2. Step 4: When page 1 is referenced again, its reference bit is already 1, so no page replacement occurs.
3. Step 5: When page 4 needs to be loaded, the clock hand points to 1. Since the reference bit of 1 is 1, the page gets a second
chance: its bit is cleared to 0 and the hand moves to page 2. Pages 2 and 3 get the same treatment, and when the hand returns
to page 1 its bit is now 0, so 1 is replaced by 4.
4. Step 6-10: The clock hand continues to move in a circular manner, resetting the reference bits of pages that are accessed
frequently and replacing those that are accessed less often, as seen when 5 and 6 are loaded into memory.
Advantages:
Fairness: Pages that have been used recently get a second chance before being replaced.
Efficiency: It works well in practice by reducing the chances of unnecessary page replacements.
This illustration gives an overview of the Second Chance (Clock) Page Replacement Algorithm and its approach to
replacing pages in memory.
The Most Recently Used (MRU) page replacement algorithm is based on the idea that the page that was used most
recently is likely to be used again soon. Unlike LRU (Least Recently Used), which replaces the least recently used page,
MRU replaces the page that has been most recently accessed.
How It Works:
Whenever a page needs to be replaced, the page that was most recently used is selected for removal.
The assumption behind MRU is that, for certain access patterns (such as large sequential or cyclic scans), the page that was
just used will not be needed again for a long time, so keeping it in memory is wasteful.
Illustration:
Steps:
Step | Reference | Frame 1 | Frame 2 | Frame 3 | Most Recently Used Page | Page Fault?
1    | 1         | 1       | -       | -       | 1                       | Yes
2    | 2         | 1       | 2       | -       | 2                       | Yes
3    | 3         | 1       | 2       | 3       | 3                       | Yes
4    | 4         | 1       | 2       | 4       | 4 (replaced 3)          | Yes
5    | 1         | 1       | 2       | 4       | 1                       | No
6    | 2         | 1       | 2       | 4       | 2                       | No
7    | 5         | 1       | 5       | 4       | 5 (replaced 2)          | Yes
8    | 6         | 1       | 6       | 4       | 6 (replaced 5)          | Yes
Explanation:
1. Step 1-3: The first three pages (1, 2, 3) are loaded into the frames, causing page faults, as they are being accessed for the
first time. Page 3 is the most recently used.
2. Step 4: When page 4 is requested, the algorithm replaces page 3 (the most recently used page), loading 4 into the frame.
3. Step 5-6: Pages 1 and 2 are accessed again, but since they are already in memory, there are no page faults.
4. Step 7: Page 5 needs to be loaded, and since page 2 was the most recently used, it gets replaced with page 5.
5. Step 8: Page 6 is requested, replacing page 5, which is now the most recently used.
Visual Representation:
Imagine that after every page request, the most recently used page is marked. When a new page is requested and a
replacement is necessary, the page that was most recently used is replaced.
MRU in a Diagram:
Step 4: Replacing most recently used page (3)
Before:
Frames: [1, 2, 3]
Most Recently Used: 3
After:
Frames: [1, 2, 4]
Most Recently Used: 4
Advantages:
Useful in Specific Situations: MRU can be efficient in workloads where the most recently accessed page is unlikely to be
needed soon (e.g., when data is read sequentially).
Disadvantages:
Non-intuitive: In general, MRU can lead to inefficient memory usage in many scenarios because it assumes that the most
recently used page will not be needed again, which is not true for all programs.
This illustrates how the Most Recently Used (MRU) algorithm works and how it determines which page to replace based
on recent usage.
Algorithm | Complexity | Page Fault Rate | Advantage | Disadvantage
Least Recently Used (LRU) | Moderate | Low | Good performance | High overhead for tracking usage
First-In, First-Out (FIFO) | Simple | Moderate to High | Easy to implement | May lead to Belady's anomaly
DIRECT MEMORY ACCESS (DMA)
Direct Memory Access (DMA) lets a dedicated controller move data between peripherals and memory without involving the
CPU in every step of the transfer. A typical DMA operation proceeds as follows:
1. Transfer Request:
o The peripheral device signals the DMA controller that it needs to transfer data.
2. CPU Setup:
o The CPU programs the DMA controller with the transfer parameters (source and destination addresses, size, direction)
and then continues with other work.
3. Data Transfer:
o The DMA controller takes over the system bus to perform the data transfer. This can happen in different modes
(Burst Mode, Cycle Stealing, or Transparent Mode), depending on the system’s configuration.
4. Completion of Transfer:
o Once the DMA controller has completed the data transfer, it sends an interrupt signal back to the CPU, notifying it
that the operation is done.
DMA Transfer Modes:
1. Burst Mode:
o The DMA controller takes over the bus completely until the transfer is complete. The CPU cannot access memory
during this time.
2. Cycle Stealing Mode:
o The DMA controller "steals" a bus cycle from the CPU to transfer small amounts of data at a time, allowing the CPU
to work in between data transfers.
3. Transparent Mode:
o The DMA controller transfers data only during cycles in which the CPU is not using the system bus, so the CPU is
never slowed down; this is the slowest of the three modes.
The Peripheral sends a request to the DMA Controller to begin the data transfer.
The CPU sets the parameters (e.g., address, size of data) and then allows the DMA controller to handle the transfer.
The DMA Controller directly communicates with the Memory and Peripheral, taking control of the system bus.
Once the transfer is complete, the DMA controller signals the CPU via an interrupt, informing it that the operation is done.
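The flow above can be mimicked with a toy Python simulation. It is purely illustrative (there are no real hardware registers here), but it mirrors the handshake: the CPU programs the transfer parameters, the controller copies the data without further CPU involvement, and a callback stands in for the completion interrupt.

class ToyDMAController:
    def program(self, src, dst, dst_offset, length, on_complete):
        # The CPU sets the parameters and hands control to the DMA controller.
        self.src, self.dst, self.dst_offset = src, dst, dst_offset
        self.length, self.on_complete = length, on_complete

    def run_transfer(self):
        # The controller moves the data block without further CPU involvement.
        self.dst[self.dst_offset:self.dst_offset + self.length] = self.src[:self.length]
        self.on_complete()                      # "interrupt": notify the CPU

memory = bytearray(64)                          # toy physical memory
device_buffer = b"disk sector data"             # data arriving from a peripheral

dma = ToyDMAController()
dma.program(device_buffer, memory, dst_offset=0, length=16,
            on_complete=lambda: print("DMA complete interrupt"))
dma.run_transfer()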
Benefits of DMA:
Frees CPU: The CPU is not involved in every part of the data transfer, allowing it to focus on other tasks.
Faster Transfers: Data moves faster because the DMA controller is optimized for bulk transfers without CPU overhead.
Efficient Use of System Resources: The system bus is used more efficiently, especially in burst mode.
Conclusion:
Direct Memory Access (DMA) is crucial in modern computer architecture as it enhances system efficiency, especially
during large data transfers. With DMA, systems can achieve better performance by offloading the work from the CPU to a
dedicated controller.
DISK SCHEDULING ALGORITHMS
Disk arm scheduling algorithms are used to optimize the movement of the disk arm across the tracks of the disk. These
algorithms determine the sequence in which disk I/O requests are serviced to minimize disk seek time, improving the
overall efficiency of disk operations. Below are some key disk scheduling algorithms, each explained step by step with
detailed illustrations.
1. First-Come, First-Served (FCFS)
Explanation:
Requests are serviced strictly in the order in which they arrive, regardless of how far the disk arm must move to reach each
one. FCFS is simple and fair, but it can produce large total seek distances.
Illustration:
Let's assume the disk has 200 tracks and the disk arm is initially at track 50. The following requests arrive in this order: 98,
183, 37, 122, 14, 124, 65.
Initial Head Position: 50
Step-by-Step Movements:
1. Move from 50 to 98 (seek distance = 48)
2. Move from 98 to 183 (seek distance = 85)
3. Move from 183 to 37 (seek distance = 146)
4. Move from 37 to 122 (seek distance = 85)
5. Move from 122 to 14 (seek distance = 108)
6. Move from 14 to 124 (seek distance = 110)
7. Move from 124 to 65 (seek distance = 59)
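The total head movement for FCFS here is 48 + 85 + 146 + 85 + 108 + 110 + 59 = 641 tracks. A few lines of Python reproduce the calculation:

def fcfs_seek(start, requests):
    total, pos = 0, start
    for track in requests:          # service requests strictly in arrival order
        total += abs(track - pos)
        pos = track
    return total

print(fcfs_seek(50, [98, 183, 37, 122, 14, 124, 65]))  # 641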
2. Shortest Seek Time First (SSTF)
Explanation:
SSTF selects the request that is closest to the current position of the disk head.
It minimizes seek time by always choosing the nearest request.
However, SSTF can lead to starvation, where far-off requests may be delayed for a long time if nearby requests keep
arriving.
Illustration:
Starting position: 50, Requests: [98, 183, 37, 122, 14, 124, 65]
Step-by-Step Movements:
1. Move from 50 to 37 (seek distance = 13)
2. Move from 37 to 14 (seek distance = 23)
3. Move from 14 to 65 (seek distance = 51)
4. Move from 65 to 98 (seek distance = 33)
5. Move from 98 to 122 (seek distance = 24)
6. Move from 122 to 124 (seek distance = 2)
7. Move from 124 to 183 (seek distance = 59)
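A short sketch of SSTF, which at each step picks the pending request closest to the current head position; for this request list it visits 37, 14, 65, 98, 122, 124, 183 and moves 205 tracks in total.

def sstf_schedule(start, requests):
    pending, pos, order, total = list(requests), start, [], 0
    while pending:
        nearest = min(pending, key=lambda t: abs(t - pos))   # closest pending request
        total += abs(nearest - pos)
        pos = nearest
        order.append(nearest)
        pending.remove(nearest)
    return order, total

print(sstf_schedule(50, [98, 183, 37, 122, 14, 124, 65]))
# ([37, 14, 65, 98, 122, 124, 183], 205)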
3. SCAN (Elevator Algorithm)
Explanation:
In the SCAN algorithm, the disk arm moves in one direction, servicing all requests along the way, until it reaches the end of
the disk.
Once it reaches the end, it reverses direction and services requests on the way back.
This is similar to an elevator moving up and down, hence the name "elevator algorithm."
Illustration:
Starting position: 50, Requests: [98, 183, 37, 122, 14, 124, 65]
Initial Head Position: 50
Direction: Moving towards higher tracks
Step-by-Step Movements:
1. Move from 50 to 65 (seek distance = 15)
2. Move from 65 to 98 (seek distance = 33)
3. Move from 98 to 122 (seek distance = 24)
4. Move from 122 to 124 (seek distance = 2)
5. Move from 124 to 183 (seek distance = 59)
6. Move from 183 to 199, the end of the disk (seek distance = 16)
7. Reverse direction and move from 199 to 37 (seek distance = 162)
8. Move from 37 to 14 (seek distance = 23)
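A sketch of SCAN for a 200-track disk (tracks 0 to 199), sweeping upward first: requests above the head are serviced in ascending order, the arm touches the end of the disk, and the remaining requests are then serviced in descending order.

def scan_schedule(start, requests, max_track=199):
    up = sorted(t for t in requests if t >= start)                    # serviced on the upward sweep
    down = sorted((t for t in requests if t < start), reverse=True)   # serviced after reversing
    order = up + ([max_track] if up and up[-1] != max_track else []) + down
    total, pos = 0, start
    for track in order:
        total += abs(track - pos)
        pos = track
    return order, total

print(scan_schedule(50, [98, 183, 37, 122, 14, 124, 65]))
# ([65, 98, 122, 124, 183, 199, 37, 14], 334)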
Explanation:
C-SCAN is a variant of SCAN where the disk arm services requests in one direction only.
When it reaches the end, it returns immediately to the start without servicing any requests, creating a circular pattern.
This ensures uniform wait times because it doesn’t prioritize requests at the ends of the disk.
Illustration:
Initial Head Position: 50
Direction: Moving towards higher tracks
Step-by-Step Movements:
1. Move from 50 to 65 (seek distance = 15)
2. Move from 65 to 98 (seek distance = 33)
3. Move from 98 to 122 (seek distance = 24)
4. Move from 122 to 124 (seek distance = 2)
5. Move from 124 to 183 (seek distance = 59)
6. Move from 183 to 199, the end of the disk (seek distance = 16)
7. Jump from 199 back to track 0 without servicing any requests
8. Move from 0 to 14 (seek distance = 14)
9. Move from 14 to 37 (seek distance = 23)
LOOK Algorithm:
Similar to SCAN, but the disk arm only goes as far as the last request in each direction, without going to the edge of the disk.
C-LOOK Algorithm:
A circular version of LOOK where the disk arm moves in one direction until the last request and then returns to the
beginning, servicing no requests during the return.
Conclusion:
Different disk scheduling algorithms provide various ways to optimize disk access time, and each has its advantages
depending on the workload: FCFS is simple and fair, SSTF minimizes average seek time but can starve distant requests,
and SCAN, C-SCAN, and their LOOK variants provide more uniform service across the whole disk.