
OVERLAYS IN MEMORY MANAGEMENT

In memory management, overlays are a technique used to enable programs to run within limited memory by dynamically
loading only the parts of a program that are required at a given time. This is particularly useful when the total size of a
program exceeds the available physical memory.

Key Concepts of Overlays:

1. Program Segmentation:
o A large program is divided into smaller segments or modules.
o Only the required module or segment is loaded into memory while the others are stored on disk.
2. Dynamic Loading:
o When a particular part of the program is needed, it is loaded from secondary storage (disk) into memory.
o Once its execution is complete, it can be unloaded to free up memory space for other parts of the program.
3. Overlay Manager:
o A specialized part of the operating system or program that manages which parts of the program are loaded
or swapped in and out of memory.
o The manager ensures that the correct segment is available in memory when needed.
4. Memory Efficiency:
o Overlays help in managing memory efficiently, especially in systems with limited RAM.
o By loading only essential parts of the program, it reduces memory wastage and ensures that larger programs
can run on smaller systems.

Overlay Structure:

An overlay structure consists of:

• Main Segment: This is the central part of the program that remains in memory throughout execution. It contains
the overlay manager.
• Overlay Segments: These are the sections of the program that are loaded and unloaded dynamically as needed
during execution.

Example:

Imagine a program consisting of three functions: A, B, and C. If the system memory cannot hold all three at once, an
overlay can be used to load only the function that is being executed at the moment.

• Initially, function A is loaded into memory and executed.
• When the program calls function B, function A is swapped out of memory and B is loaded in its place.
• Similarly, if function C is needed next, function B is unloaded, and C is brought into memory.

This process continues until the program completes execution, ensuring that only a small portion of the program is ever in
memory at any given time.
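As a rough sketch (not how historical overlay systems were actually implemented, and with all names hypothetical), the
swapping behaviour described above can be simulated in Python:

# Minimal simulation of an overlay manager (hypothetical names).
# "Disk" holds every segment; memory has room for only one overlay slot.

disk = {
    "A": "code for function A",
    "B": "code for function B",
    "C": "code for function C",
}

overlay_slot = None  # the single overlay segment resident in memory

def call(segment):
    # Load `segment` into the overlay slot (evicting the current occupant),
    # then "execute" it.
    global overlay_slot
    if overlay_slot != segment:
        if overlay_slot is not None:
            print(f"Swapping out segment {overlay_slot}")
        print(f"Loading segment {segment} from disk into the overlay slot")
        overlay_slot = segment
    print(f"Executing {segment}: {disk[segment]}")

for fn in ["A", "B", "C"]:
    call(fn)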

Benefits of Overlays:

1. Memory Conservation: Overlays allow programs to run within limited memory, making efficient use of available
resources.
2. Execution of Large Programs: Programs larger than the physical memory can still run by loading sections
dynamically.
3. Cost Reduction: Reduces the need for expensive hardware upgrades by allowing large applications to run on
systems with lower memory.

Drawbacks of Overlays:
1. Complexity: Designing and implementing overlays can be complex, requiring careful planning of which segments
are loaded and when.
2. Performance Overhead: The frequent loading and unloading of program segments can slow down the execution
speed.
3. Obsolete Technique: With modern systems having large amounts of memory and advanced virtual memory
techniques, overlays are rarely used today.

Illustration:

Here is a simple illustration of overlays in memory management:

Memory:
+------------------+
|   Main Program   |
+------------------+
| Overlay Segment  |
+------------------+
|    Free Space    |
+------------------+

Process:

Step 1: Load Function A into Overlay Segment -> Execute Function A
Step 2: Swap out Function A -> Load Function B into Overlay Segment -> Execute Function B
Step 3: Swap out Function B -> Load Function C into Overlay Segment -> Execute Function C

This overlay structure helps fit large programs in limited memory environments, with the Main Program (containing the
overlay manager) always in memory and various overlay segments being loaded as needed.

Modern Use:

While overlays were a common technique in older systems, modern operating systems often use virtual memory to handle
similar situations by swapping data between RAM and disk. However, overlays are still an important concept for
understanding memory management in early systems.

VIRTUAL MEMORY

Virtual Memory is a memory management technique used by modern operating systems to give the illusion of having a
large amount of physical memory (RAM) available, even when the actual physical memory is limited. It allows programs
to use more memory than is physically present on the system by extending RAM onto the hard disk or SSD.

Key Concepts of Virtual Memory:

1. Address Space:
o In virtual memory, the logical (virtual) address space used by a process is independent of the physical
address space in the hardware.
o A program can be assigned a large contiguous address space, while the actual data might be stored in
different physical memory locations or even on disk.
2. Paging:
o Virtual memory systems often use a technique called paging to divide the virtual memory into fixed-size
blocks called pages (commonly 4 KB).
o Physical memory is divided into page frames of the same size as pages.
o Pages are loaded into physical memory as needed, and if memory runs out, some pages are moved to disk
(known as page swapping).
3. Page Table:
o Each process maintains a page table, which keeps track of where virtual pages are stored in physical
memory (or on disk if paged out).
o The page table maps virtual addresses to physical addresses.
4. Page Fault:
o A page fault occurs when a program tries to access a page that is not currently loaded in physical memory.
o The operating system handles this by loading the required page from disk into memory and updating the
page table.
5. Swap Space:
o The area on the disk used for storing pages that are swapped out of memory is called swap space or paging
file.
o When memory is full, pages selected by the replacement policy (often the least recently used) are written to swap space to free up physical memory.

Benefits of Virtual Memory:

1. Efficient Memory Utilization:
o Virtual memory allows the system to run more applications simultaneously, as only the needed portions of
programs are loaded into memory.
o Unused portions can remain on disk, allowing efficient use of the limited physical memory.
2. Isolation and Protection:
o Virtual memory ensures that each process has its own isolated address space, which enhances security and
stability.
o Processes cannot interfere with each other's memory spaces, reducing the risk of crashes or security
vulnerabilities.
3. Simplified Programming:
o Programmers do not have to worry about the physical memory limitations, as the operating system manages
memory automatically.
o Programs can be written to assume they have large amounts of contiguous memory, regardless of the
underlying hardware.
4. Running Large Applications:
o Virtual memory enables the execution of large applications that may not fit into physical memory by
keeping parts of the program in memory and other parts on disk.

Virtual Memory Components:

1. Logical Address Space:
o The range of addresses that a process can use, which is larger than the physical memory available.
o Virtual addresses are translated to physical addresses using page tables.
2. Physical Memory:
o The actual RAM in the system. The operating system uses physical memory to store pages from the virtual
address space.
3. Secondary Storage (Swap Space):
o The disk or SSD space that acts as an extension of RAM. Pages not actively in use are stored in swap space.

Virtual Memory Example:

Let’s assume a system with 4 GB of physical RAM but with a program that needs 8 GB of memory to run. Virtual memory
makes this possible:

1. The program is assigned a virtual address space of 8 GB.
2. Pages of the program that are actively in use are loaded into the available 4 GB of physical RAM.
3. The remaining pages are kept on the disk in swap space until needed.
4. When the program needs a page that is on the disk, a page fault occurs, and the operating system swaps in the
required page, possibly swapping out another page to free up memory.
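A toy Python model of this demand-paging behaviour is sketched below. The structures (a dictionary standing in for the
page table, a FIFO eviction queue) are illustrative stand-ins only, not how a real MMU or kernel works:

NUM_FRAMES = 4                       # pretend RAM holds only 4 page frames

page_table = {}                      # virtual page number -> physical frame
free_frames = list(range(NUM_FRAMES))
fifo_queue = []                      # eviction order (FIFO, for simplicity)

def access(page):
    if page in page_table:           # resident: the MMU would translate directly
        return page_table[page]
    # --- page fault ---
    if free_frames:
        frame = free_frames.pop()
    else:                            # RAM full: push a victim out to swap space
        victim = fifo_queue.pop(0)
        frame = page_table.pop(victim)
        print(f"page fault: page {victim} swapped out of frame {frame}")
    page_table[page] = frame         # update the page table
    fifo_queue.append(page)
    print(f"page fault: page {page} loaded into frame {frame}")
    return frame

for p in [0, 1, 2, 3, 4, 0]:         # the final access refaults on page 0
    access(p)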
Virtual Memory Paging Illustration:

+-----------------------+          +-------------------+
| Virtual Address Space |          |  Physical Memory  |
|      (Process A)      |          |       (RAM)       |
|  +-----------------+  |          |  +-------------+  |
|  | Page 1          |--+--------->|  | A: Page 1   |  |
|  | Page 2          |  |          |  | A: Page 3   |  |
|  | Page 3          |  |          |  | B: Page 1   |  |
|  | Page 4          |  |          |  | Free        |  |
|  +-----------------+  |          |  +-------------+  |
|      (Process B)      |          +-------------------+
|  +-----------------+  |
|  | Page 1          |  |          +-------------------+
|  | Page 2          |  |          | Disk (Swap Space) |
|  +-----------------+  |          | A: Page 2, Page 4 |
+-----------------------+          | B: Page 2         |
                                   +-------------------+

• In this illustration, the system has limited physical memory, but processes A and B have larger virtual address
spaces. Pages are loaded and swapped between RAM and disk as needed.

Performance Considerations:

1. Thrashing:
o Thrashing occurs when the system spends most of its time swapping pages in and out of memory, resulting
in very poor performance.
o Thrashing can happen if the working set of pages needed by a program exceeds the available physical
memory.
2. Page Replacement Algorithms:
o To decide which pages to swap out of memory when needed, the operating system uses page replacement
algorithms, such as:
• Least Recently Used (LRU): Replaces the page that has not been used for the longest time.
• First-In, First-Out (FIFO): Replaces the page that has been in memory the longest.
• Optimal: Replaces the page that will not be needed for the longest time in the future.

Summary:

• Virtual memory allows systems to run larger applications than the available physical memory.
• It uses paging to load only the necessary parts of programs into memory.
• The operating system manages page faults, page tables, and swap space to ensure efficient memory usage.
• While virtual memory is highly effective, improper tuning or overuse can lead to thrashing and performance
degradation.

PAGE REPLACEMENT ALGORITHM

Page Replacement Algorithms are techniques used in operating systems to manage how pages are swapped in and out of
physical memory when the system is running low on memory. When a page fault occurs (when a program tries to access a
page that is not currently in memory), the operating system must decide which page to evict to make room for the new
page. The choice of which page to replace can significantly affect system performance.

Common Page Replacement Algorithms

1. Least Recently Used (LRU):

o Description: LRU replaces the page that has not been used for the longest period of time. It is based on the
assumption that pages used recently will likely be used again soon.
o Implementation: This can be implemented using counters or timestamps to track when pages were last accessed, or
by using a stack to keep track of the order of page accesses.
o Pros: Generally results in good performance by keeping frequently used pages in memory.
o Cons: Can be costly to implement due to the overhead of tracking page usage.

Illustration:
Imagine you have 3 page frames and the following page reference string:
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

Step  Reference  Frame 1  Frame 2  Frame 3  Page Fault?
 1        1         1                          Yes
 2        2         1        2                 Yes
 3        3         1        2        3        Yes
 4        4         4        2        3        Yes
 5        1         4        1        3        Yes
 6        2         4        1        2        Yes
 7        5         5        1        2        Yes
 8        1         5        1        2        No
 9        2         5        1        2        No
10        3         3        1        2        Yes
11        4         3        4        2        Yes
12        5         3        4        5        Yes

Total page faults: 10

In LRU, when all the frames are full, the page that was least recently used is replaced. Conceptually, every access moves a
page to the most-recently-used end of a list, so on a fault the page at the least-recently-used end is the one evicted. For
this reference string, LRU incurs 10 page faults.
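For readers who want to check the table, here is a minimal LRU simulation in Python, using an OrderedDict as one
common way to track recency (an illustrative sketch, not a kernel implementation):

from collections import OrderedDict

def lru_faults(refs, num_frames):
    frames = OrderedDict()          # keys kept in least- to most-recently-used order
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))   # -> 10, matching the table above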

2. First-In, First-Out (FIFO):

o Description: FIFO replaces the oldest page in memory, following a simple queue structure.
o Implementation: The oldest page is evicted, regardless of how frequently or recently it has been used.
o Pros: Simple and easy to implement.
o Cons: Can lead to suboptimal performance and is subject to Belady's anomaly, where increasing the number of page
frames can lead to an increase in the number of page faults.

Illustration:
Using the same page reference string:
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

Step  Reference  Frame 1  Frame 2  Frame 3  Page Fault?
 1        1         1                          Yes
 2        2         1        2                 Yes
 3        3         1        2        3        Yes
 4        4         4        2        3        Yes
 5        1         4        1        3        Yes
 6        2         4        1        2        Yes
 7        5         5        1        2        Yes
 8        1         5        1        2        No
 9        2         5        1        2        No
10        3         5        3        2        Yes
11        4         5        3        4        Yes
12        5         5        3        4        No

Total page faults: 9

Note that FIFO tracks arrival order, not recency: at step 10 the oldest resident page is 1 (loaded at step 5), so 1 is evicted
rather than 5, and the final reference to 5 is then a hit. On this particular string FIFO happens to incur one fewer fault than
LRU, although on typical workloads it performs worse.
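The same check can be done for FIFO with a small Python sketch (illustrative only):

from collections import deque

def fifo_faults(refs, num_frames):
    frames = set()
    queue = deque()                 # arrival order of the resident pages
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.remove(queue.popleft())  # evict the oldest page
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # -> 9 faults for this string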

3. Optimal Page Replacement:
o Description: The optimal algorithm replaces the page that will not be used for the longest time in the future.
o Implementation: This requires knowledge of future requests, which is often not feasible in practice.
o Pros: Provides the lowest possible page fault rate for a given reference string.
o Cons: Not implementable in a real system because it requires future knowledge.

Illustration:
This algorithm chooses the page that will not be used for the longest time in the future. Using the same reference string:
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

Step  Reference  Frame 1  Frame 2  Frame 3  Page Fault?
 1        1         1                          Yes
 2        2         1        2                 Yes
 3        3         1        2        3        Yes
 4        4         1        2        4        Yes
 5        1         1        2        4        No
 6        2         1        2        4        No
 7        5         1        2        5        Yes
 8        1         1        2        5        No
 9        2         1        2        5        No
10        3         3        2        5        Yes
11        4         3        4        5        Yes
12        5         3        4        5        No

Total page faults: 7 (the minimum achievable for this string with 3 frames)

Here, page 3 is evicted at step 4 and page 4 is evicted at step 7 because, based on future knowledge, each is the page that
will not be needed for the longest time compared to the others. At steps 10 and 11, pages 1 and 2 are evicted because
they are never referenced again.
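A minimal Python sketch of the optimal policy, assuming the whole reference string is known in advance, confirms the
7-fault total:

def optimal_faults(refs, num_frames):
    frames = set()
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == num_frames:
            # Evict the resident page whose next use is farthest in the
            # future (pages never used again count as infinitely far).
            def next_use(p):
                try:
                    return refs.index(p, i + 1)
                except ValueError:
                    return float("inf")
            frames.remove(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(optimal_faults(refs, 3))  # -> 7 faults, the minimum possible here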

4. Least Frequently Used (LFU):
o Description: LFU replaces the page that has been used the least number of times.
o Implementation: Each page has a counter that increments every time the page is accessed. The page with the
lowest count is replaced.
o Pros: Keeps pages that are used frequently in memory.
o Cons: Can lead to problems if a page was accessed heavily in the past but is no longer needed (the cache pollution problem).

Concept:

• LFU replaces the page that is used least frequently.
• Each page has a counter that keeps track of how many times it has been accessed.
• When a new page needs to be loaded and all frames are full, the page with the lowest frequency count is replaced.

Illustration:

Let's use the page reference string:

1, 2, 3, 1, 2, 4, 5, 1, 2, 3, 4, 5

Assume 3 page frames are available.

(In this walkthrough, a page's counter tracks accesses made while it is resident and is discarded when the page is evicted;
ties are broken first-in, first-out. Counts shown are those of the resident pages after each step.)

Step  Reference  Frame 1  Frame 2  Frame 3  Frequency Counts (resident pages)  Page Fault?
 1        1         1                        1(1)                               Yes
 2        2         1        2               1(1), 2(1)                         Yes
 3        3         1        2        3      1(1), 2(1), 3(1)                   Yes
 4        1         1        2        3      1(2), 2(1), 3(1)                   No
 5        2         1        2        3      1(2), 2(2), 3(1)                   No
 6        4         1        2        4      1(2), 2(2), 4(1)                   Yes (3 evicted)
 7        5         1        2        5      1(2), 2(2), 5(1)                   Yes (4 evicted)
 8        1         1        2        5      1(3), 2(2), 5(1)                   No
 9        2         1        2        5      1(3), 2(3), 5(1)                   No
10        3         1        2        3      1(3), 2(3), 3(1)                   Yes (5 evicted)
11        4         1        2        4      1(3), 2(3), 4(1)                   Yes (3 evicted)
12        5         1        2        5      1(3), 2(3), 5(1)                   Yes (4 evicted)

Total page faults: 8

Explanation:

1. At step 6, when 4 needs to be loaded, we see that pages 1, 2, and 3 are already loaded. Among these, page 3 has the
lowest frequency count (only used once), so it is replaced by 4.
2. At step 7, when 5 needs to be loaded, page 4 is retained (used once), while page 3 is evicted since it is no longer the
least frequently used.
3. At step 10, when 3 comes back into use, 1 is evicted as it has not been accessed for a while (its frequency count is
now 0).

Behavior of LFU:

• LFU tends to hold onto frequently accessed pages and replaces less frequently used ones.
• When two or more pages have the same frequency, a tie-breaker like the FIFO principle is often used.
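Under the assumptions used in the illustration above (counters discarded on eviction, FIFO tie-breaking), a minimal
Python sketch of LFU looks like this:

def lfu_faults(refs, num_frames):
    counts = {}                     # resident page -> access count
    order = []                      # arrival order, used to break ties (FIFO)
    faults = 0
    for page in refs:
        if page in counts:
            counts[page] += 1       # hit: bump the frequency counter
            continue
        faults += 1
        if len(counts) == num_frames:
            # Evict the least frequently used page; on a tie, the oldest.
            victim = min(order, key=lambda p: counts[p])
            order.remove(victim)
            del counts[victim]      # the counter is discarded with the page
        counts[page] = 1
        order.append(page)
    return faults

refs = [1, 2, 3, 1, 2, 4, 5, 1, 2, 3, 4, 5]
print(lfu_faults(refs, 3))   # -> 8 faults under these assumptions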

5. Second Chance (Clock Algorithm):
o Description: This is a modification of FIFO that gives pages a second chance before replacement. Each page has a
reference bit.
o Implementation: When a page is to be replaced, if its reference bit is 1, it is cleared to 0 and the page is given a
second chance. The algorithm continues checking pages until it finds one with a reference bit of 0 to replace.
o Pros: More effective than FIFO as it considers the recent usage of pages.
o Cons: Still may not perform as well as LRU in some cases.

The Second Chance algorithm is an enhancement of the FIFO (First In First Out) algorithm, designed to improve
performance by giving pages a "second chance" before being replaced.

How It Works:

1. Reference Bit: Each page in memory has an associated reference bit, which is set to 1 when the page is accessed (read or
written).
2. Clock Hand: Pages are arranged in a circular list (like a clock), and a "clock hand" points to the next page to be replaced.
3. When a page needs to be replaced:
o If the reference bit of the page pointed to by the clock hand is 0, the page is replaced.
o If the reference bit is 1, the bit is cleared to 0 (giving the page a second chance), and the clock hand moves to the
next page.

The algorithm continues this process until it finds a page with a reference bit of 0, which is then replaced.

Illustration:

Consider the following page reference string:

1, 2, 3, 1, 4, 5, 2, 1, 4, 6

We have 3 frames available.

Initial Setup:

• Assume each page has a reference bit that starts at 0.
• The clock hand starts at the first frame.

(In this walkthrough, a newly loaded page gets its reference bit set to 1 and the hand advances past it. The bits and hand
position shown are those after the step completes.)

Step  Reference  Frame 1  Frame 2  Frame 3  Reference Bits (resident pages)  Clock Hand  Page Fault?
 1        1         1                        1(1)                             Frame 2     Yes
 2        2         1        2               1(1), 2(1)                       Frame 3     Yes
 3        3         1        2        3      1(1), 2(1), 3(1)                 Frame 1     Yes
 4        1         1        2        3      1(1), 2(1), 3(1)                 Frame 1     No
 5        4         4        2        3      4(1), 2(0), 3(0)                 Frame 2     Yes
 6        5         4        5        3      4(1), 5(1), 3(0)                 Frame 3     Yes
 7        2         4        5        2      4(1), 5(1), 2(1)                 Frame 1     Yes
 8        1         1        5        2      1(1), 5(0), 2(0)                 Frame 2     Yes
 9        4         1        4        2      1(1), 4(1), 2(0)                 Frame 3     Yes
10        6         1        4        6      1(1), 4(1), 6(1)                 Frame 1     Yes

Total page faults: 9

Explanation:

1. Steps 1-3: The first three pages (1, 2, 3) are loaded into frames, causing page faults each time. Their reference bits are
set to 1.
2. Step 4: When page 1 is referenced again, it is already resident, so no replacement occurs.
3. Step 5: When page 4 needs to be loaded, the hand points to page 1, whose reference bit is 1. Page 1 gets a second
chance: its bit is cleared to 0 and the hand moves on. The same happens to pages 2 and 3, so the hand comes back
around to page 1, which now has a 0 bit and is replaced by 4.
4. Steps 6-10: The hand keeps moving circularly, clearing the bits of recently referenced pages and replacing pages whose
bits are 0. At step 8, pages 4, 5, and 2 each use up their second chance before 4 is finally replaced by 1; at step 10,
page 2 (bit 0) is replaced by 6.
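A compact Python sketch of the clock algorithm (with newly loaded pages getting their reference bit set, as assumed in
the table above) reproduces the 9 faults:

def clock_faults(refs, num_frames):
    frames = [None] * num_frames    # circular list of resident pages
    ref_bit = [0] * num_frames
    hand = 0
    faults = 0
    for page in refs:
        if page in frames:
            ref_bit[frames.index(page)] = 1   # hit: set the reference bit
            continue
        faults += 1
        # Advance the hand, clearing bits, until a 0-bit frame is found.
        while ref_bit[hand] == 1:
            ref_bit[hand] = 0                 # second chance given
            hand = (hand + 1) % num_frames
        frames[hand] = page                   # replace the victim
        ref_bit[hand] = 1
        hand = (hand + 1) % num_frames
    return faults

refs = [1, 2, 3, 1, 4, 5, 2, 1, 4, 6]
print(clock_faults(refs, 3))  # -> 9 faults, as in the table above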

Visual Representation (Clock):

          +-----+
          |  2  |
          +-----+
         /       \
   +-----+       +-----+
   |  1  |       |  3  |
   +-----+       +-----+
      ^
      |
  clock hand

Advantages of Second Chance Algorithm:

• Fairness: Pages that have been used recently get a second chance before being replaced.
• Efficiency: It works well in practice by reducing the chances of unnecessary page replacements.

This illustration gives an overview of the Second Chance (Clock) Page Replacement Algorithm and its approach to
replacing pages in memory.

6. Most Recently Used (MRU):
o Description: MRU replaces the page that was most recently used. This is based on the assumption that if a page was
just used, it is less likely to be needed soon.
o Implementation: A stack can be used to keep track of the order of page accesses.
o Pros: Can be effective in certain scenarios, such as when working sets are small.
o Cons: Generally less effective than LRU or LFU for typical workloads.

Most Recently Used (MRU) Page Replacement Algorithm

The Most Recently Used (MRU) page replacement algorithm is based on the idea that the page that was used most
recently is likely to be used again soon. Unlike LRU (Least Recently Used), which replaces the least recently used page,
MRU replaces the page that has been most recently accessed.

How It Works:

• Whenever a page needs to be replaced, the page that was most recently used is selected for removal.
• The assumption behind MRU is that, in certain access patterns (such as one-pass sequential scans), the page just
used will not be needed again soon, so keeping it in memory is not worthwhile.

Illustration:

Let’s consider an example page reference string:

1, 2, 3, 4, 1, 2, 5, 6

We have 3 frames available.

Steps:

Step  Reference  Frame 1  Frame 2  Frame 3  Most Recently Used Page  Page Fault?
 1        1         1                        1                        Yes
 2        2         1        2               2                        Yes
 3        3         1        2        3      3                        Yes
 4        4         1        2        4      4 (replaced 3)           Yes
 5        1         1        2        4      1                        No
 6        2         1        2        4      2                        No
 7        5         1        5        4      5 (replaced 2)           Yes
 8        6         1        6        4      6 (replaced 5)           Yes

Total page faults: 6

Explanation:

1. Steps 1-3: The first three pages (1, 2, 3) are loaded into the frames, causing page faults, as they are being accessed for
the first time. Page 3 is the most recently used.
2. Step 4: When page 4 is requested, the algorithm replaces page 3 (the most recently used page), loading 4 into its frame.
3. Steps 5-6: Pages 1 and 2 are accessed again, but since they are already in memory, there are no page faults.
4. Step 7: Page 5 needs to be loaded, and since page 2 was the most recently used, it is replaced with page 5.
5. Step 8: Page 6 is requested, replacing page 5, which is now the most recently used page.

Visual Representation:

Imagine that after every page request, the most recently used page is marked. When a new page is requested and a
replacement is necessary, the page that was most recently used is replaced.

MRU in a Diagram:

Step 4: Replacing the most recently used page (3)

Before:
Frames: [1, 2, 3]
Most Recently Used: 3

After:
Frames: [1, 2, 4]
Most Recently Used: 4
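A minimal Python sketch of MRU (illustrative only) reproduces the table's 6 faults:

def mru_faults(refs, num_frames):
    frames = set()
    mru = None                      # the most recently used resident page
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.remove(mru)  # evict the most recently used page
            frames.add(page)
        mru = page                  # every access updates the MRU page
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 6]
print(mru_faults(refs, 3))   # -> 6 faults, matching the table above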

Advantages:

• Useful in Specific Situations: MRU can be efficient in workloads where the most recently accessed page is unlikely to be
needed soon (e.g., when data is read sequentially exactly once).

Disadvantages:

• Non-intuitive: MRU can lead to inefficient memory usage in many scenarios because it assumes the most recently used
page will not be needed again, which does not hold for typical programs with temporal locality.

This illustrates how the Most Recently Used (MRU) algorithm works and how it determines which page to replace based
on recent usage.

Comparison of Page Replacement Algorithms

Algorithm                    Implementation Complexity  Average Page Fault Rate  Advantages                       Disadvantages
Least Recently Used (LRU)    Moderate                   Low                      Good performance                 High overhead for tracking usage
First-In, First-Out (FIFO)   Simple                     Moderate to High         Easy to implement                May lead to Belady's anomaly
Optimal                      Theoretical                Lowest                   Best possible page fault rate    Not feasible for real systems
Least Frequently Used (LFU)  Moderate                   Moderate                 Keeps frequently used pages      Cache pollution problem
Second Chance (Clock)        Moderate                   Moderate                 Better than FIFO                 May still not perform as well as LRU
Most Recently Used (MRU)     Moderate                   Moderate to High         Effective in small working sets  Generally less effective than LRU or LFU

DIRECT MEMORY ACCESS (DMA)

Direct Memory Access (DMA) allows a dedicated controller to transfer data between peripherals and main memory
without involving the CPU in every step of the transfer.

Detailed Illustration of DMA Operation

1. DMA Request Initiation:
o The Peripheral Device (e.g., a hard drive or network card) sends a request to the DMA Controller (DMAC) to transfer
data to/from the system memory.
2. DMA Controller Informs CPU:
o The DMA controller notifies the CPU of the upcoming transfer, and the CPU gives permission for the DMA transfer.
The CPU sets up the initial conditions (source address, destination address, and amount of data to transfer) for the
DMA controller to perform the task.

3. Data Transfer:
o The DMA controller takes over the system bus to perform the data transfer. This can happen in different modes
(Burst Mode, Cycle Stealing, or Transparent Mode), depending on the system’s configuration.

4. Completion of Transfer:
o Once the DMA controller has completed the data transfer, it sends an interrupt signal back to the CPU, notifying it
that the operation is done.

Illustration of the Process:

┌──────────────────────────────────────────────────────────┐
│              CPU (Central Processing Unit)               │
│       1. Initiates DMA, sets transfer parameters         │
└──────────────────────────────────────────────────────────┘
                            │
                            ▼
┌──────────────────────────────────────────────────────────┐
│                  DMA Controller (DMAC)                   │
│            2. Takes control of system bus                │
│     3. Transfers data between memory and peripheral      │
└──────────────────────────────────────────────────────────┘
          ▲                                  ▲
          │                                  │
┌─────────┴─────────────┐        ┌───────────┴─────────────┐
│        Memory         │        │    Peripheral Device    │
│ (e.g., RAM, main      │        │ (e.g., hard disk,       │
│  memory)              │        │  network card)          │
└───────────────────────┘        └─────────────────────────┘

The data transfer between memory and the peripheral device is managed by the DMA Controller, not the CPU.

4. Upon completion, the DMA controller sends an interrupt to the CPU.

Steps in the Process:

1. Peripheral Device Initiates Transfer:
o The peripheral device (e.g., a hard drive) requests data transfer through the DMA controller.
2. CPU Prepares Transfer Parameters:
o The CPU informs the DMA controller where the data should be transferred (source and destination addresses) and
how much data to transfer.
3. DMA Controller Transfers Data:
o The DMA controller controls the system bus and performs the transfer between the peripheral device and memory.
This transfer can be done in a block (burst mode) or intermittently (cycle stealing mode).
4. CPU Interrupted When Transfer Completes:
o Once the transfer is complete, the DMA controller sends an interrupt to the CPU, informing it that the data has been
transferred.

Modes of DMA Transfer (Illustrated):

1. Burst Mode:
o The DMA controller takes over the bus completely until the transfer is complete. The CPU cannot access memory
during this time.
2. Cycle Stealing Mode:
o The DMA controller "steals" a bus cycle from the CPU to transfer small amounts of data at a time, allowing the CPU
to work in between data transfers.
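The handshake can be caricatured in a few lines of Python. This is a pure software simulation with hypothetical names;
real DMA involves hardware registers and bus arbitration that cannot be modelled faithfully here:

# Toy simulation of a DMA transfer (purely illustrative).

memory = bytearray(64)                      # stand-in for system RAM

class DMAController:
    def __init__(self, interrupt_handler):
        self.interrupt_handler = interrupt_handler

    def transfer(self, src_data, dest_addr):
        # 3. The DMAC moves the data without involving the CPU.
        memory[dest_addr:dest_addr + len(src_data)] = src_data
        # 4. On completion, raise an interrupt back to the CPU.
        self.interrupt_handler(len(src_data))

def cpu_interrupt_handler(nbytes):
    print(f"CPU interrupted: DMA transfer of {nbytes} bytes complete")

# 1-2. The CPU programs the controller with the transfer parameters,
# then continues other work while the device and DMAC cooperate.
dmac = DMAController(cpu_interrupt_handler)
dmac.transfer(b"disk sector data", dest_addr=0)
print(memory[:16])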

Complete DMA Cycle – Illustrated with Data Flow:

STEP 1: CPU sets up DMA Controller
--------------------------------------------------------------
┌───────────────┐              ┌───────────────┐
│  Peripheral   │              │      DMA      │
│ (e.g., Disk)  │              │  Controller   │
└───────────────┘              └───────────────┘
        │                              ▲
        └────── Transfer Request ──────┘

STEP 2: DMA Controller Takes Over Bus and Transfers Data
--------------------------------------------------------------
    CPU                             DMA
┌─────────┐                ┌─────────────────┐
│  Busy   │                │  Bus Controller │
└─────────┘                └─────────────────┘
┌───────────┐                      │
│  Memory   │ <───── Data ─────► ┌──────────────┐
│(e.g., RAM)│                    │  Peripheral  │
└───────────┘                    │ (e.g., Disk) │
                                 └──────────────┘

STEP 3: DMA Interrupts CPU Upon Completion
--------------------------------------------------------------
┌─────────────────┐                    ┌─────────┐
│ DMA Controller  │ ──── Interrupt ──► │   CPU   │
└─────────────────┘                    └─────────┘

Key Points of the Illustration:

• The Peripheral sends a request to the DMA Controller to begin the data transfer.
• The CPU sets the parameters (e.g., address, size of data) and then allows the DMA controller to handle the transfer.
• The DMA Controller directly communicates with the Memory and Peripheral, taking control of the system bus.
• Once the transfer is complete, the DMA controller signals the CPU via an interrupt, informing it that the operation is done.

Benefits of DMA:

• Frees CPU: The CPU is not involved in every part of the data transfer, allowing it to focus on other tasks.
• Faster Transfers: Data moves faster because the DMA controller is optimized for bulk transfers without CPU overhead.
• Efficient Use of System Resources: The system bus is used more efficiently, especially in burst mode.

Conclusion:

Direct Memory Access (DMA) is crucial in modern computer architecture as it enhances system efficiency, especially
during large data transfers. With DMA, systems can achieve better performance by offloading the work from the CPU to a
dedicated controller.

DISK ARM SCHEDULING ALGORITHMS

Disk arm scheduling algorithms are used to optimize the movement of the disk arm across the tracks of the disk. These
algorithms determine the sequence in which disk I/O requests are serviced to minimize disk seek time, improving the
overall efficiency of disk operations. Below are some key disk scheduling algorithms, each explained step by step with
detailed illustrations.

1. First-Come, First-Served (FCFS)

Explanation:

• FCFS is the simplest disk scheduling algorithm.
• The requests are processed in the order they arrive, regardless of their position on the disk.
• There is no optimization of seek time, and the performance may not be efficient when requests are scattered across the
disk.

Steps:

1. The disk starts at a particular position.
2. The requests are serviced sequentially in the order they arrive.
3. No attempt is made to optimize the order to minimize seek time.

Illustration:

Let's assume the disk has 200 tracks and the disk arm is initially at track 50. The following requests arrive in this order: 98,
183, 37, 122, 14, 124, 65.

Initial Head Position: 50

Requests: [98, 183, 37, 122, 14, 124, 65]

Step-by-Step Movements:
1. Move from 50 to 98 (seek distance = 48)
2. Move from 98 to 183 (seek distance = 85)
3. Move from 183 to 37 (seek distance = 146)
4. Move from 37 to 122 (seek distance = 85)
5. Move from 122 to 14 (seek distance = 108)
6. Move from 14 to 124 (seek distance = 110)
7. Move from 124 to 65 (seek distance = 59)

Total seek distance: 48 + 85 + 146 + 85 + 108 + 110 + 59 = 641
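Computing the FCFS total is simple enough to sketch in Python; the helper below (hypothetical name total_seek) just
sums head movements in service order:

def total_seek(start, order):
    # Sum of head movements when requests are served in the given order.
    distance, pos = 0, start
    for track in order:
        distance += abs(track - pos)
        pos = track
    return distance

requests = [98, 183, 37, 122, 14, 124, 65]
print(total_seek(50, requests))  # FCFS serves in arrival order -> 641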

2. Shortest Seek Time First (SSTF)

Explanation:

• SSTF selects the request that is closest to the current position of the disk head.
• It minimizes seek time by always choosing the nearest request.
• However, SSTF can lead to starvation, where far-off requests may be delayed for a long time if nearby requests keep
arriving.

Steps:

1. Start at the current position of the disk head.
2. Look for the request closest to the current position.
3. Service the request and update the disk head’s position.
4. Repeat the process for the remaining requests.

Illustration:

Using the same set of requests and starting position:

Initial Head Position: 50

Requests: [98, 183, 37, 122, 14, 124, 65]

Step-by-Step Movements:
1. Move from 50 to 37 (seek distance = 13; 37 is closer to 50 than 65 is)
2. Move from 37 to 14 (seek distance = 23)
3. Move from 14 to 65 (seek distance = 51)
4. Move from 65 to 98 (seek distance = 33)
5. Move from 98 to 122 (seek distance = 24)
6. Move from 122 to 124 (seek distance = 2)
7. Move from 124 to 183 (seek distance = 59)

Total seek distance: 13 + 23 + 51 + 33 + 24 + 2 + 59 = 205
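A greedy Python sketch of SSTF (illustrative only) reproduces this ordering and total:

def sstf_order(start, requests):
    # Repeatedly pick the pending request closest to the current head position.
    pending, pos, order = list(requests), start, []
    while pending:
        nearest = min(pending, key=lambda t: abs(t - pos))
        pending.remove(nearest)
        order.append(nearest)
        pos = nearest
    return order

requests = [98, 183, 37, 122, 14, 124, 65]
order = sstf_order(50, requests)
print(order)  # -> [37, 14, 65, 98, 122, 124, 183]
print(sum(abs(b - a) for a, b in zip([50] + order, order)))  # -> 205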

3. SCAN (Elevator Algorithm)

Explanation:

• In the SCAN algorithm, the disk arm moves in one direction, servicing all requests along the way, until it reaches the end of
the disk.
• Once it reaches the end, it reverses direction and services requests on the way back.
• This is similar to an elevator moving up and down, hence the name "elevator algorithm."

Steps:

1. Start from the current position of the disk arm.
2. Move in one direction (up or down the tracks), servicing all requests in that direction.
3. When you reach the end of the disk (or there are no more requests in that direction), reverse direction and continue
servicing requests.

Illustration:

Starting position: 50, Requests: [98, 183, 37, 122, 14, 124, 65]

Initial Head Position: 50
Direction: Moving towards higher tracks

Step-by-Step Movements:
1. Move from 50 to 65 (seek distance = 15)
2. Move from 65 to 98 (seek distance = 33)
3. Move from 98 to 122 (seek distance = 24)
4. Move from 122 to 124 (seek distance = 2)
5. Move from 124 to 183 (seek distance = 59)
6. Reverse direction at 183, move to 37 (seek distance = 146)
7. Move from 37 to 14 (seek distance = 23)

Total seek distance: 15 + 33 + 24 + 2 + 59 + 146 + 23 = 302
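A Python sketch of this simplified SCAN (reversing at the last request, as in the steps above) gives the same total:

def scan_order(start, requests):
    # Sweep upward from the head, then reverse and sweep back down.
    # (Following the simplified description above, the arm reverses at the
    # last request rather than the physical end of the disk.)
    up = sorted(t for t in requests if t >= start)
    down = sorted((t for t in requests if t < start), reverse=True)
    return up + down

requests = [98, 183, 37, 122, 14, 124, 65]
order = scan_order(50, requests)
print(order)  # -> [65, 98, 122, 124, 183, 37, 14]
print(sum(abs(b - a) for a, b in zip([50] + order, order)))  # -> 302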

4. C-SCAN (Circular SCAN)

Explanation:

• C-SCAN is a variant of SCAN where the disk arm services requests in one direction only.
• When it reaches the end, it returns immediately to the start without servicing any requests, creating a circular pattern.
• This ensures uniform wait times because it doesn’t prioritize requests at the ends of the disk.

Steps:

1. Move in one direction, servicing all requests along the way.
2. Once the disk arm reaches the end, reset to the start (without servicing requests).
3. Continue servicing requests in the same direction as before.

Illustration:

Initial Head Position: 50
Direction: Moving towards higher tracks

Requests: [98, 183, 37, 122, 14, 124, 65]

Step-by-Step Movements:
1. Move from 50 to 65 (seek distance = 15)
2. Move from 65 to 98 (seek distance = 33)
3. Move from 98 to 122 (seek distance = 24)
4. Move from 122 to 124 (seek distance = 2)
5. Move from 124 to 183 (seek distance = 59)
6. Jump back to track 0 without servicing any requests (in this example, the return sweep is not counted toward the total)
7. Move from 0 to 14 (seek distance = 14)
8. Move from 14 to 37 (seek distance = 23)

Total seek distance (excluding the return jump): 15 + 33 + 24 + 2 + 59 + 14 + 23 = 170
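And a sketch of C-SCAN under the same accounting (upward sweep, then an uncounted jump back to track 0):

def cscan_order(start, requests):
    # Serve upward from the head, then wrap around to the lowest tracks
    # and continue in the same direction (circular service order).
    up = sorted(t for t in requests if t >= start)
    wrap = sorted(t for t in requests if t < start)
    return up + wrap

requests = [98, 183, 37, 122, 14, 124, 65]
print(cscan_order(50, requests))  # -> [65, 98, 122, 124, 183, 14, 37]

# Seek total as accounted in the worked example above: the upward sweep,
# then service after the (uncounted) jump back to track 0.
upward = 183 - 50               # 133: from the head to the highest request
wrapped = (14 - 0) + (37 - 14)  # 37: from track 0 through the wrapped requests
print(upward + wrapped)         # -> 170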

5. LOOK and C-LOOK Algorithms

LOOK Algorithm:

• Similar to SCAN, but the disk arm only goes as far as the last request in each direction, without going to the edge of the disk.

C-LOOK Algorithm:

• A circular version of LOOK where the disk arm moves in one direction until the last request and then returns to the
beginning, servicing no requests during the return.

Conclusion:

Different disk scheduling algorithms provide various ways to optimize disk access time. Each algorithm has its advantages
depending on the workload:

• FCFS is simple but inefficient.
• SSTF minimizes seek time but can cause starvation.
• SCAN and C-SCAN ensure fairness and uniform wait times.
• LOOK and C-LOOK reduce unnecessary travel to the edges of the disk.
