
Unit 4

Memory Management

Memory Management

Definition:
Memory management is a core function of the operating system that handles the allocation and
deallocation of memory space to processes during execution.

Objectives of Memory Management

 Efficient utilization of available memory.


 Isolation: Ensure processes do not interfere with each other's memory.
 Protection: Prevent unauthorized access to memory.
 Relocation: Support moving processes within memory during execution.
 Sharing: Enable memory sharing between processes when needed.

Functions of Memory Management

1. Tracking memory usage (which parts are in use or free).


2. Allocating memory to processes (either static or dynamic).
3. Swapping: Moving processes between main memory and disk.
4. Maintaining data structures like page tables, segment tables, and free lists.

Types of Memory

 Primary Memory (RAM): Fast, volatile memory where programs are loaded for
execution.
 Secondary Memory (Disk): Non-volatile storage used for virtual memory.
 Cache Memory: Small, fast memory between CPU and RAM to reduce access time.

Memory Allocation Techniques

 Contiguous Allocation (Fixed/Variable partitions)


 Paging
 Segmentation
 Paged Segmentation
 Virtual Memory (Demand Paging)

Basic Bare Machine

Definition:
A bare machine refers to a computer system without any operating system or software
interface. It represents the lowest-level interaction with computer hardware.

Characteristics
 No operating system; only hardware and machine language are present.
 Programs must be written in machine code or low-level assembly.
 No abstraction or user-friendly environment.
 Direct control over CPU, memory, and I/O devices is required.
 Programmer is responsible for memory management, I/O handling, and job
sequencing.

Limitations

 Extremely difficult and error-prone to use.


 Lack of protection between programs.
 No multitasking or multiprogramming.
 Very low productivity due to manual handling of everything.
 Not suitable for general-purpose or multi-user systems.

Historical Relevance

 Used in the early days of computing, especially with first-generation computers.


 Served as a foundation for the development of operating systems and resident
monitors.

Resident Monitor

Definition:
A resident monitor is an early form of operating system that resides permanently in memory
and controls the execution of programs. It automates the job sequencing process and manages
system resources.

Key Functions

 Job Control: Automatically loads and executes jobs one after another.
 Memory Management: Divides memory into two parts:
o Monitor area: Permanently resides in memory.
o User area: Where user programs are loaded and executed.
 I/O Management: Handles basic input and output operations.
 Program Loading: Loads user programs into memory from secondary storage.

Structure

 Control Card Interpreter: Reads and interprets control commands (job start, end,
etc.).
 Loader: Loads executable programs into memory.
 Device Drivers: Basic routines for interacting with hardware.
 Error Handler: Detects and handles basic runtime errors.

Advantages

 Automates program execution (no manual intervention needed).


 Provides basic resource management.
 Simplifies system usage compared to bare machines.

Limitations

 Can only execute one job at a time.


 Still lacks advanced features like multiprogramming, protection, and process
management.

Multiprogramming with Fixed Partitions

Definition:
This is a memory management technique where the main memory is divided into a fixed
number of partitions at system startup, and each partition can hold exactly one process.

Key Features

 Each partition is of fixed size, defined in advance.


 Multiple programs (processes) can be loaded simultaneously, one in each partition.
 The operating system keeps track of which partitions are free or occupied.
 When a process ends, its partition becomes available for the next job.

Types of Fixed Partitioning

1. Equal-size partitions: All partitions are of the same size.


2. Unequal-size partitions: Partitions vary in size to accommodate different process
sizes better.

Advantages

 Simple to implement.
 Allows multiprogramming: more than one process can reside in memory.
 Reduces CPU idle time compared to single-task systems.

Disadvantages

 Internal fragmentation: Memory wasted if a process is smaller than the partition.


 Limited flexibility: Number of processes is restricted to the number of partitions.
 Cannot dynamically adjust to varying memory needs of processes.

Use Case

 Early batch processing systems used fixed partitioning before dynamic allocation
techniques evolved.
Multiprogramming with Variable Partitions

Definition:
In this memory management technique, the main memory is divided dynamically into
partitions of variable sizes based on the size of incoming processes. Unlike fixed partitions,
there are no pre-set divisions.

Key Features

 Memory is allocated exactly as needed by each process.


 The number and size of partitions change dynamically.
 Increases memory utilization and reduces internal fragmentation.
 The OS maintains a free memory list to track available space.

Memory Allocation Strategies

1. First Fit: Allocates the first block of memory that is large enough.
2. Best Fit: Allocates the smallest available block that fits the process.
3. Worst Fit: Allocates the largest block to leave a large leftover for future use.
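The three strategies above can be sketched as a hole-selection function. This is a minimal illustration, not a full allocator; the `allocate` helper and the sample hole list are hypothetical:

```python
def allocate(holes, size, strategy):
    """Return the index of the chosen free block, or None if nothing fits.
    `holes` is a list of free-block sizes, in memory order."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    if not candidates:
        return None
    if strategy == "first":
        return candidates[0][1]        # first block large enough
    if strategy == "best":
        return min(candidates)[1]      # smallest block that fits
    if strategy == "worst":
        return max(candidates)[1]      # largest block

holes = [100, 500, 200, 300, 600]           # free blocks, in memory order
assert allocate(holes, 212, "first") == 1   # 500 is the first that fits
assert allocate(holes, 212, "best") == 3    # 300 is the smallest that fits
assert allocate(holes, 212, "worst") == 4   # 600 is the largest
```

Note how the three strategies can pick three different holes for the same request.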

Advantages

 More flexible than fixed partitions.


 Better memory utilization as it avoids internal fragmentation.
 Can handle a variable number of processes depending on their sizes.

Disadvantages

 External fragmentation: Free memory becomes scattered in small blocks.


 May require compaction to combine scattered free memory into a contiguous block.
 Slightly more complex to manage than fixed partitions.

Use Case

 Used in early versions of operating systems that aimed to support dynamic and more
efficient memory use.

Protection Schemes

Definition:
Protection schemes in memory management ensure that processes do not interfere with each
other’s memory space, providing security and stability within the system.

Objectives of Protection

 Prevent a process from accessing memory locations allocated to another process.


 Prevent accidental or malicious corruption of data.
 Control access to shared resources.

Common Protection Mechanisms

1. Base and Limit Registers


o Base Register holds the starting physical address of a process’s memory
partition.
o Limit Register specifies the size (length) of the partition.
o Every memory reference is checked to be within base and limit bounds.
o Ensures address translation and bounds checking.
2. Segmentation Protection
o Each segment has associated access rights (read, write, execute).
o Segment tables contain permission bits for protection.
o Provides fine-grained control based on logical program segments.
3. Paging Protection
o Page tables include protection bits to control access.
o Enables marking pages as read-only, read-write, or no access.
o Helps isolate processes and protect OS memory.
4. Access Control Lists (ACLs)
o Define which users/processes have what kind of access to memory regions.
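The base-and-limit check (mechanism 1) amounts to a single bounds test on every memory reference. A minimal sketch, with a hypothetical `check_access` helper modeling the trap as an exception:

```python
def check_access(offset, base, limit):
    """Allow the access only if the offset lies inside the partition;
    otherwise trap to the OS (modeled here as an exception)."""
    if 0 <= offset < limit:
        return base + offset       # relocated physical address
    raise MemoryError("trap: address out of bounds")

assert check_access(100, base=4000, limit=500) == 4100   # valid reference
```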

Hardware Support

 Modern CPUs provide memory management units (MMUs) to support protection schemes.
 MMUs handle address translation and access permission enforcement.

Importance

 Protects system stability and integrity.


 Essential for multi-user and multitasking environments.
 Enables secure sharing of memory.

Paging

Definition:
Paging is a memory management technique that eliminates external fragmentation by
dividing both physical memory and logical memory into fixed-size blocks.

Key Concepts

 Logical memory is divided into blocks called pages.


 Physical memory is divided into blocks called frames.
 Page size = Frame size.
 The page table maps each page to a frame in physical memory.
 Addresses generated by the CPU are logical addresses, consisting of:
o Page number (p): identifies the page.
o Page offset (d): specifies the exact byte within the page.
 The physical address is formed by combining the frame number (from the page
table) and the offset.

Advantages

 Eliminates external fragmentation.


 Simplifies memory allocation.
 Allows non-contiguous memory allocation.
 Easy to implement swapping and virtual memory.

Example Numerical Problem

Problem:
A system has a logical address space of 16 KB and a physical memory of 64 KB. The page
size is 1 KB. The page table for a process is given below (page number → frame number):

Page Number   Frame Number
0             5
1             9
2             1
3             7
4             3

Find the physical address corresponding to the logical address 2200 (in decimal).

Solution

1. Calculate the number of bits for page number and offset:

 Page size = 1 KB = 1024 bytes → Offset size = 10 bits (since 2^10 = 1024).
 Logical address space = 16 KB = 16 * 1024 = 16384 bytes.
 Number of pages = Logical address space / Page size = 16 KB / 1 KB = 16 pages.
 Number of bits for page number = 4 bits (since 2^4 = 16).

2. Convert the logical address to binary or calculate page number and offset
directly:

 Logical address = 2200 (decimal).


 Page number = Logical address / Page size = 2200 / 1024 = 2 (integer division).
 Offset = Logical address % Page size = 2200 % 1024 = 2200 - (2 × 1024) = 2200 -
2048 = 152.

3. Find the corresponding frame number from the page table:

 Page number 2 → Frame number 1.

4. Calculate the physical address:


 Physical address = (Frame number × Page size) + Offset
 = (1 × 1024) + 152
 = 1024 + 152 = 1176 (decimal).

Answer: The physical address corresponding to logical address 2200 is 1176.
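The worked example can be checked with a few lines of code. The page table is the one given in the problem; the `translate` helper is illustrative:

```python
PAGE_SIZE = 1024
page_table = {0: 5, 1: 9, 2: 1, 3: 7, 4: 3}   # page -> frame

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)  # split into (p, d)
    frame = page_table[page]
    return frame * PAGE_SIZE + offset          # combine frame and offset

assert translate(2200) == 1176   # page 2, offset 152 -> frame 1
```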

Segmentation
Definition:
Segmentation is a memory management scheme that divides a process’s memory into logical
segments of varying lengths, each representing a different logical unit like code, data, stack,
etc.

Key Concepts

 Each segment has:


o A segment number.
o A segment base address (starting physical address).
o A segment limit (length of the segment).
 A logical address in segmentation is a tuple:

(segment number, offset)

 The segment table holds the base and limit for each segment.
 To get the physical address:
o Check if offset < segment limit (for protection).
o Physical address = base address of the segment + offset.

Advantages

 Reflects the logical structure of programs.


 Supports sharing and protection at segment level.
 More flexible than fixed partitioning.
 Facilitates modular programming.

Disadvantages

 External fragmentation can occur.


 Segment sizes vary, so memory allocation is more complex.
 Requires hardware support (segment table and checking).

Diagram
Logical Address = (Segment Number, Offset)

Segment Table
+---------+------+-------+
| Segment | Base | Limit |
+---------+------+-------+
|    0    | 1000 |  200  |
|    1    | 3000 |  500  |
|    2    | 7000 | 1000  |
+---------+------+-------+

Physical Memory

[Addresses]

1000 ----------------------------------------- 1199 (Segment 0: 200 bytes)
3000 ----------------------------------------- 3499 (Segment 1: 500 bytes)
7000 ----------------------------------------- 7999 (Segment 2: 1000 bytes)

Numerical Example

Problem:
Given the segment table below, find the physical address for the logical address (Segment =
1, Offset = 400).

Segment   Base Address   Limit (Length)
0         1000           200
1         3000           500
2         7000           1000

Solution

1. Check offset against limit:

 Offset = 400
 Limit for segment 1 = 500
 Since 400 < 500, address is valid.

2. Calculate physical address:

Physical Address=Base+Offset=3000+400=3400

Answer: The physical address corresponding to logical address (1, 400) is 3400.
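The same translation, including the limit check, in code (the segment table is the one given above; `translate` is an illustrative helper):

```python
segment_table = {0: (1000, 200), 1: (3000, 500), 2: (7000, 1000)}  # seg -> (base, limit)

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:                     # protection check
        raise MemoryError("trap: offset out of segment bounds")
    return base + offset

assert translate(1, 400) == 3400
```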

Paged Segmentation

Definition:
Paged segmentation is a memory management technique that combines both segmentation
and paging to benefit from the logical structuring of programs and efficient memory use.

Key Concepts

 Memory is divided into segments, and each segment is further divided into fixed-size
pages.
 A logical address is represented as a triple:

(Segment number, Page number, Offset)

 Each segment has its own page table.


 The segment table maps segment numbers to the base address of the page table for
that segment.
 The page table maps page numbers to frame numbers in physical memory.
 The physical address is formed by:
1. Using the segment table to find the page table.
2. Using the page table to find the frame number for the page.
3. Combining the frame number with the offset to get the physical address.

Advantages

 Supports the logical organization of segmentation.


 Reduces external fragmentation by paging segments.
 Provides flexibility and protection at both segment and page levels.
 Efficient memory utilization due to paging.

Disadvantages

 More complex hardware and data structures (two-level address translation).


 Slightly higher overhead due to two tables (segment and page tables).

Address Translation Process

1. The logical address is split into segment number (s), page number (p), and offset (d).
2. The segment table is accessed using s to get the base address of the page table.
3. The page table is accessed using p to find the frame number.
4. The physical address = (frame number × page size) + offset.

Numerical Example: Paged Segmentation

Problem:
Consider a system with the following parameters:

 Page size = 1 KB (1024 bytes)


 Logical address format: (Segment number, Page number, Offset)
 Segment table:

Segment Number   Base Address of Page Table
0                2000
1                3000

 Page tables:

Segment 0:
Page Number   Frame Number
0             5
1             7

Segment 1:
Page Number   Frame Number
0             9
1             3

Find the physical address for logical address:

(Segment=1,Page=0,Offset=500)

Solution:

1. Identify the base address of the page table for segment 1:
   From the segment table, base address = 3000.
2. Find the frame number for page 0 in segment 1:
   From segment 1's page table, page 0 → frame number 9.
3. Calculate the physical address:
   Physical Address = (Frame Number × Page Size) + Offset
   = (9 × 1024) + 500 = 9216 + 500 = 9716

Answer:
The physical address corresponding to logical address (1, 0, 500) is 9716.
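The two-level lookup can be checked in code (the tables are the ones given above; `translate` is an illustrative helper):

```python
PAGE_SIZE = 1024
# segment -> its own page table (page -> frame), per the example
page_tables = {0: {0: 5, 1: 7}, 1: {0: 9, 1: 3}}

def translate(segment, page, offset):
    frame = page_tables[segment][page]   # segment table, then page table
    return frame * PAGE_SIZE + offset

assert translate(1, 0, 500) == 9716
```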

Virtual Memory Concepts

Definition:
Virtual memory is a memory management technique that allows a computer to compensate
for physical memory shortages by temporarily transferring data from RAM to disk storage. It
gives the illusion of a large, contiguous memory space to processes.

Key Ideas

 Logical memory (virtual address space) is larger than physical memory.


 Processes use virtual addresses, which are mapped to physical addresses by the OS
and hardware.
 Only parts of a process’s memory are loaded into RAM at any time; the rest can
reside on disk.
 Enables multiprogramming with more processes than would fit in physical
memory.

Benefits

 Enables running large programs with limited physical memory.


 Increases CPU utilization by allowing more processes to be loaded.
 Supports demand paging and swapping.
 Simplifies programming by abstracting physical memory details.

Key Components

 Page Table: Maps virtual pages to physical frames.


 Swap Space: Disk area reserved for pages not in physical memory.
 Translation Lookaside Buffer (TLB): Cache to speed up virtual-to-physical address
translation.
 Page Fault: Occurs when a referenced page is not in physical memory, triggering a
page load from disk.

Working

 When a process accesses a virtual address:


o If the page is in physical memory → access continues.
o If the page is not present → page fault occurs.
 OS loads the required page from disk to memory, possibly swapping out another page
if memory is full.

Demand Paging
Definition:
Demand paging is a virtual memory management technique where pages are loaded into
physical memory only when they are needed, i.e., when a page fault occurs.

How Demand Paging Works

1. Initial Load:
When a process starts, none or only some pages are loaded into physical memory.
2. Page Fault:
When the CPU accesses a page not in physical memory, a page fault occurs.
3. Handling Page Fault:
o The OS locates the required page on the disk (swap space).
o If physical memory is full, it selects a page to evict (swap out).
o Loads the requested page into the freed frame.
o Updates the page table to mark the page as present.
o Resumes the interrupted instruction.
4. Resuming Execution:
The instruction causing the page fault is restarted, now with the required page in
memory.

Advantages of Demand Paging

 Reduces I/O overhead: Only necessary pages are loaded.


 Allows running programs larger than physical memory.
 Enables faster process startup since not all pages are loaded upfront.
 Supports efficient memory use by loading pages on-demand.

Disadvantages

 Increased page fault overhead if pages are not effectively predicted.


 Requires sophisticated page replacement algorithms to decide which pages to evict.
 Possible thrashing if too many page faults occur.

Comparison with Prepaging

 Demand paging loads pages only when requested.


 Prepaging attempts to load pages in advance based on prediction to reduce faults.

Performance of Demand Paging

Overview:
Demand paging improves memory utilization by loading pages only when needed. However,
its performance depends heavily on how often page faults occur and how efficiently the
system handles them.

Key Metrics

 Effective Access Time (EAT):


The average time to access memory considering page faults.

Calculating Effective Access Time

EAT=(1−p)×Memory Access Time+p×Page Fault Time

Where:

 p = Probability of a page fault (page fault rate).


 Memory Access Time = Time to access memory when page is in RAM.
 Page Fault Time = Time to handle a page fault (includes disk I/O, updating tables, and
restarting the process).

Factors Affecting Performance

1. Page Fault Rate (p):


o Lower page fault rate → better performance.
o High page fault rate causes severe slowdown.
2. Page Fault Service Time:
o Includes disk read/write time, OS overhead.
o Disk access is orders of magnitude slower than memory access.
3. Locality of Reference:
o Processes tend to access a limited set of pages (working set).
o Better locality reduces page faults.
4. Page Replacement Algorithm Efficiency:
o Good algorithms minimize page faults.

Example

 Memory access time = 100 nanoseconds.


 Page fault service time = 8 milliseconds = 8,000,000 nanoseconds.
 Page fault rate p = 0.0001.

Calculate Effective Access Time:

EAT=(1−0.0001)×100+0.0001×8,000,000=99.99+800=899.99 nanoseconds

Even a tiny page fault rate drastically increases average access time.
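The same calculation in code (times in nanoseconds; the helper name is illustrative):

```python
def effective_access_time(p, mem_ns, fault_ns):
    """EAT = (1 - p) * memory access time + p * page fault service time."""
    return (1 - p) * mem_ns + p * fault_ns

eat = effective_access_time(0.0001, 100, 8_000_000)
assert abs(eat - 899.99) < 1e-6   # ~9x slower than a pure RAM access
```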

Conclusion

 Demand paging significantly saves memory but can degrade performance if page
faults are frequent.
 Optimizing page fault rate and service time is crucial for system efficiency.

Page Replacement Algorithms


Page Replacement Algorithms are used in operating systems to manage memory pages in
virtual memory systems. When a page needed by a process is not in memory (page fault)
and the memory is full, the OS must replace an existing page with the required one. The
strategy used to select the page to replace is called the page replacement algorithm.

Goals of Page Replacement Algorithms

 Minimize page faults


 Maximize CPU utilization
 Ensure efficient use of memory

Common Page Replacement Algorithms

1. FIFO (First-In-First-Out)

 Idea: Replace the oldest page (the one that entered memory first).
 Implementation: Use a queue to track the order of pages.
 Advantages: Simple to implement.
 Disadvantages: May replace frequently used pages (bad performance).

Problem 1

Given:
 Page reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3
 Number of page frames: 3

Solution:

FIFO always evicts the page that entered memory earliest. (At step 7, for example, the oldest resident page is 1, loaded at step 3.)

Step  Page  Frame Content  Page Fault?
1     7     7 _ _          Yes
2     0     7 0 _          Yes
3     1     7 0 1          Yes
4     2     0 1 2          Yes (7 replaced)
5     0     0 1 2          No
6     3     1 2 3          Yes (0 replaced)
7     0     2 3 0          Yes (1 replaced)
8     4     3 0 4          Yes (2 replaced)
9     2     0 4 2          Yes (3 replaced)
10    3     4 2 3          Yes (0 replaced)

Total Page Faults: 9

Problem 2

Given:

 Page reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5


 Number of page frames: 4

Solution:

Step  Page  Frame Content  Page Fault?
1     1     1 _ _ _        Yes
2     2     1 2 _ _        Yes
3     3     1 2 3 _        Yes
4     4     1 2 3 4        Yes
5     1     1 2 3 4        No
6     2     1 2 3 4        No
7     5     2 3 4 5        Yes (1 replaced)
8     1     3 4 5 1        Yes (2 replaced)
9     2     4 5 1 2        Yes (3 replaced)
10    3     5 1 2 3        Yes (4 replaced)
11    4     1 2 3 4        Yes (5 replaced)
12    5     2 3 4 5        Yes (1 replaced)

Total Page Faults: 10
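Both FIFO traces can be verified with a short simulation (the `fifo_faults` helper is illustrative; it keeps the resident pages as a queue):

```python
def fifo_faults(refs, nframes):
    frames, faults = [], 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)          # evict the oldest resident page
            frames.append(page)
    return faults

assert fifo_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], 3) == 9
assert fifo_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 4) == 10
```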

2. Optimal Page Replacement


 Idea: Replace the page that will not be used for the longest time in future.
 Advantages: Gives minimum number of page faults.
 Disadvantages: Not practical, as future knowledge is required.

Problem 1

Given:

 Page reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3


 Number of page frames: 3

Step-by-Step Solution:

We always replace the page that will not be used for the longest time in the future.

Step  Page  Frame Content  Page Fault?  Page Replaced  Reason
1     7     7 _ _          Yes          -              Empty frame
2     0     7 0 _          Yes          -              Empty frame
3     1     7 0 1          Yes          -              Empty frame
4     2     0 1 2          Yes          7              7 is not used again
5     0     0 1 2          No           -              Already in memory
6     3     0 2 3          Yes          1              1 is not used again; 0 and 2 are both needed later
7     0     0 2 3          No           -              Already in memory
8     4     2 3 4          Yes          0              0 is not used again; 2 and 3 are needed later
9     2     2 3 4          No           -              Already in memory
10    3     2 3 4          No           -              Already in memory

Total Page Faults: 6

(Optimal replacement yields only 6 faults on this string, compared to 9 with FIFO.)

Problem 2

Given:

 Page reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5


 Number of page frames: 4

Step-by-Step Solution:

Step  Page  Frame Content  Page Fault?  Page Replaced  Reason
1     1     1 _ _ _        Yes          -              Empty frame
2     2     1 2 _ _        Yes          -              Empty frame
3     3     1 2 3 _        Yes          -              Empty frame
4     4     1 2 3 4        Yes          -              Empty frame
5     1     1 2 3 4        No           -              Already in memory
6     2     1 2 3 4        No           -              Already in memory
7     5     1 2 3 5        Yes          4              4's next use (step 11) is farthest in the future
8     1     1 2 3 5        No           -              Already in memory
9     2     1 2 3 5        No           -              Already in memory
10    3     1 2 3 5        No           -              Already in memory
11    4     2 3 5 4        Yes          1              1 is not used again
12    5     2 3 5 4        No           -              Already in memory

Total Page Faults: 6
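A short simulation of Belady's optimal policy: on a fault, evict the resident page whose next use lies farthest in the future (pages never used again count as infinitely far). For the two reference strings in this section it gives 6 faults each; the `opt_faults` helper is illustrative:

```python
def opt_faults(refs, nframes):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            def next_use(p):
                future = refs[i + 1:]
                return future.index(p) if p in future else float("inf")
            frames.remove(max(frames, key=next_use))  # farthest future use
        frames.append(page)
    return faults

assert opt_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], 3) == 6
assert opt_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 4) == 6
```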

3. LRU (Least Recently Used)

 Idea: Replace the page that has not been used for the longest time.
 Implementation: Use a stack or timestamps.
 Advantages: Good approximation of optimal.
 Disadvantages: Costly to implement (tracking recent usage).

Problem 1

Given:

 Page reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3


 Number of page frames: 3

Step-by-Step (using LRU)

LRU replaces the page that has not been used for the longest time.

Step  Page  Frame Content  Page Fault?  Page Replaced  Explanation
1     7     7 _ _          Yes          -              Empty frame
2     0     7 0 _          Yes          -              Empty frame
3     1     7 0 1          Yes          -              Empty frame
4     2     0 1 2          Yes          7              7 least recently used
5     0     0 1 2          No           -              0 is in memory
6     3     0 2 3          Yes          1              1 is LRU (last used at step 3)
7     0     0 2 3          No           -              0 is in memory
8     4     0 3 4          Yes          2              2 is LRU (last used at step 4)
9     2     0 4 2          Yes          3              3 is LRU (last used at step 6)
10    3     4 2 3          Yes          0              0 is LRU (last used at step 7)

Total Page Faults: 8


Problem 2

Given:

 Page reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5


 Number of page frames: 4

Step-by-Step (using LRU)

Step  Page  Frame Content  Page Fault?  Page Replaced  Explanation
1     1     1 _ _ _        Yes          -              Empty frame
2     2     1 2 _ _        Yes          -              Empty frame
3     3     1 2 3 _        Yes          -              Empty frame
4     4     1 2 3 4        Yes          -              Empty frame
5     1     1 2 3 4        No           -              In memory
6     2     1 2 3 4        No           -              In memory
7     5     1 2 4 5        Yes          3              3 is LRU (last used at step 3)
8     1     1 2 4 5        No           -              In memory
9     2     1 2 4 5        No           -              In memory
10    3     1 2 5 3        Yes          4              4 is LRU (last used at step 4)
11    4     1 2 3 4        Yes          5              5 is LRU (last used at step 7)
12    5     2 3 4 5        Yes          1              1 is LRU (last used at step 8)

Total Page Faults: 8
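The two LRU traces can be checked with a simulation that records each page's last use (the `lru_faults` helper is illustrative):

```python
def lru_faults(refs, nframes):
    frames, last_used, faults = [], {}, 0
    for i, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                # evict the page with the oldest last use
                frames.remove(min(frames, key=lambda p: last_used[p]))
            frames.append(page)
        last_used[page] = i
    return faults

assert lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], 3) == 8
assert lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 4) == 8
```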

4. LFU (Least Frequently Used)

 Idea: Replace the page that is used least often.


 Replace the page with the least number of accesses (lowest frequency).
 If a tie occurs (multiple pages have the same lowest frequency), we may break it using
FIFO or LRU as a secondary criterion.

 Disadvantages: Pages that were heavily used in the past may stay in memory even if
not used recently.

Problem 1

Given:

 Page reference string: 2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2


 Number of page frames: 3

Step-by-Step Table

We'll track the frequency count of each page and the current pages in memory. A page's counter is cleared when it is evicted, and ties are broken FIFO (the page loaded earliest is evicted).

Step  Page  Frequency Count   Frame Content  Page Fault?  Page Replaced
1     2     2→1               2 _ _          Yes          -
2     3     2→1, 3→1          2 3 _          Yes          -
3     2     2→2, 3→1          2 3 _          No           -
4     1     2→2, 3→1, 1→1     2 3 1          Yes          -
5     5     2→2, 1→1, 5→1     2 1 5          Yes          3 (LFU tie: 3 loaded first)
6     2     2→3, 1→1, 5→1     2 1 5          No           -
7     4     2→3, 5→1, 4→1     2 5 4          Yes          1 (LFU tie: 1 loaded first)
8     5     2→3, 5→2, 4→1     2 5 4          No           -
9     3     2→3, 5→2, 3→1     2 5 3          Yes          4 (LFU: 4→1)
10    2     2→4, 5→2, 3→1     2 5 3          No           -
11    5     2→4, 5→3, 3→1     2 5 3          No           -
12    2     2→5, 5→3, 3→1     2 5 3          No           -

Total Page Faults: 6

Problem 2

Given:

 Page reference string: 1, 2, 3, 2, 4, 1, 5, 2, 1, 2, 3, 4


 Number of page frames: 3

Step-by-Step Table

(Same conventions as Problem 1: counters cleared on eviction, FIFO tie-break.)

Step  Page  Frequency Count   Frame Content  Page Fault?  Page Replaced
1     1     1→1               1 _ _          Yes          -
2     2     1→1, 2→1          1 2 _          Yes          -
3     3     1→1, 2→1, 3→1     1 2 3          Yes          -
4     2     1→1, 2→2, 3→1     1 2 3          No           -
5     4     2→2, 3→1, 4→1     2 3 4          Yes          1 (LFU tie: 1 loaded first)
6     1     2→2, 4→1, 1→1     2 4 1          Yes          3 (LFU tie: 3 loaded first)
7     5     2→2, 1→1, 5→1     2 1 5          Yes          4 (LFU tie: 4 loaded first)
8     2     2→3, 1→1, 5→1     2 1 5          No           -
9     1     2→3, 1→2, 5→1     2 1 5          No           -
10    2     2→4, 1→2, 5→1     2 1 5          No           -
11    3     2→4, 1→2, 3→1     2 1 3          Yes          5 (LFU: 5→1)
12    4     2→4, 1→2, 4→1     2 1 4          Yes          3 (LFU: 3→1)

Total Page Faults: 8
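LFU results depend on the tie-break rule and on whether counters survive eviction. Under one common convention — counters cleared when a page is evicted, ties broken FIFO — the two reference strings in this section give 6 and 8 faults; a sketch (`lfu_faults` is an illustrative helper):

```python
def lfu_faults(refs, nframes):
    # Convention assumed here: a page's counter is cleared on eviction,
    # and ties are broken FIFO (evict the page loaded earliest).
    frames, count, loaded, faults = [], {}, {}, 0
    for i, page in enumerate(refs):
        if page in frames:
            count[page] += 1
            continue
        faults += 1
        if len(frames) == nframes:
            victim = min(frames, key=lambda p: (count[p], loaded[p]))
            frames.remove(victim)
        frames.append(page)
        count[page], loaded[page] = 1, i
    return faults

assert lfu_faults([2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2], 3) == 6
assert lfu_faults([1, 2, 3, 2, 4, 1, 5, 2, 1, 2, 3, 4], 3) == 8
```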

5. MFU (Most Frequently Used)


 Idea: Replace the page that was used most often, assuming it has completed its job.

Problem 1

Given:

 Page reference string: 1, 2, 1, 3, 1, 2, 4, 5, 1, 2, 3, 4


 Number of page frames: 3

Step-by-Step Table (MFU)

(Counters are cleared when a page is evicted; ties are broken FIFO, i.e. the page loaded earliest is evicted.)

Step  Page  Frequency Count   Frame Content  Page Fault?  Page Replaced
1     1     1→1               1 _ _          Yes          -
2     2     1→1, 2→1          1 2 _          Yes          -
3     1     1→2, 2→1          1 2 _          No           -
4     3     1→2, 2→1, 3→1     1 2 3          Yes          -
5     1     1→3, 2→1, 3→1     1 2 3          No           -
6     2     1→3, 2→2, 3→1     1 2 3          No           -
7     4     2→2, 3→1, 4→1     2 3 4          Yes          1 (MFU: 1→3)
8     5     3→1, 4→1, 5→1     3 4 5          Yes          2 (MFU: 2→2)
9     1     4→1, 5→1, 1→1     4 5 1          Yes          3 (tie: all 1, 3 loaded first)
10    2     5→1, 1→1, 2→1     5 1 2          Yes          4 (tie: all 1, 4 loaded first)
11    3     1→1, 2→1, 3→1     1 2 3          Yes          5 (tie: all 1, 5 loaded first)
12    4     2→1, 3→1, 4→1     2 3 4          Yes          1 (tie: all 1, 1 loaded first)

Total Page Faults: 9

Problem 2

Given:

 Page reference string: 2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2


 Number of page frames: 3

Step-by-Step Table (MFU)

(Same conventions as Problem 1: counters cleared on eviction, FIFO tie-break.)

Step  Page  Frequency Count   Frame Content  Page Fault?  Page Replaced
1     2     2→1               2 _ _          Yes          -
2     3     2→1, 3→1          2 3 _          Yes          -
3     2     2→2, 3→1          2 3 _          No           -
4     1     2→2, 3→1, 1→1     2 3 1          Yes          -
5     5     3→1, 1→1, 5→1     3 1 5          Yes          2 (MFU: 2→2)
6     2     1→1, 5→1, 2→1     1 5 2          Yes          3 (tie: all 1, 3 loaded first)
7     4     5→1, 2→1, 4→1     5 2 4          Yes          1 (tie: all 1, 1 loaded first)
8     5     5→2, 2→1, 4→1     5 2 4          No           -
9     3     2→1, 4→1, 3→1     2 4 3          Yes          5 (MFU: 5→2)
10    2     2→2, 4→1, 3→1     2 4 3          No           -
11    5     4→1, 3→1, 5→1     4 3 5          Yes          2 (MFU: 2→2)
12    2     3→1, 5→1, 2→1     3 5 2          Yes          4 (tie: all 1, 4 loaded first)

Total Page Faults: 9
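MFU is the mirror image of LFU: evict the highest counter instead of the lowest. Under the same assumed conventions (counters cleared on eviction, FIFO tie-break), both reference strings in this section give 9 faults; `mfu_faults` is an illustrative helper:

```python
def mfu_faults(refs, nframes):
    # Same bookkeeping as LFU, but evict the *highest* count.
    # Assumed conventions: counters cleared on eviction, FIFO tie-break.
    frames, count, loaded, faults = [], {}, {}, 0
    for i, page in enumerate(refs):
        if page in frames:
            count[page] += 1
            continue
        faults += 1
        if len(frames) == nframes:
            # highest count wins; among ties, earliest-loaded page wins
            victim = max(frames, key=lambda p: (count[p], -loaded[p]))
            frames.remove(victim)
        frames.append(page)
        count[page], loaded[page] = 1, i
    return faults

assert mfu_faults([1, 2, 1, 3, 1, 2, 4, 5, 1, 2, 3, 4], 3) == 9
assert mfu_faults([2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2], 3) == 9
```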

6. Clock (Second-Chance) Algorithm

 Idea: Circular queue where each page has a reference bit:


o If reference bit = 1 → give a second chance (set bit to 0 and move on)
o If reference bit = 0 → replace the page
 Advantages: Efficient approximation of LRU
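The clock hand sweep can be sketched as follows. One common convention (assumed here) is to set the reference bit both when a page is loaded and on every hit; the `clock_faults` helper is illustrative:

```python
def clock_faults(refs, nframes):
    frames = [None] * nframes   # each slot holds (page, reference_bit)
    table = {}                  # page -> slot index, for O(1) hit checks
    hand, faults = 0, 0
    for page in refs:
        if page in table:
            frames[table[page]] = (page, 1)   # hit: set the reference bit
            continue
        faults += 1
        # sweep: pages with bit 1 get a second chance (bit cleared)
        while frames[hand] is not None and frames[hand][1] == 1:
            frames[hand] = (frames[hand][0], 0)
            hand = (hand + 1) % nframes
        if frames[hand] is not None:
            del table[frames[hand][0]]        # evict the bit-0 page
        frames[hand] = (page, 1)
        table[page] = hand
        hand = (hand + 1) % nframes
    return faults

# On the FIFO/LRU example string this convention gives 7 faults,
# between LRU (8) and Optimal (6) for the same string and 3 frames.
assert clock_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], 3) == 7
```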

Page Fault Rate and Performance

 Page fault rate depends on:


o Number of page frames
o The algorithm used
o The program’s access pattern

Belady’s Anomaly

 In some algorithms like FIFO, increasing the number of frames can lead to more
page faults.
 Does not occur in Optimal or LRU.

Comparison Table
Algorithm Complexity Page Faults Belady’s Anomaly Practicality
FIFO Low High Yes High
Optimal High Lowest No Theoretical
LRU Medium Low No Moderate
LFU High Variable No Low
Clock Medium Low No High

Thrashing in Operating Systems

Definition:

Thrashing occurs when a process spends more time swapping pages in and out of
memory than executing actual instructions.

This results in:

 High page fault rate


 Excessive disk I/O
 Drastic drop in CPU utilization
 Very poor system performance

What causes Thrashing?

1. High degree of multiprogramming


Too many processes in memory can lead to insufficient frames per process.
2. Lack of locality of reference
A process jumps around its pages too often without reusing them.
3. Poor page replacement strategy
Frequently used pages are replaced, causing constant page faults.

Example Scenario (Conceptual)

 Suppose a process needs 10 pages to function efficiently.


 The OS allocates only 4 frames.
 Every time the process accesses a page, it causes a page fault because that page isn’t
in memory.
 Page replacement happens repeatedly → each replaced page is needed again shortly
→ more faults → cycle continues.

This loop is thrashing.

How to Detect Thrashing?

 Sudden drop in CPU utilization


 Increase in page fault rate
 Increase in disk activity
 Process execution slows down significantly

🔧 How to Handle or Prevent Thrashing

Strategy                      Description
Working Set Model             Allocate enough frames to cover the process's working set (frequently used pages).
Page Fault Frequency (PFF)    Monitor the fault rate; if too high, reduce the degree of multiprogramming.
Reduce Multiprogramming       Suspend or swap out processes to free frames.
Use Local Page Replacement    Don't let a process take frames from others (prevents one process from harming another).
Increase RAM                  If possible, physically increase memory.

Cache Memory Organization – Explained

Cache memory is a small, fast memory located close to the CPU that stores copies of frequently
accessed data from main memory to speed up processing.

Why Use Cache Memory?

 Accessing main memory (RAM) is much slower than the speed at which the CPU can process data.


 Cache acts as a buffer between the CPU and RAM.
 Increases execution speed by reducing memory access time.

Types of Cache Memory Organization

Cache memory is organized in the following ways based on how blocks from main memory are
mapped to cache blocks:

1. Direct Mapping

Each block of main memory maps to exactly one cache line.

🧩 How it Works:

 If two blocks map to the same line, the newer one replaces the older.
 Uses:
Cache line number = (Main memory block number) MOD (Number of cache
lines)

Pros:

 Simple and fast


Cons:

 High conflict misses (different blocks map to the same cache line)
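The direct-mapping formula above is a single modulo operation; a tiny sketch (the cache size and helper name are hypothetical):

```python
CACHE_LINES = 8

def cache_line(block):
    # Direct mapping: each main-memory block maps to exactly one line
    return block % CACHE_LINES

assert cache_line(3) == 3
assert cache_line(11) == 3   # blocks 3 and 11 conflict on the same line
```

This conflict (blocks 3 and 11 competing for line 3) is exactly the source of the conflict misses mentioned above.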

2. Fully Associative Mapping

A block from main memory can be placed anywhere in the cache.

How it Works:

 Cache is searched fully to find a block.


 Uses replacement policies like LRU, FIFO, Random, etc.

Pros:

 No conflict misses
 Maximum flexibility

Cons:

 Expensive and slow (requires full search or associative hardware)

3. Set-Associative Mapping

A compromise between direct and fully associative mapping.

🔧 How it Works:

 Cache is divided into sets, each set has a few lines (e.g., 2, 4, 8).
 A block maps to a set, and within the set it can be placed anywhere.
 For example, in 2-way set-associative, each set holds 2 blocks.

Pros:

 Fewer conflict misses than direct mapping


 Less costly than full associativity

Cons:

 Slightly complex logic

Comparison Table

Feature         Direct Mapping   Fully Associative   Set-Associative
Flexibility     Low              High                Medium
Search Time     Fast             Slow                Moderate
Hardware Cost   Low              High                Moderate
Miss Rate       High             Low                 Medium

Cache Terms You Should Know


Term Meaning

Hit Data found in cache

Miss Data not in cache → fetched from RAM

Hit Ratio % of memory accesses found in cache

Miss Penalty Time taken to fetch from RAM after a miss

Cache Replacement Policies

When the cache is full, replacement is needed. Common strategies:

 LRU (Least Recently Used)


 FIFO (First In First Out)
 Random Replacement

Locality of Reference in Operating Systems

Definition:

Locality of reference refers to the tendency of a program to access the same set of memory
locations repeatedly over a short period of time.

This concept is crucial for the effectiveness of:

 Cache memory
 Virtual memory
 Page replacement algorithms

🧩 Types of Locality

Type                     Description                                                         Example
1. Temporal Locality     A referenced location is likely to be referenced again soon.        Reusing a variable in a loop.
2. Spatial Locality      Locations near a referenced one are likely to be referenced soon.   Accessing array elements sequentially.
3. Sequential Locality   A special case of spatial locality: memory is accessed in order.    Fetching the next instruction in sequence.

Why Locality Matters


Modern memory systems (like cache and virtual memory) exploit locality to improve
performance:

 Temporal locality is used by cache to keep recently used data.


 Spatial locality helps in prefetching adjacent blocks/pages.
 Virtual memory loads entire pages assuming nearby data will be used soon.

Examples

Example of Temporal Locality:

for (int i = 0; i < 100; i++) {
    sum += array[i];   // `sum` is used repeatedly
}

Example of Spatial Locality:

for (int i = 0; i < 100; i++) {
    printf("%d", array[i]);   // accessing `array[i]`, `array[i+1]`, ...
}

Example of Sequential Locality:

Instructions in code are typically executed one after another:

int a = 5;
int b = a + 2;
int c = b * 3;

How OS and Hardware Use Locality

Feature                              Uses Locality?   How?
Cache Memory                         ✔️               Keeps recently/frequently used data
Paging                               ✔️               Loads an entire page to exploit spatial locality
TLB (Translation Lookaside Buffer)   ✔️               Stores recent address translations
Prefetching                          ✔️               Loads expected future memory accesses

Impact on Performance

 Higher locality ⇒ More cache hits, fewer page faults


 Programs with poor locality (e.g., random access) perform worse on modern systems.

Summary
Key Point Meaning
Temporal Locality Reuse of same data/instruction soon
Spatial Locality Use of data near recently accessed memory
Sequential Locality Access of data in a predictable sequence
Why it matters Basis for efficient memory hierarchy
