OS Unit 4
Memory Management
Definition:
Memory management is a core function of the operating system that handles the allocation and
deallocation of memory space to processes during execution.
Types of Memory
Primary Memory (RAM): Fast, volatile memory where programs are loaded for
execution.
Secondary Memory (Disk): Non-volatile storage used for virtual memory.
Cache Memory: Small, fast memory between CPU and RAM to reduce access time.
Bare Machine
Definition:
A bare machine refers to a computer system without any operating system or software
interface. It represents the lowest-level interaction with computer hardware.
Characteristics
No operating system; only hardware and machine language are present.
Programs must be written in machine code or low-level assembly.
No abstraction or user-friendly environment.
Direct control over CPU, memory, and I/O devices is required.
Programmer is responsible for memory management, I/O handling, and job
sequencing.
Limitations
Historical Relevance
Resident Monitor
Definition:
A resident monitor is an early form of operating system that resides permanently in memory
and controls the execution of programs. It automates the job sequencing process and manages
system resources.
Key Functions
Job Control: Automatically loads and executes jobs one after another.
Memory Management: Divides memory into two parts:
o Monitor area: Permanently resides in memory.
o User area: Where user programs are loaded and executed.
I/O Management: Handles basic input and output operations.
Program Loading: Loads user programs into memory from secondary storage.
Structure
Control Card Interpreter: Reads and interprets control commands (job start, end,
etc.).
Loader: Loads executable programs into memory.
Device Drivers: Basic routines for interacting with hardware.
Error Handler: Detects and handles basic runtime errors.
Advantages
Limitations
Multiprogramming with Fixed Partitions
Definition:
This is a memory management technique where the main memory is divided into a fixed
number of partitions at system startup, and each partition can hold exactly one process.
Key Features
Advantages
Simple to implement.
Allows multiprogramming: more than one process can reside in memory.
Reduces CPU idle time compared to single-task systems.
Disadvantages
Use Case
Early batch processing systems used fixed partitioning before dynamic allocation
techniques evolved.
Multiprogramming with Variable Partitions
Definition:
In this memory management technique, the main memory is divided dynamically into
partitions of variable sizes based on the size of incoming processes. Unlike fixed partitions,
there are no pre-set divisions.
Key Features
Allocation Strategies
1. First Fit: Allocates the first block of memory that is large enough.
2. Best Fit: Allocates the smallest available block that fits the process.
3. Worst Fit: Allocates the largest block to leave a large leftover for future use.
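These strategies differ only in which free block they pick. A minimal sketch in C, assuming free memory is tracked as a simple array of block sizes (the layout and names are illustrative, not from any specific OS):

#include <stdio.h>

/* Returns the index of the chosen free block, or -1 if none fits.
   strategy: 0 = first fit, 1 = best fit, 2 = worst fit. */
int choose_block(int free_sizes[], int n, int request, int strategy) {
    int chosen = -1;
    for (int i = 0; i < n; i++) {
        if (free_sizes[i] < request) continue;        /* block too small */
        if (strategy == 0) return i;                  /* first fit: stop at the first match */
        if (chosen == -1 ||
            (strategy == 1 && free_sizes[i] < free_sizes[chosen]) ||  /* best: smallest that fits */
            (strategy == 2 && free_sizes[i] > free_sizes[chosen]))    /* worst: largest available */
            chosen = i;
    }
    return chosen;
}

int main(void) {
    int blocks[] = {100, 500, 200, 300, 600};  /* free block sizes in KB */
    printf("first fit for 212 KB -> block %d\n", choose_block(blocks, 5, 212, 0)); /* 1 (500 KB) */
    printf("best fit for 212 KB  -> block %d\n", choose_block(blocks, 5, 212, 1)); /* 3 (300 KB) */
    printf("worst fit for 212 KB -> block %d\n", choose_block(blocks, 5, 212, 2)); /* 4 (600 KB) */
    return 0;
}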
Advantages
Disadvantages
Use Case
Used in early versions of operating systems that aimed to support dynamic and more
efficient memory use.
Protection Schemes
Definition:
Protection schemes in memory management ensure that processes do not interfere with each
other’s memory space, providing security and stability within the system.
Objectives of Protection
Hardware Support
Importance
Paging
Definition:
Paging is a memory management technique that eliminates external fragmentation by
dividing both physical memory and logical memory into fixed-size blocks.
Key Concepts
Advantages
Problem:
A system has a logical address space of 16 KB and a physical memory of 64 KB. The page
size is 1 KB. The page table for a process is given below (page number → frame number):
Find the physical address corresponding to the logical address 2200 (in decimal).
Solution
1. Determine the address split:
Page size = 1 KB = 1024 bytes → offset = 10 bits (since 2^10 = 1024).
Logical address space = 16 KB = 16384 bytes → 16384 / 1024 = 16 pages → page number = 4 bits (since 2^4 = 16).
2. Compute the page number and offset directly:
Page number = 2200 div 1024 = 2
Offset = 2200 mod 1024 = 2200 − 2048 = 152
3. Form the physical address:
Physical address = (frame number for page 2, taken from the given page table) × 1024 + 152.
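The same translation is easy to express in C. A sketch, with the page table contents left as placeholders since the problem's table supplies them:

#define PAGE_SIZE 1024

unsigned frame_of[16];  /* page number -> frame number, filled from the given page table */

unsigned translate(unsigned logical) {
    unsigned page   = logical / PAGE_SIZE;   /* top 4 bits of the 14-bit address */
    unsigned offset = logical % PAGE_SIZE;   /* low 10 bits */
    return frame_of[page] * PAGE_SIZE + offset;
}
/* translate(2200): page = 2, offset = 152, result = frame_of[2] * 1024 + 152 */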
Segmentation
Definition:
Segmentation is a memory management scheme that divides a process’s memory into logical
segments of varying lengths, each representing a different logical unit like code, data, stack,
etc.
Key Concepts
The segment table holds the base and limit for each segment.
To get the physical address:
o Check if offset < segment limit (for protection).
o Physical address = base address of the segment + offset.
Advantages
Disadvantages
Diagram
Logical Address = (Segment Number, Offset)
Segment Table
+---------+------+-------+
| Segment | Base | Limit |
+---------+------+-------+
|    0    | 1000 |  200  |
|    1    | 3000 |  500  |
|    2    | 7000 | 1000  |
+---------+------+-------+
Physical Memory
[Addresses]
Numerical Example
Problem:
Given the segment table below, find the physical address for the logical address (Segment =
1, Offset = 400).
Solution
Base for segment 1 = 3000; Limit for segment 1 = 500
Offset = 400
Since 400 < 500, the address is valid.
Physical Address = Base + Offset = 3000 + 400 = 3400
Answer: The physical address corresponding to logical address (1, 400) is 3400.
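The limit check and base addition map directly to code. A minimal sketch in C using the example's segment table:

struct segment { unsigned base, limit; };
struct segment table[] = { {1000, 200}, {3000, 500}, {7000, 1000} };

/* Returns the physical address, or -1 to signal a protection trap. */
long seg_translate(unsigned s, unsigned offset) {
    if (offset >= table[s].limit) return -1;   /* offset outside the segment: trap */
    return (long)table[s].base + offset;       /* physical = base + offset */
}
/* seg_translate(1, 400) returns 3400, matching the worked example. */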
Paged Segmentation
Definition:
Paged segmentation is a memory management technique that combines both segmentation
and paging to benefit from the logical structuring of programs and efficient memory use.
Key Concepts
Memory is divided into segments, and each segment is further divided into fixed-size
pages.
A logical address is represented as a triple: (segment number s, page number p, offset d).
Advantages
Disadvantages
Address Translation
1. The logical address is split into segment number (s), page number (p), and offset (d).
2. The segment table is accessed using s to get the base address of the page table.
3. The page table is accessed using p to find the frame number.
4. The physical address = (frame number × page size) + offset.
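The four steps, sketched in C (the table contents are illustrative; a real system would also check segment and page-table limits):

#define PAGE_SIZE 1024

unsigned *seg_page_table[8];   /* segment table: s -> base address of its page table */

unsigned translate(unsigned s, unsigned p, unsigned d) {
    unsigned *page_table = seg_page_table[s];  /* step 2: segment table lookup */
    unsigned frame = page_table[p];            /* step 3: page number -> frame number */
    return frame * PAGE_SIZE + d;              /* step 4: frame * page size + offset */
}
/* If segment 1's page table mapped page 0 to frame 9, translate(1, 0, 500)
   would return 9 * 1024 + 500 = 9716, as in the problem below. */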
Problem:
Consider a system with the following parameters:
Page tables:
Find the physical address for the logical address (Segment = 1, Page = 0, Offset = 500).
Solution:
With a 1 KB page size, the given answer implies that the page table for segment 1 maps page 0 to frame 9:
Physical address = frame number × page size + offset = 9 × 1024 + 500 = 9716
Answer:
The physical address corresponding to logical address (1, 0, 500) is 9716.
Virtual Memory
Definition:
Virtual memory is a memory management technique that allows a computer to compensate
for physical memory shortages by temporarily transferring data from RAM to disk storage. It
gives the illusion of a large, contiguous memory space to processes.
Key Ideas
Benefits
Key Components
Working
Demand Paging
Definition:
Demand paging is a virtual memory management technique where pages are loaded into
physical memory only when they are needed, i.e., when a page fault occurs.
1. Initial Load:
When a process starts, few or none of its pages are loaded into physical memory.
2. Page Fault:
When the CPU accesses a page not in physical memory, a page fault occurs.
3. Handling Page Fault:
o The OS locates the required page on the disk (swap space).
o If physical memory is full, it selects a page to evict (swap out).
o Loads the requested page into the freed frame.
o Updates the page table to mark the page as present.
o Resumes the interrupted instruction.
4. Resuming Execution:
The instruction causing the page fault is restarted, now with the required page in
memory.
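In outline, the handler follows the steps above. A highly simplified sketch; the helper functions are placeholders for OS-specific routines (disk I/O, page-table updates), not a real API:

void handle_page_fault(int page) {
    int frame = find_free_frame();            /* placeholder helpers throughout */
    if (frame == -1) {                        /* physical memory is full */
        int victim = pick_victim();           /* chosen by the replacement algorithm */
        write_back_if_dirty(victim);          /* swap the victim out if modified */
        frame = frame_of_page(victim);
        mark_not_present(victim);             /* update the victim's page-table entry */
    }
    read_from_swap(page, frame);              /* load the needed page from disk */
    mark_present(page, frame);                /* page table: present = 1, frame recorded */
    /* on return, the faulting instruction is restarted */
}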
Disadvantages
Performance of Demand Paging
Overview:
Demand paging improves memory utilization by loading pages only when needed. However,
its performance depends heavily on how often page faults occur and how efficiently the
system handles them.
Key Metrics
Effective Access Time (EAT):
EAT = (1 − p) × memory access time + p × page fault service time
Where:
p = page fault rate
memory access time = time for a normal memory access
page fault service time = time to service a page fault (disk I/O, table updates, restart)
Example
With p = 0.0001, memory access time = 100 ns, and page fault service time = 8 ms (8,000,000 ns):
EAT = (1 − 0.0001) × 100 + 0.0001 × 8,000,000 = 99.99 + 800 = 899.99 nanoseconds
Even a tiny page fault rate drastically increases the average access time.
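The arithmetic, double-checked in a few lines of C:

#include <stdio.h>

int main(void) {
    double p        = 0.0001;     /* page fault rate */
    double mem_ns   = 100.0;      /* normal memory access time (ns) */
    double fault_ns = 8000000.0;  /* page fault service time: 8 ms in ns */
    double eat = (1 - p) * mem_ns + p * fault_ns;
    printf("EAT = %.2f ns\n", eat);   /* prints EAT = 899.99 ns */
    return 0;
}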
Conclusion
Demand paging significantly saves memory but can degrade performance if page
faults are frequent.
Optimizing page fault rate and service time is crucial for system efficiency.
Page Replacement Algorithms
1. FIFO (First-In-First-Out)
Idea: Replace the oldest page (the one that entered memory first).
Implementation: Use a queue to track the order of pages.
Advantages: Simple to implement.
Disadvantages: May replace frequently used pages (bad performance).
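A minimal FIFO simulator in C, using the reference string from Problem 1 below; the rotating index reproduces queue order because frames change only on a fault:

#include <stdio.h>

int main(void) {
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3};
    int frames[3] = {-1, -1, -1};
    int next = 0, faults = 0;              /* next = index of the oldest page */

    for (int i = 0; i < 10; i++) {
        int hit = 0;
        for (int j = 0; j < 3; j++)
            if (frames[j] == refs[i]) hit = 1;
        if (!hit) {
            frames[next] = refs[i];        /* evict the oldest page */
            next = (next + 1) % 3;
            faults++;
        }
    }
    printf("FIFO page faults: %d\n", faults);  /* prints 9 for this string */
    return 0;
}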
Problem 1
Given:
Page reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3
Number of page frames: 3
Solution:
7 → [7] fault; 0 → [7, 0] fault; 1 → [7, 0, 1] fault; 2 → replace 7 → [0, 1, 2] fault; 0 → hit;
3 → replace 0 → [1, 2, 3] fault; 0 → replace 1 → [2, 3, 0] fault; 4 → replace 2 → [3, 0, 4] fault;
2 → replace 3 → [0, 4, 2] fault; 3 → replace 0 → [4, 2, 3] fault.
Total page faults = 9 (one hit).
Problem 2
Given:
Solution:
2. Optimal Page Replacement
Problem 1
Given:
Step-by-Step Solution:
We always replace the page that will not be used for the longest time in future.
(Note: Although Optimal generally outperforms FIFO, the fault count here is still high because of
frequent replacements.)
Problem 2
Given:
Step-by-Step Solution:
3. LRU (Least Recently Used)
Idea: Replace the page that has not been used for the longest time.
Implementation: Use a stack or timestamps.
Advantages: Good approximation of optimal.
Disadvantages: Costly to implement (tracking recent usage).
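A timestamp-based LRU sketch in C, run here on the same string as FIFO Problem 1: each frame remembers when its page was last used, and the smallest timestamp identifies the victim:

#include <stdio.h>

int main(void) {
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3};
    int frames[3] = {-1, -1, -1};
    int last_used[3] = {0, 0, 0};
    int faults = 0;

    for (int t = 1; t <= 10; t++) {
        int page = refs[t - 1], slot = -1;
        for (int j = 0; j < 3; j++)
            if (frames[j] == page) slot = j;   /* hit: page already resident */
        if (slot == -1) {                      /* miss: evict least recently used */
            slot = 0;
            for (int j = 1; j < 3; j++)
                if (last_used[j] < last_used[slot]) slot = j;
            frames[slot] = page;
            faults++;
        }
        last_used[slot] = t;                   /* record this use */
    }
    printf("LRU page faults: %d\n", faults);   /* prints 8 for this string */
    return 0;
}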
Problem 1
Given:
LRU replaces the page that has not been used for the longest time.
Given:
4. LFU (Least Frequently Used)
Idea: Replace the page with the smallest access count.
Disadvantages: Pages that were heavily used in the past may stay in memory even if
not used recently.
Problem 1
Given:
Page reference string: 2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2
Number of page frames: 3
Step-by-Step Table
We’ll track:
Frequency count of each page
Current pages in memory
Step | Page | Frequency Count | Frames | Page Fault? | Page Replaced
1  | 2 | 2→1             | 2, -, - | Yes | -
2  | 3 | 2→1, 3→1        | 2, 3, - | Yes | -
3  | 2 | 2→2, 3→1        | 2, 3, - | No  | -
4  | 1 | 2→2, 3→1, 1→1   | 2, 3, 1 | Yes | -
5  | 5 | 2→2, 5→1, 1→1   | 2, 5, 1 | Yes | 3 (LFU: 3→1)
6  | 2 | 2→3, 5→1, 1→1   | 2, 5, 1 | No  | -
7  | 4 | 2→3, 4→1, 1→1   | 2, 4, 1 | Yes | 5 (LFU: 5→1)
8  | 5 | 2→3, 5→1, 1→1   | 2, 5, 1 | Yes | 4 (LFU: 4→1)
9  | 3 | 2→3, 3→1, 1→1   | 2, 3, 1 | Yes | 5 (LFU: 5→1)
10 | 2 | 2→4, 3→1, 1→1   | 2, 3, 1 | No  | -
11 | 5 | 2→4, 5→1, 1→1   | 2, 5, 1 | Yes | 3 (LFU: 3→1)
12 | 2 | 2→5, 5→1, 1→1   | 2, 5, 1 | No  | -
Total page faults = 8.
Problem 2
Given:
Page reference string: 1, 2, 3, 2, 4, 1, 5, 2, 1, 2, 3, 4
Number of page frames: 3
Step-by-Step Table
Step | Page | Frequency Count | Frames | Page Fault? | Page Replaced
1  | 1 | 1→1                | 1, -, - | Yes | -
2  | 2 | 1→1, 2→1           | 1, 2, - | Yes | -
3  | 3 | 1→1, 2→1, 3→1      | 1, 2, 3 | Yes | -
4  | 2 | 1→1, 2→2, 3→1      | 1, 2, 3 | No  | -
5  | 4 | 2→2, 3→1, 4→1      | 2, 3, 4 | Yes | 1 (LFU: 1→1)
6  | 1 | 2→2, 4→1, 1→1      | 2, 4, 1 | Yes | 3 (LFU: 3→1)
7  | 5 | 2→2, 1→1, 5→1      | 2, 1, 5 | Yes | 4 (LFU: 4→1)
8  | 2 | 2→3, 1→1, 5→1      | 2, 1, 5 | No  | -
9  | 1 | 2→3, 1→2, 5→1      | 2, 1, 5 | No  | -
10 | 2 | 2→4, 1→2, 5→1      | 2, 1, 5 | No  | -
11 | 3 | 2→4, 1→2, 3→1      | 2, 1, 3 | Yes | 5 (LFU: 5→1)
12 | 4 | 2→4, 1→2, 4→1      | 2, 1, 4 | Yes | 3 (LFU: 3→1)
Total page faults = 8.
Problem 1
Given:
Step Page Frequency Count Frame Content Page Fault? Page Replaced
Problem 2
Given:
Belady’s Anomaly
In some algorithms like FIFO, increasing the number of frames can lead to more
page faults.
Does not occur in Optimal or LRU.
Comparison Table
Algorithm | Complexity | Page Faults | Belady's Anomaly | Practicality
FIFO      | Low        | High        | Yes              | High
Optimal   | High       | Lowest      | No               | Theoretical only
LRU       | Medium     | Low         | No               | Moderate
LFU       | High       | Variable    | No               | Low
Clock     | Medium     | Low         | No               | High
Thrashing
Definition:
Thrashing occurs when a process spends more time swapping pages in and out of
memory than executing actual instructions.
Strategy | Description
Working Set Model | Allocate enough frames to cover the process's working set (its frequently used pages).
Page Fault Frequency (PFF) | Monitor the fault rate; if it is too high, reduce the degree of multiprogramming.
Reduce Multiprogramming | Suspend or swap out processes to free frames.
Use Local Page Replacement | Don't let a process take frames from others (prevents one process from harming another).
Increase RAM | If possible, physically increase memory.
Cache Memory
Definition:
Cache memory is a small, fast memory located close to the CPU that stores copies of frequently
accessed data from main memory to speed up processing.
Cache memory is organized in the following ways based on how blocks from main memory are
mapped to cache blocks:
1. Direct Mapping
How it Works:
Each block of main memory maps to exactly one specific cache line.
If two blocks map to the same line, the newer one replaces the older.
Uses:
Cache line number = (Main memory block number) MOD (Number of cache lines)
Pros:
Simple and fast to implement (only one line to check per access)
Cons:
High conflict misses (different blocks map to the same cache line)
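The MOD rule in code, assuming (for illustration) 64-byte blocks and a 16-line cache:

#define BLOCK_SIZE 64
#define NUM_LINES  16

unsigned block_no(unsigned addr) { return addr / BLOCK_SIZE; }
unsigned line_no(unsigned addr)  { return block_no(addr) % NUM_LINES; }  /* the MOD rule */
unsigned tag_of(unsigned addr)   { return block_no(addr) / NUM_LINES; }  /* tells apart blocks sharing a line */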
2. Fully Associative Mapping
How it Works:
A main memory block can be placed in any cache line; a lookup compares all tags in parallel.
Pros:
No conflict misses
Maximum flexibility
Cons:
Expensive hardware (a comparator per line) and slower, costlier lookups
3. Set-Associative Mapping
How it Works:
Cache is divided into sets, each set has a few lines (e.g., 2, 4, 8).
A block maps to a set, and within the set it can be placed anywhere.
For example, in 2-way set-associative, each set holds 2 blocks.
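Set selection uses the same MOD idea, just over sets instead of lines. A sketch for the 2-way case, with illustrative sizes:

#define BLOCK_SIZE 64
#define NUM_SETS   8   /* e.g., 16 lines / 2 ways per set */

unsigned set_index(unsigned addr) {
    return (addr / BLOCK_SIZE) % NUM_SETS;   /* block number MOD number of sets */
}
/* A lookup then compares the tag against both ways of set set_index(addr). */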
Pros:
Cons:
Comparison Table
Feature | Direct Mapping | Fully Associative | Set-Associative
Block placement | Exactly one line | Any line | Any line within its set
Conflict misses | High | None | Low
Hardware cost | Low | High | Medium
Locality of Reference
Definition:
Locality of reference refers to the tendency of a program to access the same set of memory
locations repeatedly over a short period of time.
It is the basis of:
Cache memory
Virtual memory
Page replacement algorithms
Types of Locality
Temporal Locality: recently accessed data or instructions are likely to be accessed again soon.
Spatial Locality: locations near recently accessed memory are likely to be accessed next.
Sequential Locality: memory is accessed in a predictable, sequential order.
Examples
int a = 5;       /* 'a' is written here... */
int b = a + 2;   /* ...and read again immediately: temporal locality */
int c = b * 3;   /* a, b, c typically occupy adjacent stack slots: spatial locality */
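A loop shows both kinds of locality at once; the traversal below touches adjacent addresses (spatial/sequential) while reusing the same accumulator every iteration (temporal):

int arr[100] = {0};   /* array elements are contiguous in memory */
int sum = 0;
for (int i = 0; i < 100; i++)
    sum += arr[i];    /* arr[i], arr[i+1], ...: spatial + sequential locality;
                         sum is reused each iteration: temporal locality */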
Uses
Feature | Uses Locality? | How?
Cache Memory | ✔️ | Keeps recently/frequently used data close to the CPU
Paging | ✔️ | Loads an entire page to exploit spatial locality
TLB (Translation Lookaside Buffer) | ✔️ | Stores recent address translations
Prefetching | ✔️ | Loads expected future memory accesses in advance
Impact on Performance
Summary
Key Point | Meaning
Temporal Locality | Reuse of the same data/instruction soon
Spatial Locality | Use of data near recently accessed memory
Sequential Locality | Access of data in a predictable sequence
Why it matters | Basis for an efficient memory hierarchy