OS Unit 3
NotesNeo
Syllabus:
Memory Management: Basic concept, Logical and Physical address map, Memory
allocation: Contiguous Memory allocation – Fixed and variable partition – Internal and
External fragmentation and Compaction; Paging: Principle of operation – Page allocation
– Hardware support for paging, Protection and sharing, Disadvantages of paging.
Virtual Memory: Basics of Virtual Memory – Hardware and control structures – Locality of
reference, Page fault, Working Set, Dirty page/Dirty bit – Demand paging, Page
Replacement algorithms: First In First Out (FIFO), Optimal Page Replacement, and Least
Recently Used (LRU).
Physical Address:
● Actual location in main memory where data is stored.
● Directly corresponds to the hardware, i.e., the physical memory cells.
● Used by the memory controller to access physical memory.
Address Translation:
● Processes access memory using logical addresses, which are then translated by the
operating system into physical addresses.
● The memory management unit (MMU) performs the translation of logical addresses
to physical addresses at run time, as the process executes.
● It uses a relocation register (base register) to map logical addresses to physical
addresses.
● This translation is facilitated by page tables (paging) or segment tables
(segmentation), which store the mapping information.
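For illustration (the base and limit values below are made up, not from the notes), the relocation-register scheme can be sketched as follows:

# Sketch: translating a logical address with a relocation (base) register
# and a limit register. Values are illustrative.
BASE = 14000    # relocation register: where the process begins in physical memory
LIMIT = 3000    # limit register: size of the process's logical address space

def translate(logical_address: int) -> int:
    """Return the physical address, or raise an error (hardware would trap)."""
    if logical_address < 0 or logical_address >= LIMIT:
        raise MemoryError("addressing error: logical address outside process space")
    return BASE + logical_address

print(translate(346))   # 14000 + 346 = 14346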
Various contiguous memory allocation algorithms include:
1. First Fit :- Assigns the first available partition that fits the process.
2. Best Fit :- Allocates the smallest available partition that fits the process.
3. Worst Fit :- Allocates the largest available partition.
4. Next Fit :- Similar to First Fit, but starts searching from the last allocated partition.
In fixed partitioning:
● Main memory is divided into partitions of equal or different sizes.
● The operating system typically resides in the first partition, while the remaining
partitions are available to store user processes.
● Memory is assigned to processes in a contiguous manner, meaning that a process
must occupy a single, uninterrupted block of memory.
Disadvantages of Fixed Partitioning:
1. Internal Fragmentation:
● When a process occupies a partition that is larger than its size, the extra
memory within the partition remains unused, resulting in wastage. This
wasted memory is known as internal fragmentation.
● For example: If a 3 MB process is loaded into a 4 MB partition, then the
remaining 1 MB of memory within the partition remains unused.
2. Variable (Dynamic) Partitioning
● Variable partitioning is a memory management technique designed to overcome
the limitations and inefficiencies of fixed partitioning.
● Variable partitioning allocates memory dynamically: partitions are created at load
time and sized to match the processes being loaded into memory. This approach
reduces memory wastage but requires more complex management to handle
processes of varying sizes.
● It's like cutting a cake into slices of varying sizes based on how many people need
to be served.
● Examples: Variable partitioning is used in modern operating systems like Linux and
Windows, where memory requirements of processes vary dynamically.
In dynamic partitioning:
● Partition sizes are not predefined; instead, they are determined at the time of
process loading based on the size of the process.
● The first partition is typically reserved for the operating system, while the
remaining space is divided into partitions of varying sizes to accommodate
processes.
Disadvantages of Dynamic Partitioning:
1. External Fragmentation:
● Despite the absence of internal fragmentation, external fragmentation can
still occur. When processes are completed and memory is freed, the resulting
unused partitions may not be contiguous, leading to wasted space that
cannot be utilized by new processes.
● For example: If processes P1 (5 MB), P2 (2 MB), P3 (3 MB), and P4 (4 MB) are
loaded and then P1 and P3 complete, their memory becomes available (5 MB
and 3 MB blocks). However, a new process P5 requiring 8 MB cannot be
allocated since the free memory is not contiguous.
B. Fragmentation Issues
Fragmentation refers to the wastage of memory space caused by inefficient memory
allocation: the available free memory becomes divided into small, non-contiguous blocks,
making it difficult to allocate large contiguous blocks of memory to processes. This leads
to inefficient memory utilization and can lower the performance of the system.
There are two types of fragmentation:
1. Internal Fragmentation
Internal fragmentation happens when a process is allocated more memory than it
actually needs, resulting in wasted space within the allocated memory block. It's like
buying a large pizza but only eating a small portion, leaving the rest uneaten and wasted.
● Causes and Consequences: Internal fragmentation occurs due to fixed-sized
partitions or memory allocation strategies that allocate memory in fixed-sized
chunks. It reduces overall system efficiency by wasting memory that could have
been utilized by other processes.
● Removal: Techniques such as variable (dynamic) partitioning, which allocates only
as much memory as a process actually needs, help reduce internal fragmentation.
2. External Fragmentation
External fragmentation occurs when there is enough total free memory to satisfy a
request, but it is not contiguous: the free space is scattered in small chunks throughout
memory, making it difficult to allocate a contiguous block of memory to a process.
It's like having enough total space in your room, but it's divided into small, scattered
areas, making it hard to find a single spot large enough for a new item.
● Causes and Consequences: External fragmentation arises when processes are
loaded and removed from memory, leaving gaps of free memory between
allocated blocks. These gaps cannot be utilized by processes requesting contiguous
memory, leading to inefficient memory utilization.
● Removal: Compaction, which consolidates scattered free memory blocks into one
contiguous block, and non-contiguous allocation schemes such as paging help
reduce external fragmentation.
C. Compaction Techniques
Compaction is a technique used to reduce external fragmentation by rearranging all free
memory together in one contiguous block. It's like rearranging furniture in a room to
create more space.
Compaction can be done periodically or when necessary to ensure efficient memory
usage. However, compaction can be time-consuming and may require moving active
processes, impacting performance temporarily.
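As a rough sketch of what compaction does (the block layout and sizes below are made up for illustration), allocated blocks are slid toward one end of memory so the free space coalesces into a single contiguous hole:

# Sketch of compaction: slide allocated blocks to the low end of memory
# so that all free space becomes one contiguous hole.
memory = [("P1", 5), ("free", 3), ("P3", 4), ("free", 2), ("P4", 1)]  # (owner, size in MB)

def compact(blocks):
    allocated = [b for b in blocks if b[0] != "free"]
    free_total = sum(size for owner, size in blocks if owner == "free")
    return allocated + [("free", free_total)]

print(compact(memory))
# [('P1', 5), ('P3', 4), ('P4', 1), ('free', 5)]  -> one 5 MB hole instead of 3 MB + 2 MB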
● Cons: Compaction may introduce overhead and delay in memory allocation,
especially in systems with large memory capacities. Additionally, compaction may
not be feasible in real-time systems where strict timing constraints must be met.
1. First-Fit Allocation:
In First-Fit memory allocation, the operating system searches for the first available
memory block that is large enough to hold the incoming process. It doesn't search for the
best-suited block but allocates the process to the first partition found with
sufficient size.
Advantages:
● Simple and efficient search algorithm.
● Minimizes memory fragmentation.
● Fast allocation of memory.
Disadvantages:
● Poor performance in highly fragmented memory.
● May lead to poor memory utilization.
● May allocate larger blocks of memory than required.
2. Best-Fit Allocation:
Best-Fit allocation allocates memory by searching for the smallest free block of memory
that is large enough to accommodate the process. It aims to minimize memory wastage
by selecting the closest-fitting free partition.
Advantages:
● Efficient memory usage.
● Saves memory from being wasted.
● Improved memory utilization.
● Reduces memory fragmentation.
● Minimizes external fragmentation.
Disadvantages:
● Slow process due to extensive memory search.
● Increased computational overhead.
● May result in increased internal fragmentation.
● Can lead to slow memory allocation times.
3. Worst-Fit Allocation:
Worst-Fit allocation searches for the largest free block of memory to place the process. It
leaves behind the largest possible leftover hole, so other smaller processes may still be
able to use the remaining space.
Advantages:
● The large leftover hole is more likely to be big enough for another process.
Disadvantages:
● Slow process due to traversing the entire memory.
● Quickly breaks up large holes, so later large processes may not find a big enough block.
● Can result in slow memory allocation times.
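The three placement strategies described above can be compared with a small sketch (the hole sizes and the 12 MB request are made-up illustrative values):

# Sketch of first-fit, best-fit and worst-fit over a list of free holes (sizes in MB).
def first_fit(holes, request):
    for i, hole in enumerate(holes):
        if hole >= request:
            return i                     # first hole that fits
    return None

def best_fit(holes, request):
    candidates = [(hole, i) for i, hole in enumerate(holes) if hole >= request]
    return min(candidates)[1] if candidates else None   # smallest hole that fits

def worst_fit(holes, request):
    candidates = [(hole, i) for i, hole in enumerate(holes) if hole >= request]
    return max(candidates)[1] if candidates else None   # largest hole

holes = [10, 4, 20, 18, 7, 9, 12, 15]
for fit in (first_fit, best_fit, worst_fit):
    print(fit.__name__, "-> hole index", fit(holes, 12))
# first_fit -> 2 (20 MB), best_fit -> 6 (12 MB), worst_fit -> 2 (20 MB)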
III. Paging
● If a process P1 needs 2 MB of space, and there are two holes of 1 MB each in
the main memory but not contiguous, paging allows storing different pages
of P1 in those non-contiguous locations.
● This dynamic loading optimizes memory usage by loading only the
necessary pages into memory at any given time.
Example:
1. Consider a scenario with a main memory size of 16 KB and a frame size of 1 KB.
This results in 16 frames, each of 1 KB size.
2. Suppose there are four processes, P1, P2, P3, and P4, each requiring 4 KB of
memory. Each process is divided into pages of 1 KB, allowing one page to be stored
in one frame.
3. Initially, all frames are empty, and pages of processes are stored in contiguous
frames. Frames, pages, and their mapping are depicted below.
4. Later, processes P2 and P4 enter a waiting state, freeing up eight frames. Another
process, P5, requiring 8 KB of memory (eight pages), is waiting in the ready queue.
5. With eight non-contiguous empty frames available in memory, pages of process P5
can be loaded into these frames, replacing the pages of processes P2 and P4.
● The TLB (Translation Lookaside Buffer) is a small cache used to speed up address
translation by storing recently used page table entries.
● When a logical address is translated, the MMU first checks the TLB. If the entry is
found (a TLB hit), the physical address is obtained quickly.
● If the entry is not found (a TLB miss), the MMU accesses the page table in memory
to retrieve the frame number.
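A minimal sketch of this TLB-then-page-table lookup (the page size, TLB contents, and page-table mappings are made-up illustrative values):

# Sketch of logical-to-physical translation with a TLB in front of the page table.
PAGE_SIZE = 1024                       # 1 KB pages
page_table = {0: 5, 1: 9, 2: 7, 3: 2}  # page number -> frame number
tlb = {0: 5}                           # small cache of recent translations

def translate(logical_address: int) -> int:
    page, offset = divmod(logical_address, PAGE_SIZE)
    if page in tlb:                    # TLB hit: fast path
        frame = tlb[page]
    else:                              # TLB miss: walk the page table, then cache the entry
        frame = page_table[page]
        tlb[page] = frame
    return frame * PAGE_SIZE + offset

print(translate(2500))   # page 2, offset 452 -> frame 7 -> 7 * 1024 + 452 = 7620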
Advantages:
1. Eliminates External Fragmentation:
● Paging divides memory into fixed-size pages and frames, preventing gaps
between allocated memory blocks. This avoids external fragmentation
where free memory is scattered in non-contiguous blocks.
2. Efficient Memory Utilization:
● Processes can be loaded into any available frames, making it easier to use
all available memory without worrying about finding a large enough
contiguous block of memory.
3. Simplicity in Memory Management:
● Fixed-size pages simplify memory allocation and deallocation. The operating
system can easily keep track of free frames and allocate them to processes
as needed.
4. Support for Virtual Memory:
● Paging supports virtual memory, allowing processes to use more memory
than is physically available by swapping pages between physical memory
and secondary storage (e.g., a hard disk).
5. Protection and Isolation:
● Each process has its own page table, ensuring that one process cannot
accidentally access another process's memory. This provides better
protection and isolation between processes.
6. Flexible Address Space:
● Paging allows processes to have a non-contiguous address space, making it
easier to manage memory for processes with varying memory needs and
sizes.
7. Easier to Load Large Programs:
● Large programs can be loaded into memory even if there isn’t a single large
contiguous block of free memory. The program is divided into pages, and
each page can be loaded into any available frame.
Disadvantages:
1. Internal Fragmentation:
● Since pages and frames are of fixed size, there can be wasted space within
pages if a process does not use the entire page. This unused space within a
page is known as internal fragmentation.
2. Overhead of Maintaining Page Tables:
● Paging requires additional memory and computational overhead to
maintain page tables. Each process needs a page table to map its logical
pages to physical frames.
3. TLB Miss Penalty:
● Address translation involves looking up entries in the Translation Lookaside
Buffer (TLB). If the TLB does not contain the required page table entry (a
TLB miss), it must be fetched from the page table in memory, which can slow
down the process.
4. Complexity in Address Translation:
● Translating logical addresses to physical addresses requires additional steps
and hardware support (such as a Memory Management Unit, or MMU),
adding complexity to the system.
5. Thrashing:
● Thrashing occurs when the system spends excessive time swapping pages
between main memory and secondary storage due to high paging activity.
This severely degrades system performance.
6. Memory Overhead for Page Tables:
● Large processes require large page tables, which consume additional
memory. This can be particularly problematic in systems with limited
memory.
7. Increased Latency:
● Accessing memory through paging can introduce latency due to the multiple
steps involved in address translation, including potential TLB lookups and
page table accesses.
IV. Segmentation
Segmentation is a memory management technique in operating systems where memory
is divided into variable-sized parts called segments. Each segment can be allocated to a
process, and details about each segment are stored in a table called a segment table.
Advantages of Segmentation:
1. No internal fragmentation: Segments are variable-sized and allocated exactly the
memory they need, so no space is wasted inside a segment.
2. Larger average segment size: Segments can be larger than pages, optimizing
memory usage.
3. Less overhead: Segmentation has lower overhead compared to other techniques.
4. Easy relocation: Relocating segments is easier than relocating entire address
spaces.
5. Smaller segment tables: Segment tables are smaller than page tables in paging.
Disadvantages of Segmentation:
1. External fragmentation: Segmentation may lead to external fragmentation,
reducing memory efficiency.
2. Difficulty in contiguous memory allocation: Allocating contiguous memory to
variable-sized partitions can be challenging.
3. Costly memory management: The algorithms used in segmentation can be
resource-intensive.
Paging vs Segmentation:
● Proximity to system: Paging is closer to the operating system, whereas
segmentation is closer to the user.
V. Virtual Memory
● When data is needed from the secondary storage, it is swapped back into RAM,
replacing other data if necessary. This is transparent to the user and appears as if
the entire program is stored in RAM.
● Virtual memory management is handled by the operating system together with the
hardware memory management unit (MMU), which performs address translation
between the logical addresses used by the CPU and physical addresses in RAM;
pages not currently in RAM are brought in from secondary storage when referenced.
● Scaling: The page number is scaled (multiplied by the size of a page table entry)
and added to the page table base address to locate the page's entry in the table.
This step is known as scaling.
● Generation of Physical Address: The page table entry provides the frame number
corresponding to the page number. The physical address is generated, comprising
the frame number and offset.
● Accessing Main Memory: The frame number and offset are used to access the
actual memory location.
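● Worked example (made-up numbers): with a 1 KB page size, a page table base
address of 4000, and 4-byte page table entries, logical address 2500 gives page
number 2500 div 1024 = 2 and offset 452; scaling locates the entry at
4000 + 2 × 4 = 4008; if that entry holds frame number 7, the physical address is
7 × 1024 + 452 = 7620, which is then used to access main memory.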
Page Table:
In the context of virtual memory management, a Page Table is a critical data structure
used by the operating system to manage the mapping between logical addresses
generated by the CPU and physical addresses in the main memory.
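What a single page table entry typically holds can be sketched as a small structure (the field set below is a common textbook layout, not any specific hardware format):

# Sketch of a typical page table entry; field names are illustrative.
from dataclasses import dataclass

@dataclass
class PageTableEntry:
    frame_number: int         # which physical frame holds the page
    valid: bool = False       # is the page currently in main memory?
    dirty: bool = False       # has the page been modified since it was loaded?
    referenced: bool = False  # has the page been accessed recently?
    read_only: bool = False   # protection bit

page_table = [PageTableEntry(frame_number=7, valid=True, dirty=True)]
print(page_table[0])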
Advantages of Virtual Memory:
2. Increased memory capacity: Virtual memory enables computers to run programs
that require more memory than what is physically available.
3. Improved performance: By utilizing secondary storage, virtual memory reduces
the chances of running out of memory, thereby preventing system crashes and
improving overall performance.
4. Increased Degree of Multiprogramming: Allows more processes to be run
simultaneously by utilizing secondary storage as additional memory.
5. Efficient Use of Physical RAM: Optimizes the utilization of physical RAM by
swapping out less frequently used data.
6. Cost-Effective: Eliminates the need to purchase additional physical RAM, reducing
hardware costs.
Control Structures for Efficient Virtual Memory Management:
● Page Table Entries: Each entry in the page table contains information about the
mapping between a logical page number and a physical frame number, as well as
status bits for protection and other attributes.
C. Locality of Reference
Locality of reference is a key concept in virtual memory management: programs tend to
re-access recently used memory locations (temporal locality) and to access locations near
recently used ones (spatial locality).
E. Working Set
The working set is a measure of the set of pages that a process is actively using at any
given time.
Definition and Significance:
● The working set concept defines the set of pages that a process actively uses
during its execution. It represents the minimum amount of memory needed to
prevent excessive page faults and maintain good performance.
● The working set changes over time as the program's memory requirements evolve.
Operating systems use the working set to make decisions about page replacement
and memory allocation.
● It's like the items you need for a specific task, such as ingredients for cooking,
which may vary depending on the recipe you're preparing.
● Working Set Size: The number of pages that a process typically accesses during a
given time interval, such as a fixed time window.
● Application: The working set concept is used by the operating system to
dynamically adjust memory allocation to processes based on their working set
sizes, ensuring optimal performance.
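As a small sketch, the working set at time t with window size Δ is simply the set of distinct pages referenced in the last Δ references (the reference string below is the one used in the page replacement examples later in this unit; the window size is an assumed value):

# Sketch: working set = distinct pages touched in the last DELTA references.
references = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0]
DELTA = 5   # window size (illustrative)

for t in range(DELTA, len(references) + 1):
    window = references[t - DELTA:t]
    print(f"t={t:2d}  working set = {sorted(set(window))}")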
Dirty Page and Dirty Bit:
● Dirty Page: A dirty page is a page of memory that has been modified since it was
last read from disk or written to disk.
● Dirty Bit: The dirty bit is a flag associated with each page which indicates whether
the page has been modified.
● When a dirty page is evicted from memory, it must be written back to disk to
ensure data integrity. The dirty bit helps the operating system identify which pages
need to be written back to disk.
● This approach saves memory space and reduces the time needed to load programs
into memory initially. When a program references a page that is not currently in
memory, a page fault occurs, triggering the operating system to load the required
page from secondary storage (such as a hard drive) into memory.
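The behaviour described above can be sketched roughly as follows (a deliberately simplified model: it ignores frame replacement, dirty-page write-back, and restarting the faulting instruction):

# Simplified sketch of demand paging: access a page, fault if it is not resident,
# "load" it from the backing store, then let the access succeed.
resident = {}                                        # page number -> frame number
backing_store = {0: "code", 1: "data", 2: "stack"}   # pages on disk (illustrative)
next_free_frame = 0

def access(page: int) -> str:
    global next_free_frame
    if page not in resident:                         # page fault
        print(f"page fault on page {page}: loading from backing store")
        resident[page] = next_free_frame             # assume a free frame is available
        next_free_frame += 1
    return backing_store[page]                       # the access now succeeds

access(1)   # faults, then succeeds
access(1)   # already resident: no fault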
Advantages:
1. Efficient Memory Usage: Demand paging allows for more efficient use of memory
by loading only the necessary pages into memory, thereby reducing memory
wastage.
2. Faster Program Startup: Since only the essential pages are loaded initially,
programs can start up faster compared to loading the entire program into memory
at once.
3. Improved System Responsiveness: Demand paging helps in improving overall
system responsiveness by reducing the amount of time spent loading unnecessary
pages into memory.
4. Fragmentation Avoidance: Demand paging helps avoid external fragmentation by
allocating memory on-demand rather than allocating fixed-sized blocks.
5. Reduced I/O Overhead: Only pages that are actually needed by processes are
loaded into memory, reducing unnecessary I/O operations.
6. Flexibility in Memory Management: Demand paging provides flexibility in
managing memory, as pages can be loaded and unloaded dynamically based on
process requirements.
7. Support for Large Programs: Demand paging enables the execution of large
programs that may not fit entirely in physical memory, as only the required
portions are loaded.
Disadvantages:
1. Increased Overheads: Demand paging introduces additional overhead for handling
page faults, servicing the associated interrupts, and managing page tables, since
pages must be loaded from secondary storage into memory as needed. This
overhead can impact system performance.
2. Potential for Thrashing: If the system constantly swaps pages between main
memory and secondary memory due to high memory pressure, it can lead to
thrashing, where the system spends more time swapping pages than executing
processes.
3. Complexity in Page Fault Handling: Managing page faults and ensuring efficient
page replacement strategies can be complex and require careful optimization to
avoid performance bottlenecks.
4. Longer Memory Access Times: Accessing pages from secondary memory (disk) is
slower compared to accessing pages from main memory, leading to longer
memory access times.
5. Complexity in Implementation: Implementing demand paging requires complex
algorithms for page replacement, handling page faults, and managing page tables.
● When a page fault occurs, the algorithm looks ahead in the reference string to see
which pages will be used and replaces the one that will not be used for the longest
time.
Example:
Consider the reference string: 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 4, 0, with three
frames. Let's calculate the number of page faults using the OPTIMAL Page Replacement
Algorithm.
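A short sketch of the optimal policy on this reference string: it counts a fault whenever the referenced page is not in a frame and, on replacement, evicts the page whose next use lies farthest in the future (or is never used again).

# Sketch of Optimal (Belady) page replacement with three frames.
reference = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 4, 0]
FRAMES = 3
frames, faults = [], 0

for i, page in enumerate(reference):
    if page in frames:
        continue                          # page hit
    faults += 1                           # page fault
    if len(frames) < FRAMES:
        frames.append(page)               # a frame is still free
    else:
        future = reference[i + 1:]
        # evict the page whose next use is farthest away (never used again counts as farthest)
        victim = max(frames, key=lambda p: future.index(p) if p in future else len(future) + 1)
        frames[frames.index(victim)] = page

print("page faults:", faults)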
Advantages:
● Optimal Page Replacement guarantees the fewest possible page faults because it
always picks the best page to replace.
● It gives the best performance in theory.
● It is used as a benchmark to compare the performance of other page replacement
algorithms.
Limitations:
● The main drawback of the Optimal Page Replacement algorithm is that it requires
knowledge of future memory access patterns, which is impossible to know in
real-world scenarios.
● It's not feasible to implement because it's like trying to predict the future, which is
beyond the capabilities of computers.
● It is not used in real systems due to its impracticality.
Example:
Consider the reference string: 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0, with three
frames. Let's calculate the number of page faults using the FIFO Page Replacement
Algorithm.
● Ratio of Page Hits to Page Faults = 8 : 12 = 2 : 3 ≈ 0.67
● Page Hit Percentage = 8 *100 / 20 = 40%
● Page Fault Percentage = 100 - Page Hit Percentage = 100 - 40 = 60%
Advantages:
● Simplicity: FIFO is easy to implement because it only requires keeping track of the
order in which pages were loaded into memory.
● Low overhead: FIFO has minimal overhead compared to more complex page
replacement algorithms.
Limitations:
● High number of page faults: It can result in a high number of page faults,
especially for frequently accessed pages that are replaced simply because they
were loaded first.
● Belady's Anomaly: One of the main drawbacks of FIFO is the occurrence of
Belady's Anomaly, where increasing the number of page frames can unexpectedly
lead to more page faults instead of reducing them.
● Poor performance with certain access patterns: FIFO may not perform well with
certain access patterns, especially those that exhibit poor locality of reference.
● Lack of adaptability: FIFO does not consider how often pages are accessed or how
recently they were used, which can lead to suboptimal performance in some cases.
Example:
Consider the reference string: 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0, with three
frames. Let's calculate the number of page faults using the LRU Page Replacement
Algorithm.
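A short sketch that computes the fault counts for both this LRU example and the FIFO example above (same reference string, three frames):

# Sketch of FIFO and LRU page replacement on the reference string used above.
from collections import OrderedDict, deque

reference = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0]
FRAMES = 3

def fifo_faults(refs):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                          # hit
        faults += 1
        if len(frames) == FRAMES:
            frames.discard(queue.popleft())   # evict the oldest-loaded page
        frames.add(page)
        queue.append(page)
    return faults

def lru_faults(refs):
    frames, faults = OrderedDict(), 0         # insertion order tracks recency
    for page in refs:
        if page in frames:
            frames.move_to_end(page)          # hit: mark as most recently used
            continue
        faults += 1
        if len(frames) == FRAMES:
            frames.popitem(last=False)        # evict the least recently used page
        frames[page] = True
    return faults

print("FIFO faults:", fifo_faults(reference))   # 12 faults, 8 hits (matches the figures above)
print("LRU faults:", lru_faults(reference))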
Advantages:
● Optimality: LRU is designed to approximate optimal page replacement by selecting
the page that has not been used for the longest time. This helps minimize the
number of page faults and improve overall system performance.
● Simple concept: Despite its optimality, the concept behind LRU is relatively simple
and intuitive to understand.
● Good performance in practice: LRU tends to perform well in practice for many
types of workloads and access patterns.
Limitations:
● Complex implementation: While the concept of LRU is simple, its implementation
can be complex and require additional overhead, especially in systems with limited
resources or large amounts of memory.
● Expensive to maintain access history: Keeping track of access times for each page
can be expensive in terms of memory and computational resources, especially in
systems with a large number of pages.
● Vulnerable to access patterns: LRU may not perform well with certain access
patterns, such as those with frequent access to a small subset of pages or those
that exhibit cyclic access patterns.
● LRU provides good performance by approximating the optimal algorithm but may
be complex to implement efficiently.
May 2023
1. Differentiate between paging and segmentation.
2. What is the difference between contiguous and non-contiguous memory
allocation?
July 2022
3. What is contiguous memory allocation?
Others
4. Why are page sizes always powers of 2? Explain.
5. Differentiate between virtual memory and main memory.
6. What is the cause of thrashing? How does the system detect thrashing?
May 2023
1. (a) Explain the concept of virtual memory and its advantages and disadvantages.
(b) What is fragmentation? Explain the difference between internal and external
fragmentation briefly.
2. Explain the following:
a. Optimal Page Replacement and Least Recently Used.
b. Demand Paging.
July 2022
3. What is Paging? Describe various page replacement algorithms.
4. (a) What is Memory Management? Explain Contiguous and Non-contiguous
Memory allocation.
(b) Explain internal and external fragmentation with example.
July 2021
5. What do you mean by page replacement? Describe various page replacement
algorithms.
6. What is memory management? Explain the concept of virtual memory.
Others
7. Describe first fit, best fit and worst fit mechanisms for memory allocation.
8. Explain paging and segmentation techniques.
9. Explain virtual memory management.
10. Explain swapping in detail.