
Chapter 09 - Virtual Memory

Chapter 9 discusses virtual memory, explaining its necessity and advantages, such as allowing processes to execute without being fully loaded into physical memory. It covers various strategies for virtual memory management, including demand paging, page replacement algorithms, and frame allocation methods, while also addressing issues like thrashing and techniques to prevent it. The chapter emphasizes the importance of optimizing page fault rates and the performance implications of different memory management strategies.


Chapter 9

Virtual Memory

Dr. Niroj Kumar Pani


nirojpani@gmail.com

Department of Computer Science Engineering & Applications


Indira Gandhi Institute of Technology
Sarang, Odisha
Chapter Outline…
◼ Virtual Memory: Why & What?

◼ Types of Virtual Memory Management Strategies

◼ Demand Paging

◼ Page Replacement

◼ Allocation of Frames

◼ Thrashing

◼ Demand Segmentation

Dr. N. K. Pani, Dept. of CSEA, IGIT Sarang | 9.2


Virtual Memory: Why & What?
◼ Why Virtual Memory?: How can we watch a 2 GB movie on a machine with only 1 GB of RAM?

◼ What is Virtual Memory?: It is a technique that allows the execution of processes that may not be completely in primary memory.
➢ Only part of the program needs to be in main memory for execution.
➢ The logical address space (the process) can therefore be much larger than the physical address space.
➢ It separates the user's logical memory from the physical memory.

◼ Advantages:
➢ Less memory needed
➢ Faster response
➢ More users



Types of Virtual Memory Management Strategies
◼ Virtual memory management is implemented through two strategies:
1. Demand paging
2. Demand segmentation



Demand Paging (Paging On Demand)
The Method

A demand-paging system is similar to a paging system with lazy swapping:

◼ As in paging, in demand paging logical memory is divided into pages and physical memory is divided into equal-sized frames.

◼ When a process arrives for execution (is brought to the ready queue), the pager (lazy swapper) guesses which pages the process will use and brings (swaps in) only those necessary pages into main-memory frames, instead of swapping in all the pages of the process (as in ordinary paging).

◼ Which pages are in primary memory and which are not is indicated by a valid-invalid bit in the page table of that process.

[The method is shown in Fig. 9.1, next slide]



[Fig. 9.1: Demand paging method]



Page Fault In Demand Paging

◼ When the CPU needs a page, it is looked up in the page table.

➢ If the valid-invalid bit is ‘v’, the page is in main memory and is delivered to the processor.
➢ If the valid-invalid bit is ‘i’, the page is not in main memory, and a page-fault trap is raised.

◼ The procedure for handling a page fault is as follows:

1. Trap to the OS and block the process that caused the page fault.
2. Find a free frame (if there is no free frame, use a page-replacement algorithm to obtain one). [Page-replacement algorithms are discussed later.]
3. Locate the desired page on the disk and read it into the newly allocated frame.
4. Set the valid-invalid bit for that page to ‘v’ in the page table (i.e., update the page table).
5. Restart the blocked process at the instruction that caused the trap.
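The five steps above can be sketched in Python. Every structure here (the page-table dict, the free-frame list, the disk map, the in-memory frame dict) is a hypothetical stand-in for the OS data structures, not a real kernel API:

```python
def handle_page_fault(pid, page, page_table, free_frames, pick_victim, disk, memory):
    """Sketch of the five steps above; all structures are hypothetical stand-ins."""
    # Step 1 (blocking the faulting process) is assumed done by the trap handler.
    # Step 2: find a free frame, or run a replacement algorithm to obtain one.
    frame = free_frames.pop() if free_frames else pick_victim()
    # Step 3: read the desired page from the backing store into that frame.
    memory[frame] = disk[pid][page]
    # Step 4: update the page table: record the frame and set the bit to 'v'.
    page_table[page] = (frame, 'v')
    # Step 5: the blocked process can now be restarted at the faulting instruction.
    return frame

page_table, memory, free_frames = {}, {}, [3]
handle_page_fault(pid=1, page=0, page_table=page_table, free_frames=free_frames,
                  pick_victim=lambda: 0, disk={1: {0: "page-0 bytes"}}, memory=memory)
print(page_table)  # {0: (3, 'v')}
```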



[Fig. 9.2: Procedure for handling a page fault]



Performance of Demand Paging

◼ The page-fault rate can have a significant effect on the performance of demand paging, and hence on the performance of the computer. To see the effect, let us calculate the Effective Access Time (EAT).

Let ‘p’ be the probability of a page fault (0 ≤ p ≤ 1):

EAT = (1 − p) × memory access time
    + p × (page-fault overhead + swap page out + swap page in + restart overhead)



◼ An Example:

➢ Let Memory access time = 200 nanoseconds and


Let Average page-fault service time = 8 milliseconds

➢ EAT = (1 − p) × 200 + p × (8 milliseconds)

= (1 − p) × 200 + p × 8,000,000
= 200 + p × 7,999,800 (in nanoseconds)

➢ If one access out of 1,000 causes a page fault, then EAT = 8.2
microseconds.
➢ This is a slowdown by a factor of about 40!!
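The calculation above can be checked with a small helper (a sketch using the slide's numbers: 200 ns memory access, 8 ms fault-service time):

```python
# EAT for demand paging: EAT = (1 - p) * memory access time + p * fault service time.
def eat_ns(p, mem_ns=200, fault_ns=8_000_000):
    """Effective access time in nanoseconds for page-fault probability p."""
    return (1 - p) * mem_ns + p * fault_ns

print(eat_ns(1 / 1000))        # 8199.8 ns, i.e., about 8.2 microseconds
print(eat_ns(1 / 1000) / 200)  # about a 41x slowdown over a 200 ns access
```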



Pure Demand Paging

◼ Never bring a page into memory until it is required: a process starts executing with no pages in memory, so even its first instruction causes a page fault.



Page Replacement
◼ The Need: Page replacement is used when a new page must be swapped into main memory (in case of a page fault) and no free frame is available.

◼ The Technique:
In a page-fault situation, when a new page must be brought into main memory and there is no free frame:
1. Use a page-replacement algorithm to select a victim frame.
2. Swap out the victim page to the disk, and update the page and frame tables accordingly.
3. Swap in the desired page into the newly freed frame, and update the page and frame tables accordingly.
4. Restart the blocked process.



[Fig. 9.3: Page replacement technique]



Use of Dirty Bit in Page Replacement

◼ A modify bit (dirty bit) is used to reduce the overhead of page transfers: only pages that have been modified are written back to disk; an unmodified victim can simply be overwritten.

Page Replacement Algorithms

1. FIFO page replacement algorithm / Queue page replacement algorithm

2. Optimal page replacement algorithm

3. LRU page replacement algorithm / Stack page replacement algorithm

4. LRU Approximation Algorithms


➢ Reference Bit Algorithm
➢ Second-Chance (Clock) Algorithm
5. Counting-Based Page Replacement Algorithms
➢ LFU
➢ MFU



Page Replacement Algorithms Evaluation

◼ We evaluate a page-replacement algorithm on a single parameter: its page-fault rate.

◼ We want the lowest possible page-fault rate.

◼ We evaluate each algorithm by running it on a particular string of memory references (a reference string) and computing the number of page faults on that string.

◼ In all our examples, the reference string is:


7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1



FIFO (Queue) Page Replacement Algorithm

◼ The Rule: When a page must be replaced, replace the oldest page.

◼ Example:
Let the number of frames = 3

[Fig. 9.4]

Number of page faults = 15
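Fig. 9.4's count can be reproduced with a short simulation (a sketch; `fifo_faults` is an illustrative helper, not part of any OS API):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement; the queue tracks arrival order."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:          # no free frame: evict the oldest page
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))  # 15, matching Fig. 9.4
```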



◼ Analysis:

➢ Not the optimal one.


➢ The FIFO page-replacement algorithm suffers from Belady’s Anomaly:
generally, the number of page faults decreases as the number of frames increases, but with FIFO the number of page faults may sometimes increase as the number of frames increases.

[Fig. 9.5: Belady’s Anomaly]
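Belady's anomaly is easy to demonstrate with the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: under FIFO it incurs 9 faults with 3 frames but 10 faults with 4. A sketch (`fifo_faults` is an illustrative helper):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement; the queue tracks arrival order."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:          # no free frame: evict the oldest page
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

s = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(s, 3), fifo_faults(s, 4))  # 9 10 -- more frames, MORE faults
```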



Optimal Page Replacement Algorithm

◼ The Rule: Replace the page that will not be used for longest period of time.

◼ Example:
Let the number of frames = 3

[Fig. 9.6]

Number of page faults = 9
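Fig. 9.6's count can be reproduced by simulating the rule directly (a sketch; note the simulation needs the whole future reference string, which is exactly what a real OS does not have):

```python
def optimal_faults(refs, nframes):
    """Evict the resident page whose next use lies farthest in the future."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            def next_use(p):
                # Index of p's next reference; pages never used again sort last.
                return refs.index(p, i + 1) if p in refs[i + 1:] else float('inf')
            frames.discard(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(optimal_faults(refs, 3))  # 9, matching Fig. 9.6
```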



◼ Analysis:
➢ Provably the optimal one: it yields the lowest possible page-fault rate.
➢ Difficult to implement, because it requires future knowledge of the reference string.



LRU (Stack) Page Replacement Algorithm

◼ The Rule: Replace the page that has not been used for the longest period of time.

◼ Example:
Let the number of frames = 3

[Fig. 9.7]

Number of page faults = 12



◼ Analysis:
➢ Not the optimal one, but better than FIFO (considered a good algorithm).
➢ Often used as the page-replacement algorithm in real systems.
➢ Major problem: how to implement it? (How do we know which page has not been used for the longest period of time?)



◼ Implementation: Two approaches

1. Counter

▪ Associate with each page a time-of-use field, initially 0.
▪ Whenever the page is referenced, copy the current clock value into this field.
▪ The page with the lowest value in the time-of-use field is the LRU page (the target page to be replaced).

2. Stack
▪ Keep the page numbers in a stack.
▪ Whenever a page is referenced, it is removed from its current position in the stack and put on the top.
▪ The top of the stack is the most recently used page, and the bottom is the LRU page.
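The stack approach maps naturally onto Python's `OrderedDict`; a sketch that reproduces Fig. 9.7's count of 12 faults on the chapter's reference string:

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """The OrderedDict plays the 'stack': the most recently used page sits at the end."""
    stack, faults = OrderedDict(), 0
    for page in refs:
        if page in stack:
            stack.move_to_end(page)        # referenced page moves to the top
            continue
        faults += 1
        if len(stack) == nframes:
            stack.popitem(last=False)      # the bottom of the stack is the LRU victim
        stack[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))  # 12, matching Fig. 9.7
```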



LRU Approximation Page Replacement Algorithms

◼ LRU approximation algorithms: Why & What?


➢ Some computer systems do not provide sufficient hardware support for the LRU algorithm. Those systems use LRU-approximation algorithms: we cannot know exactly which page is the LRU page, but we can approximate it.
➢ LRU-approximation algorithms are variants of the LRU page-replacement algorithm in which the victim page is approximated by some technique.

◼ There are two approaches to the approximation:


➢ Reference Bit Algorithms
➢ Second-Chance (Clock) Algorithm



◼ Reference Bit (Additional-Reference-Bits) Algorithm:
➢ Each page is associated with a reference bit and a history byte of several (commonly 8) reference bits, all initially 0.
➢ The hardware sets a page’s reference bit to 1 whenever the page is referenced.
➢ At regular intervals, the OS shifts each page’s reference bit into the high-order bit of its history byte (shifting the other bits right) and clears the reference bit.
➢ The page with the lowest history value is the approximate LRU page.

◼ Second-Chance Algorithm
➢ Each page is associated with a single reference bit, initially set to 0.
➢ The reference bit is set to 1 whenever the page is referenced.
➢ When a victim page must be chosen, the pages are examined in circular order and each reference bit is inspected:
▪ If the value is 0, the page is chosen as the victim.
▪ If the value is 1, the page is given a second chance: its reference bit is cleared to 0, it is not chosen as the victim, and the OS moves on to examine the next page.
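The search loop can be sketched as a clock hand sweeping over the frames (a toy model; `refbits` stands in for the hardware reference bits):

```python
def second_chance_victim(refbits, hand):
    """Sweep the clock hand, clearing reference bits, until a 0-bit frame is found."""
    while refbits[hand]:
        refbits[hand] = 0                  # give this page a second chance
        hand = (hand + 1) % len(refbits)
    return hand                            # index of the victim frame

refbits = [1, 1, 0, 1]                     # frames 0 and 1 were recently referenced
victim = second_chance_victim(refbits, 0)
print(victim, refbits)  # 2 [0, 0, 0, 1] -- frames 0 and 1 spent their second chance
```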



[Fig. 9.8: Second chance algorithm]



Counting-Based Page Replacement Algorithms

◼ These algorithms keep a counter of the number of references that have been made to each page.

◼ Two Approaches to determine the victim page:

1. Least Frequently Used (LFU) page replacement algorithm: Replaces


page with smallest count.

2. Most Frequently Used (MFU) page replacement algorithm: Replaces


page with highest count (based on the argument that the page with the
smallest count was probably just brought in and has yet to be used.)



Allocation of Frames
◼ The Issue: How many frames should be allocated to a process when it is first loaded into main memory?

◼ Two major allocation schemes:


1. Fixed allocation
2. Priority Allocation



Fixed Allocation

◼ Equal allocation: With 100 frames and 5 processes, give each process 20 frames.

◼ Proportional allocation: Allocate according to the size of process.

Let, si = size of process Pi (in number of pages)


S = ∑ si
F = total number of frames
ai = allocation for Pi
⟹ ai = (si/S) × F

An Example: Suppose, F = 64, s1 = 10, and s2 = 127


⟹ a1 = (10/137) × 64 ≈ 5
⟹ a2 = (127/137) × 64 ≈ 59
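The proportional-allocation formula applied to the example above (a sketch; a real allocator must also keep each ai above the process's minimum frame count and make sure rounding does not over-commit frames):

```python
def proportional_allocation(sizes, total_frames):
    """a_i = (s_i / S) * F, rounded to the nearest whole frame."""
    S = sum(sizes)
    return [round(s / S * total_frames) for s in sizes]

print(proportional_allocation([10, 127], 64))  # [5, 59], matching the slide
```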



Priority Allocation

◼ Use a proportional allocation scheme using priorities rather than size.

◼ When a victim frame must be chosen, select for replacement a frame from a process with a lower priority number.



Thrashing
Thrashing: Why & What?

◼ In a demand-paging environment, every process requires a minimum number of frames. If a process is not given this minimum (for example, because the degree of multiprogramming has been pushed too high), its page-fault rate rises rapidly. This high paging activity is called thrashing.

◼ Thrashing occurs when: ∑ size of locality > total allocated memory size.

◼ A thrashing process spends more time paging (swapping pages in and out) than executing.

◼ Thrashing results in a sharp degradation in CPU utilization.

[Fig. 9.9: Effect of Thrashing]



Working-Set Model (Technique to Prevent Thrashing)

◼ The Need for Working Set Model?


➢ Thrashing occurs when a process is not given the minimum number of frames it requires.
➢ But how do we know the minimum number of frames a process will require?
➢ The working-set model is a technique for determining this minimum number of frames, and hence for preventing thrashing.

◼ Description of Working-Set Model:


➢ The working-set model is based on the assumption of locality (as a process executes, it moves from locality to locality; a locality is a set of pages actively used together).
➢ According to the model, the number of frames allocated to a process should cover its current locality.



◼ This is How the Model Works:

➢ The model defines a parameter Δ, called the ‘working-set window’.

➢ The working set (WS) of a process is the set of pages referenced in the most recent Δ page references; it approximates the process's current locality.

[Fig. 9.10: Working-set model]

➢ The minimum number of frames for process Pi is its working-set size WSSi, and the total demand for frames is:
D = ∑ WSSi , where WSSi is the working-set size of process Pi



◼ Conclusion:
➢ If D > M, where M is the total number of available frames, thrashing will occur; otherwise it will not.
➢ Policy: if D > M, suspend one of the processes.



Demand Segmentation (Segmentation on Demand)
The Method

A demand-segmentation system is similar to a segmentation system with lazy


swapping:

◼ As in segmentation, in demand segmentation each program segment is stored as one contiguous unit in physical memory, but with the following exception:

◼ When a process arrives for execution (is brought to the ready queue), the OS guesses which segments the process will use and brings (swaps in) only those necessary segments into main memory, instead of swapping in all the segments of the process (as in ordinary segmentation).

◼ Which segments are in primary memory and which are not is indicated by a valid-invalid bit in the segment table of that process.



[NOTE]
Like demand paging, demand segmentation has:
◼ Segment faults

◼ Pure segmentation

◼ Segment replacement



End of Chapter 9
