Chapter 09 - Virtual Memory
Virtual Memory
◼ Demand Paging
◼ Page Replacement
◼ Allocation of Frames
◼ Thrashing
◼ Demand Segmentation
◼ Advantages:
➢ Less memory needed
➢ Faster response
➢ More users
◼ When a process arrives for execution (is brought to the ready queue), the pager
(lazy swapper) guesses which pages will be used by the process and brings
(swaps in) only those necessary pages into main memory (frames), instead of
swapping in all the pages of the process (as in pure paging).
◼ Pages that are in primary memory are distinguished from those that are not
by a valid-invalid bit in the page table of that process.
◼ The page-fault rate can have a significant effect on the performance of demand
paging and hence on the performance of the computer. To see the effect, let us
calculate the Effective Access Time (EAT):
➢ If one access out of 1,000 causes a page fault, then EAT = 8.2
microseconds.
➢ This is a slowdown by a factor of 40!
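The 8.2-microsecond figure follows from the usual textbook parameters, which the slide does not restate; the sketch below assumes a 200 ns memory-access time and an 8 ms page-fault service time:

```python
def effective_access_time(p, mem_access_ns, fault_service_ns):
    """EAT = (1 - p) * memory access time + p * page-fault service time."""
    return (1 - p) * mem_access_ns + p * fault_service_ns

# Assumed parameters (the slide states only the result): 200 ns memory
# access, 8 ms = 8,000,000 ns fault service time, p = 1/1000.
eat = effective_access_time(1 / 1000, 200, 8_000_000)
print(eat / 1000, "microseconds")  # ≈ 8.2 microseconds
print(eat / 200, "x slowdown")     # ≈ 41, i.e. roughly a factor of 40
```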
◼ The Technique:
In a page fault situation, when a new page is required to be brought to the main
memory and there is no free frame:
1. Use a page-replacement algorithm to select a victim frame.
2. Swap-out the victim page to the disk, and update the page and frame
tables accordingly.
3. Swap-in the desired page into the newly freed frame, and update the
page and frame tables accordingly.
4. Restart the blocked process.
◼ A modify bit / dirty bit is used to reduce overhead of page transfers - only
modified pages are written to disk
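The four steps, together with the dirty-bit optimization, can be sketched as follows. This is a minimal illustration, not the text's implementation: the `PageEntry` class, `handle_page_fault`, and `pick_victim` names are all assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass
class PageEntry:
    frame: int = None    # frame number; None plays the role of the invalid bit
    dirty: bool = False  # modify / dirty bit

def handle_page_fault(page_table, frames, faulting_page, pick_victim, disk_writes):
    """Steps 1-3 of the technique above (illustrative names throughout)."""
    victim = pick_victim(frames)          # step 1: page-replacement algorithm
    if page_table[victim].dirty:          # dirty bit: write back only if modified
        disk_writes.append(victim)        # step 2: swap out the victim page
    freed = page_table[victim].frame
    page_table[victim].frame = None       # victim's valid bit -> invalid
    page_table[victim].dirty = False
    frames[frames.index(victim)] = faulting_page
    page_table[faulting_page].frame = freed   # step 3: swap in, mark valid
    # step 4 (restarting the blocked process) is the dispatcher's job
```

For example, if page "A" is dirty and chosen as the victim, it is written to disk before "C" takes over its frame; a clean victim is simply discarded.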
◼ The Rule (FIFO): When a page must be replaced, replace the oldest page
(first-in, first-out).
◼ Example:
Let the number of frames = 3
[Fig. 9.4]
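A minimal simulation of the FIFO rule. The reference string below is the one commonly used in textbook examples (an assumption; the actual string of Fig. 9.4 is not reproduced here):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement."""
    frames = deque()          # oldest page sits at the left
    faults = 0
    for page in refs:
        if page in frames:
            continue          # hit: FIFO ignores references to resident pages
        faults += 1
        if len(frames) == nframes:
            frames.popleft()  # evict the oldest page
        frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))   # 15 faults with 3 frames
```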
◼ The Rule (Optimal, OPT): Replace the page that will not be used for the
longest period of time.
◼ Example:
Let the number of frames = 3
[Fig. 9.6]
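The optimal rule needs knowledge of the future reference string, which is why it is usable only as a benchmark in simulations. A sketch, using the same commonly used textbook reference string as above (an assumption):

```python
def optimal_faults(refs, nframes):
    """Count page faults under the optimal (OPT) rule."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
            continue
        # Evict the page whose next use lies farthest in the future
        # (pages never used again count as infinitely far).
        def next_use(p):
            future = refs[i + 1:]
            return future.index(p) if p in future else float("inf")
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(optimal_faults(refs, 3))   # 9 faults with 3 frames
```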
◼ The Rule (LRU): Replace the page that has not been used for the longest
period of time.
◼ Example:
Let the number of frames = 3
[Fig. 9.7]
1. Counter
▪ Each page-table entry has a time-of-use field; on every reference
the current logical clock value is copied into it. The page with the
smallest time value is the LRU page.
2. Stack
▪ Keep the page numbers in a stack.
▪ Whenever a page is referenced, it is removed from its current
position in the stack and put on the top.
▪ The stack top is the most recently used page and the bottom is the
LRU page.
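The stack scheme can be sketched with an ordinary Python list standing in for the stack (most recently used page at the end). The reference string is the same commonly used textbook example as before (an assumption):

```python
def lru_faults(refs, nframes):
    """Count page faults under LRU, using the stack implementation."""
    stack = []                 # stack top (MRU) at the end of the list
    faults = 0
    for page in refs:
        if page in stack:
            stack.remove(page)  # pull the page out of its current position
        else:
            faults += 1
            if len(stack) == nframes:
                stack.pop(0)    # bottom of the stack is the LRU victim
        stack.append(page)      # the referenced page goes on top
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))     # 12 faults with 3 frames
```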
◼ Second-Chance Algorithm
➢ Each page is associated with a single reference bit.
➢ The value of the reference bit is initially 0.
➢ On each reference to the page, its reference bit is set to 1.
➢ When a victim page is to be chosen, the pages are searched in order and
the reference bit is examined:
▪ If the value is 0, the page is chosen as the victim.
▪ If the value is 1, this page is given a second-chance (i.e., its
reference bit is set to 0, but it is not chosen as the victim. The OS
moves to the next page for examining its reference bit).
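The circular search above is the classic clock arrangement. A minimal sketch, assuming the newly loaded page gets its reference bit set by the access that faulted it in (a detail the slide does not specify); the function name and reference string are illustrative:

```python
def second_chance_faults(refs, nframes):
    """Count page faults under the second-chance (clock) algorithm."""
    frames = [None] * nframes   # each slot holds [page, ref_bit], or None if free
    hand = 0                    # clock hand: next slot to examine
    faults = 0
    for page in refs:
        hit = False
        for slot in frames:
            if slot is not None and slot[0] == page:
                slot[1] = 1     # reference bit set on every access
                hit = True
                break
        if hit:
            continue
        faults += 1
        # Sweep: pages with reference bit 1 get a second chance (bit cleared).
        while frames[hand] is not None and frames[hand][1] == 1:
            frames[hand][1] = 0
            hand = (hand + 1) % nframes
        frames[hand] = [page, 1]   # load the new page; bit set by this access
        hand = (hand + 1) % nframes
    return faults

print(second_chance_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 9 faults
```

Note that if every resident page has its bit set, the hand sweeps a full circle clearing bits and then evicts the page it started from, so the algorithm degenerates to FIFO in the worst case.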
◼ Counting-based algorithms (e.g., LFU and MFU) keep a counter of the number
of references that have been made to each page.
◼ Thrashing occurs when: ∑ size of locality > total allocated memory size.
◼ Thrashing results in a degradation in CPU utilization.
➢ The locality is defined by the set of pages in the Δ most recent page
references. It is called the ‘working set (WS)’.
[Fig. 9.10: Working set model]
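The working set at time t can be computed directly from the reference string; a minimal sketch, with the function name and the window parameter `delta` (Δ) introduced here for illustration:

```python
def working_set(refs, t, delta):
    """WS(t, delta): the distinct pages among the delta most recent
    references up to and including time t."""
    start = max(0, t - delta + 1)   # window cannot extend before time 0
    return set(refs[start:t + 1])

refs = [1, 2, 1, 2, 3]
print(working_set(refs, 4, 3))   # pages referenced at times 2, 3, 4
```

Thrashing is then predicted when the sum of the working-set sizes over all processes exceeds the total number of available frames, matching the condition stated above.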
◼ When a process arrives for execution (is brought to the ready queue), the OS
guesses which segments will be used by the process and brings (swaps in) only
those necessary segments into main memory, instead of swapping in all the
segments of the process (as in pure segmentation).
◼ Segments that are in primary memory are distinguished from those that are
not by a valid-invalid bit in the segment table of that process.
◼ Pure segmentation
◼ Segment replacement