Virtual Memory
Virtual memory is a technique that allows the execution of processes that are not completely in
memory. The main visible advantage of this scheme is that programs can be larger than physical memory.
The following are situations in which the entire program is not required to be loaded fully:
• User-written error-handling routines are used only when an error occurs in the data or
computation.
• Certain options and features of a program may be used rarely.
• Many tables are assigned a fixed amount of address space even though only a small portion of
the table is actually used.
The ability to execute a program that is only partially in memory would confer many benefits:
• Less I/O would be needed to load or swap each user program into memory.
• A program would no longer be constrained by the amount of physical memory that is available.
• Each user program could take less physical memory, so more programs could be run at the same time,
with a corresponding increase in CPU utilization and throughput.
Virtual memory is the separation of user logical memory from physical memory. This separation allows
an extremely large virtual memory to be provided for programmers when only a smaller physical
memory is available (Fig).
Figure: Diagram showing virtual memory that is larger than physical memory.
Demand Paging
Loading only those pages that are needed is known as demand paging.
A demand-paging system is similar to a paging system with swapping.
Figure: Transfer of a paged memory to contiguous disk space
When we want to execute a process, we swap it into memory. Rather than swapping the entire process
into memory, however, we use a lazy swapper.
A swapper manipulates entire processes, whereas a pager is concerned with the individual pages of a
process. We therefore use the term pager, rather than swapper, in connection with demand paging.
Basic Concepts:
When a process is to be swapped in, the pager guesses which pages will be used before the process is
swapped out again.
Instead of swapping in a whole process, the pager brings only those necessary pages into memory.
Thus, it avoids reading into memory pages that will not be used anyway, decreasing the swap time and
the amount of physical memory needed.
Hardware support is required to distinguish between those pages that are in memory and those pages that
are on the disk using the valid-invalid bit scheme.
The page-table entry for a page that is brought into memory is set as usual, but the page-table entry for a
page that is not currently in memory is either simply marked invalid or contains the address of the page
on disk.
Access to a page marked invalid causes a page-fault trap. We check an internal table for this process to
determine whether the reference was a valid or an invalid memory access.
If the reference was invalid, we terminate the process. If it was valid but we have not yet brought in that
page, we now page it in: we find a free frame and schedule a disk operation to read the desired
page into the newly allocated frame.
When the disk read is complete, we modify the internal table kept with the process and the page table to
indicate that the page is now in memory.
We restart the instruction that was interrupted by the illegal address trap. The process can now access the
page as though it had always been in memory.
Therefore, the operating system reads the desired page into memory and restarts the process as though the
page had always been in memory.
It can execute with no more faults. This scheme is pure demand paging: never bring a page into memory
until it is required.
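The fault-handling steps above can be sketched as a small simulation. All the names here (DemandPager, backing_store, and so on) are invented for illustration; this is not a real kernel interface.

```python
# Minimal sketch of demand-paging page-fault handling (illustrative names,
# not a real kernel API).

class DemandPager:
    def __init__(self, num_frames, backing_store):
        self.backing_store = backing_store          # page number -> contents on disk
        self.free_frames = list(range(num_frames))
        # Every page of the process starts marked invalid (not yet in memory).
        self.page_table = {p: (None, False) for p in backing_store}
        self.frames = {}                            # frame number -> page contents
        self.faults = 0

    def access(self, page):
        if page not in self.page_table:             # invalid reference
            raise MemoryError("terminate process")
        frame, valid = self.page_table[page]
        if not valid:                               # page-fault trap
            self.faults += 1
            frame = self.free_frames.pop(0)         # 1. find a free frame
            self.frames[frame] = self.backing_store[page]   # 2. read page from disk
            self.page_table[page] = (frame, True)   # 3. mark the entry valid
        # 4. restart the interrupted instruction; the access now succeeds
        return self.frames[frame]

pager = DemandPager(2, {0: "code page", 1: "data page"})
pager.access(0)          # first touch faults and loads the page
pager.access(0)          # second touch is a hit: pure demand paging
print(pager.faults)      # -> 1
```

Note that the second access causes no fault, matching the pure demand-paging rule: a page is brought in only on its first use.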
The hardware to support demand paging is the same as the hardware for paging and swapping:
o Page table
o Secondary memory: This memory holds those pages that are not present in main memory
Page replacement is used to make a frame free when no frame is available: we select a frame that is not
currently being used, free it, and use it for the desired page.
Advantages of Demand Paging:
• Only the needed pages are loaded, so less memory and less I/O are used.
• Programs larger than physical memory can be executed.
• More processes can be kept in memory at the same time, increasing CPU utilization and throughput.
Copy-on-Write
The fork( ) system call creates a copy of the parent's address space for the child, duplicating the pages
belonging to the parent.
However, many child processes invoke the exec( ) system call immediately after creation, so the copying of the
parent's address space may be unnecessary. Instead, we can use a technique known as copy-on-write.
It works by allowing the parent and child processes initially to share the same pages.
These shared pages are marked as copy-on-write pages, meaning that if either process writes to a shared
page, a copy of the shared page is created.
For example, assume that the child process attempts to modify a page containing portions of the stack,
with the pages set to be copy-on-write.
The operating system will create a copy of this page, mapping it to the address space of the child process.
The child process will then modify its copied page and not the page belonging to the parent process.
When the copy-on-write technique is used, only the pages that are modified by either process are copied; all
unmodified pages can be shared by the parent and child processes.
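The sharing and copying behaviour can be sketched as a toy model (this is only an illustration of the idea, not a real fork( ) implementation; the Page and Process names are invented):

```python
# Toy model of copy-on-write page sharing (illustrative only).

class Page:
    def __init__(self, data):
        self.data = data

class Process:
    def __init__(self, pages):
        self.pages = pages          # virtual page number -> shared Page object
        self.cow = set(pages)       # pages currently marked copy-on-write

    def fork(self):
        # Parent and child initially share the same Page objects,
        # and both sides mark them copy-on-write.
        self.cow = set(self.pages)
        return Process(dict(self.pages))

    def write(self, vpn, data):
        if vpn in self.cow:
            # First write to a shared page: copy it, then modify only the copy.
            self.pages[vpn] = Page(self.pages[vpn].data)
            self.cow.discard(vpn)
        self.pages[vpn].data = data

parent = Process({0: Page("stack")})
child = parent.fork()
child.write(0, "child stack")                   # child gets its own copy
print(parent.pages[0].data)                     # -> stack (parent untouched)
print(child.pages[0] is parent.pages[0])        # -> False (no longer shared)
```

Only the written page is duplicated; any page neither process writes stays shared for the lifetime of both.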
Copy-on-write is used by several operating systems, including Windows XP, Linux, and Solaris.
When a page is going to be duplicated using copy-on-write, it is important to note the location from
which the free page will be allocated. Many operating systems provide a pool of free pages for such
requests.
Operating systems typically allocate these pages using a technique known as zero-fill-on-demand.
Several versions of UNIX (including Solaris and Linux) provide a variation of the fork( ) system call,
vfork( ) (for virtual memory fork). vfork( ) does not use copy-on-write; instead, the parent process is
suspended and the child uses the address space of the parent, so vfork( ) is intended for the case in which
the child calls exec( ) immediately after creation.
Page Replacement Algorithms
We evaluate an algorithm by running it on a particular string of memory references and computing the
number of page faults.
The string of memory references is called a reference string.
Reference strings are generated artificially or by tracing a given system and recording the address of each
memory reference. The latter choice produces a large amount of data.
To reduce this amount of data, we use two facts:
• For a given page size, we need to consider only the page number, not the entire address.
• If we have a reference to a page p, then any immediately following references to page p will
never cause a page fault: page p will be in memory after the first reference, so the immediately
following references will not fault.
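Both reductions can be applied mechanically. A small helper sketch (the trace values and the 100-byte page size are made up for illustration) converts raw addresses into a reference string:

```python
def to_reference_string(addresses, page_size):
    """Keep only page numbers, and drop immediately repeated references."""
    pages = [addr // page_size for addr in addresses]
    ref_string = []
    for p in pages:
        if not ref_string or ref_string[-1] != p:   # consecutive repeats never fault
            ref_string.append(p)
    return ref_string

trace = [100, 432, 101, 612, 102, 103, 104, 101, 611, 612]
print(to_reference_string(trace, 100))   # -> [1, 4, 1, 6, 1, 6]
```

Addresses 101, 102, 103, 104, and 101 all fall in page 1, so they collapse to a single reference.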
As the number of frames available increase, the number of page faults will decrease.
So, we use the reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
FIFO Page Replacement Algorithm:
The simplest page-replacement algorithm is the FIFO algorithm. A FIFO replacement algorithm associates
with each page the time when that page was brought into memory. When a page must be replaced, the oldest
page is chosen. We can create a FIFO queue to hold all pages in memory.
Example: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 and consider 3 frames.
The first three references (7, 0, 1) cause page faults, and are brought into these empty frames.
The next reference (2) replaces page 7, because page 7 was brought in first.
Since 0 is the next reference and 0 is already in memory, we have no fault for this reference.
The first reference to 3 results in replacement of page 0, since page 0 is now the oldest page in memory.
Continuing in this way, FIFO replacement produces fifteen page faults for this string in all.
For example, with the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, FIFO replacement gives nine
faults with three frames but ten faults with four frames. This most unexpected result is known as Belady's
anomaly: for some page-replacement algorithms, the page-fault rate may increase as the number of
allocated frames increases.
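A short simulation sketch confirms both counts: fifteen faults for the reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 with three frames, and Belady's anomaly on the string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5:

```python
from collections import deque

def fifo_faults(ref_string, num_frames):
    """Count page faults under FIFO replacement."""
    frames = deque()                 # oldest page at the left
    faults = 0
    for page in ref_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()     # evict the oldest page
            frames.append(page)
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(ref, 3))           # -> 15

belady = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(belady, 3))        # -> 9
print(fifo_faults(belady, 4))        # -> 10 (more frames, more faults!)
```

Note that a hit does not reorder the queue: FIFO evicts by load time, not by last use, which is exactly why adding a frame can hurt.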
LRU Page Replacement Algorithm:
The FIFO algorithm uses the time when a page was brought into memory; the OPT algorithm uses the
time when a page is to be used. LRU (least-recently-used) replacement replaces the page that has not been
used for the longest period of time.
LRU replacement associates with each page the time of that page's last use. We can think of this strategy
as the optimal page-replacement algorithm looking backward in time, rather than forward.
Let SR be the reverse of a reference string S, then the page-fault rate for the OPT algorithm on S is the
same as the page-fault rate for the OPT algorithm on SR.
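A sketch of OPT replacement (evict the resident page whose next use lies farthest in the future, or that is never used again) gives nine faults for the reference string above with three frames, and the reversal property can be checked directly:

```python
def opt_faults(ref_string, num_frames):
    """Count page faults under optimal (farthest-future-use) replacement."""
    frames = set()
    faults = 0
    for i, page in enumerate(ref_string):
        if page in frames:
            continue
        faults += 1
        if len(frames) == num_frames:
            future = ref_string[i + 1:]
            # Evict the resident page used farthest in the future (or never again).
            victim = max(frames,
                         key=lambda p: future.index(p) if p in future else len(future))
            frames.discard(victim)
        frames.add(page)
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(ref, 3))                   # -> 9
print(opt_faults(list(reversed(ref)), 3))   # -> 9, same rate on the reversed string
```

OPT is unrealizable in practice, since it requires future knowledge of the reference string; it is used as a yardstick against which other algorithms are measured.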
Now consider the same string with 3 empty frames
The LRU algorithm produces twelve faults. The first five faults are the same as those for optimal
replacement. When the reference to page 4 occurs, however, LRU replacement sees that, of the three frames in
memory, page 2 was used least recently.
LRU replacement with twelve faults is still much better than FIFO replacement with
fifteen.
The problem is to determine an order for the frames defined by the time of last use. Two implementations:
o Counters: With each page-table entry we associate a time-of-use field, and we add to the CPU a logical
clock or counter. The clock is incremented for every memory reference; whenever a page is referenced,
the clock value is copied into its time-of-use field. We replace the page with the smallest time value.
o Stack: Another approach to implementing LRU replacement is to keep a stack of page numbers.
Whenever a page is referenced, it is removed from the stack and put on the top. In this way, the most
recently used page is always at the top of the stack and the least recently used page is at the bottom.
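The stack approach maps naturally onto an ordered dictionary: a referenced page moves to the "top", and eviction takes the page at the "bottom". A sketch, which also reproduces the twelve faults for this reference string with three frames:

```python
from collections import OrderedDict

def lru_faults(ref_string, num_frames):
    """Count page faults under LRU, using an ordered dict as the stack."""
    stack = OrderedDict()            # most recently used page at the end
    faults = 0
    for page in ref_string:
        if page in stack:
            stack.move_to_end(page)  # referenced page goes to the top
        else:
            faults += 1
            if len(stack) == num_frames:
                stack.popitem(last=False)   # evict least recently used (bottom)
            stack[page] = True
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(ref, 3))            # -> 12
```

Unlike FIFO, a hit reorders the structure, which is the extra bookkeeping that LRU pays for its better fault rate.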
Figure: Use of a stack to record the most recent page references.
LFU Page Replacement Algorithm:
A counting-based scheme keeps, for each resident page, a count of the references made to it; LFU
(least frequently used) replacement selects for eviction the page with the smallest count.
Figure: Frame contents for the reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
with three frames under LFU replacement.
When all frequencies are the same, we follow the FIFO algorithm to break the tie.
We have 13 page faults.
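Assuming each page's count records its references since it was loaded, and FIFO order breaks ties, an LFU sketch for the same reference string with three frames gives thirteen faults:

```python
def lfu_faults(ref_string, num_frames):
    """LFU replacement: evict the resident page with the fewest references
    since it was loaded; break ties in FIFO (load-order) fashion."""
    counts = {}        # page -> reference count while resident
    load_time = {}     # page -> time it was loaded, for FIFO tie-breaking
    faults = 0
    for time, page in enumerate(ref_string):
        if page in counts:
            counts[page] += 1
            continue
        faults += 1
        if len(counts) == num_frames:
            # Smallest count wins; among equal counts, the oldest load wins.
            victim = min(counts, key=lambda p: (counts[p], load_time[p]))
            del counts[victim], load_time[victim]
        counts[page] = 1
        load_time[page] = time
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lfu_faults(ref, 3))            # -> 13
```

A weakness of LFU is visible here: page 0 accumulates a high count early and is never evicted afterwards, even during stretches when it is not being used.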