Virtual Memory

Virtual memory allows programs to execute even if they are larger than physical memory by storing portions of programs that are not currently in use on secondary storage such as a hard disk. When a program attempts to access code or data that is not in memory, a page fault occurs, which causes the needed portion to be swapped in from secondary storage. This allows programs to have a very large virtual address space even with limited physical memory. Demand paging, a common implementation of virtual memory, only swaps in pages from secondary storage when they are needed rather than loading the entire program at once. Copy-on-write optimization avoids duplicating memory pages during fork by initially sharing pages between parent and child processes and only copying pages when they are modified.

VIRTUAL MEMORY

Virtual Memory
 Virtual memory is a technique that allows the execution of processes that are not completely in
memory. The main visible advantage of this scheme is that programs can be larger than physical memory.
The entire program is rarely needed all at once, as in the following situations:
• User written error handling routines are used only when an error occurs in the data or
computation.
• Certain options and features of a program may be used rarely.
• Many tables are assigned a fixed amount of address space even though only a small amount of
the table is actually used.
The ability to execute a program that is only partially in memory would confer many benefits:
 Less I/O would be needed to load or swap each user program into memory.
 A program would no longer be constrained by the amount of physical memory that is available.
 Each user program could take less physical memory, so more programs could be run at the same time,
with a corresponding increase in CPU utilization and throughput.
 Virtual memory is the separation of user logical memory from physical memory. This separation allows
an extremely large virtual memory to be provided for programmers when only a smaller physical
memory is available (see figure).

Figure: Diagram showing virtual memory that is larger than physical memory.

 Virtual memory is commonly implemented by demand paging. It can also be implemented in a
segmentation system; demand segmentation can likewise be used to provide virtual memory.

Demand Paging

 Pages are loaded only when they are needed during program execution. This is known as demand paging.
 Demand paging is similar to a paging system with swapping.
Figure: Transfer of a paged memory to contiguous disk space

 When we want to execute a process, we swap it into memory. Rather than swapping the entire process
into memory, however, we use a lazy swapper.
 A swapper manipulates entire processes, whereas a pager is concerned with the individual pages of a
process. Thus we use the term pager, rather than swapper, in connection with demand paging.

Basic Concepts:
 When a process is to be swapped in, the pager guesses which pages will be used before the process is
swapped out again.
 Instead of swapping in a whole process, the pager brings only the necessary pages into memory.
 Thus, it avoids reading into memory pages that will not be used anyway, decreasing both the swap time
and the amount of physical memory needed.
 Hardware support is required to distinguish between those pages that are in memory and those pages that
are on the disk using the valid-invalid bit scheme.
 The page-table entry for a page that is brought into memory is set as usual but the page-table entry for a
page that is not currently in memory is either simply marked invalid or contains the address of the page
on disk.

Figure: Page table when some pages are not in main memory.


 The valid-invalid bit lets the hardware check whether a page is in memory; marking a page invalid has
no effect if the process never attempts to access that page.
 While the process executes and accesses pages that are memory resident, execution proceeds normally.
 Access to a page marked invalid causes a page-fault trap. This trap is the result of the operating system's
failure to bring the desired page into memory. A page fault is handled as follows (see figure):

Figure: Steps in handling a page fault

 We check an internal table for this process to determine whether the reference was a valid or invalid
memory access.
 If the reference was invalid, we terminate the process. If it was valid, but we have not yet brought in that
page, we now page it in. We find a free frame and schedule a disk operation to read the desired
page into the newly allocated frame.
 When the disk read is complete, we modify the internal table kept with the process and the page table to
indicate that the page is now in memory.
 We restart the instruction that was interrupted by the illegal address trap. The process can now access the
page as though it had always been in memory.
 Therefore, the operating system reads the desired page into memory and restarts the process as though the
page had always been in memory.
 It can execute with no more faults. This scheme is pure demand paging: never bring a page into memory
until it is required.
 The hardware to support demand paging is the same as the hardware for paging and swapping:
o Page table
o Secondary memory: This memory holds those pages that are not present in main memory
 Page replacement is used to free a frame when none is available: a page that is not currently in use is
written out so that its frame can be reused.
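The valid-invalid bit check and the fault-handling steps above can be sketched in a few lines. This is a toy model only, not a real OS routine; all names here (PageTableEntry, backing_store, free_frames) are illustrative:

```python
# Sketch of a demand-paging lookup with a valid-invalid bit.
# All names are illustrative, not a real kernel interface.

class PageTableEntry:
    def __init__(self):
        self.valid = False   # invalid: page is not in memory
        self.frame = None    # frame number once the page is resident

def access(page_table, memory, backing_store, free_frames, page):
    """Return the frame holding `page`, faulting it in if needed."""
    entry = page_table[page]
    if entry.valid:                      # memory resident: proceed normally
        return entry.frame
    # Page fault: trap, locate the page on disk, read it into a free frame.
    frame = free_frames.pop()            # assume a free frame exists
    memory[frame] = backing_store[page]  # the "disk read"
    entry.frame, entry.valid = frame, True
    return frame                         # restart the faulting access

# Pure demand paging: no page is loaded until it is first referenced.
backing_store = {p: f"page-{p} contents" for p in range(4)}
page_table = {p: PageTableEntry() for p in range(4)}
memory, free_frames = {}, [0, 1, 2, 3]

frame = access(page_table, memory, backing_store, free_frames, 2)
print(memory[frame])   # prints "page-2 contents"
```

Only page 2 becomes valid here; pages never referenced stay on the backing store, which is exactly the "never bring a page into memory until it is required" rule.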
Advantages of Demand Paging:

 Large virtual memory.


 More efficient use of memory.
 Unconstrained multiprogramming: there is no limit on the degree of multiprogramming.

Disadvantages of Demand Paging


 The number of tables and the amount of processor overhead for handling page interrupts are greater than in
the case of simple paged management techniques.
 The major difficulty arises when one instruction may modify several different locations.
o This problem can be solved in two different ways. In one solution, the microcode computes and
attempts to access both ends of both blocks before any modification takes place.

Performance of Demand Paging:


On most computer systems, the memory-access time, denoted ma, ranges from 10 to 200 nanoseconds.
Let p be the probability of a page fault (0 ≤ p ≤ 1). We would expect p to be close to zero; that is, we would
expect to have only a few page faults. The effective access time is then:
effective access time = (1 - p) * ma + p * page fault time

A page fault causes the following sequence to occur:

 Trap to the operating system.


 Save the user registers and process state.
 Determine that the interrupt was a page fault.
 Check that the page reference was legal and determine the location of the page on the disk
 Issue a read from the disk to a free frame:
 Wait in a queue for this device until the read request is serviced.
 Wait for the device seek and/or latency time.
 Begin the transfer of the page to a free frame.
 While waiting, allocate the CPU to some other user (CPU scheduling, optional).
 Receive an interrupt from the disk I/O subsystem (I/O completed).
 Save the registers and process state for the other user (if step 6 is executed).
 Determine that the interrupt was from the disk
 Correct the page table and other tables to show that the desired page is now in memory.
 Wait for the CPU to be allocated to this process again.
 Restore the user registers, process state, and new page table, and then resume the interrupted instruction.
In any case, we are faced with 3 major components of the page-fault service time:
1. Service the page-fault interrupt.
2. Read in the page.
3. Restart the process.

Example: effective access time = (1 - p) * (200 nanoseconds) + p * (8 milliseconds)


= (1- p) * 200 + p * 8,000,000
= 200 + 7,999,800 * p.
So, the effective access time is directly proportional to the page-fault rate. If one access in 1,000 causes a page
fault, the effective access time is about 8.2 microseconds: the computer is slowed down by a factor of 40
because of demand paging! If we want performance degradation to be less than 10 percent, we need
220 > 200 + 7,999,800 * p,
20 > 7,999,800 * p,
p < 0.0000025.
That is, to keep the slowdown due to paging at a reasonable level, we can allow fewer than one memory access
out of 399,990 to page-fault. In sum, it is important to keep the page-fault rate low in a demand-paging system.
Otherwise, the effective access time increases, slowing process execution dramatically.

Copy-on-Write

 The fork( ) system call creates a copy of the parent's address space for the child, duplicating the pages
belonging to the parent.
 Because many child processes invoke the exec( ) system call immediately after creation, the copying of the
parent's address space may be unnecessary. Instead, we can use a technique known as copy-on-write.
 It works by allowing the parent and child processes initially to share the same pages.
 These shared pages are marked as copy-on-write pages, meaning that if either process writes to a shared
page, a copy of the shared page is created.

Figure: Before process 1 modifies page C


Figure: After process 1 modifies page C

 For example, assume that the child process attempts to modify a page containing portions of the stack,
with the pages set to be copy-on-write.
 The operating system will create a copy of this page, mapping it to the address space of the child process.
 The child process will then modify its copied page and not the page belonging to the parent process.
 When the copy-on-write technique is used, only the pages that are modified by either process are copied; all
unmodified pages can be shared by the parent and child processes.
 Copy-on-write is used by several operating systems, including Windows XP, Linux, and Solaris.
 When a page is going to be duplicated using copy-on-write, it is important to note the location from
which the free page will be allocated. Many operating systems provide a pool of free pages for such
requests.
 Operating systems typically allocate these pages using a technique known as zero-fill-on-demand.
 Several versions of UNIX (including Solaris and Linux) provide a variation of the fork( ) system call,
vfork( ) (for virtual memory fork). vfork( ) does not use copy-on-write.
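The share-then-copy behaviour can be modelled in a few lines. This is a toy illustration of the idea only; Process, fork_cow, and write_page are made-up names, not any real kernel interface:

```python
# Toy model of copy-on-write: parent and child share frames until a write.
# Every name here is illustrative.

class Process:
    def __init__(self, pages):
        # page number -> list holding the page's contents (a "frame")
        self.pages = pages

def fork_cow(parent):
    """The child initially shares every frame with the parent."""
    return Process(dict(parent.pages))   # copy the *table*, not the frames

def write_page(proc, other, page, data):
    """On a write to a shared (copy-on-write) page, copy the frame first."""
    if proc.pages[page] is other.pages[page]:      # still shared
        proc.pages[page] = list(proc.pages[page])  # allocate a private copy
    proc.pages[page][0] = data

parent = Process({0: ["code"], 1: ["stack"]})
child = fork_cow(parent)
write_page(child, parent, 1, "child stack")   # only page 1 gets copied

print(parent.pages[1])                    # prints ['stack'] -- untouched
print(child.pages[0] is parent.pages[0])  # prints True: still shared
```

Only the modified page is duplicated; the unmodified page 0 remains a single shared frame, which is the whole point of the optimization.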

Page Replacement Algorithms

 There are many different page replacement algorithms.


 While servicing a page fault, the operating system determines where the desired page resides on the disk,
but then finds that there are no free frames on the free-frame list; all memory is in use.
Figure: Need for page replacement
 The operating system could instead swap out a process, freeing all its frames and reducing the level of
multiprogramming
 The most common solution: Page Replacement

Basic Page Replacement:


We modify the page-fault service routine to include page replacement:

 Find the location of the desired page on the disk.


 Find a free frame:
a. If there is a free frame, use it.
b. If there is no free frame, use a page-replacement algorithm to select a victim frame.
c. Write the victim frame to the disk; change the page and frame tables accordingly.
 Read the desired page into the newly freed frame; change the page and frame tables.
 Restart the user process.

Figure: Page replacement


 We must solve two major problems to implement demand paging:
 Develop a frame-allocation algorithm. If we have multiple processes in memory, we must decide
how many frames to allocate to each process.
 Develop a page-replacement algorithm: When page replacement is required, we must select the
frames that are to be replaced.

 We evaluate an algorithm by running it on a particular string of memory references and computing the
number of page faults.
 The string of memory references is called a reference string.
 Reference strings are generated artificially or by tracing a given system and recording the address of each
memory reference. The latter choice produces a large amount of data.
 To reduce this amount of data, we use two facts:
 For a given page size, we need to consider only the page number, not the entire address.
 If we have a reference to a page p, then any immediately following references to page p will
never cause a page fault. Page p will be in memory after the first reference; the immediately
following references will not fault.

Example:- consider the address sequence


0100, 0432, 0101, 0612, 0102, 0103, 0104, 0101, 0611, 0102, 0103, 0104, 0101, 0610, 0102, 0103, 0104,
0101, 0609, 0102, 0105
At 100 bytes per page, this sequence is reduced to the following reference string:
1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1
To determine the number of page faults for a particular reference string and page-replacement algorithm,
we also need to know the number of page frames available.
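The reduction above can be reproduced mechanically; a small sketch (the addresses are written without their leading zeros, since the values are what matter):

```python
# Reduce an address trace to a page-reference string (100 bytes per page).
addresses = [100, 432, 101, 612, 102, 103, 104, 101, 611, 102, 103, 104,
             101, 610, 102, 103, 104, 101, 609, 102, 105]

pages = [a // 100 for a in addresses]   # keep only the page number

# Drop immediately repeated references: they can never cause a fault.
reference_string = [p for i, p in enumerate(pages)
                    if i == 0 or p != pages[i - 1]]
print(reference_string)   # prints [1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1]
```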

As the number of frames available increases, the number of page faults decreases.
In the examples below, we use the reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
FIFO Page Replacement Algorithm:
The simplest page-replacement algorithm is a FIFO algorithm. A FIFO replacement algorithm associates
with each page the time when that page was brought into memory. When a page must be replaced, the oldest
page is chosen. We can create a FIFO queue to hold all pages in memory.
Example: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 and consider 3 frames.
 The first three references (7, 0, 1) cause page faults and are brought into the three empty frames.
 The next reference (2) replaces page 7, because page 7 was brought in first.
 Since 0 is the next reference and 0 is already in memory, we have no fault for this reference.
 The first reference to 3 results in replacement of page 0, since page 0 is now the oldest page in memory.

 The FIFO page-replacement algorithm is easy to understand and program.


 However, its performance is not always good.
To illustrate the problems that are possible with a FIFO page-replacement algorithm, we consider the following reference string:
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

Figure: Page-fault curve for FIFO replacement on a reference string

Notice that the number of faults for four frames (ten) is greater than the number of faults for three frames (nine)!
This most unexpected result is known as Belady’s anomaly: for some page-replacement algorithms, the page-
fault rate may increase as the number of allocated frames increases.
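The FIFO fault counts in this section, including Belady's anomaly, can be checked with a short simulation (a sketch, not production code):

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Count page faults under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                      # memory resident: no fault
        faults += 1
        if len(frames) == n_frames:       # no free frame: evict the oldest
            frames.remove(queue.popleft())
        frames.add(page)
        queue.append(page)
    return faults

s = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(s, 3))                      # prints 15

# Belady's anomaly: more frames, *more* faults.
b = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(b, 3), fifo_faults(b, 4))   # prints 9 10
```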

Optimal Page Replacement Algorithm:


An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms. Such an
algorithm exists and has been called OPT or MIN. It is simply:
Replace the page that will not be used for the longest period of time.

Now consider the same string with 3 empty frames.


The reference to page 2 replaces page 7, because 7 will not be used until reference 18, whereas page 0
will be used at 5, and page 1 at 14. The reference to page 3 replaces page 1, as page 1 will be the last of the
three pages in memory to be referenced again. Optimal replacement is much better than a FIFO.
The optimal page-replacement algorithm is difficult to implement, because it requires future knowledge
of the reference string. As a result, the optimal algorithm is used mainly for comparison studies. For instance, it
may be useful to know that, although a new algorithm is not optimal, it is within 12.3 percent of optimal at worst
and within 4.7 percent on average.
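Because OPT needs the entire future reference string, it cannot be implemented in a real system, but it is easy to simulate offline. A sketch that counts its faults for the running example (nine faults with three frames):

```python
def opt_faults(refs, n_frames):
    """Count page faults under optimal (OPT/MIN) replacement,
    which requires future knowledge of the reference string."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == n_frames:
            future = refs[i + 1:]
            # Evict the page whose next use is farthest away (or never).
            victim = max(frames, key=lambda q: future.index(q)
                         if q in future else len(future) + 1)
            frames.remove(victim)
        frames.add(page)
    return faults

s = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(s, 3))   # prints 9: no algorithm can do better here
```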
LRU Algorithm:

The FIFO algorithm uses the time when a page was brought into memory; the OPT algorithm uses the
time when a page is to be used. LRU (least recently used) replacement replaces the page that has not been
used for the longest period of time.
LRU replacement associates with each page the time of that page's last use. We can think of this strategy
as the optimal page-replacement algorithm looking backward in time, rather than forward.
Like optimal replacement, LRU replacement does not suffer from Belady's anomaly. Let SR be the reverse of a
reference string S; then the page-fault rate for the OPT algorithm on S is the same as on SR, and the same
holds for the LRU algorithm.
Now consider the same string with 3 empty frames

The LRU algorithm produces twelve faults. The first five faults are the same as those for optimal
replacement. When the reference to page 4 occurs, however, LRU replacement sees that, of the three frames in
memory, page 2 was used least recently.

 LRU replacement, with twelve faults, is still much better than FIFO replacement with
fifteen.
 The problem is to determine an order for the frames defined by the time of last use. Two implementations are feasible:
o Counters: A time-of-use field is associated with each page-table entry, and a logical
clock or counter is added to the CPU. The clock is incremented for every memory reference,
and its value is copied into the time-of-use field whenever that page is referenced.
o Stack: Another approach to implementing LRU replacement is to keep a stack of page numbers.
Whenever a page is referenced, it is removed from the stack and put on the top.
Figure: Use of a stack to record the most recent page references.
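The stack implementation maps naturally onto an ordered dictionary: referencing a page moves it to the top, and the bottom entry is always the LRU victim. A sketch, reproducing the twelve faults discussed above:

```python
from collections import OrderedDict

def lru_faults(refs, n_frames):
    """Count page faults under LRU replacement, using an ordered dict
    as the 'stack' of page numbers."""
    stack, faults = OrderedDict(), 0
    for page in refs:
        if page in stack:
            stack.move_to_end(page)       # referenced: move to the top
            continue
        faults += 1
        if len(stack) == n_frames:
            stack.popitem(last=False)     # bottom of the stack = LRU page
        stack[page] = True
    return faults

s = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(s, 3))   # prints 12: between OPT (9) and FIFO (15)
```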

LRU Approximation Page Replacement Algorithms


 Few systems provide sufficient hardware support for true LRU replacement, so other page-replacement
algorithms must approximate it.
 Many systems provide some help, however, in the form of a reference bit.
 The reference bit for a page is set, by the hardware, whenever that page is referenced.
 A reference bit is associated with each entry in the page table.
o Initially, all bits are cleared (to 0) by the operating system.
o As a user process executes, the bit associated with each page referenced is set (to 1) by the hardware.

Counting-Based Page Replacement Algorithms:


There are many other algorithms that can be used for page replacement.
 LFU Algorithm: The least frequently used (LFU) page-replacement algorithm requires that the page with the
smallest count be replaced. This algorithm suffers from the situation in which a page is used heavily during the
initial phase of a process, but then is never used again.
 MFU Algorithm: The most frequently used (MFU) page-replacement algorithm is based on the argument that
the page with the smallest count was probably just brought in and has yet to be used.
Now consider the same string with 3 empty frames, assuming that a page's count restarts when it is
brought back into memory and that ties between equal counts are broken in FIFO order:
7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

Frame contents after each of the 13 page faults:
7 7 7 2 2 4 4 3 3 3 3 3 3
  0 0 0 0 0 0 0 0 0 0 0 0
    1 1 3 3 2 2 1 2 1 7 1

When the string ends, the resident pages are 0 (count 6), 3 (count 2), and 1 (count 1).
 When all frequencies are the same, we follow the FIFO algorithm.
 We have 13 page faults.
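The thirteen faults can be reproduced with a short LFU simulation. Note that the tie-breaking rule (FIFO here) and whether counts survive eviction (here they do not) are assumptions; varying them changes the count:

```python
def lfu_faults(refs, n_frames):
    """Count page faults under LFU replacement. Counts reset on
    eviction; ties are broken FIFO (the page loaded earliest loses)."""
    count, loaded, frames, faults = {}, {}, set(), 0
    for t, page in enumerate(refs):
        if page in frames:
            count[page] += 1
            continue
        faults += 1
        if len(frames) == n_frames:
            # Smallest count; among ties, the page loaded earliest.
            victim = min(frames, key=lambda q: (count[q], loaded[q]))
            frames.remove(victim)
        frames.add(page)
        count[page], loaded[page] = 1, t
    return faults

s = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lfu_faults(s, 3))   # prints 13 with these tie-breaking rules
```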
