Memory Management - II
Segmentation Hardware
Segmentation: Generalized Base/Bounds
To solve this problem, an idea was born, and it is called segmentation. It is quite
an old idea, going at least as far back as the very early 1960’s. The idea is
simple: instead of having just one base and bounds pair in our MMU, why not
have a base and bounds pair per logical segment of the address space? A
segment is just a contiguous portion of the address space of a particular length,
and in our canonical address space, we have three logically different
segments: code, stack, and heap. What segmentation allows the OS to do
is to place each one of those segments in different parts of physical
memory, and thus avoid filling physical memory with unused virtual
address space.
Let’s look at an example. Assume we want to place the address space from
Figure 1 into physical memory. With a base and bounds pair per segment, we
can place each segment independently in physical memory. For example, see
Figure 2; there you see a 64KB physical memory with those three segments in it
(and 16KB reserved for the OS). As you can see in the diagram, only used
memory is allocated space in physical memory, and thus large address
spaces with large amounts of unused address space (which we sometimes
call sparse address spaces) can be accommodated. The hardware
structure in our MMU required to support segmentation is just what you'd expect: in this case, a set of three base and bounds register pairs. Figure 16.3 below shows the register values for the example above; each bounds register holds the size of a segment.

Segment   Base   Size
Code      32K    2K
Heap      34K    2K
Stack     28K    2K
Figure 16.3: Segment Register Values

You can see from the figure that the code segment is placed at physical address 32KB and has a size of 2KB, and the heap segment is placed at 34KB and also has a size of 2KB.

Let's do an example translation, using the address space in Figure 1. Assume a reference is made to
virtual address 100 (which is in the code segment). When the reference takes place (say, on an instruction
fetch), the hardware will add the base value to the offset into this segment (100 in this case) to arrive at the
desired physical address: 100 + 32KB, or 32868. It will then check that the address is within bounds (100 is
less than 2KB), find that it is, and issue the reference to physical memory address 32868. Now let’s look at an
address in the heap, virtual address 4200 (again refer to Figure 1). If we just add the virtual address 4200 to the base of the heap (34KB), we get a
physical address of 39016, which is not the correct physical address. What we first need to do is extract the offset into the heap, i.e., which byte(s)
in this segment the address refers to. Because the heap starts at virtual address 4KB (4096), the offset of 4200 is actually 4200 – 4096 or 104. We
then take this offset (104) and add it to the base register physical address (34K or 34816) to get the desired result: 34920. What if we tried to refer
to an illegal address, such as 7KB which is beyond the end of the heap? You can imagine what will happen: the hardware detects that the address
is out of bounds, traps into the OS, likely leading to the termination of the offending process. And now you know the origin of the famous term
that all C programmers learn to dread: the segmentation violation or segmentation fault.
Which Segment Are We Referring To?
The hardware uses segment registers during translation. How does it know the offset into a segment, and to which segment an address refers? One
common approach, sometimes referred to as an explicit approach, is to chop up the address space into segments based on the top few bits of the
virtual address; this technique was used in the VAX/VMS system [LL82]. In our example above, we have three segments; thus we need two bits to
accomplish our task. If we use the top two bits of our 14-bit virtual address to select the segment, our virtual address looks like this:

    Segment |             Offset
    13   12 | 11 10 9 8 7 6 5 4 3 2 1 0
In our example, then, if the top two bits are 00, the hardware knows the virtual address is in the code segment, and thus uses the code base and
bounds pair to relocate the address to the correct physical location. If the top two bits are 01, the hardware knows the address is in the heap, and
thus uses the heap base and bounds. Let’s take our example heap virtual address from above (4200) and translate it, just to make sure this is clear.
The virtual address 4200, in binary form, can be seen here:

    01 0000 0110 1000
As you can see from the picture, the top two bits (01) tell the hardware which segment we are referring to. The bottom 12 bits are the offset into
the segment: 0000 0110 1000, or hex 0x068, or 104 in decimal. Thus, the hardware simply takes the first two bits to determine which segment
register to use, and then takes the next 12 bits as the offset into the segment. By adding the base register to the offset, the hardware arrives at the
final physical address. Note the offset eases the bounds check too: we can simply check if the offset is less than the bounds; if not, the address is
illegal. Thus, if base and bounds were arrays (with one entry per segment), the hardware would be doing something like this to obtain the desired
physical address:
// get top 2 bits of 14-bit VA
Segment = (VirtualAddress & SEG_MASK) >> SEG_SHIFT
// now get offset
Offset = VirtualAddress & OFFSET_MASK
if (Offset >= Bounds[Segment])
    RaiseException(PROTECTION_FAULT)
else
    PhysAddr = Base[Segment] + Offset
    Register = AccessMemory(PhysAddr)
In our running example, we can fill in values for the constants above. Specifically, SEG_MASK would be set to 0x3000, SEG_SHIFT to 12, and OFFSET_MASK to 0xFFF. You may also have noticed that when we use the top two bits, and we only have three segments (code, heap, stack),
one segment of the address space goes unused. Thus, some systems put code in the same segment as the heap and thus use only one bit to select
which segment to use. There are other ways for the hardware to determine which segment a particular address is in. In the implicit approach, the
hardware determines the segment by noticing how the address was formed. If, for example, the address was generated from the program counter
(i.e., it was an instruction fetch), then the address is within the code segment; if the address is based off of the stack or base pointer, it must be in
the stack segment; any other address must be in the heap.
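To make the explicit approach concrete, here is a minimal, self-contained C sketch (an illustration, not code from the original text): it hard-codes the register values of Figure 16.3 and the constants given above, and deliberately ignores the stack's backwards growth, which the next section covers.

#include <stdio.h>
#include <stdlib.h>

#define SEG_MASK    0x3000  /* top two bits of the 14-bit virtual address */
#define SEG_SHIFT   12
#define OFFSET_MASK 0x0FFF  /* bottom 12 bits: the offset into the segment */

/* Register values from Figure 16.3, in bytes; the index is the top two
   VA bits. Index 2 is unused; the stack (index 3) is listed without its
   negative-growth handling. */
static const unsigned Base[4]   = { 32 * 1024, 34 * 1024, 0, 28 * 1024 };
static const unsigned Bounds[4] = {  2 * 1024,  2 * 1024, 0,  2 * 1024 };

static unsigned translate(unsigned va) {
    unsigned seg    = (va & SEG_MASK) >> SEG_SHIFT;
    unsigned offset = va & OFFSET_MASK;
    if (offset >= Bounds[seg]) {            /* out of bounds? */
        fprintf(stderr, "segmentation fault: va %u\n", va);
        exit(EXIT_FAILURE);
    }
    return Base[seg] + offset;
}

int main(void) {
    printf("%u\n", translate(100));   /* code: 32768 + 100 = 32868 */
    printf("%u\n", translate(4200));  /* heap: 34816 + 104 = 34920 */
    return 0;
}

Running this prints 32868 and 34920, matching the two translations worked out earlier.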
What about the Stack?
Thus far, we’ve left out one important component of the address space: the stack. The stack has been relocated to physical address 28KB in the
diagram above, but with one critical difference: it grows backwards – in physical memory, it starts at 28KB and grows back to 26KB,
corresponding to virtual addresses 16KB to 14KB; translation has to proceed differently. The first thing we need is a little extra hardware support.
Instead of just base and bounds values, the hardware also needs to know which way the segment grows (a bit, for example, that is set to 1 when the
segment grows in the positive direction, and 0 for negative). Our updated view of what the hardware tracks is seen in Table 16.2.
Segment   Base   Size   Grows Positive?
Code      32K    2K     1
Heap      34K    2K     1
Stack     28K    2K     0
Table 2: Segment Registers (With Negative-Growth Support)

With the hardware understanding that segments can grow in the negative direction, the hardware must now translate such virtual addresses slightly differently. Let's take an example stack virtual address and translate it to understand the process. In this example, assume we wish to access virtual address 15KB, which should map to physical address 27KB. Our virtual address, in binary form, thus looks like this: 11 1100 0000 0000 (hex 0x3C00). The hardware uses the top two bits (11) to designate the segment, but then we are left with an offset of 3KB. To obtain the correct negative offset, we must subtract the maximum segment size from 3KB: in this example, a segment can be 4KB, and thus the correct negative offset is 3KB - 4KB, which equals -1KB. We simply add the negative offset (-1KB) to the base (28KB) to arrive at the correct physical address: 27KB. The bounds check can be calculated by ensuring the absolute value of the negative offset is less than the segment's size.
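In code, the negative-growth case might look like the following hypothetical helper (not from the original text; MAX_SEG_SIZE is the 4KB maximum segment size assumed in the example, and the bounds check uses <= so that the lowest valid stack address, whose negative offset equals the segment size exactly, is admitted):

#include <assert.h>

#define MAX_SEG_SIZE (4 * 1024)  /* maximum segment size in the example */

/* Translate an in-segment offset, honoring the grows-positive bit. */
static unsigned seg_translate(unsigned base, unsigned size,
                              int grows_positive, unsigned offset) {
    if (grows_positive) {
        assert(offset < size);               /* the usual bounds check */
        return base + offset;
    }
    /* e.g., offset 3KB with a 4KB maximum gives negative offset -1KB */
    int neg_offset = (int)offset - MAX_SEG_SIZE;
    assert((unsigned)(-neg_offset) <= size); /* |negative offset| within size */
    return (unsigned)((int)base + neg_offset);  /* 28KB + (-1KB) = 27KB */
}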
Support for Sharing
As support for segmentation grew, system designers soon realized that they could achieve new types of efficiencies with a little more hardware support. Specifically, to save memory, sometimes it is useful to share certain memory segments between address spaces. In particular, code
sharing is common and still in use in systems today. To support sharing, we need a little extra support from the hardware, in the form of
protection bits. Basic support adds a few bits per segment, indicating whether or not a program can read or write a segment, or perhaps execute
code that lies within the segment. By setting a code segment to read-only, the same code can be shared across multiple processes, without worry of
harming isolation; while each process still thinks that it is accessing its own private memory, the OS is secretly sharing memory which cannot be
modified by the process, and thus the illusion is preserved. An example of the additional information tracked by the hardware (and OS) is shown in
Figure 3. As you can see, the code segment is set to read and execute, and thus the same physical segment in memory could be mapped into multiple virtual address spaces.

Segment   Base   Size   Grows Positive?   Protection
Code      32K    2K     1                 Read-Execute
Heap      34K    2K     1                 Read-Write
Stack     28K    2K     0                 Read-Write
Figure 3: Segment Register Values (with Protection)

With protection bits, the hardware algorithm described earlier would also have to change. In addition to checking whether a virtual address is within bounds, the hardware also has to check whether a particular access is permissible. If a user process tries to write to a read-only segment, or execute from a non-executable segment, the hardware should raise an exception, and thus let the OS deal with the offending process.
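A small illustration of the extra check (the names and bit encoding here are assumptions, not from the original text):

#define PROT_READ    0x1
#define PROT_WRITE   0x2
#define PROT_EXEC    0x4

/* Per-segment protection mirroring Figure 3: code RX, heap RW, stack RW;
   index 2 is the unused segment. */
static const int Protection[4] = {
    PROT_READ | PROT_EXEC,   /* code   */
    PROT_READ | PROT_WRITE,  /* heap   */
    0,                       /* unused */
    PROT_READ | PROT_WRITE,  /* stack  */
};

/* Returns nonzero if every requested access type is permitted. */
static int can_access(unsigned seg, int access) {
    return (Protection[seg] & access) == access;
}

The hardware would apply such a check alongside the bounds check; a failure raises a protection fault rather than a segmentation fault.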
A segment table maps segment-offset addresses to physical addresses, and simultaneously checks for invalid addresses, using a system similar to the page tables and relocation base registers discussed previously. (Note that at this point in the discussion of segmentation, each segment is kept in contiguous memory and may be of different sizes, but that segmentation can also be combined with paging.)
IA-32 Segmentation: The Pentium CPU provides both pure segmentation and segmentation combined with paging.

Segment: A variable-length block of data that resides in secondary memory. An entire segment may temporarily be copied into an available region of main memory (segmentation), or the segment may be divided into pages, which can be individually copied into main memory (combined segmentation and paging).

Figures: Examples of Logical-to-Physical Address Translation; Typical Memory Management Formats.
We see, then, that the effective access time is directly proportional to the page-fault rate. With a 200-nanosecond memory-access time, an 8-millisecond average page-fault service time, and a page-fault probability p, the effective access time is

(1 − p) × 200 + p × 8,000,000 = 200 + 7,999,800 × p nanoseconds.

If one access out of 1,000 causes a page fault, the effective access time is 8.2 microseconds. The computer will be slowed down by a factor of 40 because of demand paging! If we want performance degradation to be less than 10 percent, we need

220 > 200 + 7,999,800 × p,
20 > 7,999,800 × p,
p < 0.0000025.
That is, to keep the slowdown due to paging at a reasonable level, we can allow fewer than one memory access out of 399,990 to page-fault. In
sum, it is important to keep the page-fault rate low in a demand-paging system. Otherwise, the effective access time increases, slowing process
execution dramatically.
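A throwaway C snippet to sanity-check the arithmetic (using the figures from the text: a 200-nanosecond memory access and an 8-millisecond page-fault service time):

#include <stdio.h>

int main(void) {
    const double mem_ns   = 200.0;       /* memory-access time           */
    const double fault_ns = 8000000.0;   /* page-fault service time      */
    double p   = 1.0 / 1000.0;           /* one fault per 1,000 accesses */
    double eat = (1.0 - p) * mem_ns + p * fault_ns;
    printf("effective access time = %.1f ns\n", eat);  /* 8199.8 ns, ~8.2 us */
    printf("slowdown = %.1fx\n", eat / mem_ns);  /* ~41x; the text rounds to 40 */
    return 0;
}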
An additional aspect of demand paging is the handling and overall use of swap space. Disk I/O to swap space is generally faster than that to the file
system. It is faster because swap space is allocated in much larger blocks, and file lookups and indirect allocation methods are not used. The
system can therefore gain better paging throughput by copying an entire file image into the swap space at process startup and then performing
demand paging from the swap space. Another option is to demand pages from the file system initially but to write the pages to swap space as they
are replaced. This approach will ensure that only needed pages are read from the file system but that all subsequent paging is done from swap
space.
Some systems attempt to limit the amount of swap space used through demand paging of binary files. Demand pages for such files are brought
directly from the file system. However, when page replacement is called for, these frames can simply be overwritten (because they are never
modified), and the pages can be read in from the file system again if needed. Using this approach, the file system itself serves as the backing store.
However, swap space must still be used for pages not associated with a file; these pages include the stack and heap for a process. This method
appears to be a good compromise and is used in several systems, including Solaris and BSD UNIX.
Characteristics of Paging and Segmentation:

Simple Paging and Virtual Memory Paging:
• Main memory partitioned into small fixed-size chunks called frames.
• Program broken into pages by the compiler or memory management system.
• Internal fragmentation within frames; no external fragmentation.
• Operating system must maintain a page table for each process showing which frame each page occupies.
• Operating system must maintain a free-frame list.
• Simple paging: all the pages of a process must be in main memory for the process to run, unless overlays are used. Virtual memory paging: not all pages of a process need be in main memory for the process to run; pages may be read in as needed.

Simple Segmentation and Virtual Memory Segmentation:
• Main memory not partitioned.
• Program segments specified by the programmer to the compiler (i.e., the decision is made by the programmer).
• No internal fragmentation; external fragmentation.
• Operating system must maintain a segment table for each process showing the load address and length of each segment.
• Operating system must maintain a list of free holes in main memory.
• Simple segmentation: all the segments of a process must be in main memory for the process to run, unless overlays are used. Virtual memory segmentation: not all segments of a process need be in main memory for the process to run; segments may be read in as needed.
Address Translation in a Segmentation/Paging System

This sequence should look familiar; it is virtually identical to what we saw before with linear page tables. The only difference, of course, is the use of one of three segment base registers instead of the single page table base register. The critical difference in our hybrid scheme is the presence of a bounds register per segment; each bounds register holds the value of the maximum valid page in the segment. For example, if the code segment is using its first three pages (0, 1, and 2), the code segment page table will only have three entries allocated to it and the bounds register will be set to 3; memory accesses beyond the end of the segment will generate an exception and likely lead to the termination of the process. In this manner, our hybrid approach realizes a significant memory savings compared to the linear page table; unallocated pages between the stack and the heap no longer take up space in a page table (just to mark them as not valid). However, as you might notice, this approach is not without problems. First, it still requires us to use segmentation; as we discussed before, segmentation is not quite as flexible as we would like, as it assumes a certain usage pattern of the address space; if we have a large but sparsely-used heap, for example, we can still end up with a lot of page table waste. Second, this hybrid causes external fragmentation to arise again. While most of memory is managed in page-sized units, page tables now can be of arbitrary size (in multiples of PTEs). Thus, finding free space for them in memory is more complicated. For these reasons, people continued to look for better approaches to implementing smaller page tables.
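The code sequence the paragraph refers to is not reproduced in these notes; the following C sketch of the segment-based page-table lookup is modeled on the linear-page-table flow of Figure 21.2 below (the masks, shifts, and PTE size are illustrative assumptions, not values from the text):

#include <stdint.h>

#define SEG_MASK  0x3000u  /* assumed: top 2 bits select the segment  */
#define SN_SHIFT  12
#define VPN_MASK  0x0FC0u  /* assumed: next 6 bits are the VPN        */
#define VPN_SHIFT 6
#define PTE_SIZE  4u       /* assumed page-table entry size, in bytes */

/* Compute the physical address of the PTE for a virtual address, given
   per-segment page-table base and bounds registers. Returns 0 where real
   hardware would raise a segmentation fault. */
static uint32_t pte_addr(uint32_t va,
                         const uint32_t base[4], const uint32_t bounds[4]) {
    uint32_t sn  = (va & SEG_MASK) >> SN_SHIFT;   /* segment number        */
    uint32_t vpn = (va & VPN_MASK) >> VPN_SHIFT;  /* virtual page number   */
    if (vpn >= bounds[sn])                        /* beyond max valid page */
        return 0;
    return base[sn] + vpn * PTE_SIZE;
}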
What If Memory Is Full? In the process described above, you may notice that we assumed there is plenty of free memory in which to page in a page
from swap space. Of course, this may not be the case; memory may be full (or close to it). Thus, the OS might like to first page out one or more pages to make
room for the new page(s) the OS is about to bring in. The process of picking a page to kick out, or replace, is known as the page-replacement policy. As it turns
out, a lot of thought has been put into creating a good page replacement policy, as kicking out the wrong page can exact a great cost on program performance.
Making the wrong decision can cause a program to run at disk-like speeds instead of memory-like speeds; in current technology that means a program could run
10,000 or 100,000 times slower. Thus, such a policy is something we should study in some detail; indeed, that is exactly what we will do in the next section. For
now, it is good enough to understand that such a policy exists, built on top of the mechanisms described here.
1  VPN = (VirtualAddress & VPN_MASK) >> SHIFT
2  (Success, TlbEntry) = TLB_Lookup(VPN)
3  if (Success == True)    // TLB Hit
4      if (CanAccess(TlbEntry.ProtectBits) == True)
5          Offset   = VirtualAddress & OFFSET_MASK
6          PhysAddr = (TlbEntry.PFN << SHIFT) | Offset
7          Register = AccessMemory(PhysAddr)
8      else
9          RaiseException(PROTECTION_FAULT)
10 else                    // TLB Miss
11     PTEAddr = PTBR + (VPN * sizeof(PTE))
12     PTE = AccessMemory(PTEAddr)
13     if (PTE.Valid == False)
14         RaiseException(SEGMENTATION_FAULT)
15     else
16         if (CanAccess(PTE.ProtectBits) == False)
17             RaiseException(PROTECTION_FAULT)
18         else if (PTE.Present == True)
19             // assuming hardware-managed TLB
20             TLB_Insert(VPN, PTE.PFN, PTE.ProtectBits)
21             RetryInstruction()
22         else if (PTE.Present == False)
23             RaiseException(PAGE_FAULT)
Figure 21.2: Page-Fault Control Flow Algorithm (Hardware)
Page Fault Control Flow
With all of this knowledge in place, we can now roughly sketch the complete control flow of memory access. In other words, when somebody asks you “what
happens when a program fetches some data from memory?” you should have a pretty good idea of all the different possibilities. See the control flow in Figures
21.2 and 21.3 for more details; the first figure shows what the hardware does during translation, and the second what the OS does upon a page fault. From the
hardware control flow diagram in Figure 21.2, notice that there are now three important cases to understand when a TLB miss occurs. First, that the page was
both present and valid (Lines 18–21); in this case, the TLB miss handler can simply grab the PFN from the PTE, retry the instruction (this time resulting in a TLB
hit), and thus continue as described (many times) before. In the second case (Lines 22–23), the page fault handler must be run; although this was a legitimate
page for the process to access (it is valid, after all), it is not present in physical memory. Third (and finally), the access could be to an invalid page, due for
example to a bug in the program (Lines 13–14). In this case, no other bits in the PTE really matter; the hardware traps this invalid access, and the OS trap handler
runs, likely terminating the offending process. From the software control flow in Figure 21.3, we can see roughly what the OS must do in order to service the page
fault. First, the OS must find a physical frame for the soon-to-be-faulted-in page to reside within; if there is no such page, we’ll have to wait for the replacement
algorithm to run and kick some pages out of memory, thus freeing them for use here.
PFN = FindFreePhysicalPage()
if (PFN == -1)                  // no free page found
    PFN = EvictPage()           // run replacement algorithm
DiskRead(PTE.DiskAddr, PFN)     // sleep (waiting for I/O)
PTE.Present = True              // update page table with present
PTE.PFN = PFN                   // bit and translation (PFN)
RetryInstruction()              // retry instruction
Figure 21.3: Page-Fault Control Flow Algorithm (Software)
With a physical frame in hand, the handler then issues the I/O request to read in the page from swap space. Finally, when that slow operation completes, the OS
updates the page table and retries the instruction. The retry will result in a TLB miss, and then, upon another retry, a TLB hit, at which point the hardware will be
able to access the desired item.
When Replacements Really Occur
Thus far, the way we’ve described how replacements occur assumes that the OS waits until memory is entirely full, and only then replaces (evicts) a page to
make room for some other page. As you can imagine, this is a little bit unrealistic, and there are many reasons for the OS to keep a small portion of memory free
more proactively. To keep a small amount of memory free, most operating systems thus have some kind of high watermark (HW) and low watermark (LW) to
help decide when to start evicting pages from memory. How this works is as follows: when the OS notices that there are fewer than LW pages available, a
background thread that is responsible for freeing memory runs. The thread evicts pages until there are HW pages available. The background thread, sometimes
called the swap daemon or page daemon, then goes to sleep, happy that it has freed some memory for running processes and the OS to use. By performing a
number of replacements at once, new performance optimizations become possible. For example, many systems will cluster or group a number of pages and
write them out at once to the swap partition, thus increasing the efficiency of the disk [LL82]; as we will see later when we discuss disks in more detail, such
clustering reduces seek and rotational overheads of a disk and thus increases performance noticeably. To work with the background paging thread, the control
flow in Figure 21.3 should be modified slightly; instead of performing a replacement directly, the algorithm would instead simply check if there are any free
pages available. If not, it would signal the background paging thread that free pages are needed; when the thread frees up some pages, it would re-awaken
the original thread, which could then page in the desired page and go about its work.
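A sketch of the watermark logic (the kernel hooks declared extern here are hypothetical; real kernels differ considerably in detail):

#define LOW_WATERMARK  64   /* assumed thresholds, in pages */
#define HIGH_WATERMARK 128

/* Hypothetical hooks a real kernel would supply. */
extern int  free_page_count(void);
extern void evict_one_page(void);
extern void sleep_until_woken(void);       /* allocator wakes us below LW */
extern void wake_waiting_allocators(void);

/* Page-daemon loop: sleep until free memory dips below the low watermark,
   then evict (possibly clustering the writes) until the high watermark
   is reached again. */
void page_daemon(void) {
    for (;;) {
        sleep_until_woken();
        while (free_page_count() < HIGH_WATERMARK)
            evict_one_page();
        wake_waiting_allocators();          /* blocked allocations retry */
    }
}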
Other VM Policies
Page replacement is not the only policy the VM subsystem employs (though it may be the most important). For example, the OS also has to decide when to bring
a page into memory. This policy, sometimes called the page selection policy (as it was called by Denning), presents the OS with some different options. For most
pages, the OS simply uses demand paging, which means the OS brings the page into memory when it is accessed, “on demand” as it were. Of course, the OS
could guess that a page is about to be used, and thus bring it in ahead of time; this behavior is known as prefetching and should only be done when there is a
reasonable chance of success. For example, some systems will assume that if a code page P is brought into memory, then code page P+1 will likely soon be
accessed and thus should be brought into memory too. Another policy determines how the OS writes pages out to disk. Of course, they could simply be written
out one at a time; however, many systems instead collect a number of pending writes together in memory and write them to disk in one (more efficient) write.
This behavior is usually called clustering or simply grouping of writes, and is effective because of the nature of disk drives, which perform a single large write
more efficiently than many small ones.
Thrashing
Before closing, we address one final question: what should the OS do when memory is simply oversubscribed, and the memory demands of the set of running processes simply exceed the available physical memory? In this case, the system will constantly be paging, a condition sometimes referred to as thrashing. Some earlier operating systems had a fairly sophisticated set of mechanisms to both detect and cope with thrashing when it took place. For example, given a set of processes, a system could decide not to run a subset of processes, with the hope that the reduced set of processes' working sets (the pages that they are using actively) fit in memory and thus can make progress. This approach, generally known as admission control, states that it is sometimes better to do less work well than to try to do everything at once poorly, a situation we often encounter in real life as well as in modern computer systems (sadly). Some current systems take a more draconian approach to memory overload. For example, some versions of Linux run an out-of-memory killer when memory is oversubscribed; this daemon chooses a memory-intensive process and kills it, thus reducing memory in a none-too-subtle manner. While successful at reducing memory pressure, this approach can have problems if, for example, it kills the X server and thus renders any applications requiring the display unusable.
Thrashing
• If a process cannot maintain its minimum required number of frames, then it must be swapped out, freeing up frames for other
processes. This is an intermediate level of CPU scheduling.
• But what about a process that can keep its minimum, but cannot keep all of the frames that it is currently using on a regular basis? In
this case it is forced to page out pages that it will need again in the very near future, leading to large numbers of page faults.
• A process that is spending more time paging than executing is said to be thrashing.
9.6.1 Cause of Thrashing
• Early process scheduling schemes would control the level of multiprogramming allowed based on CPU utilization, adding in more
processes when CPU utilization was low.
• The problem is that when memory filled up and processes started spending lots of time waiting for their pages to page in, then CPU utilization would lower, causing the scheduler to add in even more processes and exacerbating the problem! Eventually the system would essentially grind to a halt.
• Local page replacement policies can prevent one thrashing process from taking pages away from other processes, but it still tends to clog up the I/O queue, thereby slowing down any other process that needs to do even a little bit of paging (or any other I/O, for that matter).
• To prevent thrashing we must provide processes with as many frames as
they really need "right now", but how do we know what that is?
• The locality model notes that processes typically make memory references within a given locality, issuing lots of references to the same general area of memory before moving periodically to a new locality, as shown in Figure 9.19 below. If we could just keep as many frames as are involved in the current locality, then page faulting would occur primarily on switches from one locality to another (e.g., when one function exits and another is called).
9.3 Copy-on-Write
• The idea behind a copy-on-write fork is that the pages for a parent process do not have to be actually copied for the child until one or the other of the processes changes the page. They can be simply shared between the two processes in the meantime, with a bit set to indicate that the page needs to be copied if it ever gets written to. This is a reasonable approach, since the child process usually issues an exec() system call immediately after the fork, as the sketch below illustrates.
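A minimal POSIX C sketch of that fork-then-exec pattern; because the child replaces its address space immediately, almost none of the parent's copy-on-write pages are ever actually copied:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();       /* pages are shared copy-on-write, not copied */
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* Child: exec immediately; the shared (copy-on-write) pages are
           simply discarded, so the copies never happen. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");     /* reached only if exec fails */
        _exit(1);
    }
    waitpid(pid, NULL, 0);    /* parent waits for the child */
    return 0;
}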
Figure 4-24. Belady’s anomaly. (a) FIFO with three page frames. (b) FIFO with four page frames. The P’s show which page references cause
page faults.
Behaviour of Four Page Replacement Algorithms. The example assumes a fixed frame allocation (fixed resident set size) for this process of
three frames. The execution of the process requires reference to five distinct pages. The page address stream formed by executing the program is 2 3 2 1 5 2 4 5 3 2 5 2, which means that the first page referenced is 2, the second page referenced is 3, and so on. The optimal policy produces three page faults after the frame allocation has been filled.
The placement policy determines where in real memory a process piece is to reside. In a pure segmentation system, policies such as best-fit, first-fit, and so on are possible alternatives. However, for a system that uses either pure paging or paging combined with segmentation, placement is usually irrelevant.

Resident Set Management:
1. How many page frames are to be allocated to each active process.
2. Whether the set of pages to be considered for replacement should be limited to those of the process that caused the page fault or encompass all the page frames in main memory.

Replacement policy: Among the set of pages considered, which particular page should be selected for replacement.

Frame Locking: Some of the frames in main memory may be locked. When a frame is locked, the page currently stored in that frame may not be replaced.
time 0 1 2 3 4 5 6 7 8 9 10
ω c a d b e b a b c d
frame 0 a/00 a/00 a/11 a/11 a/11 a/01 a/01 a/11 a/11 a/01 a/01
frame 1 b/00 b/00 b/00 b/00 b/11 b/01 b/11 b/11 b/11 b/01 b/01
frame 2 c/00 c/10 c/10 c/10 c/10 e/10 e/10 e/10 e/10 e/00 d/10
frame 3 d/00 d/00 d/00 d/10 d/10 d/00 d/00 d/00 d/00 c/10 c/00
page fault 1 2 3
page(s) loaded e c d
page(s) removed c d e
Clock This policy is similar to LRU and FIFO. Whenever a page is referenced, the use bit is set. When a page must be
replaced, the algorithm begins with the page frame pointed to. If the frame’s use bit is set, it is cleared and the pointer
advanced. If not, the page in that frame is replaced. Here the number after the page is the use bit; we’ll assume all
pages have been referenced initially.
time 0 1 2 3 4 5 6 7 8 9 10
ω c a d b e b a b c d
frame 0 a/0 →a/0 →a/1 →a/1 →a/1 e/1 e/1 e/1 e/1 →e/1 d/1
frame 1 b/0 b/0 b/0 b/0 b/1 →b/0 →b/1 b/0 b/1 b/1 b/0
frame 2 c/0 c/1 c/1 c/1 c/1 c/0 c/0 a/1 a/1 a/1 a/0
frame 3 d/0 d/0 d/0 d/1 d/1 d/0 d/0 →d/0 →d/0 c/1 c/0
page fault 1 2 3 4
page(s) loaded e a c d
page(s) removed a c d e
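A compact C sketch of the replacement step just described (the frame count and data structures are illustrative); combined with setting the use bit on every hit, it reproduces the evictions in the table above (a, then c, d, e):

#define NFRAMES 4

typedef struct { int page; int use; } ClockFrame;

static ClockFrame frames[NFRAMES];  /* the resident set  */
static int hand = 0;                /* the clock pointer */

/* On a page fault, sweep the hand: clear each set use bit as it passes,
   and replace the first frame whose use bit is already clear. */
int clock_replace(int new_page) {
    for (;;) {
        if (frames[hand].use) {
            frames[hand].use = 0;          /* give a second chance */
            hand = (hand + 1) % NFRAMES;
        } else {
            int victim = frames[hand].page;
            frames[hand].page = new_page;
            frames[hand].use  = 1;         /* referenced on load */
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
    }
}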
Second-chance Cyclic
This policy merges the clock algorithm and the NRU algorithm. Each page frame has a use and a modified bit. Whenever a
page is referenced, the use bit is set; whenever modified, the modify bit is set. When a page must be replaced, the algorithm
begins with the page frame pointed to. If the frame’s use bit and modify bit are set, the use bit is cleared and the pointer advanced;
if the use bit is set but the modify bit is not, the use bit is cleared and the pointer advanced; if the use bit is clear but the modify
bit is set, the modify bit is cleared (and the algorithm notes that the page must be copied out before being replaced; here, the
page is emboldened) and the pointer is advanced; if both the use and modify bits are clear, the page in that frame is replaced. In
the following, assume references at times 2, 4, and 7 are writes (represented by the bold page references). The two numbers
written after each page are the use and modified bits, respectively. Initially, all pages have been used but none are modified.
time 0 1 2 3 4 5 6 7 8 9 10
ω c a d b e b a b c d
frame 0 a/00 →a/00 →a/11 →a/11 →a/11 a/00 a/00 a/11 a/11 →a/11 a/00
frame 1 b/00 b/00 b/00 b/00 b/11 b/00 b/10 b/10 b/10 b/10 d/10
frame 2 c/00 c/10 c/10 c/10 c/10 e/10 e/10 e/10 e/10 e/10 →e/00
frame 3 d/00 d/00 d/00 d/10 d/10 →d/00 →d/00 →d/00 →d/00 c/10 c/00
fault 1 2 3
loaded e c d
removed c d e
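The same loop, extended with the modify bit as described above (again an illustrative sketch, not code from the source):

#include <stddef.h>

typedef struct {
    int page;       /* resident page id                            */
    int use;        /* set on every reference                      */
    int mod;        /* set on every write                          */
    int writeback;  /* noted: copy the page out before replacement */
} SCFrame;

/* Second-chance cyclic replacement: returns the victim page and installs
   new_page in its frame. */
int sc_replace(SCFrame f[], size_t n, size_t *hand, int new_page) {
    for (;;) {
        SCFrame *fr = &f[*hand];
        if (fr->use) {
            fr->use = 0;        /* use set (modify either way): clear use */
        } else if (fr->mod) {
            fr->mod = 0;        /* use clear, modify set: schedule copy-out */
            fr->writeback = 1;
        } else {                /* both clear: replace this page */
            int victim = fr->page;
            fr->page = new_page;
            fr->use  = 1;
            fr->mod  = 0;
            *hand = (*hand + 1) % n;
            return victim;
        }
        *hand = (*hand + 1) % n;
    }
}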
Variable Number of Frames
Working Set (WS)
This policy tries to keep all pages in a process' working set in memory. This table shows the pages constituting the working set
at each reference. Here, we take the working set to be that set of pages which has been referenced during the last τ = 4 units.
We also assume that a was referenced at time 0, d at time −1, and e at time −2. The window begins with the current reference.
time −2 −1 0 1 2 3 4 5 6 7 8 9 10
ω e d a c a d b e b a b c d
page a – – a a a a a a – a a a a
page b – – – – – – b b b b b b b
page c – – – c c c c – – – – c c
page d – d d d d d d d d – – – d
page e e e e e – – – e e e e – –
page fault 1 2 3 4 5 6
page(s) loaded c b e a c d
page(s) removed e c a d e
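A small C program that recomputes the working sets in the table above (the reference string covers times −2 through 10, with the window of τ = 4 references ending at the current reference):

#include <stdio.h>

#define TAU 4  /* working-set window, measured in references */

int main(void) {
    /* References for times -2..10, as in the table above. */
    const char refs[] = "edacadbebabcd";
    for (int t = 0; t < 13; t++) {            /* t = 0 is time -2 */
        int in_ws[26] = { 0 };
        for (int i = t - TAU + 1; i <= t; i++)
            if (i >= 0)
                in_ws[refs[i] - 'a'] = 1;     /* page seen in the window */
        printf("time %3d: { ", t - 2);
        for (int p = 0; p < 26; p++)
            if (in_ws[p])
                printf("%c ", 'a' + p);
        printf("}\n");
    }
    return 0;
}

Its output at each time step matches the membership columns of the table.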
Page Fault Frequency (PFF)
This approximation to the working set policy tries to keep page faulting to some prespecified range. If the time between the current
and the previous page fault exceeds some critical value p, then all pages not referenced between those page faults are removed.
This table shows the pages resident at each reference. Here, we take p = 2 units and assume that initially, a, d, and e are resident.
This example assumes the interval between page faults does not include the reference that caused the previous page fault.
time 0 1 2 3 4 5 6 7 8 9 10
ω c a d b e b a b c d
page a a a a a a a a a a a a
page b – – – – b b b b b – –
page c – c c c – – – – – c c
page d d d d d d d d d d – d
page e e e e e – e e e e – –
page fault 1 2 3 4 5
page(s) loaded c b e c d
page(s) removed c,e d,e