OS Course Material 9
OPERATING SYSTEMS
Undergraduate (S-1) Computer Science, USU
VIRTUAL MEMORY
Background
Demand Paging
Copy-on-Write
Page Replacement
Allocation of Frames
Thrashing
Memory-Mapped Files
Allocating Kernel Memory
Other Considerations
OBJECTIVES
To describe the benefits of a virtual memory system
To explain the concepts of demand paging, page-replacement
algorithms, and allocation of page frames
To discuss the principle of the working-set model
To examine the relationship between shared memory and
memory-mapped files
To explore how kernel memory is managed
BACKGROUND
Code needs to be in memory to execute, but the entire program is rarely
all used
Error code, unusual routines, large data structures
Actually, a given instruction could access multiple pages -> multiple page
faults
Consider fetch and decode of instruction which adds 2 numbers from memory and stores
result back to memory
Pain decreased because of locality of reference
Demand page in from program binary on disk, but discard rather than paging out when
freeing frame
Used in Solaris and current BSD
Still need to write to swap space
Pages not associated with a file (like stack and heap) – anonymous memory
Pages modified in memory but not yet written back to the file system
Mobile systems
Typically don’t support swapping
Instead, demand page from file system and reclaim read-only pages (such as code)
COPY-ON-WRITE
Copy-on-Write (COW) allows both parent and child processes to initially share the same
pages in memory
If either process modifies a shared page, only then is the page copied
COW allows more efficient process creation as only modified pages are copied
In general, free pages are allocated from a pool of zero-fill-on-demand pages
The pool should always have free frames, for fast demand-page execution
We don't want to have to free a frame, on top of the other page-fault processing, at fault time
Why zero out a page before allocating it? So the new owner cannot read data another process left in the frame
vfork(), a variation on the fork() system call, suspends the parent and lets the
child use the parent's address space directly (without copy-on-write)
Designed for the case where the child calls exec() immediately
Very efficient
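Copy-on-write is transparent to programs, but its observable effect can be sketched: after fork(), parent and child initially share pages, and a write by either side triggers a private copy, so the other side never sees the change. A minimal POSIX sketch (assumes a Unix-like system with os.fork()):

```python
import os

# Parent and child initially share this list's pages copy-on-write;
# a write by the child forces the kernel to copy the affected page.
data = [0] * 1000

pid = os.fork()
if pid == 0:          # child process
    data[0] = 99      # write -> kernel copies the page for the child
    os._exit(0)
else:                 # parent process
    os.waitpid(pid, 0)
    # The parent still sees its own, unmodified copy of the page
    assert data[0] == 0
```

Only the pages the child actually modifies get copied; the rest stay shared, which is why fork() followed by exec() is cheap.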
BEFORE PROCESS 1 MODIFIES PAGE C
AFTER PROCESS 1 MODIFIES PAGE C
WHAT HAPPENS IF THERE IS NO FREE FRAME?
1. Find the location of the desired page on disk
2. Find a free frame:
- If there is a free frame, use it
- If not, use a page-replacement algorithm to select a victim frame, and
write the victim frame to disk if it is dirty
3. Bring the desired page into the (newly) free frame; update the page
and frame tables
4. Continue the process by restarting the instruction that caused the trap
Note: now potentially 2 page transfers per page fault – increasing the EAT
(effective access time)
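The page-fault service routine can be sketched as a small simulation (a hedged sketch, with a FIFO victim queue and a `dirty` map standing in for the modify bits; names are illustrative). It shows why a single fault can cost two disk transfers:

```python
from collections import deque

def service_fault(page, frames, fifo, dirty, capacity):
    """Return the number of disk transfers needed to bring `page` in."""
    transfers = 1                     # read the desired page from disk
    if len(frames) >= capacity:       # no free frame -> pick a victim
        victim = fifo.popleft()
        frames.remove(victim)
        if dirty.pop(victim, False):  # victim was modified
            transfers += 1            # extra write-back transfer raises EAT
    frames.add(page)                  # update page/frame tables
    fifo.append(page)
    return transfers                  # then restart the faulting instruction

frames, fifo, dirty = set(), deque(), {}
t1 = service_fault(1, frames, fifo, dirty, capacity=2)
t2 = service_fault(2, frames, fifo, dirty, capacity=2)
dirty[1] = True                       # page 1 gets modified in memory
t3 = service_fault(3, frames, fifo, dirty, capacity=2)  # evicts dirty page 1
assert (t1, t2, t3) == (1, 1, 2)
```

Evicting the dirty page costs two transfers (write-back plus read-in), while evicting a clean page costs one.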
PAGE REPLACEMENT
PAGE AND FRAME REPLACEMENT ALGORITHMS
Page-replacement algorithm
Want the lowest page-fault rate, on both first access and re-access
Example (FIFO on the textbook's 20-reference string, 3 frames): 15 page faults
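The "15 page faults" figure can be reproduced with a short FIFO simulation (a sketch, assuming the slide refers to the standard textbook reference string with 3 frames):

```python
from collections import deque

def fifo_faults(refs, capacity):
    """Count page faults for FIFO replacement over a reference string."""
    frames, order, faults = set(), deque(), 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == capacity:
                frames.remove(order.popleft())  # evict oldest page
            frames.add(p)
            order.append(p)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
assert fifo_faults(refs, 3) == 15
```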
Stack implementation
Keep a stack of page numbers in a double link form:
Page referenced:
move it to the top
requires 6 pointers to be changed
But each update more expensive
No search for replacement
LRU and OPT are cases of stack algorithms that don’t have Belady’s
Anomaly
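The stack-based LRU described above can be sketched with an ordered map standing in for the doubly linked list (a hedged sketch; `OrderedDict.move_to_end` plays the role of the six pointer updates that move a page to the top):

```python
from collections import OrderedDict

class LRUStack:
    """Stack of page numbers: most recently used on top (last key)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.stack = OrderedDict()

    def reference(self, page):
        """Record a reference; return True on a page fault."""
        if page in self.stack:
            self.stack.move_to_end(page)    # move to top of the stack
            return False
        if len(self.stack) == self.capacity:
            self.stack.popitem(last=False)  # bottom = LRU victim, no search
        self.stack[page] = True
        return True

lru = LRUStack(3)
faults = sum(lru.reference(p) for p in [1, 2, 3, 1, 4])
assert faults == 4          # 1, 2, 3, 4 fault; the second 1 is a hit
assert 2 not in lru.stack   # page 2 was the LRU victim when 4 arrived
```

Each reference is more expensive (the stack must be updated), but the victim is found with no search: it is always the page at the bottom.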
USE OF A STACK TO RECORD MOST RECENT PAGE REFERENCES
LRU APPROXIMATION ALGORITHMS
LRU needs special hardware and still slow
Reference bit
With each page associate a bit, initially = 0
When page is referenced bit set to 1
Replace any with reference bit = 0 (if one exists)
We do not know the order, however
Second-chance algorithm
Generally FIFO, plus hardware-provided reference bit
Clock replacement
If page to be replaced has
Reference bit = 0 -> replace it
reference bit = 1 then:
set reference bit 0, leave page in memory
replace next page, subject to same rules
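The clock rules above can be sketched as a simulation (a hedged sketch with illustrative names; the "hand" sweeps the frames circularly, and a reference bit of 1 buys the page one more pass):

```python
class Clock:
    """Second-chance (clock) page replacement over a fixed set of frames."""
    def __init__(self, capacity):
        self.frames = [None] * capacity  # page number in each frame
        self.ref = [0] * capacity        # reference bits
        self.hand = 0                    # clock hand position

    def access(self, page):
        """Record an access; return True on a page fault."""
        if page in self.frames:
            self.ref[self.frames.index(page)] = 1  # hardware sets ref bit
            return False
        while True:
            if self.frames[self.hand] is None or self.ref[self.hand] == 0:
                self.frames[self.hand] = page      # replace the victim
                self.ref[self.hand] = 1
                self.hand = (self.hand + 1) % len(self.frames)
                return True
            self.ref[self.hand] = 0                # give a second chance
            self.hand = (self.hand + 1) % len(self.frames)

clock = Clock(3)
faults = sum(clock.access(p) for p in [1, 2, 3, 1, 4])
assert faults == 4
```

When every resident page has its reference bit set, the hand clears them all on one sweep and the algorithm degenerates to FIFO.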
SECOND-CHANCE (CLOCK) PAGE-REPLACEMENT ALGORITHM
ENHANCED SECOND-CHANCE ALGORITHM
Improve the algorithm by using the reference bit and the modify bit together
as an ordered pair (reference, modify):
(0, 0) neither recently used nor modified – best page to replace
(0, 1) not recently used but modified – must be written out before replacement
(1, 0) recently used but clean – probably will be used again soon
(1, 1) recently used and modified – probably will be used again soon, and
needs write-out before replacement
PAGE-BUFFERING ALGORITHMS
Keep a pool of free frames, always
Possibly, keep the free-frame contents intact and note what is in them
If referenced again before being reused, no need to load the contents again from disk
Generally useful to reduce the penalty if the wrong victim frame was selected
APPLICATIONS AND PAGE REPLACEMENT
The operating system can give some applications direct access to the disk,
getting out of their way
Raw disk mode
Many variations
FIXED ALLOCATION
Equal allocation – For example, if there are 100 frames (after
allocating frames for the OS) and 5 processes, give each process 20
frames
Keep some as free frame buffer pool
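The equal-allocation arithmetic can be sketched in a few lines (the `pool` parameter for the free-frame buffer is an illustrative assumption):

```python
def equal_allocation(m, n, pool=0):
    """Split m frames evenly among n processes, holding `pool` frames back."""
    usable = m - pool            # frames left after the free buffer pool
    return [usable // n] * n     # each process gets an equal share

# 100 frames, 5 processes -> 20 frames each, as in the example above
assert equal_allocation(100, 5) == [20, 20, 20, 20, 20]
```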
Local replacement – each process selects from only its own set of
allocated frames
More consistent per-process performance
But possibly underutilized memory
NON-UNIFORM MEMORY ACCESS
So far all memory accessed equally
Many systems are NUMA – speed of access to memory varies
Consider system boards containing CPUs and memory, interconnected over a system
bus
Working-set model: D = Σ WSSᵢ is the total demand for frames; if D > m
(the total number of frames), thrashing occurs, so suspend or swap out
one of the processes
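The working-set test behind "D > m ⇒ thrashing" is a one-line comparison (a sketch; the WSS values are illustrative):

```python
def thrashing(wss, m):
    """wss: per-process working-set sizes WSS_i; m: total frames."""
    D = sum(wss)      # total demand for frames
    return D > m      # demand exceeds supply -> thrashing

assert thrashing([10, 20, 15], 50) is False   # D = 45 <= 50 frames
assert thrashing([30, 25, 20], 50) is True    # D = 75 > 50 frames
```

When the test is positive, the policy is to suspend or swap out one of the processes rather than let every process steal frames from the others.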
MEMORY-MAPPED FILES
Memory-mapping a file simplifies and speeds file access by driving file I/O
through memory rather than through read() and write() system calls
Also allows several processes to map the same file allowing the pages
in memory to be shared
But when does written data make it to disk?
Periodically and / or at file close() time
For example, when the pager scans for dirty pages
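A minimal memory-mapped I/O sketch (the file path is illustrative): the file's pages are mapped into the address space, so a plain memory write replaces a write() call, and the data reaches the disk when the mapping is flushed or the pager writes the dirty page back:

```python
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)            # one page of backing file

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mem:
        mem[0:5] = b"hello"            # memory write, no write() call
        mem.flush()                    # force the dirty page to the file

with open(path, "rb") as f:
    assert f.read(5) == b"hello"       # the write reached the disk
```

Several processes mapping the same file this way share the pages in memory, which is how shared memory is commonly built on top of memory-mapped files.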
MEMORY-MAPPED FILE TECHNIQUE FOR ALL I/O