Chapter-04
Memory Management
• Memory hierarchy
– small amount of fast, expensive memory – cache
– some medium-speed, medium price main memory
– gigabytes of slow, cheap disk storage
No Abstraction Memory Model
Every program simply saw physical memory. When a program
executed an instruction like
MOV REGISTER1,1000
the computer moved the contents of physical memory location 1000 into REGISTER1.
Basic Memory Management
Monoprogramming without Swapping or Paging
Relocation & Protection problems solved with Base
& Limit Registers
Relocation and Protection
• Cannot be sure where program will be loaded in
memory
– address locations of variables, code routines cannot be absolute
– must keep a program out of other processes’ partitions (a base-and-limit sketch follows)
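A minimal sketch (not from the slides) of how base and limit registers solve both problems: every program-generated address is checked against the limit register for protection and then has the base register added for relocation. The base and limit values below are illustrative.

#include <stdio.h>
#include <stdlib.h>

unsigned base  = 0x4000;   /* assumed start of this process' partition */
unsigned limit = 0x2000;   /* assumed size of the partition            */

unsigned translate(unsigned virtual_addr) {
    if (virtual_addr >= limit) {           /* protection: outside the partition */
        fprintf(stderr, "protection fault at 0x%x\n", virtual_addr);
        exit(1);
    }
    return base + virtual_addr;            /* relocation: add the base register */
}

int main(void) {
    printf("virtual 0x100 -> physical 0x%x\n", translate(0x100));
    return 0;
}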
Swapping (1)
Swapping (2)
Memory Management with Linked Lists
Virtual Memory
Paging (1)
Paging (2)
Page Tables (1)
• MOV REG, 32780 (32768 + 12) causes a page fault: virtual page 8 is not mapped (a translation sketch follows these notes).
• What does the OS do when a page fault occurs?
• Example: suppose the OS decides to replace virtual page 1. It loads page 8 at physical address 4096 and
makes two changes: 1. the entry for page 1 is marked not present, and 2. the entry for page 8 is set to frame 1.
• Splitting a 16-bit virtual address (64-KB address space) with different page sizes:
16 = 4 + 12: page size 2^12 = 4 KB, number of pages 2^4 = 16
16 = 3 + 13: page size 2^13 = 8 KB, number of pages 2^3 = 8
16 = 5 + 11: page size 2^11 = 2 KB, number of pages 2^5 = 32
• Two issues with any paging system: the mapping from virtual to physical address must be fast, since it is done on every reference (e.g. MOV REG, 8192), and
the page table can be huge when the virtual address space is large (32- and 64-bit virtual addresses).
• What about the program counter? Does it hold a physical address or a virtual address?
• An array of hardware registers gives fast mapping, but it is not practical when the page table is large, and a process switch must reload every register.
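A minimal sketch of the 16 = 4 + 12 split above: the high 4 bits index the page table and the low 12 bits are copied through as the offset. The page table contents are made up; virtual page 8 is deliberately left unmapped so that MOV REG, 32780 faults, as on the slide, after which the OS would set its entry to frame 1.

#include <stdio.h>

#define OFFSET_BITS 12
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

/* Hypothetical page table: -1 means not present; page 8 is unmapped. */
int page_table[16] = { 2, 1, 6, 0, 4, 3, -1, -1, -1, 5, -1, 7, -1, -1, -1, -1 };

int translate(unsigned va, unsigned *pa) {
    unsigned vpn    = va >> OFFSET_BITS;     /* virtual page number */
    unsigned offset = va &  OFFSET_MASK;     /* offset within page  */
    if (page_table[vpn] < 0)
        return -1;                           /* page fault          */
    *pa = ((unsigned)page_table[vpn] << OFFSET_BITS) | offset;
    return 0;
}

int main(void) {
    unsigned pa;
    if (translate(32780, &pa) == 0)          /* virtual page 8, offset 12 */
        printf("32780 -> %u\n", pa);
    else
        printf("32780 -> page fault\n");     /* OS would then load page 8 into frame 1 */
    return 0;
}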
A process is an abstraction of the physical processor, and an address space is an
abstraction of physical memory.
With the page table in main memory, even a simple instruction that moves a word from
memory to a register requires two memory references: one for the page table entry and
one for the actual instruction or data fetch.
Most programs make a large number of references to a small number of pages. That
observation is the idea behind the TLB, or associative memory (hardware); a lookup sketch follows.
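A rough sketch of the TLB idea: consult a small associative table first and walk the page table in memory only on a miss, so most references avoid the extra memory access. The sizes, field names, and naive replacement choice below are assumptions for illustration.

#include <stdio.h>

#define TLB_SIZE 8

struct tlb_entry { int valid; unsigned vpn; unsigned frame; };
struct tlb_entry tlb[TLB_SIZE];

static int page_table[16] = { 2, 5, 6, 3, 4, 0, 1, 7, 8, 9, 10, 11, 12, 13, 14, 15 };

static unsigned page_table_walk(unsigned vpn) {      /* slow path: extra memory reference */
    return (unsigned)page_table[vpn];
}

unsigned lookup(unsigned vpn) {
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            printf("TLB hit for page %u\n", vpn);
            return tlb[i].frame;                     /* hit: no page-table access */
        }
    unsigned frame = page_table_walk(vpn);           /* miss: walk the page table */
    tlb[0] = (struct tlb_entry){ 1, vpn, frame };    /* naive replacement into slot 0 */
    return frame;
}

int main(void) {
    lookup(3);      /* first reference: TLB miss */
    lookup(3);      /* repeated reference: TLB hit, thanks to locality */
    return 0;
}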
Page Tables (2)
(Figure: a top-level page table whose entries point to second-level page tables; a two-level walk sketch follows.)
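A minimal sketch of a two-level walk, assuming the common 10 + 10 + 12 split of a 32-bit virtual address (the split is not shown on the slide): the top-level table points to second-level tables, which hold the page frame numbers.

#include <stdio.h>
#include <stdlib.h>

#define ENTRIES 1024                     /* 2^10 entries per table */

int *top_level[ENTRIES];                 /* each slot points to a second-level table, or NULL */

unsigned translate(unsigned va) {
    unsigned pt1    = (va >> 22) & 0x3FF;    /* index into the top-level table    */
    unsigned pt2    = (va >> 12) & 0x3FF;    /* index into the second-level table */
    unsigned offset =  va        & 0xFFF;    /* offset within the 4-KB page       */
    if (top_level[pt1] == NULL || top_level[pt1][pt2] < 0) {
        fprintf(stderr, "page fault at 0x%x\n", va);
        exit(1);
    }
    return ((unsigned)top_level[pt1][pt2] << 12) | offset;
}

int main(void) {
    static int second[ENTRIES];
    for (int i = 0; i < ENTRIES; i++) second[i] = -1;    /* nothing mapped yet            */
    second[3] = 7;                                       /* map virtual page 3 to frame 7 */
    top_level[0] = second;                               /* hook table under top entry 0  */
    printf("0x3ABC -> 0x%x\n", translate(0x3ABC));       /* prints 0x7ABC                 */
    return 0;
}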
TLBs – Translation Lookaside Buffers
Inverted Page Tables
Inverted Page Table:
One entry per page frame of real memory, rather than one entry per page of
virtual address space.
Example: 64-bit virtual addresses, 4-KB pages, and 4 GB of RAM.
The inverted page table contains only 2^20 = 1,048,576 entries, whereas a
normal page table would contain 2^52 entries.
Each entry keeps track of which (process, virtual page) is located in that page frame.
This saves a great deal of space, but virtual-to-physical translation becomes harder:
when process n references page p, the entire inverted page table must be searched
for an entry (n, p). To improve performance, a TLB can be used with the inverted table.
On a TLB miss the search is still needed, so a hash table hashed on the virtual address
is used to speed it up (a lookup sketch follows).
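A small sketch of that hash-based lookup: one entry per page frame, chained through a hash table keyed on the virtual page number. The table sizes and field names are made up for illustration.

#include <stdio.h>

#define NFRAMES  1024          /* toy number of frames, not the 2^20 of the example */
#define HASHSIZE 1024

struct frame_entry {
    int process;               /* which process owns this frame        */
    unsigned vpage;            /* which virtual page is stored in it   */
    int next;                  /* next frame with the same hash, or -1 */
};

struct frame_entry ipt[NFRAMES];
int hash_head[HASHSIZE];       /* hash(virtual page) -> first frame in chain */

int lookup(int process, unsigned vpage) {
    for (int f = hash_head[vpage % HASHSIZE]; f != -1; f = ipt[f].next)
        if (ipt[f].process == process && ipt[f].vpage == vpage)
            return f;          /* frame number */
    return -1;                 /* not in memory: page fault */
}

int main(void) {
    for (int i = 0; i < HASHSIZE; i++) hash_head[i] = -1;
    ipt[5] = (struct frame_entry){ .process = 1, .vpage = 42, .next = -1 };
    hash_head[42 % HASHSIZE] = 5;
    printf("process 1, page 42 -> frame %d\n", lookup(1, 42));
    return 0;
}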
Page Replacement Algorithms
• A page fault forces a choice: which page must be removed to make room for the incoming page?
Optimal Page Replacement Algorithm
• Replace page needed at the farthest point in future
– Optimal but unrealizable
• Estimate by …
– logging page use on previous runs of process
– although this is impractical
Not Recently Used Page Replacement Algorithm
FIFO Page Replacement Algorithm
• Maintain a linked list of all pages, in the order they came into memory; on a fault the page at the head of the list (the oldest) is replaced
• Disadvantage
– the page that has been in memory the longest may still be heavily used (a sketch follows)
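A minimal FIFO sketch (not from the slides): the frames form a circular queue and the page loaded longest ago is evicted on a fault, regardless of how often it is being used. The reference string and frame count are illustrative.

#include <stdio.h>

#define NFRAMES 3

int frames[NFRAMES] = { -1, -1, -1 };
int next_victim = 0;                   /* head of the FIFO queue */

int fifo_reference(int page) {
    for (int i = 0; i < NFRAMES; i++)
        if (frames[i] == page)
            return 0;                  /* hit: nothing to do */
    frames[next_victim] = page;        /* fault: evict the oldest page */
    next_victim = (next_victim + 1) % NFRAMES;
    return 1;                          /* count one page fault */
}

int main(void) {
    int refs[] = { 0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4 };
    int faults = 0;
    for (int i = 0; i < 12; i++) faults += fifo_reference(refs[i]);
    printf("page faults: %d\n", faults);   /* 9 faults with 3 frames */
    return 0;
}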
Second Chance Page Replacement Algorithm
The Clock Page Replacement Algorithm
Least Recently Used (LRU)
• Assume pages used recently will be used again soon
– throw out the page that has been unused for the longest time (a sketch follows)
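A sketch of exact LRU using a per-page timestamp, which hardware rarely provides cheaply: every reference records the time of use, and on a fault the page with the oldest timestamp is evicted. Frame count and reference string are illustrative.

#include <stdio.h>

#define NFRAMES 3

int frames[NFRAMES] = { -1, -1, -1 };
long last_use[NFRAMES];
long now = 0;

int lru_reference(int page) {
    int oldest = 0;
    now++;
    for (int i = 0; i < NFRAMES; i++) {
        if (frames[i] == page) { last_use[i] = now; return 0; }   /* hit: refresh timestamp */
        if (last_use[i] < last_use[oldest]) oldest = i;           /* track least recently used */
    }
    frames[oldest]   = page;           /* fault: evict the least recently used page */
    last_use[oldest] = now;
    return 1;
}

int main(void) {
    int refs[] = { 0, 1, 2, 0, 3, 0, 4 };
    int faults = 0;
    for (int i = 0; i < 7; i++) faults += lru_reference(refs[i]);
    printf("page faults: %d\n", faults);
    return 0;
}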
Simulating LRU in Software (2)
Working Set Page Replacement Algorithm:
1. Demand paging: processes are started with none of their pages in memory,
so page faults load the required pages (high page-fault rate at first).
After some time most of the needed pages are in memory (low page-fault rate).
Pages are loaded only on demand, not in advance.
2. Locality of reference: during any phase of execution, the process references only
a relatively small fraction of its pages. Example: a multipass compiler.
3. Working set: the set of pages that a process is currently using is its working set.
If the entire working set is in memory, few page faults occur.
4. Thrashing: a program causing page faults every few instructions is said to be thrashing.
5. In multiprogramming, processes are often moved to disk so that others can execute.
When such a process is brought back into memory, nothing special is done: the
process simply causes page faults until its working set has been loaded.
6. So many paging systems try to keep track of each process’ working set and make
sure it is in memory before letting the process run. This is the working-set
model.
7. Prepaging: loading the pages before letting a process run is also called prepaging.
Programs rarely reference their address space uniformly; the references tend to
cluster on a small number of pages.
A memory reference may fetch an instruction, fetch data, or store data.
At any instant of time t, there exists a set consisting of all the pages used by the
k most recent memory references. This set w(k,t) is the working set.
Because the k+1 most recent memory references must have used all the pages used
by the k most recent references, w(k,t) is a monotonically nondecreasing
function of k.
The limit of w(k,t) as k becomes large is finite, because a program cannot
reference more pages than its address space contains, and few programs
use every single page.
Thus the contents of the working set are not sensitive to the exact value of k chosen.
Algorithm: the OS keeps track of which pages are in the working set. When a page
fault occurs, it finds a page not in the working set and evicts it.
Some value of k must be chosen in advance.
Implementation using a shift register of size k: shift left on every memory
reference and insert the most recently referenced page number on the right. At a page fault,
the shift register is read out and sorted, then duplicate pages are removed; what
remains is the working set (a sketch follows).
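A small sketch of the shift-register approximation just described, with an arbitrary k = 8: every reference shifts the register and appends the page number, and at a fault membership in the register stands in for membership in the working set.

#include <stdio.h>

#define K 8

int shift_reg[K];                       /* last K page numbers referenced */

void reference(int page) {
    for (int i = 0; i < K - 1; i++)     /* shift left on every reference  */
        shift_reg[i] = shift_reg[i + 1];
    shift_reg[K - 1] = page;            /* newest reference on the right  */
}

int in_working_set(int page) {          /* membership test used at a page fault */
    for (int i = 0; i < K; i++)
        if (shift_reg[i] == page)
            return 1;
    return 0;
}

int main(void) {
    int refs[] = { 3, 3, 7, 1, 3, 7, 7, 2 };
    for (int i = 0; i < 8; i++) reference(refs[i]);
    printf("page 1 in working set: %d\n", in_working_set(1));
    printf("page 9 in working set: %d\n", in_working_set(9));
    return 0;
}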
The Working Set Page Replacement Algorithm (1)
• The working set is the set of pages used by the k most recent memory references
• w(k,t) is the size of the working set at time t
One approximation is to drop the idea of counting back k memory references
and use execution time instead.
Example: instead of defining the working set as the pages used during the previous 10
million memory references, we can define it as the set of pages used
during the past 100 msec of execution time.
Current virtual time: if a process starts at T and has had 40 msec of CPU time
by real time T + 100 msec, then for working-set purposes its time is 40 msec.
Algorithm: each page table entry contains the time the page was last used and an R bit.
The R bit works as usual; every clock tick it is cleared.
On every page fault, the page table is scanned: a page with R = 1 has its time of last use
set to the current virtual time; a page with R = 0 whose age exceeds the working-set window
is evicted; a page with R = 0 that is still young is remembered as a fallback candidate.
Worst case: if all pages have R = 1, one is chosen at random for removal,
preferably a clean page, if one exists (by checking the M bit). A sketch of this scan follows.
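A hedged sketch of that page-fault scan; the field names, the value of TAU, and the deterministic tie-breaking in the worst case are assumptions for illustration, not a faithful kernel implementation.

#include <stdio.h>

#define NPAGES 16
#define TAU    100            /* working-set window in virtual-time units (assumed) */

struct pte { int present, R, M; long last_use; };
struct pte pt[NPAGES];
long current_virtual_time = 500;

int choose_victim(void) {
    int oldest = -1;
    for (int i = 0; i < NPAGES; i++) {
        if (!pt[i].present) continue;
        if (pt[i].R) {
            pt[i].last_use = current_virtual_time;   /* referenced: refresh its time  */
        } else if (current_virtual_time - pt[i].last_use > TAU) {
            return i;                                /* old and unreferenced: evict   */
        } else if (oldest == -1 || pt[i].last_use < pt[oldest].last_use) {
            oldest = i;                              /* young but unreferenced: note  */
        }
    }
    if (oldest != -1) return oldest;                 /* evict the oldest unreferenced page */
    for (int i = 0; i < NPAGES; i++)                 /* worst case: every page had R = 1   */
        if (pt[i].present && !pt[i].M) return i;     /* prefer a clean page                */
    for (int i = 0; i < NPAGES; i++)
        if (pt[i].present) return i;                 /* otherwise any resident page        */
    return -1;
}

int main(void) {
    pt[2] = (struct pte){ 1, 0, 0, 300 };            /* unreferenced, older than TAU */
    pt[5] = (struct pte){ 1, 1, 1, 450 };
    printf("victim: page %d\n", choose_victim());    /* prints 2 */
    return 0;
}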
The Working Set Page Replacement Algorithm (2)
The WSClock Page Replacement Algorithm
Modeling Page Replacement Algorithms
Belady's Anomaly
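A commonly cited worked example (not from the slide): running FIFO on the reference string 0 1 2 3 0 1 4 0 1 2 3 4 gives 9 page faults with 3 page frames but 10 page faults with 4 frames, so adding memory makes this program fault more often.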
The Distance String
Design Issues for Paging Systems
Local versus Global Allocation Policies (1)
• Original configuration
• Local page replacement
• Global page replacement
Local versus Global Allocation Policies (2)
Load Control
• Despite good designs, system may still thrash
• Solution: reduce the number of processes competing for memory
– swap one or more to disk, divide up pages they held
– reconsider degree of multiprogramming
Page Size (1)
Small page size
• Advantages
– less internal fragmentation
– better fit for various data structures, code sections
– less unused program in memory
• Disadvantages
– programs need many pages, larger page tables
Page Size (2)
• Overhead = se/p + p/2
– first term = page table space, second term = internal fragmentation
– s = average process size in bytes
– p = page size in bytes
– e = size of a page table entry in bytes
• Overhead is minimized when p = sqrt(2se)
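As an illustrative example (values assumed, not from the slide): with s = 1 MB = 2^20 bytes and e = 8 bytes per entry, p = sqrt(2 * 2^20 * 8) = sqrt(2^24) = 4096, so a 4-KB page minimizes the combined page-table and internal-fragmentation overhead.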
Separate Instruction and Data Spaces
Shared Pages
Cleaning Policy
• Need for a background process, paging daemon
– periodically inspects state of memory
Implementation Issues
Operating System Involvement with Paging
Page Fault Handling (1)
1. Hardware traps to kernel
2. General registers saved
3. OS determines which virtual page needed
4. OS checks validity of address, seeks page frame
5. If selected frame is dirty, write it to disk
Page Fault Handling (2)
OS schedules the new page to be brought in from disk
Page tables updated
Faulting instruction backed up to the state it had when it began
Faulting process scheduled
Registers restored
Program continues
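A sketch tying the steps above together; the helper routines are stubs with hypothetical names, and a real kernel's handler is considerably more involved.

#include <stdio.h>

#define PAGE_SHIFT 12

static void save_registers(void)              { }
static void restore_registers(void)           { }
static int  address_valid(unsigned vpn)       { return vpn < 16; }
static int  pick_victim_frame(void)           { return 3; }   /* page replacement algorithm */
static int  frame_is_dirty(int f)             { return f == 3; }
static void write_frame_to_disk(int f)        { printf("write frame %d to disk\n", f); }
static void read_page_from_disk(unsigned vpn, int f) { printf("read page %u into frame %d\n", vpn, f); }
static void update_page_table(unsigned vpn, int f)   { (void)vpn; (void)f; }
static void back_up_instruction(void)         { }

void page_fault_handler(unsigned faulting_va) {
    save_registers();                              /* 1-2: trap to kernel, save state    */
    unsigned vpn = faulting_va >> PAGE_SHIFT;      /* 3: which virtual page is needed    */
    if (!address_valid(vpn)) return;               /* 4: invalid address -> signal/kill  */
    int frame = pick_victim_frame();               /* 4: find (or free) a page frame     */
    if (frame_is_dirty(frame))
        write_frame_to_disk(frame);                /* 5: clean the victim first          */
    read_page_from_disk(vpn, frame);               /* schedule the needed page in        */
    update_page_table(vpn, frame);                 /* mark present, record frame number  */
    back_up_instruction();                         /* restart the faulting instruction   */
    restore_registers();                           /* let the process continue           */
}

int main(void) { page_fault_handler(32780); return 0; }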
Instruction Backup
Locking Pages in Memory
Backing Store
Separation of Policy and Mechanism
Segmentation (1)
To specify an address in segmented (two-dimensional) memory, a program
must supply a two-part address: a segment number and an address within the segment.
A segment is a logical entity, which the programmer is aware of and uses as a
logical entity. A segment might contain a procedure, an array, a stack, or a
collection of scalar variables, but it usually does not contain a mixture of different types.
Advantages of segmentation: 1. Handling of data structures that grow or shrink.
2. Linking of separately compiled procedures is simplified if each
procedure occupies a separate segment, with 0 as its starting address.
A call to the procedure in segment n uses the address (n, 0).
If the procedure in segment n is subsequently modified and recompiled,
no other procedures need be changed, even if the new version is larger than
the old one.
3. Sharing of procedures and data among several processes,
e.g. a shared library. (A translation sketch follows.)
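A minimal sketch of translating such a two-part (segment, offset) address through a per-process segment table; the segment numbers, bases, and limits below are made up.

#include <stdio.h>
#include <stdlib.h>

#define NSEGS 3

struct segment { unsigned base; unsigned limit; };

struct segment seg_table[NSEGS] = {
    { 0x10000, 0x4000 },     /* segment 0: e.g. code    */
    { 0x20000, 0x1000 },     /* segment 1: e.g. a stack */
    { 0x30000, 0x8000 },     /* segment 2: e.g. an array */
};

unsigned translate(unsigned seg, unsigned offset) {
    if (seg >= NSEGS || offset >= seg_table[seg].limit) {
        fprintf(stderr, "segmentation fault: (%u, %u)\n", seg, offset);
        exit(1);
    }
    return seg_table[seg].base + offset;   /* each segment starts at its own address 0 */
}

int main(void) {
    printf("(2, 0x10) -> 0x%x\n", translate(2, 0x10));
    return 0;
}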
(Diagram: Main() calls Sum(), Fact(), and Sub() through JMP add1, JMP add2, JMP add3; the procedures themselves sit at absolute addresses add1, add2, and add3.)
Segmentation (3)
(Diagram: Main(), Sum(), Fact(), and Sub() laid out at addresses add1, add2, and add3 in one linear address space.)
Implementation of Pure Segmentation
Segmentation with Paging: MULTICS (1)
Segmentation with Paging: MULTICS (2)
Segmentation with Paging: MULTICS (3)
Segmentation with Paging: Pentium (1)
A Pentium selector
Segmentation with Paging: Pentium (2)
Segmentation with Paging: Pentium (3)
Segmentation with Paging: Pentium (4)
Segmentation with Paging: Pentium (5)
(Figure: protection levels on the Pentium.)