
Chapter 4

Memory Management

4.1 Basic memory management


4.2 Swapping
4.3 Virtual memory
4.4 Page replacement algorithms
4.5 Modeling page replacement algorithms
4.6 Design issues for paging systems
4.7 Implementation issues
4.8 Segmentation
Memory Management
• Ideally programmers want memory that is
– large
– fast
– non volatile

• Memory hierarchy
– small amount of fast, expensive memory – cache
– some medium-speed, medium price main memory
– gigabytes of slow, cheap disk storage

• Memory manager handles the memory hierarchy

No Abstraction Memory Model
Every program simply saw the physical memory. When a program
executed an instruction like

MOV REGISTER1,1000

the machine simply moved the contents of physical memory location 1000 into REGISTER1.
With this model it was not possible to have two running programs in memory at the
same time: if the first program wrote a new value to, say, location
12, this would erase whatever value the second program was storing there.

Basic Memory Management
Monoprogramming without Swapping or Paging

Three simple ways of organizing memory


- an operating system with one user process
Relocation problem

Relocation & Protection problems solved with Base
& Limit Registers

Relocation and Protection
• Cannot be sure where program will be loaded in
memory
– address locations of variables, code routines cannot be absolute
– must keep a program out of other processes’ partitions

• Use base and limit values (a minimal sketch follows below)

– address locations are added to the base value to map to a physical address
– address locations larger than the limit value are an error
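
A minimal, illustrative C sketch of this base-and-limit check; the structure name, field names, and the numeric values are assumptions for illustration, not part of the original slides:

    #include <stdint.h>
    #include <stdio.h>

    /* Minimal sketch of base-and-limit translation; all values are illustrative. */
    typedef struct {
        uint32_t base;   /* physical address where the partition starts */
        uint32_t limit;  /* size of the partition in bytes */
    } partition_t;

    /* Returns the physical address, or -1 if the access would leave the partition. */
    static int64_t translate(partition_t p, uint32_t virtual_addr)
    {
        if (virtual_addr >= p.limit)               /* protection: check against the limit */
            return -1;
        return (int64_t)p.base + virtual_addr;     /* relocation: add the base register */
    }

    int main(void)
    {
        partition_t p = { .base = 16384, .limit = 4096 };   /* assumed example values */
        printf("%lld\n", (long long)translate(p, 100));     /* prints 16484 */
        printf("%lld\n", (long long)translate(p, 5000));    /* prints -1: protection error */
        return 0;
    }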

Swapping (1)

Memory allocation changes as


– processes come into memory
– leave memory
Shaded regions are unused memory

Swapping (2)

• Allocating space for growing data segment


• Allocating space for growing stack & data segment
Memory Management with Bit Maps

• Part of memory with 5 processes, 3 holes


– tick marks show allocation units
– shaded regions are free
• Corresponding bit map
• Same information as a list

Memory Management with Linked Lists

Four neighbor combinations for the terminating process X

Virtual Memory
Paging (1)

The position and function of the MMU

Paging (2)

The relation between virtual addresses and physical memory addresses is given by the page table

Page Tables (1)

Internal operation of MMU with 16 4 KB pages


• Page size is always a power of 2.
• MOV REG, 8192 ---> 24576
• MOV REG, 20500 (20480 + 20) --> (12288 + 20) = 12308
• MOV REG, 32780 (32768 + 12) --> page fault (virtual page 8 is not mapped)
• What does the OS do when a page fault occurs?
• Example: suppose the OS decides to evict virtual page 1; it loads virtual page 8 at physical address 4096 (frame 1) and
  makes two changes to the page table: 1. page 1's entry is marked not present (crossed out), and 2. page 8's entry is set to frame 1.

• Combinations of a 16-bit virtual address with different page sizes (total address space: 64 KB); the sketch below shows the split in code:
  16 = 4 + 12   page size: 2^12 = 4 KB   number of pages: 2^4 = 16
  16 = 3 + 13   page size: 2^13 = 8 KB   number of pages: 2^3 = 8
  16 = 5 + 11   page size: 2^11 = 2 KB   number of pages: 2^5 = 32
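
A minimal C sketch of the page-number/offset split and table lookup for the 4-KB case. The page table contents are an assumption chosen only to reproduce the numbers above (virtual page 2 -> frame 6, virtual page 5 -> frame 3, virtual page 8 not present):

    #include <stdio.h>

    #define PAGE_SHIFT 12                  /* 4-KB pages: the low 12 bits are the offset */
    #define PAGE_SIZE  (1u << PAGE_SHIFT)

    /* Assumed example page table (-1 means "not present"). */
    static const int frame_of[16] = {
        -1, -1,  6, -1, -1,  3, -1, -1,
        -1, -1, -1, -1, -1, -1, -1, -1
    };

    static long translate(unsigned va)
    {
        unsigned vpn    = va >> PAGE_SHIFT;       /* virtual page number (high 4 bits) */
        unsigned offset = va & (PAGE_SIZE - 1);   /* offset within the page (low 12 bits) */
        if (frame_of[vpn] < 0)
            return -1;                            /* page fault */
        return ((long)frame_of[vpn] << PAGE_SHIFT) | offset;
    }

    int main(void)
    {
        printf("%ld\n", translate(8192));    /* prints 24576 */
        printf("%ld\n", translate(20500));   /* prints 12308 */
        printf("%ld\n", translate(32780));   /* prints -1 (page fault) */
        return 0;
    }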

• For a 32-bit virtual address with a 4-KB page size:
  32 = 20 + 12   page size: 2^12 = 4 KB   number of pages: 2^20 = 1,048,576 (about 1 million)

• For a 64-bit virtual address with a 4-KB page size:
  64 = 52 + 12   page size: 2^12 = 4 KB   number of pages: 2^52 (a very large number)

• Two issues with a paging system: the mapping from virtual to physical address must be fast, since it happens on every reference (e.g. MOV REG, 8192),
  and the page table can become huge when the virtual address space is large (32- and 64-bit addresses).
• What about the program counter? Does it contain a physical address or a virtual address?
• An array of hardware registers holding the page table gives fast mapping, but it is not practical when the page table is large, and reloading it on every process switch is expensive.

A process is an abstraction of the physical processor, and an address space is an abstraction of physical memory.

Page table in main memory: even a simple move-from-memory-to-register instruction then requires two
memory references, one for the page table entry and one for the actual instruction or data fetch.

Most programs tend to make a large number of references to a small number of pages. That observation is the idea
behind the TLB, or associative memory (hardware).

How the TLB works: if the virtual page's entry is present in the TLB, the frame number is used directly
(a protection fault is raised if the access violates the entry's protection bits).
If the entry is not in the TLB, an ordinary page table lookup is done and one TLB entry is replaced;
the replacement is done by hardware on some machines and by the OS in software on others.
Soft miss: the page referenced is not in the TLB, but it is in memory.
Hard miss: the page is not in main memory either, so a page fault
occurs and the OS handles it; locating the entry in the page table is called a page table walk.
A page fault has three possibilities: 1. the page is in memory but not in the process's page table (minor page fault);
2. the page is not actually in main memory (major page fault);
3. the program simply accessed an invalid address and no mapping is to be
added to the TLB (segmentation fault).
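
The sketch below illustrates this lookup path. The structures, sizes, and the round-robin refill policy are assumptions for illustration, not the mechanism of any particular machine:

    #include <stdbool.h>

    #define TLB_ENTRIES 8

    typedef struct { bool valid; unsigned vpn; unsigned frame; } tlb_entry_t;
    typedef struct { bool present; unsigned frame; } pte_t;

    static tlb_entry_t tlb[TLB_ENTRIES];
    static pte_t page_table[1u << 20];      /* 32-bit VA with 4-KB pages */
    static unsigned next_victim;            /* trivial round-robin TLB replacement */

    /* Returns the frame number, or -1 and sets *hard_miss on a page fault. */
    static int lookup(unsigned vpn, bool *hard_miss)
    {
        *hard_miss = false;
        for (unsigned i = 0; i < TLB_ENTRIES; i++)      /* TLB hit: no table access */
            if (tlb[i].valid && tlb[i].vpn == vpn)
                return (int)tlb[i].frame;

        if (!page_table[vpn].present) {                 /* hard miss: page fault, the */
            *hard_miss = true;                          /* OS must bring the page in  */
            return -1;
        }
        /* Soft miss: the page is in memory, only the TLB entry was missing. Refill. */
        tlb[next_victim] = (tlb_entry_t){ true, vpn, page_table[vpn].frame };
        next_victim = (next_victim + 1) % TLB_ENTRIES;
        return (int)page_table[vpn].frame;
    }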
Page Tables (2)
Second-level page tables

Top-level
page table

• 32 bit address with 2 page table fields


• Two-level page tables
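
A minimal C sketch of the two-level lookup for a 32-bit address split as 10 + 10 + 12 bits; the in-memory layout and names are assumptions for illustration:

    #include <stddef.h>
    #include <stdint.h>

    typedef struct { int present; unsigned frame; } pte_t;

    static pte_t *top_level[1024];   /* each non-NULL entry points to a 1024-entry second-level table */

    static long translate(uint32_t va)
    {
        unsigned pt1    = (va >> 22) & 0x3FF;   /* bits 31..22: index into the top-level table */
        unsigned pt2    = (va >> 12) & 0x3FF;   /* bits 21..12: index into the second-level table */
        unsigned offset =  va        & 0xFFF;   /* bits 11..0 : offset within the 4-KB page */

        pte_t *second = top_level[pt1];
        if (second == NULL || !second[pt2].present)
            return -1;                          /* page fault */
        return ((long)second[pt2].frame << 12) | offset;
    }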
Page Tables (3)

Typical page table entry

TLBs – Translation Lookaside Buffers

A TLB to speed up paging

Inverted Page Tables

Comparison of a traditional page table with an inverted page table

Inverted Page Table:
One entry per Page Frame in real memory rather than one entry per page of
virtual address space.
Ex. 64 bit virtual address, 4KB page size & 4GB of RAM
Inv. Page Table contains only 2^20 = 1048576 entries whereas
Normal Page Table contains 2^52 entries.

Each entry keeps track of which (process, virtual page) pair is located in that page frame.
This saves a lot of space, but virtual-to-physical address translation becomes harder:
when process n references virtual page p, the entire inverted page table has to be searched
for an entry (n, p). To improve performance the TLB can be used in front of the inverted table;
on a TLB miss a search is still required, so a hash table hashed on the virtual address is used
to limit the search to one chain.
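
A minimal sketch of a hashed inverted page table lookup, sized to match the 4 GB / 4 KB example above; the hash function, field names, and chaining scheme are assumptions for illustration:

    #include <stdint.h>

    #define NFRAMES  (1u << 20)    /* 4 GB of RAM with 4-KB pages: 2^20 frames */
    #define NBUCKETS NFRAMES

    typedef struct {
        int      used;
        unsigned pid;
        uint64_t vpn;              /* virtual page number currently held in this frame */
        int      next;             /* next frame index on the same hash chain, or -1 */
    } ipt_entry_t;

    static ipt_entry_t ipt[NFRAMES];
    static int bucket[NBUCKETS];   /* head frame of each hash chain; must be set to -1 at start-up */

    static long frame_lookup(unsigned pid, uint64_t vpn)
    {
        unsigned h = (unsigned)((vpn ^ pid) % NBUCKETS);     /* assumed hash function */
        for (int f = bucket[h]; f != -1; f = ipt[f].next)
            if (ipt[f].used && ipt[f].pid == pid && ipt[f].vpn == vpn)
                return f;                                    /* frame holding (pid, vpn) */
        return -1;                                           /* not resident: page fault */
    }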
Page Replacement Algorithms
• Page fault forces choice

– which page must be removed


– make room for incoming page

• Modified page must first be saved


– unmodified just overwritten

• Better not to choose an often used page


– will probably need to be brought back in soon

Optimal Page Replacement Algorithm
• Replace page needed at the farthest point in future
– Optimal but unrealizable

• Estimate by …
– logging page use on previous runs of process
– although this is impractical

Not Recently Used Page Replacement Algorithm

• Each page has Reference bit, Modified bit


– bits are set when page is referenced, modified
• Pages are classified
1. not referenced, not modified
2. not referenced, modified
3. referenced, not modified
4. referenced, modified
• NRU removes a page at random
– from the lowest-numbered non-empty class (see the sketch below)
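
A tiny sketch of the classification; note the list above numbers the classes 1-4, while this helper returns the conventional 0-3 encoding (the structure and field names are assumptions):

    /* Classify a page by its R and M bits; NRU then evicts a random page from the
       lowest non-empty class (0 = not referenced/not modified ... 3 = both set). */
    struct page { int referenced; int modified; };

    static int nru_class(struct page p)
    {
        return (p.referenced << 1) | p.modified;
    }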

FIFO Page Replacement Algorithm
• Maintain a linked list of all pages

– in order they came into memory

• Page at beginning of list replaced

• Disadvantage
– page in memory the longest may be often used

Second Chance Page Replacement Algorithm

• Operation of a second chance


– pages sorted in FIFO order
– page list when a fault occurs at time 20 and A has its R bit set
(numbers above pages are loading times)

The Clock Page Replacement Algorithm
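
The slide shows the algorithm as a figure; below is a minimal C sketch of how the hand moves over the frames (the frame count, array, and field names are assumptions for illustration):

    #include <stddef.h>

    #define NFRAMES 64

    struct frame { int page; int referenced; };

    static struct frame frames[NFRAMES];
    static size_t hand;                       /* the clock hand */

    /* Returns the index of the frame chosen for replacement. */
    static size_t clock_evict(void)
    {
        for (;;) {
            if (frames[hand].referenced == 0) {   /* R == 0: evict this page */
                size_t victim = hand;
                hand = (hand + 1) % NFRAMES;
                return victim;
            }
            frames[hand].referenced = 0;          /* R == 1: give a second chance */
            hand = (hand + 1) % NFRAMES;
        }
    }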

Least Recently Used (LRU)
• Assume pages used recently will be used again soon
– throw out page that has been unused for longest time

• Must keep a linked list of pages


– most recently used at front, least at rear
– update this list every memory reference !!

• Alternatively keep counter in each page table entry


– choose page with lowest value counter
– periodically zero the counter

Simulating LRU in Software (2)

• The aging algorithm simulates LRU in software


• Note 6 pages for 5 clock ticks, (a) – (e)
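
A minimal C sketch of the aging counters; 8-bit counters and the array names are assumptions for illustration:

    #include <stdint.h>

    #define NPAGES 64

    static uint8_t age[NPAGES];    /* aging counters, one per page */
    static int     rbit[NPAGES];   /* R bits set by the hardware since the last tick */

    /* On every clock tick: shift each counter right, add the R bit at the left, clear R. */
    static void aging_tick(void)
    {
        for (int i = 0; i < NPAGES; i++) {
            age[i] = (uint8_t)((age[i] >> 1) | (rbit[i] ? 0x80 : 0));
            rbit[i] = 0;
        }
    }

    /* The page with the lowest counter is the one least recently used. */
    static int aging_victim(void)
    {
        int victim = 0;
        for (int i = 1; i < NPAGES; i++)
            if (age[i] < age[victim])
                victim = i;
        return victim;
    }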

Working Set Page Replacement Algorithm:
1. Demand paging: processes are started up with none of their pages in memory,
   so page faults load the required pages (initially a high page fault rate).
   After some time most of the needed pages are in memory (low page fault rate).
   Pages are loaded only on demand, not in advance.
2. Locality of reference: during any phase of execution, the process references only
   a relatively small fraction of its pages. Example: a multipass compiler.
3. Working set: the set of pages that a process is currently using is its working set.
   If the entire working set is in memory all the time, few page faults occur.
4. Thrashing: a program causing page faults every few instructions is said to be thrashing.
5. In multiprogramming, processes are often moved to disk to let other processes execute.
   When such a process is brought back into memory, technically nothing needs to be done: the
   process will simply cause page faults until its working set has been loaded.
6. So, many paging systems try to keep track of each process's working set and make
   sure that it is in memory before letting the process run. This is the working set
   model.
7. Prepaging: loading the pages before letting a process run is also called prepaging.
Programs rarely reference their address space uniformly; the
references tend to cluster on a small number of pages.
A memory reference may fetch an instruction, fetch data, or store data.
At any instant of time t, there exists a set consisting of all the pages used by
the k most recent memory references. This set w(k,t) is the working set.
Because the pages used by the k most recent references include all the pages
used by the k-1 most recent references, w(k,t) is a monotonically non-
decreasing function of k.
The limit of w(k,t) as k becomes large is finite, because a program cannot
reference more pages than its address space contains and few programs
will use every single page.
Thus the contents of the working set are not very sensitive to the value of k chosen.
Algorithm: the OS keeps track of which pages are in the working set. When a page
fault occurs, it finds a page not in the working set and evicts it.
Some value of k must be chosen in advance.
Implementation using a shift register of size k: shift left on every memory
reference and insert the most recently referenced page number on the right. At a page fault,
the shift register is read out and sorted, then duplicate pages are removed; the result is the working set.
The Working Set Page Replacement Algorithm (1)

• The working set is the set of pages used by the k most recent memory references
• w(k,t) is the size of the working set at time t

One approximation is to drop the idea of counting back k memory references
and use execution time instead.
Example: instead of defining the working set as the pages used during the previous 10
million memory references, we can define it as the set of pages used
during the past 100 msec of execution time.

Current virtual time: if a process starts at time T and has had 40 msec of CPU time
at real time T + 100 msec, then for working-set purposes its time is 40 msec.
Algorithm: each page table entry contains the time the page was last used and the R bit.
The R bit works as usual, and every clock tick the R bits are cleared.
On every page fault, the page table is scanned and each entry is handled as follows:
– if R = 1, the page was used this tick: record the current virtual time in the entry;
– if R = 0 and its age (current virtual time minus time of last use) exceeds tau, the page is outside the working set and is evicted;
– if R = 0 and its age is at most tau, the page is spared, but the oldest such page is remembered and evicted if no page exceeds tau.

Worst case: if all pages have R = 1, one is chosen at random for removal,
preferably a clean page, if one exists (checked via the M bit).
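
A minimal C sketch of that page-table scan; tau, the structures, and the virtual-time bookkeeping are illustrative assumptions:

    #define NPAGES 1024
    #define TAU    100             /* working-set window in virtual-time units */

    struct wset_pte { int present; int referenced; int modified; long last_use; };

    static struct wset_pte pt[NPAGES];

    /* Scan the page table on a fault; current is the process's current virtual time.
       Returns the page to evict, or -1 if every resident page has R = 1. */
    static int ws_choose_victim(long current)
    {
        int  candidate = -1;
        long oldest    = current + 1;

        for (int i = 0; i < NPAGES; i++) {
            if (!pt[i].present)
                continue;
            if (pt[i].referenced) {            /* used this tick: keep, update time */
                pt[i].last_use = current;
                continue;
            }
            if (current - pt[i].last_use > TAU)  /* R = 0 and older than tau: evict */
                return i;
            if (pt[i].last_use < oldest) {       /* R = 0 but inside tau: remember oldest */
                oldest = pt[i].last_use;
                candidate = i;
            }
        }
        return candidate;   /* -1 means pick a page at random (prefer a clean one) */
    }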

The Working Set Page Replacement Algorithm (2)

The working set algorithm

The WSClock Page Replacement Algorithm

Operation of the WSClock algorithm


Review of Page Replacement Algorithms

Modeling Page Replacement Algorithms
Belady's Anomaly

• FIFO with 3 page frames


• FIFO with 4 page frames
• P's show which page references cause page faults (the small simulation below reproduces the anomaly)
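
A self-contained FIFO simulation of the classic reference string from Belady's example (0 1 2 3 0 1 4 0 1 2 3 4), giving 9 faults with 3 frames but 10 with 4:

    #include <stdio.h>

    static int fifo_faults(const int *refs, int n, int nframes)
    {
        int frames[8], used = 0, next = 0, faults = 0;   /* supports up to 8 frames */

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < used; j++)
                if (frames[j] == refs[i]) { hit = 1; break; }
            if (hit)
                continue;
            faults++;
            if (used < nframes) {
                frames[used++] = refs[i];       /* free frame available */
            } else {
                frames[next] = refs[i];         /* replace the oldest page */
                next = (next + 1) % nframes;
            }
        }
        return faults;
    }

    int main(void)
    {
        int refs[] = { 0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4 };
        printf("3 frames: %d faults\n", fifo_faults(refs, 12, 3));  /* 9  */
        printf("4 frames: %d faults\n", fifo_faults(refs, 12, 4));  /* 10 */
        return 0;
    }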
Stack Algorithms


State of memory array, M, after each item in reference string is processed

The Distance String

Probability density functions for two hypothetical distance strings

The Distance String

• Computation of the page fault rate from the distance string (a small sketch follows)

– the C vector: C[k] counts how many times distance k occurs in the distance string
– the F vector: F[m] is the predicted number of faults with m frames, i.e. the sum of C[k] for all k > m plus the infinite-distance count
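
A small illustrative sketch (the reference string and array sizes are assumptions) that builds the LRU distance string and derives the C and F vectors from it:

    #include <stdio.h>

    #define MAXPAGES 16   /* assumes at most MAXPAGES distinct pages */

    int main(void)
    {
        int refs[] = { 0, 1, 2, 3, 0, 1, 4 };   /* assumed example reference string */
        int n = sizeof refs / sizeof refs[0];

        int stack[MAXPAGES], depth = 0;
        int C[MAXPAGES + 1] = { 0 };            /* C[0] counts distance "infinity" */

        for (int i = 0; i < n; i++) {
            int d = 0;                          /* distance; 0 means not yet referenced */
            for (int j = 0; j < depth; j++)
                if (stack[j] == refs[i]) { d = j + 1; break; }
            C[d]++;
            if (d == 0)                         /* a brand-new page enters the stack */
                d = ++depth;
            for (int j = d - 1; j > 0; j--)     /* move the page to the top of the stack */
                stack[j] = stack[j - 1];
            stack[0] = refs[i];
        }

        /* F[m]: faults with m frames = occurrences of distances > m plus infinities */
        for (int m = 1; m <= depth; m++) {
            int F = C[0];
            for (int k = m + 1; k <= depth; k++)
                F += C[k];
            printf("m = %d frames -> %d faults\n", m, F);
        }
        return 0;
    }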

Design Issues for Paging Systems
Local versus Global Allocation Policies (1)

• Original configuration
• Local page replacement
• Global page replacement

Local versus Global Allocation Policies (2)

Page fault rate as a function of the number of page frames assigned

Load Control
• Despite good designs, system may still thrash

• When PFF algorithm indicates


– some processes need more memory
– but no processes need less

• Solution :
Reduce number of processes competing for memory
– swap one or more to disk, divide up pages they held
– reconsider degree of multiprogramming

Page Size (1)
Small page size
• Advantages
– less internal fragmentation
– better fit for various data structures, code sections
– less unused program in memory
• Disadvantages
– programs need many pages, larger page tables

Page Size (2)

• Overhead due to page table and internal fragmentation:

  overhead = s*e/p + p/2

  where the first term is the page table space and the second is the internal fragmentation loss, with
– s = average process size in bytes
– p = page size in bytes
– e = size of a page table entry in bytes

• The overhead is minimized when

  p = sqrt(2*s*e)
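
For example, assuming (purely for illustration) an average process size s = 1 MB = 2^20 bytes and a page table entry of e = 8 bytes:

  p = sqrt(2*s*e) = sqrt(2 * 2^20 * 8) = sqrt(2^24) = 2^12 = 4096 bytes

so under these assumed values the optimum page size would be 4 KB.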

Separate Instruction and Data Spaces

• One address space


• Separate I and D spaces

Shared Pages

Two processes sharing the same program and its page table

Cleaning Policy
• Need for a background process, paging daemon
– periodically inspects state of memory

• When too few frames are free


– selects pages to evict using a replacement algorithm

• It can use same circular list (clock)


– as the regular page replacement algorithm, but with a different pointer

Implementation Issues
Operating System Involvement with Paging

Four times when the OS is involved with paging


1. Process creation
   – determine program size
   – create page table
2. Process execution
   – MMU reset for new process
   – TLB flushed
3. Page fault time
   – determine virtual address causing fault
   – swap target page out, needed page in
4. Process termination time
   – release page table, pages

Page Fault Handling (1)
1. Hardware traps to kernel
2. General registers saved
3. OS determines which virtual page needed
4. OS checks validity of address, seeks page frame
5. If selected frame is dirty, write it to disk

Page Fault Handling (2)

6. OS schedules the new page to be brought in from disk
7. Page tables updated
8. Faulting instruction backed up to the point where it began
9. Faulting process scheduled
10. Registers restored
11. Program continues

Instruction Backup

An instruction causing a page fault

Locking Pages in Memory

• Virtual memory and I/O occasionally interact


• Process issues a call to read from a device into a buffer
– while waiting for the I/O, another process starts up
– it has a page fault
– the buffer for the first process may be chosen to be paged out
• Need to specify some pages locked
– exempted from being target pages

Backing Store

(a) Paging to static swap area


(b) Backing up pages dynamically

Separation of Policy and Mechanism

Page fault handling with an external pager

Segmentation (1)

• One-dimensional address space with growing tables


• One table may bump into another
Segmentation (2)

A segmented memory allows each table to grow or shrink, independently

To specify an address in segmented (two-dimensional) memory, a program
must supply a two-part address: a segment number and an address within the segment
(a small translation sketch is given after this list).
A segment is a logical entity, which the programmer is aware of and uses as a
logical entity. A segment might contain a procedure, an array, a stack, or a
collection of scalar variables, but it usually does not contain a mixture of different types.
Advantages of segmentation:
1. Handling of data structures that grow or shrink.
2. Linking of separately compiled procedures is simplified: if each
   procedure occupies a separate segment, with 0 as its starting address,
   a call to the procedure in segment n uses address (n, 0).
   If the procedure in segment n is subsequently modified and recompiled,
   no other procedures need be changed, even if the new version is larger than
   the old one.
3. Sharing of procedures and data among several processes,
   e.g. a shared library.
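
A minimal sketch of translating such a two-part (segment, offset) address through a segment table of base/limit descriptors; the table size, type names, and error codes are assumptions for illustration:

    #include <stdint.h>

    #define NSEGS 16

    typedef struct { int valid; uint32_t base; uint32_t limit; } seg_desc_t;

    static seg_desc_t seg_table[NSEGS];

    static long seg_translate(unsigned seg, uint32_t offset)
    {
        if (seg >= NSEGS || !seg_table[seg].valid)
            return -1;                               /* no such segment */
        if (offset >= seg_table[seg].limit)
            return -2;                               /* offset past the end of the segment */
        return (long)seg_table[seg].base + offset;   /* linear/physical address */
    }

    /* A call into the procedure in segment n at its entry point uses address (n, 0);
       if that procedure is recompiled and grows, only segment n's limit changes. */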

Main func()
  Sum call:  JMP add1
  Fact call: JMP add2
  Sub call:  JMP add3

add1: Sum()
add2: Fact()
add3: Sub()
Segmentation (3)

Comparison of paging and segmentation

Implementation of Pure Segmentation

(a)-(d) Development of checkerboarding


(e) Removal of the checkerboarding by compaction

Segmentation with Paging: MULTICS (1)

• Descriptor segment points to page tables


• Segment descriptor – numbers are field lengths

Segmentation with Paging: MULTICS (2)

A 34-bit MULTICS virtual address

Segmentation with Paging: MULTICS (3)

Conversion of a 2-part MULTICS address into a main memory address


Segmentation with Paging: MULTICS (4)

• Simplified version of the MULTICS TLB


• Existence of 2 page sizes makes actual TLB more complicated
Segmentation with Paging: Pentium (1)

A Pentium selector

Segmentation with Paging: Pentium (2)

• Pentium code segment descriptor


• Data segments differ slightly

Segmentation with Paging: Pentium (3)

Conversion of a (selector, offset) pair to a linear address

Segmentation with Paging: Pentium (4)

Mapping of a linear address onto a physical address

Segmentation with Paging: Pentium (5)


Protection on the Pentium
