Chapter 8 discusses memory management techniques, including organization of memory hardware, paging, and segmentation, with a focus on the Intel Pentium's support for these methods. It explains the concepts of logical and physical addresses, memory protection, and various memory allocation strategies such as contiguous allocation, paging, and segmentation. The chapter also covers dynamic relocation, fragmentation issues, and the implementation of page tables and translation mechanisms for efficient memory management.


Chapter 8

Memory Management
Objectives

 Describe ways of organizing memory hardware


 Discuss various memory-management techniques, including paging and segmentation
 Describe the Intel Pentium, which supports both pure segmentation and segmentation with paging
Background

 Program must be brought (from disk) into memory and placed within a process for it to
be run
 Main memory and registers are the only storage the CPU can access directly
 Register access in one CPU clock (or less)
 Main memory can take many cycles
 Cache sits between main memory and CPU registers
 Protection of memory required to ensure correct operation
Background
[Figure: the CPU (registers and cache) accesses the instructions and data of a process's program image in main memory; programs are brought in from disk, managed by the operating system.]
Program addresses and memory
 When code is generated (or an assembly program is written), we use memory addresses for variables, functions, and branching/jumping.
 Those addresses can be physical or logical memory addresses.
 In very early systems they were just physical memory addresses.
 A program had to be loaded at exactly that address to run.
 No relocation was possible.
[Figure: a program (main plus its functions and variables) laid out at fixed memory addresses.]
Program addresses and memory
[Figure: assume the program's addresses are physical addresses of RAM. The program's instructions (Jump 8, …, Mov 8, Add 12) occupy addresses 0..12 and must be loaded at exactly those RAM addresses (RAM shown as 0..44) in order to run.]
Program addresses and memory
[Figure: two programs that both assume they start at physical address 0. Program 1 (Jump 8, …, Mov, Add) is loaded at RAM addresses 0..12; Program 2 (Jump 12, …, Sub, Cmp) is loaded at RAM addresses 16..28, but its Jump 12 still targets physical address 12, which now lies inside Program 1. Without relocation, each program can run only at the address it was written for.]
Logical address space concept
 We need the logical address space concept, which is different from the physical RAM (main memory) addresses.
 A program uses logical addresses.
 The set of logical addresses used by the program is its logical address space.
 The logical address space can be, for example, [0, max_address].
 The logical address space has to be mapped somewhere in physical memory.
[Figure: a program's logical address space [0, logic_max] mapped into physical RAM [0, phy_max], starting at a base address and bounded by a limit.]
Base and Limit Registers
 A pair of base and limit registers define the address space of a process.
 A process should access and use only addresses in that range.
 Both protection and relocation can be provided in this way.
 The base register is also called the relocation register.
 Each physical address generated should fall in the range [base, base+limit).
Logical vs. Physical Address
Space
 The concept of a logical address space that is bound to a separate physical address
space is central to proper memory management

 Logical address – generated by the CPU; also referred to as virtual address

 Physical address – address seen by the memory unit

 Logical and physical addresses are the same in compile-time and load-time address-
binding schemes; logical (virtual) and physical addresses differ in execution-time
address-binding scheme
Logical and physical addresses
[Figure: a relocatable program uses logical addresses 00..28 (00: mov, 04: jmp 16, 08: add r1, r2, r3, 12: mov r2, M[24], 16: mov r1, M[28], 20: cmp, 24: int y, 28: int x). With base = 24 and limit = 32 in the CPU, the reference mov r1, M[28] is translated to the physical address M[28 + base] = M[28 + 24] = M[52] in main memory (RAM).]
Memory-Management Unit (MMU)

 Hardware device that maps logical (virtual) to physical address

 In MMU scheme, the value in the relocation register (i.e., base register) is added to every
address generated by a user process at the time it is sent to memory

 The user program deals with logical addresses; it never sees the real physical addresses
Dynamic relocation using a
relocation register
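A minimal sketch in C (not from the slides) of what the relocation-register scheme in the figure does: the logical address is checked against the limit register and then relocated by adding the base register. The struct and function names are illustrative only.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Dynamic relocation: compare the logical address against the limit register,
   then add the relocation (base) register to form the physical address.
   A real MMU does this in hardware on every memory reference. */
typedef struct {
    uint32_t base;   /* relocation register: start of the process in physical memory */
    uint32_t limit;  /* limit register: size of the process's logical address space   */
} mmu_regs;

static uint32_t translate(const mmu_regs *r, uint32_t logical) {
    if (logical >= r->limit) {                   /* out of range: trap to the OS */
        fprintf(stderr, "addressing error: %u >= limit %u\n", logical, r->limit);
        exit(EXIT_FAILURE);
    }
    return r->base + logical;                    /* in range: relocate */
}

int main(void) {
    mmu_regs r = { .base = 24, .limit = 32 };    /* values from the earlier example */
    printf("logical 28 -> physical %u\n", translate(&r, 28));   /* prints 52 */
    return 0;
}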
Contiguous Memory Allocation
(Dynamic Memory Allocation Problem)
 Main memory is usually divided into two partitions:
 Resident operating system, usually held in low memory with interrupt vector
 User processes then held in high memory

 Relocation registers used to protect user processes from each other, and from changing operating-
system code and data
 Base register contains value of smallest physical address
 Limit register contains range of logical addresses – each logical address must be less than the
limit register
 MMU maps logical addresses dynamically
Basic Memory Allocation
Strategies
 In this chapter, we will cover 3 basic strategies for allocating main memory to processes

 1) Contiguous allocation

 2) Paging

 3) Segmentation
Hardware Support for Relocation and Limit
Registers
Contiguous Allocation (Cont)
 Multiple-partition allocation
 Hole – block of available memory; holes of various size are scattered throughout memory
 When a process arrives, it is allocated memory from a hole large enough to accommodate it
 Operating system maintains information about:
a) allocated partitions b) free partitions (hole)

[Figure: successive memory snapshots; the OS sits in low memory, and processes 5, 8, 9, 10, and 2 are allocated into holes of various sizes as processes arrive and depart.]

Dynamic Storage-Allocation Problem
How to satisfy a request of size n from a list of free holes:
• First-fit: Allocate the first hole that is big enough
• Best-fit: Allocate the smallest hole that is big enough; must search entire list, unless ordered by size
– Produces the smallest leftover hole
• Worst-fit: Allocate the largest hole; must also search entire list
– Produces the largest leftover hole
• First-fit and best-fit are better than worst-fit in terms of speed and storage utilization (see the sketch below)
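As a rough illustration of the three strategies (a sketch in C, not taken from the slides; the free-list layout is invented for illustration):

#include <stddef.h>

/* A hole is a block of available memory kept on a singly linked free list. */
typedef struct hole { size_t start, size; struct hole *next; } hole;

/* First-fit: allocate the first hole that is big enough. */
hole *first_fit(hole *free_list, size_t n) {
    for (hole *h = free_list; h != NULL; h = h->next)
        if (h->size >= n) return h;
    return NULL;
}

/* Best-fit: allocate the smallest hole that is big enough (whole list searched). */
hole *best_fit(hole *free_list, size_t n) {
    hole *best = NULL;
    for (hole *h = free_list; h != NULL; h = h->next)
        if (h->size >= n && (best == NULL || h->size < best->size)) best = h;
    return best;    /* produces the smallest leftover hole */
}

/* Worst-fit: allocate the largest hole (whole list searched). */
hole *worst_fit(hole *free_list, size_t n) {
    hole *worst = NULL;
    for (hole *h = free_list; h != NULL; h = h->next)
        if (h->size >= n && (worst == NULL || h->size > worst->size)) worst = h;
    return worst;   /* produces the largest leftover hole */
}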
Fragmentation
 External Fragmentation – total memory space exists to satisfy a request, but it is not contiguous
 Internal Fragmentation – allocated memory may be slightly larger than requested memory; this size
difference is memory internal to a partition (allocation), but not being used
 Reduce external fragmentation by compaction
 Shuffle memory contents to place all free memory together in one large block
 Compaction is possible only if relocation is dynamic, and is done at execution time
 I/O problem
 Latch job in memory while it is involved in I/O

 Do I/O only into OS buffers


Paging

 Physical address space of a process can be noncontiguous; the process is allocated physical memory wherever it is available
 Divide physical memory into fixed-sized blocks called frames (size is a power of 2, between 512 bytes and 8,192 bytes)
 Divide logical memory into blocks of the same size, called pages
 Keep track of all free frames
 To run a program of size n pages, find n free frames and load the program
 Set up a page table to translate logical to physical addresses
 Internal fragmentation is possible (in the last page)
Paging
[Figure: a program's logical address space is a set of pages (0..5); physical memory (RAM) is a set of fixed-size frames (a frame's size is 2^x bytes); page size = frame size.]
Paging Example
[Figure: a 6-page program loaded into RAM frames according to its page table: page 0 → frame 1, page 1 → frame 4, page 2 → frame 2, page 3 → frame 7, page 4 → frame 9, page 5 → frame 6.]
Address Translation Scheme
– Assume logical addresses are m bits. Then the logical address space is 2^m bytes.
– Assume the page size is 2^n bytes.

• The logical address generated by the CPU is divided into:
– Page number (p) – used as an index into a page table which contains the base address of each page in physical memory
– Page offset (d) – combined with the base address to define the physical memory address that is sent to the memory unit

page number | page offset
p ((m – n) bits) | d (n bits) — m bits in total

(A small sketch of this split follows below.)
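A minimal C sketch (not from the slides) of the split: the page number is the upper m – n bits and the offset is the lower n bits, so a shift and a mask are enough. The values match the paging example on the following slides (page size 4 bytes, LA = 13).

#include <stdint.h>
#include <stdio.h>

int main(void) {
    const unsigned n = 2;                     /* page size = 2^2 = 4 bytes            */
    uint32_t logical = 13;                    /* 1101 in binary                       */
    uint32_t p = logical >> n;                /* page number: upper bits -> 11b = 3   */
    uint32_t d = logical & ((1u << n) - 1);   /* page offset: lower n bits -> 01b = 1 */
    printf("p = %u, d = %u\n", p, d);
    return 0;
}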
Paging Hardware:
address translation
Paging Example
 4-bit logical addresses; page size = 4 bytes = 2^2; 32-byte physical memory.
 A logical address splits into a page number and an offset (displacement) inside the page.
 LA = 5 (0101): page 1, offset 01 → PA = 11001
 LA = 11 (1011): page 2, offset 11 → PA = 00111
 LA = 13 (1101): page 3, offset 01 → PA = 01001
[Figure: the page table used for these translations (e.g., page 1 → frame 6, page 2 → frame 1, page 3 → frame 2).]
Address translation example 2
 m = 3, so there are 2^3 = 8 logical addresses; n = 2, so page size = 2^2 = 4 bytes.
 1 bit is used for the page number and 2 bits for the offset; each page-table entry maps 4 addresses (one full page), and 2 bits are used for the frame number.
 Logical memory holds A..H (page 0: A–D, page 1: E–H).
 Page table: page 0 → frame 11, page 1 → frame 10.
 Physical memory: E, F, G, H occupy frame 10 (addresses 1000..1011) and A, B, C, D occupy frame 11 (addresses 1100..1111).
Free Frames
 The OS keeps information about the frames in its frame table.
[Figure: the free-frame list and page tables before allocation and after allocation.]
Implementation of Page
Table
 Page table is kept in main memory
 Page-table base register (PTBR) points to the page table
 Page-table length register (PTLR) indicates size of the page table
 In this scheme every data/instruction access requires two memory accesses. One for the
page table and one for the data/instruction.

 The two memory access problem can be solved by the use of a special fast-lookup hardware
cache called associative memory or translation look-aside buffers (TLBs)
 Some TLBs store address-space identifiers (ASIDs) in each TLB entry – uniquely identifies
each process to provide address-space protection for that process
Implementation of Page Table
[Figure: the page tables of processes P1 and P2 are kept in kernel memory and referenced from their PCBs; the PTBR and PTLR in the CPU point to the page table of the currently running process (here P1), whose program resides in RAM.]
TLB (Associative Memory)
• Associative memory – parallel search over (Page #, Frame #) pairs
• Address translation for (p, d):
– If p is in the TLB, get the frame # out (a TLB hit)
– Otherwise, get the frame # from the page table in memory (a TLB miss)
(A sketch of this lookup appears below.)
Paging Hardware With TLB
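A minimal C sketch (not from the slides) of the TLB-then-page-table lookup described above; the sizes, the replacement policy, and all names are invented for illustration.

#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 16

typedef struct { bool valid; uint32_t page; uint32_t frame; } tlb_entry;

static tlb_entry tlb[TLB_ENTRIES];    /* small table, searched in parallel by hardware */
static uint32_t page_table[1024];     /* in-memory page table: page# -> frame#         */
static unsigned next_victim;          /* simple FIFO replacement for this sketch       */

static uint32_t lookup_frame(uint32_t page) {
    for (int i = 0; i < TLB_ENTRIES; i++)      /* TLB hit: no extra memory access */
        if (tlb[i].valid && tlb[i].page == page)
            return tlb[i].frame;

    uint32_t frame = page_table[page];         /* TLB miss: one extra memory access */
    tlb[next_victim] = (tlb_entry){ true, page, frame };   /* cache the translation */
    next_victim = (next_victim + 1) % TLB_ENTRIES;
    return frame;
}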
Memory Protection
 Memory protection implemented by associating a protection bit with each
page
 Read only page
 Executable page
 Read-write page

 Valid-invalid bit attached to each entry in the page table:


 “valid” indicates that the page is in the process’ logical address space,
and is thus a legal page
 “invalid” indicates that the page is not in the process’ logical address
space
Valid (v) or Invalid (i) Bit In A
Page Table
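A minimal sketch in C (not from the slides) of how per-page protection and valid bits could be checked before a translation completes; the entry layout is invented for illustration and real hardware formats differ.

#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t frame;
    bool valid;        /* page is in the process' logical address space */
    bool writable;     /* read-write page if set, read-only otherwise   */
    bool executable;   /* instruction fetches allowed                   */
} pte;

typedef enum { ACC_READ, ACC_WRITE, ACC_EXEC } access_t;

/* Returns true if the access is legal; a real MMU traps to the OS otherwise. */
static bool access_ok(const pte *e, access_t a) {
    if (!e->valid)                        return false;   /* invalid page            */
    if (a == ACC_WRITE && !e->writable)   return false;   /* read-only violation     */
    if (a == ACC_EXEC  && !e->executable) return false;   /* not an executable page  */
    return true;
}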
Shared Pages
 Shared code
 One copy of read-only (reentrant) code shared among processes (e.g., text editors, compilers, window systems).
 Shared code must appear in same location in the logical address space of
all processes

 Private code and data


 Each process keeps a separate copy of the code and data
 The pages for the private code and data can appear anywhere in the
logical address space
Shared Pages Example
Structure of the Page Table

 Hierarchical Paging

 Hashed Page Tables

 Inverted Page Tables


Hierarchical Page Tables
 Break up the logical address space into multiple page tables

 A simple technique is a two-level page table

[Figure: the logical memory and its page table (PT); with two levels, the page table itself is split into smaller page tables, one per group of pages (00, 01, 10, 11).]
Two-Level Paging Scheme
[Figure: a 4-bit logical address (page number + offset) translated two ways: with a single-level page table holding all entries 0000..1111, and with a two-level arrangement in which a 4-entry outer page table (00..11) points to 4-entry second-level page tables.]
Two-Level Paging Example
• A logical address (on a 32-bit machine with 1K page size) is divided into:
– a page number consisting of 22 bits
– a page offset consisting of 10 bits
• Since the page table is paged, the page number is further divided into:
– a 12-bit page number
– a 10-bit page offset
• Thus, a logical address is as follows:

page number: p1 (12 bits), p2 (10 bits) | page offset: d (10 bits)

where p1 is an index into the outer page table, and p2 is an index into the inner page table (see the sketch below)
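A minimal C sketch (not from the slides) extracting p1, p2 and d for the 12/10/10 split above; the constants follow directly from the bit widths.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t logical = 0x00ABCDEFu;           /* any 32-bit logical address           */
    uint32_t d  = logical & 0x3FFu;           /* lowest 10 bits: offset in the page   */
    uint32_t p2 = (logical >> 10) & 0x3FFu;   /* next 10 bits: index into inner table */
    uint32_t p1 = logical >> 20;              /* top 12 bits: index into outer table  */
    printf("p1 = %u, p2 = %u, d = %u\n", p1, p2, d);
    return 0;
}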
Address-Translation Scheme
Example: two level page table need
 Logical address length = 32 bits, so the logical address space is 2^32 bytes = 4 GB (only partially used).
 Page size = 4096 bytes; logical address division: 10, 10, 12.
 The process uses 8 MB in one region of its address space and 12 MB in another; the rest is unused.
 Question: what is the total size of the two-level page table if each entry is 4 bytes?
Example: two level page table need (cont.)
 With the 10, 10, 12 division, the top-level page table has 2^10 entries and each second-level page table has 2^10 (1024) entries.
 Each entry of a second-level page table translates a page# to a frame#, i.e., it maps one page of 4096 (2^12) bytes.
 Hence a second-level page table can map 2^10 * 2^12 = 2^22 bytes = 4 MB of logical address space.
Example: two level page table need (cont.)
 8 MB / 4 MB = 2 second-level page tables are required to map 8 MB of logical memory.
 12 MB / 4 MB = 3 second-level page tables are required to map 12 MB of logical memory.
 Total = 2 + 3 = 5 second-level page tables required.
Example: two level page table need (cont.)
 Space needed to hold the page tables of the process:
 top-level page table: 1K entries * 4 bytes = 4 KB
 second-level page tables: 5 * 1K entries * 4 bytes = 20 KB
 Total: 1K * 4 bytes + 5 * 1K * 4 bytes = 24 KB
(The short program below re-checks this arithmetic.)
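The same arithmetic as a short C program (a sketch, not from the slides):

#include <stdio.h>

int main(void) {
    const long entry   = 4;                     /* bytes per page-table entry            */
    const long entries = 1L << 10;              /* 1K entries per table (10-bit index)   */
    const long covered = entries * (1L << 12);  /* 4 MB mapped by one second-level table */
    const long second_level = (8L << 20) / covered      /* 2 tables for the 8 MB region  */
                            + (12L << 20) / covered;    /* 3 tables for the 12 MB region */
    const long total = entries * entry                  /* top-level table: 4 KB         */
                     + second_level * entries * entry;  /* 5 * 4 KB = 20 KB              */
    printf("%ld second-level tables, %ld KB total\n", second_level, total >> 10);  /* 5, 24 */
    return 0;
}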
Three-level Paging Scheme

64-bit addresses
Hashed Page Tables
 Common in address spaces > 32 bits

 Page table is a hash table

 A virtual page number is hashed into a page table entry


 A page table entry contains a chain of elements hashing to the same location
 each element = <a virtual page number, frame number>

 Virtual page numbers are compared in this chain searching for a match
 If a match is found, the corresponding physical frame is extracted
Hashed Page Table
[Figure: the virtual page number is hashed into the table; the chain at that bucket is searched for a matching virtual page number, and the matching element's frame number is used.]
(A sketch of such a lookup follows below.)
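A minimal C sketch (not from the slides) of such a lookup with chaining; the table size and names are invented for illustration.

#include <stdint.h>
#include <stdlib.h>

#define BUCKETS 1024

typedef struct elem {
    uint64_t vpage;        /* virtual page number                     */
    uint64_t frame;        /* physical frame number                   */
    struct elem *next;     /* next element hashing to the same bucket */
} elem;

static elem *table[BUCKETS];

static unsigned hash_vpage(uint64_t vpage) { return (unsigned)(vpage % BUCKETS); }

/* Returns the frame holding vpage, or -1 if the page is not in the table. */
int64_t hpt_lookup(uint64_t vpage) {
    for (elem *e = table[hash_vpage(vpage)]; e != NULL; e = e->next)
        if (e->vpage == vpage)
            return (int64_t)e->frame;      /* match found: extract the frame */
    return -1;                             /* would lead to a page fault     */
}

void hpt_insert(uint64_t vpage, uint64_t frame) {
    elem *e = malloc(sizeof *e);
    e->vpage = vpage;
    e->frame = frame;
    e->next  = table[hash_vpage(vpage)];   /* push onto the bucket's chain */
    table[hash_vpage(vpage)] = e;
}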
Inverted Page Table
• One entry for each real page frame of physical memory

• Entry consists of the page number of the virtual page stored in that real memory
location (frame), with information about the process that owns that page
• Entry content: <pid, virtual page number>

• Decreases memory needed to store each page table, but increases time needed to
search the table when a page reference occurs
Inverted Page Table
Architecture
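A minimal C sketch (not from the slides) of the idea: one entry per physical frame, searched by <pid, virtual page number>; the frame number is simply the index of the matching entry. The linear search is what makes lookups slow, which is why real systems combine this with hashing and the TLB. All names and sizes are invented for illustration.

#include <stdint.h>

#define NUM_FRAMES 4096

typedef struct { int pid; uint64_t vpage; } ipt_entry;

/* ipt[f] records which <pid, page> currently occupies frame f
   (entries are filled in as frames are allocated; pid 0 would mark a free frame here). */
static ipt_entry ipt[NUM_FRAMES];

/* Returns the frame holding (pid, vpage), or -1 if that page is not resident. */
int ipt_lookup(int pid, uint64_t vpage) {
    for (int frame = 0; frame < NUM_FRAMES; frame++)
        if (ipt[frame].pid == pid && ipt[frame].vpage == vpage)
            return frame;
    return -1;
}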
Linear Address in Linux
Broken into four parts:

• On a 64-bit machine (very large address space), all four parts are used.
• On a 32-bit Intel machine, which uses two-level paging, the middle directory is ignored (its size is 0) and only three parts are used.
Three-level Paging in Linux
Linux Paging: some kernel structures

struct task_struct {              /* the PCB object of a process X */
    ...
    struct mm_struct *mm;         /* the mm object of process X (keeps memory-management
                                     related information) */
    ...
};

struct mm_struct {
    ...
    pgd_t *pgd;                   /* top-level page table of process X
                                     (called the page global directory) */
    ...
};
