05 Chap03 PagingIssues 4slots (81 Slides)

OS

5
Memory Management – Part 2
(4 slots)
Chapter 3- Part 2
Design Issues for Paging Systems
Implementation Issues
Segmentation
OS
Objectives
• Design Issues for Paging Systems
– Local vs. Global Page Allocation Policies
– Load Control
– Page Size
– Shared Pages
– Shared Libraries
– Mapped Files
– Cleaning Policy
– Virtual Memory Interface
OS
Objectives…
• Implementation Issues
– OS Involvement with Paging
– Page Fault Handling
– Instruction Backup
– Locking Pages in Memory
– Backing Store
– Policy and Mechanism
• Segmentation
– Pure Segmentation
– Read on your own:
• Segmentation with Paging: MULTICS
• Segmentation with Paging: Intel Pentium
OS

Review: Paging Algorithms

Algorithm (full name): description
– NRU (Not Recently Used): pages are divided into 4 classes. Class 0: not referenced, not modified / Class 1: not referenced, modified / Class 2: referenced, not modified / Class 3: referenced, modified. A page from the lowest-numbered non-empty class is swapped out.
– FIFO (First In, First Out): the oldest page is swapped out.
– Second chance: FIFO with a second chance — a referenced page gets another pass.
– Clock: a circular list is used. The hand points to a page that may be swapped out if its R (referenced) bit is 0.
– LRU (Least Recently Used): the page that has gone unreferenced the longest is evicted.
– NFU (Not Frequently Used): counter = counter + R. The page with the lowest count is swapped out.
– Aging: right-shift each page's counter, then add the R bit to the leftmost bit of the counter. The page with the lowest count is swapped out.
– WSClock (Working Set Clock): a circular list is used. The time when the page was last referenced is stored along with the R bit.
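The aging entry above can be made concrete with a small sketch (8-bit counters and the dictionary layout are illustrative choices, not from any real kernel):

```python
# Sketch of the aging page-replacement algorithm with 8-bit counters.
def age(counters, r_bits):
    """One clock tick: shift each counter right, add the R bit at the left."""
    for page in counters:
        counters[page] = (counters[page] >> 1) | (r_bits[page] << 7)
        r_bits[page] = 0                      # R bits are cleared every tick
    return counters

counters = {0: 0b10000000, 1: 0b01000000}
r_bits = {0: 0, 1: 1}
age(counters, r_bits)
# page 0: 0b01000000, page 1: 0b10100000 -> page 0 now has the lowest count
victim = min(counters, key=counters.get)
print(victim)   # 0
```

Because recent references land in the high-order bit, a page referenced this tick always outranks one referenced only in earlier ticks.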
OS

5.1- Design Issues for Paging System


– Local vs. Global Page Allocation Policies
– Load Control
– Page Size
– Shared Pages
– Shared Libraries
– Mapped Files
– Cleaning Policy
– Virtual Memory Interface
OS
Design Issues for Paging System
Local vs. Global Allocation Policies

• Context: Page fault, Page replacement


• Question: should the page replacement algorithm look for the LRU page
considering only the pages currently allocated to the particular
process, or should it consider all the pages in memory?
– Local: only pages of the current process are considered.
– Global: pages of all processes are considered.
• Answer: it depends on the strategy used to allocate memory
among the competing runnable processes
– Local: every process gets a fixed fraction of memory
allocated
– Global: page frames are dynamically allocated among runnable
processes
OS
Design Issues for Paging System
Local vs. Global Allocation Policies
Local page replacement
(A5 has lowest age value)
Global page replacement
(B3 has lowest age value)

Tanenbaum, Fig. 3-23.

Original
configuration
OS
Design Issues for Paging System
Local vs. Global Allocation Policies

• Local algorithms
– If the working set grows → thrashing, even when free frames exist elsewhere
– If the working set shrinks → memory is wasted
• Global algorithms
– Generally work better, especially when the working set size can
vary over the lifetime of a process
– Thrashing can spill over to the processes whose pages are chosen
for replacement → those processes cannot control their own
page fault rate
– If a process is given more frames than its working set needs → memory is wasted
OS
Design Issues for Paging System
Local vs. Global Allocation Policies
• The system must continually decide how many page frames to assign
to each process. Strategies:
– Monitor the working-set size of all processes based on the
aging bits of their pages (this does not necessarily prevent thrashing)
– Page frame allocation algorithm:
• Periodically determine the number of running processes
• Allocate frames proportionally to each process's size
• Give each process a minimum number of frames
• Update the allocation dynamically
– The PFF (Page Fault Frequency) algorithm (count the
number of page faults per second)
• Some page replacement algorithms
– can work with both policies (FIFO, LRU)
– can work only with the local policy (WSClock)
(First In First Out / Least Recently Used / Working Set Clock)
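The proportional-with-minimum allocation above can be sketched in a few lines (the function name, sizes, and frame counts are hypothetical):

```python
# Sketch of proportional frame allocation with a per-process minimum.
def allocate_frames(process_sizes, total_frames, min_frames=2):
    total_size = sum(process_sizes.values())
    alloc = {}
    for pid, size in process_sizes.items():
        share = total_frames * size // total_size   # proportional share
        alloc[pid] = max(min_frames, share)          # enforce the minimum
    return alloc

# 64 frames split among three processes of different sizes:
print(allocate_frames({"A": 100, "B": 500, "C": 40}, 64))
# {'A': 10, 'B': 50, 'C': 4}
```

A real system would recompute this periodically as processes start, exit, and change size.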
OS
Design Issues for Paging System
Load Control
• Context
– Thrashing can occur even when the best page
replacement algorithm and an optimal global allocation
of page frames are used
– When the combined working sets of all processes
exceed the capacity of memory → thrashing
– The PFF (Page Fault Frequency) algorithm may indicate that
some processes need more memory while no process
needs less memory
 → there is no way to give more memory to the processes
needing it without hurting other processes
OS
Design Issues for Paging System
Load Control

• A good way to reduce the number of processes
competing for memory is to swap some processes out
and free all the pages they are holding, dividing those
pages among the thrashing processes
– Keep the page-fault rate acceptable (periodically,
some processes are swapped in and others are
swapped out)
– Take into account the degree of multiprogramming
• Consider not only process size and paging rate
when deciding which process to swap out, but also
its characteristics (CPU-bound or I/O-bound)
OS
Design Issues for Paging System
Page Size
• Is often a parameter that can be chosen by the OS
• Determining the best page size requires balancing
several competing factors (there is no overall
optimum)
• Arguments for a small page size
– Less internal fragmentation: on average, half of the final page of each
segment is empty. If a process has text (code), data, and stack segments
(n = segments in memory, p = page size in bytes), the
internal fragmentation is np/2 bytes
– But: small pages mean many pages → the page table is large, and
transferring a page carries relatively more overhead
Review:
Internal fragmentation: holes inside the memory allocated to a process
External fragmentation: holes between the memory areas allocated to processes
OS
Design Issues for Paging System
Page Size
• Arguments for a large page size
– A smaller page table, and transferring a page takes less time than with small pages
– But: more unused program tends to be kept in memory than with a small page size
• Mathematical analysis
s = average process size in bytes
p = page size in bytes
s/p ≈ number of pages needed per process
e = number of bytes per page table entry
overhead of memory per process = se/p + p/2, due to
page table size and internal fragmentation
The optimum is found by setting the first derivative to 0:
−se/p² + 1/2 = 0 ⇒
p = √(2se)
OS
Design Issues for Paging System
Separate Instruction and Data Spaces

• Problems
– Most computers have a
single address space that
holds both programs and
data
– This address space is
often too small, forcing
programmers to stand on
their heads to fit
everything into the
address space

Tanenbaum, Fig. 3-25.
OS
Design Issues for Paging System
Separate Instruction and Data Spaces

Solution: separate address
spaces for
 Instructions (I-space)
 Data (D-space)
• Each address space runs from 0 to
some maximum

Tanenbaum, Fig. 3-25.

• The linker and the hardware must know when separate spaces
are being used
• Both address spaces can be paged independently, and each has its
own page table (mapping virtual pages to physical page frames)
• In practice, this doubles the available address space
OS
Design Issues for Paging System
Shared Pages
• Problem
In a large multiprogramming system,
several users may run the same
program at the same time → multiple
copies of the same page end up in
memory at the same time. How to
manage them?
• Solution
– Share the read-only pages (code pages,
but not data pages)
– If separate spaces are supported, two or
more processes can share the same I-space
while keeping different D-spaces.
• Implementation
– Page tables are data structures independent
of the process table
– Each process has two pointers in its process
table entry: one to the I-space page table and one to the
D-space page table

Tanenbaum, Fig. 3-26.
OS
Design Issues for Paging System
Shared Pages
Problem:
Processes A and B both run the editor and share its
pages. When A terminates and its code pages are
removed, how does the OS ensure that process B is not
affected?
Solution:
Otherwise, B would generate a large number of page faults to
bring the shared pages back in. Since searching every page
table to discover whether a page is shared is expensive, a
data structure that keeps track of shared pages is maintained
to avoid that search.
OS
Design Issues for Paging System
Shared Pages
• Sharing data (modifiable pages)
Context
• Data can be shared and may be modified concurrently by several
processes
• Ex:
– The child process attempts to modify a page containing a portion of the stack
– The OS recognizes that the page is shared and may be modified (by
both processes), so it creates a copy of this page and maps it into the
address space of the child process
– The child then modifies its own copy, not the page belonging to
the parent process
Solution: the copy-on-write strategy is applied to modifiable data
pages (used by OSes that duplicate processes,
e.g. Windows 2000, Linux, Solaris 2)
OS
Design Issues for Paging System
Shared Libraries
• Problem
– In modern systems, many libraries are used by many processes
– Statically binding all these libraries into every executable
program on disk would make executables even more bloated

[Figure: Processes 1–4 all mapping one shared DLL. Tanenbaum, Fig. 3-27.]

• Solution: shared libraries
(ex: DLLs – Dynamic Link Libraries) — separate common routines into
shared libraries
OS
Design Issues for Paging System
Shared Libraries
• Shared libraries
– When a program is linked with a shared library, instead of including
the actual function called, the linker includes a small stub
routine that binds to the called function at run time
– A shared library is loaded only when the program is
loaded or when a function in it is called for the first
time
– The entire library is not read into memory; it is paged in,
page by page, as needed, so functions that are not called
are never brought into RAM
– Advantages
• Executable files are smaller, and space is saved in memory.
• If a function in a shared library is updated to remove a bug, it is not
necessary to recompile the programs that call it. The old binaries
continue to work.
OS
Design Issues for Paging System
Shared Libraries
• Problem
– Two processes may map the same shared library at a different
address in each process
– A function in the library therefore ends up at a different
absolute address in each process
→ relocating the library on the fly does not work: fixing up
absolute addresses would require a private, modified copy of the
pages per process
→ absolute addresses cannot be used inside shared libraries
OS
Design Issues for Paging System
Shared Libraries
Solution
– First approach: use copy-on-write
• Create new pages and relocate them on the fly, correcting the
addresses for each process as the pages are touched.
This defeats the purpose of sharing: even otherwise unmodified
pages get copied merely because they were relocated.
– Second approach: use position-independent code
• Instructions in the code use only relative offsets rather
than absolute addresses.
• This works correctly no matter where in the virtual address
space the shared library is placed.
Design Issues for Paging System OS

Memory-Mapped Files
• Context
– Manipulating a file on disk requires the system calls
open(), read(), and write(), and each access must go to the (slow) disk
– Virtual memory already swaps memory pages in/out from/to
disk.

• Solution: use memory-mapped files

– Treat file I/O as routine memory access (reads and writes go
through the paging machinery in binary form; the file's internal
data structure is not interpreted).
– Allow a part of the virtual address space to be logically
associated with the file
– Map disk blocks to page(s) in memory and use the file
on disk as the backing store.
 → faster and simpler
Design Issues for Paging System OS

Memory-Mapped Files
How does it work?
– Initially, when the file is first accessed, a page fault occurs
– Pages are then allocated and loaded with the content from the
file system
– Subsequent file manipulations are handled as routine
memory accesses
– Writes to memory need not go to the file on disk immediately.
Instead, the OS periodically updates the file on
disk if the memory has been modified
– When the process exits (or the file is closed), all the modified
pages are written back to the file automatically.
Design Issues for Paging System OS

Memory-Mapped Files
• In practice,
– Some OSes offer this solution only through a specific
system call (ex: mmap() in Solaris 2) and treat all other
file I/O through the standard system calls
– If two processes map the same file at the same time,
they can communicate over shared memory
• What one process writes to the shared memory is
immediately visible when another reads it (data sharing,
with copy-on-write supported) → provides a high-bandwidth
channel between processes
• Shared libraries can use this mechanism when memory-
mapped files are available
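As a concrete illustration of the idea, a small Python sketch using its `mmap` interface — writes to the mapping become writes to the file through the paging machinery (the file name and contents are invented for the demo):

```python
import mmap
import os
import tempfile

# Create a small file to map (throwaway demo file).
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"hello paging")

# Map the file into the address space; stores into the mapping modify the file.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as m:
        m[0:5] = b"HELLO"   # modify the "file" via ordinary memory access
        m.flush()           # force the dirty page back to disk

with open(path, "rb") as f:
    print(f.read())         # b'HELLO paging'

os.unlink(path)
```

Two processes mapping the same file with shared semantics would see each other's stores without any read()/write() calls, which is the shared-memory channel described above.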
OS
Design Issues for Paging System
Cleaning Policy
• Paging works best with a plentiful supply of free page frames whose
previous contents have already been written back to disk → the free
frames must be clean.
• Solution: a paging daemon is used.
– A background process that sleeps most of the time but is awakened
periodically to inspect the state of memory
– It ensures a plentiful supply of free page frames so that paging works best
– If too few page frames are free, the paging daemon begins
selecting pages to evict using some page replacement
algorithm, writing them to disk first if they are modified
– The previous contents of freed pages are remembered, so a freed
page can be reclaimed without disk I/O if it is referenced again
 → all the free frames are clean, so no page has to be written to
disk in a big hurry at fault time
OS
Design Issues for Paging System
Cleaning Policy
Cleaning policy
– Using a two-handed clock
• The front hand (pointer for cleaning pages) is controlled by
the paging daemon. When it points to a dirty page, that
page is written back to disk and the front hand is
advanced. Otherwise, it is just advanced
• The back hand (pointer for choosing a page to evict) is used
by the page replacement algorithm as in the standard clock
algorithm
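The two hands can be sketched as follows (class and function names are illustrative, not kernel code; "write to disk" is simulated by clearing the dirty bit):

```python
# Minimal sketch of the two-handed clock.
class Page:
    def __init__(self):
        self.referenced = False
        self.dirty = False

def front_hand_step(pages, hand):
    """Paging daemon: clean the page under the front hand, then advance."""
    page = pages[hand]
    if page.dirty:
        page.dirty = False           # stands in for "write page back to disk"
    return (hand + 1) % len(pages)

def back_hand_step(pages, hand):
    """Standard clock replacement: evict the first page with R == 0."""
    while True:
        page = pages[hand]
        if page.referenced:
            page.referenced = False  # give the page a second chance
            hand = (hand + 1) % len(pages)
        else:
            return hand, (hand + 1) % len(pages)   # victim, new hand

pages = [Page() for _ in range(4)]
pages[1].dirty = True
pages[0].referenced = True

front = front_hand_step(pages, 1)      # cleans page 1, advances to 2
victim, back = back_hand_step(pages, 0)
print(front, victim)                   # 2 1
```

Running the front hand ahead of the back hand raises the chance that the back hand finds its victim already clean, so eviction needs no disk write.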
OS
Design Issues for Paging System
Virtual Memory Interface
• Context
– Large virtual address space ↔ smaller physical memory
– Can programmers get control over their memory map, to allow
several processes to share memory?
• Solution: shared memory
– Programmers can name regions of their memory, and those
names can be passed to other processes.
 → a page in memory can be shared by several processes
A shared memory region can be written by one process and
read by another (as in the pipe mechanism; IPC – InterProcess
Communication).
 → high bandwidth, improves program performance
OS
Design Issues for Paging System
Virtual Memory Interface
• In message-passing systems:
– Problem: when messages are passed, the data are
copied from one address space to another →
complexity and wasted time
– Solution: shared memory is used, remapping the
named pages instead of copying all the data
• The sender unmaps the pages containing the message
once the message has been passed
• The receiver then maps those pages in (only the
page names have to be copied)
 → high performance
OS
Design Issues for Paging System
Virtual Memory Interface

• In distributed applications,
– When a process references a page that is not mapped in,
a page fault occurs.
Solution:
(1) The page fault handler locates the machine holding the
page and sends it a message asking it to unmap the page
and send it over the network
(2) When the page arrives, it is mapped in and the faulting
instruction is restarted
→ allows multiple processes over a network to share a set of
pages (high performance)
OS
Design Issues for Paging System
Virtual Memory Interface

• CPU 0 references page 10 → page 10 is moved to CPU 0

• CPU 1 references page 10 → page 10 is made read-only and replication is used

Tanenbaum, Fig. 8-22


OS
5.2- Implementation Issues
– OS Involvement with Paging
– Page Fault Handling
– Instruction Backup
– Locking Pages in Memory
– Backing Store
– Policy and Mechanism
OS
Impl. Issues: OS Involvement in Paging

• There are four situations in which the OS has paging-related
work to do
– Process creation time
– Process execution time
– Page fault time
– Process termination time
OS
Impl. Issues: OS Involvement in Paging…

• At Process creation time

– The OS has to determine how large the program and data will be
and create a page table for it.
– Space has to be allocated in memory for the page table, and it has to
be initialized.
– The page table need not be resident when the process is swapped
out, but has to be in memory when the process is running.
– Space has to be allocated in the swap area on disk (and
initialized with the program text and data) so that when a page
is swapped out, it has somewhere to go.
– Some systems page the program text (code) directly from the
executable file, thus saving disk space and initialization time.
– Information about the page table and the swap area on disk must be
recorded in the process table.
OS
Impl. Issues: OS Involvement in Paging…

• At Process execution time


– When a process is scheduled for execution, the MMU
( memory management unit) has to be reset for the new
process and the TLB (translation lookaside buffer) flushed,
to get rid of traces of the previously executing
process.
– The new process’ page table has to be made current,
usually by copying it or a pointer to it to some hardware
registers.
– Optionally, some or all of the process’ pages can be
brought into memory to reduce the number of page faults
initially.
OS
Impl. Issues: OS Involvement in Paging…

• At Page fault time

– When a page fault occurs, the OS has to read out
hardware registers to determine which virtual address
caused the fault.
– The OS must compute which page is needed and locate that
page on disk, then it must find an available page frame to
put the new page in, evicting some old page if need be.
– The OS reads the needed page into the page frame.
– The OS must back up the PC (program counter) to make it
point to the faulting instruction and let that instruction
execute again.
OS
Impl. Issues: OS Involvement in Paging…

• At Process termination time

– When a process exits, the OS must release its page table,
its pages, and the disk space that the pages occupy when they
are on disk
– If some of the pages are shared with other processes,
the pages in memory and on disk can only be released
when the last process using them has terminated.
OS
Impl. Issues: 10 steps for Page Fault Handling

• 1: The hardware traps to the kernel; the PC is saved on the
stack; information about the state of the current instruction is
saved in special CPU registers.
• 2: An assembly-code routine is started to save the
general registers and other volatile information, to keep
the OS from destroying it.
• 3: The OS finds out which virtual page is needed (the one
whose absence caused the fault).
– Often a hardware register contains this information.
– If not, the OS must retrieve the PC, fetch the instruction, and
parse it in software to figure out what it was doing when the
fault hit.
OS
Impl. Issues: 10 steps for Page Fault Handling

• 4: Check that the address is valid and the protection is
consistent with the access.
– If not, the process is sent a signal or killed.
– If the address is valid and no protection fault has
occurred, the system checks whether a page frame
is free. If no frames are free, the page replacement
algorithm is run to select a victim.
• 5: If the victim page is dirty, the page is scheduled
for transfer to the disk, and a context switch takes
place, suspending the faulting process and letting
another one run until the disk transfer has completed.
In any event, the frame is marked as busy to prevent it
from being used for another purpose.
OS
Impl. Issues: 10 steps for Page Fault Handling
• 6: When the page frame is clean, the OS finds the disk
address where the needed page is and schedules a disk
operation to bring it in. While the page is being loaded, the
faulting process is still suspended and another user process is
run, if one is available.
• 7: When the disk interrupt indicates that the page has arrived,
the OS updates the page table and marks the frame as being in
normal state.
• 8: The faulting instruction is backed up to the state it had
when it began, and the PC is reset to point to that instruction.
• 9: The faulting process is scheduled, and the OS returns to the
routine that called it.
• 10: This routine reloads the registers and other state
information and returns to user space to continue execution, as
if no fault had occurred.
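The ten steps above can be condensed into a toy user-space simulation — here FIFO replacement stands in for whatever algorithm the kernel would run, and all names are invented for illustration:

```python
# Toy page-fault handler: FIFO replacement over a fixed set of frames.
from collections import deque

FRAMES = 3

def run(references):
    frames = {}          # virtual page -> frame slot (the "page table")
    fifo = deque()       # eviction order
    faults = 0
    for page in references:
        if page in frames:              # no fault: ordinary memory access
            continue
        faults += 1                     # steps 1-3: trap, find the needed page
        if len(frames) >= FRAMES:       # step 4: no free frame -> pick a victim
            victim = fifo.popleft()     # steps 5-6: (write back if dirty,) free it
            slot = frames.pop(victim)
        else:
            slot = len(frames)
        frames[page] = slot             # step 7: page table updated, frame valid
        fifo.append(page)               # steps 8-10: back up, restart instruction
    return faults

print(run([0, 1, 2, 0, 3, 0]))  # 0,1,2 fault; 3 evicts 0; 0 faults again -> 5
```

The simulation leaves out everything the kernel cannot skip — register save/restore, the busy mark on the frame, and the disk I/O itself — but keeps the fault/evict/load skeleton visible.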
OS
Impl. Issues: Instruction Backup
• When a page fault occurs and the OS fetches the needed page,
how is the instruction that caused the trap restarted?
– The program counter saved (in the PCB) at trap time may point at
either of two places:
• Case 1: the counter points at the instruction that caused the
fault → counter + 1 is the next instruction
• Case 2: the counter already points at the instruction right after
the one that caused the fault
OS
Impl. Issues: Instruction Backup…
• Context
– To what point should the PC be backed up when a partially
executed instruction faults?
– From where should execution resume when the instruction is
restarted?
– The value of the PC at the time of the trap depends on which operand
faulted and on how the CPU's microcode has been implemented
– In auto-incrementing mode
• First possibility
– The increment may be done before the memory reference
– Before restarting the instruction, the OS must decrement the register
• Second possibility
– The auto-increment may be done after the memory reference
– The OS need do nothing to restart the instruction
– In auto-decrementing mode, a similar problem occurs
OS
Impl. Issues: Instruction Backup…
• Problem
– A page fault can occur while an instruction references memory —
on the instruction fetch itself or on one of its operands
– To restart the instruction, the OS must determine where the
first byte of the instruction is
– If the fault occurred on an operand, the OS cannot tell from the
faulting address alone where the instruction (the opcode) begins
 → How can the OS determine what happened and how to undo
it?
• Solution (with hardware help)
– A hidden internal register stores the PC before each
instruction is executed.
– A second register records which registers have been
auto-incremented or auto-decremented, and by how much.
 → When the faulting instruction is restarted, the OS can
unambiguously undo all of its effects
OS
Impl. Issues:Locking Pages in Memory
• Problem
– Global paging is used. Process A is suspended waiting for I/O. Process
B is chosen to run, and B causes a page fault. A page of A that is
about to receive data from the I/O device is evicted.
→ With DMA (Direct Memory Access, a way to transfer data
without CPU involvement) used for the I/O, evicting A's page
causes a problem → will the transferred data be written into
A's buffer or into the newly loaded page?
• Solution
– First approach: lock the pages engaged in I/O in memory so that
they will not be removed ("pinning"). If many pages stay pinned
for long periods, memory pressure rises and a bottleneck or thrashing
can occur.
– Second approach: do all I/O into kernel buffers and copy the data
to user pages later.
OS

Impl. Issues: Backing Store


The simplest algorithm for allocating page space on disk:
– A special swap partition exists on the disk.
– This partition is addressed by block numbers relative to the start of the partition,
instead of through a normal file system.
– When the system is booted, the swap partition is empty and is represented in
memory as a single entry giving its origin and size.
– When a process is started, a chunk of the partition the size of the process
is reserved, and the remaining area is reduced by that amount.
– As new processes are started, they are assigned chunks of the swap partition
equal in size to their core images.
– When they finish, their disk space is freed.
– Requirement: the swap area must be initialized by copying the entire process image
to it before the process starts.
→ associated with each process is the disk address of its swap area, i.e. where
on the swap partition its image is kept (also recorded in the process table).
OS

Impl. Issues: Backing Store


• Problem
– A process can grow after starting, as its data and stack
grow.
• Solution
– First approach: reserve separate swap areas for text, data, and
stack, and allow each of these areas to consist of more than one
chunk on the disk.
– Second approach:
• Allocate nothing in advance; allocate disk space for each page
when it is swapped out and deallocate it when the page is swapped back in
 → processes in memory do not tie up any swap space
• Disadvantage
– A disk address is needed in memory to keep track of each page on disk
– Each process needs a table recording the location of each of its pages on
disk
OS

Impl. Issues: Backing Store


• First approach in detail:
paging to a static swap area
– The swap area on disk is as large as
the process's virtual address space.
– Each page has a fixed location on
disk, stored contiguously in
order of page number.
– A page that is in memory always has a
shadow copy on disk.
– Disadvantages
• A page must eventually be synchronized
to disk even if the copy in
memory has not been modified
• The copy on disk may be out of
date because a modified page in memory
has not been written back for a long time
– Ex: applied in Unix.

Paging to a static swap area. Tanenbaum, Fig. 3-29.
OS

Impl. Issues: Backing Store


• Second approach in detail:
backing up pages dynamically
• Disk map
– A table mapping each virtual page to a
disk address is used
– Its entries contain an invalid disk address
or a bit marking them as not in use
• Pages do not have fixed addresses on disk.
• A page that is in memory has no copy on disk.
• When a page is swapped out, an empty
disk block is chosen on the fly and the disk
map is updated accordingly.
• Ex: Windows uses a swap file to implement this
strategy.

Backing up pages dynamically. Tanenbaum, Fig. 3-29.
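A toy version of such a disk map (the class, the sentinel value, and the block numbering are invented for illustration):

```python
# Toy disk map for dynamic backing store: a page gets a disk block only
# when it is swapped out, and gives it back when swapped in.
INVALID = -1

class DiskMap:
    def __init__(self, num_pages, free_blocks):
        self.map = [INVALID] * num_pages     # virtual page -> disk block
        self.free = list(free_blocks)        # pool of free swap blocks

    def swap_out(self, page):
        block = self.free.pop()              # choose an empty block on the fly
        self.map[page] = block
        return block

    def swap_in(self, page):
        block = self.map[page]
        self.map[page] = INVALID             # page in memory has no disk copy
        self.free.append(block)
        return block

dm = DiskMap(4, free_blocks=[7, 8, 9])
b = dm.swap_out(2)
print(b, dm.map)        # 9 [-1, -1, 9, -1]
```

The price of the flexibility is exactly the disadvantage noted above: the map itself must live in memory, one entry per virtual page.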
OS

Impl. Issues: Separation of Policy and Mechanism


• Three parts of the memory management system
– A low-level MMU handler
• All the details of how the MMU works are encapsulated in the MMU handler.
• The MMU handler is machine-dependent code and has to be rewritten for each new
platform the OS is ported to.
– A page fault handler that is part of the kernel
• The page fault handler is machine-independent code and contains most of the
mechanism for paging.
– An external pager running in user space
• Policy is largely determined by the external pager, which runs as a user
process.
• When a process starts up, the external pager is notified in order to set up the
process page map and allocate backing store on disk if need be.
• As the process runs, it may map new objects into its address space, so the
external pager is notified again.
OS

Impl. Issues: Separation of Policy and Mechanism

Tanenbaum, Fig. 3-30.

• When the process starts running, it may get a page fault.

• The fault handler figures out which virtual page is needed and sends a message to the
external pager, telling it the problem.
• The external pager then reads the needed page from the disk and copies it to a
portion of its own address space, then tells the fault handler where the page is.
OS

Impl. Issues: Separation of Policy and Mechanism

• The fault handler then unmaps the page from the external pager’s address
space and asks the MMU handler to put it into the user’s address space at the
right place.
• The user process can then be restarted.
• Problems
– The external pager does not have access to the Referenced and Modified bits
of the pages
– Some mechanism is needed to pass this information to the external pager, or
the page replacement algorithm must go into the kernel
• In the latter case, the fault handler tells the external pager which page it has selected for
eviction and provides the data, either by mapping it into the external pager’s
address space or by including it in a message. Either way, the external pager
writes the data to disk
• Advantages: more modular code and greater flexibility
• Disadvantages: the extra overhead of the various messages sent between
the pieces of the system.
OS
5.3- Segmentation
Context
– The programmer's view of memory is not
usually a single linear address space.
– Programmers cannot predict how large
the various parts will be, or how they will grow, and
do not want to manage where they go in
virtual memory.
– Paged virtual memory is one-dimensional:
virtual addresses go from 0 to
some maximum, one address
after another.
– Virtual memory gives a process one
complete virtual address space to itself.
– A compiler, for example, builds several tables for
different purposes, and these tables may grow
in size.
OS
Segmentation
• Problem
– A process such as a compiler has many objects — executable
code, static/heap data, stack, constant tables, ... — located
in its address space
– Any of them may grow or shrink dynamically and
unpredictably
→ a way is needed to free the programmer from managing these
expanding and contracting areas, just as virtual memory eliminates
the worry of organizing the program into overlays
• Solution: use segmentation
– Segmentation provides a mechanism for managing memory
with many completely independent address spaces,
which are not of a fixed size like pages
– Segmentation maintains multiple separate virtual address
spaces per process
– The address space is a collection of segments
– Each segment can grow or shrink independently
Segmentation OS

Example

A segmented memory allows each table to grow or shrink independently of the other tables
Tanenbaum, Fig. 3-32.
OS
Segmentation
• Characteristics of a segment
– It is a logical entity.
– It consists of a linear sequence of addresses, from 0 to some maximum (its length)
– Different segments may have different lengths, and a length may change during
execution (ex: the stack segment may grow or shrink)
– Each segment is a separate address space; different segments can grow and shrink
independently without affecting each other
– A segment might contain a procedure, an array, a stack, or a collection of scalar
variables, but usually it does not contain a mixture of different types
(the types may need different kinds of protection)
– Segments facilitate sharing procedures or data between several processes (e.g. shared
libraries)
• The compiler automatically constructs segments reflecting the input
program
• An address in a segmented memory is specified as:
<segment-number, offset>
Segmentation OS

Implementation using Hardware


• Segment table
– Each entry has a segment
base and a segment limit.
– The segment base contains the
starting physical address
where the segment resides
in memory.
– The segment limit specifies
the length of the segment.
• A virtual address is split into <segment_number, offset>, and
segment_number is used as an index into the segment table
• The physical address is calculated by adding the offset to the
segment base (after checking that the offset is within the limit)
Segmentation OS

Example

• Segment 2 begins at physical address 4300

and is 400 bytes long. If
byte 53 of
segment 2 is referenced:
 → the physical address is
4300 + 53 = 4353
If byte 1222 of segment 0 is referenced (beyond its
limit), a trap to the OS occurs
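The translation can be sketched directly. Segment 2's base and limit match the example; segment 0's base and limit are not given in the slides, so the values below are assumed for illustration:

```python
# Sketch of segment-table address translation.
# (base, limit) pairs; segment 0's entry is an assumed value for the demo.
SEGMENT_TABLE = {
    0: (1400, 1000),   # assumed for illustration
    2: (4300, 400),    # matches the example: base 4300, length 400
}

def translate(segment, offset):
    base, limit = SEGMENT_TABLE[segment]
    if offset >= limit:
        # In hardware this would be a trap to the OS.
        raise MemoryError(f"trap: offset {offset} beyond limit of segment {segment}")
    return base + offset

print(translate(2, 53))    # 4353
try:
    translate(0, 1222)     # beyond segment 0's limit
except MemoryError as e:
    print(e)
```

The limit check runs on every reference, which is what gives segmentation its per-segment protection.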
OS
Segmentation
• Advantages
– Segments can grow or shrink independently
– Reduces relinking cost: no starting addresses need be modified even if a
procedure is subsequently modified and recompiled to a larger or
smaller size than before
• Each procedure occupies a separate segment, with address 0 as its starting
address
• A call to the procedure in segment n is addressed as (n, 0), so no other
procedures need be changed
– In a one-dimensional memory, by contrast, procedures are packed tightly next to each
other with no address space between them → changing one procedure's size requires
updating the starting addresses of the (moved) procedures that follow it
• Sharing
– Entries in the segment tables of two different processes can point to the same physical
location
→ eliminating the need for copies of the same code or data in every process's address space
• Protection is supported by memory management in modern OSes (e.g., the code
segment is read-only, the limit register in hardware, …)
• Disadvantage: external fragmentation → compaction
Segmentation OS
Example
• Two processes have the same segment address for their
editor (the same code; the code is read-only), but they
have different data segments.
Segmentation OS
Paging vs. Segmentation
Tanenbaum, Fig. 3-33.
OS
Segmentation
Implementation of Pure Segment
• Checkerboarding and external fragmentation
– After the system has been running for a while, memory is divided up
into a number of chunks, some containing segments and some
containing holes
– Memory is wasted in the holes (external fragmentation) → compaction

(a)-(d): Development of checkerboarding.
(e) Removal of the checkerboarding by compaction.
Tanenbaum, Fig. 3-34.
OS
Segmentation
Segmentation with Paging
• Context
– Both paging and segmentation have advantages and disadvantages
– Some systems use one of them to manage memory
– However, if segments are large, it may be inconvenient, or
even impossible, to keep them in main memory in their entirety
• Solution: page the segments
– Treat each segment as a virtual memory and page it, combining
• the advantages of paging (uniform page size, and not having to keep a
whole segment in memory if only part of it is being used)
• the advantages of segmentation (ease of programming, modularity,
protection, sharing)
– This strategy is applied in the Intel Pentium and MULTICS
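The combined scheme can be sketched as follows. This is a toy model with hypothetical sizes (4 KB pages) and table contents; each segment descriptor carries a limit and a pointer to that segment's own page table, so only the touched pages of a segment need be resident.

```python
# Toy sketch of segmentation combined with paging: each segment has its
# own page table; None marks a page that is not resident in memory.

PAGE_SIZE = 4096  # assumed 4 KB pages

def translate(segments, seg_num, offset):
    limit, page_table = segments[seg_num]
    if offset >= limit:
        raise MemoryError("segment limit exceeded")        # trap to OS
    page, page_offset = divmod(offset, PAGE_SIZE)
    frame = page_table[page]
    if frame is None:
        raise LookupError("page fault")                    # page not resident
    return frame * PAGE_SIZE + page_offset

# Segment 0 is 3 pages long; pages 0 and 2 are resident in frames 7 and 2.
segments = {0: (3 * PAGE_SIZE, [7, None, 2])}
print(translate(segments, 0, 10))  # frame 7 * 4096 + 10 = 28682
```

Touching page 1 of segment 0 raises the page-fault stand-in rather than a segment violation, which is exactly the benefit: the segment is valid, but only part of it is in memory.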
OS
Summary

• Design Issues for Paging Systems
• Implementation Issues
• Segmentation
Q&A
OS
Keep in Your Mind

• A global allocation policy generally works better than a local
policy.
• Thrashing can occur even if the best page replacement
algorithm and an optimal global allocation of page frames are
used.
• A small page size → little wasted space, but a large page table.
• Separating code and data into two address spaces makes
managing data pages easier.
• Shared pages are pinned until no process accesses them.
OS
Keep in Your Mind

• Shared libraries:
– Shared libraries (DLLs) are a solution to minimize process size
– A shared library is loaded either when the program is loaded or when a
function in it is called for the first time
– The entire library is not read into memory; it is paged in, page by page, as
needed, so functions that are never called are not brought into RAM
– If a function in a shared library is updated to remove a bug, it is not
necessary to recompile the programs that call it. The old binaries continue
to work.
• Copy-on-write is a solution for shared data pages
• The memory-mapped file mechanism periodically writes modified
pages to disk, which makes swapping a page faster and
simpler.
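The copy-on-write idea mentioned above can be illustrated with a toy model (not a real OS mechanism; the class and its names are hypothetical): two processes share one frame until the first write, at which point the writer silently gets a private copy.

```python
# Toy model of copy-on-write: processes share a single frame until one
# of them writes, at which point that process receives a private copy.

class CowPage:
    def __init__(self, data):
        self.shared = bytearray(data)   # the single shared frame
        self.frames = {}                # pid -> frame seen by that process

    def attach(self, pid):
        # Map the shared frame; a real OS would mark it read-only so that
        # the first write traps and triggers the copy below.
        self.frames[pid] = self.shared

    def write(self, pid, index, value):
        if self.frames[pid] is self.shared:
            self.frames[pid] = bytearray(self.shared)  # copy on first write
        self.frames[pid][index] = value

page = CowPage(b"abc")
page.attach(1)
page.attach(2)
page.write(2, 0, ord("Z"))          # process 2 writes and gets a copy
print(bytes(page.frames[1]), bytes(page.frames[2]))  # b'abc' b'Zbc'
```

Process 1 still sees the original data; only the writer paid the cost of a copy.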
OS
Keep in Your Mind

• To clean frames: use two pointers; the front hand cleans pages
(writes dirty pages to disk), while the back hand chooses a page to evict.
• Virtual memory interface: programmers can name regions of the
program and make them shared regions → shared memory is
implemented by mapping the named pages instead of copying all the data
• In distributed systems, shared pages are transferred from one
machine to another.
• Work that must be done when a process is created:
(1) Determine the program size (code, data)
(2) Prepare memory space to load it
(3) Set up its page table
(4) Prepare its backing files for swapping
(5) Update its PCB
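The two-handed cleaning policy at the top of this slide can be sketched as a toy simulation. The frame states are hypothetical, and the eviction rule here (requiring the page to be both clean and unreferenced) is a simplification of the real BSD-style two-handed clock.

```python
# Toy sketch of the two-handed clock: the front hand writes dirty pages
# back to disk; the trailing back hand evicts a clean, unreferenced page.

class Frame:
    def __init__(self, referenced, dirty):
        self.referenced, self.dirty = referenced, dirty

def advance(frames, front, back):
    """One step of both hands; returns new positions and a victim (or None)."""
    if frames[front].dirty:
        frames[front].dirty = False         # schedule a write-back to disk
    victim = None
    b = frames[back]
    if not b.referenced and not b.dirty:
        victim = back                       # clean and unreferenced: evict
    else:
        b.referenced = False                # give it a second chance
    n = len(frames)
    return (front + 1) % n, (back + 1) % n, victim

frames = [Frame(False, False), Frame(True, True), Frame(False, True)]
front, back, victim = advance(frames, front=2, back=0)
print(victim)  # frame 0 is clean and unreferenced, so it is evicted
```

Keeping the front hand ahead of the back hand means that by the time a frame is considered for eviction, it has usually already been cleaned.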
OS
Keep in Your Mind

• Work that must be done when a process runs:
(1) Reset the MMU
(2) Reset (flush) the TLB
(3) Make its page table current
• Work that must be done when a process causes a
page fault:
(1) Read hardware registers to determine the virtual address
causing the fault
(2) Determine the page that must be loaded
(3) Determine the page to evict, if needed
(4) Back up the program counter
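The page-fault steps above can be sketched as a toy handler. All structures here are hypothetical stand-ins for kernel state; a real handler runs in the kernel with hardware support.

```python
# Toy sketch of page-fault handling: derive the faulting page from the
# virtual address, find a frame (evicting one only if needed), map the page.

PAGE_SIZE = 4096

def handle_fault(fault_addr, page_table, free_frames, pick_victim):
    page = fault_addr // PAGE_SIZE          # (1)-(2): page from fault address
    if free_frames:
        frame = free_frames.pop()
    else:                                   # (3): evict only when no frame is free
        victim = pick_victim(page_table)
        frame = page_table.pop(victim)      # victim page loses its frame
    # ... here the page would be read in from the backing store ...
    page_table[page] = frame
    return page, frame                      # (4): restart the faulting instruction

page_table = {0: 5}
page, frame = handle_fault(8192, page_table, free_frames=[3], pick_victim=None)
print(page, frame)  # virtual page 2 mapped into free frame 3
```

When no free frame exists, `pick_victim` stands in for whichever page replacement algorithm is in use.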
OS
Keep in Your Mind

• Instruction backup: when a page fault occurs, besides
determining the referenced page, the OS must also record the
position of the instruction that caused the fault, so that
execution can resume at that instruction once the page has
been brought in.
• Two approaches for the backing store:
(1) Static swap area: a rather large area on a disk partition
is allocated to a process. This approach needs to keep
frames in memory and pages on disk synchronized.
(2) Dynamic swap area: initially, no disk area is allocated;
a disk area is allocated only when a page is swapped out.
This approach needs a disk map, which must be maintained
and updated accordingly.
OS
Keep in Your Mind

• Three parts of the memory management system:
– In the kernel: the MMU handler and the page fault handler
– In user space: the external pager
• Memory segmentation
– A program is divided into segments, which are
managed separately.
– An address is specified by
<segment-number, offset>
– The OS maintains a segment table for each process.
– If segments are too large, a segment can be divided into
pages (combining memory paging and
segmentation)
OS
Read yourself
OS
Segmentation
Segmentation with Paging: The Intel Pentium

• The virtual memory is divided into two partitions of 8K
segments each:
– Local Descriptor Table (LDT): describes segments local to
each program, including its code, data, stack, …
– Global Descriptor Table (GDT): describes system segments,
including the OS itself
– Each entry in the LDT and GDT consists of 8 bytes and contains
the segment base and segment limit
• Each program has its own LDT, but there is a single
GDT, shared by all the programs on the computer.
OS
Segmentation
Segmentation with Paging: The Intel Pentium
• The virtual address is a pair
<selector (16 bits), offset (32 bits)>.
• The machine has 6 segment registers
→ allowing 6 segments to be addressed at any one time by a process.
• The machine also has 6 microprogram registers (8 bytes each)
– Holding the corresponding descriptors from the LDT or GDT
– Including the segment's base address, size, and other information
• Structure of a Pentium selector (Tanenbaum, Fig. 3-39):
– 13 bits specify the LDT or GDT entry number
– 1 bit tells whether the segment is local (LDT) or global (GDT)
– 2 bits relate to the protection (privilege) level
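As a sketch, the selector's three fields can be unpacked with a few bit operations, using the conventional x86 layout (entry index in the high 13 bits, table-indicator bit 2, privilege level in bits 1-0):

```python
# Sketch: unpack a 16-bit Pentium selector into its three fields.

def decode_selector(selector):
    index = selector >> 3          # bits 15..3: LDT/GDT entry number
    table = (selector >> 2) & 1    # bit 2: 0 = GDT (global), 1 = LDT (local)
    privilege = selector & 0b11    # bits 1..0: privilege level
    return index, table, privilege

# Example: entry 5 of the LDT at privilege level 3.
selector = (5 << 3) | (1 << 2) | 3
print(decode_selector(selector))  # (5, 1, 3)
```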
OS
Segmentation
Segmentation with Paging: The Intel Pentium

Pentium code segment descriptor; data segments differ slightly.
(Tanenbaum, Fig. 3-40; one field is not used.)

G bit (granularity): 0 → the Limit field contains the exact segment size
in bytes; 1 → the Limit field contains the segment size in pages. With the
Pentium's 4 KB page size, 20 bits are enough for segments up to 2^32 bytes.
D bit: describes the segment's word size (16 or 32 bits)
P bit (present bit), DPL (Descriptor Privilege Level), S bit (system)
OS
Segmentation
Segmentation with Paging: The Intel Pentium
• How is a (selector, offset) pair
converted to a physical address?
– The microprogram finds the descriptor
corresponding to the selector
• A trap occurs if the segment is currently
paged out or does not exist

Tanenbaum, Fig. 3-41.

– Then the hardware uses the Limit field to check whether the offset is
beyond the end of the segment
• If so, a trap occurs
• Otherwise, the Pentium adds the 32-bit Base field of the descriptor to the
offset to form what is called a linear address
OS
Segmentation
Segmentation with Paging: The Intel Pentium
• How is a (selector, offset) pair converted to a physical address? …
– If paging is disabled:
• The linear address is interpreted as the physical address and
sent to memory for the read or write
– If paging is enabled (using the multilevel page table):
• The linear address is interpreted as a virtual address and
mapped onto a physical address using page tables
• The linear address is divided into three fields <Dir, Page, Offset>:
– Dir: index into the page directory to locate a pointer to the proper page table
– Page: index into the page table to find the physical address of the page frame
– Offset: added to the address of the page frame to get the physical address of
the byte or word needed
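The three-field split can be sketched directly in code, using the Pentium's 10/10/12-bit field widths (a 10-bit directory index, a 10-bit page-table index, and a 12-bit offset into a 4 KB page):

```python
# Sketch: split a 32-bit linear address into <Dir, Page, Offset>
# (10 + 10 + 12 bits, for 4 KB pages and 1024-entry tables).

def split_linear(addr):
    directory = (addr >> 22) & 0x3FF   # top 10 bits: page-directory index
    page = (addr >> 12) & 0x3FF        # next 10 bits: page-table index
    offset = addr & 0xFFF              # low 12 bits: offset within the page
    return directory, page, offset

print(split_linear(0x00403025))  # (1, 3, 37): dir 1, page 3, offset 0x25
```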
OS
Segmentation
Segmentation with Paging: The Intel Pentium
• The Pentium supports four protection levels (0-3):
– 0: the OS kernel, handling I/O, memory management, and other critical matters
– 1: system call handlers
– 2: library procedures, possibly shared among many running
programs (programs may call them and read their data, but not modify them)
– 3: user programs

• A program restricts itself to using
segments at its own level
• Attempts to access data at a higher
level are permitted
• Attempts to access data at a lower
level are illegal and cause traps
• Attempts to call procedures at a
different level are allowed, but in a
carefully controlled way
Tanenbaum, Fig. 3-43.
OS
Segmentation
Segmentation with Paging: MULTICS

• MULTICS (Multiplexed Information and Computing Service)
was an extraordinarily influential early time-sharing system.
• Each program has a segment table, with one descriptor per
segment. Each descriptor points to the segment's page table.
• The segment table is itself a segment (the descriptor segment)
and is paged.

Tanenbaum, Fig. 3-35.
OS
Segmentation
Segmentation with Paging: MULTICS

• A descriptor contains an indication of whether the
segment is in main memory or not.
• The descriptor (36 bits) contains an 18-bit pointer to the
segment's page table, a 9-bit segment size, the protection
bits, and a few other items.

A segment descriptor (numbers are field lengths).
Tanenbaum, Fig. 3-35.
OS
Segmentation
Segmentation with Paging: MULTICS

• Each segment is an ordinary virtual address space and is paged
• A virtual address (34 bits) consists of two parts: a segment
number and an address within the segment

34-bit MULTICS virtual address. Tanenbaum, Fig. 3-36.

• In the implementation:
– The fact that the descriptor segment is itself paged is omitted here for simplicity
– A base register is used to locate the descriptor segment's page table;
this register points to the pages of the descriptor segment
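The 34-bit address decomposition can be sketched as follows (18-bit segment number, then a 6-bit page number and a 10-bit offset within the segment, giving 64 pages of 1024 words each):

```python
# Sketch: decompose a 34-bit MULTICS virtual address into
# segment number (18 bits), page number (6 bits), offset (10 bits).

def split_multics(addr):
    segment = addr >> 16           # high 18 bits
    page = (addr >> 10) & 0x3F     # next 6 bits
    offset = addr & 0x3FF          # low 10 bits
    return segment, page, offset

# Example: segment 2, page 1, offset 5.
print(split_multics((2 << 16) | (1 << 10) | 5))  # (2, 1, 5)
```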
OS
Segmentation
Segmentation with Paging: MULTICS

Tanenbaum, Fig. 3-37.

• How is <segment, offset> converted to a physical address?
– The segment number is used to check whether the segment's page table is in
memory. If so, the page table is located.
• The page number is used to check whether the virtual page is mapped (valid)
• If so, the page frame address is added to the offset to give the physical address
• Otherwise, a page fault is triggered
– If the page table is not in memory, a segment fault occurs.
– If there is a protection violation, a fault (trap) occurs.
OS
Segmentation
Segmentation with Paging: MULTICS
• In the actual implementation:
– A TLB (Translation Lookaside Buffer) is used
– When an address is presented, the addressing hardware
first checks whether the virtual address is in the TLB
– If so, it gets the page frame number directly from the TLB and
forms the actual address of the referenced word without having
to look in the descriptor segment or page table
– Otherwise:
• The descriptor and page tables are referenced to find the page frame
address
• Then the TLB is updated to include this page, using aging for
replacement