05 Chap03 PagingIssues 4slots (81 Slides)
Memory Management – Part 2
(4 slots)
Chapter 3- Part 2
Design Issues for Paging Systems
Implementation Issues
Segmentation
OS
Objectives
• Design Issues for Paging Systems
– Local vs. Global Page Allocation Policies
– Load Control
– Page Size
– Shared Pages
– Shared Libraries
– Mapped Files
– Cleaning Policy
– Virtual Memory Interface
Objectives…
• Implementation Issues
– OS Involvement with Paging
– Page Fault Handling
– Instruction Backup
– Locking Pages in Memory
– Backing Store
– Policy and Mechanism
• Segmentation
– Pure Segment
– Read yourself
• Segmentation with Paging: MULTICS
• Segmentation with Paging: Intel Pentium
Design Issues for Paging Systems
Local vs. Global Allocation Policies
• Local algorithms
– If the working set grows → thrashing
– If the working set shrinks → memory is wasted
• Global algorithms
– Generally work better, especially when the working set size can vary over the lifetime of a process
– Thrashing can spread to the other processes whose pages are chosen for replacement → those processes cannot control their own page fault rate
– If a working set shrinks and its frames are not reclaimed → memory is wasted
Design Issues for Paging Systems
Local vs. Global Allocation Policies
• The system must continually decide how many page frames to assign to each process
– Monitor the working-set size of every process using the aging bits of its pages (this does not necessarily prevent thrashing)
– Page frame allocation algorithm:
• Periodically determine the number of running processes
• Allocate frames proportionally to each process's size
• Give each process a minimum number of frames
• Update the allocation dynamically
– The PFF (Page Fault Frequency) algorithm: count the number of page faults per second and adjust each process's allocation accordingly
• Some page replacement algorithms
– can work with both policies (FIFO, LRU)
– can work only with the local policy (WSClock)
(FIFO: First-In, First-Out / LRU: Least Recently Used / WSClock: Working Set Clock)
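The PFF idea can be sketched as a tiny policy function (a toy model; the thresholds, step size, and minimum allocation are illustrative assumptions, not values from any real kernel):

```python
# Toy sketch of the PFF (Page Fault Frequency) allocation policy:
# measure a process's fault rate and grow or shrink its frame
# allocation between an upper and a lower threshold.

def pff_adjust(frames, faults_per_sec, upper=10.0, lower=2.0,
               step=1, minimum=4):
    """Return a process's new frame allocation given its fault rate."""
    if faults_per_sec > upper:                 # faulting too often: grow
        return frames + step
    if faults_per_sec < lower and frames - step >= minimum:
        return frames - step                   # faulting rarely: shrink
    return frames                              # rate acceptable: keep
```

A process faulting 15 times per second would gain a frame; one faulting 0.5 times per second would give one up, but never below the minimum.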
Design Issues for Paging Systems
Load Control
• Context
– The system can thrash even if the best page replacement algorithm and an optimal global allocation of page frames are used
– Whenever the combined working sets of all processes exceed the capacity of memory → thrashing
– The PFF (Page Fault Frequency) algorithm may indicate that some processes need more memory while no process needs less
→ there is no way to give more memory to the processes that need it without hurting other processes
Design Issues for Paging Systems
Load Control
• Problems
– Most computers have a single address space that holds both programs and data
– This address space is often too small, forcing programmers to stand on their heads to fit everything into it
(Tanenbaum, Fig. 3-25)
Design Issues for Paging Systems
Separate Instruction and Data Spaces
• The linker and the hardware must know when separate spaces are being used
• Both address spaces can be paged independently; each has its own page table (mapping virtual pages to physical page frames)
• In practice, this doubles the available address space
Design Issues for Paging Systems
Shared Pages
• Problem
– In a large multiprogramming system, several users may run the same program at the same time → multiple copies of the same pages would be in memory at once. How should they be managed?
• Solution
– Share the read-only pages (code pages, but not data pages)
– If separate I- and D-spaces are supported, two or more processes can share the same I-space while keeping different D-spaces
• Implementation
– Page tables are data structures independent of the process table
– Each process has two pointers in its process table entry: one to the I-space page table and one to the D-space page table
(Tanenbaum, Fig. 3-26)
Design Issues for Paging Systems
Shared Pages
• Problem
– Processes A and B both run the editor and share its code pages. When A terminates, some of those pages are removed. How does the OS ensure that process B is not affected?
• Solution
– If the shared pages were evicted, B would generate a large number of page faults to bring them back in. To avoid this, and because searching all page tables is expensive, the OS maintains a data structure that keeps track of shared pages
Design Issues for Paging Systems
Shared Pages
• Sharing data (modifiable pages)
– Context
• Data can be shared and modified concurrently by several processes
• Example:
– A child process attempts to modify a page containing a portion of the stack
– The OS recognizes that the page is shared and may be modified by both processes, so it creates a copy of the page and maps it into the address space of the child
– The child then modifies its own copy, not the page belonging to the parent process
– Solution: the copy-on-write strategy is applied to modifiable data pages (used by OSes that create processes by duplication, e.g. Windows 2000, Linux, Solaris 2)
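Copy-on-write can be modelled as simple bookkeeping (a toy sketch only; real systems do this in the MMU by marking shared pages read-only and copying on the protection fault):

```python
# Toy copy-on-write model: pages are shared until the first write, at
# which point the writer gets a private copy of the frame. The class
# and function names here are illustrative, not a real kernel API.

class Page:
    def __init__(self, data, refs=1):
        self.frame = bytearray(data)   # the "physical frame"
        self.refs = refs               # how many processes map it

def fork_page(page):
    """Child shares the parent's frame instead of copying it."""
    page.refs += 1
    return page

def cow_write(table, name, offset, value):
    """Write via a per-process page table, copying a shared page first."""
    page = table[name]
    if page.refs > 1:                  # still shared: break the sharing
        page.refs -= 1
        page = Page(page.frame)        # private copy for the writer
        table[name] = page
    page.frame[offset] = value
```

After a fork both tables point at the same frame; the first write by the child gives it a private copy, leaving the parent's page untouched.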
Design Issues for Paging Systems
Shared Libraries
• Problems
– In modern systems, many libraries are used by many processes
– Statically binding all these libraries into every executable program on disk would make executables even more bloated
• Solution: shared libraries (e.g. DLLs – Dynamic Link Libraries): move common routines out into shared libraries
(Tanenbaum, Fig. 3-27: Processes 1–4 sharing a DLL)
Design Issues for Paging Systems
Shared Libraries
• Shared libraries
– When a program is linked with a shared library, instead of including the actual function called, the linker includes a small stub routine that binds to the called function at run time
– A shared library is loaded either when the program is loaded or when one of its functions is called for the first time
– The entire library is not read into memory; it is paged in, page by page, as needed, so functions that are never called are never brought into RAM
– Advantages
• Executable files are smaller, and memory is saved
• If a function in a shared library is updated to remove a bug, it is not necessary to recompile the programs that call it; the old binaries continue to work
Design Issues for Paging Systems
Shared Libraries
• Problems
– Two processes may map the same shared library at different addresses
– A given function therefore ends up at a different absolute address in each process
→ Relocating the library on the fly does not work, because fixing the references for one process would break them for the other processes sharing the library
→ Absolute addresses cannot be used within shared libraries
Design Issues for Paging Systems
Shared Libraries
• Solution
– First approach: copy-on-write
• Create new pages and relocate them on the fly, correctly for each process, as they are brought in
• This defeats the purpose of sharing, since even unmodified pages end up being copied
– Second approach: position-independent code
• Instructions use only relative offsets instead of absolute addresses
• Such code works correctly wherever in the virtual address space the library is placed
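The difference between absolute and position-independent (relative) addressing reduces to a little address arithmetic. The encodings and addresses below are made up for illustration:

```python
# Contrast absolute vs. position-independent (PC-relative) addressing
# for a single branch instruction. An absolute target breaks when the
# library is loaded at a different address; a PC-relative offset
# moves with the code.

def absolute_target(load_addr, encoded_abs):
    return encoded_abs                         # ignores the load address

def relative_target(load_addr, insn_off, encoded_rel):
    return load_addr + insn_off + encoded_rel  # moves with the code
```

If the branch sits at offset 0x10 and targets offset 0x40 within the library, the relative encoding 0x30 yields the right target at any load address, while an absolute encoding baked in for load address 0x1000 goes stale once the library is mapped elsewhere.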
Design Issues for Paging Systems
Memory-Mapped Files
• Context
– Manipulating a file on disk requires the system calls open(), read(), and write(), and each access goes to the slow disk
– Virtual memory already swaps memory pages in from and out to disk
• How does it work?
– Initially, when the file is accessed, a page fault occurs
– Pages are then allocated and loaded with the file's contents from the file system
– Subsequent file manipulations are handled as ordinary memory accesses
– Writes need not reach the file on disk immediately; instead, the OS periodically updates the file on disk if the memory has been modified
– When the process exits (or the file is closed), all modified pages are written back to the file automatically
Design Issues for Paging Systems
Memory-Mapped Files
• In practice
– Some OSes offer this facility only through a specific system call (e.g. mmap() in Solaris 2) and handle all other file I/O with the standard system calls
– If two processes map the same file at the same time, they can communicate over shared memory
• Writes done by one process to the shared memory are immediately visible when another process reads it (data sharing, with copy-on-write support) → a high-bandwidth channel between processes
• Shared libraries can use this mechanism if memory-mapped files are available
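Python's mmap module wraps this mechanism, so the behaviour is easy to demonstrate (a minimal sketch; error handling omitted):

```python
import mmap
import os
import tempfile

# Minimal memory-mapped file demo: once the file is mapped, reading
# and writing it are ordinary memory accesses, and modifications are
# flushed back to the file on disk.

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"hello, paging")          # create the backing file

with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 0)       # map the whole file
    m[0:5] = b"HELLO"                  # a plain memory store
    m.flush()                          # like the OS's periodic write-back
    m.close()

with open(path, "rb") as f:
    data = f.read()                    # the store reached the file
os.unlink(path)
```

Reading `data` back shows the in-memory store landed in the file, with no read() or write() calls on the mapped region.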
Design Issues for Paging Systems
Cleaning Policy
• A page frame can be reused instantly only if its contents have already been written back → the page must be clean
• Solution: a paging daemon
– A background process that sleeps most of the time but is awakened periodically to inspect the state of memory
– It ensures a plentiful supply of free page frames, which is when paging works best
– If too few page frames are free, the daemon selects pages to evict using the page replacement algorithm and writes them to disk if they are modified
– Requirement: the previous contents of freed pages are remembered, so a page that is needed again can be reclaimed from the free pool
→ Since the free frames are clean, pages never have to be written to disk in a big hurry
Design Issues for Paging Systems
Cleaning Policy
• Implementation: a two-handed clock
– Front hand (cleaning pointer), controlled by the paging daemon: when it points to a dirty page, that page is written back to disk and the hand advances; otherwise, the hand just advances
– Back hand (eviction pointer): used by the page replacement algorithm, as in the standard clock algorithm
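The front hand's cleaning sweep can be sketched as a toy loop (pages are modelled as dicts with a dirty flag; no real I/O happens):

```python
# Toy sketch of the paging daemon's front hand: it sweeps around the
# circular list of page frames, writing back any dirty page it passes
# so the back hand later finds clean candidates to evict.

def front_hand_sweep(frames, hand, steps):
    """Advance the front hand `steps` frames, cleaning dirty ones."""
    cleaned = []
    n = len(frames)
    for _ in range(steps):
        if frames[hand]["dirty"]:
            frames[hand]["dirty"] = False   # "write back to disk"
            cleaned.append(hand)
        hand = (hand + 1) % n               # advance either way
    return hand, cleaned
```

After one full sweep every frame the hand passed is clean, which is exactly the invariant the paging daemon maintains for the eviction hand.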
Design Issues for Paging Systems
Virtual Memory Interface
• Context
– Virtual address spaces are large → physical memory is smaller
– Can programmers control their memory map, so that several processes can share memory?
• Solution: shared memory
– Programmers can name regions of their memory, and these names can be passed to other processes → the underlying pages can be shared by many processes
– A shared memory region can be written by one process and read by another (like the pipe mechanism in IPC – InterProcess Communication)
→ High bandwidth; improves program performance
Design Issues for Paging Systems
Virtual Memory Interface
• In message-passing systems
– Problem: when messages are passed, the data are copied from one address space to another → complexity and wasted time
– Solution: apply shared memory and pass page names instead of copying the data
• The sender unmaps the pages containing the message when it is passed
• The receiver then maps those pages in (only the page names have to be copied)
→ High performance
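Passing a message by remapping rather than copying reduces to moving a frame reference between two page tables (a toy sketch; the tables are plain dicts):

```python
# Toy sketch of message passing by page remapping: instead of copying
# the message bytes, the sender unmaps the page and the receiver maps
# the very same frame in. Only the page name moves.

def send_by_remap(sender_table, receiver_table, page_name):
    frame = sender_table.pop(page_name)   # sender unmaps the page
    receiver_table[page_name] = frame     # receiver maps the frame in
    return frame
```

The receiver ends up holding the identical frame object, which is the point: the message bytes were never copied.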
Design Issues for Paging Systems
Virtual Memory Interface
• In distributed applications
– When a process references a page that is not mapped in, a page fault occurs
– Solution:
(1) The page fault handler locates the machine holding the page and sends it a message asking it to unmap the page and send it over the network
(2) When the page arrives, it is mapped in and the faulting instruction is restarted
→ Allows multiple processes on a network to share a set of pages (high performance)
Impl. Issues: Instruction Backup
• Where does the saved program counter (in the PCB) point after a page-fault trap?
– Method 1: the counter holds the address of the instruction causing the fault; counter + 1 is the next instruction
– Method 2: the counter holds the address of the instruction right after the one causing the fault
Impl. Issues: Instruction Backup…
• Context
– To where should the PC be backed up when an in-progress instruction faults?
– From where should the PC be reloaded when the instruction is restarted?
– The value of the PC at the time of the trap depends on which operand faulted and on how the CPU's microcode has been implemented
– In auto-incrementing mode
• First possibility: the increment is done before the memory reference
– Before restarting the instruction, the OS must decrement the register
• Second possibility: the auto-increment is done after the memory reference
– The OS need do nothing to restart the instruction
– In auto-decrementing mode, a similar problem occurs
Impl. Issues: Instruction Backup…
• Problem
– A page fault can occur while an instruction references memory; the fault may occur on any of the operands
– To restart the instruction, the OS must determine where the first byte of the instruction is
– If the fault occurs on an operand, the OS cannot tell from the PC alone which instruction (operator) was executing
→ How can the OS determine what happened and how to repair it?
• Solution
– A hidden internal register stores the PC before each instruction is executed
– A second register records which registers were auto-incremented or auto-decremented, and by how much
→ When the faulting instruction is restarted, the OS can unambiguously undo all of its effects
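The hidden-register idea can be modelled as a tiny interpreter (illustrative only; real CPUs implement this in microcode, not software):

```python
# Toy model of instruction backup: before each instruction, the CPU
# snapshots the PC and registers into hidden internal registers, so a
# page fault in mid-instruction can be undone and the instruction
# restarted cleanly.

class PageFault(Exception):
    pass

def step(cpu, insn):
    saved_pc = cpu["pc"]                 # hidden register: old PC
    saved_regs = dict(cpu["regs"])       # hidden register: old registers
    try:
        insn(cpu)                        # may auto-increment, then fault
        cpu["pc"] += 1
    except PageFault:
        cpu["pc"] = saved_pc             # unambiguously undo all effects
        cpu["regs"] = saved_regs
```

An instruction that auto-increments a register and then faults leaves the CPU state exactly as it was before the instruction began.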
Impl. Issues: Locking Pages in Memory
• Problem
– Global paging is used. Process A is suspended waiting for I/O. Process B is chosen to run and causes a page fault. The page of A that is about to receive the I/O data is evicted
→ With DMA (Direct Memory Access, which transfers data without CPU involvement), evicting A's page raises the question: will the transferred data be written into the I/O buffer or into the newly loaded page?
• Solution
– First approach: lock (pin) pages engaged in I/O in memory so that they cannot be removed. If a large fraction of memory stays pinned, a bottleneck or thrashing can occur
– Second approach: do all I/O to kernel buffers and copy the data to user pages later
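Pinning reduces to making the victim scan skip locked frames (a toy sketch; frames are dicts with a locked flag):

```python
# Toy sketch of pinning: the victim scan skips any frame locked for an
# in-flight I/O transfer, so DMA never lands in a reused frame.

def pick_victim(frames, hand=0):
    """Clock-style scan that skips pinned frames; None if all pinned."""
    n = len(frames)
    for i in range(n):
        idx = (hand + i) % n
        if not frames[idx]["locked"]:
            return idx
    return None      # everything pinned: caller must wait or fall back
```

The None case is exactly the pathological situation the slide warns about: when too much memory is pinned, the replacement algorithm has nothing left to evict.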
Impl. Issues: Policy and Mechanism (external pager)
• The fault handler unmaps the page from the external pager's address space and asks the MMU handler to put it into the user's address space at the right place
• The user process can then be restarted
• Problems
– The external pager does not have access to the Referenced and Modified bits of the pages
– Either some mechanism is needed to pass this information to the external pager, or the page replacement algorithm must go in the kernel
→ In the latter case, the fault handler tells the external pager which page it has selected for eviction and provides the data, either by mapping it into the external pager's address space or by including it in a message; either way, the external pager writes the data to disk
• Advantages: more modular code and greater flexibility
• Disadvantages: the extra overhead of the various messages sent between the pieces of the system
Segmentation
• Context
– The programmer's view of memory is not usually a single linear address space
– Programmers cannot predict how large the parts of a program will be, or how they will grow, and do not want to manage where they go in virtual memory
– Virtual memory is one-dimensional: virtual addresses run from 0 to some maximum, one address after another
– Virtual memory gives a process a single complete virtual address space to itself
– A compiler may create several tables for various purposes, and these tables may grow in size
Segmentation
• Problem
– A process such as a compiler has many objects (executable code, static/heap data, stack, constant table, etc.) located in its address space
– Any of them may grow or shrink dynamically and unpredictably
→ A way is needed to manage expanding and contracting spaces freely, just as virtual memory eliminates the worry of organizing the program into overlays
• Solution: segmentation
– Segmentation provides many completely independent address spaces, which, unlike pages, are not of fixed size
– It maintains multiple separate virtual address spaces per process
– The address space is a collection of segments
– Each segment can grow or shrink independently
Segmentation
Example
A segmented memory allows each table to grow or shrink independently of the other tables
(Tanenbaum, Fig. 3-32)
Segmentation
• Characteristics of a segment
– It is a logical entity
– It consists of a linear sequence of addresses, from 0 to some maximum length
– Different segments may have different lengths, which may change during execution (e.g. a stack segment may grow or shrink)
– Each segment is a separate address space; different segments can grow and shrink independently without affecting each other
– A segment might contain a procedure, an array, a stack, or a collection of scalar variables, but usually not a mixture of different types (so different segments can have different kinds of protection)
– Segments make it easy to share procedures or data between several processes (e.g. shared libraries)
• The compiler automatically constructs segments reflecting the input program
• An address in segmented memory is specified as a pair:
<segment-number, offset>
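Translating a <segment-number, offset> pair is a base-and-limit check (a minimal sketch; the segment table here is a plain dict of illustrative values):

```python
# Minimal sketch of segmented address translation: each segment has a
# (base, limit) pair; an offset at or past the limit is a protection
# fault, otherwise the physical address is base + offset.

def translate(seg_table, seg, offset):
    base, limit = seg_table[seg]
    if not 0 <= offset < limit:
        raise MemoryError("segment limit violation")
    return base + offset
```

Because each segment carries its own limit, growing one segment never forces the others to move, which is the property the example above illustrates.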
Segmentation
Example
(a)–(d): Development of checkerboarding.
(e): Removal of the checkerboarding by compaction.
Summary
Q&A
Read yourself
Segmentation
Segmentation with Paging: The Intel Pentium
• G bit (granularity): 0 → the Limit field contains the exact segment size in bytes; 1 → the Limit field contains the segment size in pages. With the Pentium's 4-KB page size, 20 bits are enough for segments up to 2^32 bytes
• D bit: describes the segment's word size (16 or 32 bits)
• P bit (present bit), DPL (Descriptor Privilege Level), S bit (system)
Segmentation
Segmentation with Paging: The Intel Pentium
• How is a (selector, offset) pair converted to a physical address?
– The microprogram finds the descriptor corresponding to the selector
• A trap occurs if the descriptor is currently paged out or the segment does not exist
– The hardware then uses the Limit field to check whether the offset is beyond the end of the segment
• If so, a trap occurs
• Otherwise, the Pentium adds the 32-bit Base field of the descriptor to the offset to form what is called a linear address
Segmentation
Segmentation with Paging: The Intel Pentium
• How is a (selector, offset) pair converted to a physical address? (continued)
– If paging is disabled
• The linear address is interpreted as the physical address and sent to memory for the read or write
– If paging is enabled (using the multilevel page table)
• The linear address is interpreted as a virtual address and mapped onto a physical address using the page tables
• The linear address is divided into three fields: <Dir, Page, Offset>
– Dir: index into the page directory, locating a pointer to the proper page table
– Page: index into the page table, finding the physical address of the page frame
– Offset: added to the address of the page frame to get the physical address of the byte or word needed
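With 4-KB pages, the 32-bit linear address splits into a 10-bit Dir, a 10-bit Page, and a 12-bit Offset; the bit arithmetic is:

```python
# Split a 32-bit Pentium linear address (4-KB pages) into the
# two-level paging fields: 10-bit Dir, 10-bit Page, 12-bit Offset.

def split_linear(addr):
    directory = (addr >> 22) & 0x3FF   # top 10 bits: page directory index
    page      = (addr >> 12) & 0x3FF   # next 10 bits: page table index
    offset    = addr & 0xFFF           # low 12 bits: byte within frame
    return directory, page, offset
```

For example, the linear address 0x00403004 lands in directory entry 1, page table entry 3, at byte offset 4 within the frame.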
Segmentation
Segmentation with Paging: The Intel Pentium
• The Pentium supports four protection levels (0–3)
– 0: the OS kernel, which handles I/O, memory management, and other critical matters
– 1: the system call handlers
– 2: library procedures, possibly shared among many running programs (they may be called and read, but not modified)
– 3: user programs
Segmentation
Segmentation with Paging: MULTICS
• MULTICS (Multiplexed Information and Computing Service) was an extraordinarily influential early time-sharing system
• Each program has a descriptor segment, with one descriptor per segment; each descriptor points to that segment's page table
• A descriptor contains an indication of whether the segment is in main memory or not
• A descriptor (36 bits) contains an 18-bit pointer to the segment's page table, a 9-bit segment size, the protection bits, and a few other items
(Tanenbaum, Fig. 3-35: a segment descriptor; the numbers are field lengths)
Segmentation
Segmentation with Paging: MULTICS
• Implementation details
– The descriptor segment is itself paged (this has been omitted from the figure)
– A base register is used to locate the descriptor segment's page table; this register points to the pages of the descriptor segment