Operating System - Memory Management: Process Address Space
1 Symbolic addresses
The addresses used in source code. The variable names, constants, and
instruction labels are the basic elements of the symbolic address space.
2 Relative addresses
At the time of compilation, a compiler converts symbolic addresses into relative
addresses.
3 Physical addresses
The loader generates these addresses at the time when a program is loaded into
main memory.
Virtual and physical addresses are the same in compile-time and load-time address-binding schemes. Virtual and physical addresses differ in the execution-time address-binding scheme.
The set of all logical addresses generated by a program is referred to as a logical
address space.
The set of all physical addresses corresponding to these logical addresses is referred
to as a physical address space.
The runtime mapping from virtual to physical addresses is done by the memory management unit (MMU), which is a hardware device. The MMU uses the following mechanism to convert virtual addresses to physical addresses.
The value in the base register is added to every address generated by a user process, which is treated as an offset at the time it is sent to memory. For example, if the base register value is 10000, then an attempt by the user to use address location 100 will be dynamically relocated to location 10100.
The user program deals with virtual addresses; it never sees the real physical
addresses.
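As a rough illustration of this mechanism, the short C sketch below mimics the base-register addition using the 10000/100 values from the example above (the function and variable names are made up for this sketch).

#include <stdio.h>

/* Hypothetical helper: the MMU adds the base (relocation) register
   to every logical address generated by the user process. */
static unsigned long relocate(unsigned long base_register,
                              unsigned long logical_address)
{
    return base_register + logical_address;
}

int main(void)
{
    unsigned long base = 10000;   /* value held in the base register       */
    unsigned long logical = 100;  /* address generated by the user process */

    /* Prints "logical 100 -> physical 10100", matching the example above. */
    printf("logical %lu -> physical %lu\n", logical, relocate(base, logical));
    return 0;
}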
The total time taken by the swapping process includes the time it takes to move the entire process to a secondary disk and then to copy the process back to memory, as well as the time the process takes to regain main memory.
Let us assume that the user process is of size 2048KB and that the standard hard disk where swapping will take place has a data transfer rate of around 1 MB per second. The actual transfer of the 2048KB process to or from memory will take
2048KB / 1024KB per second
= 2 seconds
= 2000 milliseconds
Now, considering swap-in and swap-out time, it will take a total of 4000 milliseconds plus other overhead while the process competes to regain main memory.
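For reference, the same arithmetic can be written out in a few lines of C, using the 2048 KB size and 1024 KB-per-second transfer rate assumed above.

#include <stdio.h>

int main(void)
{
    double process_kb      = 2048.0;  /* assumed process size in KB           */
    double rate_kb_per_sec = 1024.0;  /* assumed disk transfer rate in KB/sec */

    double one_way_ms = process_kb / rate_kb_per_sec * 1000.0;  /* 2000 ms            */
    double swap_ms    = 2.0 * one_way_ms;                       /* out + in = 4000 ms */

    printf("one-way transfer: %.0f ms, swap out + in: %.0f ms\n",
           one_way_ms, swap_ms);
    return 0;
}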
Memory Allocation
Main memory usually has two partitions −
Low Memory − Operating system resides in this memory.
High Memory − User processes are held in high memory.
The operating system uses the following memory allocation mechanisms.
1 Single-partition allocation
In this type of allocation, the relocation-register scheme is used to protect user processes from each other, and from changing operating-system code and data. The relocation register contains the value of the smallest physical address, whereas the limit register contains the range of logical addresses. Each logical address must be less than the limit register. A sketch combining this check with partition allocation is given below.
2 Multiple-partition allocation
In this type of allocation, main memory is divided into a number of fixed-sized
partitions where each partition should contain only one process. When a partition is
free, a process is selected from the input queue and is loaded into the free partition.
When the process terminates, the partition becomes available for another process.
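A simplified C sketch of these two ideas follows: each fixed partition's base address acts as the relocation register and its size as the limit register for the process loaded into it. The partition sizes, addresses, and process sizes are invented for illustration.

#include <stdio.h>
#include <stdlib.h>

#define NPART 3

/* Each fixed partition has a base (smallest physical address) and a size.
   The base acts as the relocation register and the size as the limit
   register for whichever process occupies the partition. */
struct partition {
    unsigned long base;
    unsigned long size;
    int owner;               /* process id, or -1 if the partition is free */
};

static struct partition parts[NPART] = {
    {100000, 256 * 1024, -1},
    {362144, 512 * 1024, -1},
    {886432, 512 * 1024, -1},
};

/* Load a process into the first free partition that is large enough. */
static int allocate(int pid, unsigned long need)
{
    for (int i = 0; i < NPART; i++) {
        if (parts[i].owner == -1 && parts[i].size >= need) {
            parts[i].owner = pid;
            return i;
        }
    }
    return -1;  /* no free partition: the process waits in the input queue */
}

/* Translate a logical address for the process in partition i,
   trapping if the address is not below the limit. */
static unsigned long translate(int i, unsigned long logical)
{
    if (logical >= parts[i].size) {
        fprintf(stderr, "addressing error: trap to operating system\n");
        exit(EXIT_FAILURE);
    }
    return parts[i].base + logical;
}

int main(void)
{
    int p = allocate(1, 300 * 1024);                 /* a 300 KB process */
    printf("process 1 loaded into partition %d\n", p);
    printf("logical 1000 -> physical %lu\n", translate(p, 1000));
    return 0;
}

Note that the 300 KB process ends up occupying a 512 KB partition; the unused portion is exactly the internal fragmentation discussed in the next section.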
Fragmentation
As processes are loaded and removed from memory, the free memory space is broken into little pieces. After some time, processes cannot be allocated to these memory blocks because the blocks are too small, and the memory blocks remain unused. This problem is known as Fragmentation.
Fragmentation is of two types −
1 External fragmentation
Total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.
2 Internal fragmentation
The memory block assigned to a process is bigger than the requested memory. Some portion of it is left unused, as it cannot be used by another process.
The following diagram shows how fragmentation can cause waste of memory and a
compaction technique can be used to create more free memory out of fragmented
memory −
Paging
A computer can address more memory than the amount physically installed on the system. This extra memory is actually called virtual memory and it is a section of a hard disk that is set up to emulate the computer's RAM. The paging technique plays an important role in implementing virtual memory.
Paging is a memory management technique in which a process's address space is broken into blocks of the same size called pages (the size is a power of 2, between 512 bytes and 8192 bytes). The size of a process is measured in the number of pages.
Similarly, main memory is divided into small fixed-sized blocks of (physical) memory
called frames and the size of a frame is kept the same as that of a page to have
optimum utilization of the main memory and to avoid external fragmentation.
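Because the page size is a power of two, the page number and offset can be extracted from a logical address with a shift and a mask. A minimal C sketch, assuming 4096-byte pages:

#include <stdio.h>

#define PAGE_SIZE  4096UL            /* assumed page size, a power of 2 */
#define PAGE_SHIFT 12                /* log2(PAGE_SIZE)                 */
#define PAGE_MASK  (PAGE_SIZE - 1)

int main(void)
{
    unsigned long logical = 20000;   /* an arbitrary logical address */

    unsigned long page_number = logical >> PAGE_SHIFT;  /* which page        */
    unsigned long offset      = logical &  PAGE_MASK;   /* where in the page */

    printf("address %lu -> page %lu, offset %lu\n", logical, page_number, offset);
    return 0;
}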
Address Translation
A computer can address more memory than the amount physically installed on the
system. This extra memory is actually called virtual memory and it is a section of a
hard disk that's set up to emulate the computer's RAM.
The main visible advantage of this scheme is that programs can be larger than physical
memory. Virtual memory serves two purposes. First, it allows us to extend the use of
physical memory by using disk. Second, it allows us to have memory protection,
because each virtual address is translated to a physical address.
Following are the situations when the entire program is not required to be fully loaded in main memory.
User-written error handling routines are used only when an error occurs in the data or computation.
Certain options and features of a program may be used rarely.
Many tables are assigned a fixed amount of address space even though only a
small amount of the table is actually used.
The ability to execute a program that is only partially in memory would confer many benefits.
Fewer I/O operations would be needed to load or swap each user program into memory.
A program would no longer be constrained by the amount of physical memory
that is available.
Each user program could take less physical memory, so more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput.
In modern microprocessors intended for general-purpose use, a memory management unit (MMU) is built into the hardware. The MMU's job is to translate virtual addresses into physical addresses. A basic example is given below −
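The original diagram is not reproduced here; as a stand-in, the following C sketch shows the basic idea, assuming a single-level page table with 4 KB pages (the table contents are invented).

#include <stdio.h>

#define PAGE_SIZE  4096UL
#define PAGE_SHIFT 12
#define NPAGES     8

/* Toy page table: page_table[page] holds the frame that backs that page.
   A real MMU would also check a valid bit and protection bits. */
static unsigned long page_table[NPAGES] = {3, 7, 0, 2, 5, 1, 6, 4};

static unsigned long mmu_translate(unsigned long virtual_addr)
{
    unsigned long page   = virtual_addr >> PAGE_SHIFT;
    unsigned long offset = virtual_addr & (PAGE_SIZE - 1);
    unsigned long frame  = page_table[page];        /* page-table lookup      */
    return (frame << PAGE_SHIFT) | offset;          /* frame base plus offset */
}

int main(void)
{
    unsigned long va = 1 * PAGE_SIZE + 123;         /* page 1, offset 123 */
    printf("virtual %lu -> physical %lu\n", va, mmu_translate(va));
    return 0;
}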
Virtual memory is commonly implemented by demand paging. It can also be
implemented in a segmentation system. Demand segmentation can also be used to
provide virtual memory.
Demand Paging
A demand paging system is quite similar to a paging system with swapping where processes reside in secondary memory and pages are loaded only on demand, not in advance. When a context switch occurs, the operating system does not copy any of the old program's pages out to the disk or any of the new program's pages into the main memory. Instead, it just begins executing the new program after loading the first page and fetches that program's pages as they are referenced.
While executing a program, if the program references a page which is not available in the main memory because it was swapped out a little while earlier, the processor treats this invalid memory reference as a page fault and transfers control from the program to the operating system to bring the page back into memory.
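A rough, software-level sketch of this behavior is given below. It assumes a valid bit in each page-table entry, and the load_page_from_disk() routine is a made-up placeholder for the actual disk I/O.

#include <stdbool.h>
#include <stdio.h>

#define NPAGES 8

struct pte {
    bool valid;      /* page present in main memory?         */
    int  frame;      /* frame number when the entry is valid */
};

static struct pte page_table[NPAGES];  /* all invalid at program start */

/* Placeholder for the real work: allocate a frame and read the page in. */
static int load_page_from_disk(int page)
{
    printf("page fault on page %d: loading from disk\n", page);
    return page;  /* pretend the frame number equals the page number */
}

static int access_page(int page)
{
    if (!page_table[page].valid) {          /* invalid reference -> page fault  */
        page_table[page].frame = load_page_from_disk(page);
        page_table[page].valid = true;      /* OS brings the page into memory   */
    }
    return page_table[page].frame;
}

int main(void)
{
    access_page(2);  /* first touch: faults and loads */
    access_page(2);  /* already resident: no fault    */
    return 0;
}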
Advantages
Following are the advantages of Demand Paging −
Large virtual memory.
More efficient use of memory.
There is no limit on the degree of multiprogramming.
Disadvantages
The number of tables and the amount of processor overhead for handling page interrupts are greater than in the case of simple paged management techniques.
Page Replacement Algorithm
Page replacement algorithms are the techniques by which an operating system decides which memory pages to swap out (write to disk) when a page of memory needs to be allocated. Paging happens whenever a page fault occurs and a free page cannot be used for allocation, either because no page is available or because the number of free pages is lower than required.
When a page that was selected for replacement and paged out is referenced again, it has to be read in from disk, and this requires waiting for I/O completion. This waiting determines the quality of the page replacement algorithm: the less time spent waiting for page-ins, the better the algorithm.
A page replacement algorithm looks at the limited information about accessing the
pages provided by hardware, and tries to select which pages should be replaced to
minimize the total number of page misses, while balancing it with the costs of primary
storage and processor time of the algorithm itself. There are many different page
replacement algorithms. We evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults.
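As an illustration of this kind of evaluation, the sketch below runs the FIFO replacement policy (just one of the many algorithms) over a short, made-up reference string with three frames and counts the page faults.

#include <stdbool.h>
#include <stdio.h>

#define NFRAMES 3

static bool resident(const int *frames, int n, int page)
{
    for (int i = 0; i < n; i++)
        if (frames[i] == page)
            return true;
    return false;
}

int main(void)
{
    int reference[] = {1, 2, 6, 12, 0, 1, 2, 0, 6};  /* sample reference string */
    int nref = sizeof reference / sizeof reference[0];

    int frames[NFRAMES] = {-1, -1, -1};  /* -1 means the frame is empty */
    int next = 0;                        /* FIFO victim pointer         */
    int faults = 0;

    for (int i = 0; i < nref; i++) {
        if (!resident(frames, NFRAMES, reference[i])) {
            frames[next] = reference[i];     /* replace the oldest page */
            next = (next + 1) % NFRAMES;
            faults++;
        }
    }
    printf("%d page faults for %d references\n", faults, nref);
    return 0;
}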
Reference String
The string of memory references is called a reference string. Reference strings are generated artificially or by tracing a given system and recording the address of each memory reference. The latter choice produces a large amount of data, about which we note two things.
For a given page size, we need to consider only the page number, not the entire
address.
If we have a reference to a page p, then any immediately following references to
page p will never cause a page fault. Page p will be in memory after the first
reference; the immediately following references will not fault.
For example, consider the following sequence of addresses −
123,215,600,1234,76,96
If page size is 100, then the reference string is 1,2,6,12,0,0
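A small C sketch of this conversion, using the addresses and the 100-byte page size from the example above:

#include <stdio.h>

int main(void)
{
    unsigned addresses[] = {123, 215, 600, 1234, 76, 96}; /* trace from the text     */
    unsigned page_size = 100;                             /* page size from the text */
    int n = sizeof addresses / sizeof addresses[0];

    /* Keep only the page number of each reference. */
    printf("reference string:");
    for (int i = 0; i < n; i++)
        printf(" %u", addresses[i] / page_size);
    printf("\n");   /* prints: 1 2 6 12 0 0 */
    return 0;
}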
Causes of Thrashing
Thrashing affects the performance of execution in the operating system and results in severe performance problems.
When CPU utilization is low, the process scheduling mechanism tries to load many processes into memory at the same time, so that the degree of multiprogramming can be increased. In this situation, there are more processes in memory than there are available frames, so each process can be allocated only a limited number of frames.
Whenever a high-priority process arrives in memory and no frame is free at that time, a page of another process that currently occupies a frame is moved to secondary storage, and the freed frame is allocated to the higher-priority process.
We can also say that as soon as memory fills up, processes start spending a lot of time waiting for their required pages to be swapped in. Again the utilization of the CPU becomes low because most of the processes are waiting for pages.
Thus a high degree of multiprogramming and a lack of frames are the two main causes of thrashing in the operating system.
Effect of Thrashing
When thrashing starts, the operating system tries to apply either the global page replacement algorithm or the local page replacement algorithm.
1. Working Set
The set of pages in the most recent Δ page references is known as the working set. If a page is in active use, then it will be in the working set. If the page is no longer being used, it will drop from the working set Δ time units after its last reference.
The working set mainly gives an approximation of the locality of the program.
The accuracy of the working set mainly depends on what value is chosen for Δ.
This working set model avoids thrashing while keeping the degree of multiprogramming
as high as possible.
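As a rough sketch of the idea, the following C program lists the working set at each point in time for a window of the Δ most recent references; the window size and reference string here are arbitrary.

#include <stdbool.h>
#include <stdio.h>

#define DELTA 4   /* working-set window: number of most recent references */

/* Print the distinct pages among the last DELTA references ending at position t. */
static void print_working_set(const int *refs, int t)
{
    int start = (t + 1 >= DELTA) ? t + 1 - DELTA : 0;

    printf("working set at t=%d:", t);
    for (int i = start; i <= t; i++) {
        bool seen = false;
        for (int j = start; j < i; j++)   /* skip pages already printed */
            if (refs[j] == refs[i])
                seen = true;
        if (!seen)
            printf(" %d", refs[i]);
    }
    printf("\n");
}

int main(void)
{
    int refs[] = {1, 2, 1, 3, 4, 4, 3, 5};   /* sample page reference string */
    int n = sizeof refs / sizeof refs[0];

    for (int t = 0; t < n; t++)
        print_working_set(refs, t);
    return 0;
}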