OS Unit III
Introduction: Memory is an important part of the computer that is used to store data. Its
management is critical to the computer system because the amount of main memory available in a
computer system is very limited, and at any time many processes are competing for it. Moreover, to
increase performance, several processes are executed simultaneously. For this, we must keep several
processes in main memory, so it is even more important to manage memory effectively.
o The memory manager keeps track of the status of each memory location, whether it is free or
allocated. It abstracts primary memory so that software perceives a large memory as
allocated to it.
o The memory manager permits computers with a small amount of main memory to execute
programs larger than the available main memory. It does this by moving information back
and forth between primary memory and secondary memory, using the concept of swapping.
o The memory manager is responsible for protecting the memory allocated to each process from
being corrupted by another process. If this is not ensured, then the system may exhibit
unpredictable behavior.
o Memory managers should enable sharing of memory space between processes. Thus, two
programs can reside at the same memory location although at different times.
Here are the reasons for using memory management:
It allows you to check how much memory needs to be allocated to processes and decides
which process should get memory at what time.
It tracks whenever memory gets freed or unallocated and updates the status accordingly.
It allocates space to application routines.
It also makes sure that these applications do not interfere with each other.
It helps protect different processes from each other.
It places programs in memory so that memory is utilized to its full extent.
Memory Addresses:
There are three types of Memory Addresses. They are:
1. Symbolic addresses
The addresses used in source code. Variable names, constants, and instruction labels are the
basic elements of the symbolic address space.
2. Relative addresses
At the time of compilation, a compiler converts symbolic addresses into relative addresses.
3. Physical addresses
The loader generates these addresses at the time when a program is loaded into main memory.
Virtual and physical addresses are the same in compile-time and load-time address-binding
schemes. Virtual and physical addresses differ in execution-time address-binding scheme.
The set of all logical addresses generated by a program is referred to as a logical address
space. The set of all physical addresses corresponding to these logical addresses is referred to as
a physical address space.
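As a standard worked example of execution-time binding: if the relocation (base) register holds
14000, then a logical address of 346 maps to the physical address 14000 + 346 = 14346, so the
logical and physical addresses clearly differ.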
Memory management techniques can be classified into the following main categories:
1. Contiguous memory management schemes
2. Non-Contiguous memory management schemes
The memory can be divided either in the fixed-sized partition or in the variable-sized
partition in order to allocate contiguous space to user processes.
1. B. Multiple Partitioning:
The single contiguous memory management scheme is inefficient, as it limits the computer to
executing only one program at a time, resulting in wastage of memory space and CPU time. The
problem of inefficient CPU use can be overcome using multiprogramming, which allows more than
one program to run concurrently. To switch between two processes, the operating system needs to
load both processes into main memory. The operating system therefore divides the available main
memory into multiple parts to load multiple processes, so that multiple processes can reside in the
main memory simultaneously.
1. B. 1. Fixed Partitioning
In a fixed partitioning (or static partitioning) memory management scheme, the main memory
is divided into several fixed-sized partitions. These partitions can be of the same size or different
sizes. Each partition can hold a single process. The number of partitions determines the degree of
multiprogramming, i.e., the maximum number of processes in memory. These partitions are made at
the time of system generation and remain fixed after that.
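As a rough sketch in C (not part of the original notes; the partition sizes and process requests are
illustrative assumptions), fixed partitioning can be modelled as a static table of partitions, each
holding at most one process:

#include <stdio.h>
#include <stdbool.h>

#define NUM_PARTITIONS 4

/* Partition sizes (in KB) are fixed at "system generation" time. */
static int partition_size[NUM_PARTITIONS] = {100, 200, 300, 400};
static int partition_owner[NUM_PARTITIONS]; /* 0 = free, else process id */

/* Place the process in the first free partition large enough to hold it. */
bool allocate(int pid, int size_kb) {
    for (int i = 0; i < NUM_PARTITIONS; i++) {
        if (partition_owner[i] == 0 && partition_size[i] >= size_kb) {
            partition_owner[i] = pid;
            /* Unused space inside the partition is internal fragmentation. */
            printf("P%d -> partition %d (%d KB wasted)\n",
                   pid, i, partition_size[i] - size_kb);
            return true;
        }
    }
    return false; /* no partition can hold the process */
}

int main(void) {
    allocate(1, 150);  /* lands in the 200 KB partition, 50 KB wasted */
    allocate(2, 90);   /* lands in the 100 KB partition, 10 KB wasted */
    allocate(3, 500);  /* fails: larger than every partition */
    return 0;
}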
1. B. 2. Dynamic Partitioning
Dynamic partitioning was designed to overcome the problems of the fixed partitioning
scheme. In a dynamic partitioning scheme, each process occupies only as much memory as it
requires when loaded for processing. Requesting processes are allocated memory until the entire
physical memory is exhausted or the remaining space is insufficient to hold the requesting process. In
this scheme the partitions are of variable size, and the number of partitions is not defined at
system generation time.
Partition Allocation
Memory is divided into different blocks or partitions. Each process is allocated memory according to
its requirement. Partition allocation is an ideal method to avoid internal fragmentation.
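A minimal C sketch of dynamic partition allocation, assuming a simple first-fit search over a list of
free holes (the hole layout and the first-fit policy are illustrative assumptions, not prescribed by
these notes):

#include <stdio.h>

#define MAX_HOLES 8

/* Free "holes" in memory: start address and size (both in KB). */
static int hole_start[MAX_HOLES] = {0, 400, 900};
static int hole_size[MAX_HOLES]  = {300, 200, 500};
static int num_holes = 3;

/* First fit: carve the request out of the first hole that is big enough.
 * The leftover part of the hole stays free, so there is no internal
 * fragmentation, but small leftover holes cause external fragmentation
 * over time. */
int allocate(int size_kb) {
    for (int i = 0; i < num_holes; i++) {
        if (hole_size[i] >= size_kb) {
            int addr = hole_start[i];
            hole_start[i] += size_kb;   /* shrink the hole */
            hole_size[i]  -= size_kb;
            return addr;
        }
    }
    return -1; /* no single hole is large enough */
}

int main(void) {
    printf("P1 at %d\n", allocate(250)); /* at 0, leaves a 50 KB hole */
    printf("P2 at %d\n", allocate(100)); /* at 400, leaves a 100 KB hole */
    printf("P3 at %d\n", allocate(600)); /* -1: 650 KB free in total, but scattered */
    return 0;
}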
Swapping:
Swapping is a memory management scheme in which any process can be temporarily
swapped from main memory to secondary memory so that the main memory can be made available
for other processes. It is used to improve main memory utilization. In secondary memory, the place
where the swapped-out process is stored is called swap space.
The purpose of swapping in an operating system is to access data present on the hard disk
and bring it into RAM so that application programs can use it. The thing to remember is that
swapping is used only when the data is not present in RAM.
Although the process of swapping affects the performance of the system, it helps to run larger
processes and more than one process at a time. This is why swapping is also sometimes described
as a technique for memory compaction.
The concept of swapping divides into two further operations: swap-in and swap-out.
o Swap-out is the method of removing a process from RAM and adding it to the hard disk.
o Swap-in is the method of bringing a process back from the hard disk into the
main memory (RAM).
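As a loose illustration in C (the swap file name, image size, and layout are assumptions, not part of
these notes), swap-out can be pictured as writing a process image to swap space and freeing its
memory, with swap-in reversing the steps:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define IMAGE_SIZE 4096 /* pretend process image size, in bytes */

/* Swap-out: write the process image to swap space, then free the RAM. */
void swap_out(char **image, const char *swap_path) {
    FILE *f = fopen(swap_path, "wb");
    fwrite(*image, 1, IMAGE_SIZE, f);
    fclose(f);
    free(*image);
    *image = NULL; /* the process no longer occupies main memory */
}

/* Swap-in: reallocate RAM and read the image back from swap space. */
void swap_in(char **image, const char *swap_path) {
    *image = malloc(IMAGE_SIZE);
    FILE *f = fopen(swap_path, "rb");
    fread(*image, 1, IMAGE_SIZE, f);
    fclose(f);
}

int main(void) {
    char *image = calloc(1, IMAGE_SIZE);
    strcpy(image, "process state");
    swap_out(&image, "swapfile.bin"); /* process leaves main memory */
    swap_in(&image, "swapfile.bin");  /* ...and comes back unchanged */
    printf("%s\n", image);
    free(image);
    return 0;
}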
Note:
o In a single tasking operating system, only one process occupies the user program area of
memory and stays in memory until the process is complete.
o In a multitasking operating system, a situation arises when all the active processes cannot
fit in the main memory; a process is then swapped out of the main memory so that
other processes can enter it.
Advantages of Swapping
1. It helps the CPU to manage multiple processes within a single main memory.
2. It helps to create and use virtual memory.
3. Swapping allows the CPU to perform multiple tasks simultaneously. Therefore, processes do
not have to wait very long before they are executed.
4. It improves the main memory utilization.
Disadvantages of Swapping
1. If the computer system loses power during substantial swapping activity, the user may lose
all information related to the program.
2. If the swapping algorithm is not good, the number of page faults can increase, decreasing
the overall processing performance.
Memory Allocation
Main memory usually has two partitions −
Low Memory − Operating system resides in this memory.
High Memory − User processes are held in high memory.
The operating system uses the following memory allocation mechanisms.
1. Single-partition allocation
In this type of allocation, a relocation-register scheme is used to protect user processes from
each other, and from changes to operating-system code and data. The relocation register contains the
value of the smallest physical address, whereas the limit register contains the range of logical
addresses. Each logical address must be less than the limit register.
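A minimal C sketch of the relocation-register scheme described above (the register values are
arbitrary assumptions): each logical address is checked against the limit register, and valid
addresses are relocated by adding the base:

#include <stdio.h>
#include <stdlib.h>

#define RELOCATION 14000 /* smallest physical address of the process */
#define LIMIT 3000       /* range of valid logical addresses */

/* Translate a logical address, trapping if it exceeds the limit register. */
int translate(int logical) {
    if (logical < 0 || logical >= LIMIT) {
        fprintf(stderr, "trap: addressing error at %d\n", logical);
        exit(EXIT_FAILURE);
    }
    return RELOCATION + logical;
}

int main(void) {
    printf("%d\n", translate(346));  /* 14000 + 346 = 14346 */
    printf("%d\n", translate(3500)); /* out of range -> trap */
    return 0;
}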
2. Multiple-partition allocation
In this type of allocation, main memory is divided into a number of fixed-sized partitions
where each partition should contain only one process. When a partition is free, a process is selected
from the input queue and is loaded into the free partition. When the process terminates, the partition
becomes available for another process.
Partitioned Allocation
It divides primary memory into various memory partitions, which are mostly contiguous areas
of memory. Every partition stores all the information for a specific task or job. This method consists
of allotting a partition to a job when it starts and deallocating it when the job ends.
What is paging?
Paging is a memory management technique that eliminates the requirement of contiguous
allocation of main memory. It is a storage mechanism that allows the OS to retrieve processes from
secondary storage into the main memory in the form of pages. The process address space is broken
into blocks of the same fixed size called pages (the size is a power of 2, typically between 512 bytes
and 8192 bytes), and the size of a process is measured in the number of pages.
Similarly, main memory is divided into small fixed-size blocks of physical memory
called frames. The size of a frame is kept the same as that of a page to have optimum utilization
of the main memory and to avoid external fragmentation. Paging allows faster access to data, and
it is a logical concept.
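A minimal C sketch of the page-to-frame translation implied above, assuming a 1 KB page size and a
small hypothetical page table (real page tables are maintained by the OS and consulted by the
hardware on every access):

#include <stdio.h>

#define PAGE_SIZE 1024 /* 1 KB pages: a power of 2, as the notes require */
#define NUM_PAGES 4

/* Hypothetical page table: page number -> frame number. */
static int page_table[NUM_PAGES] = {5, 2, 7, 0};

/* Split a logical address into (page, offset), then prepend the frame. */
int translate(int logical) {
    int page   = logical / PAGE_SIZE;
    int offset = logical % PAGE_SIZE;
    int frame  = page_table[page];
    return frame * PAGE_SIZE + offset;
}

int main(void) {
    /* Logical address 2100 = page 2, offset 52 -> frame 7, i.e. 7*1024 + 52. */
    printf("%d -> %d\n", 2100, translate(2100));
    return 0;
}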
Advantages of paging:
o Paging solves the problem of external fragmentation.
o It is simple to implement.
o It is memory efficient.
o Due to the equal size of pages and frames, swapping becomes very easy.
o It allows faster access to data.
o It allows parts of a single process to be stored in a non-contiguous fashion.
o Paging is one of the simplest algorithms for memory management.
Disadvantages of Paging
Disadvantages of the Paging technique are as follows:
In paging, the page table may itself consume a significant amount of memory.
Internal fragmentation is caused by this technique.
The time taken to fetch an instruction increases, since two memory accesses are now
required: one for the page table and one for the data itself.
What is Segmentation?
Segmentation is a memory management technique that eliminates the requirement of
contiguous allocation of main memory. In it, the main memory is divided into variable-size blocks of
physical memory called segments. It is based on the way the programmer structures the program:
each job is divided into several segments of different sizes, one for each module, where a module
contains pieces that perform related functions. Functions, subroutines, stacks, and arrays are
examples of such modules. Each segment is actually a different logical address space of the program.
When a process is to be executed, its segments are loaded into non-contiguous memory,
though every segment is loaded into a contiguous block of available memory.
Segmentation works very much like paging, but the segments are of variable length,
whereas in paging the pages are of fixed size.
A program segment may contain the program's main function, utility functions, data structures,
and so on. The operating system maintains a segment map table for every process and a list of free
memory blocks, along with segment numbers, their sizes, and the corresponding memory locations in
main memory. For each segment, the table stores the starting address and the length of the
segment. A reference to a memory location includes a value that identifies a segment and an offset
within it.
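A minimal C sketch of the segment-table lookup just described, with hypothetical base and length
values; an offset beyond the segment's length is an addressing error:

#include <stdio.h>
#include <stdlib.h>

#define NUM_SEGMENTS 3

/* Hypothetical segment table: starting address and length of each segment. */
static int seg_base[NUM_SEGMENTS]  = {1400, 6300, 4300};
static int seg_limit[NUM_SEGMENTS] = {1000, 400, 400};

/* A logical address is a (segment, offset) pair. */
int translate(int segment, int offset) {
    if (offset < 0 || offset >= seg_limit[segment]) {
        fprintf(stderr, "trap: offset %d outside segment %d\n", offset, segment);
        exit(EXIT_FAILURE);
    }
    return seg_base[segment] + offset;
}

int main(void) {
    printf("%d\n", translate(2, 53));  /* 4300 + 53 = 4353 */
    printf("%d\n", translate(1, 500)); /* beyond segment 1's length -> trap */
    return 0;
}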
Advantages of Segmentation
1. No internal fragmentation.
2. The average segment size is larger than the actual page size.
3. Less overhead.
4. It is easier to relocate segments than the entire address space.
5. The segment table is of smaller size compared to the page table in paging.
Disadvantages
1. It can have external fragmentation.
2. It is difficult to allocate contiguous memory to variable-sized partitions.
3. It requires costly memory management algorithms.
Fragmentation
As processes are loaded into and removed from memory, the free memory space is broken into
little pieces. It sometimes happens that processes cannot be allocated to memory blocks
because of their small size, and the memory blocks remain unused. This problem is known as
fragmentation.
Fragmentation is of two types −
External fragmentation
The total memory space is enough to satisfy a request or to hold a process, but it is not
contiguous, so it cannot be used.
Internal fragmentation
The memory block assigned to a process is bigger than the amount requested. Some portion of
the memory is left unused, as it cannot be used by another process.
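For example (with illustrative numbers): if a 6 KB process is placed in a fixed 8 KB partition, the
unused 2 KB inside the partition is internal fragmentation; if memory instead holds three scattered
free holes of 2 KB, 3 KB, and 4 KB, a 6 KB request fails even though 9 KB is free in total, which is
external fragmentation.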
The following diagram shows how fragmentation can cause waste of memory, and how a
compaction technique can be used to create more free memory out of fragmented memory: