MSBTE Diploma All Branch Notes Available in FREE Visit Now: www.diplomasolution.com
YouTube: Diploma Solution WhatsApp No: 8108332570
22516 Operating System
Basic Memory Management
Memory management is a crucial function of the operating system. Its primary purpose is to manage
the computer's primary memory, where data and programs are stored either temporarily or
permanently for processing by the CPU. Basic memory management in operating systems typically
involves the following tasks:
1. Keeping Track of Used and Unused Memory: The operating system keeps track of which
parts of memory are currently being used and by whom, and which parts are not in use.
2. Allocating and Deallocating Memory Blocks: When a process requests memory, the
operating system must allocate memory to that process if possible. After a process finishes
and no longer needs its allocated memory, the operating system must deallocate that
memory so it can be used by other processes.
3. Memory Protection: The operating system must ensure that a process can't interfere with
the memory of another process. This is achieved by setting up protections in hardware that
prevent a process from accessing memory that it doesn't own.
4. Physical and Logical Address Space: Each process operates in its own logical address space,
which it perceives as a contiguous sequence of addresses, i.e., a large array. In reality, these
logical addresses are mapped to physical addresses in the computer's memory. The
operating system manages this mapping.
The operating system might use various strategies to manage memory, such as partitioning (dividing
memory into sections), paging (dividing memory into fixed-size pages), and segmentation (dividing
memory into variable-size segments based on logical divisions within a program).
These are just the basics; more advanced memory management schemes might involve the use of
virtual memory, where memory can be "extended" by using disk space. Memory management can
be a complex task, especially in modern operating systems where multiple processes might be
running concurrently, each of which requires access to memory.
Basic Memory Management Partitioning
Partitioning is a memory management technique used by operating systems to divide the primary
memory or main memory into distinct, non-overlapping sections or partitions.
There are two main types of partitioning:
1. Fixed Partitioning: In this approach, the memory is divided into a number of fixed-size
partitions. Each partition may contain exactly one process. When a partition is free, a process
is selected from the input queue and is loaded into the free partition. When the process
terminates, the partition becomes available for another process. There are two types of fixed
partitioning:
+ Equal-size partitions: All partitions are of the same size. This approach is simple but
may not make efficient use of memory if processes are much smaller than the
partition size.
+ Unequal-size partitions: Partitions are of different sizes. This is more flexible and can
reduce wasted memory (internal fragmentation), but it may be more complex to
manage.
2. Dynamic Partitioning: In this approach, the partitions are of variable length and number.
When a process arrives and needs memory, the exact amount of memory required by the
process is allocated. While dynamic partitioning improves memory utilization and reduces
internal fragmentation, it can lead to external fragmentation as free memory can become
divided into non-contiguous blocks.
In both fixed and dynamic partitioning, the operating system must manage the partitions, keep track
of which partitions are in use and which are free, and assign processes to free partitions.
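Dynamic partitioning is usually paired with a placement policy such as first fit. The sketch below is an illustrative Python model of first-fit allocation over a list of free blocks (the block sizes and the `(start, size)` representation are assumptions for the example, not how any particular OS stores its free list):

```python
# A minimal first-fit allocator over a list of free (start, size) blocks.
# Illustrative sketch only -- real allocators use richer data structures.

def first_fit_allocate(free_blocks, request):
    """Allocate `request` units from the first block large enough.

    Returns the start address, or None if no block fits.
    `free_blocks` is modified in place.
    """
    for i, (start, size) in enumerate(free_blocks):
        if size >= request:
            if size == request:
                free_blocks.pop(i)          # exact fit: remove the block
            else:
                # shrink the block from the front
                free_blocks[i] = (start + request, size - request)
            return start
    return None  # no single block fits: external fragmentation

free = [(0, 100), (200, 50), (300, 300)]
a = first_fit_allocate(free, 120)   # only the 300-unit block is big enough
b = first_fit_allocate(free, 50)    # carved from the first (100-unit) block
```

Note that after a few allocations and frees, the free list fragments into non-contiguous chunks, which is exactly the external fragmentation described above.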
While simple and effective in certain situations, partitioning can lead to problems like internal
fragmentation (unused memory within a partition) or external fragmentation (unused memory
between partitions). Modern operating systems often use more sophisticated memory management
techniques such as paging and segmentation.
Partitioning Fixed and Variable
In the context of memory management in operating systems, partitioning is a scheme to divide main
memory into sections where separate jobs or processes are loaded. There are two types of
partitioning: Fixed Partitioning and Variable (or Dynamic) Partitioning.
1. Fixed Partitioning: In Fixed Partitioning, the main memory is divided into fixed-size partitions
at system initialization, and this division remains constant during the system operation. There
are two variations:
+ Equal-size Partitions: Here, memory is divided into equal-sized chunks. This is
straightforward but can lead to significant internal fragmentation (unused memory
within a partition) if the loaded processes are significantly smaller than the partition size.
+ Unequal-size Partitions: Here, memory is divided into partitions of different sizes.
This is often done to accommodate processes of different sizes more efficiently,
hence reducing internal fragmentation.
In both cases, each partition can only hold one process at a time.
2. Variable (Dynamic) Partitioning: In Variable Partitioning, the memory is not divided into
partitions at system initialization. Instead, each time a process arrives and needs to be
loaded into memory, it is allocated exactly as much memory as it requires. Over time, as
processes are loaded and removed, memory is continually divided and coalesced, creating
partitions of variable size and number.
The advantage of Variable Partitioning is that it reduces internal fragmentation because partitions
are made to fit the process sizes exactly. However, it can lead to external fragmentation, where free
memory becomes divided into non-contiguous chunks.
Both Fixed and Variable Partitioning have their pros and cons, and the choice depends on the specific
requirements and constraints of the system. Today, many systems use more advanced techniques like
paging and segmentation to manage memory, which can handle the memory in a more flexible and
efficient way.
Free Space management Techniques
Free space management is crucial in an operating system to keep track of all the unallocated space in
the memory. Without it, the operating system may not be able to allocate memory efficiently,
leading to poor performance or even system failures. Here are some common techniques used in
free-space management:
1. Bitmaps: Memory is divided into allocation units, and each is represented by a bit in the
bitmap. If the bit is set (1), the allocation unit is occupied. If it is cleared (0), the unit is free.
Bitmaps provide an easy way to find contiguous free space but can be inefficient if the
allocation unit is small.
2. Linked List (Free List): This method uses a linked list to manage free memory. Each node in
the list points to a free block of memory. The list can be traversed to find a suitable block
when allocation is required. This method is simple and efficient for large allocation units, but
finding a suitable block may require traversing the list.
3. Buddy System: This is a dynamic memory allocation system that partitions memory into
halves until it reaches a partition size adequate to satisfy a memory request. When a
partition is freed, the buddy system merges adjacent blocks back into larger ones if possible.
This method is efficient and reduces external fragmentation but can suffer from internal
fragmentation.
4. Boundary Tags: This method involves storing the size of free blocks in the header and footer
of each block, allowing efficient coalescing of adjacent free blocks. The free blocks can be
organized in a binary tree, linked list, or similar data structure for efficient searching.
5. Indexed Allocation: In this method, an index block is associated with a file. The index block
contains pointers to every block that the file uses. It provides fast direct access but requires a
priori knowledge of how many blocks a file will need.
Each of these techniques has its own strengths and weaknesses. The best choice of free space
management technique often depends on the specific system and its requirements.
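The bitmap technique from item 1 can be sketched in a few lines of Python. This is a toy model (one list element per allocation unit, made-up bitmap contents), not a real allocator:

```python
# Bitmap free-space tracking: one bit per allocation unit,
# 1 = in use, 0 = free. Allocating n units means finding a
# run of n zero bits and setting them.

def find_free_run(bitmap, n):
    """Return the index of the first run of n free (0) bits, or None."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == n:
                return run_start
        else:
            run_len = 0
    return None

def mark(bitmap, start, n, used=True):
    """Mark n units starting at `start` as used (1) or free (0)."""
    for i in range(start, start + n):
        bitmap[i] = 1 if used else 0

bitmap = [1, 1, 0, 0, 1, 0, 0, 0]
start = find_free_run(bitmap, 3)   # first run of 3 free units starts at 5
mark(bitmap, start, 3)             # allocate it
```

The scan illustrates why bitmaps get inefficient with small allocation units: finding a run of free bits is a linear search over the whole map.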
5.2 Virtual Memory - Introduction to Paging, Segmentation, Fragmentation, and Page fault.
Virtual Memory
Virtual memory is a memory management technique used by modern operating systems to extend
the capacity of main memory (RAM). It creates the illusion of a memory space that is much larger
than the actual physical memory, thereby allowing programs to run even if they don't entirely fit into
main memory at once.
Virtual memory works by using a portion of the computer's disk space as a temporary storage for
information that is not currently being used by the CPU. The data that is most frequently accessed is
kept in physical memory while other data is moved to the virtual memory space.
Here are some key aspects of virtual memory:
1. Paging: This is a core function of virtual memory that divides memory into fixed-size blocks
called pages. When a program needs to access data that is not in main memory, the
operating system will swap out a page from main memory to disk and then load the required
page into memory.
2. Swapping: This is the process where an entire process is moved from main memory to disk
and back.
3. Page Fault: A page fault occurs when a program tries to access data that was mapped into
virtual memory but is not currently located in the physical memory. This prompts the
operating system to intervene and move the data from the virtual memory into the physical
memory.
4. Page Replacement Algorithms: These are used to decide which memory pages to swap out
of main memory to disk when a page of memory needs to be allocated. Examples include the
Least Recently Used (LRU) and Most Recently Used (MRU) algorithms.
5. Memory Protection: Virtual memory provides a degree of isolation between processes,
thereby providing memory protection. Each process runs in its own virtual memory space,
and it cannot access the memory of other processes without explicit mechanisms to do so.
The main benefits of virtual memory include the ability to run larger programs than would otherwise
fit into main memory, increased system utilization, and enhanced memory protection and isolation.
However, it also introduces complexity to the operating system and can lead to degraded
performance if not managed correctly, such as excessive page swapping, known as thrashing.
Introduction to Paging
Paging is a method of managing computer memory. It is one of the key mechanisms that allows
virtual memory systems to function.
In a paging system, the operating system divides an application's virtual memory into blocks of
contiguous addresses known as "pages". The corresponding blocks in physical memory, where these
pages are loaded, are called "frames".
Key points about paging include:
1. Page and Frame Size: The size of a page (and corresponding frame) is typically determined
by the hardware. Common page sizes range from 4KB to 64KB, though some systems support
larger page sizes.
2. Page Table: The operating system maintains a data structure, known as a page table, for each
process. The page table maps the virtual page numbers of a process to the corresponding
physical frames in memory.
3. Translation Lookaside Buffer (TLB): Because accessing the page table can be time-
consuming, many systems use a special cache known as a TLB to store recent translations
from virtual to physical addresses.
4. Swapping and Page Faults: If a process tries to access a page that is not currently in physical
memory (known as a "page fault"), the operating system will choose a page to swap out from
memory to disk, load the needed page into the now-free frame, and then resume the
process.
5. Memory Protection: Paging also helps with memory protection. Because each process has
its own page table, one process cannot access the memory of another process unless the
operating system explicitly allows it.
6. Internal Fragmentation: Since memory is allocated in terms of fixed-size pages, there can be
internal fragmentation, where the last page of a process might not be fully utilized.
Paging allows the physical memory to be used more efficiently by multiple processes and is a core
concept in modern operating systems that use virtual memory.
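The page-table lookup described in points 1 and 2 can be sketched as simple arithmetic: the logical address is split into a page number and an offset, and only the page number is translated. This is a minimal sketch assuming a 4KB page size and a plain dict as the page table (real page tables are hardware-defined, multi-level structures):

```python
# Logical-to-physical address translation under paging (sketch).
# Assumes a 4 KB page size; the page table here is just a dict
# mapping page number -> frame number.

PAGE_SIZE = 4096  # bytes

def translate(logical_addr, page_table):
    """Split a logical address into (page, offset), map the page to a
    frame, and recombine. A missing page would be a page fault."""
    page = logical_addr // PAGE_SIZE
    offset = logical_addr % PAGE_SIZE
    frame = page_table[page]          # KeyError here models a page fault
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 2, 2: 7}       # page -> frame (example values)
phys = translate(8200, page_table)    # page 2, offset 8 -> frame 7
```

Here logical address 8200 is page 2 (8200 // 4096) with offset 8, so it maps to physical address 7 * 4096 + 8 = 28680.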
Introduction to Segmentation
Segmentation is another memory management technique used by operating systems, particularly for
systems that utilize virtual memory. Unlike paging, which divides memory into fixed-sized blocks,
segmentation divides memory into variable-sized blocks based on logical divisions in a program.
These blocks are known as "segments."
Key points about segmentation include:
1. Segments: Each program is divided into segments based on logical units such as functions,
data structures, and arrays. Each segment represents a separate portion of a program's
address space, such as the code, stack, and heap segments.
2. Segment Table: The operating system maintains a segment table for each process. This table
stores the starting address and length of each segment. When a process needs to access a
memory location, it specifies the segment number and the offset within the segment.
3. Advantages: Segmentation can be more efficient than paging because it reduces internal
fragmentation. It can also enhance security and simplicity by isolating different segments of a
program from one another.
4. Disadvantages: Segmentation can lead to external fragmentation, where the total free
memory is enough to satisfy a request, but it's not contiguous and thus can't be used.
Compaction, a costly process, is needed to reduce external fragmentation.
5. Memory Protection: Like paging, segmentation also aids in memory protection. Different
segments can have different levels of protection, such as read-only or execute-only.
6. Segmentation with Paging: Some systems combine segmentation and paging to gain the
benefits of both, dividing memory into segments for the logical benefits and then dividing
those segments into pages for better physical memory management. This is called paged
segmentation.
Segmentation offers several benefits over simple paging, especially for complex programs that
benefit from being broken down into logical units. However, it can also be more complex to
implement and manage.
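The segment-table translation from point 2 can be sketched as a base-and-limit lookup. The segment numbers, bases, and limits below are made-up example values:

```python
# Address translation under segmentation (sketch). Each segment has a
# base (start address) and a limit (length); an offset at or beyond
# the limit is a protection violation.

segment_table = {
    0: {"base": 1000, "limit": 400},   # e.g. a code segment
    1: {"base": 5000, "limit": 1200},  # e.g. a data segment
}

def translate(segment, offset):
    """Map (segment, offset) to a physical address, checking the limit."""
    entry = segment_table[segment]
    if offset >= entry["limit"]:
        raise MemoryError("segmentation fault: offset out of bounds")
    return entry["base"] + offset

addr = translate(1, 100)   # base 5000 + offset 100 = 5100
```

The limit check is what gives segmentation its protection property: a process cannot address past the end of its own segment.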
Introduction to Fragmentation
Fragmentation is a common issue that occurs in computer systems, particularly in memory
management and storage. It refers to the inefficient use of memory or storage space, resulting in
unused or partially used space that cannot be effectively utilized. Fragmentation can occur in two
main forms: internal fragmentation and external fragmentation.
1. Internal Fragmentation: Internal fragmentation occurs when a portion of allocated memory
or storage is unused but cannot be utilized by other processes or programs. This happens
because memory is allocated in fixed-size blocks, and if the requested memory size is slightly
smaller than the block size, the remaining space within the block remains unused. This
unused space is called internal fragmentation.
For example, if a program requests a memory block of 16KB but is only able to use 14KB, there will
be 2KB of internal fragmentation. In this case, the remaining 2KB within the allocated block is wasted
and cannot be used by other processes.
2. External Fragmentation: External fragmentation occurs when there is enough total free
memory or storage space to satisfy a request, but the available space is divided into small
non-contiguous chunks. As a result, the requested memory or storage cannot be allocated
because it is not available in a single contiguous block.
External fragmentation can be more problematic than internal fragmentation as it leads to memory
or storage being underutilized even though there is technically enough free space to fulfill a request.
Over time, external fragmentation can reduce the overall efficiency of memory or storage utilization.
Both internal and external fragmentation can impact system performance and efficiency. Memory
management techniques such as compaction, which rearranges memory to consolidate free space,
and memory allocation strategies like paging and segmentation, aim to reduce the effects of
fragmentation and optimize memory utilization in computer systems. Similarly, defragmentation is
used in storage systems to consolidate fragmented files and improve disk performance.
Introduction to Page fault
A page fault is an interrupt or exception that occurs when a program tries to access a portion of
memory (a page) that is not currently stored in the physical memory (RAM), but rather in the virtual
memory (typically a hard drive or SSD). This happens when the operating system uses a technique
called paging to manage memory.
Here are the key steps that occur during a page fault:
1. When a program needs to access data that isn't in the physical memory, the system triggers a
page fault interrupt.
2. The operating system then checks if the request for the memory access is valid and if the
page is in the virtual memory. If it's not valid (i.e., the process requested a non-existent
address or violated protection policies), the OS typically terminates the process. This is
known as a segmentation fault or access violation.
3. If the request is valid, the OS will look for a free frame in the physical memory. If no such
frames exist, the OS will choose a frame to be swapped out based on a page replacement
algorithm (such as Least Recently Used (LRU), First-In, First-Out (FIFO), etc.).
4. The required page is then loaded from the virtual memory into the freed frame in the
physical memory.
5. Finally, the page table is updated to reflect the new location of the page, and the interrupted
process resumes.
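The steps above can be sketched as a toy page-fault handler. This is a simplified model, not kernel code: the disk read in step 4 is just a comment, and the victim choice uses FIFO as one possible replacement policy:

```python
# Toy page-fault handler following the steps above. `page_table` maps
# resident pages to frames; `free_frames` lists unused frames; `fifo`
# records the load order for the eviction choice in step 3.

from collections import deque

def handle_page_fault(page, page_table, free_frames, fifo):
    """Bring `page` into memory, evicting the oldest page if needed."""
    if free_frames:
        frame = free_frames.pop()        # step 3: a free frame exists
    else:
        victim = fifo.popleft()          # step 3: FIFO victim choice
        frame = page_table.pop(victim)   # reclaim the victim's frame
    # step 4: a real OS would now read the page from disk into `frame`
    page_table[page] = frame             # step 5: update the page table
    fifo.append(page)
    return frame

page_table, fifo, free_frames = {}, deque(), [0, 1]
handle_page_fault(7, page_table, free_frames, fifo)
handle_page_fault(3, page_table, free_frames, fifo)
handle_page_fault(9, page_table, free_frames, fifo)  # evicts page 7
```

After the third fault there are no free frames left, so the oldest resident page (7) is evicted and its frame is reused for page 9.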
Page faults are a normal part of operating system operation when using virtual memory, but if they
occur too frequently (a situation known as thrashing), they can significantly degrade system
performance, as reading from disk is much slower than reading from RAM. Techniques like improving
the page replacement algorithm, increasing the amount of RAM, or modifying programs to access
memory in a more sequential manner can help reduce the frequency of page faults.
Page Replacement Algorithms
Page replacement algorithms are used by the operating system to decide which pages should be
removed from memory (RAM) when it becomes full, and a new page needs to be loaded from disk.
The goal is to select the page that will not be needed for the longest time in the future. However,
predicting future page references is generally not possible, so various algorithms have been
developed to approximate this. Here are a few common ones:
1. FIFO (First-In, First-Out): This is the simplest page replacement algorithm. In this scheme, the
operating system keeps track of all the pages in memory in a queue, with the most recent
arrival at the back, and the oldest arrival in front. When a page needs to be replaced, the
page at the front of the queue (the oldest page) is selected.
2. LRU (Least Recently Used): This algorithm works on the idea that pages that have been most
heavily used in the past will probably be heavily used again in the future. So, it discards the
least recently used pages first. This algorithm is often used in practice and can be
implemented by keeping a linked list of pages, with the most recently used page at the front
and the least recently used page at the back.
3. Optimal Page Algorithm: Also known as OPT or MIN, this algorithm selects for replacement
the page that will not be used for the longest time in the future. While this algorithm is
excellent for minimizing page faults, it's not often used in practice because it requires perfect
knowledge of future page requests, which is typically impossible.
4. Clock Algorithm: This algorithm keeps a circular list of pages in memory, with a 'hand'
(pointer) that points to the oldest page in the list. When a page fault occurs, the hand checks
if the 'use' bit of the page it points to is set. If it is, it is cleared and the hand moves to the
next page. If it's not set, the page is replaced.
5. Least Frequently Used (LFU): This algorithm uses a counter to keep track of the number of
times a page is accessed. When the page frames are full and a page needs to be replaced, the
system selects the one with the smallest count. It's based on the argument that a page with
the smallest count was probably just brought in and has yet to be used.
Each of these algorithms has its strengths and weaknesses, and the best choice of algorithm can
depend on the specific characteristics of the system, including the hardware architecture, the
number of page frames available, the memory access patterns of the current processes, and the
performance characteristics that are most important for the system's workload.
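The clock algorithm from item 4 is short enough to sketch directly. The frame contents and use bits below are made-up example values:

```python
# Clock (second-chance) replacement sketch: frames form a circular
# list, each with a use bit. The hand clears set bits as it advances
# and evicts the first frame whose bit is already clear.

def clock_replace(use_bits, hand):
    """Return (victim_index, new_hand) for the frame to replace."""
    n = len(use_bits)
    while True:
        if use_bits[hand]:
            use_bits[hand] = 0            # give this page a second chance
            hand = (hand + 1) % n
        else:
            return hand, (hand + 1) % n   # evict this frame

pages = ["A", "B", "C", "D"]              # example frame contents
use_bits = [1, 0, 1, 1]
victim, hand = clock_replace(use_bits, 0)  # A's bit is cleared, B evicted
```

Starting at frame 0, page A's use bit is set, so it is cleared and the hand moves on; frame 1 (page B) has a clear bit and becomes the victim.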
Page Replacement Algorithms FIFO
The First-In, First-Out (FIFO) page replacement algorithm is one of the simplest methods for
managing the swapping of pages in and out of memory.
The FIFO algorithm operates similarly to how a real-world queue operates. It maintains a list of all
the pages in memory in the order they were loaded. The oldest page, the one that was loaded into
memory first, is at the front of the queue, while the newest page is at the back of the queue.
When a page fault occurs, meaning the operating system needs to load a page into memory that is
not already there, and all the memory frames are full, the FIFO algorithm selects the page at the
front of the queue, the oldest page, to be removed from memory. The new page is then added to the
back of the queue.
While FIFO is simple and easy to understand, it does not always yield the best performance, as it
doesn't take into account the frequency or recency of page use. As a result, it could potentially
remove a page from memory that is still frequently used, while a less frequently used page remains
in memory. This behavior can lead to an increased number of page faults, a situation known as
Belady's anomaly.
For these reasons, other page replacement algorithms, such as Least Recently Used (LRU) or the
Clock algorithm, are often used instead, as they typically offer better performance in most scenarios.
However, the best page replacement algorithm to use can vary depending on the specific workload
and system requirements.
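A small simulation makes FIFO (and Belady's anomaly) concrete. The reference string below is the classic example used to demonstrate the anomaly:

```python
# FIFO page-replacement simulation: count page faults for a given
# reference string and number of frames.

from collections import deque

def fifo_faults(references, num_frames):
    frames, queue, faults = set(), deque(), 0
    for page in references:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.remove(queue.popleft())  # evict the oldest page
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
f3 = fifo_faults(refs, 3)   # 9 faults with 3 frames
f4 = fifo_faults(refs, 4)   # 10 faults with 4 frames
```

Counterintuitively, giving this reference string *more* frames produces *more* faults (10 versus 9), which is exactly Belady's anomaly.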
Page Replacement Algorithms LRU
The Least Recently Used (LRU) page replacement algorithm is a common method for managing the
swapping of pages in and out of memory in an operating system.
The LRU algorithm operates on the principle that pages which have been most heavily used in the
recent past will likely be used heavily in the near future. Hence, when a page needs to be replaced
(e.g., during a page fault), the LRU algorithm selects the page that has not been used for the longest
amount of time.
LRU maintains a list of pages, with the most recently used page at the front of the list and the least
recently used page at the back. When a page is referenced, it is moved to the front of the list. When
a page needs to be replaced, the page at the back of the list (the least recently used page) is chosen.
LRU can be more efficient than simpler methods like FIFO (First-In, First-Out), as it takes into account
the recency of page usage. However, implementing true LRU can be expensive in terms of time and
resources, as it requires maintaining a linked list of pages and updating this list every time a page is
referenced. Therefore, many systems use approximations of LRU, such as the Clock or Second Chance
algorithm, which offer a compromise between the efficiency of LRU and the simplicity of FIFO.
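The recency-ordered list that LRU maintains can be modeled with Python's `OrderedDict`, which keeps insertion order and lets us move an entry to the end on each reference. This is a simulation for counting faults, not an in-kernel implementation:

```python
# LRU page-replacement simulation. The OrderedDict holds resident
# pages in recency order: least recently used at the front, most
# recently used at the end.

from collections import OrderedDict

def lru_faults(references, num_frames):
    frames, faults = OrderedDict(), 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
f = lru_faults(refs, 3)   # 10 faults with 3 frames
```

On this reference string LRU happens to fault more than FIFO with 3 frames, a reminder that no practical policy wins on every workload.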
Page Replacement Algorithms Optimal
The Optimal Page Replacement Algorithm, also known as OPT or MIN, is a page replacement policy
that selects for replacement the page that will not be used for the longest time in the future. It's a
theoretical concept used mainly for comparative studies, because it requires perfect knowledge of
future requests, which is generally impossible in practice.
Here's how it works:
1. When a page needs to be swapped into memory and there is no free space left, the
operating system looks at all the pages currently in memory.
2. For each of these pages, it predicts how long it will be until the page is needed again, based
on the list of future page requests.
3. The OS then selects the page that won't be needed for the longest time in the future and
swaps it out to make room for the new page.
The optimal algorithm minimizes the number of page faults and is, by definition, the most efficient
algorithm. However, it's not practical for real systems because it requires advance knowledge of the
sequence of page requests, something that is not typically possible to know. For this reason, the
Optimal Page Replacement Algorithm serves as a benchmark to compare the performance of other,
more practical, page replacement algorithms such as LRU (Least Recently Used) or FIFO (First-In,
First-Out).
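In a simulation the whole reference string is known in advance, so OPT *can* be computed — which is precisely why it works as a benchmark but not as a real policy. A minimal sketch:

```python
# Optimal (OPT/MIN) page-replacement simulation. It "cheats" by
# scanning ahead in the reference string to find each resident
# page's next use -- possible offline, impossible in a running OS.

def opt_faults(references, num_frames):
    frames, faults = set(), 0
    for i, page in enumerate(references):
        if page in frames:
            continue
        faults += 1
        if len(frames) == num_frames:
            def next_use(p):
                # index of p's next reference, or infinity if never used again
                try:
                    return references.index(p, i + 1)
                except ValueError:
                    return float("inf")
            # evict the page whose next use is farthest in the future
            frames.remove(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
f = opt_faults(refs, 3)   # 7 faults: fewer than FIFO (9) or LRU (10)
```

On the same reference string OPT needs only 7 faults where FIFO needed 9 and LRU 10, showing the gap that practical algorithms try to close.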