
Unit-3

Chapter-1 Memory management

Memory Management
In a uni-programming system, main memory is divided into two parts: one for the operating system and one for the currently executing job. Consider the following:

Fig 3.5: Main memory partition

Partition 1 is used for the operating system and partition 2 is used to store the user process. Some of partition 2 is wasted; this is indicated by the shaded region in the figure.
In a multiprogramming environment the user space is divided into a number of partitions, one partition per process. This subdivision is carried out dynamically by the operating system; the task is known as memory management. Efficient memory management is possible with multiprogramming.

Logical- versus Physical-Address Space


An address generated by the CPU is commonly referred to as a logical address, whereas an address seen by the memory unit, that is, the one loaded into the memory-address register of the memory, is commonly referred to as a physical address.

The compile-time and load-time address-binding methods generate identical logical and physical addresses. However, the execution-time address-binding scheme results in differing logical and physical addresses. In this case, we usually refer to the logical address as a virtual address; we use the terms logical address and virtual address interchangeably in this text. The set of all logical addresses generated by a program is a logical-address space; the set of all physical addresses corresponding to these logical addresses is a physical-address space. Thus, in the execution-time address-binding scheme, the logical- and physical-address spaces differ.

The run-time mapping from virtual to physical addresses is done by a hardware device called the memory-management unit (MMU).

The base register is now called a relocation register. The value in the relocation
register is added to every address generated by a user process at the time it is sent to
memory. For example, if the base is at 14000, then an attempt by the user to address
location 0 is dynamically relocated to location 14000; an access to location 346 is
mapped to location 14346.
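A minimal sketch of this relocation in Python (the 14000 base is from the example above; the limit value is an illustrative assumption):

    # Dynamic relocation: the MMU adds the relocation register to every
    # logical address; the limit register guards the process's space.
    RELOCATION_REGISTER = 14000   # base address from the example
    LIMIT_REGISTER = 16384        # assumed size of the logical address space

    def translate(logical_address: int) -> int:
        if not 0 <= logical_address < LIMIT_REGISTER:
            raise MemoryError("trap: address outside logical address space")
        return RELOCATION_REGISTER + logical_address

    print(translate(0))    # 14000, as in the text
    print(translate(346))  # 14346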

Fig.3.6: Dynamic relocation using a relocation register.

Swapping:
Swapping is used to increase main-memory utilization. For example, suppose main memory holds 15 processes and that this is its maximum capacity. The CPU is currently executing process 14; in the middle of execution, process 14 needs I/O. The CPU then switches to another job: process 14 is moved out to disk and another process is loaded into main memory in its place. When process 14 completes its I/O operation, it is moved back into main memory from disk. Moving a process from main memory to disk is known as "swap out" and moving it from disk to main memory is called "swap in". This mechanism is called "swapping". Swapping achieves more efficient memory utilization.
Swapping requires a backing store, commonly a fast disk. It must be large enough to accommodate copies of all memory images for all users. When a process is swapped out, its executable image is copied to the backing store. When it is swapped in, it is copied into main memory at a new block allocated by the memory manager.

Fig.3.7: Swapping

Contiguous memory allocation:


Memory allocation methods:
Main memory contains both the operating system and the user processes. The various memory allocation methods are as follows.

1. Single Partition allocation


In this method, the operating system resides in the lower part of memory, and the remaining memory is treated as a single partition available as the user's space. Only a single job can be loaded into this user space at a time.
The short-term scheduler selects a job from the ready queue for execution, and the dispatcher loads that job into main memory. Main memory holds only one process at a time, because the user space is treated as a single partition.
The main disadvantage of this scheme is that memory is not fully utilized; a lot of memory is wasted.

Fig. 3.9: Memory allocation for single partition

2. Multiple partitioning:
This method can be implemented in three ways:
a. Fixed equal multiple partitioning
b. Fixed variable multiple partitioning
c. Dynamic multiple partitioning

a. Fixed equal multiple partitioning:


In this scheme the operating system resides in low memory and the rest of main memory is used as user space. The user space is divided into fixed partitions of equal size; the partition size depends on the operating system. For example, suppose the total main memory is 6 MB and 1 MB is occupied by the operating system. The remaining 5 MB is divided into 5 equal fixed partitions of 1 MB (1024 KB) each. P1 through P5 are the 5 jobs to be loaded into main memory; their sizes are given in the table below:

Job    Size
P1     450 KB
P2     1000 KB
P3     1024 KB
P4     1500 KB
P5     500 KB

Fig. Main memory allocation

Internal and external fragmentation:


Process P1 is loaded into partition 1. The maximum size of partition 1 is 1024 KB and the size of P1 is 450 KB, so 1024 - 450 = 574 KB of space is wasted; this wasted memory is called internal fragmentation. There is not enough space to load process P4 anywhere, because P4 is larger than every partition, so an entire partition (partition 5) is wasted; this wasted memory is called external fragmentation.
Therefore, the total internal fragmentation is
(1024 - 450) + (1024 - 1000) + (1024 - 500) = 574 + 24 + 524 = 1122 KB.
The external fragmentation is 1024 KB.
Memory wasted within a partition is called internal fragmentation; the wastage of an entire partition is called external fragmentation.
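A small worked check of these figures (a sketch; partition and job sizes are taken from the example):

    PARTITION_SIZE = 1024                                    # KB, five equal partitions
    loaded = {"P1": 450, "P2": 1000, "P3": 1024, "P5": 500}  # jobs that fit; P4 (1500 KB) does not

    internal = sum(PARTITION_SIZE - size for size in loaded.values())
    external = PARTITION_SIZE   # one whole partition stays empty, since P4 fits nowhere

    print(internal)   # 1122 KB = 574 + 24 + 0 + 524
    print(external)   # 1024 KB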

b. Fixed variable multiple partitioning:
In this method the user space of main memory is divided into a number of partitions, but the partitions differ in size. The operating system keeps a table indicating which partitions are available and which are occupied. When a process arrives and needs memory, we search for a partition large enough for it; if we find one, we allocate that partition to the process.

There are three strategies used to allocate memory in this scheme; a sketch of all three follows the definitions.

First-Fit: Allocate the first partition that is big enough. Searching can start from either low memory or high memory and stops as soon as a free partition that is large enough is found.
Best-Fit: Allocate the smallest partition that is big enough. The entire memory is searched, and the smallest partition that can hold the process is allocated.
Worst-Fit: Search the entire memory and select the largest partition of all.
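A minimal sketch of the three strategies over a list of free partition sizes (in KB, taken from Fig. 3.11 below); each function returns the index of the chosen partition, or None if nothing fits:

    def first_fit(partitions, size):
        for i, p in enumerate(partitions):
            if p >= size:
                return i                  # first hole big enough
        return None

    def best_fit(partitions, size):
        fits = [(p, i) for i, p in enumerate(partitions) if p >= size]
        return min(fits)[1] if fits else None    # tightest hole

    def worst_fit(partitions, size):
        fits = [(p, i) for i, p in enumerate(partitions) if p >= size]
        return max(fits)[1] if fits else None    # largest hole

    free = [700, 400, 525, 900, 350, 625]   # P1..P6
    print(first_fit(free, 600))   # 0 -> P1, as in the walkthrough below
    print(best_fit(free, 600))    # 5 -> P6 (625 KB)
    print(worst_fit(free, 600))   # 3 -> P4 (900 KB)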
For example, assume that we have 4000 KB of main memory available and the operating system occupies 500 KB. The remaining 3500 KB of memory is used for user processes.

Job Queue:
Job   Size      Arrival time (ms)
J1    825 KB    10
J2    600 KB    5
J3    1200 KB   20
J4    450 KB    30
J5    650 KB    15

Partitions:
Partition   Size
P1          700 KB
P2          400 KB
P3          525 KB
P4          900 KB
P5          350 KB
P6          625 KB

Fig.3.11: Scheduling example

Here we use the first-fit strategy to illustrate the problem. Of the 5 jobs, J2 arrives first; its size is 600 KB. Searching starts from low memory toward high memory and the first partition that is big enough is allocated. Here P1 is the first partition big enough for J2, so J2 is loaded into P1.
J1 is the next job in the ready queue. Its size is 825 KB; P4 is the first partition that is big enough, so J1 is loaded into P4.
J5 arrives next with a size of 650 KB. There is no free partition large enough, so J5 must wait until one becomes available.
J3 arrives next with a size of 1200 KB; there is no partition large enough for it either. J4 arrives last with a size of 450 KB; partition P3 is large enough, so J4 is loaded into P3.

Fig.3.12: Memory allocation


Partitions P2, P5, and P6 are completely free; no process occupies them. This wasted memory is external fragmentation: the total external fragmentation is 400 + 350 + 625 = 1375 KB. The total internal fragmentation is (700 - 600) + (525 - 450) + (900 - 825) = 250 KB.

c. Dynamic multiple partitioning


In this method partitions are created dynamically, so that each process is loaded into a partition of exactly the same size as that process. Initially the entire user space is treated as a single partition, one "big hole". Partition boundaries change dynamically, depending on the sizes of the processes. Consider the following example.

The job queue is:

Job   Size      Arrival time (ms)
J1    825 KB    10
J2    600 KB    5
J3    1200 KB   20
J4    450 KB    30
J5    650 KB    15

Job J2 arrives first, so J2 is loaded into memory first. Next J1 arrives and is loaded. Then J5, J3, and J4 are loaded into memory in turn. Consider figure 3.14 for better understanding.

Fig.3.14: Dynamic Memory allocation

In figures (a), (b), (c), and (d), jobs J2, J1, J5, and J3 are loaded. The last job is J4, whose size is 450 KB, but the available memory is only 225 KB, which is not enough, so J4 must wait until memory is freed. Assume that after some time J5 finishes and releases its memory. The available memory then becomes 225 + 650 = 875 KB, which is enough to load J4. Consider figure 3.14 (e) and (f).

Fig. 3.14: Dynamic memory allocation

In this method partition boundaries change dynamically, so the scheme does not suffer from internal fragmentation, and efficient memory and processor utilization are possible. It does, however, suffer from external fragmentation.
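A minimal sketch of dynamic partitioning with a first-fit hole list (sizes in KB; the 3500 KB user space and job sizes follow the example above):

    holes = [(0, 3500)]   # (start, length) of each free hole

    def allocate(size):
        for i, (start, length) in enumerate(holes):
            if length >= size:
                if length == size:
                    holes.pop(i)                              # hole consumed exactly
                else:
                    holes[i] = (start + size, length - size)
                return start                                  # base of the new partition
        return None                                           # job must wait

    for job, size in [("J2", 600), ("J1", 825), ("J5", 650), ("J3", 1200)]:
        print(job, "->", allocate(size))
    print("J4 ->", allocate(450))   # only 225 KB remain, so J4 waits (None)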

Compaction:
Compaction is a method of collecting all the free spaces together into one block, which can then be allotted to some other job.
For example, consider the fixed variable multiple partitioning example above. The total internal fragmentation is 100 + 75 + 75 = 250 KB, and the total external fragmentation is 400 + 350 + 625 = 1375 KB. Collecting the internal and external fragmentation together into one block gives 250 + 1375 = 1625 KB; this mechanism is called compaction. The compacted memory is 1625 KB.
The scheduler can now load job J3 (1200 KB) into the compacted memory. Thus efficient memory utilization is possible using compaction.
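A hedged sketch of compaction: slide every allocated block down to low memory so the free space coalesces into one hole (the block addresses here are illustrative, not taken from the figure):

    allocated = [(500, 600, "J2"), (1400, 825, "J1"), (2500, 450, "J4")]  # (start, size, job)

    def compact(blocks, base=500):
        new_layout, addr = [], base
        for _, size, job in sorted(blocks):
            new_layout.append((addr, size, job))   # block moved down to addr
            addr += size
        return new_layout, addr                    # one free hole starts at addr

    layout, hole_start = compact(allocated)
    print(layout)       # jobs packed back-to-back from 500
    print(hole_start)   # single compacted free region begins here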

Fig. 3.15: Compaction

Paging:
The single and multiple partitioning methods support only contiguous memory allocation: the entire process is loaded into one partition. In paging, the process is divided into a number of small parts that can be loaded anywhere in main memory. Paging is an efficient memory management scheme because it is a non-contiguous memory allocation method.
Physical memory is divided into fixed-size blocks called frames; the logical address space (the user process) is divided into fixed-size blocks called pages. The page size and the frame size must be equal; the size depends on the operating system. A typical page size is 4 KB.
In this method the operating system maintains a data structure called the page table, which is used for mapping. The page table records useful information: which frames are allocated, which frames are available, how many total frames there are, and so on. A basic page table has two fields, the page number and the frame number. Each operating system has its own method for storing page tables.
Every address generated by the CPU is divided into two parts: a page number and a page offset (displacement). The page number is used as an index into the page table.
Consider the following figure. The logical address space, that is, the CPU-generated address space, is divided into pages, each having a page number (P) and a displacement (D). The pages are loaded into available free frames in physical memory. The page table contains the base address of each page in physical memory; that address is combined with the offset to form the physical address.
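A minimal sketch of this translation, assuming a 4 KB page size and a hypothetical page-to-frame mapping:

    PAGE_SIZE = 4096
    page_table = {0: 5, 1: 9, 2: 2, 3: 11}   # page number -> frame number (assumed)

    def translate(logical_address: int) -> int:
        page, offset = divmod(logical_address, PAGE_SIZE)
        frame = page_table[page]              # a missing entry would mean a fault
        return frame * PAGE_SIZE + offset

    print(translate(5000))   # page 1, offset 904 -> 9*4096 + 904 = 37768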
For better understanding, consider the following example. There are two jobs in the ready queue; the size of job 1 is 16 KB and the size of job 2 is 24 KB. The page size is 4 KB. The available main memory is 72 KB, i.e. 18 frames. So job 1 is divided into 4 pages and job 2 is divided into 6 pages. Each process maintains its own page table. Consider the following figure.

Fig.3.16: Structure of paging scheme

Fig.3.17: Example of paging

The four pages of job 1 are loaded at different locations in main memory. The OS provides a page table for each process; the page table records each page's location in main memory. The capacity of main memory in this example is 18 frames, and the jobs require only 10 frames, so the remaining 8 frames are free and can be used for other jobs.

Advantages: Paging supports time-sharing systems, does not suffer from external fragmentation, and supports virtual memory.
Disadvantages: Paging may suffer from page breaks. For example, consider a job with a logical address space of 17 KB and a page size of 4 KB. This job requires 5 frames, but the fifth frame needs only 1 KB, so the remaining 3 KB is wasted; this waste is called a page break.
If the number of pages is large, the page table becomes difficult to maintain.

Shared Pages:
In a multiprogramming environment it is possible for a number of processes to share common code at the same time, instead of maintaining separate copies of the same code. The logical address space is divided into pages, and these pages can be shared by several processes at once. Pages shared by a number of processes are called shared pages.
For example, consider a multiprogramming environment with 10 users. Of these, 3 users wish to run a text editor on their own bio-data files. Assume the text editor requires 150 KB and each user's bio-data occupies 50 KB of data space. Without sharing, they would need 3 * (150 + 50) = 600 KB. With shared paging, the text editor is shared by all three users' jobs, so it requires 150 KB once, and the user files require 50 * 3 = 150 KB. Therefore (150 + 150) = 300 KB is enough to manage these three jobs instead of 600 KB; shared paging saves 300 KB.
The page size and frame size are 50 KB, so 3 frames are required for the text editor and 3 frames for the user files. Each process (P1, P2, P3) has a page table showing the frame numbers. The first 3 frame numbers in each page table, i.e. 3, 8, 0, are common, meaning the three processes share those three pages. The main advantage of shared pages is efficient memory utilization.
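A small sketch of the sharing: three page tables whose editor entries point at the same frames (3, 8, 0, as in Fig. 3.18), while each data page gets its own frame (the data-frame numbers are assumptions):

    editor_frames = [3, 8, 0]            # one shared copy of the editor
    page_tables = {
        "P1": editor_frames + [1],       # private data frame per user (assumed)
        "P2": editor_frames + [4],
        "P3": editor_frames + [6],
    }

    frames_used = {f for table in page_tables.values() for f in table}
    print(len(frames_used))   # 6 frames (300 KB) instead of 12 (600 KB)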

Fig.3.18: Shared Paging

Segmentation:

A segment can be defined as a logical grouping of instructions, such as a subroutine, an array, or a data area. Every program is a collection of such segments, and segmentation is the technique for managing them. For example, consider the following figure.
Each segment has a name and a length. The address of a segment specifies both the segment name and the offset within the segment. For example, the segment named 'Main' has a length of 100 KB. The operating system searches the entire main memory for free space to load a segment. The mapping is done by the segment table, which has two entries per segment: the segment base and the segment limit.

Fig.3.19: Segmented address space

Fig. User's view of segmentation
Fig. Logical view of segmentation

The segment base contains the starting physical address where the segment resides in memory, and the segment limit specifies the length of the segment. The following figure shows the basic hardware for segmentation.

Fig.3.20: Basic segmentation hardware

The logical address consists of two parts: a segment number (s) and an offset, or displacement, within that segment (d). The segment number is used as an index into the segment table.
For example, consider the following figure, in which the logical address space (a job) is divided into four segments, numbered 0 to 3. Each segment has an entry in the segment table: the limit specifies the size of the segment and the base specifies the starting address of the segment. Here segment 0 is loaded into main memory from 1500 KB to 2500 KB, so the base is 1500 KB and the limit is 2500 - 1500 = 1000 KB.
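A hedged sketch of this check-and-add hardware, using the segment 0 values from the example (sizes in KB):

    segment_table = {0: (1500, 1000)}   # segment -> (base, limit)

    def translate(segment: int, offset: int) -> int:
        base, limit = segment_table[segment]
        if offset >= limit:
            raise MemoryError("trap: offset beyond segment limit")
        return base + offset

    print(translate(0, 0))     # 1500, start of segment 0
    print(translate(0, 999))   # 2499, last valid address of segment 0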

Fig.3.21: Example of segmentation

Segmentation supports virtual memory. It eliminates fragmentation by moving segments around; fragmented memory space can be combined into a single free area.

Comparison between paging and segmentation:
Both schemes have advantages and disadvantages; sometimes paging is useful and sometimes segmentation is. Consider the comparison below.

Paging:
1. Main memory is partitioned into frames (blocks).
2. The logical address space is divided into pages by the compiler or memory-management unit (MMU).
3. Suffers from internal fragmentation, i.e. page breaks.
4. The operating system maintains a free-frame list and need not search for a free frame.
5. The operating system maintains a page map table for mapping between frames and pages.
6. Does not support the user's view of memory.
7. Multilevel paging is possible.

Segmentation:
1. Main memory is partitioned into segments.
2. The logical address space is divided into segments, as specified by the programmer.
3. Suffers from external fragmentation.
4. The operating system maintains the particulars of available memory.
5. The operating system maintains a segment map table for mapping.
6. Supports the user's view of memory.
7. Multilevel segmentation is also possible, but of little use.

Virtual Memory:
Virtual memory is a technique that allows the execution of a process even when its logical address space is greater than the available physical memory.

In other words, virtual memory allows the execution of a process that may not be completely in memory. The main advantage of this scheme is that programs can be larger than physical memory.

Here, an entire program is not loaded into memory at once. Instead, the program is divided into parts, and the required part is loaded into main memory for execution. This is not visible to the programmer: the programmer believes a large main memory is available, but in reality the system contains a small main memory and a large amount of auxiliary (secondary) memory.
How are programs loaded into main memory if main memory is smaller than the logical address space? The answer is swapping. For example, suppose the program size (logical address space) is 15 MB but the available memory is 12 MB. Then 12 MB is loaded into main memory and the remaining 3 MB is kept in secondary memory. When that 3 MB is needed for execution, 3 MB is swapped out of main memory to secondary memory and the needed 3 MB is swapped in from secondary memory to main memory.

The advantages of virtual memory are that efficient main-memory utilization is possible and that programs can be loaded partially into main memory, so more programs can run at the same time; efficient CPU utilization and better throughput follow. With virtual memory the programmer no longer needs to worry about the amount of physical memory available.

The address generated by the CPU is known as the logical address, and the address seen by the memory unit is known as the physical address. The set of all logical addresses is the logical address space and the set of all physical addresses is the physical address space. Logical addresses are also known as virtual addresses.

Virtual memory can be implemented via demand paging.

Demand Paging:
Demand paging is an application of virtual memory. Virtual memory is a technique that allows the execution of a process even when its logical address space is greater than the physical address space. In this scheme a page is not loaded into main memory from secondary memory until it is needed; that is, a page is loaded into main memory on demand. Hence the scheme is called demand paging.

For example, assume the logical address space is 72 KB and the page and frame size is 8 KB, so the logical address space is divided into 9 pages, numbered 0 to 8. The available memory is 40 KB, i.e. 5 frames, so 5 pages are loaded into main memory and the remaining 4 pages stay on the secondary storage device. Whenever those pages are required, the operating system swaps them into main memory.

In demand paging the page map table has three fields: the page number, the frame number, and a valid/invalid bit. If a page resides in main memory, its valid/invalid bit is set to 'valid'; if the page resides in secondary storage, the bit is set to 'invalid'.
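A minimal sketch of the valid/invalid check (the page and frame numbers here are illustrative, not the figure's):

    page_table = {0: (2, True), 1: (None, False), 2: (7, True)}  # page -> (frame, valid)

    def handle_page_fault(page: int) -> int:
        frame = 4                          # assume a free frame is available
        page_table[page] = (frame, True)   # page swapped in and marked valid
        return frame

    def access(page: int) -> int:
        frame, valid = page_table[page]
        if not valid:
            frame = handle_page_fault(page)   # invalid bit -> page fault
        return frame

    print(access(0))   # 2: valid, no fault
    print(access(1))   # 4: was invalid, serviced by a page fault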

In figure 3.24, pages 1, 3, 4, and 6 are kept in secondary memory, so their bits are set to invalid; the remaining pages reside in main memory, so their bits are set to valid. There are 5 free frames in main memory, so 5 pages are loaded into it.

Fig.3.24: Demand Paging

Page fault:
When the processor needs to execute a particular page and that page is not available in main memory, the situation is called a 'page fault'. When a page fault happens and no free frame is available, page replacement is needed.

Page Replacement:
Page replacement means selecting a victim page in main memory and replacing it with the required page from the backing store (disk). The victim page is selected by a page replacement algorithm.
Page replacement algorithms:
Several page replacement algorithms are in use:
1. FIFO Page Replacement Algorithm
2. Optimal Page Replacement Algorithm
3. LRU Page Replacement Algorithm

First-In-First-Out Page Replacement Algorithm


The FIFO algorithm associates with each page the time when it was brought into main memory.
FIFO states that: "When a page must be replaced, the oldest page is chosen."
 We can create a FIFO queue to hold all pages in memory.
 We replace the page at the head of the queue.
 When a page is brought into memory, we insert it at the tail of the queue.

Example: Consider the reference string below and a memory with three frames:
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1

 The first 3 references (7, 0, 1) cause 3 page faults and are brought into the three empty frames.
 The next reference (2) replaces page 7, because page 7 was brought in first.
 Since 0 is the next reference and 0 is already in memory, there is no fault for this reference.
 The first reference to 3 replaces page 0, since page 0 is now first in line. Because of this replacement, the next reference to 0 will fault, and page 1 is then replaced by page 0.
 By the end, there are fifteen page faults altogether.
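A short simulation of FIFO replacement on this reference string, reproducing the 15 faults:

    from collections import deque

    def fifo_faults(refs, nframes):
        frames, queue, faults = set(), deque(), 0
        for page in refs:
            if page not in frames:
                faults += 1
                if len(frames) == nframes:
                    frames.remove(queue.popleft())   # evict the oldest page
                frames.add(page)
                queue.append(page)
        return faults

    refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
    print(fifo_faults(refs, 3))   # 15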

Problem: Belady's Anomaly

Belady's anomaly states that the page-fault rate may increase as the number of allocated frames increases. FIFO can exhibit this anomaly; it is avoided by using the optimal replacement algorithm.
Optimal Page Replacement Algorithm (OPT Algorithm)
OPT states that: “Replace the page that will not be used for the longest period of
time”.
 It will never suffer from Belady’s anomaly.
 OPT guarantees the lowest possible page fault rate for a fixed number of frames.
Example: Consider the reference string below and a memory with three frames:

7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1

 The first 3 references cause faults that fill the 3 empty frames.
 The reference to page 2 replaces page 7, because page 7 will not be used until reference 18, whereas page 0 will be used at reference 5 and page 1 at reference 14.
 The reference to page 3 replaces page 1, because page 1 will be the last of the three pages in memory to be referenced again.
 In the end there are only 9 page faults using the optimal replacement algorithm, which is much better than the FIFO algorithm's 15 page faults.
Note: No replacement algorithm can process this reference string in 3 frames with fewer than 9 faults.
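A sketch of the optimal policy: on a fault with full frames, evict the resident page whose next use lies farthest in the future (or never occurs again):

    def opt_faults(refs, nframes):
        frames, faults = set(), 0
        for i, page in enumerate(refs):
            if page not in frames:
                faults += 1
                if len(frames) == nframes:
                    def next_use(p):
                        future = refs[i + 1:]
                        return future.index(p) if p in future else float("inf")
                    frames.remove(max(frames, key=next_use))   # farthest next use
                frames.add(page)
        return faults

    refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
    print(opt_faults(refs, 3))   # 9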

Problem with Optimal Page Replacement algorithm


The optimal page-replacement algorithm is difficult to implement, because it requires
future knowledge of the reference string. The optimal algorithm is used mainly for
comparison studies (i.e. performance studies).

LRU Page Replacement Algorithm
In the LRU algorithm, "the page that has not been used for the longest period of time is replaced"; that is, we use the recent past as an approximation of the near future.
LRU replacement associates with each page the time of that page's last use.
Example: Consider the reference string below and a memory with three frames:

7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1

 The first five faults are the same as those for optimal replacement.
 When the reference to page 4 occurs, LRU replacement sees that, of the three frames in memory, page 2 was used least recently.
 Thus, the LRU algorithm replaces page 2, not knowing that page 2 is about to be used.
 When it then faults for page 2, the LRU algorithm replaces page 3, since page 3 is now the least recently used of the three pages in memory.
 The total number of page faults with LRU is 12, which is fewer than FIFO's 15.
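A sketch of LRU using an ordered dictionary as a recency stack (the front entry is the least recently used and is the eviction victim):

    from collections import OrderedDict

    def lru_faults(refs, nframes):
        frames, faults = OrderedDict(), 0
        for page in refs:
            if page in frames:
                frames.move_to_end(page)          # refresh recency on a hit
            else:
                faults += 1
                if len(frames) == nframes:
                    frames.popitem(last=False)    # evict least recently used
                frames[page] = True
        return faults

    refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
    print(lru_faults(refs, 3))   # 12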

In this way both processes share the available frames according to their "needs," rather than equally.

THRASHING:
 As the number of processes submitted to the CPU for execution increases, CPU utilization also increases. But if processes keep being added, then beyond a certain point CPU utilization falls sharply (the CPU is overloaded), sometimes reaching zero. This situation is called "thrashing".
 The same concept applies to paging. Suppose main memory initially holds 5 jobs and CPU utilization is 0.6. Add 5 more jobs and utilization rises to 0.8. Add another 5 jobs, and the page-fault rate becomes so high that CPU utilization drops suddenly to 0.1 or 0.2, sometimes approaching zero. This unexpected situation is 'thrashing'.

Consider the curve of CPU utilization versus degree of multiprogramming, which shows how thrashing occurs:
 As the degree of multiprogramming increases, CPU utilization also increases
until a maximum is reached.
 If the degree of multiprogramming is increased even further then thrashing
occurs and CPU utilization drops sharply.
 At this point, we must stop thrashing and increase the CPU utilization by
decreasing the degree of multiprogramming.

Translation Look-aside Buffer (TLB):

Whenever the processor needs to access a particular page, it first searches the translation look-aside buffer (TLB). The TLB acts like a cache of page-table entries that have been recently or frequently used. If the desired page-table entry is present in the TLB (a 'TLB hit'), the frame number is retrieved and the real address is formed. If the desired page-table entry is not present in the TLB (a 'TLB miss'), the processor searches the page map table for the corresponding entry. If the valid/invalid bit is set to 'valid', the page is in main memory, and the processor retrieves the frame number from the page-table entry to form the real address. If the bit shows 'invalid', the desired page is not in main memory and a 'page fault' occurs; the operating system then loads the desired page into main memory from secondary memory.
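A hedged sketch of this lookup path (the page size, frame numbers, and table contents are illustrative assumptions):

    PAGE_SIZE = 4096
    tlb = {2: 7}                                                  # small cache: page -> frame
    page_table = {0: (5, True), 1: (None, False), 2: (7, True)}  # page -> (frame, valid)

    def translate(logical_address: int) -> int:
        page, offset = divmod(logical_address, PAGE_SIZE)
        if page in tlb:                              # TLB hit: fast path
            frame = tlb[page]
        else:                                        # TLB miss: walk the page table
            frame, valid = page_table[page]
            if not valid:
                raise MemoryError("page fault: OS loads the page from disk")
            tlb[page] = frame                        # cache the entry for next time
        return frame * PAGE_SIZE + offset

    print(translate(2 * PAGE_SIZE + 100))   # TLB hit on page 2
    print(translate(10))                    # TLB miss, page table hit on page 0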

