
STORAGE MANAGEMENT

Memory Management
Memory management deals with the methods through which memory in a
computer system is managed. The scheme used to manage memory depends upon
the hardware design of the system.
Memory is a large array of words or bytes, each with its own address. Data is
read from or written to memory by referring to these addresses. An instruction
execution cycle fetches an instruction from memory; the instruction is then decoded
and may cause operands to be fetched from memory. When the instruction has been
executed completely, results may be stored back in memory. In this section we will
see how the operating system manages memory.

Swapping
A process must be in memory to be executed. A process in memory can
also be swapped out of memory temporarily to a backing store, e.g. a hard disk, and
then brought back into memory for continued execution. For example, when a time
quantum expires under the Round Robin scheduling algorithm, the memory manager
can swap out the process that has just used its quantum and swap in another process.
Similarly, in priority-based scheduling, when a higher-priority process arrives and
wants service, the memory manager swaps out a lower-priority process in order to load
and execute the higher-priority process. When the high-priority process finishes
execution, the lower-priority process is swapped back in to continue. This variant of
swapping is also called roll out/roll in.

[Figure: Swapping of two processes using a disk as a backing store. Main memory
holds the OS and the user space; process P1 is swapped out to the backing store
while process P2 is swapped in.]

Normally a swapped-out process is swapped back into the same memory space that it
occupied previously. Whether this restriction is necessary depends upon address
binding. If binding is done at assembly or load time, the process cannot be moved
to a different location. If binding is done at execution time, then it is possible
to swap a process into a different memory location.
Swapping requires a backing store, which is commonly a fast disk. The disk
should be large enough to hold copies of the memory images of all users, and it
should provide direct access to these memory images. The operating system keeps
information about all processes that are in memory or on the backing store. When
the scheduler decides to execute a process, it calls the dispatcher. The dispatcher
checks whether there is a free memory region in which to place the process; if there
is none, the dispatcher swaps out a process currently in memory and swaps in the
desired process.
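The dispatcher logic just described can be sketched as follows. This is an illustrative model, not any real operating system's code: memory is simplified to a fixed number of process slots, and the choice of victim is arbitrary.

```python
def dispatch(process, memory, backing_store, capacity):
    """Place `process` in memory, swapping another process out if necessary.

    `memory` and `backing_store` are lists of process names; `capacity` is
    the number of processes that fit in memory (a deliberate simplification).
    """
    if process in memory:
        return                          # already resident; nothing to do
    if len(memory) >= capacity:         # no free region: swap a process out
        victim = memory.pop(0)
        backing_store.append(victim)    # copy the victim's image to disk
    if process in backing_store:        # swap in: copy the image back
        backing_store.remove(process)
    memory.append(process)
```

For example, with `memory = ["P1", "P2"]`, `backing_store = ["P3"]` and a capacity of 2, `dispatch("P3", ...)` swaps P1 out and brings P3 in.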

Single-Partition Allocation
The Single-Partition Allocation scheme divides memory into two partitions, one
for the user and the other for the operating system. The operating system may be kept
in either low or high memory; normally it is kept in low memory. When the operating
system is in low memory, user programs run in high memory, and we must protect the
operating system code from the user process. This protection is implemented using the
base and limit registers.
The value of the base register should be fixed during the execution of a program:
if user addresses are bound to physical addresses using the base, then changing the
base would make those addresses invalid. So if the base address must be changed, it
should only be changed before execution of the program begins.
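The base/limit protection described above can be sketched as a simple address check followed by relocation. The register values here are invented for illustration only.

```python
BASE = 400_000   # start of the user partition (assumed value)
LIMIT = 120_000  # size of the user partition (assumed value)

def translate(logical_address):
    """Relocate a user address by the base register, trapping if it
    exceeds the limit register (protecting the OS and other memory)."""
    if logical_address < 0 or logical_address >= LIMIT:
        raise MemoryError("addressing error: trap to the operating system")
    return BASE + logical_address
```

Every user reference is compared against the limit before the base is added, so a user process can never form an address inside the operating-system partition.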
Sometimes the size of the operating system changes. For example, a device driver
that is not in use still occupies memory; it is better to release that memory so
that other programs can use the space, but removing the device driver from memory
changes the size of the operating system.

[Figure: Two layouts for a 512K single-user system: the OS in low memory starting
at address 0 with the user program just above it, and the alternative of loading
the user program into high memory.]
So, in cases where the size of the operating system changes dynamically, we
load the user process into high memory, growing down toward the base register value,
rather than starting at the base register and growing up toward high memory. The
advantage is that all unused space lies in the middle, where both the user process
and the operating system can use it.

Multiple-Partition Allocation
In multiprogramming, more than one process is kept in memory for execution,
and the CPU switches rapidly between these processes. The problem is how to allocate
memory to the different processes waiting to be brought into memory.
A simple approach is to divide memory into fixed-size partitions, with each
partition holding exactly one process. When a partition becomes free, another
waiting process is brought into it for execution.
One problem with this solution is that the number of partitions bounds the
degree of multiprogramming. Another problem is choosing the partition size: if the
partitions are large, small processes waste space, and if they are small, large
programs cannot run. This scheme was used by the IBM OS/360 operating system, but it
is no longer in use.
Another technique is for the operating system to maintain a table indicating
which parts of memory are available and which are occupied. Initially, all memory is
available for user processes and is treated as one large block, also called a hole.
When a process needs memory, we search for a hole that is large enough for the
process. If such a hole is found, it is allocated to the process, and the rest of the
hole, i.e. the remaining free memory, stays available to satisfy the requests of
other processes or future requests.
Consider a situation where we have 2560K of memory and the operating system
occupies 400K of it, so 2160K is available for user processes. Processes P1 (600K),
P2 (1000K) and P3 (300K) are allocated memory in order, using the FCFS scheduling
algorithm.
Job queue:

Process   Memory   Time
P1        600K     10
P2        1000K    5
P3        300K     20
P4        700K     8
P5        500K     15

After the first three allocations, the OS occupies 0-400K, P1 occupies 400K-1000K,
P2 occupies 1000K-2000K and P3 occupies 2000K-2300K, leaving a hole from 2300K to
2560K.

So we have a hole of 260K that cannot be allocated to the next process, P4,
because its size is 700K. Using RR scheduling, process P2 terminates first and
releases its memory; process P4 is then given the 1000K hole freed by P2. P4 needs
only 700K, so we again have a free block of 300K. Similarly, after some time P1
completes, and P5 (500K) is given the 600K hole freed by P1, creating another hole
of 100K.

[Figure: Memory allocation using Multiple-Partition Allocation. Five snapshots show
the OS at 0-400K throughout: first P1 (400K-1000K), P2 (1000K-2000K) and P3
(2000K-2300K) are resident; then P2 terminates and P4 is allocated 1000K-1700K;
then P1 terminates and P5 is allocated 400K-900K, leaving holes of 100K, 300K
and 260K.]

Now we have a set of holes scattered throughout memory. When a process
needs memory, we search for a hole into which the process fits; if the hole is
larger than required, it is split, the process is placed in one part, and the
remainder becomes a new hole. When the process terminates, it releases its memory,
and that block becomes available again for new processes. This is an instance of the
dynamic storage-allocation problem, i.e. how to allocate memory to processes. In
practice the First-fit, Best-fit or Worst-fit algorithms are used to allocate a free
hole to a process.

First-Fit
Allocate the first hole that is big enough to hold the process that needs
memory.

Best-Fit
In Best-Fit, the entire list of holes is searched (unless the holes are kept
ordered by size) and the smallest hole that is big enough is allocated.

Worst-Fit
In Worst-Fit, the entire list of holes is searched (unless the holes are kept
ordered by size) and the largest hole is allocated.
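The three placement strategies can be sketched as follows. This is an illustrative implementation, with holes reduced to a plain list of sizes; each function returns the index of the chosen hole, or None if no hole is large enough.

```python
def first_fit(holes, request):
    """Allocate the first hole that is big enough."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Allocate the smallest hole that is big enough."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, request):
    """Allocate the largest hole."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return max(candidates)[1] if candidates else None
```

With holes of sizes [100, 500, 200, 300, 600] and a 212-unit request, first-fit picks the 500 hole, best-fit the 300 hole, and worst-fit the 600 hole: each strategy leaves a different leftover fragment.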

The problem with these algorithms is that they suffer from external
fragmentation. As processes are loaded into and removed from memory, the free space
is broken into little pieces. External fragmentation exists when there is enough
total free memory to satisfy a request, but the free memory is not contiguous: it is
fragmented into many small holes.

Compaction
Compaction is one solution to the problem of external fragmentation. In
compaction, memory contents are shuffled to place all free memory together in one
large block. Compaction is possible only if relocation is dynamic and is done at
execution time. If relocation is static and is done at load or assembly time,
compaction is not possible.
Algorithms have been developed that move processes in order to form one large
hole in memory. One technique is to move all processes toward one end of memory, so
that all the holes merge into one big hole at the other end. Another technique is to
move some processes to one end and others to the other end, forming one big hole in
the middle of memory.
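The first technique, sliding every resident process toward the low end of memory, can be sketched as follows. Processes are simplified to (name, base, size) tuples and the values are invented for illustration.

```python
def compact(processes, memory_size, os_size):
    """Slide all processes down toward the OS boundary, returning the
    relocated processes and the single (start, size) hole left on top."""
    next_base = os_size                    # first address above the OS
    relocated = []
    # Visit processes in order of their current base address and pack them.
    for name, _old_base, size in sorted(processes, key=lambda p: p[1]):
        relocated.append((name, next_base, size))
        next_base += size
    hole = (next_base, memory_size - next_base)   # one big hole at the top
    return relocated, hole
```

Applying this to the figure's original layout (OS 0-300K, P1 at 300K, P2 at 500K, P3 at 1000K, P4 at 1500K in a 2100K memory) merges the three scattered holes into a single 900K hole starting at 1200K.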

[Figure: Comparison of some different ways to compact memory. The original
allocation (OS at 0-300K, P1 at 300K-500K, P2 at 500K-600K, P3 at 1000K-1200K,
P4 at 1500K-1900K, with free holes totalling 900K) is compacted in three different
ways, each of which merges the free space into a single 900K hole while moving a
different amount of data.]

Paging
Another solution to external fragmentation is paging. Normally a process
needs contiguous space in memory, but paging permits a process's memory to be
non-contiguous, allowing memory to be allocated to a process wherever it is
available. Paging thus solves the problem of external fragmentation, and it is used
in many operating systems.

[Figure: Paging model of logical and physical memory. Logical memory consists of
pages 0-3; the page table maps page 0 to frame 1, page 1 to frame 4, page 2 to
frame 3 and page 3 to frame 7 of physical memory.]

In paging, physical memory is broken into fixed-size blocks called
“frames”, and logical memory is broken into blocks of the same size called “pages”.
When a process needs execution, its pages are loaded from the backing store into any
available frames. Every address generated by the CPU is divided into two parts:

i) Page number
ii) Page offset

The page number is used as an index into a page table, which contains the base
address of each page in physical memory. The page offset is combined with this base
address to form the physical memory address.
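When the page size is a power of two (the common case), the split into page number and offset falls out of the address bits directly. A minimal sketch, assuming a 4096-word page size chosen purely for illustration:

```python
PAGE_SIZE = 4096                            # assumed; must be a power of two
OFFSET_BITS = PAGE_SIZE.bit_length() - 1    # 12 offset bits for 4096

def split(logical_address):
    """Split a logical address into (page number, page offset):
    the high bits select the page, the low bits are the offset."""
    page_number = logical_address >> OFFSET_BITS
    page_offset = logical_address & (PAGE_SIZE - 1)
    return page_number, page_offset
```

For example, logical address 8195 is 2 × 4096 + 3, so it splits into page 2, offset 3.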

[Figure: Paging example for a 32-word memory with 4-word pages. Logical memory
holds the 16 words A-P on pages 0-3; the page table maps page 0 to frame 5, page 1
to frame 6, page 2 to frame 1 and page 3 to frame 2, so in physical memory words
I-L occupy addresses 4-7, M-P occupy 8-11, A-D occupy 20-23 and E-H occupy 24-27.]

Consider a system with a page size of 4 words and a physical memory of 32
words (8 frames). As an example, we will see how the user's view of memory is mapped
into physical memory.
In the diagram above, logical address 0 is page 0, offset 0. Indexing into
the page table, we find that page 0 is in frame 5, so logical address 0 maps to
physical address ((5 x 4) + 0) = 20.
Similarly, logical address 3 (page 0, offset 3) maps to physical address
((5 x 4) + 3) = 23. Logical address 4 is page 1, offset 0; according to the page
table, page 1 is mapped to frame 6, so logical address 4 maps to physical address
((6 x 4) + 0) = 24. Similarly, logical address 13 (page 3, offset 1, with page 3 in
frame 2) maps to physical address ((2 x 4) + 1) = 9.
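The translation worked through above can be sketched directly from the figure's page table (page to frame: 0 to 5, 1 to 6, 2 to 1, 3 to 2) and 4-word pages:

```python
PAGE_SIZE = 4
PAGE_TABLE = {0: 5, 1: 6, 2: 1, 3: 2}   # page -> frame, from the figure

def logical_to_physical(logical_address):
    """Map a logical address to a physical address: index the page table
    with the page number, then add the offset to the frame's base."""
    page_number, offset = divmod(logical_address, PAGE_SIZE)
    frame = PAGE_TABLE[page_number]
    return frame * PAGE_SIZE + offset
```

Running the addresses from the example through this function reproduces the mappings above: 0 maps to 20, 3 to 23, 4 to 24, and 13 to 9.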

