
CHAPTER Four:

Memory Management

Hawassa University

Compiled by: Kassawmar M.


January, 2024
1
Contents
 Introduction
 Overlays
 Memory management requirements
 Logical address VS physical address
 Swapping and partitions
 Memory management technique
 Memory allocation algorithm
 Paging and Segmentation
 Page Replacement Algorithm
 Direct Memory Access
 Working sets and thrashing
 Caching

2
Introduction
• The main purpose of a computer system is to execute programs.
These programs, together with the data they access, must be at least
partially in main memory during execution.
• To improve both the utilization of the CPU and the speed of its
response to users, a general-purpose computer must keep several
processes in memory.
• Memory (RAM) is an important resource that must be carefully
managed.
• Memory management is the functionality of an operating system
that handles or manages primary memory.
• RAM provides the working memory of the system; most
programmers would like an infinitely large, infinitely fast memory.
• More RAM is almost always better; even systems that run parallel
queries and populate full-text indexes benefit from more RAM.
3
Cont..
• Physical memory (RAM) is a form of very fast, but volatile, data
storage.
• Most computers have a memory hierarchy, ranging from very fast,
volatile, small storage to slow, non-volatile, large storage.
• Memory management is the functionality of the OS that manages
primary memory and moves processes back and forth between
main memory and disk during execution.
• Memory management keeps track of the status of each memory
block, whether it is allocated to a process or free for allocation.
• It checks how much memory is to be allocated to each process.
• It decides which process will get memory at what time.
4
Cont..
• If only a few processes can be kept in main memory, then much
of the time all processes will be waiting for I/O and the CPU will
be idle.
• Hence, memory needs to be allocated efficiently in order to pack
as many processes into memory as possible.
• Memory Manager
o Responsible for managing the memory hierarchy
o Keeps track of which parts of memory are in use and which are
free
o Manages swapping between main memory and disk when
RAM is too small to hold all processes
5
Overlays
• The main problem with fixed partitioning is that the size of a process
is limited by the maximum size of a partition: a process can never
span more than one partition.
• To solve this problem, a technique called overlays has been used.
• The idea behind overlays is that a running process does not use its
complete program at the same time; it uses only some part of it.
• Sometimes the size of a program is even larger than the size of the
biggest partition; in that case, overlays should be used.
• So, an overlay is a technique to run a program that is bigger than the
size of the physical memory by keeping in memory only those
instructions and data that are needed at any given time.
6
Cont..
• Divide the program into modules in such a way that not all
modules need to be in memory at the same time.
o Implemented by the user, with no special support from the OS;
the programming design of the overlay structure is complex.

Advantages
• Reduced memory requirement
• Reduced time requirement

Disadvantages
• The overlay map must be specified by the programmer
• The programmer must know the memory requirements
• Overlaid modules must be completely disjoint
• The programming design of an overlay structure is complex and not
possible in all cases
7
Basic Memory Management
 Memory management requirements:
• Relocation: a process may be swapped back into a different
memory location than the one it was swapped out of.
• Protection: prevent processes from interfering with each other.
• Sharing: allow several processes to access the same portion
of memory.
• Logical organization: almost invariably, main memory in a
computer system is organized as a linear, or one-dimensional,
address space consisting of a sequence of bytes or words.
Secondary memory, at its physical level, is similarly
organized.
• Physical organization: the organization of the flow of
information between main and secondary memory is a major
system concern.
8
Cont…
• Memory management provides protection by using two
registers: a base register and a limit register.
• The base register holds the smallest legal physical
memory address.
• The limit register specifies the size of the range.

9
Cont….
 A pair of base and limit registers define the logical address space.
• For example, if the base register holds 300000 and the limit
register is 120000, then the program can legally access all
addresses from 300000 up to 420000 (exclusive).

10
Logical address VS physical address
• During the execution of a process we have two kinds of addresses:
logical and physical.
• An address generated by the CPU is a logical address.
• The address actually seen by the memory unit is a physical
address.
• A logical address is also known as a virtual address.
• The set of all logical addresses generated by a program is referred
to as the logical address space.
• The set of all physical addresses corresponding to these logical
addresses is referred to as the physical address space.
11
Cont…
• The run-time mapping from virtual to physical addresses is done by
the memory management unit (MMU).
• The memory management unit (MMU) is the hardware that maps
virtual to physical addresses.
• Logical and physical addresses are the same in compile-time
and load-time address-binding schemes, but they differ in the
execution-time address-binding scheme.
• The value in the base register is added to every address generated
by a user process, which is treated as an offset at the time it is sent to
memory.
12
Cont…
• For example, if the base register value is 10000, then an attempt
by the user to use address location 100 will be dynamically
relocated to location 10100.
• The physical address is seen by the memory unit, and the user can
access it only indirectly.
• A physical address identifies the physical location of the required
data in memory.
• The user never deals directly with physical addresses but can
access data through the corresponding logical addresses.
• The user program generates logical addresses, but the program
needs physical memory for its execution.
• Therefore, logical addresses must be mapped to physical
addresses by the MMU before they are used.
13
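The base-and-limit translation described above can be sketched in a few lines. This is an illustrative model, not a real MMU interface; the function name and the `MemoryError` used for the protection trap are assumptions for the example.

```python
# Hypothetical sketch of MMU translation with base and limit registers.
# A logical address is treated as an offset: it must be smaller than the
# limit register, otherwise the MMU traps to the OS (modeled here as an
# exception). Names are illustrative, not from any real OS.

def translate(logical_address, base, limit):
    if logical_address >= limit:
        raise MemoryError("protection fault: address outside process space")
    return base + logical_address  # dynamic relocation

# The slide's example: base register = 10000, logical address 100
# is dynamically relocated to physical address 10100.
print(translate(100, base=10000, limit=120000))  # 10100
```

An out-of-range offset (for instance 120000 with the same limit) would raise the protection fault instead of producing an address.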
Cont.…
• The physical address space in a system can be defined as the size of
the main memory.
• It is important to compare the process size with the
physical address space.
• The process size must be less than the physical address space.
• Physical Address Space = Size of the Main Memory
14
Swapping
• Swapping is a mechanism in which a process can be swapped
temporarily out of main memory to a backing store, and vice versa.
• The backing store is usually a hard disk drive or other secondary
storage that is fast in access and large enough to accommodate
copies of all memory images for all users.
• Swapping makes it possible for the total physical address space of
all processes to exceed the real physical memory of the system,
thus increasing the degree of multiprogramming in a system.
• Standard swapping involves moving processes between main
memory and a backing store.
• The backing store must be large enough to accommodate copies of
all memory images for all users, and it must provide direct access
to these memory images.
16
Cont…
• The system maintains a ready queue consisting of all processes
whose memory images are on the backing store or in memory
and are ready to run.

• Whenever the CPU scheduler decides to execute a process, it


calls the dispatcher.

• The dispatcher checks to see whether the next process in the


queue is in memory.
• If it is not, and if there is no free memory region, the dispatcher
swaps out a process currently in memory and swaps in the
desired process.
• It then reloads registers and transfers control to the selected
process. 17
Cont…
• The major time-consuming part of swapping is transfer time.
• Total transfer time is directly proportional to the amount of
memory swapped.
• Let us assume that the user process is of size 100 KB and the
backing store is a standard hard disk with a transfer rate of 1 MB
(1000 KB) per second.
• The actual transfer of the 100 KB process to or from memory will
take:
100 KB / 1000 KB per second
= 1/10 second
= 100 milliseconds
18
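The arithmetic above generalizes directly; a small helper (illustrative, not part of any OS API) makes the proportionality to the amount of memory swapped explicit:

```python
# Swap transfer time is size / rate; the slide's example is 100 KB
# over a 1000 KB/s disk, giving 100 ms.

def transfer_time_ms(size_kb, rate_kb_per_s):
    return size_kb / rate_kb_per_s * 1000  # result in milliseconds

print(transfer_time_ms(100, 1000))  # 100.0 ms, matching the slide
print(transfer_time_ms(500, 1000))  # a 500 KB process takes 500.0 ms
```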
Schematic View of Swapping

Conditions for swapping
• The process to be executed must not be in memory
• There is not sufficient free memory

19
Memory management technique
• How do we allocate main memory to the various processes?
• The main memory must accommodate both the operating system
and the various user processes. Therefore, we need to allocate main
memory in the most efficient way.
• In this figure we have a set of processes waiting in an input queue:
they reside in secondary memory, waiting to be loaded into main
memory.

20
Cont…
• One portion of main memory is allocated to the operating system
(low memory) and the remainder is allocated to user processes
(held in high memory).
• Memory allocation is divided into two types: contiguous and
non-contiguous.
• In contiguous memory allocation we place each process in main
memory in a contiguous fashion.
• In non-contiguous memory allocation a process can occupy
non-contiguous memory.
21
Cont..
1. Contiguous memory allocation
• A classical memory allocation model that assigns a process
consecutive memory blocks.
• It is one of the oldest memory allocation schemes: when a process
needs to execute, memory is requested by the process.
• The size of the process is compared with the amount of contiguous
main memory available. If sufficient contiguous memory is found,
the process is allocated that memory and starts its execution;
otherwise, it is added to a wait queue until sufficient free
contiguous memory is available.
• There are two memory management techniques within
contiguous memory allocation: fixed and dynamic
partitioning.
22
Cont…
a. Fixed Partitioning
• Partitions are created before processes request memory space.
• This is a memory management technique in which any process whose
size is less than or equal to a partition size can be loaded into the
partition.
• If all partitions are occupied, the operating system can swap a process
out of a partition.
• A program may be too large to fit in a partition.
• The programmer must then design the program with overlays.
• Main memory is partitioned into a set of non-overlapping regions
called partitions.
• Partitions can be of equal or unequal sizes.
23


Cont.…
• Main memory use is inefficient: any program, no matter how
small, occupies an entire partition. This is called internal
fragmentation.
• Internal fragmentation – allocated memory may be slightly larger
than the requested memory; this size difference is memory internal
to a partition that is not being used.
• Unequal-size partitions lessen these problems, but they still remain.

24
Cont…
i. Equal-size partitions
• If there is an available partition, a process can be loaded into that
partition; because all partitions are of equal size, it does not matter
which partition is used.
o If all partitions are occupied by blocked processes, choose one
process to swap out to make room for the new process: the
operating system can swap a process out of a partition if none are in
a ready or running state.

ii. Unequal-size partitions
• Assign each process to the smallest partition within which it
will fit.
• Processes are assigned in such a way as to minimize wasted
memory within a partition.
• When it is time to load a process into main memory, the smallest
available partition that will hold the process is selected.
25
Cont…
b. Dynamic Partitioning
• Partitions are created dynamically during the allocation of
memory.
• Unlike fixed partitioning, in dynamic partitioning partitions are
created when a process requests memory space.
• The size and number of partitions vary throughout the operation of
the system.
• Each process is allocated exactly as much memory as it requires.
• It leads to a situation in which there are many small useless holes
in memory. This is called external fragmentation.
• External fragmentation – total memory space exists to satisfy a
request, but it is not contiguous.
• The solution to external fragmentation is compaction.
• Compaction: combining the multiple useless holes into one big
hole (block).
26
Cont…
• Compaction: combining the multiple useless holes into one big
hole to reduce external fragmentation.

Before compaction:          After compaction:
0       OS                  0       OS
400K    P5                  400K    P5
900K    hole (100K)         900K    P4
1000K   P4                  1600K   P3
1700K   hole (300K)         1900K   hole (660K)
2000K   P3                  2560K
2300K   hole (260K)
2560K

27
Cont…
• External fragmentation happens when there is a sufficient quantity
of space within the memory to satisfy the memory request of a
process.
• However, the process's memory request cannot be fulfilled
because the available memory is non-contiguous.
• Internal fragmentation happens when memory is split into
fixed-sized blocks. Whenever a process requests memory, a
fixed-sized block is allotted to the process.
• In the case where the memory allotted to the process is somewhat
larger than the memory requested, the difference between the
allotted and requested memory is called internal fragmentation.

28
Memory allocation (Placement) algorithm
• When a new process is created or swapped in, the operating system
must decide which free block to allocate to it.
• Goal: to reduce the use of compaction (which is time consuming).
• Possible algorithms to allocate free space for a newly created
process are:
First-fit algorithm
o Scans memory from the beginning and chooses the first
available block that is large enough
o The simplest and fastest algorithm
Next-fit
o Begins to scan memory from the location of the last
placement, and chooses the next available block that is large
enough
o More often allocates a block of memory at the end of memory,
where the largest block is found
29
Cont…
• Best-fit algorithm
o Chooses the block that is closest in size to the request
o Since the smallest suitable block is found for the process, the
smallest amount of fragmentation is left
o Memory compaction must be done more often

• Worst-fit algorithm
o Scans all holes for the largest available hole
o Chooses the largest hole; must also search the entire list
o Produces the largest leftover hole

30
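The placement algorithms above can be sketched by modeling free memory as a list of hole sizes. This is a minimal illustration under that simplified model (function names and the hole-list representation are assumptions, not a real allocator interface); each function returns the index of the chosen hole, or `None` if no hole is large enough.

```python
# Placement algorithms over a list of free-hole sizes (in KB).

def first_fit(holes, request):
    # Scan from the beginning; take the first hole large enough.
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    # Choose the hole closest in size to the request.
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, request):
    # Choose the largest hole (leaves the largest leftover hole).
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return max(candidates)[1] if candidates else None

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))  # 1 -> the 500K hole
print(best_fit(holes, 212))   # 3 -> the 300K hole
print(worst_fit(holes, 212))  # 4 -> the 600K hole
```

Note how the three policies pick different holes for the same 212K request, which is exactly the trade-off the slides describe.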
Paging and segmentation
• Paging is a technique in which memory is broken into
blocks of the same size called pages.
• Each page (block) is a contiguous range of addresses.
• These pages are mapped onto physical memory.
• The virtual address space is divided into fixed-size units called
pages.
• The corresponding blocks of physical memory are called page
frames.
• Pages and page frames have the same size.
• When a process is to be executed, its pages are
loaded into any available memory frames.
31
Cont…
• Paging allows a process to be stored in memory in a non-
contiguous manner.
• Storing a process in a non-contiguous manner solves the problem of
external fragmentation.
o Partition memory into small equal fixed-size chunks and divide each
process into chunks of the same size.
• The partitions of secondary memory are known as pages, and the
partitions of main memory are known as frames.
• Paging is a memory management method used to fetch
processes from secondary memory into main memory in
the form of pages.
32
Cont…
• To implement paging, the physical and logical memory
spaces are divided into the same fixed-size blocks.
• These fixed-size blocks of physical memory are called frames,
and the fixed-size blocks of logical memory are called pages.
• When a process needs to be executed, its pages are loaded from
the logical memory space into the frames of the physical
memory address space.
• The address generated by the CPU is divided into
two parts: page number and page offset.
33
Cont…
o Page number (p) – used as an index into a page table, which contains
the base address of each page in physical memory
o Page offset (d) – combined with the base address to define the physical
memory address that is sent to the memory unit
• The page table uses the page number as an index; each process has its
own page table that maps logical addresses to physical addresses.
• The page table contains the base address of the page stored in the frame
of physical memory space.
• The base address defined by the page table is combined with the page
offset to define the physical memory address where the page is stored.
• The operating system maintains a page table for each process
o It contains the frame location for each page in the process
o A memory address consists of a page number and an offset within the page
34
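The p/d split described above can be sketched as follows. This is a hedged model, not real MMU hardware: the 4 KB page size, the function name, and the page table as a plain Python list are all assumptions for the example.

```python
# Paging address translation: split a logical address into page
# number p and offset d, look up the frame, and recombine.

PAGE_SIZE = 4096  # assumed 4 KB pages

def translate(logical_address, page_table):
    p = logical_address // PAGE_SIZE   # page number (index into the table)
    d = logical_address % PAGE_SIZE    # page offset
    frame = page_table[p]              # frame holding this page
    return frame * PAGE_SIZE + d       # physical address sent to memory

# Assume page 0 is in frame 5 and page 1 is in frame 2.
page_table = [5, 2]
print(translate(100, page_table))       # 5*4096 + 100 = 20580
print(translate(4096 + 7, page_table))  # 2*4096 + 7  = 8199
```

With a power-of-two page size, real hardware performs the divide/modulo as a simple bit split, which is why page sizes are powers of two.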
Implementation of Page Table
• The page table is kept in main memory.
• The page-table base register (PTBR) points to the page table.
• The page-table length register (PTLR) indicates the size of the page
table.
• In this scheme every data/instruction access requires two memory
accesses: one for the page table and one for the data/instruction.
• The two-memory-access problem can be solved by the use of a
special fast-lookup hardware cache called associative memory or
translation look-aside buffer (TLB).
• Some TLBs store address-space identifiers (ASIDs) in each TLB
entry – an ASID uniquely identifies each process, providing
address-space protection for that process.
35
Segmentation
• Segmentation is another non-contiguous memory allocation
scheme, like paging.
• The process is divided into variable-size segments, which make up the
logical memory address space.
• Similar to dynamic partitioning, the logical address space is the
collection of variable-size segments.
• Each segment has its own name and length.
• For execution, the segments from the logical memory space are loaded into
the physical memory space.
• A program can be subdivided into segments; segments may vary in length,
and there is a maximum segment length.
• The address generated by the CPU is divided into:
o Segment number (s) – used as an index into a segment table, which contains
the base address of each segment in physical memory and the limit of the segment.
o Segment offset (o) – the segment offset is first checked against the limit and then
combined with the base address to define the physical memory address.
36
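Segment translation can be sketched the same way as paging, except the table holds a (base, limit) pair and the offset must pass the limit check. The table values and the `MemoryError` trap are illustrative assumptions.

```python
# Segmentation address translation with a per-segment limit check.

segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]  # (base, limit)

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        # Offset beyond the segment's length: trap to the OS.
        raise MemoryError("segmentation fault: offset exceeds limit")
    return base + offset

print(translate(2, 53))   # 4300 + 53  = 4353
print(translate(0, 999))  # 1400 + 999 = 2399
```

An offset of, say, 500 into segment 1 (limit 400) would raise the fault rather than compute an address, which is exactly the protection the limit field provides.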
Page Replacement Algorithm
• When a page fault occurs, the operating system has to choose a
page to remove from memory to make room for the page that has
to be brought in.
• Some of the page replacement algorithms are:
The optimal page replacement algorithm
The not recently used (NRU) page replacement algorithm
The FIFO page replacement algorithm
The second chance page replacement algorithm
The clock page replacement algorithm
Least recently used (LRU)

37
Cont…
The optimal page replacement algorithm
• It is easy to describe but impossible to implement.
• At the moment the page fault occurs, each page can be
labelled with the number of instructions that will be
executed before that page is next referenced.
• The page with the highest label is removed.

38
Cont…
 The not recently used (NRU) page replacement algorithm
• The OS is responsible for collecting statistics about which pages are
being used.
• Most computers supporting virtual memory keep two status bits per page:
R – set whenever the page is referenced
M – set whenever the page is modified
• The OS divides all pages into four categories based on the current
values of their R and M bits:
Class 0: not referenced, not modified
Class 1: not referenced, modified
Class 2: referenced, not modified
 Class 3: referenced, modified
• The NRU (Not Recently Used) algorithm removes a page at random
from the lowest-numbered non-empty class.
39
Cont…
The FIFO page replacement algorithm
• Maintain a linked list of all pages in the order they
came into memory.
• The page at the beginning of the list (the oldest) is replaced.
• May replace frequently used pages.

40
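The FIFO policy above can be simulated in a few lines: pages enter a queue in arrival order, and on a fault with all frames full, the page at the head (the oldest) is evicted. The function name and reference string are illustrative.

```python
from collections import deque

# FIFO page-replacement simulation: returns the number of page faults
# for a reference string and a given number of frames.

def fifo_faults(reference_string, num_frames):
    frames = deque()
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()   # evict the oldest page
            frames.append(page)
    return faults

print(fifo_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))  # 7 page faults
```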
Cont…
 The Second Chance Page Replacement Algorithm
• Avoids the problem of throwing out a heavily used page
by inspecting the R bit of the oldest page.
• If it is 0, the page is both old and unused, so it is
replaced immediately.
• If the R bit is 1, the bit is cleared and the page is put at
the end of the list of pages.
• Then the search continues.

41
Cont…
 The Clock Page Replacement Algorithm
• Keeps all page frames on a circular list, with a "clock hand"
pointing to the oldest page.
• When a page fault occurs, the page the hand points to is inspected:
if its R bit is 0 the page is evicted, otherwise R is cleared and the
hand advances to the next page.
42
Cont…
Least Recently Used (LRU) Page Replacement
• Assumes pages used recently will be used again soon.
• It throws out the page that has been unused for the longest
period of time.
• Keeps a linked list of pages ordered by the time they were
last accessed.

43
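LRU can be sketched with an ordered dictionary standing in for the linked list: every access moves the page to the most-recently-used end, and on a fault with full frames the least-recently-used end is evicted. The function name and reference string are illustrative.

```python
from collections import OrderedDict

# LRU page-replacement simulation: returns the number of page faults.

def lru_faults(reference_string, num_frames):
    frames = OrderedDict()
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)        # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = True
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))  # 6 page faults
```

On the same reference string as the FIFO example, LRU faults less often because the recently reused page 0 is kept resident.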
Virtual memory
• Virtual memory – a technique that allows a process to execute even when
the available main memory is smaller than the process size.
• Virtual memory is a memory management capability of the OS that uses
hardware and software to allow a computer to compensate for physical
memory shortages by temporarily transferring data from RAM to disk
storage.
• Virtual memory is the memory management technique of modern
operating systems.
• Virtual memory provides many benefits:
Only part of the program needs to be in memory for execution
Logical address space can be much larger than physical address
space
Allows address spaces to be shared by several processes
Allows for more efficient process creation
Higher transfer bandwidth: the CPU can do other things during a
transfer
More consistent response times (no interrupt or polling overhead)
45
Direct Memory Access (DMA)
• DMA refers to the ability of an I/O device to transfer data directly
to and from memory without going through the CPU.
• That is, it is a method that allows an I/O device to send or
receive data directly to or from main memory, bypassing the
CPU to speed up memory operations.
• The process is managed by a chip known as a DMA controller
(DMAC).
• The DMA controller performs functions that would normally be
carried out by the processor: for each word, it provides the memory
address and all the control signals.
• To transfer a block of data, it increments the memory addresses and
keeps track of the number of transfers.
• A DMA controller can transfer a block of data between an external
device and memory without any intervention from the
processor.
46
Working Set and Thrashing
• The working set stands for the parts of memory that the current
algorithm is using; it is determined by which parts of memory the
CPU actually accesses.
• It is completely automatic: if you are processing an array and storing
the results in a table, the array and the table are your working set.
• The working set is a convenient way to describe the memory you want
kept resident.
• If it is small enough, it can all fit in the cache and your algorithm will run
very fast.
• At the OS level, the kernel has to tell the CPU where to find the physical
memory your application is using every time you access a new page, so
you also want to avoid that hit as much as possible.
• The set of pages that a process is currently using is its working set.
• If the entire working set is in memory:
o The process will run without causing many faults
47
Cont…
• If the available memory is too small to hold the entire
working set:
o The process will cause many page faults
o It will run slowly
• A program causing page faults every few instructions is
said to be thrashing.
• During thrashing, the CPU spends less time on
actual productive work and more time swapping.

48
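One common way to make the working set concrete is to define it over a window of the last k memory references, as sketched below. The window-based definition, function name, and reference string are illustrative assumptions; a real kernel approximates this with reference bits rather than recording every access.

```python
# Working set at time t: the set of distinct pages referenced in the
# last `window` accesses of the reference string.

def working_set(reference_string, t, window):
    start = max(0, t - window + 1)
    return set(reference_string[start:t + 1])

refs = [1, 2, 1, 3, 4, 4, 4, 2]
print(working_set(refs, t=6, window=4))  # {3, 4}: pages in the last 4 refs
```

If the number of frames given to the process drops below the size of this set, every few references touch an evicted page and the process starts thrashing.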
Cache
• Fast memory placed between the CPU and main memory.
• When some data is requested, the cache is first checked to see
whether it contains that data.
o If the data is already in the cache (a cache hit), it can be retrieved
from the cache, which is much faster than retrieving it from the
original storage location.
o If the requested data is not in the cache (a cache miss), the data
needs to be fetched from the original storage location into the
cache, which takes longer.
• Caching is used in different places:
• In the CPU, caching is used to improve performance by
reducing the time taken to get data from main memory.
• In web browsers, web caching is used to store responses from
previous visits to web sites, in order to make the next visits faster.
49
Storage structure
Going down the hierarchy
• Decreasing cost per bit
• Increasing capacity
• Increasing access time

50
