
UNIT 4

Introduction to Cache Memory

• Main memory is slow.


• Cache is a small, high-speed memory that creates the illusion of a fast main memory.
• It stores data from frequently used addresses of main memory.
• Data in primary memory can be accessed faster than data in secondary memory, but primary-memory access times are still much longer than the nanosecond speeds at which the CPU performs operations.
Key Features of Cache Memory


Speed: Faster than the main memory (RAM), which helps the
CPU retrieve data more quickly.

Proximity: Located very close to the CPU, often on the CPU chip
itself, reducing data access time.

Function: Temporarily holds data and instructions that the CPU is
likely to use again soon, minimizing the need to access the slower
main memory.
Role of Cache Memory


Cache memory plays a crucial role in computer systems.

It provides faster access to frequently used data.

It acts as a buffer between the CPU and main memory (RAM).

Its primary role is to reduce the average time taken to access data, thereby improving overall system performance.
Working of Cache Memory


Cache memory is fast: its contents can be accessed very quickly.

Cache memory is small: it cannot store a large amount of data.

Whenever the CPU needs data, it first searches the cache (a fast operation). If the data is found, the CPU processes it according to the instructions. If the data is not found in the cache, the CPU fetches it from primary memory (a slower operation) and loads it into the cache. This way, frequently accessed data is usually found in the cache, which minimizes the time required to access it.
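A minimal sketch of this lookup flow, using a Python dictionary as a stand-in for the cache (the names cache, main_memory, and read are illustrative, not from the slides):

```python
# Minimal sketch of the cache lookup flow described above.
# A dict stands in for the cache; main_memory for primary memory.
main_memory = {addr: addr * 2 for addr in range(1024)}  # dummy contents
cache = {}

def read(addr):
    if addr in cache:            # cache hit: fast path
        return cache[addr]
    value = main_memory[addr]    # cache miss: slow fetch from main memory
    cache[addr] = value          # load into cache for future accesses
    return value

read(42)   # miss: fetched from main memory, then cached
read(42)   # hit: served directly from the cache
```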
Need of Cache Memory


Cache memory improves CPU performance by reducing the time it
takes for the CPU to access data.

By storing frequently accessed data closer to the CPU, it
minimizes the need for the CPU to fetch data from the slower main
memory.
Types of Cache Memory


L1 or Level 1 Cache: The first level of cache memory, present inside the processor. Each core of the processor has its own small L1 cache. Its size typically ranges from 2 KB to 64 KB.

L2 or Level 2 Cache: The second level of cache memory, which may be present inside or outside the CPU core. If not inside the core, it can be shared between two cores, depending on the architecture, and is connected to the processor by a high-speed bus. Its size typically ranges from 256 KB to 512 KB.

L3 or Level 3 Cache: The third level of cache memory, located outside the cores and shared by all the cores of the CPU. Some high-end processors have this cache. It is used to back up and improve the performance of the L1 and L2 caches. Its size typically ranges from 1 MB to 8 MB.
Cache Hit and Cache Miss


Cache Hit: The CPU finds the required data in the cache memory, allowing quick access.

Cache Miss: The required data is not found in the cache, forcing the CPU to retrieve it from the slower main memory.

Cache performance is measured by the number of cache hits relative to the number of searches. This measure is known as the Hit Ratio.

Hit ratio = (Number of cache hits) / (Number of searches)

Hit ratio = percentage of memory accesses satisfied by the cache.

Miss ratio = 1 - hit ratio
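For example (with hypothetical numbers), if 80 out of 100 memory accesses are found in the cache, the hit ratio is 80/100 = 0.8 (80%) and the miss ratio is 1 - 0.8 = 0.2.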
Cache Line


Cache is partitioned into lines (also called blocks). Each line holds 4-64 bytes.

During a data transfer, a whole line is read or written.

Each line has a tag that indicates the address in main memory from which the line was copied.
Process of Cache Mapping
The process of cache mapping defines how a block that is present in the main memory gets mapped into the cache memory in the case of a cache miss.

In simpler words, cache mapping refers to the technique by which blocks of main memory are brought into the cache memory.
Techniques of Cache Mapping
Direct Mapping
In the case of direct mapping, a given block of the main memory can map to only one particular line of the cache. The cache line to which a distinct block maps is given by the following:

Cache line number = (Main memory block address) modulo (Total number of lines in cache)

For example,

Let us consider a cache memory that is divided into a total of 'n' lines.

Then, block 'j' of the main memory can map only to line number (j mod n) of the cache.
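A minimal sketch of this calculation, assuming a hypothetical cache with n = 8 lines:

```python
# Direct mapping: block j of main memory maps to cache line (j mod n).
n = 8  # hypothetical number of cache lines

for j in [3, 11, 19]:
    print(f"block {j} -> cache line {j % n}")
# Blocks 3, 11 and 19 all map to line 3, so a newly fetched block
# simply overwrites whichever of them currently occupies that line.
```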

Division of Physical Address


In the case of direct mapping, the physical address is divided as follows:

Tag | Line number | Word (block) offset
Direct Mapping

The Need for Replacement Algorithm


In the case of direct mapping,

There is no need for a replacement algorithm.

This is because a block of the main memory can map to only one particular line of the cache.

Thus, the incoming (new) block always replaces the block that already exists, if any, in that particular line.
Fully Associative Mapping

In the case of fully associative mapping,

A main memory block can map to any line of the cache that is freely available at that moment.

This makes fully associative mapping more flexible than direct mapping.
Fully Associative Mapping

Here, we can see that,

Every single line of the cache is freely available.

Thus, any main memory block can map to any line of the cache.

In case all the cache lines are occupied, one of the existing blocks needs to be replaced.
Fully Associative Mapping

The Need for Replacement Algorithm


In the case of fully associative mapping,

The replacement algorithm is always required.

The replacement algorithm suggests a block that is to be
replaced whenever all the cache lines happen to be
occupied.

So, replacement algorithms such as LRU Algorithm, FCFS
Algorithm, etc., are employed.
Fully Associative Mapping

Division of Physical Address


In the case of fully associative mapping, the physical address is divided as follows:

Tag | Word (block) offset
K-way Set Associative Mapping

In the case of k-way set associative mapping,

The cache lines are grouped into sets, where each set consists of k lines.

Any given main memory block can map only to one particular cache set.

However, within that set, the block can map to any cache line that is freely available.

The cache set to which a certain main memory block maps is given as follows:

Cache set number = (Main memory block address) modulo (Total number of sets in the cache)
K-way Set Associative Mapping


k = 2 suggests that every set consists of two cache lines.

Since the cache consists of 6 lines, the total number of sets present in the cache = 6 / 2 = 3 sets.

Block 'j' of the main memory can map only to set number (j mod 3) of the cache.

Within that set, block 'j' can map to any cache line that is freely available at that moment.

In case all the available cache lines are occupied, one of the existing blocks needs to be replaced, as shown in the sketch below.
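A minimal sketch of this example (6 cache lines, k = 2, hence 3 sets); the structure and names are illustrative, and FIFO is used within a set purely for brevity:

```python
# 2-way set associative: 6 lines grouped into 3 sets of k = 2 lines each.
k, num_lines = 2, 6
num_sets = num_lines // k   # 3 sets

cache = {s: [] for s in range(num_sets)}  # each set holds up to k blocks

def place(block):
    s = block % num_sets            # block j maps only to set (j mod 3)
    if len(cache[s]) < k:
        cache[s].append(block)      # a line in the set is still free
    else:
        evicted = cache[s].pop(0)   # set full: replace (FIFO, for brevity)
        cache[s].append(block)
        print(f"set {s}: evicted block {evicted}")
    print(f"block {block} -> set {s}: {cache[s]}")

for b in [0, 3, 6, 1]:
    place(b)   # blocks 0, 3 and 6 all contend for set 0
```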
K-way Set Associative Mapping


The Need for Replacement Algorithm

K-way set associative mapping is a combination of direct mapping and fully associative mapping.

It makes use of fully associative mapping within each set.

Therefore, k-way set associative mapping requires a replacement algorithm.
K-way Set Associative Mapping


Division of Physical Address

In the case of k-way set associative mapping, the physical address is divided as follows:

Tag | Set number | Word (block) offset
NUMERICAL

Question:-
Main Memory – 128MB
Cache Memory size – 128KB
Block Size – 16 words


Address format for direct mapping

Address format for associative mapping

Address format for 2-way set associative mapping
NUMERICAL

Solution :

Number of bits required to represent an address in main memory:
128 MB = 2^27, so the address is 27 bits.

Number of bits required to represent an address in cache memory:
128 KB = 2^17, so 17 bits.

Size of each block = 16 words = 2^4, so 4 word-offset bits.
NUMERICAL

Address format for direct mapping:
Tag (10 bits) | Line (13 bits) | Word (4 bits)
(lines in cache = 2^17 / 2^4 = 2^13, so 13 line bits; tag = 27 - 13 - 4 = 10 bits)

Address format for associative mapping:
Tag (23 bits) | Word (4 bits)
(tag = 27 - 4 = 23 bits)

Address format for 2-way set associative mapping:
Tag (11 bits) | Set (12 bits) | Word (4 bits)
(sets = 2^13 / 2 = 2^12, so 12 set bits; tag = 27 - 12 - 4 = 11 bits)
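A sketch verifying these field widths, assuming byte-sized words so that the 128 MB main memory needs 27 address bits, as in the solution above:

```python
import math

# Verify the address-format field widths computed above.
MAIN_MEMORY = 128 * 2**20   # 2^27 addressable units -> 27-bit address
CACHE_SIZE  = 128 * 2**10   # 2^17 units
BLOCK_SIZE  = 16            # 2^4 words per block/line

address_bits = int(math.log2(MAIN_MEMORY))   # 27
word_bits    = int(math.log2(BLOCK_SIZE))    # 4
num_lines    = CACHE_SIZE // BLOCK_SIZE      # 2^13 lines
line_bits    = int(math.log2(num_lines))     # 13

# Direct mapping: TAG | LINE | WORD
print("direct:", address_bits - line_bits - word_bits, line_bits, word_bits)

# Fully associative: TAG | WORD
print("associative:", address_bits - word_bits, word_bits)

# 2-way set associative: TAG | SET | WORD
set_bits = int(math.log2(num_lines // 2))    # 2^12 sets -> 12 bits
print("2-way:", address_bits - set_bits - word_bits, set_bits, word_bits)
```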
Cache Replacement


In case of a cache miss, the required block is copied into the cache.

But if the cache memory is full, some block from the cache memory needs to be deleted to create space for the new block.

Replacement algorithms have been developed to decide which block should be deleted from the cache memory.
Replacement Algorithms


A replacement algorithm decides which block frame will be deleted from the cache memory to make space for the new block.

In the case of direct mapping, the new block has to be stored in one specific line (block frame) of the cache, so no choice arises.

In associative and set-associative mapping, various replacement algorithms are used.
Replacement Algorithms


Random Choice Algorithm: Any block frame in the cache is selected at random and deleted, without reference to previous usage.

First-in-First-out Algorithm: This algorithm selects the block frame that has been in the cache memory for the longest time, i.e. the first block that entered the cache is the one to be deleted.

Least Frequently Used Algorithm: This algorithm chooses the block frame that has been used least often by the CPU. A counter kept for each block frame identifies the least frequently used block.

Least Recently Used Algorithm: This algorithm chooses the block frame that has gone unreferenced by the CPU for the longest time since it was mapped onto the cache memory. Tracking when each block was last referenced identifies the least recently used block.
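A minimal sketch of LRU replacement using Python's OrderedDict (an illustrative simulation, not how hardware implements it):

```python
from collections import OrderedDict

# LRU cache sketch: on a miss with a full cache, evict the block
# that has gone unreferenced for the longest time.
CAPACITY = 3
cache = OrderedDict()   # ordering tracks recency of use

def access(block):
    if block in cache:
        cache.move_to_end(block)               # hit: now most recently used
        return "hit"
    if len(cache) >= CAPACITY:
        victim, _ = cache.popitem(last=False)  # evict least recently used
        print(f"evicting block {victim}")
    cache[block] = True
    return "miss"

for b in [1, 2, 3, 1, 4]:   # block 1 is reused, so block 2 is evicted
    print(b, access(b))
```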
Cache Write


When memory write operations are performed, the CPU first writes into the cache memory. The modifications made by the CPU during a write operation, on the data saved in the cache, need to be written back to main memory or to auxiliary memory.

If the address of the result to be stored is not present in the cache, then the main memory gets updated with the result.

If the address is present in the cache memory, then there are two possibilities: Write-Through and Write-Back.

Therefore, the two popular cache write policies (schemes) are:

Write-Through

Write-Back
Write-Through


In a write-through cache, the main memory is updated each time the CPU writes into the cache.

The advantage of the write-through policy is that the main memory always contains the same data as the cache.

This characteristic is desirable in a system that uses direct memory access (DMA) for data transfer: I/O devices communicating through DMA receive the most recent data.
Write-Back


In a write-back scheme, only the cache memory is updated during a write operation.

The main memory gets updated only when the corresponding word is to be removed from the cache memory.

Updated locations in the cache memory are marked with a flag so that later, when the word is removed from the cache, it is copied back into the main memory.

Words are removed from the cache from time to time to make room for a new block of words.
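A minimal sketch contrasting the two policies for a single cached word; the dirty flag mirrors the 'flag' described above, and all names are illustrative:

```python
# Sketch of the two cache write policies.
main_memory = {0x10: 0}
cache = {}           # addr -> (value, dirty_flag)

def write_through(addr, value):
    cache[addr] = (value, False)   # cache and main memory always agree
    main_memory[addr] = value      # main memory updated on every write

def write_back(addr, value):
    cache[addr] = (value, True)    # only the cache is updated; mark dirty

def evict(addr):
    value, dirty = cache.pop(addr)
    if dirty:                      # copy back only if the word was modified
        main_memory[addr] = value

write_back(0x10, 99)
print(main_memory[0x10])  # still 0: main memory is stale until eviction
evict(0x10)
print(main_memory[0x10])  # 99: dirty word copied back on removal
```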
