
UNIT-5

Memory
Dr. S. K. Verma
Introduction
 Programs and the data they operate on are held in the memory of the computer.
 Ideally, memory would be fast, large, and inexpensive.
 The maximum size of the memory that can be used in any computer is determined by the addressing scheme.
 Memory is usually designed to store and retrieve data in word-length quantities.
 The connection between the processor and its memory consists of address, data, and control lines.
Contd.

 Memory access time
◦ The time that elapses between the initiation of an operation to transfer a word of data and the completion of that operation.
 Memory cycle time
◦ The minimum time delay required between the initiation of two successive memory operations.
 A memory unit is called a random-access memory (RAM) if the access time to any location is the same, independent of the location's address.
 The technology for implementing computer memories uses semiconductor integrated circuits.
Contd.
 A small, fast memory inserted between the larger, slower main memory and the processor is known as cache memory.
 When only the active portions of a program are stored in the main memory, and the remainder is stored on the much larger secondary storage device, the arrangement is known as virtual memory.
 Data are always transferred in contiguous blocks involving tens, hundreds, or thousands of words.
Semiconductor RAM Memories
 Memory cells are usually organized in the form of an array, in which each cell is capable of storing one bit of information.
 Each row of cells constitutes a memory word,
and all cells of a row are connected to a
common line referred to as the word line, which
is driven by the address decoder on the chip.
 The cells in each column are connected to a
Sense/Write circuit by two bit lines, and the
Sense/Write circuits are connected to the data
input/output lines of the chip.
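The array organization above can be illustrated with a toy software model (not from the slides; the 16-word by 8-bit size and the function names are invented for illustration): the address selects one row, and the whole row is read or written through the Sense/Write circuits.

#include <stdint.h>
#include <stdio.h>

#define NUM_WORDS 16   /* rows: one word per word line (illustrative size) */

static uint8_t cell_array[NUM_WORDS];   /* each row stores one 8-bit word */

/* "Address decoder": the address selects exactly one word line (row). */
uint8_t mem_read(unsigned address) {
    unsigned row = address % NUM_WORDS; /* keep the address in range for this toy model */
    return cell_array[row];             /* Sense circuits deliver the row's bits */
}

void mem_write(unsigned address, uint8_t value) {
    unsigned row = address % NUM_WORDS;
    cell_array[row] = value;            /* Write circuits drive the bit lines */
}

int main(void) {
    mem_write(3, 0xA5);
    printf("word 3 = 0x%02X\n", mem_read(3));
    return 0;
}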
Contd.
Static Memories

 Memories that consist of circuits capable of retaining their state as long as power is applied are known as static memories.
 When the word line is at ground level, the transistors are turned off and the latch retains its state.
Contd.
 Read Operation
◦ The word line is activated to close switches T1 and T2.
◦ If the cell is in state 1, the signal on bit line b is high and the signal on bit line b′ is low. The opposite is true if the cell is in state 0.
◦ The Sense/Write circuit at the end of the two bit lines monitors their state and sets the corresponding output accordingly.
 Write Operation
◦ The Sense/Write circuit drives bit lines b and b′.
◦ It places the appropriate value on bit line b and its complement on b′, and then activates the word line.
Contd.

 CMOS Cell
◦ Transistor pairs (T3, T5) and (T4, T6) form the inverters in the latch.
◦ In state 1, the voltage at point X is maintained high by having transistors T3 and T6 on while T4 and T5 are off.
◦ If T1 and T2 are turned on, bit lines b and b′ will have high and low signals, respectively.
 Continuous power is needed for the cell to retain its state; hence SRAMs are said to be volatile memories.
 The advantage of CMOS SRAMs is their very low power consumption, and static RAMs can be accessed very quickly.
Dynamic RAMs

 Static RAMs are fast, but their cells require several transistors.
 Information is stored in a dynamic memory cell in the form of a charge on a capacitor.
 The contents must be periodically refreshed as the capacitor leaks its charge.
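A rough way to see why refreshing is needed is the toy model below (all constants are invented for illustration; real retention times and refresh intervals differ): the stored charge leaks away and the data are lost unless the cell is rewritten before the charge falls below the sense threshold.

#include <stdio.h>

#define FULL_CHARGE     100
#define SENSE_THRESHOLD  50
#define LEAK_PER_MS       1

int main(void) {
    int charge = FULL_CHARGE;          /* cell written with a logic 1          */
    int refresh_interval_ms = 40;      /* must be short enough to beat leakage */

    for (int t = 1; t <= 200; t++) {   /* simulate 200 ms */
        charge -= LEAK_PER_MS;         /* capacitor leaks continuously         */
        if (t % refresh_interval_ms == 0)
            charge = FULL_CHARGE;      /* refresh restores the full charge     */
        if (charge < SENSE_THRESHOLD) {
            printf("data lost at t = %d ms\n", t);
            return 1;
        }
    }
    printf("data retained: refresh interval is short enough\n");
    return 0;
}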
Contd.

 A 256-Megabit DRAM chip can be configured as 32M × 8.
 The fast page mode feature allows successive data in the selected row to be transferred without reselecting the row.
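A sketch of the address split such a chip might use, assuming (this organization is an assumption, not stated in the slides) 16K rows of 2K bytes each: the 25-bit address divides into a 14-bit row address and an 11-bit column address, and with fast page mode the row address is sent once while successive column addresses access bytes within the same row.

#include <stdio.h>

#define COL_BITS 11u
#define COL_MASK ((1u << COL_BITS) - 1u)

int main(void) {
    unsigned address = 0x1ABCDEF & 0x1FFFFFF;    /* any 25-bit byte address   */
    unsigned row = address >> COL_BITS;          /* high-order 14 bits (RAS)  */
    unsigned col = address & COL_MASK;           /* low-order 11 bits  (CAS)  */

    printf("byte address 0x%07X -> row %u, column %u\n", address, row, col);

    /* Fast page mode: the next bytes of the same row reuse this row address,
     * so only new column addresses need to be issued. */
    for (unsigned i = 1; i <= 3; i++)
        printf("  same row %u, column %u\n", row, (col + i) & COL_MASK);
    return 0;
}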
Synchronous DRAMs
 SDRAMs have built-in refresh circuitry, with a refresh counter to provide the addresses of the rows to be selected for refreshing.
 The address and data connections of an SDRAM may be buffered by means of registers.
Contd.
 Memory latency is the amount of time it takes to transfer the first word of a block.
 The number of bits or bytes that can be transferred in one second is known as the bandwidth.
 Double-Data-Rate SDRAM (Assignment)
 Rambus Memory (Assignment)
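A small illustration of how latency and bandwidth relate for a block transfer (the clock rate, latency, and block size below are made-up numbers, not figures from the slides): the first word arrives after the latency, and each remaining word takes one bus cycle.

#include <stdio.h>

int main(void) {
    double clock_ns        = 2.5;   /* 400 MHz bus: 2.5 ns per cycle        */
    double latency_cycles  = 5.0;   /* cycles until the first word arrives  */
    double words_per_block = 8.0;
    double bytes_per_word  = 4.0;

    double block_time_ns = (latency_cycles + (words_per_block - 1.0)) * clock_ns;
    double bandwidth_MBps = (words_per_block * bytes_per_word)
                            / (block_time_ns * 1e-9) / 1e6;

    printf("block transfer time = %.1f ns\n", block_time_ns);
    printf("effective bandwidth = %.1f MB/s\n", bandwidth_MBps);
    return 0;
}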
Structure of Larger Memories
Static Memory Systems
Contd.
 Dynamic Memory Systems
◦ Synchronous DRAM is used.
◦ Memory chips are assembled into SIMMs (Single In-line Memory Modules) or DIMMs (Dual In-line Memory Modules).
 Memory Controller
◦ The controller accepts a complete address and the R/W signal from the processor, under control of a Request signal which indicates that a memory access operation is needed.
◦ It forwards the R/W signal and the row and column portions of the address to the memory and generates the RAS and CAS signals with the appropriate timing.
 Refresh Overhead
◦ A dynamic RAM cannot respond to read or write requests while an internal refresh operation is taking place, so a small fraction of the total time is lost to refreshing (see the sketch below).
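A back-of-the-envelope estimate of the refresh overhead mentioned above (the row count, clock rate, and refresh period are assumed values for illustration, not figures from the slides):

#include <stdio.h>

int main(void) {
    double rows              = 8192.0;    /* rows to refresh                 */
    double cycles_per_row    = 4.0;       /* clock cycles per row refresh    */
    double clock_hz          = 133e6;     /* memory clock                    */
    double refresh_period_s  = 64e-3;     /* every row refreshed per 64 ms   */

    double refresh_time_s = rows * cycles_per_row / clock_hz;
    double overhead = refresh_time_s / refresh_period_s;

    printf("time spent refreshing per period = %.1f us\n", refresh_time_s * 1e6);
    printf("refresh overhead = %.2f %%\n", overhead * 100.0);
    return 0;
}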
Read-only Memories
 ROM
◦ A memory that allows only reading of the stored data is called a read-only memory (ROM).
 PROM
◦ Programmable ROM.
 EPROM
◦ Erasable, reprogrammable ROM.
 EEPROM
◦ Electrically erasable PROM.
 Flash Memory
◦ In a flash device, it is only possible to write an entire block of cells at once.
◦ Flash devices have greater density, which leads to higher capacity and a lower cost per bit.
 Flash Cards
 Flash Drives
Direct Memory Access
 A special control unit is provided to manage the transfer without continuous intervention by the processor; this approach is called direct memory access (DMA).
 The unit that controls DMA transfers is referred to as a DMA controller.
 To initiate the transfer of a block of words, the processor sends the DMA controller the starting address, the number of words in the block, and the direction of the transfer.
 When the entire block has been transferred, the controller informs the processor by raising an interrupt.
Contd.
 Two registers are used for storing the starting address and the word count.
 A third register contains status and control flags.
 When the controller has completed transferring a block of data and is ready to receive another command, it sets the Done flag to 1.
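A hypothetical register-level view of such a controller (the register layout, bit positions, and names below are invented for illustration; real DMA controllers differ):

#include <stdint.h>

typedef struct {
    volatile uint32_t start_addr;   /* starting address of the block        */
    volatile uint32_t word_count;   /* number of words to transfer          */
    volatile uint32_t status_ctrl;  /* control flags and status flags       */
} dma_regs_t;

#define DMA_CTRL_READ   (1u << 0)   /* direction flag (assumed encoding)    */
#define DMA_CTRL_IE     (1u << 1)   /* interrupt-enable flag                */
#define DMA_CTRL_START  (1u << 2)   /* start the transfer                   */
#define DMA_STAT_DONE   (1u << 31)  /* set by the controller when finished  */

void dma_start(dma_regs_t *dma, uint32_t addr, uint32_t nwords, int read)
{
    dma->start_addr  = addr;
    dma->word_count  = nwords;
    dma->status_ctrl = (read ? DMA_CTRL_READ : 0) | DMA_CTRL_IE | DMA_CTRL_START;
    /* From here the controller transfers the block without processor help;
     * when the word count reaches zero it sets Done and raises an interrupt. */
}

int dma_done(const dma_regs_t *dma)
{
    return (dma->status_ctrl & DMA_STAT_DONE) != 0;
}

int main(void)
{
    dma_regs_t fake = {0};              /* stand-in for a memory-mapped block */
    dma_start(&fake, 0x1000, 256, 1);
    return dma_done(&fake) ? 0 : 1;
}

A driver would call dma_start() once per block and then either poll dma_done() or wait for the completion interrupt described above.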
Contd.
 For a DMA transfer of a block of data from the main memory to one of the disks, an OS routine writes the address and word count information into the registers of the disk controller.
 The DMA controller then proceeds independently to implement the specified operation.
Memory Hierarchy
 The fastest access is to data held in processor registers.
 The next level of the hierarchy is the cache.
 A primary cache is always located on the processor chip.
 The next level in the hierarchy is the main memory.
 Disk devices provide a very large amount of inexpensive memory, and they are widely used as secondary storage in computer systems.
Cache Memories
 Cache operation is based on a property of computer programs called locality of reference.
 Temporal locality
◦ A recently executed instruction is likely to be executed again very soon.
◦ This suggests that whenever an information item (instruction or data) is first needed, it should be brought into the cache, because it is likely to be needed again soon.
 Spatial locality
◦ Instructions close to a recently executed instruction are also likely to be executed soon.
◦ This suggests that instead of fetching just one item from the main memory to the cache, it is useful to fetch several items that are located at adjacent addresses as well.
 A cache block or cache line refers to a set of contiguous address locations of some size.
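The two kinds of locality can be seen in ordinary code; the sketch below (not from the slides) shows a sequential scan that benefits from spatial locality and a small, repeatedly reused working set that benefits from temporal locality.

#include <stdio.h>

#define N 1024

int main(void) {
    static int a[N];
    long sum = 0;

    /* Spatial locality: consecutive elements sit at adjacent addresses, so
     * one cache-line fetch brings in the next several elements as well.    */
    for (int i = 0; i < N; i++)
        sum += a[i];

    /* Temporal locality: the same few locations (sum, i, a[0..7]) are used
     * over and over, so they stay in the cache across iterations.          */
    for (int pass = 0; pass < 100; pass++)
        for (int i = 0; i < 8; i++)
            sum += a[i];

    printf("%ld\n", sum);
    return 0;
}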
Contd.
 The correspondence between the main memory blocks and those in the cache is specified by a mapping function.
 When the cache is full, a replacement algorithm decides which block to evict.
 Cache Hits
◦ With the write-through protocol, both the cache location and the main memory location are updated on a write.
◦ The write-back, or copy-back, protocol updates only the cache location and marks the block containing it with an associated flag bit, often called the dirty or modified bit.
◦ The write-through protocol is simpler than the write-back protocol.
 Cache Misses
◦ A Read operation for a word that is not in the cache constitutes a Read miss.
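A minimal sketch of the two write-hit policies for a single cached word (the data structures and function names are assumptions for illustration, not the slides' design): write-through updates memory immediately, while write-back defers the memory update until the block is evicted.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    uint32_t data;
    bool     dirty;     /* "dirty"/"modified" bit used by write-back */
} cache_line_t;

static uint32_t main_memory = 0;
static cache_line_t line = { 0, false };

void write_through(uint32_t value) {
    line.data   = value;    /* update the cache location            */
    main_memory = value;    /* ...and the main-memory location too  */
}

void write_back(uint32_t value) {
    line.data  = value;     /* update only the cache location       */
    line.dirty = true;      /* remember that memory is now stale    */
}

void evict_line(void) {
    if (line.dirty) {       /* on replacement, write the block back */
        main_memory = line.data;
        line.dirty  = false;
    }
}

int main(void) {
    write_back(42);
    printf("before eviction: memory = %u\n", main_memory);  /* still 0  */
    evict_line();
    printf("after eviction:  memory = %u\n", main_memory);  /* now 42   */
    return 0;
}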
Mapping Functions of Cache
 Consider a cache consisting of 128 blocks of 16 words each, for a total of 2048 (2K) words, and assume that the main memory is addressable by a 16-bit address.
 The main memory has 64K words, which can be viewed as 4K blocks of 16 words each.
 Direct Mapping
◦ Block j of the main memory maps onto block j modulo 128 of the cache.
◦ Whenever one of the main memory blocks 0, 128, 256, . . . is loaded into the cache, it is stored in cache block 0.
◦ Blocks 1, 129, 257, . . . are stored in cache block 1, and so on.
Contd.
 The low-order 4 bits of an address select one of the 16 words in a block.
 The 7-bit cache block field determines the cache position in which this block must be stored.
 The high-order 5 bits of the memory address of the block are stored in 5 tag bits associated with its location in the cache (see the field-extraction sketch below).
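The field extraction for this 16-bit address format can be written directly (a small sketch using the slides' 5-bit tag, 7-bit block, and 4-bit word fields; the sample address is arbitrary):

#include <stdio.h>

int main(void) {
    unsigned address = 0xA7C4;                 /* any 16-bit memory address  */

    unsigned word  =  address        & 0xF;    /* low-order 4 bits           */
    unsigned block = (address >> 4)  & 0x7F;   /* next 7 bits: cache block   */
    unsigned tag   = (address >> 11) & 0x1F;   /* high-order 5 bits          */

    unsigned mem_block = address >> 4;         /* 12-bit main-memory block # */

    printf("address 0x%04X -> tag %u, cache block %u, word %u\n",
           address, tag, block, word);
    printf("check: memory block %u mod 128 = %u\n", mem_block, mem_block % 128);
    return 0;
}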
Contd.
 Associative Mapping
◦ A main memory block can be placed into any cache block position.
Contd.
 Set-Associative Mapping
◦ The blocks of the cache are grouped into sets, and the mapping allows a block of the main memory to reside in any block of a specific set.
◦ With the 128 cache blocks grouped into 64 sets of two blocks each, memory blocks 0, 64, 128, . . . , 4032 map into cache set 0 (see the sketch after this list).
 The number of blocks per set is a parameter that can be selected to suit the requirements of a particular computer.
 A cache that has k blocks per set is referred to as a k-way set-associative cache.
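For the example above, the set and tag fields follow from simple division and remainder (a small sketch; the printed subset of blocks is chosen arbitrarily):

#include <stdio.h>

#define NUM_SETS 64    /* 128 cache blocks / 2 blocks per set */

int main(void) {
    /* Memory blocks 0, 64, 128, ..., 4032 should all land in set 0. */
    for (unsigned mem_block = 0; mem_block <= 4032; mem_block += 64) {
        unsigned set = mem_block % NUM_SETS;            /* set field */
        unsigned tag = mem_block / NUM_SETS;            /* tag field */
        if (mem_block <= 192 || mem_block == 4032)      /* print a few */
            printf("memory block %4u -> set %u, tag %2u\n",
                   mem_block, set, tag);
    }
    return 0;
}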
Contd.
 Stale Data
◦ A control bit called the valid bit is provided for each cache block to indicate whether the data in that block are valid.
Contd.
 Replacement Algorithms
◦ A common choice for eviction is the least recently used (LRU) block, and the technique is called the LRU replacement algorithm.
◦ Example: the cache has space for only eight blocks of data; each block consists of only one 16-bit word of data, and the memory is word-addressable with 16-bit addresses (a small LRU sketch follows).
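A minimal sketch of LRU replacement for this eight-block example (the age-counter scheme and the access trace are illustrative choices, not taken from the slides): on a miss with a full cache, the entry with the largest age, i.e. the least recently used one, is replaced.

#include <stdio.h>

#define CACHE_BLOCKS 8
#define EMPTY        -1

static int tags[CACHE_BLOCKS];
static int age[CACHE_BLOCKS];

void access_block(int block)
{
    int slot = -1, lru = 0;

    for (int i = 0; i < CACHE_BLOCKS; i++) {
        if (tags[i] == block) slot = i;                 /* hit             */
        if (age[i] > age[lru]) lru = i;                 /* track oldest    */
    }
    if (slot < 0) {                                     /* miss            */
        for (int i = 0; i < CACHE_BLOCKS && slot < 0; i++)
            if (tags[i] == EMPTY) slot = i;             /* free slot?      */
        if (slot < 0) slot = lru;                       /* else evict LRU  */
        printf("miss on block %d -> placed in slot %d\n", block, slot);
        tags[slot] = block;
    }
    for (int i = 0; i < CACHE_BLOCKS; i++) age[i]++;    /* everyone ages   */
    age[slot] = 0;                                      /* except this one */
}

int main(void)
{
    for (int i = 0; i < CACHE_BLOCKS; i++) { tags[i] = EMPTY; age[i] = 0; }

    int trace[] = { 0, 1, 2, 3, 4, 5, 6, 7, 0, 8 };  /* block 8 evicts block 1 */
    for (int i = 0; i < 10; i++)
        access_block(trace[i]);
    return 0;
}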
Locating a Block in the Cache
Contd.
 Content Addressable Memory (CAM)
◦ A CAM is a circuit that combines comparison and storage in a single device.
 Reducing the Miss Penalty Using Multilevel Caches
◦ A second-level cache is normally on the same chip and is accessed whenever a miss occurs in the primary cache.
Measuring and Improving Cache Performance
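The figures for this section are not reproduced here. The sketch below shows the usual way average memory access time is estimated for one and two cache levels; the hit rates and access times are illustrative assumptions, not values from the slides.

#include <stdio.h>

int main(void) {
    double h1 = 0.95, C1 = 1.0;     /* L1 hit rate, L1 access time (cycles)  */
    double h2 = 0.90, C2 = 10.0;    /* L2 hit rate (of L1 misses), L2 time   */
    double M  = 100.0;              /* main-memory access time (cycles)      */

    /* One cache level: t_avg = h1*C1 + (1 - h1)*M                           */
    double t_one = h1 * C1 + (1.0 - h1) * M;

    /* Two levels: L1 misses go to L2, and only L2 misses go to memory:
     * t_avg = h1*C1 + (1 - h1)*(h2*C2 + (1 - h2)*M)                         */
    double t_two = h1 * C1 + (1.0 - h1) * (h2 * C2 + (1.0 - h2) * M);

    printf("average access time, one level : %.2f cycles\n", t_one);
    printf("average access time, two levels: %.2f cycles\n", t_two);
    return 0;
}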
