Unit 4
Subject: COA
Branch – Computer Science & Information Technology
Semester- IV
Faculty Name : Prof. Vivek Rawat,
Dept of CSIT, Sagar Institute of Research & Technology
Course Outcomes (CO No., Course Outcome, Blooms Level, Skills)
RAM Chip
• When the WR input is enabled, the memory stores a byte from the data bus into a location specified by the address input lines.
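A minimal Python sketch (not part of the original notes) of this read/write behaviour, assuming a 128 x 8 chip with RD and WR control inputs; the class and signal names are illustrative only:

class RAMChip:
    def __init__(self, words=128):
        self.cells = [0] * words                    # one byte per location

    def cycle(self, address, data_bus=0, rd=0, wr=0):
        if wr:                                      # WR enabled: store the byte from the data bus
            self.cells[address] = data_bus & 0xFF   # at the location given by the address lines
            return None
        if rd:                                      # RD enabled: drive the addressed byte onto the bus
            return self.cells[address]
        return None                                 # neither read nor write selected

ram = RAMChip()
ram.cycle(address=0x25, data_bus=0x9C, wr=1)        # write 0x9C to location 0x25
print(hex(ram.cycle(address=0x25, rd=1)))           # read it back: 0x9c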
ROM Chip
Secondary Memory
• The most common auxiliary memory devices used in computer
systems are magnetic disks and tapes. Other components used,
but not as frequently, are magnetic drums, magnetic bubble
memory, and optical disks.
• If there is a hit, the CPU accepts the 12-bit data from the cache. If there is a miss, the CPU reads the word from main memory, and the word is then transferred to the cache.
MAPPING SCHEMES:
1. Associative Mapping:
• The fastest and most flexible cache organization uses an associative memory.
• The associative memory stores both the address and content (data) of the
memory word.
• This permits any location in cache to store any word from main memory.
• The diagram shows three words presently stored in the cache.
• The address value of 15 bits is shown as a five-digit octal number and its corresponding 12-bit
word is shown as a four-digit octal number.
• A CPU address of 15 bits is placed in the argument register and the associative memory is searched
for a matching address.
• If the address is found, the corresponding 12-bit data is read and sent to the CPU. If no match occurs, the main memory is accessed for the word.
• The address-data pair is then transferred to the associative cache memory.
• If the cache is full, an address-data pair must be displaced to make room for a pair that is needed and not present in the cache.
• The decision as to what pair is replaced is determined from the replacement algorithm that the
designer chooses for the cache.
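The following Python sketch (an illustration added to these notes, with assumed sizes and example values) mimics the associative cache just described: any main-memory word can occupy any cache slot, hits are found by matching the full address, and a simple FIFO choice stands in for the designer's replacement algorithm:

from collections import OrderedDict

class AssociativeCache:
    def __init__(self, size=3):
        self.size = size
        self.lines = OrderedDict()              # 15-bit address -> 12-bit word

    def read(self, address, main_memory):
        if address in self.lines:               # hit: the data comes from the cache
            return self.lines[address]
        word = main_memory[address]             # miss: the word is read from main memory
        if len(self.lines) >= self.size:        # cache full: displace the oldest address-data pair
            self.lines.popitem(last=False)
        self.lines[address] = word              # transfer the address-data pair into the cache
        return word

main_memory = {0o01000: 0o3450, 0o02777: 0o6710, 0o22345: 0o1234}   # assumed octal example values
cache = AssociativeCache()
print(oct(cache.read(0o02777, main_memory)))    # miss: fetched from main memory, then cached
print(oct(cache.read(0o02777, main_memory)))    # hit on the second access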
2. Direct Mapping:
• Associative memories are expensive compared to random-access memories because of the
added logic associated with each cell.
• The CPU address of 15 bits is divided into two fields. The nine least significant bits constitute the index field and the remaining six bits form the tag field.
• The disadvantage of direct mapping is that the hit ratio can drop considerably if two or more
words whose addresses have the same index but different tags are accessed repeatedly.
• The word at address zero is presently stored in the cache (index = 000, tag = 00,
data = 1220).
• Suppose that the CPU now wants to access the word at address 02000.
• The index address is 000, so it is used to access the cache. The two tags are then
compared.
• The cache tag is 00 but the address tag is 02, which does not produce a match.
• Therefore, the main memory is accessed and the data word 5670 is transferred to
the CPU.
• The cache word at index address 000 is then replaced with a tag of 02 and data of
5670.
Figure: Direct mapping cache organization: (a) main memory, (b) cache memory.
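A minimal Python sketch (added for illustration, using the example values from the notes) of the direct-mapping lookup: the low nine bits of the 15-bit address index the cache line and the high six bits are compared with the stored tag:

class DirectMappedCache:
    INDEX_BITS = 9                                        # nine least significant bits form the index

    def __init__(self):
        self.lines = {}                                   # index -> (tag, data)

    def read(self, address, main_memory):
        index = address & ((1 << self.INDEX_BITS) - 1)    # index field
        tag = address >> self.INDEX_BITS                  # six-bit tag field
        if index in self.lines and self.lines[index][0] == tag:
            return self.lines[index][1]                   # hit: tags match
        word = main_memory[address]                       # miss: read the word from main memory
        self.lines[index] = (tag, word)                   # replace the cached tag and data
        return word

main_memory = {0o00000: 0o1220, 0o02000: 0o5670}
cache = DirectMappedCache()
cache.read(0o00000, main_memory)                 # index 000, tag 00, data 1220 now cached
print(oct(cache.read(0o02000, main_memory)))     # same index 000, tag 02: miss, data 5670 replaces it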
REPLACEMENT ALGORITHMS:
A virtual memory system is a combination of hardware and software techniques. The memory
management software system handles all the software operations for the efficient utilization of
memory space.
It must decide:
Which page in main memory is to be removed to make room for a new page.
When a new page is to be transferred from auxiliary memory to main memory.
Where the page is to be placed in main memory.
The hardware mapping mechanism and the memory management software together
constitute the architecture of a virtual memory. When a program starts execution,
one or more pages are transferred into main memory and the page table is set to
indicate their position. The program is executed from main memory until it attempts
to reference a page that is still in auxiliary memory. This condition is called a page fault. When a page fault occurs, the execution of the present program is suspended until the required page is brought into main memory.
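The following is a hypothetical Python sketch of this page-fault handling; the function and parameter names are assumptions, and choose_victim stands for whichever replacement algorithm (such as those below) the system uses:

def reference(page, page_table, free_frames, resident, choose_victim):
    """Return the frame holding `page`, handling a page fault if necessary."""
    if page in page_table:                      # the page is already in main memory
        return page_table[page]
    # Page fault: execution of the present program is suspended here.
    if free_frames:
        frame = free_frames.pop()               # an empty frame is available
    else:
        victim = choose_victim(resident)        # the replacement algorithm picks a page to remove
        frame = page_table.pop(victim)
        resident.remove(victim)
    # (The required page would now be transferred from auxiliary memory into `frame`.)
    page_table[page] = frame                    # the page table is updated with the new position
    resident.append(page)
    return frame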
Three of the most common replacement algorithms are First-In-First-Out (FIFO), Least Recently Used (LRU), and Optimal Page Replacement.
FIFO:
• The FIFO algorithm selects for replacement the page that has been in memory the longest time.
• Each time a page is loaded into memory, its identification number is pushed into a FIFO stack.
• The FIFO stack will be full whenever memory has no more empty blocks.
• When a new page must be loaded, the page least recently brought in is removed.
• The FIFO replacement policy has the advantage of being easy to implement. It has the
disadvantage that under certain circumstances pages are removed and loaded from memory too
frequently.
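A small Python sketch (an added illustration with an assumed reference string) of the FIFO policy: pages are queued in the order they are loaded, and the page loaded longest ago is removed on a fault when memory is full:

from collections import deque

def fifo_faults(reference_string, num_frames):
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page in frames:
            continue                            # page already resident: no fault
        faults += 1
        if len(frames) >= num_frames:
            oldest = queue.popleft()            # identification number pushed in earliest
            frames.discard(oldest)              # that page is removed
        frames.add(page)
        queue.append(page)                      # record the order in which pages were loaded
    return faults

print(fifo_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))   # 7 page faults with 3 frames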
LRU:
• The LRU policy is more difficult to implement but has been more attractive on the
assumption that the least recently used page is a better candidate for removal than
the least recently loaded page as in FIFO.
• The LRU algorithm can be implemented by associating a counter with every page that is in main memory.
• When a page is referenced, its associated counter is set to zero. At fixed intervals
of time, the counters associated with all pages presently in memory are
incremented by 1.
• The least recently used page is the page with the highest count. The counters are
often called aging registers, as their count indicates their age, that is, how long ago
their associated pages have been referenced.
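A Python sketch (added, with assumed names) of the aging-register scheme described above: a referenced page's counter is cleared, all counters are incremented at fixed intervals, and the page with the highest count is the least recently used:

class LRUCounters:
    def __init__(self):
        self.counters = {}                      # page -> age count (aging register)

    def reference(self, page):
        self.counters[page] = 0                 # referenced page: counter set to zero

    def tick(self):
        for page in self.counters:              # fixed-interval update of every resident page
            self.counters[page] += 1

    def victim(self):
        return max(self.counters, key=self.counters.get)   # highest count = least recently used

lru = LRUCounters()
lru.reference(1); lru.tick()
lru.reference(2); lru.tick()
lru.reference(3); lru.tick()
print(lru.victim())                             # page 1 has the highest count, so it is replaced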
Optimal Page Replacement Algorithm
• The optimal algorithm labels each page with the number of instructions that will be executed before that page is next referenced; the page with the highest label, that is, the page whose next use lies furthest in the future, should be removed.
• If one page will not be used for 8 million instructions and another page
will not be used for 6 million instructions, removing the former pushes the
page fault that will fetch it back as far into the future as possible.
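A Python sketch (added for illustration; the reference string is assumed) of the optimal policy: on a fault with memory full, each resident page is labelled with how far ahead it is next referenced, and the page with the highest label is removed. Note that this requires knowing the future reference string, so it mainly serves as a benchmark for the other algorithms:

def optimal_faults(reference_string, num_frames):
    frames, faults = [], 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        future = reference_string[i + 1:]
        # Label = distance to next use; a page never used again gets the largest label.
        labels = {p: (future.index(p) if p in future else len(future)) for p in frames}
        frames.remove(max(labels, key=labels.get))   # remove the page with the highest label
        frames.append(page)
    return faults

print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))   # 6 page faults with 3 frames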
IMPROVING CACHE PERFORMANCE:
There are three ways to improve cache performance:
1. Reduce the miss rate.
2. Reduce the miss penalty.
3. Reduce the time to hit in the cache.
VIRTUAL MEMORY
• In a memory hierarchy system, programs and data are first stored in auxiliary memory.
• Portions of a program or data are brought into main memory as they are needed by the CPU.
• Virtual memory is a concept used in some large computer systems that permits the user to construct programs as though a large memory space were available, equal to the totality of auxiliary memory.
• Each address that is referenced by the CPU goes through an address mapping from the so-called
virtual address to a physical address in main memory.
• Virtual memory is used to give programmers the illusion that they have a very large memory at
their disposal, even though the computer actually has a relatively small main memory.
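A hypothetical Python sketch of the virtual-to-physical address mapping: the virtual address is split into a page number and an offset, the page table maps the page number to a frame number, and the physical address is rebuilt from the frame and offset. The 1K-word page size and table contents are assumptions for illustration:

PAGE_SIZE = 1024                                  # assumed words per page

def translate(virtual_address, page_table):
    page_number = virtual_address // PAGE_SIZE    # which virtual page is referenced
    offset = virtual_address % PAGE_SIZE          # position of the word within the page
    frame_number = page_table[page_number]        # a missing entry would mean a page fault
    return frame_number * PAGE_SIZE + offset      # physical address in main memory

page_table = {0: 5, 1: 2, 2: 7}                   # virtual page -> physical frame (assumed)
print(translate(1500, page_table))                # page 1, offset 476 -> frame 2 -> 2524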
MEMORY MANAGEMENT HARDWARE: