Unit 5-1
Memory Concepts and Hierarchy – Memory Management – Cache Memories: Mapping and Replacement
Techniques – Virtual Memory – DMA – I/O – Accessing I/O: Parallel and Serial Interface – Interrupt I/O –
Interconnection Standards: USB, SATA
Page | 1
CS3351/ DPCO/ Dr.J.Rajalakshmi/SRM MCET
This Memory Hierarchy Design is divided into two main types:
1. External Memory or Secondary Memory – comprising magnetic disk, optical disk and magnetic tape, i.e. peripheral storage devices that the processor accesses via an I/O module.
2. Internal Memory or Primary Memory – comprising main memory, cache memory and CPU registers. This is directly accessible by the processor.
We can infer the following characteristics of the Memory Hierarchy Design from the above figure:
1. Capacity:
It is the total volume of information the memory can store. As we move from top to bottom in the hierarchy, the capacity increases.
2. Access Time: It is the time interval between the read/write request and the availability of the data. As we move from top to bottom in the hierarchy, the access time increases.
3. Performance:
Earlier, computer systems were designed without a memory hierarchy, and the speed gap between CPU registers and main memory kept widening because of the large difference in their access times. This lowered the performance of the system, so an enhancement was required. The enhancement came in the form of the memory hierarchy design, which improves system performance. One of the most significant ways to increase system performance is minimizing how far down the memory hierarchy one has to go to manipulate data.
4. Cost per bit: As we move from bottom to top in the Hierarchy, the cost per bit increases i.e. Internal
Memory is costlier than External Memory.
• DRAM (Dynamic RAM): It uses capacitors and transistors and stores data as a charge on the capacitors. It contains thousands of memory cells. The charge on each capacitor must be refreshed every few milliseconds. This memory is slower than SRAM.
(ii) ROM (Read Only Memory): It is a non-volatile memory. Non-volatile memory retains information even when the power supply fails or is interrupted. ROM is used to store the information needed to operate the system. As the name read-only memory suggests, we can only read the programs and data stored on it. It contains electronic fuses that can be programmed for a specific piece of information. The information in ROM is stored in binary format. It is also known as permanent memory.
ROM is of four types:
• MROM (Masked ROM): Hard-wired devices with a pre-programmed collection of data or
instructions were the first ROMs. Masked ROMs are a type of low-cost ROM that works in this way.
• PROM (Programmable Read Only Memory): This read-only memory is modifiable once by the
user. The user purchases a blank PROM and uses a PROM programmer to put the required contents into
the PROM. Its content can't be erased once written.
• EPROM (Erasable Programmable Read Only Memory): It is an extension to PROM where you
can erase the content of ROM by exposing it to Ultraviolet rays for nearly 40 minutes.
• EEPROM (Electrically Erasable Programmable Read Only Memory): Here the written contents
can be erased electrically. You can erase and re-program an EEPROM up to 10,000 times. Erasing
and programming take very little time, nearly 4 to 10 ms (milliseconds). Any area in an EEPROM
can be wiped and programmed selectively.
2. Secondary Memory: It is also known as auxiliary memory and backup memory. It is a non-volatile
memory and used to store a large amount of data or information. The data or information stored in
secondary memory is permanent, and it is slower than primary memory. A CPU cannot access secondary
memory directly. The data/information from the auxiliary memory is first transferred to the main memory,
and then the CPU can access it.
Characteristics of Secondary Memory:
• It is a slow memory but reusable.
• It is a reliable and non-volatile memory.
• It is cheaper than primary memory.
• The storage capacity of secondary memory is large.
• A computer system can run without secondary memory.
• In secondary memory, data is stored permanently even when the power is off.
Types of secondary memory:
(i) Magnetic Tapes: Magnetic tape is a long, narrow strip of plastic film with a thin magnetic coating, used for magnetic recording. Bits are recorded on the tape as magnetic patches called records that run along several tracks. Typically, 7 or 9 bits are recorded concurrently. Each track has one read/write head, which allows data to be recorded and read as a sequence of characters. The tape can be stopped, started moving forward or backward, or rewound.
(ii) Magnetic Disks: A magnetic disc is a circular metal or plastic plate coated with magnetic material. Both sides of the disc are used. Bits are stored as magnetized spots in locations called tracks, which run in concentric rings. Tracks are typically broken into pieces called sectors.
Hard discs are discs that are permanently attached and cannot be removed by a single user.
(iii) Optical Disks: It is a laser-based storage medium that can be written to and read. It is reasonably priced and has a long lifespan. The optical disc can be removed from the computer by the user.
Types of Optical Disks :
(a) CD-ROM:
• CD-ROM stands for Compact Disc – Read-Only Memory.
• Information is written to the disc by using a controlled laser beam to burn pits on the disc surface.
• It has a highly reflecting surface, which is usually aluminum.
• The diameter of the disc is 5.25 inches.
• 16000 tracks per inch is the track density.
• The capacity of a CD-ROM is 600 MB, with each sector storing 2048 bytes of data.
• The data transfer rate is about 4800 KB/sec and the access time is around 80 milliseconds.
(b) WORM-(WRITE ONCE READ MANY):
• A user can only write data once.
• The information is written on the disc using a laser beam.
• It is possible to read the written data as many times as desired.
• They keep lasting records of information but access time is high.
• It is possible to rewrite updated or new data to another part of the disc.
• Data that has already been written cannot be changed.
• Usual size – 5.25 inch or 3.5 inch diameter.
• The usual capacity of a 5.25 inch disk is 650 MB, 5.2 GB, etc.
(c) DVDs:
• The term "DVD" stands for "Digital Versatile/Video Disc", and there are two sorts of DVDs:
(i) DVD-R (writable) and (ii) DVD-RW (re-writable)
• DVD-ROMs (Digital Versatile Discs): These are read-only memory (ROM) discs that can be used in a
variety of ways. Compared to CD-ROMs, they can store a lot more data. A thick polycarbonate
plastic layer serves as a foundation for the other layers. It is an optical memory.
• DVD-R: It is a writable optical disc that can be written just once, i.e. a recordable DVD. It is a
lot like WORM. DVD-ROMs have capacities ranging from 4.7 to 17 GB. The capacity of a 3.5 inch
disk is 1.3 GB.
3. Cache Memory: It is a type of high-speed semiconductor memory that can help the CPU run faster.
Between the CPU and the main memory, it serves as a buffer. It is used to store the data and programs that
the CPU uses the most frequently.
Advantages of cache memory:
• It is faster than the main memory.
• When compared to the main memory, it takes less time to access it.
• It holds programs that can be executed within a short period of time.
• It stores data for temporary use.
Disadvantages of cache memory:
• Because of the semiconductors used, it is very expensive.
• The size of the cache (amount of data it can store) is usually small.
Memory unit:
• Memories are made up of registers.
• Each register in the memory is one storage location.
• The storage location is also called a memory location. Memory locations are identified using Address.
• The total number of bits a memory can store is its capacity.
• A storage element is called a Cell.
• Each register is made up of a storage element in which one bit of data is stored.
• The data in a memory are stored and retrieved by the process called writing and reading respectively.
• A word is a group of bits in which a memory unit stores binary information.
• A word with a group of 8 bits is called a byte.
• A memory unit consists of data lines, address selection lines, and control lines that specify the
direction of transfer. The block diagram of a memory unit is shown below:
Following are the important roles of the memory manager in a computer system:
o The memory manager keeps track of the status of each memory location, whether it is free or
allocated. It abstracts primary memory so that software perceives a large memory as being
allocated to it.
o Memory manager permits computers with a small amount of main memory to execute programs
larger than the size or amount of available memory. It does this by moving information back and
forth between primary memory and secondary memory by using the concept of swapping.
o The memory manager is responsible for protecting the memory allocated to each process from being
corrupted by another process. If this is not ensured, then the system may exhibit unpredictable
behavior.
o Memory managers should enable sharing of memory space between processes. Thus, two programs
can reside at the same memory location although at different times.
Memory management Techniques:
The Memory management Techniques can be classified into following main categories:
o Contiguous memory management schemes
o Non-Contiguous memory management schemes
Contiguous memory management schemes:
In a Contiguous memory management scheme, each program occupies a single contiguous block of storage
locations, i.e., a set of memory locations with consecutive addresses.
Single contiguous memory management schemes:
The Single contiguous memory management scheme is the simplest memory management scheme used in
the earliest generation of computer systems. In this scheme, the main memory is divided into two contiguous
areas or partitions. The operating systems reside permanently in one partition, generally at the lower
memory, and the user process is loaded into the other partition.
Advantages of Single contiguous memory management schemes:
o Simple to implement.
o Easy to manage and design.
o In a single contiguous memory management scheme, once a process is loaded, it is given the full
processor time, and no other process will interrupt it.
Disadvantages of Single contiguous memory management schemes:
o Wastage of memory space due to unused memory as the process is unlikely to use all the available
memory space.
o The CPU remains idle, waiting for the disk to load the binary image into the main memory.
o A program cannot be executed if it is too large to fit in the entire available main memory space.
o It does not support multiprogramming, i.e., it cannot handle multiple programs simultaneously.
Multiple Partitioning:
The single contiguous memory management scheme is inefficient as it limits computers to executing only
one program at a time, resulting in wastage of memory space and CPU time. The problem of inefficient
CPU use can be overcome using multiprogramming, which allows more than one program to run
concurrently. To switch between two processes, the operating system needs to load both processes into
main memory. The operating system needs to divide the available main memory into multiple parts to
load multiple processes into main memory. Thus multiple processes can reside in main memory
simultaneously. The multiple partitioning schemes are:
o Fixed Partitioning
o Dynamic Partitioning
Fixed Partitioning
The main memory is divided into several fixed-sized partitions in a fixed partition memory management
scheme or static partitioning. These partitions can be of the same size or different sizes. Each partition can
hold a single process. The number of partitions determines the degree of multiprogramming, i.e., the
maximum number of processes in memory. These partitions are made at the time of system generation and
remain fixed after that.
Advantages of Fixed Partitioning memory management schemes:
o Simple to implement.
o Easy to manage and design.
Disadvantages of Fixed Partitioning memory management schemes:
o This scheme suffers from internal fragmentation.
o The number of partitions is specified at the time of system generation.
Dynamic Partitioning
The dynamic partitioning scheme was designed to overcome the problems of the fixed partitioning
scheme. In a dynamic partitioning scheme, each process occupies only as much memory as it requires
when loaded for processing. Requesting processes are allocated memory until the entire physical memory
is exhausted or the remaining space is insufficient to hold the requesting process. In this scheme the
partitions used are of variable size, and the number of partitions is not defined at system generation time.
Advantages of Dynamic Partitioning memory management schemes:
o Simple to implement.
o Easy to manage and design.
Disadvantages of Dynamic Partitioning memory management schemes:
o This scheme suffers from external fragmentation, as variable-sized holes accumulate between
allocated partitions.
o Managing variable-sized partitions at run time is more complex than managing fixed partitions.
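The allocation step of dynamic partitioning can be sketched in code. The following is a minimal, illustrative first-fit allocator over a list of free holes; the names and the first-fit placement policy are assumptions chosen for illustration, not something specified in the text above.

```python
# Minimal sketch of dynamic partitioning with first-fit allocation.
# Free memory is kept as a list of (start, size) holes; a process is
# placed in the first hole large enough, and that hole is shrunk.
# All names here are illustrative, not from any real OS API.

def first_fit(holes, size):
    """Return the start address for a request, updating the hole list.
    Returns None if no hole is large enough."""
    for i, (start, hole_size) in enumerate(holes):
        if hole_size >= size:
            if hole_size == size:
                holes.pop(i)          # exact fit: the hole disappears
            else:
                holes[i] = (start + size, hole_size - size)
            return start
    return None

holes = [(0, 100), (200, 50), (300, 200)]
print(first_fit(holes, 120))   # 300: skips the first two (too small) holes
print(holes)                   # [(0, 100), (200, 50), (420, 80)]
```

The leftover pieces, e.g. the (420, 80) hole above, are exactly the external fragmentation mentioned in the disadvantages.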
Non-Contiguous memory management schemes:
In a Non-Contiguous memory management scheme, the program is divided into different blocks and loaded
at different portions of the memory that need not necessarily be adjacent to one another. This scheme can be
classified depending upon the size of blocks and whether the blocks reside in the main memory or not.
Cache memory
Cache memory is a special, very high-speed memory. It is used to speed up and synchronize with the
high-speed CPU. Cache memory is costlier than main memory or disk memory but more economical than
CPU registers. Cache memory is an extremely fast memory type that acts as a buffer between RAM and
the CPU. It holds frequently requested data and instructions so that they are immediately available to the
CPU when needed.
Cache memory is used to reduce the average time to access data from the main memory. The cache is a
smaller and faster memory which stores copies of the data from frequently used main memory locations.
A CPU has several independent caches, which store instructions and data.
Levels of memory:
• Level 1 or Registers –
These hold the data and instructions that the CPU is operating on immediately. The most commonly
used registers are the accumulator, program counter, address register, etc.
• Level 2 or Cache memory –
It is a very fast memory with a short access time, where data is temporarily stored for faster access.
• Level 3 or Main Memory –
It is the memory on which the computer currently works. It is small in size, and once power is off the
data no longer stays in this memory.
• Level 4 or Secondary Memory –
It is external memory which is not as fast as main memory, but data stays permanently in this memory.
Cache Performance:
When the processor needs to read or write a location in main memory, it first checks for a corresponding
entry in the cache.
• If the processor finds that the memory location is in the cache, a cache hit has occurred and data is
read from cache
• If the processor does not find the memory location in the cache, a cache miss has occurred. For a
cache miss, the cache allocates a new entry and copies in data from main memory, then the request is
fulfilled from the contents of the cache.
The performance of cache memory is frequently measured in terms of a quantity called Hit ratio.
Hit ratio = hit / (hit + miss) = no. of hits/total accesses
We can improve cache performance by using a larger cache block size and higher associativity, and by
reducing the miss rate, the miss penalty, and the time to hit in the cache.
Cache Measures
• Cache: A cache is small, fast storage used to improve the average access time to slow memory. The
idea applies wherever buffering is employed to reuse commonly occurring items, e.g. file caches,
name caches, and so on.
• Cache Hit: CPU finds a requested data item in the cache.
• Cache Miss: The requested item is not in the cache at the time of access.
• Block is a fixed size collection of data, retrieved from memory and placed into the cache.
• Advantage of Temporal Locality: When data is accessed from slower memory, move it to faster
memory; when data in faster memory has not been used recently, move it back to slower memory.
• Advantage of Spatial Locality: When a word must be moved from slower to faster memory, move
adjacent words at the same time.
• Hit Rate (Hit Ratio): Fraction of accesses that are hits at a given level of the hierarchy.
• Hit Time: Time required to access a level of the hierarchy, including the time to determine whether
the access is a hit or a miss.
• Miss Rate (Miss Ratio): Fraction of accesses that are misses at a given level.
• Miss Penalty: Extra time required to fetch a block into some level from the next level down.
• The address space is usually broken into fixed size blocks, called pages. At each time, each page
resides either in main memory or on disk.
• Average memory access time is a useful measure to evaluate the performance of a memory-hierarchy
configuration.
Average Memory Access Time = Memory Hit Time + Memory Miss Rate x Miss Penalty
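The hit-ratio and average-memory-access-time formulas above can be checked with a small worked example; the hit time, miss rate and miss penalty values below are illustrative, not from the text.

```python
# Worked example of the average-memory-access-time (AMAT) formula above.
# The numbers (hit time, miss rate, miss penalty) are illustrative.

def amat(hit_time, miss_rate, miss_penalty):
    return hit_time + miss_rate * miss_penalty

# e.g. 1 ns hit time, 5% miss rate, 100 ns miss penalty:
print(amat(1.0, 0.05, 100.0))   # 1 + 0.05 * 100 = 6.0 ns

# Hit ratio from raw counts, as defined earlier:
hits, misses = 950, 50
hit_ratio = hits / (hits + misses)
print(hit_ratio)                # 0.95
```

Note how a miss penalty 100 times the hit time still only costs 5 extra ns on average when the miss rate is low, which is why reducing the miss rate is so effective.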
Cache Mapping
3. Discuss in detail about various cache mapping techniques.
• Cache memory mapping is the way in which we map or organize data in cache memory, this is done
for efficiently storing the data which then helps in easy retrieval of the same.
• The three different types of mapping used for the purpose of cache memory are as follow,
✓ Direct Mapping
✓ Associative Mapping
✓ Set-Associative Mapping
Direct Mapping:
• In direct mapping, each memory block is assigned to a specific line in the cache.
• If a line is already occupied by a memory block when a new block needs to be loaded, the old
block is trashed.
• An address is split into two parts: an index field and a tag field.
• The cache stores the tag field, whereas the rest of the address is used to locate the block.
• Direct mapping's performance is directly proportional to the hit ratio.
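The address split used by direct mapping can be sketched as follows; the block size and number of cache lines are assumed values for illustration only.

```python
# Sketch of direct-mapped address decomposition. Assumed parameters:
# 16-byte blocks and 128 cache lines (both illustrative).
BLOCK_SIZE = 16     # bytes per block -> 4 offset bits
NUM_LINES  = 128    # lines in cache  -> 7 index bits

def split_address(addr):
    """Split a byte address into (tag, index, offset) fields."""
    offset = addr % BLOCK_SIZE
    index  = (addr // BLOCK_SIZE) % NUM_LINES
    tag    = addr // (BLOCK_SIZE * NUM_LINES)
    return tag, index, offset

# Two addresses BLOCK_SIZE * NUM_LINES bytes apart get the same index,
# so they compete for the same cache line (a conflict):
print(split_address(0x1234))                            # (2, 35, 4)
print(split_address(0x1234 + BLOCK_SIZE * NUM_LINES))   # (3, 35, 4)
```

The two printed tuples share index 35 but differ in tag, which is exactly the conflict situation in which direct mapping trashes the old block.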
Associative Mapping:
• In this type of mapping, associative memory is used to store both the content and the address of the
memory word. Any block can go into any line of the cache.
• This means that the word-id bits are used to identify which word in the block is needed, and the tag
becomes all of the remaining bits.
• This enables the placement of any word at any place in the cache memory.
• It is considered to be the fastest and the most flexible mapping form.
Set-Associative Mapping:
• This form of mapping is an enhanced form of direct mapping in which the drawbacks of direct
mapping are removed.
• Set-associative mapping addresses the problem of possible thrashing in the direct mapping method.
• It does this by grouping a few lines together into a set, instead of having exactly one line that a
block can map to in the cache.
• A block in memory can then map to any one of the lines of a specific set.
• Set-associative mapping allows each index address in the cache to hold two or more words from
main memory.
• Set-associative cache mapping combines the best of direct and associative cache mapping techniques.
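A set-associative lookup can be sketched in a few lines; the set count, way count and the FIFO eviction inside a set are illustrative assumptions, not part of the definition above.

```python
# Minimal sketch of a 2-way set-associative lookup (parameters illustrative).
# Each set holds up to 2 tags; a block may go in either way of its set.
NUM_SETS = 4
WAYS = 2
cache = [[] for _ in range(NUM_SETS)]   # each set is a list of resident tags

def access(block_number):
    """Return 'hit' or 'miss'; on a miss evict the oldest tag in the set."""
    s = block_number % NUM_SETS
    tag = block_number // NUM_SETS
    if tag in cache[s]:
        return "hit"
    if len(cache[s]) == WAYS:
        cache[s].pop(0)                 # evict the oldest way in this set
    cache[s].append(tag)
    return "miss"

# Blocks 0, 4 and 8 all map to set 0; two can coexist, the third evicts one:
print([access(b) for b in (0, 4, 0, 8, 4)])
# ['miss', 'miss', 'hit', 'miss', 'hit']
```

Under direct mapping the same sequence would miss on every access to blocks 0 and 4, so this illustrates how grouping lines into sets reduces thrashing.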
Uses of Cache
• Usually, the cache memory can store a reasonable number of blocks at any given time, but this
number is small compared to the total number of blocks in the main memory.
• The correspondence between the main memory blocks and those in the cache is specified by a
mapping function.
Types of Cache
• Primary Cache – A primary cache is always located on the processor chip. This cache is small and
its access time is comparable to that of processor registers.
• Secondary Cache – secondary cache is placed between the primary cache and the rest of the
memory. It is referred to as the level 2 (L2) cache. Often, the Level 2 cache is also housed on the
processor chip.
Locality of Reference
• Since the size of cache memory is small compared to main memory, only part of main memory can
be held in the cache at a time.
• Which part of main memory should be given priority and loaded into the cache is decided based on
locality of reference.
Types of Locality of Reference
• Spatial Locality of reference – If a memory location is referenced, locations in its close proximity
are likely to be referenced soon. This is why, when a word is brought into the cache, the whole
block containing it is loaded.
• Temporal Locality of reference – If a memory location is referenced, it is likely to be referenced
again soon. Recently used words are therefore kept in the cache, and a least recently used (LRU)
policy exploits this property when choosing which block to replace.
Cache replacement Techniques:
In an operating system that uses paging for memory management, a page replacement algorithm is needed
to decide which page needs to be replaced when a new page comes in.
Page Fault: A page fault happens when a running program accesses a memory page that is mapped into
the virtual address space but not loaded in physical memory. Since actual physical memory is much
smaller than virtual memory, page faults happen. In case of a page fault, Operating System might have to
replace one of the existing pages with the newly needed page. Different page replacement algorithms
suggest different ways to decide which page to replace. The target for all algorithms is to reduce the
number of page faults.
Page Replacement Algorithms:
1. First In First Out (FIFO): This is the simplest page replacement algorithm. In this algorithm, the
operating system keeps all pages in memory in a queue, with the oldest page at the front of the queue.
When a page needs to be replaced, the page at the front of the queue is selected for removal.
Example 1: Consider page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number of page
faults.
Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3 page
faults. When 3 comes, it is already in memory —> 0 page faults. Then 5 comes; it is not in memory, so it
replaces the oldest page, i.e. 1 —> 1 page fault. 6 comes; it is also not in memory, so it replaces the
oldest page, i.e. 3 —> 1 page fault. Finally, when 3 comes it is not in memory, so it replaces 0 —> 1
page fault. Total: 6 page faults.
Belady's anomaly shows that it is possible to have more page faults when increasing the number of page
frames while using the First In First Out (FIFO) page replacement algorithm. For example, for the
reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 with 3 frames we get 9 total page faults, but if we
increase the frames to 4, we get 10 page faults.
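Both the worked example and Belady's anomaly can be verified with a short FIFO simulation:

```python
# FIFO page-replacement simulation, matching the worked examples above.
# Counts page faults for a reference string with a given number of frames.

def fifo_faults(refs, frames):
    memory, faults = [], 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)        # evict the oldest page (front of queue)
            memory.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))           # Example 1: 6 faults

# Belady's anomaly: more frames, yet more faults for this string.
belady = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(belady, 3), fifo_faults(belady, 4))  # 9 10
```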
2. Optimal Page replacement: In this algorithm, pages are replaced which would not be used for the
longest duration of time in the future.
Example 2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames.
Find the number of page faults.
Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 page
faults. 0 is already there —> 0 page fault. When 3 comes it takes the place of 7, because 7 is not used
for the longest duration of time in the future —> 1 page fault. 0 is already there —> 0 page fault. 4
takes the place of 1 —> 1 page fault.
For the rest of the reference string —> 0 page faults, because the pages are already available in
memory. Total: 6 page faults.
Optimal page replacement is perfect but not possible in practice, as the operating system cannot know
future requests. Its use is to set up a benchmark against which other replacement algorithms can be
analyzed.
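Example 2 can be verified with a short simulation of the optimal policy; note that it needs the full reference string in advance, which is exactly why the algorithm is only usable as a benchmark.

```python
# Optimal (Belady's) page replacement: on a fault with full frames,
# evict the resident page whose next use lies farthest in the future
# (or that is never used again). Matches Example 2 above.

def optimal_faults(refs, frames):
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) == frames:
            future = refs[i + 1:]
            # Pages never used again rank past the end of 'future'.
            victim = max(memory,
                         key=lambda p: future.index(p) if p in future
                         else len(future) + 1)
            memory.remove(victim)
        memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(optimal_faults(refs, 4))   # 6 faults, as in Example 2
```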
3. Least Recently Used (LRU): In this algorithm, the page that has been least recently used will be
replaced.
Example 3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames.
Find the number of page faults.
Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 page
faults. 0 is already there —> 0 page fault. When 3 comes it takes the place of 7, because 7 is the least
recently used —> 1 page fault. 0 is already in memory —> 0 page fault. 4 takes the place of 1 —> 1
page fault. For the rest of the reference string —> 0 page faults, because the pages are already available
in memory. Total: 6 page faults.
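Example 3 can likewise be verified with a short LRU simulation:

```python
# LRU page replacement: evict the page not used for the longest time.
# The list is kept in recency order: front = least recently used.
# Matches Example 3 above.

def lru_faults(refs, frames):
    memory, faults = [], 0
    for page in refs:
        if page in memory:
            memory.remove(page)      # move to the most-recent position
            memory.append(page)
            continue
        faults += 1
        if len(memory) == frames:
            memory.pop(0)            # front of list = least recently used
        memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(lru_faults(refs, 4))   # 6 faults, as in Example 3
```

For this particular reference string LRU happens to give the same fault count as the optimal policy, but in general LRU only approximates it.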
4. Most Recently Used (MRU): In this algorithm, the page that has been used most recently will be
replaced. Belady's anomaly can occur in this algorithm.
4. Explain in detail about virtual memory with an example.(Nov/Dec 2019) (Nov/Dec 2021) or
Discuss the concept of virtual memory and explain how a virtual memory system is implemented, pointing
out the hardware and software support. (Nov/Dec 2017)Nov/Dec 2020.
VIRTUAL MEMORY
• Virtual memory divides physical memory into blocks (called pages or segments) and allocates them
to different processes.
• With virtual memory, the CPU produces virtual addresses that are translated by a combination of
HW and SW to physical addresses, which accesses main memory.
• The process is called memory mapping or address translation.
• Today, the two memory-hierarchy levels controlled by virtual memory are DRAMs and magnetic
disks.
• Virtual Memory manages the two levels of the memory hierarchy represented by main memory and
secondary storage.
• Figure below shows the mapping of virtual memory to physical memory for a program with four
pages.
• This separation provides large virtual memory for programmers when only small physical memory is
available.
• Virtual memory is a memory management capability of an OS that uses hardware and software to
allow a computer to compensate for physical memory shortages by temporarily transferring data
from random access memory (RAM) to disk storage.
• Virtual address space is increased using active memory in RAM and inactive memory in hard disk
drives (HDDs) to form contiguous addresses that hold both the application and its data.
• Computers have a finite amount of RAM so memory can run out, especially when
multiple programs run at the same time.
• A system using virtual memory can load larger programs or multiple programs running at the same
time, allowing each one to operate as if it has infinite memory and without having to purchase more
RAM.
• As part of the process of copying virtual memory into physical memory, the OS divides memory into
page files or swap files that contain a fixed number of addresses.
• Each page is stored on a disk and when the page is needed, the OS copies it from the disk to main
memory and translates the virtual addresses into real addresses.
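The translation step described above can be sketched as a page-table lookup. The page size and the page-table contents below are illustrative assumptions; a real MMU does this in hardware, with the OS handling the fault case.

```python
# Sketch of virtual-to-physical address translation with a page table.
# Page size and table contents are illustrative assumptions.
PAGE_SIZE = 4096                       # 4 KB pages -> 12 offset bits

# page_table[virtual page] -> physical frame, or None if the page is on disk
page_table = {0: 5, 1: 2, 2: None, 3: 7}

def translate(vaddr):
    """Return the physical address, or raise on a page fault."""
    vpage, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table.get(vpage)
    if frame is None:
        raise RuntimeError("page fault: OS must load the page from disk")
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))   # virtual page 1 -> frame 2 -> 0x2234
```

Note that the offset within the page is copied through unchanged; only the page number is mapped, which is what makes fixed-size pages cheap to translate.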
5. Discuss the concept of Programmed I/O. Discuss about Programmed I/Os associated with
computers. (Apr/May 2018)
Programmed I/O
• If I/O operations are completely controlled by the CPU, the computer is said to be using
programmed I/O. In this case, the CPU executes programs that initiate, direct and terminate the I/O
operations.
• If a part of the main memory address space is assigned to I/O ports, then such systems are called as
Memory-Mapped I/O systems.
• In I/O-mapped I/O systems, the memory and I/O address space are separate. Similarly the control
lines used for activating memory and I/O devices are also different. Two sets of control lines are
available. READ M and WRITE M are related with memory and READ I/O and WRITE I/O are
related with I/O devices.
S.No. | Parameter | Memory-mapped I/O | I/O-mapped I/O
1. | Address space | Memory and I/O devices share the entire address space. | Memory and I/O devices have separate address spaces.
2. | Hardware | No additional hardware required. | Additional hardware required.
3. | Implementation | Easy to implement. | Difficult to implement.
4. | Address | The same address cannot be used to refer to both memory and an I/O device. | The same address can be used to refer to both memory and an I/O device.
5. | Control lines | Memory control lines are used to control I/O devices. | Different sets of control lines are used to control memory and I/O.
6. | Control lines used | READ, WRITE | READ M, WRITE M, READ I/O, WRITE I/O
I/O instructions
Two I/O instructions are used to implement programmed I/O.
• IN: The instruction IN X causes a word to be transferred from I/O port X to the accumulator
register A.
• OUT: The instruction OUT X transfers a word from the accumulator register A to the I/O port X.
Limitations of programmed I/O
The programmed I/O method has two limitations:
• The speed of the CPU is reduced because of the low speed of the I/O devices.
• Most of the CPU time is wasted in busy-waiting for the I/O device to become ready.
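The wasted CPU time can be illustrated with a sketch of the busy-wait loop at the heart of programmed I/O; the device model here is entirely hypothetical, meant only to show the polling pattern.

```python
# Sketch of why programmed I/O wastes CPU time: the processor busy-waits,
# polling a status flag before every transfer. The device is hypothetical,
# simulated as becoming ready only on every third status check.

class SlowDevice:
    def __init__(self, data):
        self.data = list(data)
        self.polls = 0                 # counts status checks (wasted work)
    def ready(self):
        self.polls += 1
        return self.polls % 3 == 0     # pretend: ready on every 3rd poll
    def read_port(self):
        return self.data.pop(0)        # models the IN instruction

def programmed_input(device, count):
    words = []
    for _ in range(count):
        while not device.ready():      # CPU spins here doing no useful work
            pass
        words.append(device.read_port())
    return words

dev = SlowDevice([10, 20, 30])
print(programmed_input(dev, 3), "polls:", dev.polls)   # [10, 20, 30] polls: 9
```

Three useful transfers cost nine status checks here; with a genuinely slow device the ratio is far worse, which motivates interrupt-driven I/O and DMA in the next sections.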
6. Describe the DMA controller in a computer system with a neat block diagram. Explain
mechanism Direct Memory Access. (Nov/Dec2012, 2013, 2015, 2016) (Apr / May 2016, 2017,
Nov /Dec 2011) (Nov/Dec 2018).With a neat sketch explain the working principle of DMA.
(Apr/May 2019)
DIRECT MEMORY ACCESS
• A special control unit may be provided to allow the transfer of a large block of data at high speed
directly between an external device and main memory, without continuous intervention by the
processor. This approach is called DMA.
• DMA transfers are performed by a control circuit called the DMA Controller.
To initiate the transfer of a block of words, the processor sends,
i) Starting address
ii) Number of words in the block
iii) Direction of transfer.
• While a block of data is being transferred, the DMA controller increments the memory address for
successive words, keeps track of the number of words transferred, and informs the processor by
raising an interrupt signal when the transfer is complete.
• While the DMA transfer is taking place, the program that requested the transfer cannot continue,
but the processor can be used to execute another program.
• After DMA transfer is completed, the processor returns to the program that requested the transfer.
R/W determines the direction of transfer:
• When R/W = 1, the DMA controller reads data from memory and transfers it to the I/O device.
• When R/W = 0, the DMA controller performs a write operation (data is transferred from the I/O
device to memory).
• Done = 1 indicates that the controller has completed transferring a block of data and is ready to
receive another command.
• IE = 1 (Interrupt Enable) causes the controller to raise an interrupt after it has completed
transferring the block of data.
• IRQ = 1 indicates that the controller has requested an interrupt.
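The interaction of the status/control bits described above can be modelled as a small sketch; the bit positions are illustrative assumptions, not taken from any real controller's datasheet.

```python
# Sketch of the DMA-controller status/control bits described above
# (Done, IE, IRQ, R/W), packed into one register. Bit positions are
# illustrative assumptions, not from a specific controller.
DONE = 1 << 0   # transfer complete
IE   = 1 << 1   # interrupt enable
IRQ  = 1 << 2   # interrupt requested
RW   = 1 << 3   # 1 = read from memory to I/O device

def finish_transfer(status):
    """Model the controller completing a block: set Done, and raise
    IRQ only if interrupts are enabled (IE = 1)."""
    status |= DONE
    if status & IE:
        status |= IRQ
    return status

print(bin(finish_transfer(IE | RW)))   # 0b1111: Done, IE, IRQ, R/W all set
print(bin(finish_transfer(RW)))        # 0b1001: Done set, but no IRQ (IE = 0)
```

This captures the rule in the bullets above: Done is always set at completion, but IRQ is raised only when IE was enabled by the program.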
• A DMA controller may connect a high-speed network to the computer bus, and the disk controller for
two disks may also have DMA capability, providing two DMA channels.
• To start a DMA transfer of a block of data from main memory to one of the disks, the program
writes the address and the word-count information into the registers of the corresponding channel of
the disk controller.
• When the DMA transfer is completed, this fact is recorded in the status and control registers of the
DMA channel, i.e. Done = IRQ = IE = 1.
Execution of a DMA-operation (single block transfer)
• The CPU prepares the DMA operation by constructing a descriptor (1) containing all the
information the DMAC needs to perform the operation independently (an offload engine
for data transfer).
• It initiates the operation by writing a command to a register in the DMAC (2a) or to a specially
assigned memory area (command area), where the DMAC can poll for the command and/or the
descriptor (2b). The DMAC then addresses the device data register (3) and reads the data into a
temporary data register (4).
• In another bus transfer cycle, it addresses the memory block (5) and writes the data from the
temporary data register to the memory block (6).
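The six numbered steps can be sketched as a small simulation. The descriptor field names and the device model are illustrative assumptions, not taken from any specific DMAC.

```python
# Sketch of a single-block DMA transfer driven by a descriptor.
# Descriptor field names and the device model are illustrative assumptions.
memory = [0] * 16                     # main memory as a word array

def device_read():                    # source behind the device data register
    device_read.n += 1
    return 100 + device_read.n        # successive words from the device
device_read.n = 0

def dma_transfer(descriptor):
    addr = descriptor["start_address"]        # (1) descriptor built by CPU
    for _ in range(descriptor["word_count"]): # (2) command starts the DMAC
        temp = device_read()                  # (3)-(4) word into temp register
        memory[addr] = temp                   # (5)-(6) write word to memory
        addr += 1                             # DMAC increments the address
    return {"done": 1, "irq": descriptor["ie"]}  # status after the block

status = dma_transfer({"start_address": 4, "word_count": 3, "ie": 1})
assert memory[4:7] == [101, 102, 103]
assert status == {"done": 1, "irq": 1}
```

Note that the CPU appears only at setup; the copy loop runs entirely inside `dma_transfer`, mirroring the point that the processor is not involved word by word.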
Cycle Stealing:
• Requests by DMA devices for using the bus have higher priority than processor requests.
• Top priority is given to high-speed peripherals such as disks, high-speed network interfaces, and
graphics display devices.
• Since the processor originates most memory access cycles, the DMA controller can be said to steal
memory cycles from the processor. This interleaving technique is called cycle stealing.
• Burst Mode: The DMA controller may be given exclusive access to the main memory to transfer a
block of data without interruption. This is known as Burst/Block Mode.
• Bus Master: The device that is allowed to initiate data transfers on the bus at any given time is
called the bus master.
BUS ARBITRATION:
7. Explain in detail about the Bus Arbitration techniques in DMA. (Nov 2011, 2012, 2014) Apr /
May 2011, 2017, May 2013
• Bus Arbitration: It is the process by which the next device to become the bus master is selected and
the bus mastership is transferred to it.
Types: There are 2 approaches to bus arbitration. They are,
• Centralized arbitration (A single bus arbiter performs arbitration)
• Distributed arbitration (all devices participate in the selection of next bus master).
Centralized Arbitration:
• Here the processor is normally the bus master, and it may grant bus mastership to one of the DMA controllers.
• A DMA controller indicates that it needs to become the bus master by activating the Bus Request
line (BR) which is an open drain line.
• The signal on BR is the logical OR of the bus requests from all devices connected to it. When BR is
activated, the processor activates the Bus Grant signal (BG1), indicating to the DMA controllers that
they may use the bus when it becomes free.
• This signal is connected to all devices using a daisy chain arrangement.
• If a DMA controller requests the bus, it blocks the propagation of the grant signal to the other
devices, and it indicates to all devices that it is using the bus by activating the open-collector line
Bus Busy (BBSY).
Figure: A simple arrangement for bus arbitration using a daisy chain. The processor connects to DMA controller 1 and DMA controller 2 through the BR (Bus Request), BG1/BG2 (Bus Grant), and BBSY (Bus Busy) lines.
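The daisy-chain behavior, where the first requesting device along the chain absorbs the grant and blocks it from propagating further, can be sketched as:

```python
# Sketch of daisy-chained bus-grant propagation (centralized arbitration).
# List order models the physical chain; closest to the processor first.
def propagate_grant(requests):
    """Return the index of the device that becomes bus master, or None.

    The grant signal travels down the chain; the first device with an
    active request absorbs it (blocking further propagation) and would
    then assert BBSY to claim the bus.
    """
    for i, wants_bus in enumerate(requests):
        if wants_bus:
            return i          # this device becomes bus master
    return None               # grant passes through unused

# DMA controller 2 (index 1) requests; controller 1 does not.
assert propagate_grant([False, True]) == 1
# When both request, the one nearer the processor wins.
assert propagate_grant([True, True]) == 0
```

The second assertion shows the well-known property of daisy chaining: position in the chain fixes the priority.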
Figure: Sequence of signals during bus arbitration. DMA controller 2 asserts BR; the processor asserts the BG1 signal, which propagates through controller 1 to DMA controller 2; controller 2 asserts BBSY and becomes bus master, and bus mastership later returns to the processor.
Distributed Arbitration:
• Each device on the bus is assigned a 4-bit ID. When one or more devices request the bus, they assert
the Start-Arbitration signal and place their 4-bit ID numbers on four open-collector lines, ARB0 to
ARB3.
• A winner is selected as a result of the interaction among the signals transmitted over these lines.
• The net outcome is that the code on the four lines represents the request that has the highest ID
number.
• The drivers are of the open-collector type. Hence, if the input to one driver is equal to 1, the bus line
is in the low-voltage state, irrespective of the inputs applied to the other drivers connected to the same line.
• E.g., assume two devices A and B have IDs 5 (0101) and 6 (0110). The OR of the two patterns placed on the arbitration lines is 0111.
• Each device compares the pattern on the arbitration lines to its own ID, starting from the MSB.
• If it detects a difference at any bit position, it disables its drivers at that bit position and all
lower-order positions. It does this by placing 0 at the inputs of these drivers.
• Device A detects a difference on line ARB1; hence it disables its drivers on lines ARB1 and ARB0. This
causes the pattern on the arbitration lines to change to 0110, which means that B has won the
contention.
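The interaction above can be simulated: the open-collector lines carry the wired-OR of every competing ID, and each device that sees a 1 where its own bit is 0 drops out, changing the pattern until only the highest ID remains. This is a sketch of the scheme as described, not any specific bus standard.

```python
# Sketch of distributed arbitration over open-collector lines ARB3..ARB0.
# The wired-OR means the lines carry the OR of every active 4-bit ID.
def arbitrate(ids):
    """Simulate the MSB-first dropout; returns the winning device ID."""
    active = list(ids)
    for bit in (3, 2, 1, 0):                   # examine ARB3 down to ARB0
        line = 0
        for device_id in active:               # wired-OR of active drivers
            line |= device_id
        if line & (1 << bit):
            # devices with a 0 at this position detect a difference and
            # disable their drivers, leaving the contest
            active = [d for d in active if d & (1 << bit)]
    return active[0]

# Devices A (0101) and B (0110): the lines initially show 0111; A drops
# out at ARB1 and the final pattern is B's ID, 0110.
assert arbitrate([0b0101, 0b0110]) == 0b0110
```

The net outcome matches the text: the surviving code on the lines is always the highest competing ID.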
INPUT DEVICES:
8. Explain in detail about input devices with an example.
Input Devices
The input devices are the hardware used to transfer input to the computer. The data can be
in the form of text, graphics, and sound. Output devices display data from the memory of the computer;
output can be text, numeric data, lines, polygons, and other objects.
1. Keyboard
2. Mouse
3. Trackball
4. Spaceball
5. Joystick
6. Light Pen
7. Digitizer
8. Touch Panels
9. Voice Recognition
10. Image Scanner
Keyboard:
The most commonly used input device is the keyboard. Data is entered by pressing a set of labeled keys.
The standard keyboard has 101 keys and is commonly called a QWERTY keyboard, after the arrangement of the top row of letter keys.
The keyboard has alphabetic as well as numeric keys. Some special keys are also available.
1. Numeric Keys: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
2. Alphabetic keys: a to z (lower case), A to Z (upper case)
3. Special Control keys: Ctrl, Shift, Alt
4. Special Symbol Keys: ; , " ? @ ~ ? :
5. Cursor Control Keys: ↑ → ← ↓
6. Function Keys: F1, F2, F3, ..., F9.
7. Numeric Keyboard: It is on the right-hand side of the keyboard and used for fast entry of numeric
data.
Function of Keyboard:
Advantage:
Disadvantage:
1. Keyboard is not suitable for graphics input.
Mouse:
A mouse is a pointing device used to position the pointer on the screen. It is a small palm-sized box with
two or three buttons on top. Movement of the mouse along the x-axis moves the cursor horizontally,
and movement along the y-axis moves it vertically on the screen. The mouse cannot be used to enter
text; therefore, it is used in conjunction with a keyboard.
Advantage:
1. Easy to use
2. Not very expensive
Trackball
It is a pointing device similar to a mouse, mainly used in notebook or laptop computers instead of a
mouse. It is a ball that is half inserted in a socket; by moving fingers over the ball, the pointer can be
moved.
Advantage:
1. Trackball is stationary, so it does not require much space to use it.
2. Compact Size
Spaceball:
It is similar to a trackball, but it can move in six directions, whereas a trackball can move in only two.
The movement is recorded by strain gauges, which respond to the pressure applied as the ball is pushed
and pulled in various directions. The ball has a diameter of around 7.5 cm and is mounted in the base
using rollers. One-third of the ball is inside the box; the rest is outside.
Applications:
It is used in the area of simulation and modeling.
Joystick:
A joystick is also a pointing device, used to change the cursor position on a monitor screen. It is
a stick with a spherical ball at both its lower and upper ends, as shown in the figure. The lower spherical ball
moves in a socket, and the joystick can be moved in all four directions. The function of a joystick is similar to
that of the mouse. It is mainly used in Computer-Aided Design (CAD) and for playing computer games.
Light Pen
A light pen (similar to a pen) is a pointing device used to select a displayed menu item or draw
pictures on the monitor screen. It consists of a photocell and an optical system placed in a small tube. When
its tip is moved over the monitor screen and the pen button is pressed, its photocell sensing element detects the
screen location and sends the corresponding signal to the CPU.
Uses:
1. Light pens can be used to input coordinate positions by providing the necessary arrangements.
2. With suitable background color or intensity, a light pen can be used as a locator.
3. It is used as a standard pick device with many graphics systems.
4. It can be used as a stroke input device.
5. It can be used as a valuator.
Digitizers:
The digitizer is an operator input device which contains a large, smooth board (similar in appearance to
a mechanical drawing board) and an electronic tracking device which can be moved over the surface to
follow existing lines. The electronic tracking device contains a switch for the user to record the desired x and y
coordinate positions. The coordinates can be entered into computer memory, or stored on an off-line
storage medium such as magnetic tape.
Advantages:
Disadvantages:
1. Costly
2. Suitable only for applications which require high-resolution graphics.
Touch Panels:
A touch panel is a type of display screen that has a touch-sensitive transparent panel covering the screen. A
touch screen registers input when a finger or other object comes in contact with the screen.
When the wave signals are interrupted by contact with the screen, that location is recorded. Touch
screens have long been used in military applications.
OUTPUT DEVICES:
An output device is an electromechanical device which accepts data from a computer and translates it into
a form understandable by users.
1. Printers
2. Plotters
Printers:
Printer is the most important output device, which is used to print data on paper.
Types of Printers:
1. Impact Printers: The printers that print the characters by striking against the ribbon and onto the
papers are known as Impact Printers.
1. Character Printers
2. Line Printers
2. Non-Impact Printers: The printers that print the characters without striking against the ribbon and
onto the papers are called Non-Impact Printers. These printers print a complete page at a time, therefore,
also known as Page Printers.
1. Laser Printers
2. Inkjet Printers
Laser Printers:
These are non-impact page printers. They use laser light to produce the dots needed to form the characters
to be printed on a page, hence the name laser printers.
Step 1: The bits of data sent by the processing unit act as triggers to turn the laser beam on and off.
Step 2: The output device has a drum which is cleaned and given a positive electric charge. To print a page,
the modulated laser beam from the laser scans back and forth across the surface of the drum, altering the
electric charge on the parts of the drum surface exposed to the beam and so creating a difference in charge
between exposed and unexposed areas.
Step 3: The laser-exposed parts of the drum attract an ink powder known as toner.
Step 4: As the paper rolls over the drum, the toner image is transferred onto the paper.
Step 5: The ink particles are permanently fixed to the paper using either a heat or a pressure technique.
Step 6: The drum rotates back to the cleaner, where a rubber blade cleans off the excess ink and prepares the
drum to print the next page.
LCD Display Monitor
The flat-panel display refers to a class of video devices that have reduced volume, weight, and power
requirements in comparison to the CRT. You can hang them on walls or wear them on your wrists. Current
uses of flat-panel displays include calculators, video games, monitors, laptop computers, and graphics
displays.
• Emissive Displays − Emissive displays are devices that convert electrical energy into light. For
example, plasma panel and LED (Light-Emitting Diodes).
• Non-Emissive Displays − Non-emissive displays use optical effects to convert sunlight or light
from some other source into graphics patterns. For example, LCD (Liquid-Crystal Display).
ACCESSING I/O
10. How are I/O devices accessed by a computer?
➢ A simple arrangement to connect I/O devices to a computer is to use a single bus arrangement. The
bus enables all the devices connected to it to exchange information.
➢ Typically, it consists of three sets of lines used to carry address, data, and control signals. Each I/O
device is assigned a unique set of addresses.
➢ When the processor places a particular address on the address line, the device that recognizes this
address responds to the commands issued on the control lines.
➢ The processor requests either a read or a write operation, and the requested data are transferred over
the data lines. When I/O devices and the memory share the same address space, the arrangement is
called memory-mapped I/O.
➢ With memory-mapped I/O, any machine instruction that can access memory can be used to transfer
data to or from an I/O device.
➢ For example, if DATAIN is the address of the input buffer associated with the keyboard, the
instruction Move DATAIN, R0 reads the data from DATAIN and stores it into processor
register R0.
➢ Similarly, the instruction Move R0, DATAOUT sends the contents of register R0 to location
DATAOUT, which may be the output data buffer of a display unit or a printer.
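Memory-mapped I/O means the Move instructions above address device registers exactly like memory locations. A minimal sketch of the idea (the DATAIN/DATAOUT addresses are illustrative assumptions):

```python
# Sketch of memory-mapped I/O: device registers live in the same address
# space as memory, so ordinary load/store moves data to and from devices.
DATAIN, DATAOUT = 0x4000, 0x4004     # assumed register addresses

address_space = {DATAIN: ord("A"), DATAOUT: 0}  # device data registers

# Move DATAIN, R0  -- read the keyboard input buffer into register R0
R0 = address_space[DATAIN]
# Move R0, DATAOUT -- send R0 to the display/printer output buffer
address_space[DATAOUT] = R0

assert chr(address_space[DATAOUT]) == "A"
```

No special I/O instruction is needed here; any instruction that can access memory can reach the device registers, which is exactly the advantage the notes describe.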
➢ Most computer systems use memory-mapped I/O, but some processors have special In and Out
instructions to perform I/O transfers.
➢ When building a computer system based on these processors, the designer had the option of
connecting I/O devices to use the special I/O address space or simply incorporating them as part of
the memory address space.
➢ The I/O devices examine the low-order bits of the address bus to determine whether they should
respond. The hardware required to connect an I/O device to the bus includes the following components.
➢ The address decoder enables the device to recognize its address when this address appears on the
address lines.
➢ The data register holds the data being transferred to or from the processor. The status register
contains information relevant to the operation of the I/O device.
➢ Both the data and status registers are connected to the data bus and assigned unique addresses.
➢ The address decoder, the data and status registers, and the control circuitry required to coordinate I/O
transfers constitute the device‘s interface circuit.
➢ I/O devices operate at speeds that are vastly different from that of the processor. When a human
operator is entering characters at a keyboard, the processor is capable of executing millions of
instructions between successive character entries.
➢ An instruction that reads a character from the keyboard should be executed only when a character is
available in the input buffer of the keyboard interface.
➢ Also, we must make sure that an input character is read only once. This example illustrates program-
controlled I/O, in which the processor repeatedly checks a status flag to achieve the required
synchronization between the processor and an input or output device.
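Program-controlled I/O can be sketched as a busy-wait loop on the status flag. The device model below is a toy assumption (SIN here stands for the keyboard's input-ready bit):

```python
# Sketch of program-controlled (polled) input: the processor repeatedly
# tests a status flag and reads the data register only when it is set.
class KeyboardInterface:
    """Toy device model: becomes ready after a few status polls."""
    def __init__(self, char, ready_after):
        self.char, self.polls = char, 0
        self.ready_after = ready_after
    def sin(self):                     # status flag SIN
        self.polls += 1
        return self.polls >= self.ready_after
    def datain(self):                  # reading clears the ready condition
        self.polls = 0
        return self.char

def read_char(dev):
    while not dev.sin():               # poll until a character is available
        pass                           # processor cycles are wasted here
    return dev.datain()

kbd = KeyboardInterface("k", ready_after=3)
assert read_char(kbd) == "k"
assert kbd.polls == 0                  # flag cleared by the read
```

The wasted iterations of the `while` loop are the cost that interrupts and DMA, described next, are designed to remove.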
➢ We say that the processor polls the device. There are two other commonly used mechanisms for
implementing I/O operations: interrupts and direct memory access.
➢ In the case of interrupts, synchronization is achieved by having the I/O device send a special signal
over the bus whenever it is ready for a data transfer operation.
➢ Direct memory access is a technique used for high-speed I/O devices. It involves having the device
interface transfer data directly to or from the memory, without continuous involvement by the
processor.
➢ The routine executed in response to an interrupt request is called the interrupt service routine, which
is the PRINT routine in our example.
➢ Interrupts bear considerable resemblance to subroutine calls. Assume that an interrupt request arrives
during execution of instruction i in figure 1
➢ The processor first completes execution of instruction i. Then, it loads the program counter with the
address of the first instruction of the interrupt-service routine.
➢ For the time being, let us assume that this address is hardwired in the processor. After execution of
the interrupt-service routine, the processor has to come back to instruction i +1.
➢ Therefore, when an interrupt occurs, the current contents of the PC, which point to instruction i+1,
must be put in temporary storage in a known location.
➢ A Return-from interrupt instruction at the end of the interrupt-service routine reloads the PC from the
temporary storage location, causing execution to resume at instruction i +1.
➢ In many processors, the return address is saved on the processor stack. We should note that as part of
handling interrupts, the processor must inform the device that its request has been recognized so that
it may remove its interrupt-request signal.
➢ This may be accomplished by means of a special control signal on the bus. An interrupt-
acknowledge signal.
➢ The execution of an instruction in the interrupt-service routine that accesses a status or data register
in the device interface implicitly informs that device that its interrupt request has been recognized.
➢ So far, treatment of an interrupt-service routine is very similar to that of a subroutine. An important
departure from this similarity should be noted.
➢ A subroutine performs a function required by the program from which it is called. However, the
interrupt-service routine may not have anything in common with the program being executed at the
time the interrupt request is received.
➢ In fact, the two programs often belong to different users. Therefore, before starting execution of the
interrupt-service routine, any information that may be altered during the execution of that routine
must be saved.
➢ This information must be restored before execution of the interrupt program is resumed. In this way,
the original program can continue execution without being affected in any way by the interruption,
except for the time delay.
➢ The information that needs to be saved and restored typically includes the condition code flags and
the contents of any registers used by both the interrupted program and the interrupt-service routine.
➢ The task of saving and restoring information can be done automatically by the processor or by
program instructions.
➢ Most modern processors save only the minimum amount of information needed to maintain the
integrity of program execution, because saving and restoring registers involves memory transfers
that increase the total execution time and hence represent execution overhead.
➢ Saving registers also increases the delay between the time an interrupt request is received and the start
of execution of the interrupt-service routine. This delay is called interrupt latency.
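The minimal save/restore sequence can be sketched as: push the return address and status, run the ISR, then restore both on return-from-interrupt. The CPU model here is an illustrative assumption.

```python
# Sketch of minimal interrupt state saving: PC and status register are
# pushed before the ISR runs and restored by return-from-interrupt.
def handle_interrupt(cpu, isr_address, isr):
    stack = cpu["stack"]
    stack.append((cpu["pc"], cpu["ps"]))   # save return address (i+1) and PS
    cpu["pc"] = isr_address                # jump to the service routine
    isr(cpu)                               # ISR may alter PS freely
    cpu["pc"], cpu["ps"] = stack.pop()     # return-from-interrupt restores both

cpu = {"pc": 101, "ps": 0b0001, "stack": []}   # PC already points to i+1

def isr(c):
    c["ps"] = 0b1111                       # ISR clobbers the status register

handle_interrupt(cpu, isr_address=0x200, isr=isr)
assert cpu["pc"] == 101 and cpu["ps"] == 0b0001   # program resumes unaffected
```

Every word pushed here is a memory transfer, which is why saving more registers lengthens the interrupt latency mentioned above.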
SERIAL AND PARALLEL INTERFACE
11. Explain about serial and parallel interface.
➢ An I/O interface consists of the circuitry required to connect an I/O device to a computer bus. On one
side of the interface, we have bus signals.
➢ On the other side, we have a data path with its associated controls to transfer data between the
interface and the I/O device – port.
➢ There are two types of ports: serial and parallel. A parallel port transfers data in the form of a
number of bits (8 or 16) simultaneously to or from the device.
➢ A serial port transmits and receives data one bit at a time. Communication with the bus is the same
for both formats.
➢ The conversion from the parallel to the serial format, and vice versa, takes place inside the interface
circuit.
➢ In parallel port, the connection between the device and the computer uses a multiple-pin connector
and a cable with as many wires.
➢ This arrangement is suitable for devices that are physically close to the computer. In serial port, it is
much more convenient and cost-effective where longer cables are needed.
➢ Typically, the functions of an I/O interface are:
✓ Provides a storage buffer for at least one word of data
✓ Contains status flags that can be accessed by the processor to determine whether the buffer is full
or empty
✓ Contains address-decoding circuitry to determine when it is being addressed by the processor
✓ Generates the appropriate timing signals required by the bus control scheme
✓ Performs any format conversion that may be necessary to transfer data between the bus and the
I/O device, such as parallel-serial conversion in the case of a serial port.
Parallel Port
➢ Consider the hardware components needed for connecting a keyboard to a processor. The circuit of
the input interface encompasses (as shown in the figure below): the status flag SIN, R/~W,
Master-ready, and an address decoder.
➢ A detailed figure showing the input interface circuit is presented in figure.
➢ Now, consider the circuit for the status flag (figure 4.30). An edge-triggered D flip-flop is used along
with read-data and master-ready signals
➢ The hardware components needed for connecting a printer to a processor are the output interface
circuit, with Slave-ready, R/~W, Master-ready, address decoder, and handshake control signals.
➢ The input and output interfaces can be combined into a single interface.
➢ The general purpose parallel interface circuit that can be configured in a variety of ways.
➢ For increased flexibility, the circuit makes it possible for some lines to serve as inputs and some lines
to serve as outputs, under program control.
Serial Port
➢ A serial interface circuit involves: chip and register select, status and control registers, an output
shift register, DATAOUT, DATAIN, an input shift register, and serial input/output lines.
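The parallel-serial conversion performed by the output and input shift registers can be sketched as follows. The bit order (least significant bit first) is an assumption for the sketch; real interfaces fix this by convention.

```python
# Sketch of the shift registers in a serial interface: the word written to
# DATAOUT is shifted out one bit at a time, and the receiver shifts the
# bits back into DATAIN to rebuild the parallel word.
def shift_out(word, width=8):
    """Serialize a word, least significant bit first (an assumption)."""
    return [(word >> i) & 1 for i in range(width)]

def shift_in(bits):
    """Reassemble the received bit stream into a parallel word."""
    word = 0
    for i, bit in enumerate(bits):
        word |= bit << i
    return word

line = shift_out(0b01010111)        # bits travel one at a time on the line
assert line == [1, 1, 1, 0, 1, 0, 1, 0]
assert shift_in(line) == 0b01010111
```

Communication with the bus stays parallel on both ends; only the wire in the middle carries one bit at a time, which is why the conversion lives inside the interface circuit.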
INTERRUPT
12. What is an interrupt? Explain the different types of interrupts and the different ways of
handling the interrupts. Explain Interrupt Handling. (Nov/Dec 2016) (Nov/Dec 2018)Nov/Dec
2020.
INTERRUPTS:
• An interrupt is an event that causes the execution of one program to be suspended and the execution
of another program to begin.
• In program-controlled I/O, the processor continuously monitors the status of the device, and while
doing so it cannot perform any other useful function.
• An alternative approach is for the I/O device to alert the processor when it becomes ready. The
interrupt-request line sends a hardware signal, called the interrupt signal, to the processor. On
receiving this signal, the processor can do useful work during what would otherwise be the waiting period.
• The routine executed in response to an interrupt request is called the Interrupt Service Routine (ISR).
Interrupts resemble subroutine calls.
Figure: Transfer of control through the use of interrupts. An interrupt occurs during execution of instruction i in Program 1; control transfers to Program 2 (the ISR) and later returns to instruction i + 1.
• The processor first completes the execution of instruction i. Then it loads the PC(Program Counter)
with the address of the first instruction of the ISR.
• After the execution of ISR, the processor has to come back to instruction i + 1
• Therefore, when an interrupt occurs, the current contents of PC which point to i +1 is put in
temporary storage in a known location.
• A return from interrupt instruction at the end of ISR reloads the PC from that temporary storage
location, causing the execution to resume at instruction i+1.
• When the processor is handling an interrupt, it must inform the device that its request has been
recognized so that the device removes its interrupt-request signal.
• This may be accomplished by a special control signal called the interrupt acknowledge signal.
• The task of saving and restoring the information can be done automatically by the processor.
• The processor saves only the contents of program counter & status register (ie) it saves only the
minimal amount of information to maintain the integrity of the program execution.
• Saving registers also increases the delay between the time an interrupt request is received and the
start of the execution of the ISR. This delay is called the Interrupt Latency.
• Generally, a long interrupt latency is unacceptable. The concept of interrupts is used in operating
systems and in control applications, where processing of certain routines must be accurately timed
relative to external events. The latter is also called real-time processing.
Interrupt Hardware
➢ The device raises an interrupt request.
➢ The processor interrupts the program currently being executed.
➢ Interrupts are disabled by changing the control bits in the PS (Processor Status register).
➢ The device is informed that its request has been recognized and, in response, it deactivates the
INTR signal.
➢ Interrupts are enabled and execution of the interrupted program is resumed.
Edge-triggered
• The processor has a special interrupt-request line for which the interrupt-handling circuit responds
only to the leading edge of the signal. Such a line is said to be edge-triggered.
Handling Multiple Devices:
• When several devices request interrupts at the same time, several questions arise:
➢ How can the processor recognize the device requesting an interrupt?
➢ Given that the different devices are likely to require different ISR, how can the processor obtain
the starting address of the appropriate routines in each case?
➢ Should a device be allowed to interrupt the processor while another interrupt is being serviced?
➢ How should two or more simultaneous interrupt requests be handled?
Polling Scheme:
• If two devices have activated the interrupt-request line, the ISR for the selected device (first device)
is completed, and then the second request can be serviced.
• The simplest way to identify the interrupting device is to have the ISR poll all the I/O devices
connected to the bus. The first device encountered with its IRQ bit set is the device to be serviced.
• IRQ (Interrupt Request): when a device raises an interrupt request, the IRQ bit in its status register
is set to 1.
Merit:
• It is easy to implement.
Demerit:
• Time is spent interrogating the IRQ bits of devices that may not be requesting any service.
Vectored Interrupt: Nov / Dec 2011, 2012
• Here the device requesting an interrupt identifies itself to the processor by sending a special code
over the bus, and the processor then starts executing the corresponding ISR.
• The code supplied by the device indicates the starting address of the ISR for that device.
• The code length typically ranges from 4 to 8 bits. The location pointed to by the interrupting device
is used to store the starting address of the ISR.
• The processor reads this address, called the interrupt vector, and loads it into the PC.
• The interrupt vector may also include a new value for the processor status register.
• When the processor is ready to receive the interrupt vector code, it activates the interrupt-acknowledge
(INTA) line.
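Vectored interrupt dispatch can be sketched as a table lookup: the device's code selects the entry holding the ISR starting address. The table contents and addresses below are illustrative assumptions.

```python
# Sketch of vectored interrupt dispatch: the device supplies a small code
# after INTA, and the processor loads the PC from the vector table entry.
# Table contents and addresses are illustrative assumptions.
vector_table = {
    0x4: 0x1000,   # assumed keyboard ISR starting address
    0x5: 0x2000,   # assumed disk ISR starting address
}

def accept_interrupt(cpu, device_code):
    cpu["inta"] = 1                        # interrupt-acknowledge asserted
    cpu["pc"] = vector_table[device_code]  # interrupt vector loaded into PC

cpu = {"pc": 0x0100, "inta": 0}
accept_interrupt(cpu, device_code=0x5)
assert cpu["pc"] == 0x2000 and cpu["inta"] == 1
```

Compared with the polling scheme, identification is constant-time: no status registers need to be interrogated to find the requesting device.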
Interrupt Nesting: Multiple Priority Scheme:
• In multiple level priority schemes, we assign a priority level to the processor that can be changed
under program control.
• The priority level of the processor is the priority of the program that is currently being executed.
• The processor accepts interrupts only from devices that have priorities higher than its own.
• At the time the execution of an ISR for some device is started, the priority of the processor is raised
to that of the device.
• This action disables interrupts from devices at the same level of priority or lower.
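The acceptance rule (only strictly higher-priority requests can interrupt) can be sketched as:

```python
# Sketch of multiple-priority interrupt nesting: a request is accepted only
# if its priority is strictly higher than the processor's current priority.
def accept(processor_priority, device_priority):
    return device_priority > processor_priority

def service(processor_priority, device_priority):
    """On acceptance, processor priority is raised to the device's level."""
    if accept(processor_priority, device_priority):
        return device_priority          # new processor priority during ISR
    return processor_priority           # request stays pending

assert accept(2, 5)                     # higher-priority device interrupts
assert not accept(5, 5)                 # same level is blocked during its ISR
assert service(2, 5) == 5               # priority raised while servicing
```

Raising the processor priority to the device's level is exactly what blocks nested interrupts from equal or lower levels until the ISR returns.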
Privileged Instruction:
• The processor priority is usually encoded in a few bits of the processor status word.
• It can be changed by program instructions that write into the PS. These instructions are called
privileged instructions.
• This can be executed only when the processor is in supervisor mode.
• The processor is in supervisor mode only when executing OS routines. It switches to user mode
before beginning to execute application programs.
Privileged Exception:
• A user program cannot accidentally or intentionally change the priority of the processor and disrupt
the system's operation.
• An attempt to execute a privileged instruction while in user mode, leads to a special type of interrupt
called the privileged exception.
Figure: Implementation of interrupt priority using individual interrupt-request (INTR1 to INTRp) and interrupt-acknowledge (INTA1 to INTAp) lines, with a priority arbitration circuit in the processor.
• Each of the interrupt-request lines is assigned a different priority level.
• Interrupt requests received over these lines are sent to a priority arbitration circuit in the processor.
• A request is accepted only if it has a higher priority level than that currently assigned to the
processor.
Simultaneous Requests:
Daisy Chain:
Figure: Interrupt priority using a daisy chain. The devices share a common INTR line to the processor, and the interrupt-acknowledge signal propagates serially through the devices via the priority arbitration circuit.
• At the device end, an interrupt-enable bit in a control register determines whether the device is
allowed to generate an interrupt request.
• At the processor end, either an interrupt-enable bit in the PS (Processor Status) or a priority structure
determines whether a given interrupt request will be accepted.
Initiating the Interrupt Process:
• Load the starting address of ISR in location INTVEC (vectored interrupt).
• Load the address LINE into a memory location PNTR. The ISR will use this location as a pointer to
store input characters in memory.
• Enable the keyboard interrupts by setting bit 2 in register CONTROL to 1.
• Enable interrupts in the processor by setting to 1, the IE bit in the processor status register PS.
Execution of ISR:
• Read the input character from the keyboard input data register. This causes the interface circuit
to remove its interrupt request.
• Store the character in the memory location pointed to by PNTR and increment PNTR.
• When the end of the line is reached, disable keyboard interrupts and inform the main program.
• Return from interrupt.
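The ISR steps above can be sketched as follows. The LINE buffer, the CONTROL enable bit, and the device model are taken loosely from the notes; the Python representation is an illustrative assumption.

```python
# Sketch of the keyboard interrupt-service routine described above:
# read a character, store it through the PNTR pointer, and stop at
# end-of-line by disabling further keyboard interrupts.
LINE = []                        # memory buffer starting at address LINE
control = {"keyboard_ie": 1}     # stands in for bit 2 of CONTROL

def keyboard_isr(char):
    """One ISR invocation per interrupt, i.e. per typed character."""
    LINE.append(char)            # store via PNTR, then increment PNTR
    if char == "\n":             # end of line reached
        control["keyboard_ie"] = 0   # disable keyboard interrupts
    # return-from-interrupt resumes the main program here

for ch in "hi\n":                # each character raises one interrupt
    keyboard_isr(ch)
assert "".join(LINE) == "hi\n"
assert control["keyboard_ie"] == 0
```

Between invocations the main program runs freely; the ISR only fires when a character actually arrives, in contrast to the polling loop shown earlier.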
I/O Performance Measures
13. List and explain various I/O performance measures. (April/May 2017)
(Or) Explain in detail about I/O performance measures with an example. (April/May 2014, 2015)
• Measures used to quantify I/O performance attempt to measure a more diverse range of properties
than the case of CPU performance.
• The traditional measures of performance, namely response time and throughput, also apply to I/O.
I/O throughput is sometimes called I/O bandwidth, and response time is sometimes called latency.
• Fig. shows the traditional producer-server model of response time.
• The producer creates tasks to be performed and places them in a buffer; the server takes tasks from
the first-in-first-out buffer and performs them.
• Response time and throughput are related non-linearly. From a transaction server model:
1. Throughput is maximized when the queue is never empty;
2. Response time is minimized when the queue is empty.
• The figure below shows throughput versus response time for a typical storage system. Consider two
interactive computing environments, one keyboard-driven and one graphical.
• Computing interaction or transaction time is divided into three components,
1. Entry Time : Time for user to make a request;
2. System Response Time : Time between request and response;
3. Think Time : Time between system response and next request
• The sum of these three parts is called the transaction time. Several studies report that user
productivity is inversely proportional to transaction time; transactions per hour are a measure of the
work completed per hour by the user.
• System response time is naturally the shortest duration. Does this minimize the impact of response
time?
• Effect of system response time on user 'thinking' time.
1. Any reduction to response time has more than a linear reduction on total transaction time.
2. Users need less time to think when given a faster response;
3. Possible to attach an economic benefit to response time and throughput;
4. In order to maintain user interest, response times need to be < 1.0 second.
Problem: [May 2019]
Suppose a processor sends 80 disks I/Os per second, these requests are exponentially distributed,
and the average service time of an older disk is 25 ms.
Answer the following questions:
1. On average, how utilized is the disk?
2. What is the average time spent in the queue?
3.What is the average response time for a disk request, including the queuing time
and disk service time?
Average number of arriving tasks per second = 80.
Average disk time to service a task = 25 ms (0.025 sec).
The server utilization is then
Server utilization = Arrival rate x Time server
= 80 x 0.025 = 2.0
Since the service times are exponentially distributed, we can use the simplified formula for the
average time spent waiting in line:
Time queue = Time server x Server utilization / (1 − Server utilization)
= 25 ms x 2 / (1 − 2)
= 50 ms (taking the magnitude; strictly, this formula is valid only for a utilization below 1, and a
utilization of 2.0 means the disk is saturated)
The average response time is
Time system = Time queue + Time server
= 50 ms + 25 ms
= 75 ms
Thus, on average we spend two-thirds of our time (50 ms of 75 ms) waiting in the queue!
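The calculation above can be sketched as a small helper (an illustrative M/M/1 sketch, not part of the syllabus; since the waiting-time formula is valid only for utilization below 1, the usage example uses a lighter load of 8 I/Os per second rather than the saturating 80):

```python
def mm1_times(arrival_rate, service_time):
    """Return (utilization, queue time, response time) for an M/M/1 queue.

    arrival_rate is in tasks/second, service_time in seconds.
    The waiting-time formula holds only while utilization < 1.
    """
    utilization = arrival_rate * service_time
    if utilization >= 1.0:
        raise ValueError("server saturated: utilization >= 1")
    queue_time = service_time * utilization / (1.0 - utilization)
    return utilization, queue_time, queue_time + service_time

# A stable variant of the worked example: 8 I/Os per second at 25 ms each.
util, t_q, t_sys = mm1_times(8, 0.025)
print(round(util, 3), round(t_q * 1000, 2), round(t_sys * 1000, 2))  # 0.2 6.25 31.25
```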
USB Architecture
• When multiple I/O devices are connected to the computer through USB, they are all organized in a
tree structure.
• Each I/O device makes a point-to-point connection and transfers data using the serial transmission
format (discussed earlier under 'interface circuits').
• As we know a tree structure has a root, nodes and leaves.
• The tree structure connecting I/O devices to the computer using USB has nodes which are also
referred to as a hub.
• A hub is the intermediary connecting point between the I/O devices and the computer.
• Every tree has a root; here it is referred to as the root hub, which connects the entire tree to the
host computer.
• The leaves of the tree here are nothing but the I/O devices such as a mouse, keyboard, camera, and
speaker.
• The USB works on the principle of polling.
• In polling, the processor keeps on checking whether the I/O device is ready for data transfer or not.
• So, the devices do not have to inform the processor about any of their statuses.
• It is the processor's responsibility to keep checking. This makes the USB simple and low cost.
• Whenever a new device is connected to the hub, it is initially addressed as 0.
• Now at a regular interval the host computer polls all the hubs to get their status which lets the host
know of I/O devices that are either detached from the system or are attached to the system.
• When the host becomes aware of the new device, it learns the capabilities of the device by reading
the information present in the special memory of the device's USB interface, so that the host can use
the appropriate device driver to communicate with the device.
• The host then assigns an address to this new device; this address is written into the device's
interface register.
• With this mechanism, USB provides plug-and-play capability.
• The plug-and-play feature lets the host recognize the existence of a new I/O device automatically
when the device is plugged in.
• The host software determines the capabilities of the I/O device and whether it has any special requirements.
• The USB is hot-pluggable which means the I/O device can be attached or removed from the host
system without performing any restart or shutdown.
• That means your system can keep running while the I/O device is plugged or removed.
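The polling and address-assignment behaviour described above can be sketched as a toy simulation (all class and function names here are hypothetical illustrations, not a real USB host stack):

```python
# Toy USB-style enumeration: the host polls every hub, notices newly attached
# devices (which always start at address 0) and assigns each the next free address.
class Hub:
    def __init__(self):
        self.ports = []          # devices attached to this hub

    def attach(self, device):
        device["address"] = 0    # a new device always starts at address 0
        self.ports.append(device)

def poll(hubs, next_address):
    """One polling pass: assign addresses to any unconfigured devices."""
    for hub in hubs:
        for device in hub.ports:
            if device["address"] == 0:
                device["address"] = next_address  # host writes the device's address register
                next_address += 1
    return next_address

root_hub = Hub()
root_hub.attach({"name": "keyboard"})
root_hub.attach({"name": "mouse"})
next_free = poll([root_hub], next_address=1)
print([(d["name"], d["address"]) for d in root_hub.ports])  # [('keyboard', 1), ('mouse', 2)]
```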
USB Type A:
• This is the standard connector that can be found at one end of the USB cable and is also known as
upstream.
• It has a flat structure and has four connecting lines as you can see in the image below.
USB Type B:
• This is an older standard cable and was used to connect the peripheral devices also referred to as
downstream.
• It is approximately a square as you can see in the image below.
• It has now been replaced by newer versions.
Mini USB:
• This type of USB is compatible with mobile devices.
• It has now been superseded by micro-USB, but you will still find it on some devices.
Micro USB:
• This type of USB is found on newer mobile devices. It has a compact 5 pin design.
USB Type C:
• This type of USB is used for transferring both data and power to the attached peripheral or I/O
device.
• The USB-C connector does not have a fixed orientation as it is reversible, i.e. you can plug it in
upside down or in reverse.
➢ Serial ATA (SATA) is a peripheral interface created in 2003 to replace Parallel ATA (PATA), also
known as IDE.
➢ Hard drive speeds were getting faster and would soon outpace the capabilities of the older
standard: the fastest PATA speed achieved was 133 MB/s, while SATA began at 150 MB/s and
was designed with future performance in mind.
➢ Also, newer silicon technologies used lower voltages than PATA's 5 V minimum.
➢ The ribbon cables used for PATA were also a problem; they were wide and blocked air flow, had
a short maximum length restriction, and required many pins and signal lines.
➢ SATA has a number of features that make it superior to Parallel ATA. The signaling voltages are
low and the cables and connectors are very small.
➢ SATA has outpaced hard drive performance, so the interface is not a bottleneck in a system.
➢ It also has a number of new features, including hot-plug support.
➢ SATA is a point-to-point architecture, where each SATA link contains only two devices: a SATA
host (typically a computer) and the storage device.
➢ If a system requires multiple storage devices, each SATA link is maintained separately. This
simplifies the protocol and allows each storage device to utilize the full capabilities of the bus
simultaneously, unlike in the PATA architecture where the bus is shared.
➢ To ease the transition to the new standard, SATA maintains backward compatibility with PATA.
➢ To do this, the Host Bus Adapter (HBA) maintains a set of shadow registers that mimic the
registers used by PATA. The disk also maintains a set of these registers.
➢ When a register value is changed, the register set is sent across the serial line to keep both sets of
registers synchronized.
➢ This allows the software drivers to be agnostic about the interface being used.
Physical Layer:
➢ The physical layer is the lowest layer of the SATA protocol stack. It handles the electrical signal
being sent across the cable.
➢ The physical layer also handles some other important aspects, such as resets and speed
negotiation.
➢ SATA uses low-voltage differential signaling (LVDS). Instead of sending 1's and 0's relative to a
common ground, the data being sent is based on the difference in voltage between two conductors
sending data.
➢ In other words, there is a TX+ and a TX- signal. A logic 1 corresponds to a high TX+ and a low
TX-; and vice versa for a logic 0. SATA uses a ±125mV voltage swing.
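A minimal sketch of the differential decision rule, under idealized voltages (a real SATA PHY also performs 8b/10b decoding, clock recovery, and equalization; the threshold handling here is an assumption for illustration):

```python
def lvds_decode(tx_plus, tx_minus, swing=0.125):
    """Decode one differential bit: logic 1 when TX+ is above TX-.

    Voltages are in volts; swing is the nominal 125 mV half-swing.
    Illustrative only, not a model of a real SATA receiver.
    """
    diff = tx_plus - tx_minus
    if abs(diff) < swing:          # below the nominal swing: treat as invalid
        raise ValueError("signal swing too small")
    return 1 if diff > 0 else 0

print(lvds_decode(0.125, -0.125))  # 1
print(lvds_decode(-0.125, 0.125))  # 0
```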
Link Layer
➢ The link layer is the next layer, directly above the physical layer. This layer is responsible
for encapsulating data payloads and for managing the protocol for sending and receiving them.
➢ A data payload that is sent is called a Frame Information Structure (FIS). The link layer also
provides some other services for ensuring data integrity, handling flow control, and reducing EMI.
➢ The host and the disk each have their own transmit pair in a SATA cable, and theoretically data
could be sent in both directions simultaneously.
➢ However, this does not occur. Instead, the receiver sends "backchannel" information to the
sender that indicates the status of the transfer in progress.
Page | 48
CS3351/ DPCO/ Dr.J.Rajalakshmi/SRM MCET
➢ For instance, if an error were detected mid-transmission, such as a disparity error, the
receiver could notify the sender of this.
Transport Layer
➢ The transport layer is responsible for constructing, delivering, and receiving Frame Information
Structures.
➢ It defines the format of each FIS and the valid sequence of FISes that can be exchanged.
➢ The first byte of each FIS defines the type. The second byte contains type-dependent control
fields.
➢ The following table lists some of the types of FISes that are defined, and the value of their type
field.
2 Marks
Question Bank
1. What is Memory?
• Memory is a device used to store the data and instructions required for any operation.
2. What is the secondary memory?
• Secondary memory is where programs and data are kept on a long-term basis.
Common secondary storage devices are the hard disk and optical disks. The hard disk has enormous
storage capacity compared to main memory. The hard disk is usually contained inside the case of a
computer.
3. What are some examples of secondary storage devices?
• Some other examples of secondary storage technologies are flash memory (e.g. USB flash drives or
keys), floppy disks, magnetic tape, paper tape, punched cards, standalone RAM disks, and Iomega
Zip drives.
4. What are the characteristics of a secondary storage device?
• The characteristics of secondary storage devices are:
• Capacity
• Speed
• Portability
• Durability
• Reliability
5. What are the three main categories of secondary storage?
Currently the most common forms of secondary storage device are:
• Floppy disks
• Hard disks
• Optical Disks
• Magnetic Tapes
• Solid State Devices
6. What is Bandwidth?
• The maximum amount of information that can be transferred to or from the memory per unit time is
called bandwidth.
7. Define a Cache.
• It is a small fast intermediate memory between the processor and the main memory.
8. What is Cache Memory?
• Cache memory is a very high speed memory that is placed between the CPU and primary or main
memory.
• It is used to reduce the average time to access data from the main memory.
• The cache is a smaller and faster memory which stores copies of the data from frequently used main
memory locations.
• Most CPUs have different independent caches, including instruction and data.
9. Give the mapping techniques of cache.
The three different types of mapping techniques used for cache memory are as follows:
✓ Direct Mapping
✓ Associative Mapping
✓ Set-Associative Mapping
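For the direct-mapped case, an address is split into tag, line index, and block offset fields; a minimal sketch (the 16-byte blocks and 128 lines are an arbitrary illustrative geometry, not from the syllabus):

```python
def direct_map(address, block_size, num_lines):
    """Split a byte address into (tag, line index, block offset)
    for a direct-mapped cache. Sizes must be powers of two."""
    offset = address % block_size          # position within the block
    block_number = address // block_size   # which memory block this is
    index = block_number % num_lines       # the one cache line it may occupy
    tag = block_number // num_lines        # identifies which block is resident
    return tag, index, offset

# 16-byte blocks, 128 lines: address 0x12345 -> tag 36, index 52, offset 5
print(direct_map(0x12345, 16, 128))  # (36, 52, 5)
```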
10. What is Write Stall?
• When the processor must wait for writes to complete during write-through, the processor is
said to write stall.
11. Define Mapping Functions.
• The correspondence of memory blocks in cache with the memory blocks in the main memory is
defined as mapping functions.
12. What is Address Translation?
• The conversion of virtual address to physical address is termed as address translation.
13. What is the transfer time?
• The time it takes to transmit or move data from one place to another.
• It is the time interval between starting the transfer and the completion of the transfer.
(Or)
• Transfer time is the time it takes to transfer a block of bits, typically a sector under the read / write
head.
14. What is latency and seek time?
• Seek time is the amount of time it takes a hard drive's read/write head to find the
physical location of a piece of data on the disk.
• Latency is the average time for the sector being accessed to rotate into position under a head, after a
completed seek.
15. What is the clock cycle time?
• The speed of a computer processor, or CPU, is determined by the clock cycle, which is the amount
of time between two pulses of an oscillator.
• Computer processors can execute one or more instructions per clock cycle, depending on the type of
processor.
16. What is Access Time?
• The time a program or device takes to locate a single piece of information and make it available to
the computer for processing. DRAM (dynamic random access memory) chips for personal
computers have access times of 50 to 150 nanoseconds (billionths of a second).
17. What is meant by disk fragmentation?
• Fragmentation refers to the condition of a disk in which files are divided into pieces scattered around
the disk.
• It occurs naturally when you use a disk frequently, creating, deleting, and modifying files.
• At some point, the operating system needs to store parts of a file in noncontiguous clusters.
18. What is the average access time for a hard disk?
• Disk access times are measured in milliseconds (thousandths of a second), often abbreviated as ms.
• Fast hard disk drives for personal computers boast access times of about 9 to 15 milliseconds.
• Note that this is about 200 times slower than average DRAM.
19. What is rotational latency time?
• The amount of time it takes for the desired sector of a disk (i.e., the sector from which data is to be
read or written) to rotate under the read-write heads of the disk drive.
• It is also called as rotational delay.
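On average the desired sector is half a revolution away, so average rotational latency follows directly from the spindle speed; a quick sketch (the 7200 RPM figure is chosen for illustration):

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency: half a revolution, in milliseconds."""
    revolutions_per_second = rpm / 60.0
    return 0.5 / revolutions_per_second * 1000.0

print(round(avg_rotational_latency_ms(7200), 2))  # 4.17 (ms)
```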
20. What is meant by disk latency?
• Disk latency refers to the time delay between a request for data and the return of the data. It sounds
like a simple thing, but this time can be critical to the performance of a system.
21. Define Page Fault.
• If the processor accesses a particular page and that page is not present in main memory,
it is known as a page fault.
22. Define a Cache Hit.
• When the CPU refers to memory and finds a required word in cache it is termed as cache hit.
23. Define Hit Ratio.(Nov/Dec 2019)Nov/Dec 2021
• The hit ratio is the number of hits divided by the total number of CPU references to memory.
24. Define a Miss.Nov/Dec 2021
• When the CPU refers to memory and if the required word is not found in cache it is termed as miss.
25. What is meant by memory stall cycles? (M 2016)
• The number of cycles during which the CPU is stalled waiting for a memory access is called memory
stall cycles.
26. What is Miss Penalty?
• The number of stall cycles depends on both the number of misses and the cost per miss, which is
called the miss penalty.
27. Write the formula to calculate average memory access time. (Or) Write the formula to measure
average memory access time for memory hierarchy performance. (Nov / Dec 2018)
• Average memory access time = Hit time + Miss rate x Miss penalty
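The formula can be checked with a one-line helper (the hit time, miss rate, and miss penalty values below are illustrative, not from the syllabus):

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time = hit time + miss rate x miss penalty."""
    return hit_time + miss_rate * miss_penalty

# e.g. a 1-cycle hit, 5% miss rate, and 100-cycle miss penalty:
print(amat(1, 0.05, 100))  # 6.0 cycles
```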
28. What is a miss in a cache? (Or) What does Cache Miss mean? (Or) Define Cache Miss. (Nov/Dec
2010, April/May 2018)
• Cache miss is a state where the data requested for processing by a component or application is not
found in the cache memory.
• It causes execution delays by requiring the program or application to fetch the data from
other cache levels or the main memory.
29. Define Cache Hit. (Or) What does Cache Hit mean? (April/May 2018)
• A cache hit is a state in which data requested for processing by a component or application is found
in the cache memory.
• It is a faster means of delivering data to the processor, as the cache already contains the requested
data.
30. Differentiate between Cache Miss and Cache Hit.
• The difference between the two is the data requested by a component or application in a cache
memory being found or not.
• In a cache miss, the data is not found so the execution is delayed because the component or
application tries to fetch the data from main memory or other cache levels.
• In a cache hit, the data is found in the cache memory making it faster to process.
• The cache hit is when you look something up in a cache and it was storing the item and is able to
satisfy the query.
31. What is miss penalty for a cache?
• Cache is a small, high-speed memory that stores data from some frequently used addresses of main
memory. On a miss, the processor loads data from main memory and copies it into the cache.
• This results in extra delay, called the miss penalty.
• Hit ratio = percentage of memory accesses satisfied by the cache.
32. What is miss rate in cache?
• The fraction or percentage of accesses that result in a hit is called the hit rate.
• The fraction or percentage of accesses that result in a miss is called the miss rate.
• It follows that hit rate + miss rate = 1.0 (100%).
• The difference between lower level access time and cache access time is called the miss penalty.
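The relationship hit rate + miss rate = 1 can be sketched from raw access counts (the counts below are made up for illustration):

```python
def rates(hits, misses):
    """Hit and miss rate from access counts; they always sum to 1.0."""
    total = hits + misses
    return hits / total, misses / total

hit_rate, miss_rate = rates(hits=950, misses=50)
print(hit_rate, miss_rate)  # 0.95 0.05
```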
33. What is hit time in cache?
• AMAT's three parameters hit time (or hit latency), miss rate, and miss penalty provide a quick
analysis of memory systems.
• Hit latency (H) is the time to hit in the cache. Miss rate (MR) is the frequency of cache misses, while
average miss penalty (AMP) is the cost of a cache miss in terms of time.
34. How is cache memory measured?
• The CPU cache is a piece of hardware which reduces the access time to the data in the memory by
keeping some part of the frequently used data of the main memory in itself.
• It is smaller and faster than the main memory.
35. What is the memory cycle time?
• Cycle time is the time, usually measured in nanoseconds, between the start of one random
access memory (RAM) access and the time when the next access can be started.
• Access time is sometimes used as a synonym (although IBM deprecates it).
36. What is the memory access time?
• Memory access time is how long it takes for a character in memory to be transferred to or from the
CPU.
• In a PC or Mac, fast RAM chips have an access time of 70 nanoseconds (ns) or less.
• SDRAM chips have a burst mode that obtains the second and subsequent characters in 10 ns or less.
37. What is the data transfer rate?
• The speed with which data can be transmitted from one device to another.
• Data rates are often measured in megabits (million bits) or megabytes (million bytes) per second.
• These are usually abbreviated as Mbps and MBps, respectively. Another term for data transfer rate is
throughput.
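Since Mbps and MBps differ by a factor of 8, conversion is a one-liner (the 480 Mbps figure is USB 2.0's nominal rate, used here only as an example):

```python
def mbps_to_MBps(megabits_per_s):
    """Convert a data rate from megabits/s (Mbps) to megabytes/s (MBps)."""
    return megabits_per_s / 8.0

print(mbps_to_MBps(480))  # 60.0
```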
38. What does access time measure?
• The total time it takes the computer to read data from a storage device such as computer memory,
hard drive, CD-ROM or other mechanism.
• Computer access time is commonly measured in nanoseconds or milliseconds, and the lower
the access time the better.
39. List the method to improve the cache performance.
Improving the cache performance following methods are used:
• Reduce the miss rate.
• Reduce the miss penalty.
• Reduce the time to hit in the cache.
40. What is Split Transactions?
• With multiple masters, a bus can offer higher bandwidth by using packets, as opposed to holding the
bus for the full transaction. This technique is called split transactions.
41. What is Cylinder?
• A cylinder refers to all the tracks under the read/write arms at a given position on all surfaces.
42. What is Synchronous Bus?
• Synchronous bus includes a clock in the control lines and a fixed protocol for sending address and
data relative to the clock.
43. Explain difference between latency and throughput.
• Latency is defined as the time required to process a single instruction, while throughput is defined as
the number of instructions processed per second.
44. What is called Pages?
• The address space is usually broken into fixed-size blocks, called pages. Each page resides either in
main memory or on disk.
45. What are the techniques to reduce hit time?
The techniques to reduce hit time are:
• Small and simple cache: Direct mapped.
• Avoid address translation during indexing of the cache.
• Pipelined cache access.
• Trace cache.
46. What are the categories of cache miss? (April/May 2013) (Or) Point out one simple technique used
to reduce each of the three "C" misses in cache memories. (Nov/Dec 2017)
• Categories of cache misses are,
✓ Compulsory
✓ Capacity
✓ Conflict
47. How the conflicts misses are divided? (Nov/Dec 2016)
Four divisions of conflict misses are:
• Eight way: Conflict misses due to going from fully associative to eight way associative.
• Four way: Conflict misses due to going from eight way associative to four way associative.
• Two way: Conflict misses due to going from four way associative to two way associative.
• One way: Conflict misses due to going from two way associative to one way associative.
48. What is the recorded sequence?
• The sequence recorded on the magnetic media is a sector number, a gap, the information for that
sector including the error correction code, a gap, the sector number of the next sector, and so on.
49. Write the formula to calculate the CPU execution time.
• CPU execution time = (CPU clock cycles + Memory stall cycles) x Clock cycle time.
50. Write the formula to calculate the CPU time.
• CPU time = (CPU execution clock cycles + Memory stall clock cycles) x Clock cycle time.
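The formula can be sketched directly (the cycle counts and 0.5 ns clock below are illustrative values):

```python
def cpu_time(cpu_cycles, stall_cycles, clock_ns):
    """CPU time = (CPU clock cycles + memory stall cycles) x clock cycle time."""
    return (cpu_cycles + stall_cycles) * clock_ns

# e.g. 2,000,000 execution cycles plus 500,000 stall cycles at a 0.5 ns clock:
print(cpu_time(2_000_000, 500_000, 0.5))  # 1250000.0 ns, i.e. 1.25 ms
```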
51. What is RAID? (Nov/Dec 2011, April/May 2015, 2017)
• RAID stands for Redundant Array of Independent Disks.
• It is also called as redundant array of inexpensive disks.
• It is a way of storing the same data in different places on multiple hard disks.
(Or)
• RAID is a storage technology that combines multiple disk drive components into a logical unit for
the purposes of data redundancy and performance improvement.
• Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on
the specific level of redundancy and performance required.
52. Explain the terms availability and dependability. (Nov/Dec 2017)
• Availability is a measure of the service accomplishment with respect to the alternation between the
two states of accomplishment and interruption.
• Dependability is the quality of delivered service such that reliance can justifiably be placed on this
service.
53. What are the differences and similarities between SCSI and IDE? (April/May 2017)
Parameters: IDE vs. SCSI
• Cost: IDE is a much cheaper solution; SCSI is often more expensive to implement and support.
• Expansion: IDE allows two devices per channel; SCSI is capable of supporting up to 7 or 15 devices.
• Ease: IDE is commonly a much easier product to set up than SCSI; configuring SCSI can be more
difficult for most users.
• CPU: IDE devices cannot communicate independently from the CPU; SCSI devices can communicate
independently from the CPU over the SCSI bus.
54. Why does DRAM generally have much larger capacities than SRAMs constructed in the same
fabrication technology? (Nov/Dec 2016)
• DRAM bit cells require only two devices (a capacitor and a transistor) while SRAM bit cells typically
require six transistors.
• This makes the bit cells of the DRAM much smaller than the bit cells of the SRAM, allowing the
DRAM to store more data in the same amount of chip space.
55. What are the measures of I/O performance? (Nov/Dec 2013)
I/O Performance Measures
• Measures used to quantify I/O performance attempt to measure a more diverse range of properties
than the case of CPU performance.
• The traditional measures of performance namely response time and throughput also apply to I/O. I/O
throughput is sometimes called I/O bandwidth and response time is sometimes called latency.
56. What are the types of storage devices? (Nov/Dec 2016, April/May 2017)
• Physical components or materials on which data is stored are called storage media.
• Hardware components that read/write to storage media are called storage devices. A floppy disk
drive is a storage device.
• Two main categories of storage technology used today are magnetic storage and optical storage.
Storage devices hold data, even when the computer is turned off. The physical material that actually
holds data is called storage medium.
57. What do you mean by Memory Interleaving?
• Interleaved memory is a design made to compensate for the relatively slow speed of dynamic
random-access memory (DRAM) or core memory, by spreading memory addresses evenly
across memory banks.
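With low-order interleaving, consecutive word addresses fall in consecutive banks; a minimal sketch (four banks chosen for illustration):

```python
def bank_of(address, num_banks):
    """With low-order interleaving, consecutive addresses map to consecutive banks."""
    return address % num_banks

# Four banks: words 0..7 land on banks 0,1,2,3,0,1,2,3
print([bank_of(a, 4) for a in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]
```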
58. What are the factors responsible for the maximum I/O bus performance? (April/May 2005)
Factors to be considered for maximum I/O bus performance are:
• Latency and bandwidth
59. What are the two major advantages and disadvantages of the bus? (May/June 2007)
Advantages:
• It is easy to set up and extend a bus network.
• The cable length required for this topology is the least compared to other networks, so a bus
topology costs very little.
Disadvantages:
• There is a limit on the central cable length and the number of nodes that can be connected.
• The whole network depends on the central cable; if the main cable (i.e. the bus) encounters a
problem, the whole network breaks down.
60. Is the RISC processor is necessary? Why? (May/June 2007)
• Yes. RISC was indeed able to scale up in performance quite quickly and cheaply, and it is a
necessary building block of high-performance parallel processors.
61. Define the terms cache miss and cache hit. (Nov/Dec 2011, May/June 2013)
• A cache miss, generally, is when something is looked up in the cache and is not found.
• The cache did not contain the item being looked up. The cache hit is when you look something up in
a cache and it was storing the item and is able to satisfy the query.
62. Compare software and hardware RAID.
Software RAID:
1. The RAID task runs on the CPU of your computer system.
2. It is implemented in a pure software model (operating system software) or a hybrid model
(assisted software RAID).
Hardware RAID:
1. A hardware RAID solution has its own processor and memory to run the RAID application. In this
implementation, the RAID system is an independent small computer system dedicated to the RAID
application, offloading this task from the host system.
2. Hardware RAID can be implemented in a variety of ways:
• as a discrete RAID Controller Card, or
• as integrated hardware based on RAID-on-Chip technology
63. Explain the need to implement memory as a hierarchy. (April/May 2017)
• A "memory hierarchy" in computer storage distinguishes each level in the "hierarchy" by response
time.
• Each level of the hierarchy is of higher speed, lower latency, and smaller size than the levels
below it.
• The levels in the memory hierarchy differ not only in speed and cost; they also perform
different roles.
64. What is a register in memory?
• A register is a very small amount of very fast memory that is built into the CPU (central processing
unit) in order to speed up its operations by providing quick access to commonly used values.
• Registers are the top of the memory hierarchy and are the fastest way for the system to manipulate
data.
65. What is the storage hierarchy?
• A storage device hierarchy consists of a group of storage devices that have different costs for storing
data, different amounts of data stored, and different speeds of accessing the data.
• Level 0, including DFSMShsm-managed storage devices at the highest level of the hierarchy,
contains data directly accessible to you.
66. What is Cache Optimization?
• The idea behind this approach is to hide both the low main memory bandwidth and the latency of
main memory accesses, which are slow in contrast to the floating-point performance of the CPUs.
67. List the six basic optimization techniques of cache. (Nov/Dec 2016) (Or) List the basic six cache
optimizations for improving cache performance. (Nov / Dec 2018)
• The six basic optimization techniques of cache are,
✓ Larger block size to reduce miss rate
✓ Bigger caches to reduce miss rate
✓ Higher associativity to reduce miss rate
✓ Multilevel caches to reduce miss penalty
✓ Giving priority to read misses over writes to reduce miss penalty
✓ Avoiding address translation during indexing of the cache to reduce hit time
68. Difference between Volatile and Non-volatile Memory. (Or) Outline the difference between volatile
and non-volatile memory. (April/May 2018)
Volatile Memory (RAM) Non-volatile Memory (ROM)
→ It is a volatile memory. → It is a non-volatile memory.
→ Contents are stored temporarily. → Contents are stored permanently.
→ Cost is very high. → Cost Effective.
→ Small storage capacity. → High storage capacity.
→ Processing speed is high. → Processing speed is low.
69. What is a Cache Performance?
• Cache Performance - Average memory access time is a useful measure to evaluate
the performance of a memory-hierarchy configuration.
70. What is hit and miss ratio?
• The hit ratio is the fraction of accesses which are a hit.
• The miss ratio is the fraction of accesses which are a miss. It holds that miss rate = 1 − hit rate.
• The (hit/miss) latency (AKA access time) is the time it takes to fetch the data in case of a hit/miss.
71. Differentiate between SRAM and DRAM.
• SRAM (Static RAM) and DRAM (Dynamic RAM) both hold data, but in different ways.
• DRAM requires the data to be refreshed periodically in order to retain the data.
• SRAM does not need to be refreshed as the transistors inside would continue to hold the data as long
as the power supply is not cut off.
• DRAM memory is slower and less desirable than SRAM.
72. Differentiate between throughput and response time. [May 2019]
Response Time:
1. The time difference between the submission of a request and the moment the response begins to be
received.
2. The response time should be as low as possible so that a large number of interactive users receive
an acceptable response time.
Throughput:
1. The number of processes that are completed per unit time.
2. It is desirable to maximize CPU utilization and throughput and to minimize turnaround time and
response time.
73. Suppose a processor sends 80 disk I/Os per second, these requests are exponentially
distributed, and the average service time of an older disk is 25 ms. Nov/Dec 2020.
Answer the following questions:
1. On average, how utilized is the disk?
2. What is the average time spent in the queue?
3.What is the average response time for a disk request, including the queuing time and disk
service time?
Page | 59
CS3351/ DPCO/ Dr.J.Rajalakshmi/SRM MCET
Average number of arriving tasks per second = 80.
Average disk time to service a task = 25 ms (0.025 s).
The server utilization is then
Server utilization = Arrival rate × Time server = 80 × 0.025 = 2.0
A utilization greater than 1 means requests arrive faster than the disk can serve them, so the disk is overloaded; the steady-state formula below strictly applies only when utilization is less than 1, and the magnitude of its result is taken here.
Since the service times are exponentially distributed, we can use the simplified formula for the average time spent waiting in line:
Time queue = Time server × Server utilization / (1 − Server utilization)
= 25 ms × 2 / (1 − 2)
= −50 ms, i.e. 50 ms in magnitude
The average response time is
Time system = Time queue + Time server
= 50 ms + 25 ms
= 75 ms
Thus, on average, 50 ms of the 75 ms response time (about two-thirds of it) is spent waiting in the queue!
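The calculation above can be sketched in code. The M/M/1 queue formula is only well behaved when utilization is below 1, so this example assumes a lighter load of 20 I/Os per second rather than the overloaded 80 I/Os per second from the question.

```python
# M/M/1 estimate of disk response time:
#   utilization = arrival_rate * service_time
#   time_queue  = service_time * utilization / (1 - utilization)  (needs utilization < 1)
#   time_system = time_queue + service_time
# The 20 I/Os-per-second arrival rate is an assumed, stable example.

def disk_response_time(arrival_rate_per_s, service_time_s):
    utilization = arrival_rate_per_s * service_time_s
    if utilization >= 1.0:
        raise ValueError("utilization >= 1: the queue grows without bound")
    time_queue = service_time_s * utilization / (1.0 - utilization)
    return utilization, time_queue, time_queue + service_time_s

u, t_queue, t_system = disk_response_time(20.0, 0.025)  # 25 ms service time
print(round(u, 3))                 # 0.5
print(round(t_queue * 1000, 3))    # 25.0 (ms spent in the queue)
print(round(t_system * 1000, 3))   # 50.0 (ms total response time)
```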
74. What is IO mapped input output?
• A memory-reference instruction activates the READ M (or WRITE M) control line and does not affect the I/O devices. A separate I/O instruction is required to activate the READ I/O and WRITE I/O lines, which cause a word to be transferred between the addressed I/O port and the CPU.
• The memory and IO address space are kept separate.
75. Specify the types of DMA transfer techniques.
• Single transfer mode (cycle-stealing mode)
• Block Transfer Mode (Burst Mode)
• Demand Transfer Mode
• Cascade Mode
76. Name any three of the standard I/O interface.
• SCSI (small computer system interface), bus standards
• Back plane bus standards
• IEEE 796 bus (multibus signals)
• NUBUS & IEEE 488 bus standard
77. What is an I/O channel?
• An I/O channel is actually a special-purpose processor, also called a peripheral processor.
• The main processor initiates a transfer by passing the required information to the input/output channel. The channel then takes over and controls the actual transfer of data.
78. Why program controlled I/O is unsuitable for high-speed data transfer?
• In program-controlled I/O, considerable overhead is incurred, because several program instructions have to be executed for each data word transferred between the external devices and main memory.
• Many high-speed peripheral devices have synchronous modes of operation; that is, data transfer is controlled by a clock of fixed frequency, independent of the CPU.
79. What is the function of I/O interface?
• The function is to coordinate the transfer of data between the CPU and external devices.
80. Name some of the IO devices.
• Video terminals
• Video displays
• Alphanumeric displays
• Graphics displays
• Flat panel displays
• Printers
• Plotters
81. Define interface.
• The word interface refers to the boundary between two circuits or devices.
82. What is programmed I/O?
• Data transfer to and from peripherals may be handled using this mode. Programmed I/O operations
are the result of I/O instructions written in the computer program.
83. What are the limitations of programmed I/O? (Nov / Dec 2011 )
The main limitations of programmed I/O and interrupt-driven I/O are given below:
Programmed I/O
• It is used only in some low-end microcomputers.
• It has single input and single output instructions.
• Each instruction selects one I/O device (by number) and transfers a single character (byte).
• Example: a microprocessor-controlled video terminal.
• Four registers are used: input status and character, output status and character.
Interrupt-driven I/O
• Primary disadvantage of programmed I/O is that CPU spends most of its time in a tight loop waiting
for the device to become ready. This is called busy waiting.
• With interrupt-driven I/O, the CPU starts the device and tells it to generate an interrupt when it is
finished.
• Done by setting interrupt-enable bit in status register.
• Still requires an interrupt for every character read or written.
• Interrupting a running process is an expensive business (requires saving context).
• Avoiding the per-character interrupt cost with DMA requires extra hardware (a DMA controller chip).
84. Differentiate Programmed I/O and Interrupt I/O. (Nov / Dec 2014)
Programmed I/O:
• The processor has to check each I/O device in sequence and in effect 'ask' each one whether it needs communication with the processor. This checking is achieved by a continuous polling cycle, and hence the processor cannot execute other instructions in sequence.
• During polling the processor is busy, which has a serious, detrimental effect on system throughput.
• It is implemented without interrupt hardware support.
• It does not depend on interrupt status.
• It does not need initialization of the stack.
• System throughput decreases as the number of I/O devices connected to the system increases.
Interrupt I/O:
• An external asynchronous input is used to tell the processor that an I/O device needs its service, and hence the processor does not have to check whether the I/O device needs service.
• The processor is allowed to execute its instructions in sequence and only stops to service an I/O device when told to do so by the device itself. This increases system throughput.
• It is implemented using interrupt hardware support.
• Interrupts must be enabled to process interrupt-driven I/O.
• It needs initialization of the stack.
• System throughput does not depend on the number of I/O devices connected to the system.
85. Differentiate between memory-mapped I/O and I/O mapped I/O. (Apr / May 2011)
1. Address space: In memory-mapped I/O, memory and I/O devices share the entire address space; in I/O-mapped I/O, memory and I/O devices have separate address spaces.
2. Hardware: Memory-mapped I/O requires no additional hardware; I/O-mapped I/O requires additional hardware.
3. Implementation: Memory-mapped I/O is easy to implement; I/O-mapped I/O is more difficult to implement.
4. Address: In memory-mapped I/O, the same address cannot be used to refer to both memory and an I/O device; in I/O-mapped I/O, the same address can be used to refer to both.
5. Control lines: Memory-mapped I/O uses the memory control lines (READ, WRITE) to control I/O devices; I/O-mapped I/O uses separate sets of control lines (READ M, WRITE M, READ I/O, WRITE I/O) for memory and I/O.
86. What is DMA?
• A special control unit may be provided to enable transfer of a block of data directly between an external device and memory without continuous intervention by the CPU. This approach is called DMA.
(OR)
• A direct memory access (DMA) is an operation in which data is copied (transported) from one
resource to another resource in a computer system without the involvement of the CPU.
87. What are MAR and MBR?
• MAR – Memory address register holds the address of the data to be transferred.
• MBR – Memory buffer register contains the data to be transferred to or from the main memory.
88. Define bus arbitration. List out its types. (May / June 2009)
Bus Arbitration: It is the process by which the next device to become the bus master is selected and the bus
mastership is transferred to it.
Types: There are 2 approaches to bus arbitration. They are,
• Centralized arbitration (A single bus arbiter performs arbitration)
• Distributed arbitration (all devices participate in the selection of next bus master).
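The centralized scheme can be sketched as a fixed-priority arbiter. The priority convention (device 0 highest) and the function name are assumptions for illustration, not part of any standard.

```python
# Centralized fixed-priority bus arbitration: a single arbiter grants the bus
# to the highest-priority device currently requesting it. A lower index means
# higher priority here (an illustrative convention).

def arbitrate(requests):
    """requests[i] is True if device i wants the bus; return granted device or None."""
    for device, requesting in enumerate(requests):
        if requesting:
            return device
    return None

print(arbitrate([False, True, True]))    # 1 (device 1 outranks device 2)
print(arbitrate([False, False, False]))  # None (no request pending)
```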
89. Define memory interleaving. (Apr / May 2017)
Memory interleaving is a technique used to increase throughput. The core idea is to split the memory system into independent banks, which can answer read or write requests independently and in parallel.
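As a minimal sketch of the idea, assuming low-order interleaving across 4 banks with 4-byte words (both illustrative parameters):

```python
# Low-order memory interleaving: consecutive words map to consecutive banks,
# so a sequential stream of accesses is spread across all the banks and the
# banks can work in parallel. Bank count and word size are assumptions.

NUM_BANKS = 4   # a power of two, so the bank number is the low word-address bits
WORD_SIZE = 4   # bytes per word

def bank_and_index(byte_addr):
    """Return (bank number, index within that bank) for a byte address."""
    word = byte_addr // WORD_SIZE
    return word % NUM_BANKS, word // NUM_BANKS

# Sequential words at byte addresses 0, 4, 8, 12, 16 land in banks 0, 1, 2, 3, 0.
for addr in (0, 4, 8, 12, 16):
    print(addr, bank_and_index(addr))
```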
90. What is an interrupt?
• An interrupt is an event that causes the execution of one program to be suspended and another
program to be executed.
91. What are the uses of interrupts?
• Recovery from errors
• Debugging
• Communication between programs
• Use of interrupts in operating system
92. Define vectored interrupts.
• In order to reduce the overhead involved in the polling process, a device requesting an interrupt may
identify itself directly to the CPU. Then, the CPU can immediately start executing the corresponding
interrupt-service routine. The term vectored interrupts refers to all interrupt-handling schemes based on this approach.
93. What are the steps taken when an interrupt occurs?
• The source of the interrupt is identified.
• The memory address of the required interrupt-service procedure (ISP) is determined.
• The program counter and other CPU status information are saved, as in a subroutine call.
• Control is transferred back to the interrupted program once the routine completes.
94. Summarize the sequence of events involved in handling an interrupt request from a single device.
(Apr / May 2017) Write the sequence of operations carried out by a processor when interrupted by a
peripheral device connected to it. (Apr/May 2018)
Let us summarize the sequence of events involved in handling an interrupt request from a single device.
Assuming that interrupts are enabled, the following is a typical scenario.
1. The device raises an interrupt request.
2. The processor interrupts the program currently being executed.
3. Interrupts are disabled by changing the control bits in the PS (except in the case of edge-triggered
interrupts).
4. The device is informed that its request has been recognized, and in response, it deactivates the interrupt-request signal.
5. The action requested by the interrupt is performed by the interrupt-service routine.
6. Interrupts are enabled and execution of the interrupted program is resumed.
95. Point out how DMA can improve I/O speed. (May / June 2015)
The DMA module controls the exchange of data between main memory and the I/O device. Because the DMA device can transfer data directly to and from memory, rather than using the CPU as an intermediary, it relieves congestion on the bus. The CPU is involved only at the beginning and end of the transfer, and is interrupted only after the entire block has been transferred. In this way DMA improves the I/O speed of the system.
96. Define IO Processor.
The IOP attaches to the system I/O bus and one or more input/output adapters (IOAs). The IOP processes
instructions from the system and works with the IOAs to control the I/O devices.
97. Draw the Memory hierarchy in a typical computer system. (Nov/Dec 2018)(Or)
Draw the basic structure of a memory hierarchy. (Apr/May 2019)
[Figure: memory hierarchy pyramid. Processor registers at the top, then cache memory, main memory, and secondary storage (magnetic disk, optical disk, magnetic tape) at the bottom. Speed and cost per bit increase toward the top, while capacity increases toward the bottom.]
100. Define Bus Structures.
In computer architecture, a bus (a contraction of the Latin omnibus) is a communication system that
transfers data between components inside a computer, or between computers. This expression covers all
related hardware components (wire, optical fiber, etc.) and software, including communication protocols.
101. What is the meaning of USB?(Nov/Dec 2019)
A Universal Serial Bus (USB) is a common interface that enables communication between devices
and a host controller such as a personal computer (PC). It connects peripheral devices such as digital
cameras, mice, keyboards, printers, scanners, media devices, external hard drives and flash drives.
102. What is baud rate? (Nov/Dec 2020)
The baud rate is the rate at which information is transferred in a communication channel. Baud rate is
commonly used when discussing electronics that use serial communication. In the serial port context, "9600
baud" means that the serial port is capable of transferring a maximum of 9600 bits per second.
At baud rates above 76,800, the cable length will need to be reduced. The higher the baud rate, the
more sensitive the cable becomes to the quality of installation, due to how much of the wire is untwisted
around each device.
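The bit rate translates into character throughput once framing overhead is counted. The sketch below assumes the common 8N1 framing (1 start bit, 8 data bits, 1 stop bit, so 10 bits on the wire per character); other framings change the divisor.

```python
# At 9600 baud with 8N1 framing, each 8-bit character costs 10 bits on the
# wire (start + 8 data + stop), so the port moves at most 960 characters/s.
# The 8N1 framing is an assumption; parity or extra stop bits change it.

def chars_per_second(baud, bits_per_char=10):
    return baud / bits_per_char

print(chars_per_second(9600))    # 960.0
print(chars_per_second(115200))  # 11520.0
```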
103. In memory organization, what is temporal locality? (Nov/Dec 2021)
Temporal locality means current data or instruction that is being fetched may be needed soon.
When CPU accesses the current main memory location for reading required data or instruction, it
also gets stored in the cache memory which is based on the fact that same data or instruction may be
needed in near future.
104. How many total bits are required for a direct-mapped cache with 16 KB of data and 4-word blocks, assuming a 32-bit address? (Apr/May 2019)
16 KB of data = 4K words = 1K blocks of 4 words each, so the cache has 2^10 blocks and n = 10 index bits.
Each block holds 4 words, so the block offset needs m = 2 bits, and the byte offset needs 2 bits.
Tag size = 32 − n − m − 2 = 32 − 10 − 2 − 2 = 18 bits.
Total cache size = 2^10 × (4 × 32 + (32 − 10 − 2 − 2) + 1)
= 2^10 × (128 + 18 + 1)
= 2^10 × 147 = 147 Kbits ≈ 18.4 KB
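The arithmetic above can be checked with a short script. This is a sketch of the standard direct-mapped sizing formula; the function name is chosen for illustration.

```python
# Total bits in a direct-mapped cache with 32-bit words:
#   2^n blocks * (data bits per block + tag bits + 1 valid bit)
# where the tag is addr_bits - n (index) - m (block offset) - 2 (byte offset).

def cache_total_bits(addr_bits, n, m):
    data_bits = (2 ** m) * 32          # 2^m words of 32 bits per block
    tag_bits = addr_bits - n - m - 2   # 2 bits select the byte within a word
    return (2 ** n) * (data_bits + tag_bits + 1)

total = cache_total_bits(32, 10, 2)    # 16 KB of data, 4-word blocks
print(total)              # 150528 bits, i.e. 2^10 * 147
print(total / 8 / 1024)   # 18.375 KB, i.e. about 18.4 KB
```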
105. Define SATA.
Serial ATA is a peripheral interface created in 2003 to replace Parallel ATA, also known as IDE.
Hard drive speeds were getting faster, and would soon outpace the capabilities of the older standard—the
fastest PATA speed achieved was 133MB/s, while SATA began at 150MB/s and was designed with
future performance in mind.
106. Explain about USB.
• The universal serial bus (USB) is a standard interface for connecting a wide range of devices to the
computer such as keyboard, mouse, smart phones, speakers, cameras etc.
• The USB was introduced for commercial use in 1995; at that time it had a data transfer speed of 12 megabits/s.