CA QR Ans Key 23 24 Even Sem

Question Repository for APRIL/MAY 2024 Examinations

Subject Code 19CS305 Subject Name COMPUTER ARCHITECTURE Common To

Faculty Name P SANKAR Department INFORMATION TECHNOLOGY CSE,IT

(PART A – 2 Marks)
UNIT - I
Q. No | Questions | CO | Knowledge Level (Blooms) | Difficulty Level (1-5)
QA101*
What are the five classic components of a computer?
There are five basic components:
 Input Unit
 Output Unit
 Memory Unit
 Control Unit
 Arithmetic and Logic Unit
CO1 K2 2

QA102*
Define Amdahl's law.

A rule stating that the performance enhancement possible with a given improvement is limited by the amount that the improved feature is used.

Overall Speedup = Old execution time / New execution time
               = 1 / ((1 − Fraction_enhanced) + Fraction_enhanced / Speedup_enhanced)
CO1 K2 2
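The formula above can be checked with a short Python sketch (the function name is illustrative, not from the answer key):

```python
def overall_speedup(fraction_enhanced, speedup_enhanced):
    """Amdahl's law: the overall speedup is limited by the fraction of
    execution time during which the enhancement is actually used."""
    return 1.0 / ((1.0 - fraction_enhanced) + fraction_enhanced / speedup_enhanced)

# Example: if 40% of execution time is enhanced to run 10x faster,
# the overall speedup is 1 / (0.6 + 0.04) = 1.5625.
print(overall_speedup(0.4, 10))
```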
QA103*
Write the formula for CPU execution time for a program.

CPU execution time = Instruction count × CPI × Clock cycle time
                   = (Instruction count × CPI) / Clock rate
CO1 K3 3
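The standard CPU-time formula can be expressed as a small Python sketch (names are illustrative):

```python
def cpu_execution_time(instruction_count, cpi, clock_rate_hz):
    """CPU time = Instruction count x CPI x Clock cycle time
                = (Instruction count x CPI) / Clock rate."""
    return (instruction_count * cpi) / clock_rate_hz

# Example: 10^9 instructions at CPI 2 on a 2 GHz clock take 1 second.
print(cpu_execution_time(10**9, 2.0, 2 * 10**9))
```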

QA104*
What is an instruction register?

The Instruction Register holds the instruction that is currently being executed by the processor. Its output is available to the control circuits, which generate the timing signals that control the various processing elements involved in executing the instruction.
CO2 K2 2

QA105*
Define immediate addressing mode.

The operand is given explicitly in the instruction.

Ex. MOVE #20 , A
 The value 20 is copied into register A.
 The special symbol # indicates an immediate value.
CO2 K2 2

UNIT - II
QA201*
Draw the circuit diagram for a 4-bit full adder.
Full adder:
CO3 K2 2
QA202*
Write the overflow conditions for addition and subtraction.
Overflow cannot occur in addition when the operands have different signs, or in subtraction when the operands have identical signs.

Operation | Operand A | Operand B | Result indicating overflow
A + B     | >= 0      | >= 0      | < 0
A + B     | < 0       | < 0       | >= 0
A - B     | >= 0      | < 0       | < 0
A - B     | < 0       | >= 0      | >= 0
CO3 K2 3
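The sign rules in the table can be sketched as Python checks on n-bit two's-complement patterns (function names are assumptions for illustration):

```python
def add_overflows(a, b, bits=8):
    """Signed overflow on A + B: both operands share a sign and the
    result's sign differs (first two rows of the table)."""
    mask = (1 << bits) - 1
    sign = 1 << (bits - 1)
    r = (a + b) & mask
    return (a & sign) == (b & sign) and (r & sign) != (a & sign)

def sub_overflows(a, b, bits=8):
    """Signed overflow on A - B: operands differ in sign and the
    result's sign differs from A's (last two rows of the table)."""
    mask = (1 << bits) - 1
    sign = 1 << (bits - 1)
    r = (a - b) & mask
    return (a & sign) != (b & sign) and (r & sign) != (a & sign)

print(add_overflows(0x7F, 0x01))  # 127 + 1 overflows in 8-bit signed arithmetic
```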

QA203*
What is fast multiplication?

The performance of a fast multiplier depends on generating fewer partial products: the lower the number of partial products, the higher the speed of multiplication.
CO3 K2 2

QA204*
Give the representation of a single precision floating point number.

In single precision, 32 bits are used to represent the floating-point number: 1 sign bit, 8 exponent bits, and 23 fraction (mantissa) bits.
CO3 K2 3

QA205*
List the types of division algorithm.

Division algorithms are broadly of two types: restoring and non-restoring division. The steps of the restoring division algorithm are:
Step 1:
 Subtract the Divisor register from the Remainder register and place the result in the Remainder register.
Step 2:
 Next we perform the comparison, as in the set-on-less-than instruction.
 If the result is positive, the divisor was smaller than or equal to the dividend, so shift the Quotient register to the left, setting the new rightmost bit to 1.
 If the result is negative, restore the original value by adding the Divisor register to the Remainder register and placing the sum in the Remainder register.
 Also shift the Quotient register to the left, setting the new least significant bit to 0.
Step 3:
 The Divisor is shifted right by 1 bit and then we iterate again.
 The remainder and quotient will be found in their registers after the iterations are complete.
CO3 K2 3
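The restoring steps above can be sketched in Python for unsigned operands (a sketch only; names are illustrative):

```python
def restoring_divide(dividend, divisor, bits):
    """Restoring division: the divisor starts in the left half and is
    shifted right each iteration, mirroring the steps above."""
    remainder, quotient = dividend, 0
    d = divisor << bits            # divisor aligned to the left of the dividend
    for _ in range(bits + 1):
        remainder -= d             # Step 1: subtract the divisor
        if remainder >= 0:         # Step 2: positive -> shift in quotient bit 1
            quotient = (quotient << 1) | 1
        else:                      # negative -> restore, shift in quotient bit 0
            remainder += d
            quotient <<= 1
        d >>= 1                    # Step 3: shift the divisor right
    return quotient, remainder

print(restoring_divide(7, 2, 4))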

UNIT - III
QA301*
Define data path.
The datapath is a unit used to operate on or hold data within the processor. Its elements include the instruction and data memories, the register file, the ALU, and adders.
CO4 K2 2

QA302*
What is meant by delayed branch?

A branch whose immediately following instruction always executes, whether or not the branch is taken, is called a delayed branch.
CO4 K2 2

QA303*
What is meant by dynamic branch prediction?

Prediction of branches at runtime using runtime information is called dynamic branch prediction. The hardware checks whether the branch was taken the last time this instruction was executed and, if so, begins fetching new instructions from the same place as the last time.
CO4 K2 3

QA304*
What is a pipeline stall?

A pipeline stall is a delay in the execution of an instruction in order to resolve a hazard.
CO4 K2 2

QA305*
What are the advantages of pipelining?

• Instruction throughput increases.
• An increase in the number of pipeline stages increases the number of instructions executed simultaneously.
• A faster ALU can be designed when pipelining is used.
• Pipelined CPUs work at higher clock frequencies than the RAM.
CO4 K2 2

UNIT - IV
QA401*
List the types of parallelism.
 Bit-level parallelism
 Instruction-level parallelism
 Task parallelism
 Data-level parallelism (DLP)
CO5 K2 2

QA402*
Define multicore microprocessor.
A multi-core processor is a chip that has more than one processor core on a single chip, contained in a single package.
CO5 K2 2

QA403*
What is multithreading?
A mechanism by which the instruction stream is divided into several smaller streams (threads) that can be executed in parallel is called multithreading.
CO5 K2 2

QA404*
What are PEs?
Processing elements (PEs): every processing element consists of an ALU, local memory, and registers for storage of distributed data. These PEs are interconnected via an interconnection network.
CO5 K2 2

QA405*
Define uniform memory access (UMA).

In the UMA model, all processors share the physical memory uniformly. In UMA architecture, access time to a memory location is independent of which processor makes the request or which memory chip contains the transferred data.
CO5 K2 2

UNIT - V
QA501*
Consider a cache with 64 blocks and a block size of 16 bytes. To what block number does byte address 1200 map?

Block size = 16
Total number of blocks = 64
Memory block number = Byte address / Block size = 1200 / 16 = 75
Now we have to find the cache block number corresponding to memory block number 75. In a direct-mapped cache,
Cache block number = Memory block number mod (Total number of blocks in cache)
                   = 75 mod 64 = 11
CO6 K2 4
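The mapping calculation can be checked with a short Python sketch (function name is illustrative):

```python
def direct_mapped_block(byte_address, block_size, num_blocks):
    """Direct-mapped cache: cache block = memory block mod number of blocks."""
    memory_block = byte_address // block_size
    return memory_block % num_blocks

print(direct_mapped_block(1200, 16, 64))  # memory block 75 -> cache block 11
```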

QA502*
What are the techniques to improve cache performance?

Two different techniques are used to improve cache performance:
1. Reducing the miss rate.
2. Reducing the miss penalty by adding an additional level of cache.
CO6 K2 2

QA503*
Define bus arbitration and write their types.

Bus arbitration is the procedure by which the active bus master accesses the bus, relinquishes control of it, and then transfers it to a different bus-seeking processor unit. There are two types of bus arbitration:
 Centralized arbitration
 Distributed arbitration
CO6 K2 2

QA504*
What are the different types of buses?

There are three types of bus lines: data bus, address bus, and control bus.
CO6 K2 2

QA505*
Define address mapping.

Address mapping is the process of determining a logical address from the known physical address of a device, and of determining the physical address from the known logical address. Address mapping is required when a packet is routed from a source host to a destination host in the same or a different network.
CO6 K2 2
(PART B – 13 Marks - Either Or Type)

UNIT - I

QB101 (a)*
Explain the various components of the Computer System with a neat diagram.
Functional units of Computer System (7)
Explanation (6)
The computer is a combination of hardware and software. Hardware is the physical component of a computer, while software is the set of programs or instructions. Both hardware and software together make the computer system function. Every task given to a computer follows an Input-Process-Output cycle (IPO cycle).

CO1 K2 3

The functional components of a computer:


Input Unit: The input unit is used to feed any form of data to the computer. Example:
Keyboard, mouse, etc.
Central Processing Unit: The CPU is the major component that interprets and executes
software instructions. It also controls the operation of all other components such as
memory, input, and output units. The CPU has three components which are the Control
unit, the Arithmetic, and logic unit (ALU), and the Memory unit.
Arithmetic and Logic Unit: The ALU is a part of the CPU where various computing
functions are performed on data. The ALU performs arithmetic operations such as addition,
subtraction, multiplication, division, and logical operations.
Control Unit: The control unit controls the flow of data between the CPU, memory, and
I/O devices. It also controls the entire operation of a computer.
Output Unit: An output unit is any hardware component that conveys information to users in an understandable form. Example: monitor, printer, etc.
Memory Unit: The Memory Unit is of two types which are primary memory and
secondary memory. The primary memory is used to temporarily store the programs and
data when the instructions are ready to execute. The secondary memory is used to store the
data permanently. The Primary Memory is volatile, that is, the content is lost when the
power supply is switched off. The Random Access Memory (RAM) is an example of the
main memory.
The Secondary memory is nonvolatile, that is, the content is available even after the power
supply is switched off. Hard disk, CD-ROM, and DVD ROM are examples of secondary
memory.
(Or)

QB101 (b)*
Discuss in detail the basic Operational Concepts.

Basic Operational Concepts (7)
Data Transfer, RAM and Memory (6)

A computer has five functionally independent units: Input Unit, Memory Unit, Arithmetic & Logic Unit, Output Unit, and Control Unit.

Input Unit :-
Computers take coded information via the input unit. The most common input device is the keyboard. Whenever we press any key, it is automatically translated to the corresponding binary code and transmitted over a cable to memory or the processor.
CO2 K2 3

Memory Unit :-

It stores programs as well as data and there are two types- Primary and Secondary Memory

Primary memory is quite fast, working at electronic speed. Programs must be stored in memory before being executed. Random Access Memory is memory in which any location can be accessed in a short, fixed time after specifying its address. Primary memory is essential but expensive, so secondary memory, which is much cheaper, is used when large amounts of data and programs need to be stored, particularly information that we don't access very frequently. Ex: magnetic disks, tapes.

Arithmetic & Logic Unit :-

All the arithmetic and logical operations are performed by the ALU, and these operations are initiated once the operands are brought into the processor.

Output Unit :– It displays the processed result to outside world.


Basic Operational Concepts
 Instructions take a vital role for the proper working of the computer.
 An appropriate program consisting of a list of instructions is stored in the memory so
that the tasks can be started.
 The memory brings the Individual instructions into the processor, which executes the
specified operations.
 Data which is to be used as operands are moreover also stored in the memory.

Example:

Add LOCA, R0
 This instruction adds the operand at memory location LOCA to the operand present in register R0.
 The above example can also be written as follows:

Load LOCA, R1

Add R1, R0
 The first instruction loads the contents of memory location LOCA into processor register R1, and the second instruction adds the contents of register R1 to R0 and places the result in R0.
 Transfers between the memory and the processor are started by sending the address of the memory location to be accessed to the memory unit and issuing the appropriate control signals.
 The data is then transferred to or from the memory.
Analysing how processor and memory are connected :–
 Processors have various registers to perform various functions :-
 Program Counter :- It contains the memory address of next instruction to be fetched.
 Instruction Register:- It holds the instruction which is currently being executed.
 MDR :- It facilities communication with memory. It contains the data to be written
into or read out of the addressed location.
 MAR :- It holds the address of the location that is to be accessed
 There are n general purpose registers that is R0 to Rn-1

Performance :-
 Performance means how quickly a program can be executed.

Computer organization
 In order to get the best performance it is required to design the compiler, machine
instruction set & hardware in a coordinated manner.
Connection B/W Processor & Memory


 The above mentioned block diagram consists of the following components
1) Memory
2) MAR
3) MDR
4) PC
5) IR
6) General Purpose Registers
7) Control Unit
8) ALU
 The instruction that is currently being executed is held by the Instruction Register.
 IR output is available to the control circuits, which generates the timing signal that
control the various processing elements involved in executing the instruction.
 The Memory address of the next instruction to be fetched and executed is contained
by the Program Counter.
 It is a specialized register.
 It keeps the record of the programs that are executed.
 Role of these registers is to handle the data available in the instructions. They store
the data temporarily.
 Two registers facilitate the communication with memory.

These registers are:

1) MAR(Memory Address Register)


2) MDR (Memory Data Register)
Memory Address Register:
The address of the location to be accessed is held by MAR.
Memory Data Register:
It contains the data to be written into or to be read out of the addressed location.
Working Explanation

The PC is set to point to the first instruction of the program. The contents of the PC are transferred to the MAR, and a Read control signal is sent to the memory. The addressed word is fetched from the location mentioned in the MAR and loaded into the MDR.
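The PC/MAR/MDR fetch sequence described above can be illustrated with a tiny hypothetical machine in Python (the memory contents and variable names are assumptions for illustration only):

```python
# Hypothetical mini machine illustrating the PC -> MAR -> memory -> MDR -> IR
# fetch sequence described above.
memory = {0: "Load LOCA, R1", 1: "Add R1, R0"}

pc = 0
mar = pc                 # address of the next instruction is copied into MAR
mdr = memory[mar]        # the memory read returns the addressed word into MDR
ir = mdr                 # the instruction moves into IR for decoding
pc += 1                  # PC now points at the next instruction
print(ir)
```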
Explain in detail about the performance metrics in computers with examples.
Performance Metrics (8)
Example (5)
Performance
“Computer performance is the amount of work accomplished by a computer system”.

 Assessing the performance of computers can be quite challenging.


 The scale and problem of modern software systems, together with the wide
range of performance improvement techniques employed by hardware designers,
have made performance assessment much more difficult.
 When trying to choose among different computers, performance is an important attribute.
 Accurately measuring and comparing different computers is critical to purchasers, and therefore to designers.
 The people selling computers know this as well.
 Hence, understanding how best to measure performance and the limitations of performance measurements is important in selecting a computer.
CO2 K2 3
QB102 (a)*

Defining Performance
 When we say one computer has better performance than another, what do we
mean?
 If you were running a program on two different desktop computers, you’d say that
the faster one is the desktop computer that gets the job done first.
 If you were running a datacenter that had several servers running jobs
submitted by many users, you’d say that the faster computer was the one that
completed the most jobs during a day.
 As an individual computer user, you are interested in reducing response time—the
time between the start and completion of a task—also referred to as execution
time.

Datacenter managers are often interested in increasing throughput or bandwidth—the total amount of work done in a given time. Hence, in most cases, we will need different performance metrics as well as different sets of applications to benchmark personal mobile devices, which are more focused on response time, versus servers, which are more focused on throughput.

Response Time
 Response time Also called execution time.
 “The total time required for the computer to complete a task”, including disk
accesses, memory accesses, I/O activities, operating system overhead, CPU
execution time, and so on.
Throughput
 It is the “number of tasks completed per unit time”.
 Throughput Also called bandwidth.

Execution Time
 The actual time the CPU spends computing for a specific task.
 CPU execution time also called CPU time.
User CPU time : The CPU time spent in a program itself.
System CPU time: The CPU time spent in the operating system performing tasks on behalf
of the program.
Instruction Count: Instruction counts The number of instructions executed by the program.

Clock Cycles per Instruction


 which is the average number of clock cycles each instruction takes to execute, is
often abbreviated as CPI.
 Since different instructions may take different amounts of time depending on what
they do, CPI is an average of all the instructions executed in the program.
 CPI provides one way of comparing two different implementations of the same
instruction set architecture, since the number of instructions executed for a program
will, of course, be the same.

 To maximize performance, we want to minimize response time or execution time for some task.
 Thus, we can relate performance and execution time for a computer X:

Performance_X = 1 / Execution time_X

For two computers X and Y, if the performance of X is greater than the performance of Y, then the execution time on Y is longer than on X.
(Or)
Explain in detail about instructions based on addresses, with examples of each type.
Instructions (7)
Operand (6)

Instructions :
 A segment of code that contains steps that need to be executed by the computer
processor.

Instruction Set :
 A list of all the instructions with all the variants, which a processor can execute.

Elements of the instruction


 Each instruction of the CPU has specific information fields, which are required to
execute it.
 These information fields of instructions are called elements of instructions.
 The instruction elements are,
1. Operation code
2. Source operand address
3. Destination operand address
CO2 K2 3
QB102 (b)*

 Operation code
 Specifies the operation to be performed.
 The operation is specified by the binary code.
 It is otherwise called Opcode.

 Source Operand Address


 It directly specifies the source operand address.

 Destination Operand Address


 It directly specifies the destination operand address.

Example : MOVE ALPHA, BETA

Here, MOVE is the opcode,
ALPHA is the source operand, and BETA is the destination operand.
Types of Instructions
 According to number of addresses, the instructions are classified into four types,

1) Zero address instructions


2) One address instructions
3)Two address instructions
4)Three address instructions

1.Zero Address Instructions


 Instruction contains, no address field or address specified implicitly in the
instructions.
 That instruction contains only an Opcode.
 Example: CLEAR , START, END

2.One Address Instructions


 The instructions contain an Opcode and only one operand.
 Example: ADD B
Adds the content of the Register B with A register, and result is stored in A.

3.Two Address Instructions


 The instructions contain an Opcode and two operands.
 First operand is source operand and second one is destination.
Example: MOVE A , B
Content of register A is moved into register B.

4.Three address instructions


 The instructions contain an Opcode and three operands.
 Last two operands are source operands and first one is destination.
Example: ADD A , B , C
B and C contents are added and stored in A register.

According to number of operations, the instructions are classified into some more
types,
1. Load and Store instructions.
2. Arithmetic Instructions.
3. Comparison Instructions.
4. Jump Instructions.
1.Load and Store instructions
LDA - Load the value to Accumulator.
LDB : load the value to register B from operand
LDX – Load the value to Index Register.
STA – Store the Accumulator content to some variable.
STB : store the value from register B to some operand.
STX – Store the Index register content into some variable.
2.Arithmetic Instructions
ADD – Add the operand value with Accumulator and result is stored in Accumulator.
SUB - Subtract the operand value with Accumulator and result is stored in Accumulator.
MUL – Multiply the operand value with the Accumulator; the result is stored in the Accumulator.
DIV – Divide the Accumulator by the operand value; the result is stored in the Accumulator.
ADDF, SUBF, MULF, DIVF – Floating point arithmetic instructions.

1. Comparison Instructions
COMP : compares the value in register A with another variable and sets the condition code CC to indicate whether the accumulator value is <, =, or >.

2. Jump Instructions
JLT : jump if less than
JEQ : jump if equal to
JGT : jump if greater than
 These instructions test the setting of CC and jump accordingly.

Register to register instructions

ADDR : add the contents of two registers; the result is stored in the rightmost register.
SUBR : subtract the contents of two registers; the result is stored in the rightmost register.
MULR : multiply the contents of two registers; the result is stored in the rightmost register.
DIVR : divide the contents of two registers; the result is stored in the rightmost register.
ADDR S , A
The S register content and A register content are added, and the result is stored in the A register.

Special supervisor call instruction

SVC : a kind of system call, which transfers control to the OS rather than to the user program.

Explain in detail about instructions based on operations, with examples.

Instructions (7)
Operations (6)
Types of operations
 Generally, the types of operations supported by most machines can be categorized as follows:

 Data transfer operations
 Arithmetic operations
 Logical operations
 Conversion operations
 I/O operations
 System control operations
 Transfer of control operations
CO2 K2 3
QB103 (a)*
Data transfer operations
Operation Description

Move Transfer word from source to destination

Store Transfer word from Processor to Memory

Load Transfer word from Memory to Processor

Clear Reset the register contents

Set Make the register content 1

Push Transfer word from source to top of the stack

Pop Transfer word from top of the stack to destination

Arithmetic Operations

Operation Description

Addition Perform addition of two operands

Subtraction Perform subtraction of two operands

Multiplication Perform multiplication of two operands

Division Perform division of two operands

Absolute Replace the value with its absolute value

Negate Change the sign of the operand

Increment Adds 1 to operand

Decrement Subtracts 1 from operand


Logical operations
Operation Description

AND Performs logical AND


OR Performs logical OR
NOT Performs logical NOT
Exclusive-OR Performs logical XOR
Test Test the condition flags
Compare Compare the two operands and set the flag value
Left Shift Shift the operand to left position
Right Shift Shift the operand to right position
Conversion Operations

Operation Description

Translate Translate values in a section of memory.


Convert Converts the contents of a word from one form to another.
I/O Operations

Operation Description

Input (read) Transfer data from I/O port to Main memory

Output (write) Transfer data from main memory to I/O port

Start I/O Initiate I/O operations

Test I/O Test the status of the operations


Transfer of Control Operations

Operation Description

Jump Unconditional transfer, load to specified address

Jump Conditional Test the condition and transfer the control


Call to subroutine   Store the current address on the stack and jump to the specified address
Return Return back the control specified from stack

Execute Execute the instruction.

Skip Increment PC to skip next instruction

Halt Stops the program execution

Wait Stop the program up to specified time

No operation No operation is performed, but program execution is continued.


(Or)
Briefly explain the logical and decision-making statements with examples.

Logical (7)

Decision Making (6)

 Although the first computers operated on full words, it soon became clear that it was useful to operate on fields of bits within a word, or even on individual bits.
CO2 K2 3
QB103 (b)*
 Examining characters within a word, each of which is stored as 8 bits.
 It follows that operations were added to programming languages and instruction set
architectures to simplify, among other things, the packing and unpacking of bits into
words.
 These instructions are called logical operations.
 Figure shows logical operations in C, Java, and MIPS.
 The first class of such operations is called shifts.
 They move all the bits in a word to the left or right, filling the emptied bits with 0s.
AND
 A logical bit by bit operation with two operands that calculates a 1 only if there is a
1 in both operands.
OR
 A logical bit by bit operation with two operands that calculates a 1 if there is a 1in
either operand.

NOT
 A logical bit by bit operation with one operand that inverts the bits; that is, it
replaces every 1 with a 0, and every 0 with a 1.
NOR

 A logical bit by bit operation with two operands that calculates the NOT of the OR
of the two operands.
 That is, it calculates a 1 only if there is a 0 in both operands.
 Bitwise operators work on bits and perform bit-by-bit operations. Assume a = 60 and b = 13; in binary format they are as follows −

a     = 0011 1100
b     = 0000 1101
a & b = 0000 1100
a | b = 0011 1101
~a    = 1100 0011
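The same bitwise results can be reproduced directly in Python (masking NOT and shifts to 8 bits, since Python integers are unbounded):

```python
a, b = 60, 13                          # 0011 1100 and 0000 1101

print(format(a & b, "08b"))            # AND -> 00001100 (12)
print(format(a | b, "08b"))            # OR  -> 00111101 (61)
print(format(a ^ b, "08b"))            # XOR -> 00110001 (49)
print(format(~a & 0xFF, "08b"))        # NOT, masked to 8 bits -> 11000011 (195)
print(format((a << 2) & 0xFF, "08b"))  # left shift fills the emptied bits with 0s
```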
UNIT - II
Briefly explain the fast adder with propagate and generate functions for 4-bit and 16-bit circuits.

A carry-lookahead adder (CLA) is a type of adder used in digital logic. A carry-lookahead adder improves speed by reducing the amount of time required to determine carry bits. The carry-lookahead adder calculates one or more carry bits before the sum, which reduces the wait time to calculate the result of the larger-value bits.
Idea
 Try to "predict" Ck earlier than Tc*k.
 Instead of passing through k stages, compute Ck separately using additional logic.
Operation Mechanism
Carry lookahead depends on two things:
1. Calculating, for each digit position, whether that position is going to propagate a carry if one comes in from the right.
2. Combining these calculated values to be able to deduce quickly whether, for each group of digits, that group is going to propagate a carry that comes in from the right.
CO3 K2 2
QB201 (a)*
CLA – Concept
 To reduce the computation time, there are faster ways to add two binary numbers by
using carry look ahead adders.
 They work by creating two signals P and G known to be Carry Propagator and
Carry Generator.
 The carry propagator is propagated to the next level whereas the carry generator is
used to generate the output carry regardless of input carry.
 The block diagram of a 4-bit Carry Look ahead Adder is shown here below
Design Issues
The corresponding Boolean expressions are given here to construct a carry look
ahead added. In the carry-look ahead circuit we need to generate the two signals
carry propagator(P)
and carry generator(G),
 Pi = Ai ⊕ Bi
 Gi = Ai · Bi
The output sum and carry can be expressed as
 Sumi = Pi ⊕ Ci
 Ci+1 = Gi + ( Pi · Ci)
Having these we could design the circuit. We can now write the Boolean function for
the carry output of each stage and substitute for each Ci its value from the previous
equations:
 C1 = G0 + P0 · C0
 C2 = G1 + P1 · C1 = G1 + P1 · G0 + P1 · P0 · C0
 C3 = G2 + P2 · C2 = G2 + P2 · G1 + P2 · P1 · G0 + P2 · P1 · P0 · C0
 C4 = G3 + P3 · C3 = G3 + P3 · G2 + P3 · P2 · G1 + P3 · P2 · P1 · G0 + P3 · P2 · P1 · P0 · C0

Generate: Cout = 1 independent of C
G = A · B
Propagate: Cout = C
P = A ⊕ B
Kill: Cout = 0 independent of C
K = ~A · ~B
Cout = G + P · Cin

Algebraic calculations for carry out


ci+1 = gi + pici
c1 = g0 + p0c0
c2 = g1 + p1c1
= g1 + p1(g0 + p0c0)
= g1 + p1g0 + p1p0c0
c3 = g2 + p2c2
= g2 + p2(g1 + p1g0 + p1p0c0)
= g2 + p2g1 + p2p1g0 + p2p1p0c0
c4 = g3 + p3c3
= g3 + p3(g2 + p2g1 + p2p1g0 + p2p1p0c0)
= g3 + p3g2 + p3p2g1 + p3p2p1g0 + p3p2p1p0c0
4 Bit – Carry lookahead adder circuit diagram

16 Bit – Carry lookahead adder circuit diagram
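The propagate/generate equations above can be checked with a short Python sketch (a software model only, not a hardware description; the function name is illustrative):

```python
def cla_4bit(a, b, c0=0):
    """4-bit carry-lookahead addition: compute all carries from the
    P_i and G_i signals, then form the sum bits."""
    A = [(a >> i) & 1 for i in range(4)]
    B = [(b >> i) & 1 for i in range(4)]
    p = [A[i] ^ B[i] for i in range(4)]   # carry propagate P_i = A_i XOR B_i
    g = [A[i] & B[i] for i in range(4)]   # carry generate  G_i = A_i AND B_i
    c = [c0, 0, 0, 0, 0]
    for i in range(4):                    # C_{i+1} = G_i + P_i . C_i (expanded in hardware)
        c[i + 1] = g[i] | (p[i] & c[i])
    s = [p[i] ^ c[i] for i in range(4)]   # Sum_i = P_i XOR C_i
    total = sum(s[i] << i for i in range(4))
    return total, c[4]                    # (4-bit sum, carry out)

print(cla_4bit(0b1011, 0b0110))  # 11 + 6 = 17 -> sum 0001 with carry out 1
```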

(Or)
Explain in detail the half adder, full adder, and N-bit full adder with circuit diagrams, and also list the advantages and disadvantages.
Half adder (4)
Half Adder
 With the help of half adder, we can design circuits that are capable of performing
simple addition with the help of logic gates.
 Let us first take a look at the addition of single bits.
0+0 = 0
0+1 = 1
1+0 = 1
1+1 = 10
 These are the least possible single-bit combinations. But the result for 1+1 is 10.
 Though this problem can be solved with the help of an EXOR gate, if you do care about the output, the sum result must be re-written as a 2-bit output.
 Thus the above equations can be written as
0+0 = 00
0+1 = 01
1+0 = 01
1+1 = 10
Here the output '1' of '10' becomes the carry-out. The result is shown in the truth table below. 'SUM' is the normal output and 'CARRY' is the carry-out.
CO3 K2 2
QB201 (b)*

INPUTS  | OUTPUTS
A | B   | SUM | CARRY
0 | 0   | 0   | 0
0 | 1   | 1   | 0
1 | 0   | 1   | 0
1 | 1   | 0   | 1

 From the equation it is clear that this 1-bit adder can be easily implemented with the
help of EXOR Gate for the output ‘SUM’ and an AND Gate for the carry.
 Take a look at the implementation below.
 For complex addition, there may be cases when you have to add two 8-bit bytes
together. This can be done only with the help of full-adder logic.
Full Adder
 This type of adder is a little more difficult to implement than a half-adder.
 The main difference between a half-adder and a full-adder is that the full-adder has three inputs and two outputs.
 The first two inputs are A and B, and the third input is an input carry designated as Cin.
 When full adder logic is designed, we will be able to string eight of them together to create a byte-wide adder and cascade the carry bit from one adder to the next.
 The output carry is designated as COUT and the normal output is designated as S. Take a look at the truth table.
INPUTS      | OUTPUTS
A | B | CIN | COUT | S
0 | 0 | 0   | 0    | 0
0 | 0 | 1   | 0    | 1
0 | 1 | 0   | 0    | 1
0 | 1 | 1   | 1    | 0
1 | 0 | 0   | 0    | 1
1 | 0 | 1   | 1    | 0
1 | 1 | 0   | 1    | 0
1 | 1 | 1   | 1    | 1
 From the above truth table, the full adder logic can be implemented.
 We can see that the output S is an EX-OR between the input A and the half-adder SUM output of B and Cin.
 We must also note that Cout will only be true if any two of the three inputs are HIGH.
 Thus, we can implement a full adder circuit with the help of two half adder circuits.
N-bit full adder (3)
An N-bit adder can be shown as follows:

 Here n binary bits are sent to the adders simultaneously.
 The LSB full adder first produces sum0 and carry1.
 The carry can be input to the second full adder.
 This process continues through all full adders and the final sum is generated.

Advantages ( 3)
 Allows addition of larger binary numbers.
 Highly scalable.
 Supports addition of numbers with more bits.
Disadvantages (3 )

 Requires more complex circuitry.


 Increased propagation delay with larger N-bit adders.
 Higher chance of errors in implementation due to complexity.
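The full-adder equations and the N-bit cascade above can be sketched in Python (function names are illustrative):

```python
def full_adder(a, b, cin):
    """One-bit full adder: S = A XOR B XOR Cin; Cout is true when any
    two of the three inputs are HIGH."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_adder(a, b, bits):
    """N-bit adder built by cascading full adders, LSB first; the carry
    out of each stage feeds the next stage's carry in."""
    carry, total = 0, 0
    for i in range(bits):
        s, carry = full_adder((a >> i) & 1, (b >> i) & 1, carry)
        total |= s << i
    return total, carry

print(ripple_adder(0b1011, 0b0110, 4))  # 11 + 6 = 17 -> sum 0001, carry out 1
```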

QB202 (a)*
Implement Booth's multiplication algorithm for A = 010111, B = 101100.

Problem Solving (7)
 Initialize the accumulator (AC) to 0 and Q-1 to 0; load the multiplier A into Q and the multiplicand B into M.
 Repeat the following steps for the number of bits in the multiplier:
1. If the bit pair (Q0, Q-1) is 10, subtract the multiplicand from AC.
2. If the bit pair (Q0, Q-1) is 01, add the multiplicand to AC.
3. Arithmetically shift the combined register pair (AC, Q, Q-1) right by 1 bit.
 The final value in (AC, Q) is the result of the multiplication.
CO3 K3 2
Steps for Problem(6)

A = 010111
B = 101100

Initialize:
Product (P) = 000000
Multiplier (A) = 010111
Multiplicand (B) = 101100

Step 1:
A = 010111
B = 101100
P = 000000

Step 2:
1. Since the LSB of A is 1 and the previous bit is 0, subtract B from P: P = P -
B = 000000 - 101100 = 010100
2. Right shift A and P: A = 001011, P = 001010

Step 3:
1. Since the LSB of A is 1 and the previous bit is 1, no operation needed.
2. Right shift A and P: A = 000101, P = 000101

Step 4:
1. Since the LSB of A is 0 and the previous bit is 1, add B to P: P = P + B =
000101 + 101100 = 110001
2. Right shift A and P: A = 000010, P = 011000

Step 5:
1. Since the LSB of A is 1 and the previous bit is 0, subtract B from P: P = P -
B = 011000 - 101100 = 001100
2. Right shift A and P: A = 000001, P = 000110

Step 6:
1. Since the LSB of A is 1 and the previous bit is 1, no operation needed.
2. Right shift A and P: A = 000000, P = 000011

The final product accumulator P is 000011, which is equivalent to the decimal


value 11.
Therefore, the result of the multiplication of 010111 and 101100 is 11 in
decimal.

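Booth's algorithm for two's-complement operands can also be sketched in Python (the function name `booth_multiply` is illustrative); for multiplicand 101100 (-20) and multiplier 010111 (+23) it yields -460.

```python
# Sketch of Booth's algorithm for n-bit two's-complement operands.

def booth_multiply(multiplicand, multiplier, n):
    """Multiply two n-bit two's-complement integers; return the signed product."""
    mask = (1 << n) - 1
    A, Q, q_1 = 0, multiplier & mask, 0
    M = multiplicand & mask
    for _ in range(n):
        pair = ((Q & 1) << 1) | q_1      # examine the bit pair (Q0, Q-1)
        if pair == 0b10:                 # (1, 0) -> A = A - M
            A = (A - M) & mask
        elif pair == 0b01:               # (0, 1) -> A = A + M
            A = (A + M) & mask
        # arithmetic right shift of the combined register (A, Q, Q-1)
        q_1 = Q & 1
        Q = ((Q >> 1) | ((A & 1) << (n - 1))) & mask
        A = (A >> 1) | (A & (1 << (n - 1)))   # replicate the sign bit of A
    product = (A << n) | Q
    if product & (1 << (2 * n - 1)):     # interpret as signed 2n-bit value
        product -= 1 << (2 * n)
    return product

print(booth_multiply(0b101100, 0b010111, 6))   # -20 x 23 = -460
```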
(Or)
Multiply two numbers using Hardware multiplication A=1010 B=1101

Problem Solving(7)

Points to remember while multiplication


 The product of binary digit 0 with itself is 0.
 The product of binary digit 1 with itself is 1.
 The product of binary digit 0 with the binary digit 1 is 0.
 The product of binary digit 1 with the binary digit 0 is 0.
QB202 (b)* CO3 K3 2
 The sum of binary digit 0 with itself is 0.
 The sum of binary digit 1 with itself is 10.
 The sum of binary digit 0 with 1 is 1.
 The sum of binary digit 1 with 0 is 1.
Steps for Problem(6)

Step 1: Write the given binary numbers as in the conventional method of


multiplication i.e. one below the other. The number on the upper position is
known as multiplicand and the number placed below the multiplicand is called
the multiplier. For example, for multiplication of the binary numbers (1101)₂
and (1010)₂
(1101)₂

X (1010)₂

Step 2: To begin with multiplication, we consider the corner most digit from
the right side. Taking the digit from the extreme right, first, multiply it with
the extreme right digit of the multiplicand and proceed in the same way
towards the left of the multiplicand.

1101

X 1010

___________

= 0000

Here, as the product of binary digit 0 with 0 and 1 is 0 so place 0s in the first
row.

Step 3: Proceeding the same way for the rest of the digits of the multiplicand
and multiplier, we get

1101

X1010

___________

= 0000

1101X

0000XX
1101XXX

___________

The product obtained in each row by multiplying a digit of the multiplier with
all the digits of the multiplicand is called an intermediate product.

Step 4: To obtain the final product, add up all the numbers obtained till now.
Adding all the intermediate products, we get

1101

X1010

___________

= 0000

1101X

0000XX

+1101XXX

___________

Step 5: Before adding all the numbers, always remember to apply the addition
rule for binary digits i.e.

0+0=0

0+1=1
1+0=1

1 + 1 = 10

For binary addition of 1 with itself, the sum becomes 10 and is written as 0 in
the place and 1 carried to the next digit.

On binary addition, we get

1101

X1010

___________

0000

1101X

0000XX

+1101XXX

___________

10000010
___________

Hence, the binary multiplication of (1101)₂ and (1010)₂ is (10000010)₂.
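The shift-and-add procedure above can be sketched in Python on bit strings; the function name `shift_add_multiply` is illustrative.

```python
# Sketch of shift-and-add binary multiplication: for every 1 bit of the
# multiplier, add the multiplicand shifted left by that bit's position.

def shift_add_multiply(multiplicand, multiplier):
    """Multiply two unsigned binary numbers given as bit strings."""
    m = int(multiplicand, 2)
    product = 0
    for i, bit in enumerate(reversed(multiplier)):   # LSB of multiplier first
        if bit == '1':
            product += m << i     # intermediate product: multiplicand shifted i places
    return bin(product)[2:]

print(shift_add_multiply('1101', '1010'))   # '10000010' = 130 = 13 x 10
```

Each `m << i` term corresponds to one intermediate-product row in the working above.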
QB203 (a)* Represent the number (1259.125)10 in single and double precision representation. CO3 K3 2
(Or)
Explain the algorithm and flow chart for floating point multiplication with an
example.

Here, we have discussed an algorithm to multiply two floating point numbers, x and y.
Algorithm:-
1. Convert these numbers in scientific notation, so that we can explicitly represent
hidden 1.
2. Let ‘a’ be the exponent of x and ‘b’ be the exponent of y.
3. Assume resulting exponent c = a+b. It can be adjusted after the next step.
4. Multiply mantissa of x to mantissa of y. Call this result m.
5. If m does not have a single 1 left of radix point, then adjust radix point so it does, and
adjust exponent c to compensate.
6. Add sign bits, mod 2, to get sign of resulting multiplication.
7. Convert back to one byte floating point representation, truncating bits if needed.
QB203 (b)* CO3 K3 2
Example :-
Suppose you want to multiply following two numbers:

Now, these are the steps according to the above algorithm:
1. Given, A = 1.11 x 2^0 and B = 1.01 x 2^2.
2. So, exponent c = a + b = 0 + 2 = 2 is the tentative resulting exponent.
3. Now, multiply 1.11 by 1.01; the result is 10.0011.
4. We need to normalize 10.0011 to 1.00011 and adjust the exponent from 2 to 3 to compensate.
5. Resulting sign bit: 0 XOR 0 = 0, meaning positive.
6. Now, truncate and normalize 1.00011 x 2^3 to 1.000 x 2^3.
Therefore, the resultant number is 1.000 x 2^3 = 8 (after truncation).
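The algorithm's steps can be sketched on simple (sign, exponent, mantissa-string) triples; the function name `fp_multiply` and this representation are illustrative assumptions, not a real IEEE 754 implementation.

```python
# Sketch of the floating-point multiplication steps: XOR signs, add exponents,
# multiply mantissas as fixed-point integers, normalize, then truncate.

def fp_multiply(x, y, mantissa_bits=3):
    """x, y are (sign, exponent, mantissa) with mantissa like '1.11'."""
    sx, ex, mx = x
    sy, ey, my = y
    sign = sx ^ sy                      # step: XOR the sign bits
    exp = ex + ey                       # step: add the exponents
    fx = int(mx.replace('.', ''), 2)    # mantissas as fixed-point integers
    fy = int(my.replace('.', ''), 2)
    frac_bits = (len(mx) - 2) + (len(my) - 2)
    prod = fx * fy                      # e.g. 1.11 x 1.01 -> 10.0011
    while prod >> frac_bits > 1:        # normalize: single 1 left of radix point
        prod >>= 1                      # truncating shift
        exp += 1                        # compensate in the exponent
    mant = prod >> max(0, frac_bits - mantissa_bits)   # truncate the mantissa
    return sign, exp, mant              # mant = 0b1000 means mantissa 1.000

print(fp_multiply((0, 0, '1.11'), (0, 2, '1.01')))   # (0, 3, 8): +1.000 x 2^3
```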

UNIT - III
Q. No | Questions | CO | Knowledge Level (Blooms) | Difficulty Level (1-5)
Draw and explain the function block diagram for the basic MIPS implementation with
necessary multiplexers and control lines
QB301 (a)* Function block diagram(7) CO4 K2 3

Basic MIPS implementation (6 )

(Or)
What is pipelining? Discuss about pipelined datapath and control.

Pipelining(3)

 Using pipeline modern computers to achieve high performance.


 Pipelined organization requires sophisticated compilation techniques, and optimizing compilers have been developed for this purpose.
 Among other things, such compilers rearrange the sequence of operations to maximize the benefits of pipelined execution.
Pipelined data path(5)
QB301 (b)*  A datapath element is a unit used to operate on or hold data within a processor. K2 2
 In the MIPS implementation, the datapath elements include,
1. Instruction Memory
2. Data Memory
3. Register File
4. ALU
5. Adders
Building a MIPS datapath consists of datapaths for:
 Fetching the instruction and incrementing the PC
 Executing arithmetic and logic instructions
 Executing a memory-reference instruction
 Executing a branch instruction
Datapath for Fetching the Instruction and Incrementing the PC
Pipelined Control path(5)
 The speed of execution of programs is influenced by many factors.
 One way to improve performance is to use faster circuit technology to
build the processor and the main memory.
 Another possibility is to arrange the hardware so that more than one
operation can be performed at the same time.
 In this way, the number of operations performed per second is increased
even though the elapsed time needed to perform any one operation is
not changed.
 The concept of multiprogramming explains how it is possible for I/O transfers and computational activities to proceed simultaneously.

 Pipelining is a particularly effective way of organizing concurrent


activity in a computer system.
 The basic idea is very simple.
 Consider how the idea of pipelining can be used in a computer.
 The processor executes a program by fetching and executing
instructions, one after the other. Let Fi and Ei refer to the fetch and
execute steps for instruction Ii.
 Executions of a program consist of a sequence of fetch and execute
steps.
Explain the basic operation of four-stage pipelining with a neat diagram.

Basic operation (7)


Four stage pipelining (6 )

A four-stage pipelined processor divides the execution of instructions into four sequential
stages, allowing multiple instructions to be processed concurrently. The four stages
typically include:

1. Instruction Fetch (IF):


 In the first stage, the processor fetches the instruction from memory based on the
value of the program counter (PC).
 The fetched instruction is loaded into an instruction register.
2. Instruction Decode (ID):
 In the second stage, the processor decodes the instruction fetched in the previous
stage.
QB302 (a)*  Decoding involves identifying the operation to be performed and the operands
CO4 K2 3
involved.
 Control signals are generated based on the instruction type to control subsequent
stages.
3. Execute (EX):
 The third stage is where the actual execution of the instruction takes place.
 Arithmetic or logic operations specified by the instruction are performed in this
stage.
 For memory operations, addresses may be calculated, and data may be accessed
from memory.
4. Memory Access/Write Back (MEM/WB):
 In the final stage, the processor either accesses memory (for load/store instructions)
or writes back the result of the execution to the appropriate register (for arithmetic
or logic instructions).
 Memory access may involve fetching data from memory or storing data to memory,
depending on the instruction type.
Once the pipeline is filled with instructions, each stage of the pipeline works on a different
instruction simultaneously. As each clock cycle progresses, each instruction moves one
stage forward in the pipeline. This allows multiple instructions to be processed
concurrently, improving throughput and performance.

It's important to note that while a four-stage pipeline simplifies the processor design
compared to pipelines with more stages, it may also introduce limitations such as
potentially lower clock speeds due to shorter stage durations and higher susceptibility to
pipeline hazards. Managing these limitations effectively is crucial to maximizing the
benefits of pipelining in four-stage pipeline architecture.
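The overlapped execution described above can be sketched as a cycle-by-cycle schedule; `pipeline_timing` is an illustrative name, and the model assumes an ideal pipeline with no hazards or stalls.

```python
# Sketch: which clock cycle each instruction occupies each stage of an
# ideal 4-stage pipeline (IF, ID, EX, MEM/WB).

STAGES = ["IF", "ID", "EX", "MEM/WB"]

def pipeline_timing(n_instructions):
    """Return {instruction: {stage: cycle}} for an ideal 4-stage pipeline."""
    schedule = {}
    for i in range(n_instructions):
        # instruction i enters IF at cycle i+1 and advances one stage per cycle
        schedule[f"I{i+1}"] = {stage: i + s + 1 for s, stage in enumerate(STAGES)}
    return schedule

for instr, stages in pipeline_timing(4).items():
    print(instr, stages)

# Total cycles for n instructions = (stages - 1) + n = 3 + 4 = 7 here,
# versus 4 * 4 = 16 cycles without pipelining.
```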
2-STAGE PIPELINED EXECUTION

3 - STAGE PIPELINED EXECUTION


4-STAGE PIPELINED EXECUTION

5-STAGE PIPELINED EXECUTION


6-STAGE PIPELINED EXECUTION

(Or)
Explain the basic concepts of pipelining.

(i)Basic operation (7)


(ii)Advantages(3)
(iii)Disadvantages(3)

The basic concepts of pipelining operation involve breaking down the execution of
instructions into smaller, sequential stages to improve processor efficiency. Here's an
overview of how pipelining works:

Instruction Fetch (IF):

 The first stage of the pipeline is responsible for fetching instructions from memory.
 The program counter (PC) is used to determine the address of the next instruction to
be fetched.
 The instruction fetched is usually stored in an instruction register.
Instruction Decode (ID):

QB302 (b)* CO4 K2 3
In this stage, the fetched instruction is decoded to determine its type and operands.
The necessary control signals are generated to control subsequent stages of the pipeline based on the instruction type.

Execution (EX):

In this stage, the instruction is executed.


The specific operation indicated by the instruction is performed, such as arithmetic, logic,
or data transfer.
For complex instructions, multiple cycles may be required to complete the execution.
Memory Access (MEM):

This stage is responsible for accessing memory if required by the instruction.


Memory operations include reads or writes to memory locations, such as loading data from
memory or storing data to memory.

Write Back (WB):

 The final stage of the pipeline is where the results of the instruction execution are
written back to the appropriate registers.
 This stage updates the processor's architectural state, such as updating register
values with the result of arithmetic or logic operations.
 These stages operate sequentially, with each stage processing a different instruction
concurrently. As one instruction moves from one stage to the next, the next
instruction enters the pipeline, resulting in a continuous flow of instructions through
the pipeline.

Key concepts of pipelining operation include:

Parallelism: Pipelining enables parallel execution of multiple instructions by overlapping


their execution in different pipeline stages.

Overlap: Instructions are overlapped in time, so while one instruction is being executed,
another instruction can be decoded, and a third instruction can be fetched.

Throughput: Pipelining increases the throughput of the processor by allowing multiple


instructions to be processed simultaneously, thereby improving overall performance.

Hazards: Pipelining introduces hazards such as data hazards, structural hazards, and
control hazards, which must be managed to ensure correct execution and maintain
performance efficiency.

Advantages of Pipelining:

 Increased Throughput: Pipelining allows multiple instructions to be processed


simultaneously, thereby increasing the overall throughput of the processor. This
leads to higher performance and faster execution of programs.

 Improved Efficiency: By breaking down instruction execution into smaller stages


and overlapping them, pipelining maximizes the utilization of hardware resources,
leading to improved efficiency.

 Enhanced Performance: Pipelining enables better utilization of the processor's


resources, resulting in improved performance for a wide range of applications and
workloads.

 Resource Sharing: Pipelining allows different stages of the pipeline to share


resources efficiently, leading to better resource utilization and reduced hardware
costs.
 Scalability: Pipelining can be scaled to accommodate increasingly complex
instruction sets and larger pipelines, making it suitable for a wide range of processor
architectures and applications.

Disadvantages of Pipelining:

 Pipeline Hazards: Pipelining introduces various hazards such as data hazards,


structural hazards, and control hazards, which can impact pipeline efficiency and
require additional hardware mechanisms to handle.

 Increased Complexity: Implementing a pipelined processor requires more complex


hardware design and control logic compared to non-pipelined architectures. This
complexity can lead to higher development costs and longer design cycles.

 Pipeline Stall: Pipeline stalls occur when a hazard prevents the smooth progression
of instructions through the pipeline, leading to decreased throughput and
performance. Managing pipeline stalls effectively requires additional hardware
mechanisms, which can increase design complexity.

 Branch Prediction Overhead: Handling branch instructions in a pipelined


processor incurs overhead due to branch prediction mechanisms. Incorrect branch
predictions can result in pipeline flushes and wasted computational resources.

 Instruction Dependencies: Pipelining relies on the availability of independent


instructions to achieve parallelism. Dependencies between instructions, such as data
dependencies or control dependencies, can limit the effectiveness of pipelining and
require additional hardware mechanisms to resolve.

What is a data hazard? How do you overcome it? What are its side effects?

QB303 (a)* Data Hazard(7) CO4 K2 4
A data hazard occurs in pipelined processors when there is a dependence between instructions that can potentially lead to incorrect execution or stalls in the pipeline. This dependence arises when an instruction requires data that is produced by a previous instruction that has not yet completed execution. There are three types of data hazards:

 Read after Write (RAW) Hazard: Also known as a data dependence hazard, it
occurs when an instruction reads a register before a prior instruction writes to it.
This can cause the second instruction to use incorrect or outdated data.

 Write after Read (WAR) Hazard: This hazard occurs when an instruction writes
to a register before a prior instruction reads from it. While this doesn't affect the
correctness of the program's results, it can cause issues if the order of instructions
affects control flow or interrupts.

 Write after Write (WAW) Hazard: This hazard occurs when two instructions
write to the same register in quick succession, potentially overwriting each other's
results.

To overcome data hazards and maintain correct program execution in a pipelined


processor, various techniques can be employed:

 Forwarding (or data bypassing): Also known as data forwarding, this technique
involves passing data directly from the output of one stage to the input of another
without writing it to memory. By doing this, the processor can provide the required
data to instructions that need it before the data is written back to memory.

 Stall (or bubble insertion): This technique involves inserting bubbles (no-
operation instructions) into the pipeline to delay instruction execution until the
required data is available. While effective in resolving hazards, it can decrease
overall throughput and performance.

 Out-of-order execution: This technique allows the processor to reorder instructions


dynamically to avoid hazards. By executing independent instructions out of order
while ensuring program semantics are preserved, the processor can minimize
pipeline stalls caused by data hazards.
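The RAW case and the effect of forwarding can be sketched with a toy hazard check; the instruction format `(dest_register, source_registers)`, the function name, and the 2-cycle stall penalty are illustrative assumptions for a simple 5-stage in-order pipeline.

```python
# Sketch of RAW-hazard detection between adjacent instructions, with and
# without forwarding (data bypassing).

def analyze_raw_hazards(instructions, forwarding=True):
    """Return stall cycles needed between each adjacent instruction pair."""
    stalls = []
    for prev, curr in zip(instructions, instructions[1:]):
        dest, _ = prev
        _, sources = curr
        if dest in sources:                  # RAW dependence on prev's result
            # with forwarding, the ALU result bypasses the register file;
            # without it, we must stall until write-back completes
            stalls.append(0 if forwarding else 2)
        else:
            stalls.append(0)
    return stalls

# add r1, r2, r3 ; sub r4, r1, r5  -> RAW dependence on r1
prog = [("r1", ("r2", "r3")), ("r4", ("r1", "r5"))]
print(analyze_raw_hazards(prog, forwarding=False))   # [2] -> two stall cycles
print(analyze_raw_hazards(prog, forwarding=True))    # [0] -> no stall
```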

Side effects(6)

Side effects of data hazards and their resolution techniques include:

 Performance degradation: Data hazards can lead to pipeline stalls or


bubbles, reducing the overall throughput and performance of the
processor.
 Increased latency: Stalls or bubble insertion introduce delays in
instruction execution, increasing the latency of the program.

 Complexity: Implementing forwarding, stall insertion, or out-of-order


execution adds complexity to the processor design and may increase
power consumption.

 Potential for incorrect results: Inadequate handling of data hazards


can lead to incorrect program results due to the use of outdated or
incorrect data.

 Resource contention: Forwarding data directly between pipeline stages


may require additional hardware resources and introduce contention for
those resources.

 Hardware overhead: Implementing techniques to resolve data hazards,


such as forwarding logic or out-of-order execution units, requires
additional hardware, which increases chip area and manufacturing costs.
(Or)
What is a control hazard? Explain the methods for dealing with the control hazards.

Control Hazard(6)
A control hazard occurs when there is a delay in the availability of a required control signal
for instruction execution in a pipelined processor. This delay can cause a stall in the
pipeline, leading to decreased performance and efficiency.

There are several methods for dealing with control hazards:


QB303 (b)* CO4 K2 3
1. Branch prediction: By predicting the outcome of branches in advance, the
processor can speculatively execute instructions, reducing the impact of control
hazards. Techniques such as branch target prediction and branch history table are
commonly used for branch prediction.

2. Delayed branching: Instead of stalling the pipeline when a branch instruction is


encountered, the processor can execute one or more instructions following the
branch instruction before determining the branch outcome. This allows for
speculative execution and helps in reducing the impact of control hazards.

3. Compiler optimization: Compiler techniques such as loop unrolling, software


pipelining, and branch target optimization can help in minimizing the impact of
control hazards by rearranging the code to reduce the number of branches or
ensure that branches are predicted correctly.

4. Hardware support: Dedicated hardware structures such as branch prediction


units, speculation mechanisms, and forwarding can be implemented in the
processor to predict and handle control hazards efficiently.

5. Out-of-order execution: By allowing instructions to be executed out of order and


reordering the instructions in the pipeline, the processor can reduce the impact of
control hazards and improve performance..

Explanation(7)

Techniques for Branch Prediction

1. Predict never taken


2. Predict always taken
3. Predict by opcode
4. Taken/not taken switch
5. Branch history table

A small memory/buffer that contains a bit saying whether the branch was recently taken or not.
 A branch predictor tells us whether or not a branch is taken.
 The branch target address is calculated and cached.
 A cache (the branch target buffer) is used to hold these target addresses.
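The branch history table idea can be sketched with 2-bit saturating counters, a common refinement of the one-bit "recently taken" scheme described above; the class name, table size, and outcome sequence are illustrative.

```python
# Sketch of a branch history table using 2-bit saturating counters:
# counter values 0-1 predict not-taken, 2-3 predict taken.

class BranchHistoryTable:
    def __init__(self, size=16):
        self.size = size
        self.counters = [2] * size      # initialize to "weakly taken"

    def predict(self, pc):
        return self.counters[pc % self.size] >= 2

    def update(self, pc, taken):
        i = pc % self.size
        if taken:
            self.counters[i] = min(3, self.counters[i] + 1)
        else:
            self.counters[i] = max(0, self.counters[i] - 1)

bht = BranchHistoryTable()
outcomes = [True, True, False, True, True]   # a mostly-taken loop branch
hits = 0
for taken in outcomes:
    if bht.predict(0x40) == taken:
        hits += 1
    bht.update(0x40, taken)
print(hits, "of", len(outcomes), "predicted correctly")   # 4 of 5
```

A single mispredicted iteration only nudges the counter, so one not-taken outcome in a mostly-taken loop does not flip the prediction.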

Branch Prediction Flowchart


Types of Branch Predictor

Correlating predictor - Combines local behavior and global behavior of a


particular branch

Tournament branch predictor - Makes multiple predictions for each branch


and a selection mechanism that chooses which predictor to enable for a given
branch.

Delayed Branch
In MIPS, branches are delayed.
 This means that the instruction immediately following the branch (the delay slot) is always executed, independent of whether the branch condition is true or false.
 When the condition is false, the execution looks like a normal branch.
 When the condition is true, a delayed branch first executes the instruction immediately following the branch in sequential instruction order before jumping to the specified branch target address.

UNIT - IV
Q. No | Questions | CO | Knowledge Level (Blooms) | Difficulty Level (1-5)
Discuss about SISD, SIMD in detail with suitable diagrams and examples.
QB401 SISD, SIMD(7)
Diagram(3)
CO5 K2 3
(a)*
Explanation(3)
(Or)
QB401 (b)* What is hardware multithreading? Compare and contrast fine-grained multithreading and coarse-grained multithreading. CO5 K2 2
Hardware multithreading (7)
Comparison (6)

Draw and discuss about the Cluster Architecture and its types
QB402
Cluster Architecture(6) CO5 K2 3
(a)* Types(7)

(Or)

QB402 Discuss shared memory multiprocessor with a neat diagram CO5 K2 3


(b)* Shared memory multiprocessor (7)
Example (6)

QB403 (a)* Explain multicore processors with a suitable diagram and examples. CO4 K2 4
Multicore processors (7)
Diagram (3)
Example (3)
(Or)

QB403 (b)* Explain clusters and other message-passing multiprocessors. CO4 K2 3
Cluster and message-passing multiprocessor (7)
Explanation (6)

UNIT - V
Q. No | Questions | CO | Knowledge Level (Blooms) | Difficulty Level (1-5)

Explain in detail about memory Hierarchy with neat diagram.


QB501 (a)* Explanation( 7) CO5 K2 3
Diagram(6)
(Or)
Describe the basic operations of cache in detail with diagram and discuss the various
QB501 (b)* mapping schemes used in cache design with example CO5 K2 2
Basic operation of cache(7)
Mapping scheme(6)

QB502 (a)* Discuss the methods used to measure and improve the performance of the cache . CO5 K2 3
Methods and performance(7)
Explanations(6)
(Or)

Explain the virtual memory address translation and TLB with necessary diagram
QB502 (b)* Memory address translation(8) CO5 K2 3
Diagram(5)

QB503 (a)* Describe in detail about programmed Input/ Output with neat diagram. CO6 K2 4
Programmed Input and output(7)
Diagram(6)
(Or)
Explain in detail about interrupts with diagram.
QB503 (b)* Interrupts( 7) CO6 K2 3
Diagram( 6)

(PART C – 15 Marks - Either Or Type)

UNIT - I
Q. No | Questions | CO | Knowledge Level (Blooms) | Difficulty Level (1-5)
QC101 (a)* Explain in detail the instructions: addresses, operations, and operands. CO2 K3 4
Instructions (5)
Operations (5)
Operands (5)

(Or)
Explain the following addressing modes in detail with diagram.
i)Immediate addressing ii)Register Addressing iii)Base or Displacement addressing
QC101 iv)PC-Relative Addressing v)Pseudo Direct Addressing
CO2 K3 4
(b)* Addressing types(7 )
Explanation(5 )
Diagram(3)

UNIT - II
Q. No | Questions | CO | Knowledge Level (Blooms) | Difficulty Level (1-5)
QC201 (a)* Explain in detail about floating-point addition and add the numbers (0.5)10 and (-0.4375)10 using the binary floating-point addition algorithm. CO3 K3 5
Problem Solving (8)
Steps for Problem (7)
(Or)

QC201 (b)* Divide (12)10 by (3)10 using the restoring and non-restoring division algorithms with step-by-step intermediate results and explain. CO3 K3 5
Problem Solving (8)
Steps for Problem (7)

UNIT - III
Q. No | Questions | CO | Knowledge Level (Blooms) | Difficulty Level (1-5)
Explain in detail how exceptions are handled in MIPS architecture?
Function block diagram(7)
QC301 (a)* MIPS architecture(8) CO4 K3 3

(Or)

Define pipelined Hazards? List the types and explain the different types of hazards
with example and solutions to the hazards.
QC301 (b)* CO4 K3 4
Pipelining hazard(3)
Different types of pipeline hazards (7)
Examples(5)

UNIT - IV
Q. No | Questions | CO | Knowledge Level (Blooms) | Difficulty Level (1-5)
Explain the different types of processors with neat diagram.
Types of processor (7)
QC401 (a)* CO5 K3 3
diagram (3)
Explanation(5)
(Or)
Explain the four principle approaches to multithreading with necessary diagrams
Types of approaches(8)
QC401 (b)* CO5 K3 4
Diagram(2)
Explanation(5)

UNIT - V
Q. No | Questions | CO | Knowledge Level (Blooms) | Difficulty Level (1-5)
Explain in detail about the bus arbitration techniques.
QC501 (a)* Bus arbitration techniques(8) CO6 K3 4
Explanation(7)

(Or)
Draw the typical block diagram of a DMA controller and explain how it is used for
direct data transfer between memory and peripherals.
QC501 (b) DMA Controller(8) CO6 K3 4
Diagram(2)
Explanation(5)

Knowledge Level (Blooms Taxonomy)
K1 Remembering (Knowledge)   K2 Understanding (Comprehension)   K3 Applying (Application of Knowledge)
K4 Analysing (Analysis)   K5 Evaluating (Evaluation)   K6 Creating (Synthesis)

Note: For each Question, mention as follows


(i) K1 or K2 etc. for Knowledge Level
(ii) CO1, CO2 etc. for Course Outcomes
(iii) Any number from 1 to 5 for Difficulty Level (With 1 as Most Easy & 5 as Most Difficult)
