
Computer Architecture Solve 2018-19

The document covers various topics related to computer architecture and assembly language programming, including instruction execution cycles, types of hazards in pipelining, cache memory concepts, and addressing modes in the 8085 microprocessor. It also discusses the bootstrap loader, data transfer schemes, parallel processing, and memory management techniques such as virtual memory and memory interleaving. Additionally, it provides examples of assembly language instructions and compares different memory mapping techniques.


Q2.

Program: Add two 8-bit numbers (8085 Assembly)

MVI A, 05H  ; Load 05H into accumulator A (first number)
MVI B, 07H  ; Load 07H into register B (second number)
ADD B       ; Add contents of B to A (A = A + B)
STA 3000H   ; Store result from A into memory location 3000H
HLT         ; Terminate the program

(Not part of answer:

MVI is used to move immediate data into a register.
ADD B adds the content of register B to the accumulator.
STA 3000H stores the result into memory location 3000H.
HLT halts the program.)

Q3.

The instruction execution cycle is the complete process of fetching, decoding, and
executing an instruction. It includes several machine cycles such as:

1. Opcode Fetch Cycle – Fetch the instruction from memory.

2. Memory Read/Write Cycle – Access operands from memory.

3. Execution Cycle – Perform the operation (e.g., addition).

Timing Diagram Example: MOV A, M

For the instruction MOV A, M, the processor performs:

●	Opcode Fetch Cycle (4 T-states)

●	Memory Read Cycle (3 T-states) – the execution phase, which reads the operand from the memory location pointed to by HL

Timing Diagram:
Machine Cycle:    | Opcode Fetch      | Memory Read       |
T-states (clock): | T1  T2  T3  T4    | T1  T2  T3        |
Signals:
--------------------------------------------------------
Address Bus       | PC (instruction)  | HL (operand)      |
Data Bus          | Opcode            | Data from memory  |
RD̅                | LOW in T2–T3      | LOW in T2–T3      |
ALE               | HIGH in T1        | HIGH in T1        |
IO/M̅              | LOW (memory op)   | LOW (memory op)   |

●	T1–T4: Clock cycles for the opcode fetch.

●	ALE: Address Latch Enable, active in T1.

●	RD̅: Read control signal, low during memory read.
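The fetch–decode–execute loop described above can be sketched as a toy interpreter. This is illustrative only: the opcodes (LDI, ADDM, STA, HLT) and the accumulator machine are invented for the sketch, not real 8085 micro-operations.

```python
# Minimal sketch of a fetch-decode-execute loop for a toy accumulator
# machine (invented opcodes; not the real 8085 instruction set).
def run(program, memory):
    acc = 0
    pc = 0
    while pc < len(program):
        opcode, operand = program[pc]      # fetch cycle
        pc += 1
        if opcode == "LDI":                # decode, then execute
            acc = operand                  # load immediate into accumulator
        elif opcode == "ADDM":
            acc += memory[operand]         # memory read cycle, then add
        elif opcode == "STA":
            memory[operand] = acc          # memory write cycle
        elif opcode == "HLT":
            break
    return acc, memory

# Example: load 5, add memory[0] (= 7), store the result at memory[1].
acc, mem = run([("LDI", 5), ("ADDM", 0), ("STA", 1), ("HLT", None)], {0: 7})
print(acc, mem[1])  # 12 12
```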

Q4.

Types of Hazards in Pipelining:

Hazards are problems that arise in pipelined processors which prevent the next instruction
from executing in the next clock cycle.

1. Data Hazard

Occurs when instructions depend on the results of previous instructions.

Example:

MOV R1, R2
ADD R3, R1 ; R1 is not ready yet
●​ Solution:​
○​ Forwarding (Bypassing)​

○​ Stalling (Inserting NOPs)​

○​ Compiler scheduling​

2. Control Hazard (Branch Hazard)

Occurs due to branch instructions (e.g., JMP, BEQ) where the next instruction is uncertain.

●​ Solution:​

○​ Branch prediction​

○​ Delayed branching​

○​ Speculative execution​

3. Structural Hazard

Occurs when two instructions require the same hardware resource at the same time.

●​ Solution:​

○​ Use of duplicated hardware resources​

○​ Pipeline scheduling​
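The effect of a data hazard, and of forwarding as a solution, can be shown with a toy cost model. This is a sketch, not a full five-stage pipeline simulator; the 2-cycle penalty without forwarding is an assumed figure for illustration.

```python
# Toy data-hazard model: without forwarding, an instruction that reads a
# register must wait until the producer has written it back (assumed
# `latency` extra cycles); with forwarding the result bypasses directly.
def count_stalls(instrs, forwarding=False, latency=2):
    ready_at = {}          # register -> cycle its value becomes available
    cycle = 0
    stalls = 0
    for dest, srcs in instrs:
        start = cycle
        for s in srcs:
            if s in ready_at:
                start = max(start, ready_at[s])   # wait for the operand
        stalls += start - cycle                   # inserted NOPs (bubbles)
        cycle = start + 1
        ready_at[dest] = cycle if forwarding else cycle + latency
    return stalls

prog = [("R1", ["R2"]),        # MOV R1, R2
        ("R3", ["R3", "R1"])]  # ADD R3, R1  (depends on R1)
print(count_stalls(prog, forwarding=False))  # 2 stalls
print(count_stalls(prog, forwarding=True))   # 0 stalls
```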

Q5.

1. Locality of Reference in Cache Memory (2 marks)


Locality of reference refers to the tendency of a processor to access the same set of memory
locations repetitively over a short period of time. It improves cache performance.

There are two main types:

●	Temporal Locality: Recently accessed data is likely to be accessed again soon.

●	Spatial Locality: Memory locations near (adjacent to) recently accessed data are likely to be accessed next.
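Why locality improves cache performance can be seen with a tiny simulation. The cache here is a made-up 8-line, direct-mapped, one-word-per-line model; the trace is a loop with strong temporal locality.

```python
# Sketch: a repetitive access pattern (temporal locality) lets even a tiny
# direct-mapped cache achieve a high hit rate (sizes are arbitrary).
def hit_rate(trace, num_lines=8):
    lines = [None] * num_lines          # each line remembers the cached address
    hits = 0
    for addr in trace:
        idx = addr % num_lines          # line index derived from the address
        if lines[idx] == addr:
            hits += 1                   # temporal locality pays off here
        else:
            lines[idx] = addr           # miss: fill the line
    return hits / len(trace)

# A loop touching the same 4 addresses 10 times over.
trace = [0, 1, 2, 3] * 10
print(hit_rate(trace))  # 0.9 (only the first 4 accesses miss)
```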

2. Differences between L1 Cache and L2 Cache (3 marks)


Feature  | L1 Cache                                  | L2 Cache
---------|-------------------------------------------|---------------------------------------
Speed    | Faster (closest to CPU core)              | Slower than L1 but faster than RAM
Size     | Smaller (16KB–128KB)                      | Larger (128KB–1MB or more)
Location | Inside the CPU core                       | May be on-chip or on a separate chip
Purpose  | Stores frequently used instructions/data  | Acts as backup to L1
Latency  | Very low                                  | Higher than L1

Q6.

1. What is Bootstrap Loader? (2 marks)

A bootstrap loader is a small program stored in ROM (Read-Only Memory) that starts
automatically when the computer is powered on or reset. It is responsible for loading the
operating system (OS) into memory.

2. Functionality of Bootstrap Loader (3 marks)


●	When the computer is turned on, the CPU looks to the ROM, where the bootstrap loader resides.

●	The bootstrap loader initializes hardware components and performs POST (Power-On Self-Test).

●	It locates the operating system kernel, usually on a hard disk or SSD.

●	It then loads the OS into main memory (RAM) and transfers control to it.

●	After that, the operating system takes over, completing the booting process.

Q7.

1. How many addressing modes are found in the 8085 microprocessor? (5 Marks)

There are five addressing modes in the 8085 microprocessor:

1.	Immediate Addressing Mode
	○	Data is given directly in the instruction.
	○	Example: MVI A, 32H

2.	Register Addressing Mode
	○	Data is in a register and accessed using register names.
	○	Example: MOV A, B

3.	Direct Addressing Mode
	○	The 16-bit address of the operand is given in the instruction.
	○	Example: LDA 2050H

4.	Register Indirect Addressing Mode
	○	The address of the operand is in a register pair (e.g., HL).
	○	Example: MOV A, M (M = memory pointed to by HL)

5.	Implied (Implicit) Addressing Mode
	○	The operand is implied in the instruction itself.
	○	Example: CMA (complements the accumulator)
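The difference between these modes is in where the operand comes from, which a toy operand-fetch routine can make concrete. This is a sketch, not real 8085 semantics or encodings; the register and memory contents are invented.

```python
# Sketch of how each addressing mode locates its operand (toy model).
def fetch_operand(mode, arg, regs, memory):
    if mode == "immediate":            # MVI A, 32H -> operand is in the instruction
        return arg
    if mode == "register":             # MOV A, B   -> operand is a register's content
        return regs[arg]
    if mode == "direct":               # LDA 2050H  -> instruction carries the address
        return memory[arg]
    if mode == "register_indirect":    # MOV A, M   -> HL pair holds the address
        return memory[regs["HL"]]
    raise ValueError("unknown mode: " + mode)

regs = {"B": 0x42, "HL": 0x2050}
memory = {0x2050: 0x99}
print(hex(fetch_operand("immediate", 0x32, regs, memory)))          # 0x32
print(hex(fetch_operand("register", "B", regs, memory)))            # 0x42
print(hex(fetch_operand("direct", 0x2050, regs, memory)))           # 0x99
print(hex(fetch_operand("register_indirect", None, regs, memory)))  # 0x99
```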

2. What are they? (5 Marks)


Mode              | Description                                     | Example
------------------|-------------------------------------------------|-----------
Immediate         | Operand is part of the instruction              | MVI A, 05H
Register          | Operand is in a register                        | MOV B, C
Direct            | Address of operand is specified directly        | LDA 2050H
Register Indirect | Register pair holds the address of the operand  | MOV A, M
Implied           | Operation acts on implicit data (accumulator)   | CMA

3. What do you know about SIM and RIM instructions? (2 Marks)

●	SIM (Set Interrupt Mask)
	○	Used to control hardware interrupts and send serial output data.
	○	It enables/disables specific interrupts: RST 7.5, RST 6.5, and RST 5.5.

●	RIM (Read Interrupt Mask)
	○	Used to read the status of the interrupt masks and serial input data.
	○	Returns a byte showing the state of interrupts and serial input.

4. How many types of data transfer schemes are there? Describe any one.
(3 Marks)

There are three main types of data transfer schemes in microprocessors:

1.​ Programmed I/O​

2.​ Interrupt-driven I/O​

3.​ Direct Memory Access (DMA)​

Example: Programmed I/O

●​ The CPU continuously checks (polls) the I/O device for data.​

●​ Once ready, CPU reads/writes data to/from the I/O port.​

●​ Simple but inefficient due to CPU involvement in all transfers.
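The polling behaviour of programmed I/O can be sketched as follows. The device model (a status flag that becomes ready after a few polls) is invented for illustration; the point is that the CPU does nothing useful while it busy-waits.

```python
# Sketch of programmed I/O: the CPU polls a (simulated) status register
# until the device is ready, then transfers one byte.
class FakeDevice:
    def __init__(self, data, ready_after=3):
        self.data = data
        self.polls = 0
        self.ready_after = ready_after   # device becomes ready after N polls
    def status_ready(self):
        self.polls += 1
        return self.polls >= self.ready_after
    def read(self):
        return self.data

def programmed_io_read(dev):
    while not dev.status_ready():        # busy-wait: CPU time is wasted here
        pass
    return dev.read()                    # device ready: transfer the byte

dev = FakeDevice(data=0x5A)
print(hex(programmed_io_read(dev)), dev.polls)  # 0x5a 3
```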

(You can also answer from the given note on Data Transfer.)

Q8.

1. What is Parallel Processing? (3 marks)

Parallel processing is the simultaneous use of multiple processors (or cores) to execute
several instructions concurrently. It improves computational speed and efficiency, especially
in large or complex programs.
Example: A task is divided into subtasks, each processed simultaneously on
different processors.
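The divide-and-process idea can be sketched in Python. This is a toy illustration: a thread pool is used here for simplicity, but note that CPython threads share one interpreter lock, so genuinely parallel CPU work would use ProcessPoolExecutor or multiple machines instead.

```python
# Sketch: split a task into subtasks and hand each to a worker.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    return sum(chunk)                       # one subtask

def parallel_sum(data, workers=4):
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # each chunk is processed concurrently; results are combined
        return sum(pool.map(partial_sum, chunks))

data = list(range(100_000))
print(parallel_sum(data) == sum(data))  # True
```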

2. What is Arithmetic Pipelining? (3 marks)

Arithmetic pipelining is a technique where multiple arithmetic operations are divided into
stages. Each stage processes a different part of an operation, allowing overlapping execution
and increasing throughput.

Example: While one multiplication is in stage 1, another is in stage 2, etc.

3. What is Vector Processing? (1 mark)

Vector processing refers to the execution of operations on entire arrays (vectors) of data in a
single instruction, rather than processing each data element individually.

Used in scientific and engineering applications involving large datasets.

4. Matrix Multiplication Using Vector Processing (3 marks)

In vector processing:

●	Matrices are represented as vectors of rows and columns.

●	The processor uses vector instructions to perform operations on entire rows/columns in parallel.

●	Each element of the resulting matrix is computed as the dot product of a row from the first matrix and a column from the second matrix.

Advantage: Faster computation due to reduced loop overhead and parallelism.
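The row-by-column structure can be sketched as follows. On a real vector processor each dot product would be one vector instruction; here an ordinary Python function stands in for it.

```python
# Sketch: C[i][j] is the dot product of row i of A and column j of B.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))   # stands in for a vector instruction

def matmul(A, B):
    cols = list(zip(*B))                      # columns of B as vectors
    return [[dot(row, col) for col in cols] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```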

5. BOOTH’S MULTIPLICATION of +13 and -11
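The worked steps are not reproduced in the document. As a cross-check, here is a sketch of Booth's algorithm using the usual textbook A/Q/Q−1 register scheme with an assumed 5-bit operand width; it confirms that 13 × (−11) = −143.

```python
# Booth's algorithm sketch (5-bit two's-complement operands assumed).
def booth_multiply(m, q, bits=5):
    mask = (1 << bits) - 1
    M = m & mask                     # multiplicand in two's complement
    Q = q & mask                     # multiplier in two's complement
    A = 0
    q_1 = 0                          # the extra Q-1 bit
    for _ in range(bits):
        pair = (Q & 1, q_1)
        if pair == (1, 0):
            A = (A - M) & mask       # 10 pair: subtract multiplicand
        elif pair == (0, 1):
            A = (A + M) & mask       # 01 pair: add multiplicand
        # arithmetic right shift of the combined A, Q, Q-1 register
        q_1 = Q & 1
        Q = ((Q >> 1) | ((A & 1) << (bits - 1))) & mask
        A = (A >> 1) | (A & (1 << (bits - 1)))   # sign bit is replicated
    result = (A << bits) | Q         # 2*bits-wide product
    if result & (1 << (2 * bits - 1)):
        result -= 1 << (2 * bits)    # reinterpret as signed
    return result

print(booth_multiply(13, -11))  # -143
```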


Q9.

a)

i) Two-Address Mode

In two-address mode, each instruction specifies a destination and one source operand. The
destination is also one of the operands.

MOV R1, B ; R1 = B
DIV R1, C ; R1 = B / C
ADD R1, A ; R1 = A + (B / C)

MOV R2, D ; R2 = D
MUL R2, E ; R2 = D * E
SUB R2, F ; R2 = D * E - F

MUL R1, R2 ; R1 = (A + B / C) * (D * E - F)
MOV X, R1 ; Store result in X

ii) One-Address Mode

Here, one operand is in memory and the other is implicitly in the accumulator (AC).
LOAD B ; AC = B
DIV C ; AC = B / C
ADD A ; AC = A + (B / C)
STORE TEMP1 ; Store intermediate result

LOAD D ; AC = D
MUL E ; AC = D * E
SUB F ; AC = D * E - F
MUL TEMP1 ; AC = (A + B / C) * (D * E - F)
STORE X ; Store result in X

iii) Zero-Address Mode (Stack-Based)

Zero-address machines use an implicit stack for operations. Instructions operate on the top of
the stack.

PUSH A
PUSH B
PUSH C
DIV ; B / C
ADD ; A + (B / C) → Temp1

PUSH D
PUSH E
MUL ; D * E
PUSH F
SUB ; D * E - F → Temp2

MUL ; Final multiplication


POP X ; Store result in X
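The zero-address sequence can be executed by a small stack-machine interpreter to confirm it computes (A + B/C) × (D×E − F). The sample values (A=10, B=6, C=3, D=2, E=5, F=4) are arbitrary, and integer division is assumed for DIV.

```python
# Sketch of a zero-address (stack) machine running the sequence above.
def run_stack(program, vars):
    stack = []
    for op, *arg in program:
        if op == "PUSH":
            stack.append(vars[arg[0]])
        elif op == "POP":
            vars[arg[0]] = stack.pop()
        else:                       # binary op pops the top two entries
            b, a = stack.pop(), stack.pop()
            stack.append({"ADD": a + b, "SUB": a - b,
                          "MUL": a * b, "DIV": a // b}[op])
    return vars

prog = [("PUSH", "A"), ("PUSH", "B"), ("PUSH", "C"), ("DIV",), ("ADD",),
        ("PUSH", "D"), ("PUSH", "E"), ("MUL",), ("PUSH", "F"), ("SUB",),
        ("MUL",), ("POP", "X")]
v = run_stack(prog, {"A": 10, "B": 6, "C": 3, "D": 2, "E": 5, "F": 4})
print(v["X"])  # (10 + 6//3) * (2*5 - 4) = 72
```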

b) Subroutine Call (4 Marks)

Read from my note on SUBROUTINE.

A subroutine is a set of instructions designed to perform a specific task, grouped as a single unit. Subroutines help in modular programming, reduce code duplication, and improve readability.
Subroutine Call Mechanism:

When a subroutine is called, the processor:

1.​ Saves the return address (address of the next instruction after the call) on the stack.​

2.​ Jumps to the subroutine address.​

3.​ Executes the subroutine.​

4.​ Returns back to the main program using RET instruction.​

Example in 8085 Assembly:

     MVI A, 09H  ; Load 09H into Accumulator
     CALL SUB    ; Call subroutine at label SUB
     HLT         ; Stop execution

SUB: INR A       ; Increment A by 1
     RET         ; Return to main program
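The CALL/RET mechanism (save the return address on the stack, jump, then pop it to resume) can be sketched as a toy machine with an explicit stack. The opcodes mirror the 8085 example above but the machine model itself is invented.

```python
# Sketch of CALL/RET with an explicit return-address stack (toy machine).
def execute(program):
    acc = 0
    pc = 0
    stack = []                       # holds return addresses
    while pc < len(program):
        op, arg = program[pc]
        if op == "MVI":
            acc = arg; pc += 1
        elif op == "CALL":
            stack.append(pc + 1)     # save address of the next instruction
            pc = arg                 # jump to the subroutine
        elif op == "INR":
            acc += 1; pc += 1
        elif op == "RET":
            pc = stack.pop()         # resume just after the CALL
        elif op == "HLT":
            break
    return acc

# 0: MVI A,09H   1: CALL 3   2: HLT   3: INR A   4: RET
print(execute([("MVI", 0x09), ("CALL", 3), ("HLT", None),
               ("INR", None), ("RET", None)]))  # 10 (0AH)
```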

Read from M Mano Memory chapter. Diagram should be from the mentioned book only. I am
giving a brief comparative study below.

Here is a comparison between Direct Mapping and Set Associative Mapping techniques, as
asked in the question:

Feature     | Direct Mapping                                        | Set Associative Mapping
------------|-------------------------------------------------------|------------------------------------------------------
Definition  | Each block of main memory maps to only one cache line.| Each block can be placed in any line of a set.
Flexibility | Very rigid (fixed location for each block).           | More flexible (block can go in any line of its set).
Complexity  | Simple to implement.                                  | More complex: needs associative search within sets.
Cost        | Less costly due to simpler hardware.                  | More expensive due to extra comparison logic.
Hit Rate    | Lower (more conflicts due to fixed mapping).          | Higher (reduced conflict misses).
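The hit-rate difference comes from conflict misses, which a small simulation makes visible. The cache sizes and the address trace are invented; LRU replacement is assumed within each set.

```python
# Sketch: two addresses that map to the same direct-mapped line evict each
# other on every access, while a 2-way set holds both (LRU assumed).
def simulate(trace, num_sets, ways):
    sets = [[] for _ in range(num_sets)]   # each set: tags in LRU order
    hits = 0
    for addr in trace:
        s, tag = addr % num_sets, addr // num_sets
        if tag in sets[s]:
            hits += 1
            sets[s].remove(tag)            # refresh LRU position
        elif len(sets[s]) >= ways:
            sets[s].pop(0)                 # evict least recently used
        sets[s].append(tag)                # most recently used at the end
    return hits

# Addresses 0 and 8 collide in an 8-line direct-mapped cache.
trace = [0, 8] * 5
print(simulate(trace, num_sets=8, ways=1))  # 0 hits (constant conflicts)
print(simulate(trace, num_sets=4, ways=2))  # 8 hits (both blocks fit)
```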

Q 10.

a) What is CAM?

Content Addressable Memory (CAM) is a special type of memory used in associative mapping techniques (fully associative or set-associative caches).

Key Features of CAM:


Feature        | Description
---------------|----------------------------------------------------------------------
Access Method  | Data is accessed by content, not by address.
Use in Caching | Used to search all tags in a cache simultaneously for a match.
Speed          | Enables very fast lookup, since all comparisons are done in parallel.
Cost           | More expensive and power-hungry than traditional RAM.
Application    | Common in associative and set-associative cache implementations.

Role in Set Associative Mapping:


In set associative caches, CAM is used within each set to find the correct block by comparing
the tag of each line in parallel. This allows flexible block placement while still achieving fast
access.
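The search-by-content idea can be sketched as follows. In hardware every entry is compared by its own comparator in the same cycle; here a sequential scan stands in for that parallel comparison, and the tag values are invented.

```python
# Sketch of the CAM idea: lookup by content, returning every matching line.
class CAM:
    def __init__(self, size):
        self.tags = [None] * size
    def write(self, line, tag):
        self.tags[line] = tag
    def search(self, tag):
        # hardware compares all entries in parallel; we model the result
        return [i for i, t in enumerate(self.tags) if t == tag]

cam = CAM(4)
cam.write(0, 0x1A)
cam.write(2, 0x3F)
print(cam.search(0x3F))  # [2] -> match found at line 2
print(cam.search(0x77))  # []  -> no match (cache miss)
```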

b) Virtual Memory (5 Marks)

Definition:​
Virtual memory is a memory management technique that creates an illusion for users of having
a very large main memory. It enables a computer to compensate for physical memory shortages
by temporarily transferring data from RAM to disk storage.

Key Points:

●​ Uses both hardware and software (MMU and OS).​

●​ Allows execution of programs larger than physical memory.​

●​ Provides process isolation and memory protection.​

●​ Implements paging or segmentation for address translation.​

●​ Reduces fragmentation and improves system multitasking.​

Diagram:

Virtual Address (CPU)
        ↓
Page Table (MMU)
        ↓
Physical Address (RAM)
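The page-table step in the diagram can be sketched as an address translation function. The 4 KB page size and the page-table contents are assumed values for illustration.

```python
# Sketch of virtual-to-physical translation with 4 KB pages.
PAGE_SIZE = 4096  # 2**12, an assumed page size

def translate(vaddr, page_table):
    vpn = vaddr // PAGE_SIZE          # virtual page number
    offset = vaddr % PAGE_SIZE        # position within the page
    if vpn not in page_table:
        raise LookupError("page fault: page %d not in RAM" % vpn)
    frame = page_table[vpn]           # physical frame from the page table
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 2}             # virtual page -> physical frame
print(hex(translate(0x1A2F, page_table)))  # page 1, offset 0xA2F -> 0x2A2F
```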

c) Register Stack and Memory Stack (5 Marks)

Register Stack:

●​ A fast, small memory structure built from CPU registers.​

●​ Used in expression evaluation and subroutine calls.​

●​ Very limited in size but very fast.​


Memory Stack:

●​ Located in RAM and used for storing return addresses, local variables, and function
parameters.​

●​ Larger in size but slower.​

●​ Grows dynamically (usually downward in memory).​

Comparison Table:

Feature  | Register Stack                | Memory Stack
---------|-------------------------------|---------------------
Location | CPU registers                 | RAM
Speed    | Very fast                     | Slower
Size     | Very small                    | Larger
Use      | Temporary expression results  | Subroutine handling

d) Asynchronous Data Transfer (5 Marks)

Definition:​
Asynchronous data transfer occurs when data is sent between devices that do not share a
common clock. Instead, they use control signals to coordinate timing.

Key Techniques:

●​ Strobe Control: One line to indicate readiness.​

●​ Handshake Control: Two-way control signals for reliability.​

Example:​
CPU communicating with I/O devices like keyboards or printers.

Advantages:

●​ Allows communication with slower devices.​

●​ More flexible and tolerant of speed mismatches.​


Diagram (Handshake Example):

CPU → Request → Device
CPU ← Acknowledge ← Device

e) Memory Interleaving (5 Marks)

Definition:​
Memory interleaving is a technique used to increase the speed of memory access by
organizing memory into banks that can be accessed in parallel.

Working:

●​ Memory is divided into multiple modules.​

●​ Consecutive addresses are placed in different memory banks.​

●​ While one bank is being accessed, others can be precharged or accessed in parallel.​

Types:

●​ Low-order interleaving: Low bits of address determine the bank.​

●​ High-order interleaving: High bits determine the bank.​

Advantages:

●​ Reduces memory access time.​

●​ Increases CPU throughput.​

Diagram:

Address: 0 → Bank 0
Address: 1 → Bank 1
Address: 2 → Bank 2
...
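The low-order interleaving rule in the diagram is just a modulo mapping, which can be sketched directly. Four banks are assumed here; the bank count is arbitrary.

```python
# Sketch of low-order interleaving: consecutive addresses fall in different
# banks, so sequential accesses can overlap (4 banks assumed).
NUM_BANKS = 4

def bank_of(addr):
    return addr % NUM_BANKS           # low-order bits select the bank

def module_address(addr):
    return addr // NUM_BANKS          # word index within the chosen bank

for addr in range(6):
    print(addr, "-> bank", bank_of(addr), "word", module_address(addr))
# 0 -> bank 0 word 0
# 1 -> bank 1 word 0
# 2 -> bank 2 word 0
# ...
```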

Q11.
a)​ M Morris Mano
b)​
d)​

Difference Between Paging and Segmentation


Feature           | Paging                                              | Segmentation
------------------|-----------------------------------------------------|----------------------------------------------------------
Definition        | Divides memory into fixed-size blocks called pages. | Divides memory into variable-size blocks called segments.
Size              | Pages are of fixed size.                            | Segments are of variable size.
Purpose           | Handles physical memory allocation efficiently.     | Handles logical division of a program (code, data, stack).
Address Structure | Page number + offset.                               | Segment number + offset.
Fragmentation     | Can lead to internal fragmentation.                 | Can lead to external fragmentation.
User View         | Not visible to user; purely for memory management.  | Visible to user; corresponds to logical units.

Example:

●	Paging: A 4 KB page is mapped from logical address 0x1A2F to a frame in physical memory.

●	Segmentation: A code segment of 8 KB, a data segment of 5 KB, and a stack segment of 4 KB.
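The two address structures can be contrasted in code: paging splits an address at a fixed bit boundary, while segmentation adds an offset to a variable-size segment's base after a limit check. The segment table contents below are invented for illustration.

```python
# Sketch contrasting the two address forms (4 KB pages assumed).
PAGE_BITS = 12                          # 2**12 = 4 KB

def page_split(vaddr):
    # fixed-size split: high bits = page number, low bits = offset
    return vaddr >> PAGE_BITS, vaddr & ((1 << PAGE_BITS) - 1)

def segment_translate(seg, offset, seg_table):
    base, limit = seg_table[seg]        # variable-size segment
    if offset >= limit:
        raise ValueError("segmentation fault: offset beyond segment limit")
    return base + offset

page, offset = page_split(0x1A2F)
print(page, hex(offset))                # 1 0xa2f (matches the 0x1A2F example)

# Invented segment table matching the sizes in the example above.
seg_table = {"code": (0x0000, 8 * 1024), "data": (0x4000, 5 * 1024)}
print(hex(segment_translate("data", 0x10, seg_table)))  # 0x4010
```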
