
PREVIOUS YEARS QUESTION PAPER

SUBJECT – COMPUTER ORGANIZATION AND ARCHITECTURE

Section – A

Q 1 Compare RISC and CISC Architecture?

Ans :-

Feature | RISC | CISC
Full Form | Reduced Instruction Set Computer | Complex Instruction Set Computer
Instruction Set | Small and simple | Large and complex
Execution Time per Instruction | One clock cycle | Multiple clock cycles
Instruction Length | Fixed | Variable
Memory Usage | More instructions, less complex | Fewer instructions, more complex
Focus | Software-based optimization | Hardware-based optimization
Examples of Processors | ARM, MIPS, SPARC | Intel x86, VAX
Pipelining | Easy to implement due to uniform instruction size | Difficult due to varying instruction size
Usage | Widely used in smartphones, tablets, embedded systems | Common in desktops, laptops, and servers

Q 2 Distinguish Between Auto Increment And Auto Decrement Operations?

Ans :-

Feature | Auto Increment | Auto Decrement
Definition | Increases the value of a variable by 1 automatically | Decreases the value of a variable by 1 automatically
Operator | ++ | --
Syntax (in C/Python-style pseudocode) | i++ or ++i | i-- or --i
Effect | If i = 5, after i++, i = 6 | If i = 5, after i--, i = 4
Common Use | Loop counters, moving forward in arrays | Loop counters (in reverse), moving backward in arrays
Example in Loop | for(i=0; i<5; i++) → Runs 0 to 4 | for(i=4; i>=0; i--) → Runs 4 to 0
Q 3 Write a register transfer sequence to read a word from memory?

Ans  MAR = Memory Address Register

 MDR = Memory Data Register

 Read = Control signal to initiate memory read

 R1 = General-purpose register where the data will be loaded

T1: MAR ← Address
T2: Read, MDR ← Memory[MAR]
T3: R1 ← MDR
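
A small C model of the same sequence, just to make the data movement concrete (the toy memory array, its size, and the address used are assumptions for illustration):

#include <stdint.h>
#include <stdio.h>

uint16_t Memory[256];      /* toy main memory */
uint16_t MAR, MDR, R1;     /* Memory Address Register, Memory Data Register, destination register */

void memory_read(uint16_t address) {
    MAR = address;         /* T1: MAR <- Address */
    MDR = Memory[MAR];     /* T2: Read; MDR <- Memory[MAR] */
    R1  = MDR;             /* T3: R1 <- MDR */
}

int main(void) {
    Memory[0x20] = 0xABCD; /* word stored at an illustrative address */
    memory_read(0x20);
    printf("R1 = 0x%04X\n", R1);   /* prints R1 = 0xABCD */
    return 0;
}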

Q 4 write the logic equations of binary half adder?

Ans : A half adder adds two input bits and produces:

 Sum (S)
 Carry (C)

Let:

 A and B be the two input bits.

Logic Equations:

 Sum (S) = A ⊕ B → (XOR operation)


 Carry (C) = A · B → (AND operation)

Truth Table:

A B Sum (S) Carry (C)


0 0 0 0
0 1 1 0
1 0 1 0
1 1 0 1
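
A minimal C sketch of these two equations (function and variable names are illustrative):

#include <stdio.h>

/* Half adder: Sum = A XOR B, Carry = A AND B */
void half_adder(int a, int b, int *sum, int *carry) {
    *sum   = a ^ b;   /* XOR operation */
    *carry = a & b;   /* AND operation */
}

int main(void) {
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++) {
            int s, c;
            half_adder(a, b, &s, &c);
            printf("A=%d B=%d -> Sum=%d Carry=%d\n", a, b, s, c);
        }
    return 0;   /* the output matches the truth table above */
}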

Q 5 What is an opcode?

Ans An opcode (operation code) is the part of a machine instruction that tells the processor which operation to perform, such as:

 Add
 Subtract
 Load
 Store
 Jump

Example:

Let’s say we have a machine instruction:

1010 1100

Here:

 1010 = Opcode (e.g., might mean "ADD")


 1100 = Operand or address (e.g., the data to be added)
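
A small C sketch that splits such an 8-bit instruction into its opcode and operand fields (the 4-bit/4-bit layout simply follows the example above):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t instruction = 0xAC;                   /* 1010 1100 from the example */
    uint8_t opcode  = (instruction >> 4) & 0x0F;  /* upper 4 bits -> 1010 */
    uint8_t operand = instruction & 0x0F;         /* lower 4 bits -> 1100 */
    printf("opcode  = %X\n", opcode);             /* prints A (binary 1010) */
    printf("operand = %X\n", operand);            /* prints C (binary 1100) */
    return 0;
}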

Q 6 How Can Interrupt Requests From Multiple Devices Be Handled?

Ans Common methods to handle interrupts from multiple devices:

Method | Description
1. Daisy Chaining (Hardware Priority) | Devices are connected in a chain. Priority is based on position; the device closest to the CPU gets serviced first.
2. Polling (Software Priority) | The CPU checks each device one by one to see which raised the interrupt. Simple but slower.
3. Vectored Interrupts (Interrupt Vector) | Each device has a unique interrupt number. The CPU uses this number to jump to the correct Interrupt Service Routine (ISR).
4. Priority Interrupt Controller (PIC) | A hardware chip (like the Intel 8259) manages multiple interrupt lines and assigns priorities. Very efficient.
5. Nested Interrupts | Higher-priority interrupts can interrupt lower-priority ones.
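
A minimal C sketch of the polling approach from the table (the device flags, fixed priority order, and handler are all illustrative; in real hardware the flags would be device status registers):

#include <stdio.h>

#define NUM_DEVICES 3

int irq_pending[NUM_DEVICES] = {0, 1, 0};   /* illustrative interrupt-request flags */

void service_device(int id) {
    printf("Servicing interrupt from device %d\n", id);
    irq_pending[id] = 0;       /* clear the request after servicing */
}

/* Polling: check devices in a fixed order, so device 0 has the highest priority */
void poll_interrupts(void) {
    for (int id = 0; id < NUM_DEVICES; id++)
        if (irq_pending[id]) {
            service_device(id);
            return;            /* service one request, then resume the interrupted program */
        }
}

int main(void) {
    poll_interrupts();         /* services device 1 in this example */
    return 0;
}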

Q 7 What are the disadvantages of increasing the number of stages in pipeline processing?

Ans Disadvantages of Increasing Pipeline Stages

No. | Disadvantage | Description
1 | Increased Complexity | More stages mean more control logic is needed, making design more complex and harder to manage.
2 | Pipeline Overhead | Each stage needs registers to store intermediate data, which adds hardware overhead.
3 | Higher Latency per Instruction | While throughput improves, each instruction may take longer to complete due to passing through more stages.
4 | More Frequent Hazards | Data, control, and structural hazards become harder to manage with more stages.
5 | Difficult to Balance Stages | It's challenging to divide tasks evenly across all stages; imbalance can reduce pipeline efficiency.
6 | Increased Branch Penalty | Branch mispredictions cause more waste because more stages need to be flushed.
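
As a rough worked illustration of points 3 and 6 (all numbers are assumed, not taken from the question): a 5-stage pipeline with a 2 ns clock completes one instruction in 5 × 2 ns = 10 ns and flushes at most 4 partially executed instructions on a branch misprediction; stretching the same datapath to 10 stages with a 1.2 ns clock (latch overhead keeps the cycle time from halving) raises per-instruction latency to 10 × 1.2 ns = 12 ns and roughly doubles the work discarded on each misprediction.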

Q 8 What are SRAM and DRAM?

Ans SRAM (Static Random Access Memory) and DRAM (Dynamic Random Access Memory)
are two types of semiconductor memory used in computers. SRAM uses flip-flops to store data,
which makes it faster and more reliable, but also more expensive and larger in size. It does not
need to be refreshed, so it retains data as long as power is supplied. SRAM is commonly used in
cache memory due to its high speed. On the other hand, DRAM stores data using capacitors,
which are smaller and cheaper, allowing for higher memory density. However, the charge in
capacitors leaks over time, so DRAM requires constant refreshing to maintain data. Because of
its lower cost and higher storage capacity, DRAM is used as the main memory (RAM) in most
computers.

Q 9 Describe Magnetic Disk ?

Ans A magnetic disk is a type of secondary storage device used to store data using magnetic
patterns. It consists of one or more circular platters coated with magnetic material, which spin
at high speed. Data is read and written using a read/write head that moves across the surface
of the disk. Each platter is divided into tracks, sectors, and cylinders for data organization.
Magnetic disks offer large storage capacity, random access to data, and are commonly used in
hard disk drives (HDDs).
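
As a small worked example of how this geometry adds up (every figure below is assumed purely for illustration): a drive with 4 platters recorded on both sides (8 surfaces), 1,000 tracks per surface, 500 sectors per track, and 512 bytes per sector holds 8 × 1,000 × 500 × 512 bytes ≈ 2.05 GB.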

Q 10 What Is Auxiliary Memory ?

Ans Auxiliary memory refers to any non-volatile storage device that is used to store data
permanently or semi-permanently, unlike primary memory (RAM), which is volatile. It is also
known as secondary storage because it is typically used to store data that is not currently in use
but is needed for future reference. Examples of auxiliary memory include hard disk drives
(HDDs), solid-state drives (SSDs), optical disks (CDs, DVDs), and magnetic tapes.

Section – B

Q 1 Explain The Working Of Carry Look Ahead Adder ?

Ans A Carry Look-Ahead Adder (CLA) is a faster way to add binary numbers by quickly determining if
there will be a carry in each bit, without waiting for the carry to propagate from one bit to the next (like
in simpler adders).
Step-by-Step Process:

1. Basic Idea: In a regular adder, when you add two bits, the carry from the previous bit
needs to be calculated first. But in a CLA, we look ahead to determine the carry for each
bit in advance, instead of waiting for each bit’s carry to pass through all the previous
bits.
2. Generate and Propagate:
o Generate (G): If both bits are 1, a carry is generated: Gi = Ai · Bi.
o Propagate (P): If at least one bit is 1, the carry will propagate to the next bit: Pi = Ai + Bi.
o Each carry can then be written directly in terms of G, P, and the initial carry, e.g. Ci+1 = Gi + Pi · Ci, expanded so that no carry has to wait for the one before it.
3. How CLA Speeds Up:
o Instead of waiting for each carry to move one by one, the CLA calculates all the
carries at once.
o It looks ahead and figures out which bits will cause a carry without waiting for
the previous ones.
4. How It's Done:
o It uses the Generate and Propagate information to instantly figure out the
carries for each bit position.
o This makes the CLA faster than normal adders because it doesn’t have to wait for
each carry to propagate through the bits one after the other.

Example:

Let's add two 4-bit numbers, A = 1011 and B = 1101.

 Normal Adder: The carry from one bit is passed on to the next, causing delays.
 Carry Look-Ahead Adder: It calculates the carries for all bits at once, speeding up the
process.
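
A short C sketch of a 4-bit carry look-ahead computation for this example (bit numbering and variable names are illustrative; P is taken as A OR B, matching the definition above):

#include <stdio.h>

int main(void) {
    /* A = 1011 (11), B = 1101 (13); index 0 is the least significant bit */
    int a[4] = {1, 1, 0, 1};
    int b[4] = {1, 0, 1, 1};
    int g[4], p[4], c[5], s[4];

    c[0] = 0;                          /* no carry-in */
    for (int i = 0; i < 4; i++) {
        g[i] = a[i] & b[i];            /* Generate: Gi = Ai AND Bi */
        p[i] = a[i] | b[i];            /* Propagate: Pi = Ai OR Bi */
    }

    /* Look-ahead equations: every carry is expressed directly in terms of G, P and c[0] */
    c[1] = g[0] | (p[0] & c[0]);
    c[2] = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c[0]);
    c[3] = g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0]) | (p[2] & p[1] & p[0] & c[0]);
    c[4] = g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1]) | (p[3] & p[2] & p[1] & g[0])
               | (p[3] & p[2] & p[1] & p[0] & c[0]);

    for (int i = 0; i < 4; i++)
        s[i] = a[i] ^ b[i] ^ c[i];     /* sum bit for each position */

    printf("Result = %d%d%d%d%d (carry-out first)\n", c[4], s[3], s[2], s[1], s[0]);
    /* prints 11000, i.e. 11 + 13 = 24 */
    return 0;
}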

Q 2 Explain The Design Of Micro-programmed Control Unit In Detail ?

Ans In a Micro-programmed Control Unit, control signals are generated by reading a stored set
of microinstructions, which define the operations of the control unit. These microinstructions are
stored in a special memory called Control Memory (CM), and they are used to generate control
signals in a sequence that controls the operation of the processor.

How Does a Micro-programmed Control Unit Work?

1. Control Memory:
o The micro-program is stored in Control Memory (CM), a read-only memory
(ROM), which holds the microinstructions.
o Each microinstruction corresponds to a small operation (like setting a register,
activating an ALU operation, or enabling a memory read/write)

2. Microinstructions:
o A microinstruction is a low-level instruction that specifies the control signals for
one machine cycle.
o A microinstruction can include:
 Control bits: These bits control the various components (ALU, registers,
memory, etc.).
 Address part: This points to the next microinstruction to be executed.

3. Micro-program Counter (MPC):
o The Micro-program Counter (MPC) holds the address of the next
microinstruction to be fetched from the Control Memory.
o After executing each microinstruction, the MPC is updated (either sequentially or
conditionally) to fetch the next microinstruction.

4. Control Signals:
o Each microinstruction consists of control bits that control various parts of the
processor (ALU, registers, buses, etc.).
o These control signals dictate what operations should occur during the current
machine cycle.
5. Sequencing:
o The Sequencer determines the flow of execution within the microprogram. It uses
the MPC to fetch the next microinstruction.
o The sequencer decides whether to continue with the next instruction in sequence
or branch to another microinstruction depending on conditions like the type of
instruction being executed (e.g., jump, branch, etc.).
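
A highly simplified C sketch of the fetch–execute loop inside a micro-programmed control unit (the control-memory contents, field meanings, and number of cycles are all assumed purely for illustration):

#include <stdint.h>
#include <stdio.h>

/* One microinstruction: control bits plus the address of the next microinstruction */
typedef struct {
    uint16_t control_bits;   /* e.g. bit 0 = load MAR, bit 1 = memory read, bit 2 = ALU add */
    uint8_t  next_address;   /* address part: where the MPC goes next */
} MicroInstruction;

/* Control Memory (ROM): a tiny, made-up microprogram */
MicroInstruction control_memory[4] = {
    {0x0001, 1},   /* load MAR          */
    {0x0002, 2},   /* memory read       */
    {0x0004, 3},   /* ALU operation     */
    {0x0000, 0}    /* done, wrap around */
};

int main(void) {
    uint8_t mpc = 0;                                   /* Micro-program Counter */
    for (int cycle = 0; cycle < 4; cycle++) {
        MicroInstruction mi = control_memory[mpc];     /* fetch the microinstruction */
        printf("cycle %d: control bits = 0x%04X\n", cycle, mi.control_bits);
        /* ...here the control bits would drive the ALU, registers, buses, etc. ... */
        mpc = mi.next_address;                         /* sequencer updates the MPC */
    }
    return 0;
}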

Q 3 What is cache coherence and why is it important in a shared multiprocessor system?

Ans Cache coherency refers to the consistency of shared data stored in multiple caches in a
multiprocessor system. When two or more processors cache the same memory location, cache
coherency ensures that any update made by one processor is visible to all others. It prevents
situations where one processor works with old (stale) data while another has the updated
value.
In shared multiprocessor systems:

 Each processor has its own cache for faster access.


 They often share memory and may access the same variables.
 If one processor changes a value, others must see that change to avoid inconsistent or
incorrect results.

Without cache coherency:

 One processor might use outdated data from its cache.


 This can cause data conflicts, bugs, and wrong outputs.

With cache coherency:

 All processors always have the latest and correct value of shared data.
 It ensures reliable operation, especially in parallel processing and synchronization.

Example:

If Processor A updates variable X to 10 in its cache, but Processor B still has X = 5 in its
cache, then both are working with different values. Cache coherency ensures that both see X =
10, keeping the system consistent.

Q 4 Explain Flynn's Classification In Detail?

Ans Flynn classified computer systems into four types based on Instruction Stream (IS) and
Data Stream (DS):

Classification | Full Form | Instruction Stream | Data Stream | Example
SISD | Single Instruction, Single Data | 1 | 1 | Traditional single-core processor
SIMD | Single Instruction, Multiple Data | 1 | Many | GPUs, vector processors
MISD | Multiple Instruction, Single Data | Many | 1 | Rare or theoretical, used in some fault-tolerant systems
MIMD | Multiple Instruction, Multiple Data | Many | Many | Multi-core CPUs, parallel computers

1. SISD:
o One processor fetches and executes one instruction at a time on one data.
o Like basic CPUs.
2. SIMD:
o One instruction operates on multiple data points at the same time.
o Used in parallel processing like image and signal processing.
3. MISD:
o Multiple instructions operate on the same data.
o Very rare; mainly for fault tolerance systems.
4. MIMD:
o Many processors execute different instructions on different data.
o Common in modern systems with multiple processors working in parallel.
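
As a loose illustration of the SIMD idea in ordinary C (the arrays are made up; a vectorizing compiler or GPU may map such a loop onto SIMD hardware, applying one add instruction across many elements):

#include <stdio.h>

#define N 8

int main(void) {
    float a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[N] = {8, 7, 6, 5, 4, 3, 2, 1};
    float c[N];

    /* One operation (add) applied across many data elements --
       the access pattern SIMD hardware executes in parallel */
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    for (int i = 0; i < N; i++)
        printf("%.0f ", c[i]);   /* prints 9 eight times */
    printf("\n");
    return 0;
}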

Q 5 Explain data transfer and program control instructions in detail?

Ans Data Transfer :

Data transfer instructions are used to move data from one location to another within the
computer system.

These instructions do not perform any calculations, they simply transfer data between:

 Registers
 Memory
 I/O ports
 CPU and memory

Example of data transfer

MOV R1, R2 ; Copy contents of R2 into R1

LOAD R1, 2000 ; Load the content of memory location 2000 into R1

STORE R1, 3000; Store the content of R1 into memory location 3000

Program Control Instructions

Program control instructions are used to change the sequence of execution of a program. They manage branching, jumping, and interrupt handling.

These instructions alter the normal flow of instruction execution.

Example of program control instruction :

CMP A, B ; Compare A and B

JZ 1000 ; Jump to address 1000 if A == B


CALL SUBROUTINE ; Call a function

RET ; Return from function

HLT ; Stop the execution

Section – C

Q 1 Discuss the various components of a computer system?

Ans A computer system is made up of hardware, software, and users working together to
perform tasks. These components are organized into major functional parts:

1. Input Unit

 Accepts data and instructions from the user.


 Converts input into a format the computer can understand.
 Sends data to memory or CPU for processing.

Examples

 Keyboard
 Mouse
 Scanner
 Microphone

2. Central Processing Unit (CPU)

The brain of the computer, also called the processor, responsible for executing instructions.

It has two main parts:

a) Arithmetic Logic Unit (ALU):

 Performs arithmetic operations (add, subtract, multiply, divide).


 Performs logical operations (AND, OR, NOT, comparisons).

b) Control Unit (CU):

 Directs the flow of data between the CPU, memory, and input/output devices.
 Interprets instructions from programs and initiates actions.

c) Registers (small part of CPU):

 High-speed storage inside the CPU.


 Holds data or instructions temporarily during execution.

3. Memory Unit (Main Memory)

Stores data and instructions that the CPU needs.

Types:

 RAM (Random Access Memory):


o Temporary, volatile memory (data lost when power is off).
o Stores active programs and data.

 ROM (Read Only Memory):


o Permanent, non-volatile memory.
o Stores firmware and essential startup instructions.

4. Storage Unit (Secondary Memory)

Stores data and programs permanently.

Examples:

 Hard Disk Drive (HDD)


 Solid State Drive (SSD)
 Optical Disks (CD/DVD)
 Flash Drives

5. Output Unit

Displays or outputs the results of processing to the user.


Examples:

 Monitor
 Printer
 Speaker
 Projector

Q 2 Explain Direct And Indirect Addressing Modes?

Ans 1. Direct Addressing Mode

In Direct Addressing, the address of the operand is directly specified in the instruction itself.
The operand is located at the memory address provided in the instruction.

How It Works:

 The address field of the instruction contains the actual memory address where the
operand is located.
 The CPU directly fetches the operand from the given address.

Advantages:

 Simple and efficient because the operand’s location is directly known.


 Faster because there’s no need to look for a reference address; it’s directly given.

Disadvantages:

 Not very flexible because the operand address is hardcoded in the instruction.

2. Indirect Addressing Mode

In Indirect Addressing, the address field of the instruction contains the address of the operand,
which is called the effective address. This means that the instruction points to a memory
location, but this location contains the actual address where the operand is stored.
How It Works:

 The instruction contains the memory address of a location that holds the actual address
of the operand.
 The CPU first accesses the memory address specified in the instruction, then fetches the
operand from the address it found there.

Advantages:

 More flexible than direct addressing because you can change the operand’s address by
modifying just one memory location, instead of changing the instruction.
 Allows for dynamic memory addressing and easier pointer manipulation.

Disadvantages:

 Slower than direct addressing because it requires two memory accesses:


1. One to get the address of the operand.
2. One to access the operand itself.
 More complex to implement.
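
A small C sketch contrasting the two modes with a toy memory array (the addresses and values are assumed purely for illustration):

#include <stdio.h>

int memory[16] = {0};

int main(void) {
    memory[5] = 42;    /* the operand value */
    memory[9] = 5;     /* location 9 holds the ADDRESS of the operand (effective address) */

    /* Direct addressing: the instruction carries address 5 -- one memory access */
    int direct = memory[5];

    /* Indirect addressing: the instruction carries address 9; first read the
       effective address stored there, then read the operand -- two accesses */
    int effective_address = memory[9];
    int indirect = memory[effective_address];

    printf("direct = %d, indirect = %d\n", direct, indirect);   /* both print 42 */
    return 0;
}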

Q 3 Explain the following in detail:

A) REGISTER STACK

B) MEMORY STACK

Ans : Register stack:

A Register Stack is a stack-like structure stored entirely in CPU registers. It is used to manage
function calls, return addresses, and local variables that are pushed and popped during program
execution.

How it Works:

 A stack pointer (SP) keeps track of the top of the stack.


 Data is stored in CPU registers, and the stack operates by following the Last In,
First Out (LIFO) principle.

o Push: Data is placed on top of the stack.


o Pop: Data is removed from the top of the stack.
 Typical Use: Primarily used to store return addresses for function calls and temporary
data like local variables.

Memory stack :

A Memory Stack is a stack structure stored in main memory (RAM). It is used to hold data such
as function parameters, return addresses, local variables, and other temporary data needed
during program execution.

How it Works:

 The stack pointer (SP) tracks the top of the stack in memory.

 It works on the LIFO principle, similar to the register stack, with operations like push and
pop being performed.

 Stack growth: The stack grows downwards (in some systems, it grows upwards). As data
is pushed onto the stack, the stack pointer is either decremented or incremented
depending on the architecture.
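
A minimal C sketch of a downward-growing memory stack managed through a stack pointer (the stack size, data type, and names are illustrative; error handling is kept to a bare minimum):

#include <stdio.h>

#define STACK_SIZE 16

int stack_mem[STACK_SIZE];   /* region of "main memory" reserved for the stack */
int sp = STACK_SIZE;         /* stack pointer; the stack grows downwards from the top */

void push(int value) {
    if (sp > 0)
        stack_mem[--sp] = value;   /* decrement SP, then store (push) */
}

int pop(void) {
    if (sp < STACK_SIZE)
        return stack_mem[sp++];    /* load, then increment SP (pop) */
    return -1;                     /* illustrative underflow indicator */
}

int main(void) {
    push(10);                  /* e.g. a return address */
    push(20);                  /* e.g. a local variable */
    printf("%d\n", pop());     /* prints 20 -- Last In, First Out */
    printf("%d\n", pop());     /* prints 10 */
    return 0;
}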
