COA Solved Paper
Section – A
Ans :-
Ans :- A half adder adds two single-bit inputs and produces two outputs:
Sum (S)
Carry (C)
Let A and B be the two input bits.
Logic Equations:
S = A ⊕ B (XOR of the inputs)
C = A · B (AND of the inputs)
Truth Table:
A B | S C
0 0 | 0 0
0 1 | 1 0
1 0 | 1 0
1 1 | 0 1
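The logic equations above can be checked with a small Python sketch (illustrative only, not part of the original answer):

```python
def half_adder(a: int, b: int) -> tuple[int, int]:
    """Return (sum, carry) for two single bits."""
    s = a ^ b   # Sum: XOR of the inputs
    c = a & b   # Carry: AND of the inputs
    return s, c

# Reproduce the truth table
for a in (0, 1):
    for b in (0, 1):
        print(a, b, *half_adder(a, b))
```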
Q 5 What is an opcode?
Ans :- An opcode (operation code) is the part of a machine instruction that tells the CPU which operation to perform, for example:
Add
Subtract
Load
Store
Jump
Example:
1010 1100
Here, the first four bits (1010) form the opcode that selects the operation, and the remaining bits (1100) specify the operand or its address.
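Splitting the example instruction into opcode and operand fields can be sketched in Python; the 4-bit/4-bit split and the LOAD mnemonic are assumptions for illustration, not a real instruction set:

```python
# Hypothetical format: upper 4 bits = opcode, lower 4 bits = operand,
# mirroring the example instruction 1010 1100 above.
MNEMONICS = {0b1010: "LOAD"}  # assumed mapping, for illustration only

def decode(instruction: int) -> tuple[int, int]:
    opcode = (instruction >> 4) & 0xF   # upper 4 bits select the operation
    operand = instruction & 0xF         # lower 4 bits address the operand
    return opcode, operand

op, arg = decode(0b10101100)
print(bin(op), bin(arg), MNEMONICS.get(op, "?"))   # opcode 1010, operand 1100
```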
Method and Description:
1. Daisy Chaining (Hardware Priority): Devices are connected in a chain. Priority is based on position; the device closest to the CPU gets serviced first.
2. Polling (Software Priority): The CPU checks each device one by one to see which raised the interrupt. Simple but slower.
3. Vectored Interrupts (Interrupt Vector): Each device has a unique interrupt number. The CPU uses this number to jump to the correct Interrupt Service Routine (ISR).
4. Priority Interrupt Controller (PIC): A hardware chip (like the Intel 8259) manages multiple interrupt lines and assigns priorities. Very efficient.
5. Nested Interrupts: Higher priority interrupts can interrupt lower priority ones.
Ans SRAM (Static Random Access Memory) and DRAM (Dynamic Random Access Memory)
are two types of semiconductor memory used in computers. SRAM uses flip-flops to store data,
which makes it faster and more reliable, but also more expensive and larger in size. It does not
need to be refreshed, so it retains data as long as power is supplied. SRAM is commonly used in
cache memory due to its high speed. On the other hand, DRAM stores data using capacitors,
which are smaller and cheaper, allowing for higher memory density. However, the charge in
capacitors leaks over time, so DRAM requires constant refreshing to maintain data. Because of
its lower cost and higher storage capacity, DRAM is used as the main memory (RAM) in most
computers.
Ans A magnetic disk is a type of secondary storage device used to store data using magnetic
patterns. It consists of one or more circular platters coated with magnetic material, which spin
at high speed. Data is read and written using a read/write head that moves across the surface
of the disk. Each platter is divided into tracks, sectors, and cylinders for data organization.
Magnetic disks offer large storage capacity, random access to data, and are commonly used in
hard disk drives (HDDs).
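The track/sector organization described above implies a simple capacity calculation. The geometry figures below are assumed for illustration; real drives vary widely:

```python
# Assumed example geometry (not from the paper):
surfaces  = 4       # recording surfaces with read/write heads
tracks    = 1000    # tracks per surface
sectors   = 64      # sectors per track
sector_sz = 512     # bytes per sector

capacity = surfaces * tracks * sectors * sector_sz
print(capacity / 2**20, "MiB")   # 131,072,000 bytes = 125.0 MiB
```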
Ans Auxiliary memory refers to any non-volatile storage device that is used to store data
permanently or semi-permanently, unlike primary memory (RAM), which is volatile. It is also
known as secondary storage because it is typically used to store data that is not currently in use
but is needed for future reference. Examples of auxiliary memory include hard disk drives
(HDDs), solid-state drives (SSDs), optical disks (CDs, DVDs), and magnetic tapes.
Section – B
Ans A Carry Look-Ahead Adder (CLA) is a faster way to add binary numbers by quickly determining if
there will be a carry in each bit, without waiting for the carry to propagate from one bit to the next (like
in simpler adders).
Step-by-Step Process:
1. Basic Idea: In a regular adder, when you add two bits, the carry from the previous bit
needs to be calculated first. But in a CLA, we look ahead to determine the carry for each
bit in advance, instead of waiting for each bit’s carry to pass through all the previous
bits.
2. Generate and Propagate:
o Generate (G): If both bits are 1, a carry is generated.
o Propagate (P): If at least one bit is 1, the carry will propagate to the next bit.
3. How CLA Speeds Up:
o Instead of waiting for each carry to move one by one, the CLA calculates all the
carries at once.
o It looks ahead and figures out which bits will cause a carry without waiting for
the previous ones.
4. How It's Done:
o It uses the Generate and Propagate information to instantly figure out the
carries for each bit position.
o This makes the CLA faster than normal adders because it doesn’t have to wait for
each carry to propagate through the bits one after the other.
Example:
Normal Adder: The carry from one bit is passed on to the next, causing delays.
Carry Look-Ahead Adder: It calculates the carries for all bits at once, speeding up the
process.
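The Generate/Propagate idea can be written out for a 4-bit adder. Note how every carry below is expanded so it depends only on G, P, and the initial carry c0, not on the previous carry rippling through (a minimal sketch, not a hardware description):

```python
def cla_add4(a: int, b: int, c0: int = 0) -> tuple[int, int]:
    """Add two 4-bit numbers using carry look-ahead equations."""
    g = [((a >> i) & 1) & ((b >> i) & 1) for i in range(4)]  # Generate: both bits are 1
    p = [((a >> i) & 1) | ((b >> i) & 1) for i in range(4)]  # Propagate: at least one bit is 1
    # All carries computed directly from c0 (the "look-ahead"):
    c1 = g[0] | (p[0] & c0)
    c2 = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c0)
    c3 = g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0]) | (p[2] & p[1] & p[0] & c0)
    c4 = (g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1])
          | (p[3] & p[2] & p[1] & g[0]) | (p[3] & p[2] & p[1] & p[0] & c0))
    carries = [c0, c1, c2, c3]
    s = 0
    for i in range(4):
        # Each sum bit is a XOR b XOR carry-in for that position
        s |= (((a >> i) & 1) ^ ((b >> i) & 1) ^ carries[i]) << i
    return s, c4   # 4-bit sum and carry-out

print(cla_add4(0b1111, 0b0001))   # -> (0, 1): 15 + 1 = 16, i.e. sum 0000 with carry-out
```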
Ans In a Micro-programmed Control Unit, control signals are generated by reading a stored set
of microinstructions, which define the operations of the control unit. These microinstructions are
stored in a special memory called Control Memory (CM), and they are used to generate control
signals in a sequence that controls the operation of the processor.
1. Control Memory:
o The micro-program is stored in Control Memory (CM), a read-only memory
(ROM), which holds the microinstructions.
o Each microinstruction corresponds to a small operation (like setting a register,
activating an ALU operation, or enabling a memory read/write).
2. Microinstructions:
o A microinstruction is a low-level instruction that specifies the control signals for
one machine cycle.
o A microinstruction can include:
Control bits: These bits control the various components (ALU, registers,
memory, etc.).
Address part: This points to the next microinstruction to be executed.
3. Micro-program Counter (MPC):
o The Micro-program Counter (MPC) holds the address of the next
microinstruction to be fetched from the Control Memory.
o After executing each microinstruction, the MPC is updated (either sequentially or
conditionally) to fetch the next microinstruction.
4. Control Signals:
o Each microinstruction consists of control bits that control various parts of the
processor (ALU, registers, buses, etc.).
o These control signals dictate what operations should occur during the current
machine cycle.
5. Sequencing:
o The Sequencer determines the flow of execution within the microprogram. It uses
the MPC to fetch the next microinstruction.
o The sequencer decides whether to continue with the next instruction in sequence
or branch to another microinstruction depending on conditions like the type of
instruction being executed (e.g., jump, branch, etc.).
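The Control Memory / MPC / sequencer loop described above can be sketched as a toy simulation. The microinstruction layout and signal names are invented for illustration:

```python
# Toy control memory: each microinstruction is (control_signals, next_address).
# The address part plays the role of the sequencer's next-address field.
CONTROL_MEMORY = [
    ({"MAR<-PC"}, 1),               # 0: copy PC into the memory address register
    ({"MDR<-MEM", "IR<-MDR"}, 2),   # 1: read memory, load instruction register
    ({"PC<-PC+1"}, 0),              # 2: increment PC, branch back to fetch
]

def run(cycles: int) -> list:
    mpc = 0                                  # micro-program counter (MPC)
    issued = []
    for _ in range(cycles):
        signals, nxt = CONTROL_MEMORY[mpc]   # fetch microinstruction from CM
        issued.append(signals)               # assert its control signals
        mpc = nxt                            # sequencer updates the MPC
    return issued

print(run(4))   # fetch cycle repeats: MAR<-PC appears again on the 4th step
```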
Ans Cache coherency refers to the consistency of shared data stored in multiple caches in a
multiprocessor system. When two or more processors cache the same memory location, cache
coherency ensures that any update made by one processor is visible to all others. It prevents
situations where one processor works with old (stale) data while another has the updated
value.
In shared multiprocessor systems:
All processors always have the latest and correct value of shared data.
It ensures reliable operation, especially in parallel processing and synchronization.
Example:
If Processor A updates variable X to 10 in its cache, but Processor B still has X = 5 in its
cache, then both are working with different values. Cache coherency ensures that both see X =
10, keeping the system consistent.
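The Processor A / Processor B example can be sketched as a toy write-invalidate protocol (one common way to enforce coherency; the class and its behavior are a simplification, not a real protocol such as MESI):

```python
# Toy write-invalidate coherency: a write through one cache invalidates
# the other caches' copies, so no processor keeps a stale X.
class Cache:
    def __init__(self, memory):
        self.memory, self.peers, self.lines = memory, [], {}

    def read(self, addr):
        if addr not in self.lines:          # miss: fill the line from memory
            self.lines[addr] = self.memory[addr]
        return self.lines[addr]

    def write(self, addr, value):
        self.memory[addr] = value           # write-through to main memory
        self.lines[addr] = value
        for peer in self.peers:             # invalidate every other copy
            peer.lines.pop(addr, None)

memory = {"X": 5}
a, b = Cache(memory), Cache(memory)
a.peers, b.peers = [b], [a]
b.read("X")           # B caches X = 5
a.write("X", 10)      # A updates X; B's stale copy is invalidated
print(b.read("X"))    # -> 10, not the stale 5
```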
Ans Flynn classified computer systems into four types based on Instruction Stream (IS) and
Data Stream (DS):
Classification | Full Form | Instruction Stream | Data Stream | Example
SISD | Single Instruction, Single Data | 1 | 1 | Traditional single-core processor
SIMD | Single Instruction, Multiple Data | 1 | Many | GPUs, vector processors
MISD | Multiple Instruction, Single Data | Many | 1 | Rare or theoretical; used in some fault-tolerant systems
MIMD | Multiple Instruction, Multiple Data | Many | Many | Multi-core CPUs, parallel computers
1. SISD:
o One processor fetches and executes one instruction at a time on one data.
o Like basic CPUs.
2. SIMD:
o One instruction operates on multiple data points at the same time.
o Used in parallel processing like image and signal processing.
3. MISD:
o Multiple instructions operate on the same data.
o Very rare; mainly for fault-tolerance systems.
4. MIMD:
o Many processors execute different instructions on different data.
o Common in modern systems with multiple processors working in parallel.
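The SISD/SIMD contrast can be illustrated conceptually in Python (Python itself is not SIMD hardware; the list comprehension merely stands in for one vector instruction applied to many data):

```python
data = [1, 2, 3, 4]

# SISD: one instruction applied to one datum per step
sisd = []
for x in data:
    sisd.append(x + 10)

# SIMD (conceptually): one "add 10" issued over the whole vector at once
simd = [x + 10 for x in data]

print(sisd, simd)   # both -> [11, 12, 13, 14]
```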
Data transfer instructions are used to move data from one location to another within the
computer system.
These instructions do not perform any calculations; they simply transfer data between:
Registers
Memory
I/O ports
CPU and memory
LOAD R1, 2000 ; Load the content of memory location 2000 into R1
STORE R1, 3000 ; Store the content of R1 into memory location 3000
Program control instructions are used to change the sequence of execution of a program. They
manage branching, jumping, and interrupt handling.
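Both instruction classes can be sketched with a tiny interpreter. The tuple-based instruction format is invented; the LOAD/STORE/JUMP names follow the examples above:

```python
# Toy machine: LOAD/STORE are data transfer (no calculation, just movement),
# JUMP is program control (it changes the program counter).
def execute(program, memory):
    regs, pc = {}, 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD":                     # memory -> register
            regs[args[0]] = memory[args[1]]
            pc += 1
        elif op == "STORE":                  # register -> memory
            memory[args[1]] = regs[args[0]]
            pc += 1
        elif op == "JUMP":                   # change the sequence of execution
            pc = args[0]
        elif op == "HALT":
            break
    return regs, memory

program = [
    ("LOAD", "R1", 2000),
    ("STORE", "R1", 3000),
    ("JUMP", 4),
    ("LOAD", "R1", 9999),   # skipped: the JUMP bypasses this instruction
    ("HALT",),
]
regs, mem = execute(program, {2000: 42, 3000: 0})
print(regs["R1"], mem[3000])   # -> 42 42
```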
Section – C
Ans A computer system is made up of hardware, software, and users working together to
perform tasks. These components are organized into major functional parts:
1. Input Unit
Accepts data and instructions from the user and converts them into a form the computer can process.
Examples:
Keyboard
Mouse
Scanner
Microphone
2. Central Processing Unit (CPU)
The brain of the computer, also called the processor, responsible for executing instructions.
3. Control Unit (CU)
Directs the flow of data between the CPU, memory, and input/output devices.
Interprets instructions from programs and initiates actions.
4. Memory Unit
Types: primary memory (volatile, e.g. RAM) and secondary memory (non-volatile).
Examples: RAM, ROM, hard disk.
5. Output Unit
Presents the processed results to the user.
Examples:
Monitor
Printer
Speaker
Projector
In Direct Addressing, the address of the operand is directly specified in the instruction itself.
The operand is located at the memory address provided in the instruction.
How It Works:
The address field of the instruction contains the actual memory address where the
operand is located.
The CPU directly fetches the operand from the given address.
Advantages:
Simple and fast: only one memory reference is needed to fetch the operand.
Disadvantages:
Not very flexible because the operand address is hardcoded in the instruction.
In Indirect Addressing, the address field of the instruction contains the address of the operand,
which is called the effective address. This means that the instruction points to a memory
location, but this location contains the actual address where the operand is stored.
How It Works:
The instruction contains the memory address of a location that holds the actual address
of the operand.
The CPU first accesses the memory address specified in the instruction, then fetches the
operand from the address it found there.
Advantages:
More flexible than direct addressing because you can change the operand’s address by
modifying just one memory location, instead of changing the instruction.
Allows for dynamic memory addressing and easier pointer manipulation.
Disadvantages:
Slower than direct addressing, because it needs an extra memory access to fetch the effective address before the operand itself.
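The one-access vs two-access difference can be sketched on a toy memory (addresses and values are invented for illustration):

```python
# Toy memory: location 100 holds the operand, location 200 holds a
# pointer (the operand's address), as used by indirect addressing.
memory = {100: 7,     # operand value
          200: 100}   # pointer to the operand

def fetch_direct(addr):
    return memory[addr]              # one memory access

def fetch_indirect(addr):
    effective = memory[addr]         # first access: get the effective address
    return memory[effective]         # second access: get the operand

print(fetch_direct(100))    # -> 7
print(fetch_indirect(200))  # -> 7, via the pointer stored at 200
```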
A) Register Stack
B) Memory Stack
A Register Stack is a stack-like structure stored entirely in CPU registers. It is used to manage
function calls, return addresses, and local variables that are pushed and popped during program
execution.
How it Works:
A small group of CPU registers holds the stack, and a stack pointer (SP) indicates the top register.
Push and pop follow the LIFO (Last In, First Out) principle.
Because all accesses stay inside the CPU, a register stack is very fast, but its depth is limited by the number of available registers.
B) Memory Stack:
A Memory Stack is a stack structure stored in main memory (RAM). It is used to hold data such
as function parameters, return addresses, local variables, and other temporary data needed
during program execution.
How it Works:
The stack pointer (SP) tracks the top of the stack in memory.
It works on the LIFO principle, similar to the register stack, with operations like push and
pop being performed.
Stack growth: The stack grows downwards (in some systems, it grows upwards). As data
is pushed onto the stack, the stack pointer is either decremented or incremented
depending on the architecture.
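The push/pop and downward-growth behavior described above can be sketched as a toy memory stack (the 16-word memory and the grow-down convention are assumptions for illustration):

```python
# Toy downward-growing memory stack with an explicit stack pointer (SP).
memory = [0] * 16
sp = len(memory)          # SP starts just past the top of the stack area

def push(value):
    global sp
    sp -= 1               # stack grows downwards: decrement SP, then store
    memory[sp] = value

def pop():
    global sp
    value = memory[sp]    # read the top of stack, then increment SP
    sp += 1
    return value

push(10)
push(20)
print(pop(), pop())       # -> 20 10 (LIFO order)
```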