BCA II SEM Question Set


Topic: Sequential Circuits

Q1. What are Sequential Circuits? How are they different from Combinational Circuits?

Ans. Sequential circuits are digital circuits that store and use the previous state information to
determine their next state. Unlike combinational circuits, which only depend on the current input
values to produce outputs, sequential circuits depend on both the current inputs and the previous
state stored in memory elements.

Parameters: Combinational Circuit vs. Sequential Circuit

• Meaning and Definition: A combinational circuit generates its output from the input it receives at that instant, independent of time. A sequential circuit's output does not rely only on the current input; it also relies on previous inputs.

• Feedback: A combinational circuit requires no feedback to generate the next output, because its output has no dependency on the time instance. The output of a sequential circuit, on the other hand, relies on both the previous feedback and the current input: the output generated from previous inputs is fed back, and the circuit uses it (along with the inputs) to generate the next output.

• Performance: A combinational circuit needs only the current inputs, so it performs much faster. A sequential circuit is comparatively slower, since its dependence on previous inputs makes the process more complex.

• Complexity: A combinational circuit is less complex, because it implements no feedback. A sequential circuit is always more complex in nature and functionality, because it implements feedback and depends on previous inputs as well as on a clock.

• Elementary Blocks: Logic gates form the elementary building blocks of a combinational circuit; flip-flops form the elementary building blocks of a sequential circuit.

• Operation: Combinational circuits are used for both Boolean and arithmetic operations; sequential circuits are mainly used for storing data.
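The distinction above can be illustrated with a minimal Python sketch (the function and class names here are purely illustrative, not part of any standard library):

```python
def combinational_and(a, b):
    """Combinational: output depends only on the current inputs."""
    return a & b

class SequentialToggle:
    """Sequential: output depends on the current input AND stored state."""
    def __init__(self):
        self.state = 0          # the memory element (one stored bit)

    def clock(self, t):
        if t:                   # toggle on t=1, hold on t=0
            self.state ^= 1
        return self.state

# The combinational gate always maps the same inputs to the same output:
assert combinational_and(1, 1) == 1
assert combinational_and(1, 1) == 1   # no history involved

# The sequential circuit gives different outputs for the same input over time:
seq = SequentialToggle()
first, second = seq.clock(1), seq.clock(1)
assert (first, second) == (1, 0)      # same input, different outputs
```

The two calls with identical inputs returning different values is exactly the "depends on the previous state" property that defines a sequential circuit.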

Q2. What is the difference between Latch and Flip Flop?

Ans. A flip-flop is a digital memory circuit that stores one bit of data; flip-flops are the primary building blocks of most sequential circuits, and a flip-flop is also called a one-bit memory. A flip-flop changes state only on the active edge of the clock pulse and remains unaffected while the clock pulse is not active. Clocked flip-flops act as the memory elements of synchronous sequential circuits, while the un-clocked ones (latches) serve as the memory elements of asynchronous sequential circuits.
A latch is an electronic device that changes its output immediately on the basis of the applied input. One can use it to store either 0 or 1 at a specified time. A latch has two inputs, SET and RESET, and two outputs, which are complements of each other. Like the flip-flop, a latch stores one bit of data, but it is not synchronous: it does not operate on the edges of the clock as a flip-flop does.
Difference Between Flip-flop and Latch

• Basic Principle: A flip-flop uses an edge-triggering approach; a latch follows a level-triggering approach.

• Clock Signal: Present in a flip-flop; absent in a latch.

• Designed Using: A flip-flop is designed using latches along with a clock; a latch is designed using logic gates.

• Sensitivity: A flip-flop is sensitive to the applied input and the clock signal; a latch is sensitive to the applied input only while it is enabled.

• Operating Speed: A flip-flop has a slower operating speed; a latch is comparatively faster.

• Classification: A flip-flop can be classified as synchronous or asynchronous; a latch cannot be classified this way.

• Working: Flip-flops work from the binary inputs and the clock signal; latches operate from the binary inputs alone.

• Power Requirement: A flip-flop requires more power; a latch requires comparatively less.

• Analysis of Circuit: Circuit analysis is quite easy with flip-flops; analysing a latch-based circuit is more complex.

• Type of Operation Performed: A flip-flop performs synchronous operations; a latch performs asynchronous operations.

• Robustness: Flip-flops are comparatively more robust; latches are comparatively less robust.

• Dependency of Operation: A flip-flop's operation relies on the present and past input bits along with the past output and the clock pulses; a latch's operation depends on the present and past inputs along with the past output values, with no clock involved.

• Usage as a Register: A flip-flop can work as a register because it includes a clock signal in its input; a latch cannot serve as a register, since a register requires the timing control that more advanced clocked circuits provide.

• Types: J-K, S-R, D, and T flip-flops; J-K, S-R, D, and T latches.

• Area Required: A flip-flop requires more area; a latch requires comparatively less.

• Uses: Flip-flops constitute the building blocks of many sequential circuits such as counters; latches can also be used to design sequential circuits but are generally not preferred.

• Input and Output: A flip-flop samples its inputs and changes its output only at times defined by a control signal such as the clock; a latch responds to changes in its inputs continuously.

• Synchronicity: A flip-flop is synchronous, working from the clock signal; a latch is asynchronous, with no timing signal.

• Faults: Flip-flops are largely protected against input glitches; latches respond to any glitch occurring on the enable pin.

Q3. What are SR Latch and SR Flip Flops? Explain using a diagram.

Ans.

SR Latch
S-R (Set-Reset) latches are the simplest form of latch and have two inputs: S (Set) and R (Reset). Asserting S sets the output to 1, while asserting R resets the output to 0; the set and reset inputs are also known as preset and clear. In the NAND implementation described here the inputs are active low, so when both S and R are at 1 the latch is in the "Hold" state. The SR latch forms the basic building block of all other types of flip-flops.

Truth Table of SR Latch (active-low NAND inputs)

S  R  Q(n+1)
0  0  Invalid
0  1  1 (Set)
1  0  0 (Reset)
1  1  Hold

As we see in the diagram, the SR latch comprises two cross-coupled NAND gates, where the output
of one gate is connected to the input of the other and vice versa. The latch takes two binary inputs, S
and R, and produces two binary outputs, Q and Q-bar.
Case 1: When S = 0 and R = 1 (SET)

The NAND gates provide an output of 1 whenever there is a 0 in the input to the logic gate. In the
case of S = 0 and R = 1, the logic gate connected to the set input receives a 0, so it will output a 1.
That 1 will then propagate to the NAND gate connected to the reset input, providing an output of 0
as 1 NAND with 1 is 0.

So the Q values will be 1, and the value of Q-bar will be 0 showing that we set the value of the
output Q as 1.

Case 2: When S = 1 and R = 0 (RESET)

This is the reverse of Case 1 we went through above; over here, we have reversed the inputs S and R
to be 1 and 0; hence our output for Q would then be 0, and Q-bar would be 1 showing that we have
reset the output bit Q.

Case 3: When S = 0 and R = 0 (INVALID)

In this case, we see that one of the inputs to both NAND gates is 0; this means that the output of
both gates will always be 1 regardless of the 2nd input to the NAND gates. Since the value of Q and
Q-bar is the same, we call this SR latch state invalid.

Case 4: When S = 1 and R = 1 (HOLD)

This case is a bit tricky to solve as none of the inputs is 0, which means that the output depends on
the 2nd input to the NAND gates. We see that there will be no change in already stored bits for Q
and Q-bar upon solving the equation; hence we call this state the hold state since none of the values
of the output change.
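The four cases above can be checked with a small simulation of the cross-coupled NAND pair. This is a sketch, not a gate-level timing model: it simply iterates the two NAND equations until the outputs settle.

```python
def nand(a, b):
    """Two-input NAND gate."""
    return 0 if (a and b) else 1

def sr_latch_nand(s, r, q, qbar):
    """Iterate the cross-coupled NAND pair until the outputs settle.
    q/qbar are the previously stored outputs (needed for the hold case)."""
    for _ in range(4):                  # a few passes are enough to converge
        q, qbar = nand(s, qbar), nand(r, q)
    return q, qbar

# Active-low behaviour: S=0 sets, R=0 resets, S=R=1 holds.
assert sr_latch_nand(0, 1, 0, 1) == (1, 0)   # Case 1: SET   -> Q = 1
assert sr_latch_nand(1, 0, 1, 0) == (0, 1)   # Case 2: RESET -> Q = 0
assert sr_latch_nand(0, 0, 0, 1) == (1, 1)   # Case 3: INVALID, Q = Q-bar = 1
assert sr_latch_nand(1, 1, 1, 0) == (1, 0)   # Case 4: HOLD keeps Q = 1
```

Note how the invalid case is the one where both outputs come out equal, matching the Case 3 discussion above.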

Applications of an SR latch

Now that we have gone through what is an SR latch and its internal working, now let's look at some
of the applications of an SR latch below.

• Memory circuits: As the SR latch can store a single bit of data, it can be used to create large
memory circuits for temporary memory storage.

• Control signals: As the SR latches can hold states, they can be used in control signals to hold
some bits till a certain condition is met.

• Flip flops and counters: The SR latch is a basic building block for various types of flip flops
and counter circuits.

SR Flip Flop
The SR flip-flop stands for Set-Reset flip-flop. It is the most basic sequential logic circuit, built
from two logic gates with the output of each gate connected to an input of the other, so that a
feedback signal runs from output to input. It has two inputs named Set (S) and Reset (R), which is
why it is called the SR flip-flop. The Reset input is used to return the flip-flop from its current
state to its original state. The output depends on the inputs: asserting SET (S) makes the output 1,
while asserting RESET (R) makes the output 0. A basic SR flip-flop is constructed with NAND gates
and has four terminals: the two input terminals (S, R), which the clock gates onto the internal
active-low latch, and the two output terminals Q and Q'.

Case 1: When S = 0 and R = 1 (RESET)

With the clock at 1, the input NAND gate on the set side receives S = 0, so it outputs a 1, leaving
the internal latch's set line inactive. The input gate on the reset side receives R = 1 and
clock = 1, so it outputs a 0, which drives the internal latch's active-low reset line and forces
Q = 0 (and Q-bar = 1).

Case 2: When S = 1 and R = 0 (SET)


This is the reverse of Case 1 we went through above; over here, we have reversed the inputs S and R
to be 1 and 0; hence our output for Q would then be 1, and Q-bar would be 0.

Case 3: When S = 0 and R = 0 (HOLD)

With S = 0 and R = 0, both input NAND gates output 1 regardless of the clock. The internal latch
therefore sees 1 and 1 on its active-low inputs, which is its hold condition, so Q and Q-bar keep
their previous values.

Case 4: When S = 1 and R = 1 (Invalid)

With S = 1, R = 1 and the clock at 1, both input NAND gates output 0. The internal latch then sees
0 and 0 on its active-low inputs, which forces both Q and Q-bar to 1. Since the two outputs are no
longer complements of each other, this state is invalid.
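The four cases can be verified with a short sketch that models the clocked SR flip-flop as two gating NAND gates in front of a cross-coupled NAND latch (the gate structure assumed here matches the description above):

```python
def nand(a, b):
    """Two-input NAND gate."""
    return 0 if (a and b) else 1

def clocked_sr(s, r, clk, q, qbar):
    """Two input NAND gates gate S and R with the clock; their outputs
    drive an active-low cross-coupled NAND latch."""
    s_bar = nand(s, clk)                # internal set line (active low)
    r_bar = nand(r, clk)                # internal reset line (active low)
    for _ in range(4):                  # iterate until the latch settles
        q, qbar = nand(s_bar, qbar), nand(r_bar, q)
    return q, qbar

# With clk = 1 the external inputs are active high:
assert clocked_sr(1, 0, 1, 0, 1) == (1, 0)   # SET
assert clocked_sr(0, 1, 1, 1, 0) == (0, 1)   # RESET
assert clocked_sr(0, 0, 1, 1, 0) == (1, 0)   # HOLD keeps previous Q
assert clocked_sr(1, 1, 1, 0, 1) == (1, 1)   # INVALID: both outputs high
# With clk = 0 the latch ignores S and R entirely:
assert clocked_sr(1, 1, 0, 1, 0) == (1, 0)
```

The last assertion shows what the clock buys us: when clk = 0 the input gates both output 1, which is the latch's hold condition, so the stored bit is immune to the inputs.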

SR Flip-Flop Characteristic Table

Characteristic Equation for SR Flip-Flop

The characteristic equation is an algebraic expression for the characteristic table’s binary
information. It specifies the value of the next state of a flip-flop in terms of its present state and
present excitation. Plotting the next state Qn+1 on a K-map in terms of the present state and the
inputs, the characteristic equation of the SR flip-flop is obtained as:

Q(n+1) = S + R’Qn (with the constraint SR = 0)
Excitation Table of SR Flip-Flop

Q 4. What is Excitation table in SR Flip Flop?

Ans. The truth table of a flip-flop describes the operating characteristic of the flip-flop. In the
design of sequential circuits, however, we often face the opposite situation: the present state and
the next state of the flip-flop are specified, and we must determine the input conditions that must
exist for the intended transition to occur.
The excitation table of the SR flip-flop lists, for each present state Qn and next state Q(n+1), the
excitations S and R required to take the flip-flop from the present state to the next state:

Qn  Q(n+1)  S  R
0   0       0  X
0   1       1  0
1   0       0  1
1   1       X  0

(X denotes a don't-care value.)

Q 5. What are the drawbacks of SR Flip Flop over which JK Flip Flop was introduced?

Ans. The SR (Set-Reset) flip-flop, also known as the SR latch, is a fundamental building block in digital
logic circuits. The main disadvantage of the basic SR flip-flop is the possibility of entering an
invalid state when both the S (Set) and R (Reset) inputs are asserted (set to 1) simultaneously. This
condition is known as the "forbidden" or "invalid" state: both the Q and Q' outputs take the same
value, and the state the circuit settles into when the inputs are released is unpredictable.

This issue can be overcome in two ways. The D flip-flop removes the invalid combination by deriving R
from the complement of S, so the two inputs can never be asserted together; however, it also gives up
the independent set and reset controls. The JK flip-flop keeps both inputs but feeds the outputs back
to the input gates, so that the previously forbidden J = K = 1 combination becomes a well-defined
"toggle" operation instead of an invalid state.

In summary, the disadvantage of the basic SR flip-flop is the potential for entering an invalid state
when both inputs are asserted simultaneously. The JK flip-flop was introduced to overcome this: it
converts that input combination into a useful toggle, eliminating the forbidden state.

Q 6. What is JK Flip Flop? Explain the Race around condition in JK Flip Flop and also the Master
Slave JK Flip Flop.

Ans. The JK flip-flop is a sequential logic circuit used to store and manipulate binary information
within digital systems. (The name is often said to honour Jack Kilby of Texas Instruments, though the
origin of the "JK" designation is disputed.) The JK flip-flop operates on the sequential logic
principle, where the output depends not only on the current inputs but also on the previous state.
It has two inputs, Set and Reset, denoted by J and K. It also has two outputs, the output and the
complement of the output, denoted by Q and Q̅. The internal circuitry of a JK flip-flop consists of a
combination of logic gates, usually NAND gates.
JK flip flop comprises four possible combinations of inputs: J=0, K=0; J=0, K=1; J=1, K=0; and J=1, K=1.
These input combinations determine the behavior of flip flop and its output.

• J=0, K=0: In this state, flip flop retains its preceding state. It neither sets nor resets itself,
making it go into a “HOLD” state.

• J=0, K=1: This input combination forces flip flop to reset, resulting in Q=0 and Q̅ =1. It is often
referred to as the “reset” state.

• J=1, K=0: Here, flip flop resides in the set mode, causing Q=1 and Q̅ =0. It is known as the
“set” state.

• J=1, K=1: This combination “toggles” flip flop. If the previous state is Q=0, it switches to Q=1
and vice versa. This makes it valuable for frequency division and data storage applications.

Characteristic Equation

Q(n+1) = JQn’ + K’Qn
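The four input combinations and the characteristic equation can be cross-checked with a one-line next-state function (a sketch; the function name is illustrative):

```python
def jk_next(j, k, q):
    """Characteristic equation of the JK flip-flop: Q(n+1) = J·Q' + K'·Q."""
    return (j & (1 - q)) | ((1 - k) & q)

assert jk_next(0, 0, 1) == 1   # J=0, K=0: hold
assert jk_next(0, 1, 1) == 0   # J=0, K=1: reset
assert jk_next(1, 0, 0) == 1   # J=1, K=0: set
assert jk_next(1, 1, 0) == 1   # J=1, K=1: toggle 0 -> 1
assert jk_next(1, 1, 1) == 0   # J=1, K=1: toggle 1 -> 0
```

Each assertion corresponds to one bullet in the list of input combinations above.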

Excitation Table
Operation Modes of JK Flip Flop

Apart from its basic functionality, there are two essential operating modes in JK Flip Flop: edge-
triggered and level-triggered.

• Edge-Triggered: In this mode, flip flop responds to a signal transition occurring at a clock
pulse. It is commonly used in synchronous systems, where the output changes only when
the clock signal changes from low to high or high to low. The edge-triggered JK Flip
Flop ensures stable output and prevents glitches caused by rapid changes in input values.

• Level-Triggered: Unlike the edge-triggered mode, the level-triggered JK Flip Flop responds to
the input values continuously as long as the clock signal is held at a specific level (high or
low). This mode is mainly used in asynchronous systems or applications where the input
changes are directly reflected in the output.

Applications of JK Flip Flop

JK Flip Flop finds extensive use in various applications, including:

• Counters
• Shift Registers
• Memory Units
• Frequency Division

RACE AROUND CONDITION

▪ For a J-K flip-flop with J = K = 1, if the clock stays at 1 for longer than the propagation delay
of the flip-flop, the output Q toggles repeatedly for as long as CLK remains high, which makes the
output unstable or uncertain.
▪ This is called the race around condition of the J-K flip-flop.
▪ It can in principle be avoided by keeping the clock pulse shorter than the propagation delay, but
such narrow pulses are difficult to guarantee in practice.
▪ The circuit used to overcome the race around condition reliably is the Master Slave JK flip flop.
Master Slave JK flip flop

▪ Here two JK flip flops are connected in series.


▪ The first JK flip flop is called the “master” and the other is a “slave”.
▪ The output from the master is connected to the two inputs of the slave whose output is fed
back to inputs of the master.
▪ Besides the two flip flops, the circuit also has an inverter.
▪ The inverter is connected to the clock pulse, so the slave flip-flop receives an inverted clock.
▪ In other words, when CP = 1 for the master flip-flop, CP = 0 for the slave flip-flop, and vice versa.

Working of a Master Slave flip flop

▪ When the clock pulse goes high, the master is enabled and the J and K inputs can affect its state,
while the slave (which receives the inverted clock) is isolated.
▪ When the CP goes back to 0, the master is isolated and the stored information is transmitted from
the master flip-flop to the slave flip-flop, producing the output.
▪ The master is thus positive-level triggered and the slave negative-level triggered: the master
responds first, and the slave follows on the falling edge of the clock.
▪ When J = 0 and K = 1, the master resets (Q = 0, Q' = 1); Q' = 1 drives the K input of the slave, so
on the negative clock transition the slave resets as well.
▪ When J = 1 and K = 0, the master sets (Q = 1); Q = 1 drives the J input of the slave, so the slave
sets on the negative clock transition.
▪ When J = 1 and K = 1, the master toggles on the positive transition of the clock and the slave
toggles on the negative transition; because the output changes only once per clock pulse, the race
around condition is eliminated.
▪ When J = 0 and K = 0, both flip flops are disabled and Q is unchanged.
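The master-slave behaviour can be sketched in a few lines. The model below abstracts each clock pulse into its two phases (clock high: master updates; clock falls: slave copies), which is enough to show why the race around condition cannot occur:

```python
def jk_next(j, k, q):
    """JK characteristic equation Q(n+1) = J·Q' + K'·Q."""
    return (j & (1 - q)) | ((1 - k) & q)

class MasterSlaveJK:
    """Master latches while the clock is high; the slave copies the master
    on the falling edge, so the output changes at most once per pulse."""
    def __init__(self):
        self.master = 0
        self.slave = 0

    def pulse(self, j, k):
        # Clock high: the master follows J and K (the slave is isolated).
        self.master = jk_next(j, k, self.slave)
        # Clock falls: the slave copies the master (the master is isolated).
        self.slave = self.master
        return self.slave

ff = MasterSlaveJK()
# J = K = 1: exactly one toggle per clock pulse, however long the pulse is
# held high -- the race around condition cannot occur.
assert [ff.pulse(1, 1) for _ in range(4)] == [1, 0, 1, 0]
```

In a level-triggered single JK flip-flop, the update inside one pulse would feed back into itself and re-toggle; here the feedback path is cut because the master and slave are never transparent at the same time.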

Q 7. What is D Flip Flop?

Ans. D flip-flops (data or delay flip-flops) can be designed from an SR flip-flop by connecting a NOT
gate between the S and R inputs and tying them together: the single D input drives S directly and R
through the inverter, so the two are always complements. D flip-flops can be used in place of SR
flip-flops wherever only the SET and RESET states are needed, with no possibility of the invalid
input combination.
D Flip-Flop Working

Let us look at the possible cases and write it down in our truth table. The clock is always 1, so only
two cases are possible where D can be high or low.

Case 1: D = 0

Gate 1 = 1, Gate 2 = 0, Gate 4 / Q(n+1)’ = 1, Gate 3 / Q(n+1) = 0

Note: One input of Gate 4 is 0 and Gate 4 is a NAND gate, so irrespective of the other input, the
output of Gate 4 will be 1, as per the property of NAND gates.

Case 2: D = 1

Gate 1 = 0, Gate 2 = 1, Gate 3 / Q(n+1) = 1, Gate 4 / Q(n+1)’ = 0

Note: One input of Gate 3 is 0 and Gate 3 is a NAND gate, so irrespective of the other input, the
output of Gate 3 will be 1, as per the property of NAND gates.

Now let us write the truth table-

D Flip-Flop Truth Table


CLK  D  Q(n+1)  State
1    0  0       RESET
1    1  1       SET

We will use this truth table to write the characteristics table for the D flip-flop. In the truth table, you
can see there is only one input D and one output Q(n+1). But in the characteristics table, you will see
there are two inputs D and Qn, and one output Q(n+1).

From the logic diagram above it is clear that Qn and Qn’ are two complementary outputs that also
act as inputs for Gate3 and Gate4 hence we will consider Qn i.e the present state of Flip flop as input
and Q(n+1) i.e. the next state as output.

After writing the characteristic table we will draw a 2-variable K-map to derive the characteristic
equation.
D Qn Q(n+1)
0 0 0
0 1 0
1 0 1
1 1 1

D Flip Flop K Map

From the K-map you get 2 pairs. On solving both we get the following characteristic equation:

Q(n+1) = D
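The characteristic equation Q(n+1) = D can also be derived from the SR construction described earlier: with S = D and R = NOT D, the SR characteristic equation Q(n+1) = S + R'Q collapses to D. A quick exhaustive check (function names are illustrative):

```python
def d_next(d, q):
    """D flip-flop characteristic equation: Q(n+1) = D (state irrelevant)."""
    return d

def d_from_sr(d, q):
    """D flip-flop built from SR by tying R = NOT S, so S = R = 1 is impossible."""
    s, r = d, 1 - d            # the inverter guarantees S and R are complements
    return s | ((1 - r) & q)   # SR characteristic equation Q(n+1) = S + R'Q

# Exhaustive check over both inputs and both present states:
for d in (0, 1):
    for q in (0, 1):
        assert d_next(d, q) == d_from_sr(d, q) == d
```

The loop confirms that the present state q drops out entirely, which is exactly what Q(n+1) = D says.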

Advantages

There are several advantages to using a D flip-flop. Some of them are listed below:

• Single input: The D flip-flop has a single data input, which makes it simpler to use and easier
to interface with other digital circuits.
• No feedback loop: The D flip-flop does not have a feedback loop, which eliminates the
possibility of a race condition and makes it more stable than other types of flip-flops.
• No invalid states: It does not have any invalid or forbidden states, which helps to avoid unpredictable
behavior in digital systems.
• Reduced power consumption: The D flip-flop consumes less power than other types of flip-
flops, making it more energy-efficient.
• Bi-stable operation: Like other flip-flops, the D flip-flop has a bi-stable operation, which
means that it can hold a state indefinitely until it is changed by an input signal.

Limitations

Apart from several advantages, there are some limitations associated with D flip-flops. Some of
them are listed below:

• No feedback: The D flip-flop does not have a feedback path, which means that it cannot be
used for applications that require feedback control, such as servo systems or motor control.
• No toggling: It has no built-in toggle mode; to make a D flip-flop toggle, its complemented
output must be fed back externally to the D input.
• Propagation delay: It has a propagation delay, which can lead to timing issues in digital
systems with tight timing constraints.
• Limited scalability: The D flip-flop can be challenging to scale up to more complex digital
systems, as it can lead to increased complexity and the potential for errors.

Applications
Some of the applications of D flip flop in real-world includes:

• Shift registers: D flip-flops can be cascaded together to create shift registers, which are used
to store and shift data in digital systems. Shift registers are commonly used in serial
communication protocols such as UART, SPI, and I2C.
• State machines: It can be used to implement state machines, which are used in digital
systems to control sequences of events. State machines are commonly used in control
systems, automotive applications, and industrial automation.
• Counters: It can be used in conjunction with other digital logic gates to create binary
counters that can count up or down depending on the design. This makes them useful in
real-time applications such as timers and clocks.
• Data storage: D flip-flops can be used to store temporary data in digital systems. They are
often used in conjunction with other memory elements to create more complex storage
systems.

Q 8. What is T Flip Flop?

Ans. T flip flop is similar to JK flip flop. Just tie both J and K inputs together to get a T Flip flop. Just
like the D flip flop, it has only one external input along with a clock.

T Flip-Flop Working

Let us take a look at the possible cases and write it down in our truth table. The clock is always 1, so
only two cases are possible where T can be high or low.

Case 1: Let’s say T = 0 and the clock pulse is high, i.e., 1. Then the outputs of both AND gate 1 and
AND gate 2 are 0, so gate 3 continues to output Q and gate 4 continues to output Q’. Both Q and Q’
retain their previous values, which means the Hold state.

Case 2: Let’s say T = 1 and the clock is 1. AND gate 1 then outputs (T · clock · Q) = Q, and similarly
AND gate 2 outputs (T · clock · Q’) = Q’. Each stored output is thus fed back into the gate on the
opposite side of the latch, so the cross-coupled gates drive the stored bit to its complement: if Q
was 0 it becomes 1, and if Q was 1 it becomes 0. Hence in this case the output toggles, because T = 1.
Characteristic Equation

Q(n+1) = TQn’ + T’Qn = T XOR Qn
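The characteristic equation above, and the divide-by-2 behaviour discussed later, can be checked with a short sketch:

```python
def t_next(t, q):
    """T flip-flop characteristic equation: Q(n+1) = T XOR Qn."""
    return t ^ q

# Hold when T = 0, toggle when T = 1:
assert t_next(0, 1) == 1
assert t_next(1, 1) == 0

# With T held at 1, the output completes one full cycle every two clock
# pulses, dividing the clock frequency by 2:
q, wave = 0, []
for _ in range(8):
    q = t_next(1, q)
    wave.append(q)
assert wave == [1, 0, 1, 0, 1, 0, 1, 0]
```

The eight-pulse waveform contains four full output cycles, i.e. half the clock frequency.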

Excitation Table

Applications of T Flip Flop

There are numerous applications of T Flip Flop in Digital System, which are listed below:

• Counters: T flip-flops are used in counters, which count the number of events that occur in a
digital system.

• Data Storage: T flip-flops can be used to build memory elements that store data while the
circuit is powered.

• Synchronous logic circuits: T flip-flops can be used to implement synchronous logic circuits,
which are circuits that perform operations on binary data based on a clock signal. By
synchronizing the logic circuit’s operations to the clock signal using T flip-flops, the circuit’s
behavior can be made predictable and reliable.
• Frequency division: A T flip-flop can divide the frequency of a clock signal by 2. The flip-flop
toggles its output on each active clock edge, so the output completes one full cycle for every two
clock cycles, dividing the clock frequency by 2.

• Shift registers: T flip-flops can be used in shift registers which are used to shift binary data in
one direction.

Topic: Combinational Circuits


Q1. What are combinational Circuits?

Ans. The combinational logic circuits are the circuits that contain different types of logic gates.
Simply, a circuit in which different types of logic gates are combined is known as a combinational
logic circuit. The output of the combinational circuit is determined from the present combination of
inputs, regardless of the previous input. The input variables, logic gates, and output variables are the
basic components of the combinational logic circuit. There are different types of combinational logic
circuits, such as Adder, Subtractor, Decoder, Encoder, Multiplexer, and De-multiplexer.

There are the following characteristics of the combinational logic circuit:

o At any instant of time, the output of the combinational circuits depends only on the present
input terminals.
o The combinational circuit has no memory of previous inputs: the present output is not affected
by any previous state of the input.
o A combinational logic circuit can have any number n of inputs and m of outputs.

Q 2. What are Half Adders?


Ans. A half adder is a digital logic circuit that performs binary addition of two single-bit binary
numbers. It has two inputs, A and B, and two outputs, SUM and CARRY. The half adder is the simplest
of all adder circuits: a combinational arithmetic circuit that adds two bits and produces a sum bit
(S) and a carry bit (C) as outputs. The input variables are the augend and addend bits, A and B.
Truth Table:
Operation and Truth Table for Half Adder
Operation:

Case 1: A = 0, B = 0
0 + 0 = 0, with no carry generated. Hence S = 0, C = 0.

Case 2: A = 0, B = 1
0 + 1 = 1, with no carry generated. Hence S = 1, C = 0.

Case 3: A = 1, B = 0
1 + 0 = 1, with no carry generated. Hence S = 1, C = 0.

Case 4: A = 1, B = 1
1 + 1 = 10 in binary: the sum bit is 0 and a carry bit of 1 is generated. Hence S = 0, C = 1.
(Equivalently, 1 + 1 = 2, and the binary value of 2 is 10: Carry = 1 and Sum = 0.)

Logical Expression:

For Sum:

Sum = A XOR B

For Carry:

Carry = A AND B

Implementation:
Note: Half adder has only two inputs and there is no provision to add a carry coming from the lower
order bits when multi addition is performed.
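The logical expressions above translate directly into a two-line sketch, verified against all four truth-table cases:

```python
def half_adder(a, b):
    """Half adder: Sum = A XOR B, Carry = A AND B."""
    return a ^ b, a & b      # returns (sum, carry)

# All four cases from the truth table above:
assert half_adder(0, 0) == (0, 0)
assert half_adder(0, 1) == (1, 0)
assert half_adder(1, 0) == (1, 0)
assert half_adder(1, 1) == (0, 1)   # 1 + 1 = 10 in binary
```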

Advantages and Disadvantages of Half Adder in Digital Logic:

Advantages of Half Adder in Digital Logic:

1. Simplicity: A half adder is a straightforward circuit that requires only a few fundamental parts,
namely XOR and AND gates. It is easy to implement and can be used in many digital systems.
2. Speed: The half adder operates at very high speed, making it suitable for use in fast digital
circuits.

Disadvantages of Half Adder in Digital Logic:

1. Limited usefulness: The half adder can add two single-bit numbers and produce a sum and a carry
bit. It cannot perform addition of multi-bit numbers, which requires more elaborate circuits such as
full adders.
2. Lack of carry input: The half adder does not have a carry input, which limits its value in more
complex addition tasks. A carry input is necessary to add multi-bit numbers and to chain multiple
adders together.
3. Propagation delay: The half adder circuit has a propagation delay, the time it takes for the
output to change in response to a change in the input. This can cause timing issues in digital
circuits, especially in high-speed systems.

Application of Half Adder in Digital Logic:

1. Arithmetic circuits: Half adders are used in arithmetic circuits to add binary numbers. When
multiple half adders are connected in a chain, they can add multi-bit binary numbers.
2. Data processing: Half adders are used in data-processing applications such as digital signal
processing, data encryption, and error correction.
3. Address decoding: In memory addressing, half adders are used in address-decoding circuits to
generate the address of a specific memory location.
4. Encoder and decoder circuits: Half adders are used in encoder and decoder circuits for digital
communication systems.
5. Multiplexers and demultiplexers: Half adders are used in multiplexers and demultiplexers to
select and route data.
6. Counters: Half adders are used in counters to increment the count by one.

Q 3. What are Full Adders?


Ans. A full adder is an adder that takes three inputs and produces two outputs. The first two inputs
are A and B, and the third input is the input carry, C-IN. The output carry is designated C-OUT and
the normal output is designated S, which is SUM. C-OUT is also known as the majority-1’s detector:
its output goes high when more than one input is high. Full adders can be cascaded, with the carry
bit passed from one stage to the next; for example, eight cascaded full adders form a byte-wide
adder. A full adder is needed because when a carry-in bit is present, a 1-bit half adder cannot be
used, since it does not accept a carry-in. A 1-bit full adder adds three operands and generates a
2-bit result.

Full Adder Truth Table:


Operation and Truth Table for Full Adder

Operation:

Case 1: A = 0, B = 0, D = 0
0 + 0 + 0 = 0, with no carry generated. Hence S = 0, C = 0.

Case 2: A = 0, B = 0, D = 1
0 + 0 + 1 = 1, with no carry generated. Hence S = 1, C = 0.

Case 3: A = 0, B = 1, D = 0
0 + 1 + 0 = 1, with no carry generated. Hence S = 1, C = 0.

Case 4: A = 0, B = 1, D = 1
0 + 1 + 1 = 10 in binary: a carry bit of 1 is generated, making the sum bit 0. Hence S = 0, C = 1.

Case 5: A = 1, B = 0, D = 0
1 + 0 + 0 = 1, with no carry generated. Hence S = 1, C = 0.

Case 6: A = 1, B = 0, D = 1
1 + 0 + 1 = 10 in binary: a carry bit of 1 is generated, making the sum bit 0. Hence S = 0, C = 1.

Case 7: A = 1, B = 1, D = 0
1 + 1 + 0 = 10 in binary: a carry bit of 1 is generated, making the sum bit 0. Hence S = 0, C = 1.

Case 8: A = 1, B = 1, D = 1
1 + 1 + 1 = 11 in binary: the first 1 + 1 gives 0 with a carry of 1, and adding the third 1 makes
the sum bit 1 with the carry passed on to the MSB. Hence S = 1, C = 1.

(Here D denotes the input carry C-IN.)
Full Adder logic circuit.

Implementation of Full Adder using Half Adders:

2 Half Adders and an OR gate are required to implement a Full Adder.

With this logic circuit, two bits can be added together, taking a carry from the next lower order of
magnitude, and sending a carry to the next higher order of magnitude.
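As an illustrative sketch (Python model, not from the text), the two-half-adder construction can be written out and checked against all eight truth-table cases:

```python
def half_adder(a, b):
    return a ^ b, a & b          # sum = XOR, carry = AND

def full_adder(a, b, d):         # d is the carry-in (C-in)
    s1, c1 = half_adder(a, b)    # first half adder adds A and B
    s, c2 = half_adder(s1, d)    # second adds the partial sum and C-in
    return s, c1 | c2            # OR gate combines the two carries

# Check all eight input cases against the arithmetic definition
for a in (0, 1):
    for b in (0, 1):
        for d in (0, 1):
            s, c = full_adder(a, b, d)
            assert s == (a + b + d) % 2 and c == (a + b + d) // 2
```

The loop reproduces the eight operation cases above: for example, `full_adder(1, 1, 1)` returns `(1, 1)`, matching Case 8.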

Karnaugh Map for Sum of Full Adder:

Fig. 8 Full Adder Karnaugh Map for SUM

(*Note* D here is C-in)


Advantages and Disadvantages of Full Adder in Digital Logic

Advantages of Full Adder in Digital Logic:

1.Flexibility: A full adder can add three input bits, making it more flexible than a half adder. It
can also be used to add multi-bit numbers by chaining multiple full adders together.
2.Carry Input: The full adder has a carry input, which allows it to perform addition of multi-bit
numbers and to chain multiple adders together.
3.Speed: The full adder operates at very high speed, making it suitable for use in high-speed
digital circuits.

Disadvantages of Full Adder in Digital Logic:

1.Complexity: The full adder is more complex than a half adder and requires more gates, such as
XOR, AND, and OR gates. It is also more difficult to design and implement.
2.Propagation Delay: The full adder circuit has a propagation delay, which is the time it takes for
the output to change in response to a change in the input. This can cause timing issues in digital
circuits, particularly in high-speed systems.

Application of Full Adder in Digital Logic:

1.Arithmetic circuits: Full adders are used in arithmetic circuits to add binary numbers. When
multiple full adders are connected in a chain, they can add multi-bit binary numbers.
2.Data processing: Full adders are used in data processing applications such as digital signal
processing, data encryption, and error correction.
3.Counters: Full adders are used in counters to increment or decrement the count by one.
4.Multiplexers and demultiplexers: Full adders are used in multiplexers and demultiplexers to
select and route data.
5.Memory addressing: Full adders are used in memory addressing circuits to generate the address
of a specific memory location.
6.ALUs: Full adders are an essential part of the Arithmetic Logic Units (ALUs) used in
microprocessors and digital signal processors.

Q4. What are Half Subtractor?


Ans. A half subtractor is a combinational circuit with two inputs and two outputs, difference and
borrow. It produces the difference between the two binary bits at the input and also produces an
output (Borrow) to indicate whether a 1 has been borrowed. In the subtraction (A - B), A is
called the minuend bit and B is called the subtrahend bit.
Truth Table

The SOP form of the Diff and Borrow is as follows:


Diff= A'B+AB'
Borrow = A'B

Operation:
(*D - Difference, P - Borrow)

Case 1: A=0, B=0. According to binary subtraction, 0 - 0 gives a difference of 0 with no borrow.
Hence, D=0, P=0.

Case 2: A=0, B=1. Subtracting 1 from 0 requires a borrow, giving a difference of 1 with a borrow
of 1. Hence, D=1, P=1.

Case 3: A=1, B=0. According to binary subtraction, 1 - 0 gives a difference of 1 with no borrow.
Hence, D=1, P=0.

Case 4: A=1, B=1. According to binary subtraction, 1 - 1 gives a difference of 0 with no borrow.
Hence, D=0, P=0.

Implementation

Logical Expression

Difference = A XOR B
Borrow = A'B
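A minimal Python sketch of these expressions (illustrative; the function returns the pair (difference, borrow)):

```python
def half_subtractor(a, b):
    # Difference = A XOR B, Borrow = (NOT A) AND B
    return a ^ b, (1 - a) & b

# The four operation cases
assert half_subtractor(0, 0) == (0, 0)
assert half_subtractor(0, 1) == (1, 1)   # borrow needed
assert half_subtractor(1, 0) == (1, 0)
assert half_subtractor(1, 1) == (0, 0)
```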

Advantages of Half Adder and Half Subtractor


1. Simplicity: The half adder and half subtractor circuits are simple and easy to design,
implement, and debug compared to other binary arithmetic circuits.
2. Building blocks: The half adder and half subtractor are basic building blocks that can be used
to construct more complex arithmetic circuits, such as full adders and subtractors, multiple-bit
adders and subtractors, and carry look-ahead adders.
3. Low cost: The half adder and half subtractor circuits use only a few gates, which reduces the
cost and power consumption compared to more complex circuits.
4. Easy integration: The half adder and half subtractor can be easily integrated with other digital
circuits and systems.

Disadvantages of Half Adder and Half Subtractor


1. Limited functionality: The half adder and half subtractor can only perform binary addition
and subtraction of two single-bit numbers, respectively, and are not suitable for more complex
arithmetic operations.
2. Inefficient for multi-bit numbers: For multi-bit numbers, multiple half adders or half
subtractors need to be cascaded, which increases the complexity and decreases the efficiency
of the circuit.
3. High propagation delay: The propagation delay of the half adder and half subtractor is higher
compared to other arithmetic circuits, which can affect the overall performance of the system.

Application of Half Subtractor in Digital Logic:


1.Calculators: Most calculators use digital logic circuits to perform mathematical operations. A
half subtractor can be used in a calculator to subtract two binary digits from each other.
2.Alarm systems: Many alarm systems use digital logic circuits to detect and respond to
intruders. A half subtractor can be used in these systems to compare the values of two binary
bits and trigger an alarm if they differ.
3.Automotive systems: Many modern vehicles use digital logic circuits to control various
functions, such as the engine management system, braking system, and entertainment system. A half
subtractor can be used in these systems to perform calculations and comparisons.
4.Security systems: Digital logic circuits are commonly used in security systems to detect and
respond to threats. A half subtractor can be used in these systems to compare two binary values
and trigger an alarm if they differ.
5.Computer systems: Digital logic circuits are used extensively in computer systems to perform
calculations and comparisons. A half subtractor can be used in a computer system to subtract two
binary values from each other.

Q5. What are Full Subtractors?


Ans. A full subtractor is a combinational circuit that performs subtraction of two bits, one is minuend
and other is subtrahend, taking into account borrow of the previous adjacent lower minuend bit. This
circuit has three inputs and two outputs.
The three inputs A, B and Bin, denote the minuend, subtrahend, and previous borrow, respectively.
The two outputs, D and Bout represent the difference and output borrow, respectively.
Here’s how a full subtractor works:

1. First, we need to convert the binary numbers to their two’s complement form if we are subtracting a
negative number.
2. Next, we compare the bits in the minuend and subtrahend at the corresponding positions. If the
subtrahend bit is greater than or equal to the minuend bit, we need to borrow from the previous stage
(if there is one) to subtract the subtrahend bit from the minuend bit.
3. We subtract the two bits along with the borrow-in to get the difference bit. If the minuend bit is
greater than or equal to the subtrahend bit along with the borrow-in, then the difference bit is 1,
otherwise it is 0.
4. We then calculate the borrow-out bit by comparing the minuend and subtrahend bits. If the
minuend bit is less than the subtrahend bit along with the borrow-in, then we need to borrow for the
next stage, so the borrow-out bit is 1, otherwise it is 0.
The circuit diagram for a full subtractor usually consists of two half-subtractors and an additional OR
gate to calculate the borrow-out bit. The inputs and outputs of the full subtractor are as follows:

Inputs:
A: minuend bit
B: subtrahend bit
Bin: borrow-in bit from the previous stage
Outputs:
Diff: difference bit
Bout: borrow-out bit for the next stage

Truth Table –

From above table we can draw the K-Map as shown for “difference” and “borrow”.
Logical expression for difference –

D = A’B’Bin + A’BBin’ + AB’Bin’ + ABBin


= Bin(A’B’ + AB) + Bin’(AB’ + A’B)
= Bin( A XNOR B) + Bin’(A XOR B)
= Bin (A XOR B)’ + Bin’(A XOR B)
= Bin XOR (A XOR B)
= (A XOR B) XOR Bin

Logical expression for borrow –


Bout = A’B’Bin + A’BBin’ + A’BBin + ABBin
= A’B’Bin +A’BBin’ + A’BBin + A’BBin + A’BBin + ABBin
= A’Bin(B + B’) + A’B(Bin + Bin’) + BBin(A + A’)
= A’Bin + A’B + BBin

OR

Bout = A’B’Bin + A’BBin’ + A’BBin + ABBin


= Bin(AB + A’B’) + A’B(Bin + Bin’)
= Bin( A XNOR B) + A’B
= Bin (A XOR B)’ + A’B

Logic Circuit for Full Subtractor –

Implementation of Full Subtractor using Half Subtractors – 2 Half Subtractors and an OR gate is
required to implement a Full Subtractor.
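The two-half-subtractor construction can be sketched in Python and checked against the derived expressions (an illustrative model; the helper names are ours):

```python
def half_subtractor(a, b):
    return a ^ b, (1 - a) & b                # (difference, borrow)

def full_subtractor(a, b, bin_):
    d1, p1 = half_subtractor(a, b)           # first stage: A - B
    d, p2 = half_subtractor(d1, bin_)        # second stage: subtract borrow-in
    return d, p1 | p2                        # OR gate combines the borrows

# Verify against D = (A XOR B) XOR Bin and Bout = A'B + A'Bin + B.Bin
for a in (0, 1):
    for b in (0, 1):
        for bi in (0, 1):
            d, bo = full_subtractor(a, b, bi)
            assert d == (a ^ b) ^ bi
            assert bo == ((1 - a) & b) | ((1 - a) & bi) | (b & bi)
```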

Q6. Explain Parallel Adder and Parallel Subtractor.


Ans. Parallel Adder –

A single full adder performs the addition of two one bit numbers and an input carry. But a Parallel
Adder is a digital circuit capable of finding the arithmetic sum of two binary numbers that is greater
than one bit in length by operating on corresponding pairs of bits in parallel. It consists of full adders
connected in a chain where the output carry from each full adder is connected to the carry input of the
next higher order full adder in the chain. An n-bit parallel adder requires n full adders to perform the
operation. So for a two-bit number two adders are needed, while for a four-bit number four adders are
needed and so on. Parallel adders normally incorporate carry lookahead logic to ensure that carry
propagation between subsequent stages of addition does not limit addition speed.

Working of parallel Adder –


1. As shown in the figure, firstly the full adder FA1 adds A1 and B1 along with the carry C1 to
generate the sum S1 (the first bit of the output sum) and the carry C2 which is connected to
the next adder in chain.
2. Next, the full adder FA2 uses this carry bit C2 to add with the input bits A2 and B2 to
generate the sum S2(the second bit of the output sum) and the carry C3 which is again further
connected to the next adder in chain and so on.
3. The process continues till the last full adder FAn uses the carry bit Cn to add with its input An
and Bn to generate the last bit of the output along last carry bit Cout.
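The carry-rippling steps above can be sketched in Python (illustrative; bit lists are LSB-first, so `A[0]` plays the role of A1 in the text):

```python
def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def parallel_adder(A, B, n=4):
    # A, B are lists of bits, index 0 = least significant bit
    carry, S = 0, []
    for i in range(n):                        # each FA passes its carry on
        s, carry = full_adder(A[i], B[i], carry)
        S.append(s)
    return S, carry                           # sum bits and final carry-out

# 6 (0110) + 3 (0011) = 9 (1001), written LSB-first
S, cout = parallel_adder([0, 1, 1, 0], [1, 1, 0, 0])
assert (S, cout) == ([1, 0, 0, 1], 0)
```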

Parallel Subtractor –

A Parallel Subtractor is a digital circuit capable of finding the arithmetic difference of two binary
numbers that is greater than one bit in length by operating on corresponding pairs of bits in parallel.
The parallel subtractor can be designed in several ways including combination of half and full
subtractors, all full subtractors or all full adders with subtrahend complement input.
Working of Parallel Subtractor –

1. As shown in the figure, the parallel binary subtractor is formed by combination of all full
adders with subtrahend complement input.
2. This operation considers that the addition of minuend along with the 2’s complement of the
subtrahend is equal to their subtraction.
3. Firstly the 1’s complement of B is obtained by the NOT gate and 1 can be added through the
carry to find out the 2’s complement of B. This is further added to A to carry out the
arithmetic subtraction.
4. The process continues till the last full adder FAn uses the carry bit Cn to add with its input An
and 2’s complement of Bn to generate the last bit of the output along last carry bit Cout.
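A hedged sketch of the 2's-complement method in Python (illustrative; the initial carry of 1 supplies the +1 that turns the 1's complement of B into its 2's complement):

```python
def parallel_subtractor(A, B, n=4):
    # Computes A - B as A + (1's complement of B) + 1, bits LSB-first
    carry, D = 1, []                  # initial carry-in of 1 adds the +1
    for i in range(n):
        b = 1 - B[i]                  # NOT gate: 1's complement of B's bit
        s = A[i] ^ b ^ carry
        carry = (A[i] & b) | (carry & (A[i] ^ b))
        D.append(s)
    return D, carry                   # final carry 1 means no borrow

# 6 (0110) - 3 (0011) = 3 (0011), written LSB-first
D, c = parallel_subtractor([0, 1, 1, 0], [1, 1, 0, 0])
assert (D, c) == ([1, 1, 0, 0], 1)
```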

Advantages of parallel Adder/Subtractor –


1. The parallel adder/subtractor performs the addition operation faster as compared to serial
adder/subtractor.
2. Time required for addition does not depend on the number of bits.
3. The output is in parallel form i.e all the bits are added/subtracted at the same time.
4. It is less costly.

Disadvantages of parallel Adder/Subtractor –


1. Each adder has to wait for the carry which is to be generated from the previous adder in chain.
2. The propagation delay (the delay associated with the travelling of the carry bit) is found to
increase with the increase in the number of bits to be added.

Topic: Register Transfer and Microoperations


Q1. What are Registers?

Ans. Registers are a type of computer memory used to quickly accept, store, and
transfer data and instructions that are being used immediately by the CPU. The
registers used by the CPU are often termed as Processor registers.

A processor register may hold an instruction, a storage address, or any data (such as
bit sequence or individual characters).

• Computer registers are designated by upper case letters (and optionally followed by
digits or letters) to denote the function of the register.
• For example, the register that holds an address for the memory unit is usually called
a memory address register and is designated by the name MAR.
• Other designations for registers are PC (for program counter), IR (for instruction
register), and R1 (for processor register).

• The individual flip-flops in an n-bit register are numbered in sequence from 0


through n-1, starting from 0 in the rightmost position and increasing the numbers
toward the left.
• Figure 4-1 shows the representation of registers in block diagram form.
• The most common way to represent a register is by a rectangular box with the name
of the register inside, as in Fig. 4-1(a).
• The individual bits can be distinguished as in (b).
• The numbering of bits in a 16-bit register can be marked on top of the box as shown in
(c).
• 16-bit register is partitioned into two parts in (d). Bits 0 through 7 are assigned the
symbol L (for low byte) and bits 8 through 15 are assigned the symbol H (for high
byte).
• The name of the 16-bit register is PC. The symbol PC (0-7) or PC (L) refers to the low-
order byte and PC (8-15) or PC (H) to the high-order byte.

Q2. What are different types of Registers?


Ans.
1. Accumulator (AC): The most frequently used register; it stores data taken from memory.

2. Memory address register (MAR): Stores the address of the memory location to be accessed
later. MAR works together with MDR.

3. Memory data register (MDR): Holds the information that is to be written to, or has just been
read from, the addressed memory location.

4. General-purpose registers (GPR): A series of registers, generally named R0 through Rn-1, that
store temporary data produced during processing. More GPRs enable register-to-register
addressing, which increases processing speed.

5. Program counter (PC): Keeps track of the program under execution by holding the memory
address of the next instruction to be fetched. The PC points to the next instruction once the
previous instruction has completed successfully, and it also serves to count instructions. How
much the PC is incremented depends on the architecture: on a 32-bit architecture the PC is
incremented by 4 each time to fetch the next instruction.

6. Instruction register (IR): Holds the instruction about to be executed. Instructions fetched
from the system are stored in this register; once stored, the processor starts executing them
while the PC points to the next instruction to be executed.

7. Condition code registers: Contain flags that reflect the status of operations; the flags are
set accordingly when, for example, the result of an operation is zero or negative.

8. Temporary register (TR): Holds temporary data.

9. Input register (INPR): Carries an input character.

10. Output register (OUTR): Carries an output character.

11. Index registers (BX): Store values and numbers used in address calculation and transform
them into effective addresses; also called base registers. They are used to modify operand
addresses at execution time.

12. Memory buffer register (MBR): Stores data content or memory commands used to write to the
disk; its basic function is to hold data fetched from memory. MBR is very similar to MDR.

13. Stack control registers (SCR): A stack is a set of memory locations where data is stored and
retrieved in a certain order, called last in, first out (LIFO): an item at the second position
can be retrieved only after the first has been removed. Stack control registers manage the
stacks in the computer; SP and BP are stack control registers. DI, SI, SP, and BP can be used as
2-byte or 4-byte registers; EDI, ESI, ESP, and EBP are the 4-byte versions.

14. Flag register (FR): Indicates particular conditions. The flag register is 1-2 bytes in size
and is divided into individual bits, each defining a condition or flag. Basic flags: zero,
carry, parity, sign, and overflow.

15. Segment register (SR): Holds addresses for memory segments.

16. Data register (DX): Holds a memory operand.

Q3. What is Register Transfer Language?

Ans.
• Information transfer from one register to another is designated in symbolic form by
means of a replacement operator.
• The statement R2← R1 denotes a transfer of the content of register R1 into register
R2.
• It designates a replacement of the content of R2 by the content of R1.
• By definition, the content of the source register R1 does not change after the
transfer.
• If we want the transfer to occur only under a predetermined control condition then
it can be shown by an if-then statement. if (P=1) then R2← R1.
• P is the control signal generated by a control section.
• We can separate the control variables from the register transfer operation by
specifying a Control Function.
• Control function is a Boolean variable that is equal to 0 or 1.
• The control function is included in the statement as P: R2← R1
• The control condition, terminated by a colon, implies that the transfer operation is
executed by the hardware only if P=1.
• Every statement written in a register transfer notation implies a hardware
construction for implementing the transfer.
• Figure 4-2 shows the block diagram that depicts the transfer from R1 to R2.
• The n outputs of register R1 are connected to the n inputs of register R2.
• The letter n will be used to indicate any number of bits for the register. It will be
replaced by an actual number when the length of the register is known.
• Register R2 has a load input that is activated by the control variable P.
• It is assumed that the control variable is synchronized with the same clock as the one
applied to the register.
• As shown in the timing diagram, P is activated in the control section by the rising
edge of a clock pulse at time t.
• The next positive transition of the clock at time t + 1 finds the load input active and
the data inputs of R2 are then loaded into the register in parallel.

• P may go back to 0 at time t+1; otherwise, the transfer will occur with every clock
pulse transition while P remains active.
• Even though the control condition such as P becomes active just after time t, the
actual transfer does not occur until the register is triggered by the next positive
transition of the clock at time t +1.
• The basic symbols of the register transfer notation are listed in below table.

• A comma is used to separate two or more operations that are executed at the same
time.
• The statement T : R2← R1, R1← R2 (exchange operation) denotes an operation that
exchanges the contents of two registers during one common clock pulse, provided that
T=1.
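These transfer and exchange statements can be mimicked in Python (a toy model, not from the text; real hardware performs them on a clock edge):

```python
# P: R2 <- R1 — the transfer happens only when the control function P
# is 1, and the source register R1 is unchanged by it.
R1, R2 = 0b1010, 0b0101
P = 1
if P == 1:
    R2 = R1
assert (R1, R2) == (0b1010, 0b1010)   # source unchanged, destination loaded

# T: R2 <- R1, R1 <- R2 — the exchange occurs in one common clock pulse,
# modeled here with Python's simultaneous assignment.
R1, R2, T = 0b1100, 0b0011, 1
if T == 1:
    R1, R2 = R2, R1
assert (R1, R2) == (0b0011, 0b1100)
```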

Q4. Explain about Bus and Memory Transfer.

Ans.
• A more efficient scheme for transferring information between registers in a
multiple-register configuration is a Common Bus System.
• A common bus consists of a set of common lines, one for each bit of a register.
• Control signals determine which register is selected by the bus during each
particular register transfer.
• Different ways of constructing a Common Bus System
➢ Using Multiplexers
➢ Using Tri-state Buffers

Common bus system is with multiplexers:


➢ The multiplexers select the source register whose binary information is then
placed on the bus.
➢ The construction of a bus system for four registers is shown in below Figure.

➢ The bus consists of four 4 x 1 multiplexers each having four data inputs, 0
through 3, and two selection inputs, S1 and S0.
➢ For example, output 1 of register A is connected to input 0 of MUX 1 because this
input is labelled A1.
➢ The diagram shows that the bits in the same significant position in each register
are connected to the data inputs of one multiplexer to form one line of the bus.
➢ Thus MUX 0 multiplexes the four 0 bits of the registers, MUX 1 multiplexes the
four 1 bits of the registers, and similarly for the other two bits.
➢ The two selection lines S1 and S0 are connected to the selection inputs of all four
multiplexers.
➢ The selection lines choose the four bits of one register and transfer them into the
four-line common bus.
➢ When S1S0 = 00, the 0 data inputs of all four multiplexers are selected and
applied to the outputs that form the bus.
➢ This causes the bus lines to receive the content of register A since the outputs of
this register are connected to the 0 data inputs of the multiplexers.
➢ Similarly, register B is selected if S1S0 = 01, and so on.
➢ Table 4-2 shows the register that is selected by the bus for each of the four
possible binary values of the selection lines.

➢ In general a bus system has


▪ multiplex “k” Registers
▪ each register of “n” bits
▪ to produce “n-line bus”
▪ no. of multiplexers required = n
▪ size of each multiplexer = k x 1
➢ When the bus is included in the statement, the register transfer is symbolized as
follows: BUS← C, R1← BUS
➢ The content of register C is placed on the bus, and the content of the bus is
loaded into register R1 by activating its load control input. If the bus is known to
exist in the system, it may be convenient just to show the direct transfer. R1← C.
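The multiplexer-based bus can be modeled as a simple selection function (illustrative Python; the register values below are arbitrary):

```python
def bus_select(regs, s1, s0):
    # Selection lines S1 S0 pick which register drives the bus:
    # 00 -> A, 01 -> B, 10 -> C, 11 -> D (Table 4-2 behavior)
    return regs[(s1 << 1) | s0]

A, B, C, D = 0b0001, 0b0010, 0b0100, 0b1000
assert bus_select([A, B, C, D], 0, 0) == A   # S1S0 = 00 selects A
assert bus_select([A, B, C, D], 0, 1) == B   # S1S0 = 01 selects B

# BUS <- C, R1 <- BUS
R1 = bus_select([A, B, C, D], 1, 0)
assert R1 == C
```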

Memory Transfer:

• The transfer of information from a memory word to the outside environment is


called a read operation.
• The transfer of new information to be stored into the memory is called a write
operation.
• A memory word will be symbolized by the letter M.
• The particular memory word among the many available is selected by the memory
address during the transfer.
• It is necessary to specify the address of M when writing memory transfer operations.
• This will be done by enclosing the address in square brackets following the letter M.
• Consider a memory unit that receives the address from a register, called the address
register, symbolized by AR.
• The data are transferred to another register, called the data register, symbolized by
DR.
• The read operation can be stated as follows: Read: DR<- M [AR]
• This causes a transfer of information into DR from the memory word M selected by
the address in AR.
• The write operation transfers the content of a data register to a memory word M
selected by the address. Assume that the input data are in register R1 and the
address is in AR.
• The write operation can be stated as follows: Write: M [AR] <- R1
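A toy Python model of these read and write operations, with the memory M as a list (the names AR, DR, and R1 follow the text):

```python
M = [0] * 16                 # toy 16-word memory
AR, R1 = 5, 0b1011           # address register and data to store

M[AR] = R1                   # Write: M[AR] <- R1
DR = M[AR]                   # Read:  DR <- M[AR]
assert DR == 0b1011          # DR now holds the word selected by AR
```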

Q5. What are Microoperations? Explain Different types of microoperations.

Ans. In computer central processing units, micro-operations (also known as micro-ops) are the
functional or atomic, operations of a processor. These are low level instructions used in some
designs to implement complex machine instructions. They generally perform operations on data
stored in one or more registers.

Types of Micro-operations:

• Register Transfer Micro-operations: Transfer binary information from one register to


another.
• Arithmetic Micro-operations: Perform arithmetic operation on numeric data stored
in registers.
• Logical Micro-operations: Perform bit manipulation operations on data stored in
registers.
• Shift Micro-operations: Perform shift operations on data stored in registers.
• Register Transfer Micro-operations don’t change the information content when the
binary information moves from the source register to the destination register.
• The other three types of micro-operations change the information content during the
transfer.

Arithmetic Micro-operations:

We can perform arithmetic operations on the numeric data which is stored inside the registers.

Example :

R3 <- R1 + R2

The value in register R1 is added to the value in the register R2 and then the sum is transferred into
register R3. Similarly, other arithmetic micro-operations are performed on the registers.

• Addition –
In addition micro-operation, the value in register R1 is added to the value in the register R2
and then the sum is transferred into register R3.

• Subtraction –
In subtraction micro-operation, the contents of register R2 are subtracted from contents of
the register R1, and then the result is transferred into R3.
There is another way of doing the subtraction. In this, 2’s complement of R2 is added to R1, which is
equivalent to R1 – R2, and then the result is transferred into register R3.

• Increment –
In Increment micro-operation, the value inside the R1 register is increased by 1.

• Decrement –
In Decrement micro-operation, the value inside the R1 register is decreased by 1.

• 1’s Complement –
In this micro-operation, the complement of the value inside the register R1 is taken.

• 2’s Complement –
In this micro-operation, the complement of the value inside the register R2 is taken and then
1 is added to the value and then the final result is transferred into the register R2. This
process is also called Negation. It is equivalent to -R2.

Arithmetic micro-operations are the basic building blocks of arithmetic operations performed by a
computer’s central processing unit (CPU). These micro-operations are executed on the data stored in
registers, which are small, high-speed storage units within the CPU.

There are several types of arithmetic micro-operations that can be performed on register data,
including:
1. Addition: This micro-operation adds two values together and stores the result in a register.

2. Subtraction: This micro-operation subtracts one value from another and stores the result in
a register.

3. Increment: This micro-operation adds 1 to the value in a register.

4. Decrement: This micro-operation subtracts 1 from the value in a register.

5. Multiplication: This micro-operation multiplies two values together and stores the result in a
register.

6. Division: This micro-operation divides one value by another and stores the quotient and
remainder in separate registers.

7. Shift: This micro-operation shifts the bits in a register to the left or right, depending on the
direction specified.

These arithmetic micro-operations are used in combination with logical micro-operations, such as
AND, OR, and NOT, to perform more complex calculations and manipulate data within the CPU.
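A sketch of several of these micro-operations on 8-bit register values in Python (illustrative; the `& MASK` models the fixed register width, and the register names follow the text):

```python
MASK = 0xFF                              # 8-bit register width
R1, R2 = 25, 7

R3 = (R1 + R2) & MASK                    # addition: R3 <- R1 + R2
R4 = (R1 - R2) & MASK                    # subtraction: R3 <- R1 - R2
R5 = (R1 + ((~R2 + 1) & MASK)) & MASK    # R1 + 2's complement of R2
assert R4 == R5                          # both subtraction forms agree

inc = (R1 + 1) & MASK                    # increment
dec = (R1 - 1) & MASK                    # decrement
ones = (~R2) & MASK                      # 1's complement of R2
twos = (ones + 1) & MASK                 # 2's complement (negation)
```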

Logic Micro-operations:

• Logic microoperations specify binary operations for strings of bits stored in registers.
• These operations consider each bit of the register separately and treat them as
binary variables.
• For example, the exclusive-OR microoperation with the contents of two registers RI
and R2 is symbolized by the statement
• It specifies a logic microoperation to be executed on the individual bits of the
registers provided that the control variable P = 1.

List of Logic Microoperations:

• There are 16 different logic operations that can be performed with two binary
variables.
• They can be determined from all possible truth tables obtained with two binary
variables as shown in Table 4-5.

• The 16 Boolean functions of two variables x and y are expressed in algebraic form in
the first column of Table 4-6.
• The 16 logic microoperations are derived from these functions by replacing variable
x by the binary content of register A and variable y by the binary content of register
B.
• The logic micro-operations listed in the second column represent a relationship
between the binary content of two registers A and B.

Shift Microoperations:

• Shift microoperations are used for serial transfer of data.


• The contents of a register can be shifted to the left or the right.
• During a shift-left operation the serial input transfers a bit into the rightmost
position.
• During a shift-right operation the serial input transfers a bit into the leftmost
position.
• There are three types of shifts: logical, circular, and arithmetic.
• The symbolic notation for the shift microoperations is shown in Table 4-7.
• Logical Shift:
➢ A logical shift is one that transfers 0 through the serial input.
➢ The symbols shl and shr for logical shift-left and shift-right
microoperations.
➢ The microoperations that specify a 1-bit shift to the left of the content
of register R and a 1-bit shift to the right of the content of register R
shown in table 4.7.
➢ The bit transferred to the end position through the serial input is
assumed to be 0 during a logical shift.
• Circular Shift:
➢ The circular shift (also known as a rotate operation) circulates the bits of the
register around the two ends without loss of information.
➢ This is accomplished by connecting the serial output of the shift register to its
serial input.
➢ We will use the symbols cil and cir for the circular shift left and right,
respectively.
• Arithmetic Shift:
➢ An arithmetic shift is a microoperation that shifts a signed binary number to
the left or right.
➢ An arithmetic shift-left multiplies a signed binary number by 2.
➢ An arithmetic shift-right divides the number by 2.
➢ Arithmetic shifts must leave the sign bit unchanged because the sign of the
number remains the same when it is multiplied or divided by 2.
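The three shift types can be sketched for an 8-bit register in Python (illustrative; the helper names shl, shr, cil, and cir follow Table 4-7, while `ashr` is our name for the arithmetic shift-right):

```python
N, MASK = 8, 0xFF

def shl(r):  return (r << 1) & MASK                    # logical shift left, 0 in
def shr(r):  return r >> 1                             # logical shift right, 0 in
def cil(r):  return ((r << 1) | (r >> (N - 1))) & MASK # circular (rotate) left
def cir(r):  return ((r >> 1) | ((r & 1) << (N - 1))) & MASK  # rotate right
def ashr(r): return (r >> 1) | (r & 0x80)              # arithmetic: sign bit kept

assert shl(0b1000_0001) == 0b0000_0010   # logical: MSB lost, 0 enters
assert cil(0b1000_0001) == 0b0000_0011   # circular: MSB wraps to LSB
assert ashr(0b1000_0000) == 0b1100_0000  # divide by 2, sign unchanged
```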

Arithmetic Logic Shift Unit:

• Instead of having individual registers performing the microoperations directly,


computer systems employ a number of storage registers connected to a common
operational unit called an arithmetic logic unit, abbreviated ALU.
• The ALU is a combinational circuit so that the entire register transfer operation from
the source registers through the ALU and into the destination register can be
performed during one clock pulse period.
• The shift microoperations are often performed in a separate unit, but sometimes the
shift unit is made part of the overall ALU.
• The arithmetic, logic, and shift circuits introduced in previous sections can be
combined into one ALU with common selection variables. One stage of an arithmetic
logic shift unit is shown in Fig. 4- 13.
• Particular microoperation is selected with inputs S1 and S0. A 4 x 1 multiplexer at the
output chooses between an arithmetic output in Di and a logic output in Ei .
• The data in the multiplexer are selected with inputs S3 and S2. The other two data
inputs to the multiplexer receive inputs Ai-1 for the shift-right operation and Ai+1 for
the shift-left operation.
• The circuit whose one stage is specified in Fig. 4-13 provides eight arithmetic
operation, four logic operations, and two shift operations.
• Each operation is selected with the five variables S3, S2, S1, S0 and Cin.
• The input carry Cin is used for selecting an arithmetic operation only.
• Table 4-8 lists the 14 operations of the ALU. The first eight are arithmetic operations
and are selected with S3S2 = 00.
• The next four are logic and are selected with S3S2 = 01.
• The input carry has no effect during the logic operations and is marked with
don't-care x’s.
• The last two operations are shift operations and are selected with S3S2= 10 and 11.
• The other three selection inputs have no effect on the shift.
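The selection scheme can be mimicked in software. The grouping by S3S2 (arithmetic, logic, shift) matches the text above; the sub-encodings within each group — which S1, S0, Cin pattern selects transfer, add, subtract, and so on — follow Table 4-8, and the names below are illustrative:

```python
BITS = 16
MASK = (1 << BITS) - 1

def alu(sel, cin, A, B):
    """One pass through the ALU; sel is the string S3 S2 S1 S0."""
    s3s2, s1s0 = sel[:2], sel[2:]
    if s3s2 == '00':                            # arithmetic: Cin participates
        ops = {'00': A,                         # transfer A / increment A
               '01': A + B,                     # add (plus 1 if Cin = 1)
               '10': A + (~B & MASK),           # A plus 1's complement of B
               '11': A - 1}                     # decrement A
        return (ops[s1s0] + cin) & MASK
    if s3s2 == '01':                            # logic: Cin is a don't-care
        ops = {'00': A & B, '01': A | B, '10': A ^ B, '11': ~A & MASK}
        return ops[s1s0]
    if s3s2 == '10':                            # shift right
        return A >> 1
    return (A << 1) & MASK                      # S3S2 = 11: shift left

assert alu('0001', 1, 7, 5) == 13               # A + B + 1
assert alu('0010', 1, 9, 4) == 5                # A - B (A + B' + 1)
```

Notice that Cin is only added in the arithmetic branch, mirroring the statement that the input carry selects arithmetic operations only.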
Topic: Basic Computer Organisation and Design

Q1. What are Instruction Codes?

Ans. Instruction Code: a group of bits that instructs the computer to perform a specific operation.

An instruction code is usually divided into two parts: the opcode and the address (operand).

• Operation Code (opcode):


➢ group of bits that define the operation
➢ Eg: add, subtract, multiply, shift, complement.
➢ The number of bits required for the opcode depends on the number of
operations available in the computer.
➢ An n-bit opcode can specify up to 2^n operations
• Address (operand):
➢ specifies the location of operands (registers or memory words)
➢ Memory words are specified by their address
➢ Registers are specified by their k-bit binary code
➢ A k-bit code can specify one of up to 2^k registers
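The two parts can be pulled out of an instruction word by masking. The sketch below uses the basic computer's 16-bit format described later in this answer (1 mode bit, 3-bit opcode, 12-bit address); the function name is illustrative:

```python
def decode(instr):
    """Split a 16-bit basic-computer instruction word into its fields."""
    I       = (instr >> 15) & 0x1    # addressing-mode bit
    opcode  = (instr >> 12) & 0x7    # 3-bit operation code
    address = instr & 0xFFF          # 12-bit address / operand field
    return I, opcode, address

# ADD (opcode 001), direct (I = 0), address 457 decimal = 0x1C9
assert decode(0x11C9) == (0, 1, 0x1C9)
# The same instruction with indirect addressing: I = 1 sets bit 15
assert decode(0x91C9) == (1, 1, 0x1C9)
```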
Addressing of Operand:

• It is sometimes convenient to use the address bits of an instruction code not as an
address but as the actual operand.
• When the second part of an instruction code specifies an operand, the
instruction is said to have an immediate operand.
• When the second part specifies the address of an operand, the instruction is said
to have a direct address.
• When the second part of the instruction designates the address of a memory word in
which the address of the operand is found, the instruction is said to have an indirect address.
• One bit of the instruction code can be used to distinguish between a direct and
an indirect address.
• The instruction code format is shown in Fig. 5-2(a). It consists of a 3-bit operation
code, a 12-bit address, and an indirect address mode bit designated by I. The
mode bit is 0 for a direct address and 1 for an indirect address.

• A direct address instruction is shown in Fig. 5-2(b).


• It is placed in address 22 in memory. The I bit is 0, so the instruction is recognized as
a direct address instruction. The opcode specifies an ADD instruction, and the
address part is the binary equivalent of 457.
• The control finds the operand in memory at address 457 and adds it to the content
of AC.
• The instruction in address 35 shown in Fig. 5-2(c) has a mode bit I = 1.
• Therefore, it is recognized as an indirect address instruction.
• The address part is the binary equivalent of 300. The control goes to address 300 to
find the address of the operand. The address of the operand in this case is 1350.
• The operand found in address 1350 is then added to the content of AC.
• The effective address is defined as the address of the operand in a computation-type
instruction or the target address in a branch-type instruction.
• Thus the effective address in the instruction of Fig. 5-2(b) is 457 and in the
instruction of Fig 5-2(c) is 1350.
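The two cases of Fig. 5-2 can be sketched directly, using the numbers from the text (457 direct; indirect through 300 to 1350):

```python
def effective_address(I, addr, memory):
    """Direct (I = 0): the address field is the operand address.
    Indirect (I = 1): the memory word at addr holds the operand address."""
    return memory[addr] if I else addr

memory = {300: 1350}    # the word at address 300 holds the operand's address
assert effective_address(0, 457, memory) == 457     # Fig. 5-2(b), direct
assert effective_address(1, 300, memory) == 1350    # Fig. 5-2(c), indirect
```

The indirect case costs one extra memory reference, which is exactly why the control must first go to address 300 before it can fetch the operand.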
Q2. What are computer registers?

Ans. Computer registers are high-speed storage locations that hold data and instructions being
processed by the CPU. They enable quick access to data for arithmetic and logic operations, and
store memory addresses, program counters, and other control information needed for program
execution. Registers also help optimize instruction execution by reducing the need to access slower
memory locations, which improves overall system performance and efficiency.

The need of the registers in computer for:

▪ Instruction sequencing needs a counter to calculate the address of the next
instruction after execution of the current instruction is completed (PC).
▪ Necessary to provide a register in the control unit for storing the instruction code
after it is read from memory (IR).
▪ Needs processor registers for manipulating data (AC and TR) and a register for
holding a memory address (AR).
The above requirements dictate the register configuration shown in Fig. 5-3.
The registers are also listed in Table 5.1 together with a brief description of their function and the
number of bits that they contain.

• The data register (DR) holds the operand read from memory.
• The accumulator (AC) register is a general purpose processing register.
• The instruction read from memory is placed in the instruction register (IR).
• The temporary register (TR) is used for holding temporary data during the
processing.
• The memory address register (AR) has 12 bits since this is the width of a memory
address.
• The program counter (PC) also has 12 bits and it holds the address of the next
instruction to be read from memory after the current instruction is executed.
• Two registers are used for input and output.
• The input register (INPR) receives an 8-bit character from an input device.
• The output register (OUTR) holds an 8-bit character for an output device.

Q3. Explain Common Bus System using diagram.

Ans.

The basic computer has eight registers, a memory unit, and a control unit

• Paths must be provided to transfer information from one register to another and
between memory and registers.
• A more efficient scheme for transferring information in a system with many registers
is to use a common bus.
• The connection of the registers and memory of the basic computer to a common bus
system is shown in Fig. 5-4.
• The outputs of seven registers and memory are connected to the common bus.
• The specific output that is selected for the bus lines at any given time is determined
from the binary value of the selection variables S2, S1, and S0.
• The number along each output shows the decimal equivalent of the required binary
selection.
• For example, the number along the output of DR is 3. The 16-bit outputs of DR are
placed on the bus lines when S2S1S0 = 011.
• The lines from the common bus are connected to the inputs of each register and the
data inputs of the memory.
• The particular register whose LD (load) input is enabled receives the data from the
bus during the next clock pulse transition.
• The memory receives the contents of the bus when its write input is activated.
• The memory places its 16-bit output onto the bus when the read input is activated
and S2S1S0 = 111.
• Two registers, AR and PC, have 12 bits each since they hold a memory address.
• When the contents of AR or PC are applied to the 16-bit common bus, the four most
significant bits are set to 0's.
• When AR or PC receives information from the bus, only the 12 least significant bits
are transferred into the register.
• The input register INPR and the output register OUTR have 8 bits each.
• They communicate with the eight least significant bits in the bus.
• INPR is connected to provide information to the bus but OUTR can only receive
information from the bus.
• This is because INPR receives a character from an input device which is then
transferred to AC.
• OUTR receives a character from AC and delivers it to an output device.
• Five registers have three control inputs: LD (load), INR (increment), and CLR (clear).
• This type of register is equivalent to a binary counter with parallel load and
synchronous clear.
• Two registers have only a LD input.
• The input data and output data of the memory are connected to the common bus,
but the memory address is connected to AR.
• Therefore, AR must always be used to specify a memory address.
• The 16 inputs of AC come from an adder and logic circuit. This circuit has three sets
of inputs.
➢ One set of 16-bit inputs come from the outputs of AC.
➢ Another set of 16-bit inputs come from the data register DR.
➢ The result of an addition is transferred to AC and the end carry-out of the
addition is transferred to flip-flop E (extended AC bit).
➢ A third set of 8-bit inputs come from the input register INPR.
• The content of any register can be applied onto the bus and an operation can be
performed in the adder and logic circuit during the same clock cycle.
• For example, the two microoperations DR<-AC and AC <- DR can be executed at the
same time.
• This can be done by placing the content of AC on the bus (with S2S1S0 = 100),
enabling the LD (load) input of DR, transferring the content of DR through the adder
and logic circuit into AC, and enabling the LD (load) input of AC, all during the same
clock cycle.
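The bus selection can be sketched as a lookup. The selection numbers for DR (3), AC (4), and memory (7 = 111) appear in the text above; the remaining assignments follow the standard Fig. 5-4 design and are stated here as assumptions:

```python
SOURCES = {1: 'AR', 2: 'PC', 3: 'DR', 4: 'AC', 5: 'IR', 6: 'TR', 7: 'MEM'}

def bus(sel, regs):
    """Return the 16-bit value placed on the common bus for selection S2S1S0."""
    name = SOURCES[sel]
    value = regs[name]
    if name in ('AR', 'PC'):        # 12-bit sources: the 4 MSBs read as 0
        value &= 0x0FFF
    return value & 0xFFFF

regs = {'AR': 0xFFFF, 'PC': 0x0042, 'DR': 0x1234, 'AC': 0xBEEF,
        'IR': 0, 'TR': 0, 'MEM': 0}
assert bus(3, regs) == 0x1234       # S2S1S0 = 011 selects DR
assert bus(1, regs) == 0x0FFF       # AR is 12 bits; top 4 bus bits are 0
```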

Q4. What are Computer Instructions?

Ans.

The basic computer has a 16-bit instruction register (IR) which can hold either a memory-reference,
register-reference, or input-output instruction.
1. Memory Reference – These instructions refer to a memory address as one operand;
the other operand is always the accumulator. The format specifies a 12-bit address,
a 3-bit opcode (other than 111), and 1 addressing-mode bit for direct and indirect addressing.

Example – The IR contains 0001XXXXXXXXXXXX, i.e. ADD. After fetching and decoding the
instruction we find that it is a memory-reference instruction for the ADD operation.
Hence, DR ← M[AR]
AC ← AC + DR, SC ← 0

2. Register Reference – These instructions perform operations on registers rather than
memory addresses. IR(14 – 12) is 111 (differentiating it from memory-reference)
and IR(15) is 0 (differentiating it from input/output instructions). The remaining 12 bits
specify the register operation.

Example – The IR contains 0111001000000000, i.e. CMA. After the fetch and decode cycle we
find that it is a register-reference instruction for complementing the accumulator.
Hence, AC ← ~AC

3. Input/Output – These instructions are for communication between the computer and the
outside environment. IR(14 – 12) is 111 (differentiating it from memory-reference)
and IR(15) is 1 (differentiating it from register-reference instructions). The
remaining 12 bits specify the I/O operation.

Example – The IR contains 1111100000000000, i.e. INP. After the fetch and decode cycle we
find that it is an input/output instruction for inputting a character. Hence, a character is
INPUT from the peripheral device.
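The three cases above depend only on two fields of the instruction word, so the classification can be sketched in a few lines (the category strings are illustrative):

```python
def classify(instr):
    """Classify a 16-bit instruction word of the basic computer."""
    I      = (instr >> 15) & 0x1
    opcode = (instr >> 12) & 0x7
    if opcode != 0b111:                  # opcodes 000-110: memory reference
        return 'memory-reference (indirect)' if I else 'memory-reference (direct)'
    return 'input-output' if I else 'register-reference'

assert classify(0x11C9) == 'memory-reference (direct)'   # ADD
assert classify(0x7200) == 'register-reference'          # CMA
assert classify(0xF800) == 'input-output'                # INP
```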
Basic computer instructions are the fundamental operations that a computer can perform.
These instructions are executed by the central processing unit (CPU) and form the basis
for more complex operations. Some examples of basic computer instructions include:

1. Load: This instruction moves data from memory to a CPU register.
2. Store: This instruction moves data from a CPU register to memory.
3. Add: This instruction adds two values and stores the result in a register.
4. Subtract: This instruction subtracts two values and stores the result in a register.
5. Multiply: This instruction multiplies two values and stores the result in a register.
6. Divide: This instruction divides two values and stores the result in a register.
7. Branch: This instruction changes the program counter to a specified address and is
used to implement conditional and unconditional jumps.
8. Jump: This instruction changes the program counter to a specified address.
9. Compare: This instruction compares two values and sets a flag indicating the
result of the comparison.
10. Increment: This instruction adds 1 to a value in a register or memory location.

The set of instructions incorporated in the 16-bit IR register are:


1. Arithmetic, logical and shift instructions (and, add, complement, circulate left, right,
etc)
2. To move information to and from memory (store the accumulator, load the
accumulator)
3. Program control instructions with status conditions (branch, skip)
4. Input output instructions (input character, output character)

Symbol Hexadecimal Code Description

AND 0xxx 8xxx And memory word to AC

ADD 1xxx 9xxx Add memory word to AC

LDA 2xxx Axxx Load memory word to AC

STA 3xxx Bxxx Store AC content in memory

BUN 4xxx Cxxx Branch Unconditionally

BSA 5xxx Dxxx Branch and Save Return Address

ISZ 6xxx Exxx Increment and skip if 0

CLA 7800 Clear AC

CLE 7400 Clear E (extended AC bit)

CMA 7200 Complement AC

CME 7100 Complement E

CIR 7080 Circulate right AC and E

CIL 7040 Circulate left AC and E

INC 7020 Increment AC

SPA 7010 Skip next instruction if AC > 0



SNA 7008 Skip next instruction if AC < 0

SZA 7004 Skip next instruction if AC = 0

SZE 7002 Skip next instruction if E = 0

HLT 7001 Halt computer

INP F800 Input character to AC

OUT F400 Output character from AC

SKI F200 Skip on input flag

SKO F100 Skip on output flag

ION F080 Interrupt On

IOF F040 Interrupt Off

Q5. What are different types of Instruction Formats?

Ans. The instruction formats are a sequence of bits (0 and 1). These bits, when grouped, are known
as fields. Each field of the machine provides specific information to the CPU related to the operation
and location of the data.

The instruction format also defines the layout of the bits for an instruction. It can be of variable
lengths with multiple numbers of addresses. These address fields in the instruction format vary as
per the organization of the registers in the CPU. The formats supported by the CPU depend upon the
Instructions Set Architecture implemented by the processor.

Depending on the multiple address fields, the instruction is categorized as follows:

1. Three address instruction


2. Two address instruction
3. One address instruction
4. Zero address instruction
Zero Address Instruction

The location of the operands is implied because this instruction format lacks an operand field.
These instructions are supported by stack-organized computer systems. The arithmetic
expression must be translated into reverse Polish notation in order to evaluate it.

Example of Zero address instruction: Consider the actions below, which demonstrate how the
expression X = (A + B) (C + D) will be formatted for a stack-organized computer.

TOS: Top of the Stack

PUSH A TOS ← A

PUSH B TOS ← B

ADD TOS ← (A + B)

PUSH C TOS ← C

PUSH D TOS ← D

ADD TOS ← (C + D)

MUL TOS ← (C + D) ∗ (A + B)

POP X M [X] ← TOS
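The zero-address program above can be traced on a tiny stack machine; the values of A, B, C, D below are arbitrary sample data:

```python
def run_stack(program, memory):
    """Execute zero-address code on a simple stack machine."""
    stack = []
    for op, *arg in program:
        if op == 'PUSH':
            stack.append(memory[arg[0]])     # TOS <- M[arg]
        elif op == 'ADD':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)              # TOS <- sum of top two items
        elif op == 'MUL':
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)              # TOS <- product of top two items
        elif op == 'POP':
            memory[arg[0]] = stack.pop()     # M[arg] <- TOS
    return memory

mem = {'A': 2, 'B': 3, 'C': 4, 'D': 5}
prog = [('PUSH', 'A'), ('PUSH', 'B'), ('ADD',),
        ('PUSH', 'C'), ('PUSH', 'D'), ('ADD',),
        ('MUL',), ('POP', 'X')]
assert run_stack(prog, mem)['X'] == (2 + 3) * (4 + 5)   # 45
```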

One Address Instruction

This instruction format performs data manipulation using an implied accumulator (AC), a
register the CPU uses for arithmetic and logical operations. Because the accumulator is
implied, it needs no explicit reference in the instruction: each instruction pairs the
accumulator with a single memory operand, and the result of every operation is left in the
accumulator.

Example of One address instruction: The program to evaluate X = (A + B) ∗ (C + D) is as follows:

LOAD A AC ← M [A]

ADD B AC ← AC + M [B]

STORE T M [T] ← AC

LOAD C AC ← M [C]

ADD D AC ← AC + M [D]

MUL T AC ← AC ∗ M [T]

STORE X M [X] ← AC

All operations involve the accumulator (AC) register and one memory operand.
M[] denotes the contents of a memory address.
M[T] is a temporary memory location where the intermediate result is kept.
There is only one operand space in this instruction format. To transfer data, this address field
employs two unique instructions, namely:

• LOAD: This is used to transfer the data to the accumulator.


• STORE: This is used to move the data from the accumulator to the memory.
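The one-address program above can be traced on a minimal accumulator machine; the sample data values are arbitrary:

```python
def run_acc(program, memory):
    """Execute one-address code: every instruction pairs AC with one memory operand."""
    AC = 0
    for op, addr in program:
        if op == 'LOAD':
            AC = memory[addr]        # AC <- M[addr]
        elif op == 'ADD':
            AC += memory[addr]       # AC <- AC + M[addr]
        elif op == 'MUL':
            AC *= memory[addr]       # AC <- AC * M[addr]
        elif op == 'STORE':
            memory[addr] = AC        # M[addr] <- AC
    return memory

mem = {'A': 2, 'B': 3, 'C': 4, 'D': 5}
prog = [('LOAD', 'A'), ('ADD', 'B'), ('STORE', 'T'),
        ('LOAD', 'C'), ('ADD', 'D'), ('MUL', 'T'), ('STORE', 'X')]
assert run_acc(prog, mem)['X'] == 45
```

Note how the temporary location T is needed precisely because only one result (the accumulator) can be held between instructions.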
Two Address Instructions

The majority of commercial computers use this instruction format. There are two address
fields in this format, and each address field can specify either a register or a memory address.

Example of Two address instruction: The program to evaluate X = (A + B) ∗ (C + D) is as follows:

MOV R1, A R1 ← M [A]

ADD R1, B R1 ← R1 + M [B]

MOV R2, C R2 ← M [C]

ADD R2, D R2 ← R2 + M [D]

MUL R1, R2 R1 ← R1∗R2

MOV X, R1 M [X] ← R1

The MOV instruction transfers operands between memory and the processor registers R1
and R2.

Three Address Instruction

A three-address instruction format contains three operand fields. These three fields can
either be registers or memory locations.

Example of Three address instruction: The assembly language program to evaluate
X = (A + B) ∗ (C + D) is given below. Each instruction is shown with the register transfer
operation it performs.

ADD R1, A, B R1 ← M [A] + M [B]

ADD R2, C, D R2 ← M [C] + M [D]

MUL X, R1, R2 M [X] ← R1 ∗ R2

R1 and R2 are the two CPU registers.


The operand at the memory location represented by A is indicated by the symbol M [A]. The data or
location that the CPU will use is contained in operands 1 and 2. The address of the output is in
operand 3.

Q6. Explain the Instruction Cycle using flow chart.

Ans. A program residing in the memory unit of a computer consists of a sequence of instructions.
These instructions are executed by the processor by going through a cycle for each instruction.

In a basic computer, each instruction cycle consists of the following phases:


1. Fetch instruction from memory.
2. Decode the instruction.
3. Read the effective address from memory.
4. Execute the instruction.

Initiating Cycle

During this phase, the computer system boots up and the operating system is loaded into main
memory. It begins when the computer system starts.

Fetching of Instruction

The first phase is instruction retrieval. Each instruction executed in a central processing unit uses the
fetch instruction. During this phase, the central processing unit sends the PC to MAR and then the
READ instruction to a control bus. After sending a read instruction on the data bus, the memory
returns the instruction that was stored at that exact address in the memory. The CPU then copies
data from the data bus into MBR, which it then copies to registers. The pointer is incremented to the
next memory location, allowing the next instruction to be fetched from memory.

Decoding of Instruction

The second phase is instruction decoding. During this step, the CPU determines what
operation the fetched instruction specifies and what action should be taken. The
instruction's opcode is retrieved, and the CPU decodes the related operation that must
be performed for the instruction.

Read of an Effective Address

The third phase is the reading of an effective address. The operation's decision is made during this
phase. Any memory type operation or non-memory type operation can be used. Direct memory
instruction and indirect memory instruction are the two types of memory instruction available.

Execution of Instruction

The last step is to carry out the instructions. The instruction is finally carried out at this stage. The
instruction is carried out, and the result is saved in the register. The CPU gets prepared for the
execution of the next instruction after the completion of each instruction. The execution time of
each instruction is calculated, and this information is used to determine the processor's processing
speed.

• This cycle repeats indefinitely unless a HALT instruction is encountered.


• The basic computer has three instruction code formats.
• Each format has 16 bits.
• The operation code (opcode) part of the instruction contains three bits and the
meaning of the remaining 13 bits depends on the operation code encountered.
• A memory reference instruction uses 12 bits to specify an address and one bit to specify
the addressing mode I.
• I is equal to 0 for direct address and to 1 for indirect address
• Initially, PC is loaded with the address of the first instruction in the program.
• SC is cleared to 0, providing a decoded timing signal T0.
• SC is then incremented so that the timing signals proceed through T0 to T15.
The microoperations for the fetch and decode phases are:
➢ T0 :AR←PC
➢ T1 :IR←M[AR] , PC<- PC+1
➢ T2 :D0 ,D1 ..D7←Decode IR(12-14), AR←IR(0- 11), I←IR(15)
• At T0 Transfers the address from PC to AR
• At T1 Instruction read from memory is placed in IR and PC is incremented by 1 to get
the address of next instruction
• At T2 opcode in IR is decoded , Indirect bit is transferred to flipflop I & address part is
transferred to AR
• After decoding next step is to determine the type of instruction After decoding timing
signal active is T3 during which instruction type is identified

Memory Reference
If D7 = 0, the opcode is 000 through 110.
If D7 = 0 and I = 1, the address is indirect; if D7 = 0 and I = 0, it is direct.
The microoperation for the indirect case is initially AR ← M[AR].
Register Reference / I/O
If D7 = 1 and I = 0 – register-reference instruction
If D7 = 1 and I = 1 – input-output instruction

• Decoder output D7 is equal to 1 if the operation code is equal to binary 111.


• We determine that if D7 = 1, the instruction must be a register-reference or input-output
type.
• If D7 = 0, the operation code must be one of the other seven values 000 through 110,
specifying memory reference instruction.
• Control then inspects the value of the first bit of the instruction, which is now available in
flip-flop I

If D7 = 0 and I = 1, we have a memory-reference instruction with an indirect address.

It is then necessary to read the effective address from memory.

The microoperation for the indirect address condition can be symbolized by the register transfer
statement

AR ← M [AR]

The three instruction types subdivide into four paths:

➢ D7’IT3: AR ← M[AR] (memory reference, indirect)
➢ D7’I’T3: Nothing (memory reference, direct)
➢ D7I’T3: Execute register-reference instruction
➢ D7IT3: Execute input-output instruction
• When a memory-reference instruction with I = 0 is encountered, it is not necessary to
do anything since the effective address is already in AR.
• However, the sequence counter SC must be incremented when D’7 T3 = 1, so that the
execution of the memory-reference instruction can be continued with timing variable
T4.
• A register-reference or input-output instruction can be executed with the clock
associated with timing signal T3.
• After the instruction is executed, SC is cleared to 0 and control returns to the fetch
phase with T0 = 1.
• The timing signal that is active after the decoding is T3.
• During time T3, the control unit determines the type of instruction that was just read
from memory.
• The following flowchart presents an initial configuration for the instruction cycle and
shows how the control determines the instruction type after the decoding.
• Note that the sequence counter SC is either incremented or cleared to 0 with every
positive clock transition.
• We will adopt the convention that if SC is incremented, we will not write the statement
SC ← SC + 1, but it will be implied that the control goes to the next timing signal in
sequence.
• When SC is to be cleared, we will include the statement SC ← 0
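The fetch and decode microoperations T0-T2 listed above can be traced in software, reusing the example instruction from Q1 (ADD, direct, address 457 stored at location 22):

```python
def fetch_decode(memory, PC):
    """Microoperations T0-T2 of the instruction cycle."""
    AR = PC                        # T0: AR <- PC
    IR = memory[AR]; PC += 1       # T1: IR <- M[AR], PC <- PC + 1
    D  = (IR >> 12) & 0x7          # T2: decode opcode bits IR(12-14)
    AR = IR & 0xFFF                #     AR <- IR(0-11)
    I  = (IR >> 15) & 1            #     I  <- IR(15)
    return D, I, AR, PC

memory = {22: 0x11C9}              # ADD, direct, address 457 (Fig. 5-2b)
D, I, AR, PC = fetch_decode(memory, 22)
assert (D, I, AR, PC) == (1, 0, 457, 23)
```

After this point the control examines D7 and I to choose one of the four execution paths.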

Q7. What are Addressing Modes? Explain Different types of ADDRESSING Modes.

Ans. The term addressing modes refers to how the operand of an instruction is specified. The
addressing mode specifies a rule for interpreting or modifying the address field of the instruction
before the operand is executed.

Types of Addressing Modes-


In computer architecture, there are the following types of addressing modes-
Implied Addressing Mode-
In this addressing mode, the definition of the instruction itself specifies the operands
implicitly. It is also called implicit addressing mode.
Examples-
• The instruction “Complement Accumulator” is an implied mode instruction (CMA).
• In a stack organized computer, Zero Address Instructions are implied mode instructions.
(since operands are always implied to be present on the top of the stack)

Stack Addressing Mode-


In this addressing mode, the operand is contained at the top of the stack. Example-
• ADD: this instruction pops the two operands contained at the top of the stack.
• The addition of those two operands is performed.
• The result so obtained is pushed back onto the top of the stack.

Immediate Addressing Mode-


In this addressing mode, the operand is specified in the instruction explicitly. Instead of address
field, an operand field is present that contains the operand.

Examples-
• ADD 10 will increment the value stored in the accumulator by 10.
• MOV R #20 initializes register R to a constant value 20.

Direct Addressing Mode-

In this addressing mode, the address field of the instruction contains the effective address of the
operand. Only one reference to memory is required to fetch the operand. It is also called as absolute
addressing mode.

Example-
• ADD X will increment the value stored in the accumulator by the value stored at
memory location X. AC ← AC + [X]

Indirect Addressing Mode-

In this addressing mode, the address field of the instruction specifies the address of memory
location that contains the effective address of the operand. Two references to memory are required
to fetch the operand.

Example-
• ADD X will increment the value stored in the accumulator by the value stored at
memory location specified by X. AC ← AC + [[X]]

Register Direct Addressing Mode-


In this addressing mode, the operand is contained in a register set. The address field of the
instruction refers to a CPU register that contains the operand. No reference to memory is required
to fetch the operand.

Example-
ADD R will increment the value stored in the accumulator by the content of register R.
AC ← AC + [R]
• This addressing mode is similar to direct addressing mode.
• The only difference is that the address field of the instruction refers to a CPU register
instead of main memory.

Register Indirect Addressing Mode-

In this addressing mode, the address field of the instruction refers to a CPU register that contains the
effective address of the operand. Only one reference to memory is required to fetch the operand.

Example-
ADD R will increment the value stored in the accumulator by the content of memory location
specified in register R.
AC ← AC + [[R]]
• This addressing mode is similar to indirect addressing mode.
• The only difference is that the address field of the instruction refers to a CPU register.

Relative Addressing Mode-

In this addressing mode,

• Effective address of the operand is obtained by adding the content of program counter with
the address part of the instruction.

Effective Address
= Content of Program Counter + Address part of the instruction

NOTE:

• Program counter (PC) always contains the address of the next instruction to be executed.
• After fetching the address of the instruction, the value of program counter immediately
increases.
• The value increases irrespective of whether the fetched instruction has completely executed
or not.

Indexed Addressing Mode-

In this addressing mode,


• Effective address of the operand is obtained by adding the content of index register with the
address part of the instruction.

Effective Address = Content of Index Register + Address part of the instruction


Base Register Addressing Mode-

In this addressing mode,

• Effective address of the operand is obtained by adding the content of base register with the
address part of the instruction

Auto-Increment Addressing Mode-

• This addressing mode is a special case of Register Indirect Addressing Mode where-

Effective Address of the Operand

= Content of Register

In this addressing mode,

• After accessing the operand, the content of the register is automatically incremented by
step size ‘d’.

• Step size ‘d’ depends on the size of operand accessed.

• Only one reference to memory is required to fetch the operand.

Example-
Assume operand size = 2 bytes.

Here,
• After fetching the operand 6B, the register RAUTO will be automatically
incremented by 2.
• Then, the updated value of RAUTO will be 3300 + 2 = 3302.
• At memory address 3302, the next operand will be found.

NOTE:
In auto-increment addressing mode,
• First, the operand value is fetched.
• Then, the register RAUTO value is incremented by step size ‘d’.

Auto-Decrement Addressing Mode-

• This addressing mode is again a special case of Register Indirect Addressing Mode where-

Effective Address of the Operand


= Content of Register – Step Size

In this addressing mode,


• First, the content of the register is decremented by step size ‘d’.
• Step size ‘d’ depends on the size of operand accessed.
• After decrementing, the operand is read.
• Only one reference to memory is required to fetch the operand.

Example-
Assume operand size = 2 bytes.
Here,
• First, the register RAUTO will be decremented by 2.
• Then, the updated value of RAUTO will be 3302 – 2 = 3300.
• At memory address 3300, the operand will be found.

NOTE-

In auto-decrement addressing mode,


• First, the register RAUTO value is decremented by step size ‘d’.
• Then, the operand value is fetched
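The main modes described above differ only in how the operand is located, which makes them easy to compare side by side. The sketch below is a simplified model (mode names, sample memory contents, and register names are illustrative):

```python
def operand(mode, field, memory, regs, PC):
    """Fetch the operand for the common addressing modes."""
    if mode == 'immediate':
        return field                          # the field is the operand itself
    if mode == 'direct':
        return memory[field]                  # one memory reference
    if mode == 'indirect':
        return memory[memory[field]]          # two memory references
    if mode == 'register':
        return regs[field]                    # no memory reference
    if mode == 'register_indirect':
        return memory[regs[field]]            # register holds the effective address
    if mode == 'relative':
        return memory[PC + field]             # EA = PC + address part
    raise ValueError(mode)

memory = {100: 7, 200: 100, 110: 9}
regs = {'R1': 100}
assert operand('immediate', 100, memory, regs, PC=0) == 100
assert operand('direct', 100, memory, regs, PC=0) == 7
assert operand('indirect', 200, memory, regs, PC=0) == 7
assert operand('register_indirect', 'R1', memory, regs, PC=0) == 7
assert operand('relative', 10, memory, regs, PC=100) == 9
```

Indexed and base-register modes follow the same pattern as relative mode, with the index or base register standing in for the program counter.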

Topic: Input Output Organization and Memory Organization

Q1. Explain the different modes of transfer.

Ans. We store the binary information received through an external device in the memory unit. The
information transferred from the CPU to external devices originates from the memory unit. Although
the CPU processes the data, the target and source are always the memory unit. We can transfer this
information using three different modes of transfer.
1. Programmed I/O
2. Interrupt- initiated I/O
3. Direct memory access(DMA)

1. Programmed I/O
Programmed I/O uses the I/O instructions written in the computer program. The instructions in the
program initiate every data item transfer. Usually, the data transfer is between a memory word and
a CPU register. This mode requires constant monitoring of the peripheral device by the CPU.
Advantages:
• Programmed I/O is simple to implement.
• It requires very little hardware support.
• CPU checks status bits periodically.

Disadvantages:
• The processor has to wait for a long time for the I/O module to be ready for either
transmission or reception of data.
• The performance of the entire system is severely degraded.
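The busy-wait that causes this degradation can be sketched as a polling loop; the device interface below (a `ready` flag plus a data field) is a hypothetical stand-in for a real status register:

```python
def programmed_io_read(device):
    """Programmed I/O: the CPU busy-waits on the device's status flag,
    then copies one data item. The CPU does no other work meanwhile."""
    while not device['ready']:      # the CPU is tied up polling the flag
        pass
    data = device['data']
    device['ready'] = False         # acknowledge: clear the flag
    return data

dev = {'ready': True, 'data': 0x41}
assert programmed_io_read(dev) == 0x41
assert dev['ready'] is False        # flag cleared after the transfer
```

Interrupt-initiated I/O (next section) removes exactly this polling loop: the device signals the CPU instead of being polled.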
2. Interrupt-initiated I/O
In the above section, we saw that the CPU is kept busy unnecessarily. We can avoid this situation by
using an interrupt-driven method for data transfer. The interrupt facilities and special commands
inform the interface for issuing an interrupt request signal as soon as the data is available from any
device. In the meantime, the CPU can execute other programs, and the interface will keep
monitoring the i/O device. Whenever it determines that the device is ready for transferring data
interface initiates an interrupt request signal to the CPU. As soon as the CPU detects an external
interrupt signal, it stops the program it was already executing, branches to the service program to
process the I/O transfer, and returns to the program it was initially running.
Working of CPU in terms of interrupts:
• The CPU issues a read command.
• It then continues executing other programs.
• It checks for interrupts at the end of each instruction cycle.
• On an interrupt:-
o It processes the interrupt by fetching the data and storing it.
• It then resumes the program it was executing.
Advantages:
• It is faster and more efficient than Programmed I/O.
• It requires very little hardware support.
• CPU does not check status bits periodically.
Disadvantages:
• It can be tricky to implement if using a low-level language.
• It can be tough to get various pieces of work well together.
• The hardware manufacturer / OS maker usually implements it, e.g., Microsoft.

3. Direct Memory Access (DMA)


The data transfer between any fast storage media like a memory unit and a magnetic disk gets
limited with the speed of the CPU. Thus it will be best to allow the peripherals to directly
communicate with the storage using the memory buses by removing the intervention of the CPU.
This mode of transfer of data technique is known as Direct Memory Access (DMA). During Direct
Memory Access, the CPU is idle and has no control over the memory buses. The DMA controller
takes over the buses and directly manages data transfer between the memory unit and I/O devices.
CPU Bus Signal for DMA transfer

Bus Request - We use bus requests in the DMA controller to ask the CPU to relinquish the control
buses.
Bus Grant - CPU activates bus grant to inform the DMA controller that DMA can take control of the
control buses. Once the control is taken, it can transfer data in many ways.
Types of DMA transfer using DMA controller:
• Burst Transfer: In this transfer, DMA will return the bus control after the complete data
transfer. A register is used as a byte count, which decrements for every byte transfer, and
once it becomes zero, the DMA Controller will release the control bus. When the DMA
Controller operates in burst mode, the CPU is halted for the duration of the data transfer.
• Cycle Stealing: An alternative method in which the DMA controller transfers one word at a time, after which it returns control of the buses to the CPU. The CPU operation is delayed for only one memory cycle, allowing the data transfer to "steal" one memory cycle at a time.
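The difference between the two modes can be illustrated with a small simulation. This is a hypothetical sketch, not real DMA hardware: the function names and the cycle accounting are my own, and the byte-count register is modelled as a plain counter that decrements on every transfer.

```python
def burst_transfer(byte_count):
    """Burst mode: the DMA controller keeps the buses until the byte-count
    register reaches zero; the CPU is halted for the whole transfer."""
    cpu_halted_cycles = 0
    while byte_count > 0:
        byte_count -= 1            # one byte moved per bus cycle
        cpu_halted_cycles += 1
    return cpu_halted_cycles

def cycle_stealing(byte_count):
    """Cycle stealing: one word per stolen memory cycle, with the bus
    returned to the CPU between transfers."""
    timeline = []
    while byte_count > 0:
        timeline.append("DMA")     # stolen cycle: transfer one word
        byte_count -= 1
        timeline.append("CPU")     # bus handed back to the CPU
    return timeline

print(burst_transfer(4))           # CPU halted for 4 cycles
print(cycle_stealing(2))           # ['DMA', 'CPU', 'DMA', 'CPU']
```

The timeline makes the trade-off visible: burst mode finishes the transfer sooner, while cycle stealing keeps the CPU running between words.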
Advantages
• It is faster in data transfer without the involvement of the CPU.
• It improves overall system performance and reduces CPU workload.
• It deals with large data transfers, such as multimedia and files.
Disadvantages
• It requires costly and complex hardware.
• It has limited control over the data transfer process.
• Risk of data conflicts between CPU and DMA

Q2. What is Associative Memory? Draw and explain its block diagram. How read and write
operations are performed in Associative Memory?
Ans. Associative memory is a memory unit whose stored information is identified and accessed by the content of the information itself rather than by an address or memory location. Associative memory is also known as Content Addressable Memory (CAM).
• Data is accessed by data content rather than data address.
• Data is stored at the very first empty location found in memory.
• When data is written, no address is given.
• When data is read, only the key, i.e., the data or part of the data, is provided.
• Memory locates all words that match the specified content and marks them for reading.
• It does parallel searches by data association.
Hardware Organisation
• Argument Register (A): It contains word to be searched. It has n bits.
• Key Register (K): It specifies part of the argument word that needs to be compared with
words in memory. If all the bits in key register are 1, the entire word should be compared,
otherwise only the bits having 1’s in their corresponding position are compared.
• Associative Memory Array: It contains the word that are to be compared with argument
word in parallel. Contains m words of n bits each.
• Match Register (M): It has m bits, one bit corresponding to each word in the memory array. After the matching process, the bits corresponding to matching words in the match register are set to 1.
Searching is performed in a parallel manner and reading in a sequential manner.
• Each cell consists of a flip-flop storage element Fij and circuits for reading, writing, and matching the cell.

• The input bit is transferred into the storage cell during a write operation.

• The stored bit is read out during a read operation.

• The match logic compares the contents of the storage cell with the corresponding unmasked bit of the argument and sets the corresponding bit Mi in the match register.
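The masked search can be sketched in a few lines. This is an illustrative model only (integers stand in for the registers, and the names are my own): the key register acts as a bit mask, and the returned list plays the role of the match register M.

```python
def cam_search(memory, argument, key):
    """Compare every memory word with the argument register "in parallel"
    (modelled here as a list comprehension). Only bit positions set to 1
    in the key register take part in the comparison; the result is the
    match register: one bit per word, set to 1 where the unmasked bits agree."""
    return [1 if (word & key) == (argument & key) else 0 for word in memory]

memory = [0b1010, 0b1101, 0b0010]
# Key selects only the two low-order bits; the argument's low bits are 10.
print(cam_search(memory, argument=0b0110, key=0b0011))   # [1, 0, 1]
# All-ones key: the entire word must match.
print(cam_search(memory, argument=0b1010, key=0b1111))   # [1, 0, 0]
```

Note how the same memory produces different match registers depending on the key: masking lets a partial field of the word act as the search key.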
Q3. Explain Main Memory. Differentiate between RAM and ROM.

Ans. The main memory acts as the central storage unit in a computer system. It is a relatively
large and fast memory which is used to store programs and data during the run time
operations.

The primary technology used for the main memory is based on semiconductor integrated
circuits. The integrated circuits for the main memory are classified into two major units.

1. RAM (Random Access Memory) integrated circuit chips

2. ROM (Read Only Memory) integrated circuit chips

Random Access Memory

Random Access Memory (RAM) is used to store the programs and data being used by the CPU
in real time. The data on random access memory can be read, written, and erased any
number of times. RAM is a hardware element where the data currently in use is stored. It is a
volatile memory, also called Main Memory or Primary Memory; this is the user's memory.
Programs and data files are stored on the hard disk; when the software or those files are
opened, they are loaded into RAM. It is the space where temporary data is automatically held
until the user saves it to a secondary storage device.

Types of RAM

1. Static RAM: Static RAM or SRAM stores a bit of data using the state of a six-transistor
memory cell.

2. Dynamic RAM: Dynamic RAM or DRAM stores a bit of data using a transistor-capacitor
pair, which constitutes a DRAM memory cell.

Read Only Memory

Read Only Memory (ROM) is a type of memory where the data has been pre-recorded. Data
stored in ROM is retained even after the computer is turned off, i.e., it is non-volatile. It is
generally used in embedded systems, where the programming requires almost no changes. It
is a permanent, non-erasable memory that becomes active when power is supplied to the
computer. ROM is a memory chip fixed on the motherboard at the time of manufacturing. It
stores a program called the BIOS (Basic Input/Output System). This program checks
the status of all the devices attached to the computer.
Types of ROM

1. Programmable ROM: It is a type of ROM where the data is written after the memory chip
has been created. It is non-volatile.

2. Erasable Programmable ROM: It is a type of ROM where the data on this non-volatile
memory chip can be erased by exposing it to high-intensity UV light.

3. Electrically Erasable Programmable ROM: It is a type of ROM where the data on this non-
volatile memory chip can be electrically erased using field electron emission.

4. Mask ROM: It is a type of ROM in which the data is written during the manufacturing of the
memory chip.

Difference between RAM and ROM

• Data Retention: RAM is a volatile memory that stores data only as long as power is supplied; ROM is a non-volatile memory that retains data even when the power is turned off.

• Read/Write: RAM supports both read and write operations; ROM supports only read operations.

• Use: RAM temporarily stores the data currently being processed by the CPU; ROM typically stores firmware or microcode used to initialize and control the hardware components of the computer.

• Speed: RAM is a high-speed memory; ROM is much slower than RAM.

• CPU Interaction: The CPU can easily access data stored in RAM; it cannot access data stored in ROM as easily.

• Size and Capacity: RAM is larger in size with higher capacity; ROM is smaller in size with less capacity.

• Used as/in: RAM is used as CPU cache and primary memory; ROM is used for firmware and in microcontrollers.

• Accessibility: Data stored in RAM is easily accessible; data stored in ROM is not as easily accessible.

• Cost: RAM is costlier than ROM.

• Chip Size: A RAM chip can store a few gigabytes (GB) of data; a ROM chip can store multiple megabytes (MB) of data.

• Function: RAM is used for temporary storage of the data currently being processed by the CPU; ROM is used to store firmware, BIOS, and other data that needs to be retained.

Advantages of RAM

• Speed: RAM is much faster than other types of memory, such as hard disk drives, making it
ideal for storing and accessing data that needs to be accessed quickly.

• Volatility: RAM is volatile memory, which means that it loses its contents when power is
turned off. This property allows RAM to be easily reprogrammed and reused.

• Flexibility: RAM can be easily upgraded and expanded, allowing for more memory to be
added as needed.

Disadvantages of RAM

• Limited capacity: RAM has a limited capacity, which can limit the amount of data that can be
stored and accessed at any given time.

• Volatility: The volatile nature of RAM means that data must be saved to a more permanent
form of storage, such as a hard drive or SSD, to prevent data loss.

• Cost: RAM can be relatively expensive, particularly for high-capacity modules, which can
make it difficult to scale memory as needed.

Advantages of ROM

• Non-volatile: ROM is non-volatile memory, which means that it retains its contents even
when power is turned off. This property makes ROM ideal for storing permanent data, such
as firmware and system software.

• Stability: ROM is stable and reliable, which makes it a good choice for critical systems and
applications.

• Security: ROM cannot be easily modified, which makes it less susceptible to malicious
attacks, such as viruses and malware.

Disadvantages of ROM

• Limited flexibility: ROM cannot be easily reprogrammed or updated, which makes it difficult
to modify or customize the contents of ROM.

• Limited capacity: ROM has a limited capacity, which can limit the amount of data that can
be stored and accessed at any given time.
• Cost: ROM can be relatively expensive to produce, particularly for custom or specialized
applications, which can make it less cost-effective than other types of memory.

Q4. Write difference between SRAM, DRAM, PROM, EPROM, EEPROM.

Ans. Both SRAM and DRAM are the types of random access memory. Although, they are quite
different from each other. The important differences between SRAM and DRAM are highlighted in
the following table −

Parameter SRAM DRAM

Full Form SRAM stands for Static Random DRAM stands for Dynamic
Access Memory. Random Access Memory.

Component SRAM stores information with the DRAM stores data using
help of transistors. capacitors.

Need to Refresh In SRAM, capacitors are not used In DRAM, contents of a capacitor
which means refresh is not needed. need to be refreshed periodically.

Speed SRAM provides faster speed of data DRAM provides slower speed of
read/write. data read/write.

Power Consumption SRAM consumes more power. DRAM consumes less power.

Data Life SRAM has a long data life. DRAM has a short data life.

Cost SRAM is expensive. DRAM is less expensive.

Density SRAM is a low-density device. DRAM is a high-density device.

Usage SRAMs are used as cache memory in DRAMs are used as main memory
computer and other computing in computer systems.
devices.

PROM: Programmed only once by the user. Once it is programmed, it cannot be erased.

EPROM: Programmed multiple times by the user. Once it is programmed, it can be erased by exposing it to UV light. The whole chip must be erased at once to program it again.

EEPROM: Programmed multiple times by the user. Once it is programmed, it can be erased electrically. Any location can be selectively erased and reprogrammed; hence, programming is flexible but slow.
Q5. Describe Cache Memory. Also explain the different cache mapping techniques.

Ans. Cache Memory is a special very high-speed memory. The cache is a smaller and faster
memory that stores copies of the data from frequently used main memory locations. There are
various different independent caches in a CPU, which store instructions and data. The most
important use of cache memory is that it is used to reduce the average time to access data from
the main memory.

Characteristics of Cache Memory

• Cache memory is an extremely fast memory type that acts as a buffer between RAM and the
CPU.

• Cache Memory holds frequently requested data and instructions so that they are
immediately available to the CPU when needed.

• Cache memory is costlier than main memory or disk memory but more economical than CPU
registers.

• Cache Memory is used to speed up and synchronize with a high-speed CPU.

Levels of Cache
1) Level 1 or L1 Cache:

Fastest Cache and it usually comes within the processor

Typically it ranges from 8 KB to 64 KB and uses high-speed SRAM instead of the slower
and cheaper DRAM used for main memory.

It is referred to as internal cache or primary cache.

2) Level 2 or L2 Cache::

Larger but slower in speed than L1 Cache.

Stores recently accessed information.

Also known as secondary cache, designed to reduce the time needed to access data in cases
where data has already been accessed previously.

L2 comes between L1 and RAM and is bigger than primary cache.

Typically ranges from 64KB to 4MB.

3) Level 3 or L3 Cache:

It is an enhanced form of memory present on the motherboard of the computer.

L3 cache is a memory cache that is built into the motherboard.

It is used to feed the L2 cache and is typically faster than the system’s main memory but
slower than L2.
It typically has more than 3 MB of storage.

Not all processors have L3 because it is used to enhance the performance of L1 and L2.

Working of Cache
The CPU initially looks in the cache for data it needs.

If the data is there it will retrieve it and process it.

If the data is not there, the CPU accesses the system's main memory and then puts a copy of
the new data in the cache before processing it.

Next time if the CPU needs to access the same data again, it will just retrieve the data from the
cache instead of going through the whole loading process again.

Cache Performance
1) Cache Hit:

If the required word is found in the cache.

Hit Ratio = Hits/(Hits + Misses)

Or

Hit Ratio = No. of Hits/Total no. of CPU references

2) Cache Miss:

If the required word is not found in the cache.

Miss Ratio = Misses/(Hits + Misses)

Or

Miss Ratio = No. of Misses/Total no. of CPU references

3) Cache Access Time:

Time required to access a word from the cache.

4) Miss Penalty (Cache Access Time Penalty):

Time required to fetch the required word from main memory.
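These quantities can be combined in a small calculator. A minimal sketch: the hit and miss ratios follow the formulas above, while the average-access expression (cache time plus miss-ratio-weighted penalty) is one common convention assumed here, not stated in the text.

```python
def cache_stats(hits, misses, cache_time, miss_penalty):
    """Hit/miss ratios per the formulas above, plus an average access time:
    every reference pays the cache access time, and misses additionally
    pay the penalty of fetching the word from main memory."""
    refs = hits + misses                     # total CPU references
    hit_ratio = hits / refs
    miss_ratio = misses / refs
    avg_time = cache_time + miss_ratio * miss_penalty
    return hit_ratio, miss_ratio, avg_time

# 90 hits out of 100 references; cache access 10 ns, miss penalty 100 ns.
h, m, t = cache_stats(hits=90, misses=10, cache_time=10, miss_penalty=100)
print(h, m, t)   # hit ratio 0.9, miss ratio 0.1, average about 20 ns
```

Even a 10% miss rate doubles the average access time in this example, which is why cache hit ratios matter so much in practice.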

Locality of Reference

Locality of reference is the tendency of a computer program to access the same set of memory
locations repeatedly over a short period of time. It's also known as the principle of locality.
2 types of Locality

1) Temporal Locality

2) Spatial Locality

1) Temporal Locality

If a program accesses one memory address, there is a good chance that it will access the
same address again.

It refers to reuse of specific data or resources within a short period of time.

Example: LOOP

2) Spatial Locality

If a program accesses one memory address there is a good chance it will access other nearby
addresses.

It refers to using data elements within a relatively closed storage location.

Example: Nearly every program exhibits spatial locality because instructions are usually
executed in sequence.
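Both kinds of locality appear in even the smallest program. In the illustrative loop below, the accumulator variable is reused on every iteration (temporal locality), while the list elements are visited at adjacent addresses in order (spatial locality).

```python
data = list(range(8))   # elements stored at adjacent addresses
total = 0
for x in data:          # sequential traversal -> spatial locality
    total += x          # `total` reused every iteration -> temporal locality
print(total)            # 28
```

A cache exploits exactly these patterns: `total` stays hot in the cache, and fetching one element of `data` brings its neighbours in with the same block.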

Cache Mapping

It is a technique by which content of main memory is brought into the cache memory.

It is a transformation of data from main memory to cache memory.

3 Types:

1. Associative Mapping.

2. Direct Mapping

3. Set associative mapping.

1. Direct Mapping
The easiest technique used for mapping is known as direct mapping. The direct
mapping maps every block of the main memory into only a single possible cache line.
In simpler words, in the case of direct mapping, we assign every memory block to a
certain line in the cache.
In the case of direct mapping, a certain block of main memory would be able to map
to only a particular line of cache. Here, the line number of the cache to which any
given block can map is basically given by this:
Cache line number = (The Block Address of the Main Memory ) Modulo (Total
number of lines present in the cache)
Physical Address Division
The physical address, in the case of direct mapping, is divided as follows:

Implementation:
Associative memories are expensive compared to random-access memories because
of the added logic associated with each cell. The CPU address of 15 bits is divided
into two fields. The nine least significant bits constitute the index field and the
remaining six bits form the tag field. The number of bits in the index field is equal to
the number of address bits required to access the cache memory.
In the general case, there are 2^k words in cache memory and 2^n words in main
memory. The n-bit memory address is divided into two fields: k bits for the index
field and n-k bits for the tag field.
The direct mapping cache organization uses the n-bit address to access the main
memory and the k-bit index to access the cache. Each word in the cache consists of
the data word and its associated tag. When a new word is first brought into the
cache, the tag bits are stored alongside the data bits.

When the CPU generates a memory request, the index field is used for the address
to access the cache. The tag field of the CPU address is compared with the tag in the
word read from the cache. If the two tags match, there is a hit and the desired data
word is in the cache. If there is no match, there is a miss and the required word is
read from main memory. It is then stored in the cache together with the new tag,
replacing the previous value.
The disadvantage of direct mapping is that the hit ratio can drop considerably if two
or more words whose addresses have the same index but different tags are accessed
repeatedly. However, this possibility is minimized by the fact that such words are
relatively far apart in the address range (multiples of 512 locations in this example.)
• To see how the direct-mapping organization operates, consider the numerical
example shown in Fig. 12-13.
• The word at address zero is presently stored in the cache (index = 000, tag =
00, data = 1220).
• Suppose that the CPU now wants to access the word at address 02000.
• The index address is 000, so it is used to access the cache. The two tags are
then compared.
• The cache tag is 00 but the address tag is 02, which does not produce a
match.
• Therefore, the main memory is accessed and the data word 5670 is
transferred to the CPU.
• The cache word at index address 000 is then replaced with a tag of 02 and
data of 5670.
The direct-mapping example just described uses a block size of one word. The same
organization but using a block size of B words is shown in Fig. 12-14.
The index field is now divided into two parts: the block field and the word field. In a
512-word cache there are 64 blocks of 8 words each, since 64 x 8 = 512. The block
number is specified with a 6-bit field and the word within the block is specified with
a 3-bit field. The tag field stored within the cache is common to all eight words of the
same block. Every time a miss occurs, an entire block of eight words must be
transferred from main memory to cache memory. Although this takes extra time, the
hit ratio will most likely improve with a larger block size because of the sequential
nature of computer programs.
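The behaviour just described can be modelled in a few lines. This is an illustrative sketch (the class and variable names are my own) of the word-sized-block case, replayed on the Fig. 12-13 numbers: index = address mod number of lines, with the quotient kept as the tag.

```python
class DirectMappedCache:
    """Each memory word maps to line = address % num_lines; the stored
    tag (address // num_lines) disambiguates addresses sharing a line."""
    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.lines = [None] * num_lines          # each entry: (tag, data)

    def access(self, address, memory):
        index = address % self.num_lines
        tag = address // self.num_lines
        entry = self.lines[index]
        if entry is not None and entry[0] == tag:
            return "hit", entry[1]               # tags match: word is in cache
        data = memory[address]                   # miss: read main memory
        self.lines[index] = (tag, data)          # store data with the new tag
        return "miss", data

memory = {0o00000: 1220, 0o02000: 5670}          # both addresses map to index 000
cache = DirectMappedCache(num_lines=512)
print(cache.access(0o00000, memory))             # ('miss', 1220)  cold cache
print(cache.access(0o00000, memory))             # ('hit', 1220)
print(cache.access(0o02000, memory))             # ('miss', 5670)  same index, tag 02
```

The last access reproduces the textbook scenario: addresses 00000 and 02000 share index 000, so bringing in the second word evicts the first.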

2. Associative Mapping

The fastest and most flexible cache organization uses an associative memory. This
organization is illustrated in Fig. 12-1 1. The associative memory stores both the
address and content (data) of the memory word. This permits any location in cache
to store any word from main memory.
The diagram shows three words presently stored in the cache. The address value of
15 bits is shown as a five-digit octal number and its corresponding 12-bit word is
shown as a four-digit octal number.
A CPU address of 15 bits is placed in the argument register and the associative
memory is searched for a matching address.
If the address is found, the corresponding 12-bit data is read and sent to the CPU.
If no match occurs, the main memory is accessed for the word.
The address-data pair is then transferred to the associative cache memory. If the
cache is full, an address-data pair must be displaced to make room for a pair that is
needed and not presently in the cache.
The decision as to what pair is replaced is determined from the replacement
algorithm that the designer chooses for the cache.
A simple procedure is to replace cells of the cache in round-robin order whenever a
new word is requested from main memory. This constitutes a first-in first-out (FIFO)
replacement policy.
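The FIFO policy just described maps naturally onto an ordered dictionary. A minimal sketch, assuming a fixed cache capacity; the addresses, data values, and function name are illustrative only.

```python
from collections import OrderedDict

def associative_access(cache, address, memory, capacity):
    """Fully associative lookup: any address-data pair may occupy any cell,
    so a plain content lookup decides hit or miss. On a miss with a full
    cache, the oldest pair is displaced (FIFO replacement)."""
    if address in cache:
        return "hit", cache[address]
    data = memory[address]                 # miss: fetch from main memory
    if len(cache) == capacity:
        cache.popitem(last=False)          # evict the first-in pair
    cache[address] = data
    return "miss", data

memory = {0o01000: 3450, 0o02777: 6710, 0o22345: 1234, 0o05000: 42}
cache = OrderedDict()
for addr in (0o01000, 0o02777, 0o22345, 0o05000):   # capacity-3 cache fills, then evicts
    associative_access(cache, addr, memory, capacity=3)
print(0o01000 in cache)   # False: the oldest pair was displaced
print(0o05000 in cache)   # True
```

Unlike direct mapping, no index is computed: the address itself is the search key, which is what makes this organization the most flexible (and the most expensive in hardware).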
3. Set-Associative Mapping

It was mentioned previously that the disadvantage of direct mapping is that two words
with the same index in their address but with different tag values cannot reside in cache
memory at the same time. A third type of cache organization, called set-associative
mapping, is an improvement over the direct mapping organization in that each word of
cache can store two or more words of memory under the same index address.
Each data word is stored together with its tag and the number of tag-data items in one
word of cache is said to form a set.
An example of a set-associative cache organization for a set size of two is shown in Fig.
12-15.

Each index address refers to two data words and their associated tags. Each tag requires
six bits and each data word has 12 bits, so the word length is 2(6 + 12) = 36 bits.
An index address of nine bits can accommodate 512 words. Thus the size of cache
memory is 512 x 36.
It can accommodate 1024 words of main memory since each word of the cache contains
two data words. In general, a set-associative cache of set size k will accommodate k
words of main memory in each word of cache.
The words stored at addresses 01000 and 02000 of main memory are stored in cache
memory at index address 000. Similarly, the words at addresses 02777 and 00777 are
stored in the cache at index address 777.
When the CPU generates a memory request, the index value of the address is used to
access the cache. The tag field of the CPU address is then compared with both tags in the
cache to determine if a match occurs.
The comparison logic is done by an associative search of the tags in the set similar to an
associative memory search: thus the name "set-associative." The hit ratio will improve as
the set size increases because more words with the same index but different tags can
reside in the cache. However, an increase in the set size increases the number of bits in
the words of the cache and requires more complex comparison logic.
When a miss occurs in a set-associative cache and the set is full, it is necessary to replace
one of the tag-data items with a new value. The most common replacement algorithms
used are random replacement, first-in, first-out (FIFO), and least recently used (LRU).
With the random replacement policy, the control chooses one tag-data item for
replacement at random. The FIFO procedure selects for replacement the item that has
been in the set the longest. The LRU algorithm selects for replacement the item that has
been least recently used by the CPU. Both FIFO and LRU can be implemented by adding a
few extra bits in each word of cache.
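A set size of two with LRU replacement can be sketched as follows. This is an illustrative model, not the textbook's hardware (the data values are made up), but it replays the key point of the example above: the words at addresses 01000 and 02000 share index 000 yet coexist in the same set.

```python
class SetAssociativeCache:
    """Set-associative lookup with LRU replacement inside each set.
    Each set holds up to set_size (tag, data) items, least recently
    used first."""
    def __init__(self, num_sets, set_size=2):
        self.num_sets = num_sets
        self.set_size = set_size
        self.sets = [[] for _ in range(num_sets)]

    def access(self, address, memory):
        index = address % self.num_sets
        tag = address // self.num_sets
        ways = self.sets[index]
        for i, (t, d) in enumerate(ways):
            if t == tag:                     # associative search of the set's tags
                ways.append(ways.pop(i))     # mark as most recently used
                return "hit", d
        data = memory[address]               # miss: fetch from main memory
        if len(ways) == self.set_size:
            ways.pop(0)                      # evict the least recently used item
        ways.append((tag, data))
        return "miss", data

memory = {0o01000: 3450, 0o02000: 5670}      # both addresses map to index 000
cache = SetAssociativeCache(num_sets=512)
print(cache.access(0o01000, memory))         # ('miss', 3450)
print(cache.access(0o02000, memory))         # ('miss', 5670)  same index, second way
print(cache.access(0o01000, memory))         # ('hit', 3450)   both words coexist
```

With direct mapping the third access would have missed; the second way of the set is what buys the improved hit ratio.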

Cache Initialization
One more aspect of cache organization that must be taken into consideration is the
problem of initialization. The cache is initialized when power is applied to the computer
or when the main memory is loaded with a complete set of programs from auxiliary
memory. After initialization, the cache is considered to be empty, but in effect, it
contains some nonvalid data. It is customary to include with each word in the cache a
valid bit to indicate whether or not the word contains valid data. The cache is initialized
by clearing all the valid bits to 0. The valid bit of a particular cache word is set to 1 the
first time this word is loaded from main memory and stays set unless the cache has to be
initialized again. The introduction of the valid bit means that a word in cache is not
replaced by another word unless the valid bit is set to 1 and a mismatch of tags occurs. If
the valid bit happens to be 0, the new word automatically replaces the invalid data. Thus
the initialization condition has the effect of forcing misses from the cache until it fills with
valid data.

Q5. Define Virtual Memory.


Ans. Virtual Memory is a storage allocation scheme in which secondary memory can be
addressed as though it were part of the main memory. The addresses a program may use
to reference memory are distinguished from the addresses the memory system uses to
identify physical storage sites and program-generated addresses are translated
automatically to the corresponding machine addresses.
A virtual memory is what its name indicates: it is an illusion of a memory that is larger
than the real memory.
The basis of virtual memory is the noncontiguous memory allocation model. The virtual
memory manager removes some components from memory to make room for other
components.
The size of virtual storage is limited by the addressing scheme of the computer system
and the amount of secondary memory available, not by the actual number of main
storage locations.
It is a technique that is implemented using both hardware and software. It maps memory
addresses used by a program, called virtual addresses, into physical addresses in
computer memory.

1. All memory references within a process are logical addresses that are dynamically
translated into physical addresses at run time. This means that a process can be
swapped in and out of the main memory such that it occupies different places in the
main memory at different times during the course of execution.
2. A process may be broken into a number of pieces and these pieces need not be
continuously located in the main memory during execution. The combination of
dynamic run-time address translation and the use of a page or segment table
permits this.

If these characteristics are present then, it is not necessary that all the pages or segments are
present in the main memory during execution. This means that the required pages need to be
loaded into memory whenever required. Virtual memory is implemented using Demand Paging or
Demand Segmentation.

Demand Paging

The process of loading a page into memory on demand (whenever a page fault occurs) is known as
demand paging. It includes the following steps:

1. If the CPU tries to refer to a page that is currently not available in the main memory, it
generates an interrupt indicating a memory access fault.

2. The OS puts the interrupted process in a blocking state. For the execution to proceed the OS
must bring the required page into the memory.

3. The OS will search for the required page in the logical address space.

4. The required page will be brought from logical address space to physical address space. The
page replacement algorithms are used for the decision-making of replacing the page in
physical address space.

5. The page table will be updated accordingly.


6. The signal will be sent to the CPU to continue the program execution and it will place the
process back into the ready state.

Hence whenever a page fault occurs these steps are followed by the operating system and the
required page is brought into memory.
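The steps above can be condensed into a toy model. This sketch assumes a FIFO page-replacement policy (step 4 leaves the choice of algorithm open) and uses a dictionary as the page table; all names are illustrative.

```python
def access_page(page_table, frames, page, max_frames):
    """Return 'no fault' if the page is resident; otherwise service the
    fault: pick a victim if memory is full, load the page, and update
    the page table (the steps above, condensed)."""
    if page_table.get(page):                 # present bit set: no fault
        return "no fault"
    if len(frames) == max_frames:            # memory full: replace a page
        victim = frames.pop(0)               # FIFO victim (one possible policy)
        page_table[victim] = False           # mark the victim as not resident
    frames.append(page)                      # bring the required page in
    page_table[page] = True                  # update the page table
    return "page fault"

page_table, frames = {}, []
references = [1, 2, 1, 3, 4]                 # page 4 arrives with memory full
results = [access_page(page_table, frames, p, max_frames=3) for p in references]
print(results)   # ['page fault', 'page fault', 'no fault', 'page fault', 'page fault']
print(frames)    # [2, 3, 4] -- page 1 was evicted
```

The second reference to page 1 hits because the page is already resident; the reference to page 4 triggers replacement because all three frames are occupied.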

Advantages of Virtual Memory

• More processes may be maintained in the main memory: Because we are going to load
only some of the pages of any particular process, there is room for more processes. This
leads to more efficient utilization of the processor because it is more likely that at least one
of the more numerous processes will be in the ready state at any particular time.

• A process may be larger than all of the main memory: One of the most fundamental
restrictions in programming is lifted. A process larger than the main memory can be
executed because of demand paging. The OS itself loads pages of a process in the main
memory as required.

• It allows greater multiprogramming levels by using less of the available (primary) memory
for each process.

• It provides an address space larger than that of the main memory alone.

• It makes it possible to run more applications at once.

• Users are spared from having to add memory modules when RAM space runs out, and
applications are liberated from shared memory management.

• Execution speed improves when only the required portion of a program needs to be loaded.

• Memory isolation has increased security.

• It makes it possible for several larger applications to run at once.

• Memory allocation is comparatively cheap.

• It avoids external fragmentation.

• It is efficient to manage logical partition workloads using the CPU.

• Automatic data movement is possible.

Disadvantages of Virtual Memory

• It can slow down the system performance, as data needs to be constantly transferred
between the physical memory and the hard disk.

• It can increase the risk of data loss or corruption, as data can be lost if the hard disk fails or if
there is a power outage while data is being transferred to or from the hard disk.

• It can increase the complexity of the memory management system, as the operating system
needs to manage both physical and virtual memory.

Page Fault Service Time: The time taken to service the page fault is called page fault service time.
The page fault service time includes the time taken to perform all the above six steps.

Let main memory access time be: m
Page fault service time be: s
Page fault rate be: p
Then, Effective memory access time = (p*s) + (1-p)*m
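Plugging sample numbers (mine, chosen for illustration) into the formula above makes its impact concrete: even a rare page fault dominates the effective access time because the fault service time is so much larger than a memory access.

```python
def effective_access_time(p, s, m):
    """Effective memory access time = (p * s) + (1 - p) * m, where
    p = page fault rate, s = page fault service time, m = memory access time."""
    return p * s + (1 - p) * m

# m = 200 ns, s = 8 ms = 8_000_000 ns, p = 0.001 (1 fault per 1000 accesses)
print(effective_access_time(p=0.001, s=8_000_000, m=200))   # about 8199.8 ns
```

With just one fault per thousand accesses, the effective access time balloons from 200 ns to roughly 8200 ns, a 40x slowdown.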
Q6. Define Auxiliary Memory

Ans. Auxiliary memory is known as the lowest-cost, highest-capacity and slowest-access storage in a
computer system. It is where programs and data are kept for long-term storage or when not in
immediate use. The most common examples of auxiliary memories are magnetic tapes and magnetic
disks.

Magnetic Disks

A magnetic disk is a type of memory constructed using a circular plate of metal or plastic coated with
magnetized materials. Usually, both sides of the disks are used to carry out read/write operations.
However, several disks may be stacked on one spindle with read/write head available on each
surface.

The following image shows the structural representation for a magnetic disk.

o The memory bits are stored in the magnetized surface in spots along the concentric circles
called tracks.

o The concentric circles (tracks) are commonly divided into sections called sectors.

Magnetic Tape

Magnetic tape is a storage medium that allows data archiving, collection, and backup for different
kinds of data. The magnetic tape is constructed using a plastic strip coated with a magnetic recording
medium.

The bits are recorded as magnetic spots on the tape along several tracks. Usually, seven or nine bits
are recorded simultaneously to form a character together with a parity bit.
Magnetic tape units can be halted, started to move forward or in reverse, or can be rewound.
However, they cannot be started or stopped fast enough between individual characters. For this
reason, information is recorded in blocks referred to as records.

Q7. What is I/O Interrupt? Discuss about different types of interrupts.

Ans. An interrupt I/O is a process of data transfer in which an external device or a peripheral informs
the CPU that it is ready for communication and requests the attention of the CPU.

Interrupt Processing

• A device driver initiates an I/O request on behalf of a process.


• The device driver signals the I/O controller for the proper device, which initiates
the requested I/O.
• The device signals the I/O controller that it is ready to retrieve input, that the
output is complete, or that an error has been generated.
• The CPU receives the interrupt signal on the interrupt-request line and transfers
control to the interrupt handler routine.
• The interrupt handler determines the cause of the interrupt, performs the
necessary processing and executes a “return from” interrupt instruction.
• The CPU returns to the execution state prior to the interrupt being signaled.
• The CPU continues processing until the cycle begins again.
How the processor handles interrupts
Whenever an interrupt occurs, it causes the CPU to stop executing the current program.
Control then passes to the interrupt handler, or interrupt service routine (ISR).
The ISR handles interrupts in the following steps −
Step 1 − When an interrupt occurs, assume the processor is executing the
i-th instruction; the program counter points to the next instruction, the (i+1)-th.
Step 2 − When the interrupt occurs, the program counter value is stored on the process
stack and the program counter is loaded with the address of the interrupt service
routine.
Step 3 − Once the interrupt service routine is completed, the address on the
process stack is popped and placed back in the program counter.
Step 4 − Execution then resumes from the (i+1)-th instruction.
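These steps can be traced with a toy model of the program counter and process stack. This is purely illustrative (real interrupt handling also saves processor status flags, which this sketch omits):

```python
def handle_interrupt(pc, stack, isr_address):
    """Trace the save/jump/restore sequence described in the steps above."""
    stack.append(pc)            # step 2: program counter pushed on the stack
    pc = isr_address            # program counter loaded with the ISR's address
    # ... the interrupt service routine would execute here ...
    pc = stack.pop()            # step 3: saved address popped back into the PC
    return pc                   # step 4: execution resumes at the (i+1)-th instruction

stack = []
print(handle_interrupt(pc=101, stack=stack, isr_address=0x40))   # 101
print(stack)                                                     # []
```

The returned value equals the original program counter, showing why the interrupted program resumes exactly where it left off.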

Types of interrupts
There are two types of interrupts which are as follows −
Hardware interrupts
Hardware interrupts are signals generated by external and I/O devices to interrupt the
CPU when they require service.
For example − In a keyboard if we press a key to do some action this pressing of the
keyboard generates a signal that is given to the processor to do action, such interrupts are
called hardware interrupts.
Hardware interrupts are classified into two types which are as follows −
• Maskable Interrupt − A hardware interrupt that can be delayed while a higher-
priority interrupt is being serviced by the processor.
• Non-Maskable Interrupt − A hardware interrupt that cannot be delayed and must be
serviced immediately by the processor.
Software interrupts
Software interrupts are interrupt signals generated internally by the processor or by software
programs, typically when a program needs to access a system call.
A software interrupt is divided into two types. They are as follows −
• Normal Interrupts − The interrupts that are caused deliberately by software
instructions (for example, a system-call instruction) are called normal interrupts.
• Exception − An exception is an unplanned interruption that occurs while executing a
program. For example, if a program divides a value by zero during execution,
an exception is raised.

Q8. What are Priority Interrupts?

Ans. When I/O devices are ready for I/O transfer, they generate an interrupt request signal to the
computer. The CPU receives this signal, suspends the current instructions it is executing, and then
moves forward to service that transfer request. But what if multiple devices generate interrupts
simultaneously? In that case, we need a way to decide which interrupt is to be serviced first. In other
words, we have to set a priority among all the devices for systemic interrupt servicing. The concept
of defining the priority among devices so as to know which one is to be serviced first in case of
simultaneous requests is called a priority interrupt system. This could be done with either software
or hardware methods.
SOFTWARE METHOD – POLLING

In this method, all interrupts are serviced by branching to the same service program. This program
then checks with each device if it is the one generating the interrupt. The order of checking is
determined by the priority that has to be set. The device having the highest priority is checked first
and then devices are checked in descending order of priority. If the device is checked to be
generating the interrupt, another service program is called which works specifically for that
particular device. The structure will look something like this-

if (device[0].flag)
    device[0].service();
else if (device[1].flag)
    device[1].service();
else
    // raise error
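The same polling scheme can be written as a runnable sketch. The `Device` class and device names here are stand-ins for real device-driver state, used only to show the priority-ordered check.

```python
# Illustrative polling loop: devices are checked in priority order and
# the first one with its flag set is serviced (index 0 = highest priority).

class Device:
    def __init__(self, name, flag=False):
        self.name = name
        self.flag = flag          # True when the device requests service
        self.serviced = False

    def service(self):
        self.serviced = True
        self.flag = False

def poll(devices):
    """Service the highest-priority device requesting an interrupt."""
    for dev in devices:
        if dev.flag:
            dev.service()
            return dev.name
    raise RuntimeError("spurious interrupt: no device is requesting")

devices = [Device("disk"), Device("keyboard", flag=True), Device("timer", flag=True)]
print(poll(devices))  # keyboard: it outranks the timer
```

Note how the lower-priority timer stays pending until the next poll, which is exactly why this method is slow when many devices must be checked.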

The major disadvantage of this method is that it is quite slow. To overcome this, we can use a
hardware solution, one of which involves connecting the devices in series. This is called the daisy-
chaining method.

HARDWARE METHOD – DAISY CHAINING

The daisy-chaining method involves connecting all the devices that can request an interrupt in a
serial manner. This configuration is governed by the priority of the devices. The device with the
highest priority is placed first followed by the second highest priority device and so on. The given
figure depicts this arrangement.
WORKING:

There is an interrupt request line which is common to all the devices and goes into the CPU.

• When no interrupts are pending, the line is in HIGH state. But if any of the devices raises an
interrupt, it places the interrupt request line in the LOW state.

• The CPU acknowledges this interrupt request from the line and then enables the interrupt
acknowledge line in response to the request.

• This signal is received at the PI(Priority in) input of device 1.

• If the device has not requested the interrupt, it passes this signal to the next device through
its PO(priority out) output. (PI = 1 & PO = 1)

• However, if the device had requested the interrupt, (PI =1 & PO = 0)

o The device consumes the acknowledge signal and blocks its further use by placing 0
at its PO (priority out) output.

o The device then proceeds to place its interrupt vector address(VAD) into the data
bus of CPU.

o The device puts its interrupt request signal in HIGH state to indicate its interrupt has
been taken care of.

• If a device gets 0 at its PI input, it generates 0 at the PO output to tell other devices that
acknowledge signal has been blocked. (PI = 0 & PO = 0)

Hence, the device having PI = 1 and PO = 0 is the highest priority device that is requesting an
interrupt. Therefore, by daisy chain arrangement we have ensured that the highest priority interrupt
gets serviced first and have established a hierarchy.
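The PI/PO propagation described above can be simulated in a few lines. This is an illustrative model only: the function name and the vector addresses (VADs) are made-up values, not part of any real bus protocol.

```python
# Simulation of the daisy-chain acknowledge line: the acknowledge (PI)
# travels down the chain until a requesting device consumes it.

def daisy_chain(requests, vads):
    """Return (index, VAD) of the device that wins the acknowledge."""
    pi = 1                          # CPU asserts interrupt acknowledge
    for i, requesting in enumerate(requests):
        if pi == 1 and requesting:
            # PI = 1 & PO = 0: consume the acknowledge, drive the VAD.
            return i, vads[i]
        # Otherwise PI = 1 & PO = 1 (not requesting): pass it along.
    return None, None               # no device was requesting

# Devices 1 and 2 request at the same time; device 1 sits earlier in
# the chain, so it has higher priority and wins.
winner, vad = daisy_chain([False, True, True], [0x40, 0x44, 0x48])
print(winner, hex(vad))  # 1 0x44
```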

Topic: Number System

Q1. Convert 159₁₀ to the octal number system.

Solution:

Given: 159₁₀

Here, the required base number is 8. (i.e., octal number system). Hence, follow the
below procedure to convert the decimal system to the octal system.

Step 1: Divide 159 by 8.

⇒ Quotient = 19 & Remainder = 7

Step 2: Divide 19 by 8.

⇒ Quotient = 2 & Remainder = 3

Since, the quotient “2” is less than “8”, we can stop the process.
Therefore, 159₁₀ = 237₈
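The repeated-division procedure above can be written as a small helper. The function name `to_base` is just an illustrative choice; it works for any base from 2 to 9, where each remainder is a single digit.

```python
# Decimal-to-base conversion by repeated division, as in the steps above:
# collect remainders, then read them in reverse order.

def to_base(n, base):
    digits = []
    while n > 0:
        n, r = divmod(n, base)     # quotient and remainder of each step
        digits.append(str(r))
    return "".join(reversed(digits)) or "0"

print(to_base(159, 8))  # 237
```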

Q2. Convert the binary number 11001011₂ to the decimal number system.

Solution:

Given binary number: 11001011₂

Now, multiply each digit of the given binary number by the exponents of the base,
starting with the right to left such that the exponents start with 0 and increase by 1.

Hence, the digits from right to left are written as follows:

1 = 1 × 2⁰ = 1

1 = 1 × 2¹ = 2

0 = 0 × 2² = 0

1 = 1 × 2³ = 8

0 = 0 × 2⁴ = 0

0 = 0 × 2⁵ = 0

1 = 1 × 2⁶ = 64

1 = 1 × 2⁷ = 128

Now, add all the product values obtained.

= 1 + 2 + 0 + 8 + 0 + 0 + 64 + 128

= 203

Hence, the decimal equivalent of 11001011₂ is 203₁₀.

I.e., 11001011₂ = 203₁₀.

Alternate Method:
11001011₂ = (1 × 2⁷) + (1 × 2⁶) + (0 × 2⁵) + (0 × 2⁴) + (1 × 2³) + (0 × 2²) + (1 × 2¹) +
(1 × 2⁰)

11001011₂ = 128 + 64 + 0 + 0 + 8 + 0 + 2 + 1

11001011₂ = 203₁₀.
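The positional-weight sum can be checked in Python; the built-in `int()` with an explicit base performs exactly this computation.

```python
# Binary-to-decimal by summing positional weights: the rightmost bit
# carries weight 2**0, the next 2**1, and so on.

def binary_to_decimal(bits):
    total = 0
    for power, bit in enumerate(reversed(bits)):
        total += int(bit) * 2 ** power
    return total

print(binary_to_decimal("11001011"))  # 203
print(int("11001011", 2))             # 203, the built-in cross-check
```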

Q3. Convert the octal number 714₈ to the decimal number system.

Solution:

Given octal number: 714₈

Since the base of the octal number system is 8, we have to multiply each digit of the
given number with the exponents of the base.

Thus, the octal number 7148 can be converted to the decimal system as follows:

714₈ = (7 × 8²) + (1 × 8¹) + (4 × 8⁰)

714₈ = (7 × 64) + (1 × 8) + (4 × 1)

714₈ = 448 + 8 + 4

714₈ = 460₁₀

Hence, the decimal equivalent of the octal number 714₈ is 460₁₀.

Q4. Multiply -5 and -2 using Booth’s Algorithm.

Ans. Booth’s algorithm is a powerful algorithm that is used for signed multiplication.
It generates a 2n bit product for two n bit signed numbers.
The flowchart is as shown in Figure 1.
The steps in Booth’s algorithm are as follow:
1) Initialize A and Q−1 to 0, and count to n.
2) Based on the values of Q0 and Q−1, do the following:
a. If Q0, Q−1 = 0, 0: arithmetic shift right A, Q, Q−1 and decrement count
by 1.
b. If Q0, Q−1 = 0, 1: add B to A and store the sum in A, shift right A, Q, Q−1 and
decrement count by 1.
c. If Q0, Q−1 = 1, 0: subtract B from A and store the difference in A, shift right
A, Q, Q−1 and decrement count by 1.
d. If Q0, Q−1 = 1, 1: shift right A, Q, Q−1 and decrement count by 1.
3) Repeat step 2 until count equals 0.
Using the flowchart, we can solve the given question as follows:
(−5)₁₀ = 1011 (in 2’s complement)
(−2)₁₀ = 1110 (in 2’s complement)
Multiplicand (B) = 1011
Multiplier (Q) = 1110
Initially Q−1 = 0
Count = 4

Steps     A      Q      Q−1   Operation

Initial   0000   1110   0     —

1         0000   0111   0     Shift right (Q0 Q−1 = 0 0)

2         0010   1011   1     A ← A − B, then shift right (Q0 Q−1 = 1 0)

3         0001   0101   1     Shift right (Q0 Q−1 = 1 1)

4         0000   1010   1     Shift right (Q0 Q−1 = 1 1)

Result: A Q = 0000 1010 = (10)₁₀, which is the correct product of (−5) × (−2).
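The register-level trace above can be reproduced with a short sketch of Booth's algorithm. The function name and the fixed 4-bit width are illustrative choices; the steps follow the flowchart (examine Q0 and Q−1, add or subtract B, arithmetic shift right A, Q, Q−1).

```python
# Sketch of Booth's multiplication for n-bit signed operands,
# mirroring the A, Q, Q-1 registers used in the table above.

def booth_multiply(multiplicand, multiplier, n=4):
    mask = (1 << n) - 1
    B = multiplicand & mask            # B holds the multiplicand
    A, Q, q_minus1 = 0, multiplier & mask, 0
    for _ in range(n):
        pair = (Q & 1, q_minus1)
        if pair == (1, 0):             # A <- A - B
            A = (A - B) & mask
        elif pair == (0, 1):           # A <- A + B
            A = (A + B) & mask
        # Arithmetic shift right of the combined A, Q, Q-1.
        q_minus1 = Q & 1
        Q = ((Q >> 1) | ((A & 1) << (n - 1))) & mask
        sign = A >> (n - 1)            # replicate A's sign bit
        A = (A >> 1) | (sign << (n - 1))
    product = (A << n) | Q             # 2n-bit product
    if product >> (2 * n - 1):         # interpret as signed
        product -= 1 << (2 * n)
    return product

print(booth_multiply(-5, -2))  # 10
```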

Q5. Write Floating Point Representation for 85.125

Ans. 85.125
85 = 1010101
0.125 = 001
85.125 = 1010101.001
= 1.010101001 × 2⁶
sign = 0

1. Single precision:
biased exponent 127+6=133
133 = 10000101
Normalised mantissa = 010101001
we will add 0's to complete the 23 bits
The IEEE 754 Single precision is:
= 0 10000101 01010100100000000000000
This can be written in hexadecimal form 42AA4000

2. Double precision:
biased exponent 1023+6=1029
1029 = 10000000101
Normalised mantissa = 010101001
we will add 0's to complete the 52 bits
The IEEE 754 Double precision is:
= 0 10000000101 0101010010000000000000000000000000000000000000000000
This can be written in hexadecimal form 4055480000000000
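Both encodings can be verified with Python's `struct` module, which packs a float into its IEEE 754 byte representation (`>f` for single precision, `>d` for double).

```python
# Cross-check of the IEEE 754 encodings of 85.125 using the standard
# library: big-endian single (>f) and double (>d) precision.
import struct

single = struct.pack(">f", 85.125).hex().upper()
double = struct.pack(">d", 85.125).hex().upper()
print(single)  # 42AA4000
print(double)  # 4055480000000000
```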

Q6. Subtract 3 from 2 using arithmetic operation and complements.

Ans. Using 4-bit 2's complement arithmetic:

2 = 0010 and 3 = 0011

2's complement of 3 = 1101

2 − 3 = 0010 + 1101 = 1111

There is no end carry, so the result is negative. Taking the 2's complement of 1111 gives 0001.

Therefore, 2 − 3 = −1.
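Subtraction by 2's complement can be checked with a short sketch. The function name and the 4-bit width are illustrative assumptions.

```python
# 2's-complement subtraction in a fixed 4-bit word: add the minuend to
# the 2's complement of the subtrahend, then reinterpret the sign bit.

BITS = 4
MASK = (1 << BITS) - 1                        # 0b1111

def twos_complement_sub(a, b):
    result = (a + ((~b + 1) & MASK)) & MASK   # a + 2's complement of b
    if result >> (BITS - 1):                  # sign bit set: negative
        result -= 1 << BITS
    return result

print(twos_complement_sub(2, 3))  # -1
```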
Q7. Convert the decimal numbers into binary and divide in binary

55 ÷ 5

Ans. Given decimal numbers are 55 and 5. The division of 55 and 5 gives us 11 as
the quotient in the decimal system. Let us convert 55 and 5 into binary and then
divide them into binary.

Conversion of 55

55₁₀ = 110111₂

Conversion of 5

5₁₀ = 101₂

Now divide in binary: 110111₂ ÷ 101₂ = 1011₂.

The quotient is 1011₂, which is the same as 11₁₀.
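The whole round trip can be confirmed in Python: `int()` with base 2 parses the binary strings, and `bin()` formats the quotient back in binary.

```python
# Divide 55 by 5 via their binary representations and format the
# quotient back in binary.

dividend = int("110111", 2)   # 55
divisor = int("101", 2)       # 5
quotient = dividend // divisor
print(quotient)               # 11
print(bin(quotient))          # 0b1011
```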

Q8. Minimize the following boolean function-


F(A, B, C, D) = Σm(0, 1, 2, 5, 7, 8, 9, 10, 13, 15)

Ans.
• Since the given boolean expression has 4 variables, so we draw a 4 x 4 K Map.
• We fill the cells of K Map in accordance with the given boolean function.
• Then, we form the groups in accordance with the above rules.
Now,

F(A, B, C, D)

= (A’B + AB)(C’D + CD) + (A’B’ + A’B + AB + AB’)C’D + (A’B’ + AB’)(C’D’ + CD’)

= BD + C’D + B’D’

Thus, minimized boolean expression is-

F(A, B, C, D) = BD + C’D + B’D’
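A brute-force truth-table check (not a K-map solver) can confirm the minimization: the simplified expression must be 1 on exactly the listed minterms. The function name `f_min` is an illustrative choice.

```python
# Verify that BD + C'D + B'D' is 1 precisely on the minterms of
# F(A, B, C, D) = Σm(0, 1, 2, 5, 7, 8, 9, 10, 13, 15).
from itertools import product

minterms = {0, 1, 2, 5, 7, 8, 9, 10, 13, 15}

def f_min(a, b, c, d):
    return (b and d) or ((not c) and d) or ((not b) and (not d))

# Minterm index i = 8a + 4b + 2c + d, with A as the most significant bit.
for i, (a, b, c, d) in enumerate(product([0, 1], repeat=4)):
    assert bool(f_min(a, b, c, d)) == (i in minterms)
print("BD + C'D + B'D' matches all 16 rows")
```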

Q9. What is the don’t care condition in k-maps? Minimise the following function in SOP
minimal form using K-Maps:

f = Σm(1, 5, 6, 11, 12, 13, 14) + d(4)


Ans. The “Don’t Care” conditions allow us to replace the empty cell of a K-Map to form a
grouping of the variables which is larger than that of forming groups without don’t care. While
forming groups of cells, we can consider a “Don’t Care” cell as 1 or 0 or we can also ignore that
cell. Therefore, the “Don’t Care” condition can help us to form a larger group of cells.

The SOP K-map for the given expression is:


Therefore, SOP minimal is,

f = BC' + BD' + A'C'D + AB'CD.
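This result can also be checked exhaustively: on every listed minterm the SOP must be 1, the don't-care cell m4 may take either value, and everywhere else it must be 0. The function name `f_sop` is illustrative.

```python
# Verify BC' + BD' + A'C'D + AB'CD against Σm(1,5,6,11,12,13,14) + d(4),
# skipping the don't-care cell where either output is acceptable.
from itertools import product

minterms = {1, 5, 6, 11, 12, 13, 14}
dont_care = {4}

def f_sop(a, b, c, d):
    return (b and not c) or (b and not d) \
        or ((not a) and (not c) and d) \
        or (a and (not b) and c and d)

for i, (a, b, c, d) in enumerate(product([0, 1], repeat=4)):
    if i in dont_care:
        continue
    assert bool(f_sop(a, b, c, d)) == (i in minterms)
print("SOP minimal covers exactly the required minterms")
```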

Q10. Using Boolean identities, reduce the given Boolean expression:

F(X, Y, Z) = X′Y + YZ′ + YZ + XY′Z′

Solution:

Given, F(X, Y, Z) = X′Y + YZ′ + YZ + XY′Z′

Using the idempotent law, we can write YZ’ = YZ’ + YZ’

⇒ F(X, Y, Z) = X′Y+(YZ′+YZ′)+YZ + XY′Z′

Now, interchange the second and third term, we get

⇒ F(X, Y, Z) = X′Y+(YZ′+YZ)+(YZ′+XY′Z′)

By using distributive law,

⇒ F(X, Y, Z) = X′Y+Y(Z′+Z)+Z′(Y+XY′)

Using Z’ + Z = 1 and absorption law (Y + XY’)= (Y + X),

⇒ F(X, Y, Z) = X′Y+Y.1+Z′(Y+X)

⇒ F(X, Y, Z) = X′Y+Y+Z′(Y+X) [Since Y.1 = Y ]

⇒ F(X, Y, Z) = Y(X′+1)+Z′(Y+X)

⇒ F(X, Y, Z) = Y.1+Z′(Y+X) [ As (X’ + 1) = 1 ]

⇒ F(X, Y, Z) = Y +Z′(Y+X) [ As, Y.1 = Y ]

⇒ F(X, Y, Z) = Y+YZ’+XZ’

⇒ F(X, Y, Z) = Y(1+Z′)+XZ′

⇒ F(X, Y, Z) = Y.1+XZ′ [Since (1 + Z’) = 1]

⇒ F(X, Y, Z) = Y+XZ′ [Since Y.1 = Y]

Hence, the simplified form of the given Boolean expression is F(X, Y, Z) = Y+XZ′.
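An exhaustive truth-table check over all 8 input combinations confirms the algebraic reduction above.

```python
# Check X'Y + YZ' + YZ + XY'Z' == Y + XZ' for every (X, Y, Z).
from itertools import product

for x, y, z in product([0, 1], repeat=3):
    original = ((not x) and y) or (y and not z) or (y and z) \
        or (x and (not y) and (not z))
    simplified = y or (x and not z)
    assert bool(original) == bool(simplified)
print("X'Y + YZ' + YZ + XY'Z' == Y + XZ' for all inputs")
```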

Q11. Reduce the following Boolean expression: F(P ,Q, R)=(P+Q)(P+R)

Solution:
Given, F(P ,Q, R)=(P+Q)(P+R)

Using distributive law,

⇒ F(P, Q, R) = P.P + P.R +Q.P + Q.R

Using Idempotent law,

⇒ F(P, Q, R) = P + P.R +Q.P + Q.R

Again using distributive law, we get

⇒ F(P, Q, R) = P(1+R) + Q.P + Q.R

Using dominance law, we can write

⇒ F(P, Q, R) = P + Q.P + Q.R

Again using distributive law, we get

⇒ F(P, Q, R) = P(1+Q) + Q.R

Therefore, using dominance law, we can get the reduced form as follows:

⇒ F(P, Q, R) = P.1 + Q.R

⇒ F(P, Q, R) = P+Q.R

Hence, the reduced form of F(P, Q, R) = (P+Q)(P+R) is F(P, Q, R) = P+Q.R.
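The same truth-table technique confirms this reduction, which is the absorption-style identity (P+Q)(P+R) = P + QR.

```python
# Check (P+Q)(P+R) == P + Q.R over all 8 input combinations.
from itertools import product

assert all(
    bool((p or q) and (p or r)) == bool(p or (q and r))
    for p, q, r in product([0, 1], repeat=3)
)
print("(P+Q)(P+R) == P + Q.R")
```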

Q12. Write the Distributive Law.


Ans.
X+(Y.Z) = (X+Y).(X+Z)
X.(Y+Z) = (X.Y)+(X.Z)

Q13. Write the De Morgan Law.


Ans.
∼(X.Y)=∼X+∼Y

∼(X+Y)=∼X.∼Y

Q14. Write the Associative law.

Ans.

X+(Y+Z) = (X+Y)+Z (OR Form)

X.(Y.Z) = (X.Y).Z (AND Form)


Q15. Write the Commutative law.

Ans.

X+Y = Y+X (OR Form)

X.Y = Y.X (AND Form)
