BCA II SEM Question Set
Q1. What are Sequential Circuits? How are they different from Combinational Circuits?
Ans. Sequential circuits are digital circuits that store and use the previous state information to
determine their next state. Unlike combinational circuits, which only depend on the current input
values to produce outputs, sequential circuits depend on both the current inputs and the previous
state stored in memory elements.
Meaning and Definition: A combinational circuit generates an output by relying only on the input it receives at that instant, independent of time. In a sequential circuit, the output does not rely only on the current input; it also relies on the previous ones.
Feedback: A combinational circuit requires no feedback for generating the next output, because its output has no dependency on the time instance. The output of a sequential circuit, on the other hand, relies on both the previous feedback and the current input: the output generated from the previous inputs is fed back and used (along with the current inputs) to generate the next output.
Performance: A combinational circuit requires only the current inputs, so it performs faster. A sequential circuit is comparatively slower, because its dependency on previous inputs makes the process more complex.
Complexity: A combinational circuit is less complex, because it implements no feedback. A sequential circuit is more complex in nature and functionality, because it implements feedback and depends on previous inputs as well as on clocks.
Elementary Blocks: Logic gates form the elementary building blocks of a combinational circuit; flip-flops form the elementary building blocks of a sequential circuit.
Operation: Combinational circuits are used for both Boolean and arithmetic operations; sequential circuits are mainly used for storing data.
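The contrast above can be illustrated with a small Python sketch (the names `combinational_xor` and `SequentialToggle` are mine, purely for illustration): the combinational function's output is fixed entirely by its current inputs, while the sequential object's output also depends on its stored state.

```python
# A combinational circuit: output is a pure function of the current inputs.
def combinational_xor(a, b):
    return a ^ b

# A sequential circuit: output depends on a stored previous state as well.
class SequentialToggle:
    def __init__(self):
        self.state = 0  # one-bit memory element

    def clock(self, toggle):
        if toggle:
            self.state ^= 1  # next state depends on the previous state
        return self.state

print(combinational_xor(1, 0))          # always 1 for these inputs

t = SequentialToggle()
print([t.clock(1) for _ in range(4)])   # [1, 0, 1, 0]: same input, different outputs
```

Feeding the same input repeatedly gives the same output from the combinational function, but a changing output from the sequential circuit, because the latter remembers its history.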
Q2. What are Flip-Flops and Latches? Differentiate between them.
Ans. A flip-flop is a digital memory circuit that stores one bit of data; it is also called a one-bit memory. Flip-flops are the primary building blocks of most sequential circuits. A flip-flop changes state only during the active state of the clock pulse and remains unaffected while the clock pulse is inactive. Clocked flip-flops act as the memory elements of synchronous sequential circuits, while un-clocked ones (latches) function as the memory elements of asynchronous sequential circuits.
A latch is an electronic device that changes its output immediately on the basis of the applied input. It can be used to store either 0 or 1 at a specified time. A latch has two inputs, SET and RESET, and two outputs that complement each other. Like the flip-flop, a latch is a memory device that stores one bit of data, but it is not synchronous: it does not work on the edges of the clock the way a flip-flop does.
Difference Between Flip-flop and Latch
Clock Signal: Present in a flip-flop; absent in a latch.
Designed Using: A flip-flop is designed using latches along with a clock; a latch is designed using logic gates.
Sensitivity: The flip-flop is sensitive to the applied input and the clock signal; latches are sensitive to the applied input signal only when enabled.
Operating Speed: A flip-flop has a slower operating speed; a latch has a comparatively fast operating speed.
Classification: A flip-flop can be classified as synchronous or asynchronous; a latch cannot be classified this way.
Working: Flip-flops work using the binary inputs and the clock signal; latches operate using only the binary inputs.
Analysis of Circuit: Circuit analysis is quite easy for a flip-flop; analyzing a latch circuit is quite complex.
Robustness: Flip-flops are comparatively more robust; latches are comparatively less robust.
Dependency of Operation: A flip-flop's operation relies on the present and past input bits along with the past output and the clock pulses; a latch's operation depends on the present and past inputs along with the past output binary values.
Uses: Flip-flops constitute the building blocks of many sequential circuits such as counters; latches can also be used for designing sequential circuits but are generally not preferred.
Input and Output: A flip-flop checks the inputs but changes the output only at times defined by a control signal such as the clock; a latch responds to changes in its inputs continuously.
Synchronicity: A flip-flop is synchronous (it works based on the clock signal); a latch is asynchronous (it does not work based on a timing signal).
Faults: Flip-flops stay protected against faults; latches are responsive to faults occurring on the enable pin.
Q3. What are SR Latch and SR Flip Flops? Explain using a diagram.
Ans.
SR Latch
S-R (Set-Reset) latches are the simplest form of latches and are implemented using two inputs: S (Set) and R (Reset). The S input sets the output to 1, while the R input resets the output to 0; in the NAND implementation below, the inputs are active-low, so S = 0 sets and R = 0 resets. When both inputs are inactive, the latch is in a "Hold" state. The Set and Reset inputs are also known as preset and clear. The SR latch forms the basic building block of all other types of flip-flops.
S R Q(n+1)
0 0 Invalid
0 1 1 (Set)
1 0 0 (Reset)
1 1 Hold
As we see in the diagram, the SR latch comprises two cross-coupled NAND gates, where the output
of one gate is connected to the input of the other and vice versa. The latch takes two binary inputs, S
and R, and produces two binary outputs, Q and Q-bar.
Case 1: When S = 0 and R = 1 (SET)
The NAND gates provide an output of 1 whenever there is a 0 in the input to the logic gate. In the
case of S = 0 and R = 1, the logic gate connected to the set input receives a 0, so it will output a 1.
That 1 will then propagate to the NAND gate connected to the reset input, providing an output of 0
as 1 NAND with 1 is 0.
So the Q values will be 1, and the value of Q-bar will be 0 showing that we set the value of the
output Q as 1.
Case 2: When S = 1 and R = 0 (RESET)
This is the reverse of Case 1 above: with the inputs S and R reversed to 1 and 0, the output Q becomes 0 and Q-bar becomes 1, showing that we have reset the output bit Q.
Case 3: When S = 0 and R = 0 (Invalid)
In this case, we see that one of the inputs to both NAND gates is 0; this means that the output of both gates will always be 1 regardless of the 2nd input to the NAND gates. Since the value of Q and Q-bar is the same, we call this SR latch state invalid.
Case 4: When S = 1 and R = 1 (Hold)
This case is a bit tricky to solve, as none of the inputs is 0, which means that the output depends on the 2nd input to the NAND gates. Upon solving the equations, we see that there is no change in the already stored bits for Q and Q-bar; hence we call this state the hold state, since none of the output values change.
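The four cases can be checked with a small simulation of the cross-coupled NAND pair. This is a hedged sketch, not real hardware: the helper (names mine) iterates the two gates until the outputs settle, and the initial (q, q_bar) arguments stand in for the previously stored state.

```python
def nand(a, b):
    return 0 if (a and b) else 1

def sr_latch_nand(s, r, q, q_bar):
    """Apply inputs (s, r) to the stored outputs (q, q_bar) and iterate
    the two cross-coupled NAND gates until the outputs stabilise."""
    for _ in range(4):  # a couple of passes settle this 2-gate loop
        q_new = nand(s, q_bar)
        q_bar_new = nand(r, q_new)
        if (q_new, q_bar_new) == (q, q_bar):
            break
        q, q_bar = q_new, q_bar_new
    return q, q_bar

print(sr_latch_nand(0, 1, 0, 1))  # (1, 0)  Case 1: SET
print(sr_latch_nand(1, 0, 1, 0))  # (0, 1)  Case 2: RESET
print(sr_latch_nand(0, 0, 1, 0))  # (1, 1)  Case 3: invalid, Q == Q-bar
print(sr_latch_nand(1, 1, 1, 0))  # (1, 0)  Case 4: HOLD previous state
```

Note how the hold case simply returns whatever state was passed in, while the invalid case drives both outputs to 1.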
Applications of an SR latch
Now that we have gone through what an SR latch is and its internal working, let's look at some
of the applications of an SR latch below.
• Memory circuits: As the SR latch can store a single bit of data, it can be used to create large
memory circuits for temporary memory storage.
• Control signals: As the SR latches can hold states, they can be used in control signals to hold
some bits till a certain condition is met.
• Flip flops and counters: The SR latch is a basic building block for various types of flip flops
and counter circuits.
SR Flip Flop
The SR Flip Flop stands for Set-Reset Flip Flop. It is the most basic sequential logic circuit. It is built with two logic gates, with the output of each gate connected to the input of the other, so a feedback signal goes from the output to the input. The SR flip flop has two inputs named Set (S) and Reset (R), which is why it is called the SR Flip Flop. The Reset input is used to return the flip flop to its original state from the current state. The output of the flip flop depends upon its inputs: the SET (S) input makes the output 1, while the RESET (R) input makes the output 0. A basic SR flip flop is constructed with NAND gates and has four terminals: two input terminals (S, R) and two output terminals Q and Q'.
The NAND gates provide an output of 1 whenever there is a 0 at an input of the gate. When S = 1, R = 0 and clock = 1, the clocking NAND gate on the set side outputs 0, which drives the latch so that Q = 1 and Q' = 0 (SET). Conversely, when S = 0, R = 1 and clock = 1, the reset side is driven instead and the output Q becomes 0 (RESET).
When both inputs are 0 (S = R = 0), the clocking NAND gates output 1 regardless of the clock, so the latch keeps its stored value and the 'HOLD' state occurs.
When both inputs are 1 (S = R = 1) with clock = 1, both clocking NAND gates output 0, which forces both latch outputs to 1. Since Q and Q' are then the same, the 'INVALID' state occurs.
The characteristic equation is an algebraic expression for the characteristic table's binary
information. It specifies the value of the next state of a flip-flop in terms of its present state and
present excitation. Obtaining it from the K-map for the next state Qn+1 in terms of the present
state and inputs gives, for the SR flip-flop:
Q(n+1) = S + R'·Qn (with the constraint S·R = 0)
Q4. What is the Excitation Table of SR Flip-Flop?
Ans. The truth table of a flip-flop describes the operating characteristic of the flip-flop. In the design of sequential circuits, however, we often face situations where the present state and the next state of the flip-flop are specified, and we must determine the input conditions that must exist in order for the intended output condition to occur.
The excitation table of the SR flip-flop lists each present state and next state, and indicates the excitations required to take the flip-flop from the present state to the next state:
Qn Qn+1 S R
0 0 0 X
0 1 1 0
1 0 0 1
1 1 X 0
(X = don't care)
Q 5. What are the drawbacks of SR Flip Flop over which JK Flip Flop was introduced?
Ans. The SR (Set-Reset) flip-flop, also known as the SR latch, is a fundamental building block in digital
logic circuits. One of the main disadvantages of the basic SR flip-flop is the possibility of entering an
invalid state when both the S (Set) and R (Reset) inputs are asserted (set to 1) simultaneously. This
condition is known as the "forbidden" or "invalid" state, where both the Q and Q' outputs are high,
which leads to unpredictable behavior.
This issue is overcome by the JK flip-flop, which modifies the SR design by feeding the outputs Q and Q' back to the input gates. With this feedback, the input combination J = K = 1 is no longer forbidden: instead of producing an invalid output, it makes the flip-flop toggle its state. A clock signal is also incorporated so that the inputs are considered only during specific time intervals defined by the clock.
In summary, the disadvantage of the basic SR flip-flop is the potential for entering an invalid state when both inputs are asserted simultaneously. The JK flip-flop overcomes this by converting the S = R = 1 condition into a well-defined toggle operation.
Q 6. What is JK Flip Flop? Explain the Race around condition in JK Flip Flop and also the Master
Slave JK Flip Flop.
Ans. The JK flip flop is a sequential logic circuit used for storing and manipulating binary information within digital systems; the letters J and K are simply the labels given to its two inputs, and the circuit is a refinement of the SR flip-flop. The JK flip flop operates on the sequential logic principle, where the output depends not only on the current inputs but also on the previous state. It has two inputs, Set and Reset, denoted by J and K. It also has two outputs, the output and the complement of the output, denoted by Q and Q̅. The internal circuitry of a JK flip flop consists of a combination of logic gates, usually NAND gates.
JK flip flop comprises four possible combinations of inputs: J=0, K=0; J=0, K=1; J=1, K=0; and J=1, K=1.
These input combinations determine the behavior of flip flop and its output.
• J=0, K=0: In this state, flip flop retains its preceding state. It neither sets nor resets itself,
making it go into a “HOLD” state.
• J=0, K=1: This input combination forces flip flop to reset, resulting in Q=0 and Q̅ =1. It is often
referred to as the “reset” state.
• J=1, K=0: Here, flip flop resides in the set mode, causing Q=1 and Q̅ =0. It is known as the
“set” state.
• J=1, K=1: This combination “toggles” flip flop. If the previous state is Q=0, it switches to Q=1
and vice versa. This makes it valuable for frequency division and data storage applications.
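The four input combinations follow directly from the JK characteristic equation Q(n+1) = J·Q̅ + K̅·Q, which a few lines of Python can tabulate (the function name is mine, for illustration):

```python
def jk_next_state(j, k, q):
    """Characteristic behaviour of the JK flip flop: Q(n+1) = J·Q' + K'·Q."""
    return (j & (1 - q)) | ((1 - k) & q)

# For each (J, K), show the next state from present states Q = 0 and Q = 1.
for j, k in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(j, k, [jk_next_state(j, k, q) for q in (0, 1)])
# (0,0) -> [0, 1] hold;  (0,1) -> [0, 0] reset
# (1,0) -> [1, 1] set;   (1,1) -> [1, 0] toggle
```

The printed rows reproduce the hold, reset, set, and toggle behaviour described in the bullets above.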
Characteristic Equation
Q(n+1) = J·Q̅n + K̅·Qn
Excitation Table
Qn Qn+1 J K
0 0 0 X
0 1 1 X
1 0 X 1
1 1 X 0
(X = don't care)
Operation Modes of JK Flip Flop
Apart from its basic functionality, there are two essential operating modes in JK Flip Flop: edge-
triggered and level-triggered.
• Edge-Triggered: In this mode, flip flop responds to a signal transition occurring at a clock
pulse. It is commonly used in synchronous systems, where the output changes only when
the clock signal changes from low to high or high to low. The edge-triggered JK Flip
Flop ensures stable output and prevents glitches caused by rapid changes in input values.
• Level-Triggered: Unlike the edge-triggered mode, the level-triggered JK Flip Flop responds to
the input values continuously as long as the clock signal is held at a specific level (high or
low). This mode is mainly used in asynchronous systems or applications where the input
changes are directly reflected in the output.
Applications of JK Flip Flop:
• Counters
• Shift Registers
• Memory Units
• Frequency Division
▪ For a J-K flip-flop, if J = K = 1 and clk = 1 for a long period of time, the output Q will toggle repeatedly for as long as CLK remains high, which makes the output unstable or uncertain.
▪ This is called the race around condition in the J-K flip-flop.
▪ We can overcome this problem by keeping the clock high for a duration shorter than the propagation delay of the flip-flop, but this is difficult to guarantee in practice.
▪ The circuit used to overcome the race around condition is called the Master Slave JK flip flop.
Master Slave JK flip flop
▪ When the clock pulse goes high, the master is enabled and the J and K inputs can affect its state, while the slave is isolated. When the CP goes low, the master is isolated and the slave is enabled.
▪ When the CP goes back to 0, information is transmitted from the master flip-flop to the slave flip-flop and the output is obtained.
▪ The master is positive-level triggered, so it responds first; the slave, being negative edge triggered, responds later.
▪ When J = 0 and K = 1, the master resets, so its Q' = 1. This Q' drives the K input of the slave, and on the negative clock transition the slave resets, copying the master.
▪ When J = 1 and K = 0, the master sets, so its Q = 1. This Q drives the J input of the slave, and on the negative transition of the clock the slave sets, again copying the master.
▪ When J = 1 and K = 1, the state toggles: the master toggles on the positive transition of the clock, and the slave toggles on the negative transition.
▪ When J = 0 and K = 0, both flip-flops are disabled and Q is unchanged.
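A rough behavioural model of the master-slave arrangement (class name mine; it abstracts the two latches into "sample on the high phase, copy on the falling edge") shows that with J = K = 1 the output toggles exactly once per clock pulse, avoiding the race around condition:

```python
class MasterSlaveJK:
    """Pulse-triggered JK model: the master samples J, K while the clock is
    high; on the falling edge the slave copies the master. The output can
    therefore change at most once per clock pulse, even with J = K = 1."""

    def __init__(self):
        self.master = 0
        self.q = 0  # slave output

    def pulse(self, j, k):
        # clock high: master updates from J, K and the slave's current output
        self.master = (j & (1 - self.q)) | ((1 - k) & self.q)
        # clock falls: information is transferred from master to slave
        self.q = self.master
        return self.q

ff = MasterSlaveJK()
print([ff.pulse(1, 1) for _ in range(4)])  # [1, 0, 1, 0]: one toggle per pulse
```

Contrast this with the un-gated case above, where a long high clock would let the output toggle an unpredictable number of times.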
Q 7. What is D Flip Flop? Explain its working.
Ans. D flip flops (data flip flops or delay flip flops) can be designed using SR flip flops by connecting a NOT gate between the S and R inputs and tying them together, so that R is always the complement of S. D flip flops can be used in place of SR flip flops where only the SET and RESET states are needed.
D Flip-Flop Working
Let us look at the possible cases and write them down in our truth table. With the clock held at 1, only two cases are possible: D can be high or low.
Case 1: D = 0
Note: Since one input of Gate4 is 0 and Gate4 is a NAND gate, the output of Gate4 will be 1 irrespective of the other input, as per the property of NAND gates.
Case 2: D = 1
Note: Since one input of Gate3 is 0 and Gate3 is a NAND gate, the output of Gate3 will be 1 irrespective of the other input, as per the property of NAND gates.
We will use this truth table to write the characteristics table for the D flip-flop. In the truth table, you
can see there is only one input D and one output Q(n+1). But in the characteristics table, you will see
there are two inputs D and Qn, and one output Q(n+1).
From the logic diagram above, it is clear that Qn and Qn' are two complementary outputs that also act as inputs for Gate3 and Gate4; hence we consider Qn, i.e. the present state of the flip flop, as an input, and Q(n+1), i.e. the next state, as the output.
After writing the characteristic table we will draw a 2-variable K-map to derive the characteristic
equation.
D Qn Q(n+1)
0 0 0
0 1 0
1 0 1
1 1 1
From the K-map you get 2 pairs. On solving both we get the following characteristic equation:
Q(n+1) = D
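The characteristic equation can be confirmed mechanically: every row of the table above satisfies Q(n+1) = D regardless of Qn. A minimal sketch (function name mine):

```python
def d_next_state(d, q_n):
    """Characteristic equation of the D flip-flop: Q(n+1) = D."""
    return d

# Rows of the characteristic table: (D, Qn) -> Q(n+1)
table = {(0, 0): 0, (0, 1): 0, (1, 0): 1, (1, 1): 1}
assert all(d_next_state(d, q) == out for (d, q), out in table.items())
print("all four rows satisfy Q(n+1) = D")
```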
Advantages
There are several advantages to using a D flip-flop. Some of them are listed below:
• Single input: The D flip-flop has a single data input, which makes it simpler to use and easier
to interface with other digital circuits.
• No feedback loop: The D flip-flop does not have a feedback loop, which eliminates the
possibility of a race condition and makes it more stable than other types of flip-flops.
• No invalid states: It does not have any invalid states, which helps to avoid unpredictable
behavior in digital systems.
• Reduced power consumption: The D flip-flop consumes less power than other types of flip-
flops, making it more energy-efficient.
• Bi-stable operation: Like other flip-flops, the D flip-flop has a bi-stable operation, which
means that it can hold a state indefinitely until it is changed by an input signal.
Limitations
Apart from several advantages, there are some limitations associated with D flip-flops. Some of
them are listed below:
• No feedback: The D flip-flop does not have a feedback path, which means that it cannot be
used for applications that require feedback control, such as servo systems or motor control.
• No toggling: It cannot be used for toggling applications on its own, since its next state simply follows the data input; toggling requires external feedback, as in the T or JK flip-flop.
• Propagation delay: It has a propagation delay, which can lead to timing issues in digital
systems with tight timing constraints.
• Limited scalability: The D flip-flop can be challenging to scale up to more complex digital
systems, as it can lead to increased complexity and the potential for errors.
Applications
Some of the applications of D flip flop in real-world includes:
• Shift registers: D flip-flops can be cascaded together to create shift registers, which are used
to store and shift data in digital systems. Shift registers are commonly used in serial
communication protocols such as UART, SPI, and I2C.
• State machines: It can be used to implement state machines, which are used in digital
systems to control sequences of events. State machines are commonly used in control
systems, automotive applications, and industrial automation.
• Counters: It can be used in conjunction with other digital logic gates to create binary
counters that can count up or down depending on the design. This makes them useful in
real-time applications such as timers and clocks.
• Data storage: D flip-flops can be used to store temporary data in digital systems. They are
often used in conjunction with other memory elements to create more complex storage
systems.
Q 8. What is T Flip Flop? Explain its working.
Ans. The T flip flop is similar to the JK flip flop: tying the J and K inputs together gives a T flip flop. Just like the D flip flop, it has only one external input along with a clock.
T Flip-Flop Working
Let us take a look at the possible cases and write them down in our truth table. With the clock held at 1, only two cases are possible: T can be high or low.
Case 1: Let's say T = 0 and the clock pulse is high, i.e. 1. Then the output of both AND gate 1 and AND gate 2 will be 0, gate 3's output will be Q, and similarly gate 4's output will be Q', so both Q and Q' keep their previous values, which means the Hold state.
Case 2: Let's say T = 1. The output of AND gate 1 is (T · clock · Q), and since T and clock are both 1, it reduces to Q; similarly, the output of AND gate 2 is (T · clock · Q'), which reduces to Q'. These feed the cross-coupled gates 3 and 4, so the new output of gate 3 becomes the complement of the old Q, and the new output of gate 4 becomes the complement of the old Q'. Hence in this case the output toggles, because T = 1.
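Both cases match the T flip-flop characteristic equation Q(n+1) = T ⊕ Q. A short Python sketch (names mine) traces a few clock pulses:

```python
def t_next_state(t, q):
    """Characteristic equation of the T flip-flop: Q(n+1) = T XOR Q."""
    return t ^ q

q, trace = 0, []
for t in (1, 1, 1, 0, 0):  # three toggle pulses followed by two hold pulses
    q = t_next_state(t, q)
    trace.append(q)
print(trace)  # [1, 0, 1, 1, 1]: toggles while T=1, holds while T=0
```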
Characteristic Equation
Q(n+1) = T ⊕ Qn = T·Q̅n + T̅·Qn
Excitation Table
Qn Qn+1 T
0 0 0
0 1 1
1 0 1
1 1 0
There are numerous applications of the T Flip Flop in digital systems, some of which are listed below:
• Counters: T Flip Flops are used in counters, which count the number of events that occur in a digital system.
• Data Storage: T Flip Flops are used to build registers and memory elements that hold data while the circuit is powered.
• Synchronous logic circuits: T flip-flops can be used to implement synchronous logic circuits,
which are circuits that perform operations on binary data based on a clock signal. By
synchronizing the logic circuit’s operations to the clock signal using T flip-flops, the circuit’s
behavior can be made predictable and reliable.
• Frequency division: A T flip-flop with T = 1 divides the frequency of a clock signal by 2. The flip-flop toggles its output once per clock cycle (on the active clock edge), so the output completes one full cycle for every two clock cycles.
• Shift registers: T flip-flops can be used in shift registers which are used to shift binary data in
one direction.
Q 9. What are Combinational Logic Circuits? Explain Adder and Subtractor circuits.
Ans. Combinational logic circuits are circuits that contain different types of logic gates. Simply, a circuit in which different types of logic gates are combined is known as a combinational logic circuit. The output of a combinational circuit is determined by the present combination of inputs, regardless of previous inputs. The input variables, logic gates, and output variables are the basic components of a combinational logic circuit. There are different types of combinational logic circuits, such as the Adder, Subtractor, Decoder, Encoder, Multiplexer, and De-multiplexer.
o At any instant of time, the output of the combinational circuits depends only on the present
input terminals.
o The combinational circuit doesn't have any backup or previous memory. The present state of
the circuit is not affected by the previous state of the input.
o A combinational logic circuit can have any number n of inputs and m of outputs.
Half Adder
A half adder adds two single-bit binary numbers A and B and produces a Sum (S) and a Carry (C).
Operation:
Case 1: A = 0, B = 0;
As per binary addition, 0 + 0 = 0 with no carry bit generated. Hence, S = 0, C = 0.
Case 2: A = 0, B = 1;
As per binary addition, 0 + 1 = 1 with no carry bit generated. Hence, S = 1, C = 0.
Case 3: A = 1, B = 0;
As per binary addition, 1 + 0 = 1 with no carry bit generated. Hence, S = 1, C = 0.
Case 4: A = 1, B = 1;
According to binary addition, 1 + 1 generates a carry bit. Hence, S = 0, C = 1. In other words, 1 + 1 = 2, and the binary value of 2 is 10: Carry = 1 and Sum = 0.
Logical Expression:
For Sum:
Sum = A XOR B
For Carry:
Carry = A AND B
Implementation:
Note: Half adder has only two inputs and there is no provision to add a carry coming from the lower
order bits when multi addition is performed.
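The two expressions above translate directly into code. A minimal half-adder sketch (function name mine) that reproduces all four cases:

```python
def half_adder(a, b):
    """Sum = A XOR B, Carry = A AND B."""
    return a ^ b, a & b  # (sum, carry)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"A={a} B={b} -> S={s} C={c}")
```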
Limitations of Half Adder
1. Limited usefulness: The half adder can add only two single-bit numbers and produce a sum and a carry bit. It cannot perform addition of multi-bit numbers, which requires more complex circuits such as full adders.
2. Lack of carry input: The half adder has no carry input, which limits its usefulness in more complex addition tasks. A carry input is necessary to add multi-bit numbers and to chain multiple adders together.
3. Propagation delay: The half adder circuit has a propagation delay, which is the time it takes for the output to change in response to a change in the input. This can cause timing issues in digital circuits, particularly in high-speed systems.
Applications of Half Adder
1. Arithmetic circuits: Half adders are used in arithmetic circuits to add binary numbers. When multiple half adders are connected in a chain, they can add multi-bit binary numbers.
2. Data processing: Half adders are used in data processing applications such as digital signal processing, data encryption, and error correction.
3. Address decoding: In memory addressing, half adders are used in address decoding circuits to generate the address of a specific memory location.
4. Encoder and decoder circuits: Half adders are used in encoder and decoder circuits for digital communication systems.
5. Multiplexers and demultiplexers: Half adders are used in multiplexers and demultiplexers to select and route data.
6. Counters: Half adders are used in counters to increment the count by one.
Full Adder
A full adder adds three input bits, A, B, and a carry-in D, producing a Sum (S) and a Carry (C).
Operation:
Case 1: A = 0, B = 0, D = 0; 0 + 0 + 0 = 0 with no carry bit generated. Hence, S = 0, C = 0.
Case 2: A = 0, B = 0, D = 1; 0 + 0 + 1 = 1 with no carry bit generated. Hence, S = 1, C = 0.
Case 3: A = 0, B = 1, D = 0; 0 + 1 + 0 = 1 with no carry bit generated. Hence, S = 1, C = 0.
Case 4: A = 0, B = 1, D = 1; 0 + 1 + 1 = 10, generating a carry bit. Hence, S = 0, C = 1.
Case 5: A = 1, B = 0, D = 0; 1 + 0 + 0 = 1 with no carry bit generated. Hence, S = 1, C = 0.
Case 6: A = 1, B = 0, D = 1; 1 + 0 + 1 = 10, generating a carry bit. Hence, S = 0, C = 1.
Case 7: A = 1, B = 1, D = 0; 1 + 1 + 0 = 10, generating a carry bit. Hence, S = 0, C = 1.
Case 8: A = 1, B = 1, D = 1; 1 + 1 + 1 = 11: the sum of the first two bits (0) gets added to the third bit, and the carry is passed on to the MSB. Hence, S = 1, C = 1.
Full Adder logic circuit
With this logic circuit, two bits can be added together, taking a carry from the next lower order of magnitude and sending a carry to the next higher order of magnitude. The logical expressions are Sum = A XOR B XOR D, and Carry = A·B + D·(A XOR B).
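A behavioural sketch of this circuit (function name mine) uses the standard full-adder expressions Sum = A ⊕ B ⊕ D and Carry = A·B + D·(A ⊕ B):

```python
def full_adder(a, b, d):
    """Sum = A XOR B XOR D, Carry = A·B + D·(A XOR B); D is the carry-in."""
    partial = a ^ b
    return partial ^ d, (a & b) | (d & partial)  # (sum, carry)

# Print all eight rows of the full-adder operation table.
for a in (0, 1):
    for b in (0, 1):
        for d in (0, 1):
            s, c = full_adder(a, b, d)
            print(f"A={a} B={b} D={d} -> S={s} C={c}")
```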
Advantages of Full Adder
1. Flexibility: A full adder can add three input bits, making it more flexible than a half adder. It can also be used to add multi-bit numbers by chaining multiple full adders together.
2. Carry input: The full adder has a carry input, which permits it to perform addition of multi-bit numbers and to chain multiple adders together.
3. Speed: The full adder operates at a very high speed, making it suitable for use in high-speed digital circuits.
Limitations of Full Adder
1. Complexity: The full adder is more complex than a half adder and requires more components, such as XOR, AND, and OR gates. It is also more difficult to implement and design.
2. Propagation delay: The full adder circuit has a propagation delay, which is the time it takes for the output to change in response to a change in the input. This can cause timing issues in digital circuits, particularly in high-speed systems.
Applications of Full Adder
1. Arithmetic circuits: Full adders are used in arithmetic circuits to add binary numbers. When multiple full adders are connected in a chain, they can add multi-bit binary numbers.
2. Data processing: Full adders are used in data processing applications such as digital signal processing, data encryption, and error correction.
3. Counters: Full adders are used in counters to increment or decrement the count by one.
4. Multiplexers and demultiplexers: Full adders are used in multiplexers and demultiplexers to select and route data.
5. Memory addressing: Full adders are used in memory addressing circuits to generate the address of a specific memory location.
6. ALUs: Full adders are a fundamental part of the Arithmetic Logic Units (ALUs) used in microprocessors and digital signal processors.
Half Subtractor
A half subtractor subtracts a single-bit number B from another bit A, producing a Difference (D) and a Borrow (P).
Operation:
(D = Difference, P = Borrow)
Case 1: A = 0, B = 0; 0 − 0 = 0 with no borrow. Hence, D = 0, P = 0.
Case 2: A = 0, B = 1; 0 − 1 requires a borrow from the next stage. Hence, D = 1, P = 1.
Case 3: A = 1, B = 0; 1 − 0 = 1 with no borrow. Hence, D = 1, P = 0.
Case 4: A = 1, B = 1; 1 − 1 = 0 with no borrow. Hence, D = 0, P = 0.
Implementation
Logical Expression
Difference = A XOR B
Borrow = (NOT A) AND B
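A minimal sketch of these two expressions (function name mine), covering all four cases:

```python
def half_subtractor(a, b):
    """Difference = A XOR B, Borrow = (NOT A) AND B."""
    return a ^ b, (1 - a) & b  # (difference, borrow)

for a in (0, 1):
    for b in (0, 1):
        d, p = half_subtractor(a, b)
        print(f"A={a} B={b} -> D={d} P={p}")
```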
Full Subtractor
A full subtractor subtracts a subtrahend bit and a borrow-in from a minuend bit. The subtraction proceeds as follows:
1. First, we need to convert the binary numbers to their two's complement form if we are subtracting a negative number.
2. Next, we compare the bits in the minuend and subtrahend at the corresponding positions. If the
subtrahend bit is greater than or equal to the minuend bit, we need to borrow from the previous stage
(if there is one) to subtract the subtrahend bit from the minuend bit.
3. We subtract the two bits along with the borrow-in to get the difference bit. If the minuend bit is
greater than or equal to the subtrahend bit along with the borrow-in, then the difference bit is 1,
otherwise it is 0.
4. We then calculate the borrow-out bit by comparing the minuend and subtrahend bits. If the
minuend bit is less than the subtrahend bit along with the borrow-in, then we need to borrow for the
next stage, so the borrow-out bit is 1, otherwise it is 0.
The circuit diagram for a full subtractor usually consists of two half-subtractors and an additional OR
gate to calculate the borrow-out bit. The inputs and outputs of the full subtractor are as follows:
Inputs:
A: minuend bit
B: subtrahend bit
Bin: borrow-in bit from the previous stage
Outputs:
Diff: difference bit
Bout: borrow-out bit for the next stage
Truth Table –
From the above table, we can draw the K-maps for "difference" and "borrow".
Logical expression for difference –
Diff = A XOR B XOR Bin
Logical expression for borrow –
Bout = A'B + A'Bin + B·Bin
OR
Bout = A'B + Bin·(A XNOR B)
Implementation of Full Subtractor using Half Subtractors – Two Half Subtractors and an OR gate are required to implement a Full Subtractor.
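That construction can be modelled directly: two half subtractors plus an OR gate (names mine, following the description above).

```python
def half_subtractor(a, b):
    return a ^ b, (1 - a) & b  # (difference, borrow)

def full_subtractor(a, b, b_in):
    """Two half subtractors plus an OR gate for the borrow-out."""
    d1, p1 = half_subtractor(a, b)        # first stage: A - B
    diff, p2 = half_subtractor(d1, b_in)  # second stage: subtract the borrow-in
    return diff, p1 | p2                  # OR gate combines the two borrows

# Spot check against ordinary arithmetic: A - B - Bin
for a in (0, 1):
    for b in (0, 1):
        for b_in in (0, 1):
            diff, b_out = full_subtractor(a, b, b_in)
            total = a - b - b_in
            assert diff == total % 2 and b_out == (1 if total < 0 else 0)
print("full subtractor matches A - B - Bin for all 8 input rows")
```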
Parallel Adder –
A single full adder performs the addition of two one-bit numbers and an input carry. But a Parallel
Adder is a digital circuit capable of finding the arithmetic sum of two binary numbers that is greater
than one bit in length by operating on corresponding pairs of bits in parallel. It consists of full adders
connected in a chain where the output carry from each full adder is connected to the carry input of the
next higher order full adder in the chain. A n bit parallel adder requires n full adders to perform the
operation. So for the two-bit number, two adders are needed while for four bit number, four adders are
needed and so on. Parallel adders normally incorporate carry lookahead logic to ensure that carry
propagation between subsequent stages of addition does not limit addition speed.
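A behavioural model of a 4-bit ripple-carry parallel adder (names mine; this sketch omits the carry-lookahead optimisation mentioned above and simply ripples the carry from stage to stage):

```python
def full_adder(a, b, c_in):
    s = a ^ b ^ c_in
    return s, (a & b) | (c_in & (a ^ b))  # (sum, carry-out)

def parallel_adder(a_bits, b_bits):
    """Ripple-carry adder: each stage's carry-out feeds the next stage's
    carry-in. Bit lists are least-significant bit first."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

# 6 + 7 = 13 with 4-bit operands, LSB first: 0110 + 0111 -> 1101
print(parallel_adder([0, 1, 1, 0], [1, 1, 1, 0]))  # ([1, 0, 1, 1], 0)
```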
Parallel Subtractor –
A Parallel Subtractor is a digital circuit capable of finding the arithmetic difference of two binary
numbers that is greater than one bit in length by operating on corresponding pairs of bits in parallel.
The parallel subtractor can be designed in several ways including combination of half and full
subtractors, all full subtractors or all full adders with subtrahend complement input.
Working of Parallel Subtractor –
1. As shown in the figure, the parallel binary subtractor is formed by combination of all full
adders with subtrahend complement input.
2. This operation considers that the addition of minuend along with the 2’s complement of the
subtrahend is equal to their subtraction.
3. Firstly the 1’s complement of B is obtained by the NOT gate and 1 can be added through the
carry to find out the 2’s complement of B. This is further added to A to carry out the
arithmetic subtraction.
4. The process continues till the last full adder FAn uses the carry bit Cn to add with its input An
and 2’s complement of Bn to generate the last bit of the output along last carry bit Cout.
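Steps 1-4 can be modelled by complementing each subtrahend bit and setting the initial carry-in to 1, which supplies the '+1' of the 2's complement (names mine):

```python
def full_adder(a, b, c_in):
    s = a ^ b ^ c_in
    return s, (a & b) | (c_in & (a ^ b))

def parallel_subtractor(a_bits, b_bits):
    """A - B computed as A + (1's complement of B) + 1; the initial carry-in
    of 1 supplies the '+1' of the 2's complement. Bits are LSB first."""
    carry, out = 1, []  # carry-in = 1 completes the 2's complement
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, 1 - b, carry)  # NOT gate on each B bit
        out.append(s)
    return out, carry  # final carry-out of 1 indicates a non-negative result

# 9 - 5 = 4 with 4-bit operands, LSB first: 1001 - 0101 -> 0100
print(parallel_subtractor([1, 0, 0, 1], [1, 0, 1, 0]))  # ([0, 0, 1, 0], 1)
```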
Q 10. What are Registers? Explain the different types of registers.
Ans. Registers are a type of computer memory used to quickly accept, store, and transfer data and instructions that are being used immediately by the CPU. The registers used by the CPU are often termed processor registers.
A processor register may hold an instruction, a storage address, or any data (such as
bit sequence or individual characters).
• Computer registers are designated by upper case letters (and optionally followed by
digits or letters) to denote the function of the register.
• For example, the register that holds an address for the memory unit is usually called
a memory address register and is designated by the name MAR.
• Other designations for registers are PC (for program counter), IR (for instruction
register), and R1 (for processor register).
3. Memory data register (MDR): All the information that is supposed to be written to, or
read from, a certain memory address is stored here.
7. Condition code registers: These have different flags that depict the status of operations.
These registers set the flags accordingly if the result of an operation caused zero or negative.
12. Memory buffer register (MBR): Memory buffer registers are used to store data content
or memory commands used to write on the disk. Their basic function is to save data
called from memory. MBR is very similar to MDR.
13. Stack control registers (SCR): A stack is a set of memory locations where data is stored
and retrieved in a certain order, also called last in, first out (LIFO): we can only retrieve
the item at the second position after retrieving the one above it. Stack control registers are
mainly used to manage the stacks in the computer.
SP and BP are stack control registers. We can also use DI, SI, SP, and BP as 2-byte or
4-byte registers; EDI, ESI, ESP, and EBP are the 4-byte registers.
14. Flag register (FR): Flag registers are used to indicate a particular condition. The size of
a flag register is 1-2 bytes, and it is further divided into 8 bits. Each bit defines a
condition or a flag.
Basic flag registers -
Zero flags
Carry flag
Parity flag
Sign flag
Overflow flag.
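The basic flags listed above can be illustrated with a small Python sketch. The function name and the dictionary layout are assumptions for illustration; real flag registers pack these bits into fixed positions of one hardware register.

```python
def compute_flags(result):
    """Set zero, sign, carry and parity flags for an 8-bit result."""
    r8 = result & 0xFF
    return {
        "zero":   r8 == 0,                        # result was all zeros
        "carry":  result > 0xFF or result < 0,    # carried out of 8 bits
        "sign":   (r8 >> 7) & 1 == 1,             # MSB of the 8-bit result
        "parity": bin(r8).count("1") % 2 == 0,    # even number of 1 bits
    }

flags = compute_flags(200 + 100)   # 300 does not fit in 8 bits, so carry is set
```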
Ans.
• Information transfer from one register to another is designated in symbolic form by
means of a replacement operator.
• The statement R2← R1 denotes a transfer of the content of register R1 into register
R2.
• It designates a replacement of the content of R2 by the content of R1.
• By definition, the content of the source register R1 does not change after the
transfer.
• If we want the transfer to occur only under a predetermined control condition then
it can be shown by an if-then statement. if (P=1) then R2← R1.
• P is the control signal generated by a control section.
• We can separate the control variables from the register transfer operation by
specifying a Control Function.
• Control function is a Boolean variable that is equal to 0 or 1.
• The control function is included in the statement as P: R2← R1
• The control condition terminated by a colon implies that the transfer operation is
executed by the hardware only if P=1.
• Every statement written in a register transfer notation implies a hardware
construction for implementing the transfer.
• Figure 4-2 shows the block diagram that depicts the transfer from R1 to R2.
• The n outputs of register R1 are connected to the n inputs of register R2.
• The letter n will be used to indicate any number of bits for the register. It will be
replaced by an actual number when the length of the register is known.
• Register R2 has a load input that is activated by the control variable P.
• It is assumed that the control variable is synchronized with the same clock as the one
applied to the register.
• As shown in the timing diagram, P is activated in the control section by the rising
edge of a clock pulse at time t.
• The next positive transition of the clock at time t + 1 finds the load input active and
the data inputs of R2 are then loaded into the register in parallel.
• P must go back to 0 at time t+1; otherwise, the transfer will occur with every clock
pulse transition while P remains active.
• Even though the control condition such as P becomes active just after time t, the
actual transfer does not occur until the register is triggered by the next positive
transition of the clock at time t +1.
• The basic symbols of the register transfer notation are listed in below table.
• A comma is used to separate two or more operations that are executed at the same
time.
• The statement T : R2← R1, R1← R2 (exchange operation) denotes an operation that
exchanges the contents of two registers during one common clock pulse provided that
T=1.
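The notation above can be mimicked in Python. This is a minimal sketch in which registers are plain variables and P and T stand in for control signals from the control section; it ignores clocking entirely.

```python
R1, R2 = 10, 0
P = 1

# P: R2 <- R1 -- the transfer executes only when the control function P = 1,
# and the source register R1 is unchanged afterwards.
if P == 1:
    R2 = R1

# T: R2 <- R1, R1 <- R2 -- an exchange during one common clock pulse;
# the two simultaneous transfers are modelled with tuple assignment.
R1, R2, T = 3, 7, 1
if T == 1:
    R1, R2 = R2, R1
```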
Ans.
• A more efficient scheme for transferring information between registers in a
multiple-register configuration is a Common Bus System.
• A common bus consists of a set of common lines, one for each bit of a register.
• Control signals determine which register is selected by the bus during each
particular register transfer.
• Different ways of constructing a Common Bus System
➢ Using Multiplexers
➢ Using Tri-state Buffers
➢ The bus consists of four 4 x 1 multiplexers each having four data inputs, 0
through 3, and two selection inputs, S1 and S0.
➢ For example, output 1 of register A is connected to input 0 of MUX 1 because this
input is labelled A1.
➢ The diagram shows that the bits in the same significant position in each register
are connected to the data inputs of one multiplexer to form one line of the bus.
➢ Thus MUX 0 multiplexes the four 0 bits of the registers, MUX 1 multiplexes the
four 1 bits of the registers, and similarly for the other two bits.
➢ The two selection lines S1 and S0 are connected to the selection inputs of all four
multiplexers.
➢ The selection lines choose the four bits of one register and transfer them into the
four-line common bus.
➢ When S1S0 = 00, the 0 data inputs of all four multiplexers are selected and
applied to the outputs that form the bus.
➢ This causes the bus lines to receive the content of register A since the outputs of
this register are connected to the 0 data inputs of the multiplexers.
➢ Similarly, register B is selected if S1S0 = 01, and so on.
➢ Table 4-2 shows the register that is selected by the bus for each of the four
possible binary value of the selection lines.
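The multiplexer-based bus above can be sketched in Python. This is an illustration of the selection mechanism only, assuming four 4-bit registers stored as bit lists (LSB first); the function names are not from any real library.

```python
def mux4(inputs, s1, s0):
    """A 4x1 multiplexer: route input number (S1 S0) to the output."""
    return inputs[s1 * 2 + s0]

def bus_select(A, B, C, D, s1, s0):
    """One MUX per bit position; all four share the selection lines S1, S0."""
    regs = [A, B, C, D]
    return [mux4([r[i] for r in regs], s1, s0) for i in range(4)]

A, B = [1, 0, 1, 0], [0, 1, 1, 0]
C, D = [1, 1, 0, 0], [0, 0, 0, 1]
bus = bus_select(A, B, C, D, 0, 1)   # S1S0 = 01 selects register B
```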
Memory Transfer:
Ans. In computer central processing units, micro-operations (also known as micro-ops) are the
functional or atomic, operations of a processor. These are low level instructions used in some
designs to implement complex machine instructions. They generally perform operations on data
stored in one or more registers.
Types of Micro-operations:
Arithmetic Micro-operations:
We can perform arithmetic operations on the numeric data which is stored inside the registers.
Example :
R3 <- R1 + R2
The value in register R1 is added to the value in the register R2 and then the sum is transferred into
register R3. Similarly, other arithmetic micro-operations are performed on the registers.
• Addition –
In addition micro-operation, the value in register R1 is added to the value in the register R2
and then the sum is transferred into register R3.
• Subtraction –
In subtraction micro-operation, the contents of register R2 are subtracted from contents of
the register R1, and then the result is transferred into R3.
There is another way of doing the subtraction. In this, 2’s complement of R2 is added to R1, which is
equivalent to R1 – R2, and then the result is transferred into register R3.
• Increment –
In Increment micro-operation, the value inside the R1 register is increased by 1.
• Decrement –
In Decrement micro-operation, the value inside the R1 register is decreased by 1.
• 1’s Complement –
In this micro-operation, the complement of the value inside the register R1 is taken.
• 2’s Complement –
In this micro-operation, the complement of the value inside the register R2 is taken and then
1 is added to the value and then the final result is transferred into the register R2. This
process is also called Negation. It is equivalent to -R2.
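The micro-operations listed above can be sketched in Python on 8-bit register values. The mask keeps results within the register width; register names R3-R8 are illustrative.

```python
MASK = 0xFF  # keep results to the 8-bit register width

R1, R2 = 25, 10

R3 = (R1 + R2) & MASK                   # add:       R3 <- R1 + R2
R4 = (R1 - R2) & MASK                   # subtract:  R4 <- R1 - R2
R5 = (R1 + ((~R2 + 1) & MASK)) & MASK   # subtract via 2's complement of R2
R6 = (R1 + 1) & MASK                    # increment: R6 <- R1 + 1
R7 = (~R1) & MASK                       # 1's complement of R1
R8 = ((~R2) + 1) & MASK                 # 2's complement (negation) of R2
```

Note that R4 and R5 come out equal, illustrating that adding the 2’s complement of R2 is equivalent to subtracting R2.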
Arithmetic micro-operations are the basic building blocks of arithmetic operations performed by a
computer’s central processing unit (CPU). These micro-operations are executed on the data stored in
registers, which are small, high-speed storage units within the CPU.
There are several types of arithmetic micro-operations that can be performed on register data,
including:
1. Addition: This micro-operation adds two values together and stores the result in a register.
2. Subtraction: This micro-operation subtracts one value from another and stores the result in
a register.
3. Multiplication: This micro-operation multiplies two values together and stores the result in a
register.
4. Division: This micro-operation divides one value by another and stores the quotient and
remainder in separate registers.
5. Shift: This micro-operation shifts the bits in a register to the left or right, depending on the
direction specified.
These arithmetic micro-operations are used in combination with logical micro-operations, such as
AND, OR, and NOT, to perform more complex calculations and manipulate data within the CPU.
Logic Micro-operations:
• Logic microoperations specify binary operations for strings of bits stored in registers.
• These operations consider each bit of the register separately and treat them as
binary variables.
• For example, the exclusive-OR microoperation with the contents of two registers R1
and R2 is symbolized by the statement P: R1 ← R1 ⊕ R2.
• It specifies a logic microoperation to be executed on the individual bits of the
registers provided that the control variable P = 1.
• There are 16 different logic operations that can be performed with two binary
variables.
• They can be determined from all possible truth tables obtained with two binary
variables as shown in Table 4-5.
• The 16 Boolean functions of two variables x and y are expressed in algebraic form in
the first column of Table 4-6.
• The 16 logic microoperations are derived from these functions by replacing variable
x by the binary content of register A and variable y by the binary content of register
B.
• The logic micro-operations listed in the second column represent a relationship
between the binary content of two registers A and B.
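A few of the 16 logic micro-operations can be sketched in Python, applied bitwise to the contents of two 8-bit registers A and B; P stands in for the control variable.

```python
A, B = 0b10101010, 0b11001100
P = 1

if P == 1:                    # P: execute the micro-operations
    and_ab = A & B            # AND micro-operation
    or_ab  = A | B            # OR micro-operation
    xor_ab = A ^ B            # exclusive-OR: A (+) B
    not_a  = (~A) & 0xFF      # complement of A, kept to 8 bits
```

Each result bit depends only on the corresponding bits of A and B, which is exactly the "treat each bit as a separate binary variable" behaviour described above.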
Shift Microoperations:
Ans. Instruction Code: a group of bits that instructs the computer to perform a specific operation.
An instruction code is usually divided into two parts: opcode and address (operand).
Ans. Computer registers are high-speed storage locations that hold data and instructions being
processed by the CPU. They enable quick access to data for arithmetic and logic operations, and
store memory addresses, program counters, and other control information needed for program
execution. Registers also help optimize instruction execution by reducing the need to access slower
memory locations, which improves overall system performance and efficiency.
• The data register (DR) holds the operand read from memory.
• The accumulator (AC) register is a general purpose processing register.
• The instruction read from memory is placed in the instruction register (IR).
• The temporary register (TR) is used for holding temporary data during the
processing.
• The memory address register (AR) has 12 bits since this is the width of a memory
address.
• The program counter (PC) also has 12 bits and it holds the address of the next
instruction to be read from memory after the current instruction is executed.
• Two registers are used for input and output.
The input register (INPR) receives an 8-bit character from an input device. The
output register (OUTR) holds an 8-bit character for an output device.
Ans.
The basic computer has eight registers, a memory unit, and a control unit
• Paths must be provided to transfer information from one register to another and
between memory and registers.
• A more efficient scheme for transferring information in a system with many registers
is to use a common bus.
• The connection of the registers and memory of the basic computer to a common bus
system is shown in Fig. 5-4.
• The outputs of seven registers and memory are connected to the common bus.
• The specific output that is selected for the bus lines at any given time is determined
from the binary value of the selection variables S2, S1, and S0.
• The number along each output shows the decimal equivalent of the required binary
selection.
• For example, the number along the output of DR is 3. The 16-bit outputs of DR are
placed on the bus lines when S2S1S0 = 011.
• The lines from the common bus are connected to the inputs of each register and the
data inputs of the memory.
• The particular register whose LD (load) input is enabled receives the data from the
bus during the next clock pulse transition.
• The memory receives the contents of the bus when its write input is activated.
• The memory places its 16-bit output onto the bus when the read input is activated
and S2S1S0 = 111.
• Two registers, AR and PC, have 12 bits each since they hold a memory address.
• When the contents of AR or PC are applied to the 16-bit common bus, the four most
significant bits are set to 0's.
• When AR or PC receives information from the bus, only the 12 least significant bits
are transferred into the register.
• The input register INPR and the output register OUTR have 8 bits each.
• They communicate with the eight least significant bits in the bus.
• INPR is connected to provide information to the bus but OUTR can only receive
information from the bus.
• This is because INPR receives a character from an input device which is then
transferred to AC.
• OUTR receives a character from AC and delivers it to an output device.
• Five registers have three control inputs: LD (load), INR (increment), and CLR (clear).
• This type of register is equivalent to a binary counter with parallel load and
synchronous clear.
• Two registers have only a LD input.
• The input data and output data of the memory are connected to the common bus,
but the memory address is connected to AR.
• Therefore, AR must always be used to specify a memory address.
• The 16 inputs of AC come from an adder and logic circuit. This circuit has three sets
of inputs.
➢ One set of 16-bit inputs come from the outputs of AC.
➢ Another set of 16-bit inputs come from the data register DR.
➢ The result of an addition is transferred to AC and the end carry-out of the
addition is transferred to flip-flop E (extended AC bit).
➢ A third set of 8-bit inputs come from the input register INPR.
• The content of any register can be applied onto the bus and an operation can be
performed in the adder and logic circuit during the same clock cycle.
• For example, the two microoperations DR<-AC and AC <- DR can be executed at the
same time.
• This can be done by placing the content of AC on the bus (with S2S1S0 = 100),
enabling the LD (load) input of DR, transferring the content of DR through the adder
and logic circuit into AC, and enabling the LD (load) input of AC, all during the same
clock cycle.
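The bus selection described above can be sketched in Python. The decimal source numbers follow the text (DR is 3, AC is 4, memory is selected by 111); the dictionary and function names are illustrative.

```python
# The binary value of S2 S1 S0 determines which unit drives the 16-bit bus.
SOURCES = {1: "AR", 2: "PC", 3: "DR", 4: "AC", 5: "IR", 6: "TR", 7: "MEM"}

def bus_source(s2, s1, s0):
    return SOURCES[s2 * 4 + s1 * 2 + s0]

src = bus_source(0, 1, 1)   # S2S1S0 = 011 places DR on the bus

# The simultaneous microoperations DR <- AC and AC <- DR in one clock
# cycle can be modelled with tuple assignment:
DR, AC = 5, 9
DR, AC = AC, DR
```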
Ans.
The basic computer has a 16-bit instruction register (IR) which can denote either a memory
reference, register reference, or input-output instruction.
1. Memory Reference – These instructions refer to a memory address as an operand.
The other operand is always the accumulator. The format specifies a 12-bit address, a 3-bit opcode
(other than 111) and a 1-bit addressing mode for direct and indirect addressing.
Example – IR
register contains = 0001XXXXXXXXXXXX, i.e. ADD. After fetching and decoding the
instruction we find out that it is a memory reference instruction for the ADD operation.
Hence, DR ← M[AR]
AC ← AC + DR, SC ← 0
2. Register Reference – These instructions perform operations on the accumulator
register and need no memory operand. The opcode is 111 with I = 0.
Example – IR
register contains = 0111001000000000, i.e. CMA. After the fetch and decode cycle we
find out that it is a register reference instruction for complement accumulator.
Hence, AC ← ~AC
3. Input-Output – These instructions communicate with input-output devices. The
opcode is 111 with I = 1.
Example – IR
register contains = 1111100000000000, i.e. INP. After the fetch and decode cycle we
find out that it is an input/output instruction for inputting a character. Hence, INPUT
character from peripheral device.
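The decoding of the three instruction types above can be sketched in Python: bit 15 is the addressing-mode bit I, bits 14-12 the opcode, and bits 11-0 the address. The function name is illustrative.

```python
def decode(ir):
    """Split a 16-bit instruction word and classify it."""
    i      = (ir >> 15) & 0x1    # addressing-mode bit I
    opcode = (ir >> 12) & 0x7    # 3-bit opcode
    addr   = ir & 0xFFF          # 12-bit address field
    if opcode != 0b111:
        kind = "memory-reference"
    elif i == 0:
        kind = "register-reference"
    else:
        kind = "input-output"
    return i, opcode, addr, kind

# 0001 0000 0000 0011 -> ADD (direct) with address 3:
fields = decode(0b0001000000000011)
```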
Basic computer instructions are the fundamental operations that a computer can perform. These
instructions are executed by the central processing unit (CPU), and they form the basis for
more complex operations. Some examples of basic computer instructions include:
1. Load: This instruction moves data from memory to a CPU register.
2. Store: This instruction moves data from a CPU register to memory.
3. Add: This instruction adds two values and stores the result in a register.
4. Subtract: This instruction subtracts one value from another and stores the result in a register.
5. Multiply: This instruction multiplies two values and stores the result in a register.
6. Divide: This instruction divides one value by another and stores the result in a register.
7. Branch: This instruction changes the program counter to a specified address, which is
used to implement conditional and unconditional jumps.
Ans. The instruction formats are a sequence of bits (0 and 1). These bits, when grouped, are known
as fields. Each field of the machine provides specific information to the CPU related to the operation
and location of the data.
The instruction format also defines the layout of the bits for an instruction. It can be of variable
lengths with multiple numbers of addresses. These address fields in the instruction format vary as
per the organization of the registers in the CPU. The formats supported by the CPU depend upon the
Instructions Set Architecture implemented by the processor.
Zero Address Instruction –
The location of the operands is implicitly represented because this instruction lacks an operand
field. These instructions are supported by stack-organized computer systems. It is necessary to
translate the arithmetic expression into reverse Polish notation in order to evaluate it.
Example of Zero address instruction: Consider the actions below, which demonstrate how the
expression X = (A + B) (C + D) will be formatted for a stack-organized computer.
PUSH A TOS ← A
PUSH B TOS ← B
ADD TOS ← (A + B)
PUSH C TOS ← C
PUSH D TOS ← D
ADD TOS ← (C + D)
MUL TOS ← (C + D) ∗ (A + B)
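The stack program above can be simulated with a short Python sketch; the memory values are illustrative, and `run` is just an interpreter for the three operations used.

```python
def run(program, memory):
    """Interpret PUSH/ADD/MUL instructions on a stack machine."""
    stack = []
    for op, *arg in program:
        if op == "PUSH":
            stack.append(memory[arg[0]])   # TOS <- operand
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "MUL":
            stack.append(stack.pop() * stack.pop())
    return stack[-1]                       # TOS holds the result

memory = {"A": 2, "B": 3, "C": 4, "D": 5}
program = [("PUSH", "A"), ("PUSH", "B"), ("ADD",),
           ("PUSH", "C"), ("PUSH", "D"), ("ADD",),
           ("MUL",)]
X = run(program, memory)   # (2 + 3) * (4 + 5) = 45
```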
One Address Instruction –
This instruction performs data manipulation tasks using an implied accumulator. An accumulator
is a register that the CPU uses to carry out arithmetic and logical processes. The accumulator is
implied in one-address instructions, so it doesn’t need an explicit reference. Although addition
and subtraction would normally require a second register, here we assume that the accumulator
holds the result of all the operations.
LOAD A AC ← M [A]
ADD B AC ← AC + M [B]
STORE T M [T] ← AC
LOAD C AC ← M [C]
ADD D AC ← AC + M [D]
MUL T AC ← AC ∗ M [T]
STORE X M [X] ← AC
All actions involve a memory operand and the accumulator (AC) register.
Any memory address is M[].
M[T] points to a temporary memory spot where the interim outcome is kept.
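The accumulator sequence for X = (A + B) * (C + D) can be simulated line by line in Python (the ADD B step completes the (A + B) term; the memory contents are illustrative).

```python
M = {"A": 2, "B": 3, "C": 4, "D": 5}   # illustrative memory contents

AC = M["A"]            # LOAD A  : AC <- M[A]
AC = AC + M["B"]       # ADD B   : AC <- AC + M[B]
M["T"] = AC            # STORE T : M[T] <- AC   (interim result A + B)
AC = M["C"]            # LOAD C  : AC <- M[C]
AC = AC + M["D"]       # ADD D   : AC <- AC + M[D]
AC = AC * M["T"]       # MUL T   : AC <- AC * M[T]
M["X"] = AC            # STORE X : M[X] <- AC   -> (2 + 3) * (4 + 5) = 45
```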
There is only one operand field in this instruction format. To transfer data, this format
employs two unique instructions, namely LOAD and STORE.
Two Address Instruction –
The majority of commercial computers use this instruction format. There are two operand fields
in this address format; registers or memory addresses can be used in the two address fields.
MOV X, R1 M [X] ← R1
The MOV instruction moves the operand from the processor register R1 to the memory
location X.
Three Address Instruction –
A three-address instruction must have three operand fields in its format. These three fields can
be either registers or memory locations.
Ans. A program residing in the memory unit of a computer consists of a sequence of instructions.
These instructions are executed by the processor by going through a cycle for each instruction.
Initiating Cycle
During this phase, the computer system boots up and the Operating System is loaded into
main memory. It begins when the computer system starts.
Fetching of Instruction
The first phase is instruction retrieval. Every instruction executed by the central processing
unit begins with a fetch. During this phase, the CPU sends the contents of the PC to the MAR and
then issues a READ command on the control bus. In response, the memory returns the instruction
stored at that address on the data bus. The CPU then copies the data from the data bus into the
MBR and then into the instruction register. The program counter is incremented to point to the
next memory location, allowing the next instruction to be fetched from memory.
Decoding of Instruction
The second phase is instruction decoding. During this step, the CPU determines which
instruction has been fetched and what action the instruction requires. The instruction's
opcode is retrieved, and the CPU decodes the related operation that must be performed for
the instruction.
Reading of Effective Address
The third phase is the reading of an effective address. The decision about the operation is made
during this phase. The operation can be a memory-type operation or a non-memory-type
operation. Direct memory instruction and indirect memory instruction are the two types of
memory instruction available.
Execution of Instruction
The last step is to carry out the instructions. The instruction is finally carried out at this stage. The
instruction is carried out, and the result is saved in the register. The CPU gets prepared for the
execution of the next instruction after the completion of each instruction. The execution time of
each instruction is calculated, and this information is used to determine the processor's processing
speed.
Memory Reference
If D7 = 0, the opcode will be 000 through 110.
If D7 = 0 and I = 1, the addressing is indirect; if D7 = 0 and I = 0, it is direct.
For an indirect address, the microoperation AR ← M[AR] must be executed first.
Register Reference / I/O
If D7 = 1 and I = 0 – register reference.
If D7 = 1 and I = 1 – input/output.
The microoperation for the indirect address condition can be symbolized by the register transfer
statement
AR ← M [AR]
Q7. What are Addressing Modes? Explain different types of Addressing Modes.
Ans. The term addressing modes refers to how the operand of an instruction is specified. The
addressing mode specifies a rule for interpreting or modifying the address field of the instruction
before the operand is executed.
Immediate Addressing Mode-
In this addressing mode, the operand is specified in the instruction itself.
Examples-
• ADD 10 will increment the value stored in the accumulator by 10.
• MOV R #20 initializes register R to a constant value 20.
Direct Addressing Mode-
In this addressing mode, the address field of the instruction contains the effective address of the
operand. Only one reference to memory is required to fetch the operand. It is also called
absolute addressing mode.
Example-
• ADD X will increment the value stored in the accumulator by the value stored at
memory location X. AC ← AC + [X]
Indirect Addressing Mode-
In this addressing mode, the address field of the instruction specifies the address of the memory
location that contains the effective address of the operand. Two references to memory are required
to fetch the operand.
Example-
• ADD X will increment the value stored in the accumulator by the value stored at
memory location specified by X. AC ← AC + [[X]]
Register Direct Addressing Mode-
In this addressing mode, the address field of the instruction refers to a CPU register that
contains the operand.
• This addressing mode is similar to direct addressing mode; the only difference is
that the address field of the instruction refers to a CPU register instead of main memory.
Example-
ADD R will increment the value stored in the accumulator by the content of register R.
AC ← AC + [R]
Register Indirect Addressing Mode-
In this addressing mode, the address field of the instruction refers to a CPU register that contains
the effective address of the operand. Only one reference to memory is required to fetch the operand.
• This addressing mode is similar to indirect addressing mode; the only difference is
that the address field of the instruction refers to a CPU register.
Example-
ADD R will increment the value stored in the accumulator by the content of the memory
location specified in register R.
AC ← AC + [[R]]
Relative Addressing Mode-
• Effective address of the operand is obtained by adding the content of the program counter
with the address part of the instruction.
Effective Address
= Content of Program Counter + Address part of the instruction
NOTE:
• Program counter (PC) always contains the address of the next instruction to be executed.
• After fetching the address of the instruction, the value of program counter immediately
increases.
• The value increases irrespective of whether the fetched instruction has completely executed
or not.
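The effective-address rules for direct, indirect, and relative addressing can be contrasted in a short Python sketch; the memory contents and addresses are illustrative.

```python
memory = {100: 500, 500: 77, 600: 42}   # address -> content

def direct(addr):
    """EA = address field; one memory reference."""
    return memory[addr]

def indirect(addr):
    """EA = memory[address field]; two memory references."""
    return memory[memory[addr]]

def relative(pc, offset):
    """EA = content of PC + address part of the instruction."""
    return memory[pc + offset]

operand_direct   = direct(100)        # memory[100]
operand_indirect = indirect(100)      # memory[memory[100]] = memory[500]
operand_relative = relative(580, 20)  # memory[580 + 20] = memory[600]
```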
Base Register Addressing Mode-
• Effective address of the operand is obtained by adding the content of the base register with
the address part of the instruction.
Auto-Increment Addressing Mode-
• This addressing mode is a special case of Register Indirect Addressing Mode where-
Effective Address = Content of Register
• After accessing the operand, the content of the register is automatically incremented by
step size ‘d’.
Example-
Assume operand size = 2 bytes.
Here,
• After fetching the operand 6B, the register RAUTO will be automatically
incremented by 2.
• Then, the updated value of RAUTO will be 3300 + 2 = 3302.
• At memory address 3302, the next operand will be found.
NOTE:
In auto-increment addressing mode,
• First, the operand value is fetched.
• Then, the register RAUTO value is incremented by step size ‘d’.
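The fetch-then-increment order can be sketched in Python; the addresses and stored values follow the example above, and the function name is illustrative.

```python
memory = {3300: 0x6B, 3302: 0x6C}   # illustrative memory contents
RAUTO, d = 3300, 2                  # register and step size (2-byte operands)

def fetch_auto_increment():
    global RAUTO
    operand = memory[RAUTO]   # first: fetch the operand at the current address
    RAUTO += d                # then: increment the register by the step size
    return operand

first  = fetch_auto_increment()   # fetches 6B, RAUTO becomes 3302
second = fetch_auto_increment()   # fetches the next operand at 3302
```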
Auto-Decrement Addressing Mode-
• This addressing mode is again a special case of Register Indirect Addressing Mode where-
Example-
Assume operand size = 2 bytes.
Here,
• First, the register RAUTO will be decremented by 2.
• Then, the updated value of RAUTO will be 3302 – 2 = 3300.
• At memory address 3300, the operand will be found.
NOTE-
Ans. We store the binary information received through an external device in the memory unit. The
information transferred from the CPU to external devices originates from the memory unit. Although
the CPU processes the data, the target and source are always the memory unit. We can transfer this
information using three different modes of transfer.
1. Programmed I/O
2. Interrupt- initiated I/O
3. Direct memory access(DMA)
1. Programmed I/O
Programmed I/O uses the I/O instructions written in the computer program. The instructions in the
program initiate every data item transfer. Usually, the data transfer is between a CPU register
and memory. This method requires constant monitoring of the peripheral device by the CPU.
Advantages:
• Programmed I/O is simple to implement.
• It requires very little hardware support.
• CPU checks status bits periodically.
Disadvantages:
• The processor has to wait for a long time for the I/O module to be ready for either
transmission or reception of data.
• The performance of the entire system is severely degraded.
2. Interrupt-initiated I/O
In the above section, we saw that the CPU is kept busy unnecessarily. We can avoid this situation by
using an interrupt-driven method for data transfer. The interrupt facilities and special commands
instruct the interface to issue an interrupt request signal as soon as the data is available from any
device. In the meantime, the CPU can execute other programs, and the interface will keep
monitoring the I/O device. Whenever it determines that the device is ready for transferring data,
the interface initiates an interrupt request signal to the CPU. As soon as the CPU detects an external
interrupt signal, it stops the program it was already executing, branches to the service program to
process the I/O transfer, and returns to the program it was initially running.
Working of CPU in terms of interrupts:
• CPU issues read command.
• It starts executing other programs.
• Check for interruptions at the end of each instruction cycle.
• On interruptions:-
o Process interrupt by fetching data and storing it.
o See operating system notes.
• Starts working on the program it was executing.
Advantages:
• It is faster and more efficient than Programmed I/O.
• It requires very little hardware support.
• CPU does not check status bits periodically.
Disadvantages:
• It can be tricky to implement if using a low-level language.
• It can be tough to get various pieces of work well together.
• The hardware manufacturer / OS maker usually implements it, e.g., Microsoft.
3. Direct memory access (DMA)
In DMA, data is transferred directly between memory and the I/O device while the DMA
controller takes over the buses from the CPU.
Bus Request - We use bus requests in the DMA controller to ask the CPU to relinquish control
of the buses.
Bus Grant - The CPU activates bus grant to inform the DMA controller that it can take control of
the buses. Once control is taken, it can transfer data in many ways.
Types of DMA transfer using DMA controller:
• Burst Transfer: In this transfer, DMA will return the bus control after the complete data
transfer. A register is used as a byte count, which decrements for every byte transfer, and
once it becomes zero, the DMA Controller will release the control bus. When the DMA
Controller operates in burst mode, the CPU is halted for the duration of the data transfer.
• Cycle Stealing: It is an alternative method for data transfer in which the DMA controller
transfers one word at a time. After that, it returns control of the buses to the CPU. The
CPU operation is only delayed for one memory cycle, allowing the data transfer to “steal” one
memory cycle.
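The burst-mode byte count described above can be sketched in Python. This is a behavioural illustration only (no real buses); the function name is illustrative.

```python
def dma_burst(source, dest, count_register):
    """Copy count_register bytes in one burst. The byte-count register
    decrements for every byte transferred; the DMA controller releases
    the bus (the loop ends) only when it reaches zero."""
    i = 0
    while count_register > 0:    # CPU is halted for the entire burst
        dest[i] = source[i]
        i += 1
        count_register -= 1
    return dest

result = dma_burst([10, 20, 30], [0, 0, 0], 3)
```

In cycle stealing, by contrast, each iteration of such a loop would hand the bus back to the CPU before the next word is transferred.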
Advantages
• It is faster in data transfer without the involvement of the CPU.
• It improves overall system performance and reduces CPU workload.
• It deals with large data transfers, such as multimedia and files.
Disadvantages
• It is costly and complex hardware.
• It has limited control over the data transfer process.
• Risk of data conflicts between CPU and DMA
Q2. What is Associative Memory? Draw and explain its block diagram. How read and write
operations are performed in Associative Memory?
Ans. An associative memory can be treated as a memory unit whose stored information can be
accessed by the content of the information itself rather than by an address or memory
location. Associative memory is also known as Content Addressable Memory (CAM).
• Data is accessed by data content rather than data address.
• Data is stored at the very first empty location found in memory.
• When data is written, no address is given.
• When data is read, only the key, i.e., the data or part of the data, is provided.
• Memory locates all words which match specified content and marks them for reading.
• It does parallel searches by data association.
Hardware Organisation
• Argument Register (A): It contains word to be searched. It has n bits.
• Key Register (K): It specifies part of the argument word that needs to be compared with
words in memory. If all the bits in key register are 1, the entire word should be compared,
otherwise only the bits having 1’s in their corresponding position are compared.
• Associative Memory Array: It contains the words that are to be compared with the argument
word in parallel. It contains m words of n bits each.
• Match Register (M): It has m bits, one bit corresponding to each word in the memory array.
After the matching process, the bits corresponding to the matching words in the match register
are set to 1.
Searching is performed in parallel manner and reading in sequential manner.
• Each cell consists of a flip-flop storage element Fij and circuits for reading, writing, and
matching the cell.
• The input bit is transferred into the storage cell during a write operation.
• The match logic compares the content of the storage cell with the corresponding unmasked
bit of the argument and sets the bit in Mi accordingly.
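The masked parallel search can be sketched in Python: the key register K selects which bits of the argument A are compared, and the match register M gets one bit per stored word. The function name is illustrative.

```python
def cam_search(words, A, K):
    """Return the match register: M[i] = 1 when word i agrees with the
    argument A on every bit position where the key register K has a 1."""
    return [1 if (w & K) == (A & K) else 0 for w in words]

words = [0b1010, 0b1000, 0b0010, 0b1011]
A, K = 0b1010, 0b1100        # compare only the two high bits
M = cam_search(words, A, K)  # marks every word matching the pattern 10xx
```

All comparisons happen independently per word, which is the parallel search; reading the marked words out would then proceed sequentially, as noted above.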
Q3. Explain Main Memory. Differentiate between RAM and ROM.
Ans. The main memory acts as the central storage unit in a computer system. It is a relatively
large and fast memory which is used to store programs and data during the run time
operations.
The primary technology used for the main memory is based on semiconductor integrated
circuits. The integrated circuits for the main memory are classified into two major units.
Random Access Memory (RAM) is used to store the programs and data being used by the CPU
in real time. The data on the random access memory can be read, written, and erased any
number of times. RAM is a hardware element where the data currently used is stored. It is a
volatile memory. It is also called as Main Memory or Primary Memory. This is user’s memory.
The software (program) as well as data files are stored on the hard disk. When the software or
those files are opened, they get expanded into RAM. It is the space where temporary data is
automatically stored until the user saves it into the secondary storage devices.
Types of RAM
1. Static RAM: Static RAM or SRAM stores a bit of data using the state of a six-transistor
memory cell.
2. Dynamic RAM: Dynamic RAM or DRAM stores a bit of data using a pair of transistors and
capacitors which constitute a DRAM memory cell.
Read Only Memory (ROM) is a type of memory where the data has been pre-recorded. Data
stored in ROM is retained even after the computer is turned off i.e., non-volatile. It is generally
used in embedded systems, where the programming requires almost no changes. It is a
permanent, non-erasable memory that gets initiated when power is supplied to the computer.
ROM is a memory chip fixed on the motherboard at the time of manufacturing. It stores a
program called the BIOS (Basic Input Output System). This program checks the status of all the
devices attached to the computer.
Types of ROM
1. Programmable ROM: It is a type of ROM where the data is written after the memory chip
has been created. It is non-volatile.
2. Erasable Programmable ROM: It is a type of ROM where the data on this non-volatile
memory chip can be erased by exposing it to high-intensity UV light.
3. Electrically Erasable Programmable ROM: It is a type of ROM where the data on this non-
volatile memory chip can be electrically erased using field electron emission.
4. Mask ROM: It is a type of ROM in which the data is written during the manufacturing of the
memory chip.
Difference between RAM and ROM
• CPU Interaction: The CPU can easily access data stored in RAM, whereas it cannot easily
access data stored in ROM.
• Size and Capacity: RAM is large in size with higher capacity compared to ROM; ROM is
small in size with less capacity compared to RAM.
• Accessibility: Data stored in RAM is easily accessible; data stored in ROM is not as easily
accessible as in RAM.
• Chip Size: A RAM chip can store multiple gigabytes (GB) of data, whereas a ROM chip can
store only a few megabytes (MB).
Advantages of RAM
• Speed: RAM is much faster than other types of memory, such as hard disk drives, making it
ideal for storing and accessing data that needs to be accessed quickly.
• Volatility: RAM is volatile memory, which means that it loses its contents when power is
turned off. This property allows RAM to be easily reprogrammed and reused.
• Flexibility: RAM can be easily upgraded and expanded, allowing for more memory to be
added as needed.
Disadvantages of RAM
• Limited capacity: RAM has a limited capacity, which can limit the amount of data that can be
stored and accessed at any given time.
• Volatility: The volatile nature of RAM means that data must be saved to a more permanent
form of storage, such as a hard drive or SSD, to prevent data loss.
• Cost: RAM can be relatively expensive, particularly for high-capacity modules, which can
make it difficult to scale memory as needed.
Advantages of ROM
• Non-volatile: ROM is non-volatile memory, which means that it retains its contents even
when power is turned off. This property makes ROM ideal for storing permanent data, such
as firmware and system software.
• Stability: ROM is stable and reliable, which makes it a good choice for critical systems and
applications.
• Security: ROM cannot be easily modified, which makes it less susceptible to malicious
attacks, such as viruses and malware.
Disadvantages of ROM
• Limited flexibility: ROM cannot be easily reprogrammed or updated, which makes it difficult
to modify or customize the contents of ROM.
• Limited capacity: ROM has a limited capacity, which can limit the amount of data that can
be stored and accessed at any given time.
• Cost: ROM can be relatively expensive to produce, particularly for custom or specialized
applications, which can make it less cost-effective than other types of memory.
Ans. Both SRAM and DRAM are types of random access memory, although they are quite
different from each other. The important differences between SRAM and DRAM are highlighted in
the following table −
• Full Form: SRAM stands for Static Random Access Memory; DRAM stands for Dynamic
Random Access Memory.
• Component: SRAM stores information with the help of transistors; DRAM stores data using
capacitors.
• Need to Refresh: In SRAM, capacitors are not used, so no refresh is needed; in DRAM, the
contents of a capacitor need to be refreshed periodically.
• Speed: SRAM provides a faster speed of data read/write; DRAM provides a slower speed of
data read/write.
• Data Life: SRAM has a long data life; DRAM has a short data life.
• Usage: SRAMs are used as cache memory in computers and other computing devices;
DRAMs are used as main memory in computer systems.
Ans. Cache Memory is a special very high-speed memory. The cache is a smaller and faster
memory that stores copies of the data from frequently used main memory locations. There are
various different independent caches in a CPU, which store instructions and data. The most
important use of cache memory is that it is used to reduce the average time to access data from
the main memory.
• Cache memory is an extremely fast memory type that acts as a buffer between RAM and the
CPU.
• Cache Memory holds frequently requested data and instructions so that they are
immediately available to the CPU when needed.
• Cache memory is costlier than main memory or disk memory but more economical than CPU
registers.
Levels of Cache
1) Level 1 or L1 Cache:
It typically ranges from 8 KB to 64 KB and uses high-speed SRAM instead of the slower
and cheaper DRAM used for main memory.
2) Level 2 or L2 Cache:
Also known as the secondary cache, it is designed to reduce the time needed to access data in cases
where data has already been accessed previously.
3) Level 3 or L3 Cache:
It is used to feed the L2 cache and is typically faster than the system's main memory but
slower than L2. It usually has more than 3 MB of storage.
Not all processors have L3, because it is used to enhance the performance of L1 and L2.
Working of Cache
The CPU initially looks in the cache for data it needs.
If the data is not there, the CPU accesses the system's main memory and then puts a copy of
the new data in the cache before processing it.
The next time the CPU needs to access the same data, it will just retrieve the data from the
cache instead of going through the whole loading process again.
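The lookup flow described above can be sketched in Python. The addresses and data values are made up for illustration:

```python
# Sketch of the cache lookup flow: check the cache first, fall back to
# main memory on a miss and copy the fetched data into the cache.

cache = {}                                  # address -> data (small fast memory)
main_memory = {i: i * 10 for i in range(100)}
hits = misses = 0

def read(address):
    global hits, misses
    if address in cache:                    # cache hit
        hits += 1
    else:                                   # cache miss: load from main memory
        misses += 1
        cache[address] = main_memory[address]
    return cache[address]

# repeated addresses illustrate temporal locality
for addr in [5, 7, 5, 5, 9, 7]:
    read(addr)

hit_ratio = hits / (hits + misses)
print(hits, misses, hit_ratio)              # 3 3 0.5
```

Of the six accesses, the first touch of each address (5, 7, 9) misses and every repeat hits, giving a hit ratio of 0.5.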
Cache Performance
1) Cache Hit: If the processor finds the required word in the cache, a cache hit occurs.
2) Cache Miss: If the required word is not found in the cache, a cache miss occurs and the
word is fetched from main memory.
Hit Ratio = Number of hits / Total number of memory accesses
or
Hit Ratio = Hits / (Hits + Misses)
Locality of Reference
Locality of reference is the tendency of a computer program to access the same set of memory
locations repeatedly over a short period of time. It is also known as the principle of locality.
2 types of Locality
1) Temporal Locality
2) Spatial Locality
1) Temporal Locality
If a program accesses one memory address, there is a good chance that it will access the
same address again.
Example: LOOP
2) Spatial Locality
If a program accesses one memory address there is a good chance it will access other nearby
addresses.
Example: Nearly every program exhibits spatial locality because instructions are usually
executed in sequence.
Cache Mapping
It is a technique by which content of main memory is brought into the cache memory.
3 Types:
1. Associative Mapping
2. Direct Mapping
3. Set-Associative Mapping
1. Direct Mapping
The easiest technique used for mapping is known as direct mapping. The direct
mapping maps every block of the main memory into only a single possible cache line.
In simpler words, in the case of direct mapping, we assign every memory block to a
certain line in the cache.
In the case of direct mapping, a certain block of main memory would be able to map
to only a particular line of cache. Here, the line number of the cache to which any
given block can map is basically given by this:
Cache line number = (The Block Address of the Main Memory ) Modulo (Total
number of lines present in the cache)
Physical Address Division
The physical address, in the case of direct mapping, is divided as follows:
Implementation:
Associative memories are expensive compared to random-access memories because
of the added logic associated with each cell. The CPU address of 15 bits is divided
into two fields. The nine least significant bits constitute the index field and the
remaining six bits form the tag field. The number of bits in the index field is equal to
the number of address bits required to access the cache memory.
In the general case, there are 2k words in cache memory and 2n words in main
memory. The n-bit memory address is divided into two fields: k bits for the index
field and n-k bits for the tag field.
The direct mapping cache organization uses the n-bit address to access the main
memory and the k-bit index to access the cache. Each word in the cache consists of
the data word and its associated tag. When a new word is first brought into the
cache, the tag bits are stored alongside the data bits.
When the CPU generates a memory request, the index field is used for the address
to access the cache. The tag field of the CPU address is compared with the tag in the
word read from the cache. If the two tags match, there is a hit and the desired data
word is in the cache. If there is no match, there is a miss and the required word is
read from main memory. It is then stored in the cache together with the new tag,
replacing the previous value.
The disadvantage of direct mapping is that the hit ratio can drop considerably if two
or more words whose addresses have the same index but different tags are accessed
repeatedly. However, this possibility is minimized by the fact that such words are
relatively far apart in the address range (multiples of 512 locations in this example.)
• To see how the direct-mapping organization operates, consider the numerical
example shown in Fig. 12-13.
• The word at address zero is presently stored in the cache (index = 000, tag =
00, data = 1220).
• Suppose that the CPU now wants to access the word at address 02000.
• The index address is 000, so it is used to access the cache. The two tags are
then compared.
• The cache tag is 00 but the address tag is 02, which does not produce a
match.
• Therefore, the main memory is accessed and the data word 5670 is
transferred to the CPU.
• The cache word at index address 000 is then replaced with a tag of 02 and
data of 5670.
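A minimal Python sketch of this direct-mapped lookup, using the same 15-bit address split (6-bit tag, 9-bit index) and the octal values from the example:

```python
# Sketch of a direct-mapped cache lookup: 15-bit address split into a
# 6-bit tag and a 9-bit index (512-line cache), as in the example above.

INDEX_BITS = 9
cache = [None] * (1 << INDEX_BITS)          # each line holds (tag, data)

def access(address, main_memory):
    index = address & ((1 << INDEX_BITS) - 1)   # low 9 bits select the line
    tag = address >> INDEX_BITS                 # remaining 6 bits are the tag
    line = cache[index]
    if line is not None and line[0] == tag:
        return line[1], "hit"
    data = main_memory[address]                 # miss: fetch and replace line
    cache[index] = (tag, data)
    return data, "miss"

# Values from the example: address 00000 holds 1220, address 02000 holds 5670
mem = {0o00000: 0o1220, 0o02000: 0o5670}
cache[0] = (0, 0o1220)                          # word at address zero is cached

data, status = access(0o02000, mem)
print(oct(data), status)                        # 0o5670 miss (tag 02 != 00)
data, status = access(0o02000, mem)
print(oct(data), status)                        # 0o5670 hit
```

The first access misses because the cached tag at index 000 is 00 while the address tag is 02; the line is then replaced, so the second access hits.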
The direct-mapping example just described uses a block size of one word. The same
organization but using a block size of B words is shown in Fig. 12-14.
The index field is now divided into two parts: the block field and the word field. In a
512-word cache there are 64 blocks of 8 words each, since 64 x 8 = 512. The block
number is specified with a 6-bit field and the word within the block is specified with
a 3-bit field. The tag field stored within the cache is common to all eight words of the
same block. Every time a miss occurs, an entire block of eight words must be
transferred from main memory to cache memory. Although this takes extra time, the
hit ratio will most likely improve with a larger block size because of the sequential
nature of computer programs.
2. Associative Mapping
The fastest and most flexible cache organization uses an associative memory. This
organization is illustrated in Fig. 12-11. The associative memory stores both the
address and content (data) of the memory word. This permits any location in cache
to store any word from main memory.
The diagram shows three words presently stored in the cache. The address value of
15 bits is shown as a five-digit octal number and its corresponding 12-bit word is
shown as a four-digit octal number.
A CPU address of 15 bits is placed in the argument register and the associative
memory is searched for a matching address.
If the address is found, the corresponding 12-bit data is read and sent to the CPU.
If no match occurs, the main memory is accessed for the word.
The address-data pair is then transferred to the associative cache memory. If the
cache is full, an address-data pair must be displaced to make room for a pair that is
needed and not presently in the cache.
The decision as to what pair is replaced is determined from the replacement
algorithm that the designer chooses for the cache.
A simple procedure is to replace cells of the cache in round-robin order whenever a
new word is requested from main memory. This constitutes a first-in first-out (FIFO)
replacement policy.
3. Set-Associative Mapping
It was mentioned previously that the disadvantage of direct mapping is that two words
with the same index in their address but with different tag values cannot reside in cache
memory at the same time. A third type of cache organization, called set-associative
mapping, is an improvement over the direct mapping organization in that each word of
cache can store two or more words of memory under the same index address.
Each data word is stored together with its tag and the number of tag-data items in one
word of cache is said to form a set.
An example of a set-associative cache organization for a set size of two is shown in Fig.
12-15.
Each index address refers to two data words and their associated tags. Each tag requires
six bits and each data word has 12 bits, so the word length is 2(6 + 12) = 36 bits.
An index address of nine bits can accommodate 512 words. Thus the size of cache
memory is 512 x 36.
It can accommodate 1024 words of main memory since each word of the cache contains
two data words. In general, a set-associative cache of set size k will accommodate k
words of main memory in each word of cache.
The words stored at addresses 01000 and 02000 of main memory are stored in cache
memory at index address 000. Similarly, the words at addresses 02777 and 00777 are
stored in the cache at index address 777.
When the CPU generates a memory request, the index value of the address is used to
access the cache. The tag field of the CPU address is then compared with both tags in the
cache to determine if a match occurs.
The comparison logic is done by an associative search of the tags in the set similar to an
associative memory search: thus the name "set-associative." The hit ratio will improve as
the set size increases because more words with the same index but different tags can
reside in the cache. However, an increase in the set size increases the number of bits in
the words of the cache and requires more complex comparison logic.
When a miss occurs in a set-associative cache and the set is full, it is necessary to replace
one of the tag-data items with a new value. The most common replacement algorithms
used are random replacement, first-in, first-out (FIFO), and least recently used (LRU).
With the random replacement policy, the control chooses one tag-data item for
replacement at random. The FIFO procedure selects for replacement the item that has
been in the set the longest. The LRU algorithm selects for replacement the item that has
been least recently used by the CPU. Both FIFO and LRU can be implemented by adding a
few extra bits in each word of cache.
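A Python sketch of a two-way set-associative cache with LRU replacement, using the same tag/index split as above. The data values are illustrative:

```python
# Sketch of a 2-way set-associative cache with LRU replacement.
# An OrderedDict per set keeps tags in least-recently-used order.

from collections import OrderedDict

INDEX_BITS = 9
SET_SIZE = 2
sets = [OrderedDict() for _ in range(1 << INDEX_BITS)]   # tag -> data

def access(address, main_memory):
    index = address & ((1 << INDEX_BITS) - 1)
    tag = address >> INDEX_BITS
    s = sets[index]
    if tag in s:
        s.move_to_end(tag)            # mark as most recently used
        return s[tag], "hit"
    if len(s) >= SET_SIZE:            # set full: evict least recently used
        s.popitem(last=False)
    s[tag] = main_memory[address]
    return s[tag], "miss"

# Addresses 01000 and 02000 share index 000 but can coexist in one set
mem = {0o01000: 0o3450, 0o02000: 0o5670, 0o03000: 0o7777}
print(access(0o01000, mem)[1])   # miss
print(access(0o02000, mem)[1])   # miss
print(access(0o01000, mem)[1])   # hit (both tags fit in the 2-way set)
print(access(0o03000, mem)[1])   # miss (evicts the LRU tag, 02)
```

Unlike direct mapping, the two conflicting addresses both stay resident; only a third tag at the same index forces a replacement.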
Cache Initialization
One more aspect of cache organization that must be taken into consideration is the
problem of initialization. The cache is initialized when power is applied to the computer
or when the main memory is loaded with a complete set of programs from auxiliary
memory. After initialization, the cache is considered to be empty, but in effect, it
contains some nonvalid data. It is customary to include with each word in the cache a
valid bit to indicate whether or not the word contains valid data. The cache is initialized
by clearing all the valid bits to 0. The valid bit of a particular cache word is set to 1 the
first time this word is loaded from main memory and stays set unless the cache has to be
initialized again. The introduction of the valid bit means that a word in cache is not
replaced by another word unless the valid bit is set to 1 and a mismatch of tags occurs. If
the valid bit happens to be 0, the new word automatically replaces the invalid data. Thus
the initialization condition has the effect of forcing misses from the cache until it fills with
valid data.
1. All memory references within a process are logical addresses that are dynamically
translated into physical addresses at run time. This means that a process can be
swapped in and out of the main memory such that it occupies different places in the
main memory at different times during the course of execution.
2. A process may be broken into a number of pieces and these pieces need not be
continuously located in the main memory during execution. The combination of
dynamic run-time address translation and the use of a page or segment table
permits this.
If these characteristics are present then, it is not necessary that all the pages or segments are
present in the main memory during execution. This means that the required pages need to be
loaded into memory whenever required. Virtual memory is implemented using Demand Paging or
Demand Segmentation.
Demand Paging
The process of loading the page into memory on demand (i.e., whenever a page fault occurs) is known as
demand paging. The process includes the following steps:
1. If the CPU tries to refer to a page that is currently not available in the main memory, it
generates an interrupt indicating a memory access fault.
2. The OS puts the interrupted process in a blocking state. For the execution to proceed the OS
must bring the required page into the memory.
3. The OS will search for the required page in the logical address space.
4. The required page will be brought from logical address space to physical address space. The
page replacement algorithms are used for the decision-making of replacing the page in
physical address space.
Hence whenever a page fault occurs these steps are followed by the operating system and the
required page is brought into memory.
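The page-fault steps above can be sketched as a small Python simulation. The frame count, page numbers, and the FIFO replacement policy are illustrative choices:

```python
# Sketch of demand paging: referencing a page that is not in physical
# memory raises a page fault, and the OS loads it, replacing the oldest
# resident page (FIFO) when all frames are occupied.

from collections import deque

FRAMES = 3
resident = deque()      # pages currently in physical memory, FIFO order
page_faults = 0

def reference(page):
    global page_faults
    if page in resident:
        return "in memory"
    page_faults += 1                    # memory-access fault (interrupt)
    if len(resident) >= FRAMES:         # physical memory full:
        resident.popleft()              # replace the oldest page
    resident.append(page)               # bring the required page in
    return "page fault"

for p in [1, 2, 3, 1, 4, 1]:
    reference(p)
print(page_faults)   # 5
```

The second reference to page 1 finds it resident, but the fifth reference evicts it (FIFO), so the final reference faults again; real systems use smarter replacement algorithms for exactly this reason.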
Advantages of Demand Paging
• More processes may be maintained in the main memory: Because we are going to load
only some of the pages of any particular process, there is room for more processes. This
leads to more efficient utilization of the processor because it is more likely that at least one
of the more numerous processes will be in the ready state at any particular time.
• A process may be larger than all of the main memory: One of the most fundamental
restrictions in programming is lifted. A process larger than the main memory can be
executed because of demand paging. The OS itself loads pages of a process in the main
memory as required.
• It allows greater multiprogramming levels by using less of the available (primary) memory
for each process.
• Users are spared from having to add memory modules when RAM space runs out, and
applications are liberated from shared memory management.
• Execution speed is increased when only a portion of a program is required for execution.
Disadvantages of Demand Paging
• It can slow down the system performance, as data needs to be constantly transferred
between the physical memory and the hard disk.
• It can increase the risk of data loss or corruption, as data can be lost if the hard disk fails or if
there is a power outage while data is being transferred to or from the hard disk.
• It can increase the complexity of the memory management system, as the operating system
needs to manage both physical and virtual memory.
Page Fault Service Time: The time taken to service the page fault is called page fault service time.
The page fault service time includes the time taken to perform all of the above steps.
Ans. Auxiliary memory is known as the lowest-cost, highest-capacity and slowest-access storage in a
computer system. It is where programs and data are kept for long-term storage or when not in
immediate use. The most common examples of auxiliary memories are magnetic tapes and magnetic
disks.
Magnetic Disks
A magnetic disk is a type of memory constructed using a circular plate of metal or plastic coated with
magnetized materials. Usually, both sides of the disks are used to carry out read/write operations.
However, several disks may be stacked on one spindle with read/write head available on each
surface.
The following image shows the structural representation for a magnetic disk.
o The memory bits are stored in the magnetized surface in spots along the concentric circles
called tracks.
o The concentric circles (tracks) are commonly divided into sections called sectors.
Magnetic Tape
Magnetic tape is a storage medium that allows data archiving, collection, and backup for different
kinds of data. The magnetic tape is constructed using a plastic strip coated with a magnetic recording
medium.
The bits are recorded as magnetic spots on the tape along several tracks. Usually, seven or nine bits
are recorded simultaneously to form a character together with a parity bit.
Magnetic tape units can be halted, started to move forward or in reverse, or can be rewound.
However, they cannot be started or stopped fast enough between individual characters. For this
reason, information is recorded in blocks referred to as records.
Ans. An interrupt I/O is a process of data transfer in which an external device or a peripheral informs
the CPU that it is ready for communication and requests the attention of the CPU.
Interrupt Processing
Types of interrupts
There are two types of interrupts which are as follows −
Hardware interrupts
The interrupt signals generated by external hardware and I/O devices interrupt the CPU
when those devices are ready for data transfer.
For example, pressing a key on the keyboard generates a signal that is sent to the
processor so that it can take action; such interrupts are called hardware interrupts.
Hardware interrupts are classified into two types which are as follows −
• Maskable Interrupt − A hardware interrupt that can be delayed when a higher-priority
interrupt has occurred to the processor.
• Non-Maskable Interrupt − A hardware interrupt that cannot be delayed and must be
serviced immediately by the processor.
Software interrupts
The interrupt signals generated internally, when a program needs to access a system call
or an operating-system service, are called software interrupts.
A software interrupt is divided into two types. They are as follows −
• Normal Interrupts − The interrupts that are caused by software instructions are
called normal (software) interrupts.
• Exception − An exception is an unplanned interruption that occurs while executing a
program. For example, if during execution a value is divided by zero, an
exception is raised.
Ans. When I/O devices are ready for I/O transfer, they generate an interrupt request signal to the
computer. The CPU receives this signal, suspends the current instructions it is executing, and then
moves forward to service that transfer request. But what if multiple devices generate interrupts
simultaneously. In that case, we have a way to decide which interrupt is to be serviced first. In other
words, we have to set a priority among all the devices for systemic interrupt servicing. The concept
of defining the priority among devices so as to know which one is to be serviced first in case of
simultaneous requests is called a priority interrupt system. This could be done with either software
or hardware methods.
SOFTWARE METHOD – POLLING
In this method, all interrupts are serviced by branching to the same service program. This program
then checks with each device if it is the one generating the interrupt. The order of checking is
determined by the priority that has to be set. The device having the highest priority is checked first
and then devices are checked in descending order of priority. If the device is checked to be
generating the interrupt, another service program is called which works specifically for that
particular device. The structure will look something like this-
if (device[0].flag)
    device[0].service();
else if (device[1].flag)
    device[1].service();
else
    ; // no requesting device found: raise error
The major disadvantage of this method is that it is quite slow. To overcome this, we can use a
hardware solution, one of which involves connecting the devices in series. This is called the
Daisy-chaining method.
HARDWARE METHOD – DAISY CHAINING
The daisy-chaining method involves connecting all the devices that can request an interrupt in a
serial manner. This configuration is governed by the priority of the devices. The device with the
highest priority is placed first followed by the second highest priority device and so on. The given
figure depicts this arrangement.
WORKING:
There is an interrupt request line which is common to all the devices and goes into the CPU.
• When no interrupts are pending, the line is in HIGH state. But if any of the devices raises an
interrupt, it places the interrupt request line in the LOW state.
• The CPU acknowledges this interrupt request from the line and then enables the interrupt
acknowledge line in response to the request.
• If the device has not requested the interrupt, it passes this signal to the next device through
its PO (priority out) output. (PI = 1 & PO = 1)
• If the device has requested the interrupt, it does not pass the acknowledge along. (PI = 1 & PO = 0)
o The device consumes the acknowledge signal and blocks its further use by placing 0
at its PO (priority out) output.
o The device then proceeds to place its interrupt vector address (VAD) on the data
bus of the CPU.
o The device returns its interrupt request signal to the HIGH state to indicate that its interrupt
has been taken care of.
• If a device gets 0 at its PI input, it generates 0 at the PO output to tell other devices that
acknowledge signal has been blocked. (PI = 0 & PO = 0)
Hence, the device having PI = 1 and PO = 0 is the highest priority device that is requesting an
interrupt. Therefore, by daisy chain arrangement we have ensured that the highest priority interrupt
gets serviced first and have established a hierarchy.
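The PI/PO propagation can be sketched in Python as a simplified model that only tracks which device wins the acknowledge signal:

```python
# Sketch of PI/PO propagation in a daisy chain: the acknowledge enters
# the first device with PI = 1; the first requesting device consumes it
# (PO = 0), so every device after it sees PI = 0.

def daisy_chain(requests):
    """requests[i] is True when device i raised an interrupt.
    Devices are listed in priority order (index 0 = highest priority).
    Returns the index of the device that gets serviced, or None."""
    pi = 1                       # CPU drives the first device's PI line
    serviced = None
    for i, requesting in enumerate(requests):
        if pi == 1 and requesting:
            serviced = i         # PI = 1, PO = 0: this device is serviced
            pi = 0               # block the acknowledge for devices below
        # otherwise PO simply passes PI along unchanged
    return serviced

print(daisy_chain([False, True, True]))    # 1 (device 1 outranks device 2)
print(daisy_chain([False, False, False]))  # None
```

The physical ordering along the chain is what encodes priority; no comparison logic or software polling is needed.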
Solution:
Given: 159₁₀
Here, the required base is 8 (i.e., the octal number system). Hence, follow the
below procedure to convert the decimal number to the octal system.
Step 1: Divide 159 by 8. Quotient = 19, Remainder = 7.
Step 2: Divide 19 by 8. Quotient = 2, Remainder = 3.
Since the quotient "2" is less than "8", we can stop the process.
Writing the final quotient followed by the remainders from last to first gives 2, 3, 7.
Therefore, 159₁₀ = 237₈
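The repeated-division procedure generalises to any base; a small Python sketch (the function name is illustrative):

```python
# Repeated division: each remainder becomes the next digit, read from
# the last remainder back to the first.

def to_base(n, base):
    digits = []
    while n > 0:
        digits.append(n % base)   # remainder is the next digit
        n //= base                # quotient carries on to the next step
    return "".join(str(d) for d in reversed(digits)) or "0"

print(to_base(159, 8))   # 237
```

This assumes a non-negative integer and a base of at most 10 (larger bases would need letter digits).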
Q2. Convert the binary number 11001011₂ to the decimal number system.
Solution:
Multiply each digit of the given binary number by the corresponding power of the base,
working from right to left, with the exponents starting at 0 and increasing by 1.
1 × 2⁰ = 1
1 × 2¹ = 2
0 × 2² = 0
1 × 2³ = 8
0 × 2⁴ = 0
0 × 2⁵ = 0
1 × 2⁶ = 64
1 × 2⁷ = 128
Sum = 1 + 2 + 0 + 8 + 0 + 0 + 64 + 128 = 203
Alternate Method:
11001011₂ = (1 × 2⁷) + (1 × 2⁶) + (0 × 2⁵) + (0 × 2⁴) + (1 × 2³) + (0 × 2²) + (1 × 2¹) +
(1 × 2⁰)
11001011₂ = 128 + 64 + 0 + 0 + 8 + 0 + 2 + 1
11001011₂ = 203₁₀
Solution:
Since the base of the octal number system is 8, we multiply each digit of the
given number by the corresponding power of the base.
Thus, the octal number 714₈ can be converted to the decimal system as follows:
714₈ = (7 × 8²) + (1 × 8¹) + (4 × 8⁰) = (7 × 64) + (1 × 8) + (4 × 1)
714₈ = 448 + 8 + 4
714₈ = 460₁₀
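Both positional expansions can be cross-checked with Python's built-in int(), which parses a string in a given base:

```python
# int(text, base) evaluates the positional expansion directly.

binary_value = int("11001011", 2)   # binary  11001011 -> decimal
octal_value = int("714", 8)         # octal   714      -> decimal
print(binary_value, octal_value)    # 203 460
```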
Ans. Booth’s algorithm is a powerful algorithm that is used for signed multiplication.
It generates a 2n bit product for two n bit signed numbers.
The flowchart is as shown in Figure 1.
The steps in Booth’s algorithm are as follow:
1) Initialize A and Q₋₁ to 0 and count to n.
2) Based on the values of Q₀ and Q₋₁, do the following
(all shifts are arithmetic right shifts that preserve the sign bit of A):
a. If Q₀, Q₋₁ = 0, 0 then right shift A, Q, Q₋₁ and finally decrement count
by 1.
b. If Q₀, Q₋₁ = 0, 1 then add A and B, store the result in A, right shift A, Q, Q₋₁ and
finally decrement count by 1.
c. If Q₀, Q₋₁ = 1, 0 then subtract B from A, store the result in A, right shift
A, Q, Q₋₁ and finally decrement count by 1.
d. If Q₀, Q₋₁ = 1, 1 then right shift A, Q, Q₋₁ and finally decrement
count by 1.
3) Repeat step 2 until count reaches 0.
Using the flowchart, we can solve the given question as follows:
(−5)₁₀ = 1011 (in 2's complement)
(−2)₁₀ = 1110 (in 2's complement)
Multiplicand (B) = 1011
Multiplier (Q) = 1110
Initially Q₋₁ = 0 and Count = 4
Steps  A     Q     Q₋₁  Operation
0      0000  1110  0    Initial values
1      0000  0111  0    Q₀Q₋₁ = 00: right shift
2      0010  1011  1    Q₀Q₋₁ = 10: A = A − B = 0101, then right shift
3      0001  0101  1    Q₀Q₋₁ = 11: right shift
4      0000  1010  1    Q₀Q₋₁ = 11: right shift
Result = AQ = (0000 1010)₂ = (10)₁₀. This is the required and correct result, since (−5) × (−2) = 10.
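A Python sketch of the algorithm for n-bit operands, which reproduces the worked result above (the helper name booth_multiply is mine):

```python
# Sketch of Booth's algorithm on n-bit 2's-complement operands,
# following the flowchart steps: inspect Q0,Q-1, add or subtract the
# multiplicand, then arithmetic right shift the combined A,Q,Q-1.

def booth_multiply(multiplicand, multiplier, n=4):
    mask = (1 << n) - 1
    A, Q, Q_1 = 0, multiplier & mask, 0
    B = multiplicand & mask
    neg_B = (-multiplicand) & mask        # 2's complement of B
    for _ in range(n):
        pair = (Q & 1, Q_1)
        if pair == (0, 1):
            A = (A + B) & mask            # add multiplicand
        elif pair == (1, 0):
            A = (A + neg_B) & mask        # subtract multiplicand
        # arithmetic right shift of the combined A,Q,Q-1 register
        Q_1 = Q & 1
        Q = ((Q >> 1) | ((A & 1) << (n - 1))) & mask
        sign = A >> (n - 1)
        A = ((A >> 1) | (sign << (n - 1))) & mask
    product = (A << n) | Q                # 2n-bit product held in A,Q
    if product >> (2 * n - 1):            # interpret as signed
        product -= 1 << (2 * n)
    return product

print(booth_multiply(-5, -2))   # 10
print(booth_multiply(7, -3))    # -21
```

Note that the most negative n-bit value (e.g. −8 for n = 4) cannot be used as the multiplicand, since its 2's complement is not representable in n bits.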
Ans. 85.125
85 = 1010101₂
0.125 = 0.001₂
85.125 = 1010101.001₂ = 1.010101001 × 2⁶
Sign = 0
1. Single precision:
Biased exponent = 127 + 6 = 133
133 = 10000101₂
Normalised mantissa = 010101001 (we add 0's to complete the 23 bits)
The IEEE 754 single-precision representation is:
0 10000101 01010100100000000000000
This can be written in hexadecimal form as 42AA4000.
2. Double precision:
Biased exponent = 1023 + 6 = 1029
1029 = 10000000101₂
Normalised mantissa = 010101001 (we add 0's to complete the 52 bits)
The IEEE 754 double-precision representation is:
0 10000000101 0101010010000000000000000000000000000000000000000000
This can be written in hexadecimal form as 4055480000000000.
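Both encodings can be verified with Python's struct module, which packs a float into its IEEE 754 byte layout:

```python
# struct.pack with ">f" / ">d" produces the big-endian IEEE 754
# single- and double-precision encodings of a float.

import struct

single = struct.pack(">f", 85.125)   # 32-bit single precision
print(single.hex().upper())          # 42AA4000

double = struct.pack(">d", 85.125)   # 64-bit double precision
print(double.hex().upper())          # 4055480000000000
```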
Ans.
Q7. Convert the decimal numbers into binary and divide in binary
55 ÷ 5
Ans. Given decimal numbers are 55 and 5. The division of 55 and 5 gives us 11 as
the quotient in the decimal system. Let us convert 55 and 5 into binary and then
divide them into binary.
Conversion of 55
55₁₀ = 110111₂
Conversion of 5
5₁₀ = 101₂
Dividing 110111₂ by 101₂ using binary long division, the quotient is 1011₂, which is the
same as 11₁₀.
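The result can be cross-checked in Python by converting both operands with int():

```python
# Convert the binary operands, divide, and inspect the quotient.

dividend = int("110111", 2)      # 55
divisor = int("101", 2)          # 5
quotient = dividend // divisor
print(bin(quotient), quotient)   # 0b1011 11
```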
Ans.
• Since the given boolean expression has 4 variables, so we draw a 4 x 4 K Map.
• We fill the cells of K Map in accordance with the given boolean function.
• Then, we form the groups in accordance with the above rules.
Now,
F(A, B, C, D)
= BD + C’D + B’D’
Q9. What is the don’t care condition in k-maps? Minimise the following function in SOP
minimal form using K-Maps:
Don't care condition: In a K-map, a don't care (marked X) is an input combination for which the
output is unspecified, either because that combination can never occur or because its output value
does not matter. While forming groups, each X cell may be treated as 0 or 1, whichever produces
larger groups and hence a simpler minimal expression.
Solution:
⇒ F(X, Y, Z) = X′Y+(YZ′+YZ)+(YZ′+XY′Z′)
⇒ F(X, Y, Z) = X′Y+Y(Z′+Z)+Z′(Y+XY′)
⇒ F(X, Y, Z) = X′Y+Y.1+Z′(Y+X)
⇒ F(X, Y, Z) = Y(X′+1)+Z′(Y+X)
⇒ F(X, Y, Z) = Y+YZ’+XZ’
⇒ F(X, Y, Z) = Y(1+Z′)+XZ′
Hence, the simplified form of the given Boolean expression is F(X, Y, Z) = Y+XZ′.
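The simplification can be verified exhaustively in Python by comparing the expression from the first step of the solution against Y + XZ′ over all eight input combinations:

```python
# Truth-table check: X'Y + YZ' + YZ + XY'Z'  ==  Y + XZ'

from itertools import product

def original(x, y, z):
    return ((not x) and y) or (y and not z) or (y and z) or \
           (x and (not y) and (not z))

def simplified(x, y, z):
    return y or (x and not z)

assert all(original(x, y, z) == simplified(x, y, z)
           for x, y, z in product([False, True], repeat=3))
print("equivalent")
```

An exhaustive check over 2³ combinations is a quick sanity test for any K-map minimisation.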
Solution:
Given, F(P, Q, R) = (P + Q)(P + R)
Expanding, F(P, Q, R) = P.P + P.R + Q.P + Q.R = P(1 + R + Q) + Q.R
Therefore, using the dominance law (1 + X = 1), we get the reduced form as follows:
⇒ F(P, Q, R) = 1.P + Q.R
⇒ F(P, Q, R) = P + Q.R
∼(X+Y)=∼X.∼Y
Ans.
Ans.