Unit-II Notes
The control must provide binary selection variables to the following selector inputs:
1. MUX A selector (SELA): to place the content of R2 into bus A.
2. MUX B selector (SELB): to place the content of R3 into bus B.
3. ALU operation selector (OPR): to provide the arithmetic addition A + B.
4. Decoder destination selector (SELD): to transfer the content of the output bus into R1.
The four control selection variables are generated in the control unit and must be available
at the beginning of a clock cycle. The data from the two source registers propagate through
the gates in the multiplexers and the ALU, to the output bus, and into the inputs of the
destination register, all during the clock cycle interval. Then, when the next clock transition
occurs, the binary information from the output bus is transferred into R1.
To achieve a fast response time, the ALU is constructed with high speed circuits.
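As a sketch of the register-transfer step R1 ← R2 + R3 described above, the following Python fragment models the four selection fields. The field values, the dictionary-based register file, and the 16-bit word width are illustrative assumptions, not taken from a specific machine:

```python
# Hypothetical model of the datapath: SELA and SELB pick source registers
# onto buses A and B, OPR picks the ALU function, SELD picks the destination.
def clock_cycle(regs, sela, selb, opr, seld):
    """Simulate one clock cycle: regs[seld] <- OPR(regs[sela], regs[selb])."""
    bus_a = regs[sela]                        # MUX A places the selected register on bus A
    bus_b = regs[selb]                        # MUX B places the selected register on bus B
    alu = {"ADD": lambda a, b: (a + b) & 0xFFFF,   # assumed 16-bit word
           "SUB": lambda a, b: (a - b) & 0xFFFF,
           "AND": lambda a, b: a & b}
    result = alu[opr](bus_a, bus_b)           # ALU output propagates to the output bus
    regs[seld] = result                       # latched into the destination on the next clock edge
    return result

regs = {"R1": 0, "R2": 25, "R3": 17}
clock_cycle(regs, sela="R2", selb="R3", opr="ADD", seld="R1")  # R1 <- R2 + R3
```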
Control Word
There are 14 binary selection inputs in the unit, and their combined value specifies a control
word.
[Table: encoding of the register selection fields]
ALU
The ALU provides arithmetic and logic operations. In addition, the CPU must provide shift
operations. The shifter may be placed at the input of the ALU to provide a preshift
capability, or at the output of the ALU to provide a postshift capability. In some cases, the
shift operations are included within the ALU.
It is assumed that the computer has two processor registers, R1 and R2. The symbol M[A]
denotes the operand at the memory address symbolized by A. The advantage of the three-
address format is that it results in short programs when evaluating arithmetic expressions.
The disadvantage is that the binary-coded instructions require too many bits to specify three
addresses.
An example of a commercial computer that uses three-address instructions is the Cyber 170.
The instruction formats in the Cyber computer are restricted to either three register address
fields or two register address fields and one memory address field.
3. Addressing Modes
The operation field of an instruction specifies the operation to be performed. This
operation must be executed on some data stored in computer registers or memory words.
The way the operands are chosen during program execution is dependent on the
addressing mode of the instruction. The addressing mode specifies a rule for interpreting or
modifying the address field of the instruction before the operand is actually referenced.
Computers use addressing mode techniques for the purpose of accommodating one or both
of the following provisions:
1. To give programming versatility to the user by providing such facilities as pointers to
memory, counters for loop control, indexing of data, and program relocation.
2. To reduce the number of bits in the addressing field of the instruction.
The availability of the addressing modes gives the experienced assembly language
programmer flexibility for writing programs that are more efficient with respect to the
number of instructions and execution time.
To understand the various addressing modes to be presented in this section, it is
imperative that we understand the basic operation cycle of the computer. The control unit of
a computer is designed to go through an instruction cycle that is divided into three major
phases:
1. Fetch the instruction from memory.
2. Decode the instruction.
3. Execute the instruction.
The addressing modes are as follows:
1. Implied Mode
Addresses of the operands are specified implicitly in the definition of the instruction.
- No need to specify an address in the instruction
- EA = AC, or EA = Stack[SP] (EA: effective address)
2. Immediate Mode
Instead of specifying the address of the operand, the operand itself is specified.
- No need to specify an address in the instruction
- However, the operand itself needs to be specified
- Sometimes requires more bits than an address
- Fast to acquire an operand
- EA = not defined
3. Register Mode
The address specified in the instruction is a register address.
- The designated operand needs to be in a register
- Shorter address than a memory address
- Saves address-field bits in the instruction
- Faster to acquire an operand than with memory addressing
- EA = IR(R) (IR(R): register field of IR)
4. Register Indirect Mode
The instruction specifies a register which contains the memory address of the operand.
- Saves instruction bits, since a register address is shorter than a memory address
- Slower to acquire an operand than with either register addressing or memory addressing
- EA = [IR(R)] ([x]: content of x)
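The effective-address rules for the modes above can be illustrated with a small Python sketch. The register contents, memory values, and dictionary-based machine state are invented for the example:

```python
# Hypothetical machine state for illustrating addressing modes.
memory = {100: 555, 200: 777}
registers = {"R1": 200}        # R1 happens to hold a memory address

# Register mode: the operand is in the register itself; EA names a register.
operand_register = registers["R1"]                  # operand = 200

# Register indirect mode: EA = [IR(R)] -- the register holds the address.
ea = registers["R1"]                                # EA = 200
operand_indirect = memory[ea]                       # operand fetched from memory

# Immediate mode: the operand is carried inside the instruction; EA is undefined.
instruction = {"opcode": "ADD", "operand": 5}
operand_immediate = instruction["operand"]          # operand = 5
```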
Name Mnemonic
Clear CLR
Complement COM
AND AND
OR OR
Exclusive-Or XOR
Clear Carry CLRC
Set Carry SETC
Complement Carry COMC
Enable Interrupt EI
Disable Interrupt DI
(c) Shift Instructions
Instructions to shift the content of an operand are quite useful and are often provided in
several variations. Shifts are operations in which the bits of a word are moved to the left or
right. The bit shifted in at the end of the word determines the type of shift used. Shift
instructions may specify logical shifts, arithmetic shifts, or rotate-type shifts.
Name Mnemonic
Logical Shift right SHR
Logical Shift left SHL
Arithmetic shift right SHRA
Arithmetic shift left SHLA
Rotate right ROR
Rotate left ROL
Rotate right through carry RORC
Rotate left through carry ROLC
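A minimal sketch of these shift and rotate operations follows; the 8-bit word width and the carry-handling convention in RORC are assumptions made for the example:

```python
# Assumed 8-bit word for illustration.
WIDTH, MASK = 8, 0xFF

def shr(x):  return (x & MASK) >> 1                          # logical right: 0 enters the MSB
def shl(x):  return (x << 1) & MASK                          # logical left: 0 enters the LSB
def shra(x): return ((x & MASK) >> 1) | (x & 0x80)           # arithmetic right: sign bit repeats
def ror(x):  return ((x >> 1) | (x << (WIDTH - 1))) & MASK   # rotate right: LSB wraps to MSB
def rol(x):  return ((x << 1) | (x >> (WIDTH - 1))) & MASK   # rotate left: MSB wraps to LSB

def rorc(x, carry):
    """Rotate right through carry: old carry enters the MSB, old LSB becomes the new carry."""
    return ((x >> 1) | (carry << (WIDTH - 1))) & MASK, x & 1
```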
(3) Program Control:
Instructions are always stored in successive memory locations. When processed in the
CPU, the instructions are fetched from consecutive memory locations and executed.
Status Bit Conditions
It is sometimes convenient to supplement the ALU circuit in the CPU with a status
register where status bit conditions can be stored for further analysis. Status bits are also
called condition-code bits or flag bits. The four status bits are symbolized by C, S, Z, and V.
Introduction:
In order to solve computational problems, digital computers use arithmetic instructions
that manipulate data. These instructions perform arithmetic calculations.
We designate the magnitudes of the two numbers by A and B. When the signed
numbers are added or subtracted, we find that there are eight different
conditions to consider, depending on the signs of the numbers and the operation
performed. These conditions are listed in the first column of Table 4.1. The other
columns in the table show the actual operation to be performed with the
magnitudes of the numbers. The last column is needed to prevent a negative
zero. In other words, when two equal numbers are subtracted, the result should
be +0, not -0.
The algorithms for addition and subtraction are derived from the table and can
be stated as follows (the words in parentheses should be used for the subtraction
algorithm).
Addition and Subtraction of Signed-Magnitude Numbers
Computer Arithmetic 2 Addition and Subtraction
[Figure 7.2: Hardware for signed-magnitude addition and subtraction — the B register feeds a
complementer and a parallel adder with an input carry; the sum is loaded into the A register (AC);
sign flip-flops As and Bs; output carry in E; overflow flip-flop V (AVF)]
Computer Organization Prof. H. Yoon
[Algorithm flowchart: subtract — minuend in AC, subtrahend in B, AC ← AC + B' + 1;
add — augend in AC, addend in B, AC ← AC + B; in both cases V ← overflow; END]
The flowchart is shown in Figure 7.1. The two signs As and Bs are
compared by an exclusive-OR gate.
The two magnitudes are subtracted if the signs are different for an add
operation or identical for a subtract operation. The magnitudes are
subtracted by adding A to the 2's complement of B. No overflow can occur if
the numbers are subtracted, so AVF is cleared to 0.
A 1 in E indicates that A >= B and the number in A is the correct result. If this
number is zero, the sign As must be made positive to avoid a negative zero.
A 0 in E indicates that A < B. For this case it is necessary to take the 2's
complement of the value in A. The operation can be done with one
microoperation A ← A' + 1.
However, we assume that the A register has circuits for the complement and
increment microoperations, so the 2's complement is obtained from these
two microoperations.
In the other paths of the flowchart, the sign of the result is the same as the original sign
in As, so no change in As is required. However, when A < B, the sign of the result is
the complement of the original sign in As. It is then necessary to complement
As to obtain the correct sign.
The final result is found in register A and its sign in As. The value in AVF
provides an overflow indication. The final value of E is immaterial.
Figure 7.2 shows a block diagram of the hardware for implementing the
addition and subtraction operations.
The add-overflow flip-flop AVF holds the overflow bit when A and B are added.
Now, the low-order bit of the multiplier in Qn is tested. If it is 1, the multiplicand (B) is
added to the present partial product (A); otherwise nothing is added. Register EAQ is then
shifted once to the right to form the new partial product. The sequence counter is
decremented by 1 and its new value checked. If it is not equal to zero, the process is
repeated and a new partial product is formed. When SC = 0 we stop the process.
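The add-and-shift loop just described can be sketched in Python. The names mirror the text (B multiplicand, Q multiplier, A partial product, E carry, and the loop count plays the role of SC), while the word width n is an assumed parameter:

```python
# Sketch of the add-and-shift multiply: E, A, Q act as one combined
# shift register (EAQ); Qn is the low bit of Q.
def multiply(B, Q, n):
    """Unsigned multiply of two n-bit numbers; returns the 2n-bit product A,Q."""
    A, E = 0, 0
    for _ in range(n):                      # SC = n, decremented each pass
        if Q & 1:                           # test Qn
            s = A + B                       # add multiplicand to the partial product
            A, E = s & ((1 << n) - 1), s >> n
        # shift EAQ one position to the right
        Q = (Q >> 1) | ((A & 1) << (n - 1))
        A = (A >> 1) | (E << (n - 1))
        E = 0
    return (A << n) | Q                     # final product in A,Q
```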
Booth’s Algorithm:
Booth’s algorithm gives a procedure for multiplying binary integers in
signed-2’s complement representation.
It operates on the fact that strings of 0’s in the multiplier require no addition, just
shifting, and a string of 1’s in the multiplier from bit weight 2^k down to weight 2^m
can be treated as 2^(k+1) - 2^m.
For example, the binary number 001110 (+14) has a string of 1’s from 2^3 to 2^1
(k = 3, m = 1). The number can be represented as 2^(k+1) - 2^m = 2^4 - 2^1 = 16 - 2 =
14. Therefore, the multiplication M x 14, where M is the multiplicand and 14
the multiplier, can be done as M x 2^4 - M x 2^1.
Thus the product can be obtained by shifting the binary multiplicand M four
times to the left and subtracting M shifted left once.
As in all multiplication schemes, Booth’s algorithm requires
examination of the multiplier bits and shifting of the partial product.
Prior to the shifting, the multiplicand may be added to the partial product,
subtracted from the partial product, or left unchanged according to the following
rules:
1. The multiplicand is subtracted from the partial product upon encountering
the first least significant 1 in a string of 1’s in the multiplier.
2. The multiplicand is added to the partial product upon encountering the first
0 in a string of 0’s in the multiplier.
3. The partial product does not change when multiplier bit is identical to
the previous multiplier bit.
This is because a negative multiplier ends with a string of 1’s and the last
operation will be a subtraction of the appropriate weight.
If the two bits are equal to 10, it means that the first 1 in a string of 1 's has
been encountered. This requires a subtraction of the multiplicand from the
partial product in AC.
If the two bits are equal to 01, it means that the first 0 in a string of 0's has
been encountered. This requires the addition of the multiplicand to the
partial product in AC.
When the two bits are equal, the partial product does not change.
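The bit-pair rules above can be sketched as a small Python routine. The pair test on (Qn, Qn+1) and the arithmetic right shift follow the description in the text, while the word width n and the helper name are assumptions for the sketch:

```python
# Sketch of Booth's algorithm: examine (Qn, Qn+1); 10 -> subtract the
# multiplicand, 01 -> add it, equal bits -> shift only.
def booth_multiply(M, Q, n):
    """Signed multiply of two n-bit 2's-complement integers."""
    mask = (1 << n) - 1
    A, q_prev = 0, 0                     # accumulator and the extra bit Qn+1
    Q &= mask
    for _ in range(n):
        pair = (Q & 1, q_prev)
        if pair == (1, 0):               # first 1 of a string of 1's: subtract M
            A = (A - M) & mask
        elif pair == (0, 1):             # first 0 after a string of 1's: add M
            A = (A + M) & mask
        q_prev = Q & 1
        # arithmetic shift right of A,Q taken as one 2n-bit register
        Q = (Q >> 1) | ((A & 1) << (n - 1))
        A = ((A >> 1) | (A & (1 << (n - 1)))) & mask
    product = (A << n) | Q
    if product & (1 << (2 * n - 1)):     # reinterpret as a signed 2n-bit value
        product -= 1 << (2 * n)
    return product
```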
Division Algorithms
Division of two fixed-point binary numbers in signed magnitude representation is
performed with paper and pencil by a process of successive compare, shift and subtract
operations. Binary division is much simpler than decimal division because here the
quotient digits are either 0 or 1 and there is no need to estimate how many times the
dividend or partial remainder fits into the divisor. The division process is described in
Figure
The divisor is compared with the five most significant bits of the dividend.
Since the 5-bit number is smaller than B, we again repeat the same process.
Now the 6-bit number is greater than B, so we place a 1 for the quotient bit in
the sixth position above the dividend. Now we shift the divisor once to the
right and subtract it from the dividend. The difference is known as a partial
remainder because the division could have stopped here to obtain a
quotient of 1 and a remainder equal to the partial remainder. Comparing
a partial remainder with the divisor continues the process. If the partial
remainder is greater than or equal to the divisor, the quotient bit is equal to
1. The divisor is then shifted right and subtracted from the partial remainder.
If the partial remainder is smaller than the divisor, the quotient bit is 0 and no
subtraction is needed. The divisor is shifted once to the right in any case.
Obviously the result gives both a quotient and a remainder.
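The compare, shift, and subtract process can be sketched as restoring division of unsigned integers. The names A (partial remainder) and Q (quotient) follow the registers used later in the text, and the word width n is an assumption:

```python
# Sketch of binary division by successive compare, shift, and subtract:
# the quotient bit is 1 when the partial remainder >= divisor, else 0.
def divide(dividend, divisor, n):
    """Unsigned n-bit division; returns (quotient, remainder)."""
    assert divisor != 0
    A, Q = 0, dividend                   # A holds the partial remainder
    for _ in range(n):
        # shift AQ left, bringing down the next dividend bit
        A = (A << 1) | ((Q >> (n - 1)) & 1)
        Q = (Q << 1) & ((1 << n) - 1)
        if A >= divisor:                 # compare: the quotient bit is 1
            A -= divisor                 # subtract the divisor
            Q |= 1
    return Q, A                          # quotient and remainder
```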
Algorithm:
Basic Considerations:
A floating-point number is represented as m x r^e.
The mantissa may be a fraction or an integer. The position of the radix point and the
value of the radix r are not included in the registers. For example, assume a fraction
representation and a radix of 10. The decimal number 537.25 is represented in a register
with m = 53725 and e = 3 and is interpreted to represent the floating-point number
.53725 x 10^3
The largest floating-point number that can be accommodated is
(1 - 2^-35) x 2^2047
This number is derived from a fraction that contains 35 1’s and an exponent of 11 bits
(excluding its sign), since 2^11 - 1 = 2047. The largest number that can be
accommodated is approximately 10^615. The mantissa that can be accommodated is 35 bits
(excluding the sign), and if considered as an integer it can store a number as large as
(2^35 - 1). This is approximately equal to 10^10, which is equivalent to a decimal number
of 10 digits.
Computers with shorter word lengths use two or more words to represent a floating-
point number. An 8-bit microcomputer may use four words to represent one floating-point
number: one 8-bit word is reserved for the exponent and the 24 bits of the other
three words are used for the mantissa.
Arithmetic operations with floating-point numbers are more complicated than with
fixed-point numbers. Their execution also takes longer time and requires more complex
hardware. Adding or subtracting two numbers requires first an alignment of the radix
point since the exponent parts must be made equal before adding or subtracting the
mantissas. We do this alignment by shifting one mantissa while its exponent is adjusted
until it becomes equal to the other exponent. Consider the sum of the following
floating-point numbers:
.5372400 x 10^2
+ .1580000 x 10^-1
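The alignment of these two operands can be worked through in Python. Representing each mantissa as a scaled integer and each right shift of the mantissa as integer division by 10 are assumptions made for this sketch:

```python
# Sketch of mantissa alignment: shift the mantissa with the smaller exponent
# right, incrementing its exponent, until the exponents are equal.
def align_and_add(m1, e1, m2, e2):
    while e2 < e1:
        m2, e2 = m2 // 10, e2 + 1        # one right shift per exponent increment
    while e1 < e2:
        m1, e1 = m1 // 10, e1 + 1
    return m1 + m2, e1

# .5372400 x 10^2  +  .1580000 x 10^-1, with 7-digit mantissas held as integers.
m, e = align_and_add(5372400, 2, 1580000, -1)
# .1580000 x 10^-1 becomes .0001580 x 10^2 before the mantissas are added
```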
The operations done with the mantissas are the same as in fixed-point numbers, so the
two can share the same registers and circuits. The operations performed on the
exponents are compare and increment (for aligning the mantissas), add and
subtract (for multiplication and division), and decrement (to normalize the result).
We can represent the exponent in any one of three representations: signed-
magnitude, signed 2’s complement, or signed 1’s complement.
Biased exponents have the advantage that they contain only positive numbers. Now it
becomes simpler to compare their relative magnitude without bothering about their
signs. Another advantage is that the smallest possible biased exponent contains all
zeros. The floating-point representation of zero is then a zero mantissa and the smallest
possible exponent.
Register Configuration
The register organization for floating-point operations is shown in Fig. 4.13. As a rule,
the same registers and adder used for fixed-point arithmetic are used for processing
the mantissas. The difference lies in the way the exponents are handled.
There are three registers: BR, AC, and QR. Each register is subdivided into two parts. The
mantissa part has the same uppercase letter symbols as in fixed-point representation.
The exponent part uses the corresponding lowercase letter symbol.
F = m x r^e
where m: Mantissa
r: Radix
e: Exponent
[Figure 4.13: Registers for floating-point arithmetic — BR (Bs, B, b), AC (As, A1, A, a),
QR (Qs, Q, q), a parallel adder for the mantissas with carry into E, and a parallel adder
and comparator for the exponents]
In a similar way, register BR is subdivided into Bs, B, and b, and QR into Qs, Q, and q. A
parallel adder adds the two mantissas and loads the sum into A and the carry into E. A
separate parallel adder can be used for the exponents. The exponents do not have a
distinct sign bit because they are represented as a biased positive
quantity. It is assumed that the floating-point numbers are so large that the chance of an
exponent overflow is very remote, and so exponent overflow will be neglected. The
exponents are also connected to a magnitude comparator that provides three binary
outputs to indicate their relative magnitude.
The numbers in the mantissa will be taken as fractions, so the binary point is assumed
to reside to the left of the magnitude part. Integer representation for floating point
causes certain scaling problems during multiplication and division. To avoid these
problems, we adopt a fraction representation.
The numbers in the registers should initially be normalized. After each arithmetic
operation, the result will be normalized. Thus all floating-point operands are always
normalized.
Addition and Subtraction of Floating Point Numbers
During addition or subtraction, the two floating-point operands are kept in AC and
BR. The sum or difference is formed in the AC. The algorithm can be divided into
four consecutive parts:
If the magnitudes were subtracted, the result may be zero or may have an underflow.
If the mantissa is equal to zero, the entire floating-point number in the AC is
cleared to zero. Otherwise, the mantissa must have at least one bit that is equal to 1.
The mantissa has an underflow if the most significant bit, in position A1, is 0. In that
case, the mantissa is shifted left and the exponent decremented. The bit in A1 is
checked again and the process is repeated until A1 = 1. When A1 = 1, the mantissa is
normalized and the operation is completed.
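The normalization loop just described can be sketched as follows, assuming an n-bit fractional mantissa held as an integer whose most significant bit is A1:

```python
# Sketch of normalization: shift the mantissa left and decrement the
# exponent until the most significant bit (A1) is 1.
def normalize(A, exponent, n):
    if A == 0:
        return 0, 0                      # zero mantissa: clear the whole number
    while not (A >> (n - 1)) & 1:        # underflow while A1 == 0
        A = (A << 1) & ((1 << n) - 1)    # shift the mantissa left
        exponent -= 1                    # and decrement the exponent
    return A, exponent
```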
Algorithm for Floating Point Addition and Subtraction
Division
The two operands are checked for zero. If the divisor is zero, it indicates an
attempt to divide by zero, which is an illegal operation. The operation is
terminated with an error message. An alternative procedure would be to set the
quotient in QR to the most positive number possible (if the dividend is positive)
or to the most negative possible (if the dividend is negative). If the dividend in
AC is zero, the quotient in QR is made zero and the operation terminates.
If the operands are not zero, we proceed to determine the sign of the
quotient and store it in Qs. The sign of the dividend in As is left unchanged to be
the sign of the remainder. The Q register is cleared and the sequence counter SC is
set to a number equal to the number of bits in the quotient.
The dividend alignment is similar to the divide-overflow check in the fixed-point
operation. The proper alignment requires that the fraction dividend be
smaller than the divisor. The two fractions are compared by a subtraction test.
The carry in E determines their relative magnitude. The dividend fraction is
restored to its original value by adding the divisor. If A >= B, it is necessary to
shift A once to the right and increment the dividend exponent. Since both
operands are normalized, this alignment ensures that A < B.
Next, the divisor exponent is subtracted from the dividend exponent. Since
both exponents were originally biased, the subtraction operation gives
the difference without the bias. The bias is then added and the result transferred
into q because the quotient is formed in QR .
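The biased-exponent arithmetic for the quotient can be checked with a small sketch; the 8-bit bias of 128 is an assumed value chosen only for illustration:

```python
# Subtracting two biased exponents removes the bias, so it must be added back.
BIAS = 128  # assumed bias for an 8-bit exponent field

def quotient_exponent(e_dividend, e_divisor):
    """Both inputs are biased; the result is biased again."""
    return (e_dividend - e_divisor) + BIAS

# true exponents 5 and 3 -> biased 133 and 131; the quotient exponent is 2
```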
The magnitudes of the mantissas are divided as in the fixed-point case. After
the operation, the mantissa quotient resides in Q and the remainder in