Computer Organization and Architecture

Caches are small, fast RAM units coupled with the processor; they are often contained on the same IC chip to achieve high performance. Although primary storage is essential, it tends to be expensive.
2. Secondary memory: used where large amounts of data and programs have to be stored, particularly information that is accessed infrequently.
Examples: magnetic disks and tapes, optical disks (i.e., CD-ROMs), floppies, etc.

Arithmetic logic unit (ALU):-


Most computer operations are executed in the ALU of the processor: addition, subtraction, division, multiplication, etc. The operands are brought into the ALU from memory and stored in high-speed storage elements called registers. The operations are then performed in the sequence required by the instructions.

The control unit and the ALU are many times faster than the other devices connected to a computer system. This enables a single processor to control a number of external devices such as keyboards, displays, magnetic and optical disks, sensors and other mechanical controllers.

Output unit:-
These are the counterparts of the input units. Their basic function is to send the processed results to the outside world.

Examples:- Printer, speakers, monitor etc.

Control unit:-
It is effectively the nerve center that sends control signals to the other units and senses their states. The timing signals that govern the transfer of data between the input unit, processor, memory and output unit are generated by the control unit.

BASIC OPERATIONAL CONCEPTS



To perform a given task, an appropriate program consisting of a list of instructions is stored in the memory. Individual instructions are brought from the memory into the processor, which executes the specified operations. Data to be used by the program are also stored in the memory.
Example: Add LOCA, R0
This instruction adds the operand at memory location LOCA to the operand in register R0 and places the sum into register R0. Executing it requires several steps:

1. First the instruction is fetched from the memory into the processor.
2. The operand at LOCA is fetched and added to the contents of R0.
3. Finally, the resulting sum is stored back in register R0.

The preceding Add instruction combines a memory access operation with an ALU operation. In some other types of computers, these two operations are performed by separate instructions for performance reasons:

Load LOCA, R1
Add R1, R0
Transfers between the memory and the processor are started by sending the address
of the memory location to be accessed to the memory unit and issuing the appropriate control
signals. The data are then transferred to or from the memory.

The figure below shows how the memory and the processor can be connected. In addition to the ALU and the control circuitry, the processor contains a number of registers used for several different purposes.

Register:
It is a special, high-speed storage area within the CPU. All data must be represented in
a register before it can be processed. For example, if two numbers are to be multiplied, both
numbers must be in registers, and the result is also placed in a register. (The register can
contain the address of a memory location where data is stored rather than the actual data
itself.)

The number of registers that a CPU has and the size of each (number of bits) help
determine the power and speed of a CPU. For example a 32-bit CPU is one in which each
register is 32 bits wide. Therefore, each CPU instruction can manipulate 32 bits of
data. In high-level languages, the compiler is responsible for translating high-level operations
into low-level operations that access registers.

Instruction Format:

Computer instructions are the basic components of a machine language program. They are also known as macro operations, since each one is composed of a sequence of micro operations.
Each instruction initiates a sequence of micro operations that fetch operands from registers
or memory, possibly perform arithmetic, logic, or shift operations, and store results in
registers or memory.


Instructions are encoded as binary instruction codes. Each instruction code contains an operation code, or opcode, which designates the overall purpose of the instruction (e.g. add, subtract, move, input, etc.). The number of bits allocated for the opcode determines how many different instructions the architecture supports.
In addition to the opcode, many instructions also contain one or more operands, which indicate where in registers or memory the data required for the operation is located. For example, an add instruction requires two operands, and a not instruction requires one.
15      12 11       6 5        0
+---------+-----------+----------+
| Opcode  | Operand   | Operand  |
+---------+-----------+----------+

The opcode and operands are most often encoded as unsigned binary numbers in order to
minimize the number of bits used to store them. For example, a 4-bit opcode encoded as a
binary number could represent up to 16 different operations.
The control unit is responsible for decoding the opcode and operand bits in the instruction
register, and then generating the control signals necessary to drive all other hardware in the
CPU to perform the sequence of micro operations that comprise the instruction.
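As a minimal illustration of this decoding step, the sketch below splits a 16-bit instruction word into the 4-bit opcode and two 6-bit operand fields of the hypothetical layout shown above; the field widths and the sample word are assumptions made only for the example.

# Minimal decoding sketch for the hypothetical layout above:
# bits 15-12 = opcode, bits 11-6 = first operand, bits 5-0 = second operand.

def decode(instruction: int):
    """Split a 16-bit instruction word into opcode and operand fields."""
    opcode   = (instruction >> 12) & 0xF    # top 4 bits
    operand1 = (instruction >> 6)  & 0x3F   # next 6 bits
    operand2 = instruction         & 0x3F   # lowest 6 bits
    return opcode, operand1, operand2

# Example word: opcode 0b0010 with operands 5 and 9
word = (0b0010 << 12) | (5 << 6) | 9
print(decode(word))   # (2, 5, 9)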

INSTRUCTION CYCLE:


The instruction register (IR):- Holds the instruction that is currently being executed. Its output is available to the control circuits, which generate the timing signals that control the various processing elements involved in executing the instruction.

The program counter (PC):-


This is another specialized register that keeps track of execution of a program. It
contains the memory address of the next instruction to be fetched and executed.

Besides the IR and PC, there are n general-purpose registers, R0 through Rn-1.

The other two registers which facilitate communication with memory are: -
1. MAR – (Memory Address Register):- It holds the address of the location to be
accessed.
2. MDR – (Memory Data Register):- It contains the data to be written into or read out of the addressed location.

Operating steps are


1. Programs reside in the memory and usually reach it through the input unit.
2. Execution of the program starts when the PC is set to point at the first instruction of
the program.
3. Contents of PC are transferred to MAR and a Read Control Signal is sent to the
memory.


4. After the time required to access the memory elapses, the addressed word is read out of the memory and loaded into the MDR.
5. Now contents of MDR are transferred to the IR & now the instruction is ready to be
decoded and executed.
6. If the instruction involves an operation by the ALU, it is necessary to obtain the
required operands.
7. An operand in the memory is fetched by sending its address to MAR & Initiating a
read cycle.
8. When the operand has been read from the memory to the MDR, it is transferred from
MDR to the ALU.
9. After one or two such repeated cycles, the ALU can perform the desired operation.
10. If the result of this operation is to be stored in the memory, the result is sent to MDR.
11. Address of location where the result is stored is sent to MAR & a write cycle is
initiated.
12. The contents of PC are incremented so that PC points to the next instruction that is to
be executed.
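To make the sequence above concrete, here is a toy Python sketch that mimics steps 3 to 12 with ordinary variables standing in for the PC, MAR, MDR, IR and an accumulator; the memory contents, the ADD mnemonic and the addresses are invented purely for illustration.

# Illustrative sketch of the operating steps above, using Python variables
# to stand in for memory and the PC, MAR, MDR and IR registers.

memory = {0: ("ADD", 100), 100: 7}   # address 0 holds "Add LOCA(=100) to ACC"
PC, ACC = 0, 3                       # program counter and an accumulator register

# Steps 3-5: PC -> MAR, read memory, MDR -> IR
MAR = PC
MDR = memory[MAR]
IR = MDR

# Steps 6-8: fetch the operand named by the instruction
opcode, LOCA = IR
MAR = LOCA
MDR = memory[MAR]

# Step 9: the ALU performs the operation
if opcode == "ADD":
    ACC = ACC + MDR

# Step 12: PC points to the next instruction
PC = PC + 1
print(ACC)   # 10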

Normal execution of a program may be preempted (temporarily interrupted) if some device requires urgent servicing; to do this, the device raises an interrupt signal. An interrupt is a request signal from an I/O device for service by the processor. The processor provides the requested service by executing an appropriate interrupt-service routine.

Since this diversion may change the internal state of the processor, its state must be saved in memory locations before the interrupt is serviced. When the interrupt-service routine is completed, the state of the processor is restored so that the interrupted program may continue.

THE VON NEUMANN ARCHITECTURE

The task of entering and altering programs for the ENIAC was extremely tedious. The programming process could be made easier if the program could be represented in a form suitable for storing in memory alongside the data. Then, a computer could get its instructions by reading them from memory, and a program could be set or altered by setting the values of a portion of memory. This idea is known as the stored-program concept. The first publication of the idea


was in a 1945 proposal by von Neumann for a new computer, the EDVAC (Electronic Discrete Variable Automatic Computer).

In 1946, von Neumann and his colleagues began the design of a new stored-program computer, referred to as the IAS computer, at the Institute for Advanced Study in Princeton.
The IAS computer, although not completed until 1952, is the prototype of all subsequent
general-purpose computers.

It consists of
 A main memory, which stores both data and instructions
 An arithmetic and logic unit (ALU) capable of operating on binary data
 A control unit, which interprets the instructions in memory and causes them to be
executed
 Input and output (I/O) equipment operated by the control unit

BUS STRUCTURES:

Single-bus and multiple-bus structures are the two common ways of interconnecting the parts of a computer. A bus is basically a subsystem that transfers data between the components of a computer, either within a single computer or between two computers, and it can connect several peripheral devices at the same time.


- A multiple-bus structure has several interconnected buses, so different transfers can proceed over different buses at the same time. A single-bus structure is very simple: all units share one common bus.
I) In a single-bus structure all units are connected to the same bus, rather than to different buses as in a multiple-bus structure.
II) The performance of a multiple-bus structure is better than that of a single-bus structure.
III) A single-bus structure is cheaper than a multiple-bus structure.

A group of lines that serves as a connecting path for several devices is called a bus (one bit per line). The individual parts must communicate over such a path to exchange data, address and control information, as shown in the diagram below (for example, a transfer from the processor to a printer). A common approach is to use buffer registers to hold the content during the transfer.

Buffer registers hold the data temporarily during the transfer, e.g. while printing.

Types of Buses:
1. Data Bus:
The data bus is the most common type of bus. It is used to transfer data between the different components of the computer. The number of lines in the data bus affects the speed of data transfer between components. A data bus consists of 8, 16, 32, or 64 lines; a 64-line data bus can transfer 64 bits of data at one time.


The data bus lines are bidirectional. This means that:
 the CPU can read data from memory using these lines, and
 the CPU can write data to memory locations using these lines.

2. Address Bus:
Many components are connected to one another through buses. Each component is assigned a unique ID, called the address of that component. If a component wants to communicate with another component, it uses the address bus to specify the address of that component. The address bus is a unidirectional bus: it can carry information in only one direction. It carries the address of a memory location from the microprocessor to the main memory.

3. Control Bus:
The control bus is used to transmit commands and control signals from one component to another. Suppose the CPU wants to read data from main memory: it will use the control bus to issue the read command. The control bus is also used to transmit control signals such as acknowledgement (ACK) signals. A control signal carries the following:
1. Timing information: it specifies the time for which a device can use the data and address bus.
2. Command signal: it specifies the type of operation to be performed.
Suppose the CPU gives a command to the main memory to write data. The memory sends an acknowledgement signal to the CPU after writing the data successfully. The CPU receives the signal and then moves on to perform some other action.

SOFTWARE
If a user wants to enter and run an application program, he/she needs system software. System software is a collection of programs that are executed as needed to perform functions such as:

• Receiving and interpreting user commands
• Entering and editing application programs and storing them as files on secondary storage devices
• Running standard application programs such as word processors, spreadsheets, games, etc.

The operating system is the key system software component; it helps the user exploit the underlying hardware through programs.


Types of software
A layer structure shows where the operating system is located in commonly used desktop software systems.

System software
System software helps run the computer hardware and computer system. It includes a
combination of the following:
 device drivers
 operating systems
 servers
 utilities
 windowing systems
 compilers
 debuggers
 interpreters
 linkers

The purpose of systems software is to unburden the applications programmer from the often
complex details of the particular computer being used, including such accessories as
communications devices, printers, device readers, displays and keyboards, and also to
partition the computer's resources such as memory and processor time in a safe and stable
manner. Examples are Windows XP, Linux, and Mac OS.

Application software
Application software allows end users to accomplish one or more specific (not directly computer-development-related) tasks. Typical applications include word processors, spreadsheets, games and educational software.


Application software exists for and has impacted a wide variety of topics.

PERFORMANCE
The most important measure of the performance of a computer is how quickly it can execute programs. The speed with which a computer executes programs is affected by the design of its hardware. For best performance, it is necessary to design the compiler, the machine instruction set, and the hardware in a coordinated way.

The total time required to execute a program, called the elapsed time, is a measure of the performance of the entire computer system. It is affected by the speed of the processor, the disk and the printer. The time the processor spends executing the program's instructions is called the processor time.

Just as the elapsed time for the execution of a program depends on all units in a
computer system, the processor time depends on the hardware involved in the execution of
individual machine instructions. This hardware comprises the processor and the memory
which are usually connected by the bus as shown in the fig c.

The pertinent parts of the fig. c are repeated in fig. d which includes the cache
memory as part of the processor unit.


Let us examine the flow of program instructions and data between the memory and
the processor. At the start of execution, all program instructions and the required data are
stored in the main memory. As the execution proceeds, instructions are fetched one by one
over the bus into the processor, and a copy is placed in the cache. Later, if the same instruction or data item is needed a second time, it is read directly from the cache.

The processor and relatively small cache memory can be fabricated on a single IC
chip. The internal speed of performing the basic steps of instruction processing on chip is
very high and is considerably faster than the speed at which the instruction and data can be
fetched from the main memory. A program will be executed faster if the movement of
instructions and data between the main memory and the processor is minimized, which is
achieved by using the cache.

For example:- Suppose a number of instructions are executed repeatedly over a short period
of time as happens in a program loop. If these instructions are available in the cache, they can
be fetched quickly during the period of repeated use. The same applies to the data that are
used repeatedly.

Processor clock: -
Processor circuits are controlled by a timing signal called the clock. The clock defines regular time intervals called clock cycles. To execute a machine instruction, the processor divides the action to be performed into a sequence of basic steps, such that each step can be completed in one clock cycle. The length P of one clock cycle is an important parameter that affects processor performance.

Processors used in today's personal computers and workstations have clock rates that range from a few hundred million to over a billion cycles per second.

Basic performance equation


We now focus our attention on the processor time component of the total elapsed time. Let T be the processor time required to execute a program that has been prepared in some high-level language. The compiler generates a machine language object program that corresponds to the source program. Assume that complete execution of the program requires the execution of N machine language instructions. The number N is the actual number of instruction executions and is not necessarily equal to the number of machine instructions in the object program: some instructions may be executed more than once, as is the case for instructions inside a program loop, while others may not be executed at all, depending on the input data used.

Suppose that the average number of basic steps needed to execute one machine instruction is S, where each basic step is completed in one clock cycle. If the clock rate is R cycles per second, the program execution time is given by

T = (N × S) / R

This is often referred to as the basic performance equation.

We must emphasize that N, S and R are not independent parameters; changing one may affect another. Introducing a new feature in the design of a processor will lead to improved performance only if the overall result is to reduce the value of T.
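A quick numeric check of the basic performance equation, with N, S and R chosen arbitrarily for illustration:

# Numeric sketch of T = (N x S) / R; the figures are invented for the example.
N = 50_000_000        # machine instructions actually executed
S = 4                 # average basic steps (clock cycles) per instruction
R = 2_000_000_000     # clock rate in cycles per second (2 GHz)

T = (N * S) / R       # processor time in seconds
print(T)              # 0.1 -> the program needs 0.1 s of processor time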

Pipelining and super scalar operation: -


We assume that instructions are executed one after the other. Hence the value of S is
the total number of basic steps, or clock cycles, required to execute one instruction. A
substantial improvement in performance can be achieved by overlapping the execution of
successive instructions using a technique called pipelining.

Consider: Add R1, R2, R3
This adds the contents of R1 and R2 and places the sum into R3.

The contents of R1 and R2 are first transferred to the inputs of the ALU. After the addition operation is performed, the sum is transferred to R3. The processor can read the next instruction from the memory while the addition operation is being performed. If that instruction also uses the ALU, its operands can be transferred to the ALU inputs at the same time that the result of the Add instruction is being transferred to R3.

In the ideal case, if all instructions are overlapped to the maximum degree possible, execution proceeds at the rate of one instruction completed in each clock cycle.


Individual instructions still require several clock cycles to complete. But for the purpose of
computing T, effective value of S is 1.

A higher degree of concurrency can be achieved if multiple instruction pipelines are implemented in the processor. This means that multiple functional units are used, creating parallel paths through which different instructions can be executed in parallel. With such an arrangement, it becomes possible to start the execution of several instructions in every clock cycle. This mode of operation is called superscalar execution. If it can be sustained for a long time during program execution, the effective value of S can be reduced to less than one. The parallel execution must, however, preserve the logical correctness of programs; that is, the results produced must be the same as those produced by serial execution of the program instructions. Nowadays many processors are designed in this manner.

Clock rate
There are two possibilities for increasing the clock rate R.
1. Improving the IC technology makes logic circuits faster, which reduces the time needed for the basic steps. This allows the clock period P to be reduced and the clock rate R to be increased.
2. Reducing the amount of processing done in one basic step also makes it possible to reduce the clock period P. However, if the actions that have to be performed by an instruction remain the same, the number of basic steps needed may increase.

Increases in the value of R that are entirely caused by improvements in IC technology affect all aspects of the processor's operation equally, with the exception of the time it takes to access the main memory. In the presence of a cache, the percentage of accesses to the main memory is small; hence much of the performance gain expected from the use of faster technology can be realized.

Instruction set CISC & RISC:-


Simple instructions require a small number of basic steps to execute. Complex instructions involve a large number of steps. For a processor that has only simple instructions, a large number of instructions may be needed to perform a given programming task. This could lead to a large value of N and a small value of S. On the other hand, if individual instructions perform more complex operations, fewer instructions will be needed, leading to a lower value of N and a larger value of S. It is not obvious whether one choice is better than the other.

Complex instructions combined with pipelining (effective value of S close to 1) would achieve the best performance. However, it is much easier to implement efficient pipelining in processors with simple instruction sets.

RISC and CISC are two instruction set design approaches. An instruction set, or instruction set architecture, is the part of the computer architecture that provides the commands that guide the computer in processing and manipulating data. An instruction set consists of instructions, addressing modes, native data types, registers, interrupt and exception handling, and memory architecture. An instruction set can be emulated in software by using an interpreter or built into the hardware of the processor. The instruction set architecture can be considered as a boundary between the software and the hardware. Microcontrollers and microprocessors can be classified on the basis of the RISC and CISC instruction set architectures.
Comparison between RISC and CISC:

Acronym: RISC stands for 'Reduced Instruction Set Computer'; CISC stands for 'Complex Instruction Set Computer'.

Definition: RISC processors have a smaller set of instructions with few addressing modes; CISC processors have a larger set of instructions with many addressing modes.

Memory unit: a RISC processor has no memory unit and uses separate hardware to implement instructions; a CISC processor has a memory unit to implement complex instructions.

Program control: RISC has a hard-wired control unit; CISC has a micro-programming unit.

Compiler design: RISC requires a complex compiler design; CISC permits an easy compiler design.

Calculations: on RISC, calculations are faster and precise; on CISC, calculations are slower and precise.

Decoding: decoding of instructions is simple on RISC; decoding is complex on CISC.

Execution time: execution time is very low on RISC; execution time is very high on CISC.

External memory: RISC does not require external memory for calculations; CISC requires external memory for calculations.

Pipelining: pipelining functions correctly on RISC; pipelining does not function correctly on CISC.

Stalling: stalling is mostly reduced in RISC processors; CISC processors often stall.

Code expansion: code expansion can be a problem for RISC; code expansion is not a problem for CISC.

Disc space: RISC saves disc space; CISC wastes disc space.

Applications: RISC is used in high-end applications such as video processing, telecommunications and image processing; CISC is used in low-end applications such as security systems, home automation, etc.


1.8 Performance measurements


The performance measure is the time taken by the computer to execute a given benchmark. Initially some attempts were made to create artificial programs that could be used as benchmark programs, but synthetic programs do not properly predict the performance obtained when real application programs are run.

A non-profit organization called SPEC (System Performance Evaluation Corporation) selects and publishes benchmarks.

The programs selected range from game playing, compilers, and database applications to numerically intensive programs in astrophysics and quantum chemistry. In each case, the program is compiled for the computer under test, and the running time on a real computer is measured. The same program is also compiled and run on a computer selected as the reference.

The ‘SPEC’ rating is computed as follows.


SPEC rating = (Running time on the reference computer) / (Running time on the computer under test)
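A small sketch of this calculation, with invented running times (when several benchmark programs are used, SPEC combines the individual ratings into an overall figure, typically with a geometric mean):

# SPEC rating sketch; the running times below are invented for illustration.
ref_time = 500.0      # running time on the reference computer (seconds)
test_time = 125.0     # running time on the computer under test (seconds)

spec_rating = ref_time / test_time
print(spec_rating)    # 4.0 -> the machine under test is rated 4 times faster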

MULTIPROCESSORS AND MULTICOMPUTERS


Multicomputers:
1. A computer made up of several computers.
2. Distributed computing deals with hardware and software systems containing more than one processing element and multiple programs.
3. It can run faster.
4. A multicomputer is multiple computers, each of which can have multiple processors.
5. Used for true parallel processing.
6. Processors cannot share the memory.
7. Called message-passing multicomputers.
8. Cost is higher.

Multiprocessors:
1. A computer that has more than one CPU on its motherboard.
2. Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system.
3. Speed depends on the speed of all the processors.
4. A single computer with multiple processors.
5. Used for true parallel processing.
6. Processors can share the memory.
7. Called shared-memory multiprocessors.
8. Cost is lower.

Data Representation:

Registers are made up of flip-flops and flip-flops are two-state devices that can store only 1’s
and 0’s.

There are many methods or techniques which can be used to convert numbers from one
base to another. We'll demonstrate here the following −
 Decimal to Other Base System
 Other Base System to Decimal
 Other Base System to Non-Decimal
 Shortcut method − Binary to Octal
 Shortcut method − Octal to Binary
 Shortcut method − Binary to Hexadecimal
 Shortcut method − Hexadecimal to Binary
Decimal to Other Base System
Steps
 Step 1 − Divide the decimal number to be converted by the value of the new base.
 Step 2 − Get the remainder from Step 1 as the rightmost digit (least significant digit)
of new base number.
 Step 3 − Divide the quotient of the previous divide by the new base.
 Step 4 − Record the remainder from Step 3 as the next digit (to the left) of the new
base number.
Repeat Steps 3 and 4, getting remainders from right to left, until the quotient becomes zero
in Step 3.
The last remainder thus obtained will be the Most Significant Digit (MSD) of the new base
number.
Example −
Decimal Number: 29₁₀
Calculating Binary Equivalent −


Step Operation Result Remainder

Step 1 29 / 2 14 1

Step 2 14 / 2 7 0

Step 3 7/2 3 1

Step 4 3/2 1 1

Step 5 1/2 0 1

As mentioned in Steps 2 and 4, the remainders have to be arranged in the reverse order so
that the first remainder becomes the Least Significant Digit (LSD) and the last remainder
becomes the Most Significant Digit (MSD).
Decimal Number − 29₁₀ = Binary Number − 11101₂.
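The repeated-division procedure above can be expressed as a short Python sketch; the function name and the choice of bases are only for illustration:

# Repeated division: collect remainders from least to most significant digit.
DIGITS = "0123456789ABCDEF"

def decimal_to_base(n: int, base: int) -> str:
    if n == 0:
        return "0"
    digits = []
    while n > 0:                     # Steps 1-4: divide, record the remainder
        n, remainder = divmod(n, base)
        digits.append(DIGITS[remainder])
    return "".join(reversed(digits)) # the last remainder becomes the MSD

print(decimal_to_base(29, 2))   # 11101
print(decimal_to_base(29, 8))   # 35
print(decimal_to_base(29, 16))  # 1D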
Other Base System to Decimal System
Steps
 Step 1 − Determine the column (positional) value of each digit (this depends on the
position of the digit and the base of the number system).
 Step 2 − Multiply the obtained column values (in Step 1) by the digits in the
corresponding columns.
 Step 3 − Sum the products calculated in Step 2. The total is the equivalent value in
decimal.

Step     Binary Number    Decimal Number
Step 1   11101₂           ((1 × 2⁴) + (1 × 2³) + (1 × 2²) + (0 × 2¹) + (1 × 2⁰))₁₀
Step 2   11101₂           (16 + 8 + 4 + 0 + 1)₁₀
Step 3   11101₂           29₁₀

Example
Binary Number − 11101₂
Calculating Decimal Equivalent −
Binary Number − 11101₂ = Decimal Number − 29₁₀
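The positional-value method can likewise be sketched in a few lines of Python (again, names and sample values are illustrative):

# Multiply each digit by its column value and sum the products.
def to_decimal(digits: str, base: int) -> int:
    value = 0
    for position, digit in enumerate(reversed(digits)):
        value += int(digit, base) * (base ** position)
    return value

print(to_decimal("11101", 2))   # 29
print(to_decimal("25", 8))      # 21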
Other Base System to Non-Decimal System
Steps


 Step 1 − Convert the original number to a decimal number (base 10).


 Step 2 − Convert the decimal number so obtained to the new base number.
Example
Octal Number − 25₈
Calculating Binary Equivalent −
Step 1 − Convert to Decimal

Step     Octal Number    Decimal Number
Step 1   25₈             ((2 × 8¹) + (5 × 8⁰))₁₀
Step 2   25₈             (16 + 5)₁₀
Step 3   25₈             21₁₀

Octal Number − 25₈ = Decimal Number − 21₁₀


Step 2 − Convert Decimal to Binary

Step Operation Result Remainder

Step 1 21 / 2 10 1

Step 2 10 / 2 5 0

Step 3 5/2 2 1

Step 4 2/2 1 0

Step 5 1/2 0 1

Decimal Number − 21₁₀ = Binary Number − 10101₂

Octal Number − 25₈ = Binary Number − 10101₂
Shortcut method - Binary to Octal
Steps
 Step 1 − Divide the binary digits into groups of three (starting from the right).
 Step 2 − Convert each group of three binary digits to one octal digit.
Example
Binary Number − 10101₂
Calculating Octal Equivalent −


Step     Binary Number    Octal Number
Step 1   10101₂           010 101
Step 2   10101₂           2₈ 5₈
Step 3   10101₂           25₈

Binary Number − 10101₂ = Octal Number − 25₈


Shortcut method - Octal to Binary
Steps
 Step 1 − Convert each octal digit to a 3 digit binary number (the octal digits may be
treated as decimal for this conversion).
 Step 2 − Combine all the resulting binary groups (of 3 digits each) into a single binary
number.
Example
Octal Number − 25₈
Calculating Binary Equivalent −

Step     Octal Number    Binary Number
Step 1   25₈             2₁₀ 5₁₀
Step 2   25₈             010₂ 101₂
Step 3   25₈             010101₂

Octal Number − 25₈ = Binary Number − 10101₂
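Both shortcut directions can be sketched as follows; padding to a multiple of three bits mirrors the grouping rule above (function names are illustrative):

# 3-bit grouping between binary and octal.
def binary_to_octal(bits: str) -> str:
    bits = bits.zfill((len(bits) + 2) // 3 * 3)          # pad on the left
    groups = [bits[i:i + 3] for i in range(0, len(bits), 3)]
    return "".join(str(int(g, 2)) for g in groups)

def octal_to_binary(octal: str) -> str:
    return "".join(format(int(d, 8), "03b") for d in octal)

print(binary_to_octal("10101"))   # 25
print(octal_to_binary("25"))      # 010101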


Shortcut method - Binary to Hexadecimal
Steps
 Step 1 − Divide the binary digits into groups of four (starting from the right).
 Step 2 − Convert each group of four binary digits to one hexadecimal symbol.
Example
Binary Number − 10101₂
Calculating hexadecimal Equivalent −

Step     Binary Number    Hexadecimal Number
Step 1   10101₂           0001 0101
Step 2   10101₂           1₁₀ 5₁₀
Step 3   10101₂           15₁₆

Binary Number − 10101₂ = Hexadecimal Number − 15₁₆


Shortcut method - Hexadecimal to Binary
Steps
 Step 1 − Convert each hexadecimal digit to a 4 digit binary number (the hexadecimal
digits may be treated as decimal for this conversion).
 Step 2 − Combine all the resulting binary groups (of 4 digits each) into a single binary
number.
Example
Hexadecimal Number − 15₁₆
Calculating Binary Equivalent −

Step     Hexadecimal Number    Binary Number
Step 1   15₁₆                  1₁₀ 5₁₀
Step 2   15₁₆                  0001₂ 0101₂
Step 3   15₁₆                  00010101₂

Hexadecimal Number − 15₁₆ = Binary Number − 10101₂

Binary Coded Decimal (BCD) code


In this code each decimal digit is represented by a 4-bit binary number; BCD is a way to express each of the decimal digits with a binary code. With four bits we can represent sixteen patterns (0000 to 1111), but in BCD only the first ten of these are used (0000 to 1001). The remaining six combinations, 1010 to 1111, are invalid in BCD.
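A small sketch of BCD encoding, with sample values chosen only for illustration:

# Each decimal digit becomes its own 4-bit group (patterns 0000-1001 only).
def to_bcd(number: int) -> str:
    return " ".join(format(int(d), "04b") for d in str(number))

print(to_bcd(59))    # 0101 1001
print(to_bcd(2024))  # 0010 0000 0010 0100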

Advantages of BCD Codes


 It is very similar to decimal system.


 We need to remember the binary equivalents of the decimal numbers 0 to 9 only.


Disadvantages of BCD Codes
 The addition and subtraction of BCD have different rules.
 The BCD arithmetic is a little more complicated.
 BCD needs more bits than straight binary to represent a decimal number, so BCD is less efficient than binary.

Alphanumeric codes
A binary digit, or bit, can represent only two symbols because it has only two states, '0' and '1'. This is not enough for communication between two computers, which needs many more symbols: the 26 letters of the alphabet in both capital and small forms, the numerals 0 to 9, punctuation marks and other symbols.

The alphanumeric codes are the codes that represent numbers and alphabetic characters. Most such codes also represent other characters such as symbols and various instructions necessary for conveying information. An alphanumeric code should represent at least 10 digits and 26 letters of the alphabet, i.e. a total of 36 items. The following three alphanumeric codes are very commonly used for data representation.
 American Standard Code for Information Interchange (ASCII).
 Extended Binary Coded Decimal Interchange Code (EBCDIC).
 Five-bit Baudot Code.
ASCII is a 7-bit code whereas EBCDIC is an 8-bit code. ASCII is more commonly used worldwide, while EBCDIC is used primarily in large IBM computers.
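As a quick illustration of the 7-bit ASCII code, Python's built-in ord() gives the code of a character; every character below fits in 7 bits:

# Print a few characters with their ASCII codes in decimal and 7-bit binary.
for ch in "A", "a", "7", "?":
    print(ch, ord(ch), format(ord(ch), "07b"))
# A 65 1000001
# a 97 1100001
# 7 55 0110111
# ? 63 0111111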

Complement Arithmetic
Complements are used in digital computers in order to simplify the subtraction operation and for logical manipulations. For each radix-r system (radix r represents the base of the number system) there are two types of complements.

S.N.   Complement                       Description
1      Radix Complement                 The radix complement is referred to as the r's complement.
2      Diminished Radix Complement      The diminished radix complement is referred to as the (r-1)'s complement.

Binary system complements


As the binary system has base r = 2, the two types of complements for the binary system are the 2's complement and the 1's complement.

1's complement
The 1's complement of a number is found by changing all 1's to 0's and all 0's to 1's. This is
called as taking complement or 1's complement. Example of 1's Complement is as follows.

2's complement
The 2's complement of binary number is obtained by adding 1 to the Least Significant Bit
(LSB) of 1's complement of the number.
2's complement = 1's complement + 1
Example of 2's Complement is as follows.
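A minimal sketch of both complements for a fixed word length (the 5-bit width and the sample value are assumptions made only for the example):

# 1's complement flips every bit; 2's complement adds 1 to the 1's complement.
BITS = 5

def ones_complement(value: int) -> int:
    return value ^ ((1 << BITS) - 1)

def twos_complement(value: int) -> int:
    return (ones_complement(value) + 1) & ((1 << BITS) - 1)

n = 0b01010                                 # 10
print(format(ones_complement(n), "05b"))    # 10101
print(format(twos_complement(n), "05b"))    # 10110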

Binary Arithmetic
Binary arithmetic is an essential part of all digital computers and many other digital systems.

Binary Addition
It is the key to binary subtraction, multiplication and division. There are four rules of binary addition.

In the fourth case, a binary addition creates a sum of (1 + 1 = 10), i.e. 0 is written in the given column and a carry of 1 goes over to the next column.

Example − Addition

Binary Subtraction
Subtraction and borrow are two words that will be used very frequently for binary subtraction. There are four rules of binary subtraction.

Example − Subtraction

Binary Multiplication


Binary multiplication is similar to decimal multiplication. It is simpler than decimal


multiplication because only 0s and 1s are involved. There are four rules of binary
multiplication.

Example − Multiplication

Binary Division
Binary division is similar to decimal division; it follows the long-division procedure.

Example − Division

Subtraction by 1’s Complement

In subtraction by 1’s complement we subtract two binary numbers with the help of the 1’s complement and the end-around carry.


The steps to be followed in subtraction by 1’s complement are:

i) Write down the 1’s complement of the subtrahend.

ii) Add it to the minuend.

iii) If the result of the addition has a carry over, the carry is dropped and 1 is added to the last bit.

iv) If there is no carry over, take the 1’s complement of the result of the addition to get the final result, which is negative.
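The procedure above can be sketched in Python for whole binary numbers of equal width; the end-around carry implements step (iii):

# 1's-complement subtraction with end-around carry.
def subtract_ones_complement(minuend: str, subtrahend: str) -> str:
    width = len(minuend)
    mask = (1 << width) - 1
    total = int(minuend, 2) + (int(subtrahend, 2) ^ mask)  # add 1's complement
    if total > mask:                       # carry over: drop it, add 1 (step iii)
        return format((total & mask) + 1, f"0{width}b")
    # no carry: result is negative, take the 1's complement of the sum (step iv)
    return "-" + format(total ^ mask, f"0{width}b")

print(subtract_ones_complement("110101", "100101"))  # 010000
print(subtract_ones_complement("101011", "111001"))  # -001110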

Evaluate:

(i) 110101 – 100101

Solution:

1’s complement of 100101 is 011010. Hence

Minuend - 110101

1’s complement of subtrahend - 011010

Carry over - 1 001111

010000

The required difference is 10000

(ii) 101011 – 111001

Solution:

1’s complement of 111001 is 000110. Hence

Minuend - 101011

1’s complement - 000110

110001

Hence the difference is – 1110

(iii) 1011.001 – 110.10

Solution:

1’s complement of 0110.100 is 1001.011 Hence

Minuend - 1011.001

1’s complement of subtrahend - 1001.011

Carry over - 1 0100.100


0100.101
Hence the required difference is 100.101

(iv) 10110.01 – 11010.10

Solution:

1’s complement of 11010.10 is 00101.01

10110.01

00101.01

11011.10

Hence the required difference is – 00100.01 i.e. – 100.01

Subtraction by 2’s Complement

With the help of subtraction by 2’s complement method we can easily subtract two binary
numbers.

The operation is carried out by means of the following steps:

(i) At first, 2’s complement of the subtrahend is found.

(ii) Then it is added to the minuend.

(iii) If the final carry over of the sum is 1, it is dropped and the result is positive.

(iv) If there is no carry over, the two’s complement of the sum will be the result and it is
negative.
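As with the 1’s-complement method, the procedure can be sketched in Python for whole binary numbers written with the same number of bits:

# 2's-complement subtraction: add the 2's complement and inspect the carry.
def subtract_twos_complement(minuend: str, subtrahend: str) -> str:
    width = len(minuend)
    mask = (1 << width) - 1
    twos = ((int(subtrahend, 2) ^ mask) + 1) & mask        # step (i)
    total = int(minuend, 2) + twos                         # step (ii)
    if total > mask:                                       # carry: drop it (step iii)
        return format(total & mask, f"0{width}b")
    # no carry: result is negative, take the 2's complement of the sum (step iv)
    return "-" + format(((total ^ mask) + 1) & mask, f"0{width}b")

print(subtract_twos_complement("110110", "010110"))  # 100000
print(subtract_twos_complement("10110", "11010"))    # -00100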

The following examples on subtraction by 2’s complement will make the


procedure clear:

Evaluate:

(i) 110110 - 10110

Solution:

The number of bits in the subtrahend is 5 while that of the minuend is 6. We make the number of bits in the subtrahend equal to that of the minuend by placing a ‘0’ in the sixth place of the subtrahend.

Now, the 2’s complement of 010110 is (101001 + 1), i.e. 101010. Adding this to the minuend:

1 10110 Minuend

1 01010 2’s complement of subtrahend


Carry over 1 1 00000 Result of addition

After dropping the carry over we get the result of subtraction to be 100000.

(ii) 10110 – 11010

Solution:

2’s complement of 11010 is (00101 + 1) i.e. 00110. Hence

Minuend - 10110

2’s complement of subtrahend - 00110

Result of addition - 11100

As there is no carry over, the result of subtraction is negative and is obtained by writing the 2’s
complement of 11100 i.e.(00011 + 1) or 00100.

Hence the difference is – 100.

(iii) 1010.11 – 1001.01

Solution:

2’s complement of 1001.01 is 0110.11. Hence

Minuend - 1010.11

2’s complement of subtrahend - 0110.11

Carry over 1 0001.10


After dropping the carry over we get the result of subtraction as 1.10.

(iv) 10100.01 – 11011.10

Solution:

2’s complement of 11011.10 is 00100.10. Hence

Minuend - 10100.01

2’s complement of subtrahend - 00100.10

Result of addition - 11000.11

As there is no carry over the result of subtraction is negative and is obtained by writing the 2’s
complement of 11000.11.

Hence the required result is – 00111.01.


Error Detection & Correction


What is Error?
Error is a condition in which the output information does not match the input information. During transmission, digital signals suffer from noise that can introduce errors in the binary bits travelling from one system to another: a 0 bit may change to 1, or a 1 bit may change to 0.

Error-Detecting codes
Whenever a message is transmitted, it may get scrambled by noise or data may get
corrupted. To avoid this, we use error-detecting codes which are additional data added to a
given digital message to help us detect if an error occurred during transmission of the
message. A simple example of error-detecting code is parity check.

Error-Correcting codes
Along with error-detecting code, we can also pass some data to figure out the original
message from the corrupt message that we received. This type of code is called an error-
correcting code. Error-correcting codes also deploy the same strategy as error-detecting
codes but additionally, such codes also detect the exact location of the corrupt bit.
In error-correcting codes, parity check has a simple way to detect errors along with a
sophisticated mechanism to determine the corrupt bit location. Once the corrupt bit is
located, its value is inverted (from 0 to 1 or from 1 to 0) to recover the original message.

How to Detect and Correct Errors?


To detect and correct the errors, additional bits are added to the data bits at the time of
transmission.
 The additional bits are called parity bits. They allow detection or correction of the
errors.
 The data bits along with the parity bits form a code word.


Parity Checking of Error Detection


It is the simplest technique for detecting and correcting errors. The MSB of an 8-bit word is used as the parity bit and the remaining 7 bits are used as data or message bits. The parity of the 8-bit transmitted word can be either even or odd.

Even parity -- Even parity means the number of 1's in the given word including the parity
bit should be even (2,4,6,....).
Odd parity -- Odd parity means the number of 1's in the given word including the parity bit
should be odd (1,3,5,....).

Use of Parity Bit


The parity bit can be set to 0 or 1 depending on the type of parity required.
 For even parity, this bit is set to 1 or 0 such that the number of "1" bits in the entire word is even, as shown in fig. (a).
 For odd parity, this bit is set to 1 or 0 such that the number of "1" bits in the entire word is odd, as shown in fig. (b).
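A small sketch of parity generation for a 7-bit data word, following the convention above of placing the parity bit in the MSB of the 8-bit code word (the sample data values are arbitrary):

# Count the 1s in the 7 data bits and choose the parity bit accordingly.
def add_parity(data7: int, even: bool = True) -> int:
    ones = bin(data7 & 0x7F).count("1")
    parity = (ones % 2) if even else (ones + 1) % 2   # make the total even/odd
    return (parity << 7) | (data7 & 0x7F)

print(format(add_parity(0b0110101, even=True), "08b"))  # 00110101 (4 ones already even)
print(format(add_parity(0b0110100, even=True), "08b"))  # 10110100 (parity bit set to 1)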

How Does Error Detection Take Place?


Parity checking at the receiver can detect the presence of an error if the parity of the received signal is different from the expected parity. That means, if it is known that the parity of the transmitted signal is always going to be "even" and the received signal has odd parity, then the receiver can conclude that the received signal is not correct. If an error is detected, the receiver will ignore the received byte and request retransmission of the same byte from the transmitter.


UNIT – II (12 Lectures)


BASIC COMPUTER ORGANIZATION AND DESIGN: Instruction codes, computer
registers, computer instructions, instruction cycle, timing and control,
memory-reference instructions, input-output and interrupt.
Book: M. Morris Mano (2006), Computer System Architecture, 3rd edition, Pearson/PHI,
India: Unit-5 Pages: 123-157

Central processing unit: stack organization, instruction formats, addressing modes,


data transfer and manipulation, program control, reduced instruction set computer
(RISC).
Book: M. Morris Mano (2006), Computer System Architecture, 3rd edition, Pearson/PHI,
India: Unit-8 Pages: 241-297

Instruction Codes
Computer instructions are the basic components of a machine language program. They are also known as macro operations, since each one is composed of a sequence of micro operations. Each instruction initiates a sequence of micro operations that fetch operands from registers or memory, possibly perform arithmetic, logic, or shift operations, and store results in registers or memory.

Instructions are encoded as binary instruction codes. Each instruction code contains an operation code, or opcode, which designates the overall purpose of the instruction (e.g. add, subtract, move, input, etc.). The number of bits allocated for the opcode determines how many different instructions the architecture supports.

In addition to the opcode, many instructions also contain one or more operands, which indicate where in registers or memory the data required for the operation is located. For example, an add instruction requires two operands, and a not instruction requires one.
15      12 11       6 5        0
+---------+-----------+----------+
| Opcode  | Operand   | Operand  |
+---------+-----------+----------+

The opcode and operands are most often encoded as unsigned binary numbers in
order to minimize the number of bits used to store them. For example, a 4-bit opcode
encoded as a binary number could represent up to 16 different operations.

The control unit is responsible for decoding the opcode and operand bits in the
instruction register, and then generating the control signals necessary to drive all
other hardware in the CPU to perform the sequence of microoperations that comprise
the instruction.

Basic Computer Instruction Format:


The Basic Computer has a 16-bit instruction code similar to the examples described
above. It supports direct and indirect addressing modes.
How many bits are required to specify the addressing mode?
15 14    12 11              0
+---+------+-----------------+
| I |  OP  |     ADDRESS     |
+---+------+-----------------+
I = 0: direct
I = 1: indirect
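A minimal sketch of decoding this 16-bit format (bit 15 = I, bits 14-12 = opcode, bits 11-0 = address); the sample word is invented for illustration:

# Decode the Basic Computer memory-reference format shown above.
def decode_basic(word: int):
    indirect = (word >> 15) & 0x1
    opcode = (word >> 12) & 0x7
    address = word & 0xFFF
    return indirect, opcode, address

# Example word: indirect addressing, opcode 2, address 0x3A5
word = (1 << 15) | (2 << 12) | 0x3A5
print(decode_basic(word))   # (1, 2, 933)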

Computer Instructions
All Basic Computer instruction codes are 16 bits wide. There are 3 instruction code
formats:
Memory-reference instructions take a single memory address as an operand, and
have the format:
15 14    12 11              0
+---+------+-----------------+
| I |  OP  |     Address     |
+---+------+-----------------+
