CTE 234 Computer Architecture Lecture Notes
In the 1940s, a mathematician called John Von Neumann described the basic arrangement (or
architecture) of a computer. Most computers today follow the concept that he described although there
are other types of architecture. When we talk about the Von Neumann architecture, we are actually
talking about the relationship between the hardware that makes up a Von Neumann-based computer.
The Von Neumann architecture uses one memory for both instructions and data. A Von Neumann computer cannot distinguish between
data and instructions in a memory location! It ‘knows’ only because of the location of a particular bit
pattern in RAM.
It executes programs by performing one instruction after the next in a serial manner, using a fetch-decode-execute cycle.
In this chapter, we are going to build upon and refine the ideas introduced in an earlier chapter. You
should re-read the relevant chapter on CPUs before you start this one. We have already said that the
CPU was made up of 4 important components:
The control unit.
The ALU.
The registers.
The clock.
Because the IAS (Immediate Access Store) is so important, we are definitely going to move it to its own section in our model of a
computer. (We discussed this previously). We need to get data into and out of the computer so we will
include this as a separate section as well. We will also introduce the idea of a clock and clock cycles in
the CPU. Our new model of a computer now looks like this:
[Figure: the Von Neumann model of a computer.]
The CPU, or Central Processing Unit, is the name given to the component that controls the computer
and works on the data. It can be split up into four sub-components:
[Figure: the four sub-components of the CPU.]
We know a few things from before about the Von Neumann CPU.
1) The control unit
A Von Neumann CPU has a control unit. The control unit is in charge of 'fetching' each instruction that needs to be executed in a program by issuing control signals to the hardware. It then decodes the instruction and finally issues more control signals to the hardware to actually execute it.
2) The ALU
A Von Neumann CPU has an ALU (Arithmetic and Logic Unit). This is the part of the CPU that carries out arithmetic and logical operations on data.
3) Registers
A Von Neumann CPU has registers. These are very fast memory circuits. They hold information such as
the address of the next instruction (Program Counter), the current instruction being executed (Current
Instruction Register), the data being worked on and the results of arithmetic and logical operations
(Accumulators), information about the last operation (Status Register) and whether an interrupt has
happened (Interrupt Register). Registers are covered in a lot more detail later in this chapter.
4) The clock
Instructions are carried out to the beat of the clock! Some instructions take one beat and others more
than one beat. Very roughly speaking, the faster the clock, the more clock beats you have per second, so the more instructions per second you can do and the faster your computer will go.
We also know that the Von Neumann computer has an IAS, or Immediate Access Store, where it puts
both programs and data. We commonly refer to this memory as RAM. RAM is made up of lots of
boxes that can store a bit pattern. Each box has a unique address. A memory address might store an
instruction (which is made up of an operator and an operand) or it might store just a piece of data. A
Von Neumann computer can’t tell the difference between the bit patterns as such, but ‘knows’ indirectly
because of where the bit pattern is stored in RAM. Pre-Von Neumann computers used to split up
memory into program memory and data memory and this made computers relatively complex. Von
Neumann was the first to realise that there was actually no difference between the nature of an
instruction and the nature of a piece of data. One important function of an operating system is to
manage memory and to keep track of the RAM addresses of applications as well as any data.
We also know that computers have an address bus, so that the CPU can address each individual memory
location in the IAS, for example, when it wants to store a piece of data or retrieve a piece of data. The
data itself is moved about between devices on a data bus. There is also a control bus, to generate signals
to manage the whole process.
[Figure: I/O controllers connected to the buses.]
Of course, there are a whole range of other I/O controllers we could have included. We could have
shown ones for devices such as a mouse, a MIDI device, a printer, a DVD player, a SCSI device as used
with many scanners or a network card, to name just a few.
Whatever you do to improve performance, you cannot get away from the fact that instructions can only be carried out one at a time, and only sequentially. Both of these factors hold back the efficiency of the CPU. This is commonly referred to as the 'Von Neumann bottleneck'. You can provide a Von Neumann processor with more RAM, more cache or faster components, but if real gains are to be made in CPU performance then a major review of CPU design needs to take place.
RISC (Reduced Instruction Set Computer): The main idea behind RISC is to make the hardware simpler by using an instruction set composed of a few basic steps for loading, evaluating, and storing operations: a load command loads data, a store command stores data, and so on.
CISC (Complex Instruction Set Computer): The main idea behind CISC is that a single instruction will do all of the loading, evaluating, and storing operations; a multiplication command, for example, will load the data, evaluate it, and store the result, hence it is complex.
RISC: Reduce the cycles per instruction at the cost of the number of instructions per program.
CISC: The CISC approach attempts to minimize the number of instructions per program but at the cost of
an increase in the number of cycles per instruction.
Earlier, when programming was done using assembly language, a need was felt to make each instruction do more tasks, because programming in assembly was tedious and error-prone; this is how the CISC architecture evolved. With the rise of high-level languages, dependency on assembly reduced and the RISC architecture prevailed.
Characteristics of RISC –
Simple, fixed-length instructions that typically execute in a single clock cycle.
Only load and store instructions access memory; all other operations work on registers.
Characteristics of CISC –
Complex, variable-length instructions.
An instruction may take more than a single clock cycle to get executed.
Difference –
RISC: Transistors are used for more registers. CISC: Transistors are used for storing complex instructions.
RISC: Can perform only register-to-register arithmetic operations. CISC: Can perform REG to REG, REG to MEM, or MEM to MEM operations.
Suppose we have to add two numbers held in memory.
CISC approach: There will be a single command or instruction for this, like ADD, which will perform the whole task.
RISC approach: Here the programmer first writes load commands to bring the data into registers, then applies a suitable operator, and then stores the result in the desired location.
So, the add operation is divided into parts (load, operate, store), due to which RISC programs are longer and require more memory to be stored, but require fewer transistors because the commands are less complex.
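The load/operate/store split above can be sketched side by side. The mnemonics and syntax below belong to a hypothetical instruction set invented for illustration, not to any real processor:

```
; CISC style: one complex instruction does the load, add, and store.
ADD   [C], [A], [B]     ; C = A + B in a single instruction

; RISC style: explicit load, operate, store steps, one simple instruction each.
LOAD  R1, [A]           ; bring operand A into register R1
LOAD  R2, [B]           ; bring operand B into register R2
ADD   R3, R1, R2        ; R3 = R1 + R2 (register-to-register only)
STORE [C], R3           ; write the result back to memory
```

The RISC version takes four instructions where the CISC version takes one, but each RISC instruction is simple enough to complete in one clock cycle.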
In computer architecture, registers are very fast computer memory used to execute programs and operations efficiently. They do this by giving fast access to commonly used values, i.e. the values that are at the point of operation/execution at that time. For this purpose, there are several different classes of CPU registers which work in coordination with the computer memory to run operations efficiently.
The sole purpose of having registers is fast retrieval of data for processing by the CPU. Though accessing instructions from RAM is fast compared with a hard drive, it still isn't fast enough for the CPU. For even better processing, there are memories inside the CPU which can fetch from RAM, ahead of time, the data that is about to be needed. After registers we have cache memory, which is faster than RAM but slower than registers.
These are classified as given below.
Accumulator:
This is the most frequently used register, used to store data taken from memory. Different microprocessors have different numbers of accumulators.
Memory Address Register (MAR):
It holds the address of the location to be accessed in memory. The MAR and MDR (Memory Data Register) together facilitate communication between the CPU and the main memory.
Memory Data Register (MDR):
It contains the data to be written into, or read out from, the addressed location.
General Purpose Registers (GPR):
These are numbered R0, R1, R2…Rn-1, and are used to store temporary data during any ongoing operation. Their contents can be accessed from assembly programs. Modern CPU architectures tend to provide more GPRs so that register-to-register addressing, which is faster than other addressing modes, can be used more often.
Program Counter (PC):
The Program Counter (PC) is used to keep track of the execution of the program. It contains the memory address of the next instruction to be fetched. The PC points to the address of the next instruction to be fetched from main memory once the previous instruction has been successfully completed.
How much the PC is incremented by depends on the architecture being used. On a 32-bit architecture with 4-byte instructions, the PC gets incremented by 4 every time to fetch the next instruction.
Instruction Register (IR):
The IR holds the instruction which is just about to be executed. The instruction at the address held in the PC is fetched and stored in the IR. As soon as the instruction is placed in the IR, the CPU starts executing it and the PC points to the next instruction to be executed.
Condition Code Register (CCR):
The condition code register contains different flags that indicate the status of the last operation. For instance, if an operation produced a negative or zero result, the corresponding flag is set. The flags include:
Carry (C): Set to 1 if an add operation produces a carry or a subtract operation produces a borrow; otherwise cleared to 0.
Negative (N): Meaningful only in signed number operations. Set to 1 if a negative result is produced.
These are generally decided by ALU. So, these are the different registers which are operating for a
specific purpose.
Registers are memory locations within the actual processor that work at very fast speeds. They store instructions which are awaiting to be decoded or executed.
1. PC - program counter - stores the address of the next instruction in RAM
2. MAR - memory address register - stores the address in RAM to be read from or written to
3. MDR - memory data register - stores the data that is to be sent to or fetched from memory
4. CIR - current instruction register - stores the actual instruction that is being decoded and executed
Hardware and software are essential parts of a computer system. Hardware components are the
physical parts of a computer, like the central processing unit (CPU), mouse, storage, and more. Software
components are the set of instructions that we store and run on our hardware. Together, they form a
computer.
Software describes a collection of programs and procedures that perform tasks on a computer. Software
is an ordered sequence of instructions that change the state of a computer’s hardware.There are three
general types of software:
System software
Programming software
Application software
When you think of computer science, software is probably what comes to mind. Software is what
developers actually code. Those programs are then installed onto a hard drive.
Hardware is anything physically connected to a computer. For example, your display monitor, printer,
mouse, and hard drive are all hardware components.
Hardware and software interact with each other. The software “tells” the hardware which tasks to
perform, and hardware makes it possible to actually perform them.
Note: Most computers require at least a hard drive, display, keyboard, memory, motherboard,
processor, power supply, and video card to function.
Hardware vs software:
Hardware: physical devices that store and run software. Software: a collection of coded instructions that allow us to interact with a computer.
Hardware examples: monitor, printer, scanner, label maker, router, hard drive. Software examples: Adobe, Google Chrome, Microsoft Excel, Spotify.
Hardware begins functioning when software is loaded; software must be installed on hardware.
Hardware will wear down over time; software will not wear down, but it is vulnerable to bugs and becoming outdated.
Hardware components
Now that we understand the difference between hardware and software, let’s learn about the hardware
components of a computer system. Remember: hardware includes the physical parts of a computer that
the software instructs.
CPU
The Central Processing Unit (CPU) is a physical object that processes information on a computer. It takes data from the main memory, processes it, and returns the modified data to the main memory. It comprises two sub-units:
The control unit (CU): controls data flow from and into the main memory.
The arithmetic and logic unit (ALU): performs arithmetic and logical operations on the data.
This computer architecture design, created by John von Neumann in 1945, is still used in most
computers produced today. The Von Neumann architecture is based on the concept of a stored-
program computer. Instruction and program data are stored in the same memory.
[Figure: The Von Neumann architecture, showing the control unit, inputs/outputs, memory unit, and registers.]
The input unit takes inputs from the real world or an input device and converts that data into streams of
bytes. Common input devices include a keyboard, mouse, microphone, camera, and USB.
The output unit, on the other hand, takes the processed data from the CPU and represents it in a way a human can understand. Common output devices include monitor screens, printers, and headphones.
Storage Units
After the data is retrieved and converted, it must be stored in the memory. The storage unit or memory
is the physical memory space. It is divided into byte-sized storage locations.
Memory
There are two components to a computer's hardware memory. Main memory, or random access memory (RAM), is the physical memory space inside a computer. It stores data and instructions that can be directly accessed by the CPU. Computers usually have a limited amount of main memory, too little to store all your data.
That is when secondary storage comes into use. Secondary storage augments the main memory and
holds data and programs that are not needed immediately.
Secondary storage devices include hard drives, compact discs (CD), USB flash drives, etc. Secondary
storage devices cannot be directly accessed by the CPU.
Software components
Now let’s discuss the different software components that we need to have a functioning computer.
Remember: software comprises the set of programs, procedures, and routines associated needed to
operate a computer.
Machine language
A computer can only process binary: a stream of ones and zeros. Binary is the computer’s language.
Instructions for the computer are also stored as ones and zeros that the computer must decode and
execute.
Assembly language
Assembly language is a human-readable notation that represents binary opcodes as mnemonic instructions. A CPU cannot process or execute assembly instructions directly, so a translator is required that can convert assembly language into machine language.
Assembler
An assembler translates an assembly language program into machine language. The code snippet below
is an assembly program that prints “Hello, world!” on the screen for the X86 processor.
section .data
    msg db "Hello, world!", 0xA   ; the 13-character string plus a newline: 14 bytes

section .text
global _start

_start:
    mov rax, 1        ; syscall number for sys_write
    mov rdi, 1        ; file descriptor 1 (stdout)
    mov rsi, msg      ; address of the string to write
    mov rdx, 14       ; number of bytes to write
    syscall
    mov rax, 60       ; syscall number for sys_exit
    mov rdi, 0        ; exit status 0
    syscall
High-level languages
Assembly language is referred to as a low-level language because it is a lot like machine language: it is verbose and tied to one processor family. To overcome these shortcomings, high-level languages were created.
These are called programming languages, and they allow us to create powerful, complex, human-
readable programs without large numbers of low-level instructions. Some of the most famous high-level
languages are:
Python
C++
Java
Software design is the process of transforming particular requirements into a suitable program using
code and a high-level language. We need to properly design a program and system that meets our goals.
Developers use software design to think through all the parts of their code and system. Software design
includes three levels:
Architectural Design: an abstract version of the program or system that outlines how components
interact with each other.
High-level Design: this part breaks the design into sub-systems and modules. High-level design focuses
on how the system should be implemented.
Detailed Design: this part deals with the implementation. This is where we define the logical structure of each module.
Whether it is a read operation or a write operation, the CPU calculates the address of the required data and sends it on the address bus for the execution of the required operation. The maximum number of memory locations that can be accessed in a system is determined by the number of lines of the address bus.
An address bus of n lines can address at most 2^n locations directly. Thus a 16-bit address bus can allow access to 2^16 locations, or 64 KB of memory.
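The 2^n rule can be checked with a minimal Python sketch (the function name is just for illustration):

```python
def addressable_locations(address_lines: int) -> int:
    """Number of memory locations an n-line address bus can address directly: 2^n."""
    return 2 ** address_lines

# A 16-bit address bus reaches 2^16 = 65,536 locations (64 KB).
print(addressable_locations(16))   # 65536
# A 32-bit address bus reaches 2^32 locations (4 GB).
print(addressable_locations(32))   # 4294967296
```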
A data bus is used to carry the data and instructions from the CPU to memory and peripheral devices
and vice versa. Thus it is a bidirectional bus. It is one of most important parts of the connections to the
CPU because every program instruction and every byte of data must travel across the bus at some point.
The size of the data bus is measured in bits. The data bus size has much influence on the computer
architecture because the important parameters of it like word size, the quantum of data etc. are
determined and manipulated by the size of the data bus.
Generally, a microprocessor with an n-bit data bus is called an n-bit processor. As CPUs became more advanced, the data bus grew in size. A 64-bit data bus can transfer 8 bytes in every bus cycle, so it is much faster than an 8-bit processor, which can transfer only one byte in every bus cycle.
A control bus contains various individual lines carrying synchronizing signals that are used to control the various peripheral devices connected to the CPU. The common signals transferred on the control bus between the CPU and devices are memory read, memory write, I/O read, I/O write, etc.
The signals are designed keeping in mind the design philosophy of the microprocessor and the requirements of the various devices connected to the CPU, so different types of microprocessors have different control signals.
Memory management function of operating system helps in allocating the main memory space to the
processes and their data at the time of their execution.
The following are the three key memory management techniques used by an operating system:
Segmentation
Paging
Swapping
1) Segmentation
Segmentation refers to the technique of dividing the physical memory space into multiple blocks. Each block has a specific length and is known as a segment. Each segment has a starting address called the base address. The length of the segment determines the available memory space in the segment.
The location of data values stored in the segment can be determined by the distance of actual position
of data value from base address of the segment. The distance between the actual position of data and
the base address of segment is known as displacement or offset value. In other words, when there is a
need to obtain data from required segmented memory then the actual address of data is calculated by
adding the base address of the segment with offset value.
The base address of the segment and the offset value is specified in a program instruction itself. The
following figure shows how the actual position of an operand in a segment is obtained by adding the
base address and offset value.
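The base-plus-offset calculation above can be sketched in a few lines of Python; the base address, offset, and segment length below are made-up values for illustration:

```python
def physical_address(base: int, offset: int, segment_length: int) -> int:
    """Actual address of data = base address of the segment + offset (displacement)."""
    if offset >= segment_length:
        raise ValueError("offset lies outside the segment")
    return base + offset

# A segment starting at base address 4000, of length 1000:
print(physical_address(4000, 250, 1000))   # 4250
```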
2) Paging
Paging is a technique in which the main memory of a computer system is organized in the form of equal-sized blocks called pages. In this technique, the addresses of the occupied pages of physical memory are stored in a table, known as the page table.
Paging enables the operating system to obtain data from a physical memory location without specifying a lengthy memory address in the instruction. In this technique, a virtual address is used to map to the physical address of the data. The virtual address is specified in the instruction and is smaller than the physical address of the data. It consists of two different numbers: the first number is the address of the page (the virtual page number) in the page table, and the second number is the offset of the actual data within the page.
The above figure shows how the virtual address is used to obtain the physical address of an occupied
page of physical memory using a page table.
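A page-table lookup of this kind can be sketched in Python. The page size and the page-table contents below are assumptions chosen purely for illustration:

```python
PAGE_SIZE = 1024   # bytes per page (assumed)

# Page table: virtual page number -> physical frame number (assumed contents).
page_table = {0: 5, 1: 2, 2: 7}

def translate(virtual_address: int) -> int:
    """Split a virtual address into (page, offset) and map it via the page table."""
    page, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table[page]          # a missing key here would mean a page fault
    return frame * PAGE_SIZE + offset

# Virtual address 1050 = page 1, offset 26 -> frame 2, offset 26 = 2074.
print(translate(1050))   # 2074
```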
3) Swapping
Swapping is the technique used by an operating system for efficient management of memory space of a
computer system. Swapping involves performing two tasks called swapping in and swapping out. The
task of placing the pages or blocks of data from the hard disk to the main memory is called swapping in.
On the other hand, the task of removing pages or blocks of data from main memory to the hard disk is
called swapping out. The swapping technique is useful when a larger program is to be executed or some operations have to be performed on a large file.
Cache Memory in Computer Organization
Cache Memory is a special very high-speed memory. It is used to speed up and synchronize with high-
speed CPU. Cache memory is costlier than main memory or disk memory but more economical than CPU
registers. Cache memory is an extremely fast memory type that acts as a buffer between RAM and the
CPU. It holds frequently requested data and instructions so that they are immediately available to the
CPU when needed. Cache memory is used to reduce the average time to access data from the Main
memory. The cache is a smaller and faster memory that stores copies of the data from frequently used
main memory locations. There are various different independent caches in a CPU, which store
instructions and data.
Levels of memory:
Level 1 or Registers – This is the memory in which data that is immediately needed by the CPU is stored and accessed. The most commonly used registers are the accumulator, program counter, address register, etc.
Level 2 or Cache memory – This is very fast memory with a short access time, where data is temporarily stored for faster access.
Level 3 or Main Memory – This is the memory on which the computer currently works. It is small in size, and once power is off, data no longer stays in this memory.
Level 4 or Secondary Memory – This is external memory, which is not as fast as main memory, but data stays permanently in this memory.
Cache Performance: When the processor needs to read or write a location in main memory, it first
checks for a corresponding entry in the cache.
If the processor finds that the memory location is in the cache, a cache hit has occurred and data is read
from the cache.
If the processor does not find the memory location in the cache, a cache miss has occurred. For a cache
miss, the cache allocates a new entry and copies in data from main memory, then the request is fulfilled
from the contents of the cache.
The performance of cache memory is frequently measured in terms of a quantity called Hit ratio.
Hit ratio = hit / (hit + miss) = no. of hits/total accesses
We can improve cache performance by using a larger cache block size, higher associativity, reducing the miss rate, reducing the miss penalty, and reducing the time to hit in the cache.
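The hit ratio formula above can be sketched directly in Python; the hit and miss counts are made-up numbers for illustration:

```python
def hit_ratio(hits: int, misses: int) -> float:
    """Hit ratio = hits / (hits + misses), i.e. hits divided by total accesses."""
    return hits / (hits + misses)

# 950 hits and 50 misses over 1000 accesses gives a hit ratio of 0.95.
print(hit_ratio(950, 50))   # 0.95
```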
Cache Mapping: There are three different types of mapping used for the purpose of cache memory
which is as follows: Direct mapping, Associative mapping, and Set-Associative mapping. These are
explained below.
A. Direct Mapping
The simplest technique, known as direct mapping, maps each block of main memory into only one possible cache line. If a line is already occupied by a memory block when a new block needs to be loaded, the old block is trashed. An address is split into two parts, an index field and a tag field. The cache stores the tag field alongside the data, whereas the full data remains in main memory. Direct mapping's performance is directly proportional to the hit ratio.
i = j modulo m
where
i = cache line number
j = main memory block number
m = number of lines in the cache
For purposes of cache access, each main memory address can be viewed as consisting of three fields. The least significant w bits identify a unique word or byte within a block of main memory; in most contemporary machines, the address is at the byte level. The remaining s bits specify one of the 2^s blocks of main memory. The cache logic interprets these s bits as a tag of s-r bits (the most significant portion) and a line field of r bits. This latter field identifies one of the m = 2^r lines of the cache. The line field provides the index bits in direct mapping.
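Splitting an address into tag, line, and word fields can be sketched with bit operations. The geometry below (4-byte blocks so w = 2, a 16-line cache so r = 4) is an assumption chosen for illustration:

```python
W, R = 2, 4   # assumed: w word bits, r line (index) bits

def split_address(addr: int):
    """Split a main-memory address into (tag, line, word) fields for direct mapping."""
    word = addr & ((1 << W) - 1)          # least significant w bits: byte within block
    line = (addr >> W) & ((1 << R) - 1)   # next r bits select the cache line
    tag  = addr >> (W + R)                # remaining s-r bits form the tag
    return tag, line, word

# 0b1101_0110_1011 -> tag 0b110101 = 53, line 0b1010 = 10, word 0b11 = 3.
print(split_address(0b1101_0110_1011))   # (53, 10, 3)
```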
B. Associative Mapping
In this type of mapping, the associative memory is used to store content and addresses of the memory
word. Any block can go into any line of the cache. This means that the word id bits are used to identify
which word in the block is needed, but the tag becomes all of the remaining bits. This enables the
placement of any word at any place in the cache memory. It is considered to be the fastest and the most
flexible mapping form. In associative mapping the index bits are zero.
C. Set-associative Mapping
This form of mapping is an enhanced form of direct mapping where the drawbacks of direct mapping are
removed. Set associative addresses the problem of possible thrashing in the direct mapping method. It
does this by saying that instead of having exactly one line that a block can map to in the cache, we will
group a few lines together creating a set. Then a block in memory can map to any one of the lines of a
specific set. Set-associative mapping allows that each word that is present in the cache can have two or
more words in the main memory for the same index address. Set associative cache mapping combines
the best of direct and associative cache mapping techniques. In set associative mapping the index bits
are given by the set offset bits. In this case, the cache consists of a number of sets, each of which
consists of a number of lines. The relationships are
m = v * k
i = j mod v
where
i = cache set number
j = main memory block number
v = number of sets
k = number of lines in each set
m = number of lines in the cache
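The set-mapping relations above can be sketched for an assumed geometry: m = 8 lines arranged as v = 4 sets of k = 2 lines (a 2-way set-associative cache):

```python
V, K = 4, 2    # assumed: v sets, k lines per set (2-way set-associative)
M = V * K      # m = v * k total cache lines

def set_index(block_number: int) -> int:
    """Set that main-memory block j maps to: i = j mod v."""
    return block_number % V

# Block 13 maps to set 13 mod 4 = 1; it may occupy either of that set's 2 lines.
print(set_index(13))   # 1
```

With k = 1 this degenerates to direct mapping; with v = 1 (one set holding all lines) it becomes fully associative, which is why set-associative mapping combines the two.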
Usually, the cache memory can store a reasonable number of blocks at any given time, but this number
is small compared to the total number of blocks in the main memory.
The correspondence between the main memory blocks and those in the cache is specified by a mapping
function.
Primary Cache – A primary cache is always located on the processor chip. This cache is small and its
access time is comparable to that of processor registers.
Secondary Cache – Secondary cache is placed between the primary cache and the rest of the memory. It
is referred to as the level 2 (L2) cache. Often, the Level 2 cache is also housed on the processor chip.
Spatial locality of reference – This says that once a memory location has been referenced, there is a good chance that locations in close proximity to it will be referenced next.
Temporal locality of reference – This says that a recently referenced location is likely to be referenced again soon; replacement algorithms such as least recently used (LRU) exploit this. It is also why, on a miss, the complete block containing the required word is loaded into the cache rather than the word alone: by spatial locality, the neighbouring words are likely to be referred to next.
Microprocessor
Computer's Central Processing Unit (CPU) built on a single Integrated Circuit (IC) is called a
microprocessor.
A digital computer with one microprocessor which acts as a CPU is called a microcomputer.
It is a programmable, multipurpose, clock-driven, register-based electronic device that reads binary instructions from a storage device called memory, accepts binary data as input, processes the data according to those instructions, and provides results as output.
The microprocessor contains millions of tiny components like transistors, registers, and diodes that work
together.
Evolution of Microprocessors
We can categorize the microprocessor according to the generations or according to the size of the
microprocessor:
The first generation microprocessors were introduced in the year 1971-1972 by Intel Corporation. It was
named Intel 4004 since it was a 4-bit processor.
It was a processor on a single chip. It could perform simple arithmetic and logical operations such as
addition, subtraction, Boolean OR and Boolean AND.
It had a control unit capable of performing control functions like fetching an instruction from storage
memory, decoding it, and then generating control pulses to execute it.
The second generation microprocessors were introduced in 1973, again by Intel. The Intel 8008 was the first 8-bit microprocessor, able to perform arithmetic and logic operations on 8-bit words; an improved version was the Intel 8080.
The third generation microprocessors, introduced in 1978, were represented by Intel's 8086, the Zilog Z8000 and the 80286, which were 16-bit processors with performance like minicomputers.
Several different companies introduced the 32-bit microprocessors, but the most popular one is the
Intel 80386.
From 1995 to now we are in the fifth generation. After the 80486, Intel came out with a new processor, namely the Pentium processor, followed by the Pentium Pro CPU, which allows multiple CPUs in a single system to achieve multiprocessing.
Bus - Set of conductors intended to transmit data, address or control information to different elements
in a microprocessor. A microprocessor will have three types of buses, i.e., data bus, address bus, and
control bus.
IPC (Instructions Per Cycle) - It is a measure of how many instructions a CPU is capable of executing in a single clock cycle.
Clock Speed - It is the number of operations per second the processor can perform. It can be expressed
in megahertz (MHz) or gigahertz (GHz). It is also called the Clock Rate.
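IPC and clock speed combine into a rough throughput estimate. The figures below (a 3 GHz clock, an average IPC of 2) are assumed values for illustration:

```python
def instructions_per_second(clock_hz: float, ipc: float) -> float:
    """Rough throughput estimate: clock rate multiplied by instructions per cycle."""
    return clock_hz * ipc

# A 3 GHz processor averaging 2 instructions per cycle executes
# roughly 6 billion instructions per second.
print(instructions_per_second(3e9, 2))   # 6000000000.0
```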
Word Length - The number of bits the processor can process at a time is called the word length of the
processor. An 8-bit microprocessor can process 8-bit data at a time. The range of word length is from 4 bits to 64 bits, depending upon the type of the microcomputer.
Data Types - The microprocessor supports multiple data type formats like binary, ASCII, signed and
unsigned numbers.
Working of Microprocessor
The microprocessor follows a sequence to execute the instruction: Fetch, Decode, and then Execute.
Initially, the instructions are stored in the memory of the computer in sequential order. The microprocessor fetches those instructions from memory, then decodes and executes them until a STOP instruction is met. Then, it sends the result in binary form to the output port.
Between these processes, the register stores the temporary data and ALU (Arithmetic and Logic Unit)
performs the computing functions.
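The sequence above can be sketched as a small simulator. The instruction set (LOAD, ADD, STOP) and the accumulator register are invented for illustration; they do not correspond to any real processor:

```python
# Minimal fetch-decode-execute sketch. The three-instruction ISA below
# (LOAD, ADD, STOP) is a made-up example, not a real instruction set.
memory = [
    ("LOAD", 5),     # put the constant 5 into the accumulator
    ("ADD", 3),      # add 3 to the accumulator (done by the ALU)
    ("STOP", None),  # halt execution
]

pc = 0   # program counter: address of the next instruction
acc = 0  # accumulator register: holds temporary data

while True:
    opcode, operand = memory[pc]  # fetch the instruction at PC
    pc += 1                       # advance to the next instruction
    if opcode == "LOAD":          # decode and execute
        acc = operand
    elif opcode == "ADD":
        acc = acc + operand
    elif opcode == "STOP":
        break

print(bin(acc))  # result presented in binary form, as at an output port
```

Each loop iteration is one pass through the fetch-decode-execute cycle; the loop ends when the STOP instruction is decoded.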
Features of Microprocessor
Low Cost - Due to integrated-circuit technology, microprocessors are available at very low cost, which
reduces the cost of a computer system.
High Speed - Due to the technology involved, a microprocessor can work at very high speed, executing
millions of instructions per second.
Small Size - A microprocessor is fabricated in a very small footprint thanks to very-large-scale (VLSI) and
ultra-large-scale (ULSI) integration technology. Because of this, the size of the computer system is
reduced.
Versatile - The same chip can be used for several applications, therefore, microprocessors are versatile.
Low Power Consumption - Microprocessors use metal-oxide-semiconductor (MOS) technology, which
consumes little power.
Less Heat Generation - Microprocessors use semiconductor technology, which does not emit much heat
compared to vacuum-tube devices.
Reliable - Since microprocessors use semiconductor technology, the failure rate is very low, so they are
very reliable.
Portable - Due to their small size and low power consumption, microprocessors are portable.
What is OPCODE?
The opcode is the first part of an instruction; it tells the computer what function to perform, and it is
also called the operation code. Opcodes are the numeric codes that hold the instructions given to the
computer system.
These are the codes that tell the CPU what operations are to be performed. The computer system has
an operation code, or opcode, for each and every function it supports.
What is OPERAND?
The operand is the second part of the instruction; it indicates to the computer system where to find the
data or instructions, or where to store them.
The number of operands varies among different computer systems. Each instruction tells the Control
Unit of the computer system what to perform and how to perform it.
The operations may be arithmetic, logical, branch operations, and so on, depending upon the problem
given to the computer.
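The opcode/operand split can be seen by decoding an instruction word into its two bit fields. The 8-bit format below (3-bit opcode, 5-bit operand) is a made-up example for illustration, not a real instruction encoding:

```python
# Hypothetical 8-bit instruction format: the top 3 bits are the opcode
# (what to do), the low 5 bits are the operand (where to find/store data).
OPERAND_BITS = 5

def decode(instruction):
    opcode = instruction >> OPERAND_BITS              # first part of the word
    operand = instruction & ((1 << OPERAND_BITS) - 1) # second part of the word
    return opcode, operand

# 0b010_01101: opcode 2 with operand 13, in this invented format
op, arg = decode(0b01001101)
print(op, arg)
```

Real machines use wider words and more fields, but the principle is the same: the control unit inspects the opcode bits, and the remaining bits locate the data.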
Registers are involved in each instruction cycle. The main registers and the instruction cycle are
discussed below:
1. Memory Address Register (MAR) - The memory address register is connected to the address lines of
the system bus. It specifies the address in memory for a read or write operation.
2. Memory Buffer Register (MBR) - The memory buffer register is connected to the data lines of the
system bus. It holds the value to be stored in memory or the last value read from memory.
3. Program Counter (PC) - Program Counter carries the address of the next instruction that is to be
fetched.
4. Instruction Register (IR) - Instruction Register holds the last instruction fetched.
The Instruction Cycle - Each phase of the instruction cycle can be broken down into a sequence of
elementary micro-operations.
The instruction cycle consists of four cycles i.e., The Fetch Cycle, The Indirect Cycle, The Execute Cycle,
and The Interrupt Cycle.
Indirect cycle - The indirect cycle is always followed by the execute cycle, and the interrupt cycle is
always followed by the fetch cycle. Otherwise, the next cycle depends upon the state of the system.
A 2-bit register called the Instruction Cycle Code (ICC) is assumed; it designates the state of the
processor. At the end of each cycle, the ICC is set accordingly.
1. Fetch Cycle
2. Indirect Cycle
3. Execute Cycle
4. Interrupt Cycle
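The ICC behaves as a small state machine over these four cycles. The 2-bit encoding below is an assumption chosen for illustration; only the transition rules come from the text above:

```python
# Sketch of the Instruction Cycle Code (ICC) as a 2-bit state machine.
# The encoding (00 fetch, 01 indirect, 10 execute, 11 interrupt) is an
# assumption for illustration; the transitions follow the rules in the notes.
FETCH, INDIRECT, EXECUTE, INTERRUPT = 0b00, 0b01, 0b10, 0b11

def next_icc(icc, needs_indirect=False, interrupt_pending=False):
    if icc == FETCH:
        # after fetch: resolve indirect addressing if needed, else execute
        return INDIRECT if needs_indirect else EXECUTE
    if icc == INDIRECT:
        return EXECUTE    # the indirect cycle is always followed by execute
    if icc == EXECUTE:
        # after execute: service an enabled interrupt, else fetch the next one
        return INTERRUPT if interrupt_pending else FETCH
    return FETCH          # the interrupt cycle is always followed by fetch

print(next_icc(EXECUTE, interrupt_pending=True))
```

At the end of every cycle the processor computes the next state and stores it in the ICC, which is how "the next cycle depends upon the state of the system" is realized in hardware.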
The Fetch Cycle - At the start of the fetch cycle, the address of the next instruction to be executed is
held in the program counter (PC) register.
Step - 1. The address in the Program Counter (PC) is moved to the Memory Address Register (MAR).
Step - 2. The address in the Memory Address Register (MAR) is placed on the address bus, a READ
command is issued on the control bus, the result arrives on the data bus, and it is copied into the
Memory Buffer Register (MBR).
At the same time, the program counter (PC) is incremented by 1 so that it is ready for the next
instruction.
Step - 3. The contents of the Memory Buffer Register (MBR) are moved to the Instruction Register (IR).
The fetch cycle thus includes three steps and four micro-operations. Note that the second and third
micro-operations happen simultaneously, during the second time unit.
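The three steps above can be sketched as register transfers, with a dict standing in for the register set (a sketch of the data flow, not a hardware model; the address and instruction word are arbitrary):

```python
# The fetch cycle as register-transfer micro-operations.
# Address 100 and the word 0x1234 are arbitrary example values.
memory = {100: 0x1234}                 # the instruction word to be fetched
reg = {"PC": 100, "MAR": 0, "MBR": 0, "IR": 0}

reg["MAR"] = reg["PC"]                 # t1: MAR <- PC
reg["MBR"] = memory[reg["MAR"]]        # t2: MBR <- memory[MAR] ...
reg["PC"] = reg["PC"] + 1              # t2: ... and PC <- PC + 1 (same time unit)
reg["IR"] = reg["MBR"]                 # t3: IR <- MBR

print(reg)
```

The two t2 transfers use different hardware paths (the memory interface and the PC incrementer), which is why they can occur in the same time unit.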
The Indirect Cycle - Once an instruction is fetched, the next step is to fetch any source operands that
are specified with indirect addressing.
Step - 1. The address field of the instruction is moved to the Memory Address Register (MAR); it is used
to fetch the address of the operand.
Step - 2. The address field of the Instruction Register (IR) is updated from the Memory Buffer Register
(MBR), so it now contains a direct address instead of an indirect one.
Step - 3. The Instruction Register (IR) is now in the same state as if indirect addressing had not been
used, and it is ready for the execute cycle.
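These steps can be sketched the same way: the address field of the fetched instruction points at a memory word that holds the operand's real address (the addresses used are arbitrary examples):

```python
# The indirect cycle as register transfers. Location 20 holds the
# direct address 45; both values are arbitrary examples.
memory = {20: 45}
ir_address_field = 20                  # address field of the instruction in IR
reg = {"MAR": 0, "MBR": 0}

reg["MAR"] = ir_address_field          # t1: MAR <- IR(address)
reg["MBR"] = memory[reg["MAR"]]        # t2: MBR <- memory[MAR]
ir_address_field = reg["MBR"]          # t3: IR(address) <- MBR

print(ir_address_field)                # IR now holds the direct address
```

After step 3 the execute cycle can proceed exactly as it would for a directly addressed operand.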
The Execute Cycle - Unlike the other three cycles, the execute cycle takes many forms: the fetch,
indirect, and interrupt cycles are simple and predictable, but the execute cycle depends on which
instruction is in the IR. Let us consider the example of an ADD instruction:
ADD R, X
Here, the instruction adds the value at memory location X to register R. The cycle begins with the
Instruction Register (IR) containing the ADD instruction.
Step - 1. The address portion of the Instruction Register (IR) is loaded into the Memory Address Register
(MAR).
Step - 2. The memory location addressed by the MAR is read, and its value arrives in the Memory Buffer
Register (MBR).
Step - 3. The values of R and the Memory Buffer Register (MBR) are added by the Arithmetic Logic Unit
(ALU), and the result is placed in R.
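The three execute-cycle steps for ADD R, X can be sketched as register transfers (the address X and the values in R and memory are arbitrary examples):

```python
# Execute cycle for ADD R, X: add the word at address X to register R.
# X = 30, memory[30] = 7, and R = 5 are arbitrary example values.
X = 30
memory = {X: 7}
reg = {"R": 5, "MAR": 0, "MBR": 0}

reg["MAR"] = X                         # t1: MAR <- IR(address)
reg["MBR"] = memory[reg["MAR"]]        # t2: MBR <- memory[MAR]
reg["R"] = reg["R"] + reg["MBR"]       # t3: R <- R + MBR (done by the ALU)

print(reg["R"])
```

A different instruction (say, a store or a branch) would need a different sequence of micro-operations, which is exactly why the execute cycle cannot be given as one fixed flow.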
The Interrupt Cycle - When the execute cycle completes, a test is made to determine whether an
enabled interrupt has occurred. If one has, the interrupt cycle takes place. Its exact nature depends
upon the machine.
Step - 1. The value of the Program Counter (PC) is moved to the Memory Buffer Register (MBR).
Step - 2. The Memory Address Register (MAR) is loaded with the address at which the value of the
Program Counter (PC) is to be saved, and the PC is loaded with the address of the beginning of the
interrupt routine.
Step - 3. The Memory Buffer Register (MBR), which holds the old value of the Program Counter (PC), is
stored to memory.
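The interrupt-cycle steps can be sketched in the same register-transfer style. The save address and the interrupt-routine address below are arbitrary assumptions; real machines fix them by convention or look them up in a vector table:

```python
# The interrupt cycle as register transfers: save the old PC to memory,
# then jump to the interrupt routine. Both addresses are example values.
SAVE_ADDRESS = 0          # assumed location where the old PC is saved
ROUTINE_ADDRESS = 200     # assumed start of the interrupt routine
memory = {}
reg = {"PC": 105, "MAR": 0, "MBR": 0}

reg["MBR"] = reg["PC"]                 # t1: MBR <- PC
reg["MAR"] = SAVE_ADDRESS              # t2: MAR <- save address ...
reg["PC"] = ROUTINE_ADDRESS            # t2: ... and PC <- routine address
memory[reg["MAR"]] = reg["MBR"]        # t3: memory[MAR] <- MBR (old PC saved)

print(memory[SAVE_ADDRESS], reg["PC"])
```

Because the old PC is saved before the jump, the interrupt routine can later restore it and resume the interrupted program where it left off.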