2 - Cpe410l2

The document discusses CPU organization, focusing on the data path, which includes registers, the ALU, and buses that facilitate computations. It explains instruction execution cycles, the differences between RISC and CISC architectures, and the concepts of instruction-level and processor-level parallelism, including pipelining and multiprocessor systems. Additionally, it covers advanced processing techniques like vector processors and array computers for enhanced performance.

CPU Organization: Data Path

[Figure: the data path of a simple CPU. The registers feed the ALU input registers A and B over the ALU input bus; the ALU computes A + B and places the result in the ALU output register.]
Data Path
• The central part of the CPU is called the data path and is very important in all machines.
It consists of the registers (typically 1 to 32), the ALU (Arithmetic
Logic Unit), and several buses connecting the pieces. The registers
feed into two ALU input registers, labelled A and B. These registers
hold the ALU inputs while the ALU is performing a computation.
• The process of running two operands through the ALU and storing the
result is called the data path cycle and is the heart of most CPUs. To a
considerable extent, it defines what the machine can do. The faster
the data path cycle is, the faster the machine runs.
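As a rough illustration only, the following Python sketch models one data path cycle: two registers are copied into the ALU input registers A and B, the ALU performs an operation, and the result is written back to the register file. The register names and the alu function are invented for the example and do not correspond to any particular machine.

# Minimal sketch of a data path cycle (names are hypothetical).
registers = {"R1": 7, "R2": 5, "R3": 0}   # a tiny register file

def alu(a, b, op):
    """The ALU combines its two input values according to the operation."""
    if op == "ADD":
        return a + b
    if op == "AND":
        return a & b
    raise ValueError(f"unsupported ALU operation: {op}")

def data_path_cycle(src1, src2, dest, op):
    # 1. Drive the two source registers onto the ALU input bus
    #    and latch them into the ALU input registers A and B.
    A, B = registers[src1], registers[src2]
    # 2. The ALU computes the result, which lands in the ALU output register.
    output_register = alu(A, B, op)
    # 3. Write the output register back into the register file.
    registers[dest] = output_register

data_path_cycle("R1", "R2", "R3", "ADD")
print(registers["R3"])   # 12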
CPU Instructions: register-memory
or register-register instructions
• Register-memory instructions allow memory words to be fetched into
registers, where they can then be used as ALU inputs in subsequent
instructions. Words are the units of data moved between memory and
registers; a word might be, for example, an integer. Other register-
memory instructions allow registers to be stored back into memory.
• Register-register instructions: a typical register-register instruction
fetches two operands from the registers, brings them to the ALU input
registers, performs some operation on them (for example, addition or
Boolean AND), and stores the result back in one of the registers.
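To contrast the two instruction classes, the sketch below (with an invented memory array and register names, purely hypothetical) shows register-memory LOAD and STORE operations alongside a register-register ADD.

# Hypothetical sketch contrasting register-memory and register-register instructions.
memory = [0] * 16            # a tiny word-addressable memory
memory[4], memory[5] = 10, 32
registers = {"R1": 0, "R2": 0, "R3": 0}

def load(reg, addr):         # register-memory: memory word -> register
    registers[reg] = memory[addr]

def store(reg, addr):        # register-memory: register -> memory word
    memory[addr] = registers[reg]

def add(dest, src1, src2):   # register-register: both operands come from registers
    registers[dest] = registers[src1] + registers[src2]

load("R1", 4)                # R1 = memory[4]
load("R2", 5)                # R2 = memory[5]
add("R3", "R1", "R2")        # R3 = R1 + R2
store("R3", 6)               # memory[6] = R3
print(memory[6])             # 42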
Instruction Execution
• The CPU executes each instruction in a fetch-decode-execute cycle
consisting of a series of small steps, as follows (a minimal sketch of this
loop appears after the list):
1. Fetch the next instruction from memory into the instruction register.
2. Change the program counter to point to the following instruction.
3. Determine the type of instruction just fetched.
4. If the instruction uses a word in memory, determine where it is.
5. Fetch the word, if needed, into a CPU register.
6. Execute the instruction.
7. Go to step 1 to begin executing the following instruction.
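The sketch below walks through the same seven steps in Python, assuming a toy instruction set (LOAD, ADD, HALT) invented for the example; real instruction sets are of course far richer.

# Toy fetch-decode-execute loop (illustrative; the instruction set is invented).
memory = {0: ("LOAD", "R1", 100), 1: ("LOAD", "R2", 101),
          2: ("ADD", "R3", "R1", "R2"), 3: ("HALT",),
          100: 6, 101: 7}
registers = {"R1": 0, "R2": 0, "R3": 0}
pc = 0                                   # program counter

while True:
    instruction = memory[pc]             # 1. fetch into the instruction register
    pc += 1                              # 2. advance the program counter
    opcode = instruction[0]              # 3. determine the instruction type
    if opcode == "HALT":
        break
    if opcode == "LOAD":                 # 4-5. locate and fetch the memory word
        _, reg, addr = instruction
        registers[reg] = memory[addr]
    elif opcode == "ADD":                # 6. execute the instruction
        _, dest, src1, src2 = instruction
        registers[dest] = registers[src1] + registers[src2]
                                         # 7. loop back for the next instruction
print(registers["R3"])                   # 13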
Modern trends in CPU arrangement in
computer design
• RISC: a type of computer architecture that has a small, reduced set of
instructions that execute quickly; it contains only frequently used instructions.
Usually used for dedicated controllers (hardwired design).

• CISC: a computer architecture that has long, complex instructions that are
general purpose and powerful but slower than RISC; used for general-
purpose designs (microprogram/microcode design).
Parallelism: Instruction level and processor level
• Instruction level: parallelism is exploited within individual instructions to get
more instructions/sec out of the machine. This approach employs pipelining and
superscalar architectures.

• Pipelining involves breaking the execution of an instruction into smaller
stages such that each stage is handled by a dedicated piece of hardware,
all running in parallel, much like an assembly line in a manufacturing
company.

[Figure: A five-stage pipeline, and the state of each stage as a function of time over nine clock cycles.]
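To make the stage-by-stage overlap concrete, the short sketch below prints which instruction occupies each of five stages (fetch, decode, operand fetch, execute, write back) on every clock cycle. The stage names follow the classic five-stage pipeline, but the code itself is only an illustration.

# Sketch of a five-stage pipeline schedule (illustrative only).
STAGES = ["S1 fetch", "S2 decode", "S3 operands", "S4 execute", "S5 write back"]
NUM_INSTRUCTIONS = 5

# Instruction i enters stage s at clock cycle i + s (0-based), so the stages
# overlap: while instruction 1 is being decoded, instruction 2 is being fetched.
for cycle in range(NUM_INSTRUCTIONS + len(STAGES) - 1):
    occupancy = []
    for s, name in enumerate(STAGES):
        i = cycle - s
        occupancy.append(f"{name}: I{i + 1}" if 0 <= i < NUM_INSTRUCTIONS else f"{name}: -")
    print(f"cycle {cycle + 1}: " + " | ".join(occupancy))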
• Pipelining allows a trade-off between latency (how long it takes to
execute an instruction) and processor bandwidth (how many MIPS
the CPU has).
• If the clock cycle time is T nsec, there are 10^9/T clock cycles per second.
Since one instruction completes every clock cycle, the number of
instructions executed per second is 10^9/T. For example, if T = 2 nsec,
500 million instructions are executed each second. To get the number of
MIPS, we divide the instruction execution rate by 1 million: (10^9/T)/10^6 =
1000/T MIPS.
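A quick check of the arithmetic in Python, with T in nanoseconds as in the text:

# MIPS as a function of the clock cycle time T (in nanoseconds), assuming
# one instruction completes per clock cycle, as stated above.
def mips(T_nsec):
    cycles_per_second = 1e9 / T_nsec        # 10^9 / T
    return cycles_per_second / 1e6          # divide by one million -> 1000 / T

print(mips(2))   # 500.0 MIPS, i.e. 500 million instructions per second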
• Superscalar architecture uses multiple pipelines, so that several
instructions can be executed in parallel in each processing stage.
• Array computers use an array of processors to perform the same sequence of
instructions on different sets of data.
• Vector processors use the concept of a vector register, which consists of a
set of conventional registers that can be loaded from memory in a single
instruction (which actually loads them from memory serially). A vector
addition instruction then performs the pairwise addition of the elements of
two such vectors by feeding them from the two vector registers to a
pipelined adder. The result from the adder is another vector, which can
either be stored into a vector register or used directly as an operand for
another vector operation.
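The effect of a vector addition instruction can be sketched as follows (plain Python; the vector length of 8, the toy memory, and the register names are arbitrary choices for the example):

# Sketch of vector registers and a pairwise vector addition (illustrative only).
VECTOR_LENGTH = 8
memory = list(range(100))                      # a toy memory

def vector_load(base_addr):
    # One "instruction" loads VECTOR_LENGTH consecutive words into a vector
    # register (the hardware would fetch them from memory serially).
    return memory[base_addr:base_addr + VECTOR_LENGTH]

def vector_add(v1, v2):
    # Pairwise addition of the elements of two vector registers,
    # as a pipelined adder would produce them.
    return [a + b for a, b in zip(v1, v2)]

V1 = vector_load(0)        # [0, 1, ..., 7]
V2 = vector_load(8)        # [8, 9, ..., 15]
V3 = vector_add(V1, V2)    # result vector: [8, 10, ..., 22]
print(V3)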
Processor-level parallelism: multiple CPUs work together on the same problem
to obtain higher gains in throughput than instruction-level parallelism.

• Multiprocessors: several CPUs or similar programmable processors operate
relatively independently of one another but share common resources such as
memory and I/O over a common multiple-processor bus.
• Multicomputers: large numbers of interconnected computers, each having
its own private memory but no common memory. The CPUs in a
multicomputer are sometimes said to be loosely coupled, to contrast them
with the tightly coupled CPUs in a multiprocessor (computer networks).
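As a rough software analogy only (not a description of the hardware), the sketch below uses Python's multiprocessing module: several worker processes update one counter held in shared memory, much as CPUs in a multiprocessor share a common memory; a multicomputer would instead exchange messages, since it has no shared memory.

# Shared-memory analogy for a multiprocessor: several workers update one
# shared counter, protected by a lock (illustrative only).
from multiprocessing import Process, Value, Lock

def worker(counter, lock, n):
    for _ in range(n):
        with lock:                 # access to the shared resource is coordinated
            counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0)        # lives in shared memory, visible to all workers
    lock = Lock()
    procs = [Process(target=worker, args=(counter, lock, 1000)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)           # 4000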
