Pipelining and Vector Processing

- Parallel Processing
- Pipelining
- Arithmetic Pipeline
- Instruction Pipeline
- RISC Pipeline
- Vector Processing
- Array Processors
Parallel Processing
PARALLEL PROCESSING
Parallel processing is a term used for a large class of techniques that
are used to provide simultaneous data-processing tasks for the
purpose of increasing the computational speed of a computer system.
Example of parallel processing:
  Multiple functional units: separate the execution unit into eight functional units operating in parallel.
PARALLEL COMPUTERS
Architectural Classification
Flynn's classification
Based on the multiplicity of Instruction Streams and Data Streams
Instruction Stream
Sequence of Instructions read from memory
Data Stream
Operations performed on the data in the processor
                              Number of Data Streams
                              Single        Multiple
  Number of       Single      SISD          SIMD
  Instruction
  Streams         Multiple    MISD          MIMD
SISD COMPUTER
[Figure: A single processor unit receives one instruction stream and one data stream from memory]
Characteristics
- Standard von Neumann machine
- Instructions and data are stored in memory
- One operation at a time
Limitations
Von Neumann bottleneck
PERFORMANCE IMPROVEMENTS
Multiprogramming
Spooling
Multifunction processor
Pipelining
Exploiting instruction-level parallelism
- Superscalar
- Superpipelining
- VLIW (Very Long Instruction Word)
MISD COMPUTER
[Figure: Multiple control units, each issuing its own instruction stream to a processor unit, all operating on a single data stream from memory]
Characteristics
- There is no computer at present that can be classified as MISD
SIMD COMPUTER
[Figure: A single control unit broadcasts one instruction stream to multiple processor units; each operates on its own data stream, connected to the memory modules through an alignment network]
Characteristics
- Only one copy of the program exists
- A single controller executes one instruction at a time
MIMD COMPUTER
[Figure: Multiple processors connected through an interconnection network to a shared memory]
Characteristics
- Multiple processing units
- Execution of multiple instructions on multiple data
Types of MIMD computer systems
- Shared memory multiprocessors
- Message-passing multicomputers
SHARED MEMORY MULTIPROCESSORS
[Figure: Processors and memory modules connected through an interconnection network (IN): buses, multistage IN, or crossbar switch]

Characteristics
  - All processors have equally direct access to one large memory address space

Example systems
  - Bus and cache-based systems (Sequent Balance, Encore Multimax)
  - Multistage IN-based systems (Ultracomputer, Butterfly, RP3, HEP)
  - Crossbar switch-based systems (C.mmp, Alliant FX/8)
Limitations
Memory access latency
Hot spot problem
MESSAGE-PASSING MULTICOMPUTER
[Figure: Message-passing network with point-to-point connections among the node computers]
Characteristics
- Interconnected computers
- Each processor has its own memory and communicates via message passing
Example systems
- Tree structure: Teradata, DADO
- Mesh-connected: Rediflow, Series 2010, J-Machine
- Hypercube: Cosmic Cube, iPSC, NCUBE, FPS T Series, Mark III
Limitations
- Communication overhead
- Hard to program
Pipelining
PIPELINING
A technique of decomposing a sequential process into suboperations,
with each subprocess being executed in a special dedicated
segment that operates concurrently with all other segments.
Example:  Ai * Bi + Ci    for i = 1, 2, 3, ..., 7

[Figure: Three-segment pipeline; R1-R5 are the segment registers, with a multiplier in segment 2 and an adder in segment 3; Ai, Bi, and Ci come from memory]

  Segment 1:  R1 <- Ai,  R2 <- Bi          Load Ai and Bi
  Segment 2:  R3 <- R1 * R2,  R4 <- Ci     Multiply and load Ci
  Segment 3:  R5 <- R3 + R4                Add
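The register-transfer behavior above can be simulated clock by clock. The following C sketch (not from the slides; the array values are made up) updates all segment registers on each clock, so once the pipe is full one result emerges per clock:

```c
#include <stdio.h>

#define N 7

int main(void) {
    double A[N] = {1, 2, 3, 4, 5, 6, 7};
    double B[N] = {2, 2, 2, 2, 2, 2, 2};
    double C[N] = {5, 5, 5, 5, 5, 5, 5};
    /* Segment registers of the pipeline */
    double R1 = 0, R2 = 0, R3 = 0, R4 = 0, R5 = 0;

    /* k + n - 1 = 3 + 7 - 1 = 9 clocks complete all seven tasks */
    for (int clock = 0; clock < N + 2; clock++) {
        /* All transfers happen at the same clock edge, so read the
           values the registers held before this clock.             */
        double r1 = R1, r2 = R2, r3 = R3, r4 = R4;

        if (clock >= 2)                       /* Segment 3: add              */
            R5 = r3 + r4;
        if (clock >= 1 && clock <= N) {       /* Segment 2: multiply, get Ci */
            R3 = r1 * r2;
            R4 = C[clock - 1];
        }
        if (clock < N) {                      /* Segment 1: load Ai and Bi   */
            R1 = A[clock];
            R2 = B[clock];
        }

        if (clock >= 2)
            printf("clock %d: R5 = A[%d]*B[%d] + C[%d] = %g\n",
                   clock + 1, clock - 2, clock - 2, clock - 2, R5);
    }
    return 0;
}
```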
GENERAL PIPELINE
General Structure of a 4-Segment Pipeline
[Figure: Input -> S1 -> R1 -> S2 -> R2 -> S3 -> R3 -> S4 -> R4, with all segment registers driven by a common clock]

Space-Time Diagram

  Clock cycle:  1   2   3   4   5   6   7   8   9
  Segment 1:   T1  T2  T3  T4  T5  T6
  Segment 2:       T1  T2  T3  T4  T5  T6
  Segment 3:           T1  T2  T3  T4  T5  T6
  Segment 4:               T1  T2  T3  T4  T5  T6
PIPELINE SPEEDUP

n: Number of tasks to be performed

Conventional Machine (Non-Pipelined)
  tn: Clock cycle
  T1: Time required to complete the n tasks
  T1 = n * tn

Pipelined Machine (k stages)
  tp: Clock cycle (time to complete each suboperation)
  Tk: Time required to complete the n tasks
  Tk = (k + n - 1) * tp

Speedup
  Sk: Speedup
  Sk = n * tn / [(k + n - 1) * tp]

  lim Sk = tn / tp     ( = k, if tn = k * tp )
  n->inf
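A quick numeric check of these formulas, written as a C sketch with illustrative values (k = 4 stages, tp = 20 ns, tn = k * tp = 80 ns), shows Sk approaching k as n grows:

```c
#include <stdio.h>

int main(void) {
    int    k  = 4;          /* number of pipeline stages        */
    double tp = 20.0;       /* pipelined clock cycle (ns)       */
    double tn = k * tp;     /* non-pipelined time per task (ns) */

    for (int n = 1; n <= 1000; n *= 10) {
        double t_nonpipe = n * tn;               /* T1 = n * tn           */
        double t_pipe    = (k + n - 1) * tp;     /* Tk = (k + n - 1) * tp */
        double Sk        = t_nonpipe / t_pipe;
        printf("n = %4d: speedup Sk = %.3f\n", n, Sk);
    }
    /* As n -> infinity, Sk -> tn / tp = k when tn = k * tp */
    return 0;
}
```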
Arithmetic Pipeline
ARITHMETIC PIPELINE
Floating-point adder:
  X = A x 2^a
  Y = B x 2^b

  [1] Compare the exponents
  [2] Align the mantissas
  [3] Add or subtract the mantissas
  [4] Normalize the result

[Figure: Four-segment pipeline]
  Segment 1: Compare exponents by subtraction (produces the exponent difference)
  Segment 2: Choose the larger exponent; align the mantissa of the smaller number
  Segment 3: Add or subtract the mantissas
  Segment 4: Normalize the result and adjust the exponent
[Figure: Four-stage floating-point adder for A = a x 2^p and B = b x 2^q]
  S1: Exponent subtractor and fraction selector compute r = max(p, q) and
      t = |p - q|, and select the fraction with the smaller exponent, min(p, q)
  S2: Right shifter aligns that fraction by t positions; the fraction adder then
      produces the fraction sum c with exponent r
  S3: Leading-zero counter and left shifter normalize c into d
  S4: Exponent adder adjusts the exponent, giving
      C = A + B = c x 2^r = d x 2^s     (r = max(p, q), 0.5 <= d < 1)
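To make the ordering of the four suboperations concrete, here is a toy C sketch using a decimal (fraction, exponent) pair; it is only an illustration of the stage sequence, not the slide's hardware, and it ignores rounding and special cases:

```c
#include <stdio.h>
#include <math.h>

/* Toy floating-point value: value = frac * 10^exp, with 0.1 <= |frac| < 1 */
struct fp { double frac; int exp; };

static struct fp fp_add(struct fp x, struct fp y) {
    /* S1: compare exponents by subtraction */
    struct fp hi = (x.exp >= y.exp) ? x : y;
    struct fp lo = (x.exp >= y.exp) ? y : x;
    int t = hi.exp - lo.exp;                 /* alignment distance */

    /* S2: choose the larger exponent and align the smaller mantissa */
    int r = hi.exp;
    double aligned = lo.frac / pow(10.0, t);

    /* S3: add the mantissas */
    double c = hi.frac + aligned;

    /* S4: normalize the result and adjust the exponent */
    while (fabs(c) >= 1.0)            { c /= 10.0; r++; }
    while (c != 0.0 && fabs(c) < 0.1) { c *= 10.0; r--; }
    return (struct fp){ c, r };
}

int main(void) {
    struct fp X = { 0.9504, 3 };   /* 0.9504 * 10^3 */
    struct fp Y = { 0.8200, 2 };   /* 0.8200 * 10^2 */
    struct fp Z = fp_add(X, Y);
    printf("Z = %.5f * 10^%d\n", Z.frac, Z.exp);   /* 0.10324 * 10^4 */
    return 0;
}
```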
Instruction Pipeline
INSTRUCTION CYCLE
Six Phases* in an Instruction Cycle
[1] Fetch an instruction from memory
[2] Decode the instruction
[3] Calculate the effective address of the operand
[4] Fetch the operands from memory
[5] Execute the operation
[6] Store the result in the proper place
* Some instructions skip some phases
* Effective address calculation can be done as part of
the decoding phase
* Storage of the operation result into a register
is done automatically in the execution phase
==> 4-Stage Pipeline
[1] FI: Fetch an instruction from memory
[2] DA: Decode the instruction and calculate
the effective address of the operand
[3] FO: Fetch the operand
[4] EX: Execute the operation
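The following C sketch (an illustration, not from the slides) prints the resulting space-time diagram for the four stages FI, DA, FO, EX, assuming one instruction enters the pipeline per cycle with no conflicts:

```c
#include <stdio.h>

int main(void) {
    const char *stage[4] = { "FI", "DA", "FO", "EX" };
    int n = 6, k = 4;                      /* 6 instructions, 4 segments */

    printf("cycle:      ");
    for (int c = 1; c <= n + k - 1; c++) printf("%4d", c);
    printf("\n");

    for (int i = 0; i < n; i++) {          /* one row per instruction */
        printf("instr %d:    ", i + 1);
        for (int c = 1; c <= n + k - 1; c++) {
            int s = c - 1 - i;             /* stage occupied at cycle c */
            printf("%4s", (s >= 0 && s < k) ? stage[s] : "");
        }
        printf("\n");
    }
    return 0;
}
```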
INSTRUCTION PIPELINE
Execution of three instructions in a 4-stage pipeline

Conventional:
  i:    FI  DA  FO  EX
  i+1:                  FI  DA  FO  EX
  i+2:                                  FI  DA  FO  EX

Pipelined:
  i:    FI  DA  FO  EX
  i+1:      FI  DA  FO  EX
  i+2:          FI  DA  FO  EX
FOUR-SEGMENT INSTRUCTION PIPELINE
[Flowchart]
  Segment 1: Fetch instruction from memory
  Segment 2: Decode instruction and calculate effective address
      Branch?  yes -> update PC, empty the pipe, and resume fetching
               no  -> continue
  Segment 3: Fetch operand from memory
  Segment 4: Execute instruction
      Interrupt?  yes -> interrupt handling, update PC, empty the pipe
                  no  -> update PC and continue with the next instruction
Timing of Instruction Pipeline (instruction 3 is a branch):

  Step:           1   2   3   4   5   6   7   8   9   10  11  12  13
  Instruction 1:  FI  DA  FO  EX
  Instruction 2:      FI  DA  FO  EX
  (Branch)    3:          FI  DA  FO  EX
  Instruction 4:              FI  --  --  FI  DA  FO  EX
  Instruction 5:                  --  --  --  FI  DA  FO  EX
  Instruction 6:                              FI  DA  FO  EX
  Instruction 7:                                  FI  DA  FO  EX
Control Hazards
  Branches and other instructions that change the PC make the fetch of the next instruction be delayed.

[Figure: A JMP instruction computes its target in the ID stage; the fetch of the next instruction depends on that PC value, leaving a bubble in the IF-ID-OF-OE-OS pipeline]

Pipeline Interlock:
  Detect hazards and stall until the hazard is cleared
STRUCTURAL HAZARDS
Structural Hazards
  Occur when some resource has not been duplicated enough to allow all
  combinations of instructions in the pipeline to execute

Example: With one memory port, a data fetch and an instruction fetch cannot be initiated in the same clock cycle

  i:     FI    DA    FO    EX
  i+1:         FI    DA    FO    EX
  i+2:               stall stall FI    DA    FO    EX
DATA HAZARDS
Data Hazards
  Occur when the execution of an instruction depends on the result of a previous instruction

    ADD  R1, R2, R3
    SUB  R4, R1, R5

  Data hazards can be dealt with by either hardware or software techniques

Hardware Techniques
  Interlock
  - hardware detects the data dependencies and delays the scheduling of the
    dependent instruction by stalling enough clock cycles
  Forwarding (bypassing, short-circuiting)
  - accomplished by a data path that routes a value from a source (usually an
    ALU) to a user, bypassing a designated register. This allows the value
    produced to be used at an earlier stage in the pipeline than would
    otherwise be possible

Software Technique
  Instruction scheduling (by the compiler) for delayed load
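A minimal C sketch of the two hardware techniques, assuming a simplified three-field instruction format (dest, src1, src2) invented here only for illustration: the interlock test decides whether a stall would be needed, and the forwarding test selects the ALU result buffer instead of the register file when the previous destination matches a source.

```c
#include <stdio.h>
#include <stdbool.h>

struct instr { int dest, src1, src2; };     /* e.g. ADD R1,R2,R3 -> {1,2,3} */

/* Interlock: stall if the current instruction reads the result of prev */
bool must_stall_without_forwarding(struct instr prev, struct instr cur) {
    return cur.src1 == prev.dest || cur.src2 == prev.dest;
}

/* Forwarding: take the operand from the ALU result buffer (bypass path)
   instead of the register file when the dependency exists               */
int read_operand(int reg, int regfile[], struct instr prev, int alu_result) {
    return (reg == prev.dest) ? alu_result : regfile[reg];
}

int main(void) {
    int regfile[8] = {0, 0, 10, 20, 0, 7, 0, 0};
    struct instr add = {1, 2, 3};            /* ADD R1, R2, R3 */
    struct instr sub = {4, 1, 5};            /* SUB R4, R1, R5 */

    int alu_result = regfile[add.src1] + regfile[add.src2];  /* ADD executes */

    printf("stall needed without forwarding: %s\n",
           must_stall_without_forwarding(add, sub) ? "yes" : "no");

    /* With forwarding, SUB reads R1 = 30 from the bypass path */
    int a = read_operand(sub.src1, regfile, add, alu_result);
    int b = read_operand(sub.src2, regfile, add, alu_result);
    printf("SUB operands: %d - %d = %d\n", a, b, a - b);
    return 0;
}
```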
FORWARDING HARDWARE
Example:
  ADD  R1, R2, R3
  SUB  R4, R1, R5

3-stage Pipeline
  I: Instruction Fetch
  A: Decode, Read Registers, ALU Operations
  E: Write the result to the destination register

[Figure: The register file feeds the ALU through two MUXes; a bypass path from the ALU result buffer and the result write bus back to the MUX inputs lets a following instruction use a result before it is written to the register file]

Without bypassing: SUB must wait for ADD's E stage to write R1 before its A stage can read R1.
With bypassing: SUB's A stage takes ADD's result directly from the ALU result buffer.
INSTRUCTION SCHEDULING
a = b + c;
d = e - f;
Unscheduled code:
    LW   Rb, b
    LW   Rc, c
    ADD  Ra, Rb, Rc
    SW   a, Ra
    LW   Re, e
    LW   Rf, f
    SUB  Rd, Re, Rf
    SW   d, Rd

Scheduled code:
    LW   Rb, b
    LW   Rc, c
    LW   Re, e
    ADD  Ra, Rb, Rc
    LW   Rf, f
    SW   a, Ra
    SUB  Rd, Re, Rf
    SW   d, Rd
Delayed Load
A load requiring that the following instruction not use its result
CONTROL HAZARDS
Branch Instructions
- Branch target address is not known until
the branch instruction is completed
  Branch instruction:  FI  DA  FO  EX
  Next instruction:                    FI  DA  FO  EX
CONTROL HAZARDS
Prefetch Target Instruction
  Fetch instructions from both streams, branch taken and branch not taken.
  Both are saved until the branch is executed; then select the right
  instruction stream and discard the wrong one.

Branch Prediction
  Guess the branch outcome and fetch an instruction stream based on the guess.
  A correct guess eliminates the branch penalty.

Delayed Branch
  The compiler detects the branch and rearranges the instruction sequence
  by inserting useful instructions that keep the pipeline busy
  in the presence of a branch instruction.
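As an illustration of branch prediction (a common scheme, not one specified by the slides), a 2-bit saturating counter per branch predicts taken when the counter is 2 or 3 and is trained after each branch resolves:

```c
#include <stdio.h>
#include <stdbool.h>

#define TABLE_SIZE 16

/* One 2-bit saturating counter per table entry:
   0,1 = predict not taken; 2,3 = predict taken */
static unsigned char counter[TABLE_SIZE];

bool predict(unsigned pc) { return counter[pc % TABLE_SIZE] >= 2; }

void update(unsigned pc, bool taken) {
    unsigned char *c = &counter[pc % TABLE_SIZE];
    if (taken  && *c < 3) (*c)++;
    if (!taken && *c > 0) (*c)--;
}

int main(void) {
    unsigned pc = 0x40;                   /* a loop-closing branch          */
    bool outcome[8] = {1,1,1,1,1,1,1,0};  /* taken 7 times, then falls through */
    int correct = 0;

    for (int i = 0; i < 8; i++) {
        bool guess = predict(pc);
        if (guess == outcome[i]) correct++;
        update(pc, outcome[i]);           /* train after the branch resolves */
    }
    printf("correct predictions: %d / 8\n", correct);
    return 0;
}
```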
RISC Pipeline
RISC PIPELINE
RISC
- Machine with a very fast clock cycle that
executes at the rate of one instruction per cycle
<- Simple Instruction Set
Fixed Length Instruction Format
Register-to-Register Operations
Instruction Cycles of Three-Stage Instruction Pipeline
Data Manipulation Instructions
I:
Instruction Fetch
A: Decode, Read Registers, ALU Operations
E: Write a Register
Load and Store Instructions
I:
Instruction Fetch
A: Decode, Evaluate Effective Address
E: Register-to-Memory or Memory-to-Register
Program Control Instructions
I:
Instruction Fetch
A: Decode, Evaluate Branch Address
E: Write Register(PC)
DELAYED LOAD

  LOAD:  R1 <- M[address 1]
  LOAD:  R2 <- M[address 2]
  ADD:   R3 <- R1 + R2
  STORE: M[address 3] <- R3

Three-segment pipeline timing

Pipeline timing with data conflict:

  clock cycle:  1  2  3  4  5  6
  Load R1:      I  A  E
  Load R2:         I  A  E
  Add R1+R2:          I  A  E
  Store R3:              I  A  E

Pipeline timing with delayed load:

  clock cycle:  1  2  3  4  5  6  7
  Load R1:      I  A  E
  Load R2:         I  A  E
  NOP:                I  A  E
  Add R1+R2:             I  A  E
  Store R3:                 I  A  E
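A compiler-side sketch in C of the delayed-load idea: if the instruction immediately after a LOAD uses the LOAD's destination register, insert a no-op (a real compiler would instead try to move an independent instruction into the slot). The instruction encoding is a made-up struct, purely for illustration.

```c
#include <stdio.h>
#include <string.h>

struct instr { char op[8]; int dest, src1, src2; };

/* Insert a NOP whenever the instruction following a LOAD
   uses the register that the LOAD is still fetching.       */
int insert_load_delays(struct instr in[], int n, struct instr out[]) {
    int m = 0;
    for (int i = 0; i < n; i++) {
        out[m++] = in[i];
        if (strcmp(in[i].op, "LOAD") == 0 && i + 1 < n &&
            (in[i + 1].src1 == in[i].dest || in[i + 1].src2 == in[i].dest))
            out[m++] = (struct instr){ "NOP", 0, 0, 0 };
    }
    return m;
}

int main(void) {
    struct instr prog[] = {
        { "LOAD",  1, 0, 0 },   /* R1 <- M[address 1] */
        { "LOAD",  2, 0, 0 },   /* R2 <- M[address 2] */
        { "ADD",   3, 1, 2 },   /* R3 <- R1 + R2      */
        { "STORE", 0, 3, 0 },   /* M[address 3] <- R3 */
    };
    struct instr out[8];
    int m = insert_load_delays(prog, 4, out);
    for (int i = 0; i < m; i++) printf("%s\n", out[i].op);
    return 0;                   /* prints LOAD LOAD NOP ADD STORE */
}
```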
DELAYED BRANCH
Compiler analyzes the instructions before and after
the branch and rearranges the program sequence by
inserting useful instructions in the delay steps
Using no-operation instructions
Vector Processing
VECTOR PROCESSING
Vector Processing Applications
Problems that can be efficiently formulated in terms of vectors
VECTOR PROGRAMMING
      DO 20 I = 1, 100
  20  C(I) = B(I) + A(I)

Conventional computer:
      Initialize I = 0
  20  Read A(I)
      Read B(I)
      Store C(I) = A(I) + B(I)
      Increment I = I + 1
      If I <= 100 goto 20

Vector computer:
      C(1:100) = A(1:100) + B(1:100)
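In C, the conventional computer's loop corresponds to the scalar code below; a vector computer would replace the entire loop with the single statement C(1:100) = A(1:100) + B(1:100), and an auto-vectorizing compiler approximates this on SIMD hardware:

```c
#include <stdio.h>

#define N 100

int main(void) {
    double A[N], B[N], C[N];

    for (int i = 0; i < N; i++) { A[i] = i; B[i] = 2 * i; }

    /* Scalar (conventional) form: one add per loop iteration.
       A vector machine would issue this as a single vector add. */
    for (int i = 0; i < N; i++)
        C[i] = B[i] + A[i];

    printf("C[99] = %g\n", C[N - 1]);   /* 99 + 198 = 297 */
    return 0;
}
```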
VECTOR INSTRUCTIONS
  f1: V -> V
  f2: V -> S
  f3: V x V -> V
  f4: V x S -> V

  V: Vector operand
  S: Scalar operand

  Examples:
    f1:  B(I) <- sin(A(I)),  B(I) <- SQR(A(I))
    f2:  S <- max{A(I)}
    f3:  C(I) <- max(A(I), B(I))
    f4:  B(I) <- S + A(I),  B(I) <- A(I) / S
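The four formats map naturally onto C function signatures; this is only a sketch with made-up function names, one per format:

```c
#include <stdio.h>
#include <math.h>
#include <stddef.h>

/* f1: V -> V      e.g. B(I) <- sin(A(I)) */
void v_sin(const double *a, double *b, size_t n) {
    for (size_t i = 0; i < n; i++) b[i] = sin(a[i]);
}

/* f2: V -> S      e.g. S <- max{A(I)} */
double v_max(const double *a, size_t n) {
    double s = a[0];
    for (size_t i = 1; i < n; i++) if (a[i] > s) s = a[i];
    return s;
}

/* f3: V x V -> V  e.g. C(I) <- max(A(I), B(I)) */
void vv_max(const double *a, const double *b, double *c, size_t n) {
    for (size_t i = 0; i < n; i++) c[i] = (a[i] > b[i]) ? a[i] : b[i];
}

/* f4: V x S -> V  e.g. B(I) <- S + A(I) */
void vs_add(const double *a, double s, double *b, size_t n) {
    for (size_t i = 0; i < n; i++) b[i] = s + a[i];
}

int main(void) {
    double A[4] = {3, 1, 4, 1}, B[4];
    vs_add(A, 10.0, B, 4);                /* f4: B(I) <- 10 + A(I) */
    printf("max(A) = %g, B[2] = %g\n", v_max(A, 4), B[2]);
    return 0;
}
```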
[Figure: Multiple module memory organization with four memory modules M0-M3; each module has its own address register (AR), memory array, and data register (DR), all connected to a common data bus]

Address Interleaving
  Different sets of addresses are assigned to different memory modules.
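One common interleaving scheme (an assumption here; the slide does not fix one) uses the low-order address bits to select the module, so consecutive addresses fall in different modules and can be accessed in parallel:

```c
#include <stdio.h>

#define MODULES 4          /* M0 .. M3, a power of two */

int main(void) {
    for (unsigned addr = 0; addr < 8; addr++) {
        unsigned module = addr % MODULES;   /* low-order bits select the module */
        unsigned offset = addr / MODULES;   /* remaining bits select the word   */
        printf("address %u -> module M%u, word %u\n", addr, module, offset);
    }
    return 0;
}
```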