
6.823, L22: Vector Computers

Joel Emer
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
November 30, 2005

Based on material prepared by Krste Asanovic and Arvind

Supercomputers
Definition of a supercomputer:
• Fastest machine in world at given task
• A device to turn a compute-bound problem into an I/O
bound problem
• Any machine costing $30M+
• Any machine designed by Seymour Cray

CDC6600 (Cray, 1964) regarded as first supercomputer



Supercomputer Applications

Typical application areas


• Military research (nuclear weapons, cryptography)
• Scientific research
• Weather forecasting
• Oil exploration
• Industrial design (car crash simulation)
• Bioinformatics
• Cryptography

All involve huge computations on large data sets

In the 1970s-80s, Supercomputer ≡ Vector Machine


Loop Unrolled Code Schedule

loop: ld   f1, 0(r1)
      ld   f2, 8(r1)
      ld   f3, 16(r1)
      ld   f4, 24(r1)
      add  r1, 32
      fadd f5, f0, f1
      fadd f6, f0, f2
      fadd f7, f0, f3
      fadd f8, f0, f4
      sd   f5, 0(r2)
      sd   f6, 8(r2)
      sd   f7, 16(r2)
      sd   f8, 24(r2)
      add  r2, 32
      bne  r1, r3, loop

[Schedule diagram removed: the unrolled instructions above are statically scheduled across issue slots for functional units Int1, Int2, M1, M2, FP+, and FPx, overlapping the loads, adds, and stores of the four unrolled iterations.]

Vector Supercomputers
Epitomized by Cray-1, 1976:

• Scalar Unit
– Load/Store Architecture
• Vector Extension
– Vector Registers
– Vector Instructions
• Implementation
– Hardwired Control
– Highly Pipelined Functional Units
– Interleaved Memory System
– No Data Caches
– No Virtual Memory
Cray-1 (1976)

Core unit of the Cray-1 computer.

[Image removed due to copyright restrictions. To view the image, visit http://www.cray-cyber.org/memory/scray.php.]
Cray-1 (1976)

[Block diagram removed. Recoverable details: eight 64-element vector registers V0-V7 with a vector length register and a vector mask register; scalar registers S0-S7 backed by 64 T registers; address registers A0-A7 backed by 64 B registers; functional units for FP add, FP multiply, FP reciprocal, integer add, integer logic, integer shift, population count, address add, and address multiply; a single-port memory of 16 banks of 64-bit words with SECDED; 80 MW/sec load/store bandwidth and 320 MW/sec instruction buffer refill; 4 instruction buffers of 16 x 64-bit entries, with NIP/CIP/LIP instruction registers.]

Memory bank cycle: 50 ns; processor cycle: 12.5 ns (80 MHz)


Vector Programming Model

[Diagram removed. The model: scalar registers r0-r15 alongside vector registers v0-v15, each vector register holding elements [0] through [VLRMAX-1]; the vector length register VLR gives the number of elements processed by each vector instruction.]

Vector arithmetic instructions, e.g. ADDV v3, v1, v2, operate elementwise on elements [0] through [VLR-1] of the source registers.

Vector load and store instructions, e.g. LV v1, r1, r2, transfer a vector between memory and a vector register using base address r1 and stride r2.

Vector Code Example

# C code
for (i=0; i<64; i++)
  C[i] = A[i] + B[i];

# Scalar Code
      LI     R4, 64
loop: L.D    F0, 0(R1)
      L.D    F2, 0(R2)
      ADD.D  F4, F2, F0
      S.D    F4, 0(R3)
      DADDIU R1, 8
      DADDIU R2, 8
      DADDIU R3, 8
      DSUBIU R4, 1
      BNEZ   R4, loop

# Vector Code
LI     VLR, 64
LV     V1, R1
LV     V2, R2
ADDV.D V3, V1, V2
SV     V3, R3
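The comparison above can be sketched in Python (a behavioral sketch, not machine code; the arrays A and B are hypothetical inputs): the scalar loop performs one add per iteration, while the whole-list version mirrors the vector code's single register-wide operation.

```python
# Hypothetical input arrays standing in for A[] and B[] in memory.
A = list(range(64))
B = [2 * x for x in A]

# Scalar version: one add per loop iteration, like the scalar MIPS loop.
C_scalar = [0] * 64
for i in range(64):
    C_scalar[i] = A[i] + B[i]

# "Vector" version: one whole-register operation, like LI VLR / LV / ADDV.D / SV.
C_vector = [a + b for a, b in zip(A, B)]

assert C_scalar == C_vector == [3 * i for i in range(64)]
```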

Vector Instruction Set Advantages


• Compact
– one short instruction encodes N operations
• Expressive, tells hardware that these N operations:
– are independent
– use the same functional unit
– access disjoint registers
– access registers in same pattern as previous instructions
– access a contiguous block of memory
(unit-stride load/store)
– access memory in a known pattern
(strided load/store)
• Scalable
– can run same code on more parallel pipelines (lanes)
Vector Arithmetic Execution

• Use a deep pipeline (=> fast clock) to execute element operations
• Control of the deep pipeline is simple because elements in a vector are independent (=> no hazards!)

[Diagram removed: a six-stage multiply pipeline reading element operands from V1 and V2 and writing results to V3.]

V3 <- V1 * V2
Vector Instruction Execution

ADDV C, A, B

[Diagram removed: with one pipelined functional unit, one element pair A[i], B[i] enters the pipeline per cycle and results C[0], C[1], C[2], ... complete in sequence; with four pipelined functional units, four element pairs enter per cycle, so C[0]-C[3] complete together, then C[4]-C[7], and so on.]
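The lane assignment described above can be sketched in Python, under the usual convention that element i is handled by lane i mod L, one element per lane per cycle (a minimal model; function and variable names are invented for illustration).

```python
def lane_schedule(n_elements, n_lanes):
    """Map each element of ADDV C,A,B to (lane, issue cycle):
    lane i % n_lanes handles element i, one element per lane per cycle."""
    return [(i % n_lanes, i // n_lanes) for i in range(n_elements)]

# One functional unit: elements enter one per cycle.
one = lane_schedule(8, 1)
assert one[3] == (0, 3)   # element 3 issues in cycle 3

# Four functional units: C[0]..C[3] issue together in cycle 0.
four = lane_schedule(8, 4)
assert [c for (_, c) in four[:4]] == [0, 0, 0, 0]
assert four[7] == (3, 1)  # element 7 on lane 3, cycle 1
```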


Vector Memory System

Cray-1: 16 banks, 4-cycle bank busy time, 12-cycle latency
• Bank busy time: cycles between accesses to the same bank

[Diagram removed: an address generator adds the base and stride from the vector unit to produce a stream of addresses spread across memory banks 0-F.]
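The effect of bank busy time can be sketched with a toy model (one address issued per cycle, parameters from the Cray-1 figures above; the function name and stall policy are assumptions, not the real Cray-1 conflict logic): unit stride rotates through all 16 banks and never stalls, while a stride equal to the bank count hammers one bank and is limited by the busy time.

```python
def vector_load_cycles(n_elems, stride, n_banks=16, busy=4):
    """Cycles to issue n_elems strided accesses, one address per cycle,
    stalling while the target bank is still busy (simplified model)."""
    ready = [0] * n_banks              # cycle when each bank is next free
    cycle = 0
    for i in range(n_elems):
        bank = (i * stride) % n_banks
        cycle = max(cycle, ready[bank])  # stall until the bank is free
        ready[bank] = cycle + busy
        cycle += 1
    return cycle

# Unit stride hits a new bank every cycle: no stalls.
assert vector_load_cycles(64, 1) == 64
# Stride 16 hits the same bank every time: one access per busy period.
assert vector_load_cycles(64, 16) == 4 * 63 + 1
```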
Vector Unit Structure

[Diagram removed: the vector register file is partitioned across four lanes; lane 0 holds elements 0, 4, 8, ..., lane 1 holds elements 1, 5, 9, ..., lane 2 holds elements 2, 6, 10, ..., and lane 3 holds elements 3, 7, 11, .... Each lane contains its own slice of each functional unit and its own port to the memory subsystem.]
T0 Vector Microprocessor (1995)

Vector register elements striped over lanes.

[Diagram removed: with eight lanes, lane 0 holds elements [0], [8], [16], [24]; lane 1 holds [1], [9], [17], [25]; and so on up to lane 7, which holds [7], [15], [23], [31].]

For more information, visit http://www.icsi.berkeley.edu/real/spert/t0-intro.html
Vector Instruction Parallelism

Can overlap execution of multiple vector instructions
– example machine has 32 elements per vector register and 8 lanes

[Diagram removed: the load, multiply, and add units each process a stream of vector instructions at 8 elements per cycle; issue is staggered so that after a short ramp-up all three units are busy simultaneously.]

Complete 24 operations/cycle while issuing 1 short instruction/cycle


Vector Chaining

• Vector version of register bypassing
– introduced with the Cray-1

LV   v1
MULV v3, v1, v2
ADDV v5, v3, v4

[Diagram removed: the load unit's results for v1 are chained into the multiplier, and the multiplier's results for v3 are chained into the adder, so the three dependent instructions execute in overlapped fashion.]
Vector Chaining Advantage

• Without chaining, must wait for the last element of a result to be written before starting a dependent instruction

[Timeline removed: Load runs to completion, then Mul, then Add, strictly back to back.]

• With chaining, can start a dependent instruction as soon as the first result element appears

[Timeline removed: Mul starts shortly after Load's first result, and Add shortly after Mul's first result, so the three instructions overlap.]
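The payoff can be quantified with a toy timing model (one element per cycle per unit; the latencies are illustrative, not Cray-1 figures): chained, the chain's finish time is the sum of the latencies plus one vector's worth of cycles; unchained, each instruction pays the full vector length.

```python
def total_cycles(vl, latencies, chaining):
    """Finish time of a dependent chain (e.g. load -> mul -> add) over vl
    elements, one element per cycle per unit (simplified model)."""
    start = 0
    finish = 0
    for lat in latencies:
        first_result = start + lat          # first element leaves the unit
        finish = first_result + vl - 1      # last element leaves the unit
        # Chained: the dependent unit starts on the first result.
        # Unchained: it waits for the last result.
        start = first_result if chaining else finish
    return finish

lat = [12, 7, 6]  # illustrative load/mul/add latencies
assert total_cycles(64, lat, chaining=True) == 12 + 7 + 6 + 63
assert total_cycles(64, lat, chaining=False) == (12 + 63) + (7 + 63) + (6 + 63)
```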
Vector Startup

Two components of vector startup penalty
– functional unit latency (time through pipeline)
– dead time or recovery time (time before another vector instruction can start down the pipeline)

[Pipeline diagram removed: each element passes through stages R, X, X, X, W. The first vector instruction's elements enter back to back; the time from first issue to first writeback is the functional unit latency. After the last element issues, the pipeline must recover for the dead time before the second vector instruction's elements can enter.]
Dead Time and Short Vectors

T0, eight lanes:
– no dead time
– 100% efficiency with 8-element vectors

Cray C90, two lanes:
– 4-cycle dead time
– with 128-element vectors, 64 cycles are active per instruction, for a maximum efficiency of 94%
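Both efficiency figures follow from a one-line model: active cycles per instruction are vector length divided by lanes, and dead time is pure overhead. A minimal sketch (function name is ours):

```python
def efficiency(vector_length, n_lanes, dead_time):
    """Fraction of pipeline slots doing useful work per vector instruction."""
    active = vector_length // n_lanes   # cycles of useful work
    return active / (active + dead_time)

# T0: 8 lanes, no dead time -> 100% even for 8-element vectors.
assert efficiency(8, 8, 0) == 1.0
# Cray C90: 2 lanes, 4-cycle dead time, 128-element vectors -> ~94%.
assert round(efficiency(128, 2, 4), 2) == 0.94
```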
Vector Memory-Memory versus Vector Register Machines

• Vector memory-memory instructions hold all vector operands in main memory
• The first vector machines, CDC Star-100 ('73) and TI ASC ('71), were memory-memory machines
• Cray-1 ('76) was the first vector register machine

Example Source Code
for (i=0; i<N; i++)
{
  C[i] = A[i] + B[i];
  D[i] = A[i] - B[i];
}

Vector Memory-Memory Code
ADDV C, A, B
SUBV D, A, B

Vector Register Code
LV   V1, A
LV   V2, B
ADDV V3, V1, V2
SV   V3, C
SUBV V4, V1, V2
SV   V4, D
Vector Memory-Memory vs. Vector Register Machines

• Vector memory-memory architectures (VMMAs) require greater main memory bandwidth, why?
– All operands must be read in and out of memory
• VMMAs make it difficult to overlap execution of multiple vector operations, why?
– Must check dependencies on memory addresses
• VMMAs incur greater startup latency
– Scalar code was faster on CDC Star-100 for vectors < 100 elements
– For Cray-1, vector/scalar breakeven point was around 2 elements

⇒ Apart from CDC follow-ons (Cyber-205, ETA-10), all major vector machines since Cray-1 have had vector register architectures
(we ignore vector memory-memory from now on)
Automatic Code Vectorization

for (i=0; i < N; i++)
  C[i] = A[i] + B[i];

[Diagram removed: scalar sequential code performs load, load, add, store for iteration 1, then again for iteration 2, and so on down the time axis; vectorized code issues one vector load, one vector load, one vector add, and one vector store, each covering all iterations at once.]

Vectorization is a massive compile-time reordering of operation sequencing
⇒ requires extensive loop dependence analysis
Vector Stripmining

Problem: Vector registers have finite length
Solution: Break loops into pieces that fit in registers, "stripmining"

for (i=0; i<N; i++)
  C[i] = A[i]+B[i];

      ANDI   R1, N, 63      # N mod 64
      MTC1   VLR, R1        # Do remainder
loop: LV     V1, RA
      DSLL   R2, R1, 3      # Multiply by 8
      DADDU  RA, RA, R2     # Bump pointer
      LV     V2, RB
      DADDU  RB, RB, R2
      ADDV.D V3, V1, V2
      SV     V3, RC
      DADDU  RC, RC, R2
      DSUBU  N, N, R1       # Subtract elements
      LI     R1, 64
      MTC1   VLR, R1        # Reset full length
      BGTZ   N, loop        # Any more to do?

[Diagram removed: arrays A, B, C are processed as an initial remainder strip of N mod 64 elements followed by full 64-element strips.]
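A behavioral Python sketch of the same stripmining scheme (mirroring the MIPS loop's structure, not the ISA; names are ours): process the remainder strip of N mod 64 elements first, then full 64-element strips.

```python
MVL = 64  # maximum vector length (elements per vector register)

def stripmined_add(A, B):
    """Stripmined C = A + B: remainder strip first, then full MVL strips."""
    N = len(A)
    C = [0] * N
    i = 0
    vl = N % MVL or (MVL if N else 0)  # first strip: N mod MVL (or a full one)
    while i < N:
        for j in range(i, i + vl):     # one "vector instruction" of length vl
            C[j] = A[j] + B[j]
        i += vl
        vl = MVL                       # every later strip uses the full length
    return C

A = list(range(200))
B = [1] * 200
assert stripmined_add(A, B) == [a + 1 for a in A]
```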
Vector Scatter/Gather

Want to vectorize loops with indirect accesses:
for (i=0; i<N; i++)
  A[i] = B[i] + C[D[i]];

Indexed load instruction (gather):
LV     vD, rD       # Load indices in D vector
LVI    vC, rC, vD   # Load indirect from rC base
LV     vB, rB       # Load B vector
ADDV.D vA, vB, vC   # Do add
SV     vA, rA       # Store result
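The gather sequence can be sketched in Python (a behavioral model of the LV/LVI/ADDV.D/SV steps; the arrays are hypothetical inputs):

```python
def gather(base, indices):
    """Indexed (gather) load: element i comes from base[indices[i]]."""
    return [base[i] for i in indices]

B = [10, 20, 30, 40]
C = [5, 6, 7, 8]
D = [3, 0, 0, 2]          # indirection vector

vC = gather(C, D)                     # LVI vC, rC, vD
A = [b + c for b, c in zip(B, vC)]    # ADDV.D vA, vB, vC
assert A == [18, 25, 35, 47]          # A[i] = B[i] + C[D[i]]
```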
Vector Scatter/Gather

Scatter example:
for (i=0; i<N; i++)
  A[B[i]]++;

Is the following a correct translation?
LV   vB, rB       # Load indices in B vector
LVI  vA, rA, vB   # Gather initial A values
ADDV vA, vA, 1    # Increment
SVI  vA, rA, vB   # Scatter incremented values
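A behavioral sketch of that translation in Python, which also exposes the hazard behind the slide's question: when the index vector contains duplicates, the gather/add/scatter sequence loses increments, because every copy of a duplicated index reads the same old value before any writeback.

```python
def scatter_increment(A, B):
    """LVI / ADDV / SVI sequence: gather A[B[i]], add 1, scatter back."""
    vA = [A[i] for i in B]        # gather initial A values
    vA = [x + 1 for x in vA]      # increment
    for i, x in zip(B, vA):       # scatter incremented values
        A[i] = x
    return A

# No duplicate indices: matches the scalar loop.
assert scatter_increment([0, 0, 0, 0], [2, 0, 3]) == [1, 0, 1, 1]

# Index 1 appears twice: the scalar loop would give A[1] == 2,
# but the vector translation writes 1 twice -- an increment is lost.
assert scatter_increment([0, 0, 0, 0], [1, 1]) == [0, 1, 0, 0]
```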
Vector Conditional Execution

Problem: Want to vectorize loops with conditional code:
for (i=0; i<N; i++)
  if (A[i] > 0) then
    A[i] = B[i];

Solution: Add vector mask (or flag) registers
– vector version of predicate registers, 1 bit per element
…and maskable vector instructions
– a vector operation becomes a NOP at elements where the mask bit is clear

Code example:
CVM               # Turn on all elements
LV      vA, rA    # Load entire A vector
SGTVS.D vA, F0    # Set bits in mask register where A>0
LV      vA, rB    # Load B vector into A under mask
SV      vA, rA    # Store A back to memory under mask
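The masked sequence above can be sketched in Python (a behavioral model, not the ISA; names are ours): compute the mask from A, then overwrite only the elements whose mask bit is set.

```python
def masked_overwrite(A, B):
    """Masked vector sequence: mask = A > 0, then copy B into A under mask;
    elements with a clear mask bit are left untouched."""
    mask = [a > 0 for a in A]     # SGTVS.D: set mask bits where A > 0
    return [b if m else a for a, b, m in zip(A, B, mask)]

A = [3, -1, 0, 7]
B = [10, 20, 30, 40]
assert masked_overwrite(A, B) == [10, -1, 0, 40]
```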
Masked Vector Instructions

Simple implementation
– execute all N operations, turn off result writeback according to mask

Density-time implementation
– scan mask vector and only execute elements with non-zero masks

[Diagram removed: the simple implementation streams every element pair A[i], B[i] through the pipeline and gates the write-data port with the write enable M[i]; the density-time implementation skips masked-off elements, so only the elements whose mask bit is set occupy pipeline slots.]


Compress/Expand Operations

• Compress packs non-masked elements from one vector register contiguously at the start of the destination vector register
– population count of the mask vector gives the packed vector length
• Expand performs the inverse operation

[Diagram removed: with mask bits set for elements 1, 4, 5, and 7, compress packs A[1], A[4], A[5], A[7] into the low elements of the destination; expand scatters the packed values back to positions 1, 4, 5, 7, with the remaining positions taken from the other source.]

Used for density-time conditionals and also for general selection operations
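Both operations can be sketched in Python (behavioral models; function names are ours). Note how the population count of the mask gives the packed length, as the slide states.

```python
def compress(A, mask):
    """Pack elements of A whose mask bit is set into the start of the result."""
    return [a for a, m in zip(A, mask) if m]

def expand(packed, mask, fill):
    """Inverse of compress: place packed values at set-mask positions,
    taking the remaining positions from fill."""
    it = iter(packed)
    return [next(it) if m else f for m, f in zip(mask, fill)]

A    = [10, 11, 12, 13, 14, 15, 16, 17]
mask = [0, 1, 0, 0, 1, 1, 0, 1]          # elements 1, 4, 5, 7 are active

packed = compress(A, mask)
assert packed == [11, 14, 15, 17]
assert sum(mask) == len(packed)          # popcount gives packed vector length
assert expand(packed, mask, [0] * 8) == [0, 11, 0, 0, 14, 15, 0, 17]
```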

Vector Reductions

Problem: Loop-carried dependence on reduction variables


sum = 0;
for (i=0; i<N; i++)
sum += A[i]; # Loop-carried dependence on sum
Solution: Re-associate operations if possible, use binary tree to
perform reduction
# Rearrange as:
sum[0:VL-1] = 0                  # Vector of VL partial sums
for (i=0; i<N; i+=VL)            # Stripmine VL-sized chunks
  sum[0:VL-1] += A[i:i+VL-1];    # Vector sum
# Now have VL partial sums in one vector register
do {
  VL = VL/2;                     # Halve vector length
  sum[0:VL-1] += sum[VL:2*VL-1]; # Halve no. of partials
} while (VL > 1);
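The rearranged reduction can be sketched in Python (a behavioral model of the stripmined accumulate followed by the binary-tree combine; it assumes len(A) is a multiple of the register length and that length is a power of two):

```python
def tree_reduce_sum(A, mvl=8):
    """Accumulate mvl partial sums with stripmined vector adds, then halve
    the active length until a single sum remains."""
    partial = [0] * mvl
    for i in range(0, len(A), mvl):            # stripmine MVL-sized chunks
        partial = [p + a for p, a in zip(partial, A[i:i + mvl])]
    vl = mvl
    while vl > 1:                              # binary-tree combine
        vl //= 2
        partial = [partial[j] + partial[vl + j] for j in range(vl)]
    return partial[0]

A = list(range(64))
assert tree_reduce_sum(A) == sum(A)   # 2016
```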
A Modern Vector Super: NEC SX-6 (2003)

• CMOS technology
– 500 MHz CPU, fits on a single chip
– SDRAM main memory (up to 64 GB)
• Scalar unit
– 4-way superscalar with out-of-order and speculative execution
– 64KB I-cache and 64KB data cache
• Vector unit
– 8 foreground VRegs + 64 background VRegs (256x64-bit elements/VReg)
– 1 multiply unit, 1 divide unit, 1 add/shift unit, 1 logical unit, 1 mask unit
– 8 lanes (8 GFLOPS peak, 16 FLOPS/cycle)
– 1 load & store unit (32x8 byte accesses/cycle)
– 32 GB/s memory bandwidth per processor
• SMP structure
– 8 CPUs connected to memory through crossbar
– 256 GB/s shared memory bandwidth (4096 interleaved banks)

[Image removed due to copyright restrictions. Image available in Kitagawa, K., S. Tagaya, Y. Hagihara, and Y. Kanoh. "A hardware overview of SX-6 and SX-7 supercomputer." NEC Research & Development Journal 44, no. 1 (Jan 2003): 2-7.]
Multimedia Extensions

• Very short vectors added to existing ISAs for microprocessors
• Usually 64-bit registers split into 2x32b, 4x16b, or 8x8b
• Newer designs have 128-bit registers (AltiVec, SSE2)
• Limited instruction set:
– no vector length control
– no strided load/store or scatter/gather
– unit-stride loads must be aligned to a 64/128-bit boundary
• Limited vector register length:
– requires superscalar dispatch to keep multiply/add/load units busy
– loop unrolling to hide latencies increases register pressure
• Trend toward fuller vector support in microprocessors
