
Introduction to CUDA

heterogeneous programming

Katia Oleinik
koleinik@bu.edu
Scientific Computing and Visualization

Boston University
CUDA
• Architecture
• C Language extensions
• Terminology
• GPU memory

CUDA Basics
• Hello, World!
• CUDA kernels
• Blocks and threads overview
• Memory management
• Parallel kernels
• Threads synchronization
• Race conditions and atomic operations
Architecture

NVIDIA Tesla M2070:

Core clock: 1.15 GHz

448 CUDA cores

1.15 GHz x 1 op/clock x 448 cores = 515 Gigaflops double precision (peak)

1.03 Tflops single precision (peak)

6 GB total dedicated memory (about 5.3 GB usable with ECC enabled, as reported by deviceQuery below)

Delivers performance at about 10% of the cost and 5% of the power of a comparable CPU-based system
Architecture

CUDA:

• Compute Unified Device Architecture

• General Purpose Parallel Computing Architecture by NVIDIA

• Supports traditional OpenGL graphics


Architecture

Memory Bandwidth:

the rate at which data can be read from or stored into memory, expressed in bytes per
second

Intel Xeon X5650: 32 GB/s
Tesla M2070: 148 GB/s


Architecture

Tesla M2070 Processor:

• Streaming Multiprocessors (SM): 14


• Streaming Processors on each SM: 32

Total: 14 x 32 = 448 Cores

Each Streaming Multiprocessor supports up to 1536 resident threads (see the deviceQuery output below).


Architecture

CUDA:

SIMT philosophy: Single Instruction, Multiple Thread

Problems that map well to the GPU are:

Computationally intensive: the time spent on computation significantly exceeds the time spent transferring data to and from GPU memory.

Massively parallel: the computations can be broken down into hundreds or thousands of independent units of work.
Architecture

# Copy tutorial files
scc1 % cp -r /scratch/katia/cuda .

# Request an interactive session on a node with a GPU
scc1 % qrsh -l gpus=1

# Change directory
scc1-ha1 % cd deviceQuery

# Set environment variables to link to CUDA 5.0
scc1-ha1 % module load cuda/5.0

# Execute the deviceQuery program
scc1-ha1 % ./deviceQuery
Architecture
Information that we will need later in this tutorial:

CUDA Driver Version / Runtime Version:          5.0 / 5.0
CUDA Capability Major/Minor version number:     2.0
Total amount of global memory:                  5375 MBytes
(14) Multiprocessors x (32) CUDA Cores/MP:      448 CUDA Cores
Total amount of constant memory:                65536 bytes
Total amount of shared memory per block:        49152 bytes
Total number of registers available per block:  32768
CUDA Architecture
Information that we will need later in this tutorial:

Warp size: 32
Maximum number of threads per multiprocessor: 1536
Maximum number of threads per block: 1024
Maximum sizes of each dimension of a block: 1024 x 1024 x 64
Maximum sizes of each dimension of a grid: 65535 x 65535 x 65535
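These limits can also be queried from your own code through the CUDA runtime API. A minimal sketch (not part of the tutorial files; layout and file name are illustrative):

#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // query device 0

    printf("Warp size:                      %d\n", prop.warpSize);
    printf("Max threads per multiprocessor: %d\n", prop.maxThreadsPerMultiProcessor);
    printf("Max threads per block:          %d\n", prop.maxThreadsPerBlock);
    printf("Max block dimensions:           %d x %d x %d\n",
           prop.maxThreadsDim[0], prop.maxThreadsDim[1], prop.maxThreadsDim[2]);
    printf("Max grid dimensions:            %d x %d x %d\n",
           prop.maxGridSize[0], prop.maxGridSize[1], prop.maxGridSize[2]);
    return 0;
}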
CUDA Architecture
Query device capabilities and measure GPU/CPU bandwidth.
This is a simple test program that measures the memcopy bandwidth of the GPU and the memcpy bandwidth across the PCIe bus.

# Change directory
scc1-ha1 % cd bandwidthTest

# Execute bandwidthTest program


scc1-ha1 % ./bandwidthTest
CUDA Terminology

CUDA:

Host
The CPU and its memory (host memory)

Device
The GPU and its memory (device memory)
CUDA: C Language Extensions

CUDA:

• Based on industry-standard C

• Language extensions allow heterogeneous programming

• APIs for memory and device management


Hello, Cuda!

CUDA: Basic example HelloCuda1.cu

#include <stdio.h>
int main(void){

printf("Hello, Cuda! \n");

return(0);
}

To build the program, use nvcc compiler:

scc-he1: % nvcc -o helloCuda1 helloCuda1.cu


Hello, Cuda!

The CUDA language closely follows C/C++ syntax with a minimum set of extensions:

A function that is executed on the device (GPU) and called from host code is marked __global__ (such a function is called a kernel). A function that runs on the device and is called only from other device code is marked __device__:

__global__ void mykernel(){ . . . }

__device__ void foo(){ . . . }

The NVCC compiler compiles the functions that run on the device, and the host compiler (gcc) takes care of all the other functions that run on the host (e.g. main()).
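For illustration only (this is not one of the tutorial files), a minimal sketch that uses both qualifiers; the names square and squareKernel are made up for this example:

#include <stdio.h>

// __device__ function: runs on the GPU, callable only from device code
__device__ float square(float x) { return x * x; }

// __global__ function (kernel): runs on the GPU, launched from host code
__global__ void squareKernel(float *data) {
    data[threadIdx.x] = square(data[threadIdx.x]);
}

int main(void) {
    float h[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float *d;
    cudaMalloc((void **)&d, sizeof(h));
    cudaMemcpy(d, h, sizeof(h), cudaMemcpyHostToDevice);
    squareKernel<<<1, 4>>>(d);                        // launched from host code
    cudaMemcpy(h, d, sizeof(h), cudaMemcpyDeviceToHost);
    cudaFree(d);
    printf("%.1f %.1f %.1f %.1f\n", h[0], h[1], h[2], h[3]);
    return 0;
}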
Hello, Cuda!

CUDA: Basic example HelloCuda2.cu

#include <stdio.h>

__global__ void cudakernel(void){


printf("Hello, I am CUDA kernel ! Nice to meet you!\n");
}
Hello, Cuda!

CUDA: Basic example HelloCuda2.cu

int main(void){

printf("Hello, Cuda! \n");

cudakernel<<<1,1>>>();
cudaDeviceSynchronize();

printf("Nice to meet you too! Bye, CUDA\n");

return(0);
}
Hello, Cuda!

CUDA: Basic example HelloCuda2.cu

cudakernel<<<N,M>>>();

cudaDeviceSynchronize();

Triple angle brackets indicate that the function will be executed on the device (GPU). Such a function is called a kernel.

A kernel always has return type void.

A kernel launch is asynchronous: control returns to the host immediately. To prevent the program from finishing before the kernel has completed, we have to call cudaDeviceSynchronize().
CUDA: C Language Extensions

There are a number of CUDA runtime functions:

Device management:
cudaGetDeviceCount(), cudaGetDeviceProperties()

Error management:
cudaGetLastError() (cudaSafeCall() and cudaCheckError() are commonly used helper macros built on top of the runtime API rather than runtime functions themselves)

Device memory management:
cudaMalloc(), cudaFree(), cudaMemcpy()
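Error checking is often wrapped in a small helper macro. A minimal sketch of such a macro (the name mirrors cudaSafeCall from the list above, but this particular implementation is illustrative, not part of the runtime):

#include <stdio.h>
#include <stdlib.h>

#define cudaSafeCall(call)                                              \
    do {                                                                \
        cudaError_t err = (call);                                       \
        if (err != cudaSuccess) {                                       \
            fprintf(stderr, "CUDA error at %s:%d: %s\n",                \
                    __FILE__, __LINE__, cudaGetErrorString(err));       \
            exit(EXIT_FAILURE);                                         \
        }                                                               \
    } while (0)

// Usage:
//   cudaSafeCall( cudaMalloc((void **)&d_A, size) );
//   mykernel<<<blocks, threads>>>(d_A);
//   cudaSafeCall( cudaGetLastError() );   // check the kernel launch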
Hello, Cuda!

CUDA: Basic example HelloCuda2.cu

To build the program, use the nvcc compiler:

scc-he1: % nvcc -o helloCuda2 helloCuda2.cu -arch sm_20

The ability to print from within a kernel requires Compute Capability 2.0 or later. To request support for Compute Capability 2.0, we add the -arch sm_20 option to the compilation command line.
Hello, Cuda!

CUDA: Basic example HelloCudaBlock.cu

#include <stdio.h>

__global__ void cudakernel(void){


printf("Hello, I am CUDA block %d !\n", blockIdx.x);
}

int main(void){
    . . .
    cudakernel<<<16,1>>>();
    . . .
}

To simplify the compilation process we will use a Makefile:

% make HelloCudaBlock
CUDA: C Language Extensions

CUDA provides special variables for thread identification inside the kernel:

dim3 threadIdx;  // thread ID within the block
dim3 blockIdx;   // block ID within the grid
dim3 blockDim;   // number of threads per block
dim3 gridDim;    // number of blocks in the grid

In the simple 1-dimensional case, we use only the first component of each variable, e.g. threadIdx.x
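A small illustrative kernel (not from the tutorial files; it requires -arch sm_20 for printf, as discussed earlier) that prints these built-in variables and the global index derived from them:

__global__ void whoAmI(void) {
    int globalId = blockIdx.x * blockDim.x + threadIdx.x;
    printf("block %d of %d, thread %d of %d, global index %d\n",
           blockIdx.x, gridDim.x, threadIdx.x, blockDim.x, globalId);
}

// Launched, for example, as:  whoAmI<<<4, 8>>>();
// 4 blocks x 8 threads -> 32 messages with global indices 0..31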
CUDA: Blocks and Threads

[Execution model diagram: serial code runs on the host; Kernel A then executes on the device as a grid of thread blocks; more serial host code follows; Kernel B executes on the device.]
CUDA: C Language Extensions

CUDA: Basic example HelloCudaThread.cu

#include <stdio.h>

__global__ void cudakernel(void){


printf("Hello, I am CUDA thread %d !\n", threadIdx.x);
}

int main(void){

. . .
cudakernel<<<1,16>>>();
. . .
}
CUDA: Blocks and Threads

• One kernel is executed on the device at a time
• Many threads execute each kernel
• Each thread executes the same code (SPMD)
• Threads are grouped into thread blocks (see the launch-configuration sketch after this list)
• A kernel is launched as a grid of thread blocks
• Threads are scheduled in sets of warps
• A warp is a group of 32 threads
• The SM executes the same instruction on all threads in a warp
• Blocks cannot synchronize with each other and can run in any order
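A sketch of a two-dimensional launch configuration illustrating the grid/block hierarchy (not from the tutorial files; the names kernel2D and d_out are made up for this example):

__global__ void kernel2D(float *out, int width, int height) {
    // Each thread computes its 2D coordinates within the grid
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height)
        out[y * width + x] = 0.0f;       // one element per thread
}

// Host side: 16x16 = 256 threads per block (8 warps),
// and enough blocks to cover the whole width x height array
dim3 block(16, 16);
dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
kernel2D<<<grid, block>>>(d_out, width, height);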
Vector Addition Example

CUDA: vectorAdd.cu

__global__ void vectorAdd(const float *A,
                          const float *B,
                          float *C,
                          int numElements){

    int i = blockDim.x * blockIdx.x + threadIdx.x;

    if (i < numElements) {
        C[i] = A[i] + B[i];
    }
}
Vector Addition Example

CUDA: vectorAdd.cu

[Diagram: four blocks (blockIdx.x = 0..3), each with eight threads (threadIdx.x = 0..7); the global index i runs from 0 to 31.]

int i = blockDim.x * blockIdx.x + threadIdx.x;

Unlike blocks, threads have mechanisms to communicate and synchronize


Vector Addition Example

CUDA: vectorAdd.cu device memory allocation

int main(void) {
    . . .
    float *d_A = NULL;
    err = cudaMalloc((void **)&d_A, size);

    float *d_B = NULL;
    err = cudaMalloc((void **)&d_B, size);

    float *d_C = NULL;
    err = cudaMalloc((void **)&d_C, size);
    . . .
}
Vector Addition Example

CUDA: vectorAdd.cu

int main(void) {
    . . .
    // Copy the input vectors to the device
    cudaMemcpy(d_A, A, size, cudaMemcpyHostToDevice);
    cudaMemcpy(d_B, B, size, cudaMemcpyHostToDevice);
    . . .
}
Vector Addition Example

CUDA: vectorAdd.cu

int main(void) {
    . . .
    // Launch the Vector Add CUDA Kernel
    int threadsPerBlock = 256;
    int blocksPerGrid = (numElements + threadsPerBlock - 1) / threadsPerBlock;

    vectorAdd<<<blocksPerGrid, threadsPerBlock>>>(d_A, d_B, d_C, numElements);

    err = cudaGetLastError();
    . . .
}
Vector Addition Example

CUDA: vectorAdd.cu

int main(void) {
    . . .
    // Copy the result back to the host
    cudaMemcpy(C, d_C, size, cudaMemcpyDeviceToHost);

    // Clean up
    cudaFree(d_A); cudaFree(d_B); cudaFree(d_C);
    . . .
}
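After the copy back it is good practice to verify the result on the host. A minimal sketch of such a check, reusing the host arrays A, B, C and numElements from the code above:

// Verify that the GPU result matches the expected sum
for (int i = 0; i < numElements; ++i) {
    if (fabs(A[i] + B[i] - C[i]) > 1e-5) {
        fprintf(stderr, "Result verification failed at element %d!\n", i);
        exit(EXIT_FAILURE);
    }
}
printf("Test PASSED\n");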
Timing CUDA kernel

CUDA: vectorAddTime.cu
float memsettime;
cudaEvent_t start, stop;

// initialize CUDA timer


cudaEventCreate(&start); cudaEventCreate(&stop);
cudaEventRecord(start,0);

// CUDA Kernel
. . .

// stop CUDA timer


cudaEventRecord(stop,0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&memsettime,start,stop);
printf(" *** CUDA execution time: %f *** \n", memsettime);
cudaEventDestroy(start);
cudaEventDestroy(stop);
Timing CUDA kernel

CUDA: vectorAddTime.cu

scc-ha1 % make

# Specify the number of threads per block on the command line
scc-ha1 % ./vectorAddTime 128

Explore how the CUDA kernel execution time depends on the block size (a host-side timing-loop sketch follows the list below).

Remember:
• The CUDA Streaming Multiprocessor executes threads in warps (32 threads)
• There is a maximum of 1024 threads per block (for our GPU)
• There is a maximum of 1536 threads per multiprocessor (for our GPU)
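One way to run this experiment entirely from host code instead of repeated command-line runs; a sketch (not from vectorAddTime.cu) that reuses d_A, d_B, d_C, numElements and the vectorAdd kernel from the previous example:

// Time the same kernel with several block sizes and print the results
int sizes[] = {32, 64, 128, 256, 512, 1024};
for (int s = 0; s < 6; ++s) {
    int threadsPerBlock = sizes[s];
    int blocksPerGrid = (numElements + threadsPerBlock - 1) / threadsPerBlock;

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);
    vectorAdd<<<blocksPerGrid, threadsPerBlock>>>(d_A, d_B, d_C, numElements);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("block size %4d: %f ms\n", threadsPerBlock, ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
}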
Dot Product

CUDA: dotProd1.cu

[Diagram: the pairwise products a0*b0, a1*b1, a2*b2, a3*b3 are summed to produce the scalar C.]

C = A * B = (a0, a1, a2, a3) * (b0, b1, b2, b3) = a0*b0 + a1*b1 + a2*b2 + a3*b3
Dot Product

CUDA: dotProd1.cu

A block of threads shares common memory, called shared memory

Shared Memory is extremely fast on-chip memory

To declare shared memory use __shared__ keyword

Shared Memory is not visible to the threads in other blocks


Dot Product

CUDA: dotProd1.cu
#define N 512
__global__ void dot( int *a, int *b, int *c ) {

    // Shared memory for results of multiplication
    __shared__ int temp[N];
    temp[threadIdx.x] = a[threadIdx.x] * b[threadIdx.x];

    // Thread 0 sums the pairwise products
    if( threadIdx.x == 0 ) {
        int sum = 0;
        for( int i = 0; i < N; i++ ) sum += temp[i];
        *c = sum;
    }
}

What if thread 0 starts to calculate the sum before the other threads have completed their multiplications?
Thread Synchronization

CUDA: dotProd1.cu
#define N 512
__global__ void dot( int *a, int *b, int *c ) {

    // Shared memory for results of multiplication
    __shared__ int temp[N];
    temp[threadIdx.x] = a[threadIdx.x] * b[threadIdx.x];

    __syncthreads();

    // Thread 0 sums the pairwise products
    if( threadIdx.x == 0 ) {
        int sum = 0;
        for( int i = 0; i < N; i++ ) sum += temp[i];
        *c = sum;
    }
}
Thread Synchronization

CUDA: dotProd1.cu
int main(void) {
    . . .
    // Copy input vectors to the device
    . . .

    // Launch CUDA kernel
    dot<<<1, N>>>(dev_A, dev_B, dev_C);

    . . .
    // Copy the result back from the device
    . . .
}

But our vector length is limited to the maximum block size. Can we use multiple blocks?
Race Condition
CUDA: dotProd2.cu

[Diagram: Block 0 sums a0*b0 + a1*b1 + a2*b2 + a3*b3 into its partial sum; Block 1 sums a4*b4 + a5*b5 + a6*b6 + a7*b7 into its partial sum; both blocks then add their partial sums into the single result C.]
Race Condition
CUDA: dotProd2.cu
#define N (2048*2048)
#define THREADS_PER_BLOCK 512

__global__ void dotProductKernel( int *a, int *b, int *c ) {
    __shared__ int temp[THREADS_PER_BLOCK];

    int index = threadIdx.x + blockIdx.x * blockDim.x;
    temp[threadIdx.x] = a[index] * b[index];

    __syncthreads();

    if( threadIdx.x == 0 ) {
        int sum = 0;
        for( int i = 0; i < THREADS_PER_BLOCK; i++ ) sum += temp[i];
        *c += sum;
    }
}
Blocks interfere with each other – Race condition
Race Condition
CUDA: dotProd2.cu
#define N (2048*2048)
#define THREADS_PER_BLOCK 512

__global__ void dotProductKernel( int *a, int *b, int *c ) {
    __shared__ int temp[THREADS_PER_BLOCK];

    int index = threadIdx.x + blockIdx.x * blockDim.x;
    temp[threadIdx.x] = a[index] * b[index];

    __syncthreads();

    if( threadIdx.x == 0 ) {
        int sum = 0;
        for( int i = 0; i < THREADS_PER_BLOCK; i++ ) sum += temp[i];
        atomicAdd(c, sum);
    }
}
Atomic Operations
Race conditions: the behavior depends upon the relative timing of multiple event sequences. They can occur when an implied read-modify-write is interruptible.

An uninterruptible read-modify-write is atomic. CUDA provides atomic operations such as:

atomicAdd(), atomicSub(), atomicMin(), atomicMax(),
atomicInc(), atomicDec(), atomicExch(), atomicCAS()
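A small illustration of the difference (not from the tutorial files; the kernel countEvens and its arguments are made up for this example). The commented-out plain increment is an interruptible read-modify-write and therefore racy, while atomicAdd() performs the update atomically:

__global__ void countEvens(const int *data, int n, int *count) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && data[i] % 2 == 0) {
        // *count = *count + 1;   // race: many threads read-modify-write concurrently
        atomicAdd(count, 1);      // correct: the read-modify-write is uninterruptible
    }
}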
CUDA Best Practices

NVIDIA’s link:
http://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html

The optimization cycle:

1. Assess: locate the slowest part of the code, e.g.
   gcc -O2 -g -pg myprog.c
   gprof ./a.out > profile.txt

2. Parallelize: use CUDA to parallelize the code; use optimized cu* libraries where possible.

3. Optimize: overlap data transfers with computation, fine-tune operation sequences.

4. Deploy: compare the outcome with the original expectations.
CUDA Debugging

CUDA-GDB, a GNU debugger that runs on Linux and Mac:
http://developer.nvidia.com/cuda-gdb

The NVIDIA Parallel Nsight debugging and profiling tool for Microsoft Windows Vista and Windows 7 is available as a free plugin for Microsoft Visual Studio:
http://developer.nvidia.com/nvidia-parallel-nsight
This tutorial has been made possible by the Scientific Computing and Visualization group at Boston University.

Katia Oleinik
koleinik@bu.edu

http://www.bu.edu/tech/research/training/tutorials/list/
