CRC Wiki



General Description

CUDA™ is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU).

With millions of CUDA-enabled GPUs sold to date, software developers, scientists and researchers are finding broad-ranging uses for GPU computing with CUDA. Here are a few examples:

Identify hidden plaque in arteries: Heart attacks are the leading cause of death worldwide. Harvard Engineering, Harvard Medical School and Brigham & Women's Hospital have teamed up to use GPUs to simulate blood flow and identify hidden arterial plaque without invasive imaging techniques or exploratory surgery.

Analyze air traffic flow: The National Airspace System manages the nationwide coordination of air traffic flow. Computer models help identify new ways to alleviate congestion and keep airplane traffic moving efficiently. Using the computational power of GPUs, a team at NASA obtained a large performance gain, reducing analysis time from ten minutes to three seconds.

Visualize molecules: A molecular simulation called NAMD (nanoscale molecular dynamics) gets a large performance boost with GPUs. The speed-up is a result of the parallel architecture of GPUs, which enables NAMD developers to port compute-intensive portions of the application to the GPU using the CUDA Toolkit.

CUDA provides libraries, compiler directives, and language extensions that make it easy to use GPUs for general-purpose computing. NVIDIA develops CUDA for three languages: C, C++, and Fortran, although there are third-party wrappers available for many other languages, such as Python (PyCUDA), Java (jCUDA), and Perl (KappaCUDA). Some languages, like Mathematica, even have native support.
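As a sketch of the library route, the snippet below uses cuBLAS (the BLAS library shipped with the CUDA Toolkit) to compute y = a*x + y on the GPU without writing any kernel code. It assumes a CUDA-capable GPU; the array size and values are illustrative, and error checking is omitted for brevity.

```cuda
// Sketch: single-precision a*x + y (SAXPY) via the cuBLAS library.
#include <cstdio>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main() {
    const int n = 1 << 20;
    float *x = new float[n], *y = new float[n];
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Allocate device memory and copy the inputs over.
    float *d_x, *d_y;
    cudaMalloc(&d_x, n * sizeof(float));
    cudaMalloc(&d_y, n * sizeof(float));
    cudaMemcpy(d_x, x, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, y, n * sizeof(float), cudaMemcpyHostToDevice);

    // y = 2*x + y, computed on the GPU entirely by the library.
    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 2.0f;
    cublasSaxpy(handle, n, &alpha, d_x, 1, d_y, 1);
    cublasDestroy(handle);

    cudaMemcpy(y, d_y, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", y[0]);

    cudaFree(d_x); cudaFree(d_y);
    delete[] x; delete[] y;
    return 0;
}
```

Compile by linking against the library, e.g. `nvcc saxpy.cu -lcublas -o saxpy`.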

CUDA aims to make parallel algorithms easy to write and execute. Because CUDA takes the body of a serial computation and executes it thousands of times in parallel across a GPU, converting a serial algorithm to a parallel one often takes only a few lines of code. This means the user can spend more time finding areas of their program that parallelize easily rather than micromanaging the GPU.
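To illustrate, the sketch below shows a serial vector-add loop next to its CUDA counterpart: the loop body moves into a `__global__` kernel and the loop index is replaced by a per-thread index. This is a minimal example, assuming a CUDA-capable GPU and compilation with nvcc.

```cuda
// Sketch: converting a serial loop into a CUDA kernel.
#include <cstdio>
#include <cuda_runtime.h>

// Serial version: one CPU thread walks the whole array.
void add_serial(int n, const float *x, float *y) {
    for (int i = 0; i < n; ++i)
        y[i] = x[i] + y[i];
}

// Parallel version: each GPU thread handles one index
// instead of iterating; only the loop header changes.
__global__ void add_kernel(int n, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                // guard: the grid may be larger than n
        y[i] = x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Unified memory keeps the example short (visible to CPU and GPU).
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int blocks = (n + 255) / 256;
    add_kernel<<<blocks, 256>>>(n, x, y);
    cudaDeviceSynchronize();  // wait for the GPU to finish

    printf("y[0] = %f\n", y[0]);
    cudaFree(x); cudaFree(y);
    return 0;
}
```

Save as `add.cu` and build with `nvcc add.cu -o add`.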

Basic Usage

There are a number of guides to help you get started with CUDA. A great starting point is NVIDIA's blog post An Even Easier Introduction to CUDA.

Useful Options

Further Information

See the official website: CUDA