GPU Computing

CPUs typically have a handful of cores (often 4 or 8), whereas video cards can have hundreds or even thousands of cores on a single card. GPU computing lets you take advantage of those cores to run computations in parallel.

GPU computing is the use of a graphics processing unit (GPU), which traditionally handles computation only for computer graphics, to perform work in applications normally handled by the central processing unit (CPU).

The School of Computer Science has several GPU computing resources available for current Computer Science students who are taking a course that requires GPU programming. Graduate students should ask their supervisor about GPU resources.

The school's GPU course hardware uses CUDA 11.4:

Some of the typical software applications that run on the GPU servers are:

  • MPI (Message Passing Interface), a standardized and portable API for communicating data via messages between distributed processes.
  • CUDA, a parallel computing platform and application programming interface (API) created by Nvidia.
  • TensorFlow, a software library for numerical computation using data flow graphs.
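As a quick illustration of the kind of parallel computation CUDA enables, here is a minimal vector-addition sketch. The kernel name, array size, and launch configuration are arbitrary choices for the example, not anything specific to the school's servers:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread adds one pair of elements; threads execute in
// parallel across the GPU's many cores.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;            // one million elements (arbitrary)
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);     // unified memory, visible to CPU and GPU
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();          // wait for the kernel to finish

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

On a GPU server this would be compiled with the Nvidia compiler, e.g. `nvcc vecadd.cu -o vecadd`, and run like any other program.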

SCS CUDA instructions: SCS GPU Computing with OpenStack