GPU Computing

CPUs typically have 4 to 8 cores, whereas video cards can have hundreds or even thousands of cores on a single card. GPU computing lets you take advantage of those video card cores to perform parallel computations.

GPU computing is the use of a graphics processing unit (GPU), which traditionally handles computation only for computer graphics, to perform computation in applications normally handled by the central processing unit (CPU).
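To make the idea concrete, here is a minimal CUDA C++ sketch (compiled with nvcc; it assumes a CUDA 11.4 toolchain like the one on the school's servers). The CPU sets up the data, and the GPU runs one lightweight thread per array element in parallel:

```cuda
#include <cstdio>

// Each GPU thread adds one pair of elements; thousands of threads
// execute this kernel body in parallel, one array index each.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;            // one million elements
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);     // unified memory: visible to CPU and GPU
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;   // enough blocks to cover n
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();          // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);      // each element should be 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

On a CPU this loop would run a few iterations at a time; on the GPU, each of the CUDA cores listed below can work on a different element concurrently.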

The School of Computer Science has several GPU computing resources available to current Computer Science students who are taking a course that requires GPU programming. Graduate students should ask their graduate supervisor about GPU resources.

Here is the school's GPU course hardware, running CUDA 11.4:


  • 8 GB memory / card
  • 4,864 CUDA cores
  • 152 Tensor cores
  • 38 RT cores


  • 12 GB memory / card
  • 3,584 CUDA cores
  • 120 Tensor cores
  • 30 RT cores


  • 12 GB memory / card
  • 5,120 CUDA cores
  • 640 Tensor cores

GeForce RTX 2080 SUPER

  • 8 GB memory / card
  • 3,072 CUDA cores
  • 384 Tensor cores
  • 48 RT cores

GeForce GTX 1080 Ti

  • 11 GB memory / card
  • 3,584 CUDA cores / card
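To confirm which of the cards above a server actually exposes to you, a small device-query program (a sketch, compiled with nvcc) can report each GPU's name, memory, and compute capability. Note that CUDA core counts are not reported directly by the runtime; they are derived from the multiprocessor count and the card's architecture.

```cuda
#include <cstdio>

// Lists every GPU visible to the process, with the properties most
// useful for matching a card against the hardware specs above.
int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; d++) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("GPU %d: %s\n", d, prop.name);
        printf("  memory: %.1f GB\n", prop.totalGlobalMem / 1073741824.0);
        printf("  multiprocessors: %d\n", prop.multiProcessorCount);
        printf("  compute capability: %d.%d\n", prop.major, prop.minor);
    }
    return 0;
}
```

The same information is also available from the command line via nvidia-smi on machines where the driver tools are installed.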

Some of the typical software applications that run on the GPU servers are:

  • MPI (Message Passing Interface), a standardized and portable API for communicating data via messages between distributed processes.
  • CUDA, a parallel computing platform and application programming interface (API) created by Nvidia.
  • TensorFlow, a software library for numerical computation using data flow graphs.

SCS CUDA Instructions: SCS GPU Computing with Openstack