GPU Computing

CPUs typically have 4 to 8 cores, whereas video cards can have hundreds or even thousands of cores on a single card. GPU computing lets you take advantage of those video card cores to perform computations in parallel.

GPU computing is the use of a graphics processing unit (GPU), which typically handles computation only for computer graphics, to perform computation in applications traditionally handled by the central processing unit (CPU).
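
To make this concrete, below is a minimal CUDA vector-addition sketch (the file name, array sizes, and variable names are illustrative and not part of the SCS setup). Each GPU thread adds one pair of elements, so many additions run at once across the card's cores. It can be compiled with nvcc, e.g. nvcc vector_add.cu -o vector_add.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Each thread adds one pair of elements; the GPU runs many threads in parallel.
    __global__ void vectorAdd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main()
    {
        const int n = 1 << 20;                    // one million elements (illustrative size)
        size_t bytes = n * sizeof(float);

        float *a, *b, *c;
        cudaMallocManaged(&a, bytes);             // unified memory, visible to both CPU and GPU
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);

        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        int threads = 256;
        int blocks = (n + threads - 1) / threads; // enough blocks to cover every element
        vectorAdd<<<blocks, threads>>>(a, b, c, n);
        cudaDeviceSynchronize();                  // wait for the GPU to finish

        printf("c[0] = %f\n", c[0]);              // expect 3.0

        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }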

The School of Computer Science has several GPU computing resources available for Computer Science students.

The school's GPU hardware has the following specs (a short sketch for querying these values on a given machine follows the list):

  • GeForce GTX 1080 Ti
  • CUDA Driver 9.0/8.0
  • 11 GB of memory per card
  • 3584 CUDA cores per card
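
To confirm these numbers on a particular machine, a short device-query sketch like the one below (illustrative, assuming the CUDA toolkit is installed) prints each card's name, memory, and multiprocessor count. Note that CUDA reports streaming multiprocessors rather than CUDA cores directly; on a GeForce GTX 1080 Ti each multiprocessor contains 128 CUDA cores, so 28 multiprocessors gives the 3584 cores listed above. Compile with nvcc, e.g. nvcc device_query.cu -o device_query.

    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        int count = 0;
        cudaGetDeviceCount(&count);

        for (int d = 0; d < count; ++d) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, d);
            printf("Device %d: %s\n", d, prop.name);
            printf("  Global memory:      %zu MiB\n", prop.totalGlobalMem >> 20);
            printf("  Multiprocessors:    %d\n", prop.multiProcessorCount);
            printf("  Compute capability: %d.%d\n", prop.major, prop.minor);
        }
        return 0;
    }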

Some of the typical software applications that run on the GPU servers are:

  • MPI (Message Passing Interface) is a standardized and portable API for communicating data via messages between distributed processes.
  • CUDA is a parallel computing platform and application programming interface (API) created by Nvidia.
  • TensorFlow is a software library for numerical computation using data flow graphs.

SCS CUDA Instructions: http://carleton.ca/scs/technical-support/linux/cuda-gpu-computing/

SCS TensorFlow setup instructions: https://carleton.ca/scs/gpu/tensorflow-set-up-instructions/