On this page:
- GPU VM Account Details
- Testing the GPU VM software
- Troubleshooting
This page shows you how to test your virtual machine's GPU, and how to access and test the software that is pre-installed on it.
GPU VM Account Details
- Image Information
Image Name: SCS-GPU-fall-2023-08-16-v3
Creation Date: August 16, 2023
Operating System: Ubuntu 22.04
Window Manager: XFCE
Intended usage: Openstack GPU virtual machine with AI programming support
- Account
You will be given a username and password once your VM is ready.
Please change your password as soon as the VM is provisioned for you. This can be done by logging into your VM and then opening a terminal window and typing ‘passwd’.
Please note:
- Change the default password for your account!
- Your account does not have Openstack dashboard access. Access is by IP address only.
- There are no system backups for your VM; you are responsible for your own backups!
- Accessing your GPU Virtual Machine
1. From outside of Carleton, you will need to connect to the Carleton VPN in order to access the VM
- VPN: use the Carleton VPN when connecting from outside of the campus
2. Listed are the ways you can connect to your VM:
- x2go (graphical desktop): download the x2go client to get a full graphical desktop (use Session Type: XFCE). More info: SSH Connection with x2go Remote Desktop Client
- ssh (command line): use a terminal window to gain ssh access (see the example below)
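If you use ssh, a minimal connection sketch looks like the following; the username and IP address are placeholders for the credentials and address given to you when the VM was provisioned:
# replace your_username and VM_IP_ADDRESS with your own values
ssh your_username@VM_IP_ADDRESS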
- System Administration
- You have full root/sudo privileges on your GPU VM.
- You can restart your VM using the ‘reboot’ command
- If you shut down your VM, you will need to contact an SCS Sysadmin to start the VM for you
- If your VM is in an unusable state or it is difficult to fix errors then you have the option to re-launch your VM. Please contact the SCS System Administrator to re-launch the VM for you. Re-launching it means terminating the VM (you lose all local data) and launching a new instance.
- Sometimes the VM is not accessible via x2go, but you can still access it using an ssh terminal such as PuTTY
- run the command ‘passwd’ to change your local Ubuntu password
- Software
The following software is installed and tested on this virtual machine.
Software and versions:
- NVIDIA Driver: 12.0
- CUDA Runtime: 11.8
- cuDNN: 8.6.0
- GCC: 11.4.0 + (9.4.0)
- Conda: 23.5.2
- Bazel: 5.3.0
Python software and versions:
- Python: 3.10.12
- Pip3: 22.1.1
- Tensorflow: 2.9.1
- Python-torch: 1.8.0a0
- Keras: 2.9.0
- Keras-Preprocessing: 1.1.2
- Pandas: 2.0.3
- Numpy: 1.23.1
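To confirm these versions on your own VM, here is a quick sketch of version checks; it assumes the tools are on your default PATH and that the Python packages import cleanly (adjust or drop any line that does not apply):
# driver and CUDA driver version appear in the nvidia-smi header
nvidia-smi | head -4
# compiler and package managers
gcc --version | head -1
conda --version
python3 --version
pip3 --version
# Python library versions
python3 -c "import tensorflow as tf, torch, keras, pandas, numpy; print(tf.__version__, torch.__version__, keras.__version__, pandas.__version__, numpy.__version__)"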
Testing the GPU VM software
Here are some helpful tests to see if your software is running correctly.
- Probing your GPU
This command allows you to test whether the GPU is being detected; it identifies the GPU and shows any running jobs, utilisation, and memory usage in real time (a more compact, query-style alternative is shown after the list below):
nvidia-smi -l
You should see a continuously refreshing display that shows:
- CUDA version
- GPU name
- What job is running on your GPU
- Stats about your GPU: real-time temperature, load, and memory usage
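If you prefer a compact, query-style view instead of the full dashboard, nvidia-smi can also poll just the fields you care about (these are standard nvidia-smi query options; the 5-second interval is an arbitrary choice):
nvidia-smi --query-gpu=name,temperature.gpu,utilization.gpu,memory.used,memory.total --format=csv -l 5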
- Testing CUDA
There are CUDA samples that you can download, compile, and run for your version of CUDA. These samples have already been compiled for you in your account. One of the samples probes your GPU and gives you detailed specs about it. You can run this sample in your account (long output):
/home/student/cuda-samples/Samples/1_Utilities/deviceQuery/deviceQuery
CUDA Sample code output:
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: “NVIDIA TITAN V”
CUDA Driver Version / Runtime Version 12.0 / 11.8
CUDA Capability Major/Minor version number: 7.0
Total amount of global memory: 12057 MBytes (12642746368 bytes)
(080) Multiprocessors, (064) CUDA Cores/MP: 5120 CUDA Cores
GPU Max Clock rate: 1455 MHz (1.46 GHz)
Memory Clock rate: 850 Mhz
Memory Bus Width: 3072-bit
L2 Cache Size: 4718592 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total shared memory per multiprocessor: 98304 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 7 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Managed Memory: Yes
Device supports Compute Preemption: Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 0 / 5
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 12.0, CUDA Runtime Version = 11.8, NumDevs = 1
Result = PASS
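If you ever need to rebuild a sample (for example after modifying it), a minimal sketch follows; it assumes the cuda-samples checkout includes the per-sample Makefiles that the 11.x sample releases ship with:
cd /home/student/cuda-samples/Samples/1_Utilities/deviceQuery
make
./deviceQuery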
- Test cuDNN
The sample code is already compiled under the 'student' user's home directory. First change to the correct directory:
cd /home/student/cudnn-samples/mnistCUDNN
After that run the program:
./mnistCUDNN
cuDNN sample code output:
Executing: mnistCUDNN
cudnnGetVersion() : 8600 , CUDNN_VERSION from cudnn.h : 8600 (8.6.0)
Host compiler version : GCC 11.4.0
There are 1 CUDA capable devices on your machine :
device 0 : sms 80 Capabilities 7.0, SmClock 1455.0 Mhz, MemSize (Mb) 12057, MemClock 850.0 Mhz, Ecc=0, boardGroupID=0
Using device 0
Testing single precision
Loading binary file data/conv1.bin
Loading binary file data/conv1.bias.bin
Loading binary file data/conv2.bin
Loading binary file data/conv2.bias.bin
Loading binary file data/ip1.bin
Loading binary file data/ip1.bias.bin
Loading binary file data/ip2.bin
Loading binary file data/ip2.bias.bin
Loading image data/one_28x28.pgm
Performing forward propagation …
Testing cudnnGetConvolutionForwardAlgorithm_v7 …
^^^^ CUDNN_STATUS_SUCCESS for Algo 1: -1.000000 time requiring 0 memory
^^^^ CUDNN_STATUS_SUCCESS for Algo 0: -1.000000 time requiring 0 memory
^^^^ CUDNN_STATUS_SUCCESS for Algo 2: -1.000000 time requiring 0 memory
^^^^ CUDNN_STATUS_SUCCESS for Algo 5: -1.000000 time requiring 178432 memory
^^^^ CUDNN_STATUS_SUCCESS for Algo 4: -1.000000 time requiring 184784 memory
^^^^ CUDNN_STATUS_SUCCESS for Algo 7: -1.000000 time requiring 2057744 memory
^^^^ CUDNN_STATUS_NOT_SUPPORTED for Algo 6: -1.000000 time requiring 0 memory……..
^^^^ CUDNN_STATUS_SUCCESS for Algo 2: 0.179200 time requiring 64000 memory
^^^^ CUDNN_STATUS_NOT_SUPPORTED for Algo 6: -1.000000 time requiring 0 memory
^^^^ CUDNN_STATUS_NOT_SUPPORTED for Algo 3: -1.000000 time requiring 0 memory
Resulting weights from Softmax:
0.0000000 0.0000000 0.0000000 1.0000000 0.0000000 0.0000714 0.0000000 0.0000000 0.0000000 0.0000000
Loading image data/five_28x28.pgm
Performing forward propagation …
Resulting weights from Softmax:
0.0000000 0.0000008 0.0000000 0.0000002 0.0000000 1.0000000 0.0000154 0.0000000 0.0000012 0.0000006
Result of classification: 1 3 5
Test passed!
- Check TensorFlow
TensorFlow version:
python3 -c 'import tensorflow as tf; print(tf.__version__)'
Test TensorFlow CPU support:
python3 -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
Test TensorFlow GPU support:
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
- AI Benchmark
The AI Benchmark package is useful for checking whether your TensorFlow installation is actually using your GPU; it is not uncommon for TensorFlow to silently fall back to the CPU. Running the benchmark is one way to confirm that TensorFlow is using the GPU and to see whether it is performing as expected.
Test using AI benchmark (may take 20 minutes):
python3 -c 'from ai_benchmark import AIBenchmark;benchmark = AIBenchmark();results = benchmark.run()'
Then look up your results using the AI Benchmark link below. The AI Benchmark chart will tell you how well your GPU is performing. If it is not performing well, then you are not using the GPU!
Here are the AI Benchmark scores: http://ai-benchmark.com/ranking_deeplearning.html
Successful AI Benchmark output:
>> AI-Benchmark-v.0.1.2
>> Let the AI Games begin..

* TF Version: 2.9.1
* Platform: Linux-5.15.0-37-generic-x86_64-with-glibc2.35
* CPU: N/A
* CPU RAM: 47 GB
* GPU/0: NVIDIA GeForce RTX 2080 SUPER
* GPU RAM: 6.5 GB
* CUDA Version: 11.7
* CUDA Build: V11.7.64

The benchmark is running…
The tests might take up to 20 minutes
Please don’t interrupt the script

1/19. MobileNet-V2
1.1 – inference | batch=50, size=224×224: 60.8 ± 12.9 ms
1.2 – training | batch=50, size=224×224: 203 ± 2 ms

2/19. Inception-V3
2.1 – inference | batch=20, size=346×346: 63.6 ± 2.6 ms
2.2 – training | batch=20, size=346×346: 199 ± 4 ms

3/19. Inception-V4
3.1 – inference | batch=10, size=346×346: 62.2 ± 3.8 ms
3.2 – training | batch=10, size=346×346: 216 ± 1 ms

4/19. Inception-ResNet-V2
4.1 – inference | batch=10, size=346×346: 85.2 ± 6.2 ms
4.2 – training | batch=8, size=346×346: 232 ± 8 ms

5/19. ResNet-V2-50
5.1 – inference | batch=10, size=346×346: 44.0 ± 2.3 ms
5.2 – training | batch=10, size=346×346: 127 ± 1 ms

6/19. ResNet-V2-152
6.1 – inference | batch=10, size=256×256: 55.9 ± 2.3 ms
6.2 – training | batch=10, size=256×256: 178 ± 3 ms

7/19. VGG-16
7.1 – inference | batch=20, size=224×224: 102 ± 1 ms
7.2 – training | batch=2, size=224×224: 166 ± 1 ms

8/19. SRCNN 9-5-5
8.1 – inference | batch=10, size=512×512: 77.3 ± 1.9 ms
8.2 – inference | batch=1, size=1536×1536: 90.8 ± 3.5 ms
8.3 – training | batch=10, size=512×512: 253 ± 4 ms

9/19. VGG-19 Super-Res
9.1 – inference | batch=10, size=256×256: 112 ± 3 ms
9.2 – inference | batch=1, size=1024×1024: 178 ± 1 ms
9.3 – training | batch=10, size=224×224: 266.1 ± 0.8 ms

10/19. ResNet-SRGAN
10.1 – inference | batch=10, size=512×512: 112 ± 1 ms
10.2 – inference | batch=1, size=1536×1536: 99.7 ± 1.9 ms
10.3 – training | batch=5, size=512×512: 166 ± 3 ms

11/19. ResNet-DPED
11.1 – inference | batch=10, size=256×256: 115.0 ± 0.8 ms
11.2 – inference | batch=1, size=1024×1024: 185 ± 4 ms
11.3 – training | batch=15, size=128×128: 185.6 ± 0.7 ms

12/19. U-Net
12.1 – inference | batch=4, size=512×512: 216 ± 1 ms
12.2 – inference | batch=1, size=1024×1024: 209.7 ± 1.0 ms
12.3 – training | batch=4, size=256×256: 220 ± 1 ms

13/19. Nvidia-SPADE
13.1 – inference | batch=5, size=128×128: 95.4 ± 1.0 ms
13.2 – training | batch=1, size=128×128: 175 ± 1 ms

14/19. ICNet
14.1 – inference | batch=5, size=1024×1536: 233 ± 16 ms
14.2 – training | batch=10, size=1024×1536: 776 ± 35 ms

15/19. PSPNet
15.1 – inference | batch=5, size=720×720: 415 ± 7 ms
15.2 – training | batch=1, size=512×512: 156 ± 2 ms

16/19. DeepLab
16.1 – inference | batch=2, size=512×512: 114 ± 2 ms
16.2 – training | batch=1, size=384×384: 127 ± 4 ms

17/19. Pixel-RNN
17.1 – inference | batch=50, size=64×64: 1019 ± 33 ms
17.2 – training | batch=10, size=64×64: 5049 ± 216 ms

18/19. LSTM-Sentiment
18.1 – inference | batch=100, size=1024×300: 763 ± 46 ms
18.2 – training | batch=10, size=1024×300: 2314 ± 142 ms

19/19. GNMT-Translation
19.1 – inference | batch=1, size=1×20: 271 ± 14 ms

Device Inference Score: 10318
Device Training Score: 10631
Device AI Score: 20949

For more information and results, please visit http://ai-benchmark.com/alpha
- Check PyTorch
Version check:
python3 -c "import torch; print(torch.__version__)"
Test code:
python3 -c 'import torch;x=torch.rand(5,3);print(x)'
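The commands above only confirm that PyTorch itself runs; as a minimal sketch using standard PyTorch calls, you can also check that it sees the GPU:
python3 -c "import torch; print('CUDA available:', torch.cuda.is_available()); print(torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'no GPU - running on CPU')"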
Troubleshooting
- x2go connection failed errors
If you cannot log in to your VM using x2go, the most common cause is that your VM has run out of disk space. In this case you can:
- log in to your VM using an ssh terminal (PuTTY on Windows)
- free up space by deleting files and folders (identify large folders using: du -s -h *; check file system space with: df -h), as shown in the sketch below
- try to log in again using x2go
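A minimal sketch of the cleanup workflow (the paths are examples; adjust them to wherever your large files actually live):
# how full is the root file system?
df -h /
# largest items in your home directory, biggest first
du -sh ~/* 2>/dev/null | sort -rh | head -20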
- Extend the LVM filesystem
Usually the file system is already using all of the space currently assigned to it, but the underlying disk may have additional unallocated space. There is a simple script that can expand the LVM file system to use all of the allocated disk space. If you need to expand the file system, run the provided script:
sudo /home/student/extend-lvm/extend-lvm.sh /dev/vda
- NUMA node read from SysFS...
Because this is bleeding-edge software, you can expect some software issues. This particular one is a warning about the NUMA node read from SysFS. If you want to fix this warning, there is a bash script you can run:
sudo /home/student/os_scripts/numa-node-fix.bash
Reference: https://gist.github.com/zrruziev/b93e1292bf2ee39284f834ec7397ee9f
- Need to use another version of gcc?
There are two versions of gcc installed on this VM. You can use update-alternatives to switch between them:
sudo update-alternatives --config gcc
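After switching, you can confirm which version is active:
gcc --version | head -1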
- TensorFlow error: SysFS had negative value (-1)
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
Add this environment variable to /etc/environment:
TF_ENABLE_ONEDNN_OPTS=0
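One way to add the line (a sketch; you can equally edit /etc/environment with any editor as root, and you should only run this once to avoid duplicate entries):
echo 'TF_ENABLE_ONEDNN_OPTS=0' | sudo tee -a /etc/environment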
Find the NVIDIA GPU ID:
lspci -D | grep NVIDIA
In my case it was
0000:00:05.0
Then add a line to the crontab (note YOUR device id):
sudo crontab -e
Then add this line:
@reboot (echo 0 | tee -a "/sys/bus/pci/devices/0000:00:05.0/numa_node")
After that reboot and try the tensorflow GPU command again. That should fix this issue.
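You can confirm the fix took effect after the reboot (using the same example device ID as above; substitute your own); the file should now contain 0 rather than -1:
cat /sys/bus/pci/devices/0000:00:05.0/numa_node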