GPU identifiers
Revision as of 15:09, 19 May 2016
The GPUs accessible by your system in an interactive session (i.e., assuming you can directly ssh onto the node where the GPUs are installed) can be shown with nvidia-smi. The system assigns an integer number to each device CUDA can talk to.
Remember that some GPUs are dual devices. A single NVIDIA K80 will show up as two different devices, with two different integer numbers.
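A quick way to see the integer identifiers is the listing mode of nvidia-smi, which prints one line per CUDA device. The sketch below guards the call so it also degrades gracefully on a node without the NVIDIA tools installed (the guard is an illustrative addition, not part of Dynamo):

```shell
# List CUDA devices with their integer identifiers, one line per device.
# On a node with a single K80 board, two "GPU n: Tesla K80" lines appear.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi -L
else
  echo "nvidia-smi not found on this node"
fi
```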
Setting the identifiers
You need to make certain that the identifiers you pass to the project are correct: during unfolding, Dynamo will not check that the device numbers are valid!
On the GUI
In the dcp GUI, open the computing environment GUI. The GPU panel contains an edit field for the GPU identifiers.
Through the command line
The parameter is called gpu_identifier_set (short form: gpus). You can set it in a project using dvput, for instance:
dvput myProject gpus [0,1,2,3]
Environment variable
On some rare systems, you might be required to manually set the environment variable CUDA_VISIBLE_DEVICES to the device numbers that you have already selected inside Dynamo.
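In a bash-like shell, the variable can be set as follows; the device numbers 0,1,2,3 are just the example values used above and should match the identifiers you gave to Dynamo:

```shell
# Expose only the selected devices to CUDA applications.
# Note: CUDA renumbers the visible devices starting from 0.
export CUDA_VISIBLE_DEVICES=0,1,2,3
echo "$CUDA_VISIBLE_DEVICES"
```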