GPU identifiers
The GPUs accessible to your system in an interactive session (i.e., assuming you can ssh directly onto the node where the GPUs are installed) can be listed with nvidia-smi. The system assigns an integer number to each device that CUDA can talk to.
Remember that some GPUs are dual devices: a single NVIDIA K80 will show up as two different devices, with two different integer numbers.
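As a quick check, the -L flag of nvidia-smi prints one line per CUDA-visible device together with its integer index:

nvidia-smi -L

On a node with a single K80 board, this listing would show two entries (GPU 0 and GPU 1), each of which can be passed to Dynamo as a separate identifier.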
Setting the identifiers
You need to make sure that the identifiers that you pass to the project are correct. During unfolding, Dynamo will not check that the device numbers are valid!
On the GUI
In the dcp GUI, go to the computing environment GUI. The GPU panel contains an edit field for the GPU identifiers.
Through the command line
The parameter is called gpu_identifier_set (short name: gpus). You can set it in a project using dvput, for instance:
dvput myProject gpus [0,1,2,3]
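To confirm what was stored, you can read the parameter back, assuming your Dynamo installation provides the companion dvget command:

dvget myProject gpus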
Environment variable
In some rare systems, you might be required to manually set the environment variable CUDA_VISIBLE_DEVICES to the device numbers that you already selected inside Dynamo.
export CUDA_VISIBLE_DEVICES="1"
or
export CUDA_VISIBLE_DEVICES="0,2"
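If you prefer not to export the variable for the whole shell session, a standard shell idiom is to set it only for the command that launches the computation; here myProject.exe is just a placeholder for whatever execution script your project unfolds into:

CUDA_VISIBLE_DEVICES="0,2" ./myProject.exe

Note that the device numbers in CUDA_VISIBLE_DEVICES must match the identifiers you passed to the project (in the GUI or through gpu_identifier_set).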