Project parameter systemNoMPIWrapUnfoldingGPU

[[Category:project parameters]]
[[Category:Advanced project parameters]]
  
This parameter can be used when creating projects that will use MPI to talk to remote GPUs. This project parameter is only active during the unfolding of a project with destination <tt>mpi_gpu</tt>.

==General use==
 
In general, it is possible to use several nodes, each one talking to several GPUs. However, this procedure might fail if the queueing system assigns the GPU device numbers in real time. The current version of ''Dynamo'' works on the assumption that the user can set beforehand the GPU identifier numbers for each node.  
 
This approach will work properly when each node has a single device, or when the queueing system does not allow for allocation of GPU nodes to several users. If that is not the case, ''Dynamo'' will not be able to use GPUs on different nodes.
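As an illustration, the device identifiers can be fixed on the project before unfolding. The following is only a sketch in the ''Dynamo'' console: the project name <tt>myProject</tt> is a placeholder, and it assumes that your ''Dynamo'' version exposes the GPU identifiers through the <tt>gpu_identifier_set</tt> project parameter and the usual <tt>dvput</tt> command:

 % fix beforehand which GPU devices will be addressed (here: devices 0 to 3);
 % this assumes that the queueing system exposes stable device numbers
 dvput('myProject','gpu_identifier_set',[0,1,2,3]);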
== Submitting jobs to a single node ==
 
Some ''Dynamo'' users have access to GPU-enabled machines only through submission queues, even for the use of a single server.
In this case, you don't really need an MPI system (as you'll only talk to one node), but you still want ''Dynamo'' to create a submission script with the precise syntax of your queueing system when you unfold the project. This syntax is passed to ''Dynamo'' through the <tt>cluster_header</tt> project parameter.
  
To this end, you just specify the value of the parameter <tt>systemNoMPIWrapUnfoldingGPU</tt>, telling ''Dynamo'' to create a submission script that does not invoke any MPI executable.
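For example, here is a minimal sketch in the ''Dynamo'' console, assuming the usual <tt>dvput</tt> and <tt>dvunfold</tt> commands; the project name <tt>myProject</tt>, the header file <tt>my_header.sh</tt> and the flag value <tt>1</tt> are placeholders and assumptions, not confirmed syntax:

 % point Dynamo to the directives of your queueing system (hypothetical file)
 dvput('myProject','cluster_header','my_header.sh');
 % target remote GPUs ...
 dvput('myProject','destination','mpi_gpu');
 % ... but skip the MPI wrapper, as only one node will be used
 dvput('myProject','systemNoMPIWrapUnfoldingGPU',1);
 % unfolding now writes a submission script without any MPI invocation
 dvunfold myProject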
