GPUs EMBO 2016

CSCS in Lugano is the Swiss National Supercomputing Centre. CSCS kindly provides the EMBO course with 20 accounts. Each account should be able to submit jobs to a single node with one K20 GPU and four CPU cores.


Connecting with CSCS

First you need to connect to the gate node ela using your CSCS credentials from the credentials handout.

ssh -Y stud01@ela.cscs.ch

Then you can connect to the computing machine called daint; you will again be asked to type in your credentials.

stud01@ela2:~> ssh -Y daint
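
If you prefer not to type the two ssh commands every time, the hop through ela can be put into your local ssh configuration. The entry below is only a sketch: the aliases cscs-ela and cscs-daint are made-up names, and stud01 stands for the account on your credentials handout.

# ~/.ssh/config on your local machine (example only; use your own account name)
Host cscs-ela
    HostName ela.cscs.ch
    User stud01
    ForwardX11 yes
    ForwardX11Trusted yes

Host cscs-daint
    HostName daint
    User stud01
    ForwardX11 yes
    ForwardX11Trusted yes
    ProxyCommand ssh -W %h:%p cscs-ela

With this in place, ssh cscs-daint behaves like the two commands above and will still ask for your credentials on both hops.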


Using Dynamo

We are using a slightly older version of Dynamo on the supercomputer GPUs for compatibility reasons.

Transferring projects

In this example, we show how to transfer a project from a local machine to the remote system by Dynamo-tarring the project on the local machine, copying it to the remote machine and untarring it there.

On the local machine
  1. tar your project in Dynamo (in the Dynamo wizard >> Tools >> Create a tarball)
  2. rsync -avr my_project.tar stud##@ela.cscs.ch:~/
  3. also rsync your data to CSCS
  4. untar your Dynamo project on CSCS (see the consolidated sketch after this list)
You will need the Dynamo terminal on CSCS for this:
dynamo &
dvuntar myProject
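
Put together, the transfer looks roughly like the sketch below. The project name my_project and the data directory my_data/ are placeholders, and stud## stands for your account number from the handout.

# on the local machine: send the Dynamo-tarred project and the particle data to CSCS
rsync -av my_project.tar stud##@ela.cscs.ch:~/
rsync -av my_data/ stud##@ela.cscs.ch:~/my_data/

# on CSCS, inside a Dynamo terminal (see the activation step in the next list):
dvuntar my_project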
On CSCS,
  1. type
    salloc --gres=gpu:1
    to get a node with a GPU. It can take some time until the system allocates you a node. You can allocate up to two nodes.
    You can check the GPU on your node with:
    srun nvidia-smi
  2. type
    source ~/bin/dynamoFlorida/dynamo_activate_linux_shipped_MCR.sh
    to activate Dynamo in your shell.
  3. open Dynamo with dynamo &
  4. open your project and re-unfold it (make sure standalone GPU is selected and that your data is in the same relative location as on the local machine)
    Note
    If the graphical interface is too slow, you can use the command line instead:
    open a Dynamo console in your shell with dynamo x
    dvput my_project -destination system_gpu
    dvunfold my_project
  5. run your alignment by typing srun my_project.exe (a consolidated sketch of these steps follows this list)
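
Put together, an interactive session on daint looks roughly like the sketch below. The project name my_project is a placeholder, and the comments mark which lines are typed in the plain shell and which inside the Dynamo console.

salloc --gres=gpu:1                                                  # request a node with one GPU (this may take a while)
srun nvidia-smi                                                      # optional: check the GPU on the allocated node
source ~/bin/dynamoFlorida/dynamo_activate_linux_shipped_MCR.sh      # activate Dynamo in this shell

dynamo x                                                             # command-line route: open a Dynamo console
dvput my_project -destination system_gpu                             # (inside the console) select the standalone GPU destination
dvunfold my_project                                                  # (inside the console) unfold the project into my_project.exe

srun my_project.exe                                                  # back in the shell: run the alignment on the GPU node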

Creating tutorial data sets

We can use the system terminal as an equivalent of the Matlab terminal by using the Dynamo standalone. This is an example of how to use it to create a phantom project like the one we made yesterday.

Open a Dynamo console by typing

dynamo x

in a Linux shell (you will need to source the Dynamo activation script in that shell beforehand).

  • create a tutorial project (the whole session is summarized in a sketch after this list). For this, type inside the Dynamo console:
dtutorial myTest -p ptest -M 128
  • tune the project to work on a GPU
dvput ptest -destination system_gpu
  • unfold the project
dvunfold ptest (inside the Dynamo console)
  • run the project with srun
srun ptest.exe (in a terminal shell, i.e., not inside the Dynamo console)
  • when it finishes, the averages can also be accessed programmatically with the database tool. For instance, to access the last computed average and view it with dview, type:
ddb ptest:a -v
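
As a recap, the whole tutorial run can be summarized as below; myTest and ptest are the names used in the commands above, and the comments indicate whether a line is typed inside the Dynamo console or in the Linux shell.

# inside the Dynamo console (opened with dynamo x):
dtutorial myTest -p ptest -M 128          # create the tutorial (phantom) project ptest
dvput ptest -destination system_gpu       # tune the project to run on a standalone GPU
dvunfold ptest                            # unfold the project into the executable ptest.exe

# in the Linux shell (not inside the Dynamo console):
srun ptest.exe                            # run the project on the allocated GPU node

# in the Dynamo console, after the run finishes:
ddb ptest:a -v                            # access the last computed average and view it with dview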


Note about performance: You will notice that the project pauses at several points during execution. These are the points where the project accesses the MCR libraries. This overhead is constant, and for a real project with thousands of particles it is a very small fraction of the computing time.

We are using an old Dynamo version for this course; modern Dynamo versions do not access the MCR library several times.