GPUs Basel 2019

Revision as of 12:51, 27 August 2019 by Stefano Scaramuzza (talk | contribs)

Here we describe how to use the GPUs provided for the Basel Workshop 2019. We go through each step using a simple tutorial dataset and project as an example; the same steps apply to your own project.

The GPUs we use are located on sciCORE, the high-performance computing cluster of the University of Basel, which uses the SLURM queuing system. A queuing system coordinates access to the GPUs and is needed when many users share a limited number of GPUs. You should have received the credentials needed to log in to the cluster at the beginning of the workshop.

The main idea is to create an alignment project on your local machine, move it to the sciCORE cluster, and run it there using a pre-installed standalone version of Dynamo. The following steps describe how this is done.

On your local Matlab session with Dynamo loaded:

  • Create a tutorial project with Dynamo:
dtutorial myParticles -p myProject -M 128

We now have a tutorial dataset with 128 particles in the directory myParticles and a tutorial alignment project myProject.

  • Open the alignment project window:
dcp myProject

and under computing environment select GPU (standalone).

  • Check and unfold the project.
  • Before moving the data to sciCORE we have to compress the project. In the project window, go to Tools and create a tarball. In this example it does not matter whether you choose to skip or include the results.
  • Close the alignment project window.

On your local Linux terminal:

  • Open a new local Linux terminal and navigate to the directory where you just created the tutorial dataset and project. Copy the project data (particles) to sciCORE:
rsync -avuP myParticles
  • Copy the previously created tar file of the project to sciCORE:
rsync -avuP myProject.tar
  • Login to your sciCORE account:
ssh -Y

If asked to continue type "yes". Use the provided password.
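The destination of the rsync and ssh commands above is cut off in this page. As a sketch, the full form of these commands looks as follows; USERNAME and HOST are placeholders, not the real sciCORE address — use the credentials handed out at the workshop:

```shell
# Hypothetical sketch: USERNAME and HOST below are placeholders for the
# credentials and login node address provided at the workshop.
USERNAME="myuser"
HOST="login.scicore.example"
DEST="${USERNAME}@${HOST}:dynamo_projects/"

# -a archive mode, -v verbose, -u skip files that are newer on the receiver,
# -P show progress and keep partially transferred files
echo "rsync -avuP myParticles  $DEST"
echo "rsync -avuP myProject.tar $DEST"
echo "ssh -Y ${USERNAME}@${HOST}"
```

The commands are only printed here for illustration; on your machine you would run them directly with the real host name.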

While logged in to your sciCORE account:

  • Activate Dynamo:
source /scicore/home/s-gpu-course/GROUP/
  • Go to the location where you copied the data:
cd dynamo_projects 
  • Untar the Dynamo project:
dvuntar myProject.tar 
  • Create a blank SLURM submission script (a text file) with the nano text editor.
  • Copy (and adapt) the following lines into the newly created script. Depending on your project you might have to adapt the project name and the requested time (time=hh:mm:ss) in the script. The limit for the time parameter is 6 hours:
#!/bin/bash -l
#SBATCH --job-name=dTest
#SBATCH --qos=emgpu
#SBATCH --time=00:30:00
#SBATCH --mem=16G
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --partition=titanx
#SBATCH --gres=gpu:1
#SBATCH --reservation=dynamo
module load CUDA/7.5.18
source /scicore/home/s-gpu-course/GROUP/
cd $HOME/dynamo_projects
echo "dvput myProject -gpu_identifier_set $CUDA_VISIBLE_DEVICES" >
echo "dvunfold myProject" >>
chmod u=rwx ./myProject.exe

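The last three lines of the script generate a small runner file that Dynamo executes on the GPU node. The redirect targets of the two echo lines are cut off in this page; judging from the chmod line, they presumably point to myProject.exe. The step can be sketched locally (no SLURM or Dynamo needed) to see what the generated file contains:

```shell
# Local sketch of the runner-generation step. The echo targets are assumed
# to be myProject.exe, based on the chmod line in the script above.
cd "$(mktemp -d)"
CUDA_VISIBLE_DEVICES=0   # on the real node, SLURM sets this to the granted GPU
echo "dvput myProject -gpu_identifier_set $CUDA_VISIBLE_DEVICES" >  myProject.exe
echo "dvunfold myProject"                                        >> myProject.exe
chmod u=rwx ./myProject.exe
cat myProject.exe
```

This way the project is told at run time which GPU SLURM actually granted, rather than hard-coding a GPU id into the project.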
  • Save and close the nano text editor by pressing ctrl+x, then Y to confirm saving the changes, and finally enter.
  • You can now run your alignment project by submitting the previously created script to SLURM.
  • With the following commands you can check the overall status of the submitted jobs:
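The submission command itself is missing from this page; on SLURM a script is submitted with sbatch followed by the script name. A sketch, where submit_dynamo.sh is a hypothetical stand-in for whatever you named your script:

```shell
# Hypothetical sketch -- submit_dynamo.sh stands in for the name you gave
# the submission script. Check the shebang, then hand the file to SLURM.
script=submit_dynamo.sh
if head -n 1 "$script" 2>/dev/null | grep -q '^#!/bin/bash'; then
    sbatch "$script"      # prints: Submitted batch job <jobid>
else
    echo "no SLURM script found at $script"
fi
```

On success, sbatch reports the job ID, which you will need for the monitoring and cancellation commands below.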

Check your status in the queue:

squeue -u USERNAME

See all users in the queue:

squeue -q emgpu

To cancel the job type scancel followed by the job ID that was shown by the previous squeue command:

scancel my_job_id

To see the progress type:

ls -rtl

The last item in the list is the latest output file. You can have a live view of it by typing:

tail -f slurm-45994509.out

Exit the live view by typing ctrl+c.
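Instead of reading the ls -rtl output by eye, the newest SLURM log can also be picked automatically. The slurm-<jobid>.out name pattern is SLURM's default for job output files:

```shell
# Pick the newest SLURM log (slurm-<jobid>.out, SLURM's default name pattern)
# and report it; you would then follow it live with tail -f.
latest=$(ls -1rt slurm-*.out 2>/dev/null | tail -n 1)
if [ -n "$latest" ]; then
    echo "latest output file: $latest"
    # follow it live with: tail -f "$latest"   (exit with ctrl+c)
else
    echo "no slurm-*.out files here yet"
fi
```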

To check the last average, load the standalone Dynamo environment by typing dynamo into the terminal and use the usual commands, e.g.:

ddb myProject:a -v