GPUs Basel 2018

Here we describe how to use the GPUs provided for the Basel Workshop 2018. We go through each step using a simple tutorial dataset/project as an example; you can apply the same steps to your own dataset/project of choice.

The GPUs we use are located on sciCORE (https://scicore.unibas.ch), the high performance computing cluster of the University of Basel, which uses the SLURM queuing system. A queuing system coordinates access to the GPUs and is needed when many users share only a few GPUs.
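SLURM itself can show you the GPU partitions and the jobs waiting on them. A minimal sketch, assuming the partition names used later in this guide (k80 and titanx):

sinfo -p k80,titanx -o "%P %a %l %G"   # partition, availability, time limit, generic resources (GPUs)
squeue -p k80,titanx                   # jobs currently running or waiting on these partitions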

We will create an alignment project locally, move it to sciCORE and run it there using a pre-installed Dynamo standalone version.


On your local MATLAB session with Dynamo loaded:

1) Create the tutorial project: dtutorial myParticles -p myProject -M 128

2) Open the alignment project window with dcp myProject and select gpu as the computing environment; the rest remains at the defaults.

3) Check and unfold the project.

4) Before moving the data to sciCORE we have to compress the project: in the dcp GUI go to tools and then create tarball.

On your local Linux terminal:

7) Copy the project data (particles) to sciCORE with the following command: rsync -avuP myParticles USERNAME@login.bc2.unibas.ch:/scicore/home/.../dynamo_projects

8) Copy the tar of the project to sciCORE: rsync -avuP myProject.tar USERNAME@login.bc2.unibas.ch:/scicore/home/.../dynamo_projects

9) Log in to sciCORE: ssh -Y USERNAME@login.scicore.unibas.ch
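Optionally, you can check that the transfer arrived before logging in; a minimal sketch, reusing the placeholder path from steps 7 and 8 (adapt it to your home directory):

ssh USERNAME@login.bc2.unibas.ch 'ls -lh /scicore/home/.../dynamo_projects'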

On sciCORE:

13) Activate Dynamo: source PATH/dynamo_activate_linux_shipped_MCR.sh

14) Untar the Dynamo project: dynamo dvuntar myProject.tar

15) Create the SLURM submission script "submit_job.sh". Adapt the expected time (time=???) and the paths. The script below uses the k80 partition:

#!/bin/bash -l
#
#SBATCH --job-name=dTest
#SBATCH --qos=30min
#SBATCH --time=00:60:00
#SBATCH --mem=16G
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --partition=k80
#SBATCH --gres=gpu:1
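# Load CUDA, activate the Dynamo standalone (adapt PATH) and change into the projects folder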
module load CUDA/7.5.18
source PATH/dynamo_activate_linux_shipped_MCR.sh
cd PATH/dynamo_projects
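# Write a short Dynamo command file: point the project at the GPU(s) SLURM granted ($CUDA_VISIBLE_DEVICES) and unfold it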
echo "dvput myProject -gpu_identifier_set $CUDA_VISIBLE_DEVICES" > dcommands.sh
echo "dvunfold myProject" >> dcommands.sh
dynamo dcommands.sh
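# Unfolding regenerates the executable myProject.m; make it runnable and start the alignment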
chmod u=rxw ./myProject.m
./myProject.m

For the titanX GPUs, use the emgpu QOS and the titanx partition instead:

#!/bin/bash -l
#
#SBATCH --job-name=dTest
#SBATCH --qos=emgpu
#SBATCH --time=00:60:00
#SBATCH --mem=16G
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --partition=titanx
#SBATCH --gres=gpu:1
module load CUDA/7.5.18
source PATH/dynamo_activate_linux_shipped_MCR.sh
cd PATH/dynamo_projects
echo "dvput myProject -gpu_identifier_set $CUDA_VISIBLE_DEVICES" > dcommands.sh
echo "dvunfold myProject" >> dcommands.sh
dynamo dcommands.sh
chmod u=rxw ./myProject.m
./myProject.m


16) Launch the job on SLURM with: sbatch submit_job.sh

17) Check the queue: squeue -u USERNAME

To see all users in the queue: squeue -q 30min (for titanX: squeue -q emgpu)


18) Cancel a job: scancel ????? (the job ID is given by the squeue command)

19) Check the latest output:

ls -rtl
tail -f slurm-45994509.out
less slurm-45994509.out
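Steps 16, 17 and 19 can also be chained into a small convenience snippet. This is only a sketch, not part of the official instructions; the --parsable flag makes sbatch print just the job ID:

jobid=$(sbatch --parsable submit_job.sh)   # submit and capture the job ID
squeue -j "$jobid"                         # confirm the job is queued or running
tail -f "slurm-${jobid}.out"               # follow the Dynamo output (Ctrl-C to stop following)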

20) Check the last average: dynamo ddb myProject:a -v