GPUs Basel 2018


Here we describe how to use the GPUs provided for the Basel Workshop 2018. We go through each step using a simple tutorial dataset/project as an example. You can use the same steps on your dataset/project of choice.

The GPUs we use are located on sciCORE (https://scicore.unibas.ch), the high performance computing cluster of the University of Basel, which uses the SLURM queuing system. A queuing system coordinates access to the GPUs and is needed when many users share just a few GPUs.
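
If you have not used SLURM before, the handful of commands below covers the whole workflow of this guide; only sinfo is not used again later, but it is a convenient way to see which GPU partitions (for example k80 and titanx) are available. This is a minimal sketch; the exact output columns and partition names depend on the current sciCORE configuration.

 sinfo -o "%P %G %a"       # list partitions, their GPU resources (gres) and availability
 sbatch submit_job.sh      # submit a job script to the queue
 squeue -u USERNAME        # show your jobs in the queue
 scancel JOBID             # cancel a job by its numeric id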

We will create an alignment project locally, move it to sciCORE and run it there using a pre-installed Dynamo standalone version.


On your local Matlab session with Dynamo loaded:

1) Create the tutorial project: dtutorial myParticles -p myProject -M 128
2) Open the alignment project window with dcp myProject and select gpu as the computing environment. The rest stays at the default values.
3) Check and unfold the project (a command-line sketch of steps 1 to 3 follows this list).
4) Before moving the data to sciCORE we have to compress the project: in the dcp GUI go to tools and then create tarball.
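
For reference, steps 1 to 3 can also be run from the Matlab command line. This is only a sketch: selecting gpu as the computing environment and creating the tarball are still done in the dcp GUI as described above, and dvunfold is the same command that the SLURM script further below runs on sciCORE.

 % create the tutorial project: 128 synthetic particles in the folder myParticles
 dtutorial myParticles -p myProject -M 128
 % open the project in the dcp GUI (computing environment, check, tarball)
 dcp myProject
 % unfold the project into an executable script
 dvunfold myProject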

On your local Linux terminal:

5) Copy the project data (the particles) to sciCORE with the following command:
 rsync -avuP myParticles USERNAME@login.bc2.unibas.ch:/scicore/home/.../dynamo_projects
6) Copy the tarball of the project to sciCORE (a combined sketch of steps 5 and 6 follows this list):
 rsync -avuP myProject.tar USERNAME@login.bc2.unibas.ch:/scicore/home/.../dynamo_projects
7) Log in to sciCORE:
 ssh -Y USERNAME@login.scicore.unibas.ch
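
Since both transfers go to the same target directory, you can define it once and reuse it. The path below is a hypothetical example under your sciCORE home; adapt it to the directory you actually use.

 # hypothetical target path; replace GROUP, USERNAME and the directory name with your own
 REMOTE="USERNAME@login.bc2.unibas.ch:/scicore/home/GROUP/USERNAME/dynamo_projects"
 rsync -avuP myParticles "$REMOTE"
 rsync -avuP myProject.tar "$REMOTE"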

On sciCORE:

8) Activate Dynamo:
 source PATH/dynamo_activate_linux_shipped_MCR.sh
9) Untar the Dynamo project:
 dynamo dvuntar myProject.tar
10) Create the SLURM submission script "submit_job.sh":

 #!/bin/bash -l
 #SBATCH --job-name=dTest
 #SBATCH --qos=30min               # for titanX: emgpu
 #SBATCH --time=00:60:00           # adapt time
 #SBATCH --mem=16G
 #SBATCH --nodes=1
 #SBATCH --ntasks-per-node=1
 #SBATCH --cpus-per-task=1
 #SBATCH --partition=k80           # for titanX: titanx
 #SBATCH --gres=gpu:1
 
 module load CUDA/7.5.18
 source PATH/dynamo_activate_linux_shipped_MCR.sh
 cd PATH/dynamo_projects
 echo "dvput myProject -gpu_identifier_set $CUDA_VISIBLE_DEVICES" > dcommands.sh
 echo "dvunfold myProject" >> dcommands.sh
 dynamo dcommands.sh
 chmod u=rxw ./myProject.m
 ./myProject.m
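
For orientation: the two echo lines above write a small command file that the Dynamo standalone then executes. Because $CUDA_VISIBLE_DEVICES is inside double quotes, it is expanded when the job runs, so dcommands.sh ends up containing something like the following (assuming SLURM assigned GPU 0 to the job):

 dvput myProject -gpu_identifier_set 0
 dvunfold myProject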

11) Launch the job on SLURM with:
 sbatch submit_job.sh
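
sbatch prints the numeric id of the submitted job; the same id appears in the name of the output file (slurm-<jobid>.out) that is inspected in step 14. The id below is just the example used later in this guide:

 $ sbatch submit_job.sh
 Submitted batch job 45994509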

12) Check the queue:
 squeue -u USERNAME

See all users in the queue:
 squeue -q 30min          (for titanX: squeue -q emgpu)


13) Cancel a job:
 scancel JOBID          (job id given by the squeue command)

14) Check the latest output:
 ls -rtl
 tail -f slurm-45994509.out
 less slurm-45994509.out

15) Check the last average: start the Dynamo standalone console with
 dynamo
and inside it run
 ddb myProject:a -v

16) Request an interactive session for testing:
 srun --nodes=1 --cpus-per-task=1 --mem=16G --gres=gpu:4 --partition=k80 --pty bash