Here we describe how to use the GPUs provided for the Basel Workshop 2018. We go through each step using a simple tutorial dataset/project as an example; you can apply the same steps to the dataset/project of your choice.
The GPUs we use are located on sciCORE (https://scicore.unibas.ch), the high-performance computing cluster of the University of Basel, which uses the SLURM queuing system. A queuing system coordinates access to the GPUs and is needed when many users share just a few GPUs.
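Before submitting anything, it can help to see which GPU partitions exist and how busy they are. Below is a minimal sketch using standard SLURM commands; the partition names k80 and titanx are taken from the submission script further down and may differ if sciCORE renames them:

 sinfo -p k80,titanx -o "%P %a %D %G"    # partition, availability, node count and GRES (i.e. the GPUs)
 squeue -p k80,titanx                    # jobs currently queued or running on those partitions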
We will create an alignment project locally, move it to sciCORE and run it there using a pre-installed Dynamo standalone version.
In your local MATLAB session, with Dynamo loaded:
1) Create the tutorial project: dtutorial myParticles -p myProject -M 128
2) Open the alignment project window with dcp myProject and, under computing environment, select gpu. Leave the rest at the defaults.
3) Check and Unfold the project
4) Before moving the data to sciCORE we have to compress the project: in the dcp GUI, go to Tools and then Create tarball.
On your local Linux terminal:

5) Copy the project data (the particles) to sciCORE with the following command:
 rsync -avuP myParticles USERNAME@login.bc2.unibas.ch:/scicore/home/.../dynamo_projects
6) Copy the tarball of the project to sciCORE:
 rsync -avuP myProject.tar USERNAME@login.bc2.unibas.ch:/scicore/home/.../dynamo_projects
7) Log in to sciCORE:
 ssh -Y USERNAME@login.scicore.unibas.ch
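If you prefer, the two transfers can be wrapped in a small shell script so that the username and the target directory (elided above as ...) are defined in one place. This is only a sketch; USERNAME and SCICORE_PROJECT_DIR are placeholders you have to fill in yourself:

 #!/bin/bash
 # Placeholders -- set these to your own account and project directory on sciCORE.
 USERNAME=your_scicore_username
 SCICORE_PROJECT_DIR=/scicore/home/.../dynamo_projects
 
 # rsync flags: -a archive, -v verbose, -u skip files that are newer on the receiver, -P progress + resumable transfers
 rsync -avuP myParticles   "$USERNAME@login.bc2.unibas.ch:$SCICORE_PROJECT_DIR"
 rsync -avuP myProject.tar "$USERNAME@login.bc2.unibas.ch:$SCICORE_PROJECT_DIR"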
On sciCORE:

8) Activate Dynamo (replace PATH with the location of the Dynamo standalone installation on sciCORE):
 source PATH/dynamo_activate_linux_shipped_MCR.sh
9) Untar the Dynamo project:
 dynamo dvuntar myProject.tar
10) Create the SLURM submission script "submit_job.sh":
 #!/bin/bash -l
 #SBATCH --job-name=dTest
 #SBATCH --qos=30min            # for titanX: emgpu
 #SBATCH --time=00:60:00        # adapt the time to your job
 #SBATCH --mem=16G
 #SBATCH --nodes=1
 #SBATCH --ntasks-per-node=1
 #SBATCH --cpus-per-task=1
 #SBATCH --partition=k80        # for titanX: titanx
 #SBATCH --gres=gpu:1
 
 module load CUDA/7.5.18
 source PATH/dynamo_activate_linux_shipped_MCR.sh
 cd PATH/dynamo_projects
 echo "dvput myProject -gpu_identifier_set $CUDA_VISIBLE_DEVICES" > dcommands.sh
 echo "dvunfold myProject" >> dcommands.sh
 dynamo dcommands.sh
 chmod u=rwx ./myProject.m
 ./myProject.m
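If a job does not find a GPU, it can help to confirm what SLURM actually allocated. The lines below are an optional debugging sketch you could add to submit_job.sh right after the module load line; they assume nvidia-smi is available on the compute node:

 # Optional: report the node and the GPU(s) SLURM assigned to this job.
 echo "Running on node: $(hostname)"
 echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
 nvidia-smi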
11) Launch the job on SLURM with:
 sbatch submit_job.sh
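sbatch prints the job ID when it accepts the job; with --parsable it prints only the number, which makes it easy to reuse in follow-up commands. A small sketch (standard SLURM behaviour, nothing workshop-specific):

 JOBID=$(sbatch --parsable submit_job.sh)   # --parsable prints just the job ID
 squeue -j "$JOBID"                         # show only this job in the queue
 tail -f "slurm-${JOBID}.out"               # follow its output (default file name slurm-<jobid>.out)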
12) Check the queue:
 squeue -u USERNAME
To see all jobs of all users in a given QOS:
 squeue -q 30min        (for titanX: squeue -q emgpu)
13) Cancel a job:
 scancel JOBID        (job ID as shown by the squeue command)
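scancel can also select jobs by user instead of by job ID, which is convenient if several test jobs are queued (standard SLURM behaviour):

 scancel -u USERNAME        # cancel all of your own queued and running jobs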
14) Check the latest output (the number in slurm-<jobid>.out is the job ID of your run):
 ls -rtl
 tail -f slurm-45994509.out
 less slurm-45994509.out
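Because the output file name contains the job ID, a small shell one-liner can follow the most recent output file without typing the ID by hand:

 tail -f "$(ls -t slurm-*.out | head -n 1)"   # follow the newest SLURM output file in this directory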
15) Check the last average: start the Dynamo console and open the project database:
 dynamo
 ddb myProject:a -v
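If you prefer to inspect the result on your own machine, one option is to copy the whole project folder back with rsync and open it in your local Dynamo session. This is just a sketch, run from your local terminal, with the same placeholders as above:

 # Run on your LOCAL machine: pull the finished project (including the computed averages) back from sciCORE.
 rsync -avuP USERNAME@login.bc2.unibas.ch:/scicore/home/.../dynamo_projects/myProject ./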