Computers Basel 2017
Latest revision as of 11:23, 25 August 2017

WLAN connection

The credentials are:

SSID     : unibas-event
User     : eventbzpz
Password : DynamoWS17!

Biozentrum

Use the credentials from your credential handout to log into the Linux workstations.

Opening a terminal

Right-click (auxiliary click) on the Ubuntu desktop and choose the Open terminal option.

Checking the Dynamo installation

Dynamo should already be installed in your local directory ~/dynamo. Check it with:

ls ~/dynamo

You should see something like this:

C
MCRforMac
MacOS
README_dynamo_installation.txt
cuda
ddemos
doc
dynamo_activate.m
dynamo_activate_linux.sh
dynamo_activate_linux_shipped_MCR.sh
dynamo_activate_mac.sh
dynamo_activate_mac.sh~
dynamo_activate_mac_shipped_MCR.sh
dynamo_activate_windows.bat
dynamo_compile_mpi.sh
dynamo_setup_cluster.sh
dynamo_setup_linux.sh
dynamo_setup_mac.sh
examples
licenses
matlab
mex
mpi

Installing Dynamo

If Dynamo is not installed, you can install it locally: open a Linux terminal (not a Matlab terminal) and run:

/clab-share/installDynamo.sh

It should take around a minute.

Opening Matlab

We will be using the Matlab release R2017a. To open it, type in the shell:

/usr/local/matlab/R2017a/bin/matlab &

Opening Dynamo in Matlab

After opening a Matlab session, you'll need to activate Dynamo in that session. Dynamo should be installed locally in ~/dynamo. In order to activate your local Dynamo version, please type in the Matlab shell:

run ~/dynamo/dynamo_activate.m

As a fallback, we have a centrally installed version in /clab-share/dynamo.

Opening Dynamo as standalone

If you want to use Dynamo from a Linux terminal:

source ~/dynamo/dynamo_activate_linux_shipped_MCR.sh

To follow the workshop, we recommend using the Matlab version.

Activating UCSF Chimera

If you want to use UCSF Chimera during a Dynamo session inside Matlab, you need to tell Dynamo where Chimera is:

run  /clab-share/activateChimera.m


CSCS: Lugano

CSCS Lugano is the National Supercomputing Centre of Switzerland. Each account should be able to submit jobs to a single node with a K20 GPU and four cores.


Connecting

First you need to connect to the gate node ela using your CSCS credentials from the credentials handout.

ssh -Y course01@ela.cscs.ch

Then you can connect to the computing machine called daint; you will again be asked to type in your credentials.

course01@ela2:~> ssh -Y daint

Using Dynamo

We are using a slightly older version of Dynamo on the supercomputer GPUs, for compatibility reasons.

On the local machine
  1. Create alignment project >> computational parameters >> system_gpu
  2. Dynamo Wizard >> Tools >> Create tarball
    alternatively you can use the command line
    dvtar myProject
  3. scp <project.tar> course##@ela.cscs.ch:~/
  4. ssh -Y course##@ela.cscs.ch and then mkdir data/
  5. scp <your data folder> course##@ela.cscs.ch:~/data/
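The local-machine transfer steps above can be collected into one small script. This is only a sketch: the course number (course01), tarball name (myProject.tar), and data folder are placeholder assumptions for your own values, and the remote commands are echoed rather than executed until you clear the run variable.

```shell
# Sketch of the upload steps from the local machine.
# COURSE, PROJECT_TAR and DATA_DIR are placeholders -- substitute your own.
COURSE=course01
PROJECT_TAR=myProject.tar
DATA_DIR=my_data

run=echo   # dry run: print the commands; set run= to actually execute them
$run scp "$PROJECT_TAR" "$COURSE@ela.cscs.ch:~/"        # step 3: copy the project tarball
$run ssh -Y "$COURSE@ela.cscs.ch" mkdir -p data/        # step 4: create the remote data folder
$run scp -r "$DATA_DIR" "$COURSE@ela.cscs.ch:~/data/"   # step 5: copy your data
```
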
On CSCS
  1. source /users/course42/bin/dynamoFlorida/dynamo_activate_linux_shipped_MCR.sh
  2. dynamo x
  3. Untar your Dynamo project
  4. dvuntar <project.tar>
  5. dcp <project>
  6. Check >> Unfold
    Note
    if the graphical interface is too slow, you can use the command line instead:
    open a Dynamo console in your shell with dynamo x
    dvput my_project -destination system_gpu
    dvunfold my_project
  7. Make sure everything is correctly located.
    dvcheck myProject
  8. (on ela):
    ssh -Y daint
  9. source /users/course42/bin/dynamoFlorida/dynamo_activate_linux_shipped_MCR.sh
  10. salloc -C gpu
  11. srun <project.exe>
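Steps 9 to 11 on daint can be sketched the same way. The project name is a placeholder, and the commands are echoed rather than executed until you clear the run variable, since salloc and srun only make sense on the cluster.

```shell
# Sketch of the GPU run on daint. PROJECT is a placeholder name.
PROJECT=myProject
ACTIVATE=/users/course42/bin/dynamoFlorida/dynamo_activate_linux_shipped_MCR.sh

run=echo   # dry run: print the commands; set run= on daint to execute them
$run source "$ACTIVATE"    # activate Dynamo in this shell
$run salloc -C gpu         # request a GPU node (allocation can take a while)
$run srun "$PROJECT.exe"   # run the unfolded project on the allocated node
```
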

Dynamo as standalone

We can use the system terminal as an equivalent of the Matlab terminal using the Dynamo standalone. Here is an example of how to use it to create a phantom project like the one we created yesterday.

dynamo x

in a Linux shell (you'll need to source the Dynamo activation script in that shell beforehand).

  • create a tutorial project. For this, type inside the Dynamo console:
dtutorial myTest -p ptest -M 128
  • tune the project to work in a GPU
dvput ptest -destination system_gpu
  • unfold the project
dvunfold ptest inside the Dynamo console
  • run the project with srun
srun ptest.exe in a terminal shell, i.e., not inside the Dynamo console
  • when it finishes, the averages can be also accessed programmatically with the database tool. For instance, to access the last computed average and view it with dview, type:
ddb ptest:a -v
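The console commands in the walkthrough above can be kept in a small text file for reference or for pasting into the Dynamo console line by line. ptest and myTest are the tutorial names used above; the file name is our own choice.

```shell
# Collect the Dynamo-console steps of the phantom-project walkthrough
# into a file that can be pasted into the console line by line.
cat > phantom_project_steps.txt <<'EOF'
dtutorial myTest -p ptest -M 128
dvput ptest -destination system_gpu
dvunfold ptest
EOF
# afterwards, in a plain shell (not the Dynamo console):
#   srun ptest.exe
# and back in the Dynamo console, view the final average:
#   ddb ptest:a -v
```
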

Note about performance: You will notice that the project stops at several points during execution. Those are the points where the project accesses the MCR libraries. This overhead is constant, and is a very small fraction of the computing time for a real project with thousands of particles.

We are using an old Dynamo version. Modern Dynamo versions don't access the MCR library several times.

Strubi Oxford

We are also using some accounts from the GPU cluster in the Structural Biology department at the University of Oxford.