Averaging of subtomograms

Averaging of subtomograms means applying an available set of alignment parameters onto a set of data particles and adding them together. Because of the missing wedge, a compensation in Fourier space is applied during this step.

From an operational point of view, the particles are stored in a data folder and indexed by a table.

Through the command line

To compute averages of particles manually (given the available metadata), use the daverage command, or its GUI version daverage_GUI. The most important flags of daverage are the following (a usage sketch follows the list):

  • fcompensate : if set to 1, a Fourier compensation step will be carried out.
  • fmin : minimum number of particles that need to contribute to a given Fourier component.
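
As an orientation, this is a minimal sketch of such a call. The names of the data folder, table file and output file are placeholders, and accessing the resulting volume through the field average of the output structure is an assumption on the usual daverage output:

oa = daverage('particles','t','alignedTable.tbl','fcompensate',1,'fmin',3); % placeholder data folder and table
dwrite(oa.average,'average.em'); % write the averaged volume to disk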

Inside an alignment project

At the end of each iteration, the refined table generated by Dynamo is used to produce an average, which will be used as the starting template for the next iteration.

There are different project parameters that can be used to modify the default behaviour of the daverage command used inside an alignment project, for instance (see the sketch after the list):

  • fCompensationSmoothingMask
  • implicitRotationMask
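
Project parameters of this kind can be edited through dvput (or through the dcp GUI). As a hedged sketch, assuming the functional form of dvput and using a placeholder project name and value:

dvput('myProject','fCompensationSmoothingMask',1); % 'myProject' and the value are placeholders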

Reproducing averaging steps of a project from the command line

It is possible to apply to a set of particles exactly the same averaging flags that were used during the runtime of a project, using the command dpkproject.pipeline.genericInput2Average.

o = dpkproject.pipeline.genericInput2Average(vpr,myTable,'ite',ite); 

Here, o would be the output of the daverage command as operated with the flags passed to it by the project vpr. These flags can be obtained explicitly with a second left-hand side output:

[o,flags] = dpkproject.pipeline.genericInput2Average(vpr,myTable,'ite',ite);
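
If the flags are delivered as a cell array of parameter/value pairs (an assumption; they might equally come as a structure), they could in principle be forwarded to a manual daverage call:

oManual = daverage('particles','t',myTable,flags{:}); % 'particles' is a placeholder data folder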

Parallelism

It is possible to select the number of cores that will be used during the averaging. The project parameter is called matlab_workers_average (short form mwa), and can also be modified in the dcp GUI (in the section on environment variables).

This option is available for computations on a single server with several cores. It will not work with the current MPI implementation.
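
As with the other project parameters above, this value could be changed from the command line; a minimal sketch, again assuming the functional form of dvput and a placeholder project name:

dvput('myProject','matlab_workers_average',8); % placeholder project name and core count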