The main program can be run by typing MCTDHX (in wrappers or in runscripts the MCTDHX alias might not work, and one needs to use MCTDHX_intel, MCTDHX_pgf, or MCTDHX_gcc instead). For computationally intensive tasks, however, it is much faster to use the program's shared-memory and distributed-memory parallelization and run it with one of the various MPI launchers (such as mpirun, mpiexec, mpiexec.hydra, aprun, ...). Examples for running MCTDHX in parallel can be found in the example PBS scripts directory PBS_Scripts. Since configuring such a hybridly parallel job is complicated, it is easier to use the MonsterScript.sh script, which automates whole series of hybridly parallel computations; it can be found in the ./Computation_Scripts directory. If manual configuration of a task is needed or desired, adapt one of the PBS runscript examples. Depending on the hardware architecture, the most efficient way is usually to run MCTDH-X with at least as many MPI processes as there are orbitals. The OpenMP shared-memory parallelization then takes care of efficiently performing the computational work within each MPI process. The program's structure is entirely modular, i.e., all subroutines are collected in Fortran modules. To inspect the program structure, please consult the HTML documentation by opening the index.html file in the documentation/html subdirectory. There, all important variables are explained, and call and caller graphs are given for each routine.
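As an illustration, a manually configured hybrid MPI/OpenMP job could look along these lines. This is only a sketch of a PBS runscript, not one of the shipped examples: the job name, the node/core layout (one node with 32 cores), the orbital count (4, hence 4 MPI processes with 8 OpenMP threads each), and the use of the MCTDHX_intel binary are all assumptions to be adapted to your cluster and to the examples in PBS_Scripts.

```shell
#!/bin/bash
#PBS -N mctdhx_job           # job name (assumed)
#PBS -l nodes=1:ppn=32       # one node, 32 cores (assumed hardware)
#PBS -l walltime=24:00:00    # requested runtime (assumed)

# Run from the directory the job was submitted from.
cd "$PBS_O_WORKDIR"

# One MPI process per orbital (here: 4 orbitals, an assumption),
# with OpenMP threads filling the cores within each process.
export OMP_NUM_THREADS=8

mpirun -np 4 ./MCTDHX_intel
```

The key point the sketch illustrates is the split of the available cores: the number of MPI processes matches the number of orbitals, and OMP_NUM_THREADS distributes the remaining cores of each process to the OpenMP shared-memory layer.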
Back to http://ultracold.org