Amber 16 on Fujitsu Computers



Installation of Amber 16 on the Fujitsu Linux clusters

Decompress and untar distribution:

tar xvfj AmberTools17.tar.bz2

tar xvfj Amber16.tar.bz2

If tar does not support the "j" option on a particular platform, use:

bzip2 -d AmberTools17.tar.bz2

tar xvf AmberTools17.tar

bzip2 -d Amber16.tar.bz2

tar xvf Amber16.tar

Both distributions will unpack into the same directory tree, with amber16 at its root.

Set up the AMBERHOME environment variable to point to where the Amber tree resides on your machine. For example, using csh, tcsh, etc.:

setenv AMBERHOME /path-to/amber16

or using sh/bash:

export AMBERHOME=/path-to/amber16
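
To confirm the variable points at the unpacked tree, a quick sanity check:

echo $AMBERHOME
ls $AMBERHOME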

Building serial version

To build Amber using Intel compilers:

cd $AMBERHOME

setenv MKL_HOME $MKLROOT  # or export MKL_HOME=$MKLROOT

./configure intel

or for GNU compilers:

./configure gnu

To show other options use:

./configure --full-help
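
Before running configure, it may also be worth checking that the intended compilers are actually on your PATH (the names below are the usual GNU and Intel ones; adjust for your site):

gcc --version      # GNU
gfortran --version # GNU
icc --version      # Intel
ifort --version    # Intel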

The configure script will check online for available updates and bugfixes and ask whether you want to apply them:

$ ./configure gnu
Checking for updates...
Checking for available patches online. This may take a few seconds...

Available AmberTools 17 patches:

update.1 (modifies antechamber, package)
Description:
This addresses a number of issues:

.....

Available Amber 16 patches:

update.1 (modifies pmemd, pmemd.cuda, pmemd.cuda.MPI)
update.2 (modifies pmemd.cuda.MPI)
update.3 (modifies pmemd)
update.4.gz (modifies No information available)
update.5 (modifies pmemd.cuda)
update.6 (modifies pmemd.cuda)
update.7 (modifies pmemd.cuda)
update.8 (modifies pmemd)
update.9 (modifies pmemd)
update.10 (modifies pmemd, pmemd.cuda)

There are patches available. Do you want to apply them now? [y/N] (Recommended Y)
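
If you prefer to handle patches separately from configure, Amber also ships the update_amber script in $AMBERHOME; checking for and applying updates would look something like this (see ./update_amber --help for the full option list):

cd $AMBERHOME
./update_amber --check-updates
./update_amber --update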

The configure step will create two resource files in the AMBERHOME directory to set up environment variables: amber.sh and amber.csh.

source /home/myname/amber16/amber.sh # for bash, zsh, ksh, etc.
source /home/myname/amber16/amber.csh # for csh, tcsh
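
You may want to add the appropriate source line to your shell startup file so the Amber environment is set at every login, e.g. for bash (adjust the path to your installation):

echo 'source /home/myname/amber16/amber.sh' >> ~/.bashrc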

Now you can start to compile the codes:

make install
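
Since the build produces a lot of output, it can help to capture it in a log file so that any failure is easier to trace (plain shell redirection, not an Amber-specific option):

make install 2>&1 | tee build.log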

If this step fails, read the error messages carefully to identify the problem. A successful installation can be followed by

make test

which will run tests and will report successes or failures.

Where "possible FAILURE" messages are found, go to the indicated directory under $AMBERHOME/AmberTools/test or $AMBERHOME/test, and look at the "*.dif" files.

Differences should involve round-off in the final digit printed, or occasional messages that differ from machine to machine.

As with compilation, if you have trouble with individual tests, you may wish to comment out certain lines in the Makefiles (i.e., $AMBERHOME/AmberTools/test/Makefile or $AMBERHOME/test/Makefile), and/or go directly to the test subdirectories to examine the inputs and outputs in detail.

For convenience, all of the failure messages and differences are collected in the $AMBERHOME/logs directory; you can quickly see from these if there is anything more than round-off errors.
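
A quick way to locate the relevant files is to list the logs and search the test trees for diff files (this assumes the log files record the same "possible FAILURE" strings as the console output):

ls $AMBERHOME/logs
grep -r "possible FAILURE" $AMBERHOME/logs
find $AMBERHOME/AmberTools/test $AMBERHOME/test -name "*.dif"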

Building parallel version of Amber

To compile parallel (MPI) versions of Amber:

cd $AMBERHOME

./configure -mpi intel # or ./configure -mpi gnu

make install

To test installation:

setenv DO_PARALLEL "mpirun -np 2"
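
or using sh/bash:

export DO_PARALLEL="mpirun -np 2"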

make test

This assumes that you have installed MPI and that mpicc and mpif90 are in your PATH.
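
A quick way to confirm the MPI wrappers are visible (the exact output depends on your MPI stack):

which mpicc mpif90 mpirun
mpirun --version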

Where "possible FAILURE" messages are found, go to the indicated directory under $AMBERHOME/AmberTools/test or $AMBERHOME/test, and look at the "*.dif" files.

Building and testing the GPU code

Set up the CUDA environment. For the latest GPU hardware you will need CUDA 8.0 or higher. If you are using Environment Modules:

module load cuda/8.0

export CUDA_HOME=$CUDA_ROOT
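
Before configuring, you can check that the toolkit and driver are visible; this assumes the module puts nvcc on your PATH and sets CUDA_ROOT:

nvcc --version
nvidia-smi
echo $CUDA_HOME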

cd $AMBERHOME
./configure -cuda gnu
make install

make test

To build a parallel (MPI) CUDA executable:

cd $AMBERHOME
./configure -cuda -mpi gnu
make install

export DO_PARALLEL="mpirun -np 2" # for bash/sh
setenv DO_PARALLEL "mpirun -np 2" # for csh/tcsh
make test.cuda_parallel

CUDA pmemd executables are:

pmemd.cuda -> pmemd.cuda_SPFP
pmemd.cuda_SPFP
pmemd.cuda.MPI -> pmemd.cuda_SPFP.MPI
pmemd.cuda_SPFP.MPI
pmemd.cuda_DPFP
pmemd.cuda_SPXP
pmemd.cuda_DPFP.MPI
pmemd.cuda_SPXP.MPI

where pmemd.cuda is a symbolic link to pmemd.cuda_SPFP and pmemd.cuda.MPI is a symbolic link to pmemd.cuda_SPFP.MPI.
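
As an illustration only (the input file names mdin, prmtop and inpcrd are placeholders for your own files), runs with the default SPFP executables might look like:

export CUDA_VISIBLE_DEVICES=0
pmemd.cuda -O -i mdin -p prmtop -c inpcrd -o mdout -r restrt

export CUDA_VISIBLE_DEVICES=0,1
mpirun -np 2 pmemd.cuda.MPI -O -i mdin -p prmtop -c inpcrd -o mdout -r restrt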