ASPECT INSTALLATION – HAMILTON CLUSTER v4.0
Phil Heron, Durham University
Oct 2018
email: philip.j.heron@durham.ac.uk
This is the setup guide for installing ASPECT on the Hamilton Cluster.
.DOCX VERSION: Hamilton_cluster_2018_aspect2_dealii9
There are some prerequisites for this endeavour, one of them being a basic knowledge of Linux commands and vi editor commands.
Below are a few links to these as a refresher or to get you up to speed.
Linux commands: https://diyhacking.com/linux-commands-for-beginners/
VI Editor commands: https://www.cs.colostate.edu/helpdocs/vi.html
For this guide, lines that start with > indicate a command typed in a terminal window. Anything after the > is what needs to be typed in the terminal.
e.g.,
> pwd
/ddn/home/^username^
Where ever ^username^ is written, you will need to put in your own username (as given by CIS).
In order to install ASPECT you need to install the numerical code deal.II. To do this, we need to use the installer package candi.
Therefore, we need to install deal.II (and its components Trilinos and p4est) via candi first.
STEP 1 – logging on
To start, you need to log into hamilton from a terminal window. If you are off campus you will need to log into MIRA first, then hamilton.
> ssh -Y ^username^@mira.dur.ac.uk
> ssh -X ^username^@hamilton.dur.ac.uk
If you are on campus, you can simply log into hamilton:
> ssh -X ^username^@hamilton.dur.ac.uk
> pwd
/ddn/home/^username^
The home directory doesn't appear to have enough space to set up ASPECT, so all the compilation needs to be done in the /ddn/data/^username^/ folder:
> cd /ddn/data/^username^/
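It is worth checking that you have enough free space here before you start, as the full deal.II build (including Trilinos and the build trees) can easily run to tens of GB. A quick check with standard Linux commands (how quotas are arranged on /ddn/data is up to CIS):
> df -h /ddn/data/^username^
> du -sh /ddn/data/^username^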
STEP 2 – downloading Candi
To download Candi we should set up a new folder for the installation.
> pwd
/ddn/data/^username^
> mkdir install
> cd install
To download Candi, we can use a git command which will download the whole package into your folder.
> git clone https://github.com/dealii/candi
> cd candi
> ls
AUTHORS  LICENSE  README.md  candi.cfg  candi.sh  deal.II-toolchain
STEP 3 – setup modules
We are required to setup modules for the installation:
> module load module-git gcc/4.9.1 cmake/3.6.2 lapack/gcc/3.5.0 zlib/gcc/1.2.7 sge/current openmpi/gcc/2.1.1 gsl/gcc/64/1.15
and to define the compilers that are to be used.
> setenv CC /usr/local/Cluster-Apps/openmpi/gcc-4.9.1/2.1.1/bin/mpicc
> setenv CXX /usr/local/Cluster-Apps/openmpi/gcc-4.9.1/2.1.1/bin/mpicxx
> setenv FC /usr/local/Cluster-Apps/openmpi/gcc-4.9.1/2.1.1/bin/mpif90
> setenv FF /usr/local/Cluster-Apps/openmpi/gcc-4.9.1/2.1.1/bin/mpif77
Or, if that doesn't work (setenv is for csh/tcsh; use export if your shell is bash):
> export CC=mpicc; export CXX=mpicxx; export FC=mpif90; export FF=mpif77
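To check that the compiler variables have been picked up, you can print them and query one of the MPI wrapper compilers (a quick sanity check; the exact paths and version strings you see will depend on the modules loaded):
> echo $CC $CXX $FC $FF
> which mpicc
> mpicc --version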
STEP 4 – editing Candi setup
As candi can download a great number of packages that deal.II can work with, we only need to download the ones that are directly relevant to ASPECT. As a result, we can comment out the packages that we do not need (opencascade, parmetis, superlu_dist, PETSc, and SLEPc).
In the candi folder:
> ls
AUTHORS  LICENSE  README.md  candi.cfg  candi.sh  deal.II-toolchain
> nano candi.cfg
line 19 needs to be edited to:
INSTALL_PATH=/ddn/data/^username^/install/candi
(or wherever the candi folder is located)
From line 48, we need to comment out the packages we do not need by putting a # in the first column of the relevant lines.
The packages opencascade, parmetis, superlu_dist, petsc, and slepc need to be commented out:
#PACKAGES="${PACKAGES} once:opencascade"
#PACKAGES="${PACKAGES} once:parmetis"
#PACKAGES="${PACKAGES} once:superlu_dist"
PACKAGES="${PACKAGES} once:hdf5"
PACKAGES="${PACKAGES} once:p4est"
PACKAGES="${PACKAGES} once:trilinos"
#PACKAGES="${PACKAGES} once:petsc"
#PACKAGES="${PACKAGES} once:slepc"
PACKAGES="${PACKAGES} dealii"
In the same file, we also need to specify the BLAS and LAPACK libraries. Towards the end of the file (around line 85), remove the comment indicator '#' from the following lines and set their paths:
BLAS_DIR=/ddn/apps/Cluster-Apps/lapack/gcc-4.9.1/3.5.0/
LAPACK_DIR=/ddn/apps/Cluster-Apps/lapack/gcc-4.9.1/3.5.0/
This helps the program to locate the BLAS and LAPACK directories that are required for the code installation. BLAS and LAPACK are linked to the lapack module.
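To confirm this path exists on your account before running candi, simply list it (the exact library filenames are an assumption and may differ slightly, but you should see the LAPACK/BLAS libraries, e.g. liblapack.a or liblapack.so):
> ls /ddn/apps/Cluster-Apps/lapack/gcc-4.9.1/3.5.0/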
From here we need to exit the candi.cfg file and edit the candi.sh file.
> nano candi.sh
We need to edit the path to the build directories to point to the /ddn/data folder:
PREFIX=/ddn/data/^username^/install/deal.ii-candi
(or wherever you would like the code to build)
Next we need to force candi to install deal.II with its own internal (bundled) threading system.
> cd deal.II-toolchain/packages
> nano dealii.package
-D DEAL_II_FORCE_BUNDLED_THREADS:BOOL=ON \
This should be on (or near) line 21 – OFF should be changed to ON.
Then save and exit.
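A quick way to confirm the edit took effect is to grep for the flag (grep -n prints the matching line with its line number; it should now end in BOOL=ON):
> grep -n BUNDLED_THREADS dealii.package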
STEP 5 – running Candi
From here we can go back to the main candi directory and run the installer:
> cd ../../
> ./candi.sh --platform=deal.II-toolchain/platforms/supported/centos7.platform -j12
You are asked to review the candi setup once the installation begins. Press enter at these prompts unless you notice an error. The program will check that the relevant external libraries are installed and that the compilers (C, C++, Fortran) are found correctly.
When using -j8 it took 40 minutes.
Build finished in 2386 seconds.
Summary of timings:
  dealii-prepare: 0 s
  p4est:          107 s
  trilinos:       847 s
  petsc:          162 s
  dealii:         1496 s
Head to the dealii build directory to test:
> cd /ddn/data/^username^/install/deal.ii-candi/tmp/build/deal.II-v9.0.0
> make test
This should run 10 quick tests; they should all pass:
      Start  1: step.debug
 1/10 Test  #1: step.debug .......................   Passed   30.08 sec
      Start  2: step.release
 2/10 Test  #2: step.release .....................   Passed   27.75 sec
      Start  3: affinity.debug
 3/10 Test  #3: affinity.debug ...................   Passed   20.79 sec
      Start  4: mpi.debug
 4/10 Test  #4: mpi.debug ........................   Passed   23.10 sec
      Start  5: tbb.debug
 5/10 Test  #5: tbb.debug ........................   Passed   19.57 sec
      Start  6: p4est.debug
 6/10 Test  #6: p4est.debug ......................   Passed   24.46 sec
      Start  7: step-trilinos.debug
 7/10 Test  #7: step-trilinos.debug ..............   Passed   27.25 sec
      Start  8: lapack.debug
 8/10 Test  #8: lapack.debug .....................   Passed   19.17 sec
      Start  9: umfpack.debug
 9/10 Test  #9: umfpack.debug ....................   Passed   25.24 sec
      Start 10: gsl.debug
10/10 Test #10: gsl.debug ........................   Passed   20.01 sec

100% tests passed, 0 tests failed out of 10

Total Test time (real) = 237.45 sec
Built target test
STEP 6 – building ASPECT
Go into the directory where you would like ASPECT to be downloaded (I would recommend /ddn/data/^username^/install) and acquire the development version of ASPECT by:
> git clone --recursive https://github.com/geodynamics/aspect.git
Once downloaded, we can configure and then build. For the configuration we need to tell ASPECT where deal.II is; deal.II itself is already configured and linked to Trilinos and p4est from the candi installation.
> cd aspect
> ls
AUTHORS  CMakeLists.txt  CONTRIBUTING.md  CTestConfig.cmake  LICENSE  README.md  VERSION
benchmarks  cmake  cookbooks  data  doc  docker  include  source  tests
> cmake -DDEAL_II_DIR=/ddn/data/^username^/install/deal.ii-candi/tmp/build/deal.II-v9.0.0 -DCMAKE_BUILD_TYPE=release .
Once the cmake is complete, you need to make the executable:
> make release
This will take an hour or so to run.
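If you would like to speed this up, make can compile in parallel. As a sketch, the following uses four parallel build jobs (check what is considered acceptable on a Hamilton login node before using more):
> make -j4 release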
Once the executable is made, make a copy of it:
> cp aspect aspect.rel
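As a quick sanity check that the executable runs, you can try one of the shipped cookbook models on a couple of cores (the cookbook filename below is an assumption based on the repository layout; anything larger should be submitted through the queue as in STEP 8 rather than run on the login node):
> mpirun -np 2 ./aspect.rel cookbooks/convection-box.prm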
STEP 7 – if you need to stop your installation and restart
You will need to make sure all your modules and settings are set up correctly before you restart the installation:
We are required to setup modules for the installation:
> module load module-git gcc/4.9.1 cmake/3.6.2 lapack/gcc/3.5.0 zlib/gcc/1.2.7 sge/current openmpi/gcc/2.1.1 gsl/gcc/64/1.15
and to define the compilers that are to be used.
> setenv CC /usr/local/Cluster-Apps/openmpi/gcc-4.9.1/2.1.1/bin/mpicc
> setenv CXX /usr/local/Cluster-Apps/openmpi/gcc-4.9.1/2.1.1/bin/mpicxx
> setenv FC /usr/local/Cluster-Apps/openmpi/gcc-4.9.1/2.1.1/bin/mpif90
> setenv FF /usr/local/Cluster-Apps/openmpi/gcc-4.9.1/2.1.1/bin/mpif77
Or, if that doesn't work (use export if your shell is bash rather than csh/tcsh):
> export CC=mpicc; export CXX=mpicxx; export FC=mpif90; export FF=mpif77
Then you can pick up where you left off. Any questions, large or small: philip.j.heron@durham.ac.uk
STEP 8 – setting up a model to run
For completeness – here is how to setup an input file. However, at this stage, please head over to http://community.dur.ac.uk/jeroen.van-hunen/ASPECT/ to complete the setup.
You will need to make a script in order to run a model. This script is called a ‘Slurm file’.
Below is an example of this slurm file, which will need to be saved as mpi.sh (for example) and then made executable via:
> chmod +x mpi.sh
Here is the slurm file:
#!/bin/csh
# simple template job script for submission of an MPI job with
# SLURM directives
#
# In this script the minimum requirements have been set for SLURM
# except for these two changes that have to be made
#
# 1. replacing <number_of_tasks> for the -n option by the actual
#    number of MPI tasks required
# 2. replacing my_mpi_program by the actual executable name
#
# The job can be submitted with the command
#   sbatch -p parallel_queue name_of_job_script
# or with overriding the number of tasks as option
#   sbatch -p parallel_queue -n number_of_tasks name_of_job_script
#
# If successful SLURM will return a jobID for this job which can be
# used to query its status.
#############################################################################
## All lines that start with #SBATCH will be processed by SLURM.
## Lines in this template script that have white space between # and SBATCH
## will be ignored. They provide examples of further options that can be
## activated by deleting the white space and replacing any text after the
## option.
##
## By default SLURM uses as working directory the directory from where the
## job script is submitted. To change this the standard Linux cd command has
## to be used.

## Name of the job as it will appear when querying jobs with squeue (the
## default is the name of the job script)
#SBATCH -J job-name

## By default SLURM combines the standard output and error streams in a single
## file based on the jobID and with extension .out
## These streams can be directed to separate files with these two directives
#SBATCH -o out_file_name.o%j
#SBATCH -e err_file_name.e%j
## where SLURM will expand %j to the jobID of the job.

## Request email to the user when certain type of events occur to
## the job
#SBATCH --mail-type=ALL
## where <type> can be one of BEGIN, END, FAIL, REQUEUE or ALL,
## and send to email address
#SBATCH --mail-user email@youraddress
## The default email name is that of the submitting user as known to the system.

## Specify project or account name (currently not required).
##SBATCH -A ITSall
#############################################################################
## This job requests number_of_tasks MPI tasks (without OpenMP)
#SBATCH -n 24
#SBATCH --time=0-01:15

# Request submission to a queue (partition) for parallel jobs
#SBATCH -p par7.q

module purge
module load slurm/current

## Load any modules required here
module load module-git gcc/4.9.1 cmake/3.6.2 lapack/gcc/3.5.0 zlib/gcc/1.2.7 sge/current openmpi/gcc/2.1.1 gsl/gcc/64/1.15

## Execute the MPI program
mpirun -np 24 aspect.rel inputprogram.prm
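Once the slurm file is saved, the job is submitted with sbatch and can be monitored or cancelled with the standard SLURM tools squeue and scancel (replace ^jobid^ with the job ID that sbatch prints on submission):
> sbatch -p par7.q mpi.sh
> squeue -u ^username^
> scancel ^jobid^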