The ARC software examples are a collection of scripts and input files
that illustrate how various software packages are used on the ARC
clusters.

These examples are used by ARC staff to make simple checks that
various pieces of software are working on any cluster where they
are installed. This means the scripts contain some extra print
statements and error checks that are not part of the computation.

Setting aside those extra print statements and error checks, a user
can read a typical example script to learn how to run a particular
piece of software on a given cluster and queue.

Purpose of examples

The examples are simple illustrations of:

  • appropriate PBS commands to request system resources (time,
    processors, GPUs, queues);
  • the module commands necessary to load the software;
  • how to start the job in the appropriate directory;
  • the command lines needed to compile, load, and run programs;
  • the additional steps necessary when invoking OpenMP, MPI, or CUDA.

The intent is that an interested user could copy the files for a
particular software example and submit the job immediately for
execution. In some cases, this is exactly true, because wherever
possible, the open_q queue is specified. However, in many
cases, especially on NewRiver and Cascades, some resources cannot
be accessed through open_q; such scripts will include a PBS
allocation command of the form

#PBS -A arcadm

and, to run the job, the user must have an allocation of their own
and replace “arcadm” with that allocation’s identifier.
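A sketch of the header for such a script (the resource values are
illustrative, and "your_alloc" is a placeholder for a real allocation
identifier, not an actual allocation):

```shell
#! /bin/bash
#PBS -l walltime=00:05:00        # requested wall-clock time
#PBS -l nodes=1:ppn=1            # one processor on one node
#PBS -q normal_q                 # a queue that is not open to everyone
#PBS -A your_alloc               # replace with your allocation identifier
```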

When the software is a library, a user calling program may need
to be supplied, compiled, and loaded with the library. In such
cases, a simple calling program is included, along with the
necessary commands to compile and load, usually with the
gcc compiler.

Software may be installed on only some of the ARC clusters, and the
installed version may vary from cluster to cluster. Thus, a separate
job submission script is made available for each cluster on which the
software is installed.

The examples are organized as subdirectories of a single master
directory /opt/examples. Thus, one could get a complete list
of the example subdirectories by the command

  ls /opt/examples

For instance: GCC examples

One of the example subdirectories is for the GCC compiler family,
which is most often used to compile C programs.
To see what’s in the directory, on any ARC login node, type

  ls /opt/examples/gcc

which should return the following list:

  • gcc_article.html is a brief discussion of the purpose
    and use of the GCC compiler;
  • gcc_blueridge.sh is a script to run a sample job using
    the GCC compiler on the ARC BlueRidge cluster; the other “.sh”
    files are for other ARC clusters;
  • heated_plate.c is a C program needed as input for the
    GCC compiler example.

You can copy files from an example directory to view, print, or modify
them.
All the examples can be submitted to the scheduler on the corresponding
system. To continue our BlueRidge example, then, you might do the
following, while working in a directory that belongs to you:

  • cp /opt/examples/gcc/gcc_blueridge.sh .
  • cp /opt/examples/gcc/heated_plate.c .
  • qsub gcc_blueridge.sh

On BlueRidge, the job goes into the open_q queue and waits its
turn for execution. When the job has completed, you should see a
file named

  gcc_blueridge.txt

which contains the results of the job.

What’s in an example job script?

All job scripts start with some information for the scheduler, including
the resources (nodes, processors, gpu’s, time), the requested job queue,
and the name of the allocation to be charged. (No allocation is
charged, however, in cases where the open_q can be used.)

Next comes the mysterious command

  cd $PBS_O_WORKDIR

which simply tells the system to work in the directory from which you
submitted the job. That’s how the GCC compiler example job can find
the source code file heated_plate.c that it needs to compile.

Next come a few module commands that set up the software
environment. The first command is usually

  module purge

which clears out any existing software definitions. This can be
followed by commands that specify the name, and usually the specific
version number, of the software that is to be used.

Finally there are the commands that correspond to things you could
have typed in an interactive session, and which carry out the task
you are interested in.

For the GCC example on BlueRidge, the complete file looks like this:

#! /bin/bash
#PBS -l walltime=00:05:00
#PBS -l nodes=1:ppn=1
#PBS -W group_list=blueridge
#PBS -q open_q
#PBS -j oe

cd $PBS_O_WORKDIR

module purge
module load gcc/5.3.0

echo "GCC_BLUERIDGE: Normal beginning of execution."

gcc -c heated_plate.c
if [ $? -ne 0 ]; then
  echo "GCC_BLUERIDGE: Compile error!"
  exit 1
fi

gcc -o heated_plate heated_plate.o
if [ $? -ne 0 ]; then
  echo "GCC_BLUERIDGE: Load error!"
  exit 1
fi
rm heated_plate.o

./heated_plate > gcc_blueridge.txt
if [ $? -ne 0 ]; then
  echo "GCC_BLUERIDGE: Run error!"
  exit 1
fi
rm heated_plate

echo "GCC_BLUERIDGE: Normal end of execution."
exit 0

The “echo” statements are used by ARC staff to identify the specific
command involved if a failure occurs; this is why an if..then
block follows every important command. If you find these
distracting, you can remove them with an editor.
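The pattern is easy to imitate in your own scripts. Here is a
distilled, runnable sketch of the check-after-each-step idea; the
commands, file name, and messages are stand-ins, not part of any ARC
example:

```shell
# Step 1: create a scratch file, then check the exit status.
printf 'hello\n' > demo.txt
if [ $? -ne 0 ]; then
  echo "DEMO: Write error!"
  exit 1
fi

# Step 2: search the file, then check the exit status again.
grep -q hello demo.txt
if [ $? -ne 0 ]; then
  echo "DEMO: Search error!"
  exit 1
fi
rm demo.txt

echo "DEMO: Normal end of execution."
```

If any step fails, the script names the step and stops, so the log
pinpoints exactly which command went wrong.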

What examples are available?

The examples directories are occasionally updated, but here is a
recently compiled list:

abaqus/		flann/		  minia/	       phdf5/
abinit/		flint/		  mkl/		       picard/
abyss/		fluent/		  mkl_ifort/	       pigz/
allinea-forge/	ga/		  mothur/	       proj/
amber/		gatb/		  mpe2/		       pycuda/
anaconda/	gatk/		  mpich/	       python/
anaconda2/	gaussian/	  mpip/		       python3gdal/
ansys/		gcc/		  mrbayes/	       python-ucs2/
apache-ant/	gdal/		  mvapich2/	       r/
apbs-static/	geos/		  mvapich2-test/       r-parallel/
archive/	gfortran/	  namd/		       rutils/
aspect/		glm/		  namd-gpu/	       sac/
atlas/		glog/		  nastran/	       samtools/
autodocksuite/	gmsh/		  ncbi-blast+/	       scalapack/
automake/	gmt/		  ncl/		       scipy/
bamtools/	gnuplot/	  nco/		       scons/
bcftools/	gromacs/	  ncview/	       seqtk/
beagle-lib/	gshhg/		  netcdf/	       singular/
bedtools/	gsl/		  netcdf-c/	       slurm/
blas_atlas/	guile/		  netcdf-c-par/        sox/
blas_mkl/	harfbuzz/	  netcdf-cxx/	       spades/
boost/		harminv/	  netcdf-fortran/      sparsehash/
boost-mpi/	hdf5/		  netcdf-fortran-par/  sprai/
boost-ucs2/	hmmer/		  netcdf-par/	       stata/
bowtie/		hpl/		  normal_q/	       swig/
bowtie2/	ifort/		  nose/		       szip/
bwa/		imagemagick/	  numpy/	       tbb/
bzip2/		impi/		  octave/	       tecplot/
caelus/		intel/		  openblas/	       test_suite_blueridge/
cddlib/		iozone/		  opencv/	       test_suite_cascades_broadwell/
cgal/		ipm/		  openfoam/	       test_suite_cascades_skylake/
citcoms/	ipp/		  openmp/	       test_suite_dragonstooth/
clapack/	ipython/	  openmpi/	       test_suite_huckleberry/
cmake/		jags/		  openmpi-test/        test_suite_newriver_broadwell/
comsol/		jdk/		  open_q/	       test_suite_newriver_ivybridge/
comsol_scalar/	julia/		  opensees/	       theano/
cora/		lame/		  p100/		       tophat/
cplex/		lammps/		  p100_dev_q/	       torch/
cuda/		lapack_atlas/	  p100_normal_q/       trimmomatic/
cuda_fortran/	lapack_mkl/	  p2fa/		       trinityrnaseq/
cufflinks/	largemem_q/	  p4est/	       uuid/
dcw/		libgtextutils/	  p7zip/	       v100_dev_q/
dealii/		libjpeg-turbo/	  pango/	       v100_large_q/
dev_q/		lordec/		  papi/		       v100_normal_q/
ea-utils/	ls-dyna/	  parallel/	       valgrind/
eigen/		lsopt/		  parallel-netcdf/     vasp/
espresso/	lua/		  parmetis/	       velvet/
examl/		luajit/		  pbdr/		       viennacl/
expat/		m4/		  pbs/		       vis_q/
fastqc/		mathematica/	  pbs_nodefile/        wannier90/
fastx_toolkit/	matlab/		  pcl/		       x264/
fdk-aac/	matlab_parallel/  perl/		       yasm/
ffmpeg/		matplotlib/	  pgf90/	       zip/
fftw/		metis/		  pgi/		       zlib/