ARC Resources

Advanced Research Computing (ARC) offers a wide range of computing resources to Virginia Tech researchers and collaborators, described below.

High-Performance Computing

BlueRidge (Cray CS-300)
  • Key features, uses: Large-scale CPU or MIC computation
  • Login nodes: blueridge1 or blueridge2
  • Available: March 2013
  • Operating system: CentOS Linux 6
  • Theoretical peak: 398.7 TFlop/s
  • CPU: Intel Xeon E5-2670 (Sandy Bridge), 2.60 GHz
  • Accelerator: Intel Xeon Phi (MIC) 5110P, on 130 MIC nodes
  • Memory: 27.3 TB total; 64 GB/node, 4 GB/core (18 nodes have 128 GB, or 8 GB/core)
  • Interconnect: QDR InfiniBand
  • Notes: Requires allocation

HokieSpeed (Seneca, CPU-GPU)
  • Key features, uses: Large-scale GPU or CPU computation
  • Login nodes: hokiespeed1 or hokiespeed2
  • Available: September 2012
  • Operating system: CentOS Linux 6
  • Theoretical peak: 238.2 TFlop/s
  • CPU: Intel Xeon E5645 (Westmere), 2.40 GHz
  • Accelerator: NVIDIA Tesla C2050
  • Memory: 5.0 TB total; 24 GB/node, 2 GB/core
  • Interconnect: QDR InfiniBand

HokieOne (SGI UV-1000)
  • Key features, uses: Shared memory
  • Login node: hokieone
  • Available: April 2012
  • Operating system: SUSE Linux 11
  • Theoretical peak: 5.4 TFlop/s
  • CPU: Intel Xeon X7542 (Westmere), 2.66 GHz
  • Accelerator: N/A
  • Memory: 2.62 TB total; 5.3 GB/core (shared memory: 6 cores and 32 GB per socket when requesting resources)
  • Interconnect: QDR InfiniBand

Athena (Appro, CPU-GPU)
  • Key features, uses: GPU computing & visualization
  • Login node: athena1
  • Available: March 2011
  • Operating system: CentOS Linux 5
  • Theoretical peak: 12.4 TFlop/s
  • CPU: AMD Opteron 6134 (Magny Cours), 2.30 GHz
  • Accelerator: NVIDIA Tesla S870, via the 16-node GPU queue
  • Memory: 2.7 TB total; 64 GB/node, 2 GB/core
  • Interconnect: QDR InfiniBand

Ithaca (IBM iDataPlex)
  • Key features, uses: High-memory nodes, MATLAB queue
  • Login nodes: ithaca1 or ithaca2
  • Available: Fall 2009
  • Operating system: CentOS Linux 6
  • Theoretical peak: 6.1 TFlop/s
  • CPU: Intel Xeon E5520 (Nehalem), 2.26 GHz
  • Accelerator: N/A
  • Memory: 2 TB total; 24 GB/node, 3 GB/core (10 nodes have 48 GB, or 6 GB/core)
  • Interconnect: QDR InfiniBand



ARC provides user guides for the following software packages:

  • Unix: A detailed guide to the Unix operating system
  • OpenMP: An introduction to OpenMP, a common means of obtaining parallelism on shared-memory systems
  • MPI: An introduction to the Message Passing Interface (MPI), a standard for obtaining parallelism, particularly on distributed-memory systems
  • MATLAB: An introduction to the MATLAB numerical computing software, with instructions for and examples of submitting jobs to Ithaca
  • NAMD: An introduction to the NAMD molecular dynamics software, including information on running, scaling, and GPU acceleration

The table below describes the availability of selected (not all) software packages on ARC systems:

Package    Category                  Installed versions
ABAQUS     Finite Element            6.13-1, 6.12-1, 6.11-2, 6.10-2, 6.10-EF1
ANSYS      Fluid Dynamics            14.5, 14.0
Gaussian   Computational Chemistry   09.A-02
Gromacs    Molecular Dynamics        4.
LAMMPS     Molecular Dynamics        1Feb14, 27Aug12
LS-DYNA    Finite Element            v971 Rev 5 & 6 (MPP & SMP)
MATLAB     Numerical Computing       R2013b, R2013a, R2012b, R2012a, R2011b, R2010b
NAMD       Molecular Dynamics        2.
OpenFOAM   Fluid Dynamics            2.3.0, 2.2.0
ParaView   Visualization             4., 3.8.1
Python*                              2.7.2, 2.6.8
R**        Statistics                3.0.3, 2.14.1
VASP       Ab initio Simulation      5.

*Python installations include numpy and scipy for numerical and scientific computing, as well as matplotlib for graphing.

**R installations (aside from HokieOne) include snow, Rmpi, and pbdR packages for parallel computing.
