ARC Resources

Advanced Research Computing offers a wide range of resources to Virginia Tech researchers and collaborators, listed below. Click on a link to read more about a given resource.

High-Performance Computing

| Name | BlueRidge | HokieSpeed | HokieOne | Ithaca |
|---|---|---|---|---|
| Vendor/Model | Cray CS-300 | Seneca (CPU-GPU) | SGI UV-1000 | IBM iDataPlex |
| Key Features, Uses | Large-scale CPU or MIC computation | Large-scale GPU or CPU computation | Shared memory | High-memory nodes, MATLAB queue |
| Login Node (xxx.arc.vt.edu) | blueridge1 or blueridge2 | hokiespeed1 or hokiespeed2 | hokieone | ithaca1 or ithaca2 |
| Available | March 2013 | September 2012 | April 2012 | Fall 2009 |
| Operating System | CentOS Linux 6 | CentOS Linux 6 | SUSE Linux 11 | CentOS Linux 6 |
| Theoretical Peak (TFlop/s) | 398.7 | 238.2 | 5.4 | 6.1 |
| Nodes | 408 | 201 | N/A | 79 |
| Cores | 6,528 | 2,412 | 492 | 632 |
| Cores/Node | 16 | 12 | N/A¹ | 8 |
| CPU Model | Intel Xeon E5-2670 (Sandy Bridge) | Intel Xeon E5645 (Westmere) | Intel Xeon X7542 (Westmere) | Intel Xeon E5520 (Nehalem) |
| CPU Speed | 2.60 GHz | 2.40 GHz | 2.66 GHz | 2.26 GHz |
| Accelerators/Coprocessors | 260 | 408 | N/A | N/A |
| Accelerator Model | Intel Xeon Phi (MIC) 5110P | NVIDIA Tesla C2050 | N/A | N/A |
| Accelerators/Node | 2² | 2 | N/A | N/A |
| Memory Size | 27.3 TB | 5.0 TB | 2.62 TB | 2 TB |
| Memory/Core | 4 GB³ | 2 GB | 5.3 GB | 3 GB⁴ |
| Memory/Node | 64 GB³ | 24 GB | N/A¹ | 24 GB⁴ |
| Interconnect | QDR InfiniBand | QDR InfiniBand | QDR InfiniBand | QDR InfiniBand |
| Notes | Requires allocation | | | |

¹ HokieOne resources are requested per socket: 6 cores & 32 GB per socket.
² For 130 MIC nodes.
³ 18 nodes have 128 GB (8 GB/core).
⁴ 10 nodes have 48 GB (6 GB/core).
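Each system is reached through the login nodes listed above. As a sketch, a session on BlueRidge might begin like this (the username `jdoe` is a placeholder for your own VT PID):

```shell
# Connect to a BlueRidge login node (blueridge1 or blueridge2 both work).
# "jdoe" is a placeholder username; substitute your own VT PID.
ssh jdoe@blueridge1.arc.vt.edu
```

The same pattern applies to the other systems, e.g. `hokieone.arc.vt.edu` or `ithaca1.arc.vt.edu`.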

Visualization

Software

ARC provides user guides for the following software packages:

  • Unix: A detailed guide to the Unix operating system
  • OpenMP: An introduction to OpenMP, a common means of obtaining parallelism on shared-memory systems
  • MPI: An introduction to Message Passing Interface (MPI), a standard for obtaining parallelism, particularly on distributed-memory systems
  • MATLAB: An introduction to MATLAB numerical computing software with instructions for and examples of submitting jobs to Ithaca
  • NAMD: An introduction to NAMD molecular dynamics software, including information on running, scaling, and GPU acceleration

The table below describes the availability of selected software packages (not an exhaustive list) on ARC systems:

| Software | Description | BlueRidge | HokieSpeed | HokieOne | Ithaca |
|---|---|---|---|---|---|
| ABAQUS | Finite Element | 6.13-1, 6.12-1, 6.11-2 | 6.13-1, 6.11-2 | | 6.13-1, 6.12-1 |
| ANSYS | Fluid Dynamics | 14.5 | | | 14.5 |
| Gaussian | Computational Chemistry | 09.A-02 | | 09.A-02 | 09.A-02 |
| Gromacs | Molecular Dynamics | 4.5.5 | 4.5.5 | 4.5.5 | 4.5.5 |
| LAMMPS | Molecular Dynamics | 1Feb14, 27Aug12 | 27Aug12 | 27Aug12 | 27Aug12 |
| LS-DYNA | Finite Element | v971 Rev 5 & 6 (MPP & SMP) | | | |
| MATLAB | Numerical Computing | | | | R2014b, R2013b, R2013a, R2012b, R2012a |
| Mothur | Bioinformatics | 1.31.2 | | 1.32.1 | 1.32.1 |
| NAMD | Molecular Dynamics | 2.9 | 2.8 | 2.8 | 2.9 |
| OpenFOAM | Fluid Dynamics | 2.3.0, 2.2.0, 2.1.1 | 2.3.0, 2.2.0, 2.1.1 | | 2.3.0, 2.2.0 |
| ParaView | Visualization | 4.0.1 | 4.0.1, 3.14.1 | | |
| Python* | | 2.7.2, 2.6.8 | 2.7.2, 2.6.8 | 2.7.2 | 2.7.2, 2.6.8 |
| QIIME | Bioinformatics | 1.7.0 | | | |
| R** | Statistics | 3.0.3, 2.14.1 | 3.0.3, 2.14.1 | 3.0.3, 2.14.1 | 3.0.3, 2.14.1 |
| VASP | Ab initio Simulation | 5.3.3 | | | |

*Python installations include numpy and scipy for numerical and scientific computing, as well as matplotlib for graphing.
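As a minimal sketch of exercising that stack, the following assumes only that the numpy and scipy modules import as usual (exact versions vary by cluster, per the table above):

```python
import numpy as np
from scipy import linalg

# Build a small linear system A x = b and solve it with
# SciPy's LAPACK-backed dense solver.
a = np.array([[2.0, 0.0],
              [0.0, 3.0]])
b = np.array([4.0, 9.0])
x = linalg.solve(a, b)
print(x)  # -> [2. 3.]
```

A matplotlib script would run the same way, though on a cluster you would typically select a non-interactive backend (e.g. Agg) before plotting.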

**R installations (aside from HokieOne) include snow, Rmpi, and pbdR packages for parallel computing.
