In bioinformatics, we often need to run software and programs that our everyday computers and laptops cannot handle. When this situation arises, the ARC supercomputing environment helps alleviate some of this burden.
ARC is happy to announce the release of a new cluster, named Cascades, available at cascades1.arc.vt.edu and cascades2.arc.vt.edu. Cascades is a 196-node system capable of tackling the full spectrum of computational workloads, from problems requiring hundreds of compute cores to data-intensive problems requiring large amounts of memory and storage resources. Cascades contains three compute engines designed for distinct workloads:
General – Distributed, scalable workloads. With two 16-core Intel Broadwell (latest-generation) processors and 128 GB of memory on each node, this 190-node compute engine is suitable for traditional HPC jobs and large codes using MPI.
GPU – Data visualization and code acceleration! The four nodes in this compute engine each have two Nvidia K80 GPUs, 512 GB of memory, and one 2 TB NVMe PCIe flash card.
Very Large Memory – Graph analytics and very large datasets. With 3 TB (3,072 GB) of memory, four 18-core processors, six 1.8 TB direct-attached SAS hard drives, one 400 GB SAS SSD, and one 2 TB NVMe PCIe flash card, each of these two servers will enable analysis of large, highly connected datasets, in-memory database applications, and speedier solution of other large problems.
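To run work on any of these compute engines, a researcher typically submits a batch job from a login node. The sketch below assumes a PBS/Torque-style scheduler; the queue name, allocation name, and module name are hypothetical placeholders, so consult ARC's documentation for the actual values on Cascades.

```shell
#!/bin/bash
# Hypothetical batch script for an MPI job on the General engine.
# Queue, allocation, and module names below are assumptions, not
# values taken from ARC's documentation.
#PBS -l nodes=2:ppn=32        # two General-engine nodes, 32 cores each
#PBS -l walltime=04:00:00     # four-hour wall-clock limit
#PBS -q normal_q              # hypothetical queue name
#PBS -A YourAllocation        # hypothetical allocation/account name

module load mpi               # hypothetical module name
cd "$PBS_O_WORKDIR"           # run from the directory you submitted from
mpirun -np 64 ./my_mpi_program
```

In a PBS-style environment, such a script would be submitted with `qsub job.sh`, and `qstat` would report its status in the queue.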
The computational capacity of BlueRidge, ARC’s flagship cluster, has been significantly enhanced through the addition of Intel MIC coprocessors. This new hardware presents exciting opportunities for Virginia Tech’s researchers.
In March 2013, Virginia Tech’s Advanced Research Computing (ARC) released BlueRidge, providing faculty, staff, and students with the largest computing asset to date as measured by memory and number of cores. This Cray CS-300 cluster ranked number 402 on the November 2012 Top500 list, the industry-standard ranking of the world’s 500 fastest supercomputers, with a score of 86.3 teraflops, or 86.3 trillion floating point operations per second. This is more than eight times the computing power provided by System X, which put Virginia Tech on the supercomputing map in 2003.
BlueRidge, which was purchased through funding provided by Virginia Tech and the State of Virginia, is composed of 318 nodes (individual computers) each outfitted with two octa-core Intel Sandy Bridge central processing units (CPUs) and 64 gigabytes (GB) of memory. In addition, five nodes are equipped with 128 GB of memory for jobs that are especially memory intensive. The systemwide totals of 5,088 cores and 20.4 terabytes (TB) of memory are two and a half times as many cores and four times the memory of any other system at Virginia Tech. BlueRidge is also the first Sandy Bridge cluster at Virginia Tech, an important distinction as Sandy Bridge CPUs have the ability to do twice the number of double precision computations in a single cycle as their Intel Westmere predecessors.
The large number of cores available on BlueRidge will allow Virginia Tech researchers to run massively parallel simulations, letting them tackle more complicated problems more quickly than ever before. And the system’s huge memory footprint will enable faculty to investigate the kinds of big-data subjects that are increasingly the focus of attention in computationally intensive arenas.
In addition, ARC is currently working on adding two Intel Xeon Phi coprocessors to 130 of the 318 nodes (260 Xeon Phi cards in all), with an expected release of those nodes in Fall 2013. This architecture (also known as Many-Integrated-Core, or MIC) is considered a significant development in high-performance computing, providing accelerated capability reminiscent of GPUs, but with tighter integration with CPUs and compatibility with existing CPU programming paradigms (C/C++, Fortran, etc.).
BlueRidge, the NSF-funded HokieSpeed CPU-GPU cluster, and the shared-memory system HokieOne provide researchers with a variety of options to address specific computing requirements arising from an array of research areas. All of these systems are housed in the university’s cooled, access-restricted machine room in the Corporate Research Center and maintained by ARC, a unit within the Office of the Vice President for Information Technology devoted to maintaining, advancing, and supporting large-scale research computing systems at the university.