
**Class type:** Introduction

**Class date:** 2015-09-18

The number of cores in modern processors continues to increase, and developers must parallelize their code to reap the maximum benefit of new and future hardware. Threading Building Blocks (TBB) is a C++ template library designed for expressing task parallelism. It includes parallel algorithms, concurrent data structures, a scalable memory allocator, and a task scheduler. TBB is portable across most operating systems and is installed on Virginia Tech's clusters. Topics covered include:

– General discussion of parallel programming

– TBB overview

– TBB examples

**Slides:** ThreadBuildingBlocks_2015Sept18

**Code:** tbbTest

**Class type:** Programming Language

**Class date:** 2015-10-08

This session provides an introduction to parallel programming in OpenMP, with both a lecture component and a hands-on component. Participants will receive an introduction to OpenMP and write basic parallel programs that illustrate and use these concepts. Topics covered include:

- Introduction to shared-memory architecture
- Compiling and running OpenMP code
- Fundamental concepts in OpenMP: pragmas, functions, and environment variables
- Parallelization of loops using OpenMP
- Using OpenMP to develop scalable, high-performance applications for shared memory

Participants are expected to be familiar with the Linux environment; prior programming experience in C, C++, or Fortran is recommended.

**Slides:** OpenMP_2015Oct08

**Codes:** OpenMP_2015Oct08_codes

**Class type:** Introduction

**Class date:** 2015-10-07

This workshop provides an introduction to parallel programming for graduate students, staff, and faculty tackling computationally intensive science and engineering problems. Upon successful completion of this course, the attendee will:

- Understand the theoretical background of and need for high-performance computing
- Understand the basic concepts and challenges of parallel computing
- Know fundamental concepts in MPI: the basic structure of MPI programs, communicators, and collective operations
- Know fundamental concepts in OpenMP: pragmas, functions, and environment variables
- Be able to compile and run MPI and OpenMP programs
- Be familiar with optimization best practices

**Slides:**

**Class type:** Programming Language

**Class date:** 2015-10-05

R is a statistical programming language that has become increasingly popular in data analysis and statistical applications. This course, the second in a two-part series, describes how a programmer can speed up an R program by parallelizing it with the Rmpi and pbdR (“Programming with Big Data in R”) packages. The intended audience for this course is experienced R users who want to leverage parallel architectures to speed up existing R code. Interested parties who are new to R are encouraged to attend the introductory R sessions offered by the Virginia Tech Statistics Department’s Laboratory for Interdisciplinary Statistical Analysis (LISA).

**Slides:** Rmpi_pbdR_2015Oct05

**Course codes** are available on ARC’s R page.

**Class type:** Introduction

**Class date:** 2015-09-21

R is a statistical programming language that has become increasingly popular in data analysis and statistical applications. This course, the first in a two-part series, describes how an R programmer can use the snow package to run many similar serial jobs at once. We will show how this functionality can be used to leverage the parallel computing capabilities of modern supercomputers to run large numbers of similar operations, such as Monte Carlo simulations, in parallel. The intended audience for this course is experienced R users who want to leverage parallel architectures to speed up existing R code. Interested parties who are new to R are encouraged to attend the introductory R sessions offered by the Virginia Tech Statistics Department’s Laboratory for Interdisciplinary Statistical Analysis (LISA).

**Slides:** R_Snow_2015Sept22

**Codes:** R_Snow_2015Sept22_codes

**Class type:** Programming Language

**Class date:** 2015-09-18

This workshop will describe how to use Matlab to leverage the parallel computing capabilities of modern CPU architectures, including those of Virginia Tech’s supercomputers. Users will learn how to use parfor, spmd, and distributed array constructs to parallelize Matlab code, best practices for doing so, how to run parallel jobs locally, and how to run them on Virginia Tech’s supercomputers. A variety of example codes will be provided for student use. Attendees are expected to be familiar with Matlab basics, but will not need any experience with parallel programming.

**Slides:** Matlab_Workshop_2015Sept18

**Class type:** Programming Language

**Class date:** 2015-09-16

This short course is the first in a three-part series on parallel programming in Matlab. This course presents an overview of Matlab Parallel Computing constructs and the applications of each. It discusses several ways to run parallel jobs in Matlab and finally discusses migrating those jobs to Virginia Tech’s Ithaca supercomputer. Attendees are expected to be familiar with Matlab basics, but will not need any experience with cluster computing.

**Slides:** Matlab_I_2015Sept16

**Code:** Matlab_I_2015Sept16_code

**Class type:** Introduction

**Class date:** 2016-03-21

This session provides an overview of the high-performance computing (HPC) resources managed by Advanced Research Computing (ARC), followed by a hands-on introduction to the user environment. This session provides an excellent opportunity for less experienced HPC users to acquaint themselves with the systems available at Virginia Tech and become more comfortable with their use. Topics covered include:

- Getting an account / allocation on ARC systems
- Fundamentals of the Linux environment
- Job submission and monitoring
- Managing the module environment

This session is designed to familiarize inexperienced users with the skills needed to interact with HPC resources at Virginia Tech.

**Slides:** Intro_ARC_2015Sept09

**Class type:** Introduction

**Class date:** 2016-03-21

This class will provide an overview of high-performance computing from a beginner’s perspective. It will cover the following topics:

1. What is supercomputing?
2. Why supercomputing and parallel computing are important
3. Key terminology and concepts in parallel computing and supercomputing
4. What modern supercomputers look like
5. Theoretical background for parallel computing

Any and all interested parties are welcome. Participants are not expected to have any background in supercomputing, or even programming, to attend this session.

**Slides:** Intro_HPC_2015Sept09