
**Class type:** Programming Language

**Class date:** 2018-10-05

Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. It provides a sophisticated compiler, distributed parallel execution, numerical accuracy, and an extensive mathematical function library. Julia’s LLVM-based just-in-time (JIT) compiler, combined with the language’s design, allows it to approach and often match the performance of C. This short course will provide an overview of the language, including comparisons with Matlab, R, and Python. No prior familiarity with Julia is required.

**Slides:** Julia_2018Oct05

**Class type:** Introduction

**Class date:** 2018-09-21

Intro_HPC_ARC_combined_Sep2018

**Class type:** Introduction

**Class date:** 2018-02-21

This two-hour NLI class was presented on February 21 by ARC computational scientist Ahmed Ibrahim.

It presented an overview of:

* what systems are available

* how to request an account and allocations

* how to login and transfer files

* how to prepare a job script

PDF version of slides

**Class type:** Introduction

**Class date:** 2018-02-22

This presentation introduces the ARC cluster NewRiver, and explains how a new user can log in, set up the environment, transfer files, and write scripts to request that a program be executed in a particular queue. Brief examples are given of job scripts for sequential, OpenMP, MPI, and CUDA programs.

The presentation was prepared for the CMDA3634 class, and the students are expected to use the allocation associated with the class. Other interested users may find the presentation helpful. The example scripts using open_q will work as given, but those that specify an allocation would have to be changed to the user’s own allocation.
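The general shape of such a job script can be sketched as follows. This is only an illustrative outline assuming a PBS-style scheduler; the directive values and program name here are placeholders, and the handout and TAR file below give the actual working examples for NewRiver.

```shell
#!/bin/bash
# Minimal sketch of a sequential job script for a PBS-style scheduler.
# Resource values are illustrative placeholders, not NewRiver's real limits.
#PBS -l nodes=1:ppn=1         # one core on one node
#PBS -l walltime=00:10:00     # ten-minute wall-clock limit
#PBS -q open_q                # the queue that works without a class allocation
cd $PBS_O_WORKDIR             # start in the directory the job was submitted from
./my_program                  # stand-in name for your own executable
```

A script that targets an allocation-backed queue would add the appropriate account directive with the user’s own allocation, as the paragraph above notes.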

Note that ARC staff are available to give similar presentations to classes which involve high performance computing.

The 8-page PDF handout

The TAR file of examples

**Class type:** Introduction

**Class date:** 2018-02-14

“R” is a statistical programming language that has become increasingly popular in data analysis and statistical applications. This tutorial will describe how an R user can leverage the parallel computing capabilities of modern supercomputers to speed up large computations and/or run large numbers of similar operations, such as Monte Carlo simulations, at the same time.

**Slides:** Parallel-R-2018
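The pattern the tutorial describes — many independent simulations run at the same time — can be sketched as follows. The sketch is in Python rather than R, purely to illustrate the idea of farming identical Monte Carlo runs out to worker processes; the class itself covers the equivalent R tooling.

```python
import random
from multiprocessing import Pool

def estimate_pi(n_samples: int) -> float:
    """One independent Monte Carlo run: estimate pi by sampling the unit square."""
    rng = random.Random()
    hits = sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n_samples

if __name__ == "__main__":
    # Run eight identical simulations concurrently across four worker processes,
    # then combine the independent estimates.
    with Pool(processes=4) as pool:
        estimates = pool.map(estimate_pi, [100_000] * 8)
    print(sum(estimates) / len(estimates))
```

Because each run is independent, the speedup scales with the number of workers, which is exactly what makes this class of problem a good fit for a cluster.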

**Class type:** Programming Language

**Class date:** 2018-02-12

Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. It provides a sophisticated compiler, distributed parallel execution, numerical accuracy, and an extensive mathematical function library. Julia’s LLVM-based just-in-time (JIT) compiler, combined with the language’s design, allows it to approach and often match the performance of C. This short course will provide an overview of the language, including comparisons with Matlab, R, and Python. No prior familiarity with Julia is required.

**Slides:** Julia_2018Feb12

**Code:** Julia_2018Feb12_Examples

**Class type:** Introduction

**Class date:** 2017-11-01

A presentation on the use of CUDA on NewRiver has been prepared for the class CMDA 3634.

The notes and accompanying sample codes are available in

cmda3634_cuda_2017_fall_vt.tar

**Class type:** Programming Language

**Class date:** 2017-10-27

Many scientific programmers write all of their own code, yet many common programming operations amount to linear algebra (e.g., matrix multiplication or factorization). Linear algebra libraries such as MKL or ATLAS provide implementations optimized for specific CPU architectures, offering code that is faster both to write and to execute.

This session will provide an overview of linear algebra libraries, the routines they provide, and how to use them in code. An overview of higher-level packages and solvers, such as PETSc and Trilinos, will also be provided.
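As a taste of the contrast described above, the sketch below compares a hand-written triple loop against a library call. It uses NumPy as the example interface, since its matrix product dispatches to an underlying optimized BLAS (such as MKL or OpenBLAS) rather than interpreted loops; the session itself covers calling such libraries from compiled code.

```python
import numpy as np

def naive_matmul(A, B):
    """Hand-rolled triple loop: correct, but far slower than a tuned GEMM kernel."""
    n, k = A.shape
    k2, m = B.shape
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for p in range(k):
                C[i, j] += A[i, p] * B[p, j]
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 40))
B = rng.standard_normal((40, 30))

C_lib = A @ B                  # delegates to the linked BLAS matrix-multiply
C_naive = naive_matmul(A, B)   # same mathematical result, much slower
print(np.allclose(C_lib, C_naive))
```

The two results agree to floating-point tolerance; the difference is that the library version was written once by the vendor, tuned per architecture, and reused everywhere.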

**Slides:** LA_Libraries_2017Oct27

**Hands On Instructions:** LA_Libraries_2017Oct27_HandsOn_Instructions

**Examples and Hands On Materials:** LA_Libraries_2017Oct27_Codes

**Class type:** Introduction

**Class date:** 2017-10-26

The notes and example files for the NLI class “Introduction to OpenMP Programming”, offered on 27 October 2017, are available here:

openmp_intro_2017_vt.tar

**Class type:** Programming Language

**Class date:** 2017-10-25

This workshop will describe how to use Matlab to leverage the parallel computing capabilities of modern CPU architectures, including those of Virginia Tech’s supercomputers. Users will learn how to use parfor, spmd, and distributed array constructs to parallelize Matlab code, best practices for doing so, how to run parallel jobs locally, and how to run them on Virginia Tech’s supercomputers.

A variety of example codes will be provided for student use.

Attendees are expected to be familiar with Matlab basics, but will not need any experience with parallel programming.

**Slides:** ARC_Matlab_2017Oct25

**Parfor Codes:** ARC_Matlab_2017Oct25_parfor_codes

**SPMD Codes:** ARC_Matlab_2017Oct25_spmd_codes