
High Performance Parallel Computing Bootcamp

June 23, 2008

July 28 - August 2, 2008
Sponsored by Virginia Tech and University of Virginia
At Advanced Communication and Information Technology Center (ACITC)
Torgersen Hall, Virginia Tech


Instructors: Drs. John Burkardt, Nicholas Polys, and Cal Ribbens, (VT)

Purpose: The purpose of this course is to introduce attendees to the basics of high-performance parallel computing. The course is targeted at graduate students, staff, and faculty with computational science and engineering problems that demand high performance. Upon successful completion, attendees will be able to: 1) optimize sequential applications; 2) understand the basics of parallel computing; 3) write basic MPI and OpenMP applications; and 4) understand the opportunities and challenges of data visualization tools and display technologies. Attendees will use queuing systems such as PBS and existing high-end resources at UVA and Virginia Tech.

Who Should Attend: Faculty, graduate students, and research staff at UVA, Virginia Tech, or any other Commonwealth university with computational science and engineering problems that need high performance, and anyone who wants their programs to run faster, complete sooner, or tackle problems previously considered too computationally difficult.

Topics to be covered include:

  • Performance measurement & optimization
  • Parallel computing concepts
    • Interconnection networks
    • Vector processors and multiprocessors
    • Multi-computers and clusters
  • Tightly coupled MPPs (massively parallel processors)
    • Limits to parallelization
  • Distributed memory computing: MPI (Message Passing Interface)
    • Simple stencil problems (e.g., explicit methods)
    • More complex irregular structures
  • Shared memory computing: OpenMP
  • Graphics and visualization
    • Design effective visualizations
    • Manipulate heterogeneous data with common visual analytic tools
    • Produce visualizations for collaboration or publication

Prerequisites: Participants are expected to know one or more of C, C++, or Fortran, as well as Unix basics such as editing, compiling, the file system, and simple scripts. A brief "Introduction to Unix" session will be offered roughly a week before the bootcamp begins for those who need it or want a refresher.

Format: Morning lectures and afternoon hands-on computer exercises, with multiple support staff present to assist participants. Free morning and afternoon snacks and a boxed lunch will be provided.

Suggested Supplemental Text: Quinn, Michael J., "Parallel Programming in C with MPI and OpenMP" (McGraw-Hill)

The bootcamp is free to attendees. Financial assistance is available for accommodations for UVA students.

Contact:

At Virginia Tech: Nicholas Polys, Director of Visual Computing
Email: npolys@vt.edu

At University of Virginia: Alice Howard, Special Assistant to the VP/CIO
Email: agh@virginia.edu


Schedule


Day: TBA, as needed at local campus
Topic: Unix Basics
Details: This optional session will introduce Unix basics such as editing, makefiles, shell scripting, ssh, and the file system to attendees who are unfamiliar with Unix.

Day: Monday, July 28 (Burkardt)
Topics: Background; Parallelism basics
Details (Background): Computer architecture basics, especially the cache; sequential program optimization, performance profiles, etc.
Details (Parallelism basics): Parallel computer architectures, Flynn's taxonomy, message-passing and shared-memory machines; styles of parallel programming, from high-throughput to vector; problem decomposition; high-throughput examples from bioinformatics to movies.
HW: Optimize a sequential program.

Day: Tuesday, July 29 (Ribbens)
Topic: Distributed Memory (DM) 1
Details: Basic architecture, scalability discussion, programming model, performance cost modeling & estimation; MPI basics (init, send, receive); setting up an MPI job in the queue.
HW: A simple MPI program such as rings.
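The classic "rings" exercise can be sketched as below. This is an illustrative sketch, not the assigned solution; it assumes a working MPI installation (compile with mpicc, launch with e.g. mpirun -np 4 ./ring).

```c
/* Each rank passes a token around a ring: rank 0 starts it, every other
   rank receives from its left neighbor, increments, and forwards right. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, size, token;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int next = (rank + 1) % size;          /* right neighbor in the ring */
    int prev = (rank + size - 1) % size;   /* left neighbor in the ring  */

    if (rank == 0) {
        token = 0;
        MPI_Send(&token, 1, MPI_INT, next, 0, MPI_COMM_WORLD);
        MPI_Recv(&token, 1, MPI_INT, prev, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    } else {
        MPI_Recv(&token, 1, MPI_INT, prev, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        token++;
        MPI_Send(&token, 1, MPI_INT, next, 0, MPI_COMM_WORLD);
    }
    printf("rank %d of %d saw token %d\n", rank, size, token);
    MPI_Finalize();
    return 0;
}
```

The six MPI calls used here (init, finalize, rank, size, send, receive) are the "MPI basics" listed above; most of Tuesday's material builds on them.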

Day: Wednesday, July 30 (Ribbens)
Topics: DM 2; DM 3
Details (DM 2): More MPI: scatter/gather, barrier, data layout, HALO.
HW: HALO performance surface generation.
Details (DM 3): Simple methods with MPI: matrix multiply, SOR, Gaussian elimination, sorting, etc.
HW: Gaussian elimination.

Day: Thursday, July 31 (Burkardt)
Topic: Shared memory
Details: Shared-memory lecture with a focus on limitations and the programming model, e.g., threads.
HW: Gaussian elimination using threads and/or OpenMP.

Day: Friday, August 1 (Polys)
Topics: Graphics & visualization; Picnic/Tailgate
Details: Design effective visualizations, manipulate heterogeneous data with common visual analytic tools, and produce visualizations for collaboration or publication.
HW: Visualization design.

Day: Saturday, August 2 (Polys)
Topics: Visualization; Prizes and Awards Session
Details: Designing for multiple screens and stereo; experience first-hand the latest large-format and immersive display technologies.