
Welcome to Physics 244, Parallel Computing for Science and Engineering, Spring 2017

Course Location: Mayer Hall Addition 2623, Tu & Th, 2:00-3:20pm

Course Instructor: Michael Norman, mlnorman@…

Guest Lecturers: Mahidhar Tatineni, mahidhar@…, Andreas Goetz, agoetz@…, Alex Breuer, anbreuer@…, James Bordner, jobordner@…

TritonEd site

Class Announcements

  • Check this space frequently for homework assignments and announcements
  • (3/29) Follow these instructions to request an account on Comet.
  • (3/29) Homework 1 posted (due Monday, April 10)
  • (4/3) Due to grad open house, Thursday's class will be held in Mayer Hall 4322 (Mayer Room)
  • (4/9) Homework 2 posted (due Monday, April 17)
  • (4/10) Comet user guide is now available on Cornell Virtual Workshop
  • (4/13) Enrollment limit increased. I have increased class enrollment from 40 to 46, the maximum allowed by the room and the physics department. The first 6 students on the wait list will be enrolled in the course by the end of the day. To those of you further down the waitlist, I am sorry. I will be teaching PHYS 244 again next year, and hope to see you there.
  • (4/18) OpenMP lecture slides posted
  • (4/20) Homework 3 posted (due Friday, April 28)
  • (4/28) Class project guidelines are here. Note multiple deadlines.
  • (4/25) CUDA/GPU lectures posted for those who want to get a head start
  • (4/28) Advanced OpenMP slides and labs posted to Lecture Schedule and Slides page.
  • (5/15) MPI lectures posted
  • (5/15) Be sure to use account csd453 in your batch script when submitting to Comet (see the batch-script sketch after this list).
  • (5/22) Schedule Change: Thursday's lecture will be on how to debug your application on Comet.
  • (5/22) Due to the large number of students this year, there will be no in-class presentations of term projects. Grades will be based solely on submitted materials. Revised guidelines will be posted shortly.
  • (5/31) Revised term project guidelines are here. These supersede the previous guidelines; note the new deadline.
  • (6/1) Instructions on how to use VisIt on Comet are now posted.
  • (6/6) GPU node reservations: Please read these instructions to improve your GPU node throughput, and limit yourself to 30-minute jobs so that others can get a turn. Thank you.
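For reference, below is a minimal sketch of a Slurm batch script that follows the two announcements above: it charges the class account csd453 and requests a single GPU for at most 30 minutes. The partition name, GPU request syntax, module name, and executable name are assumptions for illustration; check the Comet user guide for the exact values used in the labs.

    #!/bin/bash
    #SBATCH --job-name=gpu_test        # name shown in the queue
    #SBATCH --account=csd453           # class allocation (5/15 announcement)
    #SBATCH --partition=gpu-shared     # assumed GPU partition; verify against the Comet user guide
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=1
    #SBATCH --gres=gpu:1               # assumed syntax for requesting one GPU
    #SBATCH --time=00:30:00            # keep GPU jobs to 30 minutes (6/6 announcement)

    module load cuda                   # assumed module name on Comet
    ./my_gpu_program                   # replace with your own executable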

Description and Syllabus

Syllabus

This hands-on course will cover the theory and practice of using supercomputers and parallel clusters to solve problems in science and engineering. Students will learn how to write their own simple parallel programs using OpenMP, the Message Passing Interface (MPI) library, and CUDA, and to execute them on HPC resources at the San Diego Supercomputer Center (a minimal OpenMP sketch appears after the topic list below). They will also learn how to debug parallel programs and analyze their parallel performance using state-of-the-art tools, and will be exposed to parallel application packages used to solve large-scale problems in science and engineering. Instruction will consist of both classroom lectures and computer lab tutorials and exercises. Topics include:

  • Overview of scientific and engineering supercomputing
  • Parallel architectures
  • Parallel programming models
  • Parallel programming with OpenMP, MPI, and CUDA
  • Debugging parallel programs
  • Performance analysis and optimization
  • Parallel applications in science and engineering
  • Parallel data analysis and visualization tools
  • Advanced topics (hybrid parallelism, GPGPUs, petascale computing)
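
To give a concrete sense of the "simple parallel programs" mentioned above, here is a minimal OpenMP hello-world sketch in C. It is illustrative only; the file name and compile command in the comment are assumptions (GCC shown), and the actual exercises are those posted on the Lecture Schedule and Slides page.

    /* hello_omp.c -- minimal OpenMP example: each thread prints its ID.
       Compile with, e.g.:  gcc -fopenmp hello_omp.c -o hello_omp       */
    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        /* Everything inside the parallel region runs once per thread. */
        #pragma omp parallel
        {
            int tid      = omp_get_thread_num();    /* this thread's ID          */
            int nthreads = omp_get_num_threads();   /* number of threads in team */
            printf("Hello from thread %d of %d\n", tid, nthreads);
        }
        return 0;
    }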

Preparation

Students are expected to possess a working knowledge of Unix or Linux, and be capable of understanding and writing simple programs in at least one of the following compiled languages: C, C++, or Fortran. For those who need a refresher course on these topics, I recommend the tutorials available at the Cornell Virtual Workshop. Much of what we cover in this course is also covered there. Consider it your textbook for the course.

Schedule and Lecture Slides

Class Resources (updated 6/1/17)
