Scalable Parallel Computing Architectures (Spring 2026)

Date:

Overview

A workshop for university researchers on how to scale computation from a single CPU core to multiple GPUs across multiple nodes. It covers fundamental concepts, real benchmark results, and working code examples that run on the cluster.

Topics Covered

  • Why parallel computing? (pipelining vs. data parallelism)
  • Flynn’s Taxonomy and types of parallelism
  • Shared vs. distributed memory models (OpenMP, MPI; see the MPI sketch after this list)
  • Amdahl’s Law and strong vs. weak scaling (worked example below)
  • CPU parallelism with Conway’s Game of Life (serial, OpenMP, MPI+OpenMP)
  • GPU computing fundamentals and the CUDA workflow (sketch below)
  • Scaling ML workloads with PyTorch (single GPU, multi-GPU, multi-node; sketch below)
  • Parallel computing in Python, R, and MATLAB (Python sketch below)
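
To give a feel for the distributed-memory model, here is a minimal MPI sketch in Python, assuming the mpi4py package is available on the cluster (the workshop's own MPI examples may be in C; the API maps over almost one-to-one). Launch with, e.g., mpirun -n 4 python hello_mpi.py.

    # hello_mpi.py -- a minimal sketch, assuming mpi4py is installed
    from mpi4py import MPI

    comm = MPI.COMM_WORLD      # communicator spanning all launched ranks
    rank = comm.Get_rank()     # this process's ID, 0 .. size-1
    size = comm.Get_size()     # total number of ranks

    print(f"Hello from rank {rank} of {size}")

    # Point-to-point message: rank 0 sends a Python object to rank 1.
    if rank == 0 and size > 1:
        comm.send({"payload": 42}, dest=1, tag=0)
    elif rank == 1:
        data = comm.recv(source=0, tag=0)
        print(f"Rank 1 received: {data}")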
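Amdahl's Law caps the speedup of a program whose parallelizable fraction is f at S(p) = 1 / ((1 - f) + f / p) on p cores, which is why strong scaling flattens out. A quick worked example:

    # amdahl.py -- Amdahl's Law speedup bound S(p) = 1 / ((1 - f) + f / p),
    # where f is the parallelizable fraction and p the number of cores
    def amdahl_speedup(f: float, p: int) -> float:
        return 1.0 / ((1.0 - f) + f / p)

    # Even with 95% of the work parallelized, speedup saturates near 1/(1-f) = 20x.
    for p in (2, 8, 64, 1024):
        print(f"p={p:5d}  speedup={amdahl_speedup(0.95, p):6.2f}")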
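The CUDA workflow boils down to: allocate on the host, copy host-to-device, launch a kernel, copy the result back. The sketch below walks those same steps through PyTorch; the workshop's GPU material may use CUDA C directly, so treat this as the workflow, not the workshop code.

    # gpu_workflow.py -- the host/device round trip, sketched with PyTorch
    import torch

    assert torch.cuda.is_available(), "no CUDA device visible"
    device = torch.device("cuda:0")

    x = torch.randn(4096, 4096)   # 1. data starts on the host (CPU)
    x_gpu = x.to(device)          # 2. copy host -> device
    y_gpu = x_gpu @ x_gpu         # 3. kernel launch on the GPU (asynchronous)
    torch.cuda.synchronize()      # 4. wait for the kernel to finish
    y = y_gpu.cpu()               # 5. copy the result device -> host
    print(y.shape)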
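Scaling PyTorch past one GPU typically means one process per GPU with DistributedDataParallel. A minimal sketch follows; the toy model, data, and hyperparameters are placeholders, not the workshop's. Launch with, e.g., torchrun --nproc_per_node=2 ddp_minimal.py; the same script extends to multiple nodes via torchrun's --nnodes and rendezvous options.

    # ddp_minimal.py -- a minimal DistributedDataParallel sketch (one process per GPU)
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        dist.init_process_group(backend="nccl")      # NCCL backend for GPU collectives
        local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
        torch.cuda.set_device(local_rank)

        model = torch.nn.Linear(32, 1).cuda(local_rank)   # placeholder toy model
        model = DDP(model, device_ids=[local_rank])       # gradients sync automatically
        opt = torch.optim.SGD(model.parameters(), lr=0.01)

        for _ in range(10):                               # toy loop on random data
            x = torch.randn(64, 32, device=f"cuda:{local_rank}")
            loss = model(x).pow(2).mean()
            opt.zero_grad()
            loss.backward()                               # all-reduce happens here
            opt.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()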
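For the Python portion, the standard-library multiprocessing module sidesteps the global interpreter lock by farming work out to separate processes. A minimal sketch, with a placeholder task standing in for a real workload:

    # pool_map.py -- process-based parallelism with the standard library
    from multiprocessing import Pool

    def simulate(seed: int) -> float:
        # stand-in for an expensive, independent task (e.g., one Monte Carlo run)
        total = 0.0
        for i in range(1, 100_000):
            total += ((seed * i) % 7) / 7.0
        return total

    if __name__ == "__main__":
        with Pool(processes=4) as pool:               # 4 worker processes
            results = pool.map(simulate, range(8))    # run tasks in parallel
        print(results)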

Materials

Recording