Parallelism and Concurrency

  • Class: 45 hours
  • Practice: 15 hours
  • Independent work: 90 hours
Total: 150 hours

Course objectives

  • Multiple simultaneous computations; goals of parallelism (e.g., throughput) versus concurrency (e.g., controlling access to shared resources)
  • Parallelism, communication, and coordination; goals and basic models of parallelism
  • Shared memory; atomicity; symmetric multiprocessing (SMP)
  • Multicore processors; shared vs. distributed memory
  • SIMD, vector processing; GPU, co-processing
  • Programming constructs for parallelism
  • Task-based decomposition
  • Midterm exam
  • Data-parallel decomposition
  • Programming errors not found in sequential programming
  • Models for parallel program performance
  • Evaluating communication overhead
  • Load balancing
  • Actors and reactive processes (e.g., request handlers)
  • Final exam
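The shared memory and atomicity topics above can be illustrated with a minimal Python sketch (thread and iteration counts are illustrative): a read-modify-write such as `counter += 1` is not atomic in general, so concurrent increments from several threads need a lock to avoid lost updates.

```python
import threading

THREADS = 4
INCREMENTS = 100_000

counter = 0
lock = threading.Lock()

def worker():
    """Increment the shared counter; the lock makes each += indivisible."""
    global counter
    for _ in range(INCREMENTS):
        with lock:  # without this lock, concurrent updates may be lost
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 with the lock; without it, possibly less
```

Removing the `with lock:` line turns this into exactly the kind of programming error "not found in sequential programming" listed above: a data race whose effect depends on thread scheduling.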
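Data-parallel decomposition (splitting one data set into chunks and applying the same operation to each) can be sketched as follows; the chunking scheme, worker count, and use of threads are illustrative, and CPU-bound Python code would need processes rather than threads for true parallel speedup.

```python
from concurrent.futures import ThreadPoolExecutor

def sum_of_squares(chunk):
    """Every worker applies the same operation to its own chunk."""
    return sum(x * x for x in chunk)

data = list(range(1000))
n_workers = 4

# Split the data into one contiguous chunk per worker.
chunk_size = (len(data) + n_workers - 1) // n_workers
chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

with ThreadPoolExecutor(max_workers=n_workers) as pool:
    partials = list(pool.map(sum_of_squares, chunks))

# Combine the partial results; this matches the sequential answer.
total = sum(partials)
assert total == sum(x * x for x in data)
```

Task-based decomposition differs only in what is submitted: instead of chunks of one data set, independent tasks (possibly running different functions) are handed to the pool.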
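Models for parallel program performance typically start from Amdahl's law: if a fraction p of a program can be parallelized across n workers, the overall speedup is bounded by 1 / ((1 - p) + p / n). A small sketch (the example numbers are illustrative):

```python
def amdahl_speedup(p, n):
    """Speedup predicted by Amdahl's law for parallel fraction p on n workers."""
    return 1.0 / ((1.0 - p) + p / n)

# With a 5% serial fraction, 4 workers give well under 4x speedup,
# and even unlimited workers cap the speedup at 1 / 0.05 = 20x.
print(round(amdahl_speedup(0.95, 4), 2))  # 3.48
```

Communication overhead and load imbalance, also listed above, only push real speedups further below this bound.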

Required reading

John L. Hennessy, David A. Patterson (2017), Computer Architecture: A Quantitative Approach, Morgan Kaufmann
Peter Pacheco (2011), An Introduction to Parallel Programming, Elsevier
Ruud van der Pas, Eric Stotzer, Christian Terboven (2017), Using OpenMP -- The Next Step, MIT Press
David R. Kaeli, Perhaad Mistry, Dana Schaa, Dong Ping Zhang (2015), Heterogeneous Computing with OpenCL 2.0, Morgan Kaufmann

Minimal learning outcomes

  • Recognize the types of parallelism in computer systems
  • Recognize the models of execution in parallel systems
  • Recognize the concept of concurrency and distinguish it from the concept of parallelism
  • Recognize the concepts of coherence, synchronization, and memory models in parallel systems
  • Apply learned concepts to decompose simple problems for parallel execution
  • Apply learned concepts to the performance optimization of programs