The research group for Parallel Computing is concerned with means and methods for efficiently utilizing both real (compute clusters, supercomputers) and idealized (PRAM, abstract communication networks) parallel computing architectures to solve challenging computational problems.

Some concrete themes that we are pursuing are:

  • Parallel programming interfaces for High Performance Computing (HPC) and their efficient algorithmic support and implementation. The Message-Passing Interface (MPI) is an important paradigm that poses interesting design and implementation problems.
    Specific topics include the quality and performance portability of such interfaces, and algorithms for collective communication operations (see the MPI sketch after this list).
  • Shared-memory parallel computing: programming models, interfaces (OpenMP), frameworks, algorithms and (concurrent, lock- and wait-free) data structures, and scheduling support (a small OpenMP sketch follows below).
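To make the notion of a collective communication operation concrete, here is a minimal MPI sketch in C: every process contributes one value, and MPI_Allreduce combines the contributions so that every process obtains the global sum. The call itself is standard MPI; how such an operation is realized internally, portably and efficiently across different machines, is the kind of algorithmic question pursued here.

    /*
     * Minimal sketch of a collective communication operation:
     * each process contributes its rank, and MPI_Allreduce
     * delivers the sum over all ranks to every process.
     *
     * Compile: mpicc allreduce.c -o allreduce
     * Run:     mpirun -np 4 ./allreduce
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Each process contributes one value; all receive the global sum. */
        int local = rank, sum = 0;
        MPI_Allreduce(&local, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        printf("rank %d: sum over all ranks = %d\n", rank, sum);

        MPI_Finalize();
        return 0;
    }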
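Similarly, a minimal shared-memory sketch, assuming a C compiler with OpenMP support: the iterations of a dot-product loop are divided among threads, and the reduction clause combines the per-thread partial sums. This is one of the simplest instances of the programming-model and interface support mentioned above.

    /*
     * Minimal OpenMP sketch: a parallel-for loop with a reduction.
     *
     * Compile: gcc -fopenmp dotprod.c -o dotprod
     */
    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static double a[N], b[N];
        double dot = 0.0;

        /* Initialize the operands (serially, for simplicity). */
        for (int i = 0; i < N; i++) {
            a[i] = 1.0;
            b[i] = 2.0;
        }

        /* Loop iterations are divided among the threads; the
           reduction clause combines the per-thread partial sums. */
        #pragma omp parallel for reduction(+:dot)
        for (int i = 0; i < N; i++)
            dot += a[i] * b[i];

        printf("dot product = %f (threads available: %d)\n",
               dot, omp_get_max_threads());
        return 0;
    }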