This lesson is in the early stages of development (Alpha version)

High-Performance Computing, Beyond

Key Points

Compiling on a cluster
  • Compilers turn instructions written in a programming language into machine code that the computer can execute

Introduction to Parallelism
  • Parallelism can speed up the execution of your program

  • The structure of your code, and scheduling considerations, will influence how you parallelize your computational work

Compiling MPI code on a cluster
  • mpicc compiles MPI code written in C

  • mpif90 compiles MPI code written in Fortran (90)

  • mpirun is used to run MPI programs, specifying the number of processors with the -np flag

  • The rank of an MPI process does not determine which process starts or finishes first

Compiling OpenMP code on a cluster
  • gcc with the -fopenmp flag compiles OpenMP code written in C

  • gfortran with the -fopenmp flag compiles OpenMP code written in Fortran

  • The resulting program is run as usual, but the number of threads is controlled with the OMP_NUM_THREADS environment variable

Benchmarking and Optimization
  • Deciding how many processors to run on is an iterative task

  • Speedup and efficiency are two measures that can help us figure out how many processors to run on

Cheatsheets for Queuing System Quick Reference

Glossary