Memory profiling of MPI-parallel runs

Initialization

In order to execute the individual solver runs, we are going to employ the mini batch processor, which runs the calculations on the local machine. We also have to initialize the workflow management system and create a database.

Note:

  1. This tutorial can be found in the source code repository as convStudy.ipynb. One can directly load this into Jupyter to interactively work with the following code examples.
  2. In the following line, the reference to BoSSSpad.dll is required. You must either set #r "BoSSSpad.dll" to something which is appropriate for your computer (e.g. C:\Program Files (x86)\FDY\BoSSS\bin\Release\net5.0\BoSSSpad.dll if you installed the binary distribution), or, if you are working with the source code, you must compile BoSSSpad and put it side by side with this worksheet file (from the original location in the repository, you can use the scripts getbossspad.sh resp. getbossspad.bat).
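The following sketch shows how such an initialization typically looks in a BoSSSpad worksheet. The project name "MemoryProfiling" is a placeholder, and the accessor for the default execution queue may differ between BoSSS versions:

    #r "BoSSSpad.dll"
    using System;
    using BoSSS.Application.BoSSSpad;
    using static BoSSS.Application.BoSSSpad.BoSSSshell;

    // Load the BoSSSpad environment.
    Init();

    // Initialize the workflow management system; the project name is arbitrary.
    BoSSSshell.WorkflowMgm.Init("MemoryProfiling");

    // Create a temporary database for the grids and sessions of this study.
    var tempDb = CreateTempDatabase();

    // Use the mini batch processor, i.e. run the calculations on the local machine;
    // in older versions one would create a MiniBatchProcessorClient explicitly.
    var myBatch = BoSSSshell.GetDefaultQueue();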

Memory instrumentation of grid generation

Perform runs
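A sketch of how the parallel runs could be submitted, looping over the MPI sizes of interest. Here, GridGenControl() stands for whatever helper builds the control object of the grid-generation case and is purely hypothetical, as is solverType (the main class of the solver); the Job-related calls follow the usual BoSSSpad pattern but may differ slightly between versions:

    // MPI core counts to investigate (placeholder values).
    int[] coreCounts = new int[] { 1, 2, 4, 8, 16 };

    foreach (int np in coreCounts) {
        // Hypothetical helper creating the control object of the grid-generation case.
        var ctrl = GridGenControl();
        ctrl.SessionName = "GridGen-" + np + "procs";

        // Wrap the control object into a job and request 'np' MPI processes.
        var j = new Job(ctrl.SessionName, solverType);  // 'solverType': placeholder
        j.SetControlObject(ctrl);
        j.NumberOfMPIProcs = np;
        j.Activate(myBatch);
    }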

Asserting success:
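Before the analysis, one can wait for all jobs to terminate and verify that every run finished successfully; a sketch, assuming the usual workflow-management members (exact names may vary between versions):

    // Wait until every job submitted in this project has terminated.
    BoSSSshell.WorkflowMgm.BlockUntilAllJobsTerminate();

    // Verify that each run finished successfully; abort the worksheet otherwise.
    foreach (var kv in BoSSSshell.WorkflowMgm.AllJobs) {
        if (kv.Value.Status != JobStatus.FinishedSuccessful)
            throw new ApplicationException($"Job '{kv.Key}' ended with status {kv.Value.Status}.");
    }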

Analysis and plot

We are going to observe that the memory scaling is far from perfect at this point.
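For the analysis, the memory traces of all runs are collected first. The sessions of the project are taken from the workflow management; GetMemoryTrace(...) is a hypothetical placeholder for whatever accessor returns the recorded memory consumption of a session:

    using System.Linq;
    using System.Collections.Generic;

    // All sessions of the current project.
    var sessions = BoSSSshell.WorkflowMgm.Sessions;

    // Memory trace (e.g. allocated MB, sampled over the run) per session name;
    // 'GetMemoryTrace' is hypothetical and depends on how the instrumentation
    // data is stored.
    var traces = new Dictionary<string, double[]>();
    foreach (var sess in sessions) {
        traces.Add(sess.Name, GetMemoryTrace(sess));
    }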

Maximum of each trace:
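With the traces at hand, the peak memory of each run is simply the maximum of the respective trace; a sketch, continuing with the hypothetical traces dictionary from above:

    // Peak memory of each run; for ideal scaling this value would not grow
    // (or grow only mildly) with the number of MPI processes.
    foreach (var kv in traces) {
        Console.WriteLine($"{kv.Key}: peak memory = {kv.Value.Max():F1} MB");
    }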

Reporting of largest Allocators
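One way of identifying the methods that allocate the most memory is to sort the per-allocator totals and print the top entries. The sketch below assumes the allocation records have already been condensed into a dictionary mapping an allocator (e.g. a traced method name) to its allocated bytes; how that dictionary is obtained from the BoSSS instrumentation output is not shown here:

    // Print the 'topN' largest allocators, sorted by allocated bytes.
    void ReportLargestAllocators(IReadOnlyDictionary<string, long> allocators, int topN = 10) {
        foreach (var kv in allocators.OrderByDescending(e => e.Value).Take(topN))
            Console.WriteLine($"{kv.Key,-60} {kv.Value / (1024.0 * 1024.0),10:F1} MB");
    }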

Reporting of difference/imbalance between different runs:
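Finally, the allocator reports of two runs (e.g. the serial run and the largest parallel run) can be compared entry by entry; large differences point to the code regions responsible for the imperfect memory scaling. Again a sketch over the hypothetical per-allocator dictionaries introduced above:

    // Report, for every allocator, the difference in allocated memory between two runs,
    // sorted by the magnitude of the difference.
    void ReportImbalance(IReadOnlyDictionary<string, long> runA, IReadOnlyDictionary<string, long> runB) {
        long Get(IReadOnlyDictionary<string, long> d, string key) => d.TryGetValue(key, out long v) ? v : 0;

        var allKeys = runA.Keys.Union(runB.Keys);
        foreach (string k in allKeys.OrderByDescending(key => Math.Abs(Get(runB, key) - Get(runA, key)))) {
            double diffMB = (Get(runB, k) - Get(runA, k)) / (1024.0 * 1024.0);
            Console.WriteLine($"{k,-60} {diffMB,10:F1} MB");
        }
    }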