This distribution contains the source code for mpimemu, a memory consumption benchmark for MPI implementations.
Please see README.QUICKSTART.
See docs/mpimemu-report-20181114.pdf for more information.
Citing mpimemu
@techreport{gutierrez2018mpimemu,
title={{A Memory Consumption Benchmark for MPI Implementations}},
author={Samuel K. Guti\'{e}rrez},
year={2018},
institution={Los Alamos National Laboratory (LANL)},
address={Los Alamos, NM},
number={LA-UR-18-30898}
}
Citing memnesia
@article{gutierrez:ccpe:2019,
title={{On the Memory Attribution Problem: A Solution and Case Study Using MPI}},
author={Samuel K. Guti\'{e}rrez and Dorian C. Arnold and Kei Davis and Patrick
McCormick},
journal={Journal on Concurrency and Computation: Practice and Experience},
pages={e5159},
publisher={Wiley Online Library},
year={2019},
month={February},
doi={10.1002/cpe.5159}
}
First, make certain that mpicc (or another MPI wrapper compiler with similar capabilities) is in your $PATH.
# mpicc and cc are checked by default
./configure
./configure CC=<NEWCC>
make && make install
./configure --prefix=$HOME/local/mpimemu && make && make install
mpimemu does not necessarily need to be installed via make install. It can be used directly from within its source directory.
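For example, a minimal in-place build might look like the following sketch; the ./src/mpimemu-run path matches the run-from-source invocation shown later in this README, and the environment settings described below still apply.
# configure and build, then use the helper directly from the source tree
./configure && make
./src/mpimemu-run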
mpimemu-run is a helper utility used to run a succession of mpimemu instances of various sizes. The following environment variables change the way mpimemu-run behaves.
MPIMEMU_MAX_PES: Specifies the maximum number of MPI processes that will be launched.
MPIMEMU_RUN_CMD: Specifies a run template that will be used to launch mpimemu jobs (a sample setting follows the placeholder definitions below). For example:
mpirun -n nnn aaa
mpirun -n nnn -npernode NNN aaa
aprun -n nnn aaa
aprun -n nnn -N NNN aaa
nnn: Replaced with the total number of processes to be launched. This value changes after each run and is determined by MPIMEMU_NUMPE_FUN(X). More on MPIMEMU_NUMPE_FUN below.
aaa: Replaced with the mpimemu run string.
NNN: Replaced with MPIMEMU_PPN. If MPIMEMU_PPN is not set, 1 will be used.
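For instance, on a system that launches jobs with mpirun, the second template above could be exported as-is. This is only a sketch; the placeholders must be left literal so mpimemu-run can substitute them at launch time.
# run template; nnn, NNN, and aaa are replaced by mpimemu-run for each job
export MPIMEMU_RUN_CMD="mpirun -n nnn -npernode NNN aaa"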
MPIMEMU_NUMPE_FUN: Specifies the function that determines how nnn grows with respect to X. "X" must be provided within the string defining the function. For example:
export MPIMEMU_START_INDEX=1
export MPIMEMU_NUMPE_FUN="X+1"
export MPIMEMU_MAX_PES=4
This will run jobs of size 1, 2, 3, and 4. The following operators are supported:
+ Addition
* Multiplication
** Exponentiation
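As a further, hypothetical illustration, an exponentiation-based sweep can double the job size at each step. With the settings below, X starts at 0 and increments by 1 while 2**X <= MPIMEMU_MAX_PES, so jobs of size 1, 2, 4, and 8 are launched.
# powers-of-two scaling sweep
export MPIMEMU_START_INDEX=0
export MPIMEMU_NUMPE_FUN="2**X"
export MPIMEMU_MAX_PES=8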
MPIMEMU_START_INDEX: Specifies the starting integer value of an increasing value, X. When set, X is incremented by 1, starting at the specified value, while MPIMEMU_NUMPE_FUN(X) <= MPIMEMU_MAX_PES.
MPIMEMU_PPN: Specifies the number of MPI processes per node.
MPIMEMU_DATA_DIR_PREFIX: Specifies the base directory where mpimemu data are stored.
MPIMEMU_BIN_PATH: Specifies the directory where mpimemu is located.
MPIMEMU_SAMPS_PER_S: Integer value specifying the number of samples that will be collected every second. Default: 10 samples per second.
MPIMEMU_SAMP_DURATION: Integer value specifying the sampling duration in seconds. Default: 10 seconds.
MPIMEMU_DISABLE_WORKLOAD: If set, disables the synthetic MPI communication workload.
Please note that mpimemu requires that all nodes have the same number of tasks. This implies that for all values of MPIMEMU_NUMPE_FUN(X), the tasks-per-node requirement must be maintained. Otherwise, some run failures may occur.
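Putting the pieces together, a complete environment setup might look like the sketch below. The paths are hypothetical placeholders, and the settings keep every job size a multiple of MPIMEMU_PPN so the same-tasks-per-node requirement is satisfied.
# where mpimemu lives and where data should be written (hypothetical paths)
export MPIMEMU_BIN_PATH=$HOME/local/mpimemu/bin
export MPIMEMU_DATA_DIR_PREFIX=$HOME/mpimemu-data
# launch template (mpirun example from above)
export MPIMEMU_RUN_CMD="mpirun -n nnn -npernode NNN aaa"
# 2 processes per node; job sizes 2, 4, 6, and 8 are all multiples of 2
export MPIMEMU_PPN=2
export MPIMEMU_START_INDEX=1
export MPIMEMU_NUMPE_FUN="X*2"
export MPIMEMU_MAX_PES=8
# sample 10 times per second for 10 seconds (the documented defaults, shown explicitly)
export MPIMEMU_SAMPS_PER_S=10
export MPIMEMU_SAMP_DURATION=10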
Once your environment is set up, add your mpimemu installation prefix to your $PATH or run from within the source distribution:
mpimemu-run
./src/mpimemu-run
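For example, assuming the $HOME/local/mpimemu prefix used earlier and the usual autotools layout (installed programs under the prefix's bin/ directory, which is an assumption in this sketch):
# make the installed tools visible, then launch the helper
export PATH=$HOME/local/mpimemu/bin:$PATH
mpimemu-run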
When complete, the data directory will be echoed. For example:
data written to: /users/samuel/mpimemu-samuel-01032013
mpimemu-mkstats consolidates data generated by mpimemu-run. To generate CSV files, simply run mpimemu-mkstats -i [/path/to/data]. For more options and information, run mpimemu-mkstats -h.
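Continuing the example above, where mpimemu-run reported its data directory, the consolidation step would be:
# consolidate the collected samples into CSV files
mpimemu-mkstats -i /users/samuel/mpimemu-samuel-01032013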
mpimemu-plot can be used to generate node and process memory usage plots from output generated by mpimemu-mkstats. gnuplot and ps2pdf are required to be in your PATH.
Take idle system memory usage into consideration (e.g., system image size). mpimemu can provide a general sense of memory usage scaling, but it is important to note that not all of the memory consumed on a compute resource is due to MPI library usage. Generally, it is good practice to start your scaling runs at 1 MPI process to get a sense of close-to-base memory usage (i.e., base system usage).
mpimemu-plot presents MPI memory usage as MemUsed - Pre Init MemUsed. Please contact me if you have a better metric.
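For instance (using hypothetical numbers), if a node's MemUsed is 1.2 GB while mpimemu is sampling and its Pre Init MemUsed was 1.0 GB, the reported MPI memory usage for that node is 0.2 GB.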
Why is configure failing with:
[X] cannot compile MPI applications. cannot continue.
Try the following, if using mpicc:
./configure CC=mpicc