Merge pull request #231 from hpc/master
Merging master back into RC
JulianKunkel authored Jun 24, 2020
2 parents 5271198 + d483e3b commit 2a17f3e
Showing 65 changed files with 5,620 additions and 2,212 deletions.
14 changes: 14 additions & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -1,8 +1,11 @@
tags
Makefile
Makefile.in
aclocal.m4
config.log
config.status
COPYING
INSTALL
config/compile
config/config.guess
config/config.sub
@@ -12,24 +15,35 @@ config/missing
config/test-driver
configure
contrib/.deps/
contrib/cbif
contrib/Makefile
contrib/Makefile.in
contrib/cbif
doc/Makefile
doc/Makefile.in
src/.deps/
src/mdtest
src/Makefile
src/Makefile.in
src/config.h
src/config.h.in
src/stamp-h1
*[-.]mod
*~
NOTES.txt
autom4te.cache
contrib/cbif.o
src/*.o
src/*.i
src/*.s
src/*.a
src/ior
src/mdtest
src/testlib
src/test/.deps/
src/test/.dirstamp
src/test/lib.o
build/

doc/doxygen/build
doc/sphinx/_*/
2 changes: 1 addition & 1 deletion META
@@ -1,3 +1,3 @@
Package: ior
Version: 3.2.0
Version: 3.3.0+dev
Release: 0
18 changes: 17 additions & 1 deletion NEWS
@@ -1,3 +1,19 @@
Version 3.3.0+dev
--------------------------------------------------------------------------------

New major features:

New minor features:

Bugfixes:

Version 3.2.1
--------------------------------------------------------------------------------

- Fixed a memory protection bug in mdtest (Julian Kunkel)
- Fixed correctness bugs in mdtest leaf-only mode (#147) (rmn1)
- Fixed bug where mdtest attempted to stat uncreated files (Julian Kunkel)

Version 3.2.0
--------------------------------------------------------------------------------

@@ -104,7 +120,7 @@ Version 2.10.1
- Corrected IOR_GetFileSize() function to point to HDF5 and NCMPI versions of
IOR_GetFileSize() calls
- Changed the netcdf dataset from 1D array to 4D array, where the 4 dimensions
are: [segmentCount][numTasksWorld][numTransfers][transferSize]
are: [segmentCount][numTasks][numTransfers][transferSize]
This patch from Wei-keng Liao allows for file sizes > 4GB (provided no
single dimension is > 4GB).
- Finalized random-capability release
42 changes: 21 additions & 21 deletions README.md
@@ -1,31 +1,31 @@
# HPC IO Benchmark Repository [![Build Status](https://travis-ci.org/hpc/ior.svg?branch=master)](https://travis-ci.org/hpc/ior)

This repo now contains both IOR and mdtest.
See also NOTES.txt
This repository contains the IOR and mdtest parallel I/O benchmarks. The
[official IOR/mdtest documentation][] can be found in the `docs/` subdirectory or
on Read the Docs.

# Building
## Building

0. If "configure" is missing from the top level directory, you
probably retrieved this code directly from the repository.
Run "./bootstrap".
1. If `configure` is missing from the top level directory, you probably
retrieved this code directly from the repository. Run `./bootstrap`
to generate the configure script. Alternatively, download an
[official IOR release][] which includes the configure script.

If your versions of the autotools are not new enough to run
this script, download an official tarball in which the
configure script is already provided.
1. Run `./configure`. For a full list of configuration options, use
`./configure --help`.

1. Run "./configure"
2. Run `make`

See "./configure --help" for configuration options.
3. Optionally, run `make install`. The installation prefix
can be changed via `./configure --prefix=...`.

2. Run "make"
## Testing

3. Optionally, run "make install". The installation prefix
can be changed as an option to the "configure" script.
* Run `make check` to invoke the unit tests.
* More comprehensive functionality tests are included in `testing/`. These
scripts will launch IOR and mdtest via MPI.
* Docker scripts are also provided in `testing/docker/` to test various
distributions at once.

# Testing

Run "make check" to invoke the unit test framework of Automake.

* To run basic functionality tests that we use for continuous integration, see ./testing/
* There are docker scripts provided to test various distributions at once.
* See ./testing/docker/
[official IOR release]: https://github.com/hpc/ior/releases
[official IOR/mdtest documentation]: http://ior.readthedocs.org/
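Taken together, the build steps in the updated README amount to the following shell session. This is a sketch assuming a git checkout; the install prefix is an example, not from the source:

```shell
# Build IOR/mdtest from a git checkout; skip ./bootstrap for release tarballs.
./bootstrap                               # generates ./configure (git checkouts only)
./configure --prefix="$HOME/ior-install"  # example prefix; see ./configure --help
make                                      # builds src/ior and src/mdtest
make check                                # optional: run the Automake unit tests
make install                              # optional: install under the prefix
```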
86 changes: 86 additions & 0 deletions README_DAOS
@@ -0,0 +1,86 @@
Building
----------------------

The DAOS library must be installed on the system.

./bootstrap
./configure --prefix=iorInstallDir --with-daos=DIR --with-cart=DIR

One must specify both "--with-daos=/path/to/daos/install" and "--with-cart".
When these are given, the DAOS and DFS drivers will be built.

The DAOS driver uses the DAOS API to open a container (creating it first if it
does not exist), create an array object in that container (the file), and
read/write to the array object using the DAOS Array API. The DAOS driver works
with IOR only (no mdtest support yet). The file name used by IOR (passed via the
-o option) is hashed to an object ID that is used as the array oid.

The DFS (DAOS File System) driver creates an encapsulated namespace and emulates
the POSIX driver using the DFS API directly on top of DAOS. The DFS driver works
with both IOR and mdtest.

Running with DAOS API
---------------------

ior -a DAOS [ior_options] [daos_options]

In the IOR options, the file name should be specified as a container uuid using
"-o <container_uuid>". If the "-E" option is given, then this UUID shall denote
an existing container created by a "matching" IOR run. Otherwise, IOR will
create a new container with this UUID. In the latter case, one may use
uuidgen(1) to generate the UUID of the new container.

The DAOS options include:

Required Options:
--daos.pool <pool_uuid>: pool uuid to connect to (has to be created beforehand)
--daos.svcl <pool_svcl>: pool svcl list (: separated)
--daos.cont <cont_uuid>: container for the IOR files/objects (can use `uuidgen`)

Optional Options:
--daos.group <group_name>: group name of servers with the pool
--daos.chunk_size <chunk_size>: Chunk size of the array object controlling striping over DKEYs
--daos.destroy: flag to destroy the container on finalize
--daos.oclass <object_class>: specific object class for array object

Examples that should work include:

- "ior -a DAOS -w -W -o file_name --daos.pool <pool_uuid> --daos.svcl <svc_ranks>\
--daos.cont <cont_uuid>"

- "ior -a DAOS -w -W -r -R -o file_name -b 1g -t 4m \
--daos.pool <pool_uuid> --daos.svcl <svc_ranks> --daos.cont <cont_uuid>\
--daos.chunk_size 1024 --daos.oclass R2"

Running with DFS API
---------------------

ior -a DFS [ior_options] [dfs_options]
mdtest -a DFS [mdtest_options] [dfs_options]

Required Options:
--dfs.pool <pool_uuid>: pool uuid to connect to (has to be created beforehand)
--dfs.svcl <pool_svcl>: pool svcl list (: separated)
--dfs.cont <co_uuid>: container uuid that will hold the encapsulated namespace

Optional Options:
--dfs.group <group_name>: group name of servers with the pool
--dfs.chunk_size <chunk_size>: Chunk size of the files
--dfs.destroy: flag to destroy the container on finalize
--dfs.oclass <object_class>: specific object class for files

In the IOR options, the file name should be specified directly under the root
directory, since IOR does not create intermediate directories and the DFS
container representing the encapsulated namespace is not the same as the system
namespace the user is executing from.

Examples that should work include:
- "ior -a DFS -w -W -o /test1 --dfs.pool <pool_uuid> --dfs.svcl <svc_ranks> --dfs.cont <co_uuid>"
- "ior -a DFS -w -W -r -R -o /test2 -b 1g -t 4m -C --dfs.pool <pool_uuid> --dfs.svcl <svc_ranks> --dfs.cont <co_uuid>"
- "ior -a DFS -w -r -o /test3 -b 8g -t 1m -C --dfs.pool <pool_uuid> --dfs.svcl <svc_ranks> --dfs.cont <co_uuid>"

When running mdtest, the user needs to specify with -d a directory where the
test tree will be created. Some examples:
- "mdtest -a DFS -n 100 -F -D -d /bla --dfs.pool <pool_uuid> --dfs.svcl <svc_ranks> --dfs.cont <co_uuid>"
- "mdtest -a DFS -n 1000 -F -C -d /bla --dfs.pool <pool_uuid> --dfs.svcl <svc_ranks> --dfs.cont <co_uuid>"
- "mdtest -a DFS -I 10 -z 5 -b 2 -L -d /bla --dfs.pool <pool_uuid> --dfs.svcl <svc_ranks> --dfs.cont <co_uuid>"
104 changes: 96 additions & 8 deletions configure.ac
@@ -19,6 +19,17 @@ AM_INIT_AUTOMAKE([check-news dist-bzip2 gnu no-define foreign subdir-objects])
m4_ifdef([AM_SILENT_RULES], [AM_SILENT_RULES([yes])])
AM_MAINTAINER_MODE

# Check for system-specific stuff
case "${host_os}" in
*linux*)
;;
*darwin*)
CPPFLAGS="${CPPFLAGS} -D_DARWIN_C_SOURCE"
;;
*)
;;
esac
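The host_os dispatch added here is the standard shell case-pattern idiom: configure sets ${host_os} from the host triplet, and a glob match selects per-OS flags. A self-contained sketch, with a made-up Darwin triplet standing in for what configure would detect:

```shell
# Mimic configure's ${host_os} dispatch: Darwin builds need _DARWIN_C_SOURCE
# to expose some BSD-derived declarations; Linux needs no extra flag here.
host_os="x86_64-apple-darwin19.6.0"   # example triplet, not from configure
case "${host_os}" in
  *linux*)  extra_cppflags="" ;;
  *darwin*) extra_cppflags="-D_DARWIN_C_SOURCE" ;;
  *)        extra_cppflags="" ;;
esac
echo "${extra_cppflags}"
```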

# Checks for programs

# We can't do anything without a working MPI
@@ -39,7 +50,7 @@ AC_CHECK_HEADERS([fcntl.h libintl.h stdlib.h string.h strings.h sys/ioctl.h sys/
AC_TYPE_SIZE_T

# Checks for library functions.
AC_CHECK_FUNCS([getpagesize gettimeofday memset mkdir pow putenv realpath regcomp sqrt strcasecmp strchr strerror strncasecmp strstr uname statfs statvfs])
AC_CHECK_FUNCS([sysconf gettimeofday memset mkdir pow putenv realpath regcomp sqrt strcasecmp strchr strerror strncasecmp strstr uname statfs statvfs])
AC_SEARCH_LIBS([sqrt], [m], [],
[AC_MSG_ERROR([Math library not found])])

@@ -65,19 +76,33 @@ AS_IF([test "$ac_cv_header_gpfs_h" = "yes" -o "$ac_cv_header_gpfs_fcntl_h" = "ye
# Check for system capabilities
AC_SYS_LARGEFILE

AC_DEFINE([_XOPEN_SOURCE], [700], [C99 compatibility])

# Check for lustre availability
AC_ARG_WITH([lustre],
[AS_HELP_STRING([--with-lustre],
[support configurable Lustre striping values @<:@default=check@:>@])],
[], [with_lustre=check])
AS_IF([test "x$with_lustre" != xno], [
AS_IF([test "x$with_lustre" = xyes ], [
AC_CHECK_HEADERS([linux/lustre/lustre_user.h lustre/lustre_user.h], break, [
if test "x$with_lustre" != xcheck -a \
"x$ac_cv_header_linux_lustre_lustre_user_h" = "xno" -a \
"x$ac_cv_header_lustre_lustre_user_h" = "xno" ; then
AC_MSG_FAILURE([--with-lustre was given, <lustre/lustre_user.h> not found])
fi
])
AC_CHECK_HEADERS([linux/lustre/lustreapi.h lustre/lustreapi.h],
[AC_DEFINE([HAVE_LUSTRE_LUSTREAPI], [], [Lustre user API available in some shape or form])], [
if test "x$with_lustre" != xcheck -a \
"x$ac_cv_header_linux_lustre_lustreapi_h" = "xno" -a \
"x$ac_cv_header_lustre_lustreapi_h" = "xno" ; then
AC_MSG_FAILURE([--with-lustre was given, <lustre/lustreapi.h> not found])
fi
])
])
AM_CONDITIONAL([WITH_LUSTRE], [test x$with_lustre = xyes])
AM_COND_IF([WITH_LUSTRE],[
AC_DEFINE([WITH_LUSTRE], [], [Build with LUSTRE backend])
])

# IME (DDN's Infinite Memory Engine) support
@@ -172,7 +197,75 @@ AM_COND_IF([USE_RADOS_AIORI],[
AC_DEFINE([USE_RADOS_AIORI], [], [Build RADOS backend AIORI])
])

# CEPHFS support
AC_ARG_WITH([cephfs],
[AS_HELP_STRING([--with-cephfs],
[support IO with libcephfs backend @<:@default=no@:>@])],
[],
[with_cephfs=no])
AS_IF([test "x$with_cephfs" != xno], [
CPPFLAGS="$CPPFLAGS -D_FILE_OFFSET_BITS=64 -std=gnu11"
])
AM_CONDITIONAL([USE_CEPHFS_AIORI], [test x$with_cephfs = xyes])
AM_COND_IF([USE_CEPHFS_AIORI],[
AC_DEFINE([USE_CEPHFS_AIORI], [], [Build CEPHFS backend AIORI])
])

# DAOS Backends (DAOS and DFS) IO support require DAOS and CART/GURT
AC_ARG_WITH([cart],
[AS_HELP_STRING([--with-cart],
[support IO with DAOS backends @<:@default=no@:>@])],
[], [with_daos=no])

AS_IF([test "x$with_cart" != xno], [
CART="yes"
LDFLAGS="$LDFLAGS -L$with_cart/lib64 -Wl,--enable-new-dtags -Wl,-rpath=$with_cart/lib64"
LDFLAGS="$LDFLAGS -L$with_cart/lib -Wl,--enable-new-dtags -Wl,-rpath=$with_cart/lib"
CPPFLAGS="$CPPFLAGS -I$with_cart/include/"
AC_CHECK_HEADERS(gurt/common.h,, [unset CART])
AC_CHECK_LIB([gurt], [d_hash_murmur64],, [unset CART])
])

AC_ARG_WITH([daos],
[AS_HELP_STRING([--with-daos],
[support IO with DAOS backends @<:@default=no@:>@])],
[], [with_daos=no])

AS_IF([test "x$with_daos" != xno], [
DAOS="yes"
LDFLAGS="$LDFLAGS -L$with_daos/lib64 -Wl,--enable-new-dtags -Wl,-rpath=$with_daos/lib64"
CPPFLAGS="$CPPFLAGS -I$with_daos/include"
AC_CHECK_HEADERS(daos_types.h,, [unset DAOS])
AC_CHECK_LIB([uuid], [uuid_generate],, [unset DAOS])
AC_CHECK_LIB([daos_common], [daos_sgl_init],, [unset DAOS])
AC_CHECK_LIB([daos], [daos_init],, [unset DAOS])
AC_CHECK_LIB([dfs], [dfs_mkdir],, [unset DAOS])
])

AM_CONDITIONAL([USE_DAOS_AIORI], [test x$DAOS = xyes])
AM_COND_IF([USE_DAOS_AIORI],[
AC_DEFINE([USE_DAOS_AIORI], [], [Build DAOS backends AIORI])
])

# Gfarm support
AC_MSG_CHECKING([for Gfarm file system])
AC_ARG_WITH([gfarm],
[AS_HELP_STRING([--with-gfarm=GFARM_ROOT],
[support IO with Gfarm backend @<:@default=no@:>@])],
[], [with_gfarm=no])
AC_MSG_RESULT([$with_gfarm])
AM_CONDITIONAL([USE_GFARM_AIORI], [test x$with_gfarm != xno])
if test x$with_gfarm != xno; then
AC_DEFINE([USE_GFARM_AIORI], [], [Build Gfarm backend AIORI])
case x$with_gfarm in
xyes) ;;
*)
CPPFLAGS="$CPPFLAGS -I$with_gfarm/include"
LDFLAGS="$LDFLAGS -L$with_gfarm/lib" ;;
esac
AC_CHECK_LIB([gfarm], [gfarm_initialize],, [AC_MSG_ERROR([libgfarm not found])])
AC_CHECK_MEMBERS([struct stat.st_mtim.tv_nsec])
fi

# aws4c is needed for the S3 backend (see --with-S3, below).
# Version 0.5.2 of aws4c is available at https://github.com/jti-lanl/aws4c.git
@@ -245,12 +338,6 @@ Consider --with-aws4c=, CPPFLAGS, LDFLAGS, etc])
])








# Enable building "IOR", in all capitals
AC_ARG_ENABLE([caps],
[AS_HELP_STRING([--enable-caps],
@@ -261,6 +348,7 @@ AM_CONDITIONAL([USE_CAPS], [test x$enable_caps = xyes])

AC_CONFIG_FILES([Makefile
src/Makefile
src/test/Makefile
contrib/Makefile
doc/Makefile])
AC_OUTPUT