Getting started (native version)#

These instructions cover installing Pepper and its dependencies for native serial execution on a CPU or native parallel execution on CUDA devices.

These instructions assume your environment has a reasonable compiler installed, e.g. g++ v11 or later, and a recent version of CMake, e.g. v3.17 or later.
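
To quickly check whether your environment meets these requirements, you can query the versions directly (this assumes g++ and cmake are already on your PATH):

# print compiler and CMake versions to compare against the requirements above
g++ --version
cmake --version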

There is an example Build Script with configurable parameters at the top that goes through the process of building the dependencies and Pepper. You can run it using bash build_pepper.sh <install-path>, as shown below.
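
For example, to install everything under the $HOME/.opt prefix used in the snippets below (assuming <install-path> is interpreted as the installation prefix):

# build the dependencies and Pepper, installing under $HOME/.opt
bash build_pepper.sh "$HOME/.opt"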

Third party libraries#

In the following we describe how optional external libraries are used by Pepper, and provide compilation instructions for most of them. Your mileage with these instructions may vary; if in doubt, please refer to the manual of the respective package.

Message Passing Interface (MPI)#

Pepper uses MPI to utilize many CPU cores in parallel. Excellent scaling has been shown for up to 1000 cores on the Polaris system at ALCF [BCG+23].

An MPI installation is usually best provided by the local system administrators to ensure optimal performance.
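
To check whether an MPI installation is already available in your environment, you can look for the compiler wrapper and the launcher (the names below are the most common ones; they may differ between MPI implementations):

# locate the MPI compiler wrapper and launcher, and print the launcher version
command -v mpicc mpirun
mpirun --version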

HDF5#

Pepper uses the HDF5 database library to write partonic events to persistent storage, see Writing events.

To achieve the best performance when using MPI, i.e. writing all events generated across all MPI ranks into a single output file, HDF5 needs to have been configured with --enable-parallel.

The following snippet provides an example of installing HDF5 yourself, assuming that MPI is available (if not, remove the --enable-parallel and --enable-parallel-tools arguments):

git clone -b hdf5-1_14_2 https://github.com/HDFGroup/hdf5.git 
cd hdf5

HDF5_ROOT="$HOME/.opt/hdf5-1.14.2"

autoreconf -i
./configure --prefix="$HDF5_ROOT" --enable-parallel --enable-parallel-tools
make -j install

# exporting this variable allows Pepper's configuration to find HDF5
export HDF5_ROOT
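
After the installation, you can verify that parallel support was compiled in. One way to do this, assuming the autotools build above installs the h5pcc compiler wrapper, is to inspect the build settings it reports:

# "Parallel HDF5: yes" indicates an MPI-enabled build
"$HDF5_ROOT/bin/h5pcc" -showconfig | grep -i "parallel hdf5"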

Note that we install HDF5 to $HOME/.opt/hdf5-1.14.2. We will follow this convention in the snippets below, too, but you can of course choose any other location to install the tools.

LHAPDF#

Pepper uses the LHAPDF library to evaluate parton distribution functions (PDFs).

Using lhapdf install <pdf set name>, you can install the PDF set you would like to use for your Pepper event generation.

For best performance when running on a GPU, you should install LHAPDF from its kokkos_version branch, which enables Pepper to evaluate PDFs directly on the GPU. Despite the branch's name, interfaces are provided for both Kokkos and native CUDA compilations.

To achieve best performance when using MPI, LHAPDF needs to have been configured with --enable-mpi. Then LHAPDF will load PDF sets only once for all ranks, not on each rank individually.

The following snippet provides an example of installing LHAPDF yourself, assuming that MPI is available (if not, remove the --enable-mpi argument):

git clone -b kokkos_version https://gitlab.com/hepcedar/lhapdf.git
cd lhapdf

LHAPDF_ROOT="$HOME/.opt/lhapdf-kokkos_version"

autoreconf -i
./configure --prefix="$LHAPDF_ROOT" --enable-mpi
make -j install

PATH="$LHAPDF_ROOT/bin:$PATH"

# exporting this variable allows Pepper's configuration to find LHAPDF
export PKG_CONFIG_PATH="$(lhapdf-config --libdir)/pkgconfig:$PKG_CONFIG_PATH"

# install the PDF set used by your setup; in this case we install the default
# one used by Pepper
lhapdf install NNPDF30_nlo_as_0118
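
To confirm that the set has been installed where LHAPDF can find it, you can list the locally installed sets:

# list locally installed PDF sets; the set installed above should appear
lhapdf list --installed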

FORM#

For standard processes and jet multiplicities, color factors come pre-installed with Pepper. Beyond this, FORM is required to calculate them. Pepper will do this automatically, as long as the form executable is found in the PATH.

The following snippet provides an example of installing FORM yourself (the --enable-parform argument builds the MPI-parallel ParFORM; remove it if MPI is not available):

git clone https://github.com/vermaseren/form.git
cd form

FORM_ROOT="$HOME/.opt/form"

autoreconf -i
./configure --prefix="$FORM_ROOT" --enable-parform
make -j install

# make sure to export PATH such that Pepper can find the form executable at
# runtime
export PATH="$FORM_ROOT/bin:$PATH"
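
You can verify that Pepper will be able to find FORM at runtime by checking that the executable is resolved through the PATH:

# should print the path of the form executable installed above
command -v form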

Chili#

Pepper comes with internal phase-space generators, such as a basic version of Chili and Rambo. Alternatively, the standalone Chili library can be used as an external generator. Let us know if you need installation instructions for using the standalone Chili with Pepper.

Building Pepper#

Finally, with the above dependencies in place, we can build and install Pepper itself. The dependencies will be found and used automatically when the environment is set up as described above.

Provided that the CUDA Toolkit (v11.8 or later) is available, Pepper will find it automatically during configuration and compile for running on CUDA GPUs. If there is no CUDA in the environment, Pepper will compile for running serially on the CPU instead. The automatic configuration for CUDA can be explicitly disabled by adding -DPEPPER_CUDA_DISABLED=1 to the cmake invocation below. Similar flags exist for disabling other external dependencies.
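
For example, to force a serial CPU build even when CUDA is available, the configure step in the snippet below would change to:

# configure with CUDA explicitly disabled, yielding a serial CPU build
cmake -S . -B build -DCMAKE_INSTALL_PREFIX="$PEPPER_ROOT" -DPEPPER_CUDA_DISABLED=1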

git clone -b native git@gitlab.com:spice-mc/pepper.git
cd pepper

PEPPER_ROOT="$HOME/.opt/pepper"

cmake -S . -B build -DCMAKE_INSTALL_PREFIX="$PEPPER_ROOT"
cmake --build build -j --target install

PATH="$PEPPER_ROOT/bin:$PATH"

You should now be able to run Pepper and, for example, display its help using

pepper -h