
MPI applications

It is possible to run MPI applications under BOINC, running multi-process jobs on a single host machine. This has been tested using the mpich2 library (available at http://www.mcs.anl.gov/research/projects/mpich2/) and should work with other MPI libraries as well; the mpich2 version used for testing in BOINC is 1.3.2p1. Examples have been made using the BOINC wrapper to handle calling the MPI processes. The application version will need to provide its own mpiexec program to call the MPI programs (statically linked so there are no external dependencies, or else provide the shared objects in the project path).
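
One quick way to confirm on Linux that your bundled mpiexec and MPI programs really are statically linked is to check them with ldd (the bin/mpiexec path here is just the layout used in the examples below):

ldd bin/mpiexec
        not a dynamic executable

If ldd instead lists shared libraries, ship those .so files in the project directory and point LD_LIBRARY_PATH at it, as the Linux job.xml example below does.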

It is preferable to set up your MPI project using the BOINC "plan class" mechanism, with the "mt" (multithreaded) plan class. This lets the BOINC client know that the job will use all available CPUs (so it will not try to run more than one workunit at a time), and the client will pass a command-line argument "--nthreads N" to the app (or wrapper) so that the appropriate setup can be done on the client to use the number of processors available (i.e. -np 2 or 4 or 8, etc.).
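
As a sketch of how such an app version might be laid out on the server (this assumes the standard update_versions convention, where the plan class is the double-underscore suffix on the platform directory name; the app name and file names here are placeholders for your own wrapper and programs):

apps/mympiapp/1.0/x86_64-pc-linux-gnu__mt/
    wrapper
    movefiles
    mpiexec
    job.xml
    version.xml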

If you use the flag <append_cmdline_args/> in your job.xml wrapper specification file, the "--nthreads N" argument will be the last argument passed to your wrapper and your programs, so you can have a program that sets up your workunit at run-time to use N processors (e.g. by substituting NPROC in a Fortran namelist).
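
For example, a small setup task might pull the value following "--nthreads" out of its argument list and patch it into the application's run-time control file. This is only a sketch; the DATA/Par_file name and the NPROC variable are stand-ins for whatever your application actually reads:

NTHREADS=1
while [ $# -gt 0 ]; do
    case "$1" in
        --nthreads) NTHREADS="$2"; shift ;;
    esac
    shift
done
# substitute the processor count into the control file (keeps a .bak copy)
sed -i.bak "s/^NPROC[ ]*=.*/NPROC = $NTHREADS/" DATA/Par_file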


There are some platform-specific concerns with MPI programs under BOINC:

Linux

It is suggested to build mpich2 using the simple gforker process manager, which can be done with this configure command:

./configure --with-pm=gforker

For your application you will want to link against the mpich2 libraries and include files, with a command similar to this:

./configure \
MPIFC=~/mpich2-1.3.2p1/bin/mpif90 \
MPICC=~/mpich2-1.3.2p1/bin/mpicc \
MPILIBS="-static-libgfortran -L ~/mpich2-1.3.2p1/lib -lfmpich -lmpichf90 -lmpl -lopa"

Mac

Similar to Linux, but you will probably want to specify the Mac architecture you are building for (i.e. i386, ppc, x86_64):

./configure --with-pm=gforker CC='gcc -arch i386'

Note that if you are using gfortran, you may be limited to i386 unless you compile gfortran for the other architectures as well. You will then probably want to make a universal binary version of mpiexec and your MPI applications, as sketched below. You will also need a configure command for your application that references the mpich2 libraries, include files, and executables as above.
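
Assuming you have built mpiexec separately for each architecture (the per-architecture file names below are placeholders), lipo can then combine them into a single universal binary:

lipo -create mpiexec.i386 mpiexec.x86_64 -output mpiexec
lipo -info mpiexec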

Windows

It is simplest to download the 32 and/or 64-bit (depending on your application needs) install package from the mpich2 website given above. You will want to distribute mpiexec.exe, smpd.exe, mpich2mpi.dll, and mpich2nemesis.dll with your app.

Note that Windows Firewall will block these programs (as well as your MPI app) upon first launch. The blocking is path-dependent, so if you change these filenames, each renamed program will "count" as a new program for Windows to block. It is therefore more sensible to make a zip-file bundle of your programs with stable names, and track changes via the wrapper and job.xml file. Otherwise, Windows Firewall will treat every new application version you make as a new, potentially threatening program, and your users will have to allow the newly named program through again.
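
A bundle along these lines (the file names are those mentioned above, plus cpi.exe, the MPICH example program run in the job.xml below) keeps the executable names stable across application versions:

wu_win.zip:
    bin/mpiexec.exe
    bin/smpd.exe
    bin/mpich2mpi.dll
    bin/mpich2nemesis.dll
    bin/cpi.exe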

PENDING: mpich2-1.3.3 should remove the smpd.exe requirement and avoid the Windows Firewall issues entirely when run as: mpiexec -localonly -n 2 -channel nemesis:none mympipgm.exe


Using the Wrapper and job.xml Files

Here is an example wrapper job.xml file for a Windows MPI app. It assumes you zip the above files and put them in a subdirectory "bin" of your BOINC project directory (using a small helper program called "movefiles"). So the first task is "movefiles", which unzips the programs; the next task runs the smpd.exe MPI daemon/service locally; and the final task runs the MPI job. You will probably have another post-processing task after that.
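
The "movefiles" helper is project-specific; conceptually, its unzip step does the equivalent of the following (assuming the bin/ paths are stored inside the archive):

unzip -o wu_win.zip -d $PROJECT_DIR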

<job_desc>
    <task>
        <application>movefiles</application>
        <stdin_filename></stdin_filename>
        <stdout_filename>stdout.txt</stdout_filename>
        <command_line>wu_win.zip</command_line>
    </task>
    <task>
        <application>bin/smpd.exe</application>
        <daemon/>
        <exec_dir>$PROJECT_DIR</exec_dir>
        <stdin_filename></stdin_filename>
        <stdout_filename>stdout_smpd.txt</stdout_filename>
        <command_line>-d 0</command_line>
    </task>
    <task>
        <application>bin/mpiexec.exe</application>
        <exec_dir>$PROJECT_DIR</exec_dir>
        <stdin_filename></stdin_filename>
        <stdout_filename>stdout.txt</stdout_filename>
        <command_line>-wdir $PROJECT_DIR/bin -n 4 ./cpi.exe 1000000000</command_line>
    </task>
</job_desc>

Here's an example wrapper job.xml file for a Linux or Mac application with multiple non-MPI and MPI tasks:

<job_desc>
    <task>
        <application>movefiles</application>
        <stdin_filename></stdin_filename>
        <stdout_filename>stdout.txt</stdout_filename>
        <command_line>wu_lin.zip</command_line>
        <setenv>LD_LIBRARY_PATH=$PROJECT_DIR:/usr/lib</setenv>
    </task>
    <task>
        <application>mpiexec</application>
        <exec_dir>$PROJECT_DIR/bin</exec_dir>
        <stdin_filename></stdin_filename>
        <stdout_filename>stdout.txt</stdout_filename>
        <command_line>-np $NTHREADS ./xmeshfem3D</command_line>
        <setenv>DYLD_LIBRARY_PATH=$PROJECT_DIR</setenv>
        <setenv>LD_LIBRARY_PATH=$PROJECT_DIR:/usr/lib</setenv>
    </task>
    <task>
        <application>movefiles</application>
        <stdin_filename></stdin_filename>
        <stdout_filename>stdout.txt</stdout_filename>
        <command_line>copymesh</command_line>
        <setenv>LD_LIBRARY_PATH=$PROJECT_DIR:/usr/lib</setenv>
    </task>
    <task>
        <application>mpiexec</application>
        <exec_dir>$PROJECT_DIR/bin</exec_dir>
        <stdin_filename></stdin_filename>
        <stdout_filename>stdout.txt</stdout_filename>
        <command_line>-np $NTHREADS ./xgenerate_databases</command_line>
        <setenv>LD_LIBRARY_PATH=$PROJECT_DIR:/usr/lib</setenv>
    </task>
    <task>
        <application>mpiexec</application>
        <exec_dir>$PROJECT_DIR/bin</exec_dir>
        <stdin_filename></stdin_filename>
        <stdout_filename>stdout.txt</stdout_filename>
        <command_line>-np $NTHREADS ./xspecfem3D</command_line>
        <setenv>LD_LIBRARY_PATH=$PROJECT_DIR:/usr/lib</setenv>
    </task>
</job_desc>


Note especially the differences in the Windows mpiexec call versus the Linux or Mac one. This is required because setting the startup directory via the Windows CreateProcess() call does not work for MPI; fortunately mpiexec provides a working-directory flag (-wdir), as shown above.

Example MPI Application - SPECFEM3D

As a test application, the seismic wave propagation software SPECFEM3D (http://www.geodynamics.org/cig/software/specfem3d) has been compiled to work with BOINC as an MPI wrapper application. There is a demonstration BOINC project at http://qcn.stanford.edu/mpitest with applications for Linux and Mac. The applications run a five-part job (you can inspect the workunit after attaching to the project) consisting of generating a mesh, generating the databases, and running the simulation.

You can build mpich2 and SPECFEM3D using basic GNU tools such as gcc and gfortran, referencing the 'gforker' mpich2 you built above:

Linux configure

./configure \
MPIFC=~/projects/mpich2-1.3.2p1/bin/mpif90 \
MPICC=~/projects/mpich2-1.3.2p1/bin/mpicc \
MPILIBS="-static-libgfortran -L ~/projects/mpich2-1.3.2p1/lib -lfmpich -lmpichf90 -lmpl -lopa"

Mac configure

./configure \
CC='gcc -arch i386' \
MPIFC=~/projects/mpich2-1.3.2p1/bin/mpif90 \
MPICC=~/projects/mpich2-1.3.2p1/bin/mpicc \
MPILIBS="-static-libgfortran -L~/projects/mpich2-1.3.2p1/lib -lfmpich -lmpichf90 -lmpl -lopa"