BOINC Compiled for MPI Usage

Message boards : Questions and problems : BOINC Compiled for MPI Usage

Nathan B

Joined: 30 Jan 13
Posts: 3
United States
Message 47568 - Posted: 30 Jan 2013, 1:50:08 UTC

I recently had the fun of setting up a Beowulf cluster under Fedora 15, following this guide to some extent: http://www.tldp.org/HOWTO/html_single/Beowulf-HOWTO/ I would like to run BOINC on this cluster, and I have tried this page with no luck: http://boinc.berkeley.edu/trac/wiki/MpiApps I am using the current version of LAM for the cluster, which provides mpiexec on each node. Although I configured the source as shown, when BOINC is run under mpiexec I get a signal 15, and LAM tells me that I am only supposed to run MPI applications with mpiexec. I have installed all the needed libraries and am compiling the client from the current BOINC stable source. Any help on compiling for MPI would be appreciated.

Nate.
ID: 47568
Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15484
Netherlands
Message 47574 - Posted: 30 Jan 2013, 11:19:57 UTC - in response to Message 47568.  

Since BOINC itself doesn't do any of the hard work (it doesn't crunch any of the data), what exactly are you trying to do? Are you compiling a project's science application(s) to run in a multithreaded (mt) way, or are you setting up your own project for this cluster to work on?

If you just want to compile BOINC to be able to run on the OS of the cluster, there's probably a way to do so. But it has nothing to do with using MPI (mt) apps.

As for projects still using mt applications, I think that only Milkyway has one and the rest stopped long ago.

So please explain what it is you're trying to accomplish. We can give better help that way.
ID: 47574
Nathan B

Joined: 30 Jan 13
Posts: 3
United States
Message 47575 - Posted: 30 Jan 2013, 13:14:02 UTC

I was planning on using the cluster for computation for SETI@home. I posted over there and received no reply. So yes, I wanted to compile a project's science application as an MPI application for the cluster. I would run one instance of the client on the head node, and LAM would distribute the computational tasks to the other nodes, rather than running a separate instance of BOINC on each node. That was my original plan.
ID: 47575
Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15484
Netherlands
Message 47580 - Posted: 30 Jan 2013, 17:56:09 UTC - in response to Message 47575.  

I don't think you can turn the SETI (Multibeam) application into a multithreaded application simply by compiling its source code that way; the code itself needs to support it as well.

Normally you'd be better off asking on the SETI forums, but I'll ask Eric Korpela to stop by here and comment. He's one of the project's developers.
ID: 47580
Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15484
Netherlands
Message 47582 - Posted: 30 Jan 2013, 18:33:33 UTC

Eric emailed me back with the following:
Eric Korpela wrote:
It's not only difficult, it's also not worth the effort. SETI@home is already well distributed, and communication and synchronization overhead would slow it down. Even a multithreaded app on a multicore processor wouldn't beat one instance per core in processing power. It would probably be easy to swap in an MPI build of FFTW, but that would only distribute the FFTs, and would probably slow them down overall.

The optimal way to use SETI@home on a cluster is to run BOINC on each node. It's not sexy, but it gives the best speed.

ID: 47582
Nathan B

Joined: 30 Jan 13
Posts: 3
United States
Message 47586 - Posted: 31 Jan 2013, 0:07:50 UTC - in response to Message 47582.  

Okay. Thank you very much for your time. Luckily, LAM has a command that runs non-MPI programs on each node separately. So, building the cluster wasn't a complete waste. ;)
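For anyone finding this thread later, that per-node approach can be sketched roughly as follows. This is a sketch only: lamboot and lamexec are LAM/MPI commands, but the host file name ("lamhosts") and the bare "boinc --daemon" invocation are assumptions you'd adjust for your own install.

```shell
#!/bin/sh
# Sketch only: assumes LAM/MPI is installed and that a boot schema
# (host file, here "lamhosts") lists the cluster nodes. The plain
# "boinc" command name is an assumption; adjust paths as needed.
if command -v lamexec >/dev/null 2>&1; then
    lamboot lamhosts           # start the LAM run-time on every listed node
    lamexec N boinc --daemon   # lamexec runs non-MPI programs; N means "all nodes"
    status="started"
else
    status="LAM/MPI not installed"
fi
echo "$status"
```

Since each node then runs its own independent BOINC client, every node fetches and reports its own work, which is exactly the one-instance-per-node setup Eric recommends.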

Thanks again,
Nate
ID: 47586


Copyright © 2024 University of California.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.