Folding@Home on BOINC

Falconet

Joined: 3 Oct 10
Posts: 48
Portugal
Message 96958 - Posted: 21 Mar 2020, 13:15:54 UTC

I know there was an attempt many years ago to get Folding@Home on BOINC but it was eventually abandoned.

At the Folding@Home Reddit AMA, when asked by a volunteer about Folding@Home running on BOINC, Greg Bowman said:

"In principle, it should be doable. Once we get the new open source client out there, want to take a stab at it? We'd love to empower folks in our community to see and seize opportunities like this."

https://www.reddit.com/r/pcmasterrace/comments/flgm7q/ama_with_the_team_behind_foldinghome_coronavirus/fl0gbxd/

Just throwing this out here. Folding@Home has gained an insane amount of computing power because of SARS-CoV-2.

Cheers
ID: 96958
Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15477
Netherlands
Message 96959 - Posted: 21 Mar 2020, 13:54:18 UTC - in response to Message 96958.  

Since there aren't many developers reading here by default, I have forwarded your post to the BOINC developers' mailing list, with a link to this thread.
ID: 96959
Jim1348

Joined: 8 Nov 10
Posts: 310
United States
Message 96960 - Posted: 21 Mar 2020, 13:55:03 UTC - in response to Message 96958.  

The main problem was always that Folding needed the lowest possible latency. So they developed their own client to allow downloading a new work unit just as the old one was ending (at 99%).
However, BOINC more or less has that capability now too, with the addition of "zero resource share": the client then downloads a new work unit only when it runs out of old ones.

With a little tweaking, it should work. And I think BOINC is better developed overall in how it handles server outages, etc., though there is still room for improvement.
For example, at Rosetta at the moment, stalled downloads prevent other downloads too, until the queue is cleared and all the old work finishes.
ID: 96960
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5077
United Kingdom
Message 96961 - Posted: 21 Mar 2020, 14:57:13 UTC - in response to Message 96960.  

'zero resource share' is probably the wrong tool for this job. I have that set for Einstein GPU jobs, for when a SETI hiccup lasts longer than my cache. It fetches when dry (good for latency), but the last time it kicked into action I was left with unstarted work when the primary project came back to life. And with very low resource share ==> low priority, the surplus tasks hung around until within 24 hours of deadline. 13-day latency. Not good (but I think Einstein will survive).

If that's the problem, there are probably other options we could pick to solve it - some of these may not have been available the last time they evaluated BOINC to see if it was fit for their purpose. Try:

<fetch_minimal_work>0|1</fetch_minimal_work>
Fetch one job per device (see --fetch_minimal_work).
<report_results_immediately>0|1</report_results_immediately>
If 1, each job will be reported to the project server as soon as it's finished, with an inbuilt 60-second delay from completion of the result upload (normally reporting is deferred for up to one hour, so that several jobs can be reported in one request). Using this option increases the load on project servers and should generally be avoided; it is intended only for computers whose disks are reformatted daily.
The problem is that these are set by the user, and - outside a corporate environment - they won't be.
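For reference, this is roughly what those two options look like in a user's cc_config.xml - a minimal sketch, using the documented client option names; the client has to re-read its config files (or be restarted) before they take effect:

    <!-- minimal cc_config.xml sketch: enable both options -->
    <cc_config>
      <options>
        <!-- fetch only one job per device when asking for work -->
        <fetch_minimal_work>1</fetch_minimal_work>
        <!-- report each finished job straight away instead of batching -->
        <report_results_immediately>1</report_results_immediately>
      </options>
    </cc_config>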

GPUGrid has the same working principle, and gets round it by setting - in the sched_reply from the server, so under their control -

* Maximum two tasks per GPU - one to run, one spare to start after
* Short deadlines - 5 days maximum, 50% bonus gollum points for finishing within 24 hours
* Return results immediately (server version of the above)
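
At least the first and third of those map onto what I believe are documented BOINC server options - a sketch only, assuming a stock BOINC server install, with values picked to match GPUGrid's behaviour:

    <!-- excerpt from a project's config.xml (server side, so under their control) -->
    <config>
      <!-- at most two GPU tasks in progress per GPU: one running, one spare -->
      <max_wus_in_progress_gpu>2</max_wus_in_progress_gpu>
      <!-- tell clients to report finished tasks immediately -->
      <report_results_immediately/>
    </config>

The short deadline itself is set when the work is generated (create_work has a --delay_bound option; 5 days is 432,000 seconds), and the 24-hour bonus is the project's own credit policy rather than a server switch.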

Some combination of those might be enough, or might be tweakable to be enough. If they want our user-base, they might be willing to lend a programmer or two to do the tweaking.

But - big question - is their current server setup compatible with BOINC clients? Are the sched_request and sched_reply formats compatible? If not, do they want to throw away their current servers, or do they want to run two different server farms to support the two different platforms? That would be a nightmare to administer, and for researchers who want to submit work and retrieve the results.
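(For anyone who hasn't looked inside it: the BOINC scheduler RPC is just an XML document POSTed over HTTP. A much-abbreviated sched_request looks something like the following - element names approximate, from memory - and the reply carries the workunits, deadlines and server directives back the other way. The question is whether Folding's work servers could be taught to speak this, or the BOINC client taught theirs.)

    <scheduler_request>
      <authenticator>(account key)</authenticator>
      <hostid>12345</hostid>
      <rpc_seqno>42</rpc_seqno>
      <platform_name>x86_64-pc-linux-gnu</platform_name>
      <work_req_seconds>8640</work_req_seconds>
      <!-- ...plus host details and any completed results to report... -->
    </scheduler_request>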
ID: 96961
Jim1348

Joined: 8 Nov 10
Posts: 310
United States
Message 96962 - Posted: 21 Mar 2020, 15:07:57 UTC - in response to Message 96961.  
Last modified: 21 Mar 2020, 15:09:06 UTC

Yes. One tweak is that they would need to make the solution specific to Folding, so that it does not affect the other BOINC projects you are attached to.

For example, zero resource share might still be used; in that case, any other project coming back to life would not affect it. And the "surplus tasks" problem (very annoying) would also not be present.

Even "<fetch_minimal_work>0|1</fetch_minimal_work>" would need to be changed, since I think that is for cc_config.xml, and affects all projects.

I don't know which of these is most feasible; you are the expert. But my long (for what it is worth) experience with Folding suggests that yes, their servers would need to be changed. They seem to be very particular. But why not? If they are installing a bunch of new servers anyway, why not just do BOINC?

And remember, they have a bunch of different locations now. I get a lot of work from Temple University rather than Stanford, which seems to have receded if not disappeared. They could upgrade one at a time, I would think.
ID: 96962
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5077
United Kingdom
Message 96963 - Posted: 21 Mar 2020, 15:21:07 UTC - in response to Message 96962.  
Last modified: 21 Mar 2020, 15:22:47 UTC

You would have to overcome generations of rivalry between the Cardinals and the Bears to solve that one. See Big Game.
ID: 96963
Jim1348

Joined: 8 Nov 10
Posts: 310
United States
Message 96964 - Posted: 21 Mar 2020, 15:31:00 UTC - in response to Message 96963.  

Their new locations are in the East (well, St. Louis east). They are innocent of such things.
Or to be more precise, they have their own rivalries.

It could be a new beginning.
ID: 96964
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5077
United Kingdom
Message 96965 - Posted: 21 Mar 2020, 15:50:37 UTC - in response to Message 96964.  

And with a lot more tech-savvy people working from home on (probably) modern employer-supplied kit (and when not working, bored out of their skulls), the next three months are the time to do it.

Jord's post to the mailing list has got one positive endorsement, so far:

Steffen Möller wrote:
How about their lead programmer attending the upcoming workshop so we
can work on that BOINC-driven F@H client over the hackathon (and evenings)?

What is also obvious: There is a considerable amount of compute power
out there that would join BOINC if the scientific results would seem
more appealing to them. And frankly, F@H website looks great.
Functionally I did not know how to participate, really, but I managed in
the end and it looks great, feels good to join.
ID: 96965
Jim1348

Joined: 8 Nov 10
Posts: 310
United States
Message 96966 - Posted: 21 Mar 2020, 16:22:16 UTC - in response to Message 96965.  

Another advantage is that it would earn BOINC credits.
I never look at credits myself (more or less), but the lack of them seems to be a drawback for some people who would otherwise migrate a GPU over to Folding.

And I am sure Jacob Klein would be delighted to help (he did Zero Resource Share anyway).
http://www.gpugrid.net/forum_thread.php?id=5078&nowrap=true#53905
ID: 96966
Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15477
Netherlands
Message 96969 - Posted: 21 Mar 2020, 17:42:58 UTC - in response to Message 96965.  

That is of course if the workshop isn't cancelled. And did I post that email without a title? I'm getting senile. ;-)
ID: 96969
Falconet

Joined: 3 Oct 10
Posts: 48
Portugal
Message 97000 - Posted: 23 Mar 2020, 13:44:39 UTC - in response to Message 96959.  

Jord wrote:
Since there aren't many developers reading here by default, I have forwarded your post to the BOINC developers' mailing list, with a link to this thread.


Thanks Ageless. I hope both BOINC and Folding@Home take a good look at this.
ID: 97000

