Message boards : Projects : Folding@Home on BOINC
Message board moderation
Joined: 29 Aug 05 · Posts: 15612
Since not many developers read here by default, I have forwarded your post to the BOINC developers list, with a link to this thread.
Joined: 8 Nov 10 · Posts: 310
The main problem was always that Folding needed the lowest possible latency. So they developed their own client, which downloads a new work unit just as the old one is ending (at 99%). However, BOINC more or less has that capability now too, with the addition of "zero resource share": you download a new work unit only when you run out of old ones. With a little tweaking, it should work. And I think BOINC is better developed overall in how it handles server outages and the like, though there is still room for improvement. For example, at Rosetta at the moment, stalled downloads prevent others from downloading too, until the queue is cleared and all the old work finishes.
Joined: 5 Oct 06 · Posts: 5149
'Zero resource share' is probably the wrong tool for this job. I have that set for Einstein GPU jobs, for when a SETI hiccup lasts longer than my cache. It fetches when dry (good for latency), but the last time it kicked into action I was left with unstarted work when the primary project came back to life. And with very low resource share ==> low priority, the surplus tasks hung around until within 24 hours of deadline. Thirteen days' latency. Not good (but I think Einstein will survive).

If that's the problem, there are probably other options we could pick to solve it - some of these may not have been available the last time they evaluated BOINC to see if it was fit for their purpose. Try:

<fetch_minimal_work>0|1</fetch_minimal_work>
<report_results_immediately>0|1</report_results_immediately>

The problem is that these are set by the user, and - outside a corporate environment - they won't be.

GPUGrid has the same working principle, and gets round it by setting - in the sched_reply from the server, so under their control:

* A maximum of two tasks per GPU - one to run, one spare to start afterwards
* Short deadlines - five days maximum, with a 50% bonus in gollum points for finishing within 24 hours
* Return results immediately (the server-side version of the above)

Some combination of those might be enough, or might be tweakable to be enough. If they want our user base, they might be willing to lend a programmer or two to do the tweaking.

But - big question - is their current server setup compatible with BOINC clients? Are the sched_request and sched_reply formats compatible? If not, do they want to throw away their current servers, or do they want to run two different server farms to support the two different platforms? That would be a nightmare to administer, both for them and for researchers who want to submit work and retrieve the results.
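For reference, the two client options named above are real BOINC options that live in the `<options>` section of cc_config.xml in the BOINC data directory. A sketch of what a user (or a managed deployment) would set, with both behaviours enabled:

```xml
<cc_config>
  <options>
    <!-- Fetch only one task per device instead of filling the cache,
         so a fresh work unit is requested as soon as the client runs dry -->
    <fetch_minimal_work>1</fetch_minimal_work>
    <!-- Report each finished task to the server at once,
         rather than batching reports with the next scheduler request -->
    <report_results_immediately>1</report_results_immediately>
  </options>
</cc_config>
```

The client reads this file at startup, or when "Read config files" is selected in the Manager - which is exactly the problem the post identifies: it has to be done per-machine, by the user.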
Joined: 8 Nov 10 · Posts: 310
Yes. One tweak is that they would need to make the solution specific to Folding, not to the other BOINC projects you are attached to. For example, zero resource share might still be used, in which case any other project coming back to life would not affect it, and the (very annoying) "surplus tasks" problem would also not arise. Even <fetch_minimal_work>0|1</fetch_minimal_work> would need to be changed, since I think that goes in cc_config.xml and affects all projects. I don't know which of these is most feasible; you are the expert. But my long experience with Folding (for what it is worth) suggests that yes, their servers would need to be changed. They seem to be very particular. But why not? If they are installing a bunch of new servers anyway, why not just do BOINC? And remember, they have a bunch of different locations now. I get a lot of work from Temple University rather than Stanford, which seems to have receded if not disappeared. They could upgrade one at a time, I would think.
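It is worth noting that BOINC does already have one per-project configuration mechanism of the kind this post asks for: an app_config.xml placed inside a single project's directory (under the BOINC data directory) applies only to that project. A sketch, assuming a hypothetical Folding application name:

```xml
<app_config>
  <!-- Limit this project (only) to one running task at a time;
       applies solely to the project whose directory holds this file -->
  <project_max_concurrent>1</project_max_concurrent>
  <app>
    <!-- "fah_core" is a hypothetical application name for illustration -->
    <name>fah_core</name>
    <max_concurrent>1</max_concurrent>
  </app>
</app_config>
```

This only controls concurrency, not work fetch, so it is not a full substitute for a per-project fetch_minimal_work - but it shows the per-project plumbing already exists in the client.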
Joined: 5 Oct 06 · Posts: 5149
You would have to overcome generations of rivalry between the Cardinals and the Bears to solve that one. See Big Game
Joined: 8 Nov 10 · Posts: 310
Their new locations are in the East (well, east of St. Louis). They are innocent of such things. Or, to be more precise, they have their own rivalries. It could be a new beginning.
Joined: 5 Oct 06 · Posts: 5149
And with a lot more tech-savvy people working from home on (probably) modern employer-supplied kit (and, when not working, bored out of their skulls), the next three months are the time to do it. Jord's post to the mailing list has one positive endorsement so far:

Steffen Möller wrote: How about their lead programmer attending the upcoming workshop so we
Joined: 8 Nov 10 · Posts: 310
Another advantage is that it would earn BOINC credits. I never look at credits myself (more or less), but their absence seems to be a drawback for some people who would otherwise migrate a GPU over to Folding. And I am sure Jacob Klein would be delighted to help (he did zero resource share anyway). http://www.gpugrid.net/forum_thread.php?id=5078&nowrap=true#53905
Joined: 29 Aug 05 · Posts: 15612
That is of course assuming the workshop isn't cancelled. And did I post that email without a title? I'm getting senile. ;-)
Copyright © 2025 University of California.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License,
Version 1.2 or any later version published by the Free Software Foundation.