BOINC credit system

Marcus

Joined: 24 May 13
Posts: 1
Brazil
Message 49300 - Posted: 24 May 2013, 20:56:04 UTC

Hi,

I found this wiki page (http://boinc.berkeley.edu/trac/wiki/CreditNew) discussing BOINC's credit system. On that page they describe different systems, which they call the "first credit system", the "second credit system" and the "new (third) credit system".

Does anyone know if the "new (third) credit system" is already running in production for the major BOINC projects? It seems that this new system is more robust in dealing with different host architectures, different application versions, cheating, and so on. It would be nice to see it working and to get feedback on the improvements in fairness of crediting among different hosts and projects, while avoiding cheating.
ID: 49300
Jord
Volunteer tester
Help desk expert

Joined: 29 Aug 05
Posts: 15476
Netherlands
Message 49302 - Posted: 24 May 2013, 21:54:50 UTC - in response to Message 49300.  

It's in use at Seti@Home. None of the other projects have thus far adopted it, as far as I know.
ID: 49302
SekeRob2

Joined: 6 Jul 10
Posts: 585
Italy
Message 49303 - Posted: 25 May 2013, 7:46:10 UTC - in response to Message 49302.  
Last modified: 25 May 2013, 8:32:03 UTC

Yes, WCG has adopted it, yet credit there never came up to the level SETI grants [still a huge gap], which, as knreed posted on the forums recently, he has no resources to analyze [it requires massive credit-evolution tracking]. The system is not able to handle variable run times very well [many tasks at WCG are of a non-deterministic nature]. With device scoring coming up to decide what size of task to send [strong devices get tasks with more work than their weaker brethren], that will be another test of its 'robustness', including that of the feeder system and the work generators.

Edit: And since this touches on the homogeneity controls... if one copy is sent to score level A on platform X, then any wingman copies also have to go to the same level A, platform X. This locks slots in shared memory for as long as no matching device asks for the same work [and there are more than a few sciences running simultaneously at WCG]. So much so that support for the PPC platform is on its last legs [only HFCC will provide PPC jobs, when it resumes again... only 230 devices recently asked for work].
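
To illustrate that locking rule, here's a minimal sketch in Python [hr_class, try_send and the field names are my own illustration, not BOINC's actual feeder code]:

def hr_class(host):
    # An HR class here is the platform plus the device score level,
    # e.g. ("windows_x86_64", "A"). Illustrative only.
    return (host["platform"], host["score_level"])

def try_send(workunit, host):
    # The first copy sent fixes the workunit's HR class; after that,
    # the shared-memory slot stays locked until a host of the same
    # class asks for work.
    if workunit.get("hr_class") is None:
        workunit["hr_class"] = hr_class(host)
        return True
    return workunit["hr_class"] == hr_class(host)

wu = {}
print(try_send(wu, {"platform": "ppc", "score_level": "A"}))             # True, class now fixed
print(try_send(wu, {"platform": "windows_x86_64", "score_level": "A"}))  # False, wrong platform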

Edit2: I'd have thought that any project adopting server v700+ had come on board with this system, but again from knreed's notes, lots of 'legacy' rules were maintained, which are planned to be dropped. Certainly, reading that one project has special compensatory rules in place for Sandy/Ivy Bridge [I think it even came through in client notices], it suggests that some projects continue to 'massage' the system. Given that v7 clients are credit-share driven, the more credit a project gives, the less computing time it is accorded... at least that is how I interpret the new ways... to counteract over-awarding projects, which for some will lead to the reaction of focusing more and more on the highest awarders [and I've seen some posts to that effect... some do not crunch for the scientific value, or at least that impression could be derived. Well, to each her/his own].
Coelum Non Animum Mutant, Qui Trans Mare Currunt
ID: 49303
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5077
United Kingdom
Message 49306 - Posted: 25 May 2013, 9:16:42 UTC - in response to Message 49303.  

... Given that v7 clients are credit-share driven, the more credit a project gives, the less computing time it is accorded... at least that is how I interpret the new ways... to counteract over-awarding projects...

Yes, that's how the main CreditNew document is written.

But in fact, the current operational document is ClientSchedOctTen: rather than being credit-share driven, v7 clients are REC driven: "There are problems with using project-granted credit as a basis for this approach". You bet.

The estimated credit is scaled from speed*time, so effectively we're using a simple flop-counting scheduler: there's no resource-share punishment for projects granting high actual credit.
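
As a rough sketch of what that flop counting amounts to [if I read CreditNew right, the Cobblestone scale is 200 credits per GFLOPS-day; the function name is mine]:

COBBLESTONE_SCALE = 200.0 / (86400 * 1e9)  # credit per peak FLOP

def estimated_credit(peak_flops, elapsed_seconds):
    # Credit estimated from device speed * run time alone,
    # independent of whatever the project's server eventually grants.
    return peak_flops * elapsed_seconds * COBBLESTONE_SCALE

print(estimated_credit(10e9, 3600))  # 10 GFLOPS host, one hour: ~83.3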

Mind you, what the server does with the returned data, and how it turns it into credit, is a closed book to me. It's frequently been likened to a random number generator.
ID: 49306
SekeRob2

Joined: 6 Jul 10
Posts: 585
Italy
Message 49307 - Posted: 25 May 2013, 11:11:43 UTC - in response to Message 49306.  

Thanks for letting my mind wander back to the alpha mailing-list discussion and the mechanism that had to be there to account for not-yet-awarded credit [waiting on validation]. Also, the REC doc is a good reference. Thank you too.

Still don't understand why a very-small-share project gets so much run time. I just tested again with WCG set to 500 and SIMAP/Malariacontrol set to 1. The clients kept running the latter two and kept fetching more and more... clients 7.0.65/66 and 7.1.1. After a few days I got sick of it and suspended work fetch for those two. Maybe, with a share of 1 [equal to about 3 minutes a day per core], these tasks of 0.75-1.5 hours length ran at high priority, the client thinking they otherwise could not complete in time [4-7 day deadline] under normal app-switching rules. Anyway, it ticked me off so much that I sacked them.
Coelum Non Animum Mutant, Qui Trans Mare Currunt
ID: 49307
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5077
United Kingdom
Message 49308 - Posted: 25 May 2013, 11:37:27 UTC - in response to Message 49307.  

Thanks for letting my mind wander back to the alpha mailing-list discussion and the mechanism that had to be there to account for not-yet-awarded credit [waiting on validation]. Also, the REC doc is a good reference. Thank you too.

Still don't understand why a very-small-share project gets so much run time. I just tested again with WCG set to 500 and SIMAP/Malariacontrol set to 1. The clients kept running the latter two and kept fetching more and more... clients 7.0.65/66 and 7.1.1. After a few days I got sick of it and suspended work fetch for those two. Maybe, with a share of 1 [equal to about 3 minutes a day per core], these tasks of 0.75-1.5 hours length ran at high priority, the client thinking they otherwise could not complete in time [4-7 day deadline] under normal app-switching rules. Anyway, it ticked me off so much that I sacked them.

A project which has been running for a while will have accumulated a substantial REC. A newly-attached project, or a project which has been idle (not supplying work) for a while, will start with a REC at or near zero. That means that the new project will be given a higher priority for work fetch and running tasks, until its REC rises (and the other project's REC decays) to the point at which their respective REC values are in proportion to Resource Share. I think.

And don't tell me about the problems trying to balance projects with GPU applications (REC in the 100,000 range) with CPU-only projects (REC in the 1,000 range)...
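
As a toy model of that REC balancing [the client's exact formulas differ; this just shows the direction of the effect, with made-up numbers matching the WCG/SIMAP test above]:

HALF_LIFE_DAYS = 10.0  # the <rec_half_life_days> default

def decay(rec, days, half_life=HALF_LIFE_DAYS):
    # REC decays exponentially, so an idle project fades toward zero.
    return rec * 0.5 ** (days / half_life)

def sched_priority(rec, share):
    # Favour projects whose fraction of recent credit falls furthest
    # below their resource-share fraction.
    total_rec = sum(rec.values()) or 1.0
    total_share = sum(share.values())
    return {p: share[p] / total_share - rec[p] / total_rec for p in rec}

rec = {"WCG": decay(5000.0, 10), "SIMAP": 0.0}  # SIMAP newly attached
share = {"WCG": 500.0, "SIMAP": 1.0}
print(sched_priority(rec, share))  # SIMAP starts with the higher priority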

You can accelerate the balancing process with this option in the <options> section of cc_config.xml:

<rec_half_life_days>X</rec_half_life_days>
A project's scheduling priority is determined by its estimated credit in the last X days. Default is 10; set it larger if you run long high-priority jobs.

Smaller is good.
ID: 49308
SekeRob2

Joined: 6 Jul 10
Posts: 585
Italy
Message 49309 - Posted: 25 May 2013, 12:16:45 UTC - in response to Message 49308.  

OK, thanks, set it to 1 [the fully populated cc_config.xml indeed had the entry, with 10]. Will experiment... I only sporadically have other projects active, only when some specific sciences are in short supply.
Coelum Non Animum Mutant, Qui Trans Mare Currunt
ID: 49309

