Message boards : BOINC client : Feature Request: Hierarchical project ordering
Joined: 28 Jan 11, Posts: 3
I run Seti exclusively. However, sometimes it has no work units to do, and I would like my BOINC to do work units for something else during the downtime. I tried adding a second project, but that just lets me split active CPU effort between the projects -- I want 100% on Seti when there's Seti work to do, and 100% on something else only when there's no Seti work, if you follow that. Seti seems to put in a pause every week or two while they're messing around with their servers, leaving 1-3 days of doing nothing. I note this would only be useful in practice if the "timeout" (deadline) for processing a work unit from my "backup" project were longer than the Seti up/down cycle. (And, to a lesser extent, only if I could get at least one work unit done per downtime.)
Joined: 23 Apr 07, Posts: 1112
You could increase your cache setting to a couple of days; then you are unlikely to run out of work. At your backup project, set your Resource share to zero; then you should only get work when Seti has none. (Check on your backup project's forums whether they support a Resource share of zero; if not, set it to one, and set Seti's to, say, 1000.)

Claggy
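For reference, the "cache" here is the client's work buffer, which can also be set outside the GUI with a global_prefs_override.xml file in the BOINC data directory. A minimal sketch, with example values for a "couple of days" buffer rather than recommendations:

```xml
<!-- global_prefs_override.xml in the BOINC data directory (sketch).
     work_buf_min_days        = minimum days of work to keep on hand
     work_buf_additional_days = additional buffer on top of the minimum -->
<global_preferences>
    <work_buf_min_days>0.5</work_buf_min_days>
    <work_buf_additional_days>2.0</work_buf_additional_days>
</global_preferences>
```

The client picks this file up on restart, or when told to re-read local preferences (for example with `boinccmd --read_global_prefs_override`, where that command is available).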
Joined: 17 Jul 09, Posts: 110
This can cause problems: when work for the low-priority project has been received and new work then comes in from the high-priority project (which it will, as a priority), the low-priority task will run at high priority because of its tiny resource share and block the high-priority one. E.g. Project A at 1000, Project B at 1, both with a 7-day deadline and both with WUs ready to run: Project B will be forced to run, because the client thinks it only has 7/1001ths of a day (roughly ten minutes) of CPU time available before its deadline.

Al.
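A rough sketch of that arithmetic; the runtime for Project B's task is a made-up figure for illustration, and this is not the client's actual scheduler code:

```python
# Why a 1000:1 resource share can push the low-share project into
# deadline pressure (illustrative only, not BOINC's real scheduler).

share_a = 1000            # primary project, e.g. Seti
share_b = 1               # backup project
deadline_days = 7.0       # both tasks are due in 7 days

# Over those 7 days, Project B is only entitled to its fraction of CPU time.
b_fraction = share_b / (share_a + share_b)        # ~0.001
b_cpu_budget_days = deadline_days * b_fraction    # ~0.007 days, about 10 minutes

task_b_runtime_days = 0.5   # hypothetical: the backup task needs ~12 h of CPU

if task_b_runtime_days > b_cpu_budget_days:
    # The task cannot finish inside its share before the deadline,
    # so the client runs it immediately, ahead of Project A's work.
    print("Run Project B's task at high priority")
```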
Joined: 28 Jan 11, Posts: 3
> You could increase your cache setting to a couple of days, then you are unlikely to run out of work.

This might be useful, since I could then span both the server downtime and the "no work to do" periods, assuming those don't stretch beyond a couple of days. It would be in keeping with my desire to get Seti work done as my priority, but I wonder if I might actually be slowing down the Seti effort. Think about it: I'm hogging extra work units, so to speak, that "everybody else but me" would probably get done faster.

On the other hand, the downtime throws a much bigger monkey wrench into this: a million computers start spinning their wheels, making a mockery of efficiency. If everybody had a 3-day cache to buffer over the downtime, even more work would get done. This is a pipeline-type operation, where the important thing is average throughput, so a large cache that keeps my computer working 100% of the time should be beneficial.

Ok, I will try the larger cache.

ETA: Ok, set it to 4 days. Advanced --> Preferences... --> network usage --> Additional work buffer == 4.0 now (if that's the wrong place, tell me).
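A back-of-the-envelope sketch of that throughput argument, with assumed outage and buffer lengths rather than real Seti figures:

```python
# Idle time per outage cycle, with and without a local work buffer
# (illustration only; the outage length is an assumption).

outage_days = 3.0        # assumed length of a server outage
buffer_days = 4.0        # days of work cached locally before the outage

# With no buffer, the host idles for the whole outage.
idle_without_buffer = outage_days

# With a buffer, the host only idles once the cached work runs out.
idle_with_buffer = max(0.0, outage_days - buffer_days)

print(f"Idle days without buffer: {idle_without_buffer}")   # 3.0
print(f"Idle days with buffer:    {idle_with_buffer}")      # 0.0
```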
Joined: 28 Jan 11, Posts: 3
Well, a 4-day buffer isn't enough -- I've run out of work to do. So either:

1. it's been down for over 4 days now, or
2. the buffer estimator needs a little work. :)
Joined: 29 Aug 05, Posts: 15575
> 1. it's been down for over 4 days now

Seti has been down for those last 4 days, plus it will be down for the whole of next weekend:

> Mmmkay. Now the less good news. Looks like gowron is having some fundamental RAID issues. The issue has been whittled down to one RAID1 pair tagged as degraded that won't rebuild no matter what we do. The guys at Overland have been super helpful - but this is actually an old SnapAppliance (not a box that Overland sells) running a (very) old version of the OS. So it's looking like our best bet to move forward is to upgrade the OS on the thing. However, to do so we need to copy the workunits on the system (about 2 terabytes' worth) elsewhere temporarily. How about... thumper! That copy process is happening now.
Joined: 29 Aug 06, Posts: 82
I thought setting a project to 0% (as Claggy says) meant that it would effectively be a backup project - work would only download for it if the main project were unavailable and work were about to run out - but that's not what I'm seeing (BOINC 6.12.33). I set Rosetta to 100% and Climate Prediction to 0%, but all three of my computers here are running both Climate Prediction and Rosetta. Is BOINC capable of managing one or more backup projects, and if so, what settings does that require?
Joined: 29 Aug 05, Posts: 15575
The projects themselves need to be running the updated back-end code as well for that option to work. I'm not sure that either Rosetta or CPDN is running that code.
Joined: 29 Aug 06, Posts: 82
Ah! Claggy's comment about supporting zero makes sense now. Thanks, Ageless ;)
Joined: 5 Oct 06, Posts: 5135
If you can enter zero for the resource share in the preferences box on a project website, and you then see a zero resource share for that project in BOINC Manager after updating, that's "support" enough.

All that controls is whether the BOINC client will request new work from the project - it will only request work if no other project is available and willing to supply work. Once the work is downloaded, it doesn't run with a resource share of zero - that would imply that any task in progress when the main project came back online would never get completed and returned, which isn't the idea at all. It actually behaves the opposite way, running continuously until completion (as if it were running high priority), so it can be cleared out of the way as quickly as possible and return you to your primary project.

From that perspective, Climate Prediction (which only has multi-day tasks on offer) might be seen as an odd choice for a backup project.
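A minimal sketch of that work-fetch rule as described above (illustrative Python, not the BOINC client's real implementation):

```python
# Zero-resource-share ("backup") projects are only asked for work when no
# normal project can supply any; tasks already downloaded run to completion
# like any other. Illustrative sketch only.
from collections import namedtuple

Project = namedtuple("Project", "name resource_share can_supply_work")

def choose_project_to_ask_for_work(projects, work_needed):
    if not work_needed:
        return None
    normal = [p for p in projects if p.resource_share > 0 and p.can_supply_work]
    if normal:
        # The real client weights this choice by resource share and recent
        # work done; taking the first candidate keeps the sketch simple.
        return normal[0]
    backup = [p for p in projects if p.resource_share == 0 and p.can_supply_work]
    return backup[0] if backup else None

# Example: the primary project has no work, so the zero-share backup is asked.
projects = [
    Project("Seti", 1000, False),    # primary project, currently out of work
    Project("Backup", 0, True),      # zero-resource-share backup project
]
chosen = choose_project_to_ask_for_work(projects, work_needed=True)
print(chosen.name)   # -> Backup
```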
Copyright © 2025 University of California.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License,
Version 1.2 or any later version published by the Free Software Foundation.