Message boards : Questions and problems : Switch between applications every X minutes problem
Send message Joined: 14 Jun 11 Posts: 48 |
Hello guys, I was only suspicious about this until a few days ago, but today I observed it beyond any doubt: the option "Switch between applications every X minutes" does not work properly (anymore?). BOINC seems to decide entirely by itself how much computing time it gives to a project. For example, about a week ago I joined The SkyNet POGS (http://pogs.theskynet.org/pogs/), and since then BOINC only computes POGS WUs whenever some are in the queue. Other projects' WUs, like WCG or Asteroids@home, are kept waiting. Only if no POGS WUs are available do the other projects seem to get the switch... I am aware that BOINC does give the projects internal values like the Duration Correction Factor to determine the best amount of computing time. But if I want the projects to be switched after a selected period of time, BOINC should do exactly that and nothing else. Maybe I got this all wrong and everything works as it should; in that case, could someone please give me a short explanation? Thank you very much! :) My system's specs: BOINC v7.4.36, WinXP SP3, 2 GB RAM, PhenomII 945 CPU |
Send message Joined: 23 Apr 07 Posts: 1112 |
The "Switch between applications every X minutes" setting is the interval at which BOINC may switch applications, not when it must switch applications. Claggy |
Send message Joined: 8 Jan 06 Posts: 448 |
Switch between applications every X minutes is the interval at which BOINC checks whether it should be changing project. It is not a firm switch time. If the project is still owed time based on its resource share (RS), it won't switch. If it is time to switch project, the running task will continue until the next checkpoint and then switch. If the project doesn't have checkpoints, it will continue until the WU is completed and then switch project. Boinc V 7.4.36 Win7 i5 3.33G 4GB NVidia 470 |
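The decision described above can be sketched roughly as follows. This is a hypothetical toy model, not BOINC's actual scheduler code; the `Task` fields and `should_switch` function are illustrative names of my own:

```python
from dataclasses import dataclass

@dataclass
class Task:
    time_owed: float      # scheduling debt still owed to this task's project
    has_checkpoint: bool  # True if the task can resume from a checkpoint

def should_switch(running, waiting, elapsed_minutes, switch_interval):
    """At each switch-interval tick, decide whether to preempt the
    running task in favour of a waiting project.  Toy model only."""
    # The interval only gates *when* the check happens ("may switch") ...
    if elapsed_minutes < switch_interval:
        return False
    # ... it does not force a switch: if the running project is still
    # owed time under its resource share, keep running it.
    if running.time_owed > 0:
        return False
    # Only preempt at a safe point: a task with no checkpoint runs to
    # completion before any switch happens.
    return running.has_checkpoint and any(p.time_owed > 0 for p in waiting)
```

So with a 60-minute interval, a check at minute 70 switches only if the running project's debt is paid off and the task has checkpointed.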
Send message Joined: 14 Jun 11 Posts: 48 |
I didn't know that; I always thought it was a must-switch... Thanks, guys, for clearing that up! Much appreciated! :-) |
Send message Joined: 1 Oct 11 Posts: 17 |
As far as I am aware, there is no functionality for an end user to accurately and reliably tell BOINC how to split our donated system resources. All we can do is tell BOINC what projects we want to donate time to, and how we want our resources divided. After doing so, BOINC completely ignores our wishes and spends all of its time working on the project whose development team has been most aggressive about cheating the system. If I say I want a resource share of 100 devoted to one project and a resource share of 5 devoted to another, I couldn't care less what funny calculations are happening somewhere other than at my computer. My local machine is more than capable of tracking how much real processor time is spent on each project. All resource share allocations should be performed on the client. Period. But for some strange reason they aren't. And before some expert cruncher steps up with a thirty-step process on how to modify this, adjust that, and restrict other things in order to maybe have a chance of BOINC doing what you have already told it to do: once a person generous enough to devote cycles to BOINC projects has indicated what they want to contribute to by adjusting resource shares, that's it. Nothing else should be required. End. This really needs to be fixed. |
Send message Joined: 8 Jan 06 Posts: 448 |
As far as I am aware, there is no functionality for an end user to accurately and reliably tell BOINC how to split our donated system resources. Your perception is incorrect. A project may be able to jump the queue with a short deadline, forcing priority processing, but this doesn't give that project more processing time in the long term. BOINC just won't ask that project for more work until the resource share is met with the other projects. Boinc V 7.4.36 Win7 i5 3.33G 4GB NVidia 470 |
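For the 100:5 shares mentioned above, the long-term split BOINC aims for is simple to compute. A toy calculation (not BOINC's actual accounting):

```python
def expected_fractions(shares):
    """Long-term CPU-time fraction each project should receive,
    given its resource share.  Toy model of the target split."""
    total = sum(shares.values())
    return {name: share / total for name, share in shares.items()}

# A 100:5 split is a 20:1 ratio, i.e. roughly 95.2% vs 4.8% of
# total processing time over a long enough window.
print(expected_fractions({"SETI": 100, "Asteroids": 5}))
```

The key point is that this is a long-term target, not something enforced hour by hour: short-deadline work can dominate for a while before the average is pulled back toward these fractions.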
Send message Joined: 1 Oct 11 Posts: 17 |
As far as I am aware, there is no functionality for an end user to accurately and reliably tell BOINC how to split our donated system resources. It is not my 'perception', it is a fact. I currently have the resource share set to 100 for SETI@Home and 5 for Asteroids@Home. That should yield a 20:1 ratio of SETI processor time to Asteroids time. It is NOT doing so. While Asteroids has work available, my machine has not performed any SETI work. None. Asteroids@Home has completely dominated my machine. The same nine SETI work units that my machine was processing when new Asteroids work became available have been sitting, completely and utterly untouched. By every conceivable measurement of "resource sharing", Asteroids isn't sharing. This isn't a small data sample. SETI and Asteroids have been running on my machine for a couple of months now. As mentioned above, the only time Asteroids allowed any SETI work to run at all was when there were no Asteroids work units available. Here are the stats. Resource shares: Asteroids @ 5, SETI @ 100. Average work: Asteroids @ 25,000, SETI @ 1,300 and falling. Total work: Asteroids @ 1.36 million, SETI @ 11.62 million. Processor time when Asteroids work units were available: Asteroids @ 100%, SETI @ 0%. There is clearly no sharing happening here. It's very simple. Whether intentional or not, the Asteroids@Home team is simply abusing BOINC's poorly implemented resource sharing mechanisms. What good is a resource management program that can't manage resources? |
Send message Joined: 8 Jan 06 Posts: 448 |
How are you measuring 'work'? I'm not familiar with the Asteroids project or whether it is CPU, GPU, or both. If you're looking at credits issued, that has nothing to do with time spent processing. Many projects grant excessive credits. In my case, my RS is Seti=1000 and Milkyway=170 to achieve parity in credits received. I know of projects where the disparity is in the millions. Boinc V 7.4.36 Win7 i5 3.33G 4GB NVidia 470 |
Send message Joined: 23 Feb 08 Posts: 2501 |
Asteroids deadlines are much, much shorter than Seti deadlines. This makes things look weird to humans. Over the course of months, the length of Seti deadlines, it will balance out, but that is for estimated credits. Granted credits will never balance, as different projects give different amounts for the same amount of work. With the much shorter deadlines, it looks like a denial-of-crunch attack, as those work units must be chewed through before they go stale. Eventually the scheduler catches on, when the credit estimates catch up and the long-deadline work units aren't so far out, and they will be crunched. To get some idea of what is going on, you can look at the properties of each project and the properties of each work unit. Or you could turn on the debugging flags for the scheduler and work fetch, and wade through the mountain of entries. But none of this may even come into play if you are the lucky or unlucky one. If your rig requests Seti work and none is available that microsecond, it next asks Asteroids, as it wants to be sure it has some kind of work. If that project has work, it will completely fill the buffer. That work has to be returned before the buffer empties enough to ask for more work. If you get unlucky again at the time it requests, well, this can go on for a while, but shouldn't. |
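The "must be chewed through before they go stale" behaviour described above is essentially a deadline override on top of resource shares. A hypothetical sketch, with made-up field names and a simplified risk test (not BOINC's real scheduler):

```python
from datetime import timedelta

def pick_next_task(tasks, now, buffer_days=1.0):
    """Toy model of deadline pressure: if any task risks missing its
    deadline, run the most urgent one regardless of resource share;
    otherwise run the project furthest behind on its share ('debt')."""
    slack = timedelta(days=buffer_days)
    # A task is "at risk" if the time until its deadline is less than
    # its remaining runtime plus a safety buffer.
    at_risk = [t for t in tasks
               if t["deadline"] - now < slack + timedelta(hours=t["hours_left"])]
    if at_risk:
        return min(at_risk, key=lambda t: t["deadline"])
    return max(tasks, key=lambda t: t["debt"])
```

Under this model, a 2-day Asteroids deadline beats a 30-day Seti deadline every time while the Asteroids buffer is full, even when Seti's share says it is owed far more time, which matches what the previous poster is seeing.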
Send message Joined: 1 Oct 11 Posts: 17 |
There is no sharing, no matter how you try to calculate it, because for over two months now, at any time Asteroids has had work available, it has allowed zero (0) SETI work to process on my machine. I have SETI jobs ready to be worked on, right now. I even have partially completed SETI jobs sitting, waiting to get calculation time. It appears the only way my machine will be able to process SETI work again is if I drop Asteroids, which I will be doing after I run out of Asteroids work units. I have set Asteroids to no longer retrieve new work. If my donation of computer cycles were to BOINC to use as it sees fit, or if I had chosen some sort of option to allow my machine to perform work for crunch-time projects only, then I could understand this behavior. But it's not. My choice is to devote a resource share of 100 to SETI and 5 to Asteroids. A 20:1 ratio, however you want to calculate it. BOINC is failing to honor this, no matter how it is calculated. This could be addressed very easily by simply tracking CPU and GPU time for each project on the local machine. I work X time on Asteroids, then I work 20X time on SETI. I do not want to do exclusively SETI work, but I will be forced back to that condition soon. By any and every measure over an extended timeframe, with work units from both projects on my machine now, BOINC is failing to let me donate my computer time as I wish. If I am going to be doing SETI work exclusively, what do I need the BOINC project manager for? Please fix this. |
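The purely local time-ledger this poster is asking for would look something like the sketch below. To be clear, this is a hypothetical illustration of the proposal, not an existing BOINC feature; all names are mine:

```python
class LocalShareLedger:
    """Track cumulative CPU seconds per project locally and always run
    the project furthest behind its resource-share target.  Sketch of
    the client-side accounting proposed above, not real BOINC code."""

    def __init__(self, shares):
        self.shares = dict(shares)                     # e.g. {"SETI": 100, "Asteroids": 5}
        self.cpu_seconds = {n: 0.0 for n in shares}    # time actually delivered

    def record(self, project, seconds):
        """Add observed CPU time for a finished slice of work."""
        self.cpu_seconds[project] += seconds

    def next_project(self):
        """Pick the project with the largest deficit between its target
        fraction and the fraction of time it has actually received."""
        total_share = sum(self.shares.values())
        total_time = sum(self.cpu_seconds.values()) or 1.0
        def deficit(name):
            target = self.shares[name] / total_share
            actual = self.cpu_seconds[name] / total_time
            return target - actual
        return max(self.shares, key=deficit)
```

Note that even this simple ledger only enforces the ratio over time; it says nothing about deadlines, which is exactly the dimension the earlier replies point to as the cause of the observed behaviour.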
Send message Joined: 23 Apr 07 Posts: 1112 |
I do not want to do exclusively SETI work, but I will be forced back to that condition soon. By any and every measure over an extended timeframe, with work units from both projects on my machine now, BOINC is failing to allow me to donate my computer time as I wish. If I am going to be doing SETI work exclusively, what do I need the BOINC project manager for? Please fix this. As you've repeatedly been told on the Seti forum, your cache setting is causing this. You've maxed out your cache setting and got your 200 tasks, so now BOINC needs to fill its cache, and it gets Asteroids work, lots of it. Now the Seti deadlines are all further away and the Asteroids deadlines are only a few days away, so BOINC has to do the Asteroids work 'right now', otherwise it won't meet the deadline. Soon it'll have to do the Seti work, otherwise it won't meet that deadline. You've been told that adjusting your cache settings to a more reasonable amount will get things running more to your liking, but you won't listen. The only real way of fixing this is for Seti to remove their limits so you actually get your 10 or 20 days of work, or for projects not to have really short deadlines, so the work doesn't have to be done 'right now' to meet them. Claggy |
Copyright © 2025 University of California.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License,
Version 1.2 or any later version published by the Free Software Foundation.