Message boards :
Questions and problems :
Seti CUDA Likes To Hog The GPU
Message board moderation
Author | Message |
---|---|
Send message Joined: 7 Sep 09 Posts: 167 |
Maybe I'm thick but I've had to give up allowing Seti@home access to my GPU as it just takes over and I find that anything else using graphics suffers as a result - juddering, slowness etc. According to what I've read and my settings it's supposed to suspend itself if the GPU is in use, but I don't see that happening. So unfortunately I'm forced to accept fewer work units as a result of allowing only CPU access. Comments or suggestions welcome. Peter Toronto, Canada |
Send message Joined: 8 Jan 06 Posts: 448 |
Maybe I'm thick but I've had to give up allowing Seti@home access to my GPU as it just takes over and I find that anything else using graphics suffers as a result - juddering, slowness etc. Doing anything that is GPU intensive will be affected by Boinc. AFAIK there isn't a capability to tell Boinc to go low priority on the GPU. |
Send message Joined: 20 Dec 07 Posts: 1069 |
Did you select "Run based on preferences" in the Activity menu? What is your BOINC version? Did you check online and local preferences? Regards, Gundolf. Computers aren't everything in life. (Just a little joke.) |
Send message Joined: 14 Mar 09 Posts: 215 |
You may also want to uncheck "use GPU while in use", as VLAR WUs kill GPU screen performance. As for the hogging, it's most likely that your resource share for Seti is set very high compared to other GPU projects. 300 for Seti and 100 for GPUGrid is like running 30 Seti MB WUs to 1 GPUGrid WU. |
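As a quick illustration of the point above, here is a small sketch of how BOINC-style resource shares translate into each project's fraction of the total; the project names and share values are just the example numbers from this post, and the helper function is made up for illustration.

```python
# Sketch: resource shares are relative weights, so each project's
# fraction is its share divided by the sum of all shares.
def share_fractions(shares):
    """Return each project's fraction of the total resource share."""
    total = sum(shares.values())
    return {name: s / total for name, s in shares.items()}

# The example from the post: Seti at 300, GPUGrid at 100.
fractions = share_fractions({"seti": 300, "gpugrid": 100})
print(fractions)  # seti gets 0.75 of the resources, gpugrid 0.25
```

Note the 3:1 split in shares is not the same as a 3:1 split in completed WUs, since WU lengths differ widely between projects.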
Send message Joined: 20 Dec 07 Posts: 1069 |
Doing anything that is GPU intensive will be affected by Boinc. AFAIK there isn't a capability to tell Boinc to go low priority on the GPU. Not to go low priority, but to suspend GPU processing when the user is active. Regards, Gundolf |
Send message Joined: 7 Sep 09 Posts: 167 |
I apologise about not getting back sooner but I never received notification of your replies. Now that I've spotted the Subscribe button, that shouldn't happen again. I'm using Boinc Manager 6.6.36. I never consciously set the shares each application uses, but here's what they are right now. I have it set to run based on preferences in the client and have made sure that each application is set accordingly at their websites. I can't tell it to do work only when the computer is idle as it would never get any work done at all; I was just hoping that there would be a way of limiting GPU usage, the same way as there is for CPU usage. Seti throws a big guilt trip at me now saying jobs would be available except my settings don't allow GPU usage. Tough! Peter Toronto, Canada |
Send message Joined: 29 Aug 05 Posts: 15487 |
I was just hoping that there would be a way of limiting GPU usage, the same way as there is for CPU usage. By using the CPU throttle function, you throttle the GPU as well, as that reduces the rate at which the CPU application produces kernels for the GPU to work on. Other than that, there is no way at this time that the BOINC developers or the Nvidia developers can throttle a GPU. When the GPU is in use, it is saturated with kernels. This causes the slowdown you see, hence why we advise you to only do CUDA when you're not otherwise using the PC. See my CUDA/CAL FAQ (in signature) for more information. |
Send message Joined: 7 Sep 09 Posts: 167 |
Interesting reading, thank you. I'll keep watch for developments on that front. Peter Toronto, Canada |
Send message Joined: 7 Sep 09 Posts: 167 |
By the way I forgot to ask...I stated earlier that I had done nothing to set the resource shares for each application...is that done automatically or should I be meddling with it? Peter Toronto, Canada |
Send message Joined: 14 Mar 09 Posts: 215 |
In the computing preferences page of the account page of the project (ugh, what a phrase)... and yes, it's fine to meddle with it. You need to take into account the fact that your computers accumulate debt to projects, and will run them a lot more often if you've been keeping those projects from running (via setting no new work). My quad is set up in this fashion with no new work set and I babysit the computer. I love doing it for the quad, not so much for the P4s. |
Send message Joined: 7 Sep 09 Posts: 167 |
Thanks. I'm going to leave it alone for now until I've researched the subject a bit more. I've only recently restarted Boinc after a very long absence so have forgotten most of whatever I learned in the past. Peter Toronto, Canada |
Send message Joined: 14 Mar 09 Posts: 215 |
Thanks. I'm going to leave it alone for now until I've researched the subject a bit more. I've only recently restarted Boinc after a very long absence so have forgotten most of whatever I learned in the past. I'm one of those people that doesn't forget anything I've learned, even if I was gone for a long time. |
Send message Joined: 7 Sep 09 Posts: 167 |
Wait until you get old like me....LOL Peter Toronto, Canada |
Send message Joined: 5 Oct 06 Posts: 5082 |
I'm one of those people that doesn't forget anything I've learned, even if I was gone for a long time. The danger with that - especially with BOINC - is that quite often the things you've learned turn out to have changed while your back was turned, and hence not to be true any more! ;-) |
Send message Joined: 14 Mar 09 Posts: 215 |
I'm one of those people that doesn't forget anything I've learned, even if I was gone for a long time. True. |
Send message Joined: 8 Sep 09 Posts: 5 |
I was just hoping that there would be a way of limiting GPU usage, the same way as there is for CPU usage. Well, that is not entirely true. The developer of a GPU application can of course "throttle" a GPU just by waiting a bit between all those kernel calls ;) In the case of the ATI application for MW there is even a command line option a user can set in the app_info.xml for this exact purpose. So it is true that it can't be enforced by the BOINC client; nevertheless such an option would be possible if the science app supports it. |
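The "waiting a bit between kernel calls" idea above can be sketched roughly as follows. This is purely illustrative: `launch_kernel` is a hypothetical stand-in for a real GPU kernel call, and the duty-cycle formula is an assumption for the sketch, not the actual MW application's implementation.

```python
import time

# Assumed model: each kernel keeps the GPU busy for ~kernel_time_s,
# so sleeping between launches lowers the GPU's average load and
# leaves time for the desktop to render.
def sleep_between_kernels(kernel_time_s, gpu_load):
    """Seconds to sleep after each kernel so the GPU is busy ~gpu_load of the time."""
    return kernel_time_s * (1.0 - gpu_load) / gpu_load

def throttled_loop(launch_kernel, kernel_time_s, gpu_load=0.5, iterations=10):
    """Launch kernels with pauses so the GPU averages ~gpu_load utilisation."""
    pause = sleep_between_kernels(kernel_time_s, gpu_load)
    for _ in range(iterations):
        launch_kernel()      # GPU busy for ~kernel_time_s
        time.sleep(pause)    # GPU idle; the screen stays responsive
```

The trade-off discussed in the following posts applies: if the pauses are long, the GPU load (and temperature) swings up and down instead of staying steady.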
Send message Joined: 29 Aug 05 Posts: 15487 |
Well, that is not entirely true. The developer of a GPU application can of course "throttle" a GPU just by waiting a bit between all those kernel calls ;) Which is why, when you throttle the CPU, it temporarily suspends the CPU application sending those kernels to the GPU. But I said that already. ;-) Ah, I may add for those that don't know this: the CPU throttle in BOINC suspends and resumes the running tasks for a couple of seconds every 10 seconds. If you set it to 80% it will not continuously give 80% CPU, but rather run 8 seconds, pause 2. Btw, welcome around here Gipsel. Your work at Milkyway (and other places) is highly appreciated. |
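The run-8-pause-2 behaviour described above is a simple duty cycle, which can be sketched like this. The 10-second period and the 80% example come from the post; the function itself is made up for illustration and is not BOINC code.

```python
# Sketch of BOINC's coarse CPU throttle: rather than smoothly limiting
# CPU usage, the client runs tasks for part of each period and suspends
# them for the rest.
def duty_cycle(cpu_pct, period_s=10.0):
    """Return (run_seconds, pause_seconds) per period for a throttle setting."""
    run = period_s * cpu_pct / 100.0
    return run, period_s - run

print(duty_cycle(80))  # (8.0, 2.0): run 8 seconds, pause 2
```

This is why a GPU fed by a throttled CPU app stalls in bursts rather than slowing down evenly.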
Send message Joined: 8 Sep 09 Posts: 5 |
Well, that is not entirely true. The developer of a GPU application can of course "throttle" a GPU just by waiting a bit between all those kernel calls ;) But that's so slow the fan on the card may already spin up and down all the time. The other solution is much more fine grained :) Btw, welcome around here Gipsel. Your work at Milkyway (and other places) is highly appreciated. Thanks! I just came here by accident searching for some information on who the hell suggested that the BOINC manager (it started with 6.6.21 or so) now shows wall clock time for the WUs ;) |
Send message Joined: 29 Aug 05 Posts: 15487 |
I just came here by accident searching for some information on who the hell suggested that the BOINC manager (it started with 6.6.21 or so) now shows wall clock time for the WUs ;) It started with 6.6.16. And whoever is responsible? I don't know... I did make a BOINC Manager that shows both CPU time and Wall time in columns, though. ;-) (32bit Windows only) |
Send message Joined: 8 Sep 09 Posts: 5 |
I just came here by accident searching for some information on who the hell suggested that the BOINC manager (it started with 6.6.21 or so) now shows wall clock time for the WUs ;) I know, that was the reason for the smiley. But I think it creates a lot of confusion for some people, especially if the possibility to run more than one WU per GPU (officially supported since 6.10.3) is used. A directive to the GPU developers to report the GPU time (measured anyway by most project apps) instead of CPU time to the client, with the manager simply showing the time reported by the science app (that was the behaviour before 6.6.16), would be better in my opinion. It requires the cooperation of the project developers (guess that's a killer argument), but the manager would show much more representative times. The current state over at MW, for instance, is that recent manager versions show roughly triple the time needed for a WU (if one uses the default 3 WUs per GPU) in the case of ATI cards, while the task list shows the GPU times (as my apps report the GPU time instead of the CPU time to the client). But using the CUDA app leads to very low CPU times in the task list. All in all there is not much coherence between the times seen in the manager and on the project pages. But I guess that is a bit off-topic here and I don't want to hijack the thread. |
Copyright © 2024 University of California.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License,
Version 1.2 or any later version published by the Free Software Foundation.