Seti CUDA Likes To Hog The GPU

Peter
Joined: 7 Sep 09
Posts: 167
Canada
Message 27141 - Posted: 7 Sep 2009, 22:45:43 UTC

Maybe I'm thick but I've had to give up allowing Seti@home access to my GPU as it just takes over and I find that anything else using graphics suffers as a result - juddering, slowness etc.
According to what I've read and my settings it's supposed to suspend itself if the GPU is in use, but I don't see that happening.
So unfortunately I'm forced to accept fewer work units as a result of allowing only CPU access.

Comments or suggestions welcome.


Peter
Toronto, Canada
ID: 27141
Aurora Borealis
Joined: 8 Jan 06
Posts: 448
Canada
Message 27142 - Posted: 7 Sep 2009, 22:53:04 UTC - in response to Message 27141.  

Maybe I'm thick but I've had to give up allowing Seti@home access to my GPU as it just takes over and I find that anything else using graphics suffers as a result - juddering, slowness etc.
According to what I've read and my settings it's supposed to suspend itself if the GPU is in use, but I don't see that happening.
So unfortunately I'm forced to accept fewer work units as a result of allowing only CPU access.

Comments or suggestions welcome.


Doing anything that is GPU intensive will be affected by Boinc. AFAIK there isn't a capability to tell Boinc to go low priority on the GPU.
ID: 27142
Gundolf Jahn
Joined: 20 Dec 07
Posts: 1069
Germany
Message 27143 - Posted: 7 Sep 2009, 23:02:52 UTC - in response to Message 27141.  
Last modified: 7 Sep 2009, 23:03:22 UTC

Did you select "Run based on preferences" in the Activity menu?

What is your BOINC version?

Did you check online and local preferences?

Regards,
Gundolf
Computers aren't everything in life. (Little joke)
ID: 27143
ZPM
Joined: 14 Mar 09
Posts: 215
United States
Message 27144 - Posted: 7 Sep 2009, 23:05:13 UTC - in response to Message 27143.  
Last modified: 7 Sep 2009, 23:06:44 UTC

You may also want to uncheck "use GPU while in use", as VLAR WUs kill GPU screen performance.

As for the hogging: it's most likely that your resource share for SETI is set very high compared to other GPU projects.

300 for SETI and 100 for GPUGRID is like running 30 SETI MB WUs to 1 GPUGRID WU.
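For reference, resource share controls the long-term fraction of compute time each project gets. A minimal sketch with the numbers from the example above (the code itself is purely illustrative, not BOINC's actual scheduler):

```python
# Resource shares from the example above: 300 for SETI, 100 for GPUGRID.
# BOINC aims to split compute time in proportion to these shares.
shares = {"seti": 300, "gpugrid": 100}

total = sum(shares.values())
fractions = {project: share / total for project, share in shares.items()}

print(fractions)  # {'seti': 0.75, 'gpugrid': 0.25} -> a 3:1 time split
```

The 30:1 WU ratio mentioned above comes on top of this 3:1 time split, presumably because individual SETI MB WUs are much shorter than GPUGRID WUs.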
ID: 27144
Gundolf Jahn
Joined: 20 Dec 07
Posts: 1069
Germany
Message 27145 - Posted: 7 Sep 2009, 23:06:20 UTC - in response to Message 27142.  

Doing anything that is GPU intensive will be affected by Boinc. AFAIK there isn't a capability to tell Boinc to go low priority on the GPU.

Not to go low priority, but to suspend GPU processing when the user is active.

Regards,
Gundolf
ID: 27145
Peter
Joined: 7 Sep 09
Posts: 167
Canada
Message 27152 - Posted: 8 Sep 2009, 12:53:51 UTC
Last modified: 8 Sep 2009, 13:02:57 UTC

I apologise for not getting back sooner, but I never received notification of your replies. Now that I've spotted the Subscribe button, that shouldn't happen again.

I'm using BOINC Manager 6.6.36.

I never consciously set the shares each application uses, but here's what they are right now.



I have it set to run based on preferences in the client and have made sure that each application is set accordingly at its website.

I can't tell it to do work only when the computer is idle, as it would never get any work done at all. I was just hoping that there would be a way of limiting GPU usage, the same way as there is for CPU usage.

Seti throws a big guilt trip at me now saying jobs would be available except my settings don't allow GPU usage. Tough!
Peter
Toronto, Canada
ID: 27152
Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15481
Netherlands
Message 27153 - Posted: 8 Sep 2009, 13:16:22 UTC - in response to Message 27152.  

I was just hoping that there would be a way of limiting GPU usage, the same way as there is for CPU usage.

By using the CPU throttle function, you throttle the GPU as well, as that slows down the number of kernels that the CPU application generates for the GPU to work on. Other than that, there is no way at this time that the BOINC developers or the Nvidia developers can throttle a GPU.

When the GPU is in use, it is saturated with kernels. This causes the slowdown you see, which is why we advise you to only do CUDA when you're not otherwise using the PC. See my CUDA/CAL FAQ (in signature) for more information.
ID: 27153
Peter
Joined: 7 Sep 09
Posts: 167
Canada
Message 27155 - Posted: 8 Sep 2009, 15:03:48 UTC - in response to Message 27153.  

Interesting reading, thank you. I'll keep watch for developments on that front.
Peter
Toronto, Canada
ID: 27155
Peter
Joined: 7 Sep 09
Posts: 167
Canada
Message 27156 - Posted: 8 Sep 2009, 15:08:24 UTC

By the way I forgot to ask...I stated earlier that I had done nothing to set the resource shares for each application...is that done automatically or should I be meddling with it?
Peter
Toronto, Canada
ID: 27156
ZPM
Joined: 14 Mar 09
Posts: 215
United States
Message 27157 - Posted: 8 Sep 2009, 16:33:09 UTC - in response to Message 27156.  

In the computing preferences page of the account page of the project (ugh, what a phrase)...

And yes, it's fine to meddle with it. You need to take into account the fact that your computers accumulate debt to projects, and will run those projects a lot more often if you've kept them from running (by means of "No new work"). My quad is set up in this fashion with "No new work" set, and I babysit the computer. I love doing it for the quad, not so much for the P4s.
ID: 27157
Peter
Joined: 7 Sep 09
Posts: 167
Canada
Message 27158 - Posted: 8 Sep 2009, 17:39:46 UTC - in response to Message 27157.  

Thanks. I'm going to leave it alone for now until I've researched the subject a bit more. I've only recently restarted Boinc after a very long absence so have forgotten most of whatever I learned in the past.
Peter
Toronto, Canada
ID: 27158
ZPM
Joined: 14 Mar 09
Posts: 215
United States
Message 27159 - Posted: 8 Sep 2009, 17:47:15 UTC - in response to Message 27158.  

Thanks. I'm going to leave it alone for now until I've researched the subject a bit more. I've only recently restarted Boinc after a very long absence so have forgotten most of whatever I learned in the past.


I'm one of those people who doesn't forget anything I've learned, even if I was gone for a long time.
ID: 27159
Peter
Joined: 7 Sep 09
Posts: 167
Canada
Message 27160 - Posted: 8 Sep 2009, 17:51:19 UTC - in response to Message 27159.  

Wait until you get old like me....LOL
Peter
Toronto, Canada
ID: 27160
Richard Haselgrove
Volunteer tester
Help desk expert
Joined: 5 Oct 06
Posts: 5082
United Kingdom
Message 27161 - Posted: 8 Sep 2009, 17:52:39 UTC - in response to Message 27159.  

I'm one of those people who doesn't forget anything I've learned, even if I was gone for a long time.

The danger with that - especially with BOINC - is that quite often the things you've learned turn out to have changed while your back was turned, and hence not to be true any more! ;-)
ID: 27161
ZPM
Joined: 14 Mar 09
Posts: 215
United States
Message 27162 - Posted: 8 Sep 2009, 17:58:59 UTC - in response to Message 27161.  

I'm one of those people who doesn't forget anything I've learned, even if I was gone for a long time.

The danger with that - especially with BOINC - is that quite often the things you've learned turn out to have changed while your back was turned, and hence not to be true any more! ;-)


True.
ID: 27162
Gipsel
Joined: 8 Sep 09
Posts: 5
Germany
Message 27171 - Posted: 8 Sep 2009, 21:03:34 UTC - in response to Message 27153.  

I was just hoping that there would be a way of limiting GPU usage, the same way as there is for CPU usage.

By using the CPU throttle function, you throttle the GPU as well, as that will slow down the amount of kernels that the CPU application makes for the GPU to work on. Other than that, there is no way at this time that the BOINC developers or the Nvidia developers can throttle a GPU.

Well, that is not entirely true. The developer of a GPU application can of course "throttle" a GPU just by waiting a bit between all those kernel calls ;)

In the case of the ATI application for MW there is even a command line option a user can set in the app_info.xml for this exact purpose. So while it's true that this can't be enforced by the BOINC client, such an option is nevertheless possible if the science app supports it.
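To illustrate the idea of throttling by waiting between kernel calls, here is a hypothetical sketch; the function names are mine, and a real GPU app would issue asynchronous kernel calls rather than Python callables:

```python
import time

def idle_time(busy_seconds, gpu_fraction):
    """How long to sleep after a kernel call that kept the GPU busy for
    `busy_seconds`, so that busy / (busy + idle) equals `gpu_fraction`."""
    return busy_seconds * (1.0 - gpu_fraction) / gpu_fraction

def run_throttled(kernels, gpu_fraction=0.5):
    """Run synchronous `kernels` (zero-argument callables) one at a
    time, sleeping between calls so the GPU sits idle part of the
    time instead of being saturated continuously."""
    for kernel in kernels:
        start = time.perf_counter()
        kernel()  # stand-in for a synchronous GPU kernel call
        busy = time.perf_counter() - start
        time.sleep(idle_time(busy, gpu_fraction))
```

Because the wait is inserted after every single kernel, the duty cycle is on the order of milliseconds, which is why this approach feels much smoother than a coarse suspend/resume scheme.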
ID: 27171
Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15481
Netherlands
Message 27174 - Posted: 8 Sep 2009, 21:16:08 UTC - in response to Message 27171.  
Last modified: 8 Sep 2009, 21:19:07 UTC

Well, that is not entirely true. The developer of a GPU application can of course "throttle" a GPU just by waiting a bit between all those kernel calls ;)

Which is why, when you throttle the CPU, that temporarily suspends the CPU application sending those kernels to the GPU. But I said that already. ;-)

Ah, I may add for those who don't know this: the CPU throttle in BOINC suspends and resumes the running tasks for a couple of seconds every 10 seconds. If you set it to 80%, it will not continuously give 80% CPU, but rather run 8 seconds, pause 2.
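That suspend/resume schedule can be sketched as follows (the 10-second cycle length is as described above; the helper function itself is hypothetical, not BOINC's actual code):

```python
PERIOD = 10.0  # seconds per throttle cycle, as described above

def duty_cycle(cpu_percent, period=PERIOD):
    """Return (run_seconds, pause_seconds) for one throttle cycle."""
    run = period * cpu_percent / 100.0
    return run, period - run

print(duty_cycle(80))  # (8.0, 2.0): run 8 seconds, pause 2
```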

Btw, welcome around here Gipsel. Your work at Milkyway (and other places) is highly appreciated.
ID: 27174
Gipsel
Joined: 8 Sep 09
Posts: 5
Germany
Message 27180 - Posted: 8 Sep 2009, 21:39:17 UTC - in response to Message 27174.  

Well, that is not entirely true. The developer of a GPU application can of course "throttle" a GPU just by waiting a bit between all those kernel calls ;)

Which is why, when you throttle the CPU, that temporarily suspends the CPU application sending those kernels to the GPU. But I said that already. ;-)

Ah, I may add for those who don't know this: the CPU throttle in BOINC suspends and resumes the running tasks for a couple of seconds every 10 seconds. If you set it to 80%, it will not continuously give 80% CPU, but rather run 8 seconds, pause 2.

But that cycle is so coarse that the fan on the card may already spin up and down all the time. The other solution is much more fine-grained :)

Btw, welcome around here Gipsel. Your work at Milkyway (and other places) is highly appreciated.

Thanks!
I just came here by accident searching for some information on who the hell suggested that the BOINC manager (it started with 6.6.21 or so) now shows wall clock time for the WUs ;)
ID: 27180
Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15481
Netherlands
Message 27182 - Posted: 8 Sep 2009, 22:00:46 UTC - in response to Message 27180.  

I just came here by accident searching for some information on who the hell suggested that the BOINC manager (it started with 6.6.21 or so) now shows wall clock time for the WUs ;)

It started with 6.6.16.

And whoever is responsible? I don't know... I did make a BOINC Manager that shows both CPU time and wall time in columns, though. ;-)
(32bit Windows only)
ID: 27182
Gipsel
Joined: 8 Sep 09
Posts: 5
Germany
Message 27185 - Posted: 8 Sep 2009, 22:24:31 UTC - in response to Message 27182.  

I just came here by accident searching for some information on who the hell suggested that the BOINC manager (it started with 6.6.21 or so) now shows wall clock time for the WUs ;)

It started with 6.6.16.

And whoever is responsible? I don't know... I did make a BOINC Manager that shows both CPU time and wall time in columns, though. ;-)
(32bit Windows only)

I know, that was the reason for the smiley.

But I think it creates a lot of confusion for some people, especially if the possibility to run more than one WU per GPU (officially supported since 6.10.3) is used. A directive to the GPU developers to report the GPU time (which most project apps measure anyway) instead of CPU time to the client, with the manager simply showing the time reported by the science app (the behaviour before 6.6.16), would be better in my opinion. It requires the cooperation of the project developers (I guess that's a killer argument), but the manager would show much more representative times.
The current state over at MW, for instance, is that recent manager versions show roughly triple the time needed for a WU (if one uses the default 3 WUs per GPU) in the case of ATI cards, while the task list shows the GPU times (as my apps report the GPU time instead of the CPU time to the client). Using the CUDA app, on the other hand, leads to very low CPU times in the task list. All in all there is not much coherence between the times seen in the manager and on the project pages.
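As a hypothetical illustration of the mismatch (the 120-second GPU time is an invented number; only the 3-WUs-per-GPU factor comes from the text):

```python
# With 3 WUs sharing one GPU, each WU's wall-clock time is roughly
# 3x the GPU time it actually consumed, since the WUs take turns.
wus_per_gpu = 3
gpu_seconds_per_wu = 120   # invented per-WU GPU time, for illustration

wall_clock_per_wu = wus_per_gpu * gpu_seconds_per_wu
print(wall_clock_per_wu)   # 360: what a wall-clock-based manager shows
```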

But I guess that is a bit off-topic here and I don't want to hijack the thread.
ID: 27185

Copyright © 2024 University of California.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.