Big ask - auto throttling of CPU tasks when using GPUs?

Message boards : GPUs : Big ask - auto throttling of CPU tasks when using GPUs?


Peter Hucker
Joined: 6 Oct 06
Posts: 1144
United Kingdom
Message 96528 - Posted: 8 Mar 2020, 21:43:14 UTC - in response to Message 96524.  

> > But what I was observing was that if the CPU is too busy with unrelated tasks (e.g. CPU WUs), then the GPU slowed down. This suggests to me that the CPU part of that GPU WU was not being given enough CPU time. But it should have been, as its priority in the OS was higher than the CPU WUs.
> Not necessarily. It might be that the CPU had to refetch a lot of data that wasn't in local cache memory at every 'context switch' (look that one up too).


From experience, all I see is that if overall CPU usage hits 100%, the CPU cannot service the GPUs adequately, hence I (annoyingly) have to babysit BOINC so it doesn't use every single CPU core and the GPU is never left idle. It seems to make no difference whether a 6-core CPU is doing nothing else but servicing one GPU, or is running 5 CPU WUs as well - but give it 6 and there are no spare cycles to help the GPU. This does not make sense, as the priorities in the OS should prevent it.
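The babysitting can be automated: BOINC's app_config.xml (placed in the project's directory under the BOINC data folder) lets you tell the client to budget a whole CPU core for each GPU task, so it starts one fewer CPU WU by itself. A minimal sketch - the app name below is a placeholder, the real short name for your project's GPU app is in client_state.xml:

```xml
<app_config>
  <app>
    <name>example_gpu_app</name>  <!-- placeholder: your project's app name -->
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>  <!-- each task uses one whole GPU -->
      <cpu_usage>1.0</cpu_usage>  <!-- budget a full CPU core per GPU task -->
    </gpu_versions>
  </app>
</app_config>
```

With `<cpu_usage>` at 1.0, a 6-core box running one GPU task will only start 5 CPU WUs, leaving a core free to feed the GPU.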
ID: 96528
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 4602
United Kingdom
Message 96532 - Posted: 8 Mar 2020, 22:00:10 UTC - in response to Message 96526.  

> Not the first time the code has diverged from the documentation, over time.
> Gotta wonder though which one it is now. I'm not going to ask.

[screenshot]


'GetDecics' is a BOINC CPU app.
'MB8_win_x86_SSE3_' is a BOINC NVidia GPU app.
'hsgamma_' is a BOINC intel GPU app from Einstein, which is an interesting divergence from normal - Peter might be interested.
ID: 96532
Peter Hucker
Joined: 6 Oct 06
Posts: 1144
United Kingdom
Message 96534 - Posted: 8 Mar 2020, 22:07:53 UTC - in response to Message 96532.  

> > Not the first time the code has diverged from the documentation, over time.
> > Gotta wonder though which one it is now. I'm not going to ask.
>
> 'GetDecics' is a BOINC CPU app.
> 'MB8_win_x86_SSE3_' is a BOINC NVidia GPU app.
> 'hsgamma_' is a BOINC intel GPU app from Einstein, which is an interesting divergence from normal - Peter might be interested.


Did it decide to set realtime by itself? I don't use Intel GPU apps, as I get the same power from a CPU app, and the Intel GPU takes a core away from the CPU.
ID: 96534
Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 14733
Netherlands
Message 96535 - Posted: 8 Mar 2020, 22:12:16 UTC - in response to Message 96532.  
Last modified: 8 Mar 2020, 22:16:56 UTC

Nice image and all that, but it doesn't answer our question: what sets the thread priority, BOINC or the application programmer?
Although considering your RT application there, I'd lean towards the latter.
ID: 96535
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 4602
United Kingdom
Message 96536 - Posted: 8 Mar 2020, 22:14:57 UTC - in response to Message 96534.  
Last modified: 8 Mar 2020, 22:33:11 UTC

> Did it decide to set realtime by itself? I don't use Intel GPU apps, as I get the same power from a CPU app, and the Intel GPU takes a core away from the CPU.
No, I deliberately choose to run Process Lasso (from which the screengrab is taken) to lock that particular application to RealTime - note the 'R' in the Rules column.

It's a particular special case which needs very little CPU support - but by god, does it need it quickly. This is a quad-core: I'm running 2x integer CPU tasks, 2x OpenCL tasks needing 100% CPU, and the Intel GPU app using next to nothing. That keeps it all within the TDP power limit, so no throttling.

Edit - notice that there are no rules controlling the CPU and GPU apps. If you want, I can run those two apps (tomorrow!) in a command window or bench test, outside the BOINC app_start environment. That should settle it.
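For anyone who wants to script this rather than use Process Lasso: on Unix-like systems, the rough analogue of the Windows priority classes is the 'nice' value (-20 highest priority to 19 lowest), which Python exposes through os.getpriority / os.setpriority; on Windows the corresponding Win32 calls are SetPriorityClass / SetThreadPriority. A small sketch to illustrate the mechanism only - note that an unprivileged process can only lower its own priority, and locking an app to RealTime as above needs elevated rights:

```python
import os

def show_and_lower_priority(step=5):
    """Read this process's 'nice' value, then lower its priority a notch.

    Nice values run from -20 (highest priority) to 19 (lowest); we clamp
    at 19 because that is the floor of the scale.
    """
    pid = os.getpid()
    before = os.getpriority(os.PRIO_PROCESS, pid)
    os.setpriority(os.PRIO_PROCESS, pid, min(before + step, 19))
    after = os.getpriority(os.PRIO_PROCESS, pid)
    return before, after
```

On this scale, a 'below normal' science app would sit at a modest positive nice value, and an 'idle' one near 19.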
ID: 96536
ProDigit

Joined: 8 Nov 19
Posts: 644
United States
Message 96540 - Posted: 9 Mar 2020, 3:37:09 UTC

I guess that feature is largely determined by the project you're running tasks for.

The Collatz Conjecture, for instance, does not use much CPU at all. I could run it even with all my CPU cores fully utilised, without much performance loss.

PrimeGrid, on the other hand, uses quite a lot of CPU - sometimes bottlenecking the GPU when you have a fast GPU and a slow CPU.
I would estimate that an RTX 2080 Ti needs at least a 3.7 GHz, preferably 4 GHz, CPU. For Collatz, the CPU could probably feed the 2080 Ti even running at only 300-400 MHz, without much of a performance penalty at all.
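The difference largely comes down to how the GPU app's feeder thread waits for the GPU: apps that sleep until a kernel finishes use almost no CPU, while apps that spin-wait on the GPU burn a whole core. A rough way to see the two patterns from Python - the two feeder functions below are stand-ins for illustration, not the projects' actual code:

```python
import time

def cpu_fraction(feeder, duration=0.25):
    """Run `feeder` repeatedly for ~`duration` seconds of wall time and
    return the fraction of that time this process actually spent on the CPU."""
    wall_start = time.perf_counter()
    cpu_start = time.process_time()
    while time.perf_counter() - wall_start < duration:
        feeder()
    wall = time.perf_counter() - wall_start
    cpu = time.process_time() - cpu_start
    return cpu / wall

def sleepy_feeder():
    """Collatz-style stand-in: queue work, then sleep until the GPU is done."""
    time.sleep(0.01)

def spinning_feeder():
    """PrimeGrid-style stand-in: poll the GPU in a tight loop, burning CPU."""
    for _ in range(10_000):
        pass
```

The sleepy feeder's CPU fraction comes out near zero, the spinning one's near one - the 'quite a lot of CPU' described above, and why the spinning kind suffers when every core is already busy.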
ID: 96540
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 4602
United Kingdom
Message 96562 - Posted: 9 Mar 2020, 17:41:28 UTC - in response to Message 96532.  

We saw this list last night:


> 'GetDecics' is a BOINC CPU app.
> 'MB8_win_x86_SSE3_' is a BOINC NVidia GPU app.
> 'hsgamma_' is a BOINC intel GPU app from Einstein.

Here are two more versions. This is a different machine, with only one NVidia GPU - but all four cores are still committed. And I'm showing you the einsteinbinary_BRP4 app, again for intel_gpu, because these tasks take less time, so it's easier to show you the changes. The machine is Einstein host 8864187 - an Intel i5 with HD 4600 GPU.

This one is running under BOINC, but with the Process Lasso rule removed:

[screenshot]

And this one is with the Einstein app running from the command line, outside BOINC:

[screenshot]
I think we can confirm that BOINC is capable of setting process priorities to 'below normal' (in this GPU case), and that this particular Einstein developer chose to let BOINC do its thing. That's just one example: it is not possible to generalise that every developer at every project will make the same choice.
ID: 96562
Peter Hucker
Joined: 6 Oct 06
Posts: 1144
United Kingdom
Message 96566 - Posted: 9 Mar 2020, 19:16:22 UTC - in response to Message 96562.  

> I think we can confirm that BOINC is capable of setting process priorities to 'below normal' (in this GPU case), and that this particular Einstein developer chose to let BOINC do its thing. That's just one example: it is not possible to generalise that every developer at every project will make the same choice.


Agreed - the trouble is that (in Windows 10 at least) it doesn't work. A real-life analogy: two things happen to you at once - your favourite TV programme comes on and your house catches fire. The fire has the higher priority, so you deal with that and only that; you spend zero time on the TV programme until you have extinguished the fire. So why on earth is Windows 10 running the idle-priority CPU WUs to the detriment of the below-normal-priority GPU tasks?
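Part of the answer is that desktop schedulers deliberately don't behave like the house-fire analogy: to avoid starving anything completely, Windows periodically boosts threads that haven't run for a while, so even idle-priority CPU WUs still get timeslices - and each slice they get can evict the GPU feeder's data from cache and delay its next wakeup. A toy simulation of that anti-starvation behaviour (an illustrative model only, not Windows' actual algorithm):

```python
def simulate(tasks, ticks, boost_after=10, boost=15):
    """Toy strict-priority scheduler with an anti-starvation boost.

    tasks: dict of task name -> base priority (higher runs first).
    Any task that has waited `boost_after` ticks without running gets a
    temporary priority `boost`, loosely modelling how Windows boosts
    long-starved threads.
    """
    waited = {name: 0 for name in tasks}
    ran = {name: 0 for name in tasks}
    for _ in range(ticks):
        def effective(name):
            return tasks[name] + (boost if waited[name] >= boost_after else 0)
        chosen = max(tasks, key=effective)  # highest effective priority wins
        ran[chosen] += 1
        for name in tasks:
            waited[name] = 0 if name == chosen else waited[name] + 1
    return ran
```

Running `simulate({"gpu_feeder": 6, "cpu_wu": 1}, 100)` gives the feeder the lion's share of the ticks, but the low-priority WU is never starved to zero - which is exactly the behaviour being complained about here.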
ID: 96566


Copyright © 2021 University of California. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.