Amount of CPU for AMD Radeon vs NVidia GTX on Einstein

magarity

Joined: 27 Jan 13
Posts: 5
United States
Message 105224 - Posted: 23 Aug 2021, 16:09:40 UTC
Last modified: 23 Aug 2021, 16:12:15 UTC

I gave away my GTX graphics card and bought a Radeon, and now Einstein@Home tasks say 0.9 CPU + AMD/ATI, where I could have sworn they used to say 0.1 CPU + NVidia. Am I remembering incorrectly, or what's the deal with that ratio?
robsmith
Volunteer tester
Help desk expert

Joined: 25 May 09
Posts: 1283
United Kingdom
Message 105225 - Posted: 23 Aug 2021, 16:21:53 UTC

The fraction of a CPU shown is just an initial estimate and may or may not be the actual fraction of a CPU used by the GPU task. The application will use whatever fraction it needs, and that will vary up and down as the GPU task proceeds.
ProDigit

Joined: 8 Nov 19
Posts: 718
United States
Message 105265 - Posted: 26 Aug 2021, 19:41:28 UTC
Last modified: 26 Aug 2021, 19:43:42 UTC

You can adjust that 0.1 or 0.9 yourself in the settings.
The best thing you can do is run the GPU WU and check in Task Manager (Windows) or htop (Linux) what percentage of a CPU thread it's using.
My estimate is that you'll see close to 0.25 CPU (25% or less of a CPU thread) for processing the GPU WU.

As long as the value is below 1, BOINC can also schedule CPU WUs on that thread at the same time, if set up for it.
If you have 12 threads and 11 of them are assigned to CPU WUs, the 12th is best left to run the OS and the GPU WU.
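
For reference, this is roughly what that adjustment looks like in an app_config.xml placed in the project's directory (e.g. projects/einstein.phys.uwm.edu/). The app name below is only an example; check client_state.xml for the exact app names on your host:

<app_config>
  <app>
    <name>hsgamma_FGRPB1G</name>   <!-- example app name; verify in client_state.xml -->
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>   <!-- one task per GPU -->
      <cpu_usage>0.25</cpu_usage>  <!-- budget 0.25 of a CPU thread per GPU task -->
    </gpu_versions>
  </app>
</app_config>

Note this only changes BOINC's bookkeeping for scheduling; the science app will still take whatever CPU it actually needs.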
Bryn Mawr
Help desk expert

Joined: 31 Dec 18
Posts: 284
United Kingdom
Message 105266 - Posted: 26 Aug 2021, 20:30:02 UTC - in response to Message 105265.  

You can adjust that 0.1 or 0.9 yourself in the settings.
The best thing you can do is run the GPU WU and check in Task Manager (Windows) or htop (Linux) what percentage of a CPU thread it's using.
My estimate is that you'll see close to 0.25 CPU (25% or less of a CPU thread) for processing the GPU WU.

As long as the value is below 1, BOINC can also schedule CPU WUs on that thread at the same time, if set up for it.
If you have 12 threads and 11 of them are assigned to CPU WUs, the 12th is best left to run the OS and the GPU WU.


Do we have any experimental results for this?

The reason I ask is that I don't reserve a thread for the OS; it quite happily skims 2-3% off the top of each thread, leaving me with the equivalent of 23.5 threads running BOINC on a 24-thread machine. But I don't know what the overhead is for task swapping between the OS and BOINC to achieve that.
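
One way to test it would be to cap how many logical CPUs BOINC thinks it has via cc_config.xml and compare throughput over a few days with and without the reserved thread. A minimal sketch, assuming a 24-thread machine:

<cc_config>
  <options>
    <!-- act as if the host has 23 logical CPUs, leaving one thread for the OS;
         set back to -1 (use all) for the comparison run -->
    <ncpus>23</ncpus>
  </options>
</cc_config>

cc_config.xml lives in the BOINC data directory and is re-read with Options > Read config files in the Manager.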
Harri Liljeroos

Joined: 25 Jul 18
Posts: 62
Finland
Message 105267 - Posted: 26 Aug 2021, 21:33:56 UTC

My experience with Nvidia GPUs at Einstein under Windows is that you should always reserve one CPU core for each GPU task you are running, plus two CPU cores for the OS if you plan to use the computer for your daily routines.
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5077
United Kingdom
Message 105268 - Posted: 26 Aug 2021, 21:45:32 UTC - in response to Message 105267.  

My experience with Nvidia GPUs at Einstein under Windows is that you should always reserve one CPU core for each GPU task you are running, plus two CPU cores for the OS if you plan to use the computer for your daily routines.
The same applies to most projects that use the OpenCL framework on NVidia GPUs. It's not so important for projects that use the proprietary CUDA platform.
ProDigit

Joined: 8 Nov 19
Posts: 718
United States
Message 105285 - Posted: 29 Aug 2021, 22:49:20 UTC - in response to Message 105267.  
Last modified: 29 Aug 2021, 22:51:41 UTC

My experience with Nvidia GPUs at Einstein under Windows is that you should always reserve one CPU core for each GPU task you are running, plus two CPU cores for the OS if you plan to use the computer for your daily routines.

A full thread is only necessary on high-powered GPUs, like an RTX 3080 or 3090.
Windows Task Manager will tell you that one entire thread is used per GPU, but from the kernel data, and in htop on Linux, you can see that an RTX 2060 quite often uses only about 1.5 GHz out of a 4 GHz thread, or about 25-33%.
We've done some testing with Folding@home in which we limited the CPU frequency, and as long as we didn't hit that 33% threshold, the performance penalty was very small.
That means in theory you can run 2 to 3 (sometimes even 4) GPUs per CPU thread.

The reason Nvidia sets one GPU per thread is so the thread can be locked to the GPU, which somewhat increases performance (by a few FPS or a few percent).
With BOINC this CPU allocation isn't really necessary: just like with mining, most of the data is read and written directly in VRAM, and the CPU just queues the work.
BOINC automatically budgets 0.1 to 0.9 of a thread for the GPU in most projects, and whatever isn't used goes to a CPU WU.

Even if your Nvidia GPU WU says 0.9 CPU but actually uses only 10% of the CPU thread, the remaining 90% goes to CPU WU processing, or in some cases to other GPU WUs that claim 0.1 CPU.
robsmith
Volunteer tester
Help desk expert

Joined: 25 May 09
Posts: 1283
United Kingdom
Message 105287 - Posted: 30 Aug 2021, 7:13:43 UTC - in response to Message 105285.  

NOT TRUE - how much CPU is required by a GPU task is very much governed by the application. The performance of the GPU is a very small contributor to the demand placed on the CPU.
For tasks running on the CPU, the BOINC architecture does not properly support CPU core sharing. The operating system may allow job swapping (do a bit of job A, save its data; load job B, do a bit, save its data; load job C...), but it does so with a performance penalty on all jobs running like this, so it is undesirable when performance is a must.
Jord
Volunteer tester
Help desk expert

Joined: 29 Aug 05
Posts: 15477
Netherlands
Message 105288 - Posted: 30 Aug 2021, 7:29:35 UTC - in response to Message 105285.  

A full thread is only necessary on high-powered GPUs, like an RTX 3080 or 3090.
The problem with your example is that before these were the high-powered GPUs, the GTX 1080 and 1080 Ti, followed by the RTX 2070, 2080 and 2080 Ti, were the high-powered ones.

For gaming this may have changed, but not for BOINC and its projects, as the science applications used haven't changed (much) over the past GPU generations.
Ian&Steve C.

Joined: 24 Dec 19
Posts: 228
United States
Message 105290 - Posted: 30 Aug 2021, 14:16:50 UTC

For Einstein (gamma ray), Nvidia GPUs will use 100% of a CPU thread.
For Einstein (gravity wave), Nvidia GPUs will often use MORE than 100% of a CPU thread (meaning it will use 2 threads).

For better BOINC accounting of resources, it would be best to set Nvidia tasks in your app_config file to reflect that 1 CPU is being used for each GPU task, and to leave yourself some extra buffer room on the CPU use % in the compute preferences if you're running gravity wave tasks. That way BOINC won't think you have more CPU resources available than you really do and try to spin up extra CPU tasks that might overcommit your CPU and slow down processing overall.
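
A sketch of that accounting fix, with app names given only as examples (verify the exact names in client_state.xml for your host):

<app_config>
  <app>
    <name>hsgamma_FGRPB1G</name>  <!-- gamma-ray: reserve a full thread per task -->
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>einstein_O3AS</name>    <!-- gravity wave: can use more than one thread -->
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>2.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

Remember these values are scheduling hints, not enforced limits, which is exactly why they should match what the tasks really use.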
ProDigit

Joined: 8 Nov 19
Posts: 718
United States
Message 105324 - Posted: 7 Sep 2021, 1:26:47 UTC - in response to Message 105288.  

A full thread is only necessary on high-powered GPUs, like an RTX 3080 or 3090.
The problem with your example is that before these were the high-powered GPUs, the GTX 1080 and 1080 Ti, followed by the RTX 2070, 2080 and 2080 Ti, were the high-powered ones.

For gaming this may have changed, but not for BOINC and its projects, as the science applications used haven't changed (much) over the past GPU generations.


The thing is that even those 1080-2080 Ti GPUs often don't run at maximum capacity.
Many of them draw only around a hundred watts, even if rated at 200 or 300 W.
The wattage is a good indicator of how loaded the GPU is. Maybe some 32-bit shaders are waiting for 64-bit floating point (FP64) instructions to be processed?

Either way, for most projects a 2080 Ti runs fine at 2.5 GHz. If your CPU has a 5 GHz boost frequency and can sustain it, you could run two 2080 Ti GPUs on one core.
I've tried Einstein, Milkyway, and a few others, doubling, tripling or quadrupling WUs to get more out of the GPU, and it still wouldn't surpass one CPU thread (on a 3+ GHz non-HT Core i3).
Keith Myers
Volunteer tester
Help desk expert

Joined: 17 Nov 16
Posts: 863
United States
Message 105326 - Posted: 7 Sep 2021, 2:15:59 UTC - in response to Message 105324.  
Last modified: 7 Sep 2021, 2:19:03 UTC

You must be doing something wrong. My 2080s running Gamma-Ray use 97% of the GPU at whatever wattage I have them limited to. IOW, they use the full 200 W of the 200 W I have them set to.

I run only 1X, as there is no spare wattage or capacity available to run at greater than 1X.

OTOH, they use only half of the 200 W available on MilkyWay tasks.

[Edit] Whether they use the card's full potential all depends on the application.
Ian&Steve C.

Joined: 24 Dec 19
Posts: 228
United States
Message 105369 - Posted: 10 Sep 2021, 19:06:04 UTC - in response to Message 105324.  


Either way, for most projects a 2080 Ti runs fine at 2.5 GHz.


Not unless you're on liquid nitrogen. 2.5 GHz ain't happening under normal circumstances; 2.0-2.1 GHz max on good silicon. But the power required to sustain it often isn't worth it.

you could run two 2080 Ti GPUs on one core.
I've tried Einstein...


You can't run 2x 2080 Ti on one CPU core with Einstein (not natively). Each task requires a full CPU core. If you're setting 0.5 CPU x 1 GPU in your app_config file or something, just know that this doesn't enforce those settings; they are only used by BOINC to figure out how many resources are free based on what's running. If you check the CPU use, you'll find it's still using 100% of a core for each GPU task.
ProDigit

Joined: 8 Nov 19
Posts: 718
United States
Message 106039 - Posted: 9 Nov 2021, 0:38:35 UTC - in response to Message 105369.  

I don't have the exact numbers right now, but even Einstein runs fine using more than one GPU per CPU thread.
I actually run two GPUs (a 2070 and a 2060) plus the Intel IGP plus 1 to 2 CPU WUs on my Celeron (2 cores / 4 threads, 3.2 GHz), running at 80°C.
It's finely tuned, and certain Einstein tasks aren't enabled for me, as my main constraint now is the 8 GB RAM limit.

You can easily disable the Einstein tasks that use too many CPU resources if that becomes a problem.

I'm not running any Einstein tasks right now, as I'm more focused on CPU WUs at the moment.

