Two projects on one GPU?

Message boards : GPUs : Two projects on one GPU?

Peter Hucker
Joined: 6 Oct 06
Posts: 609
United Kingdom
Message 93468 - Posted: 31 Oct 2019, 18:34:37 UTC

Can I force Einstein and Milkyway both to run on the same GPU at once? The reason I'm considering this is that Milkyway uses the double precision part of the GPU, and Einstein uses the single precision part. Are these parts of the chip independent? Can I run both and get twice the work done?
ID: 93468
Richard Haselgrove
Volunteer tester
Help desk expert
Joined: 5 Oct 06
Posts: 4094
United Kingdom
Message 93469 - Posted: 31 Oct 2019, 18:43:49 UTC - in response to Message 93468.  

Dunno. You could be the first to try it!

You'd need two app_config.xml files - one for each project - both with a <gpu_usage> value of 0.5 and a <max_concurrent> of 1. I'd suggest you avoid fetching CPU tasks from either project while you test.
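For illustration, a minimal sketch of what the Einstein-side file might look like. The <name> value must be the app's internal name as recorded in client_state.xml, not the project name; hsgamma_FGRPB1G is used here as an assumed example, and the Milkyway-side file would be the same apart from the name:

```xml
<!-- app_config.xml in the Einstein@Home project directory.
     The app name below is an example; check client_state.xml
     for the exact <app_name> entries on your own host. -->
<app_config>
  <app>
    <name>hsgamma_FGRPB1G</name>
    <max_concurrent>1</max_concurrent>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```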
ID: 93469
Peter Hucker
Joined: 6 Oct 06
Posts: 609
United Kingdom
Message 93471 - Posted: 31 Oct 2019, 19:25:11 UTC - in response to Message 93469.  

Well, that was a nice idea, but... mission failure.

Count 1) It doesn't do any more work. Einstein runs at a fifth of the speed, Milkyway at four fifths. Total: the same.

Count 2) BOINC isn't listening to me! Strangely, only Einstein ran, despite all the tasks saying 0.5 ATI (and I restarted the client and downloaded fresh tasks to make sure). When I suspended Einstein, Milkyway started; then I resumed Einstein and it started too. That only lasted until the Milkyway task completed, then only Einstein ran again.

Oh well... since the card clearly can't do SP and DP at once, there's no point worrying about why my configuration didn't work. It's a Radeon HD 7970.
ID: 93471
Richard Haselgrove
Volunteer tester
Help desk expert
Joined: 5 Oct 06
Posts: 4094
United Kingdom
Message 93473 - Posted: 31 Oct 2019, 19:39:36 UTC - in response to Message 93471.  

That reminds me... BOINC v7.14.2 doesn't handle <max_concurrent> very well. You might find count (2) works better under v7.16.3.

For count (1) - I can't speak for ATI tasks, but some Einstein tasks are very much boosted by running at a greatly enhanced process priority. The machine I'm typing on has Einstein's intel_gpu app running at real time priority under Process Lasso. I notice a brief stutter each time a task finishes and another starts, but at once every five hours that's not a hardship. Use that factoid with care and at your own risk.
ID: 93473
Peter Hucker
Joined: 6 Oct 06
Posts: 609
United Kingdom
Message 93476 - Posted: 31 Oct 2019, 19:55:39 UTC - in response to Message 93473.  
Last modified: 31 Oct 2019, 20:07:54 UTC

That reminds me... BOINC v7.14.2 doesn't handle <max_concurrent> very well. You might find count (2) works better under v7.16.3.


Already running 7.16.3.

For count (1) - I can't speak for ATI tasks, but some Einstein tasks are very much boosted by running at a greatly enhanced process priority. The machine I'm typing on has Einstein's intel_gpu app running at real time priority under Process Lasso. I notice a brief stutter each time a task finishes and another starts, but at once every five hours that's not a hardship. Use that factoid with care and at your own risk.


I just changed the priority manually in Windows 10 Task Manager, and it had zero effect. At any priority it appears to use about 0.75 CPU cores (as reported by Task Manager) and a varying 50-100% GPU load (as reported by GPU-Z). That's on a Radeon RX 560. My 7970 is for Milkyway, as it's a lot better at double precision. I guess I could make it run two at once...

Pah! I tried two Einsteins on one card. I got a 20% increase in GPU usage and task speed, yet a 100% increase in CPU load. Very strange. I think I'll stick to the way BOINC was designed: one task on one GPU.
ID: 93476
Peter Hucker
Joined: 6 Oct 06
Posts: 609
United Kingdom
Message 93478 - Posted: 31 Oct 2019, 20:14:58 UTC - in response to Message 93475.  

It's been a while since I've run either of those two projects seriously, and I can't remember how many resources they each need.
I thought you could run Milkyway tasks in parallel; the easiest way to keep a GPU fed would be to have multiple clients each running one WU at a time. No idea how many can be run on a 7970 simultaneously though. Hopefully not all the clients would complete their cache before getting more work.

There's always going to be a hit for running more than one task (whether from the same project or not); fortunately the credit system didn't punish long-running tasks on the project I was trying to run.


A long time ago (I think on a Radeon HD 290) I used to run about three Milkyway tasks at once (just using the one client), and I got a reasonable speed increase. But now I don't: the GPU is already running at about 95% anyway with just one task. Either Milkyway has changed, or this card is different.
ID: 93478
Joseph Stateson
Volunteer tester
Joined: 27 Jun 08
Posts: 538
United States
Message 93507 - Posted: 4 Nov 2019, 2:24:42 UTC - in response to Message 93478.  


A long time ago (I think on a Radeon HD 290) I used to run about three Milkyway tasks at once (just using the one client), and I got a reasonable speed increase. But now I don't: the GPU is already running at about 95% anyway with just one task. Either Milkyway has changed, or this card is different.



Not sure when, but a few years ago Milkyway started bundling up the number of work units each job has. Looking in a result file one finds
<number_WUs> 4 </number_WUs>
so currently each job is 4 simple work units.
ID: 93507
Peter Hucker
Joined: 6 Oct 06
Posts: 609
United Kingdom
Message 93523 - Posted: 4 Nov 2019, 23:57:00 UTC - in response to Message 93507.  

Not sure when, but a few years ago Milkyway started bundling up the number of work units each job has. Looking in a result file one finds
<number_WUs> 4 </number_WUs>
so currently each job is 4 simple work units.


Yes, they now bundle 4 or 5 per WU. But that doesn't change how much GPU is used; it just means more work is done before it has to change to the next WU.
ID: 93523
ProDigit
Joined: 8 Nov 19
Posts: 546
United States
Message 93670 - Posted: 12 Nov 2019, 22:57:32 UTC
Last modified: 12 Nov 2019, 22:57:49 UTC

I would suspect that the double precision part of the GPU can be utilized to process single or double precision.
A GPU doesn't have separate circuitry for processing single or double precision, which means the double precision part will need to be swapped frequently, causing a lot of data to be moved around.

Meaning that if you were running a double precision task at 100% efficiency, a half precision task would need to share those resources of the GPU, causing an increase in PCIe latencies, potentially an increase in energy consumption, and lower performance.
ID: 93670
Peter Hucker
Joined: 6 Oct 06
Posts: 609
United Kingdom
Message 93671 - Posted: 12 Nov 2019, 23:16:09 UTC - in response to Message 93670.  

I would suspect that the double precision part of the GPU can be utilized to process single or double precision.
A GPU doesn't have separate circuitry for processing single or double precision, which means the double precision part will need to be swapped frequently, causing a lot of data to be moved around.

Meaning that if you were running a double precision task at 100% efficiency, a half precision task would need to share those resources of the GPU, causing an increase in PCIe latencies, potentially an increase in energy consumption, and lower performance.


I don't know enough about the insides of GPUs to disagree, but that doesn't make sense. Since there are GPUs available with completely different ratios of double to single precision speed, I always thought they were independent units. I'd love to see some designs of GPUs (even just block diagrams).
ID: 93671
Keith Myers
Volunteer tester
Help desk expert
Joined: 17 Nov 16
Posts: 397
United States
Message 93673 - Posted: 13 Nov 2019, 1:25:32 UTC - in response to Message 93671.  

AnandTech is always the best source for high-level analysis of new CPU or GPU architectures, with good block diagrams and really knowledgeable analysis of the design by writers like Dr. Ian Cutress for CPUs, and Anton Shilov, Ryan Smith and Nate Oh for GPUs.
ID: 93673
ProDigit
Joined: 8 Nov 19
Posts: 546
United States
Message 93674 - Posted: 13 Nov 2019, 2:13:19 UTC - in response to Message 93671.  

I would suspect that the double precision part of the GPU can be utilized to process single or double precision.
A GPU doesn't have separate circuitry for processing single or double precision, which means the double precision part will need to be swapped frequently, causing a lot of data to be moved around.

Meaning that if you were running a double precision task at 100% efficiency, a half precision task would need to share those resources of the GPU, causing an increase in PCIe latencies, potentially an increase in energy consumption, and lower performance.


I don't know enough about the insides of GPUs to disagree, but that doesn't make sense. Since there are GPUs available with completely different ratios of double to single precision speed, I always thought they were independent units. I'd love to see some designs of GPUs (even just block diagrams).


Double precision and single precision software get different benchmark scores when run on double-precision-capable hardware. That's just the way double and single precision work.

However, sharing that same hardware between two different tasks is like running two operating systems on one CPU.

There'll be a lot of overhead from data switching back and forth between hardware and hardware.
On remote terminals you probably won't notice this much, because a CPU does a lot of the swapping in idle CPU moments.
However, when folding/crunching, a CPU's utilization is nearly constantly at 100%. There's no idle time to swap between tasks, so primary tasks need to be shut down and caches flushed to load the secondary task.
I would say you'll probably lose somewhere between 15-25%, compared to running the tasks independently.
ID: 93674
Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 14157
Netherlands
Message 93688 - Posted: 13 Nov 2019, 16:06:17 UTC - in response to Message 93670.  

A GPU doesn't have separate circuitry for processing single or double precision
Single precision (32-bit) and double precision (64-bit) are types of floating point calculations. Double precision calculations can store a wider range of values with more precision. Both are calculated using the same floating point units on the GPU; there's no data being moved around. Science applications are either single precision (most projects) or double precision. They're never both at the same time.
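As a small illustration of the precision difference (a minimal Python sketch, nothing to do with the BOINC apps themselves): the same decimal value stored in 32 bits carries visibly less precision than in 64 bits.

```python
import struct

# 0.1 has no exact binary floating point representation.
# Round-tripping it through 32-bit (single precision) storage keeps
# only about 7 significant decimal digits; Python's native float is
# 64-bit (double precision) and keeps about 15-16.
single = struct.unpack('f', struct.pack('f', 0.1))[0]
double = 0.1

print(single)                        # noticeably off from 0.1
print(abs(single - double) > 1e-9)   # True: the single precision error is larger
```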
Please do not private message me for tech support, these will be ignored!
ID: 93688
ProDigit
Joined: 8 Nov 19
Posts: 546
United States
Message 93826 - Posted: 22 Nov 2019, 10:21:46 UTC

I have the same question.
I'm running an RTX2060, with an Einstein@home CPU+GPU task.
The GPU part only taxes 80W, or 50% of my GPU.
I added my apps_config.xml file in the folder with this content:

<app_config>
   [<app>
      <name>Einstein@home</name>
      <max_concurrent>2</max_concurrent>
      <gpu_versions>
          <gpu_usage>0.5</gpu_usage>
      </gpu_versions>
    </app>]
</app_config>


However, now I'm seeing only 40 watts usage.

So I changed the <gpu_usage>0.5</gpu_usage> value to 1, but without success.

Any help is greatly appreciated.
ID: 93826
Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 14157
Netherlands
Message 93828 - Posted: 22 Nov 2019, 11:09:50 UTC - in response to Message 93826.  

At Einstein you can change how many tasks you want to run on the GPU via the project preferences. Change it there.
ID: 93828
Richard Haselgrove
Volunteer tester
Help desk expert
Joined: 5 Oct 06
Posts: 4094
United Kingdom
Message 93834 - Posted: 22 Nov 2019, 12:19:15 UTC - in response to Message 93826.  

<app_config>
   [<app>
      <name>Einstein@home</name>
      <max_concurrent>2</max_concurrent>
      <gpu_versions>
          <gpu_usage>0.5</gpu_usage>
      </gpu_versions>
    </app>]
</app_config>
Remove the square brackets around <app></app> - they are used in programming manuals to indicate optional sections.
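With the brackets removed, the file would read as below. (A further caveat, not mentioned above: the <name> element must match the app's internal name as recorded in client_state.xml, not the project name; "Einstein@home" is kept here only because it is what was posted.)

```xml
<!-- Corrected app_config.xml: square brackets removed.
     Note: <name> should be the app's internal name from
     client_state.xml; "Einstein@home" is kept from the
     original post. -->
<app_config>
   <app>
      <name>Einstein@home</name>
      <max_concurrent>2</max_concurrent>
      <gpu_versions>
          <gpu_usage>0.5</gpu_usage>
      </gpu_versions>
    </app>
</app_config>
```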
ID: 93834
Peter Hucker
Joined: 6 Oct 06
Posts: 609
United Kingdom
Message 93840 - Posted: 22 Nov 2019, 15:04:05 UTC - in response to Message 93834.  

It amazes me we are editing config files in the 21st century. Come on, this isn't DOS anymore. Why isn't all this in the GUI?
ID: 93840
Dave
Joined: 28 Jun 10
Posts: 877
United Kingdom
Message 93841 - Posted: 22 Nov 2019, 15:52:29 UTC - in response to Message 93840.  

It amazes me we are editing config files in the 21st century. Come on, this isn't DOS anymore. Why isn't all this in the GUI?


ACHTUNG!
ALLES TURISTEN UND NONTEKNISCHEN LOOKENSPEEPERS!
DAS KOMPUTERMASCHINE IST NICHT FÜR DER GEFINGERPOKEN UND MITTENGRABEN! ODERWISE IST EASY TO SCHNAPPEN DER SPRINGENWERK, BLOWENFUSEN UND POPPENCORKEN MIT SPITZENSPARKEN.
IST NICHT FÜR GEWERKEN BEI DUMMKOPFEN. DER RUBBERNECKEN SIGHTSEEREN KEEPEN DAS COTTONPICKEN HÄNDER IN DAS POCKETS MUSS.
ZO RELAXEN UND WATSCHEN DER BLINKENLICHTEN.


So that those who know not that with which they play don't screw things up is, I suspect, the reason. Deliberate policy rather than laziness, etc.
ID: 93841
Peter Hucker
Joined: 6 Oct 06
Posts: 609
United Kingdom
Message 93842 - Posted: 22 Nov 2019, 16:03:04 UTC - in response to Message 93841.  

ROFL! I used to have that notice on one of my servers where I worked. A German speaking colleague saw it and burst out laughing.

But... most programs do have things you shouldn't play with; they're in a separate "advanced" section, with warnings not to fiddle. And since the settings are then GUI based, it's impossible to put in a stupid value.
ID: 93842
Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 14157
Netherlands
Message 93843 - Posted: 22 Nov 2019, 16:43:37 UTC - in response to Message 93842.  

And since the settings are then GUI based, it's impossible to put in a stupid value.
Try enabling all debug flags in the Event Log Options window and you'll find that having such things available through the GUI isn't always a good thing. I won't tell you what'll happen... Just try it. :-)
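For reference, the same event-log flags can also be set by hand in cc_config.xml. A partial sketch showing just two of the many available flags (enabling all of them is what floods the log):

```xml
<!-- cc_config.xml in the BOINC data directory: a small subset
     of the available <log_flags>. Turning every flag on makes
     the event log scroll far too fast to read. -->
<cc_config>
  <log_flags>
    <cpu_sched_debug>1</cpu_sched_debug>
    <work_fetch_debug>1</work_fetch_debug>
  </log_flags>
</cc_config>
```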
ID: 93843

Copyright © 2020 University of California. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.