Message boards : GPUs : Two projects on one GPU?
Joined: 5 Oct 06 Posts: 5124
Dunno. You could be the first to try it! You'd need two app_config.xml files - one for each project - both with a <gpu_usage> value of 0.5 and a <max_concurrent> of 1. I'd suggest you avoid fetching CPU tasks from either project while you test.
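A minimal sketch of what each file could look like (untested; "some_gpu_app" is a placeholder and must be replaced with the application name from that project's client_state.xml, not the project name):

<app_config>
   <app>
      <name>some_gpu_app</name> <!-- placeholder: copy the real app name from client_state.xml -->
      <max_concurrent>1</max_concurrent>
      <gpu_versions>
         <gpu_usage>0.5</gpu_usage>
      </gpu_versions>
   </app>
</app_config>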
Joined: 5 Oct 06 Posts: 5124
That reminds me... BOINC v7.14.2 doesn't handle <max_concurrent> very well; you might find count (2) works better under v7.16.3. As for count (1), I can't speak for ATI tasks, but some Einstein tasks are very much boosted by running at a greatly enhanced process priority. The machine I'm typing on has Einstein's intel_gpu app running at real-time priority under Process Lasso. I notice a brief stutter each time a task finishes and another starts, but at once every five hours that's not a hardship. Use that factoid with care, and at your own risk.
Joined: 27 Jun 08 Posts: 641
Not sure when, but a few years ago MilkyWay started doubling up the number of work units in each job. Looking in a result file one finds <number_WUs> 4 </number_WUs>, so currently each job is four simple work units.
Joined: 8 Nov 19 Posts: 718
I would only suspect that the double precision part of the GPU can be utilized to process single or double precision. A GPU doesn't have separate circuitry for processing single and double precision, which means the double precision part would need to be swapped frequently, causing a lot of data to be moved around. Meaning that if you were running a double precision task at 100% efficiency, a half precision task would need to share those GPU resources, causing an increase in PCIe latencies, potentially an increase in energy consumption, and lower performance.
Joined: 17 Nov 16 Posts: 888
AnandTech is always the best source for high-level analysis of new CPU or GPU architectures, with good block diagrams and really knowledgeable analysis of the designs by writers like Dr. Ian Cutress for CPUs and Anton Shilov, Ryan Smith and Nate Oh for GPUs.
Joined: 8 Nov 19 Posts: 718
"I would only suspect that the double precision part of the GPU can be utilized to process single or double precision."
Double precision and single precision software have different benchmark scores when run on double-precision-capable hardware. That's just the way double and single precision work. However, sharing that same hardware between 2 different tasks is like running 2 operating systems on one CPU: there'll be a lot of overhead from data switching back and forth between the two. On remote terminals you probably won't notice this much, because a CPU does a lot of the swapping in idle CPU moments. However, when folding/crunching, a CPU's utilization is nearly constantly at 100%. There's no idle time to swap between tasks, so primary tasks need to be shut down, and caches need to be flushed, to load the secondary task. I would say you'll probably lose somewhere between 15-25%, compared to running the tasks independently.
Joined: 29 Aug 05 Posts: 15552
"A GPU doesn't have separate circuitry for processing single or double precision"
Single precision (32 bit) and double precision (64 bit) are types of floating point calculations. Double precision calculations can store a wider range of values with more precision. Both are calculated using the same floating point unit on the GPU; there's no data being moved around. Science applications are either single precision (most projects) or double precision. They're never both at the same time.
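To put rough numbers on that (assuming the standard IEEE 754 formats, which GPU floating point units implement): single precision carries a 24-bit significand and double precision a 53-bit one, so

\[ \log_{10} 2^{24} \approx 7.2 \ \text{decimal digits (single)}, \qquad \log_{10} 2^{53} \approx 15.9 \ \text{decimal digits (double)} \]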
Joined: 8 Nov 19 Posts: 718
I have the same question. I'm running an RTX 2060, with an Einstein@Home CPU+GPU task. The GPU part only draws 80 W, or 50% of my GPU. I added my apps_config.xml file in the folder with this content:

<app_config>
   [<app>
      <name>Einstein@home</name>
      <max_concurrent>2</max_concurrent>
      <gpu_versions>
         <gpu_usage>0.5</gpu_usage>
      </gpu_versions>
   </app>]
</app_config>

However, now I'm seeing only 40 W usage. So I changed the <gpu_usage>0.5</gpu_usage> value to 1, but without success. Any help is greatly appreciated.
Joined: 29 Aug 05 Posts: 15552
At Einstein you can change how many tasks you want to run on the GPU via the project preferences. Change it there.
Joined: 5 Oct 06 Posts: 5124
Remove the square brackets round <app></app> - they are used in programming manuals to indicate optional sections.

<app_config>
   [<app>
      <name>Einstein@home</name>
      <max_concurrent>2</max_concurrent>
      <gpu_versions>
         <gpu_usage>0.5</gpu_usage>
      </gpu_versions>
   </app>]
</app_config>
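For reference, the same file with the brackets removed (assuming it's saved as app_config.xml in the Einstein project directory; note that BOINC matches <name> against the application name in client_state.xml, so "Einstein@home" may also need changing - "hsgamma_FGRPB1G" below is just an example):

<app_config>
   <app>
      <name>hsgamma_FGRPB1G</name> <!-- example app name; check client_state.xml for the real one -->
      <max_concurrent>2</max_concurrent>
      <gpu_versions>
         <gpu_usage>0.5</gpu_usage>
      </gpu_versions>
   </app>
</app_config>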
Joined: 28 Jun 10 Posts: 2676
"It amazes me we are editing config files in the 21st century. Come on, this isn't DOS anymore. Why isn't all this in the GUI?"
ACHTUNG! So that those who know not that with which they play don't screw things up is, I suspect, the reason. Deliberate policy rather than laziness etc.
Joined: 29 Aug 05 Posts: 15552
"And since the settings are then GUI based, it's impossible to put in a stupid value."
Try enabling all debug flags in the Event Log Options window and you'll find that having such things available through the GUI isn't always a good thing. I won't tell you what'll happen... Just try it. :-)