Message boards : GPUs : Two projects on one GPU?
Message board moderation
Previous · 1 · 2
Author | Message |
---|---|
Send message Joined: 29 Aug 05 Posts: 15540 |
> And if you do screw up, you can just go back and click "set defaults for this page" or something.

Not in the case of the debug flags, because your BOINC Manager won't connect to the client anymore due to the absolutely huge amount of RPC traffic the client will spit out. Your only options then are to manually edit cc_config.xml or remove it. The options menu won't warn you about this either. In the case of the app_info.xml file (which is for using the anonymous platform, is never available via the GUI, and requires a client restart) and the app_config.xml file, it's probably best to edit them by hand. Or have a separate GUI for either, but preferably not included in the main program or BOINC Manager. |
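For reference, a minimal sketch of a cc_config.xml that enables a single, relatively quiet debug flag instead of all of them. The file lives in the BOINC data directory; `cpu_sched` is one of the documented `<log_flags>` options, and the comments are mine, not from the thread:

```xml
<cc_config>
   <log_flags>
      <!-- log CPU scheduler decisions; one of the tamer flags -->
      <cpu_sched>1</cpu_sched>
      <!-- leave the chatty flags (e.g. work_fetch_debug) off -->
   </log_flags>
   <options>
   </options>
</cc_config>
```

After editing, the client can re-read it via the Manager's "Read config files" option, assuming the Manager can still connect to the client at all.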
Send message Joined: 25 Nov 05 Posts: 1654 |
Try doing what Jord said, and enable all debug flags. It's great fun; it'll keep you busy for hours. |
Send message Joined: 29 Aug 05 Posts: 15540 |
> Well most programs don't need text files editing. The GUI is set up so you can't kill the program entirely just by playing with options. You get a range for each value that makes sense etc.

You get a range for the debug flags as well: either 1 for on, or 0 for off. It's entirely possible to run the client with all the debug flags on; the client will run fine. Just don't expect the Manager to be able to keep up with the remote procedure calls, as these go by at something like 1,500-2,000 per second, while normal operation has a maximum of 1,000 per second.

Most programs also don't download separate programs that run intricate calculations on data. The trouble with automating or GUI-fying app_config.xml is that you still need information read from the client_state.xml file: the application name. While a GUI may have an easier time reading the application name from client_state.xml, you'd have to add enough intelligence that it grabs the correct app name for the correct project, or shows all app names for all projects. And then you can still make screw-ups. As an aside, I see that ProDigit filled in Einstein@Home as the application name, which won't work either.

These things are really for advanced users only, and then preferably those who know where to find the documentation by heart, and who know that they had best make backups, disable their internet connection, and exit the client before all the editing and tinkering. BOINC shouldn't be easy for everyone; that's why it has the Simple GUI first, Advanced View second. But if you think that's outdated, no one will keep you from adding code that does exactly what you want. If you expect someone else to do it though, do know he's still quite busy. |
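To illustrate the point above: the name app_config.xml needs is the short application name recorded in client_state.xml, not the project name. A trimmed, hypothetical sketch of the relevant part of client_state.xml (the `hsgamma_FGRPB1G` name is the one used elsewhere in this thread; the other values are illustrative):

```xml
<project>
   <master_url>https://einsteinathome.org/</master_url>
   <project_name>Einstein@Home</project_name>
</project>
<app>
   <!-- this short name is what app_config.xml must match -->
   <name>hsgamma_FGRPB1G</name>
   <user_friendly_name>Gamma-ray pulsar search on GPUs</user_friendly_name>
</app>
```

So entering "Einstein@Home" in the `<name>` element of app_config.xml will not match anything; only the short `<name>` from the `<app>` block will.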
Send message Joined: 8 Nov 19 Posts: 718 |
The funny thing is, that on an RTX 2060, Einstein uses 50% of the resources (~80W of 160W). The 2060 also has about 1920 shaders, or CUDA cores. So I thought, perhaps that's just how their tasks are designed, and I'll run it on one of my lower-end GPUs instead. Meaning, running it on a GT 1030 (max TDP 30W) with 384 cores, it should keep the load high. But no. My GT 1030s are loaded 30-40%. Something on Einstein's end has to change. They have to increase the amount of GPU utilization there. |
Send message Joined: 8 Nov 19 Posts: 718 |
> The funny thing is, that on an RTX 2060, Einstein uses 50% of the resources (~80W of 160W).

It appears that for at least Einstein (and most GPU projects I've crunched for on BOINC) this statement needs adjusting. When I select 50% of my CPU utilization (1 of 2 cores), I can clearly see that my CPU is utilized at 100% on the first core, and 7-12% on the second core. The second core is now feeding both the Intel and Nvidia GPU projects together. The statement of 1 CPU core per GPU seems not to hold true anymore for Nvidia cards. For Folding perhaps yes, but for BOINC, no. |
Send message Joined: 25 May 09 Posts: 1295 |
BOINC does NO crunching - it is ONLY the applications sent to you by the projects that do any processing at all. BOINC provides an environment for the project applications to run in, and it manages (loosely) the communication between you and the projects. Each project is responsible for its own applications, and each application has different requirements for CPU support of its GPU work - some need far more than others - so the general advice is to set aside one CPU core for every concurrent GPU application running, "just in case".

Most of the tools we use to monitor CPU & GPU use take far too long over their measurements: there are times when a GPU application will require very large amounts of CPU support, but only for a very short period, and this will appear as, say, 10% CPU usage averaged over the measuring period. But if you have that CPU core tied up doing something else, there is an overhead of unloading the running job, loading the pending job, then swapping back again - all of which takes time.

It is a balancing act, so you have to determine which way is actually better for you, on your system, and with your mix of projects. That will take time to do - days or even weeks of monitoring in one configuration, watching processing times, then doing it all again in the next configuration - again for days or weeks. |
Send message Joined: 29 Aug 05 Posts: 15540 |
I've seen you make statements about the load of your GPU a couple of times now. You cannot compare the load of the CPU with that of the GPU, other than that both involve a science application that runs on the CPU. For calculations done on the GPU, data from a task needs to be translated into kernels that the GPU can run; that's done by the CPU. A lot of the data in the tasks is too difficult to translate into kernels and is therefore run on the CPU, not the GPU. |
Send message Joined: 2 Jul 14 Posts: 17 |
I have the same question. Should be more like the below:

<app_config>
   <app>
      <name>hsgamma_FGRPB1G</name>
      <gpu_versions>
         <gpu_usage>0.5</gpu_usage>
         <cpu_usage>0.2</cpu_usage>
      </gpu_versions>
   </app>
   <app>
      <name>hsgamma_BRP4G</name>
      <gpu_versions>
         <gpu_usage>0.5</gpu_usage>
         <cpu_usage>0.2</cpu_usage>
      </gpu_versions>
   </app>
</app_config>

Crunching@EVGA The Number One Team in the BOINC Community. Folding@EVGA The Number One Team in the Folding@Home Community. |
Send message Joined: 25 May 09 Posts: 1295 |
> Einstein uses 50% of the resources (~80W of 160W).

This only means that whatever part of the GPU is being used is drawing 80W; it does not mean that the application is only using 50% of the GPU's capacity. It could be using 100% of a part of the GPU that only draws 50% of the power. Power draw is an extremely poor metric for "amount of GPU in use" when performing calculations. |
Send message Joined: 8 Nov 19 Posts: 718 |
No, the power usage is a great way to see how much of the GPU is being used. GPU utilization is well below 100% (well below 80% even). |
Send message Joined: 8 Nov 19 Posts: 718 |
> No, the power usage is a great way to see how much of the GPU is being used.

A GPU has cores and RAM. GPU utilization depends on how many cores are processing, and that's the main factor in the wattage and GPU-utilization readouts. Any additional parts of the GPU (Tensor/RT cores) only make up a tiny part of the GPU utilization. Tensor cores on Nvidia make up 1/8th, or 12%, of the CUDA cores, and RT cores only 1.5%. These cores operate at 16 or 8 bit (half or quarter precision), and would therefore make up less than 1/2 to 1/4th of CUDA core performance, so a GPU running at 90-95% can be considered fully utilized.

Running 2 projects on 1 GPU usually would not result in 'one piece of the GPU being under 100% load, while the rest is idle'. Running 2 projects on 1 GPU usually means more CUDA cores/shaders will be active, with as benefit that 2 projects can be worked on at a time, and as cost a potential small slowdown per task. The only time a GPU can have lower output like that is when both projects depend on Tensor or RT cores. However, with only 50% of CUDA cores being used, the project should use the CUDA cores instead (even if they only need half or quarter precision from Tensor/RT cores). It would not be a very efficient form of coding if the entire task depended on the number of Tensor cores (seeing they are 1/8th the number of CUDA cores, at least on Nvidia). |
Copyright © 2024 University of California.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License,
Version 1.2 or any later version published by the Free Software Foundation.