Performance issues since I added cosmology@home
Joined: 20 Feb 16, Posts: 3
Hi, I used to run 3 projects at a time: einstein@home, milkyway@home, and seti@home, and recently added cosmology@home. I noticed that whenever the cosmology@home app is running, my system automatically downclocks my GTX 970's GPU and video memory clocks. If cosmology@home isn't running or is suspended, the card goes back up to my normal overclock speeds (1475 MHz without cosmology running vs. 1114 MHz with it). Why is cosmology@home reducing my GPU speeds so drastically even though it's not using the GPU? A seti@home 8.0 cuda50 task usually takes 3 to 8 minutes on my computer when running alongside other projects, but when cosmology is running it takes up to 45 minutes. I really would like to support cosmology@home, but it is slowing down all the other projects so much. Any ideas how I can change the preferences and settings to speed things up while running cosmology@home together with some of the other projects? Thank you!

My computer info: Intel i5 2500K overclocked to 4.2 GHz, 16 GB RAM, Windows 10, MSI GTX 970 Gaming 4G
Joined: 29 Aug 05, Posts: 15563
Have you asked on the Cosmology forums whether someone sees the same behaviour? Could it be that their application needs all the CPU cores? Unless someone who runs at least Seti and Cosmology on their system passes by, it's difficult to answer.
Joined: 20 Feb 16, Posts: 3
Yes... I have 4 cores and all the cosmology@home apps use 4 CPUs. That's the reason. But would that explain that it changes settings on my video card? As long as cosmology@home runs, I cannot change clock settings in Afterburner.
Joined: 16 Nov 13, Posts: 5
> Yes... I have 4 cores and all the cosmology@home apps use 4 CPUs. That's the reason. But would that explain that it changes settings on my video card? As long as cosmology@home runs, I cannot change clock settings in Afterburner.

GPU tasks do require at least part of a CPU core. I have come to the realization that with my 4-core CPU, if I'm also running a GPU work unit (usually on my nVidia card), such as SETI or Einstein, I allocate a whole core just to that GPU task. I also have an Intel GPU and will allocate a whole CPU core to it if it's running a task as well. So I end up running 2 CPU tasks and 2 GPU tasks (1 nVidia, 1 Intel), leaving 2 CPU cores for the GPU tasks to use.

By using 4 CPU cores for cosmology while also running a GPU task, you are essentially starving the GPU of the CPU cycles it needs to complete a task, so the task takes much longer to process. Perhaps the driver even downclocks the GPU because the card would otherwise run faster than the starved CPU can feed it. A good rule of thumb, for me at least: for every GPU task I run, I count on using 1 CPU core for that task as well.
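If you want to make that reservation explicit, BOINC's app_config.xml mechanism can do it: a small file placed in the project's folder under the BOINC data directory that overrides how much CPU the client budgets per GPU task. A minimal sketch (the setiathome_v8 app name and project folder are my guesses for this example; check the app names in your own client_state.xml):

```xml
<!-- projects/setiathome.berkeley.edu/app_config.xml -->
<!-- Budget a full CPU core for each GPU task of this app, so the
     BOINC scheduler leaves one core free to feed the GPU. -->
<app_config>
  <app>
    <name>setiathome_v8</name>      <!-- assumed app name; verify in client_state.xml -->
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>    <!-- each task uses one full GPU -->
      <cpu_usage>1.0</cpu_usage>    <!-- reserve a whole CPU core per GPU task -->
    </gpu_versions>
  </app>
</app_config>
```

After saving the file, use the Manager's "Read config files" command (under the Options or Advanced menu, depending on version) or restart the client so it takes effect.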
Joined: 20 Feb 16, Posts: 3
How do I specifically allocate a CPU core to GPU tasks?
Joined: 16 Nov 13, Posts: 5
That can be complicated if you use multithreaded apps. For normal single-threaded CPU tasks, which most BOINC project apps are, I only allow BOINC to use 3 of my 4 processors, and the GPU task uses the remaining core.

If it is a multithreaded app, the app will use as many cores as you have allocated to the BOINC client, I think. If you don't restrict how many cores it can use, it will use all of them. With milkyway@home, if I restrict the number of processors to three and then update the project, the multithreaded app there will only use 3 CPU cores, leaving 1 core available to help crunch the GPU task.

If you look at the GPU tasks running in the BOINC Manager, they will tell you how much of a CPU core each app uses. For example, for Einstein@home, most of the GPU apps show 0.2 CPU + 1 GPU, which means they should only use 20% of a CPU core plus the GPU. SETI@home uses 0.04 CPU + 1 GPU for the multibeam app, i.e. about 4% of a CPU core. POEM@home actually uses 1 CPU + 1 GPU for the nVidia app, so it takes a whole CPU core and the GPU. I'm not sure how the cosmology@home apps work, since I haven't run the new ones yet, but I assume that since they are multithreaded they behave the same as milkyway@home's.

By restricting the number of cores BOINC can use, you can leave a core available for GPU tasks. Also, I have found that with the nVidia card, CUDA apps use fewer CPU cycles and OpenCL apps use more. When I run the nVidia Astropulse app for SETI@home, it uses an entire CPU core, but the multibeam CUDA app barely touches the CPU at all.

To control this, go to the computing preferences (in the Options or Tools menu) and look for the option "Use at most ___ % of the CPUs"; that controls how many CPU cores BOINC will use. http://boinc.berkeley.edu/wiki/Local_preferences
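For reference, that GUI setting is just written to global_prefs_override.xml in the BOINC data directory, so you can also set it by hand. A rough sketch for a 4-core machine that should leave one core free (75% of 4 cores = 3 cores; the percentage is an example, adjust it to your own CPU):

```xml
<!-- global_prefs_override.xml, in the BOINC data directory -->
<!-- Local override of the web-based computing preferences:
     75% of a 4-core CPU caps BOINC at 3 concurrent CPU threads,
     leaving one core free to service GPU tasks. -->
<global_preferences>
  <max_ncpus_pct>75.0</max_ncpus_pct>
</global_preferences>
```

Apply it with the Manager's "Read local prefs file" command or a client restart; with the cap in place, a multithreaded app like cosmology@home's should drop to 3 cores, the same way the milkyway@home app did for me.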