Message boards : GPUs : Boinc using integrated Intel GPU instead of Nvidia GPU - Lenovo Y700
Joined: 5 Oct 06 · Posts: 5130
Jacob Klein has pointed out to me that each of the four mini performance graphs at the top right of my screenshot has a dropdown arrow beside its label. That opens a list of (I think) 12 different metrics to plot; compute_0 is the one that BOINCers are likely to want.

Unfortunately, Task Manager: does NOT remember that I last viewed the Performance tab; DOES remember that I viewed the performance of the GPU; does NOT remember that I want to view compute_0. That's just sloppy GUI design...
Joined: 9 Dec 17 · Posts: 9
plot: compute_0 is the one that BOINCers are likely to want.

Wow. Changing that option shows usage jump to 100%. Looks like this is what we were looking for all along =D. I also have a few questions, if you guys can help me out.

Q1) I have two projects running on my laptop at the moment: NFS@Home and SETI@Home. NFS is configured to use only the CPU, and the SETI tasks all run on the NVIDIA GPU. But SETI also uses the CPU, so my question is whether I should have both running together, or whether they cut into each other's efficiency by throttling each other.

Q2) Since both my CPU and NVIDIA GPU are being used, but I couldn't find a project for the Intel GPU: is it worth running three projects, one on the CPU, one on the NVIDIA GPU, and one on the Intel GPU? If yes, could you guys also recommend a project that uses the Intel GPU? (I tried Einstein@Home, but the version of my Intel GPU is higher than the maximum it allows, so my Intel GPU is sitting idle.) (I also tried SETI with both the Intel and NVIDIA GPUs allowed, but then one task for the Intel says "waiting to run" while the other runs on the NVIDIA; they take turns rather than running simultaneously.)

Q3) Is it also worth (as stated here: http://it.com/gridcoin/@vortac/gridcoin-gpu-mining-6-obtaining-the-maximum-performance-out-of-your-gpus ) running multiple BOINC tasks per GPU, or will that too throttle each task so they each work at a quarter efficiency?

Sorry for asking more questions, and thanks a lot for helping out already =D
Joined: 20 Nov 12 · Posts: 801
The same answer goes for all your questions: you have to try it out and measure what's best on your hardware.

To get you jumpstarted: it is often beneficial to run more than one task per AMD or NVIDIA GPU. Intel GPUs are weak and probably won't show any benefit. How many tasks depends on your hardware: low end -> fewer tasks, high end -> more tasks. It also depends on the science app; some apps can use the entire GPU efficiently with just one copy, while for others you need to run five copies to use the entire GPU. The project message boards can give you better optimising tips.

As for the Intel GPU, you need to try it out. I have an i5-6200U that is configured for a 15 W package power limit. That is enough to run the CPU at its maximum turbo limit, or to run the GPU at maximum speed, but not enough to run both CPU and GPU at maximum speed at once. If I run both, the GPU gets priority and the CPU is downclocked. I don't have precise measurements, but it seems I would lose more from CPU processing than I would gain from GPU processing.
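For anyone trying the multiple-tasks-per-GPU experiment: this is configured per project with an app_config.xml file placed in that project's folder under the BOINC data directory. A minimal sketch; the app name setiathome_v8 here is only an example (look up the real app name in the project's client_state.xml entries or on its message boards):

```xml
<!-- app_config.xml: put in the project's directory, e.g.
     .../projects/setiathome.berkeley.edu/app_config.xml -->
<app_config>
  <app>
    <name>setiathome_v8</name> <!-- example name; substitute the project's actual app name -->
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage> <!-- 0.5 = two tasks share one GPU -->
      <cpu_usage>0.5</cpu_usage> <!-- CPU fraction budgeted per GPU task -->
    </gpu_versions>
  </app>
</app_config>
```

After saving the file, Options → Read config files in BOINC Manager applies it without restarting the client. Start with 0.5 (two tasks), measure throughput, and only go lower if the GPU still has headroom.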
Copyright © 2024 University of California.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License,
Version 1.2 or any later version published by the Free Software Foundation.