Message boards : BOINC client : CUDA on Vista SLI or not?
Send message Joined: 17 Apr 09 Posts: 6 |
Scenario: Vista x64, GTX 295 (185.xx driver), BOINC 6.6.20. From what I understand, in order to have BOINC see both GPUs I need to disable SLI; otherwise only one CUDA device is detected. However, do I really want to? Simply put, from a performance perspective, which is faster (and is there a difference at all?): a) SLI mode -- one workunit is supposedly crunched by both GPUs? b) Non-SLI mode -- two workunits are crunched, one on each GPU separately? I don't care about dedicating one GPU to work and leaving the other idle, so if there is no performance difference, why disable SLI at all (scenario a)? Or am I missing something...? |
Send message Joined: 5 Mar 08 Posts: 272 |
Scenario: Vista x64, GTX 295 (185.xx driver), BOINC 6.6.20. Nothing is going to make use of the extra multiprocessors/shaders if you run it in SLI mode, so it's simply a waste. If you disable SLI it will see 2 cards and run 2 workunits at a time, i.e. option b is the way to go. The guys over at GPUgrid are looking at being able to use SLI mode and any extra multiprocessors/shaders to speed up workunit processing, but have yet to release anything. MarkJ |
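For anyone curious what "seeing 2 cards" means in practice, here is a minimal sketch (not BOINC's actual detection code) of how a CUDA host program enumerates the devices the driver exposes. The expectation described above is that with SLI enabled the driver presents the GTX 295 as a single CUDA device, so a loop like this reports one device, and with SLI disabled it reports two:

```cpp
// Minimal sketch of CUDA device enumeration (illustrative only, not BOINC's code).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int device_count = 0;
    cudaError_t err = cudaGetDeviceCount(&device_count);
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    std::printf("CUDA devices visible: %d\n", device_count);
    for (int i = 0; i < device_count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("  device %d: %s, %d multiprocessors\n",
                    i, prop.name, prop.multiProcessorCount);
    }
    return 0;
}
```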
Send message Joined: 17 Apr 09 Posts: 6 |
Got it. So I guess it's a Vista/NVidia driver limitation? Any hope this will be resolved any time soon? |
Send message Joined: 29 Aug 05 Posts: 15582 |
No, SLI means that two or more identical video cards produce a single output; there's nothing to fix there. It won't speed up calculations, it will only speed up graphics in 3D games by making them believe you have one monster card. |
Send message Joined: 17 Apr 09 Posts: 6 |
So there's no way for BOINC to leverage the card in SLI mode just like '3D games do'...? |
Send message Joined: 29 Aug 05 Posts: 15582 |
I don't know, not from the way that you're asking it. Explain, please? If you mean, can BOINC switch SLI off to use the two GPUs as separate entities, then no, it can't. |
Send message Joined: 25 Nov 05 Posts: 1654 |
Breach, you're looking at this in the wrong way. The idea of SLI is to make the GPU subsystem more powerful by adding more cards in parallel to run a SINGLE game faster/better. But what BOINC needs to run more WUs is lots of individual GPUs, each one running a separate WU. SLI is like coupling lots of locomotives together to pull a single, very long line of wagons. (Much like they do in the NW of Australia, where I think they've had a line of iron ore wagons over a mile long.) This contrasts with BOINC, where the idea is to have lots of separate "trains", all going their own separate ways. To do this, you need to "uncouple" the power units by removing the SLI link. |
Send message Joined: 17 Apr 09 Posts: 6 |
Okay, thanks. Let me try to clarify my question: for all intents and purposes, the way I understand SLI (and hence my question), it's like having one engine which performs better than two separate ones. If I can run game A at 30 fps with one card, and then add another one in SLI which makes the game run at 60 fps (assuming linearity for the sake of the argument), can't it work the same way with BOINC? For example, if I have one GPU which crunches a workunit in 10 minutes, and then add another GPU in SLI mode, can't BOINC leverage it so that I am still processing one workunit at a time but it takes 5 minutes to complete? To the best of my understanding, however, BOINC needs to "see" each card as a separate entity. Thanks for bearing with me ;) |
Send message Joined: 29 Aug 05 Posts: 15582 |
If I can run game A at 30 fps with one card, and then add another one in SLI, which makes the game run at 60 fps (linearity for the sake of the argument), can't it work the same way with BOINC? That's not how SLI works. Basically, what SLI does is split the rendering of the 3D scene in a game between the two cards. Both cards are given the 3D scene to render: the first card renders only the top half of the screen, while the second card renders the bottom half. The second card then sends its output to the first card, which combines the two images and outputs the result to the monitor. Read more on SLI in the Wiki. So no, it won't speed up CUDA crunching on the projects that use CUDA at the moment; if anything it will slow things down, since you're effectively only using one GPU. To speed things up, BOINC has to see all GPUs separately and crunch one task per GPU. BOINC applications can't crunch a single task in parallel across GPUs, so they won't be able to use two GPUs on one task. |
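To make the "one task per GPU" point concrete, here is a hedged sketch of how a per-task CUDA application can be pinned to a single GPU: each instance calls cudaSetDevice() with the index it is given and does all of its work there, so two instances on a non-SLI GTX 295 (devices 0 and 1) crunch independently. The kernel, buffer sizes, and command-line argument are placeholders, not the code of any actual BOINC project application.

```cpp
// Hedged sketch: one application instance per GPU. Not any project's real app.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Stand-in kernel for the real science code.
__global__ void crunch(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * data[i];
}

int main(int argc, char** argv) {
    // Device index handed to this instance (e.g. 0 or 1 on a GTX 295 with SLI off).
    int device = (argc > 1) ? std::atoi(argv[1]) : 0;
    if (cudaSetDevice(device) != cudaSuccess) {       // bind this process to one GPU
        std::printf("cannot select device %d\n", device);
        return 1;
    }
    const int n = 1 << 20;
    float* d_data = nullptr;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));         // placeholder input data
    crunch<<<(n + 255) / 256, 256>>>(d_data, n);      // runs on the selected GPU only
    cudaDeviceSynchronize();                          // wait for this GPU to finish
    cudaFree(d_data);
    std::printf("device %d finished its workunit\n", device);
    return 0;
}
```

Two such instances, started with device indices 0 and 1, would behave like scenario b above: two workunits in flight, one per GPU, with no way for either instance to borrow the other GPU's shaders.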
Send message Joined: 17 Apr 09 Posts: 6 |
Thanks, that's exactly what I was wondering about, case closed (wish turning off/on SLI wasn't so sadistically slow though) |
Send message Joined: 17 Apr 09 Posts: 6 |
No, it's not the UI (which is 3 clicks away), it's the amount of time it takes the card to actually detach SLI -- something like 5-10 seconds, text-mode console, adapter resets, etc. Dunno if there's a way to turn off SLI with a command-line command. It would help to script it -- exec a batch file which turns off SLI when you start BOINC, or something. |
Send message Joined: 29 Aug 05 Posts: 15582 |
|
Send message Joined: 14 Mar 09 Posts: 215 |
[quote]Okay, thanks. Let me try to clarify my question: for all intents and purposes, the way I understand SLI (and hence my question), it's like having one engine which performs better than two separate ones.[/quote] More like two small-block engines (in the same car) will run better than one big-block (old-school) engine under the hood. |