PCI express risers to use multiple GPUs on one motherboard - not detecting card?

Message boards : GPUs : PCI express risers to use multiple GPUs on one motherboard - not detecting card?

Previous · 1 . . . 9 · 10 · 11 · 12 · 13 · 14 · 15 . . . 20 · Next

Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 4470
United Kingdom
Message 95837 - Posted: 14 Feb 2020, 9:43:03 UTC - in response to Message 95831.  
Last modified: 14 Feb 2020, 9:43:32 UTC

Nvidia drivers can decide to run OpenCL tasks in CUDA if they want...
Yes. OpenCL is an intermediate-level, cross-platform programming language. Every manufacturer's driver - not just Nvidia's - compiles the OpenCL source code into machine-code primitives to match the hardware in use. And since Nvidia's hardware runs CUDA primitives, that's what the Nvidia compiler implementation will be designed to output.

It's not a 'decision' by the driver: it's a deterministic pathway defined by the programmer.
ID: 95837
Peter Hucker
Joined: 6 Oct 06
Posts: 1125
United Kingdom
Message 95840 - Posted: 14 Feb 2020, 15:21:33 UTC - in response to Message 95833.  
Last modified: 14 Feb 2020, 15:30:49 UTC


My 2ct, a VRM is a Voltage Regulator Module.
It regulates the voltage, it doesn't convert it.
It takes DC in, and makes sure that the GPU core gets the 12V it needs.


A GPU core needs about 1V. It has to change 12V to 1V, dropping the voltage and increasing the current.

This is different from a PSU, which not only converts AC to DC but also changes the voltage/amps; I'd say it's safest to say that a PSU is larger than a VRM.
A VRM is just a feedback-loop controller that makes sure voltage and current draw stay within limits.


It's more than that; what you're describing is more like a Zener diode.

A VRM is a plain digital chip, while a PSU has capacitors and digital circuitry inside, and is thus more complex than a VRM.


The main PSU in a computer is only more complicated because of:

Filtering out interference.
Converting AC to DC.
Much higher wattage.
Several different voltage outputs.

It is in fact just a rectifier, a filter, and several VRMs.

I think most people would see a VRM as a controller rather than a generator of power. Old-style PSUs were transformer-based, and thus generated voltage; the newer ones are based on digital circuitry, which makes them function much closer to a VRM.

My 2 uneducated cents. I don't care if I'm right or wrong about this.
This is just what I (to this day) believe.


Actually a VRM does convert the voltage. In the same way as one of the stages of a switched mode power supply takes the rectified 340VDC (yes I meant 340 - there's a root 2 in there when rectifying the sine wave, I'm also assuming European voltage, in the USA, halve it) and drops it to 12VDC for your computer, a VRM is usually a buck converter. It drops 12V to say 1.1V for a CPU. It's therefore overall performing the same task, just minus the AC-DC conversion.
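The voltage and current relationships described above can be sketched numerically. A minimal illustration, assuming ideal (lossless) converters; the 100W load is just an example figure:

```python
import math

def peak_dc(v_rms):
    """Peak voltage after rectifying a sine wave: V_rms * sqrt(2)."""
    return v_rms * math.sqrt(2)

def buck_duty_cycle(v_in, v_out):
    """Ideal buck converter: V_out = D * V_in, so D = V_out / V_in."""
    return v_out / v_in

# European 240V mains rectifies to roughly 340VDC (the root-2 factor)
print(round(peak_dc(240), 1))                # -> 339.4

# A VRM dropping 12V to 1.1V runs at about a 9% duty cycle
print(round(buck_duty_cycle(12.0, 1.1), 3))  # -> 0.092

# Power is (ideally) conserved, so current rises as voltage drops:
# a 100W core load is ~91A at 1.1V, but only ~8.3A drawn from the 12V rail
watts = 100.0
print(round(watts / 1.1, 1), round(watts / 12.0, 1))  # -> 90.9 8.3
```

In other words, both the PSU stage and the VRM are buck converters doing the same job at different ratios; the PSU just has the rectification step in front.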

https://en.wikichip.org/wiki/voltage_regulator_module

Don't take this as me boasting, but I do have a degree in electronics :-)
ID: 95840
Peter Hucker
Joined: 6 Oct 06
Posts: 1125
United Kingdom
Message 95841 - Posted: 14 Feb 2020, 15:33:04 UTC - in response to Message 95834.  

From reading the forum, one user said the most optimal setting for an RX5700 (XT),I believe was 190W, with overclock.
Similar performance came from an RTX 2070 at 137W (slower) and an RTX 2070 Super at 150W (faster than the 5700).
So while AMD may be faster, it also consumes more power.
The extra 40W comes to roughly $40 more in electricity per year.


It depends on whether you're taking electricity costs into account. As I've said before, it is possible to get FREE electricity if you have solar panels.
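The $40/year figure quoted above checks out arithmetically. A quick sketch, where the ~$0.11/kWh rate is an assumed US-typical figure and the load is assumed to run 24/7:

```python
def annual_cost_usd(extra_watts, usd_per_kwh=0.11):
    """Yearly cost of an extra continuous load at a flat electricity rate."""
    kwh_per_year = extra_watts / 1000 * 24 * 365
    return kwh_per_year * usd_per_kwh

# 40W around the clock is ~350 kWh per year...
print(round(40 / 1000 * 24 * 365, 1))   # -> 350.4
# ...which at $0.11/kWh lands close to the quoted $40
print(round(annual_cost_usd(40), 2))    # -> 38.54
```

With solar panels, of course, the marginal rate can be zero and the comparison changes entirely.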
ID: 95841
Peter Hucker
Joined: 6 Oct 06
Posts: 1125
United Kingdom
Message 95843 - Posted: 14 Feb 2020, 17:44:09 UTC
Last modified: 14 Feb 2020, 17:45:20 UTC

So, to get back on topic, I just acquired my third AMD Radeon 280X. All three connected to one multiplexer on a PCIE 1.0 x1. Working brilliantly. Two cheap CIT 850W (only 650W on 12V) power supplies, one powering two cards, one powering the third card and the computer.

Thanks to everyone who helped me sort out the technical difficulties.

I'm about to attempt daisy-chaining multiplexers just out of interest. I'll post back shortly. I don't actually need to, since the CPU in that machine will only keep up with 8 cards on Milkyway, so 2 multiplexers on 2 sockets would be fine. But in the future, with a Ryzen Threadripper, perhaps :-)
ID: 95843
Peter Hucker
Joined: 6 Oct 06
Posts: 1125
United Kingdom
Message 95844 - Posted: 14 Feb 2020, 18:20:30 UTC
Last modified: 14 Feb 2020, 18:20:53 UTC

Yes, you can. Multiplexers can be multiplexed. I have:

PCIE 1.0 x1 socket ---> 4 way multiplexer ---> GPU and 2nd 4 way multiplexer ---> 2 GPUs
ID: 95844
ProDigit

Joined: 8 Nov 19
Posts: 618
United States
Message 95872 - Posted: 17 Feb 2020, 3:03:43 UTC

I never managed to make those PCIe splitters work on my motherboards.
ID: 95872
Peter Hucker
Joined: 6 Oct 06
Posts: 1125
United Kingdom
Message 95878 - Posted: 17 Feb 2020, 17:58:50 UTC - in response to Message 95872.  
Last modified: 17 Feb 2020, 18:00:27 UTC

I never managed to make those PCIe splitters work on my motherboards.


The good ones work for me, even on a 12 year old motherboard. No drivers or anything; all cards detected as though they were plugged normally into separate sockets.

If it looks like one of these, it's rubbish:



If it looks like this, it will work:

ID: 95878
Ian&Steve C.

Joined: 24 Dec 19
Posts: 156
United States
Message 95904 - Posted: 18 Feb 2020, 14:38:29 UTC - in response to Message 95831.  


As for the 15%, it's an estimate, based on playing around with WUs and tasks.
Nvidia X Server only shows increments of roughly 15-20%. It doesn't show accurate results.


Why do you think that? Looks like it shows increments of 1% to me. Readings from my test bench show 0-10% and every value in between. Which is expected for the link speed/width and application being run.

https://imgur.com/a/rhBWRqy
ID: 95904
ProDigit

Joined: 8 Nov 19
Posts: 618
United States
Message 95943 - Posted: 19 Feb 2020, 3:45:43 UTC - in response to Message 95878.  
Last modified: 19 Feb 2020, 3:50:44 UTC

I never managed to make those PCIe splitters work on my motherboards.


The good ones work for me, even on a 12 year old motherboard. No drivers or anything; all cards detected as though they were plugged normally into separate sockets.

If it looks like one of these, it's rubbish:



If it looks like this, it will work:


I would say the opposite.
Those PCIe x1-to-4 boards never worked for me.
I use all the others, except my USB PCIe risers look like this:

and I use these:
ID: 95943
Ian&Steve C.

Joined: 24 Dec 19
Posts: 156
United States
Message 95961 - Posted: 19 Feb 2020, 16:53:18 UTC

The problem with the grey cables is that they aren't shielded and usually can't handle PCIe gen 3 speeds, depending on cable length and other sources of interference that might be nearby (like other unshielded PCIe cables, or power cables). The USB 3.0 cables (quality ones) are usually shielded enough for a stable PCIe 3 link over 1-3ft distances. But the USB cabled risers were really designed for crypto mining, which has very low bandwidth requirements and can easily run at gen 1/2 with no penalty, so they usually come with cheaper, low-quality cables.

I’ve used the 4-in-1 switch boards successfully for mining, but have limited use with them for BOINC crunching. They work OK provided you get a board that’s not defective. But you have to realize that a lot of these things were made cheaply with poor quality control to capitalize on the mining boom, so some work, some don’t.
ID: 95961
Peter Hucker
Joined: 6 Oct 06
Posts: 1125
United Kingdom
Message 95965 - Posted: 19 Feb 2020, 20:02:28 UTC - in response to Message 95961.  

The problem with the grey cables is that they aren't shielded and usually can't handle PCIe gen 3 speeds, depending on cable length and other sources of interference that might be nearby (like other unshielded PCIe cables, or power cables). The USB 3.0 cables (quality ones) are usually shielded enough for a stable PCIe 3 link over 1-3ft distances. But the USB cabled risers were really designed for crypto mining, which has very low bandwidth requirements and can easily run at gen 1/2 with no penalty, so they usually come with cheaper, low-quality cables.

I’ve used the 4-in-1 switch boards successfully for mining, but have limited use with them for BOINC crunching. They work OK provided you get a board that’s not defective. But you have to realize that a lot of these things were made cheaply with poor quality control to capitalize on the mining boom, so some work, some don’t.


I've only tried two 1-to-4 boards; they look identical to the one in the picture, and were bought new from eBay about a month ago. Both work flawlessly. I've got three R9 280X cards running off one in a PCIE 1.0 x1 slot. They run Einstein Gamma and Milkyway without slowing down. Not sure about Einstein Gravity, as that overloads the CPU and slows things anyway.
ID: 95965
ProDigit

Joined: 8 Nov 19
Posts: 618
United States
Message 95972 - Posted: 20 Feb 2020, 9:08:18 UTC

Shielded risers aren't necessary at 12 inches or below.
The weakness of the grey ones I use is not interference, but solder points breaking off (which is why they're doubled).
ID: 95972
Ian&Steve C.

Joined: 24 Dec 19
Posts: 156
United States
Message 95975 - Posted: 20 Feb 2020, 13:41:09 UTC - in response to Message 95972.  
Last modified: 20 Feb 2020, 13:46:31 UTC

It really depends on the situation. I’ve had many many of those grey risers that can’t maintain gen 3 speeds, even those shorter than 12”. But they worked acceptably at gen 2 or gen 1, which have much less strict requirements for the signal integrity. It wasn’t just a case of broken connections.

But weak connections are also a concern for them, especially the ones that have a power cable attached like in the pictures. It’s usually soldered by hand and covered in hot glue or something. Pretty easy to get one that’s defective and/or not made correctly, causing additional issues or frying your components.

In general I won’t even use the grey ribbon risers anymore.
ID: 95975
ProDigit

Joined: 8 Nov 19
Posts: 618
United States
Message 96044 - Posted: 24 Feb 2020, 23:35:14 UTC - in response to Message 95975.  

It really depends on the situation. I’ve had many many of those grey risers that can’t maintain gen 3 speeds, even those shorter than 12”. But they worked acceptably at gen 2 or gen 1, which have much less strict requirements for the signal integrity. It wasn’t just a case of broken connections.

But weak connections are also a concern for them, especially the ones that have a power cable attached like in the pictures. It’s usually soldered by hand and covered in hot glue or something. Pretty easy to get one that’s defective and/or not made correctly, causing additional issues or frying your components.

In general I won’t even use the grey ribbon risers anymore.

I never had a single issue with them, and have been using them for years: Gen 3, mostly PCIe x16 risers, all feeding RTX GPUs.
For x1 risers, I use USB.
I do have a few x1 ribbon risers, but not the grey ones (mine are shielded, and 12" or longer).
I never tried the x8 grey ribbon risers, but I did use the x4 ones for a while, until I found a replacement that allowed more flexibility.
ID: 96044
Tom M

Joined: 6 Jul 14
Posts: 76
United States
Message 96122 - Posted: 26 Feb 2020, 2:05:08 UTC - in response to Message 95008.  


I've had good luck with the UGREEN branded cables from amazon: https://www.amazon.com/UGREEN-Transfer-Enclosures-Printers-Cameras/dp/B00P0E3954/


+1

"Me too" :) I have had VERY good luck with them.

Tom
"You are entitled to your own opinion but not to your own facts." Senator and Professor Patrick Moynihan
In detail I am a Big Picture sort of guy.
ID: 96122
Ian&Steve C.

Joined: 24 Dec 19
Posts: 156
United States
Message 96145 - Posted: 27 Feb 2020, 19:56:03 UTC - in response to Message 96122.  
Last modified: 27 Feb 2020, 19:56:36 UTC

I finally found a project that very heavily relies on PCIe bandwidth, to a degree that running a USB riser on 3.0 x1 still isn't enough, much less a USB riser at lower PCIe specs.

GPUGRID.

~38-40% PCIe use on a PCIe 3.0 x8 link, about 18-20% on PCIe 3.0 x16.
This means that anything less than PCIe 3.0 x4, 2.0 x8, or 1.0 x16 will likely be constrained.

My 2 high-bandwidth systems are handling them well, though. My fastest system has 10 GPUs total: 8 at 3.0 x8 and 2 at 3.0 x1 (USB). So I just excluded the 2 USB-connected cards from running GPUGRID.
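That cutoff falls out of the per-lane numbers. A rough sketch; the per-lane figures are approximate usable bandwidth after encoding overhead:

```python
# Approximate usable bandwidth per lane, in GB/s:
# Gen 1/2 use 8b/10b encoding; Gen 3 uses the leaner 128b/130b.
LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985}

def link_bandwidth(gen, lanes):
    """Total one-way bandwidth of a PCIe link, in GB/s."""
    return LANE_GBPS[gen] * lanes

# A task using ~40% of a 3.0 x8 link needs on the order of 3 GB/s...
print(round(0.40 * link_bandwidth(3, 8), 2))  # -> 3.15

# ...and 3.0 x4, 2.0 x8, and 1.0 x16 all sit right around 4 GB/s,
# which is why anything below them is likely to be constrained.
for gen, lanes in [(3, 4), (2, 8), (1, 16), (3, 1)]:
    print(f"PCIe {gen}.0 x{lanes}: ~{link_bandwidth(gen, lanes):.2f} GB/s")
```

A USB riser's 3.0 x1 link offers under 1 GB/s, well short of what that workload demands.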
ID: 96145
Peter Hucker
Joined: 6 Oct 06
Posts: 1125
United Kingdom
Message 96146 - Posted: 27 Feb 2020, 20:09:21 UTC - in response to Message 96145.  

I finally found a project that very heavily relies on PCIe bandwidth, to a degree that running a USB riser on 3.0 x1 still isn't enough, much less a USB riser at lower PCIe specs.

GPUGRID.

~38-40% PCIe use on a PCIe 3.0 x8 link, about 18-20% on PCIe 3.0 x16.
This means that anything less than PCIe 3.0 x4, 2.0 x8, or 1.0 x16 will likely be constrained.

My 2 high-bandwidth systems are handling them well, though. My fastest system has 10 GPUs total: 8 at 3.0 x8 and 2 at 3.0 x1 (USB). So I just excluded the 2 USB-connected cards from running GPUGRID.


How do you exclude certain GPUs on the same system from running a certain project?

I've never used GPUgrid as I don't have any Nvidia GPUs. I used to have one but somebody stole it.
ID: 96146
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 4470
United Kingdom
Message 96147 - Posted: 27 Feb 2020, 20:43:39 UTC - in response to Message 96146.  

How do you exclude certain GPUs on the same system from running a certain project?
See the Client configuration - Options page in the User Manual.
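For the record, the relevant option is <exclude_gpu> in cc_config.xml, placed in the BOINC data directory. A sketch of the shape it takes; the GPUGRID URL matches the project discussed above, but the device numbers are made-up placeholders for whichever cards you want to exclude:

```xml
<cc_config>
  <options>
    <!-- Keep device 8 (e.g. a USB-risered card) off GPUGRID only -->
    <exclude_gpu>
      <url>https://www.gpugrid.net/</url>
      <device_num>8</device_num>
    </exclude_gpu>
    <exclude_gpu>
      <url>https://www.gpugrid.net/</url>
      <device_num>9</device_num>
    </exclude_gpu>
  </options>
</cc_config>
```

The URL should match the project URL exactly as your client shows it. After editing, re-read the config files from the BOINC Manager (or restart the client) for it to take effect.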
ID: 96147
Peter Hucker
Joined: 6 Oct 06
Posts: 1125
United Kingdom
Message 96148 - Posted: 27 Feb 2020, 21:01:31 UTC - in response to Message 96147.  

How do you exclude certain GPUs on the same system from running a certain project?
See the Client configuration - Options page in the User Manual.


Thanks, I didn't know that level of control was possible. This will be very useful as I expand my supercomputer :-)
ID: 96148
ProDigit

Joined: 8 Nov 19
Posts: 618
United States
Message 96155 - Posted: 28 Feb 2020, 6:14:14 UTC - in response to Message 96145.  
Last modified: 28 Feb 2020, 6:15:22 UTC

I finally found a project that very heavily relies on PCIe bandwidth, to a degree that running a USB riser on 3.0 x1 still isn't enough, much less a USB riser at lower PCIe specs.

GPUGRID.

~38-40% PCIe use on a PCIe 3.0 x8 link, about 18-20% on PCIe 3.0 x16.
This means that anything less than PCIe 3.0 x4, 2.0 x8, or 1.0 x16 will likely be constrained.

My 2 high-bandwidth systems are handling them well, though. My fastest system has 10 GPUs total: 8 at 3.0 x8 and 2 at 3.0 x1 (USB). So I just excluded the 2 USB-connected cards from running GPUGRID.

I run GPUGRID with an RTX 2080 Ti through a PCIE x1 slot on Linux, and it seems to work reasonably well.
It's a test right now, and I've had to lower the power limit from 300W to 200W, at which my GPUs run at 1500-1700MHz instead of 2GHz, but they seem to show 100% GPU utilization.
Linux is much kinder on PCIE bandwidth.
Pretty soon I'll put the wattage back up to 225-250W, as I'll be removing 1 GPU tomorrow, just to be able to run the ones I have installed as optimally as possible.
I don't use a VM; I just run it straight from a $15 64GB SATA SSD, and use F11 in the BIOS if I want to go back to the full-blown Windows 10 on my other drive.
ID: 96155

Copyright © 2021 University of California. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.