Thread 'PCI express risers to use multiple GPUs on one motherboard - not detecting card?'

Ian&Steve C.

Joined: 24 Dec 19
Posts: 229
United States
Message 95151 - Posted: 15 Jan 2020, 19:22:33 UTC - in response to Message 95149.  
Last modified: 15 Jan 2020, 19:33:45 UTC

Are you sure you mean aft? I call the back (aft) of the motherboard the side with the sockets on it, that faces the back of the tower case.
I guess that depends on your frame of reference, but the picture is self-explanatory: you cannot cut the end of the slot to fit a 16x card, as the heatsink is still in the way.


Not sure about the battery, it might be low enough, I can't see at the moment as the GPU is in the slot above it.
Trust me, it's not low enough, especially with the way the battery retention clip is oriented (this part sticks up higher).


I will experiment when I get more cards in the future - might as well try. Some projects like Milkyway run almost everything on the GPU. Einstein would probably be hindered by 1x though. If they're both too slow, I'll get another motherboard for more GPUs.
I can confidently say that it will greatly slow down your processing times. Feel free to try, but when you compare it to the times with the same card in the 16x slot, you'll understand.

The bandwidth is crazy low: just 1/32 (3.125%) of the total bandwidth of the 16x 2.0 slots.
PCIe 1x 1.0 = 250MB/s (this is what you have)
PCIe 1x 2.0 = 500MB/s
PCIe 1x 3.0 = 1000MB/s

So your 16x 2.0 slots are capable of 8000MB/s.
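
To put those numbers side by side, here's a minimal Python sketch of the arithmetic (the figures are the per-lane rates quoted above; the function name is just for illustration):

# Per-lane PCIe bandwidth in MB/s, per generation (figures quoted above)
PER_LANE_MBPS = {"1.0": 250, "2.0": 500, "3.0": 1000}

def slot_bandwidth(gen, lanes):
    # Total usable bandwidth of a slot: per-lane rate times lane count
    return PER_LANE_MBPS[gen] * lanes

print(slot_bandwidth("1.0", 1))    # 250 MB/s  - the x1 1.0 slot
print(slot_bandwidth("2.0", 16))   # 8000 MB/s - the x16 2.0 slots
print(slot_bandwidth("1.0", 1) / slot_bandwidth("2.0", 16))  # 0.03125 = 1/32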

SETI runs OK at PCIe 1x 3.0 (1000MB/s), and I consider SETI to be one of the projects on the lower end of PCIe bandwidth requirements. I truly believe you're going to have a bad time trying to use the 1x 1.0 ports, even if you ever get a card recognized in one. I think your only shot at getting a GPU recognized will be a USB-style riser that draws no power from the slot.


Isn't it possible to multiplex a 16x slot to put more cards in it? That's what those one 1x to four 16x risers do.
There are two ways to do this.

Bifurcation: this allows the lanes of one slot to be broken up across multiple devices - for example, splitting a single 16x slot into two 8x, or four 4x. It needs to be supported at the hardware level AND the BIOS level: if the hardware can do it but the BIOS can't, it won't work, and vice versa. This feature is typically only seen on enterprise-grade equipment. The only manufacturer I've seen enable it on consumer-level products is AsRock.

PLX switch (multiplexing): this is accomplished with an add-on board like the one you mentioned, and it requires specific hardware. If your motherboard was not built with embedded PLX chips (and most are NOT), you need an add-on board. Keep in mind that the 4-in-1 board you linked only switches a SINGLE lane, so the other 15 lanes are not being used at all. You are then switching 4 devices on a single PCIe 2.0 lane, with only 500MB/s of total bandwidth to share.
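
A rough sketch of the difference between the two approaches, using the same per-lane figures as above (the worst case assumes all four GPUs transfer at once):

LANE_MBPS = 500  # one PCIe 2.0 lane

# Bifurcation: 16 lanes split into four dedicated x4 groups
per_gpu_bifurcated = LANE_MBPS * 4   # 2000 MB/s each, no sharing

# PLX switching via a 1x 4-in-1 riser: one lane shared by four GPUs
per_gpu_switched = LANE_MBPS / 4     # 125 MB/s each when all are busy

print(per_gpu_bifurcated, per_gpu_switched)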


I can't believe Asus ever designed a board this badly! It wasn't my board, I was given it+CPU+RAM as payment for installing a new one of each in a friend's computer.
To be fair, you're trying to use the board in a way it was never designed for, and this generation of hardware is about 12 years old now - ancient in the world of PC hardware. For multi-GPU setups, the motherboard needs to be able to map all the memory of the GPUs, and if it has insufficient resources to do this, it will not work. This is why newer motherboards have a BIOS setting called "Above 4G decoding": it allows memory mapping of larger amounts of memory, and it's pivotal to getting multi-GPU setups to work.
ID: 95151
robsmith
Volunteer tester
Help desk expert

Joined: 25 May 09
Posts: 1299
United Kingdom
Message 95160 - Posted: 15 Jan 2020, 22:23:19 UTC

What about if I (as I do already to a certain extent) run several tasks on each GPU? Do you know what the continuous average data rate is for GPU projects? Running several tasks on one GPU would get rid of problems with peak transfer rate, just as at the moment I use it to get around the slow CPU.


That all depends on a whole pile of factors, some of which are: the GPU in question; the application(s) involved; the number of tasks/applications being run; the motherboard; the CPU; the motherboard RAM. I have no doubt that others can add to the list.

Let's go a bit deeper into the list.
The GPU - obviously, the more powerful the GPU, the more likely it is to be able to run multiple tasks.
The application - a bit less obvious, but some applications have been designed to use all of one of the GPU's resources, so trying to run two of them will cause quite a substantial amount of bus traffic. There are undoubtedly pairs of applications so incompatible with each other that they will simply refuse to play ball.
The motherboard - fairly obvious, if this has a very poor PCIE bus, or another data bottleneck in the path between the CPU and the GPU then loading it more won't exactly help the overall performance.
The CPU - simply, some CPUs are better at supporting the demands of GPUs than others - for example the AMD FX family is pretty bad when compared with its contemporary Intel equivalents.
Motherboard RAM - is there enough of it - particularly important where the GPU shares RAM space with the CPU.

In truth, the only way one finds out is to try it and see - start with a low multiple and don't be surprised at the result (in either direction). Allow plenty of time (days or even weeks) for a stable situation to be reached, and obviously be prepared to abandon any combinations or multiples that cause lots of errors to be generated.
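
For reference, the task multiple itself is set with BOINC's app_config.xml, placed in the project's directory under the BOINC data folder. A minimal sketch - the <name> element is project-specific, so "milkyway" here is only a placeholder; 0.5 GPU per task means two tasks share each GPU:

<app_config>
  <app>
    <name>milkyway</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.5</cpu_usage>
    </gpu_versions>
  </app>
</app_config>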

Your second point sounds very much like a hardware/BIOS issue with your motherboard; some combinations will actually let you see a display from both GPUs during boot, others won't. When you've got two GPUs installed, one way to find out if there is a slot dependency is to have a single monitor and see what happens during boot, trying both GPUs in turn; you may find that either GPU can be used for boot purposes, or only one of them. The weirdest one I had was a system with three GPUs where I could use any of them as the "boot GPU", but when I took one of the GPUs out, only the one in the slot nearest the CPU could be used. As you can imagine, it was "quite interesting" when the two GPUs were in the slots further away.
ID: 95160
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5124
United Kingdom
Message 95165 - Posted: 15 Jan 2020, 23:10:06 UTC - in response to Message 95160.  

Another bullet point: the science.

Look at Einstein's Gravity Wave application (now available for GPU). I downloaded 130 MB of data this morning - all of that has to get to the GPU somehow. Compare with a maths app: no data, just a parameter set. Is number X prime, or does number Y disprove the Collatz Conjecture? They place different demands on the GPU.
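
As a back-of-the-envelope Python sketch (raw transfer time only, ignoring protocol overhead, and assuming the whole data set crosses the bus once):

data_mb = 130  # the Einstein Gravity Wave data set mentioned above
links = {"x1 1.0": 250, "x1 3.0": 1000, "x16 2.0": 8000}  # MB/s
for name, mbps in links.items():
    print(f"{name}: {data_mb / mbps:.2f} s")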
ID: 95165
ProDigit

Joined: 8 Nov 19
Posts: 718
United States
Message 95169 - Posted: 15 Jan 2020, 23:29:41 UTC - in response to Message 95141.  
Last modified: 15 Jan 2020, 23:58:36 UTC

That may require some nifty work - probably not with a hacksaw, but with a fine cutting tool like a Dremel. By opening the end of the PCIe x1 slot, the x16 riser could be physically inserted into the x1 slot. He'd still have to watch out for power consumption: the sense pin should tell the motherboard that a card is present, but he would still have to investigate and manage the card's actual power draw from each input.

It should be possible: his cards have nominal power inputs for 375W (75W from the PCIe slot, plus 2x 150W 8-pin supplementary inputs). The cards he has are rated at 250W average total board power. So there's headroom - it's just a question of which input has the spare capacity, and that depends on the manufacturer.

To my mind, fitting the dual 8-pins suggests that the bulk of the power will be taken from them: if the full 75W were taken from the motherboard, they could have got away with one 8-pin and one 6-pin. But I am not a circuitry designer: it's all supposition, and it might still fry the motherboard. Proceed with extreme caution, and keep a fire extinguisher close at hand.
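
The headroom arithmetic, as a small Python sketch (all figures are the nominal ratings quoted above):

SLOT_W = 75          # PCIe x16 slot
EIGHT_PIN_W = 150    # each 8-pin supplementary input
available = SLOT_W + 2 * EIGHT_PIN_W  # 375 W total input capacity
board_power = 250                     # card's rated average board power
print(available - board_power)        # 125 W of headroom - but which input
                                      # has it depends on the manufacturer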


I was referring to the OP, who said he had 2 full-size slots, one of which couldn't be used because there wasn't enough clearance for the GPU to breathe.
A PCIE 16x-length cable can offer 16x/8x/4x speeds, while still offering sufficient clearance for cooling.

Once all full-size slots are occupied, you can start on the PCIE 1x slots, which obviously need a 1x riser.
1x ribbon risers are less flexible than USB risers, in that they often offer inferior power distribution (often just a cap soldered on a few legs on the GPU female slot side, and a dual wire to a 4-pin HDD/fan connector).
Not recommended.
The USB risers have better capacitors and better power distribution, and they rely less (or not at all) on the caps on the motherboard.


Most motherboards will allow 2 or 3 GPUs on the full-size slots, and 1 or 2 GPUs on the PCIE 1x ports.
In my case, I was lucky enough to find a board that supported 4 GPUs (1 on a PCIE 1x slot), and the fifth one fitted on an m.2 to PCIE 4x riser, then through a PCIE 4x to 16x ribbon cable to the GPU,
running in 4x/4x/4x/1x + 4x mode.
PCIE 3.0 4x is good enough for RTX 2080 Tis or RTX Titans.
PCIE 3.0 1x is good enough for RTX 2060/2060 Super GPUs
(both in Linux).
Any lower and there'd be a performance tradeoff, which is why I'd recommend using up as many full-size slots on the mobo as possible first.




I have received the first of my adapters - the 16x to 16x ribbon, but it's advertised as PCI Express 1.0 (unshielded) - it uses a ribbon similar to an IDE cable. I've ordered a shielded one that says PCI Express 3.0. The 1.0 ribbon does seem to be working though. 2xMilkyway on each of 2 cards, or 2x Einstein Gamma on each of 2 cards, no crashing :-)

For 6" to 1ft, you don't need shielding on PCIE 3.0.
It only matters on PCIE 4.0, or on longer cables.

If you're worried about signal integrity, the dual ribbon cables are pretty good!
The worst failure mode is when one or more conductors don't make contact, and with a double ribbon the chances of that happening are much lower.
They're made for more permanent setups, not ones where the GPU will be unplugged or swapped on a regular basis.
ID: 95169
ProDigit

Joined: 8 Nov 19
Posts: 718
United States
Message 95175 - Posted: 16 Jan 2020, 0:05:48 UTC - in response to Message 95173.  
Last modified: 16 Jan 2020, 0:10:42 UTC



Thanks for that info. The ribbon on this one is 6 inches (a rough guess comparing it with something a bit longer - from memory before you ask). I'll not bother fitting the 3.0 version when it arrives then. But probably best to buy those in future for faster stuff (they cost about 8 quid instead of 6 in the UK). It's strange that they only said it was 1.0 compliant though.

They may be single-ribbon cables instead of double.
If they're single-ribbon cables, then as long as they don't have any broken leads they should be good for PCIE 3.0 speeds as well as lower ones.
Since there aren't too many PCIE 4.0 devices out, I wouldn't know about PCIE 4.0.
Considering that they're just leads, like the ones built into the motherboard, they should work just fine.

PCIE 3.0 is not happy with 2 PCIE riser cables connected in series, which means 1ft of cable might be susceptible to signal interference. But I've never had any issues on 6"-1ft risers.
The USB versions are more expensive, but I'd really recommend them on PCIE 1x slots (in case you want to run 3 or 4 GPUs).

With USB risers, know that the SATA-connector versions can only handle 2 risers per cable, and one is preferred.
The risers use ~35W of power, and a SATA lead from the PSU can only handle up to 75W.
I've had situations where GPUs would fail or error because of an occasional lack of power.
Once I provided one lead for each riser, the issue went away.
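
A quick sketch of that budget, using the figures above:

SATA_LEAD_W = 75   # what one SATA lead from the PSU can handle
RISER_W = 35       # approximate draw per USB riser
print(SATA_LEAD_W // RISER_W)  # 2 - two risers per lead at most,
                               # and one per lead is safer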
ID: 95175
ProDigit

Joined: 8 Nov 19
Posts: 718
United States
Message 95177 - Posted: 16 Jan 2020, 2:04:04 UTC - in response to Message 95176.  
Last modified: 16 Jan 2020, 2:21:43 UTC


You're confusing me now. Surely you have to have double or there won't be enough connections made?

.......

Depends on the PSU. Corsair PSUs have much thicker cabling than crap like CIT and Alpine.


The double PCIE ribbon cables are just doubled versions of their single counterparts; they don't carry twice the number of connections.
The two ribbons are soldered in parallel, meaning 2 wires carry 1 signal.
You could cut one of the ribbons and still have a fully working riser cable.
IMO the only real reason they are doubled is power provision.
A slot only has a few wires that carry +12V (a PCIE 1x slot provides up to 25W, 4x and 8x up to 25W, and 16x needs to provide 75W), so doubling them up is wise.

The spec for SATA power connectors is 75W, and that's pretty much universal.
The riser boards you have have 6-pin connectors on them.
A 6-pin connector has a 75W maximum carrying capacity; an 8-pin is rated at 150W, but its 2 additional wires are ground wires, which a 6-pin riser board doesn't use.
So if a lead from the PSU has 2x 6-pin or 2x 8-pin connectors on it for the risers, each connector is still effectively limited to 75W.
If you had a way to move those last 2 ground wires of the 8-pin onto ground pins of the 6-pin connector, you could run more than 2 riser boards per cable (an 8-pin with all grounds connected is rated at 150W).
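
The same arithmetic for the PCIe power connectors, as a sketch (nominal ratings; the ground-wire rework above is hypothetical):

SIX_PIN_W, EIGHT_PIN_W = 75, 150
RISER_W = 35

# Through a stock 6-pin riser socket only the 6-pin subset carries power:
print(SIX_PIN_W // RISER_W)    # 2 riser boards per connector
# With all 8 pins usable (the ground-wire rework described above):
print(EIGHT_PIN_W // RISER_W)  # 4 riser boards per connector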

You can also connect 3 riser boards to 2 GPU cables by using a 2x 6-pin to 1x 6-pin splitter, joining the power of 2 leads together.
It will balance out power fluctuations.

Even I have had stability issues with EVGA and Corsair Gold 1000W PSUs providing a mere 500-700W per system.
ID: 95177
robsmith
Volunteer tester
Help desk expert

Joined: 25 May 09
Posts: 1299
United Kingdom
Message 95182 - Posted: 16 Jan 2020, 8:10:08 UTC

There are several configurations of "two ribbon" cable risers:
Those where each side of the connector is connected to its own cable. These give a bit of signal segregation, so may be less noisy than a single cable, but if not done properly could cause timing errors.
Those which double up the power lines. A good idea - a bit more copper available for the power.
Those which use both cables for all signals. No real advantage over just doubling up the power lines, and they may cause some signal timing errors.

Given that 6 inches is about a quarter-wavelength at the frequencies these buses run at, unscreened cables may be OK on some motherboards but a disaster on others. It is always best to use screened cables. (The screened cables I've seen have also tended to be of far better manufacturing quality than unscreened ones.)
ID: 95182
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5124
United Kingdom
Message 95188 - Posted: 16 Jan 2020, 9:33:04 UTC - in response to Message 95182.  

I'm of an age to remember when we went through a similar transition for IDE hard drives. The actual pinout required a 40 conductor cable, but the fastest motherboards and drives switched to using 80 conductors in the same form factor - thinner wires, so the ribbon cable felt much smoother to the touch. I don't know how the 80 conductor cables were wired - they used the same 40-pin connectors. They might have been two wires per signal, or they might have been grounded guard wires between each signal wire. Rob, any idea?
ID: 95188
Dave
Help desk expert

Joined: 28 Jun 10
Posts: 2676
United Kingdom
Message 95190 - Posted: 16 Jan 2020, 9:40:35 UTC - in response to Message 95188.  

I'm of an age to remember when we went through a similar transition for IDE hard drives. The actual pinout required a 40 conductor cable, but the fastest motherboards and drives switched to using 80 conductors in the same form factor - thinner wires, so the ribbon cable felt much smoother to the touch. I don't know how the 80 conductor cables were wired - they used the same 40-pin connectors. They might have been two wires per signal, or they might have been grounded guard wires between each signal wire. Rob, any idea?


Just did a search to check my memory was correct. (The chips are getting a bit old.)

An 80-conductor IDE cable has 40 pins and 80 wires – 40 wires are for communication and data, the other 40 are ground wires to reduce crosstalk on the cable.
ID: 95190
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5124
United Kingdom
Message 95192 - Posted: 16 Jan 2020, 10:07:50 UTC - in response to Message 95190.  
Last modified: 16 Jan 2020, 10:56:31 UTC

Just checked in my parts bin. An IDE cable is 2 inches across (like all electronics specified in the USA, I'm sure it's non-metric). So the wire pitch of a 40-conductor cable is 0.05 inches; for 80 conductors it's 0.025 inches. The difference is clear to the naked eye.
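
The pitch arithmetic, for what it's worth:

cable_width_in = 2.0
print(cable_width_in / 40)  # 0.05 in per conductor (40-wire cable)
print(cable_width_in / 80)  # 0.025 in per conductor (80-wire cable)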

Seems like the guard wire technology came in with the ATA-66 specification around 2000.
ID: 95192
Jord
Volunteer tester
Help desk expert

Joined: 29 Aug 05
Posts: 15552
Netherlands
Message 95193 - Posted: 16 Jan 2020, 10:59:36 UTC - in response to Message 95175.  

Since there aren't too many PCIE 4.0 devices out...
But there is: all the AMD GPUs available now - the RX 5500, 5600 and 5700 series - support it. Intel added support to its Optane SSDs. Asrock and Asus have launched their own versions of M.2 cards in which you can slot up to 4 M.2 SSDs. There's enough PCIe 4.0 around, if only you care to look.
ID: 95193
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5124
United Kingdom
Message 95195 - Posted: 16 Jan 2020, 11:08:15 UTC - in response to Message 95193.  

Since there aren't too many PCIE 4.0 devices out...
But there is: all the AMD GPUs available now - the RX 5500, 5600 and 5700 series - support it. Intel added support to its Optane SSDs. Asrock and Asus have launched their own versions of M.2 cards in which you can slot up to 4 M.2 SSDs. There's enough PCIe 4.0 around, if only you care to look.
That sounds like the sort of device I helped Eric install in Muarae2 in the summer - 4 M.2 SSDs on a single PCIe card, lying flat in a 1U server case. Very fiddly. I doubt that would work on the end of a riser cable. Mind you, I think Eric is still having difficulty getting it to work in the datacenter.
ID: 95195
