PCI express risers to use multiple GPUs on one motherboard - not detecting card?

Peter Hucker
Joined: 6 Oct 06
Posts: 1088
United Kingdom
Message 95148 - Posted: 15 Jan 2020, 18:43:12 UTC - in response to Message 95137.  
Last modified: 15 Jan 2020, 18:49:37 UTC

Great, but if you read the description of what the OP's motherboard has, and what he wants to achieve, you would realise that what you are suggesting will not fully answer his question.
Summary - The motherboard has two x16 slots and a number of x1 slots. He wants to run more than 2 GPUs, he wants them air-cooled, and the x16 slots are too close together for his comfort. Thus he needs to be able to use at least one of the x16 slots with a riser, and the x1 slots will need a riser anyway - and those risers will have to be x1 to x16 because they have to be x1 at the motherboard end. So far he's had no joy in getting the motherboard to recognise anything sitting on a x1 to x16 riser. Ian&Steve has made a few suggestions, and I know he has done a lot of work in getting similar (not identical) systems working.


My x1-to-x16 riser was cheap rubbish. A better one is in the post, which may work. If it doesn't, I guess I'm stuck with two cards maximum (I'm not willing to hack into the sockets, I'll break something!). I'll then assume that the old motherboard is simply incompatible, and keep all the risers to try with different cards and motherboards in the future.

I have two other computers, but one is in here and I use it all the time, and I don't like the noise of large cards in it. The other is a cheap Packard Bell (ack!) with a motherboard and CPU that look like they should be in a laptop. The PSU is an external 60W brick, and the CPU uses 6W. I'm guessing I might have even more problems trying to use the x16 PCI Express slot in there. I can certainly change the power supply and shove the whole thing in a proper case, but if the motherboard doesn't have enough copper to take 75W into the PCI Express socket... something might fry? Or would the motherboard be sensible enough to say "you can't have 75W, tough", so the GPU sources it from the top connectors instead, or refuses to start, or runs slower? Maybe it's an x4 slot and therefore only has 25W available? https://en.wikipedia.org/wiki/PCI_Express#Power
ID: 95148
Peter Hucker
Joined: 6 Oct 06
Posts: 1088
United Kingdom
Message 95149 - Posted: 15 Jan 2020, 18:56:17 UTC - in response to Message 95147.  
Last modified: 15 Jan 2020, 18:58:07 UTC

actually he can't do that at all. even if he wanted to.

the topmost 1x slot is immediately obstructed aft of the slot by the north bridge heat sink.


Are you sure you mean aft? I call the back (aft) of the motherboard the side with the sockets on it, that faces the back of the tower case.

the lower 1x slot is obstructed a little further out by the BIOS battery. he would not be able to fit a 16x card here, maybe enough room for an 8x device though.


Not sure about the battery, it might be low enough, I can't see at the moment as the GPU is in the slot above it.

still poor option to try to use these slots for anything. 1x lane at PCIe 1.0 speeds is just too little bandwidth to be useful on BOINC projects.

stick to the two 16x slots only.


I will experiment when I get more cards in the future - might as well try. Some projects like Milkyway run almost everything on the GPU. Einstein would probably be hindered by x1 though. If they're both too slow, I'll get another motherboard for more GPUs.

Isn't it possible to multiplex a 16x slot to put more cards in it? That's what those 1x-to-four-16x risers do.

I can't believe Asus ever designed a board this badly! It wasn't my board, I was given it+CPU+RAM as payment for installing a new one of each in a friend's computer.

ID: 95149
Ian&Steve C.
Joined: 24 Dec 19
Posts: 151
United States
Message 95151 - Posted: 15 Jan 2020, 19:22:33 UTC - in response to Message 95149.  
Last modified: 15 Jan 2020, 19:33:45 UTC

Are you sure you mean aft? I call the back (aft) of the motherboard the side with the sockets on it, that faces the back of the tower case.
I guess that depends on your frame of reference, but the picture is self-explanatory: you cannot cut the end of the slot to fit a 16x card, as the heatsink is still in the way.


Not sure about the battery, it might be low enough, I can't see at the moment as the GPU is in the slot above it.
trust me, it's not low enough, especially with the way the battery retention clip is oriented (that part sticks up higher).


I will experiment when I get more cards in the future - might aswell try. Some projects like Milkyway run almost everything on the GPU. Einstein would probably be hindered by 1x though. If they're both too slow, I'll get another motherboard for more GPUs.
I can confidently say that it will greatly slow down your processing times. feel free to try, but when you compare it to the times with the same card in the 16x slot you'll understand.

the bandwidth is crazy low. just 1/32 (3.125%) of the total bandwidth of the 16x 2.0 slots.
PCIe 1x 1.0 = 250MB/s (this is what you have)
PCIe 1x 2.0 = 500MB/s
PCIe 1x 3.0 = 1000MB/s

so your 16x 2.0 slots are capable of 8000MB/s.
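Those figures follow a simple pattern - roughly 250 MB/s per lane at PCIe 1.0, doubling with each generation, times the lane count. A quick sanity-check sketch (Python, illustrative only; real usable throughput is slightly lower due to encoding overhead):

```python
# Approximate per-direction PCIe bandwidth, using the figures quoted above:
# ~250 MB/s per lane at Gen 1.0, doubling with each generation.
def pcie_bandwidth_mb_s(lanes: int, gen: int) -> int:
    """Rough bandwidth in MB/s for `lanes` lanes of PCIe generation `gen`."""
    per_lane = 250 * 2 ** (gen - 1)  # Gen 1 -> 250, Gen 2 -> 500, Gen 3 -> 1000
    return lanes * per_lane

print(pcie_bandwidth_mb_s(1, 1))   # the 1x 1.0 slots in question: 250
print(pcie_bandwidth_mb_s(16, 2))  # the 16x 2.0 slots: 8000
# ratio: 250 / 8000 = 1/32 = 3.125%
```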

SETI runs OK at PCIe 1x 3.0 (1000MB/s) and I consider SETI to be one of the Projects on the lower end of PCIe bandwidth requirements. I truly believe you're going to have a bad time trying to use the 1x 1.0 ports even if you ever get a card recognized in it. I think your only shot at getting a GPU recognized will be a USB style riser that has no power draw from the slot.


Isn't it possible to multiplex a 16x slot to put more cards in it? That's what those one 1x to four 16x risers do.
there are two ways to do this.

Bifurcation: which allows the lanes of one slot to be broken up across multiple devices; for example, breaking a single 16x slot into two 8x, or four 4x. This needs to be supported at the hardware level AND the BIOS level. If the hardware can do it but the BIOS can't, it won't work; if the BIOS can do it but the hardware can't, it won't work. This feature is typically only seen on enterprise grade equipment. The only manufacturer I've seen enable this feature on their consumer level products is ASRock.

PLX switch (multiplexing): this is accomplished with an add-on board like the one you mentioned. This requires specific hardware, so if your motherboard was not built with embedded PLX chips (and most are NOT) then you need an add-on board. Keep in mind that the 4-in-1 board you linked only switches a SINGLE lane, so the other 15 lanes are not being utilized at all. You are now switching 4 devices on a single PCIe 2.0 lane with only 500MB/s total bandwidth to share.
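The practical difference between the two approaches can be sketched numerically (Python, illustrative only, using the figures above):

```python
# Bifurcation: the slot's lanes are divided, so each device keeps dedicated lanes.
PCIE2_LANE_MB_S = 500                          # one PCIe 2.0 lane, as above
lanes_each = 16 // 2                           # a 16x slot split into two 8x
bw_bifurcated = lanes_each * PCIE2_LANE_MB_S   # 4000 MB/s per device, dedicated

# PLX switching over a single lane: every device shares that one lane's bandwidth.
devices = 4                                # the 4-in-1 board mentioned above
bw_switched = PCIE2_LANE_MB_S / devices    # 125 MB/s each when all are busy

print(bw_bifurcated, bw_switched)
```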


I can't believe Asus ever designed a board this badly! It wasn't my board, I was given it+CPU+RAM as payment for installing a new one of each in a friend's computer.
to be fair you're trying to use the board in a way it was never designed for, combined with the fact that this generation of hardware is about 12 years old now - it's ancient in the world of PC hardware. For multi GPU setups, the motherboard needs to be able to map all the memory of the GPUs, and if the motherboard has insufficient resources to do this, it will not work. This is why newer motherboards have a BIOS setting called "Above 4G decoding" to allow memory mapping of larger amounts of memory; this is pivotal to getting multi GPU setups to work.
ID: 95151
Peter Hucker
Joined: 6 Oct 06
Posts: 1088
United Kingdom
Message 95152 - Posted: 15 Jan 2020, 21:11:33 UTC - in response to Message 95151.  
Last modified: 15 Jan 2020, 22:01:02 UTC

I can confidently say that it will greatly slow down your processing times. feel free to try, but when you compare it to the times with the same card in the 16x slot you'll understand.

the bandwidth is crazy low. just 1/32 (3.125%) of the total bandwidth of the 16x 2.0 slots.
PCIe 1x 1.0 = 250MB/s (this is what you have)
PCIe 1x 2.0 = 500MB/s
PCIe 1x 3.0 = 1000MB/s

so your 16x 2.0 slots are capable of 8000MB/s.

SETI runs OK at PCIe 1x 3.0 (1000MB/s) and I consider SETI to be one of the Projects on the lower end of PCIe bandwidth requirements. I truly believe you're going to have a bad time trying to use the 1x 1.0 ports even if you ever get a card recognized in it. I think your only shot at getting a GPU recognized will be a USB style riser that has no power draw from the slot.


What if I (as I already do to a certain extent) run several tasks on each GPU? Do you know what the continuous average data rate is for GPU projects? Running several tasks on one GPU would get rid of problems with peak transfer rate, just as at the moment I use it to get around the slow CPU.

to be fair you're trying to use the board in a way it was never designed for. combined with the fact that this generation of hardware is about 12 years old now. it's ancient in the world of PC hardware.


But it has two x16 slots - clearly designed for two graphics cards. Nobody had two graphics cards (even back then) unless they were beefy double-width ones, which can't fit bang up against each other.

for multi GPU setups, the motherboard needs to be able to map all the memory of the GPU, and if the motherboard has insufficient resources to do this, it will not work. this is why you have a BIOS setting on newer motherboards called "Above 4G decoding" to allow memory mapping of larger amounts of memory and this is pivotal to getting multi GPU setups to work.


I had to do that to get the second one to be recognised. Although I didn't use the BIOS setting - I changed a Windows registry entry which does the same thing; presumably only card 1 works until Windows is loaded. I got the setting from a mining page.
ID: 95152
Peter Hucker
Joined: 6 Oct 06
Posts: 1088
United Kingdom
Message 95159 - Posted: 15 Jan 2020, 22:15:37 UTC - in response to Message 95151.  
Last modified: 15 Jan 2020, 22:15:47 UTC

combined with the fact that this generation of hardware is about 12 years old now. it's ancient in the world of PC hardware.


Hey, it's got solid aluminium capacitors! Remember the old electrolytic ones that burst when they got hot or old? They still use those horrid things in TVs for some reason (I've recently repaired my dad's and my neighbour's TVs, which just died but only needed new caps inside). I looked them up: they're only fractionally cheaper, so there's no point in decreasing reliability for 20% off a component.
ID: 95159
robsmith
Volunteer tester
Help desk expert
Joined: 25 May 09
Posts: 952
United Kingdom
Message 95160 - Posted: 15 Jan 2020, 22:23:19 UTC

What about if I (as I do already to a certain extent), run several tasks on each GPU? Do you know what the continuous average data rate is for GPU projects? Running several tasks on one GPU would get rid of problems with peak transfer rate, just as at the moment I use it to get around the slow CPU.


That all depends on a whole pile of factors - some of which are: The GPU in question; The application(s) involved; The number of tasks/applications being run; The motherboard; The CPU; The motherboard RAM. I have no doubt that others can add to the list.

Let's go a bit deeper into the list.
The GPU - obviously, the more powerful the GPU, the more likely it is to be able to run multiple tasks.
The application - a bit less obvious, but some applications have been designed to saturate one of the GPU's resources, so trying to run two of them will cause quite a substantial amount of bus traffic. There are undoubtedly pairs of applications that are just so incompatible with each other that they will simply refuse to play ball.
The motherboard - fairly obvious: if this has a very poor PCIe bus, or another data bottleneck in the path between the CPU and the GPU, then loading it more won't exactly help the overall performance.
The CPU - simply, some CPUs are better at supporting the demands of GPUs than others; for example, the AMD FX family is pretty bad compared with its contemporary Intel equivalents.
Motherboard RAM - is there enough of it? Particularly important where the GPU shares RAM space with the CPU.

In truth the only way one finds out is to try it and see - start with a low multiple and don't be surprised at the result (in either direction). Allow plenty of time (days or even weeks) for a stable situation to be reached, and obviously be prepared to abandon any combinations or multiples that cause lots of errors to be generated.

Your second point sounds very much like a hardware/BIOS issue with your motherboard; some combinations will actually allow you to see a display from both GPUs during boot, others won't. When you've got two GPUs installed, one way to find out if there is a slot dependency is to have a single monitor and see what happens during boot, trying both GPUs in turn; you may find that either GPU can be used for boot purposes, or only one of them. The weirdest one I had was a system with three GPUs where I could use any of them as the "boot GPU", but when I took one of the GPUs out, only the one in the slot nearest the CPU could be used. As you can imagine, it was "quite interesting" when the two GPUs were in the slots further away.
ID: 95160
Richard Haselgrove
Volunteer tester
Help desk expert
Joined: 5 Oct 06
Posts: 4414
United Kingdom
Message 95165 - Posted: 15 Jan 2020, 23:10:06 UTC - in response to Message 95160.  

Another bullet point: the science.

Look at Einstein's Gravity Wave application (now available for GPU). I downloaded 130 MB of data this morning - all of that has to get to the GPU somehow. Compare with a maths app: no data, just a parameter set. Is number X prime, or does number Y disprove the Collatz Conjecture? They place different demands on the GPU.
ID: 95165
ProDigit
Joined: 8 Nov 19
Posts: 615
United States
Message 95169 - Posted: 15 Jan 2020, 23:29:41 UTC - in response to Message 95141.  
Last modified: 15 Jan 2020, 23:58:36 UTC

That may require some nifty work with - probably not - a hacksaw, or a fine cutting tool like a Dremel. By opening the end of the PCIe x1 slot, the x16 riser could be physically inserted into the x1 slot. He'd still have to watch out for power consumption: the sense pin should tell the motherboard that a card is present. But he would still have to investigate and manage the card's actual power draw from each input.

It should be possible: his cards have nominal power inputs for 375W (75W from the PCIe slot, plus 2x 150W 8-pin supplementary inputs). The cards he has are rated at 250W average total board power. So there's headroom - it's just a question which input has the spare capacity, and that depends on the manufacturer.

To my mind, fitting the dual 8-pin suggests that the bulk of the power will be taken from them: if the full 75W was taken from the motherboard, they could have got away with 1 8-pin and 1 6-pin. But I am not a circuitry designer: it's all supposition, and it might still fry the motherboard. Proceed with extreme caution, and keep a fire extinguisher close at hand.


I was referring to the OP, who said he had 2 full size slots, one of which couldn't be used due to not enough clearance for the GPU to breathe.
A PCIe 16x length cable can offer 16x/8x/4x speeds, while still offering sufficient clearance for cooling.

Once all full size slots are occupied, you can start with the PCIe 1x slots, which obviously need a 1x riser.
1x ribbon risers are less flexible than USB risers, in that they often offer inferior power distribution (often a cap soldered on a few legs on the GPU female slot side, and a dual wire to a 4-pin HDD/fan connector).
Not recommended.
The USB risers have better capacitors and better power distribution too, and they don't use (or use less of) any caps on the motherboard.


Most motherboards will allow 2 or 3 GPUs on the full size slots, and 1 or 2 GPUs on the PCIe 1x ports.
In my case, I was lucky enough to find a board that supported 4 GPUs (1 on a PCIe 1x slot), and the fifth one fitted on an M.2 to PCIe 4x riser, then from a PCIe 4x to 16x ribbon cable to the GPU,
running in 4x/4x/4x/1x + 4x mode.
PCIe 3.0 4x is good enough for RTX 2080 Tis or RTX Titans.
PCIe 3.0 1x is good enough for RTX 2060/2060 Super GPUs
(both in Linux).
Any lower and there'd be a performance tradeoff, which is why I would recommend using up as many full size slots on the mobo first.




I have received the first of my adapters - the 16x to 16x ribbon - but it's advertised as PCI Express 1.0 (unshielded); it uses a ribbon similar to an IDE cable. I've ordered a shielded one that says PCI Express 3.0. The 1.0 ribbon does seem to be working though: 2x Milkyway on each of 2 cards, or 2x Einstein Gamma on each of 2 cards, no crashing :-)

For 6" to 1ft you don't need shielding on PCIE 3.0.
It only matters on PCIE 4.0, or longer cables.

If you're worried about signal integrity, the dual ribbon cables are pretty good!
The worst part is when one or more conductors don't connect; with a double ribbon the chances of that happening are much lower.
They're made for more permanent setups, not ones where the GPU will be unplugged or swapped on a regular basis.
ID: 95169
Peter Hucker
Joined: 6 Oct 06
Posts: 1088
United Kingdom
Message 95172 - Posted: 15 Jan 2020, 23:44:12 UTC - in response to Message 95165.  

Another bullet point: the science.

Look at Einstein's Gravity Wave application (now available for GPU). I downloaded 130 MB of data this morning - all of that has to get to the GPU somehow. Compare with a maths app: no data, just a parameter set. Is number X prime, or does number Y disprove the Collatz Conjecture? They place different demands on the GPU.


Yip, I'm sure I'd find a project somewhere like Collatz that could run without a lot of PCI Express bandwidth required. I do of course have favourite projects which I think are more worthwhile, but the key thing is I like all parts of the system to be running flat out 24/7, so I'll change project if it works better on that hardware.
ID: 95172
Peter Hucker
Joined: 6 Oct 06
Posts: 1088
United Kingdom
Message 95173 - Posted: 15 Jan 2020, 23:50:24 UTC - in response to Message 95169.  

I was referring to the OP, who said he had 2 full size slots, one of which couldn't be used due to not enough clearance for the GPU to breathe.
A PCIe 16x length cable can offer 16x/8x/4x speeds, while still offering sufficient clearance for cooling.

Once all full size slots are occupied, you can start with the PCIe 1x slots, which obviously need a 1x riser.
1x ribbon risers are less flexible than USB risers, in that they often offer inferior power distribution (often a cap soldered on a few legs on the GPU pins, and a dual wire to a 4-pin HDD/fan connector).
Not recommended.
The USB risers have better capacitors and better power distribution too, and they don't use (or use less of) any caps on the motherboard.


Yip, the soldered-on cap was what I was trying earlier - 1x to 16x, no USB, just a ribbon. Still waiting for the USB version, which should arrive tomorrow afternoon. The 4-way USB thing was damaged before posting, so the seller cancelled; I now have to wait a week to get one from someone else. That's not needed until I get more GPUs anyway - I just wanted to know if it would work, and what the bandwidth would be like. I like fiddling :-)

For 6" to 1ft you don't need shielding on PCIE 3.0.
It only matters on PCIE 4.0, or longer cables.


Thanks for that info. The ribbon on this one is 6 inches (a rough guess comparing it with something a bit longer - from memory before you ask). I'll not bother fitting the 3.0 version when it arrives then. But probably best to buy those in future for faster stuff (they cost about 8 quid instead of 6 in the UK). It's strange that they only said it was 1.0 compliant though.
ID: 95173
ProDigit
Joined: 8 Nov 19
Posts: 615
United States
Message 95175 - Posted: 16 Jan 2020, 0:05:48 UTC - in response to Message 95173.  
Last modified: 16 Jan 2020, 0:10:42 UTC



Thanks for that info. The ribbon on this one is 6 inches (a rough guess comparing it with something a bit longer - from memory before you ask). I'll not bother fitting the 3.0 version when it arrives then. But probably best to buy those in future for faster stuff (they cost about 8 quid instead of 6 in the UK). It's strange that they only said it was 1.0 compliant though.

They may be single ribbon cables instead of double.
If they're single ribbon cables, then as long as they don't have any broken leads they should be good for PCIe 3.0 speeds as well as lower speeds.
Since there aren't too many PCIe 4.0 devices out, I wouldn't know about PCIe 4.0.
Considering that they're just leads, like the traces built into the motherboard, they should work just fine.

PCIe 3.0 is not happy with 2x PCIe riser cables connected in series, which means 1ft of cable might be susceptible to signal interference. But I never had any issues on 6"-1ft risers.
The USB versions are more expensive, but I'd really recommend them on PCIe 1x slots (in case you'd want to run 3 or 4 GPUs).

With USB risers, know that the SATA connector versions can only handle 2 risers per cable; but one is preferred.
The Risers use up ~35W of power, and a SATA lead from the PSU can only handle up to 75Watts.
I've had situations where GPUs would fail or error, because of occasional lack of power.
Once I provided 1 lead for each riser, the issue went away.
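A back-of-envelope check of that power budget, using the ~35W-per-riser and 75W-per-SATA-lead figures quoted above (Python, illustrative only):

```python
SATA_LEAD_W = 75   # rated capacity of one SATA power lead (figure from the post)
RISER_W = 35       # rough draw through one powered riser (figure from the post)

max_risers = SATA_LEAD_W // RISER_W   # how many risers fit within the rating
print(max_risers)                     # 2
print(2 * RISER_W, 3 * RISER_W)       # 70W is within spec; 105W overloads the lead
```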
ID: 95175
Peter Hucker
Joined: 6 Oct 06
Posts: 1088
United Kingdom
Message 95176 - Posted: 16 Jan 2020, 0:18:49 UTC - in response to Message 95175.  


They may be single ribbon cables instead of double.


The cheaper one I have here (the 1.0 version) is a double ribbon. It looks like two IDE ribbons.

if they're single ribbon cables, as long as they don't have any broken leads, should be good for PCIE 3.0 speeds as well as lower speeds.


You're confusing me now. Surely you have to have double or there won't be enough connections made?

PCIe 3.0 is not happy with 2x PCIe riser cables connected in series, which means 1ft of cable might be susceptible to signal interference. But I never had any issues on 6"-1ft risers.
The USB versions are more expensive, but I'd really recommend them on PCIe 1x slots (in case you'd want to run 3 or 4 GPUs).


Yip, I will always buy the best in future. I had difficulty finding anything at all before, which is why I ended up with crap. I've noted down the key word to use in an eBay search: "riser" - you get several hundred results, instead of just 30 with something like "PCI Express extension" or "PCI Express adapter". I didn't know they were called risers.

With USB risers, know that the SATA connector versions can only handle 2 risers per cable; but one is preferred.
The Risers use up ~35W of power, and a SATA lead from the PSU can only handle up to 75Watts.
I've had situations where GPUs would fail or error, because of occasional lack of power.
Once I provided 1 lead for each riser, the issue went away.


Depends on the PSU. Corsair PSUs have much thicker cabling than crap like CIT and Alpine.
ID: 95176
ProDigit
Joined: 8 Nov 19
Posts: 615
United States
Message 95177 - Posted: 16 Jan 2020, 2:04:04 UTC - in response to Message 95176.  
Last modified: 16 Jan 2020, 2:21:43 UTC


You're confusing me now. Surely you have to have double or there won't be enough connections made?

.......

Depends on the PSU. Corsair PSUs have much thicker cabling than crap like CIT and Alpine.


The double PCIe ribbon cables are just doubles of their counterpart; they're not twice the number of connections.
The two ribbons are soldered in parallel, meaning 2 wires carry 1 signal.
You could easily cut one of the ribbon cables and still have a fully working ribbon riser cable.
IMO the only real reason they are doubled is for the power provision.
PCIe 1x provides 25 watts, PCIe 4x provides 35W, and PCIe 8x and 16x need to provide 75 watts.
They only have a few wires that carry the +12V rail, so doubling them up is wise.

The spec for SATA power connectors is 75W, and is pretty much universal.
The riser boards you have have 6-pin connectors on them.
6- or 8-pin connectors also have a 75W maximum carrying capacity (the 2 additional wires on the 8-pin are ground wires, which aren't used by the riser board).
So if you have 2x 6-pin, or 2x 8-pin, connectors on a lead from the PSU for the risers, they're only rated at 75W.
If you had a way to swap the last 2 ground wires of the 8-pin with some ground wires on the 6-pin connector, you could easily run more than 2 riser boards per cable (8-pin with ground is rated at 150W).

You can also connect 3 riser boards to 2 GPU cables by using a 2x 6-pin to 1x 6-pin splitter, joining the power of 2 leads together.
It will balance out power fluctuations.

Even I had stability issues with EVGA and Corsair Gold 1000W PSUs, providing a mere 500-700W per system.
ID: 95177
robsmith
Volunteer tester
Help desk expert
Joined: 25 May 09
Posts: 952
United Kingdom
Message 95182 - Posted: 16 Jan 2020, 8:10:08 UTC

There are several configurations of "two ribbon" cable risers.
Those where each side of the connector is connected to its own cable. These give a bit of signal segregation, so may be less noisy than just a single cable, but if not done properly could cause timing errors.
Those which double up the power lines. A good idea to have a bit more copper available for the power.
Those which use both cables for all signals. No real advantage over just doubling up on the power lines, but may cause some signal timing errors.

Given that 6 inches is about a quarter-wavelength at the frequencies these buses run at, the use of unscreened cables may be OK on some motherboards, but may be a disaster on others. It is always best to use screened cables. (Also, the screened cables I've seen have tended to be of far better manufacturing quality than unscreened ones.)
ID: 95182
Richard Haselgrove
Volunteer tester
Help desk expert
Joined: 5 Oct 06
Posts: 4414
United Kingdom
Message 95188 - Posted: 16 Jan 2020, 9:33:04 UTC - in response to Message 95182.  

I'm of an age to remember when we went through a similar transition for IDE hard drives. The actual pinout required a 40 conductor cable, but the fastest motherboards and drives switched to using 80 conductors in the same form factor - thinner wires, so the ribbon cable felt much smoother to the touch. I don't know how the 80 conductor cables were wired - they used the same 40-pin connectors. They might have been two wires per signal, or they might have been grounded guard wires between each signal wire. Rob, any idea?
ID: 95188
Dave
Joined: 28 Jun 10
Posts: 1268
United Kingdom
Message 95190 - Posted: 16 Jan 2020, 9:40:35 UTC - in response to Message 95188.  

I'm of an age to remember when we went through a similar transition for IDE hard drives. The actual pinout required a 40 conductor cable, but the fastest motherboards and drives switched to using 80 conductors in the same form factor - thinner wires, so the ribbon cable felt much smoother to the touch. I don't know how the 80 conductor cables were wired - they used the same 40-pin connectors. They might have been two wires per signal, or they might have been grounded guard wires between each signal wire. Rob, any idea?


Just did a search to check my memory was correct. (The chips are getting a bit old)

An 80-conductor IDE cable has 40 pins and 80 wires – 40 wires are for communication and data, the other 40 are ground wires to reduce crosstalk on the cable.
ID: 95190
Richard Haselgrove
Volunteer tester
Help desk expert
Joined: 5 Oct 06
Posts: 4414
United Kingdom
Message 95192 - Posted: 16 Jan 2020, 10:07:50 UTC - in response to Message 95190.  
Last modified: 16 Jan 2020, 10:56:31 UTC

Just checked in my parts bin. An IDE cable is 2 inches across (like all electronics specified in the USA, I'm sure it's non-metric). So the wire pitch in a 40 conductor cable is 0.05 inches, for 80 conductors it's 0.025 inches. The difference is clear to the naked eye.

Seems like the guard wire technology came in with the ATA-66 specification around 2000.
ID: 95192
Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 14560
Netherlands
Message 95193 - Posted: 16 Jan 2020, 10:59:36 UTC - in response to Message 95175.  

Since there aren't too many PCIE 4.0 devices out...
But there are all the AMD GPUs available now - the RX 5500, 5600 and 5700 series. Intel added support to its Optane SSDs. ASRock and Asus have launched their own versions of M.2 cards in which you can slot up to 4 M.2 SSDs. There's enough PCIe 4.0 around, if only you care to look.
ID: 95193
Richard Haselgrove
Volunteer tester
Help desk expert
Joined: 5 Oct 06
Posts: 4414
United Kingdom
Message 95195 - Posted: 16 Jan 2020, 11:08:15 UTC - in response to Message 95193.  

Since there aren't too many PCIE 4.0 devices out...
But for all AMD GPUs available now, RX 5500, 5600 and 5700 series. Intel added support to its Optane SSDs. Asrock and Asus have launched own versions of M.2 cards in which you can slot up to 4 M.2 SSDs. There's enough PCIe 4.0 around, if only you care to look.
That sounds like the sort of device I helped Eric install in Muarae2 in the summer - 4 M.2 SSDs on a single PCIe card, lying flat in a 1U server case. Very fiddly. I doubt that would work on the end of a riser cable. Mind you, I think Eric is still having difficulty getting it to work in the datacenter.
ID: 95195
Peter Hucker
Joined: 6 Oct 06
Posts: 1088
United Kingdom
Message 95207 - Posted: 16 Jan 2020, 18:36:52 UTC - in response to Message 95177.  
Last modified: 16 Jan 2020, 18:39:34 UTC

The double PCIE ribbon cables are just doubles of their counterpart. They're not twice the amount of connections.


Wow, I managed to be out by a factor of 2 when making a rough guess counting the ribbon wires and the PCI Express pins. I did it again using the tip of a pencil to count them properly, and found you are correct :-)

6 or 8 pin connectors also have 75W maximum carrying capacity. (the 2 additional wires on 8 pins are ground wires, which aren't used by the riser board).


I always wondered about that - how the 8-pin ones are quoted as 150W, double the power, yet all they add is more ground pins!

So if you have 2x 6pin, or 2x 8 pin connectors on a lead from the PSU, for the risers, they're only rated at 75W.
If you had a way to swap out the last 2 ground wires of the 8 pin, with some ground wires on the 6pin connector, you could easily run more than 2 riser boards per cable (8 pin with ground is rated at 150W).

You can also connect 3 riser boards to 2 GPU cables, by using 2x6Pin to 1x 6 pin splitter, joining the power of 2 leads together.
It will balance out power fluctuations.


Extra power supplies are not a problem, I have 6 spare PSUs rated at 625W on 12V. They used to be used for bitcoin mining on very similar GPUs to what I'm using now for BOINC. When I gave up the pointless, unprofitable exercise of bitcoin mining and sold the GPUs, I kept the power supplies. Plus Corsairs with an even higher output cost pennies.

Even I had stability issues with EVGA and Corsair Gold 1000W PSUs, providing a mere 500-700W per system.


I used to use Alpine supplies, which went BANG after 2 weeks of continuous use within spec! I changed to CIT, which will last for years within their specs, but their voltage output isn't great and their fans are quite loud at full load. One of the two I'm using now on the twin GPU setup in question provides 11V no load and 10.5V full load! The other is 12V no load, 11.5V full load. The GPUs don't care - the onboard VRM converts anything into 1V for the chip. The only slight downside is the fans run a little slower at 10.5V, but now I've shifted everything around with plenty of airflow the fans don't need to run full speed, and I've replaced the two with dodgy bearings that were noisy and sticking a little.
ID: 95207

Copyright © 2021 University of California. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.