PCI express risers to use multiple GPUs on one motherboard - not detecting card?

Ian&Steve C.

Joined: 24 Dec 19
Posts: 228
United States
Message 96275 - Posted: 2 Mar 2020, 19:27:17 UTC - in response to Message 96257.  
Last modified: 2 Mar 2020, 19:33:30 UTC

All (note *ALL*) RTX GPUs run stable at 15GBPS.


It's amazing that you have been able to test every RTX card that has been produced. /sarcasm

I'm sure MOST might handle that overclock, but that doesn't mean that ALL can. The stock speed for all but the 2080 Super is 14 Gbps; the 2080 Super has memory binned/cherry-picked to handle the increased speed.

And it certainly depends on the task being run. +1000 MHz off the base 14,000 is a pretty hefty jump.

I also disagree that power limiting somehow "turns off unused modules"; that's not how this works. GPU memory modules are individually clocked at only 1750 MHz; they reach the higher "effective" speeds by reading from all modules simultaneously. They don't fill up one by one: no matter how much or how little data is being read from VRAM, it gets distributed across all modules, and all of them are in use at all times.
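As a side note, here is a quick arithmetic sketch (not from the thread) of how the per-module clock relates to the quoted "effective" rate and to total bandwidth; the 256-bit bus width is the RTX 2080's, and the numbers are only illustrative:

```python
# Back-of-envelope: how a 1750 MHz GDDR6 memory clock becomes "14 Gbps",
# and what that means for total bandwidth on a 256-bit bus (RTX 2080).
module_clock_mhz = 1750      # reported per-module memory clock
transfers_per_clock = 8      # GDDR6 effective transfers per reported clock
bus_width_bits = 256         # RTX 2080 memory bus width

effective_gbps = module_clock_mhz * transfers_per_clock / 1000
bandwidth_gb_s = effective_gbps * bus_width_bits / 8

print(f"effective data rate: {effective_gbps:.0f} Gbps per pin")   # 14
print(f"total bandwidth:     {bandwidth_gb_s:.0f} GB/s")           # 448
print(f"15 Gbps overclock is {15 / effective_gbps - 1:.1%} above stock")
```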
Ian&Steve C.

Joined: 24 Dec 19
Posts: 228
United States
Message 96279 - Posted: 2 Mar 2020, 20:23:55 UTC - in response to Message 96277.  

There is variance in manufacturing tolerances, which is why some chips can do higher clocks at lower voltages and others can't. Not every piece of silicon (CPU, GPU, or memory) has exactly the same limits. Manufacturers select the rated speeds based on what will be safe for nearly all products; anything that doesn't make the cut gets binned into a lower-speed product or thrown away/recycled/whatever they do with it.

If you are willing to put in the time and effort to find a stable overclock/undervolt, it can be beneficial and you can get some gains.

Of course a stock card at stock speeds should always be stable, so there's nothing wrong with leaving it there. As I said before, the only reason Nvidia underclocks your memory in compute loads is greed, to push you toward a higher-priced Quadro card; that performance limiter has nothing to do with stability. I only overclock the memory to put back what they took away. And I do some very conservative core overclocking plus power limiting that I've proven stable (the GPUs can run for months without me touching them, with no downtime) for the increased efficiency, since electricity ain't free and I'm using a lot of it: nearly 5000W, 24/7/365, across my 3 main systems.
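For reference, on Linux this kind of tuning is usually done through nvidia-smi and nvidia-settings. The sketch below uses illustrative values, not the poster's actual settings; it needs root for the power limit, a running X server with Coolbits enabled for the clock offsets, and the [3] performance-level index can vary by driver generation.

```python
#!/usr/bin/env python3
# Sketch: cap the power limit and add back memory clock lost to the P2
# compute state. All numbers are illustrative, not a recommendation.
import subprocess

GPU = 0
POWER_LIMIT_W = 200   # per-card cap, in the spirit of the build described here
MEM_OFFSET = 1000     # transfer-rate offset (MT/s); the value actually needed varies
CORE_OFFSET = 75      # conservative core offset, purely illustrative

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["nvidia-smi", "-i", str(GPU), "-pm", "1"])                 # persistence mode
run(["nvidia-smi", "-i", str(GPU), "-pl", str(POWER_LIMIT_W)])  # power limit, watts
run(["nvidia-settings", "-a",
     f"[gpu:{GPU}]/GPUMemoryTransferRateOffset[3]={MEM_OFFSET}"])
run(["nvidia-settings", "-a",
     f"[gpu:{GPU}]/GPUGraphicsClockOffset[3]={CORE_OFFSET}"])
```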

My watercooled 7x2080 system is power limited for two reasons: 1) to stay well below the limit of the single EVGA 1600W Platinum PSU, and 2) to keep temps reasonable, since the heat is being radiated via a single 9x120mm radiator. 1400W is a lot of heat to dump from a single radiator; even though this one is large, 1400W is still pushing it.
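A quick sanity check on that budget (the 215 W stock board power is an assumed reference-class RTX 2080 figure; the 200 W cap appears later in the thread):

```python
# Compare seven cards at stock board power vs. a 200 W cap against a
# 1600 W PSU. Stock 215 W is an assumption for a reference RTX 2080.
n_gpus, psu_w = 7, 1600
for label, per_card_w in [("stock", 215), ("capped", 200)]:
    total = n_gpus * per_card_w
    print(f"{label:>6}: {total} W for GPUs alone = {total / psu_w:.0%} of the PSU")
# CPU, pumps and fans still have to fit in whatever is left, which is
# why the per-card limit (and the radiator) drive the whole design.
```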
Jord
Volunteer tester
Help desk expert

Joined: 29 Aug 05
Posts: 15477
Netherlands
Message 96281 - Posted: 2 Mar 2020, 20:48:53 UTC - in response to Message 96280.  

As for radiating heat, I heard once that someone had a very good result by simply using a central heating radiator and pumping the water through that. For those of you in more modern societies than the UK who have proper heat pumps, this is what I mean by a central heating radiator. Yes, we still use them in the UK to heat our homes, by actually pumping water through the house from a central boiler (furnace). In the 21st century!
Linus of LTT tried that, https://www.youtube.com/watch?v=1WLIm4XLPAE. The outcome wasn't pretty. Perhaps if you try this, don't use a used radiator!
ProDigit

Joined: 8 Nov 19
Posts: 718
United States
Message 96282 - Posted: 2 Mar 2020, 20:58:48 UTC - in response to Message 96275.  
Last modified: 2 Mar 2020, 21:02:19 UTC

All (note *ALL*) RTX GPUs run stable at 15GBPS.


It's amazing that you have been able to test every RTX card that has been produced. /sarcasm

I'm sure MOST might handle that overclock, but that doesn't mean that ALL can. The stock speed for all but the 2080 Super is 14 Gbps; the 2080 Super has memory binned/cherry-picked to handle the increased speed.

And it certainly depends on the task being run. +1000 MHz off the base 14,000 is a pretty hefty jump.

I also disagree that power limiting somehow "turns off unused modules"; that's not how this works. GPU memory modules are individually clocked at only 1750 MHz; they reach the higher "effective" speeds by reading from all modules simultaneously. They don't fill up one by one: no matter how much or how little data is being read from VRAM, it gets distributed across all modules, and all of them are in use at all times.


It's your right not to believe things, but I've had all RTX GPUs (save for a 2080 Super). The brand makes less of a difference in terms of performance or overclockability, as they all get their boards from Nvidia, just differently binned.
Meaning ASUS, MSI, EVGA, etc. all have about the same max GPU frequency, depending on what bin they're getting their chips from (nowadays all are A+ or A++ something).
The only differences are in what ports, how many fans, how efficient the cooling is, and so on.
So yes, I did own all 6 RTX models released by Nvidia, spread out over all the brands. While I currently run 5 to 6 RTX GPUs and own 15, I've owned a total of 25 RTX GPUs (counting the working ones, broken ones/returns, and DOAs).
ProDigit

Joined: 8 Nov 19
Posts: 718
United States
Message 96284 - Posted: 2 Mar 2020, 21:01:28 UTC - in response to Message 96281.  
Last modified: 2 Mar 2020, 21:03:39 UTC

As for radiating heat, I heard once that someone had a very good result by simply using a central heating radiator and pumping the water through that. For those of you in more modern societies than the UK who have proper heat pumps, this is what I mean by a central heating radiator. Yes, we still use them in the UK to heat our homes, by actually pumping water through the house from a central boiler (furnace). In the 21st century!
Linus of LTT tried that, https://www.youtube.com/watch?v=1WLIm4XLPAE. The outcome wasn't pretty. Perhaps if you try this, don't use a used radiator!

It would be interesting to see, though I can't see how it would work effectively, seeing that you'd have to feed all GPUs in parallel (meaning you'd need several pumps).
With serial feeding you can get by with one pump, but at 250+W per card, going from one GPU to a second to a third, the third GPU will have a hard time staying cool enough.

You could try to run the whole system on oil, because radiators are bound to rust and generate air.
Ian&Steve C.

Joined: 24 Dec 19
Posts: 228
United States
Message 96291 - Posted: 2 Mar 2020, 23:35:50 UTC - in response to Message 96282.  

It's your right not to believe things, but I've had all RTX GPUs (save for a 2080 Super). The brand makes less of a difference in terms of performance or overclockability, as they all get their boards from Nvidia, just differently binned.
Meaning ASUS, MSI, EVGA, etc. all have about the same max GPU frequency, depending on what bin they're getting their chips from (nowadays all are A+ or A++ something).
The only differences are in what ports, how many fans, how efficient the cooling is, and so on.
So yes, I did own all 6 RTX models released by Nvidia, spread out over all the brands. While I currently run 5 to 6 RTX GPUs and own 15, I've owned a total of 25 RTX GPUs (counting the working ones, broken ones/returns, and DOAs).


This only applies to the cards you have or have had in your hands, certainly not ALL cards ever, and not all use cases either.

Your gross simplification would be akin to saying something like "ALL CPUs can overclock to 5.3 GHz because all of the ones I have did." It's simply not true in all cases.
Ian&Steve C.

Joined: 24 Dec 19
Posts: 228
United States
Message 96292 - Posted: 2 Mar 2020, 23:51:23 UTC - in response to Message 96284.  

As for radiating heat, I heard once that someone had a very good result by simply using a central heating radiator and pumping the water through that. For those of you in more modern societies than the UK who have proper heat pumps, this is what I mean by a central heating radiator. Yes, we still use them in the UK to heat our homes, by actually pumping water through the house from a central boiler (furnace). In the 21st century!
Linus of LTT tried that, https://www.youtube.com/watch?v=1WLIm4XLPAE. The outcome wasn't pretty. Perhaps if you try this, don't use a used radiator!

It would be interesting to see, though I can't see how it would work effectively, seeing that you'd have to feed all GPUs in parallel (meaning you'd need several pumps).
With serial feeding you can get by with one pump, but at 250+W per card, going from one GPU to a second to a third, the third GPU will have a hard time staying cool enough.

You could try to run the whole system on oil, because radiators are bound to rust and generate air.


The problem with trying to run a computer through a radiator like that is that the computer doesn't generate enough heat to transfer it out effectively.

Heat transfer is driven by the temperature gradient.
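As a toy illustration of that point (every number below is assumed, none comes from the thread): a radiator's output scales roughly with its area times the water-to-air temperature difference, so a loop running barely above room temperature sheds far less heat through the same panel than a hot boiler loop.

```python
# Toy model: radiator heat output ~ U * A * (T_water - T_air).
# U and A are made-up but plausible values for a passive household panel.
U = 10.0       # W/(m^2*K), overall transfer coefficient, assumed
A = 2.0        # m^2 of effective panel area, assumed
T_air = 21.0   # room temperature, C

for label, t_water in [("boiler loop", 70.0), ("computer loop", 31.0)]:
    print(f"{label:>14}: ~{U * A * (t_water - T_air):.0f} W shed")
# ~980 W vs ~200 W: with the small gradient a computer provides, you need
# much more area or forced airflow (fans raise U) to move the same heat.
```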

Your comments about flow and "staying cool enough" are also incorrect: water from the earlier GPUs has little effect on the temps of the components downstream.

I have 7 GPUs in a single system, all of them watercooled. The flow is a hybrid setup: the first 4 GPUs in parallel -> serial into the next bank -> the last 3 GPUs in parallel. On your view, the last 3 GPUs should be much hotter than the first 4, but in reality all the GPUs are about the same temp.
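A rough back-of-envelope check of that claim (the flow rate is an assumption, roughly what a D5 might push through a restrictive multi-block loop; it is not stated in the thread):

```python
# Coolant temperature rise: delta_T = P / (m_dot * c_p).
flow_l_per_min = 5.7         # ~1.5 GPM, assumed
c_p = 4186                   # J/(kg*K) for water
m_dot = flow_l_per_min / 60  # kg/s, since 1 L of water is ~1 kg

def delta_t(power_w):
    return power_w / (m_dot * c_p)

print(f"one 200 W GPU:          +{delta_t(200):.2f} C")   # ~0.5 C
print(f"first bank (4 x 200 W): +{delta_t(800):.2f} C")   # water entering bank 2
print(f"whole loop (1400 W):    +{delta_t(1400):.2f} C")
# The per-GPU rise is a fraction of a degree, and even the bank-to-bank
# difference is small next to the die-to-water delta through the block.
```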


Ian&Steve C.

Joined: 24 Dec 19
Posts: 228
United States
Message 96297 - Posted: 3 Mar 2020, 0:43:47 UTC - in response to Message 96295.  
Last modified: 3 Mar 2020, 0:44:15 UTC

The problem with trying to run a computer through a radiator like that is that the computer doesn't generate enough heat to transfer it out effectively.

Heat transfer is driven by the temperature gradient.


A central heating radiator is huge compared to anything normally designed for computers, so although the gradient is small, the surface area is large.

Or you could do what I did with a heating radiator: add fans. I got three times the heat out of the thing.

Your comments about flow and "staying cool enough" are also incorrect: water from the earlier GPUs has little effect on the temps of the components downstream.

I have 7 GPUs in a single system, all of them watercooled. The flow is a hybrid setup: the first 4 GPUs in parallel -> serial into the next bank -> the last 3 GPUs in parallel. On your view, the last 3 GPUs should be much hotter than the first 4, but in reality all the GPUs are about the same temp.


Impossible, as the second set is receiving water already heated by the first set. Yes, if you have a fast enough flow, the output temperature of the water isn't much higher than the input. The same works with central heating radiators: you can put two in parallel to save on effort/piping, and as long as the water flows fast enough, both will get hot. If it doesn't flow fast enough, the first one dissipates all the heat and the second one is cold.


It's not impossible; look at the numbers I posted.

GPUs 4-5-6 are actually the LAST 3 cards, getting the "hot" water from GPUs 0-1-2-3, yet they are running a few degrees cooler.

GPU temps in this kind of setup are affected far more by larger factors like the thermal transfer from the die to the waterblock. The water itself doesn't have a large gradient across the loop; it probably varies by ~1°C measured at any point in the loop.
Ian&Steve C.

Joined: 24 Dec 19
Posts: 228
United States
Message 96340 - Posted: 3 Mar 2020, 17:59:12 UTC - in response to Message 96338.  
Last modified: 3 Mar 2020, 18:00:06 UTC

I put “hot” in quotes for a reason. The water temp varies very little across the loop.

Are you even looking at the pics? All the cards are right next to each other. They are the only heat-generating components in the loop (minus the negligible heat generated by the pumps). All 7 cards are ASUS Turbo (blower model) RTX 2080s, running at 200W each. This model was chosen since it was the only 2080 I could easily get at a good price that also had single-slot I/O, for use with a single-slot waterblock setup like this. You'll notice that the middle card's power connectors look slightly different; that one came as an "EVO" variant, but it's more or less the same as the others.

It runs 2 D5 water pumps at speed setting 4 (out of 5), for redundancy more than anything else. I could run it on one pump if I wanted.

It's probably down to how efficient each individual chip is, and maybe how good the thermal transfer is from the die to the waterblock.
ProDigit

Joined: 8 Nov 19
Posts: 718
United States
Message 96342 - Posted: 3 Mar 2020, 18:11:09 UTC - in response to Message 96285.  

It's your right not to believe things, but I've had all RTX GPUs (save for a 2080 Super). The brand makes less of a difference in terms of performance or overclockability, as they all get their boards from Nvidia, just differently binned.
Meaning ASUS, MSI, EVGA, etc. all have about the same max GPU frequency, depending on what bin they're getting their chips from (nowadays all are A+ or A++ something).
The only differences are in what ports, how many fans, how efficient the cooling is, and so on.
So yes, I did own all 6 RTX models released by Nvidia, spread out over all the brands. While I currently run 5 to 6 RTX GPUs and own 15, I've owned a total of 25 RTX GPUs (counting the working ones, broken ones/returns, and DOAs).


Why so many broken ones?

A few through user error. Most from swapping them between servers, where static killed them. A few were DOA or had the memory bug.
ProDigit

Joined: 8 Nov 19
Posts: 718
United States
Message 96343 - Posted: 3 Mar 2020, 18:16:33 UTC - in response to Message 96291.  
Last modified: 3 Mar 2020, 18:34:35 UTC

It's your right not to believe things, but I've had all RTX GPUs (save for a 2080 Super). The brand makes less of a difference in terms of performance or overclockability, as they all get their boards from Nvidia, just differently binned.
Meaning ASUS, MSI, EVGA, etc. all have about the same max GPU frequency, depending on what bin they're getting their chips from (nowadays all are A+ or A++ something).
The only differences are in what ports, how many fans, how efficient the cooling is, and so on.
So yes, I did own all 6 RTX models released by Nvidia, spread out over all the brands. While I currently run 5 to 6 RTX GPUs and own 15, I've owned a total of 25 RTX GPUs (counting the working ones, broken ones/returns, and DOAs).


This only applies to the cards you have or have had in your hands, certainly not ALL cards ever, and not all use cases either.

Your gross simplification would be akin to saying something like "ALL CPUs can overclock to 5.3 GHz because all of the ones I have did." It's simply not true in all cases.

But you'd at least admit that you haven't had 25 RTX GPUs, and thus have less authority on the topic, no? And yes, within limits: most RTX GPUs ship with different factory overclocks, but the same GPU and memory boost frequencies. Most GPUs that don't reach a high boost fail to do so because they lack proper cooling; the GPU frequency is guaranteed by Nvidia and steered by the Nvidia driver based on the board temperature. So if you have the same board from a different brand running at the same temperature, it'll reach the same boost clock rates (within a 2-5% binning margin).
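For anyone who wants to check this on their own multi-GPU box, nvidia-smi's standard query fields make the comparison easy; the parsing below is just a sketch:

```python
# Poll each GPU's temperature, current SM clock and power draw to compare
# boost behaviour across cards under the same steady load.
import csv, io, subprocess

FIELDS = "index,name,temperature.gpu,clocks.sm,power.draw"
out = subprocess.run(
    ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader,nounits"],
    capture_output=True, text=True, check=True,
).stdout

for row in csv.reader(io.StringIO(out)):
    idx, name, temp, clock, power = (c.strip() for c in row)
    print(f"GPU {idx} ({name}): {temp} C, {clock} MHz SM, {power} W")
# Same model at the same temperature and power limit should sit at similar
# boost bins; a hotter or power-limited card will clock lower.
```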