PCI express risers to use multiple GPUs on one motherboard - not detecting card?

Ian&Steve C.
Joined: 24 Dec 19
Posts: 159
United States
Message 96275 - Posted: 2 Mar 2020, 19:27:17 UTC - in response to Message 96257.  
Last modified: 2 Mar 2020, 19:33:30 UTC

All (note *ALL*) RTX GPUs run stable at 15GBPS.


it's amazing that you have been able to test every RTX card that has been produced. /sarcasm

I'm sure MOST might handle that overclock. but that doesn't mean that ALL can. The stock speed for all but the 2080Super is 14Gbps. The 2080Super has memory binned / cherry picked to handle those increased speeds.

and it certainly depends on the task being run. +1000MHz off the base 14,000 is a pretty hefty jump.

i also disagree that power limiting somehow "turns off unused modules". that's not how this works. GPU memory modules are only individually clocked to 1750MHz. they get the increased "effective" speeds by reading from all modules simultaneously. they don't just fill up one by one. no matter how much or little data is being read from VRAM, it gets distributed across all modules. all are in use at all times.
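
To put rough numbers on that (a back-of-envelope sketch in Python, assuming a stock RTX 2080 with a 256-bit bus and the commonly published figures, not anything measured):

    # GDDR6 back-of-envelope: the ~1750 MHz reported by monitoring tools is the
    # memory clock; GDDR6 signalling moves 8 bits per pin per memory-clock cycle,
    # which is where the marketing "14 Gbps per pin" figure comes from.
    memory_clock_mhz = 1750
    bits_per_pin_per_cycle = 8
    bus_width_bits = 256          # every module on the bus contributes pins

    per_pin_gbps = memory_clock_mhz * bits_per_pin_per_cycle / 1000      # 14 Gbps
    aggregate_gb_per_s = per_pin_gbps * bus_width_bits / 8               # 448 GB/s
    print(f"{per_pin_gbps:.0f} Gbps per pin, {aggregate_gb_per_s:.0f} GB/s across the bus")

The aggregate figure only exists because every module on the bus is driven at once; there is no spare module sitting idle to be "turned off".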
Peter Hucker
Joined: 6 Oct 06
Posts: 1144
United Kingdom
Message 96277 - Posted: 2 Mar 2020, 19:44:42 UTC - in response to Message 96274.  

the GPU control systems these days are more sophisticated than you think they are. And Nvidia is a lot better at it than AMD is.

Power is only a function of Voltage and Current. Current is mostly a function of clock speed. (ignoring differences such as architecture and process node).

So if you keep clock speed the same, but reduce the voltage, the GPU will be just as fast and do the same work, but using less power. the degree to which you can do this depends on how good the individual chip is on the silicon level. AMD used to refer to this kind of thing as "Silicon Quality" and you could actually get a reading of this value from older AMD chips. if you reduce the voltage too much, it becomes unstable. but usually you should be able to reduce the voltage a little bit and remain stable. Any reduction in voltage will reduce power used.

we have to play with power limits to do this on nvidia, since they all but removed voltage control from the end user on their newer cards (unless you hard mod the GPU hardware). So if you say give me X clocks at Y power limit, the card will "try" the best it can, but at some point it stops giving you more clocks since it can't/won't reduce voltage any further.
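
As a rough illustration of that trade-off (a sketch with made-up numbers; the 250W / 1.00V / 1900MHz baseline is purely illustrative, and real cards also have static leakage that scales differently):

    # Switching power scales roughly with frequency * voltage^2.
    def dynamic_power(base_watts, volts, mhz, base_volts=1.00, base_mhz=1900):
        return base_watts * (mhz / base_mhz) * (volts / base_volts) ** 2

    stock = dynamic_power(250, 1.00, 1900)
    undervolted = dynamic_power(250, 0.95, 1900)    # same clock, 50 mV lower
    print(f"stock ~{stock:.0f} W, undervolted ~{undervolted:.0f} W "
          f"({100 * (1 - undervolted / stock):.0f}% less for the same work)")

That is also why a power limit eventually stops buying you clocks: once the driver is at the lowest voltage it is willing to use, the only lever left under the cap is frequency.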


I don't like unstable. Which is why I never adjust clock speeds or voltages. There's a reason they were set to those levels at manufacture.
Ian&Steve C.
Joined: 24 Dec 19
Posts: 159
United States
Message 96279 - Posted: 2 Mar 2020, 20:23:55 UTC - in response to Message 96277.  

there is variance in manufacture tolerances. which is why some chips can do higher clocks at lower voltages and others can't. not every piece of silicon (CPU or GPU or memory) will have the same exact limits. manufacturers select the speeds to run them based on what will be safe for nearly all products. anything that doesn't make the cut gets binned to a lower speed product or thrown away/recycled/whatever they do with it.

if you are willing to put in the time and effort to find a stable overclock/undervolt it can be beneficial and you can get some gains.

of course a stock card at stock speeds should always be stable, so there's nothing wrong with leaving it there. As I said before, the only reason nvidia underclocks your memory in compute loads, is their greed to push you to a higher priced Quadro card. that performance limiter has nothing to do with stability. I only overclock the memory to put back what they took away. And I do some very conservative core overclocking + power limiting that I've proven stable (GPUs can run months without me touching it, and with no downtime) for the increased efficiency since electricity ain't free and I'm using a lot of it, I'm pulling nearly 5000W 24/7/365 across my 3 main systems.

my watercooled 7x2080 system is power limited for 2 reasons. 1, to stay well below the PSU limits (single EVGA 1600W Platinum PSU), and 2, to keep temps reasonable since the heat is being radiated via a single 9x120mm radiator. 1400W is a lot of heat to dump from a single radiator, even though this one is large, 1400W is still pushing it.
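
The budget arithmetic behind that, as a sketch (the 200W per-card limit is stated later in the thread; a stock 2080 is rated around 215-225W, which is why seven of them need limiting on a 1600W unit):

    gpus = 7
    watts_per_gpu = 200            # assumed power limit per card
    psu_watts = 1600
    radiator_sections = 9          # 9 x 120 mm fan positions

    gpu_load = gpus * watts_per_gpu        # 1400 W
    print(f"GPU load: {gpu_load} W, leaving {psu_watts - gpu_load} W of nominal headroom")
    print(f"~{gpu_load / radiator_sections:.0f} W of heat per 120 mm radiator section")

At roughly 156W per 120mm section that is above the commonly quoted ballpark of 100-150W per section, which is why 1400W is "pushing it" even on a radiator this size.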
Peter Hucker
Joined: 6 Oct 06
Posts: 1144
United Kingdom
Message 96280 - Posted: 2 Mar 2020, 20:41:41 UTC - in response to Message 96279.  

there is variance in manufacture tolerances. which is why some chips can do higher clocks at lower voltages and others can't. not every piece of silicon (CPU or GPU or memory) will have the same exact limits. manufacturers select the speeds to run them based on what will be safe for nearly all products. anything that doesn't make the cut gets binned to a lower speed product or thrown away/recycled/whatever they do with it.

if you are willing to put in the time and effort to find a stable overclock/undervolt it can be beneficial and you can get some gains.

of course a stock card at stock speeds should always be stable, so there's nothing wrong with leaving it there. As I said before, the only reason nvidia underclocks your memory in compute loads, is their greed to push you to a higher priced Quadro card. that performance limiter has nothing to do with stability. I only overclock the memory to put back what they took away. And I do some very conservative core overclocking + power limiting that I've proven stable (GPUs can run months without me touching it, and with no downtime) for the increased efficiency since electricity ain't free and I'm using a lot of it, I'm pulling nearly 5000W 24/7/365 across my 3 main systems.

my watercooled 7x2080 system is power limited for 2 reasons. 1, to stay well below the PSU limits (single EVGA 1600W Platinum PSU), and 2, to keep temps reasonable since the heat is being radiated via a single 9x120mm radiator. 1400W is a lot of heat to dump from a single radiator, even though this one is large, 1400W is still pushing it.


I've broken too many chips by overclocking to try it again. Instability is usually just a nuisance, but if it's a CPU, you can corrupt your disk. And also the chip (GPU or CPU) can cease to function ever again.

If what you say about Nvidia underclocking memory is correct, then I understand why you circumvent it. But that's just given me another reason to choose AMD!

A 1600W PSU!? I thought 1kW was the highest you could get! They're expensive though; I think I'll stick to the 12V LED supplies at 1kW, about half the price per watt. I've yet to see if that's a real wattage though - if it's anything like cheap PC PSUs, anything over 50% load and you get smoke.

As for radiating heat, I heard once that someone had a very good result by simply using a central heating radiator and pumping the water through that. For those of you in more modern societies than the UK who have proper heat pumps, this is what I mean by a central heating radiator. Yes, we still use them in the UK to heat our homes, by actually pumping water through the house from a central boiler (furnace). In the 21st century!
Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 14687
Netherlands
Message 96281 - Posted: 2 Mar 2020, 20:48:53 UTC - in response to Message 96280.  

As for radiating heat, I heard once that someone had a very good result by simply using a central heating radiator and pumping the water through that. For those of you in more modern societies than the UK who have proper heat pumps, this is what I mean by a central heating radiator. Yes, we still use them in the UK to heat our homes, by actually pumping water through the house from a central boiler (furnace). In the 21st century!
Linus of LTT tried that, https://www.youtube.com/watch?v=1WLIm4XLPAE. The outcome wasn't pretty. Perhaps if you try this, don't use a used radiator!
ProDigit
Joined: 8 Nov 19
Posts: 642
United States
Message 96282 - Posted: 2 Mar 2020, 20:58:48 UTC - in response to Message 96275.  
Last modified: 2 Mar 2020, 21:02:19 UTC

All (note *ALL*) RTX GPUs run stable at 15GBPS.


it's amazing that you have been able to test every RTX card that has been produced. /sarcasm

I'm sure MOST might handle that overclock. but that doesn't mean that ALL can. The stock speed for all but the 2080Super is 14Gbps. The 2080Super has memory binned / cherry picked to handle those increased speeds.

and it certainly depends on the task being run. +1000MHz off the base 14,000 is a pretty hefty jump.

i also disagree that power limiting somehow "turns off unused modules". that's not how this works. GPU memory modules are only individually clocked to 1750MHz. they get the increased "effective" speeds by reading from all modules simultaneously. they don't just fill up one by one. no matter how much or little data is being read from VRAM, it gets distributed across all modules. all are in use at all times.


It's your right not to believe things, but I've had all RTX GPUs (save for a 2080 Super). The brand makes less of a difference in terms of performance or overclockability, as they all get their boards from Nvidia, just differently binned.
Meaning, ASUS, MSI, EVGA,... all have about the same max GPU frequency depending on what bin they're getting their chips from (nowadays all are A+ or A++ something).
The only difference is in what ports, how many fans, how efficient the cooling is, etc...
So, yes, I did own all 6 RTX models released by Nvidia, spread out over all the brands. While I currently run 5 to 6 RTX GPUs, and own 15, I've owned a total of 25 RTX GPUs (counting the working ones, broken ones // Returns, and DOAs)
Peter Hucker
Joined: 6 Oct 06
Posts: 1144
United Kingdom
Message 96283 - Posted: 2 Mar 2020, 21:01:03 UTC - in response to Message 96281.  
Last modified: 2 Mar 2020, 21:02:38 UTC

As for radiating heat, I heard once that someone had a very good result by simply using a central heating radiator and pumping the water through that. For those of you in more modern societies than the UK who have proper heat pumps, this is what I mean by a central heating radiator. Yes, we still use them in the UK to heat our homes, by actually pumping water through the house from a central boiler (furnace). In the 21st century!
Linus of LTT tried that, https://www.youtube.com/watch?v=1WLIm4XLPAE. The outcome wasn't pretty. Perhaps if you try this, don't use a used radiator!


They seem like competent guys, yet they use what they already know to be a rusty old radiator to cool a very expensive PC. For goodness' sake, a new radiator is a fraction of the cost of a PC. Rust and air bubbles in my central heating pump are bad enough, but in a computer?!

And why are they testing room temperature and comparing with a small heater? Clearly it's going to be identical if the thing functions. As he said at the start, energy cannot be created or destroyed.
ProDigit
Joined: 8 Nov 19
Posts: 642
United States
Message 96284 - Posted: 2 Mar 2020, 21:01:28 UTC - in response to Message 96281.  
Last modified: 2 Mar 2020, 21:03:39 UTC

As for radiating heat, I heard once that someone had a very good result by simply using a central heating radiator and pumping the water through that. For those of you in more modern societies than the UK who have proper heat pumps, this is what I mean by a central heating radiator. Yes, we still use them in the UK to heat our homes, by actually pumping water through the house from a central boiler (furnace). In the 21st century!
Linus of LTT tried that, https://www.youtube.com/watch?v=1WLIm4XLPAE. The outcome wasn't pretty. Perhaps if you try this, don't use a used radiator!

Would be interesting to see, though I can't see how it would effectively work, seeing that you'd have to parallel feed all GPUs (meaning you'll need several pumps).
Serial feeding and you can do with 1 pump, but at 250+W, going from 1 GPU to a 2nd, to a 3rd, and the 3rd GPU will have a hard time staying cool enough.

You could try to run the whole system on oil, because radiators are bound to rust and generate air.
Peter Hucker
Joined: 6 Oct 06
Posts: 1144
United Kingdom
Message 96285 - Posted: 2 Mar 2020, 21:03:59 UTC - in response to Message 96282.  

It's your right not to believe things, but I've had all RTX GPUs (save for a 2080 Super). The brand makes less of a difference in terms of performance or overclockability, as they all get their boards from Nvidia, just differently binned.
Meaning, ASUS, MSI, EVGA,... all have about the same max GPU frequency depending on what bin they're getting their chips from (nowadays all are A+ or A++ something).
The only difference is in what ports, how many fans, how efficient the cooling is, etc...
So, yes, I did own all 6 RTX models released by Nvidia, spread out over all the brands. While I currently run 5 to 6 RTX GPUs, and own 15, I've owned a total of 25 RTX GPUs (counting the working ones, broken ones // Returns, and DOAs)


Why so many broken ones?
Peter Hucker
Joined: 6 Oct 06
Posts: 1144
United Kingdom
Message 96286 - Posted: 2 Mar 2020, 21:08:25 UTC - in response to Message 96284.  

Would be interesting to see, though I can't see how it would effectively work, seeing that you'd have to parallel feed all GPUs (meaning you'll need several pumps).

Serial feeding and you can do with 1 pump, but at 250+W, going from 1 GPU to a 2nd, to a 3rd, and the 3rd GPU will have a hard time staying cool enough.


Or a big pump. Just use a central heating pump. I have a Grundfos pump that can push water quite quickly through a 26mm pipe. And it's designed for very hot water.

You could try to run the whole system on oil, because radiators are bound to rust and generate air.


Actually they generate hydrogen: 3Fe + 4H2O --> Fe3O4 (the black sludge in radiators) + 4H2. Don't smoke while bleeding your radiators.
Ian&Steve C.
Joined: 24 Dec 19
Posts: 159
United States
Message 96291 - Posted: 2 Mar 2020, 23:35:50 UTC - in response to Message 96282.  

It's your right not to believe things, but I've had all RTX GPUs (save for a 2080 Super). The brand makes less of a difference in terms of performance or overclockability, as they all get their boards from Nvidia, just differently binned.
Meaning, ASUS, MSI, EVGA,... all have about the same max GPU frequency depending on what bin they're getting their chips from (nowadays all are A+ or A++ something).
The only difference is in what ports, how many fans, how efficient the cooling is, etc...
So, yes, I did own all 6 RTX models released by Nvidia, spread out over all the brands. While I currently run 5 to 6 RTX GPUs, and own 15, I've owned a total of 25 RTX GPUs (counting the working ones, broken ones // Returns, and DOAs)


this only applies to the cards you have/had in your hands. certainly not ALL cards ever, and not all use cases either.

your gross simplification would be akin to saying something like "ALL CPUs can overclock to 5.3GHz because all of the ones I have did". It's simply not true in all cases.
Ian&Steve C.
Joined: 24 Dec 19
Posts: 159
United States
Message 96292 - Posted: 2 Mar 2020, 23:51:23 UTC - in response to Message 96284.  

As for radiating heat, I heard once that someone had a very good result by simply using a central heating radiator and pumping the water through that. For those of you in more modern societies than the UK who have proper heat pumps, this is what I mean by a central heating radiator. Yes, we still use them in the UK to heat our homes, by actually pumping water through the house from a central boiler (furnace). In the 21st century!
Linus of LTT tried that, https://www.youtube.com/watch?v=1WLIm4XLPAE. The outcome wasn't pretty. Perhaps if you try this, don't use a used radiator!

Would be interesting to see, though I can't see how it would effectively work, seeing that you'd have to parallel feed all GPUs (meaning you'll need several pumps).
Serial feeding and you can do with 1 pump, but at 250+W, going from 1 GPU to a 2nd, to a 3rd, and the 3rd GPU will have a hard time staying cool enough.

You could try to run the whole system on oil, because radiators are bound to rust and generate air.


the problem with trying to run a computer through a radiator like that is that it doesn't generate enough heat to effectively transfer the heat out.

heat transfer is driven by temperature gradient.

your comments about flow and "staying cool enough" are also incorrect. water from the earlier GPUs has little effect on the temps of other components downstream.

I have 7GPUs in a single system. all of them are watercooled. flow is a hybrid flow setup: first 4 GPUs parallel flow -> serial into next bank -> last 3 GPUs in parallel flow. based on your view, the last 3 GPUs should be much hotter than the first 4, but in reality all GPUs are about the same temp.
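
A quick sketch of why loop order barely matters here (the ~500 L/h flow figure is an assumption for two D5s in a loop like this, not a measurement; 200W per card as above):

    # Coolant temperature rise across a bank: dT = P / (m_dot * c_p)
    flow_l_per_h = 500
    m_dot_g_per_s = flow_l_per_h * 1000 / 3600     # ~139 g of water per second
    c_p = 4.18                                     # J/(g*K), specific heat of water

    first_bank_watts = 4 * 200                     # GPUs 0-3 at their power limit
    rise_c = first_bank_watts / (m_dot_g_per_s * c_p)
    print(f"water reaching the last 3 cards is only ~{rise_c:.1f} C warmer")

With the water only around a degree and a half warmer by the time it reaches the second bank, die-to-block contact and chip-to-chip variance easily dominate the reported GPU temps.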


Peter Hucker
Joined: 6 Oct 06
Posts: 1144
United Kingdom
Message 96293 - Posted: 3 Mar 2020, 0:06:08 UTC - in response to Message 96291.  

It's your right not to believe things, but I've had all RTX GPUs (save for a 2080 Super). The brand makes less of a difference in terms of performance or overclockability, as they all get their boards from Nvidia, just differently binned.
Meaning, ASUS, MSI, EVGA,... all have about the same max GPU frequency depending on what bin they're getting their chips from (nowadays all are A+ or A++ something).
The only difference is in what ports, how many fans, how efficient the cooling is, etc...
So, yes, I did own all 6 RTX models released by Nvidia, spread out over all the brands. While I currently run 5 to 6 RTX GPUs, and own 15, I've owned a total of 25 RTX GPUs (counting the working ones, broken ones // Returns, and DOAs)


this only applies to the cards you have/had in your hands. certainly not ALL cards ever, and not all use cases either.

your gross simplification would be akin to saying something like "ALL CPUs can overclock to 5.3GHz because all of the ones I have did". It's simply not true in all cases.


25 seems like a pretty big data set.
Peter Hucker
Joined: 6 Oct 06
Posts: 1144
United Kingdom
Message 96295 - Posted: 3 Mar 2020, 0:08:37 UTC - in response to Message 96292.  
Last modified: 3 Mar 2020, 0:11:24 UTC

the problem with trying to run a computer through a radiator like that is that it doesn't generate enough heat to effectively transfer the heat out.

heat transfer is driven by temperature gradient.


A central heating radiator is huge compared to anything normally designed for computers, so although the gradient is small, the surface area is large.

Or you could do what I did with a heating radiator, add fans. I got three times the heat out of the thing.
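
Rough numbers on that (a sketch: household panel radiators are rated at a 50C difference between mean water temperature and the room, and output scales at roughly the 1.3 power of that difference; the 1500W rating is just an example, not any specific model):

    rated_watts_at_dt50 = 1500     # example radiator rating at delta-T 50 C
    mean_water_c = 35              # a plausible loop temperature for a PC
    room_c = 20
    exponent = 1.3                 # typical for panel radiators

    dt = mean_water_c - room_c
    output_watts = rated_watts_at_dt50 * (dt / 50) ** exponent
    print(f"~{output_watts:.0f} W actually shed at a {dt} C gradient")

So a big household radiator only sheds a fraction of its rated output at PC coolant temperatures, and the rating also assumes natural convection, which is why strapping fans to it buys you a multiple of that figure.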

your comments about flow and "staying cool enough" are also incorrect. water from the earlier GPUs has little effect on the temps of other components downstream.

I have 7GPUs in a single system. all of them are watercooled. flow is a hybrid flow setup: first 4 GPUs parallel flow -> serial into next bank -> last 3 GPUs in parallel flow. based on your view, the last 3 GPUs should be much hotter than the first 4, but in reality all GPUs are about the same temp.


Impossible, as the second set are receiving water already heated from the first set. Yes, if you have a fast enough flow, the output temperature of water isn't much higher than the input. Same works with central heating radiators - you can put two in parallel to save on effort/piping, and as long as the water flows fast enough, both will get hot. If not fast enough, the first one dissipates all the heat and the second one is cold.
Ian&Steve C.
Joined: 24 Dec 19
Posts: 159
United States
Message 96297 - Posted: 3 Mar 2020, 0:43:47 UTC - in response to Message 96295.  
Last modified: 3 Mar 2020, 0:44:15 UTC

the problem with trying to run a computer through a radiator like that is that it doesn't generate enough heat to effectively transfer the heat out.

heat transfer is driven by temperature gradient.


A central heating radiator is huge compared to anything normally designed for computers, so although the gradient is small, the surface area is large.

Or you could do what I did with a heating radiator, add fans. I got three times the heat out of the thing.

your comments about flow and "staying cool enough" are also incorrect. water from the earlier GPUs has little effect on the temps of other components downstream.

I have 7GPUs in a single system. all of them are watercooled. flow is a hybrid flow setup: first 4 GPUs parallel flow -> serial into next bank -> last 3 GPUs in parallel flow. based on your view, the last 3 GPUs should be much hotter than the first 4, but in reality all GPUs are about the same temp.


Impossible, as the second set are receiving water already heated from the first set. Yes, if you have a fast enough flow, the output temperature of water isn't much higher than the input. Same works with central heating radiators - you can put two in parallel to save on effort/piping, and as long as the water flows fast enough, both will get hot. If not fast enough, the first one dissipates all the heat and the second one is cold.


it's not impossible. look at the numbers. I posted them.

GPU 4-5-6 are actually the LAST 3 cards, getting the "hot" water from GPU 0-1-2-3. yet they are actually running a few degrees cooler.

GPU temps in this kind of setup are affected a lot more by larger factors like the thermal transfer from the die to the waterblock. the water itself doesn't have a large gradient across the loop, probably varies ~1C if measured at any point in the loop.
Peter Hucker
Joined: 6 Oct 06
Posts: 1144
United Kingdom
Message 96338 - Posted: 3 Mar 2020, 17:41:49 UTC - in response to Message 96297.  

the problem with trying to run a computer through a radiator like that is that it doesn't generate enough heat to effectively transfer the heat out.

heat transfer is driven by temperature gradient.


A central heating radiator is huge compared to anything normally designed for computers, so although the gradient is small, the surface area is large.

Or you could do what I did with a heating radiator, add fans. I got three times the heat out of the thing.

your comments about flow and "staying cool enough" are also incorrect. water from the earlier GPUs has little effect on the temps of other components downstream.

I have 7GPUs in a single system. all of them are watercooled. flow is a hybrid flow setup: first 4 GPUs parallel flow -> serial into next bank -> last 3 GPUs in parallel flow. based on your view, the last 3 GPUs should be much hotter than the first 4, but in reality all GPUs are about the same temp.


Impossible, as the second set are receiving water already heated from the first set. Yes, if you have a fast enough flow, the output temperature of water isn't much higher than the input. Same works with central heating radiators - you can put two in parallel to save on effort/piping, and as long as the water flows fast enough, both will get hot. If not fast enough, the first one dissipates all the heat and the second one is cold.


it's not impossible. look at the numbers. I posted them.

GPU 4-5-6 are actually the LAST 3 cards, getting the "hot" water from GPU 0-1-2-3. yet they are actually running a few degrees cooler.

GPU temps in this kind of setup are affected a lot more by larger factors like the thermal transfer from the die to the waterblock. the water itself doesn't have a large gradient across the loop, probably varies ~1C if measured at any point in the loop.


Well if they get hotter water, they should be hotter by that many degrees C. But if you're pumping the water fast enough, it may be negligible. So why do they run cooler? Are they different cards? Are they further from other sources of heat?
Ian&Steve C.
Joined: 24 Dec 19
Posts: 159
United States
Message 96340 - Posted: 3 Mar 2020, 17:59:12 UTC - in response to Message 96338.  
Last modified: 3 Mar 2020, 18:00:06 UTC

I put “hot” in quotes for a reason. The water temp varies very little across the loop.

Are you even looking at the pics? All cards are right next to each other. They are the only heat generating components in the loop (minus the negligible heat generated by the pumps). All 7 cards are ASUS Turbo (blower model) RTX 2080s, running at 200W each. This model was chosen since they were the only 2080 I could easily get at a good price, that also had single slot I/O for use with a single slot waterblock setup like this. You’ll notice that the middle card’s power connectors look slightly different. This one came as an “EVO” variant. But more or less the same as the others.

It runs 2 D5 water pumps on speed setting of 4 (out of 5), for redundancy more than anything else. I could run them with one pump if I wanted.

It’s probably down to how efficient each individual chip is and maybe how well the thermal transfer is from the die to the waterblock.
ProDigit
Joined: 8 Nov 19
Posts: 642
United States
Message 96342 - Posted: 3 Mar 2020, 18:11:09 UTC - in response to Message 96285.  

It's your right not to believe things, but I've had all RTX GPUs (save for a 2080 Super). The brand makes less of a difference in terms of performance or overclockability, as they all get their boards from Nvidia, just differently binned.
Meaning, ASUS, MSI, EVGA,... all have about the same max GPU frequency depending on what bin they're getting their chips from (nowadays all are A+ or A++ something).
The only difference is in what ports, how many fans, how efficient the cooling is, etc...
So, yes, I did own all 6 RTX models released by Nvidia, spread out over all the brands. While I currently run 5 to 6 RTX GPUs, and own 15, I've owned a total of 25 RTX GPUs (counting the working ones, broken ones // Returns, and DOAs)


Why so many broken ones?

A few through user error. Most from swapping them between servers, where static killed them. A few were DOA, or had the memory bug.
ProDigit
Joined: 8 Nov 19
Posts: 642
United States
Message 96343 - Posted: 3 Mar 2020, 18:16:33 UTC - in response to Message 96291.  
Last modified: 3 Mar 2020, 18:34:35 UTC

It's your right not to believe things, but I've had all RTX GPUs (save for a 2080 Super). The brand makes less of a difference in terms of performance or overclockability, as they all get their boards from Nvidia, just differently binned.
Meaning, ASUS, MSI, EVGA,... all have about the same max GPU frequency depending on what bin they're getting their chips from (nowadays all are A+ or A++ something).
The only difference is in what ports, how many fans, how efficient the cooling is, etc...
So, yes, I did own all 6 RTX models released by Nvidia, spread out over all the brands. While I currently run 5 to 6 RTX GPUs, and own 15, I've owned a total of 25 RTX GPUs (counting the working ones, broken ones // Returns, and DOAs)


this only applies to the cards you have/had in your hands. certainly not ALL cards ever, and not all use cases either.

your gross simplification would be akin to saying something like "ALL CPUs can overclock to 5.3GHz because all of the ones I have did". It's simply not true in all cases.

But you'd admit at least that you haven't had 25 RTX GPUs, and thus have less authority on the topic, no? And yes, within limits: most RTX GPUs have a different factory overclock, but the same GPU and memory boost frequencies. Most GPUs that don't reach a high boost are held back by a lack of proper cooling, but GPU frequency is guaranteed by Nvidia, and steered by the Nvidia driver based on the board temperature. So if you have the same board from a different brand, running at the same temperature, it'll get the same boost clock rates (within a 2-5% binning margin).
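
If you want to watch that behaviour on your own card, a small sketch like this works (it only uses standard nvidia-smi query fields; GPU index 0 and the 3-second interval are arbitrary choices):

    import subprocess, time

    def sample(gpu_index=0):
        # Query temperature, SM clock and power draw via nvidia-smi.
        out = subprocess.check_output([
            "nvidia-smi", "-i", str(gpu_index),
            "--query-gpu=temperature.gpu,clocks.sm,power.draw",
            "--format=csv,noheader,nounits",
        ], text=True)
        temp_c, clock_mhz, power_w = (float(x) for x in out.strip().split(", "))
        return temp_c, clock_mhz, power_w

    for _ in range(20):                    # roughly a minute of samples
        t, c, p = sample()
        print(f"{t:.0f} C   {c:.0f} MHz   {p:.1f} W")
        time.sleep(3)

Run it while a compute task is loading the card and you should see the boost clock step down as the board temperature climbs, whatever the brand.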
Peter Hucker
Joined: 6 Oct 06
Posts: 1144
United Kingdom
Message 96344 - Posted: 3 Mar 2020, 18:18:47 UTC - in response to Message 96340.  

I put “hot” in quotes for a reason. The water temp varies very little across the loop.

Are you even looking at the pics? All cards are right next to each other. They are the only heat generating components in the loop (minus the negligible heat generated by the pumps). All 7 cards are ASUS Turbo (blower model) RTX 2080s, running at 200W each. This model was chosen since they were the only 2080 I could easily get at a good price, that also had single slot I/O for use with a single slot waterblock setup like this. You’ll notice that the middle card’s power connectors look slightly different. This one came as an “EVO” variant. But more or less the same as the others.

It runs 2 D5 water pumps on speed setting of 4 (out of 5), for redundancy more than anything else. I could run them with one pump if I wanted.

It’s probably down to how efficient each individual chip is and maybe how well the thermal transfer is from the die to the waterblock.


So it looks like it's ok to run water through two cards in series without much problem. Depends on the setup I guess. I've only ever used systems that cool one card. You must have a damn good pump if there's only a few C difference after going through one set of cards. I hope the pressure isn't too large and it bursts something.

Copyright © 2021 University of California. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.