PCI express risers to use multiple GPUs on one motherboard - not detecting card?


ProDigit

Joined: 8 Nov 19
Posts: 718
United States
Message 95601 - Posted: 1 Feb 2020, 5:27:07 UTC - in response to Message 95590.  

I know for Folding there's a benchmark utility; there's no equivalent for BOINC.
But the best way to see how your GPU is doing is to run BOINC for a few days on a single project and, at the same time, run the same project on a GPU in a PCIe x16 slot, then see how their scores differ after a few days.
The GPU in the x1 slot may seem like it's crunching fine, but it could be crunching at only 80% (or less) of its potential.


I just run several tasks and see what the average time to completion is. With Milkyway that's only a matter of minutes. Then I shift the card to the new slot and compare.

But the GPU usage shown in MSI Afterburner or GPU-Z is just as useful.


Yes, clock speed and power consumption should also show how much of the GPU is really being used.
Though a lot of projects will run your GPU at an indicated 92-100% and at its rated power consumption, regardless of whether the GPU is in an x1 or x16 slot.
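
If you'd rather script the task-time comparison than eyeball it, here's a rough sketch that averages completed-task times from a BOINC job log. The log file name and the "et" (elapsed time) field are assumptions about the job_log_*.txt format - check what your own client writes before trusting the numbers.

    import sys

    def average_elapsed(path, last_n=20):
        """Mean elapsed time (seconds) of the last `last_n` completed tasks."""
        elapsed = []
        with open(path) as f:
            for line in f:
                fields = line.split()
                if "et" in fields:
                    # the elapsed-time value follows the "et" tag
                    elapsed.append(float(fields[fields.index("et") + 1]))
        recent = elapsed[-last_n:]
        return sum(recent) / len(recent) if recent else float("nan")

    if __name__ == "__main__":
        # e.g. python avg_task_time.py job_log_milkyway.cs.rpi.edu_milkyway.txt
        print(f"average elapsed: {average_elapsed(sys.argv[1]):.1f} s")

Run it once with the card in the x1 slot and once in the x16 slot, and compare the two averages.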
ID: 95601
robsmith
Volunteer tester
Help desk expert

Joined: 25 May 09
Posts: 1283
United Kingdom
Message 95606 - Posted: 1 Feb 2020, 13:33:16 UTC

Not really - it depends on which part of the GPU GPU-Z is using to determine the percentage. For most GPUs it is possible to max out one part while other parts are a long way from fully loaded. Further, each part of a GPU has its own internal power budget, and all of those added together can exceed the total power that the internal PSU system is capable of supplying reliably.
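
As a rough illustration of the "several parts" point: on Nvidia cards NVML exposes separate counters for some of them. A minimal sketch, assuming the nvidia-ml-py package (AMD cards need different tooling):

    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

    util = pynvml.nvmlDeviceGetUtilizationRates(handle)        # SM / memory-controller activity
    power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # NVML reports milliwatts
    sm_clock = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_SM)

    print(f"core (SM) utilization:   {util.gpu}%")
    print(f"memory-ctrl utilization: {util.memory}%")
    print(f"board power draw:        {power_w:.0f} W")
    print(f"SM clock:                {sm_clock} MHz")

    pynvml.nvmlShutdown()

A card can sit at 100% on the first counter while the others are nowhere near their limits, which is exactly the point above.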
ID: 95606
ProDigit

Joined: 8 Nov 19
Posts: 718
United States
Message 95658 - Posted: 4 Feb 2020, 16:25:36 UTC
Last modified: 4 Feb 2020, 16:26:43 UTC

I mean to say that some programs use PCIe bus data as a source of GPU utilization. But PCIe data transfer cannot be measured with any real precision, because such sensors don't exist in common PC hardware. At best it can be guessed to within about 15%.

Most GPUs crunching will show 100% utilization, even if they're plugged into a PCIe x1 slot. But programs like Folding@home have an accurate performance metric that shows a sharp performance drop for the fastest GPUs in PCIe x1 slots, even when the GPU utilization sensors stay at 100%.

More than likely, these sensors are based on GPU frequency and average load, as well as PCIe bus transfers (which some sensors count as 100% GPU utilization, even if the GPU frequency drops to idle for a moment).
ID: 95658
robsmith
Volunteer tester
Help desk expert

Joined: 25 May 09
Posts: 1283
United Kingdom
Message 95663 - Posted: 4 Feb 2020, 17:18:23 UTC - in response to Message 95610.  

Yes, the PSUs built into the graphics card - the ones that convert the incoming 5 & 12 volts into whatever voltages are needed on-board.

Not so - it is very typical of modern electronics, where the hardware designer does not know how the application programmer will need the system to work. In most cases it would be wasted silicon, as it is only in extreme cases that one tries to load every part of a GPU to 100% of its capability simultaneously. By having an overall power budget less than the theoretical total of all the components you keep the losses down, and this keeps the GPU cooler than if the internal PSUs were scaled for everything running at maximum all the time.
ID: 95663
Ian&Steve C.

Joined: 24 Dec 19
Posts: 228
United States
Message 95671 - Posted: 4 Feb 2020, 19:25:21 UTC - in response to Message 95658.  

I mean to say that some programs use PCIe bus data as a source of GPU utilization. But PCIe data transfer cannot be measured with any real precision, because such sensors don't exist in common PC hardware. At best it can be guessed to within about 15%.

Most GPUs crunching will show 100% utilization, even if they're plugged into a PCIe x1 slot. But programs like Folding@home have an accurate performance metric that shows a sharp performance drop for the fastest GPUs in PCIe x1 slots, even when the GPU utilization sensors stay at 100%.

More than likely, these sensors are based on GPU frequency and average load, as well as PCIe bus transfers (which some sensors count as 100% GPU utilization, even if the GPU frequency drops to idle for a moment).


It can be measured with reasonable precision, at least on Nvidia GPUs. Several tools exist that measure this; I've used gmonitor on Linux. The precision is good enough that you see a rough doubling of PCIe bandwidth use when you halve the link speed or width, and good enough to tell whether you're bottlenecked or not, which is all most people care about.

I've never seen any tool read PCIe bandwidth and report that value as "GPU utilization".
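
For anyone who wants to check their own card, here's a minimal sketch of reading the PCIe throughput counters NVML exposes (assuming the nvidia-ml-py package; this is the kind of counter tools like gmonitor display):

    import time
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    for _ in range(5):
        # NVML reports PCIe throughput in KB/s, sampled over a short window
        rx = pynvml.nvmlDeviceGetPcieThroughput(handle, pynvml.NVML_PCIE_UTIL_RX_BYTES)
        tx = pynvml.nvmlDeviceGetPcieThroughput(handle, pynvml.NVML_PCIE_UTIL_TX_BYTES)
        print(f"PCIe rx {rx / 1024:.1f} MB/s, tx {tx / 1024:.1f} MB/s")
        time.sleep(1)

    pynvml.nvmlShutdown()

Compare the steady-state numbers against the link's ceiling (roughly 985 MB/s per lane for PCIe 3.0) to see how close to saturation you are.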

As for Folding performance, it's probably best to just read their own forums and what people there have reported: https://foldingforum.org/viewtopic.php?nomobile=1&f=38&t=31708

"My RTX 2080 Ti does seem to be constrained by the 1x bandwidth.
On a 4x/8x/16x slot it gets 2,3M PPD, on a PCIE 3.0 1x slot with this riser, it only gets 1,9-2,1M PPD"

That equates to about a 9-17% performance loss (1 - 2.1/2.3 ≈ 9%, 1 - 1.9/2.3 ≈ 17%) from cutting the bandwidth by over 90% (x1 is 1/16 of an x16 link), which means you don't need a huge amount of bandwidth to begin with. I don't think I would call that a sharp drop. It seems the limit is a little over PCIe 3.0 x1; everything from x4 up shows no impact.

But I don't fold anymore (not since 2005-ish), and I don't know whether PCIe traffic is used throughout the whole WU or only at certain times.
ID: 95671
Ian&Steve C.

Joined: 24 Dec 19
Posts: 228
United States
Message 95672 - Posted: 4 Feb 2020, 19:26:52 UTC - in response to Message 95663.  

Yes, the PSUs built into the graphics card - the ones that convert the incoming 5 & 12 volts into whatever voltages are needed on-board.


You mean the VRM.

A VRM is not a PSU.
ID: 95672
robsmith
Volunteer tester
Help desk expert

Joined: 25 May 09
Posts: 1283
United Kingdom
Message 95673 - Posted: 4 Feb 2020, 19:36:01 UTC - in response to Message 95672.  

A VRM is a PSU - it supplies power to another part of the GPU. It is a PSU comprising a voltage regulator plus some supporting hardware (and in many cases it can be controlled to regulate voltage, current, power, temperature, etc.).
ID: 95673
robsmith
Volunteer tester
Help desk expert

Joined: 25 May 09
Posts: 1283
United Kingdom
Message 95680 - Posted: 4 Feb 2020, 21:33:12 UTC

And what exactly does it DO?
It SUPPLIES POWER to the various bits of the GPU, therefore it is a POWER SUPPLY.
Its function is to reduce the input voltage (12V, 5V, 3.3V or whatever) to a level that the chip it is powering can use. It may be a fixed voltage or a variable voltage; where variable, this may be achieved by "digital" or analogue means.
These physically small (well, normally) devices are called "Voltage Regulation Modules" only because they are normally a "single chip" solution, but that is all - I suppose the earliest "VRMs" were members of the 78xx family, dating back to before Noah was a lad; there were versions that could be used as controlled variable-voltage supplies that could regulate down to a few mV.
ID: 95680
robsmith
Volunteer tester
Help desk expert

Joined: 25 May 09
Posts: 1283
United Kingdom
Message 95683 - Posted: 4 Feb 2020, 22:04:29 UTC - in response to Message 95676.  

NO - it is people who abuse these devices by pushing them beyond their design envelopes that cause the problems with cooling.
In most GPUs you would never actually reach the point where you were loading the WHOLE of the GPU to 100%; that is a plain, simple fact. This is because some combinations of maximum load are mutually exclusive, so it is perfectly acceptable to have a "110% total power budget": two mutually exclusive sub-systems' individual power budgets can, by the architectural design, never reach their limits at the same time.

As for this little gem:
Electricians do the same nonsense - in the UK we have double wall outlets rated at 20 amps. But each socket is 13A. So you can theoretically plug in 26 amps of load and melt one, it's not fused! They claim that would never happen...


The "13 Amps" refers to the maximum power that can be drawn from a single outlet. The "20Amp" rating refers to the maximum combined load that one should draw from that pair of outlets, not forgetting that the individual rating for each of the pair is 13A - in theory one could run a 13A load from one and a 7A load from the other and be within the limits. If the cable (in the wall) is failing at 20A it was not the correct cable for the job as it is required to be rated for at least 30A (the "normal" 2.5mm^2 is more than capable of taking 30A), the sparky was cutting corners and should have his crimper cut off at the knees.
A ring main has to be fused to 30A (or have an equivalent rated breaker), and the cable used must be capable of carrying that current. There are some very detailed rules surrounding fused and un-fused spurs. In this context "fused" is a fuse or breaker at the point the spur is broken-out from its supply ring or other circuit. That's the "domestic" requirement - The rules change a bit for commercial or industrial installations, but then one has to have local breakers/fuses to protect smaller areas such as work benches.
Most fires are not caused by the installed wiring failing, but the attached devices (including trailing extension with multi-gang outlets) failing. Someone I know recently lost their home due to a faulty wireless modem which was drawing too little current to blow the "wall-end" fuse, but sufficient to set the diddy little wall-wart power supply on fire....
ID: 95683
Ian&Steve C.

Joined: 24 Dec 19
Posts: 228
United States
Message 95702 - Posted: 5 Feb 2020, 15:54:57 UTC
Last modified: 5 Feb 2020, 15:55:29 UTC

Only in the most pedantic of definitions can you claim a "VRM is a PSU". It's not.

A PSU converts wall/mains AC into the myriad of required DC voltages, using a bridge rectifier followed by DC-DC conversion and filtering on several "rails" for the different voltages (+12V/-12V/+5V/+3.3V, etc.). The PSU "supplies" DC voltages that don't exist before the PSU.

A VRM is much simpler: it's just a buck converter. Just because a buck converter is one of many components used in a PSU doesn't make a VRM the same thing as a PSU.
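
For a sense of scale, the buck relationship itself is simple enough to show with toy numbers (an idealized, lossless converter; the voltage and current figures are illustrative, not measured from any real card):

    VIN = 12.0    # volts in, from the PSU's 12 V rail
    VOUT = 1.0    # volts out, at the GPU core
    IOUT = 200.0  # amps of core current

    duty = VOUT / VIN        # ideal buck converter: Vout = D * Vin
    iin = VOUT * IOUT / VIN  # power balance: Vin * Iin = Vout * Iout

    print(f"duty cycle:     {duty:.1%}")   # ~8.3%
    print(f"12 V rail draw: {iin:.1f} A for {VOUT * IOUT:.0f} W delivered")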
ID: 95702
robsmith
Volunteer tester
Help desk expert

Joined: 25 May 09
Posts: 1283
United Kingdom
Message 95705 - Posted: 5 Feb 2020, 16:46:28 UTC

A PSU is simply a means of converting an "unusable" power supply into one that is usable by the device you want to run.
In front of me as I type I have a 12V (AC) to 230V (DC) power supply, a couple of 230V (AC) to low-voltage, variable-frequency power supplies, several AC to low-voltage DC (fixed) units, and a couple of AC-in, variable-DC-out units.
Last week I had a 440Hz single-phase to 60Hz three-phase unit that was on its way to someone who needed to use some "normal" three-phase tools on an airframe, where the only power to hand is the 440Hz from an aircraft APU.
Then there is the 12V DC to 72V DC one that is in use as a door stop (some kind person drilled a hole through the case to mount it and wrecked one of the inductors, and we are waiting for a new one).

All of the above are Power Supply Units.

What you say about VRMs is actually the wrong way round, as one of the AC-DC variable units uses a couple of VRM modules to drive its power FET output devices. That is, the VRM is PART of a much larger PSU.
ID: 95705
Ian&Steve C.

Joined: 24 Dec 19
Posts: 228
United States
Message 95706 - Posted: 5 Feb 2020, 17:33:29 UTC - in response to Message 95705.  
Last modified: 5 Feb 2020, 17:35:42 UTC

Extraneous information aside, I still disagree that a VRM is a PSU. The lack of AC-DC conversion is a major design element that separates the two.

We’re talking standardized PC components here. A PSU is the device converting AC to DC at the voltages defined in the ATX standard. A VRM is merely a component, used within PSUs and many other PC circuits and systems.
ID: 95706
Ian&Steve C.

Joined: 24 Dec 19
Posts: 228
United States
Message 95709 - Posted: 5 Feb 2020, 20:03:48 UTC - in response to Message 95707.  

A wall wart is also not a PSU. Again: standardized PC components here...

Personally, I wouldn't call a voltage converter in any sense a Power Supply Unit (especially when talking about something like PCs, with standardized components and common vernacular). When it outputs AC at a different voltage, it's simply a transformer.

Go ahead and take your GPU to any computer shop, point to the VRMs, and ask "is this a PSU?" They will say no, for good reason.
ID: 95709