Different projects and PCI-e bandwidth usage

hiigaran
Joined: 11 Sep 13
Posts: 57
Message 70281 - Posted: 17 Jun 2016, 15:17:29 UTC

I've been having some discussions on several sites regarding GPUs and bandwidth usage for distributed-computing projects, and I wanted to broaden things by hopefully getting some BOINC experts in on the matter.

Now, most of us are probably familiar with GPU mining and how hardware is generally deployed in these farms, but for anyone who isn't: a mining farm is typically built around a cheap motherboard with as many PCI-e slots as possible (of any size), some basic RAM, a cheap CPU, and of course the GPUs. Due to space limitations, the GPUs are normally connected to the motherboard via flexible risers: an x1 adapter at the motherboard end, an x16 adapter at the GPU end, and a USB 3.0 cable connecting the two. Essentially, these are PCI-e x1 extension cables. They do not actually use the USB interface; a USB 3.0 cable is used simply because it has the right number of wires to carry an x1 link.

Now, given that these risers are bottlenecked at x1 bandwidth, high-bandwidth applications such as gaming would see significant performance reductions. Cryptocurrency mining, however, does not require much bandwidth, so no performance is lost there: an x1 link on PCI-e 2.0 or 3.0 is never maxed out.

I had assumed that since mining does not require much bandwidth, distributed-computing projects might be the same. Over the past few weeks I've been discussing this on the Folding@Home forums, and to my disappointment, I was told that anything less than PCI-e 3.0 x4 or PCI-e 2.0 x8 saturates the bus, so the GPUs never reach full load and performance drops. This was rather disappointing, as I had wanted to build a system specced like a mining rig for distributed computing.
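For reference, a few lines of Python show why PCI-e 3.0 x4 and 2.0 x8 come out as roughly the same threshold. The per-lane figures below are the standard nominal rates after encoding overhead (an assumption on my part; the threshold itself is what the F@H forum reported):

```python
# Rough usable per-lane PCI-e bandwidth in MB/s, one direction.
# PCI-e 2.0: 5 GT/s with 8b/10b encoding -> 500 MB/s per lane.
# PCI-e 3.0: 8 GT/s with 128b/130b encoding -> ~985 MB/s per lane.
PER_LANE_MB_S = {
    "2.0": 5000 * 8 / 10 / 8,
    "3.0": 8000 * 128 / 130 / 8,
}

def link_bandwidth(gen: str, lanes: int) -> float:
    """Nominal one-direction bandwidth of a PCI-e link in MB/s."""
    return PER_LANE_MB_S[gen] * lanes

for gen, lanes in [("2.0", 1), ("3.0", 1), ("2.0", 8), ("3.0", 4)]:
    print(f"PCI-e {gen} x{lanes}: {link_bandwidth(gen, lanes):.0f} MB/s")
```

PCI-e 2.0 x8 works out to about 4000 MB/s and 3.0 x4 to about 3940 MB/s, which is why the two are quoted interchangeably as the cutoff, while an x1 riser delivers only 500-985 MB/s.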

After a bit of thinking, I started to wonder if every project would require the same levels of bandwidth as F@H, so here I am. With the lengthy backstory out of the way, my question to you guys is simply this: Are there any GPU projects on the BOINC platform that do not saturate the PCI-e x1 interface?

I would love to get some data from anyone working on GPU projects. MSI Afterburner shows bus usage, so if a few people are willing to spend two or three minutes to take a few measurements, I would really appreciate it. Please let me know what size and version the PCI-e slot of your GPU is as well.
This is a signature
ID: 70281
Matt Kowal
Joined: 16 Dec 15
Posts: 15
United States
Message 70315 - Posted: 19 Jun 2016, 21:20:05 UTC - in response to Message 70281.  

Win 10 + GPU-Z 0.8.8
2500k + GTX760 + AMD 7970

The 760 is running PrimeGrid (CUDA PPS Sieve) and the 'Bus Interface Load' appears to fluctuate between 1-3%.

The 7970 is running Collatz and POEM@home. GPU-Z does not present any bandwidth information for it.

Both GPUs use custom app_config.xml files. The 7970 runs 2 POEM tasks at a time and the 760 runs 2 PPS Sieve tasks. When the 7970 is on Collatz, it runs 1 task at a time.

Let me know if I can provide any other information. I will be traveling later this week, so forgive any delayed response.
ID: 70315
hiigaran
Joined: 11 Sep 13
Posts: 57
Message 70321 - Posted: 20 Jun 2016, 11:39:00 UTC - in response to Message 70315.  

It's a start. Thanks.

From other BOINC project forums, as well as the F@H forums, it doesn't look too hopeful for the kind of system I wanted to plan out and build, but there are a lot of inconsistencies, so...the research continues!
This is a signature
ID: 70321
Richard Haselgrove
Volunteer tester
Help desk expert
Joined: 5 Oct 06
Posts: 5077
United Kingdom
Message 70322 - Posted: 20 Jun 2016, 12:18:35 UTC - in response to Message 70281.  
Last modified: 20 Jun 2016, 12:21:35 UTC

I suggest that you think about the sort of work each project is doing, and how it fits into your interests and intended construction.

GPUs - in the configuration you describe - are probably best suited to integer calculations based on a formula or algorithm. They are probably least suited to projects which search large quantities of pre-recorded data and perform double-precision floating-point arithmetic on it.

You can see how bitcoin would fit into the first group. It shares those characteristics with, say, Collatz Conjecture and PrimeGrid.

I don't know of any project which falls completely into the second group - Milkyway requires double-precision floating point arithmetic, but I think is doing algorithmic simulations, rather than data searches.

Examples of middle-of-the-road single-precision data searches include Einstein, GPUGrid, and SETI. (edit - GPUGrid is perhaps better described as a simulation based on large volumes of input data - but it still needs that bandwidth)

You can probably work out other contenders for yourself by reading down the Category column in Choosing BOINC projects.
ID: 70322
betreger
Volunteer tester
Help desk expert
Joined: 18 Oct 14
Posts: 1472
United States
Message 70339 - Posted: 21 Jun 2016, 20:26:09 UTC

GPU-Z shows my GTX 660, running 3 CUDA 55 tasks at a time, at 3% bus saturation on a PCI-e 2.0 x16 bus. PCI-e bus bandwidth is not a bottleneck there.
ID: 70339
hiigaran
Joined: 11 Sep 13
Posts: 57
Message 70342 - Posted: 22 Jun 2016, 2:56:43 UTC

Hmm... 3% on that slot would then equate to 24% on x1 3.0, or 48% on x1 2.0. Then again, a 660 is fairly outdated; it would be considered a lower mid-range card by today's standards.
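That extrapolation can be sketched as a one-line scaling, assuming the task's bus traffic stays fixed and that per-lane bandwidth doubles from PCI-e 2.0 to 3.0 (both simplifications; latency effects are ignored):

```python
# Extrapolate a measured bus-load percentage from one PCI-e link
# to a narrower or faster one, assuming constant traffic.
def scale_bus_load(load_pct, lanes_from, lanes_to, gen_factor=1.0):
    """gen_factor: target per-lane speed relative to the source link
    (e.g. 2.0 for a 2.0 -> 3.0 move)."""
    return load_pct * lanes_from / lanes_to / gen_factor

x1_gen2 = scale_bus_load(3.0, 16, 1)                  # same generation
x1_gen3 = scale_bus_load(3.0, 16, 1, gen_factor=2.0)  # 2.0 -> 3.0
print(x1_gen2, x1_gen3)  # 48.0 24.0
```

The hypothetical `scale_bus_load` helper just makes the arithmetic explicit: 3% on x16 2.0 becomes 48% on x1 2.0 and 24% on x1 3.0.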

It looks like I'll just need to go for a standard system, then.
This is a signature
ID: 70342


Copyright © 2024 University of California.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.