Thread 'Wishing important projects would start supporting GPU crunching!'

ProDigit

Joined: 8 Nov 19
Posts: 718
United States
Message 97399 - Posted: 9 Apr 2020, 10:31:15 UTC

Most projects I want to support only do CPU crunching.
Few do GPU crunching, and when they do (like Einstein), they don't do it well.
Math projects like Collatz, GPU Grid and Prime Grid all make really good use of the GPU cores!
Too bad a lot of that computation is wasted on prime numbers we won't ever need to know in life...
ID: 97399
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5121
United Kingdom
Message 97403 - Posted: 9 Apr 2020, 10:51:43 UTC - in response to Message 97399.  

Have you ever tried to program a GPU to do anything - even a simple mathematical task like finding primes?
ID: 97403
Dave
Help desk expert

Joined: 28 Jun 10
Posts: 2638
United Kingdom
Message 97407 - Posted: 9 Apr 2020, 12:47:16 UTC

While some important projects could make use of GPU computing, others could not make good use of a GPU because each calculation depends on the result of the previous one, as pointed out in another thread recently. My last programming adventures were with Algol60, AlgolW and PL1, all over 40 years ago, so I suspect I have little in the way of expertise to offer in respect of programming a GPU, whether to find primes or to do something I consider more important.
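To illustrate that dependency point with a minimal sketch (a made-up iterated map, not any project's real code): every pass through the loop needs the previous pass's result, so a GPU's thousands of threads would have nothing to work on.

    // serial.cu -- hypothetical example of a loop-carried dependency.
    // Step n+1 needs the result of step n, so only one piece of work
    // is ever ready at a time; extra GPU threads would just sit idle.
    #include <cstdio>

    int main() {
        double x = 0.5;                        // arbitrary starting state
        for (long step = 0; step < 10000000; ++step)
            x = 3.9 * x * (1.0 - x);           // logistic map: depends on previous x
        printf("final state after 10M serial steps: %f\n", x);
        return 0;
    }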
ID: 97407
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5121
United Kingdom
Message 97409 - Posted: 9 Apr 2020, 13:03:59 UTC - in response to Message 97407.  

Snap, snap, and BCPL!
ID: 97409
robsmith
Volunteer tester
Help desk expert

Joined: 25 May 09
Posts: 1295
United Kingdom
Message 97410 - Posted: 9 Apr 2020, 13:19:04 UTC

In order to port a program from what is essentially a serial processor(*) (CPU) to a highly parallel processor like a GPU, there are a couple of prerequisites that need to be considered. First, and possibly most important: is the problem amenable to highly parallel processing? While some problems are, there are a number that aren't; if the problem relies on a mono-linear path to its solution, then don't bother with a GPU. Next: is the resource available to develop the application in the time available? Here "resource" includes people with the necessary skills and understanding to do the job, development time, money, hardware and so on.
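For contrast, here is the kind of job that is amenable: a minimal CUDA sketch (the kernel and sizes are invented for illustration) where every element is independent, so thousands of threads can attack it at once.

    // parallel.cu -- hypothetical embarrassingly parallel workload:
    // each output element depends only on its own input element.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void square(const float* in, float* out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = in[i] * in[i];    // no dependence on any other element
    }

    int main() {
        const int n = 1 << 20;
        float *in, *out;
        cudaMallocManaged(&in,  n * sizeof(float));
        cudaMallocManaged(&out, n * sizeof(float));
        for (int i = 0; i < n; ++i) in[i] = i * 0.001f;

        square<<<(n + 255) / 256, 256>>>(in, out, n);   // ~4096 blocks of 256 threads
        cudaDeviceSynchronize();

        printf("out[1000] = %f\n", out[1000]);          // expect 1.0
        cudaFree(in);
        cudaFree(out);
        return 0;
    }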
There are other considerations, like accuracy and precision, which may be harder to quantify but are equally important. It is pointless for a program to complete its task ten times faster on a GPU when the result it produces does not have the required accuracy or precision.
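A small sketch of that accuracy point (the numbers are arbitrary): naive summation drifts much further in single precision than in double, and a ten-times-faster wrong answer is still wrong.

    // precision.cu -- host-only: the same naive sum in float and double.
    #include <cstdio>

    int main() {
        float  sum_f = 0.0f;
        double sum_d = 0.0;
        for (int i = 0; i < 10000000; ++i) {
            sum_f += 0.1f;     // float: rounding error builds up badly
            sum_d += 0.1;      // double: error stays far smaller
        }
        printf("float : %.1f\n", sum_f);   // lands well away from 1000000.0
        printf("double: %.1f\n", sum_d);   // very close to 1000000.0
        return 0;
    }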

Back when I was working on the real-time control of a major chemical plant, I would have loved to have a couple of GPUs available to speed up some of the processing, as that would have better optimised the use of heat-transfer media between the "getting too hot" side of a process and the "we need to heat this up a bit more" areas (even within the same reactor vessels).

If you feel a project is "somewhat lacking" in its use of GPUs, I would suggest you approach them directly with a decent-sized pile of dollars/euros/pounds (at least a hundred thousand) and give it to them to improve that performance, but don't be surprised if some say "thanks, but no thanks".


(*) - Yes, I know about multi-threading and pre-fetch queues and the like, but those are still small in number compared to the thousands of threads on a GPU.
ID: 97410
robsmith
Volunteer tester
Help desk expert

Joined: 25 May 09
Posts: 1295
United Kingdom
Message 97411 - Posted: 9 Apr 2020, 13:22:11 UTC - in response to Message 97407.  

COBOL to you ;-)
Actually my set is Fortran (various varieties), C, C++, Ada, PL1 (the RTC one, not the "financial" one of the same name), Pascal, and a few more I'd rather not have to think about.....
ID: 97411
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5121
United Kingdom
Message 97412 - Posted: 9 Apr 2020, 13:48:06 UTC - in response to Message 97411.  
Last modified: 9 Apr 2020, 13:48:26 UTC

I'll see your COBOL and raise you LISP ;-)

My dissertation project was mainly AlgolW, but I deliberately wrote a couple of subroutines in FORTRAN to prove that I knew about cross-language linkers.

After that, I took a number of years away from computers, and my next machine came with an 8K BASIC interpreter mounted in a re-purposed 8-track cartridge case:
[photo: the 8K BASIC interpreter in its re-purposed 8-track cartridge case]
Trouble was, you couldn't read the manual when it was plugged into the computer!
ID: 97412
Jord
Volunteer tester
Help desk expert

Joined: 29 Aug 05
Posts: 15542
Netherlands
Message 97414 - Posted: 9 Apr 2020, 13:51:09 UTC - in response to Message 97412.  

Peel the sticker off it, or take a photograph. :)
ID: 97414
Gary Charpentier

Joined: 23 Feb 08
Posts: 2486
United States
Message 97421 - Posted: 9 Apr 2020, 14:30:56 UTC

ID: 97421
Dave
Help desk expert

Joined: 28 Jun 10
Posts: 2638
United Kingdom
Message 97422 - Posted: 9 Apr 2020, 14:31:59 UTC - in response to Message 97411.  

COBOL to you ;-)
Actually my set is Fortran (various varieties), C, C++, Ada, PL1 (the RTC one, not the "financial" one of the same name), Pascal, and a few more I'd rather not have to think about.....


Didn't know about the "financial" one, so it must have been the other one. I did dabble in BASIC and Forth a couple of times too, which may have been only about 35 years ago. The last two took the shine off things.
ID: 97422
Gary Roberts

Joined: 7 Sep 05
Posts: 130
Australia
Message 97451 - Posted: 10 Apr 2020, 9:30:14 UTC - in response to Message 97399.  

Hey all you guys insisting on wandering down memory lane ... don't you have any protocols about staying on topic?? :-) ;-).

Few do GPU crunching, and when they do (like Einstein), they don't do it well.
What was that saying about the poor workman and his tools???
I don't seem to be able to quite remember it ...
Cheers,
Gary.
ID: 97451
ProDigit

Joined: 8 Nov 19
Posts: 718
United States
Message 97542 - Posted: 12 Apr 2020, 23:11:05 UTC
Last modified: 12 Apr 2020, 23:13:02 UTC

A lot of people say 'can't be done' because a GPU is mostly 16 bit sp. But a lot of them have 32bit dp cores as well.
On top of that, a project could use a CPU for the most complex calculations (use 2 or 3 CPU cores per GPU if you must), and use the GPU for the smaller, easier-to-calculate parts.
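A rough sketch of that split (the serial step and the kernel are both made up for illustration): the CPU grinds through the part with a serial dependency, then hands the GPU the independent per-element work.

    // hybrid.cu -- hypothetical CPU+GPU division of labour.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scale(float* data, float k, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= k;                  // independent per element: GPU-friendly
    }

    int main() {
        const int n = 1 << 20;
        float* data;
        cudaMallocManaged(&data, n * sizeof(float));
        for (int i = 0; i < n; ++i) data[i] = 1.0f;

        double k = 0.5;                    // CPU part: a serial recurrence the
        for (int s = 0; s < 1000; ++s)     // GPU could not parallelise
            k = 3.7 * k * (1.0 - k);

        scale<<<(n + 255) / 256, 256>>>(data, (float)k, n);  // GPU part
        cudaDeviceSynchronize();

        printf("data[0] = %f\n", data[0]);
        cudaFree(data);
        return 0;
    }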
Folding@home is able to feed a GPU with enough data to keep the GPU running at full load while utilizing only 1 CPU core (granted, on the RTX series you'll need a 2.5 to 3 GHz CPU, and almost a 4 GHz CPU to keep up with the fastest GPUs, like RTX Titans).

I'm not saying GPU programming is easy. But it definitely speeds up any crunching job. Even older GPUs running at below 1 GHz still outperform even the biggest Threadripper CPU, and are certainly more cost-efficient in both purchase price and running cost.
ID: 97542
Jord
Volunteer tester
Help desk expert

Joined: 29 Aug 05
Posts: 15542
Netherlands
Message 97547 - Posted: 13 Apr 2020, 6:27:40 UTC - in response to Message 97542.  
Last modified: 13 Apr 2020, 8:39:20 UTC

because a GPU is mostly 16 bit sp. But a lot of them have 32bit dp cores as well.
If your single precision is only 16-bit, your GPU might be broken.

Half precision is 16-bit floating point, single precision is 32-bit floating point, and double precision is 64-bit floating point. Not all GPUs are capable of full DP or HP. Half precision is used in computer graphics.
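For what it's worth, a tiny sketch of what 16-bit storage costs you (assumes a reasonably recent CUDA toolkit, where __half conversions work in host code):

    // halfdemo.cu -- round a float through 16-bit half precision.
    #include <cstdio>
    #include <cuda_fp16.h>

    int main() {
        float x = 3.14159265f;             // ~7 significant decimal digits
        __half h = __float2half(x);        // keep only an 11-bit mantissa
        float back = __half2float(h);
        printf("single: %.7f\n", x);
        printf("half  : %.7f\n", back);    // only ~3 digits survive the trip
        return 0;
    }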

Even older GPUs running at below 1 GHz still outperform even the biggest Threadripper CPU
It's not so much the speed of the GPU that speeds up the calculations, but the sheer number of processing cores it has that rip at the problem in parallel. But the problem has to be translatable into the language the GPU cores speak, and that's not always possible or efficient.

But enough people have tried to explain that to you already, in all kinds of different forms and answers. You just continue to ignore what the experts say and go your own way, with your 16-bit single precision and your 1 GHz GPU. One day you'll be an expert in your own material.
ID: 97547
