Message boards : The Lounge : Wishing important projects would start supporting GPU crunching!
Send message Joined: 8 Nov 19 Posts: 718 |
Most projects I want to support only do CPU crunching. Few do GPU crunching, and when they do (like Einstein), they don't do it well. Math projects, like Collatz, GPU Grid and Prime Grid, all make really good use of the GPU cores! Too bad a lot of that data is wasted on prime numbers we won't ever need to know in life...
Send message Joined: 5 Oct 06 Posts: 5121 |
Have you ever tried to program a GPU to do anything - even a simple mathematical task like finding primes?
Send message Joined: 28 Jun 10 Posts: 2638 |
While some important projects could make use of GPU computing, some could not make good use of a GPU because each calculation depends on the result of the previous one, as pointed out in another thread recently. My last programming adventures were with Algol60, AlgolW and PL1, all over 40 years ago, so I suspect I have little in the way of expertise to offer in respect of programming a GPU, whether to find primes or to do something I consider more important.
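To make the serial-dependency point concrete, here is a minimal C++ sketch (an illustration added here, not code from the thread). Within a single Collatz-style chain each step needs the previous result, so one chain cannot be spread across GPU threads; an element-wise array update has no such dependency, which is the kind of work a GPU handles well (and why projects can still run many independent chains side by side).

```cpp
// A minimal sketch contrasting a serially dependent calculation with one
// that is trivially parallel. The Collatz-style chain cannot be split
// across GPU threads because step i+1 needs step i; the element-wise loop
// could be, because every element is independent of the others.
#include <cstdint>
#include <cstdio>
#include <vector>

// Serially dependent: each iteration consumes the previous result.
uint64_t collatz_steps(uint64_t n) {
    uint64_t steps = 0;
    while (n != 1) {
        n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
        ++steps;
    }
    return steps;
}

int main() {
    // Trivially parallel: each element could go to its own GPU thread.
    std::vector<double> data(1000, 2.0);
    for (std::size_t i = 0; i < data.size(); ++i)
        data[i] = data[i] * data[i] + 1.0;   // independent per element

    std::printf("Collatz chain length for 27: %llu\n",
                (unsigned long long)collatz_steps(27));
    std::printf("data[0] after independent update: %f\n", data[0]);
    return 0;
}
```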
Send message Joined: 5 Oct 06 Posts: 5121 |
Snap, snap, and BCPL! |
Send message Joined: 25 May 09 Posts: 1295 |
In order to port a program from what is essentially a serial processor(*) (a CPU) to a highly parallel processor like a GPU, there are a couple of prerequisites that need to be considered.

First, and possibly most important, is "Is the problem amenable to highly parallel processing?" While some problems are, there are a number that aren't; if the problem relies on a mono-linear path to its solution, then don't bother with using a GPU.

Next, "Is the resource available to develop the application in the time available?" - here "resource" includes people with the necessary skills and understanding to do the job, development time, money, hardware and so on.

There are other considerations, like accuracy and precision, which may be harder to quantify but are equally important. It is pointless for a program to complete its task ten times faster on a GPU when the result it produces does not have the required accuracy or precision.

Back when I was working on the real-time control of a major chemical plant, I would have loved to have had a couple of GPUs available to speed up some of the processing, as that would have better optimised the use of heat-transfer media between the "getting too hot" side of a process and the "we need to heat this up a bit more" areas (even within the same reactor vessels).

If you feel a project is "somewhat lacking" in its use of GPUs, I would suggest you approach them directly with a decent-sized pile of dollars/euros/pounds (at least a hundred thousand) and give it to them to improve that performance, but don't be surprised if some say "thanks, but no thanks".

(*) - Yes, I know about multi-threading and pre-fetch queues and the like, but they are still in low numbers when compared to the thousands of threads on a GPU.
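The accuracy/precision point is easy to demonstrate. Below is a generic single-versus-double comparison of my own (it has nothing to do with the poster's plant-control work): a GPU port that silently drops from double to single precision can finish faster and still hand back numbers that are not good enough.

```cpp
// A small sketch of the accuracy/precision point: the same summation done
// in single (float) and double precision can disagree noticeably, which
// matters if a faster GPU port silently drops from double to single.
#include <cstdio>

int main() {
    float  sum_sp = 0.0f;
    double sum_dp = 0.0;
    // Add a small value ten million times; float runs out of
    // significant digits long before double does.
    for (int i = 0; i < 10000000; ++i) {
        sum_sp += 0.0001f;
        sum_dp += 0.0001;
    }
    std::printf("single precision: %.6f\n", sum_sp);  // visibly off from 1000
    std::printf("double precision: %.6f\n", sum_dp);  // ~1000.000000
    return 0;
}
```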
Send message Joined: 25 May 09 Posts: 1295 |
COBOL to you ;-) Actually my set is Fortran (various varieties), C, C++, Ada, PL1 (the RTC one, not the "financial" one of the same name), Pascal, and a few more I'd rather not have to think about...
Send message Joined: 5 Oct 06 Posts: 5121 |
I'll see your COBOL and raise you LISP ;-) My dissertation project was mainly AlgolW, but I deliberately wrote a couple of subroutines in FORTRAN to prove that I knew about cross-language linkers. After that, I took a number of years away from computers, and my next machine came with an 8K BASIC interpreter mounted in a re-purposed 8-track cartridge case. Trouble was, you couldn't read the manual when it was plugged into the computer!
Send message Joined: 29 Aug 05 Posts: 15542 |
Peel the sticker off it, or take a photograph. :)
Send message Joined: 23 Feb 08 Posts: 2486 |
Urgent need for COBOL programmers: https://www.cnn.com/2020/04/08/business/coronavirus-cobol-programmers-new-jersey-trnd/index.html
Send message Joined: 28 Jun 10 Posts: 2638 |
"COBOL to you ;-)"

Didn't know about the "financial" one, so it must have been the other one. I did dabble in BASIC and Forth a couple of times too, which may have only been about 35 years ago. The last to take the shine off things.
Send message Joined: 7 Sep 05 Posts: 130 |
Hey, all you guys insisting on wandering down memory lane ... don't you have any protocols about staying on topic?? :-) ;-)

"Few do GPU crunching, and when they do (like Einstein), they don't do it well."

What was that saying about the poor workman and his tools??? I don't seem to be able to quite remember it ...

Cheers, Gary.
Send message Joined: 8 Nov 19 Posts: 718 |
A lot of people say 'can't be done' because a GPU is mostly 16 bit sp. But a lot of them have 32bit dp cores as well. On top of that, a project could use the CPU for the most complex calculations (2 or 3 CPU cores per GPU if it must), and use the GPU for the smaller, easier-to-calculate parts. Folding@home is able to feed a GPU with enough data to keep it running at full load using only 1 CPU core (granted, on the RTX series you'll need a 2.5 to 3 GHz CPU, and almost a 4 GHz CPU to keep up with the fastest GPUs, like the RTX Titans). I'm not saying GPU programming is easy. But it definitely speeds up any crunching job. Even older GPUs running at below 1 GHz still outperform even the biggest Threadripper CPU, and are certainly more cost-efficient in both purchase price and running cost.
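As a rough sketch of the "one CPU core feeding the GPU" idea, here is a producer/consumer example in plain C++ threads (my own illustration, not Folding@home code; a second CPU thread stands in for the GPU, since the point is only the pipelining). The feeder prepares the next batch while the other side is still busy with the current one, so the "accelerator" never has to wait for data.

```cpp
// One thread prepares batches of work while another (standing in for the
// GPU in this CPU-only sketch) processes them, so neither side sits idle.
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

int main() {
    std::queue<std::vector<float>> batches;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;
    const int num_batches = 8;

    // "CPU feeder" thread: prepares input batches.
    std::thread producer([&] {
        for (int b = 0; b < num_batches; ++b) {
            std::vector<float> batch(1024, static_cast<float>(b));
            {
                std::lock_guard<std::mutex> lock(m);
                batches.push(std::move(batch));
            }
            cv.notify_one();
        }
        {
            std::lock_guard<std::mutex> lock(m);
            done = true;
        }
        cv.notify_one();
    });

    // "Accelerator" thread: consumes batches as they become available.
    std::thread consumer([&] {
        int processed = 0;
        while (true) {
            std::vector<float> batch;
            {
                std::unique_lock<std::mutex> lock(m);
                cv.wait(lock, [&] { return !batches.empty() || done; });
                if (batches.empty() && done) break;
                batch = std::move(batches.front());
                batches.pop();
            }
            float sum = 0.0f;
            for (float v : batch) sum += v * v;  // stand-in for real GPU work
            ++processed;
            std::printf("batch %d processed, sum = %f\n", processed, sum);
        }
    });

    producer.join();
    consumer.join();
    return 0;
}
```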
Send message Joined: 29 Aug 05 Posts: 15542 |
"because a GPU is mostly 16 bit sp. But a lot of them have 32bit dp cores as well."

If your single precision is only 16 bit, your GPU might be broken. Half precision is 16-bit floating point, single precision is 32-bit floating point, and double precision is 64-bit floating point. Not all GPUs are capable of full DP or HP. Half precision is used in computer graphics.

"Even older GPUs running at below 1 GHz still outperform even the biggest Threadripper CPU"

It's not so much the speed of the GPU that speeds up the calculations, but the sheer number of processing cores it has that rip at the problem in parallel. But the problem has to be capable of being translated into the language the GPU cores speak, and that's not always possible or very efficient. Enough people have tried to explain that to you already, in all kinds of different forms and answers. You just continue to ignore what the experts say and go your own way, with your 16-bit single precision and your 1 GHz GPU. One day you'll be an expert in your own material.
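For reference, a tiny C++ snippet (added here as an illustration) confirming the sizes mentioned in this reply; half precision has no standard C++ type, so it only appears as a comment.

```cpp
// Print the bit widths and approximate decimal precision of the standard
// floating-point types: single precision = float, double precision = double.
#include <cstdio>
#include <limits>

int main() {
    std::printf("single precision: %zu bits, ~%d decimal digits\n",
                sizeof(float) * 8, std::numeric_limits<float>::digits10);
    std::printf("double precision: %zu bits, ~%d decimal digits\n",
                sizeof(double) * 8, std::numeric_limits<double>::digits10);
    // Half precision (16 bit) gives roughly 3 decimal digits, which is why
    // it is used for graphics rather than for science results.
    return 0;
}
```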
Copyright © 2024 University of California.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License,
Version 1.2 or any later version published by the Free Software Foundation.