Fermi Cards recognition

Message boards : BOINC client : Fermi Cards recognition

skgiven
Joined: 19 Aug 08
Posts: 87
United Kingdom
Message 31825 - Posted: 29 Mar 2010, 13:58:50 UTC

Presently the Fermi cards are not being recognised correctly:

3/29/2010 ... NVIDIA GPU 0: GeForce GTX 480 (driver version 19733, CUDA version 3000, compute capability 2.0, 1503MB, 194 GFLOPS peak)

Tasks also fail to use all the cores, but that's a different issue.
ID: 31825
skgiven
Joined: 19 Aug 08
Posts: 87
United Kingdom
Message 31828 - Posted: 29 Mar 2010, 14:32:37 UTC - in response to Message 31825.  

By GUI you mean what?
ID: 31828
Profile Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15483
Netherlands
Message 31831 - Posted: 29 Mar 2010, 15:03:51 UTC - in response to Message 31828.  

GUI = Graphical User Interface. That's all BOINC Manager (boincmgr.exe) is: an easy way for you to give commands to the underlying client. It doesn't do any of the recognition, managing or other work; that's what the BOINC client (boinc.exe) does.

As for the recognition (and I am assuming you mean the GFLOPS number here), that's a driver thing. The 197.33 drivers (not sure where you got them; the latest from Nvidia are 197.13, with their ForceWare drivers being 197.17) may just not recognize the GPU correctly, or not the way you think it should be recognized. BOINC merely queries the driver and gets the information about your GPU from within the driver files. If the driver says this is the peak GFLOPS, then there's nothing BOINC can do about that.

Not to mention that these cards won't see the light of day for 'normal' users wanting to spend 500 dollars until the 12th of April 2010. ;-)
ID: 31831
Profile David Anderson
Volunteer moderator
Project administrator
Project developer
Joined: 10 Sep 05
Posts: 719
Message 31834 - Posted: 29 Mar 2010, 15:34:15 UTC - in response to Message 31825.  

BOINC gets the GPU name from the NVIDIA driver; I'll tell NVIDIA about this.
-- David
ID: 31834
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5082
United Kingdom
Message 31836 - Posted: 29 Mar 2010, 16:21:12 UTC - in response to Message 31825.  

3/29/2010 ... NVIDIA GPU 0: GeForce GTX 480 (driver version 19733, CUDA version 3000, compute capability 2.0, 1503MB, 194 GFLOPS peak)

194 GFLOPS isn't exactly exciting either - even my humble 9800GTs are reported at 339 GFLOPS these days.

One source of practical information at the moment is this GPUGrid thread. Their application (not BOINC) is reporting the GTX 480 as

# There is 1 device supporting CUDA
# Device 0: "GeForce GTX 480"
# Clock rate: 0.81 GHz
# Total amount of global memory: 1576468480 bytes
# Number of multiprocessors: 15
# Number of cores: 120

- that's from a host which is also displaying driver 19733 through BOINC v6.10.18/Windows 7.

GPUGrid reckon the low core count is because they've coded 8 shaders per MP, where the Fermis have 32 - maybe that could account for the low speed report too.
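
A quick back-of-the-envelope check seems to bear that out. Assuming the usual shaders x clock x 2 ops per clock estimate, and that Fermi's shader clock is meant to run at roughly 1.4 GHz rather than the 0.81 GHz reported above, the 8-per-MP assumption explains both the core count and the GFLOPS figure (illustration only, using the numbers from the GPUGrid output):

// Back-of-the-envelope check only; 15 MPs and 0.81 GHz come from the GPUGrid
// output above, ~1.4 GHz is the published Fermi shader clock.
#include <cstdio>

int main() {
    const int mps = 15;                                  // multiprocessors reported
    double as_reported = mps * 8  * 0.81 * 2.0;          // 8 shaders/MP at the reported 0.81 GHz
    double fermi_real  = mps * 32 * 1.4  * 2.0;          // 32 shaders/MP at ~1.4 GHz
    printf("8 shaders/MP  @ 0.81 GHz: %.0f GFLOPS\n", as_reported);  // ~194, what BOINC shows
    printf("32 shaders/MP @ 1.40 GHz: %.0f GFLOPS\n", fermi_real);   // ~1344, the card's real peak
    return 0;
}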
ID: 31836
skgiven
Joined: 19 Aug 08
Posts: 87
United Kingdom
Message 31840 - Posted: 29 Mar 2010, 20:54:04 UTC - in response to Message 31836.  

OK, so it seems that the beta driver is reporting the wrong stats, and this is limiting the apps on the BOINC client.

The 0.81 GHz is just the live clock being read before the card is under load; it speeds up to about 1.4 GHz later. But that clock is part of the equation used to calculate the peak GFLOPS rating, along with RAM speed, shader speed, and core and shader counts, and these are all off because they are being read from the live state rather than the maximum state (hence the name peak GFLOPS), basically because of the driver. So the BOINC apps are seeing the numbers incorrectly and only using 120 shaders, because of both the driver and the apps, which were written for earlier cards. I'm sure NVidia are already working on release drivers, and will continue to support the card and its variants with updates that improve performance, as they have done with their other ranges over many years.
ID: 31840
Profile Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15483
Netherlands
Message 31841 - Posted: 29 Mar 2010, 21:01:30 UTC - in response to Message 31840.  
Last modified: 29 Mar 2010, 21:02:05 UTC

So the BOINC apps are seeing the numbers incorrectly and only using 120 shaders, because of both the driver and the apps...

Not something for BOINC to change. All BOINC does is start the science application; it doesn't care what piece of hardware the application uses, or whether that's done using CUDA, CAL or OpenCL. So if you want that fixed, you'll have to contact the project that the apps come from.
ID: 31841
skgiven
Joined: 19 Aug 08
Posts: 87
United Kingdom
Message 31844 - Posted: 30 Mar 2010, 9:15:03 UTC - in response to Message 31841.  

Clearly BOINC calculates the GFLOPS rating, as it can do this before it's attached to a GPU project. And since it requires that the GPU drivers are installed, it is also clear that BOINC uses the information in the drivers to calculate the GFLOPS rating of the card.
So if the driver is not reporting this correctly, then BOINC can't calculate it properly. I expect changes in the driver format mean that BOINC is either calling the wrong data or being given the wrong data. It is BOINC that reports the reading incorrectly to us, using the reduced (power-saving) GPU clock rather than the peak clock. So BOINC may have to be adapted to call the data differently.

The apps are a different question, but obviously an app written specifically for the G200 won't work with a Fermi, because the designs are very different.
ID: 31844
Profile Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15483
Netherlands
Message 31845 - Posted: 30 Mar 2010, 9:39:32 UTC - in response to Message 31844.  

Clearly BOINC calculates the GFLOPS rating, as it can do this before it's attached to a GPU project. And since it requires that the GPU drivers are installed, it is also clear that BOINC uses the information in the drivers to calculate the GFLOPS rating of the card.

It's slightly different from that. The drivers come with information about all the cards/GPUs that they service. All BOINC does is read the information about your GPU and translate that into values you see.

In the case of the peakflops() value, all BOINC does is divide the value given in the drivers by 1e9 (1,000,000,000), so it puts those numbers down in a legible form without adding too much to the line. I mean, it's easier to say that your GPU can do 449 GFLOPS than to say it does 449,000,000,000 flops.

So if the driver is not reporting this correctly, then BOINC can't calculate it properly.

When the value for peakflops() in the drivers is wrong, BOINC can only show you what the drivers say. It can't and won't make up another number. It's got to be fixed in the drivers.

I notice you haven't answered where you got these drivers from.
Nvidia has 197.13
Omegadrivers has 197.17 and 197.25 Beta (This is a WHQL-candidate driver for GeForce 6, 7, 8, 9, 100, 200, and 300-series desktop GPUs and ION desktop GPUs. >> See how it's missing the description of the 400 series?)

So since you clearly do not want to tell us, could you please be so kind as to tell them that they have their numbers wrong? :-)
ID: 31845
Profile Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15483
Netherlands
Message 31846 - Posted: 30 Mar 2010, 9:52:01 UTC - in response to Message 31845.  

All BOINC does is read the information about your GPU and translate that into values you see.

Oh, PS, no need to believe me:

Read coproc.cpp, lines 167 - 231
ID: 31846
skgiven
Joined: 19 Aug 08
Posts: 87
United Kingdom
Message 31847 - Posted: 30 Mar 2010, 10:12:45 UTC - in response to Message 31846.  

By 'calculates' I just mean running an equation on the values reported by the driver; I don't mean benchmarking!
ID: 31847
Profile Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15483
Netherlands
Message 31848 - Posted: 30 Mar 2010, 10:23:10 UTC - in response to Message 31847.  
Last modified: 30 Mar 2010, 10:23:52 UTC

And neither do I. Have you read what I wrote?

OK, in simple words:
BOINC reads the contents of a certain DLL (dynamic-link library) file that comes with the drivers. Inside this DLL file there is information per video card, per GPU, showing what the capabilities of that card/GPU are. These values are put into the DLL file by the GPU driver manufacturer.

All BOINC does is read the values and show them to you. For the peak GFLOPS number, the value of peakflops() inside the DLL file is divided by 1e9 (10 to the power of 9). This is done so it's easier on the eyes.

So if the value of peakflops() inside the DLL file for your GPU is 1,290,000,000,000, then all BOINC can do is show you that the peak GFLOPS value is 1290 GFLOPS. It won't add to it, it won't subtract, and it won't scour the internet to look for 'better' numbers.
ID: 31848
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5082
United Kingdom
Message 31849 - Posted: 30 Mar 2010, 10:36:22 UTC - in response to Message 31846.  
Last modified: 30 Mar 2010, 10:42:45 UTC

All BOINC does is read the information about your GPU and translate that into values you see.

Oh, PS, no need to believe me:

Read coproc.cpp, lines 167 - 231

Jord, I don't think that's the right reference. The link you gave is more concerned with displaying the information that BOINC already holds.

BOINC actually gets the information in the first place via coproc_detect.cpp. As you say, that relies on NVIDIA-supplied code in a DLL.

But surely that code must actually query the underlying hardware? Not for the driver version, obviously, but things like current speed can't be hard-coded into a lookup table: otherwise overclocking could never be reported.
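
For illustration, the kind of hardware query I mean looks roughly like this with the CUDA driver API (a sketch, not BOINC's actual detection code):

// Sketch only: querying the GPU through the CUDA driver API, roughly what a
// detection routine has to do. Not taken from coproc_detect.cpp.
#include <cuda.h>
#include <cstdio>

int main() {
    cuInit(0);

    CUdevice dev;
    cuDeviceGet(&dev, 0);                                // first GPU

    char name[256];
    cuDeviceGetName(name, sizeof(name), dev);

    int clock_khz = 0, mp_count = 0;
    cuDeviceGetAttribute(&clock_khz, CU_DEVICE_ATTRIBUTE_CLOCK_RATE, dev);           // reported in kHz
    cuDeviceGetAttribute(&mp_count,  CU_DEVICE_ATTRIBUTE_MULTIPROCESSOR_COUNT, dev);

    printf("%s: %d multiprocessors, clock %.2f GHz\n", name, mp_count, clock_khz / 1e6);
    return 0;
}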

WRT drivers: would it not be a plausible first assumption that driver version 19733 came off the CD supplied in the box with the card itself? Still nice to have that confirmed, or otherwise, though.

Edit: From coproc.h

   inline double peak_flops() {
       double x = attribs.numberOfSIMD * attribs.wavefrontSize * 2.5 * attribs.engineClock * 1.e6;
ID: 31849
Profile Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15483
Netherlands
Message 31850 - Posted: 30 Mar 2010, 10:48:36 UTC - in response to Message 31849.  
Last modified: 30 Mar 2010, 10:50:21 UTC

Ugh, I give up and am going to wait for my new motherboard to arrive.
ID: 31850
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5082
United Kingdom
Message 31851 - Posted: 30 Mar 2010, 11:15:59 UTC - in response to Message 31850.  

Re: GPU overclocking (pre-edit!)

If you haven't seen any overclocked GPUs, then you haven't read Post your BOINC Startup 'CUDA' Info. Lots of weird and (not so) wonderful figures there. [And I see my last post killed it]

In my case, I have one card which was overclocked by the manufacturer. But I'm no longer using the drivers from the (15 month old) manufacturer's CD: I'm using standard, but newer, NVidia drivers. They pick up the hardware overclock.

Other people use software tools like (from memory) RivaTuner or EVGA Precision something. Apparently, they aren't manufacturer specific but can adjust the clockings of any NVidia-based card. The variation in speeds in that SETI thread mainly results from the use of such tools.

There are also tools like GPU-Z which can get data the same way BOINC can: this is a reviewer's shot of a Fermi, re-posted from GPUGrid:

ID: 31851
Profile Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15483
Netherlands
Message 31858 - Posted: 30 Mar 2010, 17:58:53 UTC - in response to Message 31851.  

In my case, I have one card which was overclocked by the manufacturer. But I'm no longer using the drivers from the (15 month old) manufacturer's CD: I'm using standard, but newer, NVidia drivers. They pick up the hardware overclock.

There's one possibility, and that is that the Nvidia Control Panel, or RivaTuner, or whichever overclocking program is available for Nvidia GPUs, (re)writes the DLL file for the drivers with the newer values.

As far as I know, BOINC does not benchmark the GPU in any way or form.

The code you pointed out is for an ATI GPU. See line 298, which says "COPROC_ATI(): COPROC("ATI"){}". Little clue. :-)

The code for Nvidia goes like this:

   int parse(MIOFILE&);

   // Estimate of peak FLOPS.
   // FLOPS for a given app may be much less;
   // e.g. for SETI@home it's about 0.18 of the peak
   //
   inline double peak_flops() {
        // clock rate is scaled down by 1000;
        // each processor has 8 cores;
        // each core can do 2 ops per clock
        //
        double x = (1000.*prop.clockRate) * prop.multiProcessorCount * 8. * 2.;
        return x?x:5e10;
   }

Whatever that last line means.
ID: 31858
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5082
United Kingdom
Message 31859 - Posted: 30 Mar 2010, 18:25:37 UTC - in response to Message 31858.  

// each processor has 8 cores;
double x = (1000.*prop.clockRate) * prop.multiProcessorCount * 8. * 2.;

Whatever that last line means.

BOINC bug. He's hardcoded something that should be a variable, and the value isn't valid for Fermi: each of its multiprocessors has 32 cores.
ID: 31859
skgiven
Joined: 19 Aug 08
Posts: 87
United Kingdom
Message 31863 - Posted: 30 Mar 2010, 23:27:51 UTC - in response to Message 31859.  

Hence it's using 120 cores and not 480.

Ageless, do you still think an NVidia driver is telling an NVidia card to only use 120 of its 480 cores?
ID: 31863
Profile Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15483
Netherlands
Message 31868 - Posted: 31 Mar 2010, 6:48:20 UTC - in response to Message 31863.  

I never said that. I said that the peakflops() value was read from the driver's DLL file. That's what I always understood was being done.

The reason behind that was the checkin on the change: "* standardize the FLOPS estimate between NVIDIA and ATI. Make them both peak FLOPS, according to the formula supplied by the manufacturer."

Which still means that the detailed values for each GPU are supplied by the manufacturer and must be accurate in the first place.

I've sent word on to David about that piece of code. He's changed it in changeset 21034, so it now adapts the formula to use 8 or 32 cores per processor, respectively (coproc.h).
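
In other words, something along these lines; this is a sketch of the idea only, with names of my own choosing, not the literal checked-in code:

// Sketch of the idea behind the change, not the literal checked-in code:
// pick 8 or 32 cores per multiprocessor based on the compute capability.
inline double peak_flops_estimate(int clock_khz, int multiprocessors, int cc_major) {
    double cores_per_proc = (cc_major >= 2) ? 32. : 8.;  // Fermi (2.x) has 32, earlier GPUs have 8
    double x = (1000. * clock_khz) * multiprocessors * cores_per_proc * 2.;
    return x ? x : 5e10;                                 // keep the old fallback for missing data
}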
ID: 31868
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5082
United Kingdom
Message 31872 - Posted: 31 Mar 2010, 9:29:03 UTC - in response to Message 31868.  

Jord, I still don't think you have it quite right. A DLL is a code library - a tool for doing something, not a repository of information in its own right. It will be part of the Application Programming Interface: so a better characterisation would be "BOINC uses the driver's DLL file to read information from the card" - not the peakflops directly, but lower-level information which can be used to calculate it.

That new code is an even worse kludge than the previous one: let's hope it's just a temporary holding operation until David (as posted earlier in this thread) can get a proper answer back from NVidia. Maybe they made a similar assumption when writing the API, and forgot to include a function for "How many shaders does each multiprocessor have?".

@ skgiven,
I don't think this false assumption by BOINC is going to have any effect at all on actual computation. The 'Peak FLOPS' figure is posted by BOINC mainly to give people something to brag about, though it also helps in work fetch calculations; I don't think it controls the science application. Far more likely is that the application developers (perhaps led astray by BOINC's code) have similarly failed to find an API for "shaders per MP", and have taken the lazy way out with a hard-coded constant. GPUGrid don't seem to have cracked it yet, even with the cuda30 version of their Beta v6.22, but it's early days for Fermi.
ID: 31872

Copyright © 2024 University of California.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.