Incorrect Nvidia GPU Detection and Count

John P. Myers

Joined: 30 Apr 10
Posts: 8
United States
Message 49816 - Posted: 8 Jul 2013, 10:46:24 UTC
Last modified: 8 Jul 2013, 10:56:23 UTC

In my system I have 1 GTX Titan, 1 GTX 560 and 1 GT 640. When BOINC starts, it detects all three correctly, OpenCL included, and they all crunch. Everything seems fine, except that client_state.xml reports I have 3 Titans, which I do not, and does not mention either of the other GPUs. This info is then sent to the project sites, which likewise say that I have 3 Titans and nothing else.

Event log:

7/8/2013 3:36:28 AM | | Starting BOINC client version 7.0.64 for windows_x86_64
7/8/2013 3:36:28 AM | | log flags: file_xfer, sched_ops, task
7/8/2013 3:36:28 AM | | Libraries: libcurl/7.25.0 OpenSSL/1.0.1 zlib/1.2.6
7/8/2013 3:36:28 AM | | Data directory: C:\ProgramData\BOINC
7/8/2013 3:36:28 AM | | Running under account John
7/8/2013 3:36:28 AM | | Processor: 8 GenuineIntel Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz [Family 6 Model 58 Stepping 9]
7/8/2013 3:36:28 AM | | Processor features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss htt tm pni ssse3 cx16 sse4_1 sse4_2 popcnt aes syscall nx lm vmx tm2 pbe
7/8/2013 3:36:28 AM | | OS: Microsoft Windows 7: Professional x64 Edition, Service Pack 1, (06.01.7601.00)
7/8/2013 3:36:28 AM | | Memory: 15.95 GB physical, 31.90 GB virtual
7/8/2013 3:36:28 AM | | Disk: 596.17 GB total, 504.71 GB free
7/8/2013 3:36:28 AM | | Local time is UTC -5 hours
7/8/2013 3:36:28 AM | | CUDA: NVIDIA GPU 0: GeForce GTX TITAN (driver version 320.49, CUDA version 5.50, compute capability 3.5, 4096MB, 4096MB available, 5268 GFLOPS peak)
7/8/2013 3:36:28 AM | | CUDA: NVIDIA GPU 1: GeForce GT 640 (driver version 320.49, CUDA version 5.50, compute capability 3.0, 2048MB, 1982MB available, 692 GFLOPS peak)
7/8/2013 3:36:28 AM | | CUDA: NVIDIA GPU 2: GeForce GTX 560 (driver version 320.49, CUDA version 5.50, compute capability 2.1, 1024MB, 917MB available, 1089 GFLOPS peak)
7/8/2013 3:36:28 AM | | OpenCL: NVIDIA GPU 0: GeForce GTX TITAN (driver version 320.49, device version OpenCL 1.1 CUDA, 6144MB, 4096MB available, 5268 GFLOPS peak)
7/8/2013 3:36:28 AM | | OpenCL: NVIDIA GPU 1: GeForce GT 640 (driver version 320.49, device version OpenCL 1.1 CUDA, 2048MB, 1982MB available, 692 GFLOPS peak)
7/8/2013 3:36:28 AM | | OpenCL: NVIDIA GPU 2: GeForce GTX 560 (driver version 320.49, device version OpenCL 1.1 CUDA, 1024MB, 917MB available, 1089 GFLOPS peak)
7/8/2013 3:36:28 AM | | Config: report completed tasks immediately
7/8/2013 3:36:28 AM | | Config: use all coprocessors

And the client_state.xml info:

<coprocs>
<coproc_cuda>
   <count>3</count>
   <name>GeForce GTX TITAN</name>
   <available_ram>4294967295.000000</available_ram>
   <have_cuda>1</have_cuda>
   <have_opencl>1</have_opencl>
   <peak_flops>5268480000000.000000</peak_flops>
   <cudaVersion>5050</cudaVersion>
   <drvVersion>32049</drvVersion>
   <totalGlobalMem>4294967295.000000</totalGlobalMem>
   <sharedMemPerBlock>49152.000000</sharedMemPerBlock>
   <regsPerBlock>65536</regsPerBlock>
   <warpSize>32</warpSize>
   <memPitch>2147483647.000000</memPitch>
   <maxThreadsPerBlock>1024</maxThreadsPerBlock>
   <maxThreadsDim>1024 1024 64</maxThreadsDim>
   <maxGridSize>2147483647 65535 65535</maxGridSize>
   <clockRate>980000</clockRate>
   <totalConstMem>65536.000000</totalConstMem>
   <major>3</major>
   <minor>5</minor>
   <textureAlignment>512.000000</textureAlignment>
   <deviceOverlap>1</deviceOverlap>
   <multiProcessorCount>14</multiProcessorCount>
   <coproc_opencl>
      <name>GeForce GTX TITAN</name>
      <vendor>NVIDIA Corporation</vendor>
      <vendor_id>4318</vendor_id>
      <available>1</available>
      <half_fp_config>0</half_fp_config>
      <single_fp_config>63</single_fp_config>
      <double_fp_config>63</double_fp_config>
      <endian_little>1</endian_little>
      <execution_capabilities>1</execution_capabilities>
      <extensions>cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_d3d9_sharing cl_nv_d3d10_sharing cl_khr_d3d10_sharing cl_nv_d3d11_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll  cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 </extensions>
      <global_mem_size>6442450944</global_mem_size>
      <local_mem_size>49152</local_mem_size>
      <max_clock_frequency>980</max_clock_frequency>
      <max_compute_units>14</max_compute_units>
      <opencl_platform_version>OpenCL 1.1 CUDA 4.2.1</opencl_platform_version>
      <opencl_device_version>OpenCL 1.1 CUDA</opencl_device_version>
      <opencl_driver_version>320.49</opencl_driver_version>
   </coproc_opencl>
<pci_info>
   <bus_id>16</bus_id>
   <device_id>0</device_id>
   <domain_id>0</domain_id>
</pci_info>
<pci_info>
   <bus_id>5</bus_id>
   <device_id>0</device_id>
   <domain_id>0</domain_id>
</pci_info>
<pci_info>
   <bus_id>3</bus_id>
   <device_id>0</device_id>
   <domain_id>0</domain_id>
</pci_info>
</coproc_cuda>
</coprocs>
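
For anyone who wants to double-check what their own client recorded, here is a minimal sketch (assuming the default Windows data directory from the log above, and that the file parses as strict XML) that lists each reported coprocessor and its count:

import xml.etree.ElementTree as ET

STATE_FILE = r"C:\ProgramData\BOINC\client_state.xml"   # path assumed from the log above

root = ET.parse(STATE_FILE).getroot()
# iter() finds <coproc_cuda> wherever it is nested, so the exact layout doesn't matter
for coproc in root.iter("coproc_cuda"):
    print(coproc.findtext("name"), "x", coproc.findtext("count"))
# On this host it prints: GeForce GTX TITAN x 3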


My question is, will this hurt performance? I'm noticing that the <single_fp_config> and <double_fp_config> values should not be equal. The Titan has 14 compute units (SMXs), each consisting of 192 FP32 'cores' and 64 FP64 'cores'. The GTX 780 is the same, except it has only 12 SMXs instead of 14, and its FP64 cores are capped at 1/24 of FP32 speed, versus 1/3 on the Titan when full-speed FP64 is enabled.
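
For reference, here is what the value 63 decodes to if these fields are the OpenCL cl_device_fp_config capability bitmask (flag values taken from the standard cl.h header; just a sketch, not a performance figure):

# Decode the cl_device_fp_config bitmask reported in client_state.xml.
FP_FLAGS = {
    1 << 0: "CL_FP_DENORM",
    1 << 1: "CL_FP_INF_NAN",
    1 << 2: "CL_FP_ROUND_TO_NEAREST",
    1 << 3: "CL_FP_ROUND_TO_ZERO",
    1 << 4: "CL_FP_ROUND_TO_INF",
    1 << 5: "CL_FP_FMA",
}

value = 63   # the <single_fp_config>/<double_fp_config> value above
print([name for bit, name in FP_FLAGS.items() if value & bit])
# -> all six flags set: denormal support, inf/NaN, three rounding modes and FMA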

Does the incorrect representation of which GPUs are in my system hurt performance when workunits are issued to me, and does the incorrect reporting of the FP32 and FP64 configs hurt performance while crunching?

I'm not all that concerned about the incorrect amount of RAM shown for the Titan in the event log.
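
For what it's worth, that RAM figure looks like a 32-bit clamp: 4294967295 bytes is exactly 2^32 - 1, so presumably the Titan's 6 GB gets saturated somewhere in the CUDA reporting path (an assumption on my part, but the numbers line up):

# Sketch of the suspected 32-bit saturation (assumption, not confirmed from BOINC source).
titan_mem_bytes = 6442450944    # 6 GB, as <global_mem_size> reports in the OpenCL section
u32_max = 2**32 - 1             # 4294967295, the value in <available_ram>/<totalGlobalMem>
clamped = min(titan_mem_bytes, u32_max)
print(clamped, "bytes =", clamped // (1024 * 1024), "MB")
# -> 4294967295 bytes = 4095 MB (shown rounded to 4096MB in the event log)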
ID: 49816
Claggy

Joined: 23 Apr 07
Posts: 1112
United Kingdom
Message 49817 - Posted: 8 Jul 2013, 11:00:00 UTC - in response to Message 49816.  

BOINC only uses the best GPU (of a vendor) by default, and will only report that best GPU (of that vendor) to the projects, so that is expected behaviour.
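
In other words (an illustrative sketch only, not BOINC's actual code): the client keeps one description for the most capable NVIDIA GPU and a count of how many NVIDIA GPUs it found, which is exactly what the <count>3</count> plus the single TITAN entry in your client_state.xml reflects:

# Illustration only (not BOINC source): one "best" GPU description per vendor,
# with count = number of GPUs that vendor has in the machine.
nvidia_gpus = ["GeForce GTX TITAN", "GeForce GT 640", "GeForce GTX 560"]
best = "GeForce GTX TITAN"                 # ranked best; the ranking criteria are spelled out in a later reply
reported = {"name": best, "count": len(nvidia_gpus)}
print(reported)                            # -> {'name': 'GeForce GTX TITAN', 'count': 3}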

Claggy
ID: 49817
John P. Myers

Joined: 30 Apr 10
Posts: 8
United States
Message 49818 - Posted: 8 Jul 2013, 11:29:44 UTC - in response to Message 49817.  
Last modified: 8 Jul 2013, 11:33:06 UTC

Not entirely true. Around 2 years ago I had this same issue. My GPUs were 2 GTX 590s and 1 GTX 460, and it was being reported that I had 5 GTX 460s. Definitely not the best GPU in the system. The GTX 460 was in the first PCIe slot, though, and I assumed at the time that being in the primary position was the cause. It was also the only GPU with a monitor attached. This time, however, the GTX 560 is in the first slot and is again the only GPU with a monitor attached, while the Titan is in the last.
ID: 49818
Claggy

Joined: 23 Apr 07
Posts: 1112
United Kingdom
Message 49819 - Posted: 8 Jul 2013, 13:15:12 UTC - in response to Message 49818.  

Not entirely true. Around 2 years ago I had this same issue. My GPUs were 2 GTX 590s and 1 GTX 460, and it was being reported that I had 5 GTX 460s. Definitely not the best GPU in the system. The GTX 460 was in the first PCIe slot, though, and I assumed at the time that being in the primary position was the cause. It was also the only GPU with a monitor attached. This time, however, the GTX 560 is in the first slot and is again the only GPU with a monitor attached, while the Titan is in the last.

BOINC decides which GPU is best based on these factors, in decreasing priority:
- compute capability
- software version
- available memory
- speed

http://boinc.berkeley.edu/dev/forum_thread.php?id=7899&postid=45886

So in that respect, a GTX 460 with compute capability 2.1 is 'better' than a GTX 590 with compute capability 2.0.
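
A rough sketch of that ordering (illustration only, not the actual BOINC source; the memory and GFLOPS figures below are just for the example):

# Rank GPUs by the priority list above: compute capability, then CUDA/driver
# version, then available memory, then speed.
def rank_key(gpu):
    return (gpu["compute_capability"], gpu["cuda_version"], gpu["ram_mb"], gpu["gflops"])

gtx460 = {"name": "GTX 460", "compute_capability": (2, 1), "cuda_version": 5050, "ram_mb": 1024, "gflops": 900}
gtx590 = {"name": "GTX 590", "compute_capability": (2, 0), "cuda_version": 5050, "ram_mb": 1536, "gflops": 1200}

best = max([gtx460, gtx590], key=rank_key)
print(best["name"])   # -> GTX 460: higher compute capability wins despite less RAM and fewer GFLOPS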

Claggy
ID: 49819
