Posts by Gipsel

1) Message boards : Questions and problems : Seti CUDA Likes To Hog The GPU (Message 27185)
Posted 8 Sep 2009 by Gipsel
Post:
I just came here by accident searching for some information on who the hell suggested that the BOINC manager (it started with 6.6.21 or so) now shows wall clock time for the WUs ;)

It started with 6.6.16

And whoever is responsible? I don't know... I did make a BOINC Manager that shows both CPU time and wall time in columns, though. ;-)
(32bit Windows only)

I know, that was the reason for the smiley.

But I think it creates a lot of confusion for some people, especially if the possibility to run more than one WU per GPU (officially supported since 6.10.3) is used. In my opinion it would be better to have a directive to the GPU developers to report the GPU time (which most project apps measure anyway) instead of CPU time to the client, with the manager simply showing the time reported by the science app (that was the behaviour before 6.6.16). It requires the cooperation of the project developers (guess that's a killer argument), but the manager would show much more representative times.
The current state over at MW, for instance, is that for ATI cards recent manager versions show roughly triple the time actually needed for a WU (if one uses the default of 3 WUs per GPU), while the task list on the project pages shows the GPU times (as my apps report the GPU time instead of the CPU time to the client). Using the CUDA app, on the other hand, leads to very low CPU times in the task list. All in all, there is not much coherence between the times seen in the manager and those on the project pages.
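To make that suggestion a bit more concrete, here is a minimal C++ sketch (my own illustration, not the actual Milkyway code): the app times its kernel calls itself and hands that figure to the client through the standard BOINC API call boinc_report_app_status(), which lets an application report its own time and progress figures. run_kernel_batch() is a hypothetical placeholder for the real GPU work.

// Sketch only: a GPU app that reports its measured GPU time to the client.
// run_kernel_batch() is a hypothetical stand-in for the real kernel calls and
// is assumed to return only after the GPU has finished the batch.
#include <chrono>
#include "boinc_api.h"   // boinc_init(), boinc_report_app_status(), boinc_finish()

void run_kernel_batch(int iteration);    // placeholder for the real GPU work

int main() {
    boinc_init();

    const int iterations = 320;
    double gpu_seconds = 0.0;            // time the kernels actually spent on the GPU

    for (int i = 0; i < iterations; ++i) {
        auto t0 = std::chrono::steady_clock::now();
        run_kernel_batch(i);
        std::chrono::duration<double> dt = std::chrono::steady_clock::now() - t0;
        gpu_seconds += dt.count();

        // Report the GPU time in place of the (much smaller) process CPU time,
        // together with the current progress.
        boinc_report_app_status(gpu_seconds, gpu_seconds, double(i + 1) / iterations);
    }

    boinc_finish(0);
    return 0;
}

That way the time the app reports to the client and the time shown on the project pages would at least come from the same measurement.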

But I guess that is a bit offtopic here and I don't want to hijack the thread.
2) Message boards : Projects : MilkyWay not provide good CUDA statistics (Message 27184)
Posted 8 Sep 2009 by Gipsel
Post:
The MilkyWay project is supplying a client for GeForce processors. Nice, and the credit throughput is incredible.

It's not so incredible for nvidia GPUs :p

However, they are not providing enough information about the device being used (nvidia, ati or just the cpu) to be able to identify what was used for a particular task.

I have to partially disagree. At least the ATI versions write quite extensive diagnostic output to stderr.txt. Just an example (taken from the top host):

<stderr_txt>
Running Milkyway@home ATI GPU application version 0.19g by Gipsel
allowing 4 concurrent WUs per GPU
setting minimum kernel frequency to 1 Hz
CPU: Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz (8 cores/threads) 3.50007 GHz (1160ms)

CAL Runtime: 1.3.158
Found 4 CAL devices

Device 0: ATI Radeon HD 4800 (RV770) 512 MB local RAM (remote 2047 MB cached + 2047 MB uncached)
GPU core clock: 800 MHz, memory clock: 500 MHz
800 shader units organized in 10 SIMDs with 16 VLIW units (5-issue), wavefront size 64 threads
supporting double precision

Device 1: ATI Radeon HD 4800 (RV770) 512 MB local RAM (remote 2047 MB cached + 2047 MB uncached)
GPU core clock: 800 MHz, memory clock: 500 MHz
800 shader units organized in 10 SIMDs with 16 VLIW units (5-issue), wavefront size 64 threads
supporting double precision

Device 2: ATI Radeon HD 4800 (RV770) 512 MB local RAM (remote 2047 MB cached + 2047 MB uncached)
GPU core clock: 800 MHz, memory clock: 500 MHz
800 shader units organized in 10 SIMDs with 16 VLIW units (5-issue), wavefront size 64 threads
supporting double precision

Device 3: ATI Radeon HD 4800 (RV770) 512 MB local RAM (remote 2047 MB cached + 2047 MB uncached)
GPU core clock: 800 MHz, memory clock: 500 MHz
800 shader units organized in 10 SIMDs with 16 VLIW units (5-issue), wavefront size 64 threads
supporting double precision

4 WUs already running on GPU 0
4 WUs already running on GPU 1
3 WUs already running on GPU 2
4 WUs already running on GPU 3
Starting WU on GPU 2

main integral, 320 iterations
predicted runtime per iteration is 179 ms (1000 ms are allowed)
borders of the domains at 0 1600
Calculated about 9.89542e+012 floatingpoint ops on GPU, 1.23583e+008 on FPU. Approximate GPU time 66.5653 seconds.

probability calculation (stars)
Calculated about 3.34818e+009 floatingpoint ops on FPU.

WU completed.
CPU time: 2.65202 seconds, GPU time: 66.5653 seconds, wall clock time: 283.864 seconds, CPU frequency: 3.50007 GHz

</stderr_txt>
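(For what it's worth, the log above is also internally consistent: roughly 9.9e12 floating-point operations in 66.6 seconds of GPU time works out to about 150 GFLOPS of double precision on that HD 4800.)

The CUDA side could log something comparable with only a few runtime calls. Here is a rough C++ sketch (my illustration, not code from any of the project apps) that dumps the available devices to stderr, which BOINC returns as the stderr.txt shown with each result:

// Sketch: print basic CUDA device diagnostics to stderr so the result page
// shows which GPU actually ran the task.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "No CUDA devices found\n");
        return 1;
    }
    fprintf(stderr, "Found %d CUDA device(s)\n", count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, i) != cudaSuccess) continue;
        fprintf(stderr,
                "Device %d: %s, %zu MB global RAM, %d multiprocessors, "
                "core clock %d MHz, compute capability %d.%d\n",
                i, prop.name, prop.totalGlobalMem / (1024 * 1024),
                prop.multiProcessorCount, prop.clockRate / 1000,
                prop.major, prop.minor);
    }
    return 0;
}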
3) Message boards : Projects : News on project outages. (Message 27181)
Posted 8 Sep 2009 by Gipsel
Post:
Collatz remains offline this afternoon

And maybe even for a few days. There is a statement by Slicker, the Collatz admin:

Due to a mixture of water and electricity, the power blew and brought down the server. I'm working on getting it working and/or rebuilding it, but really have no idea whether it will be one day or one week. The good news is I have a very recent database backup from just an hour or so prior to the crash.
4) Message boards : Questions and problems : Seti CUDA Likes To Hog The GPU (Message 27180)
Posted 8 Sep 2009 by Gipsel
Post:
Well, that is not entirely true. The developer of a GPU application can of course "throttle" a GPU just by waiting a bit between all those kernel calls ;)

Which is why, when you throttle the CPU, BOINC temporarily suspends the CPU application that is sending those kernels to the GPU. But I said that already. ;-)

Ah, I may add for those who don't know this: the CPU throttle in BOINC suspends the running tasks for a couple of seconds out of every 10 seconds. If you set it to 80%, it will not continuously give 80% CPU, but rather run for 8 seconds and pause for 2.

But that cycle is so slow that the fan on the card may spin up and down all the time. The other solution is much more fine-grained :)
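To show what "waiting a bit between the kernel calls" could look like, here is a minimal C++ sketch (my own illustration, not code from any of the apps discussed). launch_kernel_batch() is a hypothetical placeholder for the real (CAL or CUDA) work and is assumed to return only once the GPU has finished; the short sleep after each call keeps the GPU duty cycle below 100% on a millisecond scale instead of the 10-second suspend/resume cycle of the BOINC CPU throttle.

// Hypothetical illustration of per-kernel GPU throttling.
#include <chrono>
#include <thread>

void launch_kernel_batch(int iteration);   // placeholder for the real GPU work

void run_throttled(int iterations, double gpu_load /* 0 < gpu_load <= 1 */) {
    using clock = std::chrono::steady_clock;
    for (int i = 0; i < iterations; ++i) {
        auto t0 = clock::now();
        launch_kernel_batch(i);                        // GPU busy
        std::chrono::duration<double> busy = clock::now() - t0;

        // Sleep long enough that the GPU is busy only gpu_load of the time,
        // e.g. gpu_load = 0.8 gives roughly 80% GPU utilization.
        double idle_seconds = busy.count() * (1.0 / gpu_load - 1.0);
        std::this_thread::sleep_for(std::chrono::duration<double>(idle_seconds));
    }
}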

Btw, welcome around here Gipsel. Your work at Milkyway (and other places) is highly appreciated.

Thanks!
I just came here by accident searching for some information on who the hell suggested that the BOINC manager (it started with 6.6.21 or so) now shows wall clock time for the WUs ;)
5) Message boards : Questions and problems : Seti CUDA Likes To Hog The GPU (Message 27171)
Posted 8 Sep 2009 by Gipsel
Post:
I was just hoping that there would be a way of limiting GPU usage, the same way as there is for CPU usage.

By using the CPU throttle function, you throttle the GPU as well, as that slows down the rate at which the CPU application issues kernels for the GPU to work on. Other than that, there is no way at this time that the BOINC developers or the Nvidia developers can throttle a GPU.

Well, that is not entirely true. The developer of a GPU application can of course "throttle" a GPU just by waiting a bit between all those kernel calls ;)

In the case of the ATI application for MW there is even a command line option a user can set in the app_info.xml for exactly this purpose. So it is true that it can't be enforced by the BOINC client; nevertheless, such an option is possible whenever the science app supports it.
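As an illustration of where such a switch goes (the option name and file names below are placeholders, not the real ones), an anonymous-platform app_info.xml passes it via the <cmdline> element of the app_version section; the <coproc> count is how the client can be told to schedule more than one WU per GPU:

<app_info>
  <app>
    <name>milkyway</name>
  </app>
  <file_info>
    <name>milkyway_ati_app.exe</name>          <!-- placeholder file name -->
    <executable/>
  </file_info>
  <app_version>
    <app_name>milkyway</app_name>
    <version_num>19</version_num>
    <cmdline>--gpu_wait 1</cmdline>            <!-- hypothetical throttle option -->
    <coproc>
      <type>ATI</type>
      <count>0.33</count>                      <!-- 1/3 of a GPU per task = 3 WUs per GPU -->
    </coproc>
    <file_ref>
      <file_name>milkyway_ati_app.exe</file_name>
      <main_program/>
    </file_ref>
  </app_version>
</app_info>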



