DCF Integrator

Message boards : Questions and problems : DCF Integrator

Geek@Play
Joined: 20 Jan 09
Posts: 70
United States
Message 28251 - Posted: 22 Oct 2009, 21:43:03 UTC

Well......I gained a bit of knowledge thanks to both of you.

Thanks Jord and Richard.
ID: 28251
Geek@Play
Joined: 20 Jan 09
Posts: 70
United States
Message 28301 - Posted: 24 Oct 2009, 22:05:31 UTC - in response to Message 28248.  
Last modified: 24 Oct 2009, 22:46:08 UTC

Isn't the value of Seti's DCF you want somewhere between 0.5 - 1 for normal apps and 0.3 - 0.8 for optimised apps? Perhaps that CUDA is throwing the spanner here.


I believe you're right about the CUDA apps causing the DCF to get too low. I observed today that two of my boxes suddenly downloaded a couple of hundred work units again. The DCF on one was at 0.1009 when it happened. CUDA work completes in 10 to 20 minutes, whereas version 603 on the CPUs takes 1.5 to 2 hours to complete. Meanwhile, DCF is being driven down by CUDA. At least this is how I see it.

I wonder if Boinc would deal with this better if <flops> were not defined in the app_info. Boinc must make some kind of estimate of its own based on the benchmarks.

edit - or maybe increase the <flops> on the CUDA app to move predicted and actual crunch times towards each other. Maybe then it would not drive down the DCF so badly.
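The feedback loop described above can be sketched in a few lines. This is a simplified model, not BOINC's exact code: it assumes the DCF jumps up immediately when a task overruns its estimate and decays only gradually when tasks finish early; the 10% decay rate and the task times are illustrative.

```python
# Simplified sketch of a shared duration correction factor (DCF).
# Assumption (not BOINC's exact rule): DCF rises immediately when a task
# overruns its estimate, and drifts down slowly when tasks finish early.

def update_dcf(dcf, estimated_h, actual_h, down_rate=0.1):
    ratio = actual_h / estimated_h
    if ratio > dcf:
        return ratio                         # jump up to avoid over-fetching
    return dcf + down_rate * (ratio - dcf)   # decay slowly toward the ratio

dcf = 1.0
# A stream of CUDA results: estimated at 2 hours, done in 15 minutes.
for _ in range(30):
    dcf = update_dcf(dcf, estimated_h=2.0, actual_h=0.25)
# DCF is now far below 1.0, so the 1.5-2 hour CPU tasks look much
# shorter than they really are and the client fetches too much work.
print(round(dcf, 3))
```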
ID: 28301
Gundolf Jahn
Joined: 20 Dec 07
Posts: 1069
Germany
Message 28302 - Posted: 24 Oct 2009, 22:47:11 UTC - in response to Message 28301.  

I wonder if Boinc would deal with this better if <flops> were not defined in the app_info.

No, but there has to be a <flops> entry for every application in the app_info.xml. If the DCF value is disturbed by one application, adjust the <flops> value accordingly. See also app_info for AP503, AP505, MB603 and MB608 at the SETI boards:
7. Browse your client_state.xml file (it's in the BOINC data directory) and look for the entry <p_fpops>. We need to use this number. Do NOT change this file.

8. For each of the apps, multiply the p_fpops value by the factor below and put this into the appropriate flops entry in the app_info given below. For Multibeam 608 you need the estimated Gflops. The app_info given below has the values for a GTS250.
Application      Calculation
Astropulse 503 = p_fpops x 2.6
Astropulse 505 = p_fpops x 2.6
Multibeam 603  = p_fpops x 1.75
Multibeam 608  = Est.Gflops x 0.2
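As a worked example, the table above turns into <flops> values with a few multiplications. The p_fpops and GFLOPS numbers below are hypothetical placeholders; substitute the figures from your own client_state.xml and your own card.

```python
# Hypothetical inputs -- read your own values from client_state.xml and
# from BOINC's startup log for the card's estimated GFLOPS.
p_fpops = 2.8e9      # <p_fpops> benchmark, floating-point ops per second
est_gflops = 84.0    # estimated GFLOPS of the GPU (e.g. a GTS250)

# Multipliers from the table above.
flops = {
    "Astropulse 503": p_fpops * 2.6,
    "Astropulse 505": p_fpops * 2.6,
    "Multibeam 603":  p_fpops * 1.75,
    "Multibeam 608":  est_gflops * 1e9 * 0.2,   # GPU app scales off the card
}

for app, value in flops.items():
    print(f"{app}: <flops>{value:.4e}</flops>")
```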

Regards,
Gundolf
Computers aren't everything in life. (Just a little joke.)
ID: 28302
Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15483
Netherlands
Message 28303 - Posted: 24 Oct 2009, 22:59:01 UTC - in response to Message 28301.  

edit - or maybe increase the <flops> on the CUDA app to move predicted and actual crunch times towards each other. Maybe then it would not drive down the DCF so badly.

There is a problem with that: the flops (or fpops) estimates are defined in the tasks themselves. And at Seti there is no dividing line between tasks for the CPU and the GPU; the same Seti Enhanced Multibeam tasks run on both. So the estimated number of floating-point operations (fpops) a task will take is geared towards the CPU, which is still what the majority of people use.

A solution for Seti would be to make separate work for CPUs and GPUs, but that's impossible at the moment.

To say nothing of the better GPUs coming down the line eventually, not only from ATI but from Nvidia (and other brands) as well. And what happens when someone finally makes an OpenCL version of SE 6.08?

Another solution would be a separate DCF for each different kind of hardware. Want to guess at the difficulties with that? You'd have to define all kinds of hardware by class, by their power, etc.

Perhaps tell the GPU to use only one of its processors, just like the CPU does? I'm not even sure that's possible, and it would be a big waste if someone did manage to do it.
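The over-fetching mechanism is easy to see from the simplified runtime prediction the client makes: estimated time = rsc_fpops_est / flops x DCF. The task size and app speeds below are illustrative placeholders, not SETI's actual numbers.

```python
# Simplified runtime prediction with a single DCF shared by CPU and GPU apps.
# All numbers are illustrative.

def predicted_hours(rsc_fpops_est, flops, dcf):
    return rsc_fpops_est / flops * dcf / 3600.0

task_fpops = 3.0e13   # same fpops estimate whether the task runs on CPU or GPU
cpu_flops = 4.5e9     # hypothetical CPU app speed
gpu_flops = 4.5e10    # hypothetical CUDA app, ~10x faster

# With the DCF dragged down to 0.1 by fast GPU results, a CPU task that
# really needs ~1.85 hours is predicted at ~0.19 hours, so the client
# fetches roughly ten times too much CPU work to fill its cache.
print(predicted_hours(task_fpops, cpu_flops, dcf=0.1))
print(predicted_hours(task_fpops, cpu_flops, dcf=1.0))
# The GPU runs the same task in ~0.19 h, which is what keeps pushing DCF down.
print(predicted_hours(task_fpops, gpu_flops, dcf=1.0))
```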
ID: 28303
Geek@Play
Joined: 20 Jan 09
Posts: 70
United States
Message 28304 - Posted: 24 Oct 2009, 23:24:06 UTC

Well I made the following changes in my app_info and we'll see how it goes.

Multibeam 608 = Est.Gflops x 0.3

Estimated crunch times for 608 are certainly more in line with reality now.
ID: 28304
Richard Haselgrove
Volunteer tester
Help desk expert
Joined: 5 Oct 06
Posts: 5082
United Kingdom
Message 28305 - Posted: 24 Oct 2009, 23:40:32 UTC - in response to Message 28304.  
Last modified: 24 Oct 2009, 23:51:48 UTC

Well I made the following changes in my app_info and we'll see how it goes.

Multibeam 608 = Est.Gflops x 0.3

Estimated crunch times for 608 are certainly more in line with reality now.

That sounds plausible: maybe even still too low.

Those original calculations were made in March, remember: since then, we've had first Cuda 2.2 and later Cuda 2.3.

For SETI, specifically (and this does not apply to other BOINC projects I've tested), each Cuda runtime upgrade improved speed by at least 30%. You could even need x 0.4 if you're running the 2.3 DLLs.

But don't sweat it - it is impossible to reach a mathematically perfect set of multipliers, because the ratio of the speeds of the different applications differs between datasets (ARs, in SETI terminology). Just set something that keeps things broadly under control, and let BOINC manage its own affairs from there.
ID: 28305
Geek@Play
Joined: 20 Jan 09
Posts: 70
United States
Message 28306 - Posted: 25 Oct 2009, 0:05:47 UTC - in response to Message 28305.  
Last modified: 25 Oct 2009, 0:10:02 UTC

Well I made the following changes in my app_info and we'll see how it goes.

Multibeam 608 = Est.Gflops x 0.3

Estimated crunch times for 608 are certainly more in line with reality now.

That sounds plausible: maybe even still too low.

Those original calculations were made in March, remember: since then, we've had first Cuda 2.2 and later Cuda 2.3.

For SETI, specifically (and this does not apply to other BOINC projects I've tested), each Cuda runtime upgrade improved speed by at least 30%. You could even need x 0.4 if you're running the 2.3 DLLs.

But don't sweat it - it is impossible to reach a mathematically perfect set of multipliers, because the ratio of the speeds of the different applications differs between datasets (ARs, in SETI terminology). Just set something that keeps things broadly under control, and let BOINC manage its own affairs from there.


Initially it still seems too low. I'll let it go till morning, then maybe go to a 0.4 value.

edit - I understand. I'm just trying to stop Boinc from occasionally downloading a couple of hundred WUs. That's the only thing bothering me at the moment. That kind of activity by Boinc defeats the set-it-and-forget-it crowd.
ID: 28306
Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15483
Netherlands
Message 28307 - Posted: 25 Oct 2009, 0:33:17 UTC - in response to Message 28306.  

I'm just trying to stop Boinc from occasionally downloading a couple of hundred wu.

Occasionally reset the project. :-)

I don't think CUDA lends itself to set-it-and-forget-it. There are still too many problems with the hardware for that.
ID: 28307
Geek@Play
Joined: 20 Jan 09
Posts: 70
United States
Message 28310 - Posted: 25 Oct 2009, 4:35:43 UTC

I waited a few hours then reset the multiplier to........

Multibeam 608 = Est.Gflops x 0.4

The predicted crunch times for 608 WUs are close to reality now. Of course, Boinc 6.10.16 downloaded a bunch of 608 WUs because of the change. I expected that. OK so far.

Yes.....I run optimized apps and version 2.3 DLL files.

DCF on the 5 machines at this moment are..........

0.2133
0.2160
0.2073
0.2393
0.2421

My thanks to all of you for helping me to comprehend this problem and leading me down the path to the solution. Again, I learned a bit about Boinc and Seti. Sorry this old brain can be stubborn on occasion.

Now off to snooze for a while.............

Slide rule no good in digital age.
ID: 28310


Copyright © 2024 University of California.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.