Message boards : BOINC client : Boinc overestimating time for WU
Joined: 5 Oct 06, Posts: 5133
I suspect that the main problem may be caused by running two different Einstein applications on the same machine. That needs a history lesson.

The very first versions of BOINC only ran CPU applications, and most projects only had one type of application. The BOINC client on your machine kept track of the real-world speed of those single-application projects by means of a single (one per project) value called 'DCF', or Duration Correction Factor.

Then GPUs came along, and multiple-application projects, and it all fell apart - the single DCF value couldn't adjust estimates for two different applications simultaneously. Instead, as part of the CreditNew release in 2010, DCF was replaced by APR (Average Processing Rate), tracked on the server, which can adjust for an arbitrary number of applications and devices at the same time.

The Einstein project - specifically - didn't accept the CreditNew design, so it didn't adopt the integrated Runtime Estimation tools either, and never developed its own replacement. If you look at the Project properties for the Einstein project in BOINC Manager, you'll probably see a line like

    Duration correction factor    0.5865

(that's my intel_gpu running Einstein on this machine). SETI and every other project will either hide DCF because it's redundant, or show exactly 1.0000 as the value. You'll probably be able to see DCF dropping every time the AMD finishes a task, and bobbing back up again every time the iGPU finishes one.

The only workarounds I can think of for this are:

1) run different projects on the two different kinds of GPU (and the CPU, for that matter)
2) wrap the current Einstein applications up in an app_info.xml file (see Anonymous platform), and declare your own speed rating (<flops> value) for each app - see the sketch after this post. But that's tricky.

Apart from that, the only true solution involves work by the Einstein developers. A lot of work.

Edit - also read Gary Roberts' recent comment at Einstein.
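For anyone tempted by option 2, here is a minimal sketch of what an Anonymous platform app_info.xml can look like. The element names are standard BOINC ones, but every app name, file name, version number, plan class and <flops> figure below is a placeholder - copy the real values from client_state.xml and the project directory of your own installation, because a mismatch will make the client discard the tasks already in your cache.

    <app_info>
        <app>
            <name>einstein_gpu_app</name>  <!-- placeholder: use the short app name from client_state.xml -->
        </app>
        <file_info>
            <name>einstein_gpu_app_1.00.exe</name>  <!-- placeholder: an executable already in the project folder -->
            <executable/>
        </file_info>
        <app_version>
            <app_name>einstein_gpu_app</app_name>
            <version_num>100</version_num>
            <plan_class>opencl-ati</plan_class>  <!-- placeholder plan class for the AMD card -->
            <avg_ncpus>1</avg_ncpus>
            <coproc>
                <type>ATI</type>
                <count>1</count>
            </coproc>
            <flops>2.0e11</flops>  <!-- your own speed rating: a higher value gives shorter estimates for this app -->
            <file_ref>
                <file_name>einstein_gpu_app_1.00.exe</file_name>
                <main_program/>
            </file_ref>
        </app_version>
    </app_info>

You'd repeat the <app>, <file_info> and <app_version> stanzas for the iGPU application with its own, lower, <flops> value - that per-app figure is what stands in for the single project-wide DCF in the runtime estimate.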
Joined: 1 Jul 16, Posts: 146
This is one reason why I like to run CPU apps in one client and GPU apps in another client - even more so when they're from the same project. During competitions it makes bunkering one type of app/project easier.
Joined: 5 Oct 06, Posts: 5133
"What do you mean "in another client"? Are you somehow running two instances of the Boinc manager?"

No, he said 'client' and he meant client - they're different programs. How you manage them is a separate - and potentially difficult - question.
Joined: 1 Jul 16, Posts: 146
Well, since you mentioned BOINCTasks, that's three things: the BOINC client and the Manager, plus the 3rd-party software BOINCTasks. The client runs in the background and has no GUI; the manager is the user interface that shows tasks, adds projects, etc. You can run more than one client, and BOINCTasks can monitor more than one client per PC by using the gui_rpc_port #.

I manage the utilization of CPU threads for GPU tasks (with Process Lasso), since Windows does a poor job of that on its own. I also use app_config files, which have been pointed out to you many times, so a CPU core isn't wasted on a GPU task that doesn't need it (see the sketch after this post). This allows for more control of my systems.

E@H tasks have their own separate ETAs, since BOINC otherwise merges CPU and GPU run times together. Bunkering during competitions is easier, since it's typically GPU or CPU only, or I only want to run GPU tasks. I can also get more tasks this way, and can run more NCI Goofy apps w/o using VMs.

I made a guide a while back; it's not hard. The BOINCTasks developer actually referenced it for his website.
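For the app_config part, a minimal sketch is below. The element names are standard BOINC ones, but the app name and the numbers are placeholders you'd adjust for your own project and hardware.

    <app_config>
        <app>
            <name>einstein_gpu_app</name>  <!-- placeholder: short app name from client_state.xml -->
            <gpu_versions>
                <gpu_usage>1.0</gpu_usage>   <!-- each task gets a whole GPU, i.e. one task per GPU -->
                <cpu_usage>0.25</cpu_usage>  <!-- budget only a quarter of a CPU core per GPU task -->
            </gpu_versions>
        </app>
    </app_config>

For the extra client itself, the usual approach (as far as I know) is to start a second instance of the BOINC client from its own data directory with --allow_multiple_clients and a different --gui_rpc_port, then point BOINCTasks at both ports; check the client configuration docs for the exact flags on your platform.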