7.0.25 Doesn't fetch work when cc_config excludes GPU's

Tex1954
Joined: 3 Mar 12
Posts: 26
Message 43689 - Posted: 23 Apr 2012, 3:08:02 UTC
Last modified: 23 Apr 2012, 3:19:51 UTC

I modified cc_config.xml as follows:

<cc_config>
<log_flags>
<file_xfer>1</file_xfer>
<sched_ops>1</sched_ops>
<task>1</task>
<app_msg_receive>0</app_msg_receive>
<app_msg_send>0</app_msg_send>
<benchmark_debug>0</benchmark_debug>
<checkpoint_debug>0</checkpoint_debug>
<coproc_debug>0</coproc_debug>
<cpu_sched>0</cpu_sched>
<cpu_sched_debug>0</cpu_sched_debug>
<cpu_sched_status>0</cpu_sched_status>
<dcf_debug>0</dcf_debug>
<priority_debug>0</priority_debug>
<file_xfer_debug>0</file_xfer_debug>
<gui_rpc_debug>0</gui_rpc_debug>
<heartbeat_debug>0</heartbeat_debug>
<http_debug>0</http_debug>
<http_xfer_debug>0</http_xfer_debug>
<mem_usage_debug>0</mem_usage_debug>
<network_status_debug>0</network_status_debug>
<poll_debug>0</poll_debug>
<proxy_debug>0</proxy_debug>
<rr_simulation>0</rr_simulation>
<rrsim_detail>0</rrsim_detail>
<sched_op_debug>0</sched_op_debug>
<scrsave_debug>0</scrsave_debug>
<slot_debug>0</slot_debug>
<state_debug>0</state_debug>
<statefile_debug>0</statefile_debug>
<std_debug>0</std_debug>
<task_debug>0</task_debug>
<time_debug>0</time_debug>
<trickle_debug>0</trickle_debug>
<unparsed_xml>0</unparsed_xml>
<work_fetch_debug>0</work_fetch_debug>
<notice_debug>0</notice_debug>
</log_flags>
<options>
<exclude_gpu>
<url>http://einstein.phys.uwm.edu/</url>
<device_num>0</device_num>
<app>einsteinbinary_BRP4</app>
</exclude_gpu>
<exclude_gpu>
<url>http://www.gpugrid.net/</url>
<device_num>1</device_num>
<app>acemd2</app>
</exclude_gpu>

<abort_jobs_on_exit>0</abort_jobs_on_exit>
<allow_multiple_clients>0</allow_multiple_clients>
<allow_remote_gui_rpc>0</allow_remote_gui_rpc>
<client_version_check_url>http://boinc.berkeley.edu/download.php?xml=1</client_version_check_url>
<client_download_url>http://boinc.berkeley.edu/download.php</client_download_url>
<disallow_attach>0</disallow_attach>
<dont_check_file_sizes>0</dont_check_file_sizes>
<dont_contact_ref_site>0</dont_contact_ref_site>
<exit_after_finish>0</exit_after_finish>
<exit_before_start>0</exit_before_start>
<exit_when_idle>0</exit_when_idle>
<fetch_minimal_work>0</fetch_minimal_work>
<force_auth>default</force_auth>
<http_1_0>0</http_1_0>
<http_transfer_timeout>300</http_transfer_timeout>
<http_transfer_timeout_bps>10</http_transfer_timeout_bps>
<max_file_xfers>8</max_file_xfers>
<max_file_xfers_per_project>2</max_file_xfers_per_project>
<max_stderr_file_size>0</max_stderr_file_size>
<max_stdout_file_size>0</max_stdout_file_size>
<max_tasks_reported>0</max_tasks_reported>
<ncpus>-1</ncpus>
<network_test_url>http://www.google.com/</network_test_url>
<no_alt_platform>0</no_alt_platform>
<no_gpus>0</no_gpus>
<no_info_fetch>0</no_info_fetch>
<no_priority_change>0</no_priority_change>
<os_random_only>0</os_random_only>
<proxy_info>
<socks_server_name></socks_server_name>
<socks_server_port>80</socks_server_port>
<http_server_name></http_server_name>
<http_server_port>80</http_server_port>
<socks5_user_name></socks5_user_name>
<socks5_user_passwd></socks5_user_passwd>
<http_user_name></http_user_name>
<http_user_passwd></http_user_passwd>
<no_proxy></no_proxy>
</proxy_info>
<rec_half_life_days>1.000000</rec_half_life_days>
<report_results_immediately>0</report_results_immediately>
<run_apps_manually>0</run_apps_manually>
<save_stats_days>30</save_stats_days>
<skip_cpu_benchmarks>0</skip_cpu_benchmarks>
<simple_gui_only>0</simple_gui_only>
<start_delay>0</start_delay>
<stderr_head>0</stderr_head>
<suppress_net_info>0</suppress_net_info>
<unsigned_apps_ok>0</unsigned_apps_ok>
<use_all_gpus>0</use_all_gpus>
<use_certs>0</use_certs>
<use_certs_only>0</use_certs_only>
<zero_debts>0</zero_debts>
</options>
</cc_config>


What happens is GPUGRID has some tasks waiting in the queue, but they are LONG tasks. Einstein starts with a bunch of tasks, and they ALL finish before GPUGrid has finished. Then GPU 1 sits there idle forever, and even a manual update request won't fetch more Einstein tasks until I suspend GPUGrid.

I know the exclusions are working; BOINC reports this in the log:
Win7-950

Einstein@Home 4/22/2012 10:24:45 PM Config: excluded GPU. Type: all. App: einsteinbinary_BRP4. Device: 0
GPUGRID 4/22/2012 10:24:45 PM Config: excluded GPU. Type: all. App: acemd2. Device: 1

Einstein actually only runs on GPU-1 and GPUGrid on GPU-0... so the exclusions themselves are working.
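For anyone reading along, here's a minimal sketch (not BOINC source, just an illustration of my understanding) of what those two <exclude_gpu> entries do on a two-GPU host: each entry removes one device from one project/app pair, leaving the other device usable.

```python
# Illustrative sketch of <exclude_gpu> semantics, assuming a two-GPU host.
# These entries mirror the cc_config.xml above.
exclusions = [
    {"url": "http://einstein.phys.uwm.edu/", "device": 0, "app": "einsteinbinary_BRP4"},
    {"url": "http://www.gpugrid.net/", "device": 1, "app": "acemd2"},
]

def usable_devices(url, app, num_gpus=2):
    """Return the GPU device numbers this project/app pair may run on."""
    excluded = {e["device"] for e in exclusions
                if e["url"] == url and e["app"] == app}
    return [d for d in range(num_gpus) if d not in excluded]

print(usable_devices("http://einstein.phys.uwm.edu/", "einsteinbinary_BRP4"))  # [1]
print(usable_devices("http://www.gpugrid.net/", "acemd2"))                     # [0]
```

That matches what I see running: Einstein only on device 1, GPUGrid only on device 0.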

HELP!!!

8-)

Tex1954
Joined: 3 Mar 12
Posts: 26
Message 43722 - Posted: 23 Apr 2012, 20:02:07 UTC
Last modified: 23 Apr 2012, 20:42:54 UTC

Another possibly related problem: I have 3 projects running on another computer... an HD6990/AMD 1100T box, Win7 Pro 64-bit, 8 GB RAM.

Milkyway at home ATI GPU project only.

Correlizer CPU only project.
WUProp in background (NCI).
Neurona@home CPU only project, and this one sends ONE task at a time no matter what due to HUGE memory use...

What happens is, Milkyway at home ALWAYS runs in HIGH priority mode, even after doing:

1) The reset <zero_debts>1</zero_debts> - <zero_debts>0</zero_debts> thing in cc_config

2) Tried different settings of <rec_half_life_days>10.000000</rec_half_life_days> from 10 to 0.25.

3) Removed and reinstalled 7.0.25 twice...

4) Tried running Milkyway with and without app_info.xml.

Also, BOINC NEVER fetches work from Neurona after it reports the first task done, EVEN THOUGH I have its resource share set to 10,000 and the others default to 100.
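My rough understanding of the 7.x scheduler (an assumption on my part, not taken from the client source) is that fetch priority scales like a project's share of recent credit (REC) divided by its share of the total resource share, which is why I'd expect Neurona's 10,000 share to win easily:

```python
# Hypothetical sketch of resource-share-weighted fetch priority.
# Assumption: priority ~= -(project's fraction of total REC)
#                         / (project's fraction of total resource share).
def sched_priority(rec, share, total_rec, total_share):
    rec_frac = rec / total_rec
    share_frac = share / total_share
    return -rec_frac / share_frac  # closer to 0 = more deserving of work

# Illustrative totals across all attached projects (made up, not measured):
total_rec, total_share = 200000.0, 10400.0

print(sched_priority(2.758, 10000, total_rec, total_share))    # ~0: should fetch first
print(sched_priority(110122.0, 100, total_rec, total_share))   # ~-57: much lower priority
```

By that logic Neurona should be at the front of the line, yet it never gets asked for work.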

Okay, today, GPUGRID ran out of work and Einstein is still running fine on GPU-1.

But GPU-0 sits idle, and NOTHING except suspending the Einstein project will cause BOINC to fetch more GPUGrid tasks... just the opposite of the first post.

CLEARLY this is a BUG! So, I followed the Alpha-tester instructions for debug settings and, after a while, got this from a COLD restart on the i7-950 system first...

Win7-950

2387 4/23/2012 3:06:12 PM [work_fetch] ------- end work fetch state -------
2388 4/23/2012 3:06:12 PM [work_fetch] No project chosen for work fetch
2389 4/23/2012 3:07:05 PM [cpu_sched_debug] Request CPU reschedule: periodic CPU scheduling
2390 4/23/2012 3:07:05 PM [cpu_sched_debug] schedule_cpus(): start
2391 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] scheduling p2030.20110420.G58.98-00.25.N.b1s0g0.00000_1144_0 (coprocessor job, FIFO) (prio -2.910353)
2392 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] reserving 0.500000 of coproc NVIDIA
2393 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] scheduling p2030.20110420.G58.98-00.25.N.b0s0g0.00000_1144_0 (coprocessor job, FIFO) (prio -2.941196)
2394 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] reserving 0.500000 of coproc NVIDIA
2395 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110420.G58.98-00.25.N.b0s0g0.00000_1096_1
2396 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110420.G58.98-00.25.N.b1s0g0.00000_1152_0
2397 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110428.G60.03-00.26.N.b2s0g0.00000_752_2
2398 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110420.G58.98-00.25.N.b0s0g0.00000_1016_1
2399 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110420.G58.98-00.25.N.b1s0g0.00000_1168_0
2400 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110420.G58.98-00.25.N.b1s0g0.00000_1136_0
2401 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110420.G58.98-00.25.N.b0s0g0.00000_1136_0
2402 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110420.G58.98-00.25.N.b0s0g0.00000_1104_0
2403 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110420.G58.98-00.25.N.b1s0g0.00000_1200_0
2404 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110420.G58.98-00.25.N.b1s0g0.00000_1128_1
2405 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110420.G58.98-00.25.N.b1s0g0.00000_1104_0
2406 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110420.G58.98-00.25.N.b0s0g0.00000_1120_1
2407 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110420.G58.98-00.25.N.b0s0g0.00000_1152_0
2408 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110420.G58.98-00.25.N.b0s0g0.00000_1128_1
2409 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110420.G58.98-00.25.N.b1s0g0.00000_1112_1
2410 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110420.G58.98-00.25.N.b0s0g0.00000_1160_0
2411 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110420.G58.98-00.25.N.b1s0g0.00000_1160_0
2412 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110420.G58.98-00.25.N.b1s0g0.00000_1176_0
2413 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110420.G58.98-00.25.N.b0s0g0.00000_1176_0
2414 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110420.G58.98-00.25.N.b1s0g0.00000_1192_0
2415 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110420.G58.98-00.25.N.b1s0g0.00000_1040_1
2416 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110420.G58.98-00.25.N.b1s0g0.00000_1184_0
2417 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110421.G40.68+00.77.S.b3s0g0.00000_1944_1
2418 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110421.G40.68+00.77.S.b3s0g0.00000_2040_0
2419 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110421.G40.68+00.77.S.b3s0g0.00000_2024_0
2420 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110428.G59.90-00.03.N.b0s0g0.00000_2544_2
2421 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110421.G40.44+01.24.S.b6s0g0.00000_1976_1
2422 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110421.G40.68+00.77.S.b3s0g0.00000_1976_0
2423 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110428.G59.77+00.20.N.b5s0g0.00000_2904_2
2424 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110421.G40.44+01.24.S.b6s0g0.00000_2056_0
2425 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110421.G40.44+01.24.S.b6s0g0.00000_2016_1
2426 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110421.G40.44+01.24.S.b6s0g0.00000_2032_0
2427 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110421.G40.68+00.77.S.b3s0g0.00000_2056_1
2428 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110421.G40.68+00.77.S.b3s0g0.00000_1960_1
2429 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110421.G40.68+00.77.S.b3s0g0.00000_2064_0
2430 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110421.G40.44+01.24.S.b6s0g0.00000_1912_0
2431 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110421.G40.44+01.24.S.b6s0g0.00000_2040_1
2432 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110421.G40.44+01.24.S.b6s0g0.00000_1992_0
2433 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110421.G40.68+00.77.S.b3s0g0.00000_2048_1
2434 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110421.G40.44+01.24.S.b6s0g0.00000_1936_1
2435 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110421.G40.68+00.77.S.b3s0g0.00000_1992_1
2436 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110421.G40.68+00.77.S.b3s0g0.00000_1968_1
2437 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110421.G40.44+01.24.S.b6s0g0.00000_2008_1
2438 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110421.G40.68+00.77.S.b3s0g0.00000_2000_0
2439 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110421.G40.44+01.24.S.b6s0g0.00000_2064_0
2440 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110421.G40.68+00.77.S.b3s0g0.00000_1896_1
2441 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110421.G40.44+01.24.S.b6s0g0.00000_2072_0
2442 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110428.G59.77+00.20.N.b5s0g0.00000_2832_2
2443 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110421.G40.44+01.24.S.b6s0g0.00000_1968_1
2444 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] insufficient NVIDIA for p2030.20110421.G40.44+01.24.S.b6s0g0.00000_1960_0
2445 yoyo@home 4/23/2012 3:07:05 PM [cpu_sched_debug] scheduling muon_120421220202_105_0 (CPU job, priority order) (prio -0.039554)
2446 yoyo@home 4/23/2012 3:07:05 PM [cpu_sched_debug] scheduling evo_A1334981706-7-0_4.29MB_60_0 (CPU job, priority order) (prio -0.040330)
2447 SZTAKI Desktop Grid 4/23/2012 3:07:05 PM [cpu_sched_debug] scheduling dbbb0b60-d41c-41ea-8e5f-d1777c9dfd8b_5bb422d5-1264-4a92-adca-5ae1756d1852_689343_2 (CPU job, priority order) (prio -0.040426)
2448 yoyo@home 4/23/2012 3:07:05 PM [cpu_sched_debug] scheduling ecm_xy_1334961911_C200_118_77_3625_0 (CPU job, priority order) (prio -0.041105)
2449 SZTAKI Desktop Grid 4/23/2012 3:07:05 PM [cpu_sched_debug] scheduling dbbb0b60-d41c-41ea-8e5f-d1777c9dfd8b_08e14bc1-7f20-461b-9855-9c8363fd6ae9_689302_3 (CPU job, priority order) (prio -0.041201)
2450 4/23/2012 3:07:05 PM [cpu_sched_debug] enforce_schedule(): start
2451 4/23/2012 3:07:05 PM [cpu_sched_debug] preliminary job list:
2452 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] 0: p2030.20110420.G58.98-00.25.N.b1s0g0.00000_1144_0 (MD: no; UTS: yes)
2453 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] 1: p2030.20110420.G58.98-00.25.N.b0s0g0.00000_1144_0 (MD: no; UTS: yes)
2454 yoyo@home 4/23/2012 3:07:05 PM [cpu_sched_debug] 2: muon_120421220202_105_0 (MD: no; UTS: yes)
2455 yoyo@home 4/23/2012 3:07:05 PM [cpu_sched_debug] 3: evo_A1334981706-7-0_4.29MB_60_0 (MD: no; UTS: yes)
2456 SZTAKI Desktop Grid 4/23/2012 3:07:05 PM [cpu_sched_debug] 4: dbbb0b60-d41c-41ea-8e5f-d1777c9dfd8b_5bb422d5-1264-4a92-adca-5ae1756d1852_689343_2 (MD: no; UTS: yes)
2457 yoyo@home 4/23/2012 3:07:05 PM [cpu_sched_debug] 5: ecm_xy_1334961911_C200_118_77_3625_0 (MD: no; UTS: yes)
2458 SZTAKI Desktop Grid 4/23/2012 3:07:05 PM [cpu_sched_debug] 6: dbbb0b60-d41c-41ea-8e5f-d1777c9dfd8b_08e14bc1-7f20-461b-9855-9c8363fd6ae9_689302_3 (MD: no; UTS: no)
2459 4/23/2012 3:07:05 PM [cpu_sched_debug] final job list:
2460 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] 0: p2030.20110420.G58.98-00.25.N.b1s0g0.00000_1144_0 (MD: no; UTS: yes)
2461 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] 1: p2030.20110420.G58.98-00.25.N.b0s0g0.00000_1144_0 (MD: no; UTS: yes)
2462 yoyo@home 4/23/2012 3:07:05 PM [cpu_sched_debug] 2: muon_120421220202_105_0 (MD: no; UTS: yes)
2463 yoyo@home 4/23/2012 3:07:05 PM [cpu_sched_debug] 3: evo_A1334981706-7-0_4.29MB_60_0 (MD: no; UTS: yes)
2464 SZTAKI Desktop Grid 4/23/2012 3:07:05 PM [cpu_sched_debug] 4: dbbb0b60-d41c-41ea-8e5f-d1777c9dfd8b_5bb422d5-1264-4a92-adca-5ae1756d1852_689343_2 (MD: no; UTS: yes)
2465 yoyo@home 4/23/2012 3:07:05 PM [cpu_sched_debug] 5: ecm_xy_1334961911_C200_118_77_3625_0 (MD: no; UTS: yes)
2466 yoyo@home 4/23/2012 3:07:05 PM [cpu_sched_debug] 6: evo_A1335037206-88-0_42.9MB_79.92_0 (MD: no; UTS: yes)
2467 SZTAKI Desktop Grid 4/23/2012 3:07:05 PM [cpu_sched_debug] 7: dbbb0b60-d41c-41ea-8e5f-d1777c9dfd8b_08e14bc1-7f20-461b-9855-9c8363fd6ae9_689302_3 (MD: no; UTS: no)
2468 Einstein@Home 4/23/2012 3:07:05 PM [coproc] NVIDIA instance 1: confirming for p2030.20110420.G58.98-00.25.N.b1s0g0.00000_1144_0
2469 Einstein@Home 4/23/2012 3:07:05 PM [coproc] NVIDIA instance 1: confirming for p2030.20110420.G58.98-00.25.N.b0s0g0.00000_1144_0
2470 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] scheduling p2030.20110420.G58.98-00.25.N.b1s0g0.00000_1144_0
2471 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] scheduling p2030.20110420.G58.98-00.25.N.b0s0g0.00000_1144_0
2472 yoyo@home 4/23/2012 3:07:05 PM [cpu_sched_debug] scheduling muon_120421220202_105_0
2473 yoyo@home 4/23/2012 3:07:05 PM [cpu_sched_debug] scheduling evo_A1334981706-7-0_4.29MB_60_0
2474 SZTAKI Desktop Grid 4/23/2012 3:07:05 PM [cpu_sched_debug] scheduling dbbb0b60-d41c-41ea-8e5f-d1777c9dfd8b_5bb422d5-1264-4a92-adca-5ae1756d1852_689343_2
2475 yoyo@home 4/23/2012 3:07:05 PM [cpu_sched_debug] scheduling ecm_xy_1334961911_C200_118_77_3625_0
2476 yoyo@home 4/23/2012 3:07:05 PM [cpu_sched_debug] scheduling evo_A1335037206-88-0_42.9MB_79.92_0
2477 SZTAKI Desktop Grid 4/23/2012 3:07:05 PM [cpu_sched_debug] all CPUs used (5.40 >= 5), skipping dbbb0b60-d41c-41ea-8e5f-d1777c9dfd8b_08e14bc1-7f20-461b-9855-9c8363fd6ae9_689302_3
2478 World Community Grid 4/23/2012 3:07:05 PM [cpu_sched_debug] faah33222_ZINC02857073_x3NF6b_00_0 sched state 1 next 1 task state 0
2479 World Community Grid 4/23/2012 3:07:05 PM [cpu_sched_debug] cfsw_0235_00235607_1 sched state 1 next 1 task state 0
2480 SZTAKI Desktop Grid 4/23/2012 3:07:05 PM [cpu_sched_debug] dbbb0b60-d41c-41ea-8e5f-d1777c9dfd8b_5bb422d5-1264-4a92-adca-5ae1756d1852_689343_2 sched state 2 next 2 task state 1
2481 WUProp@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] wu_v3_1335155701_21118_0 sched state 2 next 2 task state 1
2482 SZTAKI Desktop Grid 4/23/2012 3:07:05 PM [cpu_sched_debug] dbbb0b60-d41c-41ea-8e5f-d1777c9dfd8b_08e14bc1-7f20-461b-9855-9c8363fd6ae9_689302_3 sched state 1 next 1 task state 0
2483 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] p2030.20110420.G58.98-00.25.N.b1s0g0.00000_1144_0 sched state 2 next 2 task state 1
2484 Einstein@Home 4/23/2012 3:07:05 PM [cpu_sched_debug] p2030.20110420.G58.98-00.25.N.b0s0g0.00000_1144_0 sched state 2 next 2 task state 1
2485 yoyo@home 4/23/2012 3:07:05 PM [cpu_sched_debug] muon_120421220202_105_0 sched state 2 next 2 task state 1
2486 yoyo@home 4/23/2012 3:07:05 PM [cpu_sched_debug] evo_A1334981706-7-0_4.29MB_60_0 sched state 2 next 2 task state 1
2487 yoyo@home 4/23/2012 3:07:05 PM [cpu_sched_debug] ecm_xy_1334961911_C200_118_77_3625_0 sched state 2 next 2 task state 1
2488 yoyo@home 4/23/2012 3:07:05 PM [cpu_sched_debug] evo_A1335037206-88-0_42.9MB_79.92_0 sched state 2 next 2 task state 1
2489 4/23/2012 3:07:05 PM [cpu_sched_debug] enforce_schedule: end
2490 4/23/2012 3:07:12 PM [work_fetch] work fetch start
2491 4/23/2012 3:07:12 PM [work_fetch] ------- start work fetch state -------
2492 4/23/2012 3:07:12 PM [work_fetch] target work buffer: 21600.00 + 21600.00 sec
2493 ABC@home 4/23/2012 3:07:12 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
2494 Albert@Home 4/23/2012 3:07:12 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
2495 rosetta@home 4/23/2012 3:07:12 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
2496 DistrRTgen 4/23/2012 3:07:12 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
2497 Poem@Home 4/23/2012 3:07:12 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
2498 Leiden Classical 4/23/2012 3:07:12 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
2499 Collatz Conjecture 4/23/2012 3:07:12 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
2500 The Lattice Project 4/23/2012 3:07:12 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
2501 boincsimap 4/23/2012 3:07:12 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
2502 CAS@home 4/23/2012 3:07:12 PM [work_fetch] REC 10.795 priority -0.000000 (no new tasks)
2503 climateprediction.net 4/23/2012 3:07:12 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
2504 Docking 4/23/2012 3:07:12 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
2505 Einstein@Home 4/23/2012 3:07:12 PM [work_fetch] REC 110122.230 priority -58.322806
2506 eon2 4/23/2012 3:07:12 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
2507 NFS@Home 4/23/2012 3:07:12 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
2508 Neurona@Home 4/23/2012 3:07:12 PM [work_fetch] REC 2.758 priority -0.000029
2509 EDGeS@Home 4/23/2012 3:07:12 PM [work_fetch] REC 21.072 priority -0.000000 (project backoff 49257.67) (master fetch pending)
2510 LHC@home 1.0 4/23/2012 3:07:12 PM [work_fetch] REC 0.000 priority -0.000000
2511 Milkyway@home 4/23/2012 3:07:12 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
2512 Moo! Wrapper 4/23/2012 3:07:12 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
2513 orbit@home 4/23/2012 3:07:12 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
2514 QMC@HOME 4/23/2012 3:07:12 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
2515 SETI@home 4/23/2012 3:07:12 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
2516 Spinhenge@home 4/23/2012 3:07:12 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
2517 sudoku 4/23/2012 3:07:12 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
2518 correlizer 4/23/2012 3:07:12 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
2519 SZTAKI Desktop Grid 4/23/2012 3:07:12 PM [work_fetch] REC 764.814 priority -0.810518
2520 Cosmology@Home 4/23/2012 3:07:12 PM [work_fetch] REC 5.180 priority -0.000000 (no new tasks)
2521 GPUGRID 4/23/2012 3:07:12 PM [work_fetch] REC 82260.890 priority -86.976115
2522 malariacontrol.net 4/23/2012 3:07:12 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
2523 PrimeGrid 4/23/2012 3:07:12 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
2524 yoyo@home 4/23/2012 3:07:12 PM [work_fetch] REC 748.326 priority -0.806278
2525 RNA World 4/23/2012 3:07:12 PM [work_fetch] REC 907.709 priority -0.962139
2526 World Community Grid 4/23/2012 3:07:12 PM [work_fetch] REC 971.208 priority -1.034450
2527 4/23/2012 3:07:12 PM [work_fetch] CPU: shortfall 0.00 nidle 0.00 saturated 52244.27 busy 0.00
2528 ABC@home 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2529 Albert@Home 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2530 rosetta@home 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2531 DistrRTgen 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2532 Poem@Home 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00) (blocked by prefs)
2533 Leiden Classical 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2534 Collatz Conjecture 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2535 The Lattice Project 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2536 boincsimap 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2537 CAS@home 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2538 climateprediction.net 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2539 Docking 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2540 Einstein@Home 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00) (blocked by prefs) (no apps)
2541 eon2 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2542 NFS@Home 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2543 Neurona@Home 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.490 rsc backoff (dt 0.00, inc 0.00)
2544 EDGeS@Home 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2545 LHC@home 1.0 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.490 rsc backoff (dt 0.00, inc 0.00)
2546 Milkyway@home 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2547 Moo! Wrapper 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00) (no apps)
2548 orbit@home 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2549 QMC@HOME 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2550 SETI@home 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2551 Spinhenge@home 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2552 sudoku 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2553 correlizer 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2554 SZTAKI Desktop Grid 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.005 rsc backoff (dt 0.00, inc 0.00)
2555 Cosmology@Home 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2556 GPUGRID 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00) (no apps)
2557 malariacontrol.net 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2558 PrimeGrid 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2559 yoyo@home 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.005 rsc backoff (dt 0.00, inc 0.00)
2560 RNA World 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.005 rsc backoff (dt 0.00, inc 0.00)
2561 World Community Grid 4/23/2012 3:07:12 PM [work_fetch] CPU: fetch share 0.005 rsc backoff (dt 0.00, inc 0.00)
2562 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: shortfall 1920.76 nidle 0.00 saturated 41278.98 busy 0.00
2563 ABC@home 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2564 Albert@Home 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2565 rosetta@home 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2566 DistrRTgen 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2567 Poem@Home 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2568 Leiden Classical 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2569 Collatz Conjecture 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2570 The Lattice Project 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2571 boincsimap 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00) (no apps)
2572 CAS@home 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00) (no apps)
2573 climateprediction.net 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2574 Docking 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2575 Einstein@Home 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.010 rsc backoff (dt 0.00, inc 0.00)
2576 eon2 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2577 NFS@Home 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2578 Neurona@Home 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.483 rsc backoff (dt 0.00, inc 0.00)
2579 EDGeS@Home 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2580 LHC@home 1.0 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.483 rsc backoff (dt 0.00, inc 0.00)
2581 Milkyway@home 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2582 Moo! Wrapper 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00) (no apps)
2583 orbit@home 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2584 QMC@HOME 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2585 SETI@home 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2586 Spinhenge@home 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2587 sudoku 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2588 correlizer 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2589 SZTAKI Desktop Grid 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.005 rsc backoff (dt 0.00, inc 0.00)
2590 Cosmology@Home 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2591 GPUGRID 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.005 rsc backoff (dt 0.00, inc 0.00)
2592 malariacontrol.net 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2593 PrimeGrid 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
2594 yoyo@home 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.005 rsc backoff (dt 0.00, inc 0.00)
2595 RNA World 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.005 rsc backoff (dt 0.00, inc 0.00)
2596 World Community Grid 4/23/2012 3:07:12 PM [work_fetch] NVIDIA: fetch share 0.005 rsc backoff (dt 0.00, inc 0.00)
2597 4/23/2012 3:07:12 PM [work_fetch] ------- end work fetch state -------
2598 4/23/2012 3:07:12 PM [work_fetch] No project chosen for work fetch
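As a sanity check on that dump, the NVIDIA shortfall figure lines up with the buffer arithmetic using only numbers printed in the log above, so the client clearly knows the GPUs want work, yet it still picks no project:

```python
# Cross-check of the NVIDIA summary line using only values from the log:
#   [work_fetch] target work buffer: 21600.00 + 21600.00 sec
#   [work_fetch] NVIDIA: shortfall 1920.76 ... saturated 41278.98 ...
target = 21600.00 + 21600.00   # desired seconds of queued GPU work
saturated = 41278.98           # seconds the NVIDIA instances stay busy
shortfall = target - saturated
print(round(shortfall, 2))     # 1921.02, matching the logged 1920.76 to within rounding
```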


Then I did the same thing on the HD6990/1100T box... Next post...

Tex1954
Joined: 3 Mar 12
Posts: 26
Message 43723 - Posted: 23 Apr 2012, 20:47:49 UTC

Okay, just for fun, I changed the FLOPS in app_info.xml to a LOW value to see if BOINC was calculating estimates properly, and got this:

http://i.imgur.com/GDHmY.jpg

LOL! Days and hours to finish a task that normally runs in 54 seconds...

Changed the FLOPS back to the proper value and got this:

http://i.imgur.com/SbCJn.jpg

Almost exactly right running two tasks per GPU... so that much works!
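That behavior is consistent with the client estimating runtime as the task's floating-point-op estimate divided by the <flops> value claimed in app_info.xml. The numbers in this sketch are made up for illustration, not taken from my actual tasks:

```python
# Hypothetical numbers; the mechanism assumed is:
#   estimated runtime = task's <rsc_fpops_est> / <flops> claimed in app_info.xml
rsc_fpops_est = 5.4e13          # made-up per-task floating-point op count

def est_runtime_sec(flops):
    return rsc_fpops_est / flops

print(est_runtime_sec(1.0e12))  # 54 sec with a sane FLOPS value
print(est_runtime_sec(1.0e9))   # 54000 sec (15 hours) with a lowballed value
```

So a lowballed FLOPS inflates the estimate by exactly the same factor, which is what the first screenshot showed.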

Then I let it run, all tasks in HIGH PRIORITY mode, and did the debug capture...

Win7-R400

3497 4/23/2012 3:50:44 PM [work_fetch] Request work fetch: Backoff ended for Milkyway@Home
3498 4/23/2012 3:50:48 PM [work_fetch] work fetch start
3499 4/23/2012 3:50:48 PM [work_fetch] ------- start work fetch state -------
3500 4/23/2012 3:50:48 PM [work_fetch] target work buffer: 21600.00 + 8640.00 sec
3501 boincsimap 4/23/2012 3:50:48 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
3502 eon2 4/23/2012 3:50:48 PM [work_fetch] REC 177.416 priority -0.000000 (no new tasks)
3503 Neurona@Home 4/23/2012 3:50:48 PM [work_fetch] REC 16.863 priority -1.000000
3504 LHC@home 1.0 4/23/2012 3:50:48 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
3505 Milkyway@Home 4/23/2012 3:50:48 PM [work_fetch] REC 669484.332 priority -42.244149 (no new tasks)
3506 correlizer 4/23/2012 3:50:48 PM [work_fetch] REC 1516.003 priority -12648.340063 (no new tasks)
3507 Cosmology@Home 4/23/2012 3:50:48 PM [work_fetch] REC 53.793 priority -0.000000 (no new tasks)
3508 RNA World 4/23/2012 3:50:48 PM [work_fetch] REC 146.388 priority -0.000000 (no new tasks)
3509 4/23/2012 3:50:48 PM [work_fetch] CPU: shortfall 0.00 nidle 0.00 saturated 35440.34 busy 0.00
3510 boincsimap 4/23/2012 3:50:48 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3511 eon2 4/23/2012 3:50:48 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3512 Neurona@Home 4/23/2012 3:50:48 PM [work_fetch] CPU: fetch share 1.000 rsc backoff (dt 0.00, inc 0.00)
3513 LHC@home 1.0 4/23/2012 3:50:48 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3514 Milkyway@Home 4/23/2012 3:50:48 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00) (blocked by prefs) (no apps)
3515 correlizer 4/23/2012 3:50:48 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3516 Cosmology@Home 4/23/2012 3:50:48 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3517 RNA World 4/23/2012 3:50:48 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3518 4/23/2012 3:50:48 PM [work_fetch] ATI: shortfall 60480.00 nidle 2.00 saturated 0.00 busy 0.00
3519 boincsimap 4/23/2012 3:50:48 PM [work_fetch] ATI: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00) (no apps)
3520 eon2 4/23/2012 3:50:48 PM [work_fetch] ATI: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3521 Neurona@Home 4/23/2012 3:50:48 PM [work_fetch] ATI: fetch share 0.000 rsc backoff (dt 940.26, inc 1200.00)
3522 LHC@home 1.0 4/23/2012 3:50:48 PM [work_fetch] ATI: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3523 Milkyway@Home 4/23/2012 3:50:48 PM [work_fetch] ATI: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3524 correlizer 4/23/2012 3:50:48 PM [work_fetch] ATI: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3525 Cosmology@Home 4/23/2012 3:50:48 PM [work_fetch] ATI: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3526 RNA World 4/23/2012 3:50:48 PM [work_fetch] ATI: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3527 4/23/2012 3:50:48 PM [work_fetch] ------- end work fetch state -------
3528 4/23/2012 3:50:48 PM [work_fetch] No project chosen for work fetch
3529 4/23/2012 3:51:20 PM [cpu_sched_debug] Request CPU reschedule: periodic CPU scheduling
3530 4/23/2012 3:51:20 PM [cpu_sched_debug] schedule_cpus(): start
3531 correlizer 4/23/2012 3:51:20 PM [cpu_sched_debug] scheduling rc_4004759_1 (CPU job, priority order) (prio -1.000000)
3532 correlizer 4/23/2012 3:51:20 PM [cpu_sched_debug] scheduling rc_4000468_1 (CPU job, priority order) (prio -1.000023)
3533 correlizer 4/23/2012 3:51:20 PM [cpu_sched_debug] scheduling rc_4004621_0 (CPU job, priority order) (prio -1.000047)
3534 correlizer 4/23/2012 3:51:20 PM [cpu_sched_debug] scheduling rc_4001379_0 (CPU job, priority order) (prio -1.000070)
3535 4/23/2012 3:51:20 PM [cpu_sched_debug] enforce_schedule(): start
3536 4/23/2012 3:51:20 PM [cpu_sched_debug] preliminary job list:
3537 correlizer 4/23/2012 3:51:20 PM [cpu_sched_debug] 0: rc_4004759_1 (MD: no; UTS: yes)
3538 correlizer 4/23/2012 3:51:20 PM [cpu_sched_debug] 1: rc_4000468_1 (MD: no; UTS: yes)
3539 correlizer 4/23/2012 3:51:20 PM [cpu_sched_debug] 2: rc_4004621_0 (MD: no; UTS: yes)
3540 correlizer 4/23/2012 3:51:20 PM [cpu_sched_debug] 3: rc_4001379_0 (MD: no; UTS: yes)
3541 4/23/2012 3:51:20 PM [cpu_sched_debug] final job list:
3542 correlizer 4/23/2012 3:51:20 PM [cpu_sched_debug] 0: rc_4004759_1 (MD: no; UTS: yes)
3543 correlizer 4/23/2012 3:51:20 PM [cpu_sched_debug] 1: rc_4000468_1 (MD: no; UTS: yes)
3544 correlizer 4/23/2012 3:51:20 PM [cpu_sched_debug] 2: rc_4004621_0 (MD: no; UTS: yes)
3545 correlizer 4/23/2012 3:51:20 PM [cpu_sched_debug] 3: rc_4001379_0 (MD: no; UTS: yes)
3546 correlizer 4/23/2012 3:51:20 PM [cpu_sched_debug] scheduling rc_4004759_1
3547 correlizer 4/23/2012 3:51:20 PM [cpu_sched_debug] scheduling rc_4000468_1
3548 correlizer 4/23/2012 3:51:20 PM [cpu_sched_debug] scheduling rc_4004621_0
3549 correlizer 4/23/2012 3:51:20 PM [cpu_sched_debug] scheduling rc_4001379_0
3550 WUProp@Home 4/23/2012 3:51:20 PM [cpu_sched_debug] wu_v3_1335155701_20089_0 sched state 2 next 2 task state 1
3551 correlizer 4/23/2012 3:51:20 PM [cpu_sched_debug] rc_4004759_1 sched state 2 next 2 task state 1
3552 correlizer 4/23/2012 3:51:20 PM [cpu_sched_debug] rc_4000468_1 sched state 2 next 2 task state 1
3553 correlizer 4/23/2012 3:51:20 PM [cpu_sched_debug] rc_4004621_0 sched state 2 next 2 task state 1
3554 correlizer 4/23/2012 3:51:20 PM [cpu_sched_debug] rc_4001379_0 sched state 2 next 2 task state 1
3555 WUProp@Home 4/23/2012 3:51:20 PM [css] running wu_v3_1335155701_20089_0 ( )
3556 correlizer 4/23/2012 3:51:20 PM [css] running rc_4004759_1 ( )
3557 correlizer 4/23/2012 3:51:20 PM [css] running rc_4000468_1 ( )
3558 correlizer 4/23/2012 3:51:20 PM [css] running rc_4004621_0 ( )
3559 correlizer 4/23/2012 3:51:20 PM [css] running rc_4001379_0 ( )
3560 4/23/2012 3:51:20 PM [cpu_sched_debug] enforce_schedule: end
3561 4/23/2012 3:51:48 PM [work_fetch] work fetch start
3562 4/23/2012 3:51:48 PM [work_fetch] ------- start work fetch state -------
3563 4/23/2012 3:51:48 PM [work_fetch] target work buffer: 21600.00 + 8640.00 sec
3564 boincsimap 4/23/2012 3:51:48 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
3565 eon2 4/23/2012 3:51:48 PM [work_fetch] REC 177.075 priority -0.000000 (no new tasks)
3566 Neurona@Home 4/23/2012 3:51:48 PM [work_fetch] REC 16.831 priority -1.000000
3567 LHC@home 1.0 4/23/2012 3:51:48 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
3568 Milkyway@Home 4/23/2012 3:51:48 PM [work_fetch] REC 668196.541 priority -4.353203 (no new tasks)
3569 correlizer 4/23/2012 3:51:48 PM [work_fetch] REC 1517.752 priority -12672.150553 (no new tasks)
3570 Cosmology@Home 4/23/2012 3:51:48 PM [work_fetch] REC 53.689 priority -0.000000 (no new tasks)
3571 RNA World 4/23/2012 3:51:48 PM [work_fetch] REC 146.106 priority -0.000000 (no new tasks)
3572 4/23/2012 3:51:48 PM [work_fetch] CPU: shortfall 0.00 nidle 0.00 saturated 35367.13 busy 0.00
3573 boincsimap 4/23/2012 3:51:48 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3574 eon2 4/23/2012 3:51:48 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3575 Neurona@Home 4/23/2012 3:51:48 PM [work_fetch] CPU: fetch share 1.000 rsc backoff (dt 0.00, inc 0.00)
3576 LHC@home 1.0 4/23/2012 3:51:48 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3577 Milkyway@Home 4/23/2012 3:51:48 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00) (blocked by prefs) (no apps)
3578 correlizer 4/23/2012 3:51:48 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3579 Cosmology@Home 4/23/2012 3:51:48 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3580 RNA World 4/23/2012 3:51:48 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3581 4/23/2012 3:51:48 PM [work_fetch] ATI: shortfall 60480.00 nidle 2.00 saturated 0.00 busy 0.00
3582 boincsimap 4/23/2012 3:51:48 PM [work_fetch] ATI: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00) (no apps)
3583 eon2 4/23/2012 3:51:48 PM [work_fetch] ATI: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3584 Neurona@Home 4/23/2012 3:51:48 PM [work_fetch] ATI: fetch share 0.000 rsc backoff (dt 880.26, inc 1200.00)
3585 LHC@home 1.0 4/23/2012 3:51:48 PM [work_fetch] ATI: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3586 Milkyway@Home 4/23/2012 3:51:48 PM [work_fetch] ATI: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3587 correlizer 4/23/2012 3:51:48 PM [work_fetch] ATI: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3588 Cosmology@Home 4/23/2012 3:51:48 PM [work_fetch] ATI: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3589 RNA World 4/23/2012 3:51:48 PM [work_fetch] ATI: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3590 4/23/2012 3:51:48 PM [work_fetch] ------- end work fetch state -------
3591 4/23/2012 3:51:48 PM [work_fetch] No project chosen for work fetch
3592 4/23/2012 3:52:20 PM [cpu_sched_debug] Request CPU reschedule: periodic CPU scheduling
3593 4/23/2012 3:52:20 PM [cpu_sched_debug] schedule_cpus(): start
3594 correlizer 4/23/2012 3:52:20 PM [cpu_sched_debug] scheduling rc_4004759_1 (CPU job, priority order) (prio -1.000000)
3595 correlizer 4/23/2012 3:52:20 PM [cpu_sched_debug] scheduling rc_4000468_1 (CPU job, priority order) (prio -1.000023)
3596 correlizer 4/23/2012 3:52:20 PM [cpu_sched_debug] scheduling rc_4004621_0 (CPU job, priority order) (prio -1.000047)
3597 correlizer 4/23/2012 3:52:20 PM [cpu_sched_debug] scheduling rc_4001379_0 (CPU job, priority order) (prio -1.000070)
3598 4/23/2012 3:52:20 PM [cpu_sched_debug] enforce_schedule(): start
3599 4/23/2012 3:52:20 PM [cpu_sched_debug] preliminary job list:
3600 correlizer 4/23/2012 3:52:20 PM [cpu_sched_debug] 0: rc_4004759_1 (MD: no; UTS: yes)
3601 correlizer 4/23/2012 3:52:20 PM [cpu_sched_debug] 1: rc_4000468_1 (MD: no; UTS: yes)
3602 correlizer 4/23/2012 3:52:20 PM [cpu_sched_debug] 2: rc_4004621_0 (MD: no; UTS: yes)
3603 correlizer 4/23/2012 3:52:20 PM [cpu_sched_debug] 3: rc_4001379_0 (MD: no; UTS: yes)
3604 4/23/2012 3:52:20 PM [cpu_sched_debug] final job list:
3605 correlizer 4/23/2012 3:52:20 PM [cpu_sched_debug] 0: rc_4004759_1 (MD: no; UTS: yes)
3606 correlizer 4/23/2012 3:52:20 PM [cpu_sched_debug] 1: rc_4000468_1 (MD: no; UTS: yes)
3607 correlizer 4/23/2012 3:52:20 PM [cpu_sched_debug] 2: rc_4004621_0 (MD: no; UTS: yes)
3608 correlizer 4/23/2012 3:52:20 PM [cpu_sched_debug] 3: rc_4001379_0 (MD: no; UTS: yes)
3609 correlizer 4/23/2012 3:52:20 PM [cpu_sched_debug] scheduling rc_4004759_1
3610 correlizer 4/23/2012 3:52:20 PM [cpu_sched_debug] scheduling rc_4000468_1
3611 correlizer 4/23/2012 3:52:20 PM [cpu_sched_debug] scheduling rc_4004621_0
3612 correlizer 4/23/2012 3:52:20 PM [cpu_sched_debug] scheduling rc_4001379_0
3613 WUProp@Home 4/23/2012 3:52:20 PM [cpu_sched_debug] wu_v3_1335155701_20089_0 sched state 2 next 2 task state 1
3614 correlizer 4/23/2012 3:52:20 PM [cpu_sched_debug] rc_4004759_1 sched state 2 next 2 task state 1
3615 correlizer 4/23/2012 3:52:20 PM [cpu_sched_debug] rc_4000468_1 sched state 2 next 2 task state 1
3616 correlizer 4/23/2012 3:52:20 PM [cpu_sched_debug] rc_4004621_0 sched state 2 next 2 task state 1
3617 correlizer 4/23/2012 3:52:20 PM [cpu_sched_debug] rc_4001379_0 sched state 2 next 2 task state 1
3618 WUProp@Home 4/23/2012 3:52:20 PM [css] running wu_v3_1335155701_20089_0 ( )
3619 correlizer 4/23/2012 3:52:20 PM [css] running rc_4004759_1 ( )
3620 correlizer 4/23/2012 3:52:20 PM [css] running rc_4000468_1 ( )
3621 correlizer 4/23/2012 3:52:20 PM [css] running rc_4004621_0 ( )
3622 correlizer 4/23/2012 3:52:20 PM [css] running rc_4001379_0 ( )
3623 4/23/2012 3:52:20 PM [cpu_sched_debug] enforce_schedule: end
3624 4/23/2012 3:52:48 PM [work_fetch] work fetch start
3625 4/23/2012 3:52:48 PM [work_fetch] ------- start work fetch state -------
3626 4/23/2012 3:52:48 PM [work_fetch] target work buffer: 21600.00 + 8640.00 sec
3627 boincsimap 4/23/2012 3:52:48 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
3628 eon2 4/23/2012 3:52:48 PM [work_fetch] REC 176.734 priority -0.000000 (no new tasks)
3629 Neurona@Home 4/23/2012 3:52:48 PM [work_fetch] REC 16.798 priority -1.000000
3630 LHC@home 1.0 4/23/2012 3:52:48 PM [work_fetch] REC 0.000 priority -0.000000 (no new tasks)
3631 Milkyway@Home 4/23/2012 3:52:48 PM [work_fetch] REC 666911.227 priority -0.450693 (no new tasks)
3632 correlizer 4/23/2012 3:52:48 PM [work_fetch] REC 1519.497 priority -12696.096685 (no new tasks)
3633 Cosmology@Home 4/23/2012 3:52:48 PM [work_fetch] REC 53.586 priority -0.000000 (no new tasks)
3634 RNA World 4/23/2012 3:52:48 PM [work_fetch] REC 145.825 priority -0.000000 (no new tasks)
3635 4/23/2012 3:52:48 PM [work_fetch] CPU: shortfall 0.00 nidle 0.00 saturated 35295.99 busy 0.00
3636 boincsimap 4/23/2012 3:52:48 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3637 eon2 4/23/2012 3:52:48 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3638 Neurona@Home 4/23/2012 3:52:48 PM [work_fetch] CPU: fetch share 1.000 rsc backoff (dt 0.00, inc 0.00)
3639 LHC@home 1.0 4/23/2012 3:52:48 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3640 Milkyway@Home 4/23/2012 3:52:48 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00) (blocked by prefs) (no apps)
3641 correlizer 4/23/2012 3:52:48 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3642 Cosmology@Home 4/23/2012 3:52:48 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3643 RNA World 4/23/2012 3:52:48 PM [work_fetch] CPU: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3644 4/23/2012 3:52:48 PM [work_fetch] ATI: shortfall 60480.00 nidle 2.00 saturated 0.00 busy 0.00
3645 boincsimap 4/23/2012 3:52:48 PM [work_fetch] ATI: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00) (no apps)
3646 eon2 4/23/2012 3:52:48 PM [work_fetch] ATI: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3647 Neurona@Home 4/23/2012 3:52:48 PM [work_fetch] ATI: fetch share 0.000 rsc backoff (dt 820.26, inc 1200.00)
3648 LHC@home 1.0 4/23/2012 3:52:48 PM [work_fetch] ATI: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3649 Milkyway@Home 4/23/2012 3:52:48 PM [work_fetch] ATI: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3650 correlizer 4/23/2012 3:52:48 PM [work_fetch] ATI: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3651 Cosmology@Home 4/23/2012 3:52:48 PM [work_fetch] ATI: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3652 RNA World 4/23/2012 3:52:48 PM [work_fetch] ATI: fetch share 0.000 rsc backoff (dt 0.00, inc 0.00)
3653 4/23/2012 3:52:48 PM [work_fetch] ------- end work fetch state -------
3654 4/23/2012 3:52:48 PM [work_fetch] No project chosen for work fetch
3655 4/23/2012 3:53:13 PM Suspending computation - user request


I hope that helps... looks like I will have to go back to 6.12.34 on all boxes to resolve all the problems I am having AFTER running 7.0.25 for over a week..

8-)

Tex1954
Send message
Joined: 3 Mar 12
Posts: 26
Message 43778 - Posted: 25 Apr 2012, 18:52:38 UTC - in response to Message 43723.

Update: Upgrading to 7.0.26 seems to have cured the problem... BUT, it took an hour for it to start loading GPUGrid tasks, and thereafter it keeps one in queue all the time...

So, at least that part works better now.

8-)

Profile Ageless
Volunteer moderator
Avatar
Send message
Joined: 29 Aug 05
Posts: 8733
Message 43793 - Posted: 26 Apr 2012, 19:40:09 UTC - in response to Message 43722.

1) The reset <zero_debts>1</zero_debts> - <zero_debts>0</zero_debts> thing in cc_config

<debt_debug/>, <std_debug/> and <zero_debts/> are deprecated in BOINC 7.0, so there's no need to run (with) any of these flags. BOINC 7.0 doesn't use debt anymore, but the Recent Estimated Credit (REC) scheduler. No actual credit is used, it's just a number based on how long your computer took to do the work. A sort of 'pay' system.

Work fetch is done according to priority, which is calculated from the REC / resource share ratio. When a project runs, it accumulates REC; when it doesn't run, REC decays. So if a project goes down, REC goes down and priority goes up.
Conversely if a project goes into Earliest Deadline First/High Priority (EDF/HP), REC goes up and priority goes down, so it won't run for a while after it comes out of EDF/HP.

Before you ask, no there's no way yet to safely reset REC.
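The REC behaviour described above can be modelled as a toy sketch. The half-life value and the exact formulas here are assumptions for illustration; the real client's scheduler differs in detail:

```python
import math

HALF_LIFE = 10 * 86400  # assumed REC half-life of 10 days, in seconds

def update_rec(rec: float, estimated_credit: float, dt: float) -> float:
    """Decay old REC over dt seconds, then add credit earned in that window."""
    decay = math.pow(0.5, dt / HALF_LIFE)
    return rec * decay + estimated_credit

def fetch_priority(rec: float, resource_share: float) -> float:
    """Higher (less negative) priority means fetched first."""
    return -rec / resource_share

# A project that keeps running accumulates REC and its priority sinks;
# an idle project's REC decays and its priority recovers toward 0.
busy = update_rec(1000.0, 500.0, 86400)   # ran all day
idle = update_rec(1000.0, 0.0, 86400)     # did nothing
print(fetch_priority(busy, 100.0) < fetch_priority(idle, 100.0))  # True
```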
____________
Jord

Tex1954
Send message
Joined: 3 Mar 12
Posts: 26
Message 43796 - Posted: 26 Apr 2012, 22:00:09 UTC - in response to Message 43793.
Last modified: 26 Apr 2012, 22:23:31 UTC

Well, that is interesting... I was just going by the Alpha testers' posts and such.

I will note that if what you say is the case, one would expect 3 instances of WCG running on a 6-core system to accumulate 3X the REC for WCG, and then run some other task for at least that length of time. But that is not what I am seeing so far.

Nevertheless, 26 does run Excluded GPU things better somehow... at least it doesn't go all night never fetching a new GPU task... only max 1 hour the first time and then good after that.

8-)

PS: Still doesn't explain why .25 and .26 always run Milkyway@home GPU tasks in high priority...

Richard Haselgrove
Send message
Joined: 5 Oct 06
Posts: 1185
Message 43797 - Posted: 26 Apr 2012, 23:20:34 UTC - in response to Message 43796.

PS: Still doesn't explain why .25 and .26 always run Milkyway@home GPU tasks in high priority...

That one was fixed by a volunteer here - [bug report, patch] Always running high priority - David adopted it (without citation) as [25593]. It should also be in v7.0.27.

Profile Gary Charpentier
Avatar
Send message
Joined: 23 Feb 08
Posts: 226
Message 43798 - Posted: 27 Apr 2012, 0:42:57 UTC - in response to Message 43793.

1) The reset <zero_debts>1</zero_debts> - <zero_debts>0</zero_debts> thing in cc_config

<debt_debug/>, <std_debug/> and <zero_debts/> are deprecated in BOINC 7.0, so there's no need to run (with) any of these flags. BOINC 7.0 doesn't use debt anymore, but the Recent Estimated Credit (REC) scheduler. No actual credit is used, it's just a number based on how long your computer took to do the work. A sort of 'pay' system.

Work fetch is done according to priority, which is calculated from the REC / Resource Share ratio. When a projects runs, it accumulates REC; when it doesn't run, REC decays. So if a project goes down, REC goes down and priority goes up.
Conversely if a project goes into Earliest Deadline First/High Priority (EDF/HP), REC goes up and priority goes down, so it won't run for a while after it comes out of EDF/HP.

Before you ask, no there's no way yet to safely reset REC.

So a project like LHC that never has work will eventually become the only project BOINC will attempt to fetch work from, destroying the idea of a work-fetch cache to get over bumps from a single project going down for a day's maintenance. So the original work fetch problem remains in 7.x.

Profile Ageless
Volunteer moderator
Avatar
Send message
Joined: 29 Aug 05
Posts: 8733
Message 43800 - Posted: 27 Apr 2012, 4:49:42 UTC - in response to Message 43798.

So a project like LHC that never has work will eventually become the only project BOINC will attempt to fetch work from

Of course not. A project that has no work will be asked a couple of times, until its back-off reaches the point of asking once every 24 hours whether there is work; if there isn't, it goes back to a countdown of 24 hours. Even 6.10 and 6.12 already did that.
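The escalating back-off described above can be sketched as follows. The doubling factor, 60-second floor, and 24-hour cap are illustrative assumptions, not exact client constants:

```python
def next_backoff(current: float, cap: float = 86400.0) -> float:
    """After an empty scheduler reply, double the back-off, capped at 24 hours."""
    return min(max(current * 2, 60.0), cap)

b = 0.0
delays = []
for _ in range(12):          # twelve consecutive "no work" replies
    b = next_backoff(b)
    delays.append(b)
print(delays[:4], delays[-1])  # grows 60, 120, 240, 480, ... then sticks at 86400
```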
____________
Jord

Tex1954
Send message
Joined: 3 Mar 12
Posts: 26
Message 43811 - Posted: 27 Apr 2012, 11:25:33 UTC

Okay! Wow, I am impressed! I guess I found a real bug worthy of the 3 Gurus' talent! WOOT!

Now, one last thing and I won't bug ya again... a REAL problem, and one that I had hoped to solve myself... but looking at the code, they could probably fix it faster and more easily.

Scenario: Neurona@Home, a new project, had WUs that needed 8 GB of memory to run, so they limited the number of tasks to ONE per machine. You couldn't get another task until you finished the current one. Also, joining the project was by invitation at the time, so they knew your machine could handle it.

NOW, they have reduced the task duration AND the RAM requirement to 2.2 GB per task, AND as of yesterday they send two tasks per machine as a test.

PROBLEM: On a 32-bit WinXP machine with 3.4 GB of memory available and a 6-core 1055T processor, HOW DO I CONTROL the INSTANCES of Neurona to ONLY ONE, because that is all the hardware can handle?

IF the developers allowed INSTANCE limits through cc_config or a program option, that would be great! As of now, my only workaround is to limit CPU usage to 18% (one core out of six).

8-)

Tex1954
Send message
Joined: 3 Mar 12
Posts: 26
Message 43813 - Posted: 27 Apr 2012, 14:52:33 UTC - in response to Message 43811.

Okay, I don't know why, but Einstein on GPU-1 ran out of work again, and 7.0.26 would not fetch more unless I suspended the GPUGrid project on GPU-0.

Again, Einstein is running 2 tasks at a time on GPU-1 and GPUGrid is running one task at a time on GPU-0.

Looks like a bug in there...

8-)


Profile Ageless
Volunteer moderator
Avatar
Send message
Joined: 29 Aug 05
Posts: 8733
Message 43814 - Posted: 27 Apr 2012, 15:15:46 UTC - in response to Message 43813.

The bug here is that you're still expecting BOINC 7.0 to work like previous BOINC versions did, where it would fetch work for whatever project, e.g. right after it uploaded and reported work for that project - sort of topping off the cache all the time.

BOINC 7.0 does not work this way. It will only fetch work when the total cached work is below the minimum work buffer value, and then it will only fetch work from the project with the highest priority (priority == 0, or close to it, like -0.05); only if that project does not have work will it fetch work from the next, and the next, etc.
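A minimal sketch of that selection rule, using priority numbers like those in the [work_fetch] log lines earlier in the thread (the project names and values here are illustrative; the real client's decision has more inputs):

```python
def choose_project(buffered_secs, min_buffer_secs, projects):
    """projects: list of (name, priority, can_fetch) tuples. Return the
    project to ask for work, or None if the cache is full enough."""
    if buffered_secs >= min_buffer_secs:
        return None                      # above the min buffer: ask nobody
    candidates = [p for p in projects if p[2]]
    if not candidates:
        return None
    # highest priority (closest to 0) is asked first
    return max(candidates, key=lambda p: p[1])[0]

projects = [("Neurona@Home", -1.0, True),
            ("correlizer", -12648.3, True),
            ("Milkyway@Home", -42.2, False)]   # "no new tasks"
print(choose_project(10000, 21600, projects))  # Neurona@Home
print(choose_project(30000, 21600, projects))  # None
```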

IF the developers allow INSTANCE limits through cc_config or a program option, that would be great! As of now, my only method is to limit the CPU usage to 18% (one core out of six).

You said yourself that the project only sends you 2 of those tasks, so why not suspend one of them while you run the other? A manual setting for the number of tasks you want to get from a project has been asked for plenty of times before and has always met a nyet from the developers. I don't see that changing in the foreseeable future.
____________
Jord

Profile Gundolf Jahn
Send message
Joined: 20 Dec 07
Posts: 1069
Message 43815 - Posted: 27 Apr 2012, 16:27:06 UTC - in response to Message 43811.

...HOW DO I CONTROL the INSTANCES of Neurona to ONLY ONE because that is all the hardware can handle?

I'm not sure if it applies here, but has anyone on that project ever tested the "Memory: when computer is in use, use at most xx% of total" preference setting for that purpose?

Gruß,
Gundolf

Tex1954
Send message
Joined: 3 Mar 12
Posts: 26
Message 43816 - Posted: 27 Apr 2012, 18:39:32 UTC - in response to Message 43814.
Last modified: 27 Apr 2012, 18:43:46 UTC



IF the developers allow INSTANCE limits through cc_config or a program option, that would be great! As of now, my only method is to limit the CPU usage to 18% (one core out of six).


You said yourself that the project only sends you 2 of those tasks, so why not suspend one of them while you run the other? A manual value for amounts of tasks you want to get from the project has been asked plenty of times before and has always met a nyet from the developers. I don't see any change in that in the foreseeable future.


Okay, you said we shouldn't be allowed to control how many tasks are DOWNLOADED. That is fine... what I am talking about is an INSTANCE limit while running, so that when I run a high-memory-usage task it doesn't start THRASHING the hard drive because it's using virtual memory, turning a 5-minute task into a 1.5-hour one.

Limiting the INSTANCES (not affinity; the INSTANCES of a task running) is the NEW thing I am asking about. A simple check box or something in the config file. Telling me I have to MANUALLY control when and how much of each task runs is micro-managing in the extreme and not something I care to do. If Neurona downloaded 100 tasks it wouldn't matter, because I can only run ONE at a time without thrashing on the XP box, OR limit the number of processor cores used, which is a total waste of power.

----

Now, as to running out of work, look at the FIRST post. BOINC supports the EXCLUDE GPU option for tasks, and THAT works fine. However, when used the way "I" am using it, that is, to EXCLUDE GPU-0 from Einstein tasks and EXCLUDE GPU-1 from GPUGrid tasks, it allows either project to completely run out of work and never fetch MORE work until the OTHER project's tasks are suspended or also run out of work. I am certain that is NOT the way it was meant to operate, and it is a REAL bug.

-----

And I have tested that %mem thing; what happens is the task starts running with very little memory, and as it gobbles up more and more it hits some point where it triggers an error. BOINC then halts it and retries again in a loop... again, another total waste of system resources and power.

8-)

Profile Gary Charpentier
Avatar
Send message
Joined: 23 Feb 08
Posts: 226
Message 43819 - Posted: 28 Apr 2012, 0:51:44 UTC - in response to Message 43800.

So a project like LHC that never has work will eventually become the only project BOINC will attempt to fetch work from

Of course not. A project that has no work will be asked a couple of times, until its back-off places it at asking it once every 24 hours if there is work, and if not, back to a countdown of 24 hours. Even 6.10 and 6.12 did that already.

Ah, but the problem still exists, and is perhaps even worse under 7.x. Divine for me how a cache can be maintained. Consider that it will only fetch enough work to tide it over until the back-off expires [same as setting network-available times]. So a fetch cache setting of three days will never be filled; on average the cache will only hold 12 hours of work. If I understood you correctly, under 7.x when it can't get work from a project it goes to the next and gorges itself on that project. I can see a circumstance where, with the predictable 24-hour back-off, a project being maintained on a 24-hour schedule might never give a client work: either the project is in back-off, or another project has to wait for the cache to drain and then gets fetched on a wall-clock schedule.

The people who designed this work fetch tried to do the right thing, but have messed it up by getting too fancy and not considering all the possibilities.

____________

Tex1954
Send message
Joined: 3 Mar 12
Posts: 26
Message 43855 - Posted: 29 Apr 2012, 13:42:07 UTC - in response to Message 43819.
Last modified: 29 Apr 2012, 13:49:31 UTC

It seems that if I set my work fetch to 1 day, the GPUs will get sufficient work to keep them busy and therefore not run out of work...

Sigh... I suppose that is okay, but some projects will flood your system if it's set that way.

On another side note, again, I wish to stress that it all works well, but has no control for wayward projects.

Allowing us to set an "INSTANCE" limit, to run only ONE TASK of ONE CERTAIN PROJECT at a time, would solve just about every problem I can think of, since with that option we could adjust other things to make it run properly all the time.

I've looked at the code, and it seems to me that ONE check could easily be added in the CPU scheduler. However, having said that, it isn't so simple, as the GUI would need some method of applying the limit. So the change would touch more than just the CPU scheduler; it would also have to be made to the GUI, in an area that isn't so straightforward to implement. However, since it is a POWER-user feature, setting it in cc_config would be fine IMHO.

So, in cc_config, we could add something like this to load up a 6-core/thread system the way we want:

[run one task at a time]

<limit_cpu>
<url>http://home.edges-grid.eu/home/</url>
<limit_num>1</limit_num>
</limit_cpu>

[run no more than 2 tasks at a time]

<limit_cpu>
<url>http://boinc.bakerlab.org/rosetta/</url>
<limit_num>2</limit_num>
</limit_cpu>

[run no more than 3 tasks at a time]

<limit_cpu>
<url>http://worldcommunitygrid.org/</url>
<limit_num>3</limit_num>
</limit_cpu>


That would be the simple way to do it...
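For illustration only, the single scheduler check described above might look something like this. The <limit_cpu> tag, the limits mapping, and the job representation are all hypothetical; nothing like this exists in the actual client:

```python
def apply_instance_limits(job_list, limits):
    """Filter a priority-ordered job list so that no project exceeds its
    configured concurrent-instance limit. Projects absent from `limits`
    are unlimited. job_list holds (project_url, task_name) pairs."""
    running = {}
    kept = []
    for project_url, task_name in job_list:
        cap = limits.get(project_url)
        if cap is not None and running.get(project_url, 0) >= cap:
            continue                      # project already at its cap
        running[project_url] = running.get(project_url, 0) + 1
        kept.append((project_url, task_name))
    return kept

limits = {"http://home.edges-grid.eu/home/": 1}
jobs = [("http://home.edges-grid.eu/home/", "big_task_a"),
        ("http://home.edges-grid.eu/home/", "big_task_b"),
        ("http://worldcommunitygrid.org/", "wcg_task_1")]
print(apply_instance_limits(jobs, limits))
# keeps big_task_a and wcg_task_1, drops big_task_b
```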

8-)

Tex1954
Send message
Joined: 3 Mar 12
Posts: 26
Message 43861 - Posted: 29 Apr 2012, 15:31:33 UTC - in response to Message 43855.

I may also add that one of the biggest problems is mixing very long tasks with very short but higher-priority tasks. In fact, any mix at all can reduce throughput of the desired HIGH-priority tasks, especially on projects that limit you to one or two tasks AT MOST per machine.

I face that very situation now... several 6-core machines on which I wish to run HIGH PRIORITY tasks from a project that gives me only TWO WUs max due to runtime memory requirements. My only real option is to run the one high-priority task and nothing else! It's a total waste of 4 unused cores! I could tolerate that on one machine... not the others... so my total maximum throughput is very limited...

There simply is no option but to MICRO-MANAGE those types of projects, and that is a pain in the buttoska!

LOL!

8-)

Profile Peter
Avatar
Send message
Joined: 7 Sep 09
Posts: 135
Message 43898 - Posted: 30 Apr 2012, 14:50:06 UTC

I've just been told this is not likely to change in the near future and, in my case at least, to 'turn off GPU usage altogether in the various project settings', which is totally unsatisfactory.

How does one revert to an earlier version?

Tex1954
Send message
Joined: 3 Mar 12
Posts: 26
Message 43907 - Posted: 30 Apr 2012, 21:24:49 UTC - in response to Message 43898.
Last modified: 30 Apr 2012, 21:59:15 UTC

I've just been told this is not likely to change in the near future and, in my case at least, to 'turn off GPU usage altogether in the various project settings' which is totally unsatisfactory.

How does one revert to an earlier version?


Well, the only way I was able to do it satisfactorily was to uninstall BOINC, delete everything in the BOINC PROGRAM and DATA folders, then reinstall after a reboot.

8-)

PS: There is possibly another way to do this as well... perhaps I could do something and submit it... maybe... someday... would be fun! :)

SekeRob2
Send message
Joined: 6 Jul 10
Posts: 354
Message 43910 - Posted: 1 May 2012, 11:10:48 UTC - in response to Message 43907.

My 64-bit Linux Ubuntu 12.04 install, equipped out of the box with 7.0.24 dbfg [on a USB 3.0 memory stick], displayed the "no work fetching" behavior. This instance had been run dry the previous day, then was not loaded for some 16 hours. Nothing could get work fetching to move, but there was an entry logged that the clock had been set back 2 hours from CET-DST to GMT (UTC). After doing a "sudo service boinc-client stop" and start, work fetching resumed.

I experimented a bit more with clock settings, putting the clock back, which logs a message saying the deferrals were reset. Computing freezes, and the same fix was needed: unload the client (not the same as "run suspended" in the menu) and start it again to get it to resume computing. Essentially, this suggests that the next time we switch from CET-DST to CET wintertime, the client is possibly going to freeze, and mine won't be the only one doing it.

Since the testing was set up on a laptop, I had disabled GPU use in the config (located in /etc/boinc-client) to be sure; it was read in and confirmed as accepted in the message log. The log said anyhow that there was no usable GPU/co-processor in the system.

FYI... now I know what to watch for and where to look when computing freezes... many others won't even be aware.

--//--


Copyright © 2014 University of California. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.