Resource Share, what if?

O&O
Joined: 15 Jun 18
Posts: 12
Saudi Arabia
Message 87082 - Posted: 12 Jul 2018, 18:32:33 UTC
Last modified: 12 Jul 2018, 18:35:11 UTC

Hi,

Theoretically, if I suspend a project which has, say, a resource share of 200, does that share automatically get redistributed among the projects I'm currently running?

Or ...

Do I need to go and set the suspended project's resource share to 0 (zero) in its preferences to have the share actually redistributed among the other projects?

Regards,
O&O
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5080
United Kingdom
Message 87084 - Posted: 12 Jul 2018, 19:10:47 UTC - in response to Message 87082.  
Last modified: 12 Jul 2018, 19:21:05 UTC

Yes.

No.

Edit - that was a bit blunt ;-) A resource share of zero has a special, different meaning: it makes that project a 'backup' project - work is fetched only one task at a time, and only if no work is available from any other project and the computer is at risk of becoming idle within three minutes. Only use it with that meaning in mind - so it makes no sense when the project is suspended anyway.
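
For what it's worth, the arithmetic behind that 'Yes' can be sketched in a few lines of Python - a toy illustration only, not the client's actual code, and the project names and numbers here are invented:

# Toy illustration (not real BOINC client code) of how resource shares
# are renormalized among fetchable projects. Names and numbers are made up.
projects = {
    # name: (resource_share, suspended?)
    "A": (200, True),   # suspended - drops out of the calculation
    "B": (100, False),
    "C": (100, False),
    "D": (0,   False),  # share 0 = 'backup' project, excluded here
}

eligible = {name: rs for name, (rs, susp) in projects.items()
            if not susp and rs > 0}
total = sum(eligible.values())

for name, rs in eligible.items():
    print(f"{name}: fetch share {rs / total:.3f}")
# B: fetch share 0.500
# C: fetch share 0.500

While "A" is suspended, its share of 200 simply drops out of the sum; nothing needs editing by hand.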
O&O
Joined: 15 Jun 18
Posts: 12
Saudi Arabia
Message 87086 - Posted: 12 Jul 2018, 19:27:23 UTC - in response to Message 87084.  
Last modified: 12 Jul 2018, 19:28:14 UTC

Well... I tried it, and...

Unsure, because the resource share percentages shown in BOINC stay as they are for the remaining running projects.

Yes, setting the suspended project's preference to 0 did make the percentages of the other projects increase in BOINC.

So...
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5080
United Kingdom
Message 87090 - Posted: 12 Jul 2018, 21:45:22 UTC - in response to Message 87086.  

Where did you look? Some of the Event Log options - like <work_fetch_debug> - give second-by-second data about the share, for each resource, taking into account all the current backoffs, suspensions, preferences, etc.
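
If you prefer a file to the Manager's Event Log options dialog, the same flag can be set in cc_config.xml in the BOINC data directory - this is just the relevant fragment; after editing, have the client re-read its config files (or restart it):

<cc_config>
   <log_flags>
      <work_fetch_debug>1</work_fetch_debug>
      <sched_op_debug>1</sched_op_debug>
   </log_flags>
</cc_config>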
O&O
Joined: 15 Jun 18
Posts: 12
Saudi Arabia
Message 87094 - Posted: 13 Jul 2018, 7:43:34 UTC - in response to Message 87090.  
Last modified: 13 Jul 2018, 8:00:50 UTC

BOINC client > Advanced view > Projects > Resource Share (column)
Notice the resource share percentages:

Before... [screenshot of the Resource Share column before the change]

After... [screenshot of the Resource Share column after the change]
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5080
United Kingdom
Message 87095 - Posted: 13 Jul 2018, 8:04:26 UTC - in response to Message 87094.  

The resource share you set is still there, waiting for you to use it, but operationally it won't play any part.

13/07/2018 08:56:46 | | [work_fetch] --- state for CPU ---
13/07/2018 08:56:46 | | [work_fetch] shortfall 0.00 nidle 0.00 saturated 26947.37 busy 0.00
13/07/2018 08:56:46 | Albert@Home | [work_fetch] share 0.000 blocked by project preferences
13/07/2018 08:56:46 | Asteroids@home | [work_fetch] share 0.000 blocked by project preferences
13/07/2018 08:56:46 | climateprediction.net | [work_fetch] share 0.000
13/07/2018 08:56:46 | Einstein@Home | [work_fetch] share 0.000 blocked by project preferences
13/07/2018 08:56:46 | Fight Neglected Diseases | [work_fetch] share 0.000
13/07/2018 08:56:46 | GPUGRID | [work_fetch] share 0.000 blocked by project preferences
13/07/2018 08:56:46 | LHC@home | [work_fetch] share 0.000
13/07/2018 08:56:46 | Milkyway@Home | [work_fetch] share 0.000
13/07/2018 08:56:46 | NumberFields@home | [work_fetch] share 1.000
13/07/2018 08:56:46 | orbit@home | [work_fetch] share 0.000
13/07/2018 08:56:46 | SETI@home | [work_fetch] share 0.000 blocked by project preferences
13/07/2018 08:56:46 | SETI@home Beta Test | [work_fetch] share 0.000 blocked by project preferences
13/07/2018 08:56:46 | | [work_fetch] --- state for NVIDIA GPU ---
13/07/2018 08:56:46 | | [work_fetch] shortfall 0.00 nidle 0.00 saturated 91740.09 busy 0.00
13/07/2018 08:56:46 | Albert@Home | [work_fetch] share 0.000
13/07/2018 08:56:46 | Asteroids@home | [work_fetch] share 0.000
13/07/2018 08:56:46 | climateprediction.net | [work_fetch] share 0.000 no applications
13/07/2018 08:56:46 | Einstein@Home | [work_fetch] share 0.000 blocked by project preferences
13/07/2018 08:56:46 | Fight Neglected Diseases | [work_fetch] share 0.000 no applications
13/07/2018 08:56:46 | GPUGRID | [work_fetch] share 0.000 job cache full
13/07/2018 08:56:46 | LHC@home | [work_fetch] share 0.000 no applications
13/07/2018 08:56:46 | Milkyway@Home | [work_fetch] share 0.000 blocked by project preferences
13/07/2018 08:56:46 | NumberFields@home | [work_fetch] share 0.000 no applications
13/07/2018 08:56:46 | orbit@home | [work_fetch] share 0.000
13/07/2018 08:56:46 | SETI@home | [work_fetch] share 1.000
13/07/2018 08:56:46 | SETI@home Beta Test | [work_fetch] share 0.000
13/07/2018 08:56:46 | | [work_fetch] --- state for Intel GPU ---
13/07/2018 08:56:46 | | [work_fetch] shortfall 771.02 nidle 0.00 saturated 21692.98 busy 0.00
13/07/2018 08:56:46 | Albert@Home | [work_fetch] share 0.000 blocked by project preferences
13/07/2018 08:56:46 | Asteroids@home | [work_fetch] share 0.000 no applications
13/07/2018 08:56:46 | climateprediction.net | [work_fetch] share 0.000 no applications
13/07/2018 08:56:46 | Einstein@Home | [work_fetch] share 1.000
13/07/2018 08:56:46 | Fight Neglected Diseases | [work_fetch] share 0.000 no applications
13/07/2018 08:56:46 | GPUGRID | [work_fetch] share 0.000
13/07/2018 08:56:46 | LHC@home | [work_fetch] share 0.000 no applications
13/07/2018 08:56:46 | Milkyway@Home | [work_fetch] share 0.000 no applications
13/07/2018 08:56:46 | NumberFields@home | [work_fetch] share 0.000 no applications
13/07/2018 08:56:46 | orbit@home | [work_fetch] share 0.000
13/07/2018 08:56:46 | SETI@home | [work_fetch] share 0.000 blocked by project preferences
13/07/2018 08:56:46 | SETI@home Beta Test | [work_fetch] share 0.000 no applications

That's a fetch share of 1 (==100%) for just one project in each resource section. The same would apply to runtime allocation.
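
As a worked example of the same arithmetic: if two projects with shares of 100 and 300 were both fetchable for a resource, the log would show fetch shares of 0.250 and 0.750.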
O&O
Joined: 15 Jun 18
Posts: 12
Saudi Arabia
Message 87096 - Posted: 13 Jul 2018, 8:25:28 UTC - in response to Message 87095.  
Last modified: 13 Jul 2018, 8:29:45 UTC

@Richard ..
Did you suspend all your projects, edit the resource share preference to zero for each project, then update all the projects, and... find that at least one SETI task was running?

Why a SETI task?
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5080
United Kingdom
Message 87097 - Posted: 13 Jul 2018, 9:19:34 UTC - in response to Message 87096.  

I tend to use 'no new tasks' rather than 'suspend' - many of them are projects I attached to briefly, to run one or two test tasks and explore a problem, without feeling drawn into a long-term relationship.

If I did find an unexpected task running, I would read the Event Log in detail to try to understand the circumstances, and report a bug if I found one. It's often easiest to do that by opening the event log archive file 'stdoutdae.txt' and using the 'find' facility in your text editor. The full life-cycle of a task looks like this:

11-Jul-2018 21:16:03 [SETI@home] [sched_op] Starting scheduler request
11-Jul-2018 21:16:03 [SETI@home] Sending scheduler request: To fetch work.
11-Jul-2018 21:16:03 [SETI@home] Reporting 1 completed tasks
11-Jul-2018 21:16:03 [SETI@home] Requesting new tasks for NVIDIA GPU
11-Jul-2018 21:16:03 [SETI@home] [sched_op] CPU work request: 0.00 seconds; 0.00 devices
11-Jul-2018 21:16:03 [SETI@home] [sched_op] NVIDIA GPU work request: 7627.38 seconds; 0.00 devices
11-Jul-2018 21:16:03 [SETI@home] [sched_op] Intel GPU work request: 0.00 seconds; 0.00 devices
11-Jul-2018 21:16:06 [SETI@home] Scheduler request completed: got 9 new tasks

11-Jul-2018 21:16:42 [SETI@home] Started download of blc16_2bit_guppi_58185_60090_Bol520_off_0013.963.1636.22.45.2.vlar
11-Jul-2018 21:16:53 [SETI@home] Finished download of blc16_2bit_guppi_58185_60090_Bol520_off_0013.963.1636.22.45.2.vlar

13-Jul-2018 09:32:20 [SETI@home] Starting task blc16_2bit_guppi_58185_60090_Bol520_off_0013.963.1636.22.45.2.vlar_1
13-Jul-2018 09:47:00 [SETI@home] Computation for task blc16_2bit_guppi_58185_60090_Bol520_off_0013.963.1636.22.45.2.vlar_1 finished

13-Jul-2018 09:47:02 [SETI@home] Started upload of blc16_2bit_guppi_58185_60090_Bol520_off_0013.963.1636.22.45.2.vlar_1_r1559224817_0
13-Jul-2018 09:47:06 [SETI@home] Finished upload of blc16_2bit_guppi_58185_60090_Bol520_off_0013.963.1636.22.45.2.vlar_1_r1559224817_0

13-Jul-2018 09:47:12 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task blc16_2bit_guppi_58185_60090_Bol520_off_0013.963.1636.22.45.2.vlar_1

WU 3047813161
To track down what happened, it's important to find the original 'Sending scheduler request: To fetch work.' line that triggered the download. It's a common misconception that projects "send" work: they can't. They only respond to requests for work (if the project initiated the call, it wouldn't get through your firewall). Look back over what you were doing around the time the task was downloaded.
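
If 'stdoutdae.txt' is too big for your editor, a throwaway Python snippet can pull out one task's full history - the file path and task name below are just examples, so adjust them to your own machine:

# Print every archived log line that mentions a given task name.
# "stdoutdae.txt" and the needle below are examples - substitute your own.
needle = "blc16_2bit_guppi_58185_60090_Bol520_off_0013.963.1636.22.45.2.vlar"
with open("stdoutdae.txt", encoding="utf-8", errors="replace") as log:
    for line in log:
        if needle in line:
            print(line.rstrip())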
