Thread 'Multiple Projects'

Message boards : Questions and problems : Multiple Projects

Sabrina Tarson

Joined: 21 Jun 12
Posts: 11
United States
Message 44898 - Posted: 15 Jul 2012, 0:53:48 UTC

I have 1 Project (Milkyway@Home) that runs on the GPU, and 4 Projects (Rosetta@Home, Einstein@Home, World Community Grid, and WUProp) on my Quad-Core CPU.

Will BOINC automatically determine which project to run so that all of the projects return their work on time, or is this something I will have to monitor myself?

The reason I ask: during the summer my computer will only run BOINC for about 8 hours a day, at night, but during the school year it will run almost 17 hours a day, from when I go to bed until I get home from school the next day. I want to make sure that even if one project goes down, my computer will still have work to do and will contribute as much as possible.
ID: 44898
BilBg

Joined: 18 Jun 10
Posts: 73
Bulgaria
Message 44904 - Posted: 15 Jul 2012, 11:25:12 UTC - in response to Message 44898.  


BOINC is designed to adapt automatically to such changes.

What you set in "Maintain enough tasks to keep busy for at least XX days" is interpreted as "total work on board, across all projects, in calendar days".

So BOINC adapts automatically (and will fetch fewer tasks for the same XX days) if you use the "computer for about 8 hours a day", or make any other change that reduces the number of tasks completed per day.
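As a rough sketch of what that means in practice (the function, task length and numbers below are illustrative, not BOINC's actual scheduler code), a fixed calendar-day buffer yields fewer tasks when the machine is on for fewer hours per day:

```python
# Illustrative sketch only - not BOINC's actual scheduler code.
def tasks_to_fetch(buffer_days, hours_on_per_day, task_hours, ncpus):
    """Estimate how many tasks fill a buffer of `buffer_days` calendar days."""
    compute_hours = buffer_days * hours_on_per_day * ncpus
    return int(compute_hours / task_hours)

# Same 3-day buffer on a quad-core, with hypothetical 2-hour tasks:
summer = tasks_to_fetch(3, 8, 2.0, 4)   # machine on ~8 h/day
school = tasks_to_fetch(3, 17, 2.0, 4)  # machine on ~17 h/day
print(summer, school)  # fewer tasks are fetched when daily uptime drops
```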





- ALF - "Find out what you don't do well ..... then don't do it!" :)
ID: 44904
Sabrina Tarson

Joined: 21 Jun 12
Posts: 11
United States
Message 44939 - Posted: 18 Jul 2012, 15:10:49 UTC - in response to Message 44904.  

For some reason, Rosetta@home no longer seems to download new tasks, but all the others do. I have not changed any of its settings. Will it just wait until one of the other projects is offline and download work then, and stop again when that project comes back online?
ID: 44939
BilBg

Joined: 18 Jun 10
Posts: 73
Bulgaria
Message 44975 - Posted: 20 Jul 2012, 14:32:25 UTC - in response to Message 44939.  


Which BOINC version do you use?
(to know if it uses 'debts')

In general, BOINC tries (over the long term) to obey your 'Resource share' settings for every project.
If the computer recently did more work for a project than its 'Resource share' dictates,
it is now time for BOINC to 'pay back' (devote CPU/GPU time to) the other projects.
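Older (pre-7.x) clients tracked this with per-project 'debts'; here is a minimal toy sketch of that bookkeeping idea, with invented names and numbers (not BOINC's actual code):

```python
# Toy version of the 'debt' bookkeeping idea; field names are invented.
def update_debts(debts, shares, time_used, elapsed):
    """Credit each project its fair share of `elapsed` seconds and debit
    what it actually got; positive debt means the project is owed time."""
    total = sum(shares.values())
    for p in debts:
        debts[p] += elapsed * shares[p] / total - time_used.get(p, 0)
    return debts

shares = {"Rosetta": 100, "Einstein": 100}
debts = {"Rosetta": 0.0, "Einstein": 0.0}
# Suppose Einstein got all 3600 s of the last hour:
update_debts(debts, shares, {"Einstein": 3600}, 3600)
print(debts)  # Rosetta is now owed time; Einstein has 'overdrawn'
```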





- ALF - "Find out what you don't do well ..... then don't do it!" :)
ID: 44975
speechless

Joined: 16 Aug 12
Posts: 3
Switzerland
Message 45315 - Posted: 16 Aug 2012, 21:26:07 UTC - in response to Message 44975.  
Last modified: 16 Aug 2012, 21:27:16 UTC

In general, BOINC tries (over the long term) to obey your 'Resource share' settings for every project.
If the computer recently did more work for a project than its 'Resource share' dictates,
it is now time for BOINC to 'pay back' (devote CPU/GPU time to) the other projects.


I'm curious:
Does this apply to project share across hosts or is the ratio respected per host?

Example:
If I set Rosetta to 100 and Docking to 100, and Docking never runs on my Mac (I don't know why, by the way), will BOINC reduce the use of Rosetta on my Windows machine accordingly, or will it still contribute 50/50 on the Windows machine?

Note:
I am using the most recent BOINC version on all my computers.
ID: 45315
BilBg

Joined: 18 Jun 10
Posts: 73
Bulgaria
Message 45317 - Posted: 17 Aug 2012, 0:26:19 UTC - in response to Message 45315.  


I think that Resource share is respected per host (i.e. one of your hosts does not affect your other hosts in any way, not only with respect to Resource share but in every other aspect of BOINC).

Not every one of your computers needs to do work for all the projects you participate in.

E.g. if you participate in projects A, B, C (all with Resource share 100):
Computer_1: attached to projects A, B, C - distribution of work 33%, 33%, 33%
Computer_2: attached to projects A, B - distribution of work 50%, 50%
Computer_3: attached to projects B, C - distribution of work 50%, 50%
Computer_4: attached to project A - distribution of work 100%
Computer_5: attached to project B - distribution of work 100%

(From these figures you can't tell which project will end up with the most work done; you can't simply add Computer_1's 33% + Computer_2's 50% + Computer_4's 100%, because the computers have different speeds/rates of computing.
If only project C has a GPU app, it is very probable that most 'production' will go to C, even if you give it a lower Resource share.)

When I say "A, B, C - distribution of work 33%, 33%, 33%", this is not even 'per host' but 'per host device'.
If A, B and C use the same resource (the CPU), they compete for the same thing, so each will be allowed to use the CPU about 33% of the time (over the long term).
If A uses only the CPU, B uses only an nVidia GPU, and C uses only an ATi GPU, all three will run on their own resource all of the time.
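The per-host arithmetic above can be sketched like this (purely illustrative; nothing here queries a real BOINC client):

```python
# Mirrors the Computer_1..Computer_5 example above; pure arithmetic.
def host_distribution(attached, shares):
    """Fraction of work each attached project gets on one host (per device)."""
    total = sum(shares[p] for p in attached)
    return {p: shares[p] / total for p in attached}

shares = {"A": 100, "B": 100, "C": 100}
print(host_distribution(["A", "B", "C"], shares))  # ~33% each
print(host_distribution(["A", "B"], shares))       # 50/50
print(host_distribution(["A"], shares))            # 100%
```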





- ALF - "Find out what you don't do well ..... then don't do it!" :)
ID: 45317
Bernard 64250

Joined: 10 May 12
Posts: 11
France
Message 45460 - Posted: 26 Aug 2012, 23:56:13 UTC
Last modified: 27 Aug 2012, 0:00:16 UTC

This is the theory, but the facts are different.

I have 2 Core 2 Duo computers running BOINC 7.0.31. The minimum reserve of work is set to 3 days with a threshold of 0.01 day.
They are connected to 3 projects that should share the CPU resources equally (100 / 100 / 100). According to a simple calculation (3 days x 24 h x 2 cores / 3 projects), each project should maintain a minimum workload of 48 hours.
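That back-of-envelope figure checks out:

```python
# 3-day buffer, 2 cores, 3 equal-share projects:
buffer_days, cores, projects = 3, 2, 3
hours_per_project = buffer_days * 24 * cores / projects
print(hours_per_project)  # 48.0 hours of queued work per project
```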

But it doesn't work this way.
On both computers:
- 1 project has 1 WU active, with more than 200 hours of work to do and a very long deadline. That sounds normal.
- 1 project has no pending work, with the message "Not requesting tasks: don't need".
- 1 project has a lot of WUs in its queue (22 WUs for a total workload of 140 hours on one computer; 13 WUs for a total of 152 hours on the other). By chance, the deadlines are long enough to leave computing time for the other projects.
Before this, my computers were attached to a project with short deadlines, and its WUs always ran in high priority, so a single project was using 100% of the resources when 3 projects should have shared them.
ID: 45460
Bill Hepburn

Joined: 12 Sep 05
Posts: 12
Message 45477 - Posted: 30 Aug 2012, 1:32:08 UTC

I am convinced that something is weird with the "new" work fetch. There are far too many complaints that it doesn't fetch enough work, or what it does fetch is not what is expected. Some of this can be attributed to a lack of understanding on the part of the complainer, but I don't believe that it all can be.

The standard answer of "leave it alone and it will sort itself out on its own" hasn't worked for me. I have mostly left it alone for a couple of months now and it's still weird.

I am running BOINC 7.0.28 on several machines. The CPU runs Malaria, Rosetta, and LHC Sixtrack, all with a resource share of 100. Work fetch seems to work reasonably well; at least I don't seem to run out of work very often, and the task list usually has a mix of projects. I have the "keep enough work" settings at around 0.25 days each.

GPU is a totally different thing, though. On one computer I run Seti on the GPU with a resource share of 100, and Einstein at 5. The machine is Win7 Pro with an nVidia GTX 460 (driver 301.42). GPU crunching turns off when I use the computer, and all computing turns off in the afternoon (temperature and high utility rates).

I would expect Seti to run much more than Einstein (about 20 times more), with Einstein filling in during Seti's maintenance and connectivity outages. However, Einstein runs about 4 times as much as Seti. I have seen BOINC request tasks for Einstein and not Seti, even though it had been running Einstein for half a day or more. Of course, that fills up the cache for a few more hours of Einstein.
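For reference, the long-term split those shares imply:

```python
# Resource shares 100 (Seti) vs 5 (Einstein):
seti, einstein = 100, 5
print(seti / einstein)           # 20.0 - Seti should run ~20x as much
print(seti / (seti + einstein))  # Seti's expected fraction of GPU time
```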

It is curious.
ID: 45477


Copyright © 2024 University of California.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.