Message boards : BOINC Manager : My Wish List - part 3.
Joined: 19 Jan 07 Posts: 1179

A startup delay timer that is "settable" by the user (minimum of zero, maximum of 120 or 180 seconds) to allow a delay before BOINC launches any tasks.

Already supported since BOINC 6.1.6. You have to add <start_delay>180</start_delay> to cc_config.xml. See http://boinc.berkeley.edu/wiki/Cc_config.xml#Options
Joined: 30 Aug 05 Posts: 65

To get rid of FIFO for work cached for processing by a GPU.

To have a separate resource share set up for GPU.

To be able to set a 'back-up' project -> give basically all resource share to a single project or set of projects, and only if those projects run out of work ask for a small amount of work from a user-designated 'back-up' project.

To get rid of the debt resetting if a project runs out of work. Debt should rise to a maximum amount -> say 172800 seconds (2 days). Why 2 days? Why not.

For BOINC to not immediately start a task if a project has been suspended and then resumed. The system should wait until the running wu checkpoints or ends before swapping out.

If the internet connection goes down but the local LAN is still operational, I wish BOINC Manager wouldn't freeze or lock up until a timeout is reached.

For an RSS reader to be incorporated into BOINC Manager so that project news for attached projects would be displayed -> it would be nice if Ctrl+Shift+N on the alpha BOINC Managers worked (News).

Paul.
Joined: 14 Apr 09 Posts: 3

Rather than just the simple on/off based on idle, I'd like to see different CPU limits when idle/working. That way I can limit CPU usage to either half the processors or 50% load while I'm active, rather than just turning BOINC off.
Joined: 29 Aug 05 Posts: 15574
Joined: 19 Jan 07 Posts: 1179

To get rid of the debt resetting if a project runs out of work. Debt should rise to a maximum amount -> say 172800 seconds (2 days). Why 2 days? Why not.

Putting such a low limit defeats the whole point of having debt at all. If your computer runs CPDN in deadline-panic-mode for 6 months, CPDN should then sit out for the next 6 months to let the other projects catch up. (I'm assuming equal resource shares here.)
Joined: 30 Aug 05 Posts: 65

To get rid of the debt resetting if a project runs out of work. Debt should rise to a maximum amount -> say 172800 seconds (2 days). Why 2 days? Why not.

Fair enough, but the issue is that if BOINC doesn't have any wu's cached, asks for new work, and then receives none because the project is out of work, the debt is reset to 0. My point is that this is not desirable.

Paul.
Joined: 19 Jan 07 Posts: 1179

To get rid of the debt resetting if a project runs out of work. Debt should rise to a maximum amount -> say 172800 seconds (2 days). Why 2 days? Why not.

Well, I have to admit that I don't understand debts anymore since they added GPU support and changed how the scheduler works.
Joined: 7 Dec 09 Posts: 1

I would love to see an option to set Project Y to active only if you are unable to get work from Project X. Especially for us ATI crunchers that can only use Collatz or Milkyway. This way, if MW goes down, it will automatically start getting work from Collatz, and stop fetching from Collatz once MW is back up. Most of us that pay attention to our clients on a regular basis do have projects that we only crunch if our favorites are down, right? If there is a way to do this already, I apologize!
Joined: 26 Dec 06 Posts: 36

When you install BOINC it should add "Limited User Accounts" too! At least give you the choice to do so during the install part! Also, DON'T forget some of us are still using old dial-up, so BOINC should be more dial-up friendly! Thanks!
Joined: 3 Apr 06 Posts: 547

Am I the only one who could use this feature?

No, you are neither the first nor the only one. See e.g. Trac ticket [trac]#41[/trac]. I have a server, a media-center and a work computer that all run BOINC. After being asked for by lots of people for ages, Dr.A. finally agreed to put this option on the ToDo list (Yay!!!)

[boinc_dev] Additional processor feature required

Peter
Joined: 24 Dec 09 Posts: 2

If this already exists, I apologize in advance for the redundancy. I'd like to see a pre-load feature for tasks. There are many occasions when I know I'll be off my laptop connection(s) for a while, sometimes even a long while. I'd like the ability to stack two or three tasks (instead of just one when the running task gets close to completion), all able to run, so that when I do re-connect, I can have many results to upload. Thank you.
Joined: 24 Dec 09 Posts: 2

Never mind. I found out how to do the same thing, thank you.
Joined: 25 Nov 05 Posts: 1654

That's been available for a long time. Just increase the value of the "Maintain enough work for an additional" option to several days, instead of the default value, which is a fraction of a day. This is under Network usage in the Computing preferences on the project's web site. Or, if you're using the preferences in the BOINC Manager's menu, it's called "Additional work buffer" and sits under Network usage in the preferences.

Edit: OK, you seem to have found it. :)
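If you prefer editing files, the same buffer can also be set from a global_prefs_override.xml in the BOINC data directory. A minimal sketch, assuming the option names documented for that file (values in days, the numbers here are only examples):

    <global_preferences>
      <work_buf_min_days>0.1</work_buf_min_days>
      <work_buf_additional_days>3.0</work_buf_additional_days>
    </global_preferences>

Here work_buf_additional_days corresponds to "Maintain enough work for an additional" / "Additional work buffer". After saving, have the client re-read local preferences from the Manager's Advanced menu, or just restart it.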
Joined: 19 Feb 10 Posts: 1

I would very much like the option to prioritize the CPU (and GPU) workload. For example: if I leave my computer untouched for a minute, it will switch to full processor usage (or whatever I set it to), but then when I start to use my computer, it will slow down to about 50% or 25% usage.

+1 to this. I'd really love to be able to tell my computer to use 25% of its power (one of the 4 CPUs, say, or all 4 CPUs at 25% workload) when the computer's not idle, and 100% when it is idle. Instead, I have to set it to suspend all work when the computer's not idle. (I'm running v6.10.18 -- the newest one, I believe -- and I see no options to change what I'm talking about, so please feel free to correct me if I'm wrong.) Additionally, if I could (for example) tell it to use one GPU when the computer is not idle, and all GPUs when it is idle, that'd be fantastic too.
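For reference, the closest existing knobs in the 6.10.x clients can be set from global_prefs_override.xml in the BOINC data directory (or the matching web/Manager preferences). A minimal sketch, assuming the option names documented for that file:

    <global_preferences>
      <run_if_user_active>1</run_if_user_active>
      <max_ncpus_pct>50</max_ncpus_pct>
      <cpu_usage_limit>25</cpu_usage_limit>
    </global_preferences>

run_if_user_active keeps BOINC computing while someone is using the machine, max_ncpus_pct limits the percentage of processors used, and cpu_usage_limit throttles CPU time. Note that these limits apply whether or not the machine is idle, which is exactly the limitation being described above: there is no separate idle/in-use pair of values in this version.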
Joined: 19 Feb 10 Posts: 97

Hi, sorry if this has already been asked for/suggested; I did not read through the previous wish list parts...

I'd like to see BOINC manage duration correction factors per application and not per project as is currently done. I run SETI on both GPU and CPU, which have massively different correction factors: it ups the calculated times for the CPU by a factor of 4-5, which messes up the cache. Einstein has two different sorts of application which would also benefit from individual correction factors.

The main benefit would be a more reliable cache. BOINC will of course realize it is running low and download more work, and you can always increase the cache, but then I end up with quite a big cache for projects which only have a small share, and they end up going into high priority.
Joined: 29 Aug 05 Posts: 15574

I'd like to see BOINC manage duration correction factors

Already requested in [trac]#812[/trac]. Which year's August David means is anyone's guess, though. :)
Joined: 15 Mar 10 Posts: 4

I've finally just installed a CUDA-capable BOINC on a CUDA-capable machine (as part of a complete reinstall of that machine) and have come across my first feature request. I'd like BOINC to separate its resource share allocation into CPU and GPU categories, so that the GPU will continue to crunch all the time, regardless of the CPU project debt levels.

Here's the example: the machine in question is just a Pentium dual core 1.6GHz with an nVidia 8400GS card in it. BOINC picked it up, told me it is 29 GFLOPS peak capable, and proceeded to download over 1000 CUDA work units for SETI@home for it! (After it downloaded a few normal CPU units before it had finished getting the CUDA executables.) I let it finish the 2 SETI CPU units it started but aborted the 3 it hadn't started. I also attached it to CPDN and it grabbed 2 units for that and started crunching those.

The problem is that now it isn't touching the SETI CUDA units at all (the non-CUDA SETI units have finished now), and is only crunching the CPDN units, leaving the GPU idle with >1000 CUDA units "ready to start". My guess is that it may have done some quick CUDA crunching and now it's waiting for the non-CUDA CPDN to catch up before doing any more SETI crunching.

I have set both projects to not download any further work until it decides to do the SETI units, and I'll wait until the early deadlines for the SETI units pass before raising an issue elsewhere, to see if it goes into panic mode as the deadlines approach. It just seems a waste to have >1000 CUDA units sitting in a queue with an idle GPU (even when set to use the GPU all the time), especially when that single GPU will likely be my biggest single GFLOPS contributor at present!

bb from Oz.
Joined: 15 Mar 10 Posts: 4

As a further explanation, let's say in my case I have CPDN set to resource share 20 and SETI set to resource share 50. In the non-CUDA version that meant CPDN had roughly a 30% share of CPU time (or, probably more correctly, of the total long-term amount of number crunching, i.e. GFLOP's [not GFLOPS]) and SETI had 70%.

Now that I've added CUDA to the mix, I'm guessing it includes the CUDA total in the 70% share for SETI, meaning it hits that proportion much quicker and then sits and waits for the CPU to slowly catch up the CPDN totals, leaving the CUDA SETI idle. If it counted them separately, then that would give something like this:

CPU processing share: CPDN 20 (~30%), SETI 50 (~70%)
GPU processing share: CPDN 0 (since it is not a CUDA app), SETI 50 (100%)

That way it could use Astropulse (from SETI) and CPDN, for example, to keep the CPU busy while CUDA SETI would keep the GPU busy. That's how I see it anyway.

bb from Oz.
Joined: 20 Dec 07 Posts: 1069

I don't think it has anything to do with resource share. AFAIK, BOINC would never let a resource go idle because of that. What is the available memory on your GPU?

To see the relevant messages, you'll have to create a cc_config.xml file and enable the <cpu_sched_debug> (and possibly the <coproc_debug>) logging options. That would yield something like:

10-Mar-2010 18:41:46 [SETI@home] [coproc_debug] Assigning CUDA instance 0 to 30dc06ad.948.9888.11.10.64_0
10-Mar-2010 18:41:46 [SETI@home] [cpu_sched_debug] 30dc06ad.948.9888.11.10.64_0: insufficient GPU RAM (214MB < 238MB)

(if that should be your problem).

Regards, Gundolf

Computers aren't everything in life. (Just a little joke.)
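A minimal cc_config.xml with just those two flags switched on might look like the sketch below (placed in the BOINC data directory; all other flags keep their defaults):

    <cc_config>
      <log_flags>
        <cpu_sched_debug>1</cpu_sched_debug>
        <coproc_debug>1</coproc_debug>
      </log_flags>
    </cc_config>

The client reads it at startup, and the Manager's Advanced menu has a "Read config file" item to apply it without restarting.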
Joined: 15 Mar 10 Posts: 4

Looking at your sample messages, with SETI apparently asking for 238MB of GPU RAM to be available, that is looking like the probable source of the problem. The card in question has 256MB total, but is running in 1650x1050 32-bit mode, so there's a good chance of it using up enough RAM to stop SETI from running. The machine is also not the machine I normally use, so I'll have to wait until I can use it to change settings and test.

In the meantime, do you have any suggestions as to how to reduce the GPU RAM usage, preferably short of reducing the resolution, bit depth, or both, so as to minimise the impact on normal usage? (If I have to, I'll drop the bit depth to 16-bit to get it to clear the backlog of units it already has, but I'd rather not do that as a long-term thing. After that I guess I might have to disable CUDA SETI, at least for that machine [my only CUDA machine running it, until I can get myself a new box that will do it easily].)

For extra info, it's not a machine that needs a hefty GPU in general, except for the Windows display resolution (the LCD screen's native resolution, which I'd rather not change from for obvious reasons). Its heaviest load is probably the occasional flash game in Windows and, once in a while (as in: only a couple of days every couple of months), C&C Generals. (But I'm guessing it will automatically not run when Generals asks for the card, and I can force it not to if necessary.)

It's running XP Pro SP3, 2GB RAM, and the recent 196.75 nVidia drivers. Any tweaks there to reduce general Windows GPU RAM usage? (Disable triple buffering and the like, perhaps.)

Thanks, bb from Oz.