Message boards : BOINC client : My Wish List
Joined: 29 Aug 05 · Posts: 15573
Not all of the projects that use VirtualBox work correctly with the newest version of VirtualBox, which is why BOINC bundles an older but trusted version.
Joined: 26 Oct 09 · Posts: 67
Comparing the BOINC app with NativeBOINC, some features are missing:
1. An option to set the hostname, which is especially useful when using BAM! with a lot of Android devices (in my case more than 10).
2. RAM management. In NativeBOINC I am running WCG FAAH, which in theory needs 250 MB but actually uses around 40 MB. The BOINC app checks whether the device has 250 MB available (in my case an old Samsung Galaxy S with 370 MB of RAM, of which 185 MB is used by the system), and if it does not, it will not start the project app. I find the NativeBOINC implementation (or workaround) the best option for RAM-constrained devices.
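The admission check described above amounts to comparing a task's declared memory bound against the RAM left after the system's share. A minimal sketch of that logic (the function name is hypothetical; `rsc_memory_bound` echoes the workunit field BOINC uses for the declared requirement):

```python
def can_start_task(rsc_memory_bound_mb: float, total_ram_mb: float,
                   system_used_mb: float) -> bool:
    """Admission check as described: compare the workunit's declared
    memory bound against the RAM left over after the system's share."""
    available_mb = total_ram_mb - system_used_mb
    return rsc_memory_bound_mb <= available_mb

# The Galaxy S example: 370 MB total, 185 MB used by the system.
# A 250 MB declared bound fails even though the task really uses ~40 MB.
print(can_start_task(250, 370, 185))  # False
print(can_start_task(40, 370, 185))   # True
```

This is why a conservative declared bound can lock out a device that would in practice run the task comfortably.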
Joined: 6 Jul 10 · Posts: 585
The hostname change is a hack, i.e. Google could break it at any time without notice. [In a way it already has, since NativeBOINC was never ported to the PIE (Position Independent Executables) memory model and thus does not run beyond Android 4.4.] You can change the hostname by rooting your device [which results in the loss of everything stored, i.e. you have to rebuild the whole setup]. As for memory and how NativeBOINC handles its workarounds, I don't know [I've forgotten, since my Lollipop device can't run it]. The minima set by WCG allow for the largest possible 'just in case' work units. There is no predicting with 100% certainty how big or small any individual unit will be [non-deterministic computing]. A batch of 'harder to crunch' units could then lead to serial failures and swamp the system with repairs, which is a big problem for continuity [and brings lots of complaints for various reasons, such as tasks failing].
Coelum Non Animum Mutant, Qui Trans Mare Currunt
Joined: 26 Oct 09 · Posts: 67
My feeling is that the 'just in case' figure of 250 MB is a bit misleading. It can shut out low-RAM smartphones, for example: 1 core / 250 MB, 2 cores / 500 MB, 4 cores / 1 GB, 8 cores / 2 GB. I also see that on an SGS6 with 3 GB of RAM, the OS plus installed apps take around 1.5 GB, leaving 1.5 GB for 8 cores, so less than 250 MB per core. I would prefer the 'just in case' check to be replaced with a warning message that can be bypassed, leaving the app free to crunch. Also, based on http://wuprop.boinc-af.org/results/ram.py?plateforme=android&tri=2&sort=desc the most demanding app does not use more than 125 MB, so a 'just in case' requirement of 250 MB covers a low-probability case. What really annoys me is that although I have 180 MB of RAM free, I cannot crunch because the smartphone is too old, even though the WCG app takes only 37 MB of RAM on average. I really don't want to run Collatz on it :)
Joined: 5 Sep 15 · Posts: 6
I'd like to be able to set a 'backoff delay' for BOINC relative to system boot (say, a delay of 0 to 60 minutes) so that BOINC won't start until x minutes after Windows startup. I find that the heavy load from BOINC and/or its projects slows system boot when it shouldn't need to. I often snooze it during boot so that my normal startup completes before allowing BOINC to resume. In my case it is the disk usage that delays regular Windows startup, and snoozing BOINC during startup relieves it quite a lot. Alternatively, a setting so that it only does the 'heavy loading' once the screensaver activates, for example?
Joined: 8 Nov 10 · Posts: 310
> I'd like to be able to set a 'backoff delay' for BOINC compared to system bootup (say delay in minutes from zero to 60 for example) so that BOINC won't load/startup until x-number of minutes from Windows startup.
You can use Startup Delayer for that. http://www.r2.com.au/page/products/page/2/show/startup-delayer/
Joined: 20 Nov 12 · Posts: 801
<start_delay> should do it. BOINC will still start normally, but it will wait before starting the science apps.
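For reference, <start_delay> is a documented option in the <options> section of the client's cc_config.xml and is given in seconds. A sketch, using 300 seconds (5 minutes) as an assumed example value:

```xml
<cc_config>
  <options>
    <!-- Wait 300 seconds after client startup before running science apps -->
    <start_delay>300</start_delay>
  </options>
</cc_config>
```

Note this delays the science applications, not the client itself, so disk-heavy project startup is postponed while BOINC still launches with Windows.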
Joined: 26 Oct 09 · Posts: 67
I would like to see an option in BOINC for Android, something like: 'Stop task after checkpoint when running on battery'. I am running BOINC on 7 of 8 cores on a mobile device, including while on battery. With WCG, tasks take around 12 hours, with 1.5 hours per checkpoint. Since I have a battery limit of around 50%, the phone discards all progress since the last checkpoint when it reaches 50%. In the worst case, all 7 cores are one minute short of the next checkpoint when the battery limit kicks in, so around 10 hours of crunching is wasted. IMO this would be a more efficient way to compute and conserve the battery.
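The requested behaviour could be sketched as a small policy: near the battery cutoff, suspend only right after a checkpoint rather than mid-slice. This is a hypothetical sketch of the proposal, not an existing BOINC option; all names and the 5% "approach window" are assumptions:

```python
def should_suspend(on_battery: bool, battery_pct: float,
                   just_checkpointed: bool,
                   min_battery_pct: float = 50.0) -> bool:
    """Hypothetical policy for the requested option: when running on
    battery near the cutoff, suspend at the next checkpoint so no
    progress since the last checkpoint is thrown away."""
    if not on_battery:
        return False
    if battery_pct <= min_battery_pct:
        return True  # hard limit already reached: suspend regardless
    # Approaching the limit: stop at the next checkpoint, not mid-slice.
    near_limit = battery_pct <= min_battery_pct + 5.0
    return near_limit and just_checkpointed

print(should_suspend(True, 53.0, True))   # True: checkpoint just written
print(should_suspend(True, 53.0, False))  # False: keep going to checkpoint
```

The trade-off is that a task may run slightly below the nominal limit while it finishes its current checkpoint interval.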
Joined: 8 Nov 10 · Posts: 310
When I suspend a GPU task for a given project (e.g., Einstein), it prevents the downloading of both new GPU tasks and new CPU tasks. I suggest that this behaviour be split, so that suspending a GPU task prevents only the downloading of new GPU tasks but still allows downloading CPU tasks. If you want to prevent both, you can already do that in the normal fashion with 'No new tasks'.
Joined: 29 Aug 05 · Posts: 15573
There are no separate GPU and CPU tasks. At the moment work is downloaded to your computer, it is decided whether it is assigned to the CPU or the GPU. So the two cannot be separated.
Joined: 14 Feb 11 · Posts: 63
Nvidia drivers 364.47 through 365.19 have a buggy PTX-to-machine-code compiler, which causes them to miscalculate in GPGPU loads, including both CUDA and OpenCL. See http://www.primegrid.com/forum_thread.php?id=6775&nowrap=true for evidence. Kepler and Fermi cards are not affected because they use different PTX-to-machine-code compilers tailored to their different internal instruction sets. The math bug has been fixed in Nvidia's internal development driver, but the fix came too late to be included in released driver 365.19; the next driver will fix it. I therefore think the client should have a blacklist of known-bad driver versions; when one is detected, the BOINC client should refuse to fetch GPU work units and abort any unfinished GPU work units. Optionally, entries should be paired with a list of affected GPU architectures, compute capabilities, or GPU names. For example, these drivers' entries could carry the restriction that the CUDA compute capability must be 5.* so that they are blacklisted only for Maxwell-based GPUs, since these drivers do not miscalculate on Kepler- or Fermi-based GPUs. Maxwell-based cards have CUDA compute capabilities 5.0, 5.2, and 5.3; Kepler-based cards have 3.0, 3.2, 3.5, and 3.7.
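A blacklist entry as proposed could pair a driver-version range with an affected compute-capability major version. A minimal sketch of that lookup (the entry format and function name are hypothetical, not an existing BOINC structure; only the 364.47-365.19 / Maxwell facts come from the post above):

```python
# Hypothetical blacklist: (first bad driver, last bad driver,
# affected compute-capability major version, or None for all GPUs).
BAD_DRIVERS = [
    ((364, 47), (365, 19), 5),  # PTX math bug, Maxwell (CC 5.x) only
]

def gpu_work_allowed(driver: tuple, cc_major: int) -> bool:
    """Return False if this driver/GPU pair matches a blacklist entry.
    Driver versions are (major, minor) tuples so 364.9 > 364.47
    compares correctly, unlike a float comparison."""
    for first_bad, last_bad, bad_cc in BAD_DRIVERS:
        if first_bad <= driver <= last_bad and (bad_cc is None or cc_major == bad_cc):
            return False
    return True

print(gpu_work_allowed((365, 19), 5))  # False: Maxwell on a bad driver
print(gpu_work_allowed((365, 19), 3))  # True: Kepler is unaffected
```

A client with such a table could then refuse GPU work fetch and abort unstarted GPU tasks whenever the check fails.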
Joined: 6 Jul 10 · Posts: 585
(Is there a way to make a forum thread open at the last post in chronological order? [The WCG forums do.] I opened the thread and replied, but it was to the last post on page 1.)
Coelum Non Animum Mutant, Qui Trans Mare Currunt
Joined: 2 Jul 14 · Posts: 186
I don't know if this fits well in this thread, but... I wish there were more than four possible 'locations' available some day; four does not feel like much at all. There are interesting applications available, but at the same time there are hosts with different types of hardware (AMD or Nvidia GPU, different amounts of RAM, etc.). It would be much easier to make plans and try different app combinations if it were possible to save settings for 6-8 'locations' per project.
Joined: 14 Feb 11 · Posts: 63
BOINC needs to be made NUMA-aware, especially now that AMD's Ryzen processors appear to act like dual-socket systems, according to https://www.pcper.com/reviews/Processors/AMD-Ryzen-and-Windows-10-Scheduler-No-Silver-Bullet. Basically, current Ryzen processors are structured into two core complexes, or CCXs for short, of 4 cores each. Tightly interlocked multithreaded workloads suffer large latencies if some of their threads are spread across the CCXs, because cross-CCX communication has to travel over the chip's interconnect instead of a shared cache. Loosely interlocked multithreaded workloads benefit from multiple cores regardless, because they do not spend much time communicating between threads. This work would also benefit Intel Xeon processors set up to use cluster-on-die mode, as seen in http://www.anandtech.com/show/8423/intel-xeon-e5-version-3-up-to-18-haswell-ep-cores-/4, any multi-socket system, and old Core 2 Quads (which have two dual-core dies in one package).
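NUMA/CCX awareness here would mean keeping all threads of a tightly coupled task inside one core complex. A toy sketch of that placement logic (the functions are hypothetical illustrations, not BOINC scheduler code; 4 cores per CCX matches the Ryzen layout described above):

```python
def ccx_groups(n_cores: int, cores_per_ccx: int = 4) -> list:
    """Partition core IDs into CCX-sized groups (Ryzen: 2 CCXs of 4)."""
    return [list(range(start, start + cores_per_ccx))
            for start in range(0, n_cores, cores_per_ccx)]

def assign_task(task_threads: int, groups: list) -> list:
    """Place all of a tightly coupled task's threads inside one CCX,
    so thread communication stays within that complex, if it fits."""
    for group in groups:
        if len(group) >= task_threads:
            return group[:task_threads]
    # Task needs more threads than one CCX has: spill across CCXs.
    return [core for group in groups for core in group][:task_threads]

groups = ccx_groups(8)         # [[0, 1, 2, 3], [4, 5, 6, 7]]
print(assign_task(4, groups))  # [0, 1, 2, 3]: fits in one CCX
```

On Linux the resulting core list could then be applied with an affinity call; the sketch only shows the grouping decision itself.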
Joined: 12 Feb 11 · Posts: 419
> BOINC needs to be made NUMA aware especially now since AMD's Ryzen processors appear to act as if they.... This work will also benefit Intel Xeon processors that are set up to use cluster on die mode....
It seems NUMA is an old request: https://github.com/BOINC/boinc/issues/1357 http://boinc.berkeley.edu/dev/forum_thread.php?id=10124#60953 I can't tell whether NUMA is supported now or not.
Joined: 14 Apr 12 · Posts: 51
Could boinccmd --get_tasks report the number of CPU/GPU cores used by each task?
Joined: 29 Aug 05 · Posts: 15573
The number of CPU cores is normally just one per task, unless the task uses a multithreaded application, in which case it is all cores. For GPUs it is always a minimum of one GPU per task; even when you run multiple tasks on a GPU, all of its stream/CUDA cores run each task.
Joined: 14 Apr 12 · Posts: 51
Some projects are not CPU-intensive, and the multithreaded ones can be limited to a specified number of cores, so I would like to know how it all works.
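Per-task core usage can already be influenced from a project's app_config.xml, which supports the elements sketched below. The app names here are hypothetical placeholders; substitute the project's real application names:

```xml
<app_config>
  <!-- Hypothetical app names; use the project's actual ones. -->
  <app>
    <name>some_gpu_app</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>  <!-- two tasks share one GPU -->
      <cpu_usage>0.2</cpu_usage>  <!-- each reserves a fifth of a CPU core -->
    </gpu_versions>
  </app>
  <app_version>
    <app_name>some_mt_app</app_name>
    <plan_class>mt</plan_class>
    <avg_ncpus>4</avg_ncpus>      <!-- cap the multithreaded app at 4 cores -->
  </app_version>
</app_config>
```

The file lives in the project's directory under the BOINC data directory and is picked up after re-reading config files or restarting the client.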
Joined: 13 Jun 14 · Posts: 81
> BOINC needs to be made NUMA aware especially now since AMD's Ryzen processors appear to act as if they.... This work will also benefit Intel Xeon processors that are set up to use cluster on die mode....
On my dual-Xeon systems BOINC seems to run fine across both NUMA nodes. I tried a few tests comparing NUMA enabled and disabled, and I didn't see any difference between the two configurations when processing project tasks.
Joined: 26 Oct 09 · Posts: 67
I would like to have an option in the manager, something like: 'After being idle for X min, set the P-state to the lowest level (basically set all CPU frequencies to idle) and run tasks on Y cores.' Although the frequency would be around 800 MHz, it is still better than nothing when you have lots of threads, and it should be close to ideal from a power-efficiency point of view :). For example, on a Core i7-6820HQ:
TDP = 45 W at 3.2 GHz running on all 8 threads
TDP = 15 W at 0.8 GHz running on all 8 threads
TDP = 13 W at 0.8 GHz idle
And also add an option to specify which frequency should be set for all CPUs, like the new option in the Windows Creators Update: maximum processor frequency.
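Using the power figures above, the efficiency argument can be made concrete by looking at throughput per watt spent above idle. This is a rough back-of-the-envelope sketch: GHz x threads is only a crude throughput proxy, and real work does not scale exactly with clock speed:

```python
IDLE_W = 13.0   # 0.8 GHz, idle
LOW_W  = 15.0   # 0.8 GHz, 8 threads crunching
HIGH_W = 45.0   # 3.2 GHz, 8 threads crunching

def ghz_threads_per_marginal_watt(freq_ghz, threads, load_w, idle_w=IDLE_W):
    """Crude throughput proxy (GHz x threads) per watt spent above idle."""
    return freq_ghz * threads / (load_w - idle_w)

print(ghz_threads_per_marginal_watt(0.8, 8, LOW_W))   # 3.2 per extra watt
print(ghz_threads_per_marginal_watt(3.2, 8, HIGH_W))  # 0.8 per extra watt
```

By this measure, crunching at the lowest P-state buys roughly four times the work per extra watt over idle, which is the efficiency case the post is making.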
Copyright © 2024 University of California.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License,
Version 1.2 or any later version published by the Free Software Foundation.