21) Message boards : Projects : boinc.berkeley.edu/w/ account missing || Malariacontrol.net Wiki page needs updating
Message 80573 Posted 30 Aug 2017
Malariacontrol.net's project page should reflect that there are no more WU's: the project is currently in limbo and even its website returns a 404. The last announcement on the homepage (via archive.org, Oct 2016) said they are getting their computing needs met from private resources. For all intents and purposes, it is a retired project. I went to update the Wiki and found my user account is gone. Were all Wiki accounts purged on an inactivity basis? Anyway, I only ever updated the Wiki when I noticed a glaring mistake, so someone with a working account needs to update the page for Malariacontrol.net.
22) Message boards : Android : There are a few projects that claim ARM but will not currently work on Android 5.x+
Message 70928 Posted 17 Jul 2016
Enigma, I believe, has only produced a "no-pie" app, which only works on older ARM/Android; Android 5.0+ refuses to load executables that are not built as PIE (position-independent executables). It's not marked/named as such, though. Someone mentioned that there are only about 15,000 combinations left before the entire decryption search space is exhausted, so it's doubtful there will be a new version.
23) Message boards : Android : There are a few projects that claim ARM but will not currently work on Android 5.x+
Message 70863 Posted 15 Jul 2016
Enigma http://www.enigmaathome.net/forum_thread.php?id=603 and Citizen Science Grid https://csgrid.org/csg/forum_thread.php?id=2038 both download and then throw immediate computation errors, as their apps will not work on anything newer than Android 4.x. POGS, Universe and YoYo are fine. If you find any others, please add them to the list.
24) Message boards : Android : (Attach To) Project List for 7.4.53 on 4.4.2 KitKat, ARM7
Message 70862 Posted 14 Jul 2016
I went to the newest BOINC for Android project list to add a new project on my Amazon 5th Gen, saw a whole lot of new projects, and thought "cool! The projects are starting to catch on to Android." Then I realized (after trying vLHC) that none of these new projects have ARM apps, so the list is invalid. Please, somebody, fix the project list in Android BOINC so that it only shows ARM projects when it is running on ARM.
25) Message boards : Questions and problems : This seems to be a bug. Work fetch reporting "no tasks available" as "not highest priority"
Message 68098 Posted 3 Mar 2016
Ah, that is a piece of relevant info I did not have. It is certainly part of the problem behind why MindModeling was losing to NFS: the server was sending out only small batches of WU's at a time and hitting 0 work available every few minutes. Why vLHC was failing is still baffling, but instead of spending more time trying to figure out the issue I moved NFS to a couple of single-core VMs and halted any other NFS work on BOINC clients with shared projects, and things appear to be running smoothly, for now.
If the problem arises again, I will. Your help is appreciated, and thank you for the responses.
26) Message boards : Questions and problems : This seems to be a bug. Work fetch reporting "no tasks available" as "not highest priority"
Message 67980 Posted 23 Feb 2016
So is there a bug if the project's prio is about 2 to 3 orders of magnitude greater (abs()) than it should be? NFS prio should be down around -2.63 to -0.263, as it's already at 45% of its maximum potential RAC, while MindModeling reports a -1.9 prio and is at 3-7% of potential RAC from days on end of no work. How does NFS get such a high-magnitude prio of -263 when its credit payout is NOT 150 times greater than MindModeling's? The expected credit per WU is about 3x greater, as is the expected RAC over MindModeling on this particular machine. That was to indicate the absolute magnitude of the value -263 and had nothing to do with how it's calculated, which has yet to be mentioned. abs(-263) is equivalent to |-263| = 263. My degrees are in mathematics and physics, but most people here seem to have computer science and engineering backgrounds, so I used the function-call notation.
And so, is it a bug that |value| is so large? That's the crux of my last post. The 'highest' possible priority is technically 0, which is mathematically greater than all negative numbers, but the project given highest priority over all others when a work fetch is made is the one with the most negative prio, as shown by how NFS gets priority over all other projects.
MindModeling immediately cancels backoffs and work is received from regular manual attempts:
2/23/2016 5:31:37 PM | | [work_fetch] No project chosen for work fetch
2/23/2016 5:31:39 PM | | [work_fetch] Request work fetch: Backoff ended for MindModeling@Beta
Backoffs may be the norm, but they are irrelevant in this situation. Actually, I've noticed other projects that also turn off the backoff feature.
1) So why is NFS dominating at -263 prio?
2) Can a user turn off the REC feature, or make adjustments so that the work flow is more suited to their needs?
I posted a link over at NFS in hopes that we can figure out why this REC-based prio for NFS is so much stronger in magnitude than other projects'.
27) Message boards : Questions and problems : REC calc question: Server side WU limits effect on REC calc.
Message 67968 Posted 22 Feb 2016
Does the 2-WU limit set by vLHC get accounted for in the REC calculation during work fetch determination, and if so, how does the BOINC client garner that information?
28) Message boards : Questions and problems : This seems to be a bug. Work fetch reporting "no tasks available" as "not highest priority"
Message 67967 Posted 22 Feb 2016
The log message "not highest priority" comes from your local client, not from the server. It didn't even ask the server for work - so it won't get any reply relating to work availability.
(Thanks for the response.)
2/17/2016 2:09:38 PM | MindModeling@Beta | Not requesting tasks: don't need (CPU: not highest priority project; NVIDIA GPU: )
I understand (guess I forgot, my memory is decaying) that it comes from the REC calcs and the prio-weighted sorting. In this case the [work_fetch] share 0.519 was the highest of any in the entire list by 50%. But of the prio numbers, NFS was at something like -373 while MM was at around -3.5, so the work fetch share was irrelevant if the project can't beat the most negative prio. The NFS project is somehow getting outrageous prios and I do not know how to counteract it. Here is the current situation:
2/22/2016 11:37:55 AM | MindModeling@Beta | [work_fetch] share 0.324
. . .
2/22/2016 11:37:55 AM | NFS@Home | [work_fetch] share 0.001
2/22/2016 11:37:55 AM | MindModeling@Beta | [work_fetch] REC 3509.462 prio -1.989 can request work
. . .
2/22/2016 11:37:55 AM | NFS@Home | [work_fetch] REC 1949.634 prio -263.383 can request work
Resource shares assigned: MindModeling = 240, NFS = 001.
There are open cores sitting idle, waiting for work, because the client decides MindModeling is not the highest-priority project unless it is manually updated. MindModeling rarely gives any work, maybe 20,000 WU's once a week. NFS has been sending a steady stream, and this machine has been doing NFS WU's steadily for 3 weeks. In order to get any work on MM, I have to manually update every few minutes to get past NFS's lock on the work flow.
So is there a bug if the project's prio is about 2 to 3 orders of magnitude greater (abs()) than it should be? NFS prio should be down around -2.63 to -0.263, as it's already at 45% of its maximum potential RAC, while MindModeling reports a -1.9 prio and is at 3-7% of potential RAC from days on end of no work. How does NFS get such a high-magnitude prio of -263 when its credit payout is NOT 150 times greater than MindModeling's? The expected credit per WU is about 3x greater, as is the expected RAC over MindModeling on this particular machine.
BTW, I tried <rec_half_life_days>30</rec_half_life_days> over the last week and it hasn't had any noticeable effect.
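For reference, the logged numbers fit the scheduling-priority formula generally attributed to BOINC 7 clients (a reconstruction from the public documentation, not something confirmed in this thread): each project's REC fraction divided by its resource-share fraction, negated.

% Sketch of the client scheduling priority, assuming the form
% documented for BOINC 7: prio = -REC_fraction / share_fraction.
\[
  \mathrm{prio}_P \;\approx\; -\,
  \frac{\mathrm{REC}_P \,/\, \sum_j \mathrm{REC}_j}
       {\mathrm{share}_P \,/\, \sum_j \mathrm{share}_j}
\]
% Check against the log above: MindModeling has REC 3509.462 out of
% roughly 5459 shown, so REC_frac ~ 0.64, and share_frac 0.324:
%   -0.64 / 0.324 ~ -1.98, close to the logged -1.989.
% NFS has REC_frac ~ 0.36, but its displayed share 0.001 is rounded:
% with 240:1 resource shares plus the elided projects, the true
% fraction is near 0.00136, and -0.36 / 0.00136 ~ -263, matching
% the logged -263.383.

If that is the formula, the -263 is not a credit effect at all: the tiny share fraction in the denominator, not the REC itself, is what inflates the magnitude for any project held at a 1-in-740 share that nevertheless accumulates a third of the recent work.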
29) Message boards : Questions and problems : This seems to be a bug. Work fetch reporting "no tasks available" as "not highest priority"
Message 67853 Posted 17 Feb 2016
<work_fetch_debug>1</work_fetch_debug>
2/17/2016 2:09:38 PM | MindModeling@Beta | [work_fetch] share 0.519
is the highest work_fetch share in the preparation list. The client comes back with:
2/17/2016 2:11:01 PM | MindModeling@Beta | [work_fetch] share 0.000 no applications
and the server status page does report no available work units. But the final result is:
2/17/2016 2:09:38 PM | MindModeling@Beta | Not requesting tasks: don't need (CPU: not highest priority project; NVIDIA GPU: )
instead of reporting:
2/17/2016 2:20:09 PM | MindModeling@Beta | Project has no tasks available
which is what a manual update request returns.
30) Message boards : Questions and problems : Resource share apparently does nothing at all
Message 67851 Posted 17 Feb 2016
What's the debt formula? Is it based on RAC or total credit? Credit granted per project is wildly different and not useful in inter-project debt calculations. So what would be the effect of these two settings:
1) Leaving <zero_debt>n</zero_debt> set to 1 on every start (deprecated in version 7, so it does nothing now?)
2) <rec_half_life_days>n</rec_half_life_days> set to 0 days.
Would this mean the debt system would always go by current credit, or ignore credit altogether and go only by the resource share setting?
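For what it may be worth, the mechanism that replaced debt in BOINC 7 is REC (recent estimated credit), kept per project. As the public documentation describes it (a reconstruction, not stated in this thread), REC is computed from the device's estimated FLOPs for the time crunched, not from whatever credit a project chooses to grant, and it decays exponentially with half-life rec_half_life_days (default 10).

% Sketch of the REC update, assuming the documented
% exponential-decay form with half-life h = rec_half_life_days.
\[
  \mathrm{REC}(t+\Delta t) \;=\; \mathrm{REC}(t)\,2^{-\Delta t/h}
  \;+\; E(\Delta t)
\]
% E(dt): credit estimated from device peak FLOPS and elapsed time
% over the interval, independent of project-granted credit.
% Large h: REC behaves like total estimated credit.
% Small h: REC tracks only the most recent work, like RAC.

If that reading is right, wildly different per-project credit grants should not skew the calculation, since REC already approximates the "CPU/GPU time only" accounting asked about here; and a half-life of 0 days would make the decay factor degenerate (instant decay) rather than switch the system to pure resource shares.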
31) Message boards : Questions and problems : Resource share apparently does nothing at all
Message 67850 Posted 17 Feb 2016
What I dream of is an interface of vertical sliders, one for each project enrolled on the client. The sliders always add up to 100% usage of the available processing cores. A user can drag a slider up or down to give dominance to a particular project while the other projects are automatically adjusted, maintaining their current relative levels to each other (a sketch of that renormalization follows below). A button above each slider can lock it in place while the others are adjusted; another button can control whether the project is accepting work, a third can suspend the project, and a horizontal slider can adjust how many cores a project is allowed (akin to the app_config <project_max_concurrent> setting). The slider system would be confined to the subset defined by the global preferences for core count and CPU slice percentages.
This method would be quick and intuitive, give us complete control over how our resources (cores and CPU slices) are shared amongst projects, and be an attractive addition to the basic interface. By experimenting with various levels we could optimize our RAC for the projects to our liking. Credit per project varies from 0.0001 per second of CPU time all the way up to 0.3000 per second on the same machine (I've done a fairly extensive spreadsheet study over 33 different projects), so letting the client attempt to equalize credit over the various projects doesn't really work. The client doesn't understand the games some projects play in attempting to dominate client resources: higher-than-typical credit, ignoring the daily limits on work downloaded, JAVA code that switches itself to high priority, or shortened deadlines. We cannot deny that all projects are in competition for the cruncher's machine resources, and some will use the credit system to get their work completed. Until the day comes when the BOINC client has elevated AI and regular data on credit granted per project and per app within each project, it's best to give the people in charge of their computers the best methods of controlling the usage of their processing equipment.
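To make the "others keep their relative levels" behavior concrete, here is one way the renormalization could work. This is purely illustrative, nothing in BOINC implements it: when slider k is dragged to a new value s'_k, every unlocked slider is rescaled to fill the remaining headroom while preserving ratios.

% Hypothetical slider renormalization for the UI sketched above.
% L is the set of locked sliders; slider k was just set to s'_k.
\[
  s_i' \;=\; s_i \cdot
  \frac{1 - s_k' - \sum_{j \in L} s_j}
       {\sum_{j \notin L,\, j \neq k} s_j}
  \qquad (i \notin L,\; i \neq k)
\]
% The rescaled sliders plus s'_k plus the locked sliders again
% sum to 100%, and the unlocked sliders keep their mutual ratios.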
32) Message boards : Questions and problems : Need help running BOINC in the cloud with multiple instances
Message 67497 Posted 4 Feb 2016
Still no change after 2 days. Try manually setting up the accounts on one cloud installation; you should get apps running immediately.
33) Message boards : Questions and problems : Limit disk space use by tasks accepted on a per project basis
Message 67496 Posted 4 Feb 2016
Fellow BOINCers,
1) The simplest method is to go into the advanced settings and adjust the buffers down, to something like 0 additional days and 0.2 - 0.5 days buffered (see the sketch after this list).
2) You can run BOINC within a virtual machine with a limited hard drive size.
3) You can run separate BOINC installations on different partitions and assign a different control port to each client.
4) You can micromanage projects by going to the server preferences and unchecking the work units that send down the largest project files. Only choose the smaller work units (usually lower paying, but not always).
On 2) and 3), when a partition gets full, the project will complain about lack of space and another project with smaller requirements will bring down work.
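For 1), the same buffer settings can also be placed in global_prefs_override.xml in the BOINC data directory; the element names below are the standard override-file ones, and the values are just the ones suggested above:

<!-- global_prefs_override.xml, in the BOINC data directory.
     Shrinks the work buffer so each project downloads less at a time. -->
<global_preferences>
   <work_buf_min_days>0.2</work_buf_min_days>
   <work_buf_additional_days>0</work_buf_additional_days>
</global_preferences>

The client reads this file at startup, and the Manager can reload it without a restart (Read local prefs file).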
34) Message boards : Questions and problems : Resource share apparently does nothing at all
Message 67495 Posted 4 Feb 2016
Even with the present form of the scheduler, BOINC still uses a form of debt between projects.
What's the debt formula? Is it based on RAC or total credit? Credit granted per project is wildly different and not useful in inter-project debt calculations. I would love a debt calculation based only on total CPU/GPU time used, without regard to credit.
35) Message boards : Questions and problems : max_concurrent of 0 should mean no apps running instead of unlimited
Message 67494 Posted 4 Feb 2016
It would make more sense to have <max_concurrent>-1</max_concurrent> as the setting for unlimited work units and a setting of 0 as a client-local rejection of running any work units. That convention was the default taught in undergrad and graduate CS courses.
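For context, here is where the setting lives today: app_config.xml in a project's directory, where (per this thread's title) a max_concurrent of 0, like omitting the element entirely, currently means unlimited. The app name below is a placeholder:

<!-- app_config.xml in the project's folder under the data directory. -->
<app_config>
   <app>
      <name>example_app</name>                <!-- placeholder; use the project's actual app name -->
      <max_concurrent>2</max_concurrent>      <!-- at most 2 tasks of this app at once -->
   </app>
   <project_max_concurrent>4</project_max_concurrent> <!-- cap across the whole project -->
</app_config>

Under the proposal above, 0 would become an explicit "run nothing" and -1 the explicit "unlimited".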
36) Message boards : Questions and problems : Totally frustrated by the work fetch algorithm that could automate fetching single, temp WU when high priority projects have no work.
Message 66601 Posted 4 Jan 2016
(you'll likely have to create the file)
Nice! Will try it out.
37) Message boards : Questions and problems : Unusual virtual memory grab from BOINCMGR.EXE ver 7.6.9.
Message 66600 Posted 4 Jan 2016
I knew what a kilobyte was in 1977 from reading Byte Magazine and computer science textbooks that I checked out of the library and read in my spare time.
Boincmgr.exe has a virtualized memory space of 2.18 gigabytes according to ProcessHacker. The Win 7 Resource Monitor doesn't show how much virtual memory a process is laying claim to, and it's just odd that BOINCMGR.EXE's code is setting up over a 2 GB address space on one Win 7 machine but not on the other two Win 7 machines, which are not running ATLAS (their BOINCMGR.EXE shows about 149 MB each). Version 7.6.22 shows the same results when I switch to Sysinternals Process Explorer to compare.
The virtual size column matches the size of the RAM that my personally created VBox machines are requesting. Their private bytes will never be close to their virtual size, though they use up their entire virtual size in RAM/swap. That is why I was wondering if BOINCMGR.EXE was placing a virtual RAM claim of over 2 GB on behalf of the VMs, because this machine was running ATLAS, which supposedly uses 2 GB RAM virtual machines. The ATLAS machines each show virtual sizes of 218 MB instead of the 2,000 MB I was expecting.
To throw in a twist, I checked the virtual memory claims of BOINCMGR.EXE on WinX and it's claiming 32.9 gigabytes there. Either the coders have assigned huge amounts in their declarations or the process explorers are broken under WinX.
BTW, the one positive attribute that made me choose ProcessHacker over the other tools for daily use was its right-click option to reduce any process's working set, which is much nicer than some of the mass RAM-cleaning utilities.
38) Message boards : Questions and problems : Unusual virtual memory grab from BOINCMGR.EXE ver 7.6.9.
Message 66596 Posted 4 Jan 2016
First things first, terminology: VRAM == memory on a video card, fully called Video RAM, hence abbreviated to VRAM. If you need to abbreviate virtual memory, it's VM. Yes, the same as Virtual Machine.
Sorry, I know that virtual memory has the same abbreviation as virtual machine, so in my head I use vram for virtual memory and vidram for video. I forgot to translate into common usage. ProcessHackerPortable is a nice analyzer program. There are a couple of others in the utility section at that site.
39) Message boards : Questions and problems : Unusual virtual memory grab from BOINCMGR.EXE ver 7.6.9.
Message 66581 Posted 2 Jan 2016
My old Dell M6400 with 8 GB RAM running Win7 x64 SP1 has BOINCMGR.EXE grabbing 2.2 GB of virtual memory, and it only has a few projects attached. Is it reserving virtual memory for the VBox machines? I'm running VBox projects on other computers and their BOINCMGR.EXE only claims about 350-450 MB of virtual memory. The machine with 31 projects attached claims 149 MB, also on Win 7 x64 Pro SP1. I'm pretty sure that I installed VBox independently of BOINC on all the machines instead of using the combined installer. All the machines have VBox 5.0.10 installed, and the 7.6.9 combined installer came with VBox 4.3.34(?). I'll try installing the 7.6.22 version later and see if the issue continues, but I was curious how BOINCMGR.EXE decides on the amount of virtual memory to reserve.
40) Message boards : Questions and problems : Totally frustrated by the work fetch algorithm that could automate fetching single, temp WU when high priority projects have no work.
Message 66580 Posted 2 Jan 2016
Set up the 6-core machine with two running clients, using the CC_CONFIG.XML options to adjust the number of cores each BOINC.EXE emulates (the 6 cores split between the 2 clients). It's fetching work amazingly well now and the cores are at 100% usage all day long.
For anyone else having issues with work fetch from high-resource-share projects that have no work for days on end, and with RAM-limited projects using APP_CONFIG.XML to limit <project_max_concurrent>N</project_max_concurrent>, these options in the CC_CONFIG.XML stored in the DATA directory (you'll likely have to create the file) can help sort out the work fetch flow:
<cc_config>
<log_flags>
</log_flags>
<options>
<allow_multiple_clients>1</allow_multiple_clients> - allows you to split into 2 or more clients.
<fetch_minimal_work>1</fetch_minimal_work> - tells the client to fetch only 1 WU at a time; can stop some projects flooding the buffer while your client waits on a high-resource project to get some work ready.
<ncpus>X</ncpus> - where X is the number of cores per client you split into. Actually, setting this higher than the cores in your machine has benefits.
<fetch_on_update>1</fetch_on_update> - causes projects that wouldn't normally get work because of the work-fetch algorithm's decisions to fetch some WU's; goes well with <fetch_minimal_work>1</fetch_minimal_work>.
These keep your many machines running BOINC from flooding your single internet connection:
<report_results_immediately>1</report_results_immediately>
<max_file_xfers>2</max_file_xfers>
The rest of these are just some nice options:
<skip_cpu_benchmarks>1</skip_cpu_benchmarks> - machines exclusively running BOINC 99.9% of the time don't need to be benchmarked.
<start_delay>15</start_delay> - gives you a chance to stop BOINC before apps start if you've made a mistake in configuration.
<suppress_net_info>1</suppress_net_info> - doesn't report your IP.
</options>
</cc_config>