Posts by GeneAZ

1) Message boards : Questions and problems : Question: move project to another BOINC -instance- (Message 98355)
Posted 7 May 2020 by GeneAZ
Project successfully moved. I'll outline the process here, for the benefit of future message searches.
@RH - pretty much followed your recipe, but included the "client_state_prev.xml" file as well as the "client_state.xml" file when moving the <project> block of statements.
@KM - I was brave enough to try RH's strategy. My thinking was similar to COVID vaccine trial volunteers, i.e. even if the outcome is bad it gives useful data to others coming down the same trail.
So here's the script, in which asteroidsathome (A@H) is moved from instance BOINC2 to instance BOINC1.
invoke NNT on ALL projects and allow caches to completely drain;
exit both instances of boinc client and boinc manager;
make backup copies of both BOINC1 and BOINC2 directories;
move /BOINC2/projects/A@H to /BOINC1/projects/A@H;
N.B.:  "move" means copy to destination AND delete from source;
move the 6 project-specific xml files (account_*, master_*, sched_request_*, sched_reply_*, job_log_*, statistics_*) from /BOINC2/ to /BOINC1/;
locate the block of xml statements starting with the <project> header relating to the asteroids@home project
  and ending at the line before the *next* occurrence of <project>;
move that block from the /BOINC2/client_state.xml to the /BOINC1/client_state.xml;
  take care to insert the block at a place comparable to where it was removed;
repeat these two steps for /BOINC2/client_state_prev.xml to /BOINC1/client_state_prev.xml;
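For anyone scripting this, the file-shuffling steps above can be sketched roughly as follows. Paths and the project-name fragment are illustrative, and the mv commands are left commented out; note that client_state.xml actually closes each block with a matching </project> tag, which makes the block extraction scriptable (the pasting back in is still best done by hand in an editor):

```sh
#! /bin/bash
# Illustrative sketch only -- paths and the project name are assumptions.
SRC=/home/gene/BOINC2
DST=/home/gene/BOINC
PROJ=asteroidsathome

# 1. move the project directory (copy to destination AND delete from source)
# mv "$SRC/projects/$PROJ"* "$DST/projects/"

# 2. move the 6 project-specific xml files
# mv "$SRC"/account*"$PROJ"* "$SRC"/master*"$PROJ"* "$SRC"/sched_request*"$PROJ"* \
#    "$SRC"/sched_reply*"$PROJ"* "$SRC"/job_log*"$PROJ"* "$SRC"/statistics*"$PROJ"* "$DST/"

# 3. pull the matching <project>...</project> block out of client_state.xml,
#    then paste it into the destination file at a comparable position by hand
extract_project_block() {   # usage: extract_project_block <state-file> <url-fragment>
  awk -v pat="$2" '
    /<project>/   { buf = ""; inblk = 1 }
    inblk         { buf = buf $0 "\n" }
    /<\/project>/ { inblk = 0; if (buf ~ pat) printf "%s", buf }
  ' "$1"
}
# extract_project_block "$SRC/client_state.xml" "$PROJ" > project_block.xml
```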

Resumed the BOINC2 client and manager. Event log showed no error messages. Remaining project started normally and, when removing the NNT restriction, downloaded new work.
Resumed the BOINC1 client and manager. Event log showed no error messages. All projects are shown, including the A@H project that was moved here. Removed NNT on projects one by one and each downloaded new work. asteroids@home now operational in the new BOINC instance.
Some chores remain, i.e. adjust resource shares and concurrent task limits, but nothing apparent at this time that is badly configured.
Thank you Richard and Keith for your interest. And I hope this may be helpful to future readers.
5/4/20 May the 4th be with you!
2) Message boards : Questions and problems : Question: move project to another BOINC -instance- (Message 98339)
Posted 5 May 2020 by GeneAZ
Note, from the title, NOT moving to another PC but a second "instance" of BOINC running in the -same- PC. I have two Linux BOINC's (7.14.2) running, clients and managers, with their own choice of projects. (The motive, from long ago, was to facilitate resource management and to allow the CPU-only projects, for example, to run only at night when temperatures and PC usage are favorable while GPU projects run 24/7.)
One of the CPU projects, Asteroids@home, now has a GPU app and so I now want to move that project into the BOINC instance with all the other GPU apps.
I will run out the caches in both BOINCs and wait for all work reported, then shut off both BOINCs. So let's assume that as the starting point.
My first thought (based on the Remove then Attach thread) is to do the "Remove" action on the A@H project in its BOINC instance, then switch to the other BOINC and do an "Attach Project" (carefully selecting -returning user-).
But, the second thought, can I salvage anything from the BOINC project directory to either expedite the new "Attach" process or to bypass it altogether? It would be easy enough to copy the contents of the /project/asteroids... directory to its new BOINC home. And maybe tempting to copy all the project-specific .xml files (master_, account_, sched_request_, sched_reply_, job_log_, statistics_, etc.) from the source BOINC to the target BOINC directories. Is this just asking for (big!) trouble? Just trying to imagine that BOINC restarting, with all those files and directories present, might be happy to pick up where it had left off. I DO worry about the side effects regarding the client_state.xml file. Maybe that file is regenerated in the restart process - or, maybe not!
Thoughts or advice, anyone? Thanks...
3) Message boards : Questions and problems : Multiple BOINC projects; individual daily schedules?? (Message 86924)
Posted 7 Jul 2018 by GeneAZ
I like your idea of cron scheduling. It is a lot easier to experiment with times and projects without the worry of losing work units or, as I ran into, inadvertently creating a duplicate computer id. I'll try it with the two boinc clients currently running. There are "root" cron jobs active, for logrotate and such, that should be useful as examples. When in doubt read the man!
And thanks for joining this boinc message board, perhaps just to comment on this thread.
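Sketched as crontab entries, the night-only schedule might look like this. The times, the helper script name, and the port are all assumptions for illustration; boinccmd's --quit command shuts down a running client:

```
# start the CPU-only (night) client at 10 pm, stop it at 6 am
# (start_boinc2.sh is a hypothetical wrapper around the startup script below)
0 22 * * *  /home/gene/bin/start_boinc2.sh
0  6 * * *  /usr/bin/boinccmd --host localhost:31420 --quit
```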
4) Message boards : Questions and problems : Multiple BOINC projects; individual daily schedules?? (Message 86900)
Posted 5 Jul 2018 by GeneAZ
Here are the "scripts" I'm using for the two boinc clients (and respective boinc managers):
...for the "primary" client, i.e. default gui/rpc port access.
#! /bin/bash
exec /usr/bin/boinc --dir /home/gene/BOINC --redirectio --allow_multiple_clients &
sleep 3
exec /usr/bin/boincmgr -d /home/gene/BOINC -e /home/gene/BOINC -n localhost -m &

And, for the "extra" client, i.e. custom gui/rpc port access.
#! /bin/bash
exec /usr/bin/boinc --dir /home/gene/BOINC2 --redirectio --gui_rpc_port 31420 --allow_multiple_clients &
sleep 3
exec /usr/bin/boincmgr -d /home/gene/BOINC2 -e /home/gene/BOINC2 -n localhost -m -g 31420 &

These are Linux scripts, of course, and for Windows systems some different procedure is used.
Obviously I have two boinc data directories, with really imaginative names BOINC and BOINC2. The only glitch I ran into is the initiation order. If I start the "primary" boinc client first (and its matching manager) then the "extra" client will start but a boincmgr CANNOT connect to it! BUT... if I start the "extra" client first then the "primary" client will start and the boincmgr WILL connect to it.

The "multiple_clients" parameter options are included in both scripts (above). The respective cc_config.xml files in both directories have the <allow_multiple_clients>1</allow_multiple_clients> so it's possible the script options are not necessary. I haven't tested that, and probably never will since working scripts are in hand.
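For reference, the cc_config.xml fragment in question sits inside the standard <options> section:

```xml
<cc_config>
  <options>
    <allow_multiple_clients>1</allow_multiple_clients>
  </options>
</cc_config>
```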

I noticed that the extra client suspends work when the primary client is running. Reason = CPU busy. I had a "suspend when non-BOINC usage is above 50%" setting in computing preferences. It seems as if the primary boinc client usage is considered as "non-BOINC" in this context. Raising that level to 90% removes that constraint. I guess it makes sense that each boinc client looks at kernel values for CPU utilization and has no way of knowing that it is another boinc instance running apps that should not really count as non-BOINC use.

So, dual clients running and daily schedule values in use for the low-priority project. A more thoughtful approach to the changes in the data directories might have avoided the creation of a new computer ID (since merged with the original one). I suspect that if I searched these message boards I would find the right method.

...and no work units lost or mangled along the way... :)
5) Message boards : Questions and problems : Multiple BOINC projects; individual daily schedules?? (Message 86821)
Posted 2 Jul 2018 by GeneAZ
Good progress, so far, and no disasters. I was able to get a second boinc client and boinc manager up and running. I made a second boinc data directory and copied the account*, job_log*, master*, sched_request*, sched_reply*, and statistics* files for the "night" project from the primary boinc data directory. And copied the full /boinc/projects/asteroids*/ directory to the second directory. Then did a Project->Remove in the first directory. Then tried starting up the second boinc/boincmgr instance more or less along the lines that Richard had posted. The "boinc" client seemed to start up but the "boincmgr" failed to connect. Twiddled with the port number and got past that roadblock on the third try. The only glitch in that process was that the boinc client (apparently) tried to resume suspended tasks and created some slot directory lock discrepancies - two applications contending for the same slot and one going into "postponed" status. Used a bit of "root" magic to sort things out and kill one of the contending applications. (I have the work unit ID's of those tasks and I'll want to check their validation status just to confirm they were not affected.)
I got a "new" CPID for the project that was moved. Not a big surprise and I hope a "merge computers" will resolve that. I have not yet tried a daily schedule setup. The "proof-test" will be whether I can shut down both boinc instances and restart them with no ill effects. That's the "to do" list for tomorrow.
Even though this project has a resource share of 1 it is the only project in its boinc manager instance so it looks like it is entitled to 100% resource share. I'll use settings in app_config.xml and (eventually) daily schedule to provide the resource limits I want.

>>For the boinc "wish list:" enable daily schedule settings for each project instead of just a global host setting...
6) Message boards : Questions and problems : Multiple BOINC projects; individual daily schedules?? (Message 86799)
Posted 1 Jul 2018 by GeneAZ
O.K., thanks. I have all projects in /home/gene/BOINC/projects/*** directories so it looks like the straightforward approach is to move the ~BOINC/projects/asteroidsathome directory (the project I want to run only at night) over to a new /home/gene/BOINC2/** directory. There are several .xml files in the main BOINC directory (like sched_request_asteroids...) that look like they should be removed, or copied to the BOINC2 directory. Here's my thinking on that: do a "projects -> remove" in the current BOINC directory; then switch to the second data directory and do a "tools -> add project". I would expect that sequence to clean out anything from the current data directory and install needed files in the second data directory. What about "cc_config.xml" ? Duplicate that file in both primary and secondary boinc data directories? It has no project specific tags in it. If other files are needed I hope that boincmgr will give useful/informative error messages to point me toward a fix.
...Give me a day or two to step through this process. And hope not to crash the boinc client(s) along the way...
7) Message boards : Questions and problems : Multiple BOINC projects; individual daily schedules?? (Message 86784)
Posted 30 Jun 2018 by GeneAZ
Linux system, boinc 7.6.33
I have several boinc projects active with their respective resource shares. What I would like to do is restrict one of those projects to run only at night. There are "daily schedule" options in the computing preferences tab but they seem to apply to all boinc projects on my host and I don't see a way to use different schedules for different projects.

At the moment my host is a "home" location for all projects. If I change one of the projects to a "work" location (for example) would that allow different "daily schedule" options? I don't see that option in boinc manager's computing preferences but maybe that's because I don't have anything besides a "home" location. I can try changing one project to a "work" location and see if that opens additional daily schedule options. Is this a good idea?

What about running a second boinc client on this host? (<allow_multiple_clients> = 1 in cc_config.xml) Could I then attach only the "night" project and set the daily schedule parameters as desired? Is this the best (or only?) way to do what I want? I am aware of the requirement to run the boinc clients in unique data directories.

Thanks for any suggestions that are offered.
8) Message boards : Questions and problems : what is "plan_class" xml tag in app_info ? (Message 84911)
Posted 26 Feb 2018 by GeneAZ
Thanks for the replies. The examples and other references were enough to get me on the right track. All is well now. Up and running with the cpu and gpu apps that I wanted to use. I might note (for the benefit of others who might find this thread) that the boinc manager action "read config files" is NOT SUFFICIENT to update the boinc manager process state, especially in an anonymous platform context - a boinc RESTART is required. My best analysis is that the app_info.xml file is NOT re-read and when the app_config.xml file IS re-read there can be many mismatches. That fact is probably spelled out somewhere in the boinc documentation and user guides. But easy to overlook and/or forget. :(
9) Message boards : Questions and problems : what is "plan_class" xml tag in app_info ? (Message 84903)
Posted 25 Feb 2018 by GeneAZ
In an anonymous platform context... boinc 7.6.33, linux host... seti@home project...

What is the <plan_class> tag in the app_info.xml file? Looking in the boinc wiki it says that in the plan_class tag one should fill in the "plan_class" of the application. Well, that's not very helpful. Is the plan_class built-in to the application? If so, how does one know what it is? Or, is the plan_class somehow implied in the name of the executable file?
The wiki also indicates that the plan_class tag is OPTIONAL. So, under what conditions is it required and what are the consequences of leaving it out if it is not required?
I'm really only aiming at having one cpu app and one gpu app on a given named application.

Thanks for any light you can shed on this.
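For illustration only (the app and file names below are placeholders, not from any real project), a GPU <app_version> entry in app_info.xml carries the plan_class alongside the executable reference, something like:

```xml
<!-- illustrative fragment; names are placeholders -->
<app_version>
    <app_name>example_app</app_name>
    <version_num>100</version_num>
    <plan_class>cuda</plan_class>  <!-- should match the plan class the server's scheduler uses -->
    <avg_ncpus>1</avg_ncpus>
    <coproc>
        <type>NVIDIA</type>
        <count>1</count>
    </coproc>
    <file_ref>
        <file_name>example_app_cuda</file_name>
        <main_program/>
    </file_ref>
</app_version>
```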
10) Message boards : Questions and problems : nuisance log messages, "unrecognized report_on_rpc/" (Message 83468)
Posted 3 Dec 2017 by GeneAZ
O.K., Richard,
I've posted to the E@H "Wish List" forum (thinking it not serious enough to qualify as Problem/Bug) and we'll see if there is any interest.
11) Message boards : Questions and problems : nuisance log messages, "unrecognized report_on_rpc/" (Message 83404)
Posted 30 Nov 2017 by GeneAZ
boinc version 7.6.33, 64-bit Linux 4.12.12
Only from Einstein@home project, I get lots (sometimes hundreds) of Event Log messages
"[unparsed_xml] FILE_INFO::parse(): unrecognized: report_on_rpc/"
on a project update / work request cycle. O.K., I know that I can ignore them by NOT SELECTING the "unparsed_xml" flag in the Diagnostic log flag dialog box, but if this is something that boinc does not support shouldn't someone tell the E@H administrators to fix it? In their <sched_reply_einstein....xml> file it seems that almost every <file_info> block contains a <report_on_rpc/> tag. It is NOT in the sched_reply...xml file of other projects I have running.
Was this a feature of (long) past boinc versions? I don't find any mention of it in the forum messages since 2014. Without the diagnostic flag enabled it is silently ignored and invisible to the boinc hosts. I'll be happy to advise the E@H staff if some boinc "authority" will confirm that this .xml tag is wrong.
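A quick way to confirm how widespread the tag is in a given reply file (the file name is illustrative):

```sh
# count how many lines of a scheduler reply carry the <report_on_rpc/> tag
count_report_on_rpc() {   # usage: count_report_on_rpc <sched_reply file>
  grep -c '<report_on_rpc/>' "$1"
}
# count_report_on_rpc sched_reply_einstein.phys.uwm.edu.xml
```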
12) Message boards : Questions and problems : Linux vsyscall deprecated, may still be used in some project apps (Message 79662)
Posted 18 Jul 2017 by GeneAZ
An alert to Linux hosts... I've updated to a 4.9 kernel (Debian) and discovered that support of the "vsyscall" kernel function is being phased out, i.e. strongly discouraged, but is still "supported" via legacy emulation code with an appropriate kernel compile option. Only some project applications still use "vsyscall." If they do then the kernel will need the CONFIG_LEGACY_VSYSCALL_EMULATE setting. (The other choices are: NATIVE or NONE.) It is likely, but not a sure thing, that distribution kernel images will have that emulation setting. The failure symptom may be a little misleading: computation error, output file absent. There is nothing wrong with the file/directory permissions; rather, the application "seg faults" very early, no output file was ever created, and boincmgr (correctly) reports the output file absent. The "seg fault" results from a vsyscall to a kernel that does not have that capability or an emulation of it. The error is reported in the kern.log and messages. Only limited experience to report:
Asteroids@home (CPU) o.k.; NFS@home (CPU) FAILS; Einstein@home (GPU) o.k.; Einstein@home (CPU) FAILS.
[boinc 7.6.33 / Linux 4.9.30 / GTX 750Ti / AMD 7 1700]
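To check which vsyscall option a kernel was built with, one can grep the kernel config file (the /boot/config-* path is the usual Debian location; adjust as needed):

```sh
# print the CONFIG_LEGACY_VSYSCALL_* line from a kernel config file
vsyscall_config() {   # usage: vsyscall_config <kernel config file>
  grep '^CONFIG_LEGACY_VSYSCALL' "$1"
}
# vsyscall_config "/boot/config-$(uname -r)"
```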
13) Message boards : GPUs : GFLOPs of GPUs - Listing (Message 66519)
Posted 30 Dec 2015 by GeneAZ
Nvidia GTX 650 (driver 352.41) Peak 813 GFLOPS
Compute capability 3.0; CUDA 6.0; 1024 MB available
Max. performance mode at 1058 Mhz clock.
14) Message boards : Questions and problems : resource "share" experiment (Message 65132)
Posted 29 Oct 2015 by GeneAZ
A follow-up to thread id=10389, now locked...

I was curious to see how system resources would be allocated among multiple projects. I set up the resource shares as noted below and let boinc run without any manual intervention for 75 days, aiming to reach some steady state. I've captured run time and credit statistics for the 5-day span (Sept. 23 - 27) and tabulated the results below. (It has taken a while for work units to be validated and credits granted.)

  project    share  GPU used  CPU used  credit  RAC 9/27
  ---------  -----  --------  --------  ------  --------
  Seti        88 %    85 %     38 %      37 %     5400
  Einstein    11 %    14 %      5 %      23 %     4391
  NFS          1 %    n.a.     24 %      13 %     1380
  Asteroids    1 %    n.a.     23 %      27 %     3658

The GPU (GTX 650) is configured to run only one task at a time. The CPU has 4 cores, one is reserved/configured to support the GPU. The Einstein, NFS, and Asteroids projects are configured to "max concurrent" = 1. The Seti and Einstein projects seem to share the GPU and the CPU resources close to the 8:1 share ratio settings. The NFS and Asteroids projects are not close. To calculate the % CPU used I have assumed 3 cores were available. So the NFS project, for example, used 24% of the 3 cores but that translates into using its "max concurrent" core 72% of the time over those 5 days.
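That per-core arithmetic can be checked directly:

```sh
# NFS used 24% of the 3 available cores; with max concurrent = 1 that is
# 0.24 * 3 = 0.72 of its single core, i.e. busy about 72% of the 5-day span
awk 'BEGIN { printf "%.0f%%\n", 0.24 * 3 * 100 }'
```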

In the past I have succumbed to the temptation to "micro manage" the work buffer, e.g. setting NNT for resource hogs occasionally to allow downloads for other projects with a higher share. (The buffer parameter is set for 1+0.5 day.) As of 8 a.m. today the CPU work buffer for Seti is zero while the NFS and Asteroids projects have 57 and 77 hours respectively. This is admittedly just a snapshot sample but it is not unusual. Seti CPU work has been 0 on 12 of the last 30 days.

The boinc client is 7.4.23 (a Debian Linux distribution). There appears to be a 7.6.12 version in the pipeline and I look forward to upgrading although I have not seen anything in the message boards to suggest any change in the resource share management.

Just my 2-cents worth to present real user statistics and not just anecdotal comments. I wish that boinc did a "better" job of resource sharing but I will "play with the hand I'm dealt."
15) Message boards : Questions and problems : Why project schedule priority so high? (Message 63284)
Posted 28 Jul 2015 by GeneAZ
Here's an update on the progress of boinc learning how to manage the flow of CPU tasks for my system:
  project  Res.share  Buffer:tasks/hours  sched.priority
    Seti       88          0 / 0               -1.11
    Einstein   10          0 / 0               -1.39
    NFS         1         38 / 63              -1.15
    Asteroids   1         28 / 65              -1.11

The "hours" indicated is the sum of the task estimates. It is pretty close since hundreds of tasks have been done and a decent average is established.

This (table above) was a snapshot of the buffer at 8 a.m. this morning. It is similar to the buffer content on each of the two preceding days. Occasionally a Seti CPU task gets downloaded, and run immediately since there is an idle core. (NFS & Asteroids seem to be using one core each and one core feeds the GPU.)

I will continue to be patient with boinc as it tries to figure out what I want done. So, no changes to resource shares until further notice, but one can see why my "micro-management" finger is getting itchy.
16) Message boards : Questions and problems : Why project schedule priority so high? (Message 63108)
Posted 18 Jul 2015 by GeneAZ
As of 10 minutes ago, all Seti CPU work has been drained out of the buffer. There is CPU work for the other three projects and they are all using one core each. There is Nvidia (GPU) work for Seti and Einstein and I think they are sharing the GPU resource more or less according to the share settings.
Now I shall try to be patient and leave things alone to see what happens.

17) Message boards : Questions and problems : Why project schedule priority so high? (Message 63087)
Posted 16 Jul 2015 by GeneAZ
I did not feel "forced" to set NNT, I just got impatient with resource share settings I tried back in February - Asteroids, at RS=5%, filled the work buffer, Seti, at RS=90%, exhausted its supply of tasks and 3 cores sat idle while Asteroids ran 1 core 24/7, refilling the buffer as needed and always running at higher scheduler priority to block any Seti downloads. After a couple of days in this state I intervened manually, knowing very well that it would interfere with the Boinc "learning" process. My thinking/strategy was: suspend Asteroids occasionally to allow Seti work to download but otherwise let both projects do some work and hope that Boinc would eventually reach some equilibrium and back off on the Asteroids project as I had intended in the resource share settings.

"You could lower the resource share a bit"

I have now set Asteroids to 1% share (and given the lost 3% to Seti). In retrospect my initial thinking of gradually reducing the Asteroids share until it got its "proper" share was flawed. I should have started at 1% and then, if appropriate, raised the share. I.e. home in on a share setting from the underutilized side.

You may be right, that there is no way in the current Boinc manager to mix CPU and GPU work/projects and get them to play together.

@Richard & Elektra:
"CreditNew doesn't work at present and even won't work in the future"

I read the two papers. The second one alludes to some control system theory that was very heavy going. I think I see the point of the argument but the details are over my pay grade.
I have seen CreditNew threads in other project forums, of course. My own feeling is that local host run-times would have been the best basis for resource share but I do understand the intent of the CreditNew scheme. We play with the hand we're dealt! For my four active projects I see (via work_fetch debug) wide divergence between the RAC and REC values. For example, present Seti RAC is 5225 but the REC in work fetch is 28812. Einstein is much closer at 4472 vs. 4660.

"as far as I can see, REC is calculated from runtime and has nothing to do with actual credit"

I can't reach this conclusion from my observations (see above paragraph) but it could be true. I commend you for your efforts to dig into the actual Boinc client code. (I have looked at the code, months ago, seeking an explanation of an entirely different Boinc issue but became hopelessly lost in data structures, variable names, and function scope.) The relative performance of the GPU and CPU resources should enter into the scheduling process but it will differ for each project and likely depends on programming efficiency, etc. I have stopped running Asteroids GPU tasks because they actually run *longer* than the CPU tasks. (And Asteroids grants fixed credit for each work unit regardless of actual run time.)

"If you upset the balance by suspending projects"

Maybe my strategy is misguided. I only suspend a project when it has (over) filled the work buffer, thus blocking any other projects from work fetch, and there are idle CPU cores as a result. Suspending just for a minute or two is sufficient for work fetch to proceed for another project and then resume the project and let all the cores crunch away as intended. I don't see the harm in this use of project "suspend."

I have scanned the work buffer to get total estimated hours of work for all four active projects, separately for CPU and GPU where relevant. I'm willing to see how this develops over the next week (or more). At present, Asteroids has 56 hours of work pending and it can satisfy all deadlines but only by running pretty much 100%. The question is: as the work buffer is drawn down, will it fetch more work far in excess of the 1% (intended) resource share?

Thanks to everybody for your insight from various perspectives.

18) Message boards : Questions and problems : Why project schedule priority so high? (Message 63033)
Posted 14 Jul 2015 by GeneAZ
System: Linux x64, Boinc 7.4.23, 4-core CPU + Nvidia GPU, 4 projects with individual resource shares as shown below:
   Project  Res.Share   Project Schedule Prio. 
 Seti        85              -1.08
 Einstein    10              -1.13
 Asteroids    4              -0.49
 NFS          1              -0.66

Work buffer parameters set for 1 day + 0.5 day and the system has been running with this configuration for over a month.

I frequently have to set No New Tasks for Asteroids as it will otherwise overload the work buffer and starve other CPU applications by reason of "CPU not highest priority project." And then "Suspend" the Asteroids project, at least momentarily, to allow CPU work fetch for other projects.

For example, 4 hours ago the work buffer had 13 Seti (CPU) tasks at roughly 2.5 hours each; plus 44 Asteroids (CPU) tasks at roughly 2.4 hours each. A scheduler request was initiated to Asteroids and boinc fetched 4 more(!) CPU tasks. Why?

Meanwhile, the Nvidia work flow (for Seti and Einstein) seems to allocate time on the GPU roughly in accordance with the respective resource shares.

I have limited the Asteroids project to 1 <max_concurrent> and have allowed Seti to have 3 <max_concurrent>. Those constraints are observed. But Asteroids runs its 1 allowed task 24/7 yet never seems to get its scheduler priority below the other projects. The -0.49 value (in the table above) is the lowest I have seen recently and a value of -0.30 is more typical.
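For completeness, the max_concurrent constraint mentioned above lives in each project's app_config.xml; a minimal sketch (the app name is a placeholder and must match the project's real short app name):

```xml
<app_config>
    <app>
        <name>example_app</name>
        <max_concurrent>1</max_concurrent>
    </app>
</app_config>
```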

I apparently don't understand how Resource Share is "supposed" to work, especially with projects running a mix of CPU and GPU work.

Any instructive comments will be greatly appreciated.
19) Message boards : BOINC Manager : BOINC Manager displays after upgrade (Message 62544)
Posted 13 Jun 2015 by GeneAZ

The latest Linux x64 version is 7.2.42, so if this issue has been "fixed" in 7.4.40 (or later) it is not yet available for Linux systems.

I chose to install the (Debian) 7.4.23 version, as part of the recent major Debian upgrade, expecting it to have incorporated as many as possible of the boinc updates past the 7.2.42 release.

I did not file a bug report with Debian maintainers as to the problem with the boinc manager not starting smoothly as I was able to get it working with the addition of the "--redirectio" option on the boinc command line. I just accepted this as a new procedure to adapt to, as often happens with software upgrades.

O.K., I just exited boinc and went back to re-create the symptoms I encountered (and possibly the same as seen by the OP). What used to work in 7.2.42 was the command "boincmgr -e ~/BOINC -d ~/BOINC -n localhost & " and boincmgr would automatically start up the boinc client and resume all previously active projects.
When doing that in 7.4.23 the boincmgr window comes up but it is entirely blank. No projects shown, no tasks shown, no notices, ... even disk space says no projects are installed. The message status at the bottom of the boincmgr window shows "connecting to localhost" for about a minute and then shows "disconnected".

I was not getting anything written to the boinc stdoutdae.txt file, where I had hoped to find diagnostic clues. That led me, via the boinc "man", to include the redirectio option. It was not included in the boincmgr options so I did the individual startups of boinc and boincmgr successfully, as described in a previous post in this thread. Maybe this is not the "normal" way to run the boinc projects, or maybe not the easiest. But it does work for me! YMMV

As to the task status not updating properly, I will just wait for the Linux x64 release (> 7.4.23) directly from boinc developers and install that.
20) Message boards : BOINC Manager : BOINC Manager displays after upgrade (Message 62472)
Posted 8 Jun 2015 by GeneAZ
I had the same problem, after upgrading to Debian 8.0.0. I "found" (well, stumbled upon) a solution. If you're starting Boinc from a command line, i.e. virtual terminal or xterm window, do these steps (where you fill in your proper directory path instead of my "/home/gene/BOINC/")
/usr/bin/boinc --dir /home/gene/BOINC --redirectio &
/usr/bin/boincmgr -d /home/gene/BOINC -e /home/gene/BOINC -n localhost &

I suspend and resume Boinc often enough, but not often enough(!) to remember the right commands, so I've created a bash script to do these steps, and I gave it a name that I CAN remember...
exec /usr/bin/boinc --dir /home/gene/BOINC --redirectio &
exec /usr/bin/boincmgr -d /home/gene/BOINC -e /home/gene/BOINC -n localhost &

You will probably notice, as I have, when you get the boinc manager window in view that the "Tasks" tab display of running tasks doesn't quite update properly. Some running tasks don't advance their elapsed time. And other little quirks. Completed tasks sometimes continue to show "running" when, in fact they have finished and uploaded the result.



Copyright © 2022 University of California. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.