Posts by Tuna Ertemalp

1) Message boards : BOINC client : Please make keeping GPUs fully occupied at all times a priority for the task scheduler
Message 85802
Posted 9 Apr 2018 by Tuna Ertemalp
EXACTLY my point. *I*, the human, shouldn't need to do anything. When you have so many projects running on your host, you cannot micro-manage your CPUs, GPUs, % usages, available numbers of each, etc. That is why there is a "task scheduler". Humans shouldn't have to beat it into submission with manual workarounds just to avoid wasting the mucho $s invested.

All "GPU tasks" already declare how many CPUs and GPUs they require. So, if the host has 4 GPUs and 32 CPU threads available when nothing is running, and there are hundreds of GPU tasks in the queue, then the scheduler should: grab GPU tasks from the top of the queue (per whatever prioritization algorithm currently selects the next project(s) to run) until all GPUs are full; add up the declared CPU requirements of those tasks and subtract that from 32; and only then schedule CPU-only tasks for the remaining CPUs/threads, per the same prioritization algorithm.
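As a minimal sketch (this is nothing like BOINC's actual scheduler code, and all task names and requirement numbers below are made up), the proposed GPU-first policy is just a two-pass greedy fill:

```python
def schedule(gpu_tasks, cpu_tasks, n_cpus, n_gpus):
    """gpu_tasks: list of (name, cpu_req, gpu_req), already in priority order.
    cpu_tasks: list of names, already in priority order (1 CPU each).
    Returns the list of tasks chosen to run, GPU tasks first."""
    run = []
    free_cpus, free_gpus = float(n_cpus), float(n_gpus)
    # Pass 1: keep the GPUs busy, reserving each GPU task's declared CPU share.
    for name, cpu_req, gpu_req in gpu_tasks:
        if gpu_req <= free_gpus and cpu_req <= free_cpus:
            run.append(name)
            free_gpus -= gpu_req
            free_cpus -= cpu_req
    # Pass 2: only now hand the leftover threads to CPU-only tasks.
    for name in cpu_tasks:
        if free_cpus >= 1:
            run.append(name)
            free_cpus -= 1
    return run

# Hypothetical host: 4 GPUs, 32 threads; GPU tasks declaring 1 CPU + 1 GPU each.
gpu_q = [(f"gpu{i}", 1.0, 1.0) for i in range(6)]
cpu_q = [f"cpu{i}" for i in range(40)]
chosen = schedule(gpu_q, cpu_q, n_cpus=32, n_gpus=4)
print(chosen[:4])   # the four GPU tasks are scheduled first
print(len(chosen))  # 4 GPU tasks + 28 CPU tasks
```

With the CPU-first order reversed, the GPUs can never be starved by CPU-only work; the CPU tasks simply absorb whatever threads remain.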

Expecting humans to do things to force an approximation to this behavior, and probably still fail at it, is futile.

Tuna
2) Message boards : BOINC client : Please make keeping GPUs fully occupied at all times a priority for the task scheduler
Message 85800
Posted 9 Apr 2018 by Tuna Ertemalp
This is somewhat a continuation of https://boinc.berkeley.edu/dev/forum_thread.php?id=10746. The behavior has not changed since then. At the time I was puzzled by the behavior; not so much anymore. It seems to be entirely due to an apparent preference of the task scheduler for keeping the CPUs/threads busy over keeping the (possibly multiple) GPUs installed in the host busy.

For example, I have this host, among a dozen other mostly multiple-GPU hosts, that has a 16-core AMD ThreadRipper CPU plus four amped up 1080Ti cards (http://www.primegrid.com/show_host_detail.php?hostid=927928). And, very very VERY frequently, I am seeing some of the GPUs staying at idle, even sometimes only with 1 of 4 being utilized by a task.

Looking at it, it is NOT because there are no GPU tasks. On the contrary, there are hundreds of GPU tasks waiting, from PrimeGrid, Milkyway, SETI@Home, etc. I am one of those who attach to almost all projects (~45 currently) on all my hosts, so there are always plenty of CPU and GPU tasks to pick from.

However, the task scheduler seems to assign tasks to the CPUs first, to the 32 threads, and if some of those tasks happen to use the GPU, great, as long as there are CPU reserves left to satisfy the CPU needs of that GPU task. Which means that if, say, roughly 30 CPU threads are already assigned to CPU tasks across the many projects because it is those projects' turn to run, then only two GPU tasks, each requesting 1 CPU, can get scheduled, leaving two GPUs completely unused until the scheduler looks at things again and actually decides to pick up a GPU task instead of yet another CPU task.

Given that nowadays the latest GPUs go for close to $1,000, and are much, much faster at solving certain problems if the project developers choose to write their app for the GPU, compared to $850 for the latest non-server CPU at $850/32 ≈ $27/thread, it is definitely a huge waste to keep the GPUs unoccupied in order to keep the CPUs fully utilized.

I am pretty sure this was covered or thought about before (a few quick searches on the message board didn't yield anything obvious), but now might really be the time to pay attention to this need: GPUs should stay fully utilized at all times for the most return on the $ to the projects, and only then should the remaining CPU resources be distributed among the CPU-only tasks per whatever priority they are currently assigned. The answer is NOT and CAN'T be to carefully select a mix of projects, adjust their priorities, etc. Clearly, the field either is moving or already has moved from CPU to GPU computation, and the scheduler should acknowledge that.

Thanks for listening!
Tuna
3) Message boards : BOINC Manager : New ResourceShare values are not "read" by some hosts
Message 77308
Posted 11 Apr 2017 by Tuna Ertemalp
Unfortunately, I cleaned them all up which took 2-3 days, after leaving them untouched for about a week to see if the problem would resolve itself on its own.
4) Message boards : BOINC Manager : New ResourceShare values are not "read" by some hosts
Message 77277
Posted 10 Apr 2017 by Tuna Ertemalp
I'll wait until the developers at BAM have said anything about this: https://boincstats.com/en/forum/18/11507,1


Certainly. But note that my report here is about seemingly random hosts not respecting the resshare of seemingly random projects even though the XML sent to the host by that project during an update contains the correct value (since the project site itself has the correct value under MyAccount-->ProjectSettings), unless I detach n' reattach.

On the other hand, my report on BAM is that BAM doesn't seem to read back (or display) correctly the current resshare value of some projects on some hosts after an AccountMgrUpdate, regardless of whether that value on the host is correct per what the project site has, even though the host sends an XML to BAM with the correct values.

I acknowledge that they sound similar, but I'll be surprised if they are the same issue. The two endpoints and the direction of information flow seem to be different in each case...

Tuna
5) Message boards : BOINC Manager : New ResourceShare values are not "read" by some hosts
Message 77258
Posted 10 Apr 2017 by Tuna Ertemalp
So, I PAINSTAKINGLY went through all of my 12 hosts with ~50 projects they are attached to, and made sure every project on each host has the correct resshare that I set under MyProjects in BAM, and I also made sure that each project's own site also showed the same resshare in their ProjectPreferences under YourAccount. To do this, I had to identify all the projects for each of my hosts that for some strange reason wouldn't get the new resshare from the project's XML, drain it of any remaining tasks with NoNewTasks+AbortNotStartedWork+DelayedDetach, wait for detach, and then reattach it using BAM. Somehow this initialized the project on that host with the correct resshare value coming from the project. So, now the default values under BAM's MyProjects page, the project sites themselves and my hosts attached to those projects are all in sync.

But there clearly is a bug somewhere in BOINCMgr that prevents it from accepting the ResShare value from the XML file sent by the project, sometimes. Working around it was very very very time consuming.

Thanks
Tuna
6) Message boards : BOINC Manager : New ResourceShare values are not "read" by some hosts
Message 77058
Posted 31 Mar 2017 by Tuna Ertemalp
I have 12 hosts. I use BAM as AcctMgr. All my projects were at ResourceShare=100 for the last 2 years. Yesterday, I assigned them values 1/5/10/25/50/100/200/500 in BAM, and BAM successfully set those in actual project sites. Then I forced an UPDATE on all projects on each host. To my surprise, some hosts got the whole set of new values for the projects they run, and some hosts got those only for a few projects. I cannot understand what is going on...

As an example, I will use GPUGRID. I set it to 200 in BAM, BAM relayed that info to the project site, and I see it at 200 there. I don't use any "computer location" stuff; all hosts are at the default location. Yet, some hosts received 200 as the new value, and some hosts are stuck at 100. For kicks, on such a host, I suspended & NNT'd everything except for GPUGRID, changed the value to 201 on the GPUGRID site (to take any issues with BAM or stale files or file timestamps off the table), ran an update on GPUGRID on that host, it received 8 tasks (4xTitanX, baby!), yet the resshare stayed at 100! I changed it back to 200 on the site, re-ran the update, still 100. I looked at all *gpugrid*.xml files under ProgramData/BOINC; the resource_share entries are all 200, not a single instance of 100.
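For anyone else chasing this, the value in question lives in the project's account file in the BOINC data directory; a trimmed-down sketch (file name and URL illustrative, most elements omitted):

```xml
<!-- account_www.gpugrid.net.xml (sketch; most elements omitted) -->
<account>
    <master_url>http://www.gpugrid.net/</master_url>
    <resource_share>200.000000</resource_share>
</account>
```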

And, a good number of projects are at this state on some number of my hosts.

What to do? How can it be XXX in all *projname*.xml files, yet be 100 for the projname in BOINCMgr?? If it were for one project across all hosts, or all projects across one host, I could understand, and blame a project or host being stuck at something, but this? :(

Yes, everything is the latest version.

Thanks
Tuna
7) Message boards : GPUs : Why is BOINC recognizing all four TitanX GPUs but fails to assign to them randomly?
Message 67406
Posted 30 Jan 2016 by Tuna Ertemalp
And, they are OpenCL type, not CUDA type.


For the sake of completeness of data, even though irrelevant to the question at hand: on another machine, SETI Beta CUDA tasks claim 0.06 CPUs, unlike the OpenCL jobs' 0.473. So it seems the folks behind these tasks thankfully try to do the right thing.

Tuna
8) Message boards : GPUs : Why is BOINC recognizing all four TitanX GPUs but fails to assign to them randomly?
Message 67404
Posted 30 Jan 2016 by Tuna Ertemalp
Awesome! Thanks!!
9) Message boards : GPUs : Why is BOINC recognizing all four TitanX GPUs but fails to assign to them randomly?
Message 67402
Posted 30 Jan 2016 by Tuna Ertemalp
Look at those SETI Beta tasks which are running (or compare the CPU time with the Elapsed time of tasks which are using the same application and have recently completed) to get a realistic measure of how much CPU is actually needed while the task is running.

If the SETI Beta tasks are the 'CUDA' type, I expect you'll find that they actually use far less than 47.3% of a CPU - though probably more than the 4% that I've defined for you in app_info.xml

Use app_config.xml to define a value for <cpu_usage> which is closer to reality than BOINC's notoriously generous stock estimate. You could do the same for POEM, but I don't have a feel for what that figure would be.
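An app_config.xml of the kind described might look like this (the app name, directory, and both numbers are purely illustrative; the correct <name> is whatever the project actually calls its application):

```xml
<!-- app_config.xml (sketch), placed in the project's directory
     under the BOINC data folder, e.g. projects/setiweb.ssl.berkeley.edu_beta/ -->
<app_config>
    <app>
        <name>setiathome_v7</name>
        <gpu_versions>
            <gpu_usage>1.0</gpu_usage>
            <cpu_usage>0.04</cpu_usage>
        </gpu_versions>
    </app>
</app_config>
```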


And, they are OpenCL type, not CUDA type. As such (I think), their Run Time and CPU Time are very similar, unless there is another "elapsed time" entry I am missing: http://setiweb.ssl.berkeley.edu/beta/results.php?hostid=77274&offset=0&show_names=0&state=4&appid=

Tuna
10) Message boards : GPUs : Why is BOINC recognizing all four TitanX GPUs but fails to assign to them randomly?
Message 67400
Posted 30 Jan 2016 by Tuna Ertemalp
Thanks, Richard. I will do that. But the question remains: as there are tons of other WUs waiting around with declared use of 0.01 or 0.04 CPUs, how come the scheduler doesn't move on to one of those to keep the valuable GPU resource fully used? Since my "task switch every N minutes" is set to 60, does it simply reserve the GPUs for a particular set of projects for the current 60 minutes, and not move beyond that? If so, that would seem wasteful.

Tuna
11) Message boards : GPUs : Why is BOINC recognizing all four TitanX GPUs but fails to assign to them randomly?
Message 67398
Posted 30 Jan 2016 by Tuna Ertemalp
Caught it! It was only assigning to #0 and #3, not #1 and #2. I looked through the event log:

1/30/2016 12:10:13 PM | SETI@home Beta Test | [coproc] Assigning NVIDIA instance 1 to 16no11aa.32407.25020.12.46.1_2
1/30/2016 12:10:13 PM |  | [slot] cleaning out slots/0: get_free_slot()
1/30/2016 12:10:13 PM | SETI@home Beta Test | [cpu_sched_debug] skipping GPU job 16no11aa.32407.25020.12.46.1_2; CPU committed
1/30/2016 12:10:13 PM |  | [slot] removed file slots/0/init_data.xml
1/30/2016 12:10:13 PM |  | [slot] removed file slots/0/boinc_temporary_exit
1/30/2016 12:10:13 PM |  | [cpu_sched_debug] enforce_run_list: end


and

1/30/2016 12:13:07 PM | Poem@Home | [coproc] Assigning NVIDIA instance 2 to poempp_2k39_1453838965_1955717516_0
1/30/2016 12:13:07 PM | Poem@Home | [cpu_sched_debug] skipping GPU job poempp_2k39_1453838965_1955717516_0; CPU committed
1/30/2016 12:13:07 PM |  | [cpu_sched_debug] enforce_run_list: end


Yes, all 8 cores/16 threads of the 5960X CPU are committed at this time. And, SETI@home Beta Test needs 0.473 CPUs while Poem@Home needs 0.737. So, I understand that these guys don't get scheduled.

However, there is a whoooooole bunch of other projects with tasks waiting that need very little CPU along with a GPU, like Asteroids@home tasks that need 0.01 CPU and 1 GPU, and SETI@home tasks that need 0.04 CPU plus 0.3 GPU (I am using Lunatics & my own app_config for SETI). Those tasks don't even get considered. BOINC Mgr seems to just look at the same two POEM and SETI Beta tasks over and over again, fail to allocate CPU, go back into waiting, try again, etc. Why doesn't it move on to something else that would keep the GPUs busy? Heck, it could give one Asteroids task to one GPU, and three SETIs to another, and make the host scream.

Tuna

PS: I have saved all the log/xml files modified during the last 30mins while I was looking at this. If someone "from the staff" wants to see them, I can share a link. Don't want to do it publicly since I don't want to go into each of ~40 files to scrub IDs/names etc.
12) Message boards : GPUs : Why is BOINC recognizing all four TitanX GPUs but fails to assign to them randomly?
Message 67057
Posted 21 Jan 2016 by Tuna Ertemalp
Report: Since I turned on the flags, as far as I can tell, all GPUs are being scheduled. Either outputting all that debugging data is slowing down something just enough to make the blockage go away, or the problem was somehow related to the jobs across multiple projects that were in the queue at the time, or something else totally random. Or, maybe I have just been lucky during the last 24hrs.

I'll keep the flags on & keep watching. If there is a problem again, I will report back.
13) Message boards : GPUs : Why is BOINC recognizing all four TitanX GPUs but fails to assign to them randomly?
Message 67014
Posted 20 Jan 2016 by Tuna Ertemalp
No app_config use at all. They scare me... :) [Unrelated, not to hijack this thread, but if there is a link you can provide about using app_config files, for the uninitiated in that dark art, I'd be thankful.]

I turned on all those flags. Currently BOINC is able to schedule on all 4 GPUs, so nothing to debug, but I am getting LOADS and LOADS of data in my EventLog. Even with my 10,000 lines of increased buffer setting, I might end up losing data by the time I notice there is an issue. :)
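For the record, those flags and the enlarged event-log buffer go in cc_config.xml in the BOINC data directory; a sketch (which flags to enable is up to you, and the client re-reads the file on restart or via the Manager's read-config-files command):

```xml
<!-- cc_config.xml (sketch) -->
<cc_config>
    <log_flags>
        <coproc_debug>1</coproc_debug>
        <cpu_sched_debug>1</cpu_sched_debug>
        <slot_debug>1</slot_debug>
    </log_flags>
    <options>
        <max_event_log_lines>10000</max_event_log_lines>
    </options>
</cc_config>
```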

I did NOT turn off "report completed tasks immediately". Partly, I forgot. But why should that help solve this problem? I just don't want many 100% tasks to collect in the client.

Tuna
14) Message boards : GPUs : Why is BOINC recognizing all four TitanX GPUs but fails to assign to them randomly?
Message 67012
Posted 20 Jan 2016 by Tuna Ertemalp
Everything else is normal, as far as I can tell. The GPUs that BOINC is able to schedule do get tasks. The CPU seems totally unaffected. The job durations on CPUs and GPUs seem on par (although that is hard to tell since I am running 50 projects on 10 machines). I can tell, for instance, that SETI@Home Beta has received 394 completed & validated CPU tasks, a mix of OpenCL/Cuda42. And, POEM has received 66 OpenCL/GPU tasks completed & validated. Similarly for Asteroids, 49 validated cuda55 jobs. All over the last 2-3 days. Etc. So, things seem to work otherwise, except that sometimes a GPU is not "task worthy".

Even though a bunch of the flags you mention are CPU-specific, I will turn them all on and see if there are any clues...

Anybody else?

Thanks
Tuna
15) Message boards : GPUs : Why is BOINC recognizing all four TitanX GPUs but fails to assign to them randomly?
Message 67009
Posted 20 Jan 2016 by Tuna Ertemalp
I noticed this last night and it is driving me crazy!

I have this Quad Titan X machine. They and the CPU are liquid cooled with those sealed dedicated units, so the room is always warm. Every time I look at it, or touch it, all five fans are blowing out hot air. But last night I touched them, and one was cold. Hmmm... Checked BOINC, and sure enough, #2 (out of #0...#3) wasn't running anything despite the many GPU jobs waiting... I looked at the EventLog, at the top, and it did recognize all 4 cards. Yikes! Did I burn out an X?? But I am also running MSIAfterBurner on all my machines (none of which is having this problem, and one is a Dual X and one is a Dual Z) to see card temps, load%, memusage%, clock etc., and it is still getting data from all cards. Hmmmm....

I rebooted the machine. Now #3 isn't getting any jobs but #2 is. Huh? I turn on the coproc_debug flag for the Event Log. Yup, it is confirming assigning jobs to #0..#2, but not #3. I check the fans to see if somehow what BOINC called #2, it is now calling #3. Nope, a different fan is now blowing cold air, and the previous cold fan is now hot. Weird...

In frustration, since PhysX in the NVIDIA control panel was set to AutoAssign, I forced PhysX to only use the CPU and none of the GPUs (just in case; the default "auto-assign" doesn't cause any problems on any of my other single or multi-GPU machines), made sure SLI was off, etc. I am using the latest 361.43 drivers, by the way. Reboot. And now, #2 and #3 are not used. Is this spreading?!

Just for kicks, I shutdown BOINC, and physically switch the HDMI connection to my 4K monitor between cards. Sure enough, every single card shows activity when it is driving the monitor, as I can see in MSIAfterBurner. So, no card is dead.

Frustrated, I go to bed, leaving only #0 and #1 churning. In the morning, I find the room hot again. Check the fans, all four hot. Look at BOINC, yup, all GPUs are running tasks.

An hour later, however, #1 and #3 are not used. What? Check the fans, and confirm that it is #3 and a new one, #1. So, now #0 and #2 are working. The "blindspot" has shifted & split...

I shut down BOINC, set MSIAfterBurner to record a log, ran 3DMark's high-level test (Fire-something), looked at the log, and all GPUs were firing at max. The GPUs are healthy! And, due to the individual liquid cooling per card, none of the cards ever even gets close to 60C; they usually stay at 30-45C with the fans at around 30%, only hitting 50-55C briefly at times, which is something to be very happy about. Plus, the temperature envelope for these cards is in the 80s, so that is not the problem. These are also the temps I was seeing in MSIAfterBurner when all four of my GPUs were showing 100% GPU load earlier yesterday, when BOINC was able to assign jobs to all of them.

Of course I am and have been running the latest BOINC software; see below.

No, I am not overclocking/overvoltaging/overanything these cards. They are Titan X SC models the way they were set in the factory, with the Hybrid kit slapped on them.

Snippets from my current BOINC session:

1/20/2016 10:53:03 AM |  | Starting BOINC client version 7.6.22 for windows_x86_64
1/20/2016 10:53:03 AM |  | log flags: file_xfer, sched_ops, task, coproc_debug, unparsed_xml
1/20/2016 10:53:03 AM |  | Libraries: libcurl/7.45.0 OpenSSL/1.0.2d zlib/1.2.8

....

1/20/2016 10:53:04 AM |  | [coproc] launching child process at C:\Program Files\BOINC\boinc.exe
1/20/2016 10:53:04 AM |  | [coproc] relative to directory C:\ProgramData\BOINC
1/20/2016 10:53:04 AM |  | [coproc] with data directory "C:\ProgramData\BOINC"
1/20/2016 10:53:06 AM |  | CUDA: NVIDIA GPU 0: GeForce GTX TITAN X (driver version 361.43, CUDA version 8.0, compute capability 5.2, 4096MB, 4025MB available, 7468 GFLOPS peak)
1/20/2016 10:53:06 AM |  | CUDA: NVIDIA GPU 1: GeForce GTX TITAN X (driver version 361.43, CUDA version 8.0, compute capability 5.2, 4096MB, 4025MB available, 7468 GFLOPS peak)
1/20/2016 10:53:06 AM |  | CUDA: NVIDIA GPU 2: GeForce GTX TITAN X (driver version 361.43, CUDA version 8.0, compute capability 5.2, 4096MB, 4025MB available, 7468 GFLOPS peak)
1/20/2016 10:53:06 AM |  | CUDA: NVIDIA GPU 3: GeForce GTX TITAN X (driver version 361.43, CUDA version 8.0, compute capability 5.2, 4096MB, 4025MB available, 7468 GFLOPS peak)
1/20/2016 10:53:06 AM |  | OpenCL: NVIDIA GPU 0: GeForce GTX TITAN X (driver version 361.43, device version OpenCL 1.2 CUDA, 12288MB, 4025MB available, 7468 GFLOPS peak)
1/20/2016 10:53:06 AM |  | OpenCL: NVIDIA GPU 1: GeForce GTX TITAN X (driver version 361.43, device version OpenCL 1.2 CUDA, 12288MB, 4025MB available, 7468 GFLOPS peak)
1/20/2016 10:53:06 AM |  | OpenCL: NVIDIA GPU 2: GeForce GTX TITAN X (driver version 361.43, device version OpenCL 1.2 CUDA, 12288MB, 4025MB available, 7468 GFLOPS peak)
1/20/2016 10:53:06 AM |  | OpenCL: NVIDIA GPU 3: GeForce GTX TITAN X (driver version 361.43, device version OpenCL 1.2 CUDA, 12288MB, 4025MB available, 7468 GFLOPS peak)
1/20/2016 10:53:06 AM |  | [coproc] NVIDIA library reports 4 GPUs
1/20/2016 10:53:06 AM |  | [coproc] No ATI library found.

....

1/20/2016 10:53:06 AM |  | Processor: 16 GenuineIntel Intel(R) Core(TM) i7-5960X CPU @ 3.00GHz [Family 6 Model 63 Stepping 2]
1/20/2016 10:53:06 AM |  | Processor features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss htt tm pni ssse3 fma cx16 sse4_1 sse4_2 movebe popcnt aes f16c rdrandsyscall nx lm avx avx2 vmx tm2 dca pbe fsgsbase bmi1 smep bmi2
1/20/2016 10:53:06 AM |  | OS: Microsoft Windows 10: Professional x64 Edition, (10.00.10586.00)
1/20/2016 10:53:06 AM |  | Memory: 63.90 GB physical, 73.40 GB virtual
1/20/2016 10:53:06 AM |  | Disk: 476.39 GB total, 392.29 GB free
1/20/2016 10:53:06 AM |  | Local time is UTC -8 hours
1/20/2016 10:53:06 AM |  | VirtualBox version: 5.0.10
1/20/2016 10:53:06 AM |  | Config: don't suspend NCI tasks
1/20/2016 10:53:06 AM |  | Config: event log limit 10000 lines
1/20/2016 10:53:06 AM |  | Config: report completed tasks immediately
1/20/2016 10:53:06 AM |  | Config: use all coprocessors

....

1/20/2016 11:07:41 AM | Milkyway@Home | [coproc] NVIDIA instance 0; 1.000000 pending for de_80_DR8_Rev_8_5_00004_1446686708_40269073_2
1/20/2016 11:07:41 AM | Asteroids@home | [coproc] NVIDIA instance 0; 1.000000 pending for ps_151226_input_13038_16_0
1/20/2016 11:07:41 AM | Milkyway@Home | [coproc] NVIDIA instance 0: confirming 1.000000 instance for de_80_DR8_Rev_8_5_00004_1446686708_40269073_2
1/20/2016 11:07:41 AM | Asteroids@home | [coproc] NVIDIA instance 2: confirming 1.000000 instance for ps_151226_input_13038_16_0
1/20/2016 11:07:41 AM | SETI@home Beta Test | [coproc] Assigning NVIDIA instance 1 to 16no11aa.11914.18087.6.40.96_0
1/20/2016 11:07:41 AM | Collatz Conjecture | [coproc] Assigning NVIDIA instance 3 to collatz_sieve_2518255562788568039424_6597069766656_1

....

1/20/2016 11:27:18 AM | SETI@home Beta Test | [coproc] NVIDIA instance 0; 1.000000 pending for 16no11aa.11914.18087.6.40.27_1
1/20/2016 11:27:18 AM | Asteroids@home | [coproc] NVIDIA instance 0; 1.000000 pending for ps_151226_input_13039_25_1
1/20/2016 11:27:18 AM | Collatz Conjecture | [coproc] NVIDIA instance 0; 1.000000 pending for collatz_sieve_2518259560612846632960_6597069766656_0
1/20/2016 11:27:18 AM | Poem@Home | [coproc] NVIDIA instance 0; 1.000000 pending for poempp_1vii_1453239133_373521905_0
1/20/2016 11:27:18 AM | SETI@home Beta Test | [coproc] NVIDIA instance 0: confirming 1.000000 instance for 16no11aa.11914.18087.6.40.27_1
1/20/2016 11:27:18 AM | Asteroids@home | [coproc] NVIDIA instance 1: confirming 1.000000 instance for ps_151226_input_13039_25_1
1/20/2016 11:27:18 AM | Collatz Conjecture | [coproc] NVIDIA instance 2: confirming 1.000000 instance for collatz_sieve_2518259560612846632960_6597069766656_0
1/20/2016 11:27:18 AM | Poem@Home | [coproc] NVIDIA instance 3: confirming 1.000000 instance for poempp_1vii_1453239133_373521905_0


Aaaaaaand, during the half hour it took me to write this post with all the copy/paste, right now, all 4 GPUs are being used. No, the different flips between use/no-use states are not the transient times between a task being reported and a new one starting. I have ReportTasksImmediately turned on (see above), and these state changes are measured in hours between different sets of GPUs being used.

Has anybody else seen this? Any ideas? Any further debugging flags to turn on to see what is going on when BOINC says "Assigning NVIDIA instance N to blahblah" without a matching "NVIDIA instance N: confirming..."?

Thanks
Tuna
16) Message boards : Projects : Creating a preconfigured BOINC server in the Amazon cloud in five minutes
Message 66338
Posted 21 Dec 2015 by Tuna Ertemalp
I tried to follow the steps pointed at by https://boinc.berkeley.edu/trac/wiki/CloudServer using the Linux BOINC - ami-0c24bd3c image, but they start breaking at "Paste the Public IP nnn.nnn.nnn.nnn (once it appears) into a web browser, and it should say 'Apache2 Ubuntu Default Page'". Instead of that, I tried using Connect from the AWS Management Console, which results in a Java SSH console (meaning Edge or Chrome on Windows doesn't work, but IE would, given the lack of Java support in the former two), but then the step su boincadm fails. After that, I gave up and closed down shop...

Tuna
17) Message boards : Projects : Creating a preconfigured BOINC server in the Amazon cloud in five minutes
Message 66337
Posted 21 Dec 2015 by Tuna Ertemalp
So, the step search for “BOINCServerTemplate” in the instructions now returns No AMIs found matching your filter criteria. Searching for BOINC, on the other hand, returns a Linux BOINC - ami-0c24bd3c and a WindowsBOINC - ami-dc25bcec. Are these latter ones replacements for the original, or totally unrelated?

Thanks
Tuna
18) Message boards : Projects : Ibercivis JustFoldIt rebirth
Message 66336
Posted 21 Dec 2015 by Tuna Ertemalp
Fingers crossed. Thanks Boboviz...
19) Message boards : Projects : Ibercivis JustFoldIt rebirth
Message 66294
Posted 19 Dec 2015 by Tuna Ertemalp
On 11 Nov 2014, a crowdfunding campaign for JustFoldit was announced:
We have opened the crowdfunding campaign "JustFoldit: A new BOINC-based tool to boost biomedical research".
...

On 1 Jun 2015, 6.5 months later on the Ibercivis forum, this was the last meaningful communication from the admin:
...
Sorry for this big delay.

We're testing the application, but we have found some differences in the output files between the standalone and the boinc version of it.

Thank you very much for your patience and understanding
...

Then, on 20 Nov 2015, 5.5 months later, this private message to boboviz:
Thanks for your mail, we are facing some troubles with checkpoints in different versions/OS systems. We are working on this app as well as in other BOINC projects to be run soon. JustFoldIt it is expected to be launched in beta in 1 week or so...
Sorry for any inconvenience, thanks for your understanding

Now, on 19 December 2015, another month later, still no activity in this project, including no further communication on its own forums.

I am not being hostile here, at all. I'd love to see it revived and working and churning. However, frequent updates from the admin would be great, especially after he accepted money through crowdfunding.

Tuna
20) Message boards : Questions and problems : How to use "Oracle VM VirtualBox" for my own purposes
Message 62198
Posted 16 May 2015 by Tuna Ertemalp
I posted this on the BOINCstats forum, but maybe it'll get more traction here... My apologies if you are seeing this for the second time.

All my machines capable of running virtual machines have "Oracle VM VirtualBox" installed along with BOINC. And, all my machines are Windows 7 or 8.1 machines (Std or Pro). A number of projects, like those from CERN, nicely create Linux machines under VBox and run their tasks on my Windows machines. I am so grateful to them.

Yet, there are other projects, like BealF@Home and the WEP-M+2 Project, which are Linux-only, without any Windows apps (either insisting on being Linux-only or unable to create a Windows app), and it kills me that (1) they won't provide a Windows app, and (2) I don't get to provide CPU/GPU time for them.

So, I figure that it must be possible to use VBox to create virtual machines on which to run BOINC with just these two (and any future such) projects. But I am TOTALLY ignorant about Linux at this point. The last time I did anything with Unix was June 1990, and I don't remember any of it.

If anybody has done this already, I would appreciate excruciatingly detailed step-by-step, totally foolproof as well as future-proof (as Linux/VBox/Windows versions change) instructions. They should cover things like auto-start, auto-login, and auto-BOINC when the Windows machine boots, and how to coexist with the resources of the host Windows machine as opposed to the VM thinking it has all the CPU/GPU/RAM/HD resources of the host available to it, etc.

And, since this is the BOINC forum: wouldn't it be wonderful if the BOINC+VBox installer for Windows actually installed a default Linux image with the current BOINC already installed in it, and automatically redirected such projects to run on BOINC in that image rather than on the host machine, while reporting everything (I am talking about the Event Log as well as the Notices/Projects/Tasks/Transfers/Statistics/Disk tabs) through the BOINC Manager on the host machine? Of course it would also have to lie to the project, claiming to ask for Linux tasks, if the project refuses to deliver Windows tasks.

While on the subject, I also would like to run Bitcoin Utopia in a VM, but not under my BOINC-wide user account. I don't like how they artificially inflate one's credits by the billions, but I also see the point of helping out projects monetarily. As such, I wouldn't mind running it in a VM under a different BOINC persona to collect those credits, keeping them separate from my main (i.e. this) account. I know there are differing opinions about this, so I am not trying to start a conversation about it. This is just my preference.

I am observing on their project website that some/many of the Bitcoin Utopia applications have Linux versions. So, that would mean I would run TWO VMs on each of my VM-capable Windows machines: one running Linux-only projects under my current persona, and one running Bitcoin Utopia under a new persona. I am interested in hearing what the effect of that would be on the main host machine running the rest of the ~50 projects, and how I would go about assigning resources to the VMs to allow them to do their job without stealing way too much from everything else, including other apps running under BOINC on the host machine that create their own VMs for their WUs.

Thanks for any guidance. If this thread results in something really useful, we could maybe have a sticky post that tells what to do to the newcomers.
Tuna

Copyright © 2024 University of California.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.