Recent items from the news feeds of various BOINC projects.
Thanks to all the users. (Please keep crunching!)
View article · Sat, 19 Jun 2021 10:57:48 +0000
...300316 times - still no sign of any larger factors
View article · Sat, 19 Jun 2021 10:52:53 +0000
Hello everyone, just wanted to give some updates about the machine learning Python jobs that Toni mentioned earlier in the "Experimental Python tasks (beta)" thread.

What are we trying to accomplish? We are trying to train populations of intelligent agents in a distributed computational setting to solve reinforcement learning problems. This idea is inspired by the fact that human societies are knowledgeable as a whole, while individual agents have limited information. Every new generation of individuals attempts to expand and refine the knowledge inherited from previous ones, and the most interesting discoveries become part of a corpus of common knowledge. The idea is that small groups of agents will train on GPUGrid machines and report their discoveries and findings. Information from multiple agents can be pooled and conveyed to new generations of machine learning agents. To the best of our knowledge this is the first time something of this sort has been attempted on a GPUGrid-like platform, and it has the potential to scale to solve problems unattainable in smaller-scale settings.

Why were most jobs failing a few weeks ago? It took us some time and testing to make simple agents work, but we solved those problems over the previous weeks. Now, almost all agents train successfully.

Why are GPUs being underutilized, and what are the CPUs used for? In the previous weeks we were running small-scale tests with small neural network models that occupied little GPU memory. Also, some reinforcement learning environments, especially simple ones like those used in the tests, run on the CPU. Our idea is to scale to more complex models and environments to exploit the GPU capacity of the grid.

More information: We mainly use PyTorch to train our neural networks. We only use Tensorboard because it is convenient for logging; we might remove that dependency in the future.
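The generational idea described above (agents explore, the best discoveries join a common corpus, and new generations inherit it) can be sketched in a few lines of plain Python. This is a toy illustration only: the function names and the one-dimensional objective are invented for the example, and the actual GPUGRID jobs train neural-network agents with PyTorch.

```python
import random

def fitness(x):
    # Toy objective: agents try to get close to a hidden optimum at 3.0.
    return -abs(x - 3.0)

def evolve(generations=20, pop_size=8, seed=0):
    rng = random.Random(seed)
    corpus = [0.0]  # shared knowledge inherited by every new generation
    for _ in range(generations):
        # Each agent starts from a known point in the corpus and explores locally.
        population = [rng.choice(corpus) + rng.gauss(0, 0.5)
                      for _ in range(pop_size)]
        # The most interesting discovery of this generation may join the corpus.
        best = max(population, key=fitness)
        if fitness(best) > max(fitness(c) for c in corpus):
            corpus.append(best)
    # Report the best discovery accumulated across all generations.
    return max(corpus, key=fitness)
```

In the distributed setting, each volunteer machine would run one such generation and report its discoveries back to the server, which merges them into the shared corpus.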
View article · Thu, 17 Jun 2021 10:40:32 +0000
Due to unforeseen circumstances, we at MilkyWay@home are temporarily deprecating our newest version of Nbody (v1.80) and un-deprecating the previous version (v1.76) in its place. Since the lua files associated with each version are incompatible with each other, we have replaced the previous optimizations with a new set:
We have cancelled all jobs pertaining to the previous set of runs. However, there may still be a few that we were not able to cancel in time. These runs will most likely error out if you receive them, but should do so quickly (in about 2 seconds).
If any complications arise from this, please notify us immediately, and we will quickly find a solution.
Thank you all for your time and continued support,
View article · Wed, 16 Jun 2021 23:02:15 +0000
With the recent addition of a new permanent team member, the researchers can begin leveraging machine learning techniques to help with data analysis.
View article · Wed, 16 Jun 2021 18:39:16 +0000
Congratulations to all Einstein@Home volunteers, whose computers have found 14 new pulsars in data from the Large Area Telescope (LAT) on board the Fermi gamma-ray satellite. These new pulsars are listed on the FGRP discoveries webpage, along with the names of the volunteers whose computers identified the new systems with the highest significance. In total, Einstein@Home has now found 39 new gamma-ray pulsars in Fermi LAT data.
View article · Tue, 15 Jun 2021 19:51:17 +0000
Check out recent advances in drifting RFI removal.
View article · Thu, 10 Jun 2021 21:20:45 +0000
A research team member hits an important academic milestone this month.
View article · Thu, 10 Jun 2021 20:14:30 +0000
I've just put some new separation runs up on the server. Remember those stripe 84 and 85 runs that would start to throw validate errors as they became more optimized? I've been testing and comparing runs on different builds and *hopefully* that problem has been resolved.
The names of the new runs are:
Please keep an eye on these runs and let me know if anything odd happens (validate errors or otherwise). With any luck, everything will work perfectly! These are the last runs that need to be optimized before the latest results of separation can be submitted to a journal for publication.
Additionally, I have taken down the following runs:
As always, the stopped runs will continue to show up in your workunit queue for a few days as they finish up. This is normal and expected. Thank you all for your support and help with this project.
View article · Wed, 9 Jun 2021 23:14:57 +0000
We are updating the operating system on our servers on Thursday, June 10, beginning at 13:00 UTC.
View article · Wed, 9 Jun 2021 16:43:16 +0000
The fourth challenge of the 2021 Series will be a 5-day challenge celebrating the 16th anniversary of the launch of PrimeGrid on BOINC. The challenge will be offered on the ESP-LLR application, beginning 12 June 13:00 UTC and ending 17 June 13:00 UTC. To participate in the Challenge, please select only the Extended Sierpinski Problem LLR (ESP) project in your PrimeGrid preferences section. For more information, check out the forum thread for this challenge: https://www.primegrid.com/forum_thread.php?id=9684&nowrap=true#150570 Best of luck!
View article · Wed, 9 Jun 2021 13:46:42 +0000
MLC@Home has posted the Jun 8 2021 edition of its monthly "This Month In MLC@Home" newsletter!
A monthly update including the new client with DS4 support, a note on disk space on the server, and a move to monthly updates instead of weekly ones going forward.
Read the update and join the discussion here.
View article · Wed, 9 Jun 2021 04:22:49 +0000
This Month in MLC@Home
Notes for June 8 2021
A monthly summary of news and notes for MLC@Home
Updates have come slowly these past few months, since the presentation at the BOINC workshop and the release of our initial paper, as we're personally adjusting (fortunately!) to the beginnings of post-pandemic life. Work, family life, and everything else is changing for many of us, and we're still trying to figure out the new normal. Because of this, going forward these updates will be monthly, since they take quite a bit of time to put together and we've been failing to get them out weekly for a while now anyway. And here's hoping all our volunteers around the world are in an area where they too can start to move beyond the worst of the pandemic.
But that doesn't mean the project has been dormant!
DS1/DS2/DS3 are all nearing completion, especially DS3, which is sitting at 97%. We've been talking about DS4 for months, and the code is ready for larger testing. Unfortunately, the test client we rolled out a few weeks ago failed miserably because of an incompatibility between PyTorch and the native BOINC API. There's a way around this, but it requires more development and a change to how WUs are specified, and we've been working on it ever since. We should be ready any day now, but it's been more involved than we thought, so we're not prepared to give a date. We do know we need it soon, though, as DS3 WUs are running out.
Among the other benefits of the new client: it's statically linked, which vastly simplifies deployment. The extra development time has also given us a chance to make the client more robust to NaNs, which should cut down on the number of validation errors in the system.
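One common way to be "robust to NaNs" during training is to detect a non-finite gradient and skip that update rather than let it poison the model (and eventually the validation). The sketch below is a generic, hedged illustration of that pattern, not MLC@Home's actual client code; the function and its list-of-floats weights are invented for the example.

```python
import math

def safe_update(weights, gradients, lr=0.01):
    """Apply a gradient step only if every gradient value is finite.

    Returns the (possibly unchanged) weights and a flag saying whether
    the step was applied. A NaN or inf anywhere skips the whole step.
    """
    if any(not math.isfinite(g) for g in gradients):
        return weights, False  # skip the poisoned step, keep old weights
    new_weights = [w - lr * g for w, g in zip(weights, gradients)]
    return new_weights, True
```

For example, `safe_update([1.0], [float("nan")])` leaves the weights untouched and reports that the step was skipped, so one bad batch cannot corrupt a work unit's result.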
Another new issue is that the data partition on the server is running out of space. DS3 is taking over 4 TB! Thanks to all of our volunteers! We've moved some things around to make a little space, so everything is still working for now. We received some new storage today and will need some downtime to get it installed. It shouldn't take more than a few minutes, so we'll just do it sometime within the next week.
So stay tuned: the next month is going to be interesting for MLC@Home as we move into DS4 and the next phase of this research.
Project status snapshot:
(note these numbers are approximations)
Last month's TMIM Notes: May 1 2021
Thanks again to all our volunteers!
-- The MLC@Home Admins(s)
Discord invite: https://discord.gg/BdE4PGpX2y
View article · Wed, 9 Jun 2021 04:16:08 +0000
Four years of volunteer computing power helped predict more than 330,000 protein structures. Now, the project's time on World Community Grid is coming to a close. But the data analysis and publication are just beginning.
View article · Tue, 8 Jun 2021 20:14:03 +0000
We want to be whitelisted by Gridcoin (anytime), so you will be able to receive this cryptocurrency for scientific computing. We believe that Gridcoin is ethical. This is also a chance to increase awareness of the project.
View article · Sun, 6 Jun 2021 19:00:14 +0000
We've released the latest version of Kaktwoos-cl, which is now 2.13!
In this update, we've added further checks for GPU model detection, and each Kaktwoos-cl task now prints out which GPU (and its 'name') it is running on. Please note that if you are on Ubuntu or Debian, we highly suggest you run sudo apt-get install nvidia-opencl-dev (or the corresponding package and command on your distro) to install the missing OpenCL headers and allow Kaktwoos-cl to run. We would appreciate it if all BOINC users considering or currently running Nvidia GPUs on Linux verified that their tasks are not quietly reporting computation errors while left running unattended. We have not fully resolved the very rare 'stuck at 100% / infinite task' post-crash issue; we recommend manually aborting any task you find stuck at >99% for more than 10 minutes on a new GPU.
Nvidia GPUs will use one of two kernels depending on their age and model. For example, any RTX or 16xx (Turing) GPU will continue to use the optimizations introduced in Kaktwoos v2.10. Older or weaker GPU models will use the previous kernel, which means we should see a 5% boost for any GPUs that had a regression, as some of you reported on our BOINC threads.
AMD RDNA 1/2 GPUs are now detected and matched to another set of optimizations, restoring their original (pre-2.10) speed or improving on it. Due to architectural differences, the changes that improved GCN (e.g., RX 480, Vega 56) performance by 25-40% reduced RDNA 1/2 performance by up to 2x; this is now mitigated.
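The dispatch described above (pick a kernel variant from the reported device name) can be sketched as a simple string match. This is an illustrative assumption, not the actual Kaktwoos-cl detection code: the name patterns and kernel labels here are invented, and the real client inspects the OpenCL device properties.

```python
def pick_kernel(device_name):
    """Choose a kernel variant from an OpenCL device name (illustrative)."""
    name = device_name.lower()
    # Turing-era Nvidia cards (RTX / GTX 16xx) keep the v2.10 optimizations.
    if "rtx" in name or "gtx 16" in name:
        return "nvidia_v2.10"
    # AMD RDNA 1/2 cards (RX 5xxx / RX 6xxx) get their own optimization set.
    if "rx 5" in name or "rx 6" in name:
        return "amd_rdna"
    # Everything else (older Nvidia, GCN, etc.) uses the previous kernel.
    return "legacy"
```

A real implementation would match on architecture rather than marketing names (an "RX 590" is GCN, for instance), but the shape of the logic is the same: one detection step at startup, then one kernel per device class.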
All task outputs will now also include the seed range searched (the s: and e: parameters) for general interest and debugging potential. Otherwise, there is not much left that I (Hy) feel needs to be coded for Kaktwoos-cl, and CPU projects are now more of a focus for Minecraft@Home than before, as you may have seen with our OneChunk pre-announcement.
View article · Sun, 6 Jun 2021 18:39:47 +0000
From June 5, 2021 to June 6, 2021 there will be temporary problems with access to the project.
These problems are related to the changes in infrastructure.
View article · Thu, 3 Jun 2021 07:48:29 +0000
We have recently found a new bound for length 26 with prime 2,125,065,391 (2.1 billion), which is almost certainly optimal given that we have checked another interval of 350 million and a simple heuristic predicts a probability of 5% that we can do better. This bound is slightly below the predicted asymptote, which is common for even-length bounds.
There are also two new badges for tiers of recent average credit (RAC) and total credit (courtesy of Pavel_Kirpichenko). The total credit badges are those with the black rectangles, and the RAC badges are the ones around them. Hovering over a badge with your mouse will show its tier (e.g., T1M appears under a badge with a rectangle and corresponds to 1 million total credit).
We've also begun purging old tasks. The number of task records has been reduced from 4 million to just over 100 thousand. Task lists now load almost immediately.
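A purge like the one described above usually amounts to deleting finished task records below some retention cutoff. The sketch below shows the idea with Python's built-in sqlite3; the `task` table and its columns are invented for illustration and are not the project's actual BOINC database schema.

```python
import sqlite3

def purge_old_tasks(conn, cutoff_id):
    """Delete finished task records older than cutoff_id; return the count."""
    cur = conn.execute(
        "DELETE FROM task WHERE id < ? AND state = 'finished'",
        (cutoff_id,),
    )
    conn.commit()
    return cur.rowcount  # number of purged records
```

Keeping only unfinished and recent records is what makes the task lists load quickly again: the pages no longer have to scan millions of long-dead rows.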
View article · Wed, 2 Jun 2021 22:03:18 +0000
We are updating the operating system on our servers on Friday, June 4, beginning at 14:00 UTC.
View article · Wed, 2 Jun 2021 18:36:58 +0000
There are two new YouTube videos about Einstein@Home we think you might enjoy watching.
If you want to learn more about the scientific background of the project and its recent discoveries, have a look at Bruce Allen's talk about Einstein@Home at the 2021 BOINC Workshop.
View article · Wed, 2 Jun 2021 12:09:56 +0000
The following changes will be transparent, but if you are interested in technical details, then here they are. A new application cmdock-boinc-zcp has been released, including the new CmDock v0.1.3 release with (z)ipped input, (c)heckpoints and enhanced (p)rogress bar. In addition, a new docking experiment will begin under a new protocol.
Thanks to the CmDock team and pschoefer & walli for the release!
View article · Fri, 28 May 2021 11:07:44 +0000
The project has added GPU power to the existing strong CPU power that supports research for potential COVID-19 treatments.
View article · Thu, 27 May 2021 18:30:38 +0000
Videos of the talks from the 2021 BOINC Workshop are now available on YouTube. Day 01 includes a talk giving an overview of LHC@home and Day 02 has another talk which provides more details on the specific technology we use. There are many other interesting talks from the other BOINC projects and from the BOINC developers.
View article · Thu, 27 May 2021 12:45:13 +0000
The researchers have done further validation on their lung cancer marker data.
View article · Fri, 21 May 2021 19:08:06 +0000
We are glad to announce the new app version with checkpoints, CmDock v 0.1.2 (cmdock-boinc-zip 2.0 in the BOINC project). Currently it is implemented for Windows 64bit, Linux 64bit and Raspberry Pi (for RPi, please check this thread if you need instructions on configuring the client).
The new release has been tested, but not on every existing computer, so please report any problems you encounter.
Thank you all for participation!
View article · Thu, 20 May 2021 17:00:11 +0000
Copyright © 2021 University of California. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.