Quality insurance.

Rudolfensis
Joined: 6 Dec 08
Posts: 9
Benin
Message 21647 - Posted: 6 Dec 2008, 12:41:03 UTC


I've noticed a lot of puzzling projects out there lately: projects with limited participation on the part of the directors, or even projects that run almost in absentia.

I'm wondering: is there a "standard" bureau for BOINC that supervises the projects and ensures they meet minimum requirements? Is there anyone who actually assures participants that their efforts are not going down the drain because of a very poorly run project?
Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 14998
Netherlands
Message 21648 - Posted: 6 Dec 2008, 12:53:15 UTC - in response to Message 21647.  

No. BOINC is open source, so anyone can use it.
All the developers can do is decline to list every project out there, skimming the shady and possibly untrustworthy ones off the list.

Projects are asked, but not required, to join the Boinc_Projects email list.

As for assurance, use common sense: if you don't trust the project, don't attach to it.
Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 14998
Netherlands
Message 21652 - Posted: 6 Dec 2008, 13:55:15 UTC - in response to Message 21651.  
Last modified: 6 Dec 2008, 13:56:07 UTC

Yes, that one. Also this one: http://boinc.berkeley.edu/projects.php

I am also not showing all projects in the FAQs, although I do have more than the developers show. ;-)
rakarin
Joined: 26 Dec 08
Posts: 9
United States
Message 22084 - Posted: 26 Dec 2008, 16:49:49 UTC - in response to Message 21655.  

> There are at least 2 projects in those lists that waste a considerable portion of the CPU power donated to them by allowing volunteers to crunch results that don't need to be crunched. I don't question the value or quality of their work/science. It's just that much of the work at those projects is needlessly redundant. The 2 projects I speak of are Tanpaku and LHC@home. If one wants to make sure the spare CPU cycles they donate aren't going down the drain, then one should not attach to either of those 2 projects.


You know, I've been running BOINC for years, and I never realized there were forums on the main BOINC page...

I'm curious why you say this about these projects. I crunch for both of them (or did, when Tanpaku was alive), so perhaps you know something I don't?

LHC replicates work units because they use a "Monte Carlo" method. This means there are weak randomizers in the simulation. The computation will *tend* to go along a given path ("tend to" being the key phrase), but will "meander" a bit, and sometimes go wildly off course. LHC is not looking for a quorum like other projects. If they send out 10,000 work units with the same initial parameters, they expect 5,000 or more different results. They are looking at probabilities and statistics, and so have to replicate work thousands of times to get good data. It may seem wasteful, but that's how such research models work. Consistent successes and consistent failures are both useful.
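The "replicate the same parameters many times and look at the spread" idea can be sketched in a few lines of Python. This is purely a toy illustration, not LHC@home's actual simulation code: the drift model, noise level, and loss threshold are all invented for the example.

```python
import random
import statistics

def simulate_orbit(seed, steps=1000):
    # Toy stand-in for one work unit: a value that drifts with a small
    # random perturbation (a "weak randomizer") at each step. The run
    # "fails" if it ever wanders past a stability threshold.
    rng = random.Random(seed)
    position = 0.0
    for _ in range(steps):
        position += rng.gauss(0.0, 0.1)
        if abs(position) > 5.0:
            return None  # went wildly off course
    return position

# Same initial parameters, many replicas, many different outcomes:
results = [simulate_orbit(seed) for seed in range(10_000)]
survived = [r for r in results if r is not None]
loss_rate = 1 - len(survived) / len(results)
print(f"loss rate: {loss_rate:.3f}")
print(f"mean final position of survivors: {statistics.mean(survived):+.3f}")
```

Every replica follows the same model from the same starting point; only the random seed differs, yet both the individual outcomes and the aggregate loss rate carry information, which is why consistent successes and consistent failures are both useful.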

As for Tanpaku, well, their apparent negligence is hurting their image. They should communicate more, and if the project may close, they should simply say so. Language may be part of the problem, but urban Japan is fairly Westernized and English teachers and translation services are common.

As for Tanpaku's research, as I understand it, they are working on folding using a different modeling method, something called "Brownian energy". It's supposed to be a different model for handling potential energy and charge clashes. Yes, fundamentally they are doing the same thing as Folding@Home, but they are trying to do it a different way. The value is in the method, as they may find parts of their algorithms that work better than other modeling programs. GROMACS is not the only program out there to do folding simulations, and it's not the only one used, even by Folding itself. Though the bulk of their work *is* GROMACS, they also use other modeling programs for the CPU clients, and GROMACS is evolving and has a few different versions in use. Even Rosetta, which can do folding path studies, has different versions in use. By developing other simulation software with fundamental differences, it's possible to find innovations that can be added to, or work alongside, GROMACS and Rosetta (and others).


rakarin
Joined: 26 Dec 08
Posts: 9
United States
Message 22128 - Posted: 28 Dec 2008, 15:53:59 UTC - in response to Message 22125.  

Thank you for replying, Dagorat. This is the type of information I was looking for. I agree with part of what you say.

> I understand that and I accept those failures because they are unavoidable. However, LHC's policy of an initial replication of 5 results for a quorum of 3 wastes a lot of CPU cycles that need not be wasted. That waste would be eliminated if they would reduce the initial replication to 3.


This part I don't agree with. There are two points. First, the "failures": a consistent failure indicates a flaw in their design. The LHC team actually *wants* to see the failures now, so that they don't have to deal with the equipment (and possibly people) being regularly punctured by high-energy lead nuclei. Because of the probabilistic nature of their modeling, some failures will just be an unfortunate series of variables. However, consistent failures mean they have to re-examine their design.

I think the same holds true for the high quorum. Because the work units have built-in randomizing factors, you need more results to see a trend.

Another thing to keep in mind: they are doing this for engineering work on the most expensive piece of scientific equipment in the world. It cost more to build than the annual GDP of many small countries. No margin of error is acceptable. The computing is being done on computers they have no control over. Some people run BOINC on overclocked PCs, some try to recompile the source code because they think they can do better, and a few blatantly cheat. The project has no control over software or hardware. To be honest, the only BOINC project that has ever crashed one of my Linux computers was LHC (more than one PC, more than one Linux distro). I can understand that they want a high quorum.

> I have no problem with any of that. What I take exception to is Tanpaku's policy of setting the initial replication to 2 for a quorum of 1.


... Huh? Quorum of 1? So they just have to get one work unit back that does not report an error? Ok...

Personally, having worked in IT support jobs since '92, I think the r3q2 model is good. I personally don't think a quorum of 1 is a good idea.
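For what it's worth, the replication/quorum arithmetic being debated here can be sketched in a few lines. This is a simplified illustration, not BOINC's actual validator (real validators use application-specific fuzzy comparison of results); the check_quorum helper and plain equality matching are assumptions made for the example.

```python
from collections import Counter

def check_quorum(returned, quorum):
    # Return the canonical result once `quorum` returned results agree,
    # or None if more replicas are still needed.
    if not returned:
        return None
    value, count = Counter(returned).most_common(1)[0]
    return value if count >= quorum else None

# LHC's r5q3: five replicas issued, three must agree.
print(check_quorum(["a", "a", "b", "a", "c"], quorum=3))  # a
# r3q2: the smallest setup that can still outvote one bad host.
print(check_quorum(["a", "a", "b"], quorum=2))            # a
# Tanpaku's q1: any single non-error result is accepted unchecked.
print(check_quorum(["b"], quorum=1))                      # b
```

The q1 case shows the objection: with a quorum of 1 there is nothing to compare against, so a bad host's result is accepted as-is, while issuing 2 replicas for that quorum of 1 doubles the work without adding any cross-check.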


Copyright © 2022 University of California. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.