Message boards :
BOINC client :
Points discussion comments
Author | Message |
---|---|
Send message Joined: 4 Jul 06 Posts: 13 |
The following is a message that I was asked to deliver for someone else. They would prefer to stay anonymous until they gauge the response to this: YES, there was MAJOR security. It has all , apparantly, been lost out of the XML file. In the original BOINC, the hosts master XML file was stored on the server and the breach detected if it was modified client-side. It was the only way to prevent cheating, if you touched the client, or the XML, you got hammered. The only thing that could happeen is a faster app, faster cpu, etc.... And each project was responsible for establishing the credits/WU for each version of their apps. The man in charge of BOINC did this all to PREVENT cheating, and sold it as such. Everyone will remember SETI WU's being fed back in as the way to cheat. DA found a way to stop that using the server maintained master XML, which the HOST IS REQUIRED to accept when it connects. The SERVER controlled when benchmarks ran. The SERVER could, if the project wished, send a project-benchmark WU to establish a 'normalized' value. That is BOINC 3 and BOINC 4. It comes from the previous Folding@Home administrator. The user could, in later updates, request a benchmark, but the results would be sent to the server (for that project), normalized at the project level (even if it required a WU... as that is what the architecture/project was supposed to provide but used cobblestones if not available). EACH result returned had to verify the XML's matched or the result was rejected. After the results were sent, normalized, recorded, they were sent back. This was the norm. The users could run all the benchmarks they wanted, but it didn't matter. The server was in control. AKA... you cheat... the server ignores / disqualifies the returned result and puts it back out to dispatch for others, you get NO credit. 
There was a BOINC-distributed constant, in the server DB, which was up to each project to use to map to the BOINC 'standard' for cross-project purposes and uniformity... it was later to serve as the basis for cross-project status/scores. Each WU was assigned the values, to work in conjunction with cobblestones, to derive a 'project score for the WU'... then the project used a separate multiplier/function to handle 32- vs 64-bit, PC vs Mac, Linux vs Windows clients, and issued a balanced output 'cross-project' score. Normalizing the output data was the responsibility of the project... but it was NEVER a credit/WU or credit/CPU-hour or credit/clock-hour thing... it DID take memory, disk space, and all the cobblestone info into consideration.

In the beginning, the BOINC project gave each admin the freedom to use either a) BOINC defaults or b) adjustments based on the project. Most projects adjusted, as they knew their application better than anyone, AND they were the ones who could compensate the score if the 64-bit Mac version was unoptimized but the 32-bit Windows client was... the project normalized this within itself. Before each update, the support tech and *I* ran controlled standard tests (known results, known time, known resource utilization) to permit the database to be calibrated. From there, the Work Unit Generator created WUs to match / conform to that calibration.

However, one thing that did happen: in a quorum = 2 (2 good results out of 3 sent out), the middle credit value was selected, OR the totals were averaged and each user given the same credit for that WU. So sometimes a user got a low score because he/she ran the WU on a 32-bit, 1.5 GHz CPU (giving it a higher claimed credit, because it had to work harder / use more resources) than, say, a Mac G4 which did it in nothing flat. Sometimes those go in your favor, sometimes they do not...
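[Note from jasong: the quorum behaviour described above, where either the middle claim is selected or the claims are averaged and every host in the quorum is granted the same value, works out to something like this. Function and parameter names are illustrative:]

```python
def granted_credit(claimed, method="median"):
    """Grant one common credit value to all hosts in a quorum.

    claimed: list of per-host claimed credits for the same workunit.
    method:  "median" picks the middle claim; "mean" averages them.
    Every validated host receives the same granted value, so a slow
    32-bit host and a fast Mac G4 get identical credit for the WU.
    """
    ordered = sorted(claimed)
    if method == "median":
        return ordered[len(ordered) // 2]
    return sum(claimed) / len(claimed)

# Three hosts claim different amounts for the same workunit:
claims = [12.0, 15.5, 31.0]
print(granted_credit(claims))           # middle claim: 15.5
print(granted_credit(claims, "mean"))   # average claim: 19.5
```

Either way, the per-host benchmark quirks wash out because everyone in the quorum gets the same number.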
but that was known AND accepted by the users because statistically it worked out even. This former admin suggests the 'normalized score' be cobblestones & memory & IO (disk & network) related. It is important to remember the scale is not LINEAR... it is a log/ln-type score based on the predetermined (by the WU generator) score for each WU. This is how we could intermix complex WUs (requiring hours of CPU, IO and memory) with short jobs.

ALSO, jobs on a short turnaround period were given an added bonus for meeting the deadline. This typically was when 3 jobs ran but no quorum was obtained, so a 4th or 5th host was dispatched to the task. The faster the job got back, the more (as a fraction of the original) credits the user was given for the WU... and that user's credits carried more weight in the WU's overall scoring. Basically like taking a WU, having 5 different machines run it, then averaging the scores and giving all users the same score. This made it more statistically balanced for users, AND allowed fast WUs to be done quickly & properly as well as tough WUs to be given appropriate credit for their known complexity / resource utilization. Think of it as: 90% of my CPU, 15% of my memory, 3 GB of disk space, and 100% of my 256 Mb DSL.

That was one way we determined score based on a 'standard WU': the WU Generator assigned the appropriate offset/multiplier based on the 'standard model' for the project. Each project could issue a 'test'/standard WU for the user to run as a benchmark; it had to be done manually by the admin dropping it in the queue, but it was there. We did it often. It was also how a new WU generator was tested while running the 'existing accepted' WU generator. It allowed multiple (old, new & test) clients to run concurrently as well.

To all: all the best at restoring BOINC to its root concepts, and don't hesitate to notify if needed; this admin hated to leave BOINC/F@H.
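[Note from jasong: the turnaround bonus for re-dispatched tasks, where the earlier a result comes back relative to its deadline, the larger the bonus fraction, could look roughly like the following. The formula and the `max_bonus` cap are my illustration of the idea, not a recovered implementation:]

```python
def resend_credit(base_credit, dispatched, returned, deadline, max_bonus=0.5):
    """Illustrative turnaround bonus for a re-dispatched (4th/5th) result.

    The earlier the result comes back within the dispatch-to-deadline
    window, the larger the fraction of base_credit added as a bonus.
    A result arriving at (or past) the deadline gets no bonus.
    """
    window = deadline - dispatched
    used = returned - dispatched
    if used >= window:                 # missed the deadline: no bonus
        return base_credit
    speed = 1.0 - used / window        # 1.0 = instant, 0.0 = at deadline
    return base_credit * (1.0 + max_bonus * speed)

print(resend_credit(100.0, 0.0, 0.0, 10.0))   # instant return: 150.0
print(resend_credit(100.0, 0.0, 5.0, 10.0))   # halfway: 125.0
print(resend_credit(100.0, 0.0, 10.0, 10.0))  # at deadline: 100.0
```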
What I feel is needed to fix things: a) security to prevent cheating, b) uniformity within each project, c) normalization across projects, all based on a common standard, and d) random benchmarking (which used to be every 'N' days or jobs, as determined by the admin, to help keep all clients normalized with the apps and WUs).

This former admin is willing to help put back what was supposed to be there, AND REMAIN there, from day #1. If this is not considered an agreeable solution, and as this is not rocket science, then BOINC is not Open as defined by its name and should be scrapped. The admin may be contacted by the representative if desired. (I guess that's me, jasong.)

As admin, I spec'd the servers and set it all up. The project ran flawlessly from the user perspective unless we got a change from BOINC-central and did not run it on the test bed; adjustments were always required, but THAT *IS* the nature of BOINC... the ability to be uniform but different. We had the ability to use the 'architecture-class-OS' name (like i686-pc-cygwin) for balancing different hosts.

If BOINC is revoking per-project control and NOT going to normalize in a meaningful manner acceptable to all projects, then cross-project scoring should be removed. Depending on security, which MUST come first. AND, if cross-project scoring is removed, then BOINC HQ / Mr Anderson does not have the right to that which he dictated was to be controlled on a 'per project' basis. Hence, he cannot disqualify (for example) RS or F@H without removing the 'O' from BOINC, resulting in a 'David Anderson' view of the world. Which is not, by definition, open. No one person can control all. |
Send message Joined: 25 Nov 05 Posts: 1654 |
This sounds much the same as the long thread on The Lounge. All a load of cobblers, if not cobblestones. The method used by cpdn is good; it was equated to the amount of work being done by computers on SETI back when BOINC first came out, and has since been kept internally consistent as climate models became more resource intensive. And it can't be fiddled with because it's not based on benchmarks. |
Send message Joined: 29 Aug 05 Posts: 304 |
@jasong Your history is incorrect. There was never a "master XML" file on the servers as you describe it. The project-specific XML file does come from the server, and if you change it a new copy is downloaded at the next connection; however, this only deals with project-specific preferences. Prevention of cheating was based on redundancy. There was originally no choice but to figure cobblestones based on benchmarks multiplied by CPU time. Support for other means of figuring cobblestones was added in client version 4.46, and did not work correctly until about version 4.55. I do seem to remember the servers being able to force a benchmark on request, however I cannot find any proof of it. Folding at home never ran "flawlessly" on BOINC; at best it was a poor compromise between their existing processes and the BOINC framework. I hope I am misreading your post and you are referring to a different project. Sorry to be so negative, but I could not leave such inaccuracies uncorrected. BOINC WIKI BOINCing since 2002/12/8 |
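For reference, the "benchmarks multiplied by CPU time" scheme worked roughly like this: the cobblestone was defined so that a reference machine scoring 1,000 MFLOPS on Whetstone and 1,000 MIPS on Dhrystone claims 100 credits per CPU-day. A sketch (my own, not actual client code):

```python
def claimed_credit(cpu_seconds, whetstone_mflops, dhrystone_mips):
    """Classic BOINC-style claimed credit: average the two benchmark
    scores, scale by CPU time, normalized so the 1000/1000 reference
    host claims 100 cobblestones per CPU-day."""
    avg_benchmark = (whetstone_mflops + dhrystone_mips) / 2.0
    return cpu_seconds / 86400.0 * avg_benchmark / 1000.0 * 100.0

# Reference host running one full CPU-day:
print(claimed_credit(86400, 1000, 1000))  # 100.0
# A host benchmarking twice as fast claims the same in half the time:
print(claimed_credit(43200, 2000, 2000))  # 100.0
```

Since the claim depends entirely on locally run benchmarks and locally reported CPU time, inflating either one inflates the claim, which is exactly why redundancy (comparing claims within a quorum) was the real anti-cheating measure.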
Send message Joined: 4 Jul 06 Posts: 13 |
@jasong Well, whether what I quoted (and rephrased, since it came from an IRC chat) is true or not, what IS true is that BOINC either needs a benchmark system that can't be tampered with, or point-for-point cross-project scoring comparisons need to be officially rejected by people involved in BOINC. I'm not a programmer, but the idea in my rephrased quotes about how to fix the cheating problem seems to be a good one. :) |
Send message Joined: 30 Dec 05 Posts: 457 |
You can get very different benchmark scores on the same PC just by dual-booting it into either Linux or Windows. And the fact that the Duration Correction Factor for a Result (or Task) had to be introduced so that the scheduler can download the correct amount of work shows that the best step would be to remove the benchmarks entirely, if possible. |
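The Duration Correction Factor mentioned above is a per-project multiplier the client maintains so its runtime estimates track reality: it is adjusted after each completed task, rising quickly when tasks run longer than estimated (to avoid over-fetching work) and drifting down slowly when they run shorter. A rough sketch of that asymmetric update; the exact weight used here is illustrative, not the client's actual constant:

```python
def update_dcf(dcf, estimated_seconds, actual_seconds, decay=0.1):
    """Asymmetric DCF update: if a task ran longer than its corrected
    estimate, jump straight to the new ratio; if it ran shorter, move
    only a fraction of the way down toward it."""
    ratio = actual_seconds / estimated_seconds
    if ratio > dcf:
        return ratio                      # immediate increase
    return dcf + decay * (ratio - dcf)    # slow decrease

dcf = 1.0
dcf = update_dcf(dcf, 3600, 7200)  # task took twice as long: dcf jumps to 2.0
dcf = update_dcf(dcf, 3600, 3600)  # on-time task: dcf only eases down to 1.9
print(dcf)
```

The scheduler then multiplies the server's raw estimate by `dcf` when deciding how much work to fetch, which compensates for misleading benchmark scores without touching the benchmarks themselves.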
Send message Joined: 4 Jul 06 Posts: 13 |
Sorry to restart an old thread, but couldn't we at least TRY to reinstitute what the old administrator had? Also, in response to something said in this thread a while back: in terms of suggested inaccuracies in the first post, if they're inaccurate, then my friend didn't correct them. I honestly don't think he's the type of person who double-checks something in a half-assed fashion. And he isn't a liar either (I'm not sure if that was implied). If he says he was a Project Administrator of a BOINC project that originally had a better system than the present one, then that is the truth. He is an EXTREMELY intelligent man, and refusing his help is inadvisable, however things may seem. I'm not saying it's stupid to refuse the help, since I realize how all this must seem to some people, but you need to give him the benefit of the doubt so that he can: (1) prove who he is, and (2) implement a better solution than we have now, which really isn't a solution at all. Implement the ORIGINAL solution, as a matter of fact. |
Send message Joined: 29 Aug 05 Posts: 15480 |
I'm not saying it's stupid to refuse the help, since I realize how all this must seem to some people, but you need to give him the benefit of the doubt so that he can: (1) prove who he is, and (2) implement a better solution than we have now, which really isn't a solution at all. Implement the ORIGINAL solution, as a matter of fact. Why are you posting this and not this person himself? The O in BOINC says it all, it's open. If he wants to change it back to how it was, why won't he do so and then present his findings of it on the BOINC_Developers email list, or the BOINC_Projects email list? |
Send message Joined: 16 Apr 06 Posts: 386 |
The benchmark-based credit system was abysmal and a huge mistake, now thankfully rectified by the various projects. I'm sure everyone (with the exception of this mythical admin, and also any credit cheats taking advantage of the old system) was pleased to see it go. |
Send message Joined: 29 Aug 05 Posts: 304 |
I am neither of those, and I would have much rather seen it live up to its theory than be removed. I still think it is a better theory than anything the projects are using now; however, in practice it never came close to living up to itself or any of the other theories. @jasong I did not mean to insult your friend. My memory may not be all that clear, and there may have been things server-side that I was not aware of. The big thing, though, is I am remembering things as a BOINC participant, not as an admin trying to add a BOINC project to an existing infrastructure. Yes, they got Folding working after a fashion, and if you search Keck_Komputers over there, all of those points were earned with BOINC clients (it was not possible to combine with my old name). But it was not as easy as just attaching a project like you would expect. That would seem to be a requirement to me if you want to describe it as working "flawlessly". BOINC WIKI BOINCing since 2002/12/8 |
Send message Joined: 4 Jul 06 Posts: 13 |
This "mythical administrator" as you call him, remembers a system that was BEFORE what you are calling the "old system." You are a bit like those well-meaning idiots who want to abolish portions of the American government and institute a flat tax. As with them, you really need to "check the knowledge." |
Send message Joined: 25 Nov 05 Posts: 1654 |
We can't "check the knowledge" if you won't tell us what it is. Or was. |
Send message Joined: 29 Aug 05 Posts: 15480 |
Leaving a warning. Keep it civilized or I'll lock the thread. This "mythical administrator" as you call him, remembers a system that was BEFORE what you are calling the "old system." When was this then? About what month and year? |
Send message Joined: 4 Jul 06 Posts: 13 |
Leaving a warning. Keep it civilized or I'll lock the thread. I'll ask him next time I see him in irc. Not sure when that will be. |
Send message Joined: 29 Aug 05 Posts: 304 |
Leaving a warning. Keep it civilized or I'll lock the thread. As you can see from my sig I started using BOINC in 2002, with version 0.04. At that time there was only the BOINC test project. The system before the old system was still a benchmark times time system however there was no scaling factor. A SETI line feed task at that time was worth about 0.02 cobblestones (I earned 11 cobblestones before the scaling factor was added). Now that I think about it FAH had a system of running a test task on a controlled standard computer. From this they generated standard FAH points per protein. I don't remember what they did for conversion to BOINC cobblestones if anything. While this may have been acceptable to the admins it was still hardly flawless. BOINC WIKI BOINCing since 2002/12/8 |
Send message Joined: 11 Aug 06 Posts: 4 |
Leaving a warning. Keep it civilized or I'll lock the thread. If you yourself aren't sure of something, you shouldn't post it in a public space. |
Copyright © 2024 University of California.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License,
Version 1.2 or any later version published by the Free Software Foundation.