Message boards :
Questions and problems :
Boinc performance
Joined: 9 Dec 13 · Posts: 2
Hello everybody. According to http://boincstats.com/en/stats/-1/project/detail, the BOINC platform has a performance of 8 PFLOPS. How is this counter computed? Does it simply add up the performance of all CPUs and GPUs?
Joined: 5 Oct 06 · Posts: 5082
Look at two figures on that page:

Recent average credit (RAC): 1,680,274,919
Average floating point operations per second: 8,401,374.6 GigaFLOPS

The first number is exactly 200 times the second - well, 199.9999998809718590574451947423 times it, but near enough. That's not a coincidence - that's arithmetic. BOINCstats doesn't collect (and BOINC doesn't record) the number of FLOPs actually performed, so instead the combined performance of all BOINC projects is estimated from the combined RAC (average credit awarded). That's all written up in the official Wiki (Computation credit).

It has a wonderful side effect: all our computers can be doubled in speed (or sped up by any arbitrary amount) simply by doubling the credit awarded.

Of course, the definition page also says that the credit awarded by each project should use the same exchange rate against the FLOP standard, but casual observation suggests that they don't.
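The conversion described above can be sketched in a few lines of Python. The constant and the sample figures come straight from the post; the function name is just illustrative:

```python
# BOINC's fixed exchange rate: a 1-GigaFLOP host running full time
# earns 200 units of credit per day (per the Computation credit wiki page).
CREDITS_PER_DAY_PER_GFLOP = 200

def estimated_gflops(rac: float) -> float:
    """Estimate combined performance in GigaFLOPS from recent average credit."""
    return rac / CREDITS_PER_DAY_PER_GFLOP

# The BOINCstats figures quoted above:
rac = 1_680_274_919
print(f"{estimated_gflops(rac):,.1f} GigaFLOPS")  # 8,401,374.6 GigaFLOPS
```

This makes the "not a coincidence" point concrete: the FLOPS figure is derived from RAC by a fixed constant, not measured independently.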
Joined: 9 Dec 13 · Posts: 2
Interesting article, thanks. But I still have questions :)

"Remember that a 1 GigaFLOP machine, running full time, produces 200 units of credit in 1 day"

Are these FLOPS the same as the FLOPS that manufacturers use in documentation? For example, according to Intel (http://download.intel.com/support/processors/corei3/sb/core_i3-2100_d.pdf), the Intel Core i3-2100 CPU has 49 GFLOPS. Am I right to predict that, in the ideal situation, two i3s would have a RAC of about 49*200 + 49*200 = 19,600 (for a project which uses a fair RAC <-> FLOPS exchange rate)?
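The poster's back-of-envelope estimate can be written out as a small sketch, assuming (as the question does) that rated GFLOPS translate directly into credit at the fixed 200 credits/day rate; the function name is hypothetical:

```python
# Ideal-case RAC estimate: each host's rated speed (in GigaFLOPS) is
# taken at face value and multiplied by the fixed exchange rate of
# 200 credits per day per GigaFLOP. Real RAC will be lower, since
# rated/peak FLOPS rarely match benchmarked throughput.
CREDITS_PER_DAY_PER_GFLOP = 200

def ideal_rac(gflops_per_host: list[float]) -> float:
    """Sum the ideal daily credit over a list of hosts' rated speeds."""
    return sum(g * CREDITS_PER_DAY_PER_GFLOP for g in gflops_per_host)

# Two Core i3-2100 CPUs rated at 49 GFLOPS each:
print(ideal_rac([49, 49]))  # 19600
```

As the next reply explains, the catch is in what "49 GFLOPS" actually measures, so this is an upper bound rather than a prediction.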
Joined: 5 Oct 06 · Posts: 5082
Sorry, I got distracted.

'FLOPS' is a slippery concept. Not all floating point operations are equal: addition and subtraction are quicker and easier to do than multiplication and division, and those in turn are easier than trigonometry. Which operations do you count and time?

BOINC is contradictory on the subject. CPU operation speed is measured by the Whetstone benchmark, which "... was designed to defeat compiler optimizations". GPU speeds, on the other hand, are given as 'peak' FLOPS, which I would take to mean the fastest the transistors could possibly switch, if they were only asked to do simple addition and there were no inconvenient annoyances like memory accesses or bus transfers to get in the way.

My guess is that manufacturers' figures - especially if distributed via the advertising department - are likely to be closer to 'peak' FLOPS: they find some number and make it as big as possible, without necessarily being able to translate that into real work. That was particularly common in the early years of the last decade, when the Pentium 4 range was wound up to absurd clock speeds in competition with AMD. Now Intel have the 'Core' and 'Core 2' ranges, with lower rated frequencies but much higher productivity.

Think of the difference between a tuned moped and a low, heavy motorbike like a Harley or a BMW. The moped's engine will probably be rated at a higher RPM, but which would you prefer to ride across a continent?
Copyright © 2024 University of California.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License,
Version 1.2 or any later version published by the Free Software Foundation.