Changes between Version 31 and Version 32 of CreditNew


Timestamp: Mar 26, 2010, 1:43:06 PM
Author: davea
Comment: (none)

  • CreditNew

For GPUs, it's given by a manufacturer-supplied formula.

- Other factors, such as the speed of a host's memory system,
- affect application performance.
+ Application performance depends on other factors as well,
+ such as the speed of the host's memory system.
So a given job might take the same amount of CPU time
on 1 GFLOPS and 10 GFLOPS hosts.
     

Notes:
-
 * For our purposes, the peak FLOPS of a device
   uses single or double precision, whichever is higher.
     

Some goals in designing a credit system:
-
 * Device neutrality: similar jobs should get similar credit
   regardless of what processor or GPU they run on.
-
 * Project neutrality: different projects should grant
   about the same amount of credit per host, averaged over all hosts.
-
 * Gaming-resistance: there should be a bound on the
   impact of faulty or malicious hosts.
     

This system has several problems:
-
 * It doesn't address GPUs properly; projects using GPUs
   have to write custom code.
     

 * Completely automated - projects don't have to change code, settings, etc.
-
 * Device neutrality
-
 * Limited project neutrality: different projects should grant
   about the same amount of credit per host-hour, averaged over hosts.
     
   After that we use an exponentially-weighted average
   (with appropriate parameter for app version and host)
-
 * A given sample may be wildly off,
   and we can't let this mess up the average.
   Samples after the first are capped at 10 times the current average.
-
 * We keep track of the number of samples,
   and use an average only if its number of samples
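
A minimal sketch (Python; the class name, smoothing factor, and sample threshold are illustrative assumptions, not values from this page) of the averaging scheme described in the fragment above: samples after the first are capped at 10 times the current average, folded into an exponentially-weighted average, and counted so the average is only used once enough samples exist.

{{{
# Sketch only: alpha and min_samples are assumed placeholders,
# not values taken from CreditNew.
class CappedAverage:
    def __init__(self, alpha=0.01, min_samples=10):
        self.alpha = alpha              # weight given to each new sample
        self.min_samples = min_samples  # don't trust the average before this
        self.avg = 0.0
        self.nsamples = 0

    def update(self, sample):
        if self.nsamples == 0:
            self.avg = sample           # first sample initializes the average
        else:
            # a wildly-off sample is capped at 10x the current average
            sample = min(sample, 10.0 * self.avg)
            self.avg += self.alpha * (sample - self.avg)
        self.nsamples += 1

    def usable(self):
        # only use the average once enough samples have been seen
        return self.nsamples >= self.min_samples
}}}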
     
 * If app.min_avg_pfc is defined then

-  D = app.min_avg_pfc * wu.fpops_est
+    D = app.min_avg_pfc * wu.fpops_est

 * Otherwise

-  D = wu.fpops_est
+    D = wu.fpops_est
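
The choice of D above is a simple conditional; a Python sketch with a hypothetical function name, where None stands in for "app.min_avg_pfc is not defined":

{{{
def compute_D(min_avg_pfc, fpops_est):
    # D as described above; min_avg_pfc is None when app.min_avg_pfc is undefined
    if min_avg_pfc is not None:
        return min_avg_pfc * fpops_est
    return fpops_est
}}}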

== Cross-version normalization ==
     
   and at least 2 versions are above sample threshold,
   X is their average (weighted by # samples).
-
 * If there are both, and at least 1 of each is above sample
   threshold, let X be the min of the averages.
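
A Python sketch of the two operations visible above: a sample-weighted average over versions that are above the sample threshold, and the min across the CPU and GPU averages. The data layout ((avg_pfc, nsamples) pairs) and function names are assumptions; the rest of the case analysis is elided by the diff.

{{{
def weighted_avg_pfc(versions, sample_threshold):
    # versions: list of (avg_pfc, nsamples) pairs for one kind (CPU or GPU);
    # average the per-version PFC averages, weighted by # samples,
    # over the versions that are above the sample threshold
    usable = [(a, n) for (a, n) in versions if n >= sample_threshold]
    total = sum(n for _, n in usable)
    return sum(a * n for a, n in usable) / total

def normalization_target(cpu_versions, gpu_versions, sample_threshold):
    # if at least one version of each kind is above the sample threshold,
    # X is the min of the two weighted averages
    return min(weighted_avg_pfc(cpu_versions, sample_threshold),
               weighted_avg_pfc(gpu_versions, sample_threshold))
}}}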
     
The algorithm:

+ {{{
 pfc = peak FLOP count(J)
 approx = true;

         if Scale(H, V) is defined and (H,V) is not on scale probation
       F *= Scale(H, V)
+ }}}
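
Most of the algorithm between the fragments above is elided by the diff; a hedged Python sketch of just the visible scaling step, with hypothetical names for the elided pieces:

{{{
def scaled_pfc(pfc, host, version, scale, on_probation):
    # pfc: peak FLOP count of job J (its computation is elided by the diff)
    # scale: dict mapping (host, app version) -> Scale(H, V), where defined
    # on_probation: set of (host, app version) pairs on "scale probation"
    F = pfc
    # ... elided steps between the fragments shown above ...
    if (host, version) in scale and (host, version) not in on_probation:
        F *= scale[(host, version)]
    return F
}}}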

== Claimed and granted credit ==
     
Otherwise:

+ {{{
 if app.min_avg_pfc is defined
   C = app.min_avg_pfc*wu.fpops_est
 else
   C = wu.fpops_est * 200/86400e9
+ }}}
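
In the fallback case, the constant 200/86400e9 converts an estimated FLOP count into credit at BOINC's usual rate of 200 credits per day of work on a 1 GFLOPS host (86400e9 FLOPs per day). A Python sketch of the conditional, with a hypothetical function name and None standing in for "not defined":

{{{
def default_claimed_credit(min_avg_pfc, fpops_est):
    # C as described above; min_avg_pfc is None when app.min_avg_pfc is undefined
    if min_avg_pfc is not None:
        return min_avg_pfc * fpops_est
    # 86400e9 FLOPs = one day of work at 1 GFLOPS; 200 credits per such day
    return fpops_est * 200 / 86400e9
}}}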

== Cross-project version normalization ==
     
Projects will export the following data:

-  for each app version
+ {{{
+ for each app version
   app name
   platform name

   plan class
   scale factor
+ }}}

The BOINC server will collect these from several projects
and will export the following:

-  for each plan class
+ {{{
+ for each plan class
   average scale factor (weighted by RAC)
+ }}}
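
A Python sketch of the RAC-weighted average implied by the export above, assuming each collected record carries a scale factor and the recent average credit (RAC) behind it; the pair layout and function name are illustrative:

{{{
def average_scale_factor(entries):
    # entries: (scale_factor, rac) pairs for one plan class,
    # one per collected record; rac = recent average credit (assumption)
    total_rac = sum(rac for _, rac in entries)
    return sum(scale * rac for scale, rac in entries) / total_rac
}}}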

We'll provide a script that identifies app versions
     
 R(J, H) = wu.fpops_est * ET^mean^(H, V)

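Assuming ET^mean^(H, V) is the mean elapsed time per estimated FLOP for host H and app version V (its definition is elided by the diff), the runtime estimate above is a single product:

{{{
def estimated_runtime(fpops_est, et_mean):
    # et_mean: mean of ET(H, V), assumed to be elapsed seconds per estimated FLOP
    return fpops_est * et_mean
}}}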
-
== Implementation ==
