Version 47 (modified by davea, 6 years ago)


Specifying plan classes in XML

You can specify plan classes using an XML configuration file with the format

<plan_classes>
   <plan_class>
      ... specification of a plan class
   </plan_class>
   ... more plan class specifications
</plan_classes>

Name this file plan_class_spec.xml and put it in your project directory. This file replaces the built-in plan classes!

Examples

An example configuration file is here. This file specifies the predefined plan classes.

Specification format

The specification of a plan class has the following format. All elements except <name> are optional. In version numbers, MM is major, mm is minor, rr is release.

General

<name>X</name>
the name of the plan class; must obey the rules for names.
<disabled/>
include this to disable the plan class
<max_core_client_version>MMmmrr</max_core_client_version>
send only to BOINC clients with version number less than or equal to this.
<min_core_client_version>MMmmrr</min_core_client_version>
send only to BOINC clients with version number greater than or equal to this.
<user_id>N</user_id>
send only to hosts belonging to given user (e.g. cluster nodes).
<projected_flops_scale>x</projected_flops_scale>
multiply projected FLOPS by this factor. Use this to favor one class over another. For example, if you have both SSE and non-SSE versions, use 1.1 and 1.0 respectively.
<os_regex>regex</os_regex>
send only to hosts whose operating system version matches the given regular expression
<min_os_version>x</min_os_version>
send only to hosts with at least this numerical OS version. The numerical OS version is derived from the host's os_version string as follows:
  • Mac OS X: Version 10.x is reported as (x+4).y.z. For example, 10.7.0 is reported as "11.00.00"; the numerical version is 110000.
  • Windows: Windows 7 SP1 is reported as "(Microsoft Windows 7 ..., Service Pack 1, (06.01.7601.00))"; the numerical version is 601760100. More information is here.
  • Linux and Android: the kernel version is reported as "2.6.3"; the numerical version is 20603.
<max_os_version>x</max_os_version>
max numerical OS version (see above)
<cpu_feature>x</cpu_feature>
a required CPU feature (such as sse3). You can include more than one.
<host_summary_regex>regex</host_summary_regex>
send only to hosts whose host.serialnum field matches the given regular expression.
<cpu_vendor_regex>regex</cpu_vendor_regex>
send only to hosts whose CPU vendor matches the regular expression. Example CPU vendors are "GenuineIntel" and "AuthenticAMD", so to match Intel CPUs you could use
<cpu_vendor_regex>.*Intel</cpu_vendor_regex>
<cpu_model_regex>regex</cpu_model_regex>
the host's CPU model must match the regular expression.
<infeasible_random>X</infeasible_random>
the app version won't be used with probability X.
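Putting several of these general elements together, a plan class for a hypothetical SSE3-optimized CPU version might look like the following (the class name and values here are illustrative, not one of the predefined classes):

```xml
<plan_classes>
   <plan_class>
      <name>sse3</name>
      <!-- require the sse3 CPU feature -->
      <cpu_feature>sse3</cpu_feature>
      <!-- favor this class over a plain version with scale 1.0 -->
      <projected_flops_scale>1.1</projected_flops_scale>
      <!-- send only to clients 7.0.0 or newer -->
      <min_core_client_version>70000</min_core_client_version>
   </plan_class>
</plan_classes>
```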

The following elements let you use a project preference to decide whether to use the app version:

<project_prefs_tag>x</project_prefs_tag>
the name of the tag
<project_prefs_regex>x</project_prefs_regex>
the contents must match this regular expression
<project_prefs_default_true/>
treat the absence of the project_prefs_tag (i.e. the user didn't set it yet) as if the project_prefs_regex matched.
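For example, the three preference elements above might be combined as follows to gate an app version on an opt-in checkbox in project preferences (the tag name use_opt_app is hypothetical):

```xml
<plan_class>
   <name>opt</name>
   <!-- look for <use_opt_app> in the user's project preferences -->
   <project_prefs_tag>use_opt_app</project_prefs_tag>
   <!-- use this version if the tag's contents match "1" -->
   <project_prefs_regex>1</project_prefs_regex>
   <!-- users who haven't set the preference also get this version -->
   <project_prefs_default_true/>
</plan_class>
```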

The following elements restrict the use of a particular app version to a certain range of workunits or batches:

<min_wu_id>x</min_wu_id>
minimum required workunit ID
<max_wu_id>x</max_wu_id>
maximum allowed workunit ID
<min_batch>x</min_batch>
minimum required batch #
<max_batch>x</max_batch>
maximum allowed batch #

Hyperthreading

We distinguish between "logical" and "physical" CPUs. Processors with hyperthreading have two logical CPUs per physical CPU. The numbers of usable logical and physical CPUs on a host are denoted NLC and NPC. "Usable" refers to computing preferences, which allow volunteers to limit the % of CPUs (logical and physical) that can be used.

Pre-7.14 versions of the BOINC client measure and report only NLC. For these clients, we conservatively assume that NPC is max(1, NLC/2).

<physical_threads>[0|1]</physical_threads>
If set, each application thread uses a physical CPU; the default is logical CPU.

Floating-point intensive apps should use this, since pairs of logical CPUs generally share an FPU. Multi-thread VM-based apps should do so as well, since VirtualBox may refuse to create a VM with more threads than NPC.

Multithread apps

By default, apps are assumed to use 1 thread. Plan classes for apps that use multiple threads (possibly a variable number, depending on the host) use the following elements.

If <physical_threads> is set, NCPUS refers to NPC, otherwise to NLC.

<min_ncpus>N</min_ncpus>
run only on hosts with NCPUS >= N.
<max_threads>N [M]</max_threads>
Use min(N, NCPUS-M) threads; M defaults to zero if not specified.
<nthreads_cmdline>0|1</nthreads_cmdline>
if set, pass the command-line arguments --nthreads N to the app, where N is the number of threads to use.
<mem_usage_base_mb>X</mem_usage_base_mb>
<mem_usage_per_cpu_mb>Y</mem_usage_per_cpu_mb>
if specified, estimated memory usage (in megabytes) is X + N*Y, where N is the number of CPUs used. This is passed to the app with a --memory_size_mb command-line argument.

Implementation note: the number of CPUs sent to the client, and visible to the user, is in terms of logical CPUs. If <physical_threads> is set and the host is hyperthreaded, this will be twice the number of threads.
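A multithread plan class combining the elements above might look like this sketch (the class name "mt" and the numeric values are illustrative):

```xml
<plan_class>
   <name>mt</name>
   <!-- count physical CPUs, since logical pairs share an FPU -->
   <physical_threads>1</physical_threads>
   <!-- need at least 2 usable CPUs -->
   <min_ncpus>2</min_ncpus>
   <!-- use up to 8 threads, or NCPUS if smaller -->
   <max_threads>8</max_threads>
   <!-- tell the app how many threads to use via --nthreads N -->
   <nthreads_cmdline>1</nthreads_cmdline>
   <!-- estimated memory usage: 200 + 50 per CPU, in MB -->
   <mem_usage_base_mb>200</mem_usage_base_mb>
   <mem_usage_per_cpu_mb>50</mem_usage_per_cpu_mb>
</plan_class>
```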

GPU apps

Required:

<gpu_type>X</gpu_type>
the GPU type (generally nvidia, amd, or intel)

Optional:

<cpu_frac>x</cpu_frac>
the fraction of total FLOPs that are done by the CPU. This is used to calculate CPU usage and estimated FLOPS. Default 0.1.
<min_gpu_ram_mb>x</min_gpu_ram_mb>
The minimum amount of GPU RAM. This is needed because older clients report total RAM but not available RAM.
<gpu_ram_used_mb>x</gpu_ram_used_mb>
require this much available GPU RAM
<gpu_peak_flops_scale>x</gpu_peak_flops_scale>
scale GPU peak speed by this in calculating projected FLOPS (default 1).
<ngpus>x</ngpus>
how many GPUs will be used (possibly fractional); default 1. If negative, the number is calculated as the GPU RAM usage (gpu_ram_used_mb) divided by the GPU RAM size.
<min_driver_version>x</min_driver_version>
minimum display driver version. AMD driver versions are represented as MMmmRRRR. NVIDIA driver versions are represented as MMMmm.
<max_driver_version>x</max_driver_version>
maximum display driver version
<cuda/>
CUDA application (NVIDIA)
<cal/>
CAL application (AMD)
<gpu_utilization_tag>x</gpu_utilization_tag>
you can use a project-specific preference to let users scale the # of GPUs used. This is the tag name.
<without_opencl>0|1</without_opencl>
send this version only to hosts without OpenCL capability
<min_gpu_peak_speed>X</min_gpu_peak_speed>
use only GPUs with peak speed >= X
<max_gpu_peak_speed>X</max_gpu_peak_speed>
use only GPUs with peak speed <= X
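As an illustrative sketch of a GPU plan class using the required and some optional elements (the class name and values are hypothetical):

```xml
<plan_class>
   <name>nvidia_gpu</name>
   <!-- required: GPU vendor -->
   <gpu_type>nvidia</gpu_type>
   <!-- 1% of the FLOPs are done on the CPU -->
   <cpu_frac>0.01</cpu_frac>
   <!-- require 256 MB of available GPU RAM -->
   <gpu_ram_used_mb>256</gpu_ram_used_mb>
   <min_gpu_ram_mb>256</min_gpu_ram_mb>
   <!-- minimum display driver version (NVIDIA format MMMmm) -->
   <min_driver_version>17700</min_driver_version>
</plan_class>
```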

AMD/ATI GPU apps

<need_ati_libs/>
Require libraries named "ati", not "amd".
<need_amd_libs/>
Require libraries named "amd". You can check which DLLs your application is linked against using Dependency Walker. If your executable contains DLL names prefixed with 'ati', use <need_ati_libs/> instead. These flags are usually not needed for OpenCL apps.
<min_cal_target>N</min_cal_target>
<max_cal_target>N</max_cal_target>
Min and max CAL targets:
typedef enum CALtargetEnum {
    CAL_TARGET_600,                /**< R600 GPU ISA */
    CAL_TARGET_610,                /**< RV610 GPU ISA */
    CAL_TARGET_630,                /**< RV630 GPU ISA */
    CAL_TARGET_670,                /**< RV670 GPU ISA */
    CAL_TARGET_7XX,                /**< R700 class GPU ISA */
    CAL_TARGET_770,                /**< RV770 GPU ISA */
    CAL_TARGET_710,                /**< RV710 GPU ISA */
    CAL_TARGET_730,                /**< RV730 GPU ISA */
    CAL_TARGET_CYPRESS,            /**< CYPRESS GPU ISA */
    CAL_TARGET_JUNIPER,            /**< JUNIPER GPU ISA */
    CAL_TARGET_REDWOOD,            /**< REDWOOD GPU ISA */
    CAL_TARGET_CEDAR,              /**< CEDAR GPU ISA */
    CAL_TARGET_RESERVED0,
    CAL_TARGET_RESERVED1,
    CAL_TARGET_WRESTLER,           /**< WRESTLER GPU ISA */
    CAL_TARGET_CAYMAN,             /**< CAYMAN GPU ISA */
    CAL_TARGET_KAUAI,              /**< KAUAI GPU ISA */
    CAL_TARGET_BARTS,              /**< BARTS GPU ISA */
    CAL_TARGET_TURKS,              /**< TURKS GPU ISA */
    CAL_TARGET_CAICOS              /**< CAICOS GPU ISA */
} CALtarget;

NVIDIA GPU apps

<min_nvidia_compcap>MMmm</min_nvidia_compcap>
minimum compute capability
<max_nvidia_compcap>MMmm</max_nvidia_compcap>
maximum compute capability

CUDA apps

<min_cuda_version>MMmmm</min_cuda_version>
minimum CUDA version
<max_cuda_version>MMmmm</max_cuda_version>
maximum CUDA version
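Combining the NVIDIA and CUDA elements, a plan class targeting Fermi-or-later GPUs might be sketched as follows (values illustrative; compute capability 2.0 is 200 in MMmm format, CUDA 3.0 is 3000 in MMmmm format):

```xml
<plan_class>
   <name>cuda_fermi</name>
   <gpu_type>nvidia</gpu_type>
   <!-- this is a CUDA application -->
   <cuda/>
   <!-- require compute capability >= 2.0 -->
   <min_nvidia_compcap>200</min_nvidia_compcap>
   <!-- require CUDA version >= 3.0 -->
   <min_cuda_version>3000</min_cuda_version>
   <gpu_ram_used_mb>384</gpu_ram_used_mb>
</plan_class>
```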

OpenCL apps (CPU or GPU)

<opencl/>
include this for OpenCL applications
<min_opencl_version>MMmm</min_opencl_version>
minimum OpenCL version
<max_opencl_version>MMmm</max_opencl_version>
maximum OpenCL version
<double_precision_fp/>
reject plan class if the device doesn't support double precision floating point math
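An OpenCL plan class can be sketched like this (the class name is hypothetical; OpenCL 1.1 is 101 in MMmm format):

```xml
<plan_class>
   <name>opencl_intel_gpu_101</name>
   <gpu_type>intel</gpu_type>
   <!-- this is an OpenCL application -->
   <opencl/>
   <!-- require OpenCL version >= 1.1 -->
   <min_opencl_version>101</min_opencl_version>
</plan_class>
```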

OpenCL apps for AMD

<min_opencl_driver_revision>MMmmrr</min_opencl_driver_revision>
minimum OpenCL driver revision
<max_opencl_driver_revision>MMmmrr</max_opencl_driver_revision>
maximum OpenCL driver revision

VirtualBox apps

<virtualbox/>
VirtualBox application; send only to hosts with VirtualBox installed
<min_vbox_version>MMmmrr</min_vbox_version>
minimum VirtualBox version
<max_vbox_version>MMmmrr</max_vbox_version>
maximum VirtualBox version
<exclude_vbox_version>MMmmrr</exclude_vbox_version>
exclude a particular VirtualBox version (can have > 1 of these)
<is64bit/>
64-bit application.
<vm_accel_required/>
send only to hosts with VM hardware acceleration enabled.

Note: VirtualBox apps can be multicore (set <min_ncpus> and <max_threads>). However, when sent to hosts without VM hardware acceleration enabled, they'll run single-core.
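A multicore 64-bit VirtualBox plan class might be sketched as follows (the class name and values are illustrative; VirtualBox 4.2.0 is 40200 in MMmmrr format):

```xml
<plan_class>
   <name>vbox64_mt</name>
   <!-- send only to hosts with VirtualBox installed -->
   <virtualbox/>
   <!-- 64-bit application -->
   <is64bit/>
   <!-- require VirtualBox >= 4.2.0 -->
   <min_vbox_version>40200</min_vbox_version>
   <!-- require VM hardware acceleration for multicore -->
   <vm_accel_required/>
   <min_ncpus>2</min_ncpus>
   <max_threads>2</max_threads>
</plan_class>
```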

Android apps

<min_android_version>MMmmrr</min_android_version>
minimum Android version (e.g. 4.1.2 = 40102)
<max_android_version>MMmmrr</max_android_version>
maximum Android version
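For instance, an Android plan class requiring Android 4.1.2 or later might look like this sketch (the class name and the neon feature requirement are illustrative):

```xml
<plan_class>
   <name>arm_neon</name>
   <!-- require Android >= 4.1.2 -->
   <min_android_version>40102</min_android_version>
   <!-- require the NEON CPU feature -->
   <cpu_feature>neon</cpu_feature>
</plan_class>
```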

Non-compute-intensive apps

<avg_ncpus>x</avg_ncpus>
average # of CPUs used. Use for non-compute-intensive apps; for other apps it's calculated for you.