[[PageOutline]]

= Applications that use coprocessors =

BOINC supports applications that use coprocessors. The supported coprocessor types are NVIDIA, AMD, and Intel GPUs.

The BOINC client probes for coprocessors and reports them in scheduler requests. The client keeps track of coprocessor allocation, i.e. which instances of each are free. When it runs a GPU app, it assigns it to a free instance.

You can develop your application using any programming system, e.g. CUDA (for NVIDIA), CAL (for ATI), or OpenCL.

== Dealing with GPU memory allocation failures ==

GPUs don't have virtual memory. GPU memory allocations may fail because other applications are using the GPU. This is typically a temporary condition. Rather than exiting with an error in this case, call
{{{
boinc_temporary_exit(60);
}}}
This exits the application and tells the BOINC client to restart it again in at least 60 seconds, at which point memory may be available.

== Device selection ==

Some hosts have multiple GPUs. When your application is run by BOINC, it receives information about which GPU instance to use. This is passed as a command-line argument
{{{
--device N
}}}
where N is the device number of the GPU to be used. If your application uses multiple GPUs, it will be passed multiple --device arguments, e.g.
{{{
--device 0 --device 3
}}}

'''Note:''' the use of this command-line argument is deprecated. New applications should instead use the value of gpu_device_num passed in the APP_INIT_DATA structure returned by '''boinc_get_init_data()'''.

Some OpenCL apps can use either NVIDIA or ATI GPUs, so they must also be told which type of GPU to use. This is also passed in the APP_INIT_DATA structure:
{{{
char gpu_type[64];      // "nvidia" or "ati"
int gpu_device_num;
}}}
OpenCL apps should not use the command-line argument; instead they should call the '''boinc_get_opencl_ids()''' API as described [http://boinc.berkeley.edu/trac/wiki/OpenclApps here].
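For example, a non-OpenCL GPU app might read its assigned device at startup as follows. This is a minimal sketch, not a complete application: error handling is omitted, and the cudaSetDevice() call is shown only as an illustration of how a CUDA app would bind to the assigned device.
{{{
#include "boinc_api.h"

int main(int argc, char** argv) {
    boinc_init();

    // Ask the client which GPU instance this task was assigned.
    APP_INIT_DATA aid;
    boinc_get_init_data(aid);

    // aid.gpu_type is "nvidia" or "ati";
    // aid.gpu_device_num is the device instance assigned by the client.
    int device_num = aid.gpu_device_num;

    // A CUDA app would then bind to that device, e.g.:
    // cudaSetDevice(device_num);

    // ... allocate GPU memory, run kernels, checkpoint, etc. ...

    boinc_finish(0);
}
}}}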
== Do GPU kernels within critical sections ==

The BOINC client may kill your application during execution. If a GPU kernel is in progress at this point, a system crash or hang may occur. To prevent this, do GPU kernels within a critical section, e.g.
{{{
boinc_begin_critical_section();
... do GPU kernel ...
boinc_end_critical_section();
}}}

== Plan classes ==

All GPU applications must use a [AppPlan plan class] to specify their properties. You may be able to use one of the predefined plan classes; otherwise you must define your own, using either [AppPlanSpec XML] or [PlanClassFunc C++].

Plan class names for GPU apps must obey these rules:
 * For OpenCL apps, the name must contain "opencl".
 * For CUDA apps, the name must contain "cuda".
 * For CAL apps, the name must contain "ati".
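As a rough illustration, an XML plan class specification for a CUDA app might look like the following. This is a hedged sketch: the name "cuda_example" and the numeric values are made-up examples, and the exact set of available elements is documented on the [AppPlanSpec XML plan class spec] page, which should be treated as authoritative.
{{{
<plan_class>
    <name>cuda_example</name>          <!-- name contains "cuda", per the rules above -->
    <gpu_type>nvidia</gpu_type>        <!-- NVIDIA GPUs only -->
    <cuda/>                            <!-- requires the CUDA runtime -->
    <min_gpu_ram_mb>384</min_gpu_ram_mb>       <!-- example value -->
    <gpu_ram_used_mb>256</gpu_ram_used_mb>     <!-- example value -->
</plan_class>
}}}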