GPU scheduling

Message boards : Server programs : GPU scheduling
cjreyn

Joined: 17 Aug 09
Posts: 19
United Kingdom
Message 33047 - Posted: 26 May 2010, 17:12:01 UTC

Hi all,
I have a quick question about the scheduling functionality for jobs that use the GPU.

Suppose I have two science applications: one that uses the GPU (say, via CUDA), and one that is CPU-only. When jobs are scheduled for both applications and a CUDA-capable client connects, will it always be served a CUDA job from the queue? It seems nonsensical to have clients with GPUs pulling CPU-only jobs, i.e. jobs that could instead go to non-GPU-capable nodes.

Cheers

Chris
ID: 33047

Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15477
Netherlands
Message 33051 - Posted: 26 May 2010, 17:51:54 UTC - in response to Message 33047.  

> When jobs are scheduled for both applications, and a CUDA capable client connects, will it always be served a CUDA job from the queue?

The feeder will give work to any host that asks for work, regardless of whether it is a CPU or GPU job.
ID: 33051

cjreyn
Joined: 17 Aug 09
Posts: 19
United Kingdom
Message 33064 - Posted: 27 May 2010, 13:10:22 UTC - in response to Message 33051.  

OK, so the feeder pulls jobs from the DB in some pre-defined order and places them in the "ready to send" queue for the scheduler to dispatch. So let me rephrase the question...

How are jobs dispatched by the scheduler from the "ready to send" queue? If, for example, a GPU-capable node sends a request for work, will the scheduler always assign a CUDA job from the "ready to send" queue, or is it possible it will assign a job from an application that does not use CUDA, i.e. a CPU-only job?

Cheers

Chris
ID: 33064

Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15477
Netherlands
Message 33065 - Posted: 27 May 2010, 13:32:38 UTC - in response to Message 33064.  

As far as I know, it'll be assigned a CUDA job, since the work request includes a message stating which application and application version are used. The feeder then picks up work for that application and version and sends it on its way.
ID: 33065

cjreyn
Joined: 17 Aug 09
Posts: 19
United Kingdom
Message 33066 - Posted: 27 May 2010, 14:36:49 UTC - in response to Message 33065.  

I was under the impression that the feeder pre-fetches a small number of jobs from the database and loads them into shared memory (the "ready to send" queue), rather than fetching in response to client-initiated requests (can you confirm this)? The scheduler then matches client requests against the "ready to send" queue in the shared-memory segment. In this case, the queue can contain jobs from several different applications, some requiring CUDA/GPU-capable clients and others not.

Here's the bit I don't understand but really need to (I've read a few BOINC publications and it's not clear in any of them)...

Consider then a client requesting work. Will (as you imply in the previous post) the client's request explicitly ask for work for a given application? If so, I'm guessing the scheduler would match the request to an entry in the "ready to send" queue, and hence, if the client is CUDA/GPU-capable and a job from a CUDA-based application exists, it will be served to the client?

Cheers

Chris
ID: 33066

Richard Haselgrove
Volunteer tester
Help desk expert
Joined: 5 Oct 06
Posts: 5077
United Kingdom
Message 33067 - Posted: 27 May 2010, 14:39:15 UTC

A job is a job is a job.

The purpose of the feeder is always to keep a few jobs close at hand, just in case.

Your computer comes knocking on the door, and says "Can I have a job for my GPU, please?" The feeder will look at the jobs lying around the place, pick one up, ask itself "Does the label on this one say it can be done by (a) (that sort of) GPU?", and if it can, the feeder will give the job to your computer.

A millisecond later, my computer might come knocking, and say "Can I have a job for my CPU, please?" The feeder might pick up an *identical* job, say "Can this be done on a CPU?", find that it can, and pass it out.

Jobs can have more than one compatibility label on them. They only become "a job for a GPU" or "a job for a CPU" when they've been assigned to a particular GPU or CPU, respectively.
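Richard's description can be sketched in code. This is a purely illustrative model, not BOINC's actual implementation: jobs in the "ready to send" queue carry a set of compatibility labels, and a job only becomes "a GPU job" or "a CPU job" at the moment it is handed to a host requesting that resource.

```python
# Illustrative sketch of label-based dispatch (hypothetical, not BOINC code).
from dataclasses import dataclass, field

@dataclass
class Job:
    name: str
    # Every resource some available app version could run this job on.
    labels: set = field(default_factory=set)

def dispatch(queue, requested_resource):
    """Hand out the first queued job compatible with the requested resource."""
    for i, job in enumerate(queue):
        if requested_resource in job.labels:
            return queue.pop(i)
    return None  # nothing suitable in the queue right now

queue = [Job("wu_1", {"cpu", "nvidia_gpu"}), Job("wu_2", {"cpu"})]

# A GPU host gets wu_1; wu_2 is still available for a later CPU request.
gpu_job = dispatch(queue, "nvidia_gpu")
cpu_job = dispatch(queue, "cpu")
```

Note that an identical job (`wu_1`) could equally have satisfied the CPU request had it arrived first, which is exactly the point being made above.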
ID: 33067

Richard Haselgrove
Volunteer tester
Help desk expert
Joined: 5 Oct 06
Posts: 5077
United Kingdom
Message 33068 - Posted: 27 May 2010, 14:49:28 UTC - in response to Message 33066.  

Addendum: cross-posted.

Your computer doesn't ask for work for a particular application, but it does ask for work for a particular 'resource' - CPU, NVidia GPU, or ATI GPU. Usually just one type of resource at a time, although it's capable of asking for all three at once.

It's the server (scheduler) which is responsible for asking "Who is this guy, anyway? What sort of work does he like doing?", and extracting your application preferences from your preference set on the server (it may also be influenced by any applications you installed manually via Anonymous Platform, or data files you already hold for Locality Scheduling).
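The per-resource request described above can be sketched as follows. The field names here are hypothetical (BOINC's real scheduler request is an XML document with its own schema); the point is only that the client asks for an amount of work per resource type, not per application.

```python
# Hypothetical sketch of a per-resource work request (not BOINC's real format).
request = {
    "cpu_req_seconds": 0.0,
    "nvidia_gpu_req_seconds": 86400.0,  # client wants ~a day of NVidia GPU work
    "ati_gpu_req_seconds": 0.0,
}

def resources_requested(req):
    """The resource types this request is actually asking work for."""
    return {name.rsplit("_req_seconds", 1)[0]
            for name, seconds in req.items() if seconds > 0}
```

Here only the NVidia GPU resource is being asked for, even though the host also has a CPU; the scheduler combines this with the stored preferences to decide what to send.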
ID: 33068

cjreyn
Joined: 17 Aug 09
Posts: 19
United Kingdom
Message 33070 - Posted: 27 May 2010, 15:22:11 UTC - in response to Message 33068.  

OK, this is much clearer to me now... One final question!

So a job from an application can be both CPU- and GPU-compatible; I'm assuming the application developer has some CPU-based code to fall back on should CUDA's cuCtxGetDevice() return no device, meaning the CUDA code cannot be executed?

Would it not be more efficient for a machine requesting work for its CPU to be served an explicitly CPU-only job first, leaving jobs that can execute on either the GPU or the CPU for clients requesting GPU work?

Thanks for the prompt replies!

Chris
ID: 33070

Richard Haselgrove
Volunteer tester
Help desk expert
Joined: 5 Oct 06
Posts: 5077
United Kingdom
Message 33071 - Posted: 27 May 2010, 15:39:50 UTC - in response to Message 33070.  

Not quite. A 'job' is just a description of something that needs doing. It may be data that needs searching or manipulating, or parameters that need testing.

The description of that job that needs doing doesn't say anything about how the work is going to be done, or what is going to do it. Windows or Linux? Intel, PowerPC or Sparc? It'll be a computer program of some sort - supplied by the project or by the user (optimised application)? There's a lot of pattern-matching going on: the objective is to assemble, out of the available jigsaw pieces, both a tool and a task which together can occupy the idle resource productively.
ID: 33071

cjreyn
Joined: 17 Aug 09
Posts: 19
United Kingdom
Message 33072 - Posted: 27 May 2010, 16:35:59 UTC - in response to Message 33071.  

The job is associated with an application, though, which has strict requirements on the client's capabilities, as defined by the application plan. Reading the application planning page, I can see that an application, and all jobs subsequently created for it, will be tied to its definition: NAME_VERSION_PLATFORM[__PLAN-CLASS]. So how can an application, and a job created for this application, have:

> more than one compatibility label on them

Surely it either requires a GPU or it doesn't?
ID: 33072

Richard Haselgrove
Volunteer tester
Help desk expert
Joined: 5 Oct 06
Posts: 5077
United Kingdom
Message 33073 - Posted: 27 May 2010, 17:00:33 UTC - in response to Message 33072.  

> The job is associated with an application though, which has strict requirements upon the client's capabilities as defined by the application plan. Reading the application planning page, I can see that an application, and all subsequent jobs associated with it will be tied to its definition: NAME_VERSION_PLATFORM[__PLAN-CLASS]. So how can an application, and a job created for this application, have:
>
> more than one compatibility label on them
>
> Surely it either requires a GPU or it doesn't?

Usually yes, a particular piece of hardware and an operating system will between them need a unique application. But have a look at this list of applications (from SETI Beta):

Platform         Version             Installation time
Linux/x86        6.03                20 Aug 2008 18:30:38 UTC
Windows/x86      6.03                31 Jul 2008 0:24:04 UTC
Windows/x86      6.08 (cuda)         15 Jan 2009 18:55:47 UTC
Windows/x86      6.09 (cuda23)       13 Aug 2009 22:38:38 UTC
Windows/x86      6.10 (cuda_fermi)   19 May 2010 22:15:39 UTC
Mac OS X         6.03                20 Aug 2008 23:50:39 UTC
SPARC/Solaris    5.17                3 Aug 2006 21:20:53 UTC
Linux/x86_64     5.28                26 Sep 2007 16:14:58 UTC
SPARC/Solaris    5.17                3 Aug 2006 21:20:53 UTC
Mac OS X/Intel   6.03                20 Aug 2008 23:50:39 UTC
Linux/x86_64     6.03                20 Aug 2008 18:59:32 UTC

Any one of those eleven application/OS/hardware combinations would be capable of doing the same job. Each job has eleven compatibility labels on it while it's in the queue: it'll only be allocated to one of the combinations that you have available and are requesting work for.
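The narrowing-down step can be sketched as a simple filter. This is not BOINC's actual code; the function and structures are hypothetical, using a subset of the plan-class names from the SETI Beta list above: a version is usable only if its platform matches the host and any plan class (e.g. "cuda") is one the host's hardware supports.

```python
# Hypothetical sketch of filtering app versions by host capability.
APP_VERSIONS = [
    ("Linux/x86",   "6.03", None),
    ("Windows/x86", "6.03", None),
    ("Windows/x86", "6.08", "cuda"),
    ("Windows/x86", "6.09", "cuda23"),
    ("Windows/x86", "6.10", "cuda_fermi"),
    ("Mac OS X",    "6.03", None),
]

def usable_versions(host_platforms, host_plan_classes, versions=APP_VERSIONS):
    """App versions this host could run: platform matches, and any
    plan class required by the version is supported by the host."""
    return [
        (plat, ver, plan)
        for plat, ver, plan in versions
        if plat in host_platforms
        and (plan is None or plan in host_plan_classes)
    ]

# A Windows host with a pre-Fermi CUDA card supporting CUDA 2.3:
matches = usable_versions({"Windows/x86"}, {"cuda", "cuda23"})
```

The host above matches three of the six versions (the plain CPU build plus the two CUDA builds it can run), while the Fermi-only build is excluded.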
ID: 33073

cjreyn
Joined: 17 Aug 09
Posts: 19
United Kingdom
Message 33177 - Posted: 1 Jun 2010, 12:01:16 UTC - in response to Message 33073.  

OK, so presumably the server then just serves up the binary and libraries matching the platform/OS combination specified in the client's request.

Thank you so much for the clarification, extremely helpful!

Cheers

Chris
ID: 33177

cjreyn
Joined: 17 Aug 09
Posts: 19
United Kingdom
Message 33212 - Posted: 2 Jun 2010, 13:51:57 UTC - in response to Message 33177.  

OK, one final question... A job for a GPU implies GPU-only execution, but in practice this may not always be the case.

For example, a more complex application may mostly use the CPU if most of the program does not adhere to the SIMD model, while employing the GPU to speed up loops/code fragments that do. In effect, some ratio of GPU to CPU can be utilized throughout the program's execution.

How does BOINC deal with this? Presumably there's some accurate measure of the CPU and GPU utilization for credit reporting?
ID: 33212

Nicolas
Joined: 19 Jan 07
Posts: 1179
Argentina
Message 33290 - Posted: 7 Jun 2010, 0:46:36 UTC - in response to Message 33212.  

> For example a more complex application may mostly use the CPU if most of the program does not adhere to the SIMD model, whilst employing the GPU to speed up loops/code fragments that do adhere to the SIMD model. In effect, some ratio of the GPU/CPU can be utilized throughout the program's execution.
>
> How does BOINC deal with this?

Your app_plan function can say a job will use N CPUs and M GPUs. Usually if GPUs = 1, CPUs is set to something like 0.01, but you may very well say it uses more.
> Presumably there's some accurate measure of the CPU and GPU utilization for credit reporting?

There is nothing accurate for credit reporting, anywhere in BOINC :P
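Nicolas's point about app_plan can be sketched like this. BOINC's real app_plan is a C++ server function; the names below (HostUsage, plan class strings other than "cuda") are illustrative only, and the resource fractions are the kind of values a project might declare, not measurements.

```python
# Hypothetical sketch of a plan-class function declaring per-job
# resource usage (illustrative names, not BOINC's real C++ API).
from dataclasses import dataclass

@dataclass
class HostUsage:
    avg_ncpus: float = 0.0  # fraction/count of CPUs the job will use
    ngpus: float = 0.0      # fraction/count of GPUs the job will use

def app_plan(plan_class):
    hu = HostUsage()
    if plan_class == "cuda":
        hu.ngpus = 1.0
        hu.avg_ncpus = 0.01   # GPU job: the CPU mostly just feeds the GPU
    elif plan_class == "cuda_heavy_cpu":  # hypothetical mixed plan class
        hu.ngpus = 1.0
        hu.avg_ncpus = 0.5    # substantial CPU work alongside the GPU
    else:
        hu.avg_ncpus = 1.0    # plain CPU job
    return hu
```

The declared values drive scheduling (how many such jobs fit on a host at once); as noted above, they are declarations by the project, not accurate runtime measurements.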
ID: 33290


Copyright © 2024 University of California.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.