Controlling a blend of multi-core and single-core VMs

Toby Broom

Joined: 14 Apr 12
Posts: 48
Switzerland
Message 76543 - Posted: 19 Mar 2017, 23:34:04 UTC

I have some tasks that are multi-core and some that are single-core. I can create an app_config like so:

<app_config>
<project_max_concurrent>8</project_max_concurrent>
</app_config>

This will limit the total number of tasks to 8; however, if I have 8 dual-core tasks running, that is actually 16 cores in use. So I can create a per-app limit like so:

<app_config>
<app>
<name>App2</name>
<max_concurrent>4</max_concurrent>
</app>
</app_config>

This limits that app to 4 tasks, so 8 cores.

However, if I have 4 of App1 and 4 of App2, both conditions are met (8 total, 4 of App2), but the total number of cores used is 8 + 4 = 12, so the core count is exceeded again.

Is there a way to configure BOINC so it will respect the number of cores?

BOINC 7.6.33, Windows 10, LHC@Home, not installed as service
ID: 76543
HAL9000
Help desk expert
Joined: 13 Jun 14
Posts: 81
United States
Message 76591 - Posted: 21 Mar 2017, 22:09:37 UTC

You can use <project_max_concurrent> and <max_concurrent> at the same time in your app_config.xml.

If you want to limit the project total to 8 and each app to 4, it is just a matter of combining the two pieces you have been using.
<app_config>
	<project_max_concurrent>8</project_max_concurrent>
	<app>
		<name>App1</name>
		<max_concurrent>4</max_concurrent>
	</app>
	<app>
		<name>App2</name>
		<max_concurrent>4</max_concurrent>
	</app>
	<app>
		<name>App3</name>
		<max_concurrent>4</max_concurrent>
	</app>
</app_config>
ID: 76591
Toby Broom

Joined: 14 Apr 12
Posts: 48
Switzerland
Message 76708 - Posted: 22 Mar 2017, 22:37:15 UTC

I expect this still does not work correctly. This would be my expected configuration:

<app_config>
<project_max_concurrent>8</project_max_concurrent>
<app>
<name>Dual Core</name>
<max_concurrent>4</max_concurrent>
</app>
<app>
<name>Single Core</name>
<max_concurrent>8</max_concurrent>
</app>
</app_config>

My example mix of 4 Dual and 4 Single tasks is allowed by this config but uses 12 cores.

BOINC knows that the dual-core tasks take 2 cores each, as the tasks have "(2 cores)" after the name; it just doesn't seem to respect the limits.
ID: 76708
HAL9000
Help desk expert
Joined: 13 Jun 14
Posts: 81
United States
Message 76833 - Posted: 25 Mar 2017, 16:38:24 UTC - in response to Message 76708.  

I expect this still does not work correctly. This would be my expected configuration:

<app_config>
<project_max_concurrent>8</project_max_concurrent>
<app>
<name>Dual Core</name>
<max_concurrent>4</max_concurrent>
</app>
<app>
<name>Single Core</name>
<max_concurrent>8</max_concurrent>
</app>
</app_config>

My example mix of 4 Dual and 4 Single tasks is allowed by this config but uses 12 cores.

BOINC knows that the dual-core tasks take 2 cores each, as the tasks have "(2 cores)" after the name; it just doesn't seem to respect the limits.

When you mentioned dual-core and single-core VMs, I was thinking that had to do with the system configuration and not the tasks you were running.
If the system has more than 8 CPUs and you want to limit BOINC to only 8 cores, then the easiest way to limit the usage would be to set <ncpus> in cc_config.xml to 8. The "Use at most N% of the CPUs" setting could be used instead of the cc_config value, but I try not to do anything that overrides my web settings.
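
For reference, a minimal cc_config.xml sketch along those lines, assuming 8 is the limit you want (the file goes in the BOINC data directory, and the client needs to re-read config files or be restarted before it takes effect):

<cc_config>
	<options>
		<!-- tell the client to use at most 8 CPUs, regardless of how many the host has -->
		<ncpus>8</ncpus>
	</options>
</cc_config>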

If you are running multiple projects and reducing the number of CPUs for BOINC to 8 isn't desired, then using <app_version> with <avg_ncpus> in app_config.xml, instead of <app>, could be a better option.
For example, on my 16c/32t system I used this app_config.xml to tell BOINC that the SETI@home v8 app takes 4 CPUs instead of 1:
<app_config>
	<app_version>
		<app_name>setiathome_v8</app_name>
		<avg_ncpus>4.0</avg_ncpus>
	</app_version>
</app_config>

This resulted in 8 tasks running instead of 32. BOINC is configured to use all CPUs, and it has been told that each instance of the SETI@home v8 app takes 4 CPUs, so when 8 tasks are running it believes that all 32 CPUs are in use.
ID: 76833
Toby Broom

Joined: 14 Apr 12
Posts: 48
Switzerland
Message 77096 - Posted: 1 Apr 2017, 22:33:19 UTC - in response to Message 76833.  

I finally got it to work.

The project_max_concurrent limit counts jobs, so it will always run that many tasks regardless of the number of cores per task.

I forced the number of CPUs as per your suggestion.

Now it respects the number of cores correctly. I can also limit the number of cores in use with Use at most N% of the CPUs or <ncpus>.
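
For anyone finding this later, a rough sketch of how the pieces above can be combined, using the placeholder app name "Dual Core" from my earlier post (the project's real app name would go there, and 2.0 assumes the app really uses two CPUs):

<app_config>
	<!-- still caps the number of running tasks (jobs), not cores -->
	<project_max_concurrent>8</project_max_concurrent>
	<app_version>
		<!-- placeholder: substitute the actual app name for the multi-threaded tasks -->
		<app_name>Dual Core</app_name>
		<!-- tell BOINC each of these tasks occupies 2 CPUs -->
		<avg_ncpus>2.0</avg_ncpus>
	</app_version>
</app_config>

With the per-task CPU count declared this way, the overall core usage stays within BOINC's CPU budget (Use at most N% of the CPUs or <ncpus>).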

I think the concept of a job count is quite confusing; I'm not sure why it would be useful. A max-cores-per-project setting would make more sense.
ID: 77096
HAL9000
Help desk expert
Joined: 13 Jun 14
Posts: 81
United States
Message 77287 - Posted: 11 Apr 2017, 2:28:33 UTC - in response to Message 77096.  

I finally got it to work.

The project_max_concurrent limit counts jobs, so it will always run that many tasks regardless of the number of cores per task.

I forced the number of CPUs as per your suggestion.

Now it respects the number of cores correctly. I can also limit the number of cores in use with Use at most N% of the CPUs or <ncpus>.

I think the concept of a job count is quite confusing; I'm not sure why it would be useful. A max-cores-per-project setting would make more sense.

I would guess that because only a few projects use multi-threaded apps, there are probably some unexpected issues when running such apps.
Milkyway is the only project I can think of right now that has multi-threaded apps, which I need to get around to throwing one of my dual E5-2670 machines at someday.
ID: 77287
