System indefinitely slow after deactivating GPU

Message boards : BOINC Manager : System indefinitely slow after deactivating GPU

Jason P.

Joined: 8 Dec 12
Posts: 8
Spain
Message 46697 - Posted: 10 Dec 2012, 1:52:20 UTC

Sorry if this is a trivial question. I asked something similar recently, but I'm still not sure whether what's happening is normal or not.

I'm using the latest BOINC client available from the Ubuntu repositories (version 7.0.27 x86) on 64-bit Ubuntu 12.04.1. I also have an Nvidia GTS 450, which is CUDA-capable. I've tuned the general options to reduce the impact on my everyday use of the computer, and everything is fine.

If I'm going to be away from my desk for a while, I choose the option "always use GPU" in the BOINC Manager in order to speed up my tasks. Of course, the GPU temperature quickly rises and the system becomes more or less unusable, but I don't mind because I'm not going to use it.

When I come back, I set this option back to its default value, "use GPU as in preferences". After 2 or 3 seconds I'm using the computer as usual again, and the GPU temperature is even back down to around 30 °C. But all of this is only apparent: although the system seems stable, if I try to play a YouTube video, the audio and video are jerky. No matter how long I wait, sooner or later I'm forced to log out of my session to restore normal behaviour.

Is this normal for my setup? Is the GPU still processing despite the change of option? I know this is a minor problem with an easy workaround, but I'm curious :)


Thanks anyway!
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5081
United Kingdom
Message 46700 - Posted: 10 Dec 2012, 9:34:21 UTC - in response to Message 46697.  

If all is running as it should be, the BOINC client itself will place a vanishingly small demand on your GPU - certainly undetectable at your desktop. All the BOINC client does is to start and - crucially for your question - stop a science application supplied by one or other of the projects you are attached to.

If your general preference is "Don't use GPU when computer is in use", then BOINC should detect your usage, and send a message to the science application to

1) stop GPU processing
2) unload itself from GPU memory
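The suspend handshake described above can be modelled in a few lines. This is a minimal Python sketch, not the actual BOINC source; the class and function names are invented for illustration:

```python
IDLE_THRESHOLD = 180  # seconds of no keyboard/mouse input before "idle" (3 min default)

class ScienceApp:
    """Toy stand-in for a project's GPU science application."""
    def __init__(self):
        self.gpu_running = False
        self.gpu_memory_held = False

    def start(self):
        self.gpu_running = True
        self.gpu_memory_held = True

    def suspend(self):
        # 1) stop GPU processing
        self.gpu_running = False
        # 2) unload itself from GPU memory -- the step suspected of failing here
        self.gpu_memory_held = False

def client_tick(app, seconds_since_input):
    """Simplified decision for "Don't use GPU when computer is in use"."""
    if seconds_since_input < IDLE_THRESHOLD:
        app.suspend()       # user is active: tell the app to get off the GPU
    elif not app.gpu_running:
        app.start()         # user is idle: resume GPU crunching
```

A science app that ignores the `suspend()` call, or performs step 1 but not step 2, would match the symptoms in this thread.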

There are occasional reports from the Windows world of science applications which fail to notice the 'stop' message, and continue processing when BOINC thinks they are inactive. From what you say about temperatures, it sounds as if your science app has stopped, but maybe it hasn't released the card's video memory? It might be worth discussing your problems with other users of that application on the project's message board.
Jason P.

Joined: 8 Dec 12
Posts: 8
Spain
Message 46710 - Posted: 11 Dec 2012, 3:09:29 UTC - in response to Message 46700.  
Last modified: 11 Dec 2012, 3:34:14 UTC

Thanks for your time. It could be World Community Grid or DistrRTgen. I'll check in their forums.

By the way, in my version of BOINC there's a bug that prevents the client from detecting keyboard and mouse use if they are wireless. Maybe this is interfering in some way...

[Update]

It seems that World Community Grid is not applying the same general options shared across all the projects. I've found in its control panel that the option "Use GPU while computer is in use" is set to yes, while it is disabled for the rest of the projects. Maybe this is why my GPU is busy when it shouldn't be. I'll check it out.
Jason P.

Joined: 8 Dec 12
Posts: 8
Spain
Message 46719 - Posted: 11 Dec 2012, 22:47:14 UTC

I've just checked it out and it's exactly the same. Definitely the problem was not in the World Community Grid options :(
Jason P.

Joined: 8 Dec 12
Posts: 8
Spain
Message 46748 - Posted: 12 Dec 2012, 23:54:22 UTC

Apparently the problem could be related to my Nvidia card handling the Help Conquer Cancer project.

Let's see how it performs without tasks from that project.

Anyway, thanks for your help ;)
Joe Bloggs

Joined: 6 Jan 13
Posts: 40
Hong Kong
Message 47227 - Posted: 14 Jan 2013, 11:46:00 UTC - in response to Message 46700.  

If all is running as it should be, the BOINC client itself will place a vanishingly small demand on your GPU - certainly undetectable at your desktop. All the BOINC client does is to start and - crucially for your question - stop a science application supplied by one or other of the projects you are attached to.

If your general preference is "Don't use GPU when computer is in use", then BOINC should detect your usage, and send a message to the science application to

1) stop GPU processing
2) unload itself from GPU memory

There are occasional reports from the Windows world of science applications which fail to notice the 'stop' message, and continue processing when BOINC thinks they are inactive. From what you say about temperatures, it sounds as if your science app has stopped, but maybe it hasn't released the card's video memory? It might be worth discussing your problems with other users of that application on the project's message board.


There's an option in the computing preferences for leaving app data in or out of memory while suspended. I wonder what he set it to...

Also, he speaks of watching video. If you watch a video and don't touch the keyboard or mouse, BOINC will start running again after the preset idle-detection time, which is 3 minutes by default but can be set arbitrarily short...
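The idle-detection time mentioned above lives in BOINC's global preferences file. A fragment of global_prefs.xml might look like this (values are illustrative, not a recommendation):

```xml
<!-- Fragment of global_prefs.xml; 0 = no, 1 = yes -->
<global_preferences>
   <run_if_user_active>0</run_if_user_active>          <!-- no CPU work while in use -->
   <run_gpu_if_user_active>0</run_gpu_if_user_active>  <!-- no GPU work while in use -->
   <idle_time_to_run>3</idle_time_to_run>              <!-- minutes without input before "idle" -->
</global_preferences>
```

With these settings, three minutes of hands-off video watching is enough for the GPU app to resume.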
Jord
Volunteer tester
Help desk expert

Joined: 29 Aug 05
Posts: 15480
Netherlands
Message 47228 - Posted: 14 Jan 2013, 11:51:34 UTC - in response to Message 47227.  

There's an option in the computing preferences for leaving app data in or out of memory while suspended I wonder what he set it to...

It does not matter, as GPU applications are always unloaded from memory when BOINC or the GPU suspends, even when the Leave Application In Memory (LAIM) setting is enabled. An app will only stay in memory when the client runs a benchmark.
Joe Bloggs

Joined: 6 Jan 13
Posts: 40
Hong Kong
Message 47230 - Posted: 14 Jan 2013, 11:55:47 UTC

And have you tried suspending CPU work as well as the GPU, or at least setting BOINC to use fewer than the full number of cores?
kdsjsdj

Joined: 5 Jan 13
Posts: 81
Message 47235 - Posted: 14 Jan 2013, 12:53:51 UTC - in response to Message 46697.  

After coming back I set this option to its default value "use GPU as in preferences". After 2 or 3 seconds, I'm already using the computer as usual, and the GPU temperature is even around 30 °C.


If the GPU temperature has dropped back to 30 °C, then you can be sure neither the GPU nor its memory is being used.

Although the system seems stable, if I try to play a YT video, its audio and image are jerky. No matter how long I wait, sooner or later I'm forced to close the session to recover the normal behavior.


There's your problem ===> YT video

YouTube uses Flash Player, which just does not work well on Linux. I use Linux too; it's been an issue for years. YouTube needs to get with the times and adopt the new HTML5 video standards, which work very well on all platforms except InternetExploder.

I've heard many people claim the IcedTea add-on for the Konqueror browser plays Flash videos very well, but I'm not sure; I've never tried it.

You could also try experimenting with different nVidia drivers. Sometimes the latest drivers aren't the best.
Jason P.

Joined: 8 Dec 12
Posts: 8
Spain
Message 47236 - Posted: 14 Jan 2013, 13:05:37 UTC

Thank you all, but the problem has been solved for weeks. As I mentioned, the key was to deselect Help Conquer Cancer in World Community Grid.
SekeRob2

Joined: 6 Jul 10
Posts: 585
Italy
Message 47237 - Posted: 14 Jan 2013, 14:06:12 UTC - in response to Message 47236.  

Thank you all, but the problem has been solved for weeks. As I mentioned, the key was to deselect Help Conquer Cancer in World Community Grid.

The problem with HCC-GPU tasks, which use OpenCL, is that Nvidia cards are just not very good at crunching these while the PC is in use. ATI cards, by contrast, are good at that; there are reports of running 12 concurrent tasks per card, each controlled by a CPU core, and even 24 on a dual-CPU/ATI-card system.

Refer to the WCG GPU support forums for discussion. Some expert members may be able to help with tweaking [it is recommended to use 7.0.42 and up with app_config, and to remove app_info, if used]. App_info will be disabled at WCG in the near future, but can still be used for other projects until they decide it's time for it to go too.
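The app_config.xml mechanism mentioned above (supported from BOINC 7.0.40 onward) lets you run several tasks per GPU. A sketch of such a file, placed in the WCG project directory; the app name "hcc1" is an assumption and should be checked against the app names in client_state.xml:

```xml
<!-- app_config.xml -- illustrative values only -->
<app_config>
   <app>
      <name>hcc1</name>            <!-- assumed name of the HCC GPU app -->
      <gpu_versions>
         <gpu_usage>0.5</gpu_usage>   <!-- 0.5 = two tasks share one GPU -->
         <cpu_usage>0.5</cpu_usage>   <!-- CPU reserved per GPU task -->
      </gpu_versions>
   </app>
</app_config>
```

The client re-reads this file on "Read config files" or restart; no app_info.xml (anonymous platform) is needed.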
Coelum Non Animum Mutant, Qui Trans Mare Currunt
kdsjsdj

Joined: 5 Jan 13
Posts: 81
Message 47255 - Posted: 14 Jan 2013, 20:33:29 UTC - in response to Message 47237.  

The problem with HCC-GPU tasks, which use OpenCL, is that Nvidia cards are just not very good at crunching these while the PC is in use.


Because nVidia is stuck on CUDA and hasn't embraced OpenCL. I used to be an nVidia fan but if they don't get with OpenCL soon I'll buy AMD-ATI instead.
Claggy

Joined: 23 Apr 07
Posts: 1112
United Kingdom
Message 47256 - Posted: 14 Jan 2013, 20:47:28 UTC - in response to Message 47255.  
Last modified: 14 Jan 2013, 20:51:51 UTC

The problem with HCC-GPU tasks, which use OpenCL, is that Nvidia cards are just not very good at crunching these while the PC is in use.


Because nVidia is stuck on CUDA and hasn't embraced OpenCL. I used to be an nVidia fan but if they don't get with OpenCL soon I'll buy AMD-ATI instead.

What a load of rubbish. All Nvidia GPUs that are capable of using CUDA are also capable of using OpenCL, and have been since the 197.xx drivers:

http://www.nvidia.co.uk/object/cuda_opencl_new_uk.html

OpenCL

OpenCL™ (Open Computing Language) is a low-level API for heterogeneous computing that runs on the CUDA architecture. Using OpenCL, developers can write compute kernels using a C-like programming language to harness the massive parallel computing power of NVIDIA GPUs to create compelling computing applications. As the OpenCL standard matures and is supported on processors from other vendors, NVIDIA will continue to provide the drivers, tools and training resources developers need to create GPU accelerated applications.

In partnership with NVIDIA, OpenCL was submitted to the Khronos Group by Apple in the summer of 2008 with the goal of forging a cross platform environment for general purpose computing on GPUs. NVIDIA has chaired the industry working group that defines the OpenCL standard since its inception and shipped the world’s first conformant GPU implementation of OpenCL for both Windows and Linux in June 2009.

NVIDIA has been delivering OpenCL support in end-user production drivers since October 2009, supporting OpenCL on all 250,000,000+ CUDA architecture GPUs shipped since 2006.

OpenCL Developer Resources:

OpenCL v1.1 Drivers and Code Samples Now Available (June 2010)
OpenCL v1.1 pre-release drivers and SDK code samples are now available to GPU Computing registered developers. Log in or apply for an account to download OpenCL v1.1 today.


NVIDIA enthusiastically supports all languages and API’s that enable developers to access the parallel processing power of the GPU. NVIDIA has a long history of embracing and supporting standards, since a wider choice of languages improves the number and scope of applications that can exploit parallel computing on the GPU. With C/C++ and Fortran language support along with API’s such as OpenCL and Microsoft DirectCompute available today, GPU computing is now mainstream. NVIDIA is the only processor company to offer this breadth of open and standard language solutions for the GPU.

NVIDIA’s Industry-leading support for OpenCL:

2010

November – NVIDIA releases updated Visual Profiler and new cuda-memcheck support for OpenCL applications

July – Khronos Group certifies NVIDIA’s OpenCL 1.1 as industry first conformant implementation

June – NVIDIA releases updated Visual Profiler and new SDK code samples for OpenCL developers

June – NVIDIA releases R256 OpenCL 1.1 conformance candidate to thousands of developers

March – NVIDIA releases Visual Profiler 3.0 with integrated support for both OpenCL and CUDA C/C++ applications on Fermi architecture GPUs

March – NVIDIA releases updated R195 drivers with the Khronos-approved ICD, enabling applications to use OpenCL NVIDIA GPUs and other processors at the same time

January – NVIDIA releases updated R195 drivers, supporting developer-requested OpenCL extensions for Direct3D9/10/11 buffer sharing and loop unrolling

January – Khronos Group ratifies the ICD specification contributed by NVIDIA, enabling applications to use multiple OpenCL implementations concurrently

2009

November – NVIDIA releases R195 drivers with support for optional features in the OpenCL v1.0 specification such as double precision math operations and OpenGL buffer sharing

October – NVIDIA hosts the GPU Technology Conference, providing OpenCL training for an additional 500+ developers

September – NVIDIA completes OpenCL training for over 1000 developers via free webinars

September – NVIDIA begins shipping OpenCL 1.0 conformant support in all end user (public) driver packages for Windows and Linux

September - NVIDIA releases the OpenCL Visual Profiler, the industry’s first hardware performance profiling tool for OpenCL applications

July – NVIDIA hosts first “Introduction to GPU Computing and OpenCL” and “Best Practices for OpenCL Programming, Advanced” webinars for developers

July – NVIDIA releases the NVIDIA OpenCL Best Practices Guide, packed with optimization techniques and guidelines for achieving fast, accurate results with OpenCL

July – NVIDIA contributes source code and specification for an Installable Client Driver (ICD) to the Khronos OpenCL Working Group, with the goal of enabling applications to use multiple OpenCL implementations concurrently on GPUs, CPUs and other types of processors

June – NVIDIA releases industry first OpenCL 1.0 conformant drivers and developer SDK

April – NVIDIA releases industry first OpenCL 1.0 GPU drivers for Windows and Linux, accompanied by the 100+ page NVIDIA OpenCL Programming Guide, an OpenCL JumpStart Guide showing developers how to port existing code from CUDA C to OpenCL, and OpenCL developer forums

2008

December – NVIDIA shows off the world's first OpenCL GPU demonstration, running on an NVIDIA laptop GPU at SIGGRAPH Asia

June – Apple submits OpenCL proposal to Khronos Group; NVIDIA volunteers to chair the OpenCL Working Group as it is formed

2007

December – NVIDIA Tesla product wins PC Magazine Technical Excellence Award

June – NVIDIA launches first Tesla C870, the first GPU designed for High Performance Computing

May - NVIDIA releases first CUDA architecture GPUs capable of running OpenCL in laptops & workstations

2006

November - NVIDIA released first CUDA architecture GPU capable of running OpenCL


Claggy
kdsjsdj

Joined: 5 Jan 13
Posts: 81
Message 47263 - Posted: 15 Jan 2013, 3:31:56 UTC - in response to Message 47256.  

They have OpenCL, but from what I hear it doesn't seem to work worth a damn on their GPUs.


Copyright © 2024 University of California.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.