1)
Message boards : GPUs : GPU questions (Message 110976)
Posted 28 Jan 2023 by BoincSpy Post:
1. I have observed that if you have an RTX 30xx or RTX 40xx and also have the built-in Intel GPU processing work units, the processing rate drops by roughly 10 percentage points. For example, an RTX 3070 Ti without the Intel GPU runs at ~31%/min; adding the Intel GPU drops it to ~19-21%/min. Does anyone know what the underlying cause of the slowdown might be?
2. I just purchased an RTX 4070 Ti and noticed its GPU rate is not much higher than the RTX 3070 Ti's: 32.0%/min vs. 29.0%/min. I know it may depend on which WUs are being processed, but based on the peak GFLOPS numbers I thought I would get a much better rate per minute, almost double. Here are the specs of the two graphics cards. Again, any thoughts on why the 4070 Ti is not really outperforming the 3070 Ti?

Computer 1
1/28/2023 10:50:09 AM | | CUDA: NVIDIA GPU 0: NVIDIA GeForce RTX 3070 Ti (driver version 528.02, CUDA version 12.0, compute capability 8.6, 8192MB, 8192MB available, 21934 GFLOPS peak)
1/28/2023 10:50:09 AM | | OpenCL: NVIDIA GPU 0: NVIDIA GeForce RTX 3070 Ti (driver version 528.02, device version OpenCL 3.0 CUDA, 8192MB, 8192MB available, 21934 GFLOPS peak)
1/28/2023 10:50:14 AM | | Windows processor group 0: 12 processors
1/28/2023 10:50:14 AM | | Processor: 12 GenuineIntel Intel(R) Core(TM) i5-10400 CPU @ 2.90GHz [Family 6 Model 165 Stepping 5]
1/28/2023 10:50:14 AM | | OS: Microsoft Windows 11: Professional x64 Edition, (10.00.22621.00)
1/28/2023 10:50:14 AM | | Memory: 7.82 GB physical, 16.16 GB virtual
Average GPU rate: 29% / minute.

Computer 2
1/28/2023 10:42:31 AM | | CUDA: NVIDIA GPU 0: NVIDIA GeForce RTX 4070 Ti (driver version 528.24, CUDA version 12.0, compute capability 8.9, 12282MB, 12282MB available, 42624 GFLOPS peak)
1/28/2023 10:42:31 AM | | OpenCL: NVIDIA GPU 0: NVIDIA GeForce RTX 4070 Ti (driver version 528.24, device version OpenCL 3.0 CUDA, 12282MB, 12282MB available, 42624 GFLOPS peak)
1/28/2023 10:42:46 AM | | Windows processor group 0: 20 processors
1/28/2023 10:42:46 AM | | Processor: 20 GenuineIntel 12th Gen Intel(R) Core(TM) i7-12700K [Family 6 Model 151 Stepping 2]
1/28/2023 10:42:46 AM | | OS: Microsoft Windows 11: Professional x64 Edition, (10.00.22621.00)
1/28/2023 10:42:46 AM | | Memory: 31.78 GB physical, 63.78 GB virtual
Average GPU rate: 32% / minute
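If the built-in Intel GPU turns out not to be worth the slowdown it causes, the client can be told to leave it alone via cc_config.xml. Below is a minimal sketch (not from the post above) using the standard exclude_gpu option; the project URL is only a placeholder for whichever project is involved, and ignore_intel_dev is the blunter alternative that hides the Intel device from BOINC entirely.

<cc_config>
  <options>
    <!-- Don't send this project's work to the Intel iGPU.
         The URL is a placeholder; use the project's master URL. -->
    <exclude_gpu>
      <url>https://project.example.com/</url>
      <type>intel_gpu</type>
      <device_num>0</device_num>
    </exclude_gpu>
    <!-- Or hide Intel GPU device 0 from BOINC completely:
    <ignore_intel_dev>0</ignore_intel_dev>
    -->
  </options>
</cc_config>

After editing, use Options -> Read config files in the BOINC Manager (or restart the client) for the change to take effect.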
2)
Message boards : GPUs : One Nvidia GPU unable to process after a couple of days (Message 108277)
Posted 30 May 2022 by BoincSpy Post:
The Linux client is not 7.16.20 as reported earlier. It's running:

5/29/2022 10:00:29 PM | | Starting BOINC client version 7.18.1 for x86_64-pc-linux-gnu
5/29/2022 10:00:29 PM | | This a development version of BOINC and may not function properly.

I will dig into the issue of the GPU task requiring more than one thread, and play around with the CPU settings. Thanks, BoincSpy
3)
Message boards : GPUs : One Nvidia GPU unable to process after a couple of days (Message 108152)
Posted 19 May 2022 by BoincSpy Post:
So that machine is running Ubuntu, so I will have to do a clean install of the NVIDIA drivers. However, the other machine (Windows) that I have issues with has the same driver versions, and it appears the CPU/GPU task scheduler is what keeps a task from running on the other GPU. I suspect this for the following reasons:
1) If I play around with the computing preferences CPU usage limit (% of CPUs), I can get the other GPU to start processing.
2) If I delete all tasks, I can get the other GPU to run for a couple of days.
4)
Message boards : GPUs : One Nvidia GPU unable to process after a couple of days (Message 108128)
Posted 18 May 2022 by BoincSpy Post:
Yes, BOINC sees both GPUs.

5/18/2022 10:38:41 AM | | CUDA: NVIDIA GPU 0: NVIDIA GeForce RTX 2070 (driver version 470.99, CUDA version 11.4, compute capability 7.5, 4096MB, 3968MB available, 7465 GFLOPS peak)
5/18/2022 10:38:41 AM | | CUDA: NVIDIA GPU 1: NVIDIA GeForce RTX 2070 (driver version 470.99, CUDA version 11.4, compute capability 7.5, 4096MB, 3968MB available, 7465 GFLOPS peak)
5/18/2022 10:38:41 AM | | OpenCL: NVIDIA GPU 0: NVIDIA GeForce RTX 2070 (driver version 470.103.01, device version OpenCL 3.0 CUDA, 7981MB, 3968MB available, 7465 GFLOPS peak)
5/18/2022 10:38:41 AM | | OpenCL: NVIDIA GPU 1: NVIDIA GeForce RTX 2070 (driver version 470.103.01, device version OpenCL 3.0 CUDA, 7982MB, 3968MB available, 7465 GFLOPS peak)

Both GPUs are working, as I run the distributed.net OpenCL application, which really pushes the GPUs.
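For reference, a stock BOINC client only schedules work on the most capable GPU unless told otherwise. With two identical RTX 2070s that should not matter, and the message 108114 log further down already shows "Config: use all coprocessors", but this is the cc_config.xml setting that flag corresponds to (a minimal sketch):

<cc_config>
  <options>
    <!-- Schedule work on every detected GPU, not just the best one. -->
    <use_all_gpus>1</use_all_gpus>
  </options>
</cc_config>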
5)
Message boards : GPUs : One Nvidia GPU unable to process after a couple of days (Message 108114)
Posted 17 May 2022 by BoincSpy Post:
I have the following issue: one of the two NVIDIA GPUs no longer computes after a couple of days. The only way to get work again is to remove all the tasks or reset the project, but I have to do this every 2-3 days. This happens on a couple of machines. Here are the specs:

Project: Einstein@Home
GPUs: 2 RTX 2070s
CPU: Intel i7 / Gen 8, with 8 GB RAM
BOINC version: 7.16.20

There are plenty of GPU tasks enabled. I have turned on the coproc_debug, cpu_sched_debug and work_fetch_debug options. Here is the event log. I noticed that the device is not able to run because the CPU is committed, but I have set the CPU limit to 70%, so I thought there would be plenty of CPU headroom. Setting it lower makes no difference.

5/17/2022 9:39:12 AM | | Re-reading cc_config.xml
5/17/2022 9:39:12 AM | | Config: GUI RPCs allowed from:
5/17/2022 9:39:12 AM | | 172.16.0.23
5/17/2022 9:39:12 AM | | Config: use all coprocessors
5/17/2022 9:39:12 AM | | log flags: file_xfer, task, coproc_debug, cpu_sched_debug, work_fetch_debug
5/17/2022 9:39:12 AM | | [cpu_sched_debug] Request CPU reschedule: Core client configuration
5/17/2022 9:39:12 AM | | [work_fetch] Request work fetch: Core client configuration
5/17/2022 9:39:12 AM | | [cpu_sched_debug] schedule_cpus(): start
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] reserving 1.000000 of coproc NVIDIA
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] add to run list: LATeah3012L08_796.0_0_0.0_32509764_0 (NVIDIA GPU, FIFO) (prio -1.000000)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] reserving 1.000000 of coproc NVIDIA
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] add to run list: LATeah3012L08_796.0_0_0.0_32507847_0 (NVIDIA GPU, FIFO) (prio -1.020764)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] add to run list: p2030.20180616.G55.31-01.59.S.b6s0g0.00000_3800_0 (CPU, EDF) (prio -1.041527)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] add to run list: p2030.20180616.G55.44-01.82.S.b4s0g0.00000_3552_0 (CPU, EDF) (prio -1.041562)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] add to run list: p2030.20180616.G55.44-01.82.S.b4s0g0.00000_3560_0 (CPU, EDF) (prio -1.041597)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] add to run list: p2030.20180616.G55.31-01.59.S.b6s0g0.00000_3088_0 (CPU, EDF) (prio -1.041632)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] add to run list: p2030.20180616.G55.31-01.59.S.b1s0g0.00000_1752_0 (CPU, EDF) (prio -1.041667)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] add to run list: p2030.20180616.G55.31-01.59.S.b1s0g0.00000_1896_0 (CPU, EDF) (prio -1.041702)
5/17/2022 9:39:12 AM | | [cpu_sched_debug] enforce_run_list(): start
5/17/2022 9:39:12 AM | | [cpu_sched_debug] preliminary job list:
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] 0: LATeah3012L08_796.0_0_0.0_32509764_0 (MD: no; UTS: yes)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] 1: LATeah3012L08_796.0_0_0.0_32507847_0 (MD: no; UTS: no)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] 2: p2030.20180616.G55.31-01.59.S.b6s0g0.00000_3800_0 (MD: yes; UTS: no)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] 3: p2030.20180616.G55.44-01.82.S.b4s0g0.00000_3552_0 (MD: yes; UTS: no)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] 4: p2030.20180616.G55.44-01.82.S.b4s0g0.00000_3560_0 (MD: yes; UTS: no)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] 5: p2030.20180616.G55.31-01.59.S.b6s0g0.00000_3088_0 (MD: yes; UTS: no)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] 6: p2030.20180616.G55.31-01.59.S.b1s0g0.00000_1752_0 (MD: yes; UTS: no)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] 7: p2030.20180616.G55.31-01.59.S.b1s0g0.00000_1896_0 (MD: yes; UTS: no)
5/17/2022 9:39:12 AM | | [cpu_sched_debug] final job list:
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] 0: p2030.20180616.G55.31-01.59.S.b6s0g0.00000_3800_0 (MD: yes; UTS: no)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] 1: p2030.20180616.G55.44-01.82.S.b4s0g0.00000_3552_0 (MD: yes; UTS: no)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] 2: p2030.20180616.G55.44-01.82.S.b4s0g0.00000_3560_0 (MD: yes; UTS: no)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] 3: p2030.20180616.G55.31-01.59.S.b6s0g0.00000_3088_0 (MD: yes; UTS: no)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] 4: p2030.20180616.G55.31-01.59.S.b1s0g0.00000_1752_0 (MD: yes; UTS: no)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] 5: p2030.20180616.G55.31-01.59.S.b1s0g0.00000_1896_0 (MD: yes; UTS: no)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] 6: LATeah3012L08_796.0_0_0.0_32509764_0 (MD: no; UTS: yes)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] 7: LATeah3012L08_796.0_0_0.0_32507847_0 (MD: no; UTS: no)
5/17/2022 9:39:12 AM | Einstein@Home | [coproc] NVIDIA instance 0; 1.000000 pending for LATeah3012L08_796.0_0_0.0_32509764_0
5/17/2022 9:39:12 AM | Einstein@Home | [coproc] NVIDIA instance 0: confirming 1.000000 instance for LATeah3012L08_796.0_0_0.0_32509764_0
5/17/2022 9:39:12 AM | Einstein@Home | [coproc] Assigning NVIDIA instance 1 to LATeah3012L08_796.0_0_0.0_32507847_0
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] scheduling p2030.20180616.G55.31-01.59.S.b6s0g0.00000_3800_0 (high priority)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] scheduling p2030.20180616.G55.44-01.82.S.b4s0g0.00000_3552_0 (high priority)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] scheduling p2030.20180616.G55.44-01.82.S.b4s0g0.00000_3560_0 (high priority)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] scheduling p2030.20180616.G55.31-01.59.S.b6s0g0.00000_3088_0 (high priority)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] scheduling p2030.20180616.G55.31-01.59.S.b1s0g0.00000_1752_0 (high priority)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] scheduling p2030.20180616.G55.31-01.59.S.b1s0g0.00000_1896_0 (high priority)
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] scheduling LATeah3012L08_796.0_0_0.0_32509764_0
5/17/2022 9:39:12 AM | Einstein@Home | [cpu_sched_debug] skipping GPU job LATeah3012L08_796.0_0_0.0_32507847_0; CPU committed
5/17/2022 9:39:12 AM | | [cpu_sched_debug] enforce_run_list: end
5/17/2022 9:39:14 AM | | choose_project(): 1652805554.692413
5/17/2022 9:39:14 AM | | [work_fetch] ------- start work fetch state -------
5/17/2022 9:39:14 AM | | [work_fetch] target work buffer: 86400.00 + 86400.00 sec
5/17/2022 9:39:14 AM | | [work_fetch] --- project states ---
5/17/2022 9:39:14 AM | Einstein@Home | [work_fetch] REC 607807.647 prio -0.104 can't request work: scheduler RPC backoff (14.56 sec)
5/17/2022 9:39:14 AM | | [work_fetch] --- state for CPU ---
5/17/2022 9:39:14 AM | | [work_fetch] shortfall 0.00 nidle 0.00 saturated 1317305.84 busy 1072572.89
5/17/2022 9:39:14 AM | Einstein@Home | [work_fetch] share 0.000
5/17/2022 9:39:14 AM | | [work_fetch] --- state for NVIDIA GPU ---
5/17/2022 9:39:14 AM | | [work_fetch] shortfall 0.00 nidle 0.00 saturated 278830.81 busy 0.00
5/17/2022 9:39:14 AM | Einstein@Home | [work_fetch] share 0.000
5/17/2022 9:39:14 AM | | [work_fetch] ------- end work fetch state -------
5/17/2022 9:39:14 AM | Einstein@Home | choose_project: scanning
5/17/2022 9:39:14 AM | Einstein@Home | skip: scheduler RPC backoff
5/17/2022 9:39:14 AM | | [work_fetch] No project chosen for work fetch
5/17/2022 9:39:29 AM | | [work_fetch] Request work fetch: Backoff ended for Einstein@Home
5/17/2022 9:39:29 AM | | choose_project(): 1652805569.760385
5/17/2022 9:39:29 AM | | [work_fetch] ------- start work fetch state -------
5/17/2022 9:39:29 AM | | [work_fetch] target work buffer: 86400.00 + 86400.00 sec
5/17/2022 9:39:29 AM | | [work_fetch] --- project states ---
5/17/2022 9:39:29 AM | Einstein@Home | [work_fetch] REC 607807.647 prio -1.104 can request work
5/17/2022 9:39:29 AM | | [work_fetch] --- state for CPU ---
5/17/2022 9:39:29 AM | | [work_fetch] shortfall 0.00 nidle 0.00 saturated 1317242.70 busy 1072565.50
5/17/2022 9:39:29 AM | Einstein@Home | [work_fetch] share 1.000
5/17/2022 9:39:29 AM | | [work_fetch] --- state for NVIDIA GPU ---
5/17/2022 9:39:29 AM | | [work_fetch] shortfall 0.00 nidle 0.00 saturated 278828.81 busy 0.00
5/17/2022 9:39:29 AM | Einstein@Home | [work_fetch] share 1.000
5/17/2022 9:39:29 AM | | [work_fetch] ------- end work fetch state -------

Does anyone have suggestions to fix this? I have tried talking to the Einstein@Home people and didn't get too far with them. Thanks, Bob
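The "skipping GPU job ...; CPU committed" line says the scheduler could not reserve a free CPU core for the second GPU task, so one thing worth trying is an app_config.xml in the Einstein@Home project directory that reserves less than a full core per GPU task. This is only a sketch: the app name below is my guess for the LATeah* GPU tasks (the FGRPB1G gamma-ray pulsar search); check client_state.xml or the task properties for the real name.

<app_config>
  <app>
    <!-- Assumed app name for the LATeah* GPU tasks; verify locally. -->
    <name>hsgamma_FGRPB1G</name>
    <gpu_versions>
      <!-- Run one task per GPU, but reserve only half a CPU core for it. -->
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>0.5</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

After saving the file, use Options -> Read config files in the BOINC Manager (or restart the client) so the new CPU reservation takes effect.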
6)
Message boards : Promotion : BoincSpy Lives!!! (Message 106203)
Posted 27 Nov 2021 by BoincSpy Post:
Hi everyone, BoincSpy still lives and the current release is 4.5.0. It can be used for BOINC projects and Folding@home. Here are the changes:
7)
Message boards : API : not returning a newer BOINC version number (Message 105880)
Posted 28 Oct 2021 by BoincSpy Post:
I don't think this is an issue with get_newer_version, but I have 7.16.11 (x64) installed, and I have noticed that a newer version, 7.16.20, is available on the boinc.berkeley.edu website. I am assuming that the project I am running needs to update their servers to indicate a newer version is available. Or is this incorrect, and BOINC at Berkeley needs to make the change? Thanks in advance, BoincSpy
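For context, this is roughly what a GUI RPC client such as BoincSpy sees when it asks the core client for that value over TCP port 31416 (each message terminated with a 0x03 byte). The reply tag names here are from memory, so treat them as an assumption and verify against gui_rpc_client.h; as far as I can tell, the client simply reports whatever newer-version string it currently has stored, rather than checking the website on the spot.

Request:
<boinc_gui_rpc_request>
  <get_newer_version/>
</boinc_gui_rpc_request>

Reply (empty if the client does not know of a newer version):
<boinc_gui_rpc_reply>
  <newer_version>7.16.20</newer_version>
</boinc_gui_rpc_reply>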