Scheduler problem - not receiving more than 1 AMD GPU task at a time

Bill
Joined: 13 Jun 17
Posts: 91
United States
Message 90240 - Posted: 25 Feb 2019, 0:39:39 UTC

I have a computer with an AMD Ryzen 3 2200G APU (no discrete graphics card) that is typically dedicated to SETI@Home. It also runs Einstein@Home and MilkyWay@home, but the resource share for both of those projects is set to 0.

For the past few weeks, when S@H comes back online from its weekly outage, I've found that only one GPU task for the Ryzen is downloaded at a time. Once that task completes, a new one is downloaded, and so on. Sometimes a MW@H or E@H GPU task is downloaded instead. Plenty of CPU tasks get stocked up.
Currently I have the storage settings for at least 10 days with an additional 1 day of work.

I enabled some debug log flags, and I think I can see where the problem is, but I'm not sure what to do about it.

2/24/2019 6:27:50 PM |  | Starting BOINC client version 7.14.2 for windows_x86_64
2/24/2019 6:27:50 PM |  | log flags: file_xfer, sched_ops, task, sched_op_debug, work_fetch_debug
2/24/2019 6:27:50 PM |  | OpenCL: AMD/ATI GPU 0: AMD Radeon(TM) Vega 8 Graphics (driver version 2766.5 (PAL,HSAIL), device version OpenCL 2.0 AMD-APP (2766.5), 7206MB, 7206MB available, 43980464 GFLOPS peak)
2/24/2019 6:27:50 PM |  | Host name: DESKTOP-FIDJHGU
2/24/2019 6:27:50 PM |  | Processor: 4 AuthenticAMD AMD Ryzen 3 2200G with Radeon Vega Graphics [Family 23 Model 17 Stepping 0]
2/24/2019 6:27:50 PM |  | OS: Microsoft Windows 10: Core x64 Edition, (10.00.17763.00)
2/24/2019 6:27:50 PM |  | Memory: 13.93 GB physical, 16.06 GB virtual
2/24/2019 6:27:50 PM | Einstein@Home | URL http://einstein.phys.uwm.edu/; Computer ID 12767141; resource share 0
2/24/2019 6:27:50 PM | Milkyway@Home | URL http://milkyway.cs.rpi.edu/milkyway/; Computer ID 792907; resource share 0
2/24/2019 6:27:50 PM | SETI@home | URL http://setiathome.berkeley.edu/; Computer ID 8640304; resource share 100
2/24/2019 6:27:50 PM | SETI@home Beta Test | URL http://setiweb.ssl.berkeley.edu/beta/; Computer ID 87175; resource share 0
2/24/2019 6:27:55 PM |  | [work_fetch] Request work fetch: Prefs update
2/24/2019 6:27:55 PM |  | [work_fetch] Request work fetch: Startup
2/24/2019 6:27:55 PM |  | [work_fetch] ------- start work fetch state -------
2/24/2019 6:27:55 PM |  | [work_fetch] target work buffer: 864000.00 + 86400.00 sec
2/24/2019 6:27:55 PM |  | [work_fetch] --- project states ---
2/24/2019 6:27:55 PM | Einstein@Home | [work_fetch] REC 73697483.342 prio -1000.110 can request work
2/24/2019 6:27:55 PM | Milkyway@Home | [work_fetch] REC 233343.794 prio 0.000 can't request work: suspended via Manager
2/24/2019 6:27:55 PM | SETI@home | [work_fetch] REC 597659498.464 prio -0.890 can request work
2/24/2019 6:27:55 PM | SETI@home Beta Test | [work_fetch] REC 159.585 prio -1000.000 can't request work: "no new tasks" requested via Manager
2/24/2019 6:27:55 PM |  | [work_fetch] --- state for CPU ---
2/24/2019 6:27:55 PM |  | [work_fetch] shortfall 3383200.98 nidle 0.00 saturated 103262.40 busy 0.00
2/24/2019 6:27:55 PM | Einstein@Home | [work_fetch] share 0.000 zero resource share
2/24/2019 6:27:55 PM | Milkyway@Home | [work_fetch] share 0.000 zero resource share
2/24/2019 6:27:55 PM | SETI@home | [work_fetch] share 1.000
2/24/2019 6:27:55 PM | SETI@home Beta Test | [work_fetch] share 0.000 zero resource share
2/24/2019 6:27:55 PM |  | [work_fetch] --- state for AMD/ATI GPU ---
2/24/2019 6:27:55 PM |  | [work_fetch] shortfall 949786.75 nidle 0.00 saturated 613.25 busy 0.00
2/24/2019 6:27:55 PM | Einstein@Home | [work_fetch] share 0.000 zero resource share
2/24/2019 6:27:55 PM | Milkyway@Home | [work_fetch] share 0.000 zero resource share
2/24/2019 6:27:55 PM | SETI@home | [work_fetch] share 1.000
2/24/2019 6:27:55 PM | SETI@home Beta Test | [work_fetch] share 0.000 zero resource share
2/24/2019 6:27:55 PM |  | [work_fetch] ------- end work fetch state -------
2/24/2019 6:27:55 PM | SETI@home | [work_fetch] set_request() for CPU: ninst 4 nused_total 100.00 nidle_now 0.00 fetch share 1.00 req_inst 0.00 req_secs 3383200.98
2/24/2019 6:27:55 PM | SETI@home | [work_fetch] set_request() for AMD/ATI GPU: ninst 1 nused_total 0.00 nidle_now 0.00 fetch share 1.00 req_inst 0.00 req_secs 0.00
2/24/2019 6:27:55 PM | SETI@home | [sched_op] Starting scheduler request
2/24/2019 6:27:55 PM | SETI@home | [work_fetch] request: CPU (3383200.98 sec, 0.00 inst) AMD/ATI GPU (0.00 sec, 0.00 inst)
2/24/2019 6:27:55 PM | SETI@home | [sched_op] CPU work request: 3383200.98 seconds; 0.00 devices
2/24/2019 6:27:55 PM | SETI@home | [sched_op] AMD/ATI GPU work request: 0.00 seconds; 0.00 devices
2/24/2019 6:27:56 PM | SETI@home | [sched_op] Server version 709
2/24/2019 6:36:24 PM |  | [work_fetch] ------- start work fetch state -------
2/24/2019 6:36:24 PM |  | [work_fetch] target work buffer: 864000.00 + 86400.00 sec
2/24/2019 6:36:24 PM |  | [work_fetch] --- project states ---
2/24/2019 6:36:24 PM | Einstein@Home | [work_fetch] REC 74333115.116 prio -0.000 can request work
2/24/2019 6:36:24 PM | Milkyway@Home | [work_fetch] REC 233259.294 prio 0.000 can't request work: suspended via Manager
2/24/2019 6:36:24 PM | SETI@home | [work_fetch] REC 597443070.817 prio -0.000 can't request work: scheduler RPC backoff (104.09 sec)
2/24/2019 6:36:24 PM | SETI@home Beta Test | [work_fetch] REC 159.528 prio 0.000 can't request work: "no new tasks" requested via Manager
2/24/2019 6:36:24 PM |  | [work_fetch] --- state for CPU ---
2/24/2019 6:36:24 PM |  | [work_fetch] shortfall 3383536.54 nidle 0.00 saturated 103186.90 busy 0.00
2/24/2019 6:36:24 PM | Einstein@Home | [work_fetch] share 0.000 zero resource share
2/24/2019 6:36:24 PM | Milkyway@Home | [work_fetch] share 0.000 zero resource share
2/24/2019 6:36:24 PM | SETI@home | [work_fetch] share 0.000 project is backed off  (resource backoff: 159.04, inc 600.00)
2/24/2019 6:36:24 PM | SETI@home Beta Test | [work_fetch] share 0.000 zero resource share
2/24/2019 6:36:24 PM |  | [work_fetch] --- state for AMD/ATI GPU ---
2/24/2019 6:36:24 PM |  | [work_fetch] shortfall 947543.05 nidle 0.00 saturated 2856.95 busy 0.00
2/24/2019 6:36:24 PM | Einstein@Home | [work_fetch] share 0.000 zero resource share
2/24/2019 6:36:24 PM | Milkyway@Home | [work_fetch] share 0.000 zero resource share
2/24/2019 6:36:24 PM | SETI@home | [work_fetch] share 0.000
2/24/2019 6:36:24 PM | SETI@home Beta Test | [work_fetch] share 0.000 zero resource share
2/24/2019 6:36:24 PM |  | [work_fetch] ------- end work fetch state -------
2/24/2019 6:36:24 PM |  | [work_fetch] No project chosen for work fetch

So somehow, even though there are at most 1 or 2 GPU tasks stored on the computer, the client is requesting 0 seconds of work for the GPU. I'm sure that is why I am not receiving new GPU tasks until the current GPU task is fully crunched.

Any thoughts?
ID: 90240
Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15480
Netherlands
Message 90241 - Posted: 25 Feb 2019, 6:51:54 UTC - in response to Message 90240.  

Currently I have the storage settings for at least 10 days with an additional 1 day of work.
The first value is the low-water mark, the point at which you tell BOINC to go and check for work. The second value tells BOINC how much work you want to store.

So try turning the two around: 1 and 10.
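
You can see how the client has read them in the log itself: the 'target work buffer' line is just the two preferences converted to seconds. A trivial illustration of the arithmetic (my own sketch, not client code):

    #include <cstdio>

    int main() {
        // Bill's two web/Manager preferences, in days:
        double store_at_least_days = 10.0;  // "store at least X days of work"
        double additional_days     = 1.0;   // "store up to an additional X days of work"
        printf("target work buffer: %.2f + %.2f sec\n",
               store_at_least_days * 86400.0, additional_days * 86400.0);
        // prints: target work buffer: 864000.00 + 86400.00 sec
        return 0;
    }

864000 + 86400 seconds is 10 days plus 1 day, which matches the log at the top of the thread.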
ID: 90241
Richard Haselgrove
Volunteer tester
Help desk expert
Joined: 5 Oct 06
Posts: 5081
United Kingdom
Message 90247 - Posted: 25 Feb 2019, 12:32:47 UTC

I'm not sure what speed estimate the Client uses during the work fetch calculation: it might be using 59.63 GFLOPS from the server APR, or it might be using 43,980,464 GFLOPS from its own calculation.

Call it 60 GF to be optimistic. I think that gives a maximum SETI runtime of about 40 minutes, or 36 tasks per day. There is no point whatsoever in requesting more than 3 days' work, because you'll never get more than 100 tasks from SETI.
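
Spelling that arithmetic out - the per-task FLOP figure below is only what my 40-minute guess implies, not an official number:

    #include <cstdio>

    int main() {
        // Working backwards from the figures above: ~60 GFLOPS and ~40 minutes per task.
        double flops      = 60e9;               // assumed device speed (the server-side APR)
        double runtime    = 40.0 * 60.0;        // ~2400 s per task
        double tasks_day  = 86400.0 / runtime;  // 36 tasks per day
        double server_cap = 100.0;              // "you'll never get more than 100 tasks"

        printf("implied per-task estimate: %.2e FLOPs\n", flops * runtime);          // ~1.4e14
        printf("tasks per day: %.0f\n", tasks_day);                                  // 36
        printf("days covered by the 100-task cap: %.1f\n", server_cap / tasks_day);  // ~2.8
        return 0;
    }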

My personal suggestion would be to try a setting of 2.5 days + 0.05 days: if you get things working cleanly, that should result in BOINC turning in a completed task and requesting a new one roughly once an hour. Now that BOINC is reporting tasks no more than an hour after they complete, there's no point in going for a big 'additional' number.
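
For scale, 0.05 days of 'additional' buffer is only a task or two per fetch, assuming the client tops up from the 'at least' level by roughly the 'additional' amount each time (a sketch of the arithmetic, nothing more):

    #include <cstdio>

    int main() {
        double additional_days = 0.05;
        double additional_secs = additional_days * 86400.0;  // 4320 s, i.e. 72 minutes
        double task_runtime    = 40.0 * 60.0;                // ~2400 s per task
        printf("top-up size: %.0f s (~%.0f min), about %.1f tasks per fetch\n",
               additional_secs, additional_secs / 60.0, additional_secs / task_runtime);
        return 0;
    }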

I don't know what would happen if BOINC is paying attention to the absurd driver speed during work fetch. It might - I rather hope it would - say "that's impossible - there must be something wrong" and drop down to a safe level. If I get time, I might try to look through the code. But just to check it out, I'd turn the work fetch right down and see what happens. I run 0.25 days + 0.05 days for SETI (except on Tuesdays), and that chugs along quite happily if you have a good internet connection and backup projects.
ID: 90247
Richard Haselgrove
Volunteer tester
Help desk expert
Joined: 5 Oct 06
Posts: 5081
United Kingdom
Message 90248 - Posted: 25 Feb 2019, 12:43:44 UTC

Well, we used to have a safety net:

        if (wacky_dcf(p)) {
            // if project's DCF is too big or small,
            // its completion time estimates are useless; just ask for 1 second
            //
            req_secs = 1;
        }
That was back in the day when we used DCF, before CreditNew. I wonder what wacky_dcf() does now.
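
If nothing like that exists for device speeds today, the kind of check I'd want is roughly the following - purely my own sketch, with invented names and thresholds, not anything from the current client:

    // Hypothetical sketch only - not current BOINC client code. Clamp an
    // obviously bogus driver-reported peak FLOPS figure before it feeds
    // duration estimates and work fetch. The Vega 8 above reports
    // 43,980,464 GFLOPS (~44 PFLOPS), so even a generous ceiling catches it.

    const double MAX_PLAUSIBLE_PEAK_FLOPS = 1e15;  // 1 PFLOPS per device (assumed ceiling)
    const double FALLBACK_PEAK_FLOPS = 1e12;       // 1 TFLOPS (assumed safe default)

    double sanitize_peak_flops(double reported) {
        if (reported <= 0 || reported > MAX_PLAUSIBLE_PEAK_FLOPS) {
            // "that's impossible - there must be something wrong"
            return FALLBACK_PEAK_FLOPS;
        }
        return reported;
    }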
ID: 90248
Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15480
Netherlands
Message 90249 - Posted: 25 Feb 2019, 13:03:56 UTC - in response to Message 90247.  

I'm not sure what speed estimate the Client uses during the work fetch calculation: it might be using 59.63 GFLOPS from the server APR, or it might be using 43,980,464 GFLOPS from its own calculation.
LOL, I didn't even see that peak GFLOPS value. I use the same driver on my RX 470 and it shows a much more down-to-earth number: 5,161 GFLOPS.

But see https://github.com/BOINC/boinc/issues/2988, with a workaround in https://github.com/BOINC/boinc/pull/3001.
ID: 90249
Richard Haselgrove
Volunteer tester
Help desk expert
Joined: 5 Oct 06
Posts: 5081
United Kingdom
Message 90250 - Posted: 25 Feb 2019, 14:32:31 UTC - in response to Message 90249.  

You missed #3006 ;-)

Yes, I know all about those - I've been making myself a pain in the arse all over BOINC for the last three weeks. See summary at Agenda item 4 of MINUTES: BOINC PROJECTS CALL #4. Bill knows all about it as well - he's been testing both client patches, and Eric's server patch at SETI Beta. We're in a bit of a hiatus waiting for Laurence to update the Server Stable branch.
ID: 90250
Bill
Joined: 13 Jun 17
Posts: 91
United States
Message 90261 - Posted: 26 Feb 2019, 3:00:02 UTC - in response to Message 90250.  

You missed #3006 ;-)

Yes, I know all about those - I've been making myself a pain in the arse all over BOINC for the last three weeks. See summary at Agenda item 4 of MINUTES: BOINC PROJECTS CALL #4. Bill knows all about it as well - he's been testing both client patches, and Eric's server patch at SETI Beta. We're in a bit of a hiatus waiting for Laurence to update the Server Stable branch.
I thought I was the one being a pain asking about all the problems that I'm having!

I did adjust my settings to 2.5 days + 0.05 as you mentioned, Richard. That didn't seem to change anything, so I decided to run SETI today with NNT to run down my queue. I left it alone for at least 12 hours, and when I allowed new tasks again, only CPU tasks (33 in total) were downloaded. In case I didn't mention it before, I did revert to 7.14.2.

This may be superfluous at this point, but here are more debug lines from my event log:
2/25/2019 8:32:59 PM | SETI@home | work fetch resumed by user
2/25/2019 8:32:59 PM |  | [work_fetch] Request work fetch: project work fetch resumed by user
2/25/2019 8:33:02 PM |  | [work_fetch] ------- start work fetch state -------
2/25/2019 8:33:02 PM |  | [work_fetch] target work buffer: 216000.00 + 4320.00 sec
2/25/2019 8:33:02 PM |  | [work_fetch] --- project states ---
2/25/2019 8:33:02 PM | Einstein@Home | [work_fetch] REC 142297886.589 prio -1000.190 can request work
2/25/2019 8:33:02 PM | Milkyway@Home | [work_fetch] REC 216419.883 prio 0.000 can't request work: suspended via Manager
2/25/2019 8:33:02 PM | SETI@home | [work_fetch] REC 605565081.292 prio -0.810 can request work
2/25/2019 8:33:02 PM | SETI@home Beta Test | [work_fetch] REC 148.011 prio -1000.000 can't request work: "no new tasks" requested via Manager
2/25/2019 8:33:02 PM |  | [work_fetch] --- state for CPU ---
2/25/2019 8:33:02 PM |  | [work_fetch] shortfall 591219.75 nidle 0.00 saturated 69308.16 busy 0.00
2/25/2019 8:33:02 PM | Einstein@Home | [work_fetch] share 0.000 zero resource share
2/25/2019 8:33:02 PM | Milkyway@Home | [work_fetch] share 0.000 zero resource share
2/25/2019 8:33:02 PM | SETI@home | [work_fetch] share 1.000
2/25/2019 8:33:02 PM | SETI@home Beta Test | [work_fetch] share 0.000 zero resource share
2/25/2019 8:33:02 PM |  | [work_fetch] --- state for AMD/ATI GPU ---
2/25/2019 8:33:02 PM |  | [work_fetch] shortfall 219985.51 nidle 0.00 saturated 334.49 busy 0.00
2/25/2019 8:33:02 PM | Einstein@Home | [work_fetch] share 0.000 zero resource share
2/25/2019 8:33:02 PM | Milkyway@Home | [work_fetch] share 0.000 zero resource share
2/25/2019 8:33:02 PM | SETI@home | [work_fetch] share 1.000
2/25/2019 8:33:02 PM | SETI@home Beta Test | [work_fetch] share 0.000 zero resource share
2/25/2019 8:33:02 PM |  | [work_fetch] ------- end work fetch state -------
2/25/2019 8:33:02 PM | SETI@home | [work_fetch] set_request() for CPU: ninst 4 nused_total 67.00 nidle_now 0.00 fetch share 1.00 req_inst 0.00 req_secs 591219.75
2/25/2019 8:33:02 PM | SETI@home | [work_fetch] set_request() for AMD/ATI GPU: ninst 1 nused_total 0.00 nidle_now 0.00 fetch share 1.00 req_inst 0.00 req_secs 0.00
2/25/2019 8:33:02 PM | SETI@home | [sched_op] Starting scheduler request
2/25/2019 8:33:02 PM | SETI@home | [work_fetch] request: CPU (591219.75 sec, 0.00 inst) AMD/ATI GPU (0.00 sec, 0.00 inst)
2/25/2019 8:33:02 PM | SETI@home | [sched_op] CPU work request: 591219.75 seconds; 0.00 devices
2/25/2019 8:33:02 PM | SETI@home | [sched_op] AMD/ATI GPU work request: 0.00 seconds; 0.00 devices
2/25/2019 8:33:04 PM | SETI@home | [sched_op] Server version 709
2/25/2019 8:33:04 PM | SETI@home | [sched_op] estimated total CPU task duration: 138200 seconds
2/25/2019 8:33:04 PM | SETI@home | [sched_op] estimated total AMD/ATI GPU task duration: 0 seconds
2/25/2019 8:33:04 PM | SETI@home | [sched_op] Deferring communication for 00:05:03
2/25/2019 8:33:04 PM | SETI@home | [sched_op] Reason: requested by project
2/25/2019 8:33:04 PM |  | [work_fetch] Request work fetch: RPC complete
2/25/2019 8:33:09 PM |  | [work_fetch] ------- start work fetch state -------
2/25/2019 8:33:09 PM |  | [work_fetch] target work buffer: 216000.00 + 4320.00 sec
2/25/2019 8:33:09 PM |  | [work_fetch] --- project states ---
2/25/2019 8:33:09 PM | Einstein@Home | [work_fetch] REC 142377747.529 prio -0.000 can request work
2/25/2019 8:33:09 PM | Milkyway@Home | [work_fetch] REC 216409.194 prio 0.000 can't request work: suspended via Manager
2/25/2019 8:33:09 PM | SETI@home | [work_fetch] REC 605535172.123 prio -0.000 can't request work: scheduler RPC backoff (297.88 sec)
2/25/2019 8:33:09 PM | SETI@home Beta Test | [work_fetch] REC 148.004 prio 0.000 can't request work: "no new tasks" requested via Manager
2/25/2019 8:33:09 PM |  | [work_fetch] --- state for CPU ---
2/25/2019 8:33:09 PM |  | [work_fetch] shortfall 457683.50 nidle 0.00 saturated 104761.07 busy 0.00


Einstein is still requesting GPU tasks one at a time (expected, since its resource share is 0). I'm wondering: is there any chance this is related to the peak FLOPS bug? Is peak FLOPS used in scheduling? I could see it being used to determine how much work needs to be downloaded; if the peak FLOPS figure is badly skewed, the client could misjudge how much work it actually needs, which would cause only one task to be downloaded at a time.
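
To illustrate what I mean - this is just my own back-of-the-envelope arithmetic, with a guessed per-task FLOP estimate, and I don't actually know which speed figure the client uses here:

    #include <cstdio>

    int main() {
        double task_fpops_est = 1.44e14;        // guessed per-task FLOP estimate
        double buffer_secs    = 216000 + 4320;  // 2.5 + 0.05 days, as in the log above

        double sane_flops  = 60e9;              // ~60 GFLOPS (the server-side APR)
        double crazy_flops = 4.398046e16;       // 43,980,464 GFLOPS from the driver

        printf("runtime at 60 GFLOPS: %.0f s -> tasks to fill buffer: %.0f\n",
               task_fpops_est / sane_flops, buffer_secs / (task_fpops_est / sane_flops));
        printf("runtime at 44 PFLOPS: %.4f s -> tasks to fill buffer: %.0f\n",
               task_fpops_est / crazy_flops, buffer_secs / (task_fpops_est / crazy_flops));
        return 0;
    }

Whichever figure it uses, an estimate that is off by six orders of magnitude makes any buffer arithmetic built on top of it meaningless.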

I'm out of ideas for what else I could tweak to fix this problem.
ID: 90261
Richard Haselgrove
Volunteer tester
Help desk expert
Joined: 5 Oct 06
Posts: 5081
United Kingdom
Message 90266 - Posted: 26 Feb 2019, 10:21:23 UTC - in response to Message 90261.  

I thought I was the one being a pain asking about all the problems that I'm having!
You'd be surprised who's been emailing who. I was personally asked by Bruce Allen (Director not only of Einstein, but also of the Institute in Hannover that hosts Einstein) to add that item to the project call agenda and speak to it - and by his secretary to write my own minutes afterwards.

And I think you've uncovered another one. Work fetch is currently (I think) determined as described in Client Scheduling October 2010 - Estimated credit: they decided it was too difficult to maintain resource shares according to real credit, so they use REC (recent estimated credit) instead.
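
As I understand it, REC is an exponentially-decayed running total of estimated credit, and the estimate is driven by the peak FLOPS of the devices a project has been using. Something like this in shape - the names, half-life and credit scale below are my own illustrative assumptions, not the client's actual accounting code:

    #include <cmath>

    const double REC_HALF_LIFE_SECS = 10 * 86400.0;  // assumed 10-day half-life

    // Decay the running total, then add credit estimated from device peak FLOPS.
    // If peak_flops is inflated by ~10^6 (as with the Vega 8 here), the estimated
    // credit - and therefore REC - is inflated by the same factor.
    double update_rec(double old_rec, double peak_flops, double dt_secs,
                      double credit_per_gflop_sec) {
        double decay = std::pow(0.5, dt_secs / REC_HALF_LIFE_SECS);
        double estimated_credit = (peak_flops / 1e9) * dt_secs * credit_per_gflop_sec;
        return old_rec * decay + estimated_credit;
    }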

So I pulled out these two tables.

Bill:
                           RAC              REC
Einstein@Home        18,906.01  142,297,886.589
MilkyWay              4,143.10      216,419.883
SETI@home             6,462.70  605,565,081.292

OpenCL: AMD/ATI GPU 0: AMD Radeon(TM) Vega 8 Graphics (43,980,464 GFLOPS peak)

Richard:
                           RAC          REC
Einstein@Home         8,119.18    8,322.576
GPUGRID            391,764.22  303,164.317
NumberFields@home     3,222.80    1,834.406
SETI@home            10,627.91  367,015.648

OpenCL: NVIDIA GPU 0: GeForce GTX 970 (4,087 GFLOPS peak)
OpenCL: NVIDIA GPU 1: GeForce GTX 750 Ti (1,639 GFLOPS peak)
OpenCL: Intel GPU 0: Intel(R) HD Graphics 4600 (192 GFLOPS peak)
They're not directly comparable, but between them, they teach us a lot.

Bill: I know you can't run MilkyWay at the moment because of the bug, so your MilkyWay figures (both RAC and REC) are currently low. I left out SETI Beta for the same reason - you only ran a few test tasks, so the averages didn't have time to stabilise.

Richard: this is my daily driver, and the work pattern has been steady for months, if not years. I run:

Einstein on the Intel GPU
GPUGrid on the GTX 970
SETI on the GTX 750 Ti and (rarely) on the GTX 970
NumberFields on the CPU

Comments:

Bill's REC has been blown out of the water by the GFlops error. I don't think we'd thought of that before. If that's the only GPU in the system (as yours is), it shouldn't make much difference - but it would unbalance the work fetch calculations between CPU, other GPUs, and the Ryzen GPU component.

In my case, the balance between GPUGrid and SETI is most interesting. Discount RAC - GPUGrid give credit away like popcorn - but REC for SETI should be half what it is. REC isn't taking account of the actual device speeds. It's interesting how close RAC and REC are for Einstein and GPUGrid (that says something about the policies of those two projects). I think NF@Home is another slight overpayer, but SETI is, as we all know, way out of line.

So, now I've got to write that up in a place and in a way that it will be taken seriously. We know that David is under pressure to review work fetch because of the changes he made when fixing the first part of Keith's problem: he won't want to do that twice, but I don't think he'll want to add this to that work. And no other developer has yet come forward to say that they have the capacity to review this whole area of code. Bummer.
ID: 90266
Bill
Joined: 13 Jun 17
Posts: 91
United States
Message 90267 - Posted: 26 Feb 2019, 13:14:49 UTC - in response to Message 90266.  

Bill: I know you can't run MilkyWay at the moment because of the bug, so your MilkyWay figures (both RAC and REC) are currently low. I left out SETI Beta for the same reason - you only ran a few test tasks, so the averages didn't have time to stabilise.

Richard: this is my daily driver, and the work pattern has been steady for months, if not years. I run:

Einstein on the Intel GPU
GPUGrid on the GTX 970
SETI on the GTX 750 Ti and (rarely) on the GTX 970
NumberFields on the CPU

Comments:

Bill's REC has been blown out of the water by the GFlops error. I don't think we'd thought of that before. If that's the only GPU in the system (as yours is), it shouldn't make much difference - but it would unbalance the work fetch calculations between CPU, other GPUs, and the Ryzen GPU component.

In my case, the balance between GPUGrid and SETI is most interesting. Discount RAC - GPUGrid give credit away like popcorn - but REC for SETI should be half what it is. REC isn't taking account of the actual device speeds. It's interesting how close RAC and REC are for Einstein and GPUGrid (that says something about the policies of those two projects). I think NF@Home is another slight overpayer, but SETI is, as we all know, way out of line.

So, now I've got to write that up in a place and in a way that it will be taken seriously. We know that David is under pressure to review work fetch because of the changes he made when fixing the first part of Keith's problem: he won't want to do that twice, but I don't think he'll want to add this to that work. And no other developer has yet come forward to say that they have the capacity to review this whole area of code. Bummer.

I suppose I can remove MilkyWay from the affected computer; it is only there as a backup in case the other projects run dry. I will have to see if that changes anything.

I also saw that Radeon 19.2.3 was just released for the APUs. I have been holding off on installing the latest (optional) drivers, but perhaps I should give it a go.
ID: 90267
Bill
Joined: 13 Jun 17
Posts: 91
United States
Message 90349 - Posted: 28 Feb 2019, 13:46:41 UTC - in response to Message 90267.  

So I removed MW for now, and I also installed the latest drivers for the Vega 8; neither change made any difference (not that I was expecting it to).
ID: 90349
Richard Haselgrove
Volunteer tester
Help desk expert
Joined: 5 Oct 06
Posts: 5081
United Kingdom
Message 90766 - Posted: 21 Mar 2019, 16:12:21 UTC

We held another of the developer conference calls today. David Anderson was not present (this one was at 13:00 UTC, convenient for Europe but very poorly timed for California), but sent in a written report. He said he's ready to start work on a new client release, but knows that work fetch issues have to be addressed first. I was asked to liaise with David on the matters which need attention.

I've opened a new issue to consolidate the state of play: #3065. Please add any relevant comments, either here or in the issue.
ID: 90766
Bill
Joined: 13 Jun 17
Posts: 91
United States
Message 90910 - Posted: 3 Apr 2019, 3:15:30 UTC - in response to Message 90766.  

I don't know how I missed this earlier post, but thank you for the update. Let me know if you need me to test anything. In the meantime, I will hit the subscribe button...
ID: 90910
Richard Haselgrove
Volunteer tester
Help desk expert
Joined: 5 Oct 06
Posts: 5081
United Kingdom
Message 90911 - Posted: 3 Apr 2019, 7:56:56 UTC - in response to Message 90910.  

Welcome back. There is indeed some new code to test - #3076 - but note that this is primarily aimed at finishing off the max_concurrent problem, not the 'recovering from massive FLOPS error' problem.
ID: 90911
Bill
Joined: 13 Jun 17
Posts: 91
United States
Message 91187 - Posted: 23 Apr 2019, 23:44:13 UTC

I just checked my computer, and it appears that I now have 50+ AMD GPU tasks downloaded, ready for crunching! I'm assuming the SETI server was patched during today's outage?
ID: 91187
