1) Message boards : BOINC Manager : Boinc.exe and Boincmgr.exe hitting www.007guard.com
Message 30460 Posted 8 Jan 2010 by Ed Meadows
I found the problem. This might be useful to anybody else who is seeing this. SpyBot modifies the hosts file to redirect malware-related website names to localhost (127.0.0.1) to prevent communications, but fails to insert '127.0.0.1 localhost' as the first entry in the list. The first entry in the list is '127.0.0.1 www.007guard.com', so the system ends up showing that name by mistake whenever localhost is accessed. The article that explains this is here: http://overclockedtech.com/?tag=007guardcom You would think that SpyBot would have a handle on this by now.
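For reference, a hosts file in that broken state looks roughly like the sketch below. Only the www.007guard.com line comes from the posts above; the second blocked hostname is a placeholder standing in for the rest of SpyBot's immunization list:

```
# Broken: no localhost entry at the top, so a reverse lookup of
# 127.0.0.1 returns the first matching name, www.007guard.com
127.0.0.1 www.007guard.com
127.0.0.1 blocked-site.example

# Fixed: localhost listed first, so 127.0.0.1 resolves back to "localhost"
127.0.0.1 localhost
127.0.0.1 www.007guard.com
127.0.0.1 blocked-site.example
```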
2) Message boards : BOINC Manager : Boinc.exe and Boincmgr.exe hitting www.007guard.com
Message 30459 Posted 8 Jan 2010 by Ed Meadows
In my Win 7 Task Manager's Resource Monitor, the Network section shows that Boinc.exe and Boincmgr.exe are communicating with www.007guard.com. I'm running two projects, climateprediction.net and malariacontrol.net. I tried to go to this site with my browser and was immediately warned by Firefox's WOT (Web of Trust) plug-in that it had a poor reputation, and McAfee SiteAdvisor red-flagged it as containing malware. Scanning with Malwarebytes, SpyBot, HitMan Pro, and McAfee doesn't reveal any malware on my system. Does anybody know what this is about? Thanks!
3) Message boards : BOINC Manager : Boinc 6.6.36 scheduling strangely
Message 25883 Posted 3 Jul 2009 by Ed Meadows
I reset both WCG and Leiden. I received two WCG tasks but no Leiden tasks. I received the same message: "7/3/2009 8:33:04 AM Leiden Classical Message from server: (won't finish in time) Computer on 99.4% of time, BOINC on 97.7% of that, this project gets 25.0% of that" It could be that BM considers it now to be over-scheduled, so I'll wait and see what happens when some of the CPDN tasks complete. If it appears that something is still amiss, I'll revert to an earlier release of BM.
4) Message boards : BOINC Manager : Boinc 6.6.36 scheduling strangely
Message 25882 Posted 3 Jul 2009 by Ed Meadows
Hey there, Jord, Yes, that is exactly the message I'm receiving, except the numbers are different: "7/2/2009 1:48:24 PM Leiden Classical Message from server: (won't finish in time) Computer on 99.3% of time, BOINC on 97.8% of that, this project gets 25.0% of that" I'll try resetting Leiden and WCG, but I cannot reset CPDN as I have four long-running models currently. Here are the lines you requested from sched_request_www.worldcommunitygrid.org.xml:
<on_frac>0.993443</on_frac>
<connected_frac>0.995929</connected_frac>
<active_frac>0.977402</active_frac>
And from sched_request_boinc.gorlaeus.net.xml:
<on_frac>0.993494</on_frac>
<connected_frac>0.995961</connected_frac>
<active_frac>0.977579</active_frac>
Hope this helps. Ed
5) Message boards : BOINC Manager : Boinc 6.6.36 scheduling strangely
Message 25871 Posted 2 Jul 2009 by Ed Meadows
I'm running BM 6.6.36 on a quad-core Intel machine under XP SP3. I'm attached to the following projects:
Climate Prediction, resource share 200 (50%)
Leiden Classical, resource share 100 (25%)
World Community Grid, resource share 100 (25%)
This configuration in earlier BM releases has always resulted in two CPDN tasks running and one each of Leiden and WCG. But with 6.6.36, CPDN gets ALL FOUR CORES assigned (four tasks) and I get no new Leiden or WCG tasks. The messages from the server state that the tasks "won't finish in time". This is not true. When I force Leiden or WCG tasks to download by suspending CPDN, once they finish the behavior always reverts to what I described above. Is the 200/100/100 resource allocation causing this behavior? Should I reset the shares to 100/50/50? Or is something else going on? Thanks.
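To make the arithmetic in those server messages concrete, here's a small sketch (my own illustration of how the quoted percentages compose, not BOINC's actual deadline check) using the 200/100/100 shares above:

```python
# Sketch of the arithmetic behind "Computer on 99.4% of time, BOINC on
# 97.7% of that, this project gets 25.0% of that". Illustration only;
# the real scheduler's "won't finish in time" test is more involved.

def effective_fraction(on_frac, active_frac, share, total_share):
    """Fraction of wall-clock CPU one project can expect on this host:
    machine on-time x BOINC active-time x normalized resource share."""
    return on_frac * active_frac * (share / total_share)

shares = {"CPDN": 200, "Leiden": 100, "WCG": 100}
total = sum(shares.values())  # 400

leiden = effective_fraction(0.994, 0.977, shares["Leiden"], total)
print(f"Leiden effective CPU fraction: {leiden:.3f}")  # about 0.243
```

Note that 200/100/100 already normalizes to the same 50/25/25 split that 100/50/50 would give; only the ratios matter, so rescaling the shares shouldn't change the scheduler's view.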
6) Message boards : Projects : News on project outages.
Message 23678 Posted 14 Mar 2009 by Ed Meadows
Does anyone have any news about Cosmology@home? They had a power failure last weekend, came back up, but then circled the drain all week. Now everything is offline, including their website.
7) Message boards : Questions and problems : Not problem, just an option suggestion.
Message 22855 Posted 3 Feb 2009 by Ed Meadows
I've heard far and wide that using the throttle causes problems, but I use it all the time on a particular server to prevent overheating, and I've never experienced a single problem. I know it "throttles" by constantly suspending and resuming WUs. Maybe the projects that I run on it aren't adversely affected (Cosmology, World Community Grid). What type of problems do people have?
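The suspend/resume behavior described above is just duty-cycle throttling. Here's a minimal sketch of the idea (my own illustration; the one-second period and the slicing are assumptions, not BOINC's actual constants):

```python
import time

def on_off_windows(cpu_fraction, period=1.0):
    """Split each period into a run window and a sleep window.
    E.g. a 75% cap over a 1 s period -> run 0.75 s, sleep 0.25 s."""
    run = cpu_fraction * period
    return run, period - run

def throttled(work_step, cpu_fraction=0.75, period=1.0, periods=3):
    """Call work_step repeatedly, but only during the 'on' window of
    each period, so CPU use averages roughly cpu_fraction."""
    run, idle = on_off_windows(cpu_fraction, period)
    for _ in range(periods):
        deadline = time.monotonic() + run
        while time.monotonic() < deadline:
            work_step()          # compute while inside the run window
        time.sleep(idle)         # "suspended" for the rest of the period
```

The work itself only sees the pauses as extra wall-clock time, so whether a project tolerates throttling mostly comes down to how its application handles being suspended mid-computation.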
8) Message boards : Questions and problems : Benchmarks Question
Message 22838 Posted 2 Feb 2009 by Ed Meadows
Beemer, Your suggestion spurred me into action. I found a pair of 3.2 GHz 1MB L3 cache Xeons on eBay for $69 and swapped out the old processors. 3.2 GHz is as high as the model G3 will support according to HP's specs. I didn't see any 2MB L3 cache parts available at the time that had the proper FSB speed. From what I've seen in the BIOS settings of the G3, it's locked down pretty tight and doesn't allow for much tweaking for OC'ing, and I really don't want to go there anyway, as the cooling system is really weak and the chips run pretty hot already.

The DL360 line has a real, documented problem with fan noise. There are 7 tiny, high-RPM fans that move the air through this slab. The fan speeds are 1) standby (noisy), 2) normal (loud), and 3) high (screaming). With the old CPUs, if I crunched with the BOINC Manager setting over 90% and the ambient room temp went over 66F, the fans launched into high and the noise was intolerable. With my new CPUs, anything above 75% at the same ambient will cause the fans to go into high. I guess the new CPUs run a bit hotter. This is even after several repeat removals and re-installations using different amounts of Arctic Silver thermal grease (maybe it was too thin, maybe it was too thick, etc.). Following Arctic Silver's break-in process I've managed to get it up to 75% from 65% before the fans shift into high.

BTW, you were right - my old CPUs were 512K cache, not 256K. It was an interesting experience, but I didn't expect the cooling system in this box to be so WEAK. I might end up going back to my old CPUs. We'll see. Thanks for your suggestions.
9) Message boards : Questions and problems : Benchmarks Question
Message 22554 Posted 21 Jan 2009 by Ed Meadows
BeemerBiker, The machine is an HP DL360 G3 1U-height server that I just HAD to get (it's a piece of Amazon.com history - it was recently retired from the Seattle data center). That's great info you gave me, and it has got me curious as to what other options I might have in the way of processors. Unfortunately, I think the DL360 G3 is very limited in which CPUs will work on its mobo. Yeah, I know that my CPUs' cache is VERY small (256K), and I see very high soft page fault rates because of it. There is NO L3 cache on them. Thanks, Ed
10) Message boards : Questions and problems : Benchmarks Question
Message 22552 Posted 21 Jan 2009 by Ed Meadows
Jord, Thank you for the answer. This is what I expected, and makes sense given what I'm seeing. Ed
11) Message boards : Questions and problems : Benchmarks Question
Message 22543 Posted 20 Jan 2009 by Ed Meadows
Perhaps I provided too much info above. If you have a machine with twice as many threads as processors, when you run Boinc Manager's CPU benchmarks and it reports some numbers PER CPU, is it really reporting per A) PHYSICAL processor core, or per B) VIRTUAL thread? Thanks.
12) Message boards : Questions and problems : Benchmarks Question
Message 22501 Posted 18 Jan 2009 by Ed Meadows
I'm comparing actual operating results of two of my computers against reported BOINC CPU benchmarks.
Computer 1: Win XP 32-bit on a 2.8 GHz Pentium-D dual-core, 2 threads available (i.e., number of threads = number of cores):
1415 Whetstones per CPU
2403 Dhrystones per CPU
Computer 2: Win 2003 Server 32-bit on two single-core 3.06 GHz Xeon CPUs with multi-threading, 4 threads available (i.e., number of threads is 2x the number of cores):
1442 Whetstones per CPU
3235 Dhrystones per CPU
Now, both are running the same project (Cosmology at home). Computer 1 takes about 10.5 hours to complete a WU. Computer 2 takes about 21 hours to complete a WU. Computer 2 benchmarks slightly faster than computer 1, but computer 2 turns out to be twice as slow in terms of time to complete a WU. My question is, does the BOINC Manager's "Run CPU Benchmarks" report per physical CPU core, or per THREAD? It would seem like the former, since computer 2 turns out to be twice as slow per thread in terms of actual runtime required. If not, then something else is causing the discrepancy. I hope this question makes sense. Thanks.
13) Message boards : Projects : News on project outages.
Message 21639 Posted 5 Dec 2008 by Ed Meadows
Any news about Leiden Classical? They've been down since yesterday (Wednesday, 12/3/2008).
14) Message boards : Projects : News on Project Outages
Message 19807 Posted 28 Aug 2008 by Ed Meadows
What's going on with Superlink at Technion? They've been completely down for two days now. Even their website is down.
15) Message boards : BOINC Manager : BOINC and the XP administrator account
Message 19779 Posted 27 Aug 2008 by Ed Meadows
Is there any requirement that BOINC Manager 6.2.18 has to run only when you're logged on as an administrator? I installed it originally to run under the user under which it was installed. I switched this user from an administrator to a "limited user" and BOINC seems to be working ok. Thanks.
16) Message boards : BOINC Manager : BOINC and the XP administrator account
Message 19771 Posted 26 Aug 2008 by Ed Meadows
Is there any requirement that BOINC Manager 6.2.18 has to run only when you're logged on as an administrator? That's the way it's currently running, but I would like to have BOINC run under a power user's account instead - i.e., I just want to change the account classification from administrator to power user for the user that BOINC is currently running under. Thanks.
17) Message boards : BOINC client : Files camb_scalarcls.chk and camb_tensorcls.chk
Message 15010 Posted 17 Jan 2008 by Ed Meadows
Sekerob - thanks, I always suspend all projects and then completely exit BOINC Manager before defragging. I also back up the BOINC directory while I'm out. Ageless - this makes sense. There are always more Cosmology@Home WUs in the queue, some of which may have some compute time on them, preventing these large files from resetting. I've reposted my question to the Cosmology@Home forum to see what they say. Ed
18) Message boards : BOINC client : Files camb_scalarcls.chk and camb_tensorcls.chk
Message 15007 Posted 17 Jan 2008 by Ed Meadows
I take care to keep my system very well tuned, including defragmenting my hard drive every other day. That's what brought my attention to these two files in the first place - in the defragmenter's report. So these are Cosmology files... since they are constantly growing while Cosmology WUs are flowing through my system normally, it's apparent that these files aren't deleted once WUs are completed. I'll just drain the Cosmology project queue, detach, and reattach to the project, and this should get rid of the files, so they can reappear and start growing again :^( Thanks for the information. Ed
19) Message boards : BOINC client : Files camb_scalarcls.chk and camb_tensorcls.chk
Message 15003 Posted 17 Jan 2008 by Ed Meadows
These files are in C:\Program Files\BOINC\slots\2 and take up 54 MB and 11 MB, respectively. The file type is labeled as "Recovered File Fragments". They are growing larger and are frequently badly fragmented. What are these, and is something going wrong that I need to know about? Should I do something to manage these files? I don't see any evidence that anything is failing with my work units. Thanks, Ed
20) Message boards : BOINC Manager : BOINC Q&A
Message 11826 Posted 27 Jul 2007 by Ed Meadows
Do I just install over my existing version (5.10.7) or do I have to uninstall it first? Thanks, William.
Copyright © 2024 University of California.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License,
Version 1.2 or any later version published by the Free Software Foundation.