Posts by Ozylynx

1) Message boards : The Lounge : Ted Kennedy killed a woman and got away with it. (Message 15203)
Posted 1 Feb 2008 by Profile Ozylynx
Post:
Hi Learylin.

I'm not sure what your post has to do with anything much or what it means to you.
Perhaps what it means to me might be of interest:

Ted Kennedy (rich and influential people) killed a woman (kill other people) in 1969 (at any time): he drove his car off a bridge (anywhere) and the woman drowned (victim of unfortunate circumstance?). And he got away with it (rich people hire good lawyers). I would never let anyone endorse me that has a bad record (I wasn't in court that day, so I don't know. Was there a conviction recorded?). Would you? (Nah. I've never made a mistake. I'm a legend in my own lunchbox.)

Lighten up, man.
You're guessing at the circumstances and one shouldn't make judgements like that anyway. That's what the courts are for.
You either trust the system or change it.
Or move to another country.

Cheers.
2) Message boards : The Lounge : How hot does your CPU get? (Message 15202)
Posted 1 Feb 2008 by Profile Ozylynx
Post:
I run my 8 computers 24/7 crunching 100%.

The highest temp is 53C on my AthlonXP3000+, as I think the copper ThermalTake HS&F may not be seated quite square atm. All of the others run at sub 50C on standard air. I worry at 50C+ and take action at 55C without delay.

Sekerob, and particularly Nicolas, must either be getting bad temperature readings or be the luckiest people on earth, because the critical temp for a P4 is 69C and it's all over once that is passed. I can't imagine them not shutting down at those temps, guys. The most accurate temperature readings, for Intel processors only, come from the TAT program available for download from Intel. It gives real-time temps in 0.1-degree increments and loads your system even harder than BOINC!

The hottest thing here (no, she's not looking) is an old ASUS MoBo running an OC'd Celeron D. The CPU runs nice and cool, but the MoBo will boil the kettle for the nice cuppa tea one needs, to settle the nerves, while awaiting the inevitable disaster...

Cheers.
3) Message boards : The Lounge : Help with Project Points Proposal (Message 15023)
Posted 18 Jan 2008 by Profile Ozylynx
Post:
SLM is not an alternative client or plug-in. It's a stand alone program and it won't bother BOINCView at all.

Even better.
4) Message boards : The Lounge : Help with Project Points Proposal (Message 14998)
Posted 17 Jan 2008 by Profile Ozylynx
Post:
That seems like an idea worth discussing if that's what you're saying.

Indeed. I came across a reference to it when looking for a 'plug-in' to force the BOINC client to use only one core for a particular project, leaving the other core free for other BOINC projects. My objective was to avoid L2 flooding caused by a single high-demand project using both cores simultaneously, as happens now with the 5.10.30 client I'm using.

If SLM fits the bill then you're on to something. I think in many respects, memory usage and L2 usage often go hand in hand.

btw. The 'plug-in' aspect is fairly important to me as I use BOINCView on my farm and I don't think it works with alternative clients. Could be wrong there...

Cheers.
Keith
5) Message boards : The Lounge : Help with Project Points Proposal (Message 14995)
Posted 17 Jan 2008 by Profile Ozylynx
Post:
Pepo and KevinT.

I agree with both of you.

I also still believe that my proposal, in the first article of this thread, stands as the best alternative (to date) for what we all want. Regarding what isn't wanted in the Trac proposal, I respectfully suggest that another thread might be more appropriate, and I would be only too eager to join you there for that discussion.

Meanwhile, back at the ranch, I'm trying to get some improvements and suggestions for the original proposal, which at least takes into account some (if not all) of the factors missing from current credit equations.

I do disagree on one point in the previous posts though. Excess bandwidth requirements should be given a bonus, IMO. U.S. and U.K. citizens enjoy cheap broadband that is unavailable to most of the rest of the world. I personally pay $60/month for 512K cable limited to 25G/month, and that is a very good deal in my country! I can pay more, much more, for a faster connection, but the total usage limit stays the same regardless of the plan I choose. Projects like BURP become an impossibility under any circumstances!

Cheers.
Keith
6) Message boards : The Lounge : Help with Project Points Proposal (Message 14987)
Posted 17 Jan 2008 by Profile Ozylynx
Post:
In my experience you won't buy a Prescott on eBay for less than the E2180 and MoBo would cost you to buy new. You'd be much happier with that combo, believe me! Of course, you'd have to make sure a 'standard' MoBo fits your case, assuming it's a proprietary-brand computer. Some do and most don't.

btw, Northwoods outperform Prescotts in many areas, and mine @ 2.8GHz runs 2*SIMAP with ease. In fact it's keeping some very illustrious company on the RAC records within the project.

Something else to think on...

Cheers.
Keith.
7) Message boards : The Lounge : Help with Project Points Proposal (Message 14984)
Posted 17 Jan 2008 by Profile Ozylynx
Post:
Q6600 2.4GHz has 2x4MB L2
Xeon 3.6GHz has 1MB L2

The former does certain jobs in half the time of the latter, the perceived powerhouse, so my next replacement coming up for the P4HT would be a QX9650 45nm with 12MB L2, just 1300 USD I read :( Might earn it back on the ebill though :)

Very nice.

My next will likely be a Q6600, but being of meagre means, I'll use an ASRock 775i65G MoBo like the one I have the E2180 on, for around US$67. That lets me use the DDR RAM, IDE HDDs and AGP cards I already have. The E2180 was built for a total of US$164 that way.

When I build the Q6600s I will sell the AthlonXPs and MoBos on eBay. Everything else will be reused. That should cost less than US$400 each to build. I hope 2x4M of L2 will last a year or two on BOINC.

Is your P4 HT a Prescott or a Northwood? 1M or 512K of L2?

Cheers.
Keith.
8) Message boards : The Lounge : Help with Project Points Proposal (Message 14957)
Posted 16 Jan 2008 by Profile Ozylynx
Post:
Dagorath.

I'm generally unimpressed with 'spin' doctors. We are not amused... lol

All suggested remedies have already been implemented. Especially the 'detach from projects' one.

I was, of course, referring to my original proposal in regard to ironing out the anomaly created in the credit system by the "screw you over" projects, despite having found that some of them can be very useful if properly managed. A computer with plenty of L2 can be most profitable in a project that gouges on L2 demand, has a quorum and averages points, for example.

That still does nothing to fix the credit system as a whole, and none of the proposed fixes can work without implementing an L2 fix first. It's just not possible.

I can't stop thinking that modern computers, with all that cache and multi-core technology, gigabytes of RAM and terabytes of storage, are a little like taking a Euclid to pick the kids up from school.

btw. How's the SLM project coming? A strange one for a bloke that doesn't know enough about L2 and RAM issues to comment. (ROFLMAO)

Cheers.
Keith

9) Message boards : The Lounge : Help with Project Points Proposal (Message 14944)
Posted 16 Jan 2008 by Profile Ozylynx
Post:
Now I'm spamming my own thread. I just realised something too important to let pass though so....

The figures for the E2180 with a flooded cache indicate about 210% of 'normal' work time to complete the task. Now remember the previous example of the Celeron 1300 claiming 1465 points for a WU when its cache was swamped? The E2180 claimed 683 credits for the same WU. Try this: 683 * 210% = 1434, a 31-point discrepancy, or a deviation of only about 2%. Not bad for two totally different systems. I bet some projects wish they could get that close in their quorums. ;0)
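If anyone wants to rerun the numbers, here is the same arithmetic as a few lines of Python. It's just a back-of-the-envelope check using the figures quoted in this thread, nothing from BOINC itself:

```python
# Back-of-the-envelope check of the comparison above (figures from this thread).
celeron_claim = 1465      # Celeron 1300 (256K cache), claim for the WU
e2180_claim = 683         # E2180 (1M shared cache), claim for the same WU
flooding_slowdown = 2.10  # ~210% of 'normal' run time with a flooded L2 cache

scaled_claim = e2180_claim * flooding_slowdown   # 1434.3
gap = celeron_claim - scaled_claim               # ~31 points
print(f"scaled E2180 claim: {scaled_claim:.0f}")
print(f"discrepancy: {gap:.0f} points ({gap / celeron_claim:.1%} of the Celeron claim)")
```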

Conclusion: something like 98% of the point deviation under the cobblestone credit system can be directly attributed to L2 cache issues, which remain ignored. The remainder is most likely down to RAM speed.

Can we return to creating a system which takes this anomaly into account please?
Ideas?

Cheers.
Keith.
10) Message boards : The Lounge : Help with Project Points Proposal (Message 14936)
Posted 16 Jan 2008 by Profile Ozylynx
Post:
Oops. forgot this...
even though, at times it would be quicker to go directly to the HDD.

This applies to the situation where the CPU would intuitively 'know' that its instruction is still on the HDD and would skip the other steps, accessing the HDD first. Unfortunately, that never happens in real life.

Cheers.
Keith.
11) Message boards : The Lounge : Help with Project Points Proposal (Message 14930)
Posted 16 Jan 2008 by Profile Ozylynx
Post:
Hi Peter.

Just quoting from an article, as best I can recall it.
Page files to RAM, as I understand them, do exist and are handled in exactly the same way as page files to Disk. Apparently page files to disk are written when available RAM becomes low.

Regarding >50% CPU usage on R/W of page files: yes! You can just feel another example coming.

e.g.
two_agd_anthracine alone = 2.15%/hour
two_agd_anthracine + other project = 2.15%/hour (very low cache demand; runs very well in 128K of L2)
two_agd_anthracine + other QMC (last2_224_peptidexp) = 1.43%/hour (lower cache demand for peptidexp)
two_agd_anthracine + another two_agd_anthracine = 1.03%/hour

This is a set of figures I compiled in answer to someone's question, in another forum, regarding E2180 performance with shared L2 cache.
They are expressed as percent completed per hour for the same WU. The WUs on this project are very L2 intensive; the examples that include a 'low'-intensity WU are in fact SIMAPs. Note: these results are all on the SAME machine, at the SAME time, with the SAME equipment and setup, and they are easily repeatable. The two_agd_anthracine swamps 512K of L2 on its own, while the last2_224_peptidexp uses less L2, but still enough to mildly flood the 1M L2 when combined with a two_agd_anthracine.

One can see that the main WU under study, the two_agd_anthracine, is completed on one core at a rate of 2.15%/hour, and that doesn't change if it is combined with a SIMAP WU on the other core. The real news, however, is when it is combined with another L2-intensive WU on the second core and the rate drops to 1.03%/hour for BOTH cores!! If it were not for L2 flooding, one would expect 2.15% on each core, giving a total of 4.3%/hour, but in reality only 2.06%/hour overall is achieved. As you can see, that is less than 50% efficient.
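For those who like to see the sum written out, here it is as a tiny Python snippet, using only the rates listed above (nothing project-specific in it):

```python
# Efficiency check using the completion rates listed above.
solo_rate = 2.15    # %/hour: two_agd_anthracine alone on one core
paired_rate = 1.03  # %/hour per core: two of them sharing the 1M L2 cache

expected = 2 * solo_rate    # 4.30 %/hour if the two cores didn't interfere
actual = 2 * paired_rate    # 2.06 %/hour actually achieved
print(f"throughput with a flooded L2: {actual / expected:.0%} of expected")  # ~48%
```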

While all of this is happening, the claimed benchmark (cobblestone) scores continue to accumulate as if the computer were working at 100% efficiency.

Cheers.
Keith.
12) Message boards : The Lounge : Help with Project Points Proposal (Message 14919)
Posted 15 Jan 2008 by Profile Ozylynx
Post:
What's the world coming to? I find myself agreeing with Dagorath...sigh.

Anyway, if Dagorath doesn't know enough about L2 cache, then I'm assuming many others probably know less. I will try to explain to the best of my ability. I only recently learned this myself and can't find the excellent article, so I will just do my best from memory.

Cache is divided into two main types, L1 and L2, and sometimes L3, which has traditionally resided on the motherboard in a separate high-speed memory module. L3 cache is now being placed on the CPU, at full processor speed, in the latest top-end CPUs.

L1 cache is usually small: 128K is typical of chips produced in the last few years, divided into two sections of 64K each, one for data and the other for instruction code. L2 cache also runs at full processor speed and ranges in size from 128K on older CPUs to 8M on a modern top-end model. Most of us would be familiar with sizes between 256K and 2M.

How it works: the CPU asks the L1 cache for the instructions telling it what to do with the data. If it can't find the instruction set there, which is more often than not, it next asks the L2 cache, then the L3 cache if there is one, then the RAM, and if all of that fails, the R/W cache on the HDD and finally the HDD itself. It always follows that order, from the fastest to the slowest source of the information it needs, even though at times it would be quicker to go directly to the HDD.

Often the instruction set will be in the L2 cache, so the processor barely misses a beat and is able to carry on working at full speed. If, however, the L2 cache does not contain the required instruction set, it has to ask the paged files in RAM. The instructions are then written from RAM to cache, taking processor time, and a set of instructions is also written from cache back to the page file in RAM to make room for the new ones, taking more time. btw, this all takes CPU cycles which, importantly for benchmarking purposes, ARE counted as CPU time spent on processing. [edit] A CPU with a 'swamped' or 'flooded' L2 cache can spend more than 50% of its flops writing swap files to RAM. [/edit]

Obviously, if the L2 cache is large enough there is a much reduced need for the processor to look any further, and it is able to continue its work without interruption, and at full speed, for most of the WU. Having to read and write back and forth to RAM or the HDD has a very dramatic effect on overall CPU performance, and that effect is most apparent when running 100%-demand tasks like BOINC.
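If a picture of that lookup order helps, here is a toy model of it in Python. The cycle counts are made-up, illustrative numbers only, not measurements of any real CPU:

```python
# Toy model of the lookup order described above. The cycle costs are
# illustrative guesses only, not measurements of any real CPU.
HIERARCHY = [
    ("L1 cache", 3),
    ("L2 cache", 14),
    ("L3 cache", 40),
    ("RAM", 200),
    ("HDD cache", 50_000),
    ("HDD", 5_000_000),
]

def fetch_cost(hit_level):
    """Cycles spent checking each level in order until the data is found."""
    names_checked = [name for name, _ in HIERARCHY[:hit_level + 1]]
    cycles = sum(cost for _, cost in HIERARCHY[:hit_level + 1])
    return names_checked, cycles

print(fetch_cost(1))  # hit in L2: the processor barely misses a beat
print(fetch_cost(3))  # miss L1/L2/L3 and go to RAM: the 'flooded cache' case
```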

Another factor in the constant reading and writing to and from the cache is the quality of programming within the application. Thanks to modern 'modular' cut-and-paste techniques, it is common for one set of instructions to be repeated many times within a program. Instead of using the old-fashioned 'nested loop' technique, where the program was told to return to a previous instruction, it is given the same instructions again under a different tag or line number. The processor treats them as a completely new set of instructions every time and quickly floods the L2 cache, so the swap-file routine is repeated over and over for the programmer's convenience. Hence one of the reasons for ever-increasing cache sizes and faster CPUs to cope with the demand.
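Purely as an illustration of the structural difference I mean, here are the two shapes of code side by side, in Python. This won't show cache effects by itself; it only shows the repeated-copy style versus the loop style:

```python
# The same work written out three times (a larger instruction footprint)...
def copy_pasted(data):
    total = 0
    total += sum(x * x for x in data)  # block pasted once...
    total += sum(x * x for x in data)  # ...pasted again...
    total += sum(x * x for x in data)  # ...and again
    return total

# ...versus written once and reused inside a loop.
def looped(data):
    total = 0
    for _ in range(3):
        total += sum(x * x for x in data)
    return total
```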

That's about all I've got. Hope it helps some of you.

Cheers.
Keith.
13) Message boards : The Lounge : Help with Project Points Proposal (Message 14913)
Posted 15 Jan 2008 by Profile Ozylynx
Post:
Yes, Sekerob, uniformity is indeed the key. There will always be exceptions present: AMD vs. Intel architecture, etc. These nuances become the responsibility of the end user, who must make an informed choice about project selection.

I note, however, that nobody has tackled the L2 cache issue. This, in my observation, is the single greatest variant in the credit system. It's hardware. It is currently ignored completely by benchmarking systems. It is never mentioned when looking at minimum system requirements, and it is affected by RAM speed, RAM quantity and CPU FSB. It is also subject to sloppy or improper programming and, most of all, it is difficult for the user to detect, as swap files to and from paged RAM don't cause any visible reaction from the computer itself. It is therefore nearly, if not completely, impossible to quantify with software benchmarking. Unless one is specifically looking for and recognizes its effects, it will go completely unnoticed.

WCG is one of the worst offenders in this area, as nearly all of their projects put serious demands on cache. Just take a look at the claimed points within any quorum; that is where it shows most easily. I need say no more.

The Trac proposal: a quick glance reveals so many gaping holes in it that I haven't spent any time trying to decipher the math. Even if the blatantly unfair surface issues were corrected, which would require a totally different mindset to the way it is written now, the administration of it would be a nightmare. Moreover, it still doesn't answer any of the critical benchmarking issues. It is also open to viral-type attacks simply by being flooded with point claims, which it is fundamentally designed to reject and recalibrate itself upon. It would be tied in knots within hours. It hasn't been edited because there is no basis to build on.
Don't get me wrong, I can see the aim of it. It is built with an agenda: to take choice away from people and encourage them to spread their computer shares over as many projects as possible and to always look for new ones. That may be the author's philosophy. It certainly isn't mine, and I resent the implications of such a surreptitious action in the guise of a credit proposal. That may work in the political arena, but most politicians aren't that 'ham-fisted' about it. One notable exception....

I also note it is unsigned. That's good. Means I don't know to whom I need to be politically correct or polite. In short, an abomination.

Hey that's more than I thought I'd ever have to write on that topic.

Cheers.
Keith.
14) Message boards : The Lounge : Help with Project Points Proposal (Message 14908)
Posted 15 Jan 2008 by Profile Ozylynx
Post:
As was shown a few days ago in a screenshot from Process Explorer (post by Nicolas), even with the elevation of BOINC.exe, which does something to improve the stability and raise the test results, the actual individual test thread runs at low priority, which is why we barely notice it taking place. Were the test elevated, a user would experience it while using the computer (you definitely notice it when the old WCG UD agent runs its test). Dilemma: should a user be subjected to that? Maybe, if so, the pre-benchmark routine should check that the system has been idle for x minutes before launching. Given that it runs once every 5 days, that seems a viable fix.

I have to ask, Sekerob. What is it that you think will be fixed?

The benchmark doesn't and can't take unknown variables into account. It therefore measures nothing of any 'real world' value under the current circumstances. The unknown variables (see previous post) must become known and quantified before any formula can be applied to their measurement and subsequent impact. Let me remind you that the example which brought this segment of the thread into being was on my computers, in a real-world situation. The computers I use to illustrate these situations are properly benchmarked and are 24/7 dedicated crunching machines, never used for, or influenced by, other programs.

This situation, which exists right now, is the driving force behind this thread. Ideally the only variable in the equation should be time, and that should be able to be forecast from the other 'knowns'. If one can't do that, no formula for benchmarking can work. Attempting to 'fix' benchmark issues on a client computer is like P'ing into the wind: it will simply come back and splatter you. Having the WU carry a specific value is the only viable alternative.

This also applies to the 'indeterminate' work. The fact is, when push comes to shove, that this work hasn't been studied to determine the exact processes that take place within it, how those processes are handled, and what resources those processes need. Where once we would have needed to know such things in order to write appropriate and efficient code, we now just build faster, more efficient CPUs and larger HDDs, install more and faster RAM, and put L3 cache on-die with multiple processor cores. That's just the way of the world; it's not a criticism. Nor is it within my domain to develop the skills to 'fix' it in order to create an appropriate benchmarking technique for such work. The projects using incompletely researched WUs need to design their own system.

I read somewhere, not very long ago, that we are still about 100 years away from learning how to utilize the FULL potential of a Pentium 100 processor. I'd dare say that time frame is expanding rather than reducing.

Cheers.
Keith
15) Message boards : BOINC Manager : How do I control Dual CPU usage? (Message 14904)
Posted 15 Jan 2008 by Profile Ozylynx
Post:
Thanks Pepo. I'll see how they went. Sounds like just what I'm looking for.

Keith.
16) Message boards : The Lounge : Help with Project Points Proposal (Message 14903)
Posted 15 Jan 2008 by Profile Ozylynx
Post:
I will try again.

L2 cache demands and memory usage are unknown variables which cannot be taken into account by the user's computer. While they remain unknown, cobblestone calculations of credit are useless.

I have proposed what I believe to be a simple, effective and workable solution for some projects.
I'm not sure if processes can change their own priority level on the fly, but if they can then it should be easy to mod BOINC to raise its priority just before it runs benchmarks and lower it after benchmarks finish. That might not be acceptable to some users, so that feature should be made optional with the default set to off. There should be a warning if the user turns the option on.

This is easily done in Windows XP under Task Manager. Unfortunately, it doesn't address any of the problems. Most low-claiming computers claim low because they are better equipped to handle more extreme system demands, as outlined earlier in this thread (counter-intuitive, in my book). That is, two different computers will claim the same credit on extremely low-demand tasks and vary by 300% or more when claiming for high-demand tasks, or anywhere in between. The better, faster computer will always be the low claimer.

Keith

17) Message boards : The Lounge : Help with Project Points Proposal (Message 14880)
Posted 14 Jan 2008 by Profile Ozylynx
Post:
Rubbish!
18) Message boards : The Lounge : Help with Project Points Proposal (Message 14868)
Posted 13 Jan 2008 by Profile Ozylynx
Post:
Dagorath.

You not only see problems, you invent them where they don't exist.
Forum rules prevent further comment.


Zombie67.

I gave a clear example of why, in many cases, internal position comparisons DON'T work. Yes, they could work. That doesn't change the real-world facts.

It was also hoped that the disparity between the 'claimed' credits of the two computers might be noticed. Both would be claiming less than the proposed benchmark computer's estimated 1500. All would be awarded the 1500 credits, instead of the ridiculous 683 the E2180 would receive on a project that only pays benchmark credits as claimed. It only serves to highlight the credit erosion that has taken place over the years.

That's what this thread is trying to set right.
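For clarity, here is how I read the basic proposal from the start of this thread, as a minimal sketch. The numbers are just the ones from my earlier example, and none of this is actual BOINC code:

```python
# Minimal sketch of the per-WU credit idea: the WU carries a fixed value,
# calibrated on the benchmark computer, and every host that returns a valid
# result gets that value, whatever its own benchmark would have claimed.
WU_VALUE = 1500  # estimated credit for this WU on the benchmark computer

def awarded_credit(host_claim):
    # The host's own cobblestone claim is ignored on purpose.
    return WU_VALUE

print(awarded_credit(683))   # E2180's claim   -> 1500
print(awarded_credit(1465))  # Celeron's claim -> 1500
```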

Cheers.
Keith
19) Message boards : The Lounge : Help with Project Points Proposal (Message 14865)
Posted 13 Jan 2008 by Profile Ozylynx
Post:
Rather than comparing credits between projects, you compare positions


Yes, this is an excellent incentive by BOINC, and an average overall position comparison of individual users, as well as teams, would be interesting.

It doesn't, however, overcome the fundamental problem of unfair credit allocation within the same project, which affects the position of the individual within that project.

An example: one project that I was with for some time used an averaging technique for claimed credit. I'll use the WU mentioned earlier; I have database info on that one, and while it is from a different project, the principles apply equally. The Celeron 1300 (256K cache) took 211.5 CPU hours to complete and claimed 1465 credits. The E2180 (1M shared cache) took 47.9 CPU hours and claimed 683 credits. Average: 1074. The E2180 receives 157% of its benchmarked credit while the Celeron only gets 73%. This effect is greater or smaller depending upon which computers are grouped for an average, and IMO it represents no credit system whatsoever. It certainly makes rankings and positions, even within the same project, quite farcical.
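Written out in Python, using the same two hosts and the figures above:

```python
# The quorum-averaging example above, written out (figures from this thread).
celeron_claim = 1465  # Celeron 1300, 256K cache, 211.5 CPU hours
e2180_claim = 683     # E2180, 1M shared cache, 47.9 CPU hours

granted = (celeron_claim + e2180_claim) / 2  # 1074 credits awarded to each host
print(f"E2180 receives {granted / e2180_claim:.0%} of its claim")      # ~157%
print(f"Celeron receives {granted / celeron_claim:.0%} of its claim")  # ~73%
```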

Further to the other issue. To quote myself from the original post in this thread:
While I am aware that some projects have fairly constant run times within a project, others do not. At least with a firm base to build upon, the likelihood of an appropriate algorithm being developed to deal with this situation greatly increases.

There is an answer out there. This is getting into the technical areas beyond my capability. Some see problems and others solutions. C'mon BOINCers.

Cheers
Keith
20) Message boards : The Lounge : Help with Project Points Proposal (Message 14863)
Posted 13 Jan 2008 by Profile Ozylynx
Post:
you've gone from a system in which a fairly accurate/reliable extrapolation is envisioned to one where we measure the worst case scenario, whatever that is, and then let the chance that the WU takes less time determine the final outcome. I don't see much chance of a system that works on chance ever getting off the ground.

Thank you Dagorath.
At least one person sees the idea as giving an accurate and reliable extrapolation. The chance factor of which you speak applies to the few specific projects that are 'indeterminate' (meaning, by chance) within their own structure. The system was never designed with that situation in mind; those projects are a completely different issue, to be handled under a completely different set of rules.
[edit] btw. pph would remain constant. Credits for a completed task would be the 'chance' variable, in line with the time taken to complete.[/edit]
And since it's all happening on the host it will get cheated anyway.

This is a very valid point. I actually confirmed this for myself after my previous post. I also discovered that they can put irregular strain on L2 cache demands. Another idea is needed; the one I put forward for handling this type of project is flawed.
They are not as easily cheated, they're easier to define, easier to code, easier to tweak too.

The info behind this statement would doubtless solve all of the problems. Please enlighten us as to how they are being easily, and I surmise accurately, defined.

I'm encouraged that the basic proposal appears sound. Let's iron out the wrinkles. Any ideas? Let's think-tank this, people.

Kathryn, thanks for the input. That's all valuable info.

Cheers.
Keith



Copyright © 2024 University of California.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.