Message boards : Questions and problems : Do any projects currently run on Itanium (IA-64) under any OS?
Joined: 19 Dec 06 · Posts: 90
Thread title says it all. Don't bring up http://dotsch.de/; all his ports are either for shut-down projects (R.I.P. SIMAP!) or too old for the current science applications.
Joined: 23 Feb 12 · Posts: 198
Without recompiling the application yourself, I don't believe there are any BOINC projects still supporting it. There just aren't that many Itaniums in the hands of DC'ers these days. If the project requires a second copy for validation, they probably don't want those being used, as the odds of having someone else validate the work unit would be slim. Perhaps contact or post at a project that does not require peer validation and see if they can be of more assistance.
Joined: 19 Dec 06 · Posts: 90
> If the project requires a second copy for validation, they probably don't want those being used as the odds of having someone validate the work unit would be slim.

That's a good point; I hadn't considered it. These days it seems like more projects are requiring the wingman thing. On that note, not to get too off-topic, would it be possible to be my own wingman with an Itanium blade server, which could run dozens of VMs, as long as some of them were registered under a different BOINC account?
Joined: 5 Oct 06 · Posts: 5149
> If the project requires a second copy for validation, they probably don't want those being used as the odds of having someone validate the work unit would be slim.

On the other hand, I suspect very few projects use or require the Homogeneous Redundancy mechanism. In general, and including the projects requiring a second copy for validation, it is assumed that every {OS|CPU|GPU} scientific calculation produces the same numerical outcome, to within the scientific accuracy required by the project. If an Itanium chip routinely produces a different (non-validating) answer when compared with an x86, 68k, PowerPC, RISC or ARM chip, which one of them is right?

I'd say that's a false problem. Compile the source code with a reputable IA-64 compiler, don't use excessive optimisation switches, and launch your application under Anonymous platform. If the first few tasks validate, you're up and running. If they don't, go back and check your compiler settings (and any source code mods you needed) very, very carefully until you find a configuration which does return the same scientific data as the other platforms.
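For reference, running a private build under Anonymous platform means dropping an app_info.xml into the project directory alongside your compiled binary. Below is a minimal sketch; the application name, version number and file name are placeholders and would have to match whatever the project actually calls its science app.

```xml
<!-- app_info.xml: minimal Anonymous-platform sketch (hypothetical names).
     Place it in projects/<project_url>/ next to the compiled IA-64 binary. -->
<app_info>
    <app>
        <name>example_app</name>                       <!-- must match the project's app name -->
    </app>
    <file_info>
        <name>example_app_7.20_ia64-linux</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>example_app</app_name>
        <version_num>720</version_num>                 <!-- e.g. version 7.20 -->
        <file_ref>
            <file_name>example_app_7.20_ia64-linux</file_name>
            <main_program/>
        </file_ref>
    </app_version>
</app_info>
```

With that file in place the client reports the "anonymous" platform, and the scheduler sends work for the named application regardless of your CPU architecture.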
Joined: 23 Feb 12 · Posts: 198
Richard, I would say that there have been multiple projects that had trouble validating between Windows and Linux, and which therefore do use mechanisms of this kind for validation. I'm not sure how far that goes, but this is one of the reasons why some projects (WCG, for example) have dropped support for some hardware; PowerPC is one example. It just took too long to get another user with similar hardware to validate. However, I don't have the time to look into what percentage of projects do this.

Jazzop, yes, it is possible to be your own wingman as long as it is a different "PC", or in your case a VM. Some projects, when they first start out, will allow you to be your own wingman on the same system, but that is usually quickly corrected once it is pointed out to the admins. So, if you ran VMs you could technically validate your work all from the same physical machine, unless of course that project has done some tweaks of its own to prevent it for some reason. I see no problem with a user validating their own work as long as it is on different machines.
Joined: 6 Jul 10 · Posts: 585
> On the other hand, I suspect very few projects use or require the Homogeneous Redundancy mechanism.

Well, WCG to this day does use it [with historically about 2 exceptions out of the 28 projects they have hosted], at minimum to split at the main-OS level, so Android matches to Android, Linux to Linux, etc. For Android at least, they use Linux as the alternate to rerun a task if Android does not produce a quorum. I have always understood this to be HR-level distribution.

Coelum Non Animum Mutant, Qui Trans Mare Currunt
Joined: 25 Nov 05 · Posts: 1654
> I see no problem with a user validating their own work as long as it is different machines.

But would they be "different machines" if they ran the same client code, compiled by the same person using the same compiler options, and those options were wrong? The two results could be identical, while at the same time being the wrong answer.
Joined: 19 Dec 06 · Posts: 90
> I see no problem with a user validating their own work as long as it is different machines.

The old "Precision vs. Accuracy" problem.
Joined: 23 Feb 12 · Posts: 198
> I see no problem with a user validating their own work as long as it is different machines.

And yet allowing the code to be compiled privately also means it can be distributed to other people using that same bad code, which does happen. So yes, as far as faulty code goes, this would be absolutely no different. I agree that limiting it by user instead of by machine may be better, but when you have donors with entire data centers at their disposal, how far should we go with it?
Joined: 6 Jul 10 · Posts: 585
Aside from 'to allow or not to allow private compiles', different physical machines is the requirement for results needing verification, but whether this is still ensured with the 'allow multiple clients' option, or with multiple VMs on one device, is doubtful. Does the feeder/scheduler look beyond the client ID and any homogeneous redundancy requirements when distributing work? Self-verification is just not a good idea when there's a higher chance of one class being controlled by one or a few users.

Coelum Non Animum Mutant, Qui Trans Mare Currunt
Joined: 5 Oct 06 · Posts: 5149
> Aside from 'to allow or not to allow private compiles', different physical machines is the requirement for results needing verification, but whether this is still ensured with the 'allow multiple clients' option, or with multiple VMs on one device, is doubtful. Does the feeder/scheduler look beyond the client ID and any homogeneous redundancy requirements when distributing work? Self-verification is just not a good idea when there's a higher chance of one class being controlled by one or a few users.

Self-verification can be allowed by project administrators, if circumstances permit - I've seen it done for Beta testing, where the pool of volunteers is too small to ensure timely validation. In that case, you're perhaps looking for the validation failures, for further investigation.

<one_result_per_user_per_wu/>
<one_result_per_host_per_wu/>

from http://boinc.berkeley.edu/trac/wiki/ProjectOptions#Joblimits

I've also seen at least one project administrator - was it Eric McIntosh at LHC? - work very diligently to get exact bit-wise matching results from heterogeneous platforms. I don't think I'd be happy contributing cycles to a project that didn't at least pay lip-service to that ideal.
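For anyone wondering where those options live: on the server side they go into the project's config.xml, inside the <config> element. A hedged sketch of the placement only - the surrounding daemon and task sections, paths and database settings that a real config.xml contains are omitted here:

```xml
<!-- Fragment of a BOINC project's config.xml, showing the job-limit options
     from http://boinc.berkeley.edu/trac/wiki/ProjectOptions#Joblimits.
     Everything else a real config.xml needs (daemons, paths, DB) is omitted. -->
<boinc>
    <config>
        <one_result_per_user_per_wu/>   <!-- replicas of a workunit go to different users -->
        <one_result_per_host_per_wu/>   <!-- replicas of a workunit go to different hosts -->
    </config>
</boinc>
```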
Joined: 6 Jul 10 · Posts: 585
"<one_result_per_host_per_wu/>" If this checks the device network string e.g. comp01 and a second client on the same host also transmitting comp01, then it looks like hardware wise segregation of task assignments is ensured. What if though the <suppress_net_info>1</suppress_net_info> option is set? (maybe the internal IP/sub comes into play). Integrity is the numero-uno concern for any DC project owner, and somehow temptation seems to win it for the few who think scoring is more important than delivering good results. Coelum Non Animum Mutant, Qui Trans Mare Currunt |
Joined: 5 Oct 06 · Posts: 5149
I think 'host' will probably be determined solely by HostID number - and the second client would get a different HostID. We'd need to dig a lot deeper to get a definitive answer to that. |
Joined: 19 Dec 06 · Posts: 90
> different physical machines is the requirement for results needing verification, but whether this is still ensured with the 'allow multiple clients' option, or with multiple VMs on one device, is doubtful.

Would it make any difference if the VMs are running on a blade server? The specific machine I had in mind when starting this thread is an SGI Altix 450 with 16 IA-64 blades (2 CPUs each). Sort of blurs the line separating "different physical machines", don't you think?

To add another wrinkle, what about blade servers that can handle blades with different CPU architectures? I think you can populate an IBM BladeCenter with x86/x64, POWER/PPC, Cell, and UltraSPARC blades simultaneously. If that doesn't count as different physical machines, I give up!