I do. Do you understand that this means it can compile code using at least SSE2 on non-Intel processors, and that Intel either must have validated them or not considered it necessary to validate them? :welcome:
They have validated only the code that is actually used in the compiler - not all of the code that can be used in the compiler.
I'm no coder, that's for sure, but this sounds to me like SSE2 support is currently assumed...
Quote:
3.1 Compatibility
In version 11, the IA-32 architecture default for code generation has changed to assume that
Intel® Streaming SIMD Extensions 2 (Intel® SSE2) instructions are supported by the processor
on which the application is run. See below for more information.
http://software.intel.com/file/24088
Quote:
3.4.3 Instruction Set Default Changed to Require Intel® Streaming SIMD Extensions 2
(Intel® SSE2)
When compiling for the IA-32 architecture, /arch:SSE2 (formerly /QxW) is the default as of
version 11.0. Programs built with /arch:SSE2 in effect require that they be run on a processor
that supports the Intel® Streaming SIMD Extensions 2 (Intel® SSE2), such as the Intel®
Pentium® 4 processor and some non-Intel processors. No run-time check is made to ensure
compatibility – if the program is run on a processor that does not support the instructions, an
invalid instruction fault may occur. Note that this may change floating point results since the
Intel® SSE instructions will be used instead of the x87 instructions and therefore computations
will be done in the declared precision rather than sometimes a higher precision.
All Intel® 64 architecture processors support Intel® SSE2.
To specify the older default of generic IA-32, specify /arch:IA32
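The floating point note at the end of that quote is easy to demonstrate. Here is a minimal sketch (my own toy example, not from the Intel docs) where x87 and SSE2 code generation legitimately print different results:
#include <stdio.h>

int main(void) {
    /* volatile keeps the compiler from folding the arithmetic away */
    volatile double a = 1e16, b = 1.0;
    double s = (a + b) - a;
    /* Generic IA-32 x87 code typically keeps a+b in an 80-bit register,
       so s prints as 1; with SSE2 code generation the intermediate is
       rounded to 64-bit double and s prints as 0. */
    printf("s = %g\n", s);
    return 0;
}
Same source, two answers - neither result is wrong, which is exactly why the docs flag the change.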
Selecting a codepath based on processor capabilities is the technically superior method. There really isn't any valid excuse for not doing so, and Intel should be well aware they'd be criticized for selecting based on the vendor ID string.
Nobody expects Intel to fully verify the non-optimized codepath on competing processors. Why would you expect that they'd have to verify optimized codepaths on competing processors either? The default behavior should be to query processor caps and utilize the best supported codepath. If there is a bug in a competing processor that causes that path to run slower or crash, then it's the manufacturer's fault, not Intel's.
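To make the "query processor caps" point concrete, here is a minimal sketch of capability-based dispatch in C, using GCC/Clang's <cpuid.h>; the kernel names are hypothetical stand-ins:
#include <cpuid.h>
#include <stdio.h>

/* Hypothetical kernels standing in for the generic and SSE2-optimized paths. */
static void kernel_generic(void) { puts("generic x87 path"); }
static void kernel_sse2(void)    { puts("SSE2 path"); }

typedef void (*kernel_fn)(void);

/* Capability-based dispatch: ask CPUID leaf 1 whether SSE2 is supported
   (EDX bit 26) and pick the best path - no vendor string involved. */
static kernel_fn select_kernel(void) {
    unsigned int eax, ebx, ecx, edx;
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (edx & (1u << 26)))
        return kernel_sse2;
    return kernel_generic;
}

int main(void) {
    select_kernel()();
    return 0;
}
The whole complaint in this thread is that the compiler's dispatcher reportedly checks the vendor string before it ever looks at these feature bits.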
This really isn't that big of a deal though. It only affects lazy programmers who don't set compiler switches. And if the default behavior is nerfing the competition, then the FTC will likely make them change it anyway.
Intel and Microsoft have always been closely related. Actually, the existence of Microsoft is very much due to Intel. ;))
In other words, only trust in GCC.
Joking aside: the compiler-generated µarch-specific optimizations can't really make huge differences in games; they apply to very specific fragments of code which rarely exist in inner loops.
The problem you have is: who defines nerfing?
Again, just using the default code path (which would run on Intel CPUs WITHOUT SSE2 or similar special instruction sets) on non-Intel CPUs is NOT nerfing.
If Intel used a completely different code path on the lowest-capability Intel CPU than on any AMD CPU - THAT would be a reason for legal action.
IMO, the best optimization comes from non-SSE-specific optimization with ICC for most code - for example, vectorization (which is not done by MSVC), IPO that is above MSVC's level, etc. SSE2 can make a difference in only some specific scenarios (it can make a huge difference there, but how much real-world code can really be optimized that way - I believe very little, aside from encoding).
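For what it's worth, the kind of code that does vectorize well is the textbook element-wise loop - a generic illustration, not from any particular program:
/* Independent element-wise work over contiguous arrays: the classic
   auto-vectorization candidate. A vectorizing compiler can emit packed
   SSE2 instructions here; branchy game logic usually offers no such loops. */
void saxpy(float *y, const float *x, float a, int n) {
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}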
Exactly!!! What should have happened is that Intel's optimized codepath caused AMD's SSE2, etc. to either not work, error out, or simply not work correctly. That should have been the news of the day! Not Intel using the vendor ID string to decide whether to use SSE2, etc. instructions.
Found it:
This is version 8 though; no telling if current versions do this as well or if it's been fixed since then.
http://www.swallowtail.org/naughty-intel.shtml
Quote:
In many if not most benchmarks, the Intel compiler produces the fastest code of any F77/F90 compiler. However, care needs to be taken on non-Intel chips. Code compiled with -axW, for example, will not use ANY SSE or SSE2 instructions on non-Intel chips. This will almost certainly greatly impact performance. If the use of SSE2 is forced with -xW (which is what Intel sort-of-recommend for Opterons), then some SSE2 code will be used (as the code in the main program will use SSE2 instructions), but calls to the vectorised single-precision math intrinsics will use SSE, not SSE2.
So it seems that if the next Radeon drivers were to give only half the performance on Intel CPUs compared to AMD CPUs, a few people here would have no problem with that?
Since taking your Honda to a Toyota garage and having your wheels fall off is OK too - it's your fault really for expecting the Toyota people to put all the screws back. They only do it for Toyotas. They're not supposed to do it for the competition, are they? I would guess that if someone's relatives happened to be in that car, they would change their minds a little quicker.
I think if the difference is only up to 10%, they are not deliberately "de-optimising" for AMD CPUs.
Quote:
His solution? Patch the compiler to sidestep the CPU check, and run a few quick benchmarks.
He finds that "patching out the 'GenuineIntel' check makes no difference to the P4, but increases the performance of the Opteron by up to 10%."
If you had to choose between 2 types of optimisation - one that is fastest in 9 out of 10 cases but 50% slower on the 10th, or one that is 90% as fast but consistently so in all cases - which would you choose? Intel spends no time supporting AMD with its compiler, as it isn't in its best interests, but you can't say it specifically de-optimises for such small differences.
What Intel have done with their compiler is the same as nVidia have done with Batman AA (pun intended) and PhysX.
It amounts to:
if (CPU == GenuineIntel)
    check CPUID feature bits and apply the best supported extensions
else
    ignore extensions entirely
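For reference, reading that vendor string is a single CPUID call - leaf 0 returns the 12 bytes in EBX, EDX, ECX. A rough sketch of the mechanism (using GCC/Clang's <cpuid.h>; an illustration, not Intel's actual dispatcher code):
#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;
    char vendor[13];

    /* CPUID leaf 0: EBX, EDX, ECX hold the vendor string, in that order */
    __get_cpuid(0, &eax, &ebx, &ecx, &edx);
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);
    vendor[12] = '\0';

    /* "GenuineIntel" on Intel, "AuthenticAMD" on AMD, etc. */
    printf("%s\n", vendor);
    return 0;
}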
nVidia have gone further by disabling PhysX when ATI graphics are in use (e.g. relegating a GTX 8800 to PhysX because nVidia don't have a 5800-series competitor won't work out of the box). Yet I don't see half as many people standing up for nVidia on this as have stood up for Intel in this thread. Perhaps it is because there is a workaround for PhysX/AA, but once code is compiled it can't be easily hacked/patched to re-enable AMD CPU extension support; or maybe the stealth, longevity and success of Intel's code-gimping is worthy of merit.
In my eyes, and more importantly the eyes of the FTC, Intel has been caught out on this one.
The defence of Intel by this community leaves a nagging suspicion. So would all the Intel employees on XS please stand up?
Seems like programmers on Ubuntu's forum ran into this 18 months ago:
Do Intel compilers still cripple AMD processors? - Ubuntu Forums
They had no problems getting the Intel compiler version 10 to enable SSE2 optimizations for AMD CPUs by default.
After a patch, they were able to run SSE3 optimizations for AMD as well.
Seems like there is a lot of smoke to this claim, but not much fire.
You are so totally missing the point. If you want the best assurance a job is done right, you take it to the correct dealer with the correct resources, such as a localized parts dept, vehicle-specific service/tech manuals on hand, and experienced/trained personnel to properly diagnose & support brand X. Dealers generally specialize in their supported/sold brands; it doesn't mean they can't work on other brands, but the techs are generally certified/trained on, and specialize in, the brands the dealer supports.
Sure you can use Intel's compiler for your code but don't expect them to support a competing product to the same level as their own...
As has been pointed out ad nauseam, supporting a competing product and intentionally crippling it are 2 entirely different things. Well, it just adds credibility to AMD if Intel felt they couldn't compete on a level playing field and needed benchmark favoritism on their side.
Well, I wouldn't expect a lot of perspective coming from you on the matter regardless, since you will choose one side irrespective of the topic.
Is it Intel's job to hold the hand of the developer to make sure they know how to use the compiler, or should a credible developer be smart enough to know how to use the compiler to begin with?
I see. So my perspective only counts if it falls on the side of the monopoly whose compiler even the FTC finds anti-competitive and mandates be changed. I'm looking at the proof in black and white and not choosing to ignore it. That pretty much explains all I need to know about your perspective also. ;) Unfortunately for the crowd with your opinion on the matter, people have the ability to read, and I'll wager that the vast majority see things the right way: that Intel's underhanded BS has caught up with them.
No, simply that you have one perspective, regardless of whether it is right, wrong or otherwise; there's no reasoning outside of your perspective. This conversation is a waste of time, since you will never see it from any perspective other than AMD's, and Intel will always be wrong.
Again, the developers have free choice on the compiler they use and the responsibility of knowing how to use it.
Do you always install all your software with standard factory defaults, or do you ever customize the settings...
[edit]
bah, forget it
Apparently you don't completely understand what is being discussed.
You are not correct. It is not Intel's job to take on the responsibility of the non-Intel CPU manufacturer.
Intel has "Intel-specific" portions of their instruction sets in addition to the non-Intel-specific portions. Nobody is suggesting that Intel allow non-Intel chips to use the Intel-specific code. What is being suggested is that Intel allow non-Intel chips to use the most efficient/fastest non-Intel versions available. Currently they don't do that.
Actually it IS their job to make sure their code works on non-Intel CPUs. When their compiler claims to support non-Intel CPUs, they have accepted that task. When they do that job they need to use the best instruction set that is supported by each CPU, regardless of brand. Some people are purposefully forgetting that Intel has claimed this support and accepted the task. Or they claim that Intel doesn't need to do the best job possible. (I.e., they are advocating that it is acceptable for Intel to be incompetent.)
None of the things you mention allows them to purposefully cripple the competition. If they don't want to support non-Intel, they need to specifically drop all non-Intel support. (Which they won't do, or nobody would use their compiler and we wouldn't be having this conversation.)
ANSWER: They claim to support non-Intel CPUs. Since they have accepted this responsibility, they need to do the most professional job possible. (Again: if they don't, then they are being incompetent.)
No, I don't make compilers.
And since I only started using compilers around the time the original Star Wars movie was released, I probably don't know much about them. (Even if I did fix a few bugs in one a few decades ago.) <sorry for the sarcasm>
EDIT: Actually I am amazed this can even be debated in any meaningful way, let alone for 5 forum pages. It's very simple: a bug was found in Intel's compiler. It needs to be fixed. Intel claimed they would fix the bug a long time ago, but never bothered actually fixing it. (A side point is that it is obviously beneficial to them to leave it, for marketing reasons.) That is really the end of the discussion. (And if the rumors are true that the AMD/Intel settlement specifically includes a clause that they fix this bug, they apparently will do it sooner or later. But that doesn't change the fact that they should have done it a long time ago.)
If you can prove that that happens because they didn't bother optimizing, then yes, I would have no problem with it - I would simply buy nVidia, which I would buy anyway (since I have a lot of issues with Radeon drivers).
But you did not prove that Intel has nerfed the performance - just that it did not bother providing the best.
We don't agree on what nerfing/crippling means - so we won't agree on whether Intel is guilty or not.
AND? ATI is to blame for that - they didn't try to help the Batman AA development.
You will disagree with me on this probably (or you would not have mentioned it as an example), but that just shows the two sides here don't have the same mindset and don't think alike.
I don't like the fact that Batman AA doesn't support everything an ATI card can do, nor do I like that ICC-compiled code does not provide the best possible performance on non-Intel chips - but that's where it ends - dislike.
I don't play Batman, and when I used ICC I made sure I compiled the code the way that gives the best performance for me - I used my brain! Unlike most developers, it seems. I tested the code; I didn't just trust it would run correctly.
You do know there is a "workaround" for ICC as well? A forced instruction set compile?
Quote:
Yet I don't see half as many people standing up for nVidia on this as have stood up for Intel in this thread. Perhaps it is because there is a workaround for PhysX/AA
Errr, actually it's ridiculously easy to do the patch for this. Waaay easier than the nVidia PhysX hax.
Quote:
but once code is compiled it can't be easily hacked/patched to re-enable AMD CPU extension support
Now, PhysX not working with an ATI card - that's quite simple - did nVidia claim anywhere that it works in a situation where the primary card is not nVidia? If they did, they'd be sued already. They didn't? Sleazy practice, but hey... did they actually lie to you? You'll just know what to expect from them afterwards.
And they are at a loss here really - you bought a low-end nVidia card and a high-end ATI card. You won't be crazy enough to get a high-end nVidia card now; instead you will sell the low-end nVidia card and do without the PhysX - or hack it.
I'm still sitting.
Quote:
The defence of Intel by this community leaves a nagging suspicion. So would all the Intel employees on XS please stand up?