Page 2 of 2 (Results 26 to 48 of 48)

Thread: Strange trend with GPU and CPU combo

  1. #26
    Banned
    Join Date
    Oct 2006
    Location
    Haslett, MI
    Posts
    2,221
    Quote Originally Posted by informal View Post
    Zucker you have to read the article first
    Don't worry, I read it.

    These results also repeat themselves in other games like H.A.W.X. and Left 4 Dead but not in Crysis Warhead or Dawn of War II. So, besides the gaming situation, we also see a similar pattern in AutoCad 2010 and other 3D rendering applications where GPU acceleration is utilized, it is just not as pronounced.
    Therefore imo negligible, and that's why I singled out those two. Read Tom's and the plethora of tests they did to isolate the problem. This is not a new issue, but an old one.

  2. #27
    Xtreme Cruncher
    Join Date
    Jun 2006
    Posts
    6,215
    And feel free to neglect the rest of the post, which addressed the core of your "arguments". Especially the graph which shows the average performance in office, games and multimedia of the i7-870 and 965BE being 6% apart.

    Anyhow, the trend is repeatable in non-gaming apps, which proves there is no "problem"; it seems the I/O system on AMD hardware just works a bit better with NV cards.

  3. #28
    I am Xtreme
    Join Date
    Jul 2007
    Location
    Austria
    Posts
    5,485
    not the compiler conspiracy stuff again...

  4. #29
    Xtreme Cruncher
    Join Date
    Jun 2006
    Posts
    6,215
    http://aceshardware.freeforums.org/post5566.html#p5566
    Quote Originally Posted by Opteron@Aces
    As mentioned before, I thought that Intel stopped the "Intel only" coding with that version, as AMD is using it against Intel in the ongoing lawsuit:
    125. Intel has designed its compiler purposely to degrade performance when a program is run on an AMD platform. To achieve this, Intel designed the compiler to compile code along several alternate code paths. Some paths are executed when the program runs on an Intel
    platform and others are executed when the program is operated on a computer with an AMD microprocessor. (The choice of code path is determined when the program is started, using a feature known as “CPUID” which identifies the computer’s microprocessor.) By design, the
    code paths were not created equally. If the program detects a “Genuine Intel” microprocessor, it executes a fully optimized code path and operates with the maximum efficiency. However, if the program detects an “Authentic AMD” microprocessor, it executes a different code path
    that will degrade the program’s performance or cause it to crash.
    http://redirectingat.com/?id=593X100..._Complaint.pdf

    Now this
    http://aceshardware.freeforums.org/post5584.html#p5584
    Quote Originally Posted by Agner Fog@Aces
    I just tried Intel C++ compiler version 10.1 with option /QxO as you suggested. It generates the following versions of code for common mathematical functions: SSE2, SSE3, SSE4.1 and non-Intel SSE2. It doesn't work on any CPU prior to SSE2. This is the only compiler option that makes it run reasonably on an AMD, but why are there two different SSE2 versions, one for Intel and one for AMD? When I hack the CPU-dispatcher and make it believe that it is an Intel, it runs 50 - 100 % faster. This means that the Intel-SSE2 version is faster than the AMD-SSE2 version when running on an AMD processor!

    There are also options that work on any processor. For example /QaxB. This option runs non-vectorized SSE2 code on Intel processors and old 8087 code on AMD processors. I measured this to be 5-10 times slower than the /QxO option on an AMD Opteron.
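    The kind of vendor-keyed dispatch both quotes describe can be sketched in a few lines of C. This is a hypothetical illustration, not Intel's actual dispatcher code; only the "GenuineIntel"/"AuthenticAMD" vendor strings are real CPUID values, and the path names are made up. The point is that the branch is taken on the vendor string rather than on the feature flags the CPU actually reports, so an SSE2-capable AMD chip still lands on the fallback.

    ```c
    #include <stdio.h>
    #include <string.h>

    /* Two hypothetical code paths standing in for the compiler's
     * vectorized and generic versions of a library function. */
    static const char *fast_path(void) { return "optimized-sse2"; }
    static const char *slow_path(void) { return "generic-fallback"; }

    typedef const char *(*path_fn)(void);

    /* Dispatcher keyed on the CPUID vendor string, as the complaint
     * describes. A feature-based check ("does this CPU report SSE2?")
     * would treat both vendors equally. */
    static path_fn select_path(const char *vendor) {
        if (strcmp(vendor, "GenuineIntel") == 0)
            return fast_path;   /* fully optimized path */
        return slow_path;       /* everyone else, even with SSE2 */
    }

    int main(void) {
        printf("%s\n", select_path("GenuineIntel")());  /* optimized-sse2 */
        printf("%s\n", select_path("AuthenticAMD")());  /* generic-fallback */
        return 0;
    }
    ```

    Hacking the dispatcher, as Agner Fog describes, amounts to making `select_path` see "GenuineIntel" regardless of the actual CPU.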
    Last edited by informal; 09-16-2009 at 06:11 AM.

  5. #30
    Xtreme Mentor
    Join Date
    May 2008
    Location
    cleveland ohio
    Posts
    2,879
    Wasn't that all proven when they used a VIA CPU for both code paths?
    HAVE NO FEAR!
    "AMD fallen angel"
    Quote Originally Posted by Gamekiller View Post
    You didn't get the memo? 1 hour 'Fugger time' is equal to 12 hours of regular time.

  6. #31
    I am Xtreme
    Join Date
    Jul 2007
    Location
    Austria
    Posts
    5,485
    Quote Originally Posted by informal View Post
    *snip*
    Wonder why you warm up old stories; that complaint has no substance anymore, because past ICC 9.x Intel enables the use of SSE2 on non-Intel architectures.

    Not to mention that the "crippled" non-Intel SSE2 code is faster than any optimized code gcc/msvc can produce for x86...

    Also quite entertaining that in your link the conclusion was that ICC is only used in a minority of apps, and most of the time gcc/msvc is used. Thus you made your own argument look kinda foolish, when the majority of the industry already is not using ICC (according to that thread).

    Or are you disagreeing with that thread and now telling me that all open source apps like Lame, x264, etc. and the majority of games are using ICC?

    And why again are we discussing compilers, when it's quite obvious that there's something wrong with NV+Intel, when all other setups work (NV+AMD, AMD+Intel, AMD+AMD)? So the blame goes either to Intel or NV, but given the fact that Intel+AMD works, the suspicion is quite strong that NV just failed in that regard...
    Last edited by Hornet331; 09-16-2009 at 08:19 AM.

  7. #32
    Xtreme Cruncher
    Join Date
    Jun 2006
    Posts
    6,215
    I just stated that icc didn't provide the best optimization for AMD cores in the past (most commercial apps on the market today that were compiled with icc were done with an older version).
    We are discussing this since another pro-Intel user complained about the i5 750 and i7 Linux review.
    Also, most developers code and optimize for the market leader, which is Intel (they do hold 80+% of the market, after all).

    And yes, again, when an AMD combo is better in something then obviously it's NV's fault or AMD's fault since it CAN'T be better, as we all know the i7 is omgbbq and it roxorz...

  8. #33
    Registered User
    Join Date
    Jun 2009
    Location
    India
    Posts
    28
    Quote Originally Posted by Hornet331 View Post
    And why again are we discussing compilers, when it's quite obvious that there's something wrong with NV+Intel, when all other setups work (NV+AMD, AMD+Intel, AMD+AMD)? So the blame goes either to Intel or NV, but given the fact that Intel+AMD works, the suspicion is quite strong that NV just failed in that regard...
    I don't think that NV can be blamed here. It's not like the GTX275 is not performing at all with the i7; rather it's quite competitive with the 4890 when coupled with the i7. The thing is that the GTX275 performs much better with the Phenom.

    You can't blame nVidia when things work much better than expected, you can only blame when things don't happen the right way.
    Last edited by I_no; 09-16-2009 at 08:37 AM. Reason: typos
    Are there more ways to love than to break a perfectly working computer?
    Don't think so.

  9. #34
    Xtreme Cruncher
    Join Date
    Apr 2008
    Location
    Ohio
    Posts
    3,119
    Quote Originally Posted by informal View Post
    And feel free to neglect the rest of the post, which addressed the core of your "arguments". Especially the graph which shows the average performance in office, games and multimedia of the i7-870 and 965BE being 6% apart.

    Anyhow, the trend is repeatable in non-gaming apps, which proves there is no "problem"; it seems the I/O system on AMD hardware just works a bit better with NV cards.
    I really like that review, and not because AMD fared so well. They did a great job of including so many bench tests and showing more of the "whole" picture. One of the best reviews I have seen.
    ~1~
    AMD Ryzen 9 3900X
    GigaByte X570 AORUS LITE
    Trident-Z 3200 CL14 16GB
    AMD Radeon VII
    ~2~
    AMD Ryzen ThreadRipper 2950x
    Asus Prime X399-A
    GSkill Flare-X 3200mhz, CAS14, 64GB
    AMD RX 5700 XT

  10. #35
    Registered User
    Join Date
    Oct 2008
    Posts
    47
    I saw those numbers in AT and if I have to draw a conclusion based solely on those graphs I would say that the GTX275 is a better performing card than the HD4890 but has problems with the i7 platform. Also, what about all the reviews done when the HD4890 and GTX275 were launched? Those reviews in the vast majority were done with a Core i7 platform.
    Last edited by Voodoo²; 09-16-2009 at 12:30 PM.
    ASUS M4A79T Deluxe
    Phenom II X2 555 BE (4 cores unlocked)
    Sapphire 6770 1GB
    G.Skill RipJaws 2 x 2GB 1600MHZ cl7
    480 watt Topower/Tagan Power supply
    Thermaltake Soprano
    24" 1920x1080 BenQ G2410HD
    MAXTOR 500GB 32MB x2
    BenQ DW1650 16x Dvd burner

  11. #36
    Xtreme Member
    Join Date
    Aug 2004
    Posts
    178
    This comes from an old article, but it would seem to add some backup to the idea that it's down to a system bottleneck:



    http://www.tomshardware.com/reviews/...is,1572-8.html

    If the GTX275 and 4890 are both scaling like their older siblings, then in PCIe-limited situations the ATI card should handle it much better, so any platform advantage in bandwidth/latency will not show up on the ATI card, but would show prominently on the nvidia card.

    Easy enough to test, also: see if the cards scale nicely when the number of PCIe lanes is reduced. If the pattern persists, or becomes more pronounced, that's the nail in the coffin. If the change has little effect, it's not PCIe limitations.
    LCB9E 0641 APMW @3100 1.65V Decapped ~50c orthos load, TDX+House Rad (passive!)+Eheim 1250, Abit AX8, 2*1gig Crucial PC4000 @ 221 3-3-3-8-1T

    X4 940BE @ 3640 @1.475, Gigabyte GA-MA790X-UD3P, 4x G.Skill F2-8500CL5D-2GBPK @ 1110mhz 5-5-5-15 @1.8v, 3870XT

  12. #37
    Xtreme Cruncher
    Join Date
    Jun 2006
    Posts
    6,215
    Our friend justapost did his own i5 (turbo on) vs Phenom II tests and this is his post at the phoronix forum:
    Quote Originally Posted by justapost@phoronix forum
    I received an i5 750 today, together with a GBT P55-UD4 mobo. I compared it with my 955BE + GBT GA-MA785GMT-UD2H. Only mobo and CPU differ between setups. Both used 4GB OCZ Plats at 1333MHz CL7 and an nvidia 8800GT 1GB gfx. As OS I chose sidux 2009-2, dist-updated. I left all power saving features on and also enabled turbo on the 750. cpufreq-acpi seems to ignore the two- and one-core increases. The chip ran at 2.8GHz most of the time.
    http://global.phoronix-test-suite.co...5168-12682-147
    I plan to run the full universe suite and more clock vs. clock comparisons in the next days.
    This is a comment on his results:
    Quote Originally Posted by Apopas
    Similar results to Michael's tests, which show the Phenom to beat the i5 in general, whereas the Windows benchmarks show the opposite...
    These are his results; note that his OS is different from the one Phoronix used.

  13. #38
    Xtreme Member
    Join Date
    Jun 2009
    Location
    Budapest, Hungary
    Posts
    262
    very good article, nice to see a fellow sidux user benching
    1090T | CH4F | HIS HD5850 | TT EvoBlue 750W | TT Spedo Advance | CM Aquagate Max | Samsung S27A350

  14. #39
    Banned
    Join Date
    Oct 2006
    Location
    Haslett, MI
    Posts
    2,221
    Quote Originally Posted by informal View Post
    Our friend justapost did his own i5(turbo on) Vs Phenom II tests and this is his post at phoronix forum:

    this is a comment on his results:

    These are his results,note that his OS is different from the one Phoronix used.
    Sidux what.....? Have you made the switch from Windows to Sidux $@%*&%^ yet? I do get your point, lynnfield runs poorly on some unstable alpha linux OS platform.

    OS: Debian unstable
    Kernel: 2.6.31-0.slh.3-sidux-amd64 (x86_64)
    Desktop: KDE 4.3.1
    Display Server: X.Org Server 1.6.3.901 (1.6.4 RC 1)
    OpenGL: 3.0.0 NVIDIA 185.18.36
    Compiler: GCC 4.3.4
    File-System: ext3
    Screen Resolution: 1600x1200

  15. #40
    Xtreme Cruncher
    Join Date
    Jun 2006
    Posts
    6,215
    Quote Originally Posted by Zucker2k View Post
    Sidux what.....? Have you made the switch from Windows to Sidux $@%*&%^ yet? I do get your point, lynnfield runs poorly on some unstable alpha linux OS platform.
    Yeah, more excuses. BTW your selective replying skills are now becoming better. Still haven't replied to the post I made the other day. To sum it up again: across a whole range of Windows apps, the difference on average between a stock i7-870 and a stock 965BE is 6%. Think about it.

  16. #41
    Xtreme Member
    Join Date
    Jun 2009
    Location
    Budapest, Hungary
    Posts
    262
    Quote Originally Posted by Zucker2k View Post
    Sidux what.....? Have you made the switch from Windows to Sidux $@%*&%^ yet? I do get your point, lynnfield runs poorly on some unstable alpha linux OS platform.
    sweet ignorance
    1090T | CH4F | HIS HD5850 | TT EvoBlue 750W | TT Spedo Advance | CM Aquagate Max | Samsung S27A350

  17. #42
    Banned
    Join Date
    Oct 2006
    Location
    Haslett, MI
    Posts
    2,221
    Quote Originally Posted by informal View Post
    Yeah, more excuses. BTW your selective replying skills are now becoming better. Still haven't replied to the post I made the other day. To sum it up again: across a whole range of Windows apps, the difference on average between a stock i7-870 and a stock 965BE is 6%. Think about it.
    The "pot" calling the "kettle" black? Well, according to the link in that post I "selectively" replied to, a Q9550 = i5 750 = PH II 965! I'm sure you agree that the 1% that separates the first two from the third CPU is within the margin of error? I mean, you're known to shave off as much as 3-4% in other cases. The fact is that you would go to lengths to muddy the numbers, even if it means shooting yourself in the foot. There is also one very simple fact: if you sum up all the tests in all reviews, the i5 750 is the all-around better processor. This is not saying AMD's flagship processor is bad; it simply means it is bested by Intel's latest low-end mainstream CPU. With the i5 750 compared to the 965BE you get:

    Same or better performance
    Lower price
    Lower power consumption
    A far more robust and feature-rich selection of motherboards

    What's not to like?

  18. #43
    Xtreme Cruncher
    Join Date
    Jun 2006
    Posts
    6,215
    Yeah, and following your logic the i5 750 is the be-all end-all of all the desktop models, since it is only 7% off from the 870 with SMT and turbo, and a few percent more off from the rest of the high end. The point is that the 965BE is not that far behind the whole Nehalem line; it loses by a hair in many tests and by a lot in a select few, which twists the overall performance rating towards the i5 and i7. Turbo Boost, although a very good feature that AMD will also use, makes the 750 actually not a 2.66GHz CPU, as it never actually works at that clock. This is not bad since you get a higher clock out of the box, but it paints a wrong picture when someone says a 2.66GHz Lynnfield is as fast as a 955BE or 965BE.

    To sum it up, one more speed bin from AMD and they will cover the 3GHz Bloomfield equivalent; they really don't have to be more competitive than this. They will have an IPC-boosted (a la Deneb) 32nm shrink of 10h, so that will tide them over quite nicely until Bulldozer launches.

  19. #44
    Banned
    Join Date
    Oct 2006
    Location
    Haslett, MI
    Posts
    2,221
    Quote Originally Posted by informal View Post
    Yeah, and following your logic the i5 750 is the be-all end-all of all the desktop models, since it is only 7% off from the 870 with SMT and turbo, and a few percent more off from the rest of the high end. The point is that the 965BE is not that far behind the whole Nehalem line; it loses by a hair in many tests and by a lot in a select few, which twists the overall performance rating towards the i5 and i7. Turbo Boost, although a very good feature that AMD will also use, makes the 750 actually not a 2.66GHz CPU, as it never actually works at that clock. This is not bad since you get a higher clock out of the box, but it paints a wrong picture when someone says a 2.66GHz Lynnfield is as fast as a 955BE or 965BE.

    To sum it up, one more speed bin from AMD and they will cover the 3GHz Bloomfield equivalent; they really don't have to be more competitive than this. They will have an IPC-boosted (a la Deneb) 32nm shrink of 10h, so that will tide them over quite nicely until Bulldozer launches.
    I really don't understand what "logic" you're talking about. The TRUTH is, depending on what particular app you're running, the difference (% in chip performance) actually varies greatly, e.g. most apps capable of taking advantage of 8 threads. And please stop whining about the turbo; it's what you get with Intel's latest. You never hear anyone complaining about the fact that the 965BE is running at 3.4GHz at stock ALL THE TIME. Even assuming the best case scenario, the i5 750 is running at only 2.8GHz with four threads, while the 965BE has a 600MHz advantage, but you won't hear me complaining because that's what you get; that's why one would buy the 965BE over a 940BE, for example. Is the i5 750 bad because it packs technologies that allow it to perform better than advertised in certain scenarios?

    Don't talk about the future as if AMD's competitors will be sitting on their hands doing nothing. Besides, the NOW is what matters, not to mention Intel's 32nm chip production has taken off and 6-core chips are clocking past 6.3GHz already (though you didn't hear that from me).

  20. #45
    Xtreme Cruncher
    Join Date
    Jun 2006
    Posts
    6,215
    I'm just saying that with turbo it's not a 2.66GHz CPU anymore. BTW, applications that can pin down all the cores to the max all the time are not that many (in the desktop world, at least). This overall leads to Turbo kicking in harder. Like I said, it's useful for benchmarketing, but with the stock HSF (the thing that is almost never used in Lynnfield reviews; guess why?) the turbo would not kick in due to thermal limitations, and the results would be lower for the regular desktop end user, the one that buys stuff and expects it to work out of the box as reviewed on the net.
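    The point being argued, that a "2.66GHz" i5 750 with Turbo on effectively never runs at 2.66GHz, can be sketched as a small lookup. A minimal sketch: the all-core 2.80GHz figure matches what justapost measured in this thread, while the 3.20GHz one/two-core bins are an assumption from contemporary reviews, not something stated here.

    ```c
    #include <stdio.h>

    /* Hypothetical effective-clock model for an i5 750.
     * 2.66 GHz is the advertised base; turbo[] holds assumed
     * per-active-core-count turbo clocks (index 0 unused). */
    static double effective_ghz(int active_cores, int turbo_on) {
        static const double turbo[] = {0.0, 3.20, 3.20, 2.80, 2.80};
        if (!turbo_on || active_cores < 1 || active_cores > 4)
            return 2.66;            /* advertised base clock */
        return turbo[active_cores]; /* what the chip actually runs at */
    }

    int main(void) {
        printf("turbo off, 4 cores: %.2f GHz\n", effective_ghz(4, 0));
        for (int c = 1; c <= 4; c++)
            printf("turbo on, %d core(s): %.2f GHz\n", c, effective_ghz(c, 1));
        return 0;
    }
    ```

    With turbo on there is no input for which this returns 2.66, which is exactly the "benchmarketing" complaint above (and, from the other side, the "more CPU for your money" argument).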

    As for the Gulftown, I have known the "news" since Shamino first posted his results; you do not have to tell me anything. The CPU is obviously a leaky one; taking ~2V while being produced on high-k/MG 32nm is not bad, but not that awesome either. The cold bug is still there, at -150 degrees. They are improving on this though, so kudos for that.

  21. #46
    Xtreme Addict
    Join Date
    Sep 2007
    Location
    Munich, DE
    Posts
    1,401
    Quote Originally Posted by Zucker2k View Post
    Sidux what.....? Have you made the switch from Windows to Sidux $@%*&%^ yet? I do get your point, lynnfield runs poorly on some unstable alpha linux OS platform.
    Ehh, I picked sidux because it uses the latest kernel and gcc, and I expect this system is better optimized for the latest hardware than older debian or ubuntu releases. Turbo works fine it seems, but I need to do more testing.
    Michael from phoronix had problems with inconsistent results using an ubuntu alpha release; my results are repeatably consistent. The packages in debian unstable are not alpha versions from git repositories; normally they are just the latest available stable versions.

  22. #47
    Banned
    Join Date
    Oct 2006
    Location
    Haslett, MI
    Posts
    2,221
    Quote Originally Posted by informal View Post
    I'm just saying that with turbo it's not a 2.66GHz CPU anymore. BTW, applications that can pin down all the cores to the max all the time are not that many (in the desktop world, at least). This overall leads to Turbo kicking in harder. Like I said, it's useful for benchmarketing, but with the stock HSF (the thing that is almost never used in Lynnfield reviews; guess why?) the turbo would not kick in due to thermal limitations, and the results would be lower for the regular desktop end user, the one that buys stuff and expects it to work out of the box as reviewed on the net.

    As for the Gulftown, I have known the "news" since Shamino first posted his results; you do not have to tell me anything. The CPU is obviously a leaky one; taking ~2V while being produced on high-k/MG 32nm is not bad, but not that awesome either. The cold bug is still there, at -150 degrees. They are improving on this though, so kudos for that.
    Isn't technology amazing? You actually get more cpu for your money; go figure! My nickname for the i5/i7 line is now the chameleon cpu. It changes "color" based on a range of variables.

    I respect your opinion, but you don't have to be the prophet of doom you know. Could you show me a similarly clocked chip in this universe? That's right, there's none. Contrary to what you might think, high clocks/high ipc on the 32nm scale is not a given.

    Quote Originally Posted by justapost View Post
    Ehh, I picked sidux because it uses the latest kernel and gcc, and I expect this system is better optimized for the latest hardware than older debian or ubuntu releases. Turbo works fine it seems, but I need to do more testing.
    Michael from phoronix had problems with inconsistent results using an ubuntu alpha release; my results are repeatably consistent. The packages in debian unstable are not alpha versions from git repositories; normally they are just the latest available stable versions.
    Thanks for the info, but why review a newly released platform on an OS that is clearly not final? Is windows not to be trusted? Or other mainstream linux platforms that are fully mature? I would like to see your configs tested on mainstream OSes and apps and see what the differences in performance are.

    In all this, I say, if one feels they can get better performance on some fringe OS/apps then of course they should make that decision. But it paints a muddy picture when a fresh platform is benched on an alpha OS, using some really obscure apps, to arrive at conclusions which differ from 99.9% of the results out there, especially when 99% of consumers are not going to use that OS or the apps benched.

    I'm not trying to diss your efforts, but there has to be a reason, so what's your reason?

  23. #48
    Xtreme Addict
    Join Date
    Sep 2007
    Location
    Munich, DE
    Posts
    1,401
    Quote Originally Posted by Zucker2k View Post
    Thanks for the info, but why review a newly released platform on an OS that is clearly not final? Is windows not to be trusted? Or other mainstream linux platforms that are fully mature? I would like to see your configs tested on mainstream OSes and apps and see what the differences in performance are.
    Well, I use linux more often than windows, so performance under that OS is what gets my interest first.
    For a new platform you must use an up-to-date kernel under linux. For example, the NIC on the P55 platform requires at least kernel version 2.6.30. I picked the more recent 2.6.31 (stable) kernel because it contains the latest cpufreq module used for ACPI-based EIST/CnQ CPU power (and frequency) management. There were a few issues in the past, like too-high delays when the frequency changes. Of course I could build my own recent kernel under a more stable distro (like ubuntu stable, based on debian testing and stable). Sidux is based on debian unstable, and their last release concentrated on infrastructural changes which are required for best performance with the 2.6.30 kernel. So this distro looked like the best one for this new platform. Again, the packages used on that distribution are themselves the most current stable versions. Debian has thousands of packages, and a package is considered unstable until it has been tested against all its possible dependencies.
    To run the phoronix-test-suite you only need a minimal installation with a basic window manager. The benchmarks themselves depend on a few dozen packages, and most of them only require the basic C library and a C compiler. I have used linux since '94, so I know how to fix a dependency problem or how to build an older version of a package if something goes wrong.
    In general the latest stable versions of the required packages should be the best choice for a new platform.
    I'm sure if I had used an older stable distribution someone would moan that the distribution is not yet optimized for the new platform.
    i7 has been available for a while now, so chances are good that many of the packages I use have been optimized already.
    Once I finish a few overclocking tests I plan to compare gcc, open64 and icc builds of a few benchmarks to see what difference those make and how the relations may change; you can't do that on a closed-source OS with closed-source benchmarks.

    Here are a few more results: stuff like audio/video encoding, disc performance, build, web server and database performance tests. Those are less obscure (at least to linux users).

    http://global.phoronix-test-suite.co...358-5083-22089
    Last edited by justapost; 09-18-2009 at 10:36 AM.

