Page 8 of 30
Results 176 to 200 of 733

Thread: AMD FX-8150 Bulldozer finally tested

  1. #176
    Xtreme Member
    Join Date
    Apr 2010
    Posts
    225
    Quote Originally Posted by highoctane View Post
It is indicative of the cpu's ability to generate game and frame data for the gpu. Whether you use a high or low resolution, the cpu still generates the same game and frame data, but at high resolution the cpu is held back by the gpu's rendering speed, which means what you see is more of a gpu limitation than a cpu limitation.

    In the end the cpu that can pump the highest fps period, raw cpu speed unhindered, has more processing power but that doesn't mean it translates into raw performance for other workloads.
So how do you explain those occasions where the *much* faster cpu at low res has the tables turned and loses by a couple of frames at higher resolution? Nehalem was notorious for this for a long time, losing at high resolution to the 940 BE. If it's always faster, why was it losing so many at the highest res? Maybe a different kind of bottleneck is starting to show, one that moves the fps in favour of the AMD chip at actual gaming (ie gpu bound) resolutions?

    If SB is 50% faster at low res all the time, stands to reason it should be at least 1% faster at high res all the time, right? We'll see.
    Last edited by jimbo75; 10-09-2011 at 11:03 AM.
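The cpu-vs-gpu limit argument in the posts above can be made concrete with a toy frame-time model: each frame takes whichever is longer, the cpu's time to prepare it or the gpu's time to render it. All numbers below are hypothetical and only illustrate the shape of the effect, not any real chip.

```python
def fps(cpu_ms, gpu_ms_per_mpix, megapixels):
    """Toy model: frame time is the max of CPU prep time and GPU render time."""
    gpu_ms = gpu_ms_per_mpix * megapixels
    return 1000.0 / max(cpu_ms, gpu_ms)

# Hypothetical numbers: a "fast" CPU needs 4 ms/frame, a "slow" one 6 ms/frame,
# and the GPU needs 5 ms per megapixel rendered.
low = 0.3    # roughly 640x480 in megapixels
high = 2.1   # roughly 1920x1080 in megapixels

print(fps(4, 5, low), fps(6, 5, low))    # CPU-bound: the fast CPU wins clearly
print(fps(4, 5, high), fps(6, 5, high))  # GPU-bound: both CPUs score the same
```

In this sketch the two CPUs separate at low resolution and collapse to identical fps once the GPU term dominates, which is the pattern both sides of the argument are describing.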

  2. #177
    Xtreme Member
    Join Date
    Jan 2004
    Posts
    393
    Quote Originally Posted by jimbo75 View Post
    So why not benchmark it to see if that's true or not?

If SB is 50% faster than BD then why is it not 50% faster in anything else (except superpi :p)? Why only very low resolution gaming? It is obvious to me that low res gaming is not indicative of actual cpu performance. It's an aberration.

Since these games probably can't take full advantage of 8 threads, once you consider the advantages of Sandy Bridge it makes some sense to see a big difference.

    1280x1024 is not that low compared to 640x480, even more so with details on high.
    It's indicative of performance in a less GPU-constrained situation.
    I know it's perhaps irrelevant if in a real game (with a higher res or a slower VGA) the GPU limits far more than the CPU, but comparing CPUs when the GPU is the biggest limiting factor isn't ideal either, which is what AMD did with their slides.
    So think about it this way: they are using a single GTX 580. Now say you increase the res to 1080p and add another GTX 580 (SLI, or say a 2012 high-end VGA); I think the situation will look mostly the same. But use a GTS 450 instead of a GTX 580 and keep the settings, and probably both CPUs will score a lot less than 100fps and achieve about the same performance in these games (basically the same effect as increasing the res without adding a faster VGA).

  3. #178
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
Raising game settings can have an impact on cpu performance.
    Keep things like AA/AF/tess off, leave the rest on high and use a high resolution with really strong GPUs; it can show cpu bottlenecks since the gpu should be relatively relaxed.

    Btw I find it absolutely fine to run games at the settings people actually play at, because it gives a quick yes/no on whether the cpu will be the limit or not.
    And running things on the lowest settings may give very artificial results because of how far it is from what people do. If people want to worry about FUTURE games being cpu limited, then just buy a chip with more cores than are used today.
    2500k @ 4900mhz - Asus Maxiums IV Gene Z - Swiftech Apogee LP
    GTX 680 @ +170 (1267mhz) / +300 (3305mhz) - EK 680 FC EN/Acteal
    Swiftech MCR320 Drive @ 1300rpms - 3x GT 1850s @ 1150rpms
    XS Build Log for: My Latest Custom Case

  4. #179
    I am Xtreme
    Join Date
    Jul 2007
    Location
    Austria
    Posts
    5,485
    Quote Originally Posted by jimbo75 View Post
So how do you explain those occasions where the *much* faster cpu at low res has the tables turned and loses by a couple of frames at higher resolution? Nehalem was notorious for this for a long time, losing at high resolution to the 940 BE. If it's always faster, why was it losing so many at the highest res? Maybe a different kind of bottleneck is starting to show, one that moves the fps in favour of the AMD chip at actual gaming (ie gpu bound) resolutions?
Max/min figures mean bollocks on their own..
    http://www.computerbase.de/artikel/g...frameverlaeufe

Check the last graph of "The Witcher 2". Though intel has the highest drop there, it's for one frame, and it's consistently higher than the X6... does this make the X6 a better overall cpu?

Min/Max fps are way too influenced by other things. Is there a high priority call from the system that puts all other threads to the back for a few milliseconds? etc. etc.

    Quote Originally Posted by gosh View Post
Testing prefetching on the cpu.

    Most games have one heavy render thread; only BF3 and some others use multithreaded rendering (a dx11 feature). Most games use half or less than half of total CPU power. For most games the data is rather static: maps etc. are loaded from disk and not modified during gameplay.
    The main work is calculating dots sent to the gpu. Long trains of data are calculated and sent to the gpu in chunks, with commands informing the gpu how to work with those dots.

    The amount of work the gpu needs to do for each frame is much less at low resolutions. It could be faster than the cpu, and then it starts to wait for the cpu to process all the dots.

    Prefetching increases performance a lot when working with long trains of data. The cpu guesses where the next data will be and fetches it while the current data is being processed.

    i5/i7 prefetch data to the L1 and L2 caches; phenom only uses the L1, with some small prefetching functionality.

    This is the reason why i5/i7 can produce much more fps at low resolutions. It's about prefetching and a fast cache.

    If a game uses more threads and the maps are dynamic and complex, the data isn't at all that predictable and prefetching isn't as important any more.


    Pls not this again...
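As an aside, the "long trains of data" effect in the quoted post is easy to demonstrate: hardware prefetchers predict sequential access patterns but not random ones. A minimal Python sketch follows; interpreter overhead shrinks the gap compared to native code, and the exact ratio depends on the machine, but the random walk is typically measurably slower.

```python
import random
import time

N = 1 << 21
data = list(range(N))
seq_idx = list(range(N))   # a "long train": sequential, prefetch-friendly
rnd_idx = list(range(N))
random.shuffle(rnd_idx)    # the same indices, but in unpredictable order

def walk(indices):
    """Sum data[] in the given order; returns (elapsed seconds, checksum)."""
    start = time.perf_counter()
    total = 0
    for i in indices:
        total += data[i]
    return time.perf_counter() - start, total

t_seq, s1 = walk(seq_idx)
t_rnd, s2 = walk(rnd_idx)
print(f"sequential: {t_seq:.3f}s  random: {t_rnd:.3f}s")
```

Both walks touch exactly the same elements and produce the same checksum; only the access order (and therefore the cache/prefetcher behaviour) differs.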

  5. #180
    Xtreme Enthusiast
    Join Date
    May 2008
    Posts
    612
    Quote Originally Posted by jimbo75 View Post
So how do you explain those occasions where the *much* faster cpu at low res has the tables turned and loses by a couple of frames at higher resolution? Nehalem was notorious for this for a long time, losing at high resolution to the 940 BE. If it's always faster, why was it losing so many at the highest res?
Increasing the resolution, or changing to a slower gpu, cuts those high fps where the cpu doesn't need to work: areas where there isn't any fight, just running around with no enemies. Then the gpu evens out the prefetching advantage, because it slows the cpu down on the frames it produces easily.

    The hard parts for the cpu, the parts that are CPU dependent, will start to show.

    Games aren't linear; the work differs a lot depending on what's going on.

    I think that BF3 will have maps where there could be 64 players. That will probably be a lot of work for the cpu.
    Last edited by gosh; 10-09-2011 at 11:09 AM.

  6. #181
    Xtreme Member
    Join Date
    Apr 2010
    Posts
    225
    Quote Originally Posted by gosh View Post
Increasing the resolution, or changing to a slower gpu, cuts those high fps where the cpu doesn't need to work: areas where there isn't any fight, just running around with no enemies. Then the gpu evens out the prefetching advantage, because it slows the cpu down on the frames it produces easily.

    The hard parts for the cpu, the parts that are CPU dependent, will start to show.

    Games aren't linear; the work differs a lot depending on what's going on.

    I think that BF3 will have maps where there could be 64 players. That will probably be a lot of work for the cpu.
I disagree. Take an intel and an AMD system, then load up a game with inside and outside areas and go inside; I bet the Intel system will score much higher indoors, and much higher again if you look up at the ceiling. Intel cpus do the easier stuff better, and this leads to higher average fps (and much higher maximums), but they do the harder stuff no better or worse. That's why SB's fps craters in Unigine at the difficult parts. The thing is, the intel cpus are scoring much higher in the parts where the AMD cpu is already scoring very high fps; it's practically meaningless whether it's 150 fps or 250 fps.
    Last edited by jimbo75; 10-09-2011 at 11:19 AM.

  7. #182
    Xtreme Enthusiast
    Join Date
    May 2008
    Posts
    612
    Quote Originally Posted by jimbo75 View Post
    I disagree. Take an intel and AMD system then load up a game with inside and outside areas and go inside, I bet the Intel system will score much higher indoors, and much higher if you look up at the ceiling.
I think you need to select a game that renders from more than one thread. Intel is fast, but it isn't as fast as it sometimes appears at low resolutions.
    A game using only one render thread isn't going to use that much of what the cpu can deliver.

  8. #183
    I am Xtreme
    Join Date
    Jul 2007
    Location
    Austria
    Posts
    5,485
    Quote Originally Posted by jimbo75 View Post
I disagree. Take an intel and an AMD system, then load up a game with inside and outside areas and go inside; I bet the Intel system will score much higher indoors, and much higher again if you look up at the ceiling. Intel cpus do the easier stuff better, and this leads to higher average fps (and much higher maximums), but they do the harder stuff no better or worse. That's why SB's fps craters in Unigine at the difficult parts. The thing is, the intel cpus are scoring much higher in the parts where the AMD cpu is already scoring very high fps; it's practically meaningless whether it's 150 fps or 250 fps.
    You'd better not check this thread then...

    http://www.hardwareluxx.de/community...en-720931.html

The frame graphs are pretty much all the same; the only difference is the position on the y-axis and some anomalies where you have dips of a single frame.
    Last edited by Hornet331; 10-09-2011 at 11:38 AM.

  9. #184
    Xtreme Member
    Join Date
    Apr 2010
    Posts
    225
If the point is to show fps differences in cpu-bound games, why not do that? Why not show games like Starcraft or Battleforge, but do it at actual gaming settings? That would be infinitely preferable to showing games at low resolution, surely?

  10. #185
    Registered User
    Join Date
    Aug 2011
    Posts
    73
    Quote Originally Posted by Iconyu View Post
    I'm sure I just posted this a minute ago and it never made it on screen. Anyway, I'll just leave this here.
    I'm afraid some people will have to apologize if other reviews show similar numbers.

  11. #186
    Xtreme Member
    Join Date
    Mar 2008
    Posts
    338
A lot of people are wondering why we are seeing slower-than-last-gen PHII results.

    BD is a new arch that is completely different from PHII. With so much change, there was never a guarantee of it being faster. AMD was more focused on finding a way to win the "core" race, and in the process lost a lot of single threaded performance. It's not really much of a surprise when you really think about it.

    What they did do, though, was lay a foundation to build upon. With BDII they should be able to get the single threaded performance back up, and they can start adding more modules for better multi threaded performance.


    The 8150 vs 2600k isn't 8 cores vs 4 cores, it's 4 modules vs 4 cores with HT.
    Heatware Cecil

  12. #187
    Xtreme Member
    Join Date
    Jan 2004
    Posts
    393
    Quote Originally Posted by jimbo75 View Post
If the point is to show fps differences in cpu-bound games, why not do that? Why not show games like Starcraft or Battleforge, but do it at actual gaming settings? That would be infinitely preferable to showing games at low resolution, surely?
It would also be very interesting,
    but I suppose running the RE5 and HAWX built-in benchmarks is much easier/quicker.

  13. #188
    I am Xtreme
    Join Date
    Aug 2008
    Posts
    5,586
    Quote Originally Posted by Cecil View Post

BD is a new arch that is completely different from PHII.
    same socket....


  14. #189
    I am Xtreme
    Join Date
    Jul 2007
    Location
    Austria
    Posts
    5,485
    Quote Originally Posted by jimbo75 View Post
If the point is to show fps differences in cpu-bound games, why not do that? Why not show games like Starcraft or Battleforge, but do it at actual gaming settings? That would be infinitely preferable to showing games at low resolution, surely?
Because most reviewers have a given set of benchmarks; if they add a new game they have to retest all the previous hardware with it. Using an available benchmark and creating an artificial setting is easier, especially when you already have the data.

    That's also the reason why anandtech is still stuck on a half-decade-old benchmark (sysmark 07). People want to compare results, and anand has a pretty huge library of tested cpus... if they replace one benchmark they need to redo it for 100+ cpus.
    Last edited by Hornet331; 10-09-2011 at 12:00 PM.

  15. #190
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    Quote Originally Posted by Hornet331 View Post
    Because most reviewers have a given set of benchmarks, if they add a new game they have to retest all the previous hardware with it. Using an available benchmark and making an artificial setting is easier, especial when you already have the data.
for most that is true, but for a review that's only 2 games and 2 chips, it sure seemed like a lot of extra work
    2500k @ 4900mhz - Asus Maxiums IV Gene Z - Swiftech Apogee LP
    GTX 680 @ +170 (1267mhz) / +300 (3305mhz) - EK 680 FC EN/Acteal
    Swiftech MCR320 Drive @ 1300rpms - 3x GT 1850s @ 1150rpms
    XS Build Log for: My Latest Custom Case

  16. #191
    Xtreme Addict
    Join Date
    May 2007
    Location
    Romania
    Posts
    1,246
It's a damn preview, goddammit, not a review. The full review will come on the 12th like the others... You people do not deserve to get info ahead of schedule because it seems you are not able to process it properly.

  17. #192
    Xtreme 3D Team
    Join Date
    Jan 2009
    Location
    Ohio
    Posts
    8,499
    Quote Originally Posted by Cecil View Post
    A lot of people are wondering why we are seeing slower then last gen PHII results.

    BD is a new arch that is completely different from PHII. With so much change, there was never a gaurantee of it being faster. AMD was more focused on finding a way to win the "core" race, and in the process, lost a lot of single threaded performance. Its not really much of a surprise when you really think about it.

    What they did do though, was lay a foundation to build upon. With BDII they should be able to get the single threaded performance back up, and they can start adding more modules for better multi threaded performance.


    The 8150 vs 2600k isnt 8 core vs 4 core, its 4 modules vs 4 cores with HT.
They lost MT performance too. Thuban has better MT performance with the same TDP on a 45nm process.
    Smile

  18. #193
    I am Xtreme
    Join Date
    Jul 2007
    Location
    Austria
    Posts
    5,485
    Quote Originally Posted by Manicdan View Post
    for most that is true, but for a review thats only 2 games and 2 chips, it sure seemed like a lot of extra work
it's a quick preview... what more do you expect?

  19. #194
    Xtreme Member
    Join Date
    Jun 2005
    Posts
    442
    Quote Originally Posted by Hondacity View Post
    same socket....
Prescott Pentium 4s and Core 2 Duos/Quads both used socket LGA775, yet the Core 2s thrashed the Pentium 4/D CPUs.

    Socket means nothing. You can have a different architecture on the same socket type.
    Last edited by Mad Pistol; 10-09-2011 at 12:07 PM.
    PII 965BE @ 3.8Ghz /|\ TRUE 120 w/ Scythe Gentle Typhoon 120mm fan /|\ XFX HD 5870 /|\ 4GB G.Skill 1600mhz DDR3 /|\ Gigabyte 790GPT-UD3H /|\ Two lovely 24" monitors (1920x1200) /|\ and a nice leather chair.

  20. #195
    Xtreme Enthusiast
    Join Date
    Sep 2008
    Location
    ROMANIA
    Posts
    687
It's a damn preview, goddammit, not a review. The full review will come on the 12th like the others... You people do not deserve to get info ahead of schedule because it seems you are not able to process it properly.
    I told them twice, too, that it's a "preview".
    The title of the article reads "AMD FX-8150 Bulldozer Preview".
    Hard to see or understand, it seems.

    Mad Pistol, yes, but a different NB.
    Anyway, for AMD it's clear now that they should have designed a new socket from scratch,
    maybe to fit and work better with the BD architecture.
    Socket AM3+, which is basically AM2, is too many years old, even compared with LGA 775.
    Last edited by xdan; 10-09-2011 at 12:13 PM.
    i5 2500K@ 4.5Ghz
    Asrock P67 PRO3


    P55 PRO & i5 750
    http://valid.canardpc.com/show_oc.php?id=966385
    239 BCKL validation on cold air
    http://valid.canardpc.com/show_oc.php?id=966536
Almost 5GHz, air.

  21. #196
    I am Xtreme
    Join Date
    Aug 2008
    Posts
    5,586
    Quote Originally Posted by Mad Pistol View Post
    Prescott Pentium 4's and Core 2 Duos/Quads both used socket LGA775, yet the Core 2's thrashed Pentium 4/D CPUs.

    Socket means nothing. You can have a different architecture on the same socket type.
that's intel; let's talk AM3 and AM3+, which are AMD...


  22. #197
    Xtreme 3D Team
    Join Date
    Jan 2009
    Location
    Ohio
    Posts
    8,499
    Quote Originally Posted by Hondacity View Post
that's intel; let's talk AM3 and AM3+, which are AMD...
You base your opinions, assumptions and facts on nothing but the idea that the sockets are physically similar... when they aren't even that. There are fundamental differences within the socket and the PWM + VRM, beyond the fact that they have 941 or 942 pins.
    Smile

  23. #198
    Xtreme Member
    Join Date
    Jun 2005
    Posts
    442
    Quote Originally Posted by Hondacity View Post
that's intel; let's talk AM3 and AM3+, which are AMD...
Yes, but I disproved your conclusion. AMD has done the same thing going from Deneb/Thuban to Bulldozer (Zambezi). The reason for the "+" appears to be improved power circuitry (more current available) compared with vanilla AM3. Also, the 9x0 motherboards have SLI support as well as native BD support.

    Read the specs and look at the die shots of BD. Bulldozer is a complete departure from Deneb/Thuban.
    PII 965BE @ 3.8Ghz /|\ TRUE 120 w/ Scythe Gentle Typhoon 120mm fan /|\ XFX HD 5870 /|\ 4GB G.Skill 1600mhz DDR3 /|\ Gigabyte 790GPT-UD3H /|\ Two lovely 24" monitors (1920x1200) /|\ and a nice leather chair.

  24. #199
    Xtreme Member
    Join Date
    Mar 2008
    Posts
    338
    Quote Originally Posted by BeepBeep2 View Post
    They lost MT performance too. Thuban has better MT performance with same TDP on a 45nm process.
Most of the loss in MT is from the loss in ST. The 4-module design should roughly equal 7 cores (assuming the MT app scales 100%). With each core of each module not being as fast as a Deneb/Thuban core, there is a shared performance drop across all the modules.

    What they basically did is revert back to a quad core and give it a form of hardware-level HT that scales better, but they lost ST performance by doing so.
    Heatware Cecil
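The "4 modules roughly equal 7 cores" arithmetic above can be sketched as a small model. The 0.8 second-core scaling figure matches AMD's roughly-80% CMT claim; the 15% per-core speed deficit below is an assumed number purely for illustration.

```python
def cmt_effective_cores(n_modules, second_core_scaling=0.8):
    """Core-equivalents of a CMT design: each module contributes one full
    core plus a second core worth only a fraction of a full one."""
    return n_modules * (1 + second_core_scaling)

def mt_throughput(effective_cores, per_core_speed):
    """Relative multi-threaded throughput, assuming the app scales perfectly."""
    return effective_cores * per_core_speed

bd_cores = cmt_effective_cores(4)   # 4 modules -> ~7.2 core-equivalents
bd = mt_throughput(bd_cores, 0.85)  # assumed 15% per-core speed deficit
thuban = mt_throughput(6, 1.0)      # six full-speed cores as the baseline
print(bd_cores, round(bd, 2), thuban)
```

With these assumed numbers BD's theoretical MT throughput barely edges six full-speed Thuban cores, which is consistent with the small MT gains discussed in this thread: the per-core deficit eats most of what the extra core-equivalents add.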

  25. #200
    Xtreme 3D Team
    Join Date
    Jan 2009
    Location
    Ohio
    Posts
    8,499
    Quote Originally Posted by Cecil View Post
Most of the loss in MT is from the loss in ST. The 4-module design should roughly equal 7 cores (assuming the MT app scales 100%). With each core of each module not being as fast as a Deneb/Thuban core, there is a shared performance drop across all the modules.

    What they basically did is revert back to a quad core and give it a form of hardware-level HT that scales better, but they lost ST performance by doing so.
The problem is, they would have been much more competitive with Sandy Bridge if they had shrunk the X6 (I'm not sure why guys keep saying "add two cores!"), added 4-6MB between the L2 and L3 caches, and pushed stock speeds up to 3.6-4.0 GHz single thread. I'm fairly certain that is doable within a 95-125w TDP on 32nm, and they should have been able to figure out some way to gain clockspeed... intel has had no problems with it, and AMD did it well in the transition from 65nm to 45nm.

    X6s that would OC to 4.6-4.7 GHz on water at 32nm, with a +3% IPC boost from extra cache, or that adopted Llano's STARS core + L3, would have gained ~5-10% IPC over the current gen and would have been a lot better than this.
    Smile

