
Thread: Intel haswell i7-4770K preview article-TMSHW

  1. #176
    Xtreme Member
    Join Date
    Aug 2009
    Location
    Belgium
    Posts
    163
    Quote Originally Posted by Andi64 View Post
    So, less than 5% average improvement over 3770K, and less than 1% on games. I'm glad I went LGA2011
    So from Sandy to Ivy there was around a 5% increase, but on average Sandy overclocked a bit higher, so the performance was about the same. Now from Ivy to Haswell there is less than 1% difference for games? What will the next gen of CPUs do after Haswell? A 0.5% increase? If this continues my 2600K will still be rocking in 10 years, with new CPUs only consuming less power and having more iGPU and other stuff built in.
    Asus Z87 Deluxe, 4770K,Noctua NH-D14, Crucial 16 GB DDR3-1600, Geforce Titan, ASUS DRW-24B3ST, Crucial M500 960GB, Crucial M4 256GB, 3 X Seagate 4TB, Lamptron FC5 V2 Fancontroller, Noctua Casefans, Antec P183 Black, Asus Essence STX, Corsair AX860i, Corsair SP2500 speakers, Logitech Illuminated Keyboard, Win7 Home Pro 64 bit + Win 8.1 Home 64 bit Dual boot, ASUS VG278H

  2. #177
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Quote Originally Posted by nossy23 View Post
    Now from Ivy to Haswell there is less than 1% difference for games? What will the next gen of CPUs do after Haswell? A 0.5% increase?
    We are GPU limited. Nvidia and AMD need to play more rough.
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  3. #178
    Registered User
    Join Date
    Mar 2010
    Posts
    52
    Unless it's fake, an OC to 5GHz at 0.904V...

    http://www.fudzilla.com/home/item/31...to-5ghz-at-09v

  4. #179
    Xtreme Member
    Join Date
    Aug 2009
    Location
    Belgium
    Posts
    163
    Quote Originally Posted by zalbard View Post
    We are GPU limited. Nvidia and AMD need to play more rough.

    I agree that we need more from Nvidia and AMD, however we also need more from Intel. If you look at benchmarks between e.g. a 2500K and 2600K for games, the difference is small, but still larger than Haswell vs Ivy. So I'm betting that even with a lot of extra GPU power, there is not much of an increase coming from Haswell for games.
    Asus Z87 Deluxe, 4770K,Noctua NH-D14, Crucial 16 GB DDR3-1600, Geforce Titan, ASUS DRW-24B3ST, Crucial M500 960GB, Crucial M4 256GB, 3 X Seagate 4TB, Lamptron FC5 V2 Fancontroller, Noctua Casefans, Antec P183 Black, Asus Essence STX, Corsair AX860i, Corsair SP2500 speakers, Logitech Illuminated Keyboard, Win7 Home Pro 64 bit + Win 8.1 Home 64 bit Dual boot, ASUS VG278H

  5. #180
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Quote Originally Posted by Voodoo Hoodoo View Post
    Unless it's fake, an OC to 5GHz at 0.904V...

    http://www.fudzilla.com/home/item/31...to-5ghz-at-09v
    CPU-Z reports voltages for Haswell incorrectly. Also, validation has little to do with stability.
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  6. #181
    I am Xtreme Ket's Avatar
    Join Date
    Apr 2004
    Location
    United Kingdom
    Posts
    6,822
    Well, I always said Haswell would at best be 10% better than IB, with the only significant improvement being the iGPU, so I'm not surprised in the slightest by the results that are trickling out now. I would have been happy sticking with my 2500K, but since that is half dead for reasons unknown I've just ordered a 3570K as its replacement. Absolutely no need to go to the hassle of the Haswell "upgrade". I think it also has to be acknowledged, though, that CPU silicon is reaching its capability limits. A lot more performance improvements are going to start coming from the software side of things, although you can expect game programmers to be as lazy as ever until AMD / Nvidia can't pack any more raw grunt under the hood of their silicon.
    Last edited by Ket; 05-10-2013 at 03:40 PM.

    "Prowler"
    X570 Tomahawk | R7 3700X | 2x16GB Klevv BoltX @ 3600MHz CL18 | Powercolor 6800XT Red Devil | Xonar DX 7.1 | 2TB Barracuda | 256GB & 512GB Asgard NVMe drives | 2x DVD & Blu-Ray opticals | EVGA Supernova 1000w G2

    Cooling:

    6x 140mm LED fans, 1x 200mm LED fan | Modified CoolerMaster Masterliquid 240

    Asrock Z77 thread! | Asrock Z77 Extreme6 Review | Asrock P67 Extreme4 Review | Asrock P67 Extreme4/6 Pro3 thread | Asrock Z68 Extreme4 thread | Asrock Z68 Extreme4 Review | Asrock Z68 Gen3 Thread | 8GB G-Skill review | TK 2.ZERO homepage | P5Q series mBIOS thread
    Modded X570 Aorus UEFIs

  7. #182
    Xtreme Member
    Join Date
    Jan 2010
    Posts
    323
    Already selling in China...


  8. #183
    Xtreme Mentor stasio's Avatar
    Join Date
    Jan 2008
    Location
    Malaysia
    Posts
    3,036
    Quote Originally Posted by zalbard View Post
    CPU-Z reports voltages for Haswell incorrectly. Also, validation has little to do with stability.
    Yes,
    according to AIDA64, CPU-Z 1.64 displays the CPU VRM voltage instead of the CPU Vcore.

    http://www.chinadiy.com.cn/batch.download.php?aid=67878

    New Intel Core i7 4770k "Review" :
    http://translate.google.co.ve/transl...%2Fn-9024.html
    Need a Gigabyte latest BIOS?
    Z370 AORUS Gaming 7,
    GA-Z97X-SOC Force ,Core i7-4790K @ 4.9 GHz
    GA-Z87X-UD3H ,Core i7-4770K @ 4.65 GHz
    G.Skill F3-2933C12D-8GTXDG @ 3100 (12-15-14-35-CR1) @1.66V
    2xSSD Corsair Force GS 128 (RAID 0), WD Caviar Black SATA3 1TB HDD,
    Evga GTS 450 SC, Gigabyte Superb 720W
    XSPC RayStorm D5 EX240 (Liquid Ultra)
    NZXT Phantom 630 Ultra Tower
    Win 7 SP1 x64;Win 10 x64

  9. #184
    Xtreme Addict
    Join Date
    Aug 2004
    Location
    Sweden
    Posts
    2,084
    Quote Originally Posted by nossy23 View Post
    I agree that we need more from Nvidia and AMD, however we also need more from Intel. If you look at benchmarks between e.g. a 2500K and 2600K for games, the difference is small, but still larger than Haswell vs Ivy. So I'm betting that even with a lot extra GPU power, there is not much of an increase coming from Haswell for games.
    It's all about which resolution is used when benchmarking. Many sites use way too low settings when testing CPUs.

    2700K vs 3770K with discrete graphics is pretty much a draw in most games at 1920 x 1080, just because it's GPU limited.
    So even if the 4770K is ~8% faster at rendering/encoding/whatever, don't expect it to make much difference for realistic gaming.
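    A rough way to see why: the delivered frame rate is roughly capped by whichever of the CPU or the GPU is the slower of the two, so a CPU-side advantage only shows up while the CPU is the lower cap. A minimal sketch in Python with made-up numbers (illustrative only, not measurements):

    Code:
    # Simple bottleneck model: the system delivers roughly the minimum of what
    # the CPU alone and the GPU alone could sustain. Numbers are hypothetical.
    def delivered_fps(cpu_fps, gpu_fps):
        return min(cpu_fps, gpu_fps)

    cpu_only = {"3770K-class": 100.0, "4770K-class": 108.0}  # ~8% faster CPU
    gpu_only = 70.0  # what the graphics card manages at 1920 x 1080, max details

    for name, fps in cpu_only.items():
        print(f"{name}: {delivered_fps(fps, gpu_only):.0f} fps")
    # Both print 70 fps: the ~8% CPU advantage is hidden behind the GPU cap.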

  10. #185
    Xtreme Addict
    Join Date
    Mar 2009
    Posts
    1,116
    Doesn't matter if the IPC hasn't changed if it hits 5 GHz as easily as Ivy hits 4 GHz...

  11. #186
    Xtreme Cruncher
    Join Date
    Jun 2006
    Posts
    6,215
    Quote Originally Posted by bamtan2 View Post
    Doesn't matter if the IPC hasn't changed if it hits 5 GHz as easily as Ivy hits 4 GHz...
    The thing is, it (HSWL) cannot hit 5GHz easily according to all these leaks. It can hit 4.5GHz, and the temps under air are extremely high. That is what we already have today with the IB core.

  12. #187
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Argentina
    Posts
    412
    Quote Originally Posted by bamtan2 View Post
    Doesn't matter if the IPC hasn't changed if it hits 5 GHz as easily as Ivy hits 4 GHz...
    Last time I checked, Ivy was clocking to 4500-4600 easily, nearly as good as Sandy Bridge on average. You could find a 5GHz Sandy Bridge more easily than a 5GHz Ivy Bridge, but neither is an average CPU.
    Main: Windows 10 Core i7 5820K @ 4500Mhz, Corsair H100i, 32GB DDR4-2800, eVGA GTX980 Ti, Kingston SSDNow 240GB, Crucial C300 64GB Cache + WD 1.5TB Green, Asus X99-A/USB3.1
    ESXi Server 6.5 Xeon E5 2670, 64GB DDR3-1600, 1TB, Intel DX79SR, 4xIntel 1Gbps
    ESXi Server 6.0 Xeon E5 2650L v3, 64GB DDR4-2400, 1TB, Asrock X99 Xtreme4, 4xIntel 1Gbps
    FreeNAS 9.10 x64 Xeon X3430 , 32GB DDR3-1600, 3x(3x1TB) WD Blue, Intel S3420GPRX, 4xIntel 1Gbps

  13. #188
    Xtreme Member
    Join Date
    Jan 2009
    Location
    Central PA/Southern NH
    Posts
    177
    The IPC has been progressing but I really haven't been compelled to upgrade from my i7-920. I'm still looking at the release of Ivy Bridge-E as a time for an upgrade.
    [Intel core i7 4820K..........Asus Rampage IV Black Edition] LAN Parties attended:
    [512GB Samsung 840 PRO 512GB......2xWD Black 7200 2TB RAID0] FITES [fites.net] 2012, 2011, 2010, 2009, 2008, 2007
    [32GB G.Skill DDR3 2400 4x8GB....2xEVGA GTX780ti Classified] L'Pane NorEaster [lpane.net] 2010, 2009, 2008
    [Corsair 900D.......Primochill CTR 250......Corsair AX1200i] Quakecon [quakecon.org] 2010, 2009
    [MCP35x2.........Swiftech Apogee HD......Swiftech MCR420-XP] PAX East [east.paxsite.com] 2012

  14. #189
    Xtreme Enthusiast
    Join Date
    Dec 2010
    Posts
    594
    Quote Originally Posted by Mats View Post
    It's all about which resolution is used when benchmarking. Many sites use way too low settings when testing CPUs.

    2700K vs 3770K with discrete graphics is pretty much a draw in most games at 1920 x 1080, just because it's GPU limited.
    So even if the 4770K is ~8% faster at rendering/encoding/whatever, don't expect it to make much difference for realistic gaming.
    Nope. There are enough games that are CPU bottlenecked in places. People just have to test them. Think strategy, simulations, even some shooters if you want 60fps. No one should care about GPU bottlenecked benchmarks where you have less than 60fps unless it's a really slow game.

  15. #190
    RAIDer
    Join Date
    Jul 2005
    Location
    Norway
    Posts
    699
    BF3 multiplayer is heavily CPU bound. A 5 GHz Ivy or a Sandy 6-core is not enough for 120Hz/fps play.

    Example on the "Sharqi" map:
    65 fps on the roof of the hotel, 1080p ultra settings, 3770K @ stock
    90 fps with the 3770K @ 5 GHz. Looks like the "mesh" quality setting is eating CPU power.

  16. #191
    Xtreme Addict
    Join Date
    Aug 2004
    Location
    Sweden
    Posts
    2,084
    Yeah you're right, of course the CPU makes a difference in gaming, but I can't really see a difference between IB and SB.

    I thought I went through enough benchmarks the other day when I compared 2600K and 3770K.
    The latter was faster in many benchmarks, except in games, and that 100 MHz extra didn't really help.

  17. #192
    Xtreme Member
    Join Date
    Nov 2010
    Location
    Brazil
    Posts
    161
    I think the major reason people changed from SB to IB, and now IB to HW, is the "overclockability" of the processor itself. I sold my PC a while ago, and I'm planning to go Haswell now, even though I know it won't make any practical difference compared to the others when running at frequencies above 4 GHz...
    PC:
    MOBO: Maximus VI Extreme
    CPU: Core i7-4770k
    RAM: 2x4gb Dominator Platinum 2133
    GPU: GeForce GTX Titan

    Greetings from Brazil!

  18. #193
    One-Eyed Killing Machine
    Join Date
    Sep 2006
    Location
    Inside a pot
    Posts
    6,340
    Quote Originally Posted by boxleitnerb View Post
    Nope. There are enough games that are CPU bottlenecked in places. People just have to test them. Think strategy, simulations, even some shooters if you want 60fps. No one should care about GPU bottlenecked benchmarks where you have less than 60fps unless it's a really slow game.
    Strategy & simulation games might benefit from a 5GHz IB compared to IB @ 4.5GHz, maybe not.
    Some of them show a difference ( not mindblowing, but above the run-to-run variance threshold ) in their built-in or add-on benchmarks, but I can't tell if these results and differences appear in real-life gaming as well, since I'm not a strategy/rpg/mmo/fsx gamer and don't have the patience or will to learn how to play those games just to have them added to my benchmark list for the reviews.

    Perhaps someone who knows how to play those games, has enough knowledge to test them properly, and has the integrity to give us some real-life results without any kind of "malicious editing" could step up ( sadly there are plenty of sh1tty and/or shady people out there ).

    Quote Originally Posted by Nizzen View Post
    BF3 multiplayer is heavily CPU bound. A 5 GHz Ivy or a Sandy 6-core is not enough for 120Hz/fps play.

    Example on the "Sharqi" map:
    65 fps on the roof of the hotel, 1080p ultra settings, 3770K @ stock
    90 fps with the 3770K @ 5 GHz. Looks like the "mesh" quality setting is eating CPU power.
    That's not a proper benchmark by any means ( even a single step backwards/forward/left/right or slightly different viewing angle changes the fps drastically ).

    With proper testing this is what comes out:



    I should have some S. Peninsula results ( from 30-minute rounds ) somewhere; if I locate them I'll post 'em.
    Coding 24/7... Limited forums/PMs time.

    -Justice isn't blind, Justice is ashamed.

    Many thanks to: Sue Wu, Yiwen Lin, Steven Kuo, Crystal Chen, Vivian Lien, Joe Chan, Sascha Krohn, Joe James, Dan Snyder, Amy Deng, Jack Peterson, Hank Peng, Mafalda Cogliani, Olivia Lee, Marta Piccoli, Mike Clements, Alex Ruedinger, Oliver Baltuch, Korinna Dieck, Steffen Eisentein, Francois Piednoel, Tanja Markovic, Cyril Pelupessy (R.I.P. ), Juan J. Guerrero

  19. #194
    Xtreme Addict
    Join Date
    Sep 2010
    Location
    US, MI
    Posts
    1,680
    I think all the Call of Duty games only use a single core.
    At least MW2 is that way, and I think the rest are too, if I remember right.

    When you enable Nvidia 3D Vision it seems like it could use more CPU power in those games.
    Well, in all games, but those would make for a good test even though it's a very old engine.

  20. #195
    Xtreme Enthusiast
    Join Date
    Dec 2010
    Posts
    594
    Quote Originally Posted by BenchZowner View Post
    Strategy & simulation games might benefit from a 5GHz IB compared to IB @ 4.5GHz, maybe not.
    Some of them show a difference ( not mindblowing but above the run to run variance threshold ) in their built-in or add-on Benchmarks, but I can't tell if these results and differences appear in real-life gaming as well since I'm not a strategy/rpg/mmo/fsx gamer and can't play those games neither have the patience and will to try to learn how to play those games just to have them added to my benchmark list for the reviews.

    Perhaps someone who knows how to play those games and has enough knowledge to test them properly and the integrity to give us some real-life results without any kind of "malicious editing" ( sadly there are plenty of sh1tty and/or shady people out there ).



    That's not a proper benchmark by any means ( even a single step backwards/forward/left/right or slightly different viewing angle changes the fps drastically ).

    With proper testing this is what comes out:



    I should have some S. Peninsula results ( from 30 mins rounds ) somewhere, if I locate the results I'll post 'em
    Even shooters can be CPU bottlenecked in certain locations (Crysis 3, Serious Sam 3...); it all depends on the fps you want to have/sustain (and on whether you have a strong enough graphics card, or are willing to turn down the graphics settings a bit to get those fps GPU-wise). PCGH.de, a German website, does excellently documented CPU benchmarks: real gameplay, no GPU bottlenecks.

    Btw Nizzen's results are fine because they show the (rough) potential of what you get by using a faster CPU: ~38% more fps with ~39% higher clocks (5 GHz vs 3.6 GHz all-core turbo). You can, btw, look at the fps, alt-tab out of the game, change the clock speed, go back in and observe the change. That way it's perfectly reliable, as you don't have to move or reboot. The results you posted are not really useful since they are GPU bottlenecked and don't show the benefit of a faster CPU. It doesn't make sense that a 4.7 GHz 2600K isn't faster than a 4.3 GHz one. It wouldn't be much, but it would be measurable, at least in scenes where the CPU load is high.
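    As a quick sanity check on that scaling claim, the arithmetic can be run directly on Nizzen's two readings, taking the 3.6 GHz stock all-core figure quoted above at face value (a rough sketch, not a measurement):

    Code:
    # Scaling check for the BF3 "roof of the hotel" readings quoted above.
    # Stock all-core clock is taken as the 3.6 GHz figure cited in this thread.
    fps_stock, fps_oc = 65.0, 90.0  # fps at stock vs. at 5 GHz
    clk_stock, clk_oc = 3.6, 5.0    # GHz

    print(f"fps gain:   {fps_oc / fps_stock - 1:.0%}")  # ~38%
    print(f"clock gain: {clk_oc / clk_stock - 1:.0%}")  # ~39%
    # Nearly 1:1 scaling, i.e. that spot is almost completely CPU bound.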
    Last edited by boxleitnerb; 05-12-2013 at 08:03 AM.

  22. #197
    One-Eyed Killing Machine
    Join Date
    Sep 2006
    Location
    Inside a pot
    Posts
    6,340
    Quote Originally Posted by boxleitnerb View Post
    Even shooters can be CPU bottlenecked in certain locations (Crysis 3, Serious Sam 3...), it all depends on the fps you want to have/sustain (and if you have a strong enough graphics card or if you're willing to turn down the graphics settings a bit to get those fps GPU-wise). PCGH.de, a German website, does excellently documented CPU benchmarks, real gameplay, no GPU bottlenecks.
    Turning down the graphics settings reduces the GPU load and makes your CPU more influential on your fps; that's nothing new.

    I'm not going to play Crysis 3 at anything but max details with a 3770K + GTX Titan, that's my choice & scenario, and in my scenario we both know that what my chart shows is true and right.

    Turning the game settings down will change things ( how much ? depends on the settings & the downgrading level ).
    But you can't invalidate results with that argument.

    Quote Originally Posted by boxleitnerb View Post
    Btw Nizzen's results are fine because they show the (rough) potential of what you get by using a faster CPU: ~38% more fps with ~39% higher clocks (5 GHz vs 3.6 GHz all-core turbo). You can, btw, look at the fps, alt-tab out of the game, change the clock speed, go back in and observe the change. That way it's perfectly reliable, as you don't have to move or reboot. The results you posted are not really useful since they are GPU bottlenecked and don't show the benefit of a faster CPU. It doesn't make sense that a 4.7 GHz 2600K isn't faster than a 4.3 GHz one. It wouldn't be much, but it would be measurable, at least in scenes where the CPU load is high.

    Alt-tabbing out of the game isn't a good idea, sorry; most games do not like that and you'll get inconsistent results even at the very same system clocks & settings.
    The 2600K stops scaling there simply due to its IPC, its architecture in general and the GPU bottleneck.
    As long as we don't have low-level access to BF3's engine, we can't tell where Ivy Bridge's performance increase comes from ( could be anything: more efficiency in the physics calculation thread, etc. ).

    If you're an "eye candy gamer" with a high-end system, you usually game at max in-game details with AA/TrAA/AF, and in that case it is totally indisputable that, regardless of whether you're gunning with a 4.5GHz 3770K or a 4.7GHz 4770K, you'll have the very same performance at high resolutions & maxed game settings ( *maybe some RPG/strategy/mmo games could be an exception, I don't know; I said I can't play these kinds of games ).

    You can see a plethora of games tested just like BF3 in the chart above here
    Coding 24/7... Limited forums/PMs time.

    -Justice isn't blind, Justice is ashamed.

    Many thanks to: Sue Wu, Yiwen Lin, Steven Kuo, Crystal Chen, Vivian Lien, Joe Chan, Sascha Krohn, Joe James, Dan Snyder, Amy Deng, Jack Peterson, Hank Peng, Mafalda Cogliani, Olivia Lee, Marta Piccoli, Mike Clements, Alex Ruedinger, Oliver Baltuch, Korinna Dieck, Steffen Eisentein, Francois Piednoel, Tanja Markovic, Cyril Pelupessy (R.I.P. ), Juan J. Guerrero

  23. #198
    Xtreme Enthusiast
    Join Date
    Dec 2010
    Posts
    594
    Quote Originally Posted by BenchZowner View Post
    Turning down the graphics settings reduces the GPU load and your CPU gets more influential to your fps, that's nothing new.

    I'm not going to play Crysis 3 at anything but max details with a 3770K + GTX Titan, that's my choice & scenario, and in my scenario we both know that what my chart shows is true and right.

    Turning the game settings down will change things ( how much ? depends on the settings & the downgrading level ).
    But you can't invalidate results with that argument.

    Alt-tabbing out of the game isn't a good idea, sorry, most games do not like that and you'll get inconsistent results even at the very same system clocks & settings.
    The 2600K stops scaling there simply due to its IPC, architecture in general and the GPU bottleneck.
    As long as we don't have low level access to BF3's engine, we can't tell where the Ivy Bridge increased perf. increase comes from ( could be anything, more efficiency in the Physics calc. thread, etc etc ).

    If you're an "eye candy gamer" with a high-end system, you usually game at max details ( in-game ) and AA/TrAA/AF usually, and in that case it is totally indisputable that regardless if you're gunning with a 4.5GHz 3770K or a 4.7GHz 4770K, you'll have the very same performance at high resolutions & maxed game settings ( *maybe some RPG/strategy/mmo games could be an exception, don't know, I said I can't play these kinds of games ).

    You can see a plethora of games tested just like BF3 in the chart above here
    Why play with max details if that doesn't give you enough fps? Ultimately it's about fps - turning down settings is not a crime.
    Imagine this case (at max details):
    CPU A can do 45 fps avg
    CPU B can do 60 fps avg
    The GPU can do 30 fps avg. Conclusion -> CPU A and CPU B are equally fast (30 fps), which is wrong. And not everyone wants to play at 30 fps. Someone who wants 60 fps will not know which CPU is better.
    Some people need 100+ fps in BF3 for competitive online play. Why should they play at 2560x1600 with maximum details and TrSSAA if their GPU(s) cannot get them those 100 fps? That makes no sense. This applies to all games. I'm not saying people should reduce their settings just because fps sometimes drop below a certain threshold; I'm saying a benchmark should show that. Benchmarking CPUs in GPU bottlenecks is a waste of time since it doesn't give you any new information. Your BF3 benchmark may be fine for someone who only needs around 60 fps, but for others who need more, it is useless.

    Since games can be GPU or CPU bottlenecked depending on where you are in the game, what you're doing and what kind of fps you need, you always need two benchmarks: one that shows what your CPU can ultimately do and one separate benchmark for the GPU. One single benchmark will never give you all the information you need. Only by combining these two benchmarks can you really gauge the performance you can get from a certain CPU/GPU combo in all relevant cases.

    If we take the example from above:
    CPU benchmark (maximum settings, because they can influence CPU load, but 1280x720, no AA, no AF):
    CPU A: 45 fps avg
    CPU B: 60 fps avg
    Need 60 fps -> get CPU B

    GPU benchmark (maximum settings, 1080p, AA+AF):
    GPU: 30 fps avg
    Need 60 fps -> get a faster GPU or go SLI/CF or reduce some details.

    That way you know exactly what components you need before you buy.
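    A rough sketch of how those two numbers combine, assuming (as in the bottleneck model earlier) that the delivered frame rate is capped by whichever benchmark result is lower; the fps figures below are the hypothetical ones from the example above, not measurements:

    Code:
    # Combine a CPU-bound benchmark (low res, no AA) with a GPU-bound benchmark
    # (target res, AA/AF) to see which component misses the desired frame rate.
    def verdict(cpu_fps, gpu_fps, target_fps):
        expected = min(cpu_fps, gpu_fps)  # rough delivered frame rate
        limiter = "CPU" if cpu_fps < gpu_fps else "GPU"
        if expected >= target_fps:
            return f"~{expected:.0f} fps expected, target met"
        return f"~{expected:.0f} fps expected, {limiter} is what holds back {target_fps} fps first"

    print("CPU A:", verdict(cpu_fps=45, gpu_fps=30, target_fps=60))
    print("CPU B:", verdict(cpu_fps=60, gpu_fps=30, target_fps=60))
    # With either CPU the GPU (30 fps) needs upgrading first to reach 60 fps;
    # once the GPU is fast enough, only CPU B can actually hold 60 fps.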
    Alt-tabbing can lead to stability problems, that much is true. But it doesn't affect performance in most cases. It's easy to test: alt-tab out of and back into the game 2 or 3 times and see if the fps stay the same (in a static scene, of course, where other players or events don't skew the results even if you do nothing). If the results are consistent, just go ahead and bench. But I agree with you that you should usually have a fixed test environment. Savegames are best for that.

    The results you linked to are worthless imo. For one thing, in-game benchmarks were used, but these often show much higher fps than you actually see in the game. That is because AI, pathfinding, etc. are not always computed in those benchmarks; they are more like movies than real gameplay. For instance, the GTA 4 in-game benchmark shows at least twice the fps that are common in the game. Very misleading. And secondly, on top of that, the GPU bottleneck ensures that relevant performance differences are simply hidden. That might be irrelevant in the case of 2600K vs 3770K, since the differences would be very small anyway, but the method in general is just wrong.
    Last edited by boxleitnerb; 05-12-2013 at 09:19 AM.

  24. #199
    One-Eyed Killing Machine
    Join Date
    Sep 2006
    Location
    Inside a pot
    Posts
    6,340
    Quote Originally Posted by boxleitnerb View Post
    Why play with max details if that doesn't give you enough fps? Ultimately it's about fps - turning down settings is not a crime
    Imagine this case (at max details):


    Some people need 100+ fps in BF3 for competitive online play. Why should they play at 2560x1600 with maximum details and TrSSAA if their GPU(s) cannot get them those 100fps? That makes no sense. This applies to all games. I'm not saying people should reduce their settings because fps drop below a certain threshold sometimes. I'm saying, a benchmark should show that. Benchmarking CPUs in GPU bottlenecks is a waste of time since it doesn't give you any new information. Your BF3 benchmark may be fine for someone who only needs around 60fps, but for others who need more, it is useless.

    Since games can be GPU or CPU bottlenecked depending on where you are in the game, what you're doing and what kind of fps you need, you always need two benchmarks. One that shows what your CPU can ultimately do and one separate benchmark for the GPU. One single benchmark will never give you all the information you need. Only by combining these two benchmarks you can really gauge the performance you can get from a certain CPU/GPU combo in all relevant cases.

    If we take the example from above:


    Alt-tabbing can lead to stability problems, that much is true. But it doesn't affect performance in most cases. It's so easy to test that - alt-tab out and in of the game 2 or 3 times and see if the fps stay the same (in a static scene of course where other players or events don't skew results even if you do nothing). If the results are consistent, just go ahead and bench. But I agree with you that usually you should have a certain test environment. Savegames are best for that.

    The results you linked to are worthless imo. For one thing, in-game benchmarks were used, but these often show much higher fps than you can actually see in the game. That is because not always is AI, pathfinding etc. computed in those benchmarks, but they are rather movies instead of real gameplay. For instance the GTA 4 ingame benchmark shows at least twice the fps that are common in the game. Very misleading. And secondly, on top of that you make sure due to the GPU bottleneck that relevant performance differences are just "hidden". That might be irrelevant in the case of 2600K vs 3770K since the differences would be very small anyway. But the method in general is just wrong.
    I'm too tired to read the whole post again.

    You said "some people need XXX fps"... those people are under 10% of the people out there.
    Of those 10%, 90% are professional gamers; these people usually have mediocre graphics cards, by the way, and play at 1280x1024 with all game details low, etc.
    Plus, you don't need a degree to figure out that the bigger the CPU's influence, the more extra MHz or a faster CPU translates into more fps.

    Most of the gamers out there play at the highest settings they can afford given their graphics card's capabilities and what feels smooth to them ( and, for example, in BF3 a minimum of 45 fps / average of 60 fps is more than smooth enough for 90% of the gamers out there ); that's the target audience of the article and of most articles online.
    If you want to figure out something that is straightforward ( that Haswell > Ivy Bridge > Sandy Bridge > AMD FX at low game details and perhaps low resolutions as well ), there are gazillions of CPU reviews with such benches.
    Coding 24/7... Limited forums/PMs time.

    -Justice isn't blind, Justice is ashamed.

    Many thanks to: Sue Wu, Yiwen Lin, Steven Kuo, Crystal Chen, Vivian Lien, Joe Chan, Sascha Krohn, Joe James, Dan Snyder, Amy Deng, Jack Peterson, Hank Peng, Mafalda Cogliani, Olivia Lee, Marta Piccoli, Mike Clements, Alex Ruedinger, Oliver Baltuch, Korinna Dieck, Steffen Eisentein, Francois Piednoel, Tanja Markovic, Cyril Pelupessy (R.I.P. ), Juan J. Guerrero

  25. #200
    Xtreme Enthusiast
    Join Date
    Dec 2010
    Posts
    594
    You cannot really say that. Those optimal (and achievable with the fastest CPUs available) fps values occupy a very wide range depending on the game.
    In BF3 it might be 120 fps, in NFS 2012 it might be around 60, and in Shogun 2 it could be 20-30 in large battles.

    The reviewer should never ever make a judgement about who needs what kind of fps, but instead provide the maximum amount of information possible (within reason - for example, really no one cares whether you have 120 fps or 200 fps; that indeed is irrelevant for 100% of the readers), so everyone who reads the article can take some useful stuff home. You test GPUs in GPU bottlenecks, so why not test CPUs in CPU bottlenecks? That would make sense. Otherwise you risk not showing relevant performance differences.

    CPU benchmarking is difficult and time consuming. You have to know the game, know the demanding spots, and do real gameplay runs with the right settings. Most people unfortunately do it wrong. They use 4:3 resolutions like 640x480 when 16:9 can have a higher CPU load due to more objects being shown (wider field of view than 4:3). They use low settings when those can also reduce CPU load (e.g. view distance, mesh quality, etc.).

