In before fanboys
Wow, so the 3770 is within 5% of the 8350 without hyperthreading?
If Crytek gets hyperthreading working it won't even be close. I suppose it's possible they won't patch, but to leave performance lower than it could be for 85% of the market would be an interesting move on their part.
Wait, so even with one core the CPU only gets ~50% utilization on Haswell? Hell, the picture is inconsistent even on AMD itself... why does an FX-4300 get nearly full utilization with one core while an FX-8350 only gets 63%? It's basically the same chip; the only differences are 200MHz less clock and 4MB less L3 cache, and that can't account for 35% less utilization in a single-core scenario. Crytek botched things up again pretty badly.
A big wow at the Intel fanboys; it's just hard for them to accept the result, it seems. Funny stuff.
The 8350 gets used 63% and the 2600K 83% without HT. If you want HT to work, which will at most make a 5-10% difference, how about making the 8350 work at 83% as well? Go on, bring up another argument now. Also, the difference between the 3570 and 3770 seems to imply that HT is working somewhat, but whatever.
Yeah, the AMD guys seem to get all jumped up over a single good result, but I find it baffling how the Intel guys can be so blind at times. I remember some arguments about clock-for-clock tests etc. Err, what? The 8350 clocks about 400MHz faster on average compared to the 3770 anyway, so either compare stock or max overclocked speeds.
Just accept the fact that it is a good result for AMD. Yeah they suck lots of power, suck at single threaded apps yada yada yada. But stop the blindness and accept the reality. Jeez
You're funny. How much of a difference will HT make? 75%? Go on, say that you use an AMD system now, but yes, HT will make an enormous difference.
Well, not quite: the 2600K has higher overall utilization than the FX-8350. AMD's modules perform inconsistently because of shared resources, leading to one of the cores getting higher utilization. The 2600K, on the other hand, gets similar utilization to one of the cores from AMD's module. I am very sure the game is not optimized for the LGA 2011 platform but rather for LGA 1155, because surveys say that is the dominant platform being sold.
The 2600K performs close to a 3570K, which is included in the first post, and as far as utilization goes I expect the 3570K to have similar stats to the 2600K.
Amd should be bundling crysis 3 with FX chips now ;) lol
Any other benchmarks around besides these 2?
How is it that it's not utilizing enough cores/threads on Intel CPUs when the top CPU in the chart is a 6C/12T 3.3GHz SB-E? More likely, AMD's approach to sharing the FP hardware (4 SMT-like units shared between 8 cores) is yielding better utilization than Intel's approach (4 SMT-capable cores, 2 threads running on one core).
There was never any mystery. A properly optimized game will run well on both Intel and AMD. A poorly optimized game = Starcraft 2 :D
What infromal says is what I am thinking as well, but it's really hard to say anything about this without knowing what coding Crytek has done in favor of either architecture.
HT gives quite the leap in benchmarks like Cinebench and x264 encoding, about ~30%; remember that a logical core != a physical core. I do not think we can put these two examples in the same boat as gaming loads, but I think it can paint a picture of the extra performance you could expect if HT were used to its fullest potential.
Now, correct me if I am wrong, but doesn't the graph also suggest that the AMD chip is almost bottlenecking the graphics card?
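Just to put that ~30% HT figure into throughput terms, here's a trivial back-of-the-envelope sketch (the per-core score and the yield numbers are made-up illustrations, not measurements from these benchmarks):
[CODE]
# Toy throughput model: SMT does not add cores, it only raises how busy the
# existing cores are. smt_yield is the assumed extra throughput a second
# hardware thread extracts from an already-loaded core (0.3 is the high end).
def effective_throughput(cores, per_core_score, smt_yield):
    return cores * per_core_score * (1.0 + smt_yield)

no_ht   = effective_throughput(4, 1.0, 0.0)   # 4.0 -> a 3570K-style 4C/4T part
with_ht = effective_throughput(4, 1.0, 0.3)   # 5.2 -> same four cores with HT at its best

print(f"HT uplift: {with_ht / no_ht - 1:.0%}")  # ~30%, the high end quoted above
[/CODE]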
Interesting.... NOT!
I won't bother with the consistency of the tests (how were they performed, and how many loops?), but the point is, faster or slower, nobody cares about a $200-or-higher CPU playing a game at 720p with low details.
I have been informed that a Lada Niva runs faster than a Ferrari F12 Berlinetta when going down a cliff... that's it, I must buy a Lada, screw the Ferrari...
...or...
What really would be interesting is...
FX-8350 vs 3770K with a GTX Titan at 1080p & 2560x1600 (or even at 5760x1080) at Max Details, 2xMSAA, 16xAF.
Because this is where it matters. Are the two CPUs driving the Titan to the same mark, or is one of the CPUs bottlenecking the card (>3fps difference in average framerate, while also paying respect to the minimum fps)?
If I had an AMD platform I'd do it, but since AMD doesn't support me, I won't splash my own money to get a mobo & FX-8350 just to do this.
Look at the i3 CPU load bars; something is obviously not OK.
Benchzowner has a point. I wonder if there will be some review sites doing the Titan review with various CPUs in order to see what one needs from his CPU/board platform in order to push the beast to its full potential.
Look at the FX-4300 vs the FX-8350: both are Vishera cores, so basically 2M vs 4M, and the FX-4300 has overall better utilization than the FX-8350 in 1-4 thread situations. To me it looks more like Crytek :banana::banana::banana::banana:ed something up with CPU thread dispatching if it detects a CPU with more than 4 threads available.
It has better utilization because the scheduler is bouncing the threads across the various modules in the 8350 ;). On the 4300 there are not many destinations, either module 0 or module 1. BTW, it's not a given that you get much better performance with 4M/4T affinity vs 2M/4T ;). Some apps/games can see tangible benefits (~20%) while others see close to zero. And those that see close to zero do not benefit when threads bounce across 4 modules like a ping-pong ball.
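For anyone who wants to try that 4M/4T vs 2M/4T affinity experiment themselves, here's a rough Python/psutil sketch; it is not from any of the benchmarks above, the process name is a placeholder, and the mapping of logical CPUs 0/1, 2/3, ... to modules is an assumption you should verify on your own box first:
[CODE]
# Rough sketch of the 4M/4T vs 2M/4T affinity experiment, using psutil.
# Assumption: the OS enumerates the FX-8350 so that logical CPUs (0,1) share
# module 0, (2,3) share module 1, and so on -- verify your own topology first.
import psutil

GAME_EXE = "crysis3.exe"             # hypothetical process name

ONE_CORE_PER_MODULE = [0, 2, 4, 6]   # 4M/4T: each thread gets a module to itself
TWO_FULL_MODULES    = [0, 1, 2, 3]   # 2M/4T: threads share module front-ends

def pin(process_name, cpus):
    """Pin every process matching process_name to the given logical CPUs."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] and proc.info["name"].lower() == process_name:
            proc.cpu_affinity(cpus)
            print(f"pinned pid {proc.pid} to CPUs {cpus}")

# pin(GAME_EXE, ONE_CORE_PER_MODULE)  # benchmark, then repeat with TWO_FULL_MODULES
[/CODE]
Run the same benchmark under both masks and the ~0-20% spread mentioned above should show up directly in the results.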
No one mentioning power consumption... :confused:
Are you trying to troll me? When only one thread is used, it should be fully utilized on any of the CPUs, regardless of how many cores/modules/threads the CPU has. Especially when it happens on CPUs with lower core counts.
1 Thread
AMD/Intel up to 4 cores -> ~85-100% load
AMD/Intel more than 4 cores -> ~50-80% load
Nothing has changed in the workload, just the number of cores on the CPU itself.
It would be logical to see a decline in utilization the more threads you run, since a game can never saturate all cores to their fullest, but when the game is limited to one core and loads one thread to 100% on a quad-core yet only 80% on a hexa-core, while nothing significant has changed, there is something wrong with the game itself.
If anything, if you compare within the AMD FX series, the single-thread utilization shouldn't show such a huge difference, especially between the FX-6300 and the FX-8350. The max turbo frequency difference is only 100MHz (so ~2.5%) while everything else is the same, which should not account for a 20% difference in load.
The picture given here lacks consistency.
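To see that effect for yourself, here's a minimal sketch (assumes Python with psutil installed; nothing here comes from the original benchmarks) that spins one busy worker and samples per-core load. If the OS keeps migrating the thread, no single core ever reads 100%, even though exactly one core's worth of work is being done the whole time:
[CODE]
# Spin one busy worker and watch per-core load: a migrating single thread
# shows up as partial load on several cores rather than 100% on one.
import multiprocessing

import psutil

def busy():
    while True:
        pass  # pure single-threaded load

if __name__ == "__main__":
    worker = multiprocessing.Process(target=busy, daemon=True)
    worker.start()
    # psutil.Process(worker.pid).cpu_affinity([0])  # uncomment to pin it to core 0
    for _ in range(5):
        loads = psutil.cpu_percent(interval=1.0, percpu=True)
        active = sum(1 for x in loads if x > 5)
        print(loads, f"-> ~one core's worth of work spread over {active} cores")
    worker.terminate()
[/CODE]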
http://www.tomshardware.com/reviews/...m,2057-12.html
http://www.overclock.net/t/671977/hy...ading-in-games
http://forums.anandtech.com/showthread.php?t=2149659
Where's the mighty hyperthreading here:
http://images.anandtech.com/graphs/graph6396/51136.png
http://images.anandtech.com/graphs/graph6396/51120.png
Nice to see the Nvidia zealots doing the fighting for the Intel fanboys.
Oh noes AMDZ looking good on Crysis 3, patch it or kill it with fire.
Now now, I guess there was a question in there, and it's easily answerable.
Look at the 3770K and 3570K. The 3570K is essentially the same chip without HT. This is in a benchmark that takes roughly full advantage of the feature, which equates to about a 30% performance increase on the high side.
http://images.anandtech.com/graphs/graph6396/51136.png
However, just to point out: Bulldozer also did fine at launch in multi-threaded benchmarks, especially considering the price.
How are they limiting the game to run on only one core on a hexa-core? By affinity mask? If the game code recognizes that there is a 6C CPU in the system and you limit it artificially via the process manager to use only one core, you cannot blame the game for bad performance in that case...
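If anyone wants to check how a test system was actually limited, here's a small sketch (again Python/psutil, and the process name is just a placeholder, not anything confirmed from the tests) that reads the current affinity mask of a running game:
[CODE]
# Read the current affinity mask of a running game to see whether it was
# restricted externally (process name below is just a placeholder).
import psutil

TARGET = "crysis3.exe"

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] and proc.info["name"].lower() == TARGET:
        allowed = proc.cpu_affinity()           # list of logical CPUs it may use
        total = psutil.cpu_count(logical=True)
        print(f"pid {proc.pid}: allowed CPUs {allowed} of {total}")
        if len(allowed) < total:
            print("-> restricted by an affinity mask, not by the game's own scheduling")
[/CODE]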
30% is on the higher side :). Usually HT (SMT) can give up to a 30% performance increase. It's not magic pixie dust; it just tries to keep the cores busier, which is a very good approach to extracting performance. But it is not the same as having dedicated hardware to do the job.
GPU performance can be limited by CPU bottlenecks. At high resolutions and settings, CPUs make a huge difference based on how much, or how little, they bottleneck the GPU. A more powerful CPU creates less of a bottleneck, so using high resolutions and settings is a fair method of comparing CPUs.
As was said, no one cares about testing the latest CPUs and a Titan at 720p. No one with a system like that is going to be running that resolution so it is a meaningless comparison.
No, I am waiting for you to teach me.
Seriously, how can you miss a fricking easy point: REAL-LIFE APPLICATION.
As simply as I can put it: Who gives a rat's ass if X CPU is 500% faster at 720p with a GTX Titan, when the same X CPU is 20% slower at 1080p with the Titan.
The FX gets a speed-up when similar data sits in a module's instruction cache and ends up being used by both cores of the same module.
The game looks cache-hungry to me, anyway.
One senseless word in the title and this thread is 5 pages and growing - lousy reporting has its advantages ;)
That is very unlikely to happen. If X can keep up with the Titan at 720p, then it will surely be able to do it at 1080p, because the GPU load increases much more than the CPU load as resolution climbs. What you are suggesting is not really a CPU benchmark - though I wouldn't say it's any less useful. In fact, to make it even more relevant to the average customer, I would like to see a 4.5GHz 3570K vs 4.8GHz FX-8350 comparison (budget overclock on air) at 1080p @ 4xAA 16xAF.