In before fanboys
Wow, so the 3770 is within 5% of the 8350 without hyperthreading?
If Crytek gets hyperthreading working it won't even be close. I suppose it's possible they won't patch, but to leave performance lower than it could be for 85% of the market would be an interesting move on their part.
Wait, so even with one core the CPU only gets ~50% utilization on Haswell... hell, the picture is inconsistent even within AMD itself. Why does an FX-4300 show nearly full utilization with one core while an FX-8350 shows only 63%? It's basically the same chip; the only differences are 200 MHz less clock and 4 MB less L3 cache, and that can't account for 35% less utilization in a single-core scenario. Crytek botched things up again pretty badly.
A big wow at the Intel fanboys; it's just hard for them to accept the result, it seems. Funny stuff.
The 8350 gets 63% utilization and the 2600K 83% without HT. If you want HT to work, which will at most make a 5-10% difference, how about making the 8350 reach 83% as well? Go on, bring up another argument now. Also, the difference between the 3570K and 3770K seems to imply that HT is working somewhat, but whatever.
Yeah, the AMD guys seem to get all jumped up over a single good result, but I find it baffling how blind the Intel guys can be at times. I remember some arguments about clock-for-clock tests etc. Err, what? The 8350 clocks about 400 MHz faster on average than the 3770K anyway, so either compare stock or max overclocked speeds.
Just accept the fact that it is a good result for AMD. Yeah they suck lots of power, suck at single threaded apps yada yada yada. But stop the blindness and accept the reality. Jeez
You're funny. How much of a difference will HT make? 75%? Go on, say that you use an AMD system now, but yes, HT will make an enormous difference.
Well, not quite: the 2600K has higher overall utilization than the FX-8350. AMD's module performs inconsistently because of shared resources, leading to one of the cores in a module getting higher utilization; the 2600K, on the other hand, gets utilization similar to one of the cores from an AMD module. I am quite sure the game is optimized not for the LGA 2011 platform but for LGA 1155, because surveys say that is the dominant platform being sold.
The 2600K performs close to the 3570K, which is included in the first post, and as far as utilization goes I expect the 3570K to have similar stats to the 2600K.
AMD should be bundling Crysis 3 with FX chips now ;) lol
Any other benchmarks around besides these 2?
How is it not utilizing enough cores/threads on Intel CPUs when the top CPU in the chart is a 6C/12T 3.3 GHz SB-E? More likely, AMD's approach to sharing the FP hardware (4 SMT-like FP units shared between 8 cores) is yielding better utilization than Intel's approach (4 SMT-capable cores, 2 threads running on one core).
There was never any mystery. A properly optimized game will run well on both Intel and AMD. Poorly optimized game = Starcraft 2 :D
What infromal says is what I am thinking as well, but it's really hard to say anything about this without knowing what coding Crytek has done in favor of either architecture.
HT gives quite a leap in benchmarks like Cinebench and x264 encoding, about ~30%; remember that a logical core != a physical core. I do not think we can put these two examples in the same boat as gaming loads, but they can paint a picture of the extra performance you could expect if HT were used to its fullest potential.
Now, correct me if I am wrong, but doesn't the graph also suggest that the AMD chip is almost bottlenecking the graphics card?
Interesting.... NOT!
I won't bother with the consistency of the tests (how were they performed, and how many loops?), but the point is, faster or slower, nobody cares about a $200-or-higher CPU playing a game at 720p with low details.
I have been informed that a Lada Niva runs faster than a Ferrari F12 Berlinetta when going down a cliff... that's it, I must buy a Lada, screw the Ferrari...
...or...
What really would be interesting is...
FX-8350 vs 3770K with a GTX Titan at 1080p & 2560x1600 (or even 5760x1080) at max details, 2xMSAA, 16xAF.
Because this is where it matters. Are the two CPUs driving the Titan to the same mark, or is one of the CPUs bottlenecking the card (>3 fps difference on average framerate, also paying respect to the minimum fps)?
If I had an AMD platform I'd do it, but since AMD doesn't support me, I won't splash my own money to get a mobo & FX-8350 just to do this.
Look at the i3 CPU load bars; something is obviously not OK.
Benchzowner has a point. I wonder if there will be some review sites doing the Titan review with various CPUs in order to see what one needs from his CPU/board platform in order to push the beast to its full potential.
Look at the FX-4300 vs FX-8350: both are Vishera cores, so basically 2M vs 4M, and the FX-4300 has overall better utilization than the FX-8350 in 1-4 thread situations. To me it looks more like Crytek messed something up with CPU thread dispatching when it detects a CPU with more than 4 threads available.
It has better utilization because the scheduler is bouncing the threads on various modules in 8350 ;). On 4300 there is not many destinations,either module 0 or module 1. BTW it's not a given to have much better performance on 4M/4T affinity vs 2M/4T ;). Some apps/games can see tangible benefits(~20%) while others see close to zero. And those that see close to zero are not benefited when threads bounce on 4 modules like a ping pong ball.
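The scheduler-bouncing point above is exactly what an affinity mask is for. A minimal sketch of pinning a process yourself (Linux-only, using `os.sched_setaffinity`; the CPU ids and module pairings are illustrative assumptions, not taken from the benchmark):

```python
import os

def pin_to_cpus(cpus):
    """Restrict the calling process to a fixed set of logical CPUs,
    so the OS scheduler cannot bounce its threads anywhere else."""
    os.sched_setaffinity(0, cpus)    # 0 = the calling process
    return os.sched_getaffinity(0)   # read back the effective mask

# e.g. pin_to_cpus({0, 1}) would emulate a 1M/2T (one-module) affinity,
# while pin_to_cpus({0, 2, 4, 6}) would spread 4 threads over 4 modules
# (assuming siblings of a module are numbered consecutively).
```

Whether 4M/4T beats 2M/4T then depends on the workload, as described above: some apps gain ~20% from having each thread on its own module, others gain nothing.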
No one mentioning power consumption... :confused:
Are you trying to troll me... when only one thread is in use, it should be fully utilized on any of the CPUs, regardless of how many cores/modules/threads the CPU has. Especially when it happens on CPUs with lower core counts.
1 Thread
AMD/Intel up to 4 cores -> ~85-100% load
AMD/Intel more than 4 cores -> ~50-80% load
Nothing has changed in the workload, just the number of cores on the CPU itself.
It would be logical to see a decline in utilization the more threads you run, since a game can never saturate all cores to their fullest. But when the game is limited to one core and loads one thread to 100% on a quad core but only 80% on a hexa core, while nothing significant has changed, there is something wrong with the game itself.
If anything, comparing within the AMD FX series, the single-thread utilization shouldn't show such a huge difference, especially between the FX-6300 and the FX-8350. The max turbo frequency difference is only 100 MHz (~2.5%), and everything else is the same; that should not account for a 20% difference in load.
The picture given here lacks consistency.
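For what it's worth, part of the inconsistency can come from how utilization is reported rather than from the game. A toy calculation (my own illustration, not from the benchmark): an "overall CPU" readout divides by the logical core count, so the very same single busy thread reads lower the more cores a chip has:

```python
def overall_utilization(busy_threads, logical_cores):
    """Percent 'overall CPU' a task-manager-style readout shows when
    `busy_threads` threads each fully load one logical core."""
    return min(busy_threads, logical_cores) / logical_cores * 100

# One fully busy thread, nothing else changed:
#   4 logical cores -> 25.0% overall
#   8 logical cores -> 12.5% overall
```

Per-core bars avoid that division, but a thread bouncing between cores still smears its 100% across several bars, so none of them individually reads full.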
http://www.tomshardware.com/reviews/...m,2057-12.html
http://www.overclock.net/t/671977/hy...ading-in-games
http://forums.anandtech.com/showthread.php?t=2149659
Where's the mighty hyperthreading here:
http://images.anandtech.com/graphs/graph6396/51136.png
http://images.anandtech.com/graphs/graph6396/51120.png
Nice to see the nvidia zealots doing the fight for Intel fanboys.
Oh noes AMDZ looking good on Crysis 3, patch it or kill it with fire.
Now now, I guess there was a question in there, and it's easily answerable.
Look at the 3770K and 3570K. The 3570K is essentially the same chip without HT. This is in a benchmark that takes near-full advantage of the feature, which equates to about a 30% performance increase on the high side.
http://images.anandtech.com/graphs/graph6396/51136.png
However, just to point out. Bulldozer also did fine at launch in Multi-Threaded benchmarks, especially considering the price.
How are they limiting the game to run on one core only on a hexa core? By affinity mask? If the game code recognizes that there is a 6C CPU in the system and you limit it artificially via a process manager to use only one core, you cannot blame the game for bad performance in that case...
30% is on the high side :). Usually HT (SMT) can give up to a 30% performance increase. It's not magic pixie dust; it just tries to keep the cores busier, which is a very good approach to extracting performance, but it is not the same as having dedicated hardware to do the job.
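That "up to 30%" ceiling can be put into a back-of-the-envelope model (purely illustrative; the 0.3 factor is the hedged figure from this thread, not a measured constant):

```python
def smt_throughput(threads, cores, smt_gain=0.3):
    """Rough core-equivalents of throughput when `threads` busy threads
    run on `cores` physical cores, each core able to host one extra
    SMT thread worth `smt_gain` of a full core."""
    primary = min(threads, cores)                    # one thread per core
    secondary = min(max(threads - cores, 0), cores)  # SMT siblings
    return primary + secondary * smt_gain

# 8 threads on a 4C/8T part: 4 + 4*0.3 = 5.2 core-equivalents,
# i.e. about 30% more than the 4.0 a plain 4C/4T part delivers,
# but well short of the 8.0 that 8 dedicated cores would give.
```

Which is exactly the point: SMT fills idle execution slots, it doesn't add hardware.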
GPU performance can be limited by CPU bottlenecks. At high resolution and settings, CPUs make a huge difference based on how much, or how little they bottleneck the GPU. A more powerful CPU would create less bottleneck, so using high resolutions and settings is a fair method of comparing CPUs.
As was said, no one cares about testing the latest CPUs and a Titan at 720p. No one with a system like that is going to be running that resolution so it is a meaningless comparison.
No, I am waiting for you to teach me.
Seriously, how can you miss a fricking easy point: REAL-LIFE APPLICATION.
As simply as I can put it: Who gives a rat's ass if X CPU is 500% faster at 720p with a GTX Titan, when the same X CPU is 20% slower at 1080p with the Titan.
FX gets a speedup when similar data sits in a module's instruction cache and ends up feeding both cores of the same module.
The game looks cache-hungry to me, anyway.
One senseless word in the title and this thread is 5 pages and growing - lousy reporting has its advantages ;)
That is very unlikely to happen. If X can keep up with a Titan at 720p, then it will surely be able to at 1080p, because GPU load increases much more than CPU load as resolution climbs. What you are suggesting is not really a CPU benchmark, though I wouldn't say it's any less useful. In fact, to make it even more relevant to the average customer, I would like to see a 4.5 GHz 3570K vs 4.8 GHz FX-8350 comparison (budget overclock on air) at 1080p with 4xAA/16xAF.
Still is a CPU benchmark as long as the rest of the components are the same.
And since overclocking the CPU can make a difference for Intel 2nd & 3rd generation Core processors with a single GTX 680, the gap could be wider with a Titan or an SLI/CF config (I hate them, but since people use those, putting them to the test as well would be nice).
I have seen reviews that could be used to compare a GTX 680 with an FX-8150 and a 2600K, but can't be sure about the results ( validity, consistency, testing routines, configs, etc ).
I'd really like to do it myself, but can't be arsed to spend more money on tech, especially one I don't need and has nothing to offer me at all for the time being at least ( perhaps somebody might lend me an AMD FX platform to do those tests, we'll see ).
By disabling cores in the BIOS, e.g.
I doubt they go through the hassle of assigning affinity masks every time (which often get ignored if you do it via Windows Task Manager).
Even when they do the assignment via Task Manager, a single thread should always be maxed out, even more so on AMD than Intel (though neither should matter on an HT-aware system), but it isn't. It works for dual and quad cores; for everything above that, the results are just a mess.
As I said, I have distrusted Crytek ever since the Crysis 2 fiasco.
How am I a "NVIDIA zealot"? I have one AMD box, one NVIDIA box, giving AMD equal space. Bought one 680, one 7970.
You see me around here posting "OMGWTFBBQ?!?" about Titan? No, I stated they're nice, won't be buying.
On the other hand, if the "true" 8000 series came out, I'd buy one. (at least at $500-$600, no $1000 for AMD either)
Just saying the game is obviously poorly optimized for hyperthreading; were it optimized for hyperthreading, the AMD results wouldn't be so remarkable.
And for better or worse, hyperthreading is what most of the market has for threading beyond 4 cores (although I personally have an Intel hex core).
Around 60-80% of my previous CPUs and GPUs were AMD / ATi.
However, as Rollo put it, currently I wouldn't put an AMD CPU into my gaming rig even if you paid me to. I make decisions on what to buy from benchmarks, and every OTHER benchmark from every site on the internet shows that AMD CPUs have been worse for gaming than Intel ever since S775.
If AMD could get back to as good as they were in the S939 days, I'd use more of their CPUs without hesitation. I remember paying 350 GBP for an Athlon X2 4400+. It lasted me almost 4 years because it was that good. Currently my i7 980 is likely going to last me at least double that time. There's nothing AMD can currently do to make me consider ditching it for one of their CPUs.
Not one person in this thread has complained about AMD doing well, rather the complete opposite. People who can think rationally have simply noticed that Intel CPUs are vastly under performing and that this should be fixed.
Oh, well thats true too, though I meant that people were saying positive things about AMD CPUs doing well, not negative.
I wish I'd never bothered wasting time with the E8400 and i7 920 now; I should have just gone straight from my X2 4400+ to the i7 980.
In the future I'm just going to put the best CPU architecture into my PC and be sorted for a good long time, like I was with those two CPUs. Well, actually I paid less for the i7 920 + 980 than a 980X would have cost me, so that was OK; just the C2D in between was a waste.
There are enough gamers with DirectX 11 now that games may skip previous versions, and that means they can improve performance using more threads.
DX10 and earlier couldn't use multi-threaded rendering; more cores will increase performance in DX11.
This thread :lol:
-PB
I really hope more new games follow suit and are multi-threaded like Crysis 3. Words can't explain how pissed off I was that I could only utilize 80% of my 6970 CrossFire setup when I first got my FX CPU; no games could use all the cores. Hey, this could turn out like when AMD put out the 64-bit Opteron and Athlon and Intel reacted with the updated Xeon and Pentium 4...
On page 3, still reading, and with a huge smile on my face.... I actually think its funny and sad..
w0mbat, you are a fanboy, no shame in that, but really, your approach is extremely biased, and you keep slapping everyone who suggests that it is not total über domination... Take a chill pill, and leave the XS news section alone if you feel the urge to pick a war with anyone seeing things differently than you do.
As for the news: I think software patches will even things out, but time will tell. Nonetheless, it's nice to see that Bulldozer/Vishera isn't a complete fail, because... it was just not what we expected it to be at launch.
You guys, just go get a 120 Hz monitor and you'll feel a lot better... seriously, you will.
I think it's time to get the Ban Hammer ready.
Lets knock it off with the attacks.
Is there any image quality test (by Anand?) that shows us the difference between OpenCL-accelerated transcoding and the traditional (slow) way? We seldom hear that argument about poorer image quality, and I haven't seen anyone actually show us how much worse it is.
An important aspect of this for anyone with multi GPU would be if this prowess on single GPU translates to multi GPU at this game as well.
We've all seen benches where the AMD chips are close in single-GPU but fall behind in multi-GPU, so does the highly threaded Crysis 3 reverse that trend?
As a multi GPU owner, I'd be interested.
Its been fun watching but time for the lock on this one.