In before fanboys
I like large posteriors and I cannot prevaricate
Wow, so the 3770 is within 5% of the 8350 without hyperthreading?
If Crytek gets hyperthreading working it won't even be close. I suppose it's possible they won't patch, but to leave performance lower than it could be for 85% of the market would be an interesting move on their part.
Intel 990x/Corsair H80 /Asus Rampage III
Coolermaster HAF932 case
Patriot 3 X 2GB
EVGA GTX Titan SC
Dell 3008
Wait, so even with one core the CPU only gets ~50% utilization on Haswell... hell, the picture is inconsistent even within AMD's own lineup. Why does an FX-4300 get nearly full utilization with one core while an FX-8350 only gets 63%? It's basically the same chip; the only differences are 200 MHz less clock and 4 MB less L3 cache, and that can't account for 35% less utilization in a single-core scenario. Crytek botched things up again pretty badly.
A big wow at the Intel fanboys; it's just hard for them to accept the result, it seems. Funny stuff
The 8350 gets used 63% and the 2600K 83% without HT. If you want HT to work, which will make at most a 5-10% difference, how about making the 8350 work at 83% as well? Go on, bring up another argument now. Also, the difference between the 3570 and 3770 seems to imply that HT is working somewhat, but whatever.
Yeah, the AMD guys seem to get all jumped up over a single good result, but I find it baffling how the Intel guys can be so blind at times. I remember some arguments about clock-for-clock tests etc. Err, what? The 8350 clocks about 400 MHz faster on average compared to the 3770 anyway, so either compare stock or max overclocked speeds.
Just accept the fact that it is a good result for AMD. Yeah, they suck lots of power and suck at single-threaded apps, yada yada yada. But stop the blindness and accept reality. Jeez
You're funny. How much of a difference will HT make? 75%? Go on, say that you use an AMD system now, but yes, HT will make an enormous difference
Last edited by LightSpeed; 02-21-2013 at 04:46 AM.
i7 920@4.34 | Rampage II GENE | 6GB OCZ Reaper 1866 | 8800GT (zzz) | Corsair AX750 | Xonar Essence ST w/ 3x LME49720 | HiFiMAN EF2 Amplifier | Shure SRH840 | EK Supreme HF | Thermochill PA 120.3 | MCP355 | XSPC Reservoir | 3/8" ID Tubing
Phenom 9950BE @ 3400/2000 (CPU/NB) | Gigabyte MA790GP-DS4H | HD4850 | 4GB Corsair DHX @850 | Corsair TX650W | T.R.U.E Push-Pull
E2160 @3.06 | ASUS P5K-Pro | BFG 8800GT | 4GB G.Skill @ 1040 | 600W Tt PP
A64 3000+ @2.87 | DFI-NF4 | 7800 GTX | Patriot 1GB DDR @610 | 550W FSP
Well, not quite: the 2600K has higher overall utilization than the FX-8350. AMD's module performs inconsistently because of shared resources, leading to one of the cores getting higher utilization. The 2600K, on the other hand, gets similar utilization to one of the cores in AMD's module. I am fairly sure the game is not optimized for the LGA 2011 platform but for LGA 1155, because surveys say that is the dominant platform being sold.
The 2600K performs close to the 3570K, which is included in the first post, and as far as utilization goes I expect the 3570K to have similar stats to the 2600K.
Coming Soon
AMD should be bundling Crysis 3 with FX chips now, lol
Any other benchmarks around besides these 2?
FX-8350(1249PGT) @ 4.7ghz 1.452v, Swiftech H220x
Asus Crosshair Formula 5 Am3+ bios v1703
G.skill Trident X (2x4gb) ~1200mhz @ 10-12-12-31-46-2T @ 1.66v
MSI 7950 TwinFrozr *1100/1500* Cat.14.9
OCZ ZX 850w psu
Lian-Li Lancool K62
Samsung 830 128g
2 x 1TB Samsung SpinpointF3, 2T Samsung
Win7 Home 64bit
My Rig
How is it that it's not utilizing enough cores/threads on Intel CPUs when the top CPU in the chart is a 6C/12T 3.3 GHz SB-E? More likely AMD's approach to sharing the FP hardware (4 SMT-like units shared between 8 cores) is yielding better utilization than Intel's approach (4 SMT-capable cores, 2 threads running on one core).
There was never any mystery. A properly optimized game will run well on both Intel and AMD. Poorly optimized game = Starcraft 2!
What informal says is what I am thinking as well, but it's really hard to say anything about this without knowing what coding Crytek has done in favor of either architecture.
HT gives quite the leap in benchmarks like Cinebench and x264 encoding, about ~30%; remember that a logical core != a physical core. I do not think we can put those two examples in the same boat as gaming loads, but they can paint a picture of the extra performance you could expect if HT were used to its fullest potential.
Now, correct me if I am wrong, but doesn't the graph also suggest that the AMD is almost bottlenecking the graphics card?
But yes, of course Hans is wet, he is standing under a waterfall - James May
Hardware: Gigabyte GA-Z87M-D3H, Intel i5 4670k @ 4GHz, Crucial DDR3 BallistiX, Asus GTX 770 DirectCU II, Corsair HX 650W, Samsung 830 256GB, Silverstone Precision -|- Cooling: Noctua NH-C12P SE14
Interesting.... NOT!
I won't bother with the consistency of the tests (how were they performed, and how many loops?), but the point is, faster or slower, nobody cares about a $200-or-higher CPU playing a game at 720p with low details.
I have been informed that a Lada Niva runs faster than a Ferrari F12 Berlinetta when going down a cliff... that's it, I must buy a Lada, screw the Ferrari...
...or...
What really would be interesting is...
FX-8350 vs 3770K with a GTX Titan at 1080p & 2560x1600 (or even 5760x1080) at Max Details, 2xMSAA, 16xAF.
Because this is where it matters. Are the two CPUs driving the Titan to the same mark, or is one of the CPUs bottlenecking the card (>3fps difference on average framerate, also paying respect to the minimum fps)?
If I had an AMD platform I'd do it, but since AMD doesn't support me, I won't splash my own money to get a mobo & FX-8350 just to do this.
Coding 24/7... Limited forums/PMs time.
-Justice isn't blind, Justice is ashamed.
Many thanks to: Sue Wu, Yiwen Lin, Steven Kuo, Crystal Chen, Vivian Lien, Joe Chan, Sascha Krohn, Joe James, Dan Snyder, Amy Deng, Jack Peterson, Hank Peng, Mafalda Cogliani, Olivia Lee, Marta Piccoli, Mike Clements, Alex Ruedinger, Oliver Baltuch, Korinna Dieck, Steffen Eisentein, Francois Piednoel, Tanja Markovic, Cyril Pelupessy (R.I.P.), Juan J. Guerrero
Look at the i3 CPU load bars; something is obviously not OK.
2x Dual E5 2670, 32 GB, Transcend SSD 256 GB, 2xSeagate Constellation ES 2TB, 1KW PSU
HP Envy 17" - i7 2630 QM, HD6850, 8 GB.
i7 3770, GF 650, 8 GB, Transcend SSD 256 GB, 6x3 TB. 850W PSU
Benchzowner has a point. I wonder if there will be some review sites doing the Titan review with various CPUs in order to see what one needs from his CPU/board platform in order to push the beast to its full potential.
Look at the FX-4300 vs FX-8350: both are Vishera cores, so basically 2M vs 4M, and the FX-4300 has overall better utilization than the FX-8350 in 1-4 thread situations. To me it looks more like Crytek messed up CPU thread dispatching when it detects a CPU with more than 4 threads available.
It has better utilization because the scheduler is bouncing the threads between the various modules on the 8350. On the 4300 there aren't many destinations: either module 0 or module 1. BTW, it's not a given that 4M/4T affinity performs much better than 2M/4T. Some apps/games see tangible benefits (~20%) while others see close to zero. And those that see close to zero don't benefit when threads bounce across 4 modules like a ping-pong ball.
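A toy model of why bouncing depresses the per-core numbers (my own illustration, not measured Crysis 3 data): one fully busy thread spread evenly across N cores shows only 1/N load on each core, even though the thread itself never idles.

```python
def apparent_per_core_load(busy_fraction: float, cores_visited: int) -> float:
    """Per-core load shown by a utilization graph when a thread that is
    busy `busy_fraction` of the time is spread evenly by the scheduler
    across `cores_visited` cores (idealized, perfectly even spread)."""
    return busy_fraction / cores_visited

# One 100%-busy thread pinned to a single core: that core reads 100%.
print(apparent_per_core_load(1.0, 1))  # → 1.0
# The same thread bounced evenly across 4 cores: each reads only 25%.
print(apparent_per_core_load(1.0, 4))  # → 0.25
```

Real schedulers don't migrate threads perfectly evenly, which is why the observed figures (63%, 83%, etc.) land between these two extremes.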
No one is mentioning power consumption...
X2 555 @ B55 @ 4050 1.4v, NB @ 2700 1.35v Fuzion V1
Gigabyte 890gpa-ud3h v2.1
HD6950 2GB swiftech MCW60 @ 1000mhz, 1.168v 1515mhz memory
Corsair Vengeance 2x4GB 1866 cas 9 @ 1800 8.9.8.27.41 1T 110ns 1.605v
C300 64GB, 2X Seagate barracuda green LP 2TB, Essence STX, Zalman ZM750-HP
DDC 3.2/petras, PA120.3 ek-res400, Stackers STC-01,
Dell U2412m, G110, G9x, Razer Scarab
Are you trying to troll me? When only one thread is used, it should be fully utilized on any of the CPUs, regardless of how many cores/modules/threads the CPU has. Especially when full utilization happens on the CPUs with lower core counts:
1 Thread
AMD/Intel up to 4 cores -> ~85-100% load
AMD/Intel more than 4 cores -> ~50-80% load
Nothing has changed in the workload, just the number of cores on the CPU itself.
It would be logical to see a decline in utilization the more threads you run, since a game can never saturate all cores to their fullest. But when the game is limited to one core and loads that thread to 100% on a quad-core yet only to 80% on a hexa-core, while nothing significant has changed, there is something wrong with the game itself.
If anything, comparing within the AMD FX series, the single-thread utilization shouldn't show such a huge difference, especially between the FX-6300 and the FX-8350. The max turbo frequency difference is only 100 MHz (so ~2.5%), and with everything else the same, that should not account for a 20% difference in load.
The picture given here lacks consistency.
http://www.tomshardware.com/reviews/...m,2057-12.html
http://www.overclock.net/t/671977/hy...ading-in-games
http://forums.anandtech.com/showthread.php?t=2149659
Where's the mighty hyperthreading here:
http://images.anandtech.com/graphs/graph6396/51136.png
http://images.anandtech.com/graphs/graph6396/51120.png
Nice to see the Nvidia zealots doing the fighting for the Intel fanboys.
Oh noes AMDZ looking good on Crysis 3, patch it or kill it with fire.
Now now, I guess there was a question in there, and it's easily answered.
Look at the 3770K and 3570K. The 3570K is essentially the same chip without HT. And this is in a benchmark that takes near-full advantage of the feature, which equates to about a 30% performance increase on the high side.
However, just to point out: Bulldozer also did fine at launch in multi-threaded benchmarks, especially considering the price.
Last edited by Kallenator; 02-21-2013 at 07:32 AM.
How are they limiting the game to run on only one core of a hexa-core? By affinity mask? If the game code recognizes that there is a 6C CPU in the system and you limit it artificially via the process manager to use only one core, you cannot blame the game for bad performance in that case...
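For reference, forcing a process onto one core via an affinity mask can be sketched like this (a minimal Linux-only sketch using Python's `os.sched_setaffinity`; pid 0 means the calling process):

```python
import os

# Restrict the current process (pid 0) to logical CPU 0 only,
# mimicking what a process manager does when you set affinity.
os.sched_setaffinity(0, {0})

# The process can still see every core in the machine...
total = os.cpu_count()
# ...but may only run on the cores in its affinity mask.
usable = os.sched_getaffinity(0)

print(usable)  # → {0}
```

That is exactly the mismatch described above: a game that sizes its thread pool from the visible core count will spawn threads for cores the mask no longer lets it use.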
30% is the high end. Usually HT (SMT) can give up to a 30% performance increase. It's not magic pixie dust; it just tries to keep the cores busier, which is a very good approach to extracting performance. But it is not the same as having dedicated hardware to do the job.
Last edited by informal; 02-21-2013 at 07:27 AM.
Originally Posted by motown_steve
Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.
Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.
GPU performance can be limited by CPU bottlenecks. At high resolutions and settings, CPUs make a difference based on how much, or how little, they bottleneck the GPU. A more powerful CPU creates less of a bottleneck, so using high resolutions and settings is a fair method of comparing CPUs.
As was said, no one cares about testing the latest CPUs and a Titan at 720p. No one with a system like that is going to run at that resolution, so it is a meaningless comparison.
No, I am waiting for you to teach me.
Seriously, how can you miss a fricking easy point: REAL-LIFE APPLICATION.
As simply as I can put it: who gives a rat's ass if CPU X is 500% faster at 720p with a GTX Titan, when the same CPU X is 20% slower at 1080p with the Titan?
The FX gets a speedup when similar data sits in a module's instruction cache and ends up being used by both cores of the same module.
The game looks cache-hungry to me, anyway.
One senseless word in the title and this thread is 5 pages and growing - lousy reporting has its advantages
That is very unlikely to happen. If X can keep up with the Titan at 720p then it will surely be able to do so at 1080p, because GPU load increases much more than CPU load as resolution climbs. What you are suggesting is not really a CPU benchmark, though I wouldn't say it's any less useful. In fact, to make it even more relevant to the average customer, I would like to see a 4.5 GHz 3570K vs 4.8 GHz FX-8350 comparison (budget overclock on air) at 1080p with 4xAA 16xAF.
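The argument can be sketched with a toy bottleneck model (made-up numbers for illustration, not benchmark data): the frame rate you observe is capped by whichever of the CPU and GPU is slower for that workload.

```python
def observed_fps(cpu_fps_cap: float, gpu_fps_cap: float) -> float:
    # Each frame needs both CPU work and GPU work; the slower side
    # sets the pace, so the result is the minimum of the two caps.
    return min(cpu_fps_cap, gpu_fps_cap)

# 720p: the GPU cap is huge, so the CPU difference is fully visible.
print(observed_fps(90, 300), observed_fps(120, 300))  # → 90 120
# 2560x1600: the GPU cap drops below both CPU caps and hides the gap.
print(observed_fps(90, 60), observed_fps(120, 60))    # → 60 60
```

The CPU cap barely moves with resolution, which is why a CPU that wins at 720p won't suddenly lose at 1080p; the gap just shrinks toward zero as the GPU becomes the limit.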
Va fail, dh'oine.
"I am going to hunt down people who have strong opinions on subjects they dont understand " - Dogbert
Always rooting for the underdog ...