http://pclab.pl/art52489-9.html
It seems performance depends on the levels. The "welcome to the jungle" level can take advantage of more threads and only there AMD can match Intel.
Are there any full benchmarks at commonly-played resolutions and quality? We all know that's where the benchmark matters, in min fps/stutters and such. If I want to see CPU benchmarks I'd head straight to WPrime.
While at it, since you started a new thread, I'll answer informal's question.
Found this through googling: http://techreport.com/review/23324/a...ng-on-the-pc/5
Is there any image quality test (by Anand?) that shows us the difference between OpenCL-accelerated transcoding and the traditional (slow) one? We hear that argument about poorer image quality now and then, but I haven't seen anyone actually show how much worse it is.
No quality difference. The 'poorer quality' argument only applies to fixed-function encoders, i.e. ones that don't use shaders.
Last edited by blindbox; 02-23-2013 at 11:24 PM.
No 3960X?
-PB
-Project Sakura-
Intel i7 860 @ 4.0Ghz, Asus Maximus III Formula, 8GB G-Skill Ripjaws X F3 (@ 1600Mhz), 2x GTX 295 Quad SLI
2x 120GB OCZ Vertex 2 RAID 0, OCZ ZX 1000W, NZXT Phantom (Pink), Dell SX2210T Touch Screen, Windows 8.1 Pro
Koolance RP-401X2 1.1 (w/ Swiftech MCP35X), XSPC EX420, XSPC X-Flow 240, DT Sniper, EK-FC 295s (w/ RAM Blocks), Enzotech M3F Mosfet+NB/SB
Why so different from all the other CPU tests out there? Seems strange, especially those very low fps at that resolution. Something is wrong here.
AMD Phenom II X6 1055T@3.5GHz@Scythe Mugen 2 <-> ASRock 970 Extreme4 <-> 8GB DDR3-1333 <-> Sapphire HD7870@1100/1300 <-> Samsung F3 <-> Win8.1 x64 <-> Acer Slim Line S243HL <-> BQT E9-CM 480W
w0mbat's fanboyish logic:
Test is good for AMD -> Test is valid
Test is good for Intel -> Test is wrong
Ok guys, let's not go down that road again; the previous topic got locked and I suspect some posters got a "PM".
Just what I wanted to see: a 4.5 GHz 3570K vs a 4.7 GHz FX-8350. However, I am having difficulty analyzing the results: graph 1 shows AMD is back in the game, but 2 & 3 show AMD is where it has been for the past few years.
I think there may be more than just CPU performance affecting these results. Some more data (hopefully in English) would be helpful before coming to any conclusions.
Va fail, dh'oine.
"I am going to hunt down people who have strong opinions on subjects they dont understand " - Dogbert
Always rooting for the underdog ...
lol you guys are going to make Buckeye older in a shorter time!
Lay off with the fanboyism.
First of all, I think the test is off. Why would you run the 3770K overclocked and not the FX-8350? C'mon, fair play: sure, the 3770K is more likely to boost, but still, level the playing field. (Referring to "Welcome to the Jungle".)
Second: in the last thread DilTech made it pretty obvious that Crysis 3 is still not very mature and does not take advantage of Hyper-Threading, while using AMD's threads to their full potential.
No wonder, then, and no surprise that AMD is a little ahead, but it is only a matter of time before a patch comes; not a patch to sack AMD, but one that lets Intel CPUs with Hyper-Threading take advantage of their technology.
Competition ranking;
2005; Netbyte, Karise/Denmark #1 @ PiFast
2008; AOCM II, Minfeld/Germany #2 @ 01SE/AM3/8M (w. Oliver)
2009; AMD-OC, Viborg/Denmark #2 @ max freq Gigabyte TweaKING, Paris/France #4 @ 32M/01SE (w. Vanovich)
2010: Gigabyte P55, Hamburg/Germany #6 @ wprime 1024/SPI 1M (w. THC) AOCM III, Minfeld/Germany #6 @ 01SE/AM3/1M/8M (w. NeoForce)
Spectating;
2010; GOOC 2010 Many thanks to Gigabyte!
Yeah, I am with you on this. Maybe it's the same bullcrap Crytek pulled with Crysis 2, where they put tessellation everywhere, so the limiting factor was the GPU, or rather the power of the tessellation unit.
http://techreport.com/review/21404/c...f-a-good-thing
And some levels are more affected than others.
Since when is a 4.7 GHz FX-8350 stock?
A stock i3-3220 is beating a stock FX-4300.
The other benchmark was showing the FX-4300 winning. So which one should we trust?
Also, it is worth mentioning that the i5-3570 smokes an overclocked FX-8350 in two of the three tests. The one test the FX-8350 wins doesn't even matter, because the frame rate on the i5-3570 was already high.
So, what do AMD fanboys think about this? Are they going to say it is fake just because it favors Intel and not AMD?
Last edited by dartaz; 02-24-2013 at 05:14 AM.
Sorry, my bad. I guess I don't work well before breakfast.
I didn't spot the 4.7 GHz in the first chart; I'll go hide now.
More threads = AMD is better
One main thread that do most work = Intel is better
It depends on the load CPU needs to handle
Most applications that need raw CPU power try to scale the load across more threads, but it is difficult, so if it is possible to do the work in one thread they go for that. Applications aren't going to scale to more threads until there is a real advantage.
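The scaling trade-off described above is essentially Amdahl's law: how much a workload speeds up with extra threads is capped by its serial portion. A toy sketch (the fractions are arbitrary illustrative numbers, not measurements from Crysis 3):

```python
def amdahl_speedup(parallel_fraction, threads):
    """Upper bound on speedup when only `parallel_fraction` of
    the work can be spread across `threads` threads."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / threads)

# A frame loop that is only 40% parallel barely benefits from 8 threads,
# while an 80%-parallel loop keeps scaling noticeably longer:
print(round(amdahl_speedup(0.4, 8), 2))  # ~1.54x
print(round(amdahl_speedup(0.8, 8), 2))  # ~3.33x
```

This is why a game with one dominant main thread favors high per-core performance, while a well-threaded one can reward more cores.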
The difference is that they used other levels besides "Welcome to the Jungle".
Look up the results for "Welcome to the Jungle" from PCGH and this one; they are quite similar. In fact they match each other quite well when you look at the stock results from both.
I don't consider a difference of +/- 1 fps more than the margin of error. The only real difference is the FX-6300 vs the 3570, where the latter scores a slightly higher average fps and slightly better performance than the 6300.
It's the application that is multithreaded. The idea that they have more threads running on one level and fewer on another is not correct (only the load should vary from scene to scene). As Hornet331 pointed out, this is more likely some Crytek bull. If I had to predict, those graphs would be lining up blue on top and red below on all levels after some patches.
How many threads can Crysis 3 support? Up to 12 threads?
CPU : Athlon X2 7850,Clock:3000 at 1.20 | Mobo : Biostar TA790GX A2+ Rev 5.1 | PSU : Green GP535A | VGA : Sapphire 5770 Clock:910,Memory:1300 | Memory : Patriot 2x2 GB DDR2 800 CL 5-5-5-15 | LCD : AOC 931Sw
The more I think of this, the more I like it.
While I doubt devs are going to start making games highly threaded, if they did it might drive Intel to increase core counts. For my part, I'd love to start buying AMD CPUs again if they were competitive in gaming.
I wonder if these scores translate to competitive multi GPU at this game?
Intel 990x/Corsair H80 /Asus Rampage III
Coolermaster HAF932 case
Patriot 3 X 2GB
EVGA GTX Titan SC
Dell 3008
The thing with C3 and testing CPUs is very simple. It's all about finding a place that has ANYTHING to do with CPU.
"Welcome to the Jungle" shows AMD equal to the Core i7-3770K, and it is. Too bad that place is not important at all: there's nothing happening and you get a constant 70 fps. Why would you test a CPU there? The CPU is not the limiting factor. Amateurs.
Situation completely changes when you find a place that's CPU limited.
PCGH was testing a performance of a graphics card, not a CPU.
Conclusion is: don't try testing CPU if you have no ing idea how to do that. And Pcgameshardware have no idea.
And to make that clear, because someone would have come up with that sooner or later: yes, i'm from pclab.
Last edited by c22; 02-24-2013 at 08:20 AM.
Sigh. So much for the last hope of AMD.
Not necessarily. Different level, different workload, different result.
You're quite wrong. PCGH is the no. 1 in CPU benchmarks. Just because the CPUs were close together doesn't imply a CPU bottleneck. Look at the resolution, 720p.
Also, you made quite a grave mistake:
1024x768 is a 4:3 resolution. CPU performance can depend on aspect ratio, because if you have a wider field of view, more objects will be displayed at the edges, thus increasing CPU load. But you're a professional, right?
16:9 resolutions should be used to benchmark CPUs, and because 1080p is rather GPU bottlenecked, 720p is the best choice. In most cases it is more demanding than 800x600 or 1024x768.
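The aspect-ratio point can be made concrete with basic trigonometry: at a fixed vertical field of view, a wider aspect ratio yields a wider horizontal field of view, so more objects fall inside the frustum and the CPU has more work to submit. A sketch (the 60° vertical FOV is an arbitrary illustrative value, not Crysis 3's actual setting):

```python
import math

def horizontal_fov(vertical_fov_deg, aspect_ratio):
    """Horizontal FOV implied by a fixed vertical FOV and an aspect ratio."""
    v = math.radians(vertical_fov_deg)
    return math.degrees(2.0 * math.atan(math.tan(v / 2.0) * aspect_ratio))

# Same vertical FOV, different aspect ratios:
print(round(horizontal_fov(60, 4 / 3), 1))   # 4:3, e.g. 1024x768 -> ~75.2 deg
print(round(horizontal_fov(60, 16 / 9), 1))  # 16:9, e.g. 1280x720 -> ~91.5 deg
```

Roughly 16 extra degrees of horizontal view at 16:9, which is why the two resolution classes are not directly comparable for CPU load.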
Last edited by boxleitnerb; 02-24-2013 at 09:02 AM.
I think that Crysis 3 is GPU limited all over.
The graphs you showed use four cores or fewer and probably one render thread. It is easy to spot that mostly one thread is doing most of the work. Increase the resolution to 1080p and there will be similar performance on all CPUs.
Tell me something I don't know.
Of course C3 is GPU limited. That's why pclab made a test in 3 different places using 2 different resolutions.
PCGH is showing us only 1/3 of the truth, but it doesn't matter because "PCGH is the no. 1 in CPU benchmarks", right? Time to look around and verify some vortals, because they're apparently not even close to being "no. 1". Far from it.
Last edited by c22; 02-24-2013 at 10:03 AM.
And if you play at higher resolutions, CPUs still bottleneck GPU performance, so it's still a fair method of analysing CPU performance.
Testing at these low resolutions is stupid, as are the reasons that people want me to believe as to why it is a valid way of testing CPU performance.
No one puts a GTX 680 and a current 4-8 core CPU together only to play games at 1024x768. Show some real results from at least 1080p.