LegitReview GT 240 Batman Preview Nov 17, 2009
LegitReview HD 5870 Batman review Sep 22, 2009
http://img301.imageshack.us/img301/8666/lrhd5870.jpg
Looks like a big check can change a review big time. We can only guess where the check came from.
It's true though. It DOES run 57% faster. It's just misleading, as they're forcing uncommon settings onto the 5870.
Can somebody please explain this to me? I'm DYING here.
90nm G80 - 128 shaders
65nm G92 - 128 shaders
55nm G92b - 128 shaders
65nm G200 - 240 shaders - 576mm2!
40nm GT240 - 96 shaders
nVidia has barely changed the shader count at all... AMD went from 64 to 160 to 320 shader units, each with 5 execution units. AMD didn't even need a die shrink to pack in 2.5x more shaders (RV670 -> RV770) while keeping the die size small.
Does this mean AMD's shader architecture is more compact and efficient,
or
does nVidia's design have technical limitations and overhead?
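Here's a quick back-of-envelope in Python that puts the two counting schemes side by side. The specs are the commonly quoted reference numbers, so treat the output as approximate rather than gospel:
Code:
# Back-of-envelope: raw ALU counts, clocks and peak throughput for the chips
# mentioned above. Reference specs only; treat as approximate.

chips = [
    # name, ALUs ("stream processors"), shader clock (MHz), flops per ALU per clock
    ("G80   (8800 GTX)",  128, 1350, 3),  # NVIDIA counts MAD+MUL = 3 flops/clock
    ("G92   (9800 GTX+)", 128, 1836, 3),
    ("GT200 (GTX 280)",   240, 1296, 3),
    ("RV670 (HD 3870)",   320,  775, 2),  # AMD: 64 VLIW5 units x 5 ALUs, MAD = 2 flops
    ("RV770 (HD 4870)",   800,  750, 2),  # 160 VLIW5 units x 5 ALUs
]

for name, alus, mhz, flops_per_clock in chips:
    gflops = alus * mhz * flops_per_clock / 1000.0
    print(f"{name:18s} {alus:4d} ALUs @ {mhz} MHz -> ~{gflops:.0f} GFLOPS peak")
The point being: AMD counts every ALU inside its VLIW5 units as a "shader", while nVidia's scalar shaders sit in a much faster clock domain, so the raw counts were never directly comparable.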
Not sure if this was posted here yet or not:
Quote:
NVIDIA Fun fact of the week: The GF100 board pictured in last week's fun fact is 10.5 inches long -- the same length as GeForce GTX 200 Series graphics cards!
The result they obtained is completely legitimate.
Quite simply, if you set PhysX = High, ANY Radeon's fps will plummet like a stone.
How? Why? Ask TWIMTBP. You'd think a 3.3GHz Core i7, executing 8 threads in parallel, would be enough to figure out where to render a few dozen swirling pieces of paper. After all, what's a few pieces of paper compared to cranking out 3000fps per core in Unreal Tournament?
The simple and logical conclusion is that some *special* BS is being done to make the GeForces look good.
Another successful example of parking a dump truck full of money at the developer's HQ.
Check the CPU utilization screenshots... they pretty much explain everything:
http://www.tomshardware.com/reviews/...m,2465-10.html
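If anyone wants to check this on their own box instead of trusting screenshots, a throwaway Python script along these lines (it needs the third-party psutil package) will log per-core load once a second while the benchmark runs:
Code:
# Rough per-core CPU load logger: run alongside the benchmark to see whether
# the physics work actually spreads across cores. Needs: pip install psutil
import time
import psutil

for _ in range(60):  # sample once a second for a minute
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    print(time.strftime("%H:%M:%S"), " ".join(f"{p:5.1f}%" for p in per_core))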
Forgot the 55nm G200b with 240 shaders and 484mm2, and the supposed G212 w/ 384 shaders on 40nm.
AMD's shaders are more compact and efficient in terms of power consumption and die area.
Nvidia's shaders are also clocked at 2x+ the core clock, which helps even out the playing field in terms of shader count and performance.
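To put rough numbers on the "compact and efficient" part, here's a quick Python sketch; die sizes and clocks are the commonly quoted figures, and the RV770 area is approximate:
Code:
# Rough perf-per-area comparison of the 2008 flagships.
chips = {
    "GT200 (GTX 280)": {"alus": 240, "shader_mhz": 1296, "flops_per_clk": 3, "die_mm2": 576},
    "RV770 (HD 4870)": {"alus": 800, "shader_mhz": 750,  "flops_per_clk": 2, "die_mm2": 260},
}

for name, c in chips.items():
    gflops = c["alus"] * c["shader_mhz"] * c["flops_per_clk"] / 1000.0
    print(f"{name}: ~{gflops:.0f} GFLOPS over {c['die_mm2']} mm2 "
          f"= ~{gflops / c['die_mm2']:.2f} GFLOPS per mm2")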
Hmmm... what happened to the 9-9.5in board that some people here were so adamant was true?
Nice how they picked out Batman for its own unique graph without any proper analysis of the results though, isn't it :)
I think the issue is more the paragraphs of text surrounding the graph than the graph itself. In particular, the text right before the Batman graph looks like something I'd read on an NVIDIA press release, not on a 3rd party website.
I don't know if it's necessarily Nvidia's design that has limitations; it's more that ATI's was designed from the start to be super-scalable.
They said that R600 would be the foundation for 3 generations of video cards, and while a lot of people said that R600 was a failed architecture, R700 definitely vindicated the design. R600's failures were more likely due to the fab process and leakage, which killed any chance of higher-clocked cores or the rumored original spec of 480 SPs rather than 320.
That being said, it is true that G92 and GT200 have both been heavily based on G80 (G92 is basically just a shrink), and Nvidia did hit a wall earlier on the scaling of its design.
Yeah, but even though the shader clock domain is higher, clocks haven't really improved much across G80, G92, G92b, G200, G200b...
nVidia's crazy fantastic "PROGRESS" in clockspeeds (core/shader, MHz) :rofl::ROTF:
90nm (G80) - avg core = 562
8800 Ultra 612/1500
8800 GTX 575/1350
8800 GTS 500/1200
65/55nm (G92/G92b) - avg core = 662
8800GT/9800GT 600/1500
8800GTS 512 650/1625
9800GTX+/GTS250 738/1836
65/55nm (G200/G200b) - avg core = 615
GTX280 602/1296
GTX260 55nm 576/1350
GTX275 633/1404
GTX285 648/1476
40nm (GT21x) - avg core = 617 (i.e. die shrink = slower?)
GT210 675/1450
GT220 625/1360
GT240 550/1360
nVidia has yet to beat 740MHz, though, which ATI/AMD did with the 2900XT, 2600XT, 3870, 3870X2, 4890, 5870, etc.
ATI/AMD clocks aren't improving much either; 850 is a tiny improvement over 750, but at least it's not slower.
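For anyone who wants to check those averages, the arithmetic is trivial; core clocks in MHz straight from the list (rounding can differ by a MHz from the figures quoted):
Code:
# Average core clock per process node, using the cards listed above.
nodes = {
    "90nm (G80)":       [612, 575, 500],
    "65/55nm (G92/b)":  [600, 650, 738],
    "65/55nm (G200/b)": [602, 576, 633, 648],
    "40nm (GT21x)":     [675, 625, 550],
}

for node, clocks in nodes.items():
    print(f"{node:20s} avg core clock = {sum(clocks) / len(clocks):.0f} MHz")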
TWIMTBP... is that a misprint? Did you mean TWIMTBG? (see #3)
The first graph looks to be straight from NVIDIA PR material! Nobody else would caption it with a full explanation like "Boost Performance by 57% with a dedicated GeForce GT 240".
Journalists usually put the name of the benchmark in that spot (like on the second graph)!
It's a shame that "Legit" doesn't mention anywhere that the article is PR propaganda from NV :(
More importantly, note how fast clocks decline on nVidia's 40nm GPUs as complexity goes up! Over a 100MHz drop for the most complex 40nm part so far, and it's only a cut-down G92... so a tweaked G92 at 40nm can only clock to 550MHz, but Fermi, which is 5+ times more complex, will reach 650+MHz? (Clock derived from nVidia's FLOPS numbers mentioned at the Supercomputing event.)
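For what it's worth, the way such an estimate would be backed out of a FLOPS figure looks roughly like this Python sketch. The 512-core count and half-rate DP come from the GF100 whitepaper; the DP number below is just a placeholder, not NVIDIA's actual quote, and the core clock being about half the hot clock is an assumption:
Code:
# Backing a Fermi clock estimate out of a quoted double-precision FLOPS figure.
cuda_cores = 512          # per the GF100 whitepaper
dp_gflops_quoted = 600.0  # placeholder -- substitute the real SC09 figure

# DP peak (GFLOPS) = cores * 2 flops (FMA) * clock(GHz) / 2 = cores * clock(GHz)
shader_mhz = dp_gflops_quoted * 1000.0 / cuda_cores
core_mhz = shader_mhz / 2  # assumes core clock ~ half the shader (hot) clock

print(f"shader clock ~{shader_mhz:.0f} MHz, core clock ~{core_mhz:.0f} MHz")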
Don't think that's the problem. The X2900XT was 80nm and ran at something like 740MHz. The mid-range X2600XT was a blazing 800MHz... a milestone nVidia has yet to conquer 3 years later.
AMD moved to 55nm early on, at the same time as nVidia launched the 8800GT on 65nm. Likewise, AMD was first to 40nm. So you'd think AMD would be the one with yield problems, right?
Let's compare the last 3 generations of product launches.
X2900XT/PRO - full chip, 600-740 clocks
HD3850/HD3870 - full chip, 668-825 clocks
HD4850/HD4870 - full chip, 625-850 clocks
HD5xxx... the only one where they weren't selling all full chips at launch.
nVidia? Crippled chips galore: 8800GTS, 8800GT/9800GT, GTX260... even in their mid-range, where you'd think yields wouldn't be an issue.
Clearly, nVidia's designs are either more susceptible to defects, or tight schedules are pressuring them to cut corners... literally.
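The "susceptible to defects" part is mostly just geometry. A crude Poisson yield model (yield = exp(-defect_density x die_area)) shows why a big die practically forces you to sell salvaged parts; the defect density below is invented purely for illustration:
Code:
# Crude Poisson yield model: probability a die has zero random defects.
import math

defects_per_cm2 = 0.5  # assumed value, for illustration only
dies = {"GT200 (~576 mm2)": 576, "RV770 (~260 mm2)": 260}

for name, area_mm2 in dies.items():
    area_cm2 = area_mm2 / 100.0
    zero_defect = math.exp(-defects_per_cm2 * area_cm2)
    print(f"{name}: ~{zero_defect * 100:.0f}% of dies come out defect-free")
At the same defect density the small die yields several times more perfect chips, so the big-die vendor has far more incentive to ship partially disabled dies as cut-down SKUs.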
Die shrink -> lower clocks??... this happened with AMD's Athlon 64s too. Recall that the top-of-the-line FX chips were 130nm. A couple of years later the 90nm parts were offered at the highest clockspeeds, while 65nm was for the mid/low end.
And then of course Phenom 1, but that was more of a power budget issue.
And power budget is clearly no problem for a 32SP 40nm nVidia chip... so how does AMD hit 850MHz on a 2B-transistor chip while nVidia manages only 600MHz, with shaders running barely as high as the original 90nm GTX?
1. They do it on purpose, so Fermi looks good compared to GT220/GT240 :ROTF::rofl:
2. A Fermi delay will be announced and they'll ship a 40nm G92 in the meantime :shocked:
3. The design isn't scalable. :shrug: Could be a problem if even a crippled, cut-down Fermi can only run at (extrapolating)... 400MHz.
4. Simply inexperienced and incompetent engineers. Can't be the "process", since the 4770 was getting great clocks early on, before things were ironed out.
5. Management. :mad:
Hate to repeat it so many times, but nVidia's Fermi is way late, their DX10.1 cards are a joke, the huge-die GT200s are sinking profits, and they don't even have the Intel/AMD chipset business to fall back on. If they don't get a PERFECT Fermi out with a whole lineup down to the bottom, it could be not just NV30 all over again, but more like 3dfx time.
Lots of PhysX games only use the CPU to do physics; only 14 games have GPU PhysX.
So, Radeons will do well in most PhysX games.