LegitReview GT 240 Batman Preview Nov 17, 2009
LegitReview HD 5870 Batman review Sep 22, 2009
Looks like a big check can change a review big time. We can only guess where the check came from.
Last edited by Heinz68; 11-26-2009 at 08:44 PM.
Core i7-4930K LGA 2011 Six-Core - Cooler Master Seidon 120XL Push-Pull Liquid Water
ASUS Rampage IV Black Edition LGA2011 - G.SKILL Trident X Series 32GB (4 x 8GB) DDR3 1866
Sapphire R9 290X 4GB TRI-X OC in CrossFire - ATI TV Wonder 650 PCIe
Intel X25-M 160GB G2 SSD - WD Black 2TB 7200 RPM 64MB Cache SATA 6
Corsair HX1000W PSU - Pioneer Blu-ray Burner 6X BD-R
Westinghouse LVM-37w3, 37-inch 1080p - Windows 7 64-bit Pro
Sennheiser RS 180 - Cooler Master Cosmos S Case
It's true though. It DOES run 57% faster. It's just misleading, as they're forcing uncommon settings onto the 5870.
Can somebody please explain this to me? I'm DYING here.
90nm G80 - 128 shaders
65nm G92 - 128 shaders
55nm G92b - 128 shaders
65nm G200 - 240 shaders - 576mm2!
40nm GT240 - 96 shaders
nVidia has barely changed the shader count at all... AMD went from 64 to 160 to 320 shader units, each with 5 execution units. AMD didn't even need a die shrink to fit 2.5x more shaders (RV670 -> RV770) and still keep the die small.
Does this mean AMD's shader architecture is more compact and efficient,
or
nVidia's design has technical limitations and overhead?
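To make the counts actually comparable, here's a rough sketch of my own (using the commonly quoted unit counts, not anything from the articles) that expands AMD's VLIW5 "shaders" into individual ALUs next to nVidia's scalar SPs:

Code:
# Rough sketch: expand AMD's VLIW5 "shader" units into individual ALUs
# so the counts line up against nVidia's scalar SPs.
nvidia_sps = {"G80": 128, "G92": 128, "G92b": 128, "G200": 240, "GT240": 96}
amd_vliw5_units = {"R600/RV670": 64, "RV770": 160, "Cypress": 320}

for chip, sps in nvidia_sps.items():
    print(f"{chip}: {sps} scalar SPs")
for chip, units in amd_vliw5_units.items():
    print(f"{chip}: {units} VLIW5 units = {units * 5} ALUs")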
24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
1 GB OCZ Gold (='.'=) 240 2-2-2-5
Giga-byte NF3 (")_(") K8NSC-939
XFX 6800 16/6 NV5 @420/936, 1.33V
Not sure if this was posted here yet or not:
NVIDIA Fun fact of the week: The GF100 board pictured in last week's fun fact is 10.5 inches long -- the same length as GeForce GTX 200 Series graphics cards!
Work Rig: Asus x58 P6T Deluxe, i7 950 24x166 1.275v, BIX2/GTZ/D5
3x2048 GSkill pi Black DDR3 1600, Quadro 600
PCPower & Cooling Silencer 750, CM Stacker 810
Game Rig: Asus x58 P6T, i7 970 24x160 1.2v HT on, TRUE120
3x4096 GSkill DDR3 1600, PNY 660ti
PCPower & Cooling Silencer 750, CM Stacker 830
AMD Rig: Biostar TA790GX A2+, x4 940 16x200, stock hsf
2x2gb Patriot DDR2 800, PowerColor 4850
Corsair VX450
The result they obtained is completely legitimate.
Quite simply, if you put PhysX = High, ANY Radeon will plummet in fps like a stone.
How? Why? Ask TWIMTBP. You'd think a 3.3GHz Core i7, executing 8 threads in parallel, would perhaps be enough to figure out where to render a few dozen swirling pieces of paper. After all, what's a few pieces of paper compared to cranking out 3000fps per core in Unreal Tournament?
The simple and logical conclusion is that some *special* BS is being done to make the GeForces look good.
Another successful example of parking a dump truck full of money at the developer's HQ.
Check the CPU utilization screenshots... they pretty much explain everything:
http://www.tomshardware.com/reviews/...m,2465-10.html
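To put a rough number on the "few dozen pieces of paper" point, here's a naive back-of-envelope sketch of mine (point particles with Euler integration, nothing like a real cloth/PhysX solver) just to show the order of magnitude of per-frame CPU cost:

Code:
# Back-of-envelope only: naive Euler integration of a few dozen "paper"
# particles, nowhere near a real cloth solver.
import random, time

N_PAPERS = 64          # "a few dozen swirling pieces of paper"
FRAMES = 1000
DT = 1.0 / 60.0

pos = [[random.random() for _ in range(3)] for _ in range(N_PAPERS)]
vel = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(N_PAPERS)]

start = time.perf_counter()
for _ in range(FRAMES):
    for p, v in zip(pos, vel):
        v[1] -= 9.81 * DT              # gravity
        for i in range(3):
            p[i] += v[i] * DT          # integrate position
elapsed = time.perf_counter() - start
print(f"{elapsed / FRAMES * 1e6:.1f} microseconds per simulated frame")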
Last edited by ***Deimos***; 11-27-2009 at 04:20 PM.
24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
1 GB OCZ Gold (='.'=) 240 2-2-2-5
Giga-byte NF3 (")_(") K8NSC-939
XFX 6800 16/6 NV5 @420/936, 1.33V
Forgot the 55nm G200b - 240 shaders and 484mm2 - and the supposed G212 w/ 384 shaders on 40nm.
AMD's shaders are more compact and efficient in terms of power consumption and die area.
Nvidia's shaders are also clocked 2x+ higher than the core clock, which helps even out the playing field in terms of shader count and performance.
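A quick clock-adjusted comparison, using the unit counts and clocks already quoted in this thread (theoretical MADD peaks only, nVidia's dual-issue MUL ignored), shows how the hot clock narrows the raw ALU gap:

Code:
# Theoretical MADD throughput: ALUs x shader clock x 2 flops/clock.
cards = {
    "GTX 285": (240, 1.476),       # scalar SPs, hot clock in GHz
    "HD 4890": (160 * 5, 0.850),   # 160 VLIW5 units x 5 ALUs, core clock
}
for name, (alus, clk) in cards.items():
    print(f"{name}: {alus} ALUs x {clk} GHz x 2 = {alus * clk * 2:.0f} GFLOPS")

The Radeon still wins on paper, but the gap shrinks from 3.3x in ALU count to under 2x in peak throughput, and the VLIW5 units are rarely fully packed in practice.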
Hmmm.. what happened to the 9-9.5in board that some people here were so adamant was true?
Last edited by LordEC911; 11-27-2009 at 04:26 PM.
Originally Posted by motown_steve
Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.
Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.
Nice how they picked out Batman for its own unique graph without any proper analysis of the results though, isn't it?
I think the issue is more the paragraphs of text surrounding the graph than the graph itself. In particular, the text right before the Batman graph looks like something I'd read in an NVIDIA press release, not on a 3rd-party website.
I don't know if it's necessarily Nvidia's design that has limitations; rather, ATI's was designed from the start to be super-scalable.
They said that R600 would be the foundation for 3 generations of video cards, and while a lot of people said that R600 was a failed architecture, R700 definitely vindicated the design. R600's failures were more likely due to the fab process and leakage, which killed any chance of higher-clocked cores or the rumored original spec of 480 SPs rather than 320.
That being said, it is true that G92 and GT200 have both been heavily based on the G80 (G92 is basically just a shrink), and Nvidia did hit a wall earlier on the scaling of its design.
Yeah, but even though the shader clock domain is higher, clocks haven't really improved much across G80, G92, G92b, G200, G200b...
nVidia's crazy fantastic "PROGRESS" in clockspeeds
90nm - avg = 562
8800 ULTRA 612/1500
8800 GTX 575/1350
8800 GTS 500/1200
65/55nm (G92/G92b) - avg = 662
8800GT/9800GT 600/1500
8800GTS 650/1625
9800GTX+/GTS250 738/1836
65/55nm (G200/G200b) - avg = 615
GTX280 602/1296
GTX260 55nm 576/1350
GTX275 633/1404
GTX285 648/1476
40nm - avg = 617 (i.e., die shrink = slower?)
GT210 675/1450
GT220 625/1360
GT240 550/1360
Meanwhile, nVidia has yet to beat 740MHz, which ATI/AMD did with the 2900XT, 2600XT, 3870, 3870x2, 4890, 5870, etc.
ATI/AMD clocks aren't improving much either. 850MHz is a tiny improvement over 750MHz, but at least it's not slower.
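For reference, the averages above are just the mean of the listed core clocks; a quick sanity check:

Code:
# Mean of the core clocks listed above, in MHz.
groups = {
    "90nm (G80)":           [612, 575, 500],
    "65/55nm (G92/G92b)":   [600, 650, 738],
    "65/55nm (G200/G200b)": [602, 576, 633, 648],
    "40nm (GT21x)":         [675, 625, 550],
}
for node, clocks in groups.items():
    print(f"{node}: avg = {sum(clocks) / len(clocks):.1f} MHz")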
24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
1 GB OCZ Gold (='.'=) 240 2-2-2-5
Giga-byte NF3 (")_(") K8NSC-939
XFX 6800 16/6 NV5 @420/936, 1.33V
TWIMTBP: is that a misprint? Did you mean TWIMTBG? (see #3)
Core i7-4930K LGA 2011 Six-Core - Cooler Master Seidon 120XL Push-Pull Liquid Water
ASUS Rampage IV Black Edition LGA2011 - G.SKILL Trident X Series 32GB (4 x 8GB) DDR3 1866
Sapphire R9 290X 4GB TRI-X OC in CrossFire - ATI TV Wonder 650 PCIe
Intel X25-M 160GB G2 SSD - WD Black 2TB 7200 RPM 64MB Cache SATA 6
Corsair HX1000W PSU - Pioneer Blu-ray Burner 6X BD-R
Westinghouse LVM-37w3, 37-inch 1080p - Windows 7 64-bit Pro
Sennheiser RS 180 - Cooler Master Cosmos S Case
The first graph looks to be from NVIDIA PR material! No one else puts a full explanation like "Boost Performance by 57% with a dedicated GeForce GT 240" there.
Journalists usually put the name of the benchmark in that place (like on the second graph)!
It's a shame that "Legit" doesn't mention anywhere the fact that the article is PR propaganda from NV!
Adobe is working on Flash Player support for 64-bit platforms as part of our ongoing commitment to the cross-platform compatibility of Flash Player. We expect to provide native support for 64-bit platforms in an upcoming release of Flash Player following the release of Flash Player 10.1.
More importantly, note how fast clocks decline on nVidia's 40nm GPUs with added complexity! Over a 100MHz drop for the most complex 40nm part so far, and it's only a cut-down G92... So a tweaked G92 on 40nm can only clock to 550MHz, but Fermi, which is 5+ times more complex, will reach 650+MHz? (Clock derived from nVidia's FLOPS numbers mentioned at the Supercomputing event.)
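For what it's worth, here's the kind of back-of-envelope that derivation presumably looks like. The 512-SP count, half-rate double precision, and 2x hot clock are my assumptions, and the FLOPS figure is a placeholder for whatever NVIDIA actually quoted at the event:

Code:
# Back-of-envelope: implied clock from a claimed double-precision FLOPS figure.
# Assumptions (mine): 512 SPs, DP at half the SP rate, FMA = 2 flops/clock,
# shader (hot) clock roughly 2x the core clock.
CLAIMED_DP_GFLOPS = 600          # placeholder, not NVIDIA's actual number
DP_UNITS = 512 // 2              # half-rate double precision
shader_clock_ghz = CLAIMED_DP_GFLOPS / (DP_UNITS * 2)
core_clock_mhz = shader_clock_ghz / 2 * 1000
print(f"implied shader clock ~{shader_clock_ghz:.2f} GHz, core ~{core_clock_mhz:.0f} MHz")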
Don't think that's the problem. The X2900XT was 80nm and something like 740MHz. The mid-range X2600XT was a blazing 800MHz... a milestone nVidia has yet to conquer 3 years later.
AMD moved to 55nm early on, at the same time as nVidia launched the 8800GT on 65nm. Likewise, AMD was first to 40nm. So you'd think AMD would be the one with yield problems, right?
Let's compare the last 3 generations of product launches.
X2900XT/PRO - full chip, 600-740 clocks
HD3850/HD3870 - full chip, 668-825 clocks
HD4850/HD4870 - full chip, 625-850 clocks
HD5xxx... the only one where they're not selling all full chips at launch.
nVidia? Crippled chips galore - 8800GTS, 8800GT/9800GT, GTX260... even in their mid-range, where you'd think yields wouldn't be an issue.
Clearly, nVidia's designs are either more susceptible to defects, or tight schedules are pressuring them to cut corners... literally.
Die shrink -> lower clocks??... This happened with every one of AMD's Athlon 64s. Recall that the top-of-the-line FX chips were 130nm. A couple of years later the 90nm parts were offered at the highest clockspeeds, while 65nm went to the mid/low end.
And then of course Phenom 1, but that was more of a power budget issue.
And power budget is clearly no problem for a 32SP 40nm nVidia chip... so how does AMD manage 850MHz on a 2B-transistor chip while nVidia only gets 600MHz, with shaders running barely as high as the original 90nm GTX?
1. They do it on purpose, so Fermi looks good compared to GT220/GT240.
2. A Fermi delay will be announced and they'll ship a 40nm G92 in the meantime.
3. Design not scalable. Could be a problem if even a crippled, cut-down Fermi can only run at (extrapolating)... 400MHz.
4. Simply inexperienced and incompetent engineers. Can't be the "process", since the 4770 was getting great clocks early on, before things were ironed out.
5. Management.
Hate to repeat it so many times, but nVidia's Fermi is way late, their DX10.1 cards are a joke, the huge-die GT200s are sinking profits, and they don't even have the Intel/AMD chipset business to fall back on. If they don't get a PERFECT Fermi out with a whole lineup down to the bottom, it could be not just NV30, but more like 3dfx time.
24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
1 GB OCZ Gold (='.'=) 240 2-2-2-5
Giga-byte NF3 (")_(") K8NSC-939
XFX 6800 16/6 NV5 @420/936, 1.33V
Lots of PhysX games only use the CPU to do physics; only 14 games have GPU PhysX.
So Radeons will do well in most PhysX games.
Last edited by mindfury; 11-28-2009 at 06:28 PM.