If these are true, then they might actually be worth the price tag.
Quote:
GTX 280 = X4098
GTX 260 = X3782
What is the source of the Vantage scores?
Yeah, I don't think people realize that the card is nearly 2x the score of the 8800 Ultra at those settings, where the 8800 Ultra still has the edge on the G92 cards (besides the 9800GX2). And 3DMark always seems to be well optimized for multi-GPU setups, so the 9800GX2 will score better there than its real-world game performance, where it's at the mercy of multi-GPU driver optimizations.
Do you speak German?
ok.
And ATI numbers are fake.
http://www.forumdeluxx.de/forum/show...postcount=1378
:(
Yeah, I realize that, but the chart initially showed the 3870 = 4850 and something like X1950 (which I quoted), and that's what I was saying: way too low. It should be 50%+; 80% sounds perfect to me, actually.
As I said, look at what I quoted, the poster fixed it
Once again, I understand this
The problem is (and I'm sure not everyone will agree) that ATI is still a product cycle behind, all due to R600, and arguably it started even earlier than that, with the 1800XT, and they haven't caught up yet. In August when R700 launches, nVidia will just release their dual-chip card. If AMD had a card that performed as well as the GT280, they would sell it for a much higher price. A lower price just means it can't compete performance-wise.
Even with nVidia giving them time with the whole 9800GTX debacle, they were just laughing and perfecting G200.
When ATI's new architecture launches, nVidia will be there with their new card, or with G200 on a smaller process, whatever is necessary to stay on top. Unless nVidia fails with a chip or process, like the FX5800, or AMD drops a bombshell next year with a new architecture (the latter being more likely), I think they'll stay on top; they've got the time and resources to do so, and AMD doesn't necessarily have that. And as we've seen before, because nVidia is a cycle ahead, they can test the smaller process on their midrange cards first and then bring it to the big guns. Perfection.
Technically, Nvidia's 9900GTX WILL be a current-gen card, seeing as Nvidia will not have a GTX 2xx-based card in that range for a little time to come.
In addition, a single 4870X2 w/ GDDR5 should cost around $599 as well and should provide a significant boost over the 3870X2. (960 shaders and 64 TMUs, anyone ;) )
Perkam
How did cj of daamit get the new cards I wonder?
Looks like the HD48xx series will be beaten by the GT2xx series again. But I don't mind; as long as they lower the HD3870 to about $100 when the HD48xx series releases, I'm happy.
The GT200 seems to have only 240 SPs instead of the popular consensus of 256 SPs + 96 TMUs. That is, no redundancy, like the 8800GTX at launch.
They can't really shrink it very efficiently; they have to strike a balance between the bus width (too big) and the ROP count (too many). 384-bit + GDDR5 in the future seems to be the best balance, otherwise they'd still have to use a 400+mm^2 die at 40nm!
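Some napkin math on that bus-width vs. memory-type trade-off (the data rates below are just my assumptions for illustration, not confirmed specs): a narrower bus on GDDR5 can match or beat a wide GDDR3 bus, which is why 384-bit + GDDR5 looks like a sensible middle ground.
Code:
# Rough peak-bandwidth arithmetic: bits per transfer * transfers/s / 8 bits per byte.
# All data rates here are assumed, illustrative values, not confirmed specs.

def bandwidth_gb_s(bus_width_bits: int, effective_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and effective data rate."""
    return bus_width_bits / 8 * effective_rate_gbps

configs = [
    ("512-bit GDDR3 @ ~2.2 Gbps (assumed)", 512, 2.2),
    ("384-bit GDDR5 @ ~3.6 Gbps (assumed)", 384, 3.6),
    ("256-bit GDDR5 @ ~3.6 Gbps (assumed)", 256, 3.6),
]

for name, width, rate in configs:
    print(f"{name}: {bandwidth_gb_s(width, rate):.0f} GB/s")
With those assumed rates, 384-bit GDDR5 comes out ahead of 512-bit GDDR3 while needing fewer memory controller/ROP partitions on the die, which is the balance being argued for above.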
Why people are disregarding the chance that ATI later makes the same RV770, just with double the specs, and still ends up with a small chip compared to the GT200 (the units are small compared to the memory bus), I have no idea. That chip is apparently called the R(V)870, or whatever you want to call it.
Heard the GT200 info from a buddy who has looked at the core die itself, so I'll let you guys go on from there. ;)
That point will never get across, m8; it's useless to try.
Heck, Nvidia had their GTX 280 going up against the 3870 at their editor's day last week :rofl: , so there's no way people will suddenly realize that a $350 card was never meant to compete with a $600 card. :p:
Over time members will see the performance and price and decide for themselves what they "NEED" for their gaming :cool:
Perkam
It doesn't work like that. Look at G80 -> G92: it went from 90nm to 65nm (the same step as the GT200 shrink) BUT was still crazy big (320-330mm^2), and that was with two 64-bit memory controllers taken out.
GT200 won't shrink well. At least the memory bus won't. Even with that out of the way (256-bit), they will still be looking at 350+mm^2 per chip (not the big deal) and 120-150W per chip (BIG deal).
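For what it's worth, here's the rough shrink math behind that (the die sizes are round figures I'm assuming, not official numbers): an ideal optical shrink scales area by the square of the node ratio, and real chips land well above the ideal because pads, analog, and the memory interface barely shrink, which is exactly what G80 -> G92 showed.
Code:
# Idealized die-shrink arithmetic: area scales with the square of the node ratio.
# Die sizes below are assumed round figures, not official numbers.

def ideal_shrink_mm2(area_mm2: float, old_node_nm: float, new_node_nm: float) -> float:
    """Best-case area after an optical shrink; real chips land above this."""
    return area_mm2 * (new_node_nm / old_node_nm) ** 2

g80_area = 484.0    # assumed ~484 mm^2 at 90nm
gt200_area = 576.0  # assumed ~576 mm^2 at 65nm

# 90nm -> 65nm would ideally give ~250 mm^2, yet G92 came in around 320-330 mm^2,
# because the I/O and memory interface don't scale like logic does.
print(f"G80 ideal at 65nm:   {ideal_shrink_mm2(g80_area, 90, 65):.0f} mm^2")

# The same kind of step applied to GT200:
print(f"GT200 ideal at 55nm: {ideal_shrink_mm2(gt200_area, 65, 55):.0f} mm^2")
print(f"GT200 ideal at 40nm: {ideal_shrink_mm2(gt200_area, 65, 40):.0f} mm^2")
The gap between those ideal numbers and what G92 actually measured is the whole point: the wide bus and its ROP partitions are the part that refuses to scale.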