Considering the GX2 now costs $549.99, I don't see why this card should start out at $500.
you guys are right. I was misinformed, should have done more research. I guess making the transition from 256 to 512 bit is a pretty complicated process.
My point still stands, though: if you can get one of these for 300 bucks in a month or two (instead of a GTS that costs 250, or even 220ish after rebate),
I think it is a good deal, just for the memory, whose default clock is 2200 MHz. The default memory bandwidth is 128 GB/s. I have an 8800GTS 640
with the memory overclocked to 2060 MHz, and its bandwidth is only 84 GB/s (according to GPU-Z).
I can definitely see how its release is a bit underwhelming, but at its price point I think it is a pretty good deal.
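For anyone who wants to sanity-check these GPU-Z figures, peak memory bandwidth is just bus width times effective memory clock. A minimal sketch in Python, assuming the quoted clocks are effective (DDR) rates and using the 8800GTS 640's 320-bit bus:
Code:
def mem_bandwidth_gbs(effective_clock_mhz: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer * transfers per second."""
    return (bus_width_bits / 8) * effective_clock_mhz * 1e6 / 1e9

# 8800GTS 640 (320-bit bus) with memory overclocked to 2060 MHz effective
print(mem_bandwidth_gbs(2060, 320))   # ~82.4 GB/s, close to the ~84 GB/s GPU-Z reports

# A single 256-bit GPU at 2200 MHz effective, for comparison
print(mem_bandwidth_gbs(2200, 256))   # ~70.4 GB/s per GPU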
It should have been 512-bit... but yeah, NVIDIA going backwards.
Why does the 9800GTX beat the 8800GTX clock for clock, when the 8800GTX has 24 ROPs?
:shrug:
It doesn't mean I won't jump whenever I read
"512-bit" and "Nvidia" in the same sentence. I'd like to see 1024MB and 256 shaders at 3GHz too. :D Oh, and I'd like a nice video transcoding tool as well.
The 9800 is nothing more than an overclocked 8800GTS G92. nVidia should have named the G92s as the 9 series; they messed up naming them 8800GTS again...
The 9800GTX's have a new PCB, so nVidia decided to release them as a new series. :rofl:
Quote:
nVidia should have named the G92s as the 9 series; they messed up naming them 8800GTS again...
I agree. The 8800GT should have been named the 9700GTS, the 8800GTS (G92) should have been skipped, the 9800GTX should have been named the 9800GTS, and a proper 9800GTX should have been released with GDDR4 @ 2.4GHz and core @ 700MHz / shader @ 1800MHz.
Here's a datapoint from my rig:
3 GHz Q6600
8800GTS G92 GPU: 760 MHz Memory: 972 MHz Shader: 1900 MHz
XP 3DMark06 1280x1024: 14,400
http://service.futuremark.com/compare?3dm06=5934528
Tweaktown measured
3 GHz Q6600
9800GTX GPU: 675 MHz Memory: 1100 MHz Shader:1688 MHz
XP 3DMark06 1280x1024: 14,927
http://www.tweaktown.com/articles/13..._xp/index.html
It is obvious that the RAM is becoming a bottleneck for the 8800GTS 512, so the high shader and core clocks can't make up for the lack of bandwidth.
I managed to squeeze 17645 3D Marks out of a Q6600 @4.1GHz and a 8800GTS 512 GPU: 820MHz Memory: 1100MHz Shader: 2000MHz:
http://service.futuremark.com/result...&resultType=14
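To put the bandwidth argument in numbers: both cards use a 256-bit bus, so the relative memory clocks above translate directly into relative bandwidth. A rough sketch in Python using the figures quoted in this post (treating the listed memory clocks as comparable, whatever the DDR convention):
Code:
# Figures quoted above; both cards have a 256-bit GDDR3 bus, so the memory clock
# ratio equals the bandwidth ratio.
gts512_oc = {"core": 760, "shader": 1900, "mem": 972,  "score": 14400}  # 8800GTS G92, overclocked
gtx_9800  = {"core": 675, "shader": 1688, "mem": 1100, "score": 14927}  # 9800GTX, stock

for key in ("core", "shader", "mem", "score"):
    ratio = gtx_9800[key] / gts512_oc[key]
    print(f"9800GTX / 8800GTS 512 {key:>6}: {ratio:.3f}")

# The 9800GTX runs ~11% lower core/shader clocks yet scores ~3.7% higher,
# tracking its ~13% memory (bandwidth) advantage rather than the ALU clocks.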
8800GTS 512 MB vs. 9800GTX
^ deliberately bottlenecked by a 2.7 GHz CPU.
Quote:
but the 8800 GT OC hits nearly the same score as the GX2. Something must be wrong there.
???
http://images.tweaktown.com/imagebank/nv98gtx_g_19.gif
http://www.tweaktown.com/articles/13..._xp/index.html
126 fps for the 9800GTX vs 58 fps for the 8800GT OC in UT3 at 1920x1200 with all settings maxed, according to the article.
Notable absence of the 8800GTS 512 from the article's comparisons :rolleyes:
adamsleath: E6750 @ 3600 MHz, 450x8 with C1E enabled.
Nicksterr's print screen: 9800GTX = 8800GTS 512 MB at identical clocks.
Thanks dsaraolu, finally a direct comparison. 9800GTX = 8800GTS + more o/c headroom + 9 series HD features.
It's strange that Tweaktown has deleted their article about 9800 GTX.
Dsaraolu, thank you very much for these results.
I am hesitating between an 8800 GTX and a 9800 GTX, and what I find strange is the weak gain when the 9800 GTX is overclocked: according to your results, the GPU clock is increased by 7%, the RAM by 16%, and the shader by 7%.
And in 3DMark06, the gains are far weaker:
GT1: +2.2%
GT2: +5%
HDR1: +9%
HDR2: +6%
It looks like the 9800 GTX architecture is not well balanced; something prevents it from showing gains roughly proportional to the clock increases.
For example, my 2900 Pro 1 GB sees roughly proportional gains when overclocked from 600 to 900 MHz (GPU) and 925 to 1240 MHz (RAM):
GT1: +42%
GT2: +44.9%
HDR1: +43.2%
HDR2: +45%
The CPU is an E6850 @ 4140 MHz. I am not trying to show that the 2900 architecture is better than the 9800 one.
Would it be possible for an 8800 GTX owner to show 3DMark06 results at different GPU and RAM frequencies, without changing the CPU clock? I would like to check whether the results are roughly proportional to the clock increases. If they are, the weak 9800 GTX memory bandwidth would be the bottleneck in 3DMark06.
Which would mean that the 8800 GTX would be a better purchase today than the 9800 GTX.
Thanks.
Yes, you are right, but I am right too: I compared the 3DMark06 scores posted just above:
14960 with 830/2075/1250 MHz and 14365 with 775/1938.5/1075 MHz.
Whatever frequencies you choose as the baseline (775/1938.5/1075 MHz or the default clocks), the result is the same: the 3DMark06 gains are far weaker than the frequency increases.
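The comparison being made here is easy to reproduce: compute the relative increase of each clock and of the overall score and see how far apart they are. A small sketch in Python, using the two results quoted in this post (clock order core/shader/memory):
Code:
def pct_gain(new: float, old: float) -> float:
    """Relative increase in percent."""
    return 100.0 * (new / old - 1.0)

# Two 9800GTX results quoted above: 14365 @ 775/1938.5/1075 vs 14960 @ 830/2075/1250
clocks_low  = {"core": 775, "shader": 1938.5, "mem": 1075}
clocks_high = {"core": 830, "shader": 2075.0, "mem": 1250}

for key in clocks_low:
    print(f"{key:>6} clock: +{pct_gain(clocks_high[key], clocks_low[key]):.1f}%")

print(f" score      : +{pct_gain(14960, 14365):.1f}%")
# Roughly +7% core, +7% shader, +16% memory, yet only ~+4% in 3DMark06 overall,
# consistent with the per-test gains quoted earlier (GT1 +2.2%, GT2 +5%, HDR1 +9%, HDR2 +6%).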
Why are sites still using the canned benchmark for Crysis where the resolution on ATI cards is limited to below 1920x1200? With the Crysis benchmarking tool all you have to do is record a custom timedemo, select the proper resolution and voila...a real comparison of how an ATI card does against the Nvidia cards in Crysis high res.
If he used a Q6600 at anything over 3 GHz and that score is in the 16,000s... that's pretty dag on good.