It's not the 2x performance I expected, but it's a nice 30% improvement. I guess it's worth selling my X1900XTX and upgrading to an 8800GT if the swap costs me under $50.
Where do you sell old Video Cards for $200?
Nice, looks like it might be the first time I choose Palit. :D
I ordered my 8800GT Superclocked (couldn't wait for the SSC any longer)
from ZipZoomFly. It seems like a really good deal: $279 and free shipping, and I got 2-day shipping for only $2.99! Haha, can't wait until it comes... it should be here by tomorrow.
http://www.hardwarezone.com/img/data...P888_D_400.jpg
Specifications
Calibre P888
NVIDIA GeForce 8800 GT
675 MHz Core clock
1800 MHz Memory clock
512MB GDDR3
256-bit Memory bus
112 SPs
1728 MHz Shader clock
PCI-Express 2.0
400 MHz RAMDAC
Dual DVI-I
Resolution up to 1920 x 1080i
HDCP: Yes
Source
I think SLI may work fine with the Palit. Look at the card: it's dual slot instead of single slot, but the cooler doesn't seem any taller than the slot.
http://img150.imageshack.us/img150/8246/palittm9tc2.jpg
I think it's the same situation as running two G80 8800GTS cards with the stock cooler (dual slot) in SLI.
regards
Hmm, I like the free space on the card that this cooler creates; it leaves plenty of room for further cooling improvements, such as attaching an 80mm fan over the right edge of the card using the unused screw holes in the corner. I did the same mod to my 6800GT while I was still on the stock cooler, and it actually helped by quite a few degrees (~7°C or so, I think) and was almost a one-minute job.
So yes, the reference NVIDIA cooler design seems to be pretty poor if even a Zalman VF700 copy brings a noticeable improvement. And judging by all the temps measured in reviews, that seems to be the case.
I'm surprised no one has tested the effects of PCIe 2.0, especially in an SLI config.
Anyone here know what "shortly" means to an EVGA customer service rep? LOL. I called to check on the status of the SSC I ordered yesterday morning with way overpriced overnight shipping. The rep indicated it would ship "shortly"; however, when asked whether shortly meant a few days, a day, or today, the rep only said "shortly." WOW, talk about being helpful. What azzhats.
I'm using a Palit 8600GT Sonic; the cooler looks the same and it really does take just two slots. SLI won't be a problem with this card.
But if it's going to be a Palit, I'd wait for the Sonic version.
Palit has already listed it on their website:
>> http://www.palit.biz/en/press_center...s_20071029.htm
>> http://www.palit.biz/en/products/nv_pcie_8800GT.html
:up:
those palits look pimp
For comparison - 8800GTS 640:
http://aycu40.webshots.com/image/311...5899525_rs.jpg
That 8800GT score looks a bit low for that CPU frequency. Not by much, but I'd expected around 300-400 points more.
EDIT: NVM just saw his services list. Ewww, ~20 services 24/7 usage ftw. :p:
736/1083/1712 shader... that is very fast and quite a bit higher than most GTS cards.
I had two XFX cards and the most I could overclock them to (watercooled) was 688/1000/1620 shader.
I think if you were to compare the two, you would need to find a GT that's overclocked just as hard (750+/1850/2100+) to really compare, since your card has quite an unusual overclock.
http://resources.vr-zone.com/floppyimage/8800gt/oc.jpg
Look how much difference the OC to the shaders makes...
Can you OC the shaders on the old GTS? And if so, why did I sell mine :rofl: !! :rofl:
Guffaw, chuckle... but anyway, the new cards are 65nm :shrug: ... maybe a cooling advantage.
And as has been shown, some games (at particular settings) get up to a 30% performance increase.
http://www.xtremesystems.org/forums/...ghlight=8800gt
Quote:
Especially that "shader @ 1770, rest stock" bar... holy moly.
The shader OC accounts for 80.5% of the performance increase, according to this graph:
14190 - 12495 = 1695 (full OC vs. stock)
13858 - 12495 = 1363 (shader OC only vs. stock)
1363 / 1695 = 0.804
And I like the look of the Sparkle Calibre with a double fan :)
Yes, with RivaTuner:
Quote:
NVIDIA G80 based GPU shader clock speed adjustment using 163.67 drivers and RivaTuner 2.04
>> http://forums.guru3d.com/showthread.php?t=238083
NVIDIA G80 based GPU shader clock speed adjustment using 163.67 drivers and RivaTuner 2.04
======================================================================
Overview/Background
---------------------
Prior to NVIDIA driver release 163.67, the shader clock speed was linked to the core clock (aka ROP domain clock) speed and could not be changed independently. The relationship between core and shader domain clock speeds (for most cards) is shown in Table A. Some cards have slightly different set frequency vs. resultant core/shader speeds, so take the table as an illustration of how the shader clock changes with respect to the core clock rather than as precise values. To overclock the shader speed it was necessary to flash the GPU BIOS with a modified version that sets a higher default shader speed.
By way of an example, my 8800 GTS EVGA Superclocked comes from the factory with BIOS-programmed default core and shader speeds of 576 and 1350, respectively. When increasing the core speed, I found 648 to be my maximum stable speed. From Table A, you can see that with a core of 648 the maximum shader speed (owing to the driver-controlled core/shader speed linkage) is 1512. To push it higher you increase the BIOS-set shader speed. For example, with a BIOS set to core/shader 576/1404 (from 576/1350), all linked shader speeds are bumped up by 54 MHz. So now when increasing the core to 648, the maximum shader speed becomes 1512+54=1566. I eventually determined my maximum stable shader speed to be 1674 (achieved with GPU BIOS startup speeds set to 576/1512; overclocking the core to 648 now yields a shader speed of (1512-1350)+1512=1674).
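To make the linkage arithmetic above easier to follow, here is a minimal Python sketch of the same calculation. The small core-to-shader mapping is just a couple of entries lifted from Table A below, and the function name is made up for the illustration; it does not interact with the driver in any way.
Code:
# Sketch of the pre-163.67 linked shader behaviour described above.
# LINKED_SHADER holds a few "resultant core -> linked shader" points from Table A.
LINKED_SHADER = {576: 1350, 612: 1404, 648: 1512}

def effective_shader(core_mhz, bios_shader=1350, stock_shader=1350):
    """Shader speed the old linked behaviour gives at this core clock,
    after a BIOS flash that raises the default shader speed."""
    bump = bios_shader - stock_shader       # extra MHz programmed into the BIOS
    return LINKED_SHADER[core_mhz] + bump

# Worked examples from the text:
print(effective_shader(648, bios_shader=1404))  # 1512 + 54  = 1566
print(effective_shader(648, bios_shader=1512))  # 1512 + 162 = 1674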
However, as of NVIDIA driver release 163.67, the shader clock can now be modified independently of the core clock speed. Here is the announcement by Unwinder:
"Guys, I've got very good news for G80 owners. I've just examined overclocking interfaces of newly released 163.67 drivers and I was really pleased to see that NVIDIA finally added an ability of independent shader clock adjustment. As you probably know, with the past driver families the ForceWare automatically overclocked G80 shader domain synchronicallly with ROP domain using BIOS defined Shader/ROP clock ratio. Starting from 163.67 drivers internal ForceWare overclocking interfaces no longer scale shader domain clock when ROP clock is adjusted and the driver now provides completely independent shader clock adjustment interface. It means that starting from ForceWare 163.67 all overclocking tools like RivaTuner, nTune, PowerStrip or ATITool will adjust ROP clock only.
However, new revisions of these tools supporting new overclocking interfaces will probably allow you to adjust shader clock too. Now I've played with new interfaces and upcoming v2.04 will contain an experimental feature allowing power users to definie custom Shader/ROP ratio via the registry, so RT will clock shader domain together with ROP domain using user defined ratio.
And v2.05 will give you completely independent slider for adjusting shader clock independently of core clock.
Note:
By default this applies to Vista specific overclocking interfaces only, Windows XP drivers still provide traditional overclocking interface adjusting both shader and ROP clocks. However, XP drivers also contain optional Vista-styled overclocking interfaces and you can force RivaTuner to use them by setting NVAPIUsageBehavior registry entry to 1."
Two big points of note here:
*) The driver's new overclocking functionality is only used *by default* on Vista. Setting the RivaTuner NVAPIUsageBehavior registry entry to 1 will allow XP users to enjoy the new shader speed configurability.
*) With the new driver interface, by default, the shader speed will not change AT ALL when you change the core speed. This is where the use of RivaTuner's new ShaderClockRatio registry value comes in (see below). It can be found under the power user tab, RivaTuner->Nvidia->Overclocking.
Changing the shader clock speed
--------------------------------
On to the mechanics of the new ShaderClockRatio setting in Rivatuner 2.04. Here's more text from Unwinder:
"Guys, I’d like to share with you some more important G80 overclocking related specifics introduced in 163.67:
1) The driver's clock programming routine is optimized and it causes unwanted effects when you're trying to change shader domain clock only. Currently the driver uses just the ROP domain clock to see if clock generator programming has to be performed or not. For example, if your 8800GTX ROP clock is set to 612MHz and you need to change shader domain clock only (directly or via specifying a custom shader/ROP clock ratio) without changing current ROP clock, the driver will optimize clock frequency programming seeing that ROP clock is not changed and it simply won't change the clocks, even if the requested shader domain clock has been changed. The workaround is pretty simple: when you change shader clock always combine it with a ROP clock change (for example, if your 8800GTX ROP clock is set to 612MHz and you've changed shader clock, simply reset ROP clock to default 576MHz, apply it, then return it to 612MHz again to get the new shader clock applied). I hope that this unwanted optimization will be removed in future ForceWare, and for now please just keep it in mind while playing with shader clock programming using RT 2.04 and 163.67.
2) Currently the Vista driver puts some limitations on the ROP/shader domain clock ratio you're allowed to set. Most likely they are hardware clock generator architecture related and the hardware simply cannot work (or cannot work stably) when domain clocks are too asynchronous. For example, on my 8800GTX the driver simply refuses to set the clocks with a shader/ROP ratio within the 1.0 - 2.0 range (default ratio is 1350/575 = 2.34), but it accepts the clocks programmed with a ratio within the 2.3 - 2.5 range. Considering that the driver no longer changes domain clocks synchronically and all o/c tools (RT 2.03, ATITool, nTune, PowerStrip) currently change ROP clock only, that results in a rather interesting effect: you won't be able to adjust ROP clock as high as before. Once it gets too far from (or too close to) the shader clock and the shader/ROP clock ratio is out of range, the driver refuses to set such a clock. Many of you already noticed this effect, seeing that the driver simply stops increasing ROP clock after a certain dead point with 163.67."
and
"In the latest build of 2.04 (2.04 test 7) I've added an ability of setting ShaderClockRatio to -1, which can be used to force RivaTuner to recalculate desired Shader/ROP ratio automatically by dividing default shader clock by default ROP clock.
So if you set ShaderClockRatio = -1 and change ROP clock with RT, it will increase shader clock using your card's BIOS defined ratio (e.g. 1350/576=2.34 on GTX, 1188/513 = 2.32 on GTS etc). If you wish to go further, you may still override the ratio, for example increase shader clock by specifying a greater ratio (e.g. ShaderClockRatio = 2.5)."
Three important points here:
*) The driver currently imposes restrictions on how far the shader clock speed can be changed from what it otherwise would've been when linked to the core clock speed in old drivers (it is suspected that the restriction is owing to hardware limitations rather than a driver software design choice). This means you can't set an arbitrary shader speed which you know your card is capable of and necessarily expect it to work.
*) Setting the ShaderClockRatio to the special value of -1 will give you a very similar core / shader speed linkage that you had under previous drivers (163.44 and older).
*) When you change the value of ShaderClockRatio, in order for it to take effect, you must also make a core speed change. So, for example, you might reduce the core speed a little, apply, and then put it back to how it was and apply again (a small sketch of the ratio arithmetic follows below).
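As a rough illustration of the ratio arithmetic in the points above, here is a minimal Python sketch. The helper names are made up for the example, the 2.3-2.5 window is only what Unwinder reports for his own 8800GTX, and nothing here talks to the driver; real limits vary by card.
Code:
# Sketch of the ShaderClockRatio arithmetic described above (illustration only).

def shader_clock_ratio(target_shader, rop_clock, default_shader=1350, default_rop=576):
    """Ratio to enter as RivaTuner's ShaderClockRatio registry value.
    Passing target_shader = -1 mimics the -1 sentinel: fall back to the
    BIOS-default shader/ROP ratio (e.g. 1350/576 ~ 2.34 on a GTX)."""
    if target_shader == -1:
        return default_shader / default_rop
    return target_shader / rop_clock

def ratio_probably_accepted(ratio, low=2.3, high=2.5):
    """Rough check against the example window Unwinder saw on his 8800GTX;
    the real accepted range is hardware dependent and undocumented."""
    return low <= ratio <= high

# The worked example further down: shader 1674 at a 648 MHz core (on a GTS).
r = shader_clock_ratio(1674, 648)
print(round(r, 2), ratio_probably_accepted(r))  # 2.58 False (outside the GTX example window; the GTS in the worked example accepts it)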
Worked example
----------------
Surprise, surprise, back to my EVGA 8800 GTS Superclocked! First off, if you've not already done so, I recommend setting up the RivaTuner monitor to show the core clock, shader clock and memory clock speeds so that you can immediately tell if your core/shader clock changes are having any effect. My setup is Vista with 163.67 drivers. With RivaTuner 2.03, when overclocking the core to 648, the shader would now stick at the bootup default speed of 1512 MHz (see the last paragraph of "Overview/Background" above). If I had blindly run 3DMark06 tests after installing the 163.67 driver, I would've assumed that the new drivers give worse performance, but the RivaTuner graphs show you that the shader is not running at the expected speed.
After installing RivaTuner 2.04, we are now able to set the ShaderClockRatio value to restore a higher shader clock speed. In my case since I want a shader speed of 1674 when the core is 648, I use 1674/648 = 2.58.
=======================
Table A
--------
Some cards have slightly different set freq vs resultant core/shader speeds so take the table as an illustration of how the shader clock changes with respect to the core clock rather than precise values.
Code:
Set core  | Resultant frequency
frequency | Core Shader
---------------------------------
509-524 | 513 1188
525-526 | 513 1242
527-547 | 540 1242
548-553 | 540 1296
554-571 | 567 1296
572-584 | 576 1350
585-594 | 594 1350
595-603 | 594 1404
604-616 | 612 1404
617-617 | 621 1404
618-634 | 621 1458
635-641 | 648 1458
642-661 | 648 1512
662-664 | 675 1512
665-679 | 675 1566
680-687 | 684 1566
688-692 | 684 1620
693-711 | 702 1620
712-724 | 720 1674
725-734 | 729 1674
735-742 | 729 1728
743-757 | 756 1728
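For convenience, here is a minimal Python sketch that encodes Table A as a lookup, so you can check what resultant core/shader pair a given set frequency would have produced under the old linked behaviour. Remember the table values are only illustrative and vary slightly from card to card; the function name is made up for the sketch.
Code:
# Table A encoded as (low, high, resultant_core, resultant_shader) ranges.
TABLE_A = [
    (509, 524, 513, 1188), (525, 526, 513, 1242), (527, 547, 540, 1242),
    (548, 553, 540, 1296), (554, 571, 567, 1296), (572, 584, 576, 1350),
    (585, 594, 594, 1350), (595, 603, 594, 1404), (604, 616, 612, 1404),
    (617, 617, 621, 1404), (618, 634, 621, 1458), (635, 641, 648, 1458),
    (642, 661, 648, 1512), (662, 664, 675, 1512), (665, 679, 675, 1566),
    (680, 687, 684, 1566), (688, 692, 684, 1620), (693, 711, 702, 1620),
    (712, 724, 720, 1674), (725, 734, 729, 1674), (735, 742, 729, 1728),
    (743, 757, 756, 1728),
]

def resultant_clocks(set_core):
    """Return the (core, shader) pair Table A predicts for a requested core frequency."""
    for low, high, core, shader in TABLE_A:
        if low <= set_core <= high:
            return core, shader
    raise ValueError("set frequency outside the range covered by Table A")

print(resultant_clocks(648))  # (648, 1512)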
regards
:doh: :lol:
...but if you had to choose between a 640MB 8800GTS and the new 512MB GT... for the same price, for example :) ... which one would you take?
That's the new GTS, and it will be fairly pricey... but will it beat the GTX at a lower price point?
And I still think the 65nm cards should be the focus.