DFI LANParty DK 790FX-B
Phenom II X4 955 BE (1003GPMW) @ 3.8GHz (19x200) w/1.36v
-cooling: Scythe Mugen 2 + AC MX-2
XFX ATI Radeon HD 5870 1024MB
8GB PC2-6400 G.Skill @ 800MHz (1:2) 5-5-5-15 w/1.8v
Seagate 1TB 7200.11 Barracuda
Corsair HX620W
Support PC gaming. Don't pirate games.
I can confirm that on my Abit IN9 32X-MAX board, LinkBoost is an option (with the latest BIOS) and does influence the PCI-E frequency if it is not set to manual control. What else can anyone do, apart from test things on the hardware they have access to or simply repeat things they have read elsewhere?
No, I do not believe these cards have been running at 800-odd MHz during reviews; I believe they have been running at the speeds stated in the review, which were notably different from the values advertised for the card. I believe this is due to a different PCI-E bus speed. Others in this thread seem to have confirmed this behaviour, which is not seen on previous cards.
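A purely illustrative sketch of the mechanism being discussed here, assuming the core clock really is derived from the PCI-E reference clock; the 650MHz figure is just the 9600GT's advertised stock clock and the bus values are arbitrary examples:

```python
# Illustration only: if the core clock is a fixed multiple of the PCI-E
# reference clock, a board running the bus above 100 MHz quietly shifts the
# real clock away from the advertised one.
advertised_mhz = 650  # example: the 9600GT's advertised stock core clock
for bus_mhz in (100, 105, 110, 125):
    effective_mhz = advertised_mhz * bus_mhz / 100
    print(f"bus {bus_mhz} MHz -> effective core ~{effective_mhz:.1f} MHz")
```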
Serenity:
Core2 E6600
Abit IN9 32X-MAX
Corsair PC2-6400C4D
2x BFG OC2 8800GTS in SLI
Dell 3007WFP-HC
So they disabled it on the 680i and left it on the 780i, to make people think the 780i has some real advantage apart from PCI Express 2.0?
lmao would be too lame if true
3570K @ 4.5Ghz | Gigabyte GA-Z77-D3H | 7970 Ghz 1100/6000 | 256GB Samsung 830 SSD (Win 7) | 256GB Samsung 840 Pro SSD (OSX 10.8.3) | 16GB Vengeance 1600 | 24'' Dell U2412M | Corsair Carbide 300R
This could explain why, when going from one 9600GT to 9600GT SLI, the performance looks so much better than SLI setups on the 7 or 8 series.
Perhaps this is why GeForce 9 SLI setups scale so well? Nvidia looks shadier and shadier as each day passes leading up to this launch.
They shouldn't even call it a launch, more like a "squirt" or something.
So the clocks go up with the PCI-E MHz?? Is that what's happening?
I don't see a real problem here; it just gives easy overclocking for novice users?
I would rather be able to set the clock myself and then run the highest possible PCIe frequency, though, for a few extra 3DMark points.
Core 2 duo E8400 @ 4.2Ghz | Swiftech H20 Apex Ultra Plus | 2GB OCZ Reaper HPC PC8500 @ 1066 4-4-4-4 | 320GB WD + 640GB WD | 512MB 8800GT @ 700/1900 | Antec 900 | 22" Dell E228WFP + 22" Asus VW222U | Logitech G9 + G15 | Asus P5K-E WiFi | Running on a 380W Antec NeoHE
- Me @ HWBOT
So let me get this clear... if I am reading this correctly:
It's a pretty good card (for the price) with an auto-overclock "feature" when used with compatible boards? However, this "feature" can lead to overclocks that are too high to be stable?
But you can just not use the auto "feature" and manually overclock it as much as you can until it becomes unstable, then back down until it's stable?
Does that make sense?
-yonton228/timmy
"Foldin, Foldin, Foldin...keep those benchers foldin..." (Lyrics by Angra, Music is Rawhide)
BOYCOTT MIR's
That's pretty interesting. Gonna try the stock clocks (675) with the 110 bus. Can't seem to do much better than 750 right now (700 set in Riva).
UPDATE: Ran at 680 (read 734) and saw some artifacting, but posted my highest 06 so far, breaking the 17k barrier. Overall it was less than a 100-point increase from what I was running before (700, shown as 756, with PCI-E at 100). Maybe the artifacting was due to the vRAM; I'm not sure how far the memory on these cards can go. Currently at 1100.
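For what it's worth, a quick bit of arithmetic on those two readings (assuming the reported clock is simply the set clock scaled by the actual bus frequency, which is only an assumption) would put the real bus near 108MHz in both cases, even with PCI-E set to 100:

```python
# Hypothetical check on the set vs. reported clocks from the post above.
readings = [(680, 734), (700, 756)]  # (clock set in Riva, clock reported)

for set_mhz, read_mhz in readings:
    implied_bus_mhz = 100 * read_mhz / set_mhz
    print(f"set {set_mhz} -> read {read_mhz}: implied bus ~{implied_bus_mhz:.1f} MHz")
# Both come out close to 108 MHz, which would suggest the board is running the
# bus above its nominal 100 MHz setting.
```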
EVGA 780I P02bios
E8400@3.6
BFG 9600gt SLI 710/1000
2x2g GSkill ddr2
Sceptre 20.1 naga
Antec SP500/TT Sli psu
Well, at least with this discovery we can now set the GPU to intermediate clocks instead of going by steps: find the limit at a 100MHz bus, then slowly raise the bus frequency.
i.e. limit at 100MHz -> 25*30 = 750 (775 unstable)
raise bus to 101 -> 25.25*30 = 757.5 (stable)
raise bus to 102 -> 25.5*30 = 765 (unstable)
raise bus to 105 -> 26.25*29 = 761.25 (stable! you fine-tuned your GPU clock and gained 11.25MHz ^^)
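A small sketch of that fine-tuning idea, using the same example numbers as above (the 25MHz step and the multipliers are just the values from this post, not anything read from the hardware):

```python
# The core clock is a coarse multiplier times the PCI-E reference clock, so
# nudging the bus lets you land between the 25 MHz steps you get at 100 MHz.
def core_clock(bus_mhz, multiplier, step_at_100=25.0):
    """Effective core clock in MHz for a given bus frequency and multiplier."""
    return bus_mhz / 100.0 * step_at_100 * multiplier

for bus, mult in [(100, 30), (101, 30), (102, 30), (105, 29)]:
    print(f"bus {bus} MHz, multiplier {mult}: {core_clock(bus, mult):.2f} MHz")
# Prints 750.00, 757.50, 765.00 and 761.25 MHz, matching the example above.
```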
3570K @ 4.5Ghz | Gigabyte GA-Z77-D3H | 7970 Ghz 1100/6000 | 256GB Samsung 830 SSD (Win 7) | 256GB Samsung 840 Pro SSD (OSX 10.8.3) | 16GB Vengeance 1600 | 24'' Dell U2412M | Corsair Carbide 300R
Part of the issue is that, because Nvidia only allows SLI on their chipsets, reviewers are required to use Nvidia chipsets to test. When you place an ATI card in the board it does not get the boost, whereas the Nvidia card does. Had the tests been done on Intel chipsets (like the overwhelming majority of single cards), the 9600's scores would have been lower.
The point being made is that on Intel those boosts do not exist, and the consumer who buys the card based on the fact it beat the ATI card by a few percent winds up getting the slower card. It is shady, like W1zzard said. Is it wrong? No. They should have simply called it a feature and acknowledged it. Instead they deny that it is true.
QX 9650 5ghz with 1.55v 4.8ghz with 1.5v 24/7 in a VAPOLI V-2000B+ Single stage phase cooling.
DFI LP LT X-38 T2R
2X HD4850's water cooled , volt modded
Thermaltake 1KW Psu
4x Seagate 250GB in RAID 0
8GB crucial ballistix ram
The Radeon HD3870X2's score is also bigger with a higher PCI Express frequency...
Maybe it is a new standard: read the reference clock from PCIe, not from the crystal on the board...
I don't see this as a problem, more of an added feature. Judging from what some of you have posted, it's a nice way to establish a base overclock prior to tweaking. Think of it this way: how many of you will set up your overclock in the motherboard BIOS and then use SetFSB, MemSet etc. to gain that little bit extra for a SuperPi run? I would consider this to be similar, and just like clocking your FSB past its boot limits, if you go too far with this you'll crash.
However, once you're in Windows it may actually be a benefit to balance the PCI-E overclock with a driver-level overclock and get better results. I wish I'd waited now and got the 9600 instead of 8800s, but I'd like to see some tweaking to find out whether a balanced PCI-E and driver overclock can get you a better 3DMark score.
It's time to stop buying Nvidia boards.
Tests should be fair and keep the same variables when testing.
Unless they do, performance can't be known.
I will from now on never trust a review that uses Nvidia boards.
4670k 4.6ghz 1.22v watercooled CPU/GPU - Asus Z87-A - 290 1155mhz/1250mhz - Kingston Hyper Blu 8gb -crucial 128gb ssd - EyeFunity 5040x1050 120hz - CM atcs840 - Corsair 750w -sennheiser hd600 headphones - Asus essence stx - G400 and steelseries 6v2 -windows 8 Pro 64bit Best OS used - - 9500p 3dmark11(one of the 26% that isnt confused on xtreme forums)
Honestly, I have LinkBoost DISABLED on every nForce board, because some graphics cards hate it... any experienced user turns this feature off every time. And now that the first graphics cards take their reference clock from PCIe, I don't want to sell my nForce boards...
What they are saying is that the nForce 680i (the silicon, not the platform) can handle a much higher PCI-E frequency than 100MHz. On the 590i and 680i platforms, LinkBoost took advantage of this; on the 780i, instead of overclocking PCI-E to 125MHz when an Nvidia card is present, they overclocked PCI-E to 180MHz and connected an nForce 200 chip. They ditched LinkBoost on the 780i in order to get enough bandwidth out of PCI-E 1.1 to handle two PCI-E 2.0 cards.
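A back-of-the-envelope sketch of that bandwidth argument, assuming the link throughput scales linearly with the reference clock and using the nominal 250MB/s per lane per direction of PCI-E 1.x (both are assumptions, not figures from this thread):

```python
# Rough estimate of why a 180 MHz reference clock gets a PCI-E 1.1 x16 link
# close to PCI-E 2.0 x16 bandwidth.
LANES = 16
PCIE1_MB_PER_LANE = 250   # MB/s per lane per direction at the nominal 100 MHz
PCIE2_X16_GB = 8.0        # GB/s per direction for a PCI-E 2.0 x16 link

def x16_bandwidth_gb(ref_clock_mhz):
    # assumes the link rate scales linearly with the reference clock
    return LANES * PCIE1_MB_PER_LANE * (ref_clock_mhz / 100) / 1000

print(f"PCI-E 1.1 x16 @ 100 MHz: {x16_bandwidth_gb(100):.1f} GB/s")  # 4.0
print(f"PCI-E 1.1 x16 @ 180 MHz: {x16_bandwidth_gb(180):.1f} GB/s")  # 7.2
print(f"PCI-E 2.0 x16 (nominal): {PCIE2_X16_GB:.1f} GB/s")
```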
www.teampclab.pl
MOA 2009 Poland #2, AMD Black Ops 2010, MOA 2011 Poland #1, MOA 2011 EMEA #12
Test bench: empty
FYI, read this review. Made with an X38 chipset; compare the XFX 9600GT to the HD3870. No nVidia tricks here.
Are we there yet?
That review was nothing but Nvidia tricks. Get real! What kind of review doesn't even show CPU clock speeds or use the 8.2 drivers, which were available when the test was run? The 06 scores were 2,000 points higher than TPU's. That would be like me reviewing 3850s and showing that they do 23,000 in Crossfire in 06 without mentioning the overclock. Besides the CPU speed, they do not mention the PCI-E clocks either.
Besides the obvious, what did you expect that review to prove to me? How is it in any way related to the subject of this thread? Did the author even mention the clock speed change... NO. Quit spreading your crap and trying to draw people away from the issue. The issue here is clearly stated in W1zzard's review.
QX 9650 5ghz with 1.55v 4.8ghz with 1.5v 24/7 in a VAPOLI V-2000B+ Single stage phase cooling.
DFI LP LT X-38 T2R
2X HD4850's water cooled , volt modded
Thermaltake 1KW Psu
4x Seagate 250GB in RAID 0
8GB crucial ballistix ram
Seems we made our greatest error when we named it at the start
for though we called it "Human Nature" - it was cancer of the heart
CPU: AMD X3 720BE@ 3,4Ghz
Cooler: Xigmatek S1283(Terrible mounting system for AM2/3)
Motherboard: Gigabyte 790FXT-UD5P(F4) RAM: 2x 2GB OCZ DDR3 1600Mhz Gold 8-8-8-24
GPU:HD5850 1GB
PSU: Seasonic M12D 750W Case: Coolermaster HAF932(aka Dusty)
My friend did a test on an Intel motherboard, raising the PCI-E frequency with a 9600GT at stock.
It works on Intel P35 & X38 as well. 3DMark scores go up, yet the core clock reads the same:
OK, I've just done 2 benchies of 3DMark 06. Here's the setup:
Q6600 @ 3.2Ghz
2048 DDR2 XMS2
9600GT @ Stock 650/1625/900
P35C DS3R Rev 1.1 Intel P35 chipset.
Scores with PCI-E @ 100MHz
3dMark score 11527
SM2.0 4683
SM3.0 4387
CPU 5091
Scores with PCI-E @ 110MHz
3dMark Score 12176
SM2.0 5003
SM3.0 4667
CPU 5081
So a little jump in performance there.
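A quick check on the numbers above: the graphics tests gain several percent while the CPU score stays flat, which is what you would expect if only the card is riding the higher bus clock (this is just arithmetic on the posted scores):

```python
# Percentage change between the 100 MHz and 110 MHz PCI-E runs posted above.
scores_100 = {"3DMark": 11527, "SM2.0": 4683, "SM3.0": 4387, "CPU": 5091}
scores_110 = {"3DMark": 12176, "SM2.0": 5003, "SM3.0": 4667, "CPU": 5081}

for name in scores_100:
    gain_pct = 100 * (scores_110[name] / scores_100[name] - 1)
    print(f"{name}: {gain_pct:+.1f}%")
# 3DMark +5.6%, SM2.0 +6.8%, SM3.0 +6.4%, CPU -0.2%
```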