I did read the article. You have to manually change the PCIe frequency for this to kick in. Otherwise Linkboost will do it, but again, not a lot of reviews would be affected by that.
I can confirm that on my Abit IN9 32X-MAX board, LinkBoost is an option (with the latest BIOS) and does influence the PCI-E frequency if it is not set to manual control. What else can anyone do apart from test things on the hardware they have access to, or simply repeat things they have read elsewhere?
No, I do not believe these cards have been running at 800-odd MHz during reviews. I believe they have been running at the speeds stated in the review, which were notably different from the values advertised for the card, and that this is due to a different PCI-E bus speed. Others in this thread seem to have confirmed this behaviour, which is not seen on previous cards.
So they disabled it on the 680i and left it on the 780i to make people think the 780i has some real advantage apart from PCI Express 2.0?
lmao would be too lame if true
This could explain why, when going from a 9600 GT to 9600 GT SLI, the performance looks so much better than SLI setups on the 7 or 8 series.
Perhaps this is why GeForce 9 SLI setups scale so well? Nvidia looks worse and worse as each day passes leading up to this launch.
They shouldn't even call it a launch, more like a "squirt" or something.
So core clocks go up with the PCI-E MHz? Is that what's happening?
I don't see a real problem here; it just gives easy overclocking for novice users?
I would rather be able to set the clock myself than run the highest possible PCI-E frequency, though, for a few extra 3DMark points. ;)
So let me get this clear....If I am reading this correctly:
It's a pretty good card (for the price) with an auto-overclock "feature" when used with compatible boards? However, this "feature" can lead to overclocks that are too high to be stable?
However, you can just not use the auto "feature" and manually overclock it as far as you can until it becomes unstable, then back down until it's stable?
Does that make sense?
-yonton228/timmy
That's pretty interesting. Gonna try the stock clocks (675) with the 110 bus. Can't seem to do much better than 750 right now (700 set in RivaTuner).
UPDATE: Ran at 680 (read 734) and saw some artifacting, but posted my highest '06 so far, breaking the 17k barrier. Overall it was less than a 100-point increase from what I was running before (700, shown as 756 with PCI-E at 100). Maybe the artifacting was due to the vRAM; I'm not sure how far the memory on these can go. Currently at 1100.
Well, at least with this discovery we can now set the GPU to intermediate clocks instead of going by steps: find the limit at a 100 MHz bus, then slowly raise the bus frequency.
i.e. limit at 100 MHz -> 25 x 30 = 750 (775 unstable)
raise bus to 101 -> 25.25 x 30 = 757.5 (stable)
raise bus to 102 -> 25.5 x 30 = 765 (unstable)
raise bus to 105 -> 26.25 x 29 = 761.25 (stable! you fine-tuned your GPU clock and gained 11.25 MHz ^^)
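The fine-tuning idea above comes down to simple arithmetic. Here is a short sketch of it (a rough model only, assuming the GPU reference clock really is PCI-E bus / 4 as this thread suggests):

```python
# Rough model of the 9600 GT clock generation discussed in this thread:
# the GPU reference clock appears to be PCI-E bus / 4 (25 MHz at a stock
# 100 MHz bus), and the core runs at reference * multiplier. Raising the
# bus therefore moves the core clock in finer steps than the multiplier.

def core_clock(bus_mhz: float, multiplier: int) -> float:
    """Effective GPU core clock in MHz for a given PCI-E bus and multiplier."""
    return (bus_mhz / 4.0) * multiplier

# Reproduce the examples from the post above:
print(core_clock(100, 30))  # 750.0  (limit at stock bus; 775 was unstable)
print(core_clock(101, 30))  # 757.5  (stable)
print(core_clock(102, 30))  # 765.0  (unstable)
print(core_clock(105, 29))  # 761.25 (stable: +11.25 MHz over the 100 MHz limit)
```

So by dropping the multiplier one step and raising the bus, you can land between the 25 MHz steps the multiplier alone allows.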
Part of the issue is that, because Nvidia only allows SLI on their chipsets, reviewers are required to use Nvidia chipsets to test. When you place an ATI card in the board it does not get the boost, whereas the Nvidia card does. Had the tests been run on Intel chipsets (like the overwhelming majority of single cards), the 9600's scores would have been lower.
The point being made is that on Intel those boosts do not exist, and the consumer who buys the card because it beat the ATI card by a few percent winds up getting the slower card. It is shady, like W1zzard said. Is it wrong? No. They should have simply called it a feature and acknowledged it. Instead, they deny that it is true.
The Radeon HD3870X2's score also gets bigger with a higher PCI-Express clock...
Maybe it is a new standard: read the reference clock from PCIe, not from a crystal on the board...
I don't see this as a problem, more of an added feature. Judging from what some of you have posted, it's a nice way to establish a base overclock prior to tweaking. Think of it this way: how many of you will set up your overclock in the motherboard BIOS and then use SetFSB, MemSet etc. to gain that little bit extra for a SuperPi run? I would consider this to be similar, and just like clocking your FSB past its boot limits, if you go too far with this you'll crash. However, once you're in Windows it may actually be a benefit to balance the PCI-E overclock with a driver-level overclock and get better results. I wish I'd waited now and got the 9600 instead of 8800s, but I'd like to see some tweaking to find out whether a balanced PCI-E and driver overclock can get you a better 3DMark score.
Free performance? Didn't the end user buy the card to start with?
So it's not free; it's like having a card at stock speeds then OCing it yourself. It's not free performance, you just unlocked some of what's already there, which you paid for when you handed over your money :)
It's time to stop buying Nvidia boards.
Tests should be fair and use the same variables when testing.
Unless they do, performance can't be known.
From now on I will never trust a review using Nvidia boards.
Honestly, I have LinkBoost DISABLED on every nForce board, because some graphics cards hate it... any experienced user turns this feature off every time. And now that the first graphics cards take their reference clock from PCIe, I don't want to sell my nForce boards...
What they are saying is that nForce 680i (the silicon, not the platform) can handle a much higher PCI-E frequency than 100 MHz. On the 590 and 680i platforms LinkBoost took advantage of this; on 780i, instead of overclocking PCI-E to 125 MHz when an Nvidia card is present, they overclocked PCI-E to 180 MHz and connected an nForce 200 chip. They ditched LinkBoost in 780i in order to get enough bandwidth out of PCI-E 1.1 to handle two PCI-E 2.0 cards.
Quote:
Originally Posted by Eastcoasthandle
FYI, read this review. Made with an X38 chipset; compare the XFX 9600 GT to the HD3870. No nVidia tricks here :rolleyes:
That review was nothing but Nvidia tricks. Get real! What kind of review doesn't even show CPU clock speeds or use the 8.2 drivers, which were available when the test was run? The '06 scores were 2,000 points higher than TPU's. That would be like me reviewing 3850s, showing that they do 23,000 in CrossFire in '06, and not mentioning the overclock. Besides the CPU speed, they do not mention PCI-E clocks either.
Besides the obvious, what did you expect that review to prove to me? How is it in any way related to the subject of this thread? Did the author even mention the clock speed change? ... NO. Quit spreading your crap and trying to draw people away from the issue. The issue here is clearly stated in W1zzard's review.
my friend did test on intel motherboard, raising pci-e freq with 9600gt at stock.
works on intel p35 & x38 as well. 3dmark scores go up, yet core clock reads the same:
OK, I've just done 2 benchies of 3DMark06. Here's the setup:
Q6600 @ 3.2Ghz
2048 DDR2 XMS2
9600GT @ Stock 650/1625/900
P35C DS3R Rev 1.1 Intel P35 chipset.
Scores with PCI-E @ 100MHz
3dMark score 11527
SM2.0 4683
SM3.0 4387
CPU 5091
Scores with PCI-E @ 110MHz
3dMark Score 12176
SM2.0 5003
SM3.0 4667
CPU 5081
So a little jump in performance there
If marketed correctly, this could boost nvidia's motherboard sales. Since linkboost is a feature of the motherboard, you cannot call this a lock in and have the usual anti-trust allegations.
However, not telling reviewers was clearly a mistake.
Also, it isn't unfair in terms of comparing the 9600 GT to ATI cards: effectively, the stock frequency is higher than what people originally thought. But people on Intel/AMD motherboards deserve to know this.
Nvidia plays dirty again.
And no, this is not just an undocumented feature.
If ATI made drivers that overclock cards but make it appear as if stock speeds are used, that would be foul play as well.
The question is not whether some GPUs can handle the overclock (obviously ATI cards could handle it as well), nor is this about Nvidia making it easier to overclock GPUs (it has always been easy).
This is about screwing up reviews and benchmarks, just like Nvidia has done before. (Remember the early Crysis drivers that set your Crysis detail settings to low automatically?)
No nvidia for me.
haha i feel for you man
Nvidia would only have had to attach a little note saying "the PCI-E clock and GPU core clock are linked" to all the samples they sent to review sites to have stopped this from happening.
oh well :shrug:
none of these companies are very good at supporting tweakers and overclockers, except maybe intel. AMD won't even release the AM2 pinout
Gosh people, calm down.
So... does this mean that the base clock for those cards is PCI-Express bus frequency / 4?
Looks like just another way to save cents on BOM -- one crystal less on the board itself.
From that perspective, NVIDIA did the good thing because cards can be sold cheaper.
Unfortunately, the problem is they forgot to tell us so those people who overclocked their PCI-E bus will end up overclocking the GPU as well.
What I don't understand is this: if reviewers tested the 9600GT's overclocking potential, which method did they use to raise the GPU clock?
I mean, if RivaTuner does that directly via the PLL, then there is a chance they haven't actually managed to change the GPU clock at all. That would make all the overclocking results invalid, right?
Cannot run stable at 110 pci-E even with stock clocks. 675 core 1050 vram and 105 pci-E gives best result for me.
The discrepancy in clocks is shown with or without increase in bus speed. What bothers me is the actual clocks are not shown, that I know of. 675 reads 734, I think, and is not stable with 110 set in bios. It is stable at 105, however.
I guess the part that bothers me is where this will stop. Can AMD now launch processors they claim run at 3 GHz because they decided to change the math? There are organizations such as JEDEC that are supposed to govern these things.
Damn it, man, have you read the whole thread or are you just nit-picking?
Here's your answer to that, it's a RIVATUNER BUG, READ IT.
Thank you.
I can't believe this is xtremesystems forums, all I see is noobs.
I cannot see LINKBOOST in my Asus 780i BIOS nor is it in the evga 780i manual. Is this fully automatic then? If it used to be a BIOS option it is no longer there.
I would suggest that LinkBoost set to disabled and PCI-E set to 100MHz are the default BIOS options for most motherboards, so I am not convinced that many reviews have been corrupted by this. It seems more like someone is excited to have found this, and other people are excited to beat Nvidia with it :)
It should have been documented, though, as it can lead to instability now that how the system works has changed.
Regards
Andy
I hardly see how this is as big a deal as is being made of it. So what if it is clocked in a way that secretly makes it faster? I fail to understand the problem. After all, AMD's GHz and Intel's GHz have always given different results.
I would be upset if they actually found a way to make 128 shaders appear as 64 shaders. That is what I have suspected about these cards. But so far, there has been plenty of rational explanations as to why it is so much faster with 1/2 the shaders. People need to be looking at this aspect, not a silly clock speed difference.
On a default run my ASUS 9600GT shows this effect too; this is the GPU clock shown in Everest with PCI-E at 120MHz on a Striker II Formula:
http://www.overclockzone.com/zolkorn...0gt/Clip_3.jpg
OMG LOLOLOLOL
The article calls RivaTuner's reading a bug because it computes the final clock as GPU multiplier x 27 MHz ALWAYS, since the only crystal you find on the PCB is a 27 MHz one, BUT that is the memory clock generator.
When I tested this bug with destr0yer, his reading at stock clocks (675 MHz) was 729 MHz with RivaTuner monitoring. Do you know why? Because:
675 MHz / 25 MHz = 27; multiplier 27 x 27 MHz = 729 MHz.
Then I asked him to raise the core clock by 25 MHz, so the multiplier would increase by 1: 700 MHz / 25 MHz = 28.
So multiplier 28 x 27 MHz = 756 MHz, and he confirmed that happened: RivaTuner was now reading 756 MHz with hardware monitoring.
Did you understand it, or should I make a drawing?
God damn it man, read the damn thread!
From the article:
"Please also note that RivaTuner's monitoring clock reading is wrong. It uses 27 MHz for its calculation which is incorrect. When the PCI-E bus is 100 MHz, the core clock is indeed 650 MHz on the reference design. A RivaTuner update is necessary to reflect GPU clock changes cause by PCI-E clock properly though."
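The mis-reading described in the posts above can be reproduced with a couple of lines. This is only a sketch of the arithmetic the thread describes, assuming RivaTuner derives the multiplier from the requested clock in 25 MHz steps and then multiplies by the 27 MHz memory crystal instead of the real core reference:

```python
def rivatuner_reported(set_clock_mhz: float) -> float:
    """What RivaTuner's monitoring allegedly shows: it derives the
    multiplier from the requested clock (25 MHz steps) but multiplies
    by 27 MHz, the memory clock crystal, not the core reference."""
    multiplier = round(set_clock_mhz / 25.0)
    return multiplier * 27.0

# Reproduce the readings reported in this thread:
print(rivatuner_reported(675))  # 729.0, the reading at stock clocks
print(rivatuner_reported(700))  # 756.0, the reading after +25 MHz
```

That matches both readings quoted above, which is why the article says a RivaTuner update is needed rather than the card actually running at those speeds.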
Board partners don't even seem to know this, or are they also playing the same game? :shrug:
Yeah, disappointing, even with all the proof in front of their eyes they keep on telling its otherwise. Like... GET A CARD AND SEE FOR YOURSELF! :shakes:
Insulting other users like that hardly gets your point of view across, and no one will take you seriously ;)
And yes, the article is right. If you change the PCI-E frequency, it will overclock the card.
This leads to various problems: instability, the behaviour not being advertised by Nvidia, and, above all, problems starting to appear with the cards for some users.
Quote:
675MHz with PCI-E at 110MHz, real clock = 742.5MHz
675MHz with PCI-E at 105MHz, real clock = 708MHz
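Those two readings are consistent with the core clock scaling linearly with the PCI-E bus. A quick sketch, assuming (as earlier posts suggest) that the reference clock is PCI-E bus / 4 and the multiplier stays fixed at 675 / 25 = 27:

```python
def real_clock(stock_mhz: float, bus_mhz: float) -> float:
    """Actual core clock when the reference clock is PCI-E bus / 4:
    the multiplier set for stock stays fixed, so the core clock
    scales proportionally with the bus frequency."""
    multiplier = stock_mhz / 25.0          # 675 / 25 = 27 at stock
    return (bus_mhz / 4.0) * multiplier

print(real_clock(675, 110))  # 742.5 MHz, as quoted above
print(real_clock(675, 105))  # 708.75 MHz (~708, as quoted above)
```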
The NF780i shouldn't have LinkBoost on, since it uses a bridge and the actual PCI-Express link runs a lot higher to give more bandwidth.
On the NF680i, at least on EVGA boards, LinkBoost is disabled by default; this has been the case for a long time, since BIOS 17 or 18, and they are now on BIOS 31+.
Then, if there is no LinkBoost and the default PCI-E clock is 100MHz, why are a few people claiming Nvidia is cheating in reviews? Did they tell reviewers to put the PCI-E clock up?
It just seems to me that a few people want to :slapass: :stick: :horse: nvidia because for some reason they do not like them, perhaps due to their actions in the past.
And Luka_Aveiro, do us all a favour and shut the feck up. All you've done in this thread is demean other people because they don't seem to share the same viewpoint as yourself.
Regards
Andy
The 780i's default PCI-E clock is 125MHz when using a PCI-E 2.0 card... that's why in the BIOS you only have PCI-E_3.
The PCI-E clock frequency becomes an issue when it is tweaked to one side's advantage. If this advantage can be reproduced for both sides, then a true like-for-like comparison can be made as to which is the superior product...
IN THIS CONTEXT ONLY, having GPU-boosting BIOS settings for one card is unfair.
From a chipset performance standpoint this is good news, as it shows that all areas of the chipset that can be improved have been...
I've always kept my PCI-E clock at 110-120 anyway, as we all know it provides some performance...
Luka...that type of posting is unacceptable on XS. How you say things is just as important as what you say.
As far as I see, this topic has been beaten down. Everything that needs to be said has been said pending further developments. Since this thread has devolved into attacks, it's closed.
When something new comes up, make a new thread.