source:http://www.techpowerup.com/reviews/N...Shady_9600_GT/
Another Nvidia scam for sales. But because they make billions, nothing will ever be done about it.
Nice find.
It's now resolved, the case of why 9600GT SLI on one Nvidia SLI board is better (in some games) than one HD 3870X2 on the same Nvidia board:
Quote:
It is certainly nice for NVIDIA to see their GeForce 9600 GT reviewed on NVIDIA chipsets with LinkBoost enabled where their card leaves the competition behind in the dust (even more). Also it could send a message to customers that the card performs considerably better when used on an NVIDIA chipset? Actually this is not the case, the PCI-Express frequency can be adjusted on most motherboards, you will see these gains independent of Intel/AMD CPU architecture or Intel/NVIDIA/AMD/VIA chipset.
Interesting, has huge implications for the review much discussed in this thread:
Clicky
So what are they doing then, basically automatically overclocking their cards when using nForce chipsets? Would be somewhat amusing if this made Nvidia cards and chipsets incompatible (for cards that have little to no overclocking headroom in them... there are always some...)
So what are the actual default clocks for 9600GT?
Or it could just be that RivaTuner reads the card wrong?
Nvidia drivers and GPU-Z always read the card the same, but on every card I have, at least one of the clocks reads differently in the RivaTuner hardware monitor...
If they are all emanating from the same source, then who is wrong?
My shader clocks have read off by as much as 30MHz on all the G80s/G92s.
I haven't seen such a huge discrepancy in core clocks though.
From what it looks like, they're saying a stock 650MHz 9600GT should read about 708MHz in RivaTuner then?
slightly misleading of nVidia, but not so much of a problem once people are aware of it...
but it implies that every 9600 GT can handle a 25% overclock on stock volts.
you put a 650mhz 9600 GT in a linkboost enabled board and it clocks it to 125/100 * 650mhz = 812.5mhz... is that gonna work????
or are a lot of nforce 590i boards gonna be mysteriously buggy while running 9600 GTs in SLI :rofl:
I believe linkboost is out since the 680i chipset was launched
according to techpowerup, it depends on PCI-E frequency
http://img264.imageshack.us/img264/9220/pciaa3.jpg
regards
so to sum up, every 9600gt review on nv chipset with linkboost enabled should be trashed?
is g94 the only affected chip up to now?
This explains a whole lot.
Interesting... I thought its awesome performance could be due to a tweaked arch., or maybe at the cost of IQ, but never because of this.
Very nice find, dude... bad trick from nV :shakes:
LinkBoost has been around for quite some time... ATI has a form of it in their RD600 as well. All it does is overclock the PCI-E bus automatically when it senses a card of the brand of its choice. This is merely Nvidia finally making a board that can take advantage of the PCI-E bus clocking feature that's been around for almost 2 years now.
So, if this is true and not shens, does that mean the reviews where they overclock the cards and still hit 800MHz+ are actually closer to 1000MHz? After all, just 850MHz + 25% would be 1062MHz.
Calling this shady, I don't know about that one. It boosts the card's performance for the end user without worry of voiding the warranty or requiring any work at all. That's not shady, that's called increasing performance. Nothing wrong with that at all.
Is LinkBoost related only to SLI'd cards, or also to a single card?
the performance increase has nothing to do with linkboost itself, with wider bandwidth or higher pci frequency, it's just a gpu overclock
actually, there's no problem in bringing to the average Joe an auto-overclocking and dummyproof performance increase
what makes it a cheat and a **** is:
- it's not documented/advertised by nvidia
- the driver reports the non-overclocked frequency
So basically it looks like Nvidia wants to hide this and make people think their cards at stock frequencies are faster than they actually are.
I hope this is implemented on 9800GX2 and GTX :D
I think it's pretty sweet. :party:
A lot of folks buy OC'd cards and pay extra for them; this is basically no different other than not paying extra for an OC edition card.
It's free performance so I don't personally see anything wrong or shady about it.
default clock is 650mhz
http://www.techarp.com/article/Deskt...idia_4_big.png
but in that article they must have been using an overclocked model 9600 GT
actual clock = 25mhz (dependent on PCI-e frequency) * 29 = 725mhz (reported by GPU-Z and rivatuner overclocking)
there's also 27mhz * 29 = 783mhz from rivatuner monitoring, which the author says is incorrect
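If it helps, here's a quick Python sketch of the math above (my own illustration of TPU's theory, not anything official; the function name is made up):
Code:
def g94_core_clock(pcie_mhz, multiplier):
    # assumed relationship from the article: base clock = PCIe frequency / 4
    return pcie_mhz / 4.0 * multiplier

print(g94_core_clock(100, 29))  # 725.0 MHz, the actual clock reported by GPU-Z
print(27 * 29)                  # 783 MHz, RivaTuner monitoring's (wrong) fixed-27MHz reading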
Quote:
Originally Posted by techpowerup
The same average joe you speak of wouldn't know how to check the driver for the non-overclocked frequency in the first place. Also, linkboost IS an advertised feature, this is just the first card to truly take advantage of the increased PCI-E frequency.
Finally, stock is stock. You take your card, pull it out of the box, put it in your computer and run it? That's stock. All this means is, stock speed on a NVidia chipset is different than stock speed on a different chipset.
Personally, I really like the idea of the new feature, as I'm sure a lot of people here will.
I do want to know one thing though.....
Why doesn't the Guru3D testing show the same issue? Guru3D used RivaTuner in their SLI review of the 9600GT to show temps (page 4 of the review), which was done on a 680i. The Inno3D card is still at 700MHz, using the same tool TechPowerUp said showed the issue... So can someone explain what's going on here with this?
It does not take advantage of the increased PCI-E frequency, it takes advantage of the increased GPU frequency
So that's not shady at all?
The point is, if nobody would have spotted this thing:
Average Joe goes on the web, looks for VGA benches, finds a 9600GT bench on a 680i with LinkBoost enabled, and thinks, wow, that card is fast!
Average Joe buys a 9600GT for his P35 mobo.
Average Joe's PC will perform lower than what he thought.
How about having it reviewed on nv chipset boards, and forgetting to mention that it only does so well on their chipsets, but on others, that speed advantage is gone? :)
IMO techreport should've made a test: performance on nv and performance on non-nv chipsets. It would be much better proof, these are just theories.
remember, linkboost is only for SLI
edit: my bad, it's automatic on all link-boost enabled chipset and gfx card combinations
a feature that overclocks SLI setups (more than just the PCI-e bandwidth, which does basically nothing) is bad enough; if nVidia made a feature that overclocked single nVidia GPUs on nVidia chipsets they'd get ripped apart by the tech press
And the hardware monitor in rivatuner, your readings are...?
http://i1.techpowerup.com/reviews/NV...ges/clocks.gif
This is the basis of TechPowerUp's theory; it could just be a reading error, afaik.
Since when can you run SLI on P35? And if average Joe looks at SLI benches to buy a single card, he's doing it wrong.
I like this: you get more performance for your $$$ out of the box. Every guy that has seen a review saw it on an nF680i or other Nvidia chipset, and if they want to use this they have to buy an nF680i, so they will get the boost. If they want a single card they won't get it, as LinkBoost only works with SLI.
It is still nice to know about this feature. Also, does every card out there do this?
I guess no one read my post pointing out that guru3d shows the readings off of rivatuner, and nothing out of the ordinary showed up on their testing.
Quote:
Originally Posted by Guru3D review
You'll have to ask Guru3D to know if they have LinkBoost enabled or disabled, or if they have a BIOS that automatically enables or disables it.
Quote:
Originally Posted by Techpowerup article
This "feature" is very good for overclocking noobs.
But it's a VERY BAD marketing procedure: not telling the reviewers about it increases the scores without them even knowing it, and makes the card look much better than it really is. You can't review a card saying that its clocks are X when in fact they are X+something due to LinkBoost, or due to the reviewer manually increasing the PCI-E frequency on any other chipset, overclocking the card and showing overclocked results. So, again, :clap: NVIDIA for another lamentable marketing move.
But this time we have caught you :lol: :owned:
Guys, you haven't realized yet this has nothing to do with linkboost...
It has to do with the PCI-Express frequency: TechPowerUp's finding is that PCI-Express frequency/4 is the clock generator for the GPU frequency, as 100MHz/4 = 25MHz. That would normally be a regular crystal clock generator, but one doesn't exist on the 9600GT!
So, taking their example: 725MHz / 25MHz (clock generator fed by PCI-Express @ 100MHz) = x29;
If PCI-Express frequency = 110MHz -> 110/4 = 27.5; 27.5 x 29 = ~798MHz
Let's take it to stock clocks:
650 / (100/4 = 25) = x26
(110/4) = 27.5; 27.5 x 26 = 715MHz
So, 715 - 650 = 65; 65 = 10% of 650
So if you increase the PCI-Express frequency by 10%, you will increase your core clock by 10%.
I wish I had a 9600GT in my hands to confirm this, it just seems too good to be true :D
well, linkboost bumps the PCI-e frequency from 100 to 125mhz on certain nvidia chipset/gfx card combinations, and since very few people change their PCI-e frequency that's the only way this could be much more than trivia
and since a 25% overclock would crash a lot of 9600 GTs, not to mention that it would be widely interpreted as nvidia 'cheating' to make their chipsets look better, i'd bet the 9600 GT isn't linkboost enabled
or it is linkboost enabled, but the card automatically changes the ratio so the new PCI-e frequency is accounted for
eg
normal = 100mhz/4 * 26 = 650mhz
linkboost = 125mhz/5 * 26 = 650mhz
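To make the two possibilities concrete, a tiny sketch (purely illustrative numbers, assuming the PCIe/4 theory holds):
Code:
def core_clock(pcie_mhz, divider, multiplier):
    # assumed: core clock = (PCIe frequency / divider) * internal multiplier
    return pcie_mhz / divider * multiplier

print(core_clock(100, 4, 26))  # 650.0 MHz, normal board
print(core_clock(125, 4, 26))  # 812.5 MHz if LinkBoost kicks in and nothing compensates
print(core_clock(125, 5, 26))  # 650.0 MHz if the card adjusts the ratio to account for it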
So the card automatically overclocks.
If it doesn't affect stability, it's great.
If it does, even if a little bit, then nVidia should pay for it. Hard.
I guess it's all true, guys. I have been running 9600GTs clocked at 720MHz out of the box, but the card never remained stable in my tests. I got in another sample, and the same :banana::banana::banana::banana: happened. Now, due to rushing things, I had left my sources wondering what the problem could be, but now it all seems to come down to the simple fact that my PCIe bus speed could have been the problem all the time. I have it overclocked to 110MHz all the time, which makes the GPU run at 780MHz instead of 720MHz; downclocking the GPU solved it, so I guess the PCIe speed did affect card clocks. Makes sense, I really don't know why else my three samples couldn't clock further than 710MHz with 110MHz PCIe speed.
I have an Intel X38 chipset. Those wondering if it will work on any chipset, please spend some time reading the article. It is only 4 pages and you will not have to spend time posting 10 times here on XS in order to find out. It is based on PCIe speed; it has nothing to do with LinkBoost, other than that LinkBoost will now overclock your VGA's GPU too, further increasing total system performance.
FYI, if you had your PCI-Express frequency set to 110MHz and you wanted to run the GPU at 720MHz, it should have been running at 720 x 1.10 = 792MHz, not 780... What programs did you use for the readings?
Man I'm so excited with this that I really want a 9600GT NOW!
Do you realize with extreme cooling/voltage we could easily achieve past 1GHZ frequency?
THIS IS A MAJOR TWEAK!
lame! thanks for making this public!
I've not heard of a card that uses a 25MHz multiplier anyway. They all use 27/54MHz multiples.
Tweak? This is an overclock, nothing more, nothing less. 1GHz is less great when the GPU seems to be less efficient than expected. OK, you got the 1GHz screenshot, but performance-wise it doesn't change a thing.
I used Rivatuner 2.06 and GPU-Z 0.1.5.
Crystal /= multiplier :) I've seen 29MHz too, but like only once until now. Custom boards are also affected.
Quote:
Originally Posted by Luka_Aveiro
Quoted for those who miss it.
Quote:
Originally Posted by wittekakker
Who does this really affect? People who want to overclock their PCI Express bus and not worry about changing their graphics card's frequencies?
I don't understand the cons.
Go spam somewhere else, man; did anyone ask you for the time?
It's a major tweak because it increases the clock generator frequency, similar to increasing the FSB when overclocking CPUs.
On nVidia chipsets, afaik, the PCI-Express frequency does not cause hard-drive malfunctions in any way, as the NB<->SB connection is made by the HT link.
source
Quote:
The execution of this from NVIDIA's side is less than poor in my opinion. They did not communicate this new feature to reviewers at all, nor invented a marketing name for it and branded it as a feature that their competitors do not have.
Even when asked directly we got a bogus reply: "the crystal frequency is...". No, there is no 25 MHz crystal and its frequency is not fixed either. I'm not accusing the sender of the E-Mail of course, I just believe he didn't know, maybe this fact wasn't communicated to the marketing team at all. However, if you would get such an inquiry wouldn't you look into this further if it was your job to properly promote a product?
Guys, the fact is RivaTuner's monitoring is buggy: it does not read your final clock as raised by the PCI-Express frequency. It appears it always multiplies the GPU's internal multiplier by 27MHz, which might refer to the physically existing clock generator (the memory one), according to techpowerup.
http://i1.techpowerup.com/reviews/NV...stal_small.jpg
I hope the GPU-Z author is willing to update the utility to show driver reported clocks and actual clocks.
The fillrate measurements W1zzard made are conclusive:
PCIe link OC-% = G94 GPU OC-%
FYI, a friend of mine has a 9600GT and I am asking him right now to do some tests regarding the confirmation of this theory, please wait :)
I think something is wrong with TPU's bench system.
They are the only people who have this "problem".
Also, linkboost support is now gone since 590sli.
Also, since when did linkboost increase the card clocks?
Linkboost never increased the card clocks, only the pci-e and chipset buses.
Also, since when is the core clock dependent on pci-e bus?
I don't think TPU knows much, and they are just spreading misinformation.
Hello
I have an eVGA 9600 GT and this "feature" is confirmed!
1st, the RivaTuner report:
http://i275.photobucket.com/albums/j...Untitled-1.jpg
These clocks read the same at PCI-E @ 100MHz or 110MHz, but the 3DMark 2006 fill rate multi-texturing is not equal!
PCI-Express=100mhz
1-Results at 675mhz GPU: 16349
2-Results at 743mhz GPU: 18284
PCI-Express=110mhz
3-Results at 675mhz GPU: 17948
At 743MHz, RivaTuner reports an 800MHz core. Is it the REAL clock or a bug?
If true, it's an amazing tweak :D at 125MHz PCI-E the core clock is pushed to 844MHz! Interesting, a "driverless" graphics OC :D
But are higher PCI-E frequencies dangerous to the hard disks?
Also, does this "conspiracy theory" even make sense?
Why would nvidia want to show lower clocks? It does not make sense. There is no reason for it.
And raising the pci-e frequency has given me a performance boost on ALL my cards from the 7900GS up. But it does not change the core clock.
Man, you all read too fast :rolleyes:
This is the deal:
Core clock is affected by PCI-Express frequency. Why?
Because there's no crystal clock generator chip in 9600GT's.
So how is the clock generated? By using a quarter of PCI Express frequency.
Got it so far? Good.
Now, what if I increase PCI-Express frequency??? What's the real life consequence?
You raise your core clock. If you increase PCI Express Frequency to 110mhz, you will bump your core clock by 10%.
As my friend destr0yer (hi, mate :D) has shown, the results speak for themselves...
Apparently the shader clock is not affected by this overclocking method, as a 743MHz core achieved via PCI-Express frequency is slower than 743MHz achieved by manual overclocking.
Once AGAIN, LinkBoost has nothing to do with this; its only relevance is that it could have raised core clocks a bit higher than a regular mobo would.
Did you know that the author of that article is also the creator of GPU-Z, ATITool, SPDTool, SysTool, etc.?
I think he knows quite a bit about GPUs and chipsets :)
Let's wait till we can confirm this by independent sources. Validation should be easy, just test fill rates at different PCIe frequencies... :up:
If this were a 'trick' then I don't see how it really benefits nvidia? All the same, it seems like 90% of the people posting here didn't actually read why it does this. I'm not going to bother retyping, or requoting, so many of you seem to just be ignoring the numerous other explanations so w/e.
My question is, now that we know why this happens, to those claiming this is a benefit, how so? It's not like you can't overclock the cards otherwise. You were never limited because of the crystals clock before. This makes no difference. If your card can reach 800mhz it doesn't matter how you achieve that clock, 800mhz is 800mhz (for videocards anyways). Am I wrong?
destr0yer's results add up the same:
25mhz x 27 = 675mhz in rivatuner overclocking (correct clock)
27mhz x 27 = 729mhz in rivatuner monitor (needs a program update)
and 3dmark fill-rate increases (nearly linearly) with PCI-e frequency.
PCI-e 100mhz, 3dmark fillrate = 16349
PCI-e 110mhz, 3dmark fillrate = 17948 vs 16349 x 1.1 = 17984
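In Python, just to double-check the scaling (numbers taken from destr0yer's posts above; the linear-scaling assumption is mine):
Code:
baseline_100 = 16349   # 3DMark06 multi-texturing fill rate at PCIe 100MHz, core "675MHz"
measured_110 = 17948   # same reported core clock, PCIe 110MHz

expected_110 = baseline_100 * 110 / 100
print(expected_110)                             # 17983.9, what perfectly linear scaling predicts
print((measured_110 / baseline_100 - 1) * 100)  # ~9.8% gain, i.e. roughly the 10% PCIe bump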
Here's the part I don't understand: don't reviewers leave the PCI Express frequency at stock?
This seems like it was just an innocent attempt by Nvidia to get motherboards to do hardware overclocking of graphics cards without relying on the drivers, thus adding another tool to the already bloated overclocker's toolbox.
confirmed !!
PCI-Express=100mhz
Results at 675mhz GPU: 16349
PCI-Express=110mhz
Results at 675mhz GPU: 17948
w1zzard from TechPowerUp is correct
:up:
I feel the exact same :rolleyes:
Whether there's a big difference between overclocking at 25x27 versus 27.5x27, I don't know yet, but you could achieve higher frequencies this way, as the internal multiplier remains the same (27, in destr0yer's case ;))
a 25% overclock would instantly crash a lot of factory overclocked 9600 GTs. we can pretty much assume nvidia wouldn't allow that, the 9xxx series probably aren't link-boost enabled
I don't know what the problem is really? When I saw the specs for the 9600GT I knew Nvidia had to do a tweak or something to pull it off with this card. It's not like we're talking about the high-end here. This is just a boost to this mid level card to make it run decent. Manufacturers have used many similar tricks in the past to raise performance in video cards. I never heard anyone complaining back then. As long as the card performs better with this feature I'm all for it. And btw, who said that RivaTuner is the ultimate tool for Nvidia cards? It's just a program made by a russian dude to overclock cards. If no other tool out there confirms this I'm not buying it...
3DMark 2006 Fill Rate Multi-Texturing with different PCI-E frequency and the same clocks = different score
http://www.xtremesystems.org/forums/...3&postcount=74
Btw:
3dmark 2006 with 675mhz & pci-e auto = 10900 marks
3dmark 2006 with "675mhz" & pci-e 110Mhz = 11600 marks
3dmark 2006 with 743mhz & pci-e auto = 11500 marks
3dmark 2006 with "743mhz" & pci-e 110mhz (817Mhz) = Crash
:up:
For those who are asking themselves how to calculate the internal multiplier and the final REAL clock, this might be helpful:
Using 9600GT reference clocks:
PCI-Express = 100MHz
GPU = 650MHz
Internal core clock = 100MHz/4 = 25MHz
Internal core multiplier = 650/25 = 26
PCI-Express = 110MHz
GPU = 650MHz (fake, as you will see further down)
Internal core clock = 110MHz/4 = 27.5MHz
Internal core multiplier = 26 (it remains unchanged compared to PCI-Express at 100MHz; unless you manually overclock using RivaTuner or ATITool, it increases by 1 for every 25MHz)
REAL CORE CLOCK = 26 x 27.5 = 715MHz
Put more simply, to calculate the real core clock using the example above: 650 x 1.10 = 715.
The shader clock is not affected.
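Here's the same calculation as a small Python helper, if anyone wants to play with it (the helper name is mine, just a sketch of the formula above; remember the shader clock is not affected):
Code:
def real_core_clock(nominal_mhz, pcie_mhz):
    multiplier = nominal_mhz / 25.0   # multiplier derived from the stock 100MHz/4 = 25MHz base
    base = pcie_mhz / 4.0             # actual internal base clock, e.g. 110/4 = 27.5MHz
    return base * multiplier          # equivalent to nominal_mhz * (pcie_mhz / 100)

print(real_core_clock(650, 100))  # 650.0 -> stock
print(real_core_clock(650, 110))  # 715.0 -> the "650MHz" card is really running at 715MHz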
HOLY S! I was wondering why I couldn't clock my cards much past 700. My PCI-E bus was set to 110 originally, then 105 as of right now. If this is the case, I would rather set it to 100 and clock the cards manually so the shader clock is affected too. I noticed a discrepancy in the Hardware Monitor plugin, but brushed it off as a bug.
Also, I am trying to see if there's an option for a BIOS vmod, but I can't open NiBiTor in Vista x64. Any help would be appreciated.
The whole finding is not based on RivaTuner's readings (which are actually wrong as well); they only started to raise doubts, which were then confirmed by the fill-rate benchmarks, so there's no way this can all be false.
I don't see how this can be a good thing for the users since, if they are gonna set an auto overclock on their cards this means that they can all run at that speed...and if they can all run at that speed why not release the cards with that clock?
You CAN overclock shaders unlinked from core frequency ;)
I cannot help you with nibitor though, hope you find your way ;)
Because if they released it with a higher core clock, by raising pci-express frequency, the core clock would be even higher, leading to crashes.
It is always a nice thing to know, and might be helpful knowing your limits and how to get over them. It is nice to know how the game is made, so you can play it well ;)
have a look at their mobile GPUs guys ;)
they have a similar thing going on
i don't think they change PCI frequency on laptops do you :p:
I do not feel there is any shady trick in it if it turns out to be the LinkBoost feature. Newer cards, starting from the 7900 GTX, have the feature added. It boosts 25%. Not sure, but I think only Nvidia motherboards have this feature added in the BIOS.
Source
Quote:
Originally Posted by AnandTech
So probably some people have this feature enabled, or it might be that the drivers enable it when being installed, after a hard restart.
I think it should show the exact clocks even after the clocks have been upped by the LinkBoost feature. Well, some software does not work correctly anyway.
Metroid.
Please re-read the article. Your question "Since when did linkboost increase the card clocks" (all of the different ways you phrase it) is answered. The answer is "since the 9600GT was released", and that was in fact the whole point of the article. They even test this with an 8800GT and show that the 8800GT shows no change with PCI-E frequency, only the 9600GT.
As far as this being a major tweak goes, this allows you no more overclocking than could be achieved using normal methods, so I can hardly see any advantage.
The disadvantage I can see is that in many reviews it will make the card look like it performs far better at stock settings than it does, as the Nvidia chipset will automatically overclock the card, potentially to the point of instability for some samples. It would also, in a review of chipsets, make an Nvidia chipset look like it performed far better than an Intel one, simply by automatically applying the same overclock that could be applied manually on an Intel board.
I can confirm that LinkBoost exists on the 680i chipset, and that its function is indeed to increase the PCI-E speed when an Nvidia card is used, unless you take direct control of this frequency.
I can only see one reason that Nvidia would do this and then keep it quiet. That's to try to sell more cards, but more likely to increase chipset sales. The unknowing public would think they'd have to have a Nvidia based mb to get the most from their video card.
I see no problem in doing this. In many respects its a nice feature, but they should have been up front about it. Personally, I'd rather OC the card myself.
8600/8700 Mobile chipsets are the same as per screenshot
http://img207.imageshack.us/img207/3...85a2a12rp8.jpg
Well, assuming that LinkBoost increases the PCI-Express frequency by 25%:
650MHz x 1.25 = 812.5MHz
Holy crap, 9600GTs have been running at 812.5MHz while doing reviews!
Do you really believe it or do you WANT to believe it?
And, oh, I don't know about you, but my EVGA 680i SLI doesn't have the LinkBoost option; can you still confirm it exists? Did you know it was an option in the first BIOSes for 680i boards? And now it is not?
I can confirm I had two 8800GTs in SLI and the PCI-Express frequency was 100MHz on slots 1 and 2, so does LinkBoost still exist? :rolleyes:
Come on man, do you really think those cards were running at an 800MHz core? That would be awesome, wouldn't it?
Peace
I'm confused here. Does this mean that the 9600GT has been having an unfair advantage over 8800GT in terms of core clocks?
Some 3DMark 2006 results (everyday Win XP, no special tweaks, LODs, etc.)
3dmark links:
Core 675, PCI-E 110 mhz -> 11602 http://service.futuremark.com/compare?3dm06=5499640
Core 743, PCI-E 100 mhz -> 11523 http://service.futuremark.com/compare?3dm06=5499628
Core 675, PCI-E 100 mhz -> 10971 http://service.futuremark.com/compare?3dm06=5499566
Core 743, PCI-E 110 mhz -> won't run!
It's really faster than an 8800 GTS 320MB! :eek:
You are right about that. I would be pissed off as well if that happened to me, but I guess I would try a "load default values" in the BIOS and then test it, as I have already done with some cards, but I would never have figured out that the internal clock was PCI-Express frequency related and that it was holding me back ;)
If PCI Express frequency is above 100mhz, probably YES.
Nice.
Some days ago, you were saying that johnnyGURU doesn't know much, today you are saying w1zzard doesn't know much, maybe tomorrow you'll be saying Charles is a noob :ROTF:
It's the chipset automatically increasing the clock, that's what started this whole debate.
I think you have some reading up to do... start with the article, for example.
Wait, isn't LinkBoost similar to what Asus PEG Link did? From my understanding (correct me if I am wrong here), LinkBoost bumps up the PCI-Express link from 100MHz to 125MHz. LinkBoost also increased the PCI-E buses up to 3250MHz from 2500MHz and the SPP<->MCP HT bus from 1000 to 1250MHz.
And
Why are people saying the 780i doesn't use LinkBoost when the techreport review clearly implies that the 780i is using "an extreme version" of LinkBoost?
techreport
Quote:
Using the nForce 200 seems like a convoluted way to bring PCIe 2.0 connectivity to the 780i SLI. New chipsets from AMD and Intel put PCIe 2.0 right into the north bridge and offer full end-to-end 5.0GT/s signaling rates without the need for a third chip. So why is Nvidia using the nForce 200? I suspect it's because the nForce 780i SLI SPP isn't really a new chip at all. Nvidia MCP General Manager Drew Henry told us the 780i SLI SPP is an "optimized version of a chip we've used before," suggesting that it's really a relabeled nForce 680i SLI SPP.
If you recall the last couple of Nvidia SPP chips, you'll remember a feature called LinkBoost, which cranked up the link speed for the chipset's PCI Express lanes. Nvidia was adamant that this wasn't overclocking since the chipset had been fully validated to run at higher speeds. I think we're seeing an extreme version of LinkBoost in action here, with the 780i SPP simply being a 680i SPP whose 16-lane PCIe 1.1 link has been coaxed into running at 4.5GT/s and validated at that speed. This approach would be fitting considering that second-generation PCI Express is really just gen one cranked up to a faster signaling rate. But it's a shame Nvidia didn't manage to nail 5.0GT/s on the button.