We've been talking about this review and the new features for more than five pages now.
Lol...
My bad.
First, I was just pointing out the boost will make accurate benching a bit harder. Not saying which one is better.
Second, both technologies are basically the same thing. All you need to do is set the clocks solidly high but leave the PowerTune limits low. When an application puts a load high enough that the card can't sustain the set clocks, it will appear as if the card is boosting the clocks as high as possible to meet the target TDP. Exactly the same thing, except the mechanism is incompatible with marketing. Throttling = boo. Boosting = yay. :rolleyes:
I am betting PowerTune is going to be done like that on the 8XXX cards, possibly even on a 7970 refresh if it comes.
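To make that concrete, here's a toy control loop showing how the same mechanism reads as "throttling" or "boosting" depending only on which clock you advertise. None of this is vendor code; the TDP limit and step size are made up for illustration:
[CODE]
# Toy sketch: "throttling" and "boosting" as one feedback loop, differing
# only in which clock is printed on the box. All numbers are illustrative.

TDP_LIMIT = 195.0  # watts, hypothetical board power limit

def next_clock(clock, power_draw, floor, ceiling, step=13):
    """One control step: move the clock toward the highest value that
    keeps measured power under the TDP limit."""
    if power_draw > TDP_LIMIT and clock > floor:
        return clock - step   # over budget: back off
    if power_draw < TDP_LIMIT and clock < ceiling:
        return clock + step   # headroom left: ramp up
    return clock

# "PowerTune" framing: advertise the ceiling, clock down under heavy load.
# "Boost" framing: advertise the floor, clock up when there is headroom.
[/CODE]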
AMD doesn't throttle anything down with PowerTune, even on the 6990. PowerTune and the turbo boost are absolutely not the same and are not aimed at the same thing. One is there to limit the TDP if needed; the other is there to overclock the card when it thinks it is possible or needed.
On the other hand, we will need to see how this boost works in detail in gaming, instead of just benchmarks. If the card can clock up to 1100MHz during a 2-minute benchmark pass, is it the same when you are playing, or do you just play at 1006MHz? Will you see the same fps, or is it just boosting benchmark scores? What happens when your card is running hot after an hour of gaming?
The reported TDP graph shows a strange thing: each 3DMark GT test starts high (at max) and goes down as the test runs. Compare that with the framerate and the load in each test (each test starts at a low load level and ramps up toward the end, which is easy to check with fps, since you start with high fps and end with low fps); here the TDP drops by about 50W between the start and the end of each test. Does the benchmark start at 1100MHz and then fall back to 1006MHz? Is the difference really 50W? If you compare with the 7970 curve, the 7970's power increases within each test, while the Nvidia card's decreases.
http://img37.imageshack.us/img37/4781/moddified.jpg
IF the turbo boost were based on TDP, you should see it going to its max limit and trying to stay there, not decreasing. (It won't be a flatline, but it won't look like this.)
Indeed this is a nice feature, but I'm not sure the impact is exactly what we think it is yet. Same goes for Surround: can we compare three monitors running at full speed against two at half speed plus the main one at full speed?
(What's this? We are not able to run three monitors on one card at full speed, so we use two at half speed and one at full?)
(You are testing the card, so you can try a little test: increase the vcore, keep the stock clock, then run 3DMark 11, read the increase in TDP, and check the difference.)
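(The reason that test is telling: dynamic power scales roughly as C·V²·f, so at a fixed clock a vcore bump alone should raise power by about the square of the voltage ratio. A quick back-of-envelope, with made-up voltages and wattage since I don't know the card's real values:)
[CODE]
# CMOS dynamic power: P ~ C * V^2 * f. At fixed clock, f and C cancel out,
# so the power change from a voltage bump is just (V_new / V_old)^2.
# Voltages and baseline wattage below are illustrative, not the 680's specs.

v_old, v_new = 1.075, 1.175   # volts, hypothetical
p_old = 170.0                 # watts under load at stock voltage, made up

scale = (v_new / v_old) ** 2
print(f"expected: {p_old * scale:.0f} W ({(scale - 1) * 100:+.0f}%)")
# -> roughly +19% power from a ~9% voltage increase
[/CODE]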
PowerTune absolutely can throttle some results. Its entire point is to downclock the ASIC if the TDP is surpassed. You are right though: PowerTune is the exact opposite of Turbo Boost. Turbo is meant to INCREASE performance while PowerTune is meant to DECREASE it if necessary.
Wow... really?
One allows the ASIC to be clocked as high as possible but makes certain it stays within TDP limits and gives consistent performance across the vast majority of situations.
The other keeps the base clock relatively low to make certain it stays within the TDP limits in the outlier situations and tries to dynamically change clocks when not in those situations.
http://www.shopblt.com/cgi-bin/shop/...ORDERID!#Specs
Maybe we will see a $500 MSRP, if we take the 560 listed there as evidence that pre-order prices run high as usual.
Listed lower than the 7970 on Neutron, price-wise....
Please... outside of Furmark (hopefully), this was only observed in the Metro benchmark (and in general you will get more fps in the game than in the benchmark, which is really punishing). Push the slider to +20% and the problem is solved. But that was not the point anyway.
I've seen other instances where PowerTune throttles in games. I'd take a solution that gives me guaranteed performance, and possibly more, every day over one that might throttle if I push settings too high.
I think this is just a different way of throttling.
Think about it. One sets the default clock high, then throttles when power draw is too high; the other sets the default clock low, then overclocks when power draw is low. The end result is pretty much the same; it's just about how you describe it. Now, with turbo on CPUs, it's marketing-smart to sell good old throttling as "turbo": instead of the negatively perceived "throttling", you talk about raising clocks...
Sorry guys, but what is this 1006MHz 680 clock? Surely it's not the default, so is it the max overclock, turbo, or something else??
:(
1006 is default, 1058 is turbo. The 700 MHz that have been floating around are most likely a wrong reading by GPU-Z and co.
I'm not sure it makes much of a difference in a GPU scenario. Let's assume a hypothetical GPU bin that can function up to 1.1 GHz. A GPU with turbo would start at a lower clock, then proceed toward the turbo clock while TDP room is available, and fall back once the maximum TDP is met or exceeded. In a PowerTune scenario the GPU would start at 1.1 GHz as long as TDP room is available and similarly retreat once the maximum TDP is met or exceeded.
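A toy simulation of that hypothetical bin makes the point: with a deliberately simplified power model (power assumed linear in clock, made-up coefficient), both schemes settle at the same clock once the TDP limit binds:
[CODE]
# Hypothetical 1.1 GHz bin: turbo starts at the floor and climbs,
# PowerTune starts at the ceiling and backs off. Both converge on the
# same equilibrium clock under this made-up linear power model.

TDP = 200.0                  # watts, hypothetical
FLOOR, CEILING = 1006, 1100  # MHz

def power_at(clock_mhz):
    return 0.19 * clock_mhz  # made-up power-vs-clock model under full load

def settle(clock, steps=200):
    for _ in range(steps):
        if power_at(clock) > TDP and clock > FLOOR:
            clock -= 1       # over budget: throttle
        elif power_at(clock) < TDP and clock < CEILING:
            clock += 1       # headroom: boost
    return clock

print(settle(FLOOR))    # turbo framing: 1006 -> ~1052 MHz
print(settle(CEILING))  # PowerTune framing: 1100 -> ~1052 MHz
[/CODE]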
That's what the theory says, but how does it function in reality? The latter is what really counts, isn't it?
Maximum overclock at default voltage: +11% GPU.
http://img.pconline.com.cn/images/up..._1024x1024.jpg
Just giving a different perspective...
Have you got around to playing with it yet? I've heard there are some oddities but hopefully they will be worked out soon.
Guaranteed isn't exactly what I would call it.
Exactly. Two different solutions.
Can only wait and see which one works the best.
So the 1006 MHz aren't guaranteed?
Thanks Andrea Deluxe.
Comparing to 7970 Crossfire:
GTX 680 OC SLI @ 1150/1800 -VS.- HD7970 OC Crossfire @ 1150/1500
http://i40.tinypic.com/2vv49lg.jpg
http://hwbot.org/image/747599.jpg
They didn't want to run games then? /:
I don't think I like what I see about the clocks in these images. It looks like it keeps jumping between 700 and 1150, or it's just updating one point on the chart between benchmarks and not tracking anything in the background while the benchmarks are running. I'm guessing it's the latter, but then it should be pretty simple to turn on "log while in background" like any normal person does.
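Rolling your own background log is trivial, too. A rough sketch, assuming a driver whose nvidia-smi supports --query-gpu (power.draw isn't exposed on every card, so that column may come back as "N/A"):
[CODE]
# Poll nvidia-smi once a second and log clock/power to CSV while the
# benchmark runs, instead of relying on a chart that only updates
# between tests.

import csv, subprocess, time

QUERY = ["nvidia-smi", "--query-gpu=clocks.sm,power.draw",
         "--format=csv,noheader,nounits"]

with open("gpu_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["t_sec", "sm_clock_mhz", "power_w"])
    start = time.time()
    while time.time() - start < 600:   # ten minutes of samples
        sample = subprocess.check_output(QUERY, text=True).strip()
        writer.writerow([round(time.time() - start, 1)] + sample.split(", "))
        time.sleep(1.0)                # one sample per second
[/CODE]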
That two-way SLI seems OK for those two cards, depending on the price they're released at.
That's a pretty sizable difference, considering the CPU differences. The GPU score is 250 points higher for AMD, even though the total score is close.
I think many of us are hoping these two GPUs trade blows all over the place; that should finally give us a price war once again, like the 4800/280 days.
And in the end, customers from both camps benefit.... Amen.
The hype came down in the end. And this will bring us better prices, so two-way setups will be less prohibitive :D
This is just making me long for the GK110 more. Which is kinda bad.
Looks like pretty much 1:1 with the 7970. Let's hope the price tag comes in lower. :cool:
Looks like I will be waiting for the big dog. While both the 7970 and the 680 look to be great cards, they still won't get it done for me; for my next purchase I'm looking for a single GPU that outperforms my 480s. Some of the new features they are using look very promising, and I'm interested to see how it all comes down, even if it isn't the right card for me.
Nvidia will for sure make a dualie out of this. Nice card.
I'm still waiting for something worth replacing my GTX295 Quad-SLI (DX11 notwithstanding). Damn I got my money's worth out of those cards.
Just to clear things up:
Since this is not the high-end chip of nVidia's new family of cards (being a GK104 part), why the hell do they name it (GTX 680) and price it (>$500) as such? And since they do, what does that say about the upcoming supposed flagship (GK110)? Will it ever see the light of day, or will nVidia, seeing no reason to release it (since AMD isn't giving much competition), simply leave it on paper alone?
When "politics" are so *strongly* involved with hardware development, you know you're not in a good situation...
Because as it stands, this is their high end. Until they decide to release GK110 and adjust prices, this is what we have. I'm hoping for a price battle now...
As we have no verified information on the GK104, or GK110, do you think we can assume the GK104 was not intended to be the high end part all along? What if GK110 is a dual GPU version? What if it never existed, or was a design that got scrapped?
Based on the supposed benches we've seen, the rumored 680 looks to be outperforming the 7970 by about the same margin as the 580 > 6970, the 480>5870. Seems in line with the last two launches?
Hard to say without knowing what it is/was, or if it existed ever. My thought is that when I go to buy video cards, I look at what's for sale and evaluate, not what companies are rumored to have been working on.
What do you mean by this? I'm guessing you're substituting "politics" for "business" based on the context.
If so, you will never be in a "good situation" as you put it. Here's why:
Let's say next time around NVIDIA launches Maxwell first and ATi finds themselves with three parts they could launch that week: one that beats it by 20%, one by 50%, and one by 100%.
If you launch the 100% faster card, you a. make the time you have to work on its successor as short as possible (bad), b. raise expectations for its successor (bad), and c. get one launch at a top-tier price instead of possibly three (bad).
Your problem is that your expectations are from a consumer perspective, not a business perspective. You want as much as possible, as fast as possible, for as little as possible. Companies only want to maximize profit, because that's what keeps the lights on and the engineers working.
I think I can almost guarantee you that for every product that launches, there were alternate designs that were scrapped or put on hold because they were either too expensive to bring to market, couldn't be delivered in a timely fashion, or just didn't need to be brought to market for the reason above.
One reaction might be "Oh man, what about the parts that might have been?!". Another might be, "Let's look at what we have and evaluate merits".
Finally someone who gets it.
From a business point of view, it is a massive improvement for Nvidia, as they did not need a 500+ mm² chip to match or beat the competition this time around. They will sell their high-end chip with a significantly improved margin over previous designs.
This is a huge win for the company, which is now in the same ballpark as AMD with efficient chips that don't require a 1000W+ PSU to run SLI/Crossfire.
Has anyone tested the idle wattage of the 680 yet? Have they implemented something similar to AMD??
Reeeally hope this ends up like the 8xxx/HD4xxx series price war! New cards that last 3 years again, please.
My expectation is a healthier industry which is not artificially slowed down. No different from what we had 5 years ago, when nVidia rolled out the 8800GTX even though the 8800GTS was enough to absolutely kill the competition. If you have the best out there, the sooner it ships, the better for the whole industry (from a technical point of view), given that the industry will *have to* move faster.
Sure, nVidia is a business out to make a profit, but it is also the benchmark of hardware development on home computers, which has an actual, tangible effect on our lives. In short, the faster we get the best parts out to the market, the faster the industry will evolve and the faster it will have solutions to whatever problems may come about. Home computing (in and of itself) is a luxury no different from a car, yet the development done there affects society at large many times more than any supercar would.
Just think of the applications such computing is starting to have for medical and scientific purposes. At first we'll get a few more frames, but the side effect (of the development that gives those few more frames per second) is huge and cannot be ignored. For more purposes than one, we have a supercomputer stored in the tiny shell of our GFX cards, and the faster it develops, the better it is for all...
The real question is what can make the industry like it was, not why nVidia doesn't release its high-end parts (if those exist to begin with).
That's another common misconception: that the flow of tech is entirely within the control of companies and they can do things when they "have" to. Look at last year's Bulldozer launch, or the FX5800 launch, for all the evidence you need that this isn't true. These things are high-tech inventions, and you can't just tell your engineers "Invent faster and better! Our competitors have a lead!". Besides which, things like the state of the fab process and the cost of VRAM and silicon come into play. Maybe in the infancy of 28nm, yields on 500mm² chips are too low to sell them for less than a thousand apiece. Maybe it makes more sense with current yields to sell a 300mm² die (rumor) at $500 (unknown) than a 500mm² die for $600, due to bigger margins on the 300 (rumor). You're still thinking like a consumer, assuming "What is best for me is also best for the industry", when the two are unrelated.
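For a sense of why die size dominates that margin math, here's rough gross-die arithmetic using the standard first-order dies-per-wafer approximation. The 294 mm² figure is the rumored GK104 size; the 520 mm² one is a hypothetical big die, and yield is ignored entirely:
[CODE]
# Gross (pre-yield) dies per wafer, first-order approximation:
#   DPW ~ pi*d^2/(4*A) - pi*d/sqrt(2*A),  d = wafer diameter, A = die area.
# 294 mm^2 is the rumored GK104 die; 520 mm^2 is a hypothetical big die.

import math

def gross_dies(area_mm2, wafer_mm=300):
    d, a = wafer_mm, area_mm2
    return int(math.pi * d**2 / (4 * a) - math.pi * d / math.sqrt(2 * a))

for area in (294, 520):
    print(f"{area} mm^2 -> ~{gross_dies(area)} candidates per 300 mm wafer")
# ~201 vs ~106: nearly twice the candidates per (equally priced) wafer,
# before defect yield widens the gap further.
[/CODE]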
History is littered with defunct companies that had one good idea. NVIDIA's responsibility is to the stockholders: to provide a profit on investment over the long term. Who knows? Maybe they have a better chip they're holding back because it's all they have in the works that looks feasible, and they want to buy time to come up with something else. Maybe the rumored 680 is what they planned all along for the 680, the best they had brought to fruition. Maybe the fabled "better chip" couldn't be brought to market now, and even if it were a 100% increase on the 7970, a Q4 sale date would have meant a year's lost sales and the response "Geez Louise, in 10 months it BETTER be 100% faster! Late!". We'll never know. (And BTW, in the life cycle of the development of these chips, 10 months is nothing; time for a couple of respins only, since the main design happens over many years.)
This I can't answer (why there are occasional huge leaps like the Athlon X2, the C2D, the 9700Pro, and the 8800GTX). My guess is they're the product of rare intersections of business/invention/market conditions.
I do know this: anyone who's a gamer has a LOT of reasons to be excited about Kepler if the rumors are true. Leading performance, lower power, new AA, new vsync, 4-monitor output, PhysX, 3D Vision, forced AO, CUDA.
If all the rumors are true, the only reason I can think of for anyone caring about a 7970 at all anymore is if they are one of the eight people on the planet with a 75X16 display set, or if their decision hinges on the Skyrim texture pack and high AA at high res. It's going to be a VERY small market for 7970s in that case.
Answering Steve:
There are two problems with your assumption:
1- You assume there actually is a massive GK110 chip in the works, being purposefully held back by Nvidia.
2- You assume that Nvidia must always make huge 500mm² chips in order to make their customers happy.
Maybe a big chip does not even exist, and the infamous high end from Nvidia is just a dual GK104. How are you so sure that they are purposefully holding back?
As an AMD/ATI fanboy, I am very excited about Kepler.
Their perf/mm² and power draw are sounding extremely good, enough to make us wonder what they could do with 300W on their bigger chip.
Even if I were to still buy a 79xx, I would want Kepler to force a price drop of 20% or more in the next 2 months. And considering Kepler's size and specs, they could easily undercut the 79xx cards, since their GPU has specs that look close to a 78xx (but with much higher performance).
They really, really fixed the memory controller this time around. I think as long as the chip trades blows with the 7970, the hype has been met... if it weren't for the damn pricing. Yeesh. It totally screams price fixing, I think.
I would hope Nvidia would put some pressure on AMD, because the card has significantly less memory and the die is 20% smaller. $450 would be completely in line with what the GTX 460 and 8800 GT were, if it performs up to the 7970 with the smaller die and less memory.
If it is priced at $550, AMD doesn't have to do a darn thing, which AMD can be happy about, and it seems like price fixing is afoot.
GK110 seems like it would be a beast of a card if it were released today; it seems like it would be very much like the 8800 GTX, making AMD's current generation look last-generation. Releasing it later makes sense for Nvidia: it allows GK104 to take the GTX x80 moniker and lets the chip be marketed at the price it is being released at. AMD must also be happy GK110 isn't out. It would look bad if GK110 had similar performance to a 7990 dual-GPU card, which it should if GK104 performs similarly to the 7970.
I'm not, really... my last sentence made that clear ("*if* such a chip exists"). Traditionally, though, nVidia has had a big chip in the works, and this generation that doesn't seem to be the case. Either they had problems developing that (big) chip, or they changed their strategy (focusing only on medium chips), OR they are holding their product away from the market purposefully. Since I'm not an insider I cannot know which of those is the truth. What I do know (assume, in fact) is that if they are indeed holding their product back from the market (*if* they are), then it's a damn shame which may even affect them negatively in the long term (the 8800GTX affair made nVidia the queen of sales for the 2-3 year period following that release; if history has taught us anything, it's that if you have a strong lead on the market, you had better show it)...
Anyhow, like I said, I don't know and cannot know for sure. Traditionally, though, the even numbers in nVidia's naming scheme (GF 4xx, GF 6xxx, GF 8xxx, GTX 2xx, etc.) denoted an architecture overhaul and an almost 100% increase in performance over the previous (even) generation, while the odd numbers denoted smaller changes (a bit like Intel's tick). This generation, that symmetry appears to have been broken (which is also what led me to believe that nVidia actually does/did have a big chip in the works).
Has there been any info on 670s or 660s?
I haven't seen anything specific besides the "7 group" (which would mean 1344 CC) stuff from SemiAccurate, and the "670 Ti," which apparently ended up as the 680.
So Dimi and Rollo both think there is nothing else in the works?
Nvidia just decided to turn its back on the Tesla/Quadro market after all the work they have done over the years?
What an amazing analysis of the market.
There's obviously an HPC chip in the works and it's very late. They were targeting end of 2011 for a pilot rollout to the Titan supercomputer. They had to fall back to Fermi.
All this mad speculation is hilarious. I knew the truth before I was told the truth :)
Calm the rumor mills down a bit, you're going to burn yourselves out.
You will all know in good time.
Reliable sources tell me that the GTX 780 (the "big chip") is coming later this year.
Hold on to your butts.
I hope so. Honestly, Nvidia surprised us all with the GTX 680 (I can hardly believe it). After the GTX 480 and all the laughs around its release, I guess Nvidia was just showing that it can do what AMD/ATI does, with lower power consumption and heat. What I was expecting was a much faster chip with 8-pin and 6-pin connectors, like a real step into something new. So I guess I will be waiting for the GTX 780, the real big chip.
The 680 vs 7970 will be a good thing for consumers, because it's very likely to lead to price competition.
But those who want to see a "big" Kepler soon should hope that AMD releases the so-called Tenerife chip sometime soon.
Here's what will happen... quote me on this later this year.
NVidia will launch the GTX 680. AMD will scramble to get a 7990 out depending on how much the 680 really does hurt them on the high end. The 7990 will launch at $649-$699.
NVidia will show up with the GTX 780 (which will be GK110) toward the summer, after the partners are all happy with their current profits from the cheap-to-produce GTX 680 (seriously, depending on yields, this card shouldn't cost much more than the 9800GTX did to produce). They will then re-release the GTX 680 as the GTX 760 Ti or GTX 770 at the price we all know it should have been released at.
Scrambling and being hurt are not what I expect. The 680 does not seem like it's going to "kill" the 7970, and AMD has had their cards out for so long now that if people wanted one, they would have bought it by now. The only ones waiting are Nvidia fans or people who just want to see prices drop.
Also, we did see slides about a 1300MHz+ Sapphire 7970 product; if that really is true, then it shouldn't be much of an effort to get a refresh out fast enough to beat the 680.
The bigger chip is the real question, though. If Nvidia is doing a solid 10% better in perf/watt, then the 300W x2 chip battle will most likely be won by Nvidia.
^^ Ditto. Since they have had their cards on the market for longer, there is more leeway to drop prices. Those two cards are so close to each other that it's possibly going to come down to personal preference and a price war between the two manufacturers.
I cannot predict or know what those under NDA might know, but the difference in die size is so small that I do not see it as a decisive pricing factor at all.
Something that bugs me is that, at this point in time, this is Nvidia's only 28nm chip due out immediately (unless those under NDA know better). Where are the rest?
^^^^^ I'm curious about that as well.
I want to know that too, especially about the GTX 670....
It seems like the first wave of Keplers will be the GK104 and GK107, while the GK106 and GK110 will come some time later.
The mobile 600 series lineup is currently filled out in two groups: the low end, which uses GF108 (and one GF106), and the high end, which uses GF114. There's a gap in the middle that appears to be for the GK107 to fill, if the GT 640M is any indication. I think the GK106 would be around GF114 level (at least for mobile). Given that NVIDIA is waiting for the GK107 to fill out the middle of the mobile 600 series, I don't think they would have used GF114s for the high end if the GK106 were as close to release as the GK107 is.
So I'm guessing the GK106 is at least a few months behind the GK107, at least in the mobile space. And when it shows up, they can always do a 665M/685M/690M or something like that.
On a not-entirely-unrelated note, given the GT 640's supposed specs on GPU-Z (a GT 545 DDR3 rebrand, it seems), unless NVIDIA is planning on overlapping, even a crippled GK107 should be at least GT 545 GDDR5 level.
$299?
If mobile is any indication, the GK107 should be close behind the GK104.
^^ I would not take the mobile market as an indicator, since it operates quite independently from the desktop and professional markets.
Think of it this way: you're Nvidia. Your mid-range card (so aptly named the "GTX 680" :rolleyes: ) is equal to or better than the competition's high-end card. So, what do you do?
You don't even need to show your high-end card yet. You release your mid-range card as the high-end card and price-gouge, of course. More money for you, and you get to tweak/finalize your high-end card without rushing it to market. It's a win-win for you (if you were Nvidia). :D
Sitting on your cards for too long leaves them smelling like crap. Blowing out old stock is fine. Rolling out new parts is fine, but don't do it for too long or you might end up with a case of self-inflicted butt-hurt.
I'm glad none of you guys are handling my money.
I'm not totally sold that there is some super-secret big GK110 around the corner waiting to pounce (there might be, I don't know), but when I think of Nvidia's GK104, I get the feeling it is so not Nvidia for a high-end product.
Nvidia has always been the go-big-or-go-home type of brand; GK104's specs seem much more mainstream-palatable across the board than their usual over-the-top high-end offerings.
If they do have a big GK core waiting, I would imagine they'll hold off until AMD's refresh and push gross margin with the GK104 in the meantime.
If GK104 can deliver the performance needed to command a ~$500 price tag, whether that was intended or not, it doesn't make a lot of sense to sell a higher-cost product at the same price point if it's not absolutely necessary to be competitive.
As far as I'm concerned, if Nvidia is releasing GK104 as a high-end product, then it is in fact their high-end product and named as such. Anything else will have to be next-gen, next-release hardware, since they've already allocated their high-end 680 labeling to the GK104 and will likely be offering a dual card based on GK104. That, in my mind, leaves little chance or logic for a separate GK110 big-die uber-performance part until the 7xx series is up to bat.
We'll see...
Why couldn't Nvidia just call the GK110 a 690, the dual GK104 a 685, and a dual GK110 a 695? The naming structure isn't holy and unchangeable...
I expect the 7990 to be about $749 at minimum, more likely $799. The 6990 was what, like $649 or something? When the GK104 actually hits retail, it'll go one of three ways:
1. It will perform roughly the same as the 7970 (at the same clocks, not stock-with-turbo-to-win; the people who buy $500+ video cards are overclocking, unless they're just rich kids) and AMD will drop prices ~$25 to lure anyone on the fence waiting for Kepler.
or
2. It will beat the 7970 and be priced at $549, then AMD will drop the price $50 and push the 7990 out ASAP.
or
3. It will be beaten by the 7970 and be priced at $499. AMD will either keep their price points or lower them slightly just to put the squeeze on Nvidia for once.
Those sound like the logical outcomes to me. Opinions?
People figure there's another part sitting and waiting for multiple reasons.
1.) It's strange for NVidia to release a new architecture's high end part under a code-name ending in any number but 0. Refreshes are a different story, but this is clearly not just a refresh.
2.) NVidia haven't made a small chip high-end in quite some time, not counting refreshes.
3.) NVidia's statement that they were actually disappointed and surprised by the lack of performance in AMD's 7970. That kind of statement makes you figure that NVidia had a much faster chip ready to go, saw they didn't need it, and set it aside.
Seriously, all signs clearly point to NVidia having something else but seeing the opportunity to make bank off what would have been their mid-range part.
I still want to know: how do the rumored 1536 shaders and 128 texture units come from an increase of only ~500M transistors?
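One hedged back-of-envelope on that (GF110's ~3.0B transistors and 512 cores are public; the ~3.54B GK104 figure is the current rumor): dividing whole-chip transistor counts by core counts is naive, since it lumps in everything that isn't a shader, but it shows the shaders themselves must have gotten far cheaper, which fits Kepler dropping the shader hot-clock:
[CODE]
# Naive whole-chip transistors-per-core comparison. GF110 figures are
# public; the GK104 transistor count (~3.54B) is the rumored number.

chips = {"GF110": (3.0e9, 512), "GK104": (3.54e9, 1536)}

for name, (transistors, cores) in chips.items():
    print(f"{name}: ~{transistors / cores / 1e6:.1f}M transistors per core")
# GF110: ~5.9M per core; GK104: ~2.3M -- roughly 2.5x cheaper per shader,
# plausible once the hot-clock and complex scheduling hardware are dropped.
[/CODE]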
While the general naming scheme can be changed to whatever, it doesn't change very often with Nvidia. If there were some hint that the naming was being refreshed to something new, I could understand, but it looks to follow the same hierarchy used since the 2xx series.
Not only that, but I doubt there will be only one video card model based on the possible GK110 GPU, which leaves little room in the current naming scheme; at minimum there should be two models offered, due to binning and harvesting of GPUs.
GTX 680 SLI from VR-Zone...http://www.microsofttranslator.com/b...11-03182012%2F
http://www.techpowerup.com/img/12-03-19/187b.jpg
Regarding point 3, that's also marketing speak and PR. They've been doing that for ages.
As for points 1 and 2... well, the same could've been said about AMD when they released the 4870 and then the 5870. It was strange for AMD to release a non-refresh part under the RVx70 code, yet they were stunning successes.
Nvidia having something big in the works is certainly known, but consider two other important points:
1) AMD used smaller chips first when changing processes, e.g. the 55nm to 40nm transition. It saved them from a Fermi situation. Nvidia might have learned that lesson.
2) The GK110, or whatever you want to call it, simply isn't ready yet, and won't be until the end of the year.
There's far, far too much speculation about a mysterious -110 chip, with little to no hard evidence, which is why it is absurdly silly at this point to call anything a failure one way or another (except the pricing, it seems).
Nvidia spends hundreds of millions, if not billions, designing these chips; it's in their best interest to see a return on that investment as soon as possible. The longer they wait to release it, the less it is worth, and if they wait too long it may become worthless. The only logical reason why they wouldn't release it, if they had it, is that it isn't ready for whatever reason.
If you can only make a relatively small number of GK110 chips, what do you do?
a) sell them on $500-600 Geforce cards, demand is high, availability is bad
b) sell them on $5000+ Tesla cards, meeting your contracts with HPC clusters
Holding back is silly, no argument there. Luckily for Nvidia, these chips can cater to more than one market.
But it isn't always like that. Nvidia's beefed-up "GTX 660 Ti" of this generation (the GK104) is sufficient to be competitive. Would GF100 have come first (or at all for the desktop) if even GF104 could have beaten the 5870? Apparently there is no need to feed the gaming market cards faster than the 680 right now, because you would only be setting the bar higher for yourself and lowering prices (or selling this new GK110 card for $800, and good luck with that).
I hope/believe there will be a GK110 card no later than September this year, though. But the problem is: why should they release it if GK104 fares so well and AMD doesn't have a faster part? Better to wait until it is really necessary and doesn't cannibalize your other products.
That's like saying "why would Intel need Haswell or IB???" SB-E is competitive enough, and surely there is enough stock floating around, even for socket 775.
The point is that the end user's train of thought has nothing to do with a company's objectives. Every video card the competition sells is profit they have lost out on; it's that simple.
Intel's product cycle is a bit longer than a year now. Of course you release new products, but not 5-6 months after you launched the previous one. Look at how long Nvidia milked G80 and its derivatives without bringing out a card faster than the 8800GTX/Ultra.
What exactly would Nvidia have to gain if they sold the few GK110 chips they can make this summer for 10% of the profit they could make with Quadros? GK104 will be very competitive with, if not superior to, Tahiti, so it's enough to hold down the fort for the foreseeable future. The situation is ideal: sell the small performance chip at high-end prices and the high-end chip at insane prices in the professional market.
Selling the high-end chip only in the professional market leaves you open to a blow from the competition, shows the competition what your high end could be (a fatal move if they have a performance optimization only a respin away, a la the 4870/4890, or find that they can build a multi-chip card within the power budget that beats the new high-end card), and weakens your public image, because it puts your "high-end" equipment so far out of people's reach that they begin to view you as "the man" rather than a competitive force or an underdog.
The fact is that nV will either put out a high-end consumer part based on the "110", or it will be vaporware and they will look like BitBoys to some people... that, or 3dfx is eating them up from the inside out.
Considering that right now they have the opportunity to charge what they'd have charged for GK110 on the MUCH cheaper-to-produce GK104 (cheaper die, easier to produce, and a cheaper PCB as well), and still release the GK110 at the same price later, what's the problem on their end?
Yes, it SUCKS for us consumers, but really, no one with any actual business sense can fault them. AMD undershot this round; NVidia saw a chance to capitalize financially and they are pouncing on it. Now, rather than selling one of their designs for this generation at high-end prices, they're going to get away with doing so with two of them.
I still find it funny how many people are in denial about this whole thing. Think about it for a minute guys...
Last year, when the AMD 7970's performance and specs were still rumors, we caught word of GK110 and GK104. It stayed that way until we saw the numbers out of the 7970, and suddenly the flagship just gets dropped to GK104? Combine that with NVidia's statement about being disappointed by the performance of the 7970, then add in my previous points... Only the most extremely naive would believe that NVidia intended to start out with the GK104 as the high end. They're doing it because they can. When there's a lack of competition, the consumers are the ones who suffer. Right now AMD can't compete with NVidia in that space, and as such we are the ones to suffer for it.
The next question, though: if GK104 got moved to the high-end space, what is going to take its place in the mid-range?
Interesting point there. The question is, could AMD react? I expect GK110 to be at least 40% faster than GK104. According to some, it will be a bit above 500mm² and pack 6B transistors. Bandwidth would also be considerably higher. It would be a bit like GF114 vs GF110.
But I admit that it takes time to get a product ready for launch, even if the chip is ready. Maybe Nvidia wants to be first with this "refresh". However, they could just clock GK104 higher; they have more headroom with regard to TDP.
I figure GK107 is taking the midrange space.
AMD will have a few options. A dual 7970 is definitely a possibility, and I'm sure a few companies will put out dual 7870s (which could end up the bang-for-buck champ if priced properly). It's also possible that AMD is sitting on another design and planning a quick launch that'll normalize pricing (think X1800 to X1900), although that situation is doubtful, since all we've seen is a slide that was proven to be fake.