According to TweakTown, this card uses less power than Pitcairn AND trades blows with the 7970...
So much for Pitcairn being more energy efficient than Kepler.
If the TweakTown figures are accurate, I honestly didn't expect it to be quite so neck and neck with the HD 7970; I was expecting it to be closer to the HD 7950 (but very slightly ahead on average). Nice!
I still hope $399 will be the final price. If those performance figures are true it could be priced up to $449 if Nvidia wants, but IMO $399 is the ideal target: the card is small (so it's probably rather cheap to manufacture), and putting more price pressure on AMD instead of milking the highest possible price point the card would still sell at makes more sense with the size of the card taken into consideration, and would do more damage to the competitor.
$399 / 350~370 EUR is about the highest I would ever spend on a GPU so please make it happen Nvidia! ^^
Same difference :D They are just going to have first crack at the original batch before Green-sum lists them.
Yep, OCN has been filtering all posts that mention the website and new monitors for an unknown reason since it doesn't violate the auction-based website rule... I think they're just butthurt they didn't get cut in on the deal.
I'd love IPS + 120Hz, but at more reasonable resolutions that don't need constant $500 GPU upgrades to maintain the kind of FPS required to really benefit from that 120Hz. From what I read, that monitor can't run 120Hz at a lower res either.
Is it confirmed that greensum will be selling them direct on ebay? Much prefer to buy that way if so.
No, it would still need 3d features built into the firmware. Even if it were possible the crosstalk would be horrendous. The pixel response time just isn't fast enough.
Yeah, I really was surprised by how much of a performance hit my 680 took when going from 1080p to 1440p. I've also become accustomed to a higher framerate thanks to my 120Hz display. That's honestly one thing that I really do like about 1080p. I just wish that the 27" 3D Vision displays weren't so expensive.
No it doesn't. Their consumption figures are a total mess. I think you're looking at one of the whack OC model figures.
It actually shows it using more power than a Diamond HD7970, which would be hard to believe
http://images.tweaktown.com/content/...ce_preview.png
Theoretically, yes. A member on OCN (HyperMatrix) attempted it with a 3D Vision kit, but the monitors have to be certified by Nvidia.
Lol, sorry, I don't have that resource :P
Surprise! Lol. Nah, they've had 120Hz LCDs for a while; all of the 3D monitors have to be.
He's going to be selling them on his own website (no idea on a link). Not positive on eBay.
It's the same thing honestly. They're both just going to be forwarding the money to the company; neither is making a profit on this model.
GTX670 stabs the back of GTX680...
Really really small, this is very cool, water cooling kits will be very cheap for those! I just hope they haven't nerfed the power supply on them!
It's literally HALF length, what do you think?
I'm not big on 3D stuff/optics so correct me if I'm wrong, but with 3D displays isn't the total Hz split, since each eye is effectively only receiving half of the monitor's refresh rate? So that's still only 60Hz effective per eye, just in 3D. That's why 3D has to have a higher baseline refresh, since otherwise even people with the worst eyesight would notice flicker on a 60Hz 3D display.
I don't think I even see 3D televisions at the local electronics store with less than 200Hz (granted they don't display anything higher than 1080p)
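A quick back-of-the-envelope of what I mean, assuming an active-shutter setup like 3D Vision (the panel number is just illustrative):
Code:
# Active-shutter 3D alternates left-eye and right-eye frames on the panel,
# so each eye effectively sees half the panel's refresh rate.
panel_hz = 120
per_eye_hz = panel_hz / 2      # 60 Hz per eye
print(per_eye_hz)              # 60.0 -> a 60 Hz panel would leave only 30 Hz per eye, hence the flicker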
Now if someone can figure out how to cram the 670 down to a half-height card by lengthening it, it would make for a mean slim entertainment/HTPC box.
Edit 1: Bummer, time to stop this habit of calling 'single slot' as a 'half height' also.:slap:
Edit 2: Not a half height but this should be doable, no?
http://i48.tinypic.com/29bhysy.jpg
^ half height so it fits in shorter cases, not a single slot card.
ehehe
I'm just thinking that I always end up burning my shorter cards XD
I have a pair of Zalman VF950s sitting around to replace the stock cooler on the GTX 670 and get rid of all the extra cooler length. I'll only be buying one at launch though, and maybe a dual GPU version much later on when they get cheaper.
I'll go with EVGA because I want to change the cooler, and they should last a long time, with the option of adding more in SLI later on.
Interesting article. Any word on GK110 "in three days" or are they blowing smoke?
http://vr-zone.com/articles/how-the-...ed-/15786.html
Oops, sorry, misunderstood. I've seen it pretty frequently. Actually, the monitors I use right now will push 75Hz at 1280x1024. But nothing to the level a CRT could.
It depends on the technology used. But for this discussion we're talking about Nvidia 3D Vision, which is shutter style: one lens closes, then the other, alternating back and forth, and your brain creates the image. It does exactly what you say, with 60Hz of video in each eye that your brain combines into a smooth 3D image. A 60Hz 3D display would be useless with this style; they would need something like polarized glasses, or those awesome monitors that had two panels stacked behind each other where the glasses filtered the front panel to one eye, lol.
Those TVs are almost always polarized, as 120Hz panels are very costly at that size. The "120" / "240" / "600" Hz sets (plasma, usually) typically use those lovely "SmoothMotion" or "CinEngine" chips that artificially double the frames of the movie to create that effect. Hell, I think some of those might even use shutter style to make fake 3D, I'm not sure; I just know shutter is uncomfortable for me.
They're referring to GTC. They will most likely talk architecture and HPC. Don't get your hopes up for any consumer product info or release schedules. Should still be interesting though.
Quote:
Opening Keynote - May 15 @ 10:30am PT
NVIDIA CEO and co-founder Jen-Hsun Huang will kick off the conference with the opening address. He'll review the dramatic and growing impact of GPU technology in science, industry, design and many other fields.
And, he'll announce some big GPU news that you'll not want to miss.
For those of you who can't make it in person, we will provide a video livestream from the keynote.
Both NVIDIA and AMD always rename their lower-end SKUs. Heck, we're still seeing Juniper, Redwood and Cedar parts from AMD.
Sounds like AMD may need another price drop...
Either way, while we watch the damage get done on the gpu front back and forth I'm going to sit back with a coke and a bag of chips. Picking up my first 3d monitor (deal is FAR too good to pass up) tomorrow, so I'll be ready to have some real fun when prices fall to where I believe they should be.
Hold on, hold on, hold on...
What's all this about GK100 and GK110?
Was it "always" back then? Or a trend triggered by nVidia and adopted by AMD, or the other way around?
By the way... GTX 670 on TigerDirect already.
http://www.tigerdirect.com/applicati...ywords=GTX+670
http://i46.tinypic.com/2hqruvm.jpg
Whats that, a Ninja launch?
By the look of things, ALL companies using TSMC's 28nm process are experiencing the exact same problem with short supply, not just nVidia. But since this is Charlie we're talking about here, publishing such facts would not further the narrative he is always trying to push.
http://www.digitimes.com/topic/28nm_...c/a001191.html
ATI has been rebranding with Radeon since the beginning. The original Radeon SDR was rebranded the 7200 and the Radeon VE became the Radeon 7000. The Radeon 7000 itself was also a 7200 without a T&L unit. 8500 series was a new design but they rebranded it for 9000, 9100, and 9200.
No, it's an early release / broken NDA.
Official release is May 10th, 2pm BST in the UK. I can't wait to get mine; probably going to get an EVGA reference or a longer custom-PCB version depending on the prices. (I'm not paying for anything with a blue PCB; Gigabyte, Inno and Galaxy / KFA2 GTX 670s have all been pictured with blue PCBs, so I'll buy an EVGA reference or an EVGA / MSI custom design.)
I like how Charlie is always saying, "NVIDIA will be bankrupt soon!" and they just keep rolling along making tons of money.
Market cap this morning: AMD + ATi = $5.16b; NVIDIA = $7.68b
Q1 2012: AMD + ATi = $590m in the red; NVIDIA = $135m profit
AMD dual GPU = MIA; NVIDIA dual GPU = the 690 launches to reviews saying it's the best video card ever
AMD flagship = price drops and slow sales; NVIDIA flagship = every review site in the world says it's the card to buy, sells as many cards in one month as the 7970 did in two, outselling the 7970 9:1 (how can this be if no one has any, Charlie?!)
AMD 7950/7970 = about to get another price drop and lower sales; NVIDIA 670 = rumored to launch tomorrow, leaks say it trades blows with the 7970 at 7950 prices
Ace reporter Charlie D. looks at all this and blubbers, "NVIDIA is re-badging two low end OEM chips and is selling out of 680s! They're imploding! They'll be broke soon!" :shakes:
How did this retard get a web site? Oh yeah, it's funded by AMD advertising.
Note to AMD: The fable of the boy who cried wolf applies to Charlie. He can yell about NVIDIA going broke every year all he likes, but when it doesn't happen, it becomes trite.
Either way you look at it, Nvidia is getting squeezed out slowly. With iGPUs becoming more powerful very quickly and very few games needing that much power to run, Nvidia is in trouble. What's gonna happen when they move to the next smaller process? They're having so much trouble already and blaming everyone else for it. Maybe Intel will do a hostile takeover and assimilate Nvidia's tech for themselves RIF :p
They depend a lot on their server-level products now, so it's going to be good to watch what AMD offers in that segment to see if they are really in trouble or not.
I'm no fanboy, but I believe Nvidia is far from being squeezed out. I would however agree that GPUs (for their originally intended purpose) may slowly be squeezed out as a segment. High-end GPUs are here to stay though... maybe not for gaming, but they have been redefining computing as an alternative to traditional supercomputers when used in clusters. Look at the top ten supercomputers in the world today and you begin to see a trend.
I dream of the day when I build my own affordable cluster of CUDA cores or AMD's offerings based on 2 or more next generation High-end cards interconnected.
We'll always need more power. New consoles are coming, which will hopefully present a leap for the desktop market as well. Then there are 4K resolutions slowly taking off.
The same thing can be said for AMD. The more powerful iGPs become, the less potential revenue for their own parts in the dedicated graphics space. In addition, the R&D spent developing such parts seems to have come at the expense of AMD's server and high-end desktop processors. AMD server parts used to be huge money for them, as it was the area where they remained the most competitive, but their marketshare right now is 5.5%. It seems like after purchasing ATI, their CPU parts have been either treading water or bailing the company out. Same with their desktop side.
The only area AMD appears to be gaining marketshare from a CPU point of view is the mobile space, but as you said, iGPs are getting fast quickly, and what happens when Intel's graphics parts become fast enough, which could very well happen with Haswell (not to mention this market might be shrinking in the future because of ARM-based processors)? With Intel having the stronger brand and the faster CPU part, AMD will be :banana::banana::banana::banana:ed if they don't come up with a faster CPU. The public cares a lot about a decent GPU, but they care far more about the brand and CPU performance, hence Intel's massive lead over AMD. People are willing to take a hit on GPU performance for branding and CPU power. And when Intel's graphics part becomes fast enough for the mainstream, people won't buy AMD processors anymore except for budget builds. That is an AMD that cannot stay afloat.
I think NV's chances of succeeding in the professional and supercomputing space are a lot better than AMD's chances of developing a competitive CPU architecture, both of which are necessary for the long-term viability of the respective companies.
JHH's GTC12 keynote begins
http://smooth-las-akam.istreamplanet...d1/player.html
http://blogs.nvidia.com/2012/05/live...kicks-off-gtc/
Note that this is the live stream: http://smooth-las-akam.istreamplanet...d1/player.html
IMO, VGX (a cloud-based gaming initiative) is BOUND to be announced.
GK110 confirmed:
"The NVIDIA Tesla K20 GPU is the new flagship of the Tesla GPU product family, designed for the most computationally intensive HPC environments. Expected to be the world's highest-performance, most energy-efficient GPU, the Tesla K20 is planned to be available in the fourth quarter of 2012.
The Tesla K20 is based on the GK110 Kepler GPU. This GPU delivers three times more double precision compared to Fermi architecture-based Tesla products and it supports the Hyper-Q and dynamic parallelism capabilities. The GK110 GPU is expected to be incorporated into the new Titan supercomputer at the Oak Ridge National Laboratory in Tennessee and the Blue Waters system at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign."
http://www.marketwire.com/press-rele...da-1657561.htm
If the Tesla version arrives in Q4, I'm guessing the desktop version arrives with Win 8.
awesome link jjj thanks
http://www.abload.de/img/desktop_2012_05_15_214jdfy.png
largest and most powerful gpu we've ever built...
wow even bigger than GT200?
http://www.abload.de/img/desktop_2012_05_15_213ouyq.png
Kepler is the world's first GPU designed for the cloud, to be deployed into cloud data centers worldwide. It does this with:
--virtualized GPU
--no longer does it need to connect to a display; it can render and stream instantaneously right out of the chip to a remote location
--super energy efficiency, so it can be deployed at massive scale
A BIT more info: http://www.hardwarecanucks.com/news/...ng-in-q4-2012/
Plus, GeForce GRID info: http://www.hardwarecanucks.com/news/...-for-everyone/
I also can hardly resist selling my two GTX 580s and buying two GTX 670s :(
A 30 second search found 4x Galaxy GTX 670's and 2x Zotac GTX 670's... http://www.microcenter.com/search/se...x=0&submit.y=0
You're still assuming NVIDIA is going to release a compute oriented consumer product when they have some very compelling reasons not to.
They make more money on smaller chips, and the consumer market just told them that it doesn't place high priority on compute. (we've seen STEAM stats, Skymtl, and neoseeker saying that GTX680s are outselling 7970s by a great margin, and if the market wanted compute, the 7970 would be selling better as it leads the 680 in this area)
I wouldn't be shocked if NVIDIA split their product lines going forward. I wouldn't be shocked if they went back to the old ways either, but I can see some reasons they would not.
Newegg has not been out of stock on all brands since launch, to my knowledge. They're down to one brand in stock right now, but others have been coming in and out of stock since launch.
I've seen them at Amazon and on eBay for MSRP or close during that time as well; people can buy them if they want one.
If AMD actually plays the price war game, which it sounds like they are doing at this point, in that they place their product lineup at price points similar to their previous 6000 series, Nvidia would be forced to drop prices too. If there was enough of a pricing gap between the 680 and a potential GK110, I don't see why Nvidia wouldn't release it, even at $600 plus. They would still be able to sell it easily.
Damien Triolet from Hardware.fr talks about early 2013 for the GeForce GK110 (end of this year for the Tesla version).
What's wrong with CUDA? People in the professional / HPC world are still "shouting it from the rooftops" as you put it. Currently, there isn't a better, more adaptable GPU compute language on the market. OpenCL surely has the chance to make it big, but being an open format, there is still very little focused effort on fixing its many inefficiencies.
So the real question is, I'd guess, whether we're going to see the GK110 line on the desktop or whether we're going to have two separate lines of products from now on and thereafter.
If we do get two separate lines, it will be a first to have the lesser chips serving the gaming arena; I guess that's a side effect of console dominance...
This is the same as the last two big chips: they were tweaked to be good for non-gaming work too. The timing of their enterprise push just means it got announced this way first. These supercomputer contracts are a big deal for them; they are officially a government contractor now! Good work if you can get it! This may also be why they've held it back, to make it just right for those contracts.
All the enterprise stuff in the big chip makes it less power efficient for gaming than the smaller chips. No big deal; the 580 is still faster than the 560.
I don't think anyone knows for sure... For all we know, it may never come out as a Geforce card... It's not like gamers need compute, and that's what this chip is about.
Sort of like SB-E to SB, larger and more expensive and mostly makes sense only for professional apps. But let's wait and see. :yepp:
I recorded the Press Conference Q&A after the Keynote :) http://bit.ly/GTC2012
Actually, the Keynote is up on Youtube:
http://www.youtube.com/watch?v=8FPQT...7ED9CB13A00DFD
ALSO, FOR EVERYONE...... KEPLER GK110 WHITE PAPER: http://www.nvidia.com/content/PDF/ke...Whitepaper.pdf
Very interesting stuff in there. Pretty much better than any possible article. ;)
I didn't say Keynote :) I know the Keynote was streamed and uploaded to YouTube which is why I didn't upload it. I only uploaded the Q&A session after the Keynote which was a press conference. Press conferences usually have a lot of more info than the Keynotes do ;)
I wasn't referring to your post.
However, this time I found the keynote had more info than the Q&A, but I guess that's just me....
Because every time we talked about gaming perf/dollar, gaming perf/watt, or whatever metric in *gaming*, we got the same refrain of 'CUDA and PhysX, perf regardless of watt, blahblah.' Now Nvidia is pursuing the same philosophy ATI did, and the arguments have (not) surprisingly shifted their focus to the exact same metrics ATI's VLIW4/5 was lauded for since the beginning.
So according to the whitepaper, a full GK110 should be:
CUDA Cores: 2880 (15 SMX)
ROPs: 48?
Memory interface: 384-bit
7.1B transistors.
GK104 is:
CUDA Cores: 1536
ROPs: 32
Memory interface: 256-bit
3.5B transistors
With the added DP units and 720KB of data cache, it's gonna be one hell of a big piece of sand!
PS: there's a typo in the xbit link for the core count... edit: hang on, I'm guessing they're including the DP units..
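Quick sanity check on those numbers, assuming Kepler keeps 192 CUDA cores per SMX (which is what the GK104 and GK110 figures imply):
Code:
# Kepler SMX math: cores scale with SMX count at 192 cores per SMX
cores_per_smx = 192
gk104_cores = 8 * cores_per_smx     # 1536 -> matches GTX 680
gk110_cores = 15 * cores_per_smx    # 2880 -> full 15-SMX GK110
print(gk104_cores, gk110_cores)     # 1536 2880
# Transistor count roughly doubles as well: 3.5B (GK104) -> 7.1B (GK110)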
Errr Cegras, you do know the 6 series runs PhysX faster than the 5 series, right?
http://physxinfo.com/news/7862/gtx-6...marks-roundup/
I'd say having PhysX that's better than a GTX580>>>>>>no PhysX at all.
A majority of the reviews I've checked don't even bother with PhysX testing: TechReport / TechPowerUp / AnandTech... and I don't think I saw a mention of it on HWC either. The point is, I could look up a GTX 480 review and chances are it would be testing Mirror's Edge and Batman with PhysX. Now we don't see any of that. We might have seen some reviewers spend a page or two talking about CUDA before; now? Compute is barely mentioned nowadays too.
http://www.anandtech.com/show/2977/n...th-the-wait-/5
http://www.anandtech.com/show/5699/n...gtx-680-review
You're giving AMD way too much credit here. Besides AMD holding the performance per watt crown for a while, performance per dollar has been pretty similar for both companies for a while (GTX 260/4870, 4890/GTX 275, GTX 470/5870, GTX 570/6970). The only card that has been pricier than its AMD counterpart has been NV's top card, which typically justifies its price by being the fastest on the market and much costlier to produce than anything else from either company. Although the flagship card says a lot about a company, it doesn't represent everything about the company.
Also we are seeing a lot of people, including yourself, buying and making the choice of AMD this generation for what Nvidia was strong at last generation, that is, its GPU computing abilities, so I really don't get your annoyance or anger. Nvidia has been stressing this for years, and in the past a chasm even bigger than the performance-per-watt gap was established in favor of Nvidia cards in most GPU compute situations besides bitcoining, and it held a tremendous performance-per-watt lead in the professional market. This argument holds less water when talking about AMD-based solutions at the moment because AMD is still largely untested compared to Nvidia on how good their cards are in the professional market. AMD needs to get their professional card out ASAP, increase their driver support in this field exponentially, because NV has an unquestionable lead in professional driver support (the consumer driver situation is somewhat debatable), and start winning more than a benchmark like LuxMark in reviews and winning in industry-standard programs like Nvidia has done in the past. Nvidia's dominance here is unquestionable as well because of their vast marketshare lead over AMD in the pro market.
http://hothardware.com/Reviews/NVIDI...Review/?page=5
In addition, although AMD might be good at OpenCL, without them putting huge amounts of marketing muscle and money behind it like NV has done with CUDA, these efforts might not bear fruit.
Both companies are simply converging on the same design philosophy because their goals are not mutually exclusive. Performance per watt has always been a goal for Nvidia because, I imagine, it is essential for both gaming and GPU compute. GPU compute has been one of Nvidia's number one priorities and has directed how they have designed GPUs for the last 4+ years. Performance per watt is one of the biggest factors in whether GPU cards get used in professional systems and supercomputers. AMD has wanted to get into the professional market for a while, hence its shift to making its shaders more like Nvidia's. Both companies are merging their design philosophies especially now because GPU compute is a massive and untapped industry for growth and revenue.
I'm not buying it for compute. Although I wanted a 7850, the 7870 outclasses all cards in the same price region on nearly every metric, even if only gaming related ones are taken into account.
Also, perf/dollar is a constantly shifting comparison between Nvidia and ATI due to the constant price drops that ATI forced. What I was talking about was things like perf per transistor / mm^2 / watt, all sorts of things that were architecturally derived (and not market derived), that are now being used as merits for Nvidia instead of ATI, while noticeably PhysX and CUDA have taken a huge back seat in all reviews published thus far. It's hypocrisy.
Not that it really bugs me, but it should be acknowledged.
How is it hypocrisy when, ON TOP OF THAT, even the $400 GTX 670 flat out outperforms not only the 7950 but also the 7970 at a lower cost? Yeah, Nvidia also provides a more feature-rich solution.
WTF does the 7870 outclass? It's hardly faster than a GTX 570 that I could have, and did, pick up a year and a half before the 7870 was ever released. I don't get your logic. It's just hardcore fanboyism.
When GTX680 launched it was cheaper than 7970, faster, and drew less power. The first two are the most important part.
This is simply not correct. It goes both ways, and it hasn't been only AMD forcing NV's hand; it has been NV doing the price attacks lately, i.e. the GTX 570, GTX 560 Ti, GTX 460, GTX 670 and GTX 680. As I have said, you are giving AMD far too much credit, as if they have done all that is good for the GPU market in general.
Architectures are purely market derived. Both companies have designed their chips to make money in different ways, both gaming and professional market.
Performance per transistor has never really been talked about, lol, I don't know why you're even mentioning it. And on that metric, Nvidia has been competitive with AMD, i.e. GTX 580 vs 6970 is 3 billion vs 2.64 billion.
If anything, AMD's performance showing this round has shown why NV fell behind on performance per watt and die size in the past.
lol so they spend so much time/money/r&d on gk110 only to offer it in tesla form ?? hmmm
I'll take a ~$1000, 3GB/6GB-buffer, 2880-core GK110 over a stupid arse GTX 690 any time, any place
Tesla is primarily what GK110 was developed for.. so what you said doesn't make much sense.
I'm a fan of single cards too, but it's a pretty safe bet that when/if GK110 does appear as a gaming card, it will be notably slower than a GTX 690 out of the box, so I'd expect to pay a lot less than that..
Also highly likely it wouldn't have all SMXs enabled either.
Like all power/thermally limited SKUs though, it would be a mad overclocker with the right cooling
lol, what I'm saying is they'd have to be stupid not to offer GK110 in GeForce form
less indeed, ~$600 for the same full GTX 690 performance.. not notably slower
Tesla K10 = GTX 690; no reason a "GTX 780" is not going to equal Tesla K20 (15x SMX)
The small size of NV's chips, if anything, has been more of a negative for Nvidia this round, at least in the debates around here. I hardly see people talking about performance per mm2 and saying it is crazy fantastic. What I do see a lot more of is people complaining about how NV could charge so much for a small chip, and I think the same thing applies to both companies. I have to agree GK104 is priced too high, and the same argument applies to AMD this round too (not as much after the price cuts).
I think the big reason people are lauding NV's performance per mm2, at least when it comes to debates, is this (plus this is the GTX 780 thread): if the mid-tier GK104 performs a bit better than a 7970, then just imagine how the doubled-up GK110 is going to perform. The performance per watt and per mm allows for a much grander flagship for the enthusiast from NV, which honestly anyone should get excited about, since the performance is simply monstrous, depending on whether they screw up or not.
As someone who can appreciate AMD's efficient designs of the past, imagine if AMD announced it was going to make a near-600mm2 version of their architecture. You or anyone on this forum would be excited at the performance potential of taking an efficient architecture and scaling it up. NV has made an efficient architecture like AMD, but has the balls to scale it up like it has in the past.
What has hurt AMD's image this round with the reviewers and the debaters is this: if you're going to raise the pricing bar, you'd better bring the performance to back it up, especially when you're on a new manufacturing node. By making only a moderately sized hybrid gaming/GPU-compute card (rather than a moderately sized pure gaming part or a monolithic hybrid GPU), you simply leave so much room on the table for the competition to make you look bad. Did AMD really think its pricing would hold for the next generation with a card that is 20% faster than a GTX 580 while costing 10% more?
AMD should have designed a larger chip to compensate for the die area the compute hardware was going to take up and clocked it higher (but really, 75MHz would have hardly made a difference; check the original 7970 thread for expectations of AMD performance this generation), or priced the chip more appropriately. Because of AMD's pricing this generation, the GK104 got promoted to GTX 680 status with the pricing to match, and it was still the much better deal out of the gate even with this inflated pricing.
Forget K10 vs K20.. K20 is aimed at a different market.
I'm sorry, but unless GK110 is a step UP in performance/watt for gaming (despite the opposite being more likely), I wouldn't get your hopes up for that.
not saying it wouldn't be fast.. just not that fast.
K10 vs K20, different markets?? what you said doesn't make much sense :) jk
why, because Nvidia categorized K10 and K20 differently? it's the same HPC crowd
it's not about getting hopes up, it's about knowing/looking at the data :D
GTX 690 full SLI performance will be matched, in many games it will be surpassed, and in some games it will get decimated because of nil SLI support
http://www.anandtech.com/show/5840/g...ased-tesla-k20 figure 1
Quote:
it's not about getting hopes up, it's about knowing/looking at the data :D
GTX 690 full SLI performance will be matched, in many games it will be surpassed, and in some games it will get decimated because of nil SLI support
What is it about the available data for GK110 that makes you so certain it will match gtx690?
True to a point. If OpenCL fails to catch on then AMD does stand to lose out. The problem with that is Apple has baked OpenCL into Mac OS, Adobe now supports it in their Creative Suite, and rumor has it that HandBrake will soon support it. Add to that the fact that Nvidia supports OpenCL as well.
You seem to be forgetting AMD's big push with Tahiti... time to market.
AMD was first out the gates with their "high end" on a new process. They were cautious and conservative.
There is a reason that the GTX 680/670 is able to be priced where it is and no, it isn't because of its size.
AMD tested the waters, put a new high end on the market that was unchallenged for a quarter, and subsequently an entire lineup that is still unchallenged...
Don't get me wrong. GK104 is a great chip and shows just how well Nvidia can compete when focused. I'm sure big daddy will be good as well but if you think AMD doesn't know this and doesn't have anything planned for later this year...
Here's why I think this chip (GK110) should be released on the desktop as well... in a way it is "owed" to us.
Let us start from November 2006, when nVidia decided to release her first card with unified shaders (the 8800 GTX). It seems that nVidia follows a release/manufacturing pattern equivalent to Intel's tick-tock, with the even-numbered card generations being the "tock" (new architecture, much greater performance) and the odd-numbered generations being the tick (minor tweaks).
Anyhow, in November 2006 the 8800 GTX is released and takes the world by storm. nVidia needs to wait another 20 months (June 2008) to release a card architecture which is truly more powerful than the G80. It was called the GTX 280 and it was on average about 80% faster across the board in most games at the most popular resolution than the G80. It is not exactly a doubling in performance, but enough to call it a new generation card as it matches (or even exceeds) the performance of two G80 cards (the last "tock" cards) in an SLI arrangement.
From there on it took nVidia a further 21 months to actually get around 75% more performance from the new "tock" card (GTX 480), which was released in March 2010. Again we see the same pattern of advancement (but sadly the cards start getting hotter and more power hungry, and the release schedule slips: 21 months instead of 20).
And the above leads us to "modern day", March 2012 (24! months since the last "tock" card), and we get a card which is at most 60% faster than the last "tock" card (GTX 480), and which took even longer to be released. My point is that things are obviously slowing down: each new generation takes more time to create, and the end product is progressively less impressive from a performance point of view as the generations go by.
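Rough numbers to illustrate, taking the release gaps and my ballpark gain percentages above at face value:
Code:
# Performance gain per month between "tock" generations (months and % gains
# are the rough figures quoted above, not measured data)
tocks = [
    ("G80 -> GTX 280", 20, 80),      # Nov 2006 -> Jun 2008
    ("GTX 280 -> GTX 480", 21, 75),  # Jun 2008 -> Mar 2010
    ("GTX 480 -> GTX 680", 24, 60),  # Mar 2010 -> Mar 2012
]
for name, months, gain in tocks:
    print(f"{name}: {gain / months:.1f}% per month")
# ~4.0, ~3.6, ~2.5 -- the rate of improvement keeps dropping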
I'm not trying to argue that the GTX 680 is a bad card (it's a surprisingly good one if all parameters are considered); my argument is instead that nVidia would probably *need* to release GK110 on the desktop as well to keep pace with the program she initially set back in 2006 (and probably even before that). Of course she is not bound to do that either, but it would be greatly disappointing from a consumer's point of view, as we get increasingly more delayed products with increasingly smaller performance upgrades.
There will be a day, sadly not too far away, when new generations will mean close to nothing; it seems that Intel is also nearing that point (Ivy Bridge is no better than Sandy Bridge in most respects). And like I said, that would be sad because it means the consumer-grade IT industry is being slowed down quite aggressively. Our hobby of taking new and exciting things and tweaking them to the max may soon come to an end, or worse, be rendered meaningless as companies increasingly release rehashes of the same product. But worst of all, it would slow down the world at large, as advances in desktop computing eventually infiltrate every other aspect of computing and indeed social life; it's as if the whole world is slowing down when nVidia (and Intel) decide that they do not want to keep the pace they once used to... I know how silly it sounds, but I'm afraid that when all the ramifications are considered it will prove right :(
HD 8xxx is expected soon I guess... I don't think we will see GK110 till ATI has their next gen out, and most likely that's gonna be out a little earlier than GK110. Great times for us consumers, you can't go wrong with either party IMO. I am a little biased towards NV because the red team seems to cheat on IQ every now and then (or so they say), plus the drivers are nicer and PhysX is cool.
I'm on a 6950 myself and stuff looks fine imo.
This battle has been good, 7xxx vs 6xx; looking forward to a similarly closely contested 8xxx vs 7xx.
Well, the fastest single GPU is $500-550... looks OK to me. The dualie is $1000, which is an ouch, but previous dualies have not had fully enabled x80 chips, so...
Edit - But I totally see your point. A friend is sitting on an SLI 580 setup and I talked his head off to avoid the 680 and 690 upgrades knowing GK110 is around. Another friend needed a card for 25xx rez so he is getting a 690, but damn, it's not in stock anywhere (he doesn't have anything; he relocated and built a new rig).
So coming back to your point of this not really being good value... it's not, considering Nvidia had the 680 pegged as a 670 and the 670 as a 660. Also lots of people screamed that the 7970 was overpriced at launch etc etc... so you are absolutely correct in that sense, no argument, we are getting RIPPED OFF this gen. Let's hope for a better next gen, I'm expecting a good jump. Sadly you can bet the 780 is gonna be $700 if HD 8xxx is meh.
Yeah, it's $500 to you but that is $700+ here; a 4GB 680 is $800+ and a 690 is $1500+ !!!
Yes its expensive.
:(