Maybe when I say "highest-end part" you're thinking of the name while I'm thinking of performance.
The 6870 wasn't faster than the 5870, or even the 480, so I guess what you really mean is that the 480 was the highest-end part.
I believe it was not designed as a flagship GPU, but when Nvidia compared it to AMD's best and it won, they named and priced it accordingly. If they had any intention of releasing a higher-spec single-GPU card, then why name this one the 680? It leaves no room in the conventional and expected naming system for a higher-end card. It won't be a 685, as the numbers are too close and don't represent the gain in power it will have, so a 780 it will be. This 680 will then become a $300 760 and everyone will be happy....
They could use up the dual-GPU name and call GK110 a 690, but it's hardly likely ...
:(
GTX680 is high-end as a single graphics card alone without taking anything else into consideration.
When I or anyone else says it is not high-end, we are referring to the Kepler architecture. If it were high-end, it would be the end of the line for Kepler until the next architecture comes around; however, Kepler is capable of at least 50% more performance, which will come in a 2048-core Kepler part. It is in comparison to that part that the GTX 680 is not classed as high-end.
I am with you on that: if it is ATI's or NV's fastest card out at the time, it is their current high-end card. We can go on all day about how they have faster technology coming, and that makes all the sense in the world. But if it's not out, the fastest card that is should be considered "top end".
I think they should have called it the GTS 650 and made sure it was branded ultra low-end but still crushed the 79XX in performance, just to be trolls.
coolface.jpg
-PB
The GTX 680 is definitely high-end in the current market, but it is not Nvidia's highest-end single-GPU part for the uarch. It was shoehorned into the position because it could perform at what is currently a high-end level and command a high-end premium.
I remember it being talked about last year, before the 7970 launched, that Nvidia was trying to subdue the hype after the Fermi fiasco and that high-end Kepler would be late in the year with the midrange launching first, but in the end it would seem the midrange got a promotion.
I would search for the details, but it's really not that important an issue to me; for those interested, the info should be out there from late last year.
I am pretty sure, if we go by code names, the 7970 would have been designated as a midrange or sweet-spot part. That being the case, it would have been designated RV1070. The last time their big chip was designated as the high-end Rx00 was the 2900XT, which was their biggest chip ever. Ever since then they have been going with RVx00, the Rx00 being left for their dual-chip series.
If we go by production cost, such as power, PCB, and so on, which people are using to paint GK104 as a rip-off and Tahiti XT as not, I am almost certain the 7970 is more similar to a GK104 in production cost than to GK110, GF110, or GF100. Gx100 chips are vastly more expensive to make than AMD's RVx70, but AMD decided to price theirs even higher this generation.
To give you an idea, GF100 cost more to make than a 5970, a dual-GPU card.
AMD's and Nvidia's midrange parts have always gone after the same market: the under-$400 market. This time AMD came out first, so they thought to themselves, "let's charge high-end pricing because we have the performance win." Most of that performance merit was a result of being first to 28nm, however, and they had not really designed a truly high-end chip (an Rx00 chip) performance-wise. What they didn't count on is Nvidia's midrange performing like their midrange again, a la 8800GT vs 3870. But rather than set off a price war with AMD, Nvidia followed suit, and now both companies' sweet-spot cards are much pricier than they were before. If Nvidia is tricking customers, so is AMD.
The difference is that Nvidia this time around performs about 10% better for more or less 10 percent less money, which makes it a much better value. In fact, a better value than the 8800GT if we go by pricing versus the competition: the 8800GT performed better than the competition but cost more. This just shows how badly the 7970 was priced in the first place.
What if nVidia decided to follow AMD on the number 9, and the "high-end" gets called the 690, and the dual card [if nVidia decides to do one at all] gets the 695 or 699 name?
Wow, XS News has certainly gone downhill.
I think Nvidia has found itself in the rare and fortunate situation where their product has far exceeded expectations; this, along with AMD's apparently poor-performing part (as Nvidia put it), means we will pay more until things balance out.
A midrange card with great performance doesn't make it a flagship card, the same as releasing a mid-high card first doesn't make it a flagship card, even if for the meantime it is the best performer on the market; it's only minding the crown for now.
:D
Can someone with a GTX 680 on X79 please do this:
http://translate.google.com/translat...20120323002%2F
According to that article, all you have to do is a small registry edit to get your GTX 680 working at PCI-E 3.0 once again with current drivers. This is a huge deal for those of us with multiple GTX 680s. Being stuck at 8x PCI-E 2.0 is a no-go.
You can test your PCI-E speed when the card is active (not idle, or it will drop into power-save mode) with GPU-Z:
http://www.techpowerup.com/downloads/2120/mirrors.php
Click on the little question mark next to the PCI-E speed line on the right to read the proper speed during rendering.
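If you'd rather not eyeball GPU-Z, here is a minimal sketch for polling the link state from a script while the card is loaded. It assumes your driver's nvidia-smi exposes the pcie.link query fields (newer drivers do; I can't vouch for every version):

```python
# Poll the current/max PCIe link generation and width via nvidia-smi.
# Run a render or benchmark in another window first, otherwise the card
# drops to a power-save link state at idle and under-reports.
import subprocess
import time

FIELDS = "pcie.link.gen.current,pcie.link.gen.max,pcie.link.width.current,pcie.link.width.max"

def pcie_link_state():
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=" + FIELDS, "--format=csv,noheader"],
        text=True,
    )
    # One line per GPU: "gen_cur, gen_max, width_cur, width_max"
    return [line.strip() for line in out.splitlines() if line.strip()]

if __name__ == "__main__":
    for _ in range(5):
        for i, state in enumerate(pcie_link_state()):
            print("GPU {}: {}".format(i, state))
        time.sleep(2)
```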
Most of those tests are ancient, and they didn't take 3-4 GPUs in multi-display setups into account.
If you are losing 3-4% running a single old GPU on a single monitor, I can't imagine how badly 3-4 overclocked GTX 680s would be hampered running multiple displays at 8x PCI-E 2.0.
If you read the thread you will see that people have already got it working.
Someone reported his second card not getting the fix, but I assume that was because the registry fix is per-device and he didn't apply the same fix to the second card.
The registry fix hurts performance for some people btw, and offers no increase in other situations. A registry edit is not sufficient to "fix" the pci-e setting.
http://i445.photobucket.com/albums/q...08/img55-1.jpg
Hi there
I've had access to roadmaps for some time and have access to the latest.
GTX 680 was intended for March/April, and it is now here. The fact is NVIDIA was always intending to release a GTX 680-class card now; yes, the spec may have changed, but this has always been the scheduled launch for their top-end GTX 680 product.
The cards that come next will be the 670Ti and 670; expect them around May, maybe the end of April. Both will obviously be slower than the GTX 680.
GTX 680 2GB, aimed at 7970.
GTX 670Ti replaces the GTX 580, will also be 2GB in the £320-ish range, and will take on the 7950 3GB.
GTX 670 replaces the GTX 570 and will no doubt be 2GB also; expect £239.99 and slightly faster than the GTX 570.
GTX 560Ti and 560 are not due to be replaced until much later in the year.
All low-end, 520/550 etc. shall be re-branded into 6xx series, same cards just re-boxed as 6 series with slightly bumped clock speeds.
A dual GPU based card could and can be released when NVIDIA desire to do so, most likely called GTX 690.
Again, GTX 680 is flagged on the roadmap as the fastest single-GPU card. Will a faster single-GPU card come this year? Well, I guess that depends on whether NVIDIA feels they need one, and if they do, I suspect an October-December timeframe.
What we can expect in April/May is AIBs making much faster and higher-TDP variants of the GTX 680.
Don't be surprised to see cards like an EVGA GTX 680 Superclocked 4096MB soon, with twice the memory and higher clock speeds, in the £500-£600 region.
Source
^^ That is the most reasonable thing I've seen in here.
Honestly I'm leaning towards the 7970 over the 680, mostly due to the "you get what you set" clock speeds, and because some games cause Aero/performance drops in multi-monitor mode while gaming with nV.
It is clear the GTX680 is a midrange part. My Gigabyte model only came with a driver disk! In the past, when I purchased real high end cards, they always included a video game, or a web cam, or maybe a poster of a spooky skeleton with a machine gun.
Can we get the Supreme court to stop wasting time with health care and make NVIDIA sell us these mid range cards for $300?!?
Galaxy GeForce GTX 680 4GB / Hall of Fame Edition
6+8pin power connectors and TDP of 300W.
http://www.expreview.com/img/review/...galaxy_68s.jpg
http://www.expreview.com/img/review/...X6804GB_01.jpg
5+2 phase power supply
http://www.expreview.com/img/review/...X6804GB_02.jpg
source: http://en.expreview.com/2012/03/26/g...tml#more-22013
Surely 2GB of extra RAM doesn't explain a jump from a 190W to a 300W TDP?
Results by PcCI2iminal
Performance
Core i7 3820 @ 4300MHz
Single
Lost Planet 2 1920x1080 all high DX11 CSAA8XQ
http://downloads.criminalcafe.com/re...le/CSAA8XQ.jpg
Lost Planet 2 1920x1080 all high DX11 CSAA16XQ
http://downloads.criminalcafe.com/re...e/CSAA16XQ.jpg
Unigine Heaven 2.1, everything maxed out
http://downloads.criminalcafe.com/re...le/Unigine.jpg
Performance
Core i7 3820 @ 4300MHz
SLI
Lost Planet 2 1920x1080 all high DX11 CSAA8XQ
http://downloads.criminalcafe.com/re...li CSAA8XQ.jpg
Lost Planet 2 1920x1080 all high DX11 CSAA16XQ
http://downloads.criminalcafe.com/re...i CSAA16XQ.jpg
Unigine Heaven 2.1, everything maxed out
http://downloads.criminalcafe.com/re...igine high.jpg
Power consumption
To avoid skewing the results, I left the Core i7 3820 without any overclock.
I used FurMark to stress the cards.
Single
http://downloads.criminalcafe.com/re...ingle/idle.JPG
http://downloads.criminalcafe.com/re...ingle/load.JPG
SLI
http://downloads.criminalcafe.com/re...9 sli/idle.JPG
http://downloads.criminalcafe.com/re...9 sli/load.JPG
PcCI2iminal: you can see that in Unigine the SLI scales really well, but in Lost Planet it falls short;
the drivers still need work...
the drivers still need work....
http://www.criminalcafe.com/showthre...874#post107874
Surround
5760x1080
everything maxed out
AA CSAA8XQ
Core i7 3820 4300MHz
SLI - OVER - GPU 1250MHz - GDDR5 @ 7000MHz
http://downloads.criminalcafe.com/re...und/over/1.jpg
Screens (unedited - large files!!!!!)
http://downloads.criminalcafe.com/re...und/over/2.jpg
http://downloads.criminalcafe.com/re...und/over/3.jpg
http://downloads.criminalcafe.com/re...und/over/4.jpg
http://downloads.criminalcafe.com/re...und/over/5.jpg
http://downloads.criminalcafe.com/re...und/over/6.jpg
That's because he posted it all wrong. There's the Galaxy that's 4GB with an 1100MHz core clock and a custom cooler.
Then there's the Hall of Fame edition that's got a white PCB and is going to be much faster while using two 8-pin power connectors.
Here's the source. http://en.expreview.com/2012/03/26/g...red/22013.html
April-May sounds nice for the GTX 670. Nvidia will make a lot of cash on Kepler; you can expect quite nice Q2-Q3 results for sure.
TDP calculations are all over the place anyway. There are several ways to do this, and the number really shows nothing. AMD and NV calculate their TDPs differently; for example, AMD cards never consume more watts at peak than their TDP value, while Nvidia cards can peak higher than their TDP value in high-load situations.
That Zotac 2GHz news was just an early April Fools' joke. Unless the card comes with a pot of LN2 and instructions on how to use it.
It's been getting worse for years now. One of our most prominent posters (at least in terms of volume), Saaya, used to post here so much, and he has stopped posting for the last 6 or 7 months; I miss his posts. As unbiased as you can really get, and he was the originator of the multi-post attack.
You girls need a separate thread named "whining and crying about the forum".
Stop the OT please.
Too soon for April Fool?
Rumor:
Quote:
Second Wave of NVIDIA GeForce GTX 600 Products Due For May
NVIDIA's GeForce GTX 680 literally kicked the door open as it made its entry. We're learning of NVIDIA's plans to milk the GK104 chip by carving out two more SKUs: the GeForce GTX 670 Ti, and GTX 670. These two SKUs will let NVIDIA capture price points deep within the $400-499 and $300-399 ranges, to compete with AMD's Radeon HD 7950 and Radeon HD 7870. These two SKUs will be released in May. Around the May-June time-range, NVIDIA could also introduce the GTX 690, which we're hearing is a dual-GK104 graphics card that's designed to compete with Radeon HD 7990, which launches in April.
Also in May, NVIDIA will launch desktop discrete graphics card SKUs based on the GK107 chip, which makes up its GeForce GT 650M/640M mobile graphics SKUs. Following this, some time in summer, NVIDIA will release a new chip, the GK106, which will make up the GeForce GTX 660, which will be out to compete with the Radeon HD 7850 and HD 7700 series. It looks like NVIDIA is waiting on current inventories of GF114-based SKUs to get digested, including those of the recently-launched GeForce GTX 560 SE, and is hence in no hurry to launch a new GPU to capture the sub-$250 price points. Besides the dual-GPU Radeon HD 7990, there's nothing new in the works at the red camp that we know of.
http://videocardz.com/31615/geforce-...specifications
Quote:
GeForce GT 640, GTX 650 Coming in May, GTX 660 Specifications
According to multiple sources, NVIDIA may release new cards for the mid-range market based on GK107 in May, along with the GTX 670 Ti/non-Ti based on GK104.
Specifications of the mid-range GPUs are becoming clearer. NVIDIA will release two cards based on the GK107 GPU, one with GDDR5 memory and the other with GDDR3 memory. Those cards will most likely replace the current GTX 550 Ti. They will be named the GTX 650 and GT 640 respectively. GK107 will have 384 CUDA cores, but the TMU and ROP counts are not yet revealed. We can expect those cards in May, priced around €100.
3DCenter has posted specifications of the GK106 GPU, which will likely power a graphics card named GTX 660. It will pack 768 CUDA cores with 64 texture and 24 raster operation units. It will most likely have a 192-bit memory interface and be equipped with 1.5 or 2GB of GDDR5 memory. This card will be released in summer for around €200.
There are no specifications for the remaining cards based on the GK104 core yet. Those cards are the GTX 670 Ti, GTX 670, and GTX 690. The first two should be available in May. The GTX 670 Ti will be priced at £320 (around €380/$390) and the GTX 670 will have a price tag of £240 (~€290/$300). Even though these cards are based on the GK104 GPU (same as the GTX 680), it is unknown if they will feature the same number of CUDA cores and TMUs/ROPs. They will most likely have reduced CUDA core counts or at least lower overclocking capabilities.
We are still looking forward to the first leaks regarding the GK110 GPU. The card that was originally to replace the GTX 580 has been shifted to a later date. It should have 2304 CUDA cores and most likely leave all the competition (including from its own backyard) behind. NVIDIA will surely analyze how the release of a GTX 680 4GB impacts the whole market. Then plans for GK110 may even be abandoned.
Wow, I hope they are not really thinking of abandoning the GK110 :( If they shift it just a few months, but it would still be released somewhere in 2012, that would be cool. If not, major bummer!
A question, not very relevant:
Shouldn't it be the case that two 6-pin power connectors would be enough for 300W?
Given that from PCI-Express 2.0 onwards we get 150W from the PCI-E slot (and a further 150W from the power connectors, amounting to a grand total of 300W).
Of course that would make the card incompatible with PCI-E 1.1 boards, but that should not be much of a problem given that the vast majority of people (all?) who would buy such a card already own a post-2007 motherboard (which has a PCI-Express 2.0 slot on board).
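For reference, a minimal sanity check using the per-connector budgets usually cited from the PCIe CEM spec (75W from an x16 slot, 75W per 6-pin, 150W per 8-pin); the connector combinations below are just illustrations, not claims about any particular board:

```python
# Back-of-the-envelope PCIe board power budgets using the commonly cited
# CEM spec figures: x16 slot = 75W, 6-pin aux = 75W, 8-pin aux = 150W.
BUDGET_W = {"slot_x16": 75, "6pin": 75, "8pin": 150}

def board_budget(*aux_connectors):
    """Total budget for an x16 slot plus the given auxiliary connectors."""
    return BUDGET_W["slot_x16"] + sum(BUDGET_W[c] for c in aux_connectors)

print(board_budget("6pin", "6pin"))  # 225W - two 6-pins fall short of 300W
print(board_budget("6pin", "8pin"))  # 300W - the usual layout for a 300W-class card
print(board_budget("8pin", "8pin"))  # 375W - what dual 8-pin boards budget for
```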
Kepler would have been a good time for Nvidia to go to a 1:3 GPU:shader clock ratio; it's more than capable.
What's really the hold-up on GK110? Low memory bus clock again?
Either they don't need GK110/100 to compete against the 7970, or they want to skip the 'GTX 480' this time and only release the 'GTX 580' once it's done, ready, and needed.
They only just released the 680; surely they have a 3-month window before they need to release something even faster, depending on market conditions and how profitable the chip will be. Maybe they are right now concentrating on making the most of the 680 and refining GK110, to make the next chip also very profitable with good power and temperature characteristics.
Not necessarily.
There are scenarios where it would make perfect sense.
Selling ~300 mm² chips for $500 is more profitable than selling ~500 mm² chips for $500: you get more per wafer.
Yields of ~500 mm² chips at the current state of the 28nm process might make them impossible to sell at a price where projected demand would make them profitable to produce.
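A rough sketch of that dies-per-wafer argument, with entirely made-up numbers (the die sizes, defect density, and flat $500 price are all illustrative assumptions):

```python
# Dies per 300mm wafer for two die sizes, with a simple Poisson yield model.
# Bigger dies mean fewer candidates per wafer AND a higher chance each one
# catches a defect, so revenue per wafer drops twice over.
import math

WAFER_DIAMETER_MM = 300
DEFECTS_PER_CM2 = 0.4        # assumed 28nm defect density (illustrative)
PRICE_PER_GOOD_DIE = 500     # both chips assumed to sell for the same price

def gross_dies(die_area_mm2):
    """Common approximation for candidate dies on a round wafer."""
    d = WAFER_DIAMETER_MM
    return math.pi * (d / 2) ** 2 / die_area_mm2 - math.pi * d / math.sqrt(2 * die_area_mm2)

def good_dies(die_area_mm2):
    yield_fraction = math.exp(-DEFECTS_PER_CM2 * die_area_mm2 / 100.0)
    return gross_dies(die_area_mm2) * yield_fraction

for area in (300, 500):
    good = good_dies(area)
    print("~{} mm^2 die: ~{:.0f} good dies, ~${:,.0f} per wafer".format(
        area, good, good * PRICE_PER_GOOD_DIE))
```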
If the imagined ~500 mm² chip's successor is on schedule, it is a good gamble to sell only the more profitable ~300 mm² chips. If AMD's next gen is beaten by the imagined held-back ~500 mm² chip, that can be released and the successor held while its successor is worked on. Or if AMD's product beats the imagined ~500 mm² chip, the successor can be released. Either way, NVIDIA wins and got to sell the ~300 mm² chip at high-end prices.
Along the same lines, if the imagined ~500 mm² chip wins next gen, they get to sell two chips for $500 or more instead of one, and push back the R&D cycle.
Could be the market doesn't want ~500 mm² chips any longer. Since ATi introduced their "smaller, less power" business model, their fans have been all over the internet yelling about how "smaller, less power" is the "way to go". Maybe that, coupled with good Cypress/Barts sales and market research, has convinced NVIDIA to change focus.
Last but not least, if the product line is all based on smaller chips that beat your competitor, more profit.
I'd say I can come up with a lot of reasons a "done" product should not be released.
And of course it could be that all along GK104 was designed to be this gen's "high-end chip"; we'll never know. Fortunately it gives us a level of performance and features that are worth buying.
GK110 tapeout was March 2012, so I've heard. That would make it impossible to sell any chips before August or so anyway because it takes time to get them ready and work out any kinks. And then they will first go to all Tesla products and sell for a ton of money. Nvidia has contracts it needs to fulfill. I'm sure it will come to the desktop at the end of 2012 or very early 2013, but right now it is not ready.
Kinks??? They'd better crank that memory bus up to nothing less than 2GHz.
Memory bus width is measured in bits ;)
I'm pretty sure GK110 will have a 512bit bus like GT200(b).
People seem to forget that high end GPGPU processing really isn't necessary on low margin gaming cards.
NVIDIA is a savvy company which makes a killing off of their highly capable Tesla and Quadro cards. If I were them I would continue down the GPGPU "lite" path for gaming-oriented products and only release the larger-die, more expensive to produce but GPGPU heavy cores into the professional ranges for the time being.
Meanwhile, development can continue towards refining that same high end part in case AMD somehow (but not likely) manages to release a card within the next 12 months that effectively beats the GTX 680's successor (GK114?).
Sorry, IMO dual GPU cards will never properly compete with single core products. Too much hinges on (usually) buggy drivers, game profiles, etc to really make them viable alternatives for every situation.
In addition, those ultra high end cards are never widely released anyways. Take the HD 6990 and GTX 590 for example: a few were released and about a month or two after launch, stock pretty much dried up.
As was already said, you do not hold back an ASIC, effectively pushing back future products, because "it is too good." That is how you get humiliated.
The same thing goes for R&D. You simply do not do that.
As far as your whole "smaller chip" argument goes, there is definitely something larger coming, and that brings me back to the comment I made to you recently: Nvidia isn't going to turn its back on GPGPU after using so many resources to secure the market.
They already filled some of them with Fermi...
No, it's not necessary in gaming cards, but making two different architectures can be tough. Rather than just scaling down and seeing how your design/architecture works with the process, you get to play with a bunch of unknowns.
It certainly is nice to have a gaming-oriented GPU out there because it is so efficient, but on the flip side it isn't so efficient in terms of time to market for an entire lineup or ease of design/manufacturing.
It isn't a whole new architecture though.
Fermi is a great example.
GF100 was very much geared towards compute with a large amount of cache and a relatively efficient compute call order.
GF104 was the scaled down version and while it still retained a good amount of compute abilities, some aspects were curtailed and replaced with additional in-game rendering efficiencies.
So while the architecture didn't "change" per se, NVIDIA was able to implement enough differentiations from one core to another so that GF104's primary focus was gaming (and to a lesser extent OpenGL performance for the Quadro line) rather than high range GPGPU capabilities.
Ummmm.....of course. Again, this doesn't have much to do with overall core-for-core GPGPU performance, since PhysX stresses the core in different ways than OpenCL (caching and general-use algorithms are different), but rather with two items:
- A lack of runtime optimizations for Kepler at this time
- The PhysX GPU runtime dictates that individual SM/SMX blocks be dedicated to its processing. In Fermi (GF110), which had 512 cores spread over 16 engines, this meant PhysX would redirect at least 1/16 of the card's processing power to physics calculations. GK104 meanwhile has only 8 engines, which means a whole 1/8 (or roughly twice that of GF110) of the architecture will need to be used for PhysX. So naturally the performance hit will be absolutely massive in comparison to Fermi....
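Just to put that fraction argument in numbers (the SM counts are the reference specs; the one-whole-SM-per-PhysX scheduling assumption is the post's claim, not something measured here):

```python
# Minimum share of the GPU tied up if PhysX reserves at least one full SM/SMX.
# Fewer, fatter SMs mean coarser granularity and a bigger minimum sacrifice.
GPUS = {
    "GF110 (GTX 580)": {"sms": 16, "cores_per_sm": 32},   # 512 cores total
    "GK104 (GTX 680)": {"sms": 8,  "cores_per_sm": 192},  # 1536 cores total
}

for name, gpu in GPUS.items():
    min_share = 1.0 / gpu["sms"]
    print("{}: {} SMs -> at least {:.1%} reserved for PhysX".format(
        name, gpu["sms"], min_share))
```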
I'll just drop this here; it just needs optimized code:
http://www.tml.tkk.fi/~timo/HPG2009/index.html
Anyone know a page for the US stores to find the cheapest GTX 680s or who has em in stock atm ?
http://www.nowinstock.net/computers/...nvidia/gtx680/ :D
(Thanks to Slizzo over at EOCF.)
It's not ready yet, though? Nvidia's roadmap shows that the full GK110 won't be ready until early 2013. It would be highly impractical for them to rush out a crippled 'GTX 480' version right now while GK104 is performing and competing brilliantly against the 7970.
They would undercut their own product, the GTX 680. People wouldn't be rushing out to buy GK104 at £400 if a crippled GK110 were available at £500 and better. This way they sell their more profitable chip first, at a much higher price point than it was designed for, and reserve their high-end GK110 'until it's ready'.
They aren't 'holding anything back because it's too good'; they just want to release the full beast when it's ready rather than release its crippled, inefficient sibling, which would only reduce sales of the significantly more profitable GK104.
???
Do you work for ATi or NVIDIA? You seem to have some pretty decided opinions on what businesses like this "do" and "don't do". I'm just speculating on how it could be beneficial to hold back; if you're actually in the business, I'll defer to your greater knowledge. (Obviously you'd have a greater understanding than me, and I would not want to mislead, even speculating.)
I'm looking forward to the 660s.
768 shaders, 64 texture units, 192-bit GDDR5 memory.
Looks very good except for the 192-bit part, but if you overclock the memory to the same level as the 680's, I think it would still be around 2x faster in memory than the 460, for example.
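Peak memory bandwidth is just bus width (in bytes) times the effective data rate, so that claim is easy to sanity-check. A quick sketch: the 460 and 680 numbers are the reference-card specs, and the "660" line is pure speculation assuming a 192-bit bus running at 680-like memory clocks:

```python
# Peak GDDR5 bandwidth = (bus width / 8 bytes) * effective transfer rate (MT/s).
def bandwidth_gb_s(bus_bits, effective_mt_s):
    return (bus_bits / 8) * effective_mt_s / 1000.0

cards = {
    "GTX 460 (256-bit @ 3600 MT/s)":              (256, 3600),
    "GTX 680 (256-bit @ 6008 MT/s)":              (256, 6008),
    "hypothetical GTX 660 (192-bit @ 6000 MT/s)": (192, 6000),
}

for name, (bits, rate) in cards.items():
    print("{}: ~{:.0f} GB/s".format(name, bandwidth_gb_s(bits, rate)))
```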
Think of it this way, just an example.
2x 660s: 384-bit of memory bus in total, and the same number of shaders as the 680.
1x 680: likely the same cost as the two cards above together, but less memory bandwidth in total.
Given that the 680 uses roughly around 175W, imagine that cut in half.
The 660 is probably gonna be rated around 90W.
With a 660, I'm 'guessing' my PC would run roughly around 200W under load; that's good stuff.
The TV in the living room uses more power than my entire system would (including monitor) if I had a 660 :D.
I very much like the direction Nvidia is going with its lower-power cards as the mainstream.
I don't like how they are skimping a bit on some things like the memory bandwidth.
Edit:
Nvidia only makes 1-2 different cores per gen.
Say a 685 or 690 comes out and it's a single GPU; how much do you wanna bet it's really the exact same thing as a 680 core....
Same goes for the 660s, probably really just a 680 with some chunks of it disabled.
Though I'm hoping this won't be the case; I'm actually hoping for Nvidia to cut the core in half physically.
As for the Quadros, they are most definitely the same exact cores.
They always have been.
If something called GPGPU (whatever that is...) is gone, it's either not gone at all but merely disabled, or it's really gone for good and not coming back.
Give me 4GB+ GTX 780s or give me DEATH!!
http://media-2.web.britannica.com/eb...0-C09010F3.jpg
Same here, thinking about the 4GB versions.
But I would really like a 660 with 4GB; the performance per watt would be unmatched.
Probably no good for 2-4 displays, but good enough for a single display, 3D-wise.
Now that I think about it, 192-bit...
What was it, 32-bit or 16-bit per RAM chip?
I don't think..., if the 660 is truly 192-bit, then it's not gonna come with 2GB.
It might be 1.5GB and 3GB configs.
3GB would be OK for me.
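For what it's worth, the capacity steps follow from the bus width: GDDR5 chips are 32 bits wide, so a 192-bit bus means six chips, and capacity comes in multiples of the per-chip density. A small sketch (the 2Gbit/4Gbit densities are the common options of that era; clamshell mode, which doubles the chip count, is ignored here):

```python
# GDDR5 chips are x32, so bus width fixes the chip count; capacity is then
# chip_count * per-chip density. Clamshell (x16) configs would double the count.
CHIP_WIDTH_BITS = 32

def capacities_gb(bus_bits, densities_gbit=(2, 4)):
    chips = bus_bits // CHIP_WIDTH_BITS
    return {"{}Gbit x {} chips".format(d, chips): chips * d / 8.0 for d in densities_gbit}

print(capacities_gb(192))  # {'2Gbit x 6 chips': 1.5, '4Gbit x 6 chips': 3.0}
print(capacities_gb(256))  # {'2Gbit x 8 chips': 2.0, '4Gbit x 8 chips': 4.0}
```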
Waiting for GTX 685
Waiting for the mythical Big Kepler .....
:rolleyes:
I don't need any, lol, I like benching. I bought a 680 on release, blew away all my PBs, and sold it again before early pricing dropped; that's why I'm waiting for the next big gun.
:D
People have always been interested in video ram size. It's been one of the main marketing numbers.
I don't know how many times I've shaken my head when someone chose a card with more VRAM that was actually slower than the cheaper, faster card I was trying to sell them.
Size matters. ;) If not in practice, at least in people's minds, and marketing will do its best to promote that idea.
I know what you mean :)
Some live their whole life in denial.
Some may start to believe.
Some have known in all their time, that size indeed does matter.
You must be a single display user. Of course you don't care, lol. You don't need anything more than 1-2GB.
You see, people that proudly sport multi-display setups (i.e. REAL situations such as THREE or MORE displays, YES, I'M BRAGGING!!!) are the ones that NEED and are interested in the extra VRAM.
Ok. You folks have a nice day.
Or single display at 2560x1080/1440/1600.. or you know, games that might in the future utilize more gpu memory.
It's a null argument really. You buy for your situation.
lololol. Or maybe I'm someone who realizes how evident it is that a 660 would run out of GPU power before ever being able to fill 4GB of memory.
In any case WE DON'T CARE if you think you need more VRAM. Get a custom card with twice the RAM; they always have been and always will be around. It's your money. Just f'ing stop complaining about VRAM on the forums all the f'ing time.
And there's really nothing to brag about with 3 sh*tty TN screens of the lowest quality.
I think 2GB is completely fine as long as they make 4GB versions for the people that want them. It lets a lot of people save the roughly $50 the difference will probably be. I think 4GB is the absolute most people will need unless they run 6 screens, and it is a sweet-spot number. With 6GB of VRAM I can see companies charging $100+ extra instead of $50, because it is significantly more memory and for the most part impractical. What I mean by impractical is that there is no situation that really needs it, and as a result card makers don't need to make it. If they do, it will be entirely for the e-peen crowd who will pay anything for a card and think bigger is better. Because of this, most card makers don't make such a configuration, and when they do, it's only one company doing it and they charge more as a result. Considering how efficient the GTX 680 is with its VRAM, or the architecture in general, I don't see very many scenarios where 4GB+ is used, and for most situations 2GB will be enough.
Still, it wouldn't hurt to have a 512-bit bus. I think more bus width will be more beneficial to performance than more VRAM for the GTX 780.
That's how I feel as well with my 2560x1440 IPS 27" monitor. By the time the RAM is an issue, the card is likely going to need to be replaced anyway.
Sure, if the GTX 680 4GB is at most €50 more with some added clocks (like the eVGA 680 FTW), I might pick it up.
There are always two sides to each story. I see people complaining about and making fun of cards having a lot of memory, and others complaining about cards not having enough. Forums are for discussion, but most of the posts are Nvidia users claiming that 1 and/or 2GB is enough and AMD users claiming it isn't.
To each their own, and let's stop it here, OK?
I already posted a few pictures and links showing how an AMD card benefits from additional memory at high resolutions and detail settings, and also how the GTX 680 could benefit from 4GB as well in the same situations. It should be crystal clear for everyone now, so I don't see the point of continuing to :banana:storm each other.