Take this for what it is: neither AMD nor NVIDIA can win, as both are constrained by TSMC's 28nm capacity.
Just to be clear on this, I'm not blaming either company (I'm a user of both brands, depending on what's the best buy at any given point in time). However, pricing is relative to the competition: the HD7970 launched at GTX580/GTX480 price levels, as that's what it competed with. In essence nVidia then said: well, here is our new card that manages to perform at the same level, so we put it at the same price. Does that mean that the next crown (HD8970 or GTX780) would be placed at >$500, as it beats the current top of the competition?
Sure, companies are companies, but paying a premium for cards that were not aimed at said market feels so wrong. Had nothing gone wrong, you'd now have the GTX660 & 7970 for almost half the cost.
GK104 was supposed to be a $299.99 - $349.99 card. The only reason it will be priced at $500 is because AMD has no answer. The only way AMD can push prices down and compete as of this moment is to price the 7970 @ $399.99 and the 7870 @ $249.99.
This part makes no sense. The first to market usually gets to set their price against current models for a hefty margin. As competition comes out, and as yields improve and production ramps up, prices fall.
If both camps came out with 600mm² beasts that were 3x faster than the current generation, we couldn't expect them to be $400-500 just because the prior top end was in that range. The model number is the LAST thing that affects price.
I do agree with this, IF Nvidia is launching theirs at ~$450-475. However, I think they will just push the fact that they have more GBs, show Eyefinity gaming with high AA levels where 2GB starts to choke a little, and then try to keep prices closer. But we all know what we want for what we need it to do. Because I doubt I'm going to do multi-monitor gaming, I think 2GB is fine for what I do for the next 2-3 years.
Well, we don't know the HW setup of the GTX680 test, but I'm quite sure it stays the same. They won't use slower HW for the GTX680 test.
So why are the HD7K cards this much slower in the GTX680 test, while all the other cards (GTX580, GTX590 & HD6990) stay roughly the same?
I smell that the GTX680 will have a rough time fighting for the crown, especially at high res + lots of real MSAA.
No, this leak is exactly that; a leak.
Wait for reviews.
-PB
Wow, that's quite a catch. I wonder what Tom's has to say about this. In the review, the 7950 is neck and neck with the 580, and the 7970 beats the 580 by a healthy margin. Yet in the leaked GTX 680 graph, the HD 7970 and GTX 580 are on equal terms using the same settings?
They're going to have a rough time explaining that one away.
Easy, THG cheated on the AMD benchmarks at launch :P
Hmm.. could be drivers??
Other comparisons:
Battlefield: seems nearly identical.
Crysis 2: the DX11 result is the same, but DX9 is WAY faster in today's review (87 vs 61).
3DMark 11: 7970 the same, but the 7950 dropped 150 pts in today's review.
WoW: both 7900s dropped a few frames (like 3%).
In Elder Scrolls the difference is massive. The older review was limited by the CPU (4.2GHz 3960X) and sat in the 70s FPS on all GPUs, while the new one is over 120 FPS. So now I'm thinking they changed their platform around, and that's why we're seeing such differences.
Depends on what you are using the cards for... I have a big interest in the GPU version of the HCC app currently in beta on WCG; it's OpenCL and heavy on float, so the 7970 actually looks great (*assuming the chart is accurate*).
Quote:
Not Good For AMD HD 7970.
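For the curious, here is a minimal sketch of the kind of float-heavy OpenCL kernel such distributed-computing apps hammer. This is my own illustration, not the actual HCC code, and it assumes Python with numpy and pyopencl installed plus a working OpenCL device:
Code:
# Minimal float-heavy OpenCL example (illustrative only; not HCC itself).
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

data = np.random.rand(1 << 20).astype(np.float32)
mf = cl.mem_flags
src_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=data)
dst_buf = cl.Buffer(ctx, mf.WRITE_ONLY, data.nbytes)

prg = cl.Program(ctx, """
__kernel void fma_chain(__global const float *src, __global float *dst) {
    int gid = get_global_id(0);
    float x = src[gid];
    for (int i = 0; i < 512; ++i)          // long dependent float chain:
        x = mad(x, 1.000001f, 0.000001f);  // bound by FP32 throughput
    dst[gid] = x;
}
""").build()

prg.fma_chain(queue, data.shape, None, src_buf, dst_buf)
result = np.empty_like(data)
cl.enqueue_copy(queue, result, dst_buf)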
The 7870 seems like it is another performance tier down. At 350 dollars, you're simply getting the difference in cost/performance. The GTX 680 is 45% faster, and if you do the math, you're getting more or less the same bang for your buck compared to the high end, even better if the GTX 680 is priced at $499. This should not be the case for a company's bang-for-your-buck midrange card compared to the competitor's high end. Of course we have to look at other reviews, but depending on the pricing of the GTX 670, it looks like AMD's entire line is going to get a price drop.
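Doing that math explicitly (the prices and the 45% figure are the ones quoted above, not confirmed numbers):
Code:
# Perf-per-dollar check (performance normalized to the 7870 = 1.0).
perf_7870, price_7870 = 1.00, 350.0
perf_680,  price_680  = 1.45, 500.0   # "45% faster", rumored price

print(perf_7870 / price_7870 * 1000)  # ~2.86 perf per $1000
print(perf_680  / price_680  * 1000)  # ~2.90 perf per $1000: a wash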
Nvidia's chip has 1GB less memory and a smaller die, and most importantly, the Nvidia brand is stronger than the AMD brand among gamers, meaning at the same performance/price, the public will overall pick Nvidia. It's not as strong an effect as in the AMD/Intel CPU market, but it is definitely there, and Steam's hardware survey backs this up. AMD also has a bit more to lose at this point. With almost all of AMD's entire line out, price drops on the high end are going to cause a chain reaction for the rest of the lineup, e.g. if the price of the 7970 drops, it could affect the price of the 7950, and with the 7950 at a lower price, the price of the 7870 has to drop. This is a good thing for the consumer.
I think we have to wait for more reviews, but most of us, 3 weeks ago, didn't expect the 7970 to be beaten or matched by Nvidia's smaller chip.
I don't know how some people can be disappointed with the performance, though. Maybe the 25% rumors were untrue, but those were unfounded, and most were expecting it to trade blows with a 7950. Considering the size of the chip, and most importantly the bandwidth, I think these results are better than we could expect. The bandwidth was more or less a given, since the 256-bit bus was the most confirmed of all the rumors, along with the pictures. I myself expected a bit better than a 7950, probably losing to a 7970, and I am surprised this chip is doing so well with the same bandwidth as a GTX 580.
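For reference, the bandwidth math behind that claim; the 6008 MT/s figure assumes the rumored 6 Gbps GDDR5 (the retail listing later in the thread shows a 3004 MHz memory clock, i.e. 6008 MT/s effective):
Code:
# GB/s = (bus width in bits / 8) * effective data rate in MT/s / 1000
def bandwidth_gbs(bus_bits, mt_per_s):
    return bus_bits / 8.0 * mt_per_s / 1000.0

print(bandwidth_gbs(384, 4008))  # GTX 580: ~192.4 GB/s
print(bandwidth_gbs(256, 6008))  # GTX 680: ~192.3 GB/s on a narrower bus
print(bandwidth_gbs(384, 5500))  # HD 7970: ~264.0 GB/s for comparison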
Let's wait till the 22nd!
Weren't we expecting price drops either way? I mean, if GK110 had been the GTX680 and performed relative to the 7970 as the GTX580 did to the 6970, we could assume a slight price drop on AMD parts, because Nvidia would probably not price their cards above $650 anyway..
Maybe Nvidia will wait until the 8000 series release; then GK110 will come out as the GTX780 and GK104 will be rebadged as the GTX760.....
The NDA is over soon, so the truth will be known then..... Hurry ffs
:D
Home of the world’s fastest Graphics Card for 853 days and counting.
Anyone want to take a guess when this will be taken down? AMD might not let Nvidia take the title
Well, not until something beats the 6990.
That said, I'm quite looking forward to NVidia's new card. If they have a mid-range card that draws less power, is quieter, costs the same and performs the same as an AMD chip, I'll be upgrading.
only time will tell.
Well, it would've been the greatest fail (such as Bulldozer) if the 680 had failed. I mean, it's been almost 3 months since the release of the 7970; Nvidia was bound to make something faster to justify the time it took them to come up with an answer. At any rate, I'm more interested in how well this card OCs, and what performance gain the OC will show. The 7970 can go +300 MHz without too much of a hassle, giving a solid boost over the base 925 MHz core clock and a nice performance increase as well.
If only Nvidia had released the card 1.5 months earlier, and with 3GB of memory..... The Witcher 2 pushed 1.5GB of VRAM at Uber settings at 1080p (one of the reasons I switched from my 580 3-way SLI setup). I bet that at 3-monitor setups with 5760x1200 res, those 2GB won't be enough either =\. On the other hand, grats greens on getting your new toy. :up:
All THG links removed by request of THG, so please no more THG links.
LOL GTX 680. Anyhow, if it's priced at $499 I will not buy; again, after 6 months there would be a big price drop, killing even the resale value, like what happened with the GTX470.
It's better to go with the 7870..... everything playable at the expected performance.
Why has everyone forgotten that MSI has confirmed a 1500mhz 7970 card? That's a heck of a lot faster than stock.
Obviously, nvidia's overclockability needs to be considered. But I don't think anyone in their right mind expects a 50% overclock from nvidia (we shouldn't have expected it from AMD either).
End result: All these "amd has no answer" comments are rubbish.
Looking forward to the reviews and the response from AMD. And it's about time for me to buy, so I'm quite prepared to buy Nvidia if it proves better than AMD at 3x1920x1080.
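For scale, the 1500MHz claim against the 925MHz stock clock quoted earlier, as plain arithmetic:
Code:
# 1500 MHz core vs the 925 MHz stock clock mentioned above
stock, claimed = 925.0, 1500.0
print((claimed / stock - 1) * 100)  # ~62% over stock, well past "50%"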
The 7990 (dual card) is coming in April.
As a triple-surround gamer (possibly future 5x1 120/240Hz), I'm looking to the big guns, not the middle dog. So, for me, it's three GTX 780s (685s?) with 4GB or more, all water-cooled, of course.
Looks like this upgrade is coming late 2012, early 2013.
Links to such a card? On air.
http://www.hardocp.com/article/2012/..._card_review/3
They only reached 1190MHz with overvolting, and this is an MSI Lightning series card, which has some of the best electronic components and cooling out there?
If 1500 MHz was actually reached, how stable is it, and how many hundreds of 7970s did they go through to find that cherry-picked sample? Because from what I have seen, even on forums, 1500 on air is a number I haven't come across, and definitely not stable.
I think most of us agree that will be the end-game...subject to release order.
It could be something like:
gk104/106/107 are released. (Q1/2)
AMD releases Trinity smack dab between gpu generations. (Q2/Q3)
AMD replaces Tahiti with gk104-like chip. (Q3/4)
nvidia releases gk110, rebrands everything...maybe respins and/or faster/lower power gddr5 for skus. (Q3/4)
AMD releases high-end chip and/or dual-chip solutions to compete with gk110 and low-end chip similar to gk106. (Q4-Q1)
AMD releases Kaveri... whose IGP 'oddly' matches a rebranded 7750. (Q2-Q3)
Rinse. Wash. Repeat.
I totally understand that TSMC's 28nm is a uniquely hot commodity that needs time to ramp up production resources, and it is probably the first instance of a new reality in how products come to market on a new process. I also fully respect and appreciate what both companies are doing for efficiency and yields (perhaps because of that). That said, it kind of makes one yearn for yesteryear, when big chips caused a quicker turn-over of greater amounts of logic lower down the food chain. Whereas before we might have had a 7900ish/GK104 battle at the mid-range in the coming months, we are now forced to wait another whole product cycle for the exact same performance(/watt), perhaps in a slightly different configuration, at a more palatable price tag.
It seems clear nVIDIA was ready to have that fight now, and AMD, which typically thrives on that concept, took the former nVIDIA route to maximize profit... nVIDIA ain't gonna argue and will revert to taking the inflated income. It makes me ponder how much of the decision-making is yield-oriented, and how much is marketing/capitalization, knowing they have to use the same process for two cycles. Each product both supports and refutes not only the other's theory, but also its own company's past.
The 7900 series is (imo) directly influenced by both, but which is more influential is the question. Sure, with Tahiti you have a bigger chip with more redundancy for higher yields, a faster time to market, clock speeds that beat the old competition, and some higher-end features (for high-res, GPGPU) the midrange won't have. But on pure gaming logic, you also have the clock speed and real TDP (188W at stock, ~225W overclocked under average load) that make it clear they will sell a similar product down the line, with similar power consumption and fewer PCI-e connectors, as an 'improvement in efficiency'... or pretty much exactly what GK104 is.
In the end, I blame Gordon Moore and nVIDIA's past over-zealous engineering for my jaded outlook. :p:
No. The error correction zone is fluid from one ASIC to the next and is based upon the overhead differences between one set of ICs and another. Even if the memory starts throwing errors at a 4MHz overclock, it will keep error correction in place throughout every subsequent clock speed increase.
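To illustrate, here is a toy model of that behavior; the numbers are entirely made up and not AMD's, but it shows how retransmission overhead can silently eat an overclock once the memory is past its stable point:
Code:
# effective rate = clock * (1 - retry fraction); retries begin past the
# chip's stable point and grow with every extra MHz (numbers invented).
def effective_mhz(clock_mhz, stable_mhz=1500, error_growth=0.002):
    overshoot = max(0, clock_mhz - stable_mhz)
    retry_fraction = min(1.0, error_growth * overshoot)  # cycles spent re-sending
    return clock_mhz * (1 - retry_fraction)

for clk in (1500, 1550, 1600, 1700, 1800):
    print(clk, effective_mhz(clk))
# 1500 -> 1500.0, 1550 -> 1395.0, 1600 -> 1280.0, 1700 -> 1020.0, 1800 -> 720.0
# the clock keeps rising while the card silently gets slower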
They probably got a bad or an average sample. They got 1300MHz with the Gigabyte card, so it's obvious they know how to overclock at least as well as the average consumer. I doubt MSI wants to push this card as hard as most consumers who overclock do; that just means a future RMA request is far more likely.
Is this where the 1500mhz rumor came from?
http://www.xtremesystems.org/forums/...t=#post5070673
I posted an update about the 7970 Lightning: the card will have 3GB of memory with 1500 MHz rated chips, the cooler will be the new-generation Twin Frozr 4, it will be the second-fastest-clocked custom 7970 (so maybe Asus or Gigabyte will have a faster one) and, best of all, the hole in the backplate has a cool purpose: there you mount something called the "GPU Core Reactor", essentially a small PCB with tantalum caps that should provide additional filtering for the GPU core voltage. If you want to use multiple cards, though, you need to remove it, as it takes up some space.
If people are referring to this, this is obviously the memory and not the chips themselves, as googling 1500MHz on air yields nothing. Even on water, I haven't really heard 1500MHz being thrown around. This doesn't change the fact that 1500MHz-clocked 7970s are not an option as a sellable product, which is what I am primarily talking about. It would take 1-in-thousands binning, which is not an option.
I think we should all remember that it's been known for a while that Nvidia wasn't planning to release their high end till the September (end of year) time frame. IMHO, GK110 is as late as Fermi was late to market. I'm sure GK110 was supposed to be released now or even earlier on the roadmap. Nvidia isn't holding back a GK110; it just plain isn't ready to go yet. Nvidia just got EXTREMELY lucky with the performance of GK104 and the underwhelming power of ATI's 7 series. It's simply amazing what they were able to pull off with just a performance-based design. I'm just as sure, though, that ATI has another design ready to go when needed. The 7 series was intended for 32nm, not 28nm; ATI didn't want to scrap a whole generation of products that had been worked on and developed for months if not years, and lose revenue. If anything, the 7 series is a bonus generation for ATI. It buys them a lot of time to rework the intended 28nm designs.
Can't wait to bury this thread lol.
-PB
Release already. I'm tired of my 580. I want to bully some 7970 owners and troll the hellz out of them with this new weapon! :devil:
:frag:
:owned:
... MSI confirmed a card with 1500mhz ram, NOT 1500mhz core. If ANY partner pulled off a 1500mhz CORE 7970 as an actual card for sale on newegg, I will personally bake them cookies and WALK them to their place of business. From there, I will go get them a glass of milk. Why would I do all of this? Because clearly they are santa claus himself.
The information didn't describe whether it was on air or not IIRC.
I don't think it was from that thread. I've been hunting for it. It was a photograph of a printout (company-letterhead kind of paper; it looked reasonably legit to me) with a list of cards they would be releasing. It detailed clocks and video output ports for each card.
It was a thread on xs, because I don't read other sites. But I just can't find the bloody thing now.. wondering if it could have been removed.
(Very easy to remember this particular pic, and no I dont believe I'm mixed up with the mem clock. I thought it was an absurd value and was stoked to see it).
Edit: still trying to find it but running out of lunch break. So I give up. Just assume I'm wrong.
I think the 7xxx series will buy them some time, but they are still stuck with GCN. I have a feeling that AMD will need to exceed 400mm² and jump into or near the 500mm² range to compete with GK110, which might be against their design philosophy. If GK104's performance indicates anything, we might get a GK110 that is 40-60% faster than a 7970; that is, if GK104 is 10% faster in real-world testing than the 7970 and the die size of GK110 is 60-70% bigger. GCN is AMD's new architecture and it is here to stay for at least one more generation. It is called Graphics Core Next for a reason.
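A back-of-the-envelope version of that estimate; the 60% scaling-efficiency factor is my own assumption (performance never scales linearly with die area):
Code:
# gk104 ~10% faster than 7970 (rumor), gk110 die 60-70% bigger (rumor).
gk104_vs_7970 = 1.10
die_growth = (1.60, 1.70)
scaling_eff = 0.60  # assumed: only ~60% of extra area becomes performance

for g in die_growth:
    usable = 1 + (g - 1) * scaling_eff
    print(round((gk104_vs_7970 * usable - 1) * 100))  # ~50% and ~56% faster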
Increasing the frequency will help, of course, but a 1305MHz 7970 consumes in excess of 300 watts, 62 more watts than a power-hungry GTX 580. The biggest deterrents to high clocks are chip reliability over time and binning, of course. Also, there have been rumors of Kepler clocking quite similarly to what the 7xxx series has been doing, so this won't help that much either. Considering the high clocks at stock voltage, I think this rumor is actually likely.
http://www.hardocp.com/article/2012/..._card_review/9
A 360mm², high-clocked card would probably still lose to GK110. Losing performance per watt/die size at the high end has a lot of long-term consequences. Unlike at the lower end, they can't rip out the components unnecessary for GPU compute, because they still want to tackle that market. A big performance-oriented pure gaming part (uncompromised gaming performance with no GPU compute features) doesn't make sense for any company. When your competitor has the more efficient architecture and is willing to make chips much bigger than yours, it is crazy hard to catch up. AMD has to either have another architecture altogether or be willing to make big-ass chips like Nvidia. I would be happy with scenario 2, but I don't think AMD will do this.
Instead, AMD may be more content with second place again, which I think is the more likely scenario: compete in the bang-for-your-buck area and avoid the monolith. AMD's market share is much smaller than Nvidia's in the professional market, and without the assurance of revenue from the professional market (Nvidia being the industry standard there), there is much less incentive to make the monster monolithic dies that fund these projects and cards.
Unless they pull a shark out of their ass, or GK110 simply is a terrible product which loses everything that makes GK104 run well, the 8970 could be very much slower than GK110. If Fermi hadn't been rushed and the GTX 580 had been the chip released initially, the 6970 would obviously still have been slower, even though it was second generation. From what it sounds like, with a product to sell, Nvidia is in no rush like with Fermi to get GK110 on the market. Partners have something to sell in the form of GK104 and GK107 (I am not sure about GK106). I imagine NV are trying to get it right the first time, as their competitors don't even have their professional-market products out, and they would rather avoid another GTX 480 debacle (hence why GK100 was scrapped). I don't think anyone is sure what AMD will do at this point, but there is no simple solution, and I have a feeling there are going to be a lot of compromises made by the company, and hopefully price drops this generation.
All this speculation is based on a lot of rumors, but I am also following the history of what both companies have done in the past, and of course chip planning is done years in advance.
Look at its eyes for 5 seconds....
http://img0.etsystatic.com/il_570xN.155121548.jpg
Do you feel the power ?
March 22... :poke:
That's a ton of speculation on GK110 with little known.
Given the gaming efficiency of the 7870, and how much smaller it is than the 7970, it's very possible that GK104 vs GK110 is a similar situation: GK104 is small and efficient at gaming; GK110 is big but much more compute-friendly.
Yes, AMD might have to go into the >400 mm² range, but in the end, they're playing the same ballgame.
GCN is new, but that doesn't mean it's inefficient. The 78xx cards show that it's quite good, and can be quite efficient power- and die-size-wise if you strip out a lot of the excessive compute stuff. I would not be surprised to see GK104 reviews echoing the same thing.
Something is not right about the scores, IMO; it could be a premature driver. It is a good card, that's for sure. Compute power may not be great, but it's enough. So as it stands, it's Nvidia's turn to get my money :) It may be my next card, but I am not satisfied with the price, if the rumors are true.
Sorry AMD.........But this is what happened :D
http://www.abload.de/img/2my6gpt176kj8.jpg
*subscribed*
The type of performance they would need to be competitive is, I think, basically a 7870 with everything doubled, possibly at double the size, and even then it might not be guaranteed. Such a huge die, I imagine, could consume more than twice as much energy as well, with the power leakage and all. The problem with this idea is that AMD would be making something 400mm²+ with its GPU compute features thrown out: a pure gaming card. That is a really big chip to sell purely to gamers, and with the R&D and the small volume of enthusiasts, it might not be worth it for AMD. AMD has avoided monolithic chips for the longest time, and I think they would be more content with second place in the single-chip arena. They could have captured the single-chip performance crown in the past, I think, but they didn't (they sort of did with the 5870 in somewhat recent times, but that was because the competition was late trying to make the big chip; I think AMD would rather avoid that).
Looks like I need to take some VerdeTrol.
I don't think this will overclock far at all using sub-zero cooling; only a four-phase VRM :/
It's fairly amazing to me how last year (3 months ago) 2GB of 256-bit VRAM was "the thing to have" and "what you need for Eyefinity and 25X16", and the 3GB 580s were "overkill".
Case in point, HardOCPs review of the 3GB 580s:
http://hardocp.com/article/2011/06/2...card_review/11
And he goes on to say that in FEAR 3 the 3GB helped at the settings run.
Quote:
While this was a big improvement over standard GTX 580's, it is less of an improvement over two 6970 video cards, which already have a good amount of RAM on them. For the most part, these two setups will perform very similar, except in some specific cases depending on the game.
This "gotta have 3GB VRAM" stuff is out of control, and very inaccurate.
I have gamed a TON at 57X10 and 25X16 with 1.5GB/2GB/3GB NVIDIA cards and can tell you 1.5GB is enough for 25X16, and most of the time for 57X10 4X/16X. I don't think I've ever run out of VRAM running 57X10 with 2GB, but I've never played Skyrim with the texture pack and 8X MSAA, nor have I played BF3 on Ultra settings (don't have either game).
My thought is 95%+ of the people saying you need 3GB for 25X16 or 57X10 don't have either resolution, and a look at how many people actually run those resolutions in the Steam survey would back that up. I'll post some 57X10 benches of the 680s after they go on sale and hopefully put this to bed for good.
People need to remember you can exceed the VRAM on any graphics solution with things like texture packs on the handful of games that have them, or super-high AA settings, so proclamations like "I want to play Skyrim at 57X10 8XMSAA with the texture pack settings maxed!" need to be tempered with "and I'm willing to use a slower, louder card that doesn't have PhysX, good 3D support, CUDA apps, or forced AO, and needs more power to do so".
A graphics card investment is more than the ability to play one game at one setting, unless that is all you plan to do with it.
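To put rough numbers on why resolution and AA alone rarely blow the VRAM budget, here is some simplified render-target math (a simplification of mine: color + depth only, ignoring the extra G-buffer targets that deferred renderers add on top):
Code:
# Bytes = width * height * 4 (RGBA8 / D24S8) * MSAA samples, per target.
def framebuffer_mb(w, h, msaa):
    color   = w * h * 4 * msaa  # multisampled color target
    depth   = w * h * 4 * msaa  # multisampled depth/stencil
    resolve = w * h * 4         # resolved back buffer
    return (color + depth + resolve) / 2.0**20

print(framebuffer_mb(2560, 1600, 4))  # ~141 MB at 25X16 4xAA
print(framebuffer_mb(5760, 1200, 8))  # ~448 MB at 57X12 8xMSAA
# even the extreme case leaves well over 1.5GB of a 2GB card for textures;
# it's texture packs, not AA, that blow past the limit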
Already taken care of Vardant.
Guys...NO LINKS TO THE LEAKED NUMBERS FROM THG.
I am just assuming here... but regarding all this talk about GK110 being more efficient than GCN: depending on the benches, GK104 is clocked 14% higher than the 7970. So all this "AMD is in trouble" stuff is really irrelevant, unless the 680 totally surpasses the 7970. All I want is for one card to be a little faster so the price goes down on the other!
OK, now AMD can release the "PE" BIOS update for all the 7970s...
After going through the synthetic GPGPU programs over the last week, I've come to the conclusion that they aren't worth squat.
If they don't support a particular architecture (case in point: Kepler) or use OpenCL/DirectCompute extensions that aren't yet supported at the driver level (i.e. Sandra, ComputeMark, etc.), there will be absolutely horrible performance.
Not only that but certain architectures are specifically designed for certain tasks. For example, Tahiti is geared towards Double Precision tasks in a big way but will that superiority actually translate into faster compute performance in actual applications? Only if the applications in question are actually optimized for AMD's implementation of OpenCL.
Agreed, maybe for CUDA you could use 3dsmax and glu3d on a standard candle scene and just bench simulation times. CUDA only though.
https://lh6.googleusercontent.com/-U...300-170683.png
in your world..... maybe... not in this
I can't speak for any other games as I have not tracked VRAM use.
For Skyrim, with not much more than the HD pack and the 1.5 texture packs, I easily go over 1515MB of VRAM use even after running the texture optimizer, and this is not in Eyefinity/Surround; this is on Ultra settings at 1920x1200 with 8xAF, 0xAA and a lighting add-on.
I even tried Eyefinity myself and could not get it to run favorably on a 6990 with 4GB total VRAM (2x2GB). On Ultra it would run for about 1-2 minutes, then turn into a slideshow, I'm guessing due to VRAM starvation with Eyefinity going across 3x 1920x1200 monitors, so I'm not sure even 3GB would be sufficient.
Skyrim is probably the single most important game for those into RPGs/games in this setting, and many of us were waiting on NVidia to release something competitive, but since it does not have nearly enough VRAM for us, the 680 ended up being a disappointment.
MSI GeForce GTX 680 2GB GDDR5 492 €
28nm GPU with PCI Express 3.0
1st NVIDIA 28nm GPU.
Support NVIDIA GPU Boost Technology.
Support PCI Express 3.0
Afterburner Overclocking Utility
Support GPU/Memory Clock offset and Power limit control.
Support in-game video recording.
Support wireless control by android/iOS handheld devices.
Support built-in DX11 effect test
3DVision Surround
Support 3 displays in full stereoscopic 3D with a single card.
Support up to 4 displays with a single card
All Solid Capacitors
10 years ultra long lifetime (under full load).
Lower temperature and higher efficiency.
Aluminum core without explosion
Product Specification:
Product Name N680GTX-PM2D2GD5
Model V282
GPU NVIDIA GeForce GTX 680
Codename GK104
CUDA Core 1536 Units
Core Base Clock 1006 MHz
Core Boost Clock 1058 MHz
Memory Clock 3004 MHz
Memory Size 2048MB GDDR5
Memory Bus 256 bits
Output DisplayPort / HDMI / DL-DVI-I / DL-DVI-D
TDP 195 W
Card Dimension 270*111.15*38.75 mm
Form Factor ATX
Technology Support
DirectX 11
OpenGL 4.2
PCI Express 3.0
CUDA Y
SLI Y
PhysX Y
PureVideo HD Y
HDCP Y
Accessory
Driver CD Y
Manual Y
Installation Guide Y
6-pin Power Cable Y, 2
DVI to VGA Dongle Y
Warranty: 2 years.
http://www.pccomponentes.com/msi_gef...2gb_gddr5.html
http://tof.canardpc.com/preview2/c40...85cdcf5396.jpg
GTX 680 OC or an SLI one?
http://tof.canardpc.com/preview/cefe...e2fa566a00.jpg
http://www.facebook.com/photo.php?fb...type=3&theater
Thanks to my man kaktus1907 :up:
http://forum.beyond3d.com/showpost.p...postcount=3214
Wow, indeed.
Some people toss in absurd texture packs with super-duper-high-resolution foliage (whose resolution you never actually see) etc. that jack the memory usage up very inefficiently. It's doable, but you really gain nothing from those mods beyond what you get from the official high-res pack + a couple of select general packs and a modded ini.
+ some crazy gridsize levels ^^
Apologies for thinking that the console-optimized textures are complete garbage for the most part and not worthy of PC gaming. Yes, they are user-generated and probably not as optimized as they would have been from the developer, but we do not have much else at our disposal currently.
When I first downloaded Skyrim on Steam, I thought the 5GB game file was a joke; even Fallout NV has 30GB of textures.
I then got the high-res pack and play with max textures at 1080p on a 1GB card and still have no problems. But I also think the textures look HORRIBLE. At least let my old GPU choke to death while looking good doing so; instead they just gave us a mediocre update that's supposed to run well on crap hardware and gives nothing to the high-end people.
2GB is plenty even for 2560x1600 with AA and reasonable Skyrim mods... yes, you can slam into the wall if you add the tiny-but-bloated ones that make negligible visual gains, but if you leave those out and optimize properly, you're well within limits and it looks 99% as good. Other than that game, I can't think of one that runs into the wall (BF3 doesn't *NEED* more; it will consume it if available).
Radeon HD 7990 (April)
4096 (1D) shader units, 256 TMUs, 64 ROPs, 2x 384-bit DDR interface, 850/2500 MHz, gaming power draw presumably ~350 watts, estimated performance ~500%
GeForce GTX 690 (May)
presumably 3072 (1D) shader units, 256 TMUs, 64 ROPs, 2x 256-bit DDR interface, gaming power draw presumably ~330 watts, estimated performance 500-530%
GK110 (August)
presumably ~2500 (1D) shader units, 512-bit DDR interface, gaming power draw presumably 250-300 watts, estimated performance ~495%
This monster... GK110 will be only 5% slower than a dual-GPU card of the same generation? OMG!
http://www.3dcenter.org/news/vermutl...90-aufgetaucht
And how do you know said unofficial texture packs have been properly QA'd before release, ensuring they are properly optimized? People say that many 3rd-party texture/graphics mod packs eat up a ton of memory, but has anyone stopped to think that some (or MOST) of them may be poorly optimized, resulting in an unnecessarily inflated memory footprint?
A few examples of improvements through patches implementing proper in-game texture efficiency have been seen in a number of titles:
- AvP: Large (10%+) increase after the 2nd patch improved texture efficiency
- Wargames: EU Conflict: latest two patches have focused upon texture performance and the result has been ~20% performance increase in my tests.
- Shogun 2: Patch effectively improved across the board performance with a special focus being put upon fixing a memory leak in the texture caching system
I could go on and on. Basically, picking a 3rd-party mod makes for a VERY poor comparison, particularly considering that most of the time an architecture's rendering limits will be reached far before memory comes into effect.
Here are a few games that go above 2GB at 5760x1080. The file names indicate the game/settings. Some of them are a bit too much for my single 7970, but they are still valid data points for CrossFire use. Some would also exceed 2GB at lower settings.
[Attachments 124740 through 124749: VRAM usage screenshots, one per game/setting]
While many games will allocate more, they do not *need* more to run at full performance; BF3 and Crysis 2 especially are well known to do so. Skyrim can be pushed there with excessive modding that offers little visual gain compared to a more modestly modded install. Others can be pushed with extreme settings where extra VRAM would have no impact even if present, due to performance. Notice the framerate in the few shots you have that are legitimately over 2GB, and their settings from the filenames; that's with an OC'd 7970, and it's unplayable with those numbers, let alone the minimums...
As was said, GPU speed is an issue long before VRAM ever becomes one. Some extreme settings can result in higher usage than 2GB can run properly, but that's more for show than actual use.
Yeah, there is a reason (though it's not exactly known in any specific case, of course). My guess, as a coder, would be partly caching to smooth over load stutters (pre-caching of assets), partly less frequent garbage collection (to limit performance hits during gameplay) if excess VRAM is available past what is needed to run, and partly extra VRAM being put to actual use. However, this is speculation on my part as to why many titles seem to exhibit this behavior currently.
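As a sketch of the caching pattern I am guessing at (purely my own illustration, not any engine's actual code): keep assets resident until a VRAM budget is hit, so reported "usage" tracks the memory available rather than the memory needed:
Code:
# Evict-only-when-over-budget cache: a bigger card "fills up" with the
# same workload, while a smaller one evicts sooner and re-uploads more.
from collections import OrderedDict

class AssetCache:
    def __init__(self, budget_mb):
        self.budget = budget_mb
        self.resident = OrderedDict()   # asset name -> size in MB

    def touch(self, name, size_mb):
        self.resident.pop(name, None)   # re-insert as most recently used
        self.resident[name] = size_mb
        while sum(self.resident.values()) > self.budget:
            self.resident.popitem(last=False)  # evict the least recently used

big_card   = AssetCache(budget_mb=2048)  # reports ~2GB "used" over time
small_card = AssetCache(budget_mb=1024)  # runs the same scene, just with
# earlier evictions and more re-uploads (i.e. the occasional stutter)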
Unfortunately, there really hasn't been a ton of testing done on brand-new cards regarding this... WSGF had a 1GB vs 2GB article for the GTX 460, and I've seen some tests of 1.5GB vs 3GB GTX 580s with negligible differences, including BF3 at 2560x1600, but the line as to where you need to be is pretty unclear. :( At this time though, barring surround setups (and even including some without applying tons of AA, or even with it in non-bleeding-edge titles), it *appears* (and I emphasize that keyword) to me that 2GB is enough.
Also, presumably you want the card to last for a while, so when will 2GB be breached? In 18 months' time?