I heard the 22nd.
$550 is just too much for 2GB. All those cores might pack a lot of horsepower, but 2GB is not going to be enough for very high resolutions. Nvidia really made a mistake, unless they want to force enthusiasts to first buy a 2GB version and then a 3 or 4GB version.
Anyway, 2GB will be enough for 1080p screens. This card will blow minds at 1080p.
Food for thought: maybe NVIDIA realized that they don't NEED what would be deemed a "high end Kepler" to trounce AMD this round. Maybe their mid range core is more than enough.
/runs back out of thread
$549 sounds like too much; $499 sounds reasonable, unless it's on average at least 15~20% faster than the HD 7970, which seems unlikely. Coming several months late means it needs to be at least $50 cheaper at the same performance as the competitor; that's how this business usually works.
Expensive, and I fully expect the crowd to state it's a robbery.:ROTF:
wooooow, nice :D
If what you say is true, I can say that Nvidia's new-generation mid-range proudly surpasses AMD's new-generation high end. So let's put it simply: Nvidia is the big boss in this competition.
(Imagine the look on the CEO's face when that happens) :eek::eek: (Mid-range beats high end) ... that's kind of a cool name :D
Same price as 7970 :
http://www.newegg.com/Product/Produc...rder=BESTMATCH
But what about performance, power consumption, OC capabilities, noise ... We don't know yet :shrug:
On the other hand, you can't say the fastest card available from Nvidia right now is "mid-range". It will be mid-range once a faster model comes out; meanwhile, the fastest and most expensive card is high end, regardless of Nvidia's plans to release a faster one.
A mid-range card sold for $550 would be taking the piss. Smells like price fixing.
With a small die like that and just 2GB of VRAM it can't possibly cost more than $200 to manufacture (probably WAY less).
Well, if they name it 680/x80, that's been the norm for high end; if they price it at $500+, that's definitely priced as high end; and if it performs in the same range as or better than the 7970, that should indicate it's high end as well.
Not sure why anyone would consider the card any sort of mid-range based on what's being said.
If price/performance/model numbering is in line, the card will be high end.
I agree with you Zalbard, which might be a first. AMD's pricing hasn't really affected any of Nvidia's lineup besides the GTX 580, which it basically had to, considering that I don't think the 7970 would sell too well at the $650 range (they also have worse price/performance than their last generation, which is a first for them). I think this might be a coordinated effort on both sides to raise pricing, because the low-end market is gone now that most processors have integrated graphics built in. If Nvidia follows AMD's new pricing, then both sides can enjoy higher profit, because this is a duopoly. The new naming of the GTX 680 (GK104 should have been GTX 660) will let them get away with it for the vast majority of the market, especially if the performance is there.
You know, this is why I always wished some sites would step outside the box and test games with mods, since that can make a huge difference. Darkplaces, a Quake source port, with the mods I use eats a ton of VRAM and beats the crap out of the CPU (it makes for a great CPU bench).
Oh, did AMD ever fix their performance issues in GTA with the latest patch?
challenge accepted
nahahahahahaha
Attachment 124592
That's the greatest I've ever seen in my whole life.
^^ what were you running to set that house on fire? quad SLI?
But IF (everything based on rumors):
1) GK104 has almost the same die size as the 7970
2) GK104 has almost the same price as the 7970
3) GK104 will be called GTX680,
can't we consider it high end?
... Will we see a GTX 685 as a high-end card?
... Won't the GTX 690 be two GPUs on the same PCB?
I think it's been a long time since AMD and Nvidia shared the same vision of what a high-end part is.
Since you guys mentioned 1080p/2GB/Skyrim,
here's Skyrim / latest update / official HD pack / misc pack / STEP v201a pack / 2x ultra settings / 1080p / 3x GTX580 3GB.
I can only show 4096 shadow map rez, because at 8192 the 3060MB is reached very fast and I have to Esc out of the game before it crashes.
iShadowMapResolution=4096
fShadowDistance=2000.0000
high rez short distance shadows
https://lh6.googleusercontent.com/-e...s1152/2000.png
no shadows at long distance
https://lh3.googleusercontent.com/-M...s1152/2001.png
iShadowMapResolution=4096
fShadowDistance=32000.0000
low rez short distance shadows
https://lh5.googleusercontent.com/-E...1152/32000.png
shadows at long distance
https://lh4.googleusercontent.com/-_...1152/32001.png
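For reference, if I remember right those two tweaks go under the [Display] section of SkyrimPrefs.ini, something like this (8000 picked here purely as a middle ground between the two extremes shown above):
Code:
[Display]
iShadowMapResolution=4096
fShadowDistance=8000.0000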
Fallout 3 even maxes out the VRAM on my GTX480s with the mods I'm using at 1920x1080. I'm sure higher resolutions could easily top 2GB.
$550 launch price
$339 in 2-3 months after launch...:D
This seems to be a common misconception from lots of people. CPUs and lots of other hardware cost very little to manufacture (the SB-E chips are probably $50 apiece, if that). The reason they cost so much is R&D: each chip takes hundreds of millions in research and years of work. I once spoke to a man who tried to manufacture his own CPU; by the time he got done he was in for about $50 million, and the chips were $10 each to make.
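To put rough numbers on that amortization (every figure below is made up, just echoing the anecdote above): the per-chip cost is dominated by R&D until volume gets huge.
Code:
# Illustrative R&D amortization; all numbers are assumptions, not real figures
rnd = 50000000                        # sunk R&D cost, $
mfg = 10                              # marginal manufacturing cost per chip, $
for units in (100000, 1000000, 10000000):
    print(units, rnd / units + mfg)   # effective cost per chip: ~$510, ~$60, ~$15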
are we there yet?
^^ LOL I wanna put down bets that we'll hit 75 pages... any takers?
oh by the way Stevil
http://www.xtremesystems.org/forums/...=1#post5067626
GK104=GTX680, NDA ends on March 22nd
Pictured at chiphell:
http://img835.imageshack.us/img835/3...l8grzl2fwj.jpg
http://img14.imageshack.us/img14/611...ho3ms7zs11.jpg
As of now, the author says he still hasn't got the driver yet (so no benchmarks possible).
Link: http://www.chiphell.com/thread-367824-1-1.html
If chiphell can't benchmark it because they don't have drivers, then are all the other leaked benchmarks fakes?
High end GK110 will be a GTX780 ..... Unless AMD can pull something magical out of their hat/bag/ass lolol
Nvidia should call this a 670 and leave the 680 name in reserve; otherwise we have the weird situation where the next release will have no new tech/new architecture etc. I want the 110 now, even if it's $1000 per video card; it's not like we haven't done that before (maybe you guys haven't paid US$1000, but we have paid over AU$1000, that's for sure).
:D
http://www.chiphell.com/forum.php?mo...9&pid=11424899
Quote:
294mm2
3.54 billion transistors
195W
Oh, I am fully aware of this. However, it makes the calculation far more complex, since you'd have to account for money gains from future products as well (R&D doesn't have to pay off from a single card model, there will be a whole new line-up using the same tech, quite similar professional cards, and future refreshes may be largely based on it as well).
My point simply is, while there may be a lot of R&D involved, there is no doubt that calculated profits from selling these cards (at the rumoured $549 MSRP) are going to be FAR greater (per card) than what you'd typically expect from such products, thus implying that they are priced too high and Nvidia is hardly trying to undercut the competition and bring more reasonable pricing to the table.
To put it short, this is a rip-off. These cost way too much (due to poor competition, price fixing or just sheer greed). If all of this is true, I sure hope that these sell in low quantities, so while profits per card are going to be WAY up, the overall profits from selling these are lower than expected, teaching Nvidia a lesson.
And talking about performance... So, let's say these are 5% faster in games than the 7970 (which is 25% faster than the GTX580), making them ~30% faster than the GTX580 (1.25 × 1.05 ≈ 1.31). Is this impressive? Not really. A simple die shrink of the GTX580 (which has a hot and large die) would easily allow for a 30% overclock, so we'd end up with a cheaper (smaller die), 30%-higher-clocked-than-GTX580 card with the same 250W TDP. So, according to the rumours, we are going to pay a good $250 extra for Nvidia lowering the TDP by 50W (and some GPGPU performance improvements most people don't care about). Doesn't really strike me as an awesome investment.
We need GK100 (GK110?).
One thing is sure: with 2x SLI connectors on it, I doubt they will release anything faster for a while... or they have drastically changed their policy on that (allowing tri and quad SLI on mid-range... hmm).
IF there's a higher-end part that should drop, I hope we will get real info, not just marketing and rumors for 4-5 more months.
If that's true then Nvidia is adorable. I have no problem hanging onto a GTX580 3GB until they release GK110 or GK112 (whichever model number is the high-end single GPU). If the 7970 were a better value (or anything remotely approaching value) I'd have jumped on it already.
Unless GK104 gives significantly better performance than an overclocked GTX580 3GB @ 2560x1600, Nvidia can bugger off. And if they're serious with 2GB VRAM then that'll mean a few games will either be pushing or maxing out its VRAM already, let alone any games that haven't even been released yet. I mean 2GB VRAM on a "top end" card.........in 2012? I sure hope not.
Whoa, whoa, whoa, wait a minute. A 7970 is 25% faster than a stock GTX580 1.5GB, and that's in best-case scenarios. A brand new 3GB GTX580 is still 40-50 Euros cheaper than the 7970.
Minimum frame rate is just as important as the average, if not more. A 45FPS average is worthless if you're dipping into the teens. At 1920x1200 and below, the stock GTX580 1.5GB actually beat the 7970 in minimum frame rate in Dirt 3 (coincidentally the same title that lots of 7970 manufacturers seem to bundle with their cards) http://www.anandtech.com/show/5261/a...7970-review/18
40-50 Euros higher for a whopping 6.7 fps increase in minimum frame rate @ 2560x1600 and potentially equal or lower minimums at lower resolutions? I don't see the incentive. It's still overpriced. If you ignore brand-specific features (eyefinity) I fail to see any justifiable reason for paying the 7970 MSRP.
I have zero loyalty to Nvidia but let's try to avoid Apple-esque reality distortion.
I agree that we need GK110/112/100/whatever the hell the high-end single GPU is called.
Has anybody actually discussed how awesome it would be if 190w beats the current high end cards by a fair margin? I mean 190w!!! That's fantastic! (if true)
A few months behind a competitor isn't late. 6+ months is late. We've discussed that fact earlier.
GK104 is only being looked at as high end because Nvidia found themselves in a unique position where their mid-range GPU was strong enough to beat their opposition's high end. As such, they can sit and reap major profits and force AMD to show their next card, which will have to deal with the GK110.
While competing with Tahiti using a smaller chip sounds like a great improvement for Nvidia, the same can't be said about GK110 launch schedule, so I guess it balances out. Most 580 owners probably won't "upgrade" for Tahiti level performance even if GK104 were a lot more efficient. It's not surprising though that Nvidia will follow AMD in slapping their performance GPU with an enthusiast price tag.
That's my reasoning as well. Until we see actual GK104 benchmarks though it's hard to be certain.
The problem this presents for some people is that if GK104 is only a smidgen ahead of the 7970 that we wind up with two overpriced and unimpressive cards because AMD doesn't have anything ready to regain the top spot (dual GPU monstrosities are irrelevant in this case). So Nvidia rides out GK104 until AMD can release an 8000-series card at which point they finally dump the flagship GK110/112/100/whatever it's called.
And that's what I don't understand. The GTX580 has been out since November of 2010. Plenty of 580 owners are ready to upgrade (myself included) but I'm not going to do it for a little gnat-fart GK104 when everyone knows the big leap will be the GK100/110. GK104 is seeming more and more like a stop-gap card to keep Nvidia from hemorrhaging any more customers to AMD.
I'm with you. Recall that the S3 ViRGE had 2MB VRAM about 15 years ago. Now we have 2GB VRAM on mainstream cards. It's a 1024-fold increase, which is strictly and strikingly consistent with Moore's Law (doubles every 18 months)!
For an enthusiast dumping serious money into high-end graphics cards (especially for SLI to max out graphics including AA), it doesn't make sense to make a quad-SLI with only 2GB VRAM in Year 2012, not even for 1920x1080.
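The arithmetic on that claim does check out, for what it's worth, taking the 15-year figure at face value:
Code:
import math
growth = (2 * 1024) / 2.0          # 2GB / 2MB = 1024x
doublings = math.log(growth, 2)    # 2^10 = 1024, so 10 doublings
print((15 * 12) / doublings)       # 180 months / 10 = 18.0 months per doubling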
I guess 50% better than 580 in 3dmark11 isn't to be sniffed at.
If the GTX 680 is faster than the HD 7970 with a lower GPU clock (~200MHz less), the power of this chip is really amazing: smaller, lower-clocked, and stronger than the competition.... lol
How is QDR triple-pumped (6GHz effective vs. a 2GHz clock)? At quad data rate, 6GHz effective implies a 1.5GHz clock, not 2GHz. I vote fake.
Yeah, evidently it's 1.5GHz; guess that's ruined another day of speculation.
Dual GK104 in May:
http://www.techpowerup.com/162275/Du...es-in-May.html
You seriously believe Nvidia will choose to be on par (IF the rumours are true, which I don't think they are) with AMD when they can easily be on top as well? I mean, seriously?
It's better described as: they needed GK104 to compete with the high end because their own high end isn't production ready.
specs from wcf are wrong.
You forget Nvidia doesn't think like you or like any one person. Nvidia is a company whose major concern is profit and sustaining that profit.
So if they can sell a 300mm2 chip for $550, and in order to commit their limited resources to a 520mm2 chip they must forego two 300mm2 chips (just as an example), they will most likely choose the most profitable route first.
The consumer is not really missing anything, as there is head-to-head competition on all ends. And performance-wise, we don't "gain" very much by having an $800 card on the market, you know. Very few bought the 8800 Ultra, as the GTX and GT were clearly better choices in perf/$.
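That roughly 2:1 trade is actually close to what the standard dies-per-wafer approximation gives (assuming 300mm wafers and ignoring yield, which would only hurt the big chip more):
Code:
import math

def dies_per_wafer(die_area, wafer_diameter=300.0):
    # classic approximation: wafer area / die area, minus edge losses
    r = wafer_diameter / 2.0
    return int(math.pi * r * r / die_area
               - math.pi * wafer_diameter / math.sqrt(2.0 * die_area))

print(dies_per_wafer(300))  # ~197 candidate dies for a ~300mm^2 chip
print(dies_per_wafer(520))  # ~106 candidate dies for a ~520mm^2 chip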
That's flawed reasoning IMO; I'm much more with DilTech on this one. But then again, I think it's rather silly to talk about a product we don't have solid performance numbers on yet, haha. However, let's say the rumors of GK104 beating the HD 7970 on average by a couple of % are true: it makes no sense to roll out the high-end chip now. Why? Think of it from this point of view: GK104 was presumably meant to be a mid-range/performance chip initially targeted at the 299~$399 price point range or so, and GK110/100 at the 549~$649 range. Due to AMD's subpar performance and high pricing strategy (squeeze as much cash as possible out of the months of advantage they have), Nvidia noticed GK104 is enough to compete very nicely with AMD's new cards, so they probably decided to postpone the high-end chip until AMD releases its next refresh. This saves Nvidia time and money, as they don't have to ramp up another chip (call it, for example, GK114b or whatever). It buys them time with the next generation in mind too: if they can use GK110 as the competitor against AMD's next refresh, that's a really big win for Nvidia.
Besides that, you have to remember GK104 doesn't seem to be a very big chip at all; from Nvidia's point of view it's actually very impressive from a size/performance standpoint. It's not a big, power-hungry chip, so it shouldn't be too expensive to manufacture. If they purposely leave out the big chip now, they can instead jack up the price of what was meant to be the "performance" chip, and that means a BIG profit increase per chip. If they also released the big chip now, they'd potentially sell fewer GK104 chips, and pricing would have to be adjusted: Nvidia has learned by now that customers won't accept much above $649 for a single high-end GPU card, not in this day and age at least, so GK104 would then also have to drop somewhat, say $449 instead of $549. There's a lot of cash to be made from a ~300mm^2 chip if it can both COMPETE in performance and SELL nicely at a price point of $500+ xD
So, speaking from a maximum-profit business strategy, Nvidia is certainly playing its cards right by delaying the high-end chip and pitting GK104 against the HD79xx; even a draw would already be a big victory for Nvidia under these circumstances. These circumstances (a mid-range/performance offering that can compete with the high-end offering) are a very rare sight in this business, and I bet the green camp is feeling quite confident at the moment.
Of course, we customers would have benefited heavily from the high-end chip being released at the same time: it would have forced AMD to lower prices a lot more significantly, and Nvidia's prices would also have stayed lower, especially for GK104. By the looks of it, Nvidia might get away with selling GK104 at a high price and making a lot of profit from it without AMD having to cut prices significantly either => bad for us customers, but an insanely good scenario from Nvidia's point of view.
I don't see any reason or logic in delaying products in GPU business. There's always people who are willing to pay more for more performance, and the best time-to-market is as soon as possible, otherwise you're just letting the competition catch up. By now it seems fairly obvious that GK100 was cancelled early on and they need to take their time with GK110. The release time frames are getting too far apart between AMD and Nvidia, so comparing gen vs. gen is useless, you can only look at what's available on the market at a given time. So far AMD has had a quarter of sales whereas Nvidia simply hasn't shown up to the fight. I'd say AMD's strategy of showing up in time has worked quite well. It doesn't matter if Nvidia will end with the faster card since by the time they get their chips out in meaningful quantities, AMD is already preparing for the next gen.
If it is so obvious I wonder why people still claim that they will make decisions that will go against this principle. Scarce resource allocation is not that hard to comprehend. And until 28nm really ramps up well, there isn't any clear reason to rush out the bigger chip if the smaller will sell at the same price point one would previously expect the 500+mm chip to sell for.
While I have the utmost respect for the engineers at Nvidia (and AMD, for that matter), and I have no doubt that their next part will live up to its performance claims, my hat comes off to their marketing team. Not only are buyers convinced that a non-existent product is far better than its competitors (never mind that the product is coming 3 months after the competition, rumored to be 5, or that it's only the "mid-range" part, with the high-end product due at the end of the 3rd quarter), but the customers themselves find ways to justify the company's profit milking.
By that logic my 580 must have been a steal, since it was a massive chip....
Green camp is starting to sound like a certain fruit vendor user base, it makes me hang my head in shame.
If you are referring to this page of the thread, then I think you are confusing an economics/business-strategy analysis with fanboyism. People CAN have a discussion based on what they believe each company should or will be doing. But nobody is going to be thanking Nvidia for selling GK104 at $550.
As for performance estimates, this is a general forum discussion based on rumoured performance of Kepler, if we didn't discuss based on rumours, there would be no thread at all.
What about the R&D costs/time/energy the company would have to put into a revamped Kepler refresh to combat ATI's next refresh, instead of saving the current high-end Kepler part for that? Ramping up production of a different chip isn't that "simple". Being first to launch has positive but also negative sides: since Nvidia launches this much later, they've had at least some time to adjust clock speeds to compete perfectly with the competitor. If it were just a month or two, that time would be limited; as it is, Nvidia has had time for some final touches (internal design decisions and such would already be locked in by this point, though, so no drastic changes).
The best time to bring out the high-end Kepler part would be about 2 weeks ~ a month before AMD launches its next refresh.
I'm just trying to look at it from Nvidia's point of view, and to me it's clear why they'd decide to delay the high-end part and jack up GK104 prices as much as possible at this point; you have to constantly look into the future when making decisions today.
If these really do turn out to be killer GPGPU cores I can't wait to see what it can do on the Help Conquer Cancer project on WCG. If the world thought XS was leading in CPU crunching I think their jaws are gonna drop when we get the GPUs fired up :-)
http://www.xtremesystems.org/forums/...-march-13-2012
so any news on the "high end" kepler? the non dual one?
There's no need to "save it for later"; it can compete against AMD's next gen just as well regardless of when it was originally launched. There's only so much polish you can do on a GPU, and the benefit of knowing the competitor's released cards and the current market situation is IMO insignificant compared to being the one actually selling cards. Prices can be adjusted post-release, but you can't make a chip go any faster; or if you can, you were doing something wrong before. :D If GK110 launches in September, by that time prices will likely have come down already and there's plenty of room for yet another ultra-enthusiast card. I certainly see that as a far more advantageous proposition than waiting until AMD comes up with something similar. And when they do, Nvidia would have the edge in production quantities and can easily adjust the price accordingly.
Basically what we have here is the new feel good Nvidia gimmick...
After the single-GPU nonsense, I present to you the "midrange" nonsense.
Let me give examples of how to use this gimmick:
"Hey guys, food for thought: Bulldozer is the mid-range, Piledriver is going to be the high end!! So Bulldozer isn't so bad after all."
"Hey guys, food for thought: the 7xxx series of Southern Islands is the low end; the mid-range and high end are coming at the end of the year!!! Basically AMD just needed its low end to fight Nvidia's late 'mid-range'. How about that, what a spectacular card the 7970 is! One of the best ever."
Of course this level of idiocy had to come from the woodscrew green team, where dual GPUs now come before the "high end".
Food for thought: how about calling it what it is? The BEST thing Nvidia can put on sale 3 months AFTER the competition. How about some sanity?
Nvidia doesn't put anything better on sale, because it doesn't need to...:shakes:
So basically Fermi was late because Nvidia didn't need to be on time, and cores had to be cut just so Nvidia could put something on sale at all, because Nvidia didn't need to, right? :shakes:
Does it really matter whether they can put something better on the market, if this card can compete and probably win in most cases against AMD? The MSRP hasn't been set in stone yet.
If the current trend is toward overpriced cards upwards of $500+ for top models, I think my bleeding-edge days are over; this is just ridiculous now...
2GB is epic fail. I use more than that on a single monitor with high res texture packs in many games. Forget about Surround users. Hopefully they come out with 4GB versions.
@ Vega-- Epic Fail is using quad 3gb 580s with only 6gb system ram....
Why does it smell like price fixing this generation?! Or is it just greed?
It's not like there won't be a 4GB version; maybe not at launch, but just like there were 3GB 580s, there will be 4GB 680s available.
Hype is reaching epic levels,
Attachment 124599
GeForce GTX 680 Up To 40% Faster Than Radeon HD 7970: NVIDIA
http://www.techpowerup.com/162305/Ge...70-NVIDIA.html
Compare:
http://h9.abload.de/img/220329m6tnt4axtfl8tni3zbek.png
:lol: :rofl:
oh what a shocker
Haha, the new way of testing performance: note down Nvidia's official benchmark figures from an Nvidia presentation, test HD7970/7950 FPS numbers yourself, and apply the claimed % to get GK104. xD Only one problem: Nvidia's own numbers are always the "best case" scenario and won't reflect the truth. Oh well, at least someone wasn't sleeping at the meet.
http://img.parachan.net/mc/src/132943710997.gif
Looking at the BF3 numbers alone should have given a hint: +18% without AA, +42% with MSAA.... Add in that the results are based on the "slide" posted above, and that should give a second one.
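In other words, the whole "methodology" boils down to something like this (the 7970 FPS numbers below are invented purely for illustration; the uplift percentages are the BF3 figures from the slide):
Code:
measured_7970 = {"BF3 no AA": 75.0, "BF3 4xMSAA": 48.0}   # your own measurements (made up here)
claimed_uplift = {"BF3 no AA": 0.18, "BF3 4xMSAA": 0.42}  # NVIDIA's slide percentages

for game, fps in measured_7970.items():
    # "estimated" GTX 680 FPS = measured 7970 FPS scaled by the vendor's claim
    print(game, round(fps * (1 + claimed_uplift[game]), 1))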
I can't imagine AMD would just be sitting by with idle hands.
I would guess AMD will drop their entire GPU lineup by ~$50 for every model; I figure they already built in this $50 buffer since they were first to market.
I can't wait to see these GPU prices start tumbling and totally restructure the market.
If Kepler is truly 40% faster than the 7970, I would assume they'd also release a 7990 immediately after it, or as soon as they can. I'd personally prefer a 7980-type card, a significantly more powerful single-GPU setup, but that doesn't fit with their current platform.
From (ugh) Facebook.
Quote:
PC Perspective
11 minutes ago
Stay tuned, we'll have a quick preview of Kepler GPU performance on the mobile site coming later tonight!!