Thanks Olivon! Looks like there may be a large fan on these things?
Dang looks like those links no longer work.
How is AMD's not-even-announced 8000-series known to be closer to a refresh of GCN than Kepler is to Fermi? I thought Fermi was the new architecture, and what we'll see from Nvidia will be upgrades of that arch, but nothing revolutionary like GT200/300 - GF100.
The post you replied to makes me really wonder how many posters are actually plants by marketing teams trying to divert attention from a competitor's product. But then again, there will always be someone who is "disappointed" or "waiting".
The fact of the matter is that AMD's next generation parts are nowhere near release. Think more along the lines of 2013 considering Tahiti refreshes haven't even taped out yet, let alone a move towards a more advanced architecture.
In addition, I think people have to be clear about one thing, and I've been saying this from the start: the first generation of high-end GCN parts was never meant to compete against Kepler. They were originally slated to be fabricated at 32nm and be a direct competitor to NVIDIA's refreshed Fermi cards (GTX 580, GTX 570, etc.). Their performance results pretty much back this up as well.
IF (and that is a big IF) NVIDIA has designed Kepler to be a real step up from Fermi then AMD's current generation doesn't stand a chance in the high end market. On the other hand, AMD has proven that they can compete on price and all of their HD 7000-series (particularly in the $200+ price brackets) have more than enough padding to endure cuts of 10% or slightly higher.
That's what we call a test mule. It is an engineering card with localized PCB I/O connectors for diagnostic equipment.
still have a long ways to go for the announcement of the announcement
I'll go out on a limb here and predict you won't see any "announcements of announcements" this time around. :)
can people measure the chip size from that last photo? it actually looks normal sized (like 300-350mm2)
Original link : http://www.chiphell.com/forum.php?mo...8&pid=11248448
Not seeing anything on the "usual suspects", that would be pretty nice.

Quote:
Originally Posted by Kyle_Bennett (HardOCP Editor-in-Chief)
I am seeing information out of China this morning showing 45% to 50% performance increase over 580 in canned benchmarks.
If this is being called a gtx 680, this doesn't bode too well for pricing.
If 660 is looking at being released fairly soon then how long would they hold back 680? Until AMD catch up? Maybe that's why a long delay has been hinted until 680.... That sucks
I was under the impression that the first release wouldn't take back the crown, and the card that WOULD.... is delayed.
I really don't know which rumors to believe, but personally I feel it's better to have mediocre expectations and be surprised rather than big hype and a big letdown.
One set of rumors puts it around 580 level and another set of rumors has it unbeatable by anything currently on the market.
I play it by common sense and some well educated guessing.
If the shader count is true, even without the hot clocks, there's just no way this thing is only going to perform at 580 levels. Even though it only has a 256-bit memory bus, the RAM speeds are apparently a GHz higher than what we saw in the GTX 580 (albeit it would need a memory clock of 6 GHz to match the bandwidth of the 580)...
Now, I know what people are thinking... "with that big of a memory bandwidth handicap there's no way it can beat the 7970, and it will only be around 580 performance"... I'll happily remind people that memory bandwidth is only part of the story. The 8800GTX had a 384-bit memory bus, while the 8800GTS 640mb had a 320-bit one; we all remember that. Meanwhile, the refreshes for the 8800GTS and 8800GTX were both 256-bit with slightly higher memory clocks (NOT enough to close that gap in bandwidth). How were they able to stand above in most cases (the 8800GTX beat the 9800GTX when the 9800GTX hit its memory barrier) with lower memory bandwidth? Efficiency. One thing that was heavily tweaked for the G92 was its efficiency in using its bandwidth. Mind you, the 9800GTX only had a 60-ish MHz clock speed increase while losing more than a few parts, and STILL beat the 8800GTX for the most part.
Fast forward to now, and you're talking about a very different situation: 3x more shaders and 33% more ROPs (and who knows what else) against a bit of a memory bandwidth reduction. You'd have to be crazy to expect this card to only tie the 580. I'd say ~50% faster than the 580 sounds about right. That's not crazy talk, that's going by NVidia's own history and what we know so far.
Anyone expecting 100% across the board though, THAT is a pipe dream if I've ever heard of one.
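For what it's worth, the bandwidth arithmetic behind the post above is easy to sketch. The 6 GHz effective figure for GK104 is from the rumors, not a confirmed spec:

```python
def bandwidth_gbs(bus_width_bits, effective_rate_gbps):
    """Peak memory bandwidth in GB/s: bus width in bytes x effective data rate."""
    return bus_width_bits / 8 * effective_rate_gbps

# GTX 580: 384-bit bus, 4008 MHz effective GDDR5
gtx580 = bandwidth_gbs(384, 4.008)   # ~192.4 GB/s
# Rumored GK104: 256-bit bus, ~6 GHz effective
gk104 = bandwidth_gbs(256, 6.008)    # ~192.3 GB/s
```

So at the rumored 6 GHz effective rate, the narrower 256-bit bus would indeed land at roughly the same peak bandwidth as the 580, which is exactly the "needs 6 GHz to match" point made above.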
To me, common sense in this type of situation says that a new uarch, plus a late launch/delays, mixed with possibly poor yields, should be cause enough to pause and think about the situation. We've all seen this situation numerous times over the years; everyone reading these threads should be feeling a little déjà vu.
We've seen plenty of hardware that looks killer on the spec sheet and gets treated as the second coming by community hype, only to be so-so in reality; doesn't matter the brand.
Not that I doubt Kepler can or will be a great product; it's simply that Kepler has yet to prove itself to me as a great product.
Late really doesn't mean as much as we make it out to be. While to us 3 months seems like a huge gap, in the grand scheme of things it's truly not. In most markets there is usually a gap between competing products. Video game consoles, for example: the PS2 launched a year before the Xbox, the Dreamcast a year before the PS2, and the Xbox 360 launched a full year before the PS3. Three-ish months isn't late, considering it takes longer than that for a new graphics series to even make a dent in market share or sales--especially when there isn't a game out right now that can't manage just fine on last gen's hardware. It just annoys the enthusiasts to have to wait. Personally, I was hoping the card would've launched by my b-day, since my woman was going to buy me one. I told her to give me a Microcenter gift card for the amount instead. I'll wait and see for myself who has the better card.
Fact is, a rather sizable majority of the market doesn't do their research. They don't know a company has a new product out, or that they have a new product coming out for that matter. They walk into their local store (which is likely employing people who also don't know anything), tell the guy in the blue shirt how much money they have to spend, and the guy in the blue shirt says "buy this". They then pay someone in a white shirt at the same store $50+ to put the card in for them because they don't know how, and go on their merry little way home. I can honestly say 3 out of 5 people whose systems I work on are still running the drivers that came on the disc with their cards, since they haven't updated since the initial installation! Considering how many systems I've worked on, that's pretty bad odds.
As much as people like to claim doom and gloom every time a company is behind the opposition's release cycle, the market really doesn't care, because the market doesn't even know. To us (who, mind you, are NOT the majority of the market... enthusiasts are always the last people noticed by big companies), what should matter is who has the best product for the amount of money we're willing to spend. To them, what matters is what they were told to buy by someone else.
It's just like SkyMTL said... The 7970 was supposed to be the 6970 before TSMC cancelled their 32nm process. AMD just made a few changes to the design and released it as the 7970. So in that sense NVidia isn't late, AMD was just early. Now Fermi, on the other hand, was late. :rofl:
Unfortunately, there is only wishful thinking in your post.
You are basically saying that, compared to big Fermi, the mid-range Kepler GK104 drops die size by about a third (ca. 510mm^2 is 50% more than ca. 340mm^2), drops TDP by around a third (ca. 270W real peak TDP is 50% more than ca. 180W real peak TDP), increases transistor count only a little (the rumours mention 3.5 billion transistors, maybe up to 4 billion), and performance goes up by 50% too??? I can only see three possibilities for how this could happen:
1) Fermi is the least efficient chip ever. Hot, broken, unfixable. Kepler is a heavily reworked architecture, picking up some low hanging fruits.
2) nV engineers put physics and TSMC engineers to shame, doubling the gains TSMC would ever admit were possible from the 28nm process.
3) What you say is wrong.
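To make the skepticism concrete, here's a quick sanity check on the numbers quoted above; all of the inputs are the approximate rumored figures, not confirmed specs:

```python
# Rumored/approximate figures from the post above
die_fermi, die_kepler = 510, 340   # mm^2
tdp_fermi, tdp_kepler = 270, 180   # W, real peak (approximate)
perf_gain = 1.5                    # claimed GK104 vs GTX 580

print(die_fermi / die_kepler)      # 1.5 -> die shrinks by a third
print(tdp_fermi / tdp_kepler)      # 1.5 -> power drops by a third

# Implied perf-per-watt improvement: 1.5x the performance at 2/3 the power
perf_per_watt = perf_gain / (tdp_kepler / tdp_fermi)
print(perf_per_watt)               # 2.25x perf/W in a single node shrink
```

A 2.25x perf/W jump in one generation is the crux of the argument: either the old chip was deeply inefficient, the process gains are far beyond what's advertised, or the claim is wrong.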
So what is going on, if there are indeed 3x more CUDA cores? Where does that 50% more performance come from? Dropping the hot clock looks almost certain by now, but there still seems to be a lot more raw GFLOPs available. Will they translate to more performance?
It depends. My educated guess is... there will be no SFUs anymore in a SM. That's where the space will come from to fit all the extra CCs. Special functions will be done on the CCs in multiple clock cycles, just like... in GCN! There are a few advantages to this approach. First, you can reduce the data movement inside a SM; registers can be kept closer to the SIMDs. Data movement is expensive, so you save power by avoiding it. Furthermore, SFUs do nothing for Linpack numbers; they don't increase the FLOP count. And nVidia promised to deliver 3 times more GFLOP/Watt with Kepler, and HPC is a very important market for them. So if you exchange "useless" SFUs for shaders, saving some power in the process, this goal becomes achievable!
So if you look at artificial, "canned" benchmarks that rely on raw GFLOP power... yes, there is going to be 50% more performance. But if you look at others, ones requiring special functions... performance will start to tank, likely below that of a GTX 580. Games require a mix of both, so who knows how it will balance out. Overall faster than a GTX 580 is very likely IMO, but not by much.
Well if we're going by common sense and educated guessing :P mine would be (without going into the boring details):
- GTX580, GTX570 will be EOL ASAP
- GK104 on avg 10~15% faster (will mostly range 5~20% faster than GTX580)
- GK104 was never meant to be a high-end chip, but due to AMD's not overly impressive performance for a new gen, Nvidia can now market it as that, and can instead delay the high-end chip to try to squeeze the TDP as low as possible (that's most likely the reason it's delayed)
- GK104 was targeted at around $300~$350 MSRP, but due to AMD's high pricing of the HD7950/HD7970 they can place it at up to $399 and still have a very impressive performance/price ratio, which is obviously needed to lure customers into buying it after coming so late
- Noticeably improved power consumption vs GTX580 (beats it in both performance and TDP)
- Will be a good card to buy from price/performance ratio at the time (due to coming months later than AMD, this is basic business 101)
- Most likely slightly better overclockability due to TDP headroom => overclockers can benefit a bit extra
- AMD's pricing will be dropped by $50 (max) or so for both cards
- What you get in comparison to Nvidia's last gen => save $100 ($150 comparing launch pricing) for ~15% better performance (slightly more for overclockers), plus lower power consumption (which you can expect reviews to point out like there's no tomorrow), plus maybe some basic new feature we don't know about yet
TO ME this is reasonable. Not too high expectations, not too low; reasonable.
Further, to reinforce my speculation: if the above turns out to be true, you can expect that AMD's strategy was to bring out the cards as fast as possible and price them as high as possible. They knew a good chip from Nvidia was coming and expected a drastic drop in sales once Kepler is out, so the plan was to squeeze as much $$$ as possible from the months of advantage they had over Kepler.
I think you're on the right track, if I were guessing ^^
Ya know what this reminds me of after looking at the die size? The 7900GTX. It wasn't as fast as the X1900XTX, however it sure as :banana::banana::banana::banana: sold a whole lot more. I think probably 5-10% faster than the GTX580 at $250-300 is about where it's going to shake out.
50% would be awesome, even if canned benches. Especially if this would turn out to be the mid range Kepler.
On a side note, from memory I thought the 7900GTX was faster than the X1900XTX. At least in most games.
Seems like from this review, in most of the games the 7900GTX is faster; I think that was the general consensus? Pretty sure the X1900 was a whiner as well, whilst the 7900 had the Quadro cooler.
http://www.xbitlabs.com/articles/gra..._13.html#sect0
The only test I can find that shows minimum fps is the review from bit-tech, where the XTX had HQ AF turned on and was (give or take) about even with the 7900GTX. I doubt HQ AF made much of a difference; would like to see otherwise though.
Anyway this is going way off topic lol.
I think there's a lot of misinformation, or confusion between mid-range and high-end, around Kepler... and probably a lot of information, including performance figures, mixed up between SKU numbers (GK10x/GK104) and mid-range vs. top-range. Basically, if the card is 50-60% faster than the GTX580 on average, it is not the mid-range part, and I doubt it was designed to be. In that case it would not be the GTX560 replacement (or Nvidia has changed their SKU numbering scheme: GK100 > Quadro/Tesla only, aimed at full DP ratio and computing; GK104 > top-level gaming...).
That would put the mid-range card close to GTX580 SLI performance levels... (a bit lower, say GTX590 level)...
Top range to top range, I can easily imagine a +50% average gain... But if the mid-range tops the GTX580 by 50-60%, that means mid-range to mid-range the gain is easily 90% (560 to GK"104")... That would also mean the low-end or entry gaming part could match or even be 10% faster than a GTX580... (if the GK104 is 50-60% faster than the GTX580, should we believe the GK106 can be 10% faster than a GTX580, i.e. 40-50% slower than a GK104?) Something doesn't match there...
Of course, in some benchmarks (I'm thinking of Unigine, for example; assuming the GK104 has the same number of PolyMorph engines (tessellation) or more, it could end up quite a bit higher than a 580).
I think some sources and insiders are using the term "Kepler", and people write "GK104" and then assume this is the 560 Ti replacement...
I'm not even making a comparison with the 7970 or AMD, since as you've seen, that is not the question.
I really wonder how they could even test these cards in China; why would Nvidia have released proper drivers for the new cards yet?
So I won't take seriously any rumor that pairs performance numbers with a Chinese source. Other details like RAM and bandwidth they can tell, but I doubt they can test the performance accurately.
They would be given test drivers specifically with the card/sample. Reviewers get cards days/weeks/months in advance to do all the testing. They had drivers obviously :/
A side note. Lanek, 50% over 580? I don't think it'll be that drastic. Somewhere around 30%, maybe 40 (tops). But this is a lot of speculation and nonsense that almost never holds true :)
we should expect it to perform close to a 7950/7970, and anything else is just crazy speculation
50% faster than the 580 with 180W and a 350mm2 die... now I do want to see that happen. Count me in for a purchase on sight.
But.. let's wait and see, not very long left now. But if you want my opinion, expecting that kind of performance from midrange is a pipe dream.
GK 107 (GT650M) with 384 CUDA cores tested on 3dmark 06
Attachment 124344
http://www.ozeros.com/2012/03/gk107-...-en-3dmark-06/
That's basically 8800gt levels of performance isn't it?
one day before GDC.......
http://framebuffer.com.br/wp-content...enerife_B3.png
funny situation!!!
Tehee, that's going to be fun! Could be totally fake though; just something uploaded to googleusercontent, it seems?
lol Tenerife, the place where two Boeing 747s collided head-on, 500+ deceased. AMD must mean something here :D
Looks like somebody is scared of GK104.
This is hilarious for several reasons.
First, one has to dig deep to find a post where you don't praise/defend Nvidia and/or bash AMD. You calling someone out is pure hypocrisy.
Second, you're so biased that someone commenting on how a GCN refresh could happen soon made you go full tilt and miss the obvious: your "AMD's marketing teams trying to divert attention from a competitor's product" is in fact an nvidiot who only buys Nvidia but tries to be rational about it, coming up with excuses like the well-known "let's see what both have to offer". He was rationalizing again with the excuse that it's acceptable to wait months for Fermi/Kepler but not acceptable to wait months for the new AMD card, because in his head one was a refresh and the other a new arch. The guy you quoted was in fact pointing out the flaw in that reasoning: Kepler is as much of a new arch as the GCN refresh will be.
What a monumental epic fail from you.
Third, AMD is about to release a new card, and there's a flood of news about how awesome Kepler is and how close it is; AMD is about to release another card, and there's another flood of news about how awesome Kepler is and how close it is. For years now Nvidia has done little more than try to stall the competition's sales with diagonal schemes and propaganda for idiots. You have no issue with that. But when the question is raised of why it makes sense to wait months for Nvidia but not for AMD, you take issue with it, even if it was an nvidiot bringing it up.
:shakes: I'm sorry, it's been how many years in a row now that AMD has had the fastest card? With comments like that one, you sure can call someone out for doing marketing for someone.

Quote:
if NVIDIA has designed Kepler to be a real step up from Fermi then AMD's current generation doesn't stand a chance in the high end market.
Oh, a pre-emptive strike on the moronic single-GPU argument. Matrox has the fastest card with a niche configuration for a niche market, and no one cares, just like no one but Nvidia fanboys cares about the single-GPU nonsense. But hey, if believing that AMD couldn't also make a 600mm2 GPU if they wanted to makes you sleep better at night, whatever.
Little correction for you... Both the HD 2900XT and the FX5800 were FAR less efficient than fermi. The HD 2900XT consumed more power than even the 8800GTX, while performing far under, for example. Fermi at least had the performance lead, neither of my examples could say that.
Show me an AMD chip from the 7 series that you could buy 6 months ago.
That would've happened had TSMC not cancelled 32nm. It was that cancellation that pushed back both Southern Islands and Kepler.
Exactly, and that is the point of my statement. Saying GCN was "supposed" to be released in 2010 adds nothing to the argument that Kepler high end chip is late. GCN wasn't released in 2010 but 6970 was, even if late.
It's pointless to dwell on "what ifs"; you should deal with what is, and GCN is now competing with Kepler, and that is the correct comparison to make. You don't compare architectures by when they should have come out.
"Entire top-to-bottom fresh lineup"??? I count 4 models of HD7xxx cards currently being sold. The HD6xxx series had 10 card models in it. How on earth can you claim that the current line-up of 4 cards is a top-to-bottom fresh lineup? LoL.
It's only been 6 weeks since HD7970 was released. AMD typically releases video card product lines every 1-1.5 years apart.

Quote:
But seriously, I think Tenerife is planned to fight Big Kepler (GK110) in a couple of months
That being said.... there is absolutely no chance of AMD releasing an HD7970 refresh in a couple months.
At the same time, the 7870/7850 are expected on March 7th, if I'm not mistaken...
(The 7970 has been in stores since January 9th, the 7950 followed on January 31st, and the 7770/7750 in February...) All in all, they will have 6 cards out within days.
Now, you are right, and I too doubt AMD will release a refresh before the end of the year... whatever it will be called; maybe end of Q3, but more likely Q4. There may be some cards released to fill gaps in between in Q2, but those are still just rumors (7790/7830/7930 or whatever they call them).
In general AMD releases cards in October/November, when they aren't waiting on the production process to be ready.
6 months? lol math skills apparently aren't what they used to be....
This thread is so intense =|
-PB
R420 - June, 2004
(16 month gap)
R520 - October 2005
(20 month gap)
R600 - June 28, 2007
(12 month gap)
R700 - June 2008
(15 month gap)
Evergreen - September 23, 2009
(13 month gap)
Northern Islands - October 22, 2010
(15 month gap)
Southern Islands - January 9, 2012
Now, taking into account the fact that ATi/AMD has NEVER released two product generations less than 12 months apart, with releases averaging roughly a 15-month product cycle.... I highly doubt we'll see anything different this time.
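The gaps in the list above can be checked directly (using the dates as listed, with the first of the month where only a month was given):

```python
from datetime import date

# ATi/AMD generation launch dates from the post above
releases = [
    ("R420", date(2004, 6, 1)),
    ("R520", date(2005, 10, 1)),
    ("R600", date(2007, 6, 28)),
    ("R700", date(2008, 6, 1)),
    ("Evergreen", date(2009, 9, 23)),
    ("Northern Islands", date(2010, 10, 22)),
    ("Southern Islands", date(2012, 1, 9)),
]

# Gap between consecutive launches, in whole months
gaps = [(b.year - a.year) * 12 + (b.month - a.month)
        for (_, a), (_, b) in zip(releases, releases[1:])]
print(gaps)                   # [16, 20, 12, 15, 13, 15]
print(sum(gaps) / len(gaps))  # ~15.2 months on average
```

So the shortest gap on record is 12 months and the average is around 15, which is the basis for doubting a refresh only a couple of months after Southern Islands.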
first page updated with NVIDIA GK104 PCB
Getting closer....
http://assets.vr-zone.net/15095/gk104.JPG
this chip looks to be roughly the same size as the 8800GT.
thanks TRANCEFORMER.
To the people who believe in Tenerife, or even a demonstration of it in March 2012... are you high?
[align=center]NVIDIA Kepler GK104 (GTX 680) Final Specifications Leaked[/align]
*The site reports that Nvidia will launch a 28nm product based on the GK104 chip on March 12th, with global availability on March 23rd.
*The GK104 would hold a total of 1536 Stream Processors. Core clock would be set at 705MHz and processor clock at 1411MHz. Memory would consist of a 2GB GDDR5 (256-bit) buffer clocked at 3004MHz (6.0GHz effective data rate).
*The card would feature either 2x 6-pin or 1x 8-pin + 1x 6-pin PCIe connectors, and a die size of 320mm2, which is 45mm2 smaller than the Tahiti XT chip. A 5-phase NVVDC configuration powers the GPU. The green PCB suggests the card is in the engineering phase; the final product would feature a black PCB.
How fast do you guys think GK104 will be against the GTX580?
Well, for those who know architecture, a question: with 1536 Stream Processors (3x GF110) at 1411MHz, a 705MHz GPU clock, a 2GB GDDR5 (256-bit) buffer clocked at 3004MHz, and roughly 192 GB/s of bus bandwidth, what can you expect? Working with the performance rumors we have, can you expect something as big (I mean in performance) as the G80 was? I think not.
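Just to put the rumored shader counts in raw-FLOPS terms (these specs are leaks, and raw FLOPS rarely translate 1:1 into game performance, as the SFU discussion earlier in the thread shows):

```python
def gflops(cuda_cores, shader_clock_ghz):
    # 2 FLOPs per core per clock (one fused multiply-add)
    return cuda_cores * shader_clock_ghz * 2

# GTX 580: 512 cores at a 1544 MHz hot clock
gtx580 = gflops(512, 1.544)    # ~1581 GFLOPS
# Rumored GK104: 1536 cores at 1411 MHz (no separate hot clock)
gk104 = gflops(1536, 1.411)    # ~4335 GFLOPS, ~2.7x on paper
```

On paper that's nearly triple the single-precision throughput, which is why the "only 50% faster in canned benchmarks" rumor actually implies much lower utilization per FLOP, not G80-style magic.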
I don't know how it will perform, but if Nvidia can beat Tahiti with a 320mm2 chip, AMD is definitely in trouble. Perf/mm2 was their strongest point in the post-ATi generations.
There is a big catch that is being left out of the leaks.
As if he's gonna say that :D
If I want the most powerful single-GPU card and I have time to wait until April 1st, should I buy a 7970 now or wait for the green card?
Giving advice on this shouldn't be an NDA violation, right? :rolleyes:
Maybe the big catch is Kepler has some sort of hyperthreading?
Taken from Beyond3D: http://forum.beyond3d.com/showthread...=58668&page=85
http://www.microsofttranslator.com/b...20120306077%2F

Quote:
(cleaned-up machine translation) NVIDIA's approach is described as making something like Intel's Hyper-Threading: free CUDA core resources are made available to other threads, increasing performance per CUDA core and lowering power consumption relative to performance (per an NVIDIA official).
That looks like LenzFire's specs.
Notice it also says GK110... anyway irrelevant, that's still not confirmed specs for that chip either, it's been said it's based on rumors.
The news talks about the use of HT in Kepler, low power consumption, and performance 1.5x higher than Cayman, in the words of an Nvidia director -> http://www.microsofttranslator.com/b...20120306077%2F
It doesn't actually say anything about Cayman. Then it goes on, and the big one, I think, is this: "absolute power should be significantly improved". So the first part, the 1.5x over "conventional", is the relative one...
Also

Quote:
Originally Posted by Yahoo

I think we need someone native to tell us what it really says...

Quote:
Originally Posted by Google

I think he says that "performance only" users won't be happy with that 1.5x scaling over last gen, but they will bring good power consumption improvements.
I wonder if the "hyperthreading" function being mentioned is just an update to Fermi's GigaThread scheduler.