Today we are at 40nm and still below a default 1 GHz.
Will it support Bit-streaming audio?
Yes, it's fake; NVIDIA is just trying to divert the hype away from AMD atm, that's all.
Many of those games exceed the frame buffer on the 580 with those settings... in those instances, the 780 with its larger frame buffer would absolutely demolish the 580... which makes it obvious this slide is fake IMO ;)
first page updated. thx
Now, this is surprising. The bus width implies the chips would still be rather big, but the performance isn't; those expected figures go toe to toe with the Tahiti and Pitcairn ones.
Quote:
GK100 features 1500 CUDA Cores, GDDR 5 memory with memory interface of 512-bit, the performance is close to the dual-core GeForce GTX 590, it’s scheduled to debut in Q2 2012. While GK104 features 1000 CUDA Cores, GDDR 5 memory with memory interface of 256-bit/384-bit as well as core/Shader frequency of over 1GHz, the performance is likely to be slightly higher than that of GeForce GTX 580, it’s expected to arrive in January or February 2012.
Then this starts to make some sense. NVIDIA might not be trying to make a monster big-die chip on this new, capacity-limited TSMC 28 nm process, but instead going AMD's route of smaller, higher-yielding chips as the safe play.
That 1500 SP part looks like the Tahiti counterpart, and the 1000 SP one goes up against Pitcairn, as the performance expectation leak suggested. But judging from the bus width info, I guess NVIDIA's smaller chips aren't really that small anyway, and a board with the wider bus would cost more too.
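Just to put those rumoured numbers into perspective, a rough back-of-the-envelope sketch (Python). The core counts and bus widths are from the quote above; the ~1 GHz clock and the 5.5 Gbps GDDR5 speed are my own assumptions, not leaked figures.
Code:
def sp_gflops(cores, clock_ghz, flops_per_clock=2):
    # 2 FLOPs per CUDA core per clock (one FMA)
    return cores * clock_ghz * flops_per_clock

def bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits * gbps_per_pin / 8  # bits -> bytes

print(sp_gflops(1500, 1.0))      # GK100 rumour: ~3000 GFLOPS SP
print(sp_gflops(1000, 1.0))      # GK104 rumour: ~2000 GFLOPS SP
print(bandwidth_gbs(512, 5.5))   # 512-bit @ 5.5 Gbps -> 352 GB/s
print(bandwidth_gbs(384, 5.5))   # 384-bit -> 264 GB/s (same as a 7970)
print(bandwidth_gbs(256, 5.5))   # 256-bit -> 176 GB/s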
Kepler to show its face in Q1 2012 - Fudzilla
and also a recap based on 3DCenter speculation from MfA (Happy New Year my friend! :D)
Quote:
We would not be surprised to see Kepler in some form during CES, next week in Las Vegas, at least as a behind-closed-doors presentation for select members of the press and investors.
Sources close to Nvidia and AIB partners are telling us that Kepler is faster than Radeon HD 7970, but of course you would expect them to claim that their upcoming product is better.
http://tof.canardpc.com/view/685d64f...d96cc9dc9b.jpg
http://www.chiphell.com/thread-338350-1-1.html
Edit :
http://vr-zone.com/articles/amd-post...ary/14404.html
Quote:
In a way, this is an appropriate delay as well since NVIDIA is widely expected to introduce its GK104, its first part based on 28nm Kepler architecture in the last week of January or in early February
Happy New Year to you and all, mate :)
lol Fuad.. he jumps on anything he sees so damn quick, good numbers though :p:
Those numbers look fake to me. It seems they literally just took the AMD shader numbers and put them in for the CUDA core count, then made the fill rates work out appropriately. The rest seems accurate though.
Well, the CUDA core count could be true, but only if they drop the hotclock, so the shaders work at half speed (by half I mean 1:1 with the core clock)? How would that translate into a performance gain or loss, knowing the old one ran at 772 MHz / 1544 MHz? This looks really similar to Tahiti, maybe too much so to be honest...
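For what it's worth, a quick sketch of what dropping the hotclock would mean for raw throughput, assuming the rumoured 2304 ALUs at a ~1 GHz 1:1 clock (the clock is my guess, not a leak):
Code:
def sp_gflops(cores, shader_clock_ghz):
    return cores * shader_clock_ghz * 2   # 2 FLOPs per core per clock (FMA)

gtx580 = sp_gflops(512, 1.544)   # 512 cores @ 1544 MHz hotclock -> ~1581 GFLOPS
kepler = sp_gflops(2304, 1.0)    # rumoured 2304 ALUs @ ~1 GHz   -> ~4608 GFLOPS
print(gtx580, kepler, kepler / gtx580)   # ~2.9x on paper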
That's what I'm saying. I feel like it's fake from the sheer fact that it looks almost identical to AMD's shader count.
There's nothing weird about the shader counts being similar. Both are based on the same multiples. And with the hotclock dropped, they might have room for more shaders.
Yep, you are right... and even if they drop the hotclock, can they really push their CUDA cores (which are not the same as AMD's ALUs) up to 2304 ALUs? That's more than 4x Fermi... I don't think NVIDIA has decided to completely redesign their CUDA cores from Fermi... If those numbers are true, it means GCN and Kepler are nearly identical in the way they work (outside the PolyMorph engine etc.), maybe 32 or 64 ALUs per SIMD and fewer CUs (called GPCs in Fermi) than AMD has SIMDs.
But well, again, outside the PolyMorph engine etc., that would just change the cache configuration.
One AMD ALU (or vertex shader) is absolutely not the same as one current Fermi ALU... so if they end up in the same range of ALU counts, a big change has been made by NVIDIA, and it's not only related to the hotclock.
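You can see the point just from the paper FLOPS of the current cards, which are roughly comparable in games despite the gap (a minimal sketch using the public specs):
Code:
def sp_tflops(alus, clock_mhz):
    return alus * clock_mhz * 2 / 1e6   # 2 FLOPs/clock, MHz -> TFLOPS

print(sp_tflops(1536, 880))    # HD 6970: 1536 VLIW ALUs @ 880 MHz -> ~2.70 TFLOPS
print(sp_tflops(512, 1544))    # GTX 580: 512 cores @ 1544 MHz     -> ~1.58 TFLOPS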
It's clear that with each new "specification leak" they double the number of shaders.. we started with 768 SP, then 1024 SP, at some point it was 1536 SP, and now 2304... in two months we should end up with 4176 SP for GK100. (It's still possible if NVIDIA has designed a nearly complete new arch, but well...)
I'm leaning towards either 768 or 1024 CUDA cores, unless the new architecture completely revamps how they do their shader clusters. I'm guessing that Kepler will be a new architecture in the way that GCN is a new architecture: a lot of big changes under the hood, but the same general ideas. I highly doubt we'll see anything in the realm of the 7900 GTX -> 8800 GTX complete overhaul (though the performance gains might be just as big; unlikely, but possible).
I'm not sold on the idea that they are dropping the hotclock on the new cards, for one reason: size. To maintain performance they would need to more than double the CUDA cores from the GTX 580, which would take up a huge amount of room and lead to a mind-blowing transistor count. Whereas if they keep the hotclock, they could simply double the CUDA cores to get close to double the performance, which should be attainable on the die shrink.
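Putting the size argument in numbers (a rough sketch; the ~1 GHz no-hotclock figure is an assumption on my part):
Code:
target = 2 * 512 * 1.544 * 2                  # "2x GTX 580" raw SP throughput, ~3162 GFLOPS

cores_with_hotclock = target / (1.544 * 2)    # keep the 1544 MHz hotclock -> 1024 cores
cores_without_hotclock = target / (1.0 * 2)   # drop to ~1 GHz 1:1         -> ~1581 cores
print(cores_with_hotclock, cores_without_hotclock)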
^this
Even with a new architecture, they would lose a lot of performance by dropping the hotclock. In fact, I've always wondered why AMD hasn't tried to implement a similar idea in their GPUs after the success of the 4870. A modular GPU design with a hotclock could, in theory, lead to very high performance without too much additional power consumption.
It's bull:banana::banana::banana::banana: because NVIDIA had an official slide that showed a 2x (or more?) increase in sustained DP FLOPS/W.
Believe what you want, but going from a 1:2 DP ratio to a 1:4 one on GK100 would just be plain stupid.
They might drop the hotclock though; it gives them more headroom in base clock speed. It's fundamentally harder to clock an already highly clocked design faster by the same magnitude, if you get what I mean. I just don't know how much extra space it gives them, but either way, probably not enough for them to move to 1:4 DP:SP, and the others have a crazy ratio.
They probably gain the extra DP FLOPS/W for the same reasons AMD does: by doubling the shaders and dropping the hotclock. AMD has a lot more TFLOPS of paper power than NVIDIA, but the cards still perform about the same. It means nothing, really.
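A rough sketch of the DP FLOPS/W argument, assuming ~1 GHz clocks, the rumoured 2304 cores and 250 W for both chips (all assumptions on my part, not leaked figures):
Code:
def dp_gflops_per_watt(sp_gflops, dp_ratio, tdp_w):
    return sp_gflops * dp_ratio / tdp_w

fermi       = dp_gflops_per_watt(512 * 1.544 * 2, 0.5, 250)   # GF110 at 1:2 -> ~3.2
kepler_1to2 = dp_gflops_per_watt(2304 * 1.0 * 2, 0.5, 250)    # 1:2 ratio    -> ~9.2
kepler_1to4 = dp_gflops_per_watt(2304 * 1.0 * 2, 0.25, 250)   # 1:4 ratio    -> ~4.6
print(fermi, kepler_1to2, kepler_1to4)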
What we want to see is the performance per watt, as in frame rates in the important games these cards will actually be used for. It's obvious Kepler isn't going to be slower, but if NVIDIA is going to keep making cards with a ridiculous power draw, then it had better damn well make up for it in performance, better than the 580 vs. 6970 did. The single-GPU flagship should be at least 30% faster than a 7970, 'cause a 20% gain while sucking down 120 more watts would be a joke.
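A quick perf/watt sanity check on those two scenarios (assuming the 7970's 250 W maximum board power; the Kepler figures are just the hypotheticals from this post):
Code:
def perf_per_watt(relative_perf, watts):
    return relative_perf / watts

hd7970   = perf_per_watt(1.00, 250)
kepler_a = perf_per_watt(1.20, 250 + 120)  # +20% perf, +120 W -> worse perf/W than the 7970
kepler_b = perf_per_watt(1.30, 250)        # +30% perf at the same power -> clearly better
print(hd7970, kepler_a, kepler_b)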
I just came across this... it seems more complete than the image posted earlier.
http://www.hd-tecnologia.com/imagene...idiakepler.jpg
If the GTX 680 does have a 512-bit memory bus, I hope it has 4 GB of VRAM, because the 7970 has 3 GB and it would look pathetic if it only had 2 GB. Hopefully NVIDIA won't pair a wider memory bus with less VRAM.
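For what it's worth, the bus width more or less dictates the sensible VRAM sizes, since each GDDR5 chip has a 32-bit interface (ignoring clamshell mode); a quick sketch:
Code:
def vram_gb(bus_bits, gbit_per_chip):
    chips = bus_bits // 32             # one 32-bit GDDR5 chip per 32 bits of bus
    return chips * gbit_per_chip / 8   # Gbit per chip -> GB total

print(vram_gb(512, 1))   # 512-bit with 1 Gbit chips -> 2 GB
print(vram_gb(512, 2))   # 512-bit with 2 Gbit chips -> 4 GB
print(vram_gb(384, 2))   # 384-bit with 2 Gbit chips -> 3 GB (the 7970 config)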