Nice info! Thanks Theo!
Yessir, that's why I made sure I posted it in this thread and nowhere else :)
I wanted to make sure I posted something of value when I got it, especially when you consider the fact that I was originally one of the first few people propping up this thread with my own BS. :)
Realistically speaking, nobody should be surprised at all. Anyone who's been following the situation could tell they were saving the fully-fledged part for GTC, especially after I hounded them about CUDA performance and they were mum.
Expect a new CUDA Compute and fully-fledged Kepler :)
Yields already suck. There is no way they will double the die size. Not without a good expectation of solving the process problems.
At this point, I'm not entirely sure. I don't recall them EVER making a dual-GPU Tesla part. That's the only reason why I'd shy away from such a view...
http://en.wikipedia.org/wiki/Nvidia_Tesla
And yes, I think we would be back to 588 mm² (to be exact)... which we all know NVIDIA has a lot of experience dealing with :P
But at the same time, I'm not entirely sold that they'd use the exact same chip. I feel like there would be trade-offs... I'm just not convinced they'd build a HUGE chip that's double the size.
I could easily see their 780 being ~2400 cores, but not much higher than that.
And why not use dual GPU for CUDA? I thought servers wouldn't even care how many GPUs they have, since they aren't limited to 4 like Windows for home is. This is the first time they've had such a small chip with great perf/mm².
Do you think they would take the chip as is, or need to make serious modifications for CUDA apps?
There is much, much more to GPGPU performance than the number of cores. Two GK104s put together would not cut it, not at all.
I concur. I think they will have to re-work it, and I'm not entirely sure the GPGPU part will be a 3000+ core part; the 2400-core part seems more plausible to me (I was thinking more like 2034).
The worst part is that I think people are confusing the dual-GK104 part with the GPGPU one being shown at GTC. I don't necessarily think they'll be one and the same.
So, if single core Kepler is two slots, will dual-core Kepler be 3 or 4 slots?
What waste? If you designed the GPU with multicore in mind then there's no reason for tons of wasted resources.
Beyond that, he didn't mean "dual-core" as in two cores glued together...
Exactly, I should have said two-GPU, sorry.
If you design such a large die, could it not be possible to design it with salvage in mind rather than redundancy? That is, instead of including duplicate parts to ensure dies can meet minimum specs (which increases die size), could you design a chip so that it could be cut into smaller dies and still be sold? The extra interconnect logic on a fully working die may not be much greater than the otherwise redundant logic. And since the percentage of fixed logic a GPU needs is shrinking each generation, the parts replicated so dies could be cut would have less and less of an impact, which could be countered by the increased yield of saleable stock.
This would allow as an example:
100% of die intact = gk100/110 - sold as Tesla at big bucks
small defect on one half - die can be cut into 1 fully enabled gk104 and 1 crippled gk104, or left as 1 crippled gk100/110 which could be sold as a halo 'ultra' gaming card.
small defect on both halves - die can be cut into 2 crippled gk104.
large defect on one half - 1 fully enabled gk104, defective area discarded.
Excess gk100 and demand for gk104 - cut die in half to give 2 gk104.
Makes a large-die design more feasible, as there is far less waste, and the design costs would be shared between the product lines. Is there any glaring reason I'm overlooking why this couldn't be possible?
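The yield arithmetic behind this salvage idea can be sketched with a simple Poisson defect model. All the numbers below are illustrative assumptions (a 588 mm² die made of two GK104-sized halves, and a guessed defect density), not real 28 nm figures:

```python
import math

# Assumed numbers for illustration only (not from this thread):
HALF_AREA_MM2 = 294      # one GK104-sized half of a hypothetical 588 mm^2 die
DEFECT_DENSITY = 0.005   # defects per mm^2 (0.5/cm^2), a guess for an early process

# Poisson model: probability that one half of the die is defect-free
p_half_good = math.exp(-DEFECT_DENSITY * HALF_AREA_MM2)

p_both_good = p_half_good ** 2                     # full gk100/110 -> Tesla
p_one_good = 2 * p_half_good * (1 - p_half_good)   # cut: 1 good gk104 (+ maybe a crippled one)
p_none_good = (1 - p_half_good) ** 2               # cut: 2 crippled gk104 at best

# Without cutting, only a fully intact die is a saleable top part
yield_monolithic = p_both_good

# With salvage-by-cutting, any die with at least one good half yields product
yield_with_cutting = p_both_good + p_one_good

print(f"one half defect-free:        {p_half_good:.3f}")
print(f"saleable, monolithic only:   {yield_monolithic:.3f}")
print(f"saleable with cutting:       {yield_with_cutting:.3f}")
```

Under these guessed inputs the fraction of dies with at least one good half is several times the fraction that are fully intact, which is the "far less waste" argument in numbers; the real economics would of course depend on the actual defect density and on whether the crippled halves are worth selling.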
Not sure if GTX launch or zombie apocalypse... Got a package from Nvidia a couple of days ago... they sent me a crowbar...