Regarding the tessellation performance of Fermi, I'd wait to see the comparison in real-world games. IF (and I don't know whether that's true, false, or somewhere in between) NV100 relies more on its CUDA processors to handle tessellation while RV870 relies more on dedicated fixed-function hardware, I'd expect the former to have a huge advantage in a nearly synthetic benchmark whose load is mostly tessellation (because Fermi could throw more resources at it), but the situation would rebalance severely once complex shaders had to be computed in addition to the tessellation, as would most likely be the case in most real-world games. The Heaven benchmark seems to be pretty heavy on tessellation but much lighter on every other kind of shader. Maybe that's why NV is focusing so much on the Unigine Heaven benchmark. Maybe not. We will see... when we have proper reviews and real-world use cases.
Same as with general performance... I find it hard to believe numbers like the ones Charlie is claiming.
They can't do that. If they did, CUDA would quickly become the standard for GPGPU in the mainstream market, most developers would use it, and that would give NV a huge competitive advantage over AMD/ATI. Even if it only meant that NV would know much sooner than its competitors how CUDA would evolve in the future, it would always be a step ahead (with no chance for anyone to catch up) in mainstream computing. And they could always extract even more advantage by exploiting their rights over their proprietary API once it had become settled and standardized in the mainstream market, much like Creative Labs did with EAX.
My impression is that AMD is focusing on gaining some market share in the mainstream while NVIDIA tries to open up and capture the emerging HPC market, and I think that could be a mistake for AMD in the long term... unless they simply don't want to enter that market, which could be a different mistake. In the mainstream... well, I think that's more the territory of vendor-independent solutions like DirectCompute and OpenCL.
Yeah, I have always thought that was going to be the worst problem for Fermi ever since we knew the cards would be SO late. Competing against a hardware product (even more so with graphics cards) whose life cycle is six months further along is painful. The price of such products naturally follows a declining curve over its life, and a six-month delay on a product like this is too much IMO. Indeed, I think the current prices of the HD5800 series are not natural at this point and are only due to the lack of any competition until now. I'd expect price drops as soon as the GTX400 cards are released (or shortly after), unless Fermi is worse than I think it will be.