Quote Originally Posted by DilTech
Yes, in hardware, it just doesn't have total dx10.1 support. It supports some features in hardware. Read the link.
I did read the link and the entire article when it came out.

What Nvidia is actually saying is that it CAN support some DX10.1 features by coding drivers to "expose" what the hardware already does. In other words, it can get the same result by exploiting specifics of the hardware. But Nvidia does not have a DX10.1 part.

The entire page needs to be read to get proper context. If NV could actually exploit DX10.1 features from a performance standpoint, does anyone actually think they would force the removal of 10.1 support in Assassin's Creed?
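For context, the 10.1 path in Assassin's Creed was widely reported to read the multisampled depth buffer directly during its anti-aliasing post-process, saving a full render pass; that is the "multisample readback" everyone keeps pointing at. A minimal sketch of that kind of setup, assuming a D3D10.1 device (the formats and the helper function below are illustrative, not code from the game):

Code:
// Sketch only: create an MSAA depth buffer that a later post-process pass
// can read sample-by-sample. Binding a multisampled depth texture as a
// shader resource is the D3D10.1-level capability; the 24-bit depth /
// 8-bit stencil formats used here are illustrative.
#include <d3d10_1.h>

HRESULT CreateReadableMsaaDepth(ID3D10Device1* device, UINT width, UINT height,
                                UINT samples,
                                ID3D10Texture2D** tex,
                                ID3D10DepthStencilView** dsv,
                                ID3D10ShaderResourceView** srv)
{
    // Typeless storage so the same texture can be viewed as a depth target
    // while rendering and as a readable resource afterwards.
    D3D10_TEXTURE2D_DESC td = {};
    td.Width = width;
    td.Height = height;
    td.MipLevels = 1;
    td.ArraySize = 1;
    td.Format = DXGI_FORMAT_R24G8_TYPELESS;
    td.SampleDesc.Count = samples;
    td.Usage = D3D10_USAGE_DEFAULT;
    td.BindFlags = D3D10_BIND_DEPTH_STENCIL | D3D10_BIND_SHADER_RESOURCE;
    HRESULT hr = device->CreateTexture2D(&td, NULL, tex);
    if (FAILED(hr)) return hr;

    // Depth view used while rendering the scene.
    D3D10_DEPTH_STENCIL_VIEW_DESC dd = {};
    dd.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
    dd.ViewDimension = D3D10_DSV_DIMENSION_TEXTURE2DMS;
    hr = device->CreateDepthStencilView(*tex, &dd, dsv);
    if (FAILED(hr)) return hr;

    // Shader view used by the post-process: the shader declares
    // Texture2DMS<float> and Load()s individual depth samples instead of
    // redrawing the scene just to get depth again.
    D3D10_SHADER_RESOURCE_VIEW_DESC sd = {};
    sd.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
    sd.ViewDimension = D3D10_SRV_DIMENSION_TEXTURE2DMS;
    return device->CreateShaderResourceView(*tex, &sd, srv);
}

Skipping that extra depth pass is exactly the sort of win the 10.1 path delivered before the patch pulled it.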

We know that both G80 and R600 supported some of the DX10.1 feature set. Our goal, at the least, has been to determine which, if any, features were added to GT200. We would ideally like to know which DX10.1-specific features GT200 does and does not support, but we'll take what we can get. After asking our question, this is the response we got from NVIDIA Technical Marketing:

"We support Multisample readback, which is about the only dx10.1 feature (some) developers are interested in. If we say what we can't do, ATI will try to have developers do it, which can only harm pc gaming and frustrate gamers."

The policy decision that has led us to run into this type of response at every turn is reprehensible. Aside from being blatantly untrue at any level, it leaves us to wonder why we find ourselves even having to respond to this sort of statement. Let's start with why NVIDIA's official position holds no water, and then we'll get on to the bit about what it could mean.

The statement that multisample readback is the only thing some developers are interested in is untrue: cube map arrays come in quite handy for simplifying and accelerating multiple applications. Necessary? No. Useful? Yes. Separate per-MRT blend modes could become useful as deferred shading continues to evolve, and part of what would be great about supporting these features is that they allow developers and researchers to experiment. I get that not many devs will get up in arms about int16 blends, but some DX10.1 features are interesting and, more to the point, would be even more compelling if both AMD and NVIDIA supported them.
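To make the per-MRT point concrete: in plain D3D10 every render target shares one blend mode, while D3D10.1 lets each target blend differently, which starts to matter once a deferred renderer writes lighting and G-buffer data in the same pass. A minimal sketch, again assuming a D3D10.1 device (the specific blend choices are just illustrative):

Code:
// Sketch only: a D3D10.1 blend state where each MRT gets its own blend mode.
// IndependentBlendEnable is the 10.1-level switch; the particular settings
// below -- additive on RT0, opaque on RT1 -- are illustrative.
#include <d3d10_1.h>

ID3D10BlendState1* CreatePerMrtBlend(ID3D10Device1* device)
{
    D3D10_BLEND_DESC1 bd = {};
    bd.IndependentBlendEnable = TRUE;

    // RT0: additive, e.g. an accumulated light buffer.
    bd.RenderTarget[0].BlendEnable           = TRUE;
    bd.RenderTarget[0].SrcBlend              = D3D10_BLEND_ONE;
    bd.RenderTarget[0].DestBlend             = D3D10_BLEND_ONE;
    bd.RenderTarget[0].BlendOp               = D3D10_BLEND_OP_ADD;
    bd.RenderTarget[0].SrcBlendAlpha         = D3D10_BLEND_ONE;
    bd.RenderTarget[0].DestBlendAlpha        = D3D10_BLEND_ONE;
    bd.RenderTarget[0].BlendOpAlpha          = D3D10_BLEND_OP_ADD;
    bd.RenderTarget[0].RenderTargetWriteMask = D3D10_COLOR_WRITE_ENABLE_ALL;

    // RT1: no blending, e.g. a normal/material G-buffer plane.
    bd.RenderTarget[1].BlendEnable           = FALSE;
    bd.RenderTarget[1].SrcBlend              = D3D10_BLEND_ONE;
    bd.RenderTarget[1].DestBlend             = D3D10_BLEND_ZERO;
    bd.RenderTarget[1].BlendOp               = D3D10_BLEND_OP_ADD;
    bd.RenderTarget[1].SrcBlendAlpha         = D3D10_BLEND_ONE;
    bd.RenderTarget[1].DestBlendAlpha        = D3D10_BLEND_ZERO;
    bd.RenderTarget[1].BlendOpAlpha          = D3D10_BLEND_OP_ADD;
    bd.RenderTarget[1].RenderTargetWriteMask = D3D10_COLOR_WRITE_ENABLE_ALL;

    ID3D10BlendState1* state = NULL;
    device->CreateBlendState1(&bd, &state);
    return state;
}

The returned state is then bound with the usual OMSetBlendState call for the pass in question; on a 10.0 part you simply cannot express two different blend modes in one pass like this.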