Quote Originally Posted by saaya View Post
That's not the point. To be able to run it at all, the architecture has to work 100% as it should; all instructions and combinations of instructions need to behave exactly as specified. There are almost always some bugs, and yes, you can work around them at the compiler level AFAIK, but it takes time to figure that out. You need to know about a bug before you can work around it, and do so in a way that doesn't cost you a lot of performance...

They showed GT300 silicon that was supposedly so fresh out of the oven it was still steaming, yet they had it running highly complex math, pounding every transistor of the new pipeline like there's no tomorrow, at very high performance and without any bugs. I'm not saying it's impossible, but it's definitely something that raised my eyebrow, especially since it's not the only thing they showed supposedly running on GT300. Going by those demos, GT300 was 100% done: no bugs, no driver issues, nothing... just waiting for lame old lazy TSMC...
I'm seeing a lot of conjecture on your part, but not much else. While I'm willing to give them the benefit of the doubt, you're simply doubting, based on nothing more than what you assume the application demands. GPGPU work in general demands very little from a substantial portion of a graphics chip, particularly the texture units and ROPs. To claim every transistor needs to be pumping at full throttle at all times is a bit silly. They might also have had to clock it way down, use crazy cooling, high voltage, whatever, to get the transistors related to computation into working order. Who knows?

But neither of us is going to get anywhere with this. Like I said, debating this is a waste of time.