Has this been posted?
Nvidia GT300 whitepaper disclosed
http://en.inpai.com.cn/doc/enshowcont.asp?id=7137
I thought they would simply decompose the DirectX tessellation calls into instructions for the CUDA processors (formerly shader processors) rather than into some specific tessellator instructions in the drivers, since I've always assumed that the instruction set of each architecture wouldn't map directly onto the external interface the drivers expose. I wouldn't be surprised if I'm wrong, though, since I'm more on the software side of things than the hardware side :D
Yeah, that part I knew about and have already mentioned. What I don't know is how many CUDA processors they will have to dedicate to tessellation, and therefore how much of an impact it will have on performance in real-world apps. That's why I said we'll only know once we have some real-world data.
Of course it might turn into a huge performance impact :yepp:. Then... ouch!
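For anyone wondering what "doing the tessellation on the CUDA processors" would actually involve, here's a toy sketch of uniform midpoint subdivision in Python. To be clear, this is just my illustration of the kind of data-parallel work a tessellator performs, not anything from NVIDIA's whitepaper, and the function names are made up.
Code:
# Toy uniform tessellation: split each triangle into four via edge midpoints.
# Purely illustrative; a real pipeline deals with tessellation factors, patches, etc.

def subdivide(tri):
    """Split one triangle into four by inserting the three edge midpoints."""
    a, b, c = tri
    mid = lambda p, q: tuple((pi + qi) / 2.0 for pi, qi in zip(p, q))
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def tessellate(tris, levels):
    """Apply the subdivision 'levels' times; every pass is embarrassingly parallel,
    which is why it could in principle be mapped onto compute/CUDA cores."""
    for _ in range(levels):
        tris = [small for tri in tris for small in subdivide(tri)]
    return tris

# One triangle, 3 levels of subdivision -> 4**3 = 64 triangles
base = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
print(len(tessellate(base, 3)))  # 64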
Yep, the whitepaper is accessible through the Fermi website, along with some analyst white papers that NVIDIA paid to have written about Fermi, and I think it was in this forum that I got the link to the white paper.
Here is the link to the Fermi site, and here is the white paper itself...
Well yeah, but how do they decompose DirectX tessellation calls? Through CUDA, the additional software layer I was talking about...
Whereas if you have hardware tessellation, you just send those calls directly to the tessellators; no need for intermediaries.
At least that's how I assume it will be done; we'll wait to see the reviews in Dec/Jan for more info on this issue.
Well, everyone is "assuming". We don't know any facts, so all we can do is "assume". No harm in that.
But if you have some "hard facts" you want to contribute to clear some matters up, please do; we would like that.
Seems like assuming the worst is easier than assuming the best nowadays.
How naive would somebody have to be to think that nVIDIA or any other manufacturer would decide on something without weighing the effect of their decision with several performance tests?
Anyways... I've stopped giving a f*. Why would I want to share any information now?
No matter what people might or might not show, it would start a flame war and a conspiracy theory as usual.
There's no way of stopping this negative assumptionism, and TBH I wouldn't want it to stop; it's quite entertaining :D
Well, the FX series did not properly support the DX9 standard, and it was later shown to do poorly in DX9 games, so I think Nvidia has already shown it can make such a "bad" decision and overlook the full implementation of a DirectX API.
Very true. I believe that was because nVidia put all their eggs into the OpenGL and Cg basket, where the FX series did do quite well. I recall that Tomb Raider: Angel of Darkness (a DirectX 9 game) brought the FX cards to their knees and even ran choppy in some areas on the Radeon 9700; however, using the Cg shader path it ran quite well on the FX (almost as well as on the 9700).
I might be wrong though...
As for tessellation, ATi used to do hardware tessellation on the Radeon 8500 (Truform, I believe it was once called), and it was really good in Return to Castle Wolfenstein and Serious Sam. However, the Radeon 9700 series cards dropped the hardware tessellation and opted for software Truform II, which killed performance... It would be interesting to see how the GT300 copes with tessellation, as IMHO I can see nVidia having the edge with DirectX11 compute, but ATi having the edge with the tessellation features.
John
Fermi to be in a petascale supercomputer.
http://www.brightsideofnews.com/news...rcomputer.aspx
http://www.evga.com/forums/tm.asp?m=...ey=
Quote:
I was forced to use a USB monitor as the GPUs don't have any video output (these engineering samples of Fermi are Tesla-like, but they have 1.5GB of memory each, like the GT300 will).
Because of the new MIMD architecture (they have 32 clusters of 16 shaders) I was not able to load them at 100% in any other way but to launch 1 F@H client per cluster per card. Every client is the GPU3 core beta (OpenMM library). I suppose it is much more efficient than the previous GPU2. In addition, they need very little memory to run. Having 16GB of DDR3 and using Windows 7 Enterprise, I've managed to run 200 instances of F@H GPU and 4 CPU (i7 processor, HT off). The 7th card is not fully loaded. This could also be an issue with the EVGA X58 mobo.
I use two Silverstone Strider PSUs, 1500W each, which is probably too much, but now I'm experimenting with overclocking (the cards are factory unlocked). The max power consumption I've noticed was 2400W.
The whole system is cooled by my own liquid CO2 construction, which is heavy and inconvenient, and I have to supply a new cylinder every 5 days.
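Just to run the numbers in that quote (taking the claims at face value), the client counts are at least self-consistent:
Code:
# Sanity check on the quoted figures, assuming they're real.
cards = 7
clusters_per_card = 32   # "32 clusters of 16 shaders", as claimed in the post
gpu_clients = 200        # F@H GPU clients he says he's running

print(cards * clusters_per_card)             # 224 clients if every cluster had one
print(gpu_clients - 6 * clusters_per_card)   # 8 -> the 7th card runs only 8 of 32 clients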
Quote:
It's probably actually closer to:
(30) 275s + CPUs, now a push, = (7) Fermi
(30) 275s / (7) Fermi = ~4.285x as fast...
Power considerations:
He has (2) 1500W PSUs, drawing 2400 watts.
So let's say his i7 CPU uses 150 watts...
2400 watts - 150 = 2,250 GPU watts total.
2,250 / 7 Fermi = ~321 watts per GPU...
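Here's the same back-of-the-envelope math in code, with a PSU-efficiency correction thrown in. The 87% figure is purely my assumption; the original math implicitly treats the supplies as 100% efficient.
Code:
# Per-GPU power estimate from the quoted figures, with and without PSU losses.
wall_watts = 2400      # measured at the wall, per the quoted post
cpu_watts = 150        # guessed i7 draw, per the quote
fermi_cards = 7
psu_efficiency = 0.87  # assumed; not from the post

dc_watts = wall_watts * psu_efficiency
print(round((wall_watts - cpu_watts) / fermi_cards))  # ~321 W per GPU at 100% efficiency
print(round((dc_watts - cpu_watts) / fermi_cards))    # ~277 W per GPU at 87% efficiency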
Heh, so I guess you guys figure the PSU is 100% efficient too :) In any case there's no reason to believe him until he provides some kinda proof.
http://foldingforum.org/viewtopic.ph...=11717#p114890
Quote:
A little bird told me 4 SMP, 31 GPU, and the rest are CPU clients (at least in the last 7 days).
Or perhaps they just decided not to take part in the discussion, which would be the better choice. I mean, we have so many "know it all" people on this forum.
Fans may stay out of the discussion...
But fanboys... no chance! They will enter every discussion regarding their beloved brand (not product) or its competition, and will blindly defend/attack the beloved brand/competition at every opportunity they get, without facts, arguments or logical explanations.
I don't know. I just see a fair amount of assumptions from both sides of the fence.
But as BenchZowner mentioned in an earlier post, it is easier to assume the worst.
http://img520.imageshack.us/img520/1365/54632720.png describes this guy
Heh, looks like a lot of guys are Nvidia fanboys and don't know it yet.