Well, everyone is "assuming". We do not know any facts, so we can just "assume". No harm in doing that.
But if you have some "hard facts" you want to contribute to clear up some matters, please do; we would like that.
It seems like assuming the worst is easier than assuming the best nowadays.
How naive would somebody have to be to think that nVIDIA or any other manufacturer would decide on something without weighing the effect of their decision with several performance tests?
Anyways... I stopped giving a f*, so why would I want to share any information now?
No matter what people could show or not show, it would start a flame war and a conspiracy theory as usual.
There's no way of stopping this negative assumptionism, and TBH I wouldn't like to see it stop, it's quite entertaining!
Well, the fx series did not properly support the DX9 standards, and it was shown later that it did poorly in DX9 games, so i think nvidia already showed it can make such a "bad" decision, overlooking the full implementation of a Directx API.
Very true. I believe that was because nVidia put all their eggs into the OpenGL and Cg basket, where the FX series did do quite well. I recall that Tomb Raider: Angel of Darkness (a DirectX 9 game) brought the FX cards to their knees and even ran choppily in some areas on the Radeon 9700; however, using the Cg shader path it ran quite well on the FX (almost as well as on the 9700).
I might be wrong though...
As for tessellation, ATi used to do hardware tessellation on the Radeon 8500 (TruForm, I believe it was once called), and it was really good in Return to Castle Wolfenstein and Serious Sam. However, the Radeon 9700 series cards dropped the hardware tessellation and opted for software TruForm II, which killed performance... It would be interesting to see how the GT300 copes with tessellation, as IMHO I can see nVidia having the edge with DirectX 11 compute, but ATi having the edge with the tessellation features.
John
Fermi to be in a petascale supercomputer.
http://www.brightsideofnews.com/news...rcomputer.aspx
http://www.evga.com/forums/tm.asp?m=...ey=
I was forced to use a USB monitor as the GPUs don't have any video output (these engineering samples of Fermi are Tesla-like, but they have 1.5GB of memory each, like the GT300 will).
Because of the new MIMD architecture (they have 32 clusters of 16 shaders) I was not able to load them at 100% in any other way but to launch 1 F@H client per cluster and per card. Every client is the GPU3 core beta (OpenMM library). I suppose it is much more efficient than the previous GPU2. In addition, they need very little memory to run. Having 16GB of DDR3 and using Windows 7 Enterprise, I've managed to run 200 instances of F@H GPU and 4 CPU (i7 processor, HT off). The 7th card is not fully loaded. This could also be an issue with the EVGA X58 mobo.
I use two Silverstone Strider PSUs together, 1500W each, which is probably too much, but now I'm experimenting with overclocking (the cards are factory unlocked). The maximum power consumption I've noticed was 2400W.
The whole system is cooled by my own liquid CO2 construction, which is heavy and inconvenient, and I have to supply a new cylinder every 5 days.
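For what it's worth, his client count roughly adds up if you take those specs at face value. A quick hypothetical check in Python (all numbers are his claims, nothing verified):

clusters_per_card = 32                        # "32 clusters of 16 shaders", as claimed
cards = 7
max_clients = clusters_per_card * cards       # 224 if every cluster ran its own client
claimed_clients = 200
full_cards = claimed_clients // clusters_per_card                    # 6 cards fully loaded
left_on_seventh = claimed_clients - full_cards * clusters_per_card   # 8 of 32, hence "the 7th card is not fully loaded"
print(max_clients, full_cards, left_on_seventh)                      # 224 6 8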
It's probably actually closer to:
(30) 275's + CPU now a push = (7) Fermi
(30) 275's / (7) Fermi = 4.285 X as fast...
Power considerations:
He has (2) 1500W PSU's, using 2400 watts.
So let's say his i7 CPU uses 150 watts...
2400 watts - 150 = 2,250 GPU watts total.
2,250 / 7 Fermi = 321 watts per GPU...
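That back-of-the-envelope math as a quick Python sketch (all figures are the guesses above, nothing measured):

gtx275_count = 30                               # claimed: 30 GTX 275s plus a CPU are now a push with...
fermi_count = 7                                 # ...7 Fermi cards
speedup = gtx275_count / fermi_count            # ~4.29x as fast per card

total_draw_w = 2400                             # wall draw he reported
cpu_w = 150                                     # rough guess for the i7
gpu_w = (total_draw_w - cpu_w) / fermi_count    # ~321 W per card
# note: this treats the PSUs as 100% efficient; the real DC-side load would be lower
print(round(speedup, 2), round(gpu_w))          # 4.29 321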
Heh, so I guess you guys figure the PSU is 100% efficient too.
In any case, there's no reason to believe him until he provides some kind of proof.
http://foldingforum.org/viewtopic.ph...=11717#p114890
A little bird told me 4 SMP, 31 GPU, and the rest are CPU clients (at least in the last 7 days).
Or perhaps they just decided not to take part in the discussion, which would be the better choice. I mean, we have so many "know it all" people on this forum.
Fans may stay out of the discussion...
But fanboys... no chance! They will enter every discussion regarding their beloved brand (not product) or its competition, and they will blindly defend/attack the beloved brand/competition at every opportunity they get, without facts, arguments, or logical explanations.
I don't know. I just see a fair amount of assumptions from both sides of the fence.
But as BenchZowner mentioned in an earlier post, it is easier to assume the worst.
describes this guy
Heh, looks like a lot of guys are Nvidia fanboys and don't know it yet.
I was quite sure it was fishy right from the beginning. NVIDIA might have a few cards running, but why would they give that many cards to one person? Just doesn't make any sense at all, imho.
By USB monitors, something like this could be meant:
http://www.lindy.de/usb-2-vga-adapter/42983.html
Maybe not that weird after all:
http://www.evga.com/products/moreInf...-A1&family=USB
So does anyone have cold hard facts of proven Fermi performance??? Or is it all going to be some lame video claiming it rendered some movie that came out 6 months ago???