Has this been posted?
Nvidia GT300 whitepaper disclosed
http://en.inpai.com.cn/doc/enshowcont.asp?id=7137
I thought they would simply decompose the DirectX tessellation calls into CUDA processor (formerly shader processor) instructions in the drivers, rather than into some specific tessellator instructions. I've always assumed that each architecture's instruction set wouldn't map directly onto the external interface the drivers expose, but I wouldn't be surprised if I'm wrong, since I'm more on the software side of things than the hardware side :D
Yeah, that part I knew about and have already mentioned. What I don't know is how many CUDA processors they will have to use to do the tessellation, and therefore how much of an impact it will have on performance in real-world apps; that's why I said we'll only be able to tell once we have some real-world data.
Of course it might turn into a huge performance impact :yepp:. Then... ouch!
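For what it's worth, here is a rough Python sketch (purely illustrative, not based on anything NVIDIA has disclosed) of the kind of work the fixed-function tessellator stage normally does, i.e. the work that would land on the CUDA processors if the drivers really did it in software. Real DX11 tessellation also handles per-edge factors and fractional partitioning, so this is heavily simplified.
Code:
# Uniform tessellation of a triangle patch: for an integer tessellation
# factor n, emit the barycentric sample points that the domain shader
# would then displace.
def tessellate_triangle(n):
    points = []
    for i in range(n + 1):
        for j in range(n + 1 - i):
            k = n - i - j
            points.append((i / n, j / n, k / n))  # barycentric coordinates
    return points

print(len(tessellate_triangle(16)))  # a factor-16 patch already yields 153 points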
Yep, the whitepaper is accessible through the Fermi website, along with some analyst white papers that NVIDIA paid to have written about Fermi, and I think it was on this forum that I got the link to the white paper.
Here is the link to the Fermi site, and here is the white paper itself...
Well yeah, but how do they decompose DirectX tessellation calls? Through CUDA, the additional software layer I was talking about.
Whereas if you have hardware tessellation, you just send those calls directly to the tessellators, no need for intermediaries.
At least that's how I assume it will be done; we'll have to wait for the reviews in Dec/Jan for more info on this issue.
well, everyone is "assuming". We do not know any facts, so we can just "assume". No harm in doing that.
But if you have some "hard facts" you want to contribute to clear some matters, please do, we would like that.
Seems like assuming the worst is easier than assuming the best nowadays.
How naive would somebody have to be to think that nVIDIA or any other manufacturer would decide upon something without weighing the effect of their decision with several performance tests?
Anyways... I stopped giving a f*, why would I want to share any information now?
No matter what people could show or not, it would start a flame war and a conspiracy theory as usual.
No way of stopping this negative assumptionism, and TBH I wouldn't like to see this stop, it's quite entertaining :D
Well, the FX series did not properly support the DX9 standard, and it was shown later that it did poorly in DX9 games, so I think NVIDIA has already shown it can make such a "bad" decision and overlook the full implementation of a DirectX API.
Very true. I believe that was because nVidia put all their eggs into the OpenGL and Cg basket, where the FX series did do quite well. I recall that Tomb Raider: Angel of Darkness (a DirectX 9 game) brought the FX cards to their knees and even ran choppy in some areas on the Radeon 9700; however, using the Cg shader path it ran quite well on the FX (almost as good as on the 9700).
I might be wrong though...
As for tessellation, ATi used to do hardware tessellation on the Radeon 8500 (TruForm, I believe it was once called), and this was really good in Return to Castle Wolfenstein and Serious Sam. However, the Radeon 9700 series cards dropped the hardware tessellation and opted for software TruForm II, and this killed performance... It would be interesting to see how the GT300 copes with tessellation; IMHO I can see nVidia having the edge with DirectX 11 compute, but ATi having the edge with the tessellation features.
John
Fermi to be in a petascale supercomputer.
http://www.brightsideofnews.com/news...rcomputer.aspx
http://www.evga.com/forums/tm.asp?m=...ey=
Quote:
I was forced to use a USB monitor as the GPUs don't have any video output (these engineering samples of Fermi are Tesla-like, but they have 1.5GB of memory each, like GT300 will).
Because of the new MIMD architecture (they have 32 clusters of 16 shaders) I was not able to load them at 100% in any other way but to launch 1 F@H client per cluster per card. Every client is the GPU3 core beta (OpenMM library). I suppose it is much more efficient than the previous GPU2. In addition they need very little memory to run. Having 16GB of DDR3 and using Windows 7 Enterprise, I've managed to run 200 instances of F@H GPU and 4 CPU (i7 processor, HT off). The 7th card is not fully loaded. This could also be an issue with the EVGA X58 mobo.
I use two Silverstone Strider PSUs, 1500W each, which is probably too much, but now I experiment with overclocking (the cards are factory unlocked). The max power consumption I've noticed was 2400W.
The whole system is cooled by my own liquid CO2 construction, which is heavy and inconvenient, and I have to supply a new cylinder every 5 days.
Quote:
It's probably actually closer to:
(30) 275's + CPU now a push = (7) Fermi
(30) 275's / (7) Fermi = 4.285 X as fast...
Power considerations:
He has (2) 1500W PSU's, using 2400 watts.
So let's say his i7 CPU uses 150 watts...
2400 watts - 150 = 2,250 GPU watts total.
2,250 / 7 Fermi = 321 watts per GPU...
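For the curious, the quoted back-of-envelope math as a quick Python sketch (every input is the poster's own claim or assumption, not a measured figure):
Code:
gtx275_equivalent = 30      # claimed folding output, expressed in GTX 275s
fermi_cards = 7
print(gtx275_equivalent / fermi_cards)             # ~4.29x a GTX 275 per card

wall_draw_w = 2400          # claimed maximum power consumption
cpu_share_w = 150           # assumed i7 draw
print((wall_draw_w - cpu_share_w) / fermi_cards)   # ~321 W per GPU, ignoring PSU efficiency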
Heh, so I guess you guys figure the PSU is 100% efficient too :) In any case there's no reason to believe him until he provides some kinda proof.
http://foldingforum.org/viewtopic.ph...=11717#p114890
Quote:
A little bird told me 4 SMP, 31 GPU, and the rest are CPU clients (at least in the last 7 days).
Or perhaps they just decided not to take part in the discussion, which would be the better choice. I mean, we have so many "know it all" people on this forum.
Fans may stay out of the discussion...
But fanboys... no chance! They will enter every discussion regarding their beloved brand (not product) or its competition, and will blindly defend/attack the beloved brand/competition at every opportunity they get, without facts, arguments or logical explanations.
I don't know. I just see a fair amount of assumptions from both sides of the fence.
But as BenchZowner mentioned in an earlier post, it is easier to assume the worst.
http://img520.imageshack.us/img520/1365/54632720.png describes this guy
Heh, looks like a lot of guys are Nvidia fanboys and don't know it yet.
I was quite sure it was fishy right from the beginning. NVIDIA might have a few cards running but why would they give such an amount of cards to one person? Just doesn't make any sense at all imho.
By USB monitor, something like this could be meant:
http://www.lindy.de/usb-2-vga-adapter/42983.html
Maybe not that weird after all:
http://www.evga.com/products/moreInf...-A1&family=USB
so does anyone have cold hard facts of proven Fermi performance??? or is it all going to be some lame video claiming it rendered some movie that came out 6 months ago???
There's nothing hard about Fermi's actual performance.
The specs aren't even hard yet.
That's how little we really know about it.
It was confirmed by Folding Forum as fake.
To me it was very strange right from the start.
It was first posted on the EVGA forum by FahMan, who has a single post on that forum. The link he provided in that post, www.fahmanfolding.webs.com, doesn't work.
Strange that FahMan would have 7 Fermis, which is about 7 more than Jen-Hsun Huang could display at the GPU Technology Conference on September 30, 2009 (not counting the ones with wood screws or funny wiring). :)
Talonman, who used to flood the Xtreme News Forum with many threads and posts about NVIDIA CUDA and PhysX, used that single FahMan post to start another thread about it on the same EVGA forum.
Right now the last post in that thread (#49) is:
Quote:
Well, the speculation was fun while it lasted
Did you guys see this?
http://www.pcgameshardware.com/aid,6...as-Fermi/News/
come on
Meh, the author is a good guy and just trying to strike up some discussion and hits. Nothing wrong with that.
These are my thoughts as well. I do hope for our sakes that it does well and is competitive (and in more than just performance; competitive *ALL* around, which is what truly matters: power usage, cost, scalability, etc.). I am personally worried about power usage and cost myself. I am expecting their high-end single-GPU part to have a TDP of 225-250W, which isn't that attractive. As far as real-world usage goes, though, I do expect it to be under 200W (e.g. in games).
Either Nvidia are worried about where Fermi stands gaming-wise, or they are confident and are just playing up the CUDA side of things as a plus rather than the main dish. Again, the proof is in the pudding, so until we see something concrete it is still speculation and guesswork at best.
lol@ the youtube video...
:lol:
Quote:
they are expected to sell like hotcakes, especially in ati fanboy villages, here... and here...
comparing ati's and nvidia's financial situations though... shouldn't it be the ati guys sitting in a bunker surrounded by enemies? ;)
The guy that made this is definitely lacking some brain function when it comes to logic. How can that hand smell like lotion when the penis location is empty? :shakes: From this point of view, part of that so-called description could be related to an nVidia fangirl (not boy).
Anyway, it doesn't really matter, it's just business.
FahMan's results are not fake. He gets legitimate points from a single place, despite the fact everybody is watching him.
Folding@Home analyzed the number of processors and GPUs FahMan uses as follows.
Uniprocessor (sloooow CPU client) : 168
SMP (high performance CPU client) : 25
GPU Clients: 73
Total : 266 clients.
The verdict is that FahMan is probably using workstations at work or school.
I'd like to come back here after the release of Fermi and see how those 6.5 GPUs show up at Folding@Home.
Liquid CO2
"I have one sitting right here. This, ladies and gentlemen, this puppy right here, is Fermi."
http://img29.imageshack.us/img29/6919/fakefermi.jpg
Sure I did watch it. Hard to miss it, since it was posted on thousands of sites all over the world. I actually watched the live broadcast, eager to learn more about Fermi.
The fact is the puppy was not Fermi. Initially NVIDIA PR denied it was a fake; I can only guess there was some communication problem there. Only later did NVIDIA confirm it was a mock-up.
Now why should I believe the rest of the story?
Quote:
The real engineering sample card is full of dangling wires. Basically, it looks like an octopus of modules and wires and the top brass at Nvidia didn't want to give that thing to Jensen.
Anyway, the point I was trying to make is: if the engineering sample was in that state, then FahMan could hardly have seven of the octopus Fermis.
The point some people in this thread want to make is:
Some "nobody" on the internet has 7 working "fermis", while Nvidia's CEO couldn't have even one for his presentation.
:ROTF:
Bypassing the swear filter is not permitted on the XtremeSystems forums. For your convenience an automatic system has been instituted where unedited words classified as swears will turn into a number of banana gif's ( :banana: ), which we feel is sufficiently expressive of your derision towards a certain topic without resulting in language not appropriate for all ages.
Thank you,
Serra
I read Nvidia got a second tape-out of Fermi a while ago.
What happened to the first alpha Fermis?
We know they had no video out and were no good for any presentation.
They should be good for folding, though.
How come someone gets to use workstations at work or school 24 hours a day?
I mean, even though I don't believe FahMan is using Fermis, it is still a mystery what he is folding on, and I cannot exclude a slight probability that he is actually folding on Fermis.
FahMan became a legend in just a few days, breaking every statistic, and he is still folding.
He also gave us a hint that we need to create 16 clients to fold with fermi.
Is this correct? How did he know?
yes, it's possible to cool with liquid co2, so what?
nobody does it, cause it doesn't make sense...
and you certainly can't cool 7 cards sitting right next to each other in a case with liquid co2... to run folding@home... that's just beyond ridiculous!
right, they fly in press from all over the world to show them their new gpu... and then don't show it...
but they DO have it! i saw an ihs with its code name written on it!
and i saw some flashy animations on a big screen that were run on it...
just like rumsfeld had proof of saddam's weapons of mass destruction...
i saw tons of satellite pictures and documents and videos! :P
i'm not saying nvidia didn't have a working gt300 at gtc... what i'm saying is, there is no proof... it would have been possible to do the entire show they did without even having finished the gt300 design concepts and having no silicon whatsoever... as such, the entire event was a failure and a waste of money imo... they didn't show a gt300 wafer, they didn't show a gt300 chip, they didn't show a real gt300 card, they didn't show a real gt300 card in action...
back on topic, and back to reality...
3-4 weeks until gt300 is supposed to be available in retail...
still no signs of it, nothing...
none of the usual suspects is even hinting that they already received a sample, or hinting at the performance...
shops usually get cards just days before the launch, sometimes a week or two... distributors usually have cards 1-2 weeks before launch, card makers have them 2-3 weeks before launch... so evga, xfx, asus, gigabyte etc. "should" all receive their gt300 cards this week or next... let's see :D
First time I found out Santa wasn't real, I was livid.
Parental Units: Ummm, you know Santa Claus is only make believe, right? We think you are old enough to...
Me: Noooooooooooo!!!!! You are lying! He is REAL!! *slams bedroom door*
When nV gets some working Fermis out, they will run GPU2 clients on them.
It's really not that hard to believe they showed early silicon running a fairly simple (or at least short) demo at GTC. I'm not going to go through the arguments of why, because honestly that's a waste of time. But I think people here are being skeptical for the sake of being skeptical.
While I agree with you that it's probable they had some early silicon able to run when the press event took place, the fact is that they didn't show any kind of proof. I mean, yes, they showed one, and it was proven false...
I think people are being skeptical because they haven't been shown any more than a bunch of gimmicks: a whitepaper here, a mock-up there (presented as if it were a real working card)... some people are starting to feel tricked, and so they are willing to be skeptical.
Anyway, I think the point is not whether there is actual working silicon, but working cards. They didn't even have a single card for their CEO to show to the public at the event, and yet there are 7 in the hands of some random guy out there? :rolleyes:
I have good experience with triple-SLI overclocked 8800GTXs and quad SLI on air. Even though I have no experience with more cards, I find it very difficult to believe CO2 is not enough. Liquid CO2 is very cheap. I believe it makes sense to refill a liquid CO2 tank every five days when you need to run 7 overclocked cards 24/7. We are not talking about extreme overclocking here, just 7 cards folding 24/7.
While it is not *impossible* to do, consider:
1. The amount of heat output generated by 7 overclocked cards + CPU is quite high. This will result in a very high burnoff rate for the liquid CO2. Whether a single tank is enough for 5 days (5 days!) is debatable since it all comes down to compression and cylinder size, but it would be a fair amount of CO2 required, which brings us to;
2. Unlike the air you have used to cool your cards, the exhaust from his rig is able to render a man unconscious in a number of seconds and kill him in just a bit longer. There are ways of dealing with this as well via ducting to outdoors, but this just adds in to how elaborate the setup has to be. Again, not impossible... but given he made a single post claim on a random Internet forum, let's not give him too much credit, shall we?
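To put rough numbers on point 1 (a back-of-envelope sketch only; the cooling capacity per kg and the cylinder size are my assumptions, and a bulk tank rather than a standard cylinder would change the picture):
Code:
heat_load_w = 2400                # claimed wall draw, all of it ending up as heat
runtime_s = 5 * 24 * 3600         # the claimed 5 days between refills
energy_j = heat_load_w * runtime_s            # ~1.0e9 J to remove

cooling_per_kg_j = 350_000        # assumed usable cooling per kg of liquid CO2
                                  # (latent heat plus warming of the boil-off gas)
co2_kg = energy_j / cooling_per_kg_j          # ~3,000 kg of CO2
cylinder_kg = 30                  # assumed capacity of a typical large cylinder
print(co2_kg / cylinder_kg)                   # on the order of 100 cylinders, not one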
Who says he knows and didn't pull the numbers out of some place where the sun doesn't shine? ;)
There's still no objective reason to believe his statements were true. Why would NVIDIA hand out cards, even if they are "old" silicon? It's not like they have hundreds of working parts...
The only possibility (imho) would be that fahman works for NVIDIA and they tested folding on the cards. But would NVIDIA test with extreme cooling methods? I don't think so! Would NVIDIA do it without letting the world know how kickass Fermi is at folding? I don't think so!
Just my 0.02 Euro-cents.
i totally agree, kinda like intel's lrb demo...
but they didn't... they "supposedly" ran very taxing gpgpu apps and saw almost double-digit-fold increases in performance... for that to happen gt300 has to be fully functional; gpgpu uses all of the gpu's resources, more than when you play a game most likely...
about being skeptical towards what nvidia claims... well, it doesn't exactly help that they held up a fake card and lied to everybody... ;)
good experience as in heating your room?
as in flickering, stuttering, crashing game sessions? :P
and don't get me wrong, crossfire isn't any better either... and yes it's useful if you need to get the fps up at high resolutions or in a demanding game... but you DON'T want more than one gpu if you CAN play the game with one gpu at the same or almost the same settings... that's at least my experience...
but what does that have to do with cooling 7 cards inside a case with liquid co2 from an external container that you refill every now and then... oh please... kinda like a watercooling loop but with liquid co2? that's a ridiculous idea... you can get the same effect by using chilled water, and guess what, that's what everybody in the industry does... that's how supercomputers are cooled, and that's how amd (i've seen the machine myself) and most likely intel and others as well cool very hot chips when running tests, and that's how they cool them when doing thermal tests and power tests of some chips.
really, trust me... whoever came up with this "7 liquid co2 cooled fermi cards running f@h" thing is either a) mocking people desperate for gt300 info or b) desperate for gt300 himself and doesn't know what he's talking about
as far as i know they do ;)
they had at least 2 cascades and used them to test their chips...
nvidia was the first chip maker to test their stuff at subzero and make sure they don't have cold bugs and scale properly... at the same time as or even before ati, and long before amd...
it's possible they use liquid co2 cooling, but unlikely...
and think about this: a cascade keeping a 200W+ gpu at -100C is going to need quite some juice, my guess is around 800W+ (fugger's triple stage pulls up to 2000W iirc?), and it's not a small cube either... now think about cooling 7 cards... and you get an idea of how ridiculous this is...
and while i can totally imagine nvidia testing gt300 by running f@h on it, i'm sure they actually do, it's unlikely they do it with supercooling, and it's completely ridiculous to think they'd do it with 7 cards in one rig... why would they do that? what for? how does it stress the card in any special way to have 7 in one rig? it actually stresses them less if anything, cause there is a bigger lag from the cpu assigning them work and there is less pciE bandwidth per card to send and receive data... and if they wanted to test 7 cards to make sure it works fine, why would they supercool them? for what? so they can brag on some enthusiast forum about having a bigger e-penis... yeah right :rolleyes:
enough with this "7 liquid co2 cooled fermi cards running f@h" already! let's move on please :P
I'll wait and see, and I'll be optimistic about it. What do I have to lose? Nothing, zero. Why? Well, maybe because I can run most games without any problems, and maybe because the 5870 won't just disappear, and maybe because I might end up getting the 5870 for cheaper than I would get one right now.
We cry because Nvidia showed a fake card. Shame. Somehow I think we're in for a big surprise. But if it doesn't happen, what do I actually lose? Nothing, absolutely nothing.
Just my 2 cents.
+1 to the first bit.
The 2nd bit is so-so; yeah, fake card etc., but they have also been very arrogant and made some questionable decisions... but meh... I hope we are in for a big surprise though...
TBH though, I'm always quite pessimistic with hardware, as I always seem to get my expectations met or exceeded a bit... even then, you say you think we are in for a big surprise... and if we aren't, you're gonna be a bit like, uh... what a joke lol :D
it seems like the Fermi speculation has been nothing but negativity. all the fanbois are saying it uses 300 watts, it won't come out until Sept 2010, it will cost $700, the 5870 X2 will destroy it, and the funniest one of all, 2% yields.
evergreen rumors: it's 2x faster and it will be $250! then people start understanding gpu architectures and their need for bandwidth. gt300 will have ~240 GB/s compared to 150 GB/s with rv870. i think this will be a great gpu and too close to the 5870 X2 for ATi to be comfortable. it's just a matter of time. once nvidia launches this, it's clear sailing, especially in the hpc market with insane margins and great perf/watt. i am also very excited for larrabee. i read a whitepaper that said a 16-core 1GHz larrabee is 1.5x faster than a gtx280. that could put a 32-core 1.5GHz larrabee ahead of gf100! can't seem to find the paper now though.
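Those bandwidth figures fall straight out of bus width times effective memory data rate; here is a quick sketch (only the RV870 numbers are confirmed, the 384-bit / 5 Gbps combination for GT300 is a rumour at this point):
Code:
def bandwidth_gb_s(bus_width_bits, effective_gbps):
    # bytes per transfer * effective transfer rate in Gbps
    return bus_width_bits / 8 * effective_gbps

print(bandwidth_gb_s(256, 4.8))   # RV870 / HD 5870: 153.6 GB/s
print(bandwidth_gb_s(384, 5.0))   # rumoured GT300: 240.0 GB/s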
The only "people" who are skeptical or suspicious right now are the small % of the market who are the the bleeding edge enthusiasts.
The main marketshare target couldnt even tell you what the hell a "Fermi" is, or who Jensen is.
I am guessing it will come down to benchmarks and pricing, not what happened at an event months earlier.
Rumours of the 5870x2 being $250? Or are you talking about something else?
It will only be clear sailing when Nvidia has a top-to-bottom DX11 lineup to counter AMD. And the HPC market is a tiny slice of revenue for Nvidia. Yes it has potential, but that won't happen overnight.
Quote:
once nvidia launches this then its clear sailing. especially in hpc market with insane margins and great per/watt.
so this card is going to hard launch within a month? and we have no info? lulz.
Can't make cards without chips ;)
We had little hard info on the 5870 before it launched.
who's crying? can't speak for others, but me? no, i'm laughing! :D
well at least that's what i heard, but i trust that guy 100% :)
oh wait, i just remembered, a technical nvidia guy at the nvidia apac HQ here in taipei confirmed that to me when i visited them with shamino a while back...
true, to most people and to nvidia themselves actually, it'll only matter how the cut-down gt300 parts perform... cause that's what they will make some real money with... but you can't have cut-down mainstream gt300 parts without gt300... that's the thing :/
how's it flame bait?
we did have info about the 5870 before launch... we knew there'd be rv840 and rv870 and later some even further cut-down chips. we saw wafers of rv840 at computex, months before it launched, and we saw pics of the cards months before they launched as well... amd did live demos with REAL rv840 cards months before the launch, actually i think they had that at computex already... and they had rv870 demos behind closed doors as well back then, i heard...
that's not the point; to be able to run it at all, the architecture has to be working 100% like it should, all instructions and combinations of instructions need to work exactly as they should... there are usually always some bugs, and yes you can work around them at the compiler level afaik, but it takes time to figure that out... you need to know about a bug first before you can work around it, and do it in a way that doesn't cost you a lot of performance...
they showed gt300 silicon which was supposedly so fresh out of the oven it was still steaming, yet they had it running highly complex maths, pounding every transistor of the new pipeline like there's no tomorrow, at very high performance and without any bugs... i'm not saying it's impossible, but it's definitely something that raised my eyebrow... especially because it's not the only thing they showed supposedly running on gt300... according to those demos it seemed gt300 was 100% done, no bugs, no driver issues, nothing... just waiting for lame old lazy tsmc...
Bring it on! I want a 5870 X2 (which will probably release alongside gf100 or gt300, w/e) or Fermi... my GTX 285 stumbles in half my games (Fallout 3 is nearly unplayable, Risen lags like crazy)
you don't think it's a little weird that most gt300 rumours are negative? not saying it is specifically fanboys.
it was in a white paper published by none other than intel. still, that's the first real performance figure for larrabee.
Quote:
Anyways, not sure where you think you read about a 16-core Larrabee @ 1GHz being faster than a GTX280... Certainly not at gaming, since they will need 32-64 cores in the final product.
http://techresearch.intel.com/UserFi...2009_FINAL.PDF
Well, with mods it is totally unplayable.
And FO3 with mods is way better.
Also, benchmarks don't say everything... the average frame rate in FO3 with a GTX285 is like 70 at max details with 4x AA.
I cannot get over 50 fps at max details with no AA in the default game at 1920x1200, and 50 fps in FO3 feels laggy... it's often closer to 30-40 too.
That surprised me too.
How often do you ever get a complex chip like that working flawlessly... never. Many people agree that Core 2 Duo was a stunning breakthrough in performance for Intel, but just like the P4 and P3 before it, there was a long errata list. Thankfully, many of these bugs can be fixed in microcode and don't require any OS patches.
Although GPUs don't have an ISA as complex as x86 with its 800+ instructions, all those "programmability" features added with DX9, DX10 and DX11 add more logic and thus more chances for bugs. Fermi **MIGHT** have been running a limited set of instructions in whatever demos were real.
Finally, why it's much easier for AMD to get DX11 out the door: they already had 40nm chips for months, they already had GDDR5 for years, they have had DX10.1 since the 3870, and of course they have had a tessellation engine for years. The block diagram for the 5870 says it all... just a doubled 4870. Even the 5-way SIMD stayed the same. As for NVIDIA, each of the items listed is a hurdle for them to overcome.
I'm seeing a lot of conjecture on your part, but not much else. While I'm willing to give them the benefit of the doubt, you're simply doubting, based on nothing more than what you assume the application demands. GPGPU in general demands very little from a substantial portion of a graphics chip, particularly the texture units and ROPs. To claim all transistors need to be pumping at full throttle at all times is a bit silly. Again, they might also have had to clock it way down, use crazy cooling, high volts, whatever, to get the transistors (the ones related to computation) in working order. Who knows?
But neither of us are going to get anywhere with this. Like I said, debating this is a waste of time.
But it's still the main reason to purchase a brand new video card.
Provided it (Larrabee) is marketed as a "high-end gaming SKU", then I would agree. However, if Intel, not unlike Nvidia, caters to the whole CUDA/OpenCL type deal, then obviously that is their target market. Just don't say that these products should be "only" for gaming, as that is ignorant to assume given the direction GPUs are going. At the end of the day Nvidia and Intel don't give a rat's if the product is bought for gaming or compute purposes. If they can meet a demand and be competitive, their product sells, they create revenue and shareholders are happy. Simple as that.
So this is gonna "hard launch" in one month? And we still know nothing? yeah, right
too much talk from nvidia, no action. i want some new cards.
it does support DX11, right??
I wouldn't say we know nothing. In fact we probably know more about it than we did about the 5870 in the same time period, especially about the low-level stuff. Rumors currently suggest 128 TMU, 48 ROP, and that pretty much covers the gaming side of the hardware. We're just missing tidbits at this point, and of course clocks, which is always the last to be finalized.
At this rate, would it be unreasonable to say that we might only see real availability of Fermi early Q2 2010?
GTX380 vs 5950: that will be the fight, and it will most likely go down the way GTX285 vs 4850 X2 did. In games that are not CF-optimized, the GTX380/Fermi would win (a single 5850 can't destroy a GTX380/Fermi no matter how badly Nvidia made the card; it has to beat a 5850 in games, otherwise it's a failure), and in games that are CF-optimized I expect ATi's 5950 to win.
But the Nvidia card would offer more than just gaming; you get a partial/pseudo CPU that can, in theory, do most of the work ARM CPUs can do. It's totally up to the end user which card to buy. I am on the value-for-money bandwagon, so how much the GTX380/Fermi costs and how much the 5950/5870 costs are important factors for me.
Yeah, of course.
Speculations, speculations...
The 5870 is 2x a 285. Are you saying that the 5970, being a 5870 X2, is slower than Fermi? Delusional much? It would take such a massive die size for a single chip that the card would cost $1000...
All speculation, folks. From what I do know for sure, I think ATI's idea of a graphics card is better than Nvidia's. To me, I want a gfx card, not a pseudo-CPU. Bottom line though, we really need new games; none are going to push the last, current, or next gen.