:eek: If this is the 9XXX series yet to come, what does the GT of this series possess? Can you imagine the raw power of that card!
http://www.fudzilla.com/index.php?op...=3245&Itemid=1
I'll believe it when the cards have materialized.
I'll simply stop following this until the cards hit the desks of a few review sites... Fud doesn't even seem able to decide what they think themselves, let alone what they want their readers to believe...
Best Regards :toast:
Remember, if you guess an infinite number of times, you are bound to get something right ;)
This is the second time Fuad has contradicted what BenchZowner told us :shrug:
So instead of November and single PCB, now it's "early 2008" and dual PCB.
I'm just hoping that we'll hear some news from some reputable sites soon.
:D
Fud's response to the last post:
Maybe they're for real this time? Or maybe they have gone too far... again?
Quote:
If you don't trust us, ask Nvidia.
I really have to wonder if his sources aren't just screwing with him to see if he will post all the ridiculous info they 'leak' to him.
Maybe I should be a source and "leak" "on" him...
Nvidia definitely has a new GX2 card coming based on the "die-shrink".
Microwave ovens are obsolete. 7950 GX2 was hot, this will be HOTHOTHOT^3.
I'm going to start a PC tech news site and make up a bunch of crap so companies send me pre-release products. That, and so I can make people lose all respect for me. I'll call it "speculation.com, where you don't know my BS from the truth." Fud and the Inquirer make me want to slap someone...
So I guess they put 2x G92 on a card, right?
No pictures, no care from Fud.
Another GX2 card from Nvidia? Why not... but I doubt it will be their only high-end offering...
I personally believe Fuad over BenchZowner. A single card capable of 1 TFLOP is physically difficult to achieve, much less with acceptable yields.
G92 is more than likely both a mid-range and a high-end card. Two G80-type cores with 256-bit RAM bandwidth paired together would indeed be faster than current 8800 GTXs.
It makes perfect sense. A single G80 is roughly 520 GFLOPS on paper and 430 in real-world applications. That means it would need roughly twice the performance to achieve this mark, since Nvidia claimed it would be *over* 1 TFLOP.
If you go by the logic that die space roughly equates to shading units, which roughly translates into raw performance, the G92 would need to be twice as big as the previous G80 in order to achieve this goal.
That means a chip with twice as many shading units would be the exact same size as the current G80 if it were produced on a 45nm process. But they will not make it on 45nm, since not even TSMC is capable of that right now. At 65nm (which will most likely be the process of choice), the "G92" chip would be roughly 20% larger than the current G80 on its 90nm process.
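For what it's worth, here's the idealized version of that shrink arithmetic. This is a rough sketch assuming die area scales with the square of the feature size; real processes fall short of that ideal, which is presumably where the more conservative 20% figure comes from:
Code:
# Idealized die-area scaling: area shrinks with the square of the
# feature size. Real processes scale less cleanly than this.
def relative_area(units_scale, old_nm, new_nm):
    """Die area relative to the original chip."""
    return units_scale * (new_nm / old_nm) ** 2

# A chip with twice the shader units of a 90nm G80:
print(f"at 65nm: {relative_area(2, 90, 65):.2f}x G80's area")  # ~1.04x
print(f"at 45nm: {relative_area(2, 90, 45):.2f}x G80's area")  # ~0.50x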
#1 G80s are at the absolute limit for heatsink weight, so the new card *CANNOT* have a higher TDP or they risk losing their PCI-E certification.
#2 G80 yields are not good, which is why there are many 96-SP 8800 GTSs and no cut-down card with the full set of shading units enabled. In addition, the sheer size of the die makes the core extremely expensive to produce even on a very mature 90nm process. An unproven 65nm, 55nm or even 45nm process would have an even higher defect rate, not to mention the potential to be very leaky, which would certainly revoke their PCI-E license for that card *IF* they could get one to work properly from the get-go.
Given this information, the G92 *cannot* have a larger die than the current G80, and cannot have a higher TDP either. That means the G92 as a single die on a single card is *not viable* if it is to go beyond 1 TFLOP.
However, if the G92 were in fact a reduced G80 with 512-bit or even 256-bit RAM, two of those cards in SLI would easily reach 1 TFLOP, and they would be much easier to produce since the die would be significantly smaller. Even if the defect rate were high, the sheer number of cores would offset it, much like Winchester did for AMD.
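A quick on-paper check of that claim; the 1.35 GHz shader clock and 3 flops per SP per clock are just carried over from G80 as assumptions, and the 96-SP configuration is hypothetical:
Code:
# On-paper throughput of two G80-class chips in SLI.
# Hypothetical configurations; clock and flops-per-clock are
# carried over from G80, not confirmed G92 specs.
def peak_gflops(sps, clock_ghz, flops_per_clock=3):
    return sps * clock_ghz * flops_per_clock

full = peak_gflops(128, 1.35)  # ~518 GFLOPS per chip
cut  = peak_gflops(96, 1.35)   # ~389 GFLOPS per chip
print(f"2x 128-SP chips: {2 * full:.0f} GFLOPS")  # ~1037, past 1 TFLOP
print(f"2x  96-SP chips: {2 * cut:.0f} GFLOPS")   # ~778, short of it
So by this counting, a GX2 only clears 1 TFLOP on paper if both chips keep the full 128 shaders.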
If I were to take a guess from a business standpoint as to what would be the most profitable per unit of performance for Nvidia, it would be a 96-SP or 128-SP G80-like core with a 256-bit memory controller. A card with those specs on a 65nm process would provide a tremendous amount of profit at a $250 price range, and at $400-550 as a dual-PCB version.
The manufacturing cost would also be substantially cheaper, since their mid-range and high end would use the same GPU and potentially the same PCB, which is exactly what they did when they released the G71.
It's Fud, so I don't pay much attention.
NVIDIA is becoming notorious for stirring up fud lately, so I don't think we'll be given any actual proof of what their first-gen PCIe 2.0 DX10 card will be until shortly before launch.
Sent-
I was stating that it didn't make any sense to believe Fud over BenchZowner.
Either way your logic is flawed.
Nvidia stated NEAR 1TFlop performance, not over...
Also there are numerous other ways to "double" performance without doubling the amount of shaders.
You seemed to have missed the earlier G90/G92 threads...
BTW, the G80 does ~330 GFLOPS, not 430.
I couldn't find the original PDF, so I hope this quote from the Nvidia forums suffices:
346 GFLOPS max computational MUL
Quote:
G80 has 128 fp32 ALUs at 1350MHz with MADD.
So it should crunch up to 256 × 1.35 GFLOPS (346 GFLOPS) if one feeds it right.
But AFAIR the "GF 8800 GPU Technical Brief" document talks about 520 GFLOPS.
Is there some branch-unit or texture-unit ALU added and summed up?
Are these units usable from CUDA C code, or only available when one does
texture array access with some interpolation?
Sorry for all these questions about tech spec details. But first I have to convince
some people that CUDA/G80 is worth the time and effort to port some stuff to it.
Until now this has been regarded as "new toy stuff" by some people.
Greetings
Knax
~430 GFLOPS real-world performance (I can't remember where I heard this; it was on the forums someplace)
520 GFLOPS with ADD
EDIT: listed here too
http://www.3dcenter.de/artikel/2006/12-28_a.php
Read my previous post: on paper the G80 is a 520 GFLOPS card. The 346 GFLOPS figure covers only one part of the GPU's overall abilities.
EDIT: I found the CUDA pdf
http://www.cs.ucsb.edu/~gilbert/cs24...%20CUDA.ppt
Quote:
The nVidia G80 GPU. 128 streaming floating point processors @1.5Ghz; 1.5 Gb Shared RAM with 86Gb/s bandwidth; 500 Gflop on one chip (single precision)
No, I am not confusing myself; read what I posted. In order not to further derail this thread, the below is the last I will post on this topic:
G80 Unified Shader:
173 GFlop/s ADD
346 GFlop/s MUL
(518 GFlop/s ADD + MUL)
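To sanity-check that breakdown: with 128 scalar processors at 1.35 GHz, each issuing one MADD (an ADD plus a MUL) and one co-issued MUL per clock, the per-operation totals work out as follows. This is just napkin math over the commonly cited numbers, not an official spec:
Code:
# Peak G80 shader throughput by operation type.
# Assumes 128 SPs at 1.35 GHz, each issuing one MADD (ADD+MUL)
# plus one co-issued MUL per clock.
sps, clock_ghz = 128, 1.35

add_gflops = sps * clock_ghz * 1  # one ADD per clock  -> ~172.8
mul_gflops = sps * clock_ghz * 2  # two MULs per clock -> ~345.6

print(f"ADD:   {add_gflops:.1f} GFLOPS")               # ~173
print(f"MUL:   {mul_gflops:.1f} GFLOPS")               # ~346
print(f"Total: {add_gflops + mul_gflops:.1f} GFLOPS")  # ~518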
___________________________
I agree with the rest of the posters: if Kinc claims there is a GX2 card in the works, I believe what he says.
Earlier this year, an nVidia engineer was quoted as saying the G80 was designed with 160 shaders, but only 128 were enabled at 90nm (for a variety of reasons: yields, heat, power, etc.). The other shaders would not be enabled until a die-shrink. So it's possible the G92 will have 160 shaders per chip, which would make an x2 variant fairly potent.
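If that 160-shader story is accurate, the same napkin math would put a dual-chip part comfortably past the 1 TFLOP mark; note that the 1.35 GHz clock is simply borrowed from G80 here, since no G92 clocks are confirmed:
Code:
# Hypothetical dual-chip G92 with all 160 SPs enabled per chip.
# Clock speed is assumed (borrowed from G80), not a known spec.
sps, clock_ghz, flops_per_clock = 160, 1.35, 3
per_chip = sps * clock_ghz * flops_per_clock       # ~648 GFLOPS
print(f"GX2 on paper: {2 * per_chip:.0f} GFLOPS")  # ~1296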
Also, if history repeats itself, the next high-end single-card part will come one quarter after the dual-card product.
As discussed at length elsewhere here, this is just more evidence that nVidia is going to milk the profits and margins of this current-gen GPU as much as possible before launching a next-gen part.