yeah true, but with the TDP rated for those frequencies and that shader count, I don't see how the GeForce lineup can have higher specs! do you?
Thing is, isn't this just for Tesla, though? Professional cards are almost always different from the retail cards. Who says this applies to GT300 as well?
Or am I completely on the wrong track here?
Edit: Didn't see Nedjo's post. Makes sense in a way.
It could have an external power brick like the old planned Voodoo cards. That would be rather amusing actually.
so given those specs, what would a fully clocked Fermi with 512 shaders cost in TDP? Something like 260-280W?
On perf per watt, it looks like ATI might win by a clear margin (20% is my estimate).
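Quick back-of-envelope in Python, assuming board power splits into a fixed chunk (memory, VRM, fan) plus a part roughly linear in active shader count; the 225W Tesla figure and the 75W fixed split are guesses, not confirmed specs:

# Rough TDP scaling sketch; every input here is a guess, not a confirmed spec
tesla_tdp = 225.0   # assumed 448-SP Tesla board power, watts
fixed = 75.0        # assumed non-shader power: memory, VRM, fan
shader = tesla_tdp - fixed

# scale the shader portion linearly from 448 to 512 SPs
full_tdp = fixed + shader * (512 / 448)
print(f"512-SP estimate: {full_tdp:.0f} W")           # ~246 W
# add ~10% if the GeForce part also clocks higher
print(f"with a clock bump: {full_tdp * 1.10:.0f} W")  # ~271 W

which lands right in that 260-280W ballpark once you assume a clock bump on top of the extra SPs.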
that does sound about right. It's reasonable to believe that GF will have 768 and 1536 MB of memory, but again, it's questionable how much the memory adds to the power draw. Also, you're correct about the power limitation: 300W is the nominal wall, but c'mon, 300W for a single-chip GPU? That doesn't make sense!
For the sake of argument, let's say that a 448 SP Fermi @ 1400MHz with 3GB of GDDR5 @ 4GHz consumes 225W. How much do you think it would consume with 1.5GB? IMHO the best-case scenario is 190W, and if I recall correctly that's the figure someone mentioned for a GeForce-based Fermi.
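Here's the same kind of guesswork for the memory cut, in Python; the per-GB draw below is just whatever reproduces the 190W figure, and it's honestly on the high side for GDDR5:

# How much does dropping 3GB -> 1.5GB save? (per-GB draw is a back-fitted guess)
board_3gb = 225.0     # assumed 448-SP / 3GB board power, watts
watts_per_gb = 23.0   # assumed GDDR5 draw per GB at ~4GHz effective; likely generous

board_15gb = board_3gb - watts_per_gb * (3.0 - 1.5)
print(f"1.5GB estimate: {board_15gb:.0f} W")   # ~190 W

One caveat: a 384-bit bus needs the same number of 32-bit GDDR5 chips either way (just lower-density parts for 1.5GB), so the real saving from halving capacity could be smaller than this sketch implies, with the rest coming from binning or lower clocks.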
Anyhow, the best-case scenario for performance is the level of GTX 275 SLI, and that would give NV the right to say that GT300 is twice as powerful as GT200, and faster than the 5870 but slower than the 5970. It will all come down to price and availability.
What's safe to say is that NV will not have the fastest card on the planet (no bragging rights), and that we won't see a dual-GPU Fermi until TSMC provides a shrink of its 40nm tech.
The thing is, how are GeForce parts going to have 512 cores if Tesla only has 448?
We don't know anything about the efficiency of the new architecture, but judging from those specs, I believe it should perform on par with GTX 295, which I don't think is so bad.
There may be a bit of confusion in the air due to ECC. The low speed of the memory subsystem could be for higher dependability and lower ECC requirements, which would lead to better data integrity. But I have no idea why NVIDIA would lower the core count. Maybe the drop in bandwidth when ECC is active is so large that data can't get to all 512 cores and the card gets bandwidth starved (HPC uses s*itloads of memory bandwidth), so the reduction in cores may lead to better bandwidth sharing??
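For what it's worth, GDDR5 has no side-band ECC lanes the way registered DDR does, so check bits have to be stored in-line and cost both capacity and bandwidth. A rough sketch in Python; the 1/8 overhead fraction and the bus width are assumptions, not published numbers:

# Effective bandwidth with in-line ECC on GDDR5 (overhead fraction is assumed)
bus_width_bits = 384      # rumoured Tesla memory bus width
data_rate_ghz = 4.0       # effective GDDR5 rate from the post above
ecc_overhead = 1.0 / 8.0  # assumed in-line check-bit cost; GDDR5 has no side-band

raw_gbps = bus_width_bits / 8 * data_rate_ghz   # GB/s raw
eff_gbps = raw_gbps * (1 - ecc_overhead)
print(f"raw {raw_gbps:.0f} GB/s -> ~{eff_gbps:.0f} GB/s with ECC enabled")

So ECC alone trims bandwidth by roughly the overhead fraction; whether that's enough to starve 512 cores is anyone's guess.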
I expect the GeForce board to be clocked faster; at least around 1700-1800MHz would be better...
if the GT300 is as fast as the 295 I'll be happy with the progress (and trade in for a GT300).
Don't mind if the card is a 2900 XT, as long as its performance is more than the 5870's, around GTX 295 level at least.
The GT240 is decent. Well, it's basically a 9600 GT :( But AMD's HD5xxx is just too much...
A 448 SP monster at a low enough clock would only perform around the 5870's level, which would be a big mistake.
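Rough raw-throughput numbers behind that worry, assuming the new SPs do an FMA (2 flops per clock); raw FLOPS don't compare directly between NV and ATI architectures, so treat this as a sketch:

# Raw shader throughput at a few hot clocks (flops/SP/clock is an assumption)
sps = 448
flops_per_clock = 2.0   # assumed FMA = 2 flops per SP per clock

for mhz in (1230, 1400, 1700):
    tflops = sps * flops_per_clock * mhz * 1e6 / 1e12
    print(f"{mhz} MHz -> {tflops:.2f} TFLOPS")
# HD 5870 is 2.72 TFLOPS raw, but NV has historically extracted more per paper-flop

Even at 1700MHz the raw number sits well under the 5870's, so the outcome depends less on clocks than on how efficiently those 448 SPs are fed.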
If GT300 is only as fast as a 295 then I won't be happy at all. The 5870 is already about as fast and ATI has plenty of headroom for a 5890. Being tied for single GPU performance is not what NV needs. They need to gain the single GPU crown and have a competitive lineup from lowend to highend.
That's not really a counter to anything I said. The GTX 280 was still faster than the 4870 even if it didn't beat a 9800 GX2. If GT300 is only as fast as a 5890, they'll be in a tight spot considering it probably costs more to manufacture.
And you seem to be forgetting 7800GX2 vs G80, 3870x2 vs 4870, etc.
Well I wasn't. I was comparing the whole market. I'm not interested in only buying along brand lines.
The 295 has about the best driver optimization it's going to get, and it barely beats the 5870. The 5870 has room for more optimization, plus there is plenty of headroom for a 5890.
Also, the 295 is faster than the 5870 in any SLI-optimized application, so if the GT300 equals the 295 it will be the fastest single-chip GPU on the market.
I guess it's good that you guys are finally moderating your expectations. Though probably not for the reasons I advocated...
What has changed with HD5xxx that makes you think there are a lot of optimizations left to do that haven't already been covered with HD4xxx?
The 295 is current/last gen; the same potential for driver optimizations exists for the GeForce GT300 as much as for the 5870.
We still don't know how the architectural changes will affect GeForce performance, but it's safe to expect at minimum a doubling of single-GT200 performance.
Even if the final GeForce parts have 448 shaders total, that's still nearly double a GTX 275's 240, with all the extra memory bandwidth on top. I'd say it's within reason to expect more performance than the 295, without the worry of SLI profile optimizations. (Quick scaling check below.)
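A minimal scaling check, assuming per-SP throughput carries over unchanged from GT200 (it almost certainly won't, given the architectural changes):

# SP-count scaling vs a GTX 275 (assumes equal per-SP throughput, which is a guess)
gtx275_sps, gtx275_mhz = 240, 1404   # GTX 275 shader count and hot clock
gt300_sps = 448

ratio = gt300_sps / gtx275_sps
print(f"SP ratio alone: {ratio:.2f}x")   # ~1.87x, a bit short of 2x

# a modest hot-clock bump closes the gap to a true doubling
for mhz in (1404, 1500, 1600):
    print(f"at {mhz} MHz -> {ratio * mhz / gtx275_mhz:.2f}x")

So SP count alone doesn't quite double a 275; the doubling claim leans on clocks and per-SP improvements too.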
This is by faaaaar the most hilarious Fermi "preview" you can read:
http://www.techradar.com/news/comput...plained-657489
Because it will be based on a newer chip revision?
Again folks, Tesla is based on A2. Everyone worried about power consumption and SP counts because of HPC cards should only be worried if they are buying a Tesla card. There's a reason NVIDIA is waiting for A3 before releasing consumer GeForce cards.