Who cares if it is benched at 2560x1600 as long as it can provide playable frame rates with most of those games :confused:
well according to the numbers we've all seen, the 470 needs every little bit of additional perf it can get, since it seems to be behind the 5870 in most scenarios...
which is what, in 2 weeks? they don't even have the box layout finished, and they plan to have thousands of them printed AND already shipped to distributors and shops in 2 weeks? i'm finding it harder and harder to believe that the march 26 event will be a proper retail launch...
capacity and speed are two different things...
right, there is some relation between speed and capacity, but just because there are 3gb and 6gb cards you can't derive what speeds 1.5gb cards will clock at... for cpu/nb memory controllers this relation is common, but can you remember any difference in memory clocks between 1gb and 512mb cards, or 512 vs 256, 128 vs 256 etc., that was NOT based on how well the memory itself clocked? i can't!
I'm pretty sure if you design a memory controller to take 6gb in a certain configuration, you'd be pretty hard pressed to mess it up. That would be an outright logic error.
It's much easier for it to fail at a rated speed than to be unable to take larger memory ICs. It's a 384-bit bus; anything below 1gb and they wouldn't even release it.
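For reference, a GDDR5 bus width fixes how many 32-bit chips hang off it, which in turn fixes the possible capacities. A quick back-of-envelope sketch (the chip densities listed are illustrative assumptions, not confirmed parts):

```python
# Back-of-envelope: possible frame-buffer sizes for a given GDDR5 bus width.
# Each GDDR5 chip has a 32-bit interface, so chip count = bus_width / 32.
# Densities are common GDDR5 parts of the era (assumption for illustration).

def possible_capacities_mb(bus_width_bits, densities_mbit=(512, 1024, 2048)):
    chips = bus_width_bits // 32
    # Mbit per chip -> MB per chip, times chip count
    return {d: chips * d // 8 for d in densities_mbit}

print(possible_capacities_mb(384))  # 12 chips: {512: 768, 1024: 1536, 2048: 3072}
print(possible_capacities_mb(320))  # 10 chips: {512: 640, 1024: 1280, 2048: 2560}
```

which is why a 384-bit card comes in 768mb/1.5gb/3gb steps and a 320-bit card in 640mb/1.25gb steps.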
Maybe I should go back to my original guess and just say nvidia are being cheap; if yields are sucky then there's really no point losing more money per card than they already are.
They could be sandbagging, but I'm hearing this batch at release might be the only batch unless they can fix the issues. So once this batch of Fermi sells out on the 27th, you won't get one until June-July.
I never said that you could derive speed from the amount of memory placed on higher end cards. However, both memory speeds and the amount of memory associated with each controller have an effect on stress levels. :up:
If the card uses lower-end GDDR5, it is likely due to TDP and price issues rather than a design flaw IMO. Especially considering the GT 240 already uses higher binned modules in some cases.
tdp? i find it hard to believe that the memory's power consumption is an issue :D
it's possible that since it's 320-bit, they don't need high clocks; more bandwidth barely scales... so they go for the cheapest low-clocked stuff... makes sense, that's kinda what they did with gt200: very wide interface, then cheaper gddr3.
It most likely is a TDP issue, and that itself can be identified as a flaw: not of the memory controller, but of the GPU design.
I think nVidia has already started a radical new GPU design. They know they cannot go down this path forever. I don't know what "too much" is to them, but 3.2 billion transistors is 1 billion too many in my book. If ATI can get you the same performance out of 2.15 billion of the little guys, then you have a problem that cannot be ignored.
Absolutely hilarious box design! :rofl:
And the card looks like a render. Not very impressive design anyway...
I don't think the price difference is huge. Look at the 5770's memory: it can overclock even higher than the 5870's GDDR5. And that's a budget card.
TDP? Yeah, maybe... But not very likely imo...
Dual Geforce 400 to come later
http://www.fudzilla.com/content/view/18038/1/
If I'm not mistaken, Fudz is trying to say the dual Fermi is "postponed" to some unknown date in late Q2. Isn't he really saying: forget it for this round?
Zotac planning GeForce GTX 480 AMP! version
http://www.nordichardware.com/en/com...p-version.html
that doesn't make sense; then why does the 5870 have such low idle numbers with even faster memory? i find it hard to believe that clocking gddr5 memory higher increases power consumption notably... a 5870 consumes less under load than a gtx285 iirc, and don't tell me the gpu is running so cool that it can even make up for the gddr5 consuming more power than the gddr3 on the 285...
isn't it because the 5XXX series has a better idle clock-changing mechanism to reduce power consumption when idle?
it's downclocked quite a bit. the 4870's idle draw came from that plus terrible power gating. it's ~3 watts per IC with gddr5, and there are 12 of them on the 480, so that's ~36 watts. tesla was delayed because they are waiting for hynix 2Gb ICs. even then, with 6GB you are still consuming a lot of power.
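that estimate is just chips times per-chip draw; a quick sketch (the ~3 W/IC figure is the claim above, not a datasheet number, and the 24-IC Tesla case assumes 2Gb chips on a 384-bit bus in clamshell):

```python
# Rough memory-subsystem power: per-IC draw times chip count.
# 3 W per GDDR5 IC is the figure claimed in the post, not a datasheet value.

def mem_power_w(num_ics, watts_per_ic=3.0):
    return num_ics * watts_per_ic

print(mem_power_w(12))  # GTX 480, 12 ICs -> 36.0 W
print(mem_power_w(24))  # hypothetical 6GB Tesla, 24 x 2Gb ICs -> 72.0 W
```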
hmmmmm
http://www.pcper.com/article.php?aid...=expert&pid=10
37W higher power consumption for 1gb gddr5 vs 1gb gddr3?
that CANT be caused by the memory chips alone... impossible...
there are only 4 memory chips on those cards, so that would mean almost 10W per memory chip... way MORE than gddr3... they would burn up in no time without a heatsink on them; normal mem chips have a 1-2W tdp...
and even if the imc runs much hotter when driving gddr5 and the memory chips ONLY consume 5W each, the chips would still burn up without a heatsink, and the imc would have to draw over 15W more just from driving gddr5 instead of gddr3? on a 128-bit imc? that CAN'T be right... that gddr5 card must have a terribly inefficient design; they somehow messed it up or used super cheap, very inefficient components in the pwm...
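running the same arithmetic on that pcper delta shows why it can't be the DRAMs alone (the 37 W and 4-chip figures are from the posts above; the 128-bit-bus chip count is an assumption):

```python
# If the whole 37 W load delta between the GDDR3 and GDDR5 cards came from
# the memory chips alone, each chip would need to draw:
delta_w = 37.0   # measured load delta quoted above (pcper review)
chips = 4        # assumed: 128-bit bus / 32-bit interface per GDDR5 chip
per_chip = delta_w / chips
print(per_chip)  # 9.25 W per chip, far above the ~1-2 W typical for DRAM,
                 # so the delta must also include the controller/pwm side
```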
i couldn't find any useful information on wikipedia or the samsung or hynix sites regarding gddr5 power consumption... anybody? :shrug:
http://news.softpedia.com/news/NVIDI...y-137157.shtml
Quote:
NVIDIA Explains Fermi GTX 470 and GTX 480 Delay
We redesigned GF100 from the ground up to deliver the best performance on DX11. This meant adding dedicated h/w engines in our GPU to accelerate key features like tessellation. We also made changes on the compute side that specifically benefit gamers like interactive ray-tracing and faster physics performance through things like support for concurrent kernels. Unfortunately all of these changes took longer than we originally anticipated and that’s why we are delayed. Do we wish we had GF100 today? Yes. However based on all the changes we made will GF100 be the best gaming GPU ever built. Absolutely.
I don't buy it lol... :rofl: If we knew when and why this so-called "redesign" started, then perhaps it would be more believable. Otherwise I call shenanigans!