on 26th it can be 480 again depending on Jens mood lol
You're totally right. Does this also mean that, regardless of the memory speed the GPU is set to, it won't be any faster than the memory installed on the motherboard? In that case, to take advantage of GPU overclocking you'd have to buy faster memory or overclock it, meaning the memory speed installed on the mobo will also limit GPU performance.
It seems logical enough.
Regards
I hear GTX 4XX specs and prices are the only things being shown on the 26th? Benchmarks on the following Monday?
Quote:
NVIDIA was planning to launch the GeForce GTX 400 series on March 26. That meant it would launch a new graphics architecture on a Friday, which is quite unusual. The reason is that NVIDIA was planning to show its GeForce GTX 400 series during the LAN party PAX 2010, but it looks like the plans have changed.
Even if it hasn't been entirely decided yet, NVIDIA is expected to go through with the launch event for the GeForce GTX 400 series on March 26 at PAX 2010, but there won't be any benchmarks. Specifications and prices will be revealed, but the media embargo with complete benchmarks and such won't lift until Monday, March 29th, the week following PAX.
During the same week the NVIDIA GeForce GTX 480 and GTX 470 will be available in stores and ready for delivery. The latest information speaks of April 6th, but we wouldn't exclude that you'll be able to get cards sooner.
http://www.nordichardware.com/en/com...arch-29th.html
I thought neliz was supposed to have inside info...seems like that's not the case, what a surprise :rolleyes:
But hey, at least Charlie was right, again. He better hope they can't put more than 512 cores there, he hasn't covered that, yet.
It's becoming really interesting, that's for sure. Can't wait for the 29th.
There's the possibility that the graphic designer of this website got outdated info... Remember the "Fermi" boxes with ATI info on them ;-). It would be nice to see at least one leak of 480 performance and not just the 470.
Probably NV made two different versions of the GTX 480 prototypes: one is 512 SP / 290W TDP, the other is 480 SP / 275W TDP.
Another leak by Neliz:
Quote:
TDP for the 512CC part is 295W.
http://www.galaxytechus.com/usa/about.aspx
Not surprised you haven't heard of them. They focus more on Europe and Asia.
Oops.. Fixed :p
By the way, I don't believe that slide. It was revealed weeks ago in advance...
And you can see the background is clearly not the same. The fake one (to me) is pure black. The new one, which I believe is genuine, has a gradient of green and blue. Even the textures in the tables don't fit...
http://vr-zone.com/articles/nvidia-g...aled/8635.html
Quote:
NVIDIA GeForce GTX 480 Final Specs & Pricing Revealed
GeForce GTX 480 : 512 SP, 384-bit, 295W TDP, US$499
GeForce GTX 470 : 448 SP, 320-bit, 225W TDP, US$349
Internal benchmarks reveal that the GeForce GTX 470 is some 5-10% faster than the Radeon HD 5850, and similarly for the GeForce GTX 480 over the Radeon HD 5870. Interestingly, the TDP of the GeForce GTX 480 is almost the same as the Radeon HD 5970, which is a dual-GPU card. Our sources also revealed that there are indeed plans for dual-Fermi cards, and the TDP of that card is probably going to be mind-blowing.
http://resources.vr-zone.com/newvr/i...635/GTX480.jpg
so... 295W vs 188W = only 10% extra? Damn....:(
I could have easily guessed this. I mean, Nvidia's prices in the past have been pretty outrageous, but think of the competition then compared to the competition now. Makes sense. I knew it wasn't going to be insanely priced. Even if the card were miles better, which it isn't, ATI has had a grasp on the high end for a while now, and dropped DX11 first.
Price is not a spec of the card, is it?
And we've known Fermi has 512 cores for how long? The only debate was the rumour that it would have only 480 enabled. 448 cores for the GTX 470 was pretty much a given, because Tesla parts have 448, so it would make sense as the next downgrade step.
Really, this is not new info, except for the price, which is NOT a spec.
What I mean is that without the clocks you don't know THE final specs. Shader count and TDP are NOT enough to define a card's specs.
I thought the 295W TDP was supposed to be lower, so that is news (to me at least)
If 512sp is correct (which it seems to be right now) then the 700mhz clocks can't be true otherwise the performance difference would be a lot more...
Poop sling fest ahoy.
I'll sure try it out if I can get my hands on 2 x GF100 one day...!
I'm not totally sure, but I would tend to believe it keeps the minimum required to boot and maps the rest for hardware. All hardware would then operate with the remainder, and it would probably crash with something like an address error, a BSOD, or what not.
Is there anyone on Xtreme that has 2 x 5970 running on XP 32-bit? Just ask them to try it out! lmao
So they don't know the clocks, but they know how fast they are. Sure, I'll believe that.
It seems more like NV is keeping everyone in the dark, so they all went the Charlie way, covering every possibility so they can state, they've been right all along.
Wow, the number of false leads and misinformation in this thread and the links / pictures posted is simply stunning. Mind blowing actually.
Nvidia always charges more, doubt those prices are accurate :shakes:
Well, that is really simple: the issue only arises with addressed memory, not available memory. The same way you can stick 8 GB of RAM into a computer running 32-bit Windows: it doesn't have an issue booting or anything like that, but it still can't address more memory than it is capable of.
Edit: I should note I am referring to Windows Xp, and not Server editions of Windows which have no trouble addressing much more than 4GB.
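The 4 GB ceiling discussed above is just pointer-width arithmetic, and it also explains why big VRAM cards hurt on 32-bit XP: device apertures get mapped into the same 4 GiB space as RAM. A minimal sketch; the aperture sizes below are illustrative assumptions, not measured values:

```python
# A 32-bit pointer can distinguish 2**32 distinct byte addresses.
GiB = 2**30
addressable = 2**32                 # bytes
assert addressable // GiB == 4      # the familiar 4 GiB ceiling

# GPU VRAM windows, PCI space, and firmware regions are mapped into
# that same 4 GiB, crowding out physical RAM. Hypothetical example
# with two 1 GiB VRAM apertures plus other MMIO:
vram_apertures = 2 * 1 * GiB        # assumed, e.g. 2 x 5970 windows
other_mmio     = 512 * 2**20        # rough guess for PCI/firmware
usable_ram = addressable - vram_apertures - other_mmio
print(usable_ram / GiB)             # 1.5 (GiB left for system RAM here)
```

With numbers like these, a dual-GPU setup on 32-bit XP could leave well under 2 GiB of RAM addressable, regardless of how much is installed.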
The heatpipes push the height of the GTX 480 out of spec.
Should I expect the dual Fermi to be double height? LOL.
It's not false. It's just links to legitimate sites like VR-Zone and SemiAccurate, or personal hopes.
Blame the linked sites for any misinformation posted here.
... hhehehe, half of my posts are BS? No, I only play games with you, who don't know anything! I know everything, but it's fun to play with you nobodies.
Those specs are BS; reality is different. Only the 480's picture on the FAKE slides is correct, nothing more.
Blame Nvidia for all that misinfo... it's their fault.
http://vr-zone.com/articles/nvidia-g...aled/8635.html
http://i44.tinypic.com/2qjkikz.jpg
Quote:
GeForce GTX 480 : 512 SP, 384-bit, 295W TDP, US$499
GeForce GTX 470 : 448 SP, 320-bit, 225W TDP, US$349
Internal benchmarks reveal that the GeForce GTX 470 is some 5-10% faster than the Radeon HD 5850, and similarly for the GeForce GTX 480 over the Radeon HD 5870. Interestingly, the TDP of the GeForce GTX 480 is almost the same as the Radeon HD 5970, which is a dual-GPU card. Our sources also revealed that there are indeed plans for dual-Fermi cards, and the TDP of that card is probably going to be mind-blowing.
It seems that there are a lot of "myths" around 32-bit and 4GB+ memory addressing.
The facts are:
32-bit operating systems ARE able to use and address more than 4GB of RAM. As mentioned before, Enterprise software must be used (and I use the word Enterprise to describe Microsoft Windows OSes such as Windows 2003 Enterprise Edition or Windows 2008 Enterprise Edition 32-bit, which actually supports up to 64GB, yes, 64GB! on 32-bit; see here http://msdn.microsoft.com/en-us/libr...78(VS.85).aspx). However, going over 4GB comes with a performance hit.
Also, I can verify the above, as I have set up some servers with 32-bit Windows 2003 Enterprise Edition and 16GB of RAM, and I've seen usage on those go as high as 12GB of physical memory (Citrix servers with 150+ users per server).
So it is clear that a 32-bit operating system is capable of supporting and using more than 4GB, including the XP, Vista and 7 we use at home. Why hasn't this feature been enabled? Nobody knows, but rumours tend to point to certain licensing costs Microsoft incurs to use this technology, costs which MS recoups through the more expen$$ive OSes.
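The Enterprise-edition behaviour described above comes from PAE (Physical Address Extension), which widens physical addresses from 32 to 36 bits while each process keeps a 32-bit virtual space. The arithmetic lines up with the 64GB figure on the linked MSDN page:

```python
GiB = 2**30

# Plain 32-bit physical addressing:
assert 2**32 // GiB == 4     # 4 GiB cap

# PAE widens *physical* addresses to 36 bits:
assert 2**36 // GiB == 64    # the 64 GiB cap of 32-bit Enterprise SKUs

# Each process still sees only a 32-bit *virtual* address space, so a
# single application cannot map more than 4 GiB at once without a
# windowing scheme such as Windows' AWE (Address Windowing Extensions).
print(2**36 // GiB)          # 64
```

So PAE lets the OS hand out more total RAM across many processes (the Citrix scenario above), which is different from one process addressing it all.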
The simplest answer is that most people flat-out lie about what they know. Not that anyone would do that, right? Right?!
XS should make a "prophet" title, and anyone who posts info ahead of time with 110% accuracy gets it. That way we'd know ahead of time who most probably has actual info and isn't just speaking out of their festering asses, as is the norm in these kinds of threads..
If we don't even have the clocks, how can we know the performance? So many rumors out there. Honestly, besides the partners, I guess no one knows the specs, not even AMD. I think anything leaked so far is from engineering samples, so anything is possible for retail at the moment.
I don't know if there is hype for this thing as much as there is mystery. I bet not a single reviewer has this card in their hands yet, at least not the final one. The NDA/security on this card is tight.
It seems that some are like a loaded spring, waiting for the first opportunity to say 'see, Charlie was full of s))t!'. Take this 512-core rumor, for example, and some posts above. His info has always been that if there are any 512-core parts at all, they will be extremely limited, possibly reviewer editions only. If it's 295W, that info isn't all that hard to believe, is it? Common sense says they are teetering on the edge of manufacturability, and dropping to 480 cores for availability seems pretty likely. I'll trust that Charlie is correct, given his track record. And if it's wrong, so what.
Nail on the head... That said, the *correct* conclusion to all this is that mainstream 32-bit versions of Windows won't address in excess of 4GB, where Enterprise versions can. Regardless, it doesn't matter. Anyone buying a new platform for performance purposes and still wanting to cling to x86 doesn't get my sympathies... If you are buying a high-end GPU config with 1GB+ VRAM, you more than likely laid down a few bills, which in turn means you more than likely populated said system with a liberal amount of RAM, which in turn means you'd be crazy to do all of this and still use x86. :rofl:
GeForce GTX 480 : 512 SP, 384-bit, 295W TDP, US$499
GeForce GTX 470 : 448 SP, 320-bit, 225W TDP, US$349
Internal benchmarks reveal that the GeForce GTX 470 is some 5-10% faster than the Radeon HD 5850, and similarly for the GeForce GTX 480 over the Radeon HD 5870.
http://vr-zone.com/articles/nvidia-g...aled/8635.html
I like the prices, but the TDP is nuts. If the GTX 480 is just 5-10% faster than the 5870 for $499, it's nothing great over a "5870 2GB"; the 5970's launch price was about $600 USD.
That 5-10% comes at one hell of a price and power consumption hit.
I suspect the power dissipation density will make OCing these difficult. ATI can drop prices and increase MHz to compete, and still have the selling point of lower power consumption, by the looks of things.
Who exactly buys a high-end GPU and looks at load power consumption? I know I don't, and I couldn't care less. If it's going to draw 400W, I still don't care, as long as they have the performance to back it up, which we won't know for a couple more days.
What's this habit of posting rumours like they are facts? If it came from a website that hasn't been wrong, that would be something else, but most of them are just guessing. And what's really messed up is that you believe them every time... not only does it become annoying very quickly, it makes threads completely useless.
Wow, zed x, now you are playing with us, heh? So you are basically a liar. Thanks for confirming that :D Now not only half but all of your posts must be ignored lol
This one http://forums.vr-zone.com/news-aroun...gtx-480-a.html
Incidentally, one other guy confirmed that they are indeed fake, and he should know, because he has the card.
I don't think VR-Zone would repost for the sake of reposting.
I also think that for a big enough % of users to matter, power dissipation does play a part.
It's the equivalent of saying "Hey! Look at this packet of biscuits! There are a few more in the pack than usual and it costs a crapload more, but there are more biscuits, so it's all good!"
I thought this design with heatpipes over the card was fake too.
Talk about premature... The HD 5800 series has been out since Q3 last year and has had ample time to refine and optimize its drivers.
The same can't be said for Fermi though, and I can recall a few times when Nvidia released drivers for their new cards that increased performance by as much as 30% across the board (GeForce 3 and the Detonator 20xx drivers).
I hope this happens with Fermi as well :D
I look at average performance per watt in comparison to competing products. Unfortunately there isn't much to speak of currently, since we are limited to two manufacturers in the high end.
I don't care if product X draws 300 or 295 watts and performs similarly to product Y which draws the same or within ~10%, but I do care if it draws 600 watts and performs similarly (within ~10% on average) to a 225-watt product Y.
Not only is it more expensive to run but I now have twice the heat output that will require additional cooling capacity in the summer (AC).
Where does it stop otherwise?
I'm not going to build a 2-kilowatt system just to get 60 FPS in Crysis...
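The performance-per-watt point above can be put in numbers using the rumoured figures from this thread (all inputs are rumours, not measurements, and the "5-10% faster" claim is taken at its high end):

```python
# Relative performance divided by TDP, with the HD 5870 as baseline.
def perf_per_watt(relative_perf, tdp_watts):
    return relative_perf / tdp_watts

hd5870 = perf_per_watt(1.00, 188)   # rumoured 188W TDP, baseline perf
gtx480 = perf_per_watt(1.10, 295)   # rumoured 295W TDP, +10% perf

print(round(hd5870 / gtx480, 2))    # 1.43
```

On those assumptions the HD 5870 would do roughly 1.4x the work per watt, which is exactly the kind of gap the post above says matters once it gets large.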
The Fermi GX2's TDP will be mind-blowing, I'm sure, even if they use 470 chips... I'm under the impression that if Nvidia releases a GTX 475, it would be with those 480 SP people were talking about, now that the 512 SP of the GTX 480 are "confirmed".
nsegative's post looks more accurate,
and it also looks like AMD will keep the lead if they just drop prices back to what they were at launch.
First post updated
I need one of these (GTX 470) :D
http://www.xtremesystems.org/forums/...&postcount=198
http://www.xtremesystems.org/forums/...&postcount=223
Please read before posting, that's all ...
april 6th
Holy (there should be a 4x :banana: button)! 295W TDP? And only 5-10% better than the 5870? Along with a higher price tag?
Forget it, you'll probably save enough on your energy bills with the 5870 to buy another (over a few years, of course; no more than 10, though).
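The energy-bill quip above can be sanity-checked with back-of-envelope arithmetic. The TDP gap comes from the rumoured specs in the thread; the gaming hours and electricity price are assumptions pulled out of thin air:

```python
# How long until the rumoured TDP gap pays for another card?
delta_watts   = 295 - 188        # rumoured GTX 480 vs HD 5870 full-load gap
hours_per_day = 4                # assumed daily gaming time at full load
price_per_kwh = 0.15             # assumed electricity price in $/kWh

yearly_kwh  = delta_watts / 1000 * hours_per_day * 365
yearly_cost = yearly_kwh * price_per_kwh
print(round(yearly_cost, 2))     # 23.43 ($ per year)

# Years of savings needed to fund a hypothetical $400 card:
print(round(400 / yearly_cost, 1))   # 17.1
```

So under these assumptions the savings are real but modest, closer to "a card per couple of decades" than per couple of years, which is why the TDP mostly matters for heat and cooling rather than the bill.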
april 6th because it's my birthday
You make it seem like it's impossible, when it's already happened before. :rolleyes:
I gave a specific example even. Any long-time buyer of Nvidia cards will remember when the GeForce 3 series first came out: it was barely faster than the GeForce 2 series it was replacing... until Nvidia released the new Detonator 20xx drivers that boosted performance by an average of 20 to 30%, and up to 40% or 50% in some cases..
So yes, it can happen. Driver optimizations can make a huge difference in performance, especially if the architecture is radically different from the previous generation; which Fermi happens to be.
I don't know. Nvidia is a master of squeezing performance out of their cards with drivers..
Even the G200 got some nice performance increases from driver optimizations, and it had way more similarities to the G80 and G90 cards, than Fermi will have to the G200..
When I first bought my GTX 285, it was using the 17xxx drivers. After Nvidia released the 18xxx drivers, I noticed a significant improvement in performance in a lot of games, to the tune of 15% or so..
These latest 19xxx are also very good, but the improvements are more specific than across the board..