Just stating that the interface is PCI-Express doesn't mean it is certified.
http://www.pcisig.com/developers/com...list/pcie_2.0/
Nothing there about the 4870 X2, the 5970, or the GTX 295.
Yes, I was about to post that. It indeed looks like those cards didn't get the certification, but there's no mention of the GTX 285 or 275 either.
Well, dual Fermi = an $800-1000 card, anyone? :rofl:
Average price of the Radeon 5970 in the US... $680.
Lowest price $619, highest $780...
Both cards are pointless and worthless for me; a single 5870/GTX3xxx is more than enough to enjoy all the latest and most demanding games on any monitor/resolution (except Eyefinity at 4000+x2500+).
GF100 is not only a gaming GPU. I personally have been looking into GPGPU lately, and a dual-GF100 card could make a great GPGPU setup and serve as a personal supercomputer ;)
Prices depend on competition; the HD 5970 became so ridiculously expensive just because of the lack of it.
If ATi can't put up a good fight (at least with a worthy refresh), then the GF100 could get really expensive. Let's hope ATi drops a new GPU at the same time.
Here are Evergreen's results:
http://images.anandtech.com/reviews/.../5870/5870.png
and here are Fermi's:
http://www.flarerecord.com/wp-conten...-all-folks.jpg
That's where the problem is.
90% of gamers don't give a damn about this. All they want is a cheap, value-for-money graphics card to game on.
This is where Nvidia is only hurting the whole PC gaming industry.
The architecture is solid. It should perform, but it's too early (technically it's late, but it's too expensive to manufacture right now).
This will push more people away from PC gaming.
Honestly, at this point ATI won't be worried at all. Their main cash cow is the 5770 and lower cards, and there is no sign of competition there.
I would agree if game performance were left out, as the early rumors/speculation were indicating, but these early results show great game performance too. The price will depend more on the competition and less on the architecture.
On the other hand, many small and medium-sized businesses that can't afford a $100,000+ supercomputer can become new consumers of GPGPU and get the same performance at a fraction of the price.
I see your concerns too, and it's going to be exciting to see where the price and gaming performance of the retail GPU end up.
If anyone was concerned about cheap cards and the poor people who want to game on their PCs, we'd have $100 5870s right now instead of the crap performance given by the cheap cards out there.
Quoted right from Charlie's mouth.
The statement is false. Large system builders don't care about extreme high-end cards and filter most of them down to their boutique high-performance shops, like Dell's Alienware and HP's VoodooPC. These higher-end shops care less about compliance, since their clients aren't concerned with having the systems run in datacenters, work computers, etc.
The only potential problem 300W+ cards would cause is for the power supply feeding them. The PCI-E slot is designed to provide up to 75W of power and no more, while power connectors aren't an issue either, since a pair of 8-pins can provide up to 300W (150W each), plus the 75W from the slot, for 375W total, as sketched below.
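To make that arithmetic concrete, here's a minimal Python sketch under the 75W slot / 75W 6-pin / 150W 8-pin figures quoted in the thread (the function and constant names are mine):

```python
# Minimal sketch of the power budget described above, using the
# per-connector figures quoted in the thread.
PCIE_SLOT_W = 75    # max power the PCI-E slot itself supplies
SIX_PIN_W = 75      # max power per 6-pin auxiliary connector
EIGHT_PIN_W = 150   # max power per 8-pin auxiliary connector

def board_power_budget(six_pins=0, eight_pins=0):
    """Total board power available from the slot plus auxiliary connectors."""
    return PCIE_SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

print(board_power_budget(eight_pins=2))              # 375 -- dual 8-pin, as above
print(board_power_budget(six_pins=1, eight_pins=1))  # 300 -- the 5970's 6+8-pin layout
```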
As for board partners, they aren't concerned either. They can still claim a card is compatible with the PCI-E interface, BUT they won't be able to put the PCI-Express logo on their packaging nor claim compliance in their documentation.
All in all, not a big deal. :up:
I think you're forgetting that Quadro and Tesla are two of Nvidia's cash cows... never underestimate the power of high-margin products. The GPGPU features of the Fermi arch will probably allow Nvidia to dominate the professional and HPC markets. Also, I've yet to see any proof of GPGPU features decreasing the performance or value of GF100. Yes, it's a big chip, but the performance improvements will probably justify the extra cost.
Wizzard, I'd be interested in your take on the TDP of Intel's CPUs. It's been shown they have the potential to draw 195W or more, yet Intel rates their TDP as an average, and not even their documented max power is anywhere near 195W. Should there be, or are there, different definitions of TDP for CPUs versus GPUs? That's the confusing part to me. Where are the standards?
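Just to illustrate the average-vs-peak distinction driving the confusion, here's a toy Python example; the power trace is made up, not measured data:

```python
# Toy illustration of why a TDP rated as an "average" can sit well below
# peak draw. The sample trace below is invented, not measured data.
samples_w = [95, 110, 130, 195, 120, 90]   # hypothetical draw per interval, in watts

avg_w = sum(samples_w) / len(samples_w)
peak_w = max(samples_w)
print(f"average: {avg_w:.0f} W, peak: {peak_w} W")   # average: 123 W, peak: 195 W
```

A vendor quoting the average would advertise ~123W here, even though the part can briefly pull 195W.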
Dominating the professional and HPC markets is fine, but not having cards for the mass consumer market is risky, as it lets ATI grab that market with little to no competition. Nvidia should work on getting more mainstream cards out faster than they currently are, or they run the long-term risk of being pushed out of the lower markets and becoming a niche company. Sure, that's probably not happening any time soon, but they still need to make sure it doesn't happen later, either.
Edit: Now that I think about it, this feels a bit like disruption coming, kinda like the whole mainframe/mini-computer/PC thing all those years ago.
If you're going to take issue with speculation, then you should take issue with this thread, as we now have 60+ pages of conversation about a product that doesn't yet exist. I think you might have forgotten where you are....
I agree, these delays are costing them dearly, but I don't think it's all in vain. From what I understand, the Fermi arch is going to be the basis for many future GPUs after the GF100, similar to what AMD did with the R600.
That was probably an ulterior motive. As we know, it underclocks itself when it starts to overheat, even at stock. By keeping it under 300W they greatly reduce the chance of it overheating. If they had kept the 5870 clocks, the overheating issue would have been found much sooner and would have been blown out of proportion.
I don't think the 300W limit was something they really cared about; it was just something they were also able to achieve while getting the 5970 not to downclock itself so often.
Why they didn't just design a better heatsink, though, is beyond me.
It depends. Basically, the second you go over 300W you're no longer backwards-compatible with PCI-E 1.x, which can only deliver 75W through the PCI-E slot, while PCI-E 2.0 can deliver up to 150W. So if you make a 300W card you have one of two options: add two 8-pin PCI-E connectors, or risk alienating everyone with a PCI-E 1.x mobo in addition to confusing the heck out of potential customers who don't know any better.
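As a rough sketch of that trade-off, this enumerates which slot-plus-connector combos can feed a given board power, again assuming the 75W/75W/150W figures from the thread (the helper function is hypothetical):

```python
from itertools import product

SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150  # figures quoted in the thread

def feasible_combos(target_w, max_connectors=2):
    """Yield (six_pins, eight_pins) pairs whose combined budget covers target_w."""
    for six, eight in product(range(max_connectors + 1), repeat=2):
        if six + eight <= max_connectors and \
           SLOT_W + six * SIX_PIN_W + eight * EIGHT_PIN_W >= target_w:
            yield six, eight

print(list(feasible_combos(300)))  # [(0, 2), (1, 1)] -- dual 8-pin, or 6-pin + 8-pin
```

That is, a 300W board needs either dual 8-pins (the option above) or a 6-pin + 8-pin pair, which is exactly where the compliance and compatibility questions start.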