*cough*B1*cough*
New benchmarks are out here:
Intel Core i7 960, 6GB of 1600MHz memory, an Asus Rampage II Extreme, Win 7
http://images.hardwarecanucks.com/im...0/GF100-42.jpg
http://images.hardwarecanucks.com/im...0/GF100-43.jpg
http://images.hardwarecanucks.com/im...0/GF100-44.jpg
Source:
http://www.hardwarecanucks.com/forum...oscope-14.html
http://www.hardwarecanucks.com/forum...oscope-13.html
http://www.fudzilla.com/content/view/17325/1/
it's already been posted :)
February/March, lol..
Oh wait... wait... someone seems to forget that they actually need boards to surprise people, and without cards there's no game.
If we want to be surprised next month, the people who make and sell the cards should actually know they're getting them, which is not the case if NV told its partners last week that they don't expect any shipment in March.
I hope you're not expecting Twintech, Inno3D, Sparkle or Gainward to get the chips before eVGA, BFG or XFX :p:
Checking 70 pages isn't easy, though.
It has been rumored on some sites that the GTX 380 will sport a $520 sticker :)
That is a good price IMO. At around $500 it would be about $100 cheaper than the 5970 and $100 more expensive than the 5870, which is not bad for the fastest single-GPU card. But the price of the 5950 and the 5870 @ 1GHz is still to be inked, and it will be a bloodbath if the 5950 is priced around $500.
The 5870 @ 1GHz will be much cheaper and may be a good choice for anyone who can't afford the GF100.
You have to realize that the two architectures are very different; you can't compare ATI's shaders to Nvidia's CUDA cores on a one-to-one basis.
Quote: Originally Posted by NapalmV5
It's similar to the old Athlon VS Pentium 4 debate.
It's not about the number of execution units, but how they work, and ultimately how well they perform.
And BTW, you are assuming that Fermi will have the upper hand in that comparison, but most people don't share that view.
Think about this: why did Nvidia compare the GF100 against the 5870 (instead of the 5970)? That should give you an idea.
Considering what Nvidia said about XFX, I wouldn't expect them to be in the first batch of Fermi cards.
Anyone expecting SuperCompute functions in GF100 is badly mistaken...
(expect some more superb announcements in the coming month)
Thanks Jawed for the link.
Quote:
I should pause to explain the asterisk next to the unexpectedly low estimate for the GF100's double-precision performance. By all rights, in this architecture, double-precision math should happen at half the speed of single-precision, clean and simple. However, Nvidia has made the decision to limit DP performance in the GeForce versions of the GF100 to 64 FMA ops per clock—one fourth of what the chip can do. This is presumably a product positioning decision intended to encourage serious compute customers to purchase a Tesla version of the GPU instead. Double-precision support doesn't appear to be of any use for real-time graphics, and I doubt many serious GPU-computing customers will want the peak DP rates without the ECC memory that the Tesla cards will provide. But a few poor hackers in Eastern Europe are going to be seriously bummed, and this does mean the Radeon HD 5870 will be substantially faster than any GeForce card at double-precision math, at least in terms of peak rates.
^ Think anyone could figure out a way to bypass that limit?
sure it is :)
http://www.xtremesystems.org/Forums/...hreadid=241120
Don't count on it; every soft Quadro hack is just superficial (device ID and SPEC marks), and Teslas are even more of the same.
Now where is that "but it's still faster with 186 GFLOPS" crap? When it's drawing 100W more and costing an estimated $150 more, it :banana::banana::banana::banana: be doing that.
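For what it's worth, here's a quick back-of-the-envelope check on where a ~186 GFLOPS figure could come from. This is only a sketch: the 64 FMA/clock GeForce cap comes from the quote above, and the ~1.45GHz shader clock is an assumption, not a confirmed spec.
Code:
/* Rough peak-rate math for the rumored GeForce DP cap.
 * Assumptions: 64 FMA/clock (per the quote above) and a ~1.45GHz
 * shader clock, which is NOT a confirmed spec. */
#include <stdio.h>

int main(void)
{
    const double fma_per_clock    = 64.0;  /* GeForce DP cap from the quote */
    const double flops_per_fma    = 2.0;   /* one FMA = multiply + add */
    const double shader_clock_ghz = 1.45;  /* assumed hot clock */

    double capped_dp = fma_per_clock * flops_per_fma * shader_clock_ghz;
    printf("Capped GeForce DP peak: ~%.0f GFLOPS\n", capped_dp);        /* ~186 */
    printf("Uncapped (Tesla-style): ~%.0f GFLOPS\n", 4.0 * capped_dp);  /* ~742 */
    return 0;
}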
Neliz, no specifics, just the 280W ceiling added onto the $520 estimates.
But ultimately Cypress' pricing is dynamic, and I'm placing my bet on AMD keeping their distance (and opening up the 5800s to custom designs) beyond the price (in)elasticity of green users.
They've been "designing" for quite some time now; first results in a bit more than two weeks.
edit: NOW!
http://techpowerup.com/113418/MSI_HD...assembled.html
http://www.techpowerup.com/img/10-01...card_naked.jpg
Double precision on the GPU is completely irrelevant for everyone in this forum; AFAIK there are no commercially available applications that use it. It's used in secret stuff that decides whether your company makes hundreds of millions of dollars more or not.
Expect ATI and AMD to make big drama about double precision; ask them for real-world applications you could use to benchmark.
I've got a couple of customers that actually use DP (and yes, they are in the financial world), and they have been using GeForces/CUDA all this time (having started development on 8800 GTs). They were indeed quite expecting that all their CUDA efforts would get a nice boost come Q2.
DP would be appreciated perhaps even in situations such as a GPGPU-based renderer (no, not the lame raytrace-everythingmajig kind, but an OCL implementation of an unbiased renderer).
nVidia's headache here is that Cypress achieved IEEE 754 compliant DP and perhaps has enough cache read/write flexibility and addressing capability to make it a decent choice for most devs.
As for real-world apps... is GPGPU even important enough yet to talk about speed? Besides all those distributed computing projects, I thought the first round of CUDA/Stream apps were horrid jokes.
I know people who use DP-based software and run it on consumer chips for that purpose; it saves them a wallop and it's almost as good.
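Just to make it concrete, below is a minimal sketch of the kind of double-precision CUDA code being talked about here: a daxpy-style kernel. Everything in it (names, sizes, values) is illustrative and assumed, not taken from any of the applications mentioned above, and it needs a GPU with compute capability 1.3 or newer for double support.
Code:
// Minimal double-precision CUDA sketch (daxpy-style update).
// Illustrative only; not from any application mentioned in the thread.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void daxpy(int n, double a, const double *x, double *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];  // one double-precision FMA per element
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(double);

    // Host data
    double *hx = new double[n];
    double *hy = new double[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0; hy[i] = 2.0; }

    // Device data
    double *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    daxpy<<<(n + 255) / 256, 256>>>(n, 3.0, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);  // expect 5.0 = 3.0 * 1.0 + 2.0

    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}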