Yeah, when Fermi was first announced there was all kinds of talk about turning your PC into a supercomputer, but I'm sure I remember reading about NV slashing the consumer cards' DP capabilities. Was that just a rumor, or?
If nVidia's Tesla market gets too big, there won't be any Fermi for gamers. After all, why sell to gamers for $600 when you can charge $3000 in HPC?
24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
1 GB OCZ Gold (='.'=) 240 2-2-2-5
Giga-byte NF3 (")_(") K8NSC-939
XFX 6800 16/6 NV5 @420/936, 1.33V
While the Tesla lineup will have half-rate DP relative to SP FLOP performance, I believe GeForce will only get one-eighth-rate DP; that was the official news back then. How exactly they block that capability hasn't been answered very clearly.
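To put rough numbers on those ratios, here is a minimal back-of-the-envelope sketch. It assumes the full GF100 configuration rumored at the time (512 CUDA cores at a ~1.4 GHz shader clock); the core count, clock, and file name are illustrative assumptions, not official specs.

```cpp
// flop_ratio.cu -- back-of-the-envelope peak-FLOPS numbers for a Fermi-class GPU.
// The core count and shader clock are assumptions based on rumored specs,
// not confirmed figures.
#include <cstdio>

int main() {
    const double cuda_cores    = 512.0;  // assumed full GF100 configuration
    const double shader_ghz    = 1.4;    // assumed shader (hot) clock in GHz
    const double flops_per_fma = 2.0;    // one fused multiply-add = 2 FLOPs

    const double sp_gflops = cuda_cores * shader_ghz * flops_per_fma;

    // Tesla parts: DP said to run at 1/2 the SP rate; GeForce: reportedly capped at 1/8.
    const double dp_tesla   = sp_gflops / 2.0;
    const double dp_geforce = sp_gflops / 8.0;

    printf("Peak SP:           %7.0f GFLOPS\n", sp_gflops);
    printf("Peak DP (Tesla):   %7.0f GFLOPS (1/2 rate)\n", dp_tesla);
    printf("Peak DP (GeForce): %7.0f GFLOPS (1/8 rate)\n", dp_geforce);
    return 0;
}
```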
The whole point is, a few cards will be launched, yes, at around $599 - $699, and even that is optimistic. This will be a super-ultra-soft launch, probably not even 10 thousand units. The move will be to calm down angry customers and Nvidia adepts: you will never actually find a card, but technically it was launched. Expect eBay prices of $1000+. Nvidia's primary business is where the money is generated. What is the point of selling a product that costs $500 to manufacture for $699 if you can make more money selling Tesla-based products at $2500?
I hope you realise that building a fab is not cheap, and it doesn't end with just building it.
I think they will always sell consumer cards... always... if only to get rid of the busted trash silicon.
Which is kinda what I expect for Fermi, actually... fully fledged cards will be Tesla only, broken ones will be consumer cards... well, except for some limited supply of fully enabled cards for PR purposes, maybe...
Fermi gets cancelled
To be tweeted this Monday.
Coding 24/7... Limited forums/PMs time.
-Justice isn't blind, Justice is ashamed.
The price will depend on the competition. If ATi can't (or doesn't want to) put up a good performance fight, then a superior Fermi won't come cheap, for sure. But you are speculating based on what?
Tesla is something else; we are talking about CUDA, which runs on all upcoming NV GPUs. CUDA has already been used for supercomputing, and it accelerates many GFLOP-demanding tasks for both personal and business use. But my question (which you are avoiding) was: how can you make a negative point out of something that can be a great help to those who use their PC for more than just a game console?
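As a concrete illustration of the kind of GFLOP-heavy work CUDA offloads to the GPU, here is a minimal SAXPY sketch (the kernel name and sizes are arbitrary choices, and error handling is trimmed for brevity):

```cpp
// saxpy.cu -- minimal CUDA example: y = a*x + y computed on the GPU.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                      // 1M elements
    const size_t bytes = n * sizeof(float);

    float* hx = (float*)malloc(bytes);
    float* hy = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f (expected 5.0)\n", hy[0]);  // 3*1 + 2
    cudaFree(dx); cudaFree(dy); free(hx); free(hy);
    return 0;
}
```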
► ASUS P8P67 Deluxe (BIOS 1305)
► 2600K @4.5GHz 1.27v , 1 hour Prime
► Silver Arrow , push/pull
► 2x2GB Crucial 1066MHz CL7 ECC @1600MHz CL9 1.51v
► GTX560 GB OC @910/2400 0.987v
► Crucial C300 v006 64GB OS-disk + F3 1TB + 400MB RAMDisk
► CM Storm Scout + Corsair HX 1000W
+
► EVGA SR-2 , A50
► 2 x Xeon X5650 @3.86GHz(203x19) 1.20v
► Megahalem + Silver Arrow , push/pull
► 3x2GB Corsair XMS3 1600 CL7 + 3x4GB G.SKILL Trident 1600 CL7 = 18GB @1624 7-8-7-20 1.65v
► XFX GTX 295 @650/1200/1402
► Crucial C300 v006 64GB OS-disk + F3 1TB + 2GB RAMDisk
► SilverStone Fortress FT01 + Corsair AX 1200W
Negative point where? Avoiding your question?
You'd better start reading posts more slowly. In that post I wrote my prediction, which is not based on anything; a prediction can be personal and doesn't need to be based on anything. Second, when you make a product that is going to cost much more than what you were offering before, you need to justify the price. That is basically what Nvidia is doing: "marketing hype".
It might be a stellar supercomputer, and the price will reflect that. Is that a negative point? I don't see it as one.
Don't get offensive. Try to keep the focus on the subject; getting personal is childish and isn't going to help you.
You are speculating that CUDA and supercomputer performance are something negative because they will make the GPU more expensive, all based on your wild guesses and speculation about the yet-unknown price.
Anybody with a couple of days of marketing classes would tell you that the competition between nVidia and ATi will impact the price. But you are bringing Tesla into this, and speculating that competition between nVidia's own products will impact the price?
Adobe is working on Flash Player support for 64-bit platforms as part of our ongoing commitment to the cross-platform compatibility of Flash Player. We expect to provide native support for 64-bit platforms in an upcoming release of Flash Player following the release of Flash Player 10.1.
The real problem we-the-army-of-CUDA-elves face is cooling 4 dual-GPU cards dangling off those slow old PCIe slots.
What we REALLY want Nvidia to do is get GloFo to build quad-Fermis that drop into spare mobo sockets.
But it looks like AMD may do this first... what's an apprentice elf to do?
Quad Fermi? That wouldn't physically fit on the package of any CPU socket, even G34.
Besides it being impossible to pay for and impossible to cool, and it starving for memory bandwidth... what would be the point? More system bandwidth? It wouldn't be able to use it, since it would be completely starved on the memory-bandwidth side...
What we need is HyperTransport instead of PCIe... if QPI were open, that would work too...
But just out of curiosity... why exactly do you think those GPUs are system-bandwidth limited?
SilverStone had a quad 9800GX2 GPGPU demo at Computex and said it scaled linearly with every card they added... which means PCIe bandwidth wasn't an issue at all, since adding more cards forces some slots to drop from 16x to 8x...
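For anyone who wants to check that on their own box, a rough way to see how much PCIe bandwidth a card actually gets is to time a large pinned-memory transfer. A minimal sketch (buffer size, iteration count, and the comparison figures are assumptions for illustration):

```cpp
// pcie_bw.cu -- rough host-to-device bandwidth measurement over PCIe.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 256u << 20;            // 256 MB test buffer
    const int    iters = 20;

    float* host;
    void*  dev;
    cudaMallocHost((void**)&host, bytes);       // pinned memory for full-speed DMA
    cudaMalloc(&dev, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    for (int i = 0; i < iters; ++i)
        cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);

    const double gb = (double)bytes * iters / (1 << 30);
    printf("Host->device: %.2f GB/s\n", gb / (ms / 1000.0));
    // Compare against roughly 8 GB/s theoretical for PCIe 2.0 x16 (half that
    // at x8) to judge whether dropping to x8 would actually be a bottleneck.

    cudaEventDestroy(start); cudaEventDestroy(stop);
    cudaFree(dev); cudaFreeHost(host);
    return 0;
}
```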
OK, that'll be fine as a next step; our stuff is still computationally bound and scales nicely on 1-8 GPUs. But adding double-width, 12" cards is sooo like using 12" cards to add a MB of extended RAM to a 286. Just wondering what might be down the road... quad Fermis and NO Intel inside?
That's what Nvidia wants... at least for Tesla...
I don't think Fermi's successor will be there yet... it might do well with a slow, basic CPU, but it'll still need a CPU, I think... but the one after that, in 2013+, might do just fine without a CPU altogether...
If you look at a Tesla server today, the CPU is not much more than one of the many chips on the mainboard that power and connect the VGAs, and the VGAs pretty much are the server blades.
I'm not so sure about that.
There are multiple constraints, and bandwidth is just one. The number of frames being rendered ahead has to be >= the number of GPUs. Hypothetically, if you had 100 GPUs running at 50 fps, those 100 queued frames would be 2 sec of lag. And D3D and OpenGL have limitations on the number of swap-chain buffers.
There are also diminishing returns due to the cost of partitioning the work and due to how uniform the work units are. Most games I play don't have the exact same pixel/texture/vertex workload on all parts of the screen all the time.
Bottom line:
If you can't get anywhere near 50%, let alone 100%, scaling in many (not all) games with just 2 GPUs, how do you expect it to be possible with quad?
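To put toy numbers on both constraints, here is a small sketch; the 100-GPU / 50 fps case reproduces the 2-second-lag example above, and the 0.85 parallel fraction used for the scaling estimate is purely an assumed figure, not a measurement of any game:

```cpp
// afr_scaling.cu -- toy numbers for AFR lag and multi-GPU scaling limits.
#include <cstdio>

int main() {
    // Lag from alternate-frame rendering: with N GPUs you queue roughly N
    // frames ahead, so the latency is about N / fps seconds.
    const int    gpus = 100;
    const double fps  = 50.0;
    printf("AFR lag with %d GPUs at %.0f fps: %.1f s\n", gpus, fps, gpus / fps);

    // Diminishing returns from the non-parallel share of the frame
    // (Amdahl-style estimate; the 0.85 parallel fraction is an assumption).
    const double p = 0.85;
    for (int n = 1; n <= 4; n *= 2)
        printf("%d GPU(s): ~%.2fx speedup\n", n, 1.0 / ((1.0 - p) + p / n));
    return 0;
}
```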
"SemiAccurate gets some GTX480 scores"
http://www.semiaccurate.com/2010/02/...gtx480-scores/
EDIT: Whoops, already posted, shoulda refreshed the page!