As has been stated, it's not exclusively for gaming. It's for bigger projects.
Deal with it.
If it's also a very good competitor in gaming... that's just a bonus from Nvidia's perspective.
I'm obviously talking about the CUDA API, not about the CUDA architecture. The architecture is something internal: it's the ISA, and access to it goes through intermediate layers, so there's nothing to standardize there. It's their proprietary API that they want to see used; nobody uses "architectures" directly...
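To make that distinction concrete, here's a minimal sketch of what "using the CUDA API" means from a developer's point of view (the kernel name, sizes and values below are made up purely for illustration): you write C-like code against NVIDIA's proprietary runtime calls and launch syntax, and the compiler and driver map that onto whatever ISA the GPU actually implements.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// A minimal kernel: each thread adds one pair of elements.
__global__ void vecAdd(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers.
    float* h_a = new float[n];
    float* h_b = new float[n];
    float* h_c = new float[n];
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device buffers, allocated and filled through the CUDA runtime API.
    float *d_a, *d_b, *d_c;
    cudaMalloc((void**)&d_a, bytes);
    cudaMalloc((void**)&d_b, bytes);
    cudaMalloc((void**)&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Kernel launch: the <<<blocks, threads>>> syntax is the proprietary part;
    // the developer never touches the underlying GPU ISA directly.
    const int threads = 256;
    const int blocks  = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);

    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);   // expect 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    delete[] h_a; delete[] h_b; delete[] h_c;
    return 0;
}
```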
That's exactly the problem, IMO (and it extends to the previous generation too).
NVIDIA is trying to reach a new (emerging) market with their architecture, what they call the HPC market. The problem is that this new market isn't there yet, so they can't split their R&D and chip manufacturing into two different architectures and/or product lines; they have to make 3D rendering chips (the current market) that are also good for the HPC market.
That eats transistors, architecture development time, and so on. The resulting chip isn't specialized for the 3D rendering market and has difficulty competing, in performance and features per cost, with products that are.
I'm starting to think that's the main reason for the weak performance-to-size ratio of the past generation, and that we're going to see a repeat in this one.
That could be one way to describe what's going on with nVidia lately: using a hybrid solution to cover both markets (gaming and GPGPU) with one product.
If they establish themselves in both markets, I bet the GPGPU market will get its own line of chips and the gaming market will get its own too. Maybe we're still far from that, maybe one or two generations away... It depends on many factors, but the most relevant of all, IMO, is Fermi's success.
Are we there yet?
You'd have to use a crappy, resource-hogging AV program anyway to need GPGPU acceleration; I don't see the point in it.
Everyone knows that Nvidia's Fermi will beat ATI's Evergreen, but the important part is how much it will cost.
OS : Mac OS X Snow Leopard 10.6.1 (10B504) [Vanilla Kernel/Chameleon-RC3/DSDT/10.6 Retail/Windows 7 RTM 64-bit]
Motherboard : EVGA 141-BL-E760-A1 Classified | Processor : Intel® Core i7 950 Nehalem 3.06GHz [OC'4.2GHz] [Liquid Cooling]
Video Card : 2x EVGA GeForce GTX 285 2GB [SLI] [QE/CI] | Memory : CORSAIR DOMINATOR 12GB [6 x 2GB] DDR3 1600Mhz CL7
Hard Drive : SAMSUNG Spinpoint F1 [4x 1TB] [4TB] | SAMSUNG Spinpoint F1 [2x 500GB] [1TB] | DVD Drive : Dual Pioneer DVR-216DBK
WiFi : LINKSYS WMP300N [Wireless-N] | Case/PSU : Silverstone TJ07 MurderMod/CORSAIR 1000HX 1000W | RC : HighPoint-RocketRAID 2300
Audio : RME HDSPe AIO | MIDI Controller : M-AUDIO Axiom 61 | Dual-Display : Viewsonic VX2260WM
The gaming market is what got them to where they are. I'm glad they're trying to enter new markets; GPGPU has so many possibilities. But what they can't do is enter those new markets at the expense of the market that really supports them: gaming.
If they have poor price/performance or power/performance in games, then those of us who use video cards primarily for gaming may pass on it. GPGPU needs more killer apps developed on standards; then it will become more of a deciding factor.
Last edited by Solus Corvus; 10-08-2009 at 02:31 PM.
I'd say they can... as long as they price their products according to what they offer the consumer. Think about the last generation, for example: once the HD 4000 series launched, the GTX 200 series prices were adjusted accordingly.
Of course, given the higher cost of GTX 200, that translated into financial losses for them, but that could be seen as an investment in the new market they are aiming for, if they have the funds to absorb it...
Then, if everything goes well and the new market grows as expected, they can split the two product lines, normalize the situation, and try to make the investment in the new market profitable.
Of course, if they have gone into this too soon, or have overestimated the profitability of this market, or underestimated the losses they are going to face, they could find themselves in a complicated financial position... Maybe they're taking the risk because of Intel's arrival in the market with Larrabee, I don't know...
I'm not an expert on this, but I think I see some logic in all of it (which may be completely unrealistic, of course). Good or bad decision? I don't know... too much for me.
I think it's reasonable to say: if Fermi fails, so does Nvidia, and by how much depends on how big a miss it is. If it succeeds, it's business as usual, at least from a gaming and enthusiast perspective.
Ever since the specs were released, this thread has been nothing but people repeating the same thing over and over for the past week or two.
IMO it should be locked until more concrete info is leaked or released, i.e. closed for a month or two, because that's how long it will be until we get any actual new news on GT300.
Even with the specs, we can't really tell how these cards will perform until they're run through a bunch of benchmarks, because they're too different from existing cards.
Historically, though, newer graphics cards tend to be better than the ones already on the market. It's somewhat rare to see exceptions to that rule.
Last edited by grimREEFER; 10-08-2009 at 02:44 PM.
DFI P965-S/core 2 quad q6600@3.2ghz/4gb gskill ddr2 @ 800mhz cas 4/xfx gtx 260/ silverstone op650/thermaltake xaser 3 case/razer lachesis
That's just pure denial. You can get an approximation of performance from all of the whitepapers and specs that have been released. No one said anything about "mind-blowing performance". It's fairly obvious that it will beat the 5870, even if it's only 60% faster. That's based on the increase in bandwidth.
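For what it's worth, here's the back-of-the-envelope form that kind of bandwidth comparison takes. The HD 5870 figures (256-bit bus, 4.8 Gbps effective GDDR5) are public, and the 384-bit bus comes from the Fermi whitepaper, but the Fermi data rate below is purely a placeholder assumption since memory clocks haven't been announced; treat the output as an illustration of the method, not a prediction.

```cuda
#include <cstdio>

// Peak memory bandwidth (GB/s) = (bus width in bits / 8) * effective data rate (Gbps).
static double peak_bandwidth_gbs(int bus_width_bits, double data_rate_gbps)
{
    return (bus_width_bits / 8.0) * data_rate_gbps;
}

int main()
{
    // HD 5870: 256-bit bus, 4.8 Gbps effective GDDR5 -> 153.6 GB/s (public spec).
    printf("HD 5870      : %6.1f GB/s\n", peak_bandwidth_gbs(256, 4.8));

    // Fermi: 384-bit bus per the whitepaper; the 4.0 Gbps data rate is an
    // assumed placeholder, since final memory clocks were not announced.
    printf("Fermi (guess): %6.1f GB/s\n", peak_bandwidth_gbs(384, 4.0));
    return 0;
}
```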
We can't know for sure, but I'm completely convinced that the GTX 380 (or whatever they name it) will be faster than the HD 5870. By how much is another story, and so is how much room they'll have between the HD 5870 and the GTX 380 to place a GTX 360.
We're talking about a monstrous >500 mm² chip, and even if they're focusing more on the GPU computing side than on the 3D rendering side, Fermi should perform better than the much smaller Cypress. It had better, if they don't want to be in serious trouble, since it's a much more costly chip.
I can't believe people look at how the GT200 was abnormally priced at launch and got completely steamrolled by the 4870, and then conclude that this is going to be another flop like that.
Let me add some IQ (not image quality, intelligence quotient) to the table: Nvidia, just like us, had no idea the 4800 series would be so powerful, and they priced everything sky high. After the launches the dust settled and Nvidia adjusted prices accordingly. While I think the 4700/4800 series remained a better pick than the GT200b parts, it wasn't a knockout once the price adjustments had been made. So the "total flop" nature of the initial GT200s came from Nvidia being caught with their pants down by the 4800s.
Now the 5800s have been released, Nvidia knows both their price and their performance, and it WILL price the Fermi boards accordingly. There is no way Fermi will be a $500 board that performs on par with the (by then) $300-ish 5870. Yes, the chip is super big and this most likely isn't going to be a great launch season for Nvidia, but it's not just going to be a super expensive board that barely matches the competition's half-priced board, like the GT200s were initially.
INTEL Core i7 920 // ASUS P6T Deluxe V2 // OCZ 3G1600 6GB // POWERCOLOR HD5970 // Cooler Master HAF 932 // Thermalright Ultra 120 Extreme // SAMSUNG T260 26"
Has anyone really been far even as decided to use even go want to do look more like?
Why does everyone want Nvidia to fail? All it means is that your AMD card will be more expensive next time.
They don't.
I personally wish Nvidia would stop all the idiotic behaviour they've inflicted on us over the past two years.
Renaming the same product multiple times as new material, and bad Vista drivers for which they shifted the blame onto others.
The infamous "whoopass" marketing stunt. Shipping products with a faulty design (the high-lead solder bump issue) and denying it, even after it was exposed that they had been well aware of it a year before that point.
Restrictive PhysX behaviour, with PhysX being snake-oil marketing that sells ordinary physics as something special... at the expense of major performance drops.
We had physics in games for years before PhysX, with none of this strategic performance limitation to make new cards relevant.
Taking jibes at Intel. Slowing down DirectX API advancements.
They should act like a professional company for once and earn back the respect they've lost.
I wish Nvidia would stop trying to be Intel or AMD and just concentrate on gaming... Jen-Hsun Huang is delusional in thinking Nvidia has that much clout. They need to stick with their CORE BUSINESS; CUDA doesn't matter to 99.9% of the populace!
But Nvidia has hung its entire business hat on CUDA's acceptance. Dumb!
I'm not even going to bother explaining what is wrong with this. If somebody else wants to have a go, be my guest.
OCN is full of blind AMD/ATI fanboys who have no idea what they are talking about. XS was more immune for a while, but the cancer has started to rapidly spread here as well.
Asus Rampage II Gene | Core i7 920 | 6*2GB Mushkin 998729 | BFG GTX280 OCX | Auzentech X-Fi Forte | Corsair VX550
—Life is too short to be bound by the moral, ethical and legal constraints imposed on us by modern day society.