I seriously doubt it though, not the performance part but the price part. But well with all eyes on Fermi now, they can pretty much make up anything.
Unfortunately, there are. First, there's the question of the hardware itself. I suppose the demo you're talking about would be running on CUDA. So the real problem here is not the CPU but being able to run the physics on top of the graphical load on the GPU. Demos tend to be graphically simpler than games, and even if not, you will always be cutting into your graphical resources to include physics. And graphics are what's shown in screenshots...
Then there's the question of how much of the target audience can run that code. Probably only people with a high end graphics card (a much smaller audience than you might think, even amongst gamers). Then, if it's done in CUDA, cut that in half (no ATi compatibility). That gives some unacceptably low numbers that make it unprofitable to invest resources in it. Honestly, if you were a developer, would you rather invest your resources in something so few people could see, or in something more widely usable?
Someone could argue that some graphical settings are not usable except with high end hw either. Well, graphics are special in one way: games are sold by selling screenshots, and the graphics are the screenshots. So even if not everyone can take advantage of the graphics, the developers get mileage out of their effort with everyone...
Notice that even though PhysX is the most widely used physics library, only a few games have any kind of CUDA accelerated effects (Batman and Mirror's Edge basically, if we don't count the laughable falling leaves in Sacred 2 or the extra out-of-main-game levels in UT3). There's a reason for that.
Maybe you're right that it's the future of videogames, maybe not (I'm somewhat skeptical that it's a good idea to transfer workload from CPU to GPU in a field of sw that has been GPU bottlenecked for years now, but I don't think it's impossible). But anyway I don't think it will happen any time soon (not within the GF100's lifetime anyway; oh well, at least if it finally ships in 2010 or so...). And when it does, it will be with some kind of widely supported GPGPU standard (be it OpenCL, DirectCompute, or whatever), and I think most of those are more immature than CUDA right now.
[QUOTE=
"GeForce GTX 380 will be 15% faster than dual GPU Radeon HD 5970."
"In terms of performance GTX 360 will sit between HD 5870 and dual GPU HD 5970."
Word on MSRP is $499 for the GTX 380 and $379 for the GTX 360; if that's true then count me in. :D[/QUOTE]
I just got new info: Nvidia decided to release a GTX 370, which will also be faster than the 5970 (didn't say by how much though). MSRP will be $459, and the great thing about this new one is it will have only a one-slot cooler :eek:
My source (Santa) told me that everybody should write his wishes to this thread.
Edit: The GTX385 will cost $99, beat a 5970 by 27.25% and have passive one-slot cooling.
You're all making fun of Fermi; come on guys, it has the potential to really be either a Series 5 or a Series 6....
Series 5 was beaten black and blue by ATi, and Nvidia took their revenge with Series 6, which was quite a different architecture and included hyped games like Doom 3 in the bundle....
If the GTX 360 is on par with the 5870 it will be a huge accomplishment, but (and it's a huge "but") if Nvidia does not fund DX11 games it may well end up with a poor track record in DX11, and we will again have something like the 5870's performance coming close to the GTX 380...
I was thinking about posting this earlier, but I guess now is the time :p:
I love how all you guys make jokes and laugh about Fermi, the delay and its performance ( yeah, I've seen lots of people saying it's going to stink, it won't even come close to a 5870, etc ) and still cry like kids asking nVIDIA to release it ASAP...
:confused: How so?
ATi still produces leading edge, cutting edge GPUs.....albeit availability is awful and the launch here in the UK has been all but a paper launch... but shipments ARE coming in.
You can now get a Radeon 5870.... for £400-odd, and one e-tailer even has TWO Radeon 5970's in stock!!!! for £525
After Christmas the prices will come down to common-sense levels (if the cards are stocked nicely); by that I mean Radeon 5870 at £250 and the 5970 at £350
John
I disagree. Nvidia will be in trouble again.
The issue of differing die sizes, memory bus sizes and PCB costs would favour AMD/ATI again.
Plus, the 5970 is already here so that puts quite a lot of pressure on the "GTX 380". Also, AMD/ATI may pull out a tweaked Cypress for maybe a "5890" or something like that again...
5870 already here?!... only just, and highly overpriced too, and NO 2GB model.... hardly what I would call here, but here nonetheless.
What do you mean nVidia would be in trouble again?!?
I agree that they are in trouble NOW (by not having a new DirectX 11 card on the table), but the last time they were in trouble was the notorious GeForce FX series of cards.
John
I was expecting Nvidia to aim a little smaller this time around, so they could have a single card that uses 180W and a dual card right at 300W, but instead they went massive again. And have we even heard of a DX11 card from Nvidia that will cost less than $200? Those are the ones that will sell the most, and given how games are still being built for consoles, one would be strong enough to really enjoy a game on any $150 monitor.
G200 (or rather RV770) was trouble for Nvidia. Call a spade a spade.
Massive price cuts after the first week, subsidising AIB partners, profit margins damaged while AMD/ATI's market share increased in both desktop and notebook sectors.
The high cost of producing G200 compared to RV770 was what made Nvidia's pricing and product placement strategy inflexible.
Also the Radeon 5xxx series supply shortages are blown way out of proportion. I remember that the G80 was facing similar problems years ago. It's just pent up demand in the face of a new OS and new DirectX revision.
I wouldn't consider the 5970 as putting "a lot" of pressure if prices keep going up. Prices have reached $700+ (I haven't seen such a price tag since the G80 Ultra), and without knowing how the "GTX 380" will perform, it's hard to find a lot of people willing to pay $700 for a video card nowadays (since it gets replaced in a short time compared to other PC hardware). Nvidia is probably working on a GX2 version as well, which could be Nvidia's trump card...
Inflexible? That's just pulling things out of your crack. They managed to shrink GT200 to 55nm, drop the prices to competitive levels and still maintain margins and performance during the time frame up until now.
Just a refresher on q3 results:
http://img704.imageshack.us/img704/2807/37015610.jpg
Nvidia does more than sell GT280s. Just because they were making $5 on every 280 and 260 sold does not mean they ever paid off the cost of designing them. If they had been able to keep the prices only a few bucks higher, they could have made a lot higher margin (if they are working at a 5% margin on a $200 chip, then to get a 10% margin (or double the profit) they only need to sell it for $210). Again, that's an example scenario, and I have no idea how much they either hate or love themselves for how the GT200 series went, but I think we all know ATi loved how well the 4000 series worked for them.
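The back-of-the-envelope margin arithmetic above can be sketched in a few lines of Python (the $200 price and 5% margin are the poster's hypothetical example numbers, not real Nvidia figures):

```python
def margin(price: float, cost: float) -> float:
    """Gross margin as a fraction of the selling price."""
    return (price - cost) / price

# A $200 chip sold at a 5% margin implies a cost of $190 and $10 profit per unit.
cost = 190.0

# Raising the price by just $10 doubles the per-unit profit to $20,
# which pushes the margin to roughly 10% (20/210, about 9.5%).
print(margin(200, cost))  # 0.05
print(margin(210, cost))  # ~0.095
```

The point being made: when margins are razor thin, a tiny price increase can double per-unit profit, which is why a costly die leaves so little pricing room.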
Ummm... he was specifically talking about the gaming card market, not the professional/workstation market, which is what was saving Nvidia for most of the G200/RV770/RV870 time period.
G200's BOM was around 2x RV770's. The gap between GF100 and RV870 is going to get even larger. I will be very surprised if we don't see 3-4 cut-down variants of GF100, not at launch but eventually.